\begin{document} \baselineskip = 16pt \newcommand \ZZ {{\mathbb Z}} \newcommand \NN {{\mathbb N}} \newcommand \RR {{\mathbb R}} \newcommand \PR {{\mathbb P}} \newcommand \AF {{\mathbb A}} \newcommand \GG {{\mathbb G}} \newcommand \QQ {{\mathbb Q}} \newcommand \bcA {{\mathscr A}} \newcommand \bcC {{\mathscr C}} \newcommand \bcD {{\mathscr D}} \newcommand \bcF {{\mathscr F}} \newcommand \bcG {{\mathscr G}} \newcommand \bcH {{\mathscr H}} \newcommand \bcM {{\mathscr M}} \newcommand \bcJ {{\mathscr J}} \newcommand \bcL {{\mathscr L}} \newcommand \bcO {{\mathscr O}} \newcommand \bcP {{\mathscr P}} \newcommand \bcQ {{\mathscr Q}} \newcommand \bcR {{\mathscr R}} \newcommand \bcS {{\mathscr S}} \newcommand \bcU {{\mathscr U}} \newcommand \bcV {{\mathscr V}} \newcommand \bcW {{\mathscr W}} \newcommand \bcX {{\mathscr X}} \newcommand \bcY {{\mathscr Y}} \newcommand \bcZ {{\mathscr Z}} \newcommand \goa {{\mathfrak a}} \newcommand \gob {{\mathfrak b}} \newcommand \goc {{\mathfrak c}} \newcommand \gom {{\mathfrak m}} \newcommand \gon {{\mathfrak n}} \newcommand \gop {{\mathfrak p}} \newcommand \goq {{\mathfrak q}} \newcommand \goQ {{\mathfrak Q}} \newcommand \goP {{\mathfrak P}} \newcommand \goM {{\mathfrak M}} \newcommand \goN {{\mathfrak N}} \newcommand \uno {{\mathbbm 1}} \newcommand \Le {{\mathbbm L}} \newcommand \Spec {{\rm {Spec}}} \newcommand \Gr {{\rm {Gr}}} \newcommand \Pic {{\rm {Pic}}} \newcommand \Jac {{{J}}} \newcommand \Alb {{\rm {Alb}}} \newcommand \Corr {{Corr}} \newcommand \Chow {{\mathscr C}} \newcommand \Sym {{\rm {Sym}}} \newcommand \Prym {{\rm {Prym}}} \newcommand \cha {{\rm {char}}} \newcommand \eff {{\rm {eff}}} \newcommand \tr {{\rm {tr}}} \newcommand \Tr {{\rm {Tr}}} \newcommand \pr {{\rm {pr}}} \newcommand \ev {{\it {ev}}} \newcommand \cl {{\rm {cl}}} \newcommand \interior {{\rm {Int}}} \newcommand \sep {{\rm {sep}}} \newcommand \td {{\rm {tdeg}}} \newcommand \alg {{\rm {alg}}} \newcommand \im {{\rm im}} \newcommand \gr {{\rm {gr}}} \newcommand \op {{\rm op}} \newcommand \Hom {{\rm Hom}} \newcommand \Hilb {{\rm Hilb}} \newcommand \Sch {{\mathscr S\! }{\it ch}} \newcommand \cHilb {{\mathscr H\! }{\it ilb}} \newcommand \cHom {{\mathscr H\! }{\it om}} \newcommand \colim {{{\rm colim}\, }} \newcommand \End {{\rm {End}}} \newcommand \coker {{\rm {coker}}} \newcommand \id {{\rm {id}}} \newcommand \van {{\rm {van}}} \newcommand \spc {{\rm {sp}}} \newcommand \Ob {{\rm Ob}} \newcommand \Aut {{\rm Aut}} \newcommand \cor {{\rm {cor}}} \newcommand \Cor {{\it {Corr}}} \newcommand \res {{\rm {res}}} \newcommand \red {{\rm{red}}} \newcommand \Gal {{\rm {Gal}}} \newcommand \PGL {{\rm {PGL}}} \newcommand \Bl {{\rm {Bl}}} \newcommand \Sing {{\rm {Sing}}} \newcommand \spn {{\rm {span}}} \newcommand \Nm {{\rm {Nm}}} \newcommand \inv {{\rm {inv}}} \newcommand \codim {{\rm {codim}}} \newcommand \Div{{\rm{Div}}} \newcommand \sg {{\Sigma }} \newcommand \DM {{\sf DM}} \newcommand \Gm {{{\mathbb G}_{\rm m}}} \newcommand \tame {\rm {tame }} \newcommand \znak {{\natural }} \newcommand \lra {\longrightarrow} \newcommand \hra {\hookrightarrow} \newcommand \rra {\rightrightarrows} \newcommand \ord {{\rm {ord}}} \newcommand \Rat {{\mathscr Rat}} \newcommand \rd {{\rm {red}}} \newcommand \bSpec {{\bf {Spec}}} \newcommand \Proj {{\rm {Proj}}} \newcommand \pdiv {{\rm {div}}} \newcommand \CH {{\it {CH}}} \newcommand \wt {\widetilde } \newcommand \ac {\acute } \newcommand \ch {\check } \newcommand \ol {\overline } \newcommand \Th {\Theta} \newcommand \cAb {{\mathscr A\! 
}{\it b}} \newenvironment{pf}{\par\noindent{\em Proof}.}{ \framebox(6,6) \par } \newtheorem{theorem}[subsection]{Theorem} \newtheorem{conjecture}[subsection]{Conjecture} \newtheorem{proposition}[subsection]{Proposition} \newtheorem{lemma}[subsection]{Lemma} \newtheorem{remark}[subsection]{Remark} \newtheorem{remarks}[subsection]{Remarks} \newtheorem{definition}[subsection]{Definition} \newtheorem{corollary}[subsection]{Corollary} \newtheorem{example}[subsection]{Example} \newtheorem{examples}[subsection]{Examples} \title{Theta divisors of abelian varieties and push-forward homomorphism at the level of Chow groups} \author{ Kalyan Banerjee} \address{Indian Statistical Institute, Bangalore Center, Bangalore 560059} \email{kalyanb$_{-}[email protected]} \footnotetext{Mathematics Classification Number: 14C25, 14D05, 14D20, 14D21} \footnotetext{Keywords: Pushforward homomorphism, Theta divisor, Jacobian varieties, Chow groups, higher Chow groups.} \begin{abstract} In this text we prove that if an abelian variety $A$ admits an embedding into the Jacobian of a smooth projective curve $C$, and if we consider $\Th_A$ to be the divisor $\Th_C\cap A$, where $\Th_C$ denotes the theta divisor of $J(C)$, then the embedding of $\Th_A$ into $A$ induces an injective push-forward homomorphism at the level of Chow groups. We show that this is the case for every principally polarized abelian variety. \end{abstract} \maketitle \section{Introduction} In the paper \cite{BI} the authors proved the following statement. Let $C$ be a smooth projective curve of genus $g$ and let $\Th$ denote the theta divisor embedded into the Jacobian $J(C)$ of the curve $C$. Let $j$ denote this embedding. Then the push-forward homomorphism $j_*$ at the level of Chow groups is injective. In the same paper the authors also discussed the push-forward homomorphism at the level of Chow groups induced by the closed embedding of some special divisors in the Jacobian $J(C)$, arising from finite, \'etale coverings of the curve $C$, see \cite[Theorem 4.1]{BI}. In this paper we investigate the following question for an arbitrary principally polarized abelian variety $A$. That is, let $A$ be a principally polarized abelian variety and let $H$ denote a divisor embedded inside $A$. Let $j$ denote this embedding. Then can we say that the push-forward homomorphism $j_*$ at the level of Chow groups of $k$-dimensional cycles ($k\geq 0$) is injective? This question is answered affirmatively in the case when the abelian variety $A$ is embedded in the Jacobian variety and we consider the divisor $\Th\cap A$ inside $A$, where $\Th$ is the theta divisor of $J(C)$. This is exactly the case of Prym-Tyurin varieties, which are abelian varieties embedded inside some Jacobian variety such that the intersection of the theta divisor of $J(C)$ with $A$ is linearly equivalent to some multiple of the theta divisor of $A$. Since any principally polarized abelian variety is a Prym-Tyurin variety of some exponent (see \cite[Corollary 12.2.4]{BL}), for principally polarized abelian varieties the above question about the injectivity of the push-forward homomorphism at the level of Chow groups of $k$-dimensional cycles ($k\geq0$) is answered when the divisor $H$ is the intersection of the abelian variety $A$ with the theta divisor of the ambient Jacobian variety in which $A$ is embedded. More precisely, we prove the following statement. \textit{Let $A$ be a principally polarized abelian variety embedded into $J(C)$ for some smooth projective curve $C$. Let $\Theta$ be the theta divisor of $J(C)$.
Then the embedding of $\Theta\cap A$ into $A$ induces an injective push-forward homomorphism at the level of Chow groups of $k$-cycles with $k\geq0$.} As an application we get that the embedding of the theta divisor inside a Prym variety induces an injection at the level of Chow groups. We also show that if we start with a principally polarized abelian surface $A$ and consider the corresponding $K3$-surface, then the push-forward at the level of Chow groups induced by the closed embedding of the divisor coming from $\Th\cap A$ inside the $K3$-surface is injective. {\small \textbf{Acknowledgements:} The author is indebted to A.~Beauville for communicating the main application of the paper. The author expresses his sincere gratitude to Jaya Iyer for suggesting to the author this problem about the injectivity of the push-forward induced by the closed embedding of a divisor into an abelian variety, and for discussions relevant to the theme of the paper. The author thanks C.~Voisin for pointing out the fact that this injectivity of the push-forward is not true for zero cycles on a very general divisor on an abelian variety and also for relevant discussions regarding the theme of the paper. The author also wishes to thank the ISF-UGC grant for funding this project and the Indian Statistical Institute, Bangalore Center, for its hospitality while hosting this project.} \section{Abelian varieties embedded in Jacobians} Let $A$ be an abelian variety embedded inside the Jacobian of a smooth projective curve $C$. Let $\Th_C$ denote the theta divisor of the Jacobian $J(C)$. Let $\Th_A$ be $\Th_C\cap A$ and let $j_A$ denote the closed embedding of $\Th_A$ into $A$; we prove that $j_{A*}:\CH_*(\Th_A)\to \CH_*(A)$ is injective. To prove this, we first show that the embedding of $\Th_C$ into $J(C)$ gives rise to an injection at the level of Chow groups. Although this has been proved in \cite[Theorem 3.1]{BI}, here we present an alternative proof following \cite{Collino} which gives a better understanding of the picture when we blow up $J(C)$ along some subvariety (in our case finitely many points). \begin{theorem} The embedding of the theta divisor $\Th_C$ into $J(C)$ for a smooth projective curve $C$ gives rise to an injection at the level of Chow groups. \end{theorem} \begin{proof} We use the fact that $\Sym^g C$ maps surjectively and birationally onto $J(C)$ and $\Sym^{g-1}C$ maps surjectively and birationally onto $\Th_C$. We have a natural correspondence $\Gamma$ given by $(\pi_g\times \pi_{g-1})(Graph(pr))$, where $pr$ is the projection from $C^g$ to $C^{g-1}$ and $\pi_i$ is the natural morphism from $C^i$ to $\Sym^i C$. Consider the correspondence $\Gamma_1$ on $J(C)\times \Th_C$ given by $(f_2\times f_1)(\Gamma)$, where $f_1,f_2$ are the natural morphisms from $\Sym^{g-1}C,\Sym^g C$ to $\Th_C,J(C)$ respectively. Then by the projection formula it follows that $${\Gamma_1}_*j_*$$ is induced by $(j\times id)^*(\Gamma_1)$, where $j$ is the closed embedding of $\Th_C$ into $J(C)$.
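In other words (spelling out the projection-formula step in a brief sketch; here we use that $J(C)$ is smooth, so that $j$ is a regular embedding and the refined pull-back $(j\times \id)^*(\Gamma_1)$ is defined), for a cycle class $\alpha$ on $\Th_C$ one has $$ {\Gamma_1}_*\,j_*(\alpha)\;=\;{\pr_2}_*\big(\Gamma_1\cdot (j\times\id)_*(\alpha\times[\Th_C])\big)\;=\;{\pr_2}_*\big((j\times\id)_*\big((j\times\id)^*(\Gamma_1)\cdot(\alpha\times[\Th_C])\big)\big)\;=\;\big((j\times\id)^*(\Gamma_1)\big)_*(\alpha)\;, $$ where $\pr_2$ denotes the projection onto the second factor; this is the sense in which ${\Gamma_1}_*j_*$ is induced by the correspondence $(j\times\id)^*(\Gamma_1)$ on $\Th_C\times\Th_C$.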
Now we compute the cycle $$(j\times id)^*(\Gamma_1)$$ which is nothing but the collection of pairs of divisors $$(D_1,D_2)$$ such that $D_1$ is linearly equivalent to $\sum_{i=1}^{g-1} x_i-(g-1)p$ and $D_2$ is linearly equivalent to $\sum_{i=1}^{g-1}y_i-(g-1)p$, where $$([x_1,\cdots,x_{g-1},p],[y_1,\cdots,y_{g-1}])\in \Gamma\;.$$ Therefore without loss of generality we can assume that elements in $(j\times id)^*(\Gamma_1)$ are classes of effective divisors on $C$ of the form $$([x_1+\cdots+x_{g-1}+p-gp],[y_1+\cdots+y_{g-1}+p-gp])$$ such that $$([x_1,\cdots,x_{g-1},p],[y_1,\cdots,y_{g-1}])\in \Gamma\;,$$ so we get that either $$x_i=y_i$$ for all $i$ or $$y_i=p$$ for some $i$. Therefore we get that $(j\times id)^*\Gamma_1$ is equal to $$\Delta+Y$$ where $Y$ is supported on $\Th_C\times \sum_{i=1}^{g-2}C_i$. The fact that the multiplicity of $\Delta$ in $(j\times id)^*(\Gamma_1)$ is $1$ follows from the fact that $\Sym^g C$ maps surjectively and birationally onto $J(C)$, and the computations following \cite{Collino}. Also here the Chow moving lemma holds for $\Th_C\times \Th_C$, because it holds for $\Sym^{g-1}C\times \Sym^{g-1}C$ for cycles with $\QQ$-coefficients and because $f_1$ is birational. So let $$\rho:U\to \Th_C$$ be the open embedding of the complement of $\sum_{i=1}^{g-2}C_i$ in $\Th_C$. Then we have $$\rho^*{\Gamma_1}_*j_*(Z)=\rho^*(Z+Z_1)=\rho^*(Z)$$ where $Z_1$ is supported on $\sum_{i=1}^{g-2}C_i$. This follows since $(j\times id)^*(\Gamma_1)=\Delta+Y$, where $Y$ is supported on $\Th_C\times\sum_{i=1}^{g-2}C_i$. Now consider the following commutative diagram. $$ \xymatrix{ \CH_*(\sum_{i=1}^{g-2}C_i) \ar[r]^-{j'_{*}} \ar[dd]_-{} & \CH_*(\Th_C) \ar[r]^-{\rho^{*}} \ar[dd]_-{j_{*}} & \CH_*(U) \ar[dd]_-{} \ \\ \\ \CH_*(\sum_{i=1}^{g-2}C_i) \ar[r]^-{j''_*} & \CH_*(J(C)) \ar[r]^-{} & \CH_*(V) } $$ Here $U,V$ are the complements of $\sum_{i=1}^{g-2}C_i$ in $\Th_C,J(C)$ respectively. Now suppose that $j_*(z)=0$. Then from the previous computation it follows that $$\rho^*{\Gamma_1}_*j_*(z)=\rho^*(z)=0\;,$$ and by the localisation exact sequence it follows that there exists $z'$ in $\CH_*(\sum_{i=1}^{g-2}C_i)$ such that $j'_*(z')=z$. By the commutativity of the diagram we get $$j''_*(z')=j_*(z)=0\;,$$ and since $\sum_{i=1}^{g-2}C_i$ is of dimension $g-2$, the induction hypothesis gives that $z'=0$, hence $z=0$. So $j_*$ is injective. \end{proof} Now we prove the following: \begin{theorem} \label{theorem2} Let $A$ be a principally polarized abelian variety embedded into the Jacobian $J(C)$ of a smooth projective curve $C$. Let $\Th_C$ denote the theta divisor of $J(C)$. Pull it back to $A$ and denote the pull-back by $\Th_A$. Suppose that $\Sym^{g-i}C\cap A'$ is smooth for all $i\geq 0$, where $A'$ is the inverse image of $A$ under the map $\Sym^g C\to J(C)$. Then the closed embedding $j:\Th_A\to A$ induces an injection $j_*$ at the level of Chow groups. \end{theorem} \begin{proof} As before, we have a natural correspondence $\Gamma$ given by $(\pi_g\times \pi_{g-1})(Graph(pr))$ on $\Sym^g C\times \Sym^{g-1}C$, where $pr$ is the projection from $C^g$ to $C^{g-1}$ and $\pi_i$ is the natural morphism from $C^i$ to $\Sym^i C$. Consider the correspondence $\Gamma_1$ on $J(C)\times \Th_C$ given by $(f_2\times f_1)(\Gamma)$, where $f_1,f_2$ are the natural morphisms from $\Sym^{g-1}C,\Sym^g C$ to $\Th_C,J(C)$ respectively. Consider the restriction of $\Gamma_1$ to $A\times \Th_A$. Call it $\Gamma_1'$. Let $j$ denote the embedding of $\Th_A$ into $A$.
Then by the projection formula it follows that $${\Gamma_1'}_*j_*$$ is induced by $(j\times id)^*(\Gamma_1')$; this is because $A$ is smooth and hence $j$ is a local complete intersection morphism. Now we compute the cycle $$(j\times id)^*(\Gamma_1')$$ which is nothing but the collection of pairs of divisors $$(D_1,D_2)$$ such that $D_1$ is linearly equivalent to $\sum_{i=1}^{g-1} x_i-(g-1)p$ and $D_2$ is linearly equivalent to $\sum_{i=1}^{g-1}y_i-(g-1)p$, where $$([x_1,\cdots,x_{g-1},p],[y_1,\cdots,y_{g-1}])\in \Gamma'\;.$$ Here $\Gamma'$ is the restriction of $\Gamma$ to the scheme-theoretic inverse image $A'\times \Th_A'$ of $A\times \Th_A$ under the natural map from $\Sym^g C\times \Sym^{g-1}C$ to $J(C)\times \Th_C$. Therefore without loss of generality we can assume that elements in $(j\times id)^*(\Gamma_1')$ are classes of effective divisors on $C$ of the form $$([x_1+\cdots+x_{g-1}+p-gp],[y_1+\cdots+y_{g-1}+p-gp])$$ such that $$([x_1,\cdots,x_{g-1},p],[y_1,\cdots,y_{g-1}])\in \Gamma'\;,$$ so we get that either $$x_i=y_i$$ for all $i$ or $$y_i=p$$ for some $i$. Therefore we get that $(j\times id)^*\Gamma_1'$ is equal to $$\Delta_{\Th_A\times \Th_A}+Y$$ where $Y$ is supported on $\Th_A\times (\sum_{i=1}^{g-2}C_i\cap A)$. The fact that the multiplicity of $\Delta$ in $(j\times id)^*(\Gamma_1')$ is $1$ follows from the fact that $\Sym^g C$ maps surjectively and birationally onto $J(C)$, and the computations following \cite{Collino}. Also here the Chow moving lemma holds for $\Th_A\times \Th_A$ by the assumption of the theorem. So let $$\rho:U\to \Th_A$$ be the open embedding of the complement of $\sum_{i=1}^{g-2}C_i\cap A$ in $\Th_A$. Then we have $$\rho^*{\Gamma_1'}_*j_*(Z)=\rho^*(Z+Z_1)=\rho^*(Z)$$ where $Z_1$ is supported on $\sum_{i=1}^{g-2}C_i\cap A$. This follows since $(j\times id)^*(\Gamma_1')=\Delta+Y$, where $Y$ is supported on $\Th_A\times(\sum_{i=1}^{g-2}C_i\cap A)$. Now consider the following commutative diagram. $$ \xymatrix{ \CH_*(\sum_{i=1}^{g-2}C_i\cap A) \ar[r]^-{j'_{*}} \ar[dd]_-{} & \CH_*(\Th_A) \ar[r]^-{\rho^{*}} \ar[dd]_-{j_{*}} & \CH_*(U) \ar[dd]_-{} \ \\ \\ \CH_*(\sum_{i=1}^{g-2}C_i\cap A) \ar[r]^-{j''_*} & \CH_*(A) \ar[r]^-{} & \CH_*(V) } $$ Here $U,V$ are the complements of $\sum_{i=1}^{g-2}C_i\cap A$ in $\Th_A,A$ respectively. Now suppose that $j_*(z)=0$. Then from the previous computation it follows that $$\rho^*{\Gamma_1'}_*j_*(z)=\rho^*(z)=0\;,$$ and by the localisation exact sequence it follows that there exists $z'$ supported on $\sum_{i=1}^{g-2}C_i\cap A$ such that $j'_*(z')=z$. By the commutativity of the diagram we get $$j''_*(z')=j_*(z)=0\;,$$ and since $\sum_{i=1}^{g-2}C_i\cap A$ is of dimension $d-2$, where $d$ is the dimension of $A$, the induction hypothesis gives that $z'=0$, hence $z=0$. So $j_*$ is injective. \end{proof} The previous theorem gives rise to the following corollary: \begin{corollary} Let $\wt{C}\to C$ be an unramified double cover of smooth projective curves. Consider the Prym variety associated to this double cover, denoted by $P(\wt{C}/C)$. Consider the embedding of $P(\wt{C}/C)$ into $J(\wt{C})$. Let $\Th'$ be the pullback of the theta divisor on $J(\wt{C})$ to $P(\wt{C}/C)$. Then the closed embedding $\Th'\to P(\wt{C}/C)$ induces an injection at the level of Chow groups. \end{corollary} \begin{proof} Let $g$ be the genus of $C$. So by the Riemann-Hurwitz formula the genus of $\wt{C}$ is $2g-1$. We have the following commutative square.
$$ \diagram \Sym^{2g-1} \wt{C}\ar[dd]_-{\theta_{\wt{C}}} \ar[rr]^-{} & & \Sym^{2g-1} C \ar[dd]^-{\theta_{C}} \\ \\ J(\wt{C}) \ar[rr]^-{} & & J(C) \enddiagram $$ Then, first of all, the Prym variety is the image under $\theta_{\wt{C}}$ of the double cover $P'$ of a projective space $\PR^g$. This follows from the Riemann-Roch theorem and the very definition of the Prym variety. Now consider the intersection of $\Sym^{2g-i}\wt{C}$ with $P'$, where $i\geq 2$. This intersection is smooth for a general copy of $\Sym^{2g-i}\wt{C}$ in $\Sym^{2g-1}\wt{C}$, as can be seen in the following way. Consider the family $$\bcU:=\{([x_1,\cdots,x_{2g-2},x_{2g-1}],p)\in P'\times \wt{C}|p\in [x_1,\cdots,x_{2g-1}]\}$$ and the projection from $\bcU$ to $\wt{C}$. Then $\bcU$ is a family of copies of $\Sym^{2g-2}\wt{C}\cap P'$ over $\wt{C}$. Hence by Bertini's theorem, for a general $p$, $\bcU_p$ is smooth. Similarly we can prove that a general $\Sym^{2g-i}\wt{C}\cap P'$ is smooth. Hence the assumption of Theorem \ref{theorem2} is satisfied, whence the conclusion follows. \end{proof} Now we prove that if we blow up $J(C)$ at finitely many points, denote the blow up by $\wt{J(C)}$, and let $\wt{\Th_C}$ denote the total transform of $\Th_C$, then the closed embedding of $\wt{\Th_C}$ into $\wt{J(C)}$ induces an injective push-forward homomorphism at the level of Chow groups. \begin{theorem} \label{theorem3} Let $\wt{J(C)}$ be the blow up of $J(C)$ at some non-singular subvariety $Z$ whose inverse image is $E$. Let $\Th_C$ intersect $Z$ transversely. Let $\wt{\Th_C}$ denote the strict transform of $\Th_C$ in $\wt{J(C)}$. Then the closed embedding of $\wt{\Th_C}$ into $\wt{J(C)}$ induces an injective push-forward homomorphism at the level of Chow groups of zero cycles. \end{theorem} \begin{proof} Let $\pi$ be the morphism from $\wt{J(C)}$ to $J(C)$. Consider the correspondence $(\pi\times \pi')^*(\Gamma_1)$, where $\pi'$ is the restriction of $\pi$ to $\wt{\Th_C}$. Call this correspondence $\Gamma'$. Then $\Gamma'_*\wt{j}_*$ is induced by $(\wt{j}\times id)^*\Gamma'$, where $\wt{j}$ is the closed embedding of $\wt{\Th_C}$ into $\wt{J(C)}$. Consider the commutative square. $$ \diagram \wt{\Th_C}\times \wt{\Th_C}\ar[dd]_-{\pi'\times \pi'} \ar[rr]^-{\wt{j}\times id} & & \wt{J(C)}\times \wt{\Th_C} \ar[dd]^-{\pi\times \pi'} \\ \\ \Th_C\times \Th_C \ar[rr]^-{j\times id} & & J(C)\times \Th_C \enddiagram $$ This gives us that $$(\wt{j}\times \id)^*\Gamma'=(\pi'\times \pi')^*(j\times id)^*\Gamma_1=(\pi'\times \pi')^*(\Delta+Y)$$ where $Y$ is supported on $\Th_C\times \sum_{i=1}^{g-2}C_i$. Now $$(\pi'\times \pi')^*(\Delta)=\Delta+V$$ where $E$ is the exceptional locus of $\pi$ and $V$ is supported on $(E\cap \wt{\Th_C})\times (E\cap \wt{\Th_C})$. So considering $\rho$ to be the inclusion of the complement of $\wt{\sum_{i=1}^{g-2}C_i}$ in $\wt{\Th_C}$ and applying the Chow moving lemma we have $$\rho^*\Gamma'_*\wt{j}_*=\rho^*\;.$$ Consider the following commutative diagram. $$ \xymatrix{ \CH_0(A) \ar[r]^-{\wt{j'}_{*}} \ar[dd]_-{} & \CH_0(\wt{\Th_C}) \ar[r]^-{\rho_0^{*}} \ar[dd]_-{\wt{j}_{*}} & \CH_0(U) \ar[dd]_-{} \ \\ \\ \CH_0(A) \ar[r]^-{\wt{j''}_*} & \CH_0(\wt J(C)) \ar[r]^-{} & \CH_0(V) } $$ Here $A= \wt{\sum_{i=1}^{g-2}C_i}$. Now suppose that $\wt{j}_*(z)=0$. By the previous computation we get that $\rho^*(z)=0$, so by the localisation exact sequence we get that there exists $z'$ in $\CH_*(A)$ such that $\wt{j'}_*(z')=z$. By induction $\CH_*(A)\to \CH_*(\wt{J(C)})$ is injective. So we get that $z'=0$, hence $z=0$, giving that $\wt{j_*}$ is injective.
\end{proof} Now let $A$ be an abelian surface which is embedded in some $J(C)$. Let $i$ denote the involution of $A$. Then $i$ has $16$ fixed points. We blow up $A$ along these fixed points. Then we get $\wt{A}$, on which we have an induced involution, again denoted by $i$. Let $\wt{\Th_A}$ denote the total transform of $\Th_A$ in $\wt{A}$. Then the above discussion tells us the following. \begin{theorem} The closed embedding of $\wt{\Th_A}$ into $\wt{A}$ induces an injective push-forward homomorphism at the level of Chow groups of zero cycles. \end{theorem} Now $\wt{A}/i$ is the Kummer K3 surface associated to $A$. Suppose that we choose $\Th_C$ such that it is $i$-invariant. Then $\wt{\Th_A}$ will be $i$-invariant. The above theorem gives us: \begin{theorem} The closed embedding of $\wt{\Th_A}/i$ into $\wt{A}/i$ induces an injective push-forward homomorphism at the level of Chow groups of zero cycles with $\QQ$-coefficients. \end{theorem} Note that all these techniques can be repeated if we consider the group of algebraic cycles modulo algebraic equivalence. Therefore the closed embedding of $\wt{\Th_A}/i$ into $\wt{A}/i$ induces an injection at the level of zero cycles modulo algebraic equivalence. \end{document}
\begin{document} \title[ ] {Symbolic Blowup algebras and invariants of certain monomial curves in an affine space} \author{Clare D'Cruz} \address{Chennai Mathematical Institute, Plot H1 SIPCOT IT Park, Siruseri, Kelambakkam 603103, Tamil Nadu, India } \email{[email protected]} \author[Masuti] {Shreedevi K. Masuti} \address{Chennai Mathematical Institute, H1-SIPCOT IT Park, Siruseri, Kelambakkam - 603 103, India} \email{[email protected]} \keywords{Symbolic Rees algebra, Cohen-Macaulay, Gorenstein} \thanks{Both authors are partially funded by a grant from Infosys Foundation} \thanks{SKM is supported by INSPIRE faculty award funded by Department of Science and Technology, Govt. of India. } \subjclass[2010]{Primary: 13A30, 1305, 13H15, 13P10} \begin{abstract} Let $d \geq 2$ and $m\geq 1$ be integers such that $\gcd (d,m)=1.$ Let $\mathfrak p$ be the defining ideal of the monomial curve in $\mathbb A_{\Bbbk}^d$ parametrized by $(t^{n_1}, \ldots, t^{n_d})$ where $n_i = d + (i-1)m$ for all $i = 1, \ldots, d$. In this paper, we describe the symbolic powers $\mathfrak p^{(n)} $ for all $n \geq 1$. As a consequence we show that the symbolic blowup algebras $\mathcal R_s{(\mathfrak p)}$ and $G_{s}(\mathfrak p) $ are Cohen-Macaulay. This gives a positive answer to a question posed by S.~Goto in \cite{goto}. We also discuss when these blowup algebras are Gorenstein. Moreover, for $d=3$, considering $\mathfrak p$ as a weighted homogeneous ideal, we compute the resurgence, the Waldschmidt constant and the Castelnuovo-Mumford regularity of $\mathfrak p^{(n)}$ for all $n \geq 1$. The techniques of this paper for computing $\mathfrak p^{(n)}$ are new and we hope that these will be useful to study the symbolic powers of other prime ideals. \end{abstract} \maketitle \section{Introduction} Let $I $ be an ideal in a Noetherian ring $A$. Then for all $n \geq 1$, the $n$-th symbolic power of $I$ is the ideal $\displaystyle{ I^{(n)} :=\cap_{\mathfrak p \in \operatorname{minAss}(A/I)} ( I^n A_{\mathfrak p} \cap A )}$. In this paper we are interested in the symbolic powers of certain prime ideals in the polynomial ring and power series ring. In particular, let $T := \Bbbk[x_1, \ldots, x_d]$ and $\mathfrak p:= \mathfrak p_{{\mathcal C}(n_1, \ldots, n_d)} \subseteq T$ be the defining ideal of the monomial curve in $\mathbb A_{\Bbbk}^d$ parametrized by $(t^{n_1}, t^{n_2}, \ldots, t^{n_d})$, where $t \in \Bbbk$ and $n_i = d + (i-1)m$ for all $i = 1, \ldots, d$. If $R:= \Bbbk[[ x_1, \ldots, x_d]]$ denotes the $\mathfrak m$-adic completion of $T$, where $\mathfrak m = (x_1, \ldots, x_d)$, then $\mathfrak p^{(n)}R = \mathfrak p^{(n)}T \otimes_T R$. The first part of our paper concerns the Cohen-Macaulay and Gorenstein property of the symbolic Rees algebra $\mathcal R_{s}(\mathfrak p) := \oplus_{n \geq 0} (\mathfrak p^{(n)}R )t^n$ and the symbolic associated graded ring $G_s(\mathfrak p) := \oplus_{n \geq 0} \mathfrak p^{(n) } R / \mathfrak p^{(n+1)} R$ when $\mathcal R_{s}(\mathfrak p)$ is Noetherian. The second part of our paper concerns the computation of the resurgence and the Waldschmidt constant of $\mathfrak p T$. We also give a formula for the Castelnuovo-Mumford regularity of $\mathfrak p^{(n)}T$ for all $n \geq 1$. The $n$-th symbolic power $\mathfrak p^{(n)}$ is of interest for several reasons. It is related to an open question which goes back to the work of L.~Kronecker \cite{kronecker}, where he showed that every irreducible curve in $\mathbb A_{\Bbbk}^{d}$ can be defined by $(d+1)$ equations.
In $1981$, R.~Cowsik gave a striking relationship between the Noetherianness of the symbolic Rees algebra and the problem of set-theoretic complete intersection. He showed that if $\mathfrak p$ is a prime ideal in a regular local ring $R$ such that $\operatorname{dim}(R/ \mathfrak p)=1$ and $\mathcal R_{s}(\mathfrak p)$ is Noetherian, then $\mathfrak p$ is a set-theoretic complete intersection \cite{cowsik}. Motivated by Cowsik's result, in 1987, C.~Huneke gave necessary and sufficient conditions for $\mathcal R_{s}(\mathfrak p)$ to be Noetherian when $\operatorname{dim}~ R=3$ \cite{huneke}. Huneke's result was generalised for $\operatorname{dim}~ R \geq 3$ by M.~Morales \cite{morales}. In general, the symbolic Rees algebra need not be Noetherian even for an affine monomial curve in $\mathbb A^3$, and this can depend on the characteristic of $\Bbbk$ \cite{gnw}. This unpredictable behaviour attracted the attention of several researchers and properties of the Noetherian symbolic Rees algebra were studied in several cases (for example see \cite{eliahou}, \cite{huneke0}, \cite{goto-nishida-shimoda}, \cite{goto-nishida-shimoda2}, \cite{gnw}, \cite{herzog-ulirch}, \cite{kurano}, \cite{reed}, \cite{schenzel}, \cite{schenzel2} and \cite{vasconcelos2}). The main difficulty in the study of the symbolic Rees algebra is describing the generators of the symbolic powers. The symbolic powers $\mathfrak p^{(2)}$ and $\mathfrak p^{(3)}$ for monomial curves in $\mathbb A^3$ have been studied extensively (\cite{eliahou}, \cite{huneke0}, \cite{schenzel}, \cite{schenzel2} and \cite{vasconcelos2}). In fact, using the ideas in \cite{vasconcelos1} and \cite{vasconcelos2}, J.~Herzog and B.~Ulrich gave a characterization for $\mathcal R_{s}(\mathfrak p) = R[\mathfrak p t, \mathfrak p^{(2)} t^2]$ \cite[Corollary~2.12]{herzog-ulirch}. However, for $d \geq 4$, there are very few results on $\mathfrak p^{(n)}$, $n \geq 1$ (\cite{goto} and \cite{scolan}). In 1994, S.~Goto gave necessary and sufficient conditions for the Cohen-Macaulayness and Gorensteinness of the symbolic blowup algebras when $\mathcal R_{s}(\mathfrak p)$ is Noetherian \cite{goto}. The Gorenstein property of $\mathcal R_{s}(\mathfrak p)$ for monomial curves has also been studied in \cite{simis-trung}, \cite{goto-nishida-shimoda} and \cite{goto-nishida-shimoda2}. In the last decade, motivated by the work in \cite{eisenbud-mazur}, \cite{ein-laz-smith} and \cite{hochster-huneke}, there has been a great interest in the relation between the symbolic powers and ordinary powers of ideals. Since symbolic powers are hard to describe, in order to compare the ordinary and symbolic powers of a homogeneous ideal $ I \subset T$, C.~Bocci and B.~Harbourne defined an asymptotic quantity called the resurgence of $I$, which is defined as $\rho(I) = \sup \{ m/r : I^{(m)} \not\subset I^r\}$ \cite{BH}. They observed that it exists for radical ideals. The resurgence is hard to compute in general and its computation is a challenging problem. Hence, in order to give a bound for $\rho(I)$, in the same paper they defined another invariant $\gamma(I)$, called the Waldschmidt constant. The Waldschmidt constant of $I$, denoted by $\gamma(I)$, is defined as ${\displaystyle \gamma(I) = \limm~\f{\alpha( I^{(n)})}{n}, }$ where $\alpha(I):= \min \{ n \mid I_n \neq 0 \}$.
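As a simple illustration of these invariants (a standard example, not taken from this paper): if $I=(x,y)\subseteq \Bbbk[x,y,z]$ is the ideal of a point in $\mathbb P^2$, then $I$ is generated by a regular sequence, so $I^{(n)}=I^n$ and $\alpha(I^{(n)})=n$ for all $n\geq 1$; consequently $$\gamma(I) = \limm~\f{\alpha(I^{(n)})}{n} = 1 = \alpha(I) \qquad \mbox{and} \qquad \rho(I)=\sup\{m/r : I^{m}\not\subset I^r\}=\sup\{m/r : m< r\}=1.$$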
They showed that if $I$ is a homogeneous ideal, then $\alpha(I) / \gamma(I) \leq \rho(I)$ and in addition, if $I$ defines a zero-dimensional subscheme in a projective space, then $ \rho(I) \leq \operatorname{reg} (I) / \gamma(I)$, where $\operatorname{reg}(I)$ denotes the Castelnuovo-Mumford regularity of $I$ \cite[Theorem~1.2.1]{BH}. The resurgence and the Waldschmidt constant have been studied in a few cases: for certain general points in $\mathbb P^2$ \cite{BH0}, smooth subschemes \cite{guardo}, fat linear subspaces \cite{fatabbi}, special point configurations \cite{duminicki} and monomial ideals \cite{bocco-waldschmidt}. We now describe the work in this paper. Let $\gcd(d,m)=1$, $n_i:= d + (i-1) m$ for $i=1, \ldots,d$ and $\mathfrak p:= \mathfrak p_{{\mathcal C}(n_1, \ldots, n_d)} \subseteq R = \Bbbk[[x_1, \ldots, x_d]]$. In 1994, S.~Goto showed that $\mathcal R_{s}(\mathfrak p)$ is Noetherian for all $d \geq 2$ and is Cohen-Macaulay if $d \leq 4$ \cite[Proposition~7.6]{goto}. In the same article he raised the question of whether $\mathcal R_{s}(\mathfrak p)$ is Cohen-Macaulay if $d\geq 5$ \cite[page 58]{goto}. In this paper we explicitly describe $\mathfrak p^{(n)}$ for all $n \geq 1$. For this, we first define the ideal $\mathcal{I}_n \subseteq \mathfrak p^{(n)}$ (see \ref{equation of In}). One important observation is that $ {\mathcal I}_n T^{\prime}$ is a homogeneous ideal (Proposition~\ref{description of In}), where $T^{\prime} = T/ x_1 T$. The new idea of this paper is to give a monomial order on the monomials in $T^{\prime} = T/ (x_1)$ (Definition~\ref{monomial order}) and compute the leading ideal of $\mathfrak p^{(n)}\Tprime$ (Theorem~\ref{symbolic power}). More precisely, we define monomial ideals $I_n \subseteq T^{\prime}$ (\ref{ definition of In}) and show that $I_n = LI( \mathfrak p^{(n)} T^{\prime})$. As a consequence, we show that $\mathfrak p^{(n)} R= \mathcal{I}_n R$ for all $n \geq 1$. As a first application, we show that $\mathcal R_s({\mathfrak p})$ is Cohen-Macaulay for all $d \geq 2$ (Theorem~\ref{cm-rees}(\ref{cm-rees-one})). This gives a positive answer to Goto's question. We also show that $G_s(\mathfrak p) $ is Gorenstein for all $d \geq 2$ (Theorem~\ref{cm-ass-gr}) and $\mathcal R_s({\mathfrak p})$ is Gorenstein if and only if $d=3$ (Theorem~\ref{cm-rees}(\ref{cm-rees-two})). We discuss other applications in this paper. Let $d=3$. We put weights $wt(x_i) = n_i$ where $n_i:= 3 + (i-1) m$ and $i=1, 2, 3$. With these weights, $\mathfrak p^{(n)} = (\mathfrak p^n)^{sat}$ defines a fat point for all $n \geq 1$ in the weighted projective space $\mathbb P := \mathbb P^{2}(n_1, n_2, n_3) := \operatorname{Proj}(T)$. Since $\mathfrak p$ is a weighted homogeneous ideal, we extend the definition of the resurgence and the Waldschmidt constant to $\mathfrak p$. Moreover, we observe that Theorem~1.2.1 of \cite{BH} holds true for $\mathfrak p$ (Theorem~\ref{lb for resurgence} and Theorem~\ref{ub for resurgence}). In \cite[Theorem~1.1]{cut-kurano} Cutkosky and Kurano showed that $\limnn \operatorname{reg}( (\mathfrak p^{n})^{sat} ) /n $ exists and $\operatorname{reg}(T/\mathfrak p^{(n)})$ is eventually periodic \cite[Corollary~4.9]{cut-kurano}. We give an explicit formula for $ \operatorname{reg}( T/ (\mathfrak p^{n} )^{sat} )$ for $d=3$ and for all $n \geq 1$.
In particular ${ \displaystyle \limnn \operatorname{reg}( (\mathfrak p^{n})^{sat} ) /n=\frac{3 e(T/ \mathfrak p)}{2} + 3m}$ (Theorem~\ref{final regularity}). We remark that using the techniques of this paper one can compute the resurgence, Waldschmidt constant and regularity of $\mathfrak p^{(n)}$ for $d \geq 4$. However, this involves tedious computation and hence we restrict ourselves to $d=3$ in this paper. We now describe the organisation of this paper. In Section~2 we prove some preliminary results which will be needed in the subsequent sections. In Section~3 we describe the monomial order we are using and describe the ideals $I_nT^{\prime} \subseteq LI( { \mathcal I}_n T^{\prime})$. Section~4 is mainly devoted to showing that the associated graded ring corresponding to the filtration $\{I_n\}_{n \geq 0}$ is Cohen-Macaulay. In Section~5 we explicitly describe the monomials which span $I_{n-1}$ modulo $(I_n:x_d)$. In Section~6 we explicitly describe all the symbolic powers $\mathfrak p^{(n)}$. The main results of this paper are in Section~7. In this section we study the Cohen-Macaulay and Gorenstein property of $\mathcal R_s(\mathfrak p)$ and $G_s(\mathfrak p)$ for $d \geq 2$, and give an explicit formula for the resurgence and Waldschmidt constant. Moreover, we also compute the Castelnuovo-Mumford regularity of the symbolic powers. \section{Preliminaries} In this paper, we consider the following class of monomial curves: Let $d \geq 2$. Let $R= \Bbbk[[x_1, \ldots, x_d]]$ and $S = \Bbbk[[t]]$ be formal power series rings over $\Bbbk$. For any positive integer $m \geq 1$, with $\gcd(d,m)=1$, we put $n_i:= d + (i-1) m$ for $i=1, \ldots,d$. Let ${\mathcal C}(n_1, \ldots, n_d)$ be the affine curve parameterised by $(t^{n_1}, \ldots, t^{n_d})$ and let $I_{{\mathcal C}(n_1, \ldots, n_d)}$ be the ideal defining this monomial curve. In other words, let $\phi: R {\longrightarrow} \Bbbk[[t]]$ denote the homomorphism defined by $\phi(x_i) = t^{n_i}$ for $1 \leq i \leq d$ and $\mathfrak p:= \operatorname{ker}(\phi)= I_{{\mathcal C}(n_1, \ldots, n_d)}$. Throughout this paper $\mathfrak p=I_{{\mathcal C}(n_1, \ldots, n_d)}$ unless otherwise specified. It is well known that $\mathfrak p$ is generated by the $2 \times 2$ minors of the matrix described in \eqref{matrix of p}. In \cite[Proposition~7.6]{goto}, Goto described $\mathfrak p^{(n)}$ for $d=4$ and $n=2,3$. It is not easy to describe the ideals $\mathfrak p^{(n)}$ in general. To achieve this, we define ideals ${\mathcal I}_n R\subseteq \mathfrak p^{(n)}$ (see \eqref{equation of In}). We exploit the fact that the ideals $ ({\mathcal I}_n , x_1) T$ are homogeneous ideals (Proposition~\ref{description of In}). \subsection{Computation of multiplicity} \label{section 2} Let $R = \Bbbk[[x_1, \ldots, x_d]]$ and $X = [X_{ij}]$ be the $d \times d$ matrix given by \beq \label{matrix of p} X_{ij}:= \left\{ \begin{array}{ll} x_{i+j-1} & \mbox{ if } 1 \leq i \leq d \mbox{ and } 1 \leq j \leq d-i+1\\ x_1^m x_{i+j-d-1} & \mbox{ if } 2 \leq i \leq d \mbox{ and }d-i+2 \leq j \leq d.\\ \end{array} \right. \eeq For each $1 \leq i,k \leq d-1$, we define: \beq \label{definition of X_i} X(i) &:=& \mbox{The matrix consisting of the first $i+1$ rows and $i+1$ columns of $X$}, \\ \label{definition of f_i} f_i &:=& \det(X(i) ) \hspace{.2in} \mbox{ and } {\bf f}_k := f_1,\ldots, f_k.
\eeq Goto showed that ${\bf f}_{d-1}$ satisfies Huneke's criterion for the Noetherianness of $\mathcal R_{s}({\mathfrak p})$ (\cite[Theorem~7.4]{goto}). In this section we give a lower bound for the length of the modules $ R/ (\mathfrak p^{(n)} + ( x_1, {\bf f}_k ) )$ where $\mathfrak p= I_{{\mathfrak mathcal C}(n_1, \ldots, n_d)},$ $1\leq k \leq d-1$ and $n \geq 1.$ We need a few preliminary results. Let $(A, \mathfrak n)$ be a Noetherian local ring of positive dimension $d$ and $\mathfrak mathfrak{a}$ an $\mathfrak n$-primary ideal. Let $\mathcal F=\{\mathcal F(n)\}_{n \in \mathbb Z}$ be a Noetherian filtration of ideals, i.e., $\mathcal F(0) = A$, $\mathcal F(1) \mathfrak not =A $, $\mathcal F(n+1) \subseteq \mathcal F(n)$, $\mathcal F(n) \cdot \mathcal F(m) \subseteq \mathcal F(n+m)$ for all $n,m \in \mathbb Z$ and the Rees ring $\mathfrak mathcal{R}(\mathcal F) := \oplus_{n \geq 0} \mathcal F(n) t^n$ is Noetherian. Let $1 \leq k \leq d$ and $z_i \in {\mathcal F}(a_i) \setminus {\mathcal F}(a_i +1)$ for all $i=1, \ldots k$. Put ${\bf z}_k = z_1, \ldots, z_k$. For all $n \in \mathbb Z$, using the mapping cone construction, similar to that in \cite{huckaba-marley}, we construct the complex $C_{\bullet}({\bf z}_k; n)$ which has the form: \beq \label{complex} 0 {\longrightarrow} \f{A}{\mathcal F({n- (a_1 + \ldots + a_k)})} {\longrightarrow} \cdots {\longrightarrow} \bigoplus_{1\leq i < j \leq k} \f{A}{{\mathcal F}({n- a_i-a_j})} {\longrightarrow} \bigoplus_{i=1}^k \f{A}{{\mathcal F}({n- a_i})} {\longrightarrow} \f{A}{{\mathcal F}(n)} {\longrightarrow} 0. \eeq The maps are from the Koszul complex $K_{\bullet} ({\bf z}_k, A)$. Let $H_i ( C_{\bullet}( {\bf z}_k, n))$ denote the $i$-th homology of the complex $C_{\bullet}({\bf z}_k; n)$. For any element $z \in {\mathcal F}(n) \setminus {\mathcal F}({n+1})$, let $z^{\stackrelr}$ denote the image of $z$ in $G(\mathcal F):=\oplus_{n \in \mathbb N}{\mathcal F(n)}/{\mathcal F(n+1)}$. Let ${\bf z}_k^{\stackrelr}:= z_1^{\stackrelr},\cdots, z_k^{\stackrelr}$. \begin{proposition} \label{vanishing} Let $\{{\mathcal F}(n)\}_{n \geq 0}$ be a filtration of $\mathfrak m$-primary ideals. For $1 \leq i \leq k$, let $z_i \in {\mathcal F}({a_i}) \backslash {\mathcal F}({a_i+1})$. Suppose ${\bf z}_k^{\stackrelr}$ is a regular sequence in $G({\mathcal F})$. Then \been \item \label{vanishing one} $H_i ( C_{\bullet}( {\bf z}_k, n))=0$ for all $i \geq 1$ and all $n \in \mathbb Z$. \item \label{vanishing two} $ {\displaystyle \ell \left( \f{A} {{\mathcal F}(n) + ({\bf z}_k) }\right) = \sum_{i=0}^{k} (-1)^{i} \left[ \sum_{1 \leq j_1 < \cdots <j_i \leq k } \ell \left( \f{A}{{\mathcal F}({n - [a_{j_1} + \cdots + a_{j_i} ]})} \right) \right].} $ \eeen \end{proposition} \begin{proof} (\ref{vanishing one}) Let $K_{\bullet} ({\bf z}_k^{\stackrelr}, G({\mathcal F}))$ denote the Koszul complex of $G({\mathcal F})$ with respect to ${\bf z}_k^{\stackrelr}$. Then we have the short exact sequence of complexes: \beq \label{ses} 0 {\longrightarrow} K_{\bullet} ({\bf z}_k^{\stackrelr}, G({\mathcal F}))_{n-1} {\longrightarrow} C_{\bullet}( {\bf z}_k, n) {\longrightarrow} C_{\bullet}( {\bf z}_k, n-1) {\longrightarrow} 0. \eeq Since ${\bf z}_k^{\stackrelr}$ is a regular sequence in $G(\mathcal F)$, $H_i( K_{\bullet} ({\bf z}_k^{\stackrelr}, G({\mathcal F})))=0$ for all $i \geq 1$ \cite[Theorem~16.5]{matsumura}. 
Hence from \eqref{ses} for all $n \in \mathbb Z$ we have: \beqn H_i ( C_{\bullet}( {\bf z}_k, n)) \cong H_i ( C_{\bullet}( {\bf z}_k, n-1)) \hspace{.2in} \mathfrak mbox{for all } i \geq 2 \eeqn and the short exact sequence \beqn 0 {\longrightarrow} H_1( C_{\bullet}( {\bf z}_k, n)) {\longrightarrow} H_1( C_{\bullet}( {\bf z}_k, n-1)). \eeqn As $H_i ( C_{\bullet}( {\bf z}_k, n)) =0$ for all $n \leq 0$, we conclude that $H_i ( C_{\bullet}( {\bf z}_k, n)) =0$ for all $n$ and for all $i \geq 1$. This proves (\ref{vanishing one}). (\ref{vanishing two}) As $H_0 ( C_{\bullet}( {\bf z}_k, n)) = A/ ({\mathcal F}(n) + ({\bf z}_k))$, from the complex (\ref{complex}) we get \beqn \ell \left( \f{A}{{\mathcal F}(n) + ({\bf z}_k)} \right) + \sum_{i \geq 1} (-1)^i~\ell (H_i ( C_{\bullet}( {\bf z}_k, n))) = \sum_{i=0}^{k} (-1)^{i}\left[ \sum_{1 \leq j_1 < \cdots <j_i \leq k } \ell \left( \f{A}{{\mathcal F}({n - [a_{j_1} + \cdots + a_{j_i} ]})} \right) \right]. \eeqn Applying (\ref{vanishing one}) we get the result. \end{proof} \begin{corollary} \label{multiplicity} Let $(A,\mathfrak n)$ be a Cohen-Macaulay local ring of dimension $d$. Let $\mathfrak p$ be a prime ideal of height $d-1$ and $x \mathfrak notin \mathfrak p.$ Let $1 \leq k \leq d-1$ and $z_i \in \mathfrak p^{(a_i)} \backslash \mathfrak p^{(a_i+1)}$. Suppose ${\bf z}_k^{\stackrelr}$ is a regular sequence in $G(\mathfrak p A_{\mathfrak p})$. Then \been \item \label{multiplicity-one} $ {\displaystyle e\left(x; \f{A}{\mathfrak p^{(n)} + ({\bf z}_k)} \right) = \ell \left( \f{A}{(\mathfrak p,x)}\right) \sum_{i=0}^{k} (-1)^{i} \left[ \sum_{1 \leq j_1 < \cdots <j_i \leq k} \ell \left( \f{A_{\mathfrak p}} {\mathfrak p^{n - [a_{j_1} + \cdots +a_{j_i} ]} A_{\mathfrak p}} \right) \right].} $ \item \label{multiplicity-two} $ {\displaystyle \ell \left( \f{A} { \mathfrak p^{(n)} + ({\bf z}_k) +( x) } \right) \geq \ell \left( \f{A}{(\mathfrak p,x)}\right) \sum_{i=0}^{k} (-1)^{i} \left[ \sum_{1 \leq j_1 < \cdots <j_i \leq k } \ell \left( \f{A_{\mathfrak p}}{\mathfrak p^{n - [a_{j_1} + \cdots +a_{j_i} ]} A_{\mathfrak p}} \right) \right]. } $ \eeen \end{corollary} \begin{proof} (\ref{multiplicity-one}) As $\mathfrak p^{(n)} \subseteq \mathfrak p^{(n)} + ({\bf z}_k) \subseteq \mathfrak p$, taking radicals we get $\sqrt{ \mathfrak p^{(n)} + ({\bf z}_k)} = \mathfrak p$. Hence $\mathfrak p$ is the only minimal prime of $\mathfrak p^{(n)} + ({\bf z}_k)$. From the associativity formula for multiplicities \cite[Theorem~14.7]{matsumura} we get \beqn e\left(x; \f{A}{\mathfrak p^{(n)} + ({\bf z}_k)} \right) =e \left(x; \f{A}{\mathfrak p} \right) \ell\left( \f{A_{\mathfrak p}} { (\mathfrak p^{(n)} + ({\bf z}_k)) A_{\mathfrak p}} \right). \eeqn As $x$ is a nonzero divisor on $A/ \mathfrak p$, $e(x; A/ \mathfrak p) = \ell (A/ (\mathfrak p,x))$. Replacing $A$ by $A_{\mathfrak p}$ and $G(\mathcal F)$ by $G(\mathfrak p A_{\mathfrak p})$ in Proposition~\ref{vanishing}(\ref{vanishing two}) we get the result. (\ref{multiplicity-two}) From \cite[Theorem~14.10]{matsumura}, we get $ {\displaystyle \ell\left( \f{A} { \mathfrak p^{(n)} + ({\bf z}_k) + (x) } \right) \geq e\left(x; \f{A}{\mathfrak p^{(n)} + ({\bf z}_k)} \right)}$. Now apply (\ref{multiplicity-one}). \end{proof} \begin{theorem} \label{computation of multiplicity} Let $R = \Bbbk[[x_1, \ldots, x_d]]$ and $\mathfrak p= I_{{\mathfrak mathcal C}(n_1, \ldots, n_d)}$. For $1 \leq i,k \leq d-1$, let $f_i$ and ${\bf f}_k$ be as in (\ref{definition of f_i}). 
Then \beqn \ell \left( \f{R} {\mathfrak p^{(n)} + ({\bf f}_k) + (x_1) }\right) \geq \ell \left( \f{R}{(\mathfrak p,x_1)}\right) \sum_{i=0}^{k} (-1)^{i} \left[ \sum_{1 \leq j_1 < \cdots< j_i \leq k } \ell \left( \f{R_{\mathfrak p}}{\mathfrak p^{n - [{j_1} + \cdots +{j_i} ]} R_{\mathfrak p}} \right) \right]. \eeqn \begin{proof} By \cite[Lemma~7.5]{goto}, $f_i \in \mathfrak p^{(i)}$. As $G(\mathfrak p R_{\mathfrak p})$ is a regular ring and ${\bf f}_{d-1}^{\stackrelr}$ is a regular sequence \cite[Proposition~5.3(3)]{goto}, from Corollary~\ref{multiplicity}(\ref{multiplicity-two}), we get the result. \end{proof} \end{theorem} \subsection{The power series ring and the polynomial ring} From now on $R= \Bbbk[[x_1, \ldots, x_{d}]]$ and $T=\Bbbk[x_1, \ldots,x_d]$. The following lemma gives us a way to compute the length of an $R$-module in terms of the length of the corresponding $T$-module. \begin{lemma} \label{comparing lengths} Let $\mathfrak m = (x_1, \ldots, x_d)T$ and $M$ a finitely generated $T$-module such that $\operatorname{Supp}(M) = \{ \mathfrak m\}$. Then \beqn \ell_R ( M \otimes_T R) = \ell_T(M). \eeqn \end{lemma} \begin{proof} We prove this by induction on $\ell_T(M)$. If $\ell_T(M)=1$, then $M \cong T/ \mathfrak m$. Therefore, \beqn \ell_R(M \otimes_T R) = \ell_R \left(\f{R}{\mathfrak m R} \right) = 1 \hspace{.1in} \mbox{(as $\mathfrak m R$ is the maximal ideal of $R$)}. \eeqn If $\ell_T(M)> 1$, then as the minimal primes of $\operatorname{Supp}(M)$ and $\operatorname{Ass}(M)$ are the same, $\mathfrak m \in \operatorname{Ass}(M)$. This gives the exact sequence \beq \label{ses-1} 0 {\longrightarrow} \f{T}{\mathfrak m} {\longrightarrow} M {\longrightarrow} C {\longrightarrow} 0, \eeq where $C \cong M / (T/ \mathfrak m)$. As $R$ is $T$-flat, tensoring (\ref{ses-1}) with $R$ we get: \beqn 0 {\longrightarrow} \f{T}{\mathfrak m} \otimes_T R \cong \f{R}{\mathfrak m R} {\longrightarrow} M \otimes_T R {\longrightarrow} C \otimes_T R {\longrightarrow} 0. \eeqn From the exact sequence (\ref{ses-1}), we get $\operatorname{Supp}(C) = \{\mathfrak m\}$ and $\ell_T(C) < \ell_T(M)$. Therefore by the induction hypothesis $\ell_R(C \otimes_T R) = \ell_T(C)$. Hence \beqn \ell_R( M \otimes_T R) = \ell_R(C \otimes_T R) + \ell_R \left( \f{R}{\mathfrak m R}\right) = \ell_T(C) + \ell_T\left( \f{T}{\mathfrak m} \right) = \ell_T(M). \eeqn \end{proof} Let \beq \nonumber && X_{i+1, (j_1, \ldots, j_{i+1})} \\ \label{definition of X_i+1} &:=& \mbox{the matrix obtained by choosing the first $i+1$ rows and $j_1, \ldots, j_{i+1}$ columns of $X$} \\ \label{equation of Jn} \nonumber && {\mathcal J}_i \\ &:=& \{ \det( X_{i+1, (j_1, \ldots, j_{i+1}) } )| 1 \leq j_1 < \cdots < j_{i+1} \leq d \} . \eeq \begin{notation} If $A_1, \ldots, A_n$ are $n$ sets of monomials we define the set $A_1 \cdots A_n$ by $A_1\cdots A_n := \{a_1 \cdots a_n : a_i \in A_i \}$. \end{notation} Let ${\mathcal I}_n$ denote the set \beq \label{equation of In} {\mathcal I}_n := \sum_{a_1 + 2 a_2 + \cdots + (d-1) a_{d-1} = n} \mathcal J_{1}^{a_1} \cdots \mathcal J_{d-1}^{a_{d-1}}. \eeq As $R$ is a flat $T$-module, $ {\mathcal I}_nR = {\mathcal I}_n T \otimes_T R$. \begin{proposition} \label{description of In} Let $n \geq 1$. Then \been \item \label{description of In one} $ {\mathcal I}_n R\subseteq \mathfrak p^{(n)}$. \item \label{description of In two} $ ({\mathcal I}_n + (x_1))T$ is a homogeneous ideal.
\item \label{description of In three} $({\mathcal I}_n + (x_1))T$ is an $\mathfrak m$-primary ideal. \eeen \end{proposition} \begin{proof} (\ref{description of In one}) By \cite[Lemma~7.5]{goto}, ${\mathcal J}_i \subseteq \mathfrak p^{(i)}$ for all $i=1, \ldots, d-1$. Hence for all $a_1, \ldots, a_{d-1} \in \mathbb Z_{\geq 0}$, \beq \label{containment of J} \mathcal J_{1}^{a_1} \cdots \mathcal J_{d-1}^{a_{d-1}} \subseteq \mathfrak p^{a_1} (\mathfrak p^{(2)})^{a_2} \cdots (\mathfrak p^{(d-1)})^{a_{d-1}} \subseteq \mathfrak p^{(a_1 + 2a_2 + \cdots + (d-1) a_{d-1})}. \eeq Summing over all $a_1+ 2a_2+\cdots + (d-1)a_{d-1} =n$ and applying (\ref{containment of J}) to (\ref{equation of In}) we get (\ref{description of In one}). (\ref{description of In two}) Fix $1 \leq j_1< j_2<\ldots < j_{i+1} \leq d$. Then $\det( X_{i+1, (j_1, \ldots, j_{i+1}) })$ is a sum of distinct monomials and the monomials which do not contain $x_1$ are homogeneous of degree $i+1$. Hence $({\mathcal J}_i + (x_1))T$ is a homogeneous ideal. From (\ref{equation of In}) we get (\ref{description of In two}). (\ref{description of In three}) By \eqref{equation of In}, ${\mathcal J}_1^{n} \subseteq {\mathcal I}_n$ and ${\mathcal J}_1^{n} + (x_1) = (x_2, \ldots, x_d)^{2n} + (x_1)$, which implies that $\mathfrak m = \sqrt{{\mathcal J}_1^{n} + (x_1)} \subseteq \sqrt{{\mathcal I}_n + (x_1)} \subseteq \mathfrak m$. \end{proof} \section{Monomial ordering and Initial ideals} Using the description of $\mathfrak p^{(n)}$ for $d=4$ and $n=2,3$, Goto proved that the rings $R/ (\mathfrak p^{(n)} +(f_1, f_2, f_3))$ are Cohen-Macaulay, where the $f_i \in \mathfrak p^{(i)}$ ($1 \leq i \leq 3$) are as described in \cite[page 57]{goto}. However, from this method it is not easy to prove a similar result for $d \geq 5$. The new idea in this paper is to give an ordering on $T^{\prime} = T/ (x_1)$, which we call the grevlex order; it is described in Definition~\ref{monomial order}. \begin{definition} \label{monomial order} Let ${\bf a } = (a_2, \ldots, a_d)$ and ${\displaystyle {\bf x}^{\bf a}:= \prod_{i=2}^d x_i^{a_i} }$. We say that ${\bf x}^{\bf a} > {\bf x}^{\bf b}$ if ${ \operatorname{deg}( {\bf x}^{\bf a} ) > \operatorname{deg}( {\bf x}^{\bf b} )}$ or $\operatorname{deg}( {\bf x}^{\bf a} ) = \operatorname{deg}( {\bf x}^{\bf b} )$ and in the ordered tuple $(a_2-b_2, \ldots, a_d-b_d)$ the left-most nonzero entry is negative. \end{definition} Note that with respect to this order we have $ x_2< x_3< \ldots < x_d$. For instance, for $d=4$ we have $x_3^2>x_2x_4$, since both monomials have degree $2$ and the left-most nonzero entry of $(0-1,\,2-0,\,0-1)$ is negative. For any polynomial $f \in T^{\prime}$, let $LM(f)$ denote the initial term of $f$ and for any ideal $I \subset T^{\prime}$, let $LI(I)$ be the initial ideal of the ideal $I$ with respect to the grevlex order. We define monomial ideals $I_n\subseteq LI( {\mathcal I}_{n} T^{\prime})$ (\eqref{ definition of In}, Proposition~\ref{description of In in S}) and consider the filtration $\mathcal F = \{I_n\}_{n \geq 0}$. In fact $I_n = LI( {\mathcal I}_nT^{\prime})$ (Theorem~\ref{symbolic power}(\ref{symbolic power three})). For $2 \leq r \leq s \leq d$ and $l \geq 1$, let $M_{r,s}^l$ denote the set of monomials of degree $l$ in the variables $x_r, \ldots, x_s$. We set $M_{r,s}:=M_{r,s}^1.$ Let $1 \leq i \leq d-1$ and $n \geq 1$.
We define the ideals $J_i$ and $I_n$ in $T^{\mathfrak prime}$ as follows: \beq \label{definition of Ji} J_i &:=& (M_{i+1,d})^{i+1},\\ \label{ definition of In} I_n &:= & \sum_{a_1 + 2 a_2 + \cdots +(d-1) a_{d-1} = n} J_{1}^{a_1} \cdots J_{d-1}^{a_{d-1} }. \eeq \begin{proposition} \label{description of In in S} For all $n \geq 1$, $I_n \subseteq LI( {\mathfrak mathcal I}_n \Tprime)$. \end{proposition} To prove Proposition~\ref{description of In in S}, we first need to consider $LM( \det( X_{i+1, (j_1, \ldots, j_{i+1}) } ) )$ for all $1 \leq j_1 < \cdots < j_{i+1} \leq d $. This is done in Proposition~\ref{proposition ji}. \begin{notation} For any $n \times n $ matrix $M = (m_{ij})$, let $p(M):=\mathfrak prod_{i+j = n+1} m_{ij}$ denote the product of anti-diagonal elements of the matrix $M$. \end{notation} \begin{proposition} \label{proposition ji} For $1 \leq i \leq d$, \been \item \label{proposition ji one} $ {\displaystyle p(X_{i+1, (j_1, \ldots, j_{i+1}) }) = LM (\det (X_{i+1, (j_1, \ldots, j_{i+1} ) } ) \Tprime) = \mathfrak prod_{k=1}^{i+1} x_{ j_k + (i-k+1) } .} $ \item \label{proposition ji two} $J_i \subseteq LI ( {\mathfrak mathcal J}_i \Tprime )$. \eeen \end{proposition} \begin{proof} (\ref{proposition ji one}) By definition, $p (X_{i+1, (j_1, \ldots, j_{i+1}) } ) = \mathfrak prod_{k=1}^{i+1} X_{i-k+2, j_k}$. We claim that $X_{i-k+2, j_k}=x_{j_ k + (i-k+1)}$ for all $k=1, \ldots, i+1$. Since $j_k \leq j_{i+1}-(i-k+1)$ for all $1\leq k\leq i+1,$ it follows that $j_k+(i-k+2)\leq j_{i+1} + 1 \leq d+1.$ Hence the matrix $X_{i+1, (j_1, \ldots, j_{i+1}) }$ is \beq \label{matrix x(i+1)} \left(\begin{array}{ccccccc} x_{j_1} & \cdots & x_{j_k} & \cdots & X_{1,j_{i+1}}=x_{j_{i+1}} \\ \vdots & \reflectbox{$\ddots$} & \reflectbox{$\ddots$} & \reflectbox{$\ddots$} & \vdots\\ x_{ j_1 +( i-k+1)} & \reflectbox{$\ddots$} &X_{i-k+2, j_k}=x_{j_ k +( i-k+1)} & \reflectbox{$\ddots$} &\vdots \\ \vdots & \reflectbox{$\ddots$} & \reflectbox{$\ddots$} & \reflectbox{$\ddots$} & \vdots\\ X_{i+1, j_1}=x_{ j_1 + i} & \reflectbox{$\ddots$} & \reflectbox{$\ddots$} & \reflectbox{$\ddots$} & \stackrelr \\ \end{array}\right). \eeq This proves the claim. Hence ${\displaystyle p (X_{i+1, (j_1, \ldots, j_{i+1}) } ) = \mathfrak prod_{k=1}^{i+1} x_{j_k + (i-k+1) }.} $ To complete the proof of (\ref{proposition ji one}) we need to show that: \beq \label{leading monomial} LM(\det( X_{i+1, (j_1, \ldots, j_{i+1}) } )\Tprime ) = \mathfrak prod_{k=1}^{i+1} x_{ j_k + (i-k+1)}. \eeq We prove (\ref{leading monomial}) by induction on $i$. Note that in the matrix $X$ defined in (\ref{matrix of p}), $x_1$ divides $X_{11}$ and $X_{ij}$ for $i+j \geq d+2$. Let $i=1$. Then \beqn \det( X_{2, (j_1, j_2) } )\Tprime = \begin{cases} x_{j_1 +1} x_{j_2} & \mathfrak mbox {if $j_1=1$ or $j_2=d$}\\ x_{j_1} x_{j_2+1} - x_{j_1+1}x_{j_2} & \mathfrak mbox{ if $1< j_1 <j_2 <d$ }. \end{cases} \eeqn Hence $LM(\det( X_{2, (j_1, j_2) } )\Tprime) = x_{j_1 +1} x_{j_2}$ if $j_1=1$ or $j_2=d$. If $1 < j_1 < j_2 <d$, then $j_1 < j_1 +1 \leq j_2 < j_2 +1$ and hence $x_{j_1+1}x_{j_2} > x_{j_1} x_{j_2+1}$ which implies that $LM(\det( X_{2, (j_1, j_2) } )\Tprime) = x_{j_1+1}x_{j_2}$. Hence (\ref{leading monomial}) is true for $i=1$. Now let $i>1$. 
Expanding the matrix in (\ref{matrix x(i+1)}) along the last row we get: \beq \label{expansion of det} \det( X_{i+1, (j_1, \ldots, j_{i+1}) } )\Tprime &=& \left( \sum_{k=1}^{t} (-1)^{k + i+1} x_{j_k + i} \det (X_{i, (j_1, \ldots, \widehat{j_k}, \ldots, j_{i+1})} )\right)\Tprime \eeq where $t = \mathfrak max\{k | j_k + i \leq d \}$. As $X_{i, (j_1, \ldots, \widehat{j_k}, \ldots, j_{i+1} )} $ has the form as the matrix described in $\eqref{matrix x(i+1)}$, by induction hypothesis, \beq \label{expansion of det k} \mathfrak nonumber LM ( \det( X_{i, (j_1, \ldots, \widehat{j_k}, \ldots, j_{i+1})})\Tprime ) &=& \mathfrak prod_{\alpha =1}^{k-1} x_{j_{\alpha} + i - \alpha} \mathfrak prod_{\alpha = k+1}^{i+1} x_{j_{\alpha} +( i - \alpha + 1 )}\\ &=& \begin{cases} \label{expansion of det k cases}\displaystyle \mathfrak prod_{\alpha = 2}^{i+1} x_{j_{\alpha} +( i - \alpha + 1 )} & \mathfrak mbox{ if } k=1\\ \displaystyle x_{j_1 + (i-1)}\mathfrak prod_{\alpha =2}^{k-1} x_{j_{\alpha} + i - \alpha} \mathfrak prod_{\alpha = k+1}^{i+1} x_{j_{\alpha} +( i - \alpha + 1 )} & \mathfrak mbox{ if } k =2, \ldots,t. \end{cases} \eeq Hence for all $k=2, \ldots, t$ \beq \label{comparing leading terms} \mathfrak nonumber && x_{j_k + i} LM ( \det( X_{i, (j_1, \ldots, \widehat{j_k}, \ldots, j_{i+1}) } ) \Tprime)\\ \mathfrak nonumber &=& {\displaystyle x_{j_1 + (i-1)} \left[ \mathfrak prod_{\alpha =2}^{k-1}x_{j_{\alpha} + i - \alpha} \right] x_{j_k+i} \left[ \mathfrak prod_{\alpha = k+1}^{i+1} x_{j_{\alpha} + (i - \alpha + 1 )}\right]} \hspace{.5in} \mathfrak mbox{[by \eqref{expansion of det k cases}]}\\ \mathfrak nonumber &<& {\displaystyle x_{j_1 + i} x_{j_2 + (i-1)} \cdots x_{j_{i} +1} x_{j_{i+1}}}\\ &=& {\displaystyle x_{j_1 + i}LM ( \det( X_{i, (\widehat{j_1}, j_2, \ldots, j_{i+1}) } ) \Tprime ) } \hspace{1.62in} \mathfrak mbox{[by \eqref{expansion of det k}]}. \eeq Therefore \beqn LM(\det( X_{i+1, (j_1, \ldots, j_{i+1}) } )\Tprime ) &=& {\displaystyle LM ( \sum_{k=1}^{t} (-1)^{k + i+1} x_{j_k + i} LM (\det( X_{i, (j_1, \ldots, \widehat{j_k}, \ldots, j_{i+1} ) }) )\Tprime ) }\hspace{0.2in} \mathfrak mbox{[by \eqref{expansion of det}]} \\ &=& x_{j_1 + i}LM ( \det( X_{i, (\widehat{j_1}, j_2, \ldots, j_{i+1}) } ) \Tprime ) \hspace{1.6in} \mathfrak mbox{[by \eqref{comparing leading terms}] }\\ &=& x_{j_1 + i} x_{j_2 + (i-1)} \cdots x_{j_{i} +1} x_{j_{i+1}} \hspace{1.92in} \mathfrak mbox{[by \eqref{expansion of det k cases}]}. \eeqn This proves (\ref{proposition ji one}). (\ref{proposition ji two}) Let $x_{i+k_1}^{\alpha_{i+k_1}} \cdots x_{i+k_s}^{\alpha_{i+k_s}} \in M_{i+1,d}^{i+1}$ where $1 \leq k_1 < k_2 < \cdots < k_s\leq d-i $ and ${\alpha_{i+k_1}}, \ldots, {\alpha_{i+k_s}} \mathfrak neq 0$ such that $\alpha_{i+k_1}+\cdots+\alpha_{i+k_s}=i+1.$ Set $\beta_0=0$ and $\beta_r=\alpha_{i+k_1}+\cdots+\alpha_{i+k_{r}}$ for $1\leq r \leq s.$ Define \beqn S_r=\{\beta_{r-1}+1,\beta_{r-1}+2,\ldots,\beta_{r}\} \mathfrak mbox{ for }1 \leq r \leq s. \eeqn Then $\bigsqcup_{r=1}^s S_r = \{1, \ldots, i+1\}$. Let $1 \leq t \leq i+1$. If $t \in S_r$ define \beqn j_t & = k_r + (t-1). \eeqn With this choice of $j_1, \ldots, j_{i+1}$, $p( X_{i+1, (j_1, \ldots, j_{i+1})} ) = x_{i+k_1}^{\alpha_{i+k_1}} \cdots x_{i+k_s}^{\alpha_{i+k_s}} $. By (\ref{proposition ji one}) $x_{i+k_1}^{\alpha_{i+k_1}} \cdots x_{i+k_s}^{\alpha_{i+k_s}} \in LI ( {\mathfrak mathcal J}_i \Tprime )$. Hence $J_i \subseteq LI ( {\mathfrak mathcal J}_i \Tprime )$. 
\end{proof} \mathfrak noindent {\bf Proof of Proposition~\ref{description of In in S}:} The proof follows from (\ref{equation of In}), Proposition~\ref{proposition ji}\eqref{proposition ji two} and (\ref{ definition of In}). \section{The associated graded ring corresponding to the filtration $\mathcal F:= \{ I_n\}_{n \geq 0}$} Let $G(\mathcal F) :=\oplus_{n \geq 0} I_n /I_{n+1}$ be the associated graded ring corresponding to the filtration $\mathcal F= \{ I_n\}_{n \geq 0}$, where $I_n$ are ideals defined in \eqref{ definition of In}. By definition of $I_n$, $G(\mathcal F) $ is Noetherian (Theorem~\ref{cohen macaulayness of G}). One of the key steps is to prove that the associated graded ring $G(\mathcal F) = \oplus_{n \geq 0} (I_n/I_{n+1})$ is Cohen-Macaulay. In particular we show that $(x_2^2)^{\stackrelr}, \ldots, (x_d^d)^{\stackrelr}$ is a regular sequence in $G(\mathcal F)$ (Theorem~\ref{cohen macaulayness of G}). As an immediate consequence, we give a formula for ${\displaystyle \ell\left( \f{\Tprime}{(I_n + (x_2^2, \cdots, x_{k}^{k} ))\Tprime}\right)}$ (Proposition~\ref{computing full length}) which is useful in the subsequent sections. The following proposition is crucial to prove Theorem~\ref{cohen macaulayness of G}. \begin{proposition} \label{symbolic ideal quotients} For all $n \geq 1$ and $i = 2, \ldots, d$, \beqn (I_n: (x_i^i)) = \begin{cases} T^{\mathfrak prime} & \mathfrak mbox{ if } n <i \\ I_{n-i+1} & \mathfrak mbox{ if } n \geq i. \end{cases} \eeqn \end{proposition} \begin{proof} If $n<i,$ then $x_i^i \in J_{i-1} \subseteq I_n$ which implies that $(I_n:(x_i^i ))=T^\mathfrak prime.$ Therefore, for the rest of the proof we will assume that $n \geq i.$ As $I_n = \displaystyle \sum_{a_1 + 2 a_2 + \cdots + (d-1) a_{d-1}=n} J_1^{a_1} \cdots J_{d-1}^{a_{d-1}},$ by \cite[Proposition~1.14]{ene-herzog}, we only need to consider $M_j \in J_j^{a_j}$ with $\operatorname{deg}(M_j) = (j+1) a_j $ and show that ${\displaystyle ((\mathfrak prod_{j=1}^{d-1}M_j): (x_i^i)) \subseteq I_{n-i+1}}.$ Note that \beqn \left( \left( \mathfrak prod_{j=1}^{d-1}M_j\right) :(x_i^i )\right) = \left( \f{\left( { \mathfrak prod_{j=1}^{d-1}M_j}\right) } {gcd( \mathfrak prod_{j=1}^{d-1}M_j, x_i^i) } \right) = \left( \f {\left( { \mathfrak prod_{j=1}^{i-1} {M_j}}\right)} {x_i^g} \left[ {\mathfrak prod_{j=i}^{d-1}M_j}\right] \right)\\ \eeqn where $g = \mathfrak min\{ i, \sum_{j=1}^{i-1} b_j\}$ and $b_j := \mathfrak max \{ t | x_i^t \mathfrak mbox{ divides } M_j\}$. If $b_j=0$ for all $j=1, \ldots, i-1$, then $g=0$ and ${ \displaystyle\left(\mathfrak prod_{j=1}^{d-1}M_j\right): (x_i^i ) = \left(\mathfrak prod_{j=1}^{d-1}M_j \right ) \subseteq I_n \subseteq I_{n-i+1}}$. Hence, for the rest of the proof we will assume that $b_j \mathfrak not =0$ for some $j=1, \ldots, i-1$. {\bf Claim:} For $j=1, \ldots, i-1$, there exist integers $a_j^{\mathfrak prime}$ and monomials $M_j^{\mathfrak prime}$ such that: \been \item \label{claim one} $M_j^{\mathfrak prime}\in J_j^{ a_j^{\mathfrak prime} }$ for all $j=1, \ldots, i-1$. \item \label{claim two} $ {\displaystyle \f{ \left( { \displaystyle\mathfrak prod_{j=1}^{i-1} {M_j}} \right) } {x_i^g} = \left( \mathfrak prod_{j=1}^{i-1} M_j^{\mathfrak prime}\right) N }$, for some monomial $N$ in $T^{\mathfrak prime}$. \item \label{claim three} ${\displaystyle \sum_{j=1}^{i-1} j a_j^{\mathfrak prime} + \sum_{j=i}^{d-1} j a_j \geq n-i+1}$. \eeen Put ${ \displaystyle k := \mathfrak min\left\{ l \left| \sum_{j=l}^{i-1} b_j \leq i-1 \right.\right\}}$. 
For $k \leq j \leq i-1$ we define $q_j$ and $r_j$ using the following algorithm:\newline \begin{algorithm}[] \caption{} \begin{algorithmic}[1] \Ensure{Defines $q_j,r_j$ for $k \leq j \leq i-1.$} \Require{$r_i=0$ and $j=i-1$} \Statex \While{$j \geq k$} \If{$b_j=0$} \State{define $q_j=0$ and $r_j=r_{j+1}$} \Else \State { find integers $q_j$ and $0 \leq r_j \leq j$ such that} \beq \label{defn of qj} b_j-r_{j+1}=(j+1)q_j-r_j. \eeq \EndIf \State \Return{$q_j,r_j$} \State $j\gets j-1$ \EndWhile \end{algorithmic} \end{algorithm} Define non-negative integers $q_{k-1}$ and $r_{k-1}$ as follows: Put \beq \label{definition of c-1} c&:=&g- \sum_{j=k}^{i-1} b_j. \eeq If $c=0$, then put $q_{k-1}:=0$ and $r_{k-1}:=r_k$. If $c>0$, then choose $q_{k-1}\geq 0$ and $0 \leq r_{k-1} \leq k-1$ such that \beq \label{defintion of c-2} c-r_{k}&=&kq_{k-1}-r_{k-1}. \eeq For $j=1, \ldots, i-1$ we define $a_j^{\prime}$ as follows: \beq \label{a prime} a_{j}^{\prime} := \begin{cases} a_j- q_j & \mbox{ if } j \in\{ k-1, \ldots, i-1\} \setminus \{ r_{k-1} -1\}\\ a_j & \mbox{ if } j\in \{1, \ldots, k-2\} \setminus \{ r_{k-1} -1\}\\ a_{r_{k-1}-1} +1 & \mbox{ if } j = r_{k-1}-1 \mbox{ and } r_{k-1} \geq 2 .\\ \end{cases} \eeq Set $M_0=1$ and define $N_j$ for $j = k-1, \ldots, i $ as follows: \begin{eqnarray*} N_j=\begin{cases} 1 & \mbox{ if } j=i\\ \mbox{ a monomial of degree $r_j$ that divides }\frac{M_jN_{j+1}}{x_i^{b_j}} &\mbox{ if } b_j \neq 0 \mbox{ and } k\leq j < i\\ N_{j+1} &\mbox{ if } b_j = 0 \mbox{ and } k\leq j < i\\ \mbox{ a monomial of degree $r_{k-1}$ that divides } \frac{M_{k-1}N_k}{x_i^c} &\mbox{ if } j=k-1. \end{cases} \end{eqnarray*} For $j=1, \ldots,i-1$ we define $M_j^{\prime}$ as follows: \beqn M_j^{\prime} := \begin{cases} {\displaystyle \f{M_j N_{j+1}}{x_i^{b_j} N_j} }& \mbox{ if } j \in\{ k, \ldots, i-1\} \setminus \{ r_{k-1} -1\} \\ {\displaystyle \f{M_{k-1} N_{k}} {x_i^c N_{k-1}}} & \mbox{ if } j = k-1\\ M_j & \mbox{ if } j\in\{1, \ldots, k-2\} \setminus \{ r_{k-1} -1\}\\ M_{r_{k-1}-1} N_{k-1} & \mbox{ if } j= r_{k-1}-1 \mbox{ and } 2 \leq r_{k-1}.\\ \end{cases} \eeqn By our definition of $M_j^\prime$, $\operatorname{deg}(M_j^{\prime})= (j+1) a_j^{\prime}$ for all $j=1, \ldots, i-1$. Hence $M_j^{\prime} \in J_j^{ a_j^{\prime} }$. This proves (\ref{claim one}) of the Claim. Let \beqn N=\begin{cases} N_{k-1} &\mbox{ if } r_{k-1} \leq 1\\ 1 &\mbox{ if } r_{k-1} > 1. \end{cases} \eeqn Then we can express ${\displaystyle \prod_{j=1}^{i-1} M_j}$ as in (\ref{claim two}) of the Claim. We now prove (\ref{claim three}) of the Claim. To complete the proof it suffices to show that ${ \displaystyle\sum_{j =1}^{i-1} j a_j^{\prime} + \sum_{j = i}^{d-1} j a_j \geq n -i + 1}$. Put \beqn \alpha(r_{k-1}) &:=& \begin{cases} 0 & \mbox{ if } r_{k-1}=0 \\ r_{k-1}-1 & \mbox{ if } r_{k-1} \not = 0.
\end{cases} \eeqn Then \beq \label{adding}\nonumber && \sum_{j =1}^{i-1} j a_j^{\prime} + \sum_{j = i}^{d-1} j a_j \\ \nonumber &=& n - \sum_{j=k-1}^{i-1}[(j+1) q_j ] + \sum_{j=k-1}^{i-1}q_j + \alpha(r_{k-1}) \hspace{1.8in} \mbox{[by (\ref{a prime})]} \\ \nonumber &=& n- [c- r_{k} + r_{k-1}] - \sum_{j=k}^{i-1} [b_j - r_{j+1}+ r_j] + \sum_{j=k-1}^{i-1}q_j + \alpha(r_{k-1}) \hspace{.5in} \mbox{[by (\ref{defintion of c-2}) and (\ref{defn of qj})]}\\ \nonumber &=& n - g + [ \alpha(r_{k-1} )- r_{k-1}] + \sum_{j=k-1}^{i-1}q_j \hspace{2.2in} \mbox{[by (\ref{definition of c-1})]} . \eeq We claim that:\newline (a) ${\displaystyle \sum_{j=k-1}^{i-1}q_j \geq 1}.$ \newline (b) If $g=i$ and $r_{k-1} > 0$, then ${\displaystyle \sum_{j=k-1}^{i-1}q_j \geq 2}.$ Suppose $q_j=0$ for all $j=k-1, \ldots, i-1$. Then \beqn 0 \leq g = \sum_{j=k}^{i-1} b_j + c = \sum_{j=k}^{i-1} [r_{j+1} - r_j] + r_k - r_{k-1} = - r_{k-1} \leq 0, \eeqn which implies that $g=0$. Hence $\sum_{j=k}^{i-1}b_j=0$, which contradicts our assumption on the $b_j$'s. This proves (a) of the claim. Now suppose $g=i$ and $r_{k-1} > 0$. By (a), ${\displaystyle \sum_{j=k-1}^{i-1}q_j \geq 1}.$ If ${\displaystyle \sum_{j=k-1}^{i-1}q_j=1},$ then $q_l=1$ for some $k-1 \leq l\leq i-1$ and $q_j=0$ for $j\neq l.$ Hence \beqn i = g = \sum_{j=k}^{i-1} b_j + c = (l+1) - r_{k-1} \leq i-r_{k-1} \leq i-1, \eeqn which leads to a contradiction. If $g \leq i-1$, or $g=i$ and $r_{k-1}=0$, then by Claim (a), ${\displaystyle -g +[ \alpha(r_{k-1} )- r_{k-1}] + \sum_{j=k-1}^{i-1}q_j \geq -i+1}$. If $g=i$ and $r_{k-1} \not =0$, then by Claim (b), ${\displaystyle -g +[ \alpha(r_{k-1} )- r_{k-1}] + \sum_{j=k-1}^{i-1}q_j \geq -i+1}$. This completes the proof of (\ref{claim three}) of the Claim. \end{proof} \begin{theorem} \label{cohen macaulayness of G} The associated graded ring $G(\mathcal F)$ is Cohen-Macaulay. \end{theorem} \begin{proof} Let ${a}^{\star}$ denote the image of $a$ in $G(\mathcal F).$ Since $x_i^i \in I_{i-1} \setminus I_{i}$ it follows that $(x_i^i)^\star \in [G(\mathcal F)]_{i-1}.$ To prove the theorem it is enough to show that $(x_2^2)^{\star}, \ldots ,(x_i^i)^{\star}$ is a regular sequence in $G(\mathcal F)$ for all $2 \leq i \leq d.$ We prove this by induction on $i$. If $i=2$, then by Proposition~\ref{symbolic ideal quotients}, ${(x_2^2)}^{\star}$ is a regular element in $G(\mathcal F)$. Now let $i>2$ and assume that $ {(x_2^2)}^{\star}, \ldots, {(x_{i-1}^{i-1})}^{\star}$ is a regular sequence in $G(\mathcal F)$. Then \beqn \f{G(\mathcal F) }{({(x_2^2)}^{\star}, \ldots, {(x_{i-1}^{i-1})}^{\star}) } &\cong& \bigoplus_{n \geq 0} \f{I_n} { I_{n+1} + \sum_{j=2}^{i-1} {x_j^j} I_{n+1-j}} . \eeqn One can verify that \beqn { \displaystyle ((I_{n+i} + \sum_{j=2}^{i-1} {x_j^j} I_{n+i-j}) : (x_i^i)) } &=& {\displaystyle (I_{n+i} :( x_i^i) ) + \sum_{j=2}^{i-1}( {x_j^j} I_{n+i-j}: (x_i^i))} \hspace{.3in} \mbox{\cite[Proposition~1.14]{ene-herzog}} \\ &=& {\displaystyle (I_{n+i} :( x_i^i) ) + \sum_{j=2}^{i-1} {x_j^j}( I_{n+i-j}: (x_i^i)) }\\ &=&{\displaystyle I_{n+1} + \sum_{j=2}^{i-1} {x_j^j} I_{n+1-j}} \hspace{1.2in} \mbox{[by Proposition~\ref{symbolic ideal quotients}]}. \eeqn Hence ${(x_{i}^{i})}^{\star}$ is ${G(\mathcal F) } / {({(x_2^2)}^{\star}, \ldots, {(x_{i-1}^{i-1})}^{\star})}$-regular.
\end{proof} \begin{proposition} \label{computing full length} Let $2 \leq k \leq d$. Then for all $n \geq 1$, \beqn \ell \left( \f{\Tprime}{(I_n + (x_2^2, \cdots, x_{k}^{k} ))\Tprime}\right) = \sum_{i=0}^{k-1} (-1)^i \left[ \sum_{1 \leq j_1 < \cdots < j_i \leq k-1} \ell \left( \f{\Tprime}{(I_{n - (j_1 + \cdots + j_{i} )} )\Tprime} \right) \right] . \eeqn \end{proposition} \begin{proof} The proof follows from Proposition~\ref{vanishing} and Theorem~\ref{cohen macaulayness of G}. \end{proof} \section{Monomial generators of $I_{n-1}$ modulo $(I_n : (x_d))$ as a $\Bbbk$-vector space } In this section we first show that $(I_n : (x_d)) \subseteq I_{n-1}$. Next we describe the generators of $I_{n-1}$ modulo $(I_n : (x_d))$ (Proposition~\ref{a k basis}). This will be used to compute $\ell (T^{\prime} / I_{n} )$ and $\ell (T^{\prime} / I_{n} + (x_2^2, \ldots, x_{k+1}^{k+1}))$. The following lemma is simple, but we state it as it is crucially used to prove Lemma~\ref{reduction step}. \begin{lemma} \label{powers of momomials} \begin{enumerate} \item \label{powers of momomial-one} Let $1 \leq j \leq d-1$ and $a \geq 1$. Then \beqn (M_{j+1,d})^{ (j+1)a } = x_{j+1}^{(j+1) a -j }( M_{j+1,d})^j + (M_{j+2,d})^{j+1} (M_{j+1,d})^{ (j+1) (a-1)} . \eeqn \item \label{powers of momomial-two} Let $1 \leq k < j \leq d-1$ and $a,b \geq 1.$ Then \beqn (M_{k+1,d})^a (M_{j+1,d})^b = (M_{k+1,j+1})^{a} (M_{j+1,d})^{b} + (M_{k+1, d})^{a-1} (M_{j+2, d})^{b+1}. \eeqn \end{enumerate} \end{lemma} \begin{proof} (\ref{powers of momomial-one}) The proof follows by induction on $a.$ (\ref{powers of momomial-two}) The proof follows by induction on $a + b$. \end{proof} Before we proceed we set up some notation. For $1 \leq j\leq d-1$, let $a_j \not = 0$ and ${\bf a_j}:=(a_1,\ldots,a_{j}) \in \mathbb N^{j}$. We inductively define the set $S( {\bf a_j})$ as follows: \begin{eqnarray*} S({\bf a_1}) &:=& \{x_2^{2a_1-1} \} \\ S({\bf a_j}) &:=& \begin{cases} \{x_{j+1}^{(j+1) a_j -j} \} & \mbox{ if } \{i<j | a_i \neq 0 \} = \emptyset\\ \{x_{j+1}^{(j+1) a_j -j} \} S({\bf a_k}) M_{k+1, j+1}^k & \mbox{ if } \{i < j | a_i \neq 0 \} \not = \emptyset \mbox{ and } k = \max\{i<j | a_i \neq 0 \}. \end{cases} \end{eqnarray*} We set ${\bf J}^{\bf a_j}:=J_1^{a_1} \cdots J_{j}^{a_{j}}$. Let $\operatorname{wt} {\bf a_j}:=a_1+2a_2+\cdots+j a_{j}$ be the weight of ${\bf a_j}$. For all $n \in \mathbb N$ we define $\Lambda_{j,n}:=\{{\bf a_{j}} \in \mathbb N^{j}:\operatorname{wt} {\bf a_j} = n, ~~a_j \neq 0 \}.$ \begin{lemma} \label{reduction step} Let $n \geq 2$. Then \been \item \label{reduction step -2} $(I_n : (x_d)) \subseteq I_{n-1}$. \item \label{reduction step -1} For all $1 \leq j \leq d-1$ and for all ${\bf a_j} \in \Lambda_{j,n-1}$ \been \item \label{reduction step zero} $ {\displaystyle S({\bf a_j}) M_{j+1, d}^j \subseteq {\bf J}^{\bf a_j} \setminus \mathfrak m^{\prime} {\bf J}^{\bf a_j} } $ where $\mathfrak m^{\prime} = (x_2, \ldots, x_d)\Tprime$. \item \label{reduction step one} For all $1 \leq j \leq d-1$, ${\displaystyle {\bf J}^{ \bf a_j} \subseteq \left(S({\bf a_j}) M_{j+1,d}^j \right) + {(I_n : (x_d))}. } $ \eeen \item \label{reduction step three} $I_{n-1} = \sum_{j=1}^{d-1} \sum_{{\bf a_j} \in \Lambda_{j,n-1} }\left(S({\bf a_j}) M_{j+1,d}^j \right) + (I_n : (x_d))$.
\eeen \end{lemma} \begin{proof} \eqref{reduction step -2} By \cite[Proposition 1.14]{ene-herzog} it is enough to show that for all $j = 1, \ldots, d-1$ and ${\bf a_j } \in \Lambda_{j,n},$ $({\bf J}^{\bf a_{j}}: (x_d)) \subseteq I_{n-1}.$ One can verify that \beqn ( {\bf J}^{\bf a_{j}}: (x_d )) &=& (M_{2,d})^{2a_1} \cdots (M_{j,d})^{ j a_{j-1}} ( M_{j +1,d})^{ (j+1) a_{j}-1}\\ &=& (M_{2, d})^{2a_1} \cdots [ (M_{j,d})^{ j a_{j-1} } (M_{j+1 , d})^{j}] (M_{j+1 , d})^{ (j+1) a_{j}- (j+1)}\\ & \subseteq& (M_{2, d})^{2a_1} \cdots (M_{j,d})^{ j (a_{j-1}+1) } (M_{j+1, d})^{(j+1) (a_{j}-1)} \hspace{1.0in} [\mbox{as } (M_{j+1, d}) \subseteq (M_{j,d})] \\ &\subseteq& I_{n-1}, \eeqn since $a_1 + \cdots + (j-2)a_{j-2}+(j-1)( a_{j-1}+1) + j (a_{j}-1) = n-1$. This proves (\ref{reduction step -2}). \eqref{reduction step -1} Set $r({\bf a_j})= \# \{ i : 1 \leq i \leq j \mbox{ and } a_i \neq 0\}$. We prove this by induction on $r({\bf a_j})$. (\ref{reduction step zero}) If $r({\bf a_j})=1$, then $S({\bf a_j}) =\{ x_{j+1}^{(j+1) a_j -j}\} .$ Hence $ {\displaystyle S({\bf a_j}) M_{j+1, d}^j}=\{ x_{j+1}^{(j+1) a_j -j}\} M_{j+1, d}^j \subseteq J_j^{a_j} = J_j^{\bf a_j} .$ If $ r({\bf a_j})>1$ and $k = \max\{ i | 1 \leq i < j \mbox{ and } a_i \neq 0\}$, then $ S({\bf a_j}) M_{j+1, d}^j = S({\bf a_k}) M_{k+1, j+1}^k \left[ x_{j+1}^{(j+1) a_j - j}M _{j+1,d}^j \right] $ and by the induction hypothesis, \beqn S({\bf a_k}) M_{k+1, j+1}^k \left[ x_{j+1}^{(j+1) a_j - j}M _{j+1,d}^j \right] \subseteq {\bf J}^{\bf a_k} J_j^{a_j} = {\bf J}^{\bf a_j}. \eeqn Comparing the degrees of the monomials in $ S({\bf a_j}) M_{j+1, d}^j$ we conclude that these monomials are not in $\mathfrak m^{\prime} {\bf J}^{\bf a_j}$. (\ref{reduction step one}) If $r({\bf a_j})=1$, then \beqn {\bf J}^{\bf a_j} &=& (M_{j+1,d})^{(j+1)a_j} \\ &=& x_{j+1}^{(j+1) a_j -j }(M_{j+1,d})^j + (M_{j+2,d})^{j+1} (M_{j+1,d})^{ (j+1) (a_j-1)} \hspace{.2in} \mbox{[by Lemma \ref{powers of momomials}(\ref{powers of momomial-one})]}\\ &\subseteq& S({\bf a_j})(M_{j+1,d})^j + (I_n: (x_d)) \eeqn as $ x_d (M_{j+2,d})^{j+1} (M_{j+1,d})^{ (j+1) (a_j-1)} \subseteq J_{j+1} J_j^{a_j-1} $ and $(j+1) + j(a_j-1) = ja_j + 1 = (n-1) + 1 =n$. Hence (\ref{reduction step one}) is true for $r({\bf a_j})=1$. Now let $ r({\bf a_j})>1$ and $k = \max\{ i | 1 \leq i < j \mbox{ and } a_i \neq 0\}$.
Then \beqn && {\bf J}^{\bf a_j} \\ &=& {\bf J}^{\bf a_k} J_j^{a_j}\\ \nonumber &\subseteq& \left( ( S({\bf a_k}) M_{k+1,d}^k) + (I_{n-ja_j} : (x_d)) \right)J_j^{a_j} \\ && \hspace{4in} [\mbox{by the induction hypothesis applied to ${\bf J}^{\bf a_k} $}]\\ &\subseteq& x_{j+1}^{(j+1) a_j - j } ( S({\bf a_k}) M_{k+1,d}^k )(M_{j+1,d} )^j + \left( S({\bf a_k}) M_{k+1,d}^k \right) (I_{ja_j+1 } : (x_d)) + (I_{n-ja_j} : (x_d)) J_j^{a_j} \\ && \hspace{4.5in} [\mbox{by the case $r=1$ applied to $J_j^{a_j} $}] \\ & \subseteq & x_{j+1}^{(j+1) a_j - j } ( S({\bf a_k}) M_{k+1,d}^k) (M_{j+1,d} )^j + (I_n : (x_d)) \hspace{2.45in} \mbox{[by Lemma~\ref{reduction step}\eqref{reduction step zero}]}\\ &\subseteq& x_{j+1}^{(j+1) a_j - j } ( S({\bf a_k}) )\left[ (M_{k+1,j+1})^{k} (M_{j+1,d})^{j} + (M_{k+1, d})^{k-1} (M_{j+2, d})^{j+1}\right] + (I_n : (x_d)) \hspace{.3in} \mbox{[by Lemma~\ref{powers of momomials}(\ref{powers of momomial-two})]}\\ &=& \left( S({\bf a_j}) (M_{j+1,d})^{j}\right) + ( I_n: (x_d)) \eeqn as \beqn && x_d ( {\displaystyle x_{j+1}^{(j+1) a_j - j }}) ( S({\bf a_k}) ) (M_{k+1, d})^{k-1} (M_{j+2, d})^{j+1}\\ \nonumber &\subseteq& \left[ \left( S({\bf a_k}) \right) (x_{j+1} (M_{k+1, d})^{k-1})\right] x_d( M_{j+2, d})^{j+1} (x_{j+1}^{(j+1)(a_j -1) } )\\ \nonumber &\subseteq& {\bf J}^{\bf a_k} J_{j+1} J_j^{a_j-1} \hspace{3.5in} \mbox{[by Lemma~\ref{reduction step}\eqref{reduction step zero}]}\\ &\subseteq & I_n. \eeqn This proves \eqref{reduction step one} for all $r({\bf a_j}) \geq 1$. \eqref{reduction step three} The proof follows from \eqref{reduction step -2} and \eqref{reduction step -1}. \end{proof} \begin{proposition} \label{a k basis} The set $ \{M + (I_n : (x_d)) \mid M \in \bigcup_{j=1}^{d-1} \bigcup_{{\bf a_j} \in \Lambda_{j,n-1}} S({\bf a_j}) M_{j+1, d}^j \} $ generates $ {\displaystyle {\f{I_{n-1}}{ (I_n: (x_d))}} }$ as a $\Bbbk$-vector space. \end{proposition} \begin{proof} Let $M$ be a monomial in $S({\bf a_j}) M_{j+1, d}^j.$ By Lemma \ref{reduction step}\eqref{reduction step zero}, $M \in {\bf J}^{\bf a_j}.$ Thus $x_d x_i M \in ({\mathfrak m^{\prime}})^2 J_1^{a_1} \cdots J_{j}^{a_{j} }= J_1^{a_1+1} \cdots J_{j}^{a_{j} }\subseteq I_n. $ This implies that $x_i M \in (I_n : (x_d))$ for all $i=2, \ldots, d$. Hence from Lemma~\ref{reduction step}\eqref{reduction step three}, the monomials in $S({\bf a_j}) M_{j+1, d}^j $ generate $I_{n-1}$ modulo $(I_n : (x_d))$ as a $\Bbbk$-vector space. \end{proof} From Proposition~\ref{a k basis}, giving an upper bound for the length of the vector space $ {\displaystyle {\f{I_{n-1}}{ (I_n: (x_d))}} }$ amounts to counting monomials and hence is combinatorial in nature. We therefore prove some preliminary results before we arrive at the main result of this section. We state the well-known Vandermonde identity, which will be needed in our proofs. \begin{lemma} \label{vandermonde} \rm[Vandermonde's identity] Let $n,r, s \in \mathbb N $. Then \beqn \sum_{i \geq 0} {n \choose i}{s \choose r-i} = {n + s \choose r}. \eeqn \end{lemma} The next lemma is the main step in proving our main result. \begin{lemma} \label{number of sj} Fix $1 \leq j \leq d-1$ and $n >1$.
Then ${\displaystyle \sum_{{\bf a_j } \in \Lambda_{j,n-1}} \# S({\bf a_j}) = \binom{n-2}{j-1}.} $ \end{lemma} \begin{proof} We prove this by induction on $j.$ If $j=1$ then $S({\bf a_1})= \{x_2^{2a_1 -1}\}$, and hence the assertion is true for $j=1.$ Now let $j >1.$ Then \beq \label{ computation of Saj}\nonumber && \sum_{{\bf a_j } \in \Lambda_{j,n-1}} \#S({\bf a_j})\\ &=& \begin{cases} {\displaystyle \sum_{a_j=1}^{\lfloor \frac{n-2}{j} \rfloor} \sum_{i=1}^{j-1} \left[ \sum_{{\bf a_i} \in \Lambda_{i,n-1-ja_j}} \#S({\bf a_i}) \right] \#M_{i+1,j+1}^i } & \mbox{ if }j \not | (n-1 )\\ \#S(0,\ldots,0,\frac{n-1}{j}) + {\displaystyle \sum_{a_j=1}^{\lfloor \frac{n-2}{j} \rfloor} \sum_{i=1}^{j-1} \left[ \sum_{{\bf a_i} \in \Lambda_{i,n-1-ja_j}} \#S({\bf a_i}) \right] \#M_{i+1,j+1}^i} & \mbox{ if } j | (n-1) \end{cases}. \eeq Define $ {\displaystyle \alpha_{j,n} := \begin{cases} 0 & \mbox{ if } j \not | (n-1 )\\ 1 & \mbox{ if } j| (n-1) \end{cases} } . $ Then \eqref{ computation of Saj} can be written as \beqn && \sum_{{\bf a_j } \in \Lambda_{j,n-1}} \#S({\bf a_j})\\ &=& \alpha_{j,n} + \sum_{a_j=1}^{\lfloor \frac{n-2}{j} \rfloor} \sum_{i=1}^{j-1}\left[ \sum_{{\bf a_i} \in \Lambda_{i,n-1-ja_j}} \#S({\bf a_i}) \right]\#M_{i+1,j+1}^i \\ &=& \alpha_{j,n} + \sum_{a_j=1}^{\lfloor \frac{n-2}{j} \rfloor} \sum_{i=1}^{j-1} \binom{n-ja_j-2}{i-1} \binom{j}{j-i} \hspace{1.95in} \mbox{ [by induction hypothesis] }\\ &=& \alpha_{j,n} + \sum_{a_j=1}^{\lfloor \frac{n-2}{j} \rfloor} \left[ \left[ \sum_{i=0}^{j-1} \binom{n-ja_j-2}{i} \binom{j}{j-i-1}\right] - \binom{n - j a_j -2}{j-1} \right] \hspace{.2in} \mbox{[replacing $i-1$ by $i$]}\\ &=& \alpha_{j,n} + \sum_{a_j=1}^{\lfloor \frac{n-2}{j} \rfloor} \left[ \binom{n - j (a_j-1) -2}{j-1} - \binom{n - j a_j -2}{j-1} \right] \hspace{1.1in} \mbox{[by Lemma~\ref{vandermonde}]}\\ &=& \alpha_{j,n} + \binom{n-2}{j-1} - \alpha_{j,n}\\ &=& \binom{n-2}{j-1}. \eeqn \end{proof} We are now ready to prove the main result in this section. \begin{proposition} \label{length of last term} Let $d,n \geq 2$. Then \beqn \ell \left( \f{I_{n-1}} {(I_n : (x_d))}\right) \leq {n + d-3 \choose d-2}. \eeqn \end{proposition} \begin{proof} By Proposition~\ref{a k basis} we get \beqn \ell \left( \f{I_{n-1} } {(I_n : (x_d))} \right) &\leq& \sum_{j=1}^{d-1} \left[ \sum_{ {\bf a_j} \in \Lambda_{j, n-1} } \# S({\bf a_j}) \right] \# M_{j+1, d}^j \\ & =& \sum_{j=1}^{d-1} {n- 2 \choose j-1} {d-1 \choose d-j-1} \hspace{.6in} \mbox{[by Lemma~\ref{number of sj}]}\\ &=& \sum_{i = 0}^{d-2}{n-2 \choose i} { d -1 \choose d-i-2} \hspace{.6in} \mbox{[put $i=j-1$]}\\ &=& {n + d-3 \choose d-2} \hspace{1.36in} \mbox{[by Lemma~\ref{vandermonde}]}. \eeqn\end{proof} \section{Cohen-Macaulayness of $R/ (\mathfrak p^{(n)} + ({\bf f}_k))$ } In \cite[Proposition~7.6]{goto} Goto showed that $R/ (\mathfrak p^{(n)} + ({\bf f}_{d-1}) )$ is Cohen-Macaulay for $d=3,4$ and $n \leq {d-1 \choose 2}$. This was done by explicitly describing $\mathfrak p^{(n)}$ for $d\leq 4$ and $n \leq {d-1 \choose 2}$. Using the techniques developed in this paper, we generalise Goto's result for all $d \geq 2$ and $n \geq 1$. A lower bound for $\ell (R/ (\mathfrak p^{(n)} + ({\bf f}_k , x_1) ))$ was given using the multiplicity formula (Theorem~\ref{computation of multiplicity}).
In this section, we show that the inequality in Theorem~\ref{computation of multiplicity} is indeed an equality (Theorem~\ref{bound on length}). This implies that for all $n \geq 1$ and $1 \leq k \leq d-1$, the rings $R/ (\mathfrak p^{(n)} + ({\bf f}_k) )$ are Cohen-Macaulay. As a consequence, we describe $\mathfrak p^{(n)}$ for all $d \geq 2$ and all $n \geq 1$. In particular we prove that $\mathfrak p^{(n)} = {\mathcal I}_nR$ and $LI(\mathfrak p^{(n)} \Tprime) = I_n\Tprime$ for all $d \geq 2$ and $n \geq 1$. We first give an upper bound on $\ell (T^{\prime}/ I_n)$. This is crucial to prove an interesting result which shows the equality of the lengths of various modules (over different rings) in Theorem~\ref{main theorem 1}. \begin{proposition} \label{main theorem} Let $d \geq 2$. Then for all $n \geq 1$, \beqn \ell \left( \f{\Tprime}{I_n} \right) \leq d{n+d-2 \choose d-1}. \eeqn \end{proposition} \begin{proof} We prove this by double induction on $n$ and $d$. If $n =1$, then \beqn \ell \left( \f{\Tprime}{I_1} \right) = \ell \left( \f{k[x_2, \ldots, x_d]}{(x_2, \ldots, x_d)^2}\right) = d. \eeqn If $d=2$, then \beqn \ell \left( \f{\Tprime}{I_n} \right) = \ell \left( \f{k[x_2]}{(x_2)^{2n}}\right) = 2n. \eeqn Now let $n > 1$ and $d>2$. From the exact sequence \beqn 0 {\longrightarrow} \f{\Tprime}{(I_n: (x_d))} \stackrel{.x_d}{{\longrightarrow}} \f{\Tprime}{I_n} {\longrightarrow} \f{\Tprime}{I_n + (x_d)} {\longrightarrow} 0 \eeqn we get \beqn && \ell \left( \f{\Tprime}{I_n}\right)\\ &=& \ell \left( \f{\Tprime}{I_n + (x_d)}\right) + \ell \left( \f{\Tprime}{(I_n : (x_d))}\right)\\ &=& \ell \left( \f{\Tprime}{I_n + (x_d)}\right) + \ell \left( \f{\Tprime}{I_{n-1}}\right) + \ell \left( \f{I_{n-1}}{(I_n : (x_d))}\right) \hspace{0.95in} \mbox{[Lemma~\ref{reduction step}(\ref{reduction step -2})]}\\ &\leq& (d-1) {n + d-3 \choose d-2} + d {n-1 + d-2 \choose d-1} + {n + d-3 \choose d-2} \hspace{.1in} \mbox{[by induction hypothesis and Proposition~\ref{length of last term}]}\\ &=& d{n + d-3 \choose d-2 } + d {n + d-3 \choose d-1}\\ &= & d {n + d-2 \choose d-1}. \eeqn \end{proof} \begin{theorem} \label{main theorem 1} Let $d \geq 2$. Then for all $n \geq 1$, \beqn e \left(x_1; \f{R}{\mathfrak p^{(n)}}\right) = \ell \left( \f{R}{\mathfrak p^{(n)} + (x_1)}\right) &=& \ell_R \left(\f{R} { ({\mathcal I}_{n}, x_1) R}\right) = \ell_{\Tprime} \left( \f{\Tprime} { {\mathcal I}_{n}\Tprime } \right)\\ &=& \ell_{\Tprime} \left( \f{\Tprime} { LI( {\mathcal I}_{n} )\Tprime } \right) = \ell \left( \f{\Tprime}{I_n} \right) = d{n+d-2 \choose d-1}. \eeqn \end{theorem} \begin{proof} From Proposition~\ref{description of In}(\ref{description of In one}), ${\mathcal I}_n R\subseteq \mathfrak p^{(n)}$. Since $R/ \mathfrak p^{(n)}$ is Cohen-Macaulay, \beq \label{equality of all terms 1} e\left(x_1;\f{R} {\mathfrak p^{(n)}}\right) = \ell_R \left(\f{R} {\mathfrak p^{(n)}+(x_1)}\right) \leq \ell_R \left(\f{R} { ({\mathcal I}_{n}, x_1) R}\right). \eeq By Proposition~\ref{description of In}(\ref{description of In three}), for any prime $\mathfrak q \not = \mathfrak m $, $(({\mathcal I}_n , x_1)T)_{\mathfrak q} = T$. This implies that ${\displaystyle \operatorname{Supp}_T \left( \f{T }{ ( {\mathcal I}_{n}, x_1)T} \right) =\{\mathfrak m\}} $.
Hence we get \beq \label{equality of all terms 2}\nonumber \ell_R \left(\f{R} { ({\mathcal I}_{n}, x_1) R}\right) \nonumber &=& \ell_{\Tprime} \left( \f{\Tprime} { {\mathcal I}_{n} \Tprime } \right) \hspace{1.35in} \mbox{[Lemma~\ref{comparing lengths}]} \\ \nonumber &=& \ell_{\Tprime} \left( \f{\Tprime} { LI( {\mathcal I}_{n} )\Tprime } \right) \hspace{1in} \mbox{ \cite[Proposition~2.1]{bayer-stillmann}} \\ \nonumber &\leq& \ell_{\Tprime} \left(\f{\Tprime} {I_n}\right) \hspace{1.5in} \mbox{ [Proposition~\ref{description of In in S}]} \\ \nonumber & \leq & d \binom{n+d-2}{d-1} \hspace{1.2in} \mbox{ [Proposition~\ref{main theorem}]} \\ \nonumber &=& e \left(x_1;\f{R}{\mathfrak p}\right) \ell_{R_{\mathfrak p}} \left(\f{R_{\mathfrak p}}{\mathfrak p^{n}R_{\mathfrak p}}\right) \\ \nonumber &=& e \left(x_1;\f{R} {\mathfrak p}\right) \ell_{R_{\mathfrak p}} \left(\f{R_{\mathfrak p}} {\mathfrak p^{(n)}R_{\mathfrak p}}\right) \hspace{.5in} [\mbox{since } \mathfrak p^{(n)}R_{\mathfrak p}=\mathfrak p^{n}R_{\mathfrak p}]\\ &=& e\left(x_1;\f{R} {\mathfrak p^{(n)}}\right) \hspace{1.3in} \mbox{[by \cite[Theorem 14.7]{matsumura}]}. \eeq Thus equality holds in (\ref{equality of all terms 1}) and (\ref{equality of all terms 2}), which proves the theorem. \end{proof} \begin{theorem} \label{bound on length} Let $d \geq 2$ and $1 \leq k \leq d-1$. Let ${\bf f}_k$ be as in \eqref{definition of f_i}. Then for all $n \geq 1$, \beqn && e \left(x_1; \f{R}{\mathfrak p^{(n)} + ({\bf f}_k)}\right) = \ell_R \left( \f{R}{\mathfrak p^{(n)} + (x_1, {\bf f}_k)}\right) = \ell_{\Tprime} \left( \f{\Tprime} { ({\mathcal I}_{n} + {\bf f}_k)\Tprime } \right)\\ &&= \ell_{\Tprime} \left( \f{\Tprime} { LI( ( {\mathcal I}_{n} + {\bf f}_k )\Tprime )} \right) = \ell \left( \f{\Tprime}{I_n + (x_2^2, \ldots, x_{k+1}^{k+1}) } \right)\\ &&= d \sum_{i=0}^{k} (-1)^i \left[ \sum_{1 \leq j_1 < \cdots < j_i \leq k} \binom{n-(j_1+\cdots+j_i) + d-2}{d-1} \right]. \eeqn In particular, $ {\displaystyle R/ (\mathfrak p^{(n)}+({\bf f}_k )) } $ is Cohen-Macaulay. \end{theorem} \begin{proof} From Proposition~\ref{description of In}(\ref{description of In one}), $({\mathcal I}_n ,x_1,{\bf f}_k ) R\subseteq ( \mathfrak p^{(n)} ,x_1,{\bf f}_k ) R$. Hence \beq \label{multiplicity inequality 1}\nonumber e \left(x_1; \f{R} {\mathfrak p^{(n)} + ({\bf f}_k)} \right) &\leq & \ell_R \left( \f{R} {\mathfrak p^{(n)}+(x_1,{\bf f}_k )}\right) \hspace{.5in} \mbox{\cite[Theorem~14.10]{matsumura}}\\ &\leq& \ell_R \left( \f{R} { ({\mathcal I}_{n}, x_1, {\bf f}_k)R }\right) . \eeq Since $({\mathcal I}_n, x_1) T \subseteq ({\mathcal I}_n, {\bf f}_k, x_1) T$, by Proposition~\ref{description of In}(\ref{description of In three}), for any prime $\mathfrak q \not = \mathfrak m $, $(({\mathcal I}_n , {\bf f}_k, x_1)T)_{\mathfrak q} = T$. This implies that ${\displaystyle \operatorname{Supp}_T \left( \f{T} { ({\mathcal I}_{n}, {\bf f}_k, x_1)T } \right) =\{\mathfrak m\} }$.
Hence we get \beq \label{multiplicity inequality 2} \nonumber && \ell_R \left( \f{R} { ({\mathcal I}_{n},x_1, {\bf f}_k)R }\right)\\\nonumber & = & \ell_R \left( \f{T} {( {\mathcal I}_{n} ,x_1, {\bf f}_k) T} \otimes_T R \right) \\ \nonumber &=& \ell_T \left( \f{T} {( {\mathcal I}_{n}, x_1, {\bf f}_k) T} \right) \hspace{3.68in} \mbox{[Lemma~\ref{comparing lengths}]} \\ \nonumber &=& \ell_{\Tprime} \left( \f{\Tprime} { ({\mathcal I}_{n} , {\bf f}_k) \Tprime } \right) \\ \nonumber &=& \ell_{\Tprime} \left( \f{\Tprime} { LI(( {\mathcal I}_{n}, {\bf f}_k)\Tprime )} \right) \hspace{3.05in} \mbox{ \cite[Proposition~2.1]{bayer-stillmann}} \\ \nonumber &\leq& \ell_{\Tprime} \left( \f{\Tprime} { { I}_{n} + ( x_2^2, \ldots, x_{k+1}^{k+1} )} \right) \hspace{2.1in} \mbox{ [Propositions~\ref{description of In in S} and \ref{proposition ji}(\ref{proposition ji one})] } \\ \nonumber & =& \sum_{i=0}^{k} (-1)^i \left[ \sum_{1 \leq j_1 < \cdots < j_i \leq k} \ell \left( \f{\Tprime} {(I_{n - (j_1 + \cdots + j_{i} )} )\Tprime} \right) \right] \hspace{1.65in} \mbox{ [Proposition~\ref{computing full length}]} \\ \nonumber &=& d \sum_{i=0}^{k} (-1)^i \left[ \sum_{1 \leq j_1 < \cdots < j_i \leq k} \binom{n-(j_1+\cdots+j_i)+d-2}{d-1} \right] \hspace{1.3in} \mbox{[Theorem \ref{main theorem 1}]} \\ \nonumber &=&d \sum_{i=0}^{k} (-1)^{i} \left[ \sum_{1 \leq j_1 < \cdots < j_i \leq k} \ell \left( \f{R_{\mathfrak p}}{\mathfrak p^{n - [{j_1} + \cdots +{j_i} ]} R_{\mathfrak p}} \right)\right]\\ &=& e \left(x_1; \f{R} {\mathfrak p^{(n)} + ({\bf f}_k)} \right) \hspace*{1.7in} \mbox{[\cite[Proposition 5.3(3)]{goto} and Corollary \ref{multiplicity}\eqref{multiplicity-one}]}. \eeq Hence equality holds in \eqref{multiplicity inequality 1} and \eqref{multiplicity inequality 2}, which proves the theorem. \end{proof} We end this section by explicitly describing the generators of $\mathfrak p^{(n)}$ for all $n \geq 1$. We also describe the leading ideal $LI(\mathfrak p^{(n)})T^{\prime}$. \begin{theorem} \label{symbolic power} \been \item \label{symbolic power one} For all $n \geq 1$, $\mathfrak p^{(n)} = {\mathcal{I}}_n R$. \item \label{symbolic power two} For all $n \geq d$, ${\displaystyle \mathfrak p^{(n)} = \sum_{a_1 + 2a_2+ \cdots + (d-1) a_{d-1}=n} \mathfrak p^{a_1} (\mathfrak p^{(2)})^{a_2} \cdots (\mathfrak p^{(d-1)})^{a_{d-1}}}$. \item \label{symbolic power three} For all $ n \geq 1$, $LI( \mathfrak p^{(n)} T^{\prime}) = I_n = LI( {\mathcal I}_n \Tprime)$. \eeen \end{theorem} \begin{proof} (1) By Theorem~\ref{main theorem 1} we get \beqn \ell \left( \f{R} {\mathfrak p^{(n) } + (x_1)}\right) = \ell \left( \f{R} { {\mathcal I}_n R + (x_1)}\right). \eeqn This implies that $\mathfrak p^{(n)} = {\mathcal I}_n R + x_1 ( \mathfrak p^{(n)} :(x_1))$. As $x_1$ is a nonzerodivisor on $R/ \mathfrak p^{(n)}$, $( \mathfrak p^{(n)} :(x_1)) = \mathfrak p^{(n)}$. By Nakayama's lemma, $\mathfrak p^{(n)} = {\mathcal I}_nR$.
(2) For all $n \geq d$, \beqn \mathfrak p^{(n) } &=& {\mathcal I}_n R\\ &=& \sum_{a_1 + 2a_2+ \cdots + (d-1) a_{d-1}=n} {\mathcal J}_1^{a_1}{\mathcal J}_2^{a_2} \cdots {\mathcal J}_{d-1}^{a_{d-1}} R \\ &\subseteq & \sum_{a_1 + 2a_2+ \cdots + (d-1) a_{d-1}=n} \mathfrak p^{a_1} (\mathfrak p^{(2)})^{a_2} \cdots (\mathfrak p^{(d-1)})^{a_{d-1}} \hspace{.5in} \mbox{[by Proposition~\ref{description of In}(\ref{description of In one})]}\\ &\subseteq& \mathfrak p^{(n)}. \eeqn Hence equality holds. (3) The proof follows from Proposition~\ref{description of In in S} and Theorem~\ref{main theorem 1}. \end{proof} \section{Applications} \subsection{Cohen-Macaulayness and Gorensteinness of symbolic blowup algebras} In \cite{goto-nishida-shimoda}, Goto et al. studied the Gorenstein property of the symbolic Rees algebra. If $d=3$, then $\operatorname{ht}(\mathfrak p)= 2$ and hence, if $\mathcal R_s({\mathfrak p})$ is Cohen-Macaulay, then it is also Gorenstein (\cite[Corollary~3.4]{simis-trung}). From \cite[Theorem~6.7(4)]{goto} and Theorem~\ref{bound on length}, it follows that $G_s({\mathfrak p})$ is Cohen-Macaulay. In this paper we give an alternate argument for $G_s({\mathfrak p})$ to be Cohen-Macaulay. In fact, we show that $G_s(\mathfrak p):= \oplus_{n \geq 0} \mathfrak p^{(n)} / \mathfrak p^{(n+1)}$ is Gorenstein for all $d \geq 2$ (Theorem~\ref{cm-ass-gr}). We also prove that $\mathcal R_s({\mathfrak p})$ is Cohen-Macaulay for all $d \geq 2$ (Theorem~\ref{cm-rees}(\ref{cm-rees-one})). Moreover, $\mathcal R_s({\mathfrak p})$ is Gorenstein if and only if $d=3$ (Theorem~\ref{cm-rees}(\ref{cm-rees-two})). Put $f_0 = x_1$. Let the $f_i$'s be as in \eqref{definition of f_i} and let $f_{i}^{\star}$ denote the image of $f_i$ in $\mathfrak p^{(i)} / \mathfrak p^{(i+1)} $. In \cite[Proposition~5.3]{goto}, Goto showed that ${\bf f}_{d-1}$ is a homogeneous system of parameters in $G_s({\mathfrak p})$. In Theorem~\ref{cm-ass-gr}, we show that $f_0^{\star}, {\bf f}_{d-1}^{\star}$ is a regular sequence in $G_s({\mathfrak p})$. \begin{theorem} \label{cm-ass-gr} Let $d\geq 2$. Then \been \item For all $d \geq 2$, $f_0^{\star}, {\bf f}_{d-1}^{\star}$ is a regular sequence in $G_s({\mathfrak p})$. \item $G_{s}(\mathfrak p)$ is Gorenstein. \eeen \end{theorem} \begin{proof} We first show that $G_{s}(\mathfrak p)$ is Cohen-Macaulay. By induction on $k$, we prove that $f_0^{\star}, {\bf f}_{k}^{\star}$ is a regular sequence in $G_{s}(\mathfrak p)$ for all $k=0, \ldots, d-1$. Let $k=0$. Then, as $x_1$ is a nonzerodivisor on $R/ \mathfrak p^{(n)}$ for all $n$, we conclude that $f_0^{\star}$ is a nonzerodivisor in $G_{s}(\mathfrak p)$. Now let $k \geq 1$ and assume that $f_0^{\star}, {\bf f}_{k-1}^{\star}$ is a regular sequence in $G_{s}(\mathfrak p)$. Then \beqn \f{G_s(\mathfrak p) }{ ({f_0}^{\star}, {\bf f}_{k-1}^{\star}) } &\cong& \bigoplus_{n \geq 0} \f{\mathfrak p^{(n)}} { \mathfrak p^{(n+1)} + \sum_{j=0}^{k-1} {f_j} \mathfrak p^{(n-j)}} \cong \bigoplus_{n \geq 0} \f{ \mathfrak p^{(n)} + (f_0, {\bf f}_{k-1})} { \mathfrak p^{(n+1)} + (f_0, {\bf f}_{k-1})}. \eeqn Hence, to show that $f_k^{\star}$ is a nonzerodivisor on ${\displaystyle \f{G_{s}(\mathfrak p)}{({f_0}^{\star}, {\bf f}_{k-1}^{\star})}}$ it is enough to show that $((\mathfrak p^{(n+1)}, f_0, {\bf f}_{k-1} ) : (f_k)) = (\mathfrak p^{(n+ 1-k)}, f_0, {\bf f}_{k-1} )$ for all $n \geq k$.
Since \beqn && \ell \left( \f{R} {((\mathfrak p^{(n+1)}, f_0, {\bf f}_{k-1} ) : (f_k))} \right) \\ &=& \ell \left( \f{R}{(\mathfrak p^{(n+1)}, f_0, {\bf f}_{k-1} )} \right) - \ell \left( \f{R}{(\mathfrak p^{(n+1)}, f_0, {\bf f}_{k} )} \right) \\ &=& \ell \left( \f{T^{\prime}}{ I_{n+1} + ( x_2^2, \ldots, x_{k}^{k})}\right) - \ell \left( \f{T^{\prime}}{ I_{n+1} + ( x_2^2, \ldots, x_{k+1}^{k+1})}\right) \hspace{.3in} \mbox{[Theorem~\ref{bound on length}]}\\ &=& \ell \left( \f{T^{\prime}}{ (I_{n+1} + ( x_2^2, \ldots, x_{k}^{k})) : (x_{k+1}^{k+1} ) }\right)\\ &=& \ell \left( \f{T^{\prime}}{ I_{n+1-k} + ( x_2^2, \ldots, x_{k}^{k})}\right) \hspace{2.1in} \mbox{[Proposition~\ref{symbolic ideal quotients} and \cite[Proposition~1.14]{ene-herzog}]}\\ &=& \ell \left( \f{R} {(\mathfrak p^{(n+1-k)}, f_0, {\bf f}_{k-1} )} \right) \hspace{2.3in} \mbox{[Theorem~\ref{bound on length}]}, \eeqn we get $( (\mathfrak p^{(n+1)}, f_0, {\bf f}_{k-1} ) : (f_k)) = (\mathfrak p^{(n+1-k)}, f_0, {\bf f}_{k-1} )$. This implies that $f_k$ is a nonzerodivisor in $G_{s}({\mathfrak p})/ ({f_0}^{\star}, {\bf f}_{k-1}^{\star}) $. Hence $G_{s}({\mathfrak p})$ is Cohen-Macaulay. As $G(\mathfrak p R_{\mathfrak p})$ is a polynomial ring, it is Gorenstein. Hence by Theorem~\ref{bound on length} and \cite[Corollary~5.8]{goto}, $G_{s}({\mathfrak p})$ is Gorenstein. \end{proof} \begin{theorem} \label{cm-rees} Let $d \geq 2$. Then \been \item \label{cm-rees-zero} $\mathcal R_s(\mathfrak p)= R[\mathfrak p t, {\mathcal J_2}t^2, \ldots, {\mathcal J}_{d-1}t^{d-1}]$. \item \label{cm-rees-one} $\mathcal R_s(\mathfrak p)$ is Cohen-Macaulay. \item \label{cm-rees-two} $\mathcal R_s(\mathfrak p)$ is Gorenstein if and only if $d=3$. \eeen \end{theorem} \begin{proof} (\ref{cm-rees-zero}) The proof follows from Theorem~\ref{symbolic power}(\ref{symbolic power two}). (\ref{cm-rees-one}) By \cite[Theorem~6.7]{goto}, it suffices to show that ${\displaystyle \f{R}{\mathfrak p^{(n)}+({\bf f}_{d-1})}}$ is Cohen-Macaulay for $1 \leq n \leq \binom{d-1}{2}$. This holds true by Theorem~\ref{bound on length}. (\ref{cm-rees-two}) By \cite[Lemma~6.1]{goto}, the $a$-invariant of $G_{s}(\mathfrak p)$ is $a(G_{s}(\mathfrak p))= -(d-1)$. By \cite[Theorem~6.6]{goto} and Theorem~\ref{cm-ass-gr}, $\mathcal R_s(\mathfrak p)$ is Gorenstein if and only if $d=3$. \end{proof} \subsection{Computation of resurgence} In \cite{BH} C. Bocci and B. Harbourne defined the {\it resurgence} of an ideal $I$ in $R$ as $$ {\displaystyle \rho(I):=\sup\left\{\frac{n}{r}: I^{(n)} \nsubseteq I^r \right\}.} $$ We can also compute the resurgence in the following way: for any ideal $I \subseteq R$ let $\rho_n(I):=\min\{r: I^{(n)} \nsubseteq I^r\}.$ Then $$ {\displaystyle \rho(I)=\sup\left\{\frac{n}{\rho_n(I)}: n \geq 1\right\}.} $$ In this subsection we explicitly describe the resurgence of $\mathfrak p=I_{{\mathcal C}(n_1, n_2, n_3)}.$ From \eqref{matrix of p} we have $ X = {\displaystyle \left(\begin{array}{ccc} x_1 & x_2 & x_3 \\ x_2 & x_3 & x_1^{m+1} \\ x_3 & x_1^{m+1} & x_1^m x_2\end{array}\right)} .$ Put \beq \label{def of delta} \Delta_1 := \det( X_{2, (2,3)}), \hspace{.2in} \Delta_2 := \det( X_{2, (1,3)}) \hspace{.2in} \mbox{ and } \Delta_3 := \det( X_{2, (1, 2)}). \eeq Let $f_2$ be as in \eqref{definition of f_i}. \begin{lemma} \label{f_2} With the above notation: \been \item \label{x_if_2} For all $i=1,2,3$, $x_i f_2 \in \mathfrak p^2$.
\item \label{f_2^2} $f_2^2 \in \mathfrak p^3$. \eeen \end{lemma} \begin{proof} \eqref{x_if_2} One can verify that \beqn x_1 f_2 &=& -\Delta_2^2 + \Delta_1 \Delta_3 \\ x_2 f_2 &=& - x_1^m \Delta_3^2 - \Delta_1\Delta_2 \\ x_3 f_2 &= & - \Delta_1^2 - x_1^m \Delta_2 \Delta_3. \eeqn As $\Delta_j \in \mathfrak p$ for all $j=1,2,3$, we get $x_i f_2 \in \mathfrak p^2$ for all $i=1,2,3$. \eqref{f_2^2} We have \beqn f_2^2& =& (x_3 \Delta_1 - x_1^{m+1} \Delta_2+x_1^mx_2 \Delta_3)f_2\\ & =& \Delta_1(x_3 f_2) - \Delta_2 (x_1^{m+1}f_2) + \Delta_3 (x_1^mx_2f_2) \\ & \in & \mathfrak p \mathfrak p^2 \hspace{3.2in} \mbox{[from \eqref{x_if_2}]} \\ &=& \mathfrak p^3. \eeqn \end{proof} \begin{proposition} \label{Prop:Contain} Let $k \geq 0$. Then \beqn \rho_n(\mathfrak p) = \begin{cases} 3k + 1 & \mbox{ if } n = 4k\\ 3k + 2 & \mbox{ if } n = 4k+1\\ 3k + 2 & \mbox{ if } n = 4k+2\\ 3k + 3 & \mbox{ if } n = 4k+3.\\ \end{cases} \eeqn \end{proposition} \begin{proof} From Theorem~\ref{symbolic power}\eqref{symbolic power one} and Theorem~\ref{symbolic power}\eqref{symbolic power two} we get \begin{eqnarray} \label{Eqn:f_2} \mathfrak p^{(2)} = \mathfrak p^2 + (f_2), \hspace{.2in} \mathfrak p^{(2n)}=(\mathfrak p^{(2)})^n \hspace{.2in} \mbox{and} \hspace{.2in} \mathfrak p^{(2n+1)} =\mathfrak p \mathfrak p^{(2n)} . \end{eqnarray} From \eqref{Eqn:f_2} and Lemma~\ref{f_2} we get \beqn \mathfrak p^{(4k)} &= & (\mathfrak p^{(4)})^k = ((\mathfrak p^2+(f_2))^2)^k = (\mathfrak p^4 + f_2 \mathfrak p^2 + (f_2)^2)^k \subseteq (\mathfrak p^{3})^k = \mathfrak p^{3k}\\ \mathfrak p^{(4k+1)} &=& \mathfrak p \mathfrak p^{(4k)} \subseteq \mathfrak p \mathfrak p^{3k} = \mathfrak p^{3k+1} \\ \mathfrak p^{(4k+2)} &=& \mathfrak p^{(2)}\mathfrak p^{(4k)} \subseteq \mathfrak p \mathfrak p^{3k} = \mathfrak p^{3k+1} \\ \mathfrak p^{(4k+3)} &=& \mathfrak p \mathfrak p^{(2)} \mathfrak p^{(4k)} \subseteq \mathfrak p^2 \mathfrak p^{3k} = \mathfrak p^{3k+2}. \eeqn As $f_2 \equiv x_3^3(\mod~x_1)$, $\Delta_3 \equiv x_3^2(\mod~x_1)$ and $\mathfrak p \equiv (x_2, x_3)^2(\mod~x_1)$, \beqn f_2^{2k} \equiv x_3^{6k}&\in& \mathfrak p^{(4k)} \setminus \mathfrak p^{3k+1} (\mod x_1)\\ \Delta_1 f_2^{2k} \equiv x_3^{6k+2} &\in& \mathfrak p^{(4k+1)} \setminus \mathfrak p^{3k+2} (\mod x_1)\\ f_2^{2k+1} \equiv x_3^{6k+3} &\in& \mathfrak p^{(4k+2)} \setminus \mathfrak p^{3k+2} (\mod x_1)\\ \Delta_1 f_2^{2k+1} \equiv x_3^{6k+5} &\in& \mathfrak p^{(4k+3)} \setminus \mathfrak p^{3k+3} (\mod x_1). \eeqn This completes the proof. \end{proof} \begin{theorem} \label{resurgance} $\rho(\mathfrak p)=\frac{4}{3}.$ \end{theorem} \begin{proof} By Proposition \ref{Prop:Contain}, \[ \rho(\mathfrak p)=\sup\left\{\frac{4k}{3k+1},\frac{4k+1}{3k+2},\frac{4k+2}{3k+2},\frac{4k+3}{3k+3} : k \geq 0\right\} =\frac{4}{3}. \] \end{proof} \subsection{Waldschmidt Constant} Consider the polynomial ring $T = \Bbbk[x_1, x_2, x_3]$ with weights $d_i= \operatorname{wt}(x_i)$, where $d_1 = 3$, $d_2 = 3+m$ and $d_3 = 3 + 2m$. With these weights, $\mathfrak p^n$ and $\mathfrak p^{(n)}$ are weighted homogeneous ideals. For any weighted homogeneous ideal $I \subseteq T$, let $\alpha(I):= \min \{ n | I_n \not = 0 \}$. Recall that the Waldschmidt constant is defined as $$ \gamma(I) = \limm~\f{\alpha( I^{(n)})}{n}. $$ In this section we compute $\alpha(\mathfrak p)/ \gamma(\mathfrak p)$ and compare it with $\rho(\mathfrak p)$.
We obtain results similar to those in \cite[Theorem~1.2.1]{BH} and \cite[Lemma~2.3.2]{BH}. \begin{theorem} \label{waldschmidt constant} \been \item \label{waldschmidt constant one} $\alpha(\mathfrak p) = 2m +6$. \item \label{waldschmidt constant two} $ {\displaystyle \gamma(\mathfrak p) = \begin{cases} 15/2 & \mbox{ if } m=1 \\ 2m+6 & \mbox{ if } m >1. \end{cases} } $ \eeen \end{theorem} \begin{proof} Note that $\mathfrak p = (\Delta_1, \Delta_2, \Delta_3)$ where $\Delta_1$, $\Delta_2$ and $\Delta_3$ are as defined in (\ref{def of delta}). Then $\operatorname{deg}( \Delta_1) = 4m+6$, $\operatorname{deg}( \Delta_2) = 3m+6$, $\operatorname{deg}(\Delta_3) = 2m+6$, and $\operatorname{deg}(f_2) = 6m+9$. Hence $\alpha(\mathfrak p) = 2m + 6$. We now compute $\alpha (\mathfrak p^{(n)})$. From (\ref{Eqn:f_2}) we get \beqn \alpha( \mathfrak p^{(2n)} )&= & \begin{cases} n\operatorname{deg}(f_2) =15n & \mbox{ if } m=1\\ 2n \operatorname{deg}( \Delta_3) = 2n (2m + 6) & \mbox { if } m >1 \end{cases} \\ \alpha( \mathfrak p^{(2n + 1)}) &= & \begin{cases} 15n + 8 = n \operatorname{deg}(f_2) + \operatorname{deg}(\Delta_3)& \mbox{ if } m=1\\ (2n+1) \operatorname{deg}( \Delta_3) =(2n+1) (2m + 6 ) & \mbox { if } m >1 \end{cases}. \eeqn Hence \beqn \gamma(\mathfrak p) = \limm~\f{\alpha( \mathfrak p^{(n)})}{n} = \begin{cases} 15/ 2 & \mbox{ if } m=1\\ 2m + 6 & \mbox{ if } m >1. \end{cases} \eeqn \end{proof} \begin{theorem} \label{lb for resurgence} \beqn 1 \leq \f{\alpha(\mathfrak p)}{ \gamma(\mathfrak p)} \leq \rho(\mathfrak p). \eeqn \end{theorem} \begin{proof} By Theorem~\ref{waldschmidt constant}, \beqn \f{\alpha(\mathfrak p)}{\gamma(\mathfrak p) } =\begin{cases} \f{8 }{15/2} = \f{16}{15} & \mbox{ if } m=1\\ \f{2m+6}{2m+6} = 1 & \mbox{ if } m> 1. \end{cases} \eeqn By Theorem~\ref{resurgance}, the result follows. \end{proof} \subsection{Regularity} In \cite{cut-kurano}, S.~D.~Cutkosky and K. Kurano studied the regularity of saturated ideals in a weighted projective space. In this subsection we consider the polynomial ring $T = \Bbbk[x_1, x_2, x_3]$ with weights $d_i= \operatorname{wt}(x_i)$, where $d_1 = 3$, $d_2 = 3+m$ and $d_3 = 3 + 2m$. With these weights, $\mathfrak p^{(n)}$ is a weighted homogeneous ideal. We compute the regularity of $\mathfrak p^{(n)}$ for all $n \geq 1$. We begin with some basic results comparing $\mathfrak p^{(n)}T^{\prime}$ and $I_nT^{\prime}$. \begin{lemma} \label{ideals in tprime} For all $n \geq 1$, \been \item \label{ideals in tprime one} $\mathfrak p^{(n)} T^{\prime} = I_n T^{\prime}$. \item \label{ideals in tprime two} $I_{2n} \Tprime = I_2^n \Tprime$. \item \label{ideals in tprime three} $I_{2n+1} \Tprime = I_2I_{2n} \Tprime$. \eeen \end{lemma} \begin{proof} Since $\mathcal J_i T^{\prime}= J_i T^{\prime}$ for $i=1,2$, we get $\mathcal I_n T^{\prime}= I_n T^{\prime}$ for all $n \geq 1$. Hence from Theorem~\ref{symbolic power}(\ref{symbolic power one}), $\mathfrak p^{(n)} T^{\prime} = \mathcal I_n T^{\prime} = I_n T^{\prime}$. (2) and (3) follow from (1) and (\ref{Eqn:f_2}). \end{proof} \begin{lemma} \label{reg comparision} For all $n \geq 1$, $\operatorname{reg}( T/ \mathfrak p^{(n)} T ) = \operatorname{reg}(\Tprime/ I_n ) $.
\end{lemma} \begin{proof} As $x_1$ is a nonzerodivisor on $T/ \mathfrak p^{(n)}$ and $T / I_n T$, \beqn \operatorname{reg} \left( \f{\Tprime } { I_{n} } \right) &=& \operatorname{reg} \left( \f{T } { I_{n} } \right) \\ &=& \operatorname{reg} \left( \f{T } { I_{n} + (x_1) } \right) - 2 \hspace{.2in} \mbox{\cite[Remark~4.1]{chardin}}\\ &=& \operatorname{reg} \left( \f{T }{ \mathfrak p^{(n)} + (x_1)} \right) - 2 \\ &=& \operatorname{reg} \left( \f{T} { \mathfrak p^{(n)}} \right) \hspace{.1in} \mbox{\cite[Remark~4.1]{chardin}}. \eeqn Let $F_{\bullet}$ be a minimal free resolution of $T^{\prime} / \mathfrak p^{(n)} T^{\prime}$. Since $T$ is a free $T^{\prime}$-module, $F_{\bullet} \otimes_{T^{\prime}} T$ is a minimal free resolution of $T^{\prime} / \mathfrak p^{(n)} T^{\prime} \otimes_{T^{\prime}} T \cong T / \mathfrak p^{(n)}T$. Hence $\operatorname{reg}(T/ \mathfrak p^{(n)}T )= \operatorname{reg}(T^{\prime} / \mathfrak p^{(n)} T^{\prime})$. By Lemma~\ref{ideals in tprime}(1), $\mathfrak p^{(n)} T^{\prime} = I_n T^{\prime}$. \end{proof} It follows from Lemma~\ref{reg comparision} that in order to compute the regularity of $ T/ \mathfrak p^{(n)}$, it is enough to compute the regularity of $ {\Tprime}/{I_{n} } $. \begin{lemma} \label{regularity modulo x_2^2} Let $n \geq 1$. Then \beqn \operatorname{reg}\left( \f{T^{\prime}}{I_{n} + ( x_2^2) } \right) = \begin{cases} \f{3d_3}{2} n + 2 d_2-2 & \mbox{ if $n$ is even}\\ \f{3d_3}{2} n + d_2 + \f{d_3}{2} -2 & \mbox{ if $n$ is odd} . \end{cases} \eeqn \end{lemma} \begin{proof} Let $n=2r$, where $r \geq 1$. Then by Lemma~\ref{ideals in tprime}(\ref{ideals in tprime two}), \beq \label{I_n even} (I_{2r} + (x_2^2) ) \Tprime = I_{2}^r \Tprime+ (x_2^2) \Tprime = (x_2^4, x_2^3 x_3, x_2^2x_3^2, x_3^3)^r \Tprime+ (x_2^2) \Tprime = (x_2^2, x_3^{3r}) \Tprime . \eeq Hence \beqn \operatorname{reg}\left( \f{\Tprime}{I_{2r} + (x_2^2)} \right) = 3rd_3 + 2d_2 -2 = \f{3d_3}{2} n + 2 d_2-2. \eeqn Let $n = 2r-1$, where $r \geq 1$. Then by Lemma~\ref{ideals in tprime}(\ref{ideals in tprime three}) we get \beqn (I_{2r-1} + (x_2^2)) \Tprime &=& (I_1 + (x_2^2)) (I_{2(r-1)} + ( x_2^2)) \Tprime+ (x_2^2)\Tprime\\ &=& ( x_2^2, x_2 x_3, x_3^{ 2}) ( x_2^2, x_3^{3(r-1)}) \Tprime + (x_2^2 ) \Tprime \hspace{.5in} \mbox{[by (\ref{I_n even})]}\\ &=& ( x_2^2, x_2 x_3^{3r-2}, x_3^{ 3r-1}) \Tprime. \eeqn By the Hilbert-Burch theorem, the minimal free resolution of $(I_{2r-1} + (x_2^2))T^{\prime}$ is $$ \xymatrix@C=18pt{ 0\ar[r] & {\begin{array}{c} T^{\prime}[ - 2d_2 - (3r-2)d_3 ] \\ \oplus \\ T^{\prime}[ -d_2 - (3r-1) d_3]\end{array}} \ar[rrr]^(0.5){ \left( \begin{array}{cc} x_3^{3r-2} & 0\\ -x_2 & -x_3\\ 0 & x_2 \end{array} \right)} &&& { \begin{array}{c} T^{\prime}[-2d_2] \\ \oplus \\ T^{\prime}[ - d_2 - (3r-2)d_3 ] \\ \oplus \\ T^{\prime}[- (3r-1) d_3]\end{array}} \ar[r] & T^{\prime} \ar[r] & {\displaystyle \f{T^{\prime}}{I_{2r-1} + (x_2^2)}} \ar[r] & 0. } $$ This gives \beqn \operatorname{reg} \left(\f{T^{\prime}}{I_{2r-1} + (x_2^2)} \right) = (3r-1)d_3 + d_2 -2 = \f{3 d_3}{2} n + d_2 + \f{d_3}{2} -2. \eeqn \end{proof} \begin{lemma} \label{lemma mod x_3} For all $n \geq 1$, \beqn \operatorname{reg}\left( \f{T^{\prime}}{I_{2n} + ( x_3^3) } \right) = 2d_2(2n) -2d_2+ 3d_3-2.
\eeqn \end{lemma} \begin{proof} By Lemma~\ref{ideals in tprime}(\ref{ideals in tprime two}) we get \beqn I_{2n} + (x_3^3) = I_2^n + (x_3^3) = (x_2, x_3)^{4n} + (x_3^3) = (x_2^{4n}, x_2^{4n-1}x_3, x_2^{4n-2} x_3^2, x_3^3). \eeqn Hence by the Hilbert-Burch theorem the minimal free resolution of $ I_{2n} + (x_3^3)$ is \small $$ \xymatrix@C=25pt{ 0\ar[r] & {\begin{array}{c} T^{\prime}[ - (4n-1 )d_2 -2 d_3 ] \\ \oplus \\T^{\prime}[ -4n d_2 - d_3] \\ \oplus \\ T^{\prime}[-(4n-2) d_2 - 3 d_3] \end{array}} \ar[rrr]^(0.5){ \left( \begin{array}{ccc} 0 & x_3 &0\\ x_3 & -x_2 & 0\\ -x_2 & 0 &- x_3\\ 0& 0& x_2^{4n-2} \end{array} \right)} &&& { \begin{array}{c} T^{\prime}[-4n d_2] \\ \oplus \\ T^{\prime}[ -(4n-1) d_2 - d_3] \\ \oplus \\ T^{\prime}[- (4n-2) d_2 - 2 d_3]\\ \oplus \\T^{\prime}[ -3 d_3] \end{array}} \ar[r] & T^{\prime} \ar[r] & {\displaystyle \f{T^{\prime}}{I_{2n} + (x_3^3)}} \ar[r] & 0. } $$ \normalsize This gives $\operatorname{reg}( T/ I_{2n} + (x_3^3)) = 3 d_3 + (4n-2) d_2 - 2 = 2d_2(2n) -2d_2+ 3d_3-2 $. \end{proof} \begin{proposition} \label{regularity even} Let $n \geq 1$. Then \beqn \operatorname{reg} \left( \f{\Tprime}{I_{2n} }\right) &=& \begin{cases} (2d_2)(2n) - 2d_2 + 3d_3 -2 & \mbox{ if } m=1,\\ \f{3d_3}{2} (2n) + 2d_2-2& \mbox{ if } m \geq 2 . \end{cases} \eeqn \end{proposition} \begin{proof} For all $n \geq 1$, the sequence \normalsize $$ \xymatrix@C=20pt{ 0 \ar[r] &{\displaystyle \f{\Tprime}{I_{2n-2} } [-3d_3]} \ar[r]^{.x_3^3} & {\displaystyle \f{\Tprime}{I_{2n}}} \ar[r] & {\displaystyle \f{\Tprime}{I_{2n} + (x_3^3)}} \ar[r] & 0\\ } $$ is exact by Proposition~\ref{symbolic ideal quotients}. Hence \beqn \operatorname{reg}\left( \f{\Tprime}{I_{2n}}\right) &=& \max\left\{ \operatorname{reg}\left( \f{\Tprime}{I_{2n-2}}\right) + 3 d_3 , \operatorname{reg}\left( \f{\Tprime}{I_{2n} + (x_3^3) }\right) \right\}\\ &=& \max\left\{ \operatorname{reg}\left( \f{\Tprime}{I_{2n-4}}\right) + 6 d_3 , \operatorname{reg}\left( \f{\Tprime}{I_{2n-2} + (x_3^3) } \right) + 3 d_3 , \operatorname{reg}\left( \f{\Tprime}{I_{2n} + (x_3^3) }\right) \right\}\\ &=& \vdots\\ &=& \max\left\{ \operatorname{reg}\left( \left. \f{\Tprime}{I_{2n-2i} + (x_3^3)}\right) + 3i d_3 \right| i=0, \ldots, n-1 \right\}\\ &=& \max\left\{ \left. 2d_2(2n-2i) -2d_2 + 3d_3 -2 + 3i d_3\right| i=0, \ldots, n-1 \right\} \mbox{[by Lemma~\ref{lemma mod x_3}]}\\ &=& \max\left\{ \left. 4nd_2 - 2d_2 + 3d_3 -2 + i(-4d_2 + 3d_3) \right| i=0, \ldots, n-1 \right\} \\ &=& \begin{cases} (2d_2)(2n) - 2d_2 + 3d_3 -2 & \mbox{ if } m=1,\\ (2d_2)(2n) - 2d_2 + 3d_3 -2 + (n-1)(-4d_2 + 3d_3)& \mbox{ if } m \geq 2 \end{cases}\\ &=& \begin{cases} (2d_2)(2n) - 2d_2 + 3d_3 -2 & \mbox{ if } m=1,\\ \f{3d_3}{2} (2n) + 2d_2-2& \mbox{ if } m \geq 2 . \end{cases} \eeqn \end{proof} \begin{proposition} \label{regularity odd} Let $n \geq 1$. Then \beqn \operatorname{reg} \left( \f{\Tprime}{I_{2n+1} }\right) = \begin{cases} (2d_2) (2n+1) - 2d_2 + 3d_3 -2 & \mbox{ if } m=1\\ \f{3d_3}{2} (2n+1) + 4d_2 -\f{3d_3}{2} -2 & \mbox{ if } m=2 \\ \f{3d_3}{2} (2n+1) +d_2 + \f{d_3}{2} -2 & \mbox{ if } m \geq 3 \end{cases}.
\eeqn \end{proposition} \begin{proof} For all $n \geq 1$, the sequence \normalsize $$ \xymatrix@C=20pt{ 0 \ar[r] &{\displaystyle \f{\Tprime}{I_{2n}}[-2d_2] } \ar[r]^{.x_2^2} & {\displaystyle \f{\Tprime}{I_{2n+1}}} \ar[r] & {\displaystyle \f{\Tprime}{I_{2n+1} + (x_2^2)}} \ar[r] & 0\\ } $$ is exact by Proposition~\ref{symbolic ideal quotients}. Hence \beq \label{regularity odd and even} \operatorname{reg}\left( \f{\Tprime}{I_{2n+1}}\right) = \max\left\{ \operatorname{reg}\left( \f{\Tprime}{I_{2n}}\right) + 2 d_2 , \operatorname{reg}\left( \f{\Tprime}{I_{2n+1} + (x_2^2) }\right) \right\}. \eeq Using Proposition~\ref{regularity even} and Lemma~\ref{regularity modulo x_2^2} in (\ref{regularity odd and even}) we get \beqn \operatorname{reg}\left( \f{\Tprime}{I_{2n+1}}\right) &=&\begin{cases} \max\left\{(2d_2) (2n+1) - 2d_2 + 3d_3 -2, \f{3d_3}{2} (2n+1) +d_2 + \f{d_3}{2} -2 \right\} & \mbox{ if } m=1\\ \max\left\{ \f{3d_3}{2} (2n+1) + 4d_2 -\f{3d_3}{2} -2, \f{3d_3}{2} (2n+1) +d_2 + \f{d_3}{2} -2 \right\} & \mbox{ if } m\geq 2 \end{cases} \\ &=& \begin{cases} (2d_2) (2n+1) - 2d_2 + 3d_3 -2 & \mbox{ if } m=1\\ \f{3d_3}{2} (2n+1) + 4d_2 -\f{3d_3}{2} -2 & \mbox{ if } m=2 \\ \f{3d_3}{2} (2n+1) +d_2 + \f{d_3}{2} -2 & \mbox{ if } m \geq 3 \end{cases}. \eeqn \end{proof} \begin{theorem} \label{final regularity} \been \item \label{final regularity one} $\operatorname{reg} (T/ \mathfrak p)= d_2 + 2d_3-2$. \item \label{final regularity two} Let $n \geq 2$. \been \item If $m=1$, then $\operatorname{reg} (T/ \mathfrak p^{(n)}) = (2d_2)n -2d_2 + 3d_3-2$. \item If $m=2$, then ${ \displaystyle \operatorname{reg} \left( \f{T}{ \mathfrak p^{(n) } } \right) = \begin{cases} \f{3d_3}{2}n + 4d_2 -\f{3d_3}{2} -2& \mbox{ if $n$ is odd},\\ \f{3d_3}{2}n + 2d_2-2 & \mbox{ if $n$ is even} . \end{cases} } $ \item If $m \geq 3$, then ${\displaystyle \operatorname{reg} \left( \f{T}{ \mathfrak p^{(n) } } \right) = \begin{cases} \f{3d_3}{2}n + d_2 + \f{d_3}{2}-2& \mbox{ if $n$ is odd}\\ \f{3d_3}{2}n + 2d_2-2 & \mbox{ if $n$ is even} . \end{cases} }$ \eeen \eeen In particular, ${ \displaystyle \limnn \operatorname{reg}( (\mathfrak p^{n})^{sat} ) /n=\frac{3 e(T/ \mathfrak p)}{2} + 3m}$. \end{theorem} \begin{proof} By Lemma~\ref{reg comparision}, $\operatorname{reg} (T/ \mathfrak p^{(n)} )= \operatorname{reg} (\Tprime/ I_n) $. Since $I_1 + (x_2^2) = I_1$, (\ref{final regularity one}) follows from Lemma~\ref{regularity modulo x_2^2}. (\ref{final regularity two}) follows from Proposition~\ref{regularity even} and Proposition~\ref{regularity odd}. Finally, ${ \displaystyle \limnn \operatorname{reg}( (\mathfrak p^{n})^{sat} ) /n= \frac{3 d_3}{2} = \frac{3( e(T/ \mathfrak p) + 2m )}{2} = \frac{3 e(T/ \mathfrak p)}{2} + 3m}$. \end{proof} \begin{theorem} \label{ub for resurgence} $\rho(\mathfrak p) \leq \operatorname{reg}(\mathfrak p) / \gamma(\mathfrak p).$ \end{theorem} \begin{proof} By Theorem~\ref{final regularity} and Theorem~\ref{waldschmidt constant}(\ref{waldschmidt constant two}), \beqn \f{\operatorname{reg}(\mathfrak p) }{ { \gamma} (\mathfrak p)} &=& \begin{cases} {\displaystyle \f{13}{{15/2}} = \f{26}{15} \geq \f{4}{3} = \rho (\mathfrak p)} & \mbox{ if } m=1 \\ {\displaystyle \f{3m + (9/2)} {2m + 6} \geq \f{4}{3} = \rho (\mathfrak p) } & \mbox{ if } m \geq 2. \end{cases} \eeqn \end{proof} \end{document}
\begin{document} \title{Theta-vexillary signed permutations} \author{Jordan Lambert} \address{Department of Mathematics, Federal University of Juiz de Fora, Juiz de Fora 36036-900, Minas Gerais, Brazil} \email{[email protected]} \thanks{The author was supported by FAPESP Grant 2013/10467-3 and 2014/27042-8} \subjclass[2010]{Primary 05A05; Secondary 14M15} \keywords{Permutations, Schubert varieties} \begin{abstract} Theta-vexillary signed permutations are elements in the hyperoctahedral group that index certain classes of degeneracy loci of type B and C. These permutations are described using triples of $s$-tuples of integers subject to specific conditions. The objective of this work is to present different characterizations of theta-vexillary signed permutations, describing them in terms of corners in the Rothe diagram and pattern avoidance. \end{abstract} \maketitle \section{Introduction} A permutation $w$ is called vexillary if and only if it avoids the pattern $[2\ 1\ 4\ 3]$, i.e., there are no indices $a<b<c<d$ such that $w(b)<w(a)<w(d)<w(c)$. Vexillary permutations were introduced by Lascoux and Sch\"utzenberger \cite{LS} in the 1980s. In the 1990s, Fulton \cite{Fu91} obtained other equivalent characterizations of vexillary permutations: in addition to the pattern avoidance criterion, he gave one in terms of the essential set of a permutation, among others. Since $S_{n}$ is the Weyl group of type A, the vexillary permutations represent Schubert varieties in some flag manifold for the Lie group $G=\mathrm{Sl}(n,\mathbb{C})$. A few years later, the notion of vexillary permutations in the hyperoctahedral group was introduced by Billey and Lam \cite{BL}. Recently, Anderson and Fulton \cite{AF12, AF14} provided a different characterization for vexillary signed permutations. They defined them through a specific triple of integers: given three $s$-tuples of positive integers $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$, where $\mathbf{k}=(0<k_{1}<\cdots< k_{s})$, $\mathbf{p}=(p_{1}\geqslant\cdots \geqslant p_{s}>0)$, and $\mathbf{q}=(q_{1}\geqslant\cdots \geqslant q_{s}>0)$, satisfying $p_{i}-p_{i+1}+q_{i}-q_{i+1}>k_{i+1}-k_{i}$ for $1\leqslant i \leqslant s-1$, one constructs a signed permutation $w=w(\boldsymbol{\tau})$ (a small numerical instance of this condition is checked below). Since the hyperoctahedral group $\mathcal{W}_{n}$ can be included in the group $S_{2n+1}$, a signed permutation $w$ in $\mathcal{W}_{n}$ is vexillary if and only if its inclusion $\iota(w)$ in $S_{2n+1}$ is a vexillary permutation in the sense above. Anderson and Fulton in \cite{AF14} characterize vexillary signed permutations in terms of essential sets, pattern avoidance, and Stanley symmetric functions. In this work, we present a class of signed permutations called \emph{theta-vexillary signed permutations}. They are defined using a triple of integers $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$ in which we allow negative values in $\mathbf{q}$ and which satisfies eight different conditions; such a triple will be called a \emph{theta-triple}. The set of theta-vexillary signed permutations is relevant because it contains all vexillary signed permutations and all $k$-Grassmannian permutations, which are the ones associated to the Grassmannian Schubert varieties of type B and C. Theta-vexillary signed permutations have an important geometric interpretation in terms of degeneracy loci. For our purpose, it is easier to view the hyperoctahedral group as the Weyl group of type B.
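For instance, purely as an arithmetic illustration of the numerical condition above (and not of the construction of $w(\boldsymbol{\tau})$ itself, for which we refer to \cite{AF12, AF14}; the values are chosen here only for this check), take $s=2$, $\mathbf{k}=(1,2)$, $\mathbf{p}=(3,2)$ and $\mathbf{q}=(2,1)$. Then $$ p_{1}-p_{2}+q_{1}-q_{2}=(3-2)+(2-1)=2>1=k_{2}-k_{1}, $$ so $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$ satisfies the required inequality.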
Consider a vector bundle $V$ of rank $2n+1$ over $X$, equipped with a nondegenerate form and two flags of bundles $E_{\bullet}=(E_{p_{1}}\subset E_{p_{2}}\subset \cdots \subset E_{p_{s}}\subset V)$ and $F_{\bullet}=(F_{q_{1}}\subset F_{q_{2}}\subset \cdots \subset F_{q_{s}}\subset V)$ such that: for $q>0$, the subbundles $F_{q}$ are isotropic, of rank $n+1-q$; for $q<0$, $F_{q}$ is coisotropic, of corank $n+q$; and all the subbundles $E_{p}$ are isotropic, of rank $n+1-p$. The degeneracy locus of $\boldsymbol{\tau}$ is $$ \Omega_{\boldsymbol{\tau}}:=\{ x\in X \ |\ \dim(E_{p_{i}} \cap F_{q_{i}})\geqslant k_{i}, \mbox{ for } 1\leqslant i\leqslant s \}. $$ Anderson and Fulton in \cite{AF15} showed that if the triple $\boldsymbol{\tau}$ is subject to certain conditions, the cohomology class $[\Omega_{\boldsymbol{\tau}}]$ is given by the multi-theta-polynomial $\Theta_{\lambda(\boldsymbol{\tau})}$, whose coefficients are Chern classes of the vector bundles $E_{p_{i}}$ and $F_{q_{i}}$. The polynomial $\Theta_{\lambda(\boldsymbol{\tau})}$ derives from the theta-polynomials defined via raising operators by Buch, Kresch, and Tamvakis \cite{BKT} and inspired the name theta-vexillary signed permutations. The main result of this work provides two other ways to characterize theta-vexillary permutations. If a permutation $w$ in the Weyl group $\mathcal{W}_{n}$ of type B is represented as a matrix of dots in a $(2n+1)\times n$ array of boxes, the (Rothe) extended diagram is the subset of boxes that remains after striking out the boxes weakly south or east of each dot. The southeast (SE) corners in the extended diagram form the set of corners $\mathscr{C}(w)$. One characterization of theta-vexillary signed permutations is that the set of corners $\mathscr{C}(w)$ is the disjoint union of the set $\mathbb{N}E(w)$, composed of the corners that form a piecewise path going in the northeast direction, and the set $\mathscr{U}\! (w)$ of unessential corners. We also have a characterization via pattern avoidance. \begin{thm}\label{thm:main} Let $w$ be a signed permutation. The following are equivalent: \begin{enumerate} \item $w$ is theta-vexillary, i.e., there is a triple $\boldsymbol{\tau}$ such that $w=w(\boldsymbol{\tau})$; \item the set of corners $\mathscr{C}(w)$ is the disjoint union $$ \mathscr{C}(w)=\mathbb{N}E(w)\dot{\cup}\mathscr{U}\! (w); $$ \item $w$ avoids the following thirteen signed patterns $[\overline{1}\ 3\ 2]$, $[\overline{2}\ 3\ 1]$, $[\overline{3}\ 2\ 1]$, $[\overline{3}\ 2\ \overline{1}]$, $[2\ 1\ 4\ 3]$, $[2\ \overline{3}\ 4\ \overline{1}]$, $[\overline{2}\ \overline{3}\ 4\ \overline{1}]$, $[3\ \overline{4}\ 1\ \overline{2}]$, $[3\ \overline{4}\ \overline{1}\ \overline{2}]$, $[\overline{3}\ \overline{4}\ 1\ \overline{2}]$, $[\overline{3}\ \overline{4}\ \overline{1}\ \overline{2}]$, $[\overline{4}\ 1\ \overline{2}\ 3]$, and $[\overline{4}\ \overline{1}\ \overline{2}\ 3]$. \end{enumerate} \end{thm} This theorem is a consequence of Propositions \ref{prop:svexcore2} and \ref{prop:patavoid}, and it is similar to the corresponding result for vexillary signed permutations. It is interesting to notice that, compared to the vexillary case, we admit some SE corners in the diagram that are not in an ordered northeast path, which we call the unessential corners. Besides, the characterization via signed pattern avoidance for the theta-vexillary permutations has eight patterns in common with those for the vexillary case, and $[2\ 1]$ is the only one not present in this list.
Considering the pattern avoidance criterion, the set of theta-vexillary signed permutations forms a new class of permutations according to the ``Database of Permutation Pattern Avoidance'' maintained by Tenner \cite{patternDB}. This work is part of my Ph.D. thesis \cite{Lam18}. \subsubsection*{Acknowledgments} I would like to express my very great appreciation to David Anderson for his valuable and constructive suggestions during the development of this work while I was a visiting scholar at The Ohio State University. I also thank Lonardo Rabelo for comments on a previous version of the manuscript. \section{Signed permutations in \texorpdfstring{$\mathcal{W}_{n}$}{Wn}} The notation presented here is the same as that used in \cite{AF16}. We also refer to \cite[\S 8.1]{BB} for further details. Consider permutations of the positive and negative integers, where a bar over a number denotes a negative sign, with the natural order $$ \dots, \overline{n},\dots, \overline{2},\overline{1},0,1,\dots, n,\dots $$ A \emph{signed permutation} is a permutation $w$ satisfying $w(\overline{\imath})=\overline{w(i)}$ for each $i$. A signed permutation belongs to $\mathcal{W}_{n}$ if $w(m)=m$ for all $m>n$; this is a group isomorphic to the hyperoctahedral group, the Weyl group of types $B_{n}$ and $C_{n}$. Since $w(\overline{\imath})=\overline{w(i)}$, we only need the positive positions when writing a signed permutation in one-line notation, i.e., a permutation $w\in \mathcal{W}_{n}$ is represented by $w(1)\ w(2)\ \cdots\ w(n)$. For example, the full form of the signed permutation $w=\overline{2}\ 1\ \overline{3}$ in $\mathcal{W}_{3}$ is $3\ \overline{1}\ 2\ 0\ \overline{2}\ 1\ \overline{3}$, but we can omit the values at the positions $\overline{3}, \overline{2}, \overline{1}$, and $0$. The group $\mathcal{W}_{n}$ is generated by the \emph{simple transpositions} $s_{0},\dots, s_{n}$, where for $i>0$, right-multiplication by $s_{i}$ exchanges the entries in positions $i$ and $i+1$, and right-multiplication by $s_{0}$ replaces $w(1)$ with $\overline{w(1)}$. Every signed permutation $w$ can be written as $w=s_{i_{1}}\cdots s_{i_{\ell}}$ with $\ell$ minimal; the number $\ell=\ell(w)$ is called the \emph{length} of $w$. This value counts the number of inversions of $w\in\mathcal{W}_{n}$, and it is given by the formula \begin{align}\label{eq:lengthTipoB} \ell(w) = \#\{1\leqslant i< j\leqslant n \ |\ w(i)>w(j)\} + \#\{1\leqslant i\leqslant j\leqslant n \ |\ w(-i)>w(j)\}. \end{align} The element $w_{\circ}^{(n)}= \overline{1}\ \overline{2}\cdots\overline{n}$ is the longest element of $\mathcal{W}_{n}$ and it is called the involution of $\mathcal{W}_{n}$. Notice that the involution $w_{\circ}^{(n)}$ has length $n^{2}$. The group $\mathcal{W}_{n}$ can be embedded in the symmetric group $S_{2n+1}$, regarding $S_{2n+1}$ as the group of permutations of $\overline{n},\dots, 0,\dots, n$. Indeed, define the \emph{odd} embedding $\iota:\mathcal{W}_{n}\hookrightarrow S_{2n+1}$ by sending $w=w(1)\ w(2)\ \cdots\ w(n)$ to the permutation $$ \overline{w(n)}\ \cdots\ \overline{w(2)}\ \overline{w(1)}\ 0\ w(1)\ w(2)\ \cdots\ w(n) $$ in $S_{2n+1}$. The embedding $\iota$ will be used whenever it is necessary to emphasize that we work with the full form of $w$. There is also an \emph{even} embedding $\iota':\mathcal{W}_{n}\hookrightarrow S_{2n}$ defined by omitting the value $w(0)=0$.
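As an illustrative check of formula \eqref{eq:lengthTipoB} (included only as an example), consider again $w=\overline{2}\ 1\ \overline{3}\in\mathcal{W}_{3}$. The first set consists of the pairs $(i,j)\in\{(1,3),(2,3)\}$, since $\overline{2}>\overline{3}$ and $1>\overline{3}$, while the second set consists of the five pairs $(1,1)$, $(1,2)$, $(1,3)$, $(2,3)$, and $(3,3)$, since $w(\overline{1})=2$ is greater than $\overline{2}$, $1$, and $\overline{3}$, $w(\overline{2})=\overline{1}>\overline{3}$, and $w(\overline{3})=3>\overline{3}$. Hence $\ell(w)=2+5=7$.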
Considering the natural inclusions $\mathcal{W}_{n}\subset \mathcal{W}_{n+1}\subset\cdots$, we get the infinite Weyl group $\mathcal{W}_{\infty}=\cup \mathcal{W}_{n}$. When the value $n$ is understood or irrelevant, we may consider $w$ as an element of $\mathcal{W}_{\infty}$. The odd embeddings are compatible with the corresponding inclusions $S_{2n+1}\subset S_{2n+3}\subset\cdots$. \subsection{Diagram of a permutation in \texorpdfstring{$S_{2n+1}$}{S(2n+1)}} Let us first consider the specific case where the permutation group is $S_{2n+1}$. It is important to consider this case because we need to make some modifications to the notation that will be useful later. Consider a $(2n+1)\times (2n+1)$ array of boxes with rows and columns indexed by the integers $[\overline{n},n]=\{\overline{n},\dots, \overline{1},0,1,\dots,n\}$ in matrix style. The \emph{permutation matrix} associated to a permutation $w\in S_{2n+1}$ is obtained by placing dots in positions $(w(i),i)$, for all $\overline{n}\leqslant i\leqslant n$, in the array. Again, the \emph{diagram} of $w$ is the collection of boxes that remain after removing those which are (weakly) south and east of a dot in the permutation matrix. Observe that the number of boxes in the diagram is equal to the length of the permutation. The \emph{rank function} of a permutation $w\in S_{2n+1}$ for a pair $(p,q)$, where $\overline{n}\leqslant p, q\leqslant n$, is the number of dots strictly south and weakly west of the box $(q-1,\overline{p})$ in the permutation matrix of $w$. In other words, it is defined by \begin{align*} r_{w}(p,q) &:=\#\{i\leqslant \overline{p} \ |\ w(i)\geqslant q\}, \end{align*} for $\overline{n}\leqslant p, q\leqslant n$. We say that a box $(a,b)$ is a southeast (SE) corner of the diagram of $w$ if $w$ has a descent at $b$, with $a$ lying in the interval of the jump, and $w^{-1}$ has a descent at $a$, with $b$ lying in the interval of the jump. This can be written as \begin{align}\label{eq:defconerSn} \begin{aligned} w(b)> a &\geqslant w(b+1) \quad \mbox{and}\\ w^{-1}(a)> b &\geqslant w^{-1}(a+1). \end{aligned} \end{align} A \emph{corner position} of $w$ is a pair $(p,q)$ such that the box $(q-1,\overline{p})$ is a southeast (SE) corner of the diagram of $w$. The \emph{set of corners} of $w$ is the set $\mathscr{C}(w)$ of triples $(k,p,q)$ such that $(p,q)$ is a corner position and $k=r_{w}(p,q)$. For example, consider $w=\iota(\overline{2}\ 3\ 1)=\overline{1}\ \overline{3}\ 2\ 0\ \overline{2}\ 3\ 1$. Figure \ref{fig:exdiagram} shows the diagram of $w$. The SE corners $(q-1,\overline{p})$ are highlighted and they are filled with the rank function values $r_{w}(p,q)$. In this case, the set of corners is $$ \mathscr{C}(w)=\{(1,3,\overline{1}), (1,1,2), (3,0,\overline{1}),(2,\overline{2},2)\}. $$ \begin{figure}\caption{Diagram of $w=\iota(\overline{2}\ 3\ 1)$; the SE corners are highlighted and filled with the rank function values.}\label{fig:exdiagram} \end{figure} Notice that if a box $(q-1,\overline{p})$ is a SE corner, that is, it satisfies \eqref{eq:defconerSn}, then $(p,q)$ is a corner position and the corresponding rank is $k=r_{w}(p,q)$. \subsection{Extended diagram of a signed permutation in \texorpdfstring{$\mathcal{W}_{n}$}{W(n)}} We know that signed permutations must satisfy the relation $w(\overline{\imath})=\overline{w(i)}$, so the negative positions can be obtained from the positive ones.
Hence, a signed permutation $w\in \mathcal{W}_{n}$ corresponds to a $(2n+1)\times n$ array of boxes, with rows indexed by $\{\overline{n},\dots,n\}$ and columns indexed by $\{\overline{n},\dots,\overline{1}\}$, where the dots are placed in the boxes $(w(i),i)$ for $\overline{n}\leqslant i\leqslant \overline{1}$. For each dot, we place an ``$\times$'' in those boxes $(a,b)$ such that $a=\overline{w(i)}$ and $i\leqslant b$; in other words, for the dot in box $(w(i),i)$, an $\times$ is placed in the box in the opposite row and the same column, together with all the boxes to the right of it. The \emph{extended diagram} $D^{+}(w)$ of a signed permutation $w$ is the collection of boxes in the $(2n+1)\times n$ rectangle that remain after removing those which are south or east of a dot. The \emph{diagram} $D(w)\subseteq D^{+}(w)$ is obtained from the extended diagram $D^{+}(w)$ by removing the boxes marked with $\times$. Namely, $D(w)$ is defined by \begin{align*} D(w)=\{(i,j)\in [\overline{n},\overline 1]\times[\overline{n},n] \ |\ w(i)>j, w^{-1}(j)>i \mbox{, and } w^{-1}(-j)>i\}. \end{align*} \begin{lem} The number of boxes of $D(w)$ is equal to the length of $w$. \end{lem} \begin{proof} Observe that if we define $J = \{ j\in [\overline{n},n] \ |\ w^{-1}(j)<0\}$, the set $D(w)$ can be split into two subsets $D_{1}(w) = \{(i,j) \in D(w) \ |\ j\in J\}$ and $D_{2}(w) = \{(i,j) \in D(w) \ |\ j\not\in J\}$. Since these sets have cardinalities $\# D_{1}(w) = \#\{1\leqslant l<m\leqslant n \ |\ w(l)>w(m)\}$ and $\# D_{2}(w) = \#\{1\leqslant l\leqslant m\leqslant n \ |\ w(-l)>w(m)\}$, respectively, the assertion follows from Equation \eqref{eq:lengthTipoB}. \end{proof} Observe that if we use the embedding $\iota:\mathcal{W}_{n}\hookrightarrow S_{2n+1}$, the matrix and extended diagram of $w\in \mathcal{W}_{n}$ correspond, respectively, to the first $n$ columns of the matrix and diagram of $\iota(w)$. The notation $\iota(D^{+}(w))$ will be used when we need to use the respective $(2n+1)\times (2n+1)$ diagram of $\iota(w)$. The rank function of a permutation $w$ in $\mathcal{W}_{n}$ is defined by \begin{align*} r_{w}(p,q) &=\#\{i\geqslant p \ |\ w(i)\leqslant \overline{q}\}, \end{align*} for $1\leqslant p\leqslant n$, and $\overline{n}\leqslant q\leqslant n$. Since $w(\overline{\imath})=\overline{w(i)}$, the rank function $r_{w}(p,q)$ is also equal to $\#\{i\leqslant \overline{p} \ |\ w(i)\geqslant q\}$, so the rank function $r_{w}$ coincides with $r_{\iota(w)}$. Given $w\in \mathcal{W}_{n}$, the next lemma states that there is a symmetry about the origin among the corner positions of $\iota(w)$. In order to simplify the notation, given a triple $(k,p,q)$, define the \emph{reflected} triple $(k,p,q)^{\perp}=(k+p+q-1,\overline{p}+1,\overline{q}+1)$. \begin{lem}[\cite{AF16}, Lemma 1.1]\label{lema:symmetry} For $w\in \mathcal{W}_{n}$, the set of corners of $\iota(w)\in S_{2n+1}$ has the following symmetry: $(k,p,q)$ is in $\mathscr{C}(\iota(w))$ if and only if $(k,p,q)^{\perp}$ is in $\mathscr{C}(\iota(w))$. \end{lem} We can see in Figure \ref{fig:exdiagram} that both corners in the left half of the diagram are symmetric about the origin to the other two corners in the right half. This behavior occurs for every signed permutation $w$, implying that half of $\mathscr{C}(\iota(w))$ suffices to determine the signed permutation $w$; we will consider those corners appearing in the first $n$ columns. A \emph{corner position} of a signed permutation $w$ is a pair $(p,q)$ such that the box $(q-1,\overline{p})$ is a southeast (SE) corner of the extended diagram of $w$.
The \emph{set of corners} $\mathscr{C}(w)$ of a signed permutation $w$ is the set of triples $(k,p,q)$ such that $(q-1,\overline{p})$ is a SE corner of the extended diagram $D^{+}(w)$ and $k=r_{w}(p,q)$, except for corner positions $(p,q)$ where $p=1$ and $q<0$. This exception comes from the fact that $(1,q)$, for $q<0$, is not a corner position in $\iota(w)$, because the respective box $(q-1,\overline{1})$ cannot be a SE corner since $w(0)=0$. \begin{rem} Anderson and Fulton in \cite{AF16, AF14} defined a slightly different set, called the \emph{essential set} of $w$. This set is contained in the set of corners, since a few ``redundant'' SE corners are removed. In the present work, we need the whole set of corners, since the essential set is not enough to perform our computations. \end{rem} Since the integer $k$ is the rank of $w$ at $(p,q)$, we sometimes simply write $(p,q)\in\mathscr{C}(w)$ instead of the triple $(k,p,q)$. Figure \ref{fig:diagram1} illustrates the extended diagram and the set of corners of the signed permutation $w=10\ 1\ 5\ 3\ \overline{2}\ \overline{4}\ 6\ \overline{9}\ \overline{8}\ \overline{7}$. \begin{figure} \caption{Diagram and set of corners of a signed permutation.} \label{fig:diagram1} \end{figure} To make the diagrams look cleaner, from now on we will not mark the $\times$'s in the extended diagrams $D^{+}(w)$. Throughout the text, we may also omit the word ``extended'', since we are only interested in studying the extended diagram of a signed permutation $w$, so the diagram $D(w)$ will not be needed. \subsection{NE path and unessential corners} Suppose that $w\in \mathcal{W}_{n}$ is any signed permutation. There are two notable classes of SE corners in the set $\mathscr{C}(w)$ that will be important for our main theorem. They are the corners in the northeast path and the unessential corners. Given any signed permutation $w$, consider the \emph{(strict) partial order} on the set of corners $\mathscr{C}(w)$ given by $(p,q)<(p',q')$ if and only if $p>p'$ and $q<q'$, for corner positions $(p,q),(p',q')\in \mathscr{C}(w)$. For example, in Figure \ref{fig:diagram1}, the unique possible relation is $(4,\overline{3})<(2,\overline{1})$, the two boxes filled in with the value $6$. Define the \emph{northeast (NE) path} as the set $\mathbb{N}E(w)$ of minimal elements of $\mathscr{C}(w)$ relative to the poset ``$<$''. Using the same example of Figure \ref{fig:diagram1}, we have that $\mathbb{N}E(w)=\mathscr{C}(w)-\{(6,2,\overline{1})\}$, since all the corners are minimal under this poset except $(6,2,\overline{1})$. The positions $(p_{i},q_{i})$ of the NE path $\mathbb{N}E(w)$ can be ordered so that $p_{1}\geqslant p_{2}\geqslant \cdots \geqslant p_{r}>0$ and $q_{1}\geqslant q_{2}\geqslant \cdots \geqslant q_{r}$. In fact, suppose that we order $p_{1}\geqslant p_{2}\geqslant \cdots \geqslant p_{r}>0$ but there is $i$ such that $q_{i}<q_{i+1}$. If $p_{i}=p_{i+1}$ then we can exchange $i$ and $i+1$. Otherwise, if $p_{i}>p_{i+1}$ then $(p_{i},q_{i})<(p_{i+1},q_{i+1})$ and $(p_{i+1},q_{i+1})$ does not belong to the NE path. Given a signed permutation $w$, we say that a corner position $(p,q)$ of $\mathscr{C}(w)$ is \emph{unessential} if there are corner positions $(p_{1},q_{1})$, $(p_{2},q_{2})$ and $(p_{3},q_{3})$ in the NE path $\mathbb{N}E(w)$ satisfying the following conditions: \begin{gather*} p_{1}=p \mbox{ and } q_{1}<q<0;\\ p_{2}>0 \mbox{ and } q_{2}=\overline{q}+1;\\ (p_{3},q_{3})<(p,q).
\end{gather*} In other words, $(p,q)$ is not a minimal corner in the poset in the upper half of the diagram, the box $(q_{1}-1,\overline{p_{1}})$ lies above and in the same column as the box $(q-1,\overline{p})$, and the box $(\overline{q_{2}},p_{2}-1)$ reflected from $(q_{2}-1,\overline{p_{2}})$ lies to the right and in the same row as $(q-1,\overline{p})$, as shown in Figure \ref{fig:configuness}. We denote by $\mathscr{U}\! (w)$ the set of all unessential corners of $w$. \begin{figure}\caption{Configuration of an unessential corner $(p,q)$.}\label{fig:configuness} \end{figure} It is important to emphasize that all three corners $(p_{1},q_{1})$, $(p_{2},q_{2})$ and $(p_{3},q_{3})$ must belong to the NE path $\mathbb{N}E(w)$. For instance, considering the signed permutation $w=10\ 1\ 5\allowbreak\ 3\ \overline{2}\ \overline{4}\ 6\ \overline{9}\ \overline{8}\ \overline{7}$ of Figure \ref{fig:diagram1}, the set of unessential corners $\mathscr{U}\! (w)$ only contains the triple $(6,2,\overline{1})$. \section{Theta-triples and theta-vexillary signed permutations}\label{sec:svex} A \emph{theta-triple} is a triple of $s$-tuples $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$ with \begin{align}\label{eq:defvex} \mathbf{k} & =(0< k_{1}<k_{2}<\dots < k_{s}), \nonumber\\ \mathbf{p}& =(p_{1} \geqslant p_{2} \geqslant \dots \geqslant p_{s} >0),\\ \mathbf{q} & =(q_{1} \geqslant q_{2} \geqslant \dots \geqslant q_{s}), \nonumber \end{align} and satisfying eight conditions. We intentionally split these conditions into three blocks that share common characteristics. The first three conditions are \begin{enumerate} \item[A1.] $q_{i}\neq 0$ for all $i$; \item[A2.] $q_{i}\neq-q_{j}$, for any $i\neq j$; \item[A3.] if $q_{s}<0$ then $p_{s}>1$. \end{enumerate} Now, let $a=a(\boldsymbol{\tau})$ be the integer such that $q_{a-1}>0>q_{a}$, allowing $a=1$ and $a=s+1$ for the cases where all $q$'s are negative or all $q$'s are positive, respectively. For all $i\geqslant a$, denote by $R(i)$ (or $R(i)_{\boldsymbol{\tau}}$ to specify the triple) the unique integer such that $q_{R(i)}>-q_{i}>q_{R(i)+1}$; if necessary, consider $k_{0}=0$, $p_{0}=+\infty$, $q_{0}=+\infty$, and $R(a-1)=a-1$. The next three conditions are \begin{enumerate} \item[B1.] $(p_{i}-p_{i+1})+(q_{i}-q_{i+1})> k_{i+1}-k_{i}$, for $1\leqslant i<a-1$; \item[B2.] $(p_{i}-p_{i+1})+(q_{i}-q_{i+1})> (k_{i+1}-k_{i})+(k_{R(i)}-k_{R(i+1)})$, for $a\leqslant i<s$; \item[B3.] $p_{s}+q_{s}+k_{s}> k_{R(s)}+1$, if $a \leqslant s$. \end{enumerate} It is important to observe that none of the above conditions compare the indexes $a-1$ and $a$. Finally, consider $a\leqslant i\leqslant s$ and let $L(i)=L_{\boldsymbol{\tau}}(i)$ be the largest integer $j$ in $\{R(i)+1,\dots, a-1\}$ satisfying $k_{j}-k_{R(i)+1}\geqslant q_{R(i)+1}-q_{j}$, i.e., \begin{align} L(i)=\max\{R(i)+1\leqslant j \leqslant a-1 \ |\ k_{j}-k_{R(i)+1}\geqslant q_{R(i)+1}-q_{j}\}. \end{align} The last two conditions are \begin{enumerate} \item[C1.] $-q_{i}\geqslant k_{i}-k_{R(i)}$ for all $a\leqslant i\leqslant s$; \item[C2.] $-q_{i}\geqslant {q_{L(i)}+k_{L(i)}-k_{R(i)}}$ for all $a\leqslant i\leqslant s$. \end{enumerate} Given a theta-triple $\boldsymbol{\tau}$, \emph{the construction algorithm of the permutation $w(\boldsymbol{\tau})$} is given by a sequence of $s+1$ steps as follows: \begin{description} \item [Step ($1$)] Starting at position $p_{1}$, place $k_{1}$ consecutive entries, in increasing order, ending with $-q_{1}$.
Mark the \emph{absolute} value of these numbers as ``used''; \item [Step ($i$)] For $1<i\leqslant s$, starting at position $p_{i}$, or at the next available position to the right, fill the next available $k_{i}-k_{i-1}$ positions with entries chosen consecutively from the unused \emph{absolute} numbers, in increasing order, ending with $-q_{i}$ or, if it is not available, with the largest unused number below $-q_{i}$. Again, mark the absolute value of these numbers as ``used''; \item [Step ($s+1$)] Fill the remaining available positions with the unused positive numbers, in increasing order. \end{description} Notice that we mark as used the absolute value of the placed entries because we allow negative $q_{i}$ in a theta-triple. A signed permutation $w\in \mathcal{W}_{n}$ is called \emph{theta-vexillary} if $w=w(\boldsymbol{\tau})$ comes from some theta-triple $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$. \begin{ex}\label{ex:perm1} The permutation $w$ given in Figure \ref{fig:diagram1} can be obtained from the triple $\boldsymbol{\tau}=(3\: 4\: 5\: 6\: 9,\: 8\: 6\: 5\: 4\: 2,\: 7\: 4\: 2\: \overline{3}\: \overline{6})$ using the above algorithm: $$ \begin{array}{cccccccccccc} & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \mathbf{\overline{9}} & \mathbf{\overline{8}} & \mathbf{\overline{7}} \\ & \cdot & \cdot & \cdot & \cdot & \cdot & \mathbf{\overline{4}} & \cdot & {\overline{9}} & {\overline{8}} & {\overline{7}} \\ & \cdot & \cdot & \cdot & \cdot & \mathbf{\overline{2}} & {\overline{4}} & \cdot & {\overline{9}} & {\overline{8}} & {\overline{7}} \\ & \cdot & \cdot & \cdot & \mathbf{3} & {\overline{2}} & {\overline{4}} & \cdot & {\overline{9}} & {\overline{8}} & {\overline{7}} \\ & \cdot & \mathbf{1} & \mathbf{5} & {3} & {\overline{2}} & {\overline{4}} & \mathbf{6} & {\overline{9}} & {\overline{8}} & {\overline{7}} \\ w= & \mathbf{10} & {1} & {5} & {3} & {\overline{2}} & {\overline{4}} & {6} & {\overline{9}} & {\overline{8}} & {\overline{7}} \end{array} $$ One can check that $\boldsymbol{\tau}$ satisfies all eight conditions above, so $\boldsymbol{\tau}$ is a theta-triple and $w$ is a theta-vexillary signed permutation. Observe that every pair $(p_{i},q_{i})$ in this triple is also a corner position in the diagram of Figure \ref{fig:diagram1}. This fact is not a coincidence, and we will show that every element of a theta-triple gives a corner position of the associated permutation. \end{ex} Notice that the construction algorithm does not create an inversion inside a step, i.e., if $a<b$ are positions filled during some Step $(i)$ then $w(a)<w(b)$. \begin{rem} The definition of a theta-triple was motivated by the \emph{triple of type C} given by Anderson and Fulton \cite{AF15}. Indeed, any theta-triple is a triple of type C, but the converse is not true. A theta-triple has two properties that a triple of type C does not: each $(k_{i},p_{i},q_{i})$ is associated to a SE corner of $w(\boldsymbol{\tau})$ (Proposition \ref{prop:SEcorners}); and any theta-vexillary permutation is given by a unique theta-triple $\boldsymbol{\tau}$ (Proposition \ref{prop:unique}). Both results are relevant when we study SE corners in the diagram of $w(\boldsymbol{\tau})$. \end{rem} Now, we give a brief explanation of the eight conditions of a theta-triple. Conditions A1, A2 and A3 guarantee that the permutation $w(\boldsymbol{\tau})$ associated with such a theta-triple is a signed permutation.
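Before discussing the remaining conditions, it may be helpful to see them verified in a concrete case; the following check for the triple $\boldsymbol{\tau}=(3\: 4\: 5\: 6\: 9,\: 8\: 6\: 5\: 4\: 2,\: 7\: 4\: 2\: \overline{3}\: \overline{6})$ of Example \ref{ex:perm1} is included purely as an illustration. Here $s=5$, $a=4$, $R(4)=2$, $R(5)=1$, $L(4)=3$, and $L(5)=2$. Conditions A1, A2 and A3 are immediate; condition B1 gives $5>1$ for $i=1$ and $3>1$ for $i=2$; condition B2 gives $5>4$ for $i=4$; condition B3 gives $p_{5}+q_{5}+k_{5}=5>k_{R(5)}+1=4$; condition C1 gives $3\geqslant 2$ for $i=4$ and $6\geqslant 6$ for $i=5$; and condition C2 gives $3\geqslant q_{3}+k_{3}-k_{2}=3$ for $i=4$ and $6\geqslant q_{2}+k_{2}-k_{1}=5$ for $i=5$.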
Conditions B1, B2 and B3, in some sense, characterize theta-vexillary permutations just as the condition $(p_{i}-p_{i+1})+(q_{i}-q_{i+1})> k_{i+1}-k_{i}$ does for vexillary permutations. In condition B2, the extra term $k_{R(i)}-k_{R(i+1)}$ is added to the right side because, during the construction of $w(\boldsymbol{\tau})$, Step $(i)$ skips an equal number of entries that have already been used in previous steps. Moreover, condition B3 is equivalent to applying condition B2 with $i=s$, where we consider the extreme cases $(k_{0},p_{0},q_{0})=(0,n,n)$ and $(k_{s+1},p_{s+1},q_{s+1})=(n,1,-n)$. Finally, for conditions C1 and C2, the next lemma gives an equivalent formulation of them based on the construction algorithm of $w(\boldsymbol{\tau})$: \begin{lem}\label{lema:posEntriesStep} The conditions \emph{C1} and \emph{C2} are equivalent, respectively, to \begin{enumerate} \item[{C1$'$.}] Given any $a\leqslant i \leqslant s$, all entries placed by Steps $(a)$ to $(i)$ are positive; \item[{C2$'$.}] Given any $a\leqslant i \leqslant s$, all entries placed by Steps $(R(i)+1)$ to $(a-1)$ are strictly bigger than $q_{i}$. \end{enumerate} \end{lem} \begin{proof} For the first statement, observe that Steps $(a)$ to $(i)$ must skip at most $k_{a-1}-k_{R(i)}$ values, because these were already used in Steps $(R(i)+1)$ to $(a-1)$; denote by $\alpha:=-q_{i}-(k_{a-1}-k_{R(i)})$ the number of available positive entries from $1$ to $\overline{q_{i}}$ that can be used by Steps $(a)$ to $(i)$. Then condition C1 is equivalent to saying that $\alpha\geqslant k_{i}-k_{a-1}$, i.e., that there are enough positive values available to be placed by Steps $(a)$ to $(i)$. For the second assertion, remember that the definition of $L(i)$ says that it is the largest integer in $\{R(i)+1,\dots, a-1\}$ where $k_{L(i)}-k_{R(i)+1}\geqslant q_{R(i)+1}-q_{L(i)}$. The smallest possible entry placed by Steps $(R(i)+1)$ to $(L(i))$ is bounded below by $\overline{q_{L(i)}+k_{L(i)}-k_{R(i)}}+1$. Since for any Step $(j)$ after $L(i)$ we have $k_{j}-k_{R(i)+1}<q_{R(i)+1}-q_{j}$, no entry placed by such a step can be smaller than $\overline{q_{L(i)}}$. So every entry placed by Steps $(R(i)+1)$ to $(a-1)$ is bounded below by $\overline{q_{L(i)}+k_{L(i)}-k_{R(i)}}+1$, and we conclude that both conditions C2 and C2$'$ are equivalent to saying that $q_{i}< \overline{q_{L(i)}+k_{L(i)}-k_{R(i)}}+1$. \end{proof} In other words, conditions C1 and C2 guarantee that, given $i\geqslant a$, all values placed by Steps $(R(i)+1)$ to $(i)$ range from ${q_{i}}$ to $\overline{q_{i}}$. In practice, it will be easier to use C1$'$ and C2$'$ instead of C1 and C2. Now, let us study the descents of a theta-vexillary signed permutation $w(\boldsymbol{\tau})$ and of its inverse $w(\boldsymbol{\tau})^{-1}$. \begin{prop}\label{prop:descentsofW} Let $w=w(\boldsymbol{\tau})$ be a theta-vexillary signed permutation and $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$ be a theta-triple. Then all the descents of $w$ are at positions $p_{i}-1$, i.e., for each $i$, we have $w(p_{i}-1)>\overline{q_{i}}\geqslant w(p_{i})$ and there are no other descents. \end{prop} \begin{proof} In Step $(1)$, no descents are created, unless $p_{1}=1$, in which case the permutation has a single descent at 0. For $1<i<a$, this is proved in Lemma 2.2 of \cite{AF14}.
Now, supposing that $a\leqslant i\leqslant s$ and $i\geqslant 2$, assume inductively that for $j<i$, there is a descent at position $p_{j}-1$ whenever this position has been filled, satisfying $w(p_{j}-1)>\overline{q_{j}}\geqslant w(p_{j})$, and that there are no other descents. By Lemma \ref{lema:posEntriesStep}, only positive entries are placed in the consecutive vacant positions of Step $(i)$, from left to right, starting at position $p_{i}$ (or at the next vacant position to the right, if $p_{i-1}=p_{i}$). We consider ``sub-steps'' of Step $(i)$, where we are placing an entry at position $p\geqslant p_{i}$, and distinguish three cases. First, suppose we are at position $p$, with $p<p_{i-1}-1$. In this case, the previous entry placed in Step $(i)$ (if any) was placed at position $p-1$, so we did not create a descent at $p-1$. Position $p+1$ is still vacant, so no new descents are created. To clarify this proof, let $\boldsymbol{\tau}=(3\: 4\: 5\: 6\: 9,\: 8\: 6\: 5\: 4\: 2,\: 7\: 4\: 2\: \overline{3}\: \overline{6})$ as in Example \ref{ex:perm1}. In Step $(5)$, the first entry placed is $1$ and it does not create a descent: $$ w= \cdot \: \mathbf{1}\: \cdot \: {3}\: {\overline{2}} \: {\overline{4}} \: \cdot \: {\overline{9}} \: {\overline{8}} \: {\overline{7}} $$ Next, suppose we are at position $p=p_{i-1}-1$. This means that $p_{i-1}-p_{i}\leqslant k_{i}-k_{i-1}$, so let $\beta=(k_{i}-k_{i-1})-(p_{i-1}-p_{i})$ be the number of entries remaining to be placed in Step $(i)$, after placing the current one at position $p$. Condition B1 tells us that $q_{i}\leqslant q_{i}+\beta<q_{i-1}$; then, considering the integer interval $\mathcal{I}_{i}=\{\overline{q_{i-1}}+1,\dots,\overline{q_{i}}\}$, it must be non-empty. We claim that the entry $w(p)=w(p_{i-1}-1)$ lies in $\mathcal{I}_{i}$ and therefore $w(p_{i-1}-1)>\overline{q_{i-1}}\geqslant w(p_{i-1})$, proving this case. Remember that the construction algorithm must skip those entries whose absolute value has already been used, so this claim is equivalent to saying that, even after removing such repetitions from $\mathcal{I}_{i}$, there is still some value in $\mathcal{I}_{i}$ to be picked for $w(p)$. To prove the claim, let us count how many values in $\mathcal I_{i}$ were used in previous steps. For $a\leqslant j<i$, any entry $x$ of Step $(j)$ satisfies $x\leqslant \overline{q_{j}}\leqslant \overline{q_{i-1}}$, which means $x\not\in \mathcal{I}_{i}$. If $1\leqslant j\leqslant R(i)$ then any entry $x$ placed in Step $(j)$ satisfies $x\leqslant\overline{q_{j}}\leqslant\overline{q_{R(i)}}<q_{i}$, implying that $\overline{x}\not\in \mathcal{I}_{i}$. If $R(i-1)<j<a$ then, by condition C2$'$, any entry $x$ placed in Step $(j)$ satisfies $x>q_{i-1}$, implying that $\overline{x}\not\in \mathcal{I}_{i}$. Finally, if $R(i)<j\leqslant R(i-1)$ then, by condition C2$'$, any entry $x$ placed in Step $(j)$ satisfies $q_{i}<x\leqslant\overline{q_{j}}\leqslant\overline{q_{R(i-1)}}<q_{i-1}$, hence $\overline{x}\in \mathcal{I}_{i}$. We conclude that the only absolute values placed in previous steps that belong to the interval $\mathcal{I}_{i}$ are the ones from Steps $(R(i)+1)$ to $(R(i-1))$. So there are $\alpha:=k_{R(i-1)}-k_{R(i)}$ values in $\mathcal{I}_{i}$ that cannot be used in Step $(i)$ at position $p$.
In order to place the correct value at position $p$ of Step $(i)$, we need to take into account that the values which are going to be placed after position $p$ must also belong to $\mathcal{I}_{i}$ and be bigger than $w(p)$; that is, we are also required to skip the $\beta$ biggest values in $\mathcal{I}_{i}$. Since the number of elements of $\mathcal{I}_{i}$ is $\overline{q_{i}}-\overline{q_{i-1}}$, it follows from condition B2 that $\#(\mathcal{I}_{i})>\alpha+\beta$ and, therefore, there is some value in $\mathcal{I}_{i}$ to pick for $w(p)$. Continuing the example, in Step $(5)$, the second entry placed is $5$, which creates a descent at position $3$: $$ w= \cdot \: {1}\: \mathbf{5} \: {3}\: {\overline{2}} \: {\overline{4}} \: \cdot \: {\overline{9}} \: {\overline{8}} \: {\overline{7}} $$ Finally, suppose we are at position $p\geqslant p_{i-1}$. Using the previous case, the entry to be placed is some $x\in \mathcal{I}_{i}$. When an entry is placed in a vacant position to the right of a filled position, it does not create a descent, since either all entries already placed in previous steps are smaller than $\overline{q_{i-1}}< x$ or the entries placed in this step are smaller than $x$. When it is placed to the left of a filled position, which can only happen at a position $p_{j}-1$ for some $j<i-1$, it does create a descent at the position $p_{j}-1$, satisfying $w(p_{j}-1)>\overline{q_{i-1}}\geqslant \overline{q_{j}}\geqslant w(p_{j})$. In Step $(5)$ of our example, it remains to place the third value $6$ in the next vacant position, which occurs at position $7$. Observe that we do not create a descent at the filled position to its left, but we do create a descent at position $7$, since position $8$ is already filled: $$ w= \cdot \: {1}\: {5} \: {3}\: {\overline{2}} \: {\overline{4}} \: \mathbf{6} \: {\overline{9}} \: {\overline{8}} \: {\overline{7}} $$ At Step $(s+1)$, we can apply the previous case for $i=s+1$, adding the values $k_{s+1}=n$, $p_{s+1}=0, q_{s+1}=-n+1$ to $\boldsymbol{\tau}$. This procedure will create descents only at those positions $p_{j}-1$ which are still vacant. \end{proof} Given a triple $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$, the dual triple is defined by $\boldsymbol{\tau}^{*}=(\mathbf{k},\mathbf{q},\mathbf{p})$, where $\mathbf{p}$ and $\mathbf{q}$ are switched. Clearly, the dual triple need not be a theta-triple, but it is useful to compute the inverse of $w(\boldsymbol{\tau})$. The dual triple $\boldsymbol{\tau}^{*}=(\mathbf{k},\mathbf{q},\mathbf{p})$ determines a permutation $\iota(w(\boldsymbol{\tau}^{*}))$ in $ S_{2n+1}$ using the following algorithm: \begin{description} \item [Step (0)] Put a zero at position $0$; \item [Step (1)] Starting at position $q_{1}$, place $k_{1}$ consecutive entries, in increasing order, ending with $-p_{1}$. Mark the absolute value of these numbers as ``used'' and fill the positions reflected through $0$ with the respective reflected values $w(\overline{a})=\overline{w(a)}$; \item [Step (i)] For $1<i\leqslant s$, starting at position $q_{i}$ (if $q_{i}<0$ then we use a position before zero), or at the next available position to the right, fill the next available $k_{i}-k_{i-1}$ positions with entries chosen consecutively from the unused absolute numbers, in increasing order, ending with $-p_{i}$ or, if it is not available, with the biggest unused number below $-p_{i}$.
Again, mark the absolute value of these numbers as ``used'' and fill the positions reflected through $0$ with the respective reflected values $w(\overline{a})=\overline{w(a)}$; \item [Step (s+1)] Fill the remaining available positions after $0$ with the unused positive numbers in increasing order. Finally, fill the positions reflected through $0$ with the respective reflected values $w(\overline{a})=\overline{w(a)}$. \end{description} The difference here, compared to the construction of the theta-vexillary permutation, is that we allow negative positions, so we need the full form of the permutation. The signed permutation $w(\boldsymbol{\tau}^{*})$ is obtained from $\iota(w(\boldsymbol{\tau}^{*}))$ by restricting it to the positions $\{1,\dots,n\}$. \begin{lem} We have $w(\boldsymbol{\tau}^{*})=w(\boldsymbol{\tau})^{-1}$. \end{lem} \begin{proof} The proof is the same as that of Lemma 2.3 of \cite{AF14}, adding the fact that for $a\leqslant i\leqslant s$, the permutation $\iota(w)$ maps the set $a(i)$ to $b(i)$ and, hence, the inverse $\iota(w)^{-1}$ maps $b(i)$ to $a(i)$. \end{proof} \begin{ex} Consider the dual triple $\boldsymbol{\tau}^{*}=(3\: 4\: 5\: 6\: 9,\: 7\: 4\: 2\: \overline{3}\: \overline{6},\: 8\: 6\: 5\: 4\: 2)$ of the triple given in Example \ref{ex:perm1}. The permutation $\iota(w(\boldsymbol{\tau}^{*}))$ is constructed as follows: $$ \begin{array}{ccccccccccc|c|cccccccccc} & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \mathbf{0} & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot\\ & \cdot & \mathit{8} & \mathit{9} & \mathit{10} & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & {0} & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \mathbf{\overline{10}} & \mathbf{\overline{9}} & \mathbf{\overline{8}} & \cdot\\ & \cdot & {8} & {9} & {10} & \cdot & \cdot & \mathit{6} & \cdot & \cdot & \cdot & {0} & \cdot & \cdot & \cdot & \mathbf{\overline{6}} & \cdot & \cdot & {\overline{10}} & {\overline{9}} & {\overline{8}} & \cdot\\ & \cdot & {8} & {9} & {10} & \cdot & \cdot & {6} & \cdot & \mathit{5} & \cdot & {0} & \cdot & \mathbf{\overline{5}} & \cdot & {\overline{6}} & \cdot & \cdot & {\overline{10}} & {\overline{9}} & {\overline{8}} & \cdot\\ & \cdot & {8} & {9} & {10} & \cdot & \cdot & {6} & \mathbf{\overline{4}} & {5} & \cdot & {0} & \cdot & {\overline{5}} & \mathit{4} & {\overline{6}} & \cdot & \cdot & {\overline{10}} & {\overline{9}} & {\overline{8}} & \cdot\\ & \cdot & {8} & {9} & {10} & \mathbf{\overline{7}} & \mathbf{\overline{3}} & {6} & {\overline{4}} & {5} & \mathbf{\overline{2}} & {0} & \mathit{2} & {\overline{5}} & {4} & {\overline{6}} & \mathit{3} & \mathit{7} & {\overline{10}} & {\overline{9}} & {\overline{8}} & \cdot\\ & \mathit{\overline{1}} & {8} & {9} & {10} & {\overline{7}} & {\overline{3}} & {6} & {\overline{4}} & {5} & {\overline{2}} & {0} & {2} & {\overline{5}} & {4} & {\overline{6}} & {3} & {7} & {\overline{10}} & {\overline{9}} & {\overline{8}} & \mathbf{1}\\ \end{array} $$ For each step, the bold numbers represent the values placed in that step, and the italic ones are their reflections through zero. Thus, $w(\boldsymbol{\tau}^{*})= 2 \: \overline{5} \: 4 \: \overline{6} \: 3 \: 7 \: \overline{10} \: \overline{9} \: \overline{8} \: 1$ and we can easily verify that this permutation is the inverse of $w=10\ 1\ 5\ 3\ \overline{2}\ \overline{4}\ 6\ \overline{9}\ \overline{8}\ \overline{7}$.
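As a quick check (included only for illustration), note for instance that $w(\boldsymbol{\tau}^{*})(1)=2$ while $w(2)=1$, and $w(\boldsymbol{\tau}^{*})(7)=\overline{10}$ while $w(10)=\overline{7}$, exactly as expected for mutually inverse signed permutations.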
\end{ex} Although $w(\boldsymbol{\tau}^{*})$ is not theta-vexillary, a similar version of Proposition \ref{prop:descentsofW} holds in this case and the proof follows the same idea. \begin{prop}\label{prop:descentsofWinv} Let $w=w(\boldsymbol{\tau})$ be a theta-vexillary signed permutation, for a theta-triple $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$. Then all the descents of $w^{-1}$ are at positions $q_{i}-1$, when $i<a$, and $\overline{q_{i}}$, when $i\geqslant a$. In fact, we have \begin{align*} &w^{-1}(q_{i}-1)>\overline{p_{i}}\geqslant w^{-1}(q_{i}) \mbox{ , for } i<a;\\ &w^{-1}(\overline{q_{i}})>p_{i}-1\geqslant w^{-1}(\overline{q_{i}}+1) \mbox{ , for } i\geqslant a; \end{align*} and there are no other descents. \end{prop} \section{Extended diagrams for theta-vexillary permutations} In this section, we aim to understand what a theta-vexillary permutation looks like in the extended diagram. Given a position $(p,q)$ in the extended diagram $D^{+}(w)$, define the \emph{left lower region} of $(p,q)$ to be the set of boxes in the extended diagram strictly south and weakly west of the box $(q-1,\overline{p})$. In other words, denoting it by $\Lambda(p,q)$, this set is \begin{align*} \Lambda(p,q):=\{(a,b) \in D^{+}(w) \ |\ a\geqslant {q}, b\leqslant \overline{p}\}. \end{align*} Notice that the construction algorithm of a theta-vexillary permutation $w(\boldsymbol{\tau})$ can also be seen as a process of placing dots in the extended diagram, since each pair $(w(i),i)$ corresponds to a dot in the diagram. We can say that Step $(i)$ places dots in the diagram using the following rule: if an entry $x$ is placed at a position $z$ in the permutation, i.e., $w(z)=x$, then it produces a dot at the box $(\overline{x},\overline{z})$ in the diagram. For instance, consider the triple $\boldsymbol{\tau}=(3\: 4\: 5\: 6\: 9,\: 8\: 6\: 5\: 4\: 2,\: 7\: 4\: 2\: \overline{3}\: \overline{6})$ of Example \ref{ex:perm1}, whose diagram is represented in Figure \ref{fig:diagram1}. The first step places the entries $\overline{9}$, $\overline{8}$ and $\overline{7}$, respectively, at positions $8$, $9$ and $10$, which corresponds to placing dots in the boxes $(9,\overline{8})$, $(8,\overline{9})$ and $(7,\overline{10})$. The second step places only a dot in the box $(4,\overline{6})$. The remaining steps place all the other dots in the diagram. \begin{prop}\label{prop:SEcorners} Let $w=w(\boldsymbol{\tau})$ be a theta-vexillary signed permutation and $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$ be a theta-triple. Then we have the following: \begin{enumerate} \item The boxes $(q_{i}-1,\overline{p_{i}})$ and their reflections $(\overline{q_{i}}, p_{i}-1)$ are SE corners of the diagram of $\iota(w)$ (not necessarily all of them); \item For any $1\leqslant i\leqslant s+1$, all the dots placed by Step $(i)$ in the diagram are inside the region $\Lambda(p_{i},q_{i})$ and outside $\Lambda(p_{i-1},q_{i-1})$; \item $k_{i}$ is the number of dots inside the region $\Lambda(p_{i},q_{i})$. \end{enumerate} \end{prop} \begin{proof} Lemma \ref{lema:symmetry} says that there is a symmetry between the boxes $(q_{i}-1,\overline{p_{i}})$ and their reflections $(\overline{q_{i}}, p_{i}-1)$. Then, it suffices to prove that every $(q_{i}-1,\overline{p_{i}})$ is a SE corner. If $p>0$, then a signed permutation $w$ has a descent at position $p-1$ if and only if $\iota(w)$ has descents at positions $p-1$ and $\overline{p}$.
By Proposition \ref{prop:descentsofW}, $\iota(w)$ satisfies $\iota(w)(p_{i}-1)>\overline{q_{i}}\geqslant \iota(w)(p_{i})$, which implies that \begin{align*} \iota(w)(\overline{p_{i}})>{q_{i}} -1\geqslant \iota(w)(\overline{p_{i}}+1). \end{align*} On the other hand, by Proposition \ref{prop:descentsofWinv}, $\iota(w)^{-1}$ satisfies \begin{align*} \iota(w)^{-1}({q_{i}}-1)>\overline{p_{i}}\geqslant \iota(w)^{-1}({q_{i}}), \end{align*} for any $i$. This proves that $(q_{i}-1,\overline{p_{i}})$ satisfies Equation \eqref{eq:defconerSn}, which proves item (1). For item (2), first of all, observe that every entry $x$ placed at position $z$ in Step $(i)$ satisfies $p_{i}\leqslant z$ and $x\leqslant \overline{q_{i}}$, implying that the corresponding dot at box $(\overline{x},\overline{z})$ in the diagram belongs to $\Lambda(p_{i},q_{i})$. Now, we need to check that all dots placed by Step $(i)$ are outside $\Lambda(p_{i-1},q_{i-1})$. It is enough to verify that whenever in Step $(i)$ we are placing an entry $x$ at a position $z\geqslant p_{i-1}$ in the permutation, then $x>\overline{q_{i-1}}$. Set $\beta=(k_{i}-k_{i-1})-(p_{i-1}-p_{i})$, the number of entries to be placed after position $p_{i-1}$ during Step $(i)$. If $1\leqslant i < a$ then condition B1$'$ implies that $\beta<q_{i-1}-q_{i}$. The entries that will be placed are $\overline{q_{i}+\beta}+1,\dots,\overline{q_{i}}$ and they are all strictly greater than $\overline{q_{i-1}}$ (in the diagram, this is equivalent to saying that we have $q_{i-1}-q_{i}$ available rows to place the dots above $q_{i-1}$ but we only need $\beta$ rows). If $i=a$ then, by Lemma \ref{lema:posEntriesStep}, $x>0>\overline{q_{i-1}}$. If $a<i\leqslant s+1$ then condition B2$'$ implies that $\beta<(q_{i-1}-q_{i})-(k_{R(i-1)}-k_{R(i)})$, which means that we have $(q_{i-1}-q_{i})-(k_{R(i-1)}-k_{R(i)})$ available rows in the diagram to place the dots above $q_{i-1}$ but we only need $\beta$ rows. Observe that we must skip $k_{R(i-1)}-k_{R(i)}$ rows in the diagram since their reflections have already been used in Steps $(R(i)+1)$ to $(R(i-1))$. This proves item (2). Finally, for (3), $k_{i}$ is the total number of dots placed up to Step $(i)$ and they are all placed inside the region $\Lambda(p_{i},q_{i})$. Any other dot placed after this step is placed outside $\Lambda(p_{i},q_{i})$. \end{proof} If we denote by $\boldsymbol{\tau}$ the set $\{(k_{i},p_{i},q_{i}) \ |\ 1\leqslant i \leqslant s\}$, then Proposition \ref{prop:SEcorners} tells us that $\boldsymbol{\tau}$ is a subset of the set of corners, i.e., we can write $\boldsymbol{\tau}\subset \mathscr{C}(w)$. Remember that there is a poset ``$<$'' on the set of corners $\mathscr{C}(w)$ where two corner positions satisfy $(p,q)<(p',q')$ if and only if $p>p'$ and $q<q'$. Also remember that the NE path $\mathbb{N}E(w)\subset\mathscr{C}(w)$ is the set of minimal elements of this poset. \begin{lem}\label{lema:properties} Let $w=w(\boldsymbol{\tau})$ be a theta-vexillary signed permutation, and $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$ be a theta-triple. Then every corner position $(p_{i},q_{i})$ of $\boldsymbol{\tau}$ is minimal in the poset ``$<$'', i.e., $\boldsymbol{\tau}\subset\mathbb{N}E(w)$. \end{lem} \begin{proof} Suppose that there is a pair $(p_{i},q_{i})$ of $\boldsymbol{\tau}$ and a corner position $(p,q)\in\mathscr{C}(w)$ such that $(p,q)<(p_{i},q_{i})$, i.e., $p>p_{i}$ and $q<q_{i}$. The pair $(p,q)$ is not in $\boldsymbol{\tau}$: since $\mathbf{p}$ and $\mathbf{q}$ are weakly decreasing $s$-tuples, $p>p_{i}$ would force $(p,q)$ to appear in $\boldsymbol{\tau}$ with an index smaller than $i$, while $q<q_{i}$ would force it to appear with an index greater than $i$.
Since the box $(q-1,\overline{p})$ is a SE corner, Equation \eqref{eq:defconerSn} implies that \begin{align}\label{eq:lema_minimal} \overline{q}<x \ \ \mbox{ and }\ \ p\leqslant y, \end{align} where $x:=w(p-1)$ and $y:=w^{-1}(\overline{q})$. When we use the construction algorithm to produce the permutation $w$, observe that the position $p-1$ must be filled by some step and the entry $\overline{q}$ must be placed in some step. So, there must be integers $1\leqslant m,l \leqslant s+1$ such that: \begin{enumerate} \item[a)] The entry $x$ is placed at position $p-1$ during some Step $(m)$. This places a dot at the box $(\overline{x},\overline{p}+1)\in\Lambda(p_{m},q_{m})$; \item[b)] The entry $\overline{q}$ is placed at position $y$ during some Step $(l)$. This places a dot at the box $(q,\overline{y})\in\Lambda(p_{l},q_{l})$. \end{enumerate} Although such integers $m$ and $l$ must exist, we are going to show that they can be neither equal to, nor smaller than, nor greater than each other. This contradiction shows that $(p_{i},q_{i})$ must be minimal in the poset. If $m=l$ then, using Equation \eqref{eq:lema_minimal}, $p-1<y$ are both positions filled during Step $(m)$ and the entries at these positions satisfy $w(p-1)=x >\overline{q}=w(y)$, i.e., there is an inversion inside the step. This contradicts the fact that no inversions are created inside a step. If $m<l$ then, using Equation \eqref{eq:lema_minimal}, we get that $\overline{y}\leqslant \overline{p}$ and $\overline{q} \leqslant x\leqslant \overline{q_{m}}$ (the last inequality comes from the fact that every entry placed by Step $(m)$ is weakly smaller than $\overline{q_{m}}$). This implies that the box $(q,\overline{y})$ also belongs to the region $\Lambda(p_{m},q_{m})$, contradicting item (2) of Proposition \ref{prop:SEcorners}. If $m>l$ then observe that Step $(l)$ must fill all available positions from $p_{l}$ to $y$ in the construction algorithm of the permutation $w$. Since $y>p-1\geqslant p_{i}\geqslant p_{l}$ (because $i<l$), we have that the position $p-1$ is also filled by Step $(l)$, which contradicts the fact that it is filled during Step $(m)$. \end{proof} Recall that a corner position $(p,q)$ of $\mathscr{C}(w)$ is unessential if there are corners $(p_{1},q_{1})$, $(p_{2},q_{2})$ and $(p_{3},q_{3})$ in the NE path $\mathbb{N}E(w)$ such that $(p,q)$ is not a minimal corner in the poset in the upper half of the diagram, the box $(q_{1}-1,\overline{p_{1}})$ lies above and in the same column as the box $(q-1,\overline{p})$, and the box $(\overline{q_{2}},p_{2}-1)$ reflected from $(q_{2}-1,\overline{p_{2}})$ lies to the right and in the same row as $(q-1,\overline{p})$, as in Figure \ref{fig:configuness}. \begin{prop}\label{prop:svexcore} Given $w\in \mathcal{W}_{n}$, suppose that the set of corners $\mathscr{C}(w)$ is the disjoint union $$ \mathscr{C}(w)=\mathbb{N}E(w)\dot{\cup}\mathscr{U}\! (w). $$ Then $w$ is theta-vexillary. \end{prop} \begin{proof} Suppose that the set of corners $\mathscr{C}(w)$ of a permutation $w$ is given by the disjoint union of the NE path $\mathbb{N}E(w)$ and the set of unessential corners $\mathscr{U}\! (w)$. Since all corner positions $(p_{i},q_{i})$ of $\mathbb{N}E(w)$ can be ordered so that $p_{1}\geqslant p_{2}\geqslant \cdots \geqslant p_{r}>0$ and $q_{1}\geqslant q_{2}\geqslant \cdots \geqslant q_{r}$, set $k_{i}$ as the rank $r_{w}(p_{i},q_{i})$ and define the triple $\boldsymbol{\tau}'=(\mathbf{k},\mathbf{p},\mathbf{q})$.
We will prove that $\boldsymbol{\tau}'$ is almost a theta-triple, i.e., it satisfies A1, A2, A3, C1, C2, and B1. In order to get B2 and B3, occasionally some elements $(k_{i},p_{i},q_{i})$ should be removed from $\boldsymbol{\tau}'$. Conditions A1, A2 and A3 are true because $w$ is a signed permutation in $\mathcal{W}_{n}$. In fact, A1 and A3 come directly from the fact that there is no SE corner at row $-1$ or above the middle in column $-1$, since $w(0)=0$, and A2 is satisfied simply because we cannot have dots lying in opposite rows. Now, $a$ and $R(i)$, for $a\leqslant i\leqslant s$, can be defined. Let us prove that $\boldsymbol{\tau}'$ satisfies the remaining conditions. Consider the diagrams sketched in Figure \ref{fig:condproofBC}. \begin{figure} \caption{Configuration required to get conditions C1, C2 (left), B1 (middle), B2, and B3 (right).} \label{fig:condproofBC} \end{figure} For condition C1, let $a\leqslant i\leqslant s$ and consider the regions $A$ and $B$ as in Figure \ref{fig:condproofBC} (left). Denote by $d(A)$ and $d(B)$ the number of dots in each of them. The definition of $R(i)$ can be translated to the diagram as follows: $R(i)$ is the unique index smaller than $a$ such that there is no other corner of $\boldsymbol{\tau}'$ lying to the right of it and in the rows $q_{R(i)}-1,\dots, \overline{q_{i}}$. Suppose that there is a dot in the darker region $A$ of Figure \ref{fig:condproofBC} (left). This dot must be placed by some Step $(j)$, for $j>R(i)$, which implies that the corner position $(p_{j},q_{j})$ is located above the row $\overline{q_{i}}$ and it also places a dot above $\overline{q_{i}}$. However, the construction of a step says that we must fill all entries between them, including $q_{i}$. So, we should have a dot in row $q_{i}$ and another in row $\overline{q_{i}}$, contradicting condition A2. Hence, $d(A)=0$. On the other hand, $d(B)\leqslant -q_{i}$ because condition A2 says that we cannot have dots in opposite rows. Thus, $-q_{i}\geqslant d(A)+d(B)=k_{i}-k_{R(i)}$, since $d(A)+d(B)$ is the number of dots placed from Steps $(R(i)+1)$ to $(i)$. By Lemma \ref{lema:posEntriesStep}, we may show that $\boldsymbol{\tau}'$ satisfies condition C2$'$ instead of C2. In the previous case, we proved that region $A$ contains no dots. This means that no Step from $(R(i)+1)$ to $(a-1)$ places dots in $A$, which is equivalent to saying that all entries placed by Steps $(R(i)+1)$ to $(a-1)$ are strictly bigger than $q_{i}$, proving C2$'$. For condition B1, let $1\leqslant i < a-1$ and consider the rectangular regions $A$ and $B$ as in Figure \ref{fig:condproofBC} (middle). Denote by $d(A)$ and $d(B)$ the number of dots in each of them. Notice that the number of dots in each rectangle is limited by the lengths of its sides, and $d(A)+d(B)$ is the number of dots placed in Step $(i+1)$. If $p_{i}=p_{i+1}$ then $d(B)=0$ and $d(A)<q_{i}-q_{i+1}$, since we cannot place a dot in row ${q_{i}}-1$. So, $(p_{i}-p_{i+1})+(q_{i}-q_{i+1})> d(B)+d(A)=k_{i+1}-k_{i}$. If $p_{i}>p_{i+1}$ then we cannot have a dot in column $\overline{p_{i}}+1$ inside $B$, because otherwise $(q_{i}-1,\overline{p_{i}})$ would not be a SE corner. Hence, $(p_{i}-p_{i+1})+(q_{i}-q_{i+1})> d(B)+d(A)=k_{i+1}-k_{i}$. For conditions B2 and B3, let $a\leqslant i\leqslant s$ and consider the rectangular regions $A$, $B$ and $C$ of Figure \ref{fig:condproofBC} (right). Suppose that $p_{i}>p_{i+1}$.
Using the same argument as for condition C1, all dots between the rows $\overline{q_{i}}$ and $\overline{q_{i+1}}$ are in rectangle $C$ and the number of dots in this region is $d(C)=k_{R(i)}-k_{R(i+1)}$. As in the previous case, $d(B)< p_{i}-p_{i+1}$ and the number of dots in region $A$ is $d(A)\leqslant (q_{i}-q_{i+1})-d(C)$, since we cannot have dots in opposite rows. Therefore, $(p_{i}-p_{i+1})+(q_{i}-q_{i+1})> d(B)+d(A)+d(C)=(k_{i+1}-k_{i})+(k_{R(i)}-k_{R(i+1)})$. The difficulty appears when $p_{i}=p_{i+1}$. In this case, $d(B)=0$ and $d(A)\leqslant (q_{i}-q_{i+1})-d(C)$. Then, $(p_{i}-p_{i+1})+(q_{i}-q_{i+1})\geqslant d(B)+d(A)+d(C)=(k_{i+1}-k_{i})+(k_{R(i)}-k_{R(i+1)})$, which means that equality can occur. So, we need to remove from $\boldsymbol{\tau}'$ those elements for which equality holds. Denote by $I=I_{\boldsymbol{\tau}'}\subset [1,s]$ the set of indexes \begin{align*} I=\{i\geqslant a\ |\ (p_{i}-p_{i+1})+(q_{i}-q_{i+1})=(k_{i+1}-k_{i})+(k_{R(i)}-k_{R(i+1)})\} \end{align*} and define the triple $\boldsymbol{\tau}$ by \begin{align*} \boldsymbol{\tau} := \{(k_{i},p_{i},q_{i})\in \boldsymbol{\tau}' \ |\ i\not\in I\}. \end{align*} Clearly, $\boldsymbol{\tau}$ satisfies A1, A2, A3, C1, C2, and B1. Suppose that $a\leqslant i<j$ are integers such that $i,j\not\in I$ and $i+1,i+2,\dots, j-1 \in I$, i.e., $i$ and $j$ are consecutive indexes in $\boldsymbol{\tau}$. Then they satisfy \begin{gather*} (p_{i}-p_{i+1})+(q_{i}-q_{i+1}) > (k_{i+1}-k_{i})+(k_{R(i)}-k_{R(i+1)}),\\ (p_{i+1}-p_{i+2})+(q_{i+1}-q_{i+2}) = (k_{i+2}-k_{i+1})+(k_{R(i+1)}-k_{R(i+2)}),\\ (p_{i+2}-p_{i+3})+(q_{i+2}-q_{i+3}) = (k_{i+3}-k_{i+2})+(k_{R(i+2)}-k_{R(i+3)}),\\ \vdots \\ (p_{j-1}-p_{j})+(q_{j-1}-q_{j}) = (k_{j}-k_{j-1})+(k_{R(j-1)}-k_{R(j)}).\\ \end{gather*} Therefore, \begin{align*} (p_{i}-p_{j})+(q_{i}-q_{j}) > (k_{j}-k_{i})+(k_{R(i)}-k_{R(j)}), \end{align*} and $\boldsymbol{\tau}$ also satisfies B2 and B3. Finally, observe that the extended diagram of $w(\boldsymbol{\tau})$ is exactly the extended diagram of $w$, which means that $w(\boldsymbol{\tau})=w$. \end{proof} Now, we aim to prove the converse of Proposition \ref{prop:svexcore}. \begin{lem}\label{lema:properties2} Let $w=w(\boldsymbol{\tau})$ be a theta-vexillary signed permutation, and $\boldsymbol{\tau}=(\mathbf{k},\mathbf{p},\mathbf{q})$ be a theta-triple. Then for any $1\leqslant i\leqslant s$ such that $p_{i}>p_{i+1}$, there is no corner position $(p,q)$ different from $(p_{i},q_{i})$ satisfying $p>p_{i+1}$ and $q_{i}\geqslant q$. In other words, $(p_{i},q_{i})$ is the unique SE corner in the region highlighted in Figure \ref{fig:specialSE}. \begin{figure} \caption{Region in the extended diagram where we cannot have a SE corner.} \label{fig:specialSE} \end{figure} \end{lem} \begin{proof} Suppose, to the contrary, that for some $i$ there is such a corner position $(p,q)$, so that $p>p_{i+1}$. If $p_{i}>p>p_{i+1}$ then the position $p-1$ is a descent of $w$, which is impossible since all descents of $w$ are at positions $p_{j}-1$ and none of them equals $p-1$. If $p_{i}=p$ and $q>q_{i}$ then the box $(q-1,\overline{p_{i}})$ is a SE corner, and the dots in row $\overline{q}$ and column $\overline{p_{i}}+1$ are placed as in Figure \ref{fig:prooflema1}. The dot placed in row $\overline{q}$ lies inside the region $\Lambda(p_{i+1},q_{i+1})$, and outside $\Lambda(p_{i},q_{i})$, implying that such a dot is placed during Step $(i+1)$.
\begin{figure} \caption{Sketch of the diagram used to prove Lemma \ref{lema:properties2}.} \label{fig:prooflema1} \end{figure} Notice that the dot at column $\overline{p_{i}}+1$ cannot be placed during Step $(i+1)$ because it would create a descent in Step $(i+1)$. Then, there is $j>i+1$ such that Step $(j)$ placed such dot. In this case, Step $(i+1)$ should skip column $\overline{p_{i}}+1$, which is impossible (by construction, this step places dots in all available columns between $w^{-1}(\overline{q})$ and $p_{i+1}$). \end{proof} The NE path can also contain another kind of SE corner, defined as follows: given a theta-vexillary signed permutation $w$ and a theta-triple $\boldsymbol{\tau}$, a corner position $(p,q)\not\in\boldsymbol{\tau}$ is called \emph{optional} if there are $a\leqslant i\leqslant s$ and $1\leqslant j<a$ such that $p=p_{i}$, $q=\overline{q_{j}}+1$, and $q_{i-1}\geqslant q>q_{i}$. In other words, $(p,q)$ belongs to the NE path just between the corners $(p_{i-1},q_{i-1})$ and $(p_{i},q_{i})$, and the box $(q_{i}-1,\overline{p_{i}})$ lies above and in the same column as $(q-1,\overline{p})$, as shown in Figure \ref{fig:configopt}. Denote by $\mathcal{O}p_{\ttau}(w)$ the set of all optional corners and observe that $\mathcal{O}p_{\ttau}(w)\subset\mathbb{N}E(w)$. \begin{figure} \caption{Configuration of an optional corner $(p,q)$.} \label{fig:configopt} \end{figure} Observe that such a box only occurs if the number of available rows between $q_{i}$ and $q$ is smaller than the number of dots to be placed by Step $(i)$, which is $k_{i}-k_{i-1}$. In other words, we need to have enough dots to place during Step $(i)$ so that some of them are placed below the corner $(p,q)$. This implies that the following equation is satisfied: \begin{align}\label{eq:optional} q-q_{i}=k_{i}-k+k_{j}-k_{R(i)}, \end{align} where $k=r_{w}(p,q)$. Thus, a triple $\boldsymbol{\tau}'$ obtained by adding $(k,p,q)$ to $\boldsymbol{\tau}$ also gives the same permutation, but it is not a theta-triple anymore. For instance, considering the theta-vexillary signed permutation $w=10\ 1\ 5\allowbreak\ 3\ \overline{2}\ \overline{4}\ 6\ \overline{9}\ \overline{8}\ \overline{7}$ of Figure \ref{fig:diagram1}, the set of optional corners $\mathcal{O}p_{\ttau}(w)$ only contains the triple $(7,2,\overline{3})$. \begin{prop}\label{prop:svexcore3} Let $w$ be a theta-vexillary signed permutation and $\boldsymbol{\tau}$ be a theta-triple such that $w=w(\boldsymbol{\tau})$. Then, the set of corners is the disjoint union $$ \mathscr{C}(w)=\boldsymbol{\tau} \ \dot{\cup}\ \mathcal{O}p_{\ttau}(w) \ \dot{\cup}\ \mathscr{U}\! (w). $$ \end{prop} \begin{proof} Denote by $\iota(\boldsymbol{\tau})\subset \mathscr{C}(\iota(w))$ the set of all corner positions of $\boldsymbol{\tau}$ and their reflections, i.e., \begin{align*} \iota(\boldsymbol{\tau})=\bigcup_{i=1}^{s}\{(k_{i},p_{i},q_{i}) \cup (k_{i},p_{i},q_{i})^{\perp}\}. \end{align*} Propositions \ref{prop:descentsofW} and \ref{prop:descentsofWinv} state that all descents of $w$ and $w^{-1}$ are exclusively determined by elements in $\boldsymbol{\tau}$. Moreover, this assertion can be extended to the diagram $D(\iota(w))$ of $\iota(w)$: all descents of $\iota(w)$ are at positions $p_{i}-1$ and $\overline{p_{i}}$, and all descents of $\iota(w)^{-1}$ are at positions $q_{i}-1$ and $\overline{q_{i}}$, where $i$ ranges from $1$ to $s$. Thus, if there is another SE corner, it does not create new descents; it must match existing descents.
For instance, for the triple $\boldsymbol{\tau}$ of Example \ref{ex:perm1} and its diagram in Figure \ref{fig:diagram1}, observe that the corner position $(2,\overline{3})$ does not belong to $\iota(\boldsymbol{\tau})$, but it lies in the same row as the corner position $(4,\overline{3})$ of $\iota(\boldsymbol{\tau})$ and in the same column as the corner position $(2,\overline{6})$ of $\iota(\boldsymbol{\tau})$. We conclude that if there exists a corner position $T$ that does not belong to $\iota(\boldsymbol{\tau})$ then there are corner positions $T',T''$ in $\iota(\boldsymbol{\tau})$ such that $T'$ is in the same column as $T$, and $T''$ is in the same row as $T$. Then, we need to figure out when a combination of the descents of corners $T'$ and $T''$ in $\iota(\boldsymbol{\tau})$ produces a new corner. Consider the diagram $D(\iota(w))$ divided into quadrants as in Figure \ref{fig:quadrant}. \begin{figure} \caption{Quadrants of the diagram.} \label{fig:quadrant} \end{figure} Given $(p,q)\in\iota(\boldsymbol{\tau})$, we say that the SE corner $(q-1,\overline{p})$ belongs to: \begin{itemize} \item Quadrant \textbf{A} if $(p,q)=(p_{i},q_{i})$ for some $i<a$; \item Quadrant \textbf{B} if $(p,q)=(p_{i},q_{i})$ for some $i\geqslant a$; \item Quadrant \textbf{C} if $(p,q)=(p_{i},q_{i})^{\perp}$ for some $i< a$; \item Quadrant \textbf{D} if $(p,q)=(p_{i},q_{i})^{\perp}$ for some $i\geqslant a$. \end{itemize} Consider two corner positions $T'=(p',q')$ and $T''=(p'',q'')$ in $\iota(\boldsymbol{\tau})$ such that $p'>p''$ and $q'\neq q''$, i.e., $T'$ and $T''$ are in different rows and columns. A \emph{cross descent} is a box that lies in the same row as one of these corners (either $T'$ or $T''$) and in the same column as the other one. There are four types of cross descent boxes of $T'$ and $T''$, as shown in Figure \ref{fig:crossing}. \begin{figure} \caption{Four possible cross descent boxes.} \label{fig:crossing} \end{figure} Namely, \begin{itemize} \item \emph{A cross descent of type $\alpha$} is the box $(q''-1,\overline{p'})$ when $q'>q''$; \item \emph{A cross descent of type $\beta$} is the box $(q'-1,\overline{p''})$ when $q'>q''$; \item \emph{A cross descent of type $\gamma$} is the box $(q''-1,\overline{p'})$ when $q'<q''$; \item \emph{A cross descent of type $\delta$} is the box $(q'-1,\overline{p''})$ when $q'<q''$. \end{itemize} Suppose that $T'=(p',q')$ and $T''=(p'',q'')$ are two corners of $\iota(\boldsymbol{\tau})$. Consider that $T'$ lies in some quadrant $\mathbf{X}$, $T''$ lies in some quadrant $\mathbf{Y}$, and they have cross descent box $(a,b)$ of type $\xi$, where $\mathbf{X},\mathbf{Y}\in \{\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D}\}$ and $\xi\in\{\alpha,\beta,\gamma,\delta\}$. We say that this configuration has \emph{shape} \shape{X}{\xi}{Y}. Also denote by $c_{\xi}(T',T'')=(a,b)$ the respective cross descent box. First of all, we need to determine all possible shapes and, then, verify whether the cross descent box of each shape is a SE corner. There are $64$ different combinations of shapes \shape{X}{\xi}{Y}, where $\mathbf{X},\mathbf{Y}\in \{\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D}\}$ and $\xi\in\{\alpha,\beta,\gamma,\delta\}$. However, not every shape is possible because $\boldsymbol{\tau}$ is a theta-triple and $T',T''$ are chosen in $\iota(\boldsymbol{\tau})$. An example of an impossible shape is \shape{A}{\delta}{A} since, by definition, there are no $i<j$ with $T'=(p_{i},q_{i})$ and $T''=(p_{j},q_{j})$ such that $q_{i}<q_{j}$. Thus, only $24$ possible shapes remain.
We list them in Table \ref{tab:shapes}, divided into two categories: the shapes $\shape{X}{\xi}{Y}$ where the cross descent box $c_{\xi}(T',T'')$ belongs to quadrant $\mathbf{A}$ or $\mathbf{B}$, and the shapes $\shape{X}{\xi}{Y}$ where $c_{\xi}(T',T'')$ belongs to quadrant $\mathbf{C}$ or $\mathbf{D}$. \begin{table}[ht] \centering \caption{Possible shapes} \label{tab:shapes} \begin{tabular}{|c|c|} \hline Shapes $\shape{X}{\xi}{Y}$ where $c_{\xi}(T',T'')$ & Shapes $\shape{X}{\xi}{Y}$ where $c_{\xi}(T',T'')$ \\ belongs to \textbf{A} or \textbf{B}: & belongs to \textbf{C} or \textbf{D}: \\ \hline \shape{A}{\alpha}{A}, \shape{A}{\alpha}{B}, \shape{A}{\alpha}{C}, & \shape{C}{\beta}{C}, \shape{D}{\beta}{C}, \shape{A}{\beta}{C}, \\ \shape{A}{\alpha}{D}, \shape{B}{\alpha}{B}, \shape{B}{\alpha}{C}, & \shape{B}{\beta}{C}, \shape{D}{\beta}{D}, \shape{A}{\beta}{D},\\ \shape{A}{\beta}{A}, \shape{A}{\beta}{B}, \shape{B}{\beta}{B}, & \shape{C}{\alpha}{C}, \shape{D}{\alpha}{C}, \shape{D}{\alpha}{D},\\ \shape{A}{\gamma}{D}, \shape{B}{\gamma}{C}, \shape{B}{\gamma}{D}. & \shape{B}{\delta}{C}, \shape{A}{\delta}{D}, \shape{B}{\delta}{D}.\\ \hline \end{tabular} \end{table} Observe that if $c_{\xi}(T',T'')$ belongs to quadrant \textbf{C} or \textbf{D}, then its reflection $c_{\xi}(T',T'')^{\perp}$ belongs to quadrant \textbf{A} or \textbf{B} and corresponds to the cross descent box of the corners $(T'')^{\perp}=(\overline{p''}+1,\overline{q''}+1)$ and $(T')^{\perp}=(\overline{p'}+1,\overline{q'}+1)$. In other words, each shape in the left column of Table \ref{tab:shapes} is equivalent to another one in the right column. Hence, we can consider only the $12$ shapes where $c_{\xi}(T',T'')$ belongs to quadrant \textbf{A} or \textbf{B}. It follows from Lemma \ref{lema:properties} that $\boldsymbol{\tau}\cap\mathscr{U}\! (w)=\varnothing$ because no unessential corner is minimal in the poset. By definition of optional corner, we also have that $\boldsymbol{\tau}\cap\mathcal{O}p_{\ttau}(w)=\varnothing$ and $\mathcal{O}p_{\ttau}(w) \cap \mathscr{U}\! (w)=\varnothing$. Hence, all three sets are pairwise disjoint. Suppose that $T'=(p',q')$ and $T''=(p'',q'')$ in $\iota(\boldsymbol{\tau})$ have some shape \shape{X}{\xi}{Y}, where $\mathbf{X,Y}\in \{\mathbf{A,B,C,D}\}$ and $\xi\in\{\alpha,\beta,\gamma,\delta\}$, such that the cross descent box $c_{\xi}(T',T'')$ is a SE corner in quadrant \textbf{A} or \textbf{B} which \emph{does not} belong to $\boldsymbol{\tau}$. Then, analyzing each situation in the first column of Table \ref{tab:shapes}, we must show that either $c_{\xi}(T',T'')\in \mathcal{O}p_{\ttau}(w) \ \dot{\cup}\ \mathscr{U}\! (w)$ or it leads us to a contradiction. Consider $\xi=\alpha$, where $p'>p''$, $q'>q''$, and $T=(p',q'')$ is a SE corner $(q''-1,\overline{p'})$ not in $\boldsymbol{\tau}$ satisfying the following conditions: \begin{align}\label{eqn:SEcorneralpha} \begin{split} \iota(w)(p'-1)>\overline{q''} &\geqslant \iota(w)(p')\\ \iota(w)^{-1}(\overline{q''})>p'-1 &\geqslant \iota(w)^{-1}(\overline{q''}+1). \end{split} \end{align} \begin{itemize} \item If \shape{X}{\xi}{Y} is one of the shapes \shape{A}{\alpha}{A}, \shape{A}{\alpha}{B} or \shape{B}{\alpha}{B}, then $T'=(p_{i},q_{i})$, $T''=(p_{j},q_{j})$, where $1\leqslant i<j\leqslant s$, and $T=(p_{i},q_{j})$ is a SE corner $(q_{j}-1,\overline{p_{i}})$. But Lemma \ref{lema:properties2} says that $T$ cannot be a corner.
\item If \shape{X}{\xi}{Y} is one of the shapes \shape{A}{\alpha}{C} or \shape{B}{\alpha}{C}, then $T'=(p_{i},q_{i})$, for some $i$, $T''=(p_{j},q_{j})^{\perp}=(\overline{p_{j}}+1,\overline{q_{j}}+1)$, for some $j<a$, and $q_{i}>\overline{q_{j}}+1$. We can assume that $i$ is chosen such that there is no $l>i$ satisfying $p_{i}=p_{l}$ and $q_{i}>q_{l}>\overline{q_{j}}+1$, i.e., there is no corner of $\boldsymbol{\tau}$ in the same column and between the SE corners $T'$ and $T$. If $p_{i}>p_{i+1}$, then Lemma \ref{lema:properties2} is contradicted. Thus, we have that $p_{i}=p_{i+1}$ and $q_{i}>\overline{q_{j}}+1>q_{i+1}$, implying that $T$ is an optional SE corner. \item If \shape{X}{\xi}{Y} is the shape \shape{A}{\alpha}{D}, then $T'=(p_{i},q_{i})$, for some $i<a$, $T''=(p_{j},q_{j})^{\perp}=(\overline{p_{j}}+1,\overline{q_{j}}+1)$, for some $a\leqslant j\leqslant s$, and $q_{i}>\overline{q_{j}}+1$. As in the previous case, we can assume that $i$ is chosen such that there is no $l>i$ satisfying $p_{i}=p_{l}$ and $q_{i}>q_{l}>\overline{q_{j}}+1$, i.e., there is no corner of $\boldsymbol{\tau}$ in the same column and between the SE corners $T'$ and $T$. If $p_{i}>p_{i+1}$, then the corner $T$ contradicts Lemma \ref{lema:properties2}. Thus, $p_{i}=p_{i+1}$, $q_{i}> \overline{q_{j}}+1> q_{i+1}$ and $i=R(j)$. Notice that the dot in the row $\overline{q_{j}}+1$ is between rows $q_{i}$ and $\overline{q_{j}}$ since $T$ is a corner, which is impossible as shown in the proof of Proposition \ref{prop:svexcore} (see Figure \ref{fig:proofprop2}). \begin{figure} \caption{Sketch of the diagram of shape \shape{A}{\alpha}{D}.} \label{fig:proofprop2} \end{figure} \end{itemize} Consider $\xi=\beta$, where $p'>p''$, $q'>q''$, and $T=(p'',q')$ is a SE corner $(q'-1,\overline{p''})$ not in $\boldsymbol{\tau}$ satisfying the following conditions: \begin{align}\label{eqn:SEcornerbeta} \begin{split} \iota(w)(p''-1)>\overline{q'} &\geqslant \iota(w)(p'')\\ \iota(w)^{-1}(\overline{q'})>p''-1 &\geqslant \iota(w)^{-1}(\overline{q'}+1). \end{split} \end{align} \begin{itemize} \item If \shape{X}{\xi}{Y} is one of the shapes \shape{A}{\beta}{A} or \shape{A}{\beta}{B}, then $T'=(p_{i},q_{i})$ and $T''=(p_{j},q_{j})$, for some $i<a$ and $i<j$. Observe that the dots at column $\overline{p_{j}}$ and row $q_{j}$ are placed by Step $(j)$ (or some previous one). Then, by construction, the dot at row $q_{i}-1$ must be placed by some Step $(l)$ for $l\leqslant j$. Thus, $\iota(w)^{-1}(q_{i}-1)\geqslant \overline{p_{j}}$, which contradicts Equation \eqref{eqn:SEcornerbeta}. \item If \shape{X}{\xi}{Y} is the shape \shape{B}{\beta}{B}, then $T'=(p_{i},q_{i})$ and $T''=(p_{j},q_{j})$, for some $a\leqslant i<j\leqslant s$. If $\iota(w)^{-1}({q_{i}}-1)<0$, i.e., the dot in the row $q_{i}-1$ is in quadrant \textbf{B}, then we can proceed as in the previous case. If $\iota(w)^{-1}({q_{i}}-1)>0$, then the dot in the row $q_{i}-1$ belongs to quadrant \textbf{C} and is the reflection of a dot placed during some Step $(l)$ for $l<a$. Since $\iota(w)(q_{i})<\overline{p_{j}}+1\leqslant 0$, we have $q_{l}=\overline{q_{i}}+1$ and the corner $(p_{l},q_{l})$ lies in row $\overline{q_{i}}+1$. Therefore, the reflection $(p_{l},q_{l})^{\perp}$ is in the row $q_{i}-1$, and the corner $T$ is optional or unessential (see Figure \ref{fig:proofprop4}).
\begin{figure} \caption{Sketch of the diagram of shape \shape{B}{\beta}{B}.} \label{fig:proofprop4} \end{figure} \end{itemize} Consider $\xi=\gamma$, where $p'>p''$, $q'<q''$, and $T=(p',q'')$ is a SE corner $(q''-1,\overline{p'})$ not in $\boldsymbol{\tau}$ satisfying the following conditions: \begin{align}\label{eqn:SEcornergamma} \begin{split} \iota(w)(p'-1)>\overline{q''} &\geqslant \iota(w)(p')\\ \iota(w)^{-1}(\overline{q''})>p'-1 &\geqslant \iota(w)^{-1}(\overline{q''}+1). \end{split} \end{align} \begin{itemize} \item If \shape{X}{\xi}{Y} is one of the shapes \shape{A}{\gamma}{D} or \shape{B}{\gamma}{D}, then $T'=(p_{i},q_{i})$, for some $i$, $T''=(p_{j},q_{j})^{\perp}=(\overline{p_{j}}+1,\overline{q_{j}}+1)$, for some $a\leqslant j\leqslant s$, and $q_{i}<\overline{q_{j}}+1$. By Equation \eqref{eqn:SEcornergamma}, $q_{j}-1 \geqslant \iota(w)(p_{i})$ and $\iota(w)^{-1}(q_{j}-1)>p_{i}-1\geqslant 0> \iota(w)^{-1}(q_{j})$, implying that no Step $(l)$, for $l<a$, can place the dot at row $\overline{q_{j}}$. Hence, $q_{R(j)}=\overline{q_{j}}+1$ and $T$ is exactly the corner $(p_{R(j)},q_{R(j)})$ of $\boldsymbol{\tau}$ (see Figure \ref{fig:proofprop3}). \begin{figure} \caption{Sketch of the diagram of shape \shape{A}{\gamma}{D}.} \label{fig:proofprop3} \end{figure} \item If \shape{X}{\xi}{Y} is the shape \shape{B}{\gamma}{C}, then we clearly have that $T$ is an unessential or optional corner. \qedhere \end{itemize} \end{proof} \begin{prop}\label{prop:svexcore2} For $w\in \mathcal{W}_{n}$, $w$ is theta-vexillary if and only if the set of corners $\mathscr{C}(w)$ is the disjoint union $$ \mathscr{C}(w)=\mathbb{N}E(w)\dot{\cup}\mathscr{U}\! (w). $$ \end{prop} \begin{proof} Suppose that $w(\boldsymbol{\tau})$ is theta-vexillary. From Lemma \ref{lema:properties}, $\boldsymbol{\tau}\cup\mathcal{O}p_{\ttau}(w)\subset\mathbb{N}E(w)$. On the other hand, Proposition \ref{prop:svexcore3} implies that $\mathbb{N}E(w)\subset\boldsymbol{\tau}\cup\mathcal{O}p_{\ttau}(w)$ since $\mathbb{N}E(w)\cap\mathscr{U}\! (w)=\varnothing$. Hence, $\mathbb{N}E(w)= \boldsymbol{\tau}\cup\mathcal{O}p_{\ttau}(w)$. \end{proof} \begin{rem} If $w$ is a theta-vexillary signed permutation but we do not know a theta-triple $\boldsymbol{\tau}$ such that $w=w(\boldsymbol{\tau})$, we can use the process in the proof of Proposition \ref{prop:svexcore} to obtain $\boldsymbol{\tau}$. Basically, let $\boldsymbol{\tau}$ consist of all the corners in the NE path $\mathbb{N}E(w)$; then remove all the optional corners from it, which results in a valid theta-triple $\boldsymbol{\tau}$ of $w$. \end{rem} \begin{prop}\label{prop:unique} The theta-triple is unique for each theta-vexillary signed permutation. \end{prop} \begin{proof} Suppose that $\boldsymbol{\tau}$ and $\tilde\boldsymbol{\tau}$ are two different theta-triples such that $w=w(\boldsymbol{\tau})=w(\tilde\boldsymbol{\tau})$. Then, $\boldsymbol{\tau}\dot{\cup}\mathcal{O}p_{\ttau}(w)=\mathbb{N}E(w)=\tilde{\boldsymbol{\tau}}\dot{\cup} \mathcal{O}p_{\tilde\boldsymbol{\tau}}(w)$. If there is a corner position $(p,q_{1})\in \mathcal{O}p_{\boldsymbol{\tau}}(w)\cap \tilde{\boldsymbol{\tau}}$, then there is $q_{2}>q_{1}$ such that $(p,q_{2})\in \boldsymbol{\tau}$ is a corner position immediately above it. Notice that $(p,q_{2})$ does not belong to $\tilde\boldsymbol{\tau}$, otherwise condition B2 of $\tilde{\boldsymbol{\tau}}$ for both corners would contradict Equation \eqref{eq:optional} for the optional corner $(p,q_{1})$. Then, $(p,q_{2}) \in \mathcal{O}p_{\tilde\boldsymbol{\tau}}(w)\cap {\boldsymbol{\tau}}$.
For the same reason, there is $q_{3}>q_{2}$ such that $(p,q_{3})\in \mathcal{O}p_{\boldsymbol{\tau}}(w)\cap \tilde{\boldsymbol{\tau}}$, and so on. Hence, this process would have to be repeated forever, which is impossible since the sets are finite. Therefore, $\mathcal{O}p_{\boldsymbol{\tau}}(w)\cap \tilde{\boldsymbol{\tau}}=\emptyset$ and, for the same reason, $\mathcal{O}p_{\tilde\boldsymbol{\tau}}(w)\cap {\boldsymbol{\tau}}=\emptyset$, which implies that $\boldsymbol{\tau}=\tilde{\boldsymbol{\tau}}$. \end{proof} \section{Pattern avoidance} Recall that given a signed pattern $\pi=\pi(1)\ \pi(2)\cdots \pi(m)$ in $\mathcal{W}_{m}$, a signed permutation $w$ \emph{contains} $\pi$ if there is a subsequence $w(i_{1})\cdots w(i_{m})$ such that the signs of $w(i_{j})$ and $\pi(j)$ are the same for all $j$, and the absolute values of the subsequence are in the same relative order as the absolute values of $\pi$. Otherwise, $w$ \emph{avoids} $\pi$. \begin{prop}\label{prop:patavoid} A signed permutation $w$ is theta-vexillary if and only if $w$ avoids the following thirteen signed patterns: $[\overline{1}\ 3\ 2]$, $[\overline{2}\ 3\ 1]$, $[\overline{3}\ 2\ 1]$, $[\overline{3}\ 2\ \overline{1}]$, $[2\ 1\ 4\ 3]$, $[2\ \overline{3}\ 4\ \overline{1}]$, $[\overline{2}\ \overline{3}\ 4\ \overline{1}]$, $[3\ \overline{4}\ 1\ \overline{2}]$, $[3\ \overline{4}\ \overline{1}\ \overline{2}]$, $[\overline{3}\ \overline{4}\ 1\ \overline{2}]$, $[\overline{3}\ \overline{4}\ \overline{1}\ \overline{2}]$, $[\overline{4}\ 1\ \overline{2}\ 3]$, and $[\overline{4}\ \overline{1}\ \overline{2}\ 3]$. \end{prop} \begin{proof} We know, by Proposition \ref{prop:svexcore}, how to describe a theta-vexillary permutation by the SE corners of the extended diagram. Assume that $w$ is a theta-vexillary signed permutation. To prove that it avoids all these thirteen patterns, we assume that one of these patterns is contained in $w$ and then show that there is a SE corner $T$ such that $T\not\in\mathbb{N}E(w)\cup\mathscr{U}\! (w)$. Suppose that $w$ contains $[2\ 1\ 4\ 3]$ as a subsequence $(w(a)\ w(b)\ w(c)\ w(d))$ satisfying $w(b)<w(a)<w(d)<w(c)$ for some $a<b<c<d$ as in Figure \ref{fig:patternsavoid2143} (left). Then, there is at least one SE corner in each shaded area, which will be denoted by $T$ and $T'$. Clearly $T\not\in\mathbb{N}E(w)$ and, by Proposition \ref{prop:svexcore2}, it should be an unessential corner. Since $\boldsymbol{\tau}$ is a theta-triple of $w$, there are $i<j$ such that Step $(i)$ places the dot in the column $\overline{b}$ and Step $(j)$ places the dot in the column $\overline{a}$. However, this leads to a contradiction: we cannot place a dot in the row $w(\overline{b})$ during Step $(i)$ and skip the row $w(\overline{a})$, since the latter would only be placed later. \begin{figure} \caption{Situation where $w$ contains: $[2\ 1\ 4\ 3]$ on the left; and $[\overline{1}\ 3\ 2]$ on the right.} \label{fig:patternsavoid2143} \end{figure} Now, suppose that $w$ contains $[\overline{1}\ 3\ 2]$ as a subsequence $(w(a)\ w(b)\ w(c))$ satisfying $\overline{w(a)}<w(c)<w(b)$ for some $a<b<c$ as in Figure \ref{fig:patternsavoid2143} (right). Then, there is at least one SE corner in each shaded area, which will be denoted by $T$ and $T'$. By definition, $T$ is not an unessential corner, nor does it belong to the NE path, i.e., $T\not\in \mathbb{N}E(w)\cup\mathscr{U}\! (w)$, which contradicts Proposition \ref{prop:svexcore2}.
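As a brief computational aside, containment of a signed pattern as defined above is easy to test by brute force, which is convenient for checking avoidance of the thirteen patterns of Proposition \ref{prop:patavoid} on examples. The following Python sketch is only an illustration (it is not part of the argument, and the function names are ours); barred values are encoded as negative integers.
\begin{verbatim}
from itertools import combinations

def contains_pattern(w, pi):
    # w, pi in one-line notation with barred entries as negatives,
    # e.g. w = [10, 1, 5, 3, -2, -4, 6, -9, -8, -7], pi = [2, 1, 4, 3]
    m = len(pi)
    for idx in combinations(range(len(w)), m):
        sub = [w[i] for i in idx]
        # the signs must agree position by position
        if any((s > 0) != (p > 0) for s, p in zip(sub, pi)):
            continue
        # the absolute values must be in the same relative order
        if all((abs(sub[i]) < abs(sub[j])) == (abs(pi[i]) < abs(pi[j]))
               for i in range(m) for j in range(i + 1, m)):
            return True
    return False

def avoids_all(w, patterns):
    # w is theta-vexillary iff it avoids all thirteen listed patterns
    return not any(contains_pattern(w, p) for p in patterns)
\end{verbatim}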
Returning to the argument, notice that we could consider the diagram of $w$ restricted to the columns $\overline{a},\overline{b},\overline{c}$ and rows $0,\pm w(\overline{a}),\pm w(\overline{b}),\pm w(\overline{c})$. Then, the corners $T$ and $T'$ in such a restriction can be easily represented as in the first diagram of Figure \ref{fig:patternsavoid}. Clearly, such a configuration tells us that $T$ is neither unessential nor minimal. The same idea can be used to prove that the remaining eleven patterns in Figure \ref{fig:patternsavoid} should be avoided. \begin{figure} \caption{Diagram of $w$ restricted to 11 different patterns.} \label{fig:patternsavoid} \end{figure} Now, let us assume that $w$ is a permutation that avoids all the thirteen patterns listed above. We are going to prove that $\mathscr{C}(w)=\mathbb{N}E(w)\cup\mathscr{U}\! (w)$ and, hence, that $w$ is a theta-vexillary permutation. Suppose that there are corners $T=(p,q)$ and $T'=(p',q')$ such that $q>0>q'$ and $p'>p>0$, i.e., $T$ is in quadrant \textbf{A}, $T'$ is in quadrant \textbf{B}, and $T'<T$. If we denote $a:={p}$, $b:={p'}-1$ and $c:=w^{-1}(\overline{q'})$, then they satisfy $0<a<b<c$ and $w(a)<0<w(c)<w(b)$. Observe that $\overline{a},\overline{b},\overline{c}$ are the columns of the dots in Figure \ref{pic:3to2_1}. \begin{figure} \caption{Sketch for the case where $T$ is in quadrant \textbf{A} and $T'$ is in quadrant \textbf{B}.} \label{pic:3to2_1} \end{figure} In order to relate the subsequence $(w(a)\ w(b)\ w(c))$ of $w$ to some 3-pattern $\pi$, we need to describe all possible orderings of $\overline{w(a)}$, $w(b)$ and $w(c)$. \begin{itemize} \item If ${\overline{w(a)}}<w(c)<w(b)$ then $\pi=[\overline{1}\ 3\ 2]$; \item If $w(c)<{\overline{w(a)}}<w(b)$ then $\pi=[\overline{2}\ 3\ 1]$; \item If $w(c)<w(b)<{\overline{w(a)}}$ then $\pi=[\overline{3}\ 2\ 1]$. \end{itemize} By hypothesis, the pattern in each case must be avoided. Hence the configuration in Figure \ref{pic:3to2_1} is impossible. Now, suppose that there are corners $T=(p,q)$ and $T'=(p',q')$ such that $q>q'>0$ and $p'>p>0$, i.e., both $T$ and $T'$ are in quadrant \textbf{A} and $T'<T$. Denote $i:=w^{-1}(\overline{q}+1)$, $a={p}$, $b={p'}-1$ and $c=w^{-1}(\overline{q'})$. If $i>0$, then they satisfy $0<i<a<b<c$ and $w(a)<w(i)<w(c)<w(b)$. Observe that $\overline{\imath},\overline{a},\overline{b},\overline{c}$ are the columns of the dots in Figure \ref{pic:3to2_2} (left). \begin{figure} \caption{Sketch for the case where both $T$ and $T'$ are in quadrant \textbf{A}.} \label{pic:3to2_2} \end{figure} In order to relate the subsequence $(w(i)\ w(a)\ w(b)\ w(c))$ of $w$ to some 4-pattern $\pi$, we need to describe all possible orderings of $\overline{w(i)}, \overline{w(a)}, \overline{w(c)}$ and $\pm w(b)$. \begin{itemize} \item If ${\overline{w(b)}}<\overline{w(c)}<\overline{w(i)}<\overline{w(a)}$ then $\pi=[\overline{3}\ \overline{4}\ \overline{1}\ \overline{2}]$; \item If ${w(b)}<\overline{w(c)}<\overline{w(i)}<\overline{w(a)}$ then $\pi=[\overline{3}\ \overline{4}\ 1\ \overline{2}]$; \item If $\overline{w(c)}<{w(b)}<\overline{w(i)}<\overline{w(a)}$ then $\pi=[\overline{3}\ \mathbf{\overline{4}\ 2\ \overline{1}}]$; \item If $\overline{w(c)}<\overline{w(i)}<{w(b)}<\overline{w(a)}$ then $\pi=[\overline{2}\ \mathbf{\overline{4}\ 3\ \overline{1}}]$; \item If $\overline{w(c)}<\overline{w(i)}<\overline{w(a)}<{w(b)}$ then $\pi=[\overline{2}\ \overline{3}\ 4\ \overline{1}]$. \end{itemize} By hypothesis, the pattern in each case must be avoided (in some cases, the highlighted entries already form an occurrence of the forbidden pattern $[\overline{3}\ 2\ \overline{1}]$).
Hence this configuration is impossible. If $i<0$, then we have four possibilities for the position of $\overline{\imath}>0$ in the sequence $0<a<b<c$. Observe that $i,\overline{a},\overline{b},\overline{c}$ are the columns of the dots in Figure \ref{pic:3to2_2} (right). Table \ref{tab:possiblearreng} combines all these possibilities along with all possible orderings of $w(\overline{\imath}), \overline{w(a)}, \overline{w(c)}$ and $\pm w(b)$, in order to obtain the respective 4-pattern $\pi$ corresponding to the associated subsequence of $w$. \begin{table}[ht] \caption{Combinations to get the respective 4-pattern of the subsequence of $w$.} \label{tab:possiblearreng} \renewcommand{\arraystretch}{1.3} \begin{center} \footnotesize{ \begin{tabular}{|c|c|c|c|c|} \hline & $\overline{\imath}\! <\! a\! <\! b \! <\! c$ & $a\! <\! \overline{\imath}\! <\! b\! <\! c$ & $a\! <\! b\! <\! \overline{\imath}\! <\! c$ & $a\! <\! b\! <\! c\! <\! {\overline{\imath}}$ \\ \hline ${\overline{w(b)}}\! <\! \overline{w(c)}\! <\! {w(\overline{\imath})}\! <\! \overline{w(a)}$ & $[3\ \overline{4}\ \overline{1}\ \overline{2}]$ & $[\mathbf{\overline{4}\ 3\ \overline{1}}\ \overline{2}]$ & $[\mathbf{\overline{4}}\ \overline{1}\ \mathbf{3\ \overline{2}}]$ & $[\overline{4}\ \overline{1}\ \overline{2}\ 3]$ \\ \hline ${{w(b)}}\! <\! \overline{w(c)}\! <\! {w(\overline{\imath})}\! <\! \overline{w(a)}$ & $[3\ \overline{4}\ 1\ \overline{2}]$ & $[\mathbf{\overline{4}\ 3}\ 1\ \mathbf{\overline{2}}]$ & $[\mathbf{\overline{4}}\ 1\ \mathbf{3\ \overline{2}}]$ & $[\overline{4}\ 1\ \overline{2}\ 3]$ \\ \hline $\overline{w(c)}\! <\!{{w(b)}}\! <\! {w(\overline{\imath})}\! <\! \overline{w(a)}$ & $[3\ \mathbf{\overline{4}\ 2\ \overline{1}}]$ & $[\mathbf{\overline{4}\ 3}\ 2\ \mathbf{\overline{1}}]$ & $[\mathbf{\overline{4}\ 2}\ 3\ \mathbf{\overline{1}}]$ & $[\mathbf{\overline{4}\ 2\ \overline{1}}\ 3]$ \\ \hline $\overline{w(c)}\! <\! {w(\overline{\imath})}\! <\!{{w(b)}}\! <\! \overline{w(a)}$ & $[2\ \mathbf{\overline{4}\ 3\ \overline{1}}]$ & $[\mathbf{\overline{4}\ 2}\ 3\ \mathbf{\overline{1}}]$ & $[\mathbf{\overline{4}\ 3}\ 2\ \mathbf{\overline{1}}]$ & $[\mathbf{\overline{4}\ 3\ \overline{1}}\ 2]$ \\ \hline $\overline{w(c)}\! <\! {w(\overline{\imath})}\! <\! \overline{w(a)}\! <\!{{w(b)}}$ & $[2\ \overline{3}\ 4\ \overline{1}]$ & $[\mathbf{\overline{3}\ 2}\ 4\ \mathbf{\overline{1}}]$ & $[3\ \mathbf{\overline{4}\ 2\ \overline{1}}]$ & $[\mathbf{\overline{3}\ 4}\ \overline{1}\ \mathbf{2}]$ \\ \hline \end{tabular}} \end{center} \end{table} By hypothesis, the pattern in each case must be avoided and, hence, this case is impossible. Finally, suppose that there are corners $T=(p,q)$ and $T'=(p',q')$ such that $0>q>q'$ and $p'>p>0$, i.e., both $T$ and $T'$ are in quadrant \textbf{B}, and $T'<T$. If we denote $i=w^{-1}(\overline{q}+1)$, $a={p}-1$, $b={p}$, $c={p'}-1$ and $d=w^{-1}(\overline{q'})$, then they satisfy $i<a<b<c<d$, $w(b)<w(a)$ and $w(b)<w(i)<w(d)<w(c)$. Observe that $\overline{\imath},\overline{a},\overline{b},\overline{c}$ are the columns of the dots in Figure \ref{pic:3to2_3} (left).
\begin{figure} \caption{Sketch for the case where both $T$ and $T'$ are in quadrant \textbf{B}.} \label{pic:3to2_3} \end{figure} Consider the following situations: \begin{itemize} \item If ${w(b)}<0$ then the subsequence $(w(b)\ w(c)\ w(d))$ of $w$ is a 3-pattern $\pi$ equal to $[\overline{1}\ 3\ 2]$, $[\overline{2}\ 3\ 1]$ or $[\overline{3}\ 2\ 1]$, which is impossible; \item If $0<w(b)<{w(a)}<w(d)<w(c)$ then $(w(a)\ w(b)\ w(c)\ w(d))$ is a 4-pattern $\pi=[2\ 1\ 4\ 3]$ and should also be avoided; \item If $i>0$ and $w(b)>0$ then the subsequence $(w(i)\ w(b)\ w(c)\ w(d))$ is a 4-pattern $\pi=[2\ 1\ 4\ 3]$ and should also be avoided; \item If $0>i>\overline{c}$ and ${w(b)}>0$ then the subsequence $(w(\overline{\imath})\ w(c)\ w(d))$ of $w$ is a 3-pattern $\pi$ equal to $[\overline{1}\ 3\ 2]$, $[\overline{2}\ 3\ 1]$ or $[\overline{3}\ 2\ 1]$, which is impossible; \item If $i<\overline{c}$ and $0<w(b)<w(d)<{w(a)}$ then clearly there are SE corners $S$ and $S'$ as in Figure \ref{pic:3to2_3} (right). Therefore, such a construction implies that $T$ is an unessential box. \qedhere \end{itemize} \end{proof} Therefore, Propositions \ref{prop:svexcore2} and \ref{prop:patavoid} prove Theorem \ref{thm:main}. This pattern avoidance criterion allows us to compare theta-vexillary permutations with the vexillary permutations of type B defined by Billey and Lam \cite{BL}. According to them, a signed permutation is vexillary of type B if the Stanley symmetric function $F_{w}$ is equal to the Schur $Q$-function $Q_{\lambda}$ for some shape $\lambda \vdash \ell(w)$ with distinct parts. In \cite[Theorem 7]{BL}, they proved that a vexillary permutation of type B avoids eighteen patterns, which include all thirteen patterns given in our main theorem. Hence, every vexillary permutation of type B is theta-vexillary. Therefore, for an arbitrary theta-vexillary signed permutation $w$, we cannot say that the Stanley symmetric function $F_{w}$ is equal to the Schur $Q$-function of $w$. In principle, this property is not expected since the equality holds for every vexillary signed permutation, as proved in \cite[Corollary 5.2]{AF14}. \end{document}
math
\begin{document} \let\thefootnote\relax\footnote{2010 Mathematics Subject Classification: 53C42, 53A10. \\ \indent The authors are partially supported by INdAM-GNSAGA. \\ \indent Keywords: higher order mean curvature, Alexandrov reflection technique, hyperbolic space.} \begin{abstract} We classify hypersurfaces with rotational symmetry and positive constant $r$-th mean curvature in $\mathbb H^n \times \mathbb R$. Specific constant higher order mean curvature hypersurfaces invariant under hyperbolic translation are also treated. Some of these invariant hypersurfaces are employed as barriers to prove a Ros--Rosenberg type theorem in $\mathbb H^n \times \mathbb R$: we show that compact connected hypersurfaces of constant $r$-th mean curvature embedded in $\mathbb H^n \times [0,\infty)$ with boundary in the slice $\mathbb H^n \times \{0\}$ are topological disks under suitable assumptions. \end{abstract} \title{\textbf{On constant higher order mean curvature hypersurfaces in \texorpdfstring{$\mathbb H^{n}\times\mathbb R$}{Hn x R}}} \tableofcontents \section*{Introduction} \label{intro} Let $M$ be a hypersurface in an $(n+1)$-dimensional Riemannian manifold and denote by $k_1,\dots,k_n$ its principal curvatures. The \emph{$r$-th mean curvature} of $M$ is the elementary symmetric polynomial $H_r$ in the variables $k_i$ defined as $$\binom{n}{r}H_r \coloneqq \sum_{i_1 < \dots < i_r} k_{i_1}k_{i_2}\cdots k_{i_r}.$$ We say that $M$ is an \emph{$H_r$-hypersurface} when $H_r$ is a positive constant, for a given $r \in \{1,\dots,n\}$. Note in particular that $H_1$ is the mean curvature of $M$. In his pioneering work \cite{reilly}, Reilly showed that $H_r$-hypersurfaces in space forms appear as solutions of a variational problem, thus extending the corresponding property of constant mean curvature surfaces. Earlier, Alexandrov had dealt with higher mean curvature functions in a series of papers \cite{alexandrov}, and later on many existence and classification results were achieved in space forms. A list of contributions to this subject (far from exhaustive) is \cite{alias, barbosa, caffarelli, hounie, hsiang, leite, mori, nelli-rosenberg, nelli-semmler, nelli-zhu, oscar, ros, rosenberg, rosenberg-spruck}. Studies on $H_r$-hypersurfaces in more general ambient manifolds appeared in the literature more recently; see for example \cite{cheng-rosenberg, elbert-nelli2, elbert-nelli1, elbert-nelli-santos}. Most notable for us are the results of Elbert and Sa Earp \cite{elb-earp} on $H_r$-hypersurfaces in $\mathbb H^n\times \mathbb R$, where $\mathbb H^n$ is the hyperbolic space, and of de Lima--Manfio--dos Santos \cite{lima} on $H_r$-hypersurfaces in $N\times \mathbb R$, where $N$ is a Riemannian manifold. The goal of this paper is two-fold. Our first result is a complete classification of rotationally invariant $H_r$-hypersurfaces in $\mathbb H^n \times \mathbb R$. Note that $\mathbb H^n \times \mathbb R$ has non-constant sectional curvature, but it is symmetric enough to allow a fruitful investigation of invariant hypersurfaces. The mean curvature case $r=1$ has already been studied by Hsiang--Hsiang in \cite{hsiang} and by B\'erard and Sa Earp in \cite{berard-earp}. A general study of $H_r$-hypersurfaces invariant by an ambient isometry in $N \times \mathbb R$, with $N$ a Riemannian manifold, has been carried out by de Lima--Manfio--dos Santos \cite{lima}. We point out that part of our classification results is included in \cite{lima}, but our description and focus are different in nature for several reasons.
First, we use a parametrization that allows us to consider and analyze hypersurfaces with singularities. In fact, we get $13$ different qualitative behaviors for rotational $H_r$-hypersurfaces in $\mathbb H^n \times \mathbb R$. Moreover, we always include the case $n=r$, which often produces exceptional examples. Finally, we provide detailed topological and geometric descriptions for all values of the parameters involved. The geometry of $H_r$-hypersurfaces with $r \geq 2$ is substantially different from that of constant mean curvature hypersurfaces. This is mainly due to the full non-linearity of the relation among the principal curvatures, in contrast with the quasi-linearity of the mean curvature equation. Most importantly, many singular cases arise and need to be classified. For instance, one gets conical singularities, which are not allowed in the constant mean curvature case. Our classification results are summarized in Tables \ref{table:1}--\ref{table:3}. We recall that $H_r$-hypersurfaces invariant by rotations in space forms were studied by Leite and Mori \cite{leite, mori} for the case $r=2$, and by Palmas \cite{oscar} for any $r$. Our second goal is to understand the topology of embedded $H_r$-hypersurfaces in $\mathbb H^n \times [0,\infty)$ with boundary in the horizontal slice $\mathbb H^n \times \{0\}$. We prove the following Ros--Rosenberg type theorem. \begin{theorem*} Let $M$ be a compact connected hypersurface in $\mathbb H^n \times [0,\infty)$ with constant $H_r > (n-r)/n$ and boundary in the slice $\mathbb H^n \times \{0\}$. If the boundary is sufficiently small and horoconvex, then $M$ is a topological disk. \end{theorem*} Horoconvexity of the boundary is a natural assumption in the hyperbolic space, whereas what ``sufficiently small'' means will be explained more precisely in Section \ref{main-result}, cf.\ Theorem \ref{main-thm}. A fundamental tool in our proof is the Alexandrov reflection technique, for which one needs a tangency principle. For $H_r$-hypersurfaces in Riemannian manifolds, such a tangency principle is proved by Fontenele--Silva \cite{fontenele-silva} under suitable assumptions. We point out that the geometry of our hypersurfaces implies the existence of a strictly convex point, which guarantees the validity of the tangency principle (see Remark \ref{convex-point}). Results analogous to the above theorem in the constant mean curvature case are due to Ros--Rosenberg in $\mathbb R^3$ \cite[Theorem 1]{ros-rosenberg}, Semmler in $\mathbb H^3$ \cite[Theorem 2]{semmler}, and Nelli--Pipoli in $\mathbb H^n \times \mathbb{R}$ \cite[Theorem 4.1]{nelli-pipoli}. For $H_r$-hypersurfaces in Euclidean space, the Ros--Rosenberg theorem is proved by Nelli--Semmler \cite[Theorem 1.2]{nelli-semmler}. In order to prove our Ros--Rosenberg type theorem we also need to discuss certain $H_r$-hypersurfaces that are invariant under hyperbolic translation. The structure of the paper is the following. In Section \ref{rotational-surfaces} we classify $H_r$-hypersurfaces in $\mathbb H^n \times \mathbb R$ with rotational symmetry. Since the cases $r$ even and odd exhibit substantial differences, we treat them separately in two subsections. At the end of each one, we provide complete descriptions of the various hypersurfaces that occur, see Theorems \ref{str-thm1}--\ref{str-thm4}, \ref{str-thm5}--\ref{str-thm8}, and Tables \ref{table:1}--\ref{table:3}. In Section \ref{translation-surfaces} we study specific translation $H_r$-hypersurfaces, cf.\ Theorem \ref{str-thm-cyl2}.
Finally, in Section \ref{estimates} and \ref{limacon} we provide useful estimates and tools to be employed in Section \ref{main-result}, where we prove Ros--Rosenberg's Theorem (see Theorem~\ref{main-thm}). \section{Classification of rotation \texorpdfstring{$H_r$}{Hr}-hypersurfaces} \label{rotational-surfaces} We will generally use the Poincar\'e model of the hyperbolic space $\mathbb H^n$, $n \geq 2$. This is defined as the open ball of Euclidean unit radius in $\mathbb R^n$ centered at the origin, and is equipped with the metric $\tilde{g}$ that at a point $x \in \mathbb H^n$ takes the form $$\tilde{g}_x \coloneqq \left(\frac{2}{1-\lVert x\rVert^2}\right)^2(dx_1^2+\dots+dx_n^2),$$ where $\lVert \cdot \rVert$ denotes the Euclidean norm, and $(x_i)_i$ are the standard coordinates in $\mathbb R^n$. We work with the Riemannian cylinder $\mathbb H^n \times \mathbb R$ with product metric $g \coloneqq \tilde{g}+dt^2$, where $t$ is a global coordinate on the $\mathbb R$ factor. In order to describe rotational hypersurfaces inside $\mathbb H^n \times \mathbb R$ we follow \cite{elb-earp}. Up to isometry of the ambient space, a rotationally invariant hypersurface is determined by rotation of a profile curve contained in a vertical plane through the origin inside $\mathbb H^n \times \mathbb R$. Let us take the plane \begin{equation*} V \coloneqq \{(x_1,\dots,x_n,t) \in \mathbb H^n \times \mathbb R: x_1 = \dots = x_{n-1} = 0\}, \end{equation*} and consider the curve parametrized by $\rho > 0$ given as $$x_n = \tanh(\rho/2), \qquad t=\lambda(\rho).$$ The function $\lambda$ will be determined by imposing that the rotational hypersurface generated by the profile curve have $r$-th mean curvature equal to a positive constant. We already defined the $r$-th mean curvature in the Introduction, but we write it here for further references. \begin{definition} \label{r-mean-curv} Let $k_1,\dots,k_n$ be the principal curvatures of an immersed hypersurface in any Riemannian manifold. The \emph{$r$-th mean curvature} $H_r$ is the elementary symmetric polynomial defined as $$\binom{n}{r}H_r \coloneqq \sum_{i_1 < \dots < i_r} k_{i_1}k_{i_2}\cdots k_{i_r}.$$ \end{definition} Rotating the curve about the line $\{0\} \times \mathbb R$ generates a hypersurface with parametrization $$\mathbb R_+ \times S^{n-1} \to \mathbb H^n \times \mathbb R, \qquad (\rho,\xi) \mapsto (\tanh(\rho/2)\xi,\lambda(\rho)).$$ The unit normal field to the immersion is $$\nu = \frac{1}{(1+\dot{\lambda}^2)^{\frac12}}\left(-\frac{\dot{\lambda}}{2\cosh^2(\rho/2)}\xi,1\right),$$ and the associated principal curvatures are \begin{equation} \label{data} k_1 = \dots = k_{n-1} = \mathrm{cotgh}(\rho)\frac{\dot{\lambda}}{(1+\dot{\lambda}^2)^{\frac12}}, \qquad k_n = \frac{\ddot{\lambda}}{(1+\dot{\lambda}^2)^{\frac32}}, \end{equation} where $\dot{\lambda}$ denotes the derivative of $\lambda$ with respect to $\rho$. By applying suitable vertical reflections or translations of the hypersurface generated by $\lambda$, one gets several types of rotationally invariant hypersurfaces. Care should be taken when applying the reflection $\lambda \mapsto -\lambda$, as this changes the orientation of the hypersurface. However, setting $\nu \mapsto -\nu$ leaves the signs of each $k_i$ unchanged. Hereafter we classify those rotationally invariant hypersurfaces with positive constant $r$-th mean curvature. 
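As a quick sanity check of \eqref{data} and of Definition \ref{r-mean-curv}, the principal curvatures and the value of $H_r$ of a rotational hypersurface can be evaluated numerically from the first two derivatives of a profile function $\lambda$. The following Python sketch is purely illustrative (the helper names are ours, and it is not used anywhere in the paper).
\begin{verbatim}
from itertools import combinations
from math import comb, prod, sqrt, tanh

def principal_curvatures(rho, dlam, ddlam, n):
    # k_1 = ... = k_{n-1} and k_n as in the text, where dlam and ddlam
    # are the first and second derivatives of lambda at rho > 0
    k_tan = (1.0 / tanh(rho)) * dlam / sqrt(1.0 + dlam**2)
    k_prof = ddlam / (1.0 + dlam**2) ** 1.5
    return [k_tan] * (n - 1) + [k_prof]

def H(r, curvatures):
    # normalized elementary symmetric polynomial of the principal curvatures
    n = len(curvatures)
    return sum(prod(c) for c in combinations(curvatures, r)) / comb(n, r)
\end{verbatim}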
Specializing the expression of the $r$-th mean curvature to the case $k_1 = \dots = k_{n-1}$ and $k_n$ as in \eqref{data} we find $$nH_r = (n-r)\mathrm{cotgh}^r(\rho)\frac{\dot{\lambda}^r}{(1+\dot{\lambda}^2)^{\frac{r}{2}}}+\mathrm{cotgh}^{r-1}(\rho)\frac{r\dot{\lambda}^{r-1}\ddot{\lambda}}{(1+\dot{\lambda}^2)^{\frac{r+2}{2}}}.$$ If we divide by $\cosh^{r-1}(\rho)$ and multiply by $\sinh^{n-1}(\rho)$ both sides of the identity, we can rewrite the above as \begin{equation} \label{start} n\frac{\sinh^{n-1}(\rho)}{\cosh^{r-1}(\rho)}H_r = \frac{d}{d \rho}\left(\sinh^{n-r}(\rho)\frac{\dot{\lambda}^r}{(1+\dot{\lambda}^2)^{\frac{r}{2}}} \right), \qquad r = 1,\dots,n. \end{equation} Choose now $H_r$ to be a positive constant, and define the function $$I_{n,r}(\rho) \coloneqq \int_0^{\rho} \frac{\sinh^{n-1}(\tau)}{\cosh^{r-1}(\tau)}\,d\tau.$$ We can then integrate \eqref{start} once to obtain \begin{equation} \label{first-integ} nH_rI_{n,r}(\rho)+d_r = \sinh^{n-r}(\rho)\frac{\dot{\lambda}^r}{(1+\dot{\lambda}^2)^{\frac{r}{2}}}, \end{equation} where $d_r$ is an integration constant depending on $r$. Then one integrates again to find (up to sign for $r$ even) \begin{equation} \label{lambda} \lambda_{H_r,d_r}(\rho) = \int_{\rho_-}^{\rho} \frac{(nH_rI_{n,r}(\xi)+d_r)^{\frac{1}{r}}}{\sqrt{\sinh^{\frac{2(n-r)}{r}}(\xi)-(nH_rI_{n,r}(\xi)+d_r)^{\frac{2}{r}}}}\,d\xi, \end{equation} where $\rho_- \geq 0$ is the infimum of the set where the integrand function makes sense. One should think of $\lambda$ as a one-parameter family of curves depending on $d_r$. We write $\lambda_{H_r,d_r}$ as in \eqref{lambda} to make the dependence on $H_r$ and $d_r$ more explicit. \begin{remark} \label{signs} When $r$ is even, the right-hand side in \eqref{first-integ} is non-negative, which forces the left-hand side to be non-negative as well. In this case $-\lambda$ satisfies \eqref{first-integ}. When $r$ is odd, identity \eqref{first-integ} implies that $\dot{\lambda}$ has the same sign of $nH_rI_{n,r}+d_r$. Moreover, $-\lambda$ satisfies \eqref{first-integ} only after changing $\nu \mapsto -\nu$. Lastly, critical points for $\lambda_{H_r,d_r}$ are zeros of $nH_rI_{n,r}+d_r$. The second derivative of $\lambda_{H_r,d_r}$ is computed as \begin{equation} \label{second-derivative} \ddot{\lambda}_{H_r,d_r}(\rho) = \frac{\cosh(\rho)\sinh^{\frac{2(n-r)}{r}-1}(\rho)\left(nH_r\frac{\sinh^n(\rho)}{\cosh^r(\rho)}-(n-r)(nH_rI_{n,r}(\rho)+d_r)\right)}{r(nH_rI_{n,r}(\rho)+d_r)^{\frac{r-1}{r}}\left(\sinh^{\frac{2(n-r)}{r}}(\rho)-(nH_rI_{n,r}(\rho)+d_r)^{\frac{2}{r}}\right)^{\frac{3}{2}}}. \end{equation} We will refer to this expression when studying the convexity of $\lambda_{H_r,d_r}$ and its regularity up to second order. Note that if $r>1$ the second derivative of $\lambda_{H_r,d_r}$ is not defined at its critical points. \end{remark} \begin{remark} \label{integ} Let us discuss a few more details on $I_{n,r}$. It is clear that $I_{n,r}(0) = 0$ and $I_{n,r}'(0) = 0$. Also, $I_{n,r}'(\rho) > 0$ and $I_{n,r}''(\rho) > 0$ for $\rho > 0$ and all $n \geq r \geq 1$, so $I_{n,r}$ is a non-negative increasing convex function. For all values $n \geq r$ we have $nI_{n,r}(\rho) \approx \rho^n$ for $\rho \to 0$. Moreover, for $n>r$, one has the asymptotic behavior $(n-r)I_{n,r}(\rho) \approx \sinh^{n-r}(\rho)$ for $\rho \to +\infty$, whereas for $n=r$ we have $I_{n,n}(\rho) \approx \rho$ for $\rho \to +\infty$. \end{remark} Next, we analyze $\lambda_{H_r,d_r}$ as in \eqref{lambda} for all values of $r = 1,\dots,n$, $H_r > 0$, and $d_r \in \mathbb R$. 
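Both $I_{n,r}$ and $\lambda_{H_r,d_r}$ lend themselves to direct numerical evaluation, which is a convenient way to visualize the behaviors described below. The following Python sketch is only an illustration: it assumes SciPy is available, takes the admissible domain for granted, and makes no special treatment of the endpoint singularities at $\rho_\pm$; it evaluates \eqref{lambda} by quadrature.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def I(rho, n, r):
    # I_{n,r}(rho) = int_0^rho sinh^{n-1}(t) / cosh^{r-1}(t) dt
    val, _ = quad(lambda t: np.sinh(t)**(n - 1) / np.cosh(t)**(r - 1),
                  0.0, rho)
    return val

def profile(rho, n, r, Hr, dr, rho_minus):
    # numerical evaluation of lambda_{H_r,d_r}(rho) by quadrature,
    # assuming rho_minus and rho lie in the admissible domain
    def integrand(xi):
        u = n * Hr * I(xi, n, r) + dr
        # real r-th root; for r even, u >= 0 on the domain anyway
        num = np.sign(u) * np.abs(u) ** (1.0 / r)
        den = np.sqrt(np.sinh(xi) ** (2.0 * (n - r) / r)
                      - np.abs(u) ** (2.0 / r))
        return num / den
    val, _ = quad(integrand, rho_minus, rho)
    return val
\end{verbatim}
For instance, plotting \texttt{profile} for a few values of $d_r$ gives a quick, informal check of the qualitative behaviors established in the propositions below.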
The goal is to find the domain of $\lambda_{H_r,d_r}$, study its qualitative behavior, and describe the rotational $H_r$-hypersurfaces generated by $\lambda_{H_r,d_r},$ including the description of their singularities. This can be thought of as a classification \`a la Delaunay of rotational $H_r$-hypersurfaces in $\mathbb H^n \times \mathbb R$. Note that we choose $n$, $r$, and $H_r > 0$ a priori, so that the family of curves $\lambda_{H_r,d_r}$ really depends only on the parameter $d_r$. We will find a critical value of $H_r$, namely $(n-r)/n$, which we use together with the sign of $d_r$ and the parity of $r$ to distinguish various cases. Also, we discuss $n>r$ and $n=r$ separately, as the latter case exhibits substantial differences from the former. One may find the salient properties of the classified hypersurfaces in Tables \ref{table:1}--\ref{table:3} at the end of this section. \subsection{Case \texorpdfstring{$r$}{r} even} \label{case-r-even} We start by proving the following result. \begin{proposition} \label{r-even-n>r1} Assume $r$ even, $n>r$, and $d_r \leq 0$. \begin{enumerate} \item If $H_r \leq (n-r)/n$, then $\lambda_{H_r,d_r}$ is defined on $[\rho_-,+\infty)$, where $\rho_- \geq 0$ is the only solution of $nH_rI_{n,r}(\rho)+d_r = 0$. \item If $H_r > (n-r)/n$, then $\lambda_{H_r,d_r}$ is defined on $[\rho_-,\rho_+]$, where $\rho_-$ is as above, and $\rho_+ > 0$ is the only solution of $\sinh^{n-r}(\rho)-(nH_rI_{n,r}(\rho)+d_r)=0$. \end{enumerate} Further, $\lambda_{H_r,d_r}$ is increasing and convex in the interior of its domain. Also, $\lambda_{H_r,d_r}(\rho_-) = 0 = \lim_{\rho \to \rho_-}\dot{\lambda}_{H_r,d_r}(\rho)$. In case $(1)$, $\lambda_{H_r,d_r}$ is unbounded. In case $(2)$ $\lim_{\rho \to \rho_+} \dot{\lambda}_{H_r,d_r}(\rho) = +\infty$. In both cases, $d_r= 0$ if and only if $\rho_- = 0$. We have $\lim_{\rho \to 0} \ddot{\lambda}_{H_r,0}(\rho) = H_r{}^{1/r}$, and for $d_r < 0$ one finds $\lim_{\rho \to \rho_-} \ddot{\lambda}_{H_r,d_r}(\rho) = +\infty$ (Figure \ref{fig:1}). \end{proposition} \begin{figure} \caption{Behavior of $\lambda_{H_r,d_r} \label{fig:1} \end{figure} \begin{proof} The function $nH_rI_{n,r}+d_r$ must be non-negative as noted in Remark~\ref{signs}, hence $\lambda_{H_r,d_r}$ is well-defined when $$0 \leq nH_rI_{n,r}(\rho)+d_r < \sinh^{n-r}(\rho).$$ There is a unique value $\rho_- \geq 0$ depending on $d_r$ such that $nH_rI_{n,r}(\rho_-)+d_r = 0$, and Remark \ref{integ} implies $d_r = 0$ if and only if $\rho_-=0$. Set $$f(\rho) \coloneqq \sinh^{n-r}(\rho)-(nH_rI_{n,r}(\rho)+d_r), \qquad \rho \geq 0.$$ Then $f(\rho_-) \geq 0$ and $f'(\rho) = \sinh^{n-r-1}(\rho)\cosh(\rho)((n-r)-nH_r\tanh^r(\rho))$. We have $f'(\rho) > 0$ for $\rho > \rho_-$ when $\tanh^r(\rho) < (n-r)/nH_r$. So if $H_r \leq (n-r)/n$ the inequality is always true, and $f$ has no zeros in $(\rho_-,+\infty)$. If $H_r > (n-r)/n$ then $\lim_{\rho \to +\infty} f'(\rho) = -\infty$, so $f$ eventually decreases to $-\infty$. This implies $f$ has a zero $\rho_+ > \rho_-$ depending on the value of $d_r$. It follows that $\lambda_{H_r,d_r}$ is defined on some interval with $\rho_-$ as minimum. If $H_r \leq (n-r)/n$ then the interval is unbounded. We have $\lambda_{H_r,d_r}(\rho_-) = 0 = \lim_{\rho \to \rho_-} \dot{\lambda}_{H_r,d_r}(\rho)$, and $\lim_{\rho \to +\infty} \lambda_{H_r,d_r}(\rho) = +\infty$ by the asymptotic behavior of $I_{n,r}$ noted in Remark \ref{integ}. Moreover, $\lambda_{H_r,d_r}$ is increasing as the integrand function is positive away from $\rho_-$. 
If $H_r > (n-r)/n$ then the denominator of the integrand function has a zero $\rho_+$ depending on $d_r$. This means $\lambda_{H_r,d_r}$ is defined on $[\rho_-,\rho_+)$, and its slope tends to $+\infty$ when $\rho \to \rho_+$. We claim that $\lambda_{H_r,d_r}$ is finite at $\rho_+$. Convergence of the integral is essentially determined by the behavior of $$h(\rho) \coloneqq \sinh^{\frac{n-r}{r}}(\rho)-(nH_rI_{n,r}(\rho)+d_r)^{\frac{1}{r}}$$ near $\rho_+$. But $h(\rho_+) = 0$, and $h'(\rho_+)$ is finite, which implies that $\lambda_{H_r,d_r}$ behaves as the integral of $1/(\rho_+-\rho)^{1/2}$ for $\rho$ close to $\rho_+$, whence convergence at $\rho_+$. In order to check convexity on $(\rho_-,\rho_+)$, observe that the sign of $\ddot{\lambda}_{H_r,d_r}$ as in \eqref{second-derivative} is determined by the sign of $$g(\rho) \coloneqq \frac{\sinh^n(\rho)}{\cosh^r(\rho)}-(n-r)I_{n,r}(\rho) - \frac{d_r(n-r)}{nH_r}.$$ We trivially have $g(\rho_-) \geq 0$ and $g'(\rho) = r\sinh^{n-1}(\rho)/\cosh^{r+1}(\rho) > 0$, so that $g(\rho)$ is always positive for $\rho > 0$. Continuity of the second derivative of $\lambda_{H_r,d_r}$ at the origin for $d_r=0$ follows by an explicit calculation using Remark \ref{integ}, whereas the statement $\lim_{\rho \to \rho_-} \ddot{\lambda}_{H_r,d_r}(\rho) = \infty$ for $d_r < 0$ is trivial, cf.\ \eqref{second-derivative}. \end{proof} We now go on with the analysis of the case $d_r > 0$, but we first make a few technical considerations. For $r > 2$ we have the following formula, which can be proved via integration by parts: \begin{equation} \label{integral_formula} I_{r+1,r}(x) = -\frac{\sinh^{r-1}(x)}{(r-2)\cosh^{r-2}(x)}+\frac{r-1}{r-2}I_{r-1,r-2}(x). \end{equation} Recall that for a natural number $m$ the double factorial is $m!! \coloneqq m(m-2)!!$, and $1!! = 0!! = 1$. Now take $r > 2$ even. From the recurrence relation \eqref{integral_formula} we derive the following closed expression for $I_{r+1,r}(x)$: \begin{align} \label{formula_integral} I_{r+1,r}(x) & = -\sinh(x)\biggl(\frac{1}{r-2}\tanh^{r-2}(x)+\frac{r-1}{(r-2)(r-4)}\tanh^{r-4}(x) \nonumber \\ & \qquad +\frac{(r-1)(r-3)}{(r-2)(r-4)(r-6)}\tanh^{r-6}(x)+\dots + \frac{(r-1)!!}{3(r-2)!!}\tanh^2(x)\biggr) \nonumber \\ & \qquad + \frac{(r-1)!!}{(r-2)!!}I_{3,2}(x). \end{align} The explicit expression $I_{3,2}(x) = \sinh(x)-\arctan(\sinh(x))$ returns now a closed formula for each $I_{r+1,r}(x)$. We note here a useful identity which can be proved by induction. \begin{lemma} \label{important_identity} Let $r \geq 2$ be an even natural number. Then \begin{align*} \frac{(r-1)!!}{(r-2)!!} & = 1+\frac{1}{r-2}+\frac{r-1}{(r-2)(r-4)}+\frac{(r-1)(r-3)}{(r-2)(r-4)(r-6)} +\\ & \qquad + \dots + \frac{(r-1)!!}{3(r-2)!!}, \end{align*} where, for all $r,$ the sum on the right-hand side must be truncated in such a way that all summands exist. \end{lemma} We shall see that when $d_r > 0$ then $\lambda_{H_r,d_r}$ is not well-defined for $d_r$ too large. We will combine \eqref{formula_integral} and Lemma \ref{important_identity} to give a precise upper bound for $d_r$ when $n=r+1$ and $H_r = (n-r)/n = 1/(r+1)$. \begin{proposition} \label{r_even-2} Assume $r$ even, $n>r$, and $d_r > 0$. \begin{enumerate} \item If $H_r < (n-r)/n$, then $\lambda_{H_r,d_r}$ is defined on $[\rho_-,+\infty)$, where $\rho_- > 0$ is the only solution of $\sinh^{n-r}(\rho)-(nH_rI_{n,r}(\rho)+d_r) = 0$ on $(0,\infty)$. 
\item If $H_r = (n-r)/n$, then when $n=r+1$ we need $d_r < (r-1)!!\pi/2(r-2)!!$ for $\lambda_{H_r,d_r}$ to be well-defined, whereas for $n>r+1$ we have no constraint. Under such conditions, the results in the previous point hold. \item If $H_r > (n-r)/n$, set $\tau > 0$ such that $\tanh^r(\tau) = (n-r)/nH_r$. Then $d_r < \sinh^{n-r}(\tau)-nH_rI_{n,r}(\tau)$ for $\lambda_{H_r,d_r}$ to be defined. So $\lambda_{H_r,d_r}$ is a function on $[\rho_-,\rho_+] \subset (0,+\infty)$, where $\sinh^{n-r}(\rho_{\pm})-(nH_rI_{n,r}(\rho_{\pm})+d_r) = 0$. \end{enumerate} Further, $\lambda_{H_r,d_r}$ is increasing in the interior of its domain. In cases $(1)$--$(2)$, $\lambda_{H_r,d_r}(\rho_-) = 0$, $\lim_{\rho \to \rho_-} \dot{\lambda}_{H_r,d_r}(\rho) = +\infty$, $\lambda_{H_r,d_r}$ is unbounded, and is concave in the interior of its domain. In case $(3)$, $\lambda_{H_r,d_r}(\rho_-) = 0$, $\lim_{\rho \to \rho_{\pm}}\dot{\lambda}_{H_r,d_r}(\rho) = +\infty$, $\lambda_{H_r,d_r}$ has a unique inflection point in $(\rho_-,\rho_+)$, and goes from being concave to convex (Figure \ref{fig:2}). \end{proposition} \begin{figure} \caption{Behavior of $\lambda_{H_r,d_r} \label{fig:2} \end{figure} \begin{proof} We have the constraint $0 \leq nH_rI_{n,r}(\rho)+d_r < \sinh^{n-r}(\rho)$ for $\rho > 0$. Since $I_{n,r}(0)=0$ and $d_r > 0$ we must have $\rho_- > 0$. Such a $\rho_-$ exists only if $$f(\rho) \coloneqq \sinh^{n-r}(\rho)-(nH_rI_{n,r}(\rho)+d_r)$$ has a zero. We have $f(0) < 0$ and $$f'(\rho) = \sinh^{n-r-1}(\rho)\cosh(\rho)\left((n-r)-nH_r\tanh^r(\rho)\right).$$ For $H_r < (n-r)/n$ the derivative $f'$ is always positive and tends to $+\infty$ as $\rho$ runs to $\infty$, so $\rho_-$ exists and $\lambda_{H_r,d_r}$ is defined on $[\rho_-,+\infty)$. For $H_r = (n-r)/n$ we have a more subtle behavior. We compute \begin{align*} \frac{1}{n-r}\lim_{\rho \to \infty} f'(\rho) & = \lim_{\rho\to\infty} \sinh^{n-r-1}(\rho)\cosh(\rho)(1-\tanh^r(\rho)) \\ & = \lim_{\rho\to\infty} \sinh^{n-r-1}(\rho)\frac{\cosh^r(\rho)-\sinh^r(\rho)}{\cosh^{r-1}(\rho)} \\ & = \lim_{\rho\to\infty} \sinh^{n-r-1}(\rho)\frac{(\cosh(\rho)-\sinh(\rho))\sum_{i=0}^{r-1} \cosh^{r-1-i}(\rho)\sinh^{i}(\rho)}{\cosh^{r-1}(\rho)} \\ & = \lim_{\rho\to\infty} \frac{\sinh^{n-r-1}(\rho)}{\cosh(\rho)+\sinh(\rho)}\sum_{i=0}^{r-1} \tanh^i(\rho). \end{align*} When $n=r+2$ the limit of $f'$ is $r$, and if $n>r+2$ the limit is $+\infty$. In these two cases $\rho_-$ exists and $\lambda_{H_r,d_r}$ is defined on $[\rho_-,\infty)$. The case $n=r+1$ needs to be studied separately, as the limit vanishes. The claim is that for any $r$ even we have that $\rho_-$ exists only if $$d_r < \frac{(r-1)!!}{(r-2)!!}\frac{\pi}{2}.$$ Indeed, when $r=2$ we compute \begin{equation*} \lim_{\rho\to +\infty} \sinh(\rho)-\int_0^{\rho} \frac{\sinh^2(\sigma)}{\cosh(\sigma)}\,d\sigma -d_2 = \lim_{\rho\to +\infty} (\arctan(\sinh(\rho))-d_2) = \tfrac{\pi}{2}-d_2. \end{equation*} In this case, $f$ cannot have a zero if $d_2 \geq \pi/2$. To prove the above claim for $r\geq4$, we use \eqref{formula_integral} and find \begin{align*} f(\rho) & = \sinh(\rho)-I_{r+1,r}(\rho)-d_r \\ & = \sinh(\rho)\biggl(1+\frac{1}{r-2}\tanh^{r-2}(\rho)+\frac{r-1}{(r-2)(r-4)}\tanh^{r-4}(\rho) \\ & \quad +\dots+\frac{(r-1)(r-3)\cdots 5}{(r-2)(r-4)\cdots 2}\tanh^2(\rho)-\frac{(r-1)!!}{(r-2)!!}\biggr) \\ & \quad +\frac{(r-1)!!}{(r-2)!!}\arctan(\sinh(\rho))-d_r. 
\end{align*} Now Lemma \ref{important_identity} implies that when $\rho \to +\infty$ the sum of the terms into brackets goes to zero, and the product of $\sinh(\rho)$ with the latter vanishes (one can use the estimates $\sinh(\rho) \approx e^\rho/2$ and $\tanh(\rho) \approx 1-2e^{-2\rho}$ for $\rho \to +\infty$ to see this). Hence $$\lim_{\rho \to +\infty} f(\rho) = \frac{(r-1)!!}{(r-2)!!}\frac{\pi}{2}-d_r,$$ and the claim is proved. Convergence of $\lambda_{H_r,d_r}$ at $\rho_-$ follows by a similar argument as in the proof of Proposition \ref{r-even-n>r1}. If $H_r > (n-r)/n$ there is a $\tau > 0$ such that $f$ is increasing on $(0,\tau)$ and decreasing on $(\tau,+\infty)$. Such a $\tau$ satisfies $\tanh^r(\tau) = (n-r)/nH_r$. In order to have a well-defined $\lambda_{H_r,d_r}$, we necessarily want $f(\tau) > 0$, which forces the condition $$d_r < \sinh^{n-r}(\tau)-nH_rI_{n,r}(\tau).$$ Since $f'(\rho_-) > 0$, $f'(\rho_+) < 0$, then $f$ vanishes at $\rho_-$ and $\rho_+$ with order $1$. This gives convergence of $\lambda_{H_r,d_r}$ at the boundary points. We have $\lambda_{H_r,d_r}(\rho_-) = 0$, $\lambda_{H_r,d_r}(\rho_+) > 0$, and $\lim_{\rho \to \rho_{\pm}} \dot{\lambda}_{H_r,d_r}(\rho) = +\infty$ at once. We finally discuss convexity of $\lambda_{H_r,d_r}$ by proceeding as in the case $d_r \leq 0$. The sign of the second derivative is determined by the sign of $$g(\rho) \coloneqq \frac{\sinh^n(\rho)}{\cosh^r(\rho)}-(n-r)I_{n,r}(\rho) - \frac{d_r(n-r)}{nH_r}.$$ By definition of $\rho_-$, the sign of $g(\rho_-)$ is determined by the sign of $\tanh^r(\rho_-)-(n-r)/nH_r$. When $nH_r > n-r$, then the above quantity is negative as $$\tanh^r(\rho_-)-\frac{n-r}{nH_r} = \tanh^r(\rho_-) - \tanh^r(\tau).$$ Similarly, $g(\rho_+) > 0$. Since $g'(\rho) > 0$, $\lambda_{H_r,d_r}$ has a unique inflection point, and goes from being concave to convex. If $nH_r \leq n-r$, we have $\lim_{\rho \to +\infty} g(\rho) = -d_r(n-r)/nH_r < 0$ by Remark \ref{integ}. But $g$ is an increasing function, so it is always negative, and hence $\lambda_{H_r,d_r}$ is concave. \end{proof} There remains to look at the case $n=r$. Set $I_n(\rho) \coloneqq I_{n,n}(\rho) = \int_0^{\rho} \tanh^{n-1}(\tau)\,d\tau$. \begin{proposition} \label{r-even-n=r} Assume $n=r$ even. Then $\lambda_{H_n,d_n}$ is well-defined for $d_n < 1$. \begin{enumerate} \item If $d_n < 0$, then $\lambda_{H_n,d_n}$ is defined on $[\rho_-,\rho_+]$, where $\rho_-$ is the only solution of $nH_nI_n(\rho)+d_n = 0$, and $\rho_+$ is the only solution of $nH_nI_n(\rho)+d_n = 1$. \item If $0 \leq d_n < 1$, then $\lambda_{H_n,d_n}$ is defined on $[0,\rho_+]$, where $\rho_+$ is defined as above. \end{enumerate} Further, $\lambda_{H_n,d_n}$ is increasing and convex in the interior of its domain. In case $(1)$, $\lambda_{H_n,d_n}(\rho_-) = 0 = \dot{\lambda}_{H_n,d_n}(\rho_-)$, and $\lim_{\rho \to \rho_+} \dot{\lambda}_{H_n,d_n}(\rho) = +\infty$. In case $(2)$, $\lambda_{H_n,d_n}(0) = 0$, $\dot{\lambda}_{H_n,d_n}(\rho_-) = d_n^{1/n}/(1-d_n^{2/n})^{1/2}$, and $\lim_{\rho \to \rho_+} \dot{\lambda}_{H_n,d_n}(\rho) = +\infty$. In the particular case $d_n=0$, we also have $\lim_{\rho \to 0} \ddot{\lambda}_{H_n,0}(\rho) = H_n{}^{1/n}$, and if $d_n < 0$ then $\lim_{\rho \to \rho_-} \ddot{\lambda}_{H_r,d_r}(\rho_-) = +\infty$ (Figure \ref{fig:3}). \end{proposition} \begin{figure} \caption{Behavior of $\lambda_{H_n,d_n} \label{fig:3} \end{figure} \begin{proof} Our usual constraint becomes $$0 \leq nH_nI_n(\rho)+d_n < 1.$$ Hence necessarily $d_n < 1$. 
If $d_n < 0$, there are positive numbers $\rho_-,\rho_+$ such that $nH_nI_n(\rho_-)+d_n=0$ and $nH_nI_n(\rho_+)+d_n = 1$, and $\lambda_{H_n,d_n}$ is defined on $[\rho_-,\rho_+)$. Clearly $\dot{\lambda}_{H_n,d_n}(\rho_-) = 0$. If $0 \leq d_n < 1$, then $\lambda_{H_n,d_n}$ is defined on $[0,\rho_+)$. We have $\dot{\lambda}_{H_n,d_n}(0) = d_n^{1/n}/(1-d_n^{2/n})^{1/2}$. The same method as in the proof of Proposition~\ref{r-even-n>r1} shows that in both cases $\lambda_{H_n,d_n}$ is finite at $\rho_+$. The expression of $\ddot{\lambda}_{H_r,d_r}$ in \eqref{second-derivative} for $n=r$ implies convexity of the graphs at once. Continuity of the second derivative at the origin for $d_n=0$ follows by an explicit calculation, cf.\ \eqref{second-derivative} and Remark \ref{integ}. \end{proof} We now study the regularity of the $H_r$-hypersurface generated by the rotation of the graph of $\lambda_{H_r,d_r}$, as described at the beginning of Section \ref{rotational-surfaces}. Then we will proceed with the classification result. \begin{proposition} \label{prop:c2-reg} Let $n\geq r$, $r$ even. Then the hypersurface generated by the curve $\lambda_{H_r,d_r}$ is of class $C^2$ at $\rho = \rho_+$, when the latter exists, and it is of class $C^2$ at $\rho=\rho_-$ if and only if $n>r$ and $d_r\geq 0$ or $n=r$ and $d_n=0$. When $n=r$ and $d_n>0$, it has a conical singularity at $\rho=0$. If $n\geq r$ and $d_r < 0$, it has cuspidal singularities at $\rho=\rho_-$. \end{proposition} \begin{proof} Regularity to second order of the hypersurface generated by $\lambda_{H_r,d_r}$ is proved by showing that the second fundamental form $A$ is bounded. For any choice of $n\geq r$, $H_r$ and $d_r$ for which $\rho_+$ exists, we have that $\rho_+>0$ and $\lim_{\rho\rightarrow\rho_+}\dot{\lambda}_{H_r,d_r}(\rho)=+\infty$. By \eqref{data}, for any $i=1,\dots,n-1$ we have that $$\lim_{\rho\rightarrow\rho_+}k_i(\rho)=\mathrm{cotgh}(\rho_+).$$ By definition of $\rho_+$, combining \eqref{data} and \eqref{second-derivative} one finds $$\lim_{\rho\rightarrow\rho_+}k_n(\rho)=\frac{\mathrm{cotgh}(\rho_+)}{r}(nH_r\tanh^r(\rho_+)-(n-r)).$$ It follows that $\lim_{\rho\rightarrow\rho_+}|A|^2(\rho)$ exists and is finite. Assume now that $n>r$ and $d_r>0$; then $\rho_->0$ and $\lim_{\rho\rightarrow\rho_-}\dot{\lambda}_{H_r,d_r}(\rho)=+\infty$. Therefore $\lim_{\rho\rightarrow\rho_-}|A|^2(\rho)$ exists and is finite by arguing as above. When $d_r=0$ we have $\rho_-=0$. By Remark \ref{integ}, \eqref{data} and \eqref{second-derivative}, as $\rho\rightarrow 0$ we get the estimates \begin{equation*} \mathrm{cotgh}(\rho) \approx \rho^{-1}, \qquad \dot{\lambda}_{H_r,d_r}(\rho) \approx H_r^{\frac 1r}\rho, \qquad \ddot{\lambda}_{H_r,d_r}(\rho) \approx H_r^{\frac 1r}. \end{equation*} For any $i=1,\dots,n$ it follows that $$ \lim_{\rho\rightarrow 0}k_i(\rho)=H_r^{\frac 1r}, $$ and $\lim_{\rho\rightarrow0}|A|^2(\rho)$ exists and is finite in this case as well. In the case $n\geq r$ and $d_r<0$ we have $\rho_->0$, $\dot{\lambda}_{H_r,d_r}(\rho_-)=0$, but $\lim_{\rho\rightarrow\rho_-}\ddot{\lambda}_{H_r,d_r}(\rho)=+\infty$. Hence $|A|^2$ blows up close to $\rho_-$ because $\lim_{\rho\rightarrow\rho_-}k_n(\rho)=+\infty$.
Moreover, it is clear that by reflecting the hypersurface generated by $\lambda_{H_r,d_r}$ across the slice ${\mathbb H}^n\times\{0\}$ one gets cuspidal singularities along the intersection with ${\mathbb H}^n\times\{0\}.$ Finally, when $n=r$ and $0<d_n<1$, by Proposition \ref{r-even-n=r} we have that $\rho_-=0$ and $$\dot\lambda_{H_n,d_n}(0)=\frac{d_n^{\frac 1n}}{(1-d_n^{\frac 2n})^{\frac 12}}>0.$$ So the hypersurface generated by $\lambda_{H_n,d_n}$ has a conical singularity in $\rho=0$. \end{proof} We now classify rotational $H_r$-hypersurfaces for $r$ even based on the above arguments. We recover results by Elbert--Sa Earp \cite[Section 6]{elb-earp} and de Lima--Manfio--dos Santos \cite[Theorem 1 and 2]{lima}. We recall that a \emph{slice} is any subspace $\mathbb H^n\times\{t\} \subset \mathbb H^n \times \mathbb R$, and by its \emph{origin} we mean its intersection with the $t$-axis. \begin{theorem} \label{str-thm1} Assume $r$ even, $n>r$, and $d_r < 0$. By reflecting the rotational hypersurface given by the graph of $\lambda_{H_r,d_r}$ across suitable slices, we get a non-compact embedded $H_r$-hypersurface. \begin{enumerate} \item If $H_r \leq (n-r)/n$, the hypersurface generated by $\lambda_{H_r,d_r}$ together with its reflection across the slice ${\mathbb H}^n\times\{0\}$ is a singular annulus. Its singular set is made of cuspidal points along a sphere of radius $\rho_-$ centered at the origin of the slice $\mathbb H^n \times \{0\}.$ \item If $H_r > (n-r)/n$, then the hypersurface generated by $\lambda_{H_r,d_r}$, together with its reflections across the slices $\mathbb H^n \times \{k\lambda_{H_r,d_r}(\rho_+)\}$, $k \in \mathbb Z$, gives a singular onduloid. Its singular set is made of cuspidal points along spheres of radius $\rho_-$ centered at the origin of the slices $\mathbb H^n \times \{2k\lambda_{H_r,d_r}(\rho_+)\}$, $k \in \mathbb Z$. \end{enumerate} \end{theorem} \begin{theorem} \label{str-thm2} Assume $r$ even, $n>r$, and $d_r = 0$. Then the rotational hypersurface given by the graph of $\lambda_{H_r,0}$ is a complete embedded $H_r$-hypersurface, possibly after reflection across a suitable slice. \begin{enumerate} \item If $H_r \leq (n-r)/n$, the hypersurface generated by $\lambda_{H_r,0}$ is an entire graph of class $C^2$ tangent to the slice $\mathbb H^n \times \{0\}$ at the origin. \item If $H_r > (n-r)/n$, the hypersurface generated by $\lambda_{H_r,0}$, together with its reflection across the slice $\mathbb H^n \times \{\lambda_{H_r,0}(\rho_+)\}$, is a class $C^2$ sphere. \end{enumerate} \end{theorem} \begin{theorem} \label{str-thm3} Assume $r$ even, $n>r$, and $d_r > 0$. By reflecting the rotational hypersurface given by the graph of $\lambda_{H_r,d_r}$ across suitable slices, we get a complete non-compact embedded $H_r$-hypersurface. \begin{enumerate} \item If $H_r \leq (n-r)/n$, the hypersurface generated by $\lambda_{H_r,d_r}$, together with its reflection across the slice ${\mathbb H}^n\times\{0\}$, is a class $C^2$ annulus. When $n=r+1$ and $H_r = 1/(r+1)$, the same holds, provided that $d_r$ is smaller than $(r-1)!!\pi/2(r-2)!!$. \item If $H_r > (n-r)/n$, the hypersurface generated by $\lambda_{H_r,d_r}$ together with its reflections across the slices $\mathbb H^n \times \{k\lambda_{H_r,d_r}(\rho_+)\}$, $k \in \mathbb Z$, is a class $C^2$ onduloid. \end{enumerate} \end{theorem} \begin{theorem} \label{str-thm4} Assume $n=r$ even and $H_n > 0$. 
Then the $H_n$-hypersurface generated by $\lambda_{H_n,d_n},$ together with its reflection across the slice $\mathbb H^n \times \{\lambda_{H_n,d_n}(\rho_+)\}$, is a class $C^2$ sphere if $d_n=0$, and a picked sphere if $0 < d_n < 1$. If $d_n < 0$ then the $H_n$-hypersurface generated by $\lambda_{H_n,d_n}$, together with its reflections across the slices $\mathbb H^n \times \{k\lambda_{H_n,d_n}(\rho_+)\}$, $k \in \mathbb Z$, gives a singular onduloid. Its singular set is made of cuspidal points along spheres of radius $\rho_-$ centered at the origin of the slices $\mathbb H^n \times \{2k\lambda_{H_n,d_n}(\rho_+)\}$, $k \in \mathbb Z$. \end{theorem} \subsection{Case \texorpdfstring{$r$}{r} odd} \label{case-r-odd} We organize this subsection in a similar fashion to the previous one. Some of the arguments will be analogous to the corresponding ones for $r$ even, so we omit the corresponding details. Note that this subsection includes and extends the mean curvature case treated in \cite{berard-earp} and \cite{nelli-pipoli}. A crucial difference from the case $r$ even is that for $d_r < 0$ the derivative $\dot{\lambda}_{H_r,d_r}$ is negative on some subset of the domain of $\lambda_{H_r,d_r}$, and for $r > 1$ the curve $\lambda_{H_r,d_r}$ is not $C^2$-regular at its minimum point. Further, more types of curves arise when $n > r$ and $d_r < 0$, and when $n=r$. In our classification, we will recover results by B\'erard--Sa Earp \cite[Section 2]{berard-earp}, Elbert--Sa Earp \cite[Section 6]{elb-earp}, and de Lima--Manfio--dos Santos \cite[Theorems 1 and 2]{lima}. \begin{proposition} \label{r-odd-n>r} Assume $r$ odd, $n>r$, and $d_r < 0$. \begin{enumerate} \item If $H_r \leq (n-r)/n$, then $\lambda_{H_r,d_r}$ is defined on $[\rho_-,+\infty)$, where $\rho_- > 0$ is the only solution of $\sinh^{n-r}(\rho)+(nH_rI_{n,r}(\rho)+d_r) = 0$. \item If $H_r > (n-r)/n$, then $\lambda_{H_r,d_r}$ is defined on $[\rho_-,\rho_+]$, where $\rho_-$ is as above, and $\rho_+>0$ is the only solution of $\sinh^{n-r}(\rho)-(nH_rI_{n,r}(\rho)+d_r)=0$. \end{enumerate} Set $\rho_0$ to be the only zero of $nH_rI_{n,r}+d_r$. We have $\lambda_{H_r,d_r}(\rho_-) = 0$, $\lim_{\rho \to \rho_-} \dot{\lambda}_{H_r,d_r}(\rho) = -\infty$, $\dot{\lambda}_{H_r,d_r}(\rho) < 0$ when $\rho_- < \rho < \rho_0$, and $\dot{\lambda}_{H_r,d_r}(\rho) > 0$ when $\rho > \rho_0$. In case $(1)$, $\lim_{\rho \to +\infty} \lambda_{H_r,d_r}(\rho) = +\infty$. In case $(2)$, $\lim_{\rho \to \rho_+} \dot{\lambda}_{H_r,d_r}(\rho) = +\infty$. Further, $\lambda_{H_r,d_r}$ is convex in the interior of its domain. In particular, it is of class $C^2$ for $r=1$, and $\lim_{\rho \to \rho_0} \ddot{\lambda}_{H_r,d_r}(\rho) = +\infty$ for $r>1$ (Figure \ref{fig:4}). \end{proposition} \begin{figure} \caption{Behavior of $\lambda_{H_r,d_r}$ for $r$ odd, $n>r$, and $d_r<0$.} \label{fig:4} \end{figure} \begin{proof} Our constraint for $\lambda_{H_r,d_r}$ to be well-defined is now \begin{equation} \label{cond2} -\sinh^{n-r}(\rho) < nH_r I_{n,r}(\rho)+d_r < \sinh^{n-r}(\rho), \qquad \rho > 0. \end{equation} We know that $nH_rI_{n,r}+d_r$ is an increasing function with $d_r < 0$ and $I_{n,r}(0) = 0$, so that $nH_rI_{n,r}(0)+d_r < 0$. The first inequality in \eqref{cond2} is then always satisfied for $\rho > \rho_- > 0$, where $\rho_-$ is the unique solution of $nH_rI_{n,r}(\rho)+d_r+\sinh^{n-r}(\rho) = 0$. It is clear that $\rho_- \to 0$ if and only if $d_r \to 0$. The study of the second inequality goes along the lines of the corresponding one for $r$ even (Proposition~\ref{r-even-n>r1}).
Note that $\lim_{\rho \to \rho_-} \dot{\lambda}_{H_r,d_r}(\rho) = -\infty$ regardless of the value of $H_r$. Also, $\lambda_{H_r,d_r}$ is decreasing on $(\rho_-,\rho_0)$, where $\rho_0$ is the only zero of $nH_rI_{n,r}+d_r$, and it increases beyond $\rho_0$. Convergence at $\rho_-$ or $\rho_+$ and the statements involving the second derivative follow from \eqref{second-derivative} and arguments similar to those in the proof of Proposition \ref{r-even-n>r1}. We point out that for $r=1$ the term $(nH_rI_{n,r}(\rho)+d_r)^{(r-1)/r}$ equals $1$, so the second derivative of $\lambda_{H_r,d_r}$ is well-defined over the interior of the whole domain. For $r > 1$ the same term vanishes at $\rho_0$, and this concludes the proof. \end{proof} Unlike in the case $r$ even, the value of $\lambda_{H_r,d_r}(\rho_+)$, for $H_r > (n-r)/n$ and $r>1$ odd, is not always positive. We discuss this point below. Moreover, we show that $\lambda_{H_1,d_1}(\rho_+)$ only takes positive values. \begin{proposition} \label{prop:special-cases} The following statements hold. \begin{enumerate} \item \label{item1} If $H_1 > (n-1)/n$, then $\lambda_{H_1,d_1}(\rho_+) > 0$ for all $d_1 < 0$. \item \label{item2} Let $2r-1>n>r\geq 3$, with $r$ odd. Then there exist values $H_r>(n-r)/n$ and $d_r<0$ such that $\lambda_{H_r,d_r}(\rho_+)$ is negative, positive, or zero. \end{enumerate} \end{proposition} \begin{proof} In case (\ref{item1}), it is well known that the rotational hypersurface generated by $\lambda_{H_1,d_1}$ is of class $C^2$. We show (\ref{item1}) by using the Alexandrov reflection method with respect to vertical hyperplanes in $\mathbb H^n \times \mathbb R$. Let $H_1 > (n-1)/n$ be fixed. Since the integrand defining $\lambda_{H_1,0}$ is non-negative and does not vanish identically, and $\lambda_{H_1,d_1}$ is continuous in $d_1$, for $d_1 < 0$ close enough to $0$ we have $\lambda_{H_1,d_1}(\rho_+) > 0$. Suppose there is a value of the parameter $d_1$ for which $\lambda_{H_1,d_1}(\rho_+)$ vanishes. Consider the rotational hypersurface $S$ obtained after reflecting the graph of $\lambda_{H_1,d_1}$ across the $\rho$-axis, and then rotating about the $t$-axis. Topologically $S$ is a product $S^1 \times S^{n-1}$, and is of class $C^2$. Since $S$ is compact, we can take a vertical hyperplane $\Pi \subset \mathbb H^n \times \mathbb R$ corresponding to $\rho > 0$ large enough that it does not intersect $S$, and then move it towards $S$ until $\Pi \cap S \neq \varnothing$. We keep moving $\Pi$ in the same way and reflect the portion of $S$ left behind $\Pi$ across $\Pi$. Since $\lambda_{H_1,d_1}(\rho_-) = 0$, there will be a first intersection point between the reflected part of $S$ and $S$ itself. The Maximum Principle then implies that $S$ has a symmetry with respect to a vertical hyperplane corresponding to some $\rho \in (\rho_-,\rho_+)$. But this is a contradiction, as the hypersurface is rotationally symmetric about the $t$-axis. Continuity of $\lambda_{H_1,d_1}$ with respect to the parameters implies that there cannot be values of $d_1$ such that $\lambda_{H_1,d_1}(\rho_+)$ is negative. As for (\ref{item2}), observe that for $H_r > (n-r)/n$ we have $\lambda_{H_r,0}(\rho_+)>0$, because the integrand function defining $\lambda_{H_r,0}$ is non-negative and does not vanish identically. Continuity with respect to the parameter $d_r$ implies that $\lambda_{H_r,d_r}(\rho_+)>0$ for $d_r<0$ close enough to $0$. We now show that $\lambda_{H_r,d_r}(\rho_+)<0$ for some $H_r > (n-r)/n$ and $d_r < 0$.
Let us introduce the function $$g(\rho) \coloneqq \frac{nH_rI_{n,r}(\rho)+d_r}{\sinh^{n-r}(\rho)},$$ and note that we can rewrite $\lambda_{H_r,d_r}(\rho_+)$ as $$\lambda_{H_r,d_r}(\rho_+)=\int_{\rho_-}^{\rho_+}\frac{g(\xi)^{\frac 1r}}{\sqrt{1-g(\xi)^{\frac 2r}}}\,d\xi.$$ We claim that, for any $d_r<0$ and $2r-n-1 > 0$, if $H_r$ is large enough then $g$ is convex on $(\rho_-,\rho_+)$. So let $d_r < 0$ be fixed. By definition of $\rho_{\pm}$ we have $$ H_r=\frac{|d_r|\pm\sinh^{n-r}(\rho_{\pm})}{nI_{n,r}(\rho_{\pm})}. $$ Observe that $\rho_{\pm}\rightarrow 0$ if and only if $H_r\rightarrow\infty$ and $\rho_{\pm}\approx |d_r|^{\frac{1}{n}}H_r{}^{-\frac{1}{n}}$ as $H_r\rightarrow\infty$. Therefore for any $\rho\in(\rho_-,\rho_+)$ we estimate \begin{equation}\label{eq001} \rho\approx\left(\frac{|d_r|}{H_r}\right)^{\frac 1n},\quad H_r\rightarrow\infty. \end{equation} Since $-\sinh^{n-r}(\rho)<nH_rI_{n,r}(\rho)+d_r<\sinh^{n-r}(\rho)$ holds on $(\rho_-,\rho_+)$, \eqref{eq001} and explicit computations give that for any $\rho\in(\rho_-,\rho_+)$ we have \begin{align*} g''(\rho) & = nH_r\left(\frac{\sinh(\rho)}{\cosh(\rho)}\right)^{r-2}\left(\frac{r-1}{\cosh^2(\rho)}-(n-r)\right)\\ & \qquad +\frac{nH_rI_{n,r}(\rho)+d_r}{\sinh^{n-r+2}(\rho)}((n-r)\sinh^2(\rho)+n-r+1) \\ & > nH_r\left(\frac{\sinh(\rho)}{\cosh(\rho)}\right)^{r-2}\left(\frac{r-1}{\cosh^2(\rho)}-(n-r)\right)-\frac{(n-r)\sinh^2(\rho)+n-r+1}{\sinh^2(\rho)} \\ & \approx H_r^{\frac 2n}\left((2r-1-n)|d_r|^{\frac{r-2}{n}}H_r{}^{\frac{n-r}{n}}-(n-r+1)|d_r|^{-\frac{2}{n}}\right)-(n-r). \end{align*} When $H_r\rightarrow\infty$ the latter quantity diverges to $+\infty$ if $2r-1-n>0$, hence $g'' > 0$ on $(\rho_-,\rho_+)$. Fix $H_r$ large enough such that $g$ is convex in $(\rho_-,\rho_+)$. Since $g(\rho_{\pm})=\pm 1$, we have $g(\rho)<s(\rho)$ for any $\rho \in (\rho_-,\rho_+)$, where $s$ is the line segment connecting $(\rho_-,-1)$ with $(\rho_+,1)$. Moreover the function $x \mapsto x^{1/r}/\sqrt{1-x^{2/r}}$ is increasing on $(-1,1)$. For such a choice of $H_r$ and $d_r$ we then have $$ \lambda_{H_r,d_r}(\rho_+)<\int_{\rho_-}^{\rho_+}\frac{s(\xi)^\frac 1r}{\sqrt{1-s(\xi)^{\frac 2r}}}\,d\xi=\frac{\rho_+-\rho_-}{2}\int_{-1}^1\frac{u^\frac 1r}{\sqrt{1-u^{\frac 2r}}}\,du=0, $$ as the latter integrand function is odd. Continuity of $\lambda_{H_r,d_r}$ with respect to the parameters $H_r$ and $d_r$ implies the last assertion of (\ref{item2}) at once. \end{proof} The proof of the next statement is omitted, because the results follow by adapting the proof of Proposition \ref{r-even-n>r1} when $d_r = 0$. \begin{proposition} \label{r-odd-n>r2} Assume $r$ odd, $n>r$, and $d_r=0$. \begin{enumerate} \item If $H_r \leq (n-r)/n$, then $\lambda_{H_r,0}$ is defined on $[0,+\infty)$. \item If $H_r > (n-r)/n$, then $\lambda_{H_r,0}$ is defined on $[0,\rho_+]$, where $\rho_+>0$ is the only solution of $\sinh^{n-r}(\rho)-nH_rI_{n,r}(\rho) = 0$. \end{enumerate} Further, $\lambda_{H_r,0}$ is increasing and convex in the interior of its domain. We have $\lambda_{H_r,0}(0) = 0 = \lim_{\rho \to 0} \dot{\lambda}_{H_r,0}(\rho)$. In case $(1)$, $\lambda_{H_r,0}$ is unbounded. In case $(2)$, $\lim_{\rho \to \rho_+} \dot{\lambda}_{H_r,0}(\rho) = +\infty$. Finally, $\lim_{\rho \to 0} \ddot{\lambda}_{H_r,0}(\rho) = H_r{}^{1/r}$ (Figure \ref{fig:5}).
\end{proposition} \begin{figure} \caption{Behavior of $\lambda_{H_r,0}$ for $r$ odd, $n>r$, and $d_r=0$.} \label{fig:5} \label{limits} \end{figure} In order to prove the next result, one needs the analogue of formula \eqref{formula_integral} and Lemma~\ref{important_identity} for $r$ odd. We have $I_{2,1}(x) = \cosh(x) - 1$ and for $r \geq 3$ we compute \begin{align*} I_{r+1,r}(x) & = -\sinh(x)\biggl(\frac{1}{r-2}\tanh^{r-2}(x)+\frac{r-1}{(r-2)(r-4)}\tanh^{r-4}(x) \nonumber \\ & \qquad +\frac{(r-1)(r-3)}{(r-2)(r-4)(r-6)}\tanh^{r-6}(x)+\dots + \frac{(r-1)!!}{2(r-2)!!}\tanh(x)\biggr) \nonumber \\ & \qquad + \frac{(r-1)!!}{(r-2)!!}I_{2,1}(x). \end{align*} \begin{lemma} Let $r \geq 3$ be an odd natural number. Then \begin{align*} \frac{(r-1)!!}{(r-2)!!} & = 1+\frac{1}{r-2}+\frac{r-1}{(r-2)(r-4)}+\frac{(r-1)(r-3)}{(r-2)(r-4)(r-6)}\\ & \qquad + \dots + \frac{(r-1)(r-3)\cdots 4}{(r-2)(r-4)\cdots 3}, \end{align*} where, for all $r,$ the sum on the right-hand side must be truncated in such a way that all summands are positive. \end{lemma} For instance, for $r=5$ the identity reads $\frac{4!!}{3!!}=1+\frac{1}{3}+\frac{4}{3}=\frac{8}{3}$. The next two results can be proved following the proofs of Propositions \ref{r_even-2} and \ref{r-even-n=r}. \begin{proposition} Assume $r$ odd, $n>r$, and $d_r > 0$. \begin{enumerate} \item If $H_r < (n-r)/n$, then $\lambda_{H_r,d_r}$ is defined on $[\rho_-,+\infty)$, where $\rho_- > 0$ is the only solution of $\sinh^{n-r}(\rho)-(nH_rI_{n,r}(\rho)+d_r) = 0$. \item If $H_r = (n-r)/n$, then when $n=r+1$ we need $d_1 < 1$ for $r=1$, or $d_r < (r-1)!!/(r-2)!!$ for $r>1$, in order for $\lambda_{H_r,d_r}$ to be well-defined, whereas for $n>r+1$ we have no constraint. Under such conditions, the results in the previous point hold. \item If $H_r > (n-r)/n$, set $\tau > 0$ such that $\tanh^r(\tau) = (n-r)/(nH_r)$. Then we need $d_r < \sinh^{n-r}(\tau)-nH_rI_{n,r}(\tau)$ for $\lambda_{H_r,d_r}$ to be defined. So $\lambda_{H_r,d_r}$ is a function on $[\rho_-,\rho_+] \subset (0,+\infty)$, where $\sinh^{n-r}(\rho_{\pm})-(nH_rI_{n,r}(\rho_{\pm})+d_r) = 0$. \end{enumerate} Further, $\lambda_{H_r,d_r}$ is increasing in the interior of its domain. In cases $(1)$--$(2)$, $\lambda_{H_r,d_r}(\rho_-) = 0$, $\lim_{\rho \to \rho_-} \dot{\lambda}_{H_r,d_r}(\rho) = +\infty$, $\lambda_{H_r,d_r}$ is unbounded, and is concave in the interior of its domain. In case $(3)$, $\lambda_{H_r,d_r}(\rho_-) = 0$, $\lim_{\rho \to \rho_{\pm}}\dot{\lambda}_{H_r,d_r}(\rho) = +\infty$, $\lambda_{H_r,d_r}$ has a unique inflection point in $(\rho_-,\rho_+)$, and goes from being concave to convex (Figure \ref{fig:6}). \end{proposition} \begin{figure} \caption{Behavior of $\lambda_{H_r,d_r}$ for $r$ odd, $n>r$, and $d_r>0$.} \label{fig:6} \end{figure} \begin{proposition} \label{r-odd-n=r} Assume $n=r$ odd. Then $\lambda_{H_n,d_n}$ is well-defined for $d_n < 1$. Set $I_n \coloneqq I_{n,n}$. \begin{enumerate} \item If $d_n < -1$, then $\lambda_{H_n,d_n}$ is defined on $[\rho_-,\rho_+]$, where $\rho_-$ is the only solution of $nH_nI_n(\rho)+d_n=-1$, and $\rho_+$ is the only solution of $nH_nI_n(\rho)+d_n=1$. \item If $-1\leq d_n < 1$, then $\lambda_{H_n,d_n}$ is defined on $[0,\rho_+]$, where $\rho_+$ is defined as above. \end{enumerate} Further, $\lambda_{H_n,d_n}$ is convex in the interior of its domain. Set $\rho_0$ to be the only solution of $nH_nI_n(\rho)+d_n = 0$. In case $(1)$, we have $\lambda_{H_n,d_n}(\rho_-) = 0$, $\dot{\lambda}_{H_n,d_n}(\rho) < 0$ for $\rho_- < \rho < \rho_0$, $\dot{\lambda}_{H_n,d_n}(\rho) > 0$ for $\rho > \rho_0$, $\lambda_{H_n,d_n}(\rho_+) < 0$, and $\lim_{\rho \to \rho_+}\dot{\lambda}_{H_n,d_n}(\rho) = +\infty$.
In case $(2)$, one finds $\dot{\lambda}_{H_n,d_n}(0) = d_n^{1/n}/(1-d_n^{2/n})^{1/2}$, and $\lim_{d_n \to -1} \dot{\lambda}_{H_n,d_n}(0) = -\infty$. For $d_n < 0$ the curve $\lambda_{H_n,d_n}$ first decreases then increases, and the sign of $\lambda_{H_n,d_n}$ depends on the value of $d_n$, whereas for $d_n \geq 0$ the curve $\lambda_{H_n,d_n}$ is increasing on the whole domain. Moreover, $\lim_{\rho \to 0} \ddot{\lambda}_{H_n,0}(\rho) = H_n{}^{1/n}$, and $\lim_{\rho \to \rho_0} \ddot{\lambda}_{H_n,d_n}(\rho) = +\infty$ (Figure~\ref{fig:7}). \end{proposition} \begin{figure} \caption{Behavior of $\lambda_{H_n,d_n}$ for $n=r$ odd.} \label{fig:7} \end{figure} \begin{proof} The only part of the proof differing from the proof of Proposition \ref{r-even-n=r} is about the sign of $\lambda_{H_n,d_n}(\rho_+)$. We look first at the case $d_n < -1$. Since $nH_nI_n+d_n$ is convex, we have $nH_nI_n(\rho)+d_n < s(\rho)$ for every $\rho \in (\rho_-,\rho_+)$, where $s$ is the line passing through the points $(\rho_-,-1)$ and $(\rho_+,1)$. Therefore $$\frac{(nH_nI_n(\rho)+d_n)^{\frac1n}}{\sqrt{1-(nH_nI_n(\rho)+d_n)^{\frac2n}}} < \frac{s(\rho)^{\frac1n}}{\sqrt{1-s(\rho)^{\frac2n}}},$$ as the function $x \mapsto x^{1/n}/\sqrt{1-x^{2/n}}$ is increasing. But the integral of the latter quantity over $(\rho_-,\rho_+)$ is computed to be zero, as the integrand function is odd: \begin{align*} \int_{\rho_-}^{\rho_+} \frac{s(\xi)^{\frac1n}}{\sqrt{1-s(\xi)^{\frac2n}}}\,d\xi = \frac{\rho_+-\rho_-}{2}\int_{-1}^1 \frac{u^{\frac1n}}{\sqrt{1-u^{\frac2n}}}\,du = 0. \end{align*} This shows $\lambda_{H_n,d_n}(\rho_+) < 0$. The same holds when $d_n = -1$, the only difference being that $\rho_- = 0$. Since $\lambda_{H_n,d_n}(\rho_+)$ depends continuously on $d_n$, and for $d_n \geq 0$ we have $\lambda_{H_n,d_n}(\rho_+) > 0$, there must be a $d_n \in (-1,0)$ such that $\lambda_{H_n,d_n}(\rho_+) = 0$. \end{proof} As in the case of $r$ even (cf.\ Proposition \ref{prop:c2-reg}), before proceeding with the classification result, we study the regularity of the $H_r$-hypersurface generated by a rotation of the graph of $\lambda_{H_r,d_r}.$ \begin{proposition} \label{prop:c2reg-disp} Let $n\geq r$, $r$ odd. Then the hypersurface generated by the curve $\lambda_{H_r,d_r}$ is of class $C^2$ at $\rho = \rho_+$, when the latter exists, and it is of class $C^2$ at $\rho=\rho_-$ if and only if $n>r$, or $n=r$ and $d_n\in(-\infty,-1)\cup\{0\}$. When $n=r$ and $d_r\in[-1,0)\cup(0,1)$, there is a conical singularity at $\rho=0$. Moreover, if $r\geq 3$ and $d_r \neq 0$, the hypersurface is $C^2$-singular at any critical point of the function $\lambda_{H_r,d_r}$. \end{proposition} \begin{proof} The first part of the proof is an application of the same argument as in Proposition \ref{prop:c2-reg}. If $r=1$ it is well known that the corresponding hypersurface is smooth. Now let $r\geq 3$ and let $\rho_0$ be a critical point of $\lambda_{H_r,d_r}$. By \eqref{lambda} we have that $nH_rI_{n,r}(\rho_0)+d_r=0$. By \eqref{second-derivative} it follows that $\lim_{\rho\rightarrow\rho_0}\ddot\lambda_{H_r,d_r}(\rho)=+\infty$. Using \eqref{data} we can see that $\lim_{\rho\rightarrow\rho_0}k_n(\rho)=+\infty$, hence $|A|^2$ blows up near $\rho_0$. \end{proof} \begin{remark} The same kind of singularities appears in the construction of rotationally invariant higher order translators, i.e.\ hypersurfaces with $H_r=g(\nu,\partial/\partial t)$, where $r>1$ and $\nu$ is the unit normal, see \cite{lima-pipoli}.
\end{remark} We now proceed with the classification results. We recover results by Bérard--Sa Earp \cite{berard-earp}, Elbert--Sa Earp \cite[Section 6]{elb-earp} and de Lima--Manfio--dos Santos \cite[Theorem 1 and 2]{lima}. Recall that a slice is any subspace $\mathbb H^n\times\{t\} \subset \mathbb H^n \times \mathbb R$, and its origin was defined as its intersection with the $t$-axis. \begin{theorem} \label{str-thm5} Assume $r$ odd, $n>r$, and $d_r < 0$. By reflecting the rotational hypersurface given by the graph of $ \lambda_{H_r,d_r}$ across suitable slices, we get an immersed $H_r$-hypersurface. \begin{enumerate} \item If $H_r \leq (n-r)/n$, the hypersurface generated by $\lambda_{H_r,d_r}$, together with its reflection across the slice $\mathbb H^n \times \{0\}$, is an annulus with self-intersections along a sphere centered at the origin of the slice $\mathbb H^n \times \{0\}$. The hypersurface is of class $C^2$ for $r=1$, and of class $C^1$ for $r \geq 3$. In the latter case, the singular set consists of two spheres of radius $\rho_0$ contained in the slices $\mathbb H^n \times \{\pm \lambda_{H_r,d_r}(\rho_0)\}$ and centered at their origin. \item If $H_r > (n-r)/n$, then we distinguish two subcases. If $r=1$, the hypersurface generated by $\lambda_{H_r,d_r}$, together with its reflection across the slice $\mathbb H^n \times \{0\}$ and vertical translations of integral multiples of $2\lambda_{H_r,d_r}(\rho_+)$, is a $C^2$ nodoid with self-intersections along infinitely many spheres centered at the origin of distinct slices. If $r \geq 3$, we have two possibilities. First, one may get nodoids as in the $r=1$ case, except that they are not $C^2$-regular (singularities appear along infinitely many spheres of radius $\rho_0$ in distinct slices). Second, one may get compact hypersurfaces with the topology of $S^1 \times S^{n-1}$ with $C^2$ singularities along two spheres of radius $\rho_0$ contained in the slices $\mathbb H^n \times \{\pm \lambda_{H_r,d_r}(\rho_0)\}$ and centered at their origin. \end{enumerate} \end{theorem} \begin{theorem} \label{str-thm6} Assume $r$ odd, $n>r$, and $d_r = 0$. Then the rotational hypersurface given by the graph of $\lambda_{H_r,0}$ is a complete embedded $H_r$-hypersurface, possibly after reflection across a suitable slice. \begin{enumerate} \item If $H_r \leq (n-r)/n$, the hypersurface generated by $\lambda_{H_r,0}$ is an entire graph of class $C^2$ tangent to the slice $\mathbb H^n \times \{0\}$ at the origin. \item If $H_r > (n-r)/n$, the hypersurface generated by $\lambda_{H_r,0}$, together with its reflection across the slice $\mathbb H^n \times \{\lambda_{H_r,0}(\rho_+)\}$, is a class $C^2$ sphere. \end{enumerate} \end{theorem} \begin{theorem} \label{str-thm7} Assume $r$ odd, $n>r$, and $d_r > 0$. By reflecting the rotational hypersurface given by the graph of $\lambda_{H_r,d_r}$ across suitable slices, we get a complete non-compact embedded $H_r$-hypersurface. \begin{enumerate} \item If $H_r \leq (n-r)/n$, the hypersurface generated by $\lambda_{H_r,d_r}$, together with its reflection across the slice $\mathbb H^n \times \{0\}$, is a class $C^2$ annulus. When $n=r+1$ and $H_r=1/(r+1)$ the same holds, provided that $d_r$ is smaller than $(r-1)!!/(r-2)!!$ for $r>1$, or smaller than $1$ for $r=1$. \item If $H_r > (n-r)/n$, the hypersurface generated by $\lambda_{H_r,d_r}$ together with its reflections across the slices $\mathbb H^n \times \{k\lambda_{H_r,d_r}(\rho_+)\}$, $k \in \mathbb Z$, is a class $C^2$ onduloid. 
\end{enumerate} \end{theorem} \begin{theorem} \label{str-thm8} Assume $n=r$ odd and $H_n > 0$. Then the hypersurface generated by $\lambda_{H_n,d_n}$, together with its reflection across the slice $\mathbb H^n \times \{\lambda_{H_n,d_n}(\rho_+)\}$ is a class $C^2$ sphere if $d_n = 0$, and a picked sphere if $0 < d_n < 1$. When $-1 \leq d_n < 0$, the hypersurface generated by $\lambda_{H_n,d_n}$, together with its reflections across suitable slices, has self-intersections and we have three possibilities: it may be a generalized horn torus, a portion of generalized spindle torus, or a nodoid. In all cases, the hypersurface has $C^2$ singularities, cf.\ Table \ref{table:3}. When $d_n < -1$, the hypersurface generated by $\lambda_{H_n,d_n}$, together with its reflection across the slice $\mathbb H^n \times \{0\}$ and vertical translations of integral multiples of $2\lambda_{H_n,d_n}(\rho_+)$, is an immersed nodoid with $C^2$ singularities along infinitely many spheres of radius $\rho_0$ in distinct slices and centered at their origin. \end{theorem} Tables \ref{table:1}--\ref{table:3} summarize the above results. We describe the shape of the hypersurfaces and specify their homeomorphism type when the topology is easily described. \begin{table}[ht] \centering \caption{Rotation $H_r$-hypersurfaces in $\mathbb H^n \times \mathbb R$ with $H_r > (n-r)/n$} \begin{tabular}{p{25mm}p{30mm}p{40mm}M{10mm}} \toprule \textbf{Parameters} & \textbf{Shape / Topology} & \textbf{Singularities} & \textbf{Figure} \\ \midrule $d_r>0$ & onduloid / $S^{n-1} \times \mathbb R$ & \ding{55} & \ref{fig:2}, \ref{fig:6} \\ \cmidrule{1-4} $d_r=0$ & sphere / $S^n$ & \ding{55} & \ref{fig:1}, \ref{fig:5} \\ \cmidrule{1-4} $d_r<0$, $r$ even & singular onduloid / $S^{n-1} \times \mathbb R$ & infinitely many copies of $S^{n-1}$ given by cusps in horizontal slices & \ref{fig:1}\\ \cmidrule{1-4} \multirow{2}{*}[-2.5em]{$d_r<0$, $r$ odd} & nodoid & $|A|^2\rightarrow\infty$ on infinitely many copies of $S^{n-1}$ in horizontal slices if $r \geq 3$, else smooth & \ref{fig:4} \\ \cmidrule(l){2-4} & $S^{n-1} \times S^1$ & $|A|^2\rightarrow\infty$ on two copies of $S^{n-1}$ in horizontal slices & \ref{fig:4} \\ \bottomrule \end{tabular} \label{table:1} \end{table} \begin{remark} Let us comment on the last case of Table \ref{table:1}, i.e.\ $d_r<0$ and $r$ odd. Both types of hypersurfaces noted there occur depending on the value of $H_r$ and $d_r$. Also, $S^{n-1} \times S^1$ occurs only if $r \geq 3$. As $\lambda_{H_r,d_r}(\rho_+) \neq 0$ gets closer to the $\rho$-axis, the corresponding nodoids get more self-intersections, and the topology of the hypersurface becomes non-trivial. 
\end{remark} \begin{table}[ht] \centering \caption{Rotation $H_r$-hypersurfaces in $\mathbb H^n \times \mathbb R$ with $H_r \leq (n-r)/n$} \begin{tabular}{p{25mm}p{30mm}p{40mm}M{10mm}} \toprule \textbf{Parameters} & \textbf{Shape / Topology} & \textbf{Singularities} & \textbf{Figure} \\ \midrule $d_r>0$ & unbounded annulus / $S^{n-1} \times \mathbb R$ & \ding{55} & \ref{fig:2}, \ref{fig:6} \\ \cmidrule{1-4} $d_r=0$ & entire graph / $\mathbb R^n$ & \ding{55} & \ref{fig:1}, \ref{fig:5} \\ \cmidrule{1-4} $d_r<0$, $r$ even & singular annulus / $S^{n-1} \times \mathbb R$ & a copy of $S^{n-1}$ given by cusps in the slice $t=0$ & \ref{fig:1}\\ \cmidrule{1-4} $d_r<0$, $r$ odd & singular annulus with self-intersections along a copy of $S^{n-1}$ in $\mathbb H^n \times \{0\}$ & $|A|^2\rightarrow\infty$ on two copies of $S^{n-1}$ in horizontal slices if $r \geq 3$, else smooth & \ref{fig:4} \\ \bottomrule \end{tabular} \label{table:2} \end{table} \begin{table}[ht] \centering \caption{Rotation $H_n$-hypersurfaces in $\mathbb H^n\times\mathbb R$ with $H_n>0$} \begin{tabular}{p{25mm}p{30mm}p{40mm}M{10mm}} \toprule \textbf{Parameters} & \textbf{Shape / Topology} & \textbf{Singularities} & \textbf{Figure} \\ \midrule $d_n < -1$, $n$ odd & nodoid & $|A|^2\rightarrow\infty$ at infinitely many copies of $S^{n-1}$ in horizontal slices & \ref{fig:7} \\ \cmidrule{1-4} \multirow{3}{*}{\parbox{3cm}{$-1 \leq d_n < 0$ \\ $n$ odd}} & nodoid & $|A|^2 \to \infty$ at infinitely many points on the $t$-axis and copies of $S^{n-1}$ in horizontal slices & \ref{fig:7} \\ \cmidrule(l){2-4} & generalized horn torus & $|A|^2 \to \infty$ at two copies of $S^{n-1}$ in horizontal slices and at one point on the $t$-axis & \ref{fig:7}\\ \cmidrule(l){2-4} & portion of generalized spindle torus & $|A|^2 \to \infty$ at two copies of $S^{n-1}$ in horizontal slices and at two points on the $t$-axis & \ref{fig:7} \\ \cmidrule{1-4} $d_n < 0$, $n$ even & singular onduloid / $S^{n-1} \times \mathbb R$ & infinitely many copies of $S^{n-1}$ given by cusps in horizontal slices & \ref{fig:3} \\ \cmidrule{1-4} $d_n = 0$ & sphere / $S^n$ & \ding{55} & \ref{fig:3}, \ref{fig:7} \\ \cmidrule{1-4} $0 < d_n < 1$ & picked sphere / $S^n$ & $|A|^2\rightarrow\infty$ at two points on the $t$-axis & \ref{fig:3}, \ref{fig:7} \\ \bottomrule \end{tabular} \label{table:3} \end{table} \begin{remark} When $-1\leq d_n<0$ and $n$ is odd, all three cases in Table \ref{table:3} occur depending on the value of the parameter $d_n$. \end{remark} \section{Translation \texorpdfstring{$H_r$}{Hr}-hypersurfaces} \label{translation-surfaces} In the proof of Theorem \ref{main-thm}, besides rotation hypersurfaces, we will need further $H_r$-hypersurfaces as barriers. The suitable ones are hypersurfaces with $r$-th mean curvature $H_r>(n-r)/n$ that are invariant under hyperbolic translations in $\mathbb H^n\times \mathbb R$. Hyperbolic translations in $\mathbb H^n\times \mathbb R$ are hyperbolic translations of a slice $\mathbb H^n \times \{t\}$ extended to act as the identity on the vertical component; they will be described precisely later. When $0<H_r< (n-r)/n$, smooth complete hypersurfaces invariant under hyperbolic translation are treated in \cite{lima}. The case $r=1$ has already been studied in \cite{berard-earp}, and an explicit description for $n=2$ has been given by Manzano \cite{manzano}. Therefore, we restrict to the case $r>1.$ As in Section \ref{rotational-surfaces}, given $n$, $r$, and $H_r > 0$, one finds a one-parameter family of curves describing the profile of such translation hypersurfaces.
Since we do not aim to a complete classification of translation hypersurfaces, we will choose the parameter to be zero (see \eqref{def:mu} below), and we will only describe a portion of the hypersurface. This will be enough for our purposes. Let us recall the construction of translation hypersurfaces in $\mathbb H^n \times \mathbb R$ by B\'erard--Sa Earp \cite{berard-earp}. For simplifying the notation, we denote the zero-section $\mathbb H^n \times \{0\}$ by $\mathbb H^n.$ Take $\gamma$ in $\mathbb H^n$ to be a geodesic passing through $0$. We define $V$ to be the vertical plane $\{(\gamma(\rho),t): t,\rho \in \mathbb R\}$. We now take $\pi$ to be a totally geodesic hyperplane in $\mathbb H^n$ orthogonal to $\gamma$ at the origin. We consider hyperbolic translations along a geodesic $\delta$ passing through $0$ in $\pi,$ repeated slice-wise to get isometries of $\mathbb H^n \times \mathbb R$. Now take a curve $c(\rho) \coloneqq (\tanh(\rho/2), \mu(\rho))$ in $V$, where $\mu$ is to be determined. For any $\rho > 0$, consider the section $\mathbb H^n \times \{\mu(\rho)\}$, and move the point $c(\rho)$ via the above hyperbolic translations. On each slice, this gives a hypersurface $M_{\rho}$ in $\mathbb H^n \times \{\mu(\rho)\}$ through $c(\rho)$. Hence the curve $c$ generates a translation hypersurface $M = \cup_{\rho} M_{\rho}$ in $\mathbb H^n \times \mathbb R$. The principal curvatures of the hypersurface $M,$ with respect to the unit normal pointing upwards are \begin{equation*} k_1 = \dots = k_{n-1} = \frac{\dot{\mu}}{(1+\dot{\mu}^2)^{\frac12}}\tanh(\rho), \qquad k_n = \frac{\ddot{\mu}}{(1+\dot{\mu}^2)^{\frac32}}. \end{equation*} Then $$nH_r = (n-r)\tanh^r(\rho)\frac{\dot{\mu}^r}{(1+\dot{\mu}^2)^{r/2}}+\tanh^{r-1}(\rho)\frac{r\dot{\mu}^{r-1}\ddot{\mu}}{(1+\dot{\mu}^2)^{\frac{r+2}{2}}}.$$ This is equivalent to the identity \begin{equation} \label{start2} nH_r\frac{\cosh^{n-1}(\rho)}{\sinh^{r-1}(\rho)} = \frac{d}{d \rho} \left(\cosh^{n-r}(\rho)\frac{\dot{\mu}^r}{(1+\dot{\mu}^2)^{\frac{r}{2}}}\right), \qquad r=1,\dots,n. \end{equation} Note that now the integrals $$\int_0^{\rho} \frac{\cosh^{n-1}(\tau)}{\sinh^{r-1}(\tau)}\,d\tau$$ are not well-defined for $r > 1$, because $$\int_0^{\rho} \frac{\cosh^{n-1}(\tau)}{\sinh^{r-1}(\tau)}\,d\tau \geq \int_0^{\rho} \frac{\cosh^{r-1}(\tau)}{\sinh^{r-1}(\tau)}\,d\tau \geq \int_0^{\rho} \mathrm{cotgh}(\tau)\,d\tau = \infty.$$ We then choose $\epsilon > 0$ and define $$J_{n,r,\epsilon}(\rho) \coloneqq \int_{\epsilon}^{\rho} \frac{\cosh^{n-1}(\tau)}{\sinh^{r-1}(\tau)}\,d\tau, \quad \text{ and } \quad J_{n,1}(\rho) \coloneqq \int_0^{\rho} \cosh^{n-1}(\tau)\,d\tau, \qquad \rho > 0.$$ Then one can integrate \eqref{start2} twice and set the constant of integration to be zero, so as to obtain \begin{equation} \label{def:mu} \mu_{H_r,\epsilon}(\rho) = \int_{\epsilon}^{\rho} \frac{(nH_rJ_{n,r,\epsilon}(\xi))^{\frac{1}{r}}}{\sqrt{\cosh^{\frac{2(n-r)}{r}}(\xi)-(nH_rJ_{n,r,\epsilon}(\xi))^{\frac{2}{r}}}}\,d\xi, \qquad \rho \geq \epsilon. \end{equation} Again, $\mu$ depends on $H_r$, and $\epsilon$, so we write $\mu_{H_r,\epsilon}$ to be precise. \begin{remark} \label{signs2} Note that we have defined $\mu_{H_r,\epsilon}$ in \eqref{def:mu} for $\rho \geq \epsilon$. This is because we are only interested in the portion of translation hypersurface described by $\mu_{H_r,\epsilon}$ for $\rho \geq \epsilon$. 
The tangent line to the curve described by $\mu_{H_r,\epsilon}$ at $\rho = \epsilon$ is horizontal for all $r$, and $\mu_{H_r,\epsilon}$ is increasing for $\rho > \epsilon$. The second derivative of $\mu_{H_r,\epsilon}$ is computed as \begin{equation} \label{second-derivative-mu} \ddot{\mu}_{H_r,\epsilon}(\rho) = \frac{\sinh(\rho)\cosh^{\frac{2(n-r)}{r}-1}(\rho)\left(nH_r\frac{\cosh^n(\rho)}{\sinh^r(\rho)}-(n-r)(nH_rJ_{n,r,\epsilon}(\rho))\right)}{r(nH_rJ_{n,r,\epsilon}(\rho))^{\frac{r-1}{r}}\left(\cosh^{\frac{2(n-r)}{r}}(\rho)-(nH_rJ_{n,r,\epsilon}(\rho))^{\frac{2}{r}}\right)^{\frac{3}{2}}}. \end{equation} This expression will be used when studying the convexity of $\mu_{H_r,\epsilon}$ and its regularity up to second order. \end{remark} \begin{remark} \label{concavity-j} Let us discuss a few details on $J_{n,r,\epsilon}$ for $r>1$ and $\rho>\epsilon.$ It is clear that $J_{n,r,\epsilon}(\epsilon) = 0$ and $\lim_{\rho \to +\infty} J_{n,r,\epsilon}(\rho) = +\infty.$ Further, $J_{n,r,\epsilon}'(\rho) > 0$ for $\rho\geq \epsilon.$ Hence $J_{n,r,\epsilon}$ is a bijection between $(\epsilon,\infty)$ and $(0,+\infty)$. For $n>r$, we have the asymptotic behavior $(n-r)J_{n,r,\epsilon}(\rho) \approx \cosh^{n-r}(\rho)$ for $\rho \to+\infty$, and for $n = r$ we have $J_{n,n,\epsilon}(\rho) \approx \rho$ for $\rho \to +\infty$. \end{remark} We fix $r > 1$, $H_r > (n-r)/n$, and $\epsilon>0$, and study the function $$\mu_{H_r,\epsilon}(\rho) \coloneqq \int_{\epsilon}^{\rho} \frac{(nH_rJ_{n,r,\epsilon}(\xi))^{\frac{1}{r}}}{\sqrt{\cosh^{\frac{2(n-r)}{r}}(\xi)-(nH_rJ_{n,r,\epsilon}(\xi))^{\frac{2}{r}}}}\,d\xi.$$ \begin{proposition} \label{curve-translation-prop} Let $r > 1,$ $H_r > (n-r)/n$, and fix $\epsilon>0.$ Then $\mu_{H_r,\epsilon}$ is defined on $[\epsilon,\rho_+^{\epsilon}]$, where $\rho_+^{\epsilon}$ is the only solution of $\cosh^{n-r}(\rho)-nH_rJ_{n,r,\epsilon}(\rho) = 0$. We have $\mu_{H_r,\epsilon}(\epsilon) = 0 = \dot{\mu}_{H_r,\epsilon}(\epsilon)$, $\dot{\mu}_{H_r,\epsilon}(\rho) > 0$ for $\rho \in (\epsilon,\rho_+^{\epsilon})$, $\lim_{\rho \to \rho_+^{\epsilon}} \dot{\mu}_{H_r,\epsilon}(\rho) = +\infty$, and $\mu_{H_r,\epsilon}$ is convex in the interior of its domain. Further, $\lim_{\rho \to \epsilon} \ddot{\mu}_{H_r,\epsilon}(\rho) = +\infty$ (Figure \ref{fig:9}). \end{proposition} \begin{figure} \caption{Behavior of $\mu_{H_r,\epsilon}$ for $r>1$ and $H_r>(n-r)/n$.} \label{fig:9} \end{figure} \begin{proof}Putting together all constraints gives $$0 \leq nH_rJ_{n,r,\epsilon}(\rho) < \cosh^{n-r}(\rho).$$ Notice that $\rho_+^{\epsilon}$ is finite if and only if $$f_{\epsilon}(\rho) \coloneqq \cosh^{n-r}(\rho)-nH_rJ_{n,r,\epsilon}(\rho)$$ admits a zero. One has $f_{\epsilon}(\epsilon) > 0$, whereas the derivative of $f_{\epsilon}$ is $$f_{\epsilon}'(\rho) = \frac{\cosh^{n-1}(\rho)}{\sinh^{r-1}(\rho)}((n-r)\tanh^r(\rho)-nH_r),$$ and is negative since $H_r > (n-r)/n$. Moreover, $f_{\epsilon}'$ tends to $-\infty$, hence a zero $\rho_+^{\epsilon}$ exists and is unique. For $n=r$, $f_{\epsilon}$ reduces to $1-nH_nJ_{n,n,\epsilon}$, which has a zero $\rho_+^{\epsilon} > \epsilon$ regardless of the value of $H_n > 0$. The remaining details on the behavior of $\mu_{H_r,\epsilon}$ follow as in the previous section. \end{proof} \begin{remark} The technique used for Proposition \ref{prop:c2-reg} can be combined with \eqref{second-derivative-mu} and yields $C^2$-regularity of the translation $H_r$-hypersurface at points corresponding to $\rho = \rho_+^{\epsilon}$.
At points corresponding to $\rho = \epsilon$, when $r > 1$, we only have $C^1$ regularity. \end{remark} By using the translation defined at the beginning of this section on the curves defined in Proposition \ref{curve-translation-prop}, one gets translation $H_r$-hypersurfaces in $\mathbb H^n \times \mathbb R$, which we describe in the following theorem. Recall that $\pi$ is the totally geodesic hyperplane in $\mathbb H^n$ orthogonal to the plane containing the support of the curve given by $\mu_{H_r,\epsilon}$ at the origin. \begin{theorem} \label{str-thm-cyl2} Let $r > 1$, $H_r > (n-r)/n$, and $\epsilon > 0$. Reflect the graph of $\mu_{H_r,\epsilon}$ on $[\epsilon,\rho_+^{\epsilon}]$ with respect to the horizontal slice $\mathbb H^n \times \{\mu_{H_r,\epsilon}(\rho_+^{\epsilon})\}$. Translating the arc obtained along geodesics through the origin in $\pi$ gives an $H_r$-hypersurface with the topology of $\mathbb R^{n-1} \times [0,1]$ and of class $C^2$ away from the boundary. The boundary components are planar equidistant hypersurfaces with distance $\epsilon$ from $\pi$; they lie in two different slices and can be obtained from one another by a vertical translation. \end{theorem} \section{Estimates} \label{estimates} In this section we collect estimates that will be needed in the proof of Theorem \ref{main-thm}. We define radii and heights related to pieces of the hypersurfaces classified in the above sections, and study the interplay between them. First we need to compare \emph{spheres} and \emph{horizontal cylinders}. Fix $n \geq r > 1$ and $H_r > (n-r)/n.$ Denote by $\mathcal S_r$ the sphere in $\mathbb H^n \times \mathbb R$ with $r$-th mean curvature $H_r$, namely the compact rotation hypersurface generated by $\lambda_{H_r,0}$ in Theorems \ref{str-thm2}, \ref{str-thm4}, \ref{str-thm6}, \ref{str-thm8}. Let $R_{\mathcal S_r} \coloneqq \rho_+$, where $\rho_+$ was defined as the length of the domain of $\lambda_{H_r,0}$. For any $\epsilon>0$, let us denote by $\mathcal C_{r,\epsilon}$ the $H_r$-hypersurface described in Theorem \ref{str-thm-cyl2}, which is a portion of a horizontal cylinder. Set $R_{\mathcal C_{r,\epsilon}} \coloneqq \rho_+^{\epsilon}-\epsilon$, where $\rho_+^{\epsilon}$ is the unique value such that $$f_{\epsilon}(\rho_+^{\epsilon}) = \cosh^{n-r}(\rho_+^{\epsilon})-nH_rJ_{n,r,\epsilon}(\rho_+^{\epsilon}) = 0.$$ Note that $\mathcal C_{r,\epsilon}$ has a horizontal hyperplane of symmetry $P$ and $R_{\mathcal C_{r,\epsilon}}$ is the distance between the projection of the boundary of $\mathcal C_{r,\epsilon}$ on $P$ and $\mathcal C_{r,\epsilon}\cap P.$ The next estimate will be used in Claim \ref{claim2} for the proof of Theorem \ref{main-thm}. \begin{lemma} \label{radii-comparison} For all $n$, $r$, $H_r$ with $n \geq r > 1$ and $H_r > (n-r)/n$, there exists a positive $\epsilon=\epsilon(n,r,H_r)$ such that $R_{\mathcal C_{r,\epsilon}} <R_{\mathcal S_r}$. \end{lemma} \begin{remark} A version of this statement for $r=1$ is given in Nelli--Pipoli \cite[Lemma~3.3]{nelli-pipoli}. Lemma \ref{radii-comparison} may be viewed as an extension of the latter to $r > 1$. \end{remark} \begin{proof} We have already shown that for $H_r > (n-r)/n$ (or $H_n > 0$) the function $f_{\epsilon}$ is decreasing. Since $\rho_+^{\epsilon}>\epsilon$, we have $\lim_{\epsilon\rightarrow\infty}\rho_+^{\epsilon}=\infty$. Note that the function $\epsilon \mapsto \rho_+^{\epsilon}$ is continuous and increasing.
To see this, let $0<a<b$ and $\rho > b$, so that $$J_{n,r,a}(\rho)=\int_a^b\frac{\cosh^{n-1}(x)}{\sinh^{r-1}(x)}\,dx+\int_{b}^{\rho}\frac{\cosh^{n-1}(x)}{\sinh^{r-1}(x)}\,dx>\int_{b}^{\rho}\frac{\cosh^{n-1}(x)}{\sinh^{r-1}(x)}\,dx=J_{n,r,b}(\rho).$$ It follows that $$f_a(\rho_+^b)<\cosh^{n-r}(\rho_+^b)-nH_rJ_{n,r,b}(\rho_+^b)=f_b(\rho_+^b)=0=f_a(\rho_+^a).$$ Since $f_a$ is decreasing, $\rho_+^b>\rho_+^a$ holds. We claim that $\rho_+^{\epsilon}<\sqrt\epsilon$ if $\epsilon$ is small enough. By definition of $\rho_+^{\epsilon}$ and the fact that $f_{\epsilon}$ is decreasing, it is enough to prove that $f_{\epsilon}(\sqrt\epsilon)<0$ for $\epsilon$ small enough. Since the function $x\cosh(x)-\sinh(x)$ is positive for $x > 0$, we deduce that $$\frac{\cosh^{n-1}(x)}{\sinh^{r-1}(x)} = \cosh^{n-r}(x)\frac{\cosh^{r-1}(x)}{\sinh^{r-1}(x)} > \frac{1}{x^{r-1}},$$ whence $$ J_{n,r,\epsilon}(\sqrt\epsilon)\geq\left\{\begin{array}{rcl} -\frac12\log(\epsilon)&\text{ if }&r=2,\\ \frac{1}{r-2}(\epsilon^{{2-r}}-\epsilon^{\frac{2-r}{2}})&\text{ if }&r>2. \end{array}\right. $$ In both cases $\lim_{\epsilon\rightarrow 0}J_{n,r,\epsilon}(\sqrt\epsilon)=+\infty$. It follows that $f_{\epsilon}(\sqrt\epsilon)<0$ for $\epsilon$ sufficiently small, hence the claim is proved. We deduce that $\epsilon<\rho_+^{\epsilon}<\sqrt\epsilon$ for $\epsilon$ small, so $\lim_{\epsilon\rightarrow 0}\rho_+^{\epsilon}=0$. Given $n \geq r > 1$ and $H_r > (n-r)/n$, the value of $R_{\mathcal S_r}$ is fixed. By the above statements, there is a value of $\epsilon > 0$ such that $R_{\mathcal C_{r,\epsilon}} < \rho_+^{\epsilon} < R_{\mathcal S_r}$. \end{proof} The next hypersurfaces we consider are \emph{annuli}. Let $n>r$, $H_r = (n-r)/n$, and choose $d_r > 0$. For these values of the parameters, the curves $\lambda_{H_r,d_r}$ for $r$ even and odd share the same behavior. Specifically, they have a zero $\rho_-$, which is the only solution of $\sinh^{n-r}(\rho) -(nH_rI_{n,r}(\rho)+d_r)=0$, and start with vertical tangent. After a vertical reflection across the slice $\mathbb H^n \times \{0\}$ and rotation about a vertical axis, each curve produces an unbounded annulus (see Theorems \ref{str-thm3} and \ref{str-thm7}). Let us highlight a property of $d_r$ that will simplify our calculations. Since $nI_{n,r}(\rho) \approx \rho^n$ for $\rho$ close to $0$, for $\rho_-$ small we estimate $$d_r = \sinh^{n-r}(\rho_-)-nH_rI_{n,r}(\rho_-) \approx \rho_-^{n-r}-H_r\rho_-^{n} = \rho_-^{n-r}(1-H_r\rho_-^r).$$ This implies \begin{equation} \label{dr} \lim_{d_r \to 0} \frac{d_r}{\rho_-^{n-r}} = 1. \end{equation} We need to consider the portion of the previous annulus between the slices $\mathbb H^n \times \{0\}$ and $\mathbb H^n \times \{h^*\},$ where $h^*$ is defined as \begin{equation} \label{h-star} h^* \coloneqq\int_{\rho_-}^{2\rho_-} \frac{((n-r)I_{n,r}(\xi)+d_r)^{\frac1r}}{\sinh^{\frac{n-r}{r}}(\xi)}\,d\xi. \end{equation} Observe that by \eqref{dr} we can interpret $h^*$ as an approximation of the value $\lambda_{{(n-r)}/{n},d_r}(2\rho_-)$ for $\rho_-$ small. Moreover $h^*< \lambda_{{(n-r)}/{n},d_r}(2\rho_-).$ Let now $n=r.$ For $d_n > 0$ small enough, we consider portions of the picked spheres found in Theorems \ref{str-thm4} and \ref{str-thm8}, so that the cases $n$ even and odd can be treated together.
Here $\rho_-$ is not defined a priori, so we choose $\rho_- = d_n^{2/n}$ and define $h^*$ as follows (by abuse of notation, we use the notation $h^*$ as above) \begin{equation} \label{new-h} h^* \coloneqq \int_{\rho_-}^{2\rho_-} d_n^{\frac1n} = d_n^{\frac3n}. \end{equation} Note that when $\rho_-$ is small, then $2\rho_-<\rho_+$ and $h^*$ is an approximation of $\lambda_{H_n,d_n}(2\rho_-)-\lambda_{H_n,d_n}(\rho_-)$, which is the height of the portion of the picked sphere between two slices intersecting it in codimension one spheres of radii ${\rho_-}$ and ${2\rho_-}$. Moreover $h^*<\lambda_{H_n,d_n}(2\rho_-)-\lambda_{H_n,d_n}(\rho_-).$ For any $n \geq r$ we define $\rho_{H_r}^*$ implicitly as \begin{equation} \label{rho*} h^* \eqqcolon \int_0^{\rho_{H_r}^*} \frac{(nH_rI_{n,r}(\xi))^{\frac1r}}{\sqrt{\sinh^{\frac{2(n-r)}{r}}(\xi)-(nH_rI_{n,r}(\xi))^{\frac2r}}}\,d\xi, \end{equation} Notice that $\rho_{H_r}^*$ is the radius of the intersection of the sphere $\mathcal S_r$ of constant curvature $H_r$ with a slice at vertical distance $h^*$ from the South Pole. As above, we assume $\rho_-$ is small, which is equivalent to requiring $d_r$ small (recall that $d_r \to 0$ if and only if $\rho_- \to 0$). \begin{lemma} \label{various-estimates} Let $n > r$, $d_r > 0$, $H_r = (n-r)/n$, take $h^*$ as in \eqref{h-star}, and $\rho_{H_r}^*$ as in \eqref{rho*}. Then $$\lim_{d_r \to 0} \rho_{H_r}^* = 0, \qquad \lim_{d_r \to 0} \frac{\rho_-}{\rho_{H_r}^*} = 0.$$ For $n=r$, $d_n > 0$, and $H_n > 0$, take $h^*$ as in \eqref{new-h} and $\rho_{H_n}^*$ as in \eqref{rho*}. Then $$\lim_{d_n \to 0} \rho_{H_n}^* = 0, \qquad \lim_{d_n \to 0} \frac{\rho_-}{\rho_{H_n}^*} = 0.$$ \end{lemma} \begin{proof} First assume $r<n.$ For $d_r$ small the right-hand side of \eqref{rho*} is approximated as $$\int_0^{\rho_{H_r}^*} H_r^{\frac1r}\xi\, d\xi = \frac{H_r^{\frac1r}}{2}(\rho_{H_r}^*)^2.$$ We then approximate $h^*$ in \eqref{h-star} as \begin{equation} \label{approx-h} h^* \approx \int_{\rho_-}^{2\rho_-} \left(\frac{n-r}{n}\xi^r+\frac{d_r}{\xi^{n-r}}\right)^{\frac{1}{r}}\,d\xi \approx \int_{\rho_-}^{2\rho_-} d_r^{\frac{1}{r}}\xi^{\frac{r-n}{r}}\,d\xi. \end{equation} Assume now $n \neq 2r$. We integrate \eqref{approx-h} to find \begin{equation*} h^* \approx \frac{r}{2r-n}(2^{\frac{2r-n}{r}}-1)\rho_- \end{equation*} On the other hand, $h^* \approx \frac{H_r^{\frac1r}}{2}(\rho_{H_r}^*)^2$. Since $d_r \to 0$ is equivalent to $\rho_- \to 0$, it is clear that $\lim_{d_r \to 0} \rho_{H_r}^* = 0$ and $$\lim_{d_r \to 0} \frac{\rho_-}{\rho_{H_r}^*} = \lim_{d_r \to 0} \sqrt{\rho_-} = 0.$$ If $n = 2r$ we need to integrate \eqref{approx-h} in a different manner, namely \begin{align*} \int_{\rho_-}^{2\rho_-} d_r^{\frac1r}\xi^{-1}\,d\xi & = d_r^{\frac1r}\ln 2 \approx \rho_-\ln 2. \end{align*} Then again, $\lim_{d_r \to 0} \rho_-/\rho_{H_r}^* = 0$. When $n=r$ the proof is analogous provided that $h^* \coloneqq d_n^{\frac3n}$, as in \eqref{new-h}. \end{proof} \section{Hyperbolic lima\c{c}on} \label{limacon} The goal of this section is to improve the estimates on the size of the hyperbolic lima\c con introduced in \cite{nelli-pipoli}. This hypersurface of $\mathbb H^n$ generalizes the well-known \emph{lima\c{c}on de Pascal} in the Euclidean plane, and it will play an important role in the proof of Theorem~\ref{main-thm}. We start by recalling its definition. \begin{definition} \label{limacon-def} Let $A$ and $C$ be two distinct points in $\mathbb H^n$, and $c > 0$ be a constant. 
Let $\mathcal C$ be the geodesic sphere with radius $c$ centered at $C$. For any $P \in \mathcal C$ define $A_P$ to be the reflection of $A$ across the totally geodesic hyperplane in $\mathbb H^n$ tangent to $\mathcal C$ at $P$. The set $$\mathcal L \coloneqq \{A_P \in \mathbb H^n: P \in \mathcal C\}$$ is called \emph{hyperbolic lima\c{c}on}, and $A$ is called \emph{base point} of $\mathcal L.$ \end{definition} Since the hyperbolic space is two-points homogeneous, up to isometries of the ambient space $\mathcal L$ depends only on two parameters: $a \coloneqq d(A,C)$, where $d$ is the hyperbolic distance, and $c > 0$ as in Definition~\ref{limacon-def}. The shape of $\mathcal L$ changes depending on whether $a=c$, $a<c$, or $a>c$. Here we are only interested in the latter case. We refer to Nelli--Pipoli \cite[Section 2]{nelli-pipoli} for general properties of $\mathcal L$. The following result improves \cite[Lemma 2.5]{nelli-pipoli} and will allow to remove the pinching assumption in \cite[Theorem 4.1]{nelli-pipoli}. \begin{lemma} \label{limacon-lemma} Take $\mathcal{L}$ to be the hyperbolic lima\c{c}on with $a > c$ and base point $A$. Let $\mathcal{C}$ be the geodesic sphere defining $\mathcal L$, $C$ be its center, and $X$ be the point of $\mathcal C$ with minimal distance from~$A$. Then $\mathcal L$ has two loops, one inside the other, and it has a self-intersection only at~$A$. Moreover the following statements hold. \begin{enumerate} \item The smaller (resp.\ larger) loop of $\mathcal L$ is contained in (resp.\ contains) the disk centered at $X$ and radius $a-c$. \item The smaller loop of $\mathcal L$ bounds the disk centered at $X$ and radius \begin{equation}\label{improvement-distance} \ell(a,c) \coloneqq \cosh^{-1}\left(\cosh(a-c)-\frac{\sinh c}{2\sinh a} \sinh^2(a-c) \right), \end{equation} \item All of $\mathcal L$ sits inside the disk centered at $C$ and radius $a+2c$. \end{enumerate} \end{lemma} \begin{proof} Since $a>c$, $\mathcal L$ has two loops, one inside the other, and has a self-intersection only at $A$, cf.\ \cite[Lemma 2.4]{nelli-pipoli}. The estimates (1) and (3) have been proved in \cite[Lemma 2.5]{nelli-pipoli}. It remains to prove (2). Since $\mathcal L$ is invariant with respect to rotations about the geodesic passing through $A$ and $C$, we can assume $n=2$. We start by giving an explicit parametrization of $\mathcal L$ in the hyperboloid model for the hyperbolic space canonically embedded in the Minkowski space $\mathbb R^{2,1} = (\mathbb R^3,q)$, where $q$ is the standard scalar product of signature $(2,1)$. Without loss of generality, we can assume that $A = (\sinh a, 0, \cosh a)$, and the center of $\mathcal C$ to be $(0,0,1)$. Then we parametrize $\mathcal C$ by $$\alpha(\theta) = (\sinh c \cos \theta, \sinh c\sin \theta, \cosh c).$$ Let $P = \alpha(\theta)$ for some $\theta$. We want to find the unique geodesic $\gamma_P$ through $P$ tangent to $\mathcal{C}$ explicitly: $\gamma_P$ is the geodesic passing through $P$ and generated by the unit tangent vector to $\mathcal{C}$ at $P$, which is $$T(\theta) = \frac{\alpha'(\theta)}{\sqrt{q(\alpha'(\theta),\alpha'(\theta))}} = (-\sin \theta,\cos \theta,0).$$ Therefore $\gamma_P = \mathbb H^2 \cap \Pi_P$, where $\Pi_P$ is the plane in $\mathbb R^{2,1}$ passing through $O$, $P$, and parallel to $T$. A unit normal to $\Pi_P$ with respect to $q$ is the vector $$\nu(\theta) = (\cosh c \cos \theta, \cosh c \sin \theta, \sinh c).$$ Following Definition \ref{limacon-def}, we need to reflect $A$ across $\gamma_P$. 
Since the reflection in $\mathbb H^2$ across $\gamma_P$ is the restriction to $\mathbb H^2$ of the reflection in $\mathbb R^{2,1}$ across $\Pi_P$, it follows that $\mathcal L$ can be parametrized as $$ L(\theta) = A-2q(A,\nu(\theta))\nu(\theta). $$ The point of $\mathcal C$ at minimal distance from $A$ is $X = (\sinh c, 0, \cosh c)$. Since $a > c$, then $X$ is in the compact region bounded by the smaller loop of the hyperbolic lima\c{c}on. The strategy now is to compute the distance between $X$ and $\mathcal L$, then the smaller loop of $\mathcal{L}$ will bound a disk centered at $X$ and radius the above distance. It is well known that the hyperbolic distance in the upper hyperboloid is $$ d(A,B) = \cosh^{-1}(-q(A,B)), \qquad A,B \in \mathbb H^2. $$ In order to find the critical points of the function $\theta\mapsto d(X, L(\theta))$, it is enough to find the critical points of the function $\theta\mapsto q(X,L(\theta))$. We have $$q(X,{L}(\theta)) = -\cosh(a-c) + q(\theta) \sinh 2c(1-\cos \theta),$$ where $q(\theta) \coloneqq q(A,\nu(\theta)) = \cosh c \sinh a \cos \theta - \sinh c \cosh a$. Explicit computations give $$\frac{d}{d\theta}\left(q(X,{L}(\theta))\right) = \sinh 2c \sin \theta(2\sinh a \cosh c \cos \theta - \sinh(a+c)).$$ Hence critical points are given by \begin{equation} \label{hyper-rels} \sin \theta = 0, \qquad \text{ and } \qquad \cos\theta = \frac{\sinh (a+c)}{2\sinh a \cosh c}. \end{equation} The case $\theta = 0$ yields a new proof of \cite[Lemma 2.5, part 1]{nelli-pipoli}. The case $\theta = \pi$ produces a disk centered at $X$ and radius $a+3c$, which is worse than the disk in \cite[Lemma 2.5, part 3]{nelli-pipoli}. The case of interest is now the last one. Let $\theta_0$ be such that $\cos \theta_0$ satisfies the second identity in \eqref{hyper-rels}. Then $$q(X,{L}(\theta_0)) = -\cosh(a-c)+\frac{\sinh c}{2\sinh a}\sinh^2(a-c).$$ We then have \begin{equation*} d(X,\mathcal{L}) = \ell(a,c) = \cosh^{-1}\left(\cosh(a-c)-\frac{\sinh c}{2\sinh a} \sinh^2(a-c) \right), \end{equation*} hence the smaller loop of $\mathcal{L}$ bounds a disk of center $X$ and radius $\ell(a,c)$. \end{proof} We conclude this section with a list of properties of $\ell$ which will be useful for the estimates in the proof of Theorem \ref{main-thm}. \begin{lemma} \label{limacon-rmk} The following properties hold. \begin{enumerate} \item \label{item:lim1} For any $a>c \geq 0$, $\ell(a,0) = a$, $\ell(a,a) = 0$, and $\ell(a,c) > 0$. Moreover $\ell(a,c) < a-c$. \item \label{item:lim2} The function $(a,c)\mapsto\ell(a,c)$ with domain $\{(a,c)\in\mathbb R^2 : a>c>0\}$ is increasing in the first variable and decreasing in the second one. \item \label{item:lim3} For any $x>0$, then $\ell(4x,2x)>x$. \end{enumerate} \end{lemma} \begin{proof} The properties in (\ref{item:lim1}) follow directly by the definition of $\ell$, cf.\ \eqref{improvement-distance}. As for (\ref{item:lim2}), we have \begin{align*} \frac{\partial}{\partial a}\cosh(\ell(a,c))&=\frac{\sinh(a-c)}{2\sinh^2a}\left(2\sinh^2a-\sinh^2c-\cosh(a-c)\sinh a\sinh c\right)\\ &>\frac{\sinh(a-c)}{2\sinh a}\left(\sinh a-\cosh(a-c)\sinh c\right)\\ &=\frac{\sinh^2(a-c)\cosh c}{2\sinh a}>0, \end{align*} where we have used the fact that $a>c>0$. Likewise \begin{align*} \frac{\partial}{\partial c}\cosh(\ell(a,c))&=-\frac{\sinh(a-c)}{2\sinh a}(2\sinh a + \cosh c \sinh(a-c)-2\sinh c \cosh(a-c)) \\ & = -\frac{3\sinh^2(a-c)\cosh c}{2\sinh a} < 0. \end{align*} Since the functions $\sinh$ and $\cosh$ are increasing in $[0,+\infty)$, the claim follows. 
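The final equality in each of the two computations above rests on the addition formula $\sinh a=\sinh\bigl((a-c)+c\bigr)=\sinh(a-c)\cosh c+\cosh(a-c)\sinh c$: it gives $\sinh a-\cosh(a-c)\sinh c=\sinh(a-c)\cosh c$ in the computation of $\frac{\partial}{\partial a}\cosh(\ell(a,c))$, and $2\sinh a-2\sinh c\cosh(a-c)=2\sinh(a-c)\cosh c$ in the computation of $\frac{\partial}{\partial c}\cosh(\ell(a,c))$, whence the factor $3\sinh(a-c)\cosh c$.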
Let us now prove (\ref{item:lim3}). By \eqref{improvement-distance} we have \begin{align*} \ell(4x,2x) & =\cosh^{-1}\left(\cosh(2x)-\frac{\sinh^2(2x)}{4\cosh(2x)}\right)\\ & =\cosh^{-1}\left(\frac{3(2\cosh^2 x-1)^2+1}{4(2\cosh^2x-1)}\right). \end{align*} It follows that $\ell(4x,2x)>x$ if and only if $$\frac{3(2\cosh^2 x-1)^2+1}{4(2\cosh^2x-1)}>\cosh x,$$ namely $(\cosh(x)-1)(\cosh^2 x + (\cosh x-1)(3\cosh^2 x + 3\cosh x + 1))>0$. The latter holds true for all $x>0$, and we are done. \end{proof} \section{Ros--Rosenberg type theorem} \label{main-result} The second goal of the present paper is to prove a topological result about compact connected $H_r$-hypersurfaces embedded in $\mathbb H^n \times\mathbb R$ with planar boundary. This is a generalization of the classical result of Ros and Rosenberg \cite{ros-rosenberg} about the topology of constant mean curvature surfaces in the Euclidean three-dimensional space. \begin{theorem} \label{main-thm} Let $M$ be a compact connected hypersurface embedded in $\mathbb H^n \times [0,\infty) \subset \mathbb H^n \times \mathbb R$ with boundary a closed horoconvex $(n-1)$-dimensional hypersurface $\Gamma$ embedded in the horizontal slice $\mathbb H^n \times \{0\}$. Assume $M$ has constant $r$-th mean curvature $H_r > (n-r)/n$ for some $r = 1,\dots, n$. Then there is a constant $\delta = \delta(n,r,H_r) > 0$ small enough such that, if $\Gamma$ is contained in a disk of radius $\delta$, then $M$ is topologically a disk. \end{theorem} We recall that a hypersurface $\Gamma$ of the hyperbolic space is called \emph{horoconvex} if all its principal curvatures are larger than one. \begin{remark} Let us make a few observations.
\begin{enumerate} \item When $r=1$, Theorem \ref{main-thm} improves \cite[Theorem 4.1]{nelli-pipoli}. In fact, thanks to the new estimates given in Lemma \ref{limacon-lemma}(2), we do not need to assume any pinching on $\Gamma$. \item Elbert--Sa Earp \cite[Theorem 7.7]{elb-earp} proved that if $n>r$ and $H_r \leq (n-r)/n$, then a compact connected $H_r$-hypersurface $M$ embedded in $\mathbb H^n \times [0,\infty)$ with horoconvex boundary $\Gamma$ in the slice $\mathbb H^n \times \{0\}$ is necessarily a graph over the compact planar domain bounded by $\Gamma$. In particular $M$ is a disk. Therefore, we focus on the cases $n>r$, with $H_r > (n-r)/n$, and $n=r$, with $H_n > 0$. \item By using Alexandrov reflections with respect to vertical hyperplanes, we can show that $M$ shares the same symmetries as its boundary. In particular, when $\Gamma$ is a geodesic sphere, $M$ is rotationally symmetric. It follows that $M$ is a portion of one of the compact hypersurfaces classified in Section \ref{rotational-surfaces}, and Theorem \ref{main-thm} is proved in this special case. \end{enumerate} \end{remark} In view of the previous remark, we will assume throughout that $\Gamma$ is not a geodesic sphere. \begin{remark} \label{convex-point} In the following we will make extensive use of the tangency principle for $H_r$-hypersurfaces as it is stated in \cite[Theorem 1.1]{fontenele-silva}. In order to satisfy the assumptions there, it is enough that the hypersurface $M$ in Theorem \ref{main-thm} has a strictly convex point. This is guaranteed by \cite[Lemma 7.5]{elb-earp}. \end{remark} \begin{notations*} Let us introduce some notations that will be useful in the proof of Theorem~\ref{main-thm}. For the reader's convenience, there is a list of notations at the end of the article. We denote by $\Omega$ the compact domain of $\mathbb H^n\times\{0\}$ bounded by $\Gamma$ and by $W$ the compact domain in $\mathbb H^n \times \mathbb R$ with boundary $M\cup\Omega$. Given $n \geq r \geq 2$ and $H_r > (n-r)/n$, we fix an $\epsilon > 0$ such that Lemma \ref{radii-comparison} is satisfied. Denote by $\mathcal C_r \coloneqq \mathcal C_{r,\epsilon}$ the corresponding translation $H_r$-hypersurface of Theorem \ref{str-thm-cyl2}. When $r=1$ we use the same notation; however, recall that no choice of $\epsilon$ is involved. Let $h_{\mathcal C_r}$ denote the height of $\mathcal C_r$ (namely $2\mu_{H}(\rho_+)$ for $r=1$ and $2\mu_{H_r,\epsilon}(\rho_+^{\epsilon})$ for $r > 1$). Analogously let $h_{\mathcal S_r}$ be the height of $\mathcal S_r$ (i.e.\ $2\lambda_{H_r,0}(\rho_+)$, cf.\ Theorems \ref{str-thm2}, \ref{str-thm4}, \ref{str-thm6}, \ref{str-thm8}). We define $h_M$ to be the height of $M$ with respect to the slice $\mathbb H^n \times \{0\}$. The \emph{exterior} (resp.\ \emph{interior}) \emph{radius} of $\Gamma$ is the smallest (resp.\ largest) radius $\rho$ such that for any $p \in \Gamma$ there is a geodesic sphere $S$ with radius $\rho$ tangent to $\Gamma$ at $p$ and $\Gamma$ sits in (resp.\ encloses) the closed ball with boundary $S$. We write $r_{ext}$ for the exterior radius and $r_{int}$ for the interior one. Clearly $r_{ext} \geq r_{int}$, and equality occurs if and only if $\Gamma$ is a geodesic sphere. Moreover, since $\Gamma$ is horoconvex, $r_{int}$ and $r_{ext}$ are determined by the maximum and the minimum of the principal curvatures of $\Gamma$. Finally, we denote by $D(R)$ any disk of radius $R > 0$ in a horizontal slice of $\mathbb H^n \times \mathbb R$.
\end{notations*} The strategy of the proof of Theorem \ref{main-thm} is similar to that of \cite{nelli-pipoli}: if the height of $M$ is less than the height of $\mathcal C_r$, then $M$ is a graph over $\Omega$, otherwise it is a union of hypersurfaces, each one a graph over a suitable domain. As in \cite{ros-rosenberg}, at the end of the proof it will be clear that the union of such graphs has the topology of a disk. The hyperbolic lima\c{c}on described in Section \ref{limacon} will be used in various estimates. \begin{lemma} \label{lemma5} Let $M$ and $\Gamma$ satisfy the assumptions of Theorem \ref{main-thm}. There is a disk $D(r_{min})$ in $\mathbb H^n \times \{0\}$ such that $M \cap (D(r_{min}) \times \mathbb R)$ is a graph, and $\ell(r_{ext},r_{ext}-r_{int}) \leq r_{min} < r_{int}$. In particular, $r_{min}$ depends only on the principal curvatures of $\Gamma$. \end{lemma} \begin{proof} In order to prove the statement, we apply Alexandrov's reflection technique with horizontal hyperplanes coming down from above. Since $M$ is compact, the slice $\mathbb H^n \times \{t\}$, $t > h_M$, does not intersect $M$. Then we let $t$ decrease. When $t<h_M$, reflect the part above the slice and stop if there is a first contact point between $M$ and its reflection. If we can get to $t=0$ without having contact points, then $M$ is a graph over $\Omega$ and we can choose $r_{min}< r_{int}.$ If this does not happen, there will be a $0 < t_0 < h_M/2$ such that the reflected hypersurface touches $M$ for the first time. If the intersection point lay in the interior of $M$, we would have a contradiction with the Maximum Principle; hence a first touching point belongs to $\Gamma$. Let $q$ be one such point. Then the line $\{q\} \times (0,\infty)$ intersects $M$ exactly once, and $\{q\} \times (0,2t_0)$ is contained in the interior of $W$, as $t_0 < h_M/2$. Note that the portion of $M$ above $\mathbb H^n \times \{t_0\}$ is a graph. We now perform Alexandrov's reflections with respect to vertical hyperplanes, i.e.\ the product of a totally geodesic hypersurface of $\mathbb H^n$ and $\mathbb R$. Let $Q$ be one such hyperplane. Since $M$ is compact, we can assume that $Q\cap M=\varnothing$. Fix a point $x\in Q$ and let $\gamma$ be the geodesic passing through $x$ and orthogonal to $Q$. Move $Q$ along $\gamma$ towards $M$ so that $Q$ is always orthogonal to $\gamma$. By abuse of notation, we call again $Q$ any parallel translation of the initial hyperplane. When $Q$ touches $M$ for the first time, keep moving $Q$ and start reflecting through $Q$ the part of $M$ left behind $Q$. In order not to have a contradiction with the Maximum Principle, we can continue this procedure with no contact points between $M$ and its reflection until $Q$ enters $\Gamma$ at distance at least $r_{int}$ from it. We can avoid the dependence on the contact point $q$ by stopping the reflection when $Q$ is tangent to $\mathcal{C}$, where $\mathcal{C}$ is as follows. Denote by $\mathcal{C}_{ext}$ the geodesic sphere in $\mathbb H^n \times \{0\}$ of radius $r_{ext}$, tangent to $\Gamma$ at $q$, and enclosing $\Gamma$. Then $\mathcal{C}$ is the geodesic sphere with the same center as that of $\mathcal{C}_{ext}$ and radius equal to $r_{ext}-r_{int}$. Define $\mathcal{L}$ to be the set of the reflections of $q$ through any vertical hyperplane tangent to $\mathcal C$. 
It follows that $\mathcal{L}$ is a hyperbolic lima\c{c}on as in Definition~\ref{limacon-def} whose base point is $q$ and whose parameters are $a = r_{ext}$ and $c = r_{ext}-r_{int}$. Since $a > c$, $\mathcal L$ has two loops. Moreover, since $\Gamma$ is horoconvex, the smaller loop of $\mathcal{L}$ sits in $\Omega$. Furthermore, since $\{q\}\times\mathbb R$ intersects $M$ in exactly one point, then the same holds true for any $p$ in the compact planar domain bounded by the smaller loop of $\mathcal{L}$. Define $r_{min}$ as the largest radius of a ball bounded by the smaller loop of $\mathcal L$. Then $M\cap(D(r_{min})\times\mathbb R)$ is a graph. Finally Lemma \ref{limacon-lemma} and Lemma \ref{limacon-rmk} imply $\ell(r_{ext},r_{ext}-r_{int}) \leq r_{min} < r_{int}$ at once. We remark that $r_{min}$ depends only on $a$ and $c$, namely only on the curvature of $\Gamma$, but not on $q$. \end{proof} \begin{proof}[Proof of Theorem \ref{main-thm}] We first assume $h_M < h_{\mathcal C_r}$. Recall that $R_{\mathcal C_r} = \rho_+^{\epsilon}-\epsilon$ for $r > 1$, and $R_{\mathcal C_1} = \rho_+$. We can then adapt the proof as in Nelli--Pipoli \cite{nelli-pipoli} to our case. \begin{customthm}{\RomanNumeralCaps{1}} \label{claim1} The hypersurface $M$ lies in $D(r_{ext}+R_{\mathcal C_r}) \times [0,h_{\mathcal C_r})$. \end{customthm} \begin{proof} Consider the $H_r$-hypersurface $\mathcal C_r$. Its lower boundary is in the slice $\mathbb H^n \times \{0\}$ and the upper boundary sits in the slice $\mathbb H^n \times \{h_{\mathcal C_r}\}$. We call $\mathcal C_{r}$ any horizontal translation or rotation of $\mathcal C_{r}$. Since $M$ is compact, we can translate $\mathcal C_{r}$ horizontally so that $M \cap \mathcal C_{r} = \varnothing$ and $M$ lies in the part of $\mathcal C_{r}$ containing the axis of $\mathcal C_r$. Then we move $\mathcal C_{r}$ isometrically towards $M$ until $\mathcal C_{r}$ touches $M$ for the first time. By the Maximum Principle, $\mathcal C_{r}$ and $M$ do not touch at any interior point. Since $h_M < h_{\mathcal C_r}$, the first touching point belongs to $\Gamma$. The same steps can be repeated for $\mathcal C_{r}$ with any horizontal axis. By definition of $r_{ext}$ we get that $M$ sits inside $D(r_{ext}+R_{\mathcal C_r}) \times [0,h_{\mathcal C_r})$. \end{proof} \begin{customthm}{\RomanNumeralCaps{2}} \label{claim2} If $\Gamma$ is sufficiently small, then $M$ is contained in the cylinder $\Omega \times \mathbb R$. \end{customthm} \begin{proof} By Lemma \ref{radii-comparison} and our choice of $\epsilon$ one has $R_{\mathcal C_r} < R_{\mathcal S_r}.$ Recall that $\mathcal S_r$ is the sphere with the same $r$-th mean curvature as that of $M$. Cut $\mathcal S_r$ with its horizontal hyperplane of symmetry and let $\mathcal S_r^+$ be the upper hemisphere. Now take $\Gamma$ small enough so that $R_{\mathcal C_r} + r_{ext} < R_{\mathcal S_r}$. Translate $\mathcal S_r^+$ horizontally in such a way that the intersection of its axis of rotation with the slice $\mathbb H^n \times \{0\}$ coincides with the center of the disk found in Claim \ref{claim1}. Translate upwards $\mathcal S_r^+$ such that $\mathcal S_r^+\cap M=\varnothing$. By the Maximum Principle, Claim \ref{claim1}, and the hypothesis on $\Gamma$, we can translate $\mathcal S_r^+$ downwards without having a contact point between $\mathcal S_r^+$ and $M$ until the boundary of $\mathcal S_r^+$ is contained in the slice $\mathbb H^n \times \{0\}$, whence $M$ is below $\mathcal S_r^+$. 
By the Maximum Principle and the fact that $r_{ext} < R_{\mathcal S_r}$, one can translate horizontally $\mathcal S_r^+$ without having a contact point with $M$ until $\mathcal S_r^+$ becomes tangent to $\Gamma$ at any point of $\Gamma$, which gives the claim. \end{proof} \begin{customthm}{\RomanNumeralCaps{3}} \label{claim3} The hypersurface $M$ is a graph over $\Omega,$ hence it is a disk. \end{customthm} \begin{proof} By Alexandrov's reflections technique with horizontal hyperplanes coming down from above, it follows that $M$ is a graph over $\Omega$, which proves Theorem \ref{main-thm} when $h_M < h_{\mathcal C_r}$. Observe that $\delta$ can be taken as $R_{\mathcal S_r}-R_{\mathcal C_r}$, cf.\ Claim \ref{claim2}. \end{proof} We now assume that $h_M \geq h_{\mathcal C_r}$. Alexandrov's reflection technique with horizontal and vertical hyperplanes guarantees that the part of $M$ above the plane $t = h_M/2$ is a graph over a domain of $\mathbb H^n\times\{0\}$ and that the part of $M$ outside the cylinder $\Omega \times \mathbb R$ is a graph over a domain of $\Gamma\times\mathbb R$. The goal is to prove that $M$ is the union of such graphs, i.e.\ $M \cap (\Omega \times (0,h_M/2])$ is empty. In this way it will be clear that $M$ has the topology of a disk. Recall the definition of $h^*$ in \eqref{h-star} for $n>r$ and \eqref{new-h} for $n=r$. Hereafter we show that $\Omega \times [h^*,h_M/2]$ contains no point of $M$ if $\Gamma$ is small enough, and lastly we prove that there is no interior point of $M$ in $\Omega \times [0,h^*]$ as well. Before doing this we discuss how the various quantities we use are related to one another. Let $d_r > 0$ be such that $$\ell(r_{ext},r_{ext}-r_{int}) \leq \rho_- \leq r_{min} < r_{int} < r_{ext},$$ where $r_{min}$ is the radius defined in Lemma \ref{lemma5} and $\rho_-$ is the minimum of the interval where $\lambda_{(n-r)/n,d_r}$ is defined when $n>r$ (see Section~\ref{rotational-surfaces}), and for $n=r$ was chosen in Section \ref{estimates} to be $d_n{}^{2/n}$. Note that if $r_{ext} \to 0$, i.e.\ $\Gamma$ shrinks to a point, then $d_r \to 0$, and so $\rho_-$, $h^*$, and $\rho_{H_r}^*$ go to zero as well (cf.\ Lemma \ref{various-estimates}). Hence if $r_{ext}$ is small enough, then $h^* \ll\frac{h_M}{2}$. Further, since $\ell(r_{ext},r_{ext}-r_{int}) > 0$, we can find $\alpha > 0$ such that \begin{equation}\label{alpha} \alpha r_{ext} < \ell(r_{ext},r_{ext}-r_{int}) \leq \rho_-, \end{equation} whence $\rho_{H_r}^*/r_{ext} > \alpha \rho_{H_r}^*/\rho_-$. Taking $\Gamma$ small enough, by Lemma \ref{various-estimates} we have $$\frac{\rho_{H_r}^*}{r_{ext}} > \alpha \frac{\rho_{H_r}^*}{\rho_-} > 3,$$ therefore we can assume \begin{equation} \label{last-estimate} \rho_{H_r}^* > 3r_{ext}. \end{equation} \begin{customthm}{\RomanNumeralCaps{4}} \label{claim4} The compact domain bounded by $M \cap (\mathbb H^n \times \{h_M - h^*\})$ contains a geodesic segment of length at least $\rho_{H_r}^*$. \end{customthm} \begin{proof} Alexandrov's technique with respect to horizontal hyperplanes implies that the reflection of points in $M$ at height $h_M$ across the hyperplane $\mathbb H^n \times \{h_M/2\}$ sits in the closure of $\Omega$. We can assume that one of these points lies on the $t$-axis after applying a horizontal isometry. Let $M'$ be the portion of $M$ above the hyperplane $\mathbb H^n \times \{h_M - h^*\}$. Then $M'$ is a graph with height $h^*$. 
Suppose that for any $p \in \partial M'$ the distance between $p$ and the $t$-axis is smaller than $\rho_{H_r}^*.$ Cut $\mathcal S_r$ with a horizontal hyperplane so that the spherical cap $S_r'$ above that hyperplane has height $h^*$. Then translate $\mathcal S_r'$ up until it has empty intersection with $M$, then move it downwards. The Maximum Principle implies there is no contact point between $\mathcal S_r'$ and the interior of $M'$ at least until the boundary of $\mathcal S_r'$ reaches the level $t = h_M - h^*$. Therefore the height of $M'$ is less than $h^*$, which is a contradiction. \end{proof} \begin{customthm}{\RomanNumeralCaps{5}} \label{claim5} The domain bounded by $M \cap (\mathbb H^n \times \{h_M - h^*\})$ contains a disk $D(R)$ with $R > \ell(\rho_{H_r}^*-r_{ext},r_{ext})$. \end{customthm} \begin{proof} Up to horizontal translation, we can assume that one of the endpoints of the geodesic segment found in Claim \ref{claim4} is on the $t$-axis. Let $p$ be the other endpoint. Consider a geodesic sphere $\mathcal C_{ext}$ of $\mathbb H^n\times\{0\}$ tangent to $\Gamma$ and containing $\Gamma$. Reflect the point $p$ across any vertical hyperplane tangent to $\mathcal C_{ext}$ in $\mathbb H^n \times \mathbb R$. The set of such reflections is a hyperbolic lima\c{c}on $\mathcal L$ in $\mathbb H^n \times \{h_M-h^*\}$ with base point $p$. By the choice of $p$, the parameters of $\mathcal L$ are $a > \rho_{H_r}^*-r_{ext}$ and $c = r_{ext}$. By \eqref{last-estimate}, $a>c$, so $\mathcal L$ has two loops, and the smaller one is contained in $W$ -- argue as in Lemma \ref{lemma5}. The claim now follows by Lemma \ref{limacon-lemma}. \end{proof} \begin{customthm}{\RomanNumeralCaps{6}} \label{claim6} The intersection between $M$ and $D(R) \times [h^*,h_M-h^*]$ is empty. \end{customthm} \begin{proof} Claim \ref{claim5} implies that $D(R)$ is contained in $W$, and since we have chosen $h^* \ll h_M$, the hyperplane $\mathbb H^n \times \{h_M-h^*\}$ is above the hyperplane $\mathbb H^n \times \{h_M/2\}$. By applying the Alexandrov's reflection technique with horizontal hyperplanes, the reflection of $D(R)$ across $\mathbb H^n \times \{\tau\}$ is contained in $W$ for all $\tau \in [h_M/2,h_M-h^*]$. The claim then follows. \end{proof} \begin{customthm}{\RomanNumeralCaps{7}} \label{claim7} There is no point of $M$ in the cylinder $\Omega \times \{0 < t \leq h^*\}$. \end{customthm} \begin{proof} If $n>r$, $\Sigma$ will denote the portion of the rotational hypersurface generated by $\lambda_{(n-r)/n,d_r}$ contained in $(D(2\rho_-)\setminus D(\rho_-)) \times \mathbb R$. For $n=r$, $\Sigma$ will denote the portion of a picked sphere generated by $\lambda_{H_n,d_n}$ contained in $(D(2\rho_-)\setminus D(\rho_-)) \times \mathbb R$. Note that if $n>r$, then the $r$-th mean curvature of $\Sigma$ is strictly smaller than that of $M$, while if $n=r$ the $n$-th mean curvatures of $M$ and $\Sigma$ coincide. In both cases $\Gamma$ and $d_r > 0$ are chosen small enough so that the Claims \ref{claim4}, \ref{claim5}, and \ref{claim6} hold. For any $n\geq r$, $\Sigma$ has two boundary components $C_0$ and $C_1$. Up to vertical translation, we can assume $C_0 \subset \mathbb H^n \times \{0\}$ and $C_1 \subset \mathbb H^n \times \{h^*\}$. Up to horizontal translation we can assume that the center of $C_0$ coincides with the center of the disk of Lemma \ref{lemma5}. Moreover, by definition of $h^*$, the radius of $C_1$ is smaller than $2\rho_-$. Let $R$ be the radius found in Claim \ref{claim5}. 
By Claim \ref{claim5}, \eqref{last-estimate} and Lemma \ref{limacon-rmk} we get $$ R>\ell(\rho^*_{H_r}-r_{ext},r_{ext})>\ell\left(\frac {2\rho^*_{H_r}}{3},\frac{\rho^*_{H_r}}{3}\right)>\frac{\rho^*_{H_r}}{6}. $$ By Lemma \ref{various-estimates}, we can take $\Gamma$ small enough such that $$ \frac{\rho^*_{H_r}}{6}>\left(\frac{1}{\alpha}+1\right)\rho_-, $$ where $\alpha$ is the constant in \eqref{alpha}. It follows that if $\Gamma$ is small enough, then \begin{equation} \label{stimaR} R > r_{ext}+\rho_->2\rho_-. \end{equation} Claim \ref{claim5} and \eqref{stimaR} allow us to translate $\Sigma$ vertically in such a way that it is contained in $W$. By Lemma \ref{lemma5} and the Maximum Principle, we can then translate $\Sigma$ down until $C_0$ reaches $\mathbb H^n \times \{0\}$ without having contact points with the interior of $M$. Because of $\rho_-<r_{int}$, we can translate horizontally $\Sigma$ in such a way that it touches every point of $\Gamma$ with $C_0$ and keeping $C_0$ inside $\Omega$. Since \eqref{stimaR} holds true, during this translation $C_1$ remains inside the disk $D^*(R) \subset \mathbb H^n \times \{h^*\}$, which is the reflection of $D(R) \subset \mathbb H^n \times \{h_M-h^*\}$. By Claim \ref{claim6}, in this process, the upper boundary of $\Sigma$ does not touch $M$. Recalling that the $r$-th mean curvature of $\Sigma$ is not bigger than that of $M$, by the Maximum Principle, we get that there can be no internal contact point between $M$ and $\Sigma$. The claim then follows because $\Sigma$ is a graph over the exterior of $D(\rho_-)$. \end{proof} The proof of Theorem \ref{main-thm} is now complete. \end{proof} \section*{List of notations} \label{appendix} We include a summary of the various notations we use throughout for the most notable objects and quantities. \begin{enumerate} \itemsep0.5em \item Profile curves: \begin{enumerate} \itemsep0.2em \item[$\lambda_{H_r,d_r}$:] profile curve of $H_r$-hypersurfaces in $\mathbb H^n \times \mathbb R$ invariant under rotation depending on a real parameter $d_r$ (Section \ref{rotational-surfaces}). \item[$\mu_{H_r,\epsilon}$:] profile curve of $H_r$-hypersurfaces in $\mathbb H^n \times \mathbb R$ invariant under hyperbolic translation depending on a real parameter $\epsilon > 0$ (Section \ref{translation-surfaces}). \end{enumerate} \item Domain of profile curves: \begin{enumerate} \itemsep0.2em \item[$\rho_-$:] minimum of the domain of $\lambda_{H_r,d_r}$ when this is not zero. \item[$\rho_+$:] maximum of the domain of $\lambda_{H_r,d_r}$. \item[$\rho_0$:] minimum point of $\lambda_{H_r,d_r}$ in $(\rho_-,\rho_+)$. \item[$\rho_+^{\epsilon}$:] maximum of the domain of $\mu_{H_r,\epsilon}$ for $r > 1$. \end{enumerate} \item Hypersurfaces in $\mathbb H^n \times \mathbb R$: \begin{enumerate} \itemsep0.2em \item[$\mathcal S_r$:] rotation $H_r$-hypersurface generated by $\lambda_{H_r,0}$, for some $H_r > (n-r)/n$. \item[$\mathcal C_{r,\epsilon}$:] translation $H_r$-hypersurface with $H_r > (n-r)/n$ generated by $\mu_{H_r,\epsilon}$. \end{enumerate} \item Special quantities: \begin{enumerate} \itemsep0.2em \item[$R_{\mathcal S_r}$:] the value $\rho_+$ for $\lambda_{H_r,0}$. \item[$R_{\mathcal C_{r,\epsilon}}$:] the value $\rho_+^{\epsilon}-\epsilon$ for $r > 1$. \item[$h^*$:] approximated value of $\lambda_{H_r,d_r}(2\rho_-)$ for $d_r > 0$ and $H_r = (n-r)/n$ \eqref{h-star}. \item[$\rho_{H_r}^*$:] radius of the hypersurface given by $\lambda_{H_r,0}$, $H_r > (n-r)/n$, at height $h^*$ \eqref{rho*}. 
\end{enumerate} \item Specific notations for Section \ref{main-result}: \begin{enumerate} \itemsep0.2em \item[$\mathcal C_r$:] same as $\mathcal C_{r,\epsilon}$ with a choice of $\epsilon$ such that $R_{\mathcal C_{r,\epsilon}} < R_{\mathcal S_r}$. \item[$h_{\mathcal C_r}$:] height of $\mathcal C_r$, namely $2\mu_H(\rho_+)$ for $r=1$ and $2\mu_{H_r,\epsilon}(\rho_+^{\epsilon})$ for $r > 1$, cf.~Theorem \ref{str-thm-cyl2}. \item[$h_M$:] height of $M \subset \mathbb H^n \times [0,\infty)$ with respect to the slice $\mathbb H^n \times \{0\}$. \item[$\mathcal L$:] the hyperbolic lima\c{c}on as in Definition \ref{limacon-def}. \item[$\ell(a,c)$:] optimal radius of a ball bounded by the smaller loop of $\mathcal L$ with parameters $a>c$, see Lemma \ref{limacon-lemma} and identity \eqref{improvement-distance} for its explicit definition. \item[$r_{int}$:] interior radius of $\Gamma$. \item[$r_{ext}$:] exterior radius of $\Gamma$. \item[$r_{min}$:] the largest radius of a ball bounded by the smaller loop of $\mathcal L$ over which $M$ is a graph, see Lemmas \ref{limacon-lemma} and \ref{lemma5}. \end{enumerate} \end{enumerate} \printbibliography \end{document}
\begin{document} \renewcommand{\labelenumi}{(\alph{enumi})} \let\vaccent=\v \renewcommand{\v}[1]{\ensuremath{\mathbf{#1}}} \newcommand{\gv}[1]{\ensuremath{\mbox{\boldmath$ #1 $}}} \newcommand{\uv}[1]{\ensuremath{\mathbf{\hat{#1}}}} \newcommand{\abs}[1]{\left| #1 \right|} \newcommand{\avg}[1]{\left< #1 \right>} \let\underdot=\d \renewcommand{\d}[2]{\frac{d #1}{d #2}} \newcommand{\dd}[2]{\frac{d^2 #1}{d #2^2}} \newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\pdd}[2]{\frac{\partial^2 #1}{\partial #2^2}} \newcommand{\pdc}[3]{\left( \frac{\partial #1}{\partial #2} \right)_{#3}} \newcommand{\ket}[1]{\left| #1 \right>} \newcommand{\bra}[1]{\left< #1 \right|} \newcommand{\braket}[2]{\left< #1 \vphantom{#2} \right| \left. #2 \vphantom{#1} \right>} \newcommand{\matrixel}[3]{\left< #1 \vphantom{#2#3} \right| #2 \left| #3 \vphantom{#1#2} \right>} \newcommand{\grad}[1]{\gv{\nabla} #1} \let\divsymb=\div \renewcommand{\div}[1]{\gv{\nabla} \cdot #1} \newcommand{\curl}[1]{\gv{\nabla} \times #1} \let\baraccent=\= \renewcommand{\=}[1]{\stackrel{#1}{=}} \thispagestyle{plain} \label{sh} \begin{center} {\Large \bf \begin{tabular}{c} WEIGHTED INFORMATION\\ AND WEIGHTED ENTROPIC INEQUALITIES \\[-1mm] FOR QUTRIT AND QUQUART STATES \end{tabular} } \end{center} \begin{center} {\bf Vladimir I.~Man'ko$^{1,2*}$ and Zhanat Seilov$^2$ }\end{center} \begin{center} {\it $^1$Lebedev Physical Institute, Russian Academy of Sciences\\ Leninskii Prospect 53, Moscow, Russia 119991 $^2$Moscow Institute of Physics and Technology (State University)\\ Institutskii per. 9, Dolgoprudnyi, Moscow Region, Russia 141700 } $^*$Corresponding author e-mail:[email protected]\\ \end{center} \begin{abstract}\noindent The notion of weighted quantum entropy is reviewed and considered for bipartite and noncomposite quantum systems. The information inequality (subadditivity condition) known for the weighted entropy is extended to the case of an indivisible qudit system, using the example of a qutrit. This new inequality for the qutrit density matrix is discussed for different cases of weights and states. The role of the weighted entropy is discussed in connection with studies of nonlinear quantum channels. \end{abstract} \noindent{\bf Keywords:} information and entropic inequalities, quantum weighted entropy, qudits. \section{Introduction} Classical probability distributions are characterized by such functionals as the Shannon entropy \cite{shannon}. The entropy value corresponds to the degree of order in the classical system: the larger the Shannon entropy, the larger the degree of disorder in the system. 
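For orientation, we recall the standard expression: for a discrete probability distribution $\{p_i\}$ with $\sum_i p_i=1$, the Shannon entropy reads $$H=-\sum_i p_i\log p_i ,$$ it vanishes only for deterministic distributions and reaches its maximum for the uniform one. The weighted generalizations recalled below modify this functional by attaching a weight to each individual event. 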
A more detailed description of probability distributions is associated with the Renyi entropy \cite{renyi} and the Tsallis entropy \cite{tsallis}. These entropies depend on an extra parameter. The dependence on this parameter provides the possibility to characterize the properties of the distribution in more detail. The generalization of the entropy of a classical system called the weighted entropy was introduced by M.~Belis and S.~Guiasu in \cite{1968} and then developed in \cite{1971}. The reason to introduce this generalization was to capture the qualitative aspect of measurement, whereas the probabilities in the Shannon entropy reflect the quantitative aspect. Weights can be distributed according to the relevance or utility of the corresponding events or states. The weighted entropy depends on many extra parameters determining the chosen probability distribution. The von Neumann entropy \cite{neumann} is one of the central concepts in quantum information theory. It quantifies the unpredictability of the information content of a quantum system. The Shannon entropy of a multipartite classical system satisfies known information-entropic inequalities, such as the subadditivity condition for bipartite systems \cite{robinson1} and the strong subadditivity condition for tripartite classical systems \cite{robinson2}. These conditions are also valid for the quantum von Neumann entropy \cite{araki, carlen}. The Tsallis entropy of a bipartite system also satisfies the subadditivity condition. Y.~Suhov and S.~Zohren in \cite{suhov} introduced the quantum weighted entropy as a generalization of the well-known von Neumann entropy. In this generalization the weight is represented by a weight matrix. The authors of \cite{suhov} derived and proved the main properties of the new entropy: subadditivity, concavity, and strong subadditivity. Of particular interest to us is the subadditivity property of the quantum weighted entropy, which shows that the mutual information cannot take negative values. It was shown recently in \cite{manko2, manko3, chernega, markovich, hidden} that entropic-information inequalities known for composite systems are also valid for systems without subsystems, both in the classical and in the quantum domain. The aim of our work is to review the properties of the weighted quantum entropy discussed in \cite{suhov, suhov2, suhov3} and to show that entropic inequalities for the weighted quantum entropy, such as the subadditivity condition obtained in \cite{suhov}, are also valid for systems without subsystems. We demonstrate that this inequality holds for particular qutrit states. There exist nonlinear maps of density matrices called nonlinear quantum channels (see, e.g., \cite{europhys, puzko, manko4}). We will discuss the changes of the weighted entropic inequalities due to the action of nonlinear quantum channels. The paper is organized as follows: in Sec. 2 we review the notion of the quantum weighted entropy; in Sec. 3 we consider the subadditivity property for a qutrit state with a diagonal density matrix; in Sec. 4 the conclusions and prospects are presented. \section{Quantum Weighted Entropy} Let $\rho$ be a density matrix and $\phi$ a positive definite Hermitian matrix, both on a Hilbert space $\mathcal H$. Then the quantum weighted entropy is defined as follows \begin{equation} \label{qwe} S_\phi(\rho)\equiv-tr(\phi \rho \log\rho) , \end{equation} where $\phi$ is called the weight. 
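As a simple illustration of definition (\ref{qwe}) (not used in what follows), take a qubit state $\rho=\mathrm{diag}(p,1-p)$ and a diagonal weight $\phi=\mathrm{diag}(\phi_1,\phi_2)$; then $$S_\phi(\rho)=-\phi_1 p\log p-\phi_2(1-p)\log(1-p),$$ which reduces to the usual von Neumann entropy of $\rho$ when $\phi_1=\phi_2=1$. 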
The subadditivity property is represented by the inequality \begin{equation} \label{sub} S_{\phi_{AB}}(\rho_{AB})\leq S_{\phi_A}(\rho_A)+S_{\phi_B}(\rho_B) ,\end{equation} which is valid under the condition \begin{equation} \label{cond} tr_{AB}(\phi_{AB} \rho_{AB})\geq tr_A(\phi_A \rho_A)tr_B(\phi_B \rho_B) \end{equation} and in the case $\rho_{AB}= \rho_A \otimes \rho_B$ it reduces to an equality. The reduced weights are defined as follows \begin{equation} \label{rm} \psi_A \rho_A = tr_B(\phi_{AB}\rho_{AB}) \end{equation} and in the same way for the case of $B$. The idea of this work is to extend the weighted entropic inequalities (\ref{sub}) and (\ref{cond}) formulated above for composite systems to noncomposite (indivisible) systems. Furthermore, we suggest using the inequalities to study the transformation of the density matrix of qudit states due to the action of nonlinear quantum channels. \section{Subadditivity property for qutrit with diagonal density matrix} Let us consider a particular case of this property for the qutrit. The qutrit has three states, which we supplement with one new state of zero probability. We introduce the following designations: \begin{equation} \phi_A= \begin{pmatrix} \phi_1 & 0 \\ 0 & \phi_2 \end{pmatrix},~~~~~~~~~ \phi_B= \begin{pmatrix} \chi_1 & 0 \\ 0 & \chi_2 \end{pmatrix},~~~~~~~~~ \phi_{AB}=\phi_A \otimes \phi_B= \begin{pmatrix} \phi_1 \chi_1 & 0 & 0 & 0\\ 0 & \phi_1 \chi_2 & 0 & 0\\ 0 & 0 & \phi_2 \chi_1 & 0\\ 0 & 0 & 0 & \phi_2 \chi_2 \end{pmatrix}; \end{equation} \begin{equation} \label{matrix} \rho_{AB}=\begin{pmatrix} p_1 & 0 & 0 & 0\\ 0 & p_2 & 0 & 0\\ 0 & 0 & p_3 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix},~~~~~~~ \rho_A=\begin{pmatrix} p_1+p_2 & 0\\ 0 & p_3 \end{pmatrix},~~~~~~~ \rho_B=\begin{pmatrix} p_1+p_3 & 0\\ 0 & p_2 \end{pmatrix}. \end{equation} We evaluate the condition (\ref{cond}) of the inequality for the qutrit: \begin{equation} \label{e1} tr_{AB}(\phi_{AB} \rho_{AB})=tr_{AB}\begin{pmatrix} \phi_1 \chi_1 p_1 & 0 & 0 & 0\\ 0 & \phi_1 \chi_2 p_2 & 0 & 0\\ 0 & 0 & \phi_2 \chi_1 p_3 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}. \end{equation} $$tr_A(\phi_A \rho_A)tr_B(\phi_B \rho_B) =tr(\begin{pmatrix} \phi_1 & 0 \\ 0 & \phi_2 \end{pmatrix} \begin{pmatrix} p_1+p_2 & 0\\ 0 & p_3 \end{pmatrix}) tr(\begin{pmatrix} \chi_1 & 0 \\ 0 & \chi_2 \end{pmatrix}\begin{pmatrix} p_1+p_3 & 0\\ 0 & p_2 \end{pmatrix})=$$ $$=({\it substituting~} p_3=1-p_1-p_2)=$$ \begin{equation} \label{e2} =\phi_1 \chi_1(p_1+p_2-p_1p_2-p_2^2)+\phi_1 \chi_2(p_1p_2+p_2^2)+\phi_2 \chi_1(1-p_1-p_2-p_2+p_1p_2+p_2^2)+\phi_2\chi_2(p_2-p_1p_2-p_2^2). \end{equation} We can write expression (\ref{cond}) in the form $tr_{AB}(\phi_{AB} \rho_{AB}) - tr_A(\phi_A \rho_A)tr_B(\phi_B \rho_B)\geq 0$. After substitution from (\ref{e1}) and (\ref{e2}) we obtain the inequality $$(p_2^2+p_2 p_1 -p_2)(\phi_1\chi_1-\phi_1\chi_2 -\phi_2 \chi_1 +\phi_2 \chi_2)\geq0~,$$ $$\underbrace{p_2(1-p_2-p_1)}_{>0} (\phi_1-\phi_2)(\chi_2-\chi_1)\geq 0~.$$ Thus, a condition for the subadditivity property for qutrits reads: \begin{equation} \label{cond2} (\phi_1-\phi_2)(\chi_2-\chi_1)\geq 0. \end{equation} Condition (\ref{cond2}) depends only on the weights $\phi_A$ and $\phi_B$. We now derive the property (\ref{sub}) for the qutrit case. 
For instance, we calculate the reduced matrices using formula (\ref{rm}): $$\psi_A \rho_A = tr_B(\phi_{AB}\rho_{AB})=tr_B\begin{pmatrix} \phi_1 \chi_1 p_1 & 0 & 0 & 0\\ 0 & \phi_1 \chi_2 p_2 & 0 & 0\\ 0 & 0 & \phi_2 \chi_1 p_3 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}=\begin{pmatrix} \phi_1 \chi_1 p_1+\phi_1 \chi_2 p_2 & 0 \\ 0 & \phi_2 \chi_1 p_3 \end{pmatrix},$$ $$\psi_B \rho_B = tr_A(\phi_{AB}\rho_{AB})=tr_A\begin{pmatrix} \phi_1 \chi_1 p_1 & 0 & 0 & 0\\ 0 & \phi_1 \chi_2 p_2 & 0 & 0\\ 0 & 0 & \phi_2 \chi_1 p_3 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}=\begin{pmatrix} \phi_1 \chi_1 p_1+\phi_2 \chi_1 p_3 & 0 \\ 0 & \phi_1 \chi_2 p_2 \end{pmatrix}.$$ Using the definition (\ref{qwe}) of the quantum weighted entropy, we can calculate $S_{\phi_{AB}}(\rho_{AB})$, $S_{\psi_A}(\rho_A)$ and $S_{\psi_B}(\rho_B)$: \begin{enumerate} \item $S_{\phi_{AB}}(\rho_{AB})=-tr\left[\begin{pmatrix} \phi_1 \chi_1 p_1 & 0 & 0 & 0\\ 0 & \phi_1 \chi_2 p_2 & 0 & 0\\ 0 & 0 & \phi_2 \chi_1 p_3 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \log p_1 & 0 & 0 & 0\\ 0 & \log p_2 & 0 & 0\\ 0 & 0 & \log p_3 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}\right]= {} \\ {} =-(\phi_1 \chi_1 p_1 \log p_1+\phi_1 \chi_2 p_2 \log p_2+\phi_2 \chi_1 p_3 \log p_3);$ \item $S_{\psi_A}(\rho_A)=-tr(\psi_A \rho_A \log \rho_A)=-tr\left[\begin{pmatrix} \phi_1 \chi_1 p_1+\phi_1 \chi_2 p_2 & 0 \\ 0 & \phi_2 \chi_1 p_3 \end{pmatrix}\begin{pmatrix} \log(p_1+p_2) & 0 \\ 0 & \log p_3 \end{pmatrix}\right]={}\\ {} =-[(\phi_1 \chi_1 p_1+\phi_1 \chi_2 p_2)\log(p_1+p_2)+\phi_2 \chi_1 p_3\log p_3];$ \item $S_{\psi_B}(\rho_B)=-[(\phi_1 \chi_1 p_1+\phi_2 \chi_1 p_3)\log(p_1+p_3)+\phi_1 \chi_2 p_2\log p_2].$ \end{enumerate} Inequality (\ref{sub}) can be rewritten in the form $S_{\psi_A}(\rho_A)+S_{\psi_B}(\rho_B)-S_{\phi_{AB}}(\rho_{AB}) \geq 0$. After substitution from (a), (b), and (c) we obtain the following expression: $[\phi_1 \chi_1 p_1 \log p_1+\phi_1 \chi_2 p_2 \log p_2+\phi_2 \chi_1 p_3 \log p_3]-[(\phi_1 \chi_1 p_1+\phi_1 \chi_2 p_2)\log(p_1+p_2)+\phi_2 \chi_1 p_3\log p_3]-{}\\ {} -[(\phi_1 \chi_1 p_1+\phi_2 \chi_1 p_3)\log(p_1+p_3)+\phi_1 \chi_2 p_2\log p_2]\geq 0.$ Transforming the previous expression, we get the following inequality: \begin{equation} \label{ineq} -[\phi_1 \chi_1 p_1 \log [\frac{(p_1+p_2)(p_1+p_3)}{p_1}]+\phi_1 \chi_2 p_2 \log [p_1+p_2]+\phi_2 \chi_1 p_3 \log[p_1+p_3]]\geq 0. \end{equation} Let us consider a particular case for the qutrit. We choose the probabilities $p_1, p_2, p_3$ and the weights $\phi_A$ and $\phi_B$ to satisfy the condition (\ref{cond2}): $$\phi_A=\begin{pmatrix} 3/4 & 0\\ 0 & 1/4 \end{pmatrix},~~~~~~~~~~ \phi_B=\begin{pmatrix} 1/3 & 0\\ 0 & 2/3 \end{pmatrix};$$ $$p_1=p_2=1/10,~~~~~~p_3=8/10.$$ These values satisfy the required condition (\ref{cond2}): $$(3/4-1/4)(2/3-1/3)=1/6\geq 0.$$ The subadditivity property holds in this case: $$-(\frac{3}{4} \cdot\frac{1}{3} \cdot\frac{1}{10} \log(18/10)+\frac{3}{4} \cdot\frac{2}{3} \cdot\frac{1}{10} \log(2/10)+\frac{1}{12} \cdot\frac{8}{10} \log(9/10))=0.0728\geq 0.$$ The difference $S_{\psi_A}(\rho_A)+S_{\psi_B}(\rho_B)-S_{\phi_{AB}}(\rho_{AB})$ is equal to the mutual information $I$. 
Thus, we have the following expression: {\small $$I=I(p_1, p_2, \phi_1, \phi_2, \chi_1, \chi_2)=-\left(\phi_1 \chi_1 p_1 \log \left[\frac{(p_1+p_2)(p_1+p_3)}{p_1}\right]+\phi_1 \chi_2 p_2 \log [p_1+p_2]+\phi_2 \chi_1 p_3 \log[p_1+p_3]\right)\geq0.$$} Varying two out of the six variables of the mutual information $I(p_1, p_2, \phi_1, \phi_2, \chi_1, \chi_2)$, we can plot a 3D graph showing the dependence of the mutual information on the probabilities and weights. Let us consider some particular cases. 1) The weights $\phi_A= \begin{pmatrix} 3/4 & 0 \\ 0 & 1/4 \end{pmatrix}$ and $\phi_B= \begin{pmatrix} 1/3 & 0 \\ 0 & 2/3 \end{pmatrix}$ satisfy the condition (\ref{cond2}) of subadditivity: $ \begin{cases} \phi_1 - \phi_2 =3/4 - 1/4 =1/2\geq 0 \\ \chi_2-\chi_1 =2/3-1/3=1/3\geq 0\end{cases}.$ Varying the probabilities $p_1$ and $p_2$, we plot the dependence $I(p_1,p_2)$ illustrated in Fig.~1. 2) Considering $\chi_2=1-\chi_1$, $\phi_2=1-\phi_1$, we fix the probabilities $p_1, p_2, p_3$ to plot the dependence of the mutual information on the weights: $p_1=1/4$, $p_2=1/8$, $p_3=1-p_1-p_2$. This dependence $I(\phi_1,\chi_1)$ is illustrated in Fig.~2 and Fig.~3. The plot consists of two parts, according to the condition (\ref{cond2}), part \textbf{a}: $ \begin{cases} \phi_1 - \phi_2 \geq 0\\ \chi_2-\chi_1\geq 0 \end{cases}$ and part \textbf{b} of the plot: $ \begin{cases} \phi_1 - \phi_2 \leq 0\\ \chi_2-\chi_1\leq 0 \end{cases}$. \begin{figure}[h!] \begin{center} \includegraphics[width=70mm]{I_p1p2_.png} \caption{The dependence of the mutual information $ I(p_1, p_2) $ on the probabilities $p_1$ and $p_2$} \end{center} \end{figure} \begin{figure}[h!] \begin{multicols}{2} \includegraphics[width=70mm]{I_phi1,chi1_11.png} \caption{The dependence of the mutual information $ I(\phi_1, \chi_1) $ on the weights $\phi_1$ and $\chi_1$: second case, part \textbf{a}} \label{figLeft} \includegraphics[width=70mm]{I_phi1,chi1_12.png} \caption{The dependence of the mutual information $ I(\phi_1, \chi_1) $ on the weights $\phi_1$ and $\chi_1$: second case, part \textbf{b}} \label{figRight} \end{multicols} \end{figure} \section{Conclusion} To summarize, we point out the two main results of our work. Using the approach developed in \cite{suhov}, we extended the notion of weighted entropies, known for multipartite system states and the entropies of their subsystem states, to the case of a single qudit. We showed that the subadditivity condition for the weighted entropies of a bipartite system obtained in \cite{suhov} can also be formulated for the weighted entropies of a noncomposite system such as the qutrit state. The new inequality can be checked experimentally for superconducting circuit states discussed, e.g., in \cite{kikt1, kikt2, glush}. Other new entropic inequalities for the weighted entropy, e.g., the strong subadditivity condition found in \cite{suhov} for tripartite systems, can be considered and extended to the case of a noncomposite system. Applying the nonlinear quantum channel to the density matrix $\rho_{AB}$ defined in (\ref{matrix}), we get the transformed $4 \times 4$ density matrix of the form $\hat{\rho}'=\frac{\hat{P}\hat{\rho}\hat{P}}{Tr \hat{P}\hat{\rho}\hat{P}}$ (defined by Eq. (10) in \cite{europhys}). 
For the chosen rank-2 projector $\hat{P}=\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}$ we have $\hat{\rho_{AB}}'=\begin{pmatrix} \frac{p_1}{p_1+p_3} & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & \frac{p_3}{p_1+p_3} & 0\\ 0 & 0 & 0 & 0 \end{pmatrix}$, and we can get the transformed weighted entropy inequality (\ref{ineq}), which reads $\phi_1 p_1 + \phi_2 p_2 \geq 0 $. This simple example can be extended to other qudit states to obtain more complicated inequalities. We will study such examples in future publications. \section{Acknowledgments} The formulation of the problem and the results of Sections 1 and 2 were obtained by V.~I.~Man'ko, supported by the Russian Science Foundation under Project No. 16-11-00084, and performed at the Moscow Institute of Physics and Technology. \begin{thebibliography}{99} \bibitem{shannon} C. E. Shannon, {\sl Bell Syst. Tech. J.}, \textbf{27}, 379; 623 (1948). \bibitem{renyi} A. Renyi, \textit{Probability Theory}, North-Holland, Amsterdam (1970). \bibitem{tsallis} C. Tsallis, \textit{Nonextensive Statistical Mechanics and Thermodynamics: Historical Background and Present Status}, Springer, Berlin (2001). \bibitem{1968} M. Belis and S. Guiasu, {\sl IEEE Trans. Inf. Theory}, \textbf{14}, 593 (1968). \bibitem{1971} S. Guiasu, {\sl Rep. Math. Phys.}, \textbf{2}, 165 (1971). \bibitem{neumann} J. von Neumann, \textit{Mathematische Grundlagen der Quantenmechanik}, Springer, Berlin (1932). \bibitem{robinson1} D. W. Robinson and D. Ruelle, {\sl Commun. Math. Phys.}, \textbf{5}, 288 (1967). \bibitem{robinson2} O. Lanford III and D. W. Robinson, {\sl J. Math. Phys.}, \textbf{9}, 1120 (1968). \bibitem{araki} H. Araki and E. H. Lieb, {\sl Commun. Math. Phys.}, \textbf{18}, 160 (1970). \bibitem{carlen} E. A. Carlen and E. H. Lieb, {\sl J. Math. Phys.}, \textbf{55}, 042201 (2014). \bibitem{suhov} Y. Suhov and S. Zohren, ``Quantum weighted entropy and its properties," arXiv:1411.0892v1. \bibitem{manko2} M. A. Man'ko and V. I. Man'ko, {\sl J. Russ. Laser Res.}, \textbf{37}, 1 (2016). \bibitem{chernega} V. N. Chernega and O. V. Manko, {\sl Phys. Scr.}, \textbf{90}, 074052 (2015). \bibitem{manko3} M. A. Man'ko and V. I. Man'ko, \textit{Entropy}, \textbf{17}, 2876 (2015). \bibitem{hidden} M. A. Man'ko and V. I. Man'ko, {\sl J. Russ. Laser Res.}, \textbf{36}, 301 (2015). \bibitem{markovich} V. I. Man'ko and L. A. Markovich, {\sl J. Russ. Laser Res.}, \textbf{36}, 4 (2015). \bibitem{suhov2} Y. Suhov, S. Yasaei Sekeh, and M. Kelbert, ``Entropy-power inequality for weighted entropy," arXiv:1502.02188. \bibitem{suhov3} Y. Suhov, I. Stuhl, S. Yasaei Sekeh, and M. Kelbert, {\sl Aequationes mathematicae}, \textbf{90}, 817 (2016). \bibitem{europhys} J. A. L\'{o}pez-Sald\'{i}var, A. Figueroa, O. Casta\~{n}os, et al., {\sl J. Russ. Laser Res.}, \textbf{37}, 313 (2016). \bibitem{puzko} V. I. Man'ko and R. S. Puzko, {\sl Europhys. Lett.}, \textbf{109}, 50005 (2015). \bibitem{manko4} V. I. Man'ko, G. Marmo, A. Simoni, and F. Ventriglia, {\sl Phys. Lett. A}, \textbf{372}, 6490 (2008). \bibitem{kikt1} A. K. Fedorov, E. O. Kiktenko, O. V. Man'ko, and V. I. Man'ko, {\sl Phys. Rev. A}, \textbf{91}, 042312 (2015). \bibitem{kikt2} E. O. Kiktenko, A. K. Fedorov, A. A. Strakhov, and V. I. Man'ko, {\sl Phys. Lett. A}, \textbf{379}, 1409 (2015). \bibitem{glush} E. Glushkov, A. Glushkova, and V. I. Man'ko, {\sl J. Russ. Laser Res.}, \textbf{36}, 448 (2015). \end{thebibliography} \end{document}
\begin{document} \title{Fragments of Frege's \emph{Grundgesetze} and G\"odel's Constructible Universe} \begin{abstract} Frege's \emph{Grundgesetze} was one of the 19th-century forerunners to contemporary set theory, and it was plagued by the Russell paradox. In recent years, it has been shown that subsystems of the \emph{Grundgesetze} formed by restricting the comprehension schema are consistent. One aim of this paper is to ascertain how much set theory can be developed within these consistent fragments of the \emph{Grundgesetze}, and our main theorem (Theorem~\ref{thm:main}) shows that there is a model of a fragment of the \emph{Grundgesetze} which defines a model of all the axioms of Zermelo-Fraenkel set theory with the exception of the power set axiom. The proof of this result appeals to G\"odel's constructible universe of sets and to Kripke and Platek's idea of the projectum, as well as to a weak version of uniformization (which does not involve knowledge of Jensen's fine structure theory). The axioms of the \emph{Grundgesetze} are examples of \emph{abstraction~principles}, and the other primary aim of this paper is to articulate a sufficient condition for the consistency of abstraction principles with limited amounts of comprehension (Theorem~\ref{thm:jointconsistency}). As an application, we resolve an analogue of the joint~consistency problem in the predicative setting. \end{abstract} \section{Introduction} There has been a recent renewed interest in the technical facets of Frege's \emph{Grundgesetze} (\cite{Burgess2005},~\cite{Cook2007aa}) paralleling the long-standing interest in Frege's philosophy of mathematics and logic (\cite{Dummett1991a}, \cite{Beany2005aa}). This interest has been engendered by the consistency proofs, due to Parsons~\cite{Parsons1987a}, Heck~\cite{Heck1996}, and Ferreira-Wehmeier~\cite{Ferreira2002aa}, of this system with limited amounts of comprehension. The broader intellectual interest in Frege's \emph{Grundgesetze} stems in part from the two related ways in which it was a predecessor of contemporary set theory: first, the system was originally designed to be able to reconstruct much of ordinary mathematics, and second it comes equipped with the resources needed to define a membership relation. It is thus natural to ask how much set theory can be consistently developed within these fragments of the \emph{Grundgesetze}. Our main theorem (Theorem~\ref{thm:main}) shows it is possible within some models of these fragments to recover all the axioms of Zermelo-Fraenkel set theory with the exception of the power set axiom. To make this precise, one needs to carefully set out the primitives of the consistent fragments of the \emph{Grundgesetze} and indicate what precisely it means to recover a fragment of set theory. This is the primary goal of \S\ref{sec02.1} of the paper. Following Wright and Hale (\cite{Hale2001}, cf.~\cite{Cook2007aa}), the system of the \emph{Grundgesetze} has been studied in recent decades as a special case of so-called \emph{abstraction principles}. These are principles that postulate lower-order representatives for equivalence relations on higher-order entities. Many of these principles are inconsistent with full comprehension, which intuitively says that every formula determines a concept or higher-order entity. So as with the \emph{Grundgesetze}, the idea has been to look for consistency with respect to the so-called predicative instances of the comprehension schema, in which the presence of higher-order quantifiers within formulas is highly restricted. 
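Schematically, and only as orientation for the reader (the official definitions appear in \S\ref{sec0.1555predab}), the abstraction principle associated to an equivalence relation~$E$ on concepts is the statement~$\forall \; X, Y \; (\partial_E(X)=\partial_E(Y)\leftrightarrow E(X,Y))$, which postulates an object~$\partial_E(X)$ representing the~$E$-equivalence class of each concept~$X$; Basic Law~V, recalled in \S\ref{sec02.1}, is the instance in which~$E$ is coextensiveness. 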
Of course, while predicativity in connection with the \emph{Grundgesetze} is a fairly new topic, predicativity has a long tradition within mathematical logic, beginning with Poincar\'e, Russell, and Weyl (\cite{Heinzmann1986aa},~\cite{Weyl1918}), and found in our day in the work of Feferman (\cite{Feferman2005ab,Feferman1964aa}) and in~${\tt ACA}_0$ and related systems of Friedman and Simpson's project of reverse mathematics (\cite{Friedman1975aa},~\cite{Simpson2009aa}). The other chief theorem of this paper (Theorem~\ref{thm:jointconsistency}) shows that an abstraction principle associated to an equivalence relation is consistent with predicative comprehension so long as the equivalence relation is provably an equivalence relation in a limited theory of pure second-order logic and is expressible in the signature of that pure second-order logic. One application of this result is a resolution to the joint consistency problem in the predicative setting. For, in the setting of full comprehension, it has been known for some time that there are abstraction principles which are individually but not jointly consistent. In \S\ref{sec0.1555predab}, we define the notion of an abstraction principle and further contextualize our results within the extant literature on abstraction principles. The methods used in all these results draw on considerations related to G\"odel's constructible universe of sets. Whereas in the cumulative hierarchy of sets~$V_{\alpha}$, one proceeds by iterating the operation of the powerset into the transfinite, in the constructible hierarchy of sets~$L_{\alpha}$, one proceeds by iterating the operation of taking definable subsets into the transfinite. G\"odel showed that just like the universe of sets~$V=\bigcup_{\alpha} V_{\alpha}$ is a model of the axioms of set theory, so the constructible universe of sets~$L=\bigcup_{\alpha} L_{\alpha}$ is a model of the axioms of set theory, along with a strong form of the axiom of choice according to which the elements of~$L$ are well-ordered by a relation~$<_L$ (cf. \cite{Jech2003}~Chapter 13,~\cite{Kunen1980} Chapter 6,~\cite{Kunen2011aa} II.6, \cite{Devlin1984aa}). Our present understanding of the more ``local'' or ``micro'' properties of the constructible sets was furthered by the work of Kripke (\cite{Kripke1964aa}), Platek (\cite{Platek1966aa}) and Jensen (\cite{Jensen1972aa}), in whose results we find the key ideas of the projectum and uniformization. Roughly, a level~$L_{\alpha}$ of the constructible hierarchy satisfies uniformization if whenever it satisfies~$\forall \; x \; \exists \; y \; R(x,y)$ then there is a definable function~$f$ of the same level of complexity as~$R$ which satisfies~$\forall \;x \; R(x,f(x))$. The projectum, on the other hand, is related to the idea that certain initial segments~$L_{\alpha}$ of the constructible universe can be shrunk via a definable injection~$\iota:L_{\alpha}\rightarrow \rho$ to a smaller ordinal~$\rho<\alpha$. The formal definitions of the projectum and uniformization are given in \S\ref{sec02.2}. It bears emphasizing that we only employ a weak version of uniformization which has an elementary proof, and so this paper does not presuppose knowledge of Jensen's fine structure theory (cf. Proposition~\ref{prop:weakuniform}). It's actually rather natural to think that uniformization and the projectum would be useful in producing models of abstraction principles. 
On the one hand, given an equivalence relation~$E$ on the set~$P(\rho)\cap L_{\alpha}$, we can conceive of the elements of this set as higher-order entities, and then we can take the lower-order representative in~$\rho$ of an~$E$-equivalence class to be the injection~$\iota$ applied to the~$<_L$-least element of~$E$'s equivalence class. On the other hand, uniformization allows one to secure further instances of the comprehension schema in which there are some controlled occurrences of higher-order quantifiers, in essence because one can use uniformization to choose one particular higher-order entity with which to work. This, in any case, is the intuitive idea behind the proof of our theorem on the consistency of abstraction principles (Theorem~\ref{thm:jointconsistency}) which we prove in \S\ref{sec04.5}. However, this does not itself deliver our result on how much set theory one can recover in the consistent fragments of the \emph{Grundgesetze}. For this, we need to additionally show that if we start from a level of the constructible hierarchy which satisfies certain axioms of set theory, and if we perform the construction of a model of the fragment of the \emph{Grundgesetze} in the manner intimated in the above paragraph and made precise in \S\ref{sec04.5}, then we can recover these original constructible sets definably within the model of the fragment of the \emph{Grundgesetze}. The details of this argument are carried out in \S\ref{sec05} where our Main Theorem~\ref{thm:main} is finally established. This paper is the first in a series of three papers -- the other two being~\cite{Walsh2014ad},~\cite{Walsh2014ae}-- which collectively constitute a sequel to the Basic Law~V components of our paper~\cite{Walsh2012aa}. In that earlier paper~\cite{Walsh2012aa}, we gave a proof of the consistency of Frege's \emph{Grundgesetze} system with limited amounts of comprehension using tools from hyperarithmetic theory. However, we were unable to use these models to ascertain how much Zermelo-Fraenkel set theory could be consistently done in the Fregean setting. The work in this paper explains why this was the case. The key to this was an axiom known as Axiom~Beta (cf. Definition~\ref{defna:axiombeta}), which effectively ensures that the Mostowski Collapse Theorem holds in a structure. As one can see by inspection of the proofs in \S\ref{sec05}, it is being able to invoke this theorem in a model which allows us to obtain finally the Main Theorem~\ref{thm:main}. It turns out that the usual models associated to hyperarithmetic theory simply are not models of Axiom~Beta. This present paper does not depend on results from our earlier paper~\cite{Walsh2012aa}, nor does it depend on its two thematically-linked companion papers,~\cite{Walsh2014ae}, ~\cite{Walsh2014ad}. In the companion paper~\cite{Walsh2014ae}, we use the constructible hierarchy to develop models of an intensional type theory, roughly analogous to how one can use the cumulative hierarchy to build models of an extensional type theory. This intensional type theory can in turn interpret fragments of the \emph{Grundgesetze} system, and so stands to the predicative \emph{Grundgesetze} system as the stage axioms of Shoenfield~\cite{Shoenfield1961aa, Shoenfield1967aa, Shoenfield1977aa} and Boolos~\cite{Boolos1971} stands to the Zermelo-Fraenkel system. In the other companion paper~\cite{Walsh2014ad}, we examine the deductive strength of the theory consisting of all the predicative abstraction principles whose consistency we establish here. 
\section{The \emph{Grundgesetze} and Its Set Theory}\label{sec02.1} Basic Law~V is the crucial fifth axiom of Frege's \emph{Grundgesetze} (\cite{Frege1893},~\cite{Frege2013aa}), and it axiomatizes the behavior of a certain type-lowering operator from second-order entities to first-order entities, called the ``extension operator.'' In Frege's type-theory, the second-order entities are called ``concepts'' while the first-order entities are called ``objects,'' so that the extension operator~$\partial$ takes a concept~$X$ and returns an object~$\partial(X)$. (There is no standard notation for the extension operator, and so some authors write~$\S(X)$ in lieu of~$\partial(X)$). Basic Law~V then simply postulates that the extension operator is injective: \begin{equation} \mbox{\emph{Basic Law~V:}}\hspace{10mm}~\forall \; X, Y \; (\partial(X) = \partial(Y) \leftrightarrow X=Y)\label{eqn:blv} \end{equation} Here the identity of concepts is regarded as extensional in character, so that two concepts~$X,Y$ are said to be identical precisely when they are coextensive, i.e.~$X=Y$ if and only if for all objects~$z$ we have that~$Xz$ if and only if~$Yz$. Models of Basic Law~V have the following form: \begin{equation}\label{eqn:blvmodels} \mathcal{M}=(M, S_1(M), S_2(M), \ldots, \partial) \end{equation} wherein~$M$ is a non-empty set that serves as the interpretation of the objects, and the set~$S_n(M)\subseteq P(M^n)$ serves as the interpretation of the~$n$-ary concepts, and wherein the function~$\partial:S_1(M)\rightarrow M$ is an injection. Further, we assume that in the object-language of the structure from equation~(\ref{eqn:blvmodels}) we have the resources to describe when an~$n$-tuple~$(a_1, \ldots, a_n)$ from~$M^n$ is in an~$n$-ary concept~$R$ from~$S_n(M)$, and we write this in the object-language alternatively as~$R(a_1, \ldots, a_n)$ or~$(a_1, \ldots, a_n)\in R$, and we refer to this relation as the predication relation. As is well-known, Basic Law~V is inconsistent with the full second-order comprehension schema: \begin{defn} The \emph{Full Comprehension Schema} consists of the all axioms of the form \;$\exists \; R \; \forall \; \overline{a} \; (R\overline{a} \leftrightarrow \varphi(\overline{a}))$, wherein~$\varphi(\overline{x})$ is allowed to be any formula, perhaps with parameters, and~$\overline{x}$ abbreviates~$(x_1, \ldots, x_n)$ and~$R$ is an~$n$-ary concept variable for~$n\geq 1$ that does not appear free in~$\varphi(\overline{x})$. \label{eqn:unaryfullcomp} \end{defn} \noindent In spite of this inconsistency, Parsons and Heck (\cite{Parsons1987a}, \cite{Heck1996}) showed that Basic Law~V is \emph{consistent} with the version of the comprehension schema in which~$\varphi(x)$ contains no second-order quantifiers: \begin{defn} The \emph{First-Order Comprehension Schema} consists of all axioms of the form \;$\exists \; R \; \forall \; \overline{a} \; (R\overline{a} \leftrightarrow \varphi(\overline{a}))$, wherein~$\varphi(\overline{x})$ is allowed to be any formula with no second-order quantifiers but perhaps with parameters, and~$\overline{x}$ abbreviates~$(x_1, \ldots, x_n)$ and~$R$ is an~$n$-ary concept variable for~$n\geq 1$ that does not appear free in~$\varphi(\overline{x})$. 
\label{pred:comp:schema} \end{defn} \noindent Ferreira and Wehmeier extended the Parsons-Heck result by showing that there are models~$\mathcal{M}=(M, D(M), D(M^2), \ldots, \partial)$ of Basic Law~V which also model stronger forms of comprehension, namely the \;$\Delta^1_1$-comprehension schema and the~$\Sigma^1_1$-choice schema (\cite{Ferreira2002aa} \S4). These schemata are defined as follows: \begin{defn} The {\it$\Delta^1_1$-Comprehension Schema} consists of all axioms of the form \begin{equation} \forall \; \overline{x} \; (\varphi(\overline{x})\leftrightarrow \psi(\overline{x}))\rightarrow \exists \; R \; \forall \; \overline{a} \; (R\overline{a} \leftrightarrow \varphi(\overline{a})) \end{equation} wherein~$\varphi(\overline{x})$ is a~$\Sigma^1_1$-formula and~$\psi(\overline{x})$ is a~$\Pi^1_1$-formula that may contain parameters, and~$\overline{x}$ abbreviates~$(x_1, \ldots, x_n)$, and~$R$ is an~$n$-ary concept variable for~$n\geq 1$ that does not appear free in~$\varphi(\overline{x})$ or~$\psi(\overline{x})$. \label{delta11comp} \end{defn} \begin{defn} The {\it$\Sigma^1_1$-Choice Schema} consists of all axioms of the form \begin{equation}\label{eqn:LATE:formchoice} [\forall \; \overline{x} \; \exists \; R^{\prime} \; \varphi(\overline{x}, R^{\prime})]\rightarrow \exists \; R \; [\forall \; \overline{x} \; \forall \; R^{\prime} \; [(\forall \; \overline{y} \; (R^{\prime}\overline{y} \leftrightarrow R\overline{x}\:\overline{y}))\rightarrow \varphi(\overline{x}, R^{\prime})]] \end{equation} wherein the formula~$\varphi(\overline{x},R^{\prime})$ is~$\Sigma^1_1$, perhaps with parameters, and~$\overline{x}$ abbreviates {}$(x_1, \ldots, x_n)$ and~$\overline{y}$ abbreviates~$(y_1, \ldots, y_m)$ and~$R$ is an~$(n+m)$-ary concept variable for~$n,m\geq 1$ that does not appear free in~$\varphi(\overline{x},R^{\prime})$ where~$R^{\prime}$ is an~$m$-ary concept variable. \label{sigam11choice} \end{defn} \noindent Here, as is usual, a~$\Sigma^1_1$-formula (resp.~$\Pi^1_1$-formula) is one which begins with a block of existential quantifiers (resp. universal quantifiers) over~$n$-ary concepts for various~$n\geq 1$ and which contains no further second-order quantifiers. Given this variety of comprehension schemata, it becomes expedient to explicitly distinguish between different formal theories that combine these schemata with the axiom Basic Law~V from equation~(\ref{eqn:blv}). In particular, one defines the following systems (cf.~\cite{Walsh2012aa} Definition 5 p. 1683): \begin{defn} The theory~$\tt{ ABL}_0$ is Basic Law~V together with the First-Order Comprehension Schema~(cf. Definition~\ref{pred:comp:schema}). The theory~${\tt \Delta^1_1\mbox{-}BL_0}$ is Basic Law~V together with the~$\Delta^1_1$-Comprehension Schema (cf. Definition~\ref{delta11comp}). The theory~${\tt \Sigma^1_1\mbox{-}LB_0}$ is Basic Law~V together with the~$\Sigma^1_1$-Choice Schema (cf. Definition~\ref{sigam11choice}) and the First-Order Comprehension Schema~(cf. 
Definition~\ref{pred:comp:schema}).\label{defn:Sigma11choiceBLV} \end{defn} \noindent We opt to designate the subsystem formed with~$\Sigma^1_1$-Choice by inverting the letters ``{\tt BL}'' to ``{\tt LB}'', since this convention saves us from needing to write out the word ``choice'' when referring to a theory, and since it is compatible with the convention in subsystems of second-order arithmetic (\cite{Simpson2009aa}), wherein the~$\Delta^1_1$-comprehension fragment is called~${\tt \Delta^1_1\mbox{-}CA_0}$ and the~$\Sigma^1_1$-choice fragment is called~${\tt \Sigma^1_1\mbox{-}AC_0}$. In the companion paper \cite{Walsh2014ad}, we work deductively in theories containing limited amounts of comprehension. In these situations, it will prove expedient to consider an enrichment of the above theories by the addition of certain function symbols. In particular, we assume that for every~$m,n>0$ we have a~$(n+1)$-ary function symbol in the language for the map~$(R, a_1, \ldots, a_n)\mapsto R[a_1, \ldots, a_n]$ from a single~$(n+m)$-ary relation~$R$ and an~$n$-tuple of objects~$(a_1, \ldots, a_n)$ to the~$m$-ary relation \begin{equation}\label{eqn:iamafunction} R[a_1, \ldots, a_n]= \{(y_1, \ldots, y_m): R(a_1, \ldots, a_n, y_1, \ldots, y_m)\} \end{equation} One benefit of the addition of these symbols is that it allows for a compact formalization of the key clause~(\ref{eqn:LATE:formchoice}) of the $\Sigma^1_1$-choice schema, namely: \begin{equation}\label{eqn:LATE:formchoice2} [\forall \; \overline{x} \; \exists \; R^{\prime} \; \varphi(\overline{x}, R^{\prime})]\rightarrow [\exists \; R \; \forall \; \overline{x} \; \varphi(\overline{x}, R[\overline{x}])] \end{equation} The addition of these function symbols to the signature impacts the axiom system because we continue to assume that we have~$\Sigma^1_1$-choice and first-order comprehension. In particular, the inclusion of the function symbols~$(R, a_1, \ldots, a_n)\mapsto R[a_1, \ldots, a_n]$ in the signature then adds to the collection of terms of the signature, which in turn adds to the collection of quantifier-free and hence first-order formulas of the signature. Let us call this expansion of~${\tt \Sigma^1_1\mbox{-}LB_0}$ the system~${\tt \Sigma^1_1\mbox{-}LB}$, i.e. we drop the ``zero'' subscript; and likewise for the other systems from Definition~\ref{pred:comp:schema}. For ease of future reference, let's explicitly record this in the following definition: \begin{defn} The theory~$\tt{ ABL}$ is Basic Law~V together with the First-Order Comprehension Schema~(cf. Definition~\ref{pred:comp:schema}) in the signature including the function symbols~$(R, a_1, \ldots, a_n)\mapsto R[a_1, \ldots, a_n]$. The theory~${\tt \Delta^1_1\mbox{-}BL}$ is Basic Law~V together with the~$\Delta^1_1$-Comprehension Schema (cf. Definition~\ref{delta11comp}) in the signature with these function symbols. The theory~${\tt \Sigma^1_1\mbox{-}LB}$ is Basic Law~V together with the~$\Sigma^1_1$-Choice Schema (cf. Definition~\ref{sigam11choice}) and the First-Order Comprehension Schema~(cf. Definition~\ref{pred:comp:schema}) in the signature including these function symbols.\label{defn:Sigma11choiceBLV00} \end{defn} In building models of these consistent fragments of Frege's system, one of our chief aims is to understand how much set theory can be thereby recovered. 
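For instance, in the simplest case~$n=m=1$, a binary concept~$R$ and an object~$a$ determine the unary concept~$R[a]=\{y: R(a,y)\}$, and the clause~(\ref{eqn:LATE:formchoice2}) then says that from~$\forall \; x \; \exists \; R^{\prime} \; \varphi(x,R^{\prime})$ one may infer the existence of a single binary concept~$R$ whose slices~$R[x]$ uniformly witness~$\varphi$. This is merely an illustrative special case of the schemata just recorded. 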
The crucial idea is to define an ersatz membership-relation~$\eta$ in terms of the extension operator and predication: \begin{equation}\label{eqn:Fregemembership} a\eta b \Longleftrightarrow \exists \; B \; (\partial(B) =b \; \& \; Ba) \end{equation} Since the \emph{extensions} are precisely the objects in the range of the extension operator~$\partial$, we write the collection of extensions as~$\mathrm{rng}(\partial)$. Now it follows from considerations related to the Russell paradox that~$\mathrm{rng}(\partial)$ is not a concept in the presence of~$\Delta^1_1$-comprehension (cf. \cite{Walsh2012aa} Proposition 29 p. 1692). In contrast to~$\mathrm{rng}(\partial)$, the collections~$V=\{x:x=x\}$ and~$\emptyset =\{x: x\neq x\}$ do form concepts since they are first-order definable. The following elementary proposition, provable in~${\tt \Sigma^1_1\mbox{-}LB}$, says that for subconcepts of~$\mathrm{rng}(\partial)$, the~$\eta$-relation restricted to this concept exists as a binary concept: \begin{prop}\label{iamaveryhelpfulprop} (Existence of Restricted~$\eta$-relation) (${\tt \Sigma^1_1\mbox{-}LB}$) For every concept~$X\subseteq \mathrm{rng}(\partial)$ there is a binary concept~$R$ such that for all~$a$, we have that~$Xa$ implies~$\partial(R[a])=a$. So for all concepts~$X\subseteq \mathrm{rng}(\partial)$ there is a binary relation~$E_X\subseteq V\times X$ such that~$Xa$ implies:~$E_X(b,a)$ iff~$b\eta a$. \end{prop} It will also be helpful in what follows to have some fixed notation for subset and successor. So similar to equation~(\ref{eqn:Fregemembership}) we define the associated Fregean subset relation~$\subseteq_{\eta}$ as follows: \begin{equation} \label{eqn:defn:subsetfrege} a\subseteq_{\eta} b \Longleftrightarrow \forall \; c \; (c\eta a \rightarrow c\eta b) \end{equation} However, note that if~$a$ is \emph{not} an extension, then~$c\eta a$ is always false and so~$(c\eta a\rightarrow \psi)$ is always true, regardless of what~$\psi$ is. Hence, if~$a$ is \emph{not} an extension, then~$a\subseteq_{\eta} b$ is always true. So the expressions~$a\eta b$ and~$a\subseteq_{\eta} b$ will behave like membership and subset only if one restricts attention to~$a,b,$ that are extensions. In what follows, it will also be useful to introduce some notation for a successor-like operation on extensions. So let us say that \begin{equation}\label{eqn:iamsuccessorfunctiondefin} \sigma(x) = y \Longleftrightarrow \exists \; F \; \exists \; G \; [\partial(F)=x \; \& \; \partial(G)=y \; \& \; \forall \; z \; (Gz\leftrightarrow (Fz \vee z=x))] \end{equation} However, this function is not total, and in particular it should be emphasized that~$\sigma(x)$ is only well-defined when~$x$ is an extension. Accordingly, the graph of the function~$x\mapsto \sigma(x)$ does not exist as a binary concept, since if it did, then its domain would likewise exist, and its domain is precisely~$\mathrm{rng}(\partial)$. However, when~$\sigma(x)$ is defined, note that it satisfies~$z\eta (\sigma(x))$~iff either~$z\eta x$ or~$z=x$. This of course reminds us of the usual set-theoretic successor operation~$x\mapsto (x\cup \{x\})$. In the axiomatic development of systems related to~${\tt \Sigma^1_1\mbox{-}LB}$, the crucially important concept is the notion of transitive closure. If~$F$ is a concept, then let us say that~$F$ is {\it~$\eta$-transitive} or {\it~$\eta$-closed} if~$(Fx \; \& \; y\eta x)$ implies~$Fy$, for all~$x,y$. 
Then we define transitive closure as follows: \begin{equation}\label{eqn:iamtransitiveclosureeta} (\mathrm{Trcl}_{\eta}(x))(y) \equiv \forall \; F \; [\mbox{$F$ is~$\eta$-transitive \&~$x\subseteq_{\eta} \partial(F)$}]\rightarrow Fy \end{equation} It is easily provable that~$\mathrm{Trcl}_{\eta}(x)$ also has the following properties: \begin{prop}\label{prop:elementary} (Elementary Facts about Transitive Closure) \begin{enumerate} \item \emph{Transitive Closure is~$\eta$-transitive}:~$[(\mathrm{Trcl}_{\eta}(x))(y) \wedge~z\eta y]$ implies~$(\mathrm{Trcl}_{\eta}(x))(z)$. \item \emph{Transitive Closure is an~$\eta$-superclass}:~$w\eta x$ implies~$(\mathrm{Trcl}_{\eta}(x))(w)$. \end{enumerate} \end{prop} So now we may describe the procedure for carving out a model of a fragment of classical set theory~{\tt ZFC} from a model~$\mathcal{M}$ of~${\tt \Sigma^1_1\mbox{-}LB}$. Since the foundation~axiom is a traditional part of~{\tt ZFC}, we want to ensure that our fragments always include this axiom, and for this purpose it is important that we avoid infinite descending~$\eta$-chains. Since~$\mathcal{M}$ has second-order resources, this can be effected in a straightforward manner. In particular, if~$\mathcal{X}\subseteq M$ and~$\mathcal{R}\subseteq \mathcal{X}\times \mathcal{X}$ are~$\mathcal{M}$-definable (but not necessarily elements of~$S_k(M)$), then let us say that ``{\it{}$(\mathcal{X},\mathcal{R})$ is well-founded in~$\mathcal{M}$}'' if~$\mathcal{M}$ models that every non-empty \emph{subconcept} of~$\mathcal{X}$ has an~$\mathcal{R}$-least member, i.e.~$\mathcal{M}$ models~$\forall \; F \; [\exists \; x \; Fx \; \& \; \forall \; x \; (Fx\rightarrow \mathcal{X}(x))] \rightarrow [\exists \; y \; Fy \; \& \; \forall \; z \; (Fz \rightarrow \neg \mathcal{R}(z,y))]$. A special case of this is when~$X$ is a concept and~$R$ is a binary concept, in which case we likewise define ``\emph{$(X,R)$ is well-founded in~$\mathcal{M}$}'' to mean that~$\mathcal{M}$ models that every non-empty subconcept of~$X$ has an~$R$-least element, i.e.\ that~$\mathcal{M}$ models \begin{equation}\label{eqn:well-foundeddd123412341} \forall \; F \; [\exists \; x \; Fx \; \& \; \forall \; x \; (Fx\rightarrow Xx)] \rightarrow [\exists \; y \; Fy \; \& \; \forall \; z \; (Fz \rightarrow \neg R(z,y))] \end{equation} Since~$S_1(M)$ is in general a small subset of~$P(M)$, we of course need to be wary of inferring from ``$(X, R)$ is well-founded in~$\mathcal{M}$'' to~$(X, R)$ having no infinite descending~$R$-chains, or to~$(X,R)$ having no infinite~$\mathcal{M}$-definable descending~$R$-chains. Finally, putting this all together, let us define the notion of a ``well-founded extension'': \begin{align} \mathrm{wfExt}(x) \equiv x\mbox{ is an extension} & \; \& \; (\mathrm{Trcl}_{\eta}(\sigma(x)), \eta) \mbox{ is well-founded} \notag \\ & \; \& \; \mathrm{Trcl}_{\eta}(\sigma(x)) \subseteq \mathrm{rng}(\partial)\label{eqnasdfsadf2} \end{align} Given a model~$\mathcal{M}$ of~${\tt \Sigma^1_1\mbox{-}LB}$, let us define its collection of well-founded extensions as follows: \begin{equation}\label{eqnasdfsadf1} \mathrm{wfExt}(\mathcal{M}) = \{x\in M: \mathcal{M}\models \mathrm{wfExt}(x)\} \end{equation} In broad analogy with its usage in set theory, we shall sometimes refer to this as the \emph{inner model} of well-founded extensions relative to a model of~${\tt \Sigma^1_1\mbox{-}LB}$. The other definition that we need in order to state and prove our results is a global choice principle.
Suppose that~$T$ is a theory in one of our signatures. Then we let~$T+{\tt GC}$ be the expansion of~$T$ by a new binary relation symbol~$<$ on objects in the signature, with axioms saying that~$<$ is a linear order of the first-order objects, and we additionally have a schema in the expanded signature saying that any instantiated formula~$\varphi(x)$ in the expanded signature, perhaps containing parameters, that holds of some first-order object~$x$ will hold of a~$<$-least element: \begin{equation} [\exists \; x \; \varphi(x)]\rightarrow [\exists \; x \; \varphi(x) \; \& \; \forall \; y<x \; \neg \varphi(y)] \label{eqn:gcschema} \end{equation} Since all our theories~$T$ contain first-order comprehension~(cf. Definition~\ref{pred:comp:schema}), and since instances of~$<$ are quantifier-free and hence first-order, we have that the graph of~$<$ forms a binary concept in~$T+{\tt GC}$. Of course the postulated binary relation~$<$ does not necessarily have anything to do with the usual ``less than'' relation on the natural numbers. With all this notation in place, our main theorem can be expressed as follows, wherein~${\tt P}$ denotes the power set axiom: \begin{thm}\label{thm:main} (Main Theorem) There is a model~$\mathcal{M}$ of~${\tt \Sigma^1_1\mbox{-}LB}+{\tt GC}$ such that~$(\mathrm{wfExt}(\mathcal{M}), \eta)$ satisfies the axioms of~${\tt ZFC\mbox{-}P}$. \end{thm} \noindent This result is proven at the close of \S\ref{sec05}. It is significant primarily because it shows us what kind of set theory may be consistently developed if one takes Basic Law~V as a primitive. Now, one subtlety should be mentioned here at the outset: in the absence of power set, it is not entirely obvious which form of replacement and which form of choice are optimal. The discussion in Gitman-Hamkins-Johnstone (\cite{Gitman2011aa}) suggests that instead of the replacement schema one should use the collection schema, and as for the axiom of choice one should use the principle that every set can be well-ordered; the reason in each case being that these are the deductively stronger principles in the absence of powerset. (For a formal statement of the collection schema, cf. equation~(\ref{eqn:collectionschema})). As we will note when establishing our main theorem in \S\ref{sec05}, our models satisfy these principles as well. Hence, for the sake of concreteness, in this paper we may define~${\tt ZFC\mbox{-}P}$ as follows: \begin{defn}\label{defn:ZFCminusP} ${\tt ZFC\mbox{-}P}$ is the theory consisting of extensionality, pairing, union, infinity, separation, collection, foundation, and the statement that every set can be well-ordered. \end{defn} \noindent For precise definitions of these axioms, one may consult any standard set theory textbook (\cite{Kunen1980,Kunen2011aa},~\cite{Jech2003}; and for the collection schema see again equation~(\ref{eqn:collectionschema})). The Main Theorem~\ref{thm:main} is a natural analogue of the work of Boolos, Hodes, and Cook on the axiom ``New V'' (\cite{Boolos1989aa},~\cite{Hodes1991},~\cite{Cook2003ab}). This is the axiom in the signature of Basic Law~V, but where, for the sake of disambiguation, we write the type-lowering operator with the symbol~$\partial^{\prime}$ as opposed to~$\partial$.
The axiom \emph{New~V} then says that \begin{equation} \emph{New~V}: \hspace{3mm} \forall \; X,Y \; (\partial^{\prime}(X)=\partial^{\prime}(Y) \leftrightarrow ((\mathrm{Small}(X) \vee \mathrm{Small}(Y)) \rightarrow X=Y)) \label{eqn:NewV} \end{equation} Here~$\mathrm{Small}(X)$ is an abbreviation for the statement that~$X$ is not bijective with the universe of first-order objects~$\{x: x=x\}$. So if~$\mathcal{M}=(M, S_1(M), S_2(M), \ldots, \partial^{\prime})$ is a model of New~V, then~$\mathcal{M}\models \mathrm{Small}(X)$ if and only if there's no bijection~$f:X\rightarrow M$ whose graph is in~$S_2(M)$. To see the connection between New~V and {\tt ZFC}, recall that for a cardinal~$\kappa$, the set~$H_{\kappa}$ is defined as~$H_{\kappa}=\{x: \left|\mathrm{trcl}(x)\right|<\kappa\}$ (cf.~\cite{Kunen1980} \S{IV.6} pp. 130 ff,~\cite{Kunen2011aa} p. 78,~\cite{Jech2003} p. 171). Suppose that~$\kappa>\omega$ is regular and satisfies~$\left|H_{\kappa}\right|=\kappa$. In this circumstance, let us define: \begin{equation} \mathbb{H}_{\kappa} = (H_{\kappa}, P(H_{\kappa}), P(H_{\kappa} \times H_{\kappa}), \ldots, \partial^{\prime}) \end{equation} where~$\partial^{\prime}(X)=\langle 1, X\rangle$ if~$\left|X\right|<\kappa$ and~$\partial^{\prime}(X) = \langle 0,0\rangle$ otherwise (wherein~$\langle \cdot,\cdot\rangle$ is the usual set-theoretic pairing function). Then in analogy with Frege's definition of membership in equation~(\ref{eqn:Fregemembership}), we can define a quasi-membership relation~$\eta^{\prime}$ in models of New~V as follows: \begin{equation}\label{eqn:newVmembership} a\eta^{\prime} b \Longleftrightarrow \exists \; B \; (\mathrm{Small}(B) \; \& \; \partial^{\prime}(B)=b \; \& \; Ba) \end{equation} Likewise, we can define~$\mathrm{wfExt^{\prime}}$ using the relation~$\eta^{\prime}$ just as~$\mathrm{wfExt}$ is defined in equation~(\ref{eqnasdfsadf2}) using the relation~$\eta$. Then one may prove that~$\mathbb{H}_{\kappa}$ is a model of New~V and~$(\mathrm{wfExt^{\prime}}(\mathbb{H}_{\kappa}), \eta^{\prime})$ is isomorphic to~$(H_{\kappa}, \in)$, which is known to model~${\tt ZFC\mbox{-}P}$ when~$\kappa>\omega$ is regular (cf.~\cite{Kunen1980} Theorem IV.6.5 p. 132,~\cite{Kunen2011aa} Theorem II.2.1 p. 109,~\cite{Jech2003} p. 171). Hence one has the following: \begin{prop}\label{cor:mainanalogue} There is a model~$\mathcal{M}$ of New~V and the Full Comprehension Schema~(cf. Definition~\ref{eqn:unaryfullcomp}) such that~$(\mathrm{wfExt^{\prime}}(\mathcal{M}), \eta^{\prime})$ satisfies the axioms of~${\tt ZFC\mbox{-}P}$ (cf. Definition~\ref{defn:ZFCminusP}). \end{prop} \noindent The Main Theorem~\ref{thm:main} establishes an analogous result for Basic~Law~V in the setting of limited amounts of comprehension. \section{Predicative Abstraction Principles}\label{sec0.1555predab} The axioms Basic Law~V and New~V are examples of what are now called \emph{abstraction principles}. If~$E(R,S)$ is a formula of second-order logic with exactly two free~$n$-ary relation variables for some~$n\geq 1$ then the \emph{abstraction principle}~$A[E]$ associated to~$E$ is the following axiom in a signature expanded by a new function symbol~$\partial_E$ from~$n$-ary relations to objects: \begin{equation} A[E]: \hspace{15mm} \forall \; R, S \; [\partial_E(R)= \partial_E(S) \leftrightarrow E(R,S)]\label{defnAe} \end{equation} Abstraction principles have been studied extensively for many decades.
For an introduction to this subject, see Burgess~\cite{Burgess2005}, and for many important papers, see the collections edited by Demopoulos~\cite{Demopoulos1995} and Cook~\cite{Cook2007aa}. The first thing that one observes in this subject is that some abstraction principles are consistent with the Full Comprehension Schema in their signature~(cf. Definition~\ref{eqn:unaryfullcomp}) while others are not. For instance, we saw above that New~V is consistent with the Full Comprehension Schema in its signature, while Basic Law~V~(\ref{eqn:blv}) itself is not. Given that Basic Law~V is consistent with weaker forms of comprehension, one may ask whether there is any general method for determining whether the abstraction principle~$A[E]$ is consistent with these weaker forms of comprehension. In answering this question, it's helpful to have specific names for the theories consisting of combinations of the abstraction principle~$A[E]$ with the weaker forms of comprehension: \begin{defn} \label{defn:oneabstract} For each formula~$E(R,S)$ with exactly two free~$n_E$-ary variables~$R,S$ for a specific~$n_E\geq 1$, let the theory~${\tt \Delta^1_1\mbox{-}A[E]}$ (resp.~${\tt \Sigma^1_1\mbox{-}[E]A}$) consist of~$A[E]$ from equation~(\ref{defnAe}) plus the~$\Delta^1_1$-Comprehension Schema (cf. Definition~\ref{delta11comp}) in the signature containing the function symbol~$\partial_E$ (resp.~$A[E]$ from equation~(\ref{defnAe}) plus the~$\Sigma^1_1$-Choice Schema (cf. Definition~\ref{sigam11choice}) and the First-Order Comprehension Schema~(cf. Definition \ref{pred:comp:schema}) in the signature containing the function symbol~$\partial_E$). \end{defn} \noindent Further, let us define a theory of pure second-order logic in a very limited signature: \begin{defn} The theory~${\tt SO}$ is the second-order theory consisting of the Full Comprehension Schema~(\ref{eqn:unaryfullcomp}) in the signature of pure second-order logic bereft of all type-lowering function symbols. \label{eqn:secondorderlogic} \end{defn} \noindent Here the abbreviation~``${\tt SO}$'' is chosen because it reminds us of ``second-order logic.'' It's worth emphasizing that while the theory~${\tt SO}$ has full comprehension in its signature, its signature is very impoverished and does not include any of the type-lowering function symbols featuring in abstraction principles. But as with the fragments of Basic Law~V discussed in the previous sections, we're assuming that we have the function symbols~$(\overline{a}, R)\mapsto R[\overline{a}]$ from equation~(\ref{eqn:iamafunction}) in the signature of all our theories, including ${\tt SO}$. So just to be clear: the signature of ${\tt SO}$ consists merely of the predication relations $R\overline{x}$ and the maps~$(\overline{a}, R)\mapsto R[\overline{a}]$, and its axioms consist solely of the extensionality axioms and the instances of the Full Comprehension Schema~(\ref{eqn:unaryfullcomp}) in its signature. Sometimes in what follows we consider the extension ${\tt SO}+{\tt GC}$, which per the discussion of global choice in the previous section adds to the signature of ${\tt SO}$ a binary relation on objects and posits that it is a well-order. In the theory ${\tt SO}+{\tt GC}$, we adopt the convention that instances of the Full Comprehension Schema~(\ref{eqn:unaryfullcomp}) may include the global well-order. 
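To fix ideas, it may help to record how the two axioms with which this section began fall under the schema~(\ref{defnAe}). Basic Law~V~(\ref{eqn:blv}) is the abstraction principle~$A[E]$ associated to the formula~$E(X,Y)\equiv X=Y$, while New~V~(\ref{eqn:NewV}) is the abstraction principle associated to the formula \begin{equation} E^{\prime}(X,Y) \equiv ((\mathrm{Small}(X)\vee \mathrm{Small}(Y))\rightarrow X=Y) \end{equation} where we write~$E^{\prime}$ here merely to distinguish it from the identity. Both are formulas of pure second-order logic, and each is easily verified to be an equivalence relation on concepts.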
One of our chief results on the consistency of abstraction principles with predicative levels of comprehension is the following: \begin{thm}\label{cor:consistency} Suppose that~$n\geq 1$ and that~$E(R,S)$ is a formula in the signature of~${\tt SO}+{\tt GC}$ which is provably an equivalence relation on~$n$-ary concepts in~${\tt SO}+{\tt GC}$. Then~${\tt \Sigma^1_1\mbox{-}[E]A}+{\tt SO}+{\tt GC}$ is consistent. \end{thm} \noindent This result is proven in \S\ref{sec04.5} below. This result indicates that the fact that Basic Law~V is consistent with the~$\Delta^1_1$-comprehension schema and~$\Sigma^1_1$-choice schema is not an isolated phenomenon, but follows from the fact that~$E(X,Y)\equiv X=Y$ is provably an equivalence relation in~${\tt SO}+{\tt GC}$. It's worth stressing that the theory $T={\tt \Sigma^1_1\mbox{-}[E]A}+{\tt SO}+{\tt GC}$ has full comprehension for formulas in the signature of ${\tt SO}+{\tt GC}$ but only has predicative comprehension for formulas in~$T$'s full signature which includes the type-lowering function~$\partial_E$. A related problem of long-standing interest has been the ``joint consistency problem.'' This is the problem of determining natural conditions on~$E_1, E_2$ so that if~$A[E_1]$ and~$A[E_2]$ each have a standard model then~$A[E_1]\wedge A[E_2]$ has a standard model. A second-order theory is said to have a \emph{standard model} if it has a model~$\mathcal{M}$ satisfying~$S_n(M)=P(M^n)$ for all~$n\geq 1$, where we here employ the notation introduced in the previous section in equation~(\ref{eqn:blvmodels}) for models. This is a non-trivial problem: for, some~$A[E_1]$ have standard models~$\mathcal{M}$ only when the underlying first-order domain~$M$ is finite, such as when~$E_1(X,Y)$ is expressive of the symmetric difference of~$X$ and~$Y$ being Dedekind-finite (cf.~\cite{Boolos1998} p. 215,~\cite{Hale2001} pp. 289 ff). However, other~$A[E_2]$ have a standard model~$\mathcal{M}$ with underlying first-order domain~$M$ only when~$M$ is infinite, such as when~$E_2(X,Y)$ is expressive of~$X,Y$ being bijective. In the setting of limited amounts of comprehension, the most obvious analogue of the joint consistency problem is to ask about the extent to which it is consistent that~$A[E_1]\wedge A[E_2]$ has a model satisfying e.g. the~$\Delta^1_1$-comprehension schema when each~$A[E_i]$ individually does. Formally, let us introduce the following theories: \begin{defn} \label{predabstracdefn} The theory~${\tt \Delta^1_1\mbox{-}A[E_1, \ldots, E_k]}$ (resp.~${\tt \Sigma^1_1\mbox{-}[E_1, \ldots, E_k]A}$) consists both of the abstraction principles~$A[E_1] \wedge \cdots \wedge A[E_k]$~(\ref{defnAe}) and the~$\Delta^1_1$-Comprehension Schema (cf. Definition~\ref{delta11comp}) (resp. plus the~$\Sigma^1_1$-Choice Schema (cf. Definition~\ref{sigam11choice}) and the First-Order Comprehension Schema (cf. Definition~\ref{pred:comp:schema})) in the signature containing all the function symbols~$\partial_{E_1}, \ldots, \partial_{E_k}$.
\end{defn} Our result Theorem~\ref{cor:consistency} from above is a direct consequence of the following theorem, which indicates that the joint consistency problem does not arise in the setting with limited amounts of comprehension, assuming that we can prove the formulas are equivalence relations in~${\tt SO}+{\tt GC}$, and assuming that the equivalence relations are expressible in the signature of~${\tt SO}+{\tt GC}$: \begin{thm}\label{thm:jointconsistency} (Joint Consistency Theorem) Suppose~$m_1, \ldots, m_k\geq 1$ and that the formulas~$E_1(R,S)$, \ldots,~$E_k(R,S)$ in the signature of~${\tt SO}+{\tt GC}$ are provably equivalence relations on~$m_i$-ary concepts in~${\tt SO}+{\tt GC}$. Then the theory~${\tt \Sigma^1_1\mbox{-}[E_1, \ldots, E_k]A}+{\tt SO}+{\tt GC}$ is consistent. \end{thm} \noindent This result is proven in \S\ref{sec04.5} below. It's worth again underscoring that the theory $T={\tt \Sigma^1_1\mbox{-}[E_1, \ldots, E_k]A}+{\tt SO}+{\tt GC}$ has full comprehension for formulas in the signature of ${\tt SO}+{\tt GC}$ but only has predicative comprehension for formulas in~$T$'s full signature which includes the type-lowering functions~$\partial_{E_1}, \ldots, \partial_{E_k}$. By compactness, this theorem establishes the consistency of a theory which includes abstraction principles associated to each formula in the signature of ${\tt SO}+{\tt GC}$ which one can prove to be an equivalence relation in ${\tt SO}+{\tt GC}$. In our companion paper \cite{Walsh2014ad}, we study the deductive strength of this theory. \section{Constructibility and Generalized Admissibility}\label{sec02.2} The aim of this section is to briefly review several of the tools from constructibility that we use in the proofs below. Hence, it might be advisable to skip this section on a first read-through and refer back to this section as needed. In this section, we work entirely with fragments and extensions of the standard {\tt ZFC}-set theory, so that all structures~$M$ are structures in the signature of set theory. The tools which we review and describe in this section come from constructibility, the study of G\"odel's universe~$L$ (cf. ~\cite{Jech2003}~Chapter 13,~\cite{Kunen1980} Chapter 6,~\cite{Kunen2011aa} II.6, \cite{Devlin1984aa}). This is the union of the sets~$L_{\alpha}$ that are defined recursively as follows, wherein~$\mathrm{Defn}(M)$ refers to the subsets of~$M$ which are definable with parameters (when~$M$ is conceived of as having, as its only primitive, the membership relation restricted to its elements): \begin{equation}\label{defn:L} L_0=\emptyset, \hspace{10mm} L_{\alpha+1}=\mathrm{Defn}(L_{\alpha}), \hspace{10mm} L_{\alpha}=\bigcup_{\beta<\alpha} L_{\beta} \mbox{~for~$\alpha$ a limit} \end{equation} One tool which we shall use frequently in this paper is the following natural generalization of the notion of an admissible ordinal: \begin{defn}\label{defn:nadmissible} For~$n\geq 1$, an ordinal~$\alpha$ is \emph{$\Sigma_n$-admissible} if~$\alpha$ is a limit and~$\alpha>\omega$ and~$L_{\alpha}$ models~$\Sigma_n$-collection and~$\Sigma_{n-1}$-separation.
\end{defn} \noindent Recall that the collection schema is the following schema: \begin{equation}\label{eqn:collectionschema} \forall \; \overline{p} \; [\forall \; x \; \exists \; y \; \varphi(x,y,\overline{p}) ]\rightarrow [\forall \; u \; \exists \; v \; (\forall \; x\in u \; \exists \; y\in v \; \varphi(x,y,\overline{p}))] \end{equation} \noindent By abuse of notation, we also say that~$L_{\alpha}$ is~$\Sigma_n$-admissible iff~$\alpha$ is~$\Sigma_n$-admissible; and we write ``admissible'' in lieu of ``$\Sigma_1$-admissible.'' The notion of~$\Sigma_n$-admissibility can be described axiomatically as well. In particular, it is not difficult to see that~$L_{\alpha}$ is~$\Sigma_n$-admissible if and only if~$L_{\alpha}$ satisfies extensionality, pairing, union, infinity, the foundation schema,~$\Sigma_n$-collection, and~$\Sigma_{n-1}$-separation. In the case~$n=1$, this set of axioms provides an equivalent axiomatization of Kripke-Platek set theory (\cite{Kripke1964aa}, \cite{Platek1966aa}, \cite{Devlin1984aa} p. 48, p. 36). Further, the union of this set of axioms for all~$n\geq 1$, along with the axiom of choice (in the form that every set can be well-ordered), is deductively equivalent to~${\tt ZFC\mbox{-}P}$ (cf. Definition~\ref{defn:ZFCminusP}). Finally, an equivalent definition of~$\Sigma_n$-admissibility is as follows:~$\alpha$ is~$\Sigma_n$-admissible if and only if~$\alpha$ is a limit and~$\alpha>\omega$ and~$L_{\alpha}$ models Kripke-Platek set theory and~$\Sigma_n$-replacement in the strong form that both the graph and range of~$\Sigma_n$-definable functions on sets exist (cf. \cite{Simpson1978aa} p. 368, \cite{Sacks1990} p. 174). Several of the classical results about Kripke-Platek set theory easily generalize to~$\Sigma_n$-admissibles. In particular, if~$L_{\alpha}$ is~$\Sigma_n$-admissible, then (i)~$L_{\alpha}$ satisfies~$\Delta_n$-separation, (ii)~$L_{\alpha}$ models that the~$\Sigma_n$- and~$\Pi_n$-formulas are uniformly closed under bounded quantification, and (iii)~$L_{\alpha}$ satisfies~$\Sigma_n$-transfinite recursion. For the proofs of these results for the case~$n=1$, see Chapters~I-II of Devlin's book~\cite{Devlin1984aa}; the proofs for~$n>1$ carry over word-for-word. An idea closely related to transfinite recursion is Mostowski Collapse. Since admissible~$L_{\alpha}$ don't necessarily model the Mostowski Collapse Lemma, it is natural to formulate axioms pertaining directly to the Mostowski Collapse Lemma. In particular, we define: \begin{defn} $\emph{Axiom\mbox{\;}Beta}$ says that for all sets~$X,R$ such that~$(X,R)$ is well-founded, there is a set~$\pi$ such that~$\pi$ is a function with domain~$X$ satisfying, for each~$y$ from~$X$, the equation~$\pi(y)=\{\pi(y^{\prime}): y^{\prime}\in X \; \& \; y^{\prime} R y\}$ (cf. Barwise~\cite{Barwise1975ab} Definition I.9.5 p. 39). \label{defna:axiombeta} \end{defn} \noindent The set-version of the Mostowski Collapse Lemma holds in admissible~$L_{\alpha}$ which satisfy Axiom~Beta. The set-version of this lemma states that for all sets~$X,E$ such that~$(X,E)$ is well-founded and extensional, there is a transitive set~$M$ and an isomorphism~$\pi:(X,E)\rightarrow (M,\in)$ (cf. \cite{Kunen1980} pp. 105-106, \cite{Kunen2011aa} p. 56~ff, \cite{Jech2003} p. 69). The traditional~${\tt ZFC}$-proof of~$\mathrm{Axiom\mbox{\;}Beta}$ uses~$\Sigma_1$-replacement and~$\Sigma_1$-separation, and so~$L_{\alpha}$ models Axiom~Beta for all~$\Sigma_2$-admissible~$\alpha$.
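For a concrete illustration of the notion of admissibility, recall that the least admissible ordinal greater than~$\omega$ is the Church-Kleene ordinal~$\omega_1^{\mathrm{CK}}$, the supremum of the computable ordinals, and that \begin{equation} P(\omega)\cap L_{\omega_1^{\mathrm{CK}}}=\mathrm{HYP} \end{equation} where~$\mathrm{HYP}$ is the collection of hyperarithmetic subsets of the natural numbers (cf. Sacks~\cite{Sacks1990}); this example reappears at the end of \S\ref{sec04.5} below, where it connects the present constructions to our earlier models built from~$\mathrm{HYP}$.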
Other basic properties of the structures~$L_{\alpha}$ relate to its canonical well-ordering~$<_L$. The well-order~$<_L$ may be taken to be given by a canonical formula that is uniformly~$\Delta_1$ in admissible~$L_{\alpha}$. Further, since~$<_L$ is uniformly~$\Delta_1$, we have that this well-order is absolute between various admissible~$L_{\alpha}$. Moreover, one has that the function~$x\mapsto \mathrm{pred}_{<_L}(x)$ is uniformly~$\Delta_1$ in admissibles where we define~$y\in \mathrm{pred}_{<_L}(x)$ iff~$y<_L x$ (cf. Devlin~\cite{Devlin1984aa} pp. 74-75). Finally, just as the~$\Sigma_m$- and~$\Pi_m$-formulas are closed under bounded quantification for~$0\leq m\leq n$ in~$\Sigma_n$-admissibles, so for~$0< m\leq n$ they are closed under~$<_L$-bounding in~$\Sigma_n$-admissibles. Other important properties of~$\Sigma_n$-admissibles that we shall use are related to uniformization. A structure~$M$ satisfies {\it~$\utilde{ \Sigma}_n$-uniformization} if for every~$\utilde{\Sigma}_n^{M}$-definable relation~$R\subseteq M\times M$ there is a~$\utilde{\Sigma}_n^{M}$-definable relation~$R^{\prime}\subseteq R$ such that~$M \models \forall \; x \; [(\exists \; y \; R(x,y))\rightarrow (\exists !\; y \; R^{\prime}(x,y))]$. In this case,~$R^{\prime}$ is called a~$\utilde{\Sigma}_n^{M}$-definable \emph{uniformization} of~$R$. In his famous paper, Jensen showed that admissible~$L_{\alpha}$ are models of~$\utilde{\Sigma}_n$-uniformization \emph{for all~$n\geq 1$} (cf.~\cite{Jensen1972aa} Theorem 3.1 p. 256 and Lemma 2.15 p. 255;~\cite{Devlin1984aa} Theorem 4.5 p. 269). The proof of this theorem is very difficult, and in fact holds for all members~$J_{\alpha}$ of Jensen's alternate hierarchy. However, in what follows we can avoid direct appeal to Jensen's Theorem by appealing to the following weak version, whose elementary proof proceeds by choosing~$<_L$-least witnesses: \begin{prop}\label{prop:weakuniform} (\emph{Weak Uniformization}) Suppose~$n\geq 1$. If~$L_{\alpha}$ is~$\Sigma_n$-admissible then~$L_{\alpha}$ satisfies~$\utilde{\Sigma}_m$-uniformization for every~$1\leq m\leq n$. Moreover, the parameters in the~$\Sigma_m$-definition of the uniformization~$R^{\prime}$ can be taken to be the same as the parameters in the~$\Sigma_m$-definition of~$R$. \end{prop} \noindent Let's finally state a simple consequence of uniformization that we shall appeal to repeatedly in what follows: \begin{prop}\label{invert} (Proposition on Right-Inverting a Surjection) Suppose that~$n\geq 1$ and that~$L_{\alpha}$ is~$\Sigma_n$-admissible. Suppose that~$Y$ is a~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable subset of~$L_{\alpha}$ and~$X$ is a subset of~$L_{\alpha}$. Suppose there is a~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable surjection~$\pi:Y\rightarrow X$. Then~$X$ is a~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable subset of~$L_{\alpha}$ and there is a~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable injection~$\iota:X\rightarrow Y$ satisfying~$\pi\circ \iota = \mathrm{id}_X$. \end{prop} An important concept in what follows is the~$n$-th projectum of the structure~$L_{\alpha}$. This was introduced by Kripke (\cite{Kripke1964aa}) and Platek (\cite{Platek1966aa}), and it records how small one can possibly make~$\alpha$ under a~$\Sigma_n$-definable injection: \begin{defn}\label{defn:project} Suppose that~$n> 0$ and~$\alpha>\omega$. Then the {\it~$n$-th projectum}~$\rho_n(\alpha)=\rho_n$ of~$\alpha$ is the least~$\rho\leq \alpha$ such that there is a~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable injection~$\iota:\alpha\rightarrow \rho$. 
\end{defn} \noindent There are several different equivalent characterizations of the~$n$-th projectum (cf. \cite{Sacks1990} p. 157, \cite{Barwise1975ab} Definition V.6.1 p. 174, \cite{Jensen1972aa} pp. 256-257, \cite{Schindler2010aa} Definition 2.1 p. 619). In particular, for admissible~$\alpha$, the~$n$-th projectum may be equivalently defined as the smallest~$\rho\leq \alpha$ such that there is a~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable injection~$\iota:L_{\alpha}\rightarrow \rho$. Another basic tool that we employ is the notion of a~$\Sigma_n$-elementary substructure. Recall that if~$M$ and~$N$ are structures in the signature of~${\tt ZFC}$, then~$M\prec_n N$ is said to hold, and~$M$ is said to be a~$\Sigma_n$-\emph{elementary substructure} of~$N$, if~$M\subseteq N$ and for every~$\Sigma_n$-formula~$\varphi(\overline{x})$ and every tuple of parameters~$\overline{a}$ from~$M$, it is the case that~$M\models \varphi(\overline{a})$ if and only if~$N\models \varphi(\overline{a})$. Here are some basic facts about~$\Sigma_n$-elementary substructures and the constructible hierarchy that we shall use: \begin{prop}\label{prop:whatami234123} \begin{enumerate} \item[] \item \emph{The~$\Sigma_n$-Definable Closure is a~$\Sigma_n$-Elementary Substructure}: Suppose that~$L_{\alpha}$ is~$\Sigma_n$-admissible and~$A\subseteq L_{\alpha}$. Let \emph{the~$\Sigma_n$-definable closure of~$L_{\alpha}$ with parameters~$A$}, written~$\mathrm{dcl}^{L_{\alpha}}_{\Sigma_n}(A)$, denote the set of elements~$a$ of~$L_{\alpha}$ such that there is a~$\Sigma_n$-formula~$\varphi(x,\overline{y})$ with all free variables displayed and parameters~$\overline{p}\in A$ such that~$L_{\alpha}\models \varphi(a,\overline{p}) \wedge \forall \; a^{\prime} \; (\varphi(a^{\prime},\overline{p})\rightarrow a=a^{\prime})$. Then~$\mathrm{dcl}^{L_{\alpha}}_{\Sigma_n}(A)\prec_n L_{\alpha}$. \label{eqn:denclos} \item If~$\kappa$ is an uncountable regular cardinal, then~$L_{\kappa}$ is a model of~${\tt ZFC}\mbox{-}{\tt P}$ (cf. Definition~\ref{defn:ZFCminusP}). \label{eqn:helper1} \item \emph{Admissibility and Axiom Beta Preserved Under Elementary Substructure}: Suppose that~$n\geq 1$ and that~$L_{\alpha}\prec_n L_{\beta}$ where~$\beta$ is~$\Sigma_n$-admissible. Then~$\alpha$ is~$\Sigma_n$-admissible. Further, if~$L_{\beta}\models \mathrm{Axiom\mbox{\;}Beta}$ then~$L_{\alpha}\models \mathrm{Axiom\mbox{\;}Beta}$. \label{simpelthings} \item \emph{Consequence of~${\tt V\hspace{-1mm}=\hspace{-1mm}L}$ for~$\Sigma_1$-Substructures of~$L$ up to a Successor Cardinal}: Suppose that~${\tt V\hspace{-1mm}=\hspace{-1mm}L}$ and~$\lambda$ is an infinite cardinal and~$\lambda\cup \{\lambda\}\subseteq M$,~$M\prec_1 L_{\lambda^{+}}$,~$\left|M\right|=\lambda$. Then~$M=L_{\gamma}$ for some~$\gamma$ with~$\left|\gamma\right|= \lambda$.\label{eqn:genpop} \end{enumerate} \end{prop} \begin{proof} For the first item, see Devlin~\cite{Devlin1984aa} Lemma II.5.3 p. 83 which proves the result for~$n=\omega$; the same proof works for~$1\leq n<\omega$. For the next item on ${\tt ZFC}\mbox{-}{\tt P}$, see ~\cite{Kunen1980} p. 177,~\cite{Kunen2011aa} Lemma II.6.22 p. 139, ~\cite{Jech2003} p. 198, noting that these same proofs also give one the collection schema from the official definition of ${\tt ZFC}\mbox{-}{\tt P}$ (cf. Definition~\ref{defn:ZFCminusP}). The proof of the third item follows from a routine induction. For the fourth and final item, see Devlin~\cite{Devlin1984aa} Lemma II.5.10 p. 
85 for the special case~$\lambda =\omega$; the same proof works for the general case. \end{proof} \section{Construction and Existence Theorems, and Joint Consistency Problem}\label{sec04.5} The aim of this section is to build models of abstraction principles with predicative amounts of comprehension, and these yield our solution to the joint consistency problem described at the close of \S\ref{sec0.1555predab}. The first step is the following construction. This construction is also an important part of the proof of our Main Theorem~\ref{thm:main}, whose proof is presented in the next section \S\ref{sec05}. In the statement of this construction theorem, the key concepts of~$\Sigma_n$-admissible and~$n$-th projectum~$\rho_n$ were defined in the previous section \S\ref{sec02.2}. Likewise, recall that the theories~${\tt SO}$ and~${\tt \Sigma^1_1\mbox{-}[E_1, \ldots, E_k]A}$ were defined respectively in Definition~\ref{eqn:secondorderlogic} and Definition~\ref{predabstracdefn} from \S\ref{sec0.1555predab}. \begin{thm}\label{thm:BCT} (\emph{Construction Theorem}). Suppose that~$n\geq 1$ and that~$\alpha$ is~$\Sigma_n$-admissible with~$\rho_n(\alpha)=\rho<\alpha$ and let~$\iota:L_{\alpha}\rightarrow \rho$ be a witnessing~${\utilde \Sigma}_n^{L_{\alpha}}$-definable injection. Then consider the following structure~$\mathcal{M}$ in the signature of~${\tt SO}$~(cf. Definition~\ref{eqn:secondorderlogic}): \begin{equation}\label{eqn:555666} \mathcal{M} = (\rho, P(\rho)\cap L_{\alpha}, P(\rho\times \rho)\cap L_{\alpha}, \ldots) \end{equation} Further, suppose for each~$i\in [1,k]$, the relation~$E_i$ is a~${\utilde \Sigma}^{1, \mathcal{M}}_{n\mbox{-}1}$-definable equivalence relation on~$(P(\rho^{m_i})\cap L_{\alpha})$. Then consider the~${\utilde \Sigma}_{n}^{L_{\alpha}}$-definable maps~$\partial_{E_i}:(P(\rho^{m_i})\cap L_{\alpha})\rightarrow \rho$ defined by~$\partial_{E_i}(X)=\iota(\ell_i(X))$ where~$\ell_i(X)$ is the~$<_L$-least member of~$X$'s~$E_i$-equivalence class. Then the following expansion of~$\mathcal{M}$ is a model of the theory~${\tt \Sigma^1_1\mbox{-}[E_1, \ldots, E_k]A}+{\tt GC}$, where the global well-order~$<$ on objects is given by the membership relation on the ordinal~$\rho$: \begin{equation}\label{eqn:thestructure} \mathcal{N}=(\rho, P(\rho)\cap L_{\alpha}, P(\rho\times \rho)\cap L_{\alpha}, \ldots, \partial_{E_1}, \ldots, \partial_{E_k},<) \end{equation} \end{thm} \begin{proof} For each~$i\in [1,k]$, define \begin{equation} \widehat{E}_i = \{(X,Y)\in (P(\rho^{m_i})\cap L_{\alpha})\times (P(\rho^{m_i})\cap L_{\alpha}): \mathcal{M}\models E_i(X,Y)\} \end{equation} Then since~$E_i$ is~${\utilde \Sigma}^{1, \mathcal{M}}_{n\mbox{-}1}$-definable, it follows that~$\widehat{E}_i$ is~${\utilde \Sigma}^{L_{\alpha}}_{n\mbox{-}1}$-definable, so that~$\widehat{E}_i$ is a~${\utilde \Sigma}^{L_{\alpha}}_{n\mbox{-}1}$-definable equivalence relation on~$(P(\rho^{m_i})\cap L_{\alpha})$. For each element~$X$ of~$(P(\rho^{m_i})\cap L_{\alpha})$, let~$[X]_{\widehat{E}_i}\subseteq (P(\rho^{m_i})\cap L_{\alpha})$ denote the~$\widehat{E}_i$-equivalence class of~$X$. 
Then~$\ell_i:(P(\rho^{m_i})\cap L_{\alpha})\rightarrow (P(\rho^{m_i})\cap L_{\alpha})$ is defined by~$\ell_i(X)=\min_{<_L}([X]_{\widehat{E}_i})$, and its graph has the following definition: \begin{align} \ell_i(X)=Y \Longleftrightarrow X,Y\in L_{\alpha} & \;\&\; \; X,Y\subseteq \rho^{m_i} \; \& \; \widehat{E}_i(X,Y)\label{defnLekki}\\ & \; \& \; \forall Z<_L Y \; [Z\subseteq \rho^{m_i}\rightarrow \neg \widehat{E}_i(X,Z)] \notag \end{align} Since adding quantifiers bounded by~$<_L$ to~$\Sigma_m$- or~$\Pi_m$-formulas for~$m\leq n$ does not increase their complexity, we have that the graph of~$\ell_i$ is defined by the conjunction of a~${\utilde \Sigma}_{n\mbox{-}1}^{L_{\alpha}}$-formula with a~${\utilde \Pi}_{n\mbox{-}1}^{L_{\alpha}}$-formula and so is~${\utilde \Sigma}_{n}^{L_{\alpha}}$-definable. Then the map~$\partial_{E_i}: (P(\rho^{m_i})\cap L_{\alpha})\rightarrow \rho$ is defined by~$\partial_{E_i}(X) = \iota(\ell_i(X))$, which is likewise~${\utilde \Sigma}_{n}^{L_{\alpha}}$-definable since it is the composition of two~${\utilde \Sigma}_{n}^{L_{\alpha}}$-definable functions. (Note that in the case~$n=1$, the function~$\ell_i$ is defined by the conjunction of a~${\utilde \Sigma}_{0}^{L_{\alpha}}$-formula with a~${\utilde \Sigma}_{1}^{L_{\alpha}}$-formula and so is~${\utilde \Sigma}_{1}^{L_{\alpha}}$-definable. For, the formula~$\forall \; Z<_L Y \; \theta(Z,Y)$ for any~${\utilde \Sigma}_{0}^{L_{\alpha}}$-definable~$\theta(Z,Y)$ is equivalent to the formula $\exists \; Y^{\prime} \; Y^{\prime}=\mathrm{pred}_{<_L}(Y) \; \& \; \forall \; Z\in Y^{\prime} \; \theta(Z,Y)$ which is~${\utilde \Sigma}_{1}^{L_{\alpha}}$-definable because the map~$Y\mapsto \mathrm{pred}_{<_L}(Y)$ is~${\utilde \Delta}_{1}^{L_{\alpha}}$-definable). Now let us argue that the so-defined structure~$\mathcal{N}$ from equation~(\ref{eqn:thestructure}) satisfies the abstraction principle~$A[E_i]$~(\ref{defnAe}). First suppose that~$\mathcal{N}\models \partial_{E_i}(X)=\partial_{E_i}(Y)$ for some~$X,Y\in (P(\rho^{m_i})\cap L_{\alpha})$. Then since~$\iota:L_{\alpha}\rightarrow \rho$ is an injection we have that~$\ell_i(X)=\ell_i(Y)$, so that~$\min_{<_L}([X]_{\widehat{E}_i})=\min_{<_L}([Y]_{\widehat{E}_i})$. Hence~$\widehat{E}_i(X,Y)$ so that~$\mathcal{M}\models E_i(X,Y)$ and hence its expansion~$\mathcal{N}$ also models this. Conversely, suppose that~$\mathcal{N}\models E_i(X,Y)$, so that its reduct~$\mathcal{M}$ also models this. Then~$\widehat{E}_i(X,Y)$ and hence~$[X]_{\widehat{E}_i}= [Y]_{\widehat{E}_i}$ and~$\min_{<_L}([X]_{\widehat{E}_i}) = \min_{<_L}([Y]_{\widehat{E}_i})$, so that~$\ell_i(X)=\ell_i(Y)$ and hence~$\partial_{E_i}(X)=\partial_{E_i}(Y)$. Hence in fact the structure~$\mathcal{N}$ from equation~(\ref{eqn:thestructure}) satisfies the abstraction principle~$A[E_i]$. So now it remains to show that the structure~$\mathcal{N}$ from equation~(\ref{eqn:thestructure}) satisfies the First-Order Comprehension Schema (cf. Definition~\ref{pred:comp:schema}) and the~$\Sigma^1_1$-Choice Schema (cf. Definition~\ref{sigam11choice}) in the expanded signature containing the function symbols~$\partial_{E_1}, \ldots, \partial_{E_k}$. First let us establish a preliminary result that any $\utilde{\Sigma}^{1,\mathcal{N}}_1$-definable subset of $\mathcal{N}$ is $\utilde{\Sigma}^{L_{\alpha}}_n$-definable (indeed, in the same parameters). This result is proven by induction on the complexity of the formula defining the subset of $\mathcal{N}$. 
By a subset of the many-sorted structure~$\mathcal{N}$, we mean any subset of any finite product $S_1\times \cdots \times S_n$, wherein $S_i$ is one of the sorts $\rho, P(\rho)\cap L_{\alpha}, P(\rho\times \rho)\cap L_{\alpha}, \ldots$ of the structure~$\mathcal{N}$ as displayed in equation~(\ref{eqn:thestructure}). So our preliminary result establishes not only that $\utilde{\Sigma}^{1,\mathcal{N}}_1$-definable subsets of the first-order part $\rho$ are $\utilde{\Sigma}^{L_{\alpha}}_n$-definable, but also that e.g. $\utilde{\Sigma}^{1,\mathcal{N}}_1$-definable subsets of $\rho\times (P(\rho)\cap L_{\alpha})$ are $\utilde{\Sigma}^{L_{\alpha}}_n$-definable. As a base case, we show that any subset of $\mathcal{N}$ defined by an atomic formula is $\utilde{\Delta}^{L_{\alpha}}_n$-definable. Recall that an \emph{unnested} atomic formula in a signature is one of the form $x=y$, $c=y$, $f(\overline{x})=y$ or $P\overline{x}$, where $c, f, P$ are respectively constant symbols, function symbols, and relation symbols of the signature (cf. \cite{Hodges1993aa} p. 58). Then in $\mathcal{N}$, any atomic formula $\varphi$ is equivalent to both a $\utilde{\Sigma}^{1, \mathcal{N}}_1$-formula~$\varphi^{\exists}\equiv \exists \; \overline{R} \; \exists \; \overline{y} \; \varphi^{\exists}_0$ and a $\utilde{\Pi}^{1, \mathcal{N}}_1$-formula $\varphi^{\forall}\equiv \forall \; \overline{R} \; \forall \; \overline{y} \; \varphi^{\forall}_0$ in which $\varphi^{\exists}_0$ and $\varphi^{\forall}_0$ are quantifier-free and in which any atomic subformula of them is unnested (cf. \cite{Hodges1993aa} Theorem 2.6.1 p. 58). Now, the unnested atomics of the signature of $\mathcal{N}$ are $\partial_{E_i}(R)=x$, $R[\overline{x}]=S$, $R\overline{y}$, and $x=y$. The first is by construction $\utilde{\Sigma}^{L_{\alpha}}_n$-definable and the last three are trivially $\utilde{\Sigma}^{L_{\alpha}}_n$-definable. Further, their negations are likewise $\utilde{\Sigma}^{L_{\alpha}}_n$-definable: for instance, $\partial_{E_i}(R)\neq x$ iff $\exists \; y\in \rho \; (\partial_{E_i}(R)=y \; \& \; y\neq x)$, and $\Sigma_n$-formulas are closed under bounded quantification in $L_{\alpha}$. Now, without loss of generality, we may assume that the negations in $\varphi^{\exists}_0$ and $\varphi^{\forall}_0$ are all pushed to the inside, so that they apply only to unnested atomics. Then since the $\Sigma_n$-formulas are closed under finite union and intersection in $L_{\alpha}$, we have that each subformula of $\varphi^{\exists}_0$ and $\varphi^{\forall}_0$ is $\utilde{\Sigma}^{L_{\alpha}}_n$-definable and thus so are they themselves. By the same reasoning, this holds true for their negations $\neg \varphi^{\exists}_0$ and $\neg \varphi^{\forall}_0$ as well. Since $\varphi^{\exists}$ is formed from $\varphi^{\exists}_0$ by adding a single block of existential quantifiers, we have that $\varphi^{\exists}$ is $\utilde{\Sigma}^{L_{\alpha}}_n$-definable. Since $\neg (\varphi^{\forall})$ is formed from $\neg (\varphi^{\forall}_0)$ by adding a single block of existential quantifiers, we have that $\neg (\varphi^{\forall})$ is $\utilde{\Sigma}^{L_{\alpha}}_n$-definable, so that $ \varphi^{\forall}$ is $\utilde{\Pi}^{L_{\alpha}}_n$-definable. Since the original atomic formula~$\varphi$ is equivalent to both $\varphi^{\exists}$ and $\varphi^{\forall}$, we have that the original atomic formula~$\varphi$ is indeed $\utilde{\Delta}^{L_{\alpha}}_n$-definable.
Then we show that any $\utilde{\Sigma}^{1,\mathcal{N}}_1$-definable subset of $\mathcal{N}$ is $\utilde{\Sigma}^{L_{\alpha}}_n$-definable by a straightforward induction. We may again assume that all the negations are pushed to the inside and apply only to atomics. And the atomics and negated atomics are $\utilde{\Sigma}^{L_{\alpha}}_n$-definable by the previous paragraph. Since $\utilde{\Sigma}^{L_{\alpha}}_n$-definability is closed under finite union and intersection, the induction steps for conjunction and disjunction are trivial. Likewise, the induction steps for first-order quantification hold because first-order quantification in $\mathcal{N}$ corresponds to bounded quantification over elements of $\rho$ in $L_{\alpha}$, and the $\Sigma_n$-formulas are closed under bounded quantification in $L_{\alpha}$. So the inductive argument up to this point establishes the result for formulas with no higher-order quantifiers. But since we're restricting to the case of $\Sigma^{1,\mathcal{N}}_1$-definable subsets, the addition of a block of higher-order existential quantifiers ranging over $P(\rho^k)\cap L_{\alpha}$ does not bring us out of the complexity class~$\utilde{\Sigma}^{L_{\alpha}}_n$. Hence indeed any $\utilde{\Sigma}^{1,\mathcal{N}}_1$-definable subset of $\mathcal{N}$ is $\utilde{\Sigma}^{L_{\alpha}}_n$-definable. From this, the First-Order Comprehension Schema (cf. Definition~\ref{pred:comp:schema}) follows directly from $\Delta_n$-separation in $L_{\alpha}$. As for the $\Sigma^1_1$-choice schema, suppose that~$\mathcal{N} \models \forall \; x \; \exists \; R \; \varphi(x,R)$, wherein~$\varphi$ is~$\Sigma^1_1$ (for readability we treat the case of a single object variable and a unary concept variable; the general case is handled identically). Choose a $\utilde{\Sigma}_n$-formula $\psi$ such that for all $x\in \rho$ and $R\in (P(\rho)\cap L_{\alpha})$ one has $\mathcal{N}\models \varphi(x,R)$ iff $L_{\alpha}\models \psi(x,R)$. Then one has that $L_{\alpha}\models \forall x\in \rho \; \exists \; R\subseteq \rho \; \psi(x,R)$. Define $\Gamma(x,R)\equiv [x\in \rho \; \& \; R\in (P(\rho)\cap L_{\alpha}) \; \& \; L_{\alpha}\models \psi(x,R)]$. By weak uniformization (Proposition~\ref{prop:weakuniform}), choose a $\Sigma^{L_{\alpha}}_n$-definable uniformization $\Gamma^{\prime}$ of $\Gamma$. Since $\Gamma^{\prime}:\rho\rightarrow (P(\rho)\cap L_{\alpha})$, it follows from~$\Sigma_n$-replacement that its graph is an element of~$L_{\alpha}$, and obviously we have $L_{\alpha}\models \forall \; x\in \rho \; \psi(x,\Gamma^{\prime}(x))$. Then define $R^{\prime}xy$ if and only if $y\in \Gamma^{\prime}(x)$, so that $R^{\prime}\in (P(\rho\times \rho)\cap L_{\alpha})$ and $R^{\prime}[x]=\Gamma^{\prime}(x)$. Then one has $L_{\alpha}\models \forall \;x\in \rho \; \psi(x,R^{\prime}[x])$ and hence $\mathcal{N}\models \forall \; x \; \varphi(x,R^{\prime}[x])$, so that~$R^{\prime}$ witnesses this instance of the~$\Sigma^1_1$-choice schema. As for the global choice principle~${\tt GC}$, we may briefly note that~$\mathcal{N}$ obviously satisfies it when we use the ordinary ordering~$<$ on the ordinal~$\rho$ as the witness. For, since the ordering~$<$ on~$\rho$ is~$\Delta_0$-definable, it exists in~$P(\rho\times \rho)\cap L_{\alpha}$ by~$\Delta_0$-separation on the set~$\rho\times \rho$ in~$L_{\alpha}$. In the previous paragraphs, we have verified that various forms of comprehension hold on~$\mathcal{N}$, in which parameters are allowed to occur. Hence these forms of comprehension continue to hold when~$<$ is permitted to occur within the formulas because we can view this as simply yet another parameter. \end{proof} \begin{thm}\label{thm:basicexistence} (\emph{Existence Theorem}).
Let~$\gamma\geq 0$ and let~$\lambda = \omega_{\gamma}^L$ and~$\kappa=\omega_{\gamma+1}^L$. Then for each~$n\geq 1$ there is a~$\Sigma_n$-admissible~$\alpha_n$ such that \begin{equation} \lambda<\alpha_n<\kappa, \hspace{5mm} \rho_n(\alpha_n)=\lambda, \hspace{5mm} L_{\alpha_n}\prec_n L_{\kappa}, \hspace{5mm} L_{\alpha_n}\models \mathrm{Axiom\;Beta} \end{equation} More specifically, we can choose~$\alpha_n$ so that~$L_{\alpha_n} =\mathrm{dcl}_{\Sigma_n}^{L_{\kappa}}(\lambda \cup \{\lambda\})$. Further, the following set~$\mathcal{F}_n\subseteq \lambda$ is~$\Sigma_1^{L_{\alpha_n}}$-definable, wherein~$\langle \cdot, \cdot\rangle:\lambda \times \lambda \rightarrow \lambda$ is G\"odel's~$\Sigma_1$-definable pairing function and~$\mathrm{Form}(\Sigma_n)$ is the set of G\"odel numbers of~$\Sigma_n$-formulas: \begin{equation}\label{eqn:defnbigO} \mathcal{F}_n = \{\langle \ulcorner \varphi(x, \overline{y},z)\urcorner, \overline{\beta}\rangle: \varphi(x,\overline{y},z)\in \mathrm{Form}(\Sigma_n) \; \& \; \overline{\beta} <\lambda\} \end{equation} Moreover, there is a~$\utilde{\Sigma}^{L_{\alpha_n}}_n$-definable surjective partial map~$\theta_n:\mathcal{F}_n \dashrightarrow L_{\alpha_n}$ such that \begin{align} \theta_n(\langle \ulcorner \varphi(x, \overline{y},z)\urcorner, \overline{\beta}\rangle) =a & \Longrightarrow L_{\alpha_n} \models \varphi(a,\overline{\beta}, \lambda) \label{eqn:idoall} \\ (L_{\alpha_n} \models \exists \; ! \; x\; \varphi(x,\overline{\beta}, \lambda))& \Longrightarrow \langle \ulcorner \varphi(x, \overline{y},z)\urcorner, \overline{\beta}\rangle\in \mathrm{dom}(\theta_n) \nonumber \end{align} and a~$\utilde{\Sigma}^{L_{\alpha_n}}_n$-definable injection~$\iota_n: L_{\alpha_n}\rightarrow \mathrm{dom}(\theta_n)$ such that~$\theta_n \circ \iota_n$ is the identity on~$L_{\alpha_n}$. Further, the sequence~$\alpha_n$ is strictly increasing. Finally, for each~$n\geq 1$ there is an injection~$\chi_n:\lambda\rightarrow \theta^{-1}_n(\{0,1\})$ whose graph is in~$L_{\alpha_n}$. \end{thm} \begin{proof} Since the result is absolute, we may assume~${\tt V\hspace{-1mm}=\hspace{-1mm}L}$, and hence we may assume that~$\lambda$ and~$\kappa$ are cardinals. Since~$\kappa$ is regular and uncountable, one has that~$L_{\kappa}\models {\tt ZFC\mbox{-}P}$ (cf. Proposition~\ref{prop:whatami234123}, item \ref{eqn:helper1}). Let~$M=\mathrm{dcl}_{\Sigma_n}^{L_{\kappa}}(\lambda \cup \{\lambda\})$. Since the~$\Sigma_n$-definable closure is a~$\Sigma_n$-elementary substructure~(cf. Proposition~\ref{prop:whatami234123}, item~\ref{eqn:denclos}), we have~$M\prec_n L_{\kappa}$. Since~$\kappa=\lambda^{+}$ and~$\lambda\cup \{\lambda\}\subseteq M$ and~$M\prec_1 L_{\lambda^{+}}$, it follows from the consequence of~${\tt V\hspace{-1mm}=\hspace{-1mm}L}$ for~$\Sigma_1$-substructures of~$L$ up to a successor cardinal (Proposition~\ref{prop:whatami234123} item \ref{eqn:genpop}) that~$M=L_{\alpha_n}$ where~$\left|\alpha_n\right|= \lambda$. Then~$\lambda \leq \alpha_n<\kappa$. But since~$\lambda\in M=L_{\alpha_n}$ we have~$\lambda <\alpha_n<\kappa$. By Proposition~\ref{prop:whatami234123} item \ref{simpelthings}, we also have that~$L_{\alpha_n}$ is~$\Sigma_n$-admissible and satisfies Axiom~Beta.
Then define the following relation~$R_n\subseteq \mathcal{F}_n\times L_{\alpha_n}$ by \begin{equation} R_n(\langle \ulcorner \varphi(x, \overline{y},z)\urcorner, \overline{\beta}\rangle, a)\Longleftrightarrow \langle \ulcorner \varphi(x, \overline{y},z)\urcorner, \overline{\beta}\rangle\in \mathcal{F}_n \; \& \;\; L_{\alpha_n}\models \varphi(a, \overline{\beta}, \lambda) \end{equation} Then by the definability of partial satisfaction predicates,~$R_n$ is~$\utilde{\Sigma}^{L_{\alpha_n}}_n$-definable. Then by weak uniformization (Proposition~\ref{prop:weakuniform}), choose a~$\utilde{\Sigma}^{L_{\alpha_n}}_n$-definable uniformization~$\theta_n:\mathcal{F}_n\dashrightarrow L_{\alpha_n}$ of~$R_n$. Then~$\theta_n$ is a surjective partial function. For, suppose that~$a\in L_{\alpha_n}$. Since~$L_{\alpha_n}=\mathrm{dcl}_{\Sigma_n}^{L_{\kappa}}(\lambda \cup \{\lambda\})$, there is a~$\Sigma_n$-formula~$\varphi(x, \overline{y},z)$ and~$\overline{\beta}< \lambda$ such that \begin{equation}\label{eqn:whatigot4} L_{\alpha_n} \models \varphi(a,\overline{\beta},\lambda) \; \& \; [\forall \; x \; (\varphi(x, \overline{\beta}, \lambda)\rightarrow x=a)] \end{equation} Then~$L_{\alpha_n} \models \exists \; x \; \varphi(x,\overline{\beta}, \lambda)$ and~$ \langle \ulcorner \varphi(x, \overline{y},z)\urcorner, \overline{\beta}\rangle$ is in~$\mathcal{F}_n$. Then on the input~$u= \langle \ulcorner \varphi(x, \overline{y},z)\urcorner, \overline{\beta}\rangle$, we have that~$\theta_n(u)$ is defined and if~$\theta_n(u)=a^{\prime}$ then~$L_{\alpha_n} \models \varphi(a^{\prime},\overline{\beta},\lambda)$. But in conjunction with equation~(\ref{eqn:whatigot4}), it thus follows that~$a=a^{\prime}=\theta_n(u)$. Hence, indeed~$\theta_n:\mathcal{F}_n\dashrightarrow L_{\alpha_n}$ is a surjective partial function. Let~$\mathcal{F}^{\prime}_n\subseteq \mathcal{F}_n$ be the domain of~$\theta_n$, which is likewise~$\utilde{\Sigma}^{L_{\alpha_n}}_n$-definable. By the Proposition on Right-Inverting a Surjection (Proposition~\ref{invert}), it follows that there is a~$\utilde{\Sigma}^{L_{\alpha_n}}_n$-definable injection~$\iota_n: L_{\alpha_n}\rightarrow \mathcal{F}^{\prime}_n$ such that~$\theta_n \circ \iota_n=\mathrm{id}_{L_{\alpha_n}}$. Since~$\mathcal{F}_n\subseteq \lambda$, we then have that~$\rho_n(\alpha_n)\leq \lambda$. Since~$\lambda$ is a cardinal and~$L_{\alpha_n}$ has cardinality~$\lambda$, we must have then that~$\rho_n(\alpha_n)= \lambda$. Now we argue that~$\alpha_1<\alpha_2<\alpha_3<\cdots$. Since~$L_{\alpha_n}=\mathrm{dcl}_{\Sigma_n}^{L_{\kappa}}(\lambda \cup \{\lambda\})$, we have that~$L_{\alpha_n}\subseteq L_{\alpha_{n+1}}$ and hence that~$\alpha_n\leq \alpha_{n+1}$. Suppose that it was not the case that~$\alpha_n<\alpha_{n+1}$ for all~$n\geq 1$. Then~$\alpha_n=\alpha_{n+1}$ for some~$n\geq 1$. Since~$L_{\alpha_{n+1}}$ is~$\Sigma_{n\mbox{+}1}$-admissible and~$L_{\alpha_n}=L_{\alpha_{n+1}}$, we have that~$L_{\alpha_{n}}$ is~$\Sigma_{n\mbox{+}1}$-admissible and so satisfies~$\Sigma_{n}$-separation. Hence~$\mathcal{F}_n^{\prime}\in L_{\alpha_n}$ and hence by~$\Sigma_n$-replacement, the~$\utilde{\Sigma}_n^{L_{\alpha_n}}$-definable map~$\theta_n:\mathcal{F}_n^{\prime}\rightarrow L_{\alpha_n}$ would be bounded and thus not surjective, contradicting the surjectivity of~$\theta_n$ established above. Finally, we verify that for each~$n\geq 1$ there is an injection~$\chi_n:\lambda\rightarrow \theta^{-1}_n(\{0,1\})$.
Let~$\varphi(x,y,z)$ say ``$x=0$ and~$y$ is an ordinal.'' Then for each~$\beta<\lambda$ there is exactly one~$x$ in~$L_{\alpha_n}$ such that~$L_{\alpha_n}\models \varphi(x, \beta, \lambda)$. Then by equation~(\ref{eqn:idoall}) we have that~$\langle \ulcorner \varphi(x,y,z)\urcorner, \beta\rangle\in \mathrm{dom}(\theta_n)$ and that~$\theta_n(\langle \ulcorner \varphi(x,y,z)\urcorner, \beta\rangle)=0$. Then define the function~$\chi_n:\lambda\rightarrow \theta^{-1}_n(\{0,1\})$ by~$\chi_n(\beta)=\langle \ulcorner \varphi(x,y,z)\urcorner, \beta\rangle$, which is clearly injective; further clearly the graph of~$\chi_n$ is in~$L_{\alpha_n}$. \end{proof} The extra information about the injection~$\chi_n:\lambda\rightarrow \theta^{-1}_n(\{0,1\})$ in Theorem~\ref{thm:basicexistence} will be primarily useful for our companion paper~\cite{Walsh2014ae}, where we use constructible sets to build models of an intensional type theory (cf. \S{5} of~\cite{Walsh2014ae}). The reason for the focus on $\{0,1\}$ is that in the tradition of type-theory these are used as ersatzes for the truth-values $\{F,T\}$. The map~$\chi_n$ then allows us to inject ordinals $\beta<\lambda$ into intensional entities $\chi_n(\beta)$ which determine truth-values $\theta_n(\chi_n(\beta))$, roughly after the manner in which we inject natural numbers~$e$ into algorithms~$P_e$ which determine computable number-theoretic functions~$\varphi_e$. The proofs of these results can be seen as a generalization of our earlier constructions of models of~${\tt \Sigma^1_1\mbox{-}LB}_0$ of the form~$\mathcal{N}=(\omega, \mathrm{HYP}, \ldots, \partial)$ (\cite{Walsh2012aa} Theorem~53 p. 1695). Here~$\mathrm{HYP}$ denotes the hyperarithmetic subsets of natural numbers and~$\partial(Y)=\langle b,e\rangle$ only if~$b$ is a code for a computable ordinal~$\beta$ and~$Y$ is computable from~$b$'s canonical coding~$H_b$ of the~$\beta$-th Turing jump by the program~$e$. This earlier result can be seen as a special case of these results by virtue of the fact that if~$\alpha=\omega_1^{\mathrm{CK}}$ then~$P(\omega)\cap L_{\alpha}=\mathrm{HYP}$ (cf. Sacks~\cite{Sacks1990} \S~III.9 Exercise 9.12 p. 87). The primary difference between the proofs here and our earlier constructions of models of~${\tt \Sigma^1_1\mbox{-}LB}_0$ (\cite{Walsh2012aa} Theorem~53 p. 1695) was that the latter used Kond{\^o}'s Uniformization Theorem (\cite{Simpson2009aa} p. 224,~\cite{Kechris1995} p. 306), while the proof here used uniformization results in the constructible hierarchy such as weak uniformization (Proposition~\ref{prop:weakuniform}). Further, our results here can cover not just Basic Law~V but also the abstraction principles described in \S\ref{sec0.1555predab}. Finally, we can now prove the main results on the consistency of abstraction principles in the predicative setting, which were first stated and motivated in \S\ref{sec0.1555predab}. As for Theorem~\ref{cor:consistency}, this is the special case~$k=1$ of the Joint Consistency Theorem~\ref{thm:jointconsistency}. So it remains to establish this latter theorem: \begin{proof} (\emph{of Joint Consistency Theorem}~\ref{thm:jointconsistency}): So suppose that the formulas~$E_1(R,S)$, \ldots,~$E_k(R,S)$ in the signature of~${\tt SO}+{\tt GC}$ are provably equivalence relations on~$m_i$-ary concepts in~${\tt SO}+{\tt GC}$.
The theory~${\tt SO}$ from Definition~\ref{eqn:secondorderlogic} can be naturally written as the union of theories~${\tt SO}_m$, where ${\tt SO}_m$ restricts the instances of the Full Comprehension Schema~(\ref{eqn:unaryfullcomp}) in the signature of~${\tt SO}+{\tt GC}$ to its $\Sigma^1_m$-instances; here $\Sigma^1_m$ is the standard notion for formulas which begin with $m$ alternating blocks of quantifiers, the first of which is a block of second-order existential quantifiers. By compactness, it suffices to show, for each $m\geq 1$, that the theory~${\tt \Sigma^1_1\mbox{-}[E_1, \ldots, E_k]A}+{\tt SO}_m+{\tt GC}$ is consistent. Let us then fix, for the remainder of this proof, an $m\geq 1$. Choose $n>m$ sufficiently large so that (i)~the formulas~$E_1(R,S)$, \ldots,~$E_k(R,S)$ are provably equivalence relations in ${\tt SO}_{n\mbox{-}1}+{\tt GC}$, and (ii) each of the formulas~$E_1(R,S)$, \ldots,~$E_k(R,S)$ is a $\Sigma^1_{n\mbox{-}1}$-formula. By the Existence Theorem~\ref{thm:basicexistence}, choose a $\Sigma_n$-admissible $\alpha$ such that $\rho=\rho_n(\alpha)<\alpha$ and $L_{\alpha}$ is a model of Axiom~Beta. Then consider the following structure~$\mathcal{M}$ in the signature of~${\tt SO}$, as was also featured in equation~(\ref{eqn:555666}) of the hypothesis of the Construction Theorem~\ref{thm:BCT}: \begin{equation}\label{eqn:555666redux} \mathcal{M} = (\rho, P(\rho)\cap L_{\alpha}, P(\rho\times \rho)\cap L_{\alpha}, \ldots) \end{equation} Since $\alpha$ is $\Sigma_n$-admissible, one has that $L_{\alpha}$ satisfies $\Sigma_{n-1}$-separation. Thus the structure~$\mathcal{M}$ from equation~(\ref{eqn:555666redux}) satisfies the theory ${\tt SO}_{n-1}$ since instances of the Full Comprehension Schema~(\ref{eqn:unaryfullcomp}) in the signature of ${\tt SO}$ associated to $\Sigma^1_{n-1}$-formulas correspond naturally to $\Sigma_{n-1}$-instances of separation in $L_{\alpha}$ on the set~$\rho$. When expanded by the well-order~$<$ on objects given by the membership relation on~$\rho$, it likewise satisfies the theory ${\tt SO}_{n-1}+{\tt GC}$. Since the formulas~$E_1(R,S)$, \ldots,~$E_k(R,S)$ are provably equivalence relations on $m_i$-ary concepts in ${\tt SO}_{n\mbox{-}1}+{\tt GC}$, it follows that each~$E_i(R,S)$ likewise defines a~${\utilde \Sigma}^{1, \mathcal{M}}_{n\mbox{-}1}$-definable equivalence relation on~$P(\rho^{m_i})\cap L_{\alpha}$. Then by the Construction Theorem, one can build a model of ${\tt \Sigma^1_1\mbox{-}[E_1, \ldots, E_k]A}+{\tt GC}$ of the following form, wherein again the global well-order~$<$ is given by the membership relation on~$\rho$: \begin{equation}\label{eqn:thestructureredux} \mathcal{N}=(\rho, P(\rho)\cap L_{\alpha}, P(\rho\times \rho)\cap L_{\alpha}, \ldots, \partial_{E_1}, \ldots, \partial_{E_k},<) \end{equation} Since satisfaction of formulas in the signature of~$\mathcal{M}$ is invariant between it and its expansion~$\mathcal{N}$, this model~$\mathcal{N}$ is a witness to the consistency of~${\tt \Sigma^1_1\mbox{-}[E_1, \ldots, E_k]A}+{\tt SO}_m+{\tt GC}$. \end{proof} \section{Identifying the Well-Founded Extensions}\label{sec05} The goal of this section is to establish the Main Theorem~\ref{thm:main}. This is done in two steps: (i) first by identifying in Theorem~\ref{thM:firstidwef} the well-founded extensions within models induced via the Construction Theorem~\ref{thm:BCT} from~$L_{\alpha}$, and (ii) second in Theorem~\ref{thm1} by an identification within models satisfying~$\mathrm{Axiom\mbox{\;}Beta}$~(cf. Definition~\ref{defna:axiombeta}). 
The basic idea of these proofs is to relate the notion~$\mathrm{Trcl}_{\eta}(x)$ from \S\ref{sec02.1} equation~(\ref{eqn:iamtransitiveclosureeta}) defined in the object-language of a model of~${\tt \Sigma^1_1\mbox{-}LB}$ to the notion~$\mathrm{trcl}_{\eta}(x)$ defined in the meta-language. In particular, given an arbitrary relation~$R$, the notion~$\mathrm{trcl}_{R}(x)$ is defined to be the set of all~$y$ such that there is a finite sequence~$x_1, \ldots, x_n$ such that~$x_1=y$ and~$x_n=x$ and~$x_{m} R x_{m+1}$ for all~$m< n$. So a model~$\mathcal{N}$ of~${\tt \Sigma^1_1\mbox{-}LB}$ induces a specific relation~$\eta$ via the definition of the Fregean membership relation from equation~(\ref{eqn:Fregemembership}), and then~$\mathrm{trcl}_{\eta}(x)$ is defined to be~$\mathrm{trcl}_{R}(x)$ with~$R=\eta$. Finally, recall that the well-founded extensions~$\mathrm{wfExt}$ were defined in~(\ref{eqnasdfsadf2}). \begin{thm}\label{thM:firstidwef} (\emph{First Identification of Well-Founded Extensions}) Suppose~$n\geq 1$. Suppose that~$L_{\alpha}$ is~$\Sigma_n$-admissible. Let~$\rho=\rho_n(\alpha)$ and let~$\partial:L_{\alpha}\rightarrow \rho$ be a witnessing~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable injection. Suppose also that~$\rho<\alpha$. Then the structure \begin{equation} \mathcal{N} = (\rho, P(\rho)\cap L_{\alpha}, P(\rho\times \rho)\cap L_{\alpha}, \ldots, \partial\upharpoonright (P(\rho)\cap L_{\alpha})) \end{equation} is a model of~${\tt \Sigma^1_1\mbox{-}LB}+{\tt GC}$, where the global well-order on objects is given by the membership relation on~$\rho$. Further: \begin{align} \mathrm{wfExt}(\mathcal{N}) = \{x\in &\rho : (\mathrm{trcl}_{\eta}(x)\cup \{x\}, \eta) \mbox{ is} \notag\\ & ~\mbox{$\utilde{\Delta}^{L_{\alpha}}_n$-well-founded} \; \& \; (\mathrm{trcl}_{\eta}(x)\cup \{x\}) \subseteq \mathrm{rng}(\partial)\} \label{whasdsafsafdsafds} \end{align} Moreover, there is a~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable embedding~$j:(L_{\alpha},\in)\rightarrow (\mathrm{wfExt}(\mathcal{N}), \eta)$, and its image is: \begin{align} \mathrm{wfExt}_{\ast}(\mathcal{N}) = \{x\in & \rho : (\mathrm{trcl}_{\eta}(x) \cup \{x\}, \eta) \mbox{ is } \notag \\ & \mbox{well-founded} \; \& \; (\mathrm{trcl}_{\eta}(x) \cup \{x\}) \subseteq \mathrm{rng}(\partial)\}\label{whasdsafsafdsafds123423} \end{align} Finally, the isomorphism~$j:(L_{\alpha},\in)\rightarrow (\mathrm{wfExt}_{\ast}(\mathcal{N}), \eta)$ is the inverse of the Mostowski collapse~$\pi: (\mathrm{wfExt}_{\ast}(\mathcal{N}), \eta)\rightarrow (L_{\alpha},\in)$. \end{thm} \noindent For the statement of the Mostowski collapse theorem, see the discussion immediately following the definition of Axiom~Beta (Definition~\ref{defna:axiombeta}). \begin{proof} By the Construction Theorem~\ref{thm:BCT}, the structure~$\mathcal{N}$ is a model of~${\tt \Sigma^1_1\mbox{-}LB}+{\tt GC}$. Now we argue for the identity in equation~(\ref{whasdsafsafdsafds}). 
To see this identity, let us first show both of the following, wherein~$x$ is an arbitrary element of~$\rho$: \begin{align} & w\in \mathrm{trcl}_{\eta}(x) \Longrightarrow \mathcal{N}\models (\mathrm{Trcl}_{\eta}(x))(w)\label{help1} \\ & (\mathrm{trcl}_{\eta}(x)\cup \{x\}) \subseteq \mathrm{rng}(\partial) \Longrightarrow \mathrm{trcl}_{\eta}(x) \in L_{\alpha} \label{help2} \end{align} For equation~(\ref{help1}), suppose that~$w\in \mathrm{trcl}_{\eta}(x)$ and suppose that~$F\in (P(\rho)\cap L_{\alpha})$ is such that~$\mathcal{N}\models [\forall \; z \; (z\eta x \rightarrow Fz ) \; \& \; \forall \; u,v \; ((Fv \; \& \; u\eta v) \rightarrow Fu)]$. We must show that~$w\in F$. Since~$w\in \mathrm{trcl}_{\eta}(x)$, choose a sequence~$y_1, \ldots, y_n\in \rho$ such that~$y_1=w$ and~$y_n=x$ and~$y_i\eta y_{i+1}$ for all~$i<n$. Then we may show by induction on~$0<k\leq n\mbox{-}1$ that~$y_{n-k}\in F$. For equation~(\ref{help2}), first define a map~$\tau: P(\rho)\rightarrow P(\rho)$ by~$\tau(U) = \{v\in \rho: \exists \; w \in U \; v\eta w\}$. Now, it follows from the proposition on the existence of the restricted~$\eta$-relation (Proposition~\ref{iamaveryhelpfulprop}) that the map~$\tau$ has the following property: \begin{align} [U\in L_{\alpha} \; \& \; U\subseteq \mathrm{rng}(\partial)] & \Rightarrow \exists \; S\in (P(\rho\times \rho)\cap L_{\alpha}) \; [\forall \; w\in U \; \partial(S[w])=w \notag \\ & \; \& \; \tau(U) = \{v\in \rho: \exists\; w\in U\; v\in S[w]\}\in L_{\alpha}] \label{eqn111} \end{align} Let us note one further property of the map~$\tau$, namely its connection to transitive closure: \begin{equation}\label{eqn:Lobvcios} U\in (P(\rho)\cap L_{\alpha})\Rightarrow \mathrm{trcl}_{\eta}(\partial(U)) = \bigcup_{n=0}^{\infty} \tau^{(n)} (U) \end{equation} To see this, suppose that~$U\in (P(\rho)\cap L_{\alpha})$. First consider the left-to-right direction of the identity. Suppose that~$y\in \mathrm{trcl}_{\eta}(\partial(U))$. Then there are~$y_1, \ldots, y_n$ where~$y_1=y$ and~$y_n=\partial(U)$ and~$y_i\eta y_{i+1}$ for~$i<n$. By induction on~$0<k\leq n\mbox{-}1$ we may then show that~$y_{n\mbox{-}k}\in \tau^{(k\mbox{-}1)}(U)$. Second, consider the right-to-left direction of the identity in equation~(\ref{eqn:Lobvcios}). For this one simply shows by induction on~$n\geq 0$ that~$\tau^{(n)}(U)\subseteq \mathrm{trcl}_{\eta}(\partial(U))$. Turning now to the verification of equation~(\ref{help2}), suppose that~$(\mathrm{trcl}_{\eta}(x)\cup \{x\}) \subseteq \mathrm{rng}(\partial)$. Then~$\partial(X)=x$ for some~$X\in (P(\rho)\cap L_{\alpha})$. Now we argue that~$\tau^{(n)}(X)\in L_{\alpha}$ for all~$n\geq 0$. Clearly this holds for~$n=0$, since by hypothesis one has that~$\tau^{(0)}(X)=X\in L_{\alpha}$. Suppose, for the induction step, that~$\tau^{(n)}(X)\in L_{\alpha}$. Then by equation~(\ref{eqn:Lobvcios}) we can collect together the following information: $\tau^{(n)}(X)\in L_{\alpha}$ and $\tau^{(n)}(X)\subseteq \mathrm{trcl}_{\eta}(x) \subseteq \mathrm{rng}(\partial)$. Then we can deduce immediately from equation~(\ref{eqn111}) that~$\tau^{(n+1)}(X)=\tau(\tau^{(n)}(X))\in L_{\alpha}$. So now we have finished arguing that~$\tau^{(n)}(X)\in L_{\alpha}$ for all~$n\geq 0$. 
By appealing repeatedly to the proposition on the existence of the restricted~$\eta$-relation (Proposition~\ref{iamaveryhelpfulprop}), one has that~$L_{\alpha}$ models that for all~$n<\omega$ there is a sequence~$\langle U_0, S_0, \ldots, U_n, S_n\rangle$ with~$U_i\in P(\rho)\cap L_{\alpha}$ and~$S_i\in P(\rho\times\rho)\cap L_{\alpha}$ such that $U_0=X$ and \begin{align} & \forall \; m\leq n \; \forall \; w\in U_m \; \partial(S_m[w])=w\label{manythree1} \\ & \forall \; m<n \; U_{m+1} = \{v\in \rho: \exists \; w\in U_m \; v\in S_m[w]\}\label{manythree3} \end{align} Let~$n<\omega$ and let~$\langle U_0, S_0, \ldots, U_n, S_n\rangle$ be such a sequence. We argue by induction on~$m\leq n$ that~$U_m=\tau^{(m)}(X)$. Clearly this holds for~$m=0$ since $U_0=X$. Suppose it holds for~$m<n$. To see it holds for~$m+1$, note that equation~(\ref{manythree1}) and equation~(\ref{manythree3}) and the induction hypothesis imply \begin{align} & \forall \; w\in \tau^{(m)}(X) \; \partial(S_m[w])=w \\ & U_{m+1} = \{v\in \rho: \exists \; w\in\tau^{(m)}(X) \; v\eta w\} = \tau^{(m+1)}(X) \end{align} So consider the function~$f:\omega\rightarrow L_{\alpha}$ defined as follows:~$f(m)=U$ iff there is a sequence~$\langle U_0, S_0, \ldots, U_m, S_m\rangle$ satisfying (\ref{manythree1})-(\ref{manythree3}) such that~$U=U_m$. Then the graph of~$f$ is~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable and so by~$\Sigma_n$-replacement it exists as a set in~$L_{\alpha}$. Hence the infinite sequence~$\langle \tau^{(0)}(X),\tau^{(1)}(X), \ldots, \tau^{(n)}(X), \ldots\rangle$ is an element of~$L_{\alpha}$ and so by equation~(\ref{eqn:Lobvcios}), one also has that~$\mathrm{trcl}_{\eta}(x)=\mathrm{trcl}_{\eta}(\partial(X))\in L_{\alpha}$. So we have now finished the verification of equation~(\ref{help2}). Now we proceed to the verification of equation~(\ref{whasdsafsafdsafds}). Suppose first that we have an extension~$x\in \mathrm{wfExt}(\mathcal{N})$. Recall that the membership conditions of~$\mathrm{wfExt}(\mathcal{N})$ are defined in equation~(\ref{eqnasdfsadf1}), so that \begin{equation}\label{eqnsadfsdaf} \mathcal{N}\models (\mathrm{Trcl}_{\eta}(\sigma(x)), \eta) \mbox{ is well-founded} \; \& \; (\mathrm{Trcl}_{\eta}(\sigma(x))) \subseteq \mathrm{rng}(\partial) \end{equation} By equation~(\ref{help1}), we automatically have that \begin{equation} (\mathrm{trcl}_{\eta}(x)\cup \{x\}) \subseteq \{w\in \rho: \mathcal{N}\models \mathrm{Trcl}_{\eta}(x)(w)\vee w=x\} \subseteq \mathrm{rng}(\partial) \end{equation} Hence from equation~(\ref{help2}), we may conclude that~$\mathrm{trcl}_{\eta}(x)\in L_{\alpha}$. Note that if we set~$F=\mathrm{trcl}_{\eta}(x)$ then~$F$ satisfies the following condition: \begin{equation} [\forall \; z \; (z\eta x \rightarrow Fz ) \; \& \; \forall \; u,v \; ((Fv \; \& \; u\eta v) \rightarrow Fu)]\label{dsafasdfasdfasdf} \end{equation} Since~$F\in (P(\rho)\cap L_{\alpha})$, it follows that the converse to equation~(\ref{help1}) holds as well, so that we may conclude $\mathrm{trcl}_{\eta}(x) = \{w\in \rho: \mathcal{N}\models \mathrm{Trcl}_{\eta}(x)(w)\}$. So now suppose that~$(\mathrm{trcl}_{\eta}(x) \cup \{x\}, \eta)$ is not~$\utilde{\Delta}^{L_{\alpha}}_n$-well-founded. Then there is some non-empty~$\utilde{\Delta}^{L_{\alpha}}_n$-definable subset~$Z$ of~$\mathrm{trcl}_{\eta}(x) \cup \{x\}$ which has no~$\eta$-least member. 
By~$\Delta_n$-separation in~$L_{\alpha}$ on the set~$(\mathrm{trcl}_{\eta}(x)\cup \{x\})\in L_{\alpha}$, we have~$Z\in P(\rho)\cap L_{\alpha}$, which is a contradiction, since such a~$Z$ is then a concept of~$\mathcal{N}$ contradicting the well-foundedness asserted in equation~(\ref{eqnsadfsdaf}). So we have just completed the left-to-right direction of equation~(\ref{whasdsafsafdsafds}). For the other direction, suppose that~$x\in \rho$ and \begin{equation} (\mathrm{trcl}_{\eta}(x) \cup \{x\}, \eta) \mbox{ is~$\utilde{\Delta}^{L_{\alpha}}_n$-well-founded} \; \& \; (\mathrm{trcl}_{\eta}(x) \cup \{x\}) \subseteq \mathrm{rng}(\partial) \end{equation} Then equation~(\ref{help2}) implies that~$\mathrm{trcl}_{\eta}(x)\in L_{\alpha}$. By an argument similar to the one just given, we have that~$x\in \mathrm{wfExt}(\mathcal{N})$. So we have now finished verifying equation~(\ref{whasdsafsafdsafds}). Now we turn to constructing an embedding~$j:L_{\alpha}\rightarrow \rho$. By transfinite recursion, there is a~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable~$j:L_{\alpha}\rightarrow \rho$ which satisfies~$j(x) = \partial(\{j(y): y\in x\})$. Then one has that~$y \in x$ implies~$j(y)\eta j(x)$. Further, since~$\partial:L_{\alpha}\rightarrow \rho$ is an injection, we may argue by induction that~$j:L_{\alpha}\rightarrow \rho$ is an injection. Since~$j:L_{\alpha}\rightarrow \rho$ is an injection,~$y \in x$ iff~$j(y)\eta j(x)$. Hence,~$j:L_{\alpha}\rightarrow \rho$ is indeed an embedding. Now we argue that~$j:L_{\alpha}\rightarrow \mathrm{wfExt}_{\ast}(\mathcal{N})$. First let us show: \begin{equation}\label{nsdafasdfsad} x\in L_{\alpha} \Longrightarrow (\mathrm{trcl}_{\eta}(j(x)) \cup \{j(x)\}) \subseteq \mathrm{rng}(j)\subseteq \mathrm{rng}(\partial) \end{equation} Let~$x\in L_{\alpha}$ and let~$y\in \mathrm{trcl}_{\eta}(j(x))$. Then there are~$y_1, \ldots, y_n$ in~$\rho$ with~$y_1=y$ and~$y_n=j(x)$ and~$y_1 \eta y_2, \ldots, y_{n\mbox{-}1}\eta y_n$. Then using the definition of~$j$ we may argue by induction that~$y_i=j(x_i)$ for some~$x_i\in L_{\alpha}$. Let us now argue that \begin{equation}\label{nsdafasdfsad2} x\in L_{\alpha} \Longrightarrow (\mathrm{trcl}_{\eta}(j(x)) \cup \{j(x)\}, \eta) \mbox{ is well-founded} \end{equation} For, suppose that there was an infinite descending~$\eta$-sequence~$y_n$ in the set~$(\mathrm{trcl}_{\eta}(j(x)) \cup \{j(x)\}) \subseteq \mathrm{rng}(j)$. Then since~$j$ is an embedding this would lead to an infinite descending~$\in$-sequence, which is impossible. Before proceeding, let us note that~$\eta$ is well-founded on~$\mathrm{wfExt}_{\ast}(\mathcal{N})$. For, suppose that~$\emptyset \neq X\subseteq \mathrm{wfExt}_{\ast}(\mathcal{N})$. Choose~$x$ with~$Xx$, so that of course~$x$ is in~$\mathrm{wfExt}_{\ast}(\mathcal{N})$. Then consider~$X^{\prime}=X\cap (\mathrm{trcl}_{\eta}(x) \cup \{x\})$, which is a non-empty subset of~$\mathrm{trcl}_{\eta}(x) \cup \{x\}$. So there is some~$x_0$ with~$X^{\prime}x_0$ such that~$y\eta x_0$ implies~$\neg X^{\prime}y$. Suppose that~$y\eta x_0$ with~$Xy$. Since~$x_0$ is in~$\mathrm{trcl}_{\eta}(x) \cup \{x\}$ and~$y\eta x_0$, we have that~$y$ is in~$(\mathrm{trcl}_{\eta}(x)\cup \{x\})$. Then of course~$y$ is in~$X^{\prime}=X\cap (\mathrm{trcl}_{\eta}(x) \cup \{x\})$, which is a contradiction. So indeed~$\eta$ is well-founded on~$\mathrm{wfExt}_{\ast}(\mathcal{N})$. Now let us argue that~$j:L_{\alpha}\rightarrow \mathrm{wfExt}_{\ast}(\mathcal{N})$ is surjective. 
First note that it follows from the definitions that the class~$\mathrm{wfExt}_{\ast}(\mathcal{N})$ is transitive in the following sense: \begin{equation}\label{asdfasdfadsfdsaf} [y,z\in \rho \; \& \; y\in \mathrm{wfExt}_{\ast}(\mathcal{N}) \; \& \; z\eta y] \Longrightarrow z\in \mathrm{wfExt}_{\ast}(\mathcal{N}) \end{equation} So let's proceed in establishing surjectivity by reductio: suppose that~$j:L_{\alpha}\rightarrow \mathrm{wfExt}_{\ast}(\mathcal{N})$ is not surjective. So there is some~$y\in \mathrm{wfExt}_{\ast}(\mathcal{N})\setminus j\mbox{''}L_{\alpha}$. Since~$\eta$ is well-founded on~$\mathrm{wfExt}_{\ast}(\mathcal{N})$ and since~$\mathrm{wfExt}_{\ast}(\mathcal{N})$ is transitive~(\ref{asdfasdfadsfdsaf}), there is~$y\in \mathrm{wfExt}_{\ast}(\mathcal{N})\setminus j\mbox{''}L_{\alpha}$ such that \begin{equation} z\eta y \Longrightarrow z\in (\mathrm{wfExt}_{\ast}(\mathcal{N})\cap j\mbox{''}L_{\alpha}) \end{equation} Since~$y\in \mathrm{wfExt}_{\ast}(\mathcal{N})\subseteq \mathrm{rng}(\partial)$, choose~$Y\in (P(\rho)\cap L_{\alpha})$ such that~$\partial(Y)=y$. Then by the previous equation, we may conclude that $L_{\alpha} \models \forall \; z\in Y \; \exists \; x \; j(x)=z$. By~$\Sigma_n$-collection, choose~$X\in L_{\alpha}$ such that $L_{\alpha} \models \forall \; z\in Y \; \exists \; x\in X \; j(x)=z$. Then set~$X^{\prime} = X\cap j^{-1}(Y)=\{x\in X: j(x)\in Y\}$ which is in~$L_{\alpha}$ by~$\Delta_n$-separation since in addition to its natural~$\utilde{\Sigma}_n^{L_{\alpha}}$-definition it has the following~$\utilde{\Pi}_n^{L_{\alpha}}$-definition: $X^{\prime} = \{x\in X: \forall \; y\in (L_{\alpha}\setminus Y) \; j(x)\neq y\}$. Also~$\{j(x): x\in X^{\prime}\} = Y$, so that we have $j(X^{\prime}) = \partial(\{j(x): x\in X^{\prime}\}) = \partial(Y)=y$ which contradicts the hypothesis that~$y$ was not in the image of~$j$. Finally note that the isomorphism~$j:(L_{\alpha},\in)\rightarrow (\mathrm{wfExt}_{\ast}(\mathcal{N}), \eta)$ is the inverse of the Mostowski collapse~$\pi: (\mathrm{wfExt}_{\ast}(\mathcal{N}), \eta)\rightarrow (L_{\alpha},\in)$ due to the uniqueness of the latter isomorphism. \end{proof} \begin{thm}\label{thm1} (\emph{Second Identification of the Well-Founded Extensions}) Suppose that~$n\geq 1$ and~$L_{\alpha}$ is~$\Sigma_n$-admissible and satisfies Axiom~Beta. Let~$\rho=\rho_n(L_{\alpha})<\alpha$ and let~$\partial:L_{\alpha}\rightarrow \rho$ be a witnessing~$\utilde{\Sigma}_n^{L_{\alpha}}$-definable injection. Then the structure \begin{equation} \mathcal{N} = (\rho, P(\rho)\cap L_{\alpha}, P(\rho\times \rho)\cap L_{\alpha}, \ldots, \partial\upharpoonright P(\rho)\cap L_{\alpha}) \end{equation} is a model of~${\tt \Sigma^1_1\mbox{-}LB}+{\tt GC}$, where the global well-order on objects is given by the membership relation on~$\rho$. Further,~$(L_{\alpha}, \in)$ is isomorphic to~$(\mathrm{wfExt}(\mathcal{N}),\eta)$. \end{thm} \begin{proof} By the previous theorem, it suffices to show that~$\mathrm{wfExt}(\mathcal{N})\subseteq \mathrm{wfExt}_{\ast}(\mathcal{N})$. For this, it suffices to show that for all~$x\in \rho$ we have \begin{align} [(\mathrm{trcl}_{\eta}(x) \cup \{x\}, \eta) & \mbox{ is~$\utilde{\Delta}^{L_{\alpha}}_n$-well-founded} \; \& \; (\mathrm{trcl}_{\eta}(x) \cup \{x\}) \subseteq \mathrm{rng}(\partial)] \notag\\ & \Longrightarrow (\mathrm{trcl}_{\eta}(x) \cup \{x\}, \eta) \mbox{ is well-founded} \label{eqn:13412341234} \end{align} So suppose that~$x\in \rho$ satisfies the hypothesis of this conditional. 
Then define the set~$X=(\mathrm{trcl}_{\eta}(x) \cup \{x\})$, which is in~$L_{\alpha}$ by (\ref{help2}) of the previous proof. Then by the proposition on the existence of the restricted~$\eta$-relation (Proposition~\ref{iamaveryhelpfulprop}), choose a binary relation~$E_X\in L_{\alpha}$ such that~$E_X\subseteq \rho\times X$ and such that for~$a\in X$ one has~$E_X(b,a)$ iff~$b\eta a$. Since~$X$ is~$\eta$-transitive, we have that~$E_X\subseteq X\times X$. Then the hypothesis that~$(\mathrm{trcl}_{\eta}(x) \cup \{x\}, \eta)$ is~$\utilde{\Delta}^{L_{\alpha}}_n$-well-founded and the~$\eta$-transitivity of~$X$ imply \begin{equation} L_{\alpha} \models (X,E_X) \mbox{ is well-founded and extensional} \end{equation} Since the structure~$L_{\alpha}$ satisfies~$\mathrm{Axiom\mbox{\;}Beta}$, the structure~$L_{\alpha}$ satisfies the Mostowski Collapse Theorem (cf. discussion following Definition~\ref{defna:axiombeta}). Then there is a transitive set~$M$ in~$L_{\alpha}$ and a map~$\pi$ in~$L_{\alpha}$ such that~$\pi:(X,E_X)\rightarrow (M,\in)$ is an isomorphism. Suppose that~$(X,E_X)$ is not well-founded. Then there is an infinite decreasing~$\eta$-sequence~$x_i$ in~$X\subseteq L_{\alpha}$. Then~$\pi(x_i)$ is an infinite decreasing~$\in$-sequence, which is impossible. Hence~$(X,E_X)$, and therefore~$(\mathrm{trcl}_{\eta}(x) \cup \{x\}, \eta)$, is well-founded, which establishes equation~(\ref{eqn:13412341234}). \end{proof} This now allows us to establish the Main Theorem~\ref{thm:main}: \begin{proof} (of Theorem~\ref{thm:main}): By compactness, this follows from the Existence Theorem~\ref{thm:basicexistence} and the Second Identification of the Well-Founded Extensions Theorem~\ref{thm1}. Here we are also appealing to the connection between the union of the axiomatic characterizations of~$\Sigma_n$-admissibility and~${\tt ZFC\mbox{-}P}$, which we noted immediately after the definition of~$\Sigma_n$-admissibility (cf. Definition~\ref{defn:nadmissible}). \end{proof} \end{document}
\begin{document} \title{Quiver representations and dimension reduction in dynamical systems} \begin{abstract} Dynamical systems often admit geometric properties that must be taken into account when studying their behaviour. We show that many such properties can be encoded by means of quiver representations. These properties include classical symmetry, hidden symmetry and feedforward structure, as well as subnetwork and quotient relations in network dynamical systems. A quiver equivariant dynamical system consists of a collection of dynamical systems with maps between them that send solutions to solutions. We prove that such quiver structures are preserved under Lyapunov-Schmidt reduction, center manifold reduction, and normal form reduction. \end{abstract} \section{Introduction}\lambdabel{sec0} In this paper we show that various structural properties of dynamical systems (ODEs and iterated maps) can be encoded using the language of quiver representations. These structural properties include classical symmetry, but also feedforward structure, subnetwork and quotient relations in network dynamical systems, and so-called hidden symmetry, including interior symmetry and quotient symmetry. This paper aims to provide a unifying framework for studying dynamical systems with quiver symmetry. A quiver representation consists of a collection of vector spaces with linear maps between them. A simple example is the representation of a group (where there is only one vector space). We shall speak of a dynamical system with quiver symmetry when a dynamical system is defined on each of the vector spaces of the representation, and so that all the linear maps in the representation send the orbits of one dynamical system to orbits of another one. We will argue that quiver symmetry is quite prevalent in dynamical systems that have the structure of an interacting network. Essentially, this insight can already be found in the work of Golubitsky, Stewart et al \cite{curious}, \cite{golstew}, \cite{torok}, \cite{stewartnature}, \cite{pivato}, who realised that every trajectory of a so-called {\it quotient} of a network gives rise to a trajectory in the original network system. DeVille and Lerman \cite{deville} later generalised this result and formulated it in the language that we shall use in this paper. Inspired by these ideas, we shall define two distinct quiver representations for each network dynamical system - the quiver of subnetworks and the quiver of quotient networks - and we will investigate how these quivers impact the dynamics of a network. The advantage of interpreting a property of a dynamical system as a quiver symmetry, lies in the fact that quiver symmetry is an intrinsic property of a dynamical system. It is for example preserved under composition of maps and Lie brackets of vector fields. Unlike for example network structure, which is generally destroyed when a coordinate transformation is applied, quiver symmetry is thus defined in a coordinate-invariant manner. This motivates us to start developing a theory for dynamical systems with quiver symmetry. As a consequence of its intrinsic definition, quiver symmetry can be incorporated quite easily in many of the tools that are available for the analysis of dynamical systems. In this paper, we focus on the impact of quiver symmetry on local dimension reduction techniques. 
It is well-known that classical symmetry (of compact group actions) is preserved by several of these reduction techniques - see \cite{field3}, \cite{field4}, \cite{fieldgolub}, \cite{constrainedLS}, \cite{perspective}, \cite{golschaef2} and references therein for an overview of results. In this paper, we will generalise these results by proving that quiver symmetry can be preserved in Lyapunov-Schmidt reduction, center manifold reduction and normal form reduction. More precisely, we show that the dynamical systems that result after applying these reduction techniques inherit the quiver symmetry of the original dynamical system. Partial results in this direction were obtained by the authors in earlier papers \cite{fibr}, \cite{CMR}, \cite{CMRSIREV}, \cite{RinkSanders2}, \cite{RinkSanders3}, \cite{CCN}, \cite{schwenker}. This paper provides a unifying context for these earlier results. Because the results in this paper apply to any (finite) quiver, we shall not yet try to use any of the more involved results from the theory of quiver representations, such as Gabriel's classification theorem \cite{Gabriel}. We will start this paper with a simple illustrative example of a dynamical system with quiver symmetry in Section \ref{sec2}. We then define quiver representations and quiver equivariant maps in Section \ref{sec3}. In Sections \ref{sec4} and \ref{sec5} we discuss two natural examples of quiver representations that one encounters in the study of network dynamical systems. In Section \ref{sec6} we gather some properties of endomorphisms of quiver representations. This prepares us to prove the results on Lyapunov-Schmidt reduction, center manifold reduction and normal forms in Sections \ref{sec7}, \ref{sec8} and \ref{sec9}. We finish the paper with an example in Section \ref{sec10}. \section{A simple feedforward system}\label{sec2} Before describing our results in more generality, let us start with a simple example. To this end, let $E_1$ and $E_2$ be finite dimensional real vector spaces and consider a differential equation of the feedforward form \begin{align} \label{ffw} \left\{\begin{array}{l} \frac{dx}{dt} = f(x)\, ,\\ \frac{dy}{dt} = g(x,y)\, , \end{array}\right. \end{align} where $x\in E_1$ and $y\in E_2$. We will show that such a feedforward system can in fact be thought of as a system with quiver symmetry. To explain this, let us (artificially) replace (\ref{ffw}) by two separate systems of differential equations \begin{align} \label{ffwpairA} & \left\{ \begin{array}{l} \frac{dx}{dt} = f(x)\, , \\ \frac{dy}{dt} = g(x,y)\ , \, \end{array} \right. \hspace{.5cm} \mbox{for} \ (x,y) \in E_1\times E_2\, , \\ \label{ffwpairB} & \left\{ \frac{dX}{dt} = f(X)\right. \, , \hspace{.9cm} \mbox{for} \ X \in E_1 \, . \end{align} This unconventional step allows us to formulate the following simple lemma. It states that a feedforward system can be thought of as a system with symmetry. \begin{lem}\label{fflemma} A pair of (systems of) differential equations \begin{align} \label{ffwpairgen1} & \left\{ \begin{array}{l} \frac{dx}{dt} = F(x, y)\\ \frac{dy}{dt} = G(x,y)\, \end{array} \right. \\ \label{ffwpairgen2} & \left\{ \frac{dX}{dt} = H(X)\right. \end{align} is of the feedforward form (\ref{ffwpairA}), (\ref{ffwpairB}) if and only if the map $$R: E_1\times E_2 \to E_1 \ \mbox{defined\ by}\ R(x,y):=x$$ sends every solution of (\ref{ffwpairgen1}) to a solution of (\ref{ffwpairgen2}). 
\end{lem} \begin{proof} Firstly, assume that (\ref{ffwpairgen1}) and (\ref{ffwpairgen2}) are actually of the form (\ref{ffwpairA}), (\ref{ffwpairB}) respectively. This means that $$F(x,y)=f(x), G(x,y)=g(x,y), H(X)=f(X)\, .$$ Assume now that $(x(t), y(t))$ solves (\ref{ffwpairgen1}). Then $\frac{dx}{dt}(t) = F(x(t),y(t))$ and hence $X(t):=R(x(t), y(t)) = x(t)$ satisfies $$\frac{dX}{dt}(t) = \frac{dx}{dt}(t) = F(x(t),y(t)) =f(x(t)) = f(X(t))=H(X(t))\, .$$ So $R$ sends solutions of (\ref{ffwpairgen1}) to solutions of (\ref{ffwpairgen2}). For the other direction, assume that for every solution $(x(t), y(t))$ of (\ref{ffwpairgen1}) the curve $X(t):=R(x(t), y(t)) = x(t)$ is a solution of (\ref{ffwpairgen2}). Let $(x, y) \in E_1\times E_2$ be arbitrary, and let $(x(t), y(t))$ be the solution of (\ref{ffwpairgen1}) with $(x(0), y(0))=(x,y)$. Define $X := R(x,y)=x$ and $X(t):=R(x(t), y(t))= x(t)$. Then $X(0)=X$ and $\frac{dX}{dt}(t) = H(X(t))$. It follows that $F(x,y) = \frac{dx}{dt}(0) = \frac{dX}{dt}(0) = H(X(0))=H(X) = H(x)$. So $F(x,y)=H(x)$ for all $x,y$. If we now define $f(X):=H(X)$ and $g(x,y) := G(x,y)$, then obviously $F(x,y) = H(x) = f(x)$. In other words, (\ref{ffwpairA}) coincides with (\ref{ffwpairgen1}) and (\ref{ffwpairB}) coincides with (\ref{ffwpairgen2}). \end{proof} \noindent Lemma \ref{fflemma} translates the property that an ODE has feedforward structure into a somewhat unconventional symmetry property. We will see many more examples of this phenomenon later. It is important to note that the symmetry in Lemma \ref{fflemma} is a noninvertible map between two different vector spaces. We are thus not in the classical setting where the symmetries form a group. Instead, they form a (rather simple) quiver. The statement of the following lemma is not new, see \cite{curious}, but we provide a new proof that is based on the observation in Lemma \ref{fflemma}. This proof nicely illustrates how quiver symmetry can be taken into account when we analyse a dynamical system. Moreover, the proof below easily generalises to dynamical systems with more complicated quiver symmetries, see Theorem \ref{CMtheorem} below. \begin{lem}\lambdabel{cmff} Let $(x_0,y_0)$ be an equilibrium point of the feedforward system \begin{align} \lambdabel{ffwpairAac} & \left\{ \begin{array}{l} \frac{dx}{dt} = f(x)\, ,\\ \frac{dy}{dt} = g(x,y)\, ,\, \end{array} \right. \hspace{.5cm} \mbox{for} \ (x,y) \in E_1\times E_2\, . \end{align} Denote by $L_{(x_0, y_0)}$ the Jacobian of (\ref{ffwpairAac}) at $(x_0, y_0)$ and by $E_1\times E_2 = E^c \oplus E^h$ the decomposition into its center and hyperbolic subspaces. We denote by $$\pi^c: E_1\times E_2 = E^c \oplus E^h \to E^c \ , \ (x, y)\mapsto (x^c, y^c)$$ the projection onto $E^c$ along $E^h$. Assume that (\ref{ffwpairAac}) admits a global center manifold at $(x_0, y_0)$. Then $\pi^c$ conjugates the dynamics on this center manifold to a dynamical system on $E^c$ of the form \begin{align} \lambdabel{ffwpairAacred} & \left\{ \begin{array}{l} \frac{d x^c}{dt} = f^c(x^c)\, ,\\ \frac{d y^c}{dt} = g^c(x^c, y^c)\, . \end{array} \right. \end{align} \end{lem} \begin{proof} Let us define $F: E_1\times E_2 \to E_1 \times E_2$ by $F(x,y):=(f(x), g(x,y))$. Then equation (\ref{ffwpairAac}) can be written as $\frac{d}{dt}(x,y)=F(x,y)$. Recall from Lemma \ref{fflemma} that the feedforward structure of $F$ implies that $R: E_1\times E_2 \to E_1, (x,y)\mapsto x$ sends solutions of this ODE to solutions of the ODE \begin{align}\lambdabel{smaller} \frac{dX}{dt}=f(X)\, . 
\end{align} In other words, we have that $$R \circ F = f \circ R\, .$$ This clearly implies that $X_0:=R(x_0,y_0)=x_0$ is an equilibrium point of (\ref{smaller}), and if we write $L_{X_0}=Df(X_0)$ for the Jacobian of (\ref{smaller}) at $X_0$, then \begin{align}\label{linearconjugacy} R \circ L_{(x_0, y_0)} = L_{X_0} \circ R\, . \end{align} This follows from differentiating $R(F(x,y)) = f(R(x,y))$ at $(x,y)=(x_0, y_0)$. Let us decompose $E_1 = E^{c'}\oplus E^{h'}$ into the center and hyperbolic subspaces of $L_{X_0}$. Then it follows from (\ref{linearconjugacy}) that $R$ maps any generalised eigenspace of $L_{(x_0, y_0)}$ into the generalised eigenspace of $L_{X_0}$ with the same eigenvalue. It follows that $R(E^c)\subset E^{c'}$ and $R(E^h)\subset E^{h'}$. Denoting by $\Pi^c: E_1 = E^{c'}\oplus E^{h'} \to E^{c'}$ the projection onto $E^{c'}$ along $E^{h'}$, we conclude that $$\Pi^c \circ R = R \circ \pi^c \, .$$ The next step is to prove that $R$ sends the global center manifold of (\ref{ffwpairAac}) at $(x_0, y_0)$ to the global center manifold of (\ref{smaller}) at $X_0$. For this we recall that a solution $(x(t), y(t))$ lies in the global center manifold of (\ref{ffwpairAac}) if and only if $$\sup_{t\in \mathbb{R}} || \pi^h(x(t), y(t))|| < \infty\, ,$$ where we write $\pi^h = 1-\pi^c$. We similarly write $\Pi^h=1-\Pi^c$. If we define $X(t) := R(x(t), y(t)) = x(t)$ for such a solution, then clearly $\Pi^h(X(t)) = \Pi^h(R(x(t), y(t))) = R (\pi^h(x(t),y(t)))$, and so $$\sup_{t\in \mathbb{R}} || \Pi^h(X(t))|| \leq ||R|| \cdot \sup_{t\in \mathbb{R}}|| \pi^h(x(t), y(t))|| < \infty \, ,$$ where $||R||$ denotes the operator norm of $R$ (which equals $1$ here). So $X(t)$ lies in the global center manifold of (\ref{smaller}). This proves that $R$ maps the center manifold of (\ref{ffwpairAac}) into the center manifold of (\ref{smaller}). Next, recall that the center manifolds of (\ref{ffwpairAac}) and (\ref{smaller}) are the graphs of certain (finitely many times continuously differentiable) functions $\phi: E^c \to E^h$ and $\psi: E^{c'}\to E^{h'}$ respectively. In other words, for every $(x,y)$ in the center manifold of (\ref{ffwpairAac}) and $X$ in the center manifold of (\ref{smaller}), we have \begin{align} \label{cmgraph} (x, y) & = \underbrace{(x^c, y^c)}_{\in E^c} + \underbrace{\phi(x^c, y^c)}_{\in E^h} \, , \\ \label{cmgraph2} X & = \underbrace{X^c}_{\in E^{c'}} + \underbrace{\psi(X^c)}_{\in E^{h'}}\, . \end{align} Pick an $(x,y)$ in the center manifold of (\ref{ffwpairAac}). Applying $R$ to (\ref{cmgraph}) yields that $$x = x^c + R(\phi(x^c, y^c))\, .$$ Note that this $x$ lies in the center manifold of (\ref{smaller}) by the result above. We also have that $x^c = R(x^c, y^c) \in E^{c'}$ because $(x^c, y^c) \in E^c$ and $R(E^c)\subset E^{c'}$. Similarly, $R(\phi(x^c, y^c))\in E^{h'}$ because $\phi(x^c, y^c) \in E^h$ and $R(E^h)\subset E^{h'}$. But this means that $x^c$ is the center part of $x$ and $R(\phi(x^c, y^c))$ is its hyperbolic part. So (\ref{cmgraph2}) gives that $R(\phi(x^c, y^c))$ must be equal to $\psi(x^c)=\psi(R(x^c, y^c))$. 
This proves that $$R \circ \phi = \psi \circ R \, .$$ Next, let $(x(t), y(t))$ be an integral curve of $F$ lying inside the center manifold of (\ref{ffwpairAac}), and let us once again write $$(x(t), y(t)) = (x^c(t), y^c(t)) + \phi(x^c(t), y^c(t))\, .$$ Because $\frac{d}{dt}(x(t),y(t)) = F(x(t), y(t))$, it then follows that $$\frac{d}{dt}(x^c(t), y^c(t)) = (\pi^c\circ F) ( (x^c(t), y^c(t))+ \phi(x^c(t), y^c(t)))\, .$$ This shows that the restriction of $\pi^c$ to the center manifold sends integral curves of $F$ to integral curves of the vector field $F^c: E^{ c} \to E^{ c}$ defined by \begin{align} F^c(x^c, y^c) & := (\pi^c \circ F)((x^c, y^c) + \phi( x^c, y^c))\, . \nonumber \end{align} Similarly, the restriction of $\Pi^c$ to the center manifold of (\ref{smaller}) sends integral curves of $f$ to integral curves of $f^c: E^{c'}\to E^{c'}$ defined by \begin{align}\nonumber f^c(X^c) & := (\Pi^c \circ f)(X^c + \psi(X^c))\, . \end{align} Now we simply notice that \begin{align}\nonumber R(F^c(x^c, y^c)) =& (R\circ \pi^c \circ F)((x^c, y^c) + \phi( x^c, y^c)) \\ \nonumber = & (\Pi^c \circ R \circ F)((x^c, y^c) + \phi( x^c, y^c)) \\ \nonumber = & (\Pi^c \circ f \circ R)((x^c, y^c) + \phi( x^c, y^c)) \\ \nonumber = & (\Pi^c \circ f) ( R (x^c, y^c) + R( \phi( x^c, y^c))) \\ \nonumber = & (\Pi^c \circ f) ( R(x^c, y^c) + \psi( R (x^c, y^c))) \\ \nonumber = & f^c(R(x^c, y^c))\, , \end{align} which proves that $$R\circ F^c = f^c \circ R\, . $$ In this last formula, $R$ in fact denotes the restriction $R:E^c \to E^{c'}$ given by $R(x^c, y^c)=x^c$. Lemma \ref{fflemma} thus guarantees that $F^c$ is of the feedforward form $$F^c(x^c, y^c)=(f^c(x^c), g^c(x^c, y^c))$$ for some function $g^c$ on $E^c$. This finishes the proof. \end{proof} \section{Quiver equivariant dynamical systems}\label{sec3} The pair of ODEs (\ref{ffwpairA}), (\ref{ffwpairB}) is a simple example of a quiver equivariant dynamical system. We shall now give the general definition. In this paper, we only consider quivers with finitely many vertices and arrows, because this simplifies our proofs. \begin{defi} \mbox{}\\ \begin{itemize} \item[{\it i)}] A {\it quiver} is a directed (multi)graph $${\bf Q} = \{A\rightrightarrows^s_t V\}$$ consisting of a finite set of arrows $A$, a finite set of vertices $V$, a source map $s: A \to V$ and a target map $t: A\to V$. \item[{\it ii)}] A {\it representation} ({\bf E}, {\bf R}) of a quiver {\bf Q} consists of a set {\bf E} of finite dimensional vector spaces $E_v$ (one for each vertex $v\in V$), and a set {\bf R} of linear maps $$R_{a}: E_{s(a)} \to E_{t(a)}\ \ \mbox{(one for each arrow}\ a\in A).$$ \item[{\it iii)}] A {\bf Q}-{\it equivariant map} {\bf F} of a representation ({\bf E}, {\bf R}) of a quiver {\bf Q} consists of a collection of maps $F_v: E_v\to E_v$ (one for each vertex $v\in V$) so that $$ F_{t(a)} \circ R_a = R_a \circ F_{s(a)} \ \ \mbox{for every arrow}\ a\in A\, .$$ We shall write ${\bf F}\in C^{\infty}({\bf E}, {\bf R})$ if $F_v \in C^{\infty}(E_v)$ for every $v\in V$. We shall sometimes refer to a {\bf Q}-equivariant map as a {\bf Q}-{\it equivariant vector field} or a {\bf Q}-{\it equivariant dynamical system}. \end{itemize} \end{defi} \noindent The following simple proposition expresses that quiver-equivariance is an intrinsic property. Proposition \ref{Lieproperty} formulates the corresponding result for the Lie bracket. 
\begin{prop}\label{obviouscomp} Let $({\bf E}, {\bf R})$ be a representation of a quiver ${\bf Q}= \{A\rightrightarrows^s_t V\}$ and let ${\bf F}, {\bf G} \in C^{\infty}({\bf E}, {\bf R})$. Define the composition ${\bf F} \circ {\bf G}$ to consist of the maps $(F_v \circ G_v): E_v \to E_v$ (for $v\in V$). Then ${\bf F} \circ {\bf G} \in C^{\infty}({\bf E}, {\bf R})$. \end{prop} \begin{proof} Smoothness of $(F_v\circ G_v)$ is obvious. Now let $a\in A$ be an arrow. Then \begin{align}\nonumber R_a \circ (F_{s(a)} \circ G_{s(a)}) = F_{t(a)} \circ R_a \circ G_{s(a)} = (F_{t(a)} \circ G_{t(a)}) \circ R_a \, . \end{align} \end{proof} \noindent The next example shows that the feedforward system of Section \ref{sec2} constitutes a quiver equivariant dynamical system. \begin{ex} \label{ffexamplequiver} Consider a quiver {\bf Q} consisting of two vertices $V=\{v_1, v_2\}$ and three arrows $A=\{a_1, a_2, a_3\}$, where $s(a_1) = t(a_1) = v_1$ and $s(a_2)=v_1$ and $t(a_2)=v_2$ and $s(a_3) = t(a_3) = v_2$. Define $$E_{v_1} = E_1\times E_2 \ \mbox{and} \ E_{v_2} = E_1$$ with $E_1$ and $E_2$ vector spaces, and \begin{align} \nonumber &R_{a_1}(x,y) = (x,y)\, , \\ \nonumber &R_{a_2}(x,y) = x\, , \\ \nonumber &R_{a_3}(X) = X\, .\ \end{align} Then the pair of maps \begin{align} \nonumber& F_{v_1} = (F, G): E_{v_1} = E_1\times E_2 \to E_{v_1} = E_1\times E_2 , \\ \nonumber &F_{v_2} = H: E_{v_2} = E_1 \to E_{v_2} = E_1 \end{align} is {\bf Q}-equivariant if and only if $R_{a_2} \circ F_{v_1} = F_{v_2}\circ R_{a_2}$, that is, if \begin{align} \nonumber H(x) = & F_{v_2} (x) = F_{v_2} (R_{a_2}(x, y)) \\ \nonumber = & R_{a_2} (F_{v_1}(x,y)) = R_{a_2} (F(x,y), G(x,y)) = F(x,y)\, . \end{align} So {\bf Q}-equivariance just means that $F_{v_1}$ is of feedforward form. \end{ex} \begin{ex} Let $E$ be a representation of a finite group $G$. This means that $E$ is a vector space and that for every $g\in G$ there is a (necessarily invertible) linear map $R_g: E\to E$, such that $R_e={\rm Id}_E$ and $R_{g_1} \circ R_{g_2} = R_{g_1g_2}$. Such a group representation can be thought of as a representation of a quiver with one vertex, say $V=\{v\}$, and exactly one arrow $a=a(g)$ (to and from $v$) for each $g\in G$. This is done by defining $E_v:=E$ and $R_{a(g)} :=R_g$. The quiver equivariant maps are then simply the maps $F:E\to E$ with $$F \circ R_{g} = R_g \circ F \ \mbox{for all} \ g\in G\ .$$ So the quiver equivariant maps coincide with the usual $G$-equivariant maps. \end{ex} \begin{ex} As a straightforward generalisation of the previous example, one may study linear maps that are not invertible. Consider for example the map $$R: x\mapsto 0\ \mbox{from}\ \mathbb{R}\ \mbox{to}\ \mathbb{R}\, .$$ This map defines a representation of a quiver with just one vertex $v\in V$ and one arrow $a\in A$, where $E_v:=\mathbb{R}$ and $R_a:=R$. Note that an ODE $\frac{dx}{dt} = F(x)$ satisfies $F\circ R= R \circ F$ if and only if $F(0)=0$. So having this quiver symmetry is equivalent to having a steady state at the origin. Interestingly, this is the setting in which the {\it transcritical bifurcation} $$\frac{dx}{dt} = \lambda x \pm x^2$$ is the typical one-parameter bifurcation. Curiously, this shows that the transcritical bifurcation is a generic quiver equivariant bifurcation. \end{ex} \begin{ex}\label{monoidex} In \cite{fibr} it turned out to be natural to study dynamical systems that are equivariant under the action of a finite monoid $\Sigma$. 
A monoid is a set $\Sigma$ with an associative multiplication $(\sigma_1, \sigma_2)\mapsto \sigma_1 \sigma_2$ and a multiplicative unit $\sigma_0$. A representation of $\Sigma$ consists of (not necessarily invertible) linear maps $R_{\sigma}: E\to E$ on a vector space $E$, so that $R_{\sigma_0}={\rm Id}_{E}$ and $R_{\sigma_1} \circ R_{\sigma_2} = R_{\sigma_1\sigma_2}$. This setup arises for example when studying the network in Figure \ref{pictfundamental}. The figure displays a network with five nodes and a map $F = F(x_1, x_2, x_3, x_4, x_5)$ that is ``compatible'' with the structure of this network. \begin{figure} \caption{\footnotesize {\rm A network map with a monoid of $5$ symmetries.}}\label{pictfundamental} \end{figure} It turns out that an $F$ of this form always commutes with the maps \begin{align}\nonumber & R_{\sigma_0}(x_1, x_2, x_3, x_4, x_5) = (x_1, x_2, x_3, x_4, x_5) \, ,\\ \nonumber & R_{\sigma_1}(x_1, x_2, x_3, x_4, x_5) = (x_2, x_4, x_3, x_4, x_5)\, , \\ \nonumber & R_{\sigma_2}(x_1, x_2, x_3, x_4, x_5) = (x_3, x_5, x_3, x_4, x_5) \, , \\ \nonumber & R_{\sigma_3}(x_1, x_2, x_3, x_4, x_5) = (x_4, x_4, x_3, x_4, x_5)\, , \\ \nonumber & R_{\sigma_4}(x_1, x_2, x_3, x_4, x_5) = (x_5, x_4, x_3, x_4, x_5)\, . \end{align} These maps together form a representation of a monoid $\Sigma$ with $5$ elements. In \cite{fibr} this representation was used to classify the bifurcations that occur in the dynamics of the ODE $\frac{dx}{dt} = F(x)$. We will not discuss these results in any detail here. Just like for groups, one may think of a representation of a monoid as a special case of a representation of a quiver. \end{ex} \begin{remk} The notion of {\it interior network symmetry} was defined in \cite{pivato2}. We will not discuss interior symmetry in any detail here, but we would like to point out that interior symmetry is equivalent to a special type of quiver symmetry, for a quiver with two vertices and a possibly quite large number of arrows. This fact was proved in Section 9 of \cite{fibr}. \end{remk} \noindent In the coming sections we provide more examples of dynamical systems with quiver symmetry. We start by generalising Example \ref{ffexamplequiver} to include more general network systems. \section{The quiver of subnetworks}\label{sec4} In this section and the next we consider dynamical systems with the structure of an interacting network. We apologise for the somewhat heavy notation in this section, which we found impossible to avoid. We start by letting ${\bf N} = \{ E \rightrightarrows^s_t N\}$ be a directed graph consisting of a finite number of nodes $n \in N$ and directed edges $e \in E$ (the letter {\bf N} stands for {\it network}). This ${\bf N}$ should not be thought of as a quiver (we shall use ${\bf N}$ to define a quiver ${\bf SubQ}({\bf N})$ later) but as the network structure of an iterated map or ODE. More precisely, we assume that for each $n \in N$ we are given a vector space $E_n$ (the so-called ``internal phase space'' of this node) and a map $$F_{n}: \bigoplus_{e \in E \, :\, t(e) = n} E_{s(e)} \to E_{n}\, .$$ So $F_{n}$ depends only on those $x_{m}$ for which there is an edge $e$ from ${m}$ to ${n}$. Together the $F_n$ define a {\it network map} $F^{\bf N}: \bigoplus_{{ m} \in { N}} E_{ m} \to \bigoplus_{{ m} \in { N}} E_{ m}$ given by $$F^{\bf N}_n\left( \, \bigoplus_{{ m} \in N} x_m\, \right) = F_{ n}\left( \bigoplus_{{ e} \in { E}\, : \, t(e) = n } x_{ s(e)} \right) \, .$$ One could say that this $F^{\bf N}$ is ``compatible'' with the network ${\bf N}$. 
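For readers who wish to experiment, the following minimal Python sketch illustrates this construction. It is our own illustration and not part of the formal development: the function names are invented, and the choice of Python (and of scalar internal phase spaces) is purely for concreteness. The sketch assembles a network map from component functions that only read the states of the in-neighbours of their node.

\begin{verbatim}
# Minimal sketch (our illustration, not part of the paper): assembling a network
# map F_N from component functions that only see the in-neighbours of their node.

def make_network_map(num_nodes, edges, components):
    # edges: list of (source, target) pairs; components[n] is a function of the
    # in-neighbour states of node n, taken in the order the incoming edges appear.
    in_neighbours = [[s for (s, t) in edges if t == n] for n in range(num_nodes)]
    def F(x):  # x is a list of node states
        return [components[n](*(x[m] for m in in_neighbours[n]))
                for n in range(num_nodes)]
    return F

# The two-node feedforward network: edges node 0 -> node 0, node 0 -> node 1,
# node 1 -> node 1 (invented example functions).
def f0(x0):
    return -x0

def f1(x0, x1):
    return x0 - x1 ** 3

F_net = make_network_map(2, [(0, 0), (0, 1), (1, 1)], [f0, f1])
print(F_net([1.0, 2.0]))  # [-1.0, -7.0]: component 0 sees only x0, component 1 sees (x0, x1)
\end{verbatim}

\noindent Iterating \texttt{F\_net}, or feeding it to any off-the-shelf ODE solver as a vector field, then gives a network dynamical system in the sense described next.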
We may use $F^{\bf N}$ to define a ``network dynamical system'' on the ``total phase space'' $\bigoplus_{{ m} \in { N}} E_{ m}$, for example the iteration $x^{(n+1)} = F^{\bf N}(x^{(n)})$ or the flow of the ODE $\frac{dx}{dt} = F^{\bf N}(x)$. \begin{ex}\label{exffsimple} The network {\bf N} in Figure \ref{pict2} consists of $2$ nodes (labeled $1$ and $2$) and $3$ arrows. The network maps compatible with this network are the maps of the form $$F^{\bf N}(x_1, x_2) = (F_1(x_1), F_2(x_1, x_2))\, .$$ These are precisely the feedforward maps of Example \ref{ffexamplequiver}. \begin{figure} \caption{\footnotesize {\rm A feedforward network with two nodes.}}\label{pict2} \end{figure} \end{ex} \begin{ex}\label{interestingffex} Let ${\bf N}$ be the network consisting of $5$ nodes (labeled $1, \ldots, 5$) and $12$ arrows as defined in Figure \ref{pict1}. \begin{figure} \caption{\footnotesize {\rm A feedforward type network with five nodes.}}\label{pict1} \end{figure} \\ \noindent Then any network map $F^{\bf N}$ takes the form \begin{align}\label{networkform} F^{\bf N}\left( \begin{array}{l} x_1\\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{array} \right) = \left( \begin{array}{l} F_1(x_1) \\ F_2(x_1, x_2) \\ F_3(x_1, x_2, x_3) \\ F_4(x_1, x_3, x_4) \\ F_5(x_3, x_4, x_5) \\ \end{array} \right)\, . \end{align} Thus $F^{\bf N}$ has a rather particular feedforward structure. Note also that when $F^{\bf N}$ and $G^{\bf N}$ are two such network maps, then their composition will have the form \begin{align}\nonumber (F^{\bf N}\circ G^{\bf N}) \! \left( \!\!\! \begin{array}{l} x_1\\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{array} \!\!\! \right) \! = \! \left( \!\!\! \begin{array}{l} F_1(G_1(x_1)) \\ F_2(G_1(x_1), G_2(x_1, x_2)) \\ F_3(G_1(x_1), G_2(x_1,x_2), G_3(x_1, x_2, x_3)) \\ F_4(G_1(x_1), G_3(x_1, x_2, x_3), G_4(x_1, x_3, x_4)) \\ F_5(G_3(x_1, x_2, x_3), G_4(x_1, x_3, x_4), G_5(x_3, x_4, x_5) ) \\ \end{array} \!\!\! \right) . \end{align} This shows that $(F^{\bf N}\circ G^{\bf N})_4(x)$ depends explicitly on $x_2$, while $F_4(x)$ and $G_4(x)$ do not. Similarly, $(F^{\bf N}\circ G^{\bf N})_5(x)$ depends explicitly on $x_1, x_2$, while $F_5(x)$ and $G_5(x)$ do not. So we see that the network structure of $F^{\bf N}$ and $G^{\bf N}$ is destroyed when we compose them. On the other hand, we also observe that a large part of the network structure of $F^{\bf N}$ and $G^{\bf N}$ remains intact in $F^{\bf N}\circ G^{\bf N}$. \end{ex} \noindent In the remainder of this section we will show that network maps admit a specific quiver symmetry. This will clarify which characteristics of the network structure will survive if we, for example, compose network maps. We start with the definition of a subnetwork. \begin{defi} Let ${\bf N} = \{ E \rightrightarrows^s_t N\}$ be a network and let $N'\subseteq N$. Assume that for every $e\in E$ with $t(e)\in N'$ it holds that $s(e)\in N'$. Define $$E' := \{e\in E \, : \, s(e), t(e) \in N'\}\, .$$ Then ${\bf N}' = \{E' \rightrightarrows^s_t N' \}$ is called a {\it subnetwork} of ${\bf N}$. We shall write ${\bf N'} \sqsubseteq {\bf N}$. \end{defi} \begin{remk} \noindent The relation $\sqsubseteq $ defines a partial order on the set of subnetworks of ${\bf N}$. Indeed, ${\bf N}' \sqsubseteq {\bf N}'$ for all ${\bf N}' \sqsubseteq {\bf N}$ (reflexivity), ${\bf N}' \sqsubseteq {\bf N}''$ and ${\bf N}'' \sqsubseteq {\bf N}'$ imply ${\bf N}' = {\bf N}''$ (antisymmetry) and ${\bf N}''' \sqsubseteq {\bf N}''$ and ${\bf N}'' \sqsubseteq {\bf N}'$ together imply that ${\bf N}''' \sqsubseteq {\bf N}'$ (transitivity). 
\end{remk} \noindent We shall use the subnetworks of {\bf N} to define a quiver as follows. \begin{defi} Let ${\bf N}$ be a network. The quiver ${\bf SubQ}({\bf N}) = \{A \rightrightarrows^s_t V\}$ of subnetworks of ${\bf N}$ has as its vertices the nonempty subnetworks of ${\bf N}$, i.e., $$V=\{ {\bf N}' \, | \, \emptyset \neq {\bf N}' \sqsubseteq {\bf N}\, \} \, .$$ There is exactly one arrow $a\in A$ with $s(a)={\bf N}'$ and $t(a)={\bf N}''$ if ${\bf N}'' \sqsubseteq {\bf N}'$. \end{defi} \noindent A representation of ${\bf SubQ}({\bf N})$ can be constructed in the following straightforward manner. Recall that for every $n\in {N}$ there is a vector space $E_n$. We now set $$E_{{\bf N}'} := \bigoplus_{m \in N'} E_{m} $$ and we define, for the arrow $a \in A$ from ${\bf N}'$ to ${ \bf N}''$ (so assuming that $N'' \subseteq N'$), \begin{align}\label{Radef} R_{a}: E_{{\bf N}'} \to E_{{\bf N}''} \ \mbox{by} \ R_a\left(\bigoplus_{m \in N'} x_m \right) : = \bigoplus_{m\in N''} x_m \, . \end{align} So $R_a$ ``forgets'' the states $x_m$ with $m\in N' \backslash N''$. Before we continue to explain why these definitions are useful, let us briefly return to our two examples. \begin{ex} Let ${\bf N}$ be the network of Example \ref{exffsimple} and Figure \ref{pict2}. It has two nonempty subnetworks, which we call ${\bf N}_1$ and ${\bf N}_2$. Figure \ref{pict3} depicts the quiver ${\bf SubQ}({\bf N})$. The arrows in the quiver that express the subnetwork relations ${\bf N}_1 \sqsubseteq {\bf N}_1, {\bf N}_1 \sqsubseteq {\bf N}_2$ and ${\bf N}_2 \sqsubseteq {\bf N}_2$ are drawn as snaking arrows. \begin{figure} \caption{\footnotesize {\rm Subnetwork quiver for the feedforward network in Figure \ref{pict2}.}}\label{pict3} \end{figure} It should be clear that the linear maps defining the representation are given by $R_{a_1}(x_1, x_2) = (x_1, x_2)$, $R_{a_2}(x_1, x_2) = x_1$ and $R_{a_3}(x_1) = x_1$. \end{ex} \begin{ex}\label{ex2quiver} Let {\bf N} be the network of Example \ref{interestingffex} as depicted in Figure \ref{pict1}. It has five nonempty subnetworks, which we depict in Figure \ref{pict4}. The figure also depicts some (but not all) of the arrows in the subnetwork quiver. \begin{figure} \caption{\footnotesize {\rm The subnetwork quiver for the network in Figure \ref{pict1}.}}\label{pict4} \end{figure} The maps $R_{a_i}$ are given by \begin{align}\nonumber \begin{array}{ll} R_{a_1}(x_1, x_2, x_3, x_4, x_5) & = (x_1, x_2, x_3, x_4),\\ R_{a_2}(x_1, x_2, x_3, x_4, x_5) & = (x_1, x_2, x_3),\\ R_{a_3}(x_1, x_2, x_3, x_4, x_5) & = (x_1, x_2),\\ R_{a_4}(x_1, x_2, x_3, x_4) & = (x_1, x_2, x_3),\\ R_{a_5}(x_1, x_2, x_3) & = (x_1, x_2),\\ R_{a_6}(x_1, x_2, x_3, x_4) & = x_1,\\ R_{a_7}(x_1, x_2, x_3) & = x_1,\\ R_{a_8}(x_1, x_2) & = x_1.\\ \end{array} \end{align} \end{ex} \noindent The following result reveals the dynamical meaning of the quiver ${\bf SubQ}({\bf N})$. 
\begin{lem}\lambdabel{obvious} Let ${\bf N} = \{E\rightrightarrows^s_t N\}$ be a network and $F^{\bf N}: E_{\bf N} \to E_{\bf N}$ a network map, i.e., it is of the form $$F^{\bf N}_n\left( \, \bigoplus_{{ m} \in N} x_m\, \right) = F_{ n}\left( \bigoplus_{{ e} \in { E}\, : \, t(e) = n } x_{ s(e)} \right) \ \mbox{for all} \ n\in N \, .$$ For any subnetwork ${\bf N}' \sqsubseteq {\bf N}$ define $F^{{\bf N}'} : E_{{\bf N}'} \to E_{{\bf N}'}$ by $$F^{{\bf N}'}_n \left( \bigoplus_{m\in {\bf N}'} x_m \right) := F_n\left( \bigoplus_{e \in E \, : \, t(e) = n} x_{s(e)} \right)\ \mbox{for all}\ n\in N'\, .$$ Then these $F^{{\bf N}'}$ together define a ${\bf SubQ}({\bf N})$-equivariant map. \end{lem} \begin{proof} First of all, note that the maps $F^{{\bf N}'}$ are well-defined because we assumed that ${\bf N}' \sqsubseteq{\bf N}$, so that $s(e)\in N'$ whenever $t(e)\in N'$. To prove ${\bf SubQ}({\bf N})$-equivariance, assume that $a\in A$ is the arrow from ${\bf N'}$ to ${\bf N''}$. It then holds that ${\bf N}'' \sqsubseteq {\bf N}' \sqsubseteq {\bf N}$, so $$F^{{\bf N}'}_n\! \left( \bigoplus_{m\in {\bf N}'} x_m \right) \!\! = F_n\! \left( \bigoplus_{e \in E \, : \, t(e) = n} x_{s(e)} \right) \! = F^{{\bf N}''}_n \! \left( \bigoplus_{m\in {\bf N}''} x_m \right)\, \mbox{for all}\ n\in N''. $$ But this implies that \begin{align}\nonumber & R_a \left(F^{{\bf N}'} \left( \bigoplus_{m\in {\bf N}'} x_m \right) \right) = \bigoplus_{n\in N''} F^{{\bf N}'}_n \left( \bigoplus_{m\in {\bf N}'} x_m \right) \\ \nonumber & = \bigoplus_{n\in N''} F^{{\bf N}''}_n \left( \bigoplus_{m\in {\bf N}''} x_m \right) = F^{{\bf N}''} \left( R_a\left( \bigoplus_{m\in {\bf N}'} x_m \right) \right)\, , \end{align} which proves the lemma. \end{proof} \noindent The next result is the converse of Lemma \ref{obvious} and the natural generalisation of Lemma \ref{fflemma}. The proof is similar to that of Lemma \ref{obvious}. \begin{lem}\lambdabel{alsoobvious} A collection of maps $F^{\bf N'}: E_{\bf N'} \to E_{\bf N'}$ (one for each ${\bf N}' \sqsubseteq {\bf N})$ is ${\bf SubQ}({\bf N})$-equivariant if and only if for all ${\bf N}'' \sqsubseteq {\bf N}' \sqsubseteq {\bf N}$ it holds that \begin{align}\lambdabel{equivariancesubnetworks} F^{{\bf N}'}_n \left( \bigoplus_{m\in N'} x_m \right) = F^{{\bf N}''}_n \left(\bigoplus_{m\in N''} x_m \right) \ \mbox{for all}\ n\in N'' \, . \end{align} (In other words: if the $n$-th components of all the maps are equal and depend only on the variables $x_m$ with $m$ in the smallest subnetwork of {\bf N} containing $n$.) \end{lem} \begin{proof} Let ${\bf N}'' \sqsubseteq {\bf N}'$ and let $a \in A$ be the arrow with $s(a)={\bf N}'$ and $t(a)={\bf N}''$. By definition of $R_a$, we have on the one hand that $$ R_a \left(F^{{\bf N}'} \left( \bigoplus_{m\in {\bf N}'} x_m \right) \right) = \bigoplus_{n\in N''} F^{{\bf N}'}_n \left( \bigoplus_{m\in {\bf N}'} x_m \right)\, . $$ On the other hand, $$ F^{{\bf N}''} \left( R_a\left( \bigoplus_{m\in {\bf N}'} x_m \right) \right) = \bigoplus_{n\in N''} F^{{\bf N}''}_n \left( \bigoplus_{m\in {\bf N}''} x_m \right) \, . $$ So $R_a \circ F^{{\bf N}'} = F^{{\bf N}''} \circ R_a$ if and only if (\ref{equivariancesubnetworks}) holds. \end{proof} \begin{ex} Let us investigate what Lemma \ref{alsoobvious} says for the network {\bf N} in Examples \ref{interestingffex} and \ref{ex2quiver}. So assume that $F^{\bf N_1}, \ldots, F^{\bf N_5}$ form an equivariant map for the quiver depicted in Figure \ref{pict4}. 
Observe that ${\bf N}_5={\bf N}$ and that the smallest subnetwork of {\bf N} that contains node $1$ is ${\bf N}_1$. Substituting $n=1$, ${\bf N}'= {\bf N}$ and ${\bf N}''= {\bf N}_1$ in (\ref{equivariancesubnetworks}) yields \begin{align}\nonumber & F^{{\bf N}}_1 (x_1, x_2, x_3, x_4, x_5) = F^{{\bf N}_1}_1 ( x_1) \, . \end{align} This shows that $F^{\bf N}_1(x)$ depends only on $x_1$. Continuing in this way for the other nodes, choosing each time for ${\bf N}''$ the smallest subnetwork containing the node in question, we find \begin{align}\nonumber & F^{{\bf N}}_2 (x_1, x_2, x_3, x_4, x_5)= F^{{\bf N}_2}_2 ( x_1, x_2) \, , \\ \nonumber & F^{{\bf N}}_3 (x_1, x_2, x_3, x_4, x_5)= F^{{\bf N}_3}_3 ( x_1, x_2, x_3) \, ,\\ \nonumber & F^{{\bf N}}_4 (x_1, x_2, x_3, x_4, x_5)= F^{{\bf N}_4}_4 ( x_1, x_2, x_3, x_4) \, , \\ \nonumber & F^{{\bf N}}_5 (x_1, x_2, x_3, x_4, x_5)= F^{{\bf N}_5}_5 ( x_1, x_2, x_3, x_4, x_5) \, . \end{align} We conclude that ${\bf SubQ}({\bf N})$-equivariance is equivalent to $F^{\bf N}$ being of the form \begin{align}\label{networkform2} F^{\bf N} \left( \begin{array}{l} x_1\\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{array} \right) = \left( \begin{array}{l} F_1^{{\bf N}_1} (x_1) \\ F_2^{{\bf N}_2} (x_1, x_2) \\ F_3^{{\bf N}_3} (x_1, x_2, x_3) \\ F_4^{{\bf N}_4} (x_1, x_2, x_3, x_4) \\ F_5^{{\bf N}_5} (x_1, x_2, x_3, x_4, x_5) \\ \end{array} \right)\, \end{align} for some functions $F_i^{{\bf N}_i}$ depending on an appropriate number of variables. Note that (\ref{networkform2}) is different from (\ref{networkform}). In fact, any map of the form (\ref{networkform}) is also of the form (\ref{networkform2}) but not vice versa. Hence (\ref{networkform2}) defines a more general class of maps than (\ref{networkform}). Nevertheless, by construction (\ref{networkform}) and (\ref{networkform2}) have exactly the same subnetworks, so a lot of the network structure of (\ref{networkform}) is also present in (\ref{networkform2}). More importantly, the network structure of (\ref{networkform2}) remains intact when we compose network maps (because quiver symmetry remains intact under composition, see Proposition \ref{obviouscomp}). We already saw in Example \ref{interestingffex} that network maps of the form (\ref{networkform}) do not possess this nice property. \end{ex} \section{The quiver of quotient networks}\label{sec5} Quotient networks were introduced by Golubitsky and Stewart et al. \cite{curious}, \cite{golstew}, \cite{torok}, \cite{stewartnature}, \cite{pivato} to compute robust synchrony patterns in network dynamical systems. It was shown for the first time in \cite{torok} that every solution of any quotient network lifts to a solution of the original network, i.e., that there is a linear map between the phase spaces that sends solutions of the quotient system to solutions of the original system. More recently, DeVille and Lerman \cite{deville} generalised this result, and reformulated it using the language of category theory and graph fibrations. The goal of this section is to translate all these observations into the language of quiver representations. The first half of this section has been added for completeness. We do not aim to provide a comprehensive exposition on quotient networks. Instead, we shall give the basic definitions that allow us to define the quiver of quotient networks. The informed reader may want to skip the first half of this section and start reading from Theorem \ref{devilletheorem1}. 
We start this section by generalising the notion of a network that was introduced in the previous section. \begin{defi}\lambdabel{defnetwork} A {\it coloured network} is a network ${\rm \bf N} = \{E \rightrightarrows^s_t N\}$ in which all nodes and edges are assigned a colour, in such a way that \begin{itemize} \item[{\bf 1.}] if two edges $e_1, e_2\in E$ have the same colour, then so do their sources $s(e_1)$ and $s(e_2)$, and so do their targets $t(e_1)$ and $t(e_2)$; \item[{\bf 2.}] if two nodes $n_1, n_2\in N$ have the same colour, then there is at least one colour preserving bijection $$\beta_{n_2, n_1}:t^{-1}(n_1)\to t^{-1}(n_2)$$ between the edges that target $n_1$ and $n_2$. \end{itemize} \end{defi} \noindent One should think of the networks of Section \ref{sec4} as coloured networks in which all nodes and edges have a different colour, so that conditions {\bf 1} and {\bf 2} are automatically satisfied. We remark that the node- and arrow-colours in Definition \ref{defnetwork} are the same as the cell- and arrow-types defined in \cite{torok}. The collection of colour preserving bijections $$\mathbb{G}_{\bf N}:=\{\, \beta_{n_2, n_1}: t^{-1}(n_1)\to t^{-1}(n_2)\ \mbox{colour preserving bijection} \, | \, n_1, n_2\in N \, \}$$ is the so-called {\it symmetry groupoid} of Golubitsky, Stewart and Pivato \cite{pivato}. These authors also make the following definition, generalising the network maps that we defined in Section \ref{sec4}. \begin{defi}\lambdabel{defadmissible} Let ${\rm \bf N} = \{E \rightrightarrows^s_t N\}$ be a coloured network and assume that $F^{\bf N}: \bigoplus_{{ m} \in { N}} E_{ m} \to \bigoplus_{{ m} \in { N}} E_{ m}$ is a map of the form $$F^{\bf N}_n\left( \, \bigoplus_{{ m} \in N} x_m\, \right) = F_{ n}\left( \bigoplus_{{ e} \in { E}\, : \, t(e) = n } x_{ s(e)} \right) \, .$$ Assume moreover that $$E_{n_1}=E_{n_2} \ \mbox{whenever}\ n_1, n_2\in N \ \mbox{have the same colour},$$ and that for every $n_1, n_2\in N$ of the same colour and every colour preserving bijection $\beta_{n_2, n_1} \in \mathbb{G}_{\bf N}$ it holds that $$F_{n_1}\left( \bigoplus_{{ e} \in { E}\, : \, t(e) = n_1 } x_{ s(\beta_{n_2, n_1}(e))} \right) = F_{n_2} \left( \bigoplus_{{ e} \in { E}\, : \, t(e) = n_2 } x_{ s(e)} \right)\, .$$ Then we say that $F^{\bf N}$ is an {\it admissible map} for ${\rm\bf N}$. \end{defi} \begin{ex} Figure \ref{pict1e} shows an example of a coloured network with two node colours and three edge colours. The edges from a node to itself representing internal dynamics are not depicted. Note that each yellow node is targeted by two blue edges. Hence there are two colour-preserving bijections between the edges targeting any two yellows nodes. Similarly, each green node is targeted by one red and one orange edge, so there is exactly one colour-preserving bijection between the edges targeting any two green nodes. \begin{figure} \caption{\footnotesize {\rm An example of a network with two node colours and three edge colours. 
Self-loops describing internal dynamics are not shown.}} \end{figure} \noindent An admissible map for this network is of the form \begin{equation} \nonumber F^{{\bf N}}\left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ y_4 \\ y_5 \end{array} \right) = \left( \begin{array}{l} F(x_{1}, \overline{\bl{ x_{2}}, \bl{x_{3}}}) \\ F(x_{2}, \overline{\bl{x_{2}}, \bl{x_{3}}}) \\ F(x_{3}, \overline{\bl{x_{1}}, \bl{x_{2}}})\\ G(y_{4}, \ro{y_{4}}, \ora{x_1})\\ G(y_{5}, \ro{y_{4}}, \ora{x_3})\\ \end{array}\right) \, , \hspace{-.8cm} \end{equation} for some functions $F$ and $G$. The bar indicates that variables may be interchanged, i.e., it expresses that $F(x,\overline{\bl{y}, \bl{z}}) = F(x,\overline{\bl{z}, \bl{y}}) $ for all $x,y,z$. \end{ex} \noindent The next definition is due to DeVille and Lerman \cite{deville}. \begin{defi}\lambdabel{deffibration} Let ${\bf N}= \{E \rightrightarrows^s_t N\}$ and ${\bf N}' = \{E' \rightrightarrows^s_t N'\} $ be coloured networks and let $\phi :{\rm \bf N}\to {\rm \bf N}'$ be a map. Assume that \begin{itemize} \item[{\it i)}] this $\phi$ sends edges to edges and nodes to nodes, it preserves the colours of nodes and edges, and sends the head and tail of every edge $e\in E$ to the head and tail of $\phi(e)\in E'$; \item[{\it ii)}] for every node $n \in N$, the restriction $\phi|_{t^{-1}(n)} :t^{-1}(n) \to t^{-1}(\phi(n))$ is a colour preserving bijection. \end{itemize} Then $\phi$ is called a {\it graph fibration}. \end{defi} \noindent The key result in \cite{deville} is the following theorem. \begin{thr}[DeVille \& Lerman]\lambdabel{devilletheorem1} Let ${\rm \bf N} = \{E \rightrightarrows^s_t N\}$ and ${\rm \bf N}' = \{E' \rightrightarrows^s_t N'\}$ be coloured networks, let $\phi: {\rm \bf N} \to {\rm \bf N}'$ be a graph fibration, and let $F^{\bf N}$ and $F^{{\bf N}'}$ be admissible maps for ${\bf N}$ and ${\bf N}'$ respectively. In particular, they have the form \begin{align} \nonumber F^{\bf N}_n\left( \, \bigoplus_{{ m} \in N} x_m\, \right) & = F_{ n}\left( \bigoplus_{{ e} \in { E}\, : \, t(e) = n } x_{ s(e)} \right) \ \mbox{and} \\ \nonumber F^{{\bf N}'}_{n'}\left( \, \bigoplus_{{ m} \in N'} x_m\, \right) & = F'_{ n'}\left( \bigoplus_{{ e'} \in { E'}\, : \, t(e') = n' } x_{ s(e')} \right)\, . \end{align} Finally, assume that for every $n\in N, n'\in N'$ of the same colour and every colour preserving bijection $\beta_{n', n}: t^{-1}(n) \to t^{-1}(n')$, it holds that $$F_{n} \left( \bigoplus_{{ e} \in { E}\, : \, t(e) = n } x_{ s(\beta_{n', n}(e))} \right) = F'_{n'}\left( \bigoplus_{{ e'} \in { E'}\, : \, t(e') = n' } x_{ s(e')} \right)\, .$$ Then the linear map $$R_{\phi}: \bigoplus_{m'\in {\bf N}'} E_{m'} \to \bigoplus_{m\in {\bf N}} E_{m} \ \mbox{defined by}\ R_{\phi} \left( \bigoplus_{m'\in N'} x_{m'} \right) := \bigoplus_{m\in N} x_{\phi(m)}\, $$ satisfies $$R_{\phi} \circ F^{{\bf N}'} = F^{\bf N} \circ R_{\phi} \, .$$ \end{thr} \noindent The proof of the theorem is simple and consists of combining all the definitions that were made. It can be found in \cite{deville}. It is not hard to see that ${\bf N'}\sqsubseteq {\bf N}$ is a subnetwork if and only if the inclusion $i: {\bf N'}\to {\bf N}$ is an injective graph fibration. The map $R_{i}: \bigoplus_{m\in N} E_m \to \bigoplus_{m\in N'} E_m$ is then given by $$R_{i}\left(\bigoplus_{m\in N} x_m\right) = \bigoplus_{m\in N'} x_{i(m)} =\bigoplus_{m\in N'} x_{m}\, .$$ So we recover the linear maps of Section \ref{sec4}; a small numerical illustration of Theorem \ref{devilletheorem1} is given below.
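\noindent As an informal aside (and not part of the formal development), the intertwining relation of Theorem \ref{devilletheorem1} is easy to verify numerically in small examples. The following Python sketch uses a toy example of our own choosing: a network with two nodes of the same colour that each receive one edge from the other, a graph fibration $\phi$ onto the one-node network with a single self-loop, and an arbitrarily chosen response function. The function names and the dictionary encoding of $\phi$ are ours and purely illustrative.
\begin{verbatim}
import numpy as np

# Toy illustration of Theorem devilletheorem1; the example network is our own.
# Network N: nodes 0 and 1, of the same colour, each receiving an edge from
# the other.  Network N': a single node with a self-loop.  The map phi sending
# both nodes of N to the node of N' is a graph fibration.

def f(x, y):                       # one response function, shared by both
    return -x + np.tanh(y)         # nodes of N (they have the same colour)

def F_N(x):                        # an admissible map for N
    return np.array([f(x[0], x[1]), f(x[1], x[0])])

def F_Nprime(x):                   # the corresponding admissible map for N'
    return np.array([f(x[0], x[0])])

phi = {0: 0, 1: 0}                 # the node map of the fibration N -> N'

def R_phi(xq):                     # the pullback R_phi : E_{N'} -> E_N
    return np.array([xq[phi[m]] for m in range(2)])

xq = np.random.randn(1)
assert np.allclose(R_phi(F_Nprime(xq)), F_N(R_phi(xq)))
\end{verbatim}
\noindent The assertion checks $R_{\phi}\circ F^{{\bf N}'} = F^{\bf N}\circ R_{\phi}$ at a random point; the same pattern applies verbatim to the quotients discussed below.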
In this section we shall be interested in surjective graph fibrations instead. \begin{defi} When $\phi:{\rm \bf N}\to {\rm \bf N}'$ is a surjective graph fibration, then we call ${\rm \bf N}'$ a {\it quotient} of ${\rm \bf N}$. \end{defi} \noindent We are now ready to define the quiver of quotient networks. \begin{defi} Let ${\bf N}$ be a coloured network. The quiver ${\bf QuoQ}({\bf N}) = \{A \rightrightarrows^s_t V\}$ of quotient networks of ${\bf N}$ has as its vertices the nonempty quotients of ${\bf N}$, i.e., $$V=\{ {\bf N}' \neq \emptyset \, | \, {\bf N}' \ \mbox{is a quotient of} \ {\bf N} \} \, .$$ There is exactly one arrow $a \in A$ with $s(a)={\bf N}'$ and $t(a)={\bf N}''$ for each distinct surjective graph fibration $\phi$ from ${\bf N}''$ to ${\bf N}'$. \end{defi} \noindent A representation of ${\bf QuoQ}({\bf N})$ is defined in a straightforward manner. To each quotient ${\bf N}'$ of ${\bf N}$ (i.e., each vertex ${\bf N}'$ of the quiver ${\bf QuoQ}({\bf N})$), we assign the vector space $$E_{{\bf N}'} := \bigoplus_{m \in N'} E_{m}\, , $$ and for each arrow $a \in A$ from ${\bf N}'$ to ${ \bf N}''$ (corresponding to the graph fibration $\phi: {\bf N}'' \to {\bf N}'$), we define $R_a = R_{\phi}$, where $R_{\phi}$ is the linear map defined in Theorem \ref{devilletheorem1}. In other words, $R_a: E_{{\bf N}'} \to E_{{\bf N}''}$ is defined by the formula \begin{align} R_{a} \left( \bigoplus_{m\in N'} x_m \right) := \bigoplus_{m\in N''} x_{\phi(m)}\, . \lambdabel{repmap} \end{align} Theorem \ref{devilletheorem1} then trivially translates into the following result. \begin{cor} Let ${\rm \bf N} = \{E \rightrightarrows^s_t N\}$ be a coloured network and let $F^{\bf N} : E_{{\bf N}} \to E_{{\bf N}} $ be an admissible map, so that in particular it is of the form $$F^{\bf N}_n\left( \, \bigoplus_{{ m} \in N} x_m\, \right) = F_{ n}\left( \bigoplus_{{ e} \in { E}\, : \, t(e) = n } x_{ s(e)} \right)\, .$$ For each surjective graph fibration $\phi: {\bf N} \to {\bf N}'$, define $F^{\bf N'}: E_{{\bf N}'} \to E_{{\bf N}'} $ by $$F^{{\bf N}'}_{\phi(n)}\left( \, \bigoplus_{{ m} \in N'} x_m\, \right) := F_{ n}\left( \bigoplus_{{ e} \in { E}\, : \, t(e) = n } x_{ s(\phi(e))} \right)\, .$$ Then each $F^{{\bf N}'}$ is well-defined and admissible for ${\bf N}'$. Together the $F^{{\bf N}'}$ (${\bf N}'$ a quotient of ${\bf N}$) form a ${\bf QuoQ}({\bf N})$-equivariant map. \end{cor} \begin{ex} The network {\bf N} in Figure \ref{pict1e} has six nonempty quotients (including ${\bf N}={\bf N}_1$ itself). Figure \ref{pict2e} shows the quiver of quotient networks. \begin{figure} \caption{\footnotesize {\rm The quiver of quotient networks for the network in Figure \ref{pict1e}.}} \end{figure} To illustrate, note that there is a graph fibration $\phi_2: {\bf N}_1 \to {\bf N}_2$ which sends nodes $1$ and $2$ to node $1$, node $3$ to node $2$, node $4$ to node $3$ and node $5$ to node $4$.
The corresponding linear map in the representation given by formula (\ref{repmap}) is $$R_{a_2}(x_1, x_2, y_3, y_4) = (x_1,x_1,x_2, y_3, y_4)\, .$$ The admissible maps for ${\bf N}_1$ and ${\bf N}_2$ are given by \begin{equation} \nonumber F^{{\bf N}_1}\left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ y_4 \\ y_5 \end{array} \right) = \left( \begin{array}{l} F(x_{1}, \overline{\bl{ x_{2}}, \bl{x_{3}}}) \\%, x_2, x_3) \\ F(x_{2}, \overline{\bl{x_{2}}, \bl{x_{3}}}) \\%, x_2, x_3) \\ F(x_{3}, \overline{\bl{x_{1}}, \bl{x_{2}}})\\ G(y_{4}, \ro{y_{4}}, \ora{x_1})\\ G(y_{5}, \ro{y_{4}}, \ora{x_3})\\ \end{array}\right) \, \mbox{and}\ F^{{\bf N}_2}\left( \begin{array}{c} x_1 \\ x_2 \\ y_3 \\ y_4 \end{array} \right) = \left( \begin{array}{l} F(x_{1}, \overline{\bl{ x_{1}}, \bl{x_{2}}}) \\%, x_2, x_3) \\ F(x_{2}, \overline{\bl{x_{1}}, \bl{x_{1}}}) \\%, x_2, x_3) \\ G(y_{3}, \ro{y_{3}}, \ora{x_1})\\ G(y_{4}, \ro{y_{3}}, \ora{x_2})\\ \end{array}\right) \, . \hspace{-.8cm} \end{equation} One verifies that indeed $R_{a_2} \circ F^{{\bf N}_2} = F^{{\bf N}_1} \circ R_{a_2}$. The full list of representation maps for the arrows in Figure \ref{pict2e} is given by \begin{align}\nonumber \begin{array}{ll} R_{a_1}(x_1, y_2) &= (x_1,x_1,x_1, y_2, y_2)\, , \\ R_{a_2}(x_1, x_2, y_3, y_4) &= (x_1,x_1,x_2, y_3, y_4)\, , \\ R_{a_3}(x_1, y_2) &= (x_1,x_1, y_2, y_2)\, , \\ R_{a_4}(x_1, y_2, y_3) &= (x_1,x_1, y_2, y_3)\, , \\ R_{a_5}(x_1, y_2, y_3) &= (x_1,x_1, x_1, y_2, y_3)\, , \\ R_{a_6}(x_1, y_2) &= (x_1, y_2, y_2)\, , \\ R_{a_7}(x_1, x_2, y_3, y_4) &= (x_1,x_2, x_1, y_3, y_4)\, , \\ R_{a_8}(x_1, y_2, y_3) &= (x_1, x_1, y_2, y_3)\, , \\ R_{a_9}(x_1, y_2) &= (x_1, x_1, y_2, y_2) \, , \\ R_{a_{10}}(x_1, x_2, y_3) &= (x_1, x_2, y_3, y_3)\, , \\ R_{a_{11}}(x_1, x_2, y_3) &= (x_1, x_2, x_1, y_3, y_3)\, , \\ R_{a_{12}}(x_1, y_2) &= (x_1, x_1, y_2)\, . \end{array} \end{align} \end{ex} \section{Endomorphisms of quiver representations}\lambdabel{sec6} In this section, we gather some basic properties of endomorphisms of quiver representations that will be important in the remainder of this paper. An endomorphism is simply a linear equivariant map: \begin{defi} An {\it endomorphism} of a quiver representation ({\bf E}, {\bf R}) of a quiver ${\bf Q}= \{A\rightrightarrows^s_t V\}$ is a set {\bf L} of {\it linear} maps $L_v: E_v\to E_v$ (one for each $v\in V$) such that $$ L_{t(a)} \circ R_a = R_a \circ L_{s(a)} \ \ \mbox{for every arrow}\ a\in A\, .$$ The collection of all endomorphisms is denoted by ${\rm End}({\bf E}, {\bf R})$. \end{defi} \begin{ex} For any representation ({\bf E}, {\bf R}) of any quiver ${\bf Q}= \{A\rightrightarrows^s_t V\}$, the identity {\bf Id}, consisting of the maps ${\rm Id}_v: E_v \to E_v$ ($v\in V$), is an example of an endomorphism. This is simply because $ {\rm Id}_{t(a)} \circ R_a = R_a \circ {\rm Id}_{s(a)}$. \end{ex} \begin{ex} If ${\bf F} \in C^{\infty}({\bf E}, {\bf R})$ is a smooth equivariant map of a representation $({\bf E}, {\bf R})$ and ${\bf F}(0)=0$ (meaning that $F_v(0)=0$ for every $v\in V$) then the derivative ${\bf L} = D{\bf F}(0)$ (consisting of the maps $L_v:=DF_v(0): E_v\to E_v$) is an example of an endomorphism. This follows from differentiating the identities $ F_{t(a)} \circ R_a = R_a \circ F_{s(a)}$ at $0$ and noting that $R_a(0)=0$. 
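As an informal sanity check (our own illustration, with an arbitrarily chosen equivariant pair of maps on $\mathbb{R}^2$ and $\mathbb{R}^3$ and a single arrow), this intertwining of derivatives can also be observed numerically, approximating $DF_v(0)$ by finite differences:
\begin{verbatim}
import numpy as np

# Informal numerical check: the derivative at 0 of an equivariant map is an
# endomorphism.  The pair (F_s, F_t) below is a toy example of our own; it is
# equivariant for R_a(x1, x2) = (x1, x1, x2) by construction.

f = lambda a, b: np.sin(a) + 0.5 * b + a * b      # f(0, 0) = 0
g = lambda a, b: np.tanh(2.0 * a - b + b ** 2)    # g(0, 0) = 0

def F_s(x):                                       # map at the source vertex
    return np.array([f(x[0], x[1]), g(x[1], x[0])])

def F_t(y):                                       # map at the target vertex
    return np.array([f(y[0], y[2]), f(y[1], y[2]), g(y[2], y[1])])

R_a = np.array([[1., 0.], [1., 0.], [0., 1.]])

def jacobian_at_zero(F, dim, h=1e-6):             # forward differences at 0
    zero = np.zeros(dim)
    return np.column_stack([(F(h * e) - F(zero)) / h for e in np.eye(dim)])

L_s, L_t = jacobian_at_zero(F_s, 2), jacobian_at_zero(F_t, 3)
assert np.allclose(L_t @ R_a, R_a @ L_s, atol=1e-4)
\end{verbatim}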
\end{ex} \begin{defi} A {\it subrepresentation} ${\bf D}$ of a representation ({\bf E}, {\bf R}) of a quiver ${\bf Q}= \{A\rightrightarrows^s_t V\}$ is a set of linear subspaces $D_{v} \subset E_{v}$ ($v\in V$) such that $$R_a(D_{s(a)}) \subset D_{t(a)} \ \ \mbox{for every arrow}\ a\in A\, .$$ \end{defi} \noindent In other words, {\bf D} is a subrepresentation of ({\bf E}, {\bf R}) if the restriction $({\bf D}, {\bf R}|_{\bf D})$ defines a representation. Examples of subrepresentations are the eigenspaces of endomorphisms. In this paper, we will use generalised eigenspaces more often than eigenspaces, so we formulate the following as a separate proposition. \begin{prop}\lambdabel{eigenspacesprop} Let ${\bf L}$ be an endomorphism of a representation $({\bf E}, {\bf R})$ of a quiver ${\bf Q}= \{A\rightrightarrows^s_t V\}$. We say that $\lambdambda\in \mathbb{C}$ is an eigenvalue of ${\bf L}$ if there is at least one $v\in V$ such that $\lambdambda$ is an eigenvalue of $L_v$. We then call $\lambdambda$ an eigenvalue of all the $L_{w}$ ($w\in V$) even if the corresponding eigenspace of $L_w$ is trivial. \begin{itemize} \item[{\it i)}] For $\lambdambda \in \mathbb{R}$, denote by $E^{\lambdambda}_v\subset E_v$ the generalised eigenspace of $L_v:E_v \to E_v$ for the eigenvalue $\lambdambda$. Then the $E_v^{\lambdambda}$ define a subrepresentation ${\bf E}^{\lambdambda}$ of $({\bf E}, {\bf R})$. \item[{\it ii)}] For $\mu \in \mathbb{C}\backslash \mathbb{R}$, denote by $E^{\mu, \bar \mu}_v \subset E_v$ the real generalised eigenspace of $L_v: E_v \to E_v$ for the eigenvalue pair $\mu, \bar \mu$. Then the $E_v^{\mu, \bar \mu}$ define a subrepresentation ${\bf E}^{\mu, \bar \mu}$ of $({\bf E}, {\bf R})$. \end{itemize} \end{prop} \begin{proof} Recall that {\bf L} consists of linear maps $L_v: E_v\to E_v$ $(v\in V)$ for which $R_a \circ L_{s(a)} = L_{t(a)} \circ R_a$ for each $a\in A$. Choose $\lambdambda \in \mathbb{R}$ and assume that $x\in E_{s(a)}^{\lambdambda}$. This means that $(L_{s(a)} - \lambdambda {\rm Id}_{s(a)})^N (x) = 0$ for any $N\geq {\rm dim}\, E_{s(a)}$. But then $$(L_{t(a)} - \lambdambda {\rm Id}_{t(a)})^N (R_a x) = R_a (L_{s(a)} - \lambdambda {\rm Id}_{s(a)})^N (x)=0\, .$$ So $R_a(E_{s(a)}^{\lambdambda}) \subset E_{t(a)}^{\lambdambda}$. For $\mu \in \mathbb{C}\backslash \mathbb{R}$, $E_v^{\mu, \bar \mu} = \ker (({ L}_v - \mu {\rm Id}_v)({ L}_v - \overline{\mu} {\rm Id}_v))^N$. So the proof is completely analogous. \end{proof} \section{Lyapunov-Schmidt reduction and quivers}\lambdabel{sec7} In this and the coming sections, we will show that quiver symmetry can be preserved in a number of well-known dimension reduction techniques. We start with the most straightforward result, which shows that quiver symmetry can be preserved in the process of Lyapunov-Schmidt reduction. We only prove this for steady state bifurcations at this point. How to preserve quiver symmetry in the Lyapunov-Schmidt reduction for periodic orbits is left as an open problem. Let us start by reviewing the classical Lyapunov-Schmidt reduction process for steady state bifurcations (so without any quiver symmetry) to set the stage for the proof of Theorem \ref{LStheorem} below. 
We consider the differential equation $$\frac{dx}{dt} = F(x;\lambdambda) \ \mbox{for}\ x\in E \ \mbox{and}\ \lambdambda \in \Lambda \subset \mathbb{R}^p\, , $$ where $F:E \times \Lambda \to E$ is a smooth vector field defined on a finite dimensional vector space $E$, depending smoothly on parameters from an open set $\Lambda \subset \mathbb{R}^p$. We also assume that for some value of the parameters this differential equation admits a steady state. We assume without loss of generality that $F(0; 0)=0$. The goal is to find all other steady states near $(x; \lambdambda)=(0; 0)$ by reducing the equation $$F(x; \lambdambda)=0 \ \mbox{on}\ E\times \Lambda$$ to a simpler ``bifurcation equation'' with as few dimensions as possible. To explain how this is done, denote by $L = D_xF(0;0): E\to E$ the derivative of $F$ in the direction of $E$ at $(0;0)$. We shall denote by $E^{\rm ker}\subset E$ the generalised kernel of $L$ (i.e., the generalised eigenspace for the eigenvalue zero) and by $E^{\rm im}$ its reduced image (the sum of the remaining generalised eigenspaces). We write $$\pi: E = E^{\rm im} \oplus E^{\rm ker} \to E^{\rm im}\, , \ x = x^{\rm im} + x^{\rm ker} \mapsto x^{\rm im}$$ for the projection onto $E^{\rm im}$ along $E^{\rm ker}$ (i.e., $\pi$ has kernel $E^{\rm ker}$ and is the identity on $E^{\rm im}$). The derivative in the direction of $E^{\rm im}$ of $$\pi \circ F : E^{\rm im} \oplus E^{\rm ker} \times \Lambda \to E^{\rm im}$$ at $(0;0)$ is equal to $$D_{x^{\rm im}}(\pi \circ F)(0;0) = \pi \circ L |_{E^{\rm im}} : E^{\rm im} \to E^{\rm im}\, .$$ By construction this map is invertible. By the implicit function theorem there is thus a unique smooth function $$\phi: U \subset E^{\rm ker} \times \Lambda \to W \subset E^{\rm im}$$ defined on some open neighbourhood $U$ of $(0;0)\in E^{\rm ker} \times \Lambda$ and mapping into an open neighborhood $W$ of $0\in E^{\rm im}$ that satisfies $$(\pi \circ F) (x^{\ker} + \phi(x^{\ker}; \lambdambda); \lambdambda)=0\, .$$ We clearly have $\phi(0;0)=0$ because $F(0;0)=0$. To find all other solutions $(x^{\rm im}, x^{\rm ker}; \lambdambda)\in W \times U$ to the equation $F(x^{\rm im}, x^{\rm ker}; \lambdambda)=0$, it then remains to solve only the reduced bifurcation equation \begin{align}\lambdabel{redLS} f(x^{\ker}; \lambdambda):= ((1 - \pi ) \circ F )(x^{\ker} +\phi(x^{\ker}; \lambdambda); \lambdambda) = 0\, , \end{align} where $$f: U \subset E^{\rm ker} \times \Lambda \to E^{\rm ker}\, .$$ This method to (locally) reduce the equation $F(x; \lambdambda)=0$ to the lower-dimensional equation $f(x^{\ker}; \lambdambda)=0$ is called Lyapunov-Schmidt reduction. The following theorem states that the reduced equation inherits quiver symmetry if it is present in the original equation. \begin{thr} \lambdabel{LStheorem}(Quiver equivariant Lyapunov-Schmidt theorem) Let $({\bf E}, {\bf R})$ be a representation of a quiver ${\bf Q}=\{A\rightrightarrows^s_t V\}$ and assume that ${\bf F}\in C^{\infty}({\bf E} \times \Lambda, {\bf R})$ is a smooth parameter-dependent {\bf Q}-equivariant map, i.e., for every $v\in V$ there is a smooth $F_v:E_v\times \Lambda \to E_v$ satisfying $$R_a ( F_{s(a)}(x; \lambdambda)) = F_{t(a)}(R_a(x); \lambdambda) \ \mbox{for all} \ x\in E_{s(a)}, \lambdambda\in \Lambda\ \mbox{and}\ a\in A\, .$$ Assume moreover that ${\bf F}(0;0)=0$, i.e., $F_v(0;0)=0$ for all $v\in V$. 
Then the reduced maps $f_v: U_v \subset E_v^{\rm ker} \times \Lambda \to E_v^{\rm ker}$ ($v\in V$) defined in (\ref{redLS}) satisfy $$R_a ( f_{s(a)}(x^{\ker}; \lambdambda)) = f_{t(a)}(R_a(x^{\ker}); \lambdambda) \ \mbox{for all} \ a\in A\, $$ and for all $(x^{\ker};\lambdambda) \in \overline U_{s(a)}$ in some open neighbourhood $\overline U_{s(a)}$ of $(0;0)$. This means that the $f_v\ (v\in V)$ define a {\bf Q}-equivariant map ${\bf f}$ on an open neighbourhood of $(0; 0)$ of the subrepresentation ${\bf E}^{\ker}\times \Lambda$ of $({\bf E}\times \Lambda, {\bf R})$. \end{thr} \begin{proof} Fix an $a\in A$ and consider the map $R_a: E_{s(a)}\to E_{t(a)}$. Recall that $R_a \circ L_{s(a)} = L_{t(a)} \circ R_a$, where $L_v = D_xF_v(0;0)$, so that $R_a(E_{s(a)}^{\ker}) \subset E_{t(a)}^{\ker}$ and $R_a(E_{s(a)}^{\rm im}) \subset E_{t(a)}^{\rm im}$ by Proposition \ref{eigenspacesprop}. It follows in particular that $$R_a \circ \pi_{s(a)} = \pi_{t(a)} \circ R_a\, .$$ Recall that by definition of $\phi_{s(a)}: U_{s(a)} \to W_{s(a)}$ it holds that $$(\pi_{s(a)} \circ F_{s(a)})( x^{\ker} + \phi_{s(a)}(x^{\ker}; \lambdambda); \lambdambda) = 0\, $$ for all $(x^{\ker}; \lambdambda)\in U_{s(a)}$. It follows that \begin{align}\nonumber 0 & = (R_a \circ \pi_{s(a)} \circ F_{s(a)}) ( x^{\ker} + \phi_{s(a)}(x^{\ker}; \lambdambda); \lambdambda) \\ \nonumber & = (\pi_{t(a)} \circ F_{t(a)}) ( R_a (x^{\ker}) + R_a (\phi_{s(a)}(x^{\ker}; \lambdambda )); \lambdambda)\, . \end{align} By definition of $\phi_{t(a)} : U_{t(a)} \to W_{t(a)}$ it thus holds that $$\phi_{t(a)}(R_a (x^{\ker}); \lambdambda) = R_a (\phi_{s(a)}(x^{\ker}; \lambdambda) )\, $$ for all $(x^{\ker}; \lambdambda)\in U_{s(a)}$ with $(R_a(x^{\ker}); \lambdambda)\in U_{t(a)}$ and $R_a(\phi_{s(a)}(x^{\ker}; \lambdambda))\in W_{t(a)}$. The $(x^{\ker};\lambdambda)$ for which these inclusions hold form an open neighbourhood $ \tilde U_{a}$ of $(0;0)$. For $(x^{\ker}; \lambdambda) \in \tilde U_{a}$ we then have that \begin{align}\nonumber &R_a (f_{s(a)} (x^{\ker}; \lambdambda))= (R_a \circ ( 1- \pi_{s(a)}) \circ F_{s(a)} )(x^{\ker}+\phi_{s(a)}(x^{\ker}; \lambdambda); \lambdambda ) \\ \nonumber &= (( 1 - \pi_{t(a)}) \circ F_{t(a)} )(R_a (x^{\ker})+\phi_{t(a)}(R_a(x^{\ker}); \lambdambda);\lambdambda) = f_{t(a)} (R_a (x^{\ker}); \lambdambda)\, . \end{align} This would prove the theorem if for every vertex $v\in V$ there was at most one arrow $a\in A$ with $s(a)=v$. If there are more such arrows, then the finite intersection $\overline{U}_v := \bigcap_{a: s(a)=v} \tilde U_{a}$ will satisfy the requirements. \end{proof} \section{Center manifolds and quivers} \lambdabel{sec8} In this section we show that quiver symmetry can be preserved in the process of center manifold reduction. The main result is Theorem \ref{CMtheorem} below, which is a {\bf Q}-equivariant {\it global} center manifold theorem. We encountered various obstructions in trying to prove a fully general ${\bf Q}$-equivariant {\it local} center manifold theorem. These will be discussed in Remark \ref{localproblem} below. We start our analysis by recalling the classical global center manifold theorem \cite{vdbauw}. We will not prove this classical theorem here, and for simplicity we only formulate a version of the theorem without parameters. To formulate the classical result, let $E$ be a finite dimensional real vector space and $L: E\to E$ a linear map. 
Let us denote by $E^{ c}$ the center subspace of $L$ (the sum of the generalised eigenspaces of $L$ for the eigenvalues on the imaginary axis) and by $E^{ h}$ the hyperbolic subspace of $L$ (the sum of the generalised eigenspaces of $L$ for the eigenvalues not lying on the imaginary axis). We shall denote by $$\pi^c : E = E^{ c} \oplus E^{h} \to E^{ c}\ \mbox{and by}\ \pi^h: = 1-\pi^c : E = E^{ c} \oplus E^{ h} \to E^{ h}$$ the projections corresponding to the splitting $E = E^{ c} \oplus E^{ h}$. Now we can formulate the global center manifold theorem, referring to \cite{vdbauw} for a proof. \begin{thr}\lambdabel{cmthm} Let $L:E\to E$ be a linear map and $k \in \{1,2,3, \ldots\}$. Then there is an $\varepsilon = \varepsilon(L, k) >0$ for which the following holds. If $F: E\to E$ is a $C^k$ vector field that satisfies $F(0)=0$, $DF(0)=L$, $$ \sup_{x\in E} ||D^{\alpha} (F(x)-L) || < \infty \ \mbox{for all}\ |\alpha| \leq k \ \mbox{and}\ \sup_{x\in E} ||DF(x)-L|| < \varepsilon\, ,$$ then there exists a $C^k$ map $\phi: E^{ c} \to E^{ h}$, satisfying $\phi(0)=0$ and $D\phi(0)=0$, of which the graph $$M^{ c}:= \{ x^c + \phi(x^c) \ |\ x^c \in E^c\} \subset E$$ is an invariant manifold for the flow of the differential equation $\frac{dx}{dt} = F(x)$. Moreover, if we denote this flow by $e^{tF}$, then $$M^{ c} = \left\{ x\in E\ \left| \ \sup_{t\in \mathbb{R} } || (\pi^h \circ e^{tF})(x) || < \infty\, \right. \right\} .$$ We call $M^{ c}$ the {\rm global center manifold} of $F$. \end{thr} \begin{remk} Let $x(t)$ be an integral curve of $F$, i.e., $\frac{dx(t)}{dt} = F(x(t))$, and let us write $x^c(t):=\pi^c( x(t))$. Then $$\frac{dx^c(t)}{dt} = (\pi^c\circ F)(x(t)) \, .$$ If $x(t)$ happens to lie inside $M^{ c}$, then by definition of $\phi$ we moreover have that $x(t) = x^c(t) + \phi (x^c(t))$. So then $$\frac{dx^c(t)}{dt} = (\pi^c\circ F) (x^c(t) + \phi(x^c(t)))\, .$$ This proves that the restriction of $\pi^c$ to $M^{ c}$ sends integral curves of $F$ in $M^c$ to integral curves of the vector field $F^c: E^{ c} \to E^{ c}$ defined by $$F^{ c}(x^c) := (\pi^c \circ F)(x^c+\phi(x^c))\, .$$ We shall call this vector field $F^{ c}$ on $E^{ c}$ the {\it center manifold reduction} of $F$. \end{remk} \noindent We are now ready to formulate our result on quivers and center manifolds, remarking that its proof is more or less identical to that of Lemma \ref{cmff}. \begin{thr} \lambdabel{CMtheorem}(Quiver equivariant center manifold theorem) Let $({\bf E}, {\bf R})$ be a representation of a quiver ${\bf Q}=\{A\rightrightarrows^s_t V\}$ and let ${\bf L} \in {\rm End}({\bf E}, {\bf R})$ and ${\bf F}\in C^{k}({\bf E}, {\bf R})$ ($k=1,2,\ldots$) with ${\bf F}(0)=0$ and $D{\bf F}(0)={\bf L}$. So we assume that for every $v\in V$ there is a linear map $L_v:E_v\to E_v$ and a $C^k$ smooth map $F_v:E_v \to E_v$ with $F_v(0)=0$, $DF_v(0)=L_v$, such that $$R_a \circ L_{s(a)} = L_{t(a)} \circ R_a \ \mbox{and}\ R_a \circ F_{s(a)} = F_{t(a)} \circ R_a \ \mbox{for all} \ a\in A\, .$$ Assume moreover that each of the $L_v$ and $F_v$ ($v\in V$) satisfy the bounds of Theorem \ref{cmthm}, so that each $F_v$ admits a unique global center manifold $M^{c}_v$. 
Then $R_a$ maps the global center manifold of $F_{s(a)}$ into that of $F_{t(a)}$, i.e., $$R_a(M^c_{s(a)}) \subset M^c_{t(a)}\, .$$ Moreover, the center manifold reductions $F_v^{ c}: E_v^{ c} \to E_v^{ c}$ ($v\in V$) satisfy $$R_a \circ F_{s(a)}^{ c} = F_{t(a)}^{ c} \circ R_a \ \mbox{for all} \ a\in A\, .$$ So the $F^c_v$ define a {\bf Q}-equivariant vector field ${\bf F}^{c}$ on the subrepresentation ${\bf E}^{ c}$ of $({\bf E}, {\bf R})$ consisting of the center subspaces $E^{ c}_v$ ($v\in V$). \end{thr} \begin{proof} Fix an $a\in A$. By Proposition \ref{eigenspacesprop} we have that $R_a(E_{s(a)}^{ c}) \subset E_{t(a)}^{ c}$ and $R_a(E_{s(a)}^{ h}) \subset E_{t(a)}^{ h}$, so in particular it holds that $$R_a \circ \pi_{s(a)}^{ c} = \pi_{t(a)}^{ c} \circ R_a \ \mbox{and}\ R_a \circ \pi_{s(a)}^{ h} = \pi_{t(a)}^{ h} \circ R_a\, .$$ Next, choose an $x\in M^{ c}_{s(a)}$ and recall that for such $x$ we have $$\sup_{t\in \mathbb{R}} || (\pi^{ h}_{s(a)} \circ e^{t F_{s(a)}}) (x)|| < \infty\, .$$ Because $R_{a} \circ e^{t F_{s(a)}} = e^{t F_{t(a)}} \circ R_a$ and $R_a \circ \pi_{s(a)}^{ h} = \pi_{t(a)}^{ h} \circ R_a$, this implies that \begin{align}\nonumber \sup_{t\in \mathbb{R}} || (\pi^{ h}_{t(a)} \circ e^{t F_{t(a)}}) (R_a(x)) || & = \sup_{t\in \mathbb{R}} || R_a ( \pi^{ h}_{s(a)} \circ e^{t F_{s(a)}}) (x)) || \\ \nonumber \leq ||R_a|| \cdot \sup_{t\in \mathbb{R}}|| ( \pi^{ h}_{s(a)}& \circ e^{t F_{s(a)}})(x) || < \infty \, , \end{align} where $||R_a||$ is the operator norm of $R_a$. We conclude that $R_a (x) \in M^{ c}_{t(a)}$. This proves that $$R_a (M^{ c}_{s(a)}) \subset M^{ c}_{t(a)}\, .$$ Next, recall that if $x\in M^{ c}_{s(a)}$, then it is of the form \begin{align} \nonumber & x = \underbrace{x^c}_{\in E^{ c}_{s(a)}} + \underbrace{\phi_{s(a)}(x^c)}_{\in E^{ h}_{s(a)}} \, , \end{align} where $\phi_{s(a)}: E^{ c}_{s(a)} \to E^{ h}_{s(a)}$ is the $C^k$ function whose graph is $M_{s(a)}^{ c}$. Applying $R_a$ to this equality we find that $$R_a (x) = \underbrace{R_a (x^c)}_{\in E_{t(a)}^{ c}} + \underbrace{R_a (\phi_{s(a)}(x^c))}_{\in E^{ h}_{t(a)}} \in M^{ c}_{t(a)}\, .$$ But every $X\in M^{ c}_{t(a)}$ can uniquely be written in the form \begin{align} \nonumber & X = \underbrace{X^c}_{\in E^{ c}_{t(a)}} + \underbrace{\phi_{t(a)}(X^c)}_{\in E^{ h}_{t(a)}} \, , \end{align} where $\phi_{t(a)}: E^{ c}_{t(a)} \to E^{ h}_{t(a)}$ is the $C^k$ function whose graph is $M^{ c}_{t(a)}$. This proves that $R_a(\phi_{s(a)}(x^c)) = \phi_{t(a)} (R_a(x^c))$, i.e., that $$R_a \circ \phi_{s(a)} = \phi_{t(a)} \circ R_a \, .$$ Recalling the definition of the center manifold reductions $F^c_{v}: E_{v}^c \to E_{v}^c $, we finish by noticing that \begin{align}\nonumber R_a(F^c_{s(a)} (x^c) ) =& (R_a\circ \pi^c_{s(a)} \circ F_{s(a)} )(x^c + \phi_{s(a)}( x^c)) \\ \nonumber = & (\pi^c_{t(a)} \circ F_{t(a)} \circ R_a)(x^c + \phi_{s(a)}( x^c)) \\ \nonumber = & (\pi^c_{t(a)} \circ F_{t(a)}) ( R_a (x^c) + R_a(\phi_{s(a)}( x^c))) \\ \nonumber = & (\pi^c_{t(a)} \circ F_{t(a)}) ( R_a (x^c) + \phi_{t(a)} ( R_a (x^c))) \\ \nonumber = & F^c_{t(a)}(R_a(x^c))\, , \end{align} i.e., $R_a\circ F^c_{s(a)} = F^c_{t(a)} \circ R_a$. This finishes the proof. \end{proof} \begin{remk}\lambdabel{localproblem} Theorem \ref{CMtheorem} is a {\bf Q}-equivariant {\it global} center manifold theorem. Assuming that the first derivatives of the nonlinearities $F_v-L_v$ are globally small, and that their higher derivatives are globally bounded, it guarantees the existence of a globally defined center manifold. 
The global conditions on the nonlinearities are rather unnatural though, as in practice the nonlinearities will only be small in a neighbourhood of the equilibrium under consideration. The global center manifold theorem is a (very important) step in the proof of a {\it local} center manifold theorem - where the global bounds are not required and a center manifold is guaranteed in a small neighbourhood of the equilibrium. Although it is reasonable to assume that a local version of Theorem \ref{CMtheorem} holds as well, we were so far unable to prove such a theorem for general {\bf Q}-equivariant systems. The problem arises from the way one usually makes the step from a global to a local center manifold theorem: one replaces the unbounded nonlinearities $F_v-L_v$ by globally bounded nonlinearities, for example by replacing the ODE $\frac{dx}{dt} = F_v(x)$ by the ODE $\frac{dx}{dt} = \widetilde F_v(x):= L_vx + \zeta_v(x) (F_v(x)-L_vx)$, where $\zeta_v: E_v \to \mathbb{R}$ is a smooth bump function with $\zeta_v(x) = 1$ for small $||x||$. By shrinking the support of $\zeta_v$ one can then satisfy the assumptions of Theorem \ref{cmthm}. The problem that we encounter is that in general it is unclear how to choose the bump functions $\zeta_v \, (v\in V)$ in such a way that ${\bf Q}$-equivariance is preserved. This problem can sometimes be circumvented if the ${\bf Q}$-equivariant vector field happens to be an admissible vector field $F^{\bf N}$ for some network ${\bf N}$. In that case one can multiply the nonlinear parts of each of the separate components $F^{\bf N}_n$ of the vector field with a bump function, choosing the same bump function for nodes with the same colour (more precisely, choosing bump functions that are invariant under the symmetry groupoid $\mathbb{G}_{\bf N}$). The resulting vector field $\widetilde F^{\bf N}$ will then have the same network structure as $F^{\bf N}$, and will hence admit for example the same quiver of subnetworks and quiver of quotient networks. In \cite{CMR}, \cite{CMRSIREV} it was shown in detail how this works out for so-called {\it fully homogeneous networks with asymmetric inputs}. It is not hard to see that the same procedure can be applied to the admissible maps of any network, see Definition \ref{defadmissible}. On the other hand, quiver symmetry is not always the same as network structure. Therefore even proving an equivariant local center manifold theorem for specific quivers remains problematic. The mentioned fully homogeneous networks with asymmetric inputs are an exception, as we proved in \cite{RinkSanders3} that such networks admit a quiver symmetry that is equivalent to a particular network structure (which may be more general than the original network structure though). Such a result will not hold for other types of networks and quivers. For instance, it is not clear to us that equivariance of ${\bf F}$ under {\bf QuoQ(N)} implies that ${\bf F}$ is an admissible vector field for some network that is somehow related to {\bf N}. We therefore do not know at this moment how to prove a {\bf QuoQ(N)}-equivariant local center manifold theorem. \end{remk} \section{Normal forms and quivers}\lambdabel{sec9} The normal form of a local dynamical system displays the system in a ``standard'' or ``simple'' form. Normal forms are an important tool in the study of the dynamics and bifurcations of maps and vector fields near equilibria, cf. \cite{murdock}, \cite{sanvermur}. The goal of this section is to prove Theorem \ref{normalformtheorem} below. 
This theorem states that it can be arranged that the normal form of a dynamical system possesses the same quiver symmetry as the original system. For simplicity, we do not consider parameter dependent vector fields in this section (but it is straightforward to prove the same result for systems with parameters as well). We start by recalling one of the classical results of normal form theory in Theorem \ref{normalnormalformtheorem}. To this end, let us consider a smooth ODE $$\frac{dx}{dt} = F(x) = F^0(x) + F^1(x)+ F^2(x)+ \ldots$$ on a finite-dimensional vector space $E$. That is, $F\in C^{\infty}(E)$ is a smooth vector field on $E$, $F(0)=0$, and $F^k \in P^k(E)$ where $$P^k(E):=\{ F^k: E\to E \ |\ F^k \ \mbox{is a homogeneous polynomial of degree}\ k+1\}\, .$$ The idea is that we now try to make local coordinate transformations $$x\mapsto y = \Phi(x) = x + \mathcal{O}(||x||^2)$$ that simplify (in one way or another) the higher order terms $F^k \ (k=1, 2, \ldots)$ of $F$. There are various ways to define such coordinate transformations, and there are various ways to define what it means to ``simplify'' a local ODE. Theorem \ref{normalnormalformtheorem} states one of the many well-known results. \begin{thr}\lambdabel{normalnormalformtheorem}(Normal form theorem) Let $E$ be a finite dimensional real vector space and let $F \in C^{\infty}(E)$ be a smooth vector field with $F(0)=0$ and Taylor expansion $$F = F^0 + F^1 + F^2 + \ldots , \ {\it where}\ F^k \in P^k(E)\, .$$ Then, for every $1\leq r<\infty$, there exists an analytic diffeomorphism $\Phi$, sending an open neighborhood of $0$ in $E$ to an open neighborhood of $0$ in $E$, so that the coordinate transformation $x\mapsto y = \Phi(x) = x + \mathcal{O}(||x||^2)$ transforms the ODE $$\frac{dx}{dt} = F(x)$$ into an ODE of the form $$\frac{dy}{dt} = \overline F(y)$$ with $$ \overline F = F^0 + \overline F^1 +\overline F^2 + \ldots \ {\rm with}\ \overline F^k \in P^k(E)\, ,$$ while at the same time it holds that \begin{align}\lambdabel{commutator} e^{t L^S}\circ \overline F^k = \overline F^k \circ e^{t L^S} \ \mbox{for all} \ 1\leq k\leq r\ \mbox{and}\ t\in \mathbb{R}\, . \end{align} Here, $L^S=(F^0)^S$ denotes the semisimple part of $F_0$ and $e^{t L^S}$ its (linear) time-$t$ flow. \end{thr} \noindent To clarify the statement in this theorem, we will now make a number of definitions and observations. A sketch of the proof of Theorem \ref{normalnormalformtheorem} will be given afterwards. First of all, for any two smooth vector fields $F, G \in C^{\infty}(E)$ on $E$ one may define the Lie bracket $[F, G]\in C^{\infty}(E)$ as the vector field \begin{align}\lambdabel{bracketdef} [F, G](x) := \left. \frac{d}{dt}\right|_{t=0} \!\!\!\!\! (e^{tF})_*G (x)= DF(x) \cdot G(x) - DG(x) \cdot F(x)\, . \end{align} Here, $e^{tF}$ denotes the time-$t$ flow of $F$ (which is defined near each $x\in E$ for some positive time) and $(e^{tF})_*G(x) := De^{tF} \cdot G(e^{-tF}(x))$ is the pushforward of the vector field $G$ by the time-$t$ flow of $F$. We say that $F$ and $G$ commute if $[F,G]=0$, which is equivalent to their flows $e^{tF}$ and $e^{tG}$ commuting, and equivalent to $F$ being equivariant under the flow $e^{tG}$, and equivalent to $G$ being equivariant under the flow $e^{tF}$. 
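For readers who wish to experiment, formula (\ref{bracketdef}) is straightforward to evaluate numerically. The short sketch below is an informal illustration only: the two vector fields on $\mathbb{R}^2$ are chosen arbitrarily, the Jacobians are approximated by forward differences, and the antisymmetry $[F,G]=-[G,F]$ is checked at a sample point.
\begin{verbatim}
import numpy as np

# Informal numerical evaluation of the Lie bracket
# [F, G](x) = DF(x) G(x) - DG(x) F(x), cf. formula (bracketdef).

def F(x):
    return np.array([x[1], -np.sin(x[0])])

def G(x):
    return np.array([x[0] ** 2, x[0] * x[1]])

def jac(H, x, h=1e-6):            # forward-difference Jacobian DH(x)
    return np.column_stack([(H(x + h * e) - H(x)) / h for e in np.eye(len(x))])

def bracket(F, G, x):             # the vector field [F, G] evaluated at x
    return jac(F, x) @ G(x) - jac(G, x) @ F(x)

x = np.array([0.3, -0.7])
assert np.allclose(bracket(F, G, x), -bracket(G, F, x), atol=1e-4)
\end{verbatim}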
In particular, (\ref{commutator}) is equivalent to $$[L^S, \overline F_k]=0\ \mbox{for all}\ 1\leq k\leq r\, .$$ We shall also define, for $F\in C^{\infty}(E)$, the linear operator $${\rm ad}_{F}: C^{\infty}(E) \to C^{\infty}(E)\ \mbox{by}\ {\rm ad}_{F}(G):= [F, G]\, .$$ It follows from (\ref{bracketdef}) that, if $F^k\in P^k(E)$ and $G^l\in P^l(E)$, then $[F^k, G^l]\in P^{k+l}(E)$. In other words, ${\rm ad}_{F^k}: P^l(E) \to P^{k+l}(E)$. We will also use the following result, that we state here without proof. \begin{prop} Let $L: E\to E$ be a linear map on a finite dimensional vector space $E$. Recall that there are a unique semisimple linear map $L^S:E \to E$ and a unique nilpotent linear map $L^N:E \to E$ so that $$L=L^S+L^N \ \mbox{and}\ [L^S, L^N]: = L^S L^N - L^N L^S =0\, .$$ $L^S$ is called the {\rm semisimple part} of $L$ and $L^N$ the {\rm nilpotent part} of $L$. The semisimple and nilpotent parts of the restriction ${\rm ad}_{L}: P^k(E) \to P^k(E)$ of ${\rm ad}_L$ to $P^k(E)$ are then given by $$\left({\rm ad}_{L} \right)^S = {\rm ad}_{L^S}\ \mbox{and} \ \left({\rm ad}_{L} \right)^N = {\rm ad}_{L^N}\, .$$ \end{prop} \begin{cor}\lambdabel{corrimker} It holds that \begin{itemize} \item[{\it i)}] $P^k(E) = {\rm im\, ad}_{L^S} \oplus {\rm ker\, ad}_{L^S}$; \item[{\it ii)}] ${\rm im\, ad}_{L^S} \cap P^k(E) \subset {\rm im\, ad}_{L} \cap P^k(E) $; \item[{\it iii)}] ${\rm ker\, ad}_{L} \cap P^k(E) \subset {\rm ker\, ad}_{L^S} \cap P^k(E)$. \item[{\it iv)}] ${\rm ad}_L: {\rm im\, ad}_{L^S}\cap P^k(E) \to {\rm im\, ad}_{L^S} \cap P^k(E)$ is an isomorphism. \end{itemize} \end{cor} \begin{proof} For any linear map $M$ on a finite dimensional real vector space $V$ it holds that $V= {\rm im} \, M^S \oplus {\rm ker} \, M^S$, ${\rm im} \, M^S \subset {\rm im} \, M$ and ${\rm ker} \, M \subset {\rm ker} \, M^S$. So these identities hold in particular for $M= {\rm ad}_{L}$ and $V=P^k(E)$. To prove point {\it iv)}, note that $M(M^S(x))=M^S(M(x))$ so $M$ sends ${\rm im}\, M^S$ into itself. Because ${\rm ker}\, M \subset {\rm ker} \, M^S$, we have that ${\rm ker}\, M \cap {\rm im}\, M^S = \{0\}$. \end{proof} \begin{proof}{\bf [of Theorem \ref{normalnormalformtheorem}] } We sketch the well-known construction of the normal form by means of ``Lie transformations'', providing only those details that are necessary to prove Theorem \ref{normalformtheorem} below, and leaving out any analytical estimates. First of all, recall that for any smooth vector field $G\in C^{\infty}(E)$ satisfying $G(0)=0$, the time-$t$ flow $e^{tG}$ defines a diffeomorphism of an open neighborhood of $0$ in $E$ to another open neighborhood of $0$ in $E$. Thus we can consider, for any other smooth vector field $F \in C^{\infty}(E)$, the curve $t \mapsto (e^{tG})_*F$ of transformed vector fields. This curve satisfies the initial condition $$ (e^{0G})_*F =F\, $$ together with the linear differential equation \begin{align} \nonumber \frac{d}{dt} (e^{tG})_*F = & \left. \frac{d}{dh}\right|_{h=0} \!\!\!\!\! (e^{hG})_* ((e^{tG})_*F) = [G, (e^{tG})_*F ] \\ & = {\rm ad}_{G}((e^{tG})_*F)\, . \end{align} The second equality holds by definition of the Lie bracket. We conclude that $$(e^{G})_*F = e^{{\rm ad}_{G}}(F) = F + [G, F]+\frac{1}{2}[G, [G, F]] + \ldots \ . $$ The diffeomorphism $\Phi$ in the statement of the theorem is now constructed as the composition of a sequence of time-$1$ flows $e^{G^k}$ $(1 \leq k \leq r)$ with $G^k\in P^k(E)$. 
We first take $G^1 \in P^1(E)$, so that $F$ is transformed by $e^{G^1}$ into $$(e^{G^1})_*F=e^{{\rm ad}_{G^1}}(F) =F^0 + F^{1,1}+ F^{2,1} + \ldots $$ in which \begin{align}\nonumber \begin{array}{ll} F^{1,1} = F^1+[G^1, F^0] & \in P^1(E) \\ F^{2,1} =F^2 + [G^1, F^1] + \frac{1}{2}[G^1, [G^1, F^0]] & \in P^2(E)\\ F^{3,1}=F^3+\ldots & \in P^3(E)\\ \mbox{etc.}& \ \end{array} \end{align} From now on we shall often use the notation $$L:=F^0\, .$$ The reason is that the operator ${\rm ad}_L$ plays an important role in the rest of this proof. In fact, the idea is that we now try to choose a $G^1\in P^1(E)$ so that $$F^{1,1}= F^{1} + [G^1, F^0] = F^{1} + [G^1, L] = F^{1} - {\rm ad}_{L}(G^1)$$ is as simple as possible. In general, it will not be possible to arrange that $F^{1,1}$ vanishes completely. But according to Corollary \ref{corrimker} we can write $$F^1 = (F^1)^{\rm im} + (F^1)^{\rm ker} \ \mbox{for unique}\ (F^1)^{\rm im} \in {\rm im \ ad}_{L^S} \ {\rm and}\ (F^1)^{\rm ker} \in {\rm ker \ ad}_{L^S} \, .$$ Because ${\rm im \ ad}_{L^S} \subset {\rm im\ ad}_L$ (by Corollary \ref{corrimker}), we can then find a $G^1\in P^1(E)$ so that ${\rm ad}_L(G^1)=(F^1)^{\rm im}$. With this choice of $G^1$ it will hold that $$F^{1,1} = (F^1)^{\rm ker} \in {\rm ker \ ad}_{L^S}\, .$$ Note that the choice of $G^1$ is not unique. If we replace our $G^1$ by $G^1+H^1$ with $H^1\in {\rm ker\ ad}_L$, then it will still hold that ${\rm ad}_L(G^1)=(F^1)^{\rm im}$ and therefore also $F^{1,1} = (F^1)^{\rm ker} \in {\rm ker \ ad}_{L^S}$. To remove this freedom in the choice of $G^1$, let us recall from Corollary \ref{corrimker} that ${\rm ad}_{L}: {\rm im \ ad}_{L^S} \to {\rm im \ ad}_{L^S}$ is an isomorphism. Thus there is a unique $$G^1\in {\rm im\ ad}_{L^S}\, $$ with the property that ${\rm ad}_L(G^1)=(F^1)^{\rm im}$, and therefore $F^{1,1} \in {\rm ker \ ad}_{L^S}$. It will be important for later that we choose this particular unique $G^1$ to generate our first normalising transformation. We proceed by picking the unique $G^2 \in {\rm im \ ad}_{L^S} \cap P^2(E)$ that makes that $(e^{{G^2}} \circ e^{{G^1}})_*F= $ $F^0+ F^{1,1} + F^{2,2} + \ldots $ with $F^{2,2} \in {\rm ker\, ad}_{L^S}$. Continuing in this way, after $r$ steps we obtain that $$\Phi:= e^{G^r} \circ \ldots \circ e^{G^1}$$ transforms $F$ into $\Phi_*F=\overline F = F^0 + F^{1,1} + F^{2,2} + \ldots = F^0 + \overline F^1+\overline F^2+ \ldots$ where $\overline F^k \in {\rm ker\, ad}_{L^S}$ for all $1\leq k\leq r$. Being the composition of finitely many flows of polynomial vector fields, this $\Phi$ is obviously analytic and well-defined on an open neighbourhood of $0\in E$. \end{proof} \noindent Before we can prove the main result of this section we will make a few technical observations. The first is simply that quiver symmetry is preserved when taking Lie brackets. \begin{prop}\lambdabel{Lieproperty} Let $({\bf E}, {\bf R})$ be a representation of a quiver ${\bf Q}=\{A \rightrightarrows^s_t V\}$ and let ${\bf F}, {\bf G}\in C^{\infty}({\bf E}, {\bf R})$ be smooth equivariant maps. Then also their Lie brackets $[F_v, G_v]$ ($v\in V$) define a smooth equivariant map $[{\bf F}, {\bf G}]\in C^{\infty}({\bf E}, {\bf R})$. \end{prop} \begin{proof} Smoothness of the $[F_v, G_v]$ is clear. 
If ${\bf F}, {\bf G}\in C^{\infty}({\bf E}, {\bf R})$, then for any arrow $a\in A$ we have that $$R_a \circ F_{s(a)} = F_{t(a)} \circ R_a \ \mbox{and}\ R_a \circ G_{s(a)} = G_{t(a)} \circ R_a\, .$$ Differentiation of the identity $R_a \circ F_{s(a)} = F_{t(a)} \circ R_a$ yields that $$R_a \circ DF_{s(a)} = (DF_{t(a)} \circ R_a)\cdot R_a\, .$$ Similarly, $R_a \circ DG_{s(a)} = (DG_{t(a)} \circ R_a)\cdot R_a$. As a result we obtain that \begin{align}\nonumber R_a\circ [F_{s(a)}, G_{s(a)}] =& R_a \circ ( DF_{s(a)} \cdot G_{s(a)} - DG_{s(a)}\cdot F_{s(a)} ) \\ \nonumber = & (DF_{t(a)} \circ R_a) \cdot (R_a \circ G_{s(a)}) - (DG_{t(a)} \circ R_a)\cdot (R_a \circ F_{s(a)}) \\ \nonumber = & (DF_{t(a)} \circ R_a) \cdot (G_{s(a)} \circ R_a ) - (DG_{t(a)} \circ R_a)\cdot (F_{s(a)}\circ R_a ) \\ \nonumber = & ( DF_{t(a)} \cdot G_{t(a)} - DG_{t(a)}\cdot F_{t(a)} ) \circ R_a = [F_{t(a)}, G_{t(a)}] \circ R_a \, . \end{align} \end{proof} \noindent The second technical result states that $S-N$-decomposition respects quiver symmetry. \begin{prop} Let $({\bf E}, {\bf R})$ be a representation of a quiver ${\bf Q}=\{A \rightrightarrows^s_t V\}$ and $\bf L\in \rm{End}(\bf E, \bf R)$. Define the semisimple part ${\bf L}^S$ and the nilpotent part ${\bf L}^N$ of $\bf L$ to consist of the maps $(L_v)^S: E_v \to E_v$ and $(L_v)^N: E_v\to E_v$ (for each $v\in V$) respectively. Then ${\bf L}^S, {\bf L}^N \in \rm{End}(\bf E, \bf R)$. \end{prop} \begin{proof} For any arrow $a\in A$ consider the linear map $$L_{(a)} = \left( \begin{array}{cc} L_{s(a)} & 0 \\ 0 & L_{t(a)} \end{array} \right) \ \mbox{from} \ E_{s(a)} \times E_{t(a)} \ \mbox{to}\ E_{s(a)} \times E_{t(a)} \, .$$ To compute the semisimple and nilpotent parts of this $L_{(a)}$, recall that $L_{(a)}^S$ and $L_{(a)}^N$ are polynomial functions of $L_{(a)}$. This means that there is a polynomial $p(x) = a_0 + a_1 x + \ldots + a_n x^n$, with $a_0, \ldots, a_n \in \mathbb{R}$, so that $L_{(a)}^S = p(L_{(a)}) = a_0 \mbox{Id} + a_1 L_{(a)} + \ldots + a_nL_{(a)}^n$ and $L_{(a)}^N = (1-p)(L_{(a)})$. Clearly, $$L_{(a)}^S = p(L_{(a)}) = \left( \begin{array}{cc} p(L_{s(a)}) & 0 \\ 0 & p(L_{t(a)}) \end{array} \right)\, .$$ From this it is clear that $p(L_{s(a)})$ must be the semisimple part of $L_{s(a)}$ and $(1-p)(L_{s(a)})$ must be the nilpotent part of $L_{s(a)}$, and similarly for $L_{t(a)}$. For example, because $p(L_{(a)})$ is semisimple it follows that $p(L_{s(a)})$ and $p(L_{t(a)})$ must both be semisimple as well. The remaining conditions for an $S-N$-decomposition are checked similarly. Finally, recall that $R_a \circ L_{s(a)} = L_{t(a)} \circ R_a$ because $\bf L\in \rm{End}(\bf E, \bf R)$. Hence it follows that $R_a \circ L_{s(a)}^S = R_a \circ p(L_{s(a)}) = p(L_{t(a)}) \circ R_a = L_{t(a)}^S \circ R_a$, and $R_a \circ L_{s(a)}^N = R_a \circ (1-p)(L_{s(a)}) = (1-p)(L_{t(a)}) \circ R_a = L_{t(a)}^N \circ R_a$. \end{proof} \noindent We are now ready for the main result of this section: \begin{thr}[Quiver equivariant normal form theorem] \lambdabel{normalformtheorem} Let $({\bf E}, {\bf R})$ be a representation of a quiver ${\bf Q}=\{A \rightrightarrows^s_t V\}$. 
Let ${\bf F}\in C^{\infty}({\bf E}, {\bf R})$ be a smooth ${\bf Q}$-equivariant vector field, i.e., it consists of vector fields $$F_v:E_v\to E_v \ (v\in V)\ \mbox{satisfying}\ F_{t(a)}\circ R_a = R_a \circ F_{s(a)}\ (a\in A)\, .$$ We assume that ${\bf F}(0)=0$, i.e., $F_v(0)=0$ for all $v\in V$, and we write $${\bf F}= {\bf F}^0+ {\bf F}^1+ {\bf F}^2+ \ldots \ \mbox{with}\ {\bf F}^k \in P^k({\bf E}, {\bf R}) \, ,$$ i.e., $F_v^k\in P^k(E_v)$ for each $v\in V$ and $F_{t(a)}^k\circ R_a = R_a \circ F^k_{s(a)}$ for all $a\in A$. Then the local normal forms $\overline{F}_v = F_v^0+\overline F_v^1 + \overline F_v^2 + \ldots$ ($v\in V$) of the vector fields $F_v=F_v^0+ F_v^1 + F_v^2 + \ldots$ constructed in Theorem \ref{normalnormalformtheorem} satisfy $$ R_a \circ \overline{F}_{s(a)} = \overline{F}_{t(a)} \circ R_a\ \mbox{for each}\ a\in A\, .$$ So they define a smooth ${\bf Q}$-equivariant vector field $\overline {\bf F}$ and polynomial ${\bf Q}$-equivariant vector fields $\overline {\bf F}^k$ on an open neighbourhood of $0$ in $({\bf E}, {\bf R})$. \end{thr} \begin{proof} Recall from the proof of Theorem \ref{normalnormalformtheorem} that each of the vector fields $$F_v = F_v^0 + F_v^1+ F_v^2 + \ldots \in C^{\infty}(E_v)$$ is brought into normal form by a sequence of transformations $e^{G_v^k}$. We will show that the generators $G_v^k$ of these transformations satisfy $G_{t(a)}^k \circ R_a = R_a \circ G_{s(a)}^k$ for every $a\in A$. From this it follows that $R_a \circ e^{G_{s(a)}^k} = e^{G_{t(a)}^k} \circ R_a$ and hence that $R_a \circ \overline F_{s(a)} = \overline F_{t(a)} \circ R_a$. The proof is by induction on $k$. So let us assume that $$R_a \circ G_{s(a)}^j = G_{t(a)}^j \circ R_a \ \mbox{for every} \ a\in A\ \mbox{and every}\ j=1, \ldots k-1\, .$$ We recall that the generator $G_v^k$ is the unique vector field in ${\rm im \ ad}_{L_v^S} \cap P^k(E_v)$ that solves the equation \begin{align} \lambdabel{homologicalv} F_{v}^{k, k-1} - {\rm ad}_{L_{v}}(G_{v}^k) \in {\rm ker \ ad}_{L^S_{v}} \ \mbox{for all}\ v\in V\, . \end{align} Here $F_{v}^{k, k-1} \in P^k(E_v)$. Importantly, our induction hypothesis implies that $$R_a \circ F_{s(a)}^{k, k-1} = F_{t(a)}^{k, k-1} \circ R_a\ \mbox{for all}\ a\in A\, .$$ We will now show that this implies that $R_a \circ G_{s(a)}^k = G_{t(a)}^k\circ R_a$ for every $a\in A$. The proof of this fact is somewhat technical and goes as follows. For any arrow $a\in A$, let us define the space of conjugate pairs of homogeneous polynomial vector fields $$P^k_{(a)} := \{(F_{s(a)}^k, F_{t(a)}^k) \in P^k(E_{s(a)})\times P^k(E_{t(a)}) \, |\, R_a \circ F_{s(a)}^k = F_{t(a)}^k \circ R_a \} \, ,$$ \noindent and for any ${\bf L}\in {\rm End}({\bf E}, {\bf R})$, let us define the linear map ${\rm ad}_{L_{(a)}}: P^k_{(a)} \to P^k_{(a)}$ by $$ {\rm ad}_{L_{(a)}} (F_{s(a)}^k, F_{t(a)}^k) := ( {\rm ad}_{L_{s(a)}}(F^k_{s(a)}), {\rm ad}_{L_{t(a)}}(F^k_{t(a)}))\, . $$ Note that this map is well-defined by Proposition \ref{Lieproperty}: it maps $P^k_{(a)}$ into $P^k_{(a)}$. Note moreover that equation (\ref{homologicalv}) for $v=s(a)$ and equation (\ref{homologicalv}) for $v=t(a)$ together read $$\underbrace{(F_{s(a)}^{k, k-1}, F_{t(a)}^{k, k-1})}_{\in P^k_{(a)}} - {\rm ad}_{L_{(a)}}\underbrace{(G_{s(a)}^k, G_{t(a)}^k)}_{{\rm is\, \, this\, \, in}\ P^k_{(a)}{\rm \, ?}} \in {\rm ker \ ad}_{L^S_{(a)}}\, .$$ We dedicate a separate proposition to the following observation. 
\begin{prop}\lambdabel{SNadad} The $S-N$-decomposition of ${\rm ad}_{L_{(a)}}: P^k_{(a)} \to P^k_{(a)}$ is given by $$ ({\rm ad}_{L_{(a)}})^S = {\rm ad}_{L_{(a)}^S} \ \mbox{and} \ ({\rm ad}_{L_{(a)}})^N= {\rm ad}_{L_{(a)}^N}\, . $$ \end{prop} \begin{proof}{\bf [of Proposition \ref{SNadad}]} Note that ${\rm ad}_{L_{(a)}}$ is the restriction of the product $${\rm ad}_{L_{s(a)}} \times {\rm ad}_{L_{t(a)}}: P^k(E_{s(a)}) \times P^k(E_{t(a)}) \to P^k(E_{s(a)}) \times P^k(E_{t(a)}) \, $$ to $P^k_{(a)}$. The $S-N$-decomposition of this product map is clearly given by the product of the $S-N$-decompositions, i.e., \begin{align}\nonumber & ({\rm ad}_{L_{s(a)}} \times {\rm ad}_{L_{t(a)}})^S = {\rm ad}_{L_{s(a)}^S} \times {\rm ad}_{L_{t(a)}^S} \, , \\ \nonumber & ({\rm ad}_{L_{s(a)}} \times {\rm ad}_{L_{t(a)}})^N = {\rm ad}_{L_{s(a)}^N} \times {\rm ad}_{L_{t(a)}^N}\, . \end{align} This can be checked directly, by verifying that the right hand sides satisfy the requirements for the $S-N$-decomposition of ${\rm ad}_{L_{s(a)}} \times {\rm ad}_{L_{t(a)}}$. But the restriction of ${\rm ad}_{L_{s(a)}^S} \times {\rm ad}_{L_{t(a)}^S}$ to $P^k_{(a)}$ is ${\rm ad}_{L_{(a)}^S}$. And the restriction of ${\rm ad}_{L_{s(a)}^N} \times {\rm ad}_{L_{t(a)}^N}$ to $P^k_{(a)}$ is ${\rm ad}_{L_{(a)}^N}$. Moreover, ${\rm ad}_{L_{(a)}^S}$ and ${\rm ad}_{L_{(a)}^N}$ leave $P^k_{(a)}$ invariant as ${\bf L}_{(a)}^S, {\bf L}_{(a)}^N \in {\rm End}({\bf E}, {\bf R})$. This proves the proposition, because the $S-N$-decomposition of the restriction is the restriction of the $S-N$-decomposition. \end{proof} \noindent We continue the proof of Theorem \ref{normalformtheorem}. Note that Proposition \ref{SNadad} implies that $$P^k_{(a)} = {\rm im\ ad}_{L^S_{(a)}} \oplus {\rm ker\ ad}_{L^S_{(a)}} \, .$$ Just like in the proof of Theorem \ref{normalnormalformtheorem} we can thus uniquely decompose \begin{align}\lambdabel{splitF} \hspace{-.2cm} (F^{k,k-1}_{s(a)}, F^{k,k-1}_{t(a)}) = \underbrace{((\widetilde{F}^{k,k-1}_{s(a)})^{\rm im}, (\widetilde{F}^{k,k-1}_{t(a)})^{\rm im})}_{\in \, {\rm im\, ad}_{L_{(a)}^S} \! \cap \, P^k_{(a)}} + \underbrace{((\widetilde{F}^{k,k-1}_{s(a)})^{\rm ker}, (\widetilde{F}^{k,k-1}_{t(a)})^{\rm ker})}_{\in \, {\rm ker\, ad}_{L_{(a)}^S}\! \cap \, P^k_{(a)}} \, . \end{align} By definition of ${\rm ad}_{L_{(a)}^S}$, equation (\ref{splitF}) just means that $$F^{k,k-1}_{s(a)} = \underbrace{(\widetilde{F}^{k,k-1}_{s(a)})^{\rm im}}_{\in \, {\rm im\, ad}_{L_{s(a)}^S}} + \underbrace{(\widetilde{F}^{k,k-1}_{s(a)})^{\rm ker}}_{\in \, {\rm ker\, ad}_{L_{s(a)}^S}} \ \mbox{and} \ F^{k,k-1}_{t(a)} = \underbrace{(\widetilde{F}^{k,k-1}_{t(a)})^{\rm im}}_{\in \, {\rm im\, ad}_{L_{t(a)}^S}} + \underbrace{(\widetilde{F}^{k,k-1}_{t(a)})^{\rm ker}}_{\in \, {\rm ker\, ad}_{L_{t(a)}^S}} \, .$$ But we already know from the proof of Theorem \ref{normalnormalformtheorem} that the latter two decompositions are unique inside $P^k(E_{s(a)})$ and $P^k(E_{t(a)})$ respectively. 
So we conclude that \begin{align} & \left( (\widetilde{F}_{s(a)}^{k, k-1})^{\rm im}, (\widetilde{F}_{t(a)}^{k, k-1})^{\rm im} \right) = \left( (F_{s(a)}^{k, k-1})^{\rm im}, (F_{t(a)}^{k, k-1})^{\rm im} \right) \, \mbox{and} \nonumber \\ \nonumber & \left( (\widetilde{F}_{s(a)}^{k, k-1})^{\rm ker}, (\widetilde{F}_{t(a)}^{k, k-1})^{\rm ker} \right) = \left( (F_{s(a)}^{k, k-1})^{\rm ker}, (F_{t(a)}^{k, k-1})^{\rm ker} \right) \, , \end{align} where $(F_{s(a)}^{k, k-1})^{\rm im}$, $(F_{s(a)}^{k, k-1})^{\rm ker}$, $(F_{t(a)}^{k, k-1})^{\rm im}$, $(F_{t(a)}^{k, k-1})^{\rm ker}$ are the unique vector fields given in the proof of Theorem \ref{normalnormalformtheorem}. In particular it holds that $$R_a \circ (F_{s(a)}^{k, k-1})^{\rm im} = (F_{t(a)}^{k, k-1})^{\rm im} \circ R_a\ \mbox{and}\ R_a \circ (F_{s(a)}^{k, k-1})^{\rm ker} = (F_{t(a)}^{k, k-1})^{\rm ker} \circ R_a\, .$$ Next, we note that Proposition \ref{SNadad} implies that ${\rm ad}_{L_{(a)}}: {\rm im\, ad}_{L_{(a)}^S} \to {\rm im\, ad}_{L_{(a)}^S}$ is an isomorphism. Hence there is a unique $G^k_{(a)} = (\widetilde{G}^k_{s(a)}, \widetilde{G}^k_{t(a)}) \in {\rm im \ ad}_{L^S_{(a)}} \! \cap \, P^k_{(a)}$ such that \begin{align}\lambdabel{homoldouble} {\rm ad}_{L_{(a)}} \underbrace{(\widetilde{G}^k_{s(a)}, \widetilde{G}^k_{t(a)})}_{\in \, {\rm im \, ad}_{L^S_{(a)}} \! \cap \, P^k_{(a)}} = ((F^{k,k-1}_{s(a)})^{\rm im}, (F^{k,k-1}_{t(a)})^{\rm im})\, . \end{align} By definition of ${\rm ad}_{L_{(a)}}$ and ${\rm ad}_{L_{(a)}^S}$, equation (\ref{homoldouble}) just means that $${\rm ad}_{L_{s(a)}} (\!\!\!\!\!\!\!\!\!\! \underbrace{\widetilde{G}^k_{s(a)}}_{\in \, {\rm im \, ad}_{L^S_{s(a)}} \! \cap \, P^k_{s(a)}}\!\!\!\!\!\!\!\!\!\! ) = (F^{k,k-1}_{s(a)})^{\rm im} \ {\rm and}\ \, {\rm ad}_{L_{t(a)}} (\!\!\!\!\!\!\!\!\!\! \underbrace{\widetilde{G}^k_{t(a)}}_{\in \, {\rm im \, ad}_{L^S_{t(a)}} \! \cap \, P^k_{t(a)}}\!\!\!\!\!\!\!\!\!\! ) = (F^{k,k-1}_{t(a)})^{\rm im} \, .$$ Again, we already know from the proof of Theorem \ref{normalnormalformtheorem} that the solutions $\widetilde{G}^k_{s(a)}$ and $\widetilde{G}^k_{t(a)}$ to these two equations are unique. We conclude that $$\left( \widetilde{G}_{s(a)}^{k}, \widetilde{G}_{t(a)}^{k} \right) = \left( G_{s(a)}^{k}, G_{t(a)}^{k} \right)$$ where $G_{s(a)}^{k}$ and $G_{t(a)}^{k}$ are the unique vector fields given in the proof of Theorem \ref{normalnormalformtheorem}. In particular it holds that $$R_a \circ G_{s(a)}^k = G_{t(a)}^k \circ R_a\, .$$ This proves that the $G_v^k$ ($v\in V$) define an equivariant vector field ${\bf G}^k\in P^k({\bf E}, {\bf R})$, which concludes the proof of the induction step and hence the proof of the theorem. \end{proof} \section{An example}\lambdabel{sec10} We finish this paper with an example of a network dynamical system admitting a symmetry quiver that does not consist solely of subnetworks or quotients. We consider the network {\bf N} in Figure \ref{picchap10}. \begin{figure} \caption{\footnotesize {\rm A network with two types of nodes. Self-loops corresponding to internal dynamics are not drawn.}} \end{figure} \noindent Its admissible maps take the general form \begin{equation}\lambdabel{exchap10-1} F^{{\bf N}}\left( \begin{array}{c} x_1 \\ y_2 \\ x_3 \\ y_4 \\ x_5 \end{array} \right) = \left( \begin{array}{l} f(x_{1}, \bl{ y_{2}}) \\ g(y_{2}, \ro{x_{3}}) \\ f(x_{3}, \bl{ y_{4}}) \\ g(y_{4}, \ro{x_{3}})\\ f(x_{5}, \bl{ y_{4}}) \\ \end{array}\right) \, .
\hspace{-.8cm} \end{equation} We will assume that all the variables are one-dimensional, i.e., that $x_1, y_2, x_3$, $y_4, x_5 \in \field{R}$ and $f,g: \field{R}^2 \rightarrow \field{R}$. To study this class of maps we will use the $3$-vertex quiver {\bf Q} shown in Figure \ref{picchap102}. The vector spaces corresponding to the vertices of {\bf Q} are given by $E_1 = \field{R}^5$ (for the vertex at ${{\bf N}_1} = {\bf N}$), $E_2 = \field{R}^4$ (for the vertex at ${{\bf N}_2}$) and $E_3 = \field{R}^3$ (for the vertex at ${{\bf N}_3}$). The linear maps of the representation are given by \begin{align}\lambdabel{linmaps10} \begin{array}{ll} R_{a_1}(x_1, y_2, x_3, y_4, x_5) &= (x_1, y_2, x_3, y_4)\, , \\ R_{a_2}(x_1, y_2, x_3, y_4, x_5) &= (x_5, y_4, x_3, y_4)\, , \\ R_{a_3}(y_1, x_2, y_3) &= (x_2, y_3, x_2, y_3)\, , \\ R_{a_4}(x_1, y_2, x_3, y_4) &= (y_2, x_3, y_4)\, . \end{array} \end{align} \begin{figure} \caption{\footnotesize {\rm A quiver involving the network ${\bf N} \end{figure} \noindent We moreover define the quiver equivariant map ${\bf F} = \left\{F^{{\bf N}_1}, F^{{\bf N}_2}, F^{{\bf N}_3}\right\}$, where $F^{{\bf N}_1} = F^{{\bf N}}$ is given by equation \eqref{exchap10-1}, and $F^{{\bf N}_2}$ and $F^{{\bf N}_3}$ are given by \begin{equation}\lambdabel{exchap10-2} \nonumber F^{{\bf N}_2}\left( \begin{array}{c} x_1 \\ y_2 \\ x_3 \\ y_4 \end{array} \right) = \left( \begin{array}{l} f(x_{1}, \bl{ y_{2}}) \\ g(y_{2}, \ro{x_{3}}) \\ f(x_{3}, \bl{ y_{4}}) \\ g(y_{4}, \ro{x_{3}})\\ \end{array}\right) \, , \quad F^{{\bf N}_3}\left( \begin{array}{c} y_1 \\ x_2 \\ y_3 \end{array} \right) = \left( \begin{array}{l} g(y_{1}, \ro{ x_{2}}) \\ f(x_{2}, \bl{y_{3}}) \\ g(y_{3}, \ro{ x_{2}}) \\ \end{array}\right) \, . \hspace{-.8cm} \end{equation} A direct calculation shows that indeed \begin{align}\lambdabel{quividentts} \begin{array}{ll} F^{{\bf N}_2} \circ R_{a_1} = R_{a_1} \circ F^{{\bf N}_1}\, & F^{{\bf N}_2} \circ R_{a_2} = R_{a_2} \circ F^{{\bf N}_1}\, , \\ F^{{\bf N}_2} \circ R_{a_3} = R_{a_3} \circ F^{{\bf N}_3} \, , & F^{{\bf N}_3} \circ R_{a_4} = R_{a_4} \circ F^{{\bf N}_2}\, . \end{array} \end{align} Alternatively, note that $F^{{\bf N}_1}$, $F^{{\bf N}_2}$ and $F^{{\bf N}_3}$ are the admissible maps for the networks ${{\bf N}_1}$, ${{\bf N}_2}$ and ${{\bf N}_3}$ shown in Figure \ref{picchap102}. It can easily be seen that the linear maps $R_{a_1}, R_{a_2}, R_{a_3}, R_{a_4}$ are induced by graph fibrations between the networks, so that the identities \eqref{quividentts} follow from Theorem \ref{devilletheorem1}. The attentive reader might wonder why we have chosen the specific quiver of Figure \ref{picchap102}. After all, this quiver does not contain all subnetworks of ${\bf N}_1$, nor does it contain all of its quotient networks. For instance, the subnetwork of ${\bf N}_1$ consisting of nodes $3$ and $4$ (which is also a quotient of ${\bf N}_1$) is absent. To better explain our choice of the quiver, we need the following proposition. \begin{prop}\lambdabel{quivequitonet} Let $({\bf E}, {\bf R})$ be the representation of the quiver {\bf Q} of Figure \ref{picchap102}, consisting of the vector spaces $E_1 = \field{R}^5$, $E_2 = \field{R}^4$ and $E_3 = \field{R}^3$, and the linear maps $R_{a_1}, \ldots, R_{a_4}$ given in \eqref{linmaps10}. 
A triple of maps ${\bf G} = \left\{G^1, G^2, G^3\right\}$ (with $G^i: E_i \to E_i$) is ${\bf Q}$-equivariant if and only if there exist maps $h: \field{R}^4 \rightarrow \field{R}$ and $l: \field{R}^3 \rightarrow \field{R}$ such that \begin{align}\label{exchap10-3} &G^1\left( \! \begin{array}{c} x_1 \\ y_2 \\ x_3 \\ y_4 \\ x_5 \end{array} \! \right) = \left( \! \begin{array}{l} h(x_{1}, y_{2}, x_3, y_4) \\ l(y_{2}, x_{3}, y_4) \\ h(x_{3}, y_{4}, x_3, y_4)\\ l(y_{4}, x_{3}, y_4)\\ h(x_{5}, y_{4}, x_3, y_4) \end{array} \! \right) \, , \\ \nonumber &G^2\left( \! \begin{array}{c} x_1 \\ y_2 \\ x_3 \\ y_4 \end{array} \! \right) = \left( \! \begin{array}{l} h(x_{1}, y_{2}, x_3, y_4) \\ l(y_{2}, x_{3}, y_4) \\ h(x_{3}, y_{4}, x_3, y_4)\\ l(y_{4}, x_{3}, y_4) \end{array} \! \right) \ \mbox{and}\ \ G^3\left( \! \begin{array}{c} y_1 \\ x_2 \\ y_3 \end{array} \! \right) = \left( \! \begin{array}{l} l(y_{1}, x_{2}, y_3) \\ h(x_{2}, y_{3}, x_2, y_3)\\ l(y_{3}, x_{2}, y_3) \end{array} \! \right) \, . \end{align} \end{prop} \begin{proof} A direct calculation shows that ${\bf G} = \left\{G^1, G^2, G^3\right\}$ as given by equations \eqref{exchap10-3} is indeed ${\bf Q}$-equivariant. In other words, one verifies that $G^2 \circ R_{a_1} = R_{a_1} \circ G^1$, with similar relations for $R_{a_2}$, $R_{a_3}$ and $R_{a_4}$ as in \eqref{quividentts}. Conversely, suppose ${\bf G} = \left\{G^1, G^2, G^3\right\}$ is a ${\bf Q}$-equivariant map. This assumption implies in particular that $G^2 \circ R_{a_3} = R_{a_3}\circ G^3$, where we recall that $R_{a_3}(y_1, x_2, y_3) = (x_2, y_3, x_2, y_3)$. Reading off the first component of the identity $(G^2 \circ R_{a_3})(y_1, x_2, y_3) = (R_{a_3} \circ G^3)(y_1, x_2, y_3)$, we thus see that \begin{equation}\label{proofthing1} G^2_1(x_2, y_3, x_2, y_3) = G^3_2(y_1, x_2, y_3)\, , \end{equation} where we have used that $(R_{a_3}\circ G^3)_1 = G^3_2$. Likewise, evaluating the first component of the identity $G^3 \circ R_{a_4} = R_{a_4} \circ G^2$ gives \begin{equation}\label{proofthing2} G^3_1(y_2, x_3, y_4) = G^2_2(x_1, y_2, x_3, y_4)\, . \end{equation} Next, we calculate \begin{align}\nonumber &R_{a_4}R_{a_3}(y_1, x_2, y_3) = (y_3, x_2, y_3) \, ,\\ \nonumber &R_{a_3}R_{a_4}(x_1, y_2, x_3, y_4) = (x_3, y_4, x_3, y_4)\, , \\ \nonumber &R_{a_4}R_{a_3}R_{a_4}(x_1, y_2, x_3, y_4) = (y_4, x_3, y_4) \, . \end{align} From the identities $G^2 \circ R_{a_3} = R_{a_3} \circ G^3$ and $G^3 \circ R_{a_4} = R_{a_4} \circ G^2$ we get $G^3 \circ R_{a_4} \circ R_{a_3} = R_{a_4} \circ R_{a_3}\circ G^3$, and the first component of this equation reads \begin{equation}\label{proofthing3} G^3_1 (y_3, x_2, y_3)= G^3_3(y_1, x_2, y_3) \, . \end{equation} We likewise find $G^2 \circ R_{a_3} \circ R_{a_4} = R_{a_3} \circ R_{a_4}\circ G^2$, so that \begin{equation}\label{proofthing4} G^2_1 (x_3, y_4, x_3, y_4) = G^2_3(x_1, y_2, x_3, y_4)\, . \end{equation} And finally, using that $G^3 \circ R_{a_4} \circ R_{a_3} \circ R_{a_4} = R_{a_4} \circ R_{a_3}\circ R_{a_4} \circ G^2$, we have \begin{align}\label{proofthing5} G^3_1 (y_4, x_3, y_4) = G^2_4(x_1, y_2, x_3, y_4)\, . \end{align} If we now set \begin{align}\nonumber h(x_1, y_2, x_3, y_4) := G^2_1(x_1, y_2, x_3, y_4) \quad \text{ and } \quad l(y_1, x_2, y_3) := G^3_1(y_1, x_2, y_3)\, , \end{align} then equations \eqref{proofthing1}, \eqref{proofthing2}, \eqref{proofthing3}, \eqref{proofthing4} and \eqref{proofthing5} show that $G^2$ and $G^3$ have the required form.
The form of $G^1$ follows from similar arguments involving $R_{a_1}$ and $R_{a_2}$. More precisely, for $i = 1, \dots, 4$, we find \begin{equation} G^2_i \circ R_{a_1} = (G^2 \circ R_{a_1})_i = (R_{a_1} \circ G^1)_i = G^1_i\, . \end{equation} Moreover, we have \begin{equation} G^2_1 \circ R_{a_2} = (G^2 \circ R_{a_2})_1 = (R_{a_2} \circ G^1)_1 = G^1_5\, . \end{equation} This proves that ${\bf G}$ is ${\bf Q}$-equivariant if and only if it is of the form \eqref{exchap10-3}. \end{proof} \begin{remk} Note that each of the maps $G^1$, $G^2$ and $G^3$ in equation \eqref{exchap10-3} may be seen as the admissible map for some network with two types of nodes. For example, $G^1$ is an admissible map for the network ${\bf \widetilde{N}}_1$ shown in Figure \ref{picchap103}. Proposition \ref{quivequitonet} therefore shows that a map $\tilde{G}: \field{R}^5 \rightarrow \field{R}^5$ is an admissible map for the network ${\bf \widetilde{N}}_1$ if and only if we have $\tilde{G} = G^1$ for some ${\bf Q}$-equivariant map ${\bf G} = \left\{G^1, G^2, G^3\right\}$ for the quiver ${\bf Q}$ in Figure \ref{picchap102}. \begin{figure} \caption{\footnotesize {\rm The network ${\bf \widetilde{N}}_1$.}}\label{picchap103} \end{figure} \noindent Comparing the networks ${\bf \widetilde{N}}_1$ and ${\bf {N}}_1$, we see that ${\bf \widetilde{N}}_1$ can be obtained from ${\bf {N}}_1$ by adding arrow types. More precisely, these additional arrows are formed by concatenating two or more existing arrow types (i.e., the red and blue arrows of network ${\bf {N}}_1$). It can be shown that adding concatenations of arrow types in this fashion has no effect on the presence of sub- and quotient networks, cf. Example \ref{interestingffex}. As the network structure of $G^1$ is a consequence of quiver symmetry, we see that all information about sub- and quotient networks in $F^{{\bf N}_1}$ is ``encoded'' in the quiver of Figure \ref{picchap102}. To obtain such a useful quiver representation, one uses the theory of fundamental networks. We will not explain this in further detail here, but see for instance Section 11 of \cite{fibr}. \end{remk} \noindent In what follows, we shall determine the properties of generic one-parameter steady state bifurcations for the ODE $\frac{dx}{dt} = F^{{\bf N}}(x; \lambda)$ by means of Lyapunov-Schmidt reduction, while exploiting the quiver symmetry that is present in the problem. To this end, we let $F^{{\bf N}} = F^{{\bf N}_1}$, $F^{{\bf N}_2}$ and $F^{{\bf N}_3}$ depend on a parameter $\lambda$, taking values in some open neighbourhood $\Lambda$ of $0 \in \field{R}$. To keep the network structures intact for all values of $\lambda$, we simply replace $f(x, y)$ and $g(y,x)$ in $F^{{\bf N}_1}$, $F^{{\bf N}_2}$ and $F^{{\bf N}_3}$ by $f(x, y; \lambda)$ and $g(y, x; \lambda)$. For instance, we get \begin{equation}\label{exchap10-4} F^{{\bf N}_1}\left( \begin{array}{c} x_1 \\ y_2 \\ x_3 \\ y_4 \\ x_5 \\ \lambda \end{array} \right) = \left( \begin{array}{l} f(x_{1}, \bl{ y_{2}}; \lambda) \\ g(y_{2}, \ro{x_{3}}; \lambda) \\ f(x_{3}, \bl{ y_{4}}; \lambda) \\ g(y_{4}, \ro{x_{3}}; \lambda)\\ f(x_{5}, \bl{ y_{4}}; \lambda) \\ \end{array}\right) \, . \hspace{-.8cm} \end{equation} We will assume that $F^{{\bf N}_1}(0; 0) = 0$, from which it follows that also $F^{{\bf N}_2}(0;0) = F^{{\bf N}_3}( 0; 0) = 0$.
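\noindent (As an aside, the equivariance identities \eqref{quividentts} -- and the fact that they persist once the parameter $\lambda$ is introduced -- are easy to check numerically. The following short Python/NumPy sketch does so at randomly chosen points; the particular functions $f$ and $g$ used there are our own illustrative choices and play no role in the argument.)
\begin{verbatim}
import numpy as np

# Arbitrary smooth response functions (illustrative only); they vanish at the
# origin for lambda = 0, matching the assumption F(0; 0) = 0 made above.
def f(x, y, lam): return np.tanh(x) + 0.5 * y**2 + lam * x
def g(y, x, lam): return np.sin(y) - 0.3 * x * y + lam * y

def F1(v, lam):                       # admissible map of N_1
    x1, y2, x3, y4, x5 = v
    return np.array([f(x1, y2, lam), g(y2, x3, lam), f(x3, y4, lam),
                     g(y4, x3, lam), f(x5, y4, lam)])

def F2(v, lam):                       # admissible map of N_2
    x1, y2, x3, y4 = v
    return np.array([f(x1, y2, lam), g(y2, x3, lam),
                     f(x3, y4, lam), g(y4, x3, lam)])

def F3(v, lam):                       # admissible map of N_3
    y1, x2, y3 = v
    return np.array([g(y1, x2, lam), f(x2, y3, lam), g(y3, x2, lam)])

# Linear maps of the quiver representation, written via index selection.
R1 = lambda v: v[[0, 1, 2, 3]]        # R_{a_1}
R2 = lambda v: v[[4, 3, 2, 3]]        # R_{a_2}
R3 = lambda v: v[[1, 2, 1, 2]]        # R_{a_3}
R4 = lambda v: v[[1, 2, 3]]           # R_{a_4}

rng = np.random.default_rng(1)
v5, v4, v3, lam = (rng.standard_normal(5), rng.standard_normal(4),
                   rng.standard_normal(3), 0.2)
assert np.allclose(F2(R1(v5), lam), R1(F1(v5, lam)))   # F2 o R_{a_1} = R_{a_1} o F1
assert np.allclose(F2(R2(v5), lam), R2(F1(v5, lam)))   # F2 o R_{a_2} = R_{a_2} o F1
assert np.allclose(F2(R3(v3), lam), R3(F3(v3, lam)))   # F2 o R_{a_3} = R_{a_3} o F3
assert np.allclose(F3(R4(v4), lam), R4(F2(v4, lam)))   # F3 o R_{a_4} = R_{a_4} o F2
\end{verbatim}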
If we now define \begin{align} \begin{array}{ll} a := \left.\frac{\partial f(x,y; \lambda)}{\partial x}\right|_{(0;0)} \, , &c := \left.\frac{\partial f(x,y; \lambda)}{\partial y}\right|_{(0;0)} \, ,\\ b := \left.\frac{\partial g(y,x; \lambda)}{\partial y}\right|_{(0;0)} \, , & d := \left.\frac{\partial g(y,x; \lambda)}{\partial x}\right|_{(0;0)} \, , \end{array} \end{align} then the Jacobian matrices of the network maps (in the direction of the variables $x_i$ and $y_j$, but not $\lambda$) are given by \begin{align} DF^{{\bf N}_1}(0; 0) &= \begin{pmatrix*}[l] a & c & 0 & 0 & 0 \\ 0 & b & d & 0 & 0 \\ 0 & 0 & a & c & 0 \\ 0 & 0 & d & b & 0 \\ 0 & 0 & 0 & c & a \\ \end{pmatrix*} \, ,\\ \nonumber DF^{{\bf N}_2}(0; 0) &= \begin{pmatrix*}[l] a & c & 0 & 0 \\ 0 & b & d & 0 \\ 0 & 0 & a & c \\ 0 & 0 & d & b \\ \end{pmatrix*} \text{ and } \, DF^{{\bf N}_3}(0; 0) = \begin{pmatrix*}[l] b & d & 0 \\ 0 & a & c \\ 0 & d & b \\ \end{pmatrix*} \, . \end{align} It is not hard to see that $DF^{{\bf N}_1}(0; 0)$ is non-invertible precisely when either $a = 0$, $b= 0$, or \begin{equation}\label{matrixofexam} \det \begin{pmatrix*}[l] a & c \\ d & b \\ \end{pmatrix*} = ab-cd = 0\, . \end{equation} Note that generically only one of these three conditions ($a=0$, $b=0$ or $ab-cd = 0$) is satisfied. We shall study these cases separately. \\ \mbox{} \\ \noindent {\bf Case 1:} We start with the case $a = 0$, where we assume in addition that $b \not= 0$ and $ab-cd \not= 0$. It follows that the kernels (which in this case are also the generalised kernels) of $D_XF^{{\bf N}_i}(0; 0)$, $i = 1,2,3$, are given by \begin{align}\label{kersnelsof1}\nonumber &\ker(D_XF^{{\bf N}_1}(0; 0)) = \{(x_1, 0,0,0,x_5) \mid x_1, x_5 \in \field{R}\} \subset \field{R}^5\, , \\ &\ker(D_XF^{{\bf N}_2}(0; 0)) = \{(x_1, 0,0,0) \mid x_1 \in \field{R}\} \subset \field{R}^4\, , \\ \nonumber &\ker(D_XF^{{\bf N}_3}(0; 0)) = \{0\} \subset \field{R}^3 \, . \end{align} We will identify these spaces with $\field{R}^2$, $\field{R}$ and $\{0\}$ respectively, using the variables $x_1$ and $x_5$. Recall now that these kernels together form a subrepresentation (it turns out in fact that this subrepresentation is {\it indecomposable}). Under our identifications, the quiver symmetries restrict to this subrepresentation as \begin{align} &R_{a_1}(x_1, x_5) = x_1 \, , R_{a_2}(x_1, x_5) = x_5 \, , R_{a_3}(0) = 0 \text{ and } R_{a_4}(x_1) = 0\, . \end{align} One verifies that a general equivariant map on this subrepresentation must be of the form ${\bf H} = \left\{ H^1, H^2, 0 \right\}$, with $$H^1(x_1, x_5) = (h(x_1), h(x_5))\ \mbox{and}\ H^2(x_1) = h(x_1)\, ,$$ where $h: \field{R} \rightarrow \field{R}$ is any smooth function satisfying $h(0) = 0$. It thus follows from Theorem \ref{LStheorem} that, after performing equivariant Lyapunov-Schmidt reduction, the bifurcation equation that needs to be solved for $F^{{\bf N}_1}$ is of the form $$H^1(x_1, x_5; \lambda) = (h(x_{1}; \lambda), h(x_{5}; \lambda)) = (0,0)\, ,$$ where $h(0;\lambda) = 0$ for all $\lambda$ close to zero. Interestingly, quiver-equivariance therefore implies that the two-dimensional bifurcation equation decouples into two one-dimensional equations. For each of the two components of $H^1$, we generically find a transcritical bifurcation with one branch satisfying $x_i(\lambda) = 0$, and the other satisfying $x_i(\lambda) \sim \lambda$ (with $i = 1$ or $i = 5$). We thus get a remarkable {\bf double transcritical bifurcation} in which a total of four bifurcation branches coalesce.
Their asymptotics are given by $$(x_1(\lambda), x_5(\lambda)) \sim (0,0), (0,\lambda), (\lambda, 0)\ \mbox{and} \ (\lambda, \lambda)\, .$$ From the description of $\ker(D_XF^{{\bf N}_1}(0; 0))$ in equation \eqref{kersnelsof1} we see that these branches lie in the synchrony spaces $\{x_1 = x_3 = x_5, y_2 = y_4\}$, $\{x_1 = x_3, y_2 = y_4\}$, $\{x_3 = x_5, y_2 = y_4\}$ and $\{x_1 = x_5, y_2 = y_4\}$, in order of listing.\\ \ \mbox{} \\ \noindent {\bf Case 2:} Next, we investigate what happens in case $b=0$, while $a \not= 0$ and $ab-cd \not= 0$. It follows that the (generalised) kernels of $D_XF^{{\bf N}_i}(0; 0)$, $i = 1,2,3$, are given by \begin{align}\label{kersnelsof2}\nonumber &\ker(D_XF^{{\bf N}_1}(0; 0)) = \{(-ca^{-1}x, x,0,0,0) \mid x \in \field{R}\} \subset \field{R}^5\, ,\\ &\ker(D_XF^{{\bf N}_2}(0; 0)) = \{(-ca^{-1}x, x,0,0) \mid x \in \field{R}\} \subset \field{R}^4\, , \\ \nonumber &\ker(D_XF^{{\bf N}_3}(0; 0)) = \{(x, 0, 0) \mid x \in \field{R}\} \subset \field{R}^3 \, . \end{align} If we now use the variable $x$ to identify each of these spaces with $\field{R}$, then the quiver symmetries become \begin{align}\label{Rsubrep} &R_{a_1}(x) = x \, , R_{a_2}(x) = 0 \, , R_{a_3}(x) = 0 \text{ and } R_{a_4}(x) = x\, . \end{align} Again, this defines an indecomposable subrepresentation, non-isomorphic to the one we found for Case 1. It follows from (\ref{Rsubrep}) that an equivariant map now must be of the form ${\bf H} = \left\{ h,h,h\right\}$, with $h: \mathbb{R}\to\mathbb{R}$ and $h(0) = 0$. For $F^{{\bf N}_1}$ we therefore get a single {\bf transcritical bifurcation} with branches $x(\lambda) = 0$ and $x(\lambda) \sim \lambda$. Comparing to the expression for $\ker(D_XF^{{\bf N}_1}(0; 0))$, we see that the former branch lies in the synchrony space $\{x_1 = x_3 = x_5, y_2 = y_4\}$, whereas the latter lies in $\{x_3 = x_5\}$. \\ \mbox{} \\ \noindent {\bf Case 3:} Finally, we assume $ab-cd = 0$ while $a,b \not= 0$. In addition, we make the generic assumption that $a+b \not= 0$. As $a+b$ is the trace of the matrix in equation \eqref{matrixofexam}, this means $D_XF^{{\bf N}_1}(0; 0)$ has a simple eigenvalue $0$. We find that there exists a non-zero vector $(s,t) \in \field{R}^2$, unique up to scaling, such that \begin{align}\label{kersnelsof3}\nonumber &\ker(D_XF^{{\bf N}_1}(0; 0)) = \{x\cdot (s,t,s,t,s) \mid x \in \field{R}\} \subset \field{R}^5\, , \\ &\ker(D_XF^{{\bf N}_2}(0; 0)) = \{x\cdot(s,t,s,t) \mid x \in \field{R}\} \subset \field{R}^4\, , \\ \nonumber &\ker(D_XF^{{\bf N}_3}(0; 0)) = \{x\cdot(t,s,t) \mid x \in \field{R}\} \subset \field{R}^3 \, . \end{align} If we use $x$ to identify these spaces with $\field{R}$, then each map $R_{a_i}$ restricts to the identity. This means that we found yet another non-isomorphic indecomposable subrepresentation. It also means that equivariance poses no restrictions on the reduced maps, and we find a {\bf saddle-node bifurcation} within the maximally synchronous space $\{x_1 = x_3 = x_5, y_2 = y_4\}$. \\ \mbox{} \\ \noindent Note that triples of admissible maps ${\bf F} = \left\{ F^{{\bf N}_1}, F^{{\bf N}_2}, F^{{\bf N}_3}\right\}$ for the networks ${\bf N}_1, {\bf N}_2$ and ${\bf N}_3$ constitute only a subset of the collection of all ${\bf Q}$-equivariant maps ${\bf G}=\left\{ G^1, G^2, G^3 \right\}$; see Proposition \ref{quivequitonet}. As a result, we cannot rule out restrictions on the Taylor coefficients of the Lyapunov-Schmidt reduction ${\bf H}$ of ${\bf F}$, in addition to the ones that we found in the above ``generic'' bifurcation analysis.
Such additional restrictions could make for different generic bifurcation scenarios from the ones we described, so that a more in depth analysis is necessary to obtain the generic bifurcations for the admissible maps ${\bf F}$. We were indeed able to verify that the bifurcations that we described above for generic reduced quiver equivariant vector fields, are also generic for admissible vector fields ${\bf F}$. We actually found that all one-parameter bifurcation scenarios that are generic for quiver equivariant vector fields ${\bf G}=\{G^1, G^2, G^3\}$, are also generic for admissible vector fields ${\bf F} = \{ F^{\bf N}_1, F^{\bf N}_2, F^{\bf N_3}\}$. As these results are to be expected, we will not prove them here. \begin{remk} We end this paper with a remark on representation theory. Recall that, for each of the three cases of the above example (namely, $a=0$, $b=0$ and $ab-cd=0$), we claimed that the kernels of the Jacobian matrices form (non-isomorphic) {\it indecomposable} subrepresentations. In fact, this holds because every endomorphism of the subrepresentation is a scalar multiple of the identity endomorphism (the identity endomorphism consists of an identity map at each node of the quiver). This fact can for instance be seen from our description of the equivariant maps for each of the three cases. The reader familiar with classical representation theory (i.e. pertaining to compact groups) might recognise in this the definition of an \textit{absolutely irreducible subrepresentation}. Indeed, a representation of a compact group is called absolutely irreducible precisely when all of its endomorphisms are given by scalar multiples of the identity, see \cite{golschaef2}. An important result from classical equivariant theory is that a one-parameter steady state bifurcation can generically only occur along a kernel that is an absolutely irreducible representation of the symmetry. This result is especially powerful when combined with an algebraic result known as the \textit{Krull-Schmidt theorem}. The latter theorem states that any (finite dimensional) representation of a group can uniquely be written as the direct sum of a number of irreducible representations. Without going into technical details, these two results together imply that, up to isomorphism, there are only finitely many subrepresentations that one has to consider in a full investigation of possible bifurcation scenarios. A version of the Krull-Schmidt theorem exists for quiver symmetries as well, see \cite{Krause}. We aim to show in a follow-up article that a one-parameter steady state bifurcation in a quiver equivariant ODE occurs generically along an \textit{absolutely indecomposable subrepresentation}. This latter notion means that all endomorphisms of the subrepresentation are scalar multiples of the identity, up to nilpotent maps (in our example, the zero map happens to be the only nilpotent map, but this is not true for general quivers.) We will also show more general results pertaining to generic center subspaces, as well as multiple bifurcation parameters. Analogous results have already been shown for systems with a \textit{monoid} symmetry \cite{schwenker} and \cite{transversality}. A monoid is a generalisation of a group (see Example \ref{monoidex}) but only a special case of a quiver. \end{remk} \end{document}
math
\begin{document} \date{} \mathpzc{m}aketitle \begin{abstract} In a recent paper \cite{K}, M. E. Kahoui and M. Ouali have proved that over an algebraically closed field $k$ of characteristic zero, residual coordinates in $k[X][Z_1,\dots,Z_n]$ are one-stable coordinates. In this paper we extend their result to the case of an algebraically closed field $k$ of arbitrary characteristic. In fact, we show that the result holds when $k[X]$ is replaced by any one-dimensional seminormal domain $R$ which is affine over an algebraically closed field $k$. For our proof, we extend a result of S. Maubach in \cite{M} giving a criterion for a polynomial of the form $a(X)W+P(X,Z_1,\dots,Z_n)$ to be a coordinate in $k[X][Z_1,\dots,Z_n,W]$.\\ \indent Kahoui and Ouali had also shown that over a Noetherian $d$-dimensional ring $R$ containing $\mathbb Q$ any residual coordinate in $R[Z_1,\dots,Z_n]$ is an $r$-stable coordinate, where $r=(2^d-1)n$. We will give a sharper bound for $r$ when $R$ is affine over an algebraically closed field of characteristic zero. \mathpzc{n}oindent {\small {{\bf Keywords.} Polynomial algebra, Residual coordinate, Stable coordinate, Exponential map.}} \mathpzc{n}oindent {\small {{\bf AMS Subject classifications (2010)}. Primary: 13B25; Secondary: 14R25, 14R10, 13A50}}. \end{abstract} \section{Introduction} We will assume all rings to be commutative containing unity. The notation $R^{[n]}$ will be used to denote any $R$-algebra isomorphic to a polynomial algebra in $n$ variables over $R$. \mathpzc{m}edskip We will discuss results connecting coordinates, residual coordinates and stable coordinates in polynomial algebras (see 2.2 to 2.4 for definitions). The study was initiated for the case $n=2$ by Bhatwadekar and Dutta (\cite{BD1}) and later extended to $n>2$ by Das and Dutta (\cite{DD}). \mathpzc{m}edskip An important problem in the study of polynomial algebras is to find fibre conditions for a polynomial $F$ in a polynomial ring $A=R^{[n+1]}$ to be a coordinate in $A$. In the case $R=k[X]$, where $k$ is an algebraically closed field of characteristic zero, S. Maubach gave the following useful result for polynomials linear in one of the variables (\cite[Theorem 4.5]{M}). \begin{thm}\label{maubach} Let $k$ be an algebraically closed field of characteristic zero, $P(X,Z_1,\dots,Z_n)$ be an element in the polynomial ring $k[X,Z_1,\dots,Z_n]$ and $a(X)\in{k[X]}-\{0\}$. Suppose, for each root $\alpha$ of $a(X)$, $P(\alpha,Z_1,\dots,Z_n)$ is a coordinate in $k[Z_1,\dots,Z_n]$. Then, the polynomial $F$ defined by $F:=a(X)W+P(X,Z_1,\dots,Z_n)$ is a coordinate in $k[X,Z_1,\dots,Z_n,W](=k^{[n+2]})$, along with $X$. \end{thm} In this paper, we will show that Maubach's result holds when $k[X]$ is replaced by \textit{any} one-dimensional ring $R$ which is affine over an algebraically closed field $k$ such that either the characteristic of $k$ is zero or $R_{red}$ is seminormal and $a(X)$ is replaced by a non-zerodivisor $a$ in $R$ for which the image of $P$ becomes a coordinate over $R/aR$ (see Theorem \ref{ext-Maubach}). \mathpzc{m}edskip As an application of Theorem \ref{maubach}, Kahoui and Ouali have recently proved the following two results on the connection between residual coordinates and stable coordinates (\cite[Theorem 1.1 and Theorem 1.2]{K}). \begin{thm}\label{Kahoui} Let $k$ be an algebraically closed field of characteristic zero, $R=k[X]~(=k^{[1]})$ and $A=R[Z_1,\dots,Z_n]~(=R^{[n]})$. Then every residual coordinate in $A$ is a $1$-stable coordinate in $A$. 
\end{thm} \begin{thm}\label{Kahoui-general} Let $R$ be a Noetherian $d$-dimensional ring containing $\mathbb Q$ and $A=R[Z_1,\dots,Z_n]~(=R^{[n]})$. Then every residual coordinate in $A$ is a $(2^d-1)n$-stable coordinate in $A$. \end{thm} Using our generalization of Maubach's result (Theorem \ref{ext-Maubach}) and the concept of exponential maps (see Definition \ref{exp map} and Proposition \ref{exp-auto}) we will generalize Theorem \ref{Kahoui} to the case when $k[X]$ is replaced by a one-dimensional ring $R$ which is affine over an algebraically closed field $k$ such that either the characteristic of $k$ is zero or $R_{red}$ is seminormal (see Theorem \ref{ext-Kahoui}). Next, we will show (Theorem \ref{ext kahoui general}) that the condition ``$R$ contains $\mathbb Q$'' can be dropped from Theorem \ref{Kahoui-general}. We will also show that when $R$ is affine over an algebraically closed field of characteristic zero, the bound $(2^d-1)n$ given in Theorem \ref{Kahoui-general} can be reduced to $2^{d-1}(n+1)-n$ (see Theorem \ref{efficient bound}). \section{Preliminaries} In this section we recall a few definitions and some well-known results. \begin{defn} {\em A reduced ring $R$ is said to be {\it seminormal} if it satisfies the condition: for $b,c\in{R}$ with $b^3=c^2$, there is an $a\in{R}$ such that $a^2=b$ and $a^3=c$.} \end{defn} \begin{defn} {\em Let $A=R[X_1,\dots,X_n]~(=R^{[n]})$ and $F\in{A}$. $F$ is said to be a {\it coordinate in $A$} if there exist $F_2,\dots,F_n \in{A}$ such that $A=R[F,F_2,\dots,F_n]$.} \end{defn} \begin{defn} {\em Let $A=R[X_1,\dots,X_n]~(=R^{[n]})$, $F\in{A}$ and $m\geq{0}$. $F$ is said to be an {\it $m$-stable coordinate in $A$} if $F$ is a coordinate in $A^{[m]}$.} \end{defn} \begin{defn} {\em Let $A=R[X_1,\dots,X_n]~(=R^{[n]})$ and $F\in{A}$. $F$ is said to be a {\it residual coordinate in $A$} if, for each prime ideal $\mathpzc{m}athpzc{p}$ of $R$, $A\otimes_{R}k(\mathpzc{m}athpzc{p})=k(\mathpzc{m}athpzc{p})[\overline{F}]^{[n-1]}$, where $\overline{F}$ denotes the image of $F$ in $A\otimes_{R}k(\mathpzc{m}athpzc{p})$ and $k(\mathpzc{m}athpzc{p}):= \dfrac{R_{\mathpzc{m}athpzc{p}}}{\mathpzc{m}athpzc{p} R_{\mathpzc{m}athpzc{p}}}$ is the residue field of $R$ at $\mathpzc{m}athpzc{p}$.} \end{defn} Now, we state some known results connecting coordinates, residual coordinates and stable coordinates in polynomial algebras. First, we state an elementary result (\cite[Lemma 1.1.9]{E}). \begin{lem}\label{nilradical} Let $R$ be a ring, $nil(R)$ denote the nilradical of $R$ and $F\in{R[Z_1,\dots,Z_n]}~(=R^{[n]})$. Let $\overline{R}:=\dfrac{R}{nil(R)}$ and $\overline{F}$ denote the image of $F$ in $\overline{R}[Z_1,\dots,Z_n]$. Then $F$ is a coordinate in $R[Z_1,\dots,Z_n]$ if and only if $\overline{F}$ is a coordinate in $\overline{R}[Z_1,\dots,Z_n]$. \end{lem} The following result has been proved by Kahoui and Ouali in \cite[Lemma 3.4]{K}. \begin{prop}\label{Artinian} Let $R$ be an Artinian ring and $A=R[Z_1,\dots,Z_n]~(=R^{[n]})$. Then every residual coordinate in $A$ is a coordinate in $A$. \end{prop} The following result on residual coordinates was proved by Bhatwadekar and Dutta (\cite[Theorem 3.2]{BD1}). \begin{thm}\label{res-stable} Let $R$ be a Noetherian ring such that either $R$ contains $\mathbb Q$ or $R_{red}$ is seminormal. Let $A= R^{[2]}$ and $F\in{A}$. If $F$ is a residual coordinate in $A$, then $F$ is a coordinate in $A$. \end{thm} Now, we state a result on stable coordinates due to J. Berson, J. W. Bikker and A. 
van den Essen (\cite[Proposition 5.3]{BBE}); the following version was observed by Kahoui and Ouali in \cite{K}. \begin{thm}\label{essen stable} Let $R$ be a ring, $a$ be a non-zerodivisor of $R$ and $P\in{R[Z_1,\dots,Z_n]}~(=R^{[n]})$. Suppose, the image of $P$ is an $m$-stable coordinate in $\dfrac{R}{aR}[Z_1,\dots,Z_n]$. Then the polynomial $F$ defined by $F:=aW+P$ is a $(2m+n-1)$-stable coordinate in $R[Z_{1},\dots, Z_{n},W]~(=R^{[n+1]})$. \end{thm} The following result on linear planes over a discrete valuation ring was proved by S.M. Bhatwadekar and A.K. Dutta in \cite[Theorem 3.5]{BhA}. \begin{thm}\label{dvr} Let $R$ be a discrete valuation ring with parameter $\pi$, $K=R[\frac{1}{\pi}]$, $k=\frac{R}{\pi R}$ and $F=aW-b\in{R[Y,Z,W]}~(=R^{[3]})$, where $a(\neq{0}),b\in{R[Y,Z]}$. Suppose that $\dfrac{R[Y,Z,W]}{(F)}=R^{[2]}$. Let, for each $G\in{R[Y,Z,W]}$, $\overline{G}$ denote the image of $G$ in $k[Y,Z,W]$. Then there exists an element $Y_0\in{R[Y,Z]}$ such that $a\in{R[Y_0]}$, $\overline{Y_0}\notin{k}$ and $K[Y,Z]=K[Y_0]^{[1]}$. Moreover, if $\dim(k[\overline{F},\overline{Y_0}])=2$, then $F$ is a coordinate in $R[Y,Z,W]$. \end{thm} Now, we define $\mathbb A^{r}$-fibration and state a theorem of A. Sathaye (\cite[Theorem 1]{S}) on the triviality of $\mathbb A^2$-fibration over a discrete valuation ring containing $\mathbb Q$. \begin{defn} {\em An $R$-algebra $A$ is said to be an {\it $\mathbb A^{r}$-fibration over $R$} if the following hold: \begin{enumerate} \item[\rm(i)] $A$ is finitely generated over $R$, \item[\rm(ii)] $A$ is flat over $R$, \item[\rm(iii)] $A\otimes_{R}{k(\mathpzc{p})}=k(\mathpzc{p})^{[r]}$, for each prime ideal $\mathpzc{p}$ of $R$. \end{enumerate}} \end{defn} \begin{thm}\label{Sathaye} Let $R$ be a discrete valuation ring containing $\mathbb Q$. Let $A$ be an $\mathbb A^2$-fibration over $R$. Then $A=R^{[2]}$. \end{thm} Next, we define exponential maps and record a basic result. \begin{defn}\label{exp map} {\em Let $k$ be a field of arbitrary characteristic, $R$ a $k$-algebra and $A$ be an $R$-algebra. Let $\delta:A\longrightarrow{A^{[1]}}$ be an $R$-algebra homomorphism. We write $\delta=\delta_{W}:A\longrightarrow{A[W]}$ if we wish to emphasize an indeterminate $W$. We say $\delta$ is an $R$-linear {\it exponential map} if \begin{enumerate} \item[\rm(i)] ${\epsilon_{0}}{\delta_{W}}$ is the identity map on $A$, where ${\epsilon_{0}}:A[W]\longrightarrow{A}$ is the $A$-algebra homomorphism defined by $\epsilon_{0}(W)=0$. \item[\rm(ii)] ${\delta_{U}}{\delta_{W}}=\delta_{U+W}$, where $\delta_{U}$ is extended to a homomorphism of $A[W]$ into $A[U,W]$ by setting ${\delta_{U}}(W)=W$. \end{enumerate}} \end{defn} \noindent For an exponential map $\delta : A\longrightarrow{A^{[1]}}$, we get a sequence of maps ${\delta}^{(i)}:A\longrightarrow{A}$ as follows: for $a\in{A}$, set ${\delta}^{(i)}(a)$ to be the coefficient of $W^i$ in $\delta_{W}{(a)}$, i.e., for $\delta_{W}:A\longrightarrow{A[W]}$, we have $$\delta_{W}{(a)}=\sum{\delta^{(i)}(a)W^{i}}.$$ Note that since $\delta_{W}(a)$ is an element in $A[W]$, the sequence ${\lbrace\delta^{(i)}(a)\rbrace}_{i\geq{0}}$ has only finitely many nonzero elements for each $a\in{A}$.
Since $\delta_{W}$ is a ring homomorphism, we see that ${\delta}^{(i)}:A\longrightarrow{A}$ is linear for each $i$ and that the Leibnitz Rule: ${\delta}^{(n)}(ab)=\underset{i+j=n}{\sum}{\delta^{(i)}(a)\delta^{(j)}}(b)$ holds for all $n$ and for all $a,b\in{A}$. \mathpzc{m}edskip \mathpzc{n}oindent The above properties (i) and (ii) of the exponential map $\delta_W$ translate into the following properties: \begin{enumerate} \item[\rm(i)$^{\mathpzc{m}athpzc{p}rime}$] $\delta^{(0)}$ is the identity map on $A$. \item[\rm(ii)$^{\mathpzc{m}athpzc{p}rime}$] The ``iterative property'' $\delta^{(i)}\delta^{(j)}={\binom{i+j}{j}}\delta^{(i+j)}$ holds for all $i,j\geq{0}$. \end{enumerate} The following result on exponential maps can be deduced from properties (ii) and (i)$'$ stated above. \begin{prop}\label{exp-auto} Let $k$ be a field of arbitrary characteristic, $R$ a $k$-algebra and $A$ an $R$-algebra. Let $\delta_{W}:A\longrightarrow {A[W]}$ be an $R$-linear exponential map and ${\lbrace\delta^{(i)}\rbrace}_{i\geq{0}}$ the sequence of maps on $A$ defined above. Then the extension of $\delta_{W}$ to $\widetilde{\delta_W}:A[W]\longrightarrow {A[W]}$ defined by setting $\widetilde{\delta_W}(W)=W$, is an $R[W]$-automorphism of $A[W]$ with inverse given by the sequence ${\lbrace(-1)^{i}\delta^{(i)}\rbrace}_{i\geq{0}}$. \end{prop} Next, we quote some famous results which will be needed later in this paper. First, we state Bass's cancellation theorem (\cite[Theorem 9.3]{B}). \begin{thm}\label{Bass Cancellation} Let $R$ be a Noetherian $d$-dimensional ring and $Q$ be a finitely generated projective $R$-module whose rank at each localization at a prime ideal is at least $d+1$. Let $M$ be a finitely generated projective $R$-module such that $M\oplus{Q}\cong{M\oplus{N}}$ for some $R$-module $N$. Then $Q\cong{N}$. \end{thm} Now, we state Quillen's local-global theorem (\cite[Theorem 1]{Q}). \begin{thm}\label{extended} Let $R$ be a Noetherian ring, $D=R^{[1]}$ and $M$ be a finitely generated $D$-module. Suppose, for each maximal ideal $\mathpzc{m}$ of $R$, $M_{\mathpzc{m}}$ is extended from $R_{\mathpzc{m}}$. Then $M$ is extended from $R$. \end{thm} For convenience, we record a basic result on symmetric algebras of finitely generated modules (\cite[Lemma 1.3]{Ea}). \begin{lem}\label{symm} Let $R$ be a ring and $M,N$ two finitely generated $R$-modules. If $Sym_{R}(M)$ and ${Sym_{R}(N)}$ denote the respective symmetric algebras then $Sym_{R}(M)\cong{Sym_{R}(N)}$ as $R$-algebras if and only if $M\cong{N}$ as $R$-modules. \end{lem} Finally, we quote the theorem on the triviality of locally polynomial algebras proved by Bass-Connell-Wright (\cite{BCW}) and independently by Suslin (\cite{Su}). \begin{thm}\label{eq: local-global} Let $A$ be a finitely presented $R$-algebra. Suppose that for each maximal ideal $\mathpzc{m}$ of $R$, the $R_{\mathpzc{m}}$-algebra $A_{\mathpzc{m}}$ is $R_{\mathpzc{m}}$-isomorphic to the symmetric algebra of some $R_{\mathpzc{m}}$-module. Then $A$ is $R$-isomorphic to the symmetric algebra of a finitely generated $projective$ $R$-module. \end{thm} \section{Main Results} In this section we prove our main results. First, we will extend Theorem \ref{maubach} (see Theorem \ref{ext-Maubach}). For convenience, we record below a local-global result. \begin{lem}\label{one dimensional cancellation} Let $R$ be a one-dimensional ring, $n\geq{2}$, $A=R[Z_1,\dots,Z_n]~(=R^{[n]})$ and $F\in{A}$. Suppose that for each maximal ideal $\mathpzc{m}$ of $R$, $A_{\mathpzc{m}}=R_{\mathpzc{m}}[F]^{[n-1]}$. 
Then $F$ is a coordinate in $A$. \end{lem} \begin{proof} Let $D:=R[F]$, $\mathpzc{n}$ be an arbitrary maximal ideal of $D$, $\mathpzc{p}:=\mathpzc{n}\cap{R}$ and $\mathpzc{m}_0$ a maximal ideal of $R$ such that $\mathpzc{p}\subseteq{\mathpzc{m}_0}$. From the natural maps $R_{\mathpzc{m}_{0}}\longrightarrow{R_{\mathpzc{p}}}\longrightarrow{D_{\mathpzc{n}}}$, we see that $A_{\mathpzc{p}}=R_{\mathpzc{p}}[F]^{[n-1]}$ and hence $A_{\mathpzc{n}}={D_{\mathpzc{n}}}^{[n-1]}$. By Theorem \ref{eq: local-global}, there exists a projective $D$-module $Q'$ of rank $(n-1)$ such that $A\cong{Sym_{D}(Q')}$. Since, for each maximal ideal $\mathpzc{m}$ of $R$, $A_{\mathpzc{m}}=R_{\mathpzc{m}}[F]^{[n-1]} \cong Sym_{D_{\mathpzc{m}}}(({D_{\mathpzc{m}}})^{n-1})$, by Lemma \ref{symm}, we have $Q'_{\mathpzc{m}} \cong ({D_{\mathpzc{m}}})^{n-1} \cong (R_{\mathpzc{m}})^{n-1} \otimes_R D $. Thus, $Q'$ is locally extended from $R$ and hence by Theorem \ref{extended}, $Q'$ is extended from $R$, i.e., there exists a projective $R$-module $Q$ of rank $(n-1)$ such that $Q'=Q\otimes_{R}{D}$. Therefore, $A\cong{{Sym_{R}(Q)}\otimes_{R}{D}}\cong{Sym_{R}(Q)\otimes_{R}{Sym_{R}(R)}}\cong{Sym_{R}(Q\oplus{R})}$. Since $A=R[Z_1,\dots,Z_n]\cong{Sym_{R}(R^{n})}$, by Lemma \ref{symm}, we have $Q\oplus{R}\cong{R^{n}}$. Hence, by Theorem \ref{Bass Cancellation}, $Q$ is a free $R$-module of rank $(n-1)$. Therefore, $Q'$ is a free $D$-module of rank $(n-1)$. Hence, $A=R[F]^{[n-1]}$. \end{proof} We now extend Theorem \ref{maubach}. \begin{thm}\label{ext-Maubach} Let $k$ be an algebraically closed field and $R$ a one-dimensional affine $k$-algebra. Let $a$ be a non-zerodivisor in $R$ and $P(Z_{1},\dots, Z_{n})\in{R[Z_{1},\dots, Z_{n}]}~(=R^{[n]})$ be such that the image of $P$ is a coordinate in $\dfrac{R}{aR}[Z_{1},\dots, Z_{n}]$. If $R_{red}$ is seminormal or if the characteristic of $k$ is zero then the polynomial $F$ defined by $F:=aW+P$ is a coordinate in $R[Z_{1},\dots, Z_{n},W]~(=R^{[n+1]})$. \end{thm} \begin{proof} By Lemma \ref{one dimensional cancellation}, it is enough to consider the case when $R$ is a local ring with unique maximal ideal $\mathpzc{m}$. Since $R$ is an affine algebra over the algebraically closed field $k$, the residue field $\dfrac{R}{\mathpzc{m}}$ is $k$ (by Hilbert's Nullstellensatz). Let $\eta$ denote the canonical map $R[Z_1,\dots,Z_n,W]\longrightarrow{k[Z_1,\dots,Z_n,W]}(\subset{R[Z_1,\dots,Z_n,W]})$. \medskip \indent If $a\notin{\mathpzc{m}}$, then $a$ is a unit in $R$ and hence $F$ is a coordinate in $R[ Z_{1},\dots, Z_{n},W]$. So, we assume that $a\in{\mathpzc{m}}$. Then $\eta(F)=\eta(P)$. Since the image of $P$ is a coordinate in $\dfrac{R}{aR}[Z_1,\dots,Z_n]$, $g:=\eta(P)(=\eta(F))$ is a coordinate in $k[Z_1,\dots,Z_n]$ and hence in $R[Z_1,\dots,Z_n]$. Thus, there exist $g_2,\dots,g_n\in{k[Z_1,\dots,Z_n]}$ such that $k[Z_1,\dots,Z_n]=k[g,g_2,\dots,g_n]$ and hence $R[Z_1,\dots,Z_n]=R[g,g_2,\dots,g_n]$. \medskip \indent Set $A:=R[g_2,\dots,g_n]$ and $B:=R[Z_{1},\dots, Z_{n},W]=A[g,W](=A^{[2]})$. We now show that $F$ is a residual coordinate in $B=A[g,W]$. Let $Q$ be an arbitrary prime ideal of $A$ and $\mathpzc{p}=Q\cap{R}$. If $\mathpzc{p}=\mathpzc{m}$, then $k=k(\mathpzc{p})\hookrightarrow{k(Q)}$ and the image of $F$ in $B\otimes_{A}k(Q)$, being the image of $g(=\eta(F))$ in $B\otimes_{A}k(Q)$, is a coordinate in $B\otimes_{A}k(Q)$.
If $\mathpzc{p}\neq{\mathpzc{m}}$, then $\mathpzc{p}$ is a minimal prime ideal of the one-dimensional ring $R$ and hence $\mathpzc{p}\in{Ass(R)}$. Therefore, $a\notin{\mathpzc{p}}$. Hence $a$ is a unit in $k(\mathpzc{p})$ and therefore the image of $F$ in $B\otimes_{A}k(Q)$ is a coordinate in $B\otimes_{A}k(Q)$. Thus, $F$ is a residual coordinate in $A[g,W]$. Hence, by Theorem \ref{res-stable}, $F$ is a coordinate in $A[g,W](=R[Z_1,\dots,Z_n,W])$. \end{proof} Now, we extend Theorem \ref{Kahoui} to any algebraically closed field of arbitrary characteristic. First, we prove an easy lemma on existence of exponential maps. \noindent \begin{lem}\label{exp} Let $R$ be a ring and $A=R[Z_1,\dots,Z_n]$. Let $S$ be a multiplicatively closed subset of $R$ consisting of non-zerodivisors in $R$ and $P\in{R[Z_1,\dots, Z_n]}~(=R^{[n]})$. If $P$ is a coordinate in $S^{-1}A$ then there exists an $R$-linear exponential map ${\varphi}_{W}: A\rightarrow{A[W]}$ such that $\varphi_W(P)=aW+P$, for some $a\in{S}$. \end{lem} \begin{proof} Since $P$ is a coordinate in $S^{-1}A$, there exist $c\in{S}$ and $g_2, \dots, g_n\in{A}$ such that $$ A_{c}= R_{c}[g_2, \dots, g_n,P]~(={R_{c}[g_2,\dots,g_n]}^{[1]}), $$ where $A_c$ and $R_c$ denote the localisation of the rings $A$ and $R$ respectively at the multiplicative set $\{1,c,c^2,\dots\}$. Let $Z_i=\sum{a_{i, j_1,\dots, j_n}P^{j_1}g_{2}^{j_2}\dots g_{n}^{j_n}}$, where $a_{i,j_1,\dots, j_n}\in{R_{c}}$, for all $i, 1\leq{i}\leq{n}$ and $j_1,\dots, j_n\geq{0}$. Then there exists $m\geq{0}$ such that $c^{m}a_{i,j_1,\dots, j_n}\in{R}$, for all $i, 1\leq{i}\leq{n}$ and $j_1,\dots, j_n\geq{0}$. Define an $R_{c}[g_2,\dots,g_n]$-algebra homomorphism ${\psi_{W}}: A_{c}\rightarrow{ A_{c}[W]}$ by setting ${\psi_{W}}(P):=P+c^{m}W$. Clearly, ${\psi_{W}}$ is an exponential map and ${\psi_{W}}(Z_{i})=Z_{i}+ h_i$, for some $h_i\in{A[W]}$. Now, if we set $a$ to be $c^m$ and define $\varphi_{W}:={\psi_{W}}|_A$ then $\varphi_{W}$ is our desired exponential map. \end{proof} We now generalize Theorem \ref{Kahoui}. \begin{thm}\label{ext-Kahoui} Let $k$ be an algebraically closed field, $R$ a one-dimensional affine $k$-algebra such that either the characteristic of $k$ is zero or $R_{red}$ is seminormal. Then, every residual coordinate in $A:=R[Z_1,\dots,Z_n]~(=R^{[n]}), n\geq{3}$, is a $1$-stable coordinate. \end{thm} \begin{proof} By Lemma \ref{nilradical}, it is enough to consider the case when $R$ is a reduced ring. Let $P(Z_1,\dots, Z_n)$ be a residual coordinate in $A$ and $S$ be the set of all non-zerodivisors in $R$. Since $S^{-1}R$ is Artinian, by Proposition \ref{Artinian}, $P$ is a coordinate in $S^{-1}A$. Therefore, by Lemma \ref{exp}, there exists an $R$-linear exponential map ${\varphi}_{W}: A\rightarrow{A[W]}$ such that $\varphi_W(P)=aW+P$, for some $a\in{S}$. Now, by Theorem \ref{ext-Maubach}, $aW+P$ is a coordinate in $A[W]$. Since by Proposition \ref{exp-auto}, the extension of ${\varphi}_{W}$ to $A[W]$ is an $R$-automorphism of $A[W]$ which maps $P$ to $aW+P$, $P$ is a $1$-stable coordinate in $A$.
\end{proof} \begin{rem} {\em Recall that if $R$ is as in Theorem \ref{ext-Kahoui} then a residual coordinate in $R[Z_1,Z_2]~(=R^{[2]})$ is actually a coordinate (\cite[Theorem 3.2]{BD1}).} \end{rem} Now, using Lemma \ref{exp}, we will show that the condition ``$R$ contains $\mathbb Q$'' can be dropped from Theorem \ref{Kahoui-general}. \begin{thm}\label{ext kahoui general} Let $R$ be a Noetherian $d$-dimensional ring. Then every residual coordinate in $R[Z_1,\dots,Z_n]~(=R^{[n]})$ is a $(2^d-1)n$-stable coordinate. \end{thm} \begin{proof} We prove the result by induction on $d$. If $d=0$, the result follows from Proposition \ref{Artinian}. Now, let $d\geq{1}$ and $P$ a residual coordinate in $A:={R[Z_1,\dots,Z_n]}$. We show that $P$ is a $(2^d-1)n$-stable coordinate in $A$.\\ \indent By Lemma \ref{nilradical}, we may assume that $R$ is a reduced ring. Let $S$ be the set of all non-zerodivisors of $R$. Then, as $S^{-1}R$ is an Artinian ring, by the case $d=0$, $P$ is a coordinate in $S^{-1}A$. Hence, by Lemma \ref{exp}, there exists a non-zerodivisor $a$ in $R$ and an exponential map ${\varphi}_{W}:A\longrightarrow{A[W]}$ such that $\varphi_{W}(P)=aW+P$. Now, we observe that the image of $P$ is a residual coordinate in $\dfrac{R}{aR}[Z_1,\dots,Z_n]$ and $\dfrac{R}{aR}$ is a $(d-1)$-dimensional ring. So, by induction hypothesis, $P$ is a $(2^{d-1}-1)n$-stable coordinate in $\dfrac{R}{aR}[Z_1,\dots,Z_n]$. Hence, by Theorem \ref{essen stable}, $aW+P$ is an $r$-stable coordinate in $A[W]$, where $r=2n(2^{d-1}-1)+n-1=(2^d-1)n-1$. Since by Proposition \ref{exp-auto}, the extension of ${\varphi}_{W}$ to $A[W]$ is an $R$-automorphism of $A[W]$ which maps $P$ to $aW+P$, $P$ is an $r+1~(=(2^d-1)n)$-stable coordinate in $A$. \end{proof} Next, using Theorem \ref{ext-Maubach} and Lemma \ref{exp}, we will show that under the additional hypothesis that $R$ is affine over an algebraically closed field of characteristic zero, we can get a sharper bound in Theorem \ref{ext kahoui general}. \begin{thm}\label{efficient bound} Let $k$ be an algebraically closed field of characteristic zero and $R$ a finitely generated $k$-algebra of dimension $d$. Then every residual coordinate in $R[Z_1,\dots,Z_n]~(=R^{[n]})$ is an $r$-stable coordinate, where $r=(2^d-1)n-2^{d-1}(n-1)=2^{d-1}{(n+1)}-n$. \end{thm} \begin{proof} We prove the result by induction on $d$. If $d=1$, the result follows from Theorem \ref{ext-Maubach}. Let $d\geq{2}$ and $P$ a residual coordinate in $A:={R[Z_1,\dots,Z_n]}$. We show that $P$ is a $(2^{d-1}{(n+1)}-n)$-stable coordinate in $A$.\\ \indent By Lemma \ref{nilradical}, we may assume that $R$ is a reduced ring. Let $S$ be the set of all non-zerodivisors of $R$. Since $S^{-1}R$ is an Artinian ring, by Proposition \ref{Artinian}, $P$ is a coordinate in $S^{-1}A$. Hence, by Lemma \ref{exp}, there exists a non-zerodivisor $a$ in $R$ and an exponential map ${\varphi}_{W}:A\longrightarrow{A[W]}$ such that $\varphi_{W}(P)= aW+P$. Now, we observe that the image of $P$ is a residual coordinate in $\dfrac{R}{aR}[Z_1,\dots,Z_n]$ and $\dfrac{R}{aR}$ is a $(d-1)$-dimensional ring containing an algebraically closed field of characteristic zero. So, by induction hypothesis, $P$ is a $(2^{d-2}(n+1)-n)$-stable coordinate in $\dfrac{R}{aR}[Z_1,\dots,Z_n]$. Now, arguing as in Theorem \ref{ext kahoui general}, the result follows from Theorem \ref{essen stable} and Proposition \ref{exp-auto}. 
\end{proof} \mathpzc{m}edskip The following question asks whether Theorem \ref{ext-Maubach} (and thereby Theorem \ref{ext-Kahoui}) can be extended to an affine algebra of \textit{any} dimension over \textit{any} field (not necessarily algebraically closed) of arbitrary characteristic. \begin{ques} Let $k$ be a field, $R$ an affine $k$-algebra, $a$ be a non-zerodivisor in $R$ and $P(Z_{1},\dots, Z_{n})\in{R[Z_{1},\dots, Z_{n}]}$ be such that the image of $P$ is a coordinate in $\dfrac{R}{aR}[Z_{1},\dots, Z_{n}]$. Suppose, $R_{red}$ is seminormal or the characteristic of $k$ is zero. Then, is $F:=aW+P$ a coordinate in $R[Z_{1},\dots, Z_{n},W]$? \end{ques} \mathpzc{n}oindent The following result shows that for $n=2$, the above question has an affirmative answer when $R$ is a Dedekind domain containing a field of characteristic zero. \begin{prop} Let $R$ be a Dedekind domain containing $\mathbb Q$, $a\in{R-{\lbrace0\rbrace}}$ and $F= aW+P(Y,Z)\in{R[Y,Z,W]}~(=R^{[3]})$. If the image of $P$ is a coordinate in $\dfrac{R}{aR}[Y,Z]$, then $F$ is a coordinate in $R[Y,Z,W]$. \end{prop} \begin{proof} By Lemma \ref{one dimensional cancellation}, it is enough to assume that $R$ is a discrete valuation ring with parameter $t$. Let $k=\dfrac{R}{tR}$, $K=R[\frac{1}{t}]$, $A=\dfrac{R[Y,Z,W]}{(F)}$ and $\overline{P}$ denote the image of $P$ in $k[Y,Z]$. Note that $a$ is a unit in $K$ and hence $F$ is a coordinate in $K[Y,Z,W]$; in particular $A[\frac{1}{t}]=K^{[2]}$. \mathpzc{m}edskip \indent If $a\mathpzc{n}otin{tR}$, then $a$ is a unit in $R$ and hence $F$ is a coordinate in $R[Y,Z,W]$. We now consider the case $a\in{tR}$. Considering the natural map $\dfrac{R}{aR}\rightarrow{\dfrac{R}{tR}(=k)}$, we see that $\overline{P}$ is a coordinate in $k[Y,Z]$ and hence $\dfrac{A}{tA}=k^{[2]}$. It also follows that $P\mathpzc{n}otin{tR[Y,Z]}$ and hence $a$ and $P$ are coprime in $R[Y,Z]$. Thus, $F(=aW+P)$ is irreducible in $R[Y,Z,W]$. So, $A$ is a torsion free module over the discrete valuation ring $R$ and hence $A$ is flat over $R$. Thus, $A$ is an $\mathbb A^{2}$-fibration over $R$ and hence by Theorem \ref{Sathaye}, $A=R^{[2]}$. As $\overline{P}$ is a coordinate in $k[Y,Z]$, we see that $\overline{P}\mathpzc{n}otin{k[Y]\cap{k[Z]}}(=k)$. Hence at least one of the rings $k[Y,\overline{P}]$ and $k[Z,\overline{P}]$ is of dimension two. Now, the result follows from Theorem \ref{dvr}. \end{proof} \mathpzc{n}oindent {\bf Acknowledgements.} The authors thank S. M. Bhatwadekar and Neena Gupta for the current version of Theorem \ref{ext-Maubach} and Theorem \ref{efficient bound} and their useful suggestions. The second author also acknowledges Council of Scientific and Industrial Research (CSIR) for their research grant. {\small{ }} \end{document}
math
\begin{document} \title{The mean number of 2-torsion elements in the class groups of $n$-monogenized cubic fields} \begin{abstract} We prove that, on average, the monogenicity or $n$-monogenicity of a cubic field has an altering effect on the behavior of the 2-torsion in its class group. \end{abstract} \setcounter{tocdepth}{2} \tableofcontents \section{Introduction}\label{secintro} The seminal works of Cohen--Lenstra \cite{CL} and Cohen--Martinet \cite{CM1} provide heuristics that predict, for suitable ``good primes'' $p$, the distribution of the $p$-torsion subgroups ${\rm Cl}_p(K)$ of the class groups of number fields $K$ of degree $d$. To date, only two cases of these conjectures have been proven; they concern the mean sizes of the 3-torsion subgroups of the class groups of quadratic fields, and the 2-torsion subgroups of the class groups of cubic fields: \begin{thm}[Davenport--Heilbronn {\cite[Theorem 3]{DH}}] \label{thquadfields} Let $K$ run through all isomorphism classes of quadratic fields ordered by discriminant. Then: \begin{itemize} \item[{\rm (a)}] The average size of ${\rm Cl}_3(K)$ over real quadratic fields $K$ is $4/3;$ \item[{\rm (b)}] The average size of ${\rm Cl}_3(K)$ over complex quadratic fields $K$ is $2$. \end{itemize} \end{thm} \begin{thm}[{\cite[Theorem 5]{dodqf}}] \label{thallcubicfields} Let $K$ run through all isomorphism classes of cubic fields ordered by discriminant. Then: \begin{itemize} \item[{\rm (a)}] The average size of ${\rm Cl}_2(K)$ over totally real cubic fields $K$ is $5/4;$ \item[{\rm (b)}] The average size of ${\rm Cl}_2(K)$ over complex cubic fields $K$ is $3/2$. \end{itemize} \end{thm} These two theorems have had a number of important applications. For example, they imply that, when ordered by discriminant, a positive proportion of quadratic fields have class number indivisible by 3, and a positive proportion of cubic fields have class number indivisible by 2 -- among numerous other consequences relating to the asymptotics and ranks of elliptic curves, the density of discriminants of number fields of given degree, the existence of units in number fields with given signatures, rational points on surfaces, the statistics of Galois representations and modular forms of weight one, and more (see, e.g., \cite{FNT}, \cite{Vatsal}, \cite{DK}, \cite{BhSh}, \cite{dodqf}, \cite{BV}, \cite{Browning}, \cite{Ellenberg}, \cite{BG}, \cite{AK}). The averages occurring in Theorems~\ref{thquadfields} and \ref{thallcubicfields} are remarkably robust. In \cite[Corollary 4]{BV1} and \cite[Theorem 1]{BV}, it was shown that the averages in these two theorems remain unchanged even when one ranges over quadratic and cubic fields, respectively, that satisfy any specified set of local splitting conditions at finitely many primes, or even suitable sets of local conditions at {\it infinitely} many primes. A natural next question that arises is: how stable are the averages in Theorems~\ref{thquadfields} and \ref{thallcubicfields} if more global conditions are imposed on the fields being considered? One natural such global condition on a field is {\it monogenicity}, i.e., that the ring of integers in the field is generated by one element. While monogenicity is a global condition that automatically holds for all quadratic fields, it is a nontrivial condition for cubic fields. 
The purpose of this paper is to show, surprisingly (at least to the authors!), that monogenicity does have a nontrivial effect on the behavior of class groups of cubic fields and, in particular, the condition of monogenicity does change the averages occurring in Theorem~\ref{thallcubicfields}. \subsection{The stability of averages in Theorem~\ref{thallcubicfields} when ordering cubic fields by height} Recall that a number field $K$ is called {\bf monogenic} if its ring ${\mathcal O}_K$ of integers is generated by one element as a ${\mathbb Z}$-algebra, i.e., ${\mathcal O}_K={\mathbb Z}[\alpha]$ for some element $\alpha\in{\mathcal O}_K$; such an $\alpha$ is then called a {\bf monogenizer} of $K$ or of ${\mathcal O}_K$. The asymptotic number of monogenic cubic fields of bounded discriminant is not known, and it is therefore more convenient to order these fields by the heights of their monic defining polynomials. A pair $(K,\alpha)$ is called a {\bf monogenized cubic field} if $K$ is a cubic field and ${\mathcal O}_K={\mathbb Z}[\alpha]$ for some $\alpha\in{\mathcal O}_K$. More generally, we define an {\bf $n$-monogenized cubic field} to be a pair $(K, \alpha)$ where $K$ is a cubic field, $\alpha\in{\mathcal O}_K$ is primitive in ${\mathcal O}_K/{\mathbb Z}$, and $[{\mathcal O}_K:{\mathbb Z}[\alpha]]=n$; such an $\alpha$ is then called an {\bf $n$-monogenizer} of $K$ or of ${\mathcal O}_K$. Two $n$-monogenized cubic fields $(K,\alpha)$ and $(K',\alpha')$ are {\bf isomorphic} if $K$ and $K'$ are isomorphic as cubic fields and, under such an isomorphism, $\alpha$ is mapped to $\alpha'+m$ for some $m\in{\mathbb Z}$. We next define an isomorphism-invariant height on the set of $n$-monogenized cubic fields. Let $(K,\alpha)$ be an $n$-monogenized cubic field, and suppose that $f(x)$ is the characteristic polynomial of~$\alpha$. Since $(K,\alpha)$ and $(K,\alpha+m)$ are isomorphic for every $m\in{\mathbb Z}$, we may define two invariants $I(f)$ and $J(f)$ on the space of monic cubic polynomials $f(x)=x^3+ax^2+bx+c$, by setting \begin{equation*} \begin{array}{rcl} I(f)&:=&a^2-3b,\\[.05in] J(f)&:=&-2a^3+9ab-27c. \end{array} \end{equation*} These invariants satisfy $I(f(x))=I(f(x+m))$ and $J(f(x))=J(f(x+m))$ for all $m\in{\mathbb Z}$. We then define the {\bf height} $H$ of an $n$-monogenized cubic field $(K,\alpha)$ by \begin{equation}\label{eqheightn} H(K,\alpha):=n^{-2}H(f):=n^{-2}{\rm max}\bigl\{|I(f)|^3,J(f)^2/4\bigr\}. \end{equation} Since the discriminant $\Delta(K)$ of $K$ is described in terms of the invariants of $f$ as \begin{equation} 27\Delta(K)=n^{-2}\bigl(4I(f)^3-J(f)^2\bigr), \end{equation} we see that the height of an $n$-monogenized cubic field is comparable with its discriminant. We note that all cubic fields $K$ are $n$-monogenized for some $n\ll|\Delta(K)|^{1/4}$; see Remark~\ref{remdelta14}. Thus we may expect that the average size of the 2-torsion subgroup of the class group over all cubic fields $K$ ordered by absolute discriminant is the same as the average over all $n$-monogenized cubic fields $(K,\alpha)$ ordered by height $H(K,\alpha)$ with $n\ll H(K,\alpha)^{1/4}$. This is indeed the case, as we will prove in Theorem \ref{thsubmonloc}. We will also prove in \S\ref{secsubmono} that the number of $n$-monogenized cubic fields $(K,\alpha)$ with height $H(K,\alpha)<X$ and $n\ll H(K,\alpha)^{1/4}$ grows as $\asymp X$, which are the same asymptotics as for the number of cubic fields having absolute discriminant bounded by $X$.
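\noindent (The relation between $I(f)$, $J(f)$ and the discriminant, as well as the translation-invariance of $I$ and $J$, can be verified symbolically; the following short Python/SymPy sketch, included purely for illustration, does so for a general monic cubic.)
\begin{verbatim}
from sympy import symbols, Poly, discriminant, simplify

x, m, a, b, c = symbols('x m a b c')

def invariants(poly):
    # I and J of a monic cubic x^3 + a x^2 + b x + c, as defined above
    _, A, B, C = Poly(poly, x).all_coeffs()
    return A**2 - 3*B, -2*A**3 + 9*A*B - 27*C

f = x**3 + a*x**2 + b*x + c
I_f, J_f = invariants(f)

# 27*disc(f) = 4 I(f)^3 - J(f)^2; since disc(f) = n^2 * Delta(K) when f is the
# characteristic polynomial of an n-monogenizer, this is the displayed relation
# 27*Delta(K) = n^{-2}(4 I(f)^3 - J(f)^2).
assert simplify(4*I_f**3 - J_f**2 - 27*discriminant(f, x)) == 0

# Translation invariance: I(f(x+m)) = I(f(x)) and J(f(x+m)) = J(f(x)),
# so the height H(K, alpha) is well defined on isomorphism classes.
I_m, J_m = invariants(f.subs(x, x + m))
assert simplify(I_m - I_f) == 0 and simplify(J_m - J_f) == 0
\end{verbatim}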
We may also consider, for every $\delta\in(0,1/4]$, the average over the much thinner family of $n$-monogenized cubic fields $(K,\alpha)$ satisfying $H(K,\alpha)< X$ and $n\leq H(K,\alpha)^\delta$; the number of such $n$-monogenized fields grows as $\asymp X^{5/6+2\delta/3}=o(X)$ (see Theorem~\ref{thsubcount}). In \S\ref{secsubmono}, we will prove the following theorem. \begin{thm}\label{thsubmon} Let $0<\delta\leq 1/4$ and $c>0$ be real numbers, and let $F({\leq\! cH^\delta},X)$ denote the set of isomorphism classes of $n$-monogenized cubic fields $(K,\alpha)$ with height $H(K,\alpha)<X$ and $n\leq \nolinebreak c H(K,\alpha)^\delta$. Then, as $X\to\infty$: \begin{itemize} \item[{\rm (a)}] The average size of ${\rm Cl}_2(K)$ over totally real cubic fields $K$ in $F({\leq\! cH^\delta},X)$ approaches $5/4;$ \item[{\rm (b)}] The average size of ${\rm Cl}_2(K)$ over complex cubic fields $K$ in $F({\leq\! cH^\delta},X)$ approaches $3/2$. \end{itemize} Furthermore, these values remain the same if we instead average over those $n$-monogenized cubic fields satisfying any given set of local splitting conditions at finitely many primes. \end{thm} Theorem~\ref{thsubmon} shows that the averages in Theorem~\ref{thallcubicfields} remain very stable, both under the imposition of local conditions and under changing the ordering of the fields from discriminant to~height. \subsection{The effect of $n$-monogenicity on the averages in Theorem~\ref{thallcubicfields} for fixed $n$} We consider next the limiting behavior of Theorem~\ref{thsubmon} as $\delta$ approaches $0$. First, we consider the family of monogenized cubic fields, ordered by height. In this case, we find that the average values appearing in Theorems \ref{thallcubicfields} and \ref{thsubmon} {\it do} change. We have the following theorem: \begin{thm}\label{thmoncubicfields} Let $K$ run through all isomorphism classes of monogenized cubic fields ordered by height. Then: \begin{itemize} \item[{\rm (a)}] The average size of ${\rm Cl}_2(K)$ over totally real cubic fields $K$ is $3/2$; \item[{\rm (b)}] The average size of ${\rm Cl}_2(K)$ over complex cubic fields $K$ is $2$. \end{itemize} Furthermore, these values remain the same if we instead average over those monogenized cubic fields satisfying any given set of local splitting conditions at finitely many primes. \end{thm} Surprisingly, the values in Theorem \ref{thmoncubicfields} are different from those in Theorems \ref{thallcubicfields} and \ref{thsubmon}. In particular, these class groups do not appear to be random groups in the same sense as Cohen--Lenstra--Martinet. It therefore appears that monogenicity has an altering effect on the class groups of cubic fields! To be more precise, it appears that on average, monogenicity has a doubling effect on the nontrivial part of the $2$-torsion subgroups of the class groups of cubic fields. More generally, we may ask what happens when we restrict Theorem \ref{thsubmon} to $n$-monogenized cubic fields for any fixed positive integer $n$. Once again, the averages occurring in Theorems~\ref{thallcubicfields} and \ref{thsubmon} change as a rather interesting function of $n$: \begin{thm}\label{thmain2} Fix a positive integer $n$ and write $n=m^2k$ where $k$ is squarefree. Let $\sigma(k)$ denote the sum of the divisors of $k$. Let $(K,\alpha)$ run through all isomorphism classes of $n$-monogenized cubic fields ordered by height.
Then: \begin{itemize} \item[{\rm (a)}] The average size of ${\rm Cl}_2(K)$ over totally real cubic fields $K$ is $\frac{5}{4}+\frac{1}{4\sigma(k)};$ \item[{\rm (b)}] The average size of ${\rm Cl}_2(K)$ over complex cubic fields $K$ is $\frac{3}{2}+\frac{1}{2\sigma(k)}$. \end{itemize} Furthermore, these values remain the same if we instead average over those $n$-monogenized cubic fields satisfying any given set of local splitting conditions at finitely many primes $p\nmid k$. \end{thm} We also prove that the average sizes of ${\rm Cl}_2(K)$ in Theorem~\ref{thmain2} surprisingly {\it do} change further if the $n$-monogenized cubic fields $(K,\alpha)$ being averaged over satisfy specified local conditions at primes dividing $k$, where $n=m^2k$ and $k$ is squarefree. In Theorem \ref{main_theorem}, we compute the average size of ${\rm Cl}_2(K)$ as $K$ varies over a family of $n$-monogenized cubic fields satisfying any finite set (and certain infinite sets) of local conditions. One simple consequence of Theorem~\ref{main_theorem} is: \begin{thm}\label{thmain1} Let $n$ be a fixed positive integer, and write $n=m^2k$ where $k$ is squarefree. Let $(K,\alpha)$ run through all isomorphism classes of $n$-monogenized cubic fields having discriminant prime to $k$, ordered by height. Then: \begin{itemize} \item[{\rm (a)}] The average size of ${\rm Cl}_2(K)$ over totally real cubic fields $K$ is $3/2$ if $k=1$ and $5/4$ otherwise; \item[{\rm (b)}] The average size of ${\rm Cl}_2(K)$ over complex cubic fields $K$ is $2$ if $k=1$ and $3/2$ otherwise. \end{itemize} Furthermore, these values remain the same if we instead average over such monogenized cubic fields satisfying any given set of local splitting conditions at finitely many primes. \end{thm} Note that the averages occurring in Theorem \ref{thmain1} agree with the larger values in Theorem \ref{thmoncubicfields} when~$n$ is a square, but with the smaller values in \pagebreak Theorems~\ref{thallcubicfields} and~\ref{thsubmon} otherwise. Furthermore, Theorems~\ref{thmain2} and \ref{thmain1} imply that for a positive integer $n$, and a prime $p$ dividing $n$ to odd order, ramification at $p$ causes, on average, an increasing effect on the $2$-torsion in the class groups of $n$-monogenic cubic fields. Theorems \ref{thmoncubicfields}, \ref{thmain2}, and \ref{thmain1} all follow from our main result, Theorem~\ref{main_theorem} below, which determines the average sizes of ${\rm Cl}_2$ and ${\rm Cl}_2^+$, the 2-torsion subgroups of class groups and narrow class groups, respectively, in any ``large'' family of $n$-monogenized cubic fields defined by local conditions at~primes. For a prime $p$, let $T_p$ denote the set of all isomorphism classes of {\bf $n$-monogenized \'etale cubic extensions of ${\mathbb Q}_p$}, i.e., the set of all isomorphism classes of pairs $(\mathcal K_p,\alpha_p)$, where $\mathcal K_p$ is an \'etale cubic extension of ${\mathbb Q}_p$ with ring of integers ${\mathcal O}_p$, and $\alpha_p\in{\mathcal O}_p$ is primitive in ${\mathcal O}_p/{\mathbb Z}_p$, such that the $p$-part of the index of ${\mathbb Z}_p[\alpha_p]$ in ${\mathcal O}_p$ is equal to the $p$-part of $n$; here, two pairs $(\mathcal K_p,\alpha_p)$ and $(\mathcal K'_p,\alpha'_p)$ are {\bf isomorphic} if $\mathcal K_p$ and $\mathcal K'_p$ are isomorphic as ${\mathbb Q}_p$-algebras and, under such an isomorphism, $\alpha_p$ is mapped to $\alpha'_p+m$ for some $m\in{\mathbb Z}_p$.
In Remark \ref{remTpmeasure}, we will see that the set $T_p$ naturally injects into a closed subset of the set of cubic polynomials over ${\mathbb Z}_p$ with leading coefficient~$n$, which equips $T_p$ with a topology and a measure. \hypertarget{fnx}{For} each prime $p$, let $\Sigma_p\subset T_p$ be an open and closed set whose boundary has measure $0$. We say that $\Sigma=(\Sigma_p)_p$ is a {\bf large collection of local specifications for $n$} if, for all but finitely many primes~$p$, the set $\Sigma_p$ contains all pairs $(\mathcal K_p,\alpha_p)$ such that $\mathcal K_p$ is {\it not} a totally ramified cubic extension of~${\mathbb Q}_p$. Let \hypertarget{fnx1}{$F(n,X)$} denote the set of isomorphism classes of all $n$-monogenized cubic fields $(K,\alpha)$ such that $H(K,\alpha)<X$. Given a large collection $\Sigma=(\Sigma_p)_p$ of local specifications, let \hypertarget{fnxSig}{$F_\Sigma(n,X)$} denote the subset of $F(n,X)$ consisting of pairs $(K,\alpha)$ such that for all primes $p$, we have $(K\otimes{\mathbb Q}_p,\alpha)\in\Sigma_p$. For a prime $p$ dividing $n$, we say that an element $(\mathcal K_p,\alpha_p)\in T_p$ is {\bf sufficiently ramified} if one of the following two conditions is satisfied: \begin{itemize} \item [{\rm (a)}] $\mathcal K_p$ is a totally ramified cubic extension of ${\mathbb Q}_p;$ \item [{\rm (b)}] $\mathcal K_p={\mathbb Q}_p\times F$, where $F$ is a ramified quadratic extension of ${\mathbb Q}_p$, and ${\mathbb Z}_p[\alpha_p]={\mathbb Z}_p\times {\mathcal O}$, where~${\mathcal O}$ is an order in $F$. \end{itemize} For a prime $p$ dividing $n$, we define the {\bf local sufficiently-ramified density} $\rho_p(\Sigma_p)$ of a large collection $\Sigma=(\Sigma_p)_p$ to be the density of the set of sufficiently-ramified elements within $\Sigma_p$. The {\bf global sufficiently-ramified density $\rho(\Sigma)$ with respect to $n$} is then the product of $\rho_p(\Sigma_p)$ over all primes $p$ that divide $n$ to an {\it odd} power. Our main theorem is as follows: \begin{thm}[Main $n$-monogenic theorem] \label{main_theorem} Let $n$ be a positive integer, and let $\Sigma$ be a large collection of local specifications for $n$. Then, as $X\to\infty$: \begin{itemize} \item[{\rm (a)}] The average size of ${\rm Cl}_2(K)$ over totally real cubic fields $K$ in $F_\Sigma(n,X)$ approaches $\frac{5}{4}+\frac{1}{4}\rho(\Sigma);$ \item[{\rm (b)}] The average size of ${\rm Cl}_2(K)$ over complex cubic fields $K$ in $F_\Sigma(n,X)$ approaches $\frac{3}{2}+\frac{1}{2}\rho(\Sigma);$ \item[{\rm (c)}] The average size of ${\rm Cl}^+_2(K)$ over totally real cubic fields $K$ in $F_\Sigma(n,X)$ approaches $2+\frac{1}{2}\rho(\Sigma)$. \end{itemize} \end{thm} Theorem~\ref{thmoncubicfields} follows from Theorem~\ref{main_theorem} by noting that there are no primes dividing $n=1$ to odd order, and so $\rho(\Sigma)=1$ for every large collection of local specifications $\Sigma$. Theorem~\ref{thmain2} follows from Theorem~\ref{main_theorem} by Corollary \ref{propdensuf}, which states that the local sufficiently-ramified density $\rho_p(T_p)$ is equal to $1/(p+1)$ for each $p\mid k$. Finally, Theorem~\ref{thmain1} follows from Theorem~\ref{main_theorem} by noting that when $\Sigma_p\subset T_p$ contains no extensions ramified at primes dividing $k$, then $\rho_p(\Sigma_p)=0$.
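To illustrate how Theorem~\ref{thmain2} arises from Theorem~\ref{main_theorem}, write $n=m^2k$ with $k$ squarefree, so that the primes dividing $n$ to odd order are exactly the primes dividing $k$. Taking $\Sigma_p=T_p$ for all $p$, and using $\rho_p(T_p)=1/(p+1)$ together with the identity $\sigma(k)=\prod_{p\mid k}(p+1)$ for squarefree $k$, we obtain
$$\rho(\Sigma)\;=\;\prod_{p\mid k}\rho_p(T_p)\;=\;\prod_{p\mid k}\frac{1}{p+1}\;=\;\frac{1}{\sigma(k)},$$
so that the average in Theorem~\ref{main_theorem}(a) becomes $\frac54+\frac14\rho(\Sigma)=\frac54+\frac{1}{4\sigma(k)}$, in agreement with Theorem~\ref{thmain2}(a); for example, when $n=k=6$ this average is $\frac54+\frac1{48}$.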
Theorem~\ref{main_theorem} also implies an analogous increasing effect of $n$-monogenicity on the 2-torsion subgroup of the {\it narrow} class group, and this increase on average is from 2 (cf.\ \cite[Theorem 1(c)]{BV}) to~2.5, in the case when $n$ is a square. \pagebreak Taken together, Theorems \ref{thallcubicfields}, \ref{thmain2}, \ref{thmain1}, and \ref{main_theorem} thus paint the following picture. For integers~$n$ that are perfect squares, $n$-monogenicity has a doubling effect on the nontrivial part of the $2$-torsion in the class groups of cubic fields. For nonsquare integers $n=m^2k$ with $k>1$ squarefree, $n$-monogenicity still has an increasing effect on the nontrivial part of the $2$-torsion in the class groups of cubic fields. This increase goes to $0$ as~$k$ tends to infinity; moreover, the increase is concentrated on those $n$-monogenized cubic fields that are sufficiently ramified at every prime dividing $k$. The analogous phenomena also occur for the 2-torsion in narrow class groups of these fields. \subsection{Method of proof} In \S\ref{secparam}, we prove parametrizations of $n$-monogenized cubic fields and index 2 subgroups of class groups of $n$-monogenic cubic fields, respectively, in terms of suitable spaces of forms. Namely, in~\S\ref{subsecparamrings}, we begin by proving that there is a natural bijection between \begin{itemize} \item[(a)] $n$-monogenized cubic fields $(K,\alpha)$, and \item[(b)] certain cubic polynomials $f$ with integer coefficients whose leading coefficient is~$n$. \end{itemize} In \S\S\ref{subsecparam2}--\ref{subsecparam3}, we use the parametrization of quartic rings in \cite{quarparam} to give a natural bijection between: \begin{itemize} \item[(a)] index 2 subgroups of narrow class groups of $n$-monogenized cubic fields $(K,\alpha)$ whose corresponding cubic polynomial is $f(x)$, and \item[(b)] certain ${\rm SL}_3({\mathbb Z})$-orbits on pairs $(A,B)$ of integer-coefficient ternary quadratic forms such that $4\,\det(Ax-B)=f(x)$. \end{itemize} Our main results are then proved by counting the relevant elements of bounded height in both of the above parametrizations, and then computing the limiting ratios. More precisely, Theorem~\ref{thsubmon} is proven in \S\ref{secsubmono} by counting ${\rm SL}_3({\mathbb Z})$-orbits of pairs $(A,B)$ of integer-coefficient ternary quadratic forms such that $f(x):=4\,\det(Ax-B)$ satisfies $H(f)<X$ and $n=4\,\det(A)<c\smash{H(f)^\delta}$. The technique to obtain this count is a suitable adaptation of the averaging and sieving methods developed in \cite{dodqf,dodpf} for counting by discriminant, but modified to allow imposing constraints on two different parameters (the height $H$ and the index $n$). Next, Theorem~\ref{main_theorem} is proved in \S\ref{secn} by counting pairs $(A,B)$ such that $f(x):=4\,\det(Ax-B)$ satisfies $H(f)<X$, but where $4\det(A)=n$ is {\it fixed}. This is much more subtle than the count~in Theorem~\ref{thsubmon} where $n$ varies. It requires counting pairs $(A,B)$ in a fundamental domain for the action of the group ${\rm SL}_3({\mathbb Z})$ on the hypersurface defined by $\det(A)=n/4$. To carry out this count, we sum the total number of $B$ in a fundamental domain for the action of ${\rm SO}_{A}({\mathbb Z})$ on the space of real ternary quadratic forms, where $A$ runs over a set of representatives for the distinct ${\rm SL}_3({\mathbb Z})$-classes of integer-coefficient ternary quadratic forms having determinant $n/4$.
This calculation is intimately related to the mass calculations for quadratic forms of a given determinant in \cite{Hanke_structure_of_massses,Hanke_ternary_quadratic_massses}; see also the work of Ibukiyama and Saito~\cite{IS} on Shintani zeta functions associated to spaces of quadratic forms. This count, and the analogous counts with suitable congruence conditions, enable the completion of the proof of the general result, Theorem~\ref{main_theorem}, implying in particular Theorems \ref{thmoncubicfields}, \ref{thmain2}, and \ref{thmain1}. \noindent {\it Acknowledgments.} It is a pleasure to thank Hendrik Lenstra, Artane Siad, Ashvin Swaminathan, Ling-Sang Tse, Ila Varma, and Mikaeel Yunus for helpful conversations and many useful comments. M.B.\ was supported by a Simons Investigator Grant and NSF grant~DMS-1001828, and thanks the Flatiron Institute for its kind hospitality during the academic year 2019--2020. A.S.\ was supported by an NSERC discovery grant and a Sloan fellowship. \section{Parametrizations involving quartic rings and $n$-monogenized cubic rings}\label{secparam} The purpose of this section is to describe the connection between 2-torsion subgroups in the class groups of $n$-monogenized cubic fields and pairs $(A,B)$ of integer-coefficient ternary quadratic forms with $4\,\det(A)=n$. In \S\ref{subsecparamrings}, we adapt the Delone--Faddeev parametrization \cite{DF} of cubic rings to obtain a parametrization of isomorphism classes of $n$-monogenized cubic rings by integer-coefficient binary cubic forms having leading coefficient $n$. In \S\ref{subsecparam2}, we recall the parametrization of quartic rings by pairs of integer-coefficient ternary quadratic forms, as developed in~\cite{quarparam}, and use these two parametrizations (in conjunction with class field theory) in \S\ref{subsecparam3} to parametrize index $2$ subgroups of class groups of $n$-monogenized cubic fields. Finally, in \S\ref{subsecparampid}, we then discuss versions of these parametrization results where ${\mathbb Z}$ is replaced by a principal ideal domain $R$. \subsection{Parametrization of $n$-monogenized cubic rings}\label{subsecparamrings} \begin{defn} \label{sec:2.1} {\em A {\bf cubic ring} (resp.\ {\bf quartic ring}) is a ring that is free of rank~$3$ (resp.\ rank~$4$) as a ${\mathbb Z}$-module.} \end{defn} The works of Levi \cite{Levi}, Delone--Faddeev \cite{DF}, and Gan--Gross--Savin~\cite{GGS} give a parametrization of cubic rings by ${\rm GL}_2({\mathbb Z})$-orbits of integer-coefficient binary cubic forms. \begin{theorem}[\cite{Levi},\cite{DF},\cite{GGS}]\label{df} There is a bijection between isomorphism classes of cubic rings $C$ with a chosen basis $\langle\bar\omega,\bar\theta\rangle$ of $C/{\mathbb Z}$, and integer-coefficient binary cubic forms $f(x,y)=ax^3+bx^2y+cxy^2+dy^3$. The bijection is given by \begin{equation}\label{dfbijection} (C,\langle\bar\omega,\bar\theta\rangle)\mapsto 1\wedge(x\omega+y\theta)\wedge(x\omega+y\theta)^2= (ax^3+bx^2y+cxy^2+dy^3)\, (1\wedge\omega\wedge\theta)\in\wedge^3C, \end{equation} where $\omega,\theta\in C$ denote any lifts of $\bar\omega,\bar\theta\in C/{\mathbb Z}$. \end{theorem} See also \cite[\S 2]{MAJ} for a concise proof of Theorem~\ref{df}.
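To illustrate the bijection \eqref{dfbijection} in a simple case, take $C={\mathbb Z}[\sqrt[3]{2}]$ with the basis $\langle\bar\omega,\bar\theta\rangle$ of $C/{\mathbb Z}$ given by $\omega=\sqrt[3]{2}$ and $\theta=\sqrt[3]{4}$. Then $(x\omega+y\theta)^2=4xy+2y^2\omega+x^2\theta$, so
$$1\wedge(x\omega+y\theta)\wedge(x\omega+y\theta)^2=(x\cdot x^2-y\cdot 2y^2)\,(1\wedge\omega\wedge\theta)=(x^3-2y^3)\,(1\wedge\omega\wedge\theta),$$
and $(C,\langle\bar\omega,\bar\theta\rangle)$ corresponds to the binary cubic form $x^3-2y^3$; its leading coefficient $1$ reflects the fact that ${\mathbb Z}[\omega]=C$, and its discriminant $-108$ equals $\Delta(C)$.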
\begin{nota} \label{sec:2.3} {\em For any ring $R$, an element $\gamma\in {\rm GL}_2(R)$ acts on the space $U(R)$ of binary cubic forms $f$ with coefficients in $R$ via the twisted action \begin{equation}\label{gl2action}\gamma\cdot f(x,y):=\det(\gamma)^{-1}f((x,y)\gamma),\end{equation} where we view $(x,y)$ as a row vector. }\end{nota} We then have the following immediate corollary of Theorem~\ref{df}. \begin{corollary}\label{dfcor} There is a bijection between isomorphism classes of cubic rings and ${\rm GL}_2({\mathbb Z})$-orbits on the space $U({\mathbb Z})$ of integer-coefficient binary cubic forms. \end{corollary} Corollary~\ref{dfcor} follows from Theorem~\ref{df} by noting that the action of ${\rm GL}_2({\mathbb Z})$ on the basis $\langle\omega,\theta\rangle$ of $C/{\mathbb Z}$ leads to the action (\ref{gl2action}) on the corresponding binary cubic form $f$. We observe that the leading coefficient $a$ of the binary cubic form in the bijection of Theorem~\ref{df} is equal to the (signed) index of ${\mathbb Z}[\omega]$ in $C$, because setting $x=1$ and $y=0$ in \eqref{dfbijection} yields $1\wedge\omega\wedge\omega^2=a\,(1\wedge\omega\wedge\theta)$. Hence cubic rings having a monogenic subring of index $n$ may be classified in terms of binary cubic forms having (leading) $x^3$-coefficient $n$. \begin{defn} \label{sec:2.5} {\em A pair $(C,\alpha)$ is an {\bf $n$-monogenized cubic ring} if $C$ is a cubic ring, $\alpha$ is a primitive element of $C/{\mathbb Z}$, and $[C:{\mathbb Z}[\alpha]]=n$. Two $n$-monogenized cubic rings $(C,\alpha)$ and $(C',\alpha')$ are {\bf isomorphic} if there is a ring isomorphism $C\to C'$ that maps $\alpha$ to $\alpha'+m$ for some $m\in{\mathbb Z}$.} \end{defn} \noindent Thus an $n$-monogenized cubic ring is a cubic ring $C$ equipped with a basis $\langle\bar\omega,\bar\theta\rangle$ of $C/{\mathbb Z}$, but where $(C,\langle\bar\omega,\bar\theta\rangle)$ and $(C,\langle\bar\omega,k\bar\omega+\bar\theta\rangle)$ are considered isomorphic, as only the basis element $\bar\omega$ is relevant in defining the monogenic subring ${\mathbb Z}[\omega]$ of index $n$ in $C$. The change-of-basis $\langle\bar\omega,\bar\theta\rangle\mapsto\langle\bar\omega,k\bar\omega+\bar\theta\rangle$ corresponds to the transformation $f(x,y)\mapsto f(x+ky,y)$ on the associated binary cubic form. \begin{nota} \label{sec:2.6} {\em For a ring $R$, we let ${M}(R)\subset {\rm GL}_2(R)$ denote the subgroup of lower triangular unipotent matrices \begin{equation}\label{fz} {M}(R):=\left\{\left(\begin{array}{rc} 1&{}\\k&1\end{array}\right):k\in R\right\}. \end{equation} For $n\in R$, let $U_n(R)\subset U(R)$ denote the subset of binary cubic forms having (leading) $x^3$-coefficient~$n$. }\end{nota} Then we have proven the following theorem. \begin{theorem}\label{thcubringpar}\label{df2} There is a bijection between isomorphism classes of $n$-monogenized cubic rings and ${M}({\mathbb Z})$-orbits on the set $U_n({\mathbb Z})$ of integer-coefficient binary cubic forms having leading coefficient $n$. \end{theorem} \subsection{Parametrization of quartic rings having $n$-monogenic cubic resolvent rings}\label{subsecparam2} In this section, we recall the parametrization in \cite{quarparam} of pairs $(Q, C)$, where $Q$ is a quartic ring and~$C$ is a cubic resolvent ring of $Q$. \begin{nota} \label{sec:2.8} {\em For any ring $R$, let $V(R)$ denote the space of pairs of ternary quadratic forms with coefficients in $R$.
We represent an element in $V(R)$ as $(A,B)$, where $A(x_1,x_2,x_3)=\sum_{1\leq i\leq j\leq 3} a_{ij} x_i x_j$ and $B(x_1,x_2,x_3)=\sum_{1\leq i\leq j\leq 3} b_{ij} x_i x_j$ with $a_{ij},b_{ij}\in R$. When 2 is not a zero divisor in $R$, we may also represent an element $(A,B)\in V(R)$ as a pair of $3\times 3$ symmetric matrices with entries in $R[1/2]$ via the Gram identification \begin{equation*} (A,B)=\left( \frac12\left[ \begin{array}{ccc} 2a_{11} & a_{12} & a_{13} \\ a_{12} & 2a_{22} & a_{23} \\ a_{13} & a_{23} & 2a_{33} \end{array} \right], \frac12\left[ \begin{array}{ccc} 2b_{11} & b_{12} & b_{13} \\ b_{12} & 2b_{22} & b_{23} \\ b_{13} & b_{23} & 2b_{33} \end{array} \right] \right). \end{equation*} The group $G(R)\subset {\rm GL}_2(R)\times {\rm GL}_3(R)$ defined by \begin{equation*} G(R):=\{(g_2,g_3)\in{\rm GL}_2(R)\times{\rm GL}_3(R):\det(g_2)\det(g_3)=1\} \end{equation*} acts naturally on $V(R)$ as follows: \begin{equation}\label{Gaction} (g_2,g_3)\cdot(A,B)=(g_3Ag_3^t,g_3Bg_3^t)\cdot g_2^t. \end{equation} Given an element $(A,B)\in V(R)$, we define its {\bf cubic resolvent form} ${\rm Res}(A,B)\in U(R)$ by \begin{equation}\label{eqresolventmap} {\rm Res}(A,B):=4\det(Ax-By). \end{equation} \pagebreak Since the action of $G(R)$ on $V(R)$ in (\ref{Gaction}) and the resolvent map ${\rm Res}:V(R)\to U(R)$ in (\ref{eqresolventmap}) are defined by integer-coefficient polynomials in the entries of $(g_2,g_3)\in G(R)$ and the coefficients $a_{ij}$ of $A$ and $b_{ij}$ of $B$, we see that the action of $G(R)$ on $V(R)$ and the definitions of $4\det(A)$ and ${\rm Res}(A,B)$ for $(A,B)\in V(R)$ make sense for arbitrary rings $R$. }\end{nota} The following theorem is proved~in~\cite{quarparam}. \begin{theorem}[\cite{quarparam}]\label{thqrpar} There is a bijection between pairs $(A,B)\in V({\mathbb Z})$ of integer-coefficient ternary quadratic forms and isomorphism classes of pairs $((Q,\langle\bar\alpha,\bar\beta,\bar\gamma\rangle),(C,\langle\bar\omega,\bar\theta\rangle))$, where $Q$ is a quartic ring with a chosen basis $\langle\bar\alpha,\bar\beta,\bar\gamma\rangle$ of $Q/{\mathbb Z}$ and $C$ is a cubic resolvent ring of $Q$ with a chosen basis $\langle\bar\omega,\bar\theta\rangle$ of~$C/{\mathbb Z}$. Furthermore, under this bijection, $(C,\langle\bar\omega,\bar\theta\rangle)$ is the data corresponding to the cubic resolvent form of $(A,B)$ under Theorem~${\ref{df}}$. \end{theorem} A complete description of the construction of $((Q,\langle\bar\alpha,\bar\beta,\bar\gamma\rangle),(C,\langle\bar\omega,\bar\theta\rangle))$ from $(A,B)$ can be found in~\cite[\S3.2, \S3.3]{quarparam}. Theorem~\ref{thqrpar} has the following immediate corollary. \begin{corollary}\label{qparcor} There is a bijection between $G({\mathbb Z})$-orbits on the space $V({\mathbb Z})$ of pairs of integer-coefficient ternary quadratic forms and isomorphism classes of pairs $(Q,C)$, where $Q$ is a quartic ring and $C$ is a cubic resolvent ring of $Q$. \end{corollary} Corollary~\ref{qparcor} follows from Theorems~\ref{df} and \ref{thqrpar} by noting that the action of $G({\mathbb Z})\subset {\rm GL}_2({\mathbb Z})\times{\rm GL}_3({\mathbb Z})$ on the bases $\langle\bar\alpha,\bar\beta,\bar\gamma\rangle$ of $Q/{\mathbb Z}$ and $\langle\bar\omega,\bar\theta\rangle$ of $C/{\mathbb Z}$ leads to the action (\ref{Gaction}) on the corresponding pair $(A,B)$ of ternary quadratic forms. Finally, the identical reasoning now yields the following parametrization of quartic rings having $n$-monogenized cubic resolvent rings.
\begin{corollary}\label{qparcor2} There is a bijection between ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-orbits on the set $V_n({\mathbb Z})$ of pairs $(A,B)$ of integer-coefficient ternary quadratic forms with $4\det(A)=n$ and isomorphism classes of pairs $(Q,(C,\alpha))$, where $Q$ is a quartic ring and $(C,\alpha)$ is an $n$-monogenized cubic resolvent ring of~$Q$. \end{corollary} \subsection{Parametrization of index 2 subgroups of class groups of cubic fields}\label{subsecparam3} With the parametrizations of quartic rings having $n$-monogenized cubic resolvent rings established, we are now in a position to parametrize index 2 subgroups in the class groups of $n$-monogenized cubic fields. This parametrization will be useful to us because the number of elements of order $2$ in a finite abelian group $A$ is equal to the number of index 2 subgroups of $A$. We will need the following definition. \begin{defn} \label{sec:2.12} {\em For a maximal quartic ring $Q$ and a prime $p$, we say that $Q$ is {\bf overramified at~$p$} if the ideal $p{\mathbb Z}$ factors in $Q$ as either $P^4$, $P^2$, or $P_1^2P_2^2$, where $P$, $P_1$, and $P_2$ are prime ideals of $Q$. A maximal quartic ring $Q$ is overramified at $\infty$ if $Q\otimes{\mathbb R}\cong {\mathbb C}^2$ as ${\mathbb R}$-algebras. A maximal quartic ring $Q$ is {\bf nowhere overramified} if it is not overramified at any (finite or infinite) place.} \end{defn} The significance of being nowhere overramified comes from the following theorem of Heilbronn. \begin{theorem}[\cite{Hcf}] \label{Theorem:Heilbronn_overramified}\label{h} Let $K_4$ be a totally real $S_4$-quartic field, and $K_3$ a cubic $($resolvent$)$ field inside $K_{24}$, the Galois closure of $K_{4}$. Let $K_6$ be the non-Galois sextic field in $K_{24}$ containing $K_3$. Then the quadratic extension $K_6/K_3$ is unramified precisely when the quartic field $K_4$ is nowhere overramified. Conversely, every unramified quadratic extension $K_6/K_3$ of a cubic $S_3$-field $K_3$ lies in the Galois closure of a nowhere overramified quartic field $K_4$ which is unique up to conjugacy. \end{theorem} \begin{nota} \label{sec:2.14} {\em For the maximal order $C$ in a cubic field $K_3$, let ${\rm Cl}_2(C)$ and ${\rm Cl}_2^+(C)$ denote the 2-torsion subgroups of the class group and narrow class group of $C$, respectively. Let ${\rm Cl}_2(C)^*$ and ${\rm Cl}_2^+(C)^*$ denote the groups dual to ${\rm Cl}_2(C)$ and ${\rm Cl}_2^+(C)$, respectively. Then the set of nontrivial elements of ${\rm Cl}_2(C)^*$ (resp.\ ${\rm Cl}_2^+(C)^*$) is in bijection with the set of index two subgroups of ${\rm Cl}(C)$ (resp.\ ${\rm Cl}^+(C)$), simply by mapping a character to its kernel. }\end{nota} Theorems~\ref{thqrpar} and \ref{h}, together with class field theory, now immediately yield a parametrization of index 2 subgroups of the class groups and narrow class groups of cubic fields. \begin{theorem}\label{thclgp1} Let $C$ be the maximal order in an $S_3$-cubic field $K_3$, and let $f(x,y)$ be a binary cubic form corresponding to $C$ under Theorem~$\ref{df}$. \begin{itemize} \item[{\rm (a)}] If $\Delta(K_3)>0$, then there is a canonical bijection between elements of ${\rm Cl}_2^+(C)^*$ and ${\rm SL}_3({\mathbb Z})$-orbits on ${\rm Res}^{-1}(f)\subset V({\mathbb Z})$. Under this bijection, elements of ${\rm Cl}_2(C)^*\subset {\rm Cl}_2^+(C)^*$ correspond to ${\rm SL}_3({\mathbb Z})$-orbits on pairs $(A,B)\in V({\mathbb Z})$ such that $A(x,y,z)=B(x,y,z)=0$ has a nonzero solution over ${\mathbb R}$.
\item[{\rm (b)}] When $\Delta(K_3)<0$, there is a canonical bijection between elements of ${\rm Cl}_2^+(C)^*={\rm Cl}_2(C)^*$ and ${\rm SL}_3({\mathbb Z})$-orbits on ${\rm Res}^{-1}(f)\subset V({\mathbb Z})$. \end{itemize} \end{theorem} \begin{proof} By class field theory, the nontrivial elements of ${\rm Cl}_2(C)^*$ (resp.\ ${\rm Cl}_2^+(C)^*$) correspond to quadratic extensions $K_6/K_3$ that are unramified at all places (resp.\ unramified at all finite places). These quadratic extensions $K_6/K_3$, by Theorem~\ref{h}, in turn correspond to quartic fields $K_4$ whose maximal orders $Q$ are nowhere overramified (resp.\ not overramified at all finite places). In this scenario, we have the equality of discriminants $\Delta(Q)=\Delta(C)$ (by \cite{Hcf}), and so $C$ is~the unique cubic resolvent ring of $Q$ (\cite[Def.~8]{quarparam}). The bijections in (a) and (b), for nontrivial elements of ${\rm Cl}_2^+(K_3)^*$ and ${\rm SL}_3({\mathbb Z})$-orbits on ${\rm Res}^{-1}(f)\subset V({\mathbb Z})$ corresponding to integral domains $Q$, now follow from Theorem~\ref{thqrpar}. The bijections in both (a) and (b) of Theorem~\ref{thclgp1} are completed by sending the identity element in ${\rm Cl}_2^+(K_3)^*$ to the unique ${\rm SL}_3({\mathbb Z})$-orbit on ${\rm Res}^{-1}(f)\subset V({\mathbb Z})$ corresponding to the quartic ring $Q={\mathbb Z}\times C$, whose unique cubic resolvent ring is $C$ as well. \end{proof} \subsection{Parametrizations over principal ideal domains}\label{subsecparampid} \begin{defn}{\em Let $R$ be a principal ideal domain. A {\bf cubic ring} $($resp.\ {\bf quartic ring}$)$ {\bf over $R$} is an $R$-algebra that is free of rank $3$ $($resp.\ rank $4)$ as an $R$-module.} \end{defn} The following theorem, proved by Gross and Lucianovic \cite{GS} and in~\cite{BSW}, generalizes the parametrization of cubic and quartic rings over ${\mathbb Z}$ in \cite{DF} and \cite{quarparam}, respectively, to the setting of cubic and quartic rings over a principal ideal domain. For a formulation and proof in the vastly more general case where ${\mathbb Z}$ is replaced by an arbitrary ring, or even an arbitrary base scheme, see Wood~\cite{Wood}. \begin{theorem}[{\cite[Proposition 2.1]{GS}, \cite[Theorem~5]{BSW}}]\label{thpid} Let $R$ be a principal ideal domain. {\rm (a)} There is a natural bijection between isomorphism classes of cubic rings over $R$ and ${\rm GL}_2(R)$-orbits on $U(R)$. Under this bijection, the group of automorphisms of a cubic ring over $R$ is isomorphic to the stabilizer in ${\rm GL}_2(R)$ of the corresponding binary cubic form in $U(R)$. {\rm (b)} There is a natural bijection between isomorphism classes of pairs $(Q,C)$, where $Q$ is a quartic ring over $R$ and $C$ is a cubic resolvent ring of $Q$, and $G(R)$-orbits on $V(R)$. Under this bijection, the group of automorphisms of a quartic ring $Q$ over $R$ is isomorphic to the stabilizer in $G(R)$ of the corresponding pair of ternary quadratic forms in $V(R)$. \end{theorem} \begin{defn}{\em Let $R$ be a principal ideal domain. For $n\in R$, an {\bf $n$-monogenized cubic ring over $R$} is a pair $(C,\alpha)$, where $C$ is a cubic ring over $R$, the element $\alpha\in C$ is primitive in $C/R$, and~the $R$-ideal $(C:R[\alpha]):=\{r\in R: r\,\wedge^3C\subset\wedge^3(R[\alpha]) \mbox{ as $R$-modules}\}$ is generated by $n$.
Two $n$-monogenized cubic rings $(C,\alpha)$ and $(C',\alpha')$ over~$R$ are {\bf isomorphic} if there is an $R$-algebra isomorphism $C\to C'$ that maps $\alpha$ to $\alpha'+m$ for some~$m\in R$.} \end{defn} Note that if $R={\mathbb Z}$ or $R={\mathbb Z}_p$, then $(C:R[\alpha])$ is the ideal generated by the usual index $[C:R[\alpha]].$ The proofs of Theorems \ref{thcubringpar} and \ref{thpid} have the following consequence. \begin{corollary}\label{thZp} Let $R$ be a principal ideal domain. There is a bijection between isomorphism classes of $n$-monogenized cubic rings over $R$ and $M(R)$-orbits on the set $U_n(R)$. \end{corollary} \begin{rem}\label{remTpmeasure}{\em When $R={\mathbb Z}_p$, Corollary \ref{thZp} provides a bijection between isomorphism classes of $n$-monogenized cubic rings over ${\mathbb Z}_p$ and the set \begin{equation}\label{eqFDZp} \bigl\{f(x,y)=nx^3+bx^2y+cxy^2+dy^3:b\in{\mathbb Z},\;c,d\in{\mathbb Z}_p,\;0\leq b< \gcd(p,3n)\bigr\}, \end{equation} which is a fundamental domain for the action of $M({\mathbb Z}_p)$ on $U_n({\mathbb Z}_p)$. The set \eqref{eqFDZp} can be identified with ${\mathbb Z}_p\times{\mathbb Z}_p\times\{0,1,\ldots,\gcd(p,3n)-1\}$. We then equip the set of isomorphism classes of $n$-monogenized cubic rings over ${\mathbb Z}_p$ with the product topology and measure, via the usual topology and additive Haar measure on ${\mathbb Z}_p$ and the discrete topology and counting measure on $\{0,1,\ldots,\gcd(p,3n)-1\}$. By Lemma~\ref{propcondmax}, the set $T_p$ of isomorphism classes of $n$-monogenized \'etale cubic extensions of ${\mathbb Q}_p$ can be identified with an open and closed subset of ${\mathbb Z}_p\times{\mathbb Z}_p\times\{0,1,\ldots,\gcd(p,3n)-1\}$, namely, by identifying $T_p$ with the subset of \eqref{eqFDZp} corresponding to $n$-monogenized cubic rings $({\mathcal O}_p,\alpha_p)$ over ${\mathbb Z}_p$, where ${\mathcal O}_p$ is the maximal order in an \'etale cubic extension of ${\mathbb Q}_p$. The set $T_p$ also thereby inherits a topology and a measure.} \end{rem} We will have occasion to use Theorem~\ref{thpid} with $R$ specialized to ${\mathbb R}$, ${\mathbb Z}_p$, and ${\mathbb F}_p$. In these cases, it will be convenient to use the language of splitting types, which we now define. \begin{defn} \label{sec:2.17} {\em Let $f$ be a binary cubic form in $U({\mathbb R})$, $U({\mathbb Z}_p)$, or $U({\mathbb F}_p)$. When $f$ belongs to $U({\mathbb R})$, assume that $f$ has three roots (counted with multiplicity) in ${\mathbb P}^1({\mathbb C})$, and when $f$ belongs to $U({\mathbb Z}_p)$ or $U({\mathbb F}_p)$, assume that $f$ has three roots (counted with multiplicity) in ${\mathbb P}^1(\overline{{\mathbb F}}_p)$. We define the {\bf splitting type} of $f$ to be $(f_1^{e_1}f_2^{e_2}\cdots)$, where the $f_i$ are the degrees over ${\mathbb R}$ or ${\mathbb F}_p$ of the fields of definition of these roots and the $e_i$ are their respective multiplicities. Similarly, let $(A,B)$ be an element in $V({\mathbb R})$, $V({\mathbb Z}_p)$, or $V({\mathbb F}_p)$. When $(A,B)$ belongs to $V({\mathbb R})$, assume that the intersection of the conics defined by $A$ and $B$ consists of four points (counted with multiplicity) in ${\mathbb P}^2({\mathbb C})$. When $(A,B)$ belongs to $V({\mathbb Z}_p)$ or $V({\mathbb F}_p)$, assume that the intersection of the conics defined by $A$ and $B$ consists of four points (counted with multiplicity) in ${\mathbb P}^2(\overline{{\mathbb F}}_p)$.
We define the {\bf splitting type} of $(A,B)$ to be $(f_1^{e_1}f_2^{e_2}\cdots)$, where the $f_i$ are the degrees over ${\mathbb R}$ or ${\mathbb F}_p$ of the fields of definition of these points and the $e_i$ are their respective multiplicities.} \end{defn} We conclude this section with a discussion of local versions of Theorem \ref{thclgp1}. Let $p$ be a prime and let $K_m$ be an \'etale degree $m$ extension of ${\mathbb Q}_p$. We write $K_m=\prod_{i=1}^k L_i$ as a product of fields $L_i$. An unramified degree $2$ extension $K_{2m}$ of $K_m$ is a product \smash{$\prod_{i=1}^k L_i'$}, where~$L_i'$ is either $L_i\times L_i$ (i.e., $L_i$ splits) or the (unique) quadratic unramified extension of $L_i$ (i.e., $L_i$ is~inert). Let ${\rm Aut}_{K_m}(K_{2m})$ denote the group of automorphisms of $K_{2m}$ fixing $K_m$ pointwise. As there are~$2^k$ different unramified extensions $K_{2m}$ of $K_m$, and each such extension has automorphism group~$({\mathbb Z}/2{\mathbb Z})^k$, we have \begin{equation}\label{eqmasstriv} \sum_{\substack{[K_{2m}:K_m]=2\\{\rm unramified}}}\frac{1}{|{\rm Aut}_{K_m}(K_{2m})|}=1. \end{equation} The norm of the discriminant of each $K_{2m}/K_m$ is either a square or nonsquare in ${\mathbb Z}_p^\times$. Indeed, \begin{equation*} N_{K_m/{\mathbb Q}_p}\Delta(K_{2m}/K_m)=\smash{\prod_{i=1}^k} N_{L_i/{\mathbb Q}_p}\Delta(L_i'/L_i). \end{equation*} Let $e_i$ denote the ramification degree of $L_i$; then $N_{K_m/{\mathbb Q}_p}\Delta(K_{2m}/K_m)$ is a square if and only if there are an even number of $i$ for which both $L_i$ is inert and $e_i$ is odd. We now focus on the case $m=3$. The arguments in \cite[\S2]{Baily} imply the following. \begin{theorem}[\cite{Baily}]\label{thHeilbronPID} There is a bijection between \'etale non-overramified quartic extensions~$K_4/{\mathbb Q}_p$ with cubic resolvent $K_3$ and \'etale unramified quadratic extensions $K_6/K_3$ such that $N_{K_3/{\mathbb Q}_p}\Delta(K_6/K_3)$ is a square in~${\mathbb Z}_p^\times$. \end{theorem} \begin{rem}\label{geom} {\em The bijection of Theorem \ref{thHeilbronPID} can be made explicit using the correspondences of Theorem \ref{thpid}. Let $K_3$ be an \'etale cubic algebra corresponding to a binary cubic form $f(x,y)$. Let ${\mathcal P}=\{P_1,P_2,P_3\}$ denote the set of roots of $f(x,y)$ in ${\mathbb P}^1(\overline{{\mathbb Q}}_p)$. Let $K_4$ be an \'etale non-overramified quartic extension of ${\mathbb Q}_p$ with resolvent $K_3$ corresponding to a pair $(A,B)$ of ternary quadratic forms. Let ${\mathcal Q}=\{Q_1,Q_2,Q_3,Q_4\}$ denote the set of common zeros of $A$ and $B$ in ${\mathbb P}^2(\overline{{\mathbb Q}}_p)$. These two sets ${\mathcal P}$ and~${\mathcal Q}$ come equipped with an action of the absolute Galois group $G_{{\mathbb Q}_p}$ of ${\mathbb Q}_p$. Furthermore, a set of three points (resp.\ four points) together with an action of $G_{{\mathbb Q}_p}$ uniquely determines an \'etale cubic (resp.\ quartic) extension of ${\mathbb Q}_p$, and in this manner ${\mathcal P}$ corresponds to the \'etale cubic extension $K_3$ and ${\mathcal Q}$ corresponds to the \'etale quartic extension $K_4$ of ${\mathbb Q}_p$. Let ${\mathcal P}'$ denote the set of pairs of lines $(L_1,L_2)$ where $L_1$ passes through two of the points $Q_i$ and $L_2$ passes through the other two $Q_i$. Then ${\mathcal P}'$ has three elements and we have a natural bijection ${\mathcal P} \to {\mathcal P}'$ that is $G_{{\mathbb Q}_p}$-equivariant.
Indeed, we simply associate to $P_i=(x_i,y_i)$ the zero set of $Ax_i-By_i$, which is a pair of lines in ${\mathbb P}^2(\overline{{\mathbb Q}}_p)$ since $4\det(Ax_i-By_i)=f(x_i,y_i)=0$. Moreover, since the four points in ${\mathcal Q}$ are in general position, both of these lines must pass through exactly two points in~${\mathcal Q}$. We may thus naturally identify the Galois sets ${\mathcal P}$ and ${\mathcal P}'$. Let ${\mathcal L}$ denote the set of six lines passing through each choice of two points in ${\mathcal Q}$. The Galois action on ${\mathcal Q}$ induces one on~${\mathcal L}$, which in turn yields an \'etale sextic extension $K_6$ of ${\mathbb Q}_p$; this is indeed the unramified quadratic extension of $K_3$ with discriminant of square norm corresponding to $K_4$ in~Theorem \ref{thHeilbronPID}.} \end{rem} \section{The mean number of $2$-torsion elements in the class groups of $n$-monogenized cubic fields ordered by height with varying $n$}\label{secsubmono} The purpose of this section is to prove a more general version of Theorem~\ref{thsubmon} where we also allow our $n$-monogenized cubic fields to satisfy certain infinite sets of congruence conditions. We fix real numbers $c$ and $\delta$ satisfying $c>0$ and $0<\delta\leq 1/4$. \begin{defn} \label{sec:3.1} {\em For each prime $p$, let $S_p\subset U({\mathbb Z}_p)$ be an open and closed nonempty subset whose boundary has measure $0$. Then $S:=(S_p)_p$ is a {\bf collection of cubic local specifications}. The collection $S=(S_p)_p$ is {\bf large} if, for all but finitely many primes~$p$, the set $S_p$ contains all elements $f\in U({\mathbb Z}_p)$ with $p^2\nmid\Delta(f)$. To each $S_p$, we associate the set $\Sigma_p$ of pairs $(\mathcal K_p,\alpha_p)$, up to isomorphism, where $\mathcal K_p$ is an \'etale cubic extension of ${\mathbb Q}_p$ with ring of integers ${\mathcal O}_p$, $\alpha_p$ is an element of ${\mathcal O}_p$ that is primitive in ${\mathcal O}_p/{\mathbb Z}_p$, and the pair $({\mathcal O}_p,\alpha_p)$ corresponds to some $f(x,y)\in S_p$. The collection $\Sigma:=(\Sigma_p)_p$ is called {\bf large} if $S$ is large. For a large collection $\Sigma$, let $F_\Sigma({\leq\! cH^\delta},X)$ denote the set of isomorphism classes of $n$-monogenized cubic fields $(K,\alpha)$ such that $n\leq cH(K,\alpha)^\delta$, $H(K,\alpha)<X$, and $(K\otimes{\mathbb Q}_p,\alpha)\in\Sigma_p$ for all primes~$p$.} \end{defn} The main result of this section is then the following theorem. \pagebreak \begin{theorem}\label{thsubmonloc} Let notation be as above. As $X\to\infty$, we have: \begin{itemize} \item[{\rm (a)}] The average size of ${\rm Cl}_2(K)$ over totally real cubic fields $K$ in $F_\Sigma({\leq\! cH^\delta},X)$ approaches $5/4$; \item[{\rm (b)}] The average size of ${\rm Cl}_2(K)$ over complex cubic fields $K$ in $F_\Sigma({\leq\! cH^\delta},X)$ approaches $3/2$; \item[{\rm (c)}] The average size of ${\rm Cl}^+_2(K)$ over totally real cubic fields $K$ in $F_\Sigma({\leq\! cH^\delta},X)$ approaches $2$. \end{itemize} \end{theorem} That is, the averages in Theorem~\ref{thallcubicfields} and \cite[Theorem 1]{BV} remain the same even when ordering cubic fields by height, restricting $n$ to slowly growing ranges relative to the height, and imposing quite general local conditions on the cubic~fields. \begin{rem}\label{remdelta14} {\em We always take $\delta\leq1/4$ because every cubic field $K$ is $n$-monogenized for some $n\ll|\Delta(K)|^{1/4}$.
Indeed, the covolume of ${\mathcal O}_K$ in ${\mathcal O}_K\otimes {\mathbb R}$ is $|\Delta(K)|^{1/2}$, and so the length of the second successive minimum $\alpha\in{\mathcal O}_K$ is $\ll |\Delta(K)|^{1/4}$ (the first successive minimum being $1\in{\mathcal O}_K$). Therefore, $$|\Delta({\mathbb Z}[\alpha])|=|\Delta(\langle1,\alpha,\alpha^2\rangle)|\ll \bigl(1\cdot |\Delta(K)|^{1/4}\cdot |\Delta(K)|^{1/2}\bigr)^2\ll |\Delta(K)|^{3/2}.$$ Hence $$[{\mathcal O}_K:{\mathbb Z}[\alpha]]=|\Delta({\mathbb Z}[\alpha])/\Delta({\mathcal O}_K)|^{1/2} \ll (|\Delta(K)|^{3/2}/|\Delta(K)|)^{1/2} = |\Delta(K)|^{1/4}.$$} \end{rem} This section is organized as follows. In \S\ref{subsecsubcub}, we give asymptotics for the number of $n$-monogenized cubic rings of bounded height $H$ and $n<cH^\delta$ in terms of local volumes of certain sets of binary cubic forms. In \S\ref{subsecsubquar}, we determine asymptotics for the number of quartic rings with $n$-monogenized cubic resolvent rings, where these resolvent rings again have bounded height and bounded $n$. In~\S\ref{subsecsubunif}, we then prove uniformity estimates that allow us to impose conditions of maximality on these counts. The leading constants for these asymptotics are expressed as a product of local volumes of sets in $U(R)$ and $V(R)$, where $R$ ranges over ${\mathbb R}$ and ${\mathbb Z}_p$ for all primes $p$. In \S\ref{seclmsub}, we prove certain mass formulas relating \'etale quartic and cubic extensions of~${\mathbb Q}_p$. Finally, in \S\ref{subsecsubvol}, we use these mass formulas to compute the required local volumes, concluding the proofs of Theorem~\ref{thsubmon} and Theorem~\ref{thsubmonloc}. \subsection{The number of $n$-monogenized cubic rings of bounded height $H$ and $n<cH^\delta$}\label{subsecsubcub} In this subsection, we determine the asymptotic number of $n$-monogenized cubic rings of bounded index and height. By Theorem \ref{thcubringpar}, such rings are parametrized by ${M}({\mathbb Z})$-orbits on binary cubic forms in $U({\mathbb Z})$ of bounded height $H$ whose $x^3$-coefficient is positive and less than~$cH^\delta$. \begin{defn} \label{sec:3.4} {\em For a binary cubic form $f(x,y)=ax^3+bx^2y+cxy^2+dy^3\in U({\mathbb R})$, we define the {\bf index} ${\rm ind}$, the {\bf $F$-invariants} $I$ and $J$, the {\bf height} $H$, and the {\bf discriminant} $\Delta$ of $f$ by: \begin{equation*} \begin{array}{rcl} {\rm ind}(f)&:=&a;\\[.05in] I(f)&:=&b^2-3ac;\\[.05in] J(f)&:=&-2b^3+9abc-27a^2d;\\[.05in] H(f)&:=&a^{-2}{\rm max}\bigl\{|I(f)|^3,J(f)^2/4\bigr\}; \\[.05in] \Delta(f)&:=&b^2c^2-4ac^3-4b^3d-27a^2d^2+18abcd. \end{array} \end{equation*} }\end{defn} If the ${M}({\mathbb Z})$-orbit of an element $f\in U({\mathbb Z})$ corresponds to an $n$-monogenized ring $(C,\alpha)$ by the bijection of Theorem~\ref{df2}, then: \begin{equation*} {\rm ind}(f)=n;\quad I(f)=I(C,\alpha);\quad J(f)=J(C,\alpha);\quad H(f)=H(C,\alpha);\quad \Delta(f)=\Delta(C).\end{equation*} \pagebreak \begin{cons} \label{sec:3.5} {\em Define the sets \smash{${\mathcal F}_U^\pm$} and \smash{${\mathcal F}_U^\pm({\leq\! cH^\delta},X)$} as follows: \begin{equation*} \begin{array}{rcl} \displaystyle{\mathcal F}_U^\pm&\!\!:=\!\!& \displaystyle\{f(x,y)=ax^3+bx^2y+cxy^2+dy^3\in U({\mathbb R}):\pm\Delta(f)>0, \;0<a,\;0\leq b<3a\}; \\[.05in] \displaystyle \smash{{\mathcal F}_U^\pm}({\leq\! cH^\delta},X)&\!\!:=\!\!&\displaystyle\bigl\{f(x,y)\in{\mathcal F}_U^\pm:{\rm ind}(f)\leq cH(f)^\delta,\; H(f)<X\bigr\}.
\end{array} \end{equation*} Then $\smash{{\mathcal F}_U^\pm}$ is a fundamental domain for the action of ${M}({\mathbb Z})$ on the set of binary cubic forms $f(x,y)\in U({\mathbb R})$ such that $\pm\Delta(f)>0$ and the $x^3$-coefficient of $f(x,y)$ is positive. (In this paper, the symbol ``$\pm$'' always refers to two distinct statements, one for $+$ and one for $-$.) }\end{cons} In this subsection, we prove the following theorem. \begin{theorem}\label{thsubcount} Let $\smash{N^\pm_3}({\leq\! cH^\delta},X)$ denote the number of isomorphism classes of $n$-monogenized $S_3$-orders $(C,\alpha)$ such that $\pm\Delta(C)>0$, $n\leq \smash{cH(C,\alpha)^{\delta}}$, and $H(C,\alpha)<X$. Then \begin{equation*} \smash{N^\pm_3({\leq\! cH^\delta},X)}={\rm Vol}\bigl(\smash{{\mathcal F}_U^\pm}({\leq\! cH^\delta},X)\bigr)+O(X^{5/6}) = k\, X^{5/6+2\delta/3}+O(X^{5/6}), \end{equation*} where $k$ and the implied $O$-constant depend only on $n$, $c$, and $\delta$. \end{theorem} We begin by counting ${M}({\mathbb Z})$-equivalence classes of integer-coefficient binary cubic forms, with bounded index and height, that satisfy any finite set of congruence conditions. We use the following result on counting lattice points in regions due to Davenport. \begin{proposition}[\cite{Davenport1}]\label{davlem} Let $\mathcal R$ be a bounded, semi-algebraic multiset in ${\mathbb R}^n$ having maximum multiplicity $m$ that is defined by at most $k$ polynomial inequalities, each having degree at most $\ell$. Let ${\mathcal R}'$ denote the image of ${\mathcal R}$ under any $($upper or lower$)$ triangular unipotent transformation of ${\mathbb R}^n$. Then the number of integer lattice points $($counted with multiplicity$)$ contained in the region $\mathcal R'$ is \[{\rm Vol}(\mathcal R)+ O({\rm max}\{{\rm Vol}(\bar{\mathcal R}),1\}),\] where ${\rm Vol}(\bar{\mathcal R})$ denotes the greatest $d$-dimensional volume of any projection of $\mathcal R$ onto a coordinate subspace obtained by equating $n-d$ coordinates to zero, where $d$ takes all values from $1$ to $n-1$. The implied constant in the second summand depends only on $n$, $m$, $k$, and $\ell$. \end{proposition} \begin{nota} \label{sec:3.8} {\em Let $L\subset U({\mathbb Z})$ denote an ${M}({\mathbb Z})$-invariant set defined by congruence conditions modulo some positive integer, and let $\nu(L)$ denote the volume of the closure of $L$ in $U(\widehat{{\mathbb Z}})$, where~$\widehat{{\mathbb Z}}:=\prod_p {\mathbb Z}_p$. For any subset $L\subset U({\mathbb R})$, let $L^\pm$ denote the set of elements $f\in L$ with $\pm\Delta(f)>0$. For real numbers $T,X>0$ such that $T=O(X^{1/4})$, define the sets \begin{equation}\label{definesets} \displaystyle{\mathcal F}_U^\pm(T;X):=\displaystyle\bigl\{f(x,y)\in{\mathcal F}_U^\pm:T\leq {\rm ind}(f)<2T,\; X\leq H(f)<2X\bigr\}. \end{equation} }\end{nota} \begin{proposition}\label{propcountsmf} We have \begin{equation*} \#\bigl\{f\in {M}({\mathbb Z})\backslash L^\pm: T\leq {\rm ind}(f)<2T,\;X\leq H(f)<2X\bigr\} =\nu(L)\,{\rm Vol}\bigl({\mathcal F}_U^\pm(T;X)\bigr)+O\bigl(X^{5/6}/T^{1/3}\bigr). \end{equation*} \end{proposition} \begin{proof} The height and leading-coefficient bounds on an element $f(x,y)=ax^3+bx^2y+cxy^2+dy^3\in {\mathcal F}_U^\pm(T;X)$ imply that we have: \begin{equation}\label{eqFUYX} |a|\ll T;\quad |b|\ll T;\quad |c|\ll X^{1/3}/T^{1/3}; \quad |d|\ll X^{1/2}/T.
\end{equation} Applying Proposition \ref{davlem} yields \begin{equation*} \#\bigl({\mathcal F}_U^\pm(T;X)\cap L\bigr)=\nu(L)\, {\rm Vol}\bigl({\mathcal F}_U^\pm(T;X)\bigr)+O\bigl(X^{5/6}/T^{1/3}\bigr). \end{equation*} Since ${\mathcal F}_U^\pm$ is a fundamental domain for the action of ${M}({\mathbb Z})$ on $U({\mathbb R})^\pm$, the proposition follows. \end{proof} \noindent The main term in Proposition~\ref{propcountsmf} grows as $X^{5/6}T^{2/3}$. Hence the error term is smaller than the main term whenever $T\gg_\varepsilon X^{\varepsilon}$.\pagebreak \begin{defn} \label{sec:3.10} {\em An element $f\in U({\mathbb Z})$ is {\bf generic} if the cubic ring corresponding to $f$ is an order in an $S_3$-cubic field. Similarly, an element $v\in V({\mathbb Z})$ is {\bf generic} if the quartic ring corresponding to $v$ is an order in an $S_4$-quartic field. For any subset $L$ of $U({\mathbb Z})$ (resp.\ $V({\mathbb Z})$), we denote the set of generic elements in $L$ by $L^{\rm gen}$.} \end{defn} \begin{lemma}\label{lemsubnongencub} We have $ \#\bigl((U({\mathbb Z})\setminus U({\mathbb Z})^{\rm gen})\cap {\mathcal F}_U^\pm(T;X) \bigr)=O_\varepsilon(X^{1/2+\varepsilon}\,T).$ \end{lemma} \begin{proof} This follows immediately from the coefficient bounds \eqref{eqFUYX} along with the fact that when $a$ and $d$ are nonzero, the coefficients $a$, $b$, and $d$ of an integer-coefficient binary cubic form $ax^3+bx^2y+cxy^2+dy^3\in U({\mathbb Z})\setminus U({\mathbb Z})^{\rm gen}$ determine $c$ up to $O_\varepsilon(X^\varepsilon)$ choices (as in the proofs of \cite[Lemmas~21--22]{MAJ}). \end{proof} \noindent {\bf Proof of Theorem~\ref{thsubcount}:} Theorem \ref{thsubcount} immediately follows by breaking the intervals $[1,X]$ and $[1,cX^\delta]$ into dyadic ranges, and then applying Proposition \ref{propcountsmf} and Lemma \ref{lemsubnongencub} to each pair of dyadic ranges. $\Box$ \subsection{The number of quartic rings having $n$-monogenized cubic resolvent rings of bounded height~$H$ and $n<cH^\delta$}\label{subsecsubquar} In this subsection, we determine the asymptotic number of quartic rings having an $n$-monogenized cubic resolvent ring with bounded height $H$ and $n<cH^\delta$. \begin{theorem}\label{thqsubcount} For $i\in\{0,1,2\}$, let $\smash{N_4^{(i)}}({\leq\! cH^\delta},X)$ denote the number of isomorphism classes of pairs $(Q,(C,\alpha))$, where $Q$ is an order in an $S_4$-quartic field with $4-2i$ real embeddings and $(C,\alpha)$ is an $n$-monogenized cubic resolvent ring of $Q$ with $n\leq cX^{\delta}$ and $H<X$. Then \begin{equation}\label{n4eq} N^{(i)}_4({\leq\! cH^\delta},X)=\frac{1}{m_i}\, {\rm Vol}\bigl({\mathcal F}_{{\rm SL}_3}\cdot{\mathcal F}_V^{(i)}({\leq\! cH^\delta},X)\bigr)+O(X^{5/6+\delta/3}), \end{equation} where \smash{${\mathcal F}_{{\rm SL}_3}$} is a fundamental domain for the action of ${\rm SL}_3({\mathbb Z})$ on ${\rm SL}_3({\mathbb R})$, and the sets $\smash{{\mathcal F}_V^{(i)}}({\leq\! cH^\delta},X)$ and the constants $m_i$ are defined immediately after Proposition $\ref{propsmfssize}$. \end{theorem} \begin{nota} \label{sec:3.13} {\em Let $V({\mathbb Z})_+:=\{(A,B)\in V({\mathbb Z}) : \det(A)>0\}=V({\mathbb Z})\cap V({\mathbb R})_+$.
}\end{nota} To prove Theorem~\ref{thqsubcount}, we use the natural bijection of Corollary~\ref{qparcor2} between the sets \begin{equation*} \bigl\{(Q,(C,\alpha))\bigr\}\leftrightarrow \bigl({M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})\bigr)\backslash V({\mathbb Z})_+, \end{equation*} where $Q$ is a quartic ring, $C$ is a cubic resolvent ring of $Q$, and $\alpha$ is an $n$-monogenizer of $C$ for some $n\geq 1$. Given $(A,B)\in V({\mathbb Z})_+$ corresponding to $(Q,(C,\alpha))$, the resolvent binary cubic form ${\rm Res}(A,B)=f(x,y)=4\det(Ax-By)$ corresponds to the $n$-monogenized cubic ring~$(C,\alpha)$ under the bijection of Theorem~\ref{thcubringpar}. \begin{defn} \label{sec:3.14} {\em We use the resolvent map ${\rm Res}$ to define the functions ${\rm ind}$, $I$, $J$, $H$, and $\Delta$ on $V({\mathbb R})_+:=\{(A,B)\in V({\mathbb R}) : \det(A)>0\}.$ For $v\in V({\mathbb R})_+$, we set \begin{equation*} {\rm ind}(v)\!:=\!{\rm ind}({\rm Res}(v));\: I(v)\!:=\!I({\rm Res}(v));\: J(v)\!:=\!J({\rm Res}(v));\: H(v)\!:=\!H({\rm Res}(v));\: \Delta(v)\!:=\!\Delta({\rm Res}(v)). \end{equation*} These functions are invariants for the action of ${M}({\mathbb R})\times{\rm SL}_3({\mathbb R})$ on $V({\mathbb R})_+$. }\end{defn} In the rest of this subsection, to prove Theorem~\ref{thqsubcount}, we determine the number of ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-orbits on $V({\mathbb Z})_+$ having bounded height and index. \subsubsection{Reduction theory for the action of ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$ on $V({\mathbb R})_+$} \begin{nota} \label{sec:3.15} {\em For $i\in\{0,1,2\}$, let $V({\mathbb R})^{(i)}\subset V({\mathbb R})_+$ denote the set of elements corresponding to the ${\mathbb R}$-algebra ${\mathbb R}^{4-2i}\times{\mathbb C}^i$. That is, $V({\mathbb R})^{(i)}$ consists of elements having splitting type $(1111)$ when $i=0$, $(112)$ when $i=1$, and $(22)$ when $i=2$. }\end{nota} \begin{lemma}\label{lemsubmonss} The size of the stabilizer in ${\rm SL}_3({\mathbb R})$ of an element $(A,B)\in V({\mathbb R})$ is $4$ if $\Delta(A,B)>0$ and $2$ if $\Delta(A,B)<0$. \end{lemma} \begin{proof} As described in Remark \ref{geom}, a nondegenerate element $(A,B)\in V({\mathbb R})$ gives rise to a set ${\mathcal Q}$ of four points in ${\mathbb P}^2({\mathbb C})$, equipped with the action of $\mathrm{Gal}({\mathbb C}/{\mathbb R})={\mathbb Z}/2{\mathbb Z}$. Let ${\mathcal P}'$ denote the set of pairs of lines $(L_1,L_2)$, where $L_1$ passes through two of the points in ${\mathcal Q}$, and $L_2$ passes through the other two. The set ${\mathcal P}'$ inherits an action of $\mathrm{Gal}({\mathbb C}/{\mathbb R})$ from ${\mathcal Q}$. Theorem \ref{thpid} now implies that the stabilizer in ${\rm SL}_3({\mathbb R})$ of $(A,B)$ is isomorphic to the set of Galois-invariant permutations of ${\mathcal Q}$ which induce the trivial permutation on ${\mathcal P}'$. If $\Delta(A,B)>0$, then ${\mathcal Q}$ either consists of four points defined over ${\mathbb R}$ or consists of two pairs of complex conjugate points. In either case, the nontrivial permutations of ${\mathcal Q}$ which induce the trivial permutation on ${\mathcal P}'$ are the double transpositions. If $\Delta(A,B)<0$, then ${\mathcal Q}$ consists of two points $Q_1$ and $Q_2$ defined over ${\mathbb R}$ and a pair $Q_3$ and $Q_4$ of complex conjugate points.
In this case, the only nontrivial permutation of ${\mathcal Q}$ which induces the trivial permutation on ${\mathcal P}'$ is the one switching $Q_1$ with $Q_2$ and $Q_3$ with~$Q_4$. This concludes the proof of the lemma. \end{proof} \begin{lemma}\label{lemsubmonos} Let $f(x,y)\in U({\mathbb R})$ be a binary cubic form with $\Delta(f)\neq 0$ and positive $x^3$-coefficient. \begin{itemize} \item[{\rm (a)}] If $\Delta(f)>0$, then the set of elements in $V({\mathbb R})_+$ with resolvent $f$ consists of one ${\rm SL}_3({\mathbb R})$-orbit with splitting type $(1111)$ and three ${\rm SL}_3({\mathbb R})$-orbits with splitting type $(22)$. \item[{\rm (b)}] If $\Delta(f)<0$, then the set of elements in $V({\mathbb R})_+$ with resolvent $f$ consists of a single ${\rm SL}_3({\mathbb R})$-orbit with splitting type $(112)$. \end{itemize} \end{lemma} \begin{proof} In Case (a), Theorem \ref{thpid} implies that the set of ${\rm SL}_3({\mathbb R})$-orbits on $V({\mathbb R})_+$ with resolvent $f$ is in bijection with the set of \'etale quartic extensions of ${\mathbb R}$ having cubic resolvent algebra ${\mathbb R}^3$ along with a choice of basis for ${\mathbb R}^3$. Theorem \ref{thHeilbronPID} implies that the latter set is in bijection with the set of \'etale quadratic extensions of ${\mathbb R}^3$ whose discriminants have square norm. There are four such extensions: first, each ${\mathbb R}$-factor can split, yielding the algebra ${\mathbb R}^6$ corresponding to elements in $V({\mathbb R})$ with splitting type $(1111)$. Second, exactly one of the ${\mathbb R}$-factors can split, yielding three different extensions ${\mathbb R}^2\times{\mathbb C}^2$ each corresponding to elements in $V({\mathbb R})$ with splitting type $(22)$. Similarly, in Case (b), the set of ${\rm SL}_3({\mathbb R})$-orbits on $V({\mathbb R})_+$ with resolvent $f$ is in bijection with the set of \'etale quadratic extensions of ${\mathbb R}\times{\mathbb C}$ whose discriminants have square norm in ${\mathbb R}$. There is only one such extension, namely, ${\mathbb R}^2\times{\mathbb C}^2$ corresponding to elements in $V({\mathbb R})$ with splitting type $(112)$. \end{proof} \begin{rem}\label{three22orbits}{\em When $\Delta(f)>0$, we can describe the three ${\rm SL}_3({\mathbb R})$-orbits in $V({\mathbb R})_+$ with splitting type $(22)$ as follows. An element $(A,B)\in V({\mathbb R})_+$ has splitting type $(22)$ if and only if $A$ and $B$ have no common zeros in ${\mathbb P}^2({\mathbb R})$. This can occur in three ways: \begin{itemize} \item[(i)] $A$ has no zeros in ${\mathbb P}^2({\mathbb R})$, i.e., $A$ is anisotropic. \item[(ii)] $A$ has zeros in ${\mathbb P}^2({\mathbb R})$, i.e., $A$ is isotropic, and $B$ takes only {\it positive} values on the zeros of~$A$ in ${\mathbb P}^2({\mathbb R})$. \item[(iii)] $A$ has zeros in ${\mathbb P}^2({\mathbb R})$, i.e., $A$ is isotropic, and $B$ takes only {\it negative} values on the zeros of~$A$ in ${\mathbb P}^2({\mathbb R})$. \end{itemize}\pagebreak Note that the conditions (ii) and (iii) are disjoint because, if $B$ took both positive and negative values on the zeros of $A$, then by the intermediate value theorem, $A$ and $B$ would have a common zero in ${\mathbb P}^2({\mathbb R})$---a contradiction.
}\end{rem} \begin{nota} \label{sec:3.19} {\em The conditions (i), (ii), and (iii) on $(A,B)\in V({\mathbb R})_+$ in Remark~\ref{three22orbits} correspond exactly to the three orbits in Lemma~\ref{lemsubmonos}(a) having splitting type (22); we denote these three orbits in $V({\mathbb R})_+$ by $V({\mathbb R})^{(2\#)}$, $V({\mathbb R})^{(2+)}$, and $V({\mathbb R})^{(2-)}$, respectively. Thus $V({\mathbb R})^{(2)}=V({\mathbb R})^{(2\#)}\cup V({\mathbb R})^{(2+)}\cup V({\mathbb R})^{(2-)}$. For any $i\in\{0,1,2,2\#,2+,2-\}$ and any subset $L\subset V({\mathbb R})_+$, let $L^{(i)}$ denote the set $L\cap V({\mathbb R})^{(i)}$. }\end{nota} \begin{cons} \label{sec:3.20} {\em Suppose $2T<cX^\delta$, and recall the sets $\smash{{\mathcal F}_U^\pm}(T;X)\subset U({\mathbb R})$ defined in~(\ref{definesets}).~Let \begin{equation*} \kappa=\kappa(T;X):=\smash{\bigl\lfloor X^{1/6}/T^{2/3}\bigr\rfloor}. \end{equation*} Since $2T<cX^\delta\ll X^{1/4}$, we have $\kappa\gg 1$. Let $\smash{\widetilde{{\mathcal F}}_U^\pm}(T;X)$ denote the $\kappa$-fold cover \begin{equation*} \widetilde{{\mathcal F}}_U^\pm(T;X):=\bigcup_{0\leq k\leq\kappa}\left(\begin{matrix}1 & \\ k & 1\end{matrix}\right) \cdot {\mathcal F}_U^\pm(T;X) \end{equation*} of~$\smash{{\mathcal F}_U^\pm}(T;X)$. }\end{cons} The coefficients of elements $f\in \smash{{\mathcal F}_U^\pm}(T;X)$ satisfy the bounds \eqref{eqFUYX}. It follows that the coefficients of an element $ax^3+bx^2y+cxy^2+dy^3\in \smash{\widetilde{{\mathcal F}}_U^\pm}(T;X)$ satisfy: \begin{equation}\label{eqFUYXcov} |a|\ll T;\quad 0\leq b\ll X^{1/6}T^{1/3};\quad |c|\ll X^{1/3}/T^{1/3}; \quad |d|\ll X^{1/2}/T. \end{equation} We now describe a fundamental set for the action of ${\rm SL}_3({\mathbb R})$ on elements in $V({\mathbb R})_+$ with resolvent in $\smash{\widetilde{{\mathcal F}}_U^\pm}(T;X)$. \begin{proposition}\label{propsmfssize} There exist continuous maps \begin{equation*} \begin{array}{rcl} s:\smash{U({\mathbb R})^+}&\to& \smash{V({\mathbb R})^{(i)}}\quad \mathrm{ for }\quad i\in\{0,\smash{2\#},2+,2-\},\\[.035in] s:U({\mathbb R})^-&\to& V({\mathbb R})^{(i)}\quad \mathrm{ for }\quad i=1,\\[-.05in] \end{array} \end{equation*} satisfying: \begin{itemize} \item[{\rm (a)}] The resolvent cubic form of $s(f)$ is $f$, i.e., $s$ gives a section of the cubic resolvent map $V({\mathbb R})^{(i)}\to U({\mathbb R})^\pm$. \item[{\rm (b)}] If $f\in\widetilde{{\mathcal F}}_U^\pm(T;X)$, and $s(f)=(A,B)$ with $A=(a_{ij})$ and $B=(b_{ij})$, then \begin{equation}\label{sfbounds} |a_{ij}|\leq T^{1/3} \quad\mbox{and}\quad |b_{ij}|\leq X^{1/6} / T^{1/3}. \end{equation} \end{itemize} \end{proposition} \begin{proof} Lemma \ref{lemsubmonos} implies that sections $s:U({\mathbb R})^\pm\to V({\mathbb R})^{(i)}$ exist, and it suffices to prove that the section $s$ can be chosen to satisfy the bounds of~(b). Let $g(x,y)=f(T^{-1/3}x,X^{-1/6}T^{1/3}y)$. Then by (\ref{eqFUYXcov}), the absolute values of all coefficients of $g$ are $\leq 1$. A section $t$ of the resolvent map can be constructed on elements $h\in U({\mathbb R})$ having absolutely bounded coefficients so that all the entries of $t(h)$ have size $O(1)$; for example, when $i=0$ or $i=1$, we may take the section: \begin{equation}\label{tsection} t \,:\, ax^3+bx^2y+cxy^2+dy^3\mapsto \left(\left[ \begin{array}{ccc} &&1/2\\&-a&\\1/2&& \end{array}\right], \left[ \begin{array}{ccc} &1/2&\\1/2&-b&c/2\\&c/2&-d \end{array}\right] \right).
\end{equation} Writing $t(g)=(A,B)$, we set $s(f)=(T^{1/3}A,\,X^{1/6}T^{-1/3}B);$ then $s(f)$ satisfies the bounds (\ref{sfbounds}). \end{proof} \begin{cons} \label{sec:3.22} {\em For $i\in\{0,1,2\#,2+,2-\}$, let ${\mathcal F}_V^{(i)}(T;X)$ (resp.\ $\widetilde{{\mathcal F}}_V^{(i)}(T;X)$, ${\mathcal F}_V^{(i)}({\leq\! cH^\delta},X)$) denote the image of ${\mathcal F}_U^\pm(T;X)$ (resp.\ $\widetilde{{\mathcal F}}_U^\pm(T;X)$, ${\mathcal F}_U^\pm({\leq\! cH^\delta},X)$) under the map $s:U({\mathbb R})^\pm\to V({\mathbb R})^{(i)}$ of Proposition~\ref{propsmfssize}. Let \begin{equation*} \begin{array}{rcl} \smash{\displaystyle{\mathcal F}_V^{(2)}(T;X)}&:=&\smash{\displaystyle{\mathcal F}_V^{(2+)}(T;X)\cup{\mathcal F}_V^{(2-)}(T;X)\cup{\mathcal F}_V^{(2\#)}(T;X);} \\[.125in] \smash{\displaystyle\widetilde{{\mathcal F}}_V^{(2)}(T;X)}&:=&\smash{\displaystyle\widetilde{{\mathcal F}}_V^{(2+)}(T;X)\cup\widetilde{{\mathcal F}}_V^{(2-)}(T;X)\cup\widetilde{{\mathcal F}}_V^{(2\#)}(T;X);} \\[.125in] \smash{\displaystyle{\mathcal F}_V^{(2)}({\leq\! cH^\delta},X)}&:=&\smash{\displaystyle{\mathcal F}_V^{(2+)}({\leq\! cH^\delta},X)\cup {\mathcal F}_V^{(2-)}({\leq\! cH^\delta},X)\cup {\mathcal F}_V^{(2\#)}({\leq\! cH^\delta},X).} \end{array} \end{equation*} Let ${\mathcal F}_{{\rm SL}_3}$ be a fundamental domain for the action of ${\rm SL}_3({\mathbb Z})$ on ${\rm SL}_3({\mathbb R})$ by left multiplication, and~let \begin{equation}\label{eqmi} m_0=m_2=m_{2\pm}=m_{2\#}=4;\quad m_1=2. \end{equation} }\end{cons} \begin{proposition}\label{propsmfd} For $i\in \{0,1,2,2\#,2+,2-\}$, the multiset ${\mathcal F}_{{\rm SL}_3}\cdot \widetilde{{\mathcal F}}_V^{(i)}(T;X)$ is an $m_i\kappa$-fold cover of a fundamental domain for the action of ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$ on the set of elements $v\in V({\mathbb R})^{(i)}$ with $T\leq{\rm ind}(v)<2T$ and $X\leq H(v)<2X$. \end{proposition} \begin{cons} \label{sec:3.24} {\em We now fix ${\mathcal F}_{{\rm SL}_3}$ to lie in a {\bf Siegel domain} as in \cite{BH}, i.e., every element $\gamma\in \smash{{\mathcal F}_{{\rm SL}_3}}$ can be expressed in Iwasawa coordinates in the form $\gamma=n tk$, where $n$ is a lower triangular unipotent matrix with coefficients bounded by $1$ in absolute value, $k$ belongs to the compact group ${\rm SO}_3({\mathbb R})$, and $t=t(s_1,s_2)$ is a diagonal matrix having diagonal entries $s_1^{-2}s_2^{-1}$, $s_1s_2^{-1}$, $s_1s_2^2$, with $s_1,s_2\gg 1$. A Haar measure $d\gamma$ on ${\rm SL}_3({\mathbb R})$ in these coordinates is $(s_1s_2)^{-6}d\nu\,d^\times s_1\, d^\times s_2 \,dk$ (for a proof, see \cite[Proposition 1.5.3]{GoldBook}). }\end{cons} \subsubsection{Averaging and cutting off the cusp} Let $L\subset V({\mathbb Z})$ be a finite union of translates of lattices that is ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-invariant. We now determine the asymptotic number of ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-orbits on $L^{\rm gen}$ having bounded height $H$ and index bounded by $cH^\delta$. \begin{nota}\label{N:3.25}{\em Let $N^{(i)}(L;T;X)$ denote the number of ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-orbits on the set of elements $v\in L^{\rm gen}\cap \smash{V({\mathbb R})_+^{(i)}}$ such that $T\leq{\rm ind}(v)<2T$ and $X\leq H(v)<2X$. } \end{nota} Then we prove the following theorem.
\begin{theorem}\label{thmsmqc} We have \begin{equation*} N^{(i)}(L;{\Y;X})= \frac{1}{m_i\kappa}\nu(L)\, {\rm Vol}\bigl({\mathbb F}F_{{\rm SL}_3}\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X})\bigr) +O_\varepsilonilon(X^{5/6+\varepsilonilon}{T}^{1/3}). \end{equation*} \end{theorem} \begin{proof} By Proposition \ref{propsmfd}, it follows that \begin{equation*} N^{(i)}(L;{\Y;X})=\frac{1}{m_i\kappa}\#\bigl\{ {\mathbb F}F_{{\rm SL}_3}\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X}) \cap L^{\rm gen}\bigl\}. \end{equation*} Let $G_0{}set {\rm SL}_3({\mathbb R})$ be a nonempty open bounded set. Then, using the identical averaging argument as in \cite[Theorem 2.5]{BhSh}, we have \begin{equation}\label{eqsmmtavg} \displaystyle \#\bigl\{ {\mathbb F}F_{{\rm SL}_3}\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X}) \cap L^{\rm gen}\bigl\} = \displaystyle \frac{1}{{\rm Vol}(G_0)} \int_{\gamma\in{\mathbb F}F_{{\rm SL}_3}}\#\bigl\{ \gamma G_0\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X}) \cap L^{\rm gen}\bigl\}d\gamma. \end{equation} The remainder of the proof now follows exactly the arguments of \pagebreak \cite[\S3]{dodqf}. First, proceeding as in the proofs of \cite[Lemma 11]{dodqf} and \cite[Lemmas~12--13]{dodqf}, we obtain the following two estimates, respectively: \begin{equation*} \begin{array}{rcl} \displaystyle\int_{\gamma\in{\mathbb F}F_{{\rm SL}_3}}\#\bigl\{(A,B)\in \gamma G_0\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X}) \cap L^{\rm gen}:a_{11}=0\bigl\}d\gamma&\ll& X/{T}^{1/3}; \\[.15in] \displaystyle\int_{\gamma\in{\mathbb F}F_{{\rm SL}_3}}\#\bigl\{(A,B)\in \gamma G_0\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X}) \cap (L\setminus L^{\rm gen}):a_{11}\neq 0\bigl\}d\gamma&\ll_\varepsilonilon& X^{1+\varepsilonilon}/{T}^{1/3}. \\[-.05in] \end{array} \end{equation*} Next, since Proposition \ref{propsmfssize} implies that the coefficients $a_{ij}$ of elements in $\widetilde{{\mathbb F}F}_V^{(i)}({\Y;X})$ are bounded by $O({T}^{1/3})$, it follows that the set \begin{equation*} \bigl\{(A,B)\in \gamma G_0\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X}) \cap V({\mathbb Z}):a_{11}\neq 0\bigl\} \end{equation*} is empty unless $\smash{s_1^{-4}s_2^{-2}}{T}^{1/3}\gg 1$, or equivalently, $\smash{s_1^4s_2^2}\ll {T}^{1/3}$. Carrying out the integral in~\eqref{eqsmmtavg}, cutting it off when $\smash{s_1^4s_2^2}\ll {T}^{1/3}$, and applying Proposition \ref{davlem} to estimate the number of lattice points in $\gamma G_0\cdot \smash{\widetilde{{\mathbb F}F}_V^{(i)}}({\Y;X})$, we obtain \begin{equation}\label{eqsmmtavg1} \begin{array}{rcl} \displaystyle \#\bigl\{ {\mathbb F}F_{{\rm SL}_3}\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X}) \cap L^{\rm gen}\bigl\}&=& \displaystyle \frac{\nu(L)}{{\rm Vol}(G_0)} \int_{\gamma\in{\mathbb F}F_{{\rm SL}_3}}{\rm Vol}\bigl( \gamma G_0\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X})\bigr)d\gamma +O_\varepsilonilon\bigl(X^{1+\varepsilonilon}{T}^{-1/3}\bigr) \\[.2in]&=& \displaystyle\frac{\nu(L)}{{\rm Vol}(G_0)} \int_{\gamma\in G_0}{\rm Vol}\bigl( {\mathbb F}F_{{\rm SL}_3}\gamma\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X})\bigr)d\gamma +O_\varepsilonilon\bigl(X^{1+\varepsilonilon}{T}^{-1/3}\bigr) \\[.2in]&=& \displaystyle\nu(L)\cdot{\rm Vol}\bigl( {\mathbb F}F_{{\rm SL}_3}\cdot \widetilde{{\mathbb F}F}_V^{(i)}({\Y;X})\bigr) +O_\varepsilonilon\bigl(X^{1+\varepsilonilon}{T}^{-1/3}\bigr). 
\end{array} \end{equation} Finally, we note that by Proposition \ref{propsmfd}, the multiset ${\mathbb F}F_{{\rm SL}_3}\cdot \smash{\widetilde{{\mathbb F}F}_V^{(i)}}({\Y;X})$ contains exactly $m_i\,\kappa$ representatives of each ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-orbit $v\in L^{\rm gen}\cap V({\mathbb R})^{(i)}$. Dividing the first and last terms of \eqref{eqsmmtavg1} by $m_i\,\kappa$ thus yields Theorem \ref{thmsmqc}. \end{proof} \noindent {\bf Proof of Theorem~\ref{thqsubcount}:} Theorem \ref{thqsubcount} now follows immediately by breaking the intervals $[1,X]$ and $[1,cX^\delta]$ into dyadic ranges, and then applying Theorem \ref{thmsmqc} to each pair of dyadic ranges. $\Box$ \subsection{Uniformity estimates and squarefree sieves}\label{subsecsubunif} In this subsection, we count ${M}({\mathbb Z})$-orbits on $U({\mathbb Z})$ and ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-orbits on $V({\mathbb Z})$ of bounded height and index satisfying certain infinite sets of congruence conditions. \begin{nota} \label{sec:3.26} {\em For a large collection $S=(S_p)_p$ of cubic local specifications, let $U({\mathbb Z})_S$ denote the set of elements $f\in U({\mathbb Z})$ such that $f$ belongs to $S_p$ for all $p$, and let $V({\mathbb Z})_S$ denote the set of elements in $V({\mathbb Z})$ whose cubic resolvent forms are in $U({\mathbb Z})_S$. }\end{nota} We prove the following theorem: \begin{theorem}\label{thsubsieve} For a set $L\subset U({\mathbb Z})$ $($resp.\ $L\subset V({\mathbb Z}))$ defined via congruence conditions, let $\nu(L)$ denote the volume of the closure of $L$ in \smash{$U(\widehat{{\mathbb Z}})$} $($resp.\ \smash{$V(\widehat{{\mathbb Z}}))$}. Then \begin{equation*} \begin{array}{rcl} \displaystyle \smash{N^{\pm}(U({\mathbb Z})^{\rm max}_S;{\Y;X})} &=&\displaystyle \;\;\;\;\displaystyle \;\,\smash{\nu\bigl(U({\mathbb Z})^{\rm max}_S\bigr)}\,{\rm Vol}\bigl( {\mathbb F}F_U^\pm({\Y;X})\bigr)+o(X^{5/6}{T}^{2/3}); \\[.05in] \displaystyle N^{(i)}(V({\mathbb Z})^{\rm max}_S;{\Y;X})&=& \displaystyle \frac{1}{m_i }\, \nu\bigl(V({\mathbb Z})^{\rm max}_S\bigr) \,{\rm Vol}\bigl({\mathbb F}F_{{\rm SL}_3}\cdot \,{{\mathbb F}F}_V^{(i)}({\Y;X})\bigr) +o(X^{5/6}{T}^{2/3}). \end{array} \end{equation*} \end{theorem} To prove the first claim of Theorem~\ref{thsubsieve}, we must impose infinitely many congruence conditions on $U({\mathbb Z})$. Namely, we must impose the congruence conditions of maximality at every prime as well as the congruence conditions forced by $\Sigma$. For this, we require estimates on the number of ${M}({\mathbb Z})$-orbits in $U({\mathbb Z})$ having bounded height and index with discriminant divisible by the square of \pagebreak some prime $p>M$. For a prime $p$ and an element $f\in U({\mathbb Z})$, note that $p^2\mid\Delta(f)$ precisely when the cubic ring corresponding to $f$ is {totally ramified} (i.e., has splitting type $(1^3)$) at~$p$ or is non-maximal~at~$p$. Similarly, to prove the second claim of Theorem~\ref{thsubsieve}, we require estimates on the number of ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-orbits on $V({\mathbb Z})$ having bounded height and index with discriminant divisible by the square of some prime $p>M$. This time, note that $p^2\mid\Delta(v)$ for $(A,B)\in V({\mathbb Z})$ precisely when $(A,B)$ corresponds to a quartic ring that is either {\bf extra ramified} (i.e., has splitting type $(1^31)$, $(1^21^2)$, $(2^2)$, or $(1^4)$) at~$p$ or is nonmaximal at $p$.
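For instance, the binary cubic form $x^3-py^3$ has splitting type $(1^3)$ at $p$, and its discriminant $-27p^2$ is indeed divisible by $p^2$. The following short computation (an illustrative sketch in Python/SymPy, using the standard discriminant formula for a binary cubic form $ax^3+bx^2y+cxy^2+dy^3$; it is not part of the argument) verifies this:
\begin{verbatim}
# Illustrative sketch, not part of the argument: the standard discriminant
# of a binary cubic form f = a x^3 + b x^2 y + c x y^2 + d y^3, and a check
# that the totally ramified example x^3 - p y^3 has p^2 | Disc(f).
from sympy import symbols, factor

def disc_binary_cubic(a, b, c, d):
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

p = symbols('p')
print(factor(disc_binary_cubic(1, 0, 0, -p)))   # prints -27*p**2
\end{verbatim}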
\begin{nota} \label{sec:3.28} {\em For a prime $p$, let $\smash{{\mathcal W}_p^{(1)}}(U)$ denote the set of integer-coefficient binary cubic forms with splitting type \smash{$(1^3)$} at $p$, and $\smash{{\mathcal W}_p^{(2)}}(U)$ the set of integer-coefficient binary cubic forms that are nonmaximal at $p$. Similarly, let \smash{${\mathcal W}_p^{(1)}(V)$} denote the set of elements in $V({\mathbb Z})$ with splitting type \smash{$(1^31)$, $(1^21^2)$, $(2^2)$, or $(1^4)$} at $p$, and $\smash{{\mathcal W}_p^{(2)}}(V)$ the set of elements in $V({\mathbb Z})$ that are nonmaximal at~$p$. Finally, set ${\mathcal W}_p(U):=\smash{{\mathcal W}_p^{(1)}}(U)\cup \smash{{\mathcal W}_p^{(2)}}(U)$ and ${\mathcal W}_p(V):=\smash{{\mathcal W}_p^{(1)}}(V)\cup \smash{{\mathcal W}_p^{(2)}}(V)$. In the language of \cite[\S1.5]{geosieve}, the sets \smash{${\mathcal W}_p^{(i)}(U)$} and \smash{${\mathcal W}_p^{(i)}(V)$} consist of elements in $U({\mathbb Z})$ and $V({\mathbb Z})$, respectively, whose discriminants are divisible by $p^2$ for ``mod $p^i$ reasons''.} \end{nota} We begin by bounding the number of $n$-monogenized cubic rings (resp.\ quartic rings with $n$-mon\-ogenized cubic resolvent rings) that are totally ramified (resp.\ extra ramified) at some prime~$p>M$. \begin{proposition}\label{propsmusgs} For any positive real numbers $X$, ${T}$, and $M$ with ${T}\ll X^{1/4}$, we have \begin{equation*} \begin{array}{rcl} \displaystyle \sum_{p>M}N^{\pm}(\smash{{\mathcal W}_p^{(1)}}(U);{\Y;X}) &=&\displaystyle O_\varepsilon\bigl({X^{5/6}{T}^{2/3}}/{M^{1-\varepsilon}}\bigr) +O_\varepsilon(X^{5/6+\varepsilon}); \\[.215in] \displaystyle\sum_{p>M} N^{(i)}(\smash{{\mathcal W}_p^{(1)}}(V);{\Y;X})&=& \displaystyle O_\varepsilon\bigl({X^{5/6}{T}^{2/3}}/{M^{1-\varepsilon}}\bigr) +O(X^{5/6}{T}^{1/3}). \\[-.05in] \end{array} \end{equation*} \end{proposition} \begin{proof} Proposition~\ref{propsmusgs} follows immediately by \cite[Theorem 3.3]{geosieve} and our counting results (Theorems~\ref{thsubcount} and \ref{thqsubcount}) of the previous two subsections. \end{proof} Next, we obtain bounds on the number of orbits on $U({\mathbb Z})$ and $V({\mathbb Z})$ of bounded height and index that correspond to nonmaximal rings. \begin{nota} \label{sec:3.30} {\em For a cubic number field $K$, we define the following {\bf height} function $H$ on its ring of integers ${\mathcal O}_K$: for $\beta\in{\mathcal O}_K$, let $H(\beta):={\rm max}_v\bigl\{|\beta'|_v\bigr\},$ where $v$ ranges over the archimedean valuations of $K$ and $\beta'$ denotes the unique ${\mathbb Z}$-translate of $\beta$ such that the absolute trace ${\rm Tr}(\beta')\in\{0,1,2\}$. }\end{nota} Let $\alpha\in {\mathcal O}_K$, and let $f$ be a binary cubic form corresponding to the pair $({\mathcal O}_K,\alpha)$. Since either $\alpha'$, $\alpha'-1/3$, or $\alpha'-2/3$ satisfies the equation \begin{equation}\label{ijeq} x^3 - (I(f)/3) x - J(f)/27 = 0, \end{equation} we have the estimate \begin{equation}\label{htest} H(\alpha)\ll {\rm max}\{|I(f)|^3,J(f)^2/4\}^{1/6} = (n^{2}H(f))^{1/6}={\rm ind}(f)^{1/3}H(f)^{1/6}. \end{equation} Indeed, if $H(\alpha)\gg {\rm ind}(f)^{1/3}H(f)^{1/6}$, then the left-hand side of (\ref{ijeq}) could not equal zero as the $x^3$-term would dominate the other two terms. \begin{defn} \label{sec:3.31} {\em Let $C$ be a nondegenerate cubic ring, and consider $C$ embedded as a lattice in $C\otimes{\mathbb R}\cong{\mathbb R}^3$ with covolume $\sqrt{|\Delta(C)|}$. Let $1\leq\ell_1(C)\leq\ell_2(C)$ denote the three successive minima of~$C$.
We define the {\bf skewness} of a cubic ring $C$ to be} \begin{equation*} {\rm sk}(C):=\ell_2(C)/\ell_1(C). \end{equation*} \end{defn} As in \cite[Lemma 4.7]{SSW}, the number of integers $\alphapha\in C$ with bounded height can be controlled by the discriminant and skewness of $C$. \pagebreak \begin{lemma}\label{lemcountint} Let $C$ be a cubic ring with discriminant $D=\Delta(C)$ and skewness $Z={\rm sk}(C)$. Then the number of elements $\alphapha\in C\setminus{\mathbb Z}$ with ${\rm Tr}(\alphapha)\in\{0,1,2\}$ and $H(\alphapha)< H$ is \begin{equation*} \ll\left\{ \begin{array}{ccl} 0 &\mbox{if}& H< \ell_1(C)\asymp D^{1/4}/Z^{1/2};\\ HZ^{1/2}/D^{1/4} &\mbox{if}& \ell_1(C)< H< \ell_2(C)\asymp D^{1/4}Z^{1/2};\\ H^2/D^{1/2} &\mbox{if}& \ell_2(C)< H. \end{array}\right. \end{equation*} \end{lemma} \begin{proof} The definition of ${\rm sk}(C)$ and the fact that $\ell_1(C)\ell_2(C)\asymp\sqrt{D}$ imply that $\ell_1(C)\asymp D^{1/4}/Z^{1/2}$ and $\ell_2(C)\asymp D^{1/4}Z^{1/2}$. When $H<\ell_1(C)$, there are no elements $\alphapha\in C\setminus{\mathbb Z}$ with $H(\alphapha)<H$. When $\ell_1(C)<H<\ell_2(C)$, the only elements $\alphapha\in C\setminus{\mathbb Z}$ with ${\rm Tr}(\alphapha)\in\{0,1,2\}$ are (appropriately translated) multiples of the element in $C$ of length $\ell_1(C)$. When $H>\ell_2(C)$, the number of such $\alphapha$ is then bounded by the volume of the region $\{\theta\in C\otimes{\mathbb R}:H(\theta)<H \mbox{ and } {\rm{Tr}}(\theta)\in\{0,1,2\}\}$ divided by $\sqrt{D}$, the covolume of $C$. \end{proof} \begin{proposition}\label{propskringscount} The number of cubic orders $C$ $($resp.\ pairs $(Q,C)$, where $Q$ is a quartic order and $C$ is a cubic resolvent order of $Q)$ satisfying $D\leq|\Delta(C)|<2D$ and ${\rm sk}(C)\gg Z$ is $O(D/Z)$. \end{proposition} \begin{proof} Proposition~\ref{propskringscount} follows from the proofs of \cite[Theorem 4.1 and Proposition~4.4]{SSW}. \end{proof} \begin{corollary}\label{corcountint} The number of pairs $(C,\alphapha)$, where $C$ is a cubic order $($resp.\ triples $(Q,C,\alphapha)$, where $Q$ is a quartic order and $C$ is a cubic resolvent order of $Q)$ satisfying $|\Delta(C)|<X$, $\alphapha\in C\setminus{\mathbb Z}$ with ${\rm Tr}(\alphapha)\in\{0,1,2\}$, and $H(\alphapha)\ll H$ is $O_\varepsilonilon(H^2X^{1/2+\varepsilonilon})$. \end{corollary} \begin{proof} We break up the discriminant range $[0,X]$ of $C$ into $O(\log X)$ dyadic ranges. For each such dyadic range $[D,2D]$, we break up the range of possible skewness of $C$ into dyadic ranges $[Z,2Z]$, where clearly $Z\ll D^{1/2}$. For a cubic order $C$ with $|\Delta(C)|\asymp D$ and ${\rm sk}(C)\asymp Z$, Lemma \ref{lemcountint} implies that the number of elements $\alphapha\in C\setminus{\mathbb Z}$ with ${\rm Tr}(\alphapha)\in\{0,1,2\}$ and $H(\alphapha)\ll H$ is bounded by \begin{equation*} \ll\left\{ \begin{array}{cl} \displaystyle H^2/D^{1/2}+HZ^{1/2}/D^{1/4} &\mbox{if } \displaystyle Z\gg D^{1/2}/H^2;\\[.02in] 0 & \mbox{otherwise}. \end{array}\right. \end{equation*} Adding up over the $O(D/Z)$ orders with discriminant and skewness in this range, gives us the following bound on the number of pairs $(C,\alphapha)$ (resp.\ triples $(Q,C,\alphapha)$): \begin{equation*} \ll\left\{ \begin{array}{cl} \displaystyle D^{1/2}H^2/Z+HD^{3/4}/Z^{1/2} &\mbox{if } \displaystyle Z\gg D^{1/2}/H^2;\\[.02in] 0 &\mbox{otherwise.} \end{array}\right. \end{equation*} In either case, the bound is $O(H^2D^{1/2})$. Adding up over the dyadic ranges yields the result. 
\end{proof} We are now ready to prove the other required uniformity estimate. \begin{proposition}\label{propunifsub2} For any positive real numbers $X$, ${T}$, and $M$ with ${T}\ll X^{1/4}$, we have \begin{equation*} \begin{array}{rcl} \displaystyle \sum_{p>M}N^{(i)}(\smash{{\mathcal W}_p^{(2)}}(U);{\Y;X})&=& \displaystyle O_\varepsilon\bigl({X^{5/6}{T}^{2/3}}/{M^{1-\varepsilon}}\bigr); \\[.215in] \displaystyle\sum_{p>M}N^{(i)}(\smash{{\mathcal W}_p^{(2)}}(V);{\Y;X})&=& \displaystyle O_\varepsilon\bigl({X^{5/6}{T}^{2/3}}/{M^{1-\varepsilon}}\bigr).\\[-.05in] \end{array} \end{equation*} \end{proposition} \begin{proof} If a prime $p$ satisfies $p\ll X^{1/8}$ in the first sum, then we have the following upper bound on the corresponding summand: \begin{equation}\label{smallp} N^{(i)}(\smash{{\mathcal W}_p^{(2)}}(U);{\Y;X})=\#\bigl\{{\mathbb F}F_U^\pm({\Y;X}) \cap\smash{{\mathcal W}_p^{(2)}}(U)\bigr\}\ll {X^{5/6}{T}^{2/3}}/{p^2}. \pagebreak \end{equation} This can be seen by simply partitioning $\smash{{\mathcal W}_p^{(2)}}(U)$ into $O(p^6)$ translates of $p^2 U({\mathbb Z})$ and counting integer points of bounded height in each translate using Proposition~\ref{davlem}, noting that the last variable $d$ takes a range of length at least $X^{1/2}/{T}\gg X^{1/4}\gg p^2$. The sum of the bound \eqref{smallp} over all primes $p$ with $M < p < X^{1/8}$ is less than the bound of Proposition~\ref{propunifsub2}. We may thus assume that $M\gg X^{1/8}$ for the purpose of proving the first estimate of the proposition. Similarly, since the variable $b_{33}$ takes a range of at least $X^{1/6}/{T}^{1/3}\gg X^{1/12}$ in the proof of Theorem \ref{thmsmqc}, we may assume that $M\gg X^{1/24}$ for the purpose of proving the second estimate. Let $f$ be an irreducible element of \begin{equation}\label{equniftemp} \bigcup_{p>M}\bigl\{{\mathbb F}F_U^\pm({\Y;X})\cap\smash{{\mathcal W}_p^{(2)}}(U)\bigr\}. \end{equation} Then $f$ corresponds to a pair $(C,\alpha)$, where $C$ is a cubic ring and $\alpha$ is an element of $C$. Furthermore, $C$ is nonmaximal at some prime $p>M$. Let $C_1$ be the (unique) order containing $C$ with index $p$. The treatment of the case $n=3$ in \cite[\S4.2]{geosieve} implies that $C_1$ comes from at most $3$ such rings $C$. It thus follows from \eqref{htest} that the set \eqref{equniftemp} maps into the set \begin{equation}\label{eqKal} \bigl\{(C_1,\alpha):|\Delta(C_1)|\ll X/M^2,\;H(\alpha)\asymp X^{1/6}{T}^{1/3}\bigr\}, \end{equation} where $C_1$ is a cubic order and $\alpha$ is an element of $C_1$ with ${\rm Tr}(\alpha)\in\{0,1,2\}$. Moreover, the sizes of the fibers of this map are bounded by $3$. Therefore, we have \begin{equation}\label{equnifRal} \sum_{p>M}N^{(i)}(\smash{{\mathcal W}_p^{(2)}}(U);{\Y;X})\ll \#\bigl\{(C_1,\alpha):|\Delta(C_1)| \ll X/M^2,\;H(\alpha)\asymp X^{1/6}{T}^{1/3}\bigr\}. \end{equation} Similarly, let $v\in V({\mathbb Z})$ be an element contributing to the count of the left-hand side of the second line of the proposition. Then the cubic resolvent of $v$ belongs to \eqref{equniftemp}, and thus $v$ corresponds to a triple $(Q,C,\alpha)$, where $Q$ is a quartic ring, $C$ is a cubic resolvent of $Q$ nonmaximal at a prime $p>M$, and $\alpha$ is an element of $C$. Let $C_1$ be as in the previous paragraph.
The treatment of the case $n=4$ in \cite[\S4.2]{geosieve} implies that the set of quartic rings with resolvent $C$ maps to the set of quartic rings with resolvent $C_1$, where the sizes of the fibers of this map are bounded by $6$. Therefore, \begin{equation}\label{equnifQRal} \sum_{p>M}N^{(i)}(\smash{{\mathcal W}_p^{(2)}}(V);{\Y;X})\ll \#\bigl\{(Q_1,C_1,\alpha):|\Delta(C_1)| \ll X/M^2,\;H(\alpha)\asymp X^{1/6}{T}^{1/3}\bigr\}, \end{equation} where $Q_1$ is a quartic ring with resolvent $C_1$ and $\alpha\in C_1$ has trace in $\{0,1,2\}$. Corollary \ref{corcountint} implies that the right-hand sides of \eqref{equnifRal} and \eqref{equnifQRal} are bounded by \begin{equation*} O_\varepsilon\bigl({X^{5/6+\varepsilon}{T}^{2/3}}/{M}\bigr). \end{equation*} This concludes the proof of the proposition, since $M\gg X^{1/24}$. \end{proof} \noindent {\bf Proof of Theorem~\ref{thsubsieve}:} Theorem \ref{thsubsieve} follows immediately from the counting results of Proposition \ref{propcountsmf} and Theorem~\ref{thmsmqc}, in conjunction with the tail estimates in Propositions~\ref{propsmusgs} and \ref{propunifsub2}, by applying a standard squarefree sieve. See \cite[\S8.5]{MAJ} for an identical proof. $\Box$ \subsection{Local mass formulas}\label{seclmsub} To obtain the volumes in Theorem~\ref{thsubsieve}, we prove certain mass formulas relating \'etale quartic and cubic algebras over ${\mathbb Q}_p$. Let $K_3$ be an \'etale cubic extension of ${\mathbb Q}_p$. Let $K_4$ be an \'etale quartic extension of ${\mathbb Q}_p$ with cubic resolvent $K_3$ that corresponds to an unramified quadratic extension $K_6/K_3$ \pagebreak under the bijection of Theorem \ref{thHeilbronPID}. By Remark~\ref{geom}, the \'etale ${\mathbb Q}_p$-algebras $K_3$, $K_4$, and~$K_6$ correspond to sets~${\mathcal P}$, ${\mathcal Q}$, and~${\mathcal L}$, respectively, equipped with actions of~$G_{{\mathbb Q}_p}$. The automorphism groups ${\rm Aut}(K_3)$, ${\rm Aut}(K_4)$, and~${\rm Aut}(K_6)$ can be identified with the groups of $G_{{\mathbb Q}_p}$-equivariant permutations of ${\mathcal P}$, ${\mathcal Q}$, and~${\mathcal L}$, respectively. A $G_{{\mathbb Q}_p}$-equivariant permutation of ${\mathcal Q}$ induces $G_{{\mathbb Q}_p}$-equivariant permutations of~${\mathcal P}$ and~${\mathcal L}$. \begin{nota} \label{sec:3.36} {\em Let ${\rm Aut}_{K_3}(K_4)$ denote the subgroup of ${\rm Aut}(K_4)$ consisting of automorphisms of $K_4$ that induce the trivial automorphism of $K_3$ (equivalently, the subgroup consisting of Galois equivariant permutations of ${\mathcal Q}$ that induce the trivial permutation of~${\mathcal P}$). }\end{nota} We have the following result. \begin{theorem}\label{thmmassmain} Let $K_3$ be an \'etale cubic extension of ${\mathbb Q}_p$, and let ${\mathbb R}R(K_3)$ denote the set of \'etale non-overramified quartic extensions $K_4$ of ${\mathbb Q}_p$, up to isomorphism, with resolvent $K_3$. Then \begin{equation*} \displaystyle\sum_{K_4\in{\mathbb R}R(K_3)}\frac{1}{|{\rm Aut}_{K_3}(K_4)|} =1. \end{equation*} \end{theorem} \begin{proof} Recall that ${\rm Aut}_{K_3}(K_6)$ is the subgroup of ${\rm Aut}(K_6)$ consisting of automorphisms of $K_6$ that fix every element in $K_3$. Let $\smash{{\rm Aut}_{K_3}^+(K_6)}$ denote the index $2$ subgroup of ${\rm Aut}_{K_3}(K_6)$ consisting of even permutations of ${\mathcal L}$.
We may check that the map from Galois-equivariant permutations of ${\mathcal Q}$ to the Galois-equivariant permutations of ${\mathcal L}$ induces an isomorphism ${\rm Aut}_{K_3}(K_4)\cong\smash{{\rm Aut}^+_{K_3}(K_6)}$. Exactly half of the unramified quadratic extensions $K_6/K_3$ have discriminant whose norm is a square in ${\mathbb Z}_p^\times$. Moreover, all of these quadratic extensions $K_6/K_3$ have isomorphic automorphism groups. Therefore, we have \begin{equation*} \displaystyle\sum_{K_4\in{\mathbb R}R(K_3)}\frac{1}{|{\rm Aut}_{K_3}(K_4)|} \,=\, \displaystyle\frac{1}{2}\sum_{\substack{[K_{6}:K_3]=2\\{\rm unramified}}} \frac{1}{|{\rm Aut}_{K_3}^+(K_6)|} \,=\, \sum_{\substack{[K_{6}:K_3]=2\\{\rm unramified}}} \frac{1}{|{\rm Aut}_{K_3}(K_6)|}=1, \end{equation*} where the last equality follows from \eqref{eqmasstriv}. \end{proof} Finally, we translate Theorem \ref{thmmassmain} into the language of binary cubic forms. \begin{defn} \label{sec:3.38} {\em For $f\in U({\mathbb Z}_p)$, define the {\bf local mass} $\mathrm{Mass}_p(f)$ of $f$ by \begin{equation}\label{eqmasssub} \mathrm{Mass}_{p}(f):= \sum_{v\in\frac{{\mathbb R}es^{-1}(f)}{{\rm SL}_3({\mathbb Z}_p)}}\frac1{\#{\rm Stab}_{{\rm SL}_3({\mathbb Z}_p)}(v)},\\[-.05in] \end{equation}} where \smash{$\frac{{\mathbb R}es^{-1}(f)}{{\rm SL}_3({\mathbb Z}_p)}$} is a set of representatives for the action of ${\rm SL}_3({\mathbb Z}_p)$ on ${\mathbb R}es^{-1}(f)$. \end{defn} Since the stabilizers in $M({\mathbb Q}_p)$ and $G({\mathbb Q}_p)$ of maximal elements in $U({\mathbb Z}_p)$ and $V({\mathbb Z}_p)$ actually lie in $M({\mathbb Z}_p)$ and $G({\mathbb Z}_p)$, respectively, we obtain the following consequence of Theorems \ref{thpid} and~\ref{thmmassmain}:\!\!\! \begin{corollary}\label{cortotmass} Suppose $f\in U({\mathbb Z}_p)$ corresponds to a maximal cubic ring over~${\mathbb Z}_p$. Then \begin{equation*} \displaystyle \mathrm{Mass}_p(f) =1. \end{equation*} \end{corollary} \subsection{Volume computations and proof of Theorem~\ref{thsubmon}} \label{subsecsubvol} In this subsection, we prove Theorem \ref{thsubmonloc}, from which Theorem \ref{thsubmon} will be shown to follow. To compute the volumes of sets in $V({\mathbb R})$ and $V({\mathbb Z}_p)$, we use the following Jacobian change of variables. \pagebreak \begin{proposition}\label{propjacsub} Let $K$ be ${\mathbb R}$ or ${\mathbb Z}_p$, let $\smash{|\cdot|}$ denote the usual normalized absolute value on $K$, and let $s:U(K)\to V(K)$ be a continuous map such that ${\mathbb R}es(s(f))=f$, for each $f\in U(K)$. Then there exists a constant ${\mathcal J}\in{\mathbb Q}^\times$, independent of $K$ and $s$, such that for any measurable function $\phi$ on $V(K)$, we have: \begin{equation}\label{eqjacsub} \begin{array}{rcl} \!\! \! \displaystyle\int_{{\rm SL}_3(K)\cdot s(U(K))}\!\!\!\phi(v)dv&\!\!\!=\!\!\!& |{\mathcal J}|\!\displaystyle\int_{f\in U(K)}\displaystyle\int_{g\in {\rm SL}_3(K)} \!\!\phi(g\cdot s(f))\omega(g) df,\\[0.25in] \displaystyle\int_{V(K)}\!\!\phi(v)dv&\!\!\!=\!\!\!& |{\mathcal J}|\!\displaystyle\int_{\substack{f\in U(K)\\ {\rm Disc}(f)\neq 0}}\! \displaystyle\sum_{v\in\!\!\textstyle{\frac{{\mathbb R}es^{-1}(f)}{{\rm SL}_3(K)}}} \!\!\!\frac{1}{\#{\rm Stab}_{{\rm SL}_3(K)}(v)}\int_{g\in {\rm SL}_3(K)}\!\!\!\phi(g\cdot v)\omega(g)df,\\[-.05in] \end{array} \end{equation} where \smash{$\frac{{\mathbb R}es^{-1}(f)}{{\rm SL}_3(K)}$} is a set of representatives for the action of ${\rm SL}_3(K)$ on ${\mathbb R}es^{-1}(f)$.
\end{proposition} \begin{proof} Proposition~\ref{propjacsub} follows immediately from the proofs of \cite[Propositions 3.11 and 3.12]{BhSh} (see also \cite[Remark 3.14]{BhSh}). \end{proof} \begin{corollary}\label{corjacsub} Let $S_p$ be an open and closed subset of $U({\mathbb Z}_p)$ such that the boundary of $S_p$ has measure $0$ and every element in $S_p$ is maximal. Consider the set $\smash{V({\mathbb Z})_{S,p}:={\mathbb R}es^{-1}(S_p)}$. Then \begin{equation} {\rm Vol}(V({\mathbb Z})_{S,p})=|{\mathcal J}|_p{\rm Vol}({\rm SL}_3({\mathbb Z}_p)){\rm Vol}(S_p). \end{equation} \end{corollary} \begin{proof} We set $\phi$ to be the characteristic function of $V({\mathbb Z})_{S,p}$ in the second line of \eqref{eqjacsub} to obtain \begin{equation*} {\rm Vol}(V({\mathbb Z})_{S,p})=|{\mathcal J}|_p{\rm Vol}({\rm SL}_3({\mathbb Z}_p))\int_{f\in S_p}\mathrm{Mass}_p(f)df. \end{equation*} The result then follows from Corollary \ref{cortotmass}. \end{proof} \noindent {\bf Proof of Theorem \ref{thsubmonloc}:} Let $\Sigma$ be associated to a large collection $S=(S_p)_p$ of local specifications, where we may assume that every element $f(x,y)\in S_p$ is maximal. For any finite set $T$ of $n$-monogenized cubic fields, let \smash{$\mathrm{Avg}({\rm Cl}_2,T)$} (resp.\ \smash{$\mathrm{Avg}({\rm Cl}^+_2,T)$}) denote the average size of the $2$-torsion in the class group (resp.\ narrow class group) of the fields in $T$. Let \smash{$F^\pm_\Sigma({\leq\! cH^\delta},X)$} denote the set of elements \smash{$(K,\alpha)\in F_\Sigma({\leq\! cH^\delta},X)$} with $\pm\Delta(K)>0$. By Theorem \ref{thsubsieve} and Corollary~\ref{corjacsub}, \begin{equation*} \begin{array}{rcl} \displaystyle\mathrm{Avg}({\rm Cl}_2,{F_\Sigma^+({\leq\! cH^\delta},X)}) &=&\displaystyle 1+\frac{\frac1{m_0}\nu(V({\mathbb Z})_S)\,{\rm Vol}({\mathbb F}F_{{\rm SL}_3}\cdot{\mathbb F}F_{V}^{(0)}({\leq\! cH^\delta},X))} {{\rm Vol}({\mathbb F}F_{U}^{+}({\leq\! cH^\delta},X))\,\nu(U({\mathbb Z})_S)}+o(1) \\[.175in]&=&\displaystyle 1+\frac{\frac{1}{m_0}|{\mathcal J}|{\rm Vol}({\mathbb F}F_{{\rm SL}_3}) \prod_p\bigl[|{\mathcal J}|_p{\rm Vol}({\rm SL}_3({\mathbb Z}_p)){\rm Vol}(S_p)\bigr]}{\prod_p{\rm Vol}(S_p)} +o(1) \\[.175in]\displaystyle &=&\displaystyle 1+\frac{1}{m_0}+o(1)\;=\;\frac{5}{4}+o(1), \end{array} \end{equation*} since ${\rm Vol}({\mathbb F}F_{{\rm SL}_3})\cdot\prod_{p}{\rm Vol}({\rm SL}_3({\mathbb Z}_p))$ is equal to $1$, the Tamagawa number of ${\rm SL}_3$. Similarly, \begin{equation*} \begin{array}{rcccl} \displaystyle\mathrm{Avg}({\rm Cl}_2,{F_\Sigma^-({\leq\! cH^\delta},X)}) &=&\displaystyle 1+\frac{1}{m_1}+o(1)&=&\displaystyle\frac{3}{2}+o(1);\\[.15in] \displaystyle\mathrm{Avg}({\rm Cl}^+_2,{F_\Sigma^+({\leq\! cH^\delta},X)}) &=&\displaystyle 1+\frac{1}{m_0}+\frac{1}{m_{2+}}+\frac{1}{m_{2-}}+\frac{1}{m_{2\#}}+o(1) &=&2+o(1). \qquad\Box \end{array} \end{equation*} \noindent {\bf Proof of Theorem \ref{thsubmon}:} Theorem~\ref{thsubmon} now follows immediately from Theorem~\ref{thsubmonloc} by letting $\Sigma_p$ consist of all pairs $(\mathcal K_p,\alpha_p)$, where $\mathcal K_p$ is an \'etale cubic extension of ${\mathbb Q}_p$ satisfying the splitting conditions prescribed in Theorem~3.
$\Box$ \section{The mean number of $2$-torsion elements in the class groups of $n$-monogenized cubic fields ordered by height and fixed $n$}\label{secn} In this section we fix a positive integer $n$ throughout, and prove Theorem \ref{main_theorem} by computing, for such a {\it fixed} $n$, the average size of the $2$-torsion subgroups in the class groups of $n$-monogenized cubic fields when these fields are ordered by height. In \S\ref{subsecncub}, we determine asymptotics for the number of $n$-monogenized cubic rings of bounded height. In \S\ref{subsecnquar}, we determine asymptotics for the number of quartic rings having an $n$-monogenic cubic resolvent of bounded height. In \S\ref{subsecnunif}, we prove uniformity estimates that enable us to impose conditions of maximality on these counts of cubic and quartic rings. The leading constants for these asymptotics are expressed as products of volumes of sets in $U_n(R)$ and $V(R)$, where $R$ ranges over ${\mathbb R}$ and ${\mathbb Z}_p$ for all primes $p$. In \S\ref{seclmn}, we prove certain mass formulas relating \'etale quartic and cubic algebras over ${\mathbb Q}_p$. Finally, in \S\ref{subsecnlocal}, we use these mass formulas to compute the necessary local volumes in order to prove our main results. {}section{The number of $n$-monogenized cubic rings of bounded height}\label{subsecncub} In this subsection, we determine the number of primitive $n$-monogenized cubic rings having bounded height that are orders in $S_3$-number fields. Specifically, we prove the following result. \begin{theorem}\label{thcubringcount} Let $\smash{N_3^{+}}(n,X)$ $($resp.\ $\smash{N_3^{-}}(n,X))$ denote the number of isomorphism classes of $n$-monogenized cubic orders in $S_3$-cubic fields with positive $($resp.\ negative$)$ discriminant and height bounded by $X$. Then \begin{itemize} \item[{\rm (a)}] $N_3^{+}(n, X)=\displaystyle\frac{8}{135n^{1/3}}X^{5/6}+O_\varepsilonilon(X^{1/2+\varepsilonilon});$ \item[{\rm (b)}] $N_3^{-}(n, X)=\displaystyle\frac{32}{135n^{1/3}}X^{5/6}+O_\varepsilonilon(X^{1/2+\varepsilonilon}).$ \end{itemize} \end{theorem} Theorem \ref{thcubringcount} is proved by using the parametrizations of \S\ref{subsecparamrings} in conjunction with a count of elements in $U_n({\mathbb Z})$ having bounded height. \begin{nota} \label{sec:4.2} {\em For any ring $R$ and integers $n$ and $b$, let $U_{n,b}(R){}set U_n(R)$ denote the set of binary cubic forms $f(x,y)\in U_n(R)$ such that the $x^2y$-coefficient of $f$ is $b$. For a subset $S{}set U_{n}({\mathbb R})$, let $S^\pm$ denote the set of elements $f\in S$ with $\pm\Delta(f)>0$. For a subset $L{}set U_{n}({\mathbb Z})$, let $N^\pm(L;n,X)$ denote the number of generic ${M}({\mathbb Z})$-equivalence classes $f$ in $L^\pm$ such that $H(f)<X$. }\end{nota} \begin{lemma}\label{lred411} The number of non-generic ${M}({\mathbb Z})$-orbits in $U_{n}({\mathbb Z})$ of height less than $X$ is $O_\varepsilonilon(X^{1/2+\varepsilonilon})$. \end{lemma} \begin{proof} An ${M}({\mathbb Z})$-orbit on $U_n({\mathbb Z})$ has a unique representative with $x^2y$-coefficient in $[0,3n)$. Let $f(x)=nx^3+bx^2y+cxy^2+dy^3$ be such an element with height less than $X$. Then we have the bounds $c\ll X^{1/3}$ and $d\ll X^{1/2}$. Furthermore, the proofs of \cite[Lemmas 21, 22]{MAJ} imply that if $f(x,y)$ is either reducible or corresponds to an order in a $C_3$-number field, then the value of $d$ determines the value of $c$ up to $O_\varepsilonilon(X^\varepsilonilon)$ choices, proving the lemma. 
\end{proof} \begin{proposition}\label{thbccount} Let $b\in[0,3n)$ be an integer, and let $L\subset U_{n,b}({\mathbb Z})$ be defined by finitely many congruence conditions. Then \begin{itemize} \item[{\rm (a)}] $N^+(L;n,X) =\displaystyle\frac{8}{405n^{4/3}}\nu(L)X^{5/6}+O_\varepsilon(X^{1/2+\varepsilon}),$ \pagebreak \item[{\rm (b)}] $N^-(L;n,X) =\displaystyle\frac{32}{405n^{4/3}}\nu(L)X^{5/6}+O_\varepsilon(X^{1/2+\varepsilon}),$ \end{itemize} where $\nu(L)$ denotes the volume of the closure of $L$ in $U_{n,b}(\widehat{{\mathbb Z}})$. \end{proposition} \begin{proof} By Lemma \ref{lred411}, it suffices to estimate the number of elements in $L^\pm$ that have height bounded by $X$. By Proposition \ref{davlem}, this is equal to $\nu(L)$ times the volume of $\smash{U_{n,b}({\mathbb R})^\pm_{H<X}}$, up to an error of $O(X^{1/2})$, since the largest projection of $\smash{U_{n,b}({\mathbb R})^\pm_{H<X}}$ is onto the $y^3$-coefficient of elements in $U_{n,b}({\mathbb R})$ and this projection has size $O(X^{1/2})$. The volume of ${U_{n,0}({\mathbb R})^+_{H<X}}$ is \begin{equation} \int_{c=0}^{X^{1/3}/(3n^{1/3})} \int_{\smash{d=\frac{-2c^{3/2}}{3^{3/2}n^{1/2}}}}^{\smash{\frac{2c^{3/2}}{3^{3/2}n^{1/2}}}}dd\,dc= \int_{c=0}^{X^{1/3}/(3n^{1/3})}\frac{4c^{3/2}}{3^{3/2}n^{1/2}}dc= \frac{8}{405n^{4/3}}X^{5/6}, \end{equation} and the volume of ${U_{n,0}({\mathbb R})^-_{H<X}}$ is \begin{equation} \frac{8}{81n^{4/3}}X^{5/6}-{\rm Vol}(U_{n,0}({\mathbb R})^+_{H<X})=\frac{32}{405n^{4/3}}X^{5/6}. \end{equation} To compute the volumes of $\smash{U_{n,b}({\mathbb R})^\pm_{H<X}}$, note that there exists a bijective unipotent invariant-preserving map $\smash{U_{n,0}({\mathbb R})^\pm_{H<X}} \to \smash{U_{n,b}({\mathbb R})^\pm_{H<X}}$ defined by $g(x,y)\mapsto g(x+by/(3n),y)$, and the Jacobian determinant of this map is equal to $1$. Thus the volume of $\smash{U_{n,b}({\mathbb R})^\pm_{H<X}}$ is equal to that of $\smash{U_{n,0}({\mathbb R})^\pm_{H<X}}$, concluding the proof of Proposition \ref{thbccount}. \end{proof} \noindent {\bf Proof of Theorem \ref{thcubringcount}:} Theorem \ref{thcubringcount} follows from the parametrization of $n$-monogenized cubic orders in Theorem~\ref{thcubringpar}, together with Proposition~\ref{thbccount} applied to $U_{n,b}({\mathbb Z})$ for each $b$ with $0\leq b<3n$ and then summing over these $b$. $\Box$ \subsection{The number of quartic rings having $n$-monogenized cubic resolvent rings of bounded height}\label{subsecnquar} In this subsection, we determine asymptotics for the number of pairs $(Q,(C,\alpha))$, where $Q$ is an order in an $S_4$-quartic field and $(C,\alpha)$ is an $n$-monogenized cubic resolvent ring of $Q$ having bounded height. This requires determining asymptotics for the number of ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-orbits $(A,B)$ on $V({\mathbb Z})$ of bounded height that satisfy $\det(A)=n/4$. Fixing the determinant of $A$ imposes a cubic polynomial condition on elements in $V({\mathbb R})$; however, counting integer points on a cubic hypersurface is in general a difficult question. Instead, we proceed as follows. We use the action of ${\rm SL}_3({\mathbb Z})$ to transform any element $(A,B)\in V({\mathbb Z})$, with $\det(A)=n/4$, to an element $(A',B')$, where $A'$ belongs to a fixed finite set of representatives for the action of ${\rm SL}_3({\mathbb Z})$ on integer-coefficient ternary quadratic forms of determinant~$n/4$.
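Before carrying out this reduction, we record a quick symbolic sanity check (an illustrative sketch in Python/SymPy, not part of the proofs): assuming the convention that the cubic resolvent of a pair $(A,B)$ of ternary quadratic forms is $4\det(Ax+By)$ (a convention consistent with the explicit section in \eqref{tsection}), the coefficient of $x^3$ in the resolvent equals $4\det(A)$, so fixing $4\det(A)=n$ forces the resolvent to lie in $U_n$:
\begin{verbatim}
# Illustrative sketch, assuming the resolvent convention
# Res(A,B)(x,y) = 4 det(A x + B y): its x^3-coefficient is 4 det(A).
from sympy import symbols, Matrix, expand

x, y = symbols('x y')
A = Matrix(3, 3, symbols('a11 a12 a13 a12 a22 a23 a13 a23 a33'))  # symmetric A
B = Matrix(3, 3, symbols('b11 b12 b13 b12 b22 b23 b13 b23 b33'))  # symmetric B

res = expand(4 * (A*x + B*y).det())     # the resolvent binary cubic in x, y
lead = res.coeff(x, 3)                  # its x^3-coefficient
print(expand(lead - 4*A.det()))         # prints 0
\end{verbatim}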
\begin{nota} \label{sec:4.5} {\em For a ring $R$ and an element $n\in R$, let $\mathcal{Q}_n(R)$ denote the space of ternary quadratic forms $A$ with coefficients in~$R$ such that $4\det(A)=n$. For a fixed $A$, let $V_{A}(R)$ denote the set of pairs $(A,B)\in V(R)$ where $B$ is arbitrary. If $A\in\mathcal{Q}_n(R)$, then the resolvent map ${\mathbb R}es$ sends $V_A(R)$ to $U_n(R)$. For a fixed $b\in R$, let $V_{A,b}(R)$ denote the subset of $V_A(R)$ mapping to $U_{n,b}(R)$ under~${\mathbb R}es$. The set $V_{A,b}(R)$ is defined by linear conditions on $V_A(R)$, since $b$ is linear in the coefficients of $B$. }\end{nota} Instead of counting ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-orbits $(A',B')$ with $\det(A')=n/4$, it suffices to count ${\rm SO}_A({\mathbb Z})$-orbits on $V_A({\mathbb Z})$, where $A$ ranges over a fixed set of representatives for ${\rm SL}_3({\mathbb Z})\backslash\mathcal{Q}_n({\mathbb Z})$. We note that these representations are different ${\mathbb Z}$-forms of the same representation over ${\mathbb C}$. \pagebreak \begin{rem}{\em One such ${\mathbb Z}$-form of the representation of ${\rm SO}_A({\mathbb Z})$ on $V_A({\mathbb Z})$ is the representation of ${\rm PGL}_2$ on the space of binary quartic forms. Indeed, when $n=1$, the set $\mathcal{Q}_1({\mathbb Z})$ consists of a single ${\rm SL}_3({\mathbb Z})$-orbit, and one of the representatives of this orbit is the $3\times3$ symmetric matrix $A_1$ given~by \begin{equation}\label{eqA1} A_1:=\left( \begin{array}{ccc} & & 1/2 \\ & -1 & \\ 1/2 & & \end{array} \right). \end{equation} The algebraic group ${\rm PGL}_2$ is isomorphic to ${\rm SO}_{A_1}$ via the map \begin{equation}\label{eqtwistedaction1} \begin{array}{rcl} \tau:{\rm PGL}_2({\mathbb Z})&\rightarrow&{\rm SL}_3({\mathbb Z}),\;\; \mbox{given explicitly by}\\[.05in] {\left(\begin{array}{cc} a & {b} \\ c & d \end{array}\right)}&\mapsto& \displaystyle\frac{{1}}{ad-bc}{\left(\begin{array}{ccc} {d^2} & {cd} & {c^2} \\ {2bd} & {ad+bc} & {2ac} \\ {b^2} & {ab} & {a^2} \end{array}\right)}. \end{array} \end{equation} Furthermore, we have a map from the space of binary quartic forms to the space of pairs $(A_1,B)$ given by \begin{equation}\label{vtow} \phi: ax^4+bx^3y+cx^2y^2+dxy^3+ey^4\mapsto \left( \left[ \begin{array}{ccc}\phantom0 & \phantom0 & 1/2 \\ \phantom0 & -1 & \phantom0 \\ 1/2 & \phantom0 & \phantom0 \end{array} \right], \left[ \begin{array}{ccc} a & b/2 & 0 \\ b/2 & c & d/2 \\ 0 & d/2 & e \end{array} \right]\right). \end{equation} The above two maps (with the latter map slightly modified) are considered in work of Wood~\cite{MWt}, who then shows that the representation of ${\rm PGL}_2$ on the space of binary quartic forms is isomorphic over ${\mathbb Z}$ to the representation of ${M}({\mathbb Z})\times{\rm SO}_{A_1}({\mathbb Z})$ on the space $V_{A_1}$. }\end{rem} Asymptotics for the number of ${\rm PGL}_2({\mathbb Z})$-orbits on the set of integer-coefficient binary quartic forms with bounded height were determined in~\cite{BhSh}. In this section, we determine analogous asymptotics for other ${\mathbb Z}$-forms of this representation. As a consequence, we prove the following~theorem. \begin{theorem}\label{thqnmccountnr} For $i\in\{0,1,2\}$, let $N^{(i)}_4(n,X)$ denote the number of isomorphism classes of pairs $(Q,(C,\alphapha))$, where $Q$ is an order in a quartic $S_4$-field having $i$ complex embeddings, and $(C,\alphapha)$ is an $n$-monogenized cubic resolvent ring of $Q$ with height bounded by $X$. 
Then \begin{equation*} N_4^{(i)}(n,X)=\frac{3n}{m_i}\sum_{A\in{\rm SL}_3({\mathbb Z})\backslash\mathcal{Q}_n({\mathbb Z})}{\rm Vol}({\mathbb F}F_A\cdot {\mathbb R}F{i}(X))+o(X^{5/6}), \end{equation*} where ${\mathbb F}F_A$ is a fundamental domain for the action of ${\rm SO}_{A}({\mathbb Z})$ on ${\rm SO}_A({\mathbb R})$, and the sets ${\mathbb R}F{i}(X)$ are defined in $\S\ref{sssnqred}$ prior to Proposition~$\ref{propfundset}$. \end{theorem} In \S\ref{sssnqred}, we construct fundamental domains for the action of ${\rm SO}_A({\mathbb Z})$ on $V_{A,b}({\mathbb R})$ for any $3\times3$ symmetric matrix $A$ having determinant $n/4$, and any $b$ with $0\leq b<3n$. In~\S\ref{countsoava}, using these fundamental domains, we count the number of generic ${\rm SO}_A({\mathbb Z})$-orbits on $V_{A,b}({\mathbb Z})$ having bounded height. Summing over $A\in {\rm SL}_3({\mathbb Z})\backslash\mathcal{Q}_n({\mathbb Z})$ and $0\leq b<3n$ will then immediately yield Theorem~\ref{thqnmccountnr}. {}subsection{Reduction theory}\label{sssnqred} Let $A$ be the Gram matrix of an integer-coefficient ternary quadratic form with $\det(A)=n/4$. In this subsubsection, we construct finite covers of fundamental domains for the action of ${\rm SO}_A({\mathbb Z})$ on $V_{A,b}({\mathbb R})$. We start by constructing fundamental sets for the action of ${\rm SO}_A({\mathbb R})$ on $V_{A,b}({\mathbb R})$. \begin{cons} \label{sec:4.8} {\em First, suppose that $A$ is isotropic over ${\mathbb R}$. If $A=A_1$, where $A_1$ is defined in \eqref{eqA1}, then by the discussion above we may construct our fundamental sets $\smash{{\mathbb R}F{i}}$, for $i\in\{0,1,2+,2-\}$, using analogous fundamental sets constructed for the action of ${\rm PGL}_2$ on binary quartic forms. Namely, let \begin{equation*} {R}_1^{(i)}:= \phi\bigl({\mathbb R}_{>0}\cdot L^{(i)}\bigr), \end{equation*} where $\phi$ is defined in \eqref{vtow} and the sets $L^{(i)}$ are defined in {\rm \cite[Table 1]{BhSh}}. If $A$ is a general integer-coefficient ternary quadratic form that is isotropic over ${\mathbb R}$, then we simply translate the sets $\smash{R_1^{(i)}}$ using the group action of ${\rm GL}_3({\mathbb R})$ on the space of ternary quadratic forms. Namely, let $g_3\in{\rm GL}_3({\mathbb R})$ be the element such that $g_3 \cdot A_1:=g_3A_1g_3^t=A$, and let \begin{equation*} {R}^{(i)}:=g_3. \smash{\phi\bigl({\mathbb R}_{>0} \cdot L^{(i)}\bigr)}. \end{equation*} Then, for $A$ isotropic, $i\in\{0,1,2+,2-\}$, and any integer $b\in[0,3n)$, let \begin{equation}\label{firstRF} \smash{{\mathbb R}F{i}}:=\bigl\{(A,c_BA+B): (A,B)\in {R}^{(i)}\bigr\}, \end{equation} where $c_B$ is the unique real number such that ${\mathbb R}es(A,c_BA+B)$ has $x^2y$-coefficient $b$. Next, suppose that $A$ is anisotropic over ${\mathbb R}$. We begin with the case $b=0$. Let \begin{equation*} \smash{{\mathbb F}F^{(2\#)}_{V_{A,0}}}:=\bigl\{ g_3\cdot({\rm Id},cB_f):c\in{\mathbb R}_{>0},\;f\in U_{n,0}({\mathbb R})^+,\;H(f)=1 \bigr\}, \end{equation*} where $g_3\in{\rm GL}_3({\mathbb R})$ is the matrix taking the identity matrix ${\rm Id}$ to $A$, and $B_f$ corresponds to the diagonal matrix whose entries are the (real) roots of $f(x,1)$ in ascending order. For a general integer $b\in [0,3n)$, let \begin{equation}\label{secondRF} \smash{{\mathbb R}F{2\#}}:=\bigl\{(A,B+bA/3):(A,B)\in \smash{{\mathbb F}F^{(2\#)}_{V_{A,0}}}\bigr\}. 
\end{equation} If {$A$ is isotropic over ${\mathbb R}$ and $i=2\#$, or $A$ is anisotropic over ${\mathbb R}$ and $i\in\{0,1,2+,2-\}$}, then let \begin{equation}\label{fourthRF} \smash{{\mathbb R}F{i}=\emptyset.} \end{equation} Let \begin{equation}\label{thirdRF} \smash{{\mathbb R}F{2}}:=\smash{{\mathbb R}F{2+}}\cup \smash{{\mathbb R}F{2-}}\cup \smash{{\mathbb R}F{2\#}}. \end{equation} Finally, for $X>0$, let $\smash{{\mathbb R}F{i}(X)}$ denote the subset of elements in $\smash{{\mathbb R}F{i}}$ having height less than $X$. }\end{cons} \begin{proposition}\label{propfundset} If $A$ is isotropic over ${\mathbb R}$, then the sets $\smash{{\mathbb R}F{0}}$, \smash{${\mathbb R}F{1}$}, and $\smash{{\mathbb R}F{2}}=\smash{{\mathbb R}F{2+}}\cup \smash{{\mathbb R}F{2-}}$ are fundamental sets for the action of ${{\rm SO}_A({\mathbb R})}$ on $\smash{V_{A,b}({\mathbb R})^{(0)}}$, $\smash{V_{A,b}({\mathbb R})^{(1)}}$, and $\smash{V_{A,b}({\mathbb R})^{(2)}}$, respectively. If~$A$ is anisotropic over ${\mathbb R}$, then the set \smash{${\mathbb R}F{2\#}$} is a fundamental set for the action of ${\rm SO}_A({\mathbb R})$ on~\smash{$V_{A,b}({\mathbb R})$}. Moreover, for $i\in\{0, 1, 2, \smash{2\#},2+, 2-\}$, all entries of elements in $\smash{{\mathbb R}F{i}(X)}$ are~$O(X^{1/6})$. \end{proposition} \begin{proof} When $A$ is anisotropic over ${\mathbb R}$, $A$ can be translated via ${\rm GL}_3({\mathbb R})$ into the identity matrix~${\rm Id}$. In that case, every element in $\smash{V_{{\rm Id},0}({\mathbb R})}$ has splitting type $(22)$ over ${\mathbb R}$. Furthermore, by the spectral theorem, every element in $\smash{V_{{\rm Id},0}({\mathbb R})}$ is ${\rm SO}_3({\mathbb R})$ equivalent to $({\rm Id},B)$, where $B$ is diagonal and $b_{11}<b_{22}<b_{33}$. Then $\smash{{\mathbb R}F{2\#}}$ gives the desired fundamental set. By \cite[\S2.1]{BhSh}, the $\smash{{\mathbb R}F{i}}$ are fundamental sets for $i\neq(2\#)$. Regarding the heights of elements in $\smash{{\mathbb R}F{i}}$, note that the set of elements in $\smash{{\mathbb R}F{i}}$ having height~$1$ have absolutely bounded coefficients. The final assertion now follows since $H$ is a homogeneous function on $V_{{\rm Id},0}({\mathbb R})$ of degree $6$. \end{proof} \begin{theorem}\label{thfunddom} Let ${\mathbb F}F_A$ be a fundamental domain for the action of ${\rm SO}_A({\mathbb Z})$ on ${\rm SO}_A({\mathbb R})$. Then ${\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}}$ is an $m_i$-fold cover of a fundamental domain for the action of ${\rm SO}_A({\mathbb Z})$ on $V({\mathbb R})^{(i)}$. \end{theorem} \begin{proof} If $(A,B)\in V({\mathbb R})^{(i)}$, then by \pagebreak Lemma \ref{lemsubmonss} the stabilizer of $(A,B)$ in ${\rm SL}_3({\mathbb R})$ has size $m_i$. Since every element of this stabilizer must belong to ${\rm SO}_A({\mathbb R})$, it follows that the size of the stabilizer of $(A,B)$ in ${\rm SO}_A({\mathbb R})$ is also $m_i$. The result now follows from Proposition \ref{propfundset}. \end{proof} We will describe explicit constructions of ${\mathbb F}F_A$ in the next subsubsection. {}subsection{Counting ${\rm SO}_A({\mathbb Z})$-orbits on $V_A({\mathbb Z})$}\label{countsoava} In this subsection, we fix positive integers $n$ and $b\in[0,3n)$. \begin{nota}\label{N:4.11}{\em Let $A$ be an integer-coefficient ternary quadratic form of determinant $n/4$. 
For a subset ${L}\subset V_{A,b}({\mathbb Z})$ defined by congruence conditions and $i\in\{0,1,2,2\pm,2\#\}$, let $N^{(i)}({L};A,X)$ denote the number of generic ${\rm SO}_A({\mathbb Z})$-orbits on ${L}^{(i)}$ having height less than~$X$. Let $\nu({L})$ denote the volume of the closure of ${L}$ in $V_{A,b}(\widehat{{\mathbb Z}})$. }\end{nota} \begin{theorem}\label{thisotcount} Let ${L}\subset V_{A,b}({\mathbb Z})$ be defined by finitely many congruence conditions, and let ${\mathbb F}F_A$ denote a fundamental domain for the action of ${\rm SO}_A({\mathbb Z})$ on ${\rm SO}_A({\mathbb R})$. Then \begin{equation*} N^{(i)}({L};A,X)=\frac{1}{m_i}\nu({L}){\rm Vol}({\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}(X)})+o(X^{5/6}), \end{equation*} where $i\in\{0,1,2+,2-\}$ if $A$ is isotropic over ${\mathbb R}$ and $i=2\#$ if $A$ is anisotropic over ${\mathbb R}$. \end{theorem} Since the truth of Theorem~\ref{thisotcount} is independent of the choice of ${\mathbb F}F_A$, we begin by constructing convenient fundamental domains ${\mathbb F}F_A$ for the action of ${\rm SO}_A({\mathbb Z})$ on ${\rm SO}_A({\mathbb R})$. \begin{cons} \label{sec:4.12} {\em Suppose that $A$ is anisotropic over ${\mathbb Q}$. Then we may choose ${\mathbb F}F_A$ to be a compact fundamental domain for the left action of ${\rm SO}_A({\mathbb Z})$ on ${\rm SO}_A({\mathbb R})$. By Theorem~\ref{thfunddom}, the multiset ${\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}}$ is an $m_i$-fold cover of a fundamental domain for the action of ${\rm SO}_A({\mathbb Z})$ on~$\smash{V_{A,b}({\mathbb R})^{(i)}}$. Next, suppose that $A$ is isotropic over ${\mathbb Q}$. Then there exists $g_A\in{\rm SL}_3({\mathbb Q})$ such that $\smash{g_AAg_A^t}$ is \begin{equation}\label{eqan} A_n:=\left( \begin{array}{ccc} & & 1/2 \\ & -n & \\ 1/2 & & \end{array} \right). \end{equation} For $F={\mathbb R}$ or ${\mathbb Q}$, consider the maps \begin{equation}\label{eqAtoAn} \begin{array}{rcl} \sigma_A:V_{A,b}(F)&\to& \smash{V_{A_n,b}(F)},\\[.05in] \sigma_G:{\rm SO}_A(F)&\to&\smash{{\rm SO}_{A_n}(F)}, \end{array} \end{equation} given by $\smash{\sigma_A(A,B)=(A_n,g_ABg_A^t)}$ and $\smash{\sigma_G(g)=g_Agg_A^{-1}}$. The map $\sigma_A$ is height preserving and $\sigma_A(g\cdot v)=\sigma_G(g)\cdot\sigma_A(v)$. Let ${L}_n\subset V_{A_n,b}({\mathbb R})$ denote the lattice $\sigma_A({L})$ and $\Gamma\subset{\rm SO}_{A_n}({\mathbb R})$ the subgroup $\sigma_G({\rm SO}_A({\mathbb Z}))$. Then $\Gamma$ is commensurable with ${\rm SO}_{A_n}({\mathbb Z})$. By~\cite[Example 2.5]{Borel}, there exists a fundamental domain ${\mathbb F}F$ for the action of $\Gamma$ on ${\rm SO}_{A_n}({\mathbb R})$ that is contained in a finite union~$\cup_j g_j {\mathcal S}$, where $g_j\in{\rm SO}_{A_n}({\mathbb Q})$ and ${\mathcal S}$ is a Siegel domain. We choose ${\mathcal S}$ to be $N'T'K$, where \begin{equation*}\label{nak} N':= \left\{\left(\begin{array}{ccc} 1 & {} &{}\\ {2nu} & 1 & {}\\ {nu^2} & {u} & {1}\end{array}\right): u\in\ [-1/2,1/2] \right\} , \;\; T' := \left\{\left(\begin{array}{ccc} t^{-2} & {} & {} \\ {} & 1 & {} \\{} & {} & t^2 \end{array}\right): t\geq 1/2 \right\}, \;\; \end{equation*} and $K$ is a maximal compact subgroup of ${\rm SO}_{A_n}({\mathbb R})$. We set ${\mathbb F}F_A:=\smash{\sigma_G^{-1}({\mathbb F}F)}$. Then ${\mathbb F}F_A$ is a fundamental domain for the action of ${\rm SO}_A({\mathbb Z})$ on ${\rm SO}_A({\mathbb R})$. }\end{cons} \noindent {\bf Proof of Theorem \ref{thisotcount}:} First, we consider the case that $A$ is anisotropic.
By Proposition~\ref{propfundset}, the entries of elements in ${\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}(X)}$ are each $O(X^{1/6})$. An argument identical to the proof of \cite[Lemma 14]{dodpf} implies that the number of non-generic integral points \pagebreak in ${\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}(X)}$ is $o(X^{5/6})$. Thus, to prove Theorem~\ref{thisotcount}, it suffices to estimate the number of elements in ${\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}(X)}\cap {L}$, and this is immediate from Proposition~\ref{davlem}. Next, we assume that $A$ is isotropic. Theorem \ref{thfunddom} implies that \begin{equation*} N^{(i)}({L};A,X)= \frac1{m_{i}}\#\bigl({\mathbb F}F\cdot\sigma_A({\mathbb R}F{i}(X))\cap{L}_n^{\rm gen}\bigr), \end{equation*} where ${L}_n^{\rm gen}$ denotes the subset $v\in{L}_n$ such that $\sigma_A^{-1}(v)$ is generic. Let $G_0$ be an open nonempty bounded subset of ${\rm SO}_{A_n}({\mathbb R})$. Then, by an averaging argument as in \cite[Theorem 2.5]{BhSh}, we obtain \begin{equation}\label{eqavg423} N^{(i)}({L};A,X)= \frac{1}{m_{i}{\rm Vol}(G_0)}\int_{h\in{\mathbb F}F} \#\bigl(h G_0\cdot\sigma_A(\smash{{\mathbb R}F{i}}(X))\cap{L}_n^{{\rm gen}}\bigr)dh, \end{equation} where the volume of $G_0$ is computed with respect to any fixed Haar measure $dh$. \begin{lemma}\label{lemredisot1} If $h=utk\in{\mathcal S}=N'T'K$ as above, $t\gg X^{1/24}$, and $g\in{\rm SO}_{A_n}({\mathbb Q})$, then $$hG_0\cdot\sigma_A(\smash{{\mathbb R}F{i}(X)})\cap g^{-1}{L}_n^{\rm gen}=\emptyset.$$ \end{lemma} \begin{proof} The entries of elements in $G_0\cdot\sigma_A(\smash{{\mathbb R}F{i}(X)})$ are all $O(X^{1/6})$. Since $h=utk\in{\mathcal S}$, the entries of $u$ and $k$ are bounded. Hence the six coefficients of elements in $hG_0(\sigma_A(\smash{{\mathbb R}F{i}(X)})$, considered as a subset of $V_{A_n}({\mathbb R})$, satisfy: \begin{equation}\label{eqcoeffbdbiAn} b_{11}\ll t^{-4}X^{1/6};\;\; b_{12}\ll t^{-2}X^{1/6};\;\; b_{13},b_{22}\ll X^{1/6};\;\; b_{23}\ll t^{2}X^{1/6};\;\; b_{33}\ll t^{4}X^{1/6}. \end{equation} The subset $\smash{V_{A_n,b}({\mathbb R})}$ of $V_{A_n}({\mathbb R})$ is cut out by one linear equation, involving only $b_{13}$ and $b_{22}$. Hence $b_{11},b_{12},b_{22},b_{23},b_{33}$ form a complete set of variables for $\smash{V_{A_n,b}({\mathbb R})}$, and elements in $hG_0(\sigma_A(\smash{{\mathbb R}F{i}}(X))$ satisfy the same coefficient bounds as in \eqref{eqcoeffbdbiAn}. Since $g^{-1}{L}_n$ is a lattice commensurable to $\smash{V_{A_n,b}}({\mathbb Z})$, the denominators of elements in $g^{-1}{L}$ are absolutely bounded. Therefore, if $t\gg X^{1/24}$, then elements $(A_n,B)\in hG_0\cdot\sigma_A(\smash{{\mathbb R}F{i}(X)})\cap g^{-1}{L}_n$ satisfy $b_{11}=0$, and are reducible since $A_n$ and $B$ have a common zero in ${\mathbb P}^2({\mathbb Q})$. \end{proof} \begin{lemma}\label{lemredisot2} We have $\displaystyle \int_{{}stack{h=g_iutk\in{\mathbb F}F\\ t\ll X^{1/24}}} \#\bigl(h G_0\cdot\sigma_A({\mathbb R}F{i}(X))\cap({L}_n\setminus{L}_n^{{\rm gen}})\bigr)dh=o(X^{5/6}).$ \end{lemma} \begin{proof} The proof is identical to that of \cite[Lemma 14]{dodpf}. \end{proof} By Lemma~\ref{lemredisot1}, the integral on the right-hand side of \eqref{eqavg423} can be restricted to $h\in{\mathbb F}F'$, where ${\mathbb F}F'=\{h=g_intk\in{\mathbb F}F:t\ll X^{1/24}\}$. Lemma \ref{lemredisot2} implies that replacing $\smash{{L}_n^{\rm gen}}$ by ${L}_n$ on the right-hand side of \eqref{eqavg423} introduces an error of at most $o(X^{5/6})$. 
Using Proposition \ref{davlem} to estimate \smash{$\#\bigl(hG_0\sigma_A(\smash{{\mathbb R}F{i}(X)})\cap{L}_n\bigr)$} for $i=0$, $1$, $2+$, and $2-$, we~obtain \begin{equation*} \begin{array}{rcl} N^{(i)}({L};A,X)&=&\displaystyle\frac1{m_i{\rm Vol}(G_0)} \int_{h\in{\mathbb F}F'}\#\bigl(hG_0\sigma_A({\mathbb R}F{i}(X))\cap{L}_n\bigr)dh+o(X^{5/6})\\[.15in] &=&\displaystyle\frac1{m_i{\rm Vol}(G_0)}\int_{h\in{\mathbb F}F'} {\rm Vol}'(hG_0\cdot\sigma_A({\mathbb R}F{i}(X)))+O(t^4X^{4/6})dh+o(X^{5/6}), \end{array} \end{equation*} where the volumes ${\rm Vol}'$ of sets in $V_{A_n,b}({\mathbb R})$ are computed with respect to Euclidean measure normalized so that ${L}_n$ has covolume $1$. The measure $dn\,d^\times t\,dk/t^2$ is a Haar measure on ${\rm SO}_{A_n,b}({\mathbb R})$. \pagebreak The integral over ${\mathbb F}F$ of $t^4$ is $O(X^{1/12})$ while the volume of ${\mathbb F}F\setminus{\mathbb F}F'$ is $O(X^{-1/12})$. It follows that \begin{equation*} \begin{array}{rcl} N^{(i)}({L};A,X)&=&\displaystyle\frac{1}{m_i{\rm Vol}(G_0)}\int_{h\in{\mathbb F}F} {\rm Vol}'(hG_0\cdot\sigma_A({\mathbb R}F{i}(X)))dh+O(X^{3/4})+o(X^{5/6})\\[.15in] &=&\displaystyle\frac{1}{m_i}{\rm Vol}'({\mathbb F}F\cdot\sigma_A({\mathbb R}F{i}(X)))+o(X^{5/6})\\[.15in] &=&\displaystyle\frac{1}{m_i}\nu({L}){\rm Vol}({\mathbb F}F_A\cdot {\mathbb R}F{i}(X))+o(X^{5/6}), \end{array} \end{equation*} where the second equality follows from the analogue of Proposition~\ref{propjacsub} for $V$, stated as Proposition~\ref{propjac} in \S\ref{subsecnlocal} below, and the volume of ${\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}}(X)$ is computed with respect to Euclidean measure on $V_{A,b}({\mathbb R})$ normalized so that $V_{A,b}({\mathbb Z})$ has covolume $1$. We have proven Theorem~\ref{thisotcount}. $\Box$ \noindent {\bf Proof of Theorem \ref{thqnmccountnr}:} By Corollary~\ref{qparcor2}, we have \begin{equation*} N^{(i)}_4(n,X) \!=\!\!\! \sum_{A\in{\rm SL}_3({\mathbb Z})\backslash{\mathcal Q}_n({\mathbb Z})}\sum_{b=0}^{3n-1}N^{(i)}(V_{A,b}({\mathbb Z});A,X) \!=\! \frac{3n}{m_i}\sum_{A\in{\rm SL}_3({\mathbb Z})\backslash{\mathcal Q}_n({\mathbb Z})}\!\!\! {\rm Vol}({\mathbb F}F_A\cdot {\mathbb R}F{i}(X))+o(X^{5/6}), \end{equation*} where the final equality follows from Theorem \ref{thisotcount}. $\Box$ {}section{Uniformity estimates and squarefree sieves}\label{subsecnunif} As before, $n\geq 1$ is a fixed integer. Throughout this subsection, we also fix an integer $b\in[0,3n)$. We begin with the following definition. \begin{defn} \label{sec:4.15} {\em A collection $S=(S_p)_p$, where $S_p$ is an open and closed subset of $U_{n,b}({\mathbb Z}_p)$ whose boundary has measure $0$, is called a {\bf collection of cubic local specifications}. Such a collection $S$ is {\bf large} if, for all but finitely many primes $p$, the set $S_p$ contains all elements $f\in U_{n,b}({\mathbb Z}_p)$ with $p^2\nmid\Delta(f)$. We associate to $S$ the set $U_{n,b}({\mathbb Z})_S {}set U_{n,b}({\mathbb Z})$ of integer-coefficient binary cubic forms, where $f\in U_{n,b}({\mathbb Z})_S $ if and only if $f\in S_p$ for all primes $p$. We also associate to $S$ the set $V_{A,b}({\mathbb Z})_S{}set V_{A,b}({\mathbb Z})$, where $v\in V_{A,b}({\mathbb Z})_S$ if ${\mathbb R}es(v)\in S_p$ for all primes $p$.} \end{defn} In this subsection, we deduce the following theorem. \begin{theorem}\label{countingpairsfinal} Let $S$ be a large collection of local specifications on $U_{n,b}$, and let $A$ be an integer-coefficient ternary cubic form with determinant $n/4$. 
For $i\in \{0,1,2\#,2+,2-\}$, we have \begin{equation*} \begin{array}{rcl} \displaystyle N^\pm(U_{n,b}({\mathbb Z})_S;n,X)&=&\;\;\;\;\; \displaystyle\nu(U_{n,b}({\mathbb Z})_S){\rm Vol}(U_{n,b}({\mathbb R})^\pm(X))+o(X^{5/6});\\[.065in] \displaystyle N^{(i)}(V_{A,b}({\mathbb Z})_S;A,X)&=& \displaystyle\frac{1}{m_i}\nu(V_{A,b}({\mathbb Z})_S){\rm Vol}({\mathbb F}F_A\cdot {\mathbb R}F{i}(X))+o(X^{5/6}). \end{array} \end{equation*} \end{theorem} \noindent Note that we have $\nu(U_{n,b}({\mathbb Z})_S)=\prod_p{\rm Vol}(S_p)$. The determination of $\nu(V_{A,b}({\mathbb Z})_S)$ in terms of $S$ is more subtle, and we devote most of the next subsection to it. To prove Theorem \ref{countingpairsfinal}, we require the following tail estimate. \begin{proposition}\label{propunif} For a prime $p$, let ${\mathcal W}_p(U_{n,b})$ $($resp.\ ${\mathcal W}_p(V_{A,b}))$ denote the set of elements in $U_{n,b}({\mathbb Z})$ $($resp.\ $V_{A,b}({\mathbb Z}))$ whose discriminants are divisible by $p^2$. Then for any $M>0$, we have \begin{equation}\label{eqnunif} \begin{array}{rcl} \displaystyle\sum_{p> M}N({\mathcal W}_p(U_{n,b});n,X)&=&\displaystyle O_\varepsilonilon(X^{5/6+\varepsilonilon}/M)+O(X^{1/3}),\\[.2in] \displaystyle\sum_{p> M}N({\mathcal W}_p(V_{A,b});A,X)&=&\displaystyle O_\varepsilonilon(X^{5/6+\varepsilonilon}/M)+O(X^{19/24}), \end{array} \end{equation} where the implied constants are independent of $M$ and $X$. \end{proposition} \begin{proof} We write ${\mathcal W}_p(U_{n,b})$ (resp.\ ${\mathcal W}_p(V_{A,b})$) as the disjoint union $\smash{{\mathcal W}_p^{(1)}}(U_{n,b})\cup \smash{{\mathcal W}_p^{(2)}}(U_{n,b})$ (resp.\ $\smash{{\mathcal W}_p^{(1)}}(V_{A,b})\cup \smash{{\mathcal W}_p^{(2)}}(V_{A,b})$), where $\smash{{\mathcal W}_p^{(1)}}(U_{n,b})$ (resp.\ $\smash{{\mathcal W}_p^{(1)}}(V_{A,b})$) consists of elements in ${\mathcal W}_p(U_{n,b})$ (resp.\ ${\mathcal W}_p(V_{A,b})$) whose discriminants are divisible by $p$ for mod $p$ reasons (in the sense of \cite[\S1.5]{geosieve}). The bounds of \eqref{eqnunif}, with ${\mathcal W}_p(U_{n,b})$ and ${\mathcal W}_p(V_{A,b})$ replaced by $\smash{{\mathcal W}_p^{(1)}}(U_{n,b})$ and $\smash{{\mathcal W}_p^{(1)}}(V_{A,b})$, respectively, follow from \cite[Theorem 3.5]{geosieve} in conjunction with the proofs of Theorems \ref{thbccount} and \ref{thisotcount}. The proof of \eqref{eqnunif}, with ${\mathcal W}_p(U_{n,b})$ and ${\mathcal W}_p(V_{A,b})$ replaced by $\smash{{\mathcal W}_p^{(2)}}(U_{n,b})$ and $\smash{{\mathcal W}_p^{(2)}}(V_{A,b})$, respectively, follows from Proposition \ref{propunifsub2} by taking ${T}=X^\varepsilonilon$. \end{proof} \noindent {\bf Proof of Theorem~\ref{countingpairsfinal}:} Theorem \ref{countingpairsfinal} is an immediate consequence of the tail estimate in Proposition~\ref{propunif}, together with an application of a squarefree sieve identically as in the proof of \cite[Theorem 2.21]{BhSh}. $\Box$ {}section{Local mass formulas}\label{seclmn} To compute the volumes in Theorem \ref{countingpairsfinal} in a manner analogous to those computed in \S\ref{seclmsub}, we prove certain mass formulas relating \'etale quartic and cubic algebras over ${\mathbb Q}_p$. \begin{defn} \label{sec:4.18} {\em Let $n$ be a positive integer, and let $p$ be a prime dividing $n$. Let $({\mathcal O},\alphapha)$ be an $n$-monogenized cubic ring over ${\mathbb Z}_p$, where ${\mathcal O}$ is the ring of integers of an \'etale cubic extension $K$ of~${\mathbb Q}_p$. 
We define the {\bf algebra at infinity} $\mathcal A_\infty(\alphapha)$ of $({\mathcal O},\alphapha)$ in two equivalent ways. Let $f(x,y)\in U_n({\mathbb Z}_p)$ be a binary cubic form corresponding to $({\mathcal O},\alphapha)$, so that $K={\mathbb Q}_p[x]/(f(x,1))$. Then $K=\prod L_i$ as a product of fields, where $L_i:={\mathbb Q}_p[x]/(f_i(x,y))$ and $f=\prod f_i$ as a product of irreducible factors over~${\mathbb Z}_p$. Without loss of generality, we may assume that the leading coefficient of $f_1$ is $n$, and that the other $f_i$'s have leading coefficient $1$. (Indeed, if two factors $f_1$ and $f_2$ both have leading coefficients that are multiples of $p$, then $\prod f_i$ would not correspond to a maximal ring by Lemma~\ref{propcondmax}.) Then we define $\mathcal A_\infty(\alphapha)$ to be $L_1$, the factor of $K$ corresponding to $f_1$. Equivalently, we can also define $\mathcal A_\infty(\alphapha)$ intrinsically in terms of the data $({\mathcal O},\alphapha)$. If ${\mathbb Z}_p[\alphapha]{}set {\mathcal O}$ factors as ${\mathbb Z}_p\times S$ for some quadratic ring $S$ over ${\mathbb Z}_p$, then $K\cong{\mathbb Q}_p\times (S\otimes{\mathbb Q}_p)=L_1\times L_2$, and we define $\mathcal A_\infty(\alphapha):=L_1\cong{\mathbb Q}_p$. Otherwise, if ${\mathbb Z}_p$ is not a factor of ${\mathbb Z}_p[\alphapha]$, then we define ${\rm mon}Kal$ as the (unique) factor $L_1$ of $K$ that is a ramified field extension of ${\mathbb Q}_p$. }\end{defn} \begin{rem}{\em To see the equivalence of the two definitions of the algebra at infinity $\mathcal A_\infty(\alphapha)$, note that if $f(x,y)=\prod_{i\geq1} f_i(x,y)$ as in Definition \ref{sec:4.18}, then the characteristic polynomial of $\alphapha$ is $g(x)= (f_1(x,n)/n) \prod_{i> 1} f_i(x,n)$ when expressed as a product of monic polynomials. If $f_1(x,y)$ has degree $1$, then ${\mathcal O}$ has a factor of ${\mathbb Z}_p$ corresponding to $f_1(x,y)$; this is because $f_1(x,n)/n$ is a linear factor of $g(x)$ sharing no common root modulo $p$ with $\prod_{i>1}f_i(x,n)\equiv x^2\pmod{p}$. Otherwise, $f_1(x,y)$ has degree at least $2$ with a root at infinity modulo $p$. It follows that $f_1(x,y)$ corresponds to a ramified factor of~$K$. Hence our two definitions of $\mathcal A_\infty(\alphapha)$ agree.} \end{rem} \begin{nota} \label{sec:4.19} {\em For an \'etale cubic extension $K_3$ of ${\mathbb Q}_p$, let ${\mathbb R}R(K_3)$ denote again the set of \'etale non-overramified quartic extensions of ${\mathbb Q}_p$, up to isomorphism, with cubic resolvent $K_3$. For an $n$-monogenized \'etale cubic extension $(K_3,\alphapha)$ of ${\mathbb Q}_p$, let ${\mathbb R}R^+(K_3,\alphapha)$ (resp.\ ${\mathbb R}R^-(K_3,\alphapha)$) consist of those $K_4\in{\mathbb R}R(K_3)$ such that ${\rm mon}Kal$ splits (resp.\ stays inert) in the unramified quadratic extension $K_6/K_3$ corresponding to~$K_4$. }\end{nota} \begin{theorem}\label{thpartialmass} Let $(K_3,\alphapha)$ be an $n$-monogenized \'etale cubic extension of ${\mathbb Q}_p$. If $(K_3,\alphapha)$ is sufficiently ramified, then \begin{equation}\label{firsteq} \sum_{K_4\in{\mathbb R}R^+(K_3,\alphapha)}\smash{\frac{1}{|{\rm Aut}_{K_3}(K_4)|}}=1;\quad \sum_{K_4\in{\mathbb R}R^-(K_3,\alphapha)}\smash{\frac{1}{|{\rm Aut}_{K_3}(K_4)|}}=0. \end{equation} Otherwise, \begin{equation}\label{secondeq} \sum_{K_4\in{\mathbb R}R^+(K_3,\alphapha)}\smash{\frac{1}{|{\rm Aut}_{K_3}(K_4)|}}=\frac12; \quad \sum_{K_4\in{\mathbb R}R^-(K_3,\alphapha)}\smash{\frac{1}{|{\rm Aut}_{K_3}(K_4)|}}=\frac12. 
\end{equation} \end{theorem} \begin{proof} First, assume that $(K_3,\alphapha)$ is sufficiently ramified; then, by definition, either: \begin{itemize} \item[{\rm (a)}] $K_3$ is a totally ramified cubic extension of ${\mathbb Q}_p$, in which case ${\rm mon}Kal=K_3$; or \item[{\rm (b)}] $K_3={\mathbb Q}_p\times F$, where $F$ is a ramified quadratic extension of ${\mathbb Q}_p$, and ${\rm mon}Kal={\mathbb Q}_p$. \end{itemize} Let $K_6$ be an unramified extension of $K_3$ such that $N_{K_3/{\mathbb Q}_p}\Delta(K_6/K_3)$ is a square in ${\mathbb Z}_p^\times$. In Case (a) above, $K_6/K_3$ must split for $N_{K_3/{\mathbb Q}_p}\Delta(K_6/K_3)$ to be a square. In Case (b), note that $\smash{N_{F/{\mathbb Q}_p}\Delta(F'/F)}$ is a square for every unramified extension $F'$ of $F$; hence, for $N_{K_3/{\mathbb Q}_p}\Delta(K_6/K_3)$ to be a square, the component ${\mathbb Q}_p$ of $K_3$ must split in $K_6$. Thus (\ref{firsteq}) follows from Theorem \ref{thmmassmain}. Next assume that $(K_3,\alphapha)$ is not sufficiently ramified. Then $K_3$ has a factor $F\neq {\rm mon}Kal$ that is not a ramified quadratic extension of ${\mathbb Q}_p$. Now \smash{$N_{F/{\mathbb Q}_p}\Delta(F'/F)$} is a square for an unramified quadratic extension $F'/F$ if and only if it is split; hence the sizes of ${\mathbb R}R^+(K_3,\alphapha)$ and ${\mathbb R}R^-(K_3,\alphapha)$ are equal. Since the automorphism group of every $K_4\in {\mathbb R}R(K_3)$ is the same, (\ref{secondeq}) follows. \end{proof} \begin{defn} \label{sec:4.21} {\em Let $p$ be a prime. A ternary quadratic form $A\in\mathcal{Q}_n({\mathbb Z}_p)$ is said to be {\bf good at~$p$} if the conic in ${\mathbb P}^2(\overline{{\mathbb F}}_p)$ given as the zero set of the reduction of $A$ modulo $p$ is either smooth or a union of two distinct lines. In the latter case, if each line is defined over ${\mathbb F}_p$, then we define $\kappa_p(A):=1$ and say that $A$ is {\bf residually hyperbolic}. Otherwise, the two lines are a pair of conjugate lines each defined over ${\mathbb F}_{p^2}$; we then define $\kappa_p(A):=-1$ and say that $A$ is~{\bf residually~nonhyperbolic}.} \end{defn} \begin{lemma}\label{lem:quadlemma} Let $p$ be a fixed prime and $n$ a fixed integer. \begin{itemize} \item[{\rm (a)}] If $p\nmid n$, then $\mathcal{Q}_n({\mathbb Z}_p)$ is a single ${\rm SL}_3({\mathbb Z}_p)$-orbit. \item[{\rm (b)}] If $p\mid n$, then the set of good elements in $\mathcal{Q}_n({\mathbb Z}_p)$ breaks up into two ${\rm SL}_3({\mathbb Z}_p)$-orbits consisting of elements having $\kappa_p=1$ and $\kappa_p=-1$, respectively. \end{itemize} Finally, if $A$ is a ternary quadratic form over ${\mathbb Z}_p$ that is not good at $p$, then for every ternary quadratic form $B$ over ${\mathbb Z}_p$, ${\mathbb R}es(A,B)$ corresponds to a ring that is nonmaximal at $p$. \end{lemma} \begin{proof} Suppose $p\neq 2$. Then any $m$-ary quadratic form over ${\mathbb Z}_p$ can be diagonalized via ${\rm SL}_m({\mathbb Z}_p)$-transformations (see, e.g., \cite[Chapter~8,~Theorem~3.1]{Cassels}). Furthermore, a diagonal binary quadratic form $[u_1,u_2]$ over ${\mathbb Z}_p$ with unit determinant (i.e., $u_1u_2\in{\mathbb Z}_p^\times$) is ${\rm SL}_2({\mathbb Z}_p)$-equivalent to the diagonal form $[1,u_1u_2]$. 
Indeed, if at least one of $u_1$ or $u_2$ is a unit square, then this is immediate; if both $u_1$ and $u_2$ are unit nonsquares in ${\mathbb Z}_p$, then it suffices to check that a unit nonsquare in ${\mathbb Z}_p$ can be expressed as a sum of two unit squares, and this is true since the smallest nonsquare $m$ in ${\mathbb Z}/p{\mathbb Z}$ is the sum $1+(m-1)$ of two unit squares. Part (a) now follows because any form $A\in \mathcal{Q}_n({\mathbb Z}_p)$ with $n$ a unit in ${\mathbb Z}_p$ is seen to be ${\rm SL}_3({\mathbb Z}_p)$-equivalent to the diagonal form $[1,1,n/4]$. To deduce Part (b), note that the reduction modulo~$p$ of a good form $A\in\mathcal{Q}_n({\mathbb Z}_p)$ with $p\mid n$ must have ${\mathbb F}_p$-rank~2. Therefore, $A$ is ${\rm SL}_3({\mathbb Z}_p)$-equivalent to either $[1,1,n/4]$ or $[1,m,n/(4m)]$, where $m$ is a unit nonsquare; these two forms are ${\rm SL}_3({\mathbb Z}_p)$-inequivalent, as one is residually hyperbolic while the other is not. Next suppose $p=2$. A binary quadratic form over ${\mathbb Z}_2$ with unit discriminant is ${\rm GL}_2({\mathbb Z}_2)$-equivalent to either $S(x,y):=xy$ or $U(x,y):=x^2+xy+y^2$ (see, e.g., \cite[Chapter~8,~Lemma~4.1]{Cassels}). Since a good ternary quadratic form $A$ over ${\mathbb Z}_2$ is not diagonalizable (for otherwise $A$ modulo~2 would be the square of a linear form), it must have a summand that is either $S$ or $U$. Thus every good form $A\in\mathcal{Q}_n({\mathbb Z}_2)$ is equivalent to either $B:=S\oplus [-n]$ or~$B':=U\oplus [n/3]$. It remains to prove that $B$ is ${\rm SL}_3({\mathbb Z}_2)$-equivalent to $B'$ if and only if $n$ is odd. Since $x^2+xy+y^2$ represents every odd squareclass in ${\mathbb Z}_2$, it follows that when $n$ is odd, the form $B'$ represents $0$ and is thus equivalent to~$B$. When $n$ is even, $B$ and $B'$ are inequivalent since $B$ is residually hyperbolic while $B'$ is not. (For quadratic forms of arbitrary dimension, see \cite[Chapter~8]{Cassels} and \cite[Chapter~15,~\S7]{ConwaySloane} for the general theory over ${\mathbb Z}_p$, and \cite[Theorem~2.6]{Hanke_structure_of_massses} for a proof of~(a) over local~fields.) Finally, if $A$ is a ternary quadratic form over ${\mathbb Z}_p$ that is not good, and $B$ is any ternary quadratic form over ${\mathbb Z}_p$, then the $x^3$- and $x^2y$-coefficients of ${\mathbb R}es(A,B)$ are divisible by $p^2$ and $p$, respectively. The last assertion of the lemma then follows from \cite[Lemma 2.10]{BBP} (stated below in Lemma~\ref{propcondmax}), which asserts that a binary cubic form whose $x^3$-coefficient is divisible by $p^2$ and whose $x^2y$-coefficient is divisible by $p$ is nonmaximal at $p$. \end{proof} The above lemma motivates the definition of the following partial mass formulas. \begin{defn} \label{sec:4.23} {\em Let $n$ be a positive integer, let $p$ a prime dividing $n$, and let $f$ be an element in $U_n({\mathbb Z}_p)$ corresponding to a maximal ring. 
Then we define the {\bf $\kappa$-mass} of $f$ as \begin{equation*} \mathrm{Mass}_p^\pm(f):=\sum_{{}stack{(A,B)\in\frac{{\mathbb R}es^{-1}(f)}{{\rm SL}_3({\mathbb Z}_p)}\\\kappa_p(A)=\pm1}} \smash{\frac1{\#{\rm Stab}_{{\rm SL}_3({\mathbb Z}_p)}(A,B)}}, \end{equation*} where \smash{$\frac{{\mathbb R}es^{-1}(f)}{{\rm SL}_3({\mathbb Z}_p)}$} is a set of representatives for the action of ${\rm SL}_3({\mathbb Z}_p)$ on ${\mathbb R}es^{-1}(f)$.} \end{defn} \begin{corollary}\label{corpartialmass} Let $n$ be a positive integer, let $p$ be a prime dividing $n$, and let $f(x,y)$ be an element in $U_n({\mathbb Z}_p)$ corresponding to a maximal ring. Then \begin{equation*} \begin{array}{ll} \displaystyle\mathrm{Mass}^+_p(f)=\,1\mbox{ and }\mathrm{Mass}^-_p(f)=\,0 & \mbox{ if $f$ is sufficiently ramified;}\\[.1in] \displaystyle\mathrm{Mass}^+_p(f)=\frac12\mbox{ and }\mathrm{Mass}^-_p(f)=\frac12 & \mbox{ otherwise.} \end{array} \end{equation*} \end{corollary} \begin{proof} Let $(A,B)$ be an element of $V({\mathbb Z}_p)$ with resolvent $f$. Then $(A,B)$ corresponds to an \'etale quartic algebra $K_4$ with cubic resolvent $K_3$, and $K_4$ yields an unramified quadratic extension~$K_6/K_3$. It follows that ${\rm mon}Kal$ splits (resp.\ stays inert) in $K_6$ if and only if $A$ is residually hyperbolic (resp.\ residually nonhyperbolic). \end{proof} {}section{Volume computations and proof of the main $n$-monogenic theorem (Theorem~\ref{main_theorem})} \label{subsecnlocal} Let $\Sigma$ denote a large collection of $n$-monogenized cubic fields. We may assume that every $(K,\alphapha)$ arising from $\Sigma$ satisfies ${\rm{Tr}}(\alphapha)=b\in[0,3n)$, as the general case follows by summing over all such~$b$. The large collection $\Sigma$ of $n$-monogenized cubic fields gives rise to a large collection $S=(S_p)_p$ of binary cubic forms where $S_p{}set U_{n,b}({\mathbb Z}_p)$ for every $p$ and every $f\in S_p$ is maximal. To compute $\nu(V_{A,b}({\mathbb Z})_S)$, we use the following Jacobian change of variables. \begin{proposition}\label{propjac} Let $K$ be ${\mathbb R}$ or ${\mathbb Z}_p$, let $|\cdot|$ denote the usual absolute value on $K$, and let $s:U_{n,b}(K)\to V_{A,b}(K)$ be a continuous map such that ${\mathbb R}es(s(f))=f$, for each $f\in U_{n,b}(K)$. Then there exists a rational nonzero constant ${\mathcal J}$, independent of $K$ and $s$, such that for any measurable function $\phi$ on $V_{A,b}(K)$, we have \begin{equation*}\label{eqjac} \begin{array}{rcl} \!\! \! \displaystyle\int_{{\rm SO}_A(K)\cdot s(U_{n,b}(K))}\!\!\!\phi(v)dv&\!\!\!=\!\!\!& |{\mathcal J}|\!\displaystyle\int_{f\in U_{n,b}(K)}\displaystyle\int_{g\in {\rm SO}_A(K)} \!\!\phi(g\cdot s(f))\omega(g) df,\\[0.2in] \displaystyle\int_{V_{A,b}(K)}\!\!\phi(v)dv&\!\!\!=\!\!\!& |{\mathcal J}|\!\displaystyle\int_{{}stack{f\in U_{n,b}(K)\\ {\rm Disc}(f)\neq 0}}\! \displaystyle\sum_{v\in\!\!\textstyle{\frac{V_{A,b}(K)\cap{\mathbb R}es^{-1}(f)}{{\rm SO}_A(K)}}} \!\!\!\frac{1}{\#{\rm Stab}_{{\rm SO}_A({\mathbb Z}_p)}(v)}\int_{g\in {\rm SO}_A(K)}\!\!\!\phi(g\cdot v)\omega(g)df,\\[-.05in] \end{array} \end{equation*} where \smash{$\frac{V_{A,b}(K)\cap{\mathbb R}es^{-1}(f)}{{\rm SO}_A(K)}$} is a set of representatives for the action of ${\rm SO}_A(K)$ on $V_{A,b}(K)\cap{\mathbb R}es^{-1}(f)$. \end{proposition} \begin{proof} Proposition~\ref{propjac} follows immediately from the proofs of \cite[Propositions 3.11--12]{BhSh} (see also \cite[Remark~3.14]{BhSh}). 
\end{proof} \begin{corollary}\label{corjac} Let $S_p{}set U_{n,b}({\mathbb Z}_p)$ be a closed subset whose boundary has measure $0$. Consider the set $V_{A,b}({\mathbb Z})_{S,p}$ defined by $V_{A,b}({\mathbb Z})_{S,p}:=V_{A,b}({\mathbb Z}_p)\cap{\mathbb R}es^{-1}(S_p).$ Then \begin{equation} {\rm Vol}(V_{A,b}({\mathbb Z})_{S,p})= \left\{ \begin{array}{ll} \;\;\,\displaystyle |{\mathcal J}|_p{\rm Vol}({\rm SO}_A({\mathbb Z}_p)){\rm Vol}(S_p) &\mbox{when } p\nmid n; \\[.05in] \displaystyle \frac12|{\mathcal J}|_p{\rm Vol}({\rm SO}_A({\mathbb Z}_p)){\rm Vol}(S_p) (1+\kappa_p(A)\rho_p(\Sigma)) &\mbox{when } p\mid n. \end{array}\right. \end{equation} \end{corollary} \begin{proof} By Proposition~\ref{propjac} (setting $\phi$ to be the characteristic function of $V({\mathbb Z})_{S,p}$), we have \begin{equation}\label{eqcorjact1} {\rm Vol}(V_{A,b}({\mathbb Z})_{S,p})=|{\mathcal J}|_p{\rm Vol}({\rm SO}_A({\mathbb Z}_p))\int_{f\in S_p} \sum_{v\in\!\!\textstyle{\frac{V_{A,b}({\mathbb Z}_p)\cap{\mathbb R}es^{-1}(f)}{{\rm SO}_A({\mathbb Z}_p)}}} \!\!\!\smash{\frac{1}{\#{\rm Stab}_{{\rm SL}_3({\mathbb Z}_p)}(v)}} df, \end{equation} since the stabilizers of $v$ in ${\rm SO}_A({\mathbb Z}_p)$ and ${\rm SL}_3({\mathbb Z}_p)$ are the same, being equal to ${\rm SO}_A({\mathbb Z}_p)\cap {\rm SO}_B({\mathbb Z}_p)$. If $p\nmid n$, then $\mathcal{Q}_n({\mathbb Z}_p)$ is a single ${\rm SL}_3({\mathbb Z}_p)$-orbit by Lemma~\ref{lem:quadlemma}(a). Hence there is a one-to-one correspondence between the sets ${\rm SO}_A({\mathbb Z}_p)\backslash (V_{A,b}({\mathbb Z}_p)\cap{\mathbb R}es^{-1}(f))$ and ${\rm SL}_3({\mathbb Z}_p)\backslash {\mathbb R}es^{-1}(f)$, and so the integrand on the right-hand side of \eqref{eqcorjact1} is equal to~$\mathrm{Mass}_p(f)$. If $p\mid n$, then $\mathcal{Q}_n({\mathbb Z}_p)$ consists of two ${\rm SL}_3({\mathbb Z}_p)$-orbits by Lemma~\ref{lem:quadlemma}(b), having $\kappa_p=1$ and $\kappa_p=-1$, respectively. Thus there is a one-to-one correspondence between ${\rm SO}_A({\mathbb Z}_p)\backslash (V_{A,b}({\mathbb Z}_p)\cap{\mathbb R}es^{-1}(f))$ and the set of elements $(A_1,B_1)\in{\rm SL}_3({\mathbb Z}_p)\backslash {\mathbb R}es^{-1}(f)$ with $\kappa_p(A_1)=\kappa_p(A)=\pm1$. Hence the integrand on the right-hand side of \eqref{eqcorjact1} is $\mathrm{Mass}^{\pm}_p(f)$. By Corollaries~\ref{cortotmass} and \ref{corpartialmass}, we then have \begin{equation*} \smash{\int_{f\in S_p}}\mathrm{Mass}_p(f)=1; \qquad \smash{\int_{f\in S_p}}\mathrm{Mass}_p^\pm(f)=\frac{1\pm\rho_p(\Sigma)}{2}, \end{equation*} as desired. \end{proof} \begin{proposition}\label{propgenus} Let ${\mathcal G}$ be a genus of elements in $\mathcal{Q}_n({\mathbb Z})$ that are good at $p$ for all $p\mid n$. Then \begin{equation*} \sum_{A\in{\rm SL}_3({\mathbb Z})\backslash{\mathcal G}} \frac{N^{(i)}(V_{A,b}({\mathbb Z})_S;A,X)}{N^\pm(U_{n,b}({\mathbb Z})_S;n,X)} =\frac2{m_i}\prod_{p\mid n}\frac{1+\kappa_p(A)\rho_p(\Sigma)}{2}+o(1), \end{equation*} where: $i\in\{0, 2+, 2-, 2\#\}$ if $\pm=+$; $i=1$ if $\pm=-$ and $A$ is isotropic over ${\mathbb R}$; and $i=2\#$ and $\pm=+$ if $A$ is anisotropic over~${\mathbb R}$. \end{proposition} \begin{proof} From Theorem \ref{countingpairsfinal}, we obtain \begin{equation*} \frac{\smash{N^{(i)}(V_{A,b}({\mathbb Z})_S;A,X)}}{N^\pm(U_{n,b}({\mathbb Z})_S;n,X)}= \frac{\frac{1}{m_i}\nu(\smash{V_{A,b}({\mathbb Z})_S}){\rm Vol}({\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}}(X))}{ \nu(U_{n,b}({\mathbb Z})_S){\rm Vol}(\smash{U_{n,b}}({\mathbb R})^\pm)} +o(1). 
\end{equation*} We evaluate ${\rm Vol}({\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}}(X))$ and $\nu(V_{A,b}({\mathbb Z})_{S,p})$ using Proposition \ref{propjac} (with $\phi$ being the \pagebreak characteristic function of ${\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}}(X)$) and Corollary \ref{corjac}, respectively, yielding \begin{equation*} \begin{array}{rcl} \displaystyle \frac{\frac{1}{m_i}\nu(V_{A,b}({\mathbb Z})_S){\rm Vol}({\mathbb F}F_A\cdot \smash{{\mathbb R}F{i}}(X))}{ \nu(U_{n,b}({\mathbb Z})_S){\rm Vol}(U_{n,b}({\mathbb R})^\pm)}&=& \displaystyle\frac{1}{m_i}{\rm Vol}({\mathbb F}F_A)\prod_{p}{\rm Vol}({\rm SO}_A({\mathbb Z}_p)) \prod_{p\mid n}\frac{1+\kappa_p(A)\rho_p(\Sigma)}{2}. \end{array} \end{equation*} Since \begin{equation*} \sum_{A\in{\rm SL}_3({\mathbb Z})\backslash{\mathcal G}}{\rm Vol}({\mathbb F}F_A)\prod_{p}{\rm Vol}({\rm SO}_A({\mathbb Z}_p))=2, \end{equation*} the Tamagawa number of ${\rm SO}_3$, the proposition follows. \end{proof} \noindent The following lemma will be used repeatedly in the proof of Theorem~\ref{main_theorem}. \begin{lemma}[{\cite[{Lemma~4.17}]{Hanke_structure_of_massses}}]\label{hankelemma} Suppose $\mathbb T$ is a nonempty finite set, $X_i$ and $Y_i$ are indeterminates for each $i\in\mathbb T$, and $\mu_N$ is the set of all $N^{th}$ roots of unity in ${\mathbb C}$. Then for any $c\in\mu_N$, we have the polynomial~identity \begin{equation}\label{eqcombeq} \sum_{{}stack{(\varepsilonilon_i)_{i\in\mathbb T}\in\mu_N^{\mathbb T}\\\prod_{i\in\mathbb T}\varepsilonilon_i=c}} \prod_{i\in \mathbb T}(X_i+\varepsilonilon_i Y_i)=N^{|\mathbb T|-1}\smash{\Bigl[\prod_{i\in\mathbb T}X_i+c\prod_{i\in \mathbb T}Y_i\Bigr]}. \end{equation} \end{lemma} \noindent We will also require the following notation. \begin{nota} \label{sec:4.28} {\em For a genus ${\mathcal G}$ of elements in $\mathcal{Q}_n({\mathbb Z})$ that are good at $p$ for all $p$, define $\kappa_p({\mathcal G}):=\kappa_p(A)$ for $A\in{\mathcal G}$. Let $T:=T_n$ denote the set of primes $p$ dividing $n$, and let $T_{{\rm even}}:=T_{n,{\rm even}}$ (resp.\ $T_{{\rm odd}}:=T_{n,{\rm odd}}$) denote the set of $p\in T$ such that the order of $p$ dividing $n$ is even (resp.\ odd). }\end{nota} \noindent{\bf Proof of Theorem \ref{main_theorem}:} Let \smash{$F^\pm_\Sigma(n,X)$} denote the set of fields $K$ in $F_\Sigma(n,X)$ with $\pm\Delta(K)>0$. \noindent {\bf (a):} In the case of 2-class groups of totally real cubic fields, by Theorem \ref{thclgp1} we have \begin{equation*} \begin{array}{rcl} \mathrm{Avg}({\rm Cl}_2,F_\Sigma^+(n,X))&=&1+\displaystyle \displaystyle\frac{\#\bigl({\rm SL}_3({\mathbb Z})\backslash \bigl(V({\mathbb Z})^{(0),{\rm gen}}\cap{\mathbb R}es^{-1}(U_{n,b}({\mathbb Z})_{S,X}^+)\bigr) \bigr)}{N^+(U_{n,b}({\mathbb Z})_S ;n,X)}+o(1)\\[.175in] &=&1+\displaystyle \frac{\displaystyle\sum_{A\in{\rm SL}_3({\mathbb Z})\backslash\mathcal{Q}_n({\mathbb Z})} N^{(0)}(V_{A,b}({\mathbb Z})_S;A,X)}{N^+(U_{n,b}({\mathbb Z})_S;n,X)}+o(1). \end{array} \end{equation*} We claim that there is a (unique) genus ${\mathcal G}$ of good ternary quadratic forms with $\kappa_p({\mathcal G})=\varepsilonilon_p$ for all $p\in T$ if and only if the ordered tuple \smash{$\varepsilonilon\in\{\pm1\}^T$} satisfies \smash{$\prod_{T_{\rm odd}}\varepsilonilon_p=\kappa_\infty({\mathcal G})$}; here we define $\kappa_\infty({\mathcal G})$ to be $1$ if the forms in ${\mathcal G}$ are isotropic over ${\mathbb R}$ and $-1$ otherwise. 
Indeed, if $c_v(A)$ denotes the Hasse invariant of~$A$ (see~\cite[p.~55]{Cassels}) and $(a,b)_v$ denotes the Hilbert symbol of $a$ and $b$ over ${\mathbb Q}_v$, then one checks~that \begin{equation}\label{eqkappaHasse} \smash{\displaystyle c_p(A)=(-1,-n)_p\cdot\kappa_p(A)^{{\rm ord}_p(n)}} \end{equation} for any prime $p$ and any $A\in\mathcal Q_n({\mathbb Z})$ that is good at all primes. Since $n$ is positive, we have $\kappa_\infty(A)=-c_\infty(A)$. Thus the product formula for the Hilbert symbol translates directly to \begin{equation*} \prod_{p\in T_{\rm odd}}\kappa_p(A)=\kappa_\infty(A). \end{equation*} The claim now follows from \cite[Theorem~1.3, p.~77]{Cassels} and \cite[Theorem 1.2, p.~129]{Cassels}. By Proposition~\ref{propgenus}, we therefore have \begin{equation}\label{eqavg+} \begin{array}{rcl} \mathrm{Avg}({\rm Cl}_2,F_\Sigma^+(n,X))&\!=\!&1+\displaystyle\frac{1}{2} \sum_{{\mathcal G}:\kappa_\infty({\mathcal G})=1}\displaystyle\prod_{p\mid n} \frac{1+\kappa_p({\mathcal G})\rho_p(\Sigma)}{2}+o(1)\\[.2in] &\!=\!&1+\displaystyle\frac12 \sum_{{}stack{(\varepsilonilon_p)\in\{\pm1\}^T\\\prod_{T_{\rm odd}}\varepsilonilon_p=1}} \prod_{T}\frac{1+\varepsilonilon_p\rho_p(\Sigma)}{2}+o(1)\\[.3in] &\!=\!&1+\displaystyle\frac{1}{2} \sum_{{}stack{(\varepsilonilon_p)\in\{\pm1\}^{T_{\rm odd}}\\\prod_{T_{\rm odd}}\varepsilonilon_p=1}} \prod_{T_{\rm odd}}\frac{1+\varepsilonilon_p\rho_p(\Sigma)}{2} \prod_{T_{{\rm even}}}\Bigl( \frac{1+\rho_p(\Sigma)}{2}+\frac{1-\rho_p(\Sigma)}{2}\Bigr)+o(1)\\[.3in] &\!=\!&1+\displaystyle\frac14\Bigl(1+\prod_{T_{\rm odd}}\rho_p(\Sigma)\Bigr)+o(1)\\[.275in] &\!=\!&\displaystyle{\frac54+\frac{1}{4}\rho(\Sigma)+o(1)}, \end{array} \end{equation} where the penultimate equality follows from Lemma~\ref{hankelemma} with $N=2$, $c=1$, and $\mathbb T=T_{\rm odd}.$ \noindent {\bf (b):} In the case of complex cubic fields, we similarly have \begin{equation}\label{eqavg-} \begin{array}{rcl} \mathrm{Avg}({\rm Cl}_2,F_\Sigma^-(n,X))&=&1+ \displaystyle\frac{\#\Bigl({\rm SL}_3({\mathbb Z})\backslash \bigl(V({\mathbb Z})^{(1),{\rm gen}}\cap{\mathbb R}es^{-1}(U_{n,b}({\mathbb Z})_{S,X}^-)\bigr) \Bigr)}{N^-(U_{n,b}({\mathbb Z})_S;n,X)}+o(1)\\[.15in] &=&1+\displaystyle \frac{\displaystyle\sum_{A\in{\rm SL}_3({\mathbb Z})\backslash\mathcal{Q}_n({\mathbb Z})} N^{(1)}(V_{A,b}({\mathbb Z})_S;A,X)}{N^-(U_{n,b}({\mathbb Z})_S;n,X)}+o(1)\\[.15in] &=&1+\displaystyle\frac12\Bigl(1+\prod_{T_{\rm odd}}\rho_p(\Sigma)\Bigr)+o(1)\\[.25in] &=&\displaystyle{\frac32+\frac{1}{2}\rho(\Sigma)+o(1)}, \end{array} \end{equation} where we again use Lemma~\ref{hankelemma} with $N=2$, $c=1$, and $\mathbb T=T_{\rm odd}$. \noindent {\bf (c):} Finally, in the case of narrow 2-class groups of totally real cubic fields, we have \begin{equation*} \mathrm{Avg}({\rm Cl}^+_2,F_\Sigma^+(n,X))\;=\;1+\displaystyle \frac{\displaystyle\sum_{A\in{\rm SL}_3({\mathbb Z})\backslash\mathcal{Q}_n({\mathbb Z})} N^{(0)}(V_{A,b}({\mathbb Z})_S;A,X)+N^{(2)}(V_{A,b}({\mathbb Z})_S;A,X)}{N^+(U_{n,b}({\mathbb Z})_S;n,X)}+o(1). \end{equation*} The contributions of $N^{(2+)}(V_{A,b}({\mathbb Z})_S;A,X)$ and $N^{(2-)}(V_{A,b}({\mathbb Z})_S;A,X)$ are the same as that of $N^{(0)}(V_{A,b}({\mathbb Z})_S;A,X)$, yielding a total contribution of \begin{equation*} \frac34\Bigl(1+\prod_{T_{\rm odd}}\rho_p(\Sigma)\Bigr)+o(1) \end{equation*} to the average size of ${\rm Cl}_2^+$ from the splitting types $0$, $2+$, and $2-$. 
On the other hand, since $\kappa_\infty(A)=-1$ for pairs $(A,B)\in \smash{V({\mathbb Z})^{(2\#)}}$, the contribution of \smash{$N^{(2\#)}(V_{A,b}({\mathbb Z})_S;A,X)$} is \begin{equation*} \frac14\Bigl(1-\prod_{T_{\rm odd}}\rho_p(\Sigma)\Bigr)+o(1), \end{equation*} this time using Lemma~\ref{hankelemma} with $N=2$, $c=-1$, and $\mathbb T=T_{\rm odd}$. Summing up, we obtain \begin{equation}\label{eqavgn+} \mathrm{Avg}({\rm Cl}^+_2,F_\Sigma^+(n,X))=2+\frac{1}{2}\rho(\Sigma)+o(1). \end{equation} We have proven Theorem \ref{main_theorem}. $\Box$ {}section{Deduction of Theorems \ref{thmoncubicfields}, \ref{thmain2}, and \ref{thmain1}}\label{subsecnlocal2} {\bf Proof of Theorem~\ref{thmoncubicfields}:} Theorem~\ref{thmoncubicfields} follows from Theorem~\ref{main_theorem} by noting that there are no primes dividing $n=1$ to odd order, and so $\rho(\Sigma)=1$ for every large collection $\Sigma$. $\Box$ \noindent {\bf Proof of Theorem~\ref{thmain1}:} Theorem~\ref{thmain1} follows from Theorem~\ref{main_theorem} by noting that when $\Sigma_p{}set T_p$ contains no extensions ramified at primes dividing $k$, then $\rho_p(\Sigma_p)=0$. $\Box$ To deduce Theorem \ref{thmain2} from Theorem \ref{main_theorem}, we determine the probability that an $n$-monogenized cubic field is sufficiently ramified at a prime dividing $n$. We use the following lemmas. \begin{lemma}\label{lemsuf} Let $n$ be a positive integer, and let $p$ be a prime dividing $n$. Suppose $f(x,y) \in U_n({\mathbb Z}_p)$ corresponds to a maximal $n$-monogenized cubic ring $(\mathcal{C}_p,\alphapha_p)$. Then the pair $(\mathcal{C}_p\otimes{\mathbb Q}_p,\alphapha_p)$ is sufficiently ramified if and only if either: \begin{itemize} \item[{\rm (a)}] $f(x,y)$ has a triple root in ${\mathbb P}^1({\mathbb F}_p);$ or \item[{\rm (b)}] $f(x,1)$ has a double root in ${\mathbb F}_p$. \end{itemize} \end{lemma} \begin{proof} If $f(x,y)$ has distinct roots in ${\mathbb P}^1(\overline{{\mathbb F}}_p)$, then $\mathcal K_p:=\mathcal{C}_p\otimes{\mathbb Q}_p$ is unramified and so $(\mathcal K_p,\alphapha_p)$ is not sufficiently ramified. If $f(x,y)$ has a triple root modulo $p$, then $\mathcal K_p$ is a totally ramified cubic extension of ${\mathbb Q}_p$, and hence $(\mathcal K_p,\alphapha_p)$ is sufficiently ramified. We now assume that the reduction of $f(x,y)$ modulo $p$ has a double root but not a triple root. Then the pair $(\mathcal{C}_p,\alphapha_p)$ is sufficiently ramified when ${\mathbb Z}_p[\alphapha]\otimes{\mathbb F}_p$ is isomorphic to ${\mathbb F}_p\times{\mathbb F}_p[t]/(t^2)$, and not sufficiently ramified when ${\mathbb Z}_p[\alphapha]\otimes{\mathbb F}_p$ is isomorphic to ${\mathbb F}_p[t]/(t^3)$. Write $f(x,y)=nx^3+bx^2y+cxy^2+dy^3$. Then the characteristic polynomial of $\alphapha_p$ is equal~to $x^3+bx^2y+nxy^2+n^2dy^3$. Hence ${\mathbb Z}_p[\alphapha]\otimes{\mathbb F}_p\cong{\mathbb F}_p[x]/(x^3+bx^2)$, and so $(\mathcal{C}_p,\alphapha_p)$ is sufficiently ramified if and only if $p\nmid b$. Thus, when the reduction of $f(x,y)$ modulo $p$ has a double root but not a triple root, the pair $(\mathcal{C}_p,\alphapha_p)$ is sufficiently ramified if and only if the reduction of $f(x,1)$ modulo $p$ has a double root in ${\mathbb F}_p$. \end{proof} The next result determines when $f\in U({\mathbb Z})$ corresponds to a cubic ring nonmaximal at $p$. 
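Before stating it, we note that both this nonmaximality criterion and the sufficient-ramification criterion of Lemma~\ref{lemsuf} are easy to check by machine, since all the conditions involved depend only on the coefficients $b,c,d$ modulo $p^2$. The following sketch is only an illustrative check (it assumes Python; the function names are ours, and the maximality test implements the conditions $\mathrm C_r$ and $\mathrm C_\infty$ described in the proof of Proposition~\ref{propdensmax} below). By exact enumeration it reproduces, for the sample choice $p=3$ and $n=3$, the densities of Propositions~\ref{propdensmax} and~\ref{propdensuf1} and the ratio $1/(p+1)$ of Corollary~\ref{propdensuf}.
\begin{verbatim}
import itertools
from fractions import Fraction

def is_maximal(n, b, c, d, p):
    # Nonmaximality criteria from the proof of Proposition propdensmax:
    # imprimitivity, Condition C_infinity, or Condition C_r for some r in F_p.
    if n % p == 0 and b % p == 0 and c % p == 0 and d % p == 0:
        return False                              # f is a multiple of p
    if b % p == 0 and n % p**2 == 0:
        return False                              # Condition C_infinity
    for r in range(p):                            # Condition C_r for f(x+ry, y)
        c_r = 3*n*r*r + 2*b*r + c
        d_r = n*r**3 + b*r*r + c*r + d
        if c_r % p == 0 and d_r % p**2 == 0:
            return False
    return True

def is_sufficiently_ramified(n, b, c, d, p):
    # Lemma lemsuf (for p | n and f maximal): a triple root at infinity mod p,
    # or a double root of f(x,1) in F_p.
    if b % p == 0 and c % p == 0:
        return True
    return any((n*r**3 + b*r*r + c*r + d) % p == 0 and
               (3*n*r*r + 2*b*r + c) % p == 0 for r in range(p))

p, n = 3, 3                      # an example with p exactly dividing n
q = p**2                         # all conditions depend on b, c, d modulo p^2
total = maximal = suff = 0
for b, c, d in itertools.product(range(q), repeat=3):
    total += 1
    if is_maximal(n, b, c, d, p):
        maximal += 1
        if is_sufficiently_ramified(n, b, c, d, p):
            suff += 1

print(Fraction(maximal, total))      # (p^2-1)/p^2 = 8/9
print(Fraction(suff, maximal))       # 1/(p+1)     = 1/4
\end{verbatim}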
\begin{lemma}[{\cite[Lemma 2.10]{BBP}}]\label{propcondmax} A cubic ring corresponding to a binary cubic form $f(x,y)$ fails to be locally maximal at $p$ if and only if either: {\rm (a)} $f$ is a multiple of $p$, or {\rm (b)} there is some ${\rm GL}_2({\mathbb Z})$-transformation of $f(x,y)=ax^3+bx^2y+cxy^2+dy^3$ such that $d$ is a multiple of $p^2$ and $c$ is a multiple of~$p$. \end{lemma} \begin{proposition}\label{propdensmax} The density $\mu_{\rm max}(U_n({\mathbb Z}_p))$ of elements in $U_n({\mathbb Z}_p)$ that are maximal is given by \begin{equation*} \mu_{\rm max}(U_n({\mathbb Z}_p))=\left\{ \begin{array}{cl} (p^2-1)/p^2 &\mbox{if } p^2\nmid n;\\[.035in] (p-1)^2(p+1)/p^3&\mbox{if } p^2\mid n. \end{array}\right. \end{equation*} \end{proposition} \begin{proof} We calculate the probability of nonmaximality in $U_n({\mathbb Z}_p)$ using conditions (a) and (b) of Lemma~\ref{propcondmax}. Regarding (a), the probability that an element of $U_n({\mathbb Z}_p)$ is imprimitive is 0 if $p\nmid n$ and is $1/p^3$ otherwise. \pagebreak Now (b) implies that a primitive binary cubic form $f(x,y)=nx^3+bx^2y+cxy^2+dy^3\in U_n({\mathbb Z}_p)$ is nonmaximal if and only if: $p\mid b$ and $p^2\mid n$ {\bf (Condition $\mathrm C_\infty$)}; or for some $r\in\{0,1,\ldots,p-1\}$, the binary cubic form $f(x+ry,y)\!=\!nx^3+b_rx^2y+c_rxy^2+d_ry^3$ satisfies $p\mid c_r$ and $p^2\mid d_r$ {\bf (Condition~$\mathrm C_r$)}. We determine next the probabilities that primitivity and nonmaximality occur due to Condition~$\mathrm C_r$ for $r=0,1,\ldots,p-1,\infty$, and then~sum. \begin{itemize} \item If $p\nmid n$, then for finite $r$, primitivity and Condition $\mathrm C_r$ hold when $p\mid c_r$ and $p^2\mid d_r$, which has probability $1/p^3$; Condition $\mathrm C_\infty$ never holds in this case. \item If $p\parallel n$, then for finite $r$, primitivity and Condition $\mathrm C_r$ hold when $p\nmid b_r$, $p\mid c_r$, and $p^2\mid d_r$, which has probability $(p-1)/p\times1/p^3 = (p-1)/p^4$; Condition $\mathrm C_\infty$ never holds in this case. \item If $p^2\mid n$, then for finite $r$, primitivity and Condition $\mathrm C_r$ hold (just as when $p\parallel n$) with probability $(p-1)/p^4$; moreover, primitivity and Condition $\mathrm C_\infty$ hold when $p\mid b$ and either $p\nmid c$ or $p\nmid d$, which occurs with probability $1/p\times (p^2-1)/p^2=(p^2-1)/p^3$. \end{itemize} Thus the probability of nonmaximality is given by: $p\times 1/p^3 = 1/p^2$ if $p\nmid n$; \;$1/p^3+p\times (p-1)/p^4=1/p^2$ if $p\parallel n$; and $1/p^2+(p^2-1)/p^3 = (p^2+p-1)/p^3$ if $p^2\mid n$. \end{proof} \begin{proposition}\label{propdensuf1} The density $\mu_{{\rm max},\rm{suff}}(U_n({\mathbb Z}_p))$ of elements in $U_n({\mathbb Z}_p)$ that are maximal and sufficiently ramified~is given by \begin{equation*} \mu_{{\rm max},\rm{suff}}(U_n({\mathbb Z}_p))=\left\{ \begin{array}{cl} (p-1)/p^2 &\mbox{if } p^2\nmid n;\\[.035in] (p-1)^2/p^3&\mbox{if } p^2\mid n. \end{array}\right. \end{equation*} \end{proposition} \begin{proof} Here we use Lemma \ref{lemsuf} and Lemma~\ref{propcondmax}, and also the same notation and representatives~$r$ for ${\mathbb P}^1({\mathbb F}_p)$ as in the proof of Proposition~\ref{propdensmax}. \begin{itemize} \item If $p\nmid n$, then for finite $r$, we have that $f$ is maximal with a multiple root at $r$ precisely when $p\mid c_r$ and $p\parallel d_r$, which has probability $1/p\times (p-1)/p^2=(p-1)/p^3$; a multiple root at $\infty$ cannot occur in this case. 
\item If $p\parallel n$, then for finite $r$, we have that $f$ is maximal with a multiple root at $r$ precisely when $p\nmid c_r$, $p\mid c_r$ and $p\parallel d_r$, which has probability $(p-1)/p\times 1/p\times (p-1)/p^2=(p-1)^2/p^4$; maximality and a triple root occur at $\infty$ when $p\mid b$, $p\mid c$, and $p\nmid d$, which has probability $1/p \times 1/p \times (p-1)/p=(p-1)/p^3$. \item If $p^2\mid n$, then for finite $r$, we have that $f$ is maximal with a multiple root at $r$ (just as when~$p\parallel n$) with probability $(p-1)^2/p^4$; maximality with a multiple root at $\infty$ cannot~occur. \end{itemize} So the probability that $f$ is maximal and sufficiently ramified is: $p\times (p-1)/p^3 = (p-1)/p^2$ if $p\nmid n$; \;$p\times (p-1)^2/p^4+(p-1)/p^3=(p-1)/p^2$ if $p\parallel n$; and $p\times (p-1)^2/p^4=(p-1)^2/p^3$ if $p^2\mid n$. \end{proof} Propositions~\ref{propdensmax} and \ref{propdensuf1} therefore imply the following. \begin{corollary}\label{propdensuf} For any positive integer $n$ and prime $p$, the relative density of sufficiently-ramified elements in $U_n({\mathbb Z}_p)$ among the maximal elements in $U_n({\mathbb Z}_p)$ is $$\frac{\mu_{{\rm max},\rm{suff}}(U_n({\mathbb Z}_p))}{\mu_{\rm max}(U_n({\mathbb Z}_p))}=\frac1{p+1}.$$ \end{corollary} \noindent {\bf Proof of Theorem~\ref{thmain2}:} Theorem~\ref{thmain2} follows immediately from Theorem~\ref{main_theorem} and Corollary \ref{propdensuf}. $\Box$ \pagebreak \begin{comment} \section*{Appendix: Numerical data} The following table contains average sizes of ${\rm Cl}(K)[2]$, where $K$ runs through finite sets of $n$-monogenized cubic fields for $n=1$ to $10$. For each $n$, the sets were picked by randomly choosing $1000$ $n$-monogenized cubic fields with height bounded by $10^{12}$. The average sizes of ${\rm Cl}(K)[2]$ were computed using Sage. \begin{table}[ht] \centering \begin{tabular}{|c | c| c| c|c|} \hline \multirow{2}{*}{$n=m^2k$} & \multicolumn{2}{|c|}{Positive Discriminant} & \multicolumn{2}{|c|}{Negative Discriminant}\\ \cline{2-5} \hspace{.02in} & $\frac{5}{4}+\frac{1}{4\sigma(k)}$ & Average & $\frac{3}{2}+\frac{1}{2\sigma(k)}$ & Average\\ \hline $1$ & $1.5$ & $1.371$ & $2$ & $1.736$ \\ $2$ & $1.333$ & $1.244$ & $1.667$ & $1.588$ \\ $3$ & $1.313$ & $1.188$ & $1.625$ & $1.552$ \\ $4$ & $1.5$ & $1.375$ & $2$ & $1.676$ \\ $5$ & $1.292$ & $1.199$ & $1.583$ & $1.484$ \\ $6$ & $1.271$ & $1.184$ & $1.542$ & $1.372$ \\ $7$ & $1.281$ & $1.214$ & $1.563$ & $1.456$ \\ $8$ & $1.333$ & $1.226$ & $1.667$ & $1.5$ \\ $9$ & $1.5$ & $1.366$ & $2$ & $1.764$ \\ $10$ & $1.264$ & $1.208$ & $1.528$ & $1.464$ \\ \hline \end{tabular} \end{table} \end{comment} \addcontentsline{toc}{section}{Index of notation} \begin{center} {\bf \large Index of notation} \begin{longtable}{|@{\hspace{.06in}}p{1.13in}@{\hspace{.06in}}|@{\hspace{.06in}}p{4.6in}@{\hspace{.06in}}|@{\hspace{.06in}}p{.37in}@{\hspace{.06in}}|} \hline {\bf Notation} & {\bf Description} & {\bf In} \\ \hline \hline \endhead \hline $A_n$ & The symmetric antidiagonal matrix with antidiagonal $(1/2,-n,1/2)$. & \hyperref[sec:4.12]{C\ref{sec:4.12}} \\ \hline $\mathcal A_\infty(\alphapha)$ & The algebra at infinity of the $n$-monogenized cubic ring $({\mathcal O},\alphapha)$. & \hyperref[sec:4.18]{D\ref{sec:4.18}} \\ \hline ${\rm Aut}_{K_3}(K_4)$ & The subgroup of ${\rm Aut}(K_4)$ inducing the trivial element of ${\rm Aut}(K_3)$. & \hyperref[sec:3.36]{N\ref{sec:3.36}} \\ \hline$(C,\alphapha)$ & An $n$-monogenized cubic ring. 
& \hyperref[sec:2.5]{D\ref{sec:2.5}} \\ \hline ${\rm Cl}_2(C)$, \!${\rm Cl}_2^+(C)$,\phantom{\!\!\!|\!\!} ${\rm Cl}_2(C)^*$, \!${\rm Cl}_2^+(C)^*$ & The 2-torsion subgroups of the class group, the narrow class group, and their respective dual groups. & \hyperref[sec:2.14]{N\ref{sec:2.14}} \\ \hline $\Delta(f)$ & $b^2c^2-4ac^3-4b^3d-27a^2d^2+18abcd$, where $f(x,y)=ax^3+bx^2y+cxy^3+dy^3\in U(R)$. & \hyperref[sec:3.4]{D\ref{sec:3.4}} \\ \hline $d\gamma$ & A Haar measure on ${\rm SL}_3({\mathbb R})$. & \hyperref[sec:3.24]{C\ref{sec:3.24}} \\ \hline\hline $F(n,X)$& The set of $n$-monogenized cubic fields $(K,\alphapha)$ with $H(K)<X$. & \hyperlink{fnx}{\S1.2} \\ \hline $F({\leq\! cH^\delta},X)$& The set of $n$-monogenized cubic fields $(K,\alphapha)$ such that $n\leq cH(K,\alphapha)^\delta$, $H(K,\alphapha)<X$, and $(K\otimes{\mathbb Q}_p,\alphapha)\in\Sigma_p$ for all primes~$p$. & \hyperref[thsubmon]{T\ref{thsubmon}} \\ \hline {$F_\Sigma(n,X)$}& The set of $n$-monogenized cubic fields $(K,\alphapha)$ with local conds.\ of $\Sigma$ and $H(K,\alphapha)<X$. & \hyperlink{fnx}{\S1.2} \\ \hline {$F_\Sigma({\leq\! cH^\delta},X)$} & The set of $n$-monogenized cubic fields $(K,\alphapha)$ with local conds.\ of $\Sigma$, $\,n\leq cH(K,\alphapha)^\delta$, and $H(K,\alphapha)<X$. & \hyperref[sec:3.1]{D\ref{sec:3.1}} \\ \hline\hline ${\mathbb F}F_A$ & A fundamental domain for the action of ${\rm SO}_A({\mathbb Z})$ on ${\rm SO}_A({\mathbb R})$. & \hyperref[sec:4.12]{C\ref{sec:4.12}} \\ \hline ${\mathbb F}F_{{\rm SL}_3}$ & A fundamental domain for the action of ${\rm SL}_3({\mathbb Z})$ on ${\rm SL}_3({\mathbb R})$. & \hyperref[sec:3.22]{C\ref{sec:3.22}} \\ \hline\hline {${\mathbb F}F_U^\pm$} & A fundamental domain for the action of ${M}({\mathbb Z})$ on $f\in U({\mathbb R})$ with $\pm\Delta(f)>0$ and ${\rm ind}(f)>0$. & {\hyperref[sec:3.5]{C\ref{sec:3.5}}} \\ \hline ${\mathbb F}F_U^\pm({\leq\! cH^\delta},X)$ & The set $\{f\in{\mathbb F}F_U^\pm$ with ${\rm ind}(f)\leq cH(f)^\delta$ and $H(f)<X\}$. & \hyperref[sec:3.5]{C\ref{sec:3.5}} \\ \hline ${\mathbb F}F_U^\pm(T;X)$ & The set $\{f\in{\mathbb F}F_U^\pm$ with $T\leq{\rm ind}(f)<2T$ and $X\leq H(f)<2X\}$.& \hyperref[sec:3.8]{N\ref{sec:3.8}} \\ \hline $\widetilde{{\mathbb F}F}_U^\pm(T;X)$ & $\kappa(T;X)$-fold cover of ${\mathbb F}F_U^\pm(T;X)$. & \hyperref[sec:3.20]{C\ref{sec:3.20}} \\ \hline\hline {${\mathbb F}F_V^{(i)}({\leq\! cH^\delta},X)$}& A fundamental set for the action of ${\rm SL}_3({\mathbb R})$ on the set of elements in $V({\mathbb R})^{(i)}$ with resolvent in ${\mathbb F}F_U^\pm({\leq\! cH^\delta},X)$. & \hyperref[sec:3.22]{C\ref{sec:3.22}} \\ \hline {${\mathbb F}F_V^{(i)}(T;X)$}& A fundamental set for the action of ${\rm SL}_3({\mathbb R})$ on the set of elements in $V({\mathbb R})^{(i)}$ with resolvent in ${\mathbb F}F_U^\pm(T;X)$ & \hyperref[sec:3.22]{C\ref{sec:3.22}} \\ \hline $\widetilde{{\mathbb F}F}_V^{(i)}(T;X)$ & $\kappa(T;X)$-fold cover of ${\mathbb F}F_V^{(i)}(T;X)$. & \hyperref[sec:3.22]{C\ref{sec:3.22}} \\ \hline\hline ${\mathbb F}F^{(i)}_{V_{A,b}}$& A fundamental set for the action of ${\rm SO}_A({\mathbb R})$ on $V_{A,b}({\mathbb R})^{(i)}$. & \hyperref[sec:4.8]{C\ref{sec:4.8}} \\ \hline ${\mathbb R}F{i}(X)$ & The set of elements in ${\mathbb R}F{i}$ having height less than $X$. & \hyperref[sec:4.8]{C\ref{sec:4.8}} \\ \hline\hline $g_A$ & An element in ${\rm SL}_3({\mathbb Q})$ such that $g_AAg_A^t=A_n$. & \hyperref[sec:4.12]{C\ref{sec:4.12}} \\ \hline ${\mathcal G}$ & A genus of good integer-coefficient ternary quadratic forms. 
& \hyperref[sec:4.28]{N\ref{sec:4.28}} \\ \hline $G(R)$ & The subgroup $\{(g_2,g_3)\in{\rm GL}_2(R)\times{\rm GL}_3(R):\det(g_2)\det(g_3)=1\}$. & \hyperref[sec:2.8]{N\ref{sec:2.8}} \\ \hline $(g_2,g_3)\cdot(A,B)$ & The action of $(g_2,g_3) \in G(R)$ on $(A,B) \in V(R)$. & \hyperref[sec:2.8]{N\ref{sec:2.8}} \\ \hline $\gamma \cdot f(x,y)$ & The twisted action of $\gamma\in {\rm GL}_2$ on a binary cubic form $f(x,y)$. & \hyperref[sec:2.3]{N\ref{sec:2.3}} \\ \hline {generic} & Corresponds to an order in an $S_3$-cubic field or an $S_4$-quartic field. & \hyperref[sec:3.10]{D\ref{sec:3.10}} \\ \hline {good at $p$} & The associated conic in ${\mathbb P}^2(\overline{{\mathbb F}}_p)$ is either smooth or a union of two lines. & \hyperref[sec:4.21]{D\ref{sec:4.21}} \\ \hline $H(\beta)$ & ${\rm max}_v\bigl\{|\beta'|_v\bigr\}$, where $\beta'-\beta\in{\mathbb Z}$ and the absolute trace ${\rm Tr}(\beta')\in\{0,1,2\}$. & \hyperref[sec:3.30]{N\ref{sec:3.30}} \\ \hline $H(f)$ & $a^{-2}{\rm max}\bigl\{|I(f)|^3,J(f)^2/4\}$, where $f\in U({\mathbb R}).$ & \hyperref[sec:3.4]{D\ref{sec:3.4}} \\ \hline $I(f)$ & $b^2-3ac$, where $f(x,y)=ax^3+bx^2y+cxy^3+dy^3\in U(R)$. & \hyperref[sec:3.4]{D\ref{sec:3.4}} \\ \hline ${\rm ind}(f)$ & $a$, where $f(x,y)=ax^3+bx^2y+cxy^3+dy^3\in U(R)$. & \hyperref[sec:3.4]{D\ref{sec:3.4}} \\ \hline $J(f)$ & $-2b^3+9abc-27a^2d$, where $f(x,y)=ax^3+bx^2y+cxy^3+dy^3\in U(R)$. & \hyperref[sec:3.4]{D\ref{sec:3.4}} \\ \hline $\kappa=\kappa({\Y;X})$ & $\bigl\lfloor X^{1/6}/{T}^{2/3}\bigr\rfloor$. & \hyperref[sec:3.20]{C\ref{sec:3.20}} \\ \hline $\kappa_p := \kappa_p(A)$ & (For $A$ having ${\mathbb F}_p$-rank two.) $1$ if residually hyperbolic, $-1$ otherwise. & \hyperref[sec:4.21]{D\ref{sec:4.21}} \\ \hline $\kappa_p({\mathcal G})$ & The $\kappa$-invariant $\kappa_p(A)$ for any $A \in {\mathcal G}$. & \hyperref[sec:4.28]{N\ref{sec:4.28}} \\ \hline $L^{\rm gen}$ & The set of generic elements in $L$. & \hyperref[sec:3.10]{D\ref{sec:3.10}} \\ \hline $L^\pm$ & The set of elements $f\in L$ with $\pm\Delta(f)>0$. & \hyperref[sec:3.8]{N\ref{sec:3.8}} \\ \hline $L^{(i)}$ & $L\cap V({\mathbb R})^{(i)}$. & \hyperref[sec:3.19]{N\ref{sec:3.19}} \\ \hline {large} & $S := (S_p)_p$ is large if for all but finitely many~$p$, the set $S_p$ contains all $f\in U({\mathbb Z}_p)$ (resp. $(A,B) \in V({\mathbb Z}_p)$) with $p^2\nmid\Delta(f)$ (resp. $p^2\nmid\Delta(A,B)$). & \hyperref[sec:3.1]{D\ref{sec:3.1}}, \hyperref[sec:4.15]{D\ref{sec:4.15}} \\ \hline ${M}(R)$ & The group of $2 \times 2$ lower triangular unipotent matrices over $R$. & \hyperref[sec:2.6]{N\ref{sec:2.6}} \\ \hline $\mathrm{Mass}_{p}(f)$ & The local mass of binary cubic form $f\in U({\mathbb Z}_p)$ at the prime $p$. & \hyperref[sec:3.38]{D\ref{sec:3.38}} \\ \hline {$\mathrm{Mass}_p^\pm(f)$} & The local $\kappa$-mass of the binary cubic form $f\in U({\mathbb Z}_p)$ at the prime $p$. & \hyperref[sec:4.23]{D\ref{sec:4.23}} \\ \hline $m_i$ & $m_0=m_2=m_{2\pm}=m_{2\#}=4;\quad m_1=2.$ & \hyperref[sec:3.22]{C\ref{sec:3.22}} \\ \hline $N'$ & A compact subset of unipotent lower triangular $3 \times 3$ matrices over ${\mathbb R}$. & \hyperref[sec:4.12]{C\ref{sec:4.12}} \\ \hline\hline {$N_3^\pm(n,X)$} & Number of $n$-monogenized $S_3$-orders $(C,\alphapha)$ with $\pm\Delta(C)>0$ and $H(C,\alphapha)<X$. & \hyperref[thcubringcount]{T\ref{thcubringcount}} \\ \hline $N_3^\pm({\leq\! cH^\delta},X)$ & Number of $n$-monogenized $S_3$-orders $(C,\alphapha)$ with $\pm\Delta(C)>0$, \linebreak $n\leq cH(C,\alphapha)^\delta$, and $H(C,\alphapha)<X$. 
& \hyperref[thsubcount]{T\ref{thsubcount}} \\ \hline {$N_4^{(i)}(n,X)$} & Number of pairs $(Q,(C,\alphapha))$ where $Q$ is an $S_4$-order, $Q\otimes{\mathbb R}\cong {\mathbb R}^{4-2i}\times{\mathbb C}^i$, and $(C,\alphapha)$ is an $n$-monogenized cubic resolvent of $Q$ with $H(C,\alphapha)\!<\!X$.& \hyperref[thqnmccountnr]{T\ref{thqnmccountnr}} \\ \hline {$N_4^{(i)}({\leq\! cH^\delta},X)$} & Number of pairs $(Q,(C,\alphapha))$ where $Q$ is an $S_4$-order, $Q\otimes{\mathbb R}\cong {\mathbb R}^{4-2i}\times{\mathbb C}^i$, \linebreak and $(C,\alphapha)$ is an $n$-monogenized cubic resolvent of $Q$ with \linebreak $n\leq cH(C,\alphapha)^\delta$ and $H(C,\alphapha)<X$. & \hyperref[thqsubcount]{T\ref{thqsubcount}} \\ \hline\hline {$N^\pm(L;n,X)$} & The number of generic ${M}({\mathbb Z})$-orbits $f\in L^\pm$ with $H(f)<X$. & \hyperref[sec:4.2]{N\ref{sec:4.2}} \\ \hline $N^{(i)}(L;A,X)$& The number of generic ${\rm SO}_A({\mathbb Z})$-orbits $v\in L\cap V_A({\mathbb Z})^{(i)}$ with $H(v)<X$. & \hyperref[N:4.11]{N\ref{N:4.11}} \\ \hline {$N^{(i)}(L;T;X)$} & The number of generic ${M}({\mathbb Z})\times{\rm SL}_3({\mathbb Z})$-orbits $v\in L^{(i)}$ with \linebreak $T\leq{\rm ind}(v)<2T$ and $X\leq H(v)<2X$. & \hyperref[N:3.25]{N\ref{N:3.25}} \\ \hline\hline {nowhere \linebreak overramified} & Not overramified at any finite or infinite place. & \hyperref[sec:2.12]{D\ref{sec:2.12}} \\ \hline $\nu(L)$ & The volume of the closure of $L$ in $U(\widehat{{\mathbb Z}})$. & \hyperref[sec:3.8]{N\ref{sec:3.8}} \\ \hline {overramified at $p$} & The prime ideal $p\mathbb{Z}$ factors as $P^4$, $P^2$, or $P_1^2 P_3^2$. & \hyperref[sec:2.12]{D\ref{sec:2.12}} \\ \hline $\mathcal{Q}_n(R)$ & The set of ternary quadratic forms with coefficients in $R$ and $4\det=n$. & \hyperref[sec:4.5]{N\ref{sec:4.5}} \\ \hline {${\mathbb R}R(K_3)$} & The set of \'etale non-overramified quartic extensions of ${\mathbb Q}_p$ with cubic resolvent $K_3$. & \hyperref[sec:4.19]{N\ref{sec:4.19}} \\ \hline {${\mathbb R}R^+(K_3)$, ${\mathbb R}R^-(K_3)$} & The elements of ${\mathbb R}R(K_3)$ that are respectively split or inert in the unramified quadratic extension $K_6/K_3$ corresponding to~$K_4$. & \hyperref[sec:4.19]{N\ref{sec:4.19}} \\ \hline ${\mathbb R}es$ & The resolvent map $V(R)\to U(R)$ defined by $(A,B)\to 4\det(Ax-By)$. & \hyperref[sec:2.8]{N\ref{sec:2.8}} \\ \hline {residually hyperbolic at $p$} & The associated conic in ${\mathbb P}^2(\overline{{\mathbb F}}_p)$ is a union of two lines defined over ${\mathbb F}_p$. & \hyperref[sec:4.21]{D\ref{sec:4.21}} \\ \hline {residually nonhyperbolic at $p$} & The \!associated \!conic \!in \!${\mathbb P}^2(\overline{{\mathbb F}}_p)$ \!is \!a \!union \!of \!two \!lines \!not \!defined over~${\mathbb F}_p$. & \hyperref[sec:4.21]{D\ref{sec:4.21}} \\ \hline $S:=(S_p)_p$ & A collection of local cubic specifications $S_p$ indexed by primes $p$. & \hyperref[sec:3.1]{D\ref{sec:3.1}}, \hyperref[sec:4.15]{D\ref{sec:4.15}} \\ \hline $\sigma_A$ & The map $V_{A,b}(F) \to V_{A_n,b}(F)$ given by $(A,B)\mapsto (A_n,g_ABg_A^t)$. & \hyperref[sec:4.12]{C\ref{sec:4.12}} \\ \hline $\sigma_G$ & The map ${\rm SO}_A(F) \to {\rm SO}_{A_n}(F)$ given by $g\mapsto g_Agg_A^{-1}$. & \hyperref[sec:4.12]{C\ref{sec:4.12}} \\ \hline $\Sigma$ & $\Sigma:=(\Sigma_p)_p$. 
& \hyperref[sec:3.1]{D\ref{sec:3.1}} \\ \hline $\Sigma_p$ & A set of pairs $(\mathcal K_p,\alphapha_p)$ where $\mathcal K_p$ is an \'etale cubic extension of ${\mathbb Q}_p$ with ring of integers ${\mathcal O}_p$, $\alphapha_p$ is an element of ${\mathcal O}_p$ that is primitive in ${\mathcal O}_p/{\mathbb Z}_p$, and the pair $({\mathcal O}_p,\alphapha_p)$ corresponds to some $f(x,y)\in S_p$. & \hyperref[sec:3.1]{D\ref{sec:3.1}} \\ \hline ${\rm sk}(C)$ & The skewness of the cubic ring $C$. & \hyperref[sec:3.31]{D\ref{sec:3.31}} \\ \hline $T'$ & A subset of the diagonal $3 \times 3$ matrices over ${\mathbb R}$ with $\det=1$. & \hyperref[sec:4.12]{C\ref{sec:4.12}} \\ \hline $T_{\rm even}$, $T_{\rm odd}$ & The set of primes $p$ dividing $n$ to even or odd order, respectively. & \hyperref[sec:4.28]{N\ref{sec:4.28}} \\ \hline\hline $U(R)$ & The set of binary cubic forms $f(x,y)$ over $R$. & \hyperref[sec:2.3]{N\ref{sec:2.3}} \\ \hline $U_n(R)$ & The set of binary cubic forms $f(x,y)\in U(R)$ with $x^3$-coefficient $n$.& \hyperref[sec:2.6]{N\ref{sec:2.6}} \\ \hline $U_{n,b}(R)$ & The set of binary cubic forms $f(x,y)\in U_n(R)$ with $x^2y$-coefficient~$b$.& \hyperref[sec:4.2]{N\ref{sec:4.2}} \\ \hline $U({\mathbb Z})_S$ & The set of elements in $U({\mathbb Z})$ satisfying the local prescriptions of $S$.& \hyperref[sec:3.26]{N\ref{sec:3.26}} \\ \hline $U_{n,b}({\mathbb Z})_S$ & The set of elements in $U_{n,b}({\mathbb Z})$ satisfying the local prescriptions of $S$. & \hyperref[sec:4.15]{D\ref{sec:4.15}} \\ \hline\hline $V(R)$ & The set of pairs of ternary quadratic forms $(A,B)$ over $R$. & \hyperref[sec:2.8]{N\ref{sec:2.8}} \\ \hline $V_A(R)$ & The subset of $(A,B)\in V(R)$ with fixed $A$. & \hyperref[sec:4.5]{N\ref{sec:4.5}} \\ \hline $V_{A,b}(R)$ & The subset of $(A,B)\in V_A(R)$ with resolvent in $U_{n,b}(R)$ for some $n$. & \hyperref[sec:4.5]{N\ref{sec:4.5}} \\ \hline $V({\mathbb Z})_S$ & The set of elements $v\in V({\mathbb Z})$ such that ${\mathbb R}es(v)\in U({\mathbb Z})_S$. & \hyperref[sec:3.26]{N\ref{sec:3.26}} \\ \hline $V_{A,b}({\mathbb Z})_S$ & The set of elements $v\in V_{A,b}({\mathbb Z})$ such that ${\mathbb R}es(v)\in U_{n,b}({\mathbb Z})_S$. & \hyperref[sec:4.15]{D\ref{sec:4.15}} \\ \hline\hline $V({\mathbb R})_+$ & The set of elements $(A, B) \in V({\mathbb R})$ with $\det(A) > 0$. & \hyperref[sec:3.14]{D\ref{sec:3.14}} \\ \hline $V({\mathbb Z})_+$ & $V({\mathbb Z})\cap V({\mathbb R})_+$. & \hyperref[sec:3.13]{N\ref{sec:3.13}} \\ \hline $V({\mathbb R})^{(i)}$ & A $G({\mathbb R})$-orbit in $V({\mathbb R})_+$ specified by the index $i\in\{0,1,2,2\#,2+,2-\}$. & \hyperref[sec:3.14]{D\ref{sec:3.14}}, \hyperref[sec:3.19]{N\ref{sec:3.19}} \\ \hline $V({\mathbb Z})^{(i)}$ & $V({\mathbb Z})\cap V({\mathbb R})^{(i)}$. & \hyperref[sec:3.19]{N\ref{sec:3.19}} \\ \hline\hline ${\mathcal W}_p(U)$, \,${\mathcal W}_p(V)$ & Subsets of $U({\mathbb Z})$, \!$V({\mathbb Z})$ where $p^2\mid\Delta$. & \hyperref[sec:3.28]{N\ref{sec:3.28}} \\ \hline ${\mathcal W}_p^{(i)}(U)$, \!\!${\mathcal W}_p^{(i)}(V)$ & Subsets of $U({\mathbb Z})$, \!$V({\mathbb Z})$ where $p^2\mid \Delta$ for ``mod $p^i$ reasons". & \hyperref[sec:3.28]{N\ref{sec:3.28}} \\ \hline \end{longtable} \end{center} \end{document}
\begin{document} \title{Generally relativistic Duffin-Kemmer formalism and the behaviour of a quantum-mechanical particle of spin $1$ in the Abelian monopole field} \author {V.M.Red'kov \\ Institute of Physics, Belarus Academy of Sciences\\ F Skoryna Avenue 68, Minsk 72, Republic of Belarus\\ e-mail: [email protected]} \maketitle \begin{abstract} It is shown that the~manner of introducing the~interaction between a~spin $1$ particle and an external classical gravitational field can be brought into complete agreement with~the approach that was first developed for a~spin $1/2$ particle by Tetrode, Weyl, Fock, and Ivanenko. In this way a generally relativistic Duffin-Kemmer equation is constructed. The manner of extending the~flat space Dirac equation to the general relativity case indicates clearly that the~Lorentz group underlies both theories equally. In other words, the~Lorentz group retains its importance and significance when the Minkowski space model is changed to an~arbitrary curved space-time. In contrast, in the usual way of generalizing the~Proca formulation we automatically lose any relation to the~Lorentz group, although the~definition itself of a~spin $1$ particle as an~elementary object was based on just this group. Such a~sensitivity of gravity to the~fermion-boson division would appear a rather strange and unattractive asymmetry, open to criticism. Moreover, just this feature has brought about plenty of speculation on this matter. In any case, this peculiarity of the particle-gravity interaction is recorded in almost every handbook. In the present paper, on the basis of the Duffin-Kemmer formalism developed, the problem of a vector particle in the~Abelian monopole potential is considered. The contents are: 1. On Duffin-Kemmer formalism in the Riemannian space-time; 2. On wave functions of a spin 1 particle in the monopole field; 3. On connection with the Proca approach; 4. Discrete symmetry. \end{abstract} \subsection*{1. On Duffin-Kemmer formalism in the Riemannian space-time} A generally accepted point of view is that the description of the interaction between a quantum-mechanical particle and an~external classical gravitational field looks basically different according to whether a fermion or a boson is meant. Thus the starting flat space (Dirac) equation $$ (\; i \gamma ^{a} \; \partial _{a} \; - \; m \;) \; \Psi (x) \; = \; 0 $$ \noindent has, as is well known, to be generalized through the~use of the~tetrad formalism according to the~Tetrode-Weyl-Fock-Ivanenko (TWFI) procedure [1-9]. With regard to vector bosons [10-33], a~totally different approach is generally used: it consists in formally changing all the tensors involved and the usual derivative $\partial_{a}$ into their general relativity counterparts.
For example, in case of a~vector (spin 1) particle, the~flat space Proca equations $$ ( \partial _{a} \; \Psi _{b} \; - \; \partial _{b} \; \Psi _{a}) = \;m \qquad \Psi _{ab} \;\; , \;\; \partial ^{b} \; \Psi _{ab} = m \; \Psi _{a} \eqno(1.1a) $$ \noindent being subjected to the~formal change $$ \partial _{a} \; \rightarrow \; \nabla _{\alpha } \; , \qquad \Psi _{a} \; \rightarrow \; \Psi _{\alpha } ,\qquad \Psi _{ab} \; \rightarrow \; \Psi _{\alpha \beta } \eqno(1.1b) $$ \noindent results in $$ (\nabla _{\alpha }\; \Psi _{\beta} - \nabla _{\beta }\; \Psi_{\alpha }) = m \; \Psi _{\alpha \beta } \; , \qquad \nabla ^{\beta }\; \Psi _{\alpha \beta } = m \; \Psi _{\alpha } \eqno(1.1c) $$ \noindent However, it is known already for a~long time that all particles of the~theory, irrespective of whether bosons or fermions are meant, obey in a curved background space-time a~unique TWFI approach (see, for example, in [8,9]). But admittedly, in the~common literature, they do not use consistently this universal formalism. Although the~widely spread method of light tetrad or Newman-Penrose formalism [34,35]) is certainly a~renewed and modified variant of the TWFI above mentioned approach, the~Newman-Penrose method was developed in accordance with its own special intrinsic requirements and with no clearly visible relations to the conventional TWFI approach (such a~correlation is potentially implied rather than observed really). As a matter of fact, a potentially existing (general relativity) Duffin-Kemmmer $(D-K)$ equation for a spin 1 particle, apparently, is not widely adopted. But, as evidenced by many examples, sometimes it is desirable if not necessary, to depart from constructions of common use in order to arrive at a~simpler or more suitable one for a~particular situation. Bellow, we develop some aspects of this generalized $D-K$ theory, that are essential to real practical calculations (I adhere an unpublished work of the three authors [...]). This method will be successfully applied further in Sec.2 to a~spin $1$ particle-monopole problem. So, let us take up considering this matter in more detail. We start from a flat space equation in its matrix (Duffin-Kemmer) form [10]: $$ (\; i\; \beta ^{a} \; \partial_{a} \; - \; m \; )\; \Phi (x) = 0 \eqno(1.2a) $$ \noindent where $\Phi (x)$ is a ten component column-function; $\beta ^{a}$ is $(10 \times 10)$ -matrices; in the Cartesian representation they are $$ \Phi = ( \Phi _{0} \; \Phi _{1} \; \Phi _{2} \Phi _{3} ; \; \Phi _{01} \; \Phi _{02} \; \Phi _{03} ; \; \Phi _{23}\; \Phi _{31} \; \Phi _{12} ) \;\; , $$ $$ \beta ^{a} = \pmatrix{0 & \kappa ^{a} \cr \lambda ^{a} & 0} = ( \kappa ^{a} \oplus \lambda ^{a} ) \;, \;\; ( \kappa ^{a})_{j} ^{[mn]} \; = \; - i ( \delta ^{m}_{j} \; g^{na} \; - \; \delta ^{n}_{j} \; g^{ma} )\;\; , $$ $$ ( \lambda ^{a})^{j}_{[mn]} \; = \; - i ( \delta ^{a}_{m} \; \delta ^{j}_{n} - \delta ^{a}_{n} \;\delta ^{j}_{m} ) \; = - i \delta ^{aj}_{mn} \eqno(1.2b) $$ \noindent $( g^{na} ) = \hbox{diag}( +1,-1,-1,-1 )$ is the Minkowski metric tensor; the sectional matrix structure introduced here will be used bellow. 
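Since all the blocks in (1.2b) are elementary, the algebraic properties listed in the next paragraph can also be confirmed by a short machine computation. The following sketch is only an illustrative check (it assumes Python with {\tt numpy}, and uses the ordering $[01],[02],[03],[23],[31],[12]$ of antisymmetric index pairs fixed above); it builds the ten-dimensional matrices $\beta^{a}$ from the blocks $\kappa^{a}$, $\lambda^{a}$ and verifies the trilinear Duffin-Kemmer relation quoted below.
\begin{verbatim}
import numpy as np
import itertools

# Minkowski metric diag(+1,-1,-1,-1) and the six antisymmetric index pairs,
# ordered as in (1.2b): [01], [02], [03], [23], [31], [12]
g = np.diag([1.0, -1.0, -1.0, -1.0])
pairs = [(0, 1), (0, 2), (0, 3), (2, 3), (3, 1), (1, 2)]

def beta(a):
    """10x10 Duffin-Kemmer matrix beta^a in the basis of (1.2b)."""
    kappa = np.zeros((4, 6), dtype=complex)   # (kappa^a)_j^{[mn]}
    lam = np.zeros((6, 4), dtype=complex)     # (lambda^a)^j_{[mn]}
    for j in range(4):
        for col, (m, n) in enumerate(pairs):
            kappa[j, col] = -1j * ((m == j) * g[n, a] - (n == j) * g[m, a])
    for row, (m, n) in enumerate(pairs):
        for j in range(4):
            lam[row, j] = -1j * ((a == m) * (j == n) - (a == n) * (j == m))
    B = np.zeros((10, 10), dtype=complex)
    B[:4, 4:] = kappa
    B[4:, :4] = lam
    return B

B = [beta(a) for a in range(4)]

# Trilinear Duffin-Kemmer relation:
#   beta^c beta^a beta^b + beta^b beta^a beta^c = g^{ab} beta^c + g^{ac} beta^b
for a, b, c in itertools.product(range(4), repeat=3):
    lhs = B[c] @ B[a] @ B[b] + B[b] @ B[a] @ B[c]
    rhs = g[a, b] * B[c] + g[a, c] * B[b]
    assert np.allclose(lhs, rhs)
print("Duffin-Kemmer algebra verified for all index triples.")
\end{verbatim}
\noindent The same construction of $\beta^{a}$ is reused in the numerical checks given further on.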
By using this representation (5.2b), we can easily verify the major properties of $\beta ^{a}$: $$ \beta ^{c} \; \beta ^{a} \; \beta ^{b} = \left( \begin{array}{cc} 0 & \kappa ^{c} \;\lambda ^{a}\;\kappa ^{b} \\ \lambda^{c} \; \kappa ^{a} \; \lambda ^{b} & 0 \end{array} \right ) , \qquad (\lambda ^{c} \; \kappa ^{a} \; \lambda ^{b}) ^{j}_{[mn]} = i\; [ \delta ^{cb}_{mn} \; g^{aj} \; - \; \delta ^{cj}_{mn} \; g^{ab} ] \; , $$ $$ (\kappa ^{c} \; \lambda ^{a} \; \kappa ^{b})^{[mn]}_{j} = i \; [ \delta ^{a}_{j}\; (g^{cm} \; g^{bn} \; - \; g^{cn}\; g^{bm} ) \; + \; \;g^{ac}\; ( \delta ^{n}_{j} \; g^{mb} \; - \; \delta ^{m}_{j}\; g^{nb} ) ] $$ \noindent and then $$ ( \beta ^{c} \; \beta ^{a} \; \beta ^{b} \; + \; \beta ^{b} \; \beta ^{a} \; \beta ^{c} ) = ( \beta ^{c} \; g^{ab} \; + \; \beta ^{b} g^{ac} ) , $$ $$ [\beta ^{c} , j^{ab} ] = ( g^{ca} \;\beta^{b} \; - \; g^{cb} \; \beta ^{a} ) , \qquad j^{ab} = ( \beta ^{a} \; \beta ^{b} \; - \; \beta ^{b} \; \beta ^{a} ) , $$ $$ [j^{mn}, j^{ab}] = ( g^{na} \; j^{mb} \; - \; g^{nb} \; j^{ma} ) \; - \; ( g^{ma} \;j^{nb}\; - \; g^{mb} \; j^{na} )) $$ To follow the TWFI procedure, the equation (1.2a) must be extended to a Rimannian space-time (with a metric $g_{\alpha \beta }(x)$ and its concomitant tetrad $e^{\alpha }_{(a)}(x) )$ according to $$ [ \; i \; \beta ^{\alpha }(x)\; (\partial_{\alpha} \; + \; B_{\alpha }(x) ) \; - m \; ] \;\Phi (x) = 0 \eqno(1.3) $$ \noindent where $$ \beta ^{\alpha }(x) = \beta ^{a} e ^{\alpha }_{(a)}(x) \; , \qquad B_{\alpha }(x) = {1 \over 2}\; j^{ab} e ^{\beta }_{(a)}\nabla _{\alpha }( e_{(b)\beta }) , \qquad j^{ab} = ( \beta ^{a} \beta ^{b} - \beta ^{b} \beta ^{a}) . $$ \noindent This equation contains the tetrad $e^{\alpha }_{(a)}(x)$ explicitly. Therefore, there must exist a~possibility to demonstrate the~equivalence of any variants of this equation associated with various tetrads: $$ e^{\alpha }_{(a)}(x) \;\; \hbox{and} \;\; e'^{\alpha }_{(b)}(x)\; = \; L^{b}_{a} (x) \; e^{\alpha }_{(b)}(x) \eqno(1.4a) $$ \noindent $(L(x)$ is an arbitrary local Lorentz transformation). We will show that two such equations $$ [\; i\; \beta ^{\alpha }(x) \; (\partial_{\alpha} \; + \; B_{\alpha }(x)) \; - \; m ] \; \Phi (x) = 0 \; , \qquad [\;i\; \beta'^{\alpha }(x)\; (\partial_{\alpha} \;+\; B'_{\alpha }(x)) \; - \; m\; ] \; \Phi'(x) = 0 \eqno(1.4b) $$ \noindent generating in tetrads $e^{\alpha }_{(a)}(x)$ and $e'^{\alpha }_{(b)}(x)$, respectively, can be converted into each other through the transformation $\Phi (x) = S(x)\; \Phi (x)$ : $$ \left ( \begin{array}{c} \phi'_{a}(x) \\ \phi'_{[ab]}(x) \end{array} \right ) = \left ( \begin{array}{cc} L_{a}^{\;\;l} & 0 \\ 0 & L_{a}^{\;\;m} L_{b}^{\;\;n} \end{array} \right ) \;\; \left ( \begin{array}{c} \phi_{l}(x) \\ \phi_{[mn]}(x) \end{array} \right ) \eqno(1.4c) $$ \noindent here the $L(x)$ is the same as in the~relation (1.4a). So, starting from the first equation in (1.4b), let us obtain an equation for $\Phi'(x)$. Allowing for $\Phi(x) = S(x)\; \Phi (x)$, we get $$ [\; i\; S \; \beta ^{\alpha } \; S^{-1} (\partial _{\alpha} \; + \; S \; B_{\alpha }\; S^{-1} \; + \; S \; \partial_{\alpha}\; S^{-1}) \; - \; m \;] \; \Phi'(x) = 0 $$ \noindent A task that faces us now is of verifying the relationships $$ S(x) \; \beta ^{\alpha }(x) \; S^{-1}(x) = \beta'^{\alpha}(x) \;\; , \eqno(1.5a) $$ $$ [ S(x) \; B(x)\; S^{-1}(x) \; + \; S(x) \; \partial_{\alpha}\; S^{-1}(x)\; ] = 0 \; . 
\eqno(1.5b) $$ \noindent The first one can be rewritten as $$ S(x) \; \beta ^{a} \; e ^{\alpha }_{(a)}(x) \; S^{-1}(x) \; = \beta ^{b} \; e'^{\alpha }_{(b)}(x) $$ \noindent from where, taking into account the relation (1.4a) between tetrads, we come to $$ S(x) \;\beta ^{a} \; S^{-1}(x) \; = \; \beta ^{b} \; L^{a}_{b}(x) \; . \eqno(1.5c) $$ \noindent The latter condition is of great familiarity in $D-K$ theory; one can verify it through the use of the sectional structure of $\beta ^{a}$, which provides two sub-relations: $$ L(x) \; \kappa ^{a} \; [\; L^{-1}(x) \otimes L(x)^{-1}\;] \; = \;\kappa ^{b} \; L^{\;\;a}_{b}(x) \; , \qquad [\; L(x) \otimes L(x)\; ] \; \lambda ^{a} \; L(x)^{-1} \; = \; \lambda ^{b} \; L^{\;\;a}_{b}(x) \; . \eqno(1.5d) $$ \noindent Those latter will be satisfied identically, after we take explicit form of $\kappa ^{a}$ and $\lambda ^{a}$ into account and also allow for the $L^{\;\;b}_{a}$ being pseudo orthogonal: $g^{al} \; (L^{-1})^{\;\;k}_{l}(x) = g^{kb}\; L^{\;\;a}_{b}(x)$. Now, let us pass to the proof of the relationship (1.5b). By using the determining relation for $D-K$ connection $$ B_{\alpha }(x) = {1 \over 2} \; j ^{ab} \; e^{\beta }_{(a)} \; \nabla _{\alpha } \; ( e_{(b)\beta }) , \qquad j^{ab} \; = (\beta ^{a} \; \beta ^{b} \; - \; \beta ^{b} \; \beta ^{a}) $$ \noindent and also the formula (1.5c), we get $$ S(x)\; \partial_{\alpha} \; S^{-1}(x) \; = \; [ \; B'_{\alpha }(x) \; - \; {1 \over 2} \; j^{mn} L^{\;\;n}_{m}(x)\; g_{ab} \; \partial _{\alpha}\; L^{\;\;b}_{n}(x) \;] \; , $$ \noindent In a sequence, the (1.5b) results in $$ S(x) \; \partial_{\alpha} \; S^{-1}(x) = \; {1\over2}\; L^{\;\;a}_{m}(x)\; g_{ab}\; (\; \partial_{\alpha } \;L^{\;\;b}_{n}(x)\; ) \; . $$ \noindent The latter condition is an identity: this is readily verified through the~use of sectional structure of all involved matrices. Thus, the equations from (1.4b) are translated into each other; thereby, they manifest a~gauge symmetry under local Lorentz transformations (in a~complete analogy with more familiar Dirac particle case [1-9]). In the~same time, the~wave function from this equation represents scalar quantity relative to general coordinate transformations: that is, if $\; x^{\alpha } \; \rightarrow \; x'^{\alpha } = f^{\alpha }(x)$ , then $\; \Phi'(x) = \Phi (x)$. It remains to demonstrate that this $D-K$ formulation can be inverted into the Proca formalism in terms of general relativity tensors. 
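Before carrying out this inversion, we remark that the key relation (1.5c) can also be spot-checked numerically. The sketch below is again only an illustration (Python with {\tt numpy} is assumed, the construction of $\beta^{a}$ repeats the one given after (1.2b), and the particular boost and rotation are arbitrary sample choices); the pseudo-orthogonality of $L$ is used here, exactly as in the verification of (1.5d).
\begin{verbatim}
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])
pairs = [(0, 1), (0, 2), (0, 3), (2, 3), (3, 1), (1, 2)]

def beta(a):
    # same 10x10 matrices as in the sketch after (1.2b)
    B = np.zeros((10, 10), dtype=complex)
    for j in range(4):
        for col, (m, n) in enumerate(pairs):
            B[j, 4 + col] = -1j * ((m == j) * g[n, a] - (n == j) * g[m, a])     # kappa^a
    for row, (m, n) in enumerate(pairs):
        for j in range(4):
            B[4 + row, j] = -1j * ((a == m) * (j == n) - (a == n) * (j == m))   # lambda^a
    return B

B = [beta(a) for a in range(4)]

# A sample Lorentz transformation: boost along the third axis times a spatial rotation
eta, chi = 0.7, 0.3
boost = np.eye(4); boost[0, 0] = boost[3, 3] = np.cosh(eta); boost[0, 3] = boost[3, 0] = np.sinh(eta)
rot = np.eye(4); rot[1, 1] = rot[2, 2] = np.cos(chi); rot[1, 2] = -np.sin(chi); rot[2, 1] = np.sin(chi)
L = boost @ rot
assert np.allclose(L.T @ g @ L, g)            # pseudo-orthogonality of L

# S = L on the vector block and the induced action on the antisymmetric-tensor block, cf. (1.4c)
S = np.zeros((10, 10))
S[:4, :4] = L
for p, (m, n) in enumerate(pairs):
    for q, (r, s) in enumerate(pairs):
        S[4 + p, 4 + q] = L[m, r] * L[n, s] - L[m, s] * L[n, r]

Sinv = np.linalg.inv(S)
for a in range(4):                            # relation (1.5c): S beta^a S^{-1} = beta^b L_b^a
    assert np.allclose(S @ B[a] @ Sinv, sum(L[b, a] * B[b] for b in range(4)))
print("Relation (1.5c) holds for this sample L.")
\end{verbatim}
\noindent Any other pseudo-orthogonal matrix may be substituted for $L$ here.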
To this end, as a~first step, let us allow for the~sectional structure of $\beta ^{a}, J^{ab}$ and $\Phi (x)$ in the~$D-K$ equation; then instead of (1.3) we get $$ i \; [\; \lambda^{c} \; e^{\alpha }_{(c)} \; (\; \partial_{\alpha} \;+ \; \kappa ^{a} \; \lambda ^{b} \; e^{\beta }_{(a)} \; \nabla_{\alpha }\; e_{(b)\beta }\; ) \; ]^{\;\;\;\;\;\;l}_{[mn]}\; \Phi _{l} = m\; \Phi _{[mn]} \; , $$ $$ i \; [\kappa ^{c} \; e^{\alpha }_{(c)} \; ( \;\partial_{\alpha} \; + \; \lambda ^{a} \; \kappa ^{b}\; e^{\beta }_{(a)} \; \nabla _{\alpha } \; e_{(b)\beta }\; ) \; ]^{[\;\;mn]}_{l} \; \Phi_{[mn]} = m \Phi _{l} \eqno(1.6a) $$ \noindent which, after taking into account the~explicit form of $( \lambda ^{c},\; \lambda ^{c} \; \kappa ^{a} \; \lambda ^{b}$ , $\kappa ^{c} ,\; \kappa ^{c} \; \lambda ^{a} \; \kappa^{b} )$, lead to $$ [\; (e_{(a)}^{\alpha} \; \partial_{\alpha} \; \Phi _{b} \; - \; e^{\alpha }_{(b)} \; \partial_{\alpha}\; \Phi _{a}) \; + \; ( \gamma ^{c}_{\;\;ab} - \gamma ^{c}_{\;\;ba} ) \; \Phi _{c}\; ] = m \; \Phi _{ab}\; , $$ $$ [\; e^{(b)\alpha } \;\partial_{\alpha } \; \Phi _{ab} \; + \; \gamma ^{nb} _{\;\;\;\;n} \Phi _{ab} + \gamma ^{\;\;mn}_{a} \Phi _{mn}\; ] = m \; \Phi _{a} \; . \eqno(1.6b) $$ \noindent In turn, these will represent just the~Proca equations (1.1c) after they are rewritten in terms of tetrad components according to $$ \Phi _{a}(x) \;= \; e^{\alpha }_{(a)} (x) \; \Phi _{\alpha }(x) , \qquad \Phi _{ab}(x)= e^{\alpha }_{(a)} (x) \; e^{\beta }_{(b)}(x) \; \Phi_{\alpha \beta }(x) \eqno(1.7) $$ \noindent where the symbol $\gamma _{abc}(x)$ designates the Ricci rotation coefficients: $$ \gamma _{abc}(x) \; = \; - \; (\nabla _{\beta } \; e_{(a)\alpha }) \; e^{\alpha }_{(b)} \; e^{\beta }_{(c)} \; . $$ So, as evidenced by the~above, the~way of introducing the~interaction between a~spin $1$ particle and an external classical gravitational field can be successfully unified with~the approach first developed for a~spin $1/2$ particle by Tetrode, Weyl, Fock, and Ivanenko. One should attach great significance to this possibility of unification; indeed, its absence would be a~very strange fact, since it touches upon concepts of great physical significance. Let us discuss this matter in more detail. The way the~flat space Dirac equation is extended to the general relativity case indicates clearly that the~Lorentz group underlies both theories equally. In other words, the~Lorentz group retains its importance and significance when passing from the Minkowski space model to an~arbitrary curved space-time. In contrast to this, when generalizing the~Proca formulation we automatically lose any relation to the~Lorentz group, although the~very definition of a~spin $1$ particle as an~elementary object was based on just this group. Such a~sensitivity of gravity to the~fermion-boson division would appear to be a rather strange and unattractive asymmetry, open to criticism. Moreover, just this feature has given rise to plenty of speculation on the matter. In any case, this peculiarity of the particle-gravity interaction is recorded in almost every handbook. To my mind, the~very possibility of rewriting the~tetrad-based Duffin-Kemmer equation in terms of general relativity tensors looks very surprising indeed. \subsection*{2. On wave functions of a spin $1$ particle in the monopole field} Now, on the basis of the Duffin-Kemmer (D-K) formalism, let us consider the problem of a vector particle in the~Abelian monopole potential. 
The starting D-K equation in the~spherical tetrad takes the~form $$ \left [\;i\;\beta ^{0} \; \partial _{t} \; + \; i \; ( \beta ^{3} \; \partial _{r}\; + {1 \over r} \;(\beta ^{1} \; j^{31} + \beta ^{2} \; j^{32}) ) \;+ \; {1 \over r} \; \Sigma ^{\kappa }_{\theta ,\phi } \; - \; {mc \over \hbar} \;\right ] \; \Phi (x) = 0 \eqno(2.1a) $$ \noindent where $$ \Sigma ^{\kappa }_{\theta ,\phi } \; = \; \left [\; i\; \beta ^{1} \; \partial _{\theta } \; + \; \beta ^{2} \; {i\; \partial \; + \; (i\; j^{12} - \kappa ) \cos \theta \over \sin \theta} \right ] \eqno(2.1b) $$ \noindent These relations are very close to analogous ones used in the~electronic case [36] ; variations concern only the~explicit expressions for matrices: $\gamma ^{a} , \; \sigma ^{ab} $ are to be changed into $\; \beta ^{a} , \; J^{ab}$. Below, we will use the~cyclic basis for Duffin-Kemmer matrices: $$ \beta^{0} = \left ( \begin{array}{cccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & +i & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & +i & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & +i & 0 & 0 & 0 \\ 0 & -i & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -i & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ i & 0 & 0 & -i & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right ) , $$ $$ \beta^{3} = \left ( \begin{array}{cccccccccc} 0 & 0 & 0 & 0 & 0 & i & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & +1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ i & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & +1 & 0 & 0 & i & 0 & 0 & 0 & 0 \end{array} \right ) , $$ $$ \beta^{1} = {1 \over \sqrt{2}} \; \left ( \begin{array}{cccccccccc} 0 & 0 & 0 & 0 & -i & 0 & +i & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & +1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & +1 & 0 & +1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & +1 & 0 \\ -i & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ +i & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right ) , $$ $$ \beta^{2} = {1 \over \sqrt{2}} \; \left ( \begin{array}{cccccccccc} 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -i & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & +i & 0 & -i \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & +i & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & +i & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & -i & 0 & +i & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -i & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right ) , $$ \noindent correspondingly, the matrix $ij^{12}$ has a~diagonal structure $$ ij^{12} = \left ( \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & t_{3} & 0 & 0 \\ 0 & 0 & t_{3} & 0 \\ 0 & 0 & 0 & t_{3} \end{array} \right ) , \qquad t_{3} = \left ( \begin{array}{ccc} +1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right ) \; . 
$$ In the given tetrad representation, three components of the~total conserved momentum are (compare with [37,38]) $$ j^{\kappa }_{1} = \left [ \; l_{1} + { \cos \phi \over \sin \theta} \;(ij^{12} - \kappa) \; \right ] \; , \qquad j^{\kappa }_{2} =\left [\;l_{2} + { \sin \phi \over \sin \theta} \; (ij^{12} - \kappa) \; \right ] \; , \qquad j^{\kappa }_{3} = l_{3} \; . \eqno(2.2a) $$ \noindent Correspondingly, according to the~general procedure [36], the~particle's wave functions with fixed quantum number $(\epsilon , j , m )$ are to be constructed as follows: $$ \Phi _{\epsilon jm}(x) = e^{-i\epsilon t} \; [ f_{1}(r)\; D_{\kappa } ,\; f_{2}(r)\; D_{\kappa -1} , \; f_{3}(r) \; D_{\kappa}, \; f_{4}(r) \; D_{\kappa +1}, $$ $$ f_{5}(r) \; D_{\kappa -1}, \; f_{6}(r) \; D_{\kappa } , \; f_{7}(r) \; D_{\kappa +1}, \; f_{8}(r) \;D_{\kappa -1} , \; f_{9}(r) \;D_{\kappa } ,\; f_{10}(r)\; D_{\kappa +1} ] \eqno(2.2b) $$ \noindent here, $D_{\sigma } \equiv D^{j}_{-m,\sigma } (\phi ,\theta ,0)$. At finding $10$ radial equations for $f_{1},\ldots, f_{10}$, we are to use the~six recursive relations [39] $$ \partial_{\theta } \; D_{\kappa -1} \; = \; (a \; D_{\kappa-2} - c \; D_{\kappa } ), \;\; { -m-(\kappa -1) \cos \theta \over \sin \theta } \; D_{\kappa -1} = (-a \; D_{\kappa -2} - c \; D_{\kappa }), $$ $$ \partial _{\theta }\; D_{\kappa } \; = \; (c \; D_{\kappa-1} - d \; D_{\kappa +1}), \;\; {- m - \kappa \cos \theta \over \sin \theta } \; D_{\kappa } = (-c \; D_{\kappa-1} - d \; D_{\kappa +1}), $$ $$ \partial _{\theta } \; D_{\kappa +1} \; = \; (d \; D_{\kappa } - b \; D_{\kappa +2}), \;\; {-m-(\kappa +1)\cos \theta \over \sin \theta } \; D_{\kappa +1} \; = \; (-d \; D_{\kappa } - b \; D_{\kappa +2}) $$ \noindent where $$ a = {1 \over 2} \sqrt{(j + \kappa -1)(j - \kappa + 2)} , \qquad b = {1 \over 2} \sqrt{(j - \kappa -1)(j + \kappa + 2)} , $$ $$ c = {1 \over 2} \sqrt{(j + \kappa )(j - \kappa + 1)} , \qquad d = {1 \over 2} \sqrt{(j - \kappa )(j + \kappa +1)} . 
$$ Allowing for the following intermediate results $$ \Sigma ^{\kappa }_{\theta ,\phi } \; \Phi\; = \; \exp{-i\epsilon t} \; \sqrt{2} \; \left ( \begin{array}{c} (\; -\; c \; f_{5} \; -\; d \; f_{7}\; )\;\; D_{\kappa} \\ -\; i\; c \; f_{9} \;\; D_{\kappa -1} \\ (\; -\; i\; c \; f_{8} \;+ \;i \; d \; f_{10}) \; \; D_{\kappa} \\ -\; i \; d\; f_{9} \;\; D_{\kappa +1} \\ c \; f_{1} \;\; D_{\kappa -1} \\ 0 \\ d \; f_{1} \;\; D_{\kappa +1} \\ - \;i\; c \; f_{3} \;\; D_{\kappa -1} \\ (\;+\; i\; c \; f_{2} \; - \; i\; d \; f_{4})\;\; D_{\kappa} \\ +\; i\; d \; f_{3} \;\; D_{\kappa +1} \end{array} \right ) ; \eqno(2.3a) $$ $$ i \; \beta ^{0} \; \partial _{t} \; \Phi = \; \epsilon \; \exp{-i\epsilon t} \; \left ( \begin{array}{c} 0 \\ -\; i\; f_{5} \;\; D_{\kappa - 1} \\ i\; f_{6}\; \; D_{\kappa } \\ i\; f_{7} \;\; D_{\kappa +1} \\ -\; i\; f_{2} \;\; D_{\kappa -1} \\ -\; i\; f_{3} \;\; D_{\kappa} \\ -\; i \; f_{4} \;\; D_{\kappa +1} \\ 0 \\ 0 \\ 0 \end{array} \right ) \; ; \eqno(2.3b) $$ $$ i \; (\; \beta ^{3} \; \partial _{r} \; + \; {1 \over r} \; (\; \beta ^{1} \; \beta ^{31} + \beta ^{2} \; \beta ^{32})\; ) \; \Phi _{\epsilon jm} = \; \exp{-i\epsilon t} \; \left ( \begin{array}{c} (\; - \; d/dr \; - \; 2/r\; ) \; f_{6} \; \; D_{\kappa } \\ (\;i\;d/dr\; + \;i/r\;) \; f_{8} \;\; D_{\kappa -1} \\ 0 \\ (\;-\;i\;d/dr\; -\;i/r\;) \; f_{10}\;\; D_{\kappa +1} \\ 0 \\ 0 \\ 0 \\ (\;-\;i\; d/dr \;-\; i/r\;) \; f_{2} \;\; D_{\kappa -1} \\ 0 \\ (\; i\;d/dr \;+\;i/r\;) \; f_{4} \;\; D_{\kappa +1} \end{array} \right ) \eqno(2.3c) $$ \noindent from (2.1a) we produce $$ -( {d \over dr} + {2 \over r} ) \; f_{6} - {\sqrt{2} \over r} \; ( c \; f_{5} + d \; f_{7}) - m \; f_{1} = 0 \; , $$ $$ i \epsilon \; f_{5} + i ( {d \over dr} + {1 \over r} ) \; f_{8} + i {\sqrt{2} c \over r}\; f_{9} - m \; f_{2} = 0 \; , $$ $$ i \epsilon \; f_{6} + {2i \over r} \; (- c \; f_{8} + d \; f_{10}) - m \; f_{3} = 0 \; , $$ $$ i \epsilon \; f_{7} - i ( {d \over dr} + {1 \over r} ) \; f_{10} - i {\sqrt{2} d \over r}\; f_{9} - m \; f_{4}= 0 \; , $$ $$ i \epsilon \; f_{2} + {\sqrt{2} c \over r} \; f_{1} - m \; f_{5} = 0 , \;\; -i \epsilon \; f_{3} - {d \over dr} \; f_{1} - m \; f_{6} = 0 \; , $$ $$ -i \epsilon \; f_{4} +{\sqrt{2} d \over r} \; f_{1} - m \; f_{7} = 0 \; , \;\; -i({d \over dr} + {1 \over r}) \; f_{2} - i {\sqrt{2} c \over r} \; f_{3} - m \; f_{8} = 0 $$ $$ i{\sqrt{2} \over r} ( c \; f_{2} - d \; f_{4}) - m \; f_{9} = 0 \; , \;\; i ({d \over dr} + {1 \over r}) \; f_{4} + {i \sqrt{2} d \over r} \; f_{3} - m \; f_{10} = 0\; . \eqno(2.4) $$ \noindent Parametre $j$ are allowed to take values (we have to draw distinction between $\kappa = \pm 1/2$ and all remaining $\kappa $): $$ \hbox{if} \;\;\; \kappa = \pm 1/2 \;, \qquad \hbox{then} \;\;\; j = \mid \kappa \mid , \mid \kappa \mid + 1 , \ldots ; $$ $$ \hbox{if}\;\;\; \kappa = \pm 1, \pm 3/2,\ldots \qquad \hbox{then} \;\;\; j = \mid \kappa \mid - 1, \mid \kappa \mid , \mid \kappa \mid + 1,\ldots \eqno(2.5) $$ \noindent In both cases, the states of minimal $j$ (respectively $j_{min.}= \mid \kappa \mid $ and $j_{\min }= \mid \kappa \mid - 1 )$ are to be considered separately: the~radial system (2.4) is not valid for those states. Let us consider the state with $j_{\min } = \mid \kappa \mid -1$ . First, one ought to investigate the $j_{min.} = 0$ situation arisen at $\kappa = \pm 1$; the relevant wave function does not depend on the~$\theta ,\phi $ variables at all. 
Let $\kappa = + 1$ and $j_{min.} = 0$, then we start with the~substitution $$ \Phi ^{0}(t,r) = \; \exp{-i\epsilon t} \; [ \; 0 , \;\; \; f_{2} ,\;\; 0 , \;\; 0 , \;\; f_{5}, \;\; 0 ,\; 0 ; \;\; f_{8} , \;\; 0 , \;\; 0 \; ] \eqno(2.6a) $$ \noindent It is readily verified that the $\Sigma _{\theta ,\phi }$ operator acts on $\Phi _{0}$ as a~null operator: $\Sigma _{\theta ,\phi } \; \Phi _{0} \; = 0$; because the~identity $(i\; j^{12} \; - \; \kappa ) \; \Phi ^{0}\; \equiv\; 0$ holds. As a~result, we produce only three non-trivial (as one should expect) equations: $$ i \; \epsilon \; f_{5} \; + \; i \; (\; {d \over dr}\; + \;{1 \over r}\; ) \;f_{8} - m \; f_{2}\; = 0 $$ $$ -\; i\; f_{2}\; -\; m \; f_{5} \;= 0 _ , \;\;\; -\; i\;(\; {d \over dr}\; +\; {1 \over r})\; f_{2} - m \; f_{8} \; = 0 \eqno(2.6b) $$ \noindent From here, it follows $$ f_{5} = \; - \; i \; {\epsilon \over m} \; f_{2} , \qquad f_{8} = \; -\; {i \over m} \; ( \; {d \over dr} \; + \; {1 \over r} ) \; f_{2} $$ \noindent and the function $f_{2}$ ($F_{2} = {1 \over r}\; f_{2} $) satisfies the equation $$ (\; {d^{2} \over dr^{2}} \; + \; \epsilon ^{2} \; - \; m^{2} \; ) \; F_{2} = 0 \eqno(2.6c) $$ \noindent The latter provides us with an~exponential solution of the~same kind as in the~electronic case, that is a~candidate for a~possible bound state. The situation with $j_{min.}= 0$ and $\kappa = - 1$ looks completely analogous: $$ \Phi ^{0}(t,r)\; =\; \exp{-i\epsilon t} \; [\; 0, \;\; 0,\;\; 0,\;\; f_{4}\; ,\;\; 0, \;\; 0, \;\; f_{7}, \;\; 0, \;\; 0, \;\; f_{10}\; ] \eqno(2.7a) $$ \noindent and the radial equations $$ i \; \epsilon\; f_{7}\; - i\; ({d \over dr} \; + \; {1 \over r}) \; f_{10} \;-\; m \;f_{4} = 0 , $$ $$ -\; i \; f_{4} \; - \;m \; f_{7} = 0 , \qquad i\; ({d \over dr} \;+\; {1 \over r})\; f_{4} \; - m \; f_{10} = 0 \eqno(2.7b) $$ \noindent and eventually we get $$ f_{7} = -i\;{ \epsilon \over m} \; f_{4} \;\; , \qquad f_{10} = {i \over m } \; ({d \over dr} \;+ \;{1 \over r} ) \; f_{2} \;\; , $$ $$ (\; {d^{2} \over dr^{2}} \; + \; \epsilon ^{2} \; - \;m^{2} )\; F_{4} \;= 0 , \qquad (\; F_{4} = {1 \over r} \; f_{4}\; ) \; . \eqno(2.7c) $$ Now, we pass on the case of minimal $j_{min.} = \mid \kappa \mid -1$ with higher values of $\kappa $: $ \kappa = \pm 3/2 , \pm 2 , \ldots $ First, let $\kappa $ be positive, then we have start with a~substitution $$ \kappa \ge 3/2 : \qquad \Phi ^{0} \;=\; \exp{-i\epsilon t}\; [ \;0 ,\; f_{2}\; D_{\kappa -1} ,\; 0 ,\; 0 ; f_{5} \; D_{\kappa -1} ,\; 0 , \; 0 ;\; f_{8}\; D_{\kappa -1} ,\; 0 ,\; 0\; ] \eqno(2.8a) $$ \noindent Using the recursive relations $$ \partial _{\theta}\; D_{\kappa-1} = \sqrt{{\kappa -1 \over 2}}\; D_{\kappa -2} , \;\; {-m - (\kappa -1) \cos \theta \over \sin \theta} \; D_{\kappa -1} = - \sqrt{{\kappa -1 \over 2}}\; D_{\kappa -2} $$ \noindent we find $$ i \beta ^{1} \; \Phi ^{0} = \exp{-i\epsilon t} \; i\; \sqrt{{\kappa -1 \over 2}} \left ( \begin{array}{c} -i f_{5} \; D_{\kappa -2} \\ 0 \\ + f_{8} \; D_{\kappa -2} \\ 0 \\ 0 \\ 0 \\ 0\\ 0 \\ - f_{2} \;D_{\kappa -2} \\ 0 \end{array} \right ) $$ $$ \beta ^{2}\; { i\; \partial _{\phi } + (ij^{12} - \kappa )\cos \theta \over \sin \theta } \; \Phi ^{0} = e^{-i \epsilon t} \; \sqrt{{\kappa -1 \over 2}} \; \left ( \begin{array}{c} - f_{5} \; D_{\kappa - 2} \\ 0 \\ -i f_{8}\; D_{\kappa-2}\\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ +i f_{2} \; D_{\kappa-2} \\ 0 \end{array} \right ) $$ \noindent and further we produce $\Sigma _{\theta ,\phi } \Phi ^{0} = 0$. 
Therefore, the~radial functions $f_{2}, f_{5}, f_{8}$ satisfy again the~same system (2.6b). The case of $j_{min.}=\mid \kappa \mid -1$ with negative $\kappa $ looks completely similar to the above: $$ \kappa \leq -3/2: \qquad \Phi ^{0} = e^{-i\epsilon t}\; [\; 0 ,\; 0 ,\; 0 ,\;\; f_{4}\; D_{\kappa +1} ,\; \; 0 ,\; 0 , $$ $$ \;\; f_{7}\; D_{\kappa +1} ,\; 0 ,\; 0 ,\;\; f_{10}\; D_{\kappa +1}\;\; ] \eqno(2.9) $$ \noindent the identity $\Sigma _{\theta ,\phi } \Phi ^{0} \equiv 0$ also holds and a~radial system coincides with (2.7b). So, the description of $j_{min.} = \mid \kappa \mid - 1$ states has been completed; all of them provide us with solutions of a~special exponential kind which potentially might be related to a~bound state and therefore these solutions are of special physical interest. In the~same time, unfortunately, it is a~unique case that we have managed to solve entirely up to their radial equations. Now, let us pass on the states with $j = \mid \kappa \mid $ that which are to be regarded whether as $j_{\min }= \mid \kappa \mid $ states at $\kappa = \pm 1/2$ or non-minimal $j$ states at all other values of $\kappa $. Let $j = \mid \kappa \mid $ and $\kappa $ be positive ($ \kappa \geq + 1/2$), then we have to begin with a~substitution (the~radial functions at all $D^{j = \kappa }_{-m,\kappa +1}$ in $\Phi (x)$ are equated to zero) $ \kappa \geq + 1/2 :$ $$ \Phi _{\epsilon jm}(x) \; = \; \exp{-i\epsilon t} \; [ \; f_{1}(r)\;D_{\kappa } ;\; f_{2}(r)\; D_{\kappa -1} , \; f_{3}(r) \; D_{\kappa } ,\; 0 ; $$ $$ f_{5}(r) \; D_{\kappa -1} ,\; f_{6}(r) \; D_{\kappa } , \; 0 ; \; f_{8}(r)\;D _{\kappa -1} ,\; f_{9}(r) \; D_{\kappa } , \; 0 \; ] \eqno(2.10a) $$ \noindent For $\Sigma _{\theta ,\phi }\Phi $ we get $$ \Sigma _{\theta ,\phi } \; \Phi = \; \exp{-i\epsilon t} \sqrt{\kappa}\; \left ( \begin{array}{c} - f_{5} \; D_{\kappa } \\ +i f_{9} \; D_{\kappa -1} \\ -i f_{8} \; D_{\kappa } \\ 0 \\ f_{1} \; D_{\kappa -1}\\ 0 \\ 0 \\ -i f_{3} \; D_{\kappa -1} \\ +i f_{2} \; D_{\kappa } \\ 0 \end{array} \right ) $$ \noindent and further we produce the~radial system $$ - ( {d \over dr} + {2 \over r} ) \; f_{6} - {\sqrt{\kappa} \over r} f_{5} - m \; f_{1} = 0 , \;\; i \epsilon \; f_{5} + i ( {d \over dr} + {1 \over r} )\; f_{8} + i {\sqrt{\kappa} \over r} \; f_{9} - m \; f_{2} = 0 , $$ $$ i \epsilon \; f_{6} - i { \sqrt{\kappa} \over r} \; f _{8} - m \; f _{3} = 0, \qquad 0 = 0 , \;\;\; i \epsilon \; f_{2} + {\sqrt{\kappa} \over r}\; f_{1} - m \; f_{5} = 0 , $$ $$ - i \epsilon \; f_{3} - {d \over dr} \; f_{1} - m \; f_{6} = 0 \qquad 0 = 0, \qquad - i ({d \over dr} + {1 \over r})\; f_{2} - i {\sqrt{\kappa} \over r} \; f_{3} - m \; f_{8} = 0 , $$ $$ i {\sqrt{\kappa} \over r} \; f_{2} - m \; f_{9} = 0 ,\qquad 0 = 0 \; . 
\eqno(2.10b) $$ In an analogous way one can consider the $j = \mid \kappa \mid $ states at negative $\kappa$: $ \kappa \leq -1/2 :\qquad $ $$ \Psi = \exp{-i\epsilon t} \; [\; f_{1}\; D_{\kappa } ,\; 0 , \;f_{3} \;D_{\kappa}, \; f_{4} \;D_{\kappa +1} ,\; 0 , $$ $$ f_{6}\; D_{\kappa } ,\; f_{7} \;D_{\kappa +1}, \; 0 , \;f_{9}\; D_{\kappa } ,\; f_{10} \; D_{\kappa +1}\; ] \; ; \eqno(2.11a) $$ $$ ( {d \over dr} + {2 \over r} ) \; f_{6} + {\sqrt{-\kappa} \over r}\; f_{7} + m \; f_{1} = 0 \; , \;\; 0 = 0 \; , \qquad i \epsilon \; f_{6} - i {\sqrt{-\kappa} \over r} \; f_{10} - m \; f_{3} = 0 \; , $$ $$ i\epsilon \; f_{7} - i {\sqrt{-\kappa} \over r} \; f_{9} - i({d \over dr} + {1 \over r}) \; f_{10} - m \; f_{4} = 0 \;\; , \qquad 0 = 0 \; , $$ $$ i \epsilon \; f_{3} + {d \over dr} \; f_{1} + m \; f_{6} = 0 , \;\; - i \epsilon \; f_{4} + {\sqrt{-\kappa} \over r} \; f_{1} + m \;f_{7} = 0 , \qquad 0 = 0 \; , $$ $$ i {\sqrt{-\kappa} \over r} \; f_{4} + m \; f_{9} = 0 , \qquad i ({d \over dr} + {1 \over r}) \; f_{4} + i {\sqrt{-\kappa} \over r} \; f_{3} - m \; f_{10} = 0 \; . \eqno(2.11b) $$ Thus, the task of finding the radial equations has been completely solved. All those systems look rather involved, so there are reasons to question whether they admit an easy analysis in terms of any standard special functions. It can be noted that the ten equations established above fall naturally into sub-groups of $4$ plus $6$ equations: the six allow us to express the functions $f_{5},\ldots ,f_{10}$ in terms of $f_{1},\ldots, f_{4}$. Thereby, we can reduce the~first order system of $10$ equations to a~second order system of $4$ ones. Evidently, these four relations will still represent a~rather complicated system. \subsection*{3. On connection with the Proca approach} In analyzing the~above radial systems, any additional information can be useful. In particular, as is well known, there must exist a first order differential condition on the vector constituent of the $10$-dimensional wave function, namely, the~so-called generalized Lorentz condition. Let us work it out explicitly in this monopole situation. To this end, instead of the D-K formalism it will be more convenient to use the Proca formalism (see Sec.2): $$ D_{\alpha } \; \Psi _{\beta } - D_{\beta } \; \Psi _{\alpha } = {mc \over \hbar} \; \Psi _{\alpha \beta } ,\qquad D^{\alpha } \;\Psi _{\alpha \beta } = {mc \over \hbar} \; \Psi _{\beta } \eqno(3.1a) $$ \noindent where $D_{\alpha } = (\nabla _{\alpha } + i\; {e \over \hbar c}\; A_{\alpha })$; $A_{\alpha }$ is an~electromagnetic potential (here, it is given by the Schwinger monopole potential $A_{\phi } = g\; \cos \theta $). After the~operator $D_{\alpha }$ acts on the~second equation in (3.1a), we get $$ {mc \over \hbar} \; ( \nabla _{\alpha } \; + \; i \; {e \over c \hbar} \; A_{\alpha }) \; \Psi ^{\alpha } = i \; {e \over 2c \hbar} \; F_{\alpha \beta } \; \Psi ^{\alpha \beta } \eqno(3.1b) $$ \noindent where $F_{\alpha \beta } = (\partial _{\alpha } \; A_{\beta } - \partial _{\beta } \; A_{\alpha } )$. When $A_{\alpha } = 0$, this relation provides us with the usual Lorentz condition $\nabla _{\alpha } \; \Psi ^{\alpha } = 0$. Now we have to translate the relationship (3.1b) from the Proca representation into the Duffin-Kemmer one. First of all, instead of $\Psi ^{\alpha }$ and $\Psi ^{\alpha \beta}$ we have to introduce their tetrad components: $ \Psi ^{\alpha } = e^{(a)\alpha } \; \Psi _{a} , \;\;\; \Psi ^{\alpha \beta } = e^{(a)\alpha } \; e^{(b)\beta } \; \Psi _{ab} . 
$ Correspondingly, the (3.1b) will take on the~form $$ {mc \over \hbar} \; [\; e^{(a)\alpha }_{;\alpha } \; \Psi _{a} \; + \; e^{(a)\alpha } \; \partial _{\alpha } \; ] \; \Psi _{a} \; + \; \; i\; {e \over \hbar c} \; A^{a}\; \Psi _{a} \; ] \; = \; i\; {e \over 2\hbar c} \; F ^{ab} \; \Psi _{ab} \eqno(3.1c) $$ \noindent The coordinate representatives of the monopole $ A_{\phi } = g\; \cos\theta , \;\; F_{\theta \phi } = - g \sin \theta$ have the following tetrad description $$ A^{2} = e^{(2) \phi }\; A_{\phi } = - g {\cos \theta \over r \sin \theta } , \qquad F^{12} \;= \; e^{(1)\theta } \; e^{(2)\phi } \; F_{\theta,\phi} = \; - \; {g \over r^{2}} \eqno(3.1d) $$ \noindent In addition, on simple straightforward computation, we find $$ e^{(0)\alpha }_{\;\;\; ;\alpha } = 0 , \;\; e^{(1)\alpha }_{\;\;\; ;\alpha } = - {\cos \theta \over r \sin \theta } ,\;\; e^{(2)\alpha }_{\;\;\; ;\alpha } = 0 , \;\; e^{(3)A}_{\;\;\; ;\alpha } = - {2 \over r} \eqno(3.1e) $$ \noindent The functions $\Psi _{a}$ and $\Psi _{ab}$ involved in (3.1c), relate to the $10$ constituents of $D-K$ column $\Phi$ as follows (this represents translating from cyclic basis into Cartesian one; $W \equiv -1/ \sqrt{2}$) $$ \left ( \begin{array}{c} \Phi_{0} \\ \Phi_{1} \\ \Phi_{2} \\ \Phi_{3} \\ \Phi_{01} \\ \Phi_{02} \\ \Phi_{03} \\ \Phi_{23} \\ \Phi_{31} \\ \Phi_{12} \end{array} \right ) = \left ( \begin{array}{cccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 &-W & 0 & +W & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 &-iW& 0 &-iW & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 &-W & 0 & +W & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 &-iW& 0 &-iW & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 &-W & 0 & +W \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 &-iW& 0 &-iW \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{array} \right ) = \left ( \begin{array}{c} f_{1} \;D_{\kappa} \\ f_{2} \;D_{\kappa-1} \\ f_{3} \;D_{\kappa} \\ f_{4} \; D_{\kappa+1} \\ f_{5} \; D_{\kappa-1} \\ f_{6} D_{\kappa} \\ f_{7} \; D_{\kappa+1} \\ f_{8} \;D_{\kappa-1} \\ f_{9} D_{\kappa} \\ f_{10}\;D_{\kappa+1} \end{array} \right ) \eqno(3.2a) $$ \noindent In the following we need only the components $\Psi _{0}, \;\Psi _{1} , \;\Psi _{2} , \;\Psi _{3} , \;\Psi _{12}:$ $$ \Psi _{0} = \; e^{-i\epsilon t} \; f_{1} \; D_{\kappa } , \qquad \Psi _{3} = \; e^{-i\epsilon t} \; f_{3} \; D_{\kappa } , \qquad \Psi _{1} = \; e^{-i\epsilon t} {1 \over \sqrt{2}} \; (- f_{2} \;D_{\kappa -1} + f_{4} \; D_{\kappa +1} ) , $$ $$ \Psi _{2} = \; e^{-i\epsilon t} \; {i \over \sqrt{2}} \; (- f_{2} \;D_{\kappa -1} - f_{4} \; D_{\kappa +1} ) , \qquad \Psi _{12} = e^{-i\epsilon t} \; f_{9}\; D_{\kappa } \eqno(3.2b) $$ \noindent Allowing for (3.2b) and (3.1d,e), the~condition (3.1c) has taken the form: $$ {mc \over \hbar} \; [\; {1 \over \sqrt{2}} \; r \; f_{2} \; ( \partial _{\theta } \; D_{\kappa -1} - {m + (\kappa -1) \cos \theta \over \sin \theta} \; D_{\kappa -1} ) - {1 \over \sqrt{2}} \; r \; f_{4}\; ( \partial _{\theta } \; D_{\kappa +1} + $$ $$ {m + (\kappa +1) \cos \theta ) \over \sin \theta} \; D _{\kappa +1}) + \; D_{\kappa } \; ( - {2 \over r} \; f_{3} \; - \; i {\epsilon \over \hbar c} \; f_{1} \; - \; {d \over dr}\; f_{3} ) ] = - i \; {\kappa \over r^{2}} \; f_{9} D_{\kappa} \eqno(3.3a) $$ \noindent After having used the recursive relations $$ \partial _{\theta } \; D_{\kappa -1} - {m + (\kappa -1)\cos \theta \over \sin \theta } \; D_{\kappa -1} = - \sqrt{(j-\kappa +1)(j+\kappa )} \; D_{\kappa } , $$ $$ \partial _{\theta } \; D_{\kappa +1} - {m + (\kappa 
+1)\cos \theta \over \sin \theta } \; D_{\kappa +1} = - \sqrt{(j+\kappa +1)(j-\kappa )} \; D_{\kappa } $$ \noindent (which are easily derived from those used above) we eventually arrive at $$ {mc \over \hbar} \; [\; - i \; {\epsilon \over \hbar c} \; f_{1} \; + \; ({d \over dr} + {2 \over r})\; f_{3}\; - \; {1 \over \sqrt{2}} \; ( c \; f_{2} + d \; f_{4}) \;] = - i \; {\kappa \over r^{2}} \; f_{9} \eqno(3.3b) $$ \noindent If $j = \mid \kappa \mid , \kappa \geq +1/2$ , one gets $$ {mc \over \hbar} \; [- i{\epsilon \over \hbar c} \; f_{1} \; + \; ( {d \over dr} + {2 \over r} ) \; f_{3} \; - \; {\sqrt{\kappa} \over r} \; f_{2} ] = - i \; { \kappa \over r^{2}} \; f_{9} \eqno(3.3c) $$ \noindent if $j = \mid \kappa \mid , \kappa \leq -1/2$, one gets $$ {mc \over \hbar} \; [- i \; {\epsilon \over \hbar c } \; f_{1} \; + \; ( {d \over dr} \; + \; {2 \over r} ) \; f_{3} - {\sqrt{-\kappa} \over r} \; f_{2} \;] = - i \; { \kappa \over r^{2}} \; f_{9} \; . \eqno(3.3d) $$ \subsection*{4. Discrete symmetry} Now, let us take up one more question, namely the problem of discrete symmetry in the vector particle-monopole case. As was shown in [36], in the electron-monopole case there exists a composite operator $\hat{N} = [ \hat{\pi } \otimes \hat{P}_{bisp.} \otimes \hat{P} ]$. It would seem that the~same possibility is realized also in the case of a vector particle. Indeed, a~direct extension of the~above to the new situation, $\hat{N}_{vect.} = [ \hat{\pi } \otimes \hat{P}_{vect.} \otimes \hat{P} ]$, formally affords an operator with analogous commutation properties, that is, $$ [ \hat{N}_{vect.}, \hat{H}_{vect.} ]_{-} = 0 \; , \qquad [ \hat{N}_{vect.}, \vec{J}^{eg}_{vect.} ]_{-} = 0 \; . $$ \noindent However, as will be verified below, such an operator cannot be diagonalized on the Duffin-Kemmer wave functions found above. This matter is worth considering in more detail. The ordinary $P$-reflection operator for a vector field in the Cartesian tetrad is $$ \hat{P}_{Cart.} \; = \; \left ( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & -I & 0 & 0 \\ 0 & 0 & -I & 0 \\ 0 & 0 & 0 & +I \end{array} \right ) \eqno(4.1) $$ \noindent where the symbol $I$ denotes the unit $3\times 3$ matrix. After translating this $\hat{P}_{Cart.}$-operator into the~spherical tetrad basis according to $ \hat{P}_{sph.} = O(\theta ,\phi ) \; \hat{P}_{Cart.}\; O^{-1}(\theta ,\phi )$, where $O(\theta ,\phi)$ is the $10$-dimensional rotation matrix associated with the~spinor gauge transformation used in the case of the electron field, it takes the~form (the standard cyclic basis in the vector space is used) $$ \hat{P}_{sph.}^{cycl.} \; = \; \left ( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & +E & 0 & 0 \\ 0 & 0 & +E & 0 \\ 0 & 0 & 0 & -E \end{array} \right ) , \qquad \hbox{where} \qquad E \equiv \left ( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{array} \right ) \; . \eqno(4.2) $$ \noindent From the eigenvalue equation $$ [ \hat{\pi } \otimes \hat{P}^{cycl.}_{sph.} \otimes \hat{P} ] \; \Phi ^{eg}_{jm} = N \; \Phi ^{eg}_{jm} \eqno(4.3a) $$ \noindent it follows $$ N = (-1)^{j+1} : \qquad f_{1} = \; f_{3} = \; f_{6} = \; 0 , \; f_{4} = - f_{2}, \; f_{7} = - f_{5} , \; f_{10} = + f_{8} ; \eqno(4.3b) $$ $$ N = (-1)^{j} : \qquad f_{9} = 0 , \; f_{4} = + \; f_{2} , \; f_{7} = +\; f_{5} ,\; f_{10} = -\; f_{8} \; ; \eqno(4.3c) $$ \noindent these relations are exactly the same as those arising from diagonalizing the~ordinary $P$-reflection operator in the case of a~free vector field: $[ \hat{P}^{cycl.}_{sph.} \otimes \hat{P} ] \Phi^{0} = P \Phi^{0}$. 
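These constraints can also be tested symbolically against the radial system (2.4); the following short sketch (Python/SymPy; the transcription of (2.4) and of the constraints (4.3b) into the script is ours, and it is given purely as an illustration of the by-hand check carried out next) substitutes (4.3b) into (2.4) and solves the derivative-free equations that result.
\begin{verbatim}
# Illustration: impose the parity constraints (4.3b) on the radial
# system (2.4) and solve the derivative-free equations that result.
import sympy as sp

r = sp.symbols('r', positive=True)
eps, m, c, d = sp.symbols('epsilon m c d', positive=True)
f1, f2, f3, f4, f5, f6, f7, f8, f9, f10 = \
    [sp.Function('f%d' % i)(r) for i in range(1, 11)]
Dr = lambda h: sp.diff(h, r)

eqs = [
    -(Dr(f6) + 2*f6/r) - sp.sqrt(2)/r*(c*f5 + d*f7) - m*f1,
    sp.I*eps*f5 + sp.I*(Dr(f8) + f8/r) + sp.I*sp.sqrt(2)*c/r*f9 - m*f2,
    sp.I*eps*f6 + 2*sp.I/r*(-c*f8 + d*f10) - m*f3,
    sp.I*eps*f7 - sp.I*(Dr(f10) + f10/r) - sp.I*sp.sqrt(2)*d/r*f9 - m*f4,
    sp.I*eps*f2 + sp.sqrt(2)*c/r*f1 - m*f5,
    -sp.I*eps*f3 - Dr(f1) - m*f6,
    -sp.I*eps*f4 + sp.sqrt(2)*d/r*f1 - m*f7,
    -sp.I*(Dr(f2) + f2/r) - sp.I*sp.sqrt(2)*c/r*f3 - m*f8,
    sp.I*sp.sqrt(2)/r*(c*f2 - d*f4) - m*f9,
    sp.I*(Dr(f4) + f4/r) + sp.I*sp.sqrt(2)*d/r*f3 - m*f10,
]

# constraints (4.3b): f1 = f3 = f6 = 0, f4 = -f2, f7 = -f5, f10 = +f8
constr = {f1: 0, f3: 0, f6: 0, f4: -f2, f7: -f5, f10: f8}
reduced = [sp.simplify(e.subs(constr).doit()) for e in eqs]

# the equations without derivatives already force, generically
# (c != d, epsilon and m nonzero), f2 = f5 = f8 = f9 = 0
algebraic = [e for e in reduced if not e.atoms(sp.Derivative)]
print(sp.solve(algebraic, [f2, f5, f8, f9], dict=True))
\end{verbatim}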
Let us try imposing these additional relations (4.3b) or (4.3c) on the radial functions $f_{1}(r), \ldots , f_{10}(r)$ obeying the system (2.4). On direct verification, one concludes that the system thus obtained is not self-consistent. This means that the $\hat{N}$ operator, though commuting with the vector $eg$-Hamiltonian, cannot be regarded as an observable quantity measured simultaneously with the vector particle-monopole Hamiltonian. For example, in the case (4.3b), one has $$ - ( {d \over dr} + {2 \over r} )\; 0 - {\sqrt{2} \over r}\; ( c - d )\; f_{5} - m\; 0 = 0 \; , \qquad \hbox{that is} \qquad f_{5} \equiv 0 \; ; $$ $$ i \epsilon \; 0 + i ({d \over dr} + {1 \over r} ) \; f_{8} + i{\sqrt{2} c \over r} \; f_{9} - m \; f_{2} = 0 \; ; $$ $$ i \epsilon \; 0 + 2 {i \over r} \; (- c + d ) \; f_{8} - m \; 0 = 0 \; , \qquad \hbox{that is} \qquad f_{8} \equiv 0 \; ; $$ $$ i \epsilon \; 0 - i ( {d \over dr} + {1 \over r} ) \; 0 - {\sqrt{2} d \over r}\; f_{9} - m \; f_{2} = 0 \; ; $$ $$ i \epsilon f_{2} + {\sqrt{2} c \over r} \; 0 - m \; 0 = 0 \; , \qquad \hbox{that is} \qquad f_{2} \equiv 0 \; , \; f_{9} \equiv 0 \; , $$ $$ - i \epsilon \; 0 - {d \over dr}\; 0 - m \; 0 = 0 \; , \qquad - i \epsilon \; 0 + {\sqrt{2} d \over r}\; 0 - m \; 0 = 0 \; , $$ $$ 0 = 0 \;\; , \;\; - i ({d \over dr} + {1 \over r}) \; 0 - i{\sqrt{2} c \over r} \; 0 - m\; 0 = 0 \; . $$ \noindent So, all the $f_{i}(r)$ turn out to be equal to zero; but such a~solution is not of interest because of its triviality. Let us add a comment on extending the~vector particle-monopole formalism constructed above to an arbitrary background space-time with spherical symmetry. The relevant Duffin-Kemmer $eg$-equation is taken in the form $$ \left [ i \beta ^{0} \; (e^{-\nu /2} \; \partial _{t} \; + \; {1 \over 2}\; {\partial \nu \over \partial r} \; e ^{-\mu /2}\; j ^{03}) \; + \; i \beta ^{3} \; (e^{-\mu /2} \; \partial _{r} \; + \; {1 \over 2} \; {\partial \mu \over \partial t} \; e^{-\nu /2} \; j^{03} ) \; \right. $$ $$ \left. - \; {i \over r}\; e^{-\mu /2} \; ( \beta ^{1} \; \beta ^{12} \; + \; \beta ^{2} \; \beta ^{23}) \; + \; {1 \over r} \; \Sigma _{\theta ,\phi } \; - \; {mc \over \hbar} \; \right ] \; \Phi (x) = 0 \; . $$ \noindent Therefore, almost everything done above for the flat space model can easily be carried over to a~curved space model with only a few evident changes. \end{document}
\begin{document} \title[Bounded weak solutions to the thin film Muskat problem]{Bounded weak solutions to the thin film Muskat problem via an infinite family of Liapunov functionals} \thanks{} \author{Philippe Lauren\c{c}ot} \address{Institut de Math\'ematiques de Toulouse, UMR~5219, Universit\'e de Toulouse, CNRS \\ F--31062 Toulouse Cedex 9, France} \email{[email protected]} \author{Bogdan-Vasile Matioc} \address{Fakult\"at f\"ur Mathematik, Universit\"at Regensburg \\ D--93040 Regensburg, Deutschland} \email{[email protected]} \keywords{degenerate parabolic system - cross-diffusion - boundedness - Liapunov functionals - global existence} \subjclass{35K65 - 35K51 - 37L45 - 35B65 - 35Q35} \date{\today} \begin{abstract} A countably infinite family of Liapunov functionals is constructed for the thin film Muskat problem, which is a second-order degenerate parabolic system featuring cross-diffusion. More precisely, for each $n\geq 2$ we construct an homogeneous polynomial of degree $n$, which is convex on $[0,\infty)^2$, with the property that its integral is a Liapunov functional for the problem. Existence of global bounded non-negative weak solutions is then shown in one space dimension. \end{abstract} \maketitle \pagestyle{myheadings} \markboth{\sc{Ph.~Lauren\c cot \& B.-V.~Matioc}}{\sc{Bounded weak solutions to the thin film Muskat problem}} \section{Introduction}\label{sec01} The thin film Muskat problem describes the dynamics of the respective heights of two immiscible fluids with different densities $(\rho_-,\rho_+)$ and viscosities $(\mu_-,\mu_+)$ in a porous media and reads \begin{subequations}\label{tfm1} \begin{align} \partial_t f & = \mathrm{div}\left(f \nabla\left[ (1+R) f + R g \right] \right) \;\text{ in }\; (0,\infty)\times \Omega\,, \label{tfm1a} \\ \partial_t g & = \mu R\, \mathrm{div}\left[ g\nabla (f + g) \right] \;\text{ in }\; (0,\infty)\times \Omega\,, \label{tfm1b} \end{align} supplemented with homogeneous Neumann boundary conditions \begin{equation} \nabla f\cdot \mathbf{n} = \nabla g\cdot \mathbf{n} = 0 \;\text{ on }\; (0,\infty)\times \partial\Omega\,, \label{tfm1c} \end{equation} and non-negative initial conditions \begin{equation} (f,g)(0) = (f^{in},g^{in}) \;\text{ in }\; \Omega\,. \label{tfm1d} \end{equation} \end{subequations} In \eqref{tfm1}, $\Omega$ is a bounded domain of $\mathbb{R}^d$, $d\ge 1$, with smooth boundary $\partial\Omega$, $f$ and $g$ denote the heights of the heavier and lighter fluids, respectively, and $R:=\rho_+/(\rho_- - \rho_+)$ and $\mu:=\mu_-/\mu_+$ are positive parameters depending solely on the densities ($\rho_- > \rho_+$) and viscosities of the two fluids. We recall that \eqref{tfm1} can be derived from the classical Muskat problem by a lubrication approximation \cite{EMM2012, JM2014, WM2000}. From a mathematical viewpoint, the system~\eqref{tfm1} is a quasilinear degenerate parabolic system with full diffusion matrix and it is well-known that the analysis of cross-diffusion systems is in general rather involved. Fortunately, as noticed in \cite{EMM2012}, an important property of \eqref{tfm1} is the availability of an energy functional. Specifically, \begin{equation} \mathcal{E}(f,g) := \frac{1}{2} \int_\Omega \left[ f^2 + R (f+g)^2 \right]\ \mathrm{d}x \label{in1} \end{equation} is a non-increasing function of time along the trajectories of \eqref{tfm1}. 
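For sufficiently smooth solutions with non-negative components, this monotonicity follows, at least formally, from the dissipation identity
\begin{equation*}
\frac{\mathrm{d}}{\mathrm{d}t} \mathcal{E}(f,g) = - \int_\Omega \Big[ f\, \big| \nabla\left[ (1+R) f + R g \right] \big|^2 + \mu R^2\, g\, |\nabla (f+g)|^2 \Big]\ \mathrm{d}x \le 0\,,
\end{equation*}
which is obtained by an integration by parts relying on the boundary conditions~\eqref{tfm1c}.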
In fact, a salient feature of~\eqref{tfm1}, first observed in~\cite{LM2013}, is that it has, at least formally, a gradient flow structure for the energy~$\mathcal{E}$ with respect to the $2$-Wasserstein distance. This structure provides in particular a variational scheme to establish the existence of weak solutions to \eqref{tfm1}. Furthermore, as first noticed in \cite{ELM2011} and subsequently used in \cite{ACCL2019, LM2013}, the entropy functional \begin{equation} \mathcal{H}(f,g) := \int_\Omega \left[ L(f) + \frac{1}{\mu} L(g) \right]\ \mathrm{d}x\,, \label{in2} \end{equation} with $L(r) := r\ln{r}-r+1\ge 0$, $r\ge 0$, is also a non-increasing function of time along the trajectories of \eqref{tfm1}. Building upon the above mentioned properties, the existence of non-negative global weak solutions~${(f,g)}$ to~\eqref{tfm1} satisfying \begin{equation} (f,g) \in L_\infty((0,T),L_2(\Omega,\mathbb{R}^2))\cap L_2((0,T),H^1(\Omega,\mathbb{R}^2))\,, \label{in3} \end{equation} is shown in \cite{ELM2011, LM2017} in one space dimension $d=1$, see also \cite{AIJM2018, BGB2019} for the existence of global weak solutions to a generalized version of \eqref{tfm1} in the $d$-dimensional torus with periodic boundary conditions instead of the homogeneous Neumann boundary conditions~\eqref{tfm1c}. Local existence and uniqueness of classical solutions to \eqref{tfm1} with positive initial data are reported in \cite{EMM2012}, along with the local stability of spatially uniform steady states. As for the Cauchy problem when $\Omega=\mathbb{R}^d$ and~${d\in \{1,2\}}$, non-negative global weak solutions are constructed in \cite{ACCL2019, LM2013} for non-negative initial conditions~${(f^{in},g^{in})\in L_1(\mathbb{R}^d,\mathbb{R}^2)\cap L_2(\mathbb{R}^d,\mathbb{R}^2)}$ with finite second moments, exploiting the aforementioned gradient flow structure to set up a variational scheme, see also \cite{La2017} for an extension to a multicomponent version of~\eqref{tfm1} in one space dimension. This approach is further developed in \cite{ZM2015} to investigate the existence of non-negative global weak solutions to a broader class of quasilinear cross-diffusion systems. In view of \eqref{in3}, weak solutions to \eqref{tfm1} have rather low regularity. It is actually a general feature of cross-diffusion systems that classical regularity is hard to reach. In particular, the cross-diffusion structure impedes the use of bootstrap arguments which have proved efficient for triangular systems. A different route to improved regularity is to look for additional estimates and the purpose of this paper is to derive (formally) $L_n$-estimates for solutions to~\eqref{tfm1} for all integers $n\ge 2$, eventually leading to $L_\infty$-estimates in the limit $n\to\infty$. This feature paves the way to the construction of non-negative global bounded weak solutions to~\eqref{tfm1} but, as explained below, we are only able to complete this construction successfully in one space dimension $d=1$. The first main contribution of this paper is actually to show that, for each $n\ge 2$, there is an homogeneous polynomial $\Phi_n$ of degree~$n$, which is non-negative and convex on $[0,\infty)^2$, and such that \begin{equation} u=(f,g) \longmapsto \int_\Omega \Phi_n(u)\ \mathrm{d}x \end{equation} is a Liapunov functional for \eqref{tfm1}. More precisely, the first main result of this paper is the following. \begin{theorem}\label{MainTh} Let $R>0$, $\mu>0$, and $u^{in} := (f^{in},g^{in})\in L_{\infty,+}(\Omega,\mathbb{R}^2)$. 
If $u=(f,g)$ is a sufficiently regular solution to \eqref{tfm1} on $[0,\infty)$ with non-negative components, then \begin{itemize} \item[(l1)] for all $t\ge 0$, \begin{equation} \int_\Omega \Phi_1(u(t,x))\ \mathrm{d}x + \int_0^t \int_\Omega \left[ |\nabla f|^2 + R |\nabla (f+g)|^2 \right](s,x)\ \mathrm{d}x\mathrm{d}s \le \int_\Omega \Phi_1(u^{in}(x))\ \mathrm{d}x\,, \label{p3} \end{equation} where $\Phi_1(X) := L(X_1) + L(X_2)/\mu$, $X=(X_1,X_2)\in [0,\infty)^2$; \item[(l2)] for all $n\ge 2$ and all $t\ge 0$, \begin{subequations}\label{p4} \begin{equation} \int_\Omega \Phi_n(u(t,x))\ \mathrm{d}x \le \int_\Omega \Phi_n(u^{in}(x))\ \mathrm{d}x\,, \label{p4a} \end{equation} where $\Phi_n$ is the homogeneous polynomial of degree $n$ given by \begin{equation} \Phi_n(X) := \sum_{j=0}^n a_{j,n} X_1^j X_2^{n-j}\,, \qquad X=(X_1,X_2)\in \mathbb{R}^2\,, \label{p4b} \end{equation} with $a_{0,n}:=1$, \begin{equation} \label{p4c} \begin{split} a_{j,n} & := \binom{n}{j} \prod_{k=0}^{j-1} \frac{k +\alpha_{k,n}}{\alpha_{k,n}} > 0\,, \qquad 1\le j \le n\,, \\ \alpha_{k,n} & := R [ k + \mu(n-k-1)]> 0\,, \qquad 0 \le k \le n-1\,. \end{split} \end{equation} \end{subequations} In addition, $\Phi_n$ is convex on $[0,\infty)^2$; \item[(l3)] for $t\ge 0$, \begin{equation} \|(f+g)(t)\|_\infty \le \frac{1+R}{R} \|f^{in}+g^{in}\|_\infty\,. \label{p5} \end{equation} \end{itemize} \end{theorem} For $n=2$, Theorem~\ref{MainTh} gives $(a_{0,2},a_{1,2},a_{2,2})=(1,2,(1+R)/R)$. Therefore, \begin{equation*} \Phi_2(X) = \frac{1+R}{R} X_1^2 + (X_1+X_2)^2\,, \qquad X\in\mathbb{R}^2\,, \end{equation*} and \begin{equation*} \mathcal{E}(f,g) = \frac{R}{2} \int_\Omega \Phi_2((f,g))\ \mathrm{d}x\,, \end{equation*} so that we recover the time monotonicity of the energy from \eqref{p4a} with $n=2$. It seems worth pointing out that the availability of an infinite number of Liapunov functionals, eventually leading to $L_\infty$-estimates, seems rather seldom for cross-diffusion systems and that we are not aware of other systems sharing this feature. Whether it is possible to extend the analysis performed in this paper to a broader class of cross-diffusion systems will be the subject of future research. To construct the family of polynomials $(\Phi_n)_{n\ge 2}$, we introduce $u=(f,g)$ and the mobility matrix \begin{equation}\label{moma} M(X) = (m_{jk}(X))_{1\le j,k\le 2} := \begin{pmatrix} (1+R) X_1 & R X_1 \\ \mu R X_2 & \mu R X_2 \end{pmatrix}\,, \qquad X\in\mathbb{R}^2\,, \end{equation} so that \eqref{tfm1a}-\eqref{tfm1b} becomes \begin{equation} \partial_t u - \sum_{i=1}^d \partial_i \left( M(u) \partial_i u \right) = 0 \;\text{ in }\; (0,\infty)\times \Omega\,. \label{in4} \end{equation} We then use the observation that, since $\Phi\in C^2(\mathbb{R}^2)$, \eqref{tfm1c}, \eqref{in4}, and the symmetry of the Hessian matrix $D^2\Phi(u)$ of $\Phi$ entail that \begin{equation} \frac{d}{dt} \int_\Omega \Phi(u)\ \mathrm{d}x + \sum_{i=1}^d \int_\Omega \langle D^2\Phi(u) M(u) \partial_i u , \partial_i u \rangle\ \mathrm{d}x = 0\,. \label{in5} \end{equation} It readily follows from \eqref{in5} that $\Phi$ provides a Liapunov functional for \eqref{in4} as soon as the matrix~${D^2\Phi(u) M(u)}$ is symmetric and positive semidefinite. Using the ansatz~\eqref{p4b} for $\Phi=\Phi_n$ and the explicit form of the matrix $M$, we then compute $D^2\Phi_n(u) M(u)$ and require that it is a symmetric matrix, thereby obtaining \eqref{p4c}. 
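The algebraic part of this construction can be checked symbolically for small values of $n$; the following short script (a Python/SymPy sketch, included only as an illustration and not used in the proofs) builds $\Phi_n$ from \eqref{p4b}-\eqref{p4c}, forms the mobility matrix \eqref{moma}, verifies that $D^2\Phi_n(X)\,M(X)$ is symmetric, and displays the Hessian of $\Phi_n$ together with its determinant, from which the convexity of $\Phi_n$ on $[0,\infty)^2$ can be inspected.
\begin{verbatim}
# Illustration only: symbolic check of the construction (p4b)-(p4c).
import sympy as sp

X1, X2, R, mu = sp.symbols('X1 X2 R mu', positive=True)
M = sp.Matrix([[(1 + R)*X1, R*X1], [mu*R*X2, mu*R*X2]])  # matrix (moma)

def Phi(n):
    a = [sp.Integer(1)]                                  # a_{0,n} = 1
    for j in range(1, n + 1):
        p = sp.Integer(1)
        for k in range(j):
            alpha = R*(k + mu*(n - k - 1))               # alpha_{k,n}
            p *= (k + alpha)/alpha
        a.append(sp.binomial(n, j)*p)                    # a_{j,n}
    return sum(a[j]*X1**j*X2**(n - j) for j in range(n + 1))

for n in (2, 3, 4):
    H = sp.hessian(Phi(n), (X1, X2))                     # D^2 Phi_n
    A = H*M                                              # D^2 Phi_n(X) M(X)
    print(n, 'D^2 Phi_n M symmetric:',
          sp.simplify(A[0, 1] - A[1, 0]) == 0)
    print('  Hessian:', H.applyfunc(sp.expand),
          ' det:', sp.factor(H.det()))
\end{verbatim}
For $n=2$ the script reproduces the coefficients $(1,2,(1+R)/R)$ recalled after the theorem.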
Direct computations then show that the polynomial thus obtained is actually non-negative and convex on $[0,\infty)^2$, see section~\ref{sec04} and appendix~\ref{secapA}. Having uncovered the estimates~(l1)-(l3) at a somewhat formal level, it is tempting to exploit them to construct a bounded weak solution to \eqref{tfm1} endowed with these properties. The difficulty we face here is the construction of an appropriate approximation of \eqref{tfm1} having sufficiently smooth solutions for which the estimates~(l1)-(l3) remain valid. In particular, boundedness of solutions to the approximation seems to be required to be able to compute the integral of $\Phi_n(u)$. Unfortunately, we have yet been unable to design an approximation scheme producing bounded solutions while preserving the structure leading to~(l1)-(l3) that could work in arbitrary space dimensions and we only provide below the existence of a bounded weak solution to~\eqref{tfm1} in one space dimension $d=1$. \begin{theorem}\label{ThBWS} Let $R>0$, $\mu>0$, $u^{in} := (f^{in},g^{in})\in L_{\infty,+}(\Omega,\mathbb{R}^2)$, and assume that $d=1$ (so that~$\Omega$ is a bounded interval of $\mathbb{R}$). Then, there is a bounded weak solution $u:=(f,g)$ to \eqref{tfm1} which satisfies: \begin{itemize} \item[(p1)] for each $T>0$, \begin{equation} (f,g)\in L_{\infty,+}((0,T)\times \Omega,\mathbb{R}^2) \cap L_2((0,T),H^1(\Omega,\mathbb{R}^2))\cap W_2^1((0,T),H^1 (\Omega,\mathbb{R}^2)')\,; \label{p1} \end{equation} \item[(p2)] for all $\varphi\in H^1(\Omega)$ and $t\ge 0$, \begin{subequations}\label{p2} \begin{equation} \int_\Omega (f(t,x)-f^{in}(x)) \varphi(x)\ \mathrm{d}x +\int_0^t \int_\Omega f(s,x) \partial_x\left[ (1+R) f + R g \right](s,x) \cdot \partial_x\varphi(x)\, \mathrm{d}x\mathrm{d}s =0 \label{p2a} \end{equation} and \begin{equation} \int_\Omega (g(t,x)-g^{in}(x)) \varphi(x)\ \mathrm{d}x + \mu R \int_0^t \int_\Omega g(s,x) \partial_x\left( f + g \right)(s,x) \cdot \partial_x\varphi(x)\, \mathrm{d}x\mathrm{d}s =0\,; \label{p2b} \end{equation} \end{subequations} \item[(p3)] and the properties (l1), (l2), and (l3) stated in Theorem~\ref{MainTh}. \end{itemize} \end{theorem} A key ingredient in the proof of Theorem~\ref{ThBWS} is the continuous embedding of $H^1(\Omega)$ in $L_\infty(\Omega)$, which readily provides the above mentioned boundedness of solutions to the approximation of~\eqref{tfm1} designed below and is of course only available in one space dimension. Besides, we employ a rather classical approximation approach, relying on a time implicit Euler scheme with constant time step~${\tau\in (0,1)}$ for the time discretization and a suitable modification of the mobility matrix to a non-degenerate one. As a consequence of Theorem~\ref{ThBWS} and of the estimate~\eqref{LUB}, we obtain uniform $L_\infty$-bounds for the height $f$ of the denser fluid in the regime where $R\to 0$ and $\mu$ is fixed. Such an estimate has been used recently in \cite[Corollary~1.4]{LM2021a} when performing the singular limit $R\to0$ (while~$\mu$ is kept fixed or~$\mu\to\infty$) in the thin film Muskat problem \eqref{tfm1} in order to recover the porous medium equation $\partial_t f=\mathrm{div}\big(f \nabla f\big)$ in the limit. \begin{corollary}\label{Cor:1} If $R\max\{1,\mu\}\in (0,1/(2e)]$, then the solution $u=(f,g)$ to \eqref{tfm1} from Theorem~\ref{ThBWS} satisfies \begin{equation}\label{estaaa} \|f(t)\|_\infty\leq \big( 1 + e\max\{1,\mu\} \big) \|f^{in}\|_\infty + \| g^{in}\|_\infty\,, \qquad t\geq0. 
\end{equation} \end{corollary} We provide the proof of Theorem~\ref{ThBWS} in sections~\ref{sec02} and ~\ref{sec03} below. It involves three steps: we begin with the existence of a weak solution to the implicit time discrete scheme associated to \eqref{tfm1} which satisfies a time discrete version of the estimates~\eqref{p4} and~\eqref{p5} (section~\ref{sec02}). This result is achieved with an approximation procedure which is designed and studied in section~\ref{sec02.1}, a technical lemma being postponed to appendix~\ref{secapB}. The next section~\ref{sec02.2} is devoted to the time discrete version of the estimates~\eqref{p4} and~\eqref{p5}, the proof of the properties of the polynomials $\Phi_n$, $n\ge 2$, being collected in appendix~\ref{secapA}. The proof of Theorem~\ref{ThBWS} is given in section~\ref{sec03} and is based on a compactness method. We finally prove Theorem~\ref{MainTh} in section~\ref{sec04}. \paragraph{\textbf{Notation.}} For $p\in [1,\infty]$, we denote by $\|\cdot\|_p$ the $L_p$-norm in $L_p(\Omega)$, $L_p(\Omega,\mathbb{R}^2) := L_p(\Omega)\times L_p(\Omega)$, and~$H^1(\Omega,\mathbb{R}^2):= H^1(\Omega)\times H^1(\Omega)$. The positive cone of a Banach lattice $E$ is denoted by $E_+$. Next,~${\mathbf{M}_2(\mathbb{R})}$ denotes the space of $2\times 2$ real-valued matrices, ${\mathbf{Sym}_2(\mathbb{R})}$ is the subset of ${\mathbf{M}_2(\mathbb{R})}$ consisting of symmetric matrices, and $\mathbf{SPD}_2(\mathbb{R})$ is the set of symmetric and positive definite matrices in $\mathbf{M}_2(\mathbb{R})$. Finally, we denote the positive part of a real number $r\in\mathbb{R}$ by~${r_+:=\max\{r,0\}}$ and $\langle \cdot,\cdot \rangle$ is the scalar product on $\mathbb{R}^2$. \section{A time discrete scheme: $d=1$}\label{sec02} Throughout this section, we assume that $d=1$ and $\Omega$ is a bounded open interval of $\mathbb{R}$. In order to construct bounded non-negative global weak solutions to the evolution problem~\eqref{tfm1} we introduce an implicit time discrete scheme, see \eqref{ex1a}-\eqref{ex1b}, as well as a regularized version of this scheme, see \eqref{ex12a}. The aim of this section is to prove the existence of a bounded weak solution to the implicit time discrete scheme, as stated in Proposition~\ref{prop.e1}. \begin{proposition}\label{prop.e1} Given $\tau>0$ and $U=(F,G)\in L_{\infty,+}(\Omega,\mathbb{R}^2)$, there is a weak solution~$u=(f,g)$ with~$u\in H^1_+(\Omega,\mathbb{R}^2)$ to \begin{subequations}\label{ex1} \begin{align} \int_\Omega \Big[ f \varphi + \tau f \partial_x\left[ (1+R) f + R g \right] \cdot\partial_x\varphi \Big]\ \mathrm{d}x & = \int_\Omega F \varphi\ \mathrm{d}x\,, \qquad \varphi\in H^1(\Omega)\,, \label{ex1a} \\ \int_\Omega \Big[ g \psi + \tau \mu R g \partial_x(f + g)\cdot \partial_x\psi \Big]\ \mathrm{d}x & = \int_\Omega G \psi\ \mathrm{d}x\,, \qquad \psi\in H^1(\Omega)\,, \label{ex1b} \end{align} \end{subequations} which also satisfies \begin{equation} \int_\Omega \Phi_n(u)\ \mathrm{d}x \le \int_\Omega \Phi_n(U)\ \mathrm{d}x \label{ex2} \end{equation} for $n\ge 2$, \begin{equation} \|f+g\|_\infty \le \frac{1+R}{R} \|F+G\|_\infty\,, \label{ex2a} \end{equation} and \begin{equation} \int_\Omega \Phi_1(u)\ \mathrm{d}x + \tau \int_\Omega \left[ |\partial_x f|^2 + R |\partial_x (f + g)|^2 \right]\ \mathrm{d}x \le \int_\Omega \Phi_1(U)\ \mathrm{d}x\,. \label{ex2b} \end{equation} \end{proposition} We fix $\tau>0$ and $U=(F,G)\in L_{\infty,+}(\Omega,\mathbb{R}^2)$. 
Recalling the definition~\eqref{moma} of the mobility matrix \begin{equation*} M(X) = \begin{pmatrix} (1+R) X_1 & R X_1 \\ \mu R X_2 & \mu R X_2 \end{pmatrix}\,, \qquad X\in\mathbb{R}^2\,, \end{equation*} an alternative formulation of \eqref{ex1} reads \begin{equation} \int_\Omega \left[\langle u,v \rangle + \tau \langle M(u) \partial_x u , \partial_x v\rangle \right]\ \mathrm{d}x = \int_\Omega \langle U,v\rangle\ \mathrm{d}x\,, \qquad v\in H^1(\Omega,\mathbb{R}^2)\,. \label{ex3} \end{equation} Obviously, the mobility matrix $M(X)$ is in general not symmetric and the associated quadratic form \begin{equation*} \mathbb{R}^2\ni\xi=(\xi_1,\xi_2) \mapsto \sum_{j,k=1}^2 m_{jk}(X) \xi_j \xi_k\in \mathbb{R} \end{equation*} is not positive definite (even if $X\in[0,\infty)^2)$, two features which complicate the analysis concerning the solvability of \eqref{ex3}. Fortunately, as noticed in \cite{DGJ1997}, the underlying gradient flow structure provides a way to transform \eqref{ex3} to an elliptic system with symmetric and positive semidefinite matrix. More precisely, we introduce the symmetric matrix $S$ with constant coefficients \begin{equation*} S := \begin{pmatrix} 1+R & R \\ R & R \end{pmatrix}\,, \end{equation*} which is actually the Hessian matrix of $R\Phi_2/2$. Clearly, $S$ belongs to $\mathbf{SPD}_2(\mathbb{R})$ and \begin{equation} \langle S \xi , \xi \rangle = \xi_1^2 + R \left( \xi_1 + \xi_2 \right)^2 \ge \frac{R}{1+2R} |\xi|^2\,, \qquad \xi\in\mathbb{R}^2\,. \label{ex4} \end{equation} Choosing $Sv$ instead of $v\in H^1(\Omega,\mathbb{R}^2)$ as a test function in \eqref{ex3} and using the symmetry of $S$, lead to another alternative formulation of \eqref{ex1a}-\eqref{ex1b} (or \eqref{ex3}), which reads \begin{equation} \int_\Omega \left[ \langle Su,v \rangle + \tau \langle SM(u) \partial_x u , \partial_x v\rangle \right]\ \mathrm{d}x = \int_\Omega \langle SU,v\rangle\ \mathrm{d}x\,, \qquad v\in H^1(\Omega,\mathbb{R}^2)\,. \label{ex5} \end{equation} We next observe that, for $X\in [0,\infty)^2$, \begin{equation} SM(X) = \begin{pmatrix} (1+R)^2 X_1 + \mu R^2 X_2 & (1+R)R X_1 + \mu R^2 X_2 \\ (1+R)R X_1 + \mu R^2 X_2 & R^2 X_1 + \mu R^2 X_2 \end{pmatrix} \label{ex6a} \end{equation} and \begin{equation} \begin{split} \langle SM(X)\xi,\xi\rangle & = X_1 ((1+R) \xi_1 + R \xi_2 )^2 + \mu R^2 X_2 (\xi_1+\xi_2)^2\geq 0\,. \end{split}\label{ex6b} \end{equation} Consequently, $SM(X)$ belongs to $\mathbf{SPD}_2(\mathbb{R})$ for all $X\in (0,\infty)^2$ and the formulation~\eqref{ex5} seems more appropriate to study the solvability of \eqref{ex1a}-\eqref{ex1b}. However, the matrix $SM(X)$ is still degenerate as $X_1\to 0$ or $X_2\to 0$, so that we first solve a regularized problem in the next section. \subsection{A regularization of the time discrete scheme}\label{sec02.1} Let $\varepsilon\in (0,1)$ and define \begin{equation} M_{\varepsilon}(X) := (m_{\varepsilon, jk}(X))_{1\le j,k\le 2}:= \varepsilon I_2 + M((X_{1,+},X_{2,+}))\,, \qquad X\in\mathbb{R}^2\,. 
\label{ex10} \end{equation} \begin{lemma}\label{lem.ex2} Given $\tau>0$, $U=(U_1,U_2)\in L_{\infty,+}(\Omega,\mathbb{R}^2)$, and $\varepsilon\in (0,1)$, there is a weak solution~${u_{\varepsilon} =(u_{1,\varepsilon},u_{2,\varepsilon})\in H^1_+(\Omega,\mathbb{R}^2)}$ to \begin{equation} \int_\Omega \left[ \langle u_{\varepsilon} , v \rangle + \tau \langle M_{\varepsilon}(u_{\varepsilon}) \partial_x u_{\varepsilon} , \partial_x v \rangle \right]\ \mathrm{d}x = \int_\Omega \langle U , v \rangle\ \mathrm{d}x\,, \qquad v\in H^1(\Omega,\mathbb{R}^2)\,. \label{ex12a} \end{equation} Moreover, \begin{equation} \| u_{1,\varepsilon} + u_{2,\varepsilon} \|_\infty \le \frac{1+R}{R} \|U_1+U_2\|_\infty\,, \label{ex12c} \end{equation} and, for $n\ge 2$, \begin{equation} \int_\Omega \Phi_n(u_{\varepsilon})\ \mathrm{d}x \le \int_\Omega \Phi_n(U)\ \mathrm{d}x\,. \label{ex12b} \end{equation} \end{lemma} \begin{proof} For each $\varepsilon\in (0,1)$, $M_{\varepsilon}$ lies in $C(\mathbb{R}^2,\mathbf{M}_2(\mathbb{R}))$ and satisfies \begin{subequations}\label{ex11} \begin{equation} \begin{split} m_{\varepsilon,11}(X) \ge m_{\varepsilon,12}(X) & = 0\,, \qquad X\in (-\infty,0)\times \mathbb{R}\,, \\ m_{\varepsilon,22}(X) \ge m_{\varepsilon,21}(X) & = 0\,, \qquad X\in \mathbb{R}\times (-\infty,0)\,. \end{split} \label{ex11b} \end{equation} In addition, it follows from \eqref{ex4}, \eqref{ex6a}, \eqref{ex6b}, and \eqref{ex10} that $SM_{\varepsilon}(X)$ belongs to $\mathbf{SPD}_2(\mathbb{R})$ for all~${X\in\mathbb{R}^2}$ with \begin{equation} \langle SM_{\varepsilon}(X)\xi,\xi\rangle \ge \frac{\varepsilon R}{1+2R} |\xi|^2\,, \qquad \xi\in\mathbb{R}^2\,. \label{ex11c} \end{equation} \end{subequations} According to the properties~\eqref{ex11}, we are now in a position to apply Lemma~\ref{lem.ap1} (with $A=S$ and~${B=M_{\varepsilon}}$) and deduce that there is a non-negative solution $u_{\varepsilon}\in H^1(\Omega,\mathbb{R}^2)$ to \eqref{ex12a}. In the remaining part, we prove that~$u_\varepsilon$ satisfies both estimates~\eqref{ex12c} and~\eqref{ex12b}. We begin with~\eqref{ex12b} and thus consider $n\ge 2$. Since $\Phi_n$ is a polynomial and $H^1(\Omega,\mathbb{R}^2)$ continuously embeds in $L_\infty(\Omega,\mathbb{R}^2)$, the vector field $D\Phi_n(u_\varepsilon)$ belongs to $H^1(\Omega,\mathbb{R}^2)$. We may then take $v=D\Phi_n(u_\varepsilon)$ in~\eqref{ex12a} to obtain \begin{equation} \int_\Omega \left[ \langle u_{\varepsilon} - U , D\Phi_n(u_\varepsilon) \rangle + \tau \langle M_{\varepsilon}(u_{\varepsilon}) \partial_x u_{\varepsilon} , \partial_x( D\Phi_n(u_\varepsilon)) \rangle \right]\ \mathrm{d}x = 0 \,. \label{ex13} \end{equation} On the one hand, it follows from the convexity of $\Phi_n$ on $[0,\infty)^2$, see Proposition~\ref{prlfa}~(a), that \begin{equation} \int_\Omega \langle u_{\varepsilon} - U , D\Phi_n(u_\varepsilon) \rangle\ \mathrm{d}x \ge \int_\Omega \left[ \Phi_n(u_\varepsilon) - \Phi_n(U) \right]\ \mathrm{d}x \,. 
\label{ex14} \end{equation} On the other hand, owing to the symmetry of $D^2\Phi_n(u_\varepsilon)$, \begin{align*} \langle M_{\varepsilon}(u_{\varepsilon}) \partial_x u_{\varepsilon} , \partial_x (D\Phi_n(u_\varepsilon)) \rangle & = \langle D^2\Phi_n(u_\varepsilon) M_{\varepsilon}(u_{\varepsilon}) \partial_x u_{\varepsilon} , \partial_x u_{\varepsilon} \rangle \\ & = \varepsilon \langle D^2\Phi_n(u_\varepsilon) \partial_x u_{\varepsilon} , \partial_x u_{\varepsilon} \rangle \\ & \qquad + \langle D^2\Phi_n(u_\varepsilon) M(u_{\varepsilon}) \partial_x u_{\varepsilon} , \partial_x u_{\varepsilon} \rangle\,. \end{align*} Since both $D^2\Phi_n(u_\varepsilon)$ and $D^2\Phi_n(u_\varepsilon) M(u_{\varepsilon})$ belong to $\mathbf{SPD}_2(\mathbb{R})$ by Proposition~\ref{prlfa}, we conclude that \begin{equation} \langle M_{\varepsilon}(u_{\varepsilon}) \partial_x u_{\varepsilon} , \partial_x D\Phi_n(u_\varepsilon) \rangle \ge 0\,. \label{ex17} \end{equation} Combining \eqref{ex13}, \eqref{ex14}, and \eqref{ex17}, we end up with \begin{equation*} \int_\Omega \left[ \Phi_n(u_{\varepsilon}) - \Phi_n(U) \right]\ \mathrm{d}x \le 0\,, \end{equation*} and we have established \eqref{ex12b}. It next follows from \eqref{ex12b} and Lemma~\ref{lelfb} that \begin{align*} \| u_{1,\varepsilon} + u_{2,\varepsilon} \|_n & \le \left( \int_\Omega \Phi_n(u_{\varepsilon})\ \mathrm{d}x \right)^{1/n} \le \left( \int_\Omega \Phi_n(U)\ \mathrm{d}x \right)^{1/n} \\ & \le \frac{1+R}{R} \|U_{1} + U_{2}\|_n\,. \end{align*} Hence, letting $n\to\infty$ in the above inequality leads us to \eqref{ex12c}, and the proof is complete. \end{proof} We next derive estimates on $\partial_x u_{\varepsilon}$. \begin{lemma}\label{lem.ex3} Let $\tau>0$, $U\in L_{\infty,+}(\Omega,\mathbb{R}^2)$, and $\varepsilon\in (0,1)$. The weak solution~$u_{\varepsilon} =(u_{1,\varepsilon},u_{2,\varepsilon})$ constructed in Lemma~\ref{lem.ex2} satisfies \begin{equation*} \int_\Omega \Phi_1(u_{\varepsilon})\ \mathrm{d}x + \tau \int_\Omega \left[ |\partial_x u_{1,\varepsilon}|^2 + R |\partial_x (u_{1,\varepsilon} + u_{2,\varepsilon})|^2 \right]\ \mathrm{d}x \le \int_\Omega \Phi_1(U)\ \mathrm{d}x\,. \end{equation*} \end{lemma} \begin{proof} Let $\eta\in (0,1)$. Recalling that $u_{\varepsilon}\in H^1_+(\Omega,\mathbb{R}^2)$ has non-negative components, we deduce that the vector field~$\left( \ln{(u_{1,\varepsilon}+\eta)}, \ln{(u_{2,\varepsilon}+\eta)/\mu} \right)$ belongs to $H^1(\Omega,\mathbb{R}^2)$, and we infer from \eqref{ex12a} that \begin{align} 0 & = \int_\Omega \left[ (u_{1,\varepsilon} - U_1) \ln{(u_{1,\varepsilon}+\eta)} + \frac{1}{\mu} (u_{2,\varepsilon} - U_2)\ln{(u_{2,\varepsilon}+\eta)} \right]\ \mathrm{d}x \nonumber \\ & \qquad + \tau \int_\Omega \Big( m_{\varepsilon,11}(u_{\varepsilon}) \partial_x u_{1,\varepsilon} + m_{\varepsilon,12}(u_{\varepsilon}) \partial_x u_{2,\varepsilon} \Big) \frac{\partial_x u_{1,\varepsilon}}{u_{1,\varepsilon}+\eta}\ \mathrm{d}x \label{ex19}\\ & \qquad + \frac{\tau}{\mu} \int_\Omega\Big( m_{\varepsilon,21}(u_{\varepsilon}) \partial_x u_{1,\varepsilon} + m_{\varepsilon,22}(u_{\varepsilon}) \partial_x u_{2,\varepsilon} \Big) \frac{\partial_x u_{2,\varepsilon}}{u_{2,\varepsilon}+\eta}\ \mathrm{d}x \,. 
\nonumber \end{align} On the one hand, since $L'(r)=\ln{r}$, $r>0$, the convexity of $L$ guarantees that \begin{align*} & \int_\Omega \left[ (u_{1,\varepsilon} - U_1) \ln{(u_{1,\varepsilon}+\eta)} + \frac{1}{\mu} (u_{2,\varepsilon} - U_2)\ln{(u_{2,\varepsilon}+\eta)} \right]\ \mathrm{d}x \\ & \qquad \ge \int_\Omega \left[ (L(u_{1,\varepsilon}+\eta) - L(U_1+\eta)) + \frac{1}{\mu} (L(u_{2,\varepsilon}+\eta) - L(U_2+\eta)) \right]\ \mathrm{d}x \\ & \qquad = \int_\Omega \Phi_1((u_{1,\varepsilon}+\eta,u_{2,\varepsilon}+\eta))\ \mathrm{d}x - \int_\Omega \Phi_1((U_1+\eta,U_2+\eta))\ \mathrm{d}x\,. \end{align*} Owing to the continuity of $\Phi_1$ on $[0,\infty)^2$, letting $\eta\to 0$ in the above inequality gives \begin{equation} \begin{split} & \liminf_{\eta\to 0} \int_\Omega \left[ (u_{1,\varepsilon} - U_1) \ln{(u_{1,\varepsilon}+\eta)} + \frac{1}{\mu} (u_{2,\varepsilon} - U_2)\ln{(u_{2,\varepsilon}+\eta)} \right]\ \mathrm{d}x \\ & \qquad \ge \int_\Omega \Phi_1(u_{\varepsilon})\ \mathrm{d}x - \int_\Omega \Phi_1(U)\ \mathrm{d}x\,. \end{split}\label{ex20} \end{equation} On the other hand, \begin{equation}\label{ex21} \begin{aligned} D(\eta) & := \tau \int_\Omega \Big( m_{\varepsilon,11}(u_{\varepsilon}) \partial_x u_{1,\varepsilon} + m_{\varepsilon,12}(u_{\varepsilon}) \partial_x u_{2,\varepsilon} \Big) \frac{\partial_x u_{1,\varepsilon}}{u_{1,\varepsilon}+\eta}\ \mathrm{d}x \\ & \qquad + \frac{\tau}{\mu} \int_\Omega \Big( m_{\varepsilon,21}(u_{\varepsilon}) \partial_x u_{1,\varepsilon} + m_{\varepsilon,22}(u_{\varepsilon}) \partial_x u_{2,\varepsilon} \Big) \frac{\partial_x u_{2,\varepsilon}}{u_{2,\varepsilon}+\eta}\ \mathrm{d}x \\ & = \tau \varepsilon \int_\Omega \left( \frac{|\partial_x u_{1,\varepsilon}|^2}{u_{1,\varepsilon}+\eta} + \frac{|\partial_x u_{2,\varepsilon}|^2}{u_{2,\varepsilon}+\eta} \right)\ \mathrm{d}x \\ & \qquad + \tau \int_\Omega \left[ \frac{u_{1,\varepsilon}}{u_{1,\varepsilon}+\eta} |\partial_x u_{1,\varepsilon}|^2 + R |\partial_x u_{1,\varepsilon} + \partial_x u_{2,\varepsilon}|^2 \right]\ \mathrm{d}x \\ & \qquad - \tau R \int_\Omega \left[ \left( 1 - \frac{u_{1,\varepsilon}}{u_{1,\varepsilon}+\eta} \right) \partial_x u_{1,\varepsilon}\cdot \partial_x\left( u_{1,\varepsilon} + u_{2,\varepsilon} \right) \right]\ \mathrm{d}x \\ & \qquad - \tau R \int_\Omega \left[ \left( 1 - \frac{u_{2,\varepsilon}}{u_{2,\varepsilon}+\eta} \right) \partial_x u_{2,\varepsilon} \cdot\partial_x\left( u_{1,\varepsilon} + u_{2,\varepsilon} \right) \right]\ \mathrm{d}x \\ & \ge \tau \int_\Omega \left[ |\partial_x u_{1,\varepsilon}|^2 + R |\partial_x (u_{1,\varepsilon} + u_{2,\varepsilon})|^2 \right]\ \mathrm{d}x \\ & \qquad - J_0(\eta) - J_1(\eta) - J_2(\eta)\,, \end{aligned} \end{equation} with \begin{align*} J_0(\eta) & := \tau \int_\Omega \left( 1 - \frac{u_{1,\varepsilon}}{u_{1,\varepsilon}+\eta} \right) |\partial_x u_{1,\varepsilon}|^2\ \mathrm{d}x\,, \\ J_1(\eta) & := \tau R \int_\Omega \left[ \left( 1 - \frac{u_{1,\varepsilon}}{u_{1,\varepsilon}+\eta} \right) \partial_x u_{1,\varepsilon} \cdot \partial_x \left( u_{1,\varepsilon} + u_{2,\varepsilon} \right) \right]\ \mathrm{d}x\,, \\ J_2(\eta) & := \tau R \int_\Omega \left[ \left( 1 - \frac{u_{2,\varepsilon}}{u_{2,\varepsilon}+\eta} \right) \partial_x u_{2,\varepsilon} \cdot\partial_x \left( u_{1,\varepsilon} + u_{2,\varepsilon} \right) \right]\ \mathrm{d}x\,. 
\end{align*} Now, $u_{\varepsilon}\in H^1(\Omega,\mathbb{R}^2)$ and, for $j\in\{1,2\}$, \begin{align*} & \lim_{\eta\to 0} \frac{u_{j,\varepsilon}}{u_{j,\varepsilon}+\eta} = 1 \;\text{ a.e. in }\; \{x\in\Omega\ :\ u_{j,\varepsilon}>0\}\,, \\ & \lim_{\eta\to 0} \left( 1 - \frac{u_{j,\varepsilon}}{u_{j,\varepsilon}+\eta} \right) \partial_x u_{j,\varepsilon} = 0\;\text{ a.e. in }\; \{x\in\Omega\ :\ u_{j,\varepsilon}=0\}\,, \end{align*} so that we infer from Lebesgue's dominated convergence theorem that \begin{equation} \lim_{\eta\to 0} \left( J_0(\eta) + J_1(\eta) + J_2(\eta) \right) = 0\,. \label{ex22} \end{equation} Combining \eqref{ex21} and \eqref{ex22} gives \begin{equation} \liminf_{\eta\to 0} D(\eta) \ge \tau \int_\Omega \left[ |\partial_x u_{1,\varepsilon}|^2 + R |\partial_x (u_{1,\varepsilon} + u_{2,\varepsilon})|^2 \right]\ \mathrm{d}x\,. \label{ex23} \end{equation} In view of \eqref{ex20} and \eqref{ex23}, we may pass to the limit $\eta\to 0$ in \eqref{ex19} and obtain the stated inequality. \end{proof} \subsection{A time discrete scheme: existence}\label{sec02.2} Thanks to the analysis performed in the previous section, we are now in a position to take the limit $\varepsilon\to 0$ and prove Proposition~\ref{prop.e1}. \begin{proof}[Proof of Proposition~\ref{prop.e1}] Consider $\tau>0$ and $U=(F,G)\in L_{\infty,+}(\Omega,\mathbb{R}^2)$. Given $\varepsilon\in (0,1),$ let~${u_{\varepsilon} =(u_{1,\varepsilon},u_{2,\varepsilon})\in H^1_+(\Omega,\mathbb{R}^2)}$ denote the weak solution to \eqref{ex12a} provided by Lemma~\ref{lem.ex2}. It first follows from \eqref{ex12c} and the componentwise non-negativity of $u_{\varepsilon}$ that \begin{equation} \max\left\{ \|u_{1,\varepsilon}\|_\infty, \|u_{2,\varepsilon}\|_\infty \right\}\le \|u_{\varepsilon}\|_\infty \le \frac{1+R}{R} \|F+G\|_\infty\,. \label{cv01} \end{equation} In view of the non-negativity of $\Phi_1$, we infer from Lemma~\ref{lem.ex3} that \begin{equation*} \int_\Omega \left[ |\partial_x u_{1,\varepsilon}|^2 + R |\partial_x (u_{1,\varepsilon} + u_{2,\varepsilon})|^2 \right]\ \mathrm{d}x \le \frac{1}{\tau} \int_\Omega \Phi_1(U)\ \mathrm{d}x\,. \end{equation*} Hence, by \eqref{ex4}, \begin{equation} \|\partial_x u_{\varepsilon}\|_2^2 \le \frac{1+2R}{\tau R} \int_\Omega \Phi_1(U)\ \mathrm{d}x\,. \label{cv03} \end{equation} Due to the compactness of the embedding of $H^1(\Omega)$ in $L_\infty(\Omega)$, we deduce from \eqref{cv01} and \eqref{cv03} that there is $u=(f,g)\in H^1_+(\Omega,\mathbb{R}^2)$ and a sequence $(u_{\varepsilon_j})_{j\ge 1}$ such that \begin{equation} \begin{split} & u_{\varepsilon_j} \rightharpoonup u \;\text{ in }\; H^1(\Omega,\mathbb{R}^2)\,, \\ & \lim_{j\to\infty} \|u_{\varepsilon_j} - u\|_\infty = 0\,. \end{split} \label{cv04} \end{equation} An immediate consequence of \eqref{ex12c}, \eqref{ex12b}, and \eqref{cv04} is that $(f,g)$ satisfies \eqref{ex2} for $n\ge 2$ and \eqref{ex2a}. Moreover, another consequence of \eqref{cv04}, along with Lemma~\ref{lem.ex3} and a weak lower semicontinuity argument, is that $(f,g)$ satisfies \eqref{ex2b}. Finally, owing to \eqref{cv04} and the boundedness of the coefficients of $M_{\varepsilon}(u_{\varepsilon})$ due to \eqref{cv01}, we may use Lebesgue's dominated convergence theorem to take the limit $j\to\infty$ in the identity \eqref{ex12a} for $u_{\varepsilon_j}$ and conclude that $(f,g)$ satisfies \eqref{ex1}, thereby completing the proof of Proposition~\ref{prop.e1}. 
\end{proof} \section{Existence of bounded weak solutions: $d=1$}\label{sec03} This section is devoted to the proof of Theorem~\ref{ThBWS}. To this end, we argue in a standard way and construct, starting from the initial condition $(f^{in},g^{in})\in L_{\infty,+}(\Omega,\mathbb{R}^2)$ and using Proposition~\ref{prop.e1}, a family of piecewise constant functions $(u^\tau)_{\tau\in(0,1)}$. Specifically, we set $u^\tau(0):=u_0^\tau$ and
\begin{equation}\label{x01} u^\tau(t)= u^\tau_{l}\,, \qquad t\in ((l-1)\tau, l\tau]\,, \quad 1\leq l\in\mathbb{N}\,, \end{equation}
where, given $\tau\in(0,1)$, the sequence $(u_{l}^\tau)_{l\geq0}$ is defined as follows:
\begin{equation} \begin{split} &u_0^\tau := u^{in} =(f^{in},g^{in})\in L_{\infty,+}(\Omega,\mathbb{R}^2)\,, \\ &u_{l+1}^\tau =(f_{l+1}^\tau,g_{l+1}^\tau)\in H^1_+(\Omega,\mathbb{R}^2) \;\text{is the solution to \eqref{ex1}} \\ &\text{with $ U=u_l^\tau=(f_l^\tau,g_l^\tau)$ constructed in Proposition~\ref{prop.e1} for $ l\ge 0$}\,. \end{split}\label{x02} \end{equation}
In order to establish Theorem~\ref{ThBWS}, we show that the family $(u^\tau)_{\tau\in(0,1)}$ converges along a subsequence~${\tau_j\to0}$ towards a pair $u=(f,g)$ which fulfills all the requirements of Theorem~\ref{ThBWS}. Throughout this section, $C$ and $C_i$, with~${i\ge 0}$, denote positive constants depending only on~$R$,~$\mu$, and~${(f^{in},g^{in})}$. Dependence upon additional parameters will be indicated explicitly. \begin{proof}[Proof of Theorem~\ref{ThBWS}] Let $\tau\in (0,1)$ and let $u^\tau$ be defined in \eqref{x01}-\eqref{x02}. Given $l\ge 0$, we infer from Proposition \ref{prop.e1} that
\begin{subequations}\label{x03} \begin{align} \int_\Omega \Big[ f_{l+1}^\tau \varphi + \tau f_{l+1}^\tau \partial_x\left[ (1+R) f_{l+1}^\tau + R g_{l+1}^\tau \right] \partial_x\varphi \Big]\ \mathrm{d}x & = \int_\Omega f_{l}^\tau \varphi\ \mathrm{d}x\,, \qquad \varphi\in H^1(\Omega)\,, \label{x03a} \\ \int_\Omega \Big[ g_{l+1}^\tau \psi + \tau \mu R g_{l+1}^\tau \partial_x(f_{l+1}^\tau + g_{l+1}^\tau) \partial_x\psi \Big]\ \mathrm{d}x & = \int_\Omega g_{l}^\tau \psi\ \mathrm{d}x\,, \qquad \psi\in H^1(\Omega)\,. \label{x03b} \end{align} \end{subequations}
Moreover,
\begin{equation} \int_\Omega \Phi_n(u_{l+1}^\tau)\ \mathrm{d}x \le \int_\Omega \Phi_n(u_{l}^\tau)\ \mathrm{d}x \label{x04} \end{equation}
for $n\ge 2$ and we also have
\begin{equation} \int_\Omega \Phi_1(u_{l+1}^\tau)\ \mathrm{d}x + \tau \int_\Omega \left[ |\partial_x f_{l+1}^\tau|^2 + R |\partial_x (f_{l+1}^\tau + g_{l+1}^\tau)|^2 \right]\ \mathrm{d}x \le \int_\Omega \Phi_1(u_{l}^\tau)\ \mathrm{d}x\,. \label{x05} \end{equation}
It readily follows from \eqref{x01}, \eqref{x02}, \eqref{x04}, and \eqref{x05} that, for $t>0$,
\begin{equation} \int_\Omega \Phi_1(u^\tau(t))\ \mathrm{d}x + \int_0^t \int_\Omega \left[ |\partial_x f^\tau(s)|^2 + R |\partial_x (f^\tau + g^\tau)(s)|^2 \right]\ \mathrm{d}x\mathrm{d}s \le \int_\Omega \Phi_1(u^{in})\ \mathrm{d}x\,, \label{x07} \end{equation}
and
\begin{equation} \int_\Omega \Phi_n(u^\tau(t))\ \mathrm{d}x \le \int_\Omega \Phi_n(u^{in})\ \mathrm{d}x\,, \qquad n\ge 2\,. \label{x06} \end{equation}
An immediate consequence of \eqref{x06} and Lemma~\ref{lelfb} is the estimate
\begin{equation*} \|f^\tau(t)+g^\tau(t)\|_n \le \frac{1+R}{R} \|f^{in}+g^{in}\|_n\,, \qquad n\ge 2\,, \ t>0\,. \end{equation*}
Letting $n\to\infty$ in the above inequality gives
\begin{equation} \|f^\tau(t)+g^\tau(t)\|_\infty \le C_1 := \frac{1+R}{R} \|f^{in}+g^{in}\|_\infty\,, \qquad t>0\,.
\label{x08} \end{equation} Also, it readily follows from \eqref{ex4}, \eqref{x07}, and the non-negativity of $\Phi_1$ that \begin{align*} & \frac{R}{1+2R} \int_0^t \int_\Omega \left( |\partial_x f^\tau(s)|^2 + |\partial_x g^\tau(s)|^2 \right)\ \mathrm{d}x\mathrm{d}s \\ & \qquad \le \int_\Omega \Phi_1(u^\tau(t))\ \mathrm{d}x + \int_0^t \int_\Omega \left[ |\partial_x f^\tau(s)|^2 + R |\partial_x (f^\tau + g^\tau)(s)|^2 \right]\ \mathrm{d}x\mathrm{d}s \\ & \qquad \le \int_\Omega \Phi_1(u^{in})\ \mathrm{d}x\,. \end{align*} Therefore we have \begin{equation} \int_0^t \left( \|\partial_x f^\tau(s)\|_2^2 + \|\partial_x g^\tau(s)\|_2^2 \right)\ \mathrm{d}s \le C_2 := \frac{1+2R}{R} \int_\Omega \Phi_1(u^{in})\ \mathrm{d}x\,, \qquad t>0\,. \label{x09} \end{equation} Next, for $l\ge 1$ and $t\in ((l-1)\tau,l\tau]$, we deduce from \eqref{x03a}, \eqref{x08}, and H\"older's inequality that, for $\varphi\in H^1(\Omega)$, \begin{align*} \left| \int_\Omega \left( f^{\tau}(t+\tau) - f^\tau(t) \right) \varphi\ \mathrm{d}x \right| & = \left| \int_{l\tau}^{(l+1) \tau} \int_\Omega f^\tau(s) \partial_x\left[ (1+R) f^\tau(s) + R g^\tau(s) \right] \partial_x\varphi\ \mathrm{d}x\mathrm{d}s \right| \\ & \le \int_{l\tau}^{(l+1) \tau} \| f^\tau(s)\|_\infty \|\partial_x\left[(1+R) f^\tau(s) + R g^\tau(s)\right]\|_2 \|\partial_x\varphi\|_2\ \mathrm{d}s \\ & \le C_1 \|\partial_x\varphi\|_2 \int_{l\tau}^{(l+1) \tau} \|\partial_x\left[ (1+R) f^\tau(s) + R g^\tau(s) \right]\|_2\ \mathrm{d}s \,. \end{align*} A duality argument then gives \begin{equation*} \| f^{\tau}(t+\tau) - f^\tau(t) \|_{(H^1)'} \le C_1 \int_{l\tau}^{(l+1) \tau} \|\partial_x\left[ (1+R) f^\tau(s) + R g^\tau(s) \right]\|_2\ \mathrm{d}s\,, \quad t\in ((l-1)\tau,l\tau]\,,l\geq1\,. \end{equation*} Now, for $L\ge 2$ and $T\in ((L-1)\tau,L\tau]$, the above inequality, along with H\"older's inequality, entails that \begin{align*} \int_0^{T-\tau} \| f^{\tau}(t+\tau) - f^\tau(t) \|_{(H^1)'}^2\ \mathrm{d}t & \le \int_0^{(L-1)\tau} \| f^{\tau}(t+\tau) - f^\tau(t) \|_{(H^1)'}^2\ \mathrm{d}t \\ & =\sum_{l=1}^{L-1} \int_{(l-1)\tau}^{l\tau} \| f^{\tau}(t+\tau) - f^\tau(t) \|_{(H^1)'}^2\ \mathrm{d}t \\ & \le C_1^2 \tau \sum_{l=1}^{L-1} \left( \int_{l\tau}^{(l+1) \tau} \|\partial_x\left[ (1+R) f^\tau(s) + R g^\tau(s) \right]\|_2\ \mathrm{d}s \right)^2\\ & \le C_1^2 \tau^2 \sum_{l=1}^{L-1} \int_{l\tau}^{(l+1) \tau} \|\partial_x\left[ (1+R) f^\tau(s) + R g^\tau(s) \right]\|_2^2\ \mathrm{d}s \\ & \le C_1^2 \tau^2 \int_{0}^{L \tau} \|\partial_x\left[ (1+R) f^\tau(s) + R g^\tau(s) \right]\|_2^2\ \mathrm{d}s\,. \end{align*} We then use \eqref{x09} (with $t=L\tau$) and Young's inequality to obtain \begin{align} \int_0^{T-\tau} \| f^{\tau}(t+\tau) - f^\tau(t) \|_{(H^1)'}^2\ \mathrm{d}t & \le C_1^2 \tau^2 \int_{0}^{L\tau} \left( 2 (1+R)^2 \|\nabla f^\tau(s)\|_2^2 + 2R^2 \|\nabla g^\tau(s)\|_2^2 \right)\ \mathrm{d}s \nonumber \\ & \le C_3 \tau^2 \,, \label{x10} \end{align} with $C_3 := 2 (1+R)^2 C_1^2 C_2$. Similarly, \begin{equation} \int_0^{T-\tau} \| g^{\tau}(t+\tau) - g^\tau(t) \|_{(H^1)'}^2\ \mathrm{d}t \le C_4 \tau^2 \,, \label{x11} \end{equation} with $C_4 := 2 \mu^2 R^2 C_1^2 C_2$. 
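For completeness, let us indicate how the constant $C_4$ arises; the computation is the exact analogue of the one leading to \eqref{x10}, now starting from \eqref{x03b}, whose flux term is $\mu R g_{l+1}^\tau \partial_x(f_{l+1}^\tau+g_{l+1}^\tau)$. Arguing as above with the help of \eqref{x08} and a duality argument, and then using \eqref{x09} together with the elementary inequality $\|\partial_x (f^\tau+g^\tau)(s)\|_2^2 \le 2 \|\partial_x f^\tau(s)\|_2^2 + 2 \|\partial_x g^\tau(s)\|_2^2$, one obtains
\begin{equation*}
\int_0^{T-\tau} \| g^{\tau}(t+\tau) - g^\tau(t) \|_{(H^1)'}^2\ \mathrm{d}t \le \mu^2 R^2 C_1^2 \tau^2 \int_0^{L\tau} \|\partial_x (f^\tau + g^\tau)(s)\|_2^2\ \mathrm{d}s \le 2 \mu^2 R^2 C_1^2 C_2 \tau^2\,,
\end{equation*}
which is \eqref{x11}.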
Since $H^1(\Omega,\mathbb{R}^2)$ is compactly embedded in $L_2(\Omega,\mathbb{R}^2)$ and $L_2(\Omega,\mathbb{R}^2)$ is continuously embedded in $H^1(\Omega,\mathbb{R}^2)'$, we infer from \eqref{x08}, \eqref{x09}, \eqref{x10}, \eqref{x11}, and \cite[Theorem~1]{DJ2012} that, for any $T>0$,
\begin{equation} (u^\tau)_{\tau\in (0,1)} \;\text{ is relatively compact in }\; L_2((0,T)\times\Omega,\mathbb{R}^2)\,. \label{x12} \end{equation}
Owing to \eqref{x08}, \eqref{x09}, and \eqref{x12}, we may use a Cantor diagonal argument to find a function~${u:=(f,g)}$ in~$L_{\infty,+}((0,\infty)\times\Omega,\mathbb{R}^2)$ and a sequence $(\tau_j)_{j\ge 1}$, $\tau_j\to 0$, such that, for any $T>0$ and $p\in [1,\infty)$,
\begin{equation} \begin{split} u^{\tau_j} & \longrightarrow u \;\;\text{ in }\;\; L_p((0,T)\times\Omega,\mathbb{R}^2)\,, \\ u^{\tau_j} & \stackrel{*}{\rightharpoonup} u \;\;\text{ in }\;\; L_\infty((0,T)\times\Omega,\mathbb{R}^2)\,, \\ u^{\tau_j} & \rightharpoonup u \;\;\text{ in }\;\; L_2((0,T),H^1(\Omega,\mathbb{R}^2))\,. \end{split} \label{x13} \end{equation}
In addition, the compact embedding of $L_2(\Omega,\mathbb{R}^2)$ in $H^1(\Omega,\mathbb{R}^2)'$, along with \eqref{x06} with $n=2$, \eqref{x10}, and \eqref{x11}, allows us to apply once more \cite[Theorem~1]{DJ2012} to conclude that
\begin{equation} u\in C([0,\infty),H^1(\Omega,\mathbb{R}^2)')\,. \label{x14} \end{equation}
Let us now identify the equations solved by the components $f$ and $g$ of $u$. To this end, let~${\chi\in W^1_\infty([0,\infty))}$ be a compactly supported function and $\varphi\in C^1(\bar{\Omega})$. In view of \eqref{x03a}, classical computations give
\begin{align*} & \int_0^\infty \int_\Omega \frac{\chi(t+\tau)-\chi(t)}{\tau} f^\tau(t) \varphi\ \mathrm{d}x\mathrm{d}t + \left( \frac{1}{\tau} \int_0^\tau \chi(t)\ \mathrm{d}t \right) \int_\Omega f^{in}\varphi\ \mathrm{d}x \\ & \qquad = \int_0^\infty \int_\Omega \chi(t) f^\tau(t) \partial_x\left[ (1+R)f^\tau(t) + R g^\tau(t)\right] \partial_x\varphi\ \mathrm{d}x\mathrm{d}t\,. \end{align*}
Taking $\tau=\tau_j$ in the above identity, it readily follows from \eqref{x13} and the regularity of $\chi$ and $\varphi$ that we may pass to the limit as $j\to\infty$ and conclude that
\begin{equation} \begin{split} & \int_0^\infty \int_\Omega \frac{d\chi}{dt}(t) f(t) \varphi\ \mathrm{d}x\mathrm{d}t + \chi(0) \int_\Omega f^{in}\varphi\ \mathrm{d}x \\ & \qquad = \int_0^\infty \int_\Omega \chi(t) f(t) \partial_x\left[ (1+R)f(t) + R g(t)\right] \partial_x \varphi\ \mathrm{d}x\mathrm{d}t\,. \end{split}\label{x15} \end{equation}
Since $f\partial_x f$ and $f\partial_x g$ belong to $L_2((0,T)\times\Omega)$ for all $T>0$ by \eqref{x13}, a density argument ensures that the identity~\eqref{x15} is valid for any $\varphi\in H^1(\Omega)$. We next use the time continuity \eqref{x14} of $f$ and a classical approximation argument to show that $f$ solves \eqref{p2a}. A similar argument allows us to derive \eqref{p2b} from \eqref{x03b}. Finally, combining \eqref{x13}, \eqref{x14}, and a weak lower semicontinuity argument, we may let $j\to\infty$ in \eqref{x07}, \eqref{x06}, and \eqref{x08} with $\tau=\tau_j$ to show that $u=(f,g)$ satisfies \eqref{p3}, \eqref{p4a}, and \eqref{p5}, thereby completing the proof. \end{proof} We conclude this section with the proof of Corollary~\ref{Cor:1}. \begin{proof}[Proof of Corollary~\ref{Cor:1}] Assume that $R\max\{1,\mu\}\in(0,1/(2e)]$.
Given an integer $m\ge 1$, we define the function $\xi:(0,1/(2e)]\to\mathbb{R}$ by the formula
\begin{equation*} \xi(y):=\exp\left\{m \left[ (1+ y) \ln\Big(1+\frac{1}{y}\Big) - 1 \right]\right\}-1. \end{equation*}
It then holds that
\begin{align*} y^m\xi(y)&=(1+y)^m\exp\left\{m \left[ y \ln\Big(1+\frac{1}{y}\Big) - 1 \right]\right\}-y^m>\frac{(1+y)^m}{e^m} -y^m\geq \frac{1}{e^m} -\frac{1}{(2e)^m}\geq\frac{1}{2e^m}\,. \end{align*}
Consequently, applying this inequality with $y=R\max\{1,\mu\}\in (0,1/(2e)]$ and $m=n-1$, and noting that $\xi(R\max\{1,\mu\})=\nu_n$, the constant $\nu_n$ defined in Lemma~\ref{lelfb} satisfies
\begin{equation*} \nu_n>\frac{1}{2(eR\max\{1,\mu\})^{n-1}}\,,\qquad n\geq 2\,. \end{equation*}
We then infer from Theorem~\ref{ThBWS}~(p3), the above inequality, and~\eqref{LUB} that, for $t>0$ and $n\ge 2$,
\begin{align*} \|f(t)\|_n^n & \le \frac{1}{\nu_n} \int_\Omega \Phi_n((f(t),g(t)))\ \mathrm{d}x \le \frac{1}{\nu_n} \int_\Omega \Phi_n((f^{in},g^{in}))\ \mathrm{d}x \\ & \le \frac{2(eR\max\{1,\mu\})^{n-1}}{R^n} \|(1+R) f^{in} + R g^{in}\|_n^n\,. \end{align*}
Hence,
\begin{equation*} \|f(t)\|_n \le \left( \frac{2}{R} \right)^{1/n} (e\max\{1,\mu\})^{(n-1)/n} \|(1+R) f^{in} + R g^{in}\|_n\,. \end{equation*}
Taking the limit $n\to\infty$ in the above inequality gives
\begin{equation*} \|f(t)\|_\infty \le e\max\{1,\mu\} \|(1+R) f^{in} + R g^{in}\|_\infty\,, \end{equation*}
and we use the upper bound $e R\max\{1,\mu\}\le 1$ to obtain the desired estimate~\eqref{estaaa}. \end{proof} \section{Proof of Theorem~\ref{MainTh}}\label{sec04} \begin{proof}[Proof of Theorem~\ref{MainTh}] Owing to Proposition~\ref{prlfa}, the proof of Theorem~\ref{MainTh} is a simple application of the scheme described in \eqref{in4} and \eqref{in5}. Indeed, let $u=(f,g)$ be a sufficiently regular solution to \eqref{tfm1} on $[0,\infty)$ and $n\ge 2$. It follows from the alternative form~\eqref{in4} of the system~\eqref{tfm1a}-\eqref{tfm1b} and the boundary conditions~\eqref{tfm1c} that
\begin{equation*} \frac{d}{dt} \int_\Omega \Phi_n(u)\ \mathrm{d}x + \sum_{i=1}^d \int_\Omega \langle D^2\Phi_n(u) M(u) \partial_i u , \partial_i u \rangle\ \mathrm{d}x = 0\,. \end{equation*}
According to \eqref{lf03} and Proposition~\ref{prlfa}~(b), we infer from the componentwise non-negativity of $u$ and the continuity of $D^2\Phi_n M$ that
\begin{equation*} \langle D^2\Phi_n(u) M(u) \partial_i u , \partial_i u \rangle \ge 0 \;\;\text{ in }\;\; (0,\infty)\times\Omega\,, \qquad 1\le i \le d\,. \end{equation*}
Consequently,
\begin{equation*} \frac{d}{dt} \int_\Omega \Phi_n(u)\ \mathrm{d}x \le 0\,, \qquad t>0\,, \end{equation*}
and \eqref{p4a} is proved. In particular, thanks to \eqref{LUB}, we have shown that, for $t>0$ and $n\ge 2$,
\begin{equation*} \|f(t)+g(t)\|_n \le \left( \int_\Omega \Phi_n(u(t))\ \mathrm{d}x \right)^{1/n} \le \left( \int_\Omega \Phi_n(u^{in})\ \mathrm{d}x \right)^{1/n} \le \frac{1+R}{R} \| f^{in} + g^{in} \|_n\,. \end{equation*}
Taking the limit $n\to\infty$ in the above inequality gives \eqref{p5}. Finally, to establish the inequality~\eqref{p3}, we use \eqref{tfm1} to compute the time derivative of
\begin{equation*} \int_\Omega \Phi_1((f(t)+\eta,g(t)+\eta))\ \mathrm{d}x \end{equation*}
and argue as in the proof of Lemma~\ref{lem.ex3} to derive~\eqref{p3}. \end{proof} \section*{Acknowledgments} PhL gratefully acknowledges the hospitality and support of the Fakult\"at f\"ur Mathematik, Universit\"at Regensburg, where this work was done.
\appendix \section{Properties of the polynomials $\Phi_n$, $n\ge 2$}\label{secapA} In this section, we establish some important properties of the polynomials $\Phi_n$, $n\ge 2$, defined in~\eqref{p4b} {and~\eqref{p4c}}, which lead to Theorem~\ref{MainTh} according to the scheme outlined in the Introduction, see \eqref{in4}-\eqref{in5}, and are extensively used in Section~\ref{sec02}, see the proof of Lemma~\ref{lem.ex2}. Let thus $n\geq 2$. To begin with, we recall that $a_{0,n}=1$, \begin{subequations}\label{lf01} \begin{equation} a_{j,n} = \binom{n}{j} \prod_{k=0}^{j-1} \frac{k +\alpha_{k,n}}{\alpha_{k,n}}\,, \qquad 1\le j \le n\,, \label{lf01a} \end{equation} where \begin{equation} \alpha_{k,n} = R [ k + \mu(n-k-1)] = \mu R(n-1) + R(1-\mu)k > 0\,, \qquad 0 \le k \le n-1\,, \label{lf01b} \end{equation} \end{subequations} and \begin{equation} \Phi_n(X) := \sum_{j=0}^n a_{j,n} X_1^j X_2^{n-j}\,, \qquad X=(X_1,X_2)\in \mathbb{R}^2\,. \label{lf02} \end{equation} Also, the mobility matrix $M\in C^\infty(\mathbb{R}^2,\mathbf{M}_2(\mathbb{R}))$ is defined in \eqref{moma} by \begin{equation*} M(X) := \begin{pmatrix} (1+R) X_1 & R X_1 \\ \mu R X_2 & \mu R X_2 \end{pmatrix} \,, \qquad X\in \mathbb{R}^2\,. \end{equation*} The aim of this section is twofold. On the one hand, we establish the convexity of $\Phi_n$ on $[0,\infty)^2$ and actually show that its Hessian matrix $D^2\Phi_n\in C^\infty(\mathbb{R}^2,\mathbf{Sym}_2(\mathbb{R}))$, defined as usual by \begin{equation*} D^2\Phi_n(X) = \begin{pmatrix} \partial_1^2 \Phi_n(X) & \partial_1 \partial_2 \Phi_n(X)\\ \partial_1 \partial_2 \Phi_n(X) & \partial_2^2 \Phi_n(X) \end{pmatrix} \,, \qquad X\in \mathbb{R}^2\,, \end{equation*} is positive definite on $[0,\infty)^2\setminus\{(0,0)\}$. On the other hand, we prove that the matrix \begin{equation} S_n(X) := D^2\Phi_n(X) M(X)\,, \qquad X\in \mathbb{R}^2\,, \label{lf03} \end{equation} belongs to $\mathbf{Sym}_2(\mathbb{R})$ and actually lies in $\mathbf{SPD}_2(\mathbb{R})$ for $X\in (0,\infty)^2$. \pagebreak \begin{proposition}\label{prlfa} Let $n\ge 2$. \begin{itemize} \item [(a)] The polynomial $\Phi_n$ is non-negative and convex on $[0,\infty)^2$. Moreover, we have: \begin{itemize} \item[(a1)] The gradient $D\Phi_n(X)$ belongs to $[0,\infty)^2$ provided that $X\in[0,\infty)^2;$ \item[(a2)] The Hessian matrix $D^2\Phi_n(X)$ belongs to $\mathbf{SPD}_2(\mathbb{R})$ for all $X\in [0,\infty)^2\setminus \{(0,0)\}$. \end{itemize} \item[(b)] Given $X\in \mathbb{R}^2$, the matrix $S_n(X)$ defined in \eqref{lf03} is symmetric. In addition, it holds that~${S_n(X)\in \mathbf{SPD}_2(\mathbb{R})}$ for all $X\in (0,\infty)^2$. \end{itemize} \end{proposition} \begin{proof} The proof in the case $n=2$ is a simple exercise. Let now $n\geq3$. We first note that \eqref{lf01} implies that $(a_{j,n})_{1\le j \le n}$ satisfies the following recursion formula \begin{equation} a_{j+1,n} = \frac{(n-j)(j + \alpha_{j,n})}{(j+1)\alpha_{j,n}} a_{j,n}\,, \qquad 0 \le j \le n-1\,, \label{lf04} \end{equation} from which we deduce that \begin{equation} a_{j,n} > 0\,, \qquad 0\le j \le n\,. \label{lf05} \end{equation} In particular, $\Phi_n$ is non-negative on $[0,\infty)^2$ and, since \begin{align*} \partial_1 \Phi_n(X) & = \sum_{j=0}^{n-1} (j+1) a_{j+1,n} X_1^j X_2^{n-j-1}\,, \qquad X\in\mathbb{R}^2\,,\\ \partial_2 \Phi_n(X) & = \sum_{j=0}^{n-1} (n-j) a_{j,n} X_1^j X_2^{n-j-1}\,, \qquad X\in\mathbb{R}^2\,, \end{align*} the gradient $D\Phi_n(X)$ belongs to $[0,\infty)^2$ for $X\in [0,\infty)^2$, which proves~(a1). 
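(For the reader's convenience, we record the explicit data in the case $n=2$ dismissed above as a simple exercise: there, $a_{1,2}=2$ and $a_{2,2}=(1+R)/R$, so that $\Phi_2(X)=\frac{1+R}{R}X_1^2+2X_1X_2+X_2^2$, the constant Hessian matrix $D^2\Phi_2 = \begin{pmatrix} 2(1+R)/R & 2 \\ 2 & 2 \end{pmatrix}$ has trace $2(1+2R)/R>0$ and determinant $4/R>0$, while $S_2(X)$ is symmetric with $\det(S_2(X))=\det(D^2\Phi_2)\det(M(X))=4\mu X_1X_2$; the assertions (a) and (b) then follow by inspection in that case.)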
\noindent\textsl{Convexity of $\Phi_n$ on $[0,\infty)^2$}. The convexity of $\Phi_n$ on $[0,\infty)^2$ is a consequence of the property~(a2) which we establish now. Let $X\in [0,\infty)^2$. We then have \begin{align*} \partial_1^2 \Phi_n(X) & = \sum_{j=1}^{n-1} j(j+1) a_{j+1,n} X_1^{j-1} X_2^{n-j-1} = \sum_{j=0}^{n-2} (j+1)(j+2) a_{j+2,n} X_1^j X_2^{n-j-2} \,, \\ \partial_1 \partial_2 \Phi_n(X) & = \sum_{j=1}^{n-1} j(n-j) a_{j,n} X_1^{j-1} X_2^{n-j-1} = \sum_{j=0}^{n-2} (j+1)(n-j-1) a_{j+1,n} X_1^j X_2^{n-j-2} \,, \\ \partial_2^2 \Phi_n(X) & = \sum_{j=0}^{n-2} (n-j)(n-j-1) a_{j,n} X_1^j X_2^{n-j-2} \,. \end{align*} It then readily follows from \eqref{lf05} that the Hessian matrix $D^2\Phi_n(X)$ has a non-negative trace \begin{equation} \mathrm{tr}(D^2\Phi_n(X)) := \partial_1^2 \Phi_n(X) + \partial_2^2 \Phi_n(X) \ge 0\,, \qquad X\in [0,\infty)^2\,. \label{lf06} \end{equation} Next, \begin{align} \det(D^2\Phi_n(X)) & = \partial_1^2 \Phi_n(X) \partial_2^2 \Phi_n(X) - [\partial_1 \partial_2 \Phi_n(X)]^2 \nonumber \\ & = \sum_{j=0}^{n-2} \sum_{k=0}^{n-2} (j+1)(n-k-1) A_{j,k} X_1^{j+k} X_2^{2n-j-k-4}\,, \label{lf07} \end{align} where \begin{equation*} A_{j,k} := (j+2)(n-k) a_{j+2,n} a_{k,n} - (n-j-1)(k+1) a_{j+1,n} a_{k+1,n}\,, \qquad 0 \le j,k \le n-2\,. \end{equation*} We now simplify the above formula for $A_{j,k}$ and first use \eqref{lf04} to replace $a_{j+2,n}$ and $a_{k+1,n}$ and subsequently the definition \eqref{lf01b} of $\alpha_{k,n}$, thereby obtaining \begin{align} A_{j,k} & = (n-j-1)(n-k) \frac{j+1+\alpha_{j+1,n}}{\alpha_{j+1,n}} a_{j+1,n} a_{k,n} - (n-j-1)(n-k) \frac{k+\alpha_{k,n}}{\alpha_{k,n}} a_{j+1,n} a_{k,n} \nonumber \\ & = \mu R (n-1) \frac{(n-j-1)(n-k) ( j+1-k )}{\alpha_{j+1,n} \alpha_{k,n}} a_{j+1,n} a_{k,n} \label{lf08} \end{align} for $0 \le j,k \le n-2$. In particular, \begin{equation} A_{k-1,j+1} = - A_{j,k}\,, \qquad 0\le j\le n-3\,, \ 1\le k\le n-2\,. \label{lf09} \end{equation} It then follows from \eqref{lf07} and \eqref{lf08} that \begin{align*} 2 \det(D^2\Phi_n(X)) & = \sum_{j=0}^{n-2} \sum_{k=0}^{n-2} (j+1)(n-k-1) A_{j,k} X_1^{j+k} X_2^{2n-j-k-4} \\ & \qquad + \sum_{l=1}^{n-1} \sum_{i=-1}^{n-3} l(n-i-2) A_{l-1,i+1} X_1^{i+l} X_2^{2n-i-l-4} \\ & = \sum_{j=0}^{n-2} \sum_{k=0}^{n-2} (j+1)(n-k-1) A_{j,k} X_1^{j+k} X_2^{2n-j-k-4} \\ & \qquad + \sum_{j=-1}^{n-3} \sum_{k=1}^{n-1} k(n-j-2) A_{k-1,j+1} X_1^{j+k} X_2^{2n-j-k-4}\\ & = \sum_{j=0}^{n-3} \sum_{k=1}^{n-2} (j+1)(n-k-1) A_{j,k} X_1^{j+k} X_2^{2n-j-k-4} \\ & \qquad + \sum_{k=0}^{n-2} (n-1)(n-k-1) A_{n-2,k} X_1^{n-2+k} X_2^{n-k-2} \\ & \qquad + \sum_{j=0}^{n-3} (j+1)(n-1) A_{j,0} X_1^{j} X_2^{2n-j-4} \\ & \qquad + \sum_{j=0}^{n-3} \sum_{k=1}^{n-2} k(n-j-2) A_{k-1,j+1} X_1^{j+k} X_2^{2n-j-k-4} \\ & \qquad + \sum_{k=1}^{n-1} k(n-1) A_{k-1,0} X_1^{k-1} X_2^{2n-k-3} \\ & \qquad + \sum_{j=0}^{n-3} (n-1)(n-j-2) A_{n-2,j+1} X_1^{j+n-1} X_2^{n-j-3} \,. \end{align*} Owing to \eqref{lf05} and \eqref{lf08}, $A_{l,0}> 0$ and $A_{n-2,l}>0$ for $0\le l\le n-2$, so that the terms in the above identity involving a single sum are non-negative. 
Therefore, using the symmetry property~\eqref{lf09} and retaining in the last two sums only the terms corresponding to $k=1$ and $j=n-3$, respectively, we get \begin{align*} 2 \det(D^2\Phi_n(X)) & \ge \sum_{j=0}^{n-3} \sum_{k=1}^{n-2} \left[ (j+1)(n-k-1) - k(n-j-2) \right] A_{j,k} X_1^{j+k} X_2^{2n-j-k-4} \\ & \qquad + (n-1) A_{n-2,n-2} X_1^{2n-4} + (n-1) A_{0,0} X_2^{2n-4} \\ & = \sum_{j=0}^{n-3} \sum_{k=1}^{n-2} (n-1)(j+1-k) A_{j,k} X_1^{j+k} X_2^{2n-j-k-4} \\ & \qquad + (n-1) A_{n-2,n-2} X_1^{2n-4} + (n-1) A_{0,0} X_2^{2n-4}\,. \end{align*} Since \begin{equation*} (n-1)(j+1-k) A_{j,k} = \mu R (n-1)^2 \frac{(n-j-1)(n-k) ( j+1-k )^2}{\alpha_{j+1,n} \alpha_{k,n}} a_{j+1,n} a_{k,n} \ge 0 \end{equation*} for $0\le j, k\le n-2$ by \eqref{lf01b}, \eqref{lf05}, and \eqref{lf08}, we conclude that \begin{equation} \det(D^2\Phi_n(X)) \ge (n-1) A_{n-2,n-2} X_1^{2n-4} + (n-1) A_{0,0} X_2^{2n-4}\,, \qquad X\in [0,\infty)^2\,. \label{lf10} \end{equation} Since $A_{0,0}>0$ and $A_{n-2,n-2}>0$, we have thus established that, for each $X\in [0,\infty)^2\setminus\{(0,0)\}$, the symmetric matrix $D^2\Phi_n(X)$ has non-negative trace and positive determinant, so that it is positive definite. This proves~(a2). \noindent\textsl{Symmetry of $S_n(X)$}. Let $X\in \mathbb{R}^2$. It follows from \eqref{lf03} that \begin{align*} [S_n(X)]_{11} & = (1+R) X_1 \partial_1^2 \Phi_n(X) + \mu R X_2 \partial_1 \partial_2 \Phi_n(X)\\ & = (1+R) \sum_{j=1}^{n-1} j(j+1) a_{j+1,n} X_1^j X_2^{n-j-1} + \mu R \sum_{j=0}^{n-2} (j+1)(n-j-1) a_{j+1,n} X_1^j X_2^{n-j-1}\,, \\ [S_n(X)]_{12} & = RX_1 \partial_1^2\Phi_n(X) + \mu R X_2 \partial_1 \partial_2 \Phi_n(X) \\ & = R \sum_{j=1}^{n-1} j(j+1) a_{j+1,n} X_1^j X_2^{n-j-1} + \mu R \sum_{j=0}^{n-2} (j+1)(n-j-1) a_{j+1,n} X_1^j X_2^{n-j-1}\,,\\ [S_n(X)]_{21} & = (1+ R)X_1 \partial_1 \partial_2 \Phi_n(X) + \mu R X_2 \partial_2^2 \Phi_n(X)\\ & = (1+R) \sum_{j=1}^{n-1} j(n-j) a_{j,n} X_1^{j} X_2^{n-j-1} + \mu R \sum_{j=0}^{n-2} (n-j)(n-j-1) a_{j,n} X_1^j X_2^{n-j-1}\,,\\ [S_n(X)]_{22} & = R X_1 \partial_1 \partial_2 \Phi_n(X) + \mu R X_2 \partial_2^2 \Phi_n(X)\\ & = R \sum_{j=1}^{n-1} j(n-j) a_{j,n} X_1^{j} X_2^{n-j-1} + \mu R \sum_{j=0}^{n-2} (n-j)(n-j-1) a_{j,n} X_1^j X_2^{n-j-1}\,. \end{align*} It then holds \begin{equation*} [S_n(X)]_{12} = R n(n-1) a_{n,n} X_1^{n-1} + \sum_{j=1}^{n-2} (j+1) \alpha_{j,n} a_{j+1,n} X_1^j X_2^{n-j-1} + \mu R (n-1) a_{1,n} X_2^{n-1}\,. \end{equation*} Using the recursion formula \eqref{lf04} and the definition \eqref{lf01b} of $\alpha_{j,n}$, we get \begin{align*} [S_n(X)]_{12} & = R(n-1) \frac{n-1+\alpha_{n-1,n}}{\alpha_{n-1,n}} a_{n-1,n} X_1^{n-1} + \sum_{j=1}^{n-2} (n-j)(j+\alpha_{j,n}) a_{j,n} X_1^j X_2^{n-j-1} \\ & \qquad + \mu R n(n-1) a_{0,n} X_2^{n-1} \\ & = (1+R) (n-1) a_{n-1,n} X_1^{n-1} + (1+R) \sum_{j=1}^{n-2} j(n-j) a_{j,n} X_1^j X_2^{n-j-1} \\ & \qquad + \mu R \sum_{j=1}^{n-2} (n-j)(n-j-1) a_{j,n} X_1^j X_2^{n-j-1} + \mu R n(n-1) a_{0,n} X_2^{n-1} \\ & = [S_n(X)]_{21}\,, \end{align*} so that $S_n(X)\in \mathbf{Sym}_2(\mathbb{R})$. \noindent\textsl{Positive definiteness of $S_n(X)$}. Let $X\in [0,\infty)^2$. It readily follows from \eqref{lf05} that $[S_n(X)]_{11}\geq0$ and~$[S_n(X)]_{22}\geq0,$ hence \begin{equation} \mathrm{tr}(S_n(X)) \ge 0\,. \label{lf12} \end{equation} Moreover, \eqref{lf03} and \eqref{lf10} imply that \begin{equation} \det(S_n(X)) = \det(D^2\Phi_n(X)) \det(M(X)) = \mu R X_1 X_2 \det(D^2\Phi_n(X)) \ge 0\,. 
\label{lf13} \end{equation} Consequently, $S_n(X)$ is a positive semidefinite symmetric matrix for each $X\in [0,\infty)^2$. Moreover, if~${X\in (0,\infty)^2}$, then $\det(S_n(X)) >0$ by \eqref{lf10} and \eqref{lf13}, so that $S_n(X)\in \mathbf{SPD}_2(\mathbb{R})$. This completes the proof of (b). \end{proof} We next derive lower and upper bounds for $\Phi_n$, $n\geq 2$. \begin{lemma}\label{lelfb} Given $n\ge 2$, we have \begin{equation}\label{LUB} \nu_n X_1^n + (X_1+X_2)^n \le \Phi_n(X) \le \frac{\left[ (1+R) X_1 + R X_2 \right]^n}{R^n}\,, \qquad X\in [0,\infty)^2\,, \end{equation} where $\nu_n$ is the positive number defined by \begin{equation*} \nu_n :=\exp\left\{(n-1) \left[ (1+ R\max\{1,\mu\}) \ln\Big(1+\frac{1}{R\max\{1,\mu\}}\Big) - 1 \right]\right\}-1 >0\,. \end{equation*} \end{lemma} \begin{proof} On the one hand, since the function \begin{equation*} \chi (z) := \frac{\mu R + [1+R(1-\mu)] z}{\mu R + R(1-\mu) z}\,, \qquad z\in [0,1]\,, \end{equation*} is increasing, we deduce from \eqref{lf01} that, for $1\le j \le n$, \begin{equation*} a_{j,n} = \binom{n}{j} \prod_{k=0}^{j-1} \chi\left( \frac{k}{n-1} \right) \le \binom{n}{j} [\chi(1)]^j = \left( \frac{1+R}{R} \right)^j \binom{n}{j}\,. \end{equation*} The upper bound in \eqref{LUB} is then a straightforward consequence of the above inequality. On the other hand, in order to estimate $\Phi_n(X)$, $X\in[0,\infty)^2$, from below we infer from \eqref{lf01a} that \begin{equation*} a_{j,n} \ge \binom{n}{j}\,, \qquad 0 \le j \le n-1\,. \end{equation*} \ When estimating the coefficient $a_{n,n}$ from below we need to be more subtle and proceed as follows: \begin{align*} a_{n,n} & = \prod_{k=0}^{n-1} \frac{k +\alpha_{k,n}}{\alpha_{k,n}}=\prod_{k=1}^{n-1} \left( 1 + \frac{k}{R [ k + \mu(n-k-1)]} \right)\\[1ex] & \geq \prod_{k=1}^{n-1} \left( 1 + \frac{k}{R\max\{1,\mu\}(n-1)} \right) \,. \end{align*} Now, \begin{align*} \ln\left( \prod_{k=1}^{n-1} \left( 1 + \frac{k}{R\max\{1,\mu\}(n-1)}\right) \right) & = \sum_{k=1}^{n-1} \ln\left( 1 + \frac{k}{R\max\{1,\mu\}(n-1)} \right) \\ & \ge (n-1) \sum_{k=1}^{n-1} \int_{(k-1)/(n-1)}^{k/(n-1)} \ln\left( 1+\frac{x}{R\max\{1,\mu\}} \right)\ \mathrm{d}x \\ & = (n-1) \int_0^1 \ln\left( 1+\frac{x}{R\max\{1,\mu\}} \right)\ \mathrm{d}x \\[1ex] & = (n-1) \left[ (1+ R\max\{1,\mu\}) \ln\Big(1+\frac{1}{R\max\{1,\mu\}}\Big) - 1 \right] \end{align*} and, taking into account that \begin{equation*} (1+x)\ln\Big(1+\frac{1}{x}\Big)>1 \quad\text{for $ x>0$}\,, \end{equation*} we end up with \begin{equation*} a_{n,n} \ge \exp\left\{(n-1) \left[ (1+ R\max\{1,\mu\}) \ln\Big(1+\frac{1}{R\max\{1,\mu\}}\Big) - 1 \right]\right\}= 1 + \nu_n\,. \end{equation*} We thus have \begin{align*} \Phi_n(X) &\ge \nu_n X_1^n + \sum_{j=0}^{n} \binom{n}{j} X_1^j X_2^{n-j} =\nu_n X_1^n+ (X_1+X_2)^{n}\,, \end{align*} and the proof is complete. \end{proof} \section{An auxiliary elliptic system}\label{secapB} In this appendix, we establish Lemma~\ref{lem.ap1}, which is an important argument in the proof of Lemma~\ref{lem.ex2}. Let $\tau>0$, $B=(b_{jk})_{1\le j,k\le 2}\in C(\mathbb{R}^2,\mathbf{M}_2(\mathbb{R}))$, and~${A=(a_{jk})_{1\le j,k\le 2}\in \mathbf{SPD}_2(\mathbb{R})}$ satisfy~${AB(X)\in \mathbf{SPD}_2(\mathbb{R})}$ for all $X\in \mathbb{R}^2$ and assume that there is $\delta_1>0$ with the property \begin{equation} \langle AB(X)\xi,\xi \rangle \ge \delta_1 |\xi|^2\,, \qquad (X,\xi)\in \mathbb{R}^2\times\mathbb{R}^2\,. 
\label{ap1} \end{equation} Since $A\in \mathbf{SPD}_2(\mathbb{R})$, there is also $\delta_2>0$ such that \begin{equation} \langle A\xi,\xi \rangle \ge \delta_2 |\xi|^2\,, \qquad \xi\in\mathbb{R}^2\,. \label{ap2} \end{equation} Here, $\Omega$ is a bounded interval of $\mathbb{R}$ ($d = 1$) and we recall that, in that specific case, $H^1(\Omega)$ embeds continuously in $L_\infty(\Omega)$, so that there is $\Lambda>0$ with \begin{equation} \| z\|_{\infty} \le \Lambda \|z\|_{H^1}\,, \qquad z\in H^1(\Omega)\,. \label{EHL} \end{equation} \begin{lemma}\label{lem.ap1} Given $U\in L_2(\Omega,\mathbb{R}^2)$, there is a solution $u\in H^1(\Omega,\mathbb{R}^2)$ to the nonlinear elliptic equation \begin{equation} \int_\Omega \left[ \langle u , v \rangle + \tau \langle B(u) \partial_x u , \partial_x v \rangle \right]\ \mathrm{d}x = \int_\Omega \langle U , v \rangle\ \mathrm{d}x\,, \qquad v\in H^1(\Omega,\mathbb{R}^2)\,. \label{ap3} \end{equation} Moreover, if \begin{equation} \begin{split} b_{11}(X) \geq b_{12}(X) & = 0\,, \qquad X\in (-\infty,0)\times \mathbb{R}\,, \\ b_{22}(X) \geq b_{21}(X) & = 0\,, \qquad X\in \mathbb{R}\times (-\infty,0)\,, \end{split} \label{ap10} \end{equation} and $U(x)\in [0,\infty)^2$ for a.a. $x\in\Omega$, then $u(x)\in [0,\infty)^2$ for a.a. $x\in\Omega$. \end{lemma} \begin{proof} To set up a fixed point scheme, we define $\delta_0:=\min\{\tau \delta_1, \delta_2\}$ and introduce the compact and convex subset $\mathcal{K}$ of $L_2(\Omega,\mathbb{R}^2)$ defined by \begin{equation} \mathcal{K} := \left\{ u\in H^1(\Omega,\mathbb{R}^2)\ :\ \|u\|_{H^1} \le \frac{\|AU\|_2}{\delta_0}\right\}\,, \label{ap100} \end{equation} the compactness of $\mathcal{K}$ being a straightforward consequence of the compactness of the embedding of~${H^1(\Omega,\mathbb{R}^2)}$ in $L_2(\Omega,\mathbb{R}^2)$. According to \eqref{EHL}, \begin{equation} \|u\|_\infty \le \frac{\Lambda\|AU\|_2}{\delta_0}\,, \qquad u\in\mathcal{K}\,. \label{BU} \end{equation} We now consider $u\in \mathcal{K}$ and define a bilinear form $b_u$ on~${H^1(\Omega,\mathbb{R}^2)}$ by \begin{equation*} b_u(v,w) := \int_\Omega \left[ \langle Av , w \rangle + \tau \langle AB(u) \partial_x v , \partial_x w \rangle \right]\ \mathrm{d}x\,, \qquad (v,w)\in H^1(\Omega,\mathbb{R}^2)\times H^1(\Omega,\mathbb{R}^2) \,. \end{equation*} Owing to \eqref{ap1} and \eqref{ap2}, \begin{equation} b_u(v,v) \ge \delta_0 \|v\|_{H^1}^2\,, \qquad v\in H^1(\Omega,\mathbb{R}^2)\,, \label{ap4} \end{equation} while the continuity of $B$ and the boundedness \eqref{BU} of $u$ guarantee that \begin{equation*} |b_u(v,w)| \le b_u^* \|v\|_{H^1} \|w\|_{H^1}\,, \qquad (v,w)\in H^1(\Omega,\mathbb{R}^2)\times H^1(\Omega,\mathbb{R}^2) \,, \end{equation*} where \begin{equation*} b_u^* := 2\max_{1\le j,k\le 2}\{|a_{jk}|\} \left( 1 + 2\tau \max_{1\le j,k\le 2}\{\| b_{jk}(u) \|_\infty\} \right)\,. \end{equation*} We then infer from Lax-Milgram's theorem that there is a unique $\mathcal{V}[u]\in H^1(\Omega,\mathbb{R}^2)$ such that \begin{equation} b_u(\mathcal{V}[u],w) = \int_\Omega \langle AU , w \rangle\ \mathrm{d}x\,, \qquad w\in H^1(\Omega,\mathbb{R}^2)\,. \label{ap5} \end{equation} In particular, taking $w=\mathcal{V}[u]$ in \eqref{ap5} and using \eqref{ap4} and H\"older's inequality give \begin{equation*} \delta_0 \|\mathcal{V}[u]\|_{H^1}^2 \le b_u(\mathcal{V}[u],\mathcal{V}[u]) \le \|AU\|_2 \|\mathcal{V}[u]\|_2 \le \|AU\|_2 \|\mathcal{V}[u]\|_{H^1}\,. 
\end{equation*} Consequently, \begin{equation} \|\mathcal{V}[u]\|_{H^1} \le \frac{\|AU\|_2}{\delta_0} \;\;\text{ and }\;\; \mathcal{V}[u]\in\mathcal{K} \,. \label{ap6} \end{equation} We now claim that the map $\mathcal{V}$ is continuous on $\mathcal{K}$ with respect to the norm-topology of $L_2(\Omega,\mathbb{R}^2)$. Indeed, consider a sequence $(u_j)_{j\ge 1}$ in $\mathcal{K}$ and $u\in \mathcal{K}$ such that \begin{equation*} \lim_{j\to\infty} \|u_j-u\|_2=0\,. \end{equation*} Upon extracting a subsequence (not relabeled), we may assume that \begin{equation*} \lim_{j\to\infty} u_j(x) = u(x) \;\text{ for a.a. }\; x\in \Omega\,, \end{equation*} so that the continuity of $B$ and \eqref{BU} ensure that \begin{subequations}\label{ap7} \begin{equation} \lim_{j\to\infty} B(u_j(x)) = B(u(x)) \;\text{ for a.a. }\; x\in \Omega \label{ap7.1} \end{equation} and \begin{equation} \max\left\{ \|B(u)\|_\infty , \sup_{j\ge 1}\{\|B(u_j)\|_\infty\} \right\} \le \max_{|X|\le \Lambda \|AU\|_2/\delta_0}\{|B(X)|\}\,. \label{ap7.2} \end{equation} \end{subequations} It also follows from \eqref{ap6} and the compactness of the embedding of $H^1(\Omega,\mathbb{R}^2)$ in $L_2(\Omega,\mathbb{R}^2)$ that there is $v\in H^1(\Omega,\mathbb{R}^2)$ such that, after possibly extracting a further subsequence, \begin{equation} \lim_{j\to\infty} \|\mathcal{V}[u_j] - v \|_2 = 0 \qquad\text{ and }\qquad \mathcal{V}[u_j] \rightharpoonup v \;\text{ in }\; H^1(\Omega,\mathbb{R}^2)\,. \label{ap8} \end{equation} Since \begin{equation*} \int_\Omega \langle AB(u_j)\partial_x \mathcal{V}[u_j] , \partial_x w \rangle\ \mathrm{d}x = \int_\Omega \langle \partial_x \mathcal{V}[u_j], AB(u_j)\partial_x w \rangle\ \mathrm{d}x\,, \end{equation*} due to the symmetry of $AB(X)$ for $X\in\mathbb{R}^2$, it readily follows from \eqref{ap7}, \eqref{ap8}, and Lebesgue's dominated convergence theorem that we may pass to the limit $j\to\infty$ in the variational identity~\eqref{ap5} for $\mathcal{V}[u_j]$ and conclude that \begin{equation*} b_u(v,w) = \int_\Omega \langle AU , w \rangle\ \mathrm{d}x\,, \qquad w\in H^1(\Omega,\mathbb{R}^2)\,, \end{equation*} that is, $v=\mathcal{V}[u]$. We have thus shown that any subsequence of $(\mathcal{V}[u_j])_{j\ge 1}$ has a subsequence that converges to $\mathcal{V}[u]$, which proves the claimed continuity of the map $\mathcal{V}$. We are therefore in a position to apply Schauder's fixed point theorem, see \cite[Theorem~11.1]{GT2001} for instance, and conclude that the map $\mathcal{V}$ has a fixed point $u\in \mathcal{K}$. In particular, the function~$u$ satisfies \begin{equation*} b_u(u,w) = \int_\Omega \langle AU , w \rangle\ \mathrm{d}x\,, \qquad w\in H^1(\Omega,\mathbb{R}^2)\,. \end{equation*} Now, given $v\in H^1(\Omega,\mathbb{R}^2)$, the function $w=A^{-1}v$ also belongs to $H^1(\Omega,\mathbb{R}^2)$ and we infer from the above identity and the symmetry of $A$ that \begin{align*} \int_\Omega \langle U , v \rangle\ \mathrm{d}x & = \int_\Omega \langle AU , w \rangle\ \mathrm{d}x = b_u(u,w) = b_u(u,A^{-1}v) \\ & = \int_\Omega \left[ \langle u , v \rangle + \tau \langle B(u) \partial_x u , \partial_x v \rangle \right]\ \mathrm{d}x \,. \end{align*} We have thus constructed a solution $u\in H^1(\Omega,\mathbb{R}^2)$ to \eqref{ap3}. We now turn to the non-negativity-preserving property and assume that $U(x)\in [0,\infty)^2$ for a.a.~${x\in\Omega}$. Let $u\in H^1(\Omega,\mathbb{R}^2)$ be a solution to \eqref{ap3} and set~$\varphi:=-u$. 
Then $(\varphi_{1,+},\varphi_{2,+})$ belongs to $H^1(\Omega,\mathbb{R}^2)$ and it follows from \eqref{ap3} that \begin{align} & \int_\Omega \left( \varphi_1 \varphi_{1,+} + \varphi_2 \varphi_{2,+} + \tau \sum_{j,k=1}^2 b_{jk}(u) \partial_x \varphi_k \partial_x (\varphi_{j,+}) \right)\ \mathrm{d}x \nonumber \\ & \hspace{5cm} = - \int_\Omega \left( U_1 \varphi_{1,+} + U_2 \varphi_{2,+} \right)\ \mathrm{d}x \le 0\,. \label{ap11} \end{align} We now infer from \eqref{ap10} that \begin{align*} b_{11}(u) \partial_x\varphi_1 \partial_x\varphi_{1,+} & = b_{11}(u) \mathbf{1}_{(-\infty,0)}(u_1) |\partial_x u_1|^2 \ge 0\,, \\ b_{12}(u) \partial_x\varphi_2 \partial_x\varphi_{1,+} & = b_{12}(u) \mathbf{1}_{(-\infty,0)}(u_1) \partial_x u_1 \partial_x u_2 = 0\,, \\ b_{21}(u) \partial_x\varphi_1 \partial_x\varphi_{2,+} & = b_{21}(u) \mathbf{1}_{(-\infty,0)}(u_2) \partial_x u_1 \partial_x u_2 = 0\,, \\ b_{22}(u) \partial_x\varphi_2 \partial_x\varphi_{2,+} & = b_{22}(u) \mathbf{1}_{(-\infty,0)}(u_2) |\partial_x u_2|^2 \ge 0\,, \end{align*} so that the second term on the left-hand side of \eqref{ap11} is non-negative. Consequently, \eqref{ap11} gives \begin{equation*} \int_\Omega \left( |\varphi_{1,+}|^2 + |\varphi_{2,+}|^2 \right)\ \mathrm{d}x \le 0\,, \end{equation*} which implies that $\varphi_{1,+}=\varphi_{2,+}=0$ a.e. in $\Omega$. Hence, $u(x)\in [0,\infty)^2$ for a.a. $x\in\Omega$ and the proof of Lemma~\ref{lem.ap1} is complete. \end{proof} \end{document}
\begin{document} \title{Liouville type theorem of integral equation with anisotropic structure} \author{Yating Niu} \date{} \maketitle
\begin{abstract} In this paper, we classify all positive solutions of the following integral equation:
\begin{equation}\label{s0} u(x)=\int_{\mathbb{R}^n_+}K_b(x,y)y_n^b f(u(y))dy, \end{equation}
where $b > 1$ is a constant. Here $K_b(x,y)$ is the Green function of the following homogeneous Neumann boundary problem
\begin{equation}\label{s11} \left\{ \begin{aligned} -\text{div}(x^{b}_n \nabla u)&= f \quad \text{in} \ \mathbb R^n_+\\ \frac{\partial u}{\partial x_n}&= 0 \quad \text{on} \ \partial \mathbb R^n_+ . \end{aligned} \right. \end{equation}
By using the method of moving planes in integral form, we derive the symmetry of positive solutions. We also establish the equivalence between the integral equation and its corresponding partial differential equation. Similarly, the results can be generalized to the integral system. \end{abstract}
\section{Introduction}
In this paper, we study the positive solutions $u(x)$ of the following type of integral equation
\begin{equation}\label{s3} u(x)=\int_{\mathbb R^n_+}K_b(x,y)y_n^b f(u(y))dy \end{equation}
and integral system
\begin{equation}\label{p2} \left\{ \begin{aligned} u(x)&=\int_{\mathbb R^n_+}K_b(x,y)y_n^b f(v(y))dy \quad x\in \mathbb R^n_+\\ v(x)&=\int_{\mathbb R^n_+}K_b(x,y)y_n^b g(u(y))dy \quad x\in \mathbb R^n_+ \end{aligned} \right. \end{equation}
on the upper half space $\mathbb R^n_+$, where $n \geq 3$.

In recent years, there has been great interest in using the method of moving planes to classify the solutions of equations. It is a very powerful tool for studying the symmetry of solutions. The method of moving planes for PDEs was invented by Alexandroff in the early 1950s. Later, it was further developed by Serrin \cite{Serrin} and Gidas, Ni and Nirenberg \cite{GiNiNi1979}, \cite{GidasNiNiren}. In \cite{ChenLi91}, Chen and Li studied the following partial differential equation
\begin{equation}\label{k1} \Delta u + u^p =0 \quad x\in \mathbb R^n. \end{equation}
They proved that for $p=\frac{n+2}{n-2}$ all the positive $C^2$ solutions of \eqref{k1} are radially symmetric about some point. It is a natural question to ask whether similar results hold for the following system
\begin{equation}\label{k2} \left\{ \begin{aligned} -\Delta u &= v^p \quad x\in \mathbb{R}^n \\ -\Delta v &= u^q \quad x\in \mathbb{R}^n. \end{aligned} \right. \end{equation}
In 1993, Mitidieri \cite{EM93} considered the nonexistence of radial positive solutions of system \eqref{k2}, where $\frac{1}{p+1} + \frac{1}{q+1} > \frac{n-2}{n}$ and $p >1$, $q >1$. Later, Mitidieri \cite{EM} also extended these results to more general systems. In \cite{DeFe}, it was proved that for $p=q=\frac{n+2}{n-2}$ the positive solutions of \eqref{k2} are radially symmetric with respect to some point. Guo and Liu extended this to more general elliptic systems in \cite{GuoLiu}. For other results, we refer to \cite{PRE,GuoNie,Huang2012,HuangLi,SeZou}.

For integral equations, one can use the method of moving planes in integral form to study the properties of solutions. The integral equation \eqref{s3} and the integral system \eqref{p2} are closely related to \cite{Yu13}.
In that paper, Yu studied the positive solutions of the following integral equation
\begin{equation}\label{s2} u(x) = \int_{\mathbb R^n} \frac{1}{|x-y|^{n-\alpha}} f(u(y)) dy \quad x\in \mathbb R^n, \end{equation}
where $n \geq 2$, $0<\alpha<n$. If $f(u)= u^{\frac{n+\alpha}{n-\alpha}}$, the integral equation \eqref{s2} arises as an Euler-Lagrange equation for a functional under a constraint in the context of Hardy-Littlewood-Sobolev inequalities. Lieb \cite{Lieb} posed the classification of all the critical points of the functional -- that is, of the solutions of the integral equation -- as an open problem. Chen, Li and Ou \cite{ChenLiOu06} solved the problem by using the method of moving planes in integral form. They proved that all the positive regular solutions are radially symmetric and monotone decreasing about some point. Yu also established a Liouville type result in \cite{Yu13} for the following integral system
\begin{equation}\label{p3} \left\{ \begin{aligned} u(x)&=\int_{\mathbb R^n}\frac{1}{|x-y|^{n-\alpha}} f(u(y),v(y))dy \quad x\in \mathbb R^n\\ v(x)&=\int_{\mathbb R^n}\frac{1}{|x-y|^{n-\alpha}} g(u(y),v(y))dy \quad x\in \mathbb R^n \end{aligned} \right. \end{equation}
where $n \geq 2$, $0<\alpha<n$. If $f=v^q$ and $g=u^p$, Chen, Li and Ou \cite{ChenLiOu05} proved that the positive solutions of \eqref{p3} are radially symmetric for $\frac{1}{q+1} + \frac{1}{p+1}=\frac{n-\alpha}{n}$. Later, more general integral equations and systems were also studied in \cite{ChenLiOu09,GuoLiu,GuoNie,LiZhuo,LiuGuoZhang,MaChen,EM}.

The above results all concern the whole space $\mathbb{R}^n$. For the upper half space
\[ \mathbb{R}^n_+ = \{x=(x_1,x_2,\cdots,x_n)\in \mathbb{R}^n \ | \ x_n>0\}, \]
Fang and Chen \cite{FangChen} considered the integral equation $u(x)= \int_{\mathbb R^n_+} G^+_\infty(x,y)u^p dy$, where $G^+_\infty(x,y)$ is the Green's function in $\mathbb R^n_+$ associated with the following Dirichlet problem
\begin{equation}\label{s12} \left\{ \begin{aligned} (-\Delta)^m u &= u^p \qquad \qquad \qquad \quad \ \text{in} \ \mathbb R^n_+\\ u=\frac{\partial u}{\partial x_n}&=\cdots = \frac{\partial ^{m-1}u}{\partial x^{m-1}_n} =0 \quad \text{on} \ \partial \mathbb R^n_+. \end{aligned} \right. \end{equation}
They proved that the Dirichlet problem \eqref{s12} is equivalent to the integral equation. Later, Tang and Dou \cite{TangDou} studied the system of integral equations on the upper half space. In 2015, Chen, Fang and Yang \cite{ChenFangYang} considered the Dirichlet problem involving the fractional Laplacian on the upper half space. In this paper, we study the corresponding results on the upper half space for problems \eqref{s3} and \eqref{p2}. Our results are the following.
\begin{theorem}\label{thm0} Let $u(x)\in C(\overline{\mathbb{R}^n_+})$ be a positive solution of \eqref{s3}, where $n\geq3$, and let $f:[0,+\infty)\rightarrow \mathbb{R}$ be a continuous function with the following properties:\\ $(i)$ $f(t)$ is non-decreasing in $[0,+\infty)$; \\ $(ii)$ $g(t)=\frac{f(t)}{t^{\frac{n+b+2}{n+b-2}}}$ is non-increasing in $(0,+\infty)$.\\ Then either $u\equiv 0$ or there exists $x_0 \in \partial\mathbb{R}^{n}_+$ such that $u(x)=u_{a,x_0}(x)=\left(\frac{ca}{a^2+|x-x_0|^2}\right)^{\frac{n+b-2}{2}}$ and $g(t)\equiv \bar{c}$, where $c=\sqrt{\frac{(n+b)(n+b-2)}{\bar{c}}}$ and $a\geq0$.
\end{theorem} \begin{remark}\label{r1} In Theorem \ref{thm0}, we can deduce $f(t)\geq 0$ from conditions $(i)$ and $(ii)$. \end{remark}
\begin{theorem}\label{thm1} Let $(u,v) \in C(\overline{\mathbb R^n_+})\times C(\overline{\mathbb R^n_+})$ be a positive solution of problem \eqref{p2}. Suppose that $f$, $g: [0,+\infty) \rightarrow \mathbb R$ are continuous and satisfy \\ (i) \ $f(t)$, $g(t)$ are non-decreasing in $[0,+\infty)$; \\ (ii) \ $h(t)=\frac{f(t)}{t^{\frac{n+b+2}{n+b-2}}}$, $k(t)=\frac{g(t)}{t^{\frac{n+b+2}{n+b-2}}}$ are non-increasing in $(0,+\infty)$. \\ Then either $(u,v)\equiv(0,0)$ or $u$, $v$ have the form $u(x)=\left(\frac{ca}{a^2+|x-x_0|^2}\right)^{\frac{n+b-2}{2}}$ and $v(x)=\left(\frac{\tilde{c}a}{a^2+|x-x_0|^2}\right)^{\frac{n+b-2}{2}}$, for some $x_0 \in \partial\mathbb{R}^{n}_+$ and $a\geq 0$. \end{theorem}
\begin{remark} In Theorem \ref{thm1}, we can deduce $f(t)\geq 0$ and $g(t)\geq 0$ from conditions $(i)$ and $(ii)$. \end{remark}
The main method in this paper is the moving plane method in integral form. We also use the Hardy-Littlewood-Sobolev inequality and the Kelvin transform to prove the symmetry of positive solutions with respect to the $x_1, \cdots ,x_{n-1}$ directions, while $x_n$ is the anisotropic direction.

This paper is organized as follows. In Section 2, we use the method of moving planes to obtain the symmetry of the solutions. In Section 3, we prove the equivalence of the integral equation with the differential equation, as well as a nonexistence result for an ordinary differential equation; we also give the proof of Theorem \ref{thm0}. In Section 4, we obtain the symmetry of solutions of the integral system and prove Theorem~\ref{thm1}.
\section{Symmetry of solutions}
Let us study the positive solutions of the integral equation
\[ u(x)=\int_{\mathbb R^n_+}K_b(x,y)y_n^b f(u(y))dy \quad x \in \mathbb R^n_+ , \]
where
\begin{equation}\label{G1} \begin{split} &K_b(x,y)=D_b \int_0^1 (|x-y|^2(1-\tau)+|x-y^*|^2\tau)^{-\frac{n+b-2}{2}}[\tau(1-\tau)]^{\frac b 2-1}d\tau, \\ &D_b=2^{b-2}\pi^{-\frac n2}\frac{\Gamma(\frac{n+b-2}{2})}{\Gamma(\frac b2)},\quad y^*=(y_1,\cdots,y_{n-1},-y_n). \end{split} \end{equation}
Moreover, for $b>0$, $n\geq 3$, the following estimate for $K_{b}(x,y)$ holds:
\begin{equation}\label{G2} |\partial_x^{\gamma} K_b(x,y)|\leq C(\gamma,b) |x-y^*|^{-b}|x-y|^{2-n-|\gamma|}, \end{equation}
where $\gamma \in \mathbb N^n$ (see Proposition 1-2 of \cite{Horiuchi95}). Since we do not know the behavior of $u$ at infinity, we introduce the Kelvin transform of $u$ as $v(x)=\frac{1}{|x-x_0|^{n+b-2}}u\left(\frac{x-x_0}{|x-x_0|^2}\right)$. For any $x_0 \in \partial \mathbb{R}^n_+$, we define $w(x):= u(x-x_0)$. Since
\begin{align*} w(x) &= u(x-x_0) \\ &= \int_{\mathbb{R}^n_+} K_b(x,y+x_0)y^b_n f(u(y)) dy \\ &= \int_{\mathbb{R}^n_+} K_b(x,y)y^b_n f(w(y)) dy, \end{align*}
we may take $x_0 = 0$. We consider $v(x)=\frac{1}{|x|^{n+b-2}}u\left(\frac{x}{|x|^2}\right)$. Then a direct calculation shows that $v(x)$ solves
\[ v(x)=\int_{\mathbb R^n_+}K_b(x,y)y_n^b f(v(y)|y|^{n+b-2})\frac{1}{|y|^{n+b+2}}dy \quad x \in \mathbb R^n_+ . \]
We define $\tilde{\tau}=\frac{n+b+2}{n+b-2}$. By the definition of $g(t)=\frac{f(t)}{t^{\tilde{\tau}}}$, we deduce that $v(x)$ satisfies
\[ v(x)=\int_{\mathbb R^n_+}K_b(x,y)y_n^b g(v(y)|y|^{n+b-2}) v^{\tilde{\tau}}dy .
\]
Since $u$ is continuous in $\overline{\mathbb{R}^n_+}$, we conclude that $v$ is continuous and positive in $\overline{\mathbb{R}^n_+} \setminus \{0\}$ with possible singularity at the origin. Moreover, $v$ decays at infinity as $u(0)|x|^{2-n-b}$. By the asymptotic behavior of $v$ at $\infty$, we get
\[ v(x) \in L^{\frac{2n}{n+b-2}}(\mathbb{R}^n_+ \setminus B_r(0)) \cap L^\infty (\mathbb{R}^n_+ \setminus B_r(0)) \]
for all $r>0$. Now we use the moving plane method to prove our result. For a given real number $\lambda$, define
\[ \Sigma_\lambda = \{x=(x_1,\cdots ,x_n)\in \mathbb R^n_+ \ | \ x_1\geq \lambda \}, \quad T_\lambda = \{x\in \mathbb R^n_+ \ | \ x_1=\lambda \} \]
and let $x^\lambda = (2\lambda-x_1,x_2,\cdots ,x_n)$ and $u_\lambda(x)=u(x^\lambda)$.
\begin{lemma}\label{lem1} \begin{align*} v(x)-v(x^\lambda)=& \int_{\Sigma_\lambda}(K_b(x,y)-K_b(x^\lambda,y)) y^b_n \times \\ &[g(v(y)|y|^{n+b-2})v(y)^{\tilde{\tau}}-g(v(y^\lambda)|y^\lambda|^{n+b-2})v(y^\lambda)^{\tilde{\tau}}]dy. \end{align*} \end{lemma}
\noindent \emph{Proof.} By the definition of $K_b(x,y)$, we know $K_b(x,y) = K_b(x^{\lambda},y^{\lambda})$, $K_b(x^{\lambda},y) = K_b(x,y^{\lambda})$. A direct calculation implies
\begin{equation*} \begin{aligned} v(x) = \int_{\Sigma_\lambda} K_b(x,y) y^b_n g(v(y)|y|^{n+b-2})v(y)^{\tilde{\tau}}dy + \int_{\Sigma_\lambda} K_b(x,y^{\lambda}) y^b_n g(v(y^\lambda)|y^\lambda|^{n+b-2})v(y^\lambda)^{\tilde{\tau}}dy, \end{aligned} \end{equation*}
and
\begin{align*} v(x^\lambda)= \int_{\Sigma_\lambda} K_b(x^{\lambda},y) y^b_n g(v(y)|y|^{n+b-2})v(y)^{\tilde{\tau}} dy + \int_{\Sigma_\lambda} K_b(x^{\lambda},y^{\lambda}) y^b_n g(v(y^\lambda)|y^\lambda|^{n+b-2}) v(y^\lambda)^{\tilde{\tau}}dy. \end{align*}
We get the desired result. $\Box$
\begin{lemma}\label{lem2} Under the assumptions of Theorem \ref{thm0}, there exists $\lambda_0 > 0$ such that for all $\lambda \geq \lambda_0$, we have $v_\lambda(x) \geq v(x)$ for all $x \in \Sigma_\lambda$. \end{lemma}
\noindent \emph{Proof.} \ $\forall \lambda >0$, it is easy to see that $|y|>|y^\lambda|$, $\forall y\in \Sigma_\lambda$. Let
\[ \Sigma^{-}_{\lambda} = \{y\in \Sigma_\lambda \ | \ v(y)>v_\lambda(y) \}, \]
then $\forall y\in \Sigma^-_\lambda$, we have
\[ v(y)|y|^{n+b-2} > v_\lambda(y)|y^\lambda|^{n+b-2}. \]
Since $g(t)$ is non-increasing (see Theorem \ref{thm0}(ii)), we get
\[ g(v(y)|y|^{n+b-2}) \leq g(v_\lambda(y)|y^\lambda|^{n+b-2}). \]
This implies
\begin{align} g(v(y)|y|^{n+b-2})v(y)^{\tilde{\tau}}- g(v_\lambda(y)|y^\lambda|^{n+b-2})v_\lambda(y)^{\tilde{\tau}} \leq g(v(y)|y|^{n+b-2})(v(y)^{\tilde{\tau}} - v_\lambda(y)^{\tilde{\tau}}).
\label{s4} \end{align}
As for $y \in \Sigma_\lambda\setminus\Sigma^-_\lambda$, using the assumptions of Theorem \ref{thm0} we have
\begin{align} g(v(y)|y|^{n+b-2})v(y)^{\tilde{\tau}} &= \frac{f(v(y)|y|^{n+b-2})}{|y|^{n+b+2}} \nonumber \\ &\leq \frac{f(v_\lambda(y)|y|^{n+b-2})}{|y|^{n+b+2}} \nonumber \\ &=\frac{f(v_\lambda(y)|y|^{n+b-2})}{[|y|^{n+b-2}v_\lambda(y)]^{\tilde{\tau}}}v_\lambda(y)^{\tilde{\tau}}\nonumber \\ &= g(v_\lambda(y)|y|^{n+b-2}) v_\lambda(y)^{\tilde{\tau}} \nonumber \\ &\leq g(v_\lambda(y)|y^\lambda|^{n+b-2})v_\lambda(y)^{\tilde{\tau}}.\label{s5} \end{align}
Then, by Lemma \ref{lem1} and the mean value theorem, there exists $\xi$ between $y$ and $y^\lambda$ such that
\begin{equation} v(y)^{\tilde{\tau}}-v_\lambda(y)^{\tilde{\tau}}=\tilde{\tau}v(\xi)^{\tilde{\tau}-1}(v(y)-v_\lambda(y)) \end{equation}
holds. Combining Lemma \ref{lem1}, \eqref{s4} and \eqref{s5}, we have
\begin{align*} v(x)-v_\lambda(x) &\leq \int_{\Sigma^-_\lambda} (K_b(x,y)-K_b(x^\lambda,y))y^b_n g(v(y)|y|^{n+b-2})(v(y)^{\tilde{\tau}}-v_\lambda(y)^{\tilde{\tau}}) dy \\ &\leq \tilde{\tau} \int_{\Sigma^-_\lambda} (K_b(x,y)-K_b(x^\lambda,y))y^b_n g(v(y)|y|^{n+b-2})v(y)^{\tilde{\tau}-1}(v(y)-v_\lambda(y)) dy. \end{align*}
In deriving the above inequality, we also used that $K_b(x,y) \geq K_b(x^\lambda, y)$ for all $x,y \in \Sigma_\lambda$ and that $v_\lambda (y) \leq v(\xi) \leq v(y)$ for all $y \in \Sigma^-_\lambda$. Since
\[ |y|^{n+b-2}v(y)=u\left(\frac{y}{|y|^2}\right) \geq \min_{B_{\frac{1}{\lambda}}} u(x) \geq C_\lambda >0, \quad \forall \lambda>0, \ y \in \Sigma_\lambda, \]
and $f$ is continuous, we conclude that $g(|y|^{n+b-2}v(y))$ is bounded for $y\in \Sigma_\lambda$. Hence we can deduce from the above inequality that
\[ v(x)-v_\lambda(x) \leq C\int_{\Sigma^-_\lambda} K_b(x,y)y^b_n v(y)^{\tilde{\tau}-1}(v(y)-v_\lambda(y)) dy, \quad \forall \lambda > 0, \ x\in \Sigma_\lambda. \]
By \eqref{G2} and since $\frac{y^b_n}{|x-y^*|^b} \leq 1$, we obtain
\[ v(x)-v_\lambda(x) \leq C\int_{\Sigma^-_\lambda} \frac{1}{|x-y|^{n-2}} v(y)^{\tilde{\tau}-1}(v(y)-v_\lambda(y)) dy. \]
By the Hardy-Littlewood-Sobolev inequality, it follows that for any $q > \frac{n}{n-2}$,
\[ \|v-v_\lambda \|_{L^q(\Sigma^-_\lambda)} \leq C \|v^{\tilde{\tau} -1}(v-v_\lambda )\|_{L^{\frac{qn}{n+2q}}(\Sigma^-_\lambda)}. \]
By the generalized H\"older inequality, we obtain
\begin{align*} \|v-v_\lambda \|_{L^q(\Sigma^-_\lambda)} & \leq C\|v^{\tilde{\tau} -1}\|_{L^{\frac{n}{2}}(\Sigma^-_\lambda)} \|v-v_\lambda \|_{L^q(\Sigma^-_\lambda)}\\ &=C\left(\int_{\Sigma^-_\lambda}v^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}}\|v-v_\lambda \|_{L^q(\Sigma^-_\lambda)}. \end{align*}
Since $v\in L^{\frac{2n}{n+b-2}}(\mathbb{R}^n_+\setminus B_r(0))$, we can choose $\lambda_0$ sufficiently large such that, for $\lambda \geq \lambda_0$, we have
\[ C\left(\int_{\Sigma_\lambda}v^{\frac{2n}{n+b-2}}(y)dy \right)^{\frac{2}{n}} \leq \frac{1}{2}. \]
Then we conclude
\[ \|v-v_\lambda \|_{L^q(\Sigma^-_\lambda)} = 0 \]
for all $\lambda \geq \lambda_0$. Thus, $\Sigma^-_\lambda$ must have measure $0$. Since $v$ is continuous, we deduce that $\Sigma^-_\lambda$ is empty. $\Box$

We define $\lambda_1 = \inf \{ \lambda \ | \ v(x)\leq v_\mu(x), \ \forall \mu \geq \lambda, \ x\in\Sigma_\mu \}$. Note that, by Lemma \ref{lem2}, the set appearing in this definition contains $[\lambda_0,\infty)$ and is therefore nonempty.
\begin{lemma}\label{lem3}
If $\lambda_1 >0$, then $v(x)\equiv v_{\lambda_1}(x)$ for all $x\in \Sigma_{\lambda_1}$.
\end{lemma}
\noindent \emph{Proof.} \ Suppose that the conclusion does not hold. Then $v(x) \leq v_{\lambda_1}(x)$, but $v(x) \not\equiv v_{\lambda_1}(x)$ in $\Sigma_{\lambda_1}$. We will prove that the plane can be moved further to the left. We infer from the proof of Lemma \ref{lem2} that
\begin{equation}\label{s6}
\|v-v_\lambda \|_{L^q(\Sigma^-_\lambda)} \leq C\left(\int_{\Sigma^-_\lambda}v^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}}\|v-v_\lambda \|_{L^q(\Sigma^-_\lambda)}.
\end{equation}
If one can show that, for $\varepsilon$ sufficiently small, for all $\lambda \in (\lambda_1-\varepsilon,\lambda_1]$ there holds
\begin{equation}\label{s7}
C\left(\int_{\Sigma^-_\lambda}v^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}} \leq \frac{1}{2},
\end{equation}
then by \eqref{s6}, we have $\|v-v_\lambda\|_{L^q(\Sigma^-_\lambda)}=0$, and therefore $\Sigma^-_\lambda$ must have measure zero. Since $v$ is continuous, we deduce that $\Sigma^-_\lambda$ is empty. This contradicts the definition of $\lambda_1$. Now we verify inequality \eqref{s7}. Since $v\in L^{\frac{2n}{n+b-2}}(\mathbb{R}^n_+\setminus B_r(0))$, for any small $\eta >0$ we can choose $R$ sufficiently large so that
\begin{equation}\label{s8}
C\left(\int_{\mathbb{R}^n_+ \backslash B_R}v^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}} \leq \eta.
\end{equation}
We fix this $R$ and then show that the measure of $\Sigma^-_\lambda \cap B_R$ is sufficiently small for $\lambda$ close to $\lambda_1$. Since the measure of $\Sigma^-_{\lambda_1}$ is zero, by Lemma \ref{lem1} and \eqref{s5} we deduce $v(x) < v_{\lambda_1}(x)$ in the interior of $\Sigma_{\lambda_1}$. For any $\gamma >0$, let
\[
E_\gamma = \{x\in\Sigma_{\lambda_1} \cap B_R \ | \ v_{\lambda_1}(x) - v(x) > \gamma\}, \qquad F_\gamma = (\Sigma_{\lambda_1} \cap B_R)\backslash E_\gamma.
\]
It is obvious that
\[
\lim_{\gamma\rightarrow 0} \mu(F_\gamma) =0.
\]
For $\lambda < \lambda_1$, let
\[
D_\lambda = (\Sigma_\lambda \backslash \Sigma_{\lambda_1}) \cap B_R.
\]
Clearly, the measure of $D_\lambda$ is small for $\lambda$ close to $\lambda_1$. Then it is easy to see that
\begin{equation}\label{s9}
(\Sigma^-_\lambda \cap B_R) \subseteq (\Sigma^-_\lambda \cap E_\gamma) \cup F_\gamma \cup D_\lambda.
\end{equation}
In fact, $\forall x\in \Sigma_\lambda^- \cap E_\gamma$, we have
\[
v(x)- v_\lambda(x) = v(x)-v_{\lambda_1}(x) + v_{\lambda_1}(x) - v_\lambda(x) >0.
\]
Hence
\[
v_{\lambda_1}(x) - v_\lambda(x) > v_{\lambda_1}(x)- v(x) > \gamma.
\]
It follows that
\begin{equation}\label{s10}
(\Sigma_\lambda^- \cap E_\gamma) \subseteq G_\gamma \equiv \{x\in B_R \ | \ v_{\lambda_1}(x) - v_\lambda(x) >\gamma \}.
\end{equation}
By the well-known Chebyshev inequality, we have
\begin{align*}
\mu (G_\gamma) & \leq \frac{1}{\gamma^{p+1}} \int_{G_\gamma} |v_{\lambda_1}(x)-v_\lambda(x)|^{p+1}dx \\
& \leq \frac{1}{\gamma^{p+1}} \int_{B_R} |v_{\lambda_1}(x)-v_\lambda(x)|^{p+1}dx,
\end{align*}
where $p>0$. For each fixed $\gamma$, as $\lambda$ is close to $\lambda_1$, the right-hand side of the above inequality can be made as small as we wish. Therefore by \eqref{s9} and \eqref{s10}, the measure of $\Sigma^-_\lambda \cap B_R$ can also be made sufficiently small. Combining this with \eqref{s8}, we obtain \eqref{s7}. $\Box$
\section{Proof of Theorem 1.1}
We define
\[
\mathcal L_b(u)=- \text{div}(y_n^b \nabla u).
\]
Take a cutoff function $\varphi_R (y) \in C^{\infty}_c(B_R)$ such that $0\leq \varphi_R \leq 1$ in $B_R$, $\varphi_R =1$ in $B_{\frac{R}{2}}$. Setting
\[
u_R(x)=\int_{\mathbb R^n_+} K_b(x,y)y^b_n f(u(y))\varphi_R (y)dy,
\]
we clearly have $f(u(y))\varphi_R (y) \in C(\overline{\mathbb R^n_+}) \cap \mathcal{E}'$, where $\mathcal{E}'$ is the dual space of $C^\infty(\overline{\mathbb R^n_+})$. By the result of \cite{Horiuchi95}, we have $\mathcal L_b K_b(x,y)=\delta(x-y)$ and
\begin{equation}
\left\{
\begin{aligned}
\mathcal L_b(u_R) &= y^b_n f(u(y))\varphi_R (y) \quad \text{in} \quad \mathbb R^n_+ \\
\frac{\partial u_R}{\partial x_n} &= 0 \quad \qquad \qquad \qquad \ \text{on} \quad \partial \mathbb R^n_+
\end{aligned}
\right.
\end{equation}
(see Proposition 1-1 of \cite{Horiuchi95}). By letting $R \rightarrow \infty$, we then conclude that
\begin{equation}
\left\{
\begin{aligned}
-(y^b_n \Delta u + b y^{b-1}_n u_n)&= y^b_n f(u(y)) \quad \text{in} \quad \mathbb R^n_+ \\
\frac{\partial u}{\partial x_n} &= 0 \quad \qquad \quad \ \ \text{on} \quad \partial \mathbb R^n_+.
\end{aligned}
\right.
\end{equation}
Therefore, any solution of the integral equation \eqref{s3} satisfies the following partial differential equation:
\begin{equation}\label{s13}
\left\{
\begin{aligned}
\Delta u + \frac{b}{y_n}u_n + f(u(y))&= 0 \quad \text{in}\quad \mathbb R^n_+ \\
\frac{\partial u}{\partial x_n}&=0 \quad \text{on}\quad \partial \mathbb R^n_+.
\end{aligned}
\right.
\end{equation}
Next we establish the equivalence between the integral equation \eqref{s3} and the partial differential equation \eqref{s13}. We need the asymptotic behavior $u(x) \sim \frac{1}{|x|^{n+b-2}}$ as $x\rightarrow \infty$; that is, there exist two constants $R$, $C$ such that
\[
u(x)= \frac{C}{|x|^{n+b-2}}, \quad |x|>R.
\]
\begin{lemma}\label{lem8}
Let $f(t)=\bar{c}t^{\frac{n+b+2}{n+b-2}}$ in Theorem \ref{thm0} and $u(x) \sim \frac{1}{|x|^{n+b-2}}$ as $x\rightarrow \infty$. Then any positive solution $u$ of $\eqref{s3}$ belongs to $C^2(\overline{\mathbb{R}^n_+})$.
\end{lemma}
\noindent $\emph{Proof.}$ \\
\textbf{Step 1:} \emph{We prove that $u \in C^\alpha(\overline{\mathbb{R}^n_+})$.}
Since $u \in C^0(\overline{\mathbb{R}^n_+})$, one knows $u\in L^\infty(\overline{B_R \cap \mathbb{R}^n_+})$. Since $u(x) \sim \frac{1}{|x|^{n+b-2}}$ as $x\rightarrow \infty$, we obtain $u\in L^p(\mathbb{R}^n_+)$, $\forall p>\frac{n}{n+b-2}$. By $\eqref{G2}$ we have
\[
| Du(x)| \leq C\int_{\mathbb{R}^n_+} \frac{1}{|x-y|^{n-1}} u(y)^{\tilde{\tau}} dy.
\]
We apply the Hardy-Littlewood-Sobolev inequality to get
\[
\| Du \|_{L^p(\mathbb{R}^n_+)} \leq C \|u^{\tilde{\tau}}\|_{L^{\frac{np}{n+p}}(\mathbb{R}^n_+)}
\]
for any $p > \frac{n}{n-1}$. Since $\tilde{\tau}\frac{np}{n+p} > \frac{n}{n+b-2}$, we obtain $Du \in L^p(\mathbb{R}^n_+)$. This, together with Morrey's embedding, implies the desired result for all $p>n$.
\noindent \textbf{Step 2:} \emph{We prove that $u \in C^2(\overline{\mathbb{R}^n_+})$.}
We write $W_R = \{ x=(x',x_n) \ | \ 0 < x_n < R, \ |x'| < R \}$. For an arbitrarily fixed $\bar{x} \in \overline{\mathbb{R}^n_+}$, there is a positive constant $R$ such that $\bar{x} \in W_{R}$ and $\text{dist}(\bar{x},\tilde{\partial } W_{R}) > 1$, where $\tilde{\partial } W_{R} = \partial W_R \backslash \{x \ | \ x_n = 0, \ |x'| < R \}$. We have $B_{1}(\bar{x})\cap \mathbb{R}^n_+ \subset W_R$.
Take a cutoff function $\varphi(y) \in C^{\infty}_c(B_{1}(\bar{x}))$ such that $0\leq \varphi \leq 1$ in $B_1(\bar{x})$, $\varphi =1$ in $B_{\frac{1}{2}}(\bar{x})$. We define
\[
u_1(x)=\int_{\mathbb R^n_+} K_b(x,y)y^b_n f(u(y))\varphi(y)dy, \quad x\in B_{\frac{1}{4}}(\bar{x}).
\]
By \cite{Horiuchi95}, we have $u_1 \in C^{2,\alpha}(\overline{\mathbb R^n_+})$. Set
\begin{align*}
u_2(x) & =\int_{\mathbb R^n_+} K_b(x,y)y^b_n f(u(y))(1- \varphi(y)) dy \\
& =\int_{\mathbb R^n_+ \setminus B_{\frac{1}{2}}(\bar{x})} K_b(x,y)y^b_n f(u(y))(1- \varphi(y)) dy, \quad x\in B_{\frac{1}{4}}(\bar{x}).
\end{align*}
Then by the H\"{o}lder inequality and $f(t)=\bar{c}t^{\tilde{\tau}}$, we conclude
\begin{align*}
|D^2 u_2(x)| & \leq \left \| \frac{1}{|x-y|^n} \right \|_{L^{q^\ast}(\mathbb R^n_+ \setminus B_{\frac{1}{2}}(\bar{x}))} \| u^{\tilde{\tau}} \|_{L^{p^\ast}(\mathbb R^n_+ \setminus B_{\frac{1}{2}}(\bar{x}))} \\
& \leq C \| u \|^{\tilde{\tau}}_{L^{p}(\mathbb R^n_+ \setminus B_{\frac{1}{2}}(\bar{x}))}
\end{align*}
for $p > \max \left \{ \frac{n+b+2}{n+b-2}, n \right \} $, where $p^{\ast}= \frac{p}{\tilde{\tau}}$ and $\frac{1}{p^{\ast}} + \frac{1}{q^{\ast}} = 1$. Similarly,
\begin{align*}
|D^3 u_2(x)| & \leq \left \| \frac{1}{|x-y|^{n+1}} \right \|_{L^{q^\ast}(\mathbb R^n_+ \setminus B_{\frac{1}{2}}(\bar{x}))} \| u^{\tilde{\tau}} \|_{L^{p^\ast}(\mathbb R^n_+ \setminus B_{\frac{1}{2}}(\bar{x}))} \\
& \leq C \| u \|^{\tilde{\tau}}_{L^{p}(\mathbb R^n_+ \setminus B_{\frac{1}{2}}(\bar{x}))}.
\end{align*}
This implies the desired result. $\Box$

Since $u\in C^2(\overline{\mathbb{R}^n_+})$, we may consider (see \cite{Huang2012})
\begin{equation}\label{s14}
\bar{u}(x',x_n,x_{n+1}) = u\left(x',\sqrt{x^2_n+x^2_{n+1}}\right).
\end{equation}
It follows that
\begin{equation}\label{sec2}
\left \{
\begin{aligned}
\Delta_{n+1} \bar{u} + \frac{b-1}{x_{n+1}} \bar{u}_{n+1} + \bar{c} \bar{u}(x)^{\tilde{\tau}} &= 0 \quad \text{in}\quad \mathbb R^{n+1}_+ \\
\frac{\partial \bar{u}}{\partial x_{n+1}} &=0 \quad \text{on} \quad \partial \mathbb R^{n+1}_+.
\end{aligned}
\right.
\end{equation}
Thus $\bar{u}$ is a classical solution to \eqref{sec2}.
\begin{lemma}\label{lem7}
Let $\bar{u}(x)$ be a positive solution of $\eqref{sec2}$ whose asymptotic behavior at infinity is $\bar{u}(x) \sim \frac{1}{|x|^{n+b-2}}$. Then $\bar{u}(x)$ satisfies the corresponding integral equation
\begin{equation}\label{sec3}
\bar{u}(x) = \bar{c} \int_{\mathbb R^{n+1}_+} K_{b-1}(x,y) y^{b-1}_{n+1} \bar{u}(y)^{\tilde{\tau}} dy.
\end{equation}
\end{lemma}
\noindent \emph{Proof.} \ Consider $R$ large enough and $|x|>R$, and set $A_1 := \left\{y\in \mathbb R^{n+1}_+ \ | \ |y-x| \leq \frac{|x|}{2} \right \}$ and $A_2 := \left\{ y\in \mathbb R^{n+1}_+ \ | \ |y-x| \geq \frac{|x|}{2} \right \}$. Define $\zeta(x) := \bar{c} \int_{\mathbb R^{n+1}_+} K_{b-1}(x,y) y^{b-1}_{n+1} \bar{u}(y)^{\tilde{\tau}} dy$.
By the asymptotic behavior of $\bar{u}$ at infinity, one can easily verify that
\[
\left|\bar{c} \int_{A_1} K_{b-1}(x,y) y^{b-1}_{n+1} \bar{u}(y)^{\tilde{\tau}} dy\right| \leq C\frac{1}{|x|^{n+b}}
\]
and
\begin{align*}
& \left|\bar{c} \int_{A_2} K_{b-1}(x,y) y^{b-1}_{n+1} \bar{u}(y)^{\tilde{\tau}} dy\right| \\
\leq & \ C \int_{A_2 \cap B_{\sqrt{|x|}}} \frac{y^{b-1}_{n+1}}{|x-y^*|^{b-1}} \frac{1}{|x-y|^{n-1}}\left(\frac{1}{1+|y|}\right)^{n+b+2} dy + C \int_{A_2 \backslash B_{\sqrt{|x|}}} \frac{1}{|x-y|^{n-1}}\frac{1}{|y|^{n+b+2}} dy \\
\leq & \ \frac{C}{|x|^{n-1+\frac{b-1}{2}}}\int_{A_2 \cap B_{\sqrt{|x|}}} \left(\frac{1}{1+|y|}\right)^{n+b+2} dy + \frac{C}{|x|^{n-1}}\int_{A_2 \backslash B_{\sqrt{|x|}}}\frac{1}{|y|^{n+b+2}} dy \\
\leq & \ \frac{C}{|x|^{n+\frac{b-3}{2}}} + \frac{C}{|x|^{n-1+\frac{b+1}{2}}} \\
\leq & \ \frac{C}{|x|^{n+\frac{b-3}{2}}},
\end{align*}
where $|x|>R$ and $B_R = \{x\in \mathbb{R}^{n+1} \ | \ |x|<R \} $. In getting the above inequality, we used $\forall |y|<\sqrt{|x|}$, $\bar{u}(y)\sim \left(\frac{1}{1+|y|}\right)^{n+b-2}$ and
\[
|x-y^*|\geq ||x|-|y^*|| \geq |x|-\sqrt{|x|} \geq \frac{|x|}{2}.
\]
Then we have $| \zeta(x)| \leq C\frac{1}{|x|^{n+\frac{b-3}{2}}}$. Similarly, we also obtain $| \nabla \zeta(x)| \leq C\frac{1}{|x|^{n+\frac{b-1}{2}}}$. Recall that the asymptotic behavior at $\infty$ is $\bar{u}(x) \sim \frac{1}{|x|^{n+b-2}}$ and $\nabla \bar{u}(x) \sim \frac{1}{|x|^{n+b-1}}$. Let $w=\bar{u} - \zeta$; then $w$ satisfies the following equation
\[
\Delta w + \frac{b-1}{x_{n+1}} w_{n+1} = 0 \quad \text{in}\quad \mathbb R^{n+1}_+.
\]
Multiplying this identity by $wx_{n+1}$ and integrating by parts, we get
\begin{align*}
0 & = \lim_{R\rightarrow \infty} \int_{B^+_R(0)} (\Delta w + \frac{b-1}{x_{n+1}} w_{n+1}) wx_{n+1} dx \\
& = - \lim_{R\rightarrow \infty} \int_{B^+_R(0)} x_{n+1}|\nabla w|^2 dx + \lim_{R\rightarrow \infty} \int_{B^+_R(0)} (b-2)ww_{n+1} dx.
\end{align*}
By the asymptotic properties of $w$ and $\nabla w$, the boundary integral term is zero. We deduce that
\begin{align*}
\lim_{R\rightarrow \infty} \int_{B^+_R(0)} x_{n+1}|\nabla w|^2 dx & = \frac{b-2}{2} \lim_{R\rightarrow \infty} \int_{\partial B^+_R(0)} w^2 \nu^{n+1} dS \\
& \leq C \lim_{R\rightarrow \infty} \int_{\partial B^+_R(0)} \frac{1}{|x|^{2n+b-3}} dS \\
& = 0,
\end{align*}
where $\nu = (\nu^1 , \nu^2 , \cdots , \nu^{n+1}) $ is the outward pointing unit normal vector field. Then we have $\nabla w = 0$, so $w$ is constant, and the decay of $w$ at infinity forces $w = 0$. $\Box$
\begin{proposition}\label{pro1}
Let $f(t)$ satisfy the conditions of Theorem \ref{thm0}. The problem
\begin{equation}\label{pr1}
\left\{
\begin{aligned}
t^b u''(t) + bt^{b-1} u'(t)+t^bf(u(t))&=0 \quad t \in[0,+\infty) \\
u'(0)&= 0
\end{aligned}
\right.
\end{equation}
has no positive solutions of class $C^2$.
\end{proposition}
In order to prove Proposition \ref{pro1} we make use of the following results; the idea of the proof comes from Theorem 3.2 of \cite{EM93}.
\begin{lemma}\label{lem9}
Let $u(t) \in C^2$ be a positive solution of $(\ref{pr1})$, then for every $t>0$ we have
\begin{equation}\label{l3}
tu'(t) + (b-1)u(t) \geq 0.
\end{equation}
\end{lemma}
\noindent $\emph{Proof.}$ From $(\ref{pr1})$ we have
\[
tu'' + bu' + tf(u) =0, \quad t>0,
\]
hence
\[
tu'' + bu'=(tu')' + (b-1)u' \leq 0.
\]
The function $M(t):= tu'(t)+(b-1)u(t)$ is non-increasing.
Now we proceed by contradiction. If there exists $t_1>0$ such that
\[
M(t_1)=t_1u'(t_1)+(b-1)u(t_1) <0,
\]
then the monotonicity of $M(t)$ and the positivity of $u(t)$ imply that
\[
u'(t)\leq u'(t) + \frac{b-1}{t} u(t) \leq \frac{M(t_1)}{t} \ \text{for} \ t\geq t_1.
\]
Integrating the inequality $u'(t)\leq\frac{M(t_1)}{t}$ on $(t_1,t)$, we obtain
\[
-u(t_1) \leq u(t)-u(t_1) \leq M(t_1)\ln \left (\frac{t}{t_1}\right).
\]
Letting $t\rightarrow \infty$ and recalling that $M(t_1)<0$, we get a contradiction. $\Box$

The next lemma contains the necessary a priori estimate for our study.
\begin{lemma}\label{lem10}
Let $u \in C^2$ be a positive solution of $(\ref{pr1})$, then
\begin{equation}\label{l7}
u(t) \leq Ct^{-\frac{n+b-2}{2}},
\end{equation}
\begin{equation}\label{l8}
|u'(t)| \leq Ct^{-\frac{n+b}{2}},
\end{equation}
where $C$ is a positive constant.
\end{lemma}
\noindent $\emph{Proof.}$ \ We claim $f(0)=0$. Suppose $f(0)>0$. Since $f(t)$ is non-decreasing, we have
\begin{equation}\label{l4}
(t^b u'(t))'=-t^b f(u(t)) \leq -f(0)t^b.
\end{equation}
Integrating $(\ref{l4})$ on $(0,t)$ and using the fact that $u'(0)=0$, we obtain
\begin{equation}\label{l5}
t^b u'(t) \leq -\frac{f(0)}{b+1} t^{b+1}.
\end{equation}
Integrating $(\ref{l5})$, we get
\[
u(t) \leq -\frac{f(0)}{2(b+1)}t^2 + u(0).
\]
This contradicts $u(t)>0$ by letting $t\rightarrow \infty$. Hence $f(0)=0$. Since $f(t)$ is non-decreasing, by Remark \ref{r1} there are three cases for $f$. For the case $f\equiv 0$, we obtain $u\equiv 0$ by applying $(\ref{s3})$. If $f(t) = 0 \ \text{in} \ [0,t_0]$ for some $t_0 >0$, then we have $g(t) = 0 \ \text{in} \ [0,t_0]$ and $g(t) > 0 \ \text{in} \ (t_0, +\infty)$. This contradicts the non-increasing property of $g$. It remains to consider the case $f(t) > 0$ for every $t>0$. Since
\[
(t^b u'(t))' = -t^b f(u(t)) <0, \quad \forall t>0,
\]
$t^b u'(t)$ is strictly decreasing. Noticing that $u'(0)=0$, we see that $u'(t) \leq 0$, $\forall t\geq 0$. Since $u' \leq 0$ and $u >0$, we get that $u$ is bounded. Assume that $u(t)\leq M$ in $[0,+\infty)$. Since $g(t)$ is non-increasing, we have
\[
\frac{f(t)}{t^{\tilde{\tau}}} = g(t) \geq g(M) = \frac{f(M)}{M^{\tilde{\tau}}} \quad \forall t\in(0,M],
\]
and hence $f(u)\geq Cu^{\tilde{\tau}}$ with $C=f(M)/M^{\tilde{\tau}}>0$. Therefore, we have
\begin{equation}\label{l6}
-(t^b u'(t))' = t^b f(u(t)) \geq Ct^b u(t)^{\tilde{\tau}}.
\end{equation}
Integrating $(\ref{l6})$ on $(0,t)$, by using $u'(t)\leq 0$, we have
\[
-t^b u' \geq Ct^{b+1}u(t)^{\tilde{\tau}}.
\]
Integrating $-u' \geq Ct u(t)^{\tilde{\tau}}$ over $(0,t)$, we get $u(t)^{1-\tilde{\tau}} \geq Ct^2 + u(0)^{1-\tilde{\tau}}$. Therefore, we obtain
\[
u(t) \leq Ct^{-\frac{2}{\tilde{\tau}-1}} = Ct^{-\frac{n+b-2}{2}}.
\]
In order to prove $(\ref{l8})$, it suffices to combine $(\ref{l3})$ and $(\ref{l7})$. $\Box$ \\
\noindent \textbf{Proof of Proposition \ref{pro1}} We proceed by a contradiction argument. We define $F(t)=\int^t_0 f(s) ds$. Multiplying $(\ref{pr1})$ by $u$ and integrating by parts on $(0,t)$, we obtain
\begin{equation}\label{l9}
t^bu(t)u'(t) - \int^t_0 s^b(u'(s))^2 ds + \int^t_0 s^bf(u(s))u(s) ds = 0.
\end{equation}
On the other hand, multiplying $(\ref{pr1})$ by $tu'(t)$ and integrating by parts on $(0,t)$, we get
\begin{equation}\label{l10}
\frac{1}{2}t^{b+1}(u'(t))^2 + \frac{b-1}{2}\int^t_0 s^b(u'(s))^2 ds + t^{b+1}F(u(t)) - (b+1)\int^t_0 s^b F(u(s)) ds = 0.
\end{equation}
Using $(\ref{l7})$ and $(\ref{l8})$, we conclude that
\[
\lim_{t\rightarrow \infty} t^bu(t)u'(t) = \lim_{t\rightarrow \infty}t^{b+1}(u'(t))^2 =0
\]
and
\[
\int^{\infty}_0 s^b(u'(s))^2 ds < \infty.
\]
Hence by $(\ref{l9})$, we have
\[
\int^{\infty}_0 s^b(u'(s))^2 ds = \int^{\infty}_0 s^bf(u(s))u(s) ds < \infty.
\]
We claim that there is a sequence $t_k\rightarrow \infty$ such that $t^{b+1}_{k}F(u(t_k))\rightarrow 0$. Suppose not; then $t^{b+1}F(u(t)) \geq C_0 > 0$, $\forall t > 1$, for some positive constant $C_0$, and we have
\begin{equation}\label{l11}
\frac{C_0}{t} \leq t^{b}F(u(t)) = t^{b}\int^{u(t)}_0 f(s)ds \leq t^bf(u(t))u(t).
\end{equation}
Integrating the inequality $(\ref{l11})$ on $(1,+\infty)$, we obtain
\[
\int^{\infty}_1 \frac{C_0}{t} dt \leq \int^{\infty}_1 t^bf(u(t))u(t) dt.
\]
The left-hand side of the inequality is unbounded, but the right-hand side is bounded. This is a contradiction. The claim is proved. Since $g(t)=\frac{f(t)}{t^{\tilde{\tau}}}$ is non-increasing, we have
\begin{equation}\label{l12}
F(u(t))=\int^{u(t)}_0 \frac{f(s)}{s^{\tilde{\tau}}}s^{\tilde{\tau}}ds \geq \frac{1}{\tilde{\tau}+1}f(u(t))u(t).
\end{equation}
Combining $(\ref{l9})$, $(\ref{l10})$ and $(\ref{l12})$, and taking $t=t_k$, we get
\begin{align*}
\frac{1}{2}t^{b+1}_k(u'(t_k))^2 + \frac{b-1}{2}\left[t^b_k u(t_k)u'(t_k) + \int^{t_k}_0 s^bf(u(s))u(s) ds\right] \\
+ t^{b+1}_k F(u(t_k)) - \frac{(b+1)}{\tilde{\tau}+1}\int^{t_k}_0 s^b f(u(s))u(s) ds \geq 0.
\end{align*}
Letting $t_k \rightarrow \infty$, we have
\[
\left(\frac{b-1}{2}-\frac{(b+1)}{\tilde{\tau}+1}\right)\int^{\infty}_0 s^b f(u(s))u(s) ds \geq 0.
\]
This contradicts the fact that
\[
\left(\frac{b-1}{2}-\frac{(b+1)}{\tilde{\tau}+1}\right) = \frac{1}{2}\left(\frac{2(b+1)}{n+b}-2\right) < 0.
\]
$\Box$\\
\textbf{Proof of Theorem \ref{thm0}} By Lemma \ref{lem1}, Lemma \ref{lem2} and Lemma \ref{lem3}, we choose the $x_1$ direction and prove that $v$ is symmetric in the $x_1$ direction. If $\lambda_1 >0$, then $v$ is symmetric in the direction of $x_1$. If $\lambda_1 =0$, then we conclude by continuity that $v(x) \leq v_0(x)$ for all $x \in \Sigma_0$. We can also start the moving plane from $-\infty$ and find a corresponding $\lambda^\prime_1$. If $\lambda^\prime_1 =0$, then we get $v_0(x) \leq v(x)$ for $x \in \Sigma_0$. So $v(x)$ is symmetric with respect to $T_0$. If $\lambda^\prime_1<0$, an analogue of Lemma \ref{lem3} shows that $v$ is symmetric with respect to $T_{\lambda^\prime_1}$. For the $x_2,x_3, \cdots, x_{n-1}$ directions, we can carry out the same procedure as above. There are two cases for solutions.
\noindent \textbf{Case 1} \ If $\lambda_1 >0$ or $\lambda^\prime_1 <0$ in some direction for some $x_0 \in \partial\mathbb{R}^n_+$, we have $v=v_{\lambda_1}$ or $v=v_{\lambda^\prime_1}$. Since $g$ is non-increasing and by Lemma \ref{lem1}, we get $g(t) = \bar{c}$. This implies $f(t)=\bar{c} t^{\frac{n+b+2}{n+b-2}}$. For $\bar{c}=0$, one has $u\equiv0$. In the following, we always assume $\bar{c}>0$. Since $v$ is regular at the origin, the asymptotic behavior of $u$ at infinity is $u(x) \sim \frac{1}{|x|^{n+b-2}}$. According to Lemma \ref{lem8}, we obtain $u\in C^2(\overline{\mathbb{R}^n_+})$. We know $u$ satisfies \eqref{s13}.
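Before carrying out the transformation, let us record the elementary computation behind the passage from \eqref{s13} to \eqref{sec2}; this is only a sketch, written under the smoothness provided by Lemma \ref{lem8}. Writing $\rho=\sqrt{x^2_n+x^2_{n+1}}$ and $\bar{u}(x',x_n,x_{n+1})=u(x',\rho)$, a direct computation gives
\[
\frac{\partial^2 \bar{u}}{\partial x_n^2}+\frac{\partial^2 \bar{u}}{\partial x_{n+1}^2}=u_{\rho\rho}+\frac{1}{\rho}u_\rho,
\qquad
\frac{b-1}{x_{n+1}}\frac{\partial \bar{u}}{\partial x_{n+1}}=\frac{b-1}{x_{n+1}}\cdot\frac{x_{n+1}}{\rho}u_\rho=\frac{b-1}{\rho}u_\rho,
\]
so that
\[
\Delta_{n+1}\bar{u}+\frac{b-1}{x_{n+1}}\bar{u}_{n+1}=\Delta_{x'}u+u_{\rho\rho}+\frac{b}{\rho}u_\rho,
\]
which is exactly the operator appearing in \eqref{s13} with $y_n$ replaced by $\rho$.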
After the transformation \eqref{s14}, $\bar{u}$ satisfies the $(n+1)$-dimensional equation \eqref{sec2}. Thanks to Lemma \ref{lem7}, we obtain the equivalence between the integral equation \eqref{sec3} and its corresponding differential equation \eqref{sec2}; in particular, $\bar{u}$ satisfies the $(n+1)$-dimensional integral equation \eqref{sec3}. Using the above moving plane method in integral form for $\bar{u}$, we obtain that $\bar{u}$ is radially symmetric in the directions $x_1,\cdots, x_{n-1},x_n$. We define $(x',x_n)=(x_1,\cdots , x_{n-1},x_n)$. There exists $p=(p_1,\cdots, p_n) \in \mathbb{R}^{n}$ such that
\[
u(x',|x_n|)= \bar{u}(x',x_n,0)=\bar{u}(\bar{x}',\bar{x}_n,0) = u(\bar{x}',|\bar{x}_n|)
\]
if $\sum^{n}_{i=1}|x_i-p_i|^2 = \sum^n_{i=1}|\bar{x}_i-p_i|^2$. Hence $u$ is radially symmetric about $p$. In fact, $p_n$ must be zero. Otherwise, it follows that
\[
\bar{u}(x',2p_n-x_n,0) = \bar{u}(x',x_n,0) = u(x',|x_n|) = \bar{u}(x',-x_n,0).
\]
This shows that, for fixed $x'$, $\bar{u}$ is periodic with respect to $x_n$ with period $2p_n$. Similarly to the proof of Lemma \ref{lem2}, we obtain that $\bar{u}$ is monotonic with respect to $p$. Thanks to the decay estimate of $\bar{u}$, we directly get $\bar{u} \equiv 0$ and $u \equiv 0$, a contradiction. Thus, we have $p_n=0$ and $u$ is radially symmetric about $p\in\partial\mathbb{R}^{n}_+$. By a result of \cite{ChenLiOu06}, we deduce that $u(x)=u_{a,p}(x)=\left(\frac{ca}{a^2+|x-p|^2}\right)^{\frac{n+b-2}{2}}$, where $c=\sqrt{\frac{(n+b)(n+b-2)}{\bar{c}}}$ and $a\geq0$.
\noindent \textbf{Case 2} \ Now we suppose that $\lambda_1=\lambda^\prime_1=0$ for the $x_1,x_2, \cdots, x_{n-1}$ directions and for all $x_0\in \partial\mathbb{R}^n_+$; then $v$, and hence $u$, is radially symmetric in the $x_1,x_2, \cdots, x_{n-1}$ directions. We define $S_C := \{x\in \mathbb{R}^n_+ \ | \ x_n= C\}$. For any given $p$, $q \in S_C$, there exists $x_0 \in \partial\mathbb{R}^n_+$ such that $d(p,x_0) = d(q,x_0)$. Since $u$ is radially symmetric, we have $u(p)=u(q)$. By the arbitrariness of $p$ and $q$, the solution $u$ depends only on $x_n$. Set $\tilde{u}(x_n) := u(x)$, and we have:
\begin{equation}
\left\{
\begin{aligned}
x^b_n \tilde{u}^{\prime \prime} + b x^{b-1}_n \tilde{u}^{\prime} +x^b_n f(\tilde{u})&=0 \\
\tilde{u}'(0)&= 0.
\end{aligned}
\right.
\end{equation}
According to Proposition \ref{pro1}, we have $\tilde{u}\equiv 0$. Hence, $u(x)\equiv0$. $\Box$
\section{Integral system}
The idea of proving Theorem \ref{thm1} is similar to the proof of Theorem \ref{thm0}. We consider the Kelvin transforms $w$, $z$ of $u$, $v$ defined by
\[
w(x)= \frac{1}{|x|^{n+b-2}}u \left (\frac{x}{|x|^2} \right ), \ z(x) = \frac{1}{|x|^{n+b-2}}v\left(\frac{x}{|x|^2}\right ).
\]
The definitions of $\Sigma_\lambda$, $T_\lambda$, $x^\lambda$ and $u_\lambda(x)$ in the moving plane method are the same as above. By a simple calculation, we have
\begin{equation}
\left\{
\begin{aligned}
w(x)&=\int_{\mathbb R^n_+}K_b(x,y)y_n^b f(z(y)|y|^{n+b-2})\frac{1}{|y|^{n+b+2}}dy \quad x \in \mathbb R^n_+ , \\
z(x)&=\int_{\mathbb R^n_+}K_b(x,y)y_n^b g(w(y)|y|^{n+b-2})\frac{1}{|y|^{n+b+2}}dy \quad x \in \mathbb R^n_+ .
\end{aligned}
\right.
\end{equation}
By the definitions $h(t)=\frac{f(t)}{t^{\tilde{\tau}}}$ and $k(t)=\frac{g(t)}{t^{\tilde{\tau}}}$, we deduce that
\begin{equation}
\left\{
\begin{aligned}
w(x)&=\int_{\mathbb R^n_+}K_b(x,y)y_n^b h(z(y)|y|^{n+b-2}) z(y)^{\tilde{\tau}}dy , \\
z(x)&=\int_{\mathbb R^n_+}K_b(x,y)y_n^b k(w(y)|y|^{n+b-2}) w(y)^{\tilde{\tau}}dy .
\end{aligned}
\right.
\end{equation}
Note that $w$ and $z$ are continuous and positive in $\overline{\mathbb R^n_+}$ with a possible singularity at the origin. They decay at infinity as $u(0)|x|^{2-n-b}$ and $v(0)|x|^{2-n-b}$, respectively. We obtain $w$, $z\in L^{\frac{2n}{n+b-2}}(\mathbb{R}^n_+ \setminus B_r(0)) \cap L^\infty (\mathbb{R}^n_+ \setminus B_r(0))$ for any $r>0$.
\begin{lemma}\label{lem4}
\begin{align*}
w(x)-w_\lambda(x)=& \int_{\Sigma_\lambda} (K_b(x,y)-K_b(x^\lambda,y)) y_n^b \times \\
&[h(z(y)|y|^{n+b-2})z(y)^{\tilde{\tau}}- h(z(y^\lambda) |y^{\lambda}|^{n+b-2}) z(y^\lambda)^{\tilde{\tau}} ] dy,
\end{align*}
\begin{align*}
z(x)-z_\lambda(x)=& \int_{\Sigma_\lambda} (K_b(x,y)-K_b(x^\lambda,y)) y_n^b \times \\
&[k(w(y)|y|^{n+b-2})w(y)^{\tilde{\tau}}- k(w(y^\lambda) |y^{\lambda}|^{n+b-2}) w(y^\lambda)^{\tilde{\tau}} ] dy.
\end{align*}
\end{lemma}
\noindent \emph{Proof.} The proof is similar to that of Lemma \ref{lem1} and is omitted. $\Box$
\begin{lemma}\label{lem5}
Under the conditions of Theorem \ref{thm1}, there exists $\lambda_0 > 0$ such that for all $\lambda \geq \lambda_0$, we have $w_\lambda(x) \geq w(x)$ and $z_\lambda(x) \geq z(x)$ for all $x \in \Sigma_\lambda$.
\end{lemma}
\noindent \emph{Proof.} \ The proof is similar to that of Lemma \ref{lem2}. We denote by $\Sigma^w_\lambda = \{y\in \Sigma_\lambda \ | \ w(y) > w_\lambda(y)\}$ and $\Sigma^z_\lambda = \{y\in \Sigma_\lambda \ | \ z(y) > z_\lambda(y)\}$. By the Hardy-Littlewood-Sobolev inequality, we get
\begin{align*}
\|w-w_\lambda\|_{L^q(\Sigma^w_\lambda)} &\leq C\left(\int_{\Sigma_\lambda^z}z^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}}\|z-z_\lambda\|_{L^q(\Sigma^z_\lambda)},\\
\|z-z_\lambda\|_{L^q(\Sigma^z_\lambda)} &\leq C\left(\int_{\Sigma_\lambda^w}w^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}}\|w-w_\lambda\|_{L^q(\Sigma^w_\lambda)},
\end{align*}
where $q>\frac{n}{n-2}$. The above two inequalities imply
\begin{align}
&\|w-w_\lambda\|_{L^q(\Sigma^w_\lambda)} \nonumber \\
&\leq C\left(\int_{\Sigma_\lambda^w}w^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}}\left(\int_{\Sigma_\lambda^z}z^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}}\|w-w_\lambda\|_{L^q(\Sigma^w_\lambda)}, \label{s15}
\end{align}
and
\begin{align}
&\|z-z_\lambda\|_{L^q(\Sigma^z_\lambda)} \nonumber \\
&\leq C\left(\int_{\Sigma_\lambda^w}w^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}}\left(\int_{\Sigma_\lambda^z}z^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}}\|z-z_\lambda\|_{L^q(\Sigma^z_\lambda)}. \label{s16}
\end{align}
Since $w(x)$, $z(x)\in L^{\frac{2n}{n+b-2}}(\mathbb{R}^n_+ \setminus B_r(0))$ for any $r>0$, we choose $\lambda_0$ large enough such that
\[
C\left(\int_{\Sigma_\lambda}w^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}}\left(\int_{\Sigma_\lambda}z^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}} \leq \frac{1}{2}
\]
for all $\lambda \geq \lambda_0$. Then we have
\[
\|w-w_\lambda\|_{L^q(\Sigma^w_\lambda)} = \|z-z_\lambda\|_{L^q(\Sigma^z_\lambda)} = 0
\]
for all $\lambda \geq \lambda_0$. So we get the desired result.
$\Box$

We define $\lambda_1 = \inf \{ \lambda \ | \ w(x)\leq w_\mu(x) \ \text{and} \ z(x)\leq z_\mu(x), \ \forall \mu \geq \lambda, \ x\in\Sigma_\mu \}$.
\begin{lemma}\label{lem6}
If $\lambda_1>0$, then $w(x)\equiv w_{\lambda_1}(x)$ and $z(x)\equiv z_{\lambda_1}(x)$ for all $x\in \Sigma_{\lambda_1}$.
\end{lemma}
\noindent \emph{Proof.} \ Suppose that $z(x)\not\equiv z_{\lambda_1}(x)$. Then we can infer from Lemma \ref{lem4} that $w< w_{\lambda_1}$ in the interior of $\Sigma_{\lambda_1}$, and this further implies $z< z_{\lambda_1}$ in the same region. If one can show that, for $\varepsilon$ sufficiently small, for all $\lambda \in (\lambda_1-\varepsilon,\lambda_1]$ there holds
\begin{equation}\label{s17}
C\left(\int_{\Sigma_\lambda^w}w^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}}\left(\int_{\Sigma_\lambda^z}z^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}} \leq \frac{1}{2},
\end{equation}
then by \eqref{s15} and \eqref{s16}, we have
\[
\|w-w_\lambda\|_{L^q(\Sigma^w_\lambda)} = \|z-z_\lambda\|_{L^q(\Sigma^z_\lambda)} = 0,
\]
and therefore $\Sigma^w_\lambda$ and $\Sigma^z_\lambda$ must have measure zero. Since $w$ and $z$ are continuous, we deduce that $\Sigma^w_\lambda$ and $\Sigma^z_\lambda$ are empty. This contradicts the definition of $\lambda_1$. Now we verify inequality \eqref{s17}. Since $w$, $z \in L^{\frac{2n}{n+b-2}}(\mathbb{R}^n_+ \setminus B_r(0))$, for any small $\eta >0$, we can choose $R$ sufficiently large so that
\[
C\left(\int_{\mathbb{R}^n_+ \backslash B_R}w^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}} \leq \eta, \qquad C\left(\int_{\mathbb{R}^n_+ \backslash B_R}z^{\frac{2n}{n+b-2}}(y)dy\right)^{\frac{2}{n}} \leq \eta.
\]
We fix this $R$ and then show that the measures of $\Sigma^w_\lambda \cap B_R$ and $\Sigma^z_\lambda \cap B_R$ are sufficiently small for $\lambda$ close to $\lambda_1$. The rest of the proof is similar to that of Lemma \ref{lem3} and is omitted. $\Box$ \\
\textbf{Proof of Theorem \ref{thm1}} By Lemma \ref{lem4}, Lemma \ref{lem5} and Lemma \ref{lem6}, we choose the $x_1$ direction and prove that $w$, $z$ are symmetric in the $x_1$ direction. If $\lambda_1 >0$, then $w$, $z$ are symmetric in the direction of $x_1$. If $\lambda_1 =0$, then we conclude by continuity that $w(x) \leq w_0(x)$, $z(x) \leq z_0(x)$ for all $x \in \Sigma_0$. We can also start the moving plane from $-\infty$ and find a corresponding $\lambda^\prime_1$. If $\lambda^\prime_1 =0$, then we get $w_0(x) \leq w(x)$, $z_0(x) \leq z(x)$ for $x \in \Sigma_0$. So $w(x)$ and $z(x)$ are symmetric with respect to $T_0$. If $\lambda^\prime_1<0$, an analogue of Lemma \ref{lem6} shows that $w$ and $z$ are symmetric with respect to $T_{\lambda^\prime_1}$. For the $x_2,x_3, \cdots, x_{n-1}$ directions, we can carry out the same procedure as above. There are two cases for solutions.
\noindent \textbf{Case 1} \ If $\lambda_1 >0$ or $\lambda^\prime_1 <0$ in some direction for some $x_0 \in \partial\mathbb{R}^n_+$, we have $w=w_{\lambda_1}$, $z=z_{\lambda_1}$ or $w=w_{\lambda^\prime_1}$, $z=z_{\lambda^\prime_1}$. Since $h$ and $k$ are non-increasing, by Lemma \ref{lem4} we get $h(t) = c_1$ and $k(t)=c_2$, where $c_1$ and $c_2$ are positive constants. We have $f(t)=c_1 t^{\frac{n+b+2}{n+b-2}}$ and $g(t)=c_2 t^{\frac{n+b+2}{n+b-2}}$.
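Let us also note in passing (this is a standard fact, recorded here only for convenience) that the Kelvin transform used above is an involution: writing $(K\phi)(x)=|x|^{-(n+b-2)}\phi\left(\frac{x}{|x|^2}\right)$, one has $\left|\frac{x}{|x|^2}\right|=\frac{1}{|x|}$ and $\frac{x/|x|^2}{|x/|x|^2|^2}=x$, whence
\[
(K(K\phi))(x)=|x|^{-(n+b-2)}\,\Bigl|\tfrac{x}{|x|^2}\Bigr|^{-(n+b-2)}\phi(x)=\phi(x).
\]
In particular $u(x)=|x|^{-(n+b-2)}w\left(\frac{x}{|x|^2}\right)$ and $v(x)=|x|^{-(n+b-2)}z\left(\frac{x}{|x|^2}\right)$, so the behavior of $u$ and $v$ at infinity is read off from that of $w$ and $z$ near the origin.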
Since $w$ and $z$ are regular at the origin, the asymptotic behaviors of $u$ and $v$ at infinity are $u \sim \frac{1}{|x|^{n+b-2}}$, $v \sim \frac{1}{|x|^{n+b-2}}$. Similarly, we can prove $u$, $v \in C^2(\overline{\mathbb{R}^n_+})$. Thus, we take the transformation
\[
\bar{u}(x',x_n,x_{n+1}) = u\left(x',\sqrt{x^2_n+x^2_{n+1}}\right) \quad \text{and} \quad \bar{v}(x',x_n,x_{n+1}) = v\left(x',\sqrt{x^2_n+x^2_{n+1}}\right).
\]
It is easy to verify that $\bar{u}$ and $\bar{v}$ satisfy the equations
\begin{equation}\label{s20}
\left \{
\begin{aligned}
\Delta_{n+1} \bar{u} + \frac{b-1}{x_{n+1}} \bar{u}_{n+1} + c_1 \bar{v}(x)^{\tilde{\tau}} & = 0 \quad \text{in}\quad \mathbb R^{n+1}_+ \\
\frac{\partial \bar{u}}{\partial x_{n+1}} & = 0 \quad \text{on} \quad \partial \mathbb R^{n+1}_+,
\end{aligned}
\right.
\end{equation}
\begin{equation}\label{s21}
\left \{
\begin{aligned}
\Delta_{n+1} \bar{v} + \frac{b-1}{x_{n+1}} \bar{v}_{n+1} + c_2 \bar{u}(x)^{\tilde{\tau}} &= 0 \quad \text{in}\quad \mathbb R^{n+1}_+ \\
\frac{\partial \bar{v}}{\partial x_{n+1}} & = 0 \quad \text{on} \quad \partial \mathbb R^{n+1}_+.
\end{aligned}
\right.
\end{equation}
Exactly as in the proof of Lemma \ref{lem7}, using the asymptotic behavior of $\bar{u}$ and $\bar{v}$ at infinity, one can easily deduce that
\begin{equation}\label{s18}
\bar{u}(x) = c_1 \int_{\mathbb R^{n+1}_+} K_{b-1}(x,y) y^{b-1}_{n+1} \bar{v}(y)^{\tilde{\tau}} dy,
\end{equation}
\begin{equation}\label{s19}
\bar{v}(x) = c_2 \int_{\mathbb R^{n+1}_+} K_{b-1}(x,y) y^{b-1}_{n+1} \bar{u}(y)^{\tilde{\tau}} dy.
\end{equation}
This establishes the equivalence between the integral equations \eqref{s18}, \eqref{s19} and the corresponding differential equations \eqref{s20}, \eqref{s21}. Using the above moving plane method for the $(n+1)$-dimensional integral equations \eqref{s18} and \eqref{s19}, we obtain that $\bar{u}$ and $\bar{v}$ are radially symmetric in the directions $x_1,\cdots, x_{n-1},x_n$. There exists $p \in \partial\mathbb{R}^{n}_+$ such that
\[
u(x',|x_n|)= \bar{u}(x',x_n,0)=\bar{u}(\bar{x}',\bar{x}_n,0) = u(\bar{x}',|\bar{x}_n|)
\]
and
\[
v(x',|x_n|)= \bar{v}(x',x_n,0)=\bar{v}(\bar{x}',\bar{x}_n,0) = v(\bar{x}',|\bar{x}_n|)
\]
if $\sum^{n}_{i=1}|x_i-p_{i}|^2 = \sum^n_{i=1}|\bar{x}_i-p_{i}|^2$. Thus, $u$ and $v$ are radially symmetric about $p$. By the results of \cite{ChenLiOu05} and \cite{ChenLiOu06} we obtain $u(x)=\left(\frac{ca}{a^2+|x-p|^2}\right)^{\frac{n+b-2}{2}}$ and $v(x)=\left(\frac{\tilde{c}a}{a^2+|x-p|^2}\right)^{\frac{n+b-2}{2}}$, where $p \in \partial\mathbb{R}^{n}_+$.
\noindent \textbf{Case 2} \ (The idea of the proof comes from Theorem 3.2 of \cite{EM93}.) Now we suppose that $\lambda_1=\lambda^\prime_1=0$ for the $x_1,x_2, \cdots, x_{n-1}$ directions and for all $x_0\in \partial\mathbb{R}^n_+$; then $w$, $z$ and hence $u$, $v$ are radially symmetric in the $x_1,x_2, \cdots, x_{n-1}$ directions. We define $S_C := \{x\in \mathbb{R}^n_+ \ | \ x_n= C\}$. Similarly, we obtain that the solutions $u$ and $v$ depend only on $x_n$. Set $\tilde{u}(x_n) := u(x)$, $\tilde{v}(x_n) := v(x)$ and we have:
\begin{equation}\label{o1}
\left\{
\begin{aligned}
x^b_n \tilde{u}^{\prime \prime} + b x^{b-1}_n \tilde{u}^{\prime} +x^b_n f(\tilde{v}) &=0 \\
\tilde{u}'(0)&=0,
\end{aligned}
\right.
\end{equation}
\begin{equation}\label{o2}
\left\{
\begin{aligned}
x^b_n \tilde{v}^{\prime \prime} + b x^{b-1}_n \tilde{v}^{\prime} +x^b_n g(\tilde{u}) &=0 \\
\tilde{v}'(0)&=0.
\end{aligned}
\right.
\end{equation}
Exactly as in the proof of Lemma \ref{lem9}, using $f(t)\geq 0$ and $g(t)\geq 0$ we deduce that
\begin{equation}\label{o3}
t \tilde{u}'(t) + (b-1)\tilde{u}(t) \geq 0,
\end{equation}
\begin{equation}\label{o4}
t\tilde{v}'(t) + (b-1)\tilde{v}(t) \geq 0.
\end{equation}
Similarly to the proof of Lemma \ref{lem10}, we obtain
\[
(b-1) \tilde{u}(t) \geq -t\tilde{u}'(t) \geq C t^2 \tilde{v}^{\tilde{\tau}}(t),
\]
\[
(b-1) \tilde{v}(t) \geq -t\tilde{v}'(t) \geq C t^2 \tilde{u}^{\tilde{\tau}}(t).
\]
Solving these inequalities, we get for all $t>0$,
\begin{equation}\label{o5}
\tilde{u}(t) \leq Ct^{-\frac{n+b-2}{2}}, \qquad \tilde{v}(t) \leq Ct^{-\frac{n+b-2}{2}}.
\end{equation}
By \eqref{o3} and \eqref{o4}, we have
\begin{equation}\label{o6}
|\tilde{u}'(t)| \leq Ct^{-\frac{n+b}{2}}, \qquad |\tilde{v}'(t)| \leq Ct^{-\frac{n+b}{2}}.
\end{equation}
Multiplying \eqref{o1} by $\tilde{v}$ and \eqref{o2} by $\tilde{u}$ and integrating by parts on $(0,t)$, we get
\begin{equation}\label{o7}
t^b\tilde{u}'(t)\tilde{v}(t) - \int^t_0 x^b_n\tilde{u}'\tilde{v}' dx_n = - \int^t_0 x^b_n f(\tilde{v})\tilde{v}dx_n,
\end{equation}
\begin{equation}\label{o8}
t^b\tilde{u}(t)\tilde{v}'(t) - \int^t_0 x^b_n\tilde{u}'\tilde{v}' dx_n = - \int^t_0 x^b_n g(\tilde{u})\tilde{u}dx_n.
\end{equation}
Using \eqref{o5} and \eqref{o6}, we deduce that
\begin{equation}
\lim_{t\rightarrow \infty} t^b\tilde{u}'(t)\tilde{v}(t) = \lim_{t\rightarrow \infty} t^b\tilde{u}(t)\tilde{v}'(t) =0,
\end{equation}
\[
\int^{\infty}_0 x^b_n\tilde{u}'\tilde{v}' dx_n < \infty.
\]
Hence by \eqref{o7} and \eqref{o8}, we have
\begin{equation}\label{o9}
\int^{\infty}_0 x^b_n\tilde{u}'\tilde{v}' dx_n = \int^{\infty}_0 x^b_n f(\tilde{v})\tilde{v} dx_n =\int^{\infty}_0 x^b_n g(\tilde{u})\tilde{u}dx_n.
\end{equation}
We define $F(t)=\int^t_0 f(s)ds$ and $G(t)=\int^t_0 g(s)ds$. By multiplying \eqref{o1} by $x_n\tilde{v}'$ and \eqref{o2} by $x_n\tilde{u}'$ and integrating by parts on $(0,t)$, we obtain
\[
t^{b+1}\tilde{u}'(t)\tilde{v}'(t) - \int^t_0 x^b_n\tilde{u}'\tilde{v}' dx_n - \int^t_0 x^{b+1}_n \tilde{u}'\tilde{v}'' dx_n = -t^{b+1}F(\tilde{v}) + (b+1)\int^t_0 x^b_n F(\tilde{v})dx_n,
\]
\[
\int^t_0 x^{b+1}_n \tilde{u}'\tilde{v}'' dx_n + b \int^t_0 x^b_n\tilde{u}'\tilde{v}' dx_n = -t^{b+1}G(\tilde{u}) + (b+1)\int^t_0 x^b_n G(\tilde{u})dx_n.
\]
Hence, we get
\[
t^{b+1}\tilde{u}'(t)\tilde{v}'(t) + (b-1) \int^t_0 x^b_n\tilde{u}'\tilde{v}' dx_n + t^{b+1}(F(\tilde{v}) + G(\tilde{u})) - (b+1)\int^t_0 x^b_n (F(\tilde{v}) + G(\tilde{u})) dx_n = 0 .
\]
As in the proof of Proposition \ref{pro1}, there is a sequence $t_k \rightarrow \infty$ such that
\begin{equation}\label{l13}
t^{b+1}_k F(\tilde{v}(t_k)) \rightarrow 0, \qquad t^{b+1}_k G(\tilde{u}(t_k)) \rightarrow 0.
\end{equation}
Using \eqref{o5} and \eqref{o6}, we have
\begin{equation}\label{l15}
\lim_{t\rightarrow \infty} t^{b+1}\tilde{u}'(t)\tilde{v}'(t) =0.
\end{equation}
It is easy to see that
\begin{equation}\label{l14}
F(\tilde{v}(t)) \geq \frac{1}{\tilde{\tau}+1}f(\tilde{v}(t))\tilde{v}(t), \qquad G(\tilde{u}(t)) \geq \frac{1}{\tilde{\tau}+1}g(\tilde{u}(t))\tilde{u}(t).
\end{equation}
By taking $t=t_k$, we have
\[
t^{b+1}_k\tilde{u}'(t_k)\tilde{v}'(t_k) + (b-1) \int^{t_k}_0 x^b_n\tilde{u}'\tilde{v}' dx_n + t^{b+1}_k (F(\tilde{v}) + G(\tilde{u})) - (b+1)\int^{t_k}_0 x^b_n (F(\tilde{v}) + G(\tilde{u})) dx_n = 0 .
\]
Letting $t_k \rightarrow \infty$ and using \eqref{o9}, \eqref{l13}, \eqref{l15} and \eqref{l14}, we get
\[
\left(\frac{b-1}{2}-\frac{b+1}{\tilde{\tau}+1}\right) \int^{\infty}_0 (x^b_n f(\tilde{v})\tilde{v} + x^b_n g(\tilde{u})\tilde{u}) dx_n \geq 0.
\]
Since $\frac{b-1}{2} - \frac{b+1}{\tilde{\tau}+1} < 0$, we get a contradiction. We obtain $u(x)\equiv0$ and $v(x)\equiv0$. $\Box$\\
\noindent \textbf{Acknowledgments.} \ The work of the author is sponsored by Shanghai Rising-Star Program 19QA1400900. The author would like to thank Professor Huang Genggeng for his valuable suggestions. The author was supported by National Natural Science Foundation of China under Grant 11871160.

{\em Address and E-mail:}

{\em Yating Niu}

{\em School of Mathematical Sciences}

{\em Fudan University}

{\em [email protected]}
\end{document}
\begin{document}
\begin{merci}
This work began while both authors shared the hospitality of Centre de Recherches Math\'ematiques (Montr\'eal) during the theme year ``Analysis in Number Theory'' (first semester of 2006). We would like to thank C.~David, H.~Darmon and A.~Granville for their invitation. This work was essentially completed in September 2006 in CIRM (Luminy) on the occasion of J.-M.~Deshouillers' sixtieth birthday. We would like to wish him the best.
\end{merci}
\section{Introduction and statement of the results}
\subsection{Description of the families of $L$-functions studied}
The purpose of this paper is to compute various statistics associated to low-lying zeros of several families of symmetric power $L$-functions in the level aspect. First of all, we give a short description of these families. To any primitive holomorphic cusp form $f$ of prime level $q$ and even weight \footnote{In this paper, the weight $\kappa$ is a \emph{fixed} even integer and the level $q$ goes to infinity among the prime numbers.} $\kappa\geq 2$ (see \S~\ref{sec_autoback} for the automorphic background), say $f\in\prim{\kappa}{q}$, one can associate its $r$-th symmetric power $L$-function denoted by $L(\sym^rf,s)$ for any integer $r\geq 1$. It is given by an explicit absolutely convergent Euler product of degree $r+1$ on $\Re{s}>1$ (see \S~\ref{sec_sympow}). The completed $L$-function is defined by
\begin{equation*}
\Lambda(\sym^rf,s)\coloneqq\left(q^{r}\right)^{s/2}L_\infty(\sym^rf,s)L(\sym^rf,s)
\end{equation*}
where $L_\infty(\sym^rf,s)$ is a product of $r+1$ explicit $\fGamma_{\R}$-factors (see \S~\ref{sec_sympow}) and $q^r$ is the arithmetic conductor. We will need some control on the analytic behaviour of this function. Unfortunately, such information is not currently known in all generality. We sum up our main assumption in the following statement.
\label{hypohypo}
\begin{hypothesis}
The function $\Lambda\left(\sym^rf,s\right)$ is a \emph{completed $L$-function} in the sense that it satisfies the following \emph{nice} analytic properties:
\begin{itemize}
\item it can be extended to a holomorphic function of order $1$ on $\C$,
\item it satisfies a functional equation of the shape
\[
\Lambda(\sym^rf,s)=\epsilon\left(\sym^rf\right)\Lambda(\sym^rf,1-s)
\]
where the sign $\epsilon\left(\sym^rf\right)=\pm 1$ of the functional equation is given by
\begin{equation}\label{valueofsign}
\epsilon\left(\sym^rf\right)\coloneqq
\begin{cases}
+1 & \text{if $r$ is even},\\
\epsilon_f(q)\times\epsilon(\kappa,r) & \text{otherwise}
\end{cases}
\end{equation}
with
\[
\epsilon(\kappa,r)\coloneqq i^{\left(\frac{r+1}{2}\right)^2(\kappa-1)+\frac{r+1}{2}}=
\begin{cases}
i^{\kappa} & \text{if $\;r\equiv 1\pmod{8}$,} \\
-1 & \text{if $\;r\equiv 3\pmod{8}$}, \\
-i^{\kappa} & \text{if $\;r\equiv 5\pmod{8}$}, \\
+1 & \text{if $\;r\equiv 7\pmod{8}$}
\end{cases}
\]
and $\epsilon_f(q)=\pm 1$ is defined in \eqref{eq_signe} and only depends on $f$ and $q$.
\end{itemize}
\end{hypothesis}
\begin{remint}
Hypothesis $\Nice(r,f)$ is known for $r=1$ (E.~Hecke \cite{MR1513069,MR1513122,MR1513142}), $r=2$ thanks to the work of S.~Gelbart and H.~Jacquet \cite{GeJa} and $r=3,4$ from the works of H.~Kim and F.~Shahidi \cite{KiSh1,KiSh2,Ki}.
\end{remint}
We aim at studying the low-lying zeros for the family of $L$-functions given by
\begin{equation*}
\mathcal{F}_r\coloneqq \bigcup_{\text{$q$ prime}}\left\{L(\sym^rf,s), f\in\prim{\kappa}{q}\right\}
\end{equation*}
for any integer $r\geq 1$.
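As a quick sanity check of the sign computation in \eqref{valueofsign} (recall that $\kappa$ is even), take for instance $r\equiv 3\pmod{8}$: then $\frac{r+1}{2}\equiv 2\pmod{4}$, so that $\left(\frac{r+1}{2}\right)^2\equiv 0\pmod{4}$ and
\[
\left(\frac{r+1}{2}\right)^2(\kappa-1)+\frac{r+1}{2}\equiv 2\pmod{4},
\]
hence indeed $\epsilon(\kappa,r)=i^2=-1$; the three other congruence classes modulo $8$ are checked in the same way.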
Note that when $r$ is even, the sign of the functional equation of any $L(\sym^rf,s)$ is constant, equal to $+1$, but when $r$ is odd, this is definitely not the case. As a consequence, it is very natural to understand the low-lying zeros for the subfamilies given by
\begin{equation*}
\mathcal{F}_r^{\epsilon}\coloneqq \bigcup_{\text{$q$ prime}}\left\{L(\sym^rf,s), \;f\in\prim{\kappa}{q}, \epsilon\left(\sym^rf\right)=\epsilon\right\}
\end{equation*}
for any odd integer $r\geq 1$ and for $\epsilon=\pm 1$.
\subsection{Symmetry type of these families}
One of the purposes of this work is to determine the symmetry type of the families $\mathcal{F}_r$ and $\mathcal{F}_r^{\epsilon}$ for $\epsilon=\pm 1$ and for any integer $r\geq 1$ (see \S~\ref{sec_densres} for the background on symmetry types). The following theorem is a quick summary of the symmetry types obtained.
\begin{theoint}\label{thm_A}
Let $r\geq 1$ be any integer and $\epsilon=\pm 1$. We assume that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any primitive holomorphic cusp form of level $q$ and even weight $\kappa\geq 2$. The symmetry group $G(\mathcal{F}_r)$ of $\mathcal{F}_r$ is given by
\begin{equation*}
G(\mathcal{F}_r)=\begin{cases}
Sp & \text{ if $r$ is even,}\\
O & \text{ otherwise.}
\end{cases}
\end{equation*}
If $r$ is odd then the symmetry group $G(\mathcal{F}_r^\epsilon)$ of $\mathcal{F}_r^\epsilon$ is given by
\begin{equation*}
G(\mathcal{F}_r^\epsilon)=\begin{cases}
SO(\mathrm{even}) & \text{ if $\epsilon=+1$,}\\
SO(\mathrm{odd}) & \text{ otherwise.}
\end{cases}
\end{equation*}
\end{theoint}
\begin{remint}
\label{remark2}
It follows in particular from the value of $\epsilon\left(\sym^rf\right)$ given in \eqref{valueofsign} that, if $r$ is even, then $\sym^rf$ does not have the same symmetry type as $f$ and, if $r$ is odd, then $f$ and $\sym^rf$ have the same symmetry type if and only if
\begin{align*}
r \equiv 1 \pmod{8}\; \text{ and }\; \kappa\equiv 0\pmod{4}
\shortintertext{or}
r \equiv 5 \pmod{8}\; \text{ and }\; \kappa\equiv 2\pmod{4}
\shortintertext{or}
r \equiv 7 \pmod{8}.
\end{align*}
\end{remint}
\begin{remint}
Note that we do not assume any Generalised Riemann Hypothesis for the symmetric power $L$-functions.
\end{remint}
In order to prove Theorem~\ref{thm_A}, we compute either the (signed) asymptotic expectation of the one-level density or the (signed) asymptotic expectation of the two-level density. The results are given in the next two sections, in which $\epsilon=\pm 1$, $\nu$ will always be a positive real number, $\Phi, \Phi_1$ and $\Phi_2$ will always stand for even Schwartz functions whose Fourier transforms $\widehat{\Phi}, \widehat{\Phi_1}$ and $\widehat{\Phi_2}$ are compactly supported in $[-\nu,+\nu]$ and $f$ will always be a primitive holomorphic cusp form of prime level $q$ and even weight $\kappa\geq 2$ for which hypothesis $\Nice(r,f)$ holds. We refer to \S~\ref{sec_proba} for the probabilistic background.
\subsubsection{(Signed) asymptotic expectation of the one-level density}
The \emph{one-level density} (relative to $\Phi$) of $\sym^rf$ is defined by
\begin{equation*}
D_{1,q}[\Phi;r](f) \coloneqq \sum_{\rho,\;\Lambda(\sym^rf,\rho)=0} \Phi\left(\frac{\log{\left(q^r\right)}}{2i\pi}\left( \Re{\rho}-\frac{1}{2}+i\Im{\rho} \right)\right)
\end{equation*}
where the sum is over the non-trivial zeros $\rho$ of $L(\sym^rf,s)$ with multiplicities.
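Let us add one clarifying remark (a standard observation, recorded here for completeness): since $\widehat{\Phi}$ is compactly supported, $\Phi$ extends to an entire function, so that $D_{1,q}[\Phi;r](f)$ is well defined even for zeros off the critical line, which is consistent with the fact that no Generalised Riemann Hypothesis is assumed. For a zero on the critical line, $\rho=\frac{1}{2}+i\gamma$, the argument of $\Phi$ is simply the real number
\[
\frac{\log{\left(q^r\right)}}{2i\pi}\left(i\gamma\right)=\frac{\log{\left(q^r\right)}}{2\pi}\,\gamma,
\]
namely $\gamma$ rescaled by $\frac{\log{\left(q^r\right)}}{2\pi}$.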
The \emph{asymptotic expectation} of the one-level density is by definition
\begin{equation*}
\lim_{\substack{q\;\text{prime} \\ q\to+\infty}} \smashoperator[r]{ \sum_{f\in\prim{\kappa}{q}} }\omega_q(f)D_{1,q}[\Phi;r](f)
\end{equation*}
where $\omega_q(f)$ is the harmonic weight defined in \eqref{eq_facomeg} and similarly the \emph{signed asymptotic expectation} of the one-level density is by definition
\begin{equation*}
\lim_{\substack{q\;\text{prime} \\ q\to+\infty}} 2 \smashoperator[r]{ \sum_{\substack{f\in\prim{\kappa}{q} \\ \epsilon\left(\sym^rf\right)=\epsilon}} } \omega_q(f)D_{1,q}[\Phi;r](f)
\end{equation*}
when $r$ is odd.
\begin{theoint}\label{thm_B}
Let $r\geq 1$ be any integer and $\epsilon=\pm 1$. We assume that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any primitive holomorphic cusp form of level $q$ and even weight $\kappa\geq 2$ and also that $\theta$ is admissible (see hypothesis $\Hy_2(\theta)$ page~\pageref{hyp_RPS}). Let
\[
\nu_{1,\mathrm{max}}(r,\kappa,\theta)\coloneqq \left(1-\frac{1}{2(\kappa-2\theta)}\right)\frac{2}{r^2}.
\]
If $\nu<\nu_{1,\mathrm{max}}(r,\kappa,\theta)$ then the asymptotic expectation of the one-level density is
\begin{equation*}
\widehat{\Phi}(0)+\frac{(-1)^{r+1}}{2}\Phi(0).
\end{equation*}
Let
\[
\nu_{1,\mathrm{max}}^\epsilon(r,\kappa,\theta)\coloneqq \inf{\left(\nu_{1,\mathrm{max}}(r,\kappa,\theta),\frac{3}{r(r+2)}\right)}.
\]
If $r$ is odd and $\nu<\nu_{1,\mathrm{max}}^\epsilon(r,\kappa,\theta)$ then the signed asymptotic expectation of the one-level density is
\begin{equation*}
\widehat{\Phi}(0)+\frac{(-1)^{r+1}}{2}\Phi(0).
\end{equation*}
\end{theoint}
\begin{remint}
\label{remark4}
The first part of Theorem~\ref{thm_B} reveals that the symmetry type of $\mathcal{F}_r$ is
\[
G(\mathcal{F}_r)=
\begin{cases}
Sp & \text{if $r$ is even,}\\
O & \text{if $r=1$,}\\
SO(\mathrm{even}) \text{ or } O \text{ or } SO(\mathrm{odd}) & \text{if $r\geq 3$ is odd.}
\end{cases}
\]
We cannot decide between the three orthogonal groups when $r\geq 3$ is odd since in this case $\nu_{1,\mathrm{max}}(r,\kappa,\theta)<1$, but the computation of the two-level densities will enable us to decide. Note also that we go beyond the support $[-1,1]$ when $r=1$, as Iwaniec, Luo \& Sarnak \cite{IwLuSa} (Theorem 1.1) do, but without doing any subtle arithmetic analysis of Kloosterman sums. Also, A.~G{\"u}loglu in \cite[Theorem 1.2]{Gu} established a density result for the same family of $L$-functions but when the weight $\kappa$ goes to infinity and the level $q$ is fixed. It turns out that we recover the same constraint on $\nu$ when $r$ is even but we get a better result when $r$ is odd. This can be explained by the fact that the analytic conductor of any $L(\sym^rf,s)$ with $f$ in $\prim{\kappa}{q}$, which is of size
\[
q^r\times
\begin{cases}
\kappa^{r} & \text{if $r$ is even}\\
\kappa^{r+1} & \text{otherwise,}
\end{cases}
\]
is slightly larger in his case than in ours when $r$ is odd.
\end{remint}
\begin{remint}
\label{remark5}
The second part of Theorem~\ref{thm_B} reveals that if $r$ is odd and $\epsilon=\pm 1$ then the symmetry type of $\mathcal{F}_r^\epsilon$ is
\[
G(\mathcal{F}_r^{\epsilon})= SO(\mathrm{even}) \text{ or } O \text{ or } SO(\mathrm{odd}).
\]
Here $\nu$ is always strictly smaller than one and we are not able to recover the result of \cite[Theorem 1.1]{IwLuSa} without doing some arithmetic on Kloosterman sums.
\end{remint}
\subsubsection{Sketch of the proof}
We give here a sketch of the proof of the first part of Theorem \ref{thm_B}, namely we briefly explain how to determine the asymptotic expectation of the one-level density assuming that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any primitive holomorphic cusp form of level $q$ and even weight $\kappa\geq 2$ and also that $\theta$ is admissible. The first step consists in transforming the sum over the zeros of $\Lambda(\sym^rf,s)$ which occurs in $D_{1,q}[\Phi;r](f)$ into a sum over primes. This is done \emph{via} Riemann's explicit formula for symmetric power $L$-functions stated in Proposition \ref{explicit}, which leads to
\begin{equation*}
D_{1,q}[\Phi;r](f)=\widehat{\Phi}(0)+\frac{(-1)^{r+1}}{2}\Phi(0)+P_q^1[\Phi;r](f)+\sum_{m=0}^{r-1}(-1)^mP_q^2[\Phi;r,m](f)+o(1)
\end{equation*}
where
\begin{equation}\label{eq_cneser}
P_{q}^1[\Phi;r](f)\coloneqq-\frac{2}{\log{\left(q^r\right)}}\sum_{\substack{p\in\prem \\ p\nmid q}}\lambda_{f}\left(p^r\right)\frac{\log{p}}{\sqrt{p}}\widehat{\Phi}\left(\frac{\log{p}}{\log{\left(q^r\right)}}\right).
\end{equation}
The terms $P_q^2[\Phi;r,m](f)$ are also sums over primes which look like $P_{q}^1[\Phi;r](f)$ but can be forgotten in first approximation since they can be thought of as sums over squares of primes, which are easier to deal with. The second step consists in averaging over all the $f$ in $\prim{\kappa}{q}$. While doing this, the asymptotic expectation of the one-level density
\begin{equation*}
\widehat{\Phi}(0)+\frac{(-1)^{r+1}}{2}\Phi(0)
\end{equation*}
naturally appears and we need to show that
\begin{equation*}
-\frac{2}{\log{\left(q^r\right)}}\sum_{\substack{p\in\prem \\ p\nmid q}}\left(\sum_{f\in\prim{\kappa}{q}}\omega_q(f)\lambda_{f}\left(p^r\right)\right)\frac{\log{p}}{\sqrt{p}}\widehat{\Phi}\left(\frac{\log{p}}{\log{\left(q^r\right)}}\right)
\end{equation*}
is a remainder term provided that the support $\nu$ of $\widehat{\Phi}$ is small enough. We apply a suitable trace formula given in Proposition \ref{iwlusatr} in order to express the previous average of Hecke eigenvalues. We cannot directly apply Petersson's trace formula since there may be some old forms of level $q$, especially when the weight $\kappa$ is large. Nevertheless, these old forms are automatically of level $1$ since $q$ is prime and their contribution remains negligible. So, we have to bound
\begin{equation*}
-\frac{4\pi i^\kappa}{\log{\left(q^r\right)}}\sum_{\substack{p\in\prem \\ p\nmid q}}\sum_{\substack{c\geq 1 \\ q\mid c}}\frac{S(1,p^r;c)}{c}J_{\kappa-1}\left(\frac{4\pi\sqrt{p^r}}{c}\right)\frac{\log{p}}{\sqrt{p}}\widehat{\Phi}\left(\frac{\log{p}}{\log{\left(q^r\right)}}\right)
\end{equation*}
where $S(1,p^r;c)$ is a Kloosterman sum and which can be written as
\begin{equation*}
-\frac{4\pi i^\kappa}{\log{\left(q^r\right)}}\sum_{\substack{c\geq 1 \\ q\mid c}}\sum_{m\geq 1}a_m\frac{S(1,m;c)}{c}g(m;c)
\end{equation*}
where
\begin{equation*}
a_m\coloneqq\un_{[1,q^{r^2\nu}]}(m)\frac{\log{m}}{rm^{1/(2r)}}\times\begin{cases}
1 & \text{if $m=p^r$ for some prime $p\neq q$}, \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
and
\begin{equation*}
g(m;c)\coloneqq J_{\kappa-1}\left(\frac{4\pi\sqrt{m}}{c}\right)\widehat{\Phi}\left(\frac{\log{m}}{r\log{\left(q^r\right)}}\right).
\end{equation*}
We apply the large sieve inequality for Kloosterman sums given in Proposition \ref{sieve}.
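Before stating the resulting bound, let us make explicit the truncation appearing in the definition of $a_m$ (an elementary remark): since $\widehat{\Phi}$ is supported in $[-\nu,+\nu]$, the term indexed by the prime $p$ survives only if
\[
\frac{\log{p}}{\log{\left(q^r\right)}}\leq\nu,\qquad\text{i.e.}\qquad p\leq q^{r\nu},
\]
so that $m=p^r\leq q^{r^2\nu}$, which is exactly the condition encoded by $\un_{[1,q^{r^2\nu}]}(m)$.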
The large sieve inequality entails that if $\nu\leq 2/r^2$ then this quantity is bounded by
\begin{equation*}
\ll_\epsilon q^{\left(\frac{\kappa-1}{2}-\theta\right)(r^2\nu-2)+\epsilon}+q^{\left(\frac{\kappa}{2}-\theta\right)r^2\nu-\left(\kappa-\frac{1}{2}-2\theta\right)+\epsilon}.
\end{equation*}
This is an admissible error term if $\nu<\nu_{1,\mathrm{max}}(r,\kappa,\theta)$. We emphasise that we did not need any arithmetic analysis of Kloosterman sums to get this result. Of course, the power of the spectral theory of automorphic forms is hidden in the large sieve inequalities for Kloosterman sums.
\subsubsection{(Signed) asymptotic expectation of the two-level density}
The \emph{two-level density} of $\sym^rf$ (relative to $\Phi_1$ and $\Phi_2$) is defined by
\[
D_{2,q}[\Phi_1,\Phi_2;r](f) \coloneqq \sum_{\substack{(j_1,j_2)\in\mathcal{E}(f,r)^2\\ j_1\neq\pm j_2}} \Phi_1\left(\widehat{\rho}_{f,r}^{(j_1)}\right) \Phi_2\left(\widehat{\rho}_{f,r}^{(j_2)}\right).
\]
For more precision on the numbering of the zeros, we refer to \S~\ref{sec_explicit}. The \emph{asymptotic expectation} of the two-level density is by definition
\begin{equation*}
\lim_{\substack{q\;\text{prime} \\ q\to+\infty}} \smashoperator[r]{ \sum_{f\in\prim{\kappa}{q}} } \omega_q(f)D_{2,q}[\Phi_1,\Phi_2;r](f)
\end{equation*}
and similarly the \emph{signed asymptotic expectation} of the two-level density is by definition
\begin{equation*}
\lim_{\substack{q\;\text{prime} \\ q\to+\infty}} 2 \smashoperator[r]{ \sum_{\substack{f\in\prim{\kappa}{q} \\ \epsilon\left(\sym^rf\right)=\epsilon}} } \omega_q(f)D_{2,q}[\Phi_1,\Phi_2;r](f)
\end{equation*}
when $r$ is odd and $\epsilon=\pm 1$.
\begin{theoint}\label{thm_C}
Let $r\geq 1$ be any integer and $\epsilon=\pm 1$. We assume that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any primitive holomorphic cusp form of level $q$ and even weight $\kappa\geq 2$. If $\nu<1/r^2$ then the asymptotic expectation of the two-level density is
\begin{multline*}
\left[ \widehat{\Phi_1}(0) + \frac{(-1)^{r+1}}{2}\Phi_1(0) \right] \left[ \widehat{\Phi_2}(0) + \frac{(-1)^{r+1}}{2}\Phi_2(0) \right] \\
+2\int_\R\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u -2\widehat{\Phi_1\Phi_2}(0) +\left((-1)^{r}+\frac{ \un_{2\N+1}(r) }{2}\right)\Phi_1(0)\Phi_2(0).
\end{multline*}
If $r$ is odd and $\nu<1/(2r(r+2))$ then the signed asymptotic expectation of the two-level density is
\begin{multline*}
\left[ \widehat{\Phi_1}(0) + \frac{1}{2}\Phi_1(0) \right] \left[ \widehat{\Phi_2}(0) + \frac{1}{2}\Phi_2(0) \right] \\
+2\int_\R\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u -2\widehat{\Phi_1\Phi_2}(0) -\Phi_1(0)\Phi_2(0) \\
+ \un_{\{-1\}}(\epsilon) \Phi_1(0)\Phi_2(0).
\end{multline*}
\end{theoint}
\begin{remint}
\label{remark6}
We have just seen that the computation of the one-level density already reveals that the symmetry type of $\mathcal{F}_r$ is $Sp$ when $r$ is even. The asymptotic expectation of the two-level density also coincides with that of $Sp$ (see \cite[Theorem A.D.2.2]{KaSa} or \cite[Theorem 3.3]{Mil}). When $r\geq 3$ is odd, the first part of Theorem~\ref{thm_C} together with a result of Katz \& Sarnak (see \cite[Theorem A.D.2.2]{KaSa} or \cite[Theorem 3.2]{Mil}) implies that the symmetry type of $\mathcal{F}_r$ is $O$.
\end{remint}
\begin{remint}
The second part of Theorem~\ref{thm_C} and a result of Katz \& Sarnak (see \cite[Theorem A.D.2.2]{KaSa} or \cite[Theorem 3.2]{Mil}) imply that the symmetry type of $\mathcal{F}_r^{\epsilon}$ is as in Theorem~\ref{thm_A} for any odd integer $r\geq 1$ and $\epsilon=\pm 1$.
\end{remint}
In order to prove Theorem~\ref{thm_C}, we need to determine the \emph{asymptotic variance} of the one-level density, which is defined by
\begin{equation*}
\lim_{\substack{q\;\text{prime} \\ q\to+\infty}} \smashoperator[r]{ \sum_{f\in\prim{\kappa}{q}} } \omega_q(f)\left(D_{1,q}[\Phi;r](f)- \sum_{g\in\prim{\kappa}{q}}\omega_q(g)D_{1,q}[\Phi;r](g)\right)^2
\end{equation*}
and the \emph{signed asymptotic variance} of the one-level density, which is similarly defined by
\begin{equation*}
\lim_{\substack{q\;\text{prime} \\ q\to+\infty}} 2 \smashoperator[r]{ \sum_{\substack{f\in\prim{\kappa}{q} \\ \epsilon\left(\sym^rf\right)=\epsilon}} } \omega_q(f)\left(D_{1,q}[\Phi;r](f)- 2 \smashoperator[r]{ \sum_{\substack{g\in\prim{\kappa}{q} \\ \epsilon\left(\sym^rg\right)=\epsilon}} } \omega_q(g)D_{1,q}[\Phi;r](g)\right)^2
\end{equation*}
when $r$ is odd and $\epsilon=\pm 1$.
\begin{theoint}\label{thm_D}
Let $r\geq 1$ be any integer and $\epsilon=\pm 1$. We assume that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any primitive holomorphic cusp form of level $q$ and even weight $\kappa\geq 2$. If $\nu<1/r^2$ then the asymptotic variance of the one-level density is
\begin{equation*}
2\int_\R\abs{u}\widehat{\Phi}^2(u)\dd u.
\end{equation*}
If $r$ is odd and $\nu<1/(2r(r+2))$ then the signed asymptotic variance of the one-level density is
\begin{equation*}
2\int_\R\abs{u}\widehat{\Phi}^2(u)\dd u.
\end{equation*}
\end{theoint}
\subsection{Asymptotic moments of the one-level density}
Last but not least, we compute the \emph{asymptotic $m$-th moment} of the one-level density, which is defined by
\begin{equation*}
\lim_{\substack{q\;\text{prime} \\ q\to+\infty}} \smashoperator[r]{ \sum_{f\in\prim{\kappa}{q}} } \omega_q(f)\left(D_{1,q}[\Phi;r](f) -\sum_{g\in\prim{\kappa}{q}}\omega_q(g)D_{1,q}[\Phi;r](g)\right)^m
\end{equation*}
for any integer $m\geq 1$.
\begin{theoint}\label{thm_F}
Let $r\geq 1$ be any integer and $\epsilon=\pm 1$. We assume that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any primitive holomorphic cusp form of level $q$ and even weight $\kappa\geq 2$. If $m\nu<4\left/(r(r+2))\right.$ then the asymptotic $m$-th moment of the one-level density is
\begin{equation*}
\begin{cases}
0 & \text{if $m$ is odd,}\\
2\int_\R\abs{u}\widehat{\Phi}^2(u)\dd u\times\frac{m!}{2^{m/2}\left(\frac{m}{2}\right)!} & \text{otherwise.}
\end{cases}
\end{equation*}
\end{theoint}
\begin{remint}
This result is further evidence for mock-Gaussian behaviour (see \cite{MR2166468,HuRu,HuRu2} for instance).
\end{remint}
\begin{remint}
We compute the first asymptotic moments of the one-level density. These computations allow us to compute the asymptotic expectation of the first level-densities \cite[\S 1.2]{MR2166468}. We will use the specific case of the asymptotic expectation of the two-level density and the asymptotic variance in \S~\ref{sec_twoandvar}.
\end{remint}
Let us sketch the proof of Theorem \ref{thm_F} by explaining the origin of the main term.
We have to evaluate \begin{equation}\label{eq_lasomme} \sum_{ \substack{ 0\leq\ell\leq m \\ 0\leq\alpha\leq\ell } } \binom{m}{\ell}\binom{\ell}{\alpha}R(q)^{\ell-\alpha} \Eh[q]\left(P_q^1[\Phi;r]^{m-\ell}P_q^2[\Phi;r]^{\alpha}\right) \end{equation} where $P_q^1[\Phi;r]$ has been defined in \eqref{eq_cneser}, \[ P_q^2[\Phi;r](f)= -\frac{2}{\log(q^r)} \sum_{j=1}^{r} (-1)^{r-j} \sum_{ \substack{ p\in\prem\\% p\nmid q } } \lambda_f\left(p^{2j}\right) \frac{\log p}{p}\widehat{\Phi}\left(\frac{2\log p}{\log(q^r)}\right) \] and $R(q)$ satisfies \[ R(q)=O\left(\frac{1}{\log q}\right). \] The main term comes from the contribution $\ell=0$ in the sum \eqref{eq_lasomme}. Using a combinatorial lemma, we rewrite this main contribution as \[ \frac{(-2)^m}{\log^m{(q^r)}} \sum_{s=1}^m \sum_{\sigma\in P(m,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \Eh[q]\left( \prod_{u=1}^s \lambda_f\left(\widehat{p}_{i_{u}}^r\right)^{\varpi^{(\sigma)}_u} \right) \] where $P(m,s)$ is the set of surjective functions \[ \sigma \colon \{1,\dotsc,m\}\twoheadrightarrow \{1,\dotsc,s\} \] such that for any $j\in\{1,\dotsc,m\}$, either $\sigma(j)=1$ or there exists $k<j$ such that $\sigma(j)=\sigma(k)+1$ and for any $j\in\{1,\dotsc,s\}$ \[ \varpi_j^{(\sigma)}\coloneqq\#\sigma^{-1}(\{j\}). \] Here, $\left(\widehat{p}_i\right)_{i\geq 1}$ stands for the increasing sequence of prime numbers different from $q$. Linearising each $\lambda_f\left(\widehat{p}_{i_{u}}^r\right)^{\varpi^{(\sigma)}_u}$ in terms of the $\lambda_f\left(\widehat{p}_{i_{u}}^{j_u}\right)$, where $j_u$ runs over the integers in $[0,r\varpi^{(\sigma)}_u]$, and using a trace formula to prove that the only $\sigma\in P(m,s)$ leading to a main contribution are those satisfying $\varpi^{(\sigma)}_j=2$ for any $j\in\{1,\dotsc,s\}$, we have to estimate \begin{equation}\label{eq_homo} \frac{(-2)^m}{\log^m{(q^r)}} \sum_{s=1}^m \sum_{\substack{ \sigma\in P(m,s)\\% \forall j\in\{1,\dotsc,s\}, \varpi^{(\sigma)}_j=2 } } \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \prod_{u=1}^s \frac{ \log^2{(\widehat{p}_{i_u})} }{ \widehat{p}_{i_u} } \widehat{\Phi}^2\left( \frac{ \log\widehat{p}_{i_{u}} }{\log{(q^r)}} \right). \end{equation} This sum vanishes if $m$ is odd since \[ \sum_{j=1}^s\varpi_j^{(\sigma)}=m \] and it remains to prove the formula for $m$ even. In this case, and since we already computed the moment for $m=2$, we deduce from \eqref{eq_homo} that the main contribution is \[ \Eh[q]\left(P_q^1[\Phi;r]^2\right)^{m/2}\times \#\left\{ \sigma\in P(m,m/2) \colon \varpi^{(\sigma)}_j=2\ (\forall j) \right\} \] and we conclude by computing \[ \#\left\{ \sigma\in P(m,m/2) \colon \varpi^{(\sigma)}_j=2\ (\forall j) \right\} = \frac{m!}{2^{m/2}\left(\frac{m}{2}\right)!}. \] Proving that the other terms lead to error terms is done by implementing similar ideas, but requires -- especially for the double products (namely terms involving both $P_q^1$ and $P_q^2$) -- many more combinatorial technicalities. \subsection{Organisation of the paper} Section \ref{autoproba} contains the automorphic and probabilistic background needed to read this paper. In particular, we give there the precise definition of the symmetric power $L$-functions and the properties of Chebyshev polynomials which are useful in section \ref{momentt}. In section \ref{technical}, we describe the main technical ingredients of this work, namely large sieve inequalities for Kloosterman sums and Riemann's explicit formula for symmetric power $L$-functions.
In section \ref{one}, some standard facts about symmetry groups are recalled and the (signed) asymptotic expectation of the one-level density is computed. The (signed) asymptotic expectation of the two-level density and the (signed) asymptotic covariance and variance of the one-level density are determined in section \ref{two}, whereas the asymptotic moments of the one-level density are computed in section \ref{momentt}. Some well-known facts about Kloosterman sums are recalled in appendix \ref{klooster}. \begin{notations} We write $\prem$ for the set of prime numbers and the main parameter in this paper is a prime number $q$, called the level, which goes to infinity among $\prem$. Thus, if $f$ and $g$ are some $\C$-valued functions of a real variable then the notations $f(q)\ll_{A}g(q)$ or $f(q)=O_A(g(q))$ mean that $\abs{f(q)}$ is smaller than a ``constant'' depending only on $A$ times $g(q)$, at least for $q$ a large enough prime number and, similarly, $f(q)=o(1)$ means that $f(q)\rightarrow 0$ as $q$ goes to infinity among the prime numbers. We will denote by $\epsilon$ a positive constant whose value may vary from one line to the next. The characteristic function of a set $S$ will be denoted $\un_S$. \end{notations} \section{Automorphic and probabilistic background} \label{autoproba} \subsection{Automorphic background}\label{sec_autoback} \subsubsection{Overview of holomorphic cusp forms} In this section, we recall general facts about holomorphic cusp forms. A reference is \cite{Iw}. \pa{Generalities} We write $\gGamma_{0}(q)$ for the congruence subgroup of level $q$ which acts on the upper-half plane $\pk$. A holomorphic function $f\colon\pk\to\C$ which satisfies \[ \forall\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in\gGamma_{0}(q),\forall z\in\pk,\quad f\left(\frac{az+b}{cz+d}\right)=(cz+d)^\kappa f(z) \] and vanishes at the cusps of $\gGamma_{0}(q)$ is a \emph{holomorphic cusp form} of level $q$ and even weight $\kappa\geq 2$. We denote by $\cusp{\kappa}{q}$ this space of holomorphic cusp forms which is equipped with the Petersson inner product \[ \scal{f_{1}}{f_{2}}_q\coloneqq \int_{\quotientgauche{\gGamma_0(q)}{\pk}}y^{\kappa}f_{1}(z)\overline{f_{2}(z)}\frac{\dd x\dd y}{y^{2}}. \] The Fourier expansion at the cusp $\infty$ of any such holomorphic cusp form $f$ is given by \[ \forall z\in\pk,\quad f(z)=\sum_{n\geq1}\psi_{f}(n)n^{(\kappa-1)/2}e(nz) \] where $e(z)\coloneqq\exp{(2i\pi z)}$ for any complex number $z$. The \emph{Hecke operators} act on $\cusp{\kappa}{q}$ by \[ T_{\ell}(f)(z) \coloneqq \frac{1}{\sqrt{\ell}}\sum_{\substack{ad=\ell\\ (a,q)=1}}\sum_{0\leq b<d}f\left(\frac{az+b}{d}\right) \] for any $z\in\pk$. If $f$ is an eigenvector of $T_\ell$, we write $\lambda_f(\ell)$ for the corresponding eigenvalue. One can prove that $T_\ell$ is Hermitian for any integer $\ell\geq 1$ coprime with $q$ and that \begin{equation} \label{compo} T_{\ell_{1}}\circ T_{\ell_{2}}=\sum_{\substack{d\mid(\ell_{1},\ell_{2})\\(d,q)=1}}T_{\ell_{1}\ell_{2}\left/d^{2}\right.} \end{equation} for any integers $\ell_1, \ell_2\geq 1$. By Atkin \& Lehner theory \cite{AtLe}, we get a splitting of $\cusp{\kappa}{q}$ into $\anc{\kappa}{q}\oplus^{\perp_{\scal{\cdot}{\cdot}_q}}\nouv{\kappa}{q}$ where \begin{align*} \anc{\kappa}{q} & \coloneqq \Vect_{\C}\left\{f(qz), f\in\cusp{\kappa}{1}\right\}\cup\cusp{\kappa}{1}, \\ \nouv{\kappa}{q} & \coloneqq \left(\anc{\kappa}{q}\right)^{\perp_{\scal{\cdot}{\cdot}_q}} \end{align*} where ``o'' stands for ``old'' and ``n'' for ``new''.
Note that $\anc{\kappa}{q}=\{0\}$ if $\kappa<12$ or $\kappa=14$. These two spaces are $T_{\ell}$-invariant for any integer $\ell\geq 1$ coprime with $q$. A \emph{primitive} cusp form is an element $f\in\nouv{\kappa}{q}$ which is an eigenfunction of every operator $T_\ell$ for $\ell\geq 1$ coprime with $q$ and which is arithmetically normalised, namely $\psi_{f}(1)=1$. Such an element $f$ is automatically an eigenfunction of the other Hecke operators and satisfies $\psi_{f}(\ell)=\lambda_{f}(\ell)$ for any integer $\ell\geq 1$. Moreover, if $p$ is a prime number, define $\alpha_{f}(p)$, $\beta_{f}(p)$ as the complex roots of the quadratic equation \begin{equation} \label{quadra} X^{2}-\lambda_{f}(p)X+\epsilon_{q}(p)=0 \end{equation} where $\epsilon_{q}$ denotes the trivial Dirichlet character of modulus $q$. Then it follows from the work of Eichler, Shimura, Igusa and Deligne that \[ \abs{\alpha_{f}(p)}, \abs{\beta_{f}(p)}\leq 1 \] for any prime number $p$ and so \begin{equation} \label{individualh} \forall\ell\geq 1,\quad \abs{\lambda_{f}(\ell)}\leq\tau(\ell). \end{equation} The set of primitive cusp forms is denoted by $\prim{\kappa}{q}$. It is an orthogonal basis of $\nouv{\kappa}{q}$. Let $f$ be a holomorphic cusp form with Hecke eigenvalues $\left(\lambda_{f}(\ell)\right)_{(\ell,q)=1}$. The composition property \eqref{compo} entails that for any integer $\ell_{1}\geq 1$ and for any integer $\ell_{2}\geq 1$ coprime with $q$ the following multiplicative relations hold: \begin{align} \label{compoeigen} \psi_f(\ell_1)\lambda_f(\ell_2) &= \sum_{\substack{d\mid(\ell_1,\ell_2)\\ (d,q)=1}}\psi_f\left(\ell_{1}\ell_{2}\left/d^{2}\right.\right), \\ \psi_f(\ell_1\ell_2) &= \sum_{\substack{d\mid(\ell_1,\ell_2)\\ (d,q)=1}}\mu(d)\psi_f\left(\ell_1/d\right)\lambda_f\left(\ell_2/d\right) \end{align} and these relations hold for any integers $\ell_{1}, \ell_{2}\geq 1$ if $f$ is primitive. The adjointness relation is \begin{equation} \label{adjointness} \lambda_{f}(\ell)=\overline{\lambda_{f}(\ell)}, \quad \psi_{f}(\ell)=\overline{\psi_{f}(\ell)} \end{equation} for any integer $\ell\geq 1$ coprime with $q$ and this remains true for any integer $\ell\geq 1$ if $f$ is primitive. \pa{Trace formulas} We need two definitions. The harmonic weight associated to any $f$ in $\cusp{\kappa}{q}$ is defined by \begin{equation}\label{eq_facomeg} \omega_q(f)\coloneqq\frac{\fGamma(\kappa-1)}{(4\pi)^{\kappa-1}\scal{f}{f}_q}. \end{equation} For any natural integers $m$ and $n$, the $\Delta_q$-symbol is given by \begin{equation}\label{eq_deltasymb} \Delta_q(m,n)\coloneqq\delta_{m,n}+ 2\pi i^{\kappa}\sum_{\substack{c\geq 1 \\ q\mid c}}\frac{S(m,n;c)}{c}J_{\kappa-1}\left(\frac{4\pi\sqrt{mn}}{c}\right) \end{equation} where $S(m,n;c)$ is a Kloosterman sum defined in appendix \ref{Kloos} and $J_{\kappa-1}$ is a Bessel function of the first kind defined in appendix \ref{Bessel}. The following proposition is \emph{Petersson's trace formula}. \begin{proposition}\label{prop_orth} If $\orth{\kappa}{q}$ is any orthogonal basis of $\cusp{\kappa}{q}$ then \begin{equation} \label{tr1} \sum_{f\in\orth{\kappa}{q}}\omega_q(f)\psi_f(m)\psi_f(n)=\Delta_q(m,n) \end{equation} for any integers $m$ and $n$. \end{proposition} H.~Iwaniec, W.~Luo \& P.~Sarnak proved in \cite{IwLuSa} a useful variation of Petersson's trace formula which averages over primitive cusp forms only. This is more convenient when there are some old forms, which is the case for instance when the weight $\kappa$ is large.
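Before stating this variant, let us illustrate proposition~\ref{prop_orth} in the simplest case, as a small sanity check which is not needed in the sequel. Specialising \eqref{tr1} to $m=n=1$ gives \[ \sum_{f\in\orth{\kappa}{q}}\omega_q(f)\psi_f(1)^2=\Delta_q(1,1)=1+2\pi i^{\kappa}\sum_{\substack{c\geq 1 \\ q\mid c}}\frac{S(1,1;c)}{c}J_{\kappa-1}\left(\frac{4\pi}{c}\right). \] Since every modulus $c$ in this sum is a multiple of $q$, the Weil-Estermann bound \eqref{weil} together with the Bessel estimate \eqref{bessel} shows that the Kloosterman contribution is $\ll_{\kappa,\epsilon}q^{1/2-\kappa+\epsilon}$ (compare lemma~\ref{deltaestimate} below), so that the left-hand side equals $1$ up to a very small error; this is the basic mechanism behind \eqref{eq_mesasy} and remark~\ref{rem_moyun}, which assert that the harmonic weights essentially define a probability measure on $\prim{\kappa}{q}$.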
Let $\nu$ be the arithmetic function defined by $$\nu(n)\coloneqq n\prod_{p\mid n}\left(1+1/p\right)$$ for any integer $n\geq 1$. \begin{proposition}[H.~Iwaniec, W.~Luo \& P.~Sarnak~(2001)] \label{iwlusatr} If $\left(n,q^2\right)\mid q$ and $q\nmid m$ then \begin{equation} \label{tr2} \sum_{f\in \prim{\kappa}{q}}\omega_q(f)\lambda_f(m)\lambda_f(n)= \Delta_q(m,n)-\frac{1}{q\nu((n,q))}\sum_{\ell\mid q^\infty}\frac{1}{\ell}\Delta_1\left(m\ell^2,n\right). \end{equation} \end{proposition} \begin{remark} The first term in \eqref{tr2} is exactly the term which appears in \eqref{tr1} whereas the second term in \eqref{tr2} will usually be very small as an old form comes from a form of level $1$! Thus, everything works in practice as if there were no old forms in $\cusp{\kappa}{q}$. \end{remark} \subsubsection{Chebyshev polynomials and Hecke eigenvalues} Let $p\neq q$ be a prime number and $f\in\prim{\kappa}{q}$. The multiplicativity relation \eqref{compoeigen} leads to \[ \sum_{r\geq 0}\lambda_f(p^r)t^r = \frac{1}{1-\lambda_f(p)t+t^2}. \] It follows that \begin{equation}\label{eq_ams} \lambda_f(p^r)=X_r\left(\lambda_f(p)\right) \end{equation} where the polynomials $X_r$ are defined by their generating series \[ \sum_{r\geq 0}X_r(x)t^r = \frac{1}{1-xt+t^2}. \] They are also defined by \[ X_r(2\cos\theta)=\frac{\sin{((r+1)\theta)}}{\sin{(\theta)}}. \] These polynomials are known as the Chebyshev polynomials of the second kind. Each $X_r$ has degree $r$, is even if $r$ is even and odd otherwise. The family $\{X_r\}_{r\geq 0}$ is a basis for $\Q[X]$, orthonormal with respect to the inner product \[ \scalst{P}{Q}\coloneqq\frac{1}{\pi} \int_{-2}^{2}P(x)Q(x)\sqrt{1-\frac{x^2}{4}}\dd x. \] In particular, for any integer $\varpi\geq 0$ we have \begin{equation}\label{eq_lintch} X_r^\varpi=\sum_{j=0}^{r\varpi}x(\varpi,r,j)X_j \end{equation} with \begin{equation}\label{eq_valx} x(\varpi,r,j)\coloneqq \scalst{X_r^\varpi}{X_j} =\frac{2}{\pi}\int_0^\pi\frac{\sin^\varpi{((r+1)\theta)}\sin{((j+1)\theta)}}{\sin^{\varpi-1}{(\theta)}}\dd\theta. \end{equation} The following relations are useful in this paper: \begin{equation} \label{propriox} x(\varpi,r,j)=\begin{cases} 1 & \text{if $j=0$ and $\varpi=2$,} \\ 0 & \text{if $j$ is odd and $r$ is even,} \\ 0 & \text{if $j=0$, $\varpi=1$ and $r\geq 1$.} \end{cases} \end{equation} \subsubsection{Overview of $L$-functions associated to primitive cusp forms} Let $f$ be in $\prim{\kappa}{q}$. We define \[ L(f,s)\coloneqq\sum_{n\geq 1}\frac{\lambda_f(n)}{n^s}= \prod_{p\in\prem}\left(1-\frac{\alpha_f(p)}{p^s}\right)^{-1}\left(1-\frac{\beta_f(p)}{p^s}\right)^{-1} \] which is an absolutely convergent and non-vanishing Dirichlet series and Euler product on $\Re{s}>1$ and also \[ L_\infty(f,s)\coloneqq\fGamma_{\R}\left(s+(\kappa-1)/2\right)\fGamma_{\R}\left(s+(\kappa+1)/2\right) \] where $\fGamma_{\R}(s)\coloneqq\pi^{-s/2}\fGamma\left(s/2\right)$ as usual. The function $$\Lambda(f,s)\coloneqq q^{s/2}L_\infty(f,s)L(f,s)$$ is a \emph{completed $L$-function} in the sense that it satisfies the following \emph{nice} analytic properties: \begin{itemize} \item the function $\Lambda(f,s)$ can be extended to a holomorphic function of order $1$ on $\C$, \item the function $\Lambda(f,s)$ satisfies a functional equation of the shape \[ \Lambda(f,s)=i^{\kappa}\epsilon_f(q)\Lambda(f,1-s) \] where \begin{equation}\label{eq_signe} \epsilon_f(q)=-\sqrt{q}\lambda_f(q)=\pm 1.
\end{equation} \end{itemize} \subsubsection{Overview of symmetric power $L$-functions}\label{sec_sympow} Let $f$ be in $\prim{\kappa}{q}$. For any natural integer $r\geq 1$, the \emph{symmetric $r$-th power $L$-function} associated to $f$ is given by the following Euler product of degree $r+1$ \[ L(\sym^rf,s)\coloneqq\prod_{p\in\prem}L_p(\sym^rf,s) \] where \[ L_p(\sym^rf,s)\coloneqq \prod_{i=0}^r\left(1-\frac{\alpha_f(p)^{i}\beta_f(p)^{r-i}}{p^{s}}\right)^{-1} \] for any prime number $p$. Let us remark that the local factors of this Euler product may be written as \[ L_p(\sym^rf,s)=\prod_{i=0}^r\left(1-\frac{\alpha_f(p)^{2i-r}}{p^s}\right)^{-1} \] for any prime number $p\neq q$ and \[ L_q(\sym^rf,s)=\left(1-\frac{\lambda_f(q)^r}{q^{s}}\right)^{-1}=\left(1-\frac{\lambda_f(q^r)}{q^{s}}\right)^{-1} \] as $\alpha_f(p)+\beta_f(p)=\lambda_f(p)$ and $\alpha_f(p)\beta_f(p)=\epsilon_q(p)$ for any prime number $p$ according to \eqref{quadra}. On $\Re{s}>1$, this Euler product is absolutely convergent and non-vanishing. We also define \cite[(3.16) and (3.17)]{CoMi} a local factor at $\infty$ which is given by a product of $r+1$ Gamma factors namely \[ L_\infty(\sym^rf,s)\coloneqq \prod_{0\leq a\leq(r-1)/2} \fGamma_{\R}\left(s+(2a+1)(\kappa-1)/2\right) \fGamma_{\R}\left(s+1+(2a+1)(\kappa-1)/2\right) \] if $r$ is odd and \[ L_\infty(\sym^rf,s)\coloneqq \fGamma_{\R}(s+\mu_{\kappa,r}) \prod_{1\leq a\leq r/2} \fGamma_{\R}\left(s+a(\kappa-1)\right) \fGamma_{\R}\left(s+1+a(\kappa-1)\right) \] if $r$ is even where \[ \mu_{\kappa,r}\coloneqq\begin{cases} 1 & \text{if } r(\kappa-1)/2 \text{ is odd,} \\ 0 & \text{otherwise.} \\ \end{cases} \] All the local data appearing in these local factors are encapsulated in the following completed $L$-function \[ \Lambda(\sym^rf,s)\coloneqq\left(q^{r}\right)^{s/2}L_\infty(\sym^rf,s)L(\sym^rf,s). \] Here, $q^r$ is called the arithmetic conductor of $\Lambda(\sym^rf,s)$ and somehow measures the size of this function. We will need some control on the analytic behaviour of this function. Unfortunately, such information is not currently known in all generality. Our main assumption is given in hypothesis $\Nice(r,f)$ on page \pageref{hypohypo}. Indeed, much more is expected to hold, as discussed in detail in \cite{CoMi}: the following assumption is strongly believed to be true and lies in the spirit of the Langlands program. \begin{hypCoMi} There exists an automorphic cuspidal self-dual representation, denoted by $\sym^r\pi_f=\otimes'_{p\in\prem\cup\{\infty\}}\sym^r\pi_{f,p}$, of $GL_{r+1}\left(\A_{\Q}\right)$ whose local factors $L\left(\sym^r\pi_{f,p},s\right)$ agree with the local factors $L_p\left(\sym^rf,s\right)$ for any $p$ in $\prem\cup\{\infty\}$. \end{hypCoMi} Note that the local factors and the arithmetic conductor in the definition of $\Lambda\left(\sym^rf,s\right)$, as well as the sign of its functional equation, which have all appeared without explanation so far, come from the explicit computations done \emph{via} the local Langlands correspondence by J.~Cogdell and P.~Michel in \cite{CoMi}. Obviously, hypothesis $\Nice(r,f)$ is a weak consequence of hypothesis $\sym^{r}(f)$. For instance, the cuspidality condition in hypothesis $\sym^{r}(f)$ entails that $\Lambda\left(\sym^rf,s\right)$ is of order $1$, which is crucial for us to state a suitable explicit formula. As we will not exploit the power of automorphic theory in this paper, hypothesis $\Nice(r,f)$ is enough for our purpose. In addition, it may happen that hypothesis $\Nice(r,f)$ is known whereas hypothesis $\sym^{r}f$ is not.
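To make the shape of these local factors concrete, here is a minimal worked example in the first non-trivial case $r=2$: for a prime $p\nmid q$, so that $\alpha_f(p)\beta_f(p)=1$, \[ L_p(\sym^2f,s)^{-1}=\left(1-\frac{\alpha_f(p)^{2}}{p^{s}}\right)\left(1-\frac{1}{p^{s}}\right)\left(1-\frac{\beta_f(p)^{2}}{p^{s}}\right), \] so that the coefficient of $p^{-s}$ in $L_p(\sym^2f,s)$ is $\alpha_f(p)^{2}+1+\beta_f(p)^{2}=\lambda_f(p)^{2}-1=\lambda_f\left(p^{2}\right)$ by \eqref{compoeigen}. More generally, for any $p\nmid q$ the $p$-th Dirichlet coefficient of $L(\sym^rf,s)$ is $\sum_{i=0}^{r}\alpha_f(p)^{2i-r}=X_r\left(\lambda_f(p)\right)=\lambda_f\left(p^{r}\right)$ by \eqref{eq_ams}; this explains why the sums over primes produced by the explicit formula of proposition~\ref{explicit} below involve the Hecke eigenvalues $\lambda_f\left(p^{r}\right)$ (and, coming from the squares of primes, $\lambda_f\left(p^{2(r-m)}\right)$).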
Let us review what has been done so far. For any $f$ in $\prim{\kappa}{q}$, hypothesis $\sym^{r}f$ is known for $r=1$ (E.~Hecke), for $r=2$ thanks to the work of S.~Gelbart and H.~Jacquet \cite{GeJa} and for $r=3,4$ from the works of H.~Kim and F.~Shahidi \cite{KiSh1,KiSh2,Ki}. \subsection{Probabilistic background}\label{sec_proba} The set $\prim{\kappa}{q}$ can be seen as a probability space if \begin{itemize} \item the measurable sets are all its subsets, \item the \emph{harmonic probability measure} is defined by \[ \muh[q](A)\coloneqq\sumh_{f\in A}1\coloneqq\sum_{f\in A}\omega_q(f) \] for any subset $A$ of $\prim{\kappa}{q}$. \end{itemize} Indeed, there is a slight abuse here as we only know that \begin{equation}\label{eq_mesasy} \lim_{\substack{q\in\prem \\ q\to+\infty}}\muh[q]\left(\prim{\kappa}{q}\right)=1 \end{equation} (see remark~\ref{rem_moyun}) which means that $\muh[q]$ is an ``asymptotic'' probability measure. If $X_q$ is a measurable complex-valued function on $\prim{\kappa}{q}$ then it is very natural to compute its \emph{expectation} defined by \[ \Eh[q]\left(X_q\right)\coloneqq\sumh_{f\in\prim{\kappa}{q}}X_q(f), \] its \emph{variance} defined by \[ \Vh[q]\left(X_q\right)\coloneqq\Eh[q]\left(\left(X_q-\Eh[q]\left(X_q\right)\right)^2\right) \] and its \emph{$m$-th moments} given by \[ \Mh[q,m]\left(X_q\right)\coloneqq\Eh[q]\left(\left(X_q-\Eh[q]\left(X_q\right)\right)^m\right) \] for any integer $m\geq 1$. If $X\coloneqq\left(X_q\right)_{q\in\prem}$ is a sequence of such measurable complex-valued functions then we may legitimately wonder if the associated complex sequences \[ \left(\Eh[q]\left(X_q\right)\right)_{q\in\prem},\quad \left(\Vh[q]\left(X_q\right)\right)_{q\in\prem},\quad \left(\Mh[q,m]\left(X_q\right)\right)_{q\in\prem} \] converge as $q$ goes to infinity among the primes. If so, the following general notations will be used for their limits \[ \Eh[\infty]\left(X\right),\quad \Vh[\infty]\left(X\right),\quad \Mh[\infty,m]\left(X\right) \] for any natural integer $m$. In addition, these potential limits are called \emph{asymptotic expectation}, \emph{asymptotic variance} and \emph{asymptotic $m$-th moments} of $X$ for any natural integer $m\geq 1$. Until the end of this section, we assume that $r$ is \emph{odd}. Note that the sign of the functional equation of $L(\sym^rf,s)$ is not constant as $q$ goes to infinity among the prime numbers and $f$ ranges over $\prim{\kappa}{q}$, since it depends on $\epsilon_f(q)$. Let \[ \primeps{\kappa}{q}\coloneqq\left\{f\in \prim{\kappa}{q}, \epsilon(\sym^rf)=\epsilon\right\} \] where $\epsilon=\pm 1$. If $f\in\primpair{\kappa}{q}$, then $\sym^rf$ is said to be \emph{even} whereas it is said to be \emph{odd} if $f\in\primimpair{\kappa}{q}$. It is well-known that \[ \lim_{\substack{q\in\prem \\ q\to +\infty}}\muh[q]\left(\left\{f\in\prim{\kappa}{q} \colon \epsilon_f(q)=\epsilon\right\}\right)=\frac{1}{2}. \] Since $\epsilon(\sym^rf)$ is $\epsilon_f(q)$ up to a sign depending only on $\kappa$ and $r$ (by hypothesis $\Nice(r,f)$), it follows that \begin{equation}\label{eq_sign} \lim_{\substack{q\in\prem \\ q\to +\infty}}\muh[q]\left(\primeps{\kappa}{q}\right)=\frac{1}{2}.
\end{equation} For $X_q$ as above, we can compute its \emph{signed expectation} defined by \[ \Eheps[q]\left(X_q\right)\coloneqq 2\sumh_{f\in\primeps{\kappa}{q}}X_q(f), \] its \emph{signed variance} defined by \[ \Vheps[q]\left(X_q\right)\coloneqq\Eheps[q]\left(\left(X_q-\Eheps[q]\left(X_q\right)\right)^2\right) \] and its \emph{signed $m$-th moments} given by \[ \Mheps[q,m]\left(X_q\right)\coloneqq\Eheps[q]\left(\left(X_q-\Eheps[q]\left(X_q\right)\right)^m\right) \] for any natural integer $m\geq 1$. In case of existence, we write $\Eheps[\infty](X)$, $\Vheps[\infty](X)$ and $\Mheps[\infty,m](X)$ for the limits which are called \emph{signed asymptotic expectation}, \emph{signed asymptotic variance} and \emph{signed asymptotic moments}. The signed expectation and the expectation are linked through the formula \begin{align} \notag \Eheps[q](X_q) &= 2\sumh_{f\in\prim{\kappa}{q}}\frac{1+\epsilon\times\epsilon(\sym^rf)}{2}X_q(f) \\ \label{eq_removeps} &= \Eh[q](X_q)-\epsilon\times\epsilon(\kappa,r)\sqrt{q}\sumh_{f\in\prim{\kappa}{q}}\lambda_f(q)X_q(f). \end{align} \section{Main technical ingredients of this work} \label{technical} \subsection{Large sieve inequalities for Kloosterman sums} One of the main ingredients of this work is a set of large sieve inequalities for Kloosterman sums, which have been established by J.-M.~Deshouillers \& H.~Iwaniec in \cite{DeIw} and then refined by V.~Blomer, G.~Harcos \& P.~Michel in \cite{BlHaMi}. The proof of these large sieve inequalities relies on the spectral theory of automorphic forms on $GL_2\left(\A_{\Q}\right)$. In particular, the authors have to understand the size of the Fourier coefficients of these automorphic cusp forms. We have already seen that the size of the Fourier coefficients of holomorphic cusp forms is well understood (see \eqref{individualh}) but we only have partial results on the size of the Fourier coefficients of Maass cusp forms which do not come from holomorphic forms. We introduce the following hypothesis which measures the approximation towards the \emph{Ramanujan-Petersson-Selberg conjecture}. \begin{ATRPS}\label{hyp_RPS} If $\pi\coloneqq\otimes'_{p\in\prem\cup\{\infty\}}\pi_{p}$ is any automorphic cuspidal form on $GL_{2}(\A_\Q)$ with local Hecke parameters $\alpha_{\pi}^{(1)}(p)$, $\alpha_{\pi}^{(2)}(p)$ at any prime number $p$ and $\mu_{\pi}^{(1)}(\infty)$, $\mu_{\pi}^{(2)}(\infty)$ at infinity then \[ \forall j\in\{1,2\},\quad \abs{\alpha_{\pi}^{(j)}(p)}\leq p^{\theta} \] for any prime number $p$ for which $\pi_p$ is unramified and \[ \forall j\in\{1,2\},\quad \abs{ \Re{\left(\mu_{\pi}^{(j)}(\infty)\right)} } \leq\theta \] provided $\pi_{\infty}$ is unramified. \end{ATRPS} \begin{definition} We say that\/ $\theta$ is \emph{admissible} if\/ $\Hy_2(\theta)$ is satisfied. \end{definition} \begin{remark} The smallest admissible value of $\theta$ is currently $\theta_{0}=\frac{7}{64}$ thanks to the works of H.~Kim, F.~Shahidi and P.~Sarnak \cite{KiSh2,Ki}. The Ramanujan-Petersson-Selberg conjecture asserts that $0$ is admissible.
\end{remark} \begin{definition}\label{def_propS} Let $T\colon\mathbb{R}^3\to\R^+$ and $(M,N,C)\in(\mathbb{R}\setminus\{0\})^{3}$. We say that a smooth function $h\colon\R^3\to\R$ satisfies property $\prp(T;M,N,C)$ if there exists a real number $K>0$ such that \begin{multline*} \forall(i,j,k)\in\N^3, \forall(x_1,x_2,x_3)\in \left[\frac{M}{2},2M\right]\times\left[\frac{N}{2},2N\right]\times\left[\frac{C}{2},2C\right], \\ \abs{x_1^ix_2^jx_3^k\frac{\partial^{i+j+k} h} {\partial x_1^i\partial x_2^j\partial x_3^k}(x_1,x_2,x_3)}\leq KT(M,N,C)\left(1+\frac{\sqrt{MN}}{C}\right)^{i+j+k}. \end{multline*} \end{definition} With this definition in mind, we are able to write the following proposition which is a special case of a large sieve inequality adapted from the one of Deshouil\-lers \& Iwaniec \cite[Theorem 9]{DeIw} by Blomer, Harcos \& Michel \cite[Theorem 4]{BlHaMi}. \begin{proposition}\label{sieve} Let $q$ be some positive integer. Let $M, N, C\geq 1$ and $g$ be a smooth function satisfying property $\prp(1;M,N,C)$. Consider two sequences of complex numbers $(a_m)_{m\in[M/2,2M]}$ and $(b_n)_{n\in[N/2,2N]}$. If \/ $\theta$ is admissible and $MN\ll C^2$ then \begin{multline} \sum_{\substack{c\geq 1 \\ q\mid c}} \sum_{m\geq 1}\sum_{n\geq 1}a_mb_n\frac{S(m,\pm n;c)}{c}g(m,n;c) \\% \ll_{\epsilon} (qMNC)^\epsilon \left(\frac{C^2}{MN}\right)^{\theta} \left( 1+\frac{M}{q} \right)^{1/2} \left( 1+\frac{N}{q} \right)^{1/2} \norm{a}_2\norm{b}_2 \end{multline} for any $\epsilon>0$. \end{proposition} We shall use a test function. For any $\nu>0$ let us define $\Schwartz_\nu(\R)$ as the space of even Schwartz functions $\Phi$ whose Fourier transform \[ \widehat{\Phi}(\xi)\coloneqq \mathcal{F}[x\mapsto\Phi(x)](\xi)\coloneqq \int_{\R}\Phi(x)e(-x\xi)\dd x \] is compactly supported in $[-\nu,+\nu]$. Thanks to the Fourier inversion formula: \begin{equation}\label{eq_fouinv} \Phi(x)=\int_{\R}\widehat{\Phi}(\xi)e(x\xi)\dd\xi= \mathcal{F}[\xi\mapsto\widehat{\Phi}(\xi)](-x), \end{equation} such a function $\Phi$ can be extended to an entire even function which satisfies \begin{equation} \label{estim} \forall s\in\C,\quad \Phi(s)\ll_n\frac{\exp{(\nu\abs{\Im{s}})}}{(1+\abs{s})^n} \end{equation} for any integer $n\geq 0$. The version of the large sieve inequality we shall use several times in this paper is then the following. \begin{corollary}\label{usefulsieve} Let $q$ be some prime number, $k_1, k_2>0$ be some integers, $\alpha_1, \alpha_2, \nu$ be some positive real numbers and $\Phi\in\Schwartz_{\nu}(\R)$. Let $h$ be some smooth function satisfying property $\prp(T;M,N,C)$ for any $1\leq M\leq q^{k_1\alpha_1\nu}$, $1\leq N\leq q^{k_2\alpha_2\nu}$ and $C\geq q$. Let $\left(a_{p}\right)_{\substack{p\in\prem\\ p\leq q^{\alpha_1\nu}}}$ and $\left(b_{p}\right)_{\substack{p\in\prem\\ p\leq q^{\alpha_2\nu}}}$ be some sequences of complex numbers.
If $\;\theta$ is admissible and $\nu\leq2\left/(k_1\alpha_1+k_2\alpha_2)\right.$ then \begin{multline} \sum_{\substack{c\geq 1\\ q\mid c}} \sum_{\substack{p_1\in\prem\\ p_1\nmid q}} \sum_{\substack{p_2\in\prem\\ p_2\nmid q}} a_{p_1}b_{p_2}\frac{S(p_1^{k_1},p_2^{k_2};c)}{c} h\left(p_1^{k_1},p_2^{k_2};c\right) \widehat{\Phi}\left(\frac{\log p_1}{\log(q^{\alpha_1})}\right) \widehat{\Phi}\left(\frac{\log p_2}{\log(q^{\alpha_2})}\right) \\% \ll q^{\epsilon} \sumsh_{\substack{ 1\leq M\leq q^{\nu\alpha_1k_1}\\% 1\leq N\leq q^{\nu\alpha_2k_2}\\% C\geq q/2 }} \left(1+\sqrt{\frac{M}{q}}\right) \left(1+\sqrt{\frac{N}{q}}\right) \left(\frac{C^2}{MN}\right)^\theta T(M,N,C) \norm{a}_2\norm{b}_2 \end{multline} where $\sharp$ indicates that the sum is over powers of $\sqrt{2}$. The constant implied by the symbol $\ll$ depends at most on $\epsilon$, $k_1$, $k_2$, $\alpha_1$, $\alpha_2$ and $\nu$. \end{corollary} \begin{proof} Define $\left(\widehat{a}_m\right)_{m\in\N}$, $\left(\widehat{b}_n\right)_{n\in\N}$ and $g(m,n;c)$ by \begin{align} \widehat{a}_m &\coloneqq a_{m^{1/k_1}}\un_{\prem^{k_1}}(m)\un_{[1,q^{\nu\alpha_1k_1}]}(m)\\% \widehat{b}_n &\coloneqq b_{n^{1/k_2}}\un_{\prem^{k_2}}(n)\un_{[1,q^{\nu\alpha_2k_2}]}(n)\\% g(m,n;c) &\coloneqq h(m,n;c) \widehat{\Phi}\left(\frac{\log m}{\log(q^{\alpha_1k_1})}\right) \widehat{\Phi}\left(\frac{\log n}{\log(q^{\alpha_2k_2})}\right). \end{align} Using a smooth partition of unity, as detailed in \S~\ref{unity}, we need to evaluate \begin{equation}\label{eq_vtmp} \sumsh_{\substack{ 1\leq M\leq q^{\nu\alpha_1k_1}\\% 1\leq N\leq q^{\nu\alpha_2k_2}\\% C\geq q/2 }} T(M,N,C) \sum_{\substack{c\geq 1\\ q\mid c}}\sum_{m\geq 1}\sum_{n\geq 1} \widehat{a}_m \widehat{b}_n \frac{S(m,n;c)}{c} \frac{g_{M,N,C}(m,n;c)}{T(M,N,C)}. \end{equation} Since $\nu\leq2\left/(\alpha_1k_1+\alpha_2k_2)\right.$, the first summation is restricted to $MN\ll C^2$ hence, using proposition~\ref{sieve}, the quantity in \eqref{eq_vtmp} is \begin{equation} \ll \norm{a}_2\norm{b}_2q^\epsilon \sumsh_{\substack{ 1\leq M\leq q^{\nu\alpha_1k_1}\\% 1\leq N\leq q^{\nu\alpha_2k_2}\\% C\geq q/2 }} T(M,N,C) \left(1+\sqrt{\frac{M}{q}}\right) \left(1+\sqrt{\frac{N}{q}}\right) \left(\frac{C^2}{MN}\right)^\theta. \end{equation} \end{proof} \subsection{Riemann's explicit formula for symmetric power $L$-functions}\label{sec_explicit} In this section, we give an analog of Riemann-von Mangoldt's explicit formula for symmetric power $L$-functions. Before that, let us recall some preliminary facts on zeros of symmetric power $L$-functions which can be found in section 5.3 of \cite{IwKo}. Let $r\geq 1$ and $f\in \prim{\kappa}{q}$ for which hypothesis $\Nice(r,f)$ holds. All the zeros of $\Lambda(\sym^rf,s)$ are in the critical strip $\{s\in\C\colon 0<\Re{s}<1\}$. The multiset of the zeros of $\Lambda(\sym^rf,s)$ counted with multiplicities is given by \[ \left\{ \rho_{f,r}^{(j)}=\beta_{f,r}^{(j)}+i\gamma_{f,r}^{(j)} \colon j\in\mathcal{E}(f,r) \right\} \] where \[ \mathcal{E}(f,r)\coloneqq \begin{cases} \Z & \text{if $\sym^rf$ is odd}\\ \Z\setminus\{0\} & \text{if $\sym^rf$ is even.} \end{cases} \] and \begin{align*} \beta_{f,r}^{(j)} & = \Re{\rho_{f,r}^{(j)}}, \\ \gamma_{f,r}^{(j)} & = \Im{\rho_{f,r}^{(j)}} \end{align*} for any $j\in\mathcal{E}(f,r)$. We enumerate the zeros such that \begin{enumerate} \item the sequence $j\mapsto\gamma_{f,r}^{(j)}$ is increasing, \item we have $j\geq 0$ if and only if $\gamma_{f,r}^{(j)}\geq 0$, \item we have $\rho_{f,r}^{(-j)}=1-\rho_{f,r}^{(j)}$.
\end{enumerate} Note that if $\rho_{f,r}^{(j)}$ is a zero of $\Lambda(\sym^rf,s)$ then $\overline{\rho_{f,r}^{(j)}}$, $1-\rho_{f,r}^{(j)}$ and $1-\overline{\rho_{f,r}^{(j)}}$ are also zeros of $\Lambda(\sym^rf,s)$. In addition, remember that if $\sym^rf$ is odd then the functional equation of $L(\sym^rf,s)$ evaluated at the critical point $s=1/2$ provides a trivial zero denoted by $\rho_{f,r}^{(0)}$. It can be shown \cite[Theorem 5.8]{IwKo} that the number of zeros of $\Lambda(\sym^rf,s)$ satisfying $\abs{\gamma_{f,r}^{(j)}}\leq T$ is \begin{equation}\label{eq_nbzero} \frac{T}{\pi}\log{\left(\frac{q^rT^{r+1}}{(2\pi e)^{r+1}}\right)}+O\left(\log(qT)\right) \end{equation} for any $T\geq 1$. We now state the \emph{Generalised Riemann Hypothesis} which is the main conjecture about the horizontal distribution of the zeros of $\Lambda(\sym^rf,s)$ in the critical strip. \begin{hGRH} For any prime number $q$ and any $f$ in $\prim{\kappa}{q}$, all the zeros of $\Lambda(\sym^rf,s)$ lie on the critical line $\left\{s\in\C \colon \Re{s}=1/2\right\}$ namely $\beta_{f,r}^{(j)}=1/2$ for any $j\in\mathcal{E}(f,r)$. \end{hGRH} \begin{remark} We \emph{do not} use this hypothesis in our proofs. \end{remark} Under hypothesis $\GRH(r)$, it can be shown that the number of zeros of the function $\Lambda(\sym^rf,s)$ satisfying $\abs{\gamma_{f,r}^{(j)}}\leq 1$ is given by \[ \frac{1}{\pi}\log{\left(q^r\right)}(1+o(1)) \] as $q$ goes to infinity. Thus, the spacing between two consecutive zeros with imaginary part in $[0,1]$ is roughly of size \begin{equation} \label{meanspacing} \frac{2\pi}{\log{\left(q^r\right)}}. \end{equation} We aim at studying the local distribution of the zeros of $\Lambda(\sym^rf,s)$ in a neighborhood of the real axis of size $1/\log(q^r)$ since in such a neighborhood, we expect to catch only a few zeros (but without being able to say that we catch only one\footnote{We refer to Miller \cite{mil02a} and Omar \cite{oma00} for works related to the ``first'' zero.}). Hence, we normalise the zeros by defining \[ \widehat{\rho}_{f,r}^{(j)} \coloneqq \frac{\log{\left(q^r\right)}}{2i\pi}\left( \beta_{f,r}^{(j)}-\frac{1}{2}+i\gamma_{f,r}^{(j)} \right). \] Note that \[ \widehat{\rho}_{f,r}^{(-j)}=-\widehat{\rho}_{f,r}^{(j)}. \] \begin{definition} Let $f\in\prim{\kappa}{q}$ for which hypothesis\/ $\Nice(r,f)$ holds and let $\Phi\in\Schwartz_\nu(\R)$. The \emph{one-level density} (relatively to $\Phi$) of\/ $\sym^rf$ is \begin{equation} \label{eq_defdens} D_{1,q}[\Phi;r](f) \coloneqq \sum_{j\in\mathcal{E}(f,r)} \Phi\left(\widehat{\rho}_{f,r}^{(j)}\right). \end{equation} \end{definition} To study $D_{1,q}[\Phi;r](f)$ for any $\Phi\in\Schwartz_\nu(\R)$, we transform this sum over zeros into a sum over primes in the next proposition. In other words, we establish an explicit formula for symmetric power $L$-functions. Since the proof is classical, we refer to \cite[\S 4]{IwLuSa} or \cite[\S 2.2]{Gu} which present a method that only needs to be adapted to our setting. \begin{proposition} \label{explicit} Let $r\geq 1$ and $f\in \prim{\kappa}{q}$ for which hypothesis $\Nice(r,f)$ holds and let $\Phi\in\Schwartz_\nu(\R)$.
We have \[ D_{1,q}[\Phi;r](f)= E[\Phi;r]+P_q^1[\Phi;r](f)+\sum_{m=0}^{r-1}(-1)^mP_q^2[\Phi;r,m](f)+O\left(\frac{1}{\log{\left(q^r\right)}}\right) \] where \begin{align*} E[\Phi;r] & \coloneqq \widehat{\Phi}(0)+\frac{(-1)^{r+1}}{2}\Phi(0), \\ P_{q}^1[\Phi;r](f) & \coloneqq -\frac{2}{\log{\left(q^r\right)}}\sum_{\substack{p\in\prem \\ p\nmid q}} \lambda_{f}\left(p^r\right)\frac{\log{p}}{\sqrt{p}}\widehat{\Phi}\left(\frac{\log{p}}{\log{\left(q^r\right)}}\right), \\ P_q^2[\Phi;r,m](f) & \coloneqq -\frac{2}{\log{\left(q^r\right)}}\sum_{\substack{p\in\prem \\ p\nmid q}} \lambda_f\left(p^{2(r-m)}\right)\frac{\log{p}}{p}\widehat{\Phi}\left(\frac{2\log{p}}{\log{\left(q^r\right)}}\right) \end{align*} for any integer $m\in\{0,\ldots,r-1\}$. \end{proposition} \subsection{Contribution of the old forms} In this short section, we prove the following useful lemmas. \begin{lemma}\label{lem_delun} Let $p_1$ and $p_2\neq q$ be some prime numbers and $a_1$, $a_2$, $a$ be some nonnegative integers. Then \[ \sum_{\ell\mid q^{\infty}}\frac{\Delta_1\n(\ell^2p_1^{a_1},p_2^{a_2}q^a)}{\ell} \ll \frac{1}{q^{a/2}} \] the implied constant depending only on $a_1$ and $a_2$. \end{lemma} \begin{proof} Using proposition~\ref{prop_orth} and the fact that $\orth{\kappa}{1}=\prim{\kappa}{1}$, we write \begin{align} \Delta_1\n(\ell^2p_1^{a_1},p_2^{a_2}q^a) &= \sumh_{f\in\prim{\kappa}{1}}\lambda_f\n(\ell^2p_1^{a_1})\lambda_f\n(p_2^{a_2}q^a)\\ &\ll \label{eq_ici} \sumh_{f\in\prim{\kappa}{1}}\abs{\lambda_f\n(\ell^2p_1^{a_1})}\cdot\abs{\lambda_f\n(p_2^{a_2})}\cdot\abs{\lambda_f(q^a)}. \end{align} By Deligne's bound~\eqref{individualh} we have \begin{equation}\label{eq_sansq} \abs{\lambda_f\n(\ell^2p_1^{a_1})}\cdot\abs{\lambda_f\n(p_2^{a_2})} \leq \tau(\ell^2p_1^{a_1})\tau(p_2^{a_2}) \leq (a_1+1)(a_2+2)\tau(\ell^2). \end{equation} By the multiplicativity relation~\eqref{compoeigen} and the value of the sign of the functional equation~\eqref{eq_signe}, we have \begin{equation}\label{eq_avecq} \abs{\lambda_f(q^a)}\ll \frac{1}{q^{a/2}}. \end{equation} We obtain the result by reporting \eqref{eq_avecq} and \eqref{eq_sansq} in \eqref{eq_ici} and by using \eqref{eq_mesasy} and \[ \sum_{\ell\mid q^{\infty}}\frac{\tau(\ell^2)}{\ell}=\frac{1+1/q}{(1-1/q)^2}\ll 1. \] \end{proof} \begin{lemma}\label{deltaestimate} Let $m,n\geq 1$ be some coprime integers. Then, \[ \Delta_q(m,n)-\delta(m,n)\ll \begin{cases} \frac{(mn)^{1/4}}{q}\log\left(\frac{mn}{q^2}\right) & \text{if $mn>q^2$}\\ \frac{(mn)^{( \kappa-1)/2}}{q^{\kappa-1/2}} \leq \frac{(mn)^{1/4}}{q} & \text{if $mn\leq q^2$.} \end{cases} \] \end{lemma} \begin{proof} This is a direct consequence of the Weil-Estermann bound \eqref{weil} and lem\-ma \ref{lem_picard}. \end{proof} \begin{corollary}\label{lem_sumlambdafq} For any prime number $q$, we have \[ \sqrt{q}\sumh_{f\in\prim{\kappa}{q}}\lambda_f(q)\ll\frac{1}{q^{\delta_\kappa}} \] where \[ \delta_\kappa\coloneqq \begin{cases} \frac{\kappa-1}{2} & \text{if $\kappa\leq 10$ or $\kappa=14$} \\ \frac{5}{2} & \text{otherwise.} \end{cases} \] \end{corollary} \begin{proof}[\proofname{} of corollary~\ref{lem_sumlambdafq}] Let $\mathcal{K}=\{\kappa\in 2\N \colon 2\leq\kappa\leq 14,\, \kappa\neq 12\}$. By proposition~\ref{iwlusatr}, we have \begin{equation}\label{eq_termzero} \sumh_{f\in\prim{\kappa}{q}}\lambda_f(q)= \Delta_q(1,q)- \frac{\delta(\kappa \notin \mathcal{K})}{q\nu(q)} \sum_{\ell\mid q^\infty}\frac{\Delta_1(\ell^2,q)}{\ell}. 
\end{equation} The term $\delta(\kappa\notin\mathcal{K})$ comes from proposition~\ref{prop_orth} together with the fact that there are no cusp forms of weight $\kappa\in\mathcal{K}$ and level $1$. Lemma~\ref{deltaestimate} gives \begin{equation}\label{eq_termun} \Delta_q(1,q)\ll \frac{1}{q^{\kappa/2}} \end{equation} and lemma~\ref{lem_delun} gives \begin{equation}\label{eq_termdeux} \sum_{\ell\mid q^\infty}\frac{\Delta_1(\ell^2,q)}{\ell}\ll\frac{1}{ \sqrt{q} }. \end{equation} Since $\nu(q)>q$, the result follows from reporting \eqref{eq_termun} and \eqref{eq_termdeux} in \eqref{eq_termzero}. \end{proof} \begin{remark}\label{rem_moyun} In a very similar fashion, one can prove that \begin{equation}\label{eq_moyun} \muh[q]\left(\prim{\kappa}{q}\right) =\Eh[q](1) =1+O\left(\frac{1}{q^{\gamma_\kappa}}\right) \end{equation} where \[ \gamma_\kappa\coloneqq \begin{cases} \kappa-\frac{1}{2} & \text{if $\kappa\leq 10$ or $\kappa=14$} \\ 1 & \text{otherwise.} \end{cases} \] Corollary \ref{lem_sumlambdafq}, \eqref{eq_moyun} and \eqref{eq_removeps} imply \begin{equation}\label{eq_mbun} \Eheps[q](1) =1+O\left(\frac{1}{q^{\beta_\kappa}}\right) \end{equation} where \[ \beta_\kappa\coloneqq \begin{cases} \frac{\kappa-1}{2} & \text{if $\kappa\leq 10$ or $\kappa=14$} \\ 1 & \text{otherwise.} \end{cases} \] \end{remark} A direct consequence of lemma~\ref{lem_delun} is the following. \begin{lemma}\label{lem_old} Let $\alpha_1,\alpha_2,\beta_1,\beta_2,\gamma_1,\gamma_2,w$ be some nonnegative real numbers. Let $\Phi_1$ and $\Phi_2$ be in $\Schwartz_\nu(\R)$. Then, \begin{multline*} \sum_{\substack{p_1\in\prem\\ p_1\nmid q}} \sum_{\substack{p_2\in\prem\\ p_2\nmid q}} \frac{\log p_1}{p_1^{\alpha_1}}\frac{\log p_2}{p_2^{\alpha_2}} \widehat{\Phi_1}\left(\frac{\log p_1}{\log{\left(q^{\beta_1}\right)}}\right) \widehat{\Phi_2}\left(\frac{\log p_2}{\log{\left(q^{\beta_2}\right)}}\right) \sum_{\ell\mid q^\infty}\frac{\Delta_1(\ell^2p_1^{\gamma_1},p_2^{\gamma_2}q^w)}{\ell} \\ \ll q^{\delta\nu-w/2+\varepsilon} \end{multline*} with $\delta$ given in table~\ref{tab_vald}. \begin{table}[H] \setlength{\extrarowheight}{4pt} \begin{tabular}{|c|c|c|} \hline \backslashbox{$\alpha_2$}{$\alpha_1$} & $]0,1]$ & $[1,+\infty[$\\ \hline $]0,1]$ & $\beta_1(1-\alpha_1)+\beta_2(1-\alpha_2)$ & $\beta_2(1-\alpha_2)$ \\ \hline $[1,+\infty[$ & $\beta_1(1-\alpha_1)$ & $0$ \\ \hline \end{tabular} \caption{Values of $\delta$}\label{tab_vald} \end{table} \end{lemma} \section{Linear statistics for low-lying zeros} \label{one} \subsection{Density results for families of $L$-functions}\label{sec_densres} We briefly recall some well-known features that can be found in \cite{IwLuSa}. Let $\mathcal{F}$ be a family of $L$-functions indexed by the arithmetic conductor, namely \[ \mathcal{F}=\bigcup_{Q\geq 1}\mathcal{F}(Q) \] where the arithmetic conductor of any $L$-function in $\mathcal{F}(Q)$ is of order $Q$ in the logarithmic scale. It is expected that there is a symmetry group $G(\mathcal{F})$ of matrices of large rank endowed with a probability measure which can be associated to $\mathcal{F}$ such that the low-lying zeros of the $L$-functions in $\mathcal{F}$, namely the non-trivial zeros of height less than $1/\log{Q}$, are distributed like the eigenvalues of the matrices in $G(\mathcal{F})$.
In other words, there should exist a symmetry group $G(\mathcal{F})$ such that for any $\nu>0$ and any $\Phi\in\Schwartz_\nu(\R)$, \begin{multline*} \lim_{Q\to+\infty}\frac{1}{\#\mathcal{F}(Q)} \sum_{\pi\in\mathcal{F}(Q)}\sum_{\substack{ 0\leq\beta_\pi\leq 1 \\ \gamma_\pi\in\R \\ L\left(\pi,\beta_\pi+i\gamma_\pi\right)=0 }} \Phi\left(\frac{\log{Q}}{2i\pi}\left(\beta_\pi-\frac{1}{2}+i\gamma_\pi\right)\right) \\ = \int_{\R}\Phi(x)W_1(G(\mathcal{F}))(x)\dd x \end{multline*} where $W_1(G(\mathcal{F}))$ is the one-level density of the eigenvalues of $G(\mathcal{F})$. In this case, $\mathcal{F}$ is said to be of \emph{symmetry type} $G(\mathcal{F})$ and we say that we have proved a \emph{density result} for $\mathcal{F}$. For instance, the following densities are determined in \cite{KaSa}: \begin{align*} W_1(SO(\mathrm{even}))(x) &= 1+\frac{\sin{(2\pi x)}}{2\pi x},\\ W_1(O)(x) &= 1+\frac{1}{2}\delta_0(x),\\ W_1(SO(\mathrm{odd}))(x) &= 1-\frac{\sin{(2\pi x)}}{2\pi x}+\delta_0(x),\\ W_1(Sp)(x) &= 1-\frac{\sin{(2\pi x)}}{2\pi x} \end{align*} where $\delta_0$ is the Dirac distribution at $0$. According to Plancherel's formula, \[ \int_{\R}\Phi(x)W_1(G(\mathcal{F}))(x)\dd x=\int_{\R}\widehat{\Phi}(x)\widehat{W}_1(G(\mathcal{F}))(x)\dd x \] and we can check that \begin{align*} \widehat{W}_1(SO(\mathrm{even}))(x) &= \delta_0(x)+\frac{1}{2}\eta(x),\\ \widehat{W}_1(O)(x) &= \delta_0(x)+\frac{1}{2},\\ \widehat{W}_1(SO(\mathrm{odd}))(x) &= \delta_0(x)-\frac{1}{2}\eta(x)+1,\\ \widehat{W}_1(Sp)(x) &= \delta_0(x)-\frac{1}{2}\eta(x) \end{align*} where \[ \eta(x)\coloneqq \begin{cases} 1 & \text{if $\abs{x}<1$,} \\ \frac{1}{2} & \text{if $x=\pm 1$,} \\ 0 & \text{otherwise.} \end{cases} \] As a consequence, if we can only prove a density result for $\nu\leq 1$, the three orthogonal densities are indistinguishable although they are distinguishable from $Sp$. Thus, the challenge is to pass the natural barrier $\nu=1$. \subsection{Asymptotic expectation of the one-level density} The aim of this part is to prove a density result for the family \[ \mathcal{F}_r\coloneqq\bigcup_{q\in\prem}\left\{L(\sym^rf,s), f\in \prim{\kappa}{q}\right\} \] for any $r\geq 1$, which consists in proving the existence of, and computing, the asymptotic expectation $\Eh[\infty]\left(D_{1}[\Phi;r]\right)$ of $D_1[\Phi;r]\coloneqq\left(D_{1,q}[\Phi;r]\right)_{q\in\prem}$ for $\Phi$ in $\Schwartz_\nu(\R)$ with $\nu>0$ as large as possible, in order to be able to distinguish between the three orthogonal densities if $r$ is small enough. Recall that $E[\Phi;r]$ has been defined in proposition \ref{explicit}. \begin{theorem} \label{density1} Let $r\geq 1$ and $\Phi\in\Schwartz_\nu(\R)$. We assume that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any $f\in \prim{\kappa}{q}$ and also that $\theta$ is admissible. Let \[ \nu_{1,\mathrm{max}}(r,\kappa,\theta)\coloneqq\left(1-\frac{1}{2(\kappa-2\theta)}\right)\frac{2}{r^2}. \] If $\nu<\nu_{1,\mathrm{max}}(r,\kappa,\theta)$ then \[ \Eh[\infty]\left(D_{1}[\Phi;r]\right)=E[\Phi;r]. \] \end{theorem} \begin{remark} We remark that \begin{alignat}{2} \label{eq_pqcn} \nu_{1,\mathrm{max}}(r,\kappa,\theta_0) &= \left(1-\frac{16}{32\kappa-7}\right)\frac{2}{r^2}& \geq \frac{82}{57r^2}, \\ \nu_{1,\mathrm{max}}(r,\kappa,0) &= \left(1-\frac{1}{2\kappa}\right)\frac{2}{r^2}& \geq \frac{3}{2r^2} \end{alignat} and thus $\nu_{1,\mathrm{max}}(1,\kappa,\theta_0)>1$ whereas $\nu_{1,\mathrm{max}}(r,\kappa,\theta_0)\leq 1$ for any $r\geq 2$.
\end{remark} \begin{remark}\label{rem_symtyp} Note that \[ E[\Phi;r]=\int_{\R}\widehat{\Phi}(x)\left(\delta_0(x)+\frac{(-1)^{r+1}}{2}\right)\dd x. \] Thus, this theorem reveals that the symmetry type of $\mathcal{F}_r$ is \[ G(\mathcal{F}_r)= \begin{cases} Sp & \text{if $r$ is even,} \\ O & \text{if $r=1$,} \\ SO(\mathrm{even}) \text{ or } O \text{ or } SO(\mathrm{odd}) & \text{if $r\geq 3$ is odd.} \end{cases} \] Some additional comments are given in remark \ref{remark4} page \pageref{remark4}. \end{remark} \begin{proof}[\proofname{} of theorem \ref{density1}.] The proof is detailed and will be a model for the next density results. According to proposition \ref{explicit} and \eqref{eq_moyun} , we have \begin{multline} \label{explicitaverage} \Eh[q]\left(D_{1,q}[\Phi;r]\right)= E[\Phi;r] +\Eh[q]\left(P_q^1[\Phi;r]\right) \\ +\sum_{m=0}^{r-1}(-1)^m\Eh[q]\left(P_q^2[\Phi;r,m]\right)+O\left(\frac{1}{\log{\left(q^r\right)}}\right). \end{multline} The first term in \eqref{explicitaverage} is the main term given in the theorem. We now estimate the second term of \eqref{explicitaverage} \emph{via} the trace formula given in proposition \ref{iwlusatr}. \begin{equation}\label{eq_pu} \Eh[q]\left(P_q^1[\Phi;r]\right)= \PP{q,\mathrm{new}}1[\Phi;r]+\PP{q,\mathrm{old}}1[\Phi;r] \end{equation} where \begin{align*} \PP{q,\mathrm{new}}1[\Phi;r] &= -\frac{2}{\log{\left(q^r\right)}}\sum_{\substack{p\in\prem \\ p\nmid q}}\Delta_q(p^r,1)\frac{\log{p}}{\sqrt{p}} \widehat{\Phi}\left(\frac{\log{p}}{\log{\left(q^r\right)}}\right), \\ \PP{q,\mathrm{old}}1[\Phi;r] &= \frac{2}{q\log{\left(q^r\right)}} \sum_{\ell\mid q^\infty}\frac{1}{\ell}\sum_{\substack{p\in\prem \\ p\nmid q}}\Delta_1(p^r\ell^2,1) \frac{\log{p}}{\sqrt{p}}\widehat{\Phi}\left(\frac{\log{p}}{\log{\left(q^r\right)}}\right). \end{align*} Let us estimate the new part which can be written as \begin{multline*} \PP{q,\mathrm{new}}1[\Phi;r]= -\frac{2 (2\pi i^\kappa) }{\log{\left(q^r\right)}}\sum_{\substack{c\geq 1 \\ q\mid c}} \sum_{p\in\prem}\left(\frac{\log{p}}{\sqrt{p}}\delta_{q\nmid p}\un_{\left[1,q^{r\nu}\right]}(p)\right) \frac{S(p^r,1;c)}{c} \\ \times J_{\kappa-1}\left(\frac{4\pi\sqrt{p^r}}{c}\right)\widehat{\Phi}\left(\frac{\log{p}}{\log{\left(q^{r}\right)}}\right). \end{multline*} Thanks to \eqref{bessel}, the function \[ h(m;c)\coloneqq J_{\kappa-1}\left(\frac{4\pi\sqrt{m}}{c}\right) \] satisfies hypothesis $\prp(T;M,1,C)$ with \[ T(M,1,C)=\left(1+\frac{\sqrt{M}}{C}\right)^{1/2-\kappa} \left(\frac{\sqrt{M}}{C}\right)^{\kappa-1}. \] Hence, if $\nu\leq 2/r^2$ then corollary \ref{usefulsieve} leads to \begin{align} \label{eq_plusgrand} \PP{q,\mathrm{new}}1[\Phi;r] &\ll_\epsilon q^\epsilon \sumsh_{\substack{1\leq M\leq q^{\nu r^2}\\ C\geq q/2}} \left(1+\sqrt{\frac{M}{q}}\right) \left(\frac{\sqrt{M}}{C}\right)^{\kappa-1-2\theta} \\% &\ll_\epsilon q^\epsilon \sumsh_{1\leq M\leq q^{\nu r^2}} \left(\frac{M^{\frac{\kappa-1}{2}-\theta}}{q^{\kappa-1-2\theta}}+ \frac{M^{\frac{\kappa}{2}-\theta}}{q^{\kappa-\frac{1}{2}-2\theta}}\right) \end{align} thanks to \eqref{dyadic2}. Summing over $M$ \emph{via} \eqref{dyadic1} leads to \begin{equation}\label{eq_pun} \PP{q,\mathrm{new}}1[\Phi;r]\ll_\epsilon q^{\left(\frac{\kappa-1}{2}-\theta\right)(r^2\nu-2)+\epsilon}+ q^{\left(\frac{\kappa}{2}-\theta\right)r^2\nu-\left(\kappa-\frac{1}{2}-2\theta\right)+\epsilon} \end{equation} which is an admissible error term if $\nu<\nu_{1,\mathrm{max}}(r,\kappa,\theta)$. 
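For the reader's convenience, let us spell out how the threshold $\nu_{1,\mathrm{max}}(r,\kappa,\theta)$ arises from \eqref{eq_pun}: since $\frac{\kappa-1}{2}-\theta>0$, the first exponent is negative for any $\nu<2/r^2$, while the second exponent is negative precisely when \[ \nu<\frac{\kappa-\frac{1}{2}-2\theta}{\left(\frac{\kappa}{2}-\theta\right)r^{2}}=\left(1-\frac{1}{2(\kappa-2\theta)}\right)\frac{2}{r^{2}}=\nu_{1,\mathrm{max}}(r,\kappa,\theta), \] which is the restrictive condition since $\nu_{1,\mathrm{max}}(r,\kappa,\theta)<2/r^{2}$.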
According to lemma \ref{lem_old} (with $\alpha_2=+\infty$) we have \begin{equation}\label{eq_puo} \PP{q,\mathrm{old}}1[\Phi;r]\ll_\epsilon q^{\frac{r\nu}{2}-1+\epsilon} \end{equation} which is an admissible error term if $\nu<2/r$. Reporting \eqref{eq_pun} and \eqref{eq_puo} in \eqref{eq_pu} we obtain \begin{equation}\label{eq_psum} \Eh[q]\left(P^1_q[\Phi;r]\right)\ll\frac{1}{q^{\delta_1}} \end{equation} for some $\delta_1>0$ (depending on $\nu$ and $r$) as soon as $\nu<\nu_{1,\mathrm{max}}(r,\kappa,\theta)$. We now estimate the third term of \eqref{explicitaverage}. If $0\leq m\leq r-1$ then the trace formula given in proposition \ref{iwlusatr} implies that \begin{equation}\label{eq_pd} \Eh[q]\left(P_q^2[\Phi;r,m]\right)=\PP{q,\mathrm{new}}2[\Phi;r,m]+\PP{q,\mathrm{old}}2[\Phi;r,m] \end{equation} where \begin{align*} \PP{q,\mathrm{new}}2[\Phi;r,m] &= -\frac{2}{\log{\left(q^r\right)}}\sum_{\substack{p\in\prem \\ p\nmid q}} \Delta_q\left(p^{2(r-m)},1\right)\frac{\log{p}}{p}\widehat{\Phi} \left(\frac{\log{\left(p^2\right)}}{\log{\left(q^r\right)}}\right), \\ \PP{q,\mathrm{old}}2[\Phi;r,m] &= \frac{2}{q\log{\left(q^r\right)}} \sum_{\ell\mid q^\infty}\frac{1}{\ell}\sum_{\substack{p\in\prem \\ p\nmid q}} \Delta_1\left(p^{2(r-m)}\ell^2,1\right)\frac{\log{p}}{p}\widehat{\Phi} \left(\frac{\log{\left(p^2\right)}}{\log{\left(q^r\right)}}\right). \end{align*} Let us estimate the new part which can be written as \begin{multline*} \PP{q,\mathrm{new}}2[\Phi;r,m]=-\frac{2 (2\pi i^\kappa) }{\log{\left(q^r\right)}}\sum_{\substack{c\geq 1 \\ q\mid c}}\sum_{p\in\prem}\left(\frac{\log{p}}{\sqrt{p}}\delta_{q\nmid p}\un_{\left[1,q^{\frac{r\nu}{2}}\right]}(p)\right)\frac{S\left(p^{2(r-m)},1;c\right)}{c} \\ \times \frac{1}{\sqrt{p}}J_{\kappa-1}\left(\frac{4\pi\sqrt{p^{2(r-m)}}}{c}\right) \widehat{\Phi}\left( \frac{\log p}{\log q^{r/2}} \right). \end{multline*} The function \[ h(m,c)\coloneqq J_{\kappa-1}\left(\frac{4\pi\sqrt{m}}{c}\right)\times\frac{1}{m^{1/(4(r-m))}} \] satisfies hypothesis $\prp(T;M,1,C)$ with \[ T(M,1,C)=\left(1+\frac{\sqrt{M}}{C}\right)^{1/2-\kappa} \left(\frac{\sqrt{M}}{C}\right)^{\kappa-1} \frac{1}{M^{1/(4(r-m))}}. \] Hence, if $\nu\leq 2/r^2$ then corollary~\ref{usefulsieve} leads to \[ \PP{q,\mathrm{new}}2[\Phi;r,m] \ll_\epsilon q^\epsilon \sumsh_{\substack{M\leq q^{\nu r(r-m)}\\ C\geq q/2}} \frac{1}{(M)^{1/(4r-4m)}} \left(\frac{\sqrt{M}}{C}\right)^{\kappa-1-2\theta} \left(1+\sqrt{\frac{M}{q}}\right). \] This is smaller than the bound given in \eqref{eq_plusgrand} and hence is an admissible error term if $\nu<\nu_{1,\mathrm{max}}(r,\kappa,\theta)$. According to lemma \ref{lem_old}, we have \begin{equation}\label{eq_pdo} \PP{q,\mathrm{old}}2[\Phi;r]\ll_\epsilon q ^{-1+\epsilon}. \end{equation} We obtain \begin{equation}\label{eq_psdm} \Eh[q]\left(P^2_q[\Phi;r ,m ]\right)\ll\frac{1}{q^{\delta_2}} \end{equation} for some $\delta_2>0$ (depending on $\nu$ and $r$) as soon as $\nu<\nu_{1,\mathrm{max}}(r,\kappa,\theta)$. Finally, reporting \eqref{eq_psdm} and \eqref{eq_psum} in \eqref{explicitaverage}, we get \begin{equation}\label{eq_mhe} \Eh[q]\left(D_{1,q}[\Phi;r]\right)= E[\Phi;r]+O\left(\frac{1}{\log q}\right). \end{equation} \end{proof} \subsection{Signed asymptotic expectation of the one-level density} In this part, we prove some density results for subfamilies of $\mathcal{F}_r$ on which the sign of the functional equation remains constant. The two subfamilies are defined by \[ \mathcal{F}_r^{\epsilon}\coloneqq\bigcup_{q\in\prem}\left\{L(\sym^rf,s), f\in \primeps{\kappa}{q}\right\}. 
\] Indeed, we compute the asymptotic expectation $\Eheps[\infty]\left(D_{1}[\Phi;r]\right)$. \begin{theorem} \label{density2} Let $r\geq 1$ be an odd integer, $\epsilon=\pm 1$ and $\Phi\in\Schwartz_\nu(\R)$. We assume that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any $f\in \prim{\kappa}{q}$ and also that $\theta$ is admissible. Let \[ \nu_{1,\mathrm{max}}^\epsilon(r,\kappa,\theta)\coloneqq\inf{\left(\nu_{1,\mathrm{max}}(r,\kappa,\theta),\frac{3}{r(r+2)}\right).} \] If $\nu<\nu_{1,\mathrm{max}}^\epsilon(r,\kappa,\theta)$ then \[ \Eheps[\infty]\left(D_{1}[\Phi;r]\right)=E[\Phi;r]. \] \end{theorem} Some comments are given in remark \ref{remark5} page \pageref{remark5}. \begin{proof}[\proofname{} of theorem \ref{density2}] By \eqref{eq_removeps}, we have \begin{equation}\label{eq_fmnr} \Eheps[q]\left(D_{1,q}[\Phi;r]\right) = \Eh[q]\left(D_{1,q}[\Phi;r]\right)- \epsilon\times\epsilon(k,r)\sqrt{q}\Eh[q]\left(\lambda_.(q)D_{1,q}[\Phi;r]\right). \end{equation} The first term is the main term of the theorem thanks to theorem \ref{density1}. According to proposition \ref{explicit} and corollary~\ref{lem_sumlambdafq} , the second term (without the epsilon factors) is given by \begin{multline} \label{start} \sqrt{q}\Eh[q]\left(\lambda_.(q)P_q^1[\Phi;r]\right) \\% +\sqrt{q}\sum_{m=0}^{r-1}(-1)^m\Eh[q]\left(\lambda_.(q)P_q^2[\Phi;r,m]\right)+O\left(\frac{1}{\log{\left(q^r\right)}}\right). \end{multline} Let us focus on the first term in \eqref{start} knowing that the same discussion holds for the second term with even better results on $\nu$. We have \begin{equation}\label{eq_resun} \sqrt{q}\Eh[q]\left(\lambda_.(q)P_q^1[\Phi;r]\right)=\sqrt{q}\PP{q,\mathrm{new}}1[\Phi;r]+\sqrt{q}\PP{q,\mathrm{old}}1[\Phi;r] \end{equation} where \begin{align*} \PP{q,\mathrm{new}}1[\Phi;r] &= -\frac{2}{\log{\left(q^r\right)}}\sum_{\substack{p\in\prem \\ p\nmid q}}\Delta_q\left(p^{r}q,1\right)\frac{\log{p}}{\sqrt{p}}\widehat{\Phi}\left(\frac{\log{p}}{\log{\left(q^r\right)}}\right), \\% \PP{q,\mathrm{old}}1[\Phi;r] &= \frac{2}{q\nu(q)\log{\left(q^r\right)}}\sum_{\ell\mid q^\infty}\frac{1}{\ell}\sum_{\substack{p\in\prem \\% p\nmid q}}\Delta_1\left(p^{r}\ell^2,q\right)\frac{\log{p}}{\sqrt{p}}\widehat{\Phi}\left(\frac{\log{p}}{\log{\left(q^r\right)}}\right). \end{align*} Lemma \ref{lem_old} implies \begin{equation}\label{eq_rqpuqo} \sqrt{q}\PP{q,\mathrm{old}}1[\Phi;r]\ll q^{(\nu r-4)/2} \end{equation} which is an admissible error term if $\nu<4/r$. The new part is given by \[ \PP{q,\mathrm{new}}1[\Phi;r] = -\frac{2 (2\pi i^\kappa) }{\log{\left(q^r\right)}} \sum_{\substack{c\geq 1 \\% q\mid c}}\sum_{\substack{p\in\prem \\% q\nmid p}}\frac{\log{p}}{\sqrt{p}}\frac{S\left(p^{r}q,1;c\right)}{c} J_{\kappa-1}\left(\frac{4\pi\sqrt{p^{r}q}}{c}\right) \widehat{\Phi}\left(\frac{\log{\left(p\right)}}{\log{\left(q^{r}\right)}}\right). 
\] and can be written as \begin{equation*} -\frac{2(2\pi i^\kappa)}{\log{\left(q^r\right)}}\sum_{\substack{c\geq 1 \\ q\mid c}}\sum_{m\geqslant 1}\widehat{a}_m\frac{S(m,1;c)}{c}J_{\kappa-1}\left(\frac{4\pi\sqrt{m}}{c}\right)\widehat{\Phi}\left(\frac{\log{\left(m/q\right)}}{\log{(q^{r^2})}}\right) \end{equation*} where \begin{equation*} \widehat{a}_m\coloneqq\un_{[1,q^{1+\nu r^2}]}\begin{cases} 0 & \text{if $q\nmid m$ or $m\neq p^rq$ for some $p\neq q$ in $\mathcal{P}$}, \\ \frac{\log{p}}{\sqrt{p}} & \text{if $m=p^rq$ for some $p\neq q$ in $\mathcal{P}$.} \end{cases} \end{equation*} Thus, if $\nu\leq 1/r^2$ then we obtain \[ \PP{q,\mathrm{new}}1[\Phi;r,m] \ll_\epsilon q^\epsilon \sumsh_{\substack{M\leq q^{1+\nu r^2}\\ C\geq q/2}} \left(\frac{\sqrt{M}}{C}\right)^{\kappa-1-2\theta} \left(1+\sqrt{\frac{M}{q}}\right) \] as in the proof of corollary~\ref{usefulsieve}. Summing over $C$ \emph{via} \eqref{dyadic2} gives \[ \PP{q,\mathrm{new}}1[\Phi;r ,m ]\ll_\epsilon q^\epsilon\sumsh_{M\leq q^{1+r^2\nu}} \left(\frac{M^{\frac{\kappa-1}{2}-\theta}}{q^{\kappa-1-2\theta}}+ \frac{M^{\frac{\kappa}{2}-\theta}}{q^{\kappa-\frac{1}{2}-2\theta}}\right). \] Summing over $M$ \emph{via} \eqref{dyadic1} leads to \begin{equation}\label{eq_rqpuqn} \PP{q,\mathrm{new}}1[\Phi;r ,m ]\ll_\epsilon q^{\left(\frac{\kappa-1}{2}-\theta\right)r^2\nu-(\frac{\kappa-1}{2}-\theta)+\epsilon}+ q^{\left(\frac{\kappa}{2}-\theta\right)r^2\nu-\left(\frac{\kappa-1}{2}-\theta\right)+\epsilon} \end{equation} which is an admissible error term if $\nu<\frac{1}{r^2}\left(1-\frac{1}{\kappa-2\theta}\right)$. \end{proof} \section{Quadratic statistics for low-lying zeros} \label{two} \subsection{Asymptotic expectation of the two-level density and asymptotic variance} \label{sec_twoandvar} \begin{definition}\label{def_tld} Let $f\in\prim{\kappa}{q}$ and $\Phi_1$, $\Phi_2$ in $\Schwartz_\nu(\R)$. The \emph{two-level density} (relatively to $\Phi_1$ and $\Phi_2$) of $\sym^rf$ is \[ D_{2,q}[\Phi_1,\Phi_2;r](f) \coloneqq \sum_{\substack{(j_1,j_2)\in\mathcal{E}(f,r)^2\\ j_1\neq\pm j_2}} \Phi_1\left(\widehat{\rho}_{f,r}^{(j_1)}\right) \Phi_2\left(\widehat{\rho}_{f,r}^{(j_2)}\right). \] \end{definition} \begin{remark} In this definition, it is important to note that the condition $j_1\neq j_2$ \emph{does not imply} that $\widehat{\rho}_{f,r}^{(j_1)}\neq \widehat{\rho}_{f,r}^{(j_2)}$. It only implies this if the zeros are simple. Recall however that some $L$-functions of elliptic curves (hence of modular forms) have multiple zeros at the critical point \cite{MR848380, MR777279}. \end{remark} The following lemma is an immediate consequence of definition~\ref{def_tld}. \begin{lemma}\label{lem_tld} Let $f\in\prim{\kappa}{q}$ and $\Phi_1$, $\Phi_2$ in $\Schwartz_\nu(\R)$. Then, \begin{multline*} D_{2,q}[\Phi_1,\Phi_2;r](f)= D_{1,q}[\Phi_1;r](f)D_{1,q}[\Phi_2;r](f) -2D_{1,q}[\Phi_1\Phi_2;r](f)\\ + \un_{\primimpair{\kappa}{q}}(f) \times\Phi_1(0)\Phi_2(0). \end{multline*} \end{lemma} We first evaluate the product of one-level statistics on average. \begin{lemma}\label{lem_conesson} Let $r\geq 1$. Let $\Phi_1$ and $\Phi_2$ in $\Schwartz_\nu(\R)$. We assume that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any $f\in \prim{\kappa}{q}$ and also that $\theta$ is admissible. If $\;\nu<1/r^2$ then \[ \Eh[\infty]\left( D_{1}[\Phi_1;r]D_{1}[\Phi_2;r] \right) = E[\Phi_1;r]E[\Phi_2;r]+ 2\int_\R\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u. 
\] \end{lemma} \begin{remark} Since theorem~\ref{density1} implies that \begin{multline*} \Eh[\infty]\left( D_{1}[\Phi_1;r]D_{1}[\Phi_2;r] \right) - E[\Phi_1;r]E[\Phi_2;r] =\\ \Eh[\infty]\left( D_{1}[\Phi_1;r]D_{1}[\Phi_2;r] \right) - \Eh[\infty]\left(D_{1}[\Phi_1;r]\right) \Eh[\infty]\left(D_{1}[\Phi_2;r]\right), \end{multline*} lemma \ref{lem_conesson} reveals that the term \[ \Ch[\infty]\left(D_{1}[\Phi_1;r],D_{1}[\Phi_2;r]\right)\coloneqq 2\int_\R\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u \] measures the dependence between $D_{1}[\Phi_1;r]$ and $D_{1}[\Phi_2;r]$. This term is the \emph{asymptotic covariance} of $D_{1}[\Phi_1;r]$ and $D_{1}[\Phi_2;r]$. In particular, taking $\Phi_1=\Phi_2$, we obtain the asymptotic variance. \end{remark} \begin{theorem} \label{th_variance} Let $\Phi\in\Schwartz_{\nu}(\R)$. If $\nu<1/r^2$ then the asymptotic variance of the random variable $D_{1,q}[\Phi;r]$ is \[ \Vh[\infty]\left(D_1[\Phi;r]\right)=2\int_\R\abs{u}\widehat{\Phi}^2(u)\dd u. \] \end{theorem} \begin{proof}[\proofname{} of lemma~\ref{lem_conesson}] From proposition~\ref{explicit}, we obtain \begin{multline} \label{eq_prodD} \Eh[q]\left(D_{1,q}[\Phi_1;r]D_{1,q}[\Phi_2;r]\right) = E[\Phi_1;r]E[\Phi_2;r]+\Ch[q] \\ +\sum_{\substack{(i,j)\in\{1,2\}^2\\ i\neq j}} \sum_{m=0}^{r-1}(-1)^m\Eh[q]\left(P_q^1[\Phi_i;r]P_q^2[\Phi_j;r,m]\right) \\ +\sum_{m_1=0}^{r-1}\sum_{m_2=0}^{r-1}(-1)^{m_1+m_2}\Eh[q]\left(P_q^2[\Phi_1;r,m_1]P_q^2[\Phi_2;r,m_2]\right) +O\left(\frac{1}{\log{\left(q^r\right)}}\right) \end{multline} with \[ \Ch[q]\coloneqq \Eh[q]\left(P_q^1[\Phi_1;r]P_q^1[\Phi_2;r]\right). \] The error term is evaluated by use of theorem~\ref{density1} and equations~\eqref{eq_mesasy}, \eqref{eq_psum} and \eqref{eq_psdm}. We first compute $\Ch[q]$. Using proposition~\ref{iwlusatr}, we compute $\Ch[q]=E^n-4E^o$ with \begin{equation*} E^n\coloneqq \frac{4}{\log^2{\left(q^r\right)}} \sum_{\substack{p_1\in\prem\\ p_1\nmid q}} \sum_{\substack{p_2\in\prem\\ p_2\nmid q}} \frac{\log p_1}{\sqrt{p_1}}\frac{\log p_2}{\sqrt{p_2}} \widehat{\Phi_1}\left(\frac{\log p_1}{\log{\left(q^r\right)}}\right) \widehat{\Phi_2}\left(\frac{\log p_2}{\log{\left(q^r\right)}}\right) \Delta_q(p_1^r,p_2^r) \end{equation*} and \begin{multline*} E^o\coloneqq \frac{1}{q\log^2{\left(q^r\right)}} \\% \times \sum_{\substack{p_1\in\prem\\ p_1\nmid q}} \sum_{\substack{p_2\in\prem\\ p_2\nmid q}} \frac{\log p_1}{\sqrt{p_1}}\frac{\log p_2}{\sqrt{p_2}} \widehat{\Phi_1}\left(\frac{\log p_1}{\log{\left(q^r\right)}}\right) \widehat{\Phi_2}\left(\frac{\log p_2}{\log{\left(q^r\right)}}\right) \sum_{\ell\mid q^\infty}\frac{\Delta_1(\ell^2p_1^r,p_2^r)}{\ell}. \end{multline*} By definition of the $\Delta$-symbol, we write $E^n=E^n_{\mathrm{p}}+\frac{8\pi i^{\kappa}}{\log^2{\left(q^r\right)}}E^n_{\mathrm{e}}$ with \[ E^n_{\mathrm{p}}\coloneqq \frac{4}{\log^2{(q^r)}} \sum_{\substack{p\in\prem\\ p\nmid q}} \frac{\log^2 p}{p}\left(\widehat{\Phi_1}\widehat{\Phi_2}\right) \left(\frac{\log p}{\log{(q^r)}}\right) \] and \begin{multline*} E^n_{\mathrm{e}}\coloneqq \sum_{\substack{c\geq 1\\ q\mid c}} \sum_{\substack{p_1\in\prem\\ p_1\nmid q}} \sum_{\substack{p_2\in\prem\\ p_2\nmid q}} \frac{\log p_1}{\sqrt{p_1}}\frac{\log p_2}{\sqrt{p_2}} \widehat{\Phi_1}\left(\frac{\log p_1}{\log{\left(q^r\right)}}\right) \widehat{\Phi_2}\left(\frac{\log p_2}{\log{\left(q^r\right)}}\right) \\ \times \frac{S(p_1^r,p_2^r;c)}{c} J_{\kappa-1}\left(\frac{4\pi\sqrt{p_1^rp_2^r}}{c}\right). 
\end{multline*} We remove the condition $p\nmid q$ from $E^n_{\mathrm{p}}$ at an admissible cost and obtain, after integration by parts, \begin{equation}\label{eq_epn} E^n_{\mathrm{p}}=2\int_\R\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u +O\left(\frac{1}{\log^2{(q^r)}}\right). \end{equation} Using corollary~\ref{usefulsieve}, we get \begin{equation}\label{eq_een} E^n_{\mathrm{e}}\ll\frac{1}{\log^2{\left(q^r\right)}} \end{equation} as soon as $\nu\leq 1/r^2$. Finally, using lemma~\ref{lem_old}, we see that $E^o$ is an admissible error term for $\nu<1/r$ so that equations \eqref{eq_epn} and \eqref{eq_een} lead to \begin{equation}\label{eq_en} \Ch[q]=2\int_\R\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u +O\left(\frac{1}{\log^2{\left(q^r\right)}}\right). \end{equation} Let $\{i,j\}=\{1,2\}$. We prove next that each $\Eh[q]\left(P^1_q[\Phi_i;r]P^2_q[\Phi_j;r,m]\right)$ is an error term when $\nu<1/r^2$. Using proposition~\ref{iwlusatr} and lemma~\ref{lem_old} we have \begin{multline*} \Eh[q]\left(P^1_q[\Phi_i;r]P^2_q[\Phi_j;r,m]\right) = \frac{8\pi i^\kappa}{\log^2{(q^r)}} \sum_{\substack{c\geq 1\\ q\mid c}} \sum_{\substack{p_1\in\prem\\ p_1\nmid q}} \sum_{\substack{p_2\in\prem\\ p_2\nmid q}} \frac{\log p_1}{\sqrt{p_1}}\frac{\log p_2}{p_2} \widehat{\Phi_i}\left(\frac{\log p_1}{\log{\left(q^r\right)}}\right) \\ \times \widehat{\Phi_j}\left(\frac{\log p_2}{\log{\left(q^{r/2}\right)}}\right) \frac{S(p_1^r,p_2^{2r-2m};c)}{c} J_{\kappa-1}\left(\frac{4\pi\sqrt{p_1^rp_2^{2r-2m}}}{c}\right) +O\left(\frac{1}{\log{\left(q^r\right)}}\right)^2. \end{multline*} We use corollary~\ref{usefulsieve} to conclude that \begin{equation}\label{eq_utile} \Eh[q]\left(P^1_q[\Phi_i;r]P^2_q[\Phi_j;r,m]\right) \ll\frac{1}{\log q} \end{equation} when $\nu<1/r^2$. Finally, $\Eh[q]\left(P^2_q[\Phi_1;r,m_1]P^2_q[\Phi_2;r,m_2]\right)$ is shown to be an error term in the same way. \end{proof} Using lemmas~\ref{lem_tld} and \ref{lem_conesson}, theorem~\ref{density1}, hypothesis~$\Nice(r,f)$ and remark~\ref{rem_moyun}, we prove the following theorem. \begin{theorem}\label{thm_twodens} Let $r\geq 1$. Let $\Phi_1$ and $\Phi_2$ in $\Schwartz_\nu(\R)$. We assume that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any $f\in \prim{\kappa}{q}$ and also that $\theta$ is admissible. If $\;\nu<\nu_{2,\mathrm{max}}(r,\kappa,\theta)$ then \begin{multline*} \Eh[\infty]\left(D_2[\Phi_1,\Phi_2;r]\right) = \left[ \widehat{\Phi_1}(0) + \frac{(-1)^{r+1}}{2}\Phi_1(0) \right] \left[ \widehat{\Phi_2}(0) + \frac{(-1)^{r+1}}{2}\Phi_2(0) \right] \\ +2\int_\R\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u -2\widehat{\Phi_1\Phi_2}(0) +\left((-1)^{r}+ \frac{ \un_{2\N+1}(r) }{2}\right) \Phi_1(0)\Phi_2(0). \end{multline*} \end{theorem} Some comments are given in remark \ref{remark6} page \pageref{remark6}. \subsection{Signed asymptotic expectation of the two-level density and signed asymptotic variance} In this part, $r$ is \emph{odd}. \begin{lemma}\label{lem_signedconesson} Let $\Phi_1$ and $\Phi_2$ in $\Schwartz_\nu(\R)$. If $\nu<1/(2r^2)$ then \[ \Eheps[\infty]\left( D_{1}[\Phi_1;r]D_{1}[\Phi_2;r] \right) = E[\Phi_1;r]E[\Phi_2;r]+ 2\int_\R\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u. 
\] \end{lemma} \begin{remark} By theorem~\ref{density2} and lemma~\ref{lem_signedconesson} we have \begin{multline*} \Eheps[\infty]\left( D_{1}[\Phi_1;r]D_{1}[\Phi_2;r] \right) - E[\Phi_1;r]E[\Phi_2;r] =\\ \Eheps[\infty]\left( D_{1}[\Phi_1;r]D_{1}[\Phi_2;r] \right) - \Eheps[\infty]\left(D_{1}[\Phi_1;r]\right) \Eheps[\infty]\left(D_{1}[\Phi_2;r]\right). \end{multline*} Thus, \[ \Cheps[\infty]\left(D_{1}[\Phi_1;r],D_{1}[\Phi_2;r]\right)\coloneqq 2\int_\R\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u \] is the \emph{signed asymptotic covariance} of $D_{1}[\Phi_1;r]$ and $D_{1}[\Phi_2;r]$. In particular, taking $\Phi_1=\Phi_2$, we obtain the signed asymptotic variance. \end{remark} \begin{theorem} Let $\Phi\in\Schwartz_{\nu}(\R)$. If $\nu<1/(2r^2)$ then the signed asymptotic variance of $D_1[\Phi;r]$ is \[ \Vheps[\infty]\left(D_1[\Phi;r]\right)=2\int_\R\abs{u}\widehat{\Phi}^2(u)\dd u. \] \end{theorem} \begin{proof}[\proofname{} of lemma~\ref{lem_signedconesson}] From proposition~\ref{explicit} and \eqref{eq_mbun}, we obtain \begin{multline} \label{eq_resu} \Eheps[q]\left(D_{1,q}[\Phi_1;r]D_{1,q}[\Phi_2;r]\right) = E[\Phi_1;r]E[\Phi_2;r] +\Cheps[q] \\% + \sum_{\substack{(i,j)\in\{1,2\}^2\\ i\neq j}} \sum_{m=0}^{r-1}(-1)^m\Eheps[q]\left(P_q^1[\Phi_i;r]P_q^2[\Phi_j;r,m]\right) \\ +\sum_{m_1=0}^{r-1}\sum_{m_2=0}^{r-1}(-1)^{m_1+m_2}\Eheps[q]\left(P_q^2[\Phi_1;r,m_1]P_q^2[\Phi_2;r,m_2]\right) +O\left(\frac{1}{\log{\left(q^r\right)}}\right) \end{multline} with \[ \Cheps[q]\coloneqq \Eheps[q]\left(P_q^1[\Phi_1;r]P_q^1[\Phi_2;r]\right). \] Assume that $\nu<1/r^2$. Then equations~\eqref{eq_removeps}, \eqref{eq_en} and proposition~\ref{iwlusatr} lead to \begin{equation}\label{eq_valCheps} \Cheps[q]=2\int_{\R}\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u -\epsilon\times\epsilon(\kappa,r)(G^n-4G^o) \end{equation} with \[ G^n\coloneqq \frac{4\sqrt{q}}{\log^2{\left(q^r\right)}} \sum_{\substack{p_1\in\prem\\ p_1\nmid q}} \sum_{\substack{p_2\in\prem\\ p_2\nmid q}} \frac{\log p_1}{\sqrt{p_1}} \frac{\log p_2}{\sqrt{p_2}} \widehat{\Phi_1}\left(\frac{\log p_1}{\log{\left(q^r\right)}}\right) \widehat{\Phi_2}\left(\frac{\log p_2}{\log{\left(q^r\right)}}\right) \Delta_q\left(p_1^rq,p_2^r\right) \] and \begin{multline} G^o\coloneqq \frac{1}{\sqrt{q}\log^2{\left(q^r\right)}} \\% \times \sum_{\substack{p_1\in\prem\\ p_1\nmid q}} \sum_{\substack{p_2\in\prem\\ p_2\nmid q}} \frac{\log p_1}{\sqrt{p_1}} \frac{\log p_2}{\sqrt{p_2}} \widehat{\Phi_1}\left(\frac{\log p_1}{\log{\left(q^r\right)}}\right) \widehat{\Phi_2}\left(\frac{\log p_2}{\log{\left(q^r\right)}}\right) \sum_{\ell\mid q^\infty} \frac{\Delta_q\left(\ell^2p_1^r,p_2^rq\right)}{\ell}. \end{multline} Lemma~\ref{deltaestimate} implies that if $\nu<1/(2r^2)$ then \begin{equation}\label{eq_majoGn} G^n\ll \frac{q^{\nu r[r(\kappa-1)+1]/2}}{q^{(\kappa-1)/2}} \end{equation} hence $G^n$ is an error term as soon as $\nu\leq 1/(2r^2)$. Lemma~\ref{lem_old} implies \begin{equation}\label{eq_majoGo} G^o\ll q^{-3/2+\nu r+\epsilon} \end{equation} which is an error term. Reporting equations~\eqref{eq_majoGn} and \eqref{eq_majoGo} in \eqref{eq_valCheps} we obtain \begin{equation}\label{eq_unos} \Cheps[\infty]= 2\int_{\mathbb{R}}\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u \end{equation} for $\nu\leq 1/(2r(r+2))$. Next, we prove that each $\Eheps[q]\left(P_q^1[\Phi_i;r]P_q^2[\Phi_j;r,m]\right)$ is an error term as soon as $\nu\leq 1/(2r^2)$. 
From equations~\eqref{eq_removeps} and \eqref{eq_utile}, we obtain \begin{multline}\label{eq_signedsecond} \Eheps[q]\left(P_q^1[\Phi_i;r]P_q^2[\Phi_j;r,m]\right) =\\ -\epsilon\times\epsilon(\kappa,r)\sqrt{q}\sumh_{f\in\prim{\kappa}{q}} \lambda_f(q)P_q^1[\Phi_i;r]P_q^2[\Phi_j;r,m] +O\left(\frac{1}{\log q)}\right). \end{multline} We use proposition~\ref{iwlusatr} and lemmas~\ref{lem_old} and \ref{deltaestimate} to have \begin{multline}\label{eq_signedinter} \sqrt{q}\sumh_{f\in\prim{\kappa}{q}} \lambda_f(q)P_q^1[\Phi_i;r]P_q^2[\Phi_j;r,m] \ll\\% \frac{q^{\nu r(2r-m+2)/4-1/4}}{\log^2q}+ \frac{q^{(\nu r-1)/2+\epsilon}}{\log q}. \end{multline} It follows from ~\eqref{eq_signedinter} and \eqref{eq_signedsecond} that \begin{equation}\label{eq_dos} \Eheps[\infty]\left(P_q^1[\Phi_i;r]P_q^2[\Phi_j;r,m]\right)=0 \end{equation} for $\nu\leq 1/(2r(r+1))$. In the same way, we have, for $\nu$ in the previous range, \begin{equation}\label{eq_tres} \Eheps[\infty]\left(P_q^2[\Phi_1;r,m_1]P_q^2[\Phi_2;r,m_2]\right)=0. \end{equation} Reporting \eqref{eq_unos}, \eqref{eq_dos} and \eqref{eq_tres} in \eqref{eq_resu}, we have the announced result. \end{proof} Using lemmas~\ref{lem_tld}, \ref{lem_signedconesson}, theorem~\ref{density2}, hypothesis~$\Nice(r,f)$ and \eqref{eq_mbun}, we prove the following theorem. \begin{theorem}\label{thm_signedtwodens} Let $f\in\prim{\kappa}{q}$ and $\Phi_1$, $\Phi_2$ in $\Schwartz_\nu(\R)$. If $\nu<1/(2r(r+1))$ then \begin{multline*} \Eheps[\infty]\left(D_2[\Phi_1,\Phi_2;r]\right) = \left[ \widehat{\Phi_1}(0) + \frac{1}{2}\Phi_1(0) \right] \left[ \widehat{\Phi_2}(0) + \frac{1}{2}\Phi_2(0) \right] \\ +2\int_\R\abs{u}\widehat{\Phi_1}(u)\widehat{\Phi_2}(u)\dd u -2\widehat{\Phi_1\Phi_2}(0) -\Phi_1(0)\Phi_2(0) \\ + \un_{\{-1\}}(\epsilon) \Phi_1(0)\Phi_2(0). \end{multline*} \end{theorem} \begin{remark} Remark~\ref{rem_symtyp} together with theorem~\ref{thm_signedtwodens} and a result of Katz \& Sarnak (see \cite[Theorem A.D.2.2]{KaSa} or \cite[Theorem 3.2]{Mil}) imply that the symmetry type of $\mathcal{F}_r^{\epsilon}$ is as in table~\ref{tab_symty}. Some additional comments are given in remark \ref{remark2} page \pageref{remark2}. \begin{table}[ht] \setlength{\extrarowheight}{4pt} \begin{tabular}{|c|c|c|} \hline \backslashbox{$\epsilon$}{$r$} & even & odd\\ \hline $-1$ & & $SO(\mathrm{odd})$ \\ \hline $1$ & $Sp$ & $SO(\mathrm{even})$\\ \hline \end{tabular} \caption{Symmetry type of $\mathcal{F}_r^{\epsilon}$}\label{tab_symty} \end{table} \end{remark} \section{First asymptotic moments of the one-level density} \label{momentt} In this section, we compute the asymptotic $m$-th moment of the one level density namely \begin{equation*} \Mh[\infty,m]\left(D_{1,q}[\Phi;r]\right)\coloneqq\lim_{\substack{q\in\mathcal{P} \\ q\to+\infty}}\Mh[q,m]\left(D_{1,q}[\Phi;r]\right) \end{equation*} where \begin{equation*} \Mh[q,m]\left(D_{1,q}[\Phi;r]\right)= \Eh[q] \left( \left(D_{1,q}[\Phi;r]-\Eh[q](D_{1,q}[\Phi;r])\right)^m \right) \end{equation*} for $m$ small enough (regarding to the size of the support of $\Phi$). The end of this section is devoted to the proof of theorem \ref{thm_F}. Note that we can assume that $m\geq 3$ since the work has already been done for $m=1$ and $m=2$. 
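Before carrying out the computation, let us record that the combinatorial factor $\frac{\alpha!}{2^{\alpha/2}\left(\frac{\alpha}{2}\right)!}$ appearing in proposition~\ref{moments} below equals, for even $\alpha$, the double factorial $(\alpha-1)!!$, that is, the $\alpha$-th moment of a standard Gaussian random variable. The following short numerical sketch is a purely illustrative aside (written in Python, with function names of our own choosing) and plays no role in the proofs; it checks this identity for small even $\alpha$, the Gaussian moment being estimated by Monte Carlo sampling.
\begin{verbatim}
# Illustrative aside (not part of the argument): for even alpha the factor
# alpha!/(2^(alpha/2) * (alpha/2)!) equals (alpha-1)!!, which is the alpha-th
# moment of a standard Gaussian random variable.
import math
import random

def combinatorial_factor(alpha):
    # alpha!/(2^(alpha/2) * (alpha/2)!) for even alpha
    return math.factorial(alpha) // (2 ** (alpha // 2) * math.factorial(alpha // 2))

def double_factorial(n):
    return 1 if n <= 0 else n * double_factorial(n - 2)

def gaussian_moment(alpha, samples=200000, seed=1):
    # Monte Carlo estimate of E[Z^alpha] for Z ~ N(0,1)
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) ** alpha for _ in range(samples)) / samples

for alpha in (2, 4, 6, 8):
    print(alpha, combinatorial_factor(alpha), double_factorial(alpha - 1),
          round(gaussian_moment(alpha), 1))
\end{verbatim}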
Thanks to equation~\eqref{eq_mhe} and proposition~\ref{explicit}, we have \begin{align} \Mh[q,m]\left(D_{1,q}[\Phi;r]\right) & = \sum_{\ell=0}^m\binom{m}{\ell}\Eh[q]\left(P_q^1[\Phi;r]^{m-\ell}\left(P_q^2[\Phi;r]+ O\left(\frac{1}{\log q}\right)\right)^\ell\right) \\ \label{eq_evmom} \\ & = \sum_{\substack{0\leq\ell\leq m \\ 0\leq\alpha\leq\ell}}\binom{m}{\ell}\binom{\ell}{\alpha}R(q)^{\ell-\alpha} \Eh[q]\left(P_q^1[\Phi;r]^{m-\ell}P_q^2[\Phi;r]^{\alpha}\right) \end{align} where \begin{align} P_q^2[\Phi;r](f) & \coloneqq -\frac{2}{\log(q^r)} \sum_{j=0}^{r-1}(-1)^j \sum_{\substack{p\in\prem\\ p\nmid q}} \lambda_f\left(p^{2(r-j)}\right)\frac{\log p}{p}\widehat{\Phi}\left(\frac{2\log p}{\log(q^r)}\right) \\ \label{eq_tard} & = -\frac{2}{\log(q^r)}\sum_{j=1}^{r}(-1)^{r-j}\sum_{\substack{p\in\prem\\ p\nmid q}}\lambda_f\left(p^{2j}\right)\frac{\log p}{p}\widehat{\Phi}\left(\frac{2\log p}{\log(q^r)}\right) \end{align} and $R$ is a positive function satisfying \[ R(q)\ll\frac{1}{\log q}. \] Thus, an asymptotic formula for $\Mh[q,m]\left(D_{1,q}[\Phi;r]\right)$ directly follows from the next proposition. \begin{proposition} \label{moments} Let $r\geq 1$ be any integer. We assume that hypothesis $\Nice(r,f)$ holds for any prime number $q$ and any primitive holomorphic cusp form of level $q$ and even weight $\kappa$. Let $\alpha\geq 0$ and $\ell\geq 0$ be any integers. \begin{itemize} \item If $\;\alpha\geq 1$ and $\;\alpha\nu<4/r^2$ then \begin{equation*} \Eh[q]\left(P_q^2[\Phi;r]^{\alpha}\right)=O\left(\frac{1}{\log{q}}\right). \end{equation*} \item If $\;1\leq\alpha\leq\ell\leq m-1$ and $(\alpha+m-\ell)\nu<4/(r(r+2))$ then \begin{equation*} \Eh[q]\left(P_q^1[\Phi;r]^{m-\ell}P_q^2[\Phi;r]^{\alpha}\right)=O\left(\frac{1}{\log{q}}\right). \end{equation*} \item If $\;\alpha\geq 1$ and $\;\alpha\nu<4/(r(r+2))$ then \begin{equation*} \Eh[q]\left(P_q^1[\Phi;r]^{\alpha}\right)=\begin{cases} O\left(\frac{1}{\log^2{(q)}}\right) & \text{if $\alpha$ is odd,} \\ 2\int_\R\abs{u}\widehat{\Phi}^2(u)\dd u\times\frac{\alpha!}{2^{\alpha/2}\left(\frac{\alpha}{2}\right)!}+O\left(\frac{1}{\log^2{(q)}}\right) & \text{otherwise}. \end{cases} \end{equation*} \end{itemize} \end{proposition} \subsection{One some useful combinatorial identity} In order to use the multiplicative properties of Hecke eigenvalues in the proof of proposition \ref{moments}, we want to reorder some sums over many primes to sums over distinct primes. We follow the work of Hughes \& Rudnick \cite[\S 7]{HuRu} (see also \cite{MR2166468} and the work of Soshnikov \cite{Sos00}) to achieve this. Let $P(\alpha,s)$ be the set of surjective functions \[ \sigma \colon \{1,\dotsc,\alpha\}\twoheadrightarrow \{1,\dotsc,s\} \] such that for any $j\in\{1,\dotsc,\alpha\}$, either $\sigma(j)=1$ or there exists $k<j$ such that $\sigma(j)=\sigma(k)+1$. This can be viewed as the number of partitions of a set of $\alpha$ elements into $s$ nonempty subsets. By definition, the cardinality of $P(\alpha,s)$ is the Stirling number of second kind \cite[\S 1.4]{Sta97}. For any $j\in\{1,\dotsc,s\}$, let \[ \varpi_j^{(\sigma)}\coloneqq\#\sigma^{-1}(\{j\}). \] Note that \begin{equation}\label{eq_trivcond} \varpi_j^{(\sigma)}\geq 1 \quad\text{ for any $1\leq j\leq s$}\qquad\text{ and }\qquad \sum_{j=1}^s\varpi_j^{(\sigma)}=\alpha. \end{equation} The following lemma is lemma 7.3 of \cite[\S 7]{HuRu}. 
\begin{lemma} \label{primesdistincts} If $g$ is any function of $m$ variables then \begin{equation*} \sum_{j_1,\dotsc,j_m}g\left(x_{j_1},\dotsc,x_{j_m}\right)=\sum_{s=1}^m\sum_{\sigma\in P(m,s)}\sum_{\substack{i_1,\dotsc,i_s \\ \text{distinct}}}g\left(x_{i_{\sigma(1)}},\dotsc,x_{i_{\sigma(m)}}\right). \end{equation*} \end{lemma} \subsection{Proof of the first bullet of proposition \ref{moments}} By the definition \eqref{eq_tard}, we have \begin{multline}\label{eq_mais} \Eh[q]\left(P_q^2[\Phi;r]^\alpha\right)=\frac{(-2)^\alpha}{\log^\alpha{(q^r)}}\sum_{1\leq j_1,\dotsc,j_\alpha\leq r}(-1)^{\alpha r-(j_1+\dotsc+j_\alpha)} \\ \times\sum_{\substack{p_1,\dotsc,p_\alpha\in\prem\\ q\nmid p_1\dotsc p_\alpha}}\left(\prod_{i=1}^\alpha\frac{\log p_i}{p_i}\widehat{\Phi}\left(\frac{2\log p_i}{\log(q^r)}\right)\right)\Eh[q]\left(\prod_{i=1}^\alpha\lambda_f\left(p_i^{2j_i}\right)\right). \end{multline} Writing $\{\widehat{p}_i\}_{i\geq 1}$ for the increasing sequence of prime numbers except $q$, we have \begin{multline}\label{eq_renum} \sum_{\substack{p_1,\dotsc,p_\alpha\in\prem\\ q\nmid p_1\dotsc p_\alpha}}\left(\prod_{i=1}^\alpha\frac{\log p_i}{p_i}\widehat{\Phi}\left(\frac{2\log p_i}{\log(q^r)}\right)\right)\Eh[q]\left(\prod_{i=1}^\alpha\lambda_f\left(p_i^{2j_i}\right)\right) \\ =\sum_{i_1,\dotsc,i_\alpha}\left(\prod_{\ell=1}^\alpha\frac{\log\widehat{p}_{i_\ell}}{\widehat{p}_{i_\ell}}\widehat{\Phi}\left(\frac{2\log\widehat{p}_{i_\ell}}{\log(q^r)}\right)\right)\Eh[q]\left(\prod_{\ell=1}^\alpha\lambda_f\left(\widehat{p}_{i_\ell}^{2j_\ell}\right)\right). \end{multline} Using lemma \ref{primesdistincts}, we rewrite the right sum in \eqref{eq_renum} as \begin{multline}\label{eq_sosru} \sum_{s=1}^\alpha\sum_{\sigma\in P(\alpha,s)}\sum_{\substack{k_1,\dotsc,k_s\\\text{distinct}}}\left(\prod_{i=1}^\alpha\frac{\log\widehat{p}_{k_{\sigma(i)}}}{\widehat{p}_{k_{\sigma(i)}}}\widehat{\Phi}\left(\frac{ 2\log\widehat{p}_{k_{\sigma(i)}}}{\log(q^r)}\right)\right)\Eh[q]\left(\prod_{i=1}^\alpha\lambda_f\left(\widehat{p}_{k_{\sigma(i)}}^{2j_i}\right)\right) \\ =\sum_{s=1}^\alpha\sum_{\sigma\in P(\alpha,s)}\sum_{\substack{k_1,\dotsc,k_s\\\text{distinct}}}\left(\prod_{u=1}^s\left(\frac{\log\widehat{p}_{k_{u}}}{\widehat{p}_{k_{u}}}\widehat{\Phi}\left(\frac{ 2\log\widehat{p}_{k_{u}}}{\log(q^r)}\right)\right)^{\varpi^{(\sigma)}_u}\right)\Eh[q]\left(\prod_{\substack{1\leq u\leq s \\ 1\leq j\leq r}}\lambda_f\left(\widehat{p}_{k_{u}}^{2j}\right)^{\varpi_{u,j}^{(\sigma)}}\right) \end{multline} where \begin{equation*} \varpi_{u,j}^{(\sigma)}\coloneqq\#\{1\leq i\leq\alpha, \sigma(i)=u, j_i=j\} \end{equation*} for any $1\leq u\leq s$ and any $1\leq j\leq r$. Now, we show that \begin{multline}\label{s<alpha} \sum_{s=1}^{\alpha-1}\sum_{\sigma\in P(\alpha,s)}\sum_{\substack{k_1,\dotsc,k_s\\\text{distinct}}}\left(\prod_{u=1}^s\left(\frac{\log\widehat{p}_{k_{u}}}{\widehat{p}_{k_{u}}}\widehat{\Phi}\left(\frac{ 2\log\widehat{p}_{k_{u}}}{\log(q^r)}\right)\right)^{\varpi^{(\sigma)}_u}\right)\Eh[q]\left(\prod_{\substack{1\leq u\leq s \\ 1\leq j\leq r}}\lambda_f\left(\widehat{p}_{k_{u}}^{2j}\right)^{\varpi_{u,j}^{(\sigma)}}\right) \\ \ll\log^{\alpha-1}{(q)}. 
\end{multline} For $s<\alpha$ and $\sigma\in P(\alpha,s)$, we use \eqref{individualh} together with \eqref{eq_moyun} to obtain that the left-hand side of the previous equation is bounded by \begin{equation} \sum_{s=1}^{\alpha-1}\sum_{\sigma\in P(\alpha,s)} \sum_{\substack{k_1,\dotsc,k_s\\\text{distinct}}} \prod_{u=1}^s \left( \frac{ \log\widehat{p}_{k_{u}} }{ \widehat{p}_{k_{u}} } \abs{ \widehat{\Phi}\left( \frac{ 2\log\widehat{p}_{k_{u}} }{\log(q^r)} \right) } \right)^{\varpi_u^{(\sigma)}}. \end{equation} Since $s<\alpha$, equation~\eqref{eq_trivcond} implies that $\varpi_u^{(\sigma)}>1$ for some $1\leq u\leq s$. These values lead to convergent, hence bounded, sums. Let \[ d^{(\sigma)}\coloneqq\#\left\{1\leq u\leq s \colon \varpi_u^{(\sigma)}=1\right\}\in\{0,\dotsc,\alpha-1\}, \] then \begin{multline} \sum_{s=1}^{\alpha-1}\sum_{\sigma\in P(\alpha,s)} \sum_{\substack{k_1,\dotsc,k_s\\\text{distinct}}} \prod_{u=1}^s \left( \frac{ \log\widehat{p}_{k_{u}} }{ \widehat{p}_{k_{u}} } \abs{ \widehat{\Phi}\left( \frac{ 2\log\widehat{p}_{k_{u}} }{\log(q^r)} \right) } \right)^{\varpi_u^{(\sigma)}} \\ \ll \sum_{s=1}^{\alpha-1}\sum_{\sigma\in P(\alpha,s)} \sum_{\substack{k_1,\dotsc,k_d\\\text{distinct}}} \prod_{u=1}^{d^{(\sigma)}} \left( \frac{ \log\widehat{p}_{k_{u}} }{ \widehat{p}_{k_{u}} } \abs{ \widehat{\Phi}\left( \frac{ 2\log\widehat{p}_{k_{u}} }{\log(q^r)} \right) } \right) \ll\log^{\alpha-1}{(q)}. \end{multline} We have altogether \begin{multline}\label{eq_warwick} \Eh[q]\left(P_q^2[\Phi;r]^\alpha\right)=\frac{(-2)^\alpha}{\log^\alpha{(q^r)}}\sum_{1\leq j_1,\dotsc,j_\alpha\leq r}(-1)^{\alpha r-(j_1+\dotsc+j_\alpha)} \\ \times\sum_{\substack{k_1,\dotsc,k_\alpha\\\text{distinct}}}\left(\prod_{u=1}^\alpha\left(\frac{\log\widehat{p}_{k_{u}}}{\widehat{p}_{k_{u}}}\widehat{\Phi}\left(\frac{ 2\log\widehat{p}_{k_{u}}}{\log(q^r)}\right)\right)\right)\Eh[q]\left(\lambda_f\left(\prod_{u=1}^\alpha\widehat{p}_{k_{u}}^{2j_u}\right)\right) \\ +O\left(\frac{1}{\log{q}}\right) \end{multline} since the only element of $P(\alpha,\alpha)$ is the identity function. By lemmas~\ref{lem_delun} and \ref{deltaestimate}, we have \[ \Eh[q]\left(\lambda_f\left(\prod_{u=1}^\alpha\widehat{p}_{k_{u}}^{2j_u}\right)\right) \ll \frac{1}{q}\prod_{u=1}^\alpha \widehat{p}_{k_{u}}^{j_u/2}\log{\widehat{p}_{k_{u}}} \] hence the first term in the right-hand side of \eqref{eq_warwick} is bounded by a negative power of $q$ as soon as $\alpha\nu r^2 < 4$. \subsection{Proof of the third bullet of proposition \ref{moments}} By proposition \ref{explicit}, we have \begin{equation}\label{eq_EhPqU} \Eh[q](P_q^1[\Phi;r]^\alpha) = \frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{\substack{p_1,\dotsc,p_\alpha\in\prem\\ p_1,\dotsc,p_\alpha\nmid q}} \left(\prod_{i=1}^\alpha\frac{\log p_i}{\sqrt{p_i}}\widehat{\Phi}\left(\frac{\log p_i}{\log q^r}\right)\right) \Eh[q]\left(\prod_{i=1}^\alpha\lambda_f\left(p_i^r\right)\right). 
\end{equation} Using lemma \ref{primesdistincts}, we rewrite equation \eqref{eq_EhPqU} as \begin{align} \Eh[q](P_q^1[\Phi;r]^\alpha) &= \frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{s=1}^\alpha \sum_{\sigma\in P(\alpha,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \left(\prod_{j=1}^\alpha \left(\frac{\log\widehat{p}_{i_{\sigma(j)}}}{\sqrt{\widehat{p}_{i_{\sigma(j)}}}} \widehat{\Phi}\left(\frac{\log\widehat{p}_{i_{\sigma(j)}}}{\log{(q^r)}}\right)\right)\right) \\% & \phantom{ =\frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{s=1}^\alpha \sum_{\sigma\in P(\alpha,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} } \times \Eh[q]\left(\prod_{j=1}^\alpha\lambda_f\left(\widehat{p}_{i_{\sigma(j)}}^r\right)\right) \\% &= \frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{s=1}^\alpha \sum_{\sigma\in P(\alpha,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \left( \prod_{u=1}^s \left( \frac{ \log\widehat{p}_{i_u} }{ \sqrt{ \widehat{p}_{i_u} } } \widehat{\Phi}\left( \frac{ \log\widehat{p}_{i_{u}} }{\log q^r} \right) \right)^{\varpi^{(\sigma)}_u} \right) \\% & \label{eq_ewrite} \phantom{ = \frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{s=1}^\alpha \sum_{\sigma\in P(\alpha,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} } \times \Eh[q]\left( \prod_{u=1}^s \lambda_f\left(\widehat{p}_{i_{u}}^r\right)^{\varpi^{(\sigma)}_u} \right). \end{align} It follows from \eqref{eq_ams} and \eqref{eq_lintch} that \[ \lambda_f\left(\widehat{p}_{i_{u}}^r\right)^{\varpi^{(\sigma)}_u} = \sum_{j_u=0}^{r\varpi^{(\sigma)}_u} x(\varpi^{(\sigma)}_u,r,j_u)\lambda_f\left(\widehat{p}_{i_{u}}^{j_u}\right). \] Since $u\neq v$ implies that $\widehat{p}_{i_{u}}\neq\widehat{p}_{i_{v}}$, equation \eqref{eq_ewrite} becomes \begin{multline} \label{eq_drole} \Eh[q](P_q^1[\Phi;r]^\alpha) = \frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{s=1}^\alpha \sum_{\sigma\in P(\alpha,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \left( \prod_{u=1}^s \left( \frac{ \log\widehat{p}_{i_u} }{ \sqrt{ \widehat{p}_{i_u} } } \widehat{\Phi}\left( \frac{ \log\widehat{p}_{i_{u}} }{\log{(q^r)}} \right) \right)^{\varpi^{(\sigma)}_u} \right) \\% \times \sum_{\substack{j_1,\dotsc,j_s \\ 0\leq j_u\leq r\varpi^{(\sigma)}_u}}\left( \prod_{u=1}^sx(\varpi^{(\sigma)}_u,r,j_u) \right) \Eh[q]\left( \lambda_f\left( \prod_{u=1}^s\widehat{p}_{i_u}^{j_u} \right) \right). \end{multline} Using proposition \ref{iwlusatr} and lemmas \ref{deltaestimate} and \ref{lem_delun}, we get \[ \Eh[q]\left( \lambda_f\left( \prod_{u=1}^s\widehat{p}_{i_u}^{j_u} \right) \right) = \prod_{u=1}^s\delta_{j_u,0}+O\left( \frac{1}{q}\prod_{u=1}^s\widehat{p}_{i_u}^{j_u/4}\log\widehat{p}_{i_u} \right) \] hence \begin{equation}\label{eq_cesttpte} \Eh[q](P_q^1[\Phi;r]^\alpha)=\TP+O(\TE) \end{equation} with \begin{equation}\label{eq_deftp} \TP\coloneqq \frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{s=1}^\alpha \sum_{\sigma\in P(\alpha,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \prod_{u=1}^s \left( \frac{ \log\widehat{p}_{i_u} }{ \sqrt{ \widehat{p}_{i_u} } } \widehat{\Phi}\left( \frac{ \log\widehat{p}_{i_{u}} }{\log{(q^r)}} \right) \right)^{\varpi^{(\sigma)}_u} x(\varpi^{(\sigma)}_u,r,0) \end{equation} and \begin{equation}\label{eq_defte} \TE\coloneqq \frac{1}{q\log^\alpha{(q^r)}} \sum_{s=1}^\alpha \sum_{\sigma\in P(\alpha,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \prod_{u=1}^s \left( \widehat{p}_{i_u}^{(r-2)/4}\log^2\widehat{p}_{i_u} \abs{ \widehat{\Phi}\left( \frac{ \log\widehat{p}_{i_{u}} }{\log{(q^r)}} \right) } \right)^{\varpi^{(\sigma)}_u}. 
\end{equation} We have \begin{equation}\label{eq_majte} \TE= \frac{1}{q\log^\alpha{(q^r)}} \left( \sum_{\substack{p\in\prem\\ p\nmid q}}p^{(r-2)/4}\log^2p \abs{ \widehat{\Phi}\left( \frac{\log p}{\log{(q^r)}} \right) } \right)^\alpha \ll q^{\alpha r\nu(r+2)/4-1} \end{equation} so that, $\TE$ is an error term as soon as \begin{equation}\label{eq_conddeux} \alpha r\nu(r+2)<4. \end{equation} We assume from now on that this condition is satisfied. According to \eqref{propriox} (recall that $r\geq 1$), we rewrite \eqref{eq_deftp} as \begin{equation}\label{eq_tpsimpl} \TP= \frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{s=1}^\alpha \sum_{\sigma\in P^{\geq 2}(\alpha,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \prod_{u=1}^s \left( \frac{ \log\widehat{p}_{i_u} }{ \sqrt{ \widehat{p}_{i_u} } } \widehat{\Phi}\left( \frac{ \log\widehat{p}_{i_{u}} }{\log{(q^r)}} \right) \right)^{\varpi^{(\sigma)}_u} x(\varpi^{(\sigma)}_u,r,0) \end{equation} where \[ P^{\geq 2}(\alpha,s)\coloneqq \left\{ \sigma\in P(\alpha,s) \colon \forall u\in\{1,\dotsc,s\}, \varpi^{(\sigma)}_u\geq 2 \right\}. \] Moreover, if for at least one $\sigma$ and at least one $u$ (say $u_0$) we have $\varpi^{(\sigma)}_u\geq 3$, then \begin{multline} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \prod_{u=1}^s \left( \frac{ \log\widehat{p}_{i_u} }{ \sqrt{ \widehat{p}_{i_u} } } \widehat{\Phi}\left( \frac{ \log\widehat{p}_{i_{u}} }{\log{(q^r)}} \right) \right)^{\varpi^{(\sigma)}_u} x(\varpi^{(\sigma)}_u,r,0) \\% \ll \left( \sum_{\substack{p\in\prem\\ p\leq q^{r\nu}}}\frac{\log^3{(p)}}{p^{3/2}} \right) \prod_{\substack{u=1\\ u\neq u_0}}^s \left( \sum_{\substack{p_u\in\prem\\ p_u\leq q^{r\nu}}}\frac{\log^2{(p_u)}}{p_u} \right) \\% \ll\label{eq_setm} (\log q)^{2s-2}. \end{multline} But, from \eqref{eq_trivcond}, we deduce \[ 2s\leq\sum_{j=1}^s\varpi^{(\sigma)}_j=\alpha \] hence $(\log q)^{2s-2}\ll (\log q)^{\alpha-2}$. Reinserting this in \eqref{eq_setm} and the result in \eqref{eq_tpsimpl}, we obtain \begin{multline}\label{eq_tpplussimpl} \TP= \frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{s=1}^\alpha \sum_{\sigma\in P^{2}(\alpha,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \prod_{u=1}^s \left( \frac{ \log\widehat{p}_{i_u} }{ \sqrt{ \widehat{p}_{i_u} } } \widehat{\Phi}\left( \frac{ \log\widehat{p}_{i_{u}} }{\log q^r} \right) \right)^{\varpi^{(\sigma)}_u} x(\varpi^{(\sigma)}_u,r,0) \\ +O\left(\frac{1}{\log^2{(q)}}\right) \end{multline} where \[ P^{2}(\alpha,s)\coloneqq \left\{ \sigma\in P(\alpha,s) \colon \forall u\in\{1,\dotsc,s\}, \varpi^{(\sigma)}_u= 2 \right\}. \] From \eqref{eq_tpplussimpl}, \eqref{eq_majte} and \eqref{eq_cesttpte}, we deduce \begin{multline} \Eh[q](P_q^1[\Phi;r]^\alpha) = \frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{s=1}^\alpha \sum_{\sigma\in P^{2}(\alpha,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \prod_{u=1}^s \frac{ \log^2{(\widehat{p}_{i_u})} }{ \widehat{p}_{i_u} } \widehat{\Phi}^2\left( \frac{ \log\widehat{p}_{i_{u}} }{\log{(q^r)}} \right) \\ +O\left(\frac{1}{\log^2{(q)}}\right) \end{multline} since $x(2,r,0)=1$ according to \eqref{propriox}. Note in particular that, according to \eqref{eq_trivcond} the previous sum is zero if $\alpha$ is odd. 
Thus, we can assume now that $\alpha$ is even and get \begin{multline}\label{eq_ouf} \Eh[q](P_q^1[\Phi;r]^\alpha) = \frac{(-2)^\alpha}{\log^\alpha{(q^r)}} \sum_{\sigma\in P^{2}(\alpha,\alpha/2)} \sum_{\substack{i_1,\dotsc,i_{\alpha/2}\\\text{distinct}}} \prod_{u=1}^{\alpha/2} \frac{ \log^2{(\widehat{p}_{i_u})} }{ \widehat{p}_{i_u} } \widehat{\Phi}^2\left( \frac{ \log\widehat{p}_{i_{u}} }{\log{(q^r)}} \right) \\ +O\left(\frac{1}{\log^2{(q)}}\right). \end{multline} However, summing over all the possible $(i_1,\dotsc,i_{\alpha/2})$ instead of the one with distinct indices reintroduces convergent sums that enter the error term because of the $1/\log^\alpha{(q^r)}$ factor. It follows that \eqref{eq_ouf} becomes: \begin{multline}\label{eq_ooo} \Eh[q](P_q^1[\Phi;r]^\alpha) = \left[ \frac{4}{\log^2{(q^r)}} \sum_{p\in\prem} \frac{\log^2{(p)}}{p} \widehat{\Phi}^2\left( \frac{\log p}{\log{(q^r)}} \right) \right]^{\alpha/2} \#P^{2}(\alpha,\alpha/2) \\ +O\left(\frac{1}{\log^2{(q)}}\right). \end{multline} Taking $m=2$ (we already proved that the second moment is finite, see section~\ref{sec_twoandvar}) and reinserting the result in \eqref{eq_ooo} implies that \[ \Eh[q](P_q^1[\Phi;r]^\alpha) = \Eh[q](P_q^1[\Phi;r]^2) \#P^{2}(\alpha,\alpha/2)+O\left(\frac{1}{\log^2{(q)}}\right). \] We conclude by computing \[ \#P^{2}(\alpha,\alpha/2)=\frac{\alpha!}{2^{\alpha/2}\left(\frac{\alpha}{2}\right)!}. \] (see \cite[Example 5.2.6 and Exercise 5.43]{Stan00}). \subsection{Proof of the second bullet of proposition \ref{moments}} We mix the two techniques which have been used to prove the first and third bullets of proposition \ref{moments}. We get following the same lines and thanks to lemma~\ref{primesdistincts} \begin{multline} \Eh[q]\left(P_q^1[\Phi;r]^{m-\ell}P_q^2[\Phi;r]^\alpha\right)= \frac{(-2)^{\alpha+m-\ell}}{\log^{\alpha+m-\ell}{(q^r)}} \sum_{1\leq j_1,\dotsc,j_\alpha\leq r}(-1)^{\alpha r-(j_1+\dotsc+j_\alpha)} \sum_{s=1}^{\alpha+m-\ell} \\ \times \sum_{\sigma\in P(\alpha+m-\ell,s)}\sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \prod_{u=1}^{s} \left( \frac{\log^{\varpi^{(\sigma,1)}_{u}+\varpi^{(\sigma,2)}_{u}} {\left(\widehat{p}_{i_u}\right)}}{\widehat{p}_{i_{u}}^{\varpi^{(\sigma,1)}_{u}/2+\varpi^{(\sigma,2)}_{u}}} \widehat{\Phi}\left(\frac{\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)^{\varpi^{(\sigma,1)}_{u}} \widehat{\Phi}\left(\frac{2\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)^{\varpi^{(\sigma,2)}_{u}} \right) \\ \times \Eh[q]\left(\prod_{u=1}^s\left(\lambda_f\left(\widehat{p}_{i_{u}}^{r}\right)^{\varpi^{(\sigma,1)}_{u}} \prod_{j=1}^r\lambda_f\left(\widehat{p}_{i_{u}}^{2j}\right)^{\varpi^{(\sigma,2)}_{u,j}}\right)\right) \end{multline} where \begin{align*} \varpi^{(\sigma,1)}_{u} & \coloneqq \#\left\{i\in\left\{1,\dotsc,m-\ell\right\},\; \sigma(i)=u\right\}, \\ \varpi^{(\sigma,2)}_{u} & \coloneqq \#\left\{i\in\left\{1,\dotsc,\alpha\right\},\; \sigma(m-\ell+i)=u\right\}, \\ \varpi^{(\sigma,2)}_{u,j} & \coloneqq \#\left\{i\in\left\{1,\dotsc,\alpha\right\},\; \sigma(m-\ell+i)=u \text{ and } j_{i}=j\right\} \end{align*} for any $1\leq u\leq s$, any $1\leq j\leq r$ and any $\sigma\in P(\alpha+m-\ell,s)$. Note that these numbers satisfy \begin{equation} \label{prop1} \sum_{u=1}^s\left(\varpi^{(\sigma,1)}_{u}+\varpi^{(\sigma,2)}_{u}\right)=m-\ell+\alpha \end{equation} and \begin{equation} \label{prop2} \sum_{j=1}^r\varpi^{(\sigma,2)}_{u,j}=\varpi^{(\sigma,2)}_{u} \end{equation} for any $1\leq u\leq r$ and any $\sigma\in P(\alpha+m-\ell,s)$ by definition. 
They also satisfy \begin{equation} \label{prop3} \forall\sigma\in P(\alpha+m-\ell,s), \forall u\in\left\{1,\dotsc,s\right\}, \quad \varpi^{(\sigma,1)}_{u}+\varpi^{(\sigma,2)}_{u}\geq 1 \end{equation} since any $\sigma\in P(\alpha+m-\ell,s)$ is surjective and \begin{equation} \label{prop4} \forall\sigma\in P(\alpha+m-\ell,s), \forall i\in\left\{1,2\right\},\exists u_{i,\sigma}\in\left\{1,\dotsc,s\right\}, \quad \varpi^{(\sigma,i)}_{u_{i,\sigma}}\geq 1 \end{equation} since $\alpha\geq 1$ and $m-\ell\geq 1$. The strategy is to estimate individually each term of the $\sigma$-sum. Thus, we fix some integers $j_1,\dotsc,j_\alpha$ in $\left\{1,\dotsc,r\right\}$, some integer $s$ in $\left\{1,\dotsc,r\right\}$ and some application $\sigma$ in $P(\alpha+m-\ell,s)$.\newline \noindent{\textit{\underline{First case}:} $\quad\mathit{\forall u\in\left\{1,\dotsc,s\right\}, \;\varpi^{(\sigma,1)}_{u}/2+\varpi^{(\sigma,2)}_{u}\leq 1.}$}\newline Let us remark that if $\varpi^{(\sigma,2)}_{u}=1$ for some $1\leq u\leq s$ then there exists a unique $1\leq j_{i_u}\leq r$ depending on $\sigma$ such that $\varpi^{(\sigma,2)}_{u,j_{i_u}}=1$ and $\varpi^{(\sigma,2)}_{u,j}=0$ for any $1\leq j\neq j_{i_u}\leq r$ according to \eqref{prop2}. Thus, \begin{multline*} \prod_{u=1}^s\left(\lambda_f\left(\widehat{p}_{i_{u}}^{r}\right)^{\varpi^{(\sigma,1)}_{u}}\prod_{j=1}^r\left(\lambda_f\left(\widehat{p}_{i_{u}}^{2j}\right)^{\varpi^{(\sigma,2)}_{u,j}}\right)\right)=\lambda_f\left(\prod_{\substack{1\leq u\leq s\\ \left(\varpi^{(\sigma,1)}_{u},\varpi^{(\sigma,2)}_{u}\right)=(2,0)}}\widehat{p}_{i_{u}}^{r\varpi^{(\sigma,1)}_{u}/2}\right) \\ \times\lambda_f\left(\prod_{\substack{1\leq u\leq s\\ \left(\varpi^{(\sigma,1)}_{u},\varpi^{(\sigma,2)}_{u}\right)=(2,0)}}\widehat{p}_{i_{u}}^{r\varpi^{(\sigma,1)}_{u}/2}\prod_{\substack{1\leq u\leq s\\ \left(\varpi^{(\sigma,1)}_{u},\varpi^{(\sigma,2)}_{u}\right)=(1,0)}}\widehat{p}_{i_{u}}^{r\varpi^{(\sigma,1)}_{u}}\prod_{\substack{1\leq u\leq s\\ \left(\varpi^{(\sigma,1)}_{u},\varpi^{(\sigma,2)}_{u}\right)=(0,1)}}\widehat{p}_{i_{u}}^{2j_{i_u}\varpi^{(\sigma,2)}_{u,j_{i_u}}}\right) \end{multline*} where the two integers appearing in the right-hand side of the previous equality are different according to \eqref{prop4}. Consequently, proposition \ref{iwlusatr} and lemmas \ref{deltaestimate} and \ref{lem_delun} enable us to assert that \begin{multline*} \Eh[q]\left(\prod_{u=1}^s\left(\lambda_f\left(\widehat{p}_{i_{u}}^{r}\right)^{\varpi^{(\sigma,1)}_{u}}\prod_{j=1}^r\left(\lambda_f\left(\widehat{p}_{i_{u}}^{2j}\right)^{\varpi^{(\sigma,2)}_{u,j}}\right)\right)\right) \ll \frac{1}{q} \prod_{\substack{1\leq u\leq s\\ \left(\varpi^{(\sigma,1)}_{u},\varpi^{(\sigma,2)}_{u}\right)=(2,0)}} \frac{\log{\widehat{p}_{i_{u}}}}{\widehat{p}_{i_{u}}^{-r\varpi^{(\sigma,1)}_{u}/4}} \\ \times \prod_{\substack{1\leq u\leq s\\ \left(\varpi^{(\sigma,1)}_{u},\varpi^{(\sigma,2)}_{u}\right)=(1,0)}} \frac{\log{\widehat{p}_{i_{u}}}}{\widehat{p}_{i_{u}}^{-r\varpi^{(\sigma,1)}_{u}/4}} \prod_{\substack{1\leq u\leq s\\ \left(\varpi^{(\sigma,1)}_{u},\varpi^{(\sigma,2)}_{u}\right)=(0,1)}} \frac{\log{\widehat{p}_{i_{u}}}}{\widehat{p}_{i_{u}}^{-r\varpi^{(\sigma,2)}_{u}/2}}. 
\end{multline*} Note that, in this first case, the right hand term is \[ \frac{1}{q} \prod_{u=1}^s \frac{ \log{\widehat{p}_{i_{u}}} } { \widehat{p}_{i_{u}}^{-r(\varpi^{(\sigma,1)}_{u}/4+\varpi^{(\sigma,2)}_{u}/2)} } \] hence the contribution of these $\sigma$'s to $\Eh[q]\left(P_q^1[\Phi;r]^{m-\ell}P_q^2[\Phi;r]^\alpha\right)$ is bounded by \[ \frac{q^\epsilon}{q} \left( \sum_{p\leq q^{\nu r}}\frac{1}{p^{1/2-r/4}} \right)^{m-\ell} \left( \sum_{p\leq q^{\nu r/2}}\frac{1}{p^{1-r/2}} \right)^{\alpha} \ll q^{ \nu r/4[(m-\ell)(r+2)+\alpha r]-1+\epsilon }. \] This is an admissible error term as long as $\nu r/4[(m-\ell)(r+2)+\alpha r]<1$. \noindent{\textit{\underline{Second case}:} $\quad\mathit{\exists u_\sigma\in\left\{1,\dotsc,s\right\}, \;\varpi^{(\sigma,1)}_{u_\sigma}/2+\varpi^{(\sigma,2)}_{u_\sigma}>1.}$} According to \eqref{eq_ams} and \eqref{eq_lintch}, if $1\leq u\leq s$ and $1\leq j\leq r$ then \begin{equation*} \lambda_f\left(\widehat{p}_{i_{u}}^r\right)^{\varpi^{(\sigma,1)}_u}=\sum_{k_{u,1}=0}^{r\varpi^{(\sigma,1)}_u}x(\varpi^{(\sigma,1)}_u,r,k_{u,1})\lambda_f\left(\widehat{p}_{i_{u}}^{k_{u,1}}\right) \end{equation*} and \begin{equation*} \lambda_f\left(\widehat{p}_{i_{u}}^{2j}\right)^{\varpi^{(\sigma,2)}_{u,j}}= \sum_{k_{u,j,2}=0}^{j\varpi^{(\sigma,2)}_{u,j}} x(\varpi^{(\sigma,2)}_{u,j},2j,2k_{u,j,2}) \lambda_f\left(\widehat{p}_{i_{u}}^{2k_{u,j,2}}\right) \end{equation*} since $x(\varpi^{(\sigma,2)}_{u,j},2j,k_{u,j,2})=0$ if $k_{u,j,2}$ is odd (see \eqref{propriox}). Then, one may remark that \begin{equation*} \prod_{1\leq j\leq r}\lambda_f\left(\widehat{p}_{i_{u}}^{2k_{u,j,2}}\right)=\sum_{\ell_u=0}^{K_u}y_{\ell_u}\lambda_f\left(\widehat{p}_{i_{u}}^{2\ell_u}\right) \end{equation*} for some integers $y_{\ell_u}$ and where $K_u\coloneqq\sum_{1\leq j\leq r}k_{u,j,2}$ for any $1\leq u\leq s$. All these facts lead to \begin{multline} \Eh[q]\left(P_q^1[\Phi;r]^{m-\ell}P_q^2[\Phi;r]^\alpha\right)=\frac{(-2)^{\alpha+m-\ell}(-1)^{\alpha r}}{\log^{\alpha+m-\ell}{(q^r)}}\sum_{1\leq j_1,\dotsc,j_\alpha\leq r}(-1)^{j_1+\dotsc+j_\alpha}\sum_{s=1}^{\alpha+m-\ell} \\ \times\sum_{\sigma\in P(\alpha+m-\ell,s)}\sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}}\prod_{u=1}^{s}\left(\frac{\log^{\varpi^{(\sigma,1)}_{u}+\varpi^{(\sigma,2)}_{u}}{\left(\widehat{p}_{i_u}\right)}}{\widehat{p}_{i_{u}}^{\varpi^{(\sigma,1)}_{u}\left/2\right.+\varpi^{(\sigma,2)}_{u}}}\widehat{\Phi}\left(\frac{\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)^{\varpi^{(\sigma,1)}_{u}}\widehat{\Phi}\left(\frac{ 2\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)^{\varpi^{(\sigma,2)}_{u}}\right) \\ \times\sum_{\substack{0\leq k_{1,1}\leq r\varpi^{(\sigma,1)}_1 \\ \vdots \\ 0\leq k_{s,1}\leq r\varpi^{(\sigma,1)}_s}}\sum_{\substack{0\leq k_{1,1,2}\leq \varpi^{(\sigma,2)}_{1,1} \\ \vdots \\ 0\leq k_{s,1,2}\leq\varpi^{(\sigma,2)}_{s,1}}}\dotsc\sum_{\substack{0\leq k_{1,r,2}\leq r\varpi^{(\sigma,2)}_{1,r} \\ \vdots \\ 0\leq k_{s,r,2}\leq r\varpi^{(\sigma,2)}_{s,r}}}\sum_{\substack{0\leq\ell_1\leq K_1 \\ \vdots \\ 0\leq\ell_s\leq K_s}} \\ \times\prod_{u=1}^s\left(x\left(\varpi^{(\sigma,1)}_u,r,k_{u,1}\right)y_{\ell_u}\prod_{j=1}^r\left(x\left(\varpi^{(\sigma,2)}_{u,j},2j,2k_{u,j,2}\right)\right)\right) \\ \times\Eh[q]\left(\lambda_f\left(\prod_{u=1}^s\widehat{p}_{i_{u}}^{k_{u,1}}\right)\lambda_f\left(\prod_{u=1}^s\widehat{p}_{i_{u}}^{2\ell_u}\right)\right). 
\end{multline} Proposition \ref{iwlusatr} and lemmas \ref{deltaestimate} and \ref{lem_delun} enable us to assert that \begin{equation*} \Eh[q]\left(\lambda_f\left(\prod_{u=1}^s\widehat{p}_{i_{u}}^{k_{u,1}}\right)\lambda_f\left(\prod_{u=1}^s\widehat{p}_{i_{u}}^{2\ell_u}\right)\right)=\prod_{u=1}^s\delta_{k_{u,1},2\ell_u} \\ +O\left(\frac{1}{q}\prod_{u=1}^s\widehat{p}_{i_{u}}^{k_{u,1}/4+\ell_u/2}\log{\widehat{p}_{i_{u}}}\right) \end{equation*} and we can write \begin{equation} \Eh[q]\left(P_q^1[\Phi;r]^{m-\ell}P_q^2[\Phi;r]^\alpha\right)=\TP+O(\TE) \end{equation} with \begin{multline}\label{endddd} \TP\coloneqq\frac{(-2)^{\alpha+m-\ell}(-1)^{\alpha r}}{\log^{\alpha+m-\ell}{(q^r)}}\sum_{1\leq j_1,\dotsc,j_\alpha\leq r}(-1)^{j_1+\dotsc+j_\alpha}\sum_{s=1}^{\alpha+m-\ell} \\ \times\sum_{\sigma\in P(\alpha+m-\ell,s)}\sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}}\prod_{u=1}^{s}\left(\frac{\log^{\varpi^{(\sigma,1)}_{u}+\varpi^{(\sigma,2)}_{u}}{\left(\widehat{p}_{i_u}\right)}}{\widehat{p}_{i_{u}}^{\varpi^{(\sigma,1)}_{u}\left/2\right.+\varpi^{(\sigma,2)}_{u}}}\widehat{\Phi}\left(\frac{\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)^{\varpi^{(\sigma,1)}_{u}}\widehat{\Phi}\left(\frac{ 2\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)^{\varpi^{(\sigma,2)}_{u}}\right) \\ \times\sum_{\substack{0\leq k_{1,1,2}\leq \varpi^{(\sigma,2)}_{1,1} \\ \vdots \\ 0\leq k_{s,1,2}\leq\varpi^{(\sigma,2)}_{s,1}}}\dotsc\sum_{\substack{0\leq k_{1,r,2}\leq r\varpi^{(\sigma,2)}_{1,r} \\ \vdots \\ 0\leq k_{s,r,2}\leq r\varpi^{(\sigma,2)}_{s,r}}}\sum_{\substack{0\leq\ell_1\leq r\min{\left(\varpi^{(\sigma,1)}_{1}/2,\varpi^{(\sigma,2)}_{1}\right)} \\ \vdots \\ 0\leq\ell_s\leq r\min{\left(\varpi^{(\sigma,1)}_{s}/2,\varpi^{(\sigma,2)}_{s}\right)}}} \\ \times\prod_{u=1}^{s}\left(x\left(\varpi^{(\sigma,1)}_u,r,2\ell_u\right)y_{\ell_u}\prod_{j=1}^r\left(x\left(\varpi^{(\sigma,2)}_{u,j},2j,2k_{u,j,2}\right)\right)\right) \end{multline} and \begin{multline} \TE\coloneqq \frac{1}{q\log^{\alpha+m-\ell}{(q^r)}} \\% \times \sum_{s=1}^{\alpha+m-\ell} \sum_{\sigma\in P(\alpha+m-\ell,s)} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}} \prod_{u=1}^{s}\log^{\varpi^{(\sigma,1)}_{u}+\varpi^{(\sigma,2)}_{u}+1}{(\widehat{p}_{i_u})} \widehat{p}_{i_u}^{\left(r/2-1\right)\left(\varpi^{(\sigma,1)}_{u}/2+\varpi^{(\sigma,2)}_{u}\right)} \\% \times \left\vert\widehat{\Phi}\left(\frac{\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)\right\vert^{\varpi^{(\sigma,1)}_{u}} \left\vert\widehat{\Phi}\left(\frac{2\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)\right\vert^{\varpi^{(\sigma,2)}_{u}} \end{multline} which is bounded by $O_\epsilon\left(q^{(\alpha+m-\ell)\nu r^2/4-1+\varepsilon}\right)$ for any $\varepsilon>0$ and is an admissible error term if $(\alpha+m-\ell)\nu<4/r^2$. Estimating $\TP$ is possible since we can assume that $\sigma$ satisfies the following additional property. If $\varpi_u^{(\sigma,2)}=0$ for some $1\leq u\leq s$ then $\varpi_u^{(\sigma,1)}>1$. Let us assume on the contrary that $\varpi_u^{(\sigma,1)}\leq 1$ which entails $\varpi_u^{(\sigma,1)}=1$ according to \eqref{prop3}. Then, \begin{equation*} x\left(\varpi^{(\sigma,1)}_u,r,2\ell_u\right)=x\left(1,r,0\right)=0 \end{equation*} since $\ell_u=0$ and according to \eqref{propriox}. Thus, the contribution of the $\sigma$'s which do not satisfy this last property vanishes. 
As a consequence, the sum over the distinct $i_1,\dotsc,i_s$ is bounded by \begin{multline*} \sum_{\substack{i_1,\dotsc,i_s\\\text{distinct}}}\prod_{\substack{1\leq u\leq s \\ \left(\varpi_u^{(\sigma,1)},\varpi_u^{(\sigma,2)}\right)=(2,0)}}\left(\frac{\log^{2}{\left(\widehat{p}_{i_u}\right)}}{\widehat{p}_{i_{u}}}\left\vert\widehat{\Phi}\left(\frac{\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)\right\vert^{2}\right) \\ \times\prod_{\substack{1\leq u\leq s \\ \left(\varpi_u^{(\sigma,1)},\varpi_u^{(\sigma,2)}\right)=(0,1)}}\left(\frac{\log{\left(\widehat{p}_{i_u}\right)}}{\widehat{p}_{i_{u}}}\left\vert\widehat{\Phi}\left(\frac{2\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)\right\vert\right) \\ \times\prod_{\substack{1\leq u\leq s \\ \varpi_u^{(\sigma,1)}/2+\varpi_u^{(\sigma,2)}>1}}\left(\frac{\log^{\varpi^{(\sigma,1)}_{u}+\varpi^{(\sigma,2)}_{u}}{\left(\widehat{p}_{i_u}\right)}}{\widehat{p}_{i_{u}}^{\varpi^{(\sigma,1)}_{u}\left/2\right.+\varpi^{(\sigma,2)}_{u}}}\left\vert\widehat{\Phi}\left(\frac{\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)\right\vert^{\varpi^{(\sigma,1)}_{u}}\left\vert\widehat{\Phi}\left(\frac{ 2\log\widehat{p}_{i_{u}}}{\log(q^r)}\right)\right\vert^{\varpi^{(\sigma,2)}_{u}}\right) \end{multline*} which is itself bounded by $O\left(\log^{A_\sigma}{(q)}\right)$ where the exponent is given by \begin{multline*} A_\sigma\coloneqq 2\#\left\{1\leq u\leq s, \varpi_u^{(\sigma,2)}=0 \text{ and } \varpi_u^{(\sigma,1)}/2+\varpi_u^{(\sigma,2)}\leq 1\right\} \\ +\#\left\{1\leq u\leq s, \varpi_u^{(\sigma,2)}=1 \text{ and } \varpi_u^{(\sigma,1)}/2+\varpi_u^{(\sigma,2)}\leq 1\right\}<m-\ell+\alpha. \end{multline*} The last inequality follows from (see \eqref{prop1} and the additional property of $\sigma$) \begin{equation*} m-\ell+\alpha=A_\sigma+\sum_{\substack{1\leq u\leq s \\ \varpi_u^{(\sigma,1)}/2+\varpi_u^{(\sigma,2)}>1}}\left(\varpi_u^{(\sigma,1)}+\varpi_u^{(\sigma,2)}\right). \end{equation*} Thus, the contribution of the $\TP$ term of these $\sigma$'s to $\Eh[q]\left(P_q^1[\Phi;r]^{m-\ell}P_q^2[\Phi;r]^\alpha\right)$ is bounded by $O\left(\log^{-1}{(q)}\right)$. \appendix \section{Analytic and arithmetic toolbox} \label{klooster} \subsection{On smooth dyadic partitions of unity} \label{unity} Let $\psi\colon\R_+\to\R$ be any smooth function satisfying \begin{equation*} \psi(x)=\begin{cases} 0 & \text{if } 0\leq x\leq 1, \\ 1 & \text{if } x>\sqrt{2} \end{cases} \end{equation*} and $x^j\psi^{(j)}(x)\ll_j 1$ for any real number $x\geq 0$ and any integer $j\geq 0$. If $\rho\colon\R_+\to\R$ is the function defined by \begin{equation*} \rho(x)\coloneqq\begin{cases} \psi(x) & \text{if } 0\leq x\leq\sqrt{2}, \\ 1-\psi\left(\frac{x}{\sqrt{2}}\right) & \text{otherwise} \end{cases} \end{equation*} then $\rho$ is a smooth function compactly supported in $[1,2]$ satisfying \begin{equation*} x^j\rho^{(j)}(x)\ll_j 1\quad\text{ and }\quad\sum_{a\in\mathbb{Z}}\rho\left(\frac{x}{\sqrt{2}^a}\right)=1 \end{equation*} for any real number $x\geq 0$ and any integer $j\geq 0$. 
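The construction above is completely explicit. As a purely illustrative aside, the following short Python sketch builds one admissible pair $(\psi,\rho)$ (the particular bump function below is our own choice; any smooth cutoff with the stated properties would do) and checks numerically that the dyadic sums equal $1$. The key point is that $\rho(x)=\psi(x)-\psi(x/\sqrt{2})$ for every $x\geq 0$, so that the sum over $a$ telescopes.
\begin{verbatim}
# Minimal sketch of the smooth dyadic partition of unity described above.
import math

SQRT2 = math.sqrt(2.0)

def bump(t):
    # smooth on R, vanishes to all orders at t <= 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def psi(x):
    # psi(x) = 0 for x <= 1, psi(x) = 1 for x >= sqrt(2), smooth in between
    t = (x - 1.0) / (SQRT2 - 1.0)
    num, den = bump(t), bump(t) + bump(1.0 - t)
    return num / den if den > 0 else 0.0

def rho(x):
    # rho(x) = psi(x) - psi(x / sqrt(2)); it agrees with the piecewise
    # definition in the text and is supported in [1, 2]
    return psi(x) - psi(x / SQRT2)

# the sum over a in Z telescopes; only finitely many terms are nonzero
for x in (0.3, 1.0, 2.5, 17.0, 123.456):
    total = sum(rho(x / SQRT2 ** a) for a in range(-60, 60))
    print(x, round(total, 12))
\end{verbatim}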
\begin{figure}\end{figure} If $F\colon\R_+^n\to\R$ is a function of $n\geq 1$ real variables then we can decompose it in \begin{equation*} F=\sum_{a_1\in\mathbb{Z}}\ldots\sum_{a_n\in\mathbb{Z}}F_{A_1,\cdots,A_n} \end{equation*} where $A_i\coloneqq\sqrt{2}^{a_i}$ and \begin{equation*} F_{A_1,\cdots,A_n}(x_1,\cdots,x_n)\coloneqq\prod_{i=1}^n\rho_{A_i}(x_i)F(x_1,\cdots,x_n) \end{equation*} with $\rho_{A_i}(x_i)\coloneqq\rho\left(x_i\left/A_i\right.\right)$ is a smooth function compactly supported in $[A_i,2A_i]$ satisfying $x_i^j\rho_{A_i}^{(j)}(x_i)\ll_j 1$ for any real number $x_i\geq 0$ and any integer $j\geq 0$. Let us introduce the following notation for summation over powers of $\sqrt{2}$ : \[ \sumsh_{A\leq M\leq B}f(M)\coloneqq \sum_{ \substack{n\in\N\\ A\leq 2^{n/2}\leq B} } f\left( 2^{n/2} \right). \] We will use such smooth dyadic partitions of unity several times in this paper and we will also need these natural estimates in such contexts \begin{equation} \label{dyadic1} \sumsh_{M\leq M_1} M^\alpha \ll M_1^\alpha \end{equation} for any $\alpha, M_1>0$ and \begin{equation} \label{dyadic2} \sumsh_{M\geq M_0} \frac{1}{M^{\alpha}} \ll \frac{1}{M_0^\alpha} \end{equation} for any $\alpha, M_0>0$. \subsection{On Bessel functions} \label{Bessel} The Bessel function of first kind and order a integer $\kappa\geq 1$ is defined by \[ \forall z\in\C, \quad J_\kappa(z)\coloneqq\sum_{n\geq 0}\frac{(-1)^n}{n!(\kappa+n)!}\left(\frac{z}{2}\right)^{\kappa+2n}. \] It satisfies the following estimate (founded in \cite[Lemma C.2]{MR1915038}), valid for any real number $x$, any integer $j\geq 0$ and any integer $\kappa\geq 1$: \begin{equation} \label{bessel} \left(\frac{x}{1+x}\right)^jJ_\kappa^{(j)}(x)\ll_{j,\kappa}\frac{1}{\left(1+x\right)^{\frac{1}{2}}}\left(\frac{x}{1+x}\right)^\kappa \end{equation} for any real number $x$, any integer $j\geq 0$ and any integer $\kappa\geq 1$. The following useful lemma follows immediately. \begin{lemma}\label{lem_picard} Let $X>0$ and $\kappa\geq 1$, then \[ \sum_{d>0}\frac{\tau(d)}{\sqrt{d}} \abs{J_{\kappa}\left(\frac{X}{d}\right)} \ll \begin{cases} X^{1/2}\log X & \text{if $\;X>1$,}\\ X^{\kappa} & \text{if $\;0<X\leq 1$.} \end{cases} \] \end{lemma} \subsection{Basic facts on Kloosterman sums}\label{Kloos} For any integer $m,n, c\geq 1$, the Kloosterman sum is defined by \[ S(m,n;c)\coloneqq \sum_{\substack{x\mod (c) \\ (x,c)=1}} e\left(\frac{mx+n\overline{x}}{c}\right) \] where $\overline{x}$ stands for the inverse of $x$ modulo $c$. We recall some basic facts on these sums. The Chinese remainder theorem implies the following multiplicativity relation \begin{equation}\label{eq_crt} S(m,n;qr) = S(m\overline{q}^2,n;r) S(m\overline{r}^2,n;q) \end{equation} valid as soon as $(q,r)=1$. Here, $\overline{q}$ (resp. $\overline{r}$) is the inverse of $q$ (resp. $r$) modulo $r$ (resp. $q$). If $p$ and $q$ are two prime numbers, $\gamma\geq 1$ and $r\geq 1$ then, from \eqref{eq_crt} and \cite[(2.312)]{55.0703.02} we obtain \begin{equation}\label{eq_klzero} S\left(p^{\gamma}q,1;qr\right) = \begin{cases} -S\left(p^{\gamma}\overline{q},1;r\right) & \text{if $(q,r)=1$, }\\% 0 & \text{otherwise.} \end{cases} \end{equation} The Weil-Estermann inequality \cite{Es} is \begin{equation}\label{weil} \abs{S(m,n;c)}\leq\sqrt{(m,n,c)}\tau(c)\sqrt{c}. 
\end{equation} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
math
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{claim}[theorem]{Claim} \newtheorem{cor}[theorem]{Corollary} \newtheorem{prop}[theorem]{Proposition} \newtheorem{rem}[theorem]{Remark} \newtheorem{definition}{Definition} \newtheorem{quest}[theorem]{Open Question} \newtheorem{example}[theorem]{Example} \numberwithin{equation}{section} \numberwithin{theorem}{section} \def \blambda{\bm{\lambda}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\mathbb{K}}{\mathbb{K}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\D}[1]{D\leqslantft(#1\right)} \def\scriptstyle{\scriptstyleiptstyle} \def\cr{\cr} \def\leqslantft({\leqslantft(} \def\right){\right)} \def\leqslantft[{\leqslantft[} \def\right]{\right]} \def\langle{\langle} \def\rangle{\rangle} \def\fl#1{\leqslantft\lfloor#1\right\rfloor} \def\rf#1{\leqslantft\lceil#1\right\rceil} \def\leqslant{\leqslantqslant} \def\geqslant{\geqslantqslant} \def\varepsilon{\varepsilon} \def\qquad\mbox{and}\qquad{\qquad\mbox{and}\qquad} \def\mathbf{e}_p{\mathbf{e}_p} \def\mathbf{e}{\mathbf{e}} \def\vec#1{\mathbf{#1}} \defm{m} \newcommand{\commA}[1]{\marginpar{ \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule \color{blue}A.: #1\par \hrule}} \newcommand{\commR}[1]{\marginpar{ \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule \color{red}R.: #1\par \hrule}} \newcommand{\commI}[1]{\marginpar{ \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule \color{magenta}I.: #1\par \hrule}} \newcommand{\commII}[1]{\marginpar{ \vskip-\baselineskip \raggedright\footnotesize \itshape\hrule \color{green}I.: #1\par \hrule}} \newcommand{\mathbb{F}q}{\mathbb{F}_q} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{{\mathcal S}}{\mathcal{S}} \newcommand{\mathbb{F}p}{\mathbb{F}_p} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{\Disc}[1]{\operatorname{Disc}\leqslantft(#1\right)} \newcommand{\Res}[1]{\operatorname{Res}\leqslantft(#1\right)} \def{\mathcal A}{{\mathcal A}} \def{\mathcal B}{{\mathcal B}} \def{\mathcal C}{{\mathcal C}} \def{\mathcal D}{{\mathcal D}} \def{\mathcal E}{{\mathcal E}} \def{\mathcal F}{{\mathcal F}} \def{\mathcal G}{{\mathcal G}} \def{\mathcal H}{{\mathcal H}} \def{\mathcal I}{{\mathcal I}} \def{\mathcal J}{{\mathcal J}} \def{\mathcal K}{{\mathcal K}} \def{\mathcal L}{{\mathcal L}} \def{\mathcal M}{{\mathcal M}} \def{\mathcal N}{{\mathcal N}} \def{\mathcal O}{{\mathcal O}} \def{\mathcal P}{{\mathcal P}} \def{\mathcal Q}{{\mathcal Q}} \def{\mathcal R}{{\mathcal R}} \def{\mathcal S}{{\mathcal S}} \def{\mathcal T}{{\mathcal T}} \def{\mathcal U}{{\mathcal U}} \def{\mathcal V}{{\mathcal V}} \def{\mathcal W}{{\mathcal W}} \def{\mathcal X}{{\mathcal X}} \def{\mathcal Y}{{\mathcal Y}} \def{\mathcal Z}{{\mathcal Z}} \def{\mathfrak M}{{\mathfrak M}} \newcommand{\mathbb{N}m}[1]{\mathrm{Norm}_{\,\mathbb{F}_{q^k}/\mathbb{F}q}(#1)} \def\mbox{Tr}{\mbox{Tr}} \newcommand{\rad}[1]{\mathrm{rad}(#1)} \title[Fields Generated by Polynomials of Given Height]{Discriminants of Fields Generated by Polynomials of Given Height} \author{Rainer Dietmann} \address{Department of Mathematics, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, United Kingdom} \mathbf{e}mail{[email protected]} \author{Alina Ostafe} \address{Department of Pure Mathematics, University of New South Wales, Sydney, NSW 2052, Australia} \mathbf{e}mail{[email protected]} \author{Igor E. 
Shparlinski} \address{Department of Pure Mathematics, University of New South Wales, Sydney, NSW 2052, Australia} \mathbf{e}mail{[email protected]} \begin{abstract} We obtain upper bounds for the number of monic irreducible polynomials over $\mathbb{Z}$ of a fixed degree $n$ and a growing height $H$ for which the field generated by one of its roots has a given discriminant. We approach it via counting square-free parts of polynomial discriminants via two complementing approaches. In turn, this leads to a lower bound on the number of distinct discriminants of fields generated by roots of polynomials of degree $n$ and height at most $H$. We also give an upper bound for the number of trinomials of bounded height with given square-free part of the discriminant, improving previous results of I.~E.~Shparlinski~(2010).\mathbf{e}nd{abstract} \maketitle \section{Introduction} \subsection{Motivation and background} For a positive integer $H$, we use ${\mathcal P}_n(H)$ to denote the set of polynomials \begin{align*} {\mathcal P}_n(H) = \{X^n+a_{n-1}X^{n-1} + \ldots+a_1&X+a_0 \in \mathbb{Z}[X]~:\cr & ~|a_0|, \ldots, |a_{n-1}| <H \}. \mathbf{e}nd{align*} Furthermore, we use ${\mathcal I}_n(H)$ to denote the set of irreducible polynomials from ${\mathcal P}_n(H)$. It is useful to recall that $$ \# {\mathcal I}_n(H) = 2^nH^{n} + O(H^{n-1}), $$ which follows immediately from much more precise results of Chela~\cite{Chela}, Dietmann~\cite{Diet1,Diet2} and Zywina~\cite{Zyw}. We also note that Bhargava~\cite{Bha} has recently established the celebrated {\it van der Waerden conjecture\/} about Galois groups of polynomials from ${\mathcal I}_n(H)$. For an irreducible monic polynomial $f\in \mathbb{Z}[X]$ we use $\Delta(f)$ to denote the discriminant of the algebraic number field $\mathbb{Q}(\alpha)$, where $\alpha$ is a root of $f$ (clearly, for any $f \in {\mathcal I}_n(H)$ all such fields $\mathbb{Q}(\alpha)$ are isomorphic and thus have the same discriminant). For an integer $\Delta$ we denote by $N_n(H,\Delta)$ the number of polynomials $f \in {\mathcal I}_n(H)$ with $\Delta(f) = \Delta$. We recall that various counting problems for discriminants of number fields have been studied in a number of works, see~\cite{BSW, BBP, ILOSS, Jones1, Jones2, JoWh, ElVe, LaRo} and references therein. In particular, a remarkable result of Bhargava, Shankar, and Wang~\cite{BSW} gives an asymptotic formula for the density of polynomials with square-free discriminants, however their model of counting is different from ours. In fact, it seems that the function $N_n(H,\Delta)$ which is our main object of study, has never been investigated before. We derive our estimates from some counting results on square-free parts of discriminants of the polynomials from ${\mathcal P}_n(H)$. We recall that the square-free part $u$ of an integer $k$ is defined by $k = uv^2$ where $v^2$ is the largest perfect square dividing $k$. In particular, $u$ has the same sign as $k$. We remark that, despite the recent progress in~\cite{BSW}, the problem of counting square-free discriminants of the polynomials from ${\mathcal P}_n(H)$ still remains open, unless one assumes the celebrated $ABC$-conjecture, see~\cite{Kedl,Poon}. So, one can consider our result as a first approximation to the desired goal. 
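As a concrete illustration of the objects being counted, the following short sketch (using the SymPy library; the helper \texttt{squarefree\_part} is our own naming and purely illustrative) computes the discriminant of a sample monic polynomial together with its square-free part $u$, where $k=uv^2$ with $v^2$ the largest square dividing $k$ and $u$ carrying the sign of $k$.
\begin{verbatim}
# Illustrative sketch: discriminant of a monic integer polynomial and its
# square-free part u, where k = u * v^2 with v maximal and sign(u) = sign(k).
from sympy import Poly, factorint, symbols

def squarefree_part(k):
    u = -1 if k < 0 else 1
    for p, e in factorint(abs(k)).items():
        if e % 2 == 1:
            u *= p
    return u

X = symbols('X')
f = Poly(X**5 + 3*X + 7, X)      # a sample monic polynomial of degree n = 5
k = int(f.discriminant())
print(k, squarefree_part(k))
\end{verbatim}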
Furthermore, some counting results about square-free parts of discriminants \begin{equation} \label{eq:DiscTrin} \Delta_n(a,b)=(n-1)^{n-1} a^n+n^n b^{n-1} \mathbf{e}nd{equation} of trinomials $X^n + aX + b$ with $n \mathbf{e}quiv 1 \pmod 4$ have been given in~\cite{MMS} (conditionally under the $ABC$-conjecture) and in~\cite{Shp1} (unconditionally). Here we obtain a new bound, improving that of~\cite{Shp1} for a wide range of parameters. In fact our bound is a combination of two results, which we use depending on the relative sizes of parameters. One result is obtained via the {\it determinant method\/} of Bombieri and Pila~\cite{BoPi}, Heath-Brown~\cite{HB3} and Salberger~\cite{S}, as in the work of Dietmann~\cite{Diet2}. The other one is based on the {\it square sieve\/} of Heath-Brown~\cite{HB1}, see Section~\ref{sec:SqSieve}, combined with bounds on character sums with discriminants, see Lemma~\ref{lem:CharSum p}, which are better than those directly implied by the Weil bound (see, for example,~\cite[Theorem~11.23]{IwKow}). We believe such bounds can be of independent interest. \subsection{Notation} We recall that the expressions $A \ll B$, $B \gg A$ and $A=O(B)$ are each equivalent to the statement that $|A|\leqslant cB$ for some positive constant $c$. We use $o(1)$ to denote any expression that tends to $0$ for a fixed $n$ and $H\to\infty$. Throughout the paper, the implied constants in these symbols may depend on the degree $n$ of the polynomials involved, and occasionally, when mentioned explicitly, on some other parameters. The letters $p$ and $q$ always denote prime numbers. \subsection{Discriminants of general polynomials} Our main result is the following upper bound on $N_n(H,\Delta)$, which is obtained by a combination of various techniques. \begin{theorem} \label{thm:NHDelta} Let $\Delta$ be a non-zero integer and $n \geqslant 3$. Then, uniformly over $\Delta$, if for the square-free part $u$ of $\Delta$ neither $|u|(n-1)^{n-1}$ nor $|u|n^n$ is a square, then \begin{equation} \label{eq:nn-1 discr} N_n(H,\Delta)\leqslant H^{n-2+\sqrt{2}+o(1)}, \mathbf{e}nd{equation} otherwise \begin{equation} \label{eq:Any discr} N_n(H,\Delta)\ll \begin{cases}H^{n-2n/(3n+3)}(\log H)^{(5n+1)/(3n+3)} & \text{if}\ n \geqslant 5\cr H^{n-n/(2n-1)}(\log H)^{(3n-2)/(2n-1)} & \text{if}\ n =3, 4. \mathbf{e}nd{cases} \mathbf{e}nd{equation} for any $\Delta$. \mathbf{e}nd{theorem} We note that the bound~\mathbf{e}qref{eq:nn-1 discr} is better (when it applies) than~\mathbf{e}qref{eq:Any discr} only for $n \leqslant 7$. Let $$ M_n(H,D)= \sum_{|\Delta| \leqslant D} N_n(H,\Delta). $$ Given a real parameter $D\geqslant 1$, using that there are $O(D^{1/2})$ values of $\Delta\leqslant D$ with the square-free part $u$ satisfying $|u|(n-1)^{n-1}$ or $|u|n^n$ being a square, we immediately obtain that uniformly over $D$, $$ M_n(H,D) \leqslant H^{o(1)} \begin{cases}D H^{n-2n/(3n+3) } & \text{if}\ n \geqslant 8,\cr DH^{n-2+\sqrt{2} } + D^{1/2} H^{n-2n/(3n+3) } & \text{if}\ n =5,6,7,\cr DH^{n-2+\sqrt{2}} + D^{1/2} H^{n-n/(2n-1)} & \text{if}\ n =3, 4. \mathbf{e}nd{cases} $$ However one can get a better result. \begin{theorem} \label{thm:MHD} Let $D\geqslant 1$ be an integer. Then, for $n \geqslant 5$, uniformly over $D$, we have $$ M_n(H,D) \leqslant D^{(3n+2)/(3n+3)} H^{n(3n+1)/(3n+3)+o(1)} . $$ \mathbf{e}nd{theorem} Clearly, Theorem~\ref{thm:MHD} is nontrivial (that is, improves the trivial bound $M_n(H,D)\ll H^n$) provided that $D \leqslant H^{2n/(3n+2) - \varepsilon}$ for some fixed $\varepsilon>0$. 
In particular, we see that almost all polynomials from ${\mathcal I}_n(H)$ generate fields with discriminants of size at least $H^{2n/(3n+2)+o(1)}$. We also note very recent results of Anderson, Gafni, Lemke Oliver, Lowry-Duda, Shakan, and Zhang~\cite{AGLLSZ} about the arithmetic structure of discriminants of polynomials from ${\mathcal P}_n(H)$. \begin{rem} \label{rem:disc split} We note that the bound of Theorem~\ref{thm:NHDelta} also implies an upper bound for the number $K_n(H,\delta)$ of polynomials $f\in{\mathcal I}_n(H)$ such that the discriminant $\delta(f)$ of the splitting field $L_f$ of $f$ is $\delta$. Indeed, for a given $f\in {\mathcal I}_n(H)$, we have the divisibility $\Delta(f)\mid\delta(f)$. Thus, using $\delta(f) = H^{O(1)}$ for $f\in{\mathcal I}_n(H)$ and the classical bound $\tau(\delta) = \delta^{o(1)}$ for the divisor function $\tau$, we obtain $$ K_n(H,\delta)\leqslant \sum_{\Delta \mid\delta} N_n(H, \Delta) \leqslant H^{o(1)} \begin{cases}H^{n-2n/(3n+3) } & \text{if}\ n \geqslant 5,\cr H^{n-n/(2n-1) } & \text{if}\ n =3, 4. \mathbf{e}nd{cases} $$ Similarly, we also have an analogue of Theorem~\ref{thm:MHD} for $K_n(H,\delta)$. \mathbf{e}nd{rem} \subsection{Discriminants of trinomials} \label{sec:discr trinom} Let $T_n(A,B,C,D;u)$ be the number of pairs of integers $(a,b) \in [C,C+A] \times [D,D+B]$ such that for the trinomial discriminant~\mathbf{e}qref{eq:DiscTrin} we have $\Delta_n(a,b) =ur^2$, for some positive integer $r$. For $n \mathbf{e}quiv 1 \pmod 4$, $A \geqslant 1$, $B \geqslant 1$, $C \geqslant 0$, $D \geqslant 0$ and square-free $u$, Shparlinski~\cite[Theorem~1]{Shp1} has obtained the bound \begin{align*} T_n(A,B,C,D;u) & \ll (AB)^{2/3} (\log (AB))^{4/3} + (A+B) (\log(AB))^2\cr & \qquad \qquad \qquad + (AB)^{1/3} \leqslantft( \frac{\log(ABCD) \log (AB)}{\log \log (ABCD)} \right)^2, \mathbf{e}nd{align*} using exponential sums and the square sieve. For $C \geqslant 1$ and $A \ll B^{2-\varepsilon}$ with an arbitrary fixed $\varepsilon > 0$, we can sharpen this as follows. \begin{theorem} \label{thm:trinom} Let $n \mathbf{e}quiv 1 \pmod 4$, $n \geqslant 2$, $A \geqslant 1$, $B \geqslant 1$, $C \geqslant 1$, $D \geqslant 0$, and let $u$ be square-free. Then $$ T_n(A,B,C,D;u) \leqslant A (A+B+C+D)^{o(1)}. $$ \mathbf{e}nd{theorem} As in~\cite{Shp1}, from this we obtain the following two results: \begin{cor} \label{cor:Sn} In the notation of Theorem~\ref{thm:trinom}, let $S_n(A,B,C,D)$ be the number of distinct quadratic fields $\mathbb{Q}(\sqrt{\Delta_n(a,b)})$ taken for all pairs of integers $(a,b) \in [C,C+A] \times [D,D+B]$ such that $X^n+aX+b$ is irreducible over $\mathbb{Q}$. Then under the assumptions of Theorem~\ref{thm:trinom}, we have $$ S_n(A,B,C,D) \geqslant B (A+B+C+D)^{o(1)}. $$ \mathbf{e}nd{cor} In fact, in Corollary~\ref{cor:Sn}, the lower bound holds for the number of distinct square-free parts of discriminants $\Delta_n(a,b)$. Thus, taking $C=D=1$ and $A=B=H$ in Corollary~\ref{cor:Sn}, by Lemma~\ref{lem:D/Delta} below, we also obtain the following: \begin{cor} \label{cor:distinct disc} For $n \geqslant 2$ with $n \mathbf{e}quiv 1 \pmod 4$, the number of distinct discriminants of fields generated by a root of polynomials from $$ \{X^n+aX+b~:~ 1\leqslant a,b \leqslant H\} $$ is at least $H^{1+o(1)}$. 
\mathbf{e}nd{cor} \begin{cor} \label{cor:QX} Let $Q_n(\Delta)$ be the number of distinct quadratic fields $\mathbb{Q}(\sqrt{\Delta_n(a,b)})$ taken over all integers $a,b \geqslant 1$ such that $X^n+aX+b$ is irreducible over $\mathbb{Q}$ and $|\Delta_n(a,b)| \leqslant \Delta$. Then, for $n \mathbf{e}quiv 1 \pmod 4$ we have $$ Q_n(\Delta) \geqslant \Delta^{1/(n-1)+o(1)}. $$ \mathbf{e}nd{cor} Corollary~\ref{cor:QX} improves the bound $$ Q_n(\Delta) \gg \Delta^{\kappa_n/3} (\log \Delta)^{-1} $$ in~\cite{Shp1}, where $$ \kappa_n = \frac{1}{n} + \frac{1}{n-1}, $$ by asymptotically a factor $3/2$ in the exponent. We conclude the paper with an appendix where we take the opportunity to correct an error in~\cite[Lemmas~5 and~6]{Diet2} (and consequently~\cite[Lemma~8]{Diet2}) which are not correct as stated if the degree $n$ is of the form $n=m^2$ or $n=m^2+1$ for some odd $m$, see Section~\ref{sec:app}. \section{Preparations} \subsection{Polynomials and discriminants} We recall that for an arbitrary field $\mathbb{K}$ and $f=X^n+a_{n-1}X^{n-1} +\ldots+a_1X+a_0\in\mathbb{K}[X]$, the discriminant of $f$ is defined by \begin{equation} \label{eq:D vs R} \Disc{f}=(-1)^{n(n-1)/2}\Res{f,f'}, \mathbf{e}nd{equation} where $\Res{g,h}$ denotes the resultant of $g,h \in \mathbb{K}[X]$. Throughout we treat $\Disc{f}$ as a polynomial in formal variables $a_0, \ldots, a_{n-1}$. It is well-known, see~\cite[Section~3.3]{Lang}, that $\Disc{f}$ and $\Delta(f)$ are related via an integer square. \begin{lemma} \label{lem:D/Delta} Let $f\in\mathbb{Q}[X]$ be a monic irreducible polynomial. Then $\Disc{f}/\Delta(f)=r^2$ for some integer $r\geqslant 1$. \mathbf{e}nd{lemma} We also recall that the question about the number of polynomials $f \in {\mathcal I}_n(H)$ with $\Delta(f) = \Disc{f}$ remains unanswered. Ash, Brakenhoff and Zarrabi~\cite{ABZ} give some heuristic and numerical evidences towards the conjecture, attributed in~\cite{ABZ} to Hendrik Lenstra, that the density of such polynomials is $6/\pi^2$. We remark that this density is higher than the expected density of square-free discriminants $\Disc{f}$ (in which case we immediately obtain $\Delta(f) = \Disc{f}$ by Lemma~\ref{lem:D/Delta}), see~\cite{ABZ} for a discussion of this phenomenon. We now need several results about the irreducibility of some polynomials involving polynomial discriminants. For the rest of the paper the discriminant $\Disc{F}$ of a polynomial $F$ always means the discriminant with respect to the variable $X$, even if the polynomial $F$ may depend on other variables. \begin{lemma} \label{l1} Let $n \geqslant 3$, let $a_{2}, \ldots, a_{n-1} \in \mathbb{Z}$ and let $c_0, c_1 \in \mathbb{Q}$. Moreover, let $u \in \mathbb{Z}$ be square-free such that neither $|u|(n-1)^{n-1}$ nor $|u|n^n$ is a square. Then the polynomial \begin{align*} Z^2-u \Disc{X^n+a_{n-1} X^{n-1} + \ldots + a_2 X^2 + (c_0 A_0 + c_1) X + A_0}\quad &\cr \in \mathbb{Q}[A_0,Z] & \mathbf{e}nd{align*} is irreducible in $\mathbb{Q}[A_0,Z]$. \mathbf{e}nd{lemma} \begin{proof} We closely follow the proof of~\cite[Lemma~5]{Diet2}, see also Section~\ref{sec:app}. Writing $$ D(A_0) = u \Disc{X^n+a_{n-1} X^{n-1} + \ldots + a_2 X^2 + (c_0 A_0 + c_1) X + A_0}, $$ it is enough to show that $D(A_0)$ is no square in $\mathbb{Q}[A_0]$. For $c_0 \ne 0$, by~\cite[Lemma~4]{Diet2}, we find that the monomial in $D(A_0)$ with biggest degree is $$ u (-1)^{(n-1)(n-2)/2} (n-1)^{n-1} c_0^n A_0^n, $$ which cannot be a square in $\mathbb{Q}[A_0]$. Indeed, if $n$ is odd this is obvious. 
If $n$ is even this is true since $|u|(n-1)^{n-1}$ is not a square but $c_0^n$ is. For $c_0=0$, by~\cite[Lemma~3]{Diet2}, one finds that the monomial in $D(A_0)$ with biggest degree is $$ u (-1)^{n(n-1)/2} n^n A_0^{n-1}. $$ Again, since $|u|n^n$ is no square, this cannot be a square in $\mathbb{Q}[A_0]$. \mathbf{e}nd{proof} In the same way one proves the following analogue of~\cite[Lemma~6]{Diet2}. \begin{lemma} \label{l2} Let $n \geqslant 3$, let $a_2, \ldots, a_{n-1} \in \mathbb{Z}$ and $c \in \mathbb{Q}$. Moreover, let $u \in \mathbb{Z}$ be square-free such that $|u|(n-1)^{n-1}$ is not a square. Then the polynomial $$ Z^2 - u \Disc{X^n+a_{n-1} X^{n-1} + \ldots + a_2 X^2 + A_1 X + c}\in \mathbb{Q}[A_1,Z] $$ is irreducible in $\mathbb{Q}[A_1,Z]$. \mathbf{e}nd{lemma} The argument below is modelled from that of the proof of~\cite[Lemma~4]{Shp0}. For a monic polynomial $f(X)\in\mathbb{K}[X]$ and $(u,v) \in \mathbb{K}^*\times \mathbb{K}$, we define the polynomial $$ f_{u,v}(X) = u^{n} f\leqslantft(u^{-1} \leqslantft(X+v\right)\right) \in\mathbb{K}[X], $$ which we write as \begin{equation} \label{eq:Auv def} f_{u,v}(X) = X^n + \sum_{j=1}^{n}A_{f,j} (u,v) X^{n-j}. \mathbf{e}nd{equation} One easily verifies that by the Taylor formula \begin{equation} \label{eq:Auv} A_{f,j} (u,v) = u^j \frac{f^{(n-j)}(u^{-1} v)} {(n-j)!}, \qquad j =1, \ldots, n. \mathbf{e}nd{equation} We need some simple properties of the polynomials $f_{u,v}(X)$. First we relate $\Disc{f_{u,v}}$ to $\Disc{f}$. The following statement is shown in the proof of~\cite[Theorem~1]{Shp2}; it follows easily via the standard expression of the discriminant via the roots of the corresponding polynomial and the relation between the roots of $f$ and $f_{u,v}$. \begin{lemma} \label{lem:Discr uv} For any field $\mathbb{K}$ and a monic polynomial $f(X) \in \mathbb{K}[X]$ of degree $n$, we have $$ \Disc{f_{u,v}} = u^{n(n-1)}\Disc{f}, \qquad (u,v) \in \mathbb{K}^*\times \mathbb{K}. $$ \mathbf{e}nd{lemma} Let $\mathbb{F}_p$ denote the finite field of $p$ elements. We now show that the map $f \mapsto f_{u,v}$ is almost a permutation on the set of monic polynomials $f(X) \in \mathbb{F}_p[X]$ of fixed degree. \begin{lemma} \label{lem:Distinct} For a prime $p> n$, for all but at most $O\leqslantft(p^{\fl{n/2}+1}\right)$ monic polynomials $f(X) \in \mathbb{F}_p[X]$ of degree $n$, the polynomials $f_{u,v}(X)$, $(u,v) \in \mathbb{F}_p^*\times \mathbb{F}_p$, are pairwise distinct. \mathbf{e}nd{lemma} \begin{proof} Let ${\mathcal A}$ be the set of $m \leqslant n $ distinct roots of $f$. Then the non-uniqueness condition $$ f_{s,t}(X)= f_{u,v}(X) $$ with $(s,t) \ne (u,v)$ means that for any $\alpha \in {\mathcal A}$ there is $\beta \in {\mathcal A}$ with $s\alpha - t = u\beta - v$. Hence there is a nontrivial linear transformation ${\mathcal A} \mapsto a{\mathcal A} + b$, sending each element $\alpha \in {\mathcal A}$ to $a\alpha+b$, which fixes the set ${\mathcal A}$, that is, $$ {\mathcal A} = a{\mathcal A} + b. $$ If $a=1$ then $b \ne 0$ and examining the orbit $$ \alpha \mapsto \alpha + b \mapsto \alpha +2b\mapsto \ldots $$ of any element $\alpha \in {\mathcal A}$ we see that for some $k \leqslant n$ we have to have $\alpha = \alpha + kb$ which is impossible since $p > n$. Assume now that $a\ne 1$. Hence for ${\mathcal B} = {\mathcal A} +b(a-1)^{-1}$ we have \begin{equation} \label{eq:Set B} {\mathcal B} = a{\mathcal B}. 
\mathbf{e}nd{equation} Examining the orbit $$ \beta \mapsto a\beta \mapsto a^2 \beta \mapsto \ldots $$ of any non-zero element $\beta \in {\mathcal B}$ we see that $a$ is of multiplicative order at most $m$ and thus takes at most $m(m+1)/2$ possible values. Finally, when $a\ne 1$ is fixed, there are at most $O\leqslantft(p^{\fl{n/2}}\right)$ possibilities for the set ${\mathcal B}$. Indeed, we see from~\mathbf{e}qref{eq:Set B} that ${\mathcal B}$ is a union of cosets of the multiplicative group $\langle a\rangle \subseteq \mathbb{F}_p^*$ generated by $a$, and possibly of $\{0\}$. Since $a\ne 1$ we see that $\#\langle a\rangle \geqslant 2$ so each such coset if of size at least $2$, and thus there are at most $\fl{m/2}$ such cosets in ${\mathcal B}$. We now observe that, for a fixed $a$, any such coset is defined by any of its elements. Hence the number of possibilities for the set ${\mathcal B}$ does not exceed the number of choices of $\fl{m/2}$ distinct elements of $\mathbb{F}_p$. When the set ${\mathcal B}$ is fixed, there are $p$ possibilities for $b \in \mathbb{F}_p$. Therefore we conclude that there are $O\leqslantft(p^{\fl{m/2}+1}\right)$ possibilities for the set of roots ${\mathcal A}$. Since there are $O(1)$ choices for the multiplicities of these roots, and $m \leqslant n$, the result follows. \mathbf{e}nd{proof} \subsection{Character sums with discriminants} We remark that the values of the quadratic character $\chi$ of discriminants are polynomial analogues of the M{\"o}bius functions for integers, since by the Stickelberger theorem~\cite{Dalen,St}, for a square-free polynomial $f\in\mathbb{F}_p[X]$ of degree $n$, where $p$ is odd, \begin{equation} \label{eq:Stickel} \leqslantft(\frac{\Disc{f}}{p}\right)=(-1)^{n-r}, \mathbf{e}nd{equation} where $(u/p)$ is the Legendre symbol of $u$ modulo $p$ and $r$ is the number of distinct irreducible factors of $f$ and, of course, $$ \leqslantft(\frac{\Disc{f}}{p}\right)= 0 $$ if $f$ is not square-free. In particular, this interpretation has motivated the work of Carmon and Rudnick~\cite{CaRu}. Here we also need some simple estimates. Let ${\mathcal M}_{n,p}$ be the set of monic polynomials of degree $n$ over $\mathbb{F}_p$. \begin{lemma} \label{lem:sumDisc} For a prime $p\geqslant 3$, $$ \sum_{f \in {\mathcal M}_{n,p}} \leqslantft(\frac{\Disc{f}}{p}\right)=0. $$ \mathbf{e}nd{lemma} \begin{proof} Let ${\mathcal J}(p)$ be the set of all monic irreducible polynomials over $\mathbb{F}_p$. We consider the zeta function $\zeta(T)$ of the affine line over $\mathbb{F}_p$, which is given by the product $$ \zeta(T)= \prod_{g \in{\mathcal J}(p)} \leqslantft(\frac{1}{1-T^{\deg g}}\right) = \frac{1}{1-pT}, $$ that is absolutely converging for $|T| < 1$, see~\cite[Equations~(1) and~(2)]{Ros}. Taking the inverse, we derive \begin{equation} \label{eq:zetainv} \zeta(T)^{-1}=(1-pT)=\prod_{g \in{\mathcal J}(p)} \leqslantft(1-T^{\deg g}\right) =\sum_{ f\in{\mathcal S}(p)} (-1)^{\omega(f)}T^{\deg f}, \mathbf{e}nd{equation} where ${\mathcal S}(p)$ is the set of all monic square-free polynomials over $\mathbb{F}_p$ and $\omega(f)$ denotes the number of distinct irreducible factors of $f$. Using~\mathbf{e}qref{eq:Stickel} and comparing the coefficient of $T^n$ in the equation~\mathbf{e}qref{eq:zetainv}, we obtain the claimed result. 
\mathbf{e}nd{proof} Now, for a vector $$ \blambda = \leqslantft(\lambda_1 , \ldots, \lambda_n\right) \in \mathbb{F}_p^n $$ and a polynomial $$f(X) = X^n+a_{n-1}X^{n-1} + \ldots+a_1X+a_0\in {\mathcal M}_{n,p} $$ we define \begin{equation} \label{eq:InnProd} \langle \blambda \circ f\rangle = \lambda_1 a_{n-1} + \ldots+ \lambda_{n} a_{0}. \mathbf{e}nd{equation} For an integer $m\geqslant 1$, we denote $$ \mathbf{e}_m(z) = \mathbf{e}xp(2 \pi i z/m), $$ and consider certain mixed exponential and character sums with polynomials. Bounds of these sums underly our approach via the square-sieve method. We emphasise that our bounds in Lemmas~\ref{lem:CharSum p large n} and~\ref{lem:CharSum p} below save $\max\{p^{(n-1)/4}, p\}$ against the trivial bound, while an immediate application of the classical Weil bound (see, for example,~\cite[Theorem~11.23]{IwKow}) saves only $p^{1/2}$. The existence of such a bound is quite remarkable since the discriminant $\Disc{f}$, as a polynomial in the coefficients of $f$, is highly singular: its locus of singularity is of co-dimension one, see~\cite[Section~4]{Shp2}. In particular, this means that the result of Katz~\cite{Katz} does not apply, while the result of Rojas-Le{\'o}n~\cite{Ro-Le} does not give any advantage over the direct application of the Weil bound~\cite[Theorem~11.23]{IwKow}, which saves $p^{1/2}$ over the trivial bound. Instead we recall the following bound obtained independently by Bienvenu and L{\^e}~\cite[Theorem~1]{BiLe} and Porritt~\cite[Theorem~1]{Por}. \begin{lemma} \label{lem:CharSum p large n} Let $p \geqslant 3$ be a prime. Then, for $n \geqslant 3$ and any $\blambda \in \mathbb{F}_p^n $, in the notation~\mathbf{e}qref{eq:InnProd}, we have $$ \sum_{f \in {\mathcal M}_{n,p} } \leqslantft(\frac{\Disc{f}}{p}\right) \mathbf{e}_p\leqslantft(\langle \blambda \circ f\rangle \right) \ll p^{(3n+1)/4}. $$ \mathbf{e}nd{lemma} We now use a different argument, which stems from~\cite{Shp0}, to get a larger saving for small values of $n$. \begin{lemma} \label{lem:CharSum p} Let $p \geqslant 3$ be a prime. Then, for $n \geqslant 3$ and any $\blambda \in \mathbb{F}_p^n $, in the notation~\mathbf{e}qref{eq:InnProd}, we have $$ \sum_{f \in {\mathcal M}_{n,p} } \leqslantft(\frac{\Disc{f}}{p}\right) \mathbf{e}_p\leqslantft(\langle \blambda \circ f\rangle \right) \ll p^{n-1}. $$ \mathbf{e}nd{lemma} \begin{proof} By Lemma~\ref{lem:sumDisc} we can assume that $\blambda$ is not identical to zero. Let ${\mathcal E}_{n,p}$ be the exceptional set of polynomials which are described in Lemma~\ref{lem:Distinct}, that is, the set of monic polynomials $f(X) \in \mathbb{F}_p[X]$ of degree $n$, such that the polynomials $f_{u,v}(X)$, $(u,v) \in \mathbb{F}_p^*\times \mathbb{F}_p$, are not pairwise distinct. Thus, by Lemma~\ref{lem:Distinct}, for $n \geqslant 3$, we have \begin{equation} \label{eq:set E} \# {\mathcal E}_{n,p} = O\leqslantft(p^{\fl{n/2}+1}\right) = O\leqslantft(p^{n-1}\right). \mathbf{e}nd{equation} For $f \in {\mathcal M}_{n,p}$ we define the quantity $R(f)$ as the following product of the resultants of the consecutive derivatives of $f$: $$ R(f) = \prod_{j = 0}^{n-1}\Res{f^{(j)},f^{(j+1)}}. $$ Let ${\mathcal F}_{n,p}$ be the set of $f \in {\mathcal M}_{n,p}$ with $R(f) = 0$. Clearly \begin{equation} \label{eq:set F} \# {\mathcal F}_{n,p} = O\leqslantft(p^{n-1}\right). \mathbf{e}nd{equation} Define $$ {\mathcal L}_{n,p} = {\mathcal M}_{n,p} \setminus \leqslantft({\mathcal E}_{n,p}\cup {\mathcal F}_{n,p}\right). 
$$ We now see from~\mathbf{e}qref{eq:set E} and~\mathbf{e}qref{eq:set F} that for any $(u,v) \in \mathbb{F}_p^*\times \mathbb{F}_p$ we have \begin{equation} \begin{split} \label{eq:M2L} \sum_{f \in {\mathcal M}_{n,p} } & \leqslantft(\frac{\Disc{f}}{p}\right) \mathbf{e}_p\leqslantft(\langle \blambda \circ f\rangle \right) \cr & = \sum_{f \in {\mathcal L}_{n,p} } \leqslantft(\frac{\Disc{f}}{p}\right) \mathbf{e}_p\leqslantft(\langle \blambda \circ f\rangle \right) + O\leqslantft(p^{n-1}\right)\cr & = \sum_{f \in {\mathcal L}_{n,p} } \leqslantft(\frac{\Disc{f_{u,v}}}{p}\right) \mathbf{e}_p\leqslantft(\langle \blambda \circ f_{u,v}\rangle \right) + O\leqslantft(p^{n-1}\right). \mathbf{e}nd{split} \mathbf{e}nd{equation} Since $n(n-1)$ is even, by Lemma~\ref{lem:Discr uv} we have $$ \leqslantft(\frac{\Disc{f_{u,v}}}{p}\right) = \leqslantft(\frac{\Disc{f}}{p}\right). $$ Thus, summing~\mathbf{e}qref{eq:M2L} over all pairs $(u,v) \in \mathbb{F}_p^*\times \mathbb{F}_p$ and changing the order of summation, we obtain \begin{align*} \sum_{f \in {\mathcal M}_{n,p} } & \leqslantft(\frac{\Disc{f}}{p}\right) \mathbf{e}_p\leqslantft(\langle \blambda \circ f\rangle \right) \cr & =\frac{1}{p(p-1)}\sum_{f \in {\mathcal L}_{n,p} } \leqslantft(\frac{\Disc{f}}{p}\right) \sum_{(u,v) \in \mathbb{F}_p^*\times \mathbb{F}_p} \mathbf{e}_p\leqslantft(\langle \blambda \circ f_{u,v}\rangle \right) \cr & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + O\leqslantft(p^{n-1}\right). \mathbf{e}nd{align*} Extending the summation to all pairs $(u,v) \in \mathbb{F}_p\times \mathbb{F}_p$ introduces an error $ O\leqslantft(p^{n-1}\right)$ which is admissible, so we obtain \begin{equation} \begin{split} \label{eq:M2L uv} \sum_{f \in {\mathcal M}_{n,p} } & \leqslantft(\frac{\Disc{f}}{p}\right) \mathbf{e}_p\leqslantft(\langle \blambda \circ f\rangle \right) \cr & =\frac{1}{p(p-1)}\sum_{f \in {\mathcal L}_{n,p} } \leqslantft(\frac{\Disc{f}}{p}\right) \sum_{(u,v) \in \mathbb{F}_p^2} \mathbf{e}_p\leqslantft(\langle \blambda \circ f_{u,v}\rangle \right) \cr & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + O\leqslantft(p^{n-1}\right). \mathbf{e}nd{split} \mathbf{e}nd{equation} Recalling the notation~\mathbf{e}qref{eq:Auv def} we write $$ \sum_{(u,v) \in \mathbb{F}_p^2} \mathbf{e}_p\leqslantft(\langle \blambda \circ f_{u,v}\rangle \right) = \sum_{(u,v) \in \mathbb{F}_p^2} \mathbf{e}_p\leqslantft(\sum_{j=1}^{n} \lambda_j A_{f,j} (u,v) \right) . $$ We now see from~\mathbf{e}qref{eq:M2L uv} that it is enough to show that for any $f \in {\mathcal L}_{n,p}$ the Deligne bound (see~\cite[Section~11.11]{IwKow}) applies to the last sum and thus implies the bound \begin{equation} \label{eq:Deligne} \sum_{(u,v) \in \mathbb{F}_p^2} \mathbf{e}_p\leqslantft(\sum_{j=1}^{n} \lambda_j A_{f,j} (u,v) \right) = O(p). \mathbf{e}nd{equation} For this we have to show that the highest form of the polynomial $$ F_f(U,V) = \sum_{j=1}^{n} \lambda_j A_{f,j} (U,V) \in \mathbb{F}_p[U,V] $$ is nonsingular. Since $\blambda$ is not identical to zero there is $m$ such that $\lambda_m \ne 0$ and $\lambda_j = 0 $ for $j > m$ (this condition is void if $m=n$). We see from~\mathbf{e}qref{eq:Auv} that $A_{f,j} (U,V) $ is a homogeneous polynomial of degree $\deg A_{f,j} (U,V) = j$, $ j=1, \ldots, n$. Hence the highest form of $F_f(U,V)$ is $\lambda_m A_{f,m} (U,V)$. 
Therefore, to establish the bound~\eqref{eq:Deligne}, it is sufficient to show that the polynomial $\lambda_m A_{f,m} (U,V) $ is nonsingular, that is, that the equations \begin{equation} \label{eq:Eq dU} \frac{ \partial A_{f,m} (U,V)}{\partial U} = 0 \end{equation} and \begin{equation} \label{eq:Eq dV} \frac{ \partial A_{f,m} (U,V)}{\partial V} = 0 \end{equation} have no common zero $(u_0,v_0) \ne (0,0)$ in the algebraic closure of $\mathbb{F}_p$. Now, if $u_0 = 0$, then from~\eqref{eq:Eq dV} we conclude that $v_0=0$, which is impossible. Indeed, from~\eqref{eq:Auv} we see that $$ A_{f,m} (U,V) \equiv \binom{n}{m} V^m \pmod U $$ in the ring $\mathbb{F}_p[U,V]$. Hence $$ \frac{ \partial A_{f,m} (U,V)}{\partial V} \equiv \binom{n}{m} m V^{m-1} \pmod U, $$ which implies the above claim. If $u_0 \ne 0$, then by the Euler formula for partial derivatives we have $$ U \frac{ \partial A_{f,m} (U,V)}{\partial U} + V \frac{ \partial A_{f,m} (U,V)}{\partial V} = m U^m \frac{f^{(n-m)}(U^{-1} V)} {(n-m)!} . $$ Therefore, we conclude from the equations~\eqref{eq:Eq dU} and~\eqref{eq:Eq dV} that $v_0/u_0$ is a zero of $f^{(n-m)}(X)$, and also, by~\eqref{eq:Eq dV}, $v_0/u_0$ is a zero of $f^{(n-m+1)}(X)$. Hence $\Res{f^{(n-m)}, f^{(n-m+1)}} = 0$, which contradicts the condition that $f \not \in {\mathcal F}_{n,p}$. Thus, we have the bound~\eqref{eq:Deligne} and the desired result follows. \end{proof} We now recall the definition of the Jacobi symbol $(u/m)$ modulo an odd square-free integer $m$: $$ \left(\frac{u}{m}\right) = \prod_{\substack{p \mid m\cr p~\mathrm{prime}}} \left(\frac{u}{p}\right) , $$ where, as before, $\left(\frac{u}{p}\right)$ is the Legendre symbol (that is, the quadratic character) modulo a prime $p$, see~\cite[Section~3.5]{IwKow}. We now extend the definition of ${\mathcal M}_{n,p}$ to residue rings, and use ${\mathcal M}_{n,m}$ to denote the set of monic polynomials of degree $n$ over $\mathbb{Z}_m$. Now, using the Chinese Remainder Theorem for character sums, see~\cite[Equation~(12.21)]{IwKow}, we see that Lemma~\ref{lem:sumDisc} implies the following identity. \begin{lemma} \label{lem:CharSum m compl} Let $p$ and $q$ be two sufficiently large distinct primes and let $m=pq$. Then, for $n \geqslant 3$ we have $$ \sum_{f \in {\mathcal M}_{n,m} }\left(\frac{\Disc{f}}{m}\right) = 0. $$ \end{lemma} Similarly, we see that Lemmas~\ref{lem:CharSum p large n} and~\ref{lem:CharSum p}, together with the Chinese Remainder Theorem for mixed sums of additive and multiplicative characters, see~\cite[Equation~(12.21)]{IwKow}, yield the following bound. \begin{lemma} \label{lem:CharSum m} Let $p$ and $q$ be two sufficiently large distinct primes and let $m=pq$. Then, for $n \geqslant 3$ and any $\blambda \in \mathbb{Z}_m^n$, in the notation~\eqref{eq:InnProd}, we have $$ \sum_{f \in {\mathcal M}_{n,m} }\left(\frac{\Disc{f}}{m}\right) \mathbf{e}_m\left(\langle \blambda \circ f\rangle \right) \ll \min\left\{m^{(3n+1)/4}, m^{n-1}\right\}. $$ \end{lemma} We now derive our main tool. \begin{lemma} \label{lem:CharSum m H} Let $p$ and $q$ be two sufficiently large distinct primes and let $m=pq$.
Then we have \begin{align*} \sum_{f \in {\mathcal P}_n(H)}& \left(\frac{\Disc{f}}{m}\right)\cr &\quad \ll \left(\left(H/m\right)^{n-1}\log m +\left(\log m\right)^{n}\right) \begin{cases}m^{(3n+1)/4} & \text{if}\ n \geqslant 5,\cr m^{n-1}& \text{if}\ n =3,4. \end{cases} \end{align*} \end{lemma} \begin{proof} Clearly the above sum can be split into $O(H^n/m^n)$ complete sums, which all vanish by Lemma~\ref{lem:CharSum m compl}, and also, for $k=0, \ldots, n-1$, into $O(H^{k}/m^{k}+1)$ hybrid sums that are complete with respect to exactly $k$ variables and incomplete with respect to the remaining $n-k$ variables. Using the standard reduction between complete and incomplete sums (see~\cite[Section~12.2]{IwKow}) and applying Lemma~\ref{lem:CharSum m} (for incomplete sums), we derive \begin{align*} \sum_{f \in {\mathcal P}_n(H)}& \left(\frac{\Disc{f}}{m}\right)\cr &\quad\ll \sum_{k=0}^{n-1} (H^{k}/m^{k}+1) \min\left\{m^{(3n+1)/4}, m^{n-1}\right\} (\log m)^{n-k}, \end{align*} which implies the result. \end{proof} \section{Proof of Theorem~\ref{thm:NHDelta}} \subsection{The bound~\eqref{eq:nn-1 discr}: the determinant method} \label{sec:DetMethod} We need some results about equations involving discriminants, which could be of independent interest. \begin{lemma} \label{l3} Let $n \geqslant 3$, let $a_2, \ldots, a_{n-1} \in \mathbb{Z}$ and let $d_0, d_1, d_2 \in \mathbb{Q}$ such that $(d_0, d_1) \ne (0, 0)$. Moreover, let $u \in \mathbb{Z}$ be square-free such that neither $|u|(n-1)^{n-1}$ nor $|u|n^n$ is a square. Then, for any $c \geqslant 1$, the system of equations \begin{align*} z^2 & = u\Disc{X^n + a_{n-1} X^{n-1} + \ldots + a_1 X + a_0}\cr 0 & = d_0 a_0 + d_1 a_1 + d_2 \end{align*} has at most $ H^{1/2+o(1)}$ solutions $z, a_0, a_1 \in \mathbb{Z}$ such that $$ |a_0|, |a_1| \leqslant H \qquad\mbox{and}\qquad |z| \leqslant H^c. $$ \end{lemma} \begin{proof} This is a straightforward generalization of~\cite[Lemma~8]{Diet2}, which dealt with the special case $u=1$. Lemmas~\ref{l1} and~\ref{l2} now play the role of~\cite[Lemma~5]{Diet2} and~\cite[Lemma~6]{Diet2}, respectively. The proof can then be followed in a completely analogous way to the proof of~\cite[Lemma~8]{Diet2}. \end{proof} We also need the following technical result, which generalises~\cite[Lemma~11]{Diet2}. \begin{lemma} \label{l4} Let $u \in \mathbb{Z} \backslash\{0\}$, and let $N(H)$ be the number of coefficients $a_2, \ldots, a_{n-1} \in \mathbb{Z}$ such that $|a_i| \leqslant H \quad (2 \leqslant i \leqslant n-1)$ and the polynomial \begin{equation} \begin{split} \label{eq:polyAAZ} Z^2-u \Disc{X^n+a_{n-1} X^{n-1} + \ldots +a_{2} X^2 + A_1 X + A_0}\qquad &\cr \in \mathbb{Z}[A_0, A_1, Z]& \end{split} \end{equation} as a polynomial in $A_0, A_1, Z$ is not absolutely irreducible. Then $$ N(H) \ll H^{n-3}. $$ \end{lemma} \begin{proof} The special case $u=1$ is just~\cite[Lemma~11]{Diet2}. However, if the polynomial~\eqref{eq:polyAAZ} factorises over $\mathbb{C}[A_0, A_1, Z]$, then \begin{equation} \label{eq:polyAA} u\Disc{X^n+a_{n-1} X^{n-1} + \ldots +a_{2} X^2 + A_1 X + A_0}\in \mathbb{C}[A_0, A_1] \end{equation} is a perfect square in $\mathbb{C}[A_0, A_1]$ whence, as $u \ne 0$, also the polynomial~\eqref{eq:polyAA} with $u=1$ is a square in $\mathbb{C}[A_0, A_1]$. Hence also the polynomial~\eqref{eq:polyAAZ} with $u=1$ factorises over $\mathbb{C}[A_0, A_1, Z]$.
The result therefore follows immediately from the special case $u=1$. \end{proof} Given a square-free integer $u\geqslant 1$, we denote by ${\mathcal T}_n(H,u)$ the set of $f \in {\mathcal I}_n(H)$ for which the square-free part of $\Delta(f)$ is $u$, that is, $|\Delta(f)| = r^2u$ for some integer $r\geqslant 1$, and by $T_n(H,u)$ the cardinality of this set. \begin{lemma} \label{lem:det} Uniformly over square-free integers $u \geqslant 1$ such that neither $u(n-1)^{n-1}$ nor $un^n$ is a square, we have $$ T_n(H,u) \leqslant H^{n-2+\sqrt{2}+o(1)}. $$ \end{lemma} \begin{proof} Let us fix some $\varepsilon > 0$. By Lemma~\ref{lem:D/Delta}, $\Disc{f}$ and $\Delta(f)$ have the same square-free part. So, for square-free $u \in \mathbb{Z}$, we see that $T_n(H, u)$ is the number of solutions $a_0, \ldots, a_{n-1} \in \mathbb{Z}, r \in \mathbb{N}$ of the Diophantine equation \begin{equation} \label{aug6} r^2 u = \Disc{X^n + a_{n-1} X^{n-1} + \ldots + a_1 X + a_0} \end{equation} such that $|a_i| \leqslant H \quad (0 \leqslant i \leqslant n-1)$. On writing $z=ru$ one observes that $T_n(H, u)$ is at most the number of solutions $a_0, \ldots, a_{n-1} \in \mathbb{Z}$, $z \in \mathbb{N}$, of \begin{equation} \label{eq1} z^2 = u \Disc{X^n + a_{n-1} X^{n-1} + \ldots + a_1 X + a_0} \end{equation} such that $|a_i| \leqslant H \quad (0 \leqslant i \leqslant n-1)$, and that~\eqref{aug6} and the conditions $|a_i| \leqslant H \quad (0 \leqslant i \leqslant n-1)$ force $|r| \leqslant H^{c_1}$, $|u| \leqslant H^{c_2}$ for some constants $c_1, c_2>0$ depending only on $n$, so $|z| \leqslant H^c$ for some $c \geqslant 1$ depending only on $n$. To bound the number of these solutions, we can now, in a completely analogous way, follow the proof from~\cite[Section~5]{Diet2}, which deals with the special case $u=1$: First, fix $a_2, \ldots, a_{n-1}$; there are $O(H^{n-2})$ choices. By Lemma~\ref{l4}, for all but $O(H^{n-3})$ of these choices the polynomial \begin{equation} \label{eq:poly za1a2} z^2 - u \Disc{X^n + a_{n-1} X^{n-1} + \ldots + a_1 X + a_0} \end{equation} is absolutely irreducible as a polynomial in $z,a_1,a_0$; the exceptional choices contribute at most $O(H^{n-3}\cdot H^{2})=O(H^{n-1})$ solutions (for each pair $(a_1,a_0)$ there are at most two values of $z$), which is acceptable. We may therefore assume that the polynomial~\eqref{eq:poly za1a2} is absolutely irreducible. We can then apply~\cite[Lemma~12]{Diet2}, and the same calculation as in~\cite{Diet2} shows that there exist $J \ll H^{\sqrt{2}/2+\varepsilon}$ polynomials $g_1, \ldots, g_J \in \mathbb{Z}[Z,A_1,A_0]$, such that each $g_j$ is coprime with the polynomial~\eqref{eq:poly za1a2} and has degree bounded only in terms of $n$ and $\varepsilon$, and every solution $(z,a_1,a_0)$ to~\eqref{eq1} with \begin{equation} \label{eq:cond a1a2z} |a_1|, |a_0| \leqslant H \qquad\mbox{and}\qquad |z| \leqslant H^c, \end{equation} in addition satisfies $g_j(z,a_1,a_0)=0$ for some $j \in \{1, \ldots, J\}$, apart possibly from some exceptional set of solutions of cardinality at most $H^{\sqrt{2}+o(1)}$. So we have to consider $J$ systems of two Diophantine equations, each consisting of~\eqref{eq1} and the equation $g_j(z,a_1,a_0)=0$ for some $j \in \{1, \ldots, J\}$. Fix any of those systems. Then it is enough to show that there are at most $H^{\sqrt{2}/2+o(1)}$ integer solutions satisfying~\eqref{eq:cond a1a2z} to this system. To this end, as in~\cite{Diet2}, we can eliminate $z$ from the system, resulting in one Diophantine equation $f_j(a_1, a_0)=0$, where $f_j \in \mathbb{Z}[A_1, A_0]$, which is a non-zero rational polynomial by the coprimality of $g_j$ and~\eqref{eq:poly za1a2}.
This can be factored over $\mathbb{Q}$, and as in~\cite{Diet2}, for each factor that is at least quadratic, the bound of Bombieri and Pila~\cite{BoPi} yields at most $H^{1/2+o(1)}$ integer solutions with $|a_1|,|a_0| \leqslant H$, which is more than satisfactory, as from~\eqref{eq1}, for each pair $(a_1, a_0)$, we get at most two solutions $z$. The case of linear factors is covered by Lemma~\ref{l3}, again yielding at most $H^{1/2+o(1)}$ solutions satisfying~\eqref{eq:cond a1a2z}. Altogether, over all $J \ll H^{\sqrt{2}/2+\varepsilon}$ systems, and taking the exceptional set into account, we obtain at most $$ H^{\sqrt{2}+o(1)} + H^{\sqrt{2}/2+\varepsilon} H^{1/2+o(1)} = H^{\sqrt{2}+o(1)} $$ integer solutions satisfying~\eqref{eq:cond a1a2z}, provided that $\varepsilon < \sqrt{2} -1$. Taking into account the $O(H^{n-2})$ choices for $a_2, \ldots, a_{n-1}$ from the beginning, we obtain $$ T_n(H,u) \leqslant H^{n-2+\sqrt{2}+o(1)}, $$ as required. \end{proof} By Lemma~\ref{lem:D/Delta} we have $$ N_n(H,\Delta)\leqslant T_n(H,u), $$ where $u$ is the square-free part of $\Delta$, and using Lemma~\ref{lem:det}, we now obtain the bound~\eqref{eq:nn-1 discr}. \subsection{The bound~\eqref{eq:Any discr}: the square-sieve method} \label{sec:SqSieve} We recall the definitions of ${\mathcal T}_n(H,u)$ and $T_n(H,u)= \# {\mathcal T}_n(H,u)$ from Section~\ref{sec:DetMethod}. \begin{lemma} \label{lem:sieve} Uniformly over square-free integers $u \geqslant 1$ we have $$ T_n(H,u) \ll \begin{cases}H^{n-2n/(3n+3)}(\log H)^{(5n+1)/(3n+3)} & \text{if}\ n \geqslant 5,\cr H^{n-n/(2n-1)}(\log H)^{(3n-2)/(2n-1)} & \text{if}\ n =3,4. \end{cases} $$ \end{lemma} \begin{proof} As before, by Lemma~\ref{lem:D/Delta}, $\Disc{f}$ and $\Delta(f)$ have the same square-free part, and thus $T_n(H,u)$ is the number of polynomials $f \in {\mathcal I}_n(H)$ for which the square-free part of $\Disc{f}$ is $u$. We now apply the square sieve of Heath-Brown~\cite{HB1} to the discriminants $\Disc{f}$ of polynomials $f \in {\mathcal I}_n(H)$. Take now a real $z\geqslant 2$ and denote by ${\mathcal Q}_z$ the set of all primes $p$ in the interval $(z,2z]$ and by $\pi(z,2z)$ the cardinality of this set, that is, $\pi(z,2z)=\pi(2z)-\pi(z)$, where, as usual, $\pi(x)$ is the number of primes $p\leqslant x$. Clearly, for any $f\in{\mathcal T}_n(H,u)$ the product $u\Disc{f}$ is a perfect square, and thus, for a prime $p\geqslant 3$ we have $$ \left(\frac{u\Disc{f}}{p}\right) = 1, $$ unless $p\mid u\Disc{f}$, or equivalently $p\mid \Disc{f}$ (as $u \mid \Disc{f}$), in which case we have $$ \left(\frac{u\Disc{f}}{p}\right) = 0. $$ Note that the condition $f \in {\mathcal T}_n(H,u) \subseteq {\mathcal I}_n(H)$ automatically implies that $\Disc{f}\ne 0$. Hence, for any $f\in{\mathcal T}_n(H,u)$ we have \begin{equation} \label{eq:sieve} \sum_{p\in{\mathcal Q}_z}\left(\frac{u\Disc{f}}{p}\right)= \pi(z,2z) + O\left(\omega\left(\Disc{f}\right)\right), \end{equation} where $\omega(d)$ is the number of prime divisors of the integer $d \ne 0$. Since $f\in{\mathcal I}_n(H)$, we trivially have $\Disc{f} = H^{O(1)}$.
Now, using the trivial bound $\omega(d) = O(\log d)$ and imposing the restriction \begin{equation} \label{eq:z log H} z\geqslant (\log H)^2, \mathbf{e}nd{equation} we see from the prime number theorem that \begin{equation} \label{eq:z omega} \pi(z,2z) + O\leqslantft(\omega\leqslantft(\Disc{f}\right)\right)\geqslant \frac{1}{2} \pi(z,2z) \mathbf{e}nd{equation} provided that $H$ is large enough (certainly~\mathbf{e}qref{eq:z log H} can be substantially relaxed, but this does not affect our result). Hence, from~\mathbf{e}qref{eq:sieve} and~\mathbf{e}qref{eq:z omega} we conclude $$ \frac{2}{\pi(z,2z)} \sum_{p\in{\mathcal Q}_z}\leqslantft(\frac{u\Disc{f}}{p}\right) \geqslant 1. $$ Squaring, summing over all $f\in{\mathcal T}_n(H,u)$ and then expanding the summation to all $f\in{\mathcal P}_n(H)$, we obtain \begin{align*} T_n(H,u)&\leqslant\frac{4}{\pi(z,2z)^2}\sum_{f\in{\mathcal T}_n(H,u)}\leqslantft|\sum_{p\in{\mathcal Q}_z}\leqslantft(\frac{u\Disc{f}}{p}\right)\right|^2\cr &\leqslant \frac{4}{\pi(z,2z)^2}\sum_{f\in{\mathcal P}_n(H)}\leqslantft|\sum_{p\in{\mathcal Q}_z}\leqslantft(\frac{u\Disc{f}}{p}\right)\right|^2. \mathbf{e}nd{align*} Now, expanding the square and then changing the order of summation and using the multiplicativity of the Jacobi symbol, we derive \begin{equation} \begin{split} \label{eq:Cauchy T} T_n(H,u) &\leqslant \frac{4}{\pi(z,2z)^2}\sum_{f\in{\mathcal P}_n(H)} \sum_{p,q\in{\mathcal Q}_z} \leqslantft(\frac{u\Disc{f}}{pq}\right)\cr & = \frac{4}{\pi(z,2z)^2}\sum_{p,q\in{\mathcal Q}_z}\leqslantft(\frac{u}{pq}\right) \sum_{f\in{\mathcal P}_n(H)}\leqslantft(\frac{\Disc{f}}{pq}\right). \mathbf{e}nd{split} \mathbf{e}nd{equation} Hence $$ T_n(H,u)\ll \frac{1}{\pi(z,2z)^2}\sum_{p,q\in{\mathcal Q}_z} \leqslantft|\sum_{f\in{\mathcal P}_n(H)}\leqslantft(\frac{\Disc{f}}{pq}\right)\right|. $$ If $n \geqslant 5$ we apply now the first bound of Lemma~\ref{lem:CharSum m H} for the inner sum for $O(\pi(z,2z)^2)$ primes $p\ne q$ and the trivial bound $H^n$ for $\pi(z,2z)$ choices of primes $p=q$. Taking also into consideration that $$ \pi(z,2z) \gg \frac{z}{\log z} $$ and $pq\leqslant 4z^2$, we derive \begin{align*} T_n(H,u)& \ll z^{-1}H^n\log z+(H/z^2)^{n-1} z^{(3n+1)/2} \log z +z^{(3n+1)/2} (\log z)^n\cr & \ll z^{-1}H^n\log z+H^{n-1} z^{-n/2+5/2} \log z +z^{(3n+1)/2} (\log z)^n. \mathbf{e}nd{align*} Choosing $z=H^{2n/(3n+3)}(\log H)^{-2(n-1)/(3n+3)}$, thus the condition~\mathbf{e}qref{eq:z log H} is satisfied, we obtain the desired bound. For $n =3,4$, we apply now the second bound of Lemma~\ref{lem:CharSum m H} for the inner sum for $O(\pi(z,2z)^2)$ primes $p\ne q$ and the trivial bound $H^n$ for $\pi(z,2z)$ choices of primes $p=q$. Taking also into consideration that $$ \pi(z,2z) \gg \frac{z}{\log z} $$ and $pq\leqslant 4z^2$, we derive $$ T_n(H,u)\ll z^{-1}H^n\log z+H^{n-1} \log z +z^{2n-2}(\log z)^n. $$ Choosing $z=H^{n/(2n-1)}(\log H)^{-(n-1)/(2n-1)}$, thus the condition~\mathbf{e}qref{eq:z log H} is satisfied, we conclude the proof. \mathbf{e}nd{proof} As before, by Lemma~\ref{lem:D/Delta} we have \begin{equation} \label{eq: N and T} N_n(H,\Delta)\leqslant T_n(H,u), \mathbf{e}nd{equation} where $u$ is the square-free part of $\Delta$, and using Lemma~\ref{lem:sieve}, we now obtain the bound~\mathbf{e}qref{eq:Any discr} and conclude the proof of Theorem~\ref{thm:NHDelta}. 
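The starting point~\eqref{eq:sieve} of the square sieve is elementary and can be checked numerically. The following {\sl Python} sketch (an illustration only, relying on {\sl SymPy} and brute force over very small ranges; the names and parameters here are ours and are not used anywhere in the proofs) verifies, for monic cubics of small height, that when $u$ is the square-free part of $\Disc{f}$ the Legendre symbols $(u\Disc{f}/p)$ equal $1$ for every prime $p\in(z,2z]$ not dividing $\Disc{f}$ and vanish otherwise.
\begin{verbatim}
# Numerical check of the identity behind (eq:sieve): if u is the square-free
# part of Disc(f), then u*Disc(f) is a perfect square, so (u*Disc(f)/p) = 1
# for every prime p not dividing Disc(f), and = 0 otherwise.
# Illustration only (SymPy, brute force over tiny ranges).
from itertools import product
from sympy import Poly, primerange, jacobi_symbol, factorint, symbols

X = symbols('X')

def squarefree_part(d):
    """Square-free u (with the sign of d) such that d = u * r^2."""
    u = 1 if d > 0 else -1
    for p, e in factorint(abs(d)).items():
        if e % 2:
            u *= p
    return u

n, H, z = 3, 3, 20
primes = list(primerange(z + 1, 2 * z + 1))   # the set Q_z, primes in (z, 2z]

for coeffs in product(range(-H, H + 1), repeat=n):
    f = Poly([1, *coeffs], X)                 # monic, height <= H
    d = int(f.discriminant())
    if d == 0 or not f.is_irreducible:        # restrict to I_n(H)
        continue
    u = squarefree_part(d)
    sieve_sum = sum(jacobi_symbol(u * d, p) for p in primes)
    # The sum equals pi(z,2z) minus the number of p in Q_z dividing Disc(f),
    # which is exactly how the square sieve is started in the proof above.
    assert sieve_sum == len(primes) - sum(1 for p in primes if d % p == 0)
print('identity behind (eq:sieve) verified for all cubics of height', H)
\end{verbatim}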
\section{Proof of Theorem~\ref{thm:MHD}} \subsection{Bounds on mean values of sums of Jacobi symbols} We also make use of the following bounds of character sums ``on average'' over square-free moduli, which are due to Heath-Brown~\cite[Corollary~3]{HB2}. In fact we only need a very special case of this result, which we present in the following form. \begin{lemma} \label{lem:MeanVal} For all real numbers $D\geqslant 1$ and $Z\geqslant 1$ such that $DZ\to \infty$, we have $$ \frac{1}{Z} \sum_{\substack{m \leqslant Z\cr m~\text{odd square-free}}} \left | \sum_{|\Delta| \leqslant D} \left(\frac{\Delta}{m}\right)\right|^2 \leqslant (DZ)^{o(1)}\sqrt{D(D/Z+1)}. $$ \end{lemma} \subsection{Optimization of power sums} We need the following technical result, see~\cite[Lemma~2.4]{GrKol}. \begin{lemma} \label{lem: Optim} For $I,J\in\mathbb{N}$ let $$ F(Z)=\sum_{i=1}^I A_i Z^{a_i}+\sum_{j=1}^JB_j Z^{-b_j}, $$ where $A_i,B_j,a_i$ and $b_j$ are positive for $1\leqslant i\leqslant I$ and $1\leqslant j \leqslant J$. Let $0\leqslant Z_1\leqslant Z_2$. Then there is some $Z\in [Z_1,Z_2]$ with $$ F(Z)\ll \sum_{i=1}^I\sum_{j=1}^J \left(A_i^{b_j}B_j^{a_i}\right)^{1/(a_i+b_j)}+\sum_{i=1}^I A_iZ_1^{a_i}+\sum_{j=1}^J B_jZ_2^{-b_j}, $$ where the implied constant depends only on $I$ and $J$. \end{lemma} \subsection{Concluding the proof} Using~\eqref{eq: N and T} and also that $$ \left(\frac{u}{pq}\right) = \left(\frac{\Delta}{pq}\right), $$ where $u$ is the square-free part of $\Delta$, we can see that the bound~\eqref{eq:Cauchy T} implies $$ M_n(H,D) \leqslant \frac{4}{\pi(z,2z)^2}\sum_{p,q\in{\mathcal Q}_z} \sum_{|\Delta| \leqslant D} \left(\frac{\Delta}{pq}\right) \sum_{f\in{\mathcal P}_n(H)}\left(\frac{\Disc{f}}{pq}\right). $$ Hence $$ M_n(H,D) \ll \frac{1}{\pi(z,2z)^2}\sum_{p,q\in{\mathcal Q}_z} \left|\sum_{|\Delta| \leqslant D} \left(\frac{\Delta}{pq}\right)\right| \left| \sum_{f\in{\mathcal P}_n(H)}\left(\frac{\Disc{f}}{pq}\right)\right|. $$ Continuing as in Section~\ref{sec:SqSieve}, and separating the contribution from the terms with $p=q$, we obtain \begin{align*} M_n(H,D) \ll z^{-1}& DH^n \log z \cr &+ \frac{1}{\pi(z,2z)^2}\sum_{\substack{p,q\in{\mathcal Q}_z\cr p\ne q}} \left|\sum_{|\Delta| \leqslant D} \left(\frac{\Delta}{pq}\right)\right| \left| \sum_{f\in{\mathcal P}_n(H)}\left(\frac{\Disc{f}}{pq}\right)\right|. \end{align*} If $n \geqslant 5$ we now apply the first bound of Lemma~\ref{lem:CharSum m H} to the inner sum and then the bound of Lemma~\ref{lem:MeanVal}, and thus derive (after replacing all powers of logarithms with $H^{o(1)}$) \begin{align*} M_n(H,D) \ll z^{-1}& DH^{n+o(1)} \cr &+ H^{o(1)} \left(\left(H/z^2\right)^{n-1} +1\right) z^{(3n+1)/2}\sqrt{D (D/z^2+ 1)}. \end{align*} After some trivial manipulations, we obtain \begin{equation} \label{eq: M and M} M_n(H,D) \ll H^{o(1)} {\mathcal M} , \end{equation} where \begin{align*} {\mathcal M} = z^{-1} DH^{n} + z^{-(n-3)/2} DH^{n-1} & + z^{-(n-5)/2} D^{1/2} H^{n-1}\cr & \quad + z^{(3n-1)/2} D + z^{(3n+1)/2} D^{1/2}. \end{align*} Since we obviously have $z^{-1} DH^{n} \geqslant z^{-(n-3)/2} DH^{n-1}$, we can simplify the above bound as \begin{align*} {\mathcal M} & \ll z^{-1} DH^{n} + z^{-(n-5)/2} D^{1/2} H^{n-1} + z^{(3n-1)/2} D + z^{(3n+1)/2} D^{1/2} \cr &= \left(z^{-1} D^{1/2} H^{n} + z^{-(n-5)/2} H^{n-1} + z^{(3n-1)/2} D^{1/2} + z^{(3n+1)/2}\right) D^{1/2}.
\end{align*} We now apply Lemma~\ref{lem: Optim} with $I = J=2$, $Z_1=(\log H)^2$ (see~\eqref{eq:z log H}), $Z_2=(DH)^{100}$ and parameters \begin{align*} & (A_1,a_1) = \left(D^{1/2} , (3n-1)/2\right) , \qquad &(A_2,a_2) = \left(1 , (3n+1)/2\right) , \quad \cr &(B_1,b_1) = (D^{1/2} H^{n},1), \qquad & (B_2,b_2) = ( H^{n-1}, (n-5)/2). \end{align*} We now compute \begin{align*} \left( A_1^{b_1} B_1^{a_1}\right)^{1/(a_1+b_1)} &= \left(D^{1/2} \left(D^{1/2} H^{n}\right)^{(3n-1)/2} \right)^{2/(3n+1)}\cr &= D^{1/2} H^{n(3n-1)/(3n+1)},\cr \left( A_1^{b_2} B_2^{a_1}\right)^{1/(a_1+b_2)} &= \left(D^{ (n-5)/4} \left( H^{n-1}\right)^{(3n-1)/2} \right)^{1/(2n-3)}\cr &=D^{ (n-5)/(8n-12)} H^{(n-1)(3n-1)/(4n-6)},\cr \left( A_2^{b_1} B_1^{a_2}\right)^{1/(a_2+b_1)} &= \left( \left(D^{1/2} H^{n}\right)^{(3n+1)/2} \right)^{2/(3n+3)}\cr &= D^{(3n+1)/(6n+6)} H^{n(3n+1)/(3n+3)},\cr \left( A_2^{b_2} B_2^{a_2}\right)^{1/(a_2+b_2)} &= \left( \left( H^{n-1}\right)^{(3n+1)/2} \right)^{1/(2n-2)}\cr &= H^{(3n+1)/4}. \end{align*} Certainly the contribution from the terms involving $Z_1$ and $Z_2$ is negligible. We also note that for $n \geqslant 5$ we have $$ D^{1/2} H^{n(3n-1)/(3n+1)} \geqslant H^{n(3n-1)/(3n+1)} \geqslant H^{(3n+1)/4}. $$ Hence the last term $H^{(3n+1)/4}$ can be omitted. Furthermore, for $n \geqslant 5$ we also have $$ \frac{n-5}{8n-12} \leqslant \frac{3n+1}{6n+6} \qquad\mbox{and}\qquad \frac{(n-1)(3n-1)}{4n-6} \leqslant \frac{n(3n+1)}{3n+3}. $$ Hence the second term $D^{ (n-5)/(8n-12)} H^{(n-1)(3n-1)/(4n-6)}$ can be omitted too. Thus we obtain $$ {\mathcal M} \ll D H^{n(3n-1)/(3n+1)} + D^{(3n+1)/(6n+6)+1/2} H^{n(3n+1)/(3n+3)} . $$ Recalling~\eqref{eq: M and M} we obtain $$ M_n(H,D) \leqslant D H^{n(3n-1)/(3n+1)+o(1)} + D^{(3n+2)/(3n+3)} H^{n(3n+1)/(3n+3)+o(1)} . $$ We now observe that the second term improves on the trivial bound $M_n(H,D) \ll H^n$ only for $D \leqslant H^{2n/(3n+2)}$, in which case the second term also dominates the first term, as $$ D H^{n(3n-1)/(3n+1)} \leqslant D^{(3n+2)/(3n+3)} H^{n(3n+1)/(3n+3)} $$ is equivalent to $D \leqslant H^{4n/(3n+1)}$. The desired result now follows. \section{Proof of Theorem~\ref{thm:trinom}} Write $H=A+B+C+D$. There are $O(A)$ choices for $a$, so it suffices to show that for fixed $a \in [C, C+A]$ there are at most $H^{o(1)}$ solutions $(b,r) \in \mathbb{Z}^2$, $b \in [D, D+B]$, to the equation $$ ur^2-n^n b^{n-1} = (n-1)^{n-1} a^n. $$ As $n \equiv 1 \pmod 4$, substituting $t=b^{(n-1)/2}$, it is enough to bound, uniformly in $a$, the number of $r, t \in \mathbb{Z}$, $|r|, |t| \ll H^{n/2}$, such that \begin{equation} \label{tag} ur^2-n^nt^2 = (n-1)^{n-1}a^n. \end{equation} Note that $(n-1)^{n-1} a^n \ne 0$ as $n>1$ and $a \in [C, C+A]$ where $C \geqslant 1$. If $un^n$ is a square in $\mathbb{Z}$, then we can factor $u$ times the left hand side of~\eqref{tag} as $(ur-st)(ur+st)$, where $s^2=un^n$, and use the divisor function estimate $\tau(m)= m^{o(1)}$ for all $m \in \mathbb{Z} \backslash \{0\}$ to see that~\eqref{tag} has at most $H^{o(1)}$ solutions $r, t \in \mathbb{Z}$.
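The identity~\eqref{eq:DiscTrin} used in this reduction is easy to confirm symbolically; the following small {\sl SymPy} snippet (again only an illustration, run for the single value $n=5$) checks it together with the substitution $t=b^{(n-1)/2}$ employed above.
\begin{verbatim}
# Symbolic check of (eq:DiscTrin) for n = 5: Disc(X^5 + a X + b) should equal
# (n-1)^(n-1) a^n + n^n b^(n-1) = 256 a^5 + 3125 b^4.  Illustration only.
from sympy import symbols, discriminant, expand

X, a, b = symbols('X a b')
n = 5

delta = discriminant(X**n + a * X + b, X)
formula = (n - 1)**(n - 1) * a**n + n**n * b**(n - 1)
assert expand(delta - formula) == 0

# With t = b^((n-1)/2) (an integer, since n is odd), we have
# n^n b^(n-1) = n^n t^2, which turns u r^2 = Delta_n(a, b) into (tag) above.
t = b**((n - 1) // 2)
assert expand(n**n * t**2 - n**n * b**(n - 1)) == 0
print('(eq:DiscTrin) verified symbolically for n = 5')
\end{verbatim}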
If $un^n$ is not a square, then~\eqref{tag} is an equation of Pellian type and, though~\eqref{tag} may have infinitely many solutions $r, t \in \mathbb{Z}$, the number of solutions with $|r|, |t| \ll H^{n/2}$ can be bounded by $H^{o(1)}$ by a familiar result, see, for example,~\cite[Lemma~3]{KonShp} for arbitrary quadratic polynomials. \section{Concluding Comments} \label{sec:comm} \subsection{Discriminants of splitting fields of polynomials} As mentioned in Remark~\ref{rem:disc split}, it is certainly interesting to count the discriminants of splitting fields of polynomials $f \in {\mathcal I}_n(H)$. Unfortunately our basic tool, Lemma~\ref{lem:D/Delta}, does not generalise to the discriminants of these fields. Motivated by this and also by an apparent terminological oversight at the beginning of~\cite[Section~1]{ABZ} (where $\Delta(f)$ is called the discriminant of the splitting field of $f$), we give two examples showing that such a direct analogue of Lemma~\ref{lem:D/Delta} is false. In particular, for the polynomial $f(X) = X^4-2$ it is easy to check that the splitting field $L$ of $f$ over $\mathbb{Q}$ is given by $L=\mathbb{Q}(\sqrt[4]{2}, i)$ and that $[L:\mathbb{Q}]=8$. Further, it is not hard to see that $\sqrt[4]{2}(1+2i)$ satisfies the equation $F(\sqrt[4]{2}(1+2i))=0$, where $F(X)$ is the degree $8$ polynomial $F(X)=X^8+28 X^4+2500$, which is irreducible in $\mathbb{Q}[X]$. Hence $L=\mathbb{Q}\left(\sqrt[4]{2}(1+2i)\right)$. Using the discriminant formula for trinomials, one finds that $\Disc{f}=-2^{11}$ and $\Disc{F}=2^{62} \cdot 3^8 \cdot 5^{12}$. As $\Disc{f}<0$ and $\Disc{F}>0$, by Lemma~\ref{lem:D/Delta} the ratio of $\Disc{f}$ and the discriminant $\Delta$ of $L$ is not a rational square (in fact, using for example {\sl Sage}, one can check that $\Delta=2^{24}$, so $\Delta/\Disc{f}=-2^{13}$; see also Global Number Field 8.0.16777216.2 in~\cite{lmfdb}). A slightly more complicated non-binomial example is given by the polynomial $f(X) = X^4-X-1$. {\sl Magma} computes the defining polynomial of the splitting field of $f$ as \begin{align*} F(X) & =X^{24}+ 90X^{21} - 70X^{20} + 5695X^{18} - 18690X^{17} + 34895X^{16}\cr & \qquad + 225900X^{15} - 1544060X^{14} + 3867780X^{13} + 18840027X^{12} \cr & \qquad- 62876100X^{11} + 228621050X^{10} - 222888810X^{9} \cr & \qquad+ 999415025X^{8} + 9907474500X^{7}- 24575577355X^{6} \cr & \qquad+ 34467394920X^{5} + 232838692457X^{4}- 705674357100X^{3}\cr & \qquad + 2030693398335X^{2} - 2155371295770X + 1779496656001. \end{align*} Since $\Disc{f} = -283$ and \begin{align*} \Disc{F} & = 2^{144} \cdot {3}^{24} \cdot {17}^{8} \cdot {37}^{4} \cdot {73}^{2} \cdot {83}^{2} \cdot {101}^{2} \cdot {181}^{2} \cdot {227}^{2} \cdot {283}^{12}\cr & \qquad \cdot {359}^{4} \cdot {8867}^{8} \cdot {9473}^{2} \cdot {47777}^{4} \cdot {1271971}^{2} \cdot {1660069}^{4} \cr & \qquad \cdot {970293859}^{2} \cdot {4552394491}^{2} \cdot {857054278934851321}^{2}\cr & \qquad \cdot {1521484680115687561}^{2}, \end{align*} the presence of an even power of $283$ in the prime factorisation of $\Disc{F}$ and Lemma~\ref{lem:D/Delta} show that the ratio of $\Disc{f}$ and the discriminant of the splitting field is not a rational square. We note that both approaches, via the determinant method and via the square sieve, are flexible enough to admit several variations in the way we count polynomials.
For example, one can fix some of the coefficients, or make them run in a non-cubic box, $[-H_0, H_0]\times \ldots \times [-H_{n-1},H_{n-1}]$, or move the boxes away from the origin, as in Section~\ref{sec:discr trinom}. \subsection{Discriminants of polynomials} It is also natural to ask about the number $D_n(H)$ of distinct discriminants that are generated by all polynomials from ${\mathcal I}_n(H)$. It is reasonable to expect that $D_n(H) = H^{n+o(1)}$; however, this question seems to be open. We briefly note that trinomials immediately imply $D_n(H) \gg H^{2}$. Indeed, we consider the discriminants $$ \Disc{X^n + aX - b} = (-1)^{(n-1)(n+2)/2}((n-1)^{n-1} a^n+n^n b^{n-1}) $$ of trinomials $X^n + aX - b$ (see, for example,~\cite[Theorem~2]{Swan}) with $$ H/2 \leqslant a \leqslant H \qquad\mbox{and}\qquad 1 \leqslant b \leqslant \frac{H}{3n}, $$ with the additional condition $$ a\equiv 0 \pmod 2 \qquad\mbox{and}\qquad b\equiv 2 \pmod 4 $$ to guarantee the irreducibility by the Eisenstein criterion. We claim that all such pairs $(a,b)$ generate distinct discriminants. Indeed, if $$(n-1)^{n-1} a_1^n+ n^n b_1^{n-1} = (n-1)^{n-1} a_2^n+ n^n b_2^{n-1}, $$ then for $a_1=a_2$ we also have $b_1 = b_2$. So we can now assume that $a_1 > a_2$. In this case we obtain \begin{align*} (n-1)^{n-1} a_1^n - (n-1)^{n-1} a_2^n & \geqslant (n-1)^{n-1} a_1^n - (n-1)^{n-1} (a_1-1)^n\cr &\geqslant n (n-1)^{n-1} (H/2)^{n-1} +O(H^{n-2}) \cr & = 2^{-n+1} n (n-1)^{n-1} H^{n-1} +O(H^{n-2}), \end{align*} while $$ n^n b_2^{n-1} - n^n b_1^{n-1} \leqslant n^n b_2^{n-1} \leqslant 3^{-n+1} n H^{n-1}, $$ which is impossible for sufficiently large $H$. Unfortunately, this argument does not give the lower bound $H^{2+o(1)}$ for the number of distinct discriminants of fields generated by roots of polynomials in ${\mathcal I}_n(H)$, improving Corollary~\ref{cor:distinct disc}, since having distinct discriminants of polynomials does not necessarily imply distinct discriminants of fields. Finally, we note that our methods can also be used to investigate the discriminants of the fields generated by some other special families of polynomials. For example, one such family is given by the quadrinomials $X^n + aX^2 +bX + c$, for the discriminant of which an explicit formula has been given by Otake and Shaska~\cite{OtSh}. \section{Appendix} \label{sec:app} \subsection{Preliminary discussion} We use this opportunity to fix an error in~\cite{Diet2}. Namely, \cite[Lemmas~5 and~6]{Diet2} (and consequently~\cite[Lemma~8]{Diet2}) are not correct as stated if the degree $n$ is of the form $n=m^2$ or $n=m^2+1$ for some odd $m$, and therefore~\cite[Lemma~8]{Diet2} cannot always be directly applied in these cases either. This does not affect the main results~\cite[Theorems~1 and~2]{Diet2} in these cases, so let us quickly explain how to amend the proof. \subsection{The case of $n=m^2$} If $n=m^2$ for odd $m$, then we can directly handle the contribution of $a_n$ such that $z^2-\Delta(a_1, \ldots, a_n)$ is reducible: \cite[Lemma~6]{Diet2} as well as \cite[Lemma~5]{Diet2} in the case of $c_1 \ne 0$ are still correct. As a substitute for~\cite[Lemma~5]{Diet2} for $c_1=0$, we can use~\cite[Satz 1]{Hering0} (see also~\cite[Section~1]{Hering}).
The latter result shows that for fixed $a_1, \ldots, a_{n-2} \in \mathbb{Z}$, there are, uniformly in $a_1, \ldots, a_{n-2}$, only finitely many rational specialisations for $a_{n-1}$, for which the resulting polynomial $f(X)=X^n+a_1 X^{n-1}+\ldots+a_n$, regarded as a polynomial in $\mathbb{Q}(a_n)[X]$, does not have Galois group $S_n$ over the rational function field $\mathbb{Q}(a_n)$. Only in these cases $z^2-\Delta(a_1, \ldots, a_n)$, as a polynomial in $z$ and $a_n$, can be reducible over $\mathbb{Q}$, since otherwise having Galois group $S_n$ over $\mathbb{Q}(a_n)$ excludes the possibility that the discriminant $\Delta(a_1, \ldots, a_n)$ is a square in $\mathbb{Q}(a_n)$. Therefore there can be only $O(1)$ many exceptional `bad planes' given by~\cite[Equation~(5)]{Diet2}, for which the bound in~\cite[Lemma~8]{Diet2} does not hold true. Just using the trivial bound $O(H)$ for the number of solutions in these cases instead of the bound provided by~\cite[Lemma~8]{Diet2} is acceptable, as the resulting bound of $O(H^{n-2})$ (for fixing $a_1, \ldots, a_{n-2}$) times $O(1)$ (for the number of exceptional `bad planes') times $O(H)$ (trivially bounding the solutions instead of using the bound from~\cite[ Lemma~8]{Diet2}) is certainly $H^{n-2+\sqrt{2}+o(1)}$. This fixes the error for $n=m^2$ and odd $m$. \subsection{The case of $n=m^2+1$} If $n=m^2+1$ for odd $m \geqslant 3$, then $n$ cannot be divisible by $3$. In this case, at the outset instead of fixing $n-2$ coefficients $a_1, \ldots, a_{n-2}$ we fix $n-2$ coefficients $a_1, \ldots, a_{n-4}, a_{n-2}, a_{n-1}$ instead. As a substitute for~\cite[Lemma~5]{Diet2} in the case of $c_1 \ne 0$ we prove the following result. \begin{lemma} Let $m \geqslant 3$ be an odd integer, and let $n=m^2+1$. Further, let $a_1, \ldots, a_{n-4}, a_{n-2}, a_{n-1}$ be fixed integers, and let $c_1, c_2 \in \mathbb{Q}$ with $c_1 \ne 0$. Then the polynomial $$ z^2-\Delta(a_1, \ldots, a_{n-4}, c_1 a_n + c_2, a_{n-2}, a_{n-1}, a_n) $$ is irreducible in $\mathbb{Q}[z, a_n]$. \mathbf{e}nd{lemma} \begin{proof} We use the observation that for fixed $a_1, \ldots, a_{n-4}, a_{n-2}, a_{n-1}$, the discriminant $\Delta(a_{n-3}, a_n)=\Delta(a_1, \ldots, a_n)$ as a polynomial in $a_{n-3}$ and $a_n$ is of the form \begin{equation} \label{master} \Delta(a_{n-3}, a_n)=(n-3)^{n-3}3^3 a_{n-3}^n a_n^2 + \Phi(a_{n-3}, a_n), \mathbf{e}nd{equation} where $\Phi$ has total degree strictly less than $n+2$. The proof is analogous to that of~\cite[Lemma~4]{Diet2}, using the fact that $\Delta(a_1, \ldots, a_n)$ is a weighted-homogeneous polynomial in the $a_i$, each $a_i$ having weight $i$, and the total weight of $\Delta(a_1, \ldots, a_n)$ is $n(n-1)$. Therefore, for fixed $a_1, \ldots, a_{n-4}, a_{n-2}, a_{n-1}$ any monomial $a_{n-3}^\alpha a_n^\beta$ occurring in $\Delta(a_1, \ldots, a_n)$ satisfies \begin{equation} \label{yacht} (n-3)\alpha+n\beta \leqslant n(n-1). \mathbf{e}nd{equation} For $\alpha=n$ and $\beta=2$ the left hand side of~\mathbf{e}qref{yacht} just equals $n(n-1)$, whence the monomial $\delta_n a_{n-3}^n a_n^2$ occurs in $\Delta(a_{n-3}, a_n)$, with a constant $\delta_n$ only depending on $n$; note that we do not yet know whether $\delta_n \ne 0$. To establish~\mathbf{e}qref{master} it is therefore enough to check that this is the only solution of~\mathbf{e}qref{yacht} with $\alpha+\beta \geqslant n+2$, and then to evaluate $\delta_n$. 
If $\alpha+\beta \geqslant n+2$ and $\beta \geqslant 3$, then \begin{align*} (n-3)\alpha+n\beta & \geqslant (n-3)(n+2-\beta)+n\beta\cr & = (n-3)(n+2)+3\beta\cr & = n(n-1)-6+3\beta \geqslant n(n-1)+3. \mathbf{e}nd{align*} If $\beta \leqslant 1$, then $\alpha+\beta \geqslant n+2$ gives $\alpha>n$, which is impossible, because the maximum power of any $a_i$ occurring in any monomial of $\Delta(a_1, \ldots, a_n)$ is at most $n$. The latter is easily checked by writing the discriminant $\Delta(a_1, \ldots, a_n)$ in the form $$ \Delta(a_1, \ldots, a_n)=(-1)^{n(n-1)/2} \Res{f, f'}, $$ (see the formula~\mathbf{e}qref{eq:D vs R}), where $f=X^n+a_1 X^{n-1} + \ldots + a_n$, expressing the resultant $\Res{f,f'}$ of $f$ and its derivative $f'$ by the Sylvester formula as a certain determinant in $a_1, \ldots, a_n$, and checking that each $a_i$ occurs in at most $n$ columns. Hence $$ \Delta(a_{n-3}, a_n)=\delta_n a_{n-3}^n a_n^2+\Phi(a_{n-3}, a_n), $$ where $\Phi$ has total degree less than $n+2$. To determine the value of $\delta_n$ (which only depends on $n$ as remarked above), we observe that, as $n$ is coprime to $3$, the trinomial $X^n+aX^3+b$ has discriminant $$ (-1)^{n(n-1)/2} b^2 (n^n b^{n-3} + (-1)^{n+1} (n-3)^{n-3} 3^3 a^n) $$ (see, for example,~\cite[Theorem 2]{Swan}), which immediately yields $$ \delta_n=(-1)^{n(n-1)/2+n+1} (n-3)^{n-3} 3^3 = (n-3)^{n-3} 3^3 $$ as $n=m^2+1 \mathbf{e}quiv 2 \pmod 4$. Having established~\mathbf{e}qref{master}, we see that for $n$ coprime to $3$ the number $(n-3)^{n-3}3^3$ cannot be a square, whence \begin{align*} & z^2-\Delta(a_1, \ldots, a_{n-4}, c_1a_n+c_2, a_{n-2}, a_{n-1}, a_n)\cr & = z^2 - (n-3)^{n-3}3^3 c_1^n a_n^{n+2} + O(a_n^{n+1}) \mathbf{e}nd{align*} is irreducible in $\mathbb{Q}[z, a_n]$. \mathbf{e}nd{proof} The special cases that $a_{n-3}$ or $a_n$ are being fixed (substitutes for the analogues of~\cite[Lemma~5]{Diet2} where $c_1=0$, and~\cite[Lemma~6]{Diet2}, respectively) can be handled as above by the result of Hering~\cite{Hering0}, again using that $n$ is coprime to $3$. The argument can then be finished as above, using the main result of~\cite{Smith} instead of~\cite[ Lemma~10]{Diet2} to see that for $n$ coprime to $3$ the polynomial $X^n+aX^3+b$ has Galois group $S_n$ over any function field $K(a,b)$ where $K$ is any field of characteristic zero. \begin{thebibliography}{9999} \bibitem{AGLLSZ} T. C. Anderson, A. Gafni, R. J. Lemke Oliver, D. Lowry-Duda, G. Shakan, and R. Zhang, `Quantitative Hilbert irreducibility and almost prime values of polynomial discriminants', {\it Intern. Math. Res. Notices\/}, (to appear). \bibitem{ABZ} A. Ash, J. Brakenhoff and T. Zarrabi, `Equality of polynomial and field discriminants', {\it Experim. Math.\/}, {\bf 16} (2007), 367--374. \bibitem{BBP} K. Belabas, M. Bhargava and C. Pomerance, `Error estimates for the Davenport--Heilbronn theorems', {\it Duke Math. J.\/}, {\bf 153} (2010), 173--210. \bibitem{Bha} M. Bhargava, `Galois groups of random integer polynomials and van der Waerden's Conjecture', {\it Preprint\/}, 2021 (available from \url{http://arxiv.org/abs/2111.06507}). \bibitem{BSW} M. Bhargava, A. Shankar and X. Wang, `Squarefree values of polynomial discriminants~I', {\it Invent. Math.\/}, (to appear). \bibitem{BiLe} P.-Y. Bienvenu and T. H. L{\^e}, `Linear and quadratic uniformity of the M{\"o}bius function over $\mathbb{F}q[t]$', {\it Mathematika\/}, {\bf 65} (2019), 505--529. \bibitem{BoPi} E. Bombieri and J. Pila, `The number of integral points on arcs and ovals', {\it Duke Math. 
J.\/}, {\bf 59} (1989), 337--357. \bibitem{CaRu} D. Carmon and Z. Rudnick, `The autocorrelation of the M{\"o}bius function and Chowla's conjecture for the rational function field', {\it Quart. J. Math.\/}, {\bf 65} (2014), 53--61. \bibitem{Chela} R. Chela, `Reducible polynomials', {\it J. London Math. Soc.\/}, {\bf 38} (1963), 183--188. \bibitem{Dalen} K. Dalen, `On a theorem of Stickelberger', {\it Math. Scand.\/}, {\bf 3} (1955), 124--126. \bibitem{Diet1} R.~Dietmann, `On the distribution of Galois groups', {\it Mathematika\/}, {\bf 58} (2012), 35--44. \bibitem{Diet2} R.~Dietmann, `Probabilistic Galois theory', {\it Bull. London Math. Soc.}, {\bf 45} (2013), 453--462. \bibitem{ElVe} J. S. Ellenberg and A. Venkatesh, `The number of extensions of a number field with fixed degree and bounded discriminant', {\it Ann. Math.\/}, {\bf 163} (2006), 723--741. \bibitem{GrKol} S. W. Graham and G. Kolesnik, {\it Van der Corput's method of exponential sums\/}, Cambridge Univ. Press, 1991. \bibitem{HB1} D. R. Heath-Brown, `The square sieve and consecutive squarefree numbers', {\it Math. Ann.\/}, {\bf 266} (1984), 251--259. \bibitem{HB2} D. R. Heath-Brown, `A mean value estimate for real character sums', {\it Acta Arith.\/} {\bf 72} (1995), 235--275. \bibitem{HB3} D. R. Heath-Brown, `The density of rational points on curves and surfaces', {\it Ann. Math.}, {\bf 155} (2002), 553--595. \bibitem{Hering0} H. Hering, `Seltenheit der Gleichungen mit Affekt bei linearem Parameter', {\it Math. Ann.}, {\bf 186} (1970), 263--270. \bibitem{Hering} H. Hering, `\"Uber Koeffizientenbeschr\"ankungen affektloser Gleichungen', {\it Math. Ann.}, {\bf 195} (1972), 121--136. \bibitem{ILOSS} R. Ibarra, H. Lembeck, M. Ozaslan, H. Smith and K. Stange, `Monogenic fields arising from trinomials', {\it Involve\/}, (to appear). \bibitem{IwKow} H. Iwaniec and E. Kowalski, {\it Analytic number theory\/}, Amer. Math. Soc., Providence, RI, 2004. \bibitem{Jones1} L. Jones, `A brief note on some infinite families of monogenic polynomials', {\it Bull. Aust. Math. Soc.\/}, {\bf 100} (2019), 239--244. \bibitem{Jones2} L. Jones, `Monogenic polynomials with non-squarefree discriminant', {\it Proc. Amer. Math. Soc.\/}, {\bf 148} (2020), 1527--1533. \bibitem{JoWh} L. Jones and D. White, `Monogenic trinomials with non-squarefree discriminant', {\it Preprint\/}, 2019 (available from \url{http://arxiv.org/abs/1908.07947}). \bibitem{Katz} N. Katz, `Estimates for nonsingular mixed character sums', {\it International Mathematics Research Notices\/}, \textbf{2007} (2007), Article ID rnm069, 1--19. \bibitem{Kedl} K. S. Kedlaya, `A construction of polynomials with squarefree discriminants', {\it Proc. Amer. Math. Soc.\/}, {\bf 140} (2012), 3025--3033. \bibitem{KonShp} S. V. Konyagin and I. E. Shparlinski, `On convex hull of points on modular hyperbolas', {\it Moscow J. Comb. and Number Theory\/}, {\bf 1} (2011), 43--51. \bibitem{Lang} S. Lang, {\it Algebraic number theory\/}, Springer, Berlin, 1970. \bibitem{LaRo} E. Larson and L. Rolen, `Upper bounds for the number of number fields with alternating Galois group', {\it Proc. Amer. Math. Soc.\/}, {\bf 141} (2013), 499--503. \bibitem{lmfdb} LMFDB - The L-functions and modular forms database, \url{http://www.lmfdb.org/NumberField}. \bibitem{MMS} A. Mukhopadhyay, M. R. Murty and K.~Srinivas, `Counting squarefree discriminants of trinomials under $abc$', {\it Proc. Amer. Math. Soc.\/}, {\bf 137} (2009), 3219--3226. \bibitem{OtSh} S. Otake and T. 
Shaska, `On the discriminant of certain quadrinomials', {\it Contemp. Math.\/}, vol.~724, 2019, Amer. Math. Soc., 55--72. \bibitem{Poon} B. Poonen, `Squarefree values of multivariable polynomials', {\it Duke Math. J.\/}, {\bf 118} (2003), 353--373. \bibitem{Por} S. Porritt, `A note on exponential-M{\"o}bius sums over $\mathbb{F}_q[t]$', {\it Finite Fields Appl.\/}, {\bf 51} (2018), 298--305. \bibitem{Ro-Le} A. Rojas-Le{\'o}n, `Estimates for singular multiplicative character sums', {\it International Mathematics Research Notices\/}, \textbf{2005} (2005), 1221--1234. \bibitem{Ros} M. Rosen, {\it Number theory in function fields\/}, Springer, Berlin, 2002. \bibitem{S} P. Salberger, `Counting rational points on projective varieties', {\it Preprint\/}. \bibitem{Shp0} I. E. Shparlinski, `Distribution of primitive and irreducible polynomials modulo a prime', {\it Diskret. Mat.\/}, {\bf 1} (1989), no. 1, 117--124 (in Russian); translation in {\it Discrete Math. Appl.\/}, {\bf 1} (1991), 59--67. \bibitem{Shp1} I. E. Shparlinski, `On quadratic fields generated by discriminants of irreducible trinomials', {\it Proc. Amer. Math. Soc.\/}, {\bf 138} (2010), 125--132. \bibitem{Shp2} I. E. Shparlinski, `Distribution of polynomial discriminants modulo a prime', {\it Arch. Math.\/}, {\bf 105} (2015), 251--259. \bibitem{Smith} J. H. Smith, `General trinomials having symmetric Galois group', {\it Proc. Amer. Math. Soc.\/}, {\bf 63} (1977), 208--212. \bibitem{St} L. Stickelberger, `{\"U}ber eine neue Eigenschaft der Diskriminanten algebraischer Zahlk{\"o}rper', {\it Verh. 1 Internat. Math. Kongresses, 1897\/}, Leipzig, 1898, 182--193. \bibitem{Swan} R. G. Swan, `Factorization of polynomials over finite fields', {\it Pacific J. Math.\/}, {\bf 12} (1962), 1099--1106. \bibitem{Zyw} D. Zywina, `Hilbert's irreducibility theorem and the larger sieve', {\it Preprint\/}, 2010 (available from \url{http://arxiv.org/abs/1011.6465}). \end{thebibliography} \end{document}
\begin{document} \title{Gallai-Ramsey numbers of $C_{10}$ and $C_{12}$} \author{Hui Lei$^1$, Yongtang Shi$^1$, Zi-Xia Song$^2$\thanks{Corresponding author. Email address: [email protected]}\ \ and Jingmei Zhang$^2$\\%\thanks{Corresponding author. Email address: [email protected] (J. Zhang)} \\ { $^1$ Center for Combinatorics and LPMC}\\ { Nankai University, Tianjin 300071, China}\\ { $^2$ Department of Mathematics}\\ { University of Central Florida, Orlando, FL32816, USA}\\ } \maketitle \begin{abstract} A {\it Gallai coloring} is a coloring of the edges of a complete graph without rainbow triangles, and a {\it Gallai $k$-coloring} is a Gallai coloring that uses $k$ colors. Given an integer $k\ge1$ and graphs $H_1, \ldots, H_k$, the {\it Gallai-Ramsey number} $GR(H_1, \ldots, H_k)$ is the least integer $n$ such that every Gallai $k$-coloring of the complete graph $K_n$ contains a monochromatic copy of $H_i$ in color $i$ for some $i \in \{1, \ldots, k\}$. When $H = H_1 = \cdots = H_k$, we simply write $GR_k(H)$. We continue to study Gallai-Ramsey numbers of even cycles and paths. For all $n\ge3$ and $k\ge1$, let $G_i=P_{2i+3}$ be a path on $2i+3$ vertices for all $i\in\{0,1, \ldots, n-2\}$ and $G_{n-1}\in\{C_{2n}, P_{2n+1}\}$. Let $ i_j\in\{0,1,\ldots, n-1 \}$ for all $j\in\{1, \ldots, k\}$ with $ i_1\ge i_2\ge\cdots\ge i_k $. Song recently conjectured that $ GR(G_{i_1}, \ldots, G_{i_k}) = 3+\min\{i_1,n^*-2\}+\sum_{j=1}^k i_j$, where $n^* =n$ when $G_{i_1}\ne P_{2n+1}$ and $n^* =n+1$ when $G_{i_1}= P_{2n+1}$. This conjecture has been verified to be true for $n\in\{3,4\}$ and all $k\ge1$. In this paper, we prove that the aforementioned conjecture holds for $n \in\{5, 6\}$ and all $k \ge1$. Our result implies that for all $k\ge1$, $GR_k(C_{2n})= GR_k(P_{2n})= (n-1)k+n+1$ for $n\in\{5,6\}$ and $GR_k(P_{2n+1})= (n-1)k+n+2$ for $1\le n \le6 $. \end{abstract} {\it{Keywords}}: Gallai coloring; Gallai-Ramsey number; Rainbow triangle\\ {\it {2010 Mathematics Subject Classification}}: 05C55; 05D10; 05C15 \section{Introduction} \baselineskip 16pt In this paper we consider graphs that are finite, simple and undirected. Given a graph $G$ and a set $A\subseteq V(G)$, we use $|G|$ to denote the number of vertices of $G$, and $G[A]$ to denote the subgraph of $G$ obtained from $G$ by deleting all vertices in $V(G)\backslash A$. A graph $H$ is an \dfn{induced subgraph} of $G$ if $H=G[A]$ for some $A\subseteq V(G)$. We use $P_n$, $C_n$ and $K_n$ to denote the path, cycle and complete graph on $n$ vertices, respectively. For any positive integer $k$, we write $[k]$ for the set $\{1, \ldots, k\}$. We use the convention ``$A:=$'' to mean that $A$ is defined to be the right-hand side of the relation. Given an integer $k \ge 1$ and graphs $H_1, \ldots, H_k$, the classical Ramsey number $R(H_1, \ldots, H_k)$ is the least integer $n$ such that every $k$-coloring of the edges of $K_n$ contains a monochromatic copy of $H_i$ in color $i$ for some $i \in [k]$. Ramsey numbers are notoriously difficult to compute in general. In this paper, we study Ramsey numbers of graphs in Gallai colorings, where a \dfn{Gallai coloring} is a coloring of the edges of a complete graph without rainbow triangles (that is, a triangle with all its edges colored differently). 
Gallai colorings naturally arise in several areas including: information theory~\cite{KG}; the study of partially ordered sets, as in Gallai's original paper~\cite{Gallai} (his result was restated in \cite{Gy} in the terminology of graphs); and the study of perfect graphs~\cite{CEL}. There are now a variety of papers which consider Ramsey-type problems in Gallai colorings (see, e.g., \cite{chgr, c5c6,GS, exponential, Hall, DylanSong, C9C11, C13C15}). These works mainly focus on finding various monochromatic subgraphs in such colorings. More information on this topic can be found in~\cite{FGP, FMO}. A \dfn{Gallai $k$-coloring} is a Gallai coloring that uses $k$ colors. Given an integer $k \ge 1$ and graphs $H_1, \ldots, H_k$, the \dfn{Gallai-Ramsey number} $GR(H_1, \ldots, H_k)$ is the least integer $n$ such that every Gallai $k$-coloring of $K_n$ contains a monochromatic copy of $H_i$ in color $i$ for some $i \in [k]$. When $H = H_1 = \dots = H_k$, we simply write $GR_k(H)$ and $R_k(H)$. Clearly, $GR_k(H) \leq R_k(H)$ for all $k\ge1$ and $GR(H_1, H_2) = R(H_1, H_2)$. In 2010, Gy\'{a}rf\'{a}s, S\'{a}rk\"{o}zy, Seb\H{o} and Selkow~\cite{exponential} proved the general behavior of $GR_k(H)$. \begin{thm} [\cite{exponential}] Let $H$ be a fixed graph with no isolated vertices and let $k\ge1$ be an integer. Then $GR_k(H)$ is exponential in $k$ if $H$ is not bipartite, linear in $k$ if $H$ is bipartite but not a star, and constant (does not depend on $k$) when $H$ is a star. \end{thm} It turns out that for some graphs $H$ (e.g., when $H=C_3$), $GR_k(H)$ behaves nicely, while the order of magnitude of $R_k(H)$ seems hopelessly difficult to determine. It is worth noting that finding exact values of $GR_k(H)$ is far from trivial, even when $|H|$ is small. We will utilize the following important structural result of Gallai~\cite{Gallai} on Gallai colorings of complete graphs. \begin{thm}[\cite{Gallai}]\label{Gallai} For any Gallai coloring $c$ of a complete graph $G$ with $|G|\ge2$, $V(G)$ can be partitioned into nonempty sets $V_1, \dots, V_p$ with $p\ge2$ so that at most two colors are used on the edges in $E(G)\backslash (E(G[V_1])\cup \cdots\cup E(G[V_p]))$ and only one color is used on the edges between any fixed pair $(V_i, V_j)$ under $c$. \end{thm} The partition given in Theorem~\ref{Gallai} is a \dfn{Gallai-partition} of the complete graph $G$ under $c$. Given a Gallai-partition $V_1, \dots, V_p$ of the complete graph $G$ under $c$, let $v_i\in V_i$ for all $i\in[p]$ and let $\mathcal{R}:=G[\{v_1, \dots, v_p\}]$. Then $\mathcal{R}$ is the \dfn{reduced graph} of $G$ corresponding to the given Gallai-partition under $c$. Clearly, $\mathcal{R}$ is isomorphic to $K_p$. By Theorem~\ref{Gallai}, all edges in $\mathcal{R}$ are colored by at most two colors under $c$. One can see that any monochromatic $H$ in $\mathcal{R}$ under $c$ will result in a monochromatic $H$ in $G$ under $c$. It is not surprising that Gallai-Ramsey numbers $GR_k(H)$ are closely related to the classical Ramsey numbers $R_2(H)$. Recently, Fox, Grinshpun and Pach posed the following conjecture on $GR_k(H)$ when $H$ is a complete graph. \begin{conj}[\cite{FGP}]\label{Fox} For all integers $k\ge1$ and $t\ge3$, \[ GR_k(K_t)= \begin{cases} (R_2(K_t)-1)^{k/2} + 1 & \text{if } k \text{ is even} \\ (t-1) (R_2(K_t)-1)^{(k-1)/2} + 1 & \text{if } k \text{ is odd.} \end{cases} \] \end{conj} The first case of Conjecture~\ref{Fox} follows from a result of Chung and Graham~\cite{chgr} from 1983. 
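For concreteness, taking $t=3$ and using the classical value $R_2(K_3)=6$, Conjecture~\ref{Fox} predicts \[ GR_k(K_3)= \begin{cases} 5^{k/2} + 1 & \text{if } k \text{ is even} \\ 2\cdot 5^{(k-1)/2} + 1 & \text{if } k \text{ is odd;} \end{cases} \] in particular, $GR_2(K_3)=6=R_2(K_3)$ and $GR_3(K_3)=11$.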
A simpler proof of this case can be found in~\cite{exponential}. The case when $t=4$ was recently settled in~\cite{K4}. Conjecture~\ref{Fox} remains open for all $t\ge 5$. The next open case, when $t=5$, involves $R_2(K_5)$. Angeltveit and McKay \cite{K5} recently proved that $R_2(K_5)\le 48$. It is widely believed that $R_2(K_5)=43$ (see \cite{K5}). It is worth noting that Schiermeyer~\cite{GRK5} recently observed that if $R_2(K_5)=43$, then Conjecture~\ref{Fox} fails for $K_5$ when $k=3$. More recently, Gallai-Ramsey numbers of odd cycles on at most $15$ vertices have been completely settled by Fujita and Magnant~\cite{c5c6} for $C_5$, Bruce and Song~\cite{DylanSong} for $C_7$, Bosse and Song~\cite{C9C11} for $C_9$ and $C_{11}$, and Bosse, Song and Zhang~\cite{C13C15} for $C_{13}$ and $C_{15}$. \begin{thm}[\cite{DylanSong, C9C11, C13C15}] For all $k \ge 1$ and $n\in\{3,4,5,6,7\}$, $GR_k(C_{2n+1}) = n\cdot 2^{k} + 1$. \end{thm} In this paper, we continue to study Gallai-Ramsey numbers of even cycles and paths. For all $n \ge 3$ and $k\ge1$, let $G_{n-1}\in \{C_{2n}, P_{2n+1}\}$, $G_i :=P_{2i+3}$ for all $i \in \{0, 1, \ldots, n-2\}$, and $ i_j\in\{0,1,\ldots, n-1 \}$ for all $j\in[k]$. We want to determine the exact values of $GR (G_{i_1}, \ldots, G_{i_k})$. By reordering colors if necessary, we assume that $i_1\ge \cdots \ge i_k$. Let $n^* :=n$ when $G_{i_1}\ne P_{2n+1}$ and $n^* :=n+1$ when $G_{i_1}= P_{2n+1}$. Song and Zhang~\cite{C8} recently proved that \begin{prop} [\cite{C8}]\label{lower} For all $n \ge 3$ and $k\ge1$, $$GR(G_{i_1}, \ldots, G_{i_k}) \ge 3+\min\{i_1, n^*-2\}+\sum_{j=1}^k i_j.$$ \end{prop} In the same paper, Song~\cite{C8} further made the following conjecture. \begin{conj}[\cite{C8}]\label{Song} For all $n \ge 3$ and $k\ge1$, $$GR(G_{i_1}, \ldots, G_{i_k}) = 3+\min\{i_1, n^*-2\}+\sum_{j=1}^k i_j.$$ \end{conj} To completely solve Conjecture~\ref{Song}, one only needs to consider the case $G_{n-1}=C_{2n}$. \begin{prop}[\cite{C8}]\label{C2n to P2n+1} For all $n \ge 3$ and $k\ge1$, if Conjecture~\ref{Song} holds for $G_{n-1}=C_{2n}$, then it also holds for $G_{n-1}=P_{2n+1}$. \end{prop} Let $M_n$ denote a matching of size $n$. As observed in \cite{C8}, the truth of Conjecture~\ref{Song} implies that $GR_k(C_{2n})=GR_k(P_{2n})=GR_k(M_{n})=(n-1)k+n+1$ for all $n\ge3$ and $k\ge1$ and $GR_k(P_{2n+1})=(n-1)k+n+2$ for all $n\ge1$ and $k\ge1$. It is worth noting that Dzido, Nowik and Szuca~\cite{R_3(10)} proved that $R_3(C_{2n})\ge4n$ for all $n\ge3$. The truth of Conjecture~\ref{Song} implies that $GR_3(C_{2n})=4n-2< R_3(C_{2n})$ for all $n\ge3$. Conjecture~\ref{Song} has recently been verified to be true for $n\in\{3,4\}$ and all $k\ge1$. \begin{thm}[\cite{C8}]\label{C8} For $n \in\{3,4\}$ and all $k\ge1$, let $G_i=P_{2i+3}$ for all $i \in \{0, 1, \ldots, n-2\}$, $G_{n-1}=C_{2n}$, and $ i_j\in\{0,1,\ldots, n-1 \}$ for all $j\in[k]$ with $i_1\ge \cdots \ge i_k$. Then $$GR(G_{i_1}, \ldots, G_{i_k}) = 3+\min\{i_1, n-2\}+\sum_{j=1}^k i_j.$$ \end{thm} In this paper, we continue to establish more evidence for Conjecture~\ref{Song}. We prove that Conjecture~\ref{Song} holds for $n\in\{5,6\}$ and all $k\ge1$. \begin{thm}\label{main} For $n \in\{5,6\}$ and all $k\ge1$, let $G_i=P_{2i+3}$ for all $i \in \{0, 1, \ldots, n-2\}$, $G_{n-1}=C_{2n}$, and $ i_j\in\{0,1,\ldots, n-1 \}$ for all $j\in[k]$ with $i_1\ge \cdots \ge i_k$. Then $$GR(G_{i_1}, \ldots, G_{i_k}) = 3+\min\{i_1, n-2\}+\sum_{j=1}^k i_j.$$ \end{thm} We prove Theorem~\ref{main} in Section~\ref{section2}. 
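To illustrate the formula in Theorem~\ref{main} with a concrete instance, take $n=5$, $k=3$ and $(i_1,i_2,i_3)=(4,2,0)$, so that $(G_{i_1}, G_{i_2}, G_{i_3})=(C_{10}, P_7, P_3)$; then $$GR(C_{10}, P_7, P_3)=3+\min\{4,3\}+(4+2+0)=12.$$ In particular, taking $i_1=\cdots=i_k=n-1$ in Theorem~\ref{main} yields $GR_k(C_{2n})=3+(n-2)+k(n-1)=(n-1)k+n+1$ for $n\in\{5,6\}$, in line with the values stated in the abstract.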
We believe the method we developed here can be used to determine the exact values of $GR_k( C_{2n})$ for all $n\ge7$. Applying Theorem~\ref{main} and Proposition~\ref{C2n to P2n+1}, we obtain the following. \begin{cor}\label{P11} Let $G_i=P_{2i+3}$ for all $i\in\{0,1,2,3,4,5\}$. For every integer $k \ge 1$, let $ i_j\in\{0,1,2, 3,4,5 \}$ for all $ j \in[k]$. Then $$GR (G_{i_1}, \ldots, G_{i_k}) = 3+ \max \{i_j: j \in [k]\} +\sum_{j=1}^k i_j.$$ \end{cor} \begin{cor} For all $k\ge1$, \begin{enumerate}[(a)] \item $GR_k (P_{2n+1}) =(n-1)k+n+2$ for all $n \in [6]$. \item $ GR_k (C_{2n}) = GR_k ( P_{2n}) =(n-1)k+n+1$ for $n \in \{5, 6\}$. \end{enumerate} \end{cor} Finally, we shall make use of the following results on 2-colored Ramsey numbers of cycles and paths in the proof of Theorem~\ref{main}. \begin{thm}[\cite{Rosta}]\label{cycles} For all $n\ge3$, $ R_2(C_{2n}) = 3n-1$. \end{thm} \begin{thm}[\cite{FLPS}]\label{path-cycle} For all integers $n, m$ satisfying $2n \geq m \geq 3$, $R(P_m, C_{2n}) = 2n+\lfloor\frac m 2\rfloor-1$. \end{thm} \section{Proof of Theorem~\ref{main}}\label{section2} We are ready to prove Theorem~\ref{main}. Let $n\in\{5,6\}$. By Proposition~\ref{lower}, it suffices to show that $ GR(G_{i_1}, \ldots, G_{i_k}) \le 3+\min\{i_1, n-2\}+\sum_{j=1}^k i_j$. By Theorem \ref{C8} and Proposition \ref{C2n to P2n+1}, we may assume that $i_1=n-1$. Then $3+\min\{i_1, n-2\}+\sum_{j=1}^k i_j=n+1+\sum_{j=1}^k i_j$. Since $|G_{i_1}|=3+\min\{i_1, n-2\}+ i_1=1+n+i_1$, and $ GR(G_{i_1}, G_{i_2})=R(G_{i_1}, G_{i_2})=1+n+i_1+i_2$ by Theorem~\ref{cycles} and Theorem~\ref{path-cycle}, we may assume $k\ge3$. Let $N: =\min\{\max\{i_j:j \in [k]\}, n-2\}+\sum_{j=1}^k i_j$. Then $N \ge 2n-3$. Let $G$ be a complete graph on $3+N$ vertices and let $c: E(G)\rightarrow [k]$ be any Gallai coloring of $ G$ such that all the edges of $G$ are colored by at least three colors under $c$. We next show that $G$ contains a monochromatic copy of $G_{i_j}$ in color $j$ for some $j\in[k]$. Suppose $G$ contains no monochromatic copy of $G_{i_j}$ in color $j$ for any $ j\in[k]$ under $c$. Such a Gallai $k$-coloring $c$ is called a \dfn{bad coloring}. Among all complete graphs on $3+N$ vertices with a bad coloring, we choose $G$ with $N$ minimum. Consider a Gallai-partition of $G$ with parts $A_1, \dots, A_{p}$, where $p\ge2$. We may assume that $|A_1| \ge \cdots \ge |A_p| \ge 1$. Let $\mathcal{R}$ be the reduced graph of $G$ with vertices $a_1, \ldots, a_p$, where $a_i \in A_i$ for all $i\in[p]$. By Theorem \ref{Gallai}, we may assume that the edges of $\mathcal{R}$ are colored either red or blue. Since all the edges of $G$ are colored by at least three colors under $c$, we see that $\mathcal{R}\ne G$ and so $|A_1|\ge2$. By abusing the notation, we use $i_b$ to denote $i_j$ when the color $j$ is blue. Similarly, we use $i_r$ (resp. $i_g$) to denote $i_j$ when the color $j$ is red (resp. green). Let \[ \begin{split} A_b &:= \{a_i \in \{a_2, \ldots, a_p\} \mid a_ia_1 \text{ is colored blue in } \mathcal{R} \}\\ A_r &:= \{a_j \in \{a_2, \ldots, a_p\} \mid a_ja_1 \text{ is colored red in } \mathcal{R} \} \end{split} \] Then $|A_b|+|A_r|=p-1$. Let $B:= \bigcup_{a_i \in A_b} A_i$ and $R:=\bigcup_{a_j \in A_r} A_j$. Then $\max\{|B|, |R|\} \ne0$ because $p\ge2$. Thus $G$ contains a blue $P_3$ between $B$ and $A_1$ or a red $P_3$ between $R$ and $A_1$, and so $\max\{i_b, i_r\}\ge 1$. We next prove several claims. 
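For convenience, we record the arithmetic behind the reduction to $k\ge3$ above: with $i_1=n-1$, Theorem~\ref{cycles} and Theorem~\ref{path-cycle} give $$R_2(C_{2n})=3n-1=1+n+2(n-1) \quad\text{and}\quad R(P_{2i_2+3}, C_{2n})=2n+i_2=1+n+(n-1)+i_2 \ \ (0\le i_2\le n-2),$$ which is exactly $1+n+i_1+i_2$ in either case.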
\\ \noindent {\bf Claim\refstepcounter{counter}\label{Observation} \arabic{counter}.} Let $r\in[k]$ and let $s_1, \ldots,s_r$ be nonnegative integers with $s_1+ \cdots+s_r\ge1$. If $i_{j_1}\geq s_1, \dots, i_{j_r}\geq s_r$ for colors $ j_1, \dots, j_r\in[k]$, then for any $S\subseteq V(G)$ with $|S|\ge|G|-(s_1+ \cdots+s_r)$, $G[S]$ must contain a monochromatic copy of $G_{i^*_{j_q}}$ in color $j_q$ for some $j_q\in\{j_1, \ldots,j_r\}$, where $i^*_{j_q}=i_{j_q}-s_q$. \noindent {\bf{Proof.}}~~ Let $i^*_{j_1} :=i_{j_1}-s_1, \dots,i^*_{j_r} :=i_{j_r}-s_r$, and $i^*_j :=i_j$ for all $j\in [k]\backslash\{j_1, \ldots,j_r\}$. Then $\max \{i^*_\ell: {\ell \in [k]} \} \le i_1$. Let $N^* :=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, n-2\}+\sum_{\ell=1}^k i^*_\ell$. Then $N^*\ge 0$ and $ N^*\le N-(s_1+ \cdots+s_r)<N$ because $s_1+\cdots+s_r\ge1$. Since $|S|\ge 3+N-(s_1+ \cdots+s_r)\ge 3+N^*$ and $G[S]$ does not have a monochromatic copy of $G_{i_j}$ in color $j$ for all $j\in [k]\backslash\{j_1, \ldots,j_r\}$ under $c$, by minimality of $N$, $G[S]$ must contain a monochromatic copy of $G_{i^*_{j_q}}$ in color $j_q$ for some $j_q\in\{j_1,\ldots,j_r\}$. \vrule height3pt width6pt depth2pt \noindent {\bf Claim\refstepcounter{counter}\label{e:Ap} \arabic{counter}.} $|A_1|\le n-1$ and so $G$ does not contain a monochromatic copy of a graph on $|A_1|+1\le n$ vertices in color $m$, where $m\in[k]$ is a color that is neither red nor blue. \noindent {\bf{Proof.}}~~ Suppose $|A_1| \ge n$. We first claim that $i_b\ge |B|$ and $i_r\ge |R|$. Suppose $i_b\le |B|-1$ or $i_r\le |R|-1$. Then we obtain a blue $G_{i_b}$ using the edges between $B$ and $A_1$ or a red $G_{i_r}$ using the edges between $R$ and $A_1$, a contradiction. Thus $i_b\ge |B|$ and $i_r\ge |R|$, as claimed. Let $i_b^*:=i_b-|B|$ and $i_r^*:=i_r-|R|$. Since $|A_1|= |G|-|B|-|R|$, by Claim~\ref{Observation} applied to $i_b\ge |B|$, $i_r\ge |R|$ and $A_1$, $G[A_1]$ must have a blue $G_{i^*_b}$ or a red $G_{i^*_r}$. But then either $G[A_1\cup B]$ contains a blue $G_{i_b}$ or $G[A_1\cup R]$ contains a red $G_{i_r}$, because $|A_1| \ge 3+\min\{\max\{i_b, i_r\}, n-2\}+i_b^*+i_r^*$, a contradiction. This proves that $|A_1|\le n-1$. Next, let $m\in[k]$ be a color that is neither red nor blue. Suppose $G$ contains a monochromatic copy of a graph, say $J$, on $n$ vertices in color $m$. Then $V(J)\subseteq A_\ell$ for some $\ell\in[p]$. But then $|A_1|\ge|A_\ell|\ge n$, contrary to $|A_1|\le n-1$. \vrule height3pt width6pt depth2pt For two disjoint sets $U, W\subseteq V(G)$, we say $U$ is \dfn{blue-complete} (resp. \dfn{red-complete}) to $W$ if all the edges between $U$ and $W$ are colored blue (resp. red) under $c$. For convenience, we say $u$ is \dfn{blue-complete} (resp. \dfn{red-complete}) to $W$ when $U=\{u\}$.\\ \noindent {\bf Claim\refstepcounter{counter}\label{e:R} \arabic{counter}.} $\min\{|B|, |R|\} \ge 1$, $p\ge3$, and $B$ is neither red- nor blue-complete to $R$ under $c$. \noindent {\bf{Proof.}}~~ Suppose $B=\emptyset$ or $R=\emptyset$. By symmetry, we may assume that $R=\emptyset$. Then $|B|=|G|-|A_1|=3+N-|A_1|\ge n+1+i_b-|A_1|$. If $i_b \le |A_1|-1$, then $i_b\le n-2$ by Claim~\ref{e:Ap}. But then we obtain a blue $G_{i_b}$ using the edges between $B$ and $A_1$. Thus $i_b \ge |A_1|$. Let $i^*_b=i_b-|A_1|$. By Claim \ref{Observation} applied to $i_b \ge |A_1|$ and $B$, $G[B]$ must have a blue $G_{i^*_b}$. Since $|B|\ge n+1+i_b^*$, we see that $G$ contains a blue $G_{i_b}$, a contradiction. 
Hence $R\ne \emptyset$ and so $p\ge3$ for any Gallai-partition of $G$. It follows that $B$ is neither red- nor blue-complete to $R$, otherwise $\{B\cup A_1, R\}$ or $\{B, R\cup A_1\}$ yields a Gallai-partition of $G$ with only two parts. \vrule height3pt width6pt depth2pt \noindent {\bf Claim\refstepcounter{counter}\label{e:ij} \arabic{counter}.} Let $m\in[k]$ be the color that is neither red nor blue. Then $i_m\le n-4$. In particular, if $i_m\ge1$, then $G$ contains a monochromatic copy of $P_{2i_m+1}$ in color $m$ under $c$. \noindent {\bf{Proof.}}~~ By Claim~\ref{e:Ap}, $|A_1|\le n-1$ and $G$ contains no monochromatic copy of $P_{|A_1|+1}$ in color $m$ under $c$. Suppose $i_m\ge1$. Let $ i^*_m:=i_m-1$. By Claim \ref{Observation} applied to $i_m\ge1$ and $ V(G) $, $G$ must have a monochromatic copy of $G_{i^*_m}$ in color $m$ under $c$. Since $n\in\{5,6\}$, $|A_1|\le n-1$ and $G$ contains no monochromatic copy of $P_{|A_1|+1}$ in color $m$, we see that $i^*_m \le n-5$. Thus $i_m \le n-4$ and $G$ contains a monochromatic copy of $P_{2i_m+1}$ in color $m$ under $c$ if $i_m\ge1$. \vrule height3pt width6pt depth2pt By Claim~\ref{e:R} and the fact that $|A_1|\ge2$, $G$ has a red $P_3$ and a blue $P_3$. Thus $\min\{i_b, i_r\}\ge1$. By Claim~\ref{e:ij}, $\max\{i_b, i_r\}=i_1=n-1$. Then $|G|= 3+(n-2) +\sum_{i=1}^k i_j \ge 1+n+i_b+i_r \ge 2n+1$. For the remainder of the proof of Theorem~\ref{main}, we choose $p\ge3$ to be as large as possible. \\ \noindent {\bf Claim\refstepcounter{counter}\label{e:R4} \arabic{counter}.} $\min\{|B|, |R|\} \le n-1$ if $|A_1| \ge n-3$. \noindent {\bf{Proof.}}~~ Suppose $|A_1| \ge n-3$ but $\min\{|B|, |R|\} \ge n$. By symmetry, we may assume that $|B| \ge |R| \ge n$. Let $B:=\{x_1, x_2, \ldots, x_{|B|}\}$ and $R:=\{y_1, y_2, \ldots, y_{|R|}\}$. Let $H:=(B,R)$ be the complete bipartite graph obtained from $G[B\cup R]$ by deleting all the edges with both ends in $B$ or in $R$. Then $H$ has no blue $P_7$ with both ends in $B$ and no red $P_7$ with both ends in $R$, else we obtain a blue $C_{2n}$ or a red $C_{2n}$ because $|A_1|\ge n-3$. We next show that $H$ has no red $K_{3,3}$. Suppose $H$ has a red $K_{3,3}$. We may assume that $H[\{x_1, x_2, x_3, y_1, y_2, y_3\}]$ is a red $K_{3,3}$ under $c$. Since $H$ has no red $P_7$ with both ends in $R$, $\{y_4, \ldots, y_{|R|}\}$ must be blue-complete to $\{x_1, x_2, x_3\}$. Thus $H[\{x_1, x_2,x_3, y_4, y_5\}]$ has a blue $P_5$ with both ends in $\{x_1, x_2, x_3\}$ and $H[\{x_1, x_2, x_3, y_1, y_2, y_3\}] $ has a red $P_5$ with both ends in $\{y_1, y_2, y_3\}$. If $|A_1|\ge n-2$ or $\min\{i_b, i_r\}\le n-2$, then we obtain a blue $G_{i_b}$ or a red $G_{i_r}$, a contradiction. It follows that $|A_1|=n-3$ and $i_b=i_r=n-1$. Thus $|B \cup R|\ge 1+n+i_b+i_r-|A_1|=2n+2$. If $|R|\ge6$, then $ \{y_4, y_5, y_6\}$ must be red-complete to $ \{x_4, x_5, x_6\}$, else $H$ has a blue $P_7$ with both ends in $B$. But then we obtain a red $C_{2n}$ in $G$. Thus $|R|=5$, $n=5$, and so $|B|\ge7$. Let $a_1, a^*_1\in A_1$. For each $j\in\{4,5,6,7\}$ and every $W\subseteq \{x_1, x_2, x_3\}$ with $|W|=2$, no $x_j$ is red-complete to $W$ under $c$, else, say, $x_4$ is red-complete to $\{x_1, x_2\}$, then we obtain a red $C_{10}$ with vertices $a_1, y_1, x_1, x_4, x_2, y_2, x_3, y_3, a^*_1, y_4$ in order, a contradiction. We may assume that $x_4x_1, x_5x_2$ are colored blue. But then we obtain a blue $C_{10}$ with vertices $a_1, x_4, x_1, y_4, x_3, y_5, x_2, x_5, a^*_1, x_6$ in order, a contradiction. This proves that $H$ has no red $K_{3,3}$. 
Let $X:=\{x_1, x_2, \ldots, x_5\}$ and $Y :=\{y_1, y_2, \ldots, y_5\}$. Let $H_b$ and $H_r$ be the spanning subgraphs of $H[X\cup Y]$ induced by all the blue edges and red edges of $H[X\cup Y]$ under $c$, respectively. By the Pigeonhole Principle, there exist at least three vertices, say $ x_1, x_2, x_3$, in $ X$ such that either $d_{H_b}(x_i) \ge 3$ for all $i\in[3]$ or $d_{H_r}(x_i) \ge 3$ for all $i\in[3]$. Suppose $d_{H_r}(x_i) \ge 3$ for all $i\in[3]$. We may assume that $x_1$ is red-complete to $\{y_1, y_2, y_3\}$. Since $|Y|=5$ and $H$ has no red $P_7$ with both ends in $R$, we see that $N_{H_r}(x_1)=N_{H_r}(x_2)=N_{H_r}(x_3)=\{y_1, y_2, y_3\}$. But then $H[\{x_1, x_2, x_3, y_1, y_2, y_3\}]$ is a red $K_{3,3}$, contrary to the fact that $H$ has no red $K_{3,3}$. Thus $d_{H_b}(x_i) \ge 3$ for all $i\in[3]$. Since $|Y|=5$, we see that any two of $x_1, x_2, x_3$ have a common neighbor in $H_b$. Furthermore, two of $x_1, x_2, x_3$, say $x_1, x_2$, have at least two common neighbors in $H_b$. It can be easily checked that $H$ has a blue $P_5$ with ends in $\{x_1, x_2, x_3\}$, and there exist three vertices, say $y_1, y_2, y_3$, in $Y$ such that $y_ix_i$ is blue for all $i\in[3]$ and $ \{x_4,\ldots, x_{|B|}\}$ is red-complete to $\{y_1, y_2, y_3\}$. Then $H$ has a blue $P_5$ with both ends in $\{x_1, x_2, x_3\}$ and a red $P_5$ with both ends in $\{y_1, y_2, y_3\}$. If $|A_1|\ge n-2$ or $\min\{i_b, i_r\}\le n-2$, then we obtain a blue $G_{i_b}$ or a red $G_{i_r}$, a contradiction. It follows that $|A_1|= n-3$ and $i_b=i_r=n-1$. Thus $|B \cup R|\ge 1+n+i_b+i_r-|A_1|=2n+2$. Then $|B|\ge n+1$ and so $H[\{x_4, x_5, x_6, y_1, y_2, y_3\}]$ is a red $K_{3,3}$, contrary to the fact that $H$ has no red $K_{3,3}$. \vrule height3pt width6pt depth2pt \noindent {\bf Claim\refstepcounter{counter}\label{e:A1} \arabic{counter}.} $|A_1| \ge 3$. \noindent {\bf{Proof.}}~~ Suppose $|A_1| = 2$. Then $G$ has no monochromatic copy of $P_3$ in color $j$ for any $ j\in\{3, \ldots, k\}$ under $c$. By Claim~\ref{e:ij}, $i_3=\cdots=i_k=0$. We may assume that $|A_1| = \cdots =|A_t| =2$ and $|A_{t+1}| =\cdots=|A_p|=1$ for some integer $t$ satisfying $p\ge t\ge1$. Let $A_i=\{a_i, b_i\}$ for all $i\in [t]$. By reordering if necessary, each of $A_1, \ldots, A_t$ can be chosen as the largest part in the Gallai-partition $A_1, A_2, \ldots, A_p$ of $G$. For all $i\in [t]$, let \[ \begin{split} A^i_b &:= \{a_j \in V(\mathcal{R}) \mid a_ja_i \text{ is colored blue in } \mathcal{R} \}\\ A^i_r &:= \{a_j \in V(\mathcal{R}) \mid a_ja_i \text{ is colored red in } \mathcal{R} \}. \end{split} \] Let $B^i:= \bigcup_{a_j \in A^i_b} A_j$ and $R^i:=\bigcup_{a_j \in A^i_r} A_j$. Then $|B^i|+|R^i|=1+(n-2)+i_b+i_r =2n-2+\min\{i_b, i_r\}$. Let \begin{align*} E_B &:= \{a_ib_i \mid i\in [t] \text{ and } |R^i|<|B^i| \} \\ E_R &:= \{a_ib_i \mid i\in [t] \text{ and } |B^i| <|R^i|\}\\ E_Q &:= \{a_ib_i \mid i\in [t] \text{ and } |B^i|=|R^i|\}. \end{align*} Let $c^*$ be obtained from $c$ by recoloring all the edges in $E_B$ blue, all the edges in $E_R$ red and all the edges in $E_Q$ either red or blue. Then all the edges of $G$ are colored red or blue under $c^*$. Since $|G|=n+1+i_b+i_r= R(G_{i_b}, G_{i_r})$ by Theorem~\ref{cycles} and Theorem~\ref{path-cycle}, we see that $G$ must contain a blue $G_{i_b}$ or a red $G_{i_r}$ under $c^*$. By symmetry, we may assume that $G$ has a blue $H:=G_{i_b}$. Then $H$ contains no edges of $E_R$ but must contain at least one edge of $E_B \cup E_Q$, else we obtain a blue $G_{i_b}$ in $G$ under $c$.
We choose $H$ so that $|E(H) \cap (E_B\cup E_Q)|$ is minimal. We may further assume that $a_1b_1 \in E(H)$. Since $|B^1|+|R^1|=2n-2+\min\{i_b, i_r\}$, by the choice of $c^*$, $|B^1|\ge n-1\geq4$ and $|R^1| \le n-1+\lfloor \frac{\min\{i_b, i_r\}}{2} \rfloor\leq7$. So $i_b\ge2$. By Claim \ref{e:R4}, $|R^1| \le 4$ when $n=5$. Let $W:=V(G) \backslash V(H) $. We next claim that $i_b=n-1$. Suppose $i_b\le n-2$. Then $H=P_{2i_b+3}$, $i_r=n-1$, $|G|=2n+i_b$ and $|W|=2n-3- i_b \ge n-1$. Let $x_1, x_2, \ldots, x_{2i_b+3}$ be the vertices of $H$ in order. We may assume that $x_\ell x_{\ell+1}=a_1b_1$ for some $\ell\in [2i_b+2]$. If a vertex $w\in W $ is blue-complete to $\{a_1, b_1\}$, then we obtain a blue $H':=G_{i_b}$ under $c^*$ with vertices $ x_1, \ldots, x_{\ell}, w, x_{\ell+1}, \ldots, x_{2i_b+2}$ in order (when $\ell\ne 2i_b+2$) or $ x_1, x_{2}, \ldots, x_{2i_b+2}, w$ in order (when $\ell= 2i_b+2$) such that $|E(H') \cap (E_B\cup E_Q)| < |E(H) \cap (E_B\cup E_Q)|$, contrary to the choice of $H$. Thus no vertex in $ W$ is blue-complete to $\{a_1, b_1\}$ under $c$ and so $W$ must be red-complete to $\{a_1, b_1\}$ under $c$. This proves that $W \subseteq R^1 $. We next claim that $\ell=1$ or $\ell = 2i_b+2$. Suppose $\ell\in\{2, \ldots, 2i_b+1\}$. Then $\{x_1, x_{2i_b+3}\}$ must be red-complete to $\{a_1, b_1\}$, else, we obtain a blue $H':=G_{i_b}$ with vertices $x_{\ell}, \ldots, x_1, x_{\ell+1}, \ldots, x_{2i_b+3}$ or $x_1, \ldots, x_{\ell}, x_{2i_b+3}, x_{\ell+1}, \ldots, x_{2i_b+2}$ in order under $c^*$ such that $|E(H') \cap (E_B\cup E_Q)| < |E(H) \cap (E_B\cup E_Q)|$. Thus $\{x_1, x_{2i_b+3}\}\subseteq R^1$ and so $W\cup\{x_1, x_{2i_b+3}\}$ is red-complete to $\{a_1, b_1\}$. If $n=5$, then $4\ge |R^1|\ge |W\cup\{x_1, x_{2i_b+3}\}|\ge 6$, a contradiction. Thus $n=6$ and $7 \ge |R^1|\ge |W\cup\{x_1, x_{2i_b+3}\}|\ge 7$. It follows that $R^1\cap V(H)=\{x_1, x_{2i_b+3}\}$ and thus either $\{x_{\ell-2}, x_{\ell-1}\}$ or $\{x_{\ell+2}, x_{\ell+3}\}$ is blue-complete to $\{a_1, b_1\}$. In either case, we obtain a blue $H':=G_{i_b}$ under $c^*$ such that $|E(H') \cap (E_B\cup E_Q)| < |E(H) \cap (E_B\cup E_Q)|$, a contradiction. This proves that $\ell=1$ or $\ell = 2i_b+2$. By symmetry, we may assume that $\ell=1$. Then $x_1x_3$ is colored blue under $c$ because $A_1=\{a_1, b_1\}$. Similarly, for all $j\in \{3, \ldots, 2i_b+2\}$, $\{x_j, x_{j+1}\}$ is not blue-complete to $\{a_1, b_1\}$, else we obtain a blue $H':=G_{i_b}$ with vertices $ x_1, x_j, \ldots, x_2, x_{j+1}, \ldots, x_{2i_b+3}$ in order under $c^*$ such that $|E(H') \cap (E_B\cup E_Q)| < |E(H) \cap (E_B\cup E_Q)|$. It follows that $x_4\in R^1$ and so $|R^1\cap \{x_4, \ldots, x_{2i_b+3}\}|\ge i_b$. Then $|R^1|\ge |W|+|R^1\cap \{x_4, \ldots, x_{2i_b+3}\}|\ge 2n-3$, so $4 \ge |R^1| \ge 7$ (when $n=5$) or $7 \ge |R^1| \ge 9$ (when $n=6$), a contradiction. This proves that $i_b=n-1$. Since $i_b=n-1$, we see that $H=C_{2n}$. Then $|G|=2n+i_r$ and so $|W|=i_r$. Let $ a_1, x_1, \ldots, x_{2n-2}, b_1 $ be the vertices of $H$ in order and let $W:=\{w_1, \ldots, w_{i_r}\}$. Then $x_1b_1$ and $a_1x_{2n-2}$ are colored blue under $c$ because $A_1=\{a_1, b_1\}$. Suppose $\{x_j, x_{j+1}\}$ is blue-complete to $\{a_1, b_1\}$ for some $j\in[2n-3]$. We then obtain a blue $H':=C_{2n}$ with vertices $a_1, x_1, \ldots, x_j, b_1,$ $ x_{2n-2},\ldots, x_{j+1}$ in order under $c^*$ such that $|E(H') \cap (E_B\cup E_Q)| < |E(H) \cap (E_B\cup E_Q)|$, contrary to the choice of $H$. 
Thus, for all $j\in[2n-3]$, $\{x_j, x_{j+1}\}$ is not blue-complete to $\{a_1, b_1\}$. Since $\{x_1, x_{2n-2}\}$ is blue-complete to $\{a_1, b_1\}$ under $c$, we see that $x_2, x_{2n-3}\in R^1$, and so $4 \ge |R^1\cap V(H)|\ge 4$ (when $n=5$) and $5+\lfloor \frac{i_r}{2} \rfloor \ge |R^1\cap V(H)|\ge 5$ (when $n=6$). Thus, when $n=5$, we have $R^1=\{x_2, x_4, x_5, x_7\}$ or $R^1=\{x_2, x_4, x_6, x_7\}$, as depicted in Figure~\ref{case1} and Figure~\ref{case2}; when $n=6$, we have $R^1 \cap V(H)=\{x_2, x_9\} \cup \{x_j: j \in J\}$, where $J\in \{ \{4,6,8\}$, $\{4,6,7\}$, $\{3,4,6,7\}$, $\{3,5,6,7\}$, $\{4,5,6,7\}$, $\{4,6,7,8\}$, $ \{3,5,7,8\}$, $\{3,5,6,8\}$, $\{3,4,5,6,7\}$, $ \{3,4,5,6,8\}$, $\{3,4,5,7,8\} \}$. \begin{figure} \caption{Two cases of $R^1$ when $i_b =4$ and $n=5$.} \label{case1} \label{case2} \end{figure} Since $|R^1|\ge n-1$ and $R^1$ is red-complete to $\{a_1, b_1\}$ under $c$, we see that $i_r\ge2$. Let $W':=W \backslash R^1 \subset B^1$. It follows that $|W'|=i_r-|R^1 \backslash V(H)| \ge \lceil \frac{i_r}{2} \rceil \ge 1$. We may assume $W'=\{w_1, \ldots, w_{|W'|}\}$. We claim that $E(H) \cap (E_B \cup E_Q) = \{a_1b_1\}$. Suppose, say $a_2b_2 \in E(H) \cap (E_B \cup E_Q) $. Since $\{x_1, x_2\}\ne A_i$ and $\{x_{2n-3}, x_{2n-2}\}\ne A_i$ for all $i\in [t]$, we may assume that $a_2=x_j$ and $b_2=x_{j+1}$ for some $j\in \{2, \ldots, 2n-4\}$. Then $x_{j-1}x_{j+1}$ and $x_{j}x_{j+2}$ are colored blue under $c$. But then we obtain a blue $H':=C_{2n}$ under $c^*$ with vertices $a_1, x_1, \ldots, x_{j-1}, x_{j+1}, \ldots, x_{2n-2}, b_1, w_1$ in order such that $|E(H') \cap (E_B \cup E_Q)| < |E(H) \cap (E_B \cup E_Q)|$, contrary to the choice of $H$. Thus $E(H) \cap (E_B \cup E_Q) = \{a_1b_1\}$, as claimed. \noindent ($\ast$) Let $w \in W'$. For $j \in \{1,2n-2\}$, if $\{x_j, w\} \neq A_i$ for all $i \in [t]$, then $x_jw$ is colored red. For $j \in \{2,\dots,2n-3\}$, if $\{x_j, w\} \neq A_i$ for all $i \in [t]$ and $x_{j-2}$ or $x_{j+2} \in B^1$, then $x_jw$ is colored red. \noindent {\bf{Proof.}}~~ Suppose there are some $j \in [2n-2]$ such that $\{x_j, w\} \neq A_i$ for all $i \in [t]$, and $x_{j-2}$ or $x_{j+2} \in B^1$ if $j \in \{2,\dots,2n-3\}$, but $x_jw$ is colored blue. Then we obtain a blue $C_{2n}$ under $c$ with vertices $a_1, w, x_1, \ldots, x_{2n-2}$ (when $j=1$) or $a_1, x_1, \ldots, x_{2n-2}, w$ (when $j=2n-2$) in order if $j\in\{1,2n-2\}$, and with vertices $b_1, x_{2n-2}, x_{2n-3}, \cdots, x_{j+2}, a_1, w, x_j, \cdots, x_1$ in order (when $x_{j+2} \in B^1$) or $a_1, x_1, \cdots, x_{j-2}, b_1, w, x_j, \cdots, x_{2n-2}$ in order (when $x_{j-2} \in B^1$) if $j \in \{2,\dots,2n-3\}$, a contradiction. \vrule height3pt width6pt depth2pt \noindent ($\ast\ast$) For $j \in [2n-4]$, $x_jx_{j+2}$ is colored red if $\{x_j, x_{j+2}\}\ne A_i$ for all $i\in [t]$. \noindent {\bf{Proof.}}~~ Suppose $x_jx_{j+2}$ is colored blue for some $j \in [2n-4]$. Then we obtain a blue $C_{2n}$ with vertices $a_1, x_1, \ldots, x_j, x_{j+2}, \ldots, x_{2n-2}, b_1, w$ in order, a contradiction, where $w \in W'$. \vrule height3pt width6pt depth2pt First if $n=5$, then $W'=W$. Let $(\alpha,\beta)\in\{(5,7),(7,6)\}$. Suppose $R^1=\{x_2, x_4, x_\alpha, x_\beta\}$. Since $\{x_{\alpha-1}, w_j\}\ne A_i$ and $\{x_\alpha, w_j\}\ne A_i$ for all $w_j\in W$ and $i\in [t]$, $x_{\alpha+1}, x_{\alpha-2} \in B^1$, by ($\ast$), $\{x_{\alpha-1}, x_\alpha\}$ must be red-complete to $W$ under $c$. 
Then for any $w_j\in W$, $\{x_{\alpha-2}, w_j\}\ne A_i$ and $\{x_{\alpha+1}, w_j\}\ne A_i$ for all $i\in [t]$ since $x_{\alpha-1}x_{\alpha-2}$ and $x_\alpha x_{\alpha+1}$ are colored blue under $c$. Thus $\{x_{\alpha-2},x_{\alpha+1}\}$ is red-complete to $W$ by ($\ast$). So $\{x_{\alpha-2}, x_{\alpha-1}, x_\alpha, x_{\alpha+1}\}$ is red-complete to $W$ under $c$. But then we obtain a red $P_9$ under $c$ (when $i_r\le3$) with vertices $ x_2, a_1,x_{\alpha-1}, b_1, x_\alpha, w_1, x_{\alpha-2}, w_2, x_{\alpha+1}$ in order or a red $C_{10}$ under $c$ (when $i_r=4$) with vertices $a_1, x_2, b_1, x_{\alpha-1}, w_1, x_{\alpha-2}, w_2, x_{\alpha+1}, w_3, x_{\alpha}$ in order, a contradiction. This proves that $n=6$. By ($\ast$), we may assume $x_1$ is red-complete to $W' \backslash w_1$ and $x_{10}$ is red-complete to $W' \backslash w_{|W'|}$ because $|A_1|=2$. \noindent{\bf Case 1.} $|R^1 \cap V(H)|=5$. Let $(\alpha,\beta)\in\{(9,8),(7,9)\}$. Suppose $R^1=\{x_2, x_4, x_6,x_\alpha, x_\beta\}$. Since $\{x_{\alpha-1}, w_j\}\ne A_i$ and $\{x_\alpha, w_j\}\ne A_i$ for all $w_j\in W'$ and $i\in [t]$, $x_{\alpha+1}, x_{\alpha-2} \in B^1$, $\{x_{\alpha-1}, x_\alpha\}$ must be red-complete to $W'$ under $c$ by ($\ast$). Then for any $w_j\in W'$, $\{x_{\alpha-2}, w_j\}\ne A_i$ and $\{x_{\alpha+1}, w_j\}\ne A_i$ for all $i\in [t]$ since $x_{\alpha-1}x_{\alpha-2}$ and $x_\alpha x_{\alpha+1}$ are colored blue under $c$. Thus $\{x_{\alpha-2},x_{\alpha+1}\}$ is red-complete to $W'$ by ($\ast$). So $\{x_{\alpha-2}, x_{\alpha-1}, x_\alpha, x_{\alpha+1}\}$ is red-complete to $W'$ under $c$. We see that $G$ has a red $P_7$ with vertices $x_{\alpha-1}, w_1, x_{\alpha}, a_1, x_2, b_1, x_4$ in order, and so $i_r \ge 3$ and $|W'| \ge 2$. Moreover, $x_{\alpha-1}x_{\alpha+1}$ and $x_{\alpha-2}x_\alpha$ are colored red by ($\ast\ast$). Then $G$ has a red $P_{11}$ with vertices $ x_1, w_2,x_{\alpha-1}, x_{\alpha+1},w_1,x_{\alpha-2},x_\alpha,a_1,x_2,b_1, x_4$ in order under $c$. Thus $i_r=5$ and so $|W'| \ge 3$. Since $|A_1|=2$ and $x_{\alpha-6} \in B^1$, by ($\ast$), we may assume $x_{\alpha-4}$ is red-complete to $W' \backslash w_2$. But then we obtain a red $C_{12}$ with vertices $a_1, x_\alpha, x_{\alpha-2}, w_1, x_{\alpha-4}, w_3, x_1, w_2, x_{\alpha+1}, x_{\alpha-1}, b_1, x_2$ in order under $c$, a contradiction. \noindent{\bf Case 2.} $|R^1 \cap V(H)|=6$, then $i_r \ge 3$ and $|W'| \ge 3$. Let $(\alpha,\beta,\gamma)\in\{(5,2,4),(4,7,5)\}$. Suppose $R^1 \cap V(H)=\{x_2, x_3, x_\alpha, x_6, x_7, x_9\}$. Since $\{x_\beta, w_j\}\ne A_i$, $\{x_3, w_j\}\ne A_i$ and $\{x_6, w_j\}\ne A_i$ for all $w_j\in W'$ and $i\in [t]$, by ($\ast$), $\{x_\beta, x_3, x_6\}$ must be red-complete to $W'$ under $c$. By ($\ast\ast$), $x_\gamma$ is red-complete to $\{x_{\gamma-2}, x_{\gamma+2}\}$. But then we obtain a red $C_{12}$ under $c$ with vertices $a_1, x_2, x_4, x_6, w_1, x_{10}, w_2, x_1, w_3, x_3, b_1, x_5$ (when $\alpha=5$) or $a_1, x_3, x_5, x_7, w_1, x_{10}, w_2, x_1, w_3, x_6, b_1, x_4$ (when $\alpha=4$) in order, a contradiction. Let $(\alpha,\beta,\gamma,\delta)\in\{(3,8,5,6), (3,5,7,8),(4,6,8,2)\}$. Suppose $R^1 \cap V(H)=V(H) \backslash \{a_1, b_1, x_1, x_{10}, x_\alpha, x_\beta\}$. Since $\{x_\gamma, w\}\ne A_i$ and $\{x_\delta, w\}\ne A_i$ for all $w\in W'$ and $i\in [t]$, $\{x_\gamma, x_\delta\}$ must be red-complete to $W'$ under $c$ by ($\ast$). Moreover, $x_\gamma x_{\gamma-2}$ and $x_\delta x_{\delta+2}$ are colored red by ($\ast\ast$). 
Since $|A_1|=2$, at least one of $x_1, x_{10}, x_\alpha, x_\beta$ is red-complete to $\{w_1, w_2, w_3\}$ by ($\ast$). So we may assume $x_\alpha$ is red-complete to $W' \backslash w_2$ and $x_\beta$ is red-complete to $\{w_1, w_2, w_3\}$. But then we obtain a red $C_{12}$ with vertices $a_1, x_\gamma, x_{\gamma-2}, w_1, x_{10}, w_2, x_1, w_3, x_{\delta+2},x_\delta, b_1, x_7 $ in order if $(\alpha,\beta,\gamma,\delta)\in\{(3,8,5,6),(4,6,8,2)\}$ and $a_1, x_7, x_5, w_1, x_3, w_3, x_1, w_2, x_{10}, x_8, b_1, x_6 $ in order if $(\alpha,\beta,\gamma,\delta)=(3,5,7,8)$, a contradiction. Finally, suppose $R^1 \cap V(H)=\{x_2, x_3, x_5, x_6, x_8, x_9\}$. By ($\ast$), $R^1 \cap V(H)$ is red-complete to $W'$. Then $G$ has a red $P_{11}$ with vertices $x_2, a_1, x_3, b_1, x_5, w_1, x_6, w_2, x_8, w_3, x_9$ in order. Thus $i_r=5$ and so $|W'| \ge 4$. But then we obtain a red $C_{12}$ with vertices $a_1, x_2, w_1, x_3, w_2, x_5, w_3, x_6, w_4, x_8, b_1, x_9$ in order, a contradiction. \noindent{\bf Case 3.} $|R^1 \cap V(H)|=7$, then $i_r \ge 4$ and $|W'| =|W|=i_r$. Let $(\alpha,\beta)\in\{(6,5), (7,4)\}$. Suppose $R^1 \cap V(H)=\{x_2, x_3, x_4, x_5, x_\alpha, x_8, x_9\}$. Since $\{x_3, w_j\}\ne A_i$, $\{x_\beta, w_j\}\ne A_i$ and $\{x_8, w_j\}\ne A_i$ for all $i\in [t]$ and any $w_j\in W'$, $\{x_3, x_\beta, x_8\}$ must be red-complete to $W'$ under $c$ by ($\ast$). But then we obtain a red $C_{12}$ with vertices $a_1, x_3, w_1, x_{10}, w_2, x_1, w_3, x_\beta, w_4, x_8, b_1, x_2$ in order, a contradiction. Finally, suppose $R^1 \cap V(H)=\{x_2, x_3, x_4, x_5, x_6, x_7, x_9\}$. Since $\{x_3, w_j\}\ne A_i$ and $\{x_6, w_j\}\ne A_i$ for all $i\in [t]$ and any $w_j\in W'$, $\{x_3, x_6\}$ must be red-complete to $W'$ under $c$ by ($\ast$). We may assume $x_8$ is red-complete to $W' \backslash w_2$ by ($\ast$). But then we obtain a red $C_{12}$ with vertices $a_1, x_3, w_1, x_{10}, w_2, x_1, w_3, x_8, w_4, x_6, b_1, x_2$ in order, a contradiction. This proves that $|A_1|\ge3$. \vrule height3pt width6pt depth2pt \noindent {\bf Claim\refstepcounter{counter}\label{e:ig} \arabic{counter}.} For any $A_i$ with $3 \le |A_i| \le 4$, $G[A_i]$ has a monochromatic copy of $P_3$ in some color $m \in [k]$ other than red and blue. \noindent {\bf{Proof.}}~~ Suppose there exists a part $A_i$ with $3 \le |A_i| \le 4$ but $G[A_i]$ has no monochromatic copy of $P_3$ in any color $m \in [k]$ other than red and blue. We may assume $i=1$. Since $GR_k(P_3)=3$, we see that $G[A_1]$ must contain a red or blue $P_3$, say blue. We may assume $a_1, b_1, c_1$ are the vertices of the blue $P_3$ in order. Then $|A_1|=4$, else $\{b_1\}, \{a_1, c_1\}, A_2, \ldots, A_p$ is a Gallai-partition of $G$ with $p+1$ parts. Let $z_1 \in A_1\backslash \{a_1, b_1, c_1\}$. Then $z_1$ is not blue-complete to $\{a_1, c_1\}$, else $\{a_1, c_1\}, \{b_1, z_1\}, A_2, \ldots, A_p$ is a Gallai-partition of $G$ with $p+1$ parts. Moreover, $b_1z_1$ is not colored blue, else $\{b_1\}, \{a_1, c_1, z_1\}, A_2, \ldots, A_p$ is a Gallai-partition of $G$ with $p+1$ parts. If $b_1z_1$ is colored red, then $a_1z_1$ and $ c_1z_1$ are colored either red or blue because $G$ has no rainbow triangle. Similarly, $z_1$ is not red-complete to $\{a_1, c_1\}$, else $\{z_1\}, \{a_1, b_1, c_1\}, A_2, \ldots, A_p$ is a Gallai-partition of $G$ with $p+1$ parts. Thus, by symmetry, we may assume $a_1z_1$ is colored blue and $c_1z_1$ is colored red, and so $a_1c_1$ is colored blue or red because $G$ has no rainbow triangle.
But then $\{a_1\}, \{b_1\}, \{c_1\}, \{z_1\}, A_2, \ldots, A_p$ is a Gallai-partition of $G$ with $p+3$ parts, a contradiction. Thus $b_1z_1$ is colored neither red nor blue. But then $a_1z_1$ and $c_1z_1$ must be colored blue because $G[A_1]$ has neither rainbow triangle nor monochromatic $P_3$ in any color $m \in [k]$ other than red and blue, a contradiction. \vrule height3pt width6pt depth2pt For the remainder of the proof of Theorem~\ref{main}, we assume that $|B|\ge|R|$. By Claim~\ref{e:R4}, $|R|\le n-1$. Let $\{a_i,b_i,c_i\}\subseteq A_i$ if $|A_i|\geq3$ for each $i\in[p]$. Let $B:=\{x_1, \ldots, x_{|B|}\}$ and $R:=\{y_1, \ldots, y_{|R|}\}$. We next show that \\ \noindent {\bf Claim\refstepcounter{counter}\label{irR} \arabic{counter}.} $i_r \ge |R|$. \noindent {\bf{Proof.}}~~ Suppose $i_r \le |R|-1 \le n-2$. Then $i_b=n-1$, $i_r \ge 3$ and $|A_1| \le 4$, else we obtain a red $G_{i_r}$ because $R$ is not blue-complete to $B$ and $|A_1| \ge 3$. Moreover, there exist two edges, say $x_1y_1, x_2y_2$, that are colored red, else we obtain a blue $C_{2n}$. Then $G[A_1 \cup R \cup \{x_1, x_2\}]$ has a red $P_9$; it follows that $n=6$, $i_r=4$ and $|R|=5$. By Claim~\ref{e:ig}, $G[A_1]$ has a monochromatic, say green, copy of $P_3$. By Claim~\ref{e:ij}, $i_g=1$. Then $|A_1 \cup B| =|G|-|R| \ge 7+i_b+i_r+i_g-|R|=12$, and so $G[B]$ has no blue $G_{i_b-|A_1|}$, else we obtain a blue $C_{12}$. Let $i_b^*:=i_b-|A_1|\leq2$, $i_r^*:=i_r-|R|+2=1$, $i_j^*:=i_j \le 2$ for all color $j \in [k]$ other than red and blue. Let $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, 4\}+\sum_{\ell=1}^k i_{\ell}^*$. Observe that $|B|\geq3+N^*$. By minimality of $N$, $G[B]$ has a red $P_5$ with vertices, say $x_1, \ldots, x_5$, in order. Because there is a red $P_7$ with both ends in $R$ using edges between $A_1$ and $R$, we see that $R$ is blue-complete to $\{x_1, x_2, x_4, x_5\}$, else $G[A_1 \cup R \cup \{x_1, \ldots, x_5\}]$ has a red $P_{11}$. But then we obtain a blue $C_{12}$ with vertices $a_1, x_1, y_1, x_2, y_2, x_4, y_3, x_5, b_1, x_3, c_1, x_6$ in order, a contradiction. \vrule height3pt width6pt depth2pt
We may assume that $x_qy_2$ is colored red in $G$. Then $n=6$, $i_r=|R|=5$ and $i_b=|A_1|=3$, else we obtain a red $G_{i_r}$ using vertices in $V(P_{2i^*_r+3})\cup R\cup A_1$. Let $x' \in B \backslash \{x, x_1, x_2, x_3\}$. Then $\{x, x'\} \nsubseteq A_i$ and $\{x, x_1\} \nsubseteq A_i$ for all $i \in [p]$ because $y_1x$ is colored blue and $y_1x', y_1x_1$ are colored red, and so $xx'$ and $xx_1$ are colored red, else $G[A_1 \cup B \cup \{y_1\}]$ has a blue $P_9$. But then we obtain a red $C_{12}$ with vertices $a_1, y_1, x', x, x_1, x_2, x_3, y_2, b_1, y_3, c_1, y_4$ in order, a contradiction. Thus $|R|=1$. By Claim~\ref{Observation} applied to $i_b=|A_1|$, $i_r\ge |R|$ and $B$, $G[B]$ must have a red $P_{2i_r+1}$ with vertices, say $x_1, x_2, \ldots, x_{2i_r+1}$, in order. Since $G[B\cup R]$ contains no blue $P_3$ with both ends in $B$, we may assume that $y_1x_1$ is colored red under $c$. Then $i_r=n-1$, else we obtain a red $G_{i_r}$, a contradiction. Moreover, $y_1x_{2n-1}$ must be colored blue, else $G$ has a red $C_{2n}$ with vertices $y_1, x_1, \ldots, x_{2n-1}$ in order. Thus $y_1$ is red-complete to $\{x_1, \ldots, x_{2n-2}\}$, and so $\{x_j, x_{2n-1}\} \nsubseteq A_i$ for all $i \in [p]$ and $j \in [2n-2]$. So $x_{2n-1}x_i$ must be colored red for some $i\in[2n-3]$ because $G[B]$ has no blue $P_3$. But then we obtain a red $C_{2n}$ with vertices $y_1, x_1, \ldots, x_i, x_{2n-1}, x_{2n-2}, \ldots, x_{i+1}$ in order, a contradiction. This proves that $i_b > |A_1|$, and so $|A_1|\le n-2$. \vrule height3pt width6pt depth2pt By Claim~\ref{e:A1} and Claim~\ref{e:A14}, $3 \le |A_1| \le n-2$. Then by Claim~\ref{e:ig}, $G[A_1]$ has a monochromatic, say green, copy of $P_3$. By Claim~\ref{e:ij}, $i_g=1$. \noindent {\bf Claim\refstepcounter{counter}\label{e:A2A3} \arabic{counter}.} If $|A_1| = 3$, then $|A_2|=3$, $|A_3| \le 2$, and $i_j=0$ for all color $j \in [k] \backslash [3]$. \noindent {\bf{Proof.}}~~ Assume $|A_1|=3$. To prove $|A_2|=3$, we show that $G[B \cup R]$ has a green $P_3$. Suppose $G[B \cup R]$ has no green $P_3$. By Claim~\ref{e:A14}, $i_b \ge |A_1|+1=4$. Let $i_g^*:=0$ and $i^*_j:=i_j$ for all $j\in [k]$ other than green. Let $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, n-2\}+\sum_{\ell=1}^k i^*_\ell$. Then $N^*=N-1$ and $|G\backslash a_1| =3+N-1 = 3+N^*$. But then $G\backslash a_1$ has no monochromatic copy of $G_{i^*_j}$ in color $j$ for any $j\in[k]$, contrary to the minimality of $N$. Thus $G[B \cup R]$ has a green $P_3$ and so $|A_2|=3$. Suppose $|A_3|=3$. For all $i\in [3]$, let \[ \begin{split} A^i_b &:= \{a_j \in V(\mathcal{R}) \mid a_ja_i \text{ is colored blue in } \mathcal{R} \}\\ A^i_r &:= \{a_j \in V(\mathcal{R}) \mid a_ja_i \text{ is colored red in } \mathcal{R} \} \end{split} \] Let $B^i:= \bigcup_{a_j \in A^i_b} A_j$ and $R^i:=\bigcup_{a_j \in A^i_r} A_j$. Since each of $A_1, A_2, A_3$ can be chosen as the largest part in the Gallai-partition $A_1, A_2,\ldots, A_p$ of $G$, by Claim \ref{e:R4}, either $|B^i| \le 5$ or $|R^i| \le 5$ for all $i\in [3]$. Without loss of generality, we may assume that $A_2$ is blue-complete to $A_1 \cup A_3$. Let $X := V(G) \backslash (A_1 \cup A_2 \cup A_3)=\{v_1, \ldots, v_{|X|}\}$. Then $|X| \ge 1+n+i_b+i_r+i_g-9=2n-8+\min\{i_b, i_r\}$. Suppose $|X \cap B^1| \ge 2$. We may assume $v_1, v_2 \in X \cap B^1$.
Then $G$ has a blue $C_{10}$ with vertices $a_1, v_1, b_1, v_2, c_1, a_2, a_3, b_2, b_3, c_2$ in order and a blue $P_{11}$ with vertices $a_1, v_1, b_1, v_2, c_1, a_2, a_3, b_2, b_3, c_2, c_3$ in order, and so $n=6$ and $i_b=5$. Moreover, $X \backslash \{v_1, v_2\} \subseteq R^3$, else, say $v_3$ is blue-complete to $A_3$, then we obtain a blue $C_{12}$ under $c$ with vertices $a_1, v_1, b_1, v_2, c_1, a_2, a_3, v_3, b_3, b_2, c_3, c_2$ in order. Thus $|R^3| \ge |X \backslash \{v_1, v_2\}|\ge 2+i_r$, and so $i_r \ge 3$, else $G$ has a red $G_{i_r}$ using the edges between $A_3$ and $R^3$. Then there exist at least two vertices in $X \backslash \{v_1, v_2\}$, say $v_3, v_4$, such that $\{v_3, v_4\}$ is blue-complete to $A_1$, else $G[A_1 \cup A_3 \cup (X \backslash \{v_1, v_2\})]$ contains a red $G_{i_r}$. Thus $|B^1| \ge |A_2 \cup \{v_1, \ldots, v_4\}|=7$ and so $|R^1| \le 5$. Moreover, $\{v_1, v_2\} \subset R^3$, else, say $v_1$ is blue-complete to $A_3$, we then obtain a blue $C_{12}$ under $c$ with vertices $a_1, v_3, b_1, v_4, c_1, a_2, a_3, v_1, b_3, b_2, c_3, c_2$ in order. Then $X \subseteq R^3$ and $|R^3| \ge |X| \ge 4+i_r \ge 7$, and so $|B^3| \le 5$ and $A_1$ is red-complete to $A_3$. Furthermore, $G[B^1\backslash A_2]$ has no blue $P_3$, else, say $v_1, v_2, v_3$ is such a blue $P_3$ in order, we obtain a blue $C_{12}$ with vertices $a_1, v_1, v_2, v_3, b_1, v_4, c_1, a_2, a_3, b_2, b_3, c_2$ in order. Therefore for any $U\subseteq B^1 \backslash A_2$ with $|U|\geq4$, $G[U]$ contains a red $P_3$ because $|A_1|=3$ and $GR_k(P_3)=3$. Since $|R^1| \le 5$ and $A_3 \subseteq R^1$, we may assume $v_1, \ldots, v_{|X|-2} \in B^1 \backslash A_2$. Then $G[\{v_1, \ldots, v_4\}]$ must contain a red $P_3$ with vertices, say $v_1, v_2, v_3$, in order. We claim that $X \subset B^1$. Suppose $v_{|X|} \in R^1$. Then $v_{|X|}$ is red-complete to $A_1$ and so $G$ has a red $P_{11}$ with vertices $c_1, v_{|X|}, a_1, a_3, b_1, b_3, v_1, v_2, v_3, c_3, v_4$ in order; it follows that $i_r=5$. Thus $|X| \ge 9$, and $G[\{v_4, \ldots, v_7\}]$ has a red $P_3$ with vertices, say $v_4, v_5, v_6$, in order. But then we obtain a red $C_{12}$ with vertices $a_1, v_{|X|}, b_1, a_3, v_1, v_2, v_3, b_3, v_4, v_5, v_6, c_3$ in order, a contradiction. Thus $X \subset B^1$ as claimed. Since $|X| \ge 7$, $G[\{v_4, \ldots, v_7\}]$ contains a red $P_3$ with vertices, say $v_4, v_5, v_6$, in order. Then $G$ has a red $P_{11}$ with vertices $a_1, a_3, b_1, b_3, v_1, v_2, v_3, c_3, v_4, v_5, v_6$ in order, and so $i_r=5$ and $|X| \ge 9$. Suppose $G[\{v_4, \ldots, v_9\}]$ has no red $P_5$. Then $G[\{v_4, \ldots, v_9\}]$ has at most one part with order three, say $A_4$, and we may assume $G[A_4]$ has a monochromatic $P_3$ in some color $m$ other than red and blue if $|A_4|=3$ by Claim \ref{e:ig}. Let $i^*_r:=1$, $i^*_m:=1$, $i^*_j:=0$ for all color $j\in [k] \backslash \{m\}$ other than red. Let $N^*:=\min\{i^*_r, 4\}+\sum_{\ell=1}^k i^*_{\ell}=3<N$. Then $G[\{v_4, \ldots, v_9\}]$ has no monochromatic copy of $G_{i_j^*}$ in any color $j \in [k]$, which contradicts the minimality of $N$. Thus $G[\{v_4, \ldots, v_9\}]$ has a red $P_5$ with vertices, say $v_4, \ldots, v_8$, in order. But then we obtain a red $C_{12}$ with vertices $a_3, v_1, v_2, v_3, b_3, v_4, \ldots, v_8, c_3, v_9$ in order, a contradiction. Therefore, $|X \cap B^1| \le 1$. By symmetry, $|X \cap B^3| \le 1$. Let $w \in X \cap B^1$ and $w' \in X \cap B^3$. Then $A_1 \cup A_3$ is red-complete to $X \backslash \{w, w'\}$.
It follows that $n=5$ and $|X \cap B^1|=|X \cap B^3|=1$, else $G[A_1 \cup A_3 \cup (X \backslash \{w, w'\})]$ has a red $G_{i_r}$ because $|X| \ge 2n-8+\min\{i_b, i_r\}$ and $i_b \ge 4$, a contradiction. But then we obtain a blue $C_{10}$ with vertices $a_2, a_1, w, b_1, b_2, a_3, w', b_3, c_2, c_3$ in order, a contradiction. This proves that $|A_3| \le 2$, and then both $G[A_1]$ and $G[A_2]$ have a green $P_3$, so $i_j=0$ for all color $j \in [k]$ other than red, blue and green by Claim~\ref{e:ij}. \vrule height3pt width6pt depth2pt \noindent {\bf Claim\refstepcounter{counter}\label{n6R} \arabic{counter}.} If $i_b=|A_1|+1$, then $|R|\leq 2$. \noindent {\bf{Proof.}}~~ Suppose $i_b=|A_1|+1$ but $|R|\geq3$. By Claim~\ref{irR}, $i_r \ge |R|$; it follows that $|B| \ge 1+n+i_b+i_r+i_g-|A_1|-|R| \ge 3+n$. Thus $G[B \cup R]$ has no blue $P_5$ with both ends in $B$, else we obtain a blue $G_{i_b}$. Let $i_b^*:=i_b-|A_1| = 1$, $i_r^*:=i_r-|R|+1$ (when $n=5$) or $i_r^*:=\max\{i_r-|R|+1,2\}$ (when $n=6$), $i_j^*:=i_j$ for all $j \in [k]$ other than red and blue. Let $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, n-2\}+\sum_{\ell=1}^k i_{\ell}^*$. Then $0< N^*< N$. Observe that $|B|\geq3+N^*$. By minimality of $N$, $G[B]$ has a red $G_{i_r^*}$ with vertices, say $x_1, \ldots, x_q$, in order, where $q=2i_r^*+3$. If $R$ is blue-complete to $\{x_1, x_q\}$, then $R$ is red-complete to $B \backslash \{x_1, x_q\}$ because $G[B \cup R]$ has no blue $P_5$ with both ends in $B$. But then $G[A_1 \cup R \cup \{x_2, \ldots, x_{q-1}\}]$ has a red $G_{i_r}$, a contradiction. Thus $R$ is not blue-complete to $\{x_1, x_q\}$, and so we may assume $y_1x_1$ is colored red. Then $i_r=n-1$ and $R\backslash\{y_1\}$ is blue-complete to $ \{x_{q-2}, x_q\}$, else $G[A_1 \cup R \cup \{x_1, \ldots, x_q\}]$ has a red $G_{i_r}$. So $R\backslash\{y_1\}$ is red-complete to $B \backslash \{x_{q-2}, x_q\}$ because $G[B \cup R]$ has no blue $P_5$ with both ends in $B$. But then $G[A_1 \cup R \cup \{x_2, \ldots, x_{q-1}\}]$ has a red $G_{i_r}$, a contradiction. \vrule height3pt width6pt depth2pt \noindent {\bf Claim\refstepcounter{counter}\label{e:ibn} \arabic{counter}.} $i_b = n-1$. \noindent {\bf{Proof.}}~~ Suppose $i_b \le n-2$. Then $i_r=5$. By Claim \ref{e:A1} and Claim \ref{e:A14}, $|A_1| \ge 3$ and $i_b >|A_1|$; it follows that $n=6$, $i_b=4$ and $|A_1|=3$. By Claim \ref{e:A2A3}, $|A_2|=3$, $|A_3| \le 2$, $i_j=0$ for all color $j \in [k] \backslash [3]$. By Claim \ref{n6R}, $|R| \le 2$ and so $A_2 \subset B$. It follows that $|B|= 7+i_b+i_r+i_g-|A_1 \cup R| =14-|R| \ge 12$. Then $G[B \cup R]$ has no blue $P_5$ with both ends in $B$, else $G$ has a blue $P_{11}$ because $|A_1|=3$. Thus there exists a set $W$ such that $ (B \cup R) \backslash (A_2 \cup W)$ is red-complete to $A_2$, where $W \subset (B \cup R) \backslash A_2$ with $|W|\le 1$. Let $i_b^*:=i_b-|A_1| = 1$, $i_r^*:=2$, $i_j^*:=0$ for all $j \in [k]$ other than red and blue. Let $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, 4\}+\sum_{\ell=1}^k i_{\ell}^*=5$. Then $N^* < N$. Observe that $|B \backslash (A_2 \cup W)| = 11-|R|-|W| \ge 3+N^*$. By minimality of $N$, $G[B \backslash (A_2 \cup W)]$ must contain a red $G_{i_r^*}=P_7$. But then $G[(B \cup R) \backslash W]$ has a red $C_{12}$, a contradiction. Thus $i_b=n-1$. \vrule height3pt width6pt depth2pt \noindent {\bf Claim\refstepcounter{counter}\label{e:A2} \arabic{counter}.} $|A_1| = n-2$. \noindent {\bf{Proof.}}~~ By Claim~\ref{e:A14}, $|A_1| \le n-2$. Suppose $|A_1| \le n-3$.
By Claim~\ref{e:A1}, $n=6$ and $|A_1|=3$. By Claim~\ref{e:ibn}, $i_b =5$. By Claim \ref{e:A2A3}, $|A_2|=3$, $|A_3| \le 2$ and $i_j=0$ for all color $j \in [k]\backslash [3]$. By Claim \ref{irR}, $i_r \ge |R|$. Then $|B| = 7+i_b+i_r+i_g-|A_1|-|R|\ge 10$, and so $G[B \cup R]$ has neither blue $P_7$ nor blue $P_5 \cup P_3$ with all ends in $B$, else we obtain a blue $C_{12}$. Suppose $|R| \le 2$. Then $A_2 \subset B$ and there exists a set $W \subset (B \cup R) \backslash A_2$ with $|W| \le 3$ such that $W$ is blue-complete to $A_2$ and $(B \cup R) \backslash (A_2 \cup W)$ is red-complete to $A_2$. Since $|B \backslash (A_2 \cup W)| \ge 4$, we see that there is a red $P_7$ using edges between $A_2$ and $B \backslash (A_2 \cup W)$, so $i_r \ge 3$ and $i_r-|R| \ge 1$. Let $i_b^*:=2$ (when $|B \cap W| \le 1$) or $i_b^*:=0$ (when $|B \cap W| \ge 2$), $i_r^*:= \min\{i_r-|R|-1,2\}$, $i_j^*:=0$ for all color $j \in [k]$ other than red and blue, and $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, 4\}+\sum_{\ell=1}^k i_{\ell}^*=\max\{i_b^*, i_r^*\}+i_b^*+i_r^*$. Observe that $|B \backslash (A_2 \cup W)|=7+i_r-|R \cup W|\ge 3+N^*$. By minimality of $N$, $G[B \backslash (A_2 \cup W)]$ has a red $G_{i_r^*}$ because $G[B]$ has neither blue $P_7$ nor blue $P_5 \cup P_3$ and $|A_3| \le 2$. But then $G[(B\cup R) \backslash W]$ has a red $G_{i_r}$ because $|(B\cup R) \backslash W| \ge 7+i_r \ge |G_{i_r}|$ and $A_2$ is red-complete to $(B \cup R) \backslash (A_2 \cup W)$, a contradiction. Therefore, $3 \le |R| \le 5$ and so $i_r \ge 3$. We claim that $i_r=5$. Suppose $3 \le i_r \le 4$. Let $i_b^*:=2$, $i_r^*:=2$, $i_j^*:=i_j$ for all color $j \in [k]$ other than red and blue, and $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, 4\}+\sum_{\ell=1}^k i_{\ell}^*=7$. Observe that $|B| \ge 10=3+N^*$. Since $G[B]$ has no blue $P_7$, by minimality of $N$, $G[B]$ has a red $P_7$ with vertices, say $x_1, \ldots, x_7$, in order. Then $R$ is blue-complete to $\{x_1, \ldots, x_7\} \backslash x_4$, else $G[A_1 \cup R \cup \{x_1, \ldots, x_7\}]$ has a red $G_{i_r}$. But then $G[B \cup R]$ has a blue $P_7$ with vertices $x_1, y_1, x_2, y_2, x_3, y_3, x_5$ in order, a contradiction. Thus $i_r=5$ and so $|G|=18$, $|B|=15-|R|$. Suppose that $|R|=3$. First suppose $A_2 \subseteq R$. Since $R$ is not red-complete to $B$, we may assume that $A_2$ is blue-complete to $x_1$. Let $i_b^*:=2$, $i_r^*:=3$, $i_j^*:=0$ for all color $j \in [k]$ other than red and blue, and $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, 4\}+\sum_{\ell=1}^k i_{\ell}^*=8$. Observe that $|B \backslash x_1|=11=3+N^*$. By minimality of $N$, $G[B \backslash x_1]$ has a red $P_9$ with vertices, say $x_2, \ldots, x_{10}$, in order. We claim that $A_2$ is blue-complete to $\{x_2, x_{10}\}$. Suppose not, say $x_2$ is red-complete to $A_2$. Then $A_2$ is blue-complete to $\{x_8, x_{10}\}$, else $G[A_1 \cup A_2 \cup \{x_2, \ldots, x_{10}\}]$ has a red $C_{12}$. Thus $A_2$ is red-complete to $B \backslash \{x_1, x_8, x_{10}\}$ because $G[B \cup R]$ has no blue $P_7$ with both ends in $B$. But then we obtain a red $C_{12}$ with vertices $a_1, a_2, x_3, \ldots, x_9, b_2, b_1, c_2$ in order, a contradiction. Thus, $A_2$ is blue-complete to $\{x_1, x_2, x_{10}\}$, and so $A_2$ is red-complete to $B \backslash \{x_1, x_2, x_{10}\}$ because $G[B \cup R]$ has no blue $P_7$ with both ends in $B$. But then we obtain a red $C_{12}$ with vertices $a_1, a_2, x_3, \ldots, x_9, b_2, b_1, c_2$ in order, a contradiction. This proves that $A_2 \subset B$.
Then there exists a set $W \subset (B \cup R) \backslash A_2$ with $|W \cap B| \le 3$ such that $W$ is blue-complete to $A_2$ and $(B \cup R) \backslash (A_2 \cup W)$ is red-complete to $A_2$. Then $|W| \le 3$ and $|W \cap B| \le 3$ or $|W| = 4$ and $|W \cap B| = 1$ because $G[B \cup R]$ has no blue $P_7$ with both ends in $B$. Let \begin{align*} & i_b^*:=2-|W|,\ i_r^*:=2 \,\ \text{when}\ |W| \in \{0, 1\} \\ & i_b^*:=0, \ i_r^*:=2 \,\ \text{when}\ |W| \ge 2 \ \text{and }\ |W \cap B| \le 2 \\ & i_b^*:=0, \ i_r^*:=1 \,\ \text{when}\ |W|=|W \cap B| =3, \end{align*} $i_j^*:=0$ for all color $j \in [k]$ other than red and blue, and $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, 4\}+\sum_{\ell=1}^k i_{\ell}^*=2i_r^*+i_b^*$. Observe that $|B \backslash (A_2 \cup W)| \ge 3+N^*$. By minimality of $N$, $G[B \backslash (A_2 \cup W)]$ has a red $G_{i_r^*}$ because $G[B \cup R]$ has neither blue $P_7$ nor blue $P_5 \cup P_3$ with all ends in $B$ and $|A_3| \le 2$. If $|W| \le 3$ and $|W \cap B| \le 2$, then $G[(B \cup R) \backslash W]$ has a red $C_{12}$ because $|(B \cup R) \backslash W|\geq12$ and $A_2$ is red-complete to $(B\cup R) \backslash (A_2 \cup W)$. Thus $|W|=|W \cap B|=3$ or $|W|=4$ and $|W \cap B|=1$. For the former case, $G[B \backslash (A_2 \cup W)]$ has a red $P_5$ with vertices, say $x_1, \ldots, x_5$, in order. Let $W:=\{w_1, w_2, w_3\} \subset B$. Then $A_2$ is blue-complete to $W$ and red-complete to $\{x_1, \ldots, x_5\}$, and so $W$ is red-complete to $\{x_1, \ldots, x_5\}$ because $G[B]$ has no blue $P_7$. But then we obtain a red $C_{12}$ with vertices $a_2, x_1, w_1, x_2, w_2, x_3, w_3, x_4, b_2, x_5, c_2, x_6$ in order, where $x_6 \in B \backslash (A_2 \cup W \cup \{x_1, \ldots, x_5\})$, a contradiction. For the latter case, $G[B \backslash (A_2 \cup W)]$ has a red $P_7$ with vertices, say $x_1, \ldots, x_7$, in order. Let $W \cap B:=\{w\}$. Then $w$ is red-complete to $\{x_1, \ldots, x_7\}$ because $G[B]$ has no blue $P_7$. But then we obtain a red $C_{12}$ with vertices $a_2, x_1, w, x_2, \ldots, x_6, b_2, x_7, c_2, x_8$ in order, where $x_8 \in B \backslash (A_2 \cup W \cup \{x_1, \ldots, x_7\})$, a contradiction. This proves that $|R| \in \{4, 5\}$. First we claim that $G[E(B, R)]$ has no blue $P_5$ with both ends in $B$. Suppose there is a blue $H:=P_5$ with vertices, say $x_1, y_1, x_2, y_2, x_3$, in order. Then $G[(B \cup R) \backslash V(H)]$ has no blue $P_3$ with both ends in $B$. Let $i_b^*:=0$, $i_r^*:=i_r-|R|+1$, $i_j^*:=i_j$ for all color $j \in [k]$ other than red and blue, and $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, 4\}+\sum_{\ell=1}^k i_{\ell}^*=3+2(i_r-|R|)$. Observe that $|B \backslash \{x_1, x_2, x_3\}| = 7+i_r-|R| \ge 3+N^*$ since $|R| \in \{4, 5\}$. By minimality of $N$, $G[B \backslash \{x_1, x_2, x_3\}]$ has a red $G_{i_r^*}$ with vertices, say $x_4, \ldots, x_{q}$, in order, where $q=2i_r^*+6$. Then $y_3$ is not blue-complete to $\{x_4, x_q\}$ because $G[(B \cup R) \backslash V(H)]$ has no blue $P_3$ with both ends in $B$. We may assume $x_4y_3$ is colored red. Then $R \backslash \{y_1,y_2,y_3\}$ is blue-complete to $x_8$, else we obtain a red $C_{12}$ with vertices $a_1, y_3, x_4, \ldots, x_8, y_4, b_1, y_1, c_1, y_2$ in order, a contradiction. Since $G[(B \cup R) \backslash V(H)]$ has no blue $P_3$ with both ends in $B$, we see that $R \backslash \{y_1,y_2,y_3\}$ is red-complete to $\{x_4, \ldots, x_q\} \backslash \{x_8\}$. 
But then we obtain a red $C_{12}$ with vertices $a_1, y_3, x_4, \ldots, x_{10}, y_4, b_1, y_1$ (when $|R|=4$), or $a_1, y_3, x_4, x_5, x_6, y_4, x_7, y_5, b_1, y_1, c_1, y_2$ (when $|R|=5$) in order, a contradiction. Thus, $G[E(B, R)]$ has no blue $P_5$ with both ends in $B$. Let $i_b^*:=2$, $i_r^*:=2$, $i_j^*:=i_j$ for all color $j \in [k]$ other than red and blue, and $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, 4\}+\sum_{\ell=1}^k i_{\ell}^*=7$. Observe that $|B| \ge 10=3+N^*$. By minimality of $N$, $G[B]$ has a red $P_7$ with vertices, say $x_1, \ldots, x_7$, in order. We claim that $x_1$ is blue-complete to $R$. Suppose $x_1y_1$ is colored red. Then $R \backslash y_1$ is blue-complete to $\{x_5, x_7\}$, else $G[A_1 \cup R \cup \{x_1, \ldots, x_7\}]$ has a red $C_{12}$. Thus $R \backslash y_1$ is red-complete to $B \backslash \{x_5, x_7\}$ because $G[E(B, R)]$ has no blue $P_5$ with both ends in $B$. But then we obtain a red $C_{12}$ with vertices $a_1, y_2, x_2, \ldots, x_6, y_3, b_1, y_4, c_1, y_1$ in order, a contradiction. Therefore, $x_1$ is blue-complete to $R$. By symmetry, $x_7$ is blue-complete to $R$. Then $R$ is red-complete to $B \backslash \{x_1, x_7\}$ because $G[E(B, R)]$ has no blue $P_5$ with both ends in $B$. But then we obtain a red $C_{12}$ with vertices $a_1, y_2, x_2, \ldots, x_6, y_3, b_1, y_4, c_1, y_1$ in order, a contradiction. This proves that $|A_1| = n-2$. \vrule height3pt width6pt depth2pt By Claim~\ref{e:ibn}, Claim \ref{e:A2} and Claim \ref{irR}, $i_b=n-1$, $|A_1|=n-2$, $ i_r\ge |R|$. By Claim \ref{n6R}, $|R|\le 2$. Then $|B|\ge 3+n+i_r-|R|\ge 3+n$, and so $G[B\cup R]$ has no blue $P_5$ with both ends in $B$. \noindent {\bf Claim\refstepcounter{counter}\label{e:ir=4} \arabic{counter}.} $i_r=n-1$. \noindent {\bf{Proof.}}~~ Suppose $i_r\le n-2$. By Claim~\ref{e:R}, $B$ is not blue-complete to $R$. Let $x\in B$ and $y\in R$ such that $xy$ is colored red. Let $i_b^*:=i_b -|A_1|=1$ and $i_r^*:=i_r-|R|\le n-3$, $i^*_j:=i_j\le n-4$ for all color $j\in [k]$ other than red and blue. Let $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, n-2\}+\sum_{\ell=1}^k i^*_\ell$. Then $0< N^*<N$ and $|B\backslash x| =3+N-|A_1|-|R|-1\ge 3+N^*$. By minimality of $N$, $G[B \backslash x]$ must have a red $P_{2i^*_r+3}$ with vertices, say $x_1, x_2, \ldots, x_{2i^*_r+3}$, in order. Then $\{x_1, x_{2i^*_r+3}\}$ must be blue-complete to $\{x,y\}$ and $xx_2$ must be colored blue under $c$, else we obtain a red $P_{2i_r+3}$ using vertices in $V(P_{2i^*_r+3})\cup\{x,y\}$ or in $V(P_{2i^*_r+3}\backslash x_1)\cup\{x,y\}\cup A_1$. But then $G[B\cup R]$ has a blue $P_5$ with vertices $x_2, x, x_1, y, x_{2i^*_r+3}$ in order, a contradiction. \vrule height3pt width6pt depth2pt Let $A_1:=\{a_1, b_1, c_1\}$ (when $n=5$) or $A_1:=\{a_1, b_1, c_1, z_1\}$ (when $n=6$). By Claim~\ref{e:ig}, $G[A_1]$ has a monochromatic, say green, copy of $P_3$. By Claim~\ref{e:ij}, $i_g=1$. We next show that $|A_2| \ge 3$. Suppose $|A_2| \le 2$. Then by Claim~\ref{e:A2A3}, $|A_1|=4$ and so $n=6$. Let $i_b^*:=i_b-|A_1|$, $i_r^*:= i_r-|R|+1$, $i_g^*:=i_g-1=0$ and $i^*_j:=i_j$ for all $j\in [k]$ other than red, blue and green. Let $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \}, 4\}+\sum_{\ell=1}^k i^*_\ell$. Then $ 0< N^*<N$ and $|B| =|G|-|A_1|-|R| = 3+N^*$. By minimality of $N$, $G[B]$ must contain a red $G_{i^*_r}$. It follows that $|R|=2$ and $G_{i^*_r}= P_{11}$. Let $x_1, x_2, \ldots, x_{11}$ be the vertices of the red $P_{11}$ in order. 
If $R$ is blue-complete to $\{x_1, x_{11}\}$, then $R$ is red-complete to $B \backslash \{x_1, x_{11}\}$ because $G[B\cup R]$ has no blue $P_5$ with both ends in $B$. But then $G$ has a red $C_{12}$ with vertices $a_1, y_1, x_2, \ldots, x_{10}, y_2$ in order, a contradiction. Thus, $R$ is not blue-complete to $\{x_1, x_{11}\}$ and we may assume $x_1y_1$ is colored red. Then $x_{11}y_1$ and $x_9y_2$ are colored blue, else $G[\{x_1, \ldots, x_{11}\} \cup R \cup A_1]$ has a red $C_{12}$. If $x_{11}y_2$ is colored red, then $x_1y_2$ and $x_3y_1$ are colored blue by the same reasoning. But then we obtain a blue $C_{12}$ with vertices $a_1, x_1, y_2, x_9, b_1, x_3, y_1, x_{11}, c_1, x_2, z_1, x_4$ in order, a contradiction. Thus $x_{11}y_2$ is colored blue. Then $y_1$ is red-complete to $B \backslash \{x_9, x_{11}\}$, else, say $y_1w$ is colored blue with $w \in B \backslash \{x_9, x_{11}\}$, then $G[B \cup R]$ has a blue $P_5$ with vertices $w, y_1, x_{11}, y_2, x_9$ in order. It follows that $\{x_{11}, w\} \nsubseteq A_j$ for all $j \in [q]$, where $w \in B \backslash \{x_9, x_{11}\}$. Moreover, $x_2y_2$ is colored blue, else $G$ has a red $C_{12}$ with vertices $a_1, y_2, x_2, \ldots, x_{10}, y_1$ in order, a contradiction. Thus, $G[B \backslash \{x_2, x_9\}]$ has no blue $P_3$, else $G[A_1 \cup B \cup \{y_2\}]$ has a blue $C_{12}$. Therefore, $x_ix_{11}$ is colored red for some $i \in \{3, \ldots, 7\}$. But then we obtain a red $C_{12}$ with vertices $y_1, x_1, \ldots, x_i, x_{11}, x_{10}, \ldots, x_{i+1}$ in order, a contradiction. Thus $3 \le |A_2| \le n-2$ and $A_2 \subset B$ because $|R| \le 2$. Since $G[B\cup R]$ has no blue $P_5$ with both ends in $B$, there exists at most one vertex, say $w \in B\cup R$, such that $(B\cup R)\backslash (A_2 \cup\{w\})$ is red-complete to $A_2$. Suppose $3 \le |A_3| \le n-2$. Then $n=6$ by Claim~\ref{e:A2A3}, $A_3\subseteq B$ and $A_3$ must be red-complete to $A_2$. Since $G[B\cup R]$ has no blue $P_5$ with both ends in $B$, there exists at most one vertex, say $ w'\in B\cup R$, such that $(B\cup R)\backslash (A_3 \cup\{w'\})$ is red-complete to $A_3$. But then $G[(B \cup R)\backslash \{w,w'\} ]$ has a red $C_{12}$, a contradiction. Thus $|A_3|\le2$ and so $G[B\backslash A_2]$ has no monochromatic copy of $P_3$ in color $j$ for all $j\in[k]$ other than red and blue. Let $i_b^*:=1$, $i_r^*:=n-1-|A_2|$, and $i^*_j:=0$ for all colors $j\in [k]$ other than red and blue. Let $N^*:=\min\{\max \{i^*_\ell: {\ell \in [k]} \},n-2\}+\sum_{\ell=1}^k i^*_\ell=2i_r^*+1=2n-1-2|A_2|$. Then $0< N^*<N$ and $|B \backslash (A_2\cup \{w\})| \ge 2n+1-|R|- |A_2|\ge 3+N^*$. By minimality of $N$, $G[B \backslash (A_2\cup \{w\})]$ has a red $G_{i_r^*}$. But then $G[(B \cup R)\backslash \{w\} ]$ has a red $C_{2n}$, a contradiction. This completes the proof of Theorem~\ref{main}. \vrule height3pt width6pt depth2pt \end{document}
\begin{document} \numberwithin{equation}{section}
\title[Estimates for the ergodic measure related to SCSF]{Estimates for the ergodic measure and polynomial stability of Plane Stochastic Curve Shortening Flow}
\author[A. Es--Sarhir]{Abdelhadi Es--Sarhir} \author[M-K. von Renesse]{Max-K. von Renesse} \author[W. Stannat]{Wilhelm Stannat}
\address{Technische Universit\"at Berlin, Institut f\"ur Mathematik \newline Stra{\ss}e des 17. Juni 136, D-10623 Berlin, Germany}
\address{Technische Universit\"at Darmstadt, Fachbereich Mathematik, \newline Schlo\ss gartenstra\ss e 7, D-64289 Darmstadt, Germany}
\email{[email protected]} \email{[essarhir,mrenesse]@math.tu-berlin.de}
\thanks{The first two authors acknowledge support from the DFG Forschergruppe 718 "Analysis and Stochastics in Complex Physical Systems".}
\keywords{Degenerate stochastic equations, invariant measures, moment estimates.} \subjclass[2000]{47D07, 60H15, 35R60}
\begin{abstract} We establish moment estimates for the invariant measure $\mu$ of a stochastic partial differential equation describing motion by mean curvature flow in (1+1) dimension, leading to polynomial stability of the associated Markov semigroup. We also prove maximal dissipativity on $L^1(\mu)$ for the related Kolmogorov operator. \end{abstract} \maketitle
\section{Introduction and preliminaries} We study the invariant measure $\mu$ on $L^2(0,1)$ and the stability of the following SPDE for a function $u(t)\in L^2(0,1)$ introduced in \cite{Es-Re}, describing curve shortening flow in (1+1)D driven by additive noise \begin{equation} \label{sde0} du(t)= (\arctan u_x(t))_x dt + {\mathbf{\sigma}} dW_t,\quad t\geq 0. \end{equation} Here $W$ is cylindrical white noise on a separable Hilbert space $U$ defined on a filtered probability space $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge 0},\mathbb{P})$ and ${\mathbf{\sigma}}$ is a Hilbert-Schmidt operator from $U$ to the Sobolev space $H^{1}_0(0,1)$. Existence of a unique generalized Markov solution of \eqref{sde0} and its ergodicity were shown in \cite{Es-Re}, working in the variational SPDE framework of Pardoux resp.\ Krylov-Rozovski\u\i. However, certain modifications of standard arguments apply since in contrast to previous works (like e.g.\ \cite{BD}) on variational SPDE the drift operator in \eqref{sde0} is neither coercive nor strongly dissipative. As a consequence exponential stability of the semigroup cannot be expected here, and it is our main goal to establish polynomial stability instead (see Corollary \ref{regularite Holderienne} below). To this aim we derive moment estimates for the invariant measure of \eqref{sde0} which become crucial for the control of the contraction by the drift of \eqref{sde0} along the flow.
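Note that, by the chain rule, the drift in \eqref{sde0} can be written as
$$ (\arctan u_x)_x=\frac{u_{xx}}{1+u_x^2}, $$
which is the operator appearing in the Kolmogorov operator \eqref{defjo} below; in other words, \eqref{sde0} is the graph formulation of curve shortening flow, $u_t=u_{xx}/(1+u_x^2)$, perturbed by additive noise.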
\\ As a second application we establish the maximal dissipativity of the Kolmogorov operator $J_0$ associated to \eqref{sde0}, acting on smooth test functions $\varphi: L^2(0,1) \mapsto \mathbb R$ by \begin{equation} \label{defjo} J_0\varphi(u)=\frac 12 \Tr {Q}D^2\varphi(u)+\left \langle \frac{ u_{xx}}{1+{u_x^2}},D\varphi(u)\right\rangle, \quad u\in D_0, \end{equation} with the covariance operator $Q ={\mathbf{\sigma}} {\mathbf{\sigma}}^* $ on $L^2(0,1)$ and \begin{equation} \label{defd0} D_0:= \bigl \{u\in W^{1,1}_{loc}(0,1)\,|\, (\arctan (u_x))_x\in L^2(0,1)\bigr\}. \end{equation} In contrast to the variational approach, here we shall work with a realization of the drift as a maximally monotone operator on $L^2(0,1)$ given by a subgradient $V=\partial \Phi $ of a convex l.s.c.\ functional $\Phi$ on $L^2(0,1)$, using results of Andreu et al.\ \cite{ACM} for variational PDE of linear growth functionals. Combining this with the moment estimates we prove that the operator $J_0$ defined on the domain $D(J_0)= C^2_b(H)\subset L^1(H,\mu)$ with $H=L^2(0,1)$ is closable on $L^1(H,\mu)$ and its closure generates a strongly continuous Markov semigroup on $L^1(H,\mu)$ (cf.\ \cite{St:99} for related results).
\section{Moment estimates for the invariant measure} In the sequel we denote by $(e_k)_{k\geq 0}$ the system of eigenfunctions corresponding to the Laplace operator $\Delta$ on $(0,1)$ with Dirichlet boundary condition. For $n\geq 1$ we set $H_n:= {\rm span}\{e_1,\cdots, e_n\}$ and $E:= H_0^1(0,1)$, and hence $E^{\ast}=H^{-1}(0,1)$. Recall also that $u\in L^1_{loc}(0,1)$ belongs to the space $BV$ of bounded variation functions if $$ \bigl[Du\bigr]:=\sup\Bigl\{\int_{[0,1]}uv_x\:dx:\:\:v\in C_0^{\infty}(0,1),\:\|v\|_{\infty}\leq 1\Bigr\}<+\infty. $$ The main result of this section reads as follows.
\begin{theorem} \label{momentthm} The measure $\mu$ is concentrated on the subset $D_0 \cap \{ u \in L^2 (0,1)\,| \,u_x \in BV(0,1)\}$ and $$ \int\bigl[Du_x\bigr] ^{\frac 12}\:\mu(du) + \int \|u\|_{E}^{\frac 1 2 }\:\mu(du)+\int \| (\arctan u_x)_x\|_{L^2(0,1)}^2\:\mu(du)<+\infty. $$ \end{theorem}
\begin{proof} Introducing the operator $A:\:E\rightarrow E^{\ast}$, $$ \langle Au,v\rangle=-\int_0^1 \arctan u_x(z)\cdot v_x(z)\:dz,\quad u,\:v\in E, $$ we may write \eqref{sde0} as a variational SPDE in the Gelfand triple $E \subset H \subset E^*$ as \begin{equation*} du(t)=A u(t)dt + {\mathbf{\sigma}} dW_t,\quad t\geq 0. \end{equation*} \noindent Below we write ${_{E^*}\langle} .,. \rangle_E$ for the duality in $E^*\times E$, whereas $\langle .,.\rangle_E$ denotes the inner product in $E$, i.e.\ ${_{E^*}\langle} \xi,\zeta \rangle_E = \langle \xi, \zeta\rangle_{L^2(0,1)}$ and $ \langle \xi,\zeta \rangle_E = \langle \xi_x, \zeta_x\rangle_{L^2(0,1)}$ for $\xi, \zeta \in C^\infty_c(0,1)$.
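\noindent Before passing to the Galerkin approximation let us record that $A$ is in fact bounded as a map from $E$ to $E^{\ast}$: since $|\arctan s|\le \frac{\pi}{2}$ for all $s\in\mathbb R$, the Cauchy-Schwarz inequality on $(0,1)$ gives, for $u, v\in E$,
$$ \bigl|{_{E^*}\langle} Au,v\rangle_E\bigr|\le \frac{\pi}{2}\,\|v_x\|_{L^1(0,1)}\le \frac{\pi}{2}\,\|v\|_E, $$
hence $\|Au\|_{E^{\ast}}\le \frac{\pi}{2}$ uniformly in $u\in E$. In particular property (H4) below holds with $c_2=\frac{\pi}{2}$, in accordance with the remark in the introduction that the drift is neither coercive nor strongly dissipative.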
\\ It is not difficult to see that the operator $A$ satisfies the following properties. \begin{itemize} \item[(H1)] For all $u,\:v,\:x \in E$ the map \[ \mathbb R \ni \lambda \to {_{E^*}\langle} A(u+\lambda v ), x\rangle_E\] is continuous. \item [(H2)] (Monotonicity) For all $u$, $v\in E$ \begin{equation*} \ _{E^*} \langle Au-Av,u-v\rangle_E \leq 0. \end{equation*} \item [(H3)] For $n \in \mathbb N$, the operator $A$ maps $H^n:=\mathop{\rm span}\{e_1, \dots, e_n\}\subset E$ into $E$ and there exists a constant $c_1 \in \mathbb R$ such that \begin{equation*} \langle Au,u\rangle_E + \| {\mathbf{\sigma}} \|_{L_2(U,E)}^2 \leq c_1(1+ \| u \|_E^2 )\quad \forall u \in H^n, \, n\in \mathbb N. \end{equation*} \item [(H4)] There exists a constant $c_2 \in \mathbb R$ such that \[ \|A(u) \|_{E^*} \leq c_2(1+\| u\|_E).\] \end{itemize}
Define ${\mathcal P}_n:\:E^{\ast}\rightarrow H_n$ by $$ {\mathcal P}_n y:= \sum\limits_{i=1}^n\:_{E^{\ast}}\langle y,e_i\rangle_{E}e_i,\quad y\in E^{\ast}. $$ \noindent Then ${\mathcal P}_n|_H$ is just the orthogonal projection onto $H_n$ in $H$. Define the family of $n$-dimensional Brownian motions in $U$ by $$ W^n_t:= \sum\limits_{i=1}^n\langle W_t,f_i\rangle_{U} f_i=\sum\limits_{i=1}^n B^i(t)f_i, $$ \noindent where $(f_i)_{i\geq 1}$ is an orthonormal basis of the Hilbert space $U$. The $n$-dimensional SDE in $H$ \begin{equation}\label{n-sde} \left\{ \begin{array}{ll} du^n(t)={\mathcal P}_n A u^n(t)dt+{\mathcal P}_n {\mathbf{\sigma}} dW^n_t \\ u^n(0,x)={\mathcal P}_n u_0(x) \end{array} \right. \end{equation} may be identified with a corresponding SDE $dx(t)= b ^n (x(t)) dt + {\mathbf{\sigma}} ^n(x(t))d B^n_t$ in $\mathbb R^n$ via the isometric map $\mathbb R^n \to H^n$, $x \mapsto \sum _{i=1}^n x_i e_i$. By \cite[remark 4.1.2]{Ro} conditions (H1) and (H2) imply the continuity of the fields $x\to b^n(x)\in \mathbb R^n$. Moreover, assumption (H2) implies \[ \langle b^n (x) -b^n(y), x-y\rangle_{\mathbb R^n}\leq c |x-y|^2, \quad \forall x,y \in \mathbb R^n \] and, by the equivalence of norms on $\mathbb R^n$, (H3) gives the bound \[ \langle b^n(x),x\rangle + \| {\mathbf{\sigma}} ^n\|_{L_2(\mathbb R^n,\mathbb R^n)} \leq c (1+|x|^2), \] for some $c>0$. Hence, equation \eqref{n-sde} is a weakly monotone and coercive equation in $\mathbb R^n$ which has a unique globally defined solution, cf.\ \cite[chapter 3]{Ro}. It is proved in \cite{Es-Re} that for initial datum $u_0\in E$, the process $(u^n(t))_{t\geq 0}$ converges $dt$-a.e.
in $H$ to a process $(u(t))_{t\geq 0}$.\\ As in \cite{Es-Re} we apply the It\^{o} formula in finite dimensions to derive, for $t\mapsto \|u^n(t)\|^2_E$, \begin{equation*} \begin{split} \|u^n(t)\|^2_E&=\|u_0^n\|_E^2+2\int_0^t\langle {\mathcal P}_n A(u^n(s)),u^n(s)\rangle_{E}\:ds+\int_0^t\|{\mathcal P}_n {\mathbf{\sigma}} \|^2_{L_2(U_n,E)}\:ds\\&~~~+ M^n(t),\quad t\in[0,T], \end{split} \end{equation*} where \begin{equation*} M^n(t):= 2\int_0^t\langle u^n(s),{\mathcal P}_n {\mathbf{\sigma}} \:dW^n_s\rangle_E \end{equation*} \noindent and $$ \langle {\mathcal P}_n A(u^n(s)),u^n(s)\rangle_{E}=-\int_{(0,1)}\frac{(u^n_{xx})^2}{1+(u_x^n)^2} dx. $$ Taking expectation together with $\|{\mathcal P}_n u_0\|_E\leq \|u_0\|_E$ this entails \begin{equation}\label{Bound1} \frac{1}{t}\mathbb E\int_0^t\int_{(0,1)}\frac{(u_{xx}^n(s))^2}{1+(u_x^n(s))^2}dx\:ds<C_1 \end{equation} for some positive constant $C_1$ independent of $n$ and $t$. On the other hand, the It\^o formula for $\|u^n(t)\|^2_H$ reads \begin{equation} \begin{split} \|u^n(t)\|_H^2&=\|u_0^n\|_H^2+2\int_0^t\langle {\mathcal P}_n A(u^n(s)),u^n(s)\rangle_H\:ds+\int_0^t\|{\mathcal P}_n {\mathbf{\sigma}} \|^2_{L_2(U_n,H)}\:ds\\&~~~+ N ^n(t),\quad t\in[0,T], \end{split} \label{itoh} \end{equation} with \begin{equation*} N^n(t):= 2\int_0^t\langle u^n(s),{\mathcal P}_n {\mathbf{\sigma}} \:dW^n_s\rangle_H \end{equation*} \noindent and $$ \langle {\mathcal P}_n A(u^n(s)),u^n(s)\rangle_{H}=\int_{(0,1)} \frac{u^n_{xx}}{1+(u_x^n)^2}\, u^n \:dx = - \int_{(0,1)} u^n_x \cdot \arctan(u^n_{x}) \:dx. $$ Dividing by $t$ and taking expectation in \eqref{itoh}, and using $\arctan (s) \cdot s \geq |s| - K$ for some $K >0$, yields \begin{equation} \frac 1 t \mathbb E \int_0^t \int_{(0,1)}|u^n_x(s)|\:dx \:ds \leq C_2 \label{hnorbd} \end{equation} for some $C_2>0$. In particular, by the compactness of the embedding $W^{1,1}_0(0,1) \subset L^2(0,1)$, for each $n \in \mathbb N$ the family of measures $\nu(n,t)(du):= \frac{1}{t}\int_0^t\mathbb P(u^n(s) \in du)\:ds$, $t \geq 0$, is tight on $L^2(0,1)$. By analogous arguments as in \cite{Es-Re}, ergodicity of the Markov semigroup $(P_t^n)_{t \geq 0}$ on $L^2(0,1)$ associated to $(u_t^n)_{t\geq 0}$ holds. Denoting by $\nu_n$ the corresponding invariant distribution on $L^2(0,1)$, we may thus infer from \eqref{hnorbd}, \eqref{Bound1} and Birkhoff's ergodic theorem that for arbitrary $L>0$ \begin{equation*} \label{sqbd} \int \Bigl(\int_{(0,1)}|u_x|\:dx\wedge L \Bigr) \:\nu_n(du) + \int \Bigl(\int_{(0,1)}\frac{{u_{xx}^2}}{1+{u_x^2}}dx \wedge L\Bigr) \:\nu_n(du)<C \end{equation*} where $C=C_1+C_2$. Letting $L$ tend to infinity, by Fatou's lemma we obtain \begin{equation}\label{n-borne} \sup\limits_{n\geq 1} \int \int_{(0,1)} |u_x|\:dx\:\nu_n(du) + \sup\limits_{n\geq 1} \int \int_{(0,1)} \frac{{u_{xx}^2}}{1+{u_x^2}}\:dx\:\nu_n(du)<+\infty.
\end{equation} Since \begin{equation*} \|(\arctan u_x)_x\|_{L^2(0,1)}^2 = \int_{(0,1)} \frac{u_{xx}^2}{(1+u_x^2)^2}dx \leq \int_{(0,1)} \frac{u_{xx}^2}{1+u_x^2}dx \end{equation*} this implies \begin{equation} \label{estimate-arctan} \sup\limits_{n\geq 1}\int_H \| (\arctan u_x )_x\|_{L^2(0,1)}^2\:\nu_n(du)<+\infty. \end{equation} \noindent Again, due to the compactness of $W^{1,1}_0(0,1) \subset H$ the bound \eqref{n-borne} implies that the sequence $(\nu_n)_{n\geq 1}$ is tight w.r.t.\ the $H$-topology. This will now be amplified.
\begin{lemma} \label{tightness in E} For $u \in C_0^\infty(0,1)$ \[\left(\int_{(0,1)} |u_{xx}(x)|\:dx\right)^{\frac 12}\leq \frac 12 \int _{(0,1)} \frac {u_{xx}^2(x)}{1+u_x^2(x)}\:dx +\frac{3}{2} +\frac 12 \|u_x\|_{L^1(0,1)}. \] \end{lemma}
\begin{proof} Starting from \begin{equation*} \int_{(0,1)} |u_{xx}(x)|\:dx\leq \left(\int_{(0,1)} \frac {u^2_{xx}(x)}{1+u^2_x(x)}\:dx\right)^{\frac 12}\left(\int_{(0,1)} (1+u^2_x(x))\:dx\right)^{\frac 12}, \end{equation*} \noindent we get \begin{equation*} \begin{split} \left(\int_{(0,1)} |u_{xx}(x)|\:dx\right)^{\frac 12}&\leq \left(\int_{(0,1)} \frac {u^2_{xx}(x)}{1+u^2_x(x)}\:dx\right)^{\frac 14}\left(\int_{(0,1)} (1+u^2_x(x))\:dx\right)^{\frac 14}\\ &\leq \frac 14 \int _{(0,1)}\frac {u^2_{xx}(x)}{1+u^2_x(x)}\:dx+\frac 34 \left(\int_{(0,1)} (1+u^2_x(x))\:dx\right)^{\frac 13}. \end{split} \end{equation*} Combining this with \begin{align} \int_{(0,1)} (u_x(x))^2\:dx=-\int_{(0,1)} u_{xx}(x)u(x)\:dx&\leq \int_{(0,1)} |u_{xx}(x)|\:dx\cdot \|u\|_{\infty} \nonumber \\ &\leq \int_{(0,1)} |u_{xx}(x)|\:dx\cdot \|u_x\|_{L^1(0,1)} \label{tembed} \end{align} the claim is obtained using Young's inequality \begin{equation*} \begin{split} \left(\int_{(0,1)} |u_{xx}(x)|\:dx\right)^{\frac 12}&\leq \frac 14 \int_{(0,1)} \frac {u_{xx}^2(x)}{1+u_x^2(x)}\:dx+\frac{3}{4} +\frac{3}{4} \left(\int_{(0,1)} |u_{xx}(x)|\:dx\right)^{\frac 13}\cdot \|u_x\|^{\frac 13}_{L^1(0,1)}\\ &\leq \frac 14 \int _{(0,1)} \frac {u_{xx}^2(x)}{1+u_x^2(x)}\:dx +\frac{3}{4} +\frac 12 \left(\int_{(0,1)} |u_{xx}(x)|\:dx\right)^{\frac 12}+\frac 14 \|u_x\|_{L^1(0,1)}. \end{split} \end{equation*} \end{proof}
Combining \eqref{n-borne} with Lemma \ref{tightness in E} we obtain a uniform bound \begin{equation} \sup_n \int _H \Big(\int_{(0,1)}|u_{xx}(x)|\:dx\Big)^{\frac 12}\:\nu_n(du)<\infty. \label{unifintbd} \end{equation} Due to the compactness of the embedding $W^{2,1}(0,1)\hookrightarrow E$ this implies that the sequence of measures $(\nu_n)_{n\geq 1}$ is tight w.r.t.\ the $E$-topology. Let $\nu$ be the limit of a converging subsequence. \noindent From the weak convergence of $\nu_n$ to $\nu$ w.r.t.\ the $E$-topology and the fact that for $\zeta \in L^2(0,1)$ the function $u\rightarrow \langle \zeta , \arctan u_x\rangle_{L^2(0,1)}^2$ is bounded continuous on $E$ we have $$ \int_H \langle e_k,\arctan u_x\rangle^2\:\nu(du)=\lim\limits_{n\to +\infty}\int_H \langle e_k,\arctan u_x\rangle^2\:\nu_n(du).
$$ Hence for $m\geq 1$ \begin{equation*} \begin{split} \sum\limits_{k=1}^{m}\int_H (\pi k)^2\langle e_k,\arctan u_x\rangle^2\:\nu(du)&=\lim\limits_{n\to +\infty}\sum\limits_{k=1}^{m}\int_H (\pi k)^2\langle e_k,\arctan u_x\rangle^2\:\nu_n(du)\\ &\leq \lim\limits_{n\to +\infty}\int _H \sum\limits_{k=1}^{+\infty} (\pi k)^2\langle e_k,\arctan u_x\rangle^2\:\nu_n(du)\\ &\leq \lim\limits_{n\to +\infty}\int _H \| (\arctan u_x )_x\|_{L^2(0,1)}^2\:\nu_n(du)<+\infty, \end{split} \end{equation*} using \eqref{estimate-arctan} in the last step. Sending $m$ to infinity we arrive at \begin{equation*} \int _H \| (\arctan u_x )_x\|_{L^2(0,1)}^2\:\nu(du)=\sum\limits_{k=1}^{+\infty}\int_H (\pi k)^2\langle e_k,\arctan u_x\rangle^2\:\nu(du)<+\infty. \end{equation*} Moreover, due to the lower semicontinuity of $u \to [Du_x]$ w.r.t.\ the $E$-topology, \eqref{unifintbd} yields \[\int\bigl[Du_x\bigr] ^{\frac 12}\:\nu(du)<\infty.\] From this and the boundedness of the embedding $W_0^{2,1}(0,1)$ into $W_0^{1,2}(0,1)$ we finally obtain \[\int\|u\|_E^{\frac 1 2 } \:\nu(du)<\infty.\] \noindent It remains to show that the measures $\nu$ and $\mu$ coincide. Recall that for $T>0$ and regular initial condition $u_0 \in E$ the sequence of Galerkin approximations $u^n$ converges to $u$ in the space $L^2([0,T]\times \Omega, H)$, cf.\ \cite[Chap.~4]{Ro}. Hence, for all $t>0$, $\rho >0$, $x \in E$ and bounded Lipschitz functions $\varphi : H \mapsto \mathbb R$ \[ P_t^{n,\rho} \varphi(x) := \frac 1 \rho \int_t^{t+\rho} P_s^n \varphi (x) ds \longrightarrow P_t^{\rho} \varphi(x) := \frac 1 \rho \int_t^{t+\rho} P_s \varphi (x) ds.\] A straightforward application of It\^o's formula yields for all $n\in \mathbb N$ \[ \label{unifcont} |P_t^n \varphi (x) -P_t^n \varphi (y) | \leq Lip(\varphi) \, \|x-y\|_{H} \quad \forall x,y \in H.\] Hence the family of functions $(P_t^{n,\rho} \varphi)_{n\geq 0}$ is uniformly Lipschitz continuous on $H$, and for a given compact subset $K \subset H$ the Arzel\`a-Ascoli theorem guarantees the existence of a subsequence of $(P^{n, \rho}_t \varphi)_{n\geq 0}$ converging uniformly on $K$ to $P^{\rho}_t \varphi$. Moreover, by \eqref{unifintbd} and Chebyshev's inequality, for the collection of compact subsets $K_R = \{ u \in H\, |\, \|u \|_{E} \leq R\}\subset H$ we find \[ \lim_{R\to \infty} \sup_n \nu_n (H\setminus K_R) =0.\] These two facts allow us to select a further subsequence, still denoted by $n$, such that \[ \lim_{n} \int_H P_t^{n,\rho} \varphi(x) \nu_{n} (dx) = \int_H P_t^{\rho} \varphi(x) \nu (dx).\] Since $\nu_n$ is $P_t^n$-invariant the l.h.s.\ above equals \[ \lim_{n} \int_H \varphi(x) \nu_n (dx) = \int_H \varphi(x) \nu(dx), \] i.e.\ $\nu$ is $P^{\rho}_t$-invariant, hence also $P_t$-invariant by letting $\rho$ tend to zero. By the uniqueness of the invariant measure for the ergodic semigroup $(P_t)$ we conclude that $\nu=\mu$. \end{proof}
\section{Polynomial stability} \begin{theorem} \label{ctheorem} Let $(u_t)_{t\geq 0}$, $(v_t)_{t\geq 0}$ be two solutions of \eqref{sde0} with initial conditions $u_0$, $v_0\in E$.
Then we have for $\alpha\in (0,1]$ \begin{equation*} \|u_t-v_t\|_H^{2\alpha}\leq t^{-\alpha}\Big( 3^{\alpha}\Big(1+\frac{1}{t}\int_0^t \|u_s\|_E^{2\alpha}\:ds+\frac{1}{t}\int_0^t \|v_s\|_E^{2\alpha}\:ds\Big)\Big) \|u_0-v_0\|_H^{2\alpha}. \end{equation*} \end{theorem}
\begin{proof} For the proof of the theorem we need the following elementary assertion.
\begin{lemma} For $u$, $v\in E$ we have \begin{equation} \label{V-accroissement} _{E^*} \langle V(u)-V(v),u-v\rangle_E\leq -\frac{1}{\Big(1+\|u\|_E^2+\|v\|_E^2\Big)}\|u-v\|^2_H. \end{equation} \end{lemma}
\begin{proof} Let $\gamma(t):= v+t(u-v)$, $t\in [0,1]$. Then \begin{align} _{E^*} \langle V(u)-V(v),u-v\rangle_E&=-\int_{(0,1)} \bigl(\arctan u_x(r)-\arctan v_x(r)\bigr) \bigl(u_x(r)-v_x(r)\bigr)\:dr\nonumber \\&=-\int_{(0,1)}\int_{(0,1)} \frac{1}{1+\gamma_x^2(t,r)}(u_x(r)-v_x(r))^2\:dr\:dt. \label{stepone} \end{align} Note that for $h:= u-v$ we have $h(0)=0$ and hence for all $t\in [0,1]$ \begin{equation*} \begin{split} h^2(x)&=\left(\int_0^x h_x(r)\:dr\right)^2\leq \int_0^x\frac{h_x^2(r)}{1+\gamma^2_x(t,r)}\:dr\cdot\int_0^x (1+\gamma_x^2(t,r))\:dr\\ & \leq \int_0^x\frac{h_x^2(r)}{1+\gamma^2_x(t,r)}\:dr\cdot\int_0^x (1+u_x^2(r)+v_x^2(r))\:dr \end{split} \end{equation*} which in view of \eqref{stepone} yields the claim after integration w.r.t.\ $x$ and $t$. \end{proof}
\noindent For $u_0$, $v_0\in E$ let $(u_t)_{t\geq 0}$, $(v_t)_{t\geq 0}$ be the strong solutions of \eqref{sde0} starting from $u_0$ resp.\ $v_0$. Then \begin{equation*} \frac{1}{2}\frac{d}{dt}\|u_t-v_t\|_H^2={_{E^*}\langle} V(u_t)-V(v_t),u_t-v_t\rangle_E\leq -\frac{\|u_t-v_t\|_H^2}{1+ \|u_t\|_E^2+\|v_t\|_E^2 }. \end{equation*} \noindent In particular $t\mapsto \|u_t-v_t\|_H^2 $ is decreasing and thus \begin{align*} \|u_0-v_0\|_H^2&\geq \|u_t-v_t\|_H^2+\int_0^t \frac{2\|u_s-v_s\|_H^2}{1+\|u_s\|_E^2+\|v_s\|_E^2}\:ds \nonumber \\ &\geq \|u_t-v_t\|_H^2\left(1+\int_0^t \frac{2}{1+\|u_s\|_E^2+\|v_s\|_E^2}\:ds\right). \end{align*} \noindent Since for any $\alpha\in (0,1]$ by Jensen's inequality \begin{equation*} \begin{split} \left(1+t^{\alpha-1}\int_0^t\frac{2^{\alpha}}{\Big(1+\|u_s\|_E^2+\|v_s\|_E^2\Big)^{\alpha}}\:ds\right) &\leq 2^{1-\alpha} \left(1+\int_0^t\frac{2}{1+\|u_s\|_E^2+\|v_s\|_E^2}\:ds\right)^{\alpha} \end{split} \end{equation*} this implies \begin{equation} \|u_t-v_t\|_H^{2\alpha}\leq 2^{\alpha-1} \left(1+t^{\alpha-1}\int_0^t\frac{2^{\alpha}}{\Big(1+\|u_s\|_E^2+\|v_s\|_E^2\Big)^{\alpha}}\:ds\right)^{-1}\|u_0-v_0\|_H^{2\alpha}.
\label{cest1} \end{equation} \noindent Furthermore, using again Jensen's inequality for the convex function $1/x$, \begin{equation*} \begin{split} \int_0^t\frac{1}{\Big(1+\|u_s\|_E^2+\|v_s\|_E^2\Big)^{\alpha}}\:ds&\geq \frac{t^2}{\int_0^t\Big(1+\|u_s\|_E^2+\|v_s\|_E^2\Big)^{\alpha}\:ds}\\ &\geq \frac{t^2}{3^{\alpha-1}\Big(t +\int_0^t \|u_s\|_E^{2\alpha}\:ds+\int_0^t \|v_s\|_E^{2\alpha}\:ds\Big)}\\ &= \frac{t}{3^{\alpha-1}\Big(1 +\frac{1}{t}\int_0^t \|u_s\|_E^{2\alpha}\:ds+\frac{1}{t}\int_0^t \|v_s\|_E^{2\alpha}\:ds\Big)}, \end{split} \end{equation*} \noindent which inserted into \eqref{cest1} gives \begin{equation*} \begin{split} \|u_t-v_t\|_H^{2\alpha}&\leq 2^{\alpha-1}\left(1+\frac{2^{\alpha}t^{\alpha}}{3^{\alpha-1}\Big(1 +\frac{1}{t}\int_0^t \|u_s\|_E^{2\alpha}\:ds+\frac{1}{t}\int_0^t \|v_s\|_E^{2\alpha}\:ds\Big)}\right)^{-1}\|u_0-v_0\|_H^{2\alpha}\\ &\leq 2^{\alpha}\left(1+\frac{2^{\alpha}t^{\alpha}}{3^{\alpha}\Big(1 +\frac{1}{t}\int_0^t \|u_s\|_E^{2\alpha}\:ds+\frac{1}{t}\int_0^t \|v_s\|_E^{2\alpha}\:ds\Big)}\right)^{-1}\|u_0-v_0\|_H^{2\alpha}\\ &= 2^{\alpha}\frac{3^{\alpha}\Big(1 +\frac{1}{t}\int_0^t \|u_s\|_E^{2\alpha}\:ds+\frac{1}{t}\int_0^t \|v_s\|_E^{2\alpha}\:ds\Big)}{2^{\alpha}t^{\alpha}+3^{\alpha}\Big(1 +\frac{1}{t}\int_0^t \|u_s\|_E^{2\alpha}\:ds+\frac{1}{t}\int_0^t \|v_s\|_E^{2\alpha}\:ds\Big)}\|u_0-v_0\|_H^{2\alpha}\\ &\leq t^{-\alpha}\Big( 3^{\alpha}\Big(1 +\frac{1}{t}\int_0^t \|u_s\|_E^{2\alpha}\:ds+\frac{1}{t}\int_0^t \|v_s\|_E^{2\alpha}\:ds\Big)\Big) \|u_0-v_0\|_H^{2\alpha}. \end{split} \end{equation*} \end{proof}
\noindent Choosing $\alpha=\frac 14$, we obtain in particular \begin{equation}\label{1/4-Hoelder} \|u_t-v_t\|_H^{\frac 12}\leq t^{-\frac 14}C\Big(1+\frac{1}{t}\int_0^t \|u_s\|_E^{\frac 12}\:ds+\frac{1}{t}\int_0^t \|v_s\|_E^{\frac 12}\:ds\Big) \|u_0-v_0\|_H^{\frac 12}, \end{equation} for some positive constant $C$. As a consequence we arrive at the following statement.
\begin{cor}\label{regularite Holderienne} Let $\varphi: L^2(0,1) \mapsto \mathbb R$ be bounded and $\frac 1 2 $-H\"older-continuous, i.e. \[ \sup_{x\ne y \in L^2(0,1)} \frac {\varphi(x) - \varphi(y)}{\|x-y\|_{L^2(0,1)}^{1/2}} =: \bigl|\varphi\bigr|_{1/2} <\infty.\] Then for $u,v \in E$ \[ \limsup_{t \to \infty} \left[ t^{1/4} \frac {|P_t\varphi (u) - P_t\varphi (v) | }{\|u-v\|_{L^2(0,1)}^{1/2}} \right] \leq \frac 1 {\sqrt 2} \bigl|\varphi\bigr|_{1/2} C \left( 1+2 \int \|u\|_E^{\frac 1 2 }\,\mu(du) \right).\] \end{cor}
\begin{proof} Using \eqref{1/4-Hoelder}, \begin{align*} |P_t\varphi(u)-P_t\varphi(v)|& = | \mathbb E(\varphi(u_t)-\varphi(v_t))|\leq \bigl|\varphi\bigr|_{1/2} \mathbb E(\|u_t-v_t\|_H^{\frac 1 2 })\\ &\leq t^{-\frac 1 4} \|u-v\|_H^{\frac 1 2 } \cdot C\Bigl(1+\mathbb E \Bigl(\frac{1}{t}\int_0^t \|u_s\|_E^{\frac 12}\:ds+\frac{1}{t}\int_0^t \|v_s\|_E^{\frac 12}\:ds \Bigr)\Bigr) , \end{align*} where $ \frac{1}{t} \mathbb E \int_0^t \|u_s\|_E^{\frac 1 2 }\:ds$ converges to $\displaystyle\int_H \|u\|^{\frac 1 2 }_E\:\mu(du)$ as $t\to+\infty$, due to the ergodicity of $(P_t)$.
\end{proof}
\section{Maximal dissipativity of the operator $J$} In this final section we prove the maximal dissipativity of the operator $(J_0,D(J_0))$ on the space $L^1(H,\mu)$, where $J_0$ is defined in \eqref{defjo} and $D(J_0):= C^2_b(H)$. As a standard consequence the transition semigroup $(P_t)$ corresponding to the generalized solution of \eqref{sde0} admits a unique extension to a strongly continuous semigroup $(P^0_t)_{t\geq 0}$ on $L^1(H,\mu)$. For the proof we exploit that the drift in \eqref{sde0} can be associated to a subdifferential of a convex l.s.c.\ functional on $L^2(0,1)$, using the general set-up introduced in \cite{ACM} for $L^2$-gradient flows of linear growth functionals. Let $G$ denote the primitive of the function $s\mapsto\arctan s$; then $G$ is a convex function with linear growth at infinity. For a measure $\nu$ on $[0,1]$ with Lebesgue decomposition $$ \nu= h\, dx+\nu^s, $$ where $h\,dx$ denotes the absolutely continuous part and $\nu^s$ the singular part of $\nu$, we define a new measure $G(\nu)$ on the Borel sets $B\subset [0,1]$ by $$ \int_B G(\nu):= \int_B G(h(x))\:dx+\int_B G_{\infty}\Big(\frac{d\nu}{d|\nu|}\Big)\:d|\nu|^s, $$ where $$ G_{\infty}(x):= \lim\limits_{t\to+\infty}\frac{G(tx)}{t}= \frac \pi 2 |x|. $$ We introduce the functional $\Phi$ on $L^2(0,1)$ \[\Phi(u)=\left\{\begin{array}{ll} \int_{[0,1]} G(Du),&\quad u\in BV(0,1)\\ +\infty,&\quad u\in L^2(0,1)\setminus BV(0,1). \end{array} \right. \] \noindent By the results in \cite{ACM} the functional $\Phi$ is convex on $BV(0,1)$ and lower semicontinuous on every $L^p(0,1)$. Hence the subdifferential $\partial \Phi$ of $\Phi$, which is the multi-valued operator in $L^2(0,1)$ defined by $$ v\in\partial \Phi(u)\:\:\Longleftrightarrow\:\: \Phi(\zeta)-\Phi(u)\geq \int_{(0,1)} v(\zeta-u)\:dx\:\:\forall\:\zeta\in L^2(0,1), $$ is a maximal monotone operator in $L^2(0,1)$. Clearly $u\in BV(0,1)$ if $u\in W^{1,1}_0(0,1)$, with $\bigl[Du\bigr]=\| u_x\|_{L^1(0,1)}$, and so if $$ v=- (\arctan u_x)_x\in L^2(0,1), $$ then $v\in \partial \Phi(u)$. Moreover, since $|\zeta|-C_1\leq \arctan (\zeta)\cdot\zeta$ for all $\zeta\in \mathbb R$ and some constant $C_1>0$, $u\in D_0$ implies $u\in W^{1,1}(0,1)$, i.e.\ with $V(u):= -\partial\Phi(u)$ for $u\in D_0$ we find that $V(u)= {u_{xx}}/({1+{u_x^2}})$. Thus, considering the Kolmogorov operator $J_0$ as an unbounded operator in $L^1(H,\mu)$ with domain $D(J_0):= C_b^2(H) $ we can write for $\varphi \in C_b^2(H)$ $$ J_0\varphi(u)=\frac 12 \Tr {Q}D^2\varphi(u)+\left \langle V(u),D\varphi(u)\right\rangle, \quad \varphi\in C_b^2(H).
$$ \noindent Note that this definition of $J_0$ makes sense in $L^1(H,\mu)$, because by Theorem \ref{momentthm} \[\int_H \|V(u)\|_{L^2(0,1)}^2\:\mu(du)<+\infty.\] Secondly, it follows from It\^o's formula for $\|u(t)\|_H^2$ for solutions with regular initial condition that the measure $\mu$ is infinitesimally invariant for the operator $J_0$, i.e.\ $$ \int J_0\varphi(u)\:\mu(du)=0\quad\mbox{for all $\varphi\in D(J_0)$}, $$ and moreover, since $$ J_0\varphi^2=2\varphi J_0\varphi+ \langle Q D\varphi, D\varphi\rangle,\quad \varphi\in D(J_0), $$ also \[ \int_H J_0\varphi(u)\, \varphi(u)\:\mu(du)=-\frac 12 \int _H \| {\mathbf{\sigma}} ^* D\varphi(u)\|^2\:\mu(du), \] which entails that $J_0$ is dissipative in the Hilbert space $L^2(H,\mu)$. By a similar argument as in \cite{Es-St} one proves that $J_0$ is also dissipative in $L^1(H,\mu)$. Therefore it is closable and its closure $J:= \bar{J_0}$ with domain $ D(J)$ is dissipative. Now the main assertion of this section reads as follows.
\begin{theorem} The operator $(J,D(J))$ generates a $C_0$-semigroup of contractions on $L^1(H,\mu)$. \end{theorem}
\begin{proof} We shall prove that ${\rm rg}(\lambda -J)$ is dense in $L^1(H,\mu)$. To this aim for $\alpha>0$ consider the Yosida approximation of $V$ defined by \begin{equation*} V_\alpha(x)=V(J_{\alpha}(x)),\quad\mbox{where}\quad J_{\alpha}(x)=(\Id-\alpha V)^{-1}(x),\quad x\in D(V). \end{equation*} \noindent For the sequence $V_\alpha$ we have the following: \begin{enumerate} \item[(i)] For any $\alpha>0$, $V_{\alpha}$ is dissipative and Lipschitz continuous. \item [(ii)] $\|V_{\alpha}(x)\|\leq \|V(x)\|$ for any $x\in D(V)$. \end{enumerate} \noindent Note that the function $V_{\alpha}$ is not differentiable in general. Therefore we shall consider a $C^1$-approximation as in \cite{DaZ3}. For $\alpha$, $\beta>0$ we set $$ V_{\alpha,\beta}(x):= \int_H e^{\beta \Delta}V_{\alpha}(e^{\beta \Delta}x+y)\mathcal{N}_{0, {\mathbf{\sigma}} _{\beta}}(dy) $$ where $\mathcal{N}_{0, {\mathbf{\sigma}} _{\beta}}$ is the Gaussian measure on $H$ with mean $0$ and covariance operator defined by $ {\mathbf{\sigma}} _{\beta}:=\int_0^{\beta}e^{2s\Delta}\:ds$. Then $V_{\alpha,\beta}$ is dissipative and, by the Cameron-Martin formula, it is $C^{\infty}$ differentiable. Moreover, as $\alpha$, $\beta\to 0$, $V_{\alpha,\beta}\to V$ pointwise. Let us now introduce the following approximating equation \begin{equation} \label{uab} \left\{ \begin{array}{ll} du_{\alpha,\beta}(t)=V_{\alpha,\beta}(u_{\alpha,\beta}(t))dt+ {\mathbf{\sigma}} dW_t,\quad t\geq0\\ u_{\alpha,\beta}(0)=x. \end{array} \right.
\end{equation} Since $V_{\alpha,\beta}$ is globally Lipschitz, equation \eqref{uab} has a unique strong solution $(u_{\alpha,\beta}(t))_{t\geq 0}$. Moreover, by the regularity of $V_{\alpha,\beta}$ the process $(u_{\alpha,\beta}(t))_{t\geq 0}$ is differentiable with respect to the initial datum $x\in H$. For any $h\in H$ we set $\eta_h(t,x):= Du_{\alpha,\beta}(t,x)\cdot h$; it satisfies \begin{equation}\label{eqet} \left\{ \begin{array}{ll} \frac{d}{dt}\eta_h(t,x)=DV_{\alpha,\beta}(u_{\alpha,\beta}(t,x))\cdot\eta_h(t,x),\\ \eta_h(0,x)=h\in H. \end{array} \right. \end{equation} \noindent From the dissipativity of $V_{\alpha,\beta}$ we have that $$ \langle DV_{\alpha,\beta}(z)h,h\rangle\leq 0,\quad h\in H,\ z\in D(V). $$ \noindent Hence, multiplying both sides of \eqref{eqet} by $\eta_h(t,x)$ and integrating with respect to $t$, we have \begin{equation}\label{bound-eta} \|\eta_h(t,x)\|^2\leq \|h\|^2. \end{equation} Now for $\lambda>0$ and $f\in C_b^2(H)$, consider the following elliptic equation \begin{equation}\label{elliptic} (\lambda-J_{V_{\alpha,\beta}})\varphi_{\alpha,\beta}=f,\quad \lambda>0, \end{equation} where $J_{V_{\alpha,\beta}}$ is the Kolmogorov operator corresponding to the SDE \eqref{uab}. \noindent It is well-known that this equation has a solution $\varphi_{\alpha,\beta}\in C_b^2(H)$ which can be written in the form $\varphi_{\alpha,\beta}=R(\lambda,J_{V_{\alpha,\beta}})f$, where $$ \bigl(R(\lambda,J_{V_{\alpha,\beta}})f\bigr)(x)=\int_0^{+\infty}e^{-\lambda t}\mathbb E(f(u_{\alpha,\beta}(t,x)))\:dt $$ is the pseudo resolvent associated with $J_{V_{\alpha,\beta}}$. Thus we have \begin{equation}\label{res-estimate} \|\lambda\varphi_{\alpha,\beta}\|_\infty\leq\|f\|_\infty. \end{equation} \noindent We have, moreover, for all $h\in H$, \begin{equation*} D\varphi_{\alpha,\beta}(x)h=\int_0^{+\infty}e^{-\lambda t}\mathbb E\Big(Df(u_{\alpha,\beta}(t,x))(Du_{\alpha,\beta}(t,x)h)\Big)\:dt. \end{equation*} \noindent Consequently, using \eqref{bound-eta} it follows that \[\sup\limits_{\alpha,\beta>0}\|D\varphi_{\alpha,\beta}(x)\|\leq \frac{1}{\lambda}\|Df\|_{\infty}.\] \noindent From \eqref{elliptic} we have \[\begin{split} \lambda\, \varphi_{\alpha,\beta}(x)&-\frac 12 \Tr {Q}D^2\varphi_{\alpha,\beta}(x)+\langle V(x),D\varphi_{\alpha,\beta}(x)\rangle\\ &=f(x)+\langle V(x)-V_{\alpha,\beta}(x),D\varphi_{\alpha,\beta}(x)\rangle,\quad \lambda>0,\:\:x\in D(V). \end{split}\] \noindent Using the gradient bound above we deduce that \begin{equation*}\begin{split} \int_H|\langle V_{\alpha,\beta}(x)-V(x),D\varphi_{\alpha,\beta}(x)\rangle| \:\mu(dx)\leq \frac{1}{\lambda}\|Df\|_{\infty}\|V_{\alpha,\beta}-V\|_{L^2 (H, \mu )}. \end{split} \end{equation*} By Lebesgue's theorem $\|V_{\alpha,\beta}-V\|_{L^2 (H, \mu )}$ converges to $0$ as $\alpha,\:\beta\to 0$. Therefore we deduce that, as $\alpha,\:\beta\to 0$, $$ \lambda\, \varphi_{\alpha,\beta}(x)-\frac 12 \Tr {Q}D^2\varphi_{\alpha,\beta} (x)+\langle V(x),D\varphi_{\alpha,\beta}(x)\rangle\to f $$ strongly in $L^1(H,\mu)$. This implies that $$ C_b^2(H)\subset \overline{(\lambda-J_0)(D(J_0))}. $$ Since $C_b^2(H)$ is dense in $L^1(H,\mu)$, the proof is complete. \end{proof} \end{document}
\begin{document} \title{Finite extinction time for the solutions to the Ricci flow on certain three-manifolds}\author{Grisha Perelman\thanks{St.Petersburg branch of Steklov Mathematical Institute, Fontanka 27, St.Petersburg 191023, Russia. Email: [email protected] or [email protected] }} \maketitle \par In our previous paper we constructed complete solutions to the Ricci flow with surgery for arbitrary initial riemannian metric on a (closed, oriented) three-manifold [P,6.1], and used the behavior of such solutions to classify three-manifolds into three types [P,8.2]. In particular, the first type consisted of those manifolds, whose prime factors are diffeomorphic copies of spherical space forms and $\mathbb{S}^2\times\mathbb{S}^1;$ they were characterized by the property that they admit metrics, that give rise to solutions to the Ricci flow with surgery, which become extinct in finite time. While this classification was sufficient to answer topological questions, an analytical question of significant independent interest remained open, namely, whether the solution becomes extinct in finite time for every initial metric on a manifold of this type. \par In this note we prove that this is indeed the case. Our argument (in conjunction with [P,\S 1-5]) also gives a direct proof of the so called "elliptization conjecture". It turns out that it does not require any substantially new ideas: we use only a version of the least area disk argument from [H,\S 11] and a regularization of the curve shortening flow from [A-G]. \section{Finite time extinction} \par {\bf 1.1 Theorem.} {\it Let $M$ be a closed oriented three-manifold, whose prime decomposition contains no aspherical factors. Then for any initial metric on $M$ the solution to the Ricci flow with surgery becomes extinct in finite time.} \par {\it Proof for irreducible $M$.} Let $\Lambda M$ denote the space of all contractible loops in $C^1(\mathbb{S}^1\to M).$ Given a riemannian metric $g$ on $M$ and $c\in\Lambda M,$ define $A(c,g)$ to be the infimum of the areas of all lipschitz maps from $\mathbb{D}^2$ to $M,$ whose restriction to $\partial\mathbb{D}^2=\mathbb{S}^1$ is $c.$ For a family $\Gammaamma\subset\Lambda M$ let $A(\Gammaamma ,g)$ be the supremum of $A(c,g)$ over all $c\in\Gammaamma.$ Finally, for a nontrivial homotopy class $\alpha\in\pi_{\ast}(\Lambda M,M)$ let $A(\alpha ,g)$ be the infimum of $A(\Gammaamma ,g)$ over all $\Gammaamma\in\alpha.$ Since $M$ is not aspherical, it follows from a classical (and elementary) result of Serre that such a nontrivial homotopy class exists. \par {\bf 1.2 Lemma.} (cf. [H,\S 11]) {\it If $g^t$ is a smooth solution to the Ricci flow, then for any $\alpha$ the rate of change of the function $A^t=A(\alpha ,g^t)$ satisfies the estimate $$ \frac{d}{dt}A^t\le-2\pi-\frac{1}{2}R^t_{\mathrm{min}}A^t $$ (in the sense of the $\mathrm{lim \ sup}$ of the forward difference quotients), where $R^t_{\mathrm{min}}$ denotes the minimum of the scalar curvature of the metric $g^t.$ } \par A rigorous proof of this lemma will be given in \S 3, but the idea is simple and can be explained here. Let us assume that at time $t$ the value $A^t$ is attained by the family $\Gammaamma,$ such that the loops $c\in\Gammaamma$ where $A(c,g^t)$ is close to $A^t$ are embedded and sufficiently smooth. 
For each such $c$ consider the minimal disk $D_c$ with boundary $c$ and with area $A(c,g^t).$ Now let the metric evolve by the Ricci flow and let the curves $c$ evolve by the curve shortening flow (which moves every point of the curve in the direction of its curvature vector at this point) with the same time parameter. Then the rate of change of the area of $D_c$ can be computed as $$\int_{D_c}{(-\mathrm{Tr(Ric^T)})}\ \ + \int_c{(-k_g)}$$ where $\mathrm{Ric^T}$ is the Ricci tensor of $M$ restricted to the tangent plane of $D_c,$ and $k_g$ is the geodesic curvature of $c$ with respect to $D_c$ (cf. [A-G, Lemma 3.2]). In three dimensions the first integrand equals $-\frac{1}{2}R-(K-\mathrm{det \ II}),$ where $K$ is the intrinsic curvature of $D_c$ and $\mathrm{det \ II},$ the determinant of the second fundamental form, is nonpositive, because $D_c$ is minimal. Thus, the rate of change of the area of $D_c$ can be estimated from above by $$ \int_{D_c}(-\frac{1}{2}R-K) \ + \int_c (-k_g) = \int_{D_c}(-\frac{1}{2}R) \ -2\pi $$ by the Gauss-Bonnet theorem, and the statement of the lemma follows. \par The problem with this argument is that if $\Gammaamma$ contains curves, which are not immersed (for instance, a curve could pass an arc once in one direction and then make an about turn and pass the same arc in the opposite direction), then it is not clear how to define curve shortening flow so that it would be continuous both in the time parameter and in the family parameter. In \S 3 we'll explain how to circumvent this difficulty, essentially by adding one dimension to the ambient manifold. This regularization of the curve shortening flow has been worked out by Altschuler and Grayson [A-G] (who were interested in approximating the singular curve shortening flow on the plane and obtained for that case more precise results than what we need). \par {\bf 1.3} Now consider the solution to the Ricci flow with surgery. Since $M$ is assumed irreducible, the surgeries are topologically trivial, that is one of the components of the post-surgery manifold is diffeomorphic to the pre-surgery manifold, and all the others are spheres. Moreover, by the construction of the surgery [P,4.4], the diffeomorphism from the pre-surgery manifold to the post-surgery one can be chosen to be distance non-increasing ( more precisely, $(1+\xi)$-lipschitz, where $\xi>0$ can be made as small as we like). It follows that the conclusion of the lemma above holds for the solutions to the Ricci flow with surgery as well. \par Now recall that the evolution equation for the scalar curvature $$ \frac{d}{dt}R=\triangle R + 2|\mathrm{Ric}|^2=\triangle R+\frac{2}{3}R^2+2|\mathrm{Ric}^{\circ}|^2 $$ implies the estimate $R^t_{\mathrm{min}}\ge -\frac{3}{2}\frac{1}{t+\mathrm{const}}.$ It follows that $\hat{A}^t=\frac{A^t}{t+\mathrm{const}}$ satisfies $\frac{d}{dt}\hat{A}^t\le -\frac{2\pi}{t+\mathrm{const}},$ which implies finite extinction time since the right hand side is non-integrable at infinity whereas $\hat{A}^t$ can not become negative. \par {\bf 1.4} {\it Remark.} The finite time extinction result for irreducible non-aspherical manifolds already implies (in conjuction with the work in [P,\S 1-5] and the Kneser finiteness theorem) the so called "elliptization conjecture", claiming that a closed manifold with finite fundamental group is diffeomorphic to a spherical space form. 
The analysis of the long time behavior in [P,\S 6-8] is not needed in this case; moreover the argument in [P,\S 5] can be slightly simplified, replacing the sequences $r_j, \kappa_j, \bar{\delta}_j$ by single values $r, \kappa, \bar{\delta},$ since we already have an upper bound on the extinction time in terms of the initial metric. \par In fact, we can even avoid the use of the Kneser theorem. Indeed, if we start from an initial metric on a homotopy sphere (not assumed irreducible), then at each surgery time we have (almost) distance non-increasing homotopy equivalences from the pre-surgery manifold to each of the post-surgery components, and this is enough to keep track of the nontrivial relative homotopy class of the loop space. \par {\bf 1.5} {\it Proof of theorem 1.1 for general $M$.} The Kneser theorem implies that our solution undergoes only finitely many topologically nontrivial surgeries, so from some time $T$ on all the surgeries are trivial. Moreover, by the Milnor uniqueness theorem, each component at time $T$ satisfies the assumption of the theorem. Since we already know from 1.4 that there can not be any simply connected prime factors, it follows that every such component is either irreducible, or has nontrivial $\pi_2;$ in either case the proof in 1.1-1.3 works. \section{Preliminaries on the curve shortening flow} \par In this section we rather closely follow [A-G]. \par {\bf 2.1} Let $M$ be a closed $n$-dimensional manifold, $n\ge 3,$ and let $g^t$ be a smooth family of riemannian metrics on $M$ evolving by the Ricci flow on a finite time interval $[t_0,t_1].$ It is known [B] that $g^t$ for $t>t_0$ are real analytic. Let $c^t$ be a solution to the curve shortening flow in $(M,g^t),$ that is $c^t$ satisfies the equation $\frac{d}{dt}c^t(x)=H^t(x),$ where $x$ is the parameter on $\mathbb{S}^1,$ and $H^t$ is the curvature vector field of $c^t$ with respect to $g^t.$ It is known [G-H] that for any smoothly immersed initial curve $c$ the solution $c^t$ exists on some time interval $[t_0,t_1'),$ each $c^t$ for $t>t_0$ is an analytic immersed curve, and either $t_1'=t_1,$ or the curvature $k^t=g^t(H^t,H^t)^{\frac{1}{2}}$ is unbounded when $t\to t_1'.$ \par Denote by $X^t$ the tangent vector field to $c^t,$ and let $S^t=g^t(X^t,X^t)^{-\frac{1}{2}}X^t$ be the unit tangent vector field; then $H=\nabla_S S$ (from now on we drop the superscript $t$ except where this omission can cause confusion). We compute \begin{equation} \frac{d}{dt}g(X,X)=-2\mathrm{Ric}(X,X)-2g(X,X)k^2, \end{equation} which implies \begin{equation} [H,S]=(k^2+\mathrm{Ric}(S,S))S \end{equation} Now we can compute \begin{equation} \frac{d}{dt}k^2=(k^2)''-2g((\nabla_S H)^{\perp},(\nabla_S H)^{\perp})+2k^4 + ..., \end{equation} where primes denote differentiation with respect to the arclength parameter $s,$ and where dots stand for the terms containing the curvature tensor of $g,$ which can be estimated in absolute value by $\mathrm{const}\cdot(k^2+k).$ Thus the curvature $k$ satisfies \begin{equation} \frac{d}{dt}k\le k''+k^3+\mathrm{const}\cdot(k+1) \end{equation} Now it follows from (1) and (4) that the length $L$ and the total curvature $\Theta=\int kds$ satisfy \begin{equation} \frac{d}{dt}L\le \int (\mathrm{const} \ -k^2)ds, \end{equation} \begin{equation} \frac{d}{dt}\Theta\le \int \mathrm{const}\cdot (k+1)ds \end{equation} In particular, both quantities can grow at most exponentially in $t$ (they would be non-increasing in a flat manifold).
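\par For the reader's convenience we spell out how (5) follows from (1): since the arclength element is $ds=g(X,X)^{\frac{1}{2}}dx$ and $L=\int g(X,X)^{\frac{1}{2}}dx,$ equation (1) gives $$\frac{d}{dt}L=\int \frac{\frac{d}{dt}g(X,X)}{2g(X,X)^{\frac{1}{2}}}\ dx=\int (-\mathrm{Ric}(S,S)-k^2)ds\le\int (\mathrm{const}\ -k^2)ds,$$ the constant being a bound for $|\mathrm{Ric}|$ on the finite time interval; (6) is obtained in the same way from (1) and (4), using that $\int k''ds=0$ on the closed curve and that the $k^3$ terms cancel.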
\par {\bf 2.2} In general the curvature of $c^t$ may concentrate near certain points, creating singularities. However, if we know that this does not happen at some time $t^{\ast},$ then we can estimate the curvature and higher derivatives at times shortly thereafter. More precisely, there exist constants $\epsilon, C_1, C_2,...$ (which may depend on the curvatures of the ambient space and their derivatives, but are independent of $c^t$), such that if at time $t^{\ast}$ for some $r>0$ the length of $c^t$ is at least $r$ and the total curvature of each arc of length $r$ does not exceed $\epsilon,$ then for every $t\in (t^{\ast},t^{\ast}+\epsilon r^2)$ the curvature $k$ and higher derivatives satisfy the estimates $k^2=g(H,H)\le C_0 (t-t^{\ast})^{-1},\ \ g(\nabla_S H,\nabla_S H)\le C_1 (t-t^{\ast})^{-2},...\ \ $ This can be proved by adapting the arguments of Ecker and Huisken [E-Hu]; see also [A-G,\S 4]. \par {\bf 2.3} Now suppose that our manifold $(M,g^t)$ is a metric product $(\bar{M},\bar{g}^t)\times \mathbb{S}^1_{\lambda},$ where the second factor is the circle of constant length $\lambda;$ let $U$ denote the unit tangent vector field to this factor. Then $u=g(S,U)$ satisfies the evolution equation \begin{equation} \frac{d}{dt}u=u''+(k^2+\mathrm{Ric}(S,S))u \epsilonnd{equation} \par Assume that $u$ was strictly positive everywhere at time $t_0$ (in this case the curve is called a ramp). Then it will remain positive and bounded away from zero as long as the solution exists. Now combining (4) and (7) we can estimate the right hand side of the evolution equation for the ratio $\frac{k}{u}$ and conclude that this ratio, and hence the curvature $k,$ stays bounded (see [A-G,\S 2]). It follows that $c^t$ is defined on the whole interval $[t_0,t_1].$ \par {\bf 2.4} Assume now that we have two ramp solutions $c_1^t, c_2^t,$ each winding once around the $\mathbb{S}^1_{\lambda}$ factor. Let $\mu^t$ be the infimum of the areas of the annuli with boundary $c_1^t\cup c_2^t.$ Then \begin{equation} \frac{d}{dt}\mu^t\le (2n-1)|\mathrm{Rm}^t|\mu^t, \epsilonnd{equation} where $|\mathrm{Rm}^t|$ denotes a bound on the absolute value of sectional curvatures of $g^t.$ Indeed, the curves $c^t_1$ and $c^t_2,$ being ramps, are embedded and without substantial loss of generality we may assume them to be disjoint. In this case the results of Morrey [M] and Hildebrandt [Hi] yield an analytic minimal annulus $A,$ immersed, except at most finitely many branch points, with prescribed boundary and with area $\mu.$ The rate of change of the area of $A$ can be computed as $$ \int_A (-\mathrm{Tr}(\mathrm{Ric}^T)) +\int_{\partial A} (-k_g) \le \int_A (-\mathrm{Tr}(\mathrm{Ric}^T)+K)$$ $$ \le \int_A (-\mathrm{Tr}(\mathrm{Ric}^T)+\mathrm{Rm}^T) \le (2n-1)|\mathrm{Rm}|\mu ,$$ where the first inequality comes from the Gauss-Bonnet theorem, with possible contribution of the branch points, and the second one is due to the fact that a minimal surface has nonpositive extrinsic curvature with respect to any normal vector. \par {\bf 2.5} The estimate (8) implies that $\mu^t$ can grow at most exponentially; in particular, if $c^t_1$ and $ c^t_2$ were very close at time $t_0,$ then they would be close for all $t\in [t_0,t_1]$ in the sense of minimal annulus area. In general this does not imply that the lengths of the curves are also close. 
However, an elementary argument shows that if $\epsilon>0$ is small then, given any $r>0,$ one can find $\bar{\mu},$ depending only on $r$ and on an upper bound for the sectional curvatures of the ambient space, such that if the length of $c^t_1$ is at least $r,$ each arc of $c^t_1$ with length $r$ has total curvature at most $\epsilon,$ and $\mu^t\le\bar{\mu},$ then $L(c^t_2)\ge (1-100\epsilon)L(c^t_1).$ \section{Proof of lemma 1.2} \par {\bf 3.1} In this section we prove the following statement. \par {\it Let $M$ be a closed three-manifold, and let $(M,g^t)$ be a smooth solution to the Ricci flow on a finite time interval $[t_0,t_1].$ Suppose that $\Gamma\subset \Lambda M$ is a compact family. Then for any $\xi>0$ one can construct a continuous deformation $\Gamma^t, t\in [t_0,t_1], \Gamma^{t_0}=\Gamma,$ such that for each curve $c\in\Gamma$ either the value $A(c^{t_1},g^{t_1})$ is bounded from above by $\xi$ plus the value at $t=t_1$ of the solution to the ODE $\frac{d}{dt}w(t)=-2\pi-\frac{1}{2}R^t_{\mathrm{min}}w(t)$ with the initial data $\ \ w(t_0)=A(c^{t_0},g^{t_0}),\ \ $ or $L(c^{t_1})\le \xi;$ moreover, if $c$ was a constant map, then all $c^t$ are constant maps.} \par It is clear that our statement implies lemma 1.2, because a family consisting of very short loops cannot represent a nontrivial relative homotopy class. \par {\bf 3.2} As a first step of the proof of the statement we can replace $\Gamma$ by a family which consists of piecewise geodesic loops with some large fixed number of vertices and with each segment reparametrized in some standard way to make the parametrizations of the whole curves twice continuously differentiable. \par Now consider the manifold $M_{\lambda}=M\times\mathbb{S}^1_{\lambda}, 0<\lambda<1,$ and for each $c\in\Gamma$ consider the smooth embedded closed curve $c_{\lambda}$ such that $p_1c_{\lambda}(x)=c(x)$ and $p_2c_{\lambda}(x)=\lambda x\ \mathrm{mod}\ \lambda,$ where $p_1$ and $p_2$ are projections of $M_{\lambda}$ to the first and second factor respectively, and $x$ is the parameter of the curve $c$ on the standard circle of length one. Using 2.3 we can construct a solution $c_{\lambda}^t, t\in [t_0,t_1]$ to the curve shortening flow with initial data $c_{\lambda}.$ The required deformation will be obtained as $\Gamma^t=p_1\Gamma^t_{\lambda}$ (where $\Gamma^t_{\lambda}$ denotes the family consisting of $c^t_{\lambda}$) for certain sufficiently small $\lambda>0.$ We'll verify that an appropriate $\lambda$ can be found for each individual curve $c,$ or for any finite number of them, and then show that if our $\lambda$ works for all elements of a $\mu$-net in $\Gamma,$ for sufficiently small $\mu>0,$ then it works for all elements of $\Gamma.$ \par {\bf 3.3} In the following estimates we shall denote by $C$ large constants that may depend on the metrics $g^t,$ the family $\Gamma$ and $\xi,$ but are independent of $\lambda, \mu$ and a particular curve $c.$ \par The first step in 3.2 implies that the lengths and total curvatures of $c_{\lambda}$ are uniformly bounded, so by 2.1 the same is true for all $c^t_{\lambda}.$ It follows that the area swept by $c^t_{\lambda}, t\in [t',t'']\subset [t_0,t_1]$ is bounded above by $C(t''-t'),$ and therefore we have the estimates $A(p_1c^t_{\lambda},g^t)\le C, A(p_1c^{t''}_{\lambda},g^{t''})-A(p_1c^{t'}_{\lambda},g^{t'})\le C(t''-t').$ \par {\bf 3.4} It follows from (5) that $\int_{t_0}^{t_1} \int k^2 dsdt \le C$ for any $c^t_{\lambda}.$ Fix some large constant $B,$ to be chosen later. 
Then there is a subset $I_B(c_{\lambda})\subset [t_0,t_1]$ of measure at least $t_1-t_0-CB^{-1}$ where $\int k^2 ds\le B,$ hence $\int kds \le \epsilon$ on any arc of length $\le \epsilon^2 B^{-1}.$ Assuming that $c^t_{\lambda}$ are at least that long, we can apply 2.2 and construct another subset $J_B(c_{\lambda})\subset [t_0,t_1]$ of measure at least $t_1-t_0-CB^{-1},$ consisting of finitely many intervals of measure at least $C^{-1}B^{-2}$ each, such that for any $t\in J_B(c_{\lambda})$ we have pointwise estimates on $c^t_{\lambda}$ for curvature and higher derivatives, of the form $k\le CB,...$ \par Now fix $c, B,$ and consider any sequence of $\lambda\to 0.$ Assume again that the lengths of $c^t_{\lambda}$ are bounded below by $\epsilon^2 B^{-1},$ at least for $t\in [t_0,t_2],$ where $t_2=t_1-B^{-1}.$ Then an elementary argument shows that we can find a subsequence $\Lambda_c$ and a subset $J_B(c)\subset [t_0,t_2]$ of measure at least $t_1-t_0-CB^{-1},$ consisting of finitely many intervals, such that $J_B(c)\subset J_B(c_{\lambda})$ for all $\lambda\in \Lambda_c.$ It follows that on every interval of $J_B(c)$ the curve shortening flows $c^t_{\lambda}$ smoothly converge (as $\lambda\to 0$ in some subsequence of $\Lambda_c$ ) to a curve shortening flow in $M.$ \par Let $w_c(t)$ be the solution of the ODE $\frac{d}{dt}w_c(t)=-2\pi-\frac{1}{2}R^t_{\mathrm{min}}w_c(t)$ with initial data $w_c(t_0)=A(c,g^{t_0}).$ Then for sufficiently small $\lambda\in\Lambda_c$ we have $A(p_1c^t_{\lambda},g^t)\le w_c(t)+\frac{1}{2}\xi$ provided that $B>C\xi^{-1}.$ Indeed, on the intervals of $J_B(c)$ we can estimate the change of $A$ for the limit flow using the minimal disk argument as in 1.2, and this implies the corresponding estimate for $p_1c^t_{\lambda}$ if $\lambda\in\Lambda_c$ is small enough, whereas for the intervals of the complement of $J_B(c)$ we can use the estimate in 3.3. \par On the other hand, if our assumption on the lower bound for lengths does not hold, then it follows from (5) that $L(c^{t_2}_{\lambda})\le CB^{-1}\le\frac{1}{2}\xi.$ \par {\bf 3.5} Now apply the previous argument to all elements of some finite $\mu$-net $\hat{\Gamma}\subset \Gamma$ for small $\mu>0$ to be determined later. We get a $\lambda>0$ such that for each $\hat{c}\in\hat{\Gamma}$ either $A(p_1\hat{c}_{\lambda}^{t_1},g^{t_1})\le w_{\hat{c}}(t_1)+\frac{1}{2}\xi$ or $L(\hat{c}_{\lambda}^{t_2})\le \frac{1}{2}\xi.$ Now for any curve $c\in\Gamma$ pick a curve $\hat{c}\in\hat{\Gamma},$ $\mu$-close to $c,$ and apply the result of 2.4. It follows that if $A(p_1\hat{c}_{\lambda}^{t_1},g^{t_1})\le w_{\hat{c}}(t_1)+\frac{1}{2}\xi$ and $\mu \le C^{-1}\xi,$ then $A(p_1c_{\lambda}^{t_1},g^{t_1})\le w_c(t_1)+\xi.$ On the other hand, if $L(\hat{c}_{\lambda}^{t_2})\le \frac{1}{2}\xi,$ then we can conclude that $L(c_{\lambda}^{t_1})\le \xi$ provided that $\mu>0$ is small enough in comparison with $\xi$ and $B^{-1}.$ Indeed, if $L(c_{\lambda}^{t_1})>\xi,$ then $L(c_{\lambda}^t)>\frac{3}{4}\xi$ for all $t\in [t_2,t_1];$ on the other hand, using (5) we can find a $t\in [t_2,t_1],$ such that $\int k^2ds \le CB$ for $c^t_{\lambda};$ hence, applying 2.5, we get $L(\hat{c}_{\lambda}^t)>\frac{2}{3}\xi$ for this $t,$ which is incompatible with $L(\hat{c}_{\lambda}^{t_2})\le \frac{1}{2}\xi.$ The proof of the statement 3.1 is complete. \section*{References} \ \ \ [A-G] S.Altschuler, M.Grayson Shortening space curves and flow through singularities. Jour. Diff. Geom. 35 (1992), 283-298. 
\par [B] S.Bando Real analyticity of solutions of Hamilton's equation. Math. Zeit. 195 (1987), 93-97. \par [E-Hu] K.Ecker, G.Huisken Interior estimates for hypersurfaces moving by mean curvature. Invent. Math. 105 (1991), 547-569. \par [G-H] M.Gage, R.S.Hamilton The heat equation shrinking convex plane curves. Jour. Diff. Geom. 23 (1986), 69-96. \par [H] R.S.Hamilton Non-singular solutions of the Ricci flow on three-manifolds. Commun. Anal. Geom. 7 (1999), 695-729. \par [Hi] S.Hildebrandt Boundary behavior of minimal surfaces. Arch. Rat. Mech. Anal. 35 (1969), 47-82. \par [M] C.B.Morrey The problem of Plateau on a riemannian manifold. Ann. Math. 49 (1948), 807-851. \par [P] G.Perelman Ricci flow with surgery on three-manifolds. \par arXiv:math.DG/0303109 v1 \end{document}
math
\begin{document} \title{Computing twists of hyperelliptic curves} \author{Davide Lombardo, Elisa Lorenzo Garc\'ia} \address{Davide Lombardo, Università di Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy} \email{[email protected]} \address{Elisa Lorenzo Garc\'ia, IRMAR, Universit\'e de Rennes 1, Campus de Beaulieu, 35042 RENNES Cédex, France} \email{[email protected]} \keywords{Hyperelliptic curves, conics, twists, Galois cohomology} \subjclass[2010]{11R34, 14H10, 14H45} \date{} \begin{abstract} We give an efficient algorithm to compute equations of twists of hyperelliptic curves $C$ of arbitrary genus over any perfect field $k$ (of characteristic different from 2) starting with a cocycle in $\operatorname{H}^1(\operatorname{Gal}(\overline{k}/k),\operatorname{Aut}(C))$. We also discuss some interesting examples. \end{abstract} \maketitle \section{Introduction} In this paper we describe an algorithm to compute explicit equations of twists of hyperelliptic curves (of genus $g \geq 2$) starting from a cocycle in a given cohomology class. The study of twists of curves is a very useful tool for understanding some arithmetic problems: for example, it has proved to be extremely helpful to explore the Sato-Tate conjecture \cite{AWS2016,FLS,FS}, as well as to solve some Diophantine equations \cite{PSS}, to compute $\mathbb{Q}$-curves realizing certain Galois representations \cite{BFGL}, to find counterexamples to the Hasse principle \cite{Oz}, \cite{Oz2}, and to produce curves with many points \cite{MT}. Let us briefly review previous work concerning the computation of twists of curves. Geometrically, all smooth conics are isomorphic to $\mathbb{P}^1$, so the twists of a smooth conic $C/K$ are precisely the smooth conics over $K$ (recall that the anticanonical embedding realizes any genus-0 curve as a conic). The set of twists of an elliptic curve is well understood if one restricts to those automorphisms that fix the origin of the group law (see for example \cite[Section X.5]{Sil} and \cite{Top}), but the situation is way more complicated if one considers arbitrary twists. For curves of genus $2$, we refer to the work of Cardona \cite{Cart} and for non-hyperelliptic curves of genus $3$ to the work of the second author \cite{Lor16}. A general algorithm to compute twists of non-hyperelliptic curves is also due to the second author \cite{Lor15,Tesis}, and therefore only the general hyperelliptic case remains to be treated: thanks to the algorithm presented in this paper, we are now able to compute equations of twists of any curve of genus $g \geq 2$ (in characteristic different from 2) starting from the corresponding cocycles. { We remark that the main difficulty of working with hyperelliptic curves is that $\operatorname{Aut}_{\overline{k}}(C)$ does not embed in the automorphism group of $H^0(C,\Omega^1_C)$; indeed, this is the property that makes the construction of \cite{Lor15,Tesis} work, and it fails for hyperelliptic curves because the canonical bundle is not very ample in this case. To remedy this, one could consider higher powers of the canonical bundle: given any hyperelliptic curve $C/k$ of genus $g$, the line bundle $(\Omega^1_C)^{\otimes 2}$ is very ample, so that we get an embedding of $C$ into $\mathbb{P}H^0\left(C, (\Omega^1_C)^{\otimes 2}\right)^\vee \cong \mathbb{P}^{3(g-1)-1}$. 
Moreover, $\operatorname{Aut}(C)$ embeds into $\operatorname{GL}\left( H^0\left(C, (\Omega^1_C)^{\otimes 2}\right)\right)$, and this would allow us to (in principle) use the approach of \cite{Lor15,Tesis} to compute models of twists of $C$. However, this seems quite inefficient from a computational point of view, since it requires one to explicitly find a basis of sections for $(\Omega^1_C)^{\otimes 2}$, and to work with models of $C$ in high-dimensional projective spaces. Our algorithm, which exploits the specific geometry of the situation, provides a much more efficient approach.} We briefly describe the organization of the paper: in Section \ref{sect:HypCurves} we set up our notation for hyperelliptic curves and recall some well-known facts about them. Section \ref{method} forms the core of the paper, giving the details of our algorithm for the computation of twists. Finally, in Section \ref{sect:Examples} we compute equations for several twists given by cocycles. \subsection*{Notation} Throughout the paper $k$ is a perfect field of characteristic different from $2$, with algebraic closure $\overline{k}$, and $\mu_n(\overline{k})$ is the set of $n$-th roots of unity in $\overline{k}$. We denote by $\zeta_n$ a fixed primitive $n$-th root of unity in $\overline{k}$ (provided that $(n,\operatorname{char}(k))=1$) and by $\operatorname{Gal}(\overline{k}/k)$ the absolute Galois group of $k$. If $M$ is a matrix in $\operatorname{GL}_3(k)$, we write $M(x,y,z)$ for the linear map that sends $x, y, z$ respectively to the three entries of the vector $M \cdot (x, y, z)^t$; here $x,y,z$ can be either variables or elements of $k$. By a curve $C/k$ we mean an irreducible, geometrically connected, projective variety of dimension 1 defined over a field $k$. We denote by $\operatorname{Aut}(C)$ the group of geometric automorphisms of $C$, that is, the group of automorphisms of $C \times_k {\overline{k}}$, and if $L/k$ is a field extension we denote by $\operatorname{Aut}_L(C)$ the group of automorphisms of $C$ defined over $L$. \subsection*{Acknowledgment} We thank Christophe Ritzenthaler for useful discussions and for pointing out some of the references, and Jeroen Sijsling for his careful reading of a preliminary version of this manuscript and for his many useful comments. We also thank Bas Edixhoven and Bjorn Poonen for interesting discussions during the workshop Arithmetic Geometry and Computer Algebra held in Oldenburg in June 2017. Finally, we thank the anonymous referee for helping us improve the clarity of the exposition and some of the proofs in the paper, especially Lemma \ref{superembedding0} and Corollary \ref{superembedding}. Part of this work was carried out during a visit of the first author to Universit\'e de Rennes 1, and we are grateful to this institution for its hospitality and for the ideal working conditions. \subsection{Twists and cohomology} We recall the notion of twist of a curve and its well-known relationship with a certain Galois cohomology set. \begin{definition} Let $C/k$ be a smooth projective curve. A twist of $C/k$ is a smooth projective curve $C'/k$ for which there exists a $\overline{k}$-isomorphism $\varphi:\,C' \to C$. We identify two twists if they are isomorphic over $k$. \end{definition} \begin{theorem}[\cite{Sil}, Chapter X, Theorem $2.2$]\label{thm_CohomologyTwists} The set of twists of $C/k$ is in bijection with the pointed cohomology set $\operatorname{H}^1\left(\abGal{k},\operatorname{Aut}(C) \right)$. 
The bijection is given explicitly as follows: \begin{itemize} \item let $C'/k$ be a twist of $C/k$ with associated $\overline{k}$-isomorphism $\varphi: C' \to C$. The corresponding cohomology class is that of the cocycle \[ \begin{array}{cccc} \xi : & \abGal{k} & \to & \operatorname{Aut}(C) \\ & \sigma & \mapsto & \varphi \circ\,^\sigma \varphi^{-1}. \end{array} \] \item let $\xi \in \operatorname{H}^1\left(\abGal{k}, \operatorname{Aut}(C) \right)$. We define an action of $\abGal{k}$ on $\overline{k}(C)$ extending by linearity the prescription \[ \sigma:\,a f \mapsto \sigma(a) \cdot \xi(\sigma)(f) \quad \forall a \in \overline{k}, \forall f \in k(C). \] Given $\xi$, we obtain a twist $C'$ as follows: the curve $C'$ is the unique smooth projective $k$-curve whose function field is $\overline{k}(C)^{\abGal{k}}$, the field of invariants for the action just defined. The $\overline{k}$-isomorphism $\varphi:\,C' \to C$ comes from the isomorphism $\overline{k}(C') \cong \overline{k}(C)$ induced by the natural inclusion $k(C') \hookrightarrow \overline{k}(C)$. \end{itemize} We say that a $\overline{k}$-isomorphism $\varphi : C' \to C$ realizes the cocycle $\xi : \operatorname{Gal}(\overline{k}/k) \to \operatorname{Aut}(C)$ if $\varphi \circ\,^\sigma\varphi^{-1} = \xi(\sigma)$ for all $\sigma \in \operatorname{Gal}(\overline{k}/k)$. \end{theorem} \section{Hyperelliptic curves}\label{sect:HypCurves} By a hyperelliptic curve over a field $k$ we mean a smooth projective curve $C$ over $k$, of positive genus $g$, and such that the canonical morphism maps $C$ to a genus-0 curve. {We observe that curves of genus 1 are not hyperelliptic with this definition, because for such curves the canonical morphism maps $C$ to a point.} Also notice that some authors might call the curves we consider \textit{geometrically hyperelliptic}, reserving the term hyperelliptic for curves given by a hyperelliptic model over $k$ (see Equation \eqref{eq:HyperellipticModel} below). We also remark that we shall often use affine models of (Zariski-open subsets of) $C$: given an affine curve, there is, up to isomorphism, precisely one smooth projective curve with the same function field, so we can make this identification without loss of generality (as our base field is perfect, we need not to worry about the distinction between smooth and regular). If $C$ is a hyperelliptic curve, one knows that the canonical morphism is 2-to-1, and $C$ admits an involution $\iota :C \to C$ (the hyperelliptic involution) that preserves its fibers. When the genus-0 quotient $C/\langle \iota \rangle$ is isomorphic to $\mathbb{P}^1$ over $k$ (and not just over $\overline{k}$), and provided that $\operatorname{char}(k) \neq 2$, the curve $C$ admits a $k$-model of the form \begin{equation}\label{eq:HyperellipticModel} y^2=f(x), \end{equation} where $f(x)$ is a polynomial of degree $2g+1$ or $2g+2$. A model as in \eqref{eq:HyperellipticModel} will be called a hyperelliptic model over $k$. Notice that not all hyperelliptic curves defined over $k$ admit a hyperelliptic model over $k$; however, a result of Mestre shows that this is the case for all hyperelliptic curves of even genus. \begin{lemma}[{\cite[§2.1]{MR1106431}}]\label{isos} Let $C/k$ be a hyperelliptic curve (where $\operatorname{char}(k) \neq 2$). If the genus of $C$ is even, then $C$ admits a hyperelliptic model defined over $k$. 
\end{lemma} \begin{remark}\label{rem-fields} Over finite fields, the field of moduli of any hyperelliptic curve is a field of definition and moreover of hyperelliptic definition \cite{LR, LRS}. In characteristic zero, for elliptic curves we know that the field of moduli is always a field of definition and of hyperelliptic definition. For genus $2$, and for any hyperelliptic curve of even genus, the fields of definition and hyperelliptic definition coincide, but they are not necessarily equal to the field of moduli (this is measured by Mestre's obstruction) \cite{LR, LRS}. We find the first example for which a field of definition is not a field of hyperelliptic definition in the genus $3$ case \cite{Hug}. \end{remark} Even when a hyperelliptic model is not available (necessarily in the odd genus case), the fact that the quotient $C/\langle \iota \rangle$ is of genus 0 implies the existence of a specific kind of $k$-model for $C$, which we now describe: \begin{lemma}\label{hypmodel} Let $C/k$ be a hyperelliptic curve of odd genus $g$ (over a field $k$ of characteristic different from 2). There exists a $k$-rational model \begin{equation}\label{eq:NonHyperellipticModel} C:\, \begin{cases} t^2=f(x,y,z)\\z^2=ax^2+by^2 \end{cases} \subseteq \mathbb{P}_{1,1,1,\frac{g+1}{2}}(k), \end{equation} where $f\in k[x,y,z]$ is a homogeneous polynomial of degree $g+1$ and $\mathbb{P}_{1,1,1,\frac{g+1}{2}}(k)$ is a weighted projective space over $k$ (in the variables $x,y,z,t$, with weights $1,1,1,\frac{g+1}{2}$). Moreover, $C$ has a hyperelliptic model over $k$ if and only if the quaternion algebra $\left( \frac{a,b}{k}\right)$ is trivial. \end{lemma} \begin{proof} Let $\iota$ be the hyperelliptic involution of $C$. By definition of a hyperelliptic curve, the canonical morphism gives a 2-to-1 map $C \to C/\langle \iota \rangle$. As $C/ \langle \iota \rangle$ has genus $0$, it is $k$-isomorphic to a curve given by a homogeneous equation of the form $ax^2+by^2=z^2$ for some $a,b\in k$. The quotient map has degree $2$, so $C$ is given by a model \[C:\, \begin{cases} t^2=f(x,y,z)\\z^2=ax^2+by^2 \end{cases}. \] Since $C$ has genus $g$, the map $C \to C /\langle \iota \rangle $ ramifies at $2g+2=2\cdot\text{deg}(f)$ points, and $f$ must have degree $g+1$. This establishes the first part of the lemma. As for the second part, recall that a curve of genus $\geq 2$ {admits at most one 2-to-1 map to a genus 0 curve (that is, $C$ is hyperelliptic in at most one way)}. If $C/k$ admits a hyperelliptic model over $k$, then by definition it {admits a 2-to-1 map to $\mathbb{P}^1_k$}, and since it also {admits a 2-to-1 map to $z^2=ax^2+by^2$} this latter quadric must be isomorphic to $\mathbb{P}^1$ over $k$, which happens precisely when the quaternion algebra $\left( \frac{a, b}{k}\right)$ is trivial. Conversely, suppose that the quaternion algebra is trivial: then $z^2=ax^2+by^2$ has a rational point, so we can find three new variables $x', y', z'$ which are (invertible) linear functions of $x,y,z$ and satisfy $z'^2 = x'y'$. In terms of these new variables, $C$ is given by a model \[ \begin{cases} t^2=F \left(x',y',z'\right)\\ z'^2=x'y' \end{cases}. \] Restricting to the affine chart $z'=1$ and replacing $y'$ by $1/x'$ in the first equation we find the model \[ t^2 = F\left( x', 1/x', 1 \right); \] multiplying by $x'^{g+1}$ and setting $w:=t x'^{(g+1)/2}$ we finally get the hyperelliptic model \[ w^2 = t^2 x'^{g+1} = x'^{g+1} F\left( x', 1/x', 1 \right) =: H(x'). 
\] \end{proof} In the course of the proof of Lemma \ref{hypmodel} we have seen how to construct a hyperelliptic model for a (odd genus) curve given by a model as in \eqref{eq:NonHyperellipticModel}, if such a hyperelliptic model exists. For the inverse transformation, if we start with $C \, : y^2=f(x)=\sum_{n=0}^{2g+2} a_nx^n$, with $a_{2g+2}$ not necessarily nonzero, then $C$ admits the following model of the form \eqref{eq:NonHyperellipticModel}: \begin{equation}\label{bigmodel} \begin{cases} t^2=\sum_{n=0}^{g+1} a_n y'^{g+1-n}z'^n + \sum_{n=g+2}^{2g+2} a_n x'^{n-g-1}z'^{2g+2-n} \\ z'^2 = x'y' \end{cases}. \end{equation} \subsection{Automorphisms of hyperelliptic curves}\label{sect:Automorphisms} We start by considering automorphisms of curves given by hyperelliptic models. It is well-known that if the genus $g$ is at least 2, then all the isomorphisms from a curve $C$ to a curve $C'$, both given by hyperelliptic models as in \eqref{eq:HyperellipticModel}, are of the form \begin{equation}\label{iso0} \phi : (x,y)\mapsto \left(\frac{\alpha x+\beta}{\gamma x+\delta},y\frac{e}{(\gamma x+\delta)^{g+1}}\right) \end{equation} for some $\alpha,\,\beta,\,\gamma,\,\delta,\,e\in\bar{k}$ (see for example \cite[Proposition 3.1.1]{Hug} or \cite[§1.5.1]{MR3207427}). \begin{lemma}\label{superembedding0} Let $r$ be an integer different from $\frac{g+1}{2}$ and set $D=|g+1-2r|$. \begin{enumerate}[(a)] \item For any automorphism $\phi$ of $C$ there exist $\alpha', \beta', \gamma', \delta' \in \overline{k}$ such that $\phi$ can be represented as \begin{equation}\label{iso} \phi:(x,y)\mapsto \left(\frac{\alpha' x+\beta'}{\gamma' x+\delta'},y\frac{(\alpha'\delta'-\beta'\gamma')^r}{(\gamma' x+\delta')^{g+1}}\right). \end{equation} The matrix $\begin{pmatrix} \alpha' & \beta' \\ \gamma' & \delta' \end{pmatrix}$ is well-defined up to multiplication by a $D$-th root of unity in $\overline{k}$. \item The map \[ \begin{array}{cccc} \operatorname{Aut}(C) & \to & \operatorname{GL}_2(\overline{k})/\mu_D(\overline{k}) \\ \phi & \mapsto & \begin{pmatrix} \alpha' & \beta' \\ \gamma' & \delta' \end{pmatrix}, \end{array} \] where $\alpha',\beta',\gamma',\delta'$ are as above, is a Galois-equivariant embedding. \end{enumerate} \end{lemma} \begin{proof} For part (a), start with a representation of $\phi$ as in Equation \eqref{iso0}. Set $\Delta=\alpha\delta-\beta\gamma$ and take $(\alpha',\,\beta',\,\gamma',\,\delta')=\lambda(\alpha,\,\beta,\,\gamma,\,\delta)$, where $\lambda$ is any solution to the equation $\lambda^{2r-g-1}=e\times\Delta^{-r}$. Uniqueness of the representation up to $D$-th roots of unity and part (b) of the lemma are immediate to check. \end{proof} \begin{corollary}\label{superembedding} Let $C$ be a hyperelliptic curve of genus $g$ admitting a hyperelliptic model over $k$. If $g$ is odd, there exists a Galois-equivariant embedding $$ \operatorname{Aut}(C)\hookrightarrow\operatorname{GL}_2(\overline{k})/\{\pm1\}; $$ if $g$ is even, there exists a Galois-equivariant embedding $$ \operatorname{Aut}(C)\hookrightarrow\operatorname{GL}_2(\overline{k}). $$ \end{corollary} \begin{proof} It suffices to take $r=(g-1)/2$ (for $g$ odd) and $r=g/2$ (for $g$ even) in the previous lemma. \end{proof} We now discuss the case of $C$ not being defined by a hyperelliptic model. Let $$C:\, \begin{cases} t^2=f(x,y,z)\\z^2=ax^2+by^2 \end{cases} $$ be a hyperelliptic curve of odd genus $g$ over a field $k$. 
To describe the automorphisms of $C$, we start by recalling the fundamental fact that the hyperelliptic involution is unique and lies in the center of $\operatorname{Aut}(C)$, see \cite[III.7.9]{RiemannSurfaces}. Notice that any automorphism of $C$ induces an automorphism of $C / \langle \iota \rangle =: \mathcal{L}$. It is easy to see that $\mathcal{L}$ can be embedded in $\mathbb{P}^2$, and every automorphism of $\mathcal{L}$ lifts to an automorphism of $\mathbb{P}^2$: one way to check this is to identify $\mathcal{L}$ with its anticanonical embedding, and notice that every automorphism of $\mathcal{L}$ sends the anticanonical sheaf $O(2)$ to itself, hence it induces a well-defined automorphism of $\mathbb{P}H^0(C,O(2))^\vee=\mathbb{P}^2$. Hence for every $k$-automorphism $\varphi$ of $C$ we have a corresponding element of $\operatorname{PGL}_3(k)=\operatorname{Aut}(\mathbb{P}^2_k)$, and in particular on the (homogeneous) coordinates $[x,y,z]$ the action of $\varphi$ is given by \[ [x,y,z] \mapsto \left[ M_{11}x+M_{12}y+M_{13}z, M_{21}x+M_{22}y+M_{23}z, M_{31}x+M_{32}y+M_{33}z \right] \] for some $M=(M_{ij})_{1 \leq i,j \leq 3} \in \operatorname{GL}_3(k)$. {Using the fact that the hyperelliptic involution is unique, and hence stable under $\varphi$,} we also have $\iota^* \varphi^*(t)=-\varphi^*(t)$, so (since $\iota$ acts trivially on $x,y,z$ and sends $t$ to $-t$) we must have $\varphi^*(t) = t g(x,y,z)$ for some $g(x,y,z)$. Since the image of $[x,y,z,t]$ must still lie on $C$ we easily get the condition $\varphi^*(t)=et$ for some $e \in k^\times$: indeed, we must have $\varphi^*(t)^2 = f(\varphi^* (x),\varphi^* (y),\varphi^* (z))$; comparing the degree in $x,y,z$ of the two sides of this equality shows that $g$ is a constant. \section{Computing equations of twists}\label{method} In this section we describe our algorithms to compute twists of hyperelliptic curves and prove their correctness. In what follows $k$ is a perfect field of characteristic different from 2. We first treat the even genus case, which is easier than the odd genus one since (thanks to Lemma \ref{isos}) we always have a hyperelliptic model over the field of definition for the curve. \subsection{The even genus case}\label{sect:EvenGenus} The algorithm is the following. \noindent\fbox{\begin{minipage}{39.5em} \begin{algorithm}\label{algo:MainEvenGenus} (For computing twists of hyperelliptic curves of even genus) \textbf{Input:} A hyperelliptic curve $C/k$ of even genus given by a hyperelliptic equation $y^2=f(x)$, and a cocycle $\xi:\operatorname{Gal}(\overline{k}/k) \to \operatorname{Aut}(C)$ factoring through a finite extension $L/k$. \textbf{Output:} A curve $C'/k$ and a $\overline{k}$-isomorphism $\phi:\,C'\rightarrow C$ such that $\xi(\sigma)=\phi\circ\,^{\sigma}\phi^{-1}$ for all $\sigma \in \operatorname{Gal}(\overline{k}/k)$. \begin{enumerate} \item Compose $\xi$ with the embedding $\operatorname{Aut}(C) \hookrightarrow \operatorname{GL}_2(\overline{k})$ given by Corollary \ref{superembedding} to obtain a new cocycle $\psi$ in $\operatorname{Z}^1(\operatorname{Gal}(\overline{k}/k),\operatorname{GL}_2(\overline{k} ))$. \item Using Hilbert's theorem 90, compute $M \in \operatorname{GL}_2(\overline{k})$ such that ${\psi}(\sigma)=M \cdot \,^\sigma M^{-1}$. \item Write $M=\begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$ and let $ C' : y^2 = \det(M)^{-g} (\gamma x+\delta)^{2g+2} f\left( \frac{\alpha x+\beta}{\gamma x+\delta} \right). 
$ Define \[ \begin{array}{cccc} \phi : & C' & \to & C \\ & (x,y) & \mapsto & \left(\frac{\alpha x+\beta}{\gamma x+\delta}, y\frac{(\alpha\delta-\beta\gamma)^{g/2}}{(\gamma x+\delta)^{g+1}} \right) \end{array} \] $C'/k$ is the desired twist and the isomorphism $\phi:\,C'\rightarrow C$ realizes the cocycle $\xi$. \end{enumerate} \end{algorithm} \end{minipage}} In the light of Corollary \ref{superembedding} the correctness of the algorithm is trivial (one checks easily that the model of $C'$ thus found has $k$-rational coefficients). From the computational point of view, the important issues are computing the lift of the cocycle in step (1), for which see the proof of Lemma \ref{superembedding0}, and the computation of $M$ using Hilbert's Theorem $90$, see subsection \ref{sect:Hilbert90}. \subsection{The odd genus case}\label{sect:OddGenus} Let $C$ be a hyperelliptic curve of odd genus $g$ defined over a perfect field of characteristic different from $2$. We now describe our approach to computing twists of $C$ in this case, starting -- in the next subsection -- with the problem of finding models of twists of conics. \subsubsection{Computing twists of conics}\label{sect:TwistingConics} Our approach relies on two fundamental ideas. The first is that working with the anti-canonical embedding of a conic $Q/k$ allows us to interpret cocycles with values in $\operatorname{Aut}(Q)$ as cocycles with values in $\operatorname{PGL}_3(\overline{k})$. The second is that -- as we shall show -- such cocycles can then be lifted to cocycles with values in $\operatorname{GL}_3(\overline{k})$, thus removing the ambiguity coming from the projective quotient. Notice that it is not true in general that one can lift cocycles with values in $\operatorname{PGL}_3(\overline{k})$ to cocycles with values in $\operatorname{GL}_3(\overline{k})$, but this will turn out to be the case for the specific cocycles we need to work with. \begin{theorem}\label{thm-cle} Let $Q$ be a smooth genus-0 curve over a field $k$, embedded as a conic section in $\mathbb{P}^2_k$. There is an injective, Galois-equivariant map $\operatorname{Aut}(Q) \hookrightarrow \operatorname{Aut}(\mathbb{P}_{\overline{k}}^2) \cong \operatorname{PGL}_3(\overline{k})$. Furthermore, any cocycle $\xi : \operatorname{Gal}(\overline{k}/k) \to \operatorname{Aut}(Q) \hookrightarrow \operatorname{PGL}_3(\overline{k})$ can be lifted to a cocycle $\tilde{\xi}: \operatorname{Gal}(\overline{k}/k) \to \operatorname{GL}_3(\overline{k})$. \end{theorem} \begin{proof} The conic $Q$ is embedded in $\mathbb{P}^2$ via the linear series corresponding to $O(2)$. Any automorphism of $Q_{\overline{k}}$ pulls $O(2)$ back to itself (since there is only one equivalence class of divisors of degree 2), hence it induces an automorphism of $\mathbb{P}^2_{\overline{k}}$ which, upon restriction to $Q$, is the automorphism we started with. This proves the first statement. For the second part, notice that given a class in $\operatorname{PGL}_3(\overline{k})$ there are only two matrices $\pm M \in \operatorname{GL}_3(\overline{k})$ representing that class and whose action on the ring $\overline{k}[x,y,z]$ preserves the equation of $Q$. 
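\par (Let us spell out this count, using the running assumption $\operatorname{char}(k)\neq 2$: if $q$ is the quadratic form cutting out $Q$ and $M\in\operatorname{GL}_3(\overline{k})$ induces an automorphism of $\mathbb{P}^2_{\overline{k}}$ preserving $Q$, then $q\circ M$ is a quadratic form vanishing on the smooth conic $Q$, hence $q\circ M=\lambda\, q$ for some $\lambda\in\overline{k}^\times$; after replacing $M$ by $\mu M$ with $\mu^2=\lambda^{-1}$ we may assume $q\circ M=q$, and if both $M$ and $\mu M$ preserve $q$ exactly then $\mu^2=1$, that is $\mu=\pm 1$.)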
This gives us a way to uniquely determine an element of $\operatorname{GL}_3(\overline{k})/\mu_2(\overline{k})$ from an element of $\operatorname{Aut}(Q) \subseteq \operatorname{PGL}_3(\overline{k})$, thus providing a lift of the embedding $\operatorname{Aut}(Q) \hookrightarrow \operatorname{PGL}_3(\overline{k})$ to $\operatorname{Aut}(Q) \hookrightarrow \operatorname{GL}_3(\overline{k})/\mu_2(\overline{k})$. Finally we notice that $\operatorname{GL}_3(\overline{k})/\mu_2(\overline{k})\simeq\operatorname{GL}_3(\overline{k})$ by the isomorphism $A\mapsto\frac{1}{\operatorname{det}(A)}A$. Now it is easy to check that the composition $\operatorname{Aut}(Q) \hookrightarrow \operatorname{GL}_3(\overline{k})$ is Galois-equivariant. \end{proof} \begin{remark}\label{CoolEmbedding} If $Q=\mathbb{P}^1$ and it is embedded in $\mathbb{P}^2$ as $Q_0:\,z^2=xy$, the previous lift is given by the map $\psi:\,\operatorname{Aut}(\mathbb{P}^1)\simeq \operatorname{PGL}_2(\overline{k}) \hookrightarrow\operatorname{GL}_3(\overline{k})$ defined by $$ \left[\begin{pmatrix} \alpha&\beta \\ \gamma& \delta \end{pmatrix}\right]\mapsto\frac{1}{\alpha\delta-\beta\gamma}\begin{pmatrix} \alpha^2& \beta^2 & 2\alpha\beta \\ \gamma^2 & \delta^2 & 2\gamma\delta \\ \alpha\gamma & \beta\delta & \beta\gamma+\alpha\delta\end{pmatrix}. $$ \end{remark} \begin{remark}\label{explicit-lift} In general, if $Q:\,ax^2+by^2=z^2$, we consider the isomorphism $$B=\begin{pmatrix}\sqrt{a} & \sqrt{-b} & 0\\ \sqrt{a} & -\sqrt{-b} & 0\\ 0 & 0 & 1 \end{pmatrix}:\,Q\rightarrow Q_0.$$ The embedding of $Q$ in $\mathbb{P}^2$ in Theorem \ref{thm-cle} can then be obtained by choosing as basis of $\operatorname{H}^0(Q_{\overline{k}},O(2))$ the three sections $\begin{pmatrix} x \\ y \\ z \end{pmatrix} = B^{-1} \begin{pmatrix} (dt)^* \\ t^2 (dt)^* \\ t(dt)^* \end{pmatrix}$; notice that by definition of $B$ we have $ax^2+by^2=z^2$, so $Q$ is again identified with its anticanonical embedding. The corresponding Galois-equivariant embedding is \[ \operatorname{Aut}(Q)\xrightarrow[\text{by }B]{\text{conj.}}\operatorname{Aut}(Q_0)\xrightarrow{\sim}\operatorname{Aut}(\mathbb{P}^1)\xrightarrow{\psi}\operatorname{GL}_3(\overline{k})\xrightarrow[\text{by }B^{-1}]{\text{conj.}}\operatorname{GL}_3(\overline{k}), \] where we consider $\operatorname{Aut}(Q)$ and $\operatorname{Aut}(Q_0)$ as subgroups of $\operatorname{Aut}(\mathbb{P}^2_{\overline{k}}) \cong \operatorname{PGL}_3(\overline{k})$. \end{remark} \subsubsection{Hilbert's Theorem 90}\label{sect:Hilbert90} Hilbert's Theorem $90$ is the statement that, for every integer $n \geq 1$ and for every Galois extension of fields $L/k$, the first cohomology set $\operatorname{H}^1(\operatorname{Gal}(L/k),\operatorname{GL}_n(L))$ is trivial. Thus, given a cocycle $\overline{\xi}:\,\operatorname{Gal}(L/k)\to\operatorname{GL}_n(L)$, there exists a matrix $M \in \operatorname{GL}_n(L)$ such that $\overline{\xi}(\sigma)=M \cdot\,^\sigma M^{-1}$. Moreover, this matrix $M$ can be explicitly computed \cite[p.159, Prop. 3]{SerCL}, for instance by choosing a sufficiently generic $M_0 \in \operatorname{GL}_n(L)$ and setting \begin{equation}\label{eq:Thm90} M:=\sum_{\sigma \in \operatorname{Gal}(L/k)} \overline{\xi}(\sigma) \cdot {}^{\sigma}M_0. \end{equation} Another approach (used in Section $3$ in \cite{Lor15}) is to choose the columns of the matrix $M$ to be a $k$-basis of the $k$-vector subspace of $L^n$ fixed by the action of $\operatorname{Gal}(L/k)$ given by $$ \sigma:\,v\mapsto \overline{\xi}(\sigma) \cdot {}^{\sigma}v. 
$$ \subsubsection{Twisting the underlying conic} We now describe our algorithm for twisting hyperelliptic curves of odd genus. Recall that our input consists of a cocycle $\xi : \operatorname{Gal}(L/k) \to \operatorname{Aut}_L(C)$; composing $\xi$ with the Galois-equivariant embedding of $\operatorname{Aut}(C)$ in $\operatorname{GL}_3(L)$ provided by Theorem \ref{thm-cle} (and made explicit in Remark \ref{explicit-lift}) we obtain a cocycle $\overline{\xi} : \operatorname{Gal}(L/k) \to \operatorname{GL}_3(L)$. Applying the explicit form of Hilbert's theorem 90 just described we obtain a matrix $M \in \operatorname{GL}_3(L)$ such that $\overline{\xi}(\sigma) = M \cdot {}^\sigma M^{-1}$ for all $\sigma \in \operatorname{Gal}(L/k)$. We set $Q'(x,y,z)=Q(M(x,y,z))$; using the fact that the matrix $\overline{\xi}(\sigma)$ preserves the equation of $Q$ for every $\sigma \in \operatorname{Gal}(L/k)$, one checks easily that $Q'(x,y,z)$ has $k$-rational coefficients. Our twist will be described in terms of $Q'$ and of the polynomial $f(M(x,y,z))$; the latter, however, is in general not defined over $k$. In order to resolve this inconvenience, we show the following: \begin{proposition}\label{rationalmodel} There exist $v\in L^\times$ and $h\in L[x,y,z]$ such that $vf(M(x,y,z))+h\cdot Q'$ has coefficients in $k$. \end{proposition} \begin{proof} As $M \cdot {}^\sigma M^{-1}$ induces an automorphism of $C$, we see that for every $\sigma$ there exist $a_\sigma \in L^\times$ and $p_\sigma \in L[x,y,z]$ such that ${}^\sigma(f(M(x,y,z)))=a_\sigma f(M(x,y,z))+(p_\sigma \cdot Q)(M(x,y,z))$. It is easy to check that the map $\sigma \mapsto a_\sigma$ is a (continuous) cocycle of $\operatorname{Gal}(\overline{k}/k)$ with values in $\overline{k}^\times$. Let $v \in \overline{k}^\times$ be such that $a_\sigma=v/{}^\sigma v$ (such a $v$ can again be computed explicitly thanks to the effective form of Hilbert's theorem 90: notice that since $a_\sigma$ takes values in $L^\times$, $v$ can also be taken in $L^\times$). Now let $m \in L$ be an element such that $\operatorname{tr}_{L/k}(m)=1$ (if $\operatorname{char}(k)$ does not divide $[L:k]$ one can take $m=[L:k]^{-1}$) and set $h(x,y,z)=\sum_{\tau}\,^{\tau}m \, ^{\tau}v \, p_\tau(M(x,y,z))$. Multiply the identity ${}^\sigma(f(M(x,y,z)))=a_\sigma f(M(x,y,z))+(p_\sigma \cdot Q)(M(x,y,z))$ by ${}^\sigma(mv)$ and sum over $\sigma \in \operatorname{Gal}(L/k)$: \[ \sum_{\sigma} {}^\sigma(mv) {}^\sigma(f(M(x,y,z))) = \sum_{\sigma} \left( {}^\sigma(mv) a_\sigma f(M(x,y,z)) + {}^\sigma(mv) (p_\sigma \cdot Q)(M(x,y,z)) \right). \] The left hand side has coefficients in $k$; we now show that the right hand side is $vf(M(x,y,z))+h \cdot Q'$. Indeed, recalling that $a_\sigma = v/{}^\sigma v$ we obtain \[ \begin{aligned} \sum_{\sigma} & \left( {}^\sigma(mv) a_\sigma f(M(x,y,z)) + {}^\sigma(mv) (p_\sigma \cdot Q)(M(x,y,z)) \right) = \\ & = \sum_{\sigma} {}^\sigma m \, vf(M(x,y,z)) + Q'(x,y,z)\sum_{\sigma} {}^\sigma m {}^\sigma v \; p_\sigma(M(x,y,z)) \\ & = \left(\sum_{\sigma} {}^\sigma m \right) vf(M(x,y,z)) + h(x,y,z) \, Q'(x,y,z) \\ & = vf(M(x,y,z)) + h(x,y,z) \, Q'(x,y,z). \end{aligned} \] \end{proof} \begin{corollary}\label{corollary:AlmostCorrectCocycle} Let $g(x,y,z) = vf(M(x,y,z))+h\cdot Q'$ be as above. The curve \[ C_0 : \begin{cases} t^2 = g(x,y,z)\\ Q'(x,y,z)=0 \end{cases} \] is a $k$-twist of $C$. An isomorphism $\psi_0:C_0 \to C$ is given by $(x,y,z,t) \mapsto (M(x,y,z), t/\sqrt{v})$. 
For all $\sigma \in \operatorname{Gal}(\overline{k}/k)$, the automorphisms $\psi_0 \circ {}^\sigma \psi_0^{-1}$ and $\xi(\sigma)$ of $C$ differ at most by the hyperelliptic involution. \end{corollary} \begin{proof} The only nontrivial statement is that $\psi_0 \circ {}^\sigma \psi_0^{-1}$ and $\xi(\sigma)$ differ at most by the hyperelliptic involution, and this follows from the fact that $\xi(\sigma)$ and $M \cdot \, {}^\sigma M^{-1}$ induce the same automorphism on the conic $C/\langle \iota \rangle$. \end{proof} \subsubsection{Pinning down the quadratic twist}\label{sect:FinishProof} In view of Corollary \ref{corollary:AlmostCorrectCocycle}, we can define a new cocycle $\xi_0(\sigma):=\psi_0 \circ {}^\sigma \psi_{0}^{-1}$, which differs from $\xi(\sigma)$ at most by the hyperelliptic involution. Define $i:\operatorname{Gal}(\overline{k}/k) \to \langle \iota \rangle$ by the prescription $ \xi_0(\sigma)=\xi(\sigma) i(\sigma). $ \begin{lemma}\label{quadtwist} The map $i$ is a group homomorphism. By Galois correspondence, its kernel defines an at most quadratic extension $k(\sqrt{e})$ of $k$. \end{lemma} \begin{proof} Since $\xi$ and $\xi_0$ are both cocycles, so is $\sigma \mapsto i(\sigma)=\xi(\sigma)^{-1}\xi_0(\sigma) \in \langle \iota \rangle$. Since the Galois action on $\langle \iota \rangle$ is trivial, a cocycle is the same thing as a group homomorphism. The second statement follows from the fact that the image of $i$ has order at most 2. \end{proof} \begin{theorem}\label{thm:RestatementMain} Let $e$ be as in the previous lemma. The curve \[ C' \, : \begin{cases} et^2=g(x,y,z) \\ Q(M(x,y,z))=0 \end{cases} \] is the $k$-twist of $C$ corresponding to the cocycle $\xi$. The map $\phi : C' \to C$ that sends $(x,y,z,t)$ to $(M(x,y,z), \sqrt{e/v}t)$ realizes the cocycle $\xi$. \end{theorem} \begin{proof} We have $\phi=\psi_0 \circ \psi_1$, where $\psi_1$ is the isomorphism $C' \to C_0$ given by $\psi_1(x,y,z,t)=(x,y,z,\sqrt{e}t)$. It is clear that $C'$ and $C_0$ are isomorphic through $\psi_1$, so (in view of Corollary \ref{corollary:AlmostCorrectCocycle}) $C'$ is a $k$-twist of $C$. It remains to show that $\phi$ realizes the cocycle $\xi$. Notice that by construction one has $\psi_1 \circ {}^\sigma \psi_1^{-1}=i(\sigma)$, where $i$ is the homomorphism of the previous lemma. Further observing that the image of $i$ is contained in $\langle \iota \rangle$, hence it is central in $\operatorname{Aut}(C)$, we have \[ \begin{aligned} \phi \circ {}^\sigma \phi^{-1} & = \psi_0 \psi_1 \circ {}^\sigma (\psi_0 \psi_1)^{-1} \\ & = \psi_0 \circ \psi_1 \circ {}^\sigma \psi_1^{-1}\circ {}^\sigma \psi_0^{-1} \\ & = \psi_0 \circ i(\sigma) \circ {}^\sigma \psi_0^{-1} \\ & = \psi_0 \circ {}^\sigma \psi_0^{-1} \circ i(\sigma) \\ & = \xi_0(\sigma) \circ i(\sigma) \\ & = \xi_0(\sigma) \circ i(\sigma)^{-1} =\xi(\sigma), \end{aligned} \] where $i(\sigma)=i(\sigma)^{-1}$ since $\iota$ is of order 2. \end{proof} Putting everything together, we have proven the correctness of the following algorithm: \noindent\fbox{\begin{minipage}{39.5em} \begin{algorithm}\label{algo:Main} (For computing twists of hyperelliptic curves of odd genus) \textbf{Input:} A hyperelliptic curve $C/k$ and a cocycle $\xi:\operatorname{Gal}(\overline{k}/k) \to \operatorname{Aut}(C)$ factoring through a finite extension $L/k$. \textbf{Output:} A curve $C'/k$ and a $\overline{k}$-isomorphism $\phi:\,C'\rightarrow C$ such that $\xi(\sigma)=\phi\circ\,^{\sigma}\phi^{-1}$ for all $\sigma \in \operatorname{Gal}(\overline{k}/k)$. 
\begin{enumerate} \item Find a model of $C$ of the form \eqref{eq:NonHyperellipticModel} and let $Q:\,z^2=ax^2+by^2$. \item Project $\xi$ into $Z^1(\operatorname{Gal}(\overline{k}/k),\operatorname{Aut}(Q))$. \item Compose with the embedding of $\operatorname{Aut}(Q)$ in $\operatorname{GL}_3(\overline{k})$ given by Remark \ref{explicit-lift} to obtain a cocycle $\overline{\xi}:\,\operatorname{Gal}(\overline{k}/k)\rightarrow\operatorname{GL}_3(\overline{k})$. \item Use Hilbert's theorem 90 to compute a matrix $M \in \operatorname{GL}_3(\overline{k})$ such that $\overline{\xi}(\sigma)=M\, \cdot \, ^\sigma \hspace{-1mm} M^{-1}$. \item Use Proposition \ref{rationalmodel} to compute a $k$-rational model of $$ \begin{cases} t^2=f(M(x,y,z))\\ Q'(x,y,z):=Q(M(x,y,z))=0\end{cases} $$ of the form $$ C_0:\,\begin{cases} t^2=g(x,y,z)=vf(M(x,y,z))+h(x,y,z)Q'(x,y,z)\\ Q'(x,y,z)=0\end{cases}. $$ \item Compute $e$ as in Lemma \ref{quadtwist}. The twist of $C$ by $\xi$ is $$ C':\,\begin{cases} et^2=g(x,y,z)\\ 0=Q'(x,y,z)\end{cases}. $$ {The map $\phi:\,(x,y,z,t) \mapsto (M(x,y,z), \sqrt{e/v}t)$ is an isomorphism between $C'$ and the model of $C$ chosen in part (1), and it realizes $\xi$.} \end{enumerate} \end{algorithm} \end{minipage}} \section{Examples}\label{sect:Examples} In this section we explicitly compute some twists of hyperelliptic curves; {this will show how the different parts of the algorithm come together, and why all the steps are necessary.} \subsection{Example 1}\label{sect:Example2} Consider the curve $C:y^2=x^9-x\; /\mathbb{Q}$ of even genus $g=4$, and the cocycle $ \xi : \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}) \to \operatorname{Aut}(C) $ which factors through $\operatorname{Gal}(\mathbb{Q}(i)/\mathbb{Q})$ and sends the generator $\tau$ of this group to the automorphism \[ \begin{array}{cccc} \alpha: & C & \to & C \\ & (x,y) & \mapsto & \left(\frac{1}{x}, \frac{iy}{x^5} \right) \end{array}. \] We apply Algorithm \ref{algo:MainEvenGenus} to this situation. \textbf{Steps 1.} The M\"obius transformation $x \mapsto \frac{1}{x}$ is represented by the projective class of matrices of the form $\begin{pmatrix} 0 & \lambda \\ \lambda & 0 \end{pmatrix}$. To find a matrix in $\operatorname{GL}_2(\mathbb{Q}(i))$ which represents $\alpha$ we use the recipe given in the proof of Lemma \ref{superembedding0}: the lift is $\lambda\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ with $\lambda^{2r-5}=i \cdot (-1)^r$ (here $r=g/2=2$), hence $\lambda=-i$. Thus in particular the lift of our cocycle to $\operatorname{GL}_2(\mathbb{Q}(i))$ sends $\tau$ to $\begin{pmatrix} 0 & -i \\ -i & 0 \end{pmatrix}$. \textbf{Step 2.} We use the explicit version of Hilbert's Theorem 90 (see subsection \ref{sect:Hilbert90}) with $M_0=\operatorname{Id}$ to get $M=\begin{pmatrix}1 & -i \\ -i & 1 \end{pmatrix}$. \textbf{Step 3.} We have \[ C' : \; y^2 = \det(M)^{-4} (-i x+1)^{10}f\left( \frac{x-i}{-ix+1} \right) = -x^9+6 x^7-6 x^3+x \] and $M$ induces the isomorphism \[ \begin{array}{cccc} \phi :\, & C' & \to & C \\ & (x,y) & \mapsto & \left(\frac{x-i}{-ix+1},\frac{2^2y}{(-ix+1)^5} \right)=\left(i\frac{x-i}{x+i},\frac{4iy}{(x+i)^5}\right). \end{array} \] \subsection{Example 2} Consider the curve $C \, : y^2=x(x^6+3x^4+1)$ over $\mathbb{Q}$. The geometric automorphism group of $C$ is cyclic of order 4, generated by $\alpha : (x,y) \mapsto (-x,iy)$. 
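\par To make this explicit (a direct check, not needed in what follows): if $y^2 = x(x^6+3x^4+1)$, then $(iy)^2 = -x(x^6+3x^4+1) = (-x)\left((-x)^6+3(-x)^4+1\right)$, so $\alpha$ does define an automorphism of $C$; moreover $\alpha^2(x,y)=(x,-y)$ is the hyperelliptic involution, so $\alpha$ has order $4$.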
Since the automorphism $i \mapsto -i$ maps $\alpha$ to $\alpha^{-1}$, we can define a cocycle $\operatorname{Gal}(\mathbb{Q}(i)/\mathbb{Q}) \to \operatorname{Aut}(C)$ by sending the generator $\tau$ of $\operatorname{Gal}(\mathbb{Q}(i)/\mathbb{Q})$ to $\alpha$. Let us compute the twist of $C$ corresponding to this cocycle with Algorithm \ref{algo:Main}. \textbf{Step $1.$} A model as in \eqref{bigmodel} is given by $ \begin{cases} t^2=f(x,y,z):=x^3z+3x^2yz+y^3z \\ z^2=xy \end{cases}. $ \textbf{Steps $2$ and $3.$} The $2 \times 2$ matrix representing $\alpha$ in the hyperelliptic model is $\begin{pmatrix} \zeta_8 & 0 \\ 0 & -\zeta_8 \end{pmatrix}$. Applying the embedding of Remark \ref{CoolEmbedding} we obtain that the corresponding element in $\operatorname{GL}_3(\overline{\mathbb{Q}})$ is simply $\overline{\xi}(\tau):=\operatorname{diag}(-1,-1,1)$. Notice that this matrix has order 2 while $\alpha$ has order 4, but this is to be expected: our embedding only considers the automorphism induced on $C/\langle \iota \rangle$, and $\alpha$ squares to the hyperelliptic involution, which is trivial on $C/\langle \iota \rangle$. \textbf{Step 4.} We find $M=\operatorname{diag}(i,i,1)$. \textbf{Step 5.} We compute $(z^2-xy)(M(x,y,z))=(z^2-xy)(ix,iy,z)=z^2+xy$ and \[ \begin{aligned} f(M(x,y,z)) & =f(ix,iy,z) & =-i f(x,y,z); \end{aligned} \] it follows that ${}^{\tau}(f(M(x,y,z))=-f(M(x,y,z))$ so that $(a_\sigma)$ is the cocycle mapping $\tau \mapsto -1$ and $(p_\sigma)=0$. We can then take $v=i$, $h=0$, and $g(x,y,z)=if(M(x,y,z))=f(x,y,z)$. The intermediate curve $C_0$ is therefore $C_0 \, : \begin{cases} t^2=f(x,y,z) \\ z^2=-xy \end{cases}.$ \textbf{Step 6.} Let $\sigma_i$, for $i=1,3,5,7$, be the automorphism of $\mathbb{Q}(\zeta_8)/\mathbb{Q}$ which sends $\zeta_8$ to $\zeta_8^i$. We have that $\sigma_1, \sigma_5$ map to the identity in $\operatorname{Gal}(\mathbb{Q}(i)/\mathbb{Q})$, while $\sigma_3, \sigma_7$ map to $\tau$. One checks that the homomorphism $i$ of Lemma \ref{quadtwist} is trivial on $\sigma_1, \sigma_3$ and nontrivial on $\sigma_5, \sigma_7$. This means that we can take $e=-2$, because $\mathbb{Q}(\sqrt{-2})$ is the fixed field of $\sigma_3$. Finally, the curve we are looking for is \[ C': \, \begin{cases} -2t^2=f(x,y,z) \\ z^2=-xy \end{cases} \] and an isomorphism realizing the cocycle $\xi$ is $\phi \, : (x,y,z,t) \mapsto (ix,iy,z,(1+i)t)$. Since the conic $z^2=-xy$ obviously has rational points, $C'$ also admits a $\mathbb{Q}$-hyperelliptic model: one can check that such a model is given by $y^2=2x^7 + 6x^3 - 2x$, and on this model an isomorphism $C' \to C$ realising the cocycle is $(x,y) \mapsto \left(\frac{i}{x}, -\frac{1}{2}(i + 1)\frac{y}{x^4} \right)$. \subsection{Example 3} Finally, we give an example where $C$ admits a hyperelliptic model over its field of definition but one of its twists does not. Consider the curve $C:v^2=u^8+14u^4+1 \; /\mathbb{Q}$ and the cocycle $ \xi : \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}) \to \operatorname{Aut}(C) $ which factors through $\operatorname{Gal}(\mathbb{Q}(i)/\mathbb{Q})$ and sends the generator $\tau$ of this group to the automorphism \[ \begin{array}{cccc} \alpha: & C & \to & C \\ & (u,v) & \mapsto & \left( -\frac{1}{u}, -\frac{v}{u^4} \right) \end{array}. 
\] \textbf{Step 1.} $C$ admits the model $ \; C: \; \begin{cases} t^2=x^4+14x^2y^2+y^4 \\ xy=z^2 \end{cases} $. \textbf{Steps 2 and 3.} In these coordinates, $\alpha$ is given by $[x:y:z:t] \mapsto [-y:-x:z:-t]$, and we have a cocycle with values in $\operatorname{Aut}(C/\langle \iota \rangle)$ given by $\tau \mapsto \left[ \begin{pmatrix} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \right]$. Via Remark \ref{explicit-lift} we obtain a cocycle $\overline{\xi}$ with values in $\operatorname{GL}_3(\mathbb{Q}(i))$ which sends $\tau$ to the matrix $ \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}$. \textbf{Steps 4 and 5.} A possible choice of $M$ is $\begin{pmatrix}1+i & 1-i & 0 \\ 1-i & 1+i & 0 \\ 0 & 0 & 2i\end{pmatrix}$. We have $v=1$, and the curve of Corollary \ref{corollary:AlmostCorrectCocycle} is $ C_0:\,\begin{cases} X^2+Y^2+2Z^2=0 \\ t^2=48X^4+160X^2Y^2+48Y^4 \end{cases}. $ \textbf{Step 6.} The isomorphism $\varphi : [x:y:z:t] \mapsto [M(x,y,z),t]$ acts trivially on $t$, so we have $(\varphi \circ {}^\tau \varphi^{-1})(t)=t$, while $\xi(\tau)(t)=-t$. It follows that $e=-1$, and the twist of $C$ corresponding to $\xi$ is \[ C'\, : \begin{cases} X^2+Y^2+2Z^2=0 \\ -T^2=48X^4+160X^2Y^2+48Y^4 \end{cases} \] Notice that this curve does not have a hyperelliptic model over $\mathbb{Q}$, because the conic $X^2+Y^2+2Z^2=0$ has no $\mathbb{Q}$-rational points. It is also easy to check that the change of variables $A=X-Y, B=X+Y, C=2Z, D=T/4$ leads to the more symmetrical model $ \begin{cases}A^2+B^2+C^2=0 \\ -2D^2 = A^4+B^4+C^4\end{cases} $, which agrees with the result found by a different method in \cite{AWS2016}. \end{document}
math
\begin{document} \title{Multiple $T$-values with one parameter} \section*{Introduction} Multiple zeta values can be defined as the iterated integrals of the differential forms $dt/t$ and $dt/(1-t)$ on the real interval $[0,1]$. They are very important and central objects, in relation with many fields, including knot theory, quantum groups and perturbative quantum field theory \cite{waldschmidt, fresan}. They form an algebra $\mathscr{A}_{\mathsf{MZV}}$ over $\mathbb{Q}$, which has been studied a lot, and is conjecturally graded by the weight. Many variants of the multiple zeta values have been considered in the literature. Some of these variants are also numbers, for example when allowing various roots of unity as poles of the differential forms. Other variants are functions, with one or more arguments, such as multiple polylogarithms \cite{goncharov} or hyperlogarithms \cite{Panzer}. Among all these variations on a theme, a specific one, recently introduced by Kaneko and Tsumura in \cite[\S 5]{kaneko_tsumura_1} and further studied in \cite{kaneko_final}, deals with the iterated integrals of the forms $dt/t$ and $2dt/(1-t^2)$. The resulting numbers, called multiple $T$-values, also form an algebra $\mathscr{A}_{\mathsf{MTV}}$ over $\mathbb{Q}$ under the shuffle product. This new algebra has been less studied than the algebra of multiple zeta values. Conjecturally, there is an inclusion $\mathscr{A}_{\mathsf{MZV}} \subset \mathscr{A}_{\mathsf{MTV}}$. The present article introduces a new algebra $\mathscr{A}_{\mathsf{MTV}}c$ of iterated integrals, whose elements are functions of a parameter $c$. This algebra can be seen as a common deformation of both the algebras $\mathscr{A}_{\mathsf{MZV}}$ and $\mathscr{A}_{\mathsf{MTV}}$, in the following manner. For every admissible index $(k_1,\dots,k_r)$, there is a function $Z_c(k_1,\dots,k_r)$ in $\mathscr{A}_{\mathsf{MTV}}c$. These functions span the algebra $\mathscr{A}_{\mathsf{MTV}}c$, and their product is given by the usual shuffle rule, exactly as for multiple zeta values. One can evaluate the function $Z_c(k_1,\dots,k_r)$ when the parameter $c$ is a real number with $c < 1$. When $c=0$, one recovers the multiple zeta value $\zeta(k_1,\dots,k_r)$ and when $c=-1$, one recovers the multiple $T$-value $T(k_1,\dots,k_r)$. It follows that both $\mathscr{A}_{\mathsf{MZV}}$ and $\mathscr{A}_{\mathsf{MTV}}$ are quotient algebras of $\mathscr{A}_{\mathsf{MTV}}c$. We start here the study of $\mathscr{A}_{\mathsf{MTV}}c$. Our original motivation was to generalize both multiple zeta values and multiple $T$-values while keeping the duality relations. We therefore show that for every $c$, there is an involution of $\mathbb{P}^1$ that implies the duality relations for the functions $Z_c$. We extend a result of Kaneko and Tsumura relating a generating function of some $Z_c$ to the hypergeometric function ${}_2F_{1}$. Using computer experiments, we determine the first few dimensions of the graded pieces of the algebra $\mathscr{A}_{\mathsf{MTV}}c$, assuming that it is graded by the weight. We also propose a guess for the generating series of the graded dimensions of the algebra $\mathscr{A}_{\mathsf{MTV}}$. It seems that the algebra $\mathscr{A}_{\mathsf{MTV}}c$ is a new object, although it is difficult to be sure, given the very large number of articles related to multiple zeta values and their many variants. Let us also note that a related study can be seen in \S 4.2 of \cite{mastrolia}, where our main differential form appears in formula (4.47). 
\section{Definition and first properties} The letter $c$ will denote a parameter, either complex or real, not equal to $1$. In most of the article, we will assume that $c$ is real and $c < 1$. Consider the two differential forms: \begin{equation} \omega_0(t) = \frac{dt}{t} \quad\text{and}\quad\omega_{1}(t)=\frac{(1-c)dt}{(1-t)(1-ct)}. \end{equation} Note that one can also write \begin{equation} \label{omega1} \omega_{1}(t) = \frac{dt}{1-t} - \frac{c dt}{1-ct}. \end{equation} Because of the assumption $c < 1$, the only singularity of the differential form $\omega_1$ on the interval $[0,1]$ is therefore the simple pole at $1$. When $c=0$, these two differential forms become the two differential forms $dt/t$ and $dt/(1-t)$ whose iterated integrals are the classical multiple zeta values (MZV). When $c=-1$, they become the two differential forms $\Omega_0 = dt/t$ and $\Omega_1 = 2dt/(1-t^2)$ whose iterated integrals are Kaneko-Tsumura's multiple T-values (MTV) \cite{kaneko_final}. For general $c$, one can consider the iterated integrals of $\omega_0$ and $\omega_{1}$ as functions of the parameter $c$. We will use the definition \begin{equation} \label{iterated} I(\varepsilon_1, \dots, \varepsilon_k) = \mathop{\int\cdots\int}\limits_{0<t_1<\cdots <t_k<1} \omega_{\varepsilon_1}(t_1) \cdots \omega_{\varepsilon_k}(t_k), \end{equation} where each $\varepsilon_i$ is either $0$ or $1$, with $\varepsilon_1=1$ and $\varepsilon_k=0$ to ensure convergence. Using the standard conversion of indices, let us introduce the functions defined by \begin{equation} \label{conversion} Z_c(k_1,\dots,k_r) = I(1,0^{k_1-1},1,0^{k_2-1},\dots,1,0^{k_r-1}) \end{equation} for $r \geq 1$ with $k_i \geq 1$ and $k_r \geq 2$. Here powers of $0$ stand for repeated zeroes. We have used above the same conventions as Kaneko and Tsumura in \cite{kaneko_final}, so that the comparison with their results would be simple. By their definition as iterated integrals, the functions $Z_c$ satisfy the same shuffle product rule as multiple zeta values and multiple T-values. This is most easily described as a sum over the shuffle product of indices in the notation \eqref{iterated}. For example, \begin{equation} Z_c(2) Z_c(3) = 6 Z_c(1,4) + 3 Z_c(2,3) + Z_c(3,2). \end{equation} The vector space over $\mathbb{Q}$ spanned by all the functions $Z_c$ is therefore a commutative algebra, denoted by $\mathscr{A}_{\mathsf{MTV}}c$. Moreover, for any fixed $c$, the vector space over $\mathbb{Q}$ spanned by all the values of $Z_c$ is also a commutative algebra. As a side remark, one could wonder what happens to the relationship of multiple zeta values with the Drinfeld associator. One could naively replace the KZ equation by \begin{equation} \frac{dF}{dz} = \left(\frac{e_0}{z} + \frac{(1-c)e_1}{(1-z)(1-cz)}\right) F \end{equation} and ask about properties of the solution. \subsection{Duality} The functions $Z_c$ also satisfy the duality property, using the change of variable \begin{equation} \label{involution} s = \frac{t-1}{ct-1}, \end{equation} which makes sense as soon as $c \not= 1$. This is an involution of $\mathbb{P}^1$ that exchanges $0$ and $1$, $\infty$ with $1/c$ and maps the interval $[0,1]$ to itself. It also exchanges $\omega_0$ and $\omega_{1}$ up to sign: \begin{equation} \label{dlogs} -\frac{ds}{s} = \frac{dt}{1-t} - \frac{c dt}{1-ct}, \end{equation} which is $\omega_{1}$ by \eqref{omega1}. 
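\par For completeness, we note that these assertions follow from a direct computation: differentiating \eqref{involution} gives \begin{equation*} ds = \frac{(ct-1)-c(t-1)}{(ct-1)^2}\, dt = \frac{(c-1)\, dt}{(ct-1)^2}, \qquad\text{hence}\qquad -\frac{ds}{s} = \frac{(1-c)\, dt}{(ct-1)^2}\cdot\frac{ct-1}{t-1} = \frac{(1-c)\, dt}{(1-t)(1-ct)} = \omega_1(t), \end{equation*} and substituting \eqref{involution} into itself gives back $t$, so the map is indeed an involution.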
This implies, with the usual proof by change of variables, that the standard duality relations known for multiple zeta values and multiple $T$-values also hold for the functions $Z_c$. In the notation of \eqref{iterated}, the sequences $(\varepsilon_1,\varepsilon_2,\dots,\varepsilon_k)$ and $(1-\varepsilon_k,\dots,1-\varepsilon_2,1-\varepsilon_1)$ give the same iterated integral. This translates via \eqref{conversion} into equalities between two functions $Z_c$, including for example \begin{equation} Z_c(1,2) = Z_c(3). \end{equation} The unique fixed point in $[0,1]$ of the involution \eqref{involution} is \begin{equation} -\frac{\sqrt{-c + 1} - 1}{c}. \end{equation} This can be used for the purpose of numerical computations, as a convenient cut-point where to apply the composition-of-paths formula for iterated integrals. \begin{thm} There is an equality between generating series: \begin{multline} 1-\sum_{m,n \geq 1} Z_c(\underbrace{1,\ldots,1}_{n-1},m+1) X^m Y^n =\\ (1-c) \frac{\Gamma(1-X)\Gamma(1-Y)}{\Gamma(1-X-Y)} {}_2F_{1}(1-X,1-Y;1-X-Y;c). \end{multline} \end{thm} \begin{proof} The proof is essentially the proof of Theorem 3.6 in Kaneko and Tsumura \cite{kaneko_final}. Let us only sketch the main steps. Define an auxiliary function \begin{equation*} \mathscr{L}(t) = \int_{0}^{t} \omega_1. \end{equation*} Because the integrand of $\omega_1$ is positive on $[0,1]$, the function $\mathscr{L}$ maps as a diffeomorphism the open interval $(0,1)$ to the positive real line $\mathbb{R}_{>0}$. From the iterated integral \eqref{iterated} and the conversion rule \eqref{conversion}, one gets by symmetrization that \begin{equation*} Z_c(\underbrace{1,\ldots,1}_{n-1},m+1) = \frac{1}{(n-1)!m!}\int_{0}^{1} \mathscr{L}(t)^{n-1} \log(1/t)^m \omega_1(t). \end{equation*} Therefore \begin{equation*} \sum_{m,n \geq 1} Z_c(\underbrace{1,\ldots,1}_{n-1},m+1) X^m Y^n= \int_{0}^{1} e^{\mathscr{L}(t) Y} (t^{-X} - 1) \omega_1(t). \end{equation*} Let us write this integral as the sum of $I_1$ and $-I_2$, where \begin{equation*} I_1 = \int_{0}^{1} e^{\mathscr{L}(t) Y} t^{-X} \omega_1(t) = (1-c) \int_{0}^{1} \left(\frac{1-t}{1-ct} \right)^{-Y} t^{-X} \frac{dt}{(1-t)(1-ct)} \end{equation*} and \begin{equation*} I_2 = \int_{0}^{1} e^{\mathscr{L}(t) Y} \omega_1(t) = \int_{0}^{\infty} e^{wY} dw= -1/Y. \end{equation*} In $I_1$, one uses the involution \eqref{involution}, which implies \eqref{dlogs} and $-\log(s) = \mathscr{L}(t)$. In $I_2$, one uses the change of variables $w = \mathscr{L}(t)$. One concludes by evaluating $I_1$ using the classical Euler integral expression for the hypergeometric function: \begin{equation*} \frac{\Gamma(A)\Gamma(C-A)}{\Gamma(C)} {}_2F_{1}(A,B;C;z) = \int_{0}^{1} t^{A-1} (1-t)^{C-A-1} (1-z t)^{-B} dt. \end{equation*} \end{proof} As for the MTV, one can expand the iterated integrals for $Z_c$ into iterated sums using \eqref{omega1}, but this will involve not only one but a linear combination of iterated sums. For example, \begin{equation*} Z_c(2) = \int_{0<s<t<1} \omega_{1}(s) \frac{dt}{t} = \sum_{n \geq 1} \frac{1-c^n}{n^2} = \operatorname{Li}_2(1) - \operatorname{Li}_2(c). \end{equation*} The same argument shows that more generally \begin{equation} Z_c(m) = \operatorname{Li}_m(1) - \operatorname{Li}_m(c) \end{equation} for all integers $m \geq 2$, where $\operatorname{Li}_m$ is the $m$-th polylogarithm. \section{Dimensions} Let us declare that the function $Z_c(k_1,\dots,k_r)$ has weight $k_1 + \dots + k_r$. 
This is also the number of integration signs in the iterated integral \eqref{iterated}. Note that it is not clear at all that only weight-homogeneous linear relations can exist between the functions $Z_c$. This is probably expected from the motivic philosophy, as in the classical case of MZV \cite{brown_decomposition} and also for MTV. Assuming that, one can then ask the following question: what are the graded dimensions (with respect to the weight) of the vector space $\mathscr{A}_{\mathsf{MTV}}c$ spanned by the functions $Z_c(k_1,\dots,k_r)$ over $\mathbb{Q}$ ? One can also ask the same question for any fixed value of $c$. When $c = 0$, this question is about the algebra $\mathscr{A}_{\mathsf{MZV}}$ of MZV and has been studied a lot. The conjecture by Zagier states that the dimensions are the Padovan numbers. This has been proved by Brown in the setting of motivic multiple zeta values \cite{brown_annals}. When $c=-1$, the question has been considered in \cite{kaneko_final}, where the authors have performed large-scale computations to find the expected dimensions in low degrees. Based on this data, one could try to find an analogue of Zagier's conjecture. A proposal is made below in \S \ref{conjecture_mtv}. Here is a little table for $\mathscr{A}_{\mathsf{MZV}}$ and $\mathscr{A}_{\mathsf{MTV}}$. \begin{equation*} \begin{array}{rrrrrrrrrrrrrrrrr} n & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11&12&13\\ \text{MZV} & 1 & 0 & 1 & 1 & 1 & 2 & 2 & 3 & 4 & 5& 7 & 9&12&16\\ \text{MTV} & 1 & 0 & 1 & 1 & 2 & 2 & 4 & 5 & 9 & 10&19&23&42&49\\ \end{array} \end{equation*} It would also be interesting to consider other special values for $c$, such as roots of unity or the inverse golden ratio, as some related variants of multiple zeta values have already been studied. Besides $0$ and $-1$, what are the exceptional values for $c$ where the dimensions are smaller than those of $\mathscr{A}_{\mathsf{MTV}}c$ ? \subsection{Constraints of duality} \label{upper_bound} As the functions $Z_c$ satisfy the duality relations and form an algebra $\mathscr{A}_{\mathsf{MTV}}c$ under the shuffle product, one can try to get purely algebraic upper bounds on the graded dimensions of this algebra. For this, consider the shuffle algebra $\mathscr{A}$ on all words in letters $0$ and $1$. The subspace $\mathscr{A}'$ of $\mathscr{A}$ spanned by all words starting with $1$ and ending with $0$ is a subalgebra. One can define in $\mathscr{A}'$ formal elements $Z(k_1,\dots,k_r)$ using the conversion rule \eqref{conversion}. Let $\mathscr{B}$ be the quotient of $\mathscr{A}'$ by the ideal generated by elements $Z(\alpha) - Z(\alpha^*)$ for all indices $\alpha$, where $*$ is the duality of indices. This ideal contains more linear relations, the first one being \begin{equation} Z(3) (Z(3) - Z(1,2)) = 0. \end{equation} Using a computer, one can find the first few dimensions of $\mathscr{B}$ in small degrees: \begin{equation*} \begin{array}{rrrrrrrrrrrrrrrrr} n & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11&12&13\\ \mathscr{B}_n & 1 & 0 & 1 & 1 & 3 & 4 & 9 & 15 & 31 & 55&109&203&397&754\\ \end{array}. \end{equation*} It would be good to have some kind of formula for these dimensions. \subsection{The case of $\mathscr{A}_{\mathsf{MTV}}$} \label{conjecture_mtv} In this section, we propose a guess for the dimensions of the algebra $\mathscr{A}_{\mathsf{MTV}}$ of Kaneko-Tsumura. It would be rather bold to call this a conjecture. 
Let us start from the data given in Kaneko and Tsumura's article: \begin{equation*} \begin{array}{rrrrrrrrrrrrrrrrr} n & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ A & 1 & 0 & 1 & 1 & 2 & 2 & 4 & 5 & 9 & 10 & 19 & 23 & 42 & 49 & 91 & 110 \end{array} \end{equation*} which gives in line $A$ the conjectural dimensions obtained from numerical experiments. Let us add some lines to this table. First, add one line $B$ by computing the sum of two consecutive terms in $A$, suitably aligned. In the next line, compute the difference $B-A$ between the lines $B$ and $A$. Next, do something rather strange, namely define $A \sharp B$ as the column-by-column product of $A$ and $B$: \begin{equation*} \begin{array}{rrrrrrrrrrrrrrrrrr} n & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\ A & 1 & 0 & 1 & 1 & 2 & 2 & 4 & 5 & 9 & 10 & 19 & 23 & 42 & 49 & 91 & 110 \\ B & & & 1 & 1 & 2 & 3 & 4 & 6 & 9 & 14 & 19 & 29 & 42 & 65 & 91 & 140 \\ B-A & & & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 4 & 0 & 6 & 0 & 16 & 0 & 30 \\ A\sharp B & & & 1 & 1 & 4 & 6 & 16 & 30 & 81 & 140 & 361 & 667 & & & & \end{array} \end{equation*} Now comes the key observation: the lines $B-A$ and $A\sharp B$ seem to be the same, up to insertion of one $0$ between any two consecutive terms in $A\sharp B$. Assuming that this equality continues to hold at every order, one can compute as many terms as one wants in the sequence $A$, using this presumed identity between $B-A$ and $A\sharp B$. This gives the sequence \begin{multline*} 1, 0, 1, 1, 2, 2, 4, 5, 9, 10, 19, 23, 42, 49, 91, 110, 201, 230, 431, 521, 952, 1112, 2064,\\ 2509, 4573, 5318, 9891, 12024, 21915, 25658, 47573, 57831, 105404, 122834, \dots \end{multline*} This is of course a rather strange procedure, which lacks an interpretation in terms of the structure of the algebra of MTV. In terms of generating series, this can be summarized as \begin{align*} A &= 1 + O(t^2),\\ B &= (t + t^2) A - t,\\ B &= A - 1 + t \operatorname{Diag}(A, B), \end{align*} where $\operatorname{Diag}(A, B)$ keeps only the diagonal terms in the product $AB$. Another conjecture for the dimensions of $\mathscr{A}_{\mathsf{MTV}}$ has been proposed in Remark 2.2 of \cite{kaneko_final}. \subsection{Dimensions with parameter $c$} Let us now consider the dimensions for the algebra spanned by all functions $Z_c$. This section is the result of some experimental work, based on an implementation of these functions using Pari. Linear relations were searched for in the intersection of the spaces of relations for MTV and for MZV, and only beyond the obvious relations deduced from duality. Up to weight $6$, there seems to be no relation beyond those that can be deduced from the duality relations. For example, in weight $4$, there does not seem to be any relation over $\mathbb{Q}$ between $Z_c(1,3)$, $Z_c(2,2)$ and $Z_c(4)$. This implies that the graded dimensions of $\mathscr{A}_{\mathsf{MTV}}c$ should be strictly larger than those of $\mathscr{A}_{\mathsf{MTV}}$. In weight $7$, one finds $2$ linearly independent relations, not in the span of relations implied by duality. So the expected dimension is $13=15-2$. 
In extenso, these relations are \begin{multline} Z_c(1,2,4) - 2 Z_c(1,3,3) - 4 Z_c(2,1,1,3) + 3 Z_c(2,1,4) \\ + Z_c(2,2,3) - Z_c(2,3,2) + 2 Z_c(3,1,3) = 0 \end{multline} and \begin{multline} 18 Z_c(1,1,5) + 26 Z_c(1,3,3) - 30 Z_c(1,6) + 45 Z_c(2,1,1,3) - 27 Z_c(2,1,4)\\ - 8 Z_c(2,2,3) + 12 Z_c(2,3,2) - 15 Z_c(2,5) - 19 Z_c(3,1,3) + Z_c(3,2,2)\\ - 4 Z_c(3,4) + Z_c(4,1,2) = 0 \end{multline} One can check that these $2$ relations hold exactly for MZV and numerically for MTV. In weight $8$, one finds $3$ linearly independant relations, not in the span of relations implied by duality. So the expected dimension is $28=31-3$. Here are these three relations as lists of coefficients: \begin{multline*} [0, 0, 0, 0, 0, -1, 0, 0, -1, -3, 0, 2, 5, 10, -1, -5, 21, -2,\\ -10, -18, 5, 0, 13, 27, -6, 52, -13, -48, 90, -45, -72]\\ [0, 0, 0, -2, 0, 0, 1, -8, 0, -2, -20, 10, 6, -10, 0, -2, -16,\\ 1, 4, 6, 20, -40, 14, -8, -4, -16, 7, 24, 0, 19, 16]\\ [0, 0, -5, 9, -5, 0, 0, 48, 4, 29, 135, -45, -56, -50, 4, 38, -144,\\ 7, 54, 117, -100, 270, -154, -227, 60, -479, 82, 345, -975, 366, 672] \end{multline*} between the $31$ elements \begin{multline*} Z_c(8), Z_c(6,2), Z_c(5,1,2), Z_c(4,4), Z_c(4,2,2), Z_c(4,1,3), Z_c(4,1,1,2), Z_c(3,5),\\ Z_c(3,1,2,2), Z_c(3,1,1,3), Z_c(2,6), Z_c(2,4,2), Z_c(2,3,3), Z_c(2,2,4), Z_c(2,2,2,2),\\ Z_c(2,2,1,3), Z_c(2,1,5), Z_c(2,1,3,2), Z_c(2,1,2,3), Z_c(2,1,1,4), Z_c(2,1,1,1,3),\\ Z_c(1,7), Z_c(1,4,3), Z_c(1,3,4), Z_c(1,3,1,3), Z_c(1,2,5), Z_c(1,2,2,3), Z_c(1,2,1,4),\\ Z_c(1,1,6), Z_c(1,1,2,4), Z_c(1,1,1,5) \end{multline*} that span the space modulo the relations induced by duality. These $3$ relations also hold exactly for MZV and numerically for MTV. In weight $9$, a similar search found $15$ relations beyond duality. So the expected dimension would be $40 = 55-15$. This is slightly surprising, as one may have expected to get $41 = 13+28$ from the idea of summing the two previous terms when the weight is odd. This idea seems to work for even weights in the case of $\mathscr{A}_{\mathsf{MTV}}$. Either our experimental work has a flaw, or this idea must be abandoned for $\mathscr{A}_{\mathsf{MTV}}c$. Here is a table summarizing the experimental results. \begin{equation*} \begin{array}{rrrrrrrrrrrrrrrrr} n & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11&12&13\\ \text{MZV} & 1 & 0 & 1 & 1 & 1 & 2 & 2 & 3 & 4 & 5& 7 & 9&12&16\\ \text{MTV} & 1 & 0 & 1 & 1 & 2 & 2 & 4 & 5 & 9 & 10&19&23&42&49\\ \mathscr{A}_{\mathsf{MTV}}c & 1 & 0 & 1 & 1 & 3 & 4 & 9 & 13 & 28 & 40& &&&\\ \mathscr{B} & 1 & 0 & 1 & 1 & 3 & 4 & 9 & 15 & 31 & 55&109&203&397&754\\ \end{array} \end{equation*} The line labelled $\mathscr{B}$ is the upper bound assuming only the relations implied by the duality relations, as explained in \S \ref{upper_bound}. \end{document}
\begin{document} \date{} \title{QuickMergesort: Practically Efficient Constant-Factor Optimal Sorting} \thispagestyle{empty} \definecolor{mygreen}{rgb}{0,0.6,0} \definecolor{mygray}{rgb}{0.5,0.5,0.5} \definecolor{mymauve}{rgb}{0.58,0,0.82} \begin{abstract} We consider the fundamental problem of internally sorting a sequence of $n$ elements. In its best theoretical setting, QuickMergesort, a combination of Quicksort and Mergesort with Median-of-$\sqrt{n}$ pivot selection, requires at most $n \log n - 1.3999n + o(n)$ element comparisons on the average. The question addressed in this paper is how to make this algorithm practical. As refined pivot selection usually adds much overhead, we show that the Median-of-3 pivot selection of QuickMergesort leads to at most $n \log n - 0{.}75n + o(n)$ element comparisons on average, while running fast on elementary data. The experiments show that QuickMergesort outperforms state-of-the-art library implementations, including {C++}'s Introsort and Java's Dual-Pivot Quicksort. Further trade-offs between a low running time and a low number of comparisons are studied. Moreover, we describe a practically efficient version with $n \log n + \mathcal{O}(n)$ comparisons in the worst case. \keywords{in-place sorting, quicksort, mergesort, analysis of algorithms} \end{abstract} \pagestyle{plain} \section{Introduction} Sorting a sequence of $n$ elements remains one of the most fascinating topics in computer science, and runtime improvements to sorting have a significant impact on many applications. The lower bound of $\log (n!) \approx n \log n - 1.44n + \Theta(\log n)$ element comparisons applies to both the worst and the average case\footnote{Logarithms denoted by $\log$ are base 2, and the term {average case} refers to a uniform distribution of all input permutations, assuming all elements are different.}. The sorting algorithms we propose in this paper are \emph{internal} or \emph{in-place}: they need at most $\mathcal{O}(\log n)$ space (computer words) in addition to the array to be sorted. That means we consider {Quicksort}~\cite{Hoa62} an internal algorithm, whereas standard {Mergesort} is \emph{external} because it needs a linear amount of extra space. Based on {QuickHeapsort} \cite{quickheap,DiekertW13Quick}, Edelkamp and Wei\ss~\cite{EWCSR} developed the concept of {QuickXsort} and applied it to X = {WeakHeapsort}~\cite{Dut93} and X = {Mergesort}. The idea~-- going back to {UltimateHeapsort}~\cite{ultimate}~-- is very simple: as in {Quicksort} the array is partitioned into the elements greater and less than some pivot element, respectively. Then one part of the array is sorted by X and the other part is sorted recursively. The advantage is that, if X is an external algorithm, then in {QuickXsort} the part of the array which is not currently being sorted may be used as temporary space, which yields an internal variant of X. Using {Mergesort} as X, a partitioning scheme with a pivot sample of $\lceil \sqrt{n} \rceil$ elements, known to be optimal for classical {Quicksort}~\cite{MartinezR01}, and Ford and Johnson's {MergeInsertion} as the base case~\cite{FordJ59}, {QuickMergesort} requires at most $n \log n - 1.3999n + o(n)$ element comparisons on the average (and $n \log n - 1.4n + o(n)$ for $n=2^k$), while preserving the worst-case bounds of $n \log n + \mathcal{O}(n)$ element comparisons and $\mathcal{O}(n \log n)$ time for all other operations~\cite{EWCSR}. 
To the authors' knowledge the average-case result is the best-known upper bound for sequential sorting with $\mathcal{O}(n \log n)$ overall time bound, in the leading term matching, and in the linear term being less than $0.045n$ away from the lower bound. The research question addressed in this paper, is whether {QuickMergesort} can be made \emph{practical} in relation to efficient library implementations for sorting, such as {Introsort} and {Dual-Pivot Quicksort}. {Introsort}~\cite{Mus97}, implemented as \texttt{std::sort} in C++/STL, is a mix of {Insertionsort}, {CleverQuicksort} (the Median-of-3 variant of {Quicksort}) and {Heapsort}~\cite{Flo64,Wil64}, where the former and latter are used as recursion stoppers (the one for improving the performance for small sets of data, the other one for improving worst-case performance). The average-time complexity, however, is dominated by {CleverQuicksort}. {Dual-Pivot Quicksort}\footnote{Oracle states: \emph{The sorting algorithm is a Dual-Pivot Quicksort by Vladimir Yaroslavskiy, Jon Bentley, and Joshua Bloch}; see http://permalink.gmane.org/gmane.comp.java.openjdk.core-libs.devel/2628} by Yaroslavskiy et al. as implemented in current versions of Java (e.g., Oracle Java 7 and Java 8) is an interesting {Quicksort} variant using two (instead of one) pivot elements in the partitioning stage (recent proposals use three and more pivots~\cite{KushagraLQM14}). It has been shown that -- in contrast to ordinary {Quicksort} with an average case of $2\cdot n \ln n +\mathcal{O}(n)$ element comparisons -- {Dual-Pivot Quicksort} requires at most $1.9 \cdot n \ln n +\mathcal{O}(n)$ element comparisons on the average, and there are variants that give $1.8 \cdot n \ln n +\mathcal{O}(n)$. For a rising number of samples for pivot selection, the leading factor decreases~\cite{dqsanalysis1,dqsanalysis2,dqsanalysis3}. So far there is no \emph{practical} (competitive in performance to state-of-the-art library implementations) sorting algorithm that is \emph{internal} and \emph{constant-factor-optimal} (optimal in the leading term). Maybe closest is InSituMergesort~\cite{KatajainenPT96,ElmasryKS12}, but even though that algorithm improves greatly over the library implementation of in-place stable sort in STL, it could not match with other internal sorting algorithms. Hence, the aim of the paper is to design fast {QuickMergesort} variants. Instead of using a Median-of-$\sqrt{n}$ strategy, we will use the Median-of-3. For Quicksort, the Median-of-3 strategy is also known as {CleverQuicksort}. The leading constant in $c \cdot n \log n +\mathcal{O}(n)$ for the average case of comparisons of {CleverQuicksort} is $c = (12/7)\ln 2 \approx 1.188$. As $c < 1.8 \ln 2$, {CleverQuicksort} is theoretically superior to the wider class of {DualPivotQuicksort} algorithms considered in~\cite{dqsanalysis1,dqsanalysis3,dqsanalysis2}. Another sorting algorithm studied in this paper is a mix of {QuickMergesort} and {CleverQuicksort}: during the sorting with {Mergesort}, for small arrays {CleverQuicksort} is applied. The contributions of the paper are as follows. \begin{enumerate} \item We derive a bound on the average number of comparisons in {QuickMergesort} when the Median-of-3 partitioning strategy is used instead of the Median-of-$\sqrt{n}$ strategy, and show a surprisingly low upper bound of $n \log n - 0.75n + o(n)$ comparisons on average. 
\item We analyze a variant of {QuickMergesort} where base cases of size at most $n^\beta$ for some $\beta \in [0,1]$ are sorted using yet another sorting algorithm X; otherwise the algorithm is identical to {QuickMergesort}. We show that if X is called for about $\sqrt{n}$ elements and X uses at most $\alpha \cdot n \log n+ \mathcal{O}(n)$ comparisons on average, the average number of comparisons of is $(1+\alpha)/2 \cdot n \log n+ \mathcal{O}(n)$, with $(1+\alpha)/2 \approx 1.094$ for X $=$ Median-of-3 {Quicksort}. Other element size thresholds for invoking X lead to further trade-offs. \item We refine a trick suggested in \cite{EWCSR} in order to obtain a bound of $n \log n + 16.1n$ comparisons in the worst case using the median-of-median algorithm \cite{BFPRT73} with an adaptive pivot sampling strategy. On average the modified algorithm is only slightly slower than the Median-of-3 variant of QuickMergesort. \item We compare the proposals empirically to other algorithms from the literature. \end{enumerate} We start with revisiting {QuickXsort} and especially {QuickMergesort}, including theoretically important and practically relevant sub-cases. We derive an upper bound on the average number of comparisons in {QuickMergesort} with Median-of-3 pivot selection. In \prettyref {sec:QMQS}, we present changes to the algorithm that lead to the hybrid {QuickMergeXsort}. Next, we introduce the worst-case efficient variant MoMQuickMergesort, and, finally, we present experimental results. \section{ {QuickXsort}\xspace\ and {QuickMergesort}\xspace }\label{sec:quickXsort} In this section we give a brief description of {QuickXsort} and extend a result concerning the number of comparisons performed in the average case. Let X be some sorting algorithm. {QuickXsort} works as follows: First, choose a pivot element as the median of some sample (the performance will depend on the size of the sample). Next, partition the array according to this pivot element, i.\,e., rearrange the array such that all elements left of the pivot are less or equal and all elements on the right are greater or equal than the pivot element. Then, choose one part of the array and sort it with the algorithm X. After that, sort the other part of the array recursively with {QuickXsort}. The main advantage of this procedure is that the part of the array that is not being sorted currently can be used as temporary memory for the algorithm X. This yields fast \emph{internal} variants for various \emph{external} sorting algorithms such as {Mergesort}. The idea is that whenever a data element should be moved to the extra (additional or external) element space, instead it is swapped with the data element occupying the respective position in part of the array which is used as temporary memory. Of course, this works only if the algorithm needs additional storage only for data elements. Furthermore, the algorithm has to keep track of the positions of elements which have been swapped. For the number of comparisons some general results hold for a wide class of algorithms X. Under natural assumptions the average number of comparisons of X and of {QuickXsort}\xspace{} differ only by an $o(n)$-term: Let X be some sorting algorithm requiring at most $n \log n + cn +o(n)$ comparisons on average. Then, {QuickXsort}\xspace{} with a Median-of-$\sqrt{n}$ pivot selection also needs at most $n \log n + cn +o(n)$ comparisons on average~\cite{EWCSR}. Sample sizes of approximately $\sqrt{n}$ are likely to be optimal \cite{DiekertW13Quick,MartinezR01}. 
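To make the scheme concrete, the following simplified C++ sketch (for illustration only: plain Lomuto-style Median-of-3 partitioning, no base-case tuning, and no protection against repeatedly unbalanced pivots) sorts one side of each partition with a Mergesort whose merging step swaps elements through the other side of the array instead of copying them to an external buffer, and then continues on the remaining side.
\begin{lstlisting}[language=C++]
// Simplified QuickMergesort sketch (illustrative, not the tuned implementation).
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

// Lomuto-style partition of a[lo,hi) around a Median-of-3 pivot;
// returns the final position of the pivot.
int partition_mo3(std::vector<int>& a, int lo, int hi) {
    int mid = lo + (hi - lo) / 2;
    if (a[mid] < a[lo]) std::swap(a[mid], a[lo]);
    if (a[hi - 1] < a[lo]) std::swap(a[hi - 1], a[lo]);
    if (a[hi - 1] < a[mid]) std::swap(a[hi - 1], a[mid]);  // median of three at mid
    std::swap(a[mid], a[hi - 1]);                          // park the pivot at the end
    int pivot = a[hi - 1], p = lo;
    for (int i = lo; i < hi - 1; ++i)
        if (a[i] < pivot) { std::swap(a[i], a[p]); ++p; }
    std::swap(a[p], a[hi - 1]);
    return p;
}

// Mergesort a[l,r) using a[b,...) as swap buffer ((r-l)/2 cells, disjoint from [l,r));
// the elements originally occupying the buffer area are only moved, never compared.
void msort(std::vector<int>& a, int l, int r, int b) {
    int n = r - l;
    if (n <= 1) return;
    int m = l + n / 2;
    msort(a, l, m, b);
    msort(a, m, r, b);
    for (int i = 0; i < m - l; ++i) std::swap(a[l + i], a[b + i]);  // left run -> buffer
    int i = b, iend = b + (m - l), j = m, k = l;
    while (i < iend && j < r) {                                     // merge back, by swaps
        if (a[i] <= a[j]) std::swap(a[k++], a[i++]);
        else              std::swap(a[k++], a[j++]);
    }
    while (i < iend) std::swap(a[k++], a[i++]);
}

void quick_merge_sort(std::vector<int>& a, int lo, int hi) {
    while (hi - lo > 1) {
        int p = partition_mo3(a, lo, hi);
        int left = p - lo, right = hi - (p + 1);
        // Mergesort the larger side if the smaller side is big enough to act
        // as its buffer; otherwise Mergesort the smaller side.
        bool ms_left = (left >= right) ? (right >= left / 2) : (left < right / 2);
        if (ms_left) { msort(a, lo, p, p + 1); lo = p + 1; }
        else         { msort(a, p + 1, hi, lo); hi = p; }
    }
}

int main() {
    std::vector<int> a(100000);
    std::mt19937 gen(1);
    for (auto& x : a) x = static_cast<int>(gen() % 1000000);
    quick_merge_sort(a, 0, static_cast<int>(a.size()));
    assert(std::is_sorted(a.begin(), a.end()));
    return 0;
}
\end{lstlisting}
The only difference to a textbook Mergesort is that the merge exchanges elements with the buffer area instead of overwriting it, so the elements residing there are merely permuted and are sorted later anyway.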
If the unlikely case happens that always the $\sqrt{n}$ smallest elements are chosen for pivot selection, $\Omega(n^{3/2})$ comparisons are performed. However, as we showed in \cite{EWCSR}, such a worst case is unlikely. Nevertheless, for improving the worst-case complexity, in \cite{EWCSR} we suggested a trick similar to {Introsort} \cite{Mus97} leading to $n \log n + \mathcal{O}(n)$ comparisons in the worst case (use the median of the whole array as pivot if the previous pivot was very bad). In \prettyref{sec:worstcase} of this paper, we refine this method, yielding a better average and worst-case performance. One example for {QuickXsort}\xspace{} is {QuickMergesort}. For the {Mergesort} part we use standard (top-down) {Mergesort}, which can be implemented using $m$ extra element spaces to merge two arrays of length $m$. After the partitioning, one part of the array -- for a simpler description we assume the first part -- has to be sorted with {Mergesort} (note, however, that either of the two sides can be sorted with Mergesort as long as the other side contains at least $n/3$ elements). In order to do so, the second half of this first part is sorted recursively with {Mergesort} while moving the elements to the back of the whole array. The elements from the back of the array are inserted as dummy elements into the first part. Then, the first half of the first part is sorted recursively with {Mergesort} while being moved to the position of the former second half of the first part. Now, at the front of the array, there is enough space (filled with dummy elements) such that the two halves can be merged. The executed stages of the algorithm {QuickMergesort} (with no median pivot selection strategy applied) are illustrated in Fig.~\ref{fig:sample}. \begin{figure} \caption{Example for the execution of {QuickMergesort}.} \label{fig:sample} \end{figure} {Mergesort} requires approximately $n \log n - 1.26n$ comparisons on average, so that with a Median-of-$\sqrt{n}$ we obtain an internal sorting algorithm with $n \log n - 1.26n + o(n)$ comparisons on average. One can do even better by sorting small subarrays with a more complicated algorithm requiring fewer comparisons~-- for details see \cite{EWCSR}. Since the Median-of-3 variant (i.\,e.\ CleverQuickMergesort) shows a slightly better practical performance than the Median-of-$\sqrt{n}$ variant (see \cite{EWCSR}), we provide here a theoretical analysis of it by showing that {CleverQuickMergesort} performs at most $n \log n - 0.75n + o(n)$ comparisons on average. In fact, as in \cite{EWCSR} we show a more general result for CleverQuickXsort for an arbitrary algorithm X. \begin{theorem} [Average Case {CleverQuickXsort}] \label{thm:CQXS} Let the algorithm {X} perform at most $\alpha n \log n + cn + \mathcal{O}(\log n)$ comparisons on average. Then, {CleverQuickXsort} performs at most $\alpha n \log n + (c + \kappa_\alpha) n + \mathcal{O}(\log^2 n)$ comparisons on average with $\kappa_\alpha = \frac{4}{15}\left(12 - \frac{7\alpha}{\ln 2}\right) \leq 0{.}51$. \end{theorem} Since {Mergesort} requires at most $n \log n - 1.26n + o(n)$ comparisons on average, we obtain the following corollary: \begin{corollary} [Average Case {CleverQuickMergesort}] \label{cor:CQMS} {CleverQuickMergesort} is an in-place algorithm that performs at most $n \log n - 0.75n + o(n)$ comparisons on average. 
\end{corollary} \begin{proof}[Proof of \prettyref{thm:CQXS}] The probability of choosing the $k$-th element (in the ordered sequence) as pivot of a random $n$-element array is $\Pr{\!\text{pivot }= k\!} = (k-1)(n-k){\binom{n}{3}}^{-1}$ (one element of the three element set has to be less than the $k$-th, one equal to the $k$-th, and one greater than $k$-th element of the array). Note that this holds no matter whether we select the three elements at random or we use fixed positions and average over all input permutations. Since probabilities sum up to 1, we have \begin{align} \sum_{k=1}^{n}(k-1)(n-k){\binom{n}{3}}^{-1} = 1. \label{eq:prob} \end{align} Moreover, partitioning preserves randomness of the two sides of the array~-- this includes the positions where the other two elements from the pivot sample are placed (since for a fixed pivot, every element smaller (resp.\ greater) than the pivot has the same probability of being part of the sample). Also, using the array as temporary space for Mergesort does not destroy randomness since the dummy elements are never compared. Let $T(n)$ be the average-case number of comparisons of {CleverQuickXsort} for sorting an input of size $n$ and let $$S(n)= \alpha n \log n + cn + d (1+\log n)$$ be a bound for the average number of comparisons of the algorithm X (e.\,g.\ Mergesort). We will show by induction that $$T(n) \leq \alpha n\log n + (c+ \kappa_\alpha )n + D (1+\log^2 n) $$ for some constant $D\geq d$ (which we specify later such that the induction base is satisfied) and $\kappa_\alpha = \frac{4}{15}\left(12 - \frac{7\alpha}{\ln 2}\right) \leq 0.51$ (since $\alpha\geq 1$ by the general lower bound on sorting). As induction hypothesis for $1\leq k \leq n$ we assume that \begin{align*} \max\{\, T(k-1) & + S(n-k), T(n-k)+S(k-1)\,\}\\ &\leq \alpha (k-1) \log (k-1) + \alpha(n-k) \log(n-k) + cn + \kappa_\alpha \max\{k-1, n-k\}\\ &\qquad + D\left(1+\log^2 (\max\{k-1, n-k\})\right) + d\left(1+\log (\min\{k-1, n-k\})\right) \\ &=: f(n,k). \end{align*} In order to find the pivot element, three comparisons are needed. After that, for partitioning $n-3$ comparisons are performed (all except the three elements of the pivot sample are compared with the pivot). Since after partitioning, one part of the array is sorted with X and the other recursively with CleverQuickXsort, we obtain the recurrence {\allowdisplaybreaks \begin{align} \nonumber T(n) &\leq n + \sum_{k=1}^n \Pr{\!\text{pivot }= k\!}\cdot\max\left\{\, T(k-1) + S(n-k), T(n-k)+S(k-1)\,\right\}\\ \nonumber &\leq n + \sum_{k=1}^n \frac{(k-1)(n-k)}{\binom{n}{3}} f(n,k) \\ &\leq n + \frac{1}{\binom{n}{3}}\sum_{k=1}^n (k-1)(n-k) \mathbb{B}igl(\alpha (k-1) \log (k-1) + \alpha(n-k) \log(n-k)\mathbb{B}igr) \label{eq:nlogn}\\ &\qquad + \frac{1}{\binom{n}{3}}\sum_{k=1}^n (k-1)(n-k) \kappa_\alpha \max\{k-1, n-k\} \label{eq:ckappa}\\ &\qquad + \frac{1}{\binom{n}{3}}\sum_{k=1}^n (k-1)(n-k) D\log^2(\max\{k-1, n-k\}) \label{eq:logsquare}\\ &\qquad + \frac{1}{\binom{n}{3}}\sum_{k=1}^n (k-1)(n-k) \mathbb{B}igl(c n + D + d +d\log\left( \min\{k-1, n-k\}\right)\mathbb{B}igr)\label{eq:rest} \end{align}} We simplify the terms \prettyref{eq:nlogn}--\prettyref{eq:rest} separately using \texttt{http://www.wolframalpha.com/} for evaluating the sums and integrals. The function $x \mapsto g(x) = (x-1)^2(n-x) \log (x-1) $ is non-negative and has a single maximum for $1 \leq x \leq n$ at position $x=\xi$; on the left of $\xi$, it is monotonically increasing, on the right monotonically decreasing. 
Therefore, \begin{align*} \sum_{k=1}^n g(k) &= \sum_{k=1}^{\floor{\xi}} g(k) + \sum_{k=\floor{\xi} + 1}^n g(k)\ \leq \int_{1}^{\floor{\xi}} g(x)\,d x + \int_{\floor{\xi} + 1}^n g(x)\,d x + 2g(\xi). \end{align*} Since the second term of \prettyref{eq:nlogn} is obtained from the first one by a substitution $k \mapsto n + 1 - k$, it follows that \begin{align*} \prettyref{eq:nlogn} -n & \leq \frac{\alpha}{\binom{n}{3}}\cdot\mathbb{B}iggl(\int_{1}^{n} g(x)\,d x + \int_{1}^n g(n + 1 - x)\,d x + 4g(\xi)\mathbb{B}iggr)\\ &\leq \frac{\alpha}{\binom{n}{3}}\cdot\mathbb{B}iggl(2\int_{1}^n x^2(n-x) \log x \,d x + 4 n^3\log n\mathbb{B}iggr)\\ &= \frac{\alpha}{\binom{n}{3}}\cdot\mathbb{B}iggl(\frac{2}{144 \ln 2}n^4(12\ln n-7) + 4 n^3\log n\mathbb{B}iggr) \leq \alpha n\log n-\frac{7\alpha}{12 \ln 2}n + c_3\alpha\log n \end{align*} for some properly chosen constant $c_3$. Now, first assume that $\kappa_\alpha \geq 0$. Then we have \begin{align*} \prettyref{eq:ckappa} & \leq\frac{2\kappa_\alpha}{\binom{n}{3}}\sum_{k=1}^{\ceil{n/2}} (k-1)(n-k)^2 \leq \frac{2\kappa_\alpha n}{192\binom{n}{3}}\left(11n^3 - 20n^2 - 44n + 80 \right) \leq \frac{11}{16}\kappa_\alpha n + c_4 \end{align*} for some constant $c_4$. On the other hand, if $\kappa_\alpha < 0$, we have \begin{align*} \prettyref{eq:ckappa} & \leq\frac{2\kappa_\alpha}{\binom{n}{3}}\sum_{k=1}^{\floor{n/2}} (k-1)(n-k)^2 \leq \frac{2\kappa_\alpha n}{192\binom{n}{3}}\left(11n^3 - 68n^2 - 100n + 16 \right) \leq \frac{11}{16}\kappa_\alpha n + c_4 \end{align*} for some constant $c_4$. Thus, in any case, we have $\prettyref{eq:ckappa} \leq \frac{11}{16}\kappa_\alpha n + c_4$. With the same argument as for \prettyref{eq:nlogn}, we have \begin{align*} \prettyref{eq:logsquare} &\leq \frac{2D}{\binom{n}{3}}\sum_{k=\floor{n/2}}^{n} (k-1)(n-k)\log^2(k-1)\\ &\leq \frac{2D}{\binom{n}{3}}\int_{\floor{n/2}}^{n} (x-1)(n-x)\log^2(x-1)\,dx + D\cdot c_5' \leq D \log^2 n - \frac{5D}{3}\log n + D \cdot c_5 \end{align*} for some constants $c_5'$ and $c_5$. Finally, by \prettyref{eq:prob}, we have \begin{align*} \prettyref{eq:rest} \leq c n + D + d + d \log(n/2) = cn + D + d\log n. \end{align*} Now, we combine all the terms and obtain \begin{align*} T(n) &\leq \alpha n \log n + n\left(1+ \frac{-7\alpha}{12\ln 2} + c + \frac{11}{16}\kappa_\alpha \right)\\ &\qquad + c_3\alpha\log n + c_4 + D\log^2n - \frac{5D}{3}\log n + Dc_5 + D + d\log n \end{align*} We can choose $D$ such that $ \frac{5D}{3}\log n \geq c_3\alpha\log n + c_4 + Dc_5 + D + d\log n$ for $n$ large enough and $D \geq T(n)$ for all smaller $n$. Hence, we conclude the proof of \prettyref{thm:CQXS}: \begin{align*} T(n) &\leq \alpha n \log n + n\left(1+ \frac{-7\alpha}{12\ln 2} + c + \frac{11}{16}\kappa_\alpha \right) + D\log^2n + D\\ & = \alpha n \log n + n\left(1+ \frac{-7\alpha}{12\ln 2} + c + \frac{11}{16}\cdot\frac{4}{15}\left(12 - \frac{7\alpha}{\ln 2}\right) \right) + D\log^2n + D\\ &= \alpha n \log n + (c + \kappa_\alpha )n + D\log^2n + D. \end{align*} \end{proof} Notice that in the case that in each recursion level always the smaller part is sorted with X, the inequalities in the proof of \prettyref{thm:CQXS} are tight up to some lower order terms. Thus, the proof can be easily modified to provide a lower bound of $\alpha n \log n + (c + \kappa_\alpha) n - \mathcal{O}(\log^2 n)$ comparisons in this special case. 
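The numerical values of the constants involved can be checked directly; the following small snippet (purely illustrative) evaluates $\kappa_1$, the resulting linear term $c + \kappa_1$ for Mergesort ($c = -1.26$), the leading constant $(12/7)\ln 2$ of CleverQuicksort, and the constant $(1+\alpha)/2$ appearing in the next section.
\begin{lstlisting}[language=C++]
// Numerical check of the constants used in the analysis (illustrative only).
#include <cmath>
#include <cstdio>

int main() {
    const double ln2 = std::log(2.0);
    auto kappa = [&](double alpha) { return 4.0 / 15.0 * (12.0 - 7.0 * alpha / ln2); };
    double alpha_cqs = (12.0 / 7.0) * ln2;           // CleverQuicksort leading constant
    std::printf("kappa_1         = %.4f (<= 0.51)\n", kappa(1.0));
    std::printf("c + kappa_1     = %.4f (Mergesort: c = -1.26)\n", -1.26 + kappa(1.0));
    std::printf("alpha(CleverQS) = %.4f\n", alpha_cqs);
    std::printf("(1 + alpha) / 2 = %.4f\n", (1.0 + alpha_cqs) / 2.0);
    return 0;
}
\end{lstlisting}
The printed values are approximately $0.507$, $-0.753$, $1.188$ and $1.094$, consistent with the bounds stated above.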
\section{{QuickMergeXsort}}\label{sec:QMQS} {QuickMergeXsort} agrees with {QuickMergesort} up to the following change: for arrays of size smaller than some threshold cardinality {X\_THRESH}, the sorting algorithm X is called (instead of Mergesort) and the sorted elements are moved to their target location expected by {QuickMergesort}. Fig.~\ref{fig:pseudocode} provides the full implementation details of {QuickMerge(X)sort} (in {C++}). The realization of the sorting algorithm X and the partitioning algorithm have to be added. The listing shows that by dropping the base cases from {QuickMergesort} the code is short enough for textbooks on algorithms and data structures. The general principle is that we have a merging step that takes two sorted areas, merges and swaps them into a third one. The program \emph{msort} applies {Mergesort} with X as a stopper. It goes down the recursion tree and shrinks the size of the array accordingly. If the array is small enough, the algorithm calls X followed by a joint movement (memory copy) of array elements (the only change of code wrt.\ {QuickMergesort}). The algorithm \emph{out} serves as an interface between the recursive procedure \emph{msort} and top-level procedure \emph{sort}. Last, but not least, we have the overall internal sorting algorithm \emph{sort}, that performs the partitioning. \lstset{ backgroundcolor=\color{white}, basicstyle=\footnotesize, breakatwhitespace=false, breaklines=true, captionpos=b, commentstyle=\color{mygreen}, deletekeywords={...}, escapeinside={\%*}{*)}, extendedchars=true, frame=single, keepspaces=true, keywordstyle=\color{blue}, language=Octave, otherkeywords={*,...}, numbers=left, numbersep=5pt, numberstyle=\tiny\color{mygray}, rulecolor=\color{black}, showspaces=false, showstringspaces=false, showtabs=false, stepnumber=1, stringstyle=\color{mymauve}, tabsize=10, title=\lstname } \begin{figure} \caption{Implementation of QuickMergeXsort.} \label{fig:pseudocode} \end{figure} The following result is a generalization of the $1 \cdot n \log n + cn +o(n)$ average comparisons bound in~\cite{EWCSR}. Indeed, the proof is almost a verbatim copy of the proof of \cite[Thm.\ 1]{EWCSR} (compare to the role of $\alpha$ in the proof of \prettyref{thm:CQXS}). \begin{theorem} [Average-Case {QuickXsort}\xspace{}] \label{thm:quickXsort} For $\alpha \ge 1$ let X be some sorting algorithm requiring at most $\alpha \cdot n \log n + cn +o(n)$ comparisons on average. Then, {QuickXsort}\xspace{} with a Median-of-$\sqrt{n}$ pivot selection also needs at most $\alpha \cdot n \log n + cn +o(n)$ comparisons on average. \end{theorem} We are now ready to analyze the average-case performance of {QuickMergeXsort}. \begin{theorem} [Average-Case {QuickMergeXsort}/{CleverQuickMergeXsort}] \label{thm:quickmergeXsqrtsortac} Let X be a sorting algorithm with $\alpha \cdot n \log n + c n + o(n)$ comparisons in the average case, called when reaching $\ceil{n^\beta}$ elements, $0 <\beta < 1$. Then, {QuickMergeXsort}\xspace{} with Median-of-$\sqrt{n}$ pivot selection, as well as with Median-of-3 pivot selection, is a sorting algorithm that needs at most $(\alpha \beta + (1 - \beta)) \cdot n \log n + \mathcal{O}(n)$ comparisons in the average case. \end{theorem} \begin{proof} To begin with we analyze {MergeXsort}, i.e., Mergesort, with recursion stopper X. We assume that every path of the recursion tree of Mergesort has the same length until the algorithm switches to X. 
This can be easily implemented and guarantees that all calls to X are made on arrays of almost identical size. First, we look at the $\ceil{(\log n) \cdot (1-\beta)}$ top layers of the recursion tree, which are sorted by {Mergesort}. In the worst case, in layer $i$ of the tree, {Mergesort} requires at most $n-2^i < n$ comparisons, so that in total we have at most $C_{\text{Merge}} (n) = n \cdot \ceil{(1-\beta) \cdot \log n}$ element comparisons. The average case differs only negligibly from the worst case. In the $\ceil{(\log n) \cdot (1-\beta)}$ recursion levels of Mergesort, $2^{\ceil{(1-\beta)\log n}}$ sorted arrays are merged into one large sorted array. Each of the $g_{\beta}(n) = 2^{\ceil{(1-\beta)\log n}}$ arrays is of size at most $f_\beta(n)= \ceil{2^{\log n - \ceil{(1-\beta)\log n}}} \leq \ceil{n^\beta}$. Next, we look at the $g_{\beta}(n) = 2^{\ceil{(1-\beta)\log n}}$ calls to X. Let $C_X(n)$ denote the average number of element comparisons executed by all calls of X. Given that $g_{\beta}(n) f_{\beta}(n) = n+ \mathcal{O}(n^{1-\beta})$ and $\log f_\beta(n) = \log \ceil{2^{\log n - \ceil{(1-\beta)\log n}}} = \log n - \ceil{(1-\beta)\log n} + \mathcal{O}(1/n^\beta)$, we obtain \begin{align*} C_{X} (n) &= g_{\beta}(n) \cdot (\alpha \cdot f_\beta(n) \log f_\beta(n) + c f_\beta(n) + o(f_\beta(n) )) \\ &= \alpha \cdot n \log f_\beta(n) + cn + o(n) = \alpha \cdot n \left(\log n - \ceil{(1-\beta)\log n}\right) + c n + o(n) \end{align*} Summing up, for the average-case number of comparisons of {MergeXsort} we have the following upper bound: \begin{align*} C_{\text{MergeXsort}}(n) &= C_{X} (n) + C_{\text{Merge}} (n)\\ & \leq \alpha \cdot n \left(\log n - \ceil{(1-\beta)\log n}\right) + c n + o(n) + n \ceil{(1-\beta) \log n} \\ &=n \bigl(\alpha \cdot \log n - (\alpha - 1) \ceil{(1-\beta)\log n}\bigr) + c n + o(n) \\ &= (\alpha\beta+(1-\beta)) \cdot n \log n + \mathcal{O}(n). \end{align*} Using Theorem~\ref{thm:quickXsort} (resp.\ \prettyref{thm:CQXS} for Median-of-3) we obtain the matching bound of at most $(\alpha\beta+(1-\beta)) \cdot n \log n + \mathcal{O}(n)$ element comparisons on average for {QuickMergeXsort}. \end{proof} Theorem~\ref{thm:quickmergeXsqrtsortac} implies that CleverQuickMergeXsort{} implemented with {CleverQuicksort} as recursion stopper at $\sqrt{n}$ elements ($\beta = 1/2$) is a sorting algorithm that needs at most $((\alpha+1)/2) \cdot n \log n + \mathcal{O}(n) = 1.094 \cdot n \log n +\mathcal{O}(n)$ comparisons on average. \section{Worst-Case Efficient QuickMergeSort}\label{sec:worstcase} Although QuickMergesort has an $\mathcal{O}(n^2)$ worst-case running time, it is quite simple to guarantee a worst-case number of comparisons of $n \log n + \mathcal{O}(n)$: just choose the median of the whole array as pivot. This is essentially how InSituMergesort~\cite{ElmasryKS12} works. The most efficient way of finding the median is to use Quickselect \cite{Hoare61_find} as applied in InSituMergesort. However, this does not allow the desired bound on the number of comparisons (not even when using IntroSelect as in~\cite{ElmasryKS12}). Alternatively, one could use the median-of-medians algorithm \cite{BFPRT73}, which, while having a linear worst-case running time, on average is quite slow. In this section we describe a slight variation of the median-of-medians approach, which combines a linear worst-case running time with almost the same average performance as InSituMergesort. 
Again, the crucial observation is that it is not necessary to use the actual median as pivot. As remarked in \prettyref{sec:quickXsort}, the larger of the two sides of the partitioned array can be sorted with Mergesort as long as the smaller side contains at least one third of the total number of elements. Therefore, it suffices to find a pivot which guarantees such a partition. To do so, we can apply the idea of the median-of-medians algorithm: for sorting an array of $n$ elements, we first compute $n/3$ elements as medians of three elements each. Then, the median-of-medians algorithm is used to find the median of those $n/3$ elements. This median becomes the next pivot. As for the median-of-medians algorithm \cite{BFPRT73}, this ensures that at least $2\cdot\floor{n/6}$ elements are less than or equal to and at least the same number of elements are greater than or equal to the pivot~-- thus, the larger part of the partitioned array can always be sorted with Mergesort and the recursion takes place on the smaller part. The big advantage over the straightforward application of the median-of-medians algorithm is that it is called on an array of size only $n/3$ (with the cost of introducing a small overhead for finding the $n/3$ medians of three)~-- giving less weight to its large constant in the linear number of comparisons. We call this algorithm MoMQuickMergesort (MOMQMS). In our implementation of the median-of-medians algorithm, we select the pivot as the median of the medians of groups of five elements~-- we refer to \cite[Sec.\ 9.3]{CLRS09} for a detailed description. The total number $T(n)$ of comparisons in the worst case of MoMQuickMergesort is bounded by $$T(n) \leq T(n/2) + S(n/2) + M(n/3) + \frac{n}{3}\cdot 3 + \frac{2}{3}n$$ where $S(n)$ is the number of comparisons incurred by Mergesort and $M(n)$ the number of comparisons for the median-of-medians algorithm. We have $M(n) \leq 22n$ (for the variant used in our implementation, which uses seven comparisons for finding the median of five elements). The $\frac{n}{3}\cdot 3$-term comes from finding $n/3$ medians of three elements, the $2n/3$ comparisons from partitioning the remaining elements (after finding the pivot, the correct side of the partition is known for $n/3$ elements). Since by \cite{Knu73} we have $S(n) \leq n\log n - 0.9n$, this yields $$T(n) \leq T(n/2) + \frac{n}{2}\log(n/2) - \frac{0.9 n}{2}+ \frac{22}{3}n + \frac{5}{3}n$$ resolving to $T(n) \leq n\log n + 16.1 n$. For our implementation we also use a slight improvement over the basic median-of-medians algorithm by using the approach of adaptation, which was first introduced in \cite{MartinezPV10} for Quickselect and recently applied to the median-of-medians algorithm \cite{Alexandrescu17}. More specifically, whenever in a recursive call the $k$-th element is searched with $k$ far away from $n/2$ (more precisely for $k \leq 0.3n$ or $k\geq 0.7n$), we do not choose the median of the medians as pivot but an element of rank proportional to $k$ (while still guaranteeing that at least $0.3n$ elements are discarded for the next recursive call as in \cite{BFPRT73}). Notice that in the presence of duplicate elements, we need to apply three-way partitioning in order to guarantee this worst-case number of comparisons (that is, elements equal to the pivot are placed in the middle and are included neither in the recursive call nor in the Mergesort call). 
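The pivot selection described above can be sketched as follows (illustrative only; \texttt{std::nth\_element} serves here as a placeholder for the deterministic median-of-medians routine, so this sketch does not provide the worst-case guarantee).
\begin{lstlisting}[language=C++]
// Sketch of the pivot selection of MoMQuickMergesort (illustrative only):
// (1) collect n/3 medians of groups of three at the front of the range,
// (2) take the median of these medians (placeholder: std::nth_element).
#include <algorithm>
#include <vector>

int median_of_three_index(const std::vector<int>& a, int i, int j, int k) {
    if (a[i] < a[j]) {
        if (a[j] < a[k]) return j;
        return (a[i] < a[k]) ? k : i;
    }
    if (a[i] < a[k]) return i;
    return (a[j] < a[k]) ? k : j;
}

// Chooses a pivot for a[lo,hi), moves it to position lo and returns its value.
int select_pivot_mom(std::vector<int>& a, int lo, int hi) {
    int m = (hi - lo) / 3;                 // number of medians of three
    if (m == 0) return a[lo];              // tiny ranges: no sampling
    for (int g = 0; g < m; ++g) {          // elements beyond the last full group are ignored
        int i = lo + 3 * g;
        int med = median_of_three_index(a, i, i + 1, i + 2);
        std::swap(a[lo + g], a[med]);      // collect the g-th median at the front
    }
    // Median of the m collected medians (stand-in for median-of-medians).
    std::nth_element(a.begin() + lo, a.begin() + lo + m / 2, a.begin() + lo + m);
    std::swap(a[lo], a[lo + m / 2]);       // pivot now sits at position lo
    return a[lo];
}
\end{lstlisting}
Partitioning around the returned pivot leaves at least $2\cdot\floor{n/6}$ elements on each side, so the larger side can always be sorted with Mergesort while the recursion continues on the smaller side.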
With the usual (two-way) partitioning, as in our experiments, we obtain a worse bound for the worst case, since it might happen that the smaller part of the array has to be sorted with Mergesort. In order to achieve the guarantee for the worst case together with the efficiency of the Median-of-3 pivot sampling, we can combine the two approaches using a trick similar to {Introsort} \cite{Mus97}: we fix some small $\delta >0$. Whenever the pivot is contained in the interval $\left[\delta n, (1-\delta)n \right]$, the next pivot is selected as a Median-of-3; otherwise it is selected according to the worst-case efficient procedure described in the previous section, and for the following pivots we switch back to Median-of-3. When choosing $\delta$ not too small, the worst-case number of comparisons will be only approximately $2n$ more than that of MoMQuickMergesort (because in the worst case before every partitioning step according to MoMQuickMergesort, there will be one partitioning step with Median-of-3 using $n$ comparisons), while the average is almost the same as for CleverQuickMergesort. We propose $\delta = 1/16$. We call this algorithm HybridQuickMergesort (HQMS). \section{Experiments}\label{sec:experiments} The collection of sorting algorithms we considered for comparison is much larger than the one we present here, but the bar of being competitive wrt.\ state-of-the-art library implementations in {C++} and {Java} on basic data types is surprisingly high. For example, all {Heapsort} variants we are aware of fail this test; we checked refined implementations of {Binary Heapsort}~\cite{Flo64,Wil64}, {Bottom-Up Heapsort}~\cite{Weg93}, {MDR Heapsort} \cite{Wegener:MDR}, {QuickHeapsort}~\cite{DiekertW13Quick}, and {Weak-Heapsort}~\cite{Dut93}. Some of these algorithms even use extra space. {Timsort} (by Tim Peters; used in {Java} for sorting non-elementary object sequences) was less performant on simple data types. There are fast algorithms that exploit the set of keys to be sorted (like {CountingSort} or {Radixsort}), but we aim at a general algorithm. We also experimented with Sanders and Winkel's {SuperScalarSampleSort} that has a particular memory profile~\cite{SandersW04,EdelkampW16_BQS}. The main reason not to include the results was that it allocates substantial amounts of space for the elements and, thus, is not internal. We experienced that it acts fast on random data, but not as well on presorted inputs. One remaining competitor was {(Bottom-Up) Mergesort} (\texttt{std::stable\_sort}) in the {C++} STL library, which on some inputs shows a very good performance. As this is an external algorithm, we chose a tuned version of in-place {Mergesort} (\texttt{stl::inplace\_stable\_sort} simply was too slow) called {InSituMergesort} (ISMS)~\cite{ElmasryKS12} for our experiments. According to~\cite{dqsanalysis1,dqsanalysis2,dqsanalysis3}, for the {DualPivotQuicksort} algorithm variants, there was no clear-cut winner, but the experiments suggested that the standard ones had a slight edge. For {DualPivotQuicksort} we translated Oracle's most recent (Java) version (the algorithm selects the 2nd and 4th element of the inner five pivot candidates of a split-into-7). As the full sorting algorithm is lengthy and contains many checks for special input types (with different code fragments and parameter settings for sorting arrays of bytes, shorts, ints, floats, doubles, etc.), we extracted the integer part. 
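The comparison counts reported below are obtained by incrementing a counter for every element comparison; a minimal harness of this kind might look as follows (illustrative only, not the exact code used for the experiments).
\begin{lstlisting}[language=C++]
// Minimal comparison-counting harness (illustrative only).
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const std::size_t n = 1u << 20;
    std::vector<std::uint32_t> a(n);
    std::mt19937 gen(42);
    for (auto& x : a) x = gen();

    std::uint64_t comparisons = 0;
    std::sort(a.begin(), a.end(),
              [&comparisons](std::uint32_t x, std::uint32_t y) { ++comparisons; return x < y; });

    double ratio = double(comparisons) / (double(n) * std::log2(double(n)));
    std::printf("comparisons / (n log n) = %.3f\n", ratio);
    return 0;
}
\end{lstlisting}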
{TunedQuicksort}~\cite{ElmasryKS12} is an engineered implementation of {CleverQuicksort}, probably unnoticed by the wider public, as it is contained in a paper on tuning {Mergesort} to study branch mispredictions as in~\cite{Kaligosi}. It applies Lomuto's uni-directional Median-of-3 partitioner~\cite{CLRS09}, which works well for permutations and a limited number of duplicates in the element set. As with {Introsort}, the algorithm stops the recursion if fewer than a fixed number of elements remain (16 in our case). These elements are then sorted together, calling STL's {Insertionsort} algorithm. The implementation utilizes a stack to avoid recursion; the stack is responsible for tracking the remaining array intervals to be processed. We dropped TunedQuicksort from the experiments as it failed on presorted data and data with duplicates, but we used parts of its efficient stack-based implementation. This advanced CleverQuicksort implementation and {CleverQuickMergesort} (QMS) are the two extremes, while {CleverQuickMergeCleverQuicksort} (QuickMergeCleverQuicksort with a modified TunedQuicksort implementation at $\sqrt{n}$ elements) (QMQS for short) is our tested intermediate. QMS uses hard-coded base cases for $n<10$, while the recursion stopper in QMQS does not. Depending on the size of the arrays, the displayed numbers are averages over multiple runs (repeats)\footnote{Experiments were run on one core of an Intel Core i5-2520M CPU (2.50GHz, 3MB Cache) with 16GB RAM; Operating system: Ubuntu Linux 64bit; Compiler: GNU's \texttt{g++} (4.8.2); optimized with flags \texttt{-O3 -march=native -funroll-loops}. }. The arrays we sorted were random permutations of $\{1,\ldots,n\}$. The number of element comparisons was measured by increasing a counter for every comparison. For CPU time experiments we used vectors of integers as this is often most challenging for algorithms with a lower number of comparisons. All algorithms sort the same arrays. As counting the number of comparisons affects the speed of the sorting algorithms, for further measurements (e.g., moves and comparisons) we started another set of experiments. We made element comparisons more expensive (we experimented with logarithms, and elements as vectors and records). Thanks to the lower number of comparisons, the results were even better. As a first empirical observation, for {Introsort} (Std) the number of element comparisons divided by $n \log n$ is larger than $1.18$, due to larger lower-order terms. As theoretically shown, for QMS the number of element comparisons divided by $n \log n$ was below~1. For our QuickMergesort implementations we used the block partitioner from \cite{EdelkampW16_BQS}, which improves the performance considerably over the standard Hoare partitioner. Figs.~\ref{fig:sortrandom}--\ref{fig:sortrandomdup} show the results when sorting random integer data (with QMQS: CleverQuickMergeCleverQuicksort, QMS: CleverQuickMergesort, MOMQMS: worst-case-efficient QuickMergesort, HQMS: hybrid of worst-case- and average-case-efficient QuickMergeSort, ISMS: InSituMergesort, Java: DualPivotQuicksort, and Std: \texttt{std::sort}). Times displayed are the total running times divided by the number of elements (in ns). We see that QuickMergeSort variants are fast. We also measured element moves (assignments of input data elements; e.g., a swap of two elements is counted as three moves). \begin{figure} \caption{Time (left) and element comparisons (right) for sorting random integer data. 
} \label{fig:sortrandom} \end{figure} \begin{figure} \caption{Time for sorting random data with a comparator that applies the logarithm to the integer elements (left), and number of element moves (right). } \label{fig:sortrandomdup} \end{figure} \section{Conclusion} Sorting $n$ elements is one of the most frequently studied subjects in computer science with applications in almost all areas in which efficient programs run. With variants of {QuickMergesort}, we contributed sorting algorithms which are able to run faster than {Introsort} and {DualPivotQuicksort} even for elementary data. Compared to Introsort, we reduced the leading term $\alpha$ in $\alpha \cdot n \log n +\mathcal{O}(n)$ in the average number of comparisons from $\alpha \approx 1.18$ via $1.09$ down to $1$. The algorithms are simple but effective: a) Median-of-3 pivot selection (as opposed to using a sample of $\sqrt{n}$), b) faster sorting for smaller element sets. Both modifications show empirical impact and are analyzed theoretically to provide upper bounds on the average number of comparisons. We discussed options to guarantee a constant-factor-optimal worst case. In the theoretical part of our work we concentrated on average-case analyses, as we strongly believe that this reflects realistic behavior more closely than worst-case analyses. With very low overhead, {QuickMergesort} has been implemented in a way that makes it constant-factor optimal in the worst case, too. We chose efficient deterministic median-of-medians strategies that are also of interest for further considerations. For future research we propose the integration of QuickMergesort with multi-way merging, with the aim of scaling the algorithm beyond main-memory capacity and of effective parallelization. \end{document}
\begin{document} \title[Algebraic Goodwillie Calculus]{Algebraic Goodwillie Calculus \\ and a Cotriple Model for the Remainder} \author{Andrew Mauer-Oats} \address{Department of Mathematics \\ Purdue University \\ West Lafayette, Indiana 47907} \email{[email protected]} \subjclass[2000]{55P65} \date{\today} \begin{abstract} Goodwillie has defined a tower of approximations for a functor from spaces to spaces that is analogous to the Taylor series of a function. His \nth{} order approximation $P_n F$ at a space $X$ depends on the values of $F$ on coproducts of large suspensions of the space: $F(\vee \Sigma^M X)$. We define an ``algebraic'' version of the Goodwillie tower, $\Pnalg F(X)$, that depends only on the behavior of $F$ on coproducts of $X$. When $F$ is a functor to connected spaces or grouplike $H$-spaces, the functor $\Pnalg F$ is the base of a fibration $$ \realization{\Perp^{*+1} F} \rightarrow F \rightarrow \Pnalg F, $$ whose fiber is the simplicial space associated to a cotriple $\Perp$ built from the $(n+1)^{\text{st}}$ cross effect of the functor $F$. In a range in which $F$ commutes with realizations (for instance, when $F$ is the identity functor of spaces), the algebraic Goodwillie tower agrees with the ordinary (topological) Goodwillie tower, so this theory gives a way of studying the Goodwillie approximation to a functor $F$ in many interesting cases. \end{abstract} \maketitle \section{Introduction} A function on the real line can be approximated by its ordinary Taylor series at a point, with the ``\nth{}-order approximation'' being the Taylor polynomial through the $x^n$ term. Goodwillie \cite{Cal1,Cal2,Cal3} has defined an analogous approximation for functors of topological spaces. Johnson and McCarthy \cite{Johnson-McCarthy:taylor-towers-for-functors-of-additive-categories} have explored alternative models for Goodwillie's approximation, working in the simpler setting of chain complexes. Specifically, for a functor $C$ that is the prolongation to chain complexes of a functor to an abelian category, they have defined a tower of functors to chain complexes, which we denote $\Pnalg C$ to distinguish from Goodwillie's tower. The functor $\Pnalg$ approximates $C$ in the sense that the map $C \rightarrow \Pnalg C$ is the universal map under $C$ to a functor whose $(n+1)^{\text{st}}$ cross-effect $\Perp_{n+1} C$ is acyclic. In a later work \cite{Johnson-McCarthy:deriving-calculus-with-cotriples}, the same authors show that $\Pnalg$ can be constructed from a cotriple. One source of the interest in $\Pnalg$ approximation stems from the fact that, for many functors, the cross effects are more computationally accessible than the stabilization; hence, $\Pnalg$ should also be more accessible than Goodwillie's $P_n$. Taking their work as inspiration, we explore to what extent similar ideas are useful in the study of functors from spaces to spaces. In this context, we develop a tower analogous to that of Johnson and McCarthy. We call this algebraic Goodwillie tower, and denote it $\Pnalg F(X)$. The algebraic Goodwillie approximation was created with the intent to be universal with respect to natural transformations of $F$ to functors whose $(n+1)^{\text{st}}$ cross effect is contractible; however, it turns out that there is a subtle issue involving $\pi_0$. For example, the vanishing of the second cross effect forces a monoidal structure on $\pi_0$, but on $\pi_0$ the approximation process can only produce the group completion of this monoid (since it involves loops on a space). 
We solve this problem by requiring our functors to take values in topological groups or connected spaces. Our main theorem is that for good functors $F$, there exists a fibration sequence up to homotopy \begin{equation} \label{eq:main-thm-in-intro} \sR{ \Perp_{n+1}^{*+1} F } \rightarrow F \rightarrow \Pnalg F , \end{equation} where the fiber is built from a cotriple $\Perp$ formed from the diagonal of the $(n+1)^{\text{st}}$ cross effect. In \cite{Johnson-McCarthy:deriving-calculus-with-cotriples}, the authors are able to define the analog of $\Pnalg$ simply by taking the cofiber of the left-hand map in \eqref{eq:main-thm-in-intro}. However, in an unstable topological setting, that approach is not useful, and the proof is much more difficult. It is easy to show that when $F$ commutes with realizations, the \nth{} algebraic Goodwillie approximation $\Pnalg F(X)$ agrees with the \nth{} topological Goodwillie approximation $P_n F(X)$ (see \ref{prop:Pnalg-agrees-Pn}), so in this case \eqref{eq:main-thm-in-intro} shows that the ``remainder'' of the $n$-excisive Goodwillie approximation is the cotriple homology of $F$. \subsection{Organization of the paper} We state the main theorem in \S\ref{sec:main-theorem}, along with all of the basic definitions necessary to understand it. In \S\ref{sec:technical}, we summarize technical ingredients of the proof, including the limit axiom, $\pi_*$-Kan functors, results of Goodwillie, and iterated cross effects. Then \S\ref{sec:Pnalg-connectivity} shows that the approximation $\Pnalg$ preserves the connectivity of natural transformations. To deal with another (important) technical issue, \S\ref{sec:fiber-contractible-Cartesian} sketches proofs that when the cross effect vanishes, a certain cube is actually Cartesian (rather than just having contractible total fiber). In \S\ref{sec:perp-is-cotriple}, we prove that $\Perp$ is really a cotriple. After these technicalities, we give a detailed outline of the proof of the main theorem (\S\ref{sec:main-theorem-outline}). The proof of the main theorem then follows, with \S\ref{sec:perp-F-zero} proving that when $\Perp_{n+1}F(X)\simeq 0$, then $F(X)\simeq \Pnalg F(X)$, and \S\ref{sec:perp-F-nonzero} reducing the general case to that case. \section{Preliminaries} \label{sec:preliminaries} The category of ``spaces'' is taken to be the category of compactly generated spaces with nondegenerate basepoint. The space $0$ or $\basept$ is the space with only one point. When forming the function space $\Map(A,B)$, we always implicitly replace $A$ by a cofibrant approximation (CW complex) and $B$ by a fibrant approximation (which is no change with our definition of spaces). When forming the geometric realization, we first ``thicken'' each space (\cite[p.~308]{Segal:categories-and-cohomology-theories}) so the resulting functor has good homotopy behavior. When we say two spaces are \emph{equivalent}, we mean they are weakly homotopy equivalent. We use the term \emph{fibration sequence up to homotopy} for a sequence $F \rightarrow E \rightarrow B$ in which $F \xrightarrow{\simeq} \hofib(E \rightarrow B)$. We briefly recall a few standard properties of functors. A functor is \emph{continuous} if the map $\Map(X,Y) \rightarrow \Map(F(X),F(Y))$ sending $f$ to $F(f)$ is continuous. A functor is a \emph{homotopy functor} if it preserves weak homotopy equivalences. A functor is called \emph{reduced} if $F(0)\simeq 0$. 
\begin{definition} \label{def:limit-axiom} A homotopy functor $F$ is said to satisfy the \emph{limit axiom} if $F$ commutes with filtered homotopy colimits of finite complexes. That is, if $\hocolim F(X_\alpha) \xrightarrow{\simeq} F(\hocolim X_\alpha)$ for all filtered systems $\Set{X_\alpha}$ of finite complexes. \end{definition} In this paper, we assume that all functors are continuous homotopy functors that satisfy the limit axiom. All functors are defined from pointed spaces to pointed spaces unless otherwise stated. Cubical diagrams and homotopy fibers figure heavily into this work, so we recall several definitions from \cite{Cal2}. Let $T$ be a finite set. $\PP(T)$ is the poset of subsets of $T$ (regarded as a category). $\PP_0(T)$ is the poset of nonempty subsets of $T$. A $T$-cube is a functor defined on $\PP(T)$. If $\cube{X}$ is a $T$-cube of pointed spaces, then its homotopy fiber, $\hofib \cube{X}$, is the subspace of the function space $\prod_{U \subset T} \Map\left([0,1]^U,\cube{X}(U)\right)$ consisting of maps that are natural in $U$ and send points with any coordinate $1$ to the basepoint. A formal definition is given in \cite[1.1]{Cal2}. Alternatively, the homotopy fiber can be constructed by iterating the process of taking fibers in a single direction. We write $\mathbf{n}$ for the finite (unpointed) set $\Set{1,\ldots,n}$, and $[n]$ for the finite space $\bigvee^n S^0 \cong \Set{0,\ldots,n}$, with basepoint $0$. \section{The Main Theorem} \label{sec:main-theorem} In this section, we briefly present all of the background necessary to understand the Main Theorem (\ref{thm:main-theorem}). In brief: cross effects, left Kan extensions, excisive functors, and $\Pnalg$ The cross effect measures the failure of a functor to take coproducts to products. \begin{definition}[$cr_n$] Define the $n^{\text{th}}$ cross-effect cube, $\CRN F(X_1,\ldots,X_n)$, to be the $\mathbf{n}$-cube $\cube{X}$ with $\cube{X}(U) = F\left( \bigvee_{u\not\in U} X_u\right)$ and $\cube{X}(i:U\rightarrow V)$ induced by the identity of $X_u$ if $u\not\in V$ and the map to the basepoint if $u\in V$. Let $cr_n F(X_1,\ldots,X_n)$ denote the homotopy fiber of $\cube{X}$. \end{definition} \begin{definition}[$\Perp_n$, $\epsilon$] Let $\cube{X} = \CRN F(X,\ldots,X)$ and let $\cube{Y}$ be the $\mathbf{n}$-cube with $$\cube{Y}(U) = \begin{cases} F(X) & \text{if $U=\emptyset$} \\ \basept & \text{otherwise} \end{cases} $$ Define $\Perp_n F(X) = \hofib \cube{X}$, and note that $F(X) \cong \hofib \cube{Y}$. Using the fold map $\bigvee X \rightarrow X$ on the vertex $U=\emptyset$ and the zero map elsewhere induces a map of cubes $\cube{X} \rightarrow \cube{Y}$. Define $\epsilon$ to be the induced map on homotopy fibers, so $$ \epsilon: \Perp_n F(X) \rightarrow F(X).$$ \end{definition} As we will show in Section~\ref{sec:perp-is-cotriple}, $\Perp_n$ is part of a cotriple. Hence there is a simplicial object $Y$ with $Y_k = \Perp_n^{k+1} F(X)$ and face maps $d_i^{(k)} = \Perp_n^i \epsilon \Perp_n^{k-i}$. \begin{example} The second cross effect of the functor $F(X)=Q(X)$ is contractible, but the second cross effect of $F(X)=Q(X\wedge X)$ is $cr_2 F(X,Y) \simeq Q(X \wedge Y) \times Q(Y \wedge X)$. \end{example} The left Kan extension gives a canonical way of extending a functor from a subcategory to a functor defined on the whole category. 
\begin{definition}[left Kan extension] \label{def:left-Kan}\label{def:left-kan} Let $LF$ denote the homotopy invariant left Kan extension of a functor $F$ along the inclusion of finite pointed sets into pointed spaces. Letting $\cat{S}$ denote a small category of finite pointed sets and $\cat{T}$ denote the category of pointed spaces, the realization of the following simplicial space can taken to be the definition of LF(-): \begin{equation} \label{eq:left-kan-spaces} [n] \mapsto \bigvee_{(C_0, \ldots, C_n)} F(C_0) \wedge \left( \Map_{\cat{S}}(C_0,C_1) \times \cdots \times \Map_{\cat{T}}(C_n, - ) \right)_+ . \end{equation} The coproduct is taken over all $(C_0, \ldots, C_n)\in\cat{S}^{\times n}$. \end{definition} When $F$ is the restriction of a functor (also called $F$) defined on $\cat{T}$, then there is a natural map $a: LF(Y) \rightarrow F(Y)$ induced by $$ F(C_0) \wedge \Map(C_0,Y)_+ \rightarrow F(C_0) \wedge \Map(F(C_0),F(Y))_+ \rightarrow F(Y) $$ sending $$ y \wedge f \mapsto y \wedge F(f) \mapsto F(f )(y) .$$ We have required that all functors be continuous to guarantee that the first map above is continuous. \begin{lemma} When $Y$ is a finite pointed set in $\cat{S}$, the map $a: LF(Y) \rightarrow F(Y)$ is a simplicial homotopy equivalence. \end{lemma} \begin{proof} When $Y\in\cat{S}$, the category $\cat{S}\downarrow Y$ has a terminal object. This immediately translates into a homotopy contracting $LF(Y)$ to $F(Y)\wedge(id_Y)_+ \cong F(Y)$. \end{proof} When working with the left Kan extension, we will frequently want to shift the functor so that we can examine its value on coproducts of spaces other than $S^0$. To do this, we write $F_X(-)$ for the functor $F(X\wedge -)$. We primarily understand $LF$ as ``$F$ made to commute with realizations'', in the sense of the following lemma. \begin{lemma} \label{lem:LF-commutes-with-realization} Let $Y_\cdot$ be a simplicial set. Then $$LF_X(\sR{Y_\cdot}) \simeq \sR{ F_X(Y_\cdot)} = \sR{ F(X \wedge Y_\cdot)}.$$ \end{lemma} \begin{proof} Using \eqref{eq:left-kan-spaces}, this follows from the observation that $\Map(S,Y)\cong\prod_{s\in S} Y$ commutes with realizations when $S$ is a finite set. \end{proof} Before we define the algebraic Goodwillie tower, we will recall the classical definition of the topological Goodwillie tower. \begin{definition}[Cartesian, co-Cartesian] An $S$-cube $\cube{X}$ is Cartesian if the categorical map $\cube{X}(\emptyset) \cong \holim_{\PP(S)} \cube{X} \rightarrow \holim_{\PP_0(S)} \cube{X}$, induced by the inclusion of the category $\PP_0(S)$ of nonempty subsets of $S$ into the category $\PP(S)$ of all subset of $S$, is a weak equivalence. An $S$-cube $\cube{X}$ is co-Cartesian if the categorical map $ \hocolim_{\PP_1(S)} \cube{X} \rightarrow \hocolim_{\PP(S)} \cube{X} \cong \cube{X}(S)$, induced by the inclusion of all proper subsets of $S$ into all subsets of $S$, is a weak equivalence. An $S$-cube is strongly co-Cartesian if every sub-$2$-cube is co-Cartesian. \end{definition} \begin{definition}[$n$-excisive] (\cite[3.1]{Cal2}) $F$ is $n$-excisive if for every strongly co-Cartesian $(n+1)$-cube $\cube{X}$, the cube $F\cube{X}$ is Cartesian. \end{definition} In \cite{Cal3}, Goodwillie constructs a universal $n$-excisive approximation $P_n F$ to a functor $F$. The approximations form a tower of functors equipped with natural transformations of the following form: $$\xymatrix{ F \ar[r] \ar[rd] & P_n F \ar[d]\\ & P_{n-1} F }$$ We are now in a position to define the algebraic Goodwillie approximation. 
\begin{definition}[$\Pnalg F$] \label{def:Pnalg-F} The algebraic Goodwillie approximation $\Pnalg F(X)$ is defined to be the functor given by applying $P_n$ to the left Kan extension of $F$ shifted over $X$; that is, $\Pnalg F(X) = P_n (L F_X) (S^0)$. \end{definition} The natural transformation $F(X) \rightarrow \Pnalg F(X)$ arises from evaluating the map $LF_X \rightarrow P_n (LF_X)$ at $S^0$. \begin{proposition} \label{prop:Pnalg-agrees-Pn} If $F_X$ is a functor that commutes with realizations, that is, the natural map $LF_X(Y) \rightarrow F_X(Y)$ is an equivalence for all spaces $Y$, then the natural map $\Pnalg F(X) \rightarrow P_n F(X)$ is an equivalence. \end{proposition} \begin{proof} Given $L F_X \xrightarrow{\simeq} F_X$, applying $P_n$ and evaluating at $S^0$ gives $P_n(L F_X)(S^0) \xrightarrow{\simeq} P_n F_X(S^0)$. The left hand side is $\Pnalg F(X)$, and the right hand side is $P_n F(X)$. \end{proof} There remains a technical hypothesis on $F$, related to the ``$\pi_*$-Kan condition'', needed in the main theorem. We give the hypotheses here, but defer discussion of how they are used until Section~\ref{sec:pi-star-Kan}. \begin{hypothesis}[Connected Values] \label{hyp:connected-values} \label{hypothesis-1} $F$ has connected values (on coproducts of $X$) if the functor $L F_X$ is always connected. \end{hypothesis} Let $\cat{G}$ denote the category of grouplike $H$-spaces. Specifically, by $\cat{G}$, we mean the category of algebras over the associativity operad with inverses and identity up to homotopy. In this category, all morphisms strictly preserve all homotopies, so it is rigid enough that the realization of a simplicial object in $\cat{G}$ is still in $\cat{G}$. \begin{hypothesis}[Group Values] \label{hyp:group-values} \label{hypothesis-2} In the following definition, let $\cat{T}$ denote the category of pointed spaces. Let $U: \cat{G} \rightarrow \cat{T}$ be the forgetful functor. $F$ ``has group values'' or ``takes values in groups'' if there exists a functor $F'$ so that the following diagram commutes: $$\xymatrix{ & \cat{G} \ar[d]^U \\ \cat{T} \ar[ur]^{F'(-)} \ar[r]_{F(-)} & \cat{T} }$$ In this case, we will conflate $F( -)$ with its lift to groups. We say that $F$ has group values on coproducts of $X$ if $F_X$ has group values. \end{hypothesis} We are now able to state our theorem relating the cross effects of a functor and the algebraic Goodwillie approximation. \begin{theorem}[Main Theorem] \label{thm:main-theorem} Let $F$ be a reduced homotopy functor from pointed spaces to pointed spaces. If $F$ has either connected values (\ref{hypothesis-1}) or group values (\ref{hypothesis-2}) on coproducts of $X$, then the following is a fibration sequence up to homotopy: \begin{equation} \label{eq:perp-fibration} \realization{\Perp_{n+1}^{*+1} F(X)} \rightarrow F(X) \rightarrow \Pnalg F(X) , \end{equation} and furthermore the right hand map is surjective on $\pi_0$. \end{theorem} To establish this theorem, we use a ``ladder'' induction on $n$, where there are two cases for each $n$: the first depends on the second for smaller $n$, and the second depends on the first for the same $n$. The first case is to establish that when $\Perp_{n+1} F$ vanishes---that is, when $F$ is degree $n$--- the map $F \rightarrow \Pnalg F$ is actually an equivalence; this proof proceeds by examining a fibration sequence obtained from the second case for smaller $n$. 
The second case is to consider the general case, in which $\Perp_{n+1} F$ may be nonvanishing, and proceed by applying the first case (for the same $n$) to the fiber of the map from $F\rightarrow \Pnalg F$. A more extensive outline of the proof is given in Section~\ref{sec:main-theorem-outline}. \section{Technical Conditions} \label{sec:technical} In this section, we address many technical aspects needed to make our machinery work. \subsection{Limit axiom} It is of primary importance to know that the functors we will be working with satisfy the limit axiom. \begin{lemma} For any functor $F$, the functor $LF$ satisfies the limit axiom (\ref{def:limit-axiom}). \end{lemma} \begin{proof} The functor $\Map(K,-)$ satisfies the limit axiom for any compact $K$. Examining \eqref{eq:left-kan-spaces}, we see that this implies that $LF$ satisfies the limit axiom. \end{proof} \subsection{Eilenberg-Zilber} An ``Eilenberg-Zilber''-type theorem for bisimplicial sets provides a source for connectivity estimates. \begin{lemma} \label{lem:basic-Eilenberg-Zilber} Let $X_\cdot$ and $Y_\cdot$ be simplicial spaces satisfying the $\pi_*$-Kan condition, and let $f_\cdot: X_\cdot \rightarrow Y_\cdot$ be a map between them. If $n\ge 0$ and $w \ge -1$ are integers such that for $i<n$, the map $f_i$ is a weak equivalence, and for $i\ge n$, the map $f_i$ is $w$-connected, then $\realization{f_\cdot}$ is $(n+w)$-connected. \end{lemma} \begin{proof} Using the homotopy spectral sequence of \cite[Theorem~B.5]{Bousfield-Friedlander:Gamma-Spaces}, we have a spectral sequence $E_{*,*} \Rightarrow \pi_*\realization{X_\cdot}$ and a spectral sequence $F_{*,*} \Rightarrow \pi_*\realization{Y_\cdot}$, with a $f$ inducing a map of spectral sequences $E \rightarrow F$. The hypotheses imply that $E^1_{i,j} \cong F^1_{i,j}$ when $i<n$ or $j<w$, and $E^1_{i,w}$ surjects onto $F^1_{i,w}$. An easy analysis of possible differentials then shows that the map $E^{\infty}_{i,j} \rightarrow F^{\infty}_{i,j}$ is an isomorphism when $i+j < n+w$, and a surjection when $i+j = n+w$. \end{proof} \begin{corollary}[Eilenberg-Zilber connectivity estimate] \label{cor:eilenberg-zilber-connectivity-estimate} Let $X$ and $Y$ be $\pi_*$-Kan functors from $(\Delta^{\text{op}})^{\times N}$ to spaces, and let $p=(p_1,\ldots,p_N) \in (\Delta^{\text{op}})^{\times N}$ denote an index for these multisimplicial spaces. Let $f: X\rightarrow Y$. Suppose that $f(p)$ is $w$-connected for all indices $p$. If in addition, there are integers $\Set{n_i \suchthat i=1,\ldots,N}$ so that $f(p)$ is an equivalence if there exists an $i$ with $1\le i \le N$ such that $p_i < n_i$, then $\sR{f}$ is $(\Sigma n_i + w)$-connected. \end{corollary} \begin{proof} Iterating Lemma~\ref{lem:basic-Eilenberg-Zilber} produces this result. \end{proof} \subsection{The $\pi_*$-Kan condition} \label{sec:pi-star-Kan} For us to be able to say anything useful about $L F$, we need to know that the Kan extension of the fiber of a map is the fiber of the Kan extensions, and that one can continue to say similar things about the functor defined by taking fibers of certain maps. 
\begin{definition}[$\pi_*$-Kan functor] \label{def:pi-star-Kan-functor} A functor $F$ is called a $\pi_*$-Kan functor if given a map of simplicial sets $p: Y_\cdot \rightarrow Z_\cdot$ with a section: \begin{enumerate} \item the simplicial spaces $F(Y_\cdot)$ and $F(Z_\cdot)$ satisfy the $\pi_*$-Kan condition; \item $\pi_0 F(p_\cdot)$ is a fibration of simplicial sets; and \item as a functor of $p: Y\rightarrow Z$, the fiber of $F(p)$ is still a $\pi_*$-Kan functor. \end{enumerate} \end{definition} This is useful in practice due to the following theorem of Bousfield and Friedlander, restated for simplicial spaces. \begin{theorem} (\cite[Theorem~B.4]{Bousfield-Friedlander:Gamma-Spaces}) \label{thm:bousfield-friedlander} Let $A$, $B$, $X$, and $Y$ be simplicial spaces, and suppose that the cube $$\xymatrix{ A \ar[r] \ar[d] & X \ar[d] \\ B \ar[r] & Y }$$ has the property that evaluation at every $[n]\in \Delta^{\text{op}}$ produces a Cartesian cube. If $X$ and $Y$ satisfy the $\pi_*$-Kan condition and if $\pi_0 X \rightarrow \pi_0 Y$ is a fibration of simplicial sets, then after realization the cube is still Cartesian. \end{theorem} In this work we restrict ourselves to functors satisfying the hypotheses of connected values (\ref{hyp:connected-values}) or group values (\ref{hyp:group-values}) so that the simplicial spaces involved always satisfy the $\pi_*$-Kan condition. The $\pi_*$-Kan condition is a technical fibrancy condition introduced in \cite[\S{}B.3]{Bousfield-Friedlander:Gamma-Spaces} that we do not repeat here. \begin{corollary} \label{cor:pi-star-fibers-levelwise} If $F$ is a $\pi_*$-Kan functor, then given a map $p: X\rightarrow Y$ with a section, the spaces $L\hofib F(p)$ and $\hofib L F(p)$ are equivalent. In view of Lemma~\ref{lem:LF-commutes-with-realization}, this is implies $\sR{ \hofib F(p_\cdot)} \simeq \hofib F (\sR{p_\cdot})$. \end{corollary} \begin{proof} By definition, a $\pi_*$-Kan functor causes $F(X_\cdot) \rightarrow F(Y_\cdot)$ to satisfy the hypotheses on the right hand vertical map in Theorem~\ref{thm:bousfield-friedlander}, so letting $B=\basept$ and $A = \hofib F(p_\cdot)$ produces the desired result. \end{proof} \begin{remark} Theorem~\ref{thm:bousfield-friedlander} implies that if each space $X_i$ is connected then $\sR{\Omega X_i} \simeq \Omega \sR{X_i}$. \end{remark} \begin{lemma} \label{lem:connected-or-gp-is-pi-star-kan} If either $F$ has connected values (\ref{hyp:connected-values}) or group values (\ref{hyp:group-values}), then $F$ is a $\pi_*$-Kan functor (\ref{def:pi-star-Kan-functor}). \end{lemma} \begin{proof} In these cases, $LF$ always satisfies the $\pi_*$-Kan condition (\cite[p.~120]{Bousfield-Friedlander:Gamma-Spaces}). Also, for any surjective map $p$, the function $\pi_0 F(p_\cdot)$ is a fibration of simplicial sets, since surjective maps of simplicial groups are fibrations. If $F$ has connected values, then the requirement that $p$ have a section implies, using the long exact sequence on homotopy, that $\hofib F(p)$ is also a functor to connected spaces. If $F$ has group values, then the fact that taking products commutes with taking fibers means $\hofib F(p)$ still has group values. \end{proof} The cross effect of the Kan extension of a $\pi_*$-Kan functor can be computed from finite sets, as the following lemma shows. \begin{lemma} \label{lem:perp-commutes-with-realization-v2} Let $Y_1, \ldots, Y_n$ be spaces. 
If $F$ is a $\pi_*$-Kan functor, then $$ cr_n (LF)(Y_1,\ldots,Y_n) \simeq L^n (cr_n F) (Y_1,\ldots,Y_n) , $$ where $L^n$ indicates the Kan extension is taken in each of the $n$ variables of $cr_n F$. The statement above can be abbreviated to $\Perp_n(LF)(Y) \simeq L (\Perp_n F)(Y)$. \end{lemma} \begin{proof} All of the maps in $\CRN (LF)$ have sections, so the hypothesis that $F$ is a $\pi_*$-Kan functor means that Corollary~\ref{cor:pi-star-fibers-levelwise} applies, so taking fibers in one direction commutes with realizations. The sections are all coherent, so they produce a section on the fibers. The property of being a $\pi_*$-Kan functor passes to the fibers (by definition \ref{def:pi-star-Kan-functor}(3)), so the above argument applies inductively. \end{proof} \begin{lemma} \label{lem:perp-commutes-with-realization-for-simplicial-functors} Let $\Perp = \Perp_n$ for some $n$. Let $F_\cdot$ be a simplicial object in the category of either functors to connected spaces or functors to $\cat{G}$ (the category of ``grouplike $H$-spaces''; see definition prior to \ref{hyp:group-values}). Then $\realization{\Perp F} \xrightarrow{\simeq} \Perp\realization{F}$. \end{lemma} \begin{proof} As in Lemma~\ref{lem:perp-commutes-with-realization-v2}, the question is whether the $\Perp$ construction may be taken dimensionwise. By the same reasoning as Lemma~\ref{lem:connected-or-gp-is-pi-star-kan}, the spaces in the cube $\CRN F(X, \ldots, X)$ satisfy the $\pi_*$-Kan condition, and on $\pi_0$ the maps are fibrations of simplicial sets, so the fibers may be taken dimensionwise. Furthermore, as in that lemma, the cube of fibers has the same property, so we may proceed inductively. \end{proof} \subsection{Goodwillie calculus: classification of homogeneous functors} The functor $P_n$ gives the universal $n$-excisive approximation to a functor. The functor $D_n$ gives the homogeneous $n$-excisive part of a functor; it is part of a fibration sequence $$ D_n \rightarrow P_n \rightarrow P_{n-1} .$$ Goodwillie shows that there is actually a functorial delooping of the derivative, so this fibration sequence can be delooped once: \begin{theorem} \label{thm:delooping-Dn} (\cite[Lemma~2.2]{Cal3}) If $F$ is a reduced homotopy functor from spaces to spaces, then the map $P_n F \rightarrow P_{n-1} F$ is part of a fibration sequence $$ P_n F \rightarrow P_{n-1} F \rightarrow \Omega^{-1} D_n F ,$$ where $\Omega^{-1} D_n F$ is a homogeneous $n$-excisive functor. \end{theorem} \begin{definition}[Derivative of $F$] \label{def:derivative-of-F} The $n^{\text{th}}$ derivative of $F$ (at $\basept$), denoted $\partial^{(n)} F(\basept)$, is the following spectrum with $\Sigma_n$ action, which we will denote $\mathbf{Y}$. The space $Y_k$ in the spectrum is $\Omega^{k (n-1)} cr_n F(S^k,\ldots, S^k)$. The structure map $Y_k \rightarrow \Omega Y_{k+1}$ arises from suspending inside and looping outside each variable of $cr_n$. \end{definition} When $F$ satisfies the limit axiom (\ref{def:limit-axiom}), we can express $D_n F(X)$ using the derivative: \begin{theorem} (\cite[\S 5, p.~686]{Cal3}) \label{thm:def-of-deriv} If $F$ is a homotopy functor from spaces to spaces that satisfies the limit axiom (\ref{def:limit-axiom}), then the functor $D_n F$ is given by \begin{equation*} D_n F(X) \simeq \Omega^\infty \left( \partial^{(n)} F(\basept) \wedge_{h \Sigma_n} X^{\wedge n} \right). \end{equation*} where smashing over $h \Sigma_n$ denotes taking homotopy orbits. 
\end{theorem} \subsection{Iterated cross effects} Classically, cross-effects are defined inductively, by the repeated application of $cr_2$ in a single variable. With our definition, it is a theorem that applying $cr_2$ in a single variable of $cr_n$ produces $cr_{n+1}$. Recall Goodwillie's notation for sub-cubes: given an $S$-cube $\cube{W}$ and a subset $A$ of $S$, $\partial^A \cube{W}$ denotes the $A$-cube given by restricting $\cube{W}$ to subsets of $A$, and $\partial_A \cube{W}$ denotes the $(S-A)$-cube with $\partial_A \cube{W}(B) = \cube{W}(A\cup B)$. \begin{lemma} \label{lem:crn-from-cr2} For $n\ge 2$, there is a natural equivalence $$cr_2 (cr_n F(X_1, \ldots, X_{n-1}, -))(X_{n},X_{n+1}) \xrightarrow{\simeq} cr_{n+1} F(X_1, \ldots, X_{n+1}).$$ \end{lemma} \begin{proof} \begin{raggedright} Let $S = \mathbf{n}\amalg\mathbf{2}$, and let $T = S - \Set{1}\amalg\emptyset$. The $S$-cube $\cube{W}$ defining $cr_2 (cr_n F(X_1, \ldots, X_{n-1}, -))(X_{n},X_{n+1})$ is \begin{equation*} \cube{W}[X_1,\ldots,X_{n+1}](U\amalg V) = \begin{cases} F\left( \bigvee_{v\not\in V} X_v \vee \bigvee_{u\not\in U\cup\Set{1}} X_{u+1} \right) & \text{$1\not\in U$} \\ F\left( \bigvee_{u\not\in U} X_{u+1} \right) & \text{$1 \in U$} \end{cases} \end{equation*} \end{raggedright} Notice that the cube used to compute $cr_{n+1}F(X_1,\ldots,X_{n+1})$ is exactly the $(n+1)$-cube $\partial^T \cube{W}$; that is, \begin{equation} \label{eq:a-cube-1} \hofib \partial^T \cube{W} = cr_{n+1} F(X_1,\ldots,X_{n+1}). \end{equation} Also, when $1\in U$, the sub-cube $\cube{W}(U\amalg -)$ is a constant cube, so so \begin{equation} \label{eq:a-cube-2} \hofib \partial_{\Set{1}\amalg \emptyset} \cube{W} \simeq \basept. \end{equation} $\cube{W}$ can be written as a $1$-cube of $(n+1)$-cubes: $$\partial^T \cube{W} \rightarrow \partial_{\Set{1}\amalg \emptyset} \cube{W},$$ so the total homotopy fiber of $\cube{W}$ is the homotopy fiber of the homotopy fibers of these cubes. Applying \eqref{eq:a-cube-1} and \eqref{eq:a-cube-2} gives a homotopy fiber sequence $$cr_2 (cr_n F(-,X_3,\ldots,X_{n+1}))(X_{1},X_{2}) \rightarrow cr_{n+1} F(X_1,\ldots,X_{n+1}) \rightarrow \basept, $$ so the map from the fiber to the total space is a weak equivalence, as desired. \end{proof} \begin{corollary} \label{cor:cr2-crn-contractible} Suppose that $\Perp_{n+1} F \simeq \basept$. Then $cr_2$ applied in any variable of $cr_n F$ results in a contractible functor. \end{corollary} \begin{proof} First, for any spaces $X_1,\ldots, X_{n+1}$, the space $cr_{n+1} F(X_1,\ldots, X_{n+1})$ is a retract of $\Perp_{n+1} F(X_1 \vee \cdots \vee X_{n+1}) \simeq \basept$, so $cr_{n+1} F(X_1, \ldots, X_{n+1})$ is contractible. Lemma~\ref{lem:crn-from-cr2} then shows that $cr_2$ applied in any variable of $cr_n$ is equivalent to $cr_{n+1}$, and hence contractible. \end{proof} \section{$\Pnalg$ preserves connectivity} \label{sec:Pnalg-connectivity} In this section, we establish a property of fundamental importance when working with $\Pnalg$: the $n$-additive approximation preserves the connectivity of natural transformations that satisfy some basic good properties. Actually, we prove the slightly stronger result that before evaluation at $S^0$, the functor $P_n L(-)_X$ preserves connectivity. \begin{theorem} \label{thm:Pnd-preserves-connectivity} Let $F$ and $G$ be reduced $\pi_*$-Kan functors (\ref{def:pi-star-Kan-functor}). If $\eta: F\rightarrow G$ is a natural transformation that is $w$-connected, then the natural transformation $P_{n} ( L \eta )$ is $w$-connected. 
\end{theorem} Once this theorem is established, we have the following immediate corollary. \begin{corollary} \label{cor:Pnd-preserves-connectivity} Let $F$ and $G$ be reduced $\pi_*$-Kan functors. If $\eta: F\rightarrow G$ is a natural transformation that is $w$-connected, then the natural transformation $\Pnalg ( \eta )$ is $w$-connected. $\qed$ \end{corollary} \begin{lemma} \label{lem:deriv-spectra-connectivity} Let $\eta: F \rightarrow G$ be a natural transformation of $\pi_*$-Kan functors. If $\eta$ is $w$-connected, then the induced map of derivative spectra $\partial^{(n)} LF(\basept) \rightarrow \partial^{(n)} LG(\basept)$ is $w$-connected. \end{lemma} \begin{proof} Using the Eilenberg-Zilber connectivity estimate (\ref{cor:eilenberg-zilber-connectivity-estimate}), for all $n\ge 1$, the map $\realization{\Perp_n F(S^k_\cdot)} \rightarrow \realization{\Perp_n G(S^k_\cdot)}$ is $(nk+w)$-connected. The derivative spectrum $\partial^{(n)}LF(\basept)$ then has as its $k^{\text{th}}$ space the space $\Omega^{k(n-1)} \Perp_n LF(S^k_\cdot)$, which by Lemma~\ref{lem:perp-commutes-with-realization-v2}, is equivalent to $\Omega^{k(n-1)} \realization{\Perp_n F(S^k_\cdot)}$. On these spaces the map induced by $\eta$ is $(k+w)$-connected, exactly as required to produce a $w$-connected map $\partial^{(n)} LF(\basept) \rightarrow \partial^{(n)} LG(\basept)$. \end{proof} \begin{corollary} \label{cor:derivatives-connective} The derivative spectrum $\partial^{(n)} LF(\basept)$ is connective. \end{corollary} \begin{proof} The map $F \rightarrow 0$ is always $0$-connected. \end{proof} \begin{corollary} \label{cor:Dnd-preserves-conn} If $\eta: F \rightarrow G$ is a $w$-connected map of $\pi_*$-Kan functors, then $D_n (L \eta)$ is $w$-connected. \end{corollary} \begin{proof} Recall from Theorem~\ref{thm:def-of-deriv} that for any functor $H$, $$ D_n (LH) (Y) = \LoopInfty \left( \partial^{(n)} LH (\basept) \wedge_{h\Sigma_n} (Y)^{\wedge n} \right) . $$ Taking homotopy orbits and smashing with a fixed space preserves connectivity, so this is really a question about the connectivity of the map $\partial^{(n)} LF (\basept) \rightarrow \partial^{(n)} LG (\basept)$. The required connectivity was established in Lemma~\ref{lem:deriv-spectra-connectivity}. \end{proof} \begin{corollary} \label{cor:Pn+1-Pn-surjective} If $F$ is a reduced $\pi_*$-Kan functor then the natural transformation $P_{n+1} L F\rightarrow P_n LF$ is surjective in $\pi_0$. \end{corollary} \begin{proof} Theorem~\ref{thm:delooping-Dn} (Goodwillie's delooping of $D_n$) shows that the fibration $$D_{n+1} LF \rightarrow P_{n+1} L F \rightarrow P_n LF$$ deloops to a fibration \begin{equation} \label{eq:delooping-of-Dn+1} P_{n+1} LF \rightarrow P_n LF \rightarrow \Omega^{-1} D_{n+1} LF. \end{equation} The delooping of $D_{n+1} LF$ consists of smashing with the suspension of $\partial^{(n+1)} LF(\basept)$ and taking homotopy orbits. By Corollary~\ref{cor:derivatives-connective}, the spectrum $\partial^{(n+1)} LF(\basept)$ is connective, so its suspension is $0$-connected. From the long exact sequence on $\pi_*$, this implies $P_{n+1} LF \rightarrow P_n LF$ is surjective on $\pi_0$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:Pnd-preserves-connectivity}] As above, the spectrum $\partial^{(n+1)} LF(\basept)$ in Equation~\eqref{eq:delooping-of-Dn+1} is connective, so the map on the delooping of $D_{n+1}$ induced by $\eta$ is $(w+1)$-connected. The map on the fibers ($P_{n+1} (L \eta)$) is therefore $w$-connected. 
\end{proof} A result that is closely related to Theorem~\ref{thm:Pnd-preserves-connectivity} is the following, which says that $\Pnalg$ preserves (good) fibrations that are surjective on $\pi_0$. \begin{proposition} \label{prop:LK-group-connected-base} Given a space $X$ and functors $A$, $B$, and $C$, suppose $A(Y) \rightarrow B(Y) \rightarrow C(Y)$ is a fibration sequence for all finite coproducts $Y = \bigvee X$ of $X$. If, on finite coproducts of $X$, either: \begin{enumerate} \item $C$ takes connected values (Hypothesis~\ref{hypothesis-1}); or \item $B$ and $C$ take group values (Hypothesis~\ref{hypothesis-2}), and the map $B\rightarrow C$ is a surjective homomorphism of groups, \end{enumerate} then \begin{equation*} LA_X(Z) \rightarrow LB_X(Z) \rightarrow LC_X(Z) \end{equation*} is a fibration sequence for all spaces $Z$. Furthermore, the sequence is surjective on $\pi_0$. \end{proposition} \begin{proof} This is an easy application of Theorem~\ref{thm:bousfield-friedlander}. \end{proof} \begin{corollary} \label{cor:Pnd-group-connected-base} Under the conditions of Proposition~\ref{prop:LK-group-connected-base}, $$\Pnalg A \rightarrow \Pnalg B \rightarrow \Pnalg C$$ is a fibration and the map to the base is surjective on $\pi_0$. \end{corollary} \begin{proof} By definition, $\Pnalg F(X) = P_n (LF_X)(S^0)$, so combining Proposition~\ref{prop:LK-group-connected-base} with Theorem~\ref{thm:Pnd-preserves-connectivity} gives the desired result. \end{proof} \section{Fiber contractible implies Cartesian} \label{sec:fiber-contractible-Cartesian} In this section, we will sketch proofs of the critical but mainly technical fact that in the cases we consider, the cross effect vanishing is equivalent to the cross effect cubes being Cartesian. We generally want to use the fact that the cross effect is contractible to conclude that the initial space in the cross-effect cube is equivalent to the (homotopy) inverse limit of the rest of the spaces. Unfortunately, this is not always true; the problem is that the homotopy fiber does not detect failure to be surjective on $\pi_0$. Here we show that our hypotheses on $F$ are sufficient so that this does not happen. \begin{lemma} \label{lem:connected-cartesian} Let $F$ be a functor satisfying Hypothesis~\ref{hypothesis-1} (connected values) on coproducts of $X$. If $\Perp_n F(X) \simeq 0$, then the cube $\CRN F(X,\ldots,X)$ defining $\Perp_n F(X)$ is Cartesian. \end{lemma} \begin{proof}[Proof sketch.] The first step is to consider the pullback diagram $$ X \xrightarrow{p} Z \xleftarrow{q} Y $$ when the spaces $X$, $Y$, and $Z$ are connected, and $p$ and $q$ have sections. In this case, one can directly show that $\pi_0$ of the homotopy inverse limit is $0$. In general, decompose the desired inverse limit into iterated pullbacks of the form in the first step. All of the maps have sections because of the very special form of the cube $\CRN F(X,\ldots,X)$. This shows that the map from the initial space to the homotopy inverse limit is (trivially) surjective in $\pi_0$; then Cartesian-ness follows because the total fiber is contractible. \end{proof} \begin{lemma} \label{lem:groups-cartesian} Let $F$ be a functor satisfying Hypothesis~\ref{hypothesis-2} (group values) on coproducts of $X$. If $\Perp_n F(X) \simeq 0$, then the cube $\CRN F(X)$ defining $\Perp_n F(X)$ is Cartesian. \end{lemma} \begin{proof}[Proof sketch.] 
This strategy here is to decompose $F$ into a fibration involving $F_0$, the connected component of the identity, and $\pi_0 F$, a functor to discrete groups. $$ F_0 \rightarrow F \rightarrow \pi_0 F .$$ The statement for $F_0$ follows from Lemma~\ref{lem:connected-cartesian}. The statement for $\pi_0 F$ is Lemma~\ref{lem:groups-surj-X0-lim}, below. Then a short argument shows no problem arises on $\pi_0$ in $\CRN F(X)$. \end{proof} The case of discrete groups is key, so we give a complete proof for this case. \begin{lemma} \label{lem:groups-surj-X0-lim} Let $\cube{X}$ be an $n$-cube ($n\ge 1$) in the category of discrete groups. If $\cube{X}$ has compatible sections to all structure maps (for example, $\cube{X}=\CRN F(X)$), then the map $$\cube{X}(\emptyset) \rightarrow \lim_{U\in\Power_0(\mathbf{n})} \cube{X}(U)$$ is surjective. \end{lemma} \begin{proof} We need to show that the map above is surjective. This is equivalent to showing that there exists an $x_\emptyset \in \cube{X}(\emptyset)$ mapping to each coherent system of elements $x_U \in \cube{X}(U)$, with $U \not= \emptyset$. $\cube{X}$ is a cube of groups, and hence all of the structure maps are group homomorphisms. This allows us to subtract an arbitrary $w\in\cube{X}(\emptyset)$ from $x_\emptyset$, and subtract the images $\Image_U(w)$ of $w$ in $\cube{X}(U)$ from each $x_U$, to show the question is equivalent to the existence of an $x_\emptyset - w \in \cube{X}(\emptyset)$ mapping to each coherent system of elements $x_U - \Image_U(w)$. Given a coherent system of elements $x_U$ in an $n$-cube, let $w$ be the image of $x_{\Set{n}}$ in $\cube{X}(\emptyset)$ using the section map $\cube{X}(\Set{n}) \rightarrow \cube{X}(\emptyset)$. Define $z_U = x_U - \Image_U(w)$, noting that when $\Set{n}\subset V$, we have $\Image_V(w) = x_W$, so $z_W = 0$. By the preceding paragraph, the surjectivity that we are trying to establish is equivalent to the existence of a $z_\emptyset$ mapping to each coherent collection $z_U$. If $n=1$, then $\lim_{U\in\Power_0(\mathbf{1})} \cube{X}(U) = X(\Set{1})$, so the section map $\cube{X}(\Set{1}) \rightarrow \cube{X}(\emptyset)$ produces a $z_\emptyset$ mapping to $z_{\Set{1}}$, as desired. If $n>1$, then we proceed by induction, assuming the lemma is true for smaller $n$. Taking the fiber of $\cube{X}$ in the direction of $\Set{n}$, we have an $(n-1)$-cube $$\cube{Y}(U) := \fib \left( \cube{X}(U) \rightarrow \cube{X}(U \cup \Set{n}) \right).$$ The cube $\cube{Y}$ satisfies the hypothesis of the lemma because taking fibers preserves compatible sections. Notice that for $\Set{n} \not\subset U$, the element $z_U$ passes to the fiber, since it maps to $z_{U\cup \Set{n}} = 0$. Now $\cube{Y}$ is an $(n-1)$-cube, so by induction, the map from $\cube{Y}(\emptyset)$ to $\lim \cube{Y}(U)$ is surjective. That is, there exists a $y\in \cube{Y}(\emptyset)$ with $\Image_U(y) = z_U$. Mapping $y$ to $z \in \cube{X}(\emptyset)$ gives an element $z$ with $\Image_U(z) = z_U$ for $U \subset \Set{1,\ldots,n-1}$. As above, if $\Set{n} \subset U$, then $z_U = 0$, so $\Image_U(z) = z_U$ in this case as well. Therefore, we have produced an element $z$ mapping to each coherent collection of elements $z_U$, as desired. \end{proof} \section{$\Perp$ is a cotriple} \label{sec:perp-is-cotriple} In this section, we verify the claim that $\Perp$ is actually a cotriple on the category of homotopy functors from pointed spaces to pointed spaces. 
Let $\cat{S}$ denote the full skeletal category of finite sets whose objects are $\mathbf{k} = \Set{1,\ldots,k}$, for each integer $k\ge 0$. Given an integer $n$, a homotopy functor $F$, and a space $X$, we construct a functor $C$, defined on $\cat{S}^{\text{op}}$ and natural in $F$, $X$, and $n$, such that $C(\mathbf{k})$ is naturally isomorphic (actually equivariantly homeomorphic) to $\Perp_n^k F(X)$. The functoriality of $C$ shows immediately that the iterates of $\Perp_n$ assemble to form a simplicial object. The difference between $\Perp_n^k$ and $C(\mathbf{k})$ is the difference between computing the homotopy fiber of a cubical diagram by an iterative process and in a single step. Using a good model for the total homotopy fiber, these are isomorphic. Since the strict commutativity of certain diagrams is essential, we begin by recalling Goodwillie's model for the homotopy fiber of a cubical diagram. \begin{definition}[homotopy fiber] (\cite[Definition~1.1]{Cal2}) Let $I=[0,1]$ denote the unit interval. For any set $S$, let $I^S$ denote the product of $\abs{S}$ copies of $I$. For clarity, when $S=\emptyset$, we take $I^S=\Set{0}$. Let $\cube{X}$ be an $S$-cube of spaces. Define the homotopy fiber of $\cube{X}$ to be the subspace of the space of maps $$\hofib \cube{X} \subset \prod_{U\subset S} \Map (I^U, \cube{X}(U))$$ with a map $\Phi \in \hofib \cube{X}$ satisfying: \begin{enumerate} \item $\Phi$ is natural with respect to $S$ \item if $x = (x_1,\ldots,x_u)$ is a point in $I^U$ with some $x_i = 1$, then $\Phi_U(x) = \basept$, the basepoint of $\cube{X}(U)$. \end{enumerate} \end{definition} The utility of this definition is that computing the homotopy fiber of a cube by iterating the process of taking fiber of maps produces a result isomorphic (that is, homeomorphic) to the result of taking the homotopy fiber in one step. \begin{lemma} \label{lem:hofib-homeo-iterated-hofib} Let $\cube{X}$ and $\cube{Y}$ be cubical diagrams of spaces, and define the cube of cubes $\cube{Z} = \cube{X}\rightarrow \cube{Y}$. Then there is a natural homeomorphism $$\hofib \cube{Z} \xrightarrow{\cong} \hofib \left( \hofib \cube{X} \rightarrow \hofib \cube{Y} \right).$$ \end{lemma} \begin{proof} Given the above definition of homotopy fiber, this is easy to check; it follows from the homeomorphism $\Map(I^n,\Map(I^m,X)) \cong \Map(I^{n+m}, X)$. \end{proof} This definition of homotopy fiber has one other important property: \begin{lemma} Let $\cube{X}$ be an $S$-cube, and let $0$ denote the $S$-cube that is the one point space for all $U\subset S$. Let $\cube{Y} = \cube{X}\rightarrow 0$ be the $(S\sqcup \Set{\basept})$-cube created using the zero map to join them. Then there is a homeomorphism $$\hofib \cube{Y} = \hofib \left(\cube{X} \rightarrow 0\right) \xrightarrow{\cong} \hofib\cube{X}$$ induced by the inclusion of $S$ into $S\sqcup\Set{\basept}$. \end{lemma} \begin{proof} In the definition of homotopy fiber, we see that the inclusion of $S$ into $S\sqcup\Set{\basept}$ induces a projection from $$\prod_{U\subset S\sqcup\Set{\basept}} \Map(I^U,\cube{Y}(U)) \rightarrow \prod_{U\subset S} \Map(I^U,\cube{X}(U)). $$ The components of ``$\Phi$'' corresponding to sets containing $\Set{\basept}$ are all the constant map to the only available point, $\basept$, so the projection is a homeomorphism. \end{proof} We now proceed with the proof that $\Perp$ is a cotriple. 
We begin by defining a ``diagonal'' to encode the information needed to construct a cube of coproducts and inclusion and projection maps of the type used to define the cross effect. \begin{definition}[Diagonal] For any sets $S$ and $U$, and given a function $f$ from $S$ to $U$, define the set $$B_f = \Set{ (s,u) \suchthat u \not= f(s)} \subset S\times U .$$ Define the ``diagonal'' $\Delta(S,U)$ to be the element of $\Power(S \times U)$ given by the union of all $B_f$: $$ \Delta(S,U) = \Set{ B_f \suchthat f \in \Hom(S,U)} .$$ \end{definition} Since $\Delta(S,U)$ is naturally isomorphic to $\Hom(S,U)$ by sending $B_f$ to $f$, it is obviously a functor in $S$ and $U$. \begin{definition}[Free cube] \label{def:free-cube} Given sets $U$ and $S$ and a functor $g$ from the discrete category $\Delta(S,U)$ to a pointed category with coproducts (for example pointed spaces or cubes of pointed spaces), we define $\Free(S,U,g)$ to be the $(S \times U)$-cube $\cube{X}$ with vertices $$\cube{X}(A) = \bigvee_{\Set{B \in\Delta(S,U) : A \subset B}} g(B)$$ Morphisms in $\cube{X}$ are induced by the maps $g(B) \rightarrow g(B')$ that are the identity if $B=B'$ and the zero map otherwise. \end{definition} We now establish that ``free cubes'' are closed under the pullback operation. \begin{lemma} \label{lem:pullbacks-of-free-cubes} Let $m$ and $n$ be sets, let the $(m\times U)$-cube $\cube{X}= \Free(m,U,g)$ be a free cube, and let $f: n\rightarrow m$ be a function. The $(n\times U)$-cube $\Power(f\times 1)^* \cube{X}$ is isomorphic to a free cube $\cube{Y} = \Free(n,U,h)$ with $$h(B) = \bigvee_{B'\in \Delta(f,1)^{-1}(B)} g(B') .$$ \end{lemma} \begin{proof} This is a straightforward argument by expanding the definition of $h$ in $\cube{Y}(A)$, combining and interchanging the order of quantifiers to turn two coproducts into one, and then verifying that the resulting indexing set is the same as the indexing set for $\Power(f\times 1)^*\cube{X}$. \end{proof} We are now in a position to identify the relationship $\Perp$ has to the free cube functor. For the rest of this section, we fix a functor $F$ and a space $X$. Given sets $U$ and $S$, let $c_X$ be the function on $\Delta(S,U)$ that has a constant value $X$. Let $C(S)$ be the contravariant functor of sets $S$ given by $$ C(S) = \hofib F \circ \Free(S,U,c_X) .$$ \begin{lemma} $C(S)$ is a contravariant functor of $S$. \end{lemma} \begin{proof} Let $\cube{Y} = C(S)$. Given a function $f: S\rightarrow T$, we can construct the $(S\times U)$-cube $\Power(f\times 1)^* \cube{Y}$. The map in the indexing categories induces \cite[\S{}XI.9]{Bousfield-Kan:homotopy-limits-completions-and-localizations} a map on the homotopy fibers: $$ \hofib_{\Power(T\times U)} F \cube{Y} \rightarrow \hofib_{\Power(S\times U)} \Power(f\times 1)^* \cube{Y},$$ so it remains to construct a map $$ \hofib_{\Power(S\times U)} F \circ \Power(f \times 1)^* \cube{Y} \rightarrow \hofib_{\Power(S\times U)} F \circ \Free(S,U,c_X) .$$ Recall from Lemma~\ref{lem:pullbacks-of-free-cubes} that $\Power(f \times 1)^* \cube{Y} $ is a free cube with generating function $$h(B) = \bigvee_{\Delta(f,1)^{-1}(B)} X .$$ To specify the desired map between free cubes, it suffices to specify a natural transformation between their ``generating functions''; the universal map from $\bigvee X$ to $X$ that is the identity on each $X$ is a natural choice in this case. \end{proof} \begin{lemma} \label{lem:Ck-Perpk} Let $n = \abs{U}$. $C(\mathbf{k})$ is equivariantly homeomorphic to $ \Perp^k_{n} F(X)$. 
\end{lemma} \begin{proof} This is a straightforward verification that both are naturally homeomorphic (by Lemma~\ref{lem:hofib-homeo-iterated-hofib}) to the homotopy fiber of a $\mathbf{k}\times U$-cube whose vertices are $$F \left( \bigvee_{i=1}^{i=k} \bigvee_{v_i \not\in V_i} X \right) .$$ \end{proof} \begin{lemma} \label{lem:identify-epsilon} Let $n = \abs{U}$. Via the homeomorphism of Lemma~\ref{lem:Ck-Perpk}, the map $C(\mathbf{1}) \rightarrow C(\emptyset)$ induced by $C(i: \emptyset \rightarrow \mathbf{1})$ corresponds to the map $\Perp_n F(X) \rightarrow F(X)$ induced by $F(\bigvee^n X \rightarrow X)$. \end{lemma} \begin{proof} This is also straightforward. Let $\cube{X}=\Free(\mathbf{1},U,c_X)$ and observe that the map $\Power(i\times 1)^*$ induces the identity on $\cube{X}(\emptyset) = \bigvee^n X$. \end{proof} \begin{theorem} \label{thm:perp-is-cotriple} The functor $\Perp$ is a cotriple on the category of homotopy functors from pointed spaces to pointed spaces. The map $\epsilon: \Perp \rightarrow 1$ is induced by the ``fold map'' $\bigvee X \rightarrow X$. \end{theorem} \begin{proof} Lemma~\ref{lem:Ck-Perpk} identifies $\Perp^k$ and $C(\mathbf{k})$, and Lemma~\ref{lem:identify-epsilon} identifies the map $\epsilon$. Then the requisite identities follow from the applying $C$ to the following diagrams: $$\xymatrix{ \Set{1,2} & \Set{1} \ar[l] \\ \Set{2} \ar[u] & \emptyset \ar[l] \ar[u] } \qquad \xymatrix{ {} & \Set{1} & {} \\ \Set{1} \ar[ur]^= \ar[r]^{i_1} & \Set{1,2} \ar[u] & \Set{2} \ar[ul]^= \ar[l]^{i_2} } $$ \end{proof} \section{Main Theorem: Outline} \label{sec:main-theorem-outline} To establish the main theorem, we use induction on $n$, beginning with the case $n=1$. We further break down the induction into ``Case I'', where $F$ is degree $n$ --- that is, $\Perp_{n+1} F(X) \simeq 0$, and ``Case II'', which shows that Case~I implies the result for arbitrary $F$. Our proof will be a ladder induction, with Case~I depending on Case~II for smaller values of $n$, and Case~II depending on Case~I for the same value of $n$. In Section~\ref{sec:perp-F-zero}, we treat the case when $\Perp_{n+1} F(X) \simeq 0$. In this case, we show directly that the fiber of the fibration sequence we obtain from induction, $$ \realization{\psp[n]{F}(X)} \rightarrow F(X) \rightarrow P_{n-1}^{\alg} F(X) , $$ is a homogeneous degree $n$ functor. This implies that $F(X) \simeq \Pnalg F(X)$ in this case. \begin{definition}[$A_F$] \label{def:AF} Define the functor $A_F(X)$ to be the homotopy fiber in the fibration: \begin{equation} \label{eq:AF-def} A_F(X) \rightarrow \rpspfx{F}{X} \rightarrow F(X). \end{equation} \end{definition} For the general case, in Section~\ref{sec:perp-F-nonzero}, we consider the auxiliary diagram: $$\xymatrix{ A_F(X) \ar[r] \ar[d] & \realization{\psp{F}(X)} \ar[r]^{\epsilon} \ar[d] & F(X) \ar[d] \\ \Pnalg A_F(X) \ar[r] & \Pnalg \left( \sR{\psp{F}(X)} \right) \ar[r] & \Pnalg F(X) }$$ where the bottom row is shown to be a fibration sequence up to homotopy (Proposition~\ref{prop:perp-f-nonzero-fib-conn} and~\ref{prop:perp-f-nonzero-fib-discrete}). We show that $\Perp_{n+1} A_F(X) \simeq 0$ (Lemma~\ref{lem:perp-AF-zero}), and hence Case~I shows that there is an equivalence of the fibers, so the square on the right is Cartesian. Then it is not hard to establish that $\Pnalg \left( \realization{\psp{F}(X)}\right) \simeq 0$ (Lemma~\ref{lem:Pnd-perp-contractible}), so that the sequence in the main theorem is actually a fibration sequence up to homotopy. 
\section{Main Theorem, Case I: $F$ degree $n$} \label{sec:perp-F-zero} In this section, the goal is to establish that when the $(n+1)$-st cross effect of $F$ vanishes, $F$ is equivalent to its $n$-additive approximation, $\Pnalg F$. The proof of this result (Proposition~\ref{prop:perp-F-zero-realizations}) will be given in Section~\ref{sec:proof-of-perp-F-zero}. \subsection{Additivization and the bar construction} The case $n=1$ is the work of Segal. \begin{lemma} \label{lem:perp-2-zero-Segal} Suppose $F$ is a reduced functor that has either connected values (\ref{hypothesis-1}) or group values (\ref{hypothesis-2}) and commutes with realizations. If $\Perp_{2} F \simeq 0$, then $F \simeq \Omega \circ F \circ \Sigma$. \end{lemma} \begin{proof} Let $X$ be a space. Under these hypotheses, the map $F(X\vee X) \xrightarrow{\simeq} F(X) \times F(X)$ is an equivalence, so we can regard $[n] \mapsto F(\bigvee^n X)$ as a $\Gamma$-space. Segal's work \cite[Proposition~1.4]{Segal:categories-and-cohomology-theories} then shows that $F(X) \simeq \Omega B F(X)$, where $B$ denotes the bar construction, and note that $BF(X) \simeq \realization{F(X \wedge S^1_\cdot)} \simeq F(S^1 \wedge X)$, because $F$ commutes with realizations. \end{proof} \begin{corollary} \label{cor:perp-f-spectrum} If $F$ has either connected values (\ref{hypothesis-1}) or group values (\ref{hypothesis-2}), and $\Perp_{n+1} F \simeq 0$, then, as a symmetric functor of $n$ variables, $\Perp_n F$ is the infinite loop space of a symmetric functor to connective spectra $\mathbf{\BoldPerp_n F}$: $$\Perp_n F \simeq \LoopInfty \left( \mathbf{\BoldPerp_n F} \right).$$ \end{corollary} \begin{proof} When $F$ satisfies the hypotheses above, then $\Perp_{n} F$ also satisfies these hypotheses. Since $\Perp_{n+1} F\simeq 0$, we know that applying $\Perp_2$ in any input to $cr_n F$ is contractible (by Corollary~\ref{cor:cr2-crn-contractible}), so Lemma~\ref{lem:perp-2-zero-Segal} gives $\Perp_n F$ as the first space of a connective $\Omega$-spectrum. \end{proof} \begin{corollary} \label{cor:perp-2-zero-F-P1dF} Suppose $F$ is a reduced functor that commutes with realizations and has either connected values (\ref{hypothesis-1}) or group values (\ref{hypothesis-2}), and $\Perp_{2} F \simeq 0$, then $F \simeq P_1 F$. \end{corollary} \begin{proof} We need to show that $F \rightarrow \Omega^n \circ F \circ \Sigma^n$ is an equivalence for all $n$. This almost follows from Lemma~\ref{lem:perp-2-zero-Segal}, but one needs to check that $\Omega \circ F \circ \Sigma$ commutes with realizations. This follows from Theorem~\ref{thm:bousfield-friedlander} because $F(S^1\wedge -)$ is always connected. \end{proof} \subsection{The equivariant structure of the cross-effects of a symmetric functor} We begin with a definition of the ``symmetric group'' and its action on the set of integers $\mathbf{n}= \Set{1,\ldots,n}$. By $\Sigma_n$, we mean the set of bijective set maps from $\mathbf{n}$ to $\mathbf{n}$ with the group operation corresponding to composition of functions. So no confusion arises, let us fix a representation of the symmetric group on the set $\mathbf{n}$: let $\sigma\in\Sigma_n$ act on $\mathbf{n}$ by sending $j$ to $\sigma(j)$. To verify that this is an ``action'', note that $\tau_* \sigma_*$ sends $j$ to $\tau(\sigma(j)) = (\tau\circ \sigma)(j)$, which is exactly $(\tau\sigma)_*$. Write $\Sigma_n^+$ for the space $\Sigma_n$ with a disjoint basepoint added. 
\begin{definition}{\cite[p.~675]{Cal3}} A functor of $n$ variables $H(X_1,\ldots,X_n)$ is called \emph{symmetric} if for each permutation $\sigma\in\Sigma_n$ there is a natural transformation $$\sigma_*: H(X_1, \ldots, X_n) \rightarrow H(X_{\sigma^{-1}(1)}, \ldots, X_{\sigma^{-1}(n)})$$ satisfying $(\sigma \tau)_*=\sigma_* \tau_*$. This definition differs from Goodwillie's only in that we use $\sigma^{-1}$ where he uses $\sigma$, and consequently the action of a composition appears in a different order than his. \end{definition} We immediately begin using some abbreviated notation. \begin{notation}[{$H[\sigma]$}] Given a symmetric functor of $n$ variables $H$, a map of sets $\sigma: \mathbf{n}\rightarrow\mathbf{n}$, and $n$ ambient variables $X_1, \ldots, X_n$, define $$ H[\sigma] = H(X_{\sigma(1)}, \ldots, X_{\sigma(n)}).$$ \end{notation} With this notation, the symmetric structure is $\sigma_*: H[1] \rightarrow H[\sigma^{-1}]$. Notice that $\tau_*: H[\sigma^{-1}] \rightarrow H[\sigma^{-1}\tau^{-1}] = H[(\tau\sigma)^{-1}]$. \begin{definition}[$r_\sigma$] When $X_1 = \cdots = X_n$, let $r_\sigma$ be the ``relabeling'' map induced by the isomorphisms $X_{i} \mapsto X_{\sigma(i)}$. \end{definition} \begin{definition}[Action of $\Sigma_n$ on the diagonal of a symmetric functor] The diagonal of a symmetric functor of $n$ variables has a natural action of the symmetric group on $n$ letters. Let $X = X_1 =\cdots = X_n$, and let $\sigma\in\Sigma_n$. The action of $\sigma$ on $\diag H(X)$ is the composite: $$ H[1] \xrightarrow{\sigma_*} H[\sigma^{-1}] \xrightarrow{r_\sigma} H[1] $$ where $r_\sigma$ is the isomorphism $X_i \cong X_{\sigma(i)}$ that is available because all of the $X_i$ are equal. \end{definition} \begin{remark} \label{remark:action-on-Hsigma} By relabeling variables, one sees immediately that the action of $\tau$ on $H[\sigma]$ is given by the composite: $$H[\sigma] \xrightarrow{\tau_*} H[\sigma\tau^{-1}] \xrightarrow{r_{\sigma\tau\sigma^{-1}}} H[\sigma].$$ \end{remark} \begin{example} \label{example:coproduct} The fundamental example is the wedge. When $n=3$, let $H(X_1,X_2,X_3) = X_1 \vee X_2 \vee X_3$. The symmetric structure map for the cycle $\sigma=(1 2 3)$ in $\Sigma_3$ is $$\sigma_*: x_1\vee x_2 \vee x_3 \in H(X_1,X_2,X_3) \mapsto x_3 \vee x_1 \vee x_2 \in H(X_3, X_1, X_2).$$ Notice that $H(X_3,X_1,X_2) = H[\sigma^{-1}]$, and not $H[\sigma]$. The map $r_\sigma$ then has us regard $x_3$ as an element of $X_1$, \emph{etc}. \end{example} For the rest of this section, we have will work with the diagonals of symmetric functors, so set $X_1=\cdots=X_n$, but label them differently to be able to see the action of the symmetric group more clearly. To keep notation under control, we write $H$ instead of $\diag H$. \begin{lemma} \label{lem:cross-effect-1} Let $H$ be a symmetric functor of $n$ variables. If in each variable $i$, the categorical map \begin{equation} \label{eq:H-coprod-prod} H(\ldots, X_i \vee Y_i, \ldots) \xrightarrow{\simeq} H(\ldots, X_i, \ldots) \times H(\ldots, Y_i, \ldots) \end{equation} is an equivalence, then $$\Perp_n H[1] \xrightarrow{\simeq} \prod_{\sigma\in\Sigma_n} H[\sigma].$$ Furthermore, with the following action, the map is equivariant. Let $\beta\times\alpha\in\Sigma_n\times\Sigma_n$ act on the left as follows: $\alpha$ acts on $H$, and hence $\alpha$ acts on $\Perp_n H$ (via $\Perp_n \alpha$) because $\Perp_n$ is a functor; $\beta$ acts on $\Perp_n$ because $\Perp_n$ is the diagonal of the symmetric functor $cr_n$. 
Let $\beta\times\alpha$ act on the right as follows: $\alpha$ acts diagonally on all copies of $H$; $\beta$ acts by permuting coordinates so that the $\beta\sigma$ component of $\beta_*(h)$ equals the $\sigma$ component of $h$. \end{lemma} \begin{proof} First note that the actions on the left commute because the action on $\Perp_n$ is a natural transformation. The actions on the right obviously commute. Using the hypothesis in each variable of $H$, we see that the categorical map to product (by iterating \eqref{eq:H-coprod-prod}) is an equivalence. $$ \myH\left(\bigvee_{i=1}^{n} X_i, \ldots, \bigvee_{i=1}^{n} X_i \right) \xrightarrow{\simeq} \prod_{\sigma: \mathbf{n}\rightarrow \mathbf{n}} \myH[\sigma] $$ Using this to compute the cross effect, we see there is an equivalence \begin{equation} \label{eq:perp-n-H-decompose-1} \Perp_n \mathbf{H}[1] = \Perp_n \mathbf{H} (X_1, \ldots, X_n) \xrightarrow{\simeq} \prod_{\sigma\in\Sigma_n} \myH[\sigma] . \end{equation} Also, notice that substituting $X_{\beta^{-1}(i)}$ for $X_i$ gives: $$ \Perp_n \mathbf{H}[\beta^{-1}] \xrightarrow{\simeq} \prod_{\sigma\in\Sigma_n} \myH[\beta^{-1}\sigma] . $$ A short reflection on the origins of the map in \eqref{eq:perp-n-H-decompose-1} should convince the reader that it is equivariant with respect to the action of $\Sigma_n$ on $H$. The equivariance of the action of $\Sigma_n$ on $\Perp_n$ is a bit more complicated, so we spell it out in detail. To orient the reader, consider the commutative diagram whose vertical composite is action by $\beta$ on $\Perp_n H(1)$. $$\xymatrix{ \Perp_n H[1] \ar[r]^{\simeq} \ar[d]^{\beta_*} & \prod_{\sigma\in\Sigma_n} H[\sigma] \ar@{-->}[d] \\ \Perp_n H[\beta^{-1}] \ar[r]^{\simeq} \ar[d]^{r_\beta} & \prod_{\sigma\in\Sigma_n} H[\beta^{-1}\sigma] \ar@{-->}[d] \\ \Perp_n H[1] \ar[r]^{\simeq} & \prod_{\sigma\in\Sigma_n} H[\sigma] }$$ Under $\beta_*$, the copy of $H[\sigma]$ in the $\sigma$ component of the top row is sent to $H[\beta^{-1}\beta\sigma]$ in the $\beta\sigma$ component of the middle row. This is often a confusing point, but it is made more evident by the fact that the map $\beta_*$ exists even if the inputs $X_i$ are all different, and $\beta_*$ does not even require that $H$ be a symmetric functor (only that it be additive in each variable). Under the ``relabeling'' isomorphism $r_\beta$, this becomes $H[\beta\sigma]$ in the $\beta\sigma$ component of the bottom row. This shows that the action of $\beta\in\Sigma_n$ on $\Perp_n H[1]$ corresponds to the action sending the $\sigma$ component of the product isomorphically to the $\beta\sigma$ component. \end{proof} \begin{lemma} \label{lem:cross-effect-2} Let $\myH$ be a symmetric functor of $n$ variables taking values in spectra. There is a stable equivalence $$ \Sigma_n^+ \wedge \myH[1] \xrightarrow{\simeq} \prod_{\sigma\in\Sigma_n} \myH[\sigma]$$ given by $$\Sigma_n^+ \wedge \myH[1] \cong \bigvee_{\sigma\in\Sigma_n} \myH[1] \xrightarrow{\vee r_\sigma} \bigvee_{\sigma\in\Sigma_n} \myH[\sigma] \hookrightarrow^{\simeq} \prod_{\sigma\in\Sigma_n} \myH[\sigma] . $$ Let $\beta\times\alpha\in\Sigma_n\times\Sigma_n$ act on $\Sigma_n^+ \wedge \myH[1]$ as follows: $\alpha$ acts on $\myH[1]$, and $\beta$ sends $\sigma\wedge h$ to $(\beta\sigma) \wedge h$. With the action $\Sigma_n\times\Sigma_n$ on the right as in Lemma~\ref{lem:cross-effect-1}, this map is equivariant. \end{lemma} \begin{proof} The equivariance with respect to the action of $\beta$ is immediate. 
The equivariance with respect to the action of $\alpha$ follows from the commutativity of both of the following squares, where the vertical composites are the action of $\alpha$ on the left and one component of the right. $$\xymatrix{ \myH[1] \ar[r]^{r_\sigma} \ar[d]^{\alpha_*} & \myH[\sigma] \ar[d]^{\alpha_*} \\ \myH[\alpha^{-1}] \ar[r]^{r_\sigma} \ar[d]^{r_\alpha} & \myH[\sigma\alpha^{-1}] \ar[d]^{r_{\sigma\alpha\sigma^{-1}}} \\ \myH[1] \ar[r]^{r_\sigma} & \myH[\sigma] } $$ The map $r_{\sigma\alpha\sigma^{-1}}$ is used for the action of $\alpha$ on $H[\sigma]$, as noted in Remark~\ref{remark:action-on-Hsigma}. The stable equivalence of coproducts and products is the last map used. \end{proof} \begin{corollary} \label{cor:perp-n-decomp-to-sigma-n} Let $\myH$ be a symmetric functor of $n$ variables taking values in spectra. If the second cross effect in each variable of $\myH$ is contractible, then there is an weak equivariant map \begin{equation*} \Perp_n \mathbf{H}[1] \xrightarrow{\simeq} \prod_{\sigma\in\Sigma_n} \mathbf{H}[\sigma] \xleftarrow{\simeq} \Sigma_n^+ \wedge \mathbf{H}[1] . \end{equation*} The action of $\beta\in\Sigma_n$ on $\Perp_n$ corresponds to multiplication by $\beta$ on $\Sigma_n^+$ on the right side of the equation, and the action on $\myH$ is the same on both sides. \end{corollary} \begin{proof} Since $\myH$ is a functor to spectra, the vanishing of the second cross effect implies that the hypotheses of Lemma~\ref{lem:cross-effect-1} are satisfied. Combining Lemmas~\ref{lem:cross-effect-1} and~\ref{lem:cross-effect-2} gives the desired result. \end{proof} This type of identification goes back at least to Eilenberg and MacLane in the 1950s \cite[Theorem~9.1]{Eilenberg-MacLane:H-Pi-n-II}. With this model, the map $\epsilon: \Perp_n \mathbf{H} \rightarrow \mathbf{H}$ is given by $\sigma \wedge x \mapsto x$. \begin{corollary} \label{cor:decomp-identify-epsilon} Let $\overbar{\epsilon}$ be the map $\Sigma_n^+ \wedge \myH[1] \rightarrow \myH[1]$ sending $\sigma \wedge h$ to $h$. Under the conditions of Corollary~\ref{cor:perp-n-decomp-to-sigma-n}, the following diagram commutes: $$\xymatrix{ \Perp_n \myH[1] \ar[r]^{\simeq} \ar[d]^{\epsilon} & \prod_{\sigma\in\Sigma_n} \myH[\sigma] & \Sigma_n^+\wedge \myH[1] \ar[l]^{\simeq} \ar[d]^{\overbar{\epsilon}} \\ \myH[1] \ar[rr]^{=} & & \myH[1] } .$$ \end{corollary} \begin{proof} Let $X=X_1=\cdots=X_n$ be spaces. The map $\epsilon$ is induced by the fold map $\bigvee X_i \rightarrow X$, so it sends $$\myH[\sigma] = \myH(X_{\sigma(1)}, \ldots, X_{\sigma(n)}) \rightarrow \myH(X,\ldots,X) = \myH(X_1,\ldots,X_n) = \myH[1]$$ by the isomorphism $X_{\sigma(i)} = X = X_i$; this is the map we have denoted $r_{\sigma^{-1}}$. On $\sigma\wedge h$, the composite of the upper and left maps is then $r_{\sigma^{-1}} r_\sigma(h) = h$, and $\overbar{\epsilon}(\sigma \wedge h) = h$, so the diagram commutes. \end{proof} \subsection{Iterated cross effects produce homogeneous functors} \begin{lemma} \label{lem:perp-homotopy-orbits-spectrum} If $F$ has either connected values (\ref{hypothesis-1}) or group values (\ref{hypothesis-2}) on coproducts of $X$, and $\Perp_{n+1} F(X) \simeq 0$, then $$\realization{\psP[n]{F}(X)} \simeq \mathbf{\BoldPerp_n F}(X) \wedge_{\Sigma_n} E \Sigma_n^+ , $$ where $\mathbf{\BoldPerp_n F}$ denotes the lift to spectra of $\Perp_n F$, as in Corollary~\ref{cor:perp-f-spectrum}. 
\end{lemma} With these hypotheses, Corollary~\ref{cor:perp-f-spectrum} shows that $\Perp_n F(X) \simeq \LoopInfty \mathbf{\BoldPerp_n F}(X)$, so we are entitled to consider the functor to spectra ``$\mathbf{\BoldPerp_n F}$''. Corollary~\ref{cor:cr2-crn-contractible} shows that the second cross effect in each variable of $\mathbf{\BoldPerp_n F}$ is contractible, so we may apply Corollary~\ref{cor:perp-n-decomp-to-sigma-n} with $\myH = \mathbf{\BoldPerp_n F}$ to conclude that \begin{equation} \label{eq:perp-is-sigma-n} \Perp_n \mathbf{\BoldPerp_n F}(X) \simeq \Sigma_n^+ \wedge \mathbf{\BoldPerp_n F}(X) . \end{equation} We are now in a position to understand $\Perp_n^* \mathbf{\BoldPerp_n F}$. The issue of how multiplication by $\sigma$ arises from the $\epsilon$ above is somewhat subtle, so we spell it out in detail. Applying~\eqref{eq:perp-is-sigma-n} repeatedly at each level, we have $$ \Perp_n^k \mathbf{\BoldPerp_n F}(X) \simeq \underbrace{ \Sigma_n^+ \wedge \cdots \wedge \Sigma_n^+ }_{\text{$k$ factors}} \wedge \mathbf{\BoldPerp_n F}(X) .$$ Recall that the face maps from dimension $n$ to $n-1$ are given by $d_i = \Perp_n^i \epsilon \Perp_n^{n-i}$. In dimension $k$, the face map $d_k = \epsilon \Perp_n^k$ just drops the first element (by Corollary~\ref{cor:decomp-identify-epsilon}): $$ d_k (g_k \wedge \cdots \wedge g_1 \wedge y) = g_{k-1} \wedge \cdots \wedge g_1 \wedge y .$$ To compute the others, note that for any $f$, the map $\Perp_n(f)$ is equivariant with respect to the action of $\Sigma_n$ on $\Perp_n$ (by permuting inputs), so in particular $\Perp_n(\epsilon): \Perp_n(\Perp_n F) \rightarrow \Perp_n F$ is equivariant with respect to the action on of $\Sigma_n$ on the leftmost $\Perp_n$, so \begin{align*} \Perp_n \epsilon (g \wedge y ) &= \Perp_n\epsilon(g * (1 \wedge y) ) \\ &= g * \Perp_n\epsilon(1 \wedge y) \\ &= g * y, \end{align*} where the last follows since the degeneracy $\delta: \Perp_n F \rightarrow \Perp_n^2 F$ which has $\delta(y) = 1\wedge y$ is a section to the face map $\Perp_n\epsilon$. This argument shows that all of the face maps $d_j$ with $0\le j<k$ are given by multiplying $g_{j+1}$ by the next coordinate to the right (either $g_j$ if $j>0$ or $y$ if $j=0$). This is a standard model for $E \Sigma_n^+ \wedge_{\Sigma_n} \mathbf{\BoldPerp_n F}(X)$, so we have proven Lemma~\ref{lem:perp-homotopy-orbits-spectrum}. $\qed$ \subsection{Proof of Main Theorem, Case~I} \label{sec:proof-of-perp-F-zero} \begin{proposition} \label{prop:perp-F-zero-realizations} Suppose that $F$ commutes with realizations and has connected values (\ref{hyp:connected-values}) or group values (\ref{hyp:group-values}). If $\Perp_{n+1} F \simeq 0$, then $F \simeq P_n F$. \end{proposition} \begin{proof} When $n=0$, the hypothesis that $F$ is reduced makes the result trivial. When $n=1$, Corollary~\ref{cor:perp-2-zero-F-P1dF} shows that $F(X) \simeq P_1 F(X)$, so that establishes the truth of the base case in our induction. Finally, when $n>1$ we apply Proposition~\ref{prop:perp-F-nonzero} with one smaller $n$ to produce a fibration sequence: \begin{equation} \label{eq:fib-seq-with-n-1} \realization{\Perp_n^* (\Perp_n F) } \rightarrow F \rightarrow P_{n-1} F , \end{equation} where the map $F\rightarrow P_{n-1} F$ is surjective on $\pi_0$. (Recall that since $F$ commutes with realizations, $P_{n-1}^{\alg} F \simeq P_{n-1} F$.) We now show that the fiber here is an $n$-excisive functor. 
From Corollary~\ref{cor:perp-f-spectrum}, we know that $\Perp_n F$ lifts to a functor to connective spectra $\mathbf{\BoldPerp_n F}$, so that $\Perp_n F \simeq \LoopInfty \mathbf{\BoldPerp_n F}$. Using this, we have: \begin{gather*} \sR{\Perp_n^* (\Perp_n F) } \simeq \sR{ \Perp_n^* \left( \LoopInfty \mathbf{\BoldPerp_n F} \right) } . \\ \intertext{The functor $\LoopInfty$ is a right adjoint, so it preserves homotopy fibers, and hence commutes with $\Perp_n$:} \sR{ \Perp_n^* \left( \LoopInfty \mathbf{\BoldPerp_n F} \right) } \simeq \sR{ \LoopInfty \Perp_n^* \left(\mathbf{\BoldPerp_n F} \right) } . \\ \intertext{Now $\mathbf{\BoldPerp_n F}$ is a functor to connective spectra, and hence all applications of $\Perp_n$ to it result in functors to connective spectra, so we can use \cite{Beck:classifying-spaces-for-homotopy-everything-H-spaces} to move $\LoopInfty$ outside of the realization:} \sR{ \LoopInfty \Perp_n^* \left(\mathbf{\BoldPerp_n F} \right) } \simeq \LoopInfty \sR{ \Perp_n^* \left(\mathbf{\BoldPerp_n F} \right) } . \\ \intertext{Then Lemma~\ref{lem:perp-homotopy-orbits-spectrum} shows that the term inside the realization computes the homotopy orbits of the $\Sigma_n$ action on $\mathbf{\BoldPerp_n F}$:} \sR{ \Perp_n^* (\mathbf{\BoldPerp_n F}) } \simeq \mathbf{\BoldPerp_n F } \wedge_{\Sigma_n} E \Sigma_n^+ . \\ \intertext{Combining all of these, we identify the fiber in \eqref{eq:fib-seq-with-n-1} as the infinite loop space of the preceding homotopy orbit spectrum:} \sR{\Perp_n^* (\Perp_n F) } \simeq \LoopInfty \left( \mathbf{\BoldPerp_n F } \wedge_{\Sigma_n} E \Sigma_n^+ \right) . \end{gather*} We now establish that this functor is $n$-excisive. Since it is known to be the fiber of $F \rightarrow P_{n-1} F$, this will imply that it is actually homogeneous $n$-excisive. The functor $\LoopInfty$ preserves Cartesian squares, so we need only establish that $\mathbf{\BoldPerp_n F } \wedge_{\Sigma_n} E \Sigma_n^+$ is $n$-excisive. By hypothesis, $\Perp_{n+1} F \simeq \basept$, so Corollary~\ref{cor:cr2-crn-contractible} shows that the second cross effect in any variable of $\mathbf{cr_n F}$ is contractible. Hence, by Corollary~\ref{cor:perp-2-zero-F-P1dF}, in each variable $\mathbf{cr_n F}$ is $1$-excisive. Then \cite[Proposition~3.4]{Cal2} shows that its diagonal, $\mathbf{\BoldPerp_n F}$ is $n$-excisive. That is, given any strongly co-Cartesian $(n+1)$-cube $\cube{X}$, the map $$\mathbf{\BoldPerp_n F} \cube{X}(\emptyset) \rightarrow \holim_{\Power_0(\mathbf{n+1})} \mathbf{\BoldPerp_n F} \cube{X}$$ is an equivalence. Since $\Sigma_n$ acts naturally on $\mathbf{\BoldPerp_n F}$, this is a $\Sigma_n$-equivariant map, so it is still an equivalence after taking homotopy orbits. This establishes that the functor $\mathbf{\BoldPerp_n F } \wedge_{\Sigma_n} E \Sigma_n^+$ is $n$-excisive, as desired. Equation~\eqref{eq:fib-seq-with-n-1} is a fibration sequence up to homotopy, so we know the natural map $$\realization{\Perp_{n}^{*} (\Perp_n F) } \rightarrow \fib ( F \rightarrow P_{n-1} F ) $$ is an equivalence. Applying $P_n$ and using the fact that we have just shown that $\realization{\Perp_{n}^{*} (\Perp_n F) } $ is $n$-excisive, we have \begin{align*} \realization{\Perp_{n}^{*} (\Perp_n F) } &\simeq P_n \realization{\Perp_{n}^{*} (\Perp_n F) } \\ &\simeq P_n \fib( F \rightarrow P_{n-1} F ) \\ &\simeq \fib ( P_n F \rightarrow P_{n-1} F ) \\ &\simeq D_n F . 
\end{align*} When $F$ commutes with realizations, the map $F \rightarrow P_{n-1} F$ is surjective on $\pi_0$ (Theorem~\ref{thm:Pnd-preserves-connectivity}). This lets us argue that the total space $F$ of the fibration in \eqref{eq:fib-seq-with-n-1} is $n$-excisive, since the base and the fiber are. The argument is straightforward but does require a variant of the five-lemma on $\pi_0$ to make a conclusion about $\pi_0 F$. Once we know $F$ is $n$-excisive, we know $F \simeq P_n F$, as desired. \end{proof} \begin{corollary} \label{cor:perp-F-zero} If $F$ is a reduced functor that has either connected values (\ref{hypothesis-1}) or group values (\ref{hypothesis-2}) on coproducts of $X$, and $\Perp_{n+1} F(X) \simeq 0$, then $F(X) \simeq \Pnalg F(X)$. \end{corollary} \begin{proof} $\Pnalg F(X)$ is defined by evaluating the functor $P_n (L F_X)$ at the space $S^0$, and $LF_X$ commutes with realizations (by Lemma~\ref{lem:LF-commutes-with-realization}), so by Proposition~\ref{prop:Pnalg-agrees-Pn}, we have $P_n(L F_X)(S^0) \simeq P_n F(X)$. Hence the result follows from Proposition~\ref{prop:perp-F-zero-realizations}. \end{proof} \section{Main Theorem, Case II: General $F$} \label{sec:perp-F-nonzero} In this section, the goal is to establish the other side of the ``ladder induction'' for Theorem~\ref{thm:main-theorem}. We proceed essentially as outlined in Section~\ref{sec:main-theorem-outline}. We decompose the problem further, considering functors to discrete groups and functors to connected spaces separately. As one might expect, the case of a functor to discrete groups is the pivotal one. \subsection{Functors To Discrete Groups: $\Perp G^{ab} = 0$} This section shows that a certain functor $G_n^{ab}$ has no $(n+1)^{\text{st}}$ cross effect, as one would expect in view of the construction of $G_n^{ab}$ (given below). In this section, we consider a functor $G$ from spaces to discrete groups (for example, $G(X) = \pi_0 \Omega(X)$). \begin{definition} \label{def:Gprime-Gab} Given an integer $n>0$ and a functor $G$ to discrete groups, define $G'_n := \Image (\epsilon: \Perp_{n+1} G \rightarrow G)$ and $G^{ab}_n := \coker (\epsilon)$. Usually, the $n$ is clear from context, and we will abbreviate these $G'$ and $G^{ab}$. \end{definition} There is a short exact (fibration) sequence of groups \begin{equation} \label{eq:ses-Gprime-G-Gab} G'(X) \rightarrow G(X) \rightarrow G^{ab}(X) , \end{equation} and this sequence is surjective on $\pi_0$ (\emph{i.e.}, right exact). \begin{remark} Note that since $\Perp_{n+1} G$ is constructed by taking a kernel, the subgroup $G'$ is normal in $G$. \end{remark} Our motivation for the preceding notation comes from considering the case when the source and target categories under consideration are both the category of groups and the functor $G$ is the identity $G(H)=H$. In this case, the image of $\Perp_2 G(H)$ in $G(H)$ is the first derived subgroup of $H$. The cokernel of the map $\Perp_2 G(H) \rightarrow G(H)$ is the abelianization, $H^{ab}$. Lacking a more appropriate name for modding out by higher derived subgroups, we continue to use the same notation in that case. \begin{lemma} \label{lem:perp-Gab-0} If $G$ takes values in discrete groups, then with $G'$ and $G^{ab}$ as in Definition~\ref{def:Gprime-Gab}, $\Perp G^{ab}(X) \simeq 0$. \end{lemma} \begin{proof} From the construction of $G'$, the map $\epsilon: \Perp G \rightarrow G$ factors through the inclusion $i: G'\rightarrow G$.
Applying $\Perp$ again results in the following diagram: $$\xymatrix{ \Perp^2 G(X) \ar[dr]_{\Perp \epsilon} \ar[d] & \\ \Perp G'(X) \ar[r]^{\Perp i} & \Perp G(X) \ar@/_1pc/[ul]_{\delta} }$$ The map $\Perp \epsilon$ has a section, $\delta$, so it is surjective. Hence $\Perp i$ is also surjective. Consider the short exact sequence of functors to discrete groups in Equation \eqref{eq:ses-Gprime-G-Gab} defining $G^{ab}$. If we show that $\Perp$ preserves this short exact sequence, then the surjectivity of the map $\Perp i$ will imply that $\Perp G^{ab} = 0$. Short exact sequences of discrete groups are fiber sequences that are surjective onto the base. The functor $\Perp$ preserves fiber sequences because the construction involves only taking fibers. The functor $\Perp$ preserves surjections because all of the maps in the cube $\CR_{n+1} F(X,\ldots,X)$ defining $\Perp_{n+1} F(X)$ have sections, and hence taking fibers with respect to them does not lower connectivity. \end{proof} \subsection{$\Pnalg$ preserves the $A_F$ fibration} This section establishes that $\Pnalg$ actually produces a fibration when applied to the fibration defining $A_F$. The results in this section also contain a statement about the map $F \rightarrow \Pnalg F$, because when $F$ takes values in discrete groups, the proof that this map is surjective on $\pi_0$ uses the same technical details as the proof that we obtain a fibration. To remind the reader that the functor takes values in discrete groups in the next proposition, we use the letter $G$ (for group) to denote the functor, instead of the usual $F$. \begin{proposition} \label{prop:perp-f-nonzero-fib-discrete} If $G$ takes values in discrete groups (so in particular $G$ satisfies Hypothesis~\ref{hypothesis-2}), then the following is a fibration sequence up to homotopy: \begin{equation} \label{eq:Pnd-still-fibration} \Pnalg A_G(X) \rightarrow \Pnalg \left( \realization{\Perp_{n+1}^{*+1} G(X)} \right) \rightarrow \Pnalg G(X) . \end{equation} \end{proposition} \begin{proof} Replacing the base $G$ in the definition of $A_G$ (Equation~\eqref{eq:AF-def}) with $G'$ from Definition~\ref{def:Gprime-Gab}, we have the fibration sequence \begin{equation} \label{eq:fib-Gprime-base} A_G(X) \rightarrow \realization{\Perp_{n+1}^{*+1} G(X)} \rightarrow G'(X), \end{equation} and this sequence is surjective on $\pi_0$. The hypotheses of Corollary~\ref{cor:Pnd-group-connected-base} are satisfied by the sequences in \eqref{eq:ses-Gprime-G-Gab} and \eqref{eq:fib-Gprime-base}, so after applying $\Pnalg$, both are fibration sequences whose maps to the base spaces are surjective on $\pi_0$: \begin{gather} \label{eq:Pnd-Gprime-G-Gab} \Pnalg G'(X) \rightarrow \Pnalg G (X) \rightarrow \Pnalg G^{ab}(X) \\ \label{eq:Pnd-AG-seq} \Pnalg (A_G)(X) \rightarrow \Pnalg (\realization{\Perp_{n+1}^{*+1} G(-)}) (X) \rightarrow \Pnalg G' (X). \end{gather} The aim now is to show that \eqref{eq:Pnd-AG-seq} remains a fibration when the base $\Pnalg G'(X)$ is replaced by $\Pnalg G(X)$. From Lemma~\ref{lem:perp-Gab-0}, $\Perp_{n+1} G^{ab}(X) \simeq 0$, so Corollary~\ref{cor:perp-F-zero} shows that $\Pnalg G^{ab}(X) \simeq G^{ab}(X)$, which is a discrete space. Then, using the long exact sequence in homotopy, the fibration in \eqref{eq:Pnd-Gprime-G-Gab} gives $\Pnalg G'(X) \xrightarrow{\simeq} \Pnalg G(X)$ except possibly on $\pi_0$, where the map is injective. This is enough to show that changing the base in \eqref{eq:Pnd-AG-seq} from $\Pnalg G'(X)$ to $\Pnalg G(X)$ still yields a fibration.
That is, \eqref{eq:Pnd-still-fibration} is a fibration (but perhaps not surjective on $\pi_0$). \end{proof} \begin{proposition} \label{prop:perp-f-nonzero-pi0-control} If $G$ takes values in discrete groups (so in particular $G$ satisfies Hypothesis~\ref{hypothesis-2}), then $$ \pi_0 \Pnalg G(X) \cong \coker_{Gps} \left( \pi_0 \epsilon \right) ,$$ and the map $\pi_0 G \rightarrow \pi_0 \Pnalg G$ is the universal map to the cokernel of $\pi_0 \epsilon$ in the category of groups. \end{proposition} \begin{proof} As in the preceding Proposition~\ref{prop:perp-f-nonzero-fib-discrete}, we have the following fibration sequence that is surjective on $\pi_0$: $$\Pnalg (A_G)(X) \rightarrow \Pnalg (\realization{\Perp_{n+1}^{*+1} G(-)}) (X) \rightarrow \Pnalg G' (X). $$ Lemma~\ref{lem:Pnd-perp-contractible} shows that the total space in this fibration is contractible, and the map to the base is surjective on $\pi_0$, so $\pi_0 \Pnalg G'(X) = 0$. Also following Proposition~\ref{prop:perp-f-nonzero-fib-discrete}, we have the following diagram in which the horizonal rows are fibrations that are surjective on $\pi_0$: $$\xymatrix{ G'(X) \ar[r]\ar[d] & G(X) \ar[r]\ar[d] & G^{ab}(X) \ar[d]^{\simeq} \\ \Pnalg G'(X) \ar[r] & \Pnalg G(X) \ar[r] & \Pnalg G^{ab}(X) }$$ Since $\pi_0 \Pnalg G'(X) = 0$, the long exact sequence for the bottom fibration implies that $\pi_0 \Pnalg G(X) \cong \pi_0 \Pnalg G^{ab}$. The right hand vertical map is an equivalence, again as noted in the preceding proposition, using Lemma~\ref{lem:perp-Gab-0} and Corollary~\ref{cor:perp-F-zero}. Combining these, we have \begin{align*} \pi_0 \Pnalg G(X) &\cong \pi_0 \Pnalg G^{ab}(X) \\ &\cong \pi_0 G^{ab}(X) , \end{align*} which is isomorphic to $\coker_{Gps} (\pi_0 \epsilon)$ because the map $\epsilon: \Perp_{n+1} G(X) \rightarrow G(X)$ factors through $G'(X)$. \end{proof} \begin{proposition} \label{prop:perp-f-nonzero-fib-conn} If $F$ has connected values (\ref{hypothesis-1}), then the following is a fibration sequence up to homotopy: \begin{equation*} \Pnalg A_F(X) \rightarrow \Pnalg \left( \realization{\Perp_{n+1}^{*+1} F(X)} \right) \rightarrow \Pnalg F(X) , \end{equation*} and furthermore the map $F(X)\rightarrow \Pnalg F(X)$ is (trivially) surjective on $\pi_0$. \end{proposition} \begin{proof} If $F$ has connected values on coproducts of $X$, then $$ A_F(X) \rightarrow \realization{\Perp_{n+1}^{*+1} F(X)} \rightarrow F(X) $$ is a fibration over a connected base. Therefore, by Corollary~\ref{cor:Pnd-group-connected-base}, applying $\Pnalg$ yields a fibration, so Equation~\eqref{eq:Pnd-still-fibration} is a fibration. The map $0\rightarrow F$ is $0$-connected, so Theorem~\ref{thm:Pnd-preserves-connectivity} shows that $0 \simeq \Pnalg(0) \rightarrow \Pnalg F(X)$ is $0$-connected as well. Hence $\pi_0 \Pnalg F(X) = 0$. \end{proof} \subsection{If $m<n$, then $\Pnalg \sR{\psp[n]{F}} \simeq 0$} This section establishes the relatively easy fact that for $\Perp_n$, the part of the Goodwillie tower below degree $n$ is trivial. \begin{lemma} \label{lem:Pnd-perp-contractible} Let $R(X_1,\ldots,X_n) = \realization{cr_n \left( \Perp_{n}^{*} F\right) (X_1, \ldots, X_n)}$ be a functor of $n$ variables. Define the diagonal of such a functor to be the functor of one variable given by $(\diag R)(X) = R(X,\ldots,X)$. Then $P_m^{\alg} \left( \diag R \right) (X) \simeq 0$ for $0\le m<n$. 
\end{lemma} \begin{proof} Two results in Goodwillie's work \cite[Lemmas~3.1 and~3.2]{Cal3} combine to show that if $H(X_1,\ldots, X_n)$ is a functor of $n$ variables that is contractible whenever some $X_i$ is contractible (this is called a ``multi-reduced'' functor), then $P_m (\diag H) \simeq 0$ for $0\le m<n$. Writing out $P_m^{\alg} (\diag R) (X) = P_m [ L (\diag R)_X ] (S^0)$, it is easy to check that $L(\diag R)_X$ is the diagonal of a multi-reduced functor, so Goodwillie's result applies. \end{proof} \subsection{The functor $A_F$ has no $(n+1)^{\text{st}}$ cross effect} Having created the functor $A_F$ to be ``$F$ with the cross effect killed'', we now need to establish that $\Perp A_F \simeq 0$. The main issue is commuting $\Perp$ past the realization. \begin{lemma} \label{lem:perp-AF-zero} Let $F$ be a functor that has either connected values (\ref{hypothesis-1}) or group values (\ref{hypothesis-2}), let $\Perp$ denote $\Perp_n$ for some $n$, and let $A_F$ be the functor given in Definition~\ref{def:AF}. Then $\Perp A_F$ is contractible. \end{lemma} \begin{proof} Taking cross effects is a homotopy inverse limit construction, and homotopy inverse limits commute, so $$ \Perp A_F \simeq \hofib \left( \Perp \realization{\Perp^{*+1} F} \rightarrow \Perp F \right) . $$ It is easy to check that if $F$ has connected values, then so does $\Perp F$, and also $\realization{\Perp^{*+1} F}$. If $F$ has group values, then so does $\Perp F$; hence $\Perp^{*+1} F$ is a functor to $\text{Simp}(\cat{G})$ --- simplicial grouplike $H$-spaces. As remarked prior to Hypothesis~\ref{hyp:group-values}, the rigidity of our definition of $H$-space implies that the realization of $\Perp^{*+1} F$ is still a functor to $\cat{G}$; that is, $\realization{\Perp^{*+1} F}$ has group values. By Lemma~\ref{lem:perp-commutes-with-realization-for-simplicial-functors}, $\Perp$ commutes with the realization in that functor, so $\Perp\sR{\psp[]{F}} \simeq \sR{\Perp \psp[]{F}}$. Finally, the map $\delta: \Perp F \rightarrow \Perp^2 F$ exhibits $\Perp F$ as the augmentation of the simplicial space $\Perp \psp[]{F}$ and provides an extra degeneracy, so the standard ``extra degeneracy'' argument \cite[Exercise~8.4.6, p.~275]{Weibel:homological-algebra} shows that $\sR{\Perp \psp[]{F}} \simeq \Perp F$, and hence that $$\Perp A_F \simeq \hofib \left( {\Perp F} \rightarrow \Perp F \right) \simeq 0, $$ as desired. \end{proof} \subsection{Proof of Main Theorem, Case II} \label{sec:proof-of-perp-nonzero} \begin{proposition} \label{prop:perp-F-nonzero} If $F$ is a reduced functor that has either connected values (\ref{hypothesis-1}) or group values (\ref{hypothesis-2}) on coproducts of $X$, then the following is a fibration sequence up to homotopy: \begin{equation} \label{eq:perp-fibration-perp-nonzero-v2} \realization{\psp{F}(X)} \xrightarrow{\epsilon} F(X) \rightarrow \Pnalg F(X) . \end{equation} Furthermore, the map $\pi_0 F(X) \rightarrow \pi_0 \Pnalg F(X)$ is the universal map to the cokernel of the group homomorphism $\pi_0 \Perp_{n+1} F(X) \rightarrow \pi_0 F(X)$. \end{proposition} \begin{proof} First, suppose that $F(X)$ takes either connected values or discrete group values on coproducts of $X$.
Consider the auxiliary diagram created by applying $\Pnalg$ to the fibration sequence defining $A_F(X)$: $$\xymatrix{ A_F(X) \ar[r] \ar[d] & \realization{\Perp_{n+1}^{*+1} F(X)} \ar[r]^{\epsilon} \ar[d] & F(X) \ar[d] \\ \Pnalg A_F(X) \ar[r] & \Pnalg \left( \realization{\Perp_{n+1}^{*+1} F(X)} \right) \ar[r] & \Pnalg F(X) }$$ Proposition~\ref{prop:perp-f-nonzero-fib-conn} (in the case of connected values) or Proposition~\ref{prop:perp-f-nonzero-fib-discrete} (in the case of discrete group values) shows that the bottom row is a fibration sequence up to homotopy. Proposition~\ref{prop:perp-f-nonzero-fib-conn} (connected values) or Proposition~\ref{prop:perp-f-nonzero-pi0-control} (discrete group values) shows that the map $F(X) \rightarrow \Pnalg F(X)$ is surjective on $\pi_0$. Lemma~\ref{lem:perp-AF-zero} shows that $\Perp_{n+1} A_F(X) \simeq 0$, so that Corollary~\ref{cor:perp-F-zero} gives $A_F(X) \simeq \Pnalg A_F(X)$, and hence the square on the right is homotopy Cartesian. Lemma~\ref{lem:Pnd-perp-contractible} shows that $\Pnalg \left( \realization{\Perp_{n+1}^{*+1} F(X)}\right) \simeq 0$, so this square being Cartesian is equivalent to \eqref{eq:perp-fibration-perp-nonzero-v2} being a fibration sequence up to homotopy, as we wanted to establish. We can reduce the general problem when $F$ has group values to the cases of connected and discrete group values that we have already considered by examining the fibration $$ \widehat{F}(X) \rightarrow F (X) \rightarrow \pi_0 F(X),$$ where $\widehat{F}(X)$ is the component of the basepoint in $F(X)$. This gives rise to the following diagram: $$\xymatrix{ \realization{\Perp_{n+1}^{*+1} \widehat{F}(X)} \ar[r] \ar[d] & \widehat{F} \ar[r] \ar[d] & \Pnalg \widehat{F} \ar[d] \\ \realization{\Perp_{n+1}^{*+1} {F}(X)} \ar[r] \ar[d] & {F} \ar[r] \ar[d] & \Pnalg {F} \ar[d] \\ \realization{\Perp_{n+1}^{*+1} \pi_0{F}(X)} \ar[r] & \pi_0{F} \ar[r] & \Pnalg \pi_0 {F} }$$ It is straightforward to check that every row and column except the middle row is a fibration that is surjective on $\pi_0$, that the composite of the two maps in the middle row is null homotopic, and that we have already shown the map $F \rightarrow \Pnalg F$ to be surjective on $\pi_0$. This gives us the data required to use the $3 \times 3$ lemma for fibrations to show that the middle row (\emph{i.e.}, \eqref{eq:perp-fibration-perp-nonzero-v2}) is a fibration and surjective on $\pi_0$. The statement about $\pi_0$ is trivial in the connected case. In the group-valued case, $\pi_0$ of every space in the top row of the diagram is zero (which is trivially a group). This implies that the vertical arrows connecting the second and third rows are $\pi_0$-isomorphisms, so the statement about $\pi_0$ follows from Proposition~\ref{prop:perp-f-nonzero-pi0-control}. \end{proof} \end{document}
\begin{document} \title{Introduction to Robust Power Domination} \begin{abstract} Sensors called phasor measurement units (PMUs) are used to monitor the electric power network. The power domination problem seeks to minimize the number of PMUs needed to monitor the network. We extend the power domination problem and consider the minimum number of sensors, and their placement, needed to ensure monitoring when $k$ sensors are allowed to fail and multiple sensors may be placed at one location. That is, what is the minimum-size multiset $S$ of vertices such that for every $F\subseteq S$ with $|F|=k$, the multiset $S\setminus F$ is a power dominating set? Such a multiset of PMU locations is called a \emph{$k$-robust power dominating set}. This paper generalizes the work done by Pai, Chang and Wang in 2010 on vertex-fault-tolerant power domination, which did not allow for multiple sensors to be placed at the same vertex. We provide general bounds and determine the $k$-robust power domination number of some graph families. \end{abstract} \textbf{Keywords:} robust power domination, power domination, tree\\ \textbf{AMS subject classification:} 05C69, 05C85, 68R10, 94C15 \section{Introduction} The power domination problem seeks to find the placement of the minimum number of sensors called phasor measurement units (PMUs) needed to monitor an electric power network. In \cite{hhhh02}, Haynes et al. defined the power domination problem in graph theoretic terms by placing PMUs at a set of initial vertices and then applying observation rules to the vertices and edges of the graph. This process was simplified by Brueni and Heath in \cite{bh05}. Pai, Chang, and Wang \cite{pcw10} generalized power domination to create \emph{vertex-fault-tolerant power domination} in 2010 to model the possibility of sensor failure. The $k$-\textit{fault-tolerant power domination problem} seeks to find the minimum number of PMUs needed to monitor a power network (and their placements) given that any $k$ of the PMUs will fail. The vertex containing the failed PMU remains in the graph, as do its edges; it is only the PMU that fails. This generalization allows for the placement of only one PMU per vertex. We consider the related problem of the minimum number of PMUs needed to monitor a power network given that $k$ PMUs will fail, \emph{but also allow for multiple PMUs to be placed at a given vertex}. We call this \emph{PMU-defect-robust power domination}, as it is not the vertices that cause a problem with monitoring the network, but the individual PMUs themselves. This models potential synchronization issues, sensor errors, or malicious interference with the sensor outputs. To demonstrate how drastic the difference between vertex-fault-tolerant power domination and PMU-defect-robust power domination can be, consider the star on $16$ vertices with $k=1$, shown in Figure \ref{fig:starmulti}. Notice that in vertex-fault-tolerant power domination, if one PMU is placed in the center of the star and this PMU fails, then all but one of the leaves must have PMUs in order to still form a power dominating set. However, with PMU-defect-robust power domination, placing two PMUs in the center is sufficient to ensure that even if one PMU fails, the power domination process will still observe all of the vertices.
\\ \begin{figure} \caption{A minimum vertex-fault-tolerant power dominating set and a minimum PMU-defect-robust power dominating set shown for a star when $k=1$.} \label{fig:starmulti} \end{figure} In Section \ref{sec:prelimkrpds}, we review definitions from past work and formally define PMU-defect-robust power domination. We also include some basic results in that section. Section \ref{sec:boundskrpds} consists of general bounds for $k$-robust power domination and in Section \ref{sec:completebipartite} we demonstrate the tightness of these bounds with a family of complete bipartite graphs. In Section \ref{sec:trees} we establish the $k$-robust power domination number for trees. Section \ref{sec:concrem} contains concluding remarks, including suggestions for future work. \mathcal{S}ection{Preliminaries}\label{sec:prelimkrpds} We begin by giving relevant graph theory definitions. Then we define power domination, vertex-fault-tolerant power domination, and PMU-defect-robust power domination. Finally, we include useful properties of the floor and ceiling functions. \mathcal{S}ubsection{Graph Theory} A graph $G$ is a set of vertices, $V(G)$, and a set of edges, $E(G)$. Each (unordered) edge consists of a set of two distinct vertices; the edge $\{u,v\}$ is often written as $uv$. When $G$ is clear, we write $V=V(G)$ and $E=E(G)$. A \emph{path} from $v_1$ to $v_{\ell+1}$ is a sequence of vertices and edges $v_1, e_1, v_2, e_2, \ldots, v_\ell, e_\ell, v_{\ell+1}$ so that the $v_i$ are distinct vertices and $v_i\in e_i$ for all $i$ and $v_i\in e_{i-1}$ for all $i\geq 2$. Such a path has \emph{length} $\ell$. The \emph{distance} between vertices $u$ and $v$ is the minimum length of a path between $u$ and $v$. A graph $G$ is \emph{connected} if there is a path from any vertex to any other vertex. \emph{Throughout what follows, we consider only graphs that are connected.} We say that vertices $u$ and $v$ are \emph{neighbors} if $uv\in E$. The \emph{neighborhood} of $u\in V$ is the set containing all neighbors of $u$ and is denoted by $N(u)$. The \emph{closed neighborhood} of $u$ is $N[u]=N(u)\cup \{u\}$. The \emph{degree} of a vertex $u\in V$ is the number of edges that contain $u$, that is, $\deg{u}{G} = |N(u)|$. When $G$ is clear, we omit the subscript. The \emph{maximum degree} of a graph $G$ is $\Delta\left(G\right) = \displaystyle \max_{v\in V} \deg{v}{}$. A \emph{subgraph} $H$ of a graph $G$ is a graph such that $V(H)\mathcal{S}ubseteq V(G)$ and $E(H)\mathcal{S}ubseteq E(G)$. An \emph{induced subgraph} $H$ of a graph $G$, denoted $H=G[V(H)]$, is a graph with vertex set $V(H)\mathcal{S}ubseteq V(G)$ and edge set $E(H)=\{ uv : u,v\in V(H) \text{ and } uv\in E(G)\}$. We refer the reader to \textit{Graph Theory} by Diestel \cite{diestelbook} for additional graph terminology not detailed here. \mathcal{S}ubsection{Power domination, vertex-fault-tolerant power domination, and PMU-defect-robust power domination} What follows is an equivalent statement of the power domination process as defined in \cite{hhhh02}, and established by \cite{bh05}. The \emph{power domination process} on a graph $G$ with initial set $S\mathcal{S}ubseteq V$ proceeds recursively by: \begin{enumerate} \item $B = \displaystyle \bigcup_{v\in S} N[v]$ \item While there exists $v\in B$ such that exactly one neighbor, say $u$, of $v$ is \emph{not} in $B$, add $u$ to $B$. \end{enumerate} Step 1 is referred to as the \emph{domination step} and each repetition of step 2 is called a \emph{zero forcing step}. 
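To make the two observation rules concrete, here is a minimal Python sketch of the process. It is only an illustration: the adjacency-dictionary representation and the helper name \texttt{power\_dominate} are our own choices and are not taken from the cited sources.
\begin{verbatim}
def power_dominate(adj, S):
    """adj: dict mapping each vertex to its set of neighbors.
    S: the vertices initially chosen to receive PMUs.
    Returns the set B of observed vertices at termination."""
    # Domination step: observe the closed neighborhoods of S.
    B = set()
    for v in S:
        B.add(v)
        B |= adj[v]
    # Zero forcing steps: while some observed vertex has exactly
    # one unobserved neighbor, that neighbor becomes observed.
    changed = True
    while changed:
        changed = False
        for v in list(B):
            unobserved = [u for u in adj[v] if u not in B]
            if len(unobserved) == 1:
                B.add(unobserved[0])
                changed = True
    return B

# Example: the star on 16 vertices pictured above; a single PMU at
# the center observes every vertex already in the domination step.
star = {0: set(range(1, 16)), **{i: {0} for i in range(1, 16)}}
assert power_dominate(star, [0]) == set(star)
\end{verbatim}
A set $S$ is a power dominating set exactly when the returned set of observed vertices equals $V(G)$; the small examples in Sections \ref{sec:boundskrpds} and \ref{sec:completebipartite} can be checked in the same way.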
During the process, we say that a vertex in $B$ is \emph{observed} and a vertex not in $B$ is \emph{unobserved}. A \emph{power dominating set} of a graph $G$ is an initial set $S$ such that $B=V(G)$ at the termination of the power domination process. The \emph{power domination number} of a graph $G$ is the minimum cardinality of a power dominating set of $G$ and is denoted by $\gp{G}$. In \cite{pcw10}, Pai, Chang, and Wang define the following variant of power domination. For a graph $G$ and an integer $k$ with $0\leq k \leq |V|$, a set $S\subseteq V$ is called a \emph{$k$-fault-tolerant power dominating set of $G$} if $S\setminus F$ is still a power dominating set of $G$ for any subset $F\subseteq V$ with $|F|\leq k$. The \emph{$k$-fault-tolerant power domination number}, denoted by $\vftk{G}$, is the minimum cardinality of a $k$-fault-tolerant power dominating set of $G$. While $k$-fault-tolerant power domination allows us to examine what occurs when a previously chosen PMU location is no longer usable (yet the vertex remains in the graph), it is also interesting to study when an individual PMU fails. That is, allow for multiple PMUs to be placed at the same location and consider if a subset of the PMUs fail. This also avoids issues with poorly connected graphs, such as in Figure \ref{fig:starmulti}, where $\vftother{1}{G}$ may be close to the number of vertices of $G$. Thus we define \emph{PMU-defect-robust power domination} as follows. \begin{defn} For a given graph $G$ and integer $k\geq 0$, we say that a multiset $S$, each of whose elements is in $V$, is a \emph{$k$-robust power dominating set} of $G$ if $S\setminus F$ is a power dominating set of $G$ for any submultiset $F$ of $S$ with $|F|=k$. We shorten $k$-robust power dominating set of $G$ to $k$-rPDS of $G$. The size of a minimum $k$-rPDS is denoted by $\gpk{G}$ and such a multiset is also referred to as a $\ddot{\gamma}_P^k\text{-set}$ of $G$. The \emph{number of PMUs} at a vertex $v\in S$ is its multiplicity in $S$, denoted by $\pmuss{S}{v}$, or when $S$ is clear, by $\pmus{v}$. \end{defn} There are several observations that one can quickly make. \begin{obs}\label{obs:comparing} Let $G$ be a graph and $k\geq 0$. Then \begin{enumerate} \item $\gpkother{0}{G} = \vftother{0}{G}=\gp{G}$, \item $\gpk{G} \leq \vftk{G}$, \item $\gp{G}=1$ if and only if $\gpk{G} = k+1$. \end{enumerate} \end{obs} For any minimum $k$-rPDS, having more than $k+1$ PMUs at a single vertex is redundant. \begin{obs}\label{obs:numpmusleqk+1} Let $G$ be a graph and $k\geq 0$. If $S$ is a $\ddot{\gamma}_P^k\text{-set}$ of $G$, then for all $v\in S$ we have $\pmus{v} \leq k+1$. \end{obs} \subsection{Floor and ceiling functions} Throughout what follows, recall the following rules for the floor and ceiling functions. Most can be found in Chapter 3 in \cite{knuthbook} and we provide proofs for the rest. \begin{prop}{\rm \cite[Equation 3.11]{knuthbook}}\label{prop:ceilfracfix} If $m$ is an integer, $n$ is a positive integer, and $x$ is any real number, then \[ \left\lceil \frac{\lceil x \rceil+m}{n} \right\rceil =\left\lceil \frac{x+m}{n} \right\rceil. \] \end{prop} \begin{prop}{\rm \cite[Ch. 3 Problem 12]{knuthbook}}\label{prop:ceiltofloor} If $m$ is an integer and $n$ is a positive integer, then \[ \left\lceil \frac{m}{n} \right\rceil =\left\lfloor \frac{m-1}{n} \right\rfloor +1.
\] \end{prop} \begin{prop}{\rm \cite[Equation 3.4]{knuthbook}}\label{prop:ceilneg} For any real number $x$, $\lceil -x \rceil = - \lfloor x \rfloor$. \end{prop} \begin{prop} \label{prop:ceilfuncbound} If $x$ and $y$ are real numbers then \[\lceil x \rceil + \lceil y \rceil -1 \leq \lceil x+y \rceil.\] \end{prop} \begin{proof} Observe that $ \lceil x \rceil -1 + \lceil y \rceil -1 < x+y $ and so $\lceil x \rceil + \lceil y \rceil -2 < \lceil x+y \rceil $, which is a strict inequality of integers, so $\lceil x \rceil + \lceil y \rceil -1 \leq \lceil x+y \rceil$. \end{proof} We can repeatedly apply the inequality in Proposition \ref{prop:ceilfuncbound} to obtain \begin{cor} \label{cor:ceilfuncmultbound} If $x$ is a real number and $a$ is a positive integer then \[a \lceil x \rceil \leq \lceil ax \rceil +a-1.\] \end{cor} \section{General bounds}\label{sec:boundskrpds} A useful property of robust power domination is the subadditivity of the parameter with respect to $k$. This idea is established in the next three statements. \begin{prop}\label{prop:incr} Let $k\geq 0$. For any graph $G$, $\gpk{G} +1 \leq \gpkplus{G}$. \end{prop} \begin{proof} Consider a $\gpkotherset{k+1}$, $S$, of $G$. Let $v\in S$. Create $S'=S\setminus\{v\}$, that is, $S'$ is $S$ with one fewer PMU at $v$. Observe that for any $F'\subseteq S'$ with $|F'|=k$, we have $F'\cup \{v\} \subseteq S$ and $|F'\cup \{v\}|=k+1$. Hence $S\setminus \left ( F'\cup \{v\} \right )$ is a power dominating set of $G$. Thus, for any such $F'$, we have $\left ( S\setminus \{v\} \right )\setminus F' = S'\setminus F'$ is a power dominating set of $G$. Therefore, $S'$ is a $k$-robust power dominating set of $G$ of size $|S|-1$. \end{proof} Proposition \ref{prop:incr} can be applied repeatedly to obtain the next result. \begin{cor}\label{cor:incrj} Let $k\geq 0$ and $j\geq 1$. For any graph $G$, \[\gpk{G} + j \leq \gpkother{k+j}{G}.\] \end{cor} Corollary \ref{cor:incrj} implies the lower bound in the next proposition. The upper bound follows from taking $k+1$ copies of any minimum power dominating set for $G$ to form a $k$-rPDS. \begin{prop}\label{prop:basicbounds} Let $k\geq 0$. For any graph $G$, \[\gp{G}+k \leq \gpk{G} \leq (k+1)\gp{G}.\] \end{prop} Observe that if $\gp{G} =1$ for any graph $G$, both Observation \ref{obs:comparing} and Proposition \ref{prop:basicbounds} demonstrate that $\gpk{G} = k+1.$ Haynes et al. observed in \cite[Observation~4]{hhhh02} that in a graph with maximum degree at least three, a minimum power dominating set can be chosen in which each vertex has degree at least $3$. We observe that this is the same for robust power domination. \begin{obs}\label{obs:deg3} Let $k\geq 0$. If $G$ is a connected graph with $\Delta(G)\geq 3$, then $G$ contains a $\ddot{\gamma}_P^k\text{-set}$ in which every vertex has degree at least 3. \end{obs} A \emph{terminal path} from a vertex $v$ in $G$ is a path from $v$ to a vertex $u$ such that $\deg{u}{}=1$ and every internal vertex on the path has degree 2. A \emph{terminal cycle} from a vertex $v$ in $G$ is a cycle $v,u_1,u_2,\ldots,u_\ell,v$ in which $\deg{u_i}{G}=2$ for $i=1,\ldots, \ell$.
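Before turning to the structural results about terminal paths and cycles, we note that the definition of a $k$-rPDS and the bounds of Proposition \ref{prop:basicbounds} are easy to verify by brute force on small graphs. The sketch below is again only our own illustration; it assumes the hypothetical \texttt{power\_dominate} helper from Section \ref{sec:prelimkrpds} and simply tries every way of deleting $k$ PMUs, so it is feasible only for very small examples.
\begin{verbatim}
from itertools import combinations

def is_k_robust(adj, pmus, k):
    """pmus is a list of vertices with repetition (a multiset of
    PMU locations).  Returns True if every deletion of k PMUs
    leaves a power dominating set."""
    idx = range(len(pmus))
    for removed in combinations(idx, k):
        remaining = {pmus[i] for i in idx if i not in removed}
        if power_dominate(adj, remaining) != set(adj):
            return False
    return True

# The star of Figure 1 with k = 1: two PMUs at the center form a
# 1-rPDS, matching the lower bound gamma_P + k = 1 + 1 = 2, while
# a single PMU at the center does not.
star = {0: set(range(1, 16)), **{i: {0} for i in range(1, 16)}}
assert is_k_robust(star, [0, 0], 1)
assert not is_k_robust(star, [0], 1)
\end{verbatim}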
\begin{prop}\label{prop:twotermpathsortermcycle} Let $k\geq 0$ and let $G$ be a connected graph with $\Delta(G)\geq 3$. Let $S$ be a $\ddot{\gamma}_P^k\text{-set}$ in which every vertex has degree at least 3. Any vertex $v\in S$ that has at least two terminal paths from $v$ must have $\pmus{v}=k+1$. Any vertex $v\in S$ that has at least one terminal cycle must have $\pmus{v}=k+1$. \end{prop} \begin{proof} Let $v$ be a vertex in $S$ and suppose that $v$ has two terminal paths or a terminal cycle. All of the vertices in the terminal paths or terminal cycle have degree 1 or 2 and so are not in $S$. Thus, there are at least two neighbors of $v$ which can only be observed via $v$. As $v$ can only observe both of these neighbors via the domination step, it must be the case that $\pmus{v} = k+1$. \end{proof} Zhao, Kang, and Chang \cite{zkc06} defined the family of graphs $\mathcal{T}$ to be those graphs obtained by taking a connected graph $H$ and for each vertex $v\in V(H)$ adding two vertices, $v'$ and $v''$, and two edges $vv'$ and $vv''$, with the edge $v'v''$ optional. The complete bipartite graph $K_{3,3}$ is the graph with vertex set $X\cup Y$ with $|X|=|Y|=3$ and edge set $E=\{xy:x\in X, y\in Y\}$. \begin{thm}{\rm \cite[Theorem~3]{zkc06}}\label{thm:powdomnover3} If $G$ is a connected graph on $n\geq 3$ vertices then $\gp{G}\leq\frac{n}{3}$ with equality if and only if $G\in \mathcal{T}\cup \{K_{3,3}\}$. \end{thm} This gives an upper bound for $\gpk{G}$ in terms of the number of vertices, together with equality conditions, as demonstrated in the next corollary. \begin{cor} Let $G$ be a connected graph with $n\geq 3$ vertices. Then $\gpk{G}\leq (k+1)\frac{n}{3}$ for $k\geq 0$. When $k=0$, this is an equality if and only if $G\in \mathcal{T}\cup\{K_{3,3}\}$. When $k\geq 1$, this is an equality if and only if $G\in\mathcal{T}$. \end{cor} \begin{proof} The upper bound is given by Proposition \ref{prop:basicbounds} and Theorem \ref{thm:powdomnover3}. From these results, we need only consider $\mathcal{T}\cup \{K_{3,3}\}$ for equality. The $k=0$ case follows directly from the power domination result. Let $k\geq 1$. First consider $G\in\mathcal{T}$, constructed from $H$. Note that $\Delta \left ( G \right ) \geq 3$, so there exists a $\ddot{\gamma}_P^k\text{-set}$, say $S$, in which every vertex has degree at least 3, so every vertex in $S$ is a vertex of $H$. For each $v\in V(H)$, $\deg{v}{G}\geq 3$ and there are either two terminal paths from $v$ (if $v'v''\not\in E(G)$) or a terminal cycle from $v$ (if $v'v''\in E(G)$). By Proposition \ref{prop:twotermpathsortermcycle}, each $v\in V(H)$ must have at least $k+1$ PMUs. Finally, consider $K_{3,3}$. Note that $\gp{K_{3,3}}=2$. We will see in Theorem \ref{thm:k33} that $\gpk{K_{3,3}} = k +\left\lfloor \frac{k}{5}\right\rfloor +2 < 2(k+1)$ for $k\geq 1$. \end{proof} \begin{defn} For any graph $G$, define $\bigpds{G}$ to be the size of the largest set $A\subseteq V$ such that for any $B\subseteq A$ with $|B|=\gp{G}$, $B$ is a power dominating set of $G$. \end{defn} Observe that $\gp{G}\leq \bigpds{G}$. For example, the star graph $S_{16}$ shown in Figure \ref{fig:starmulti} has $\gp{S_{16}}= \bigpds{S_{16}}=1$. The complete bipartite graph $K_{3,3}$ has $\gp{K_{3,3}}= 2$ and $\bigpds{K_{3,3}}=6$ as any two vertices of $K_{3,3}$ form a power dominating set. \begin{prop}\label{prop:gpklowQ} For any graph $G$ and $k\geq 0$, if $\bigpds{G} \geq k+\gp{G}$ then $\gpk{G}=k+\gp{G}$.
\end{prop} \begin{proof} If $\bigpds{G} \geq k+\gp{G}$, then there exists a set $S$ of size at least $k+\gp{G}$ so that any $\gp{G}$ elements of $S$ form a power dominating set of $G$. Thus, any $\gp{G}+k$ elements of $S$ form a $k$-rPDS of $G$ of size $\gp{G}+k$ and so $\gpk{G}\leq \gp{G}+k$. By the lower bound in Proposition \ref{prop:basicbounds}, $\gpk{G}\geq \gp{G}+k$. \end{proof} When $\bigpds{G} > \gp{G} \geq 2$, the following upper bound sometimes improves the upper bound from Proposition \ref{prop:basicbounds}. \begin{thm}\label{thm:gpQbound} If $\bigpds{G} > \gp{G}\geq 2$, then $\gpk{G} \leq \left\left\lceileil \frac{\bigpds{G}(k+\gp{G}-1)}{\bigpds{G}-\gp{G}+1} \right\right\rceileil$ for $k\geq 1$. \end{thm} \begin{proof} Let $A=\lb v_1,v_2,\ldots,v_{\bigpds{G}}\rb\mathcal{S}ubseteq V$ be a maximum set such that any subset of size $\gp{G}$ is a power dominating set of $G$. For what follows, let $p=\left\left\lceileil \frac{\bigpds{G}(k+\gp{G}-1)}{\bigpds{G}-\gp{G}+1} \right\right\rceileil$. Construct $S=\lb v_1^{m_1}, v_2^{m_2},\ldots, v_{\bigpds{G}}^{m_{\bigpds{G}}} \rb$ where \begin{align*} m_1= \left\left\lceileil \frac{p}{\bigpds{G}} \right\right\rceileil \text{ and } m_i= \min\lb \left\left\lceileil \frac{p}{\bigpds{G}} \right\right\rceileil, p-\mathcal{S}um_{j=1}^{i-1} m_j \rb \text{ for } i\geq 2. \end{align*} In order to show that $S$ is a $k$-rPDS of $G$, we will show that $p-k \geq (\gp{G}-1)\left\left\lceileil \frac{p}{\bigpds{G}} \right\right\rceileil + 1.$ Assume this is true. Then whenever we have $p$ PMUs and $k$ fail, there are at least $(\gp{G}-1)\left\left\lceileil \frac{p}{\bigpds{G}} \right\right\rceileil + 1$ working PMUs. As each vertex has at most $\left\left\lceileil \frac{p}{\bigpds{G}} \right\right\rceileil$ PMUs, there are at least $\gp{G}$ vertices of $A$ that must have at least one PMU remaining and so form a power dominating set. We prove the equivalent statement \[p-k- (\gp{G}-1)\left\left\lceileil \frac{p}{\bigpds{G}} \right\right\rceileil \geq 1.\] \noindent Observe that by Proposition \ref{prop:ceilfracfix}, \begin{align*} p-k-(\gp{G}-1)\left\left\lceileil \frac{p}{\bigpds{G}} \right\right\rceileil &= p-k-(\gp{G}-1)\left\left\lceileil \frac{k+\gp{G}-1}{\bigpds{G}-\gp{G}+1} \right\right\rceileil. \end{align*} Then by Corollary \ref{cor:ceilfuncmultbound} and simplifying, we see that \begin{align*} p-k-(\gp{G}&-1)\left\left\lceileil \frac{p}{\bigpds{G}} \right\right\rceileil \geq p-k-\left ( \left\left\lceileil \frac{(k+\gp{G}-1)(\gp{G}-1)}{\bigpds{G}-\gp{G}+1} \right\right\rceileil +\gp{G}-2 \right ) \label{eqn:fracfix}\\ &= p-\left ( \left\left\lceileil \frac{(k+\gp{G}-1)(\gp{G}-1)}{\bigpds{G}-\gp{G}+1} \right\right\rceileil +\gp{G}-2 +k \right )\nonumber\\ &= p-\left\left\lceileil \frac{(k+\gp{G}-1)(\gp{G}-1) + (\bigpds{G}-\gp{G}+1)( \gp{G}-2+k)}{\bigpds{G}-\gp{G}+1} \right\right\rceileil\nonumber \\ &=p-\left\left\lceileil\frac{\bigpds{G}(k+\gp{G}-1) -\bigpds{G}+\gp{G}-1}{\bigpds{G}-\gp{G}+1} \right\right\rceileil\nonumber\\ &=p-\left\left\lceileil\frac{\bigpds{G}(k+\gp{G}-1)}{\bigpds{G}-\gp{G}+1} \right\right\rceileil +1\nonumber\\ &=1.\nonumber \qedhere \end{align*} \end{proof} To see the difference between the upper bounds given in Theorem \ref{thm:gpQbound} and Proposition \ref{prop:basicbounds}, let $\bigpds{G}= \gp{G}+r$. 
Then Theorem \ref{thm:gpQbound} becomes \begin{align*} \gpk{G} &\leq \left\left\lceileil \frac{(\gp{G}+r)(k+\gp{G}-1)}{r+1} \right\right\rceileil\\ &= \left\left\lceileil \frac{\gp{G}(k+1)+\gp{G}(\gp{G}-2)+r(k+\gp{G}-1)}{r+1} \right\right\rceileil \\ &= \gp{G}(k+1)+\left\left\lceileil \frac{-r\gp{G}(k+1)+ \gp{G}(\gp{G}-2)+ r(k+\gp{G}-1)}{r+1} \right\right\rceileil \\ &= \gp{G}(k+1)+\left\left\lceileil \frac{-kr(\gp{G}-1)+\gp{G}(\gp{G}-2)-r}{r+1} \right\right\rceileil. \end{align*} This means that the bound from Theorem \ref{thm:gpQbound} improves the upper bound in Proposition \ref{prop:basicbounds} when the second term above is negative. Since $r\geq 1$ and $\gp{G}\geq 2$, the second term is negative as $k$ approaches infinity. Thus for $\bigpds{G} > \gp{G}\geq 2$, we see that there exists some $k'$ such that for every $k\geq k'\geq 1$, the bound from Theorem \ref{thm:gpQbound} is an improvement over the upper bound from Proposition \ref{prop:basicbounds}. It will be useful to look specifically at Theorem \ref{thm:gpQbound} when $\gp{G}=2$. \begin{cor}\label{cor:2Qbound} If $\bigpds{G}>\gp{G} = 2$, then $\gpk{G} \leq \left\left\lceileil \frac{\bigpds{G}(k+1)}{\bigpds{G}-1} \right\right\rceileil$ for $k\geq 1$. \end{cor} To illustrate the use of the bounds in this section, we use these results to determine the $k$-robust power domination number for $K_{3,3}$ and $K_{3,b}$ for sufficiently large $b$ in Section \ref{sec:completebipartite}. \mathcal{S}ection{Complete bipartite graphs}\label{sec:completebipartite} The \emph{complete bipartite graph}, $K_{n,m}$ is the graph with vertex set $V=X\cup Y$ such that $|X|=n$, $|Y|=m$, and edge set $E=\{xy : x\in X, y\in Y\}$. We examine the case when $n=m=3$, then the case when $n=3$ and $m\geq 4$. Next, we find bounds for the $n,m\geq 4$ case, which combine to provide a result for the general $n=m$ case. We will need the following observation. \begin{obs} \label{obs:bip_pow_set} Suppose the parts of $K_{n,m}$ are $X$ and $Y$, such that $|X| = n$ and $|Y|= m$. If $S$ is a power dominating set of $K_{n,m}$, then $S$ contains: at least 1 vertex from $X$ and 1 vertex from $Y$; or at least $n-1$ vertices from $X$; or at least $m-1$ vertices from $Y$. \end{obs} We will also need the Reverse Pigeonhole Principle. This follows from the the Pigeonhole Principle, which states that if $k$ objects are distributed among $n$ sets, then one set must have at least $\left\lceil \frac{k}{n}\right\rceil$ objects. \begin{rem}[Reverse Pigeonhole Principle]\label{rem:rev_pigeonhole} If $k$ objects are distributed among $n$ sets, then one set must have at most $\left\lfloor\frac{k}{n}\right\rfloor$ objects. \end{rem} To see why Remark \ref{rem:rev_pigeonhole} holds, observe that if $n$ sets each had at least $\left\lfloor\frac{k}{n}\right\rfloor+1$ objects, then, for $k = qn+r$ such that $q\geq 0$ and $0\leq r\leq n-1$, we would have \[k \geq \mathcal{S}um_{i=1}^n \left (\left\lfloor\frac{k}{n}\right\rfloor +1 \right ) = \mathcal{S}um_{i=1}^n \left ( q+1 \right ) =qn+n >k,\] which is a contradiction. \\ We begin with $\gpk{K_{3,3}}$. \begin{thm}\label{thm:k33} Let $k\geq 0$. Let $K_{3,3}$ be the complete bipartite graph with parts $X=\{x_1,x_2,x_3\}$ and $Y=\{x_4,x_5,x_6\}$. Then \[\gpk{K_{3,3}}= k + \left\left\lfloorloor \frac{k}{5} \right\right\rfloorloor + 2.\] \end{thm} \begin{proof} We begin by observing that any two vertices of $K_{3,3}$ form a power dominating set, and so $\bigpds{K_{3,3}}=6$. 
First we prove the lower bound $k+ \left \left\lfloorloor \frac{k}{5} \right \right\rfloorloor +2 \leq \gpk{K_{3,3}}$ where $k=5q$. Assume for contradiction that there exists a $\gpkotherset{5q}$ $S$ of size $5q+ \left \left\lfloorloor \frac{5q}{5} \right \right\rfloorloor +1 = 6q+1$. By the Pigeonhole Principle, some $x_i$ contains at least $\left\left\lceileil\frac{6q+1}{6}\right\right\rceileil = q + 1$ of the PMUs. Observe that $|S|-5q=q+1$. Thus, we can remove $5q$ PMUs so that some vertex $x_i$ contains all remaining PMUs. This is a contradiction, as $\gp{K_{3,3}}=2$. Thus, $\gpkother{5q}{K_{3,3}} \geq 6q+2 = 5q + \left \left\lfloorloor \frac{5q}{5} \right\right\rfloorloor +2$, as desired. The lower bounds when $k$ is not a multiple of $5$ then follow by Corollary \ref{cor:incrj}. For the upper bound, observe that by Corollary \ref{cor:2Qbound}, \begin{align*} \gpk{K_{3,3}} &\leq \left\left\lceileil \frac{6(k+1)}{5} \right\right\rceileil \\ &= k+1+\left\left\lceileil \frac{k+1}{5} \right\right\rceileil. \\ \end{align*} Then by Proposition \ref{prop:ceiltofloor}, we see that \begin{align*} \gpk{K_{3,3}} &\leq k+1 + \left\left\lfloorloor \frac{k+1-1}{5} \right\right\rfloorloor +1\\ &= k + \left\left\lfloorloor \frac{k}{5} \right\right\rfloorloor +2. \qedhere \end{align*} \end{proof} Theorem \ref{thm:k33} gives an example of a graph for which Theorem \ref{thm:gpQbound} is tight and the structure of the $\ddot{\gamma}_P^k\text{-set}$ suggested by the proof of Theorem \ref{thm:gpQbound} is shown in Figure \ref{fig:k33}. A larger family of complete bipartite graphs follows the same pattern, as shown in Theorem \ref{thm:k3bgeqk/3+3}. \begin{figure} \caption{Minimum $k$-rPDS for $K_{3,3} \label{fig:k33} \end{figure} \begin{thm}\label{thm:k3bgeqk/3+3} Let $k\geq 0$. Let $K_{3,m}$ be the complete bipartite graph with parts $X= \{x_1, x_2, x_3\}$ and $Y=\{y_1, y_2, \ldots, y_m\}$. For $m\geq \left\left\lfloorloor \frac{k}{3} \right\right\rfloorloor + 3$, \[\gpk{K_{3,m}} = k + \left\left\lfloorloor \frac{k}{3} \right\right\rfloorloor +2.\] \end{thm} \begin{proof} First we prove the lower bound when $k=3q$. Assume for eventual contradiction that there exists a $\gpkotherset{3q}$, $S$, of size $3q + \left\left\lfloorloor \frac{3q}{3} \right\right\rfloorloor +1 = 4q+1$. Let $y=\mathcal{S}um_{i=1}^m \pmus{y_i}$. Then \[\pmus{x_1}+\pmus{x_2}+\pmus{x_3}+y = 4q+1.\] By the Pigeonhole Principle, we see that one of $x_1, x_2, x_3$ or $y$ must represent at least \[\left\left\lceileil \frac{4m+1}{4} \right\right\rceileil = q+1\] of the PMUs. Observe that $|S|-3q=q+1$. Thus, we can remove $3q$ PMUs such that either: \begin{enumerate} \item All $q+1$ remaining PMUs are on a single $x_i$, which is a contradiction as this is only one vertex and $\gp{K_{3,m}}=2$; or \item All $q+1$ remaining PMUs are on the $y_i$ vertices. In order for the PMUs on the $y_i$'s to form a power dominating set of $K_{3,m}$, $m-1$ of the $y_i$'s must have a PMU. However, we also have that \begin{align*} m-1 &\geq \left\left\lfloorloor\frac{3q}{3}\right\right\rfloorloor+3-1\\ &=q+2. \end{align*} This means that at least $q+2$ PMUs are needed but after $3q$ PMUs are removed only $q+1$ PMUs remain, a contradiction. \end{enumerate} Therefore, $\gpkother{3q}{K_{3,m}} > 4q+1$. Hence, $\gpkother{3q}{K_{3,m}} \geq 4q+2=3q+\left\left\lfloorloor \frac{3q}{3}\right\right\rfloorloor +2$, as desired. The lower bounds for the remaining cases then follow by Corollary \ref{cor:incrj}. 
For the upper bound, the case of $k=0$ is given by the power domination number. If $m=3$ we need only consider when $k=0,1,2$; this is covered by Theorem \ref{thm:k33}. If $m\geq 4$ and $k\geq 1$, we have $\bigpds{K_{3,m}}=4$. Then by Corollary \ref{cor:2Qbound}, \begin{align*} \gpk{K_{3,m}} &\leq \left\left\lceileil \frac{4(k+1)}{3} \right\right\rceileil\nonumber \\ &= k+1+\left\left\lceileil \frac{k+1}{3} \right\right\rceileil \nonumber \end{align*} and by Proposition \ref{prop:ceiltofloor}, \begin{align*} \gpk{K_{3,m}} &\leq k+1 + \left\left\lfloorloor \frac{k+1-1}{3} \right\right\rfloorloor +1\\ &= k + \left\left\lfloorloor \frac{k}{3} \right\right\rfloorloor +2.\nonumber \qedhere \end{align*} \end{proof} Next, we examine complete bipartite graphs with at least 4 vertices in each part, beginning with the lower bound in the following theorem. \begin{thm}\label{thm:bipartite_lower} Let $m \geq n\geq 4$ and $k\geq 1$. Then, \[\gpk{K_{n,m}} \geq \begin{cases} 2(k+1)- 4\left\lfloor \frac{k}{n+2} \right\rfloor & k \equiv i \Mod{n+2},\, \, 0\leq i\leq n-4\\ k+n-1 +(n-2)\left\lfloor \frac{k}{n+2}\right\rfloor & \text{otherwise} \end{cases}\] \end{thm} \begin{proof} Let $K_{n,m}$ be the complete bipartite graph with $V(K_{n,m}) = X\cup Y$, leaving the sizes of $X$ and $Y$ generic. Let $k = q(n+2) +i$ for $q\geq 0$ and $0\leq i < n+2.$ Observe the following: \begin{align*} i &= k - q(n+2)\\ &= k - \left\lfloor \frac{k}{n+2}\right\rfloor(n+2) \end{align*} First suppose that $0\leq i\leq n-4$. For the sake of contradiction, assume that there exists a $k$-robust power dominating set $S$ such that $|S| = 2k+1- 4\left\lfloor \frac{k}{n+2} \right\rfloor$. Thus, \begin{align*} |S| &= 2k+1 + (n-2 - (n+2))\left\lfloor \frac{k}{n+2} \right\rfloor\\ &= k+1 + k - (n+2)\left\lfloor \frac{k}{n+2} \right\rfloor + (n-2)\left\lfloor \frac{k}{n+2} \right\rfloor \\ &= k + 1 + i + (n-2)\left\lfloor \frac{k}{n+2} \right\rfloor\\ &= q(n+2) +i + 1 + i + q(n-2)\\ &= q(n+2)+2i +1+q(n-2)\\ &= 2qn +2i +1. \end{align*} Without loss of generality, assume that $\pmus{X} \leq \pmus{Y}$. By Observation \ref{rem:rev_pigeonhole}, $\pmus{X} \leq \left\lfloor\frac{2qn+2i+1}{2}\right\rfloor = qn +i \leq k$. Observe that if $q = 0$, we have equality. Let $B\mathcal{S}ubseteq S$, such that $|B| = qn +i$ and $B$ contains all the PMUs from vertices of $X$ (and possibly some from vertices of $Y$). Then, by definition, $S\mathcal{S}etminus B$ contains only PMUs from vertices of $Y$, and $|S\mathcal{S}etminus B| = qn +i+1$. Note that if $q= 0$, we are distributing $i+1 \leq n-3$ PMUs amongst the vertices of $Y$, and by Observation \ref{obs:bip_pow_set}, $S\mathcal{S}etminus B$ is not a power dominating set. By Remark \ref{rem:rev_pigeonhole}, there exists a vertex $y_1 \in Y$, such that \begin{align*} \pmuss{S\mathcal{S}etminus B}{y_1}&\leq \left\lfloor\frac{qn+i+1}{|Y|}\right\rfloor \\ &\leq \left\lfloor\frac{qn+n-4+1}{n}\right\rfloor \\ & = \left\lfloor\frac{qn}{n}+\frac{n-3}{n}\right\rfloor \\ &= q+\left\lfloor\frac{n-3}{n}\right\rfloor \\ &= q. \end{align*} Let $B'\mathcal{S}ubseteq S\mathcal{S}etminus B$ such that $|B'|= q$ and $B'$ contains all the PMUs from $y_1$ (and possibly some from other vertices of $Y$). Then, by definition, $S\mathcal{S}etminus (B\cup B')$ contains PMUs only from vertices of $Y\mathcal{S}etminus\{y_1\}$ and $|S\mathcal{S}etminus (B\cup B')|=q(n-1)+i+1$. 
Thus, by Remark \ref{rem:rev_pigeonhole} there exists a vertex, $y_2\in Y$, such that \begin{align*} \pmuss{S\mathcal{S}etminus (B\cup B')}{y_2} &\leq \left\lfloor\frac{q(n-1)+i+1}{|Y|-1}\right\rfloor\\ &\leq \left\lfloor\frac{q(n-1)+n-4+1}{n-1}\right\rfloor\\ & = \left\lfloor\frac{q(n-1)}{n-1}+\frac{n-3}{n-1}\right\rfloor \\ &= q+\left\lfloor\frac{n-3}{n}\right\rfloor \\ &= q. \end{align*} Let $B''\mathcal{S}ubseteq S\mathcal{S}etminus (B\cup B')$ such that $|B''| = q$ and $B$ contains all the PMUs from $y_2$ (and possibly from other vertices of $Y$). Then, by definition $S\mathcal{S}etminus (B\cup B' \cup B'')$ contains only PMUs from vertices of $Y\mathcal{S}etminus\{y_1,y_2\}$. Note that $|B\cup B' \cup B''| = k$ and by Observation \ref{obs:bip_pow_set}, $S\mathcal{S}etminus (B\cup B' \cup B'')$ is not a power dominating set. Therefore, no such $S$ is a $k-$robust dominating set. \\ For the second case, suppose that $n-3\leq i\leq n+1$. By Proposition \ref{prop:basicbounds}, to show that $\gpk{K_{n,m}}\geq k+n-1 +(n-2)\left\lfloor \frac{k}{n+2}\right\rfloor$, we need only show it for $i = n-3$. For the sake of contradiction, suppose there exists a $k$-robust power dominating set, $S$, such that $|S| = k+n-2 +(n-2)\left\lfloor \frac{k}{n+2}\right\rfloor$. Thus, \begin{align*} |S| &= q(n+2) +n-3 +n -2 +q(n-2)\\ &= 2qn+2n-5 \end{align*} Without loss of generality, assume that $\pmus{X} \leq \pmus{Y}$. By Remark \ref{rem:rev_pigeonhole}, \begin{align*} \pmus{X} &\leq \left\lfloor \frac{2qn+2n-5 }{2}\right\rfloor \\ &= qn +n -3 \\ &\leq q(n+2)+n-3 \\ &= k. \end{align*} Observe that if $q = 0$, we have equality. Let $B\mathcal{S}ubseteq S$, such that $|B| = qn +n -3$ and $B$ contains all the PMUs from $X$ (and possibly some from $Y$). Then by definition, $S\mathcal{S}etminus B$ contains only PMUs from vertices of $Y$ and $|S\mathcal{S}etminus B| = qn +n -2$. Note that if $q= 0$, we are distributing $n-2$ PMUs amongst the vertices of $Y$, and by Observation \ref{obs:bip_pow_set}, $S\mathcal{S}etminus B$ is not a power dominating set. By Remark \ref{rem:rev_pigeonhole}, there exists a vertex, $y_1 \in Y$, such that \begin{align*} \pmuss{S\mathcal{S}etminus B}{y_1} &\leq \left\lfloor \frac{qn+n -1}{|Y|}\right\rfloor \\ &\leq \left\lfloor \frac{qn+n -1}{n}\right\rfloor \\ &\leq q. \end{align*} Let $B'\mathcal{S}ubseteq S\mathcal{S}etminus B$, such that $|B'| = q$ and $B'$ contains all the vertices from $y_1$ (and possibly from other vertices of $Y$). Then, by definition, $S\mathcal{S}etminus(B\cup B')$ contains only PMUs from vertices of $Y\mathcal{S}etminus\{y_1\}$ and $|S\mathcal{S}etminus(B\cup B')| = q(n-1)+n - 2$. Thus, by Remark \ref{rem:rev_pigeonhole}, there exists a vertex, $y_2\in Y$, such that, \begin{align*} \pmuss{S\mathcal{S}etminus(B\cup B')}{y_2}&\leq \left\lfloor \frac{q(n-1)+n-2}{|Y|-1}\right\rfloor\\ &\leq \left\lfloor\frac{q(n-1)+n-2}{n-1}\right\rfloor \\ &\leq q. \end{align*} Let $B''\mathcal{S}ubseteq S\mathcal{S}etminus (B\cup B')$ such that $|B''|= q$ and $B$ contains all of the PMUs from $y_2$ (and possibly from other vertices of $Y$). Then by definition, $S\mathcal{S}etminus (B\cup B' \cup B'')$ has only PMUs from vertices of $Y\mathcal{S}etminus\{y_1,y_2\}$. Note that $|B\cup B' \cup B''|= k$ and by Observation \ref{obs:bip_pow_set}, $S\mathcal{S}etminus (B\cup B' \cup B'')$ is not a power dominating set. Therefore, no such $S$ is a $k$-robust power dominating set, a contradiction for the second case. 
\end{proof} Note that the bound found in Theorem \ref{thm:bipartite_lower} is not always tight. For example, we observe that $\gpkother{4}{K_{4,6}} > 7$. To see this, suppose for contradiction that there exists a $4$-robust power dominating set $S$ such that $|S| = 7$. Let the parts of $K_{4,6}$ be $X$ and $Y$, leaving their sizes unspecified for the moment. Then one side, say $X$, has at most $3$ PMUs. If $Y$ has $6$ vertices, then removing all PMUs from $X$ (at most $3$ of them, together with PMUs from $Y$ if needed to remove $4$ in total) leaves at most $4$ PMUs on $Y$, and so $S$ is not a $4$-rPDS. Thus, $Y$ has $4$ vertices and $X$ has 6 vertices. We then consider removing $4$ PMUs from $Y$. If all of the $3$ remaining PMUs are on $X$, then $S$ is not a $4$-rPDS. Thus, the remaining PMUs must include some on $Y$ as well as some on $X$, and so $X$ has at most $2$ PMUs. Thus, $Y$ has either $5$, $6$, or $7$ PMUs. In any case, we can remove 2 PMUs so that there are only $5$ PMUs on $Y$ and none on $X$. However, we can still remove 2 PMUs, chosen appropriately, which leaves us with at most 2 vertices of $Y$ that contain PMUs and no vertices of $X$ containing PMUs, which is not a power dominating set of $K_{4,6}$. Therefore, we see that $\gpkother{4}{K_{4,6}} > 7$. Next, we provide an upper bound for complete bipartite graphs. \begin{thm}\label{thm:bipartite_upper} Let $m\geq n\geq 4$ and $k\geq 1$. Then, \[\gpk{K_{n,m}} \leq \begin{cases} 2(k+1)+(m-n-4)\left\lfloor \frac{k}{n+2}\right\rfloor & k \equiv i \Mod{n+2},\, \, 0\leq i\leq n-4\\ k+m-1 +(m-2)\left\lfloor\frac{k}{n+2}\right\rfloor & \text{otherwise} \end{cases}\] \end{thm} \begin{proof} Suppose $V(K_{n,m}) = X\cup Y$ are the parts of $K_{n,m}$, such that $X = \{x_1,\ldots, x_n\}$ and $Y = \{y_1,\ldots, y_m\}$. Let $k= q(n+2)+i$ for $q\geq 0$ and $0\leq i\leq n+1$. Observe that $i = k-q(n+2) = k - (n+2)\left\lfloor\frac{k}{n+2} \right\rfloor$. For the first case, suppose that $0\leq i\leq n-4$. To show that $\gpk{K_{n,m}} \leq 2(k+1) + (m-n-4)\left\lfloor\frac{k}{n+2}\right\rfloor$, it suffices to construct a $k$-robust power dominating set of this size. First, we observe that \begin{align*} 2(k+1) + (m-n-4)\left\lfloor\frac{k}{n+2}\right\rfloor &= k + 2 + k - (n+2)\left\lfloor\frac{k}{n+2}\right\rfloor +(m-2)\left\lfloor\frac{k}{n+2}\right\rfloor \\ &= k+2 + i+ (m-2)\left\lfloor\frac{k}{n+2}\right\rfloor \\ &= q(n+2) +i + 2 + i + q(m-2)\\ &= q(n+2)+2i +2+q(m-2)\\ &= q(n+m) +2(i+1). \end{align*} Let \[S = \{x^{q+1}_1, \ldots, x^{q+1}_{i+1},x^{q}_{i+2},\ldots,x^{q}_{n}, y^{q+1}_1, \ldots, y^{q+1}_{i+1},y^{q}_{i+2},\ldots,y^{q}_{m}\}.\] Observe that $|S| = q(n+m) +2(i+1)$. We will now show that $S$ is a $k$-robust power dominating set. Let $B\subseteq S$ such that $|B| = k = q(n+2)+i$. We have two cases: \begin{enumerate} \item $S\setminus B$ contains vertices from both $X$ and $Y$. By Observation \ref{obs:bip_pow_set}, $S\setminus B$ is a power dominating set. \item $S\setminus B$ contains vertices only from $X$ (or only from $Y$). For generality, call this side $Z$ with size $z$. Observe that $|S\setminus B| = q(n+m) +2(i+1) - q(n+2) -i = q(m-2)+i+2$. By Observation \ref{obs:bip_pow_set}, for $S\setminus B$ to be a power dominating set, it must have PMUs on at least $z-1$ vertices. Assume for contradiction that at most $z-2$ vertices of $Z$ have PMUs; then, \begin{align*} |S\setminus B|&\leq (q+1)(i+1) +q(z-3-i)\\ &\leq (q+1)(i+1) +q(m-3-i) \\ &= qi+i+q+1+qm-3q-qi\\ &=q(m-2)+i +1 \\ &< |S\setminus B|, \end{align*} a contradiction.
\end{enumerate} Thus $S\setminus B$ is a power dominating set, and since $B$ was arbitrary, $S$ is a $k$-robust power dominating set for $K_{n,m}$. For the second case, suppose that $n-3\leq i\leq n+1$. We now show that $\gpk{K_{n,m}}\leq k+m-1 +(m-2)\left\lfloor \frac{k}{n+2}\right\rfloor$ by constructing a $k$-robust power dominating set of size $k+m-1 +(m-2)\left\lfloor \frac{k}{n+2}\right\rfloor$. By Proposition \ref{prop:basicbounds}, we need only provide the construction for $i = n+1$. Then, \begin{align*} k+m-1 +(m-2)\left\lfloor \frac{k}{n+2}\right\rfloor &= q(n+2)+n+1 +m-1 + q(m-2)\\ &= q(n+m) +(n+m)\\ &= (q+1)(n+m). \end{align*} Let $S = \{x^{q+1}_1, \ldots, x^{q+1}_n, y^{q+1}_1, \ldots, y^{q+1}_m\}$. Observe that $|S| = (q+1)(n+m)$. We will now show that $S$ is a $k$-robust power dominating set. Let $B\subseteq S$ such that $|B| = k = q(n+2)+n+1$. We have two cases: \begin{enumerate} \item $S\setminus B$ contains vertices from both $X$ and $Y$. By Observation \ref{obs:bip_pow_set}, $S\setminus B$ is a power dominating set. \item $S\setminus B$ contains vertices only from $X$ or only from $Y$. For generality, call this side $Z$ with size $z$. Observe that $|S\setminus B| = (q+1)(n+m) -q(n+2) -n -1 = qm +m -2q -1 = q(m-2) +m -1$. By Observation \ref{obs:bip_pow_set}, for $S\setminus B$ to be a power dominating set, it must have PMUs on at least $z-1$ vertices. Assume for contradiction that at most $z-2$ vertices of $Z$ have PMUs; then, \begin{align*} |S\setminus B| &\leq (q+1)(z-2) \\ &\leq (q+1)(m-2) \\ &= qm +m-2q-2 \\ &< qm+m-2q -1\\ &= |S\setminus B|, \end{align*} a contradiction. \end{enumerate} Thus $S\setminus B$ is a power dominating set, and since $B$ was arbitrary, $S$ is a $k$-robust power dominating set for $K_{n,m}$. \end{proof} We can combine Theorem \ref{thm:bipartite_lower} and Theorem \ref{thm:bipartite_upper} to find a complete characterization for balanced complete bipartite graphs, as shown in the following corollary. Moreover, the proof of Theorem \ref{thm:bipartite_upper} yields the construction of $k$-robust power dominating sets for $K_{n,n}$. \begin{cor}\label{bipartite_balanced} Let $n\geq 4$ and $k\geq 1$. Then, \[\gpk{K_{n,n}} = \begin{cases} 2(k+1)- 4\left\lfloor \frac{k}{n+2} \right\rfloor & k \equiv i \Mod{n+2},\, \, 0\leq i\leq n-3\\ k+n-1 +(n-2)\left\lfloor \frac{k}{n+2}\right\rfloor & \text{otherwise.} \end{cases}\] \end{cor} \section{Trees}\label{sec:trees} In this section, we establish the $k$-robust power domination number for trees. A \emph{tree} is an acyclic connected graph. A \emph{spider} is a tree with at most one vertex of degree 3 or more. A \emph{spider cover} of a tree $T$ is a partition of $V(T)$, say $\{V_1,\ldots, V_\ell\}$, such that $T[V_i]$ is a spider for all $i$. The \emph{spider number} of a tree $T$, denoted by $\sp{T}$, is the minimum number of parts in a spider cover. A \emph{rooted tree} is a tree in which one vertex is designated as the \textit{root}. Suppose two vertices $u$ and $v$ are in a rooted tree with root $r$. If $u$ is on the $r-v$ path, we say that $v$ is a \textit{descendant} of $u$ and $u$ is an \textit{ancestor} of $v$. If, in addition, $u$ and $v$ are neighbors, then $v$ is a \emph{child} of $u$ and $u$ is the \emph{parent} of $v$.
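For instance, a path and a star each contain at most one vertex of degree $3$ or more, so each is a spider and has spider number $1$; on the other hand, the double star obtained by joining the centers of two copies of $K_{1,3}$ by an edge has two vertices of degree at least $3$, so it is not a spider, while the two original stars form a spider cover, giving spider number $2$.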
\begin{thm}{\rm \cite[Theorem 12]{hhhh02}}\label{thm:treeshhhh02} For any tree $T$, $\gp{T} = \sp{T}$. \end{thm} { \begin{thm}\label{thm:trees} For any tree $T$, $\gpk{T} = (k+1)\sp{T}$. \end{thm} \begin{proof} By Proposition \ref{prop:basicbounds}, we have that $\gpk{T} \leq (k+1)\gp{T}$ for any tree $T$, and by Theorem \ref{thm:treeshhhh02}, $\sp{T} = \gp{T}$. Therefore, $\gpk{T} \leq (k+1)\sp{T}$. To prove that $\gpk{T} \geq (k+1)\sp{T}$, we proceed by contradiction. That is, assume that $\gpk{T}<(k+1)\sp{T}$. Since $\sp{T} = \gp{T}$, any $\ddot{\gamma}_P^k\text{-set}$ must contain at least $\sp{T}$ distinct vertices, and by the pigeonhole principle there exists at least one vertex in the set with at most $k$ PMUs. Let $S$ be a $\ddot{\gamma}_P^k\text{-set}$ of $T$ such that $\deg{v}{} \geq 3$ for each $v\in S$, and choose $S$ to have the smallest number of vertices $x$ with $\pmus{x}\leq k$. Root $T$ at a vertex $r\in S$. Let $A = \{v\in S : 0<\pmus{v} \leq k \}$. Let $v\in A$ with the property that $d(v,r)= \max\{d(u,r): u \in A\}$. Observe that for any descendant $u$ of $v$, either $\pmus{u} \geq k+1$ or $\pmus{u} = 0$. Let $w$ be the nearest ancestor of $v$ such that $\deg{w}{} \geq 3$. Let $S' = (S\cup\{w^{\pmuss{S}{v}}\}) \setminus \{v^{\pmuss{S}{v}}\}$, so that for each $x\in S'$ such that $x\neq w$, $\pmuss{S'}{x} = \pmuss{S}{x}$ and $\pmuss{S'}{w} = \pmuss{S}{v}+ \pmuss{S}{w}$. That is, $S'$ is the same as $S$ with the exception that the PMUs on $v$ have been moved to $w$. We will show that $S'$ is a $k$-robust power dominating set. First observe that since $S\setminus \{v^{\pmuss{S}{v}}\}$ is a power dominating set, $S'$ is also a power dominating set. Let $B'\subseteq S'$ such that $|B'|=k$. We have two cases: $\pmuss{B'}{w}\leq \pmuss{S}{v}$, or $\pmuss{B'}{w}> \pmuss{S}{v}$. First, assume that $\pmuss{B'}{w}\leq \pmuss{S}{v}$. Let $$B = (B'\setminus \{ w^{\pmuss{B'}{w}} \}) \cup \{ v^{\pmuss{B'}{w}}\}.$$ Note that $S\setminus B$ removes the same PMUs as $S'\setminus B'$, with the exception that the PMUs removed from $w$ in $S'\setminus B'$ are instead removed from $v$ in $S\setminus B$. Since $\pmuss{B'}{w}\leq \pmuss{S}{v}$, we see that $\pmuss{S\setminus B}{w}=0$. Now, if $w \notin S'\setminus B'$, then $S\setminus B = S'\setminus B'$. Thus $S'\setminus B'$ is a power dominating set. Next consider the case where $w \in S'\setminus B'$. The ancestor $w$, which is observed by virtue of being in $S'\setminus B'$, will cause the observation of $v$, and then $v$ can perform a zero forcing step to observe one descendant. Since $\deg{v}{} \geq 3$, all the descendants of $v$, except for possibly one, must be observed by forces from descendants of $v$ in $S'\setminus B'$. To see this, recall that $S\setminus \{v^{\pmuss{S}{v}}\}$ is a power dominating set. Since $v$ is not in the power dominating set, $v$ can only perform the zero forcing step. Since the only path from non-descendants of $v$ to descendants of $v$ is through $v$, all descendants, except for possibly one descendant, must be observed by descendants of $v$. Since $\pmuss{S\setminus B}{x} =\pmuss{S'\setminus B'}{x}$ whenever $x$ is a descendant of $v$, $S'\setminus B'$ will force all the descendants of $v$.
Thus, any vertices that rely on $\pmuss{S\setminus B}{v}>0$ to be observed in $S\setminus B$ will have been observed by $w$ or by the descendants of $v$. The remaining vertices will be observed by the same vertices as they would have been under $S\setminus B$. Therefore, $S'\setminus B'$ is a power dominating set. For the second case, assume that $\pmuss{B'}{w}> \pmuss{S}{v}$. Let $$B = (B'\setminus \{w^{\pmuss{S}{v}} \})\cup \{v^{\pmuss{S}{v}} \}.$$ Note that $S\setminus B$ removes the same PMUs as $S'\setminus B'$, with the exception that any PMUs removed from $w$ are first removed from $v$, and then the remaining ones are removed from $w$. Thus, $v\notin S\setminus B$, $S'\setminus B' = S\setminus B$, and therefore $S'\setminus B'$ is a power dominating set. In either case, $S'\setminus B'$ is a power dominating set. If $w\in S$, then we have found $S'$ with fewer vertices $x\in S'$ for which $\pmus{x} \leq k$, which is a contradiction. If $w\not\in S$, we may repeat the process. Since $r\in S$, we know that eventually the process terminates with the same contradiction. \end{proof} } \section{Concluding remarks}\label{sec:concrem} PMU-defect-robust power domination allows us to place multiple PMUs at the same location and consider the consequences if some of these PMUs fail. There are many questions left to examine in future work. Is there an improvement to the lower bound given in Proposition \ref{prop:basicbounds} for $\gp{G}>1$? As $K_{3,3}$ demonstrates in Theorem \ref{thm:k33}, it seems likely that there is a better lower bound based on the number of vertices and the power domination number that utilizes the pigeonhole principle to show that the lower bound must increase at certain values of $k$. We have begun the study of $k$-robust power domination for certain families of graphs, but work remains to be done. We have determined the $k$-robust power domination number for trees. For complete bipartite graphs, the case of $\gpk{K_{3,b}}$ for $4\leq b <\left\lfloor \frac{k}{3} \right\rfloor +3$ remains open. The question of $\gpk{K_{a,b}}$ for unbalanced complete bipartite graphs when $a,b\geq 4$ is also open, and preliminary observations indicate an extensive case analysis for this problem. \section*{Acknowledgments} B. Bjorkman was supported by the US Department of Defense's Science, Mathematics and Research for Transformation (SMART) Scholarship for Service Program. E. Conrad was supported by the Autonomy Technology Research (ATR) Center Summer Program. \end{document}
\begin{document} \author[J.V. da Silva, J.D. Rossi and A.M. Salort]{Jo\~{a}o Vitor da Silva, Julio D. Rossi and Ariel M. Salort} \address{Departamento de Matem\'atica, FCEyN - Universidad de Buenos Aires and \break \indent IMAS - CONICET \break \indent Ciudad Universitaria, Pabell\'on I (1428) Av. Cantilo s/n. \break \indent Buenos Aires, Argentina.} \email{[email protected], [email protected], [email protected]} \urladdr{http://mate.dm.uba.ar/~jrossi, http://mate.dm.uba.ar/~asalort} \subjclass[2010]{35B27, 35J60, 35J70} \keywords{$\infty-$eigenvalues estimates, $\infty-$eigenvalue problem, approximation of domains} \begin{abstract} In this note we analyze how perturbations of a ball $\mathfrak{B}_r \subset \mathbb R^n$ behave in terms of their first (non-trivial) Neumann and Dirichlet $\infty-$eigenvalues when a volume constraint $\Leb(\Omega) = \Leb(\mathfrak{B}_r)$ is imposed. Our main result states that $\Omega$ is uniformly close to a ball when it has first Neumann and Dirichlet eigenvalues close to the ones for the ball of the same volume $\mathfrak{B}_r$. In fact, we show that, if $$ |\lambda_{1,\infty}^D(\Omega) - \lambda_{1,\infty}^D(\mathfrak{B}_r)| = \delta_1 \quad \text{and} \quad |\lambda_{1,\infty}^N(\Omega) - \lambda_{1,\infty}^N(\mathfrak{B}_r)| = \delta_2, $$ then there are two balls such that $$\mathfrak{B}_{\frac{r}{\delta_1 r+1}} \subset \Omega \subset \mathfrak{B}_{\frac{r+\delta_2 r}{1-\delta_2 r}}.$$ In addition, we also obtain a result concerning the stability of the Dirichlet $\infty$-eigenfunctions. \end{abstract} \maketitle \section{Introduction}\label{Intro} Let $\Omega \subset \mathbb R^n$ be a bounded domain (connected open subset) with smooth boundary, let $1<p< \infty$, and let $\Delta_p u \mathrel{\mathop:}= \div(|\nabla u|^{p-2}\nabla u)$ denote the standard $p$-Laplacian operator. It is well known (cf. \cite{Lindq90}) that the first eigenvalue (referred to as \textit{the principal frequency} in physical models) of the $p$-Laplacian Dirichlet eigenvalue problem \begin{equation}\label{eq.p} \left\{ \begin{array}{rclcl} -\Delta_p u & = & \lambda_{1, p}^D(\Omega)|u|^{p-2} u & \text{in} & \Omega \\ u & = & 0 & \text{on} & \partial \Omega \end{array} \right. \end{equation} can be characterized variationally as the minimizer of the following (normalized) problem: \begin{equation}\tag{{\bf \text{p-Dirichlet}}}\label{1er.p} \displaystyle \lambda_{1, p}^D(\Omega) \mathrel{\mathop:}= \inf_{u \in W^{1,p}_0 (\Omega) \setminus \{0\}} \left\{ \displaystyle \int_{\Omega} |\nabla u|^p dx : \int_{\Omega} |u|^pdx = 1\right\}. \end{equation} In the theory of shape optimization and nonlinear eigenvalue problems, obtaining (sharp) estimates for the eigenvalues in terms of geometric quantities of the domain (e.g.\ measure, perimeter, diameter, among others) plays a fundamental role due to several applications of these problems in pure and applied sciences. We recall that the explicit value of \eqref{1er.p} is known only for some specific values of $p$ or for very particular domains $\Omega$. Notice that upper bounds for $\lambda_{1, p}^D(\Omega)$ are usually obtained by selecting particular test functions in \eqref{1er.p}. Nevertheless, lower bounds are a more challenging task. In this direction we have the remarkable \textit{Faber-Krahn inequality}: \textit{Among all domains of prescribed volume the ball minimizes \eqref{1er.p}}.
More precisely, \begin{equation} \label{chi} \lambda_{1, p}^D(\Omega) \geq \lambda_{1, p}^D(\mathfrak{B}), \end{equation} where $\mathfrak{B}$ is the $n$-dimensional ball such that $\Leb(\Omega) = \Leb(\mathfrak{B})$ (throughout this paper $\Leb (\Omega)$ will denote the Lebesgue measure of $\Omega$, which is assumed to be fixed). Using the isoperimetric or the isodiametric inequality, similar lower bounds for \eqref{1er.p} in terms of the perimeter (resp.\ diameter) of $\Omega$ are also available (cf. \cite{Bhat} and \cite[page 224]{Lindq92}, and the references therein). Recently, stability estimates for certain geometric inequalities were established in \cite{FMP2}, thereby providing an improved version of \eqref{chi} by adding a suitable remainder term, i.e., $$ \lambda_{1, p}^D(\Omega) \geq \lambda_{1, p}^D(\mathfrak{B})\left(1+ \gamma_{p, n} (\mathcal{S}(\Omega))^{2+p}\right), $$ where $\mathcal{S}(\Omega)$ is the so-called \textit{Fraenkel asymmetry} of $\Omega$, which is precisely defined as $$ \mathcal{S}(\Omega) \mathrel{\mathop:}= \inf_{x_0 \in \mathbb R^n} \left\{ \frac{\Leb (\Omega \bigtriangleup \mathfrak{B}_{r}(x_0))}{\Leb(\Omega)} \, :\Leb(\mathfrak{B}_{r}(x_0)) = \Leb(\Omega) \right\}, $$ and $\gamma_{p, n}$ is a constant. Observe that $\mathcal{S}$ measures the distance of a set $\Omega$ from being a ball. For such quantitative estimates and further related topics we quote \cite{Bp}, \cite{Cianci}, \cite{Fusco} and references therein. Our main goal here is to find stability results for the limit case $p=\infty$. First, we introduce what is known for the limit as $p\to \infty$ in the eigenvalue problem for the $p$-Laplacian. When one takes the limit as $ p\to \infty$ in the minimization problem \eqref{1er.p}, one obtains \begin{equation} \tag{{\bf \text{$\infty$-Dirichlet}}}\label{lam.infD} \lambda_{1,\infty}^D(\Omega) \mathrel{\mathop:}= \lim_{p\to\infty} \sqrt[p]{\lambda_{1, p}^D(\Omega)} =\inf_{u \in W^{1, \infty}_0(\Omega)\setminus\{0\}} \|\nabla u\|_{L^{\infty}(\Omega)}>0, \end{equation} see \cite{JLM}. Concerning the limit equation, also in \cite{JLM} it is proved that any family of normalized eigenfunctions $\{u_p\}_{p>1}$ to \eqref{1er.p} converges (up to a subsequence) locally uniformly to $u_\infty \in W^{1,\infty}_0 (\Omega)$, a minimizer of \eqref{lam.infD} with $\|u_\infty\|_{L^{\infty}(\Omega)}=1$. Moreover, the pair $(u_\infty, \lambda_{1, \infty}^D(\Omega))$ is a nontrivial solution to \begin{equation}\label{eq.infty.p} \left\{ \begin{array}{rclcl} \min\Big\{-\Delta_\infty v_\infty, |\nabla v_\infty|-\lambda_{1,\infty}^D(\Omega)v_\infty\Big\} & = & 0 & \text{in} & \Omega \\ v_\infty & = & 0 & \text{on} & \partial \Omega. \end{array} \right. \end{equation} Solutions to \eqref{eq.infty.p} must be understood in the viscosity sense (cf. \cite{CIL} for a survey) and $ \Delta_\infty u(x) \mathrel{\mathop:}= \nabla u(x)^T D^2u(x) \cdot \nabla u(x) $ is the well-known \textit{$\infty-$Laplace operator}. In addition, \cite{JLM} also gives an interesting and useful geometric characterization of \eqref{lam.infD}: \begin{equation} \label{lam1} \lambda_{1,\infty}^D(\Omega) = \Big(\max_{x \in \Omega} \dist(x, \partial \Omega)\Big)^{-1}. \end{equation} This means that the ``principal frequency'' for the $\infty$-eigenvalue problem can be detected from the geometry of the domain: it is precisely the reciprocal of the radius $\mathfrak{r}_\Omega>0$ of the largest ball inscribed in $\Omega$.
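As an immediate illustration of \eqref{lam1}, recorded here only for later convenience, the largest ball inscribed in $\mathfrak{B}_r$ is $\mathfrak{B}_r$ itself, so $$ \lambda_{1,\infty}^D(\mathfrak{B}_r)=\frac{1}{r}; $$ more generally, if $\Omega$ contains a ball of radius $\rho$, then $\lambda_{1,\infty}^D(\Omega)\leq \frac{1}{\rho}$.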
For more references concerning the first eigenvalue \eqref{eq.infty.p} we refer to \cite{KH}, \cite{NRSanAS} and \cite{Yu}. Now, let us turn our attention to Neumann boundary conditions and consider the following eigenvalue problem: \begin{equation}\label{eq.p.n} \left\{ \begin{array}{rclcl} -\Delta_p u & = & \lambda_{1, p}^N(\Omega)|u|^{p-2} u & \text{in} & \Omega \\ \displaystyle |\nabla u|^{p-2}\tfrac{\partial u}{\partial \nu} & = & 0 & \text{on} & \partial \Omega. \end{array} \right. \end{equation} As before, we stress that the first non-zero eigenvalue of \eqref{eq.p.n} can also be characterized variationally as the minimizer of the following normalized problem: \begin{equation}\tag{{\bf \text{p-Neumann}}}\label{1er.p.n} \displaystyle \lambda_{1, p}^N(\Omega) \mathrel{\mathop:}= \inf_{u \in W^{1, p}(\Omega)} \left\{ \displaystyle \int_{\Omega} |\nabla u|^pdx: \int_{\Omega}| u|^p dx = 1\,\, \text{and}\,\,\int_\Omega |u|^{p-2}udx =0\right\}. \end{equation} The celebrated \textit{Payne-Weinberger inequality} provides a lower bound (on any convex domain $\Omega \subset \mathbb R^n$) for the first (non-trivial) Neumann $p$-eigenvalue (cf. \cite{ENT} and \cite{Valt}) \begin{equation} \label{ggg} \lambda_{1, p}^N(\Omega) \geq (p-1)\left(\frac{2\pi}{p\, \diam(\Omega)\, \sin(\frac{\pi}{p})}\right)^{p}. \end{equation} For a stability estimate for this problem with $p=2$ we refer to \cite{Bp}. When $p\to \infty$, the minimization problem \eqref{1er.p.n} becomes \begin{equation}\tag{{\bf \text{$\infty$-Neumann}}}\label{lam.inf} \displaystyle \lambda_{1,\infty}^N(\Omega) \mathrel{\mathop:}= \lim_{p\to\infty} \sqrt[p]{\lambda_{1, p}^N(\Omega)} = \inf_{ u\in W^{1,\infty}(\Omega) \atop{\max\limits_{\Omega} u = -\min\limits_{\Omega} u = 1}} \|\nabla u\|_{L^\infty(\Omega)}, \end{equation} see \cite{EKNT} and \cite{RosSaint}. Concerning the limit equation, also in \cite{EKNT} and \cite{RosSaint}, it is proved that any family of normalized eigenfunctions $\{u_p\}_{p>1}$ to \eqref{1er.p.n} converges (up to a subsequence) locally uniformly to $u_\infty \in W^{1,\infty} (\Omega)$ with $\|u_\infty\|_{L^{\infty}(\Omega)}=1$. Moreover, the pair $(u_\infty, \lambda_{1, \infty}^N(\Omega))$ is a nontrivial solution to \begin{equation}\label{eq.infty.p.n} \left\{ \begin{array}{rclcl} \min\Big\{-\Delta_\infty v_\infty, |\nabla v_\infty|-\lambda_{1,\infty}^N(\Omega)v_\infty\Big\} & = & 0 & \text{in} & \Omega\cap \{v_\infty>0\} \\ \max\Big\{-\Delta_\infty v_\infty, -|\nabla v_\infty|-\lambda_{1,\infty}^N(\Omega)v_\infty\Big\} & = & 0 & \text{in} & \Omega\cap \{v_\infty<0\} \\ -\Delta_\infty v_\infty & = & 0 & \text{in} & \Omega \cap \{v_\infty=0\}\\ \displaystyle \frac{\partial v_{\infty}}{\partial \nu}& = & 0 & \text{on} & \partial \Omega. \end{array} \right. \end{equation} In addition, we have the following geometrical characterization for $\lambda_{1,\infty}^N(\Omega)$: \begin{equation} \label{lam1.n} \lambda_{1,\infty}^N(\Omega) = \frac{2}{\diam(\Omega)}, \end{equation} where the intrinsic diameter of $\Omega$ is defined as $$ \diam(\Omega) \mathrel{\mathop:}= \max_{\bar\Omega \times \bar\Omega} d_{\Omega}(x,y) = \max_{\partial \Omega \times \partial \Omega} d_{\Omega}(x,y), $$ where $d_{\Omega}(x,y)$ is the geodesic distance given by $ d_{\Omega}(x,y)=\inf_{\gamma} \mathrm{length}(\gamma) $, the infimum being taken over all Lipschitz curves in $\bar\Omega$ connecting $x$ and $y$.
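Likewise, since $\mathfrak{B}_r$ is convex, its intrinsic and Euclidean diameters coincide and equal $2r$, so \eqref{lam1.n} gives $$ \lambda_{1,\infty}^N(\mathfrak{B}_r)=\frac{2}{2r}=\frac{1}{r}=\lambda_{1,\infty}^D(\mathfrak{B}_r), $$ a simple computation that is consistent with the fact, recalled below, that $\lambda_{1,\infty}^N(\Omega)\leq\lambda_{1,\infty}^D(\Omega)$ for convex $\Omega$ with equality exactly for balls.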
We remark that in the limit case $p=\infty$, the geometrical characterization \eqref{lam1.n} of \eqref{lam.inf} yields several interesting consequences: \begin{itemize} \item[\checkmark] If $\Leb(\Omega) = \Leb(\mathfrak{B})$, $\mathfrak{B}$ being a ball, then $\lambda_{1,\infty}^N(\Omega) \leq \lambda_{1,\infty}^N(\mathfrak{B})$, which establishes a \textit{Szeg\"{o}-Weinberger type inequality}: among all domains of prescribed volume the ball maximizes \eqref{lam.inf}. \item[\checkmark] $\lambda_{1,\infty}^N(\Omega) \leq \lambda_{1,\infty}^D(\Omega)$ for any convex $\Omega$, with equality if and only if $\Omega$ is a ball. \item[\checkmark] The Payne-Weinberger inequality, \eqref{ggg}, becomes an equality when $p = \infty$. \end{itemize} Taking into account the previous overview, we arrive at our main result, which establishes the stability of the ball with respect to small perturbations of its first Dirichlet and Neumann $\infty-$eigenvalues. More precisely, if a domain $\Omega\subset \mathbb R^n$ has Dirichlet and Neumann $\infty-$eigenvalues close enough to those of the ball $\mathfrak{B}_r$ of the same Lebesgue measure, then $\Omega$ is uniformly ``almost'' ball-shaped. \begin{thm}\label{Mainthm} Let $\Omega$ be an open domain satisfying $\Leb(\Omega)=\Leb(\mathfrak{B}_r)$. If for some $\delta_i>0$ ($i=1, 2$) small enough it holds that $$ |\lambda_{1,\infty}^D(\Omega) - \lambda_{1,\infty}^D(\mathfrak{B}_r)| = \delta_1 \quad \text{and} \quad |\lambda_{1,\infty}^N(\Omega) - \lambda_{1,\infty}^N(\mathfrak{B}_r)| = \delta_2, $$ then there are two balls such that $$\mathfrak{B}_{\frac{r}{\delta_1 r+1}} \subset \Omega \subset \mathfrak{B}_{\frac{r+\delta_2 r}{1-\delta_2 r}}.$$ \end{thm} The previous theorem implies the following convergence result. \begin{thm}\label{Mainthm2} Let $\{\Omega_k\}_{k \in \mathbb{N}}$ be a family of uniformly bounded domains satisfying $\Leb(\Omega_k)=\Leb(\mathfrak{B}_r)$. If $$ |\lambda_{1,\infty}^D(\Omega_k) - \lambda_{1,\infty}^D(\mathfrak{B}_r)| = \text{o}(1) \quad \text{and} \quad |\lambda_{1,\infty}^N(\Omega_k) - \lambda_{1,\infty}^N(\mathfrak{B}_r)| = \text{o}(1) \quad \text{as } k \to \infty, $$ then $$\Omega_k \to \mathfrak{B}_r$$ in the sense that the Hausdorff distance between $\Omega_k$ and a ball $\mathfrak{B}_r$ goes to zero, i.e., $$ d_\mathcal{H} (\Omega_k, \mathfrak{B}_r) := \max\Big\{\,\sup _{{x\in \Omega_k}}\inf _{{y\in \mathfrak{B}_r}}d(x,y),\,\sup _{{y\in \mathfrak{B}_r}}\inf _{{x\in \Omega_k}}d(x,y)\, \Big\} \to 0. $$ \end{thm} Note that our results imply that \begin{equation} \label{ecccc} \max\left\{\Leb\left(\Omega\bigtriangleup \mathfrak{B}_{\frac{r}{\delta_1 r+1}}\right), \Leb\left(\Omega\bigtriangleup \mathfrak{B}_{\frac{r+\delta_2 r}{1-\delta_2 r}}\right)\right\}\leq \mathfrak{C}(n, \delta_i, r)r^n, \end{equation} where $\mathfrak{C}(n, \delta_i, r)=\omega_n \max\{(\delta_1r+1)^n-1,(n-1)\delta_2\}\to 0$ as $\delta_i\to 0$. Hence, we can control the Fraenkel asymmetry of the set, $\mathcal{S}(\Omega)$. But our results give much more, since we have a sort of uniform control on how far the set is from being a ball (for instance, we have convergence in Hausdorff distance in Theorem \ref{Mainthm2}). Another important question in this theory concerns how the corresponding $\infty-$ground states (solutions to \eqref{eq.infty.p}) behave in relation to perturbations of the $\infty-$eigenvalues of the ball.
The next result provides an answer to this issue, showing that Dirichlet $\infty-$eigenfunctions are uniformly close to a cone when the first Dirichlet and Neumann $\infty-$eigenvalues are close to those for the ball. Note that, in general, the $\infty-$eigenvalue problem \eqref{eq.infty.p} may have multiple solutions (the first eigenvalue may not be simple), see \cite{HSY} and \cite{Yu}. \begin{thm} \label{teo.autofunc.intro} Let $\Omega$ be an open domain satisfying $\Leb(\Omega)=\Leb(\mathfrak{B}_r)$. Given $\varepsilon >0$ there are $\delta_i(\varepsilon)>0$ ($i=1, 2$) small enough such that: if $$ |\lambda_{1,\infty}^D(\Omega) - \lambda_{1,\infty}^D(\mathfrak{B}_r)| < \delta_1 \quad \text{and} \quad |\lambda_{1,\infty}^N(\Omega) - \lambda_{1,\infty}^N(\mathfrak{B}_r)| < \delta_2, $$ then any normalized $\infty-$ground state $u$ of \eqref{eq.infty.p} in $\Omega$ satisfies $$ |u(x)-v_{\infty}(x)| < \varepsilon \quad \text{in} \quad \Omega \cap \mathfrak{B}_r, $$ where $$v_{\infty} (x)= 1 - \frac{|x|}{r}$$ is the normalized $\infty-$ground state to \eqref{eq.infty.p} in $\mathfrak{B}_r$. \end{thm} Theorem \ref{teo.autofunc.intro} can be rewritten as follows: \begin{corollary}\label{CorConv} Let $\{u_k\}_{k \in \mathbb{N}}$ be a family of normalized solutions to \eqref{eq.infty.p} in $\Omega_k$ such that $$ |\lambda_{1,\infty}^D(\Omega_k) - \lambda_{1,\infty}^D(\mathfrak{B}_r)| = \text{o}(1) \quad \text{and} \quad |\lambda_{1,\infty}^N(\Omega_k) - \lambda_{1,\infty}^N(\mathfrak{B}_r)| = \text{o}(1)\quad \text{as } k \to \infty. $$ Then, $$ u_k \to v_{\infty} \quad \text{locally uniformly} \quad \text{in} \quad \mathfrak{B}_r, $$ where $$v_{\infty} (x)= 1 - \frac{|x|}{r}$$ is the normalized $\infty-$ground state to \eqref{eq.infty.p} in $\mathfrak{B}_r$. \end{corollary} Our approach can also be applied to other classes of operators with $p$-Laplacian type structure. We can deal with $p$-Laplacian type problems involving an \textit{anisotropic $p$-Laplacian operator} $$ \displaystyle -\mathcal{Q}_p u \mathrel{\mathop:}= -\div(\mathbb{F}^{p-1}(\nabla u)\mathbb{F}_{\xi}(\nabla u)), $$ where $\mathbb{F}$ is an appropriate (smooth) norm on $\mathbb R^n$ and $1<p< \infty$. The necessary tools for studying the anisotropic Dirichlet eigenvalue problem, as well as its limit as $p \to \infty$, can be found in \cite{BKJ}. Here, to obtain results similar to ours, one has to replace Euclidean balls with balls in the norm $\mathbb{F}$. The paper is organized as follows: in Section \ref{sect-main} we prove our main stability results, including the behavior of the corresponding $\infty-$eigenfunctions, and in Section \ref{sect-examples} we collect several examples that illustrate our results. \section{Proof of the Main Theorems} \label{sect-main} Before proving our main result we introduce some notation which will be used throughout this section. Given a bounded domain $\Omega\subset \mathbb R^n$ and a ball $\mathfrak{B}_r\subset\mathbb R^n$ of radius $r>0$ we denote by $\lambda_{1,\infty}^D(\Omega)$ and $\lambda_{1,\infty}^D(\mathfrak{B}_r)$ the first Dirichlet eigenvalues \eqref{lam1} in $\Omega$ and in $\mathfrak{B}_r$, respectively; analogously, $\lambda_{1,\infty}^N(\Omega)$ and $\lambda_{1,\infty}^N(\mathfrak{B}_r)$ stand for the first nontrivial Neumann eigenvalues \eqref{lam1.n} in $\Omega$ and in $\mathfrak{B}_r$. We introduce the following class of sets which will play an important role in our approach.
For non-negative constants $\delta_1$ and $\delta_2$ we define the class: $$ \Xi_{\delta_1, \delta_2}(\mathfrak{B}_r) \mathrel{\mathop:}= \left\{ \begin{array}{c} \Omega \subset \mathbb R^n \\ \text{bounded domain with} \\ \Leb(\Omega) = \Leb(\mathfrak{B}_r) \end{array} : \begin{array}{rcl} |\lambda_{1,\infty}^D(\Omega)-\lambda_{1,\infty}^D(\mathfrak{B}_r)| & = & \delta_1 \\[10pt] |\lambda_{1,\infty}^N(\Omega)-\lambda_{1,\infty}^N(\mathfrak{B}_r)| & = & \delta_2 \end{array} \right\}. $$ Notice that $\Xi_{0, 0}(\mathfrak{B}_r)$ consists of the family of all balls with radius $r>0$. Another important remark is that the class $\Xi_{\delta_1, \delta_2}(\mathfrak{B}_r)$ is invariant under rigid motions (rotations, translations, etc.). Similarly, we can define the class $\Xi^D_{\delta_1}(\mathfrak{B}_r)$ (resp. $\Xi^N_{\delta_2}(\mathfrak{B}_r)$) as being $\Xi_{\delta_1, \delta_2}(\mathfrak{B}_r)$ with the restriction on the Dirichlet (resp. Neumann) eigenvalues only. In the next lemma we show that a control on the difference of the first Dirichlet eigenvalue implies that $\Omega$ contains a large ball. \begin{lemma}\label{Lemma1} If $\Omega\in \Xi^D_{\delta_1}(\mathfrak{B}_r)$ then there exists a ball such that $$\mathfrak{B}_{\frac{r}{\delta_1 r+1}} \subset \Omega.$$ Moreover, $$ \Leb\left(\Omega\bigtriangleup \mathfrak{B}_{\frac{r}{\delta_1 r +1}}\right)\leq \mathfrak{c}(n, \delta_1, r) r^{n}, $$ where $\mathfrak{c} = \text{o}(1)$ as $\delta_1 \to 0$. \end{lemma} \begin{proof} According to \eqref{lam1} we have that $$ \delta_1=|\lambda_{1,\infty}^D(\Omega) - \lambda_{1,\infty}^D(\mathfrak{B}_r)|=\left| \frac{1}{\mathfrak{r}_\Omega} - \frac{1}{r}\right|. $$ It follows that $$ \mathfrak{r}_\Omega \geq \frac{r}{\delta_1 r +1}, $$ and then there is a ball such that $$\mathfrak{B}_{\frac{r}{\delta_1 r+1}} \subset \Omega. $$ Finally, \begin{align*} \Leb(\Omega\triangle \mathfrak{B}_{\frac{r}{\delta_1 r+1}} ) & = \Leb(\Omega) - \Leb(\mathfrak{B}_{\frac{r}{\delta_1 r+1}} ) \\ & = \omega_n r^n \left(1-\frac{1}{(\delta_1 r +1)^n}\right)\\ & \leq \omega_n r^n \left((\delta_1 r +1)^n-1\right)\\ &= \mathfrak{c}(n, \delta_1, r)r^{n} \end{align*} and the lemma follows. \end{proof} Now, we show that a control on the difference of the first Neumann eigenvalue implies that $\Omega$ is contained in a small ball. \begin{lemma}\label{Lemma2} If $\Omega\in \Xi^N_{\delta_2}(\mathfrak{B}_r)$ then there is a ball such that $$\Omega \subset \mathfrak{B}_{\frac{r}{1-\delta_2 r}}.$$ Moreover, $$ \Leb\left(\Omega \bigtriangleup \mathfrak{B}_{\frac{r}{1-\delta_2 r}}\right)\leq (n-1)\omega_n r^n \delta_2. $$ \end{lemma} \begin{proof} Using \eqref{lam1.n} we have that $$ \delta_2=|\lambda_{1,\infty}^N(\Omega) - \lambda_{1,\infty}^N(\mathfrak{B}_r)|=\left| \frac{2}{\diam(\Omega)} - \frac{1}{r}\right|. $$ It follows that $$ \diam(\Omega)\leq \frac{2r}{1-\delta_2 r } = r+\frac{r(1+\delta_2 r)}{1-\delta_2 r} $$ and then there exists a ball such that $$ \Omega \subset \mathfrak{B}_{\frac{\diam(\Omega)}{2}} = \mathfrak{B}_{\frac{r}{1-\delta_2 r}}.$$ Moreover, \begin{align*} \Leb\left(\Omega\bigtriangleup \mathfrak{B}_{\frac{\diam(\Omega)}{2}}\right)&=\Leb\left(\mathfrak{B}_{\frac{\diam(\Omega)}{2}}\right) - \Leb(\Omega) \\ & =\omega_n r^n \left( \left( 1+\frac{\delta_2}{1-\delta_2 r}\right)^n -1 \right)\\ &=\omega_n r^n \delta_2 \sum_{k=2}^n \left(\frac{\delta_2}{1-\delta_2 r}\right)^k\\ &\leq (n-1)\omega_n \delta_2 r^n \end{align*} and the lemma follows.
\end{proof} \begin{proof}[Proof of Theorem \ref{Mainthm}] The proof of Theorem \ref{Mainthm} follows as an immediate consequence of Lemmas \ref{Lemma1} and \ref{Lemma2}. \end{proof} Next, we will prove Theorem \ref{Mainthm2}. \begin{proof}[Proof of Theorem \ref{Mainthm2}] The hypothesis implies that $\Omega_k \in \Xi_{\delta_k, \varepsilon_k}(\mathfrak{B}_r)$ for $\delta_k, \varepsilon_k =\text{o}(1)$ as $k \to \infty$. For this reason, by Theorem \ref{Mainthm} there are two balls such that $$ \mathfrak{B}_{\frac{r}{\delta_k r+1}} \subset \Omega_k \subset \mathfrak{B}_{\frac{r+\varepsilon_k r}{1-\varepsilon_k r}}. $$ Now, using that all these balls are centered at points that remain in a bounded set (since we assumed that the family $\Omega_k$ is uniformly bounded), we can extract a subsequence along which the centers converge, and therefore we conclude that there is a ball $\mathfrak{B}_r$ such that $\Omega_k \to \mathfrak{B}_r$ as $k \to \infty$. \end{proof} \begin{proof}[Proof of Theorem \ref{teo.autofunc.intro}] The proof follows by contradiction. Let us suppose that there exists an $\varepsilon_0>0$ such that the conclusion of the theorem fails to hold. This means that for each $k \in \mathbb{N}$ we may find a domain $\Omega_k$ and $u_k$, a normalized $\infty-$ground state to \eqref{eq.infty.p} in $\Omega_k$, such that $\Omega_{k} \in \Xi_{\gamma_k, \zeta_k}(\mathfrak{B}_r)$, that is, $$ |\lambda_{1,\infty}^D(\Omega_k) - \lambda_{1,\infty}^D(\mathfrak{B}_r)| < \gamma_k \quad \text{and} \quad |\lambda_{1,\infty}^N(\Omega_k) - \lambda_{1,\infty}^N(\mathfrak{B}_r)| < \zeta_k, $$ with $\gamma_k, \zeta_k = \text{o}(1)$ as $k \to \infty$, together with \begin{equation}\label{Eqcont} |u_k(x) - v_{\infty}(x)|> \varepsilon_0 \quad \text{in} \quad \Omega_k \cap \mathfrak{B}_r, \end{equation} for every $k \in \mathbb{N}$. Using our previous results, we can suppose that every $\Omega_k \subset \mathfrak{B}_{2r}$. Then, by extending $u_k$ by zero outside of $\Omega_k$, we may assume that $\{u_k\}_{k\in \mathbb{N}} \subset W_0^{1, \infty}(\mathfrak{B}_{2r})$. In this context, standard arguments using viscosity theory show that, up to a subsequence, $u_k \to u_{\infty}$ uniformly in $\overline{\mathfrak{B}_{2r}}$, where the limit $u_\infty$ is a normalized eigenfunction for some domain $\hat{\Omega}$ with $\hat{\Omega} \Subset\mathfrak{B}_{2r}$. Moreover, we have that $\lambda^D_{1,\infty} (\Omega_k) \to \lambda^D_{1,\infty} (\hat{\Omega})$. According to Theorem \ref{Mainthm2}, $\Omega_k \to \mathfrak{B}_r$ as $k \to \infty$. Combining these facts, we conclude that $\hat{\Omega} = \mathfrak{B}_r$. Now, by uniqueness of solutions to \eqref{eq.infty.p} in $\mathfrak{B}_r$ we conclude that $u_{\infty} = v_{\infty}$. However, this contradicts \eqref{Eqcont} for $k \gg 1$ (large enough). Such a contradiction proves the theorem. \end{proof} \section{Examples}\label{sect-examples} Given a fixed ball $\mathfrak{B}$ and a domain $\Omega$ of the same volume, Theorem \ref{Mainthm} says that if their $\infty-$eigenvalues are close to each other, then $\Omega$ is uniformly almost ball-shaped. The following examples illustrate Theorems \ref{Mainthm} and \ref{Mainthm2}.
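Before turning to specific domains, here is a quick numerical illustration of Theorem \ref{Mainthm} with sample values of the parameters (not attached to any particular domain): if $r=1$ and $\delta_1=\delta_2=\tfrac{1}{10}$, the conclusion reads $$ \mathfrak{B}_{10/11} \subset \Omega \subset \mathfrak{B}_{11/9}, $$ so $\Omega$ is squeezed between two balls whose radii differ from $1$ by roughly $9\%$ and $22\%$, respectively.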
\begin{example} The converse of Theorem \ref{Mainthm} (and of Theorem \ref{Mainthm2}) is not true: given a fixed ball $\mathfrak{B}$, clearly, there are domains $\Omega$ fulfilling \eqref{ecccc} such that the difference between the Neumann (and Dirichlet) eigenvalues in $\Omega$ and in $\mathfrak{B}$ is not small. Let us present some illustrative examples. \begin{enumerate} \item A stadium. Let $\mathfrak{B}$ be the unit ball in $\mathbb R^2$ and $\Omega$ the stadium domain given in Figure \ref{fi1} (a) with $\ell=\frac{\pi(1-\varepsilon^2)}{2\varepsilon}$. In this case $\Leb(\mathfrak{B})=\Leb(\Omega)=\pi$ for any $0<\varepsilon<1$. However, $$ \lambda_{1,\infty}^N(\mathfrak{B})=1, \qquad \lambda_{1,\infty}^N(\Omega)=\frac{2}{\diam(\Omega)}= \frac{4\varepsilon}{\pi+\varepsilon^2(4-\pi)}<\frac13 \quad \text{if } \varepsilon<\frac14. $$ \item A ball with a hole. If $\Omega=B(0,\sqrt{1+\varepsilon^2}) \setminus \overline{B(0,\varepsilon)}$ is the domain given in Figure \ref{fi1} (b), then $\Leb(\mathfrak{B})=\Leb(\Omega)=\pi$, however $$ \lambda_{1,\infty}^D(\mathfrak{B})=1, \qquad \lambda_{1,\infty}^D(\Omega)=\frac{2}{\sqrt{1+\varepsilon^2}-\varepsilon}>\frac32 \quad \text{if } \frac34<\varepsilon<1. $$ \item A ball with thin tubular branches. If $\Omega$ is the domain given in Figure \ref{fi1} (c), the condition $\Leb(\mathfrak{B})= \Leb(\Omega)$ gives the relation $$ r(r+\varepsilon) + \varepsilon(\tfrac{1}{\pi}+\tfrac{\varepsilon}{2})=1, \qquad \diam(\Omega)=1+r+\pi(1+r). $$ For instance, if we take $\varepsilon=10^{-3}$ it follows that $r\sim 0.999465$ and then $$ \lambda_{1,\infty}^N(\mathfrak{B})=\frac{2}{\diam(\mathfrak{B})}=1, \qquad \lambda_{1,\infty}^N(\Omega)=\frac{2}{\diam(\Omega)}\sim 0.2415. $$ \begin{figure} \caption{Three examples of domains} \label{fi1} \end{figure} \end{enumerate} Hence, in view of these examples we conclude that a domain that has Dirichlet and Neumann $\infty-$eigenvalues close to the ones for the ball is close to a ball not only in the sense that $\Leb\left(\Omega\bigtriangleup \mathfrak{B}_{r}\right)$ is small, but also that it cannot contain holes deep inside (small holes near the boundary are allowed) and cannot have thin tubular branches. \end{example} \begin{example} The regular polygon $\mathbb{P}_k$ with $k$ sides ($k\geq 3$) centered at the origin such that $\Leb(\mathbb{P}_k) = \Leb(\mathfrak{B}_r)$ satisfies $$ \displaystyle | \lambda_{1,\infty}^D(\mathbb{P}_k) - \lambda_{1,\infty}^D(\mathfrak{B}_r)| = \delta_1\quad \text{and} \quad |\lambda_{1,\infty}^N(\mathfrak{B}_r) - \lambda_{1,\infty}^N(\mathbb{P}_k)| = \delta_2, $$ where $$ \delta_1=\frac{1}{r\sqrt{\frac{\pi}{k\tan(\frac{\pi}{k})}}}- \frac{1}{r} \quad \text{ and} \quad \delta_2 = \frac{1}{r} - \frac{1}{r\sqrt{\frac{2\pi}{k\sin(\frac{2\pi}{k})}}}. $$ Therefore, we can recover the well-known convergence $\mathbb{P}_k \to \mathfrak{B}_r$ as $k \to \infty$.
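For instance (a purely numerical check of the formulas above), for the square $\mathbb{P}_4$ with the same area as the unit ball ($r=1$) one has $\tan(\tfrac{\pi}{4})=\sin(\tfrac{\pi}{2})=1$, so $$ \delta_1=\sqrt{\tfrac{4}{\pi}}-1\approx 0.13 \quad\text{and}\quad \delta_2=1-\sqrt{\tfrac{2}{\pi}}\approx 0.20, $$ and both quantities tend to $0$ as the number of sides $k$ grows.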
\end{example} \begin{example} Given $k \in \mathbb{N}$ and positive constants $\mathfrak{a}^k_1,\dots, \mathfrak{a}^k_n$, the $n$-dimensional ellipsoid given by $$\displaystyle \mathcal{E}_k \mathrel{\mathop:}= \left\{(x_1, \dots, x_n) \suchthat \sum_{i=1}^{n} \Big(\frac{x_i}{\mathfrak{a}^k_i}\Big)^2<1\right\}$$ such that $\Leb(\mathcal{E}_k) = \Leb(\mathfrak{B}_r)$ satisfies $$ \displaystyle | \lambda_{1,\infty}^D(\mathcal{E}_k) - \lambda_{1,\infty}^D(\mathfrak{B}_r)| =\delta_1 \quad \text{and} \quad |\lambda_{1,\infty}^N(\mathfrak{B}_r) - \lambda_{1,\infty}^N(\mathcal{E}_k)| = \delta_2, $$ where $$ \delta_1 = \frac{1}{\min\limits_{i}\{\mathfrak{a}^k_i\}} - \frac{1}{r} \quad \text{and} \quad \delta_2=\frac{1}{r} - \frac{1}{\max\limits_{i}\{\mathfrak{a}^k_i\}}. $$ Therefore, we recover the fact that if $\min\limits_{i} \mathfrak{a}^k_i \to r$ and $\max\limits_{i} \mathfrak{a}^k_i \to r$ as $k \to \infty$, then $\mathcal{E}_k \to \mathfrak{B}_r$. \end{example} \begin{example} Given $r>0$ let $k_0 \in \mathbb{N}$ be such that $\frac{1}{2\pi} \sqrt{\frac{4}{k^2} + 4 \pi^2r^2}> \frac{1}{k \pi}$ for all $k \geq k_0$. For each $k\in \mathbb{N}$ let $\Omega_k$ be the planar stadium domain from Figure \ref{fi1} (a) with $\ell_k = \frac{1}{k}$ and $\varepsilon_k = \frac{1}{2\pi} \sqrt{\frac{4}{k^2} + 4 \pi^2r^2}-\frac{1}{k \pi}$. It is easy to check that $\Omega_k \in \Xi_{\frac{1}{\varepsilon_k}-\frac{1}{r},\, \frac{1}{r}-\frac{2}{2\varepsilon_k + \frac{1}{k}}}(\mathfrak{B}_r)$. Furthermore, in this case the eigenfunctions are explicit and given by $$ u_k(x) = \frac{1}{\varepsilon_k}\dist(x, \partial \Omega_k). $$ Finally, from Corollary \ref{CorConv}, $$ u_k(x) \to v_{\infty}(x) = \frac{1}{r}\dist(x, \partial \mathfrak{B}_r) \quad \text{locally uniformly} \quad \text{in} \quad \mathfrak{B}_r \quad \text{as}\quad k \to \infty. $$ \end{example} \subsubsection*{Acknowledgments} This work was supported by Consejo Nacional de Investigaciones Cient\'{i}ficas y T\'{e}cnicas (CONICET-Argentina). JVS would like to thank the Dept. of Math. and FCEyN Universidad de Buenos Aires for providing an excellent working environment and scientific atmosphere during his Postdoctoral program. \end{document}
\begin{document} \abovedisplayskip=6pt plus3pt minus3pt \belowdisplayskip=6pt plus3pt minus3pt \title[Diffeomorphic souls and disconnected moduli spaces]{Diffeomorphic souls and disconnected moduli spaces of nonnegatively curved metrics} \author{Igor Belegradek} \address{Igor Belegradek\\ School of Mathematics\\ Georgia Institute of Technology\\ Atlanta, GA, USA 30332} \email{[email protected]} \urladdr{www.math.gatech.edu/~ib} \author[David Gonz\'alez-\'Alvaro]{David Gonz\'alez-\'Alvaro} \address{David Gonz\'alez-\'Alvaro\\ ETSI de Caminos, Canales y Puertos\\ Universidad Polit\'ecnica de Madrid\\ 28040 Spain} \email{[email protected]} \urladdr{https://dcain.etsin.upm.es/~david/} \thanks{2010 \it Mathematics Subject classification.\rm\ Primary 53C20.} \keywords{Nonnegative curvature, soul, moduli space, positive scalar curvature.} \thanks{This work was partially supported by the Simons Foundation grant 524838 (Belegradek) and by MINECO grant MTM2017-85934-C3-2-P (Gonz\'alez-\'Alvaro).} \begin{abstract} We give examples of open manifolds that carry infinitely many complete metrics of nonnegative sectional curvature such that they all have the same soul, and their isometry classes lie in different connected components of the moduli space. All previously known examples of this kind have souls of codimension one. In our examples the souls have codimensions three and two. \end{abstract} \maketitle \thispagestyle{empty} \section{Motivation and results} There has been considerable recent interest in studying spaces of metrics with various curvature restrictions, such as nonnegative sectional curvature, to be denoted \mbox{$K\ge 0$}, see~\cite{TW15} and references therein. For a manifold $V$ let $\mathcal{R}_{K\ge 0}(V)$ denote the space of complete Riemannian metrics on $V$ of $K\ge 0$ with the topology of smooth ($=C^\infty$) uniform convergence on compact sets, and $\mathcal{M}_{K\ge 0}(V)$ be the corresponding moduli space, the quotient space of $\mathcal{R}_{K\ge 0}(V)$ by the $\operatorname{Diff}(V)$-action via pullback. The soul construction~\cite{CG72} takes as input a complete metric of $K\ge 0$ on an open connected manifold $V$, and a basepoint of $V$, and produces a totally convex compact boundaryless submanifold $S$ of $V$, called the {\em soul}, such that $V$ is diffeomorphic to a tubular neighborhood of $S$. If we fix a metric and vary the basepoint, the resulting souls are ambiently isotopic~\cite{Yi90} and isometric~\cite{Sh79}. Consider the map {\bf soul} that sends an isometry class of a complete metric of $K\ge 0$ on $V$ to the isometry class of its soul: \[ \text{\bf soul}\colon\thinspace\mathcal{M}_{K\ge 0}(V)\rightarrow \coprod_{S\in \mathcal V}\mathcal{M}_{K\ge 0}(S) \] where the co-domain is given the topology of disjoint union, and $\mathcal V$ is a set of pairwise non-diffeomorphic manifolds such that $S\in \mathcal V$ if and only if $S$ is diffeomorphic to a soul of a complete metric of $K\ge 0$ on $V$. A tantalizing open problem is to decide if the map {\bf soul} is continuous; the difficulty is that the soul is constructed via asymptotic geometry which is not captured by the compact-open topology on the space of metrics. The following is immediate from~\cite[Theorem 2.1]{BFK17}. \begin{theorem} \label{thm: BFK cont} If $V$ is indecomposable, then the map {\bf soul} is continuous. \end{theorem} An open manifold is {\em indecomposable\,} if it admits a complete metric of $K\ge 0$ such that the normal sphere bundle to a soul has no section.
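As a simple illustration of the soul construction (a standard special case, recorded here for orientation): if $S$ is a closed manifold with a metric of $K\ge 0$ and $V=S\times\mathbb{R}^k$ carries the product metric, then a slice $S\times\{\mathrm{pt}\}$ is a soul; its normal bundle is trivial, so the normal sphere bundle certainly has a section, and hence a product metric can never witness indecomposability.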
It follows from~\cite{Yi90} that for indecomposable $V$ the soul is uniquely determined by the metric (and not the basepoint). Moreover, \cite{BFK17} implies that the souls of nearby metrics are ambiently isotopic by a small compactly supported isotopy. In particular, metrics with non-diffeomorphic souls in an indecomposable manifold lie in different connected components of $\mathcal{M}_{K\ge 0}(V)$. There are many examples where the diffeomorphism (or even homeomorphism) type of the soul depends on the metric, see~\cite{Be03, KPT05, BKS11, Ot11, BKS15, GZ18}, and if the ambient open manifold $V$ is indecomposable, this gives examples where $\mathcal{M}_{K\ge 0}(V)$ is not connected, or even has infinitely many connected components. If $V$ has a complete metric with $K\ge 0$ with soul of codimension one, then {\bf soul} is a homeomorphism, see~\cite{BKS11}. Thus if for some soul $S$ the space $\mathcal{M}_{K\ge 0}(S)$ has infinitely many connected components, then so does $\mathcal{M}_{K\ge 0}(V)$; for example, this applies to $V=S\times\mathbb{R}$. Examples of closed manifolds $S$ for which $\mathcal{M}_{K\ge 0}(S)$ has infinitely many connected components can be found in~\cite{KPT05, DKT18, De17, Go20a, DG19, De20}. These metrics on $S$ have $K\ge 0$ and $\mathrm{scal}>0$, and the connected components are distinguished by index-theoretic invariants that are constant along paths of metrics of $\mathrm{scal}>0$. The papers mentioned in the previous paragraph only prove the existence of infinitely many path-components. We take this opportunity to note that they actually get infinitely many connected components. \begin{theorem} \label{thm: path scal>0} Let $M$ be a closed manifold. If two points in the same connected component of $\mathcal M_{K\ge 0}(M)$ have $\mathrm{scal}>0$, then they can be joined by a path of isometry classes of $\mathrm{scal}>0$. \end{theorem} In this paper we show that some of these $S$ as above can be realized as souls of codimensions $2$ or $3$ in indecomposable manifolds. The codimension $2$ case is a fairly straightforward consequence of results in~\cite{WZ90, KS93, DKT18}. \begin{theorem} \label{thm: witten} For every positive integer $n$ there are infinitely many homotopy types that contain a manifold $M$ such that \newline $\textup{(a)}$ $M$ is a simply-connected manifold that is the total space of a principal circle bundle over $\mathbb{S}^2\times \mathbb{C}P^{2n}$, and \newline $\textup{(b)}$ if $V$ is the total space of a non-trivial complex line bundle over $M$, then $V$ has infinitely many complete metrics of $K\ge 0$ whose souls equal the zero section, and whose isometry classes lie in different connected components of $\mathcal{M}_{K\ge 0}(V)$. \end{theorem} The codimension $3$ case requires a bit more work. Recall that if $M$ is the total space of a linear $\mathbb{S}^3$-bundle over $\mathbb{S}^4$, then $M$ admits a metric of $K\ge 0$~\cite{GZ00}, and moreover, if the bundle has nonzero Euler number, then $\mathcal{M}_{K\ge 0}(M)$ has infinitely many connected components~\cite{De17, Go20a}. We prove: \begin{thm} \label{thm: cd 3} Let $M$ be the total space of a linear $\mathbb{S}^3$-bundle $\xi$ over $\mathbb{S}^4$ with Pontryagin number $p_1(\xi)$ and nonzero Euler number $e(\xi)$.
If $\frac{p_1(\xi)}{2e(\xi)}$ is not an odd integer, then $M$ is diffeomorphic to a codimension three submanifold $S$ of an indecomposable manifold $V$ that admits infinitely many complete metrics of $K\ge 0$ with soul $S$ whose isometry classes lie in different connected components of $\mathcal{M}_{K\ge 0}(V)$. \end{thm} Milnor famously showed that some $\mathbb{S}^3$-bundles over $\mathbb{S}^4$ are exotic spheres~\cite{Mi56}. In fact, $M$ is a homotopy sphere if and only if $e(\xi)=\pm 1$. Unfortunately, if $e(\xi)=\pm 1$, then $\frac{p_1(\xi)}{2}$ is an odd integer, so no $M$ in Theorem~\ref{thm: cd 3} is a homotopy sphere. On the other hand, for every integer $n$ with $n\ge 2$ there is $M$ as in the conclusion of Theorem~\ref{thm: cd 3} with $H^4(M)\cong\mathbb{Z}_n$, see Section~\ref{sec: codim 3}. To prove Theorem~\ref{thm: cd 3} we use results of Grove-Ziller~\cite{GZ00} and some topological considerations to find an indecomposable $V$ with a codimension three soul, and then we observe that the metric on the soul can be moved by Cheeger deformation to metrics in~\cite{De17, Go20a} that represent infinitely many connected components. Let us conclude by mentioning that other results on connected components of moduli spaces corresponding to various nonnegative or positive curvature conditions can be found in~\cite{KS93, BG95, Wr11, TW17, Go20b}. \subsection*{Structure of the paper} Theorems~\ref{thm: BFK cont} and~\ref{thm: path scal>0} are proved in Section~\ref{sec: connect comp}. Theorem~\ref{thm: witten} is established in Section~\ref{sec: codim2}. Theorem~\ref{thm: cd 3} is proved in Section~\ref{sec: codim 3}, and the needed background is reviewed in Sections~\ref{sec: cheeger}, \ref{sec: 3-sphere}, \ref{sec: bundles}. \section{Continuity of souls, connectedness and path-connectedness} \label{sec: connect comp} \begin{proof}[Proof of Theorem~\ref{thm: BFK cont}] Theorem 2.1 in~\cite{BFK17} says that the map that sends a complete metric of $K\ge 0$ on $V$ to its soul, considered as a point in the space of smooth compact submanifolds of $V$ with smooth topology, is continuous. Two nearby submanifolds are ambiently isotopic by a small isotopy with compact support. Hence, the isometry classes of the induced metrics on these submanifolds are close in the moduli space. Thus we get a continuous map \[ \mathcal{R}_{K\ge 0}(V)\rightarrow \coprod_{S\in \mathcal V}\mathcal{M}_{K\ge 0}(S) \] that takes a metric to the isometry class of its soul, where the co-domain is given the disjoint union topology, i.e., the set in the co-domain is open if and only if its intersection with each $\mathcal{M}_{K\ge 0}(S)$ is open. Finally, by the definition of quotient topology the above continuous map descends to a continuous map defined on $\mathcal{M}_{K\ge 0}(V)$. \end{proof} Let $X$ denote the space of isometry classes of Riemannian metrics on a closed manifold $M$ with smooth ($=C^\infty$) topology, and let $X_{\mathrm{scal}\ge 0}$, $X_{\mathrm{scal}>0}$ be the subspaces of $X$ of isometry classes of metrics of nonnegative and positive scalar curvature, respectively. \begin{lemma} $X$ is metrizable. \end{lemma} \begin{proof} This is well-known, but we cannot find a proof in the literature, and hence present it here for completeness.
The smooth topology on the space of all Riemannian metrics on $M$ is induced by a metric whose isometry group contains $\operatorname{Diff}(M)$~\cite[Proposition 148]{Ebin-thesis}, and every $\operatorname{Diff}(M)$-orbit is closed~\cite[Proposition 142]{Ebin-thesis}. The corresponding pseudometric on the set of orbits induces the quotient topology, and the pseudodistance is simply the infimum of distances between the orbits~\cite[Theorem 4]{Hi68}. Since the orbits are closed, the quotient space is $T_1$, so that the pseudometric is actually a metric. \end{proof} Also $X$ is locally path-connected (because this property is inherited by quotients, and $X$ is the quotient of the space of metrics, which is an open subset in the Fr\'echet space of $2$-tensors on $M$). In fact, every point of $X$ has a contractible neighborhood (as follows from the smooth version of Corollary 7.3 in~\cite{Ebin-symp} which can be deduced from the discussion after the corollary) but we do not need it here. \begin{theorem} \label{thm: path scal >=0} If $C$ is a connected subset of $X_{\mathrm{scal}\ge 0}$ that contains no Ricci-flat metrics, then any two points $y, z\in C$ can be joined by a path in $\{y, z\}\cup X_{\mathrm{scal}>0}$. \end{theorem} \begin{proof} By continuous dependence of Ricci flow on the initial metric, see e.g.~\cite[Theorem A]{BGI}, for every point $x\in X$ there is a neighborhood $U_x$ and a positive constant $\tau_x$ such that the Ricci flow that starts at any point of $U_x$ exists on $[0,\tau_x]$. Being a metrizable space, $X$ is paracompact and Hausdorff, and hence has a locally finite open cover $\{R_{x_i}\}_{i\in I}$ such that $R_{x_i}\subset U_{x_i}$ for all $i$, and there is a continuous function $\tau\colon\thinspace X\to (0,\infty)$ with $\tau(x)\le \tau_{x_i}$ for all $x\in R_{x_i}$, see~\cite[Theorem 41.8]{Mun-book}. Since $C$ contains no Ricci-flat metrics, for every $x\in C$ the Ricci flow of $x$ has $\mathrm{scal}>0$ for all times in $(0,\tau(x)]$, see~\cite[Proposition 2.18]{Br10}. By continuous dependence of the Ricci flow on the initial metric the map $T\colon\thinspace X\to X$ that sends $x$ to the Ricci flow of $x$ at time $\tau (x)$ is continuous. Hence, if $C$ is a connected subset of $X_{\mathrm{scal}\ge 0}$ that contains $y, z$, then $T(C)$ is a connected subset of $X_{\mathrm{scal}>0}$. Since $X_{\mathrm{scal}>0}$ is an open subset in the locally path-connected space $X$, every connected component of $X_{\mathrm{scal}>0}$ is path-connected. Hence the connected component of $X_{\mathrm{scal}>0}$ that contains $T(C)$ also contains a path from $T(y)$ to $T(z)$. Concatenating the path with Ricci flows from $y, z$ to $T(y), T(z)$, respectively, we get a path from $y$ to $z$ with the desired properties. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: path scal>0}] No flat manifold admits a metric of $\mathrm{scal}>0$~\cite[Corollary A]{GL83}. Hence $M$ admits no flat metric. Since Ricci-flat metrics of $K\ge 0$ are flat, $\mathcal M_{K\ge 0}(M)$ contains no Ricci-flat metrics. Applying Theorem~\ref{thm: path scal >=0} to the connected component of $\mathcal M_{K\ge 0}(M)$ that contains $y, z$ finishes the proof.
\end{proof} \section{Codimension two} \label{sec: codim2} \begin{proof}[Proof of Theorem~\ref{thm: witten}] If $\mathbb{S}^{2t+1}\to \mathbb{C}P^t$ is the circle bundle obtained by restricting the diagonal circle action on $\mathbb{C}^{t+1}$, where $t$ is a positive integer, then its Euler class generates $H^2(\mathbb{C}P^t)$ as follows from the Gysin sequence and $2$-connectedness of $\mathbb{S}^{2t+1}$. Consider the product of two such circle bundles with $t=1$ and $t=2n$. Then the argument in~\cite[p.227]{WZ90} implies that any $M$ as in (a) is the quotient of the Riemannian product of two unit spheres $\mathbb{S}^3\times \mathbb{S}^{4n+1}$ by the free isometric circle action $e^{i\phi}(x,y)=(e^{il\phi} x,e^{-ik\phi}y)$ for some coprime integers $k$, $l$. This gives a Riemannian submersion metric on $M$ with $K\ge 0$ and $\mathrm{Ric}>0$. Sometimes it happens that the quotients corresponding to different pairs $(k, l)$ are diffeomorphic. In fact, $H^4(M)$ is a cyclic group of order $l^2$, so up to sign $l$ is determined by the homotopy type of $M$, but for a given $l$ the quotients fall into finitely many diffeomorphism types~\cite[Proposition 2.2]{DKT18}. Their diffeomorphism classification was studied in~\cite{WZ90, KS93} and finally in~\cite{DKT18} where it was shown that for each $n$ there are infinitely many homotopy types that contain $M$ as in (a) and such that the Riemannian submersion metrics as above represent infinitely many connected components of $\mathcal{M}_{K\ge 0}(M)$. Similarly, since $\mathbb{S}^3\times \mathbb{S}^{4n+1}$ is $2$-connected, any complex line bundle over $M$ is the quotient of $\mathbb{S}^3\times \mathbb{S}^{4n+1}\times\mathbb{C}$ by the circle action $e^{i\phi}(x,y, z)=(e^{il\phi} x,e^{-ik\phi}y, e^{im\phi}z)$, cf.~\cite[Lemma 12.3]{BKS15}. In particular, $V$ carries a complete Riemannian submersion metric of $K\ge 0$ with soul equal to the zero section, which is the quotient of $\mathbb{S}^3\times \mathbb{S}^{4n+1}\times\{0\}$ by the above circle action, and hence is diffeomorphic to $M$. If we fix $l$ and the Euler class of the line bundle in $H^2(M)\cong\mathbb{Z}$, there are only finitely many possibilities for the diffeomorphism type of the pair $(V, \text{\,soul})$ for the above metrics. By varying $k$ appropriately, we then get a sequence of complete metrics of $K\ge 0$ on each $V$ as above such that the metrics on the soul represent infinitely many connected components of $\mathcal{M}_{K\ge 0}(M)$. If the line bundle is non-trivial, then $V$ is indecomposable, and the map {\bf soul} is continuous by Theorem~\ref{thm: BFK cont}. Thus $\mathcal{M}_{K\ge 0}(V)$ has infinitely many connected components. \end{proof} \section{Equivariant Cheeger deformation} \label{sec: cheeger} The purpose of this section is to review the Cheeger deformation, and note that it passes to quotients by free isometric actions. Let $G$ be a compact Lie group with a bi-invariant metric $Q$ that acts isometrically on a Riemannian manifold $(M,q_0)$. Consider the diagonal $G$-action on $M\times G$ given by $a\cdot (p,g)=(ap, ag)$, $p\in M$, $a, g\in G$. Its orbit space is commonly denoted by $M\times_G G$. The map $\pi\colon\thinspace M\times G\to M$ given by $\pi(p,g)=g^{-1}p$ descends to a diffeomorphism $\phi\colon\thinspace M\times_G G\to M$. For any positive scalar $t$ the $G$-action is isometric in the product metric $q_0+\frac{Q}{t}$, which induces a metric $q_t$ on $M$ that makes $\pi$ into a Riemannian submersion.
Similarly, $\phi$ becomes an isometry between $q_t$, $t>0$, and the Riemannian submersion metric on $M\times_G G$ induced by $q_0+\frac{Q}{t}$. The map $t\to q_t$ is continuous for $t\ge 0$; this is the {\em Cheeger deformation} of $q_0$, see e.g.~\cite[p.140]{AB15} or~\cite{Zi09}. The key property is that if $q_0$ has $K\ge 0$, then so does $q_t$ for all $t$. Fix a closed subgroup $H$ of $G$ such that the $H$-action on $M$ is free. For $t\ge 0$ let $\bm{q_t}$ be the metric on $M/H$ that makes the $H$-orbit map into a Riemannian submersion $\chi\colon\thinspace(M, q_t)\to (M/H, \bm{q_t})$. The map $t\to \bm{q_t}$ is continuous for $t\ge 0$. The $H$-action on $M\times G$ given by $h\cdot (p,g)=(p, gh^{-1})$ commutes with the diagonal $G$-action, and hence descends to a free $H$-action on $M\times_G\hspace{1pt} G$. For this action the maps $\pi$ and $\phi$ are $H$-equivariant, and descend to a Riemannian submersion $M\times (H\backslash G)\to M/H$ and an isometry $M\times_G (H\backslash G)\to M/H$, respectively, where $t>0$ and $H\backslash G$ is given the Riemannian submersion metric induced by $\frac{Q}{t}$. Thus in the following diagram all maps are Riemannian submersions for $t>0$ \begin{equation*} \xymatrix{ & M\times G\ar[dl]\ar[r]\ar[d]^\pi & M\times H\backslash G\ar[d]\ar[rd] &\\ M\times_G G\ar[r]^{\quad\phi} & M \ar[r]^{\chi} & M/H & M\times_G H\backslash G \ar[l] } \end{equation*} and $\chi$ is also a Riemannian submersion for $t=0$. In this diagram $M$ and $M/H$ are the only spaces where the metric corresponding to $t=0$ is defined. \section{Some algebra and geometry of the $3$-sphere} \label{sec: 3-sphere} In this section we specialize the discussion of Section~\ref{sec: cheeger} to the case when $G=\mathbb{S}^3\times\mathbb{S}^3$, where $\mathbb{S}^3$ is thought of as the unit quaternions, and $H$ is the diagonal subgroup of $G$, i.e., $H=\{(g,g)\,:\, g\in \mathbb{S}^3\}$. Consider the diffeomorphism $\psi\colon\thinspace\mathbb{S}^3\to H\backslash G$ given by $\psi(c)=(c,1)H$; thus $\psi^{-1}$ sends the coset $(a,b)H$ to $ab^{-1}$. With this identification the (left) $G$-action on $H\backslash G$ becomes $(a,b)\cdot c=acb^{-1}$, where $a, b, c\in\mathbb{S}^3$; indeed \[(a,b)(c,1)H=(ac,b)H=(acb^{-1},1)H.\] Since $(-1,-1)$ acts trivially, the $G$-action on $H\backslash G$ descends to an $SO(4)$-action with isotropy subgroups isomorphic to $SO(3)$. It follows that any $G$-invariant Riemannian metric on $H\backslash G$ is isometric to a round $3$-sphere (i.e., a metric sphere in $\mathbb{R}^4$). Indeed, $SO(3)$ acts transitively on every tangent $2$-sphere, so $G$ acts transitively on the unit tangent bundle, and hence the metric has constant Ricci curvature, which on the $3$-sphere makes the metric round. The discussion in Section~\ref{sec: cheeger} immediately gives the following. \begin{prop} \label{prop: cheeger} Let $H$ be the diagonal subgroup of $G=\mathbb{S}^3\times\mathbb{S}^3$.
Given an isometric $G$-action on a Riemannian manifold $(M, q_0)$ of $K\ge 0$ that restricts to a free $H$-action, there is a path of Riemannian metrics $(M, q_t)$ of $K\ge 0$, defined for $t\ge 0$, such that \newline $\bullet$ for every $t\ge 0$ the $G$-action is $q_t$-isometric, and the Riemannian submersion metric $(M/H, \bm{q_t})$ induced by $q_t$ has $K\ge 0$, and $t\to \bm{q_t}$ is a continuous path of metrics on $M/H$, \newline $\bullet$ if $t>0$ and $H\backslash G$ is given the Riemannian submersion metric induced by a bi-invariant metric on $G$, then $H\backslash G$ is isometric to a round sphere, and $(M/H,\bm{q_t})$ is isometric to the Riemannian submersion metric on $(M, q_t)\times_G H\backslash G$. \end{prop} \section{Bundle theoretic facts} \label{sec: bundles} This section reviews several well-known bundle theoretic facts. \begin{lemma} \label{lem: class spaces} Let $C\le G$ be an order two normal subgroup of a topological group $G$. If $P\to X$ is a non-trivial principal $G$-bundle over a finite cell complex with $H^1(X;\mathbb{Z}_2)=0$, then the associated principal $G/C$-bundle $P/C\to X$ is non-trivial. \end{lemma} \begin{proof} The surjection $G\to G/C=H$ induces a fibration of classifying spaces $BC\to BG\to BH$ where $BC$ is the homotopy fiber of $BG\to BH$, see~\cite{MO-fibration-classifying-spaces}. As explained in~\cite[p.139]{MT68}, for any finite complex $X$ we get an exact sequence of pointed sets \[ [X, BC]\to [X,BG]\to [X,BH] \] with constant maps as basepoints. Since $[X, BC]=H^1(X;\mathbb{Z}_2)=0$, the rightmost arrow is injective. \end{proof} A {\em $k$-plane bundle\,} is a vector bundle with fiber $\mathbb{R}^k$. \begin{lemma} \label{lem: 3-plane bundles} Let $X$ be a paracompact space with $H^1(X;\mathbb{Z}_2)=0=H^2(X)$. If a $3$-plane bundle over $X$ has a nowhere zero section, then it is trivial. \end{lemma} \begin{proof} A nowhere zero section gives rise to a splitting of the bundle into the Whitney sum of a line subbundle and a $2$-plane subbundle, which are orientable since $H^1(X;\mathbb{Z}_2)=0$, and in fact trivial, because a line bundle is determined by its first Stiefel-Whitney class in $H^1(X;\mathbb{Z}_2)$, and an orientable $2$-plane bundle is determined by its Euler class in $H^2(X)$. \end{proof} \begin{lemma} \label{lem: euler pontr} If $X$ is a finite cell complex with $H^1(X;\mathbb{Z}_2)=0=H^4(X;\mathbb{Q})$, then the number of isomorphism classes of $3$-plane bundles over $X$ is finite. \end{lemma} \begin{proof} Since $H^1(X;\mathbb{Z}_2)=0$, any vector bundle over $X$ is orientable. There are only finitely many isomorphism classes of orientable $3$-plane bundles with a given first rational Pontryagin class~\cite[Theorem A.0.1]{Be01}, which lies in $H^4(X;\mathbb{Q})=0$. \end{proof} \section{Codimension three} \label{sec: codim 3} This section ends with a proof of Theorem~\ref{thm: cd 3}. First, we recall some results and notations from~\cite{GZ00}. Following~\cite[p.349]{GZ00} let $P_{k,l}$ denote the principal $\mathbb{S}^3\times \mathbb{S}^3$-bundle over $\mathbb{S}^4$ classified by the map $q\to (q^k, q^{-l})$ in $\pi_3(\mathbb{S}^3\times \mathbb{S}^3)\cong\mathbb{Z}\times\mathbb{Z}$, where $q\in \mathbb{S}^3$. Let $M_{k,l}$ be the associated bundle $P_{k,l}\times_{\mathbb{S}^3\times \mathbb{S}^3} \mathbb{S}^3$ where the action on $\mathbb{S}^3$ is as in Section~\ref{sec: 3-sphere}, see~\cite[p.352]{GZ00}.
Equivalently~\cite[Proposition 8.27]{Po95}, the action is given by the universal covering $\mathbb{S}^3\times \mathbb{S}^3\to SO(4)$ where the $SO(4)$-action on $\mathbb{S}^3$ is standard. Hence $M_{k,l}$ is a linear $\mathbb{S}^3$-bundle over $\mathbb{S}^4$. The Euler number and the Pontryagin number of the $\mathbb{S}^3$-bundle $M_{k,l}\to \mathbb{S}^4$ are $\pm(k+l)$ and $\pm 2(k-l)$, see~\cite[p.159, 169]{Kr10}. The Gysin sequence shows that $H^4(M_{k,l})\cong\mathbb{Z}_{k+l}$ if $k+l\neq 0$, and $H^4(M_{k,l})\cong\mathbb{Z}$ if $k+l=0$. \begin{remark} \rm Somewhat confusingly, the notation $M_{m,n}$ is also used in the literature to denote the total space of another $\mathbb{S}^3$-bundle over $\mathbb{S}^4$ based on a different choice of generators in $\pi_3(\mathbb{S}^3\times \mathbb{S}^3)$. This usage goes back to James and Whitehead, and, more to the point, appears in works quoted below. Thus $M_{k,l}$ of~\cite{GZ00} equals $M_{m,n}$ of~\cite{CE03, Go20a} when $m=-l$, $n=k+l$. In what follows all results are rephrased in the notation of~\cite{GZ00}. \end{remark} According to Section~\ref{sec: cheeger}, $M_{k,l}$ can be described as $P_{k,l}/H$ where $H$ is the diagonal subgroup in $\mathbb{S}^3\times \mathbb{S}^3$, cf. also the Key Observation in~\cite{GKS20}. Thus $M_{k,l}$ is the base of a principal $\mathbb{S}^3$-bundle with total space $P_{k,l}$. Our strategy hinges on the following: \begin{problem} Find all $k, l$ such that the principal $H$-bundle $P_{k,l}\to P_{k,l}/H=M_{k,l}$ is non-trivial. \end{problem} Some partial solutions are presented below. An especially interesting case (which we could not resolve in this paper) is when $|k+l|=1$, or equivalently, $M_{k,l}$ is a homotopy sphere. \begin{lemma} If $kl=0$, the principal $H$-bundle $P_{k,l}\to M_{k,l}$ is trivial. \end{lemma} \begin{proof} The principal $\mathbb{S}^3\times\mathbb{S}^3$-bundle $P_{k,0}$ is isomorphic to $P\times \mathbb{S}^3$ for some principal $\mathbb{S}^3$-bundle $P$ over $\mathbb{S}^4$; the case of $P_{0,l}$ is analogous. The inclusion $i\colon\thinspace P\to P\times \mathbb{S}^3$ given by $i(p)=(p,1)$ is transverse to the $H$-orbits, hence it descends to an immersion $\bm{i}\colon\thinspace P\to (P\times \mathbb{S}^3)/H$, which is a diffeomorphism because both domain and co-domain are closed manifolds of the same dimension. Then $i\circ\bm{i}^{-1}$ is a section of $P\times \mathbb{S}^3\to (P\times \mathbb{S}^3)/H$, and any principal bundle with a section is trivial. \end{proof} Lemma~\ref{lem:Milnor} below sheds some light on why the assumption ``$\frac{p_1(\xi)}{2e(\xi)}$ is not odd'' is relevant. Let us first restate the assumption: \begin{lemma} \label{lem: odd} $\frac{k-l}{k+l}$ is an odd integer if and only if $\frac{k}{k+l}\in\mathbb{Z}$. \end{lemma} \begin{proof} $\frac{k-l}{k+l}$ is an odd integer if and only if $\frac{k-l}{k+l}+1=\frac{2k}{k+l}$ is an even integer, if and only if $\frac{k}{k+l}\in\mathbb{Z}$. \end{proof} \begin{lemma} \label{lem:Milnor} If $kl\neq 0$ and the principal $H$-bundle $P_{k,l}\to P_{k,l}/H=M_{k,l}$ is trivial, then $k+l\neq 0$ and $\frac{k}{k+l}\in\mathbb{Z}$. \end{lemma} \begin{proof} If the bundle is trivial, $P_{k,l}$ is diffeomorphic to $\mathbb{S}^3\times M_{k,l}$. By the K\"unneth formula $H^4(P_{k,l})\cong H^4(M_{k,l})$, which is $\mathbb{Z}_{k+l}$ if $k+l\neq 0$, and $\mathbb{Z}$ if $k+l=0$.
As was mentioned on~\cite[p.349]{GZ00}, the quotient of the principal $\mathbb{S}^3\times \mathbb{S}^3$-bundle $P_{k,l}$ by the subgroup $1\times \mathbb{S}^3$ can be identified with $P_k$, the principal $\mathbb{S}^3$-bundle over $\mathbb{S}^4$ with Euler number $k$. Since $k\neq 0$, we get $H^4(P_k)\cong\mathbb{Z}_{k}$~\cite[p.346]{GZ00}. The Gysin sequence for the $\mathbb{S}^3$-bundle $P_{k,l}\to P_k$ reads \begin{equation*} \xymatrix{ \mathbb{Z}_k\cong H^4(P_k)\ar[r] & H^4(P_{k,l})\ar[r] & H^1(P_k)=0, } \end{equation*} which shows that $k+l$ is a nonzero integer that divides $k$. \end{proof} \begin{remark}\rm The asymmetry in the conclusion of Lemma~\ref{lem:Milnor} is an illusion: $\frac{k}{k+l}\in\mathbb{Z}$ if and only if $\frac{l}{k+l}\in\mathbb{Z}$ because $\frac{k}{k+l}+\frac{l}{k+l}=1$. \end{remark} \begin{proof}[Proof of Theorem~\ref{thm: cd 3}] By Proposition 3.11 of~\cite{GZ00}, each $P_{k,l}$ admits a cohomogeneity one action by $\mathbb{S}^3\times \mathbb{S}^3\times \mathbb{S}^3$ with codimension two singular orbits, and such that the action of the subgroup $G:=\mathbb{S}^3\times \mathbb{S}^3\times \{1\}$ coincides with the principal bundle action. Hence by \cite[Theorem E]{GZ00} the space $P_{k,l}$ carries a $G$-invariant metric $\gamma_{k,l}$ of $K\ge 0$. Let $\mathbb{S}^3(r)$ be the round $3$-sphere of radius $r$ on which $\mathbb{S}^3\times\mathbb{S}^3$ acts as in Section~\ref{sec: 3-sphere}. Let $h_{k,l, r}$ be the Riemannian submersion metric on $M_{k,l}=P_{k,l}\times_{\mathbb{S}^3\times\mathbb{S}^3} \mathbb{S}^3$ induced by the product of $\gamma_{k,l}$ and $\mathbb{S}^3(r)$. Then $h_{k,l, r}$ has $K\ge 0$ and $\mathrm{scal}>0$ by~\cite[Theorem 2.1]{Go20a}. An essential point is that there are infinitely many ways to represent $M$ as $M_{k,l}$. Indeed, by assumption $M=M_{k,l}$ for some $k,l\in\mathbb{Z}$ with $k+l\neq 0$ and such that $\frac{k-l}{k+l}$ is not an odd integer. The latter is equivalent to $\frac{k}{k+l}\notin\mathbb{Z}$ by Lemma~\ref{lem: odd}. For $i\in\mathbb{Z}$ let \[ l_i=l-56(k+l)i\quad\text{and}\quad k_i=k+l-l_i=-l+(k+l)(56i+1). \] Then each $M_{k_i, l_i}$ is orientation-preservingly diffeomorphic to $M$~\cite[Corollary 1.6]{CE03}. By~\cite[Section 3.1]{Go20a} there exist $r$ and infinitely many values of $i$ for which the metrics $h_{k_i,l_i, r}$ lie in different connected components of $\mathcal{M}_{K\ge 0}(M)$. Let $g_{k,l}$ be the Riemannian submersion metric of $K\ge 0$ induced on $M_{k,l}=H\backslash P_{k,l}$ by $\gamma_{k,l}$. Proposition~\ref{prop: cheeger} implies that $g_{k,l}$ and $h_{k,l, r}$ lie in the same path-component of $\mathcal{M}_{K\ge 0}(M_{k,l})$. Thus, for $k_i,l_i$ as in the previous paragraph, the metrics $g_{k_i,l_i}$ lie in different connected components of $\mathcal{M}_{K\ge 0}(M)$. Consider the associated vector bundle $P_{k,l}\times_{H}\mathbb{R}^3$ over $M_{k,l}$ where $H=\mathbb{S}^3$ acts on $\mathbb{R}^3$ via the universal covering $\mathbb{S}^3\to SO(3)$. We give $P_{k,l}\times_{H}\mathbb{R}^3$ the Riemannian submersion metric induced by the product of $\gamma_{k,l}$ and the standard Euclidean metric. This is a complete metric of $K\ge 0$ with soul $P_{k,l}\times_{H}\{0\}$, which is isometric to $(M_{k,l}, g_{k,l})$. Since $\frac{k_i}{k_i+l_i}\notin\mathbb{Z}$, the principal $\mathbb{S}^3$-bundle $P_{k_i,l_i}\to M_{k_i,l_i}$ is non-trivial by Lemma~\ref{lem:Milnor}.
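For the reader's convenience, we record the elementary computation behind the fact that $\frac{k_i}{k_i+l_i}\notin\mathbb{Z}$: from the definitions of $k_i$ and $l_i$,
\[
k_i+l_i=(k+l-l_i)+l_i=k+l
\qquad\text{and}\qquad
\frac{k_i}{k_i+l_i}=\frac{(k+l)-l_i}{k+l}=1-\frac{l-56(k+l)i}{k+l}=\frac{k}{k+l}+56i,
\]
so $\frac{k_i}{k_i+l_i}\in\mathbb{Z}$ if and only if $\frac{k}{k+l}\in\mathbb{Z}$, which is excluded by assumption.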
Consider the associated $3$-plane bundle $P_{k_i,l_i}\times_{\mathbb{S}^3}\mathbb{R}^3$ over $M_{k_i,l_i}$ where $\mathbb{S}^3$ acts on $\mathbb{R}^3$ via the universal covering $\mathbb{S}^3\to SO(3)$. Any such vector bundle is non-trivial by Lemma~\ref{lem: class spaces}, and hence by Lemma~\ref{lem: 3-plane bundles} its total space is indecomposable. Pull back the vector bundles via diffeomorphisms $M\to M_{k_i,l_i}$. The pullback bundles fall into finitely many isomorphism classes by Lemma~\ref{lem: euler pontr}, so after passing to a subsequence we can assume that the bundles are isomorphic, and hence share the same ten-dimensional total space, which we denote by $V$. In summary, $V$ is an indecomposable manifold that carries infinitely many complete metrics of $K\ge 0$ whose souls all equal the zero section, which is diffeomorphic to $M$, and such that the induced metrics on the souls lie in different connected components of $\mathcal{M}_{K\ge 0}(M)$. Theorem~\ref{thm: BFK cont} finishes the proof. \end{proof} \small \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
math
\begin{document} \newtheorem{definition}{Definition} \newtheorem{remark}{Remark} \newtheorem{proposition}{Proposition} \newtheorem{lemma}{Lemma} \title{Economic Dispatch in Unbalanced Distribution Networks via Semidefinite Relaxation} \author{Emiliano Dall'Anese, Georgios B. Giannakis, and Bruce F. Wollenberg \thanks{\protect\rule{0pt}{0.5cm} Version of the manuscript: June 29, 2012. The authors are with the Department of Electrical and Computer Engineering, University of Minnesota, 200 Union Street SE, Minneapolis, MN 55455, USA. E-mails: {\tt \{emiliano, georgios, wollenbe\}@umn.edu}. Part of this work was presented at the 44th North American Power Symposium, Urbana-Champaign, IL, USA. } } \markboth{} {First Author \MakeLowercase{\textit{et al.}}: Title} \maketitle \begin{abstract} The economic dispatch problem is considered for unbalanced three-phase power distribution networks entailing both non-deferrable and elastic loads, and distributed generation (DG) units. The objective is to minimize the costs of power drawn from the main grid and supplied by the DG units over a given time horizon, while meeting the overall load demand and effecting voltage regulation. Similar to optimal power flow counterparts for balanced systems, the resultant optimization problem is nonconvex. Nevertheless, a semidefinite programming (SDP) relaxation technique is advocated to obtain a (relaxed) convex problem solvable in polynomial time. To promote a reliable yet efficient feeder operation, SDP-compliant constraints on line and neutral current magnitudes are accommodated in the formulated optimization problem, along with constraints on the power factor at the substation and at nodes equipped with capacitor banks. Tests on the IEEE 13-node radial feeder demonstrate the ability of the proposed method to attain the \emph{globally} optimal solution of the original nonconvex problem. \end{abstract} \begin{keywords} Unbalanced distribution systems, economic dispatch, power factor, voltage regulation, elastic loads. \end{keywords} \section{Introduction} \label{sec:Introduction} The advent of distributed energy resources, along with the rapid proliferation of controllable loads such as, e.g., plug-in hybrid electric vehicles (PHEVs), calls for innovative energy management methodologies to ensure highly efficient operation of distribution networks, effect voltage regulation, and facilitate emergency response~\cite{Momoh09}. Toward these goals, variants of the optimal power flow (OPF) problem have been devised with the objective of optimizing the power supplied by distributed generation (DG) units as well as by the utility at the substation, subject to electrical network constraints on powers and voltages, and the expected load profile~\cite{Khodr07}. These approaches, however, are deemed challenging because they require solving nonconvex problems. Non-convexity stems from the nonlinear relationship between voltages and the apparent powers demanded at the loads. Furthermore, the high resistance-reactance ratio in conventional distribution lines severely challenges the convergence of Newton-Raphson iterations, which have been traditionally employed for solving nonconvex OPF problems in transmission networks~\cite{Irving87,Tripathy82}.
This has motivated the adoption of forward/backward sweeping methods~\cite{Cespedes90}, which enable computationally-efficient load flow analysis but are not suitable for optimization purposes; fuzzy dynamic programming~\cite{Lu97}; particle swarm optimization~\cite{Sortomme09}; sequential quadratic optimization~\cite{Driesen10}; and steepest descent-based methods~\cite{Forner}. However, these approaches generally return sub-optimal load flow solutions, and may be computationally cumbersome. To alleviate these concerns, the semidefinite programming (SDP) reformulation of~\cite{Bai08} and \cite{LavaeiLow} was recently extended to \emph{balanced} distribution networks in~\cite{Tse12}, and conditions ensuring global optimality of the obtained solution were derived. Three-phase distribution feeders, however, are inherently \emph{unbalanced} because \emph{i)} a large number of unequal single-phase loads must be served, and \emph{ii)} non-equilateral conductor spacings of three-phase line segments are involved~\cite{Kerstingbook}. As a consequence, optimization approaches cannot rely on single-phase equivalent models (as in, e.g.~\cite{Tse12, Forner}). For the unbalanced setup, an OPF framework was proposed in~\cite{Paudyal11}, where commercial solvers of nonlinear programs were used, and in~\cite{Bruno11}, where Newton methods were utilized in conjunction with OpenDSS load flow solvers. A model based on sequence components was adopted in~\cite{Rashid05}, and the Newton-Raphson algorithm was used. However, since these methods are inherently related to gradient descent solvers of nonconvex programs, they inherit the limitations of being sensitive to initialization, and do not guarantee global optimality of their solutions. The main contribution of the present paper consists in permeating the benefits of SDP relaxation techniques to the economic dispatch problem for \emph{unbalanced} three-phase power distribution systems. This powerful optimization tool not only offers the potential of finding the \emph{globally} optimal solution of the original nonconvex problem with \emph{polynomial-time} complexity~\cite{luospmag10}, but also facilitates the introduction of thermal and quality-of-power constraints without exacerbating the problem complexity. The focus here is on the case where the costs of power provided by the utility company and supplied by the DG units are known in advance over a given time horizon. Then, the goal is to minimize the overall energy cost so that both non-deferrable and elastic load demands are met, and the node voltages stay within prescribed limits. Furthermore, constraints on line and neutral current magnitudes, as well as on the power factor at the substation and at nodes equipped with capacitor banks, are accommodated in the optimization problem in order to improve reliability and efficiency of the distribution feeder. {\it Notation:} Upper (lower) boldface letters will be used for matrices (column vectors); $(\cdot)^{\cal T}$ for transposition; $(\cdot)^*$ for complex conjugation; and $(\cdot)^{\cal H}$ for complex-conjugate transposition; $\Re\{\cdot\}$ denotes the real part, and $\Im\{\cdot\}$ the imaginary part; $j = \sqrt{-1}$ represents the imaginary unit. ${\textrm{Tr}}(\cdot)$ denotes the matrix trace; ${\textrm{rank}}(\cdot)$ the matrix rank; $\circ$ the Hadamard product; and, $|\cdot|$ denotes the magnitude of a number or the cardinality of a set.
Given a vector ${\bf v}$ and a matrix ${\bf V}$, $[{\bf v}]_{{\cal P}}$ denotes a $|{\cal P}| \times 1$ sub-vector containing the entries of ${\bf v}$ indexed by the set ${\cal P}$, and $[{\bf V}]_{{\cal P}_1,{\cal P}_2}$ the $|{\cal P}_1| \times |{\cal P}_2|$ sub-matrix with row and column indexes described by ${\cal P}_1$ and ${\cal P}_2$. Finally, $\mathbf{0}_{M\times N}$ and $\mathbf{1}_{M\times N}$ denote $M \times N$ matrices with all zeroes and ones, respectively. \section{Modeling and problem formulation} \label{sec:Modeling} Consider a radial distribution feeder comprising $N$ nodes collected in the set ${\cal N} := \{1,\ldots,N\}$, and overhead or underground lines represented by the set of edges ${\cal E} := \{(m,n)\} \subset ({\cal N} \cup \{0\}) \times ({\cal N} \cup \{0\})$, where the additional node $0$ corresponds to the point of common coupling (PCC). The feeder operation is to be optimized over a given time interval ${\cal I} := \{1,2,\ldots,T\}$, where each time slot can represent, e.g., ten or fifteen minutes, one hour, etc., depending on the specific short-, medium-, or long-range scheduling horizon~\cite{Paudyal11}. The backbone of the feeder generally consists of three-phase lines, with two- and single-phase connections at times present on laterals and sub-laterals. Let ${\cal P}_{mn} \subseteq \{a,b,c\}$ and ${\cal P}_{n} \subseteq \{a,b,c\}$ denote the set of phases of line $(m,n) \in {\cal E}$ and of node $n \in {\cal N}$, respectively. Further, let $V_{n,t}^{\phi} \in \mathbb{C}$ and $I_{n,t}^{\phi} \in \mathbb{C}$ be the complex line-to-ground voltage at node $n \in {\cal N}$ and time slot $t$ of phase $\phi \in {\cal P}_n$, and the current injected at the same node, phase, and time. As usual, the voltages ${\bf v}_{0,t} := [V_{0,t}^{a},V_{0,t}^{b},V_{0,t}^{c}]^{{\cal T}}$ at the PCC are assumed to be known~\cite{Kerstingbook}. Lines $(m,n) \in {\cal E}$ are modeled as $\pi$-equivalent components, and the $|{\cal P}_{mn}| \times |{\cal P}_{mn}|$ phase impedance and shunt admittance matrices are denoted as ${\bf Z}_{mn} \in \mathbb{C}^{|{\cal P}_{mn}| \times |{\cal P}_{mn}|}$ and ${\bf Y}_{mn}^{(s)} \in \mathbb{C}^{|{\cal P}_{mn}| \times |{\cal P}_{mn}|}$, respectively. If four-wire grounded wye lines or lines with multi-grounded neutrals are present, matrices ${\bf Z}_{mn}$ and ${\bf Y}_{mn}^{(s)}$ can be obtained from the higher-dimensional ``primitive'' matrices via Kron reduction~\cite[Ch. 6]{Kerstingbook}. Using the $\pi$-equivalent model, it follows from Kirchhoff's current law that the current $I_{n,t}^{\phi}$ can be expressed as~\cite{Kerstingbook} \small \begin{align} I_{n,t}^{\phi} & = \sum_{m \in {\cal N}_n} \left[ \left(\frac{1}{2}{\bf Y}_{mn}^{(s)} + {\bf Z}_{mn}^{-1} \right) [{\bf v}_{n,t}]_{{\cal P}_{mn}} - {\bf Z}_{mn}^{-1} [{\bf v}_{m,t}]_{{\cal P}_{mn}} \right]_{\{\phi\}} \label{currents} \end{align} \normalsize where ${\cal N}_n \subset {\cal N}$ is the set of nodes linked to $n$ through a transmission line, and ${\bf v}_{n,t} \in \mathbb{C}^{|{\cal P}_{n}|}$ denotes the column vector collecting the voltages at node $n$ and time slot $t$. Three- or single-phase transformers (if any) are modeled as series components with transmission parameters that depend on the connection type~\cite[Ch.~8]{Kerstingbook},~\cite{Paudyal11}. Per phase $\phi \in {\cal P}_n$ and node $n \in {\cal N}$, the following two classes of loads are considered.
\begin{itemize} \item A base \emph{non-deferrable} load with active and reactive powers demanded at time $t$ denoted by $P_{L,n,t}^{\phi}$ and $Q_{L,n,t}^{\phi}$, respectively. \item A set ${\cal D}_n^{\phi}$ of \emph{controllable} (elastic) loads, each with a prescribed energy requirement $E_{d,n}^{\phi}$ to be completed over a given interval ${\cal I}_{d,n}^{\phi} := [s_{d,n}^{\phi},f_{d,n}^{\phi}] \subseteq {\cal I}$, with $s_{d,n}^{\phi}$ representing the starting time, and $f_{d,n}^{\phi}$ the termination slot; that is, $\sum_{t \in {\cal I}_{d,n}^{\phi}} \bar{P}_{d,n,t}^{\phi} \Delta_t = E_{d,n}^{\phi}$, with $\bar{P}_{d,n,t}^{\phi}$ the amount of active power supplied to the controllable load $d \in {\cal D}_n^{\phi}$ at time slot $t$, and $\Delta_t > 0$ the duration of the time slot. \end{itemize} A typical example of controllable loads is PHEVs, whose charging process can be shifted from hours with a high price of electricity~\cite{Aliprantis12} and high load conditions of the distribution network~\cite{Driesen10, Sundstrom12} to off-peak hours. In this case, users specify the time when PHEVs will be plugged in, and the time by which the charging has to be completed~\cite{Driesen10, Aliprantis12}. In distribution feeders, capacitor banks are mounted at selected nodes to provide reactive power support, aid voltage regulation, and correct the load power factor (PF). As usual, capacitors can be modeled as wye or delta loads with constant susceptance~\cite{Paudyal11},~\cite[Ch.~9]{Kerstingbook}. Therefore, with $y_{C,n}^{\phi}$ denoting the susceptance of a capacitor connected at node $n$ and phase $\phi$, the reactive power $Q_{C,n,t}^{\phi}$ provided by the capacitor at time $t$ is given by $Q_{C,n,t}^{\phi} = y_{C,n}^{\phi} |V_{n,t}^{\phi}|^2$. To satisfy the load demand, DG units such as diesel generators and fuel cells can be employed to complement the power drawn from the main distribution grid. Then, suppose that $S$ DG units are located at the nodes ${\cal S} \subset {\cal N}$, and let $P_{G,s,t}^{\phi}$ and $Q_{G,s,t}^{\phi}$ denote the active and reactive powers supplied at time $t$ and phase $\phi$ by the unit located at node $s \in {\cal S}$. The focus is on the case where the costs of power provided by the utility company $P_{0,t}^{\phi} := \Re\{V_{0,t}^{\phi} (I_{0,t}^{\phi})^*\}$, $\phi = a,b,c$, and supplied or consumed by the DG units are determined in advance for the period ${\cal I}$. Let $\{\kappa_{0,t}\}$ denote the former, and $\{c_{s,t}\}$ the latter.
Then, the goal is to minimize the overall cost of power purchased from the main grid and generated within the feeder (economic dispatch), so that the total load demand is met, and the node voltages stay within prescribed limits; that is, the following problem is to be solved, where ${\cal V}^{(1)} := \{\{I_{0,t}^{\phi}\}_{\forall \phi,t}, \{V_{n,t}^{\phi}, I_{n,t}^{\phi}, P_{G,n,t}^{\phi},Q_{G,n,t}^{\phi}\}_{\forall \, \phi, n, t}\}$ and ${\cal V}_d := \{ \{\bar{P}_{d,n,t}^{\phi} \}_{\forall \phi,n,t}\} $ collect the optimization variables: \begin{subequations} \label{Pmg} \begin{align} & \hspace{-.5cm} (P1)\,\, \min_{\substack{{\cal V}^{(1)}\\ {\cal V}_d }} \sum_{t \in {\cal I}} \left(\kappa_{0,t} \sum_{\phi \in {\cal P}_0} P_{0,t}^{\phi} + \sum_{s \in {\cal S}} c_{s,t} \sum_{\phi \in {\cal P}_s} P_{G,s,t}^{\phi} \right) \label{mg-cost} \\ \textrm{s.t.} \,\, & V_{s,t}^{\phi} (I_{s,t}^{\phi})^* = P_{G,s,t}^{\phi} - P_{L,s,t}^{\phi} - \sum_{d \in {\cal D}_s^{\phi}} \bar{P}_{d,s,t}^{\phi} \nonumber \\ & \hspace{.7cm}+ j Q_{G,s,t}^{\phi} - j Q_{L,s,t}^{\phi}, \,\, \forall \,\, t \in {\cal I}, \phi \in {\cal P}_s, \, s \in {\cal S} \label{mg-balance-source} \\ & V_{n,t}^{\phi} (I_{n,t}^{\phi})^* = - P_{L,n,t}^{\phi} - \sum_{d \in {\cal D}_n^{\phi}} \bar{P}_{d,n,t}^{\phi} - j Q_{L,n,t}^{\phi} \nonumber \\ & \hspace{.7cm} + j y_{C,n}^{\phi} |V_{n,t}^{\phi}|^2, \hspace{.2cm} \forall\,\, t \in {\cal I}, \phi \in {\cal P}_n, \, n \in {\cal N} \backslash {\cal S} \label{mg-balance} \\ & \sum_{t \in {\cal I}_{d,n}^{\phi}} \bar{P}_{d,n,t}^{\phi} \Delta_t = E_{d,n}^{\phi}, \hspace{.1cm} \forall \,\, d \in {\cal D}_n^{\phi}, \phi \in {\cal P}_n, n \in {\cal N} \label{mg-controllable} \\ & \hspace{.6cm} 0 \leq \bar{P}_{d,n,t}^{\phi} \leq \bar{P}_{d,n}^{\textrm{max}}, \hspace{.2cm} \forall \,\, t \in {\cal I}, d \in {\cal D}_n^{\phi}, \phi \in {\cal P}_n,\, n \in {\cal N} \label{mg-Pdlimits} \\ & V_{n}^{\mathrm{min}} \leq |V_{n,t}^{\phi}| \leq V_{n}^{\mathrm{max}} , \hspace{.15cm} \forall \,\, t \in {\cal I}, \phi \in {\cal P}_n,\, n \in {\cal N} \label{mg-Vlimits} \\ & P_{G,s}^{\textrm{min}} \leq P_{G,s,t}^{\phi} \leq P_{G,s}^{\textrm{max}}, \hspace{.25cm} \forall \,\, t \in {\cal I}, \phi \in {\cal P}_s,\, s \in {\cal S} \label{mg-plimits} \\ & Q_{G,s}^{\textrm{min}} \leq Q_{G,s,t}^{\phi} \leq Q_{G,s}^{\textrm{max}}, \hspace{.15cm} \forall \,\, t \in {\cal I}, \phi \in {\cal P}_s,\, s \in {\cal S} \label{mg-qlimits} \end{align} \end{subequations} where $P_{G,s}^{\textrm{min}},P_{G,s}^{\textrm{max}}, Q_{G,s}^{\textrm{min}},Q_{G,s}^{\textrm{max}}$ capture physical and operational constraints of the DG units, and $V_n^{\mathrm{min}}$ and $V_n^{\mathrm{max}}$ are given minimum and maximum utilization and service voltages. Finally, $\bar{P}_{d,n}^{\textrm{max}}$ represents a possible cap on $\bar{P}_{d,n,t}^{\phi}$. If capacitor banks are not present at node $n$, the term $j y_{C,n}^{\phi} |V_{n,t}^{\phi}|^2$ is simply dropped from~\eqref{mg-balance}. Recall that the voltages at the PCC are assumed known. However, if needed, this assumption can be relaxed, and (P1) can be appropriately re-stated. Unfortunately, (P1) is a nonlinear \emph{nonconvex} problem due to the load flow equations~\eqref{mg-balance-source}-\eqref{mg-balance} as well as the voltage constraints~\eqref{mg-Vlimits}. In the next section, an equivalent reformulation of (P1) will be derived, and its solution will be tackled by employing an SDP relaxation technique.
\section{Relaxed semi-definite programming} \label{sec:semidefinite} Consider first a distribution feeder with only three-phase lines and nodes; that is, $|{\cal P}_{n}| = 3$ for all $n \in {\cal N}$, and $|{\cal P}_{nl}| = 3$ for all lines $(n,l) \in {\cal E}$. Let $\mathbf{Y} \in \mathbb{C}^{3(N+1) \times 3(N+1)}$ be a symmetric matrix defined as [cf.~\eqref{currents}] \begin{equation} [{\bf Y}]_{{\cal P}_n,{\cal P}_m} := \left \{\begin{array}{ll} \sum_{j \in {\cal N}_n} \left(\frac{1}{2}{\bf Y}_{jn}^{(s)} + {\bf Z}_{jn}^{-1} \right), & \textrm{if } m=n\\ -{\bf Z}_{mn}^{-1}, &\textrm{if }(m,n)\in {\cal E}\\ \mathbf{0}_{3 \times 3}, & \textrm{otherwise} \end{array} \right. \label{eq:Ymatrix} \nonumber \end{equation} and define the $3(N+1) \times 1$ vectors ${\bf v}_t := [{\bf v}_{0,t}^{\cal T}, \ldots, {\bf v}_{N,t}^{\cal T}]^{\cal T}$ and ${\bf i}_t := [{\bf i}_{0,t}^{\cal T}, \ldots, {\bf i}_{N,t}^{\cal T}]^{\cal T}$, with ${\bf i}_{n,t} := [I^a_{n,t},I^b_{n,t},I^c_{n,t}]^{\cal T}$. Then,~\eqref{currents} can be re-written in vector-matrix form as ${\bf i}_{t} = {\bf Y} {\bf v}_{t}$. Since the PCC voltages $\{{\bf v}_{0,t}\}$ are known, re-write the vector of complex voltages as ${\bf v}_{t} = {\bf a}_{0,t} \circ {\bf x}_{t}$, with ${\bf x}_{t}:= [\mathbf{1}_3^{\cal T}, {\bf v}_{1,t}^{\cal T}, \ldots, {\bf v}_{N,t}^{\cal T}]^{\cal T}$ and ${\bf a}_{0,t} := [{\bf v}_{0,t}^{\cal T},\mathbf{1}_{|{\cal P}_{1}|}^{{\cal T}},\ldots,\mathbf{1}_{|{\cal P}_{N}|}^{{\cal T}}]^{\cal T}$, for all $t \in {\cal I}$. Then, consider expressing the active and reactive powers injected at each node at time $t$, as well as the voltage magnitudes, as linear functions of the outer-product matrix ${\bf X}_t := {\bf x}_t {\bf x}_t^{\cal H}$. To this end, define the following admittance-related matrix per node $n$ and phase $\phi$ \begin{equation} {\bf Y}_n^{\phi} := \bar{{\bf e}}_n^{\phi} (\bar{{\bf e}}_n^{\phi})^{{\cal T}} {\bf Y} \label{eq:Ynode} \end{equation} where $\bar{{\bf e}}_n^{\phi} := [\mathbf{0}_{|{\cal P}_0|}^{{\cal T}},\ldots,\mathbf{0}_{|{\cal P}_{n-1}|}^{{\cal T}},{\bf e}^{\phi, {\cal T}},\mathbf{0}_{|{\cal P}_{n+1}|}^{{\cal T}},\ldots,\mathbf{0}_{|{\cal P}_N|}^{{\cal T}}]^{{\cal T}}$, and $\{{\bf e}^\phi\}_{\phi \in \{a,b,c\}}$ denotes the canonical basis of $\mathbb{R}^3$. Denote for future use the Hermitian matrices \begin{subequations} \label{eq:Phi} \begin{align} \boldsymbol{\Phi}_{P,n,t}^{\phi} & := \frac{1}{2}{\bf D}_{0,t}^{{\cal H}} ({\bf Y}_n^{\phi} + ({\bf Y}_n^{\phi})^{\cal H}) {\bf D}_{0,t} \label{eq:PhiP} \\ \boldsymbol{\Phi}_{Q,n,t}^{\phi} & := \frac{j}{2}{\bf D}_{0,t}^{{\cal H}} ({\bf Y}_n^{\phi} - ({\bf Y}_n^{\phi})^{\cal H}) {\bf D}_{0,t} \label{eq:PhiQ} \\ \boldsymbol{\Phi}_{V,n,t}^{\phi} & := {\bf D}_{0,t}^{{\cal H}} \bar{{\bf e}}_n^{\phi} (\bar{{\bf e}}_n^{\phi})^{{\cal T}} {\bf D}_{0,t} \label{eq:PhiV} \end{align} \end{subequations} with ${\bf D}_{0,t} := \textrm{diag}({\bf a}_{0,t})$. Using~\eqref{eq:Phi}, a linear model in ${\bf X}_t$ (and therefore in ${\bf V}_t := {\bf v}_t {\bf v}_t^{\cal H}$) is established in the following lemma (see also~\cite{ZhuNAPS11} and~\cite{LavaeiLow}).
\begin{lemma} \label{SDPreformulation} Apparent powers and voltage magnitudes are linearly related to $\{{\bf X}_t\}$ as [cf.~\eqref{eq:Phi}] \begin{subequations} \label{eq:PQV} \begin{align} {\textrm{Tr}}(\boldsymbol{\Phi}_{P,n,t}^{\phi} {\bf X}_t) & = P_{G,n,t}^{\phi} - P_{L,n,t}^{\phi} - \sum_{d \in {\cal D}_n^{\phi}} \bar{P}_{d,n,t}^{\phi} \label{eq:P} \\ {\textrm{Tr}}(\boldsymbol{\Phi}_{Q,n,t}^{\phi} {\bf X}_t) & = Q_{G,n,t}^{\phi} - Q_{L,n,t}^{\phi} + y_{C,n}^{\phi}{\textrm{Tr}}(\boldsymbol{\Phi}_{V,n,t}^{\phi} {\bf X}_t) \label{eq:Q} \\ {\textrm{Tr}}(\boldsymbol{\Phi}_{V,n,t}^{\phi} {\bf X}_t) & = |V_{n,t}^{\phi}|^2 \label{eq:V} \end{align} \end{subequations} with $P_{G,n,t}^{\phi} = Q_{G,n,t}^{\phi} = 0$ for $n \in {\cal N}\backslash {\cal S}$, and $y_{C,n}^{\phi} = 0$ if capacitor banks are not present at node $n$. \end{lemma} \emph{Proof.} See the Appendix. $\Box$ Using~\eqref{eq:P}--\eqref{eq:V}, problem (P1) is \emph{equivalently} reformulated as follows: \begin{subequations} \label{mg2} \begin{align} & \hspace{-.5cm} (P2) \,\, \min_{\{{\bf X}_t\},{\cal V}_d} \sum_{t \in {\cal I}} \kappa_{0,t} \sum_{\phi \in {\cal P}_0} {\textrm{Tr}}(\boldsymbol{\Phi}_{P,0,t}^{\phi} {\bf X}_t) \nonumber \\ & \hspace{1.1cm} + \sum_{t \in {\cal I}} \sum_{s \in {\cal S}} c_{s,t} \sum_{\phi \in {\cal P}_s} {\textrm{Tr}}(\boldsymbol{\Phi}_{P,s,t}^{\phi} {\bf X}_t) \\ \textrm{s.t.} \,\, & {\textrm{Tr}}(\boldsymbol{\Phi}_{P,n,t}^{\phi} {\bf X}_t) + P_{L,n,t}^{\phi} + \sum_{d \in {\cal D}_n^{\phi}} \bar{P}_{d,n,t}^{\phi} = 0, \nonumber \\ &\hspace{3cm} \forall\,\, t \in {\cal I}, \phi \in {\cal P}_n, \, \forall \, n \in {\cal N} \backslash {\cal S} \label{mg2-P} \\ & {\textrm{Tr}}(\boldsymbol{\Phi}_{Q,n,t}^{\phi} {\bf X}_t) + Q_{L,n,t}^{\phi} -y_{C,n}^{\phi}{\textrm{Tr}}(\boldsymbol{\Phi}_{V,n,t}^{\phi} {\bf X}_t) = 0, \nonumber \\ & \hspace{3cm} \forall\,\, t \in {\cal I}, \phi \in {\cal P}_n, \, \forall \, n \in {\cal N} \backslash {\cal S} \label{mg2-Q} \\ &P_{G,s}^{\textrm{min}} \leq {\textrm{Tr}}(\boldsymbol{\Phi}_{P,s,t}^{\phi} {\bf X}_t) + P_{L,s,t}^{\phi} + \sum_{d \in {\cal D}_s^{\phi}} \bar{P}_{d,s,t}^{\phi}\leq P_{G,s}^{\textrm{max}}, \nonumber \\ & \hspace{3cm} \forall\,\, t \in {\cal I}, \phi \in {\cal P}_s, \, \forall \, s \in {\cal S} \label{mg2-Pg} \\ &Q_{G,s}^{\textrm{min}} \leq {\textrm{Tr}}(\boldsymbol{\Phi}_{Q,s,t}^{\phi} {\bf X}_t) + Q_{L,s,t}^{\phi} \leq Q_{G,s}^{\textrm{max}}, \nonumber \\ & \hspace{3cm} \forall\,\, t \in {\cal I}, \phi \in {\cal P}_s, \, \forall \, s \in {\cal S} \label{mg2-Qg} \\ & (V_n^{\mathrm{min}})^2 \leq {\textrm{Tr}}(\boldsymbol{\Phi}_{V,n,t}^{\phi} {\bf X}_t) \leq (V_n^{\mathrm{max}})^2, \,\, \forall\,\, t \in {\cal I}, \phi \in {\cal P}_n, \, n \in {\cal N} \label{mg2-voltage} \\ & \sum_{t \in {\cal I}_{d,n}^{\phi}} \bar{P}_{d,n,t}^{\phi} \Delta_t = E_{d,n}^{\phi}, \hspace{.1cm} \forall \,d \in {\cal D}_n^{\phi}, \phi \in {\cal P}_n, n \in {\cal N} \label{mg2-controllable} \\ & \hspace{.6cm} 0 \leq \bar{P}_{d,n,t}^{\phi} \leq \bar{P}_{d,n}^{\textrm{max}}, \hspace{.2cm} \forall \,\, t \in {\cal I}, d \in {\cal D}_n^{\phi},\, n \in {\cal N} \label{mg2-Pdlimits} \\ & {\textrm{rank}}({\bf X}_t) = 1, \hspace{.2cm} \forall \,\, t \in {\cal I} \label{mg2-rank} \\ & {\bf X}_t \succeq \mathbf{0}, \,\, [{\bf X}_t]_{{\cal P}_0,{\cal P}_0} = \mathbf{1}_{3\times3}, \hspace{.2cm} \forall \,\, t \in {\cal I} \, . \end{align} \end{subequations} Unfortunately, (P2) is still nonconvex because of the rank-1 constraint on the positive semi-definite matrices $\{{\bf X}_t\}$.
Nevertheless, (P2) is amenable to the SDP relaxation technique, which amounts to dropping the rank constraints, thus relaxing nonconvex problems to SDP ones; see e.g., the tutorial~\cite{luospmag10}, and the works in~\cite{ZhuNAPS11} and~\cite{LavaeiLow}, where this technique is employed for power system state estimation and OPF for power transmission systems, respectively. Leveraging the SDP relaxation technique here too, it is possible to obtain the following \emph{convex} relaxation of (P2): \begin{subequations} \begin{align} & \hspace{-.5cm} (P3) \,\, \min_{\{{\bf X}_t\},{\cal V}_d} \sum_{t \in {\cal I}} \kappa_{0,t} \sum_{\phi \in {\cal P}_0} {\textrm{Tr}}(\boldsymbol{\Phi}_{P,0,t}^{\phi} {\bf X}_t) \nonumber \\ & \hspace{1.1cm} + \sum_{t \in {\cal I}} \sum_{s \in {\cal S}} c_{s,t} \sum_{\phi \in {\cal P}_s} {\textrm{Tr}}(\boldsymbol{\Phi}_{P,s,t}^{\phi} {\bf X}_t) \\ \textrm{s.t.} & \,\, {\bf X}_t \succeq \mathbf{0}, \,\, [{\bf X}_t]_{{\cal P}_0,{\cal P}_0} = \mathbf{1}_{3\times3}, \textrm{ and } \eqref{mg2-P}-\eqref{mg2-Pdlimits} \, . \nonumber \end{align} \end{subequations} Clearly, if all the optimal matrices $\{{\bf X}_{t}^{\textrm{opt}}\}$ of (P3) have rank $1$, then the variables $\{{\bf X}_{t}^{\textrm{opt}}\}, {\cal V}_{d}^{\textrm{opt}}$ represent a globally optimal solution also for (P2). Further, since (P1) and (P2) are \emph{equivalent}, there exist $2 |{\cal I}|$ vectors $\{{\bf x}_{t}^{\textrm{opt}}\}$ and $\{{\bf v}_{t}^{\textrm{opt}}\}$, with ${\bf X}_{t}^{\textrm{opt}} = {\bf x}_{t}^{\textrm{opt}} {\bf x}_{t}^{\textrm{opt} {\cal H}}$ and ${\bf v}_{t}^{\textrm{opt}} := {\bf a}_{0,t} \circ {\bf x}_{t}^{\textrm{opt}}$, for all $t\in {\cal I}$, such that the optimal objective functions of (P1) and (P2) coincide. This is formally summarized next. \begin{proposition} Let $\{{\bf X}_{t}^{\textrm{opt}}\}, {\cal V}_{d}^{\textrm{opt}}$ be the optimal solution of (P3), and assume that ${\textrm{rank}}({\bf X}_{t}^{\textrm{opt}}) = 1$, for all $t \in {\cal I}$. Then, a globally optimal solution of (P1) is given by ${\cal V}_{d}^{\textrm{opt}}$, the vectors of complex line-to-ground voltages \begin{align} {\bf v}_{t}^{\textrm{opt}} := \sqrt{\lambda_{1,t}} {\bf D}_{0,t} {\bf u}_{1,t} \, , \quad \forall \,\, t \in {\cal I} \end{align} where $\lambda_{1,t} \in \mathbb{R}^+$ is the unique non-zero eigenvalue of ${\bf X}_{t}^{\textrm{opt}}$ and ${\bf u}_{1,t}$ the corresponding eigenvector, and the supplied active and reactive powers \begin{align} P_{G,s,t}^{\phi,\textrm{opt}} & = {\textrm{Tr}}(\boldsymbol{\Phi}_{P,s,t}^{\phi} {\bf v}_{t}^{\textrm{opt}} {\bf v}_{t}^{\textrm{opt} {\cal H}} ) + P_{L,s,t}^{\phi} + \sum_{d \in {\cal D}_s^{\phi}} \bar{P}_{d,s,t}^{\phi, \textrm{opt}} \\ Q_{G,s,t}^{\phi,\textrm{opt}} & = {\textrm{Tr}}(\boldsymbol{\Phi}_{Q,s,t}^{\phi} {\bf v}_{t}^{\textrm{opt}} {\bf v}_{t}^{\textrm{opt} {\cal H}} ) + Q_{L,s,t}^{\phi} \, , \,\forall s \in {\cal S} \cup \{0\} . \end{align} \label{prop:rank1} \end{proposition} The upshot of the proposed formulation is that the \emph{globally} optimal solution of (P2) (and hence (P1)) can be obtained via standard interior-point solvers, with \emph{polynomial-time} complexity; see, for example, the complexity bounds for SDP reported in~\cite[Ch.~4]{Nemirovski_lecture} and~\cite{luospmag10}.
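To make the structure of (P3) and the recovery step of Proposition~\ref{prop:rank1} concrete, the following minimal Python sketch (not part of the original formulation) solves a relaxed SDP of the same form for a hypothetical single-phase, single-period toy feeder; all network data, loads, bounds, and prices below are illustrative placeholders. The sketch assembles matrices mirroring~\eqref{eq:Phi} from an assumed bus admittance matrix, solves the relaxation with \texttt{cvxpy}, and recovers the voltage profile by the eigendecomposition of Proposition~\ref{prop:rank1}.
\begin{verbatim}
# Minimal sketch (NOT the authors' code): SDP relaxation for a toy
# single-phase, single-period 3-node feeder, plus rank-1 recovery.
import numpy as np
import cvxpy as cp

# Hypothetical per-unit line admittances; node 0 is the PCC.
y01, y12 = 1.0 - 3.0j, 0.8 - 2.5j
Y = np.array([[ y01,      -y01,        0.0],
              [-y01,  y01 + y12,      -y12],
              [ 0.0,      -y12,        y12]], dtype=complex)
n_bus = Y.shape[0]
v0 = 1.02 + 0.0j                       # known PCC voltage
D0 = np.diag([v0, 1.0, 1.0])           # plays the role of D_{0,t}

def e(n):                              # canonical basis column vector
    out = np.zeros((n_bus, 1)); out[n, 0] = 1.0; return out

def phi_matrices(n):
    """Return (Phi_P, Phi_Q, Phi_V) for node n, mirroring the text."""
    Yn = e(n) @ e(n).T @ Y
    PhiP = 0.5  * D0.conj().T @ (Yn + Yn.conj().T) @ D0
    PhiQ = 0.5j * D0.conj().T @ (Yn - Yn.conj().T) @ D0
    PhiV = D0.conj().T @ e(n) @ e(n).T @ D0
    return PhiP, PhiQ, PhiV

# Illustrative loads (p.u.), DG limits, prices, and voltage bounds.
PL, QL = {1: 0.30, 2: 0.20}, {1: 0.10, 2: 0.05}
PG_max, QG_max = 0.25, 0.10
kappa0, c_dg = 50.0, 30.0
Vmin, Vmax = 0.95, 1.05

X = cp.Variable((n_bus, n_bus), hermitian=True)
constraints = [X >> 0, X[0, 0] == 1.0]   # scalar analogue of the PCC block
PhiP0, _, _ = phi_matrices(0)
cost = kappa0 * cp.real(cp.trace(PhiP0 @ X))

for n in (1, 2):
    PhiP, PhiQ, PhiV = phi_matrices(n)
    p_inj = cp.real(cp.trace(PhiP @ X))  # net active injection at node n
    q_inj = cp.real(cp.trace(PhiQ @ X))
    v_sq  = cp.real(cp.trace(PhiV @ X))
    constraints += [v_sq >= Vmin**2, v_sq <= Vmax**2]
    if n == 1:                           # pure load node
        constraints += [p_inj == -PL[n], q_inj == -QL[n]]
    else:                                # node with a DG unit
        constraints += [p_inj + PL[n] >= 0, p_inj + PL[n] <= PG_max,
                        q_inj + QL[n] >= -QG_max, q_inj + QL[n] <= QG_max]
        cost += c_dg * (p_inj + PL[n])

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()

# Rank-1 recovery as in Proposition 1: v = sqrt(lambda_1) * D0 * u_1.
eigval, eigvec = np.linalg.eigh(X.value)
print("eigenvalues of X*:", np.round(eigval, 6))
if eigval[-1] > 0 and eigval[-2] / eigval[-1] < 1e-5:  # numerically rank one
    v = np.sqrt(eigval[-1]) * (D0 @ eigvec[:, -1])
    print("recovered voltages:", np.round(v, 4))
\end{verbatim}
In the three-phase multi-period case the same template applies, with the block constraint $[{\bf X}_t]_{{\cal P}_0,{\cal P}_0}=\mathbf{1}_{3\times 3}$ replacing the scalar PCC constraint and one set of trace constraints per node, phase, and time slot.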
This is in contrast with gradient descent-based solvers for nonconvex programs, sequential quadratic programming, and particle swarm optimization, which in general do not guarantee global optimality of the obtained solutions, and face challenges pertaining to sensitivity to the initial guess, convergence, and a complexity that increases with the number of iterations. Notice also that the matrices $\{\boldsymbol{\Phi}_{P,n,t}^{\phi},\boldsymbol{\Phi}_{Q,n,t}^{\phi}, \boldsymbol{\Phi}_{V,n,t}^{\phi}\}$ are very sparse. This property can be leveraged to substantially reduce the computational burden of interior-point solvers; for instance, the so-called ``chordal'' structure of matrices $\{{\bf X}_t\}$ can be effectively exploited, as advocated in~\cite{Jabr12}. Since (P3) is a relaxed version of (P2), matrices ${\bf X}_{t}^{\textrm{opt}}$ could have rank greater than $1$. In this case, rank reduction techniques can be employed to find a feasible rank-1 approximation of ${\bf X}_{t}^{\textrm{opt}}$ (see~\cite{luospmag10} and references therein). The resultant solution is feasible for (P2), but generally suboptimal~\cite{luospmag10}. Notably, when \emph{balanced} distribution networks are considered,~\cite{Tse12} established conditions on the voltage angles and the reactive power injections under which rank-$1$ matrices are \emph{always} obtained, provided that the non-relaxed problem is feasible. Derivation of similar conditions in the present context constitutes an interesting future research direction that would naturally complement the result in~\cite{Tse12}. \subsection{Feeders with two- and single-phase lines} \label{sec:mixed} For feeders with two- and single-phase laterals and sub-laterals, the dimensions of matrix ${\bf Y}$ have to be adjusted to $\sum_{n=0}^N |{\cal P}_n| \times \sum_{n=0}^N |{\cal P}_n|$, and its entries have to be filled as follows: \emph{i)} matrix $-{\bf Z}_{mn}^{-1}$ occupies the $|{\cal P}_{mn}| \times |{\cal P}_{mn}|$ off-diagonal block corresponding to line $(m,n) \in {\cal E}$; and, \emph{ii)} the $|{\cal P}_{n}| \times |{\cal P}_{n}|$ diagonal block corresponding to node $n \in {\cal N} \cup \{0\}$ is obtained as \begin{align} [{\bf Y}]_{{\cal P}_n,{\cal P}_n} := \sum_{m \in {\cal N}_n} \left(\frac{1}{2}\tilde{{\bf Y}}_{mn}^{(s)} + \tilde{{\bf Z}}_{mn}^{-1} \right) \end{align} where $\tilde{{\bf Z}}_{mn} = {\bf Z}_{mn}$ and $\tilde{{\bf Y}}_{mn}^{(s)} = {\bf Y}_{mn}^{(s)}$ if ${\cal P}_n = {\cal P}_{mn}$, otherwise $[\tilde{{\bf Z}}_{mn}]_{{\cal P}_{mn},{\cal P}_{mn}} = {\bf Z}_{mn}$ and $[\tilde{{\bf Z}}_{mn}]_{{\cal P}_n \backslash {\cal P}_{mn},{\cal P}_n \backslash {\cal P}_{mn}} = 0$ ($\tilde{{\bf Y}}_{mn}^{(s)}$ is computed likewise). Re-defining the $\sum_{n=0}^N |{\cal P}_n| \times 1$ vectors collecting voltages and currents as ${\bf v}^\prime_t := [{\bf v}_{0,t}^{\cal T}, [{\bf v}_{1,t}]_{{\cal P}_{1}}^{\cal T},\ldots, [{\bf v}_{N,t}]_{{\cal P}_{N}}^{\cal T}]^{\cal T}$ and ${\bf i}^\prime_t := [{\bf i}_{0,t}^{\cal T}, [{\bf i}_{1,t}]_{{\cal P}_{1}}^{\cal T},\ldots, [{\bf i}_{N,t}]_{{\cal P}_{N}}^{\cal T}]^{\cal T}$, respectively,~\eqref{currents} can be re-written again in vector-matrix form as ${\bf i}^\prime_t = {\bf Y} {\bf v}^\prime_t$, for all $t \in {\cal I}$, and the procedure~\eqref{eq:Phi}--\eqref{mg2} can be readily followed. \section{Feasible voltage profile} \label{sec:feasibility} To effect voltage regulation and avoid abrupt voltage drops, constraints on the minimum and maximum utilization and service voltages were imposed in (P1).
Constraints~\eqref{mg-Vlimits}, however, may challenge the feasibility of (P1), since it may not be possible to meet the minimum (maximum) utilization and service voltage requirements when feeders are heavily stressed and DG units supply a substantial amount of power. It is thus of prime importance to perform preemptive analysis of the feasible voltage profile in order to unveil possible infeasibility of (P1) and, if needed, facilitate corrective actions. To this end, consider solving the following optimization problem \begin{subequations} \label{Pmg4} \begin{align} & \hspace{-.5cm} (P4)\,\, \min_{\substack{{\cal V}^{(1)}, {\cal V}_d }} \,\, (1-w_V) \sum_{t \in {\cal I}} \sum_{n \in {\cal N}} \sum_{\phi \in {\cal P}_n} \left( |V_{n,t}^{\phi}|^2 - |V_{n}^{\textrm{ref}}|^2 \right)^2 \nonumber \\ & + w_V \sum_{t \in {\cal I}} \left(\kappa_{0,t} \sum_{\phi \in {\cal P}_0} P_{0,t}^{\phi} + \sum_{s \in {\cal S}} c_{s,t} \sum_{\phi \in {\cal P}_s} P_{G,s,t}^{\phi} \right) \label{mg-cost4} \\ & \textrm{s.t.} \,\, \eqref{mg-balance-source}-\eqref{mg-Pdlimits}, \textrm{ and }~\eqref{mg-plimits}-\eqref{mg-qlimits} \nonumber \end{align} \end{subequations} with $w_V \in (0,1)$ denoting a weighting coefficient, and $|V_{n}^{\textrm{ref}}|$ the prescribed voltage magnitude of the feeder (e.g., $|V_{n}^{\textrm{ref}}| = 1$ p.u.). Although constraints~\eqref{mg-Vlimits} are not present in (P4), the first term in~\eqref{mg-cost4} promotes regulation by penalizing voltage magnitudes that deviate from the nominal ones. Similar to (P1), problem~\eqref{Pmg4} is nonconvex. However, by exploiting again Lemma~\ref{SDPreformulation}, along with the SDP relaxation technique, the following convex problem is obtained \begin{subequations} \label{Pmg5} \begin{align} & \hspace{-.5cm} (P5)\,\, \min_{\substack{\{{\bf X}_t\}, {\cal V}_d \\ \{\alpha_{n,t}^{\phi} \}}} \,\, (1-w_V) \sum_{t, n, \phi} \alpha_{n,t}^{\phi} \nonumber \\ & \hspace{1.5cm} + w_V \sum_{t \in {\cal I}} \kappa_{0,t} \sum_{\phi \in {\cal P}_0} {\textrm{Tr}}(\boldsymbol{\Phi}_{P,0,t}^{\phi} {\bf X}_t) \nonumber \\ & \hspace{1.5cm} + w_V \sum_{t \in {\cal I}} \sum_{s \in {\cal S}} c_{s,t} \sum_{\phi \in {\cal P}_s} {\textrm{Tr}}(\boldsymbol{\Phi}_{P,s,t}^{\phi} {\bf X}_t) \label{mg-cost5} \\ \textrm{s.t.} & \nonumber \\ & \left[ \begin{array}{ll} - \alpha_{n,t}^{\phi} & {\textrm{Tr}}(\boldsymbol{\Phi}_{V,n,t}^{\phi} {\bf X}_t)-|V_{n}^{\textrm{ref}}|^2 \\ {\textrm{Tr}}(\boldsymbol{\Phi}_{V,n,t}^{\phi} {\bf X}_t)-|V_{n}^{\textrm{ref}}|^2 & -1 \end{array} \right] \preceq \mathbf{0} \label{mg5-alpha} \\ & \,\, {\bf X}_t \succeq \mathbf{0}, \,\, [{\bf X}_t]_{{\cal P}_0,{\cal P}_0} = \mathbf{1}_{3\times3}, \textrm{ and } \eqref{mg2-P}-\eqref{mg2-Pdlimits} \nonumber \end{align} \end{subequations} where constraint~\eqref{mg5-alpha} is enforced for all nodes $n$, per phase $\phi$, and time slot $t$. Notice that by using~\eqref{eq:V} the first term in~\eqref{mg-cost4} becomes quadratic in $\{{\bf X}_t\}$. To bypass this hurdle, the non-negative real variables $\{\alpha_{n,t}^{\phi} \}$ are introduced to upper bound each term $({\textrm{Tr}}(\boldsymbol{\Phi}_{V,n,t}^{\phi} {\bf X}_t)-|V_{n}^{\textrm{ref}}|^2)^2$, and Schur's complement is subsequently employed to obtain~\eqref{mg5-alpha}. If, for a given $w_V$, all the optimal matrices $\{{\bf X}_t\}$ have rank $1$, then the optimal solution of (P5) is also a globally optimal solution of (P4) (see Proposition~\ref{prop:rank1}).
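For completeness, the epigraph/Schur-complement step behind~\eqref{mg5-alpha} can be spelled out as follows: with the shorthand $u_{n,t}^{\phi}:={\textrm{Tr}}(\boldsymbol{\Phi}_{V,n,t}^{\phi} {\bf X}_t)-|V_{n}^{\textrm{ref}}|^2$, one has
\begin{equation*}
\alpha_{n,t}^{\phi} \ge (u_{n,t}^{\phi})^2
\;\Longleftrightarrow\;
\left[ \begin{array}{cc} \alpha_{n,t}^{\phi} & u_{n,t}^{\phi} \\ u_{n,t}^{\phi} & 1 \end{array} \right] \succeq \mathbf{0}
\;\Longleftrightarrow\;
\left[ \begin{array}{cc} -\alpha_{n,t}^{\phi} & u_{n,t}^{\phi} \\ u_{n,t}^{\phi} & -1 \end{array} \right] \preceq \mathbf{0},
\end{equation*}
and since $\sum_{t,n,\phi}\alpha_{n,t}^{\phi}$ is minimized, the bound is tight at the optimum, so (P5) indeed captures the quadratic voltage-deviation penalty of (P4).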
Clearly, if the voltages $\{V_{n,t}^{\phi, \mathrm{opt}}\}$ obtained from (P5) satisfy $(V_{n,t}^{\mathrm{min}})^2 \leq |V_{n,t}^{\phi, \mathrm{opt}}|^2 \leq (V_{n,t}^{\mathrm{max}})^2$, for all $t, n, \phi$, then it is possible to proceed with the solution of the economic dispatch problem (P3). On the other hand, if some of the voltage magnitudes largely deviate from $|V_{n}^{\textrm{ref}}|$, corrective actions have to be taken; these include, for example, switching the taps of controllable capacitor banks, or curtailing portion(s) of the loads. \section{Thermal and Quality-of-power constraints} \label{sec:Additional} \subsection{Thermal constraints} \label{sec:linecurrents} High current magnitudes on the lines can have detrimental effects on both efficiency and reliability of the distribution network. From an economical perspective, additional (real) power has to be drawn from the main grid or supplied by the DG units in order to compensate for the increased power dissipated on the distribution lines. On the other hand, conductors may overheat if stressed by high currents over a prolonged time interval, and may eventually fail. This, in turn, would trigger an outage event, with consequent interruption of the power delivery in portions of the feeder. To alleviate these concerns, it is of interest to constrain either the power dissipated on the conductors or the magnitude of the currents flowing through the distribution lines~\cite{Tse12}, which amounts to adding one of the following constraints to (P1): \begin{align} |I_{mn,t}^{\phi}| & \leq I_{mn}^{\textrm{max}} \label{current} \\ P_{mn,t}^{\phi} := |I_{mn,t}^{\phi}|^2 \Re\{[{\bf Z}_{mn}]_{\{\phi\},\{\phi\}}\} & \leq P_{mn}^{\textrm{max}} \label{powerdissipated} \end{align} where $I_{mn,t}^{\phi}$ and $P_{mn,t}^{\phi}$ denote the current flowing on the phase $\phi$ of line $(m,n) \in {\cal E}$, and the active power lost on the same line and phase, respectively. An SDP-consistent re-formulation of~\eqref{current}--\eqref{powerdissipated} has to be derived in order to accommodate the aforementioned constraints in (P2) and (P3). To this end, let ${\bf i}_{mn,t} := [\{I_{mn,t}^{\phi}\}_{\phi \in {\cal P}_{mn}}]^{\cal T}$ denote the $|{\cal P}_{mn}| \times 1$ vector collecting the complex currents flowing through line $(m,n) \in {\cal E}$, which is related to the line-to-ground voltages ${\bf v}_{m,t}$ and ${\bf v}_{n,t}$ as (cf.~\eqref{currents}) \begin{equation} {\bf i}_{mn,t} = {\bf Z}_{mn}^{-1} \left([{\bf v}_{m,t}]_{{\cal P}_{mn}} - [{\bf v}_{n,t}]_{{\cal P}_{mn}} \right) \, . \label{linecurrent} \end{equation} Notice that, since ${\bf Z}_{mn}^{-1}$ is generally not diagonal~\cite{testfeeder},~\eqref{linecurrent} also captures current components arising from mutual inductive reactances and capacitive coefficients.
Define the $|{\cal P}_{mn}| \times \sum_{j=0}^N |{\cal P}_j|$ complex matrix \begin{align} {\bf B}_{mn} & := [\mathbf{0}_{|{\cal P}_{mn}| \times \sum_{j=0}^{m-1} |{\cal P}_j|}, \check{{\bf Z}}_{mn}^m, \ldots \nonumber \\ & \mathbf{0}_{|{\cal P}_{mn}| \times \sum_{j=m+1}^{n-1} |{\cal P}_j|}, \check{{\bf Z}}_{mn}^n, \mathbf{0}_{|{\cal P}_{mn}| \times \sum_{j=n+1}^{N} |{\cal P}_j|}] \end{align} where $\check{{\bf Z}}_{mn}^m$ is a $|{\cal P}_{mn}| \times |{\cal P}_{m}|$ matrix with elements $[\check{{\bf Z}}_{mn}^m]_{{\cal P}_{mn},{\cal P}_{mn}} = {\bf Z}_{mn}^{-1}$ and $[\check{{\bf Z}}_{mn}^m]_{{\cal P}_{mn},{\cal P}_m \backslash {\cal P}_{mn}} = \mathbf{0}$; likewise, $\check{{\bf Z}}_{mn}^n$ has dimensions $|{\cal P}_{mn}| \times |{\cal P}_{n}|$, and its entries are filled as $[\check{{\bf Z}}_{mn}^n]_{{\cal P}_{mn},{\cal P}_{mn}} = -{\bf Z}_{mn}^{-1}$ and $[\check{{\bf Z}}_{mn}^n]_{{\cal P}_{mn},{\cal P}_n \backslash {\cal P}_{mn}} = \mathbf{0}$. Thus, building on~\eqref{linecurrent}, an SDP-compliant re-formulation of~\eqref{current}--\eqref{powerdissipated} is provided in the following lemma. \begin{lemma} \label{limitcurrent} Consider the Hermitian matrix \begin{align} \boldsymbol{\Phi}_{I,mn,t}^{\phi} := {\bf D}_{0,t}^{{\cal H}} {\bf B}_{mn}^{{\cal H}} {\bf e}_{mn}^{\phi} ({\bf e}_{mn}^{\phi})^{{\cal T}} {\bf B}_{mn} {\bf D}_{0,t} \end{align} where $\{{\bf e}_{mn}^\phi\}_{\phi \in {\cal P}_{mn}}$ denotes the canonical basis of $\mathbb{R}^{|{\cal P}_{mn}|}$. Then, constraints~\eqref{current}--\eqref{powerdissipated} can be expressed linearly in the outer-product ${\bf X}_t$ as \begin{align} {\textrm{Tr}}\{ \boldsymbol{\Phi}_{I,mn,t}^{\phi} {\bf X}_t \} & \leq (I_{mn}^{\textrm{max}})^2 \label{currentSDP} \\ \Re\{{\textrm{Tr}}\{ {\bf e}_{mn}^{\phi} ({\bf e}_{mn}^{\phi})^{{\cal T}} {\bf Z}_{mn}\} \} {\textrm{Tr}}\{ \boldsymbol{\Phi}_{I,mn,t}^{\phi} {\bf X}_t \} & \leq P_{mn}^{\textrm{max}}. \label{powerdissipatedSDP} \end{align} \end{lemma} \emph{Proof.} See the Appendix. $\Box$ In distribution feeders, line outage events may be triggered by overheating effects on the neutral cable(s), especially those experiencing highly unbalanced load conditions. Towards deriving constraints on the magnitude of neutral current(s), let ${\cal P}_{mn}^{(\varphi)}$ denote the set of grounded neutral cables that are present on the line $(m,n) \in {\cal E}$, and ${\bf T}_{mn}$ the $|{\cal P}_{mn}^{(\varphi)}| \times |{\cal P}_{mn}|$ neutral transformation matrix obtained from the primitive impedance matrix of the distribution line via Kron reduction~\cite[Sec.~4.1]{Kerstingbook}. For example, the neutral transformation matrix of a four-wire grounded wye segment has dimensions $1 \times 3$, while its dimensions increase to $3 \times 3$ for an underground wye line with three neutral conductors. Thus, the neutral currents ${\bf i}^{(\varphi)}_{mn,t} := [I^{(1)}_{mn,t}, \ldots, I^{(N^{\varphi})}_{mn,t}]^{{\cal T}}$ are linearly related to the line currents ${\bf i}_{mn,t}$ as~\cite[Sec.~4.1]{Kerstingbook} \begin{equation} {\bf i}^{(\varphi)}_{mn,t} = {\bf T}_{mn} {\bf i}_{mn,t} \, .
\label{neutralcurrent} \end{equation} It readily follows from~\eqref{neutralcurrent} and the result of Lemma~\ref{limitcurrent} that the magnitude of the current on the neutral cables can be constrained in the SDP problem (P3) as \begin{equation} {\textrm{Tr}}\{ \boldsymbol{\Phi}_{I,mn,t}^{(\varphi)} {\bf X}_t \} \leq (I_{mn}^{(\varphi), \textrm{max}})^2, \,\, \forall \, \varphi \in {\cal P}^{(\varphi)}_{mn} \label{neutralcurrentlimit} \end{equation} with \begin{align} \boldsymbol{\Phi}_{I,mn,t}^{(\varphi)} := {\bf D}_{0,t}^{{\cal H}} {\bf B}_{mn}^{{\cal H}} {\bf T}_{mn}^{{\cal H}} {\bf e}_{mn}^{(\varphi)} ({\bf e}_{mn}^{(\varphi)})^{{\cal T}} {\bf T}_{mn} {\bf B}_{mn} {\bf D}_{0,t} \end{align} where, as usual, $\{{\bf e}_{mn}^{(\varphi)}\}$ is the canonical basis of $\mathbb{R}^{|{\cal P}^{(\varphi)}_{mn}|}$. There is an increasing concern about the thermal effects arising from harmonic currents in the neutral cable(s). In this case, constraints similar to~\eqref{neutralcurrentlimit} can be imposed on a per-harmonic basis (see, e.g.~\cite{Forner}). \subsection{Constraints on the power factor} \label{sec:pfcorrection} The PF has been increasingly recognized as one of the principal measures of efficiency and reliability of power distribution networks~\cite{Momoh09,Roytelman00,Forner}. A high PF translates to lower generation and transmission costs, and enhanced protection of transmission lines from overheating (hence, higher resilience to line outages). Constraining the PF at the PCC is tantamount to limiting the reactive power exchanged with the main power grid. This, in turn, has two well-appreciated merits: \emph{i)} it alleviates the power losses experienced along the backbone of the feeder~\cite{Roytelman00}; and, \emph{ii)} it limits the current drawn at the PCC, and therefore facilitates coexistence of multiple feeders on the same distribution line or substation without requiring components of increased size such as conductors, transformers, and switchgear. Unfortunately, the definition of PF for an unbalanced polyphase system is not unique~\cite{Emanuel93}. In this paper, a per-phase definition is adopted in order to limit the reactive power exchanged at the PCC on each phase. Intuitively, with polyphase variants~\cite{Emanuel93} the amounts of reactive power exchanged on the individual phases may differ widely, while the polyphase PF nevertheless remains ``good''. Let $\eta_{0,t}^{\phi} \in [0,1]$ denote the minimum PF required at the PCC on the phase $\phi \in {\cal P}_0$ at time slot $t \in {\cal I}$. Now consider adding the following constraint to (P1): \begin{equation} P_{0,t}^{\phi} \left(|V_{0,t}^{\phi}| |I_{0,t}^{\phi}| \right)^{-1} \geq \eta_{0,t}^{\phi}, \quad \quad \forall \,\, t \in {\cal I}, \phi \in {\cal P}_0 \label{mg-powerfactor} \end{equation} where the voltages $\{|V_{0,t}^{\phi}| \}$ are known~\cite{Kerstingbook}. Notice that DG units complement the power supplied by the utility, and are usually not sufficient to satisfy the load demand on their own. Under this premise, an SDP-consistent reformulation of~\eqref{mg-powerfactor} can be readily obtained, as summarized next.
\begin{lemma} Provided the power supplied by the DG units does not exceed the total load demand at the feeder,~\eqref{mg-powerfactor} is equivalently expressed as a linear function of ${\bf X}_t$ as \begin{align} \left \{\begin{array}{ll} \tilde{\eta}_{0,t}^{\phi} {\textrm{Tr}}(\boldsymbol{\Phi}_{P,0,t}^{\phi} {\bf X}_t) - {\textrm{Tr}}(\boldsymbol{\Phi}_{Q,0,t}^{\phi} {\bf X}_t) \geq 0 \\ \tilde{\eta}_{0,t}^{\phi} {\textrm{Tr}}(\boldsymbol{\Phi}_{P,0,t}^{\phi} {\bf X}_t) + {\textrm{Tr}}(\boldsymbol{\Phi}_{Q,0,t}^{\phi} {\bf X}_t) \geq 0 \, , \end{array} \right. \end{align} where $\tilde{\eta}_{0,t}^{\phi} := \sqrt{1-(\eta_{0,t}^{\phi})^2}\,/\,\eta_{0,t}^{\phi}$. \end{lemma} $\Box$ Additional charges are generally applied to residential and industrial loads with a poor PF. In the presence of highly inductive loads, capacitor banks are usually employed to balance the reactive demand, and thus maintain the PF as close as possible to 1~\cite{Kerstingbook}. Recall that $y_{C,n}^{\phi}$ denotes the susceptance of a capacitor connected at node $n$ and phase $\phi$, and that the provided reactive power amounts to $Q_{C,n,t}^{\phi} = y_{C,n}^{\phi} |V_{n,t}^{\phi}|^2$. With $Q_{L,n,t}^{\phi} > 0$ denoting the reactive power demanded by an inductive load, a minimum per-phase PF $\eta_{n,t}^{\phi} \in [0,1]$ is imposed as [cf.~\eqref{mg-powerfactor}] \begin{align} \label{powerfactor_node} \frac{P_{L,n,t}^{\phi}}{\sqrt{(P_{L,n,t}^{\phi})^2 + (Q_{L,n,t}^{\phi} - Q_{C,n,t}^{\phi})^2 }} \geq \eta_{n,t}^{\phi} \, \quad \forall \, \phi \in {\cal P}_n \end{align} where $P_{L,n,t}^{\phi}$ is given. Clearly, $|V_{n,t}^{\phi}|^2$ can be re-expressed as a linear function of ${\bf X}_t$ using~\eqref{eq:V}, and~\eqref{powerfactor_node} can be reformulated to obtain the following SDP-compliant form. \begin{lemma} \label{SDPformPF} Using~\eqref{eq:V}, constraint~\eqref{powerfactor_node} is equivalent to the following linear matrix inequality (with $Q_{C,n,t}^{\phi} = y_{C,n}^{\phi}{\textrm{Tr}}(\boldsymbol{\Phi}_{V,n,t}^{\phi} {\bf X}_t)$) \begin{align} \label{eq:SDPformPF} \left[ \begin{array}{ll} - \left(\frac{P_{L,n,t}^{\phi}}{\eta_{n,t}^{\phi}} \right)^2 & P_{L,n,t}^{\phi} + Q_{L,n,t}^{\phi} - Q_{C,n,t}^{\phi} \\ P_{L,n,t}^{\phi} + Q_{L,n,t}^{\phi} - Q_{C,n,t}^{\phi} & -1 \end{array} \right] \preceq \mathbf{0}. \end{align} \end{lemma} \emph{Proof.} See the Appendix. $\Box$ Controllable capacitor banks can be accounted for by associating an integer variable with each of the capacitor switches. To tackle the resultant mixed integer nonlinear problem, exhaustive search over the switches can be performed~\cite{Paudyal11}. In this case, (P3) is solved for each switch configuration. \section{Numerical Tests} \label{sec:results} The proposed optimization framework for unbalanced three-phase systems is tested on the IEEE 13-node test feeder shown in Fig.~\ref{fig:feeder}. Compared to the original scheme in~\cite{testfeeder}, DG units are placed at nodes 1 and 10. Specifically, the single-phase DG units supply a maximum real power of $300$ kW and $500$ kW, respectively, and they operate at unity PF. Capacitor banks with rated reactive power of $200$ kVAr and $100$ kVAr are present at nodes $5$ and $8$, respectively. Line impedance and shunt admittance matrices are computed based on the dataset in~\cite{testfeeder}. To solve (P3) (and (P5) for a preemptive voltage profile analysis), the MATLAB-based optimization modeling package \texttt{CVX}~\cite{cvx} is used, along with the interior-point based solver \texttt{SeDuMi}~\cite{SeDuMi}.
\begin{figure}[t] \begin{center} \includegraphics[width=0.35\textwidth]{F_Feeder} \caption{Modified IEEE 13-node test feeder.} \label{fig:feeder} \end{center} \end{figure} The time horizon is $24$h, and slots of $1$h are considered; that is, ${\cal I} = \{1\, \textrm{AM}, 2\, \textrm{AM}, \ldots, 11\, \textrm{PM}, 12\, \textrm{AM}\}$. The loads specified in~\cite{testfeeder} are assumed to be the peak demands of the day, and the ``spring mid-week'' load profiles reported in~\cite{report_Canada} are used to generate $\{P_{L,n,t}^{\phi}, Q_{L,n,t}^{\phi} \}_{t\in {\cal I}}$. Specifically, the ``commercial load profile'' in~\cite[Sec.~1.1]{report_Canada} is used for node $9$, whereas the ``residential load profile'' is applied to all the remaining nodes; a Gaussian random variable with mean $1$ and standard deviation $0.1$ is used to introduce a perturbation in the profiles on a per-node and per-time basis. The resulting aggregate real loads per phase are depicted in Fig.~\ref{fig:Profile} (similar trends are obtained for the reactive loads, but are not reported due to space limitations). Ten controllable loads are present at node $5^{\prime}$, and are allocated as follows: $2$ on phase $a$, $4$ on phase $b$, and $4$ on phase $c$. The energy requirement is $11$ kWh, and the cap $\bar{P}_{d,5,t}^{\textrm{max}}$ is set to $4$ kW, so as to resemble the demands of $10$ PHEVs~\cite{Driesen10}. Customers are assumed to plug in the PHEVs at 6 PM, and the charging has to be completed by 6 AM. Two additional elastic loads are present at node $9$, and have to be satisfied between 8 AM and 4 PM. In this case, the energy requirement per load is $30$ kWh, and no cap is present for $\{\bar{P}_{d,9,t}^{\phi}\}$. To model the price of the power purchased from the main distribution grid $\{\kappa_{0,t}\}$, day-ahead locational marginal prices (LMPs) available in the Midwest Independent Transmission System Operator (MISO)~\cite{midwestiso} database are utilized. Specifically, the LMPs for the Minneapolis area on June 7th, 2012 are utilized throughout this section, and are reported in Fig.~\ref{fig:Profile}. On the other hand, the cost incurred by the use of the DG units is kept constant over time, and it is set to $30$ $\$$/MW. A minimum PF of $\eta_{0,t}^{\phi} = 0.8$ is required at the PCC, and the limits $V_n^{\mathrm{min}} = 0.95$ p.u. and $V_n^{\mathrm{max}} = 1.05$ p.u. are imposed to enforce voltage regulation. Finally, a balanced flat voltage profile is assumed at the PCC, with $|V_{0,t}^{\phi}| = 1.02$ p.u., and $\angle V_{0,t}^{a} = 0^{\circ}$, $\angle V_{0,t}^{b} = 120^{\circ}$, and $\angle V_{0,t}^{c} = -120^{\circ}$. Constraints on the line currents are not considered, since these data are not available in~\cite{testfeeder}.
\begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth]{F_P_24h} \caption{Aggregate real load profile [MW], and prices $\{\kappa_{0,t}\}$ [$\$$/MW].} \label{fig:Profile} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth]{F_Psupplied_24} \caption{Real power supplied [MW].} \label{fig:supplied} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth]{F_Elastic} \caption{Aggregate per-phase elastic demand [kWh].} \label{fig:Elastic} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth]{F_PF_24h} \caption{PF at the substation.} \label{fig:PFprofile} \end{center} \end{figure} Before proceeding, it is worth mentioning that the rank of the matrices $\{{\bf X}_t^{\textrm{opt}}\}$ was \emph{always} 1 in the experiments reported in this section. Therefore, the \emph{globally} optimal solutions of (P1) were always attained. This clearly illustrates the merits of the proposed formulation. Fig.~\ref{fig:supplied} depicts the active power supplied by the DG units, and drawn from the main distribution grid. As expected, the DG units are heavily utilized from $8$ AM to $11$ PM, which is the interval where the price of the power purchased from the main distribution grid is higher than $30$ $\$$/MW. This, in turn, has the benefit of reducing the overall demand of the feeder during the peak hours (peak shaving). Notice, however, that the DG units are not utilized to their maximum extent because of the constraint on the PF at the PCC~\cite{Kroposki10}, as will be shown later on. The optimal allocation of the elastic energy demands is shown in Fig.~\ref{fig:Elastic}. It can be seen that the PHEVs connected at node $5^\prime$ are charged from $2$ AM to $5$ AM, the interval where both the LMPs and the non-deferrable loads are the lowest. A similar behavior is noticed for the elastic demands at node $9$; in fact, they are entirely satisfied in the time slot $[8 \, \textrm{AM}, 9 \, \textrm{AM}]$, which is the slot with the lowest LMP in the interval $[8 \, \textrm{AM}, 4 \, \textrm{PM}]$. Finally, Fig.~\ref{fig:PFprofile} portrays the trajectories of the PF at the PCC. It can be seen that the lower bound on the PF is met with equality when the DG units are active. In fact, as they supply real power to a lagging power system, a reduction of the PF is inevitably experienced at the PCC. This can be further noticed from the dotted (orange) trajectories, which correspond to the case where (P3) is solved without the constraints on the PF. In this case, the majority of the real power is supplied by the DG units, thus entailing a significant drop of the PF at the PCC. The case where the DG units can operate at a variable PF, ranging from $0.5$ to $1$, is also considered. In this case, the DG units can supply a sufficient amount of reactive power, so that the PF at the substation can be kept close to unity most of the time. This suggests that DG units can be effectively utilized for providing reactive power support~\cite{Kroposki10}, although an appropriate modeling of the cost incurred in this case is required. \section{Concluding remarks} \label{sec:conclusions} The paper considered the economic dispatch problem for unbalanced three-phase power distribution networks, where the cost of the power drawn from the main grid and supplied by the DG units over a given time horizon was minimized, while meeting the overall load demand and effecting voltage regulation.
In spite of the inherent non-convexity of the formulated problem, the SDP relaxation technique was advocated to obtain a (relaxed) convex problem. As corroborated by the numerical tests, the main merit of the proposed approach consists in offering the potential to find the globally optimal solution of the original nonconvex economic dispatch problem. \appendix \emph{Proof of Lemma}~\ref{SDPreformulation}. To prove~\eqref{eq:P}, notice first that the injected apparent power at node $n$, phase $\phi$, and time $t$ is given by $V_{n,t}^{\phi} (I_{n,t}^{\phi})^* = (V_{n,t}^{\phi, *} I_{n,t}^{\phi})^* = ({\bf v}_t^{\cal H} \bar{{\bf e}}_n^{\phi} (\bar{{\bf e}}_n^{\phi})^{{\cal T}} {\bf i}_t)^{{\cal H}}$. Next, noticing that $ {\bf v}_t = {\bf a}_{0,t} \circ {\bf x}_t = {\bf D}_{0,t} {\bf x}_t$ and using ${\bf i}_t = {\bf Y} {\bf v}_t$, it follows that $({\bf v}_t^{\cal H} \bar{{\bf e}}_n^{\phi} (\bar{{\bf e}}_n^{\phi})^{{\cal T}} {\bf i}_t)^{{\cal H}} = ({\bf x}_t^{\cal H} {\bf D}_{0,t}^{\cal H} \bar{{\bf e}}_n^{\phi} (\bar{{\bf e}}_n^{\phi})^{{\cal T}} {\bf Y} {\bf D}_{0,t} {\bf x}_t)^{{\cal H}} = ({\bf x}_t^{\cal H} {\bf D}_{0,t}^{\cal H} {\bf Y}^{\phi}_n {\bf D}_{0,t} {\bf x}_t)^{{\cal H}} = {\bf x}_t^{\cal H} {\bf D}_{0,t}^{\cal H} ({\bf Y}^{\phi}_n)^{{\cal H}} {\bf D}_{0,t} {\bf x}_t$, which can be equivalently rewritten as ${\textrm{Tr}}({\bf D}_{0,t}^{\cal H} ({\bf Y}_n^{\phi})^{{\cal H}} {\bf D}_{0,t} {\bf X}_t)$. Thus, the injected real and reactive powers can be obtained by using, respectively, the real and imaginary parts of $({\bf Y}^{\phi}_n)^{{\cal H}}$. Finally,~\eqref{eq:V} can be readily established by noticing that $|V_{n,t}^{\phi}|^2 = {\bf v}_t^{\cal H} \bar{{\bf e}}_n^{\phi} (\bar{{\bf e}}_n^{\phi})^{{\cal T}} {\bf v}_t = {\bf x}_t^{\cal H} {\bf D}_{0,t}^{\cal H} \bar{{\bf e}}_n^{\phi} (\bar{{\bf e}}_n^{\phi})^{{\cal T}} {\bf D}_{0,t} {\bf x}_t = {\textrm{Tr}}({\bf D}_{0,t}^{\cal H} \bar{{\bf e}}_n^{\phi} (\bar{{\bf e}}_n^{\phi})^{{\cal T}} {\bf D}_{0,t} {\bf X}_t)$. \emph{Proof of Lemma}~\ref{limitcurrent}. From~\eqref{linecurrent}, and using the definitions of $\check{{\bf Z}}_{mn}^m$ and $\check{{\bf Z}}_{mn}^n$, it follows that ${\bf i}_{mn,t} {\bf i}_{mn,t}^{{\cal H}} = \check{{\bf Z}}_{mn}^m {\bf v}_{m,t} {\bf v}_{m,t}^{{\cal H}} \check{{\bf Z}}_{mn}^{m {\cal H}} + \check{{\bf Z}}_{mn}^n {\bf v}_{n,t} {\bf v}_{n,t}^{{\cal H}} \check{{\bf Z}}_{mn}^{n {\cal H}} - \check{{\bf Z}}_{mn}^m {\bf v}_{m,t} {\bf v}_{n,t}^{{\cal H}} \check{{\bf Z}}_{mn}^{n {\cal H}} - \check{{\bf Z}}_{mn}^n {\bf v}_{n,t} {\bf v}_{m,t}^{{\cal H}} \check{{\bf Z}}_{mn}^{m {\cal H}} = {\bf B}_{mn} {\bf v}_{t} {\bf v}_{t}^{{\cal H}} {\bf B}_{mn}^{{\cal H}}$. Thus, $|I_{mn}^{\phi}|^2$ is given by $|I_{mn}^{\phi}|^2 = ({\bf e}_{mn}^{\phi})^{{\cal T}} {\bf B}_{mn} {\bf D}_{0,t} {\bf x}_{t} {\bf x}_{t}^{{\cal H}} {\bf D}_{0,t}^{{\cal H}} {\bf B}_{mn}^{{\cal H}} {\bf e}_{mn}^{\phi} = {\textrm{Tr}}\{{\bf B}_{mn} {\bf D}_{0,t} {\bf X}_t {\bf D}_{0,t}^{{\cal H}} {\bf B}_{mn}^{{\cal H}} {\bf e}_{mn}^{\phi} ({\bf e}_{mn}^{\phi})^{{\cal T}} \} = {\textrm{Tr}}\{\Phi_{I,mn,t}^{\phi} {\bf X}_t\}$. \emph{Proof of Lemma}~\ref{SDPformPF}. After standard manipulations, and using~\eqref{eq:V}, constraint~\eqref{powerfactor_node} can be re-written as $(P_{L,n,t}^{\phi} + Q_{L,n,t}^{\phi} - y_{C,n}^{\phi}{\textrm{Tr}}(\Phi_{V,n}^{\phi} {\bf X}_t))^2 \leq (P_{L,n,t}^{\phi} / \eta_{n,t}^{\phi})^2$, which is quadratic in ${\bf X}_t$. Then,~\eqref{eq:SDPformPF} is readily obtained by using the Schur complement.
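For completeness, the Schur complement step can be spelled out as follows (the shorthand $a$ and $b$ below is introduced here only for compactness): letting $a := P_{L,n,t}^{\phi}/\eta_{n,t}^{\phi}$ and $b := P_{L,n,t}^{\phi} + Q_{L,n,t}^{\phi} - Q_{C,n,t}^{\phi}$, the matrix inequality~\eqref{eq:SDPformPF} reads
\begin{align*}
\left[ \begin{array}{cc} -a^2 & b \\ b & -1 \end{array} \right] \preceq \mathbf{0}
\quad \Longleftrightarrow \quad
\left[ \begin{array}{cc} a^2 & -b \\ -b & 1 \end{array} \right] \succeq \mathbf{0}
\quad \Longleftrightarrow \quad
a^2 - b^2 \geq 0,
\end{align*}
where the last equivalence follows by taking the Schur complement with respect to the (strictly positive) $(2,2)$ entry; the rightmost inequality is precisely the quadratic constraint $b^2 \leq a^2$ obtained above.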
\bibliographystyle{IEEEtran} \bibliography{Biblio_pow_systems} \begin{biographynophoto} {Emiliano Dall'Anese (S'08, M'11)} received the Laurea Triennale (B.Sc. degree) and the Laurea Specialistica (M.Sc. degree) in Telecommunications Engineering from the University of Padova, Italy, in 2005 and 2007, respectively, and the Ph.D. in Information Engineering from the Department of Information Engineering (DEI), University of Padova, Italy, in 2011. From January 2009 to September 2010 he was a visiting scholar at the Department of Electrical and Computer Engineering, University of Minnesota, USA. He is currently a post-doctoral associate at the Department of Electrical and Computer Engineering, University of Minnesota, USA. His research interests lie in the areas of signal processing, communications, and networking. His current research focuses on wireless cognitive networks, IP networks, and power distribution systems. \end{biographynophoto} \begin{biographynophoto}{Georgios B. Giannakis (F'97)} received his Diploma in Electrical Engr. from the Ntl. Tech. Univ. of Athens, Greece, 1981. From 1982 to 1986 he was with the Univ. of Southern California (USC), where he received his M.Sc. in Electrical Engineering, 1983, M.Sc. in Mathematics, 1986, and Ph.D. in Electrical Engr., 1986. Since 1999 he has been a professor with the Univ. of Minnesota, where he now holds an ADC Chair in Wireless Telecommunications in the ECE Department, and serves as director of the Digital Technology Center. His general interests span the areas of communications, networking and statistical signal processing - subjects on which he has published more than 300 journal papers, 500 conference papers, 20 book chapters, two edited books and two research monographs. His current research focuses on compressive sensing, cognitive radios, network coding, cross-layer designs, wireless sensors, social and power grid networks. \end{biographynophoto} \begin{biographynophoto}{Bruce F. Wollenberg (M'67, SM'75, F'89, LF'08)} graduated from Rensselaer Polytechnic Institute, Troy, NY, with a B.S.E.E. in 1964 and an M.Eng. in electric power engineering in 1966. He then attended the University of Pennsylvania, Philadelphia, PA, graduating with a Ph.D. in systems engineering in 1974. He worked for Leeds and Northrup Co., North Wales, PA, from 1966 to 1974; Power Technologies Inc., Schenectady, NY, from 1974 to 1984; and Control Data Corporation, Energy Management System Division, Plymouth, MN, from 1984 to 1989. He took a position as a Professor of Electrical Engineering in the Electrical and Computer Engineering Department at the University of Minnesota in September 1989. He is presently the Director of the University of Minnesota Center for Electric Energy (UMCEE). His main research interests are the application of mathematical analysis to power system operation and planning problems. He is the co-author of the textbook \emph{Power Generation, Operation, and Control}, published by John Wiley $\&$ Sons. \end{biographynophoto} \end{document}
math
\begin{document} \begin{abstract} We prove two special cases of a conjecture of J. Fern\'andez de Bobadilla for hypersurfaces with $1$-dimensional critical loci. We do this via a new numerical invariant for such hypersurfaces, called the beta invariant, first defined and explored by the second author in 2014. The beta invariant is an algebraically calculable invariant of the local ambient topological-type of the hypersurface, and the vanishing of the beta invariant is equivalent to the hypotheses of Bobadilla's conjecture. \end{abstract} \title{Some Special Cases of Bobadilla's Conjecture} \thispagestyle{fancy} \lhead{} \chead{} \rhead{ } \lfoot{} \cfoot{} \rfoot{} \section{Introduction} Throughout this paper, we shall suppose that $\mathcal{U}$ is an open neighborhood of the origin in $\mathbb{C}^{n+1}$, and that $f: (\mathcal{U},\bb{0}) \to (\mathbb{C},0)$ is a complex analytic function with a 1-dimensional critical locus at the origin, i.e., $\dim_\bb{0} \Sigma f = 1$. We use coordinates $\bb z:=(z_0,\cdots,z_n)$ on $\mathcal{U}$. We assume that $z_0$ is generic enough so that $\dim_\bb{0} \Sigma (f_{|_{V(z_0)}} ) = 0$. One implication of this is that $$ V\left(\frac{\partial f}{\partial z_1}, \frac{\partial f}{\partial z_2}, \dots, \frac{\partial f}{\partial z_n}\right) $$ is purely 1-dimensional at the origin. As analytic cycles, we write $$ \left[V\left(\frac{\partial f}{\partial z_1}, \frac{\partial f}{\partial z_2}, \dots, \frac{\partial f}{\partial z_n}\right)\right] \ = \ \Gamma^1_{f, z_0} + \Lambda^1_{f, z_0}, $$ where $\Gamma^1_{f, z_0}$ and $\Lambda^1_{f, z_0}$ are, respectively, the relative polar curve and 1-dimensional L\^e cycle; see \cite{lecycles} or the next section. We recall a classical non-splitting result (presented in a convenient form here) proved independently by Gabrielov, Lazzeri, and L\^e (in \cite{gabrielov}, \cite{lazzerimono}, and \cite{leacampo}, respectively) regarding the non-splitting of the cohomology of the Milnor fiber of $f_{|_{V(z_0)}}$ over the critical points of $f$ in a nearby hyperplane slice $V(z_0 -t)$ for a small non-zero value of $t$. \begin{thm}[GLL non-splitting]\label[theorem]{nosplit} The following are equivalent: \begin{enumerate} \item The Milnor number of $f_{|_{V(z_0)}}$ at the origin is equal to \begin{align*} \sum_C \mu^\circ_{{}_C} \left (C \cdot V(z_0) \right )_\mathbf{0}, \end{align*} where the sum is over the irreducible components $C$ of $\Sigma f$ at $\mathbf{0}$, $\left (C \cdot V(z_0) \right)_\mathbf{0}$ denotes the intersection number of $C$ and $V(z_0)$ at $\mathbf{0}$, and $\mu^\circ_{{}_C}$ denotes the Milnor number of $f$, restricted to a generic hyperplane slice, at a point $\bb{p} \in C \backslash \{\mathbf{0} \}$ close to $\mathbf{0}$. \item $\Gamma^1_{f, z_0}$ is zero at the origin (i.e., $\mathbf{0}$ is not in the relative polar curve). \end{enumerate} Furthermore, when these equivalent conditions hold, $\Sigma f$ has a single irreducible component which is smooth and is transversely intersected by $V(z_0)$ at the origin. \end{thm} This paper is concerned with a recent conjecture made by Javier Fern\'andez de Bobadilla, positing that, in the spirit of \cref{nosplit}, the cohomology of the Milnor fiber of $f$, {\bf not of a hyperplane slice}, does not split.
We state a slightly more general form of Bobadilla's original conjecture, for the case where $\Sigma f$ may, a priori, have more than a single irreducible component: \begin{conj}[Fern\'andez de Bobadilla]\label[conjecture]{bobagen} Denote by $F_{f,\mathbf{0}}$ the Milnor fiber of $f$ at the origin. Suppose that $\widetilde H^*(F_{f,\mathbf{0}};\mathbb{Z})$ is non-zero only in degree $(n-1)$, and that \begin{align*} \widetilde H^{n-1}(F_{f,\mathbf{0}};\mathbb{Z}) \cong \bigoplus_C \mathbb{Z}^{\mu^\circ_{{}_C}} \end{align*} where the sum is over all irreducible components $C$ of $\Sigma f$ at $\mathbf{0}$. Then, in fact, $\Sigma f$ has a single irreducible component, which is smooth. \end{conj} Bobadilla's conjecture, in its original phrasing (\cite{bobleconj}), is a reformulation of a conjecture of L\^e (see, for example, \cite{leconj}): if $(X,\mathbf{0})$ is a reduced surface germ in $(\mathbb{C}^3,\mathbf{0})$, and the (real) link of $X$ is homeomorphic to a sphere, then $X$ is (analytically) isomorphic to the total space of an equisingular deformation of an irreducible plane curve. We approach \cref{bobagen} via the \bb{beta invariant} of a hypersurface with a $1$-dimensional critical locus, first defined and explored by the second author in \cite{betainv}. The beta invariant, $\beta_f$, of $f$ is an invariant of the local ambient topological-type of the hypersurface $V(f)$. It is a non-negative integer, and is algebraically calculable. Our motivation for using this invariant is that the requirement that $\beta_f = 0$ is precisely equivalent to the hypotheses of \cref{bobagen}, essentially turning the problem into a purely algebraic question \cite[see][Theorem 5.4]{betainv}. For this reason, we will refer to our new formulation of \labelcref{bobagen} as the \bb{Beta Conjecture}. In this paper, we give proofs of the \nameref{beta0} in two special cases: \begin{enumerate} \item In \cref{betainduction}, we prove an induction-like result for when $f$ is a sum of two analytic functions defined on disjoint sets of variables. \item In \cref{onlythm}, we prove the result for the case when the relative polar curve $\Gamma_{f,z_0}^1$ is defined by a single equation inside the relative polar surface $\Gamma_{f,\bb{z}}^2$ (see below). \end{enumerate} \section{Notation and Known Results} The bulk of this section is largely a summary of the concepts of Chapter 1 of \cite{lecycles}, which will be used throughout this paper. Our assumption that $\dim_\bb{0} \Sigma (f_{|_{V(z_0)}}) = 0$ is equivalent to assuming that the variety $V \left (\frac{\partial f}{\partial z_1},\cdots,\frac{\partial f}{\partial z_n} \right )$ is purely 1-dimensional (and non-empty) at $\mathbf{0}$ and is intersected properly by the hyperplane $V(z_0)$ at $\bb{0}$. \begin{defn}\label[definition]{cycles} The \newword{relative polar surface of $f$ with respect to $\bb z$}, denoted $\Gamma_{f,\bb z}^2$\,, is, as an analytic cycle at the origin, $\left [V \left (\frac{\partial f}{\partial z_2},\cdots,\frac{\partial f}{\partial z_n} \right ) \right ]$. Note that each component of this at the origin must be precisely $2$-dimensional, and so is certainly not contained in $\Sigma f$. The \newword{relative polar curve of $f$ with respect to $z_0$}, denoted $\Gamma_{f,z_0}^1$, is, as an analytic cycle at the origin, the collection of those components of the cycle $\left [V \left (\frac{\partial f}{\partial z_1},\cdots,\frac{\partial f}{\partial z_n} \right ) \right ]$ which are not contained in $\Sigma f$.
The \newword{1-dimensional L\^e cycle of $f$ with respect to $z_0$}, at the origin, denoted $\Lambda_{f,z_0}^1$, consists of those components of $\left [V \left (\frac{\partial f}{\partial z_1},\cdots,\frac{\partial f}{\partial z_n} \right )\right ]$ at the origin which \emph{are} contained in $\Sigma f$. \end{defn} We sometimes enclose an analytic variety $V$ in brackets to indicate that we are considering $V$ as a cycle. We do, however, frequently omit this notation if it is clear from context that a given variety is to be considered as an analytic cycle. An immediate consequence of \cref{cycles} is that, as cycles on $\mathcal{U}$, \begin{align*} V \left (\frac{\partial f}{\partial z_1},\cdots,\frac{\partial f}{\partial z_n} \right ) = \Gamma_{f,z_0}^1 + \Lambda_{f,z_0}^1 . \end{align*} We will use this identity throughout this paper. Note that, by assumption, $V \left ( \frac{\partial f}{\partial z_0} \right )$ properly intersects $\Gamma_{f,z_0}^1$ at $\bb{0}$, and also that $V(z_0)$ properly intersects $\Lambda_{f,z_0}^1$ at $\bb{0}$. Letting $C$'s denote the underlying reduced components of $\Sigma f$ at $\bb{0}$, we have (as cycles at the origin) \begin{align*} \Lambda_{f,z_0}^1 &= \sum_C \mu^\circ_{{}_C} [C], \end{align*} where $\mu^\circ_{{}_C}$ denotes the Milnor number of $f$, restricted to a generic hyperplane slice, at a point $\bb{p} \in C \backslash \{\mathbf{0} \}$ close to $\mathbf{0}$ (\cite[see][Remark 1.19]{lecycles}). \begin{defn}\label[definition]{lenums} The intersection numbers $\left ( \Gamma_{f,z_0}^1 \cdot V \left (\frac{\partial f}{\partial z_0} \right ) \right )_\mathbf{0}$ and $\left (\Lambda_{f,z_0}^1 \cdot V (z_0) \right )_\mathbf{0}$ are, respectively, the \newword{L\^{e} numbers} $\lambda_{f,z_0}^0$ and $\lambda_{f,z_0}^1$ (at the origin). Via the above formula for $\Lambda_{f,z_0}^1$, we have: \begin{align*} \lambda_{f,z_0}^1 &= \sum_C \mu^\circ_{{}_C} \left (C \cdot V(z_0) \right )_\mathbf{0}. \end{align*} \end{defn} A fundamental property of L\^e numbers from \cite{lecycles} is: \begin{prop}\label[proposition]{BettiLe} Let $\widetilde b_n(F_{f,\mathbf{0}})$ and $\widetilde b_{n-1}(F_{f,\mathbf{0}})$ denote the reduced Betti numbers of the Milnor fiber of $f$ at the origin. Then, $$ \widetilde b_n(F_{f,\mathbf{0}})-\widetilde b_{n-1}(F_{f,\mathbf{0}}) \ = \ \lambda_{f,z_0}^0-\lambda_{f,z_0}^1. $$ \end{prop} We will need the following classical relations between intersection numbers. \begin{prop}\label[proposition]{intnums} Since $\dim_\mathbf{0} \Sigma (f_{|_{V(z_0)}}) = 0$\textnormal{:} \begin{enumerate} \item $\dim_\mathbf{0} \Gamma_{f,z_0}^1 \cap V(f) \leq 0$, $\dim_\mathbf{0} \Gamma_{f,z_0}^1 \cap V(z_0) \leq 0$, $\dim_\mathbf{0} \Gamma_{f,z_0}^1 \cap V \left (\frac{\partial f}{\partial z_0} \right ) \leq 0$, and \begin{align*} \left ( \Gamma_{f,z_0}^1 \cdot V(f) \right )_\mathbf{0} = \left ( \Gamma_{f,z_0}^1 \cdot V(z_0) \right )_\mathbf{0} + \left (\Gamma_{f,z_0}^1 \cdot V \left ( \frac{\partial f}{\partial z_0} \right) \right )_\mathbf{0}. \end{align*} The proof of this result is sometimes referred to as \emph{Teissier's trick}. \item In addition, \begin{align*} \mu_\mathbf{0} \left ( f_{|_{V(z_0)}} \right ) = \left ( \Gamma_{f,z_0}^1 \cdot V(z_0) \right )_\mathbf{0} + \left ( \Lambda_{f,z_0}^1 \cdot V(z_0) \right )_\mathbf{0}. \end{align*} \end{enumerate} \end{prop} Formula $(1)$ above was first proved by B. 
Teissier in \cite{teissiercargese} for functions with isolated critical points, and it is an easy exercise to show that the result still holds in the case where $f$ has a critical locus of arbitrary dimension. Formula $(2)$ follows from the fact that \begin{align*} \Sigma \left ( f|_{V(z_0)} \right ) = V \left ( z_0,\frac{\partial f}{\partial z_1}, \cdots,\frac{ \partial f}{\partial z_n} \right ) \end{align*} and the fact that $V(z_0)$ properly intersects $V \left ( \frac{\partial f}{\partial z_1},\cdots,\frac{\partial f}{\partial z_n} \right )$ at the origin. The following numerical invariant, defined and discussed in \cite{betainv}, is crucial to the contents and goal of this paper. \begin{defn}\label[definition]{beta} The \newword{beta invariant} of $f$ with respect to $z_0$ is: \begin{align*} \beta_f = \beta_{f,z_0} &:= \left ( \Gamma_{f,z_0}^1 \cdot V \left (\frac{\partial f}{\partial z_0} \right ) \right )_\bb{0} - \sum_C \mu^\circ_{{}_C} \left [ \left ( C \cdot V(z_0) \right )_\bb{0} -1 \right ] \\ &= \lambda_{f,z_0}^0 - \lambda_{f,z_0}^1 + \sum_C \mu^\circ_{{}_C}\\ &= \widetilde b_n(F_{f,\mathbf{0}})-\widetilde b_{n-1}(F_{f,\mathbf{0}}) + \sum_C \mu^\circ_{{}_C}. \end{align*} \end{defn} Using \cref{intnums}, $\beta_f$ may be equivalently expressed as \begin{align*} \beta_f &= \left ( \Gamma_{f,z_0}^1 \cdot V(f) \right )_\mathbf{0} - \mu_\mathbf{0} \left ( f_{|_{V(z_0)}} \right ) + \sum_C \mu^\circ_{{}_C}. \end{align*} \begin{rem} A key property of the beta invariant is that the value $\beta_f$ is independent of the choice of linear form $z_0$ (provided, of course, that the linear form satisfies $\dim_\mathbf{0} \Sigma (f_{|_{V(z_0)}}) =0$). This often allows a great deal of freedom in calculating $\beta_f$ for a given $f$, as different choices of linear forms $L = z_0$ may result in simpler expressions for the intersection numbers $\lambda_{f,z_0}^0$ and $\lambda_{f,z_0}^1$, while leaving the value of $\beta_f$ unchanged. \cite[See][Remark 3.2, Example 3.4]{betainv}. \end{rem} It is shown in \cite{betainv} that $\beta_f\geq 0$. The interesting question is how strong the requirement that $\beta_f=0$ is. \begin{conj}[Beta Conjecture]\label[conjecture]{beta0} If $\beta_f=0$, then $\Sigma f$ has a single irreducible component at $\mathbf{0}$, which is smooth. \end{conj} \begin{conj}[polar form of the Beta Conjecture]\label[conjecture]{beta0polar} If $\beta_f=0$, then $\mathbf{0}$ is not in the relative polar curve $\Gamma_{f,z_0}^1$ (i.e., the relative polar curve is $0$ as a cycle at the origin). Equivalently, if the relative polar curve at the origin is not empty, then $\beta_f>0$. \end{conj} \begin{prop}\label[proposition]{betaEquiv} The \nameref{beta0} is equivalent to the \nameref{beta0polar}. \end{prop} \begin{proof} Suppose throughout that $\beta_f=0$. Suppose first that the Beta Conjecture holds, so that $\Sigma f$ has a single irreducible component at $\mathbf{0}$, which is smooth. Then $\beta_f= \lambda_{f,z_0}^0=0$, and so the relative polar curve must be zero at the origin. Suppose now that the polar form of the Beta Conjecture holds, so that $\Gamma_{f,z_0}^1 = 0$ at $\mathbf{0}$. Then \nameref{nosplit} implies that $\Sigma f$ has a single irreducible component at $\mathbf{0}$, which is smooth. 
\end{proof} \section{Generalized Suspension} Suppose that $\mathcal{U}$ and $\mathcal{W}$ are open neighborhoods of the origin in $\mathbb{C}^{n+1}$ and $\mathbb{C}^{m+1}$, respectively, and let $g: (\mathcal{U},\bb{0}) \to (\mathbb{C},0)$ and $h:(\mathcal{W},\bb{0}) \to (\mathbb{C},0)$ be two complex analytic functions. Let $\pi_1 : \mathcal{U} \times \mathcal{W} \to \mathcal{U}$ and $\pi_2 : \mathcal{U} \times \mathcal{W} \to \mathcal{W}$ be the natural projection maps, and set $f=g\boxplus h := g \circ \pi_1 + h \circ \pi_2$. Then, one trivially has \begin{align*} \Sigma f &= \left ( \Sigma g \times \mathbb{C}^{m+1} \right ) \cap \left ( \mathbb{C}^{n+1} \times \Sigma h \right ). \end{align*} Consequently, if we assume that $g$ has a one-dimensional critical locus at the origin, and that $h$ has an isolated critical point at $\bb{0}$, then $\Sigma f = \Sigma g \times \{\bb{0}\}$ is 1-dimensional and (analytically) isomorphic to $\Sigma g$. From this, one immediately has the following result. \begin{prop0}\label[proposition]{gensusp} Suppose that $g$ and $h$ are as above, so that $f = g\boxplus h$ has a one-dimensional critical locus at the origin in $\mathbb{C}^{n+m+2}$. Then, $\beta_f = \mu_\mathbf{0}(h)\beta_g$. \end{prop0} \begin{proof} This is a consequence of the Sebastiani-Thom isomorphism (see the results of N\'emethi \cite{nemethisebthom1},\cite{nemethisebthom2}, Oka \cite{okasebthom}, Sakamoto \cite{sakamoto}, Sebastiani-Thom \cite{sebthom}, and Massey \cite{masseysebthom}) for the reduced integral cohomology of the Milnor fiber of $f = g\boxplus h$ at $\mathbf{0}$. Letting $\widehat C$ denote the component of the critical locus $f$ which corresponds to $C$, the Sebastiani-Thom Theorem tells us that $$\widetilde b_{n+m+1} (F_{f,\mathbf{0}})= \mu_\mathbf{0}(h)\widetilde b_{n} (F_{g,\mathbf{0}}), \hskip 0.2in \widetilde b_{n+m} (F_{f,\mathbf{0}})= \mu_\mathbf{0}(h)\widetilde b_{n-1} (F_{g,\mathbf{0}}), \hskip 0.1in \textnormal{ and }\hskip 0.2in\mu^\circ_{{}_{\widehat C}}= \mu_\mathbf{0}(h) \mu^\circ_{{}_C}.$$ Thus, \begin{align*} \beta_f = \lambda_{f,z_0}^0 - \lambda_{f,z_0}^1 + \sum_{\widehat C} \mu^\circ_{{}_{\widehat C}} = \widetilde b_{n+m+1} (F_{f,\mathbf{0}}) - \widetilde b_{n+m} (F_{f,\mathbf{0}}) + \sum_{\widehat C} \mu^\circ_{{}_{\widehat C}} = \mu_\mathbf{0}(h)\beta_g. \end{align*} \end{proof} \begin{cor}\label[corollary]{betainduction} Suppose $f = g\boxplus h$, where $g$ and $h$ are as in \cref{gensusp}. Then, if the \nameref{beta0} is true for $g$, it is true for $f$. \end{cor} \begin{proof} Suppose that $\beta_f = 0$. By \cref{gensusp}, this is equivalent to $ \beta_g = 0$, since $\mu_\mathbf{0}(h) > 0$. By assumption, $\beta_g =0$ implies that $\Sigma g$ is smooth at zero. Since $\Sigma f = \Sigma g \times \{ \mathbf{0} \}$, it follows that $\Sigma f$ is also smooth at $\mathbf{0}$, i.e., the \nameref{beta0} is true for $f$. \end{proof} \section{$\Gamma_{f,z_0}^1$ as a hypersurface in $\Gamma_{f,\bb{z}}^2$}\label{hypersurface} Let $I := \langle \frac{\partial f}{\partial z_2},\cdots, \frac{\partial f}{\partial z_n} \rangle \subseteq \mathcal{O}_{\mathcal{U},\mathbf{0}}$, so that the relative polar surface of $f$ with respect to the coordinates $\bb{z}$ is (as a cycle at $\mathbf{0}$) given by $\Gamma_{f,\bb{z}}^2 = \left [V(I) \right ]$. For the remainder of this section, we will drop the brackets around cycles for convenience, and assume that everything is considered as a cycle unless otherwise specified. 
We remind the reader that we are assuming that $f_{|_{V(z_0)}}$ has an isolated critical point at the origin. \begin{prop}\label{prop:equiv} The following are equivalent: \begin{enumerate} \item $\dim_\mathbf{0}\left(\Gamma_{f,\bb{z}}^2\cap V(f)\cap V(z_0)\right)=0$. \item For all irreducible components $C$ at the origin of the analytic set $\Gamma_{f,\bb{z}}^2\cap V(f)$, $C$ is purely 1-dimensional and properly intersected by $V(z_0)$ at the origin. \item $\Gamma_{f,\bb{z}}^2$ is properly intersected by $V(z_0, z_1)$ at the origin. \end{enumerate} Furthermore, when these equivalent conditions hold, $$\left ( \Gamma_{f,\bb{z}}^2 \cdot V(f)\cdot V(z_0) \right )_\mathbf{0} \ = \ \mu_\mathbf{0} \left ( f_{|_{V(z_0)}} \right ) + \left ( \Gamma_{f,\bb{z}}^2 \cdot V(z_0,z_1) \right )_\mathbf{0}.$$ \end{prop} \begin{proof} Clearly (1) and (2) are equivalent. We wish to show that (1) and (3) are equivalent. This follows from Teissier's trick applied to $f_{|_{V(z_0)}}$, but -- as it is crucial -- we shall quickly run through the argument. Since $f_{|_{V(z_0)}}$ has an isolated critical point at the origin, $$ \dim_\mathbf{0} \left(\Gamma_{f,\bb{z}}^2\cap V\left(\frac{\partial f}{\partial z_1}\right)\cap V(z_0)\right) =0. $$ Hence, $Z:= \Gamma_{f,\bb{z}}^2\cap V(z_0)$ is purely 1-dimensional at the origin. Let $Y$ be an irreducible component of $Z$ through the origin, and let $\alpha(t)$ be a parametrization of $Y$ such that $\alpha(0)=\mathbf{0}$. Let $z_1(t)$ denote the $z_1$ component of $\alpha(t)$. Then, $$ \big(f(\alpha(t))\big)' \ = \ {\frac{\partial f}{\partial z_1}}_{{\big |}_{\alpha(t)}}\cdot z_1'(t).\eqno{(\dagger)} $$ Since $\dim_\mathbf{0}\displaystyle Y\cap V\left(\frac{\partial f}{\partial z_1}\right)=0$, we conclude that $\big(f(\alpha(t))\big)' \equiv 0$ if and only if $z_1'(t)\equiv 0$, which tells us that $f(\alpha(t)) \equiv 0$ if and only if $z_1(t)\equiv 0$. Thus, $\dim_\mathbf{0}Y\cap V(f)=0$ if and only if $\dim_\mathbf{0} Y\cap V(z_1)=0$, i.e., (1) and (3) are equivalent. The equality now follows at once by considering the $t$-multiplicity of both sides of $(\dagger)$. \end{proof} \vbox{\begin{thm}\label[theorem]{onlythm} Suppose that \begin{enumerate} \item for all irreducible components $C$ at the origin of the analytic set \, $\Gamma_{f,\bb{z}}^2\cap V(f)$, $C$ is purely 1-dimensional, properly intersected by $V(z_0)$ at the origin, and $\left(C\cdot V(z_0)\right)_\mathbf{0} = \operatorname{mult}_\mathbf{0} C$, and \item the cycle $\Gamma_{f,z_0}^1$ equals $\Gamma_{f,\bb{z}}^2\cdot V(h)$ for some $h\in \mathcal{O}_{\mathcal{U},\mathbf{0}}$ (in particular, the relative polar curve at the origin is non-empty). \end{enumerate} Then, $$\widetilde b_n(F_{f,\mathbf{0}}) - \widetilde b_{n-1}(F_{f,\mathbf{0}})\geq \left ( \Gamma_{f,\bb{z}}^2 \cdot V(z_0,z_1) \right )_\mathbf{0}$$ and so $$\beta_f \geq \left ( \Gamma_{f,\bb{z}}^2 \cdot V(z_0,z_1) \right )_\mathbf{0} + \sum_{ C} \mu^\circ_{{}_{ C}}.$$ In particular, the \nameref{beta0} is true for $f$. \end{thm} } \begin{proof} By Proposition \ref{prop:equiv}, $$\left ( \Gamma_{f,\bb{z}}^2 \cdot V(f)\cdot V(z_0) \right )_\mathbf{0} \ = \ \mu_\mathbf{0} \left ( f_{|_{V(z_0)}} \right ) + \left ( \Gamma_{f,\bb{z}}^2 \cdot V(z_0,z_1) \right )_\mathbf{0}.$$ By assumption, $\Gamma_{f,z_0}^1 = \Gamma_{f,\bb{z}}^2\cdot V(h)$, for some $h\in \mathcal{O}_{\mathcal{U},\mathbf{0}}$.
Then, via \cref{intnums} and the above paragraph, we have \begin{align*} \widetilde b_n(F_{f,\mathbf{0}}) - \widetilde b_{n-1}(F_{f,\mathbf{0}})&=\lambda_{f,z_0}^0 - \lambda_{f,z_0}^1 \\ &= \left ( \Gamma_{f,z_0}^1 \cdot V(f) \right )_\mathbf{0} - \mu_\mathbf{0} \left ( f_{|_{V(z_0)}} \right ) \\ &=\left[\left ( \Gamma_{f,\bb{z}}^2 \cdot V(h)\cdot V(f) \right )_\mathbf{0} - \left ( \Gamma_{f,\bb{z}}^2 \cdot V(f)\cdot V(z_0) \right )_\mathbf{0}\right] + \left ( \Gamma_{f,\bb{z}}^2 \cdot V(z_0,z_1) \right )_\mathbf{0}. \end{align*} As $\left(C\cdot V(z_0)\right)_\mathbf{0} = \operatorname{mult}_\mathbf{0} C$ for all irreducible components $C$ of $\Gamma_{f,\bb{z}}^2 \cap V(f)$, the bracketed quantity above is non-negative. The conclusion follows.\end{proof} \begin{exm}\label[example]{notsmooth} To illustrate the content of \cref{onlythm}, consider the following example. Let $f = (x^3 + y^2 + z^5)z$ on $\mathbb{C}^3$, with coordinate ordering $(x,y,z)$. Then, we have $\Sigma f = V(x^3 + y^2,z)$, and \begin{align*} \Gamma_{f,(x,y)}^2 &= V \left ( \frac{\partial f}{\partial z} \right ) = V(x^3 + y^2 + 6z^5), \end{align*} which we note has an isolated singularity at $\mathbf{0}$. Then, \begin{align*} V \left ( \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z} \right ) &= V(2yz,x^3 + y^2 + 6z^5 ) \\ &= V(y,x^3 + 6z^5) + V(z,x^3 + y^2) \end{align*} so that $\Gamma_{f,x}^1 = V(y,x^3 + 6z^5)$, and $\Lambda_{f,x}^1$ consists of the single component $C = V(z,x^3+y^2)$ with $\stackrel{\circ}{\mu}_C = 1$. It is then immediate that \begin{align*} \Gamma_{f,x}^1 = V(y) \cdot \Gamma_{f,(x,y)}^2, \end{align*} so that the second hypothesis of \cref{onlythm} is satisfied. For the first hypothesis, we note that \begin{align*} \Gamma_{f,(x,y)}^2 \cap V(f) &= V( x^3 + y^2 + 6z^5, (x^3 + y^2 + z^5)z ) \\ &= V(5z^5,x^3+y^2+z^5) \cup V(x^3+y^2,z) \\ &= V(x^3+y^2,z ) = C. \end{align*} Clearly, $C$ is purely 1-dimensional, and is properly intersected by $V(x)$ at $\mathbf{0}$. Finally, we see that \begin{align*} (C \cdot V(x) )_\mathbf{0} &= V(x,z,x^3 +y^2)_\mathbf{0} = 2 = \mult_\mathbf{0} C, \end{align*} so the two hypotheses of \cref{onlythm} are satisfied. By \cref{BettiLe}, \cref{onlythm} guarantees that the following inequality holds: \begin{align*} \lambda_{f,x}^0 - \lambda_{f,x}^1 &\geq \left ( \Gamma_{f,(x,y)}^2 \cdot V(x,y) \right )_\mathbf{0}. \end{align*} Let us verify this inequality ourselves. We have \begin{align*} \lambda_{f,x}^0 &= \left ( \Gamma_{f,x}^1 \cdot V \left ( \frac{\partial f}{\partial x} \right ) \right )_\mathbf{0} = V(y,x^3+6z^5, 3x^2z)_\mathbf{0} \\ &= V(y,x^2,z^5)_\mathbf{0} + V(y,z,x^3)_\mathbf{0} = 13, \end{align*} and \begin{align*} \lambda_{f,x}^1 &= \left ( \Lambda_{f,x}^1 \cdot V(x) \right )_\mathbf{0} = V(x,z,x^3+y^2)_\mathbf{0} = 2. \end{align*} Finally, we compute \begin{align*} \left ( \Gamma_{f,(x,y)}^2 \cdot V(x,y) \right )_\mathbf{0} &= V(x,y,x^3 +y^2 + 6z^5)_\mathbf{0} = 5. \end{align*} Putting this all together, we have \begin{align*} \lambda_{f,x}^0 - \lambda_{f,x}^1 = 11 \geq 5 = \left ( \Gamma_{f,(x,y)}^2 \cdot V(x,y) \right )_\mathbf{0}, \end{align*} as expected. \end{exm} \begin{exm}\label[example]{nothypersurface} We now give an example where the relative polar curve is {\bf not} defined inside $\Gamma_{f,\bb{z}}^2$ by a single equation, and $\widetilde b_n(F_{f,\mathbf{0}}) - \widetilde b_{n-1}(F_{f,\mathbf{0}}) < 0$. Let $f = (z^2 - x^2-y^2)(z-x)$, with coordinate ordering $(x,y,z)$. 
Then, we have $\Sigma f = V(y,z-x)$, and \begin{align*} \Gamma_{f,(x,y)}^2 &= V \left ( \frac{\partial f}{\partial z} \right ) = V(2z(z-x) + (z^2-x^2-y^2) ). \end{align*} Similarly, \begin{align*} V \left ( \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z} \right ) = V(y,3z+x) + 3 V(y,z-x), \end{align*} so that $\Gamma_{f,x}^1 = V(y,3z+x)$ and $\mu^\circ = 3$. It then follows that $\Gamma_{f,x}^1$ is not defined by a single equation inside $\Gamma_{f,(x,y)}^2$. To see that $\widetilde b_2(F_{f,\mathbf{0}}) - \widetilde b_1(F_{f,\mathbf{0}}) < 0$, we note that, up to analytic isomorphism, $f$ is the homogeneous polynomial $f = (zx-y^2)z$. Consequently, we need only consider the global Milnor fiber of $f$, i.e., $F_{f,\mathbf{0}}$ is diffeomorphic to $f^{-1}(1)$. Thus, $F_{f,\mathbf{0}}$ is homotopy equivalent to $S^1$, so that $\widetilde b_2(F_{f,\mathbf{0}}) = 0$ and $\widetilde b_1(F_{f,\mathbf{0}}) = 1$. \end{exm} \begin{cor}\label[corollary]{nonredcor} The \nameref{beta0} is true if the set $\Gamma_{f,\bb{z}}^2$ is smooth and transversely intersected by $V(z_0, z_1)$ at the origin. In particular, the \nameref{beta0} is true for non-reduced plane curve singularities. \end{cor} \begin{proof} Suppose that the cycle $\Gamma_{f,\bb{z}}^2=m[V(\mathfrak p)]$, where $\mathfrak p$ is prime. Since the set $\Gamma_{f,\bb{z}}^2$ is smooth, $A:=\mathcal{O}_{\mathcal{U},\mathbf{0}}/\mathfrak p$ is regular and so, in particular, is a UFD. The image of $\partial f/\partial z_1$ in $A$ factors (uniquely), yielding an $h$ as in hypothesis (2) of Theorem \ref{onlythm}. Furthermore, the transversality of $V(z_0, z_1)$ to $\Gamma_{f,\bb{z}}^2$ at the origin assures us that, by replacing $z_0$ by a generic linear combination $az_0+bz_1$, we obtain hypothesis (1) of Theorem \ref{onlythm}. \end{proof} \begin{exm}\label[example]{SmoothEx} Consider the case where $f = z^2 + (y^2-x^3)^2$ on $\mathbb{C}^3$, with coordinate ordering $(x,y,z)$; a quick calculation shows that $\Sigma f = V(z,y^2-x^3)$. Then, \begin{align*} \Gamma_{f,(x,y)}^2 = V \left ( \frac{\partial f}{\partial z} \right ) = V(z) \end{align*} is clearly smooth at the origin and transversely intersected at $\mathbf{0}$ by the line $V(x,y)$, so the hypotheses of \cref{nonredcor} are satisfied. Again, we want to verify by hand that the inequality \begin{align*} \lambda_{f,x}^0 - \lambda_{f,x}^1 \geq \left ( \Gamma_{f,(x,y)}^2 \cdot V(x,y) \right )_\mathbf{0} \end{align*} holds. First, we have \begin{align*} \lambda_{f,x}^0 &= \left ( \Gamma_{f,x}^1 \cdot V \left ( \frac{\partial f}{\partial x} \right ) \right )_\mathbf{0} = V(y,z,2(y^2-x^3)(-3x^2) )_\mathbf{0} = V(y,z,x^5)_\mathbf{0} = 5, \end{align*} and \begin{align*} \lambda_{f,x}^1 &= \left ( \Lambda_{f,x}^1 \cdot V(x) \right )_\mathbf{0} = V(x,z,y^2-x^3)_\mathbf{0} = V(x,z,y^2)_\mathbf{0} = 2. \end{align*} On the other hand, we have $\left ( \Gamma_{f,(x,y)}^2 \cdot V(x,y) \right )_\mathbf{0} = V(x,y,z)_\mathbf{0} = 1$, and we see again that the desired inequality holds. \end{exm} In the case where $f$ defines a non-reduced plane curve singularity, there is a nice explicit formula for $\beta_f$, which we will derive in \cref{conclude}. \section{Non-reduced Plane Curves}\label{conclude} By \cref{nonredcor}, the \nameref{beta0} is true for non-reduced plane curve singularities. However, in that special case, we may calculate $\beta_f$ explicitly. Let $\mathcal{U}$ be an open neighborhood of the origin in $\mathbb{C}^2$, with coordinates $(x,y)$.
\begin{prop}\label[proposition]{nonred} Suppose that $f$ is of the form $f = g(x,y)^p h(x,y)$, where $g : (\mathcal{U},\mathbf{0}) \to (\mathbb{C},\mathbf{0})$ is irreducible, $g$ does not divide $h$, and $p > 1$. Then, \begin{align*} \beta_f &= \left \{ \begin{matrix} (p+1)V(g,h)_\mathbf{0} + p\mu_\mathbf{0}(g) + \mu_\mathbf{0}(h) -1, && \text{ if $h(\mathbf{0}) = 0$; and} \\ p\mu_\mathbf{0}(g), && \text{ if $h(\mathbf{0}) \neq 0$.} \end{matrix} \right . \end{align*} Thus, $\beta_f = 0$ implies that $\Sigma f$ is smooth at $\mathbf{0}$. \end{prop} \begin{proof} After a possible linear change of coordinates, we may assume that the first coordinate $x$ satisfies $\dim_\bb{0} \Sigma (f_{|_{V(x)}}) = 0$, so that $\dim_\bb{0}V(g,x) = \dim_\bb{0}V(h,x) = 0$ as well. As germs of sets at $\bb{0}$, the critical locus of $f$ is simply $V(g)$. As cycles, \begin{align*} V \left ( \frac{\partial f}{\partial y} \right ) &= \Gamma_{f,x}^1 + \Lambda_{f,x}^1 = V \left ( p h g^{p-1} \frac{\partial g}{\partial y} + g^p \frac{\partial h}{\partial y} \right )\\ &= V \left ( ph\frac{\partial g}{\partial y} + g \frac{\partial h}{\partial y} \right ) + (p-1)V(g), \end{align*} so that $\Gamma_{f,x}^1 = V \left ( ph\frac{\partial g}{\partial y} + g \frac{\partial h}{\partial y} \right ) $ and $\Sigma f$ consists of a single component $C = V(g)$. It is a quick exercise to show that, for $g$ irreducible, $g$ does not divide $\frac{\partial g}{\partial y}$, and so the nearby Milnor number is precisely $\mu^\circ_{{}_C} = (p-1)$ along $V(g)$. Suppose first that $h(\mathbf{0}) = 0$. Then, by \cref{intnums}, \begin{align*} \lambda_{f,x}^0 - \lambda_{f,x}^1 &= \left ( \Gamma_{f,x}^1 \cdot V(f) \right )_\bb{0} - \mu_\bb{0} \left ( f_{|_{V(x)}} \right ). \end{align*} We then expand the terms on the right hand side, as follows: \begin{align*} \left ( \Gamma_{f,x}^1 \cdot V(f) \right )_\bb{0} &= p \left (\Gamma_{f,x}^1 \cdot V(g) \right )_\bb{0} + \left (\Gamma_{f,x}^1 \cdot V(h) \right )_\bb{0} \\ &= p V \left( g,h\frac{\partial g}{\partial y} \right )_\bb{0} + V \left (h,g \frac{\partial h}{\partial y} \right )_\bb{0} \\ &= (p+1)V(g,h)_\bb{0} + pV\left (g,\frac{\partial g}{\partial y} \right )_\bb{0} + V \left (h,\frac{\partial h}{\partial y} \right )_\bb{0}. \end{align*} Since $\dim_\bb{0} V(g,x) = 0$ and $\dim_\bb{0} V(h,x) = 0$, the relative polar curves of $g$ and $h$ with respect to $x$ are, respectively, $\Gamma_{g,x}^1 = V \left (\frac{\partial g}{\partial y} \right )$ and $\Gamma_{h,x}^1 = V \left (\frac{\partial h}{\partial y} \right )$. We can therefore apply Teissier's trick to this last equality to obtain \begin{align*} \left ( \Gamma_{f,x}^1 \cdot V(f) \right )_\bb{0} &= (p+1)V(g,h)_\bb{0} + p \left [ V \left (\frac{\partial g}{\partial y},x \right )_\bb{0} + \mu_\bb{0}(g) \right ] + \left [ V\left (\frac{\partial h}{\partial y},x \right )_\bb{0} + \mu_\bb{0}(h) \right ] \\ &= (p+1)V(g,h)_\bb{0} + p\mu_\bb{0}(g) + \mu_\bb{0}(h) + pV(g,x)_\bb{0} + V(h,x)_\bb{0} - (p+1). \end{align*} Next, we calculate the Milnor number of the restriction of $f$ to $V(x)$: \begin{align*} \mu_\bb{0} \left ( f_{|_{V(x)}} \right ) &= V \left ( \frac{\partial f}{\partial y},x \right )_\bb{0} = \left (\Gamma_{f,x}^1 \cdot V(x) \right )_\bb{0} + (p-1)V(g,x)_\bb{0}.
\end{align*} Substituting these equations back into our initial identity, we obtain the following: \begin{align*} \lambda_{f,x}^0 - \lambda_{f,x}^1 &= (p+1)V(g,h)_\bb{0} + V(g,x)_\bb{0} + V(h,x)_\bb{0} \\ &+ p\mu_\bb{0}(g) + \mu_\bb{0}(h) - \left (\Gamma_{f,x}^1 \cdot V(x) \right )_\bb{0} - (p+1). \end{align*} We now wish to show that $\left (\Gamma_{f,x}^1 \cdot V(x) \right )_\mathbf{0} = V(gh,x)_\bb{0} -1$. To see this, we first recall that \begin{align*} \left ( \Gamma_{f,x}^1 \cdot V(x) \right )_\mathbf{0} &= \mult_y \left \{ \left (p h \cdot \frac{\partial g}{\partial y} \right )_{|_{V(x)}} + \left (g \cdot \frac{\partial h}{\partial y} \right )_{|_{V(x)}} \right \}, \end{align*} where $g_{|_{V(x)}}$ and $h_{|_{V(x)}}$ are (convergent) power series in $y$ with constant coefficients. If the lowest-degree terms in $y$ of $\left (p h \frac{\partial g}{\partial y} \right )_{|_{V(x)}}$ and $\left (g \frac{\partial h}{\partial y} \right )_{|_{V(x)}}$ do not cancel each other out, then the $y$-multiplicity of their sum is the minimum of their respective $y$-multiplicities, both of which equal $V(gh,x)_\mathbf{0} -1$. We must show that no such cancellation can occur. To this end, let $g_{|_{V(x)}} = \sum_{i \geq n} a_i y^i$ and $h_{|_{V(x)}} = \sum_{i \geq m} b_i y^i$ be power series representations in $y$, where $n = \mult_y g_{|_{V(x)}}$ and $m = \mult_y h_{|_{V(x)}}$ (so that $a_n,b_m \neq 0$). Then, a quick computation shows that the lowest-degree term of $\left (p h \frac{\partial g}{\partial y} \right )_{|_{V(x)}}$ is $pn \, a_n b_m \, y^{n+m-1}$, and the lowest-degree term of $\left (g \frac{\partial h}{\partial y} \right )_{|_{V(x)}}$ is $m \, a_n b_m \, y^{n+m-1}$. Consequently, no cancellation occurs, and thus $\left (\Gamma_{f,x}^1 \cdot V(x) \right )_\mathbf{0} = V(gh,x)_\mathbf{0} -1 = n+m-1$. Therefore, we conclude that \begin{align*} \beta_f &= (p+1)V(g,h)_\bb{0} + p\mu_\bb{0}(g) + \mu_\bb{0}(h) -1. \end{align*} Since $V(g)$ and $V(h)$ have a non-empty intersection at $\bb{0}$, the intersection number $V(g,h)_\bb{0}$ is at least one (so that $\beta_f > 0$). Suppose now that $h(\mathbf{0}) \neq 0$. Then, from the above calculations, we find \begin{align*} \left ( \Gamma_{f,x}^1 \cdot V(f) \right )_\mathbf{0} &= p \mu_\mathbf{0}(g) + p V(g,x)_\mathbf{0} - p, \text{ and } \\ \mu_\mathbf{0} \left ( f_{|_{V(x)}} \right ) &= p V(g,x)_\mathbf{0} - 1 \end{align*} so that $\beta_f = p \mu_\mathbf{0}(g)$. Recall that, as $\Sigma f = V(g)$, the critical locus of $f$ is smooth at $\bb{0}$ if and only if $V(g)$ is smooth at $\bb{0}$; equivalently, if and only if the Milnor number of $g$ at $\bb{0}$ vanishes. Hence, when $\Sigma f$ is \emph{not} smooth at $\bb{0}$, $\mu_\bb{0}(g) > 0$, and we find that $\beta_f > 0$, as desired. \end{proof} \begin{rem} Suppose that $f(x,y)$ is of the form $f = gh$, where $g$ and $h$ are relatively prime, and both have isolated critical points at the origin. Then, $f$ has an isolated critical point at $\bb{0}$ as well, and the same computation in \cref{nonred} (for $\mu_\mathbf{0}(f)$ instead of $\beta_f$) yields the formula \begin{align*} \mu_\bb{0}(f) = 2V(g,h)_\bb{0} + \mu_\bb{0}(g) + \mu_\bb{0}(h) -1. \end{align*} Thus, the formula for $\beta_f$ in the non-reduced case collapses to the ``expected value'' of $\mu_\mathbf{0}(f)$ exactly when $p=1$ and $f$ has an isolated critical point at the origin. \end{rem} \printbibliography \end{document}
math
\begin{document} \title{On the derivative of the $\alpha$-Farey-Minkowski function} \author{Sara Munday} \address{Fachbereich 3 - Mathematik und Informatik, Universität Bremen, Bibliothekstr. 1, D-28359 Bremen, Germany} \email{[email protected]} \ \begin{abstract} In this paper we study the family of $\alpha$-Farey-Minkowski functions $\theta_\alpha$, for an arbitrary countable partition $\alpha$ of the unit interval with atoms which accumulate only at the origin, which are the conjugating homeomorphisms between each of the $\alpha$-Farey systems and the tent map. We first show that each function $\theta_\alpha$ is singular with respect to the Lebesgue measure and then demonstrate that the unit interval can be written as the disjoint union of the following three sets: $\Theta_0:=\{x\in[0,1]:\theta_\alpha'(x)=0\}, \ \Theta_\infty:=\{x\in[0,1]:\theta_\alpha'(x)=\infty\}\ \text{ and } \Theta_\sim:=[0,1]\setminus(\Theta_0\cup\Theta_\infty)$. The main result is that \[ \dim_{\mathrm{H}}(\Theta_\infty)=\dim_{\mathrm{H}}(\Theta_\sim)=\sigma_\alpha(\log2)<\dim_{\mathrm{H}}(\Theta_0)=1, \] where $\sigma_\alpha(s)$ denotes the Hausdorff dimension of the level set $\{x\in [0,1]:\Lambda(F_\alpha, x)=s\}$ and $\Lambda(F_\alpha, x)$ is the Lyapunov exponent of the map $F_\alpha$ at the point $x$. The proof of the theorem employs the multifractal formalism for $\alpha$-Farey systems. \end{abstract} \maketitle \section{Introduction and Statement of Results} The aim of this paper is to study the family of $\alpha$-Farey-Minkowski maps, which we denote by $\theta_\alpha$, where $\alpha:=\{A_n:n\in\mathbb{N}\}$ denotes a countable partition of the unit interval into non-empty, right-closed and left-open intervals. These maps were first introduced in \cite{KMS}. In that paper, the $\alpha$-Farey and $\alpha$-L\"uroth systems were also introduced and investigated. We will provide some details of these systems in Section 2, but let us simply mention now that for a given partition $\alpha$, the $\alpha$-Farey-Minkowski map $\theta_\alpha$ is the conjugating homeomorphism between the $\alpha$-Farey map $F_\alpha$ and the tent map $T$. This means that $\theta_\alpha$ is a homeomorphism of the unit interval such that $\theta_\alpha\circ F_\alpha= T\circ \theta_\alpha$. Our first result is that for every partition $\alpha$ (with the exception of the dyadic partition $\alpha_D$, which is defined by $\alpha_D:=\{(1/2^n, 1/2^{n-1}]:n\in\mathbb{N}\}$), if the derivative $\theta_\alpha'(x)$ exists {\em in a generalised sense}, meaning that it either exists in the usual sense or is equal to $\infty$, then \[ \theta_\alpha'(x)\in \{0, \infty\}. \] For the dyadic partition, since the map $F_{\alpha_D}$ can easily be seen to coincide with the tent map, the map $\theta_{\alpha_D}$ is nothing other than the identity map on $[0,1]$. We then show that from this it follows that for an arbitrary non-dyadic partition $\alpha$, the map $\theta_\alpha$ is singular with respect to the Lebesgue measure $\lambda$. In other words, we have that for $\lambda$-a.e.
$x\in[0,1]$, the derivative $\theta_\alpha'(x)$ exists and is equal to zero. Consequently, the unit interval can be split into three pairwise disjoint sets $\Theta_0, \Theta_\infty$ and $\Theta_{\sim}$, which are defined as follows: \[ \Theta_0:=\{x\in[0,1]:\theta_\alpha'(x)=0\}, \ \Theta_\infty:=\{x\in[0,1]:\theta_\alpha'(x)=\infty\}\ \text{ and } \Theta_\sim:=[0,1]\setminus(\Theta_0\cup\Theta_\infty). \] It is immediate from the results stated above that \[ \lambda(\Theta_0)=\dim_{\mathrm{H}}(\Theta_0)=1, \] where $\dim_{\mathrm{H}}(A)$ denotes the Hausdorff dimension of a set $A\subseteq \mathbb{R}$. For all the remaining results of the paper, we must restrict the class of partitions to those that are either expanding or expansive of exponent $\tau\geq 0$ and eventually decreasing (the relevant definitions are given in Section 3 below). The first main result of the paper is concerned with relating the derivative of $\theta_\alpha$ to the sets $\mathcal{L}(s)$, which are defined as follows: \[ \mathcal{L}(s):=\left\{x\in [0,1]:\lim_{n\to\infty}\frac{\log(\lambda(I^{(\alpha)}_n(x)))}{-n}=s\right\}, \] where $I_n^{(\alpha)}(x)$ refers to the unique $\alpha$-Farey cylinder set containing the point $x$ (see Section 2 for the precise definition). These sets are only non-empty for $s$ inside the interval $[s_-, s_+]$, where $s_-:=\inf\{-\log(a_n)/n:n\in\mathbb{N}\}$ and $s_+:=\sup\{-\log(a_n)/n:n\in\mathbb{N}\}$. We obtain that if $s\in [s_-, \log2)$, then $\mathcal{L}(s)\subset \Theta_\infty$, whereas if $s\in(\log 2, s_+]$, then $\mathcal{L}(s)\subset \Theta_0$. The significance of the value $\log2$ is that it is the topological entropy of each map $F_\alpha$. The second main result of this paper is to employ the multifractal results obtained in \cite{KMS} to calculate the Hausdorff dimensions of the sets $\Theta_\infty$ and $\Theta_\sim$. We have the following theorem. \begin{thm}\label{mainthm} \[ \dim_{\mathrm{H}}(\Theta_\infty)=\dim_{\mathrm{H}}(\Theta_\sim)=\dim_{\mathrm{H}}(\mathcal{L}(\log2))<\dim_{\mathrm{H}}(\Theta_0)=1. \] \end{thm} This theorem is proved by employing the results obtained for the Hausdorff dimension of the Lyapunov spectrum of $F_\alpha$ in \cite{KMS}, after first observing that the set $\mathcal{L}(s)$ coincides, up to a countable set of points, with the set $\{x\in [0,1]:\Lambda(F_\alpha, x)=s\}$, where $\Lambda(F_\alpha, x)$ refers to the Lyapunov exponent of the map $F_\alpha$ at the point $x$. All the necessary definitions and results are recalled at the start of Section 4. \section{The $\alpha$-L\"uroth and $\alpha$-Farey systems, and the function $\theta_\alpha$} In this section, we wish to remind the reader of the definition and some basic properties of the $\alpha$-L\"uroth and $\alpha$-Farey systems, which were introduced in \cite{KMS} (let us also mention that the $\alpha$-L\"uroth systems are a particular class of generalised L\"uroth system, as introduced in \cite{BBDK}).
Recall from the introduction that $\alpha:=\{A_n:n\in\mathbb{N}\}$ denotes a countably infinite partition of the unit interval $[0,1]$, consisting of non-empty, right-closed and left-open intervals, and let $a_n:=\lambda(A_n)$ and $t_n:=\sum_{k=n}^\infty a_k$. It is assumed throughout that the elements of $\alpha$ are ordered from right to left, starting from $A_1$, and that these elements accumulate only at the origin. Then, for a given partition $\alpha$, the {\em $\alpha$-L\"uroth map} $L_\alpha:[0,1]\to[0,1]$ is defined to be \[ L_{\alpha}(x):= \left\{ \begin{array}{ll} ({t_n-x})/a_n & \text{ for }x\in A_n,\ n\in\mathbb{N};\\ 0 & \hbox{ if } x=0. \end{array} \right. \] Each map $L_\alpha$ allows us to obtain a representation of the numbers in $[0,1]$. We will refer to this expansion as the {\em $\alpha$-L\"uroth expansion}. As shown in \cite{KMS}, for each $x\in (0, 1]$, the finite or infinite sequence $(\ell_k)_{k\geq1}$ of positive integers is determined by $L_{\alpha}^{k-1}(x) \in A_{\ell_{k}}$, where the sequence terminates in $k$ if and only if $L_{\alpha}^{k-1}(x)=t_n$, for some $n\geq2$. Then the $\alpha$-L\"{u}roth expansion of $x$ is given as follows, where the sum is supposed to be finite if the sequence is finite: \[ x= \sum_{n=1}^\infty(-1)^{n-1}\left(\textstyle\prod\limits_{i<n}a_{\ell_i}\right) t_{\ell_n}=t_{\ell_1}-a_{\ell_1}t_{\ell_2}+a_{\ell_1}a_{\ell_2}t_{\ell_3}+\cdots. \] In this situation we then write $x=[ \ell_1, \ell_2, \ell_3, \ldots]_{\alpha}$ for a point $x\in[0,1]$ with an infinite $\alpha$-L\"uroth expansion and $x=[\ell_1, \ldots, \ell_k]_\alpha$ for a finite $\alpha$-L\"uroth expansion. It is easy to see that every infinite expansion is unique, whereas each $x\in(0,1)$ with a finite $\alpha$-L\"uroth expansion can be expanded in exactly two ways. Namely, one immediately verifies that $x=[\ell_1, \ldots, \ell_k, 1]_\alpha=[\ell_1, \ldots, \ell_{k-1}, (\ell_k +1)]_\alpha$. By analogy with continued fractions, for which a number is rational if and only if it has a finite continued fraction expansion, we say that $x\in[0,1]$ is an \textit{$\alpha$-rational number} when $x$ has a finite $\alpha$-L\"uroth expansion and say that $x$ is an \textit{$\alpha$-irrational number} otherwise. We will now define the cylinder sets associated with the map $L_\alpha$. For each $k$-tuple $(\ell_1, \ldots, \ell_k)$ of positive integers, define the {\em $\alpha$-L\"uroth cylinder set} $C_\alpha (\ell_1, \ldots, \ell_k)$ associated with the $\alpha$-L\"{u}roth expansion to be \[ C_\alpha (\ell_1, \ldots, \ell_k):=\{[ y_1, y_2, \ldots ]_\alpha:y_i=\ell_i\text{ for } 1\leq i\leq k\}. \] Observe that these sets are closed intervals with endpoints given by $[\ell_1, \ldots, \ell_k]_\alpha$ and $[\ell_1, \ldots, (\ell_k +1)]_\alpha$. If $k$ is even, it follows that $[\ell_1, \ldots, \ell_k]_\alpha$ is the left endpoint of this interval. Likewise, if $k$ is odd, $[\ell_1, \ldots, \ell_k]_\alpha$ is the right endpoint.
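As a quick illustration of these conventions, one checks directly from the formulae above that for $k=1$ the cylinder $C_\alpha(n)$ is simply the closure of the partition element $A_n$, that is, $C_\alpha(n)=[t_{n+1},t_n]$, while for $k=2$ one obtains
\[
C_\alpha(\ell_1,\ell_2)=\left[\, t_{\ell_1}-a_{\ell_1}t_{\ell_2},\ t_{\ell_1}-a_{\ell_1}t_{\ell_2+1}\,\right],
\]
an interval of length $a_{\ell_1}(t_{\ell_2}-t_{\ell_2+1})=a_{\ell_1}a_{\ell_2}$.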
For the Lebesgue measure of these sets we have that
\[
\lambda(C_\alpha(\ell_1, \ldots, \ell_k))=a_{\ell_1}\ldots a_{\ell_k}.
\]
Let us now recall some details of the $\alpha$-Farey map, $F_\alpha:[0,1]\to[0,1]$. For a given partition $\alpha$, the map $F_{\alpha}:[0,1] \to [0,1]$ is given by
\[
F_{\alpha}(x):=\left\{ \begin{array}{ll} (1-x)/a_1 & \hbox{if $x\in A_1$,} \\ {a_{n-1}}(x-t_{n+1})/a_{n}+t_n & \hbox{if $x\in A_n$, for $n\geq2$,} \\ 0 & \hbox{if $x=0$.} \end{array} \right.
\]
An example of the graph of an $\alpha$-Farey and an $\alpha$-L\"uroth map is shown in Figure 2.1, for the specific example of the harmonic partition, $\alpha_H:=\{(1/(n+1), 1/n]:n\in\mathbb{N}\}$. For another specific example, consider the dyadic partition $\alpha_D:=\left\{\left(1/2^{n},1/2^{n-1}\right]:n\in\mathbb{N}\right\}$. One can immediately verify that the map $F_{\alpha_D}$ coincides with the tent map $T:[0,1]\to[0,1]$, which is given by
\[
T(x):=\left\{ \begin{array}{ll} 2x, & \hbox{for $x\in[0,1/2)$;} \\ 2-2x, & \hbox{for $x\in [1/2, 1]$.} \end{array} \right.
\]
To see this, it is enough to note that for each $n\in\mathbb{N}$ we have that $a_n=2^{-n}$ and $t_n=2^{-(n-1)}$.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=0.42\textwidth]{altluroth}\hspace{0.1\textwidth} \includegraphics[width=0.42\textwidth]{altfarey}
\caption{The $\alpha_H$-L\"uroth and $\alpha_H$-Farey map, where $t_{n}=1/n$, $n\in \mathbb{N}$.}\label{fig:ClassicalLF}
\end{center}
\end{figure}
Let us now describe how to construct a Markov partition ${\mathcal{A}}$ from the partition $\alpha$, and its associated coding for the map $F_\alpha$. (For the definition of a Markov partition, see, for instance, \cite{tseng}.) The partition $\mathcal{A}$ is given by the closed intervals $\{A, B\}$, where $A:=\overline{A_1}$ and $B:=[0,1]\setminus A_1$. Each $\alpha$-irrational number in $[0,1]$ has an infinite coding $x=\langle x_{1}, x_{2},\ldots \rangle_{\alpha} \in \{0,1\}^{\mathbb{N}}$, which is given by $x_k=1$ if and only if $F_{\alpha}^{k-1}(x)\in \mathrm{Int}(A)$ for each $k \in \mathbb{N}$. This coding will be referred to as the $\alpha$-Farey coding. If an $\alpha$-irrational number $x\in[0,1]$ has $\alpha$-L\"uroth coding given by $x=[\ell_1, \ell_2, \ell_3, \ldots]_\alpha$, then the $\alpha$-Farey coding of $x$ is given by $x=\langle0^{\ell_1-1},1,0^{\ell_2-1},1,0^{\ell_3-1},1,\ldots\rangle_\alpha$, where $0^{n}$ denotes the sequence of $n$ consecutive appearances of the symbol $0$, whereas for each $\alpha$-rational number $x=[\ell_1, \ell_2,\ldots, \ell_k]_\alpha$, one immediately verifies that this number has an $\alpha$-Farey coding given either by
\[
x=\langle 0^{\ell_1-1},1,0^{\ell_2-1},1,\ldots,0^{\ell_k-1},1,0,0,0,\ldots\rangle_\alpha
\]
or
\[
x=\langle 0^{\ell_1-1},1,0^{\ell_2-1},1,\ldots,0^{\ell_k-2},1,1,0,0,0,\ldots\rangle_\alpha.
\]
Let us now define the cylinder sets associated with the map $F_\alpha$. These coincide with the refinements $\mathcal{A}^n$ of the partition $\mathcal{A}$ for $F_\alpha$.
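Before doing so, let us record, purely for the reader's convenience, the short computation behind the claim above that $F_{\alpha_D}$ coincides with the tent map $T$ (this verification is not taken from \cite{KMS}; it follows immediately from the definitions). For $x\in A_n$ with $n\geq2$ we have
\[
F_{\alpha_D}(x)=\frac{a_{n-1}}{a_n}\left(x-t_{n+1}\right)+t_n=2\left(x-2^{-n}\right)+2^{-(n-1)}=2x,
\]
while for $x\in A_1=(1/2,1]$ we have $F_{\alpha_D}(x)=(1-x)/a_1=2-2x$; in both cases this agrees with the tent map $T$.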
For each $n$-tuple $(x_1, \ldots, x_n)\in\{0,1\}^n$, define the \textit{$\alpha$-Farey cylinder set } $\widehat{C}_{\alpha}(x_{1},\ldots,x_{n})$ by setting
\[
\widehat{C}_{\alpha}(x_{1},\ldots,x_{n}):=\{\langle y_{1},y_{2}, \ldots \rangle_{\alpha}: y_{k}=x_{k}, \text{ for }1\leq k\leq n\}.
\]
Notice that every $\alpha$-L\"uroth cylinder set is also an $\alpha$-Farey cylinder set, whereas the converse of this statement is not true. The precise description of the correspondence is that any $\alpha$-Farey cylinder set which has the form $\widehat{C}_\alpha(0^{\ell_{1}-1},1, \ldots ,0^{\ell_{k}-1},1)$ coincides with the $\alpha$-L\"uroth cylinder set $C_\alpha(\ell_1, \ldots, \ell_k)$, but if an $\alpha$-Farey cylinder set is defined by a finite word ending in the symbol $0$, then it cannot be translated to a single $\alpha$-L\"uroth cylinder set. However, we do have the relation
\[
\widehat{C}_\alpha(0^{\ell_{1}-1},1,0^{\ell_{2}-1},1, \ldots ,0^{\ell_{k}-1},1,0^{m})=\bigcup_{n\geq m+1}C_\alpha(\ell_1, \ell_2, \ldots, \ell_k, n).
\]
It therefore follows that for the Lebesgue measure of this interval we have that
\begin{eqnarray*}
\lambda(\widehat{C}_\alpha(0^{\ell_{1}-1},1,0^{\ell_{2}-1},1, \ldots ,0^{\ell_{k}-1},1,0^{m}))&=& \sum_{n\geq m+1}\lambda(C_\alpha(\ell_1, \ell_2, \ldots, \ell_k, n))\\&=&a_{\ell_1}a_{\ell_2}\cdots a_{\ell_k}t_{m+1}.
\end{eqnarray*}
In addition, we can identify the endpoints of each $\alpha$-Farey cylinder set. If we consider the cylinder set $\widehat{C}_\alpha(0^{\ell_{1}-1},1,\ldots, 0^{\ell_{k}-1},1)$, then we already know the endpoints of this interval (since it is also equal to an $\alpha$-L\"uroth cylinder set). On the other hand, the endpoints of the set $\widehat{C}_\alpha(0^{\ell_{1}-1},1,0^{\ell_{2}-1},1,\ldots,0^{\ell_{k}-1},1, 0^{m})$ are given by $[\ell_1, \ldots, \ell_{k}, m+1]_\alpha$ and $[\ell_1, \ldots, \ell_{k}]_\alpha$. The following result concerning the $\alpha$-Farey system and the tent system was obtained in \cite{KMS}. Before stating it, we remind the reader that the measure of maximal entropy $\mu_{\alpha}$ for the system $F_\alpha$ is the measure that assigns mass $2^{-n}$ to each $n$-th level $\alpha$-Farey cylinder set, for each $n\in\mathbb{N}$. Also, we recall that the distribution function $\Delta_\mu$ of a measure $\mu$ with support in $[0,1]$ is defined for each $x\in[0,1]$ by
\[
\Delta_\mu(x):=\mu([0,x)).
\]
\begin{lem}[\cite{KMS}, Lemma 2.2]\label{KMSlem}
The dynamical systems $([0,1], {F}_{\alpha})$ and $([0,1], T)$ are topologically conjugate and the conjugating homeomorphism is given, for each $x=[ \ell_1, \ell_2, \ldots]_{\alpha}$, by
\[
{\theta_{\alpha}}(x):=-2\sum_{k=1}^\infty(-1)^k2^{-\sum_{i=1}^k \ell_i}.
\]
Moreover, the map $\theta_{\alpha}$ is equal to the distribution function of the measure of maximal entropy $\mu_{\alpha}$ for the $\alpha$-Farey map.
\end{lem}
Let us remark that this map should be seen as an analogue of Minkowski's question-mark function, which was originally introduced by Minkowski \cite{min} in order to illustrate the Lagrange property of algebraic numbers of degree two. Indeed, the only difference between the two definitions is that the continued fraction entries appearing in Minkowski's function are replaced in $\theta_\alpha$ by the $\alpha$-L\"uroth entries. For this reason, we refer to the map $\theta_\alpha$ as the {\em $\alpha$-Farey-Minkowski function}.
\section{Differentiability properties of $\theta_\alpha$}
In this section we will give a series of simple lemmas that describe the differentiability properties of the function $\theta_\alpha$ for an arbitrary partition $\alpha$. The results turn out to match those for the Minkowski question-mark function, although a little care must be taken when dealing with certain partitions. Most of the proofs here are modelled after the corresponding proofs in \cite{mink?}. Before we begin, though, let us point out that (as mentioned above) the tent map itself is an example of an $\alpha$-Farey map, coming from the dyadic partition $\alpha_D:=\{(1/2^n, 1/2^{n-1}]:n\in\mathbb{N}\}$. Obviously, then, the map $\theta_{\alpha_D}$ which conjugates the map $F_{\alpha_D}$ and the tent map is simply the identity, so in this case the derivative of $\theta_{\alpha_D}$ is identically equal to 1. In what follows, unless otherwise stated, $\alpha$ is understood to be an arbitrary partition of the form detailed in the introduction, but we also assume that $\alpha$ is {\em non-dyadic}, that is, we assume that $\alpha$ is not equal to the partition $\alpha_D$. In order to state the first lemma, we must first make the following definition. For an $\alpha$-irrational number $x\in[0,1]$ and for each $n\in\mathbb{N}$, define the interval $I_n^{(\alpha)}(x)$ to be the unique $n$-th level $\alpha$-Farey cylinder set that contains the point $x$. Let us also remind the reader here that we use the phrase ``exists in a generalised sense'' to mean ``exists or is equal to infinity''.
\begin{lem}\label{lemma1}
Suppose that $x\in[0,1]$ is such that $\theta_\alpha'(x)$ exists in a generalised sense. We then have that:
\begin{itemize}
\item [(a)] If $x=[\ell_1, \ell_2, \ldots]_\alpha$ is an $\alpha$-irrational number, then
\[
\theta_\alpha'(x)=\lim_{n\to\infty}\frac{2^{-n}}{\lambda\left(I_n^{(\alpha)}(x)\right)}.
\]
\item [(b)] If $x=[\ell_1, \ldots, \ell_k]_\alpha$ is an $\alpha$-rational number, then
\[
\theta_\alpha'(x)=\frac{2\cdot 2^{-(\ell_1+\cdots +\ell_k)}}{a_{\ell_1}\ldots a_{\ell_k}}\lim_{m\to\infty}\frac{2^{-m}}{t_m}.
\]
\end{itemize}
\end{lem}
\begin{proof}
To prove part (a), let $x$ be an $\alpha$-irrational number such that $\theta_\alpha'(x)$ exists in a generalised sense. Then for every sequence $(y_n)_{n\geq1}$ decreasing or increasing to $x$, we have that
\[
\lim_{n\to\infty}\frac{\theta_\alpha(y_n)-\theta_\alpha(x)}{y_n-x}=\theta_\alpha'(x).
\]
In particular this holds if we consider the sequences of endpoints of the intervals $I_n^{(\alpha)}(x):=[L_n, R_n]$ which approach $x$ from the left and right, respectively.
So, letting $A_n:=\theta_\alpha(x)-\theta_\alpha(L_n)$, $B_n:=x-L_n$, $C_n:=\theta_\alpha(R_n)-\theta_\alpha(x)$ and $D_n:=R_n-x$, we have that
\[
\lim_{n\to\infty}\frac{A_n}{B_n}=\lim_{n\to\infty}\frac{C_n}{D_n}=\theta_\alpha'(x).
\]
It then follows easily that
\[
\lim_{n\to\infty}\frac{2^{-n}}{\lambda(I_n^{(\alpha)}(x))}=\lim_{n\to\infty}\frac{\theta_\alpha(R_n)-\theta_\alpha(L_n)}{R_n-L_n}=\lim_{n\to\infty}\frac{A_n+C_n}{B_n+D_n}=\theta_\alpha'(x).
\]
This finishes the proof of part (a). For part (b), let $x=[\ell_1, \ldots, \ell_k]_\alpha$ and again suppose that $\theta_\alpha'(x)$ exists in a generalised sense. Then, just as in the $\alpha$-irrational case above, for the sequence $([\ell_1, \ldots, \ell_k, m]_\alpha)_{m\geq1}$ which approaches the point $x$, we have that
\[
\lim_{m\to\infty}\frac{\theta_\alpha([\ell_1, \ldots, \ell_k, m]_\alpha)-\theta_\alpha([\ell_1, \ldots, \ell_k]_\alpha)}{[\ell_1, \ldots, \ell_k, m]_\alpha-[\ell_1, \ldots, \ell_k]_\alpha} =\lim_{m\to\infty}\frac{2\cdot 2^{-(\ell_1+\cdots+\ell_k+m)}}{a_{\ell_1}\ldots a_{\ell_k}t_{m}}=\frac{2\cdot 2^{-(\ell_1+\cdots+\ell_k)}}{a_{\ell_1}\ldots a_{\ell_k}}\lim_{m\to\infty}\frac{2^{-m}}{t_m}.
\]
\end{proof}
We now come to the question of the particular values the derivative of $\theta_\alpha$ may take, if it exists. The answer is given in Proposition \ref{0inf} below, but before we can get there we need the following two lemmas.
\begin{lem}\label{0infnot1/2}
Suppose that the partition $\alpha$ is such that $a_1\neq 1/2$. Let $x$ be an $\alpha$-irrational number with the property that $\theta_\alpha'(x)$ exists in a generalised sense. Then,
\[
\theta_\alpha'(x)\in\{0, \infty\}.
\]
\end{lem}
\begin{proof}
Let $x$ be an $\alpha$-irrational number and suppose that $\theta_\alpha'(x)$ exists in a generalised sense. By Lemma \ref{lemma1}, it follows that
\[
\theta_\alpha'(x)=\lim_{n\to\infty}\frac{2^{-n}}{\lambda\left(I_n^{(\alpha)}(x)\right)}.
\]
Suppose, by way of contradiction, that $\theta_\alpha'(x)=c$, for some $0<c<\infty$. Then, it follows that
\[
\lim_{n\to\infty}\frac{2^{-n}}{\lambda\left(I_n^{(\alpha)}(x)\right)}\cdot\frac{\lambda\left(I_{n-1}^{(\alpha)}(x)\right)}{2^{-(n-1)}}=1
\]
and consequently,
\begin{eqnarray}\label{star}
\lim_{n\to\infty}\frac{\lambda\left(I_{n-1}^{(\alpha)}(x)\right)}{\lambda\left(I_n^{(\alpha)}(x)\right)}=2.
\end{eqnarray}
Since $x$ is an $\alpha$-irrational number, it follows that, infinitely often, the $n$-th $\alpha$-Farey cylinder set containing the point $x$ is also an $\alpha$-L\"uroth cylinder set. More specifically, where $x=[\ell_1, \ell_2, \ell_3, \ldots]_\alpha$, these are the following sets:
\[
I_{\ell_1}^{(\alpha)}(x),I_{\ell_1+\ell_2}^{(\alpha)}(x), I_{\ell_1+\ell_2+\ell_3}^{(\alpha)}(x), \ldots.
\]
Recall that we have $\lambda\left(I_{\sum_{i=1}^n\ell_i}^{(\alpha)}(x)\right)=a_{\ell_1}\ldots a_{\ell_n}$.
Furthermore,
\[
\lambda\left(I_{(\sum_{i=1}^n\ell_i)+1}^{(\alpha)}(x)\right)=\left\{ \begin{array}{ll} a_{\ell_1}\ldots a_{\ell_n}a_1, & \hbox{if $\ell_{n+1}=1$;} \\ a_{\ell_1}\ldots a_{\ell_n}t_2, & \hbox{if $\ell_{n+1}>1$.} \end{array} \right.
\]
Thus, for each $n\in\mathbb{N}$, the quotient $\lambda\left(I_{\sum_{i=1}^n\ell_i}^{(\alpha)}(x)\right)/\lambda\left(I_{(\sum_{i=1}^n\ell_i)+1}^{(\alpha)}(x)\right)$ is either equal to $1/a_1$ or $1/t_2$. Given that $a_1\neq1/2$, neither $1/a_1$ nor $1/t_2$ can be equal to 2. This contradicts (\ref{star}) and the proof is finished.
\end{proof}
The above proof is closely modelled on the corresponding result for the Minkowski question-mark function given in \cite{mink?}. The problem with the situation where $a_1=1/2$ can be overcome with the help of the next lemma.
\begin{lem}\label{0inf1/2}
Suppose that there exists some proper subset $M\subset\mathbb{N}$ such that for all $i\in M$, the partition $\alpha$ satisfies $a_i=2^{-i}$ and for all $i\in \mathbb{N}\setminus M$, we have that $a_i\neq 2^{-i}$. Define the sets
\[
B_{M, N}:=\{x\in[0,1]:\ell_i(x)\in M \text{ for all } i\geq N\} \ \text{ and }\ B_M:=\bigcup_{N\in\mathbb{N}}B_{M, N}.
\]
Then, if $x\in B_M$, we have that $\theta_\alpha'(x)$ does not exist.
\end{lem}
\begin{proof}
We will prove the lemma by considering two separate cases. The first case is that $a_1\neq 1/2$, or, in other words, the set $M$ does not contain the number 1. Fix $N\in\mathbb{N}$ and suppose, by way of contradiction, that $x\in B_{M, N}$ and that the derivative $\theta_\alpha'(x)$ does exist in a generalised sense. Then, by Lemma \ref{lemma1}, we know that
\[
\theta_\alpha'(x)=\lim_{n\to\infty}\frac{2^{-n}}{\lambda\left(I_n^{(\alpha)}(x)\right)}.
\]
Also, since $a_1\neq 1/2$, we know that $\ell_i\neq 1$ for all $i\geq N$. Therefore, in the $\alpha$-Farey coding for $x$, after $\sum_{i=1}^{N-1}\ell_i$ entries, every occurrence of a 1 is followed directly by a 0. Let us choose two subsequences from the sequence $\left(2^{-n}/\lambda\left(I_n^{(\alpha)}(x)\right)\right)_{n\geq1}$. For the first, pick out every $n\geq \sum_{i=1}^{N-1}\ell_i$ such that the $\alpha$-Farey interval $I_n^{(\alpha)}(x)$ is also an $\alpha$-L\"uroth interval (that is, pick out the elements of the sequence that correspond to stopping at every 1 in the $\alpha$-Farey code of $x$). For the second, take the subsequence that corresponds to shifting the first subsequence by exactly one place forward.
Therefore we have the following two sequences, which, according to Lemma \ref{lemma1}, ought to have the same limit:
\[
\frac{2^{-\sum_{i=1}^{N-1}\ell_i}}{a_{\ell_1}\ldots a_{\ell_{N-1}}}\left(\frac{2^{-\ell_N}}{a_{\ell_N}}, \frac{2^{-(\ell_N+\ell_{N+1})}}{a_{\ell_N}a_{\ell_{N+1}}},\frac{2^{-(\ell_N+\ell_{N+1}+\ell_{N+2})}}{a_{\ell_N}a_{\ell_{N+1}}a_{\ell_{N+2}}}, \ldots \right)
\]
and
\[
\frac{2^{-\sum_{i=1}^{N-1}\ell_i}}{a_{\ell_1}\ldots a_{\ell_{N-1}}}\left(\frac{2^{-(\ell_N+1)}}{a_{\ell_N}t_2}, \frac{2^{-(\ell_N+\ell_{N+1}+1)}}{a_{\ell_N}a_{\ell_{N+1}}t_2}, \frac{2^{-(\ell_N+\ell_{N+1}+\ell_{N+2}+1)}}{a_{\ell_N}a_{\ell_{N+1}}a_{\ell_{N+2}}t_2},\ldots \right).
\]
However, notice that since $a_{\ell_{N+m}}=2^{-\ell_{N+m}}$ for all $m\geq0$, we have that
\begin{eqnarray}\label{star3}
\lim_{m\to\infty}\frac{2^{-(\ell_N+\cdots+\ell_{N+m})}}{a_{\ell_N}\ldots a_{\ell_{N+m}}}=1,
\end{eqnarray}
whereas,
\[
\lim_{m\to\infty}\frac{2^{-(\ell_N+\cdots+\ell_{N+m}+1)}}{a_{\ell_N}\ldots a_{\ell_{N+m}}t_2}= \frac{2^{-1}}{t_2}\neq 1.
\]
Consequently the derivative of $\theta_\alpha$ at $x$ does not exist. To finish the proof, we consider the case where $a_1=1/2$. First notice that the argument in (\ref{star3}) obviously still holds whenever a point $x$ is such that eventually all the $\alpha$-L\"uroth entries lie in the set $M$. Without loss of generality, we suppose that every $\ell_i(x)\in M$. This implies, in light of Lemma \ref{lemma1}, that if the derivative $\theta_\alpha'(x)$ exists in a generalised sense, then it must be equal to 1. We must again consider two further cases. The first is the case that for every $k\in M$, not only does $a_k=2^{-k}$, but also $t_{k+1}=2^{-k}$. The second case is that this is no longer true; in other words, there exists $k\in M$ such that $t_{k+1}\neq 2^{-k}$. Consider first the situation that $t_{k+1}=2^{-k}$ for all $k\in M$. It then follows from a simple calculation that if $x$ is such that every $\alpha$-L\"uroth digit $\ell_i(x)$ of $x$ belongs to the set $M$, then $\theta_\alpha(x)=x$. Now suppose that $M$ is a bounded set, with largest element $k$. It therefore follows that $a_{k+1}\neq 2^{-(k+1)}$ and $t_{k+2}\neq 2^{-(k+1)}$. Since all entries of $x$ lie in $M$ and, in particular, $\ell_{2n}\leq k$, the sequence $([\ell_1, \ldots, \ell_{2n-1}, k+2]_\alpha)_{n\geq1}$ tends to $x$ from above. Therefore, we have (provided that the limit exists),
\begin{eqnarray*}
\lim_{n\to \infty}\frac{\theta_\alpha([\ell_1, \ldots, \ell_{2n-1}, k+2]_\alpha)-\theta_\alpha(x)}{[\ell_1, \ldots, \ell_{2n-1}, k+2]_\alpha-x}&=&\lim_{n\to\infty}\frac{[\ell_{2n}, \ell_{2n+1}, \ldots]_\alpha-2^{-(k+1)}}{[\ell_{2n}, \ell_{2n+1}, \ldots]_\alpha-t_{k+2}}\\&=& 1+\left(t_{k+2}-2^{-(k+1)}\right)\lim_{n\to\infty}\frac{1}{[\ell_{2n}, \ell_{2n+1}, \ldots]_\alpha-t_{k+2}}\neq1.
\end{eqnarray*}
Thus, in this case we also have that the derivative of $\theta_\alpha$ at $x$ does not exist. Suppose now that $M$ is unbounded.
Then, there must exist a smallest integer $k\geq1$ such that $a_{k+1}\neq 2^{-(k+1)}$. From this, it follows that also $t_{k+2}\neq 2^{-(k+1)}$. If $x=[\ell_1, \ell_2, \ldots]_\alpha$ is such that eventually all the digits of $x$ lie in $M$ but are at most equal to $k$, we can show that the derivative of $\theta_\alpha$ at $x$ does not exist exactly as above, where $M$ was assumed to be bounded. So, suppose that there exists a subsequence $(\ell_{i_j})_{j\geq1}$ of the entries of $x$ with each $\ell_{i_j}\in M$ and $\ell_{i_j}>k+1$. Further, suppose that each of these entries appears in an even position (if odd, the proof can be easily modified accordingly). Consider the sequence $(A_{i_n}:=[\ell_1, \ldots, \ell_{i_n-1}, k+2]_\alpha)_{n\geq1}$, which tends to $x$ from below. We then obtain that
\[
\frac{\theta_\alpha(x)-\theta_\alpha(A_{i_n})}{x-A_{i_n}}=\frac{2\cdot 2^{-(k+2)}-[\ell_{i_n}, \ell_{i_n+1}, \ldots]_\alpha}{t_{k+2}-[\ell_{i_n}, \ell_{i_n+1}, \ldots]_\alpha}
\]
and so (if the limit exists at all), $\lim_{n\to \infty}\frac{\theta_\alpha(x)-\theta_\alpha(A_{i_n})}{x-A_{i_n}}\neq 1$. Therefore, in this second subcase we have also shown that the derivative of $\theta_\alpha$ at $x$ does not exist. This finishes the proof in the first case. Finally, we must consider the situation where there exists at least one $k\in M$ such that $t_{k+1}\neq 2^{-k}$. Observe that if $k\in M$ is such that $t_{k+1}\neq 2^{-k}$, it follows that $t_k\neq 2^{-(k-1)}$ also. Recall that since we are assuming that $a_1=1/2$, we have that $t_2=1/2$, and so this $k$ cannot be equal to 1. This case only needs special consideration whenever the $\alpha$-L\"uroth code of $x$ contains a sequence of entries $\ell_{i_j}(x)$ with $t_{\ell_{i_j}+1}\neq 2^{-\ell_{i_j}}$ for all $j\in \mathbb{N}$. Suppose that this holds and, where we have set $n_j:=\left(\sum_{k=1}^{i_j}\ell_{k}(x)\right)-1$, consider the following limit (provided that it exists):
\[
\lim_{j\to\infty} \frac{2^{-n_j}}{\lambda\left(I_{n_j}^{(\alpha)}(x)\right)}=\lim_{j\to \infty} \frac{2^{-(\ell_{i_j}-1)}}{t_{\ell_{i_j}}}\neq 1.
\]
Therefore, in this final subcase, we have demonstrated that the derivative $\theta_\alpha'(x)$ does not exist. This finishes the proof.
\end{proof}
We now give the main result of this section.
\begin{prop}\label{0inf}
For an arbitrary non-dyadic partition $\alpha$, if $x\in[0,1]$ is such that the derivative $\theta_\alpha'(x)$ exists in a generalised sense, then
\[
\theta_\alpha'(x)\in\{0, \infty\}.
\]
\end{prop}
\begin{proof}
In light of part (b) of Lemma \ref{lemma1}, it suffices to consider $\alpha$-irrational numbers. If the partition $\alpha$ is such that $a_1\neq1/2$, then the result follows by Lemma \ref{0infnot1/2}. It therefore only remains to consider the situation where $a_1=1/2$. In this case we can say that there exists a proper subset $M\subset \mathbb{N}$, which contains at least the point $1$, such that $a_i=2^{-i}$ for all $i\in M$. Since we do not allow $\alpha$ to be the dyadic partition, there exist at least two indices $i\in \mathbb{N}$ such that $a_i\neq 2^{-i}$. By Lemma \ref{0inf1/2}, we know that if $x$ belongs to the set $B_M$ (which is defined in the statement of said lemma), then the derivative of $\theta_\alpha$ does not exist at the point $x$.
This further reduces the remaining work to the situation where $x\notin B_M$. To complete the proof, then, we consider a further two subcases. For the first, suppose that $k$ is the smallest integer such that $a_k\neq 2^{-k}$ and let $x\in[0,1]$ be such that $\theta_\alpha'(x)$ exists in a generalised sense and $\ell_i(x)\in M \cup \{k\}$, with infinitely many of the $\ell_i$ equal to $k$. Then, once again we use Lemma \ref{lemma1} to obtain
\[
\theta_\alpha'(x)=\lim_{n\to\infty}\frac{2^{-n}}{\lambda\left(I_n^{(\alpha)}(x)\right)}.
\]
In particular,
\[
\theta_\alpha'(x)=\lim_{n\to\infty}\frac{2^{-\sum_{i=1}^n\ell_i}}{a_{\ell_1}\ldots a_{\ell_{n}}}.
\]
However, for each $\ell_i(x)$ such that $\ell_i\in M$, we have that $2^{-\ell_i}/a_{\ell_i}=1$, so the above limit reduces to
\[
\lim_{n\to\infty} \left(\frac{2^{-k}}{a_k}\right)^n = \left\{ \begin{array}{ll} 0, & \hbox{if $2^{-k}<a_k$;} \\ \infty, & \hbox{if $2^{-k}>a_k$.} \end{array} \right.
\]
Thus, in this first subcase, given that we assume that $a_k\neq 2^{-k}$, if the derivative of $\theta_\alpha$ exists in a generalised sense at $x$, it must lie in the set $\{0, \infty\}$. For the second subcase, again suppose that $k$ is the smallest integer such that $a_k\neq 2^{-k}$ and observe that this also means that $t_{k+1}\neq 2^{-k}$. Now suppose that $x$ is such that infinitely often $\ell_i(x)>k$ and that also infinitely many of the $\alpha$-L\"uroth entries of $x$ lie outside the set $M$ (the latter requirement is relevant in case there are further indices $k+n$ with $a_{k+n}=2^{-(k+n)}$; indeed, if all but finitely many of the entries of $x$ were to lie in $M$, we would be back in the situation of Lemma \ref{0inf1/2}). Then, just as above, we have that
\[
\theta_\alpha'(x)=\lim_{n\to\infty}\frac{2^{-\sum_{i=1}^n\ell_i}}{a_{\ell_1}\ldots a_{\ell_{n}}}.
\]
Since each time $\ell_i\in M$ we are only multiplying by 1 in this sequence, without loss of generality we may suppose that $\ell_i\notin M$ for all $i\in \mathbb{N}$. Given this assumption, we can write the $\alpha$-L\"uroth code for $x$ in the following way:
\[
x=[\underbrace{k, \ldots, k}_{n_1 \mbox{\scriptsize{ times}}}, \ell_{i_1},\underbrace{k, \ldots, k}_{n_2 \mbox{\scriptsize{ times}}}, \ell_{i_2}, \ldots, \underbrace{k, \ldots, k}_{n_j \mbox{\scriptsize{ times}}}, \ell_{i_j}, k, \ldots]_\alpha,
\]
where $n_j\in \mathbb{N}\cup\{0\}$ and $\ell_{i_j}\geq k+1$, for all $j\in\mathbb{N}$. We then obtain the limit
\[
\theta_\alpha'(x)=\lim_{j\to\infty}\frac{2^{-((n_1+\cdots+n_j)k+(\ell_{i_1}+\cdots+\ell_{i_{j-1}})+k)}}{(a_k)^{n_1+\cdots+n_j}a_{\ell_{i_1}}\ldots a_{\ell_{i_{j-1}}}t_{k+1}}=\frac{2^{-k}}{t_{k+1}}\lim_{m\to \infty}\frac{2^{-\sum_{i=1}^m\ell_i}}{a_{\ell_1}\ldots a_{\ell_{m}}},
\]
where $m=(n_1+1)+(n_2+1)+\cdots + (n_{j-1}+1)+n_j$. Thus, we have that
\[
\theta_\alpha'(x)=\frac{2^{-k}}{t_{k+1}}\theta_\alpha'(x).
\]
It therefore follows that $\theta_\alpha'(x)$, whenever it exists in a generalised sense, belongs to the set $\{0, \infty\}$, since $2^{-k}/t_{k+1}\neq1$. This finishes the proof.
\end{proof}
We now give the result that for each non-dyadic partition $\alpha$, the $\alpha$-Farey-Minkowski function is singular with respect to the Lebesgue measure.
Recall that this means that the derivative of $\theta_\alpha$ is Lebesgue-a.e. equal to zero. This mirrors the well-known result of Salem \cite{Salem} that Minkowski's question-mark function is singular.
\begin{prop}\label{singular}
For an arbitrary non-dyadic partition $\alpha$, the function $\theta_\alpha$ is singular with respect to the Lebesgue measure.
\end{prop}
\begin{proof}
In light of Lemma \ref{KMSlem}, we know that each function $\theta_\alpha$ is strictly increasing. Therefore, by a classical theorem (see, for instance, Theorem 5.3 in \cite{royden}), the derivative of $\theta_\alpha$ exists and is in particular finite $\lambda$-almost everywhere. Hence, by Proposition \ref{0inf}, it follows that the derivative is equal to 0 for $\lambda$-a.e. $x\in[0,1]$.
\end{proof}
From this point on, we have to assume a little more information about the partitions $\alpha$. We will henceforth assume that all partitions $\alpha$ are either {\em expansive of exponent $\tau>0$}, or, {\em expanding}. Recall the definitions from \cite{KMS}: A partition $\alpha$ is said to be expansive of exponent $\tau>0$ if for the tails of $\alpha$ we have that $t_n=n^{-\tau}\psi(n)$, for some slowly-varying\footnote{A measurable function $f:\mathbb{R}^{+} \to \mathbb{R}^{+}$ is said to be {\em slowly varying} if $ \lim_{x\to\infty}f(x y)/f(x)=1$, for all $y>0$.} function $\psi:\mathbb{N}\to\mathbb{R}^+$, whereas $\alpha$ is said to be expanding if $\lim_{n\to \infty} t_{n}/t_{n+1}= \rho$, for some $\rho>1$. Before stating the next proposition, let us define the $k$-th approximant $r_k^{(\alpha)}(x)$ to a point $x=[\ell_1,\ell_2,\ldots]_\alpha$ by setting $r_k^{(\alpha)}(x):=[\ell_1, \ldots, \ell_k]_\alpha$. We will use the notation $[a, b]_\pm$ to indicate that we either have the interval $[a, b]$ or the interval $[b, a]$, depending on which number is larger. Finally, recall that the measure $\mu_\alpha$ by definition gives mass $2^{-n}$ to each $n$-th level $\alpha$-Farey cylinder set.
\begin{prop}\label{liminfderinf}
Let $\alpha$ be a partition that is either { expansive of exponent $\tau>0$} or { expanding}. Suppose that $x$ is such that
\[
\lim_{k\to\infty}\frac{\mu_\alpha\left(\left[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)\right]_{\pm}\right)}{\lambda\left(\left[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)\right]_{\pm}\right)}=\infty.
\]
Then $\theta_\alpha'(x)=\infty$.
\end{prop}
\begin{proof}
First, notice that we have $[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)]_\pm=I_{\ell_1+\cdots+\ell_{k+1}-1}^{(\alpha)}(x)$, from which we immediately deduce that
\[
\frac{\mu_\alpha\left(\left[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)\right]_\pm\right)}{\lambda\left(\left[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)\right]_\pm\right)}=\frac{2\cdot2^{-(\ell_1+\cdots +\ell_{k+1})}}{a_{\ell_1}\ldots a_{\ell_{k}}t_{\ell_{k+1}}}.
\]
Let $y>x$. Then, for all $y$ close enough to $x$, there exists an even positive integer $k$ such that
\begin{eqnarray}\label{star1}
y&\in& ([\ell_1, \ldots, \ell_{k+1}]_\alpha, [\ell_1, \ldots, \ell_{k-1}]_\alpha].
\end{eqnarray}
We will consider separately the cases $\ell_{k+1}>1$ and $\ell_{k+1}=1$. Suppose that we are in the first of these cases, so $\ell_{k+1}>1$. Then we can split the interval in (\ref{star1}) up into smaller intervals to locate $y$ with greater precision. Between the points $[\ell_1, \ldots, \ell_{k+1}]_\alpha$ and $[\ell_1, \ldots, \ell_{k-1}]_\alpha$ lie the points (written in increasing order), $[\ell_1, \ldots, \ell_{k}, \ell_{k+1}-1]_\alpha$, $[\ell_1, \ldots, \ell_{k}, \ell_{k+1}-2]_\alpha$, \ldots, $[\ell_1, \ldots, \ell_{k}, 2]_\alpha$ and $[\ell_1, \ldots, \ell_{k}, 1]_\alpha$, as shown in Figure 3.1 below.
\begin{figure}[htbp]
\begin{center}
\input{figure2.pstex_t}\caption{The positions of the convergents (indicated with thicker lines) and intermediary points to $x$. }
\end{center}
\end{figure}
\noindent{\bf Case 1.1} $\ $Suppose first that
\[
[\ell_1, \ldots, \ell_{k}, 1]_\alpha<y\leq [\ell_1, \ldots, \ell_{k-1}]_\alpha.
\]
Immediately from the definition of $\theta_\alpha$ and the fact that $\theta_\alpha$ is an increasing function, we calculate that
\begin{eqnarray*}
\theta_\alpha(y)-\theta_\alpha(x)&\geq&\theta_\alpha([\ell_1, \ldots, \ell_{k}, 1]_\alpha)-\theta_\alpha([\ell_1, \ldots, \ell_{k+1}]_\alpha)\\ &=& 2\cdot 2^{-(\ell_1+\cdots +\ell_k)}(2^{-1}-2^{-\ell_{k+1}})\\ &\gg &2\cdot2^{-(\ell_1+\cdots +\ell_k)},
\end{eqnarray*}
where the last inequality comes from the fact that $\ell_{k+1}>1$. Moreover,
\[
y-x \leq [\ell_1, \ldots, \ell_{k-1}]_\alpha-[\ell_1, \ldots, \ell_{k}]_\alpha=a_{\ell_1}\ldots a_{\ell_{k-1}}t_{\ell_{k}}.
\]
Thus, in this first instance, we obtain that
\[
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}\gg \frac{2\cdot2^{-(\ell_1+\cdots +\ell_k)}}{a_{\ell_1}\ldots a_{\ell_{k-1}}t_{\ell_{k}}}.
\]
\noindent{\bf Case 1.2} $\ $Now suppose that there exists a positive integer $n\in\{1, 2, \ldots, \ell_{k+1}-2\}$ such that
\[
[\ell_1, \ldots, \ell_{k}, n+1]_\alpha<y\leq [\ell_1, \ldots, \ell_{k}, n]_\alpha.
\]
At this point we have to split the argument up again. First suppose that the partition $\alpha$ is either expansive of exponent $\tau$, or expanding with $\lim_{n\to \infty} t_{n}/t_{n+1}= \rho$ for $1<\rho<2$. We then obtain that
\begin{eqnarray*}
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}&\gg&\frac{2^{-(\ell_1+\cdots +\ell_{k+1})}}{a_{\ell_1}\ldots a_{\ell_{k}}t_{\ell_{k+1}}}\cdot\frac{2^{\ell_{k+1}-(n+1)}t_{\ell_{k+1}}}{t_n}\\ &\gg& \frac{2^{-(\ell_1+\cdots +\ell_{k+1})}}{a_{\ell_1}\ldots a_{\ell_{k}}t_{\ell_{k+1}}}.
\end{eqnarray*}
On the other hand, if $\alpha$ is expanding with $\lim_{n\to \infty} t_{n}/t_{n+1}= \rho$ for $\rho>2$, we have that
\begin{eqnarray*}
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}&\gg&\frac{2^{-(\ell_1+\cdots +\ell_{k})}}{a_{\ell_1}\ldots a_{\ell_{k}}}\cdot \frac{2^{-n}(1-2^{n-\ell_{k+1}})}{t_{n}}\\ &\gg& \frac{2^{-(\ell_1+\cdots +\ell_{k})}}{a_{\ell_1}\ldots a_{\ell_{k-1}}t_{\ell_{k}}}.
\end{eqnarray*}
\noindent{\bf Case 1.3} $\ $For the final part of the first case, suppose that
\[
[\ell_1, \ldots, \ell_{k+1}]_\alpha<y \leq [\ell_1, \ldots, \ell_{k+1}-1]_\alpha.
\]
In this situation, the argument used in Case 1.2 will no longer suffice. We must consider a further two subcases.
\noindent{\bf Subcase 1.3.1} $\ell_{k+2}>1$.
\noindent In the event that $\ell_{k+2}>1$, the point $[\ell_1, \ldots, \ell_{k+1}, 1]_\alpha$ still lies to the right of the point $x$. Then,
\begin{eqnarray*}
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}&\geq&\frac{\theta_\alpha([\ell_1, \ldots, \ell_{k+1}]_\alpha)-\theta_\alpha([\ell_1, \ldots, \ell_{k+1}, 1]_\alpha)}{[\ell_1, \ldots, \ell_{k+1}-1]_\alpha-[\ell_1, \ldots, \ell_k]_\alpha}\\&=&\frac{2\cdot 2^{-(\ell_1+\cdots +\ell_{k+1})}(1-1/2)}{a_{\ell_1}\ldots a_{\ell_k}t_{\ell_{k+1}-1}}\\&\gg& \frac{2\cdot2^{-(\ell_1+\cdots +\ell_{k+1})}}{a_{\ell_1}\ldots a_{\ell_k}t_{\ell_{k+1}}},
\end{eqnarray*}
where the last inequality again comes from the fact that $\alpha$ is expansive of exponent $\tau\geq 0$ or expanding.
\noindent{\bf Subcase 1.3.2} $\ell_{k+2}=1$.
\noindent In the event that $\ell_{k+2}=1$, the point $[\ell_1, \ldots, \ell_{k+1}, 1]_\alpha$ lies to the left of $x$ (it is equal to the $(k+2)$-th convergent). So, we make a slightly different calculation:
\begin{eqnarray*}
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}&\geq&\frac{\theta_\alpha([\ell_1, \ldots, \ell_{k+1}]_\alpha)-\theta_\alpha([\ell_1, \ldots, \ell_{k+1}, 1, \ell_{k+3}]_\alpha)}{[\ell_1, \ldots, \ell_{k+1}-1]_\alpha-[\ell_1, \ldots, \ell_k, 1]_\alpha}\\&=&\frac{2\cdot 2^{-(\ell_1+\cdots +\ell_{k+1})}(2^{-1}-2^{-(1+\ell_{k+3})})}{a_{\ell_1}\ldots a_{\ell_k}t_{\ell_{k+1}}}\\&\gg& \frac{2\cdot2^{-(\ell_1+\cdots +\ell_{k+1})}}{a_{\ell_1}\ldots a_{\ell_k}t_{\ell_{k+1}}}.
\end{eqnarray*}
This finishes all the permutations of the case where $\ell_{k+1}>1$. We now come to the case $\ell_{k+1}=1$. Again, this will be split into various cases. First notice that we can split up the interval $([\ell_1, \ldots, \ell_k, 1]_{\alpha}, [\ell_1, \ldots, \ell_{k-1}]_\alpha]$ using the points $[\ell_1, \ldots, \ell_k+n]_\alpha$ for $n\in\mathbb{N}$, as shown in Figure 3.3.
\begin{figure}[htbp]
\begin{center}
\input{figure3.pstex_t}\caption{Splitting up the interval $([\ell_1, \ldots, \ell_k, 1]_{\alpha}, [\ell_1, \ldots, \ell_{k-1}]_\alpha]$. }
\end{center}
\end{figure}
\noindent{\bf Case 2.1} Suppose that there exists $n\geq2$ such that
\[
[\ell_1, \ldots, \ell_k+n]_\alpha<y\leq[\ell_1, \ldots, \ell_k+n+1]_\alpha.
\]
Then,
\begin{eqnarray*}
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}&\geq&\frac{\theta_\alpha([\ell_1, \ldots, \ell_k+n]_\alpha)-\theta_\alpha([\ell_1, \ldots, \ell_{k}, 1]_\alpha)}{[\ell_1, \ldots, \ell_{k}+n+1]_\alpha-[\ell_1, \ldots, \ell_k]_\alpha}\\ &=&\frac{2\cdot 2^{-(\ell_1+\cdots +\ell_{k})}(2^{-1}-2^{-n})}{a_{\ell_1}\ldots a_{\ell_{k-1}}(t_{\ell_{k}}-t_{\ell_{k}+n+1})}\\&\gg& \frac{2\cdot2^{-(\ell_1+\cdots +\ell_{k})}}{a_{\ell_1}\ldots a_{\ell_{k-1}}t_{\ell_{k}}}.
\end{eqnarray*}
\noindent{\bf Case 2.2} Suppose that
\[
[\ell_1, \ldots, \ell_k, 1]_\alpha<y\leq[\ell_1, \ldots, \ell_k+2]_\alpha.
\]
We will again split this into two subcases.
\noindent{\bf Subcase 2.2.1} $\ell_{k+2}>1$.
\noindent In the event that $\ell_{k+2}>1$, the point $[\ell_1, \ldots,\ell_k, 1, 1]_\alpha$ lies to the right of the point $x$. Then,
\begin{eqnarray*}
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}&\geq&\frac{\theta_\alpha([\ell_1, \ldots, \ell_k, 1]_\alpha)-\theta_\alpha([\ell_1, \ldots, \ell_k, 1, 1]_\alpha)}{[\ell_1, \ldots, \ell_k+2]_\alpha-[\ell_1, \ldots, \ell_k]_\alpha}\\&=& \frac{2\cdot 2^{-(\ell_1+\cdots +\ell_{k})}(2^{-1}-2^{-1}+2^{-2})}{a_{\ell_1}\ldots a_{\ell_{k-1}}(t_{\ell_{k}}-t_{\ell_{k}+2})}\geq \frac{2\cdot2^{-(\ell_1+\cdots +\ell_{k})}}{a_{\ell_1}\ldots a_{\ell_{k-1}}t_{\ell_{k}}}.
\end{eqnarray*}
\noindent{\bf Subcase 2.2.2} $\ell_{k+2}=1$.
\noindent We make a similar calculation as for Subcase 1.3.2.
\begin{eqnarray*}
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}&\geq& \frac{\theta_\alpha([\ell_1, \ldots, \ell_k, 1]_\alpha)-\theta_\alpha([\ell_1, \ldots, \ell_k, 1, 1, \ell_{k+3}]_\alpha)}{[\ell_1, \ldots, \ell_k+2]_\alpha-[\ell_1, \ldots, \ell_k]_\alpha}\\&=& \frac{2\cdot 2^{-(\ell_1+\cdots +\ell_{k})}(2^{-1}-2^{-1}+2^{-2}-2^{-(2+\ell_{k+3})})}{a_{\ell_1}\ldots a_{\ell_{k-1}}(t_{\ell_{k}}-t_{\ell_{k}+2})}\\ &\gg& \frac{2\cdot2^{-(\ell_1+\cdots +\ell_{k})}}{a_{\ell_1}\ldots a_{\ell_{k-1}}t_{\ell_{k}}}.
\end{eqnarray*}
This finishes Case 2. We have shown that for any $y>x$,
\[
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}\gg \frac{2\cdot2^{-(\ell_1+\cdots +\ell_{k})}}{a_{\ell_1}\ldots a_{\ell_{k-1}}t_{\ell_{k}}}.
\]
A similar calculation can be done for $y<x$; we leave that to the reader. Thus the proof of the proposition is finished.
\end{proof}
\begin{rem}\label{limsupinf}
In \cite[Proposition 5.3~(i)]{mink?}, a similar result was proved for the Minkowski question mark function. However, the proof there contains a small mistake (the first inequality on page 2678 is incorrect) and is also incomplete (they do not consider the possibility that the $(k+1)$-th continued fraction entry could equal one, in which case there are no intermediate approximants).
\end{rem}
The following corollary will be of use in the next section.
\begin{cor}\label{liminfderinfcor}
For each $x\in[0,1]$, we have that
\[
\theta_\alpha'(x)=\infty\ \text{ if and only if }\ \lim_{k\to\infty}{\mu_\alpha\left(I^{(\alpha)}_k(x)\right)}/{\lambda\left(I^{(\alpha)}_k(x)\right)}=\infty.
\]
\end{cor}
\begin{proof}
If the derivative of $\theta_\alpha$ at $x$ exists in a generalised sense and $\theta_\alpha'(x)=\infty$, the conclusion of the corollary follows directly from Lemma \ref{lemma1}. For the other direction, recall that $[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)]_\pm=I_{\ell_1+\cdots+\ell_{k+1}-1}^{(\alpha)}(x)$ and so the sequence $\left(\mu_\alpha([r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)]_\pm)/\lambda([r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)]_\pm)\right)_{k\geq1}$ is a subsequence of the sequence $\left(\mu_\alpha(I^{(\alpha)}_k(x))/\lambda(I^{(\alpha)}_k(x))\right)_{k\geq1}$. Thus, the corollary is an immediate consequence of Proposition \ref{liminfderinf}.
\end{proof}
Let us now consider a condition which gives rise to points with derivative equal to zero (recall that almost every $x\in [0,1]$ is such that $\theta_\alpha'(x)=0$).
\begin{prop}\label{limitto0}
Suppose that $\alpha$ is either {expansive of exponent $\tau>0$} or { expanding}. Let $x=[\ell_1, \ell_2, \ell_3, \ldots]_\alpha$ be such that
\[
\lim_{k\to\infty}\frac{\mu_\alpha\left(\left[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)\right]_\pm\right)}{\lambda\left(\left[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)\right]_\pm\right)}\cdot \frac{t_{\ell_{k+1}}}{a_{\ell_{k+1}}}=0.
\]
Then, $\theta_\alpha'(x)=0$.
\end{prop}
\begin{proof}
First, notice that
\[
\frac{\mu_\alpha\left(\left[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)\right]_\pm\right)}{\lambda\left(\left[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)\right]_\pm\right)}\cdot \frac{t_{\ell_{k+1}}}{a_{\ell_{k+1}}}=\frac{2\cdot 2^{-(\ell_1+\cdots +\ell_{k+1})}}{a_{\ell_1}\ldots a_{\ell_{k}}a_{\ell_{k+1}}}.
\]
The remainder of the proof consists of a series of simple calculations, as in the proof of Proposition \ref{liminfderinf}. We will make one case explicit and leave the rest to the reader. Let $y>x$. Then, for all $y$ close enough to $x$, there exists an even positive integer $k$ such that $y\in ([\ell_1, \ldots, \ell_{k+1}]_\alpha, [\ell_1, \ldots, \ell_{k-1}]_\alpha]$. Suppose that $\ell_{k+1}>1$. As in Figure 3.2 in the proof of the previous proposition, we can locate $y$ with greater precision, as follows. First suppose that $[\ell_1, \ldots, \ell_{k}, 1]_\alpha<y\leq [\ell_1, \ldots, \ell_{k-1}]_\alpha$.
Then we have that
\begin{eqnarray*}
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}&\leq&\frac{\theta_\alpha([\ell_1, \ldots, \ell_{k-1}]_\alpha)-\theta_\alpha([\ell_1, \ldots, \ell_{k}]_\alpha)}{[\ell_1, \ldots, \ell_k, 1]_\alpha-[\ell_1, \ldots, \ell_k, \ell_{k+1}]_\alpha}\\ &=& \frac{2\cdot2^{-(\ell_1+\cdots +\ell_k)}}{a_{\ell_1}\ldots a_{\ell_{k}}(1-t_{\ell_{k+1}})}\ll \frac{2\cdot2^{-(\ell_1+\cdots +\ell_k)}}{a_{\ell_1}\ldots a_{\ell_{k}}}.
\end{eqnarray*}
Now suppose that there exists a positive integer $n\in\{1, 2, \ldots, \ell_{k+1}-2\}$ such that $[\ell_1, \ldots, \ell_{k}, n+1]_\alpha<y\leq [\ell_1, \ldots, \ell_{k}, n]_\alpha$. In that case, we calculate
\begin{eqnarray*}
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}&\leq&\frac{\theta_\alpha([\ell_1, \ldots, \ell_{k}, n]_\alpha)-\theta_\alpha([\ell_1, \ldots, \ell_{k}]_\alpha)}{[\ell_1, \ldots, \ell_k, n+1]_\alpha-[\ell_1, \ldots, \ell_k, \ell_{k+1}]_\alpha}\\ &=& \frac{2\cdot2^{-(\ell_1+\cdots +\ell_k+n)}}{a_{\ell_1}\ldots a_{\ell_{k}}(t_{n+1}-t_{\ell_{k+1}})}\ll \frac{2\cdot2^{-(\ell_1+\cdots +\ell_k)}}{a_{\ell_1}\ldots a_{\ell_{k}}},
\end{eqnarray*}
where in this instance the final inequality holds in the case that $\alpha$ is expansive of exponent $\tau$ or $\alpha$ is expanding with $\lim_{n\to\infty}t_n/t_{n+1}=\rho$ and $1<\rho<2$. The case that $\alpha$ is expanding and $\rho>2$ must be considered separately, but the calculation is similar and we leave it to the reader. Next, suppose that $[\ell_1, \ldots, \ell_{k+1}]_\alpha<y \leq [\ell_1, \ldots, \ell_{k+1}-1]_\alpha$ and $\ell_{k+2}>1$. In this case, we have that the point $[\ell_1, \ldots, \ell_{k+1},1]_\alpha$ lies to the right of $x$ and we obtain that
\begin{eqnarray*}
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}&\leq&\frac{\theta_\alpha([\ell_1, \ldots, \ell_{k+1}-1]_\alpha)-\theta_\alpha([\ell_1, \ldots, \ell_{k}]_\alpha)}{[\ell_1, \ldots, \ell_{k +1}]_\alpha-[\ell_1, \ldots, \ell_k, \ell_{k+1}, 1]_\alpha}\\ &\ll& \frac{2\cdot2^{-(\ell_1+\cdots +\ell_{k+1})}}{a_{\ell_1}\ldots a_{\ell_{k+1}}}.
\end{eqnarray*}
Finally, if $[\ell_1, \ldots, \ell_{k+1}]_\alpha<y \leq [\ell_1, \ldots, \ell_{k+1}-1]_\alpha$ and $\ell_{k+2}=1$, we have that
\[
y-x\geq [\ell_1, \ldots, \ell_{k +1}]_\alpha-[\ell_1, \ldots, \ell_k, \ell_{k+1}, 1, \ell_{k+3}]_\alpha=a_{\ell_1}\ldots a_{\ell_{k+1}}(1-a_1t_{\ell_{k+3}})\gg a_{\ell_1}\ldots a_{\ell_{k+1}}.
\]
So, in this case too, we obtain that
\[
\frac{\theta_\alpha(y)-\theta_\alpha(x)}{y-x}\ll \frac{2\cdot2^{-(\ell_1+\cdots +\ell_{k+1})}}{a_{\ell_1}\ldots a_{\ell_{k+1}}}.
\]
To finish the proof, we must consider the case $\ell_{k+1}=1$ and also carry out similar calculations for points $y$ such that $y<x$. Both of these are similar to what we have done above, thus we leave the remaining details to the reader.
\end{proof}
\begin{rem}
Let us end this section with some remarks concerning the paper \cite{PVB}. In there, the authors consider first the function $\Phi_{2, \tau}$, which, although this is not made explicit, conjugates the tent system with the map $T_{\tau}$ which is given, for $\tau>1$, by
\[
T_\tau(x):=\left\{ \begin{array}{ll} \tau x, & \hbox{for $x\in[0, 1/\tau)$;} \\ \frac{\tau x -1}{\tau-1}, & \hbox{for $x\in [1/\tau, 1]$.} \end{array} \right.
\]
This is nothing other than an ``untwisted'' $\alpha$-Farey map, where ``untwisted'' means that the right-hand branch of the map has a positive slope. Let us denote such maps by $F_{\widetilde{\alpha}}$. In this case, the partition in question, say $\widetilde{\alpha_\tau}$, is given by $t_n:= \tau^{-(n-1)}$ and $a_n:=(\tau-1)/\tau^n$. Notice that this is simply a specific example of an expanding partition, since it certainly satisfies the condition $\lim_{n\to\infty}t_n/t_{n+1}=\rho>1$; in fact, here $\rho=\tau$. The associated untwisted $\widetilde{\alpha_\tau}$-L\"uroth map has all positive slopes. In this case the $\widetilde{\alpha_\tau}$-L\"uroth coding is given by
\[
x=[\widetilde{\ell}_1, \widetilde{\ell}_2, \widetilde{\ell}_3,\ldots]_{\widetilde{\alpha_\tau}}= t_{\widetilde{\ell}_1+1}+a_{\widetilde{\ell}_1}t_{\widetilde{\ell}_2+1}+a_{\widetilde{\ell}_1}a_{\widetilde{\ell}_2}t_{\widetilde{\ell}_3+1}+\cdots= \frac{1}{\tau^{\widetilde{\ell}_1}}+\frac{\tau-1}{\tau^{\widetilde{\ell}_1+\widetilde{\ell}_2}}+ \frac{(\tau-1)^2}{\tau^{\widetilde{\ell}_1+\widetilde{\ell}_2+\widetilde{\ell}_3}}+\cdots
\]
The map equivalent to $\theta_\alpha$ in this positive slope situation is the map $\theta_{\widetilde{\alpha}}$, which is defined by
\[
\theta_{\widetilde{\alpha}}(x):=\sum_{k=1}^{\infty}2^{-\sum_{i=1}^k\widetilde{\ell}_i(x)}.
\]
(For more details, we refer to \cite{sarathesis}.) The map $\Phi_{2, \tau}$ in the paper \cite{PVB} coincides with the inverse of the map $\theta_{\widetilde{\alpha_\tau}}$. They first show that $\Phi_{2, \tau}$ is singular and then, assuming that the derivative of $\Phi_{2, \tau}$ at a point $x$ exists in a generalised sense, give a condition in terms of a certain constant $K=K(\tau):=\frac{-\log(\tau-1)}{\log(2/\tau)}$ for which the derivative at the point $x$ is either equal to zero or is infinite. The proof boils down to an equivalent statement to Lemma \ref{lemma1}, which in their case states that if $\Phi_{2, \tau}'(x)$ exists, it must satisfy
\[
\Phi_{2, \tau}'(x)=\lim_{n\to \infty} \frac{\lambda(C_{\widetilde{\alpha_\tau}}(\widetilde{\ell}_1, \ldots, \widetilde{\ell}_n))}{2^{-\sum_{i=1}^n \widetilde{\ell}_i}}=\lim_{n\to \infty} \frac{(\tau-1)^n\cdot 2^{\sum_{i=1}^n \widetilde{\ell}_i}}{\tau^{{\sum_{i=1}^n {\widetilde{\ell}_i}}}}= \lim_{n\to \infty} \left(\left(\frac{2}{\tau}\right)^{{\sum_{i=1}^n {\widetilde{\ell}_i}}/n}(\tau-1)\right)^n.
\]
Then the constant $K$ is just the boundary point between the term $\left(\frac{2}{\tau}\right)^{{\sum_{i=1}^n {\widetilde{\ell}_i}}/n}(\tau-1)$ being strictly less than 1 or strictly greater than 1. They then go on to generalise this by conjugating two expanding untwisted $\widetilde{\alpha}$-Farey systems, one given by $\widetilde{\alpha_\tau}$ with $t_n:=\tau^{-(n-1)}$ and the other given by $\widetilde{\alpha_\beta}$ with $t_n:=\beta^{-(n-1)}$. They obtain a similar result for the map $\Phi_{\beta, \tau}$, which is the topological conjugacy map between the systems $F_{\widetilde{\alpha_\beta}}$ and $F_{\widetilde{\alpha_\tau}}$. Of course, $\Phi_{\beta, \tau}$ coincides with the composition $\theta_{\widetilde{\alpha_\tau}}^{-1}\circ \theta_{\widetilde{\alpha_\beta}}$. It may be interesting to consider conjugating homeomorphisms between two arbitrary $\alpha$-Farey maps, or even the case of two general expansive or expanding partitions (for either maps with positive or negative slopes).
\end{rem}
\section{Multifractal formalism for the $\alpha$-Farey system and the derivative of $F_\alpha$}
Let us now recall the outcome of the multifractal formalism for the $\alpha$-Farey system obtained in \cite{KMS}. Here, we must again assume that the partition $\alpha$ is either expanding or expansive of exponent $\tau\geq0$ and eventually decreasing (which means that for all sufficiently large $n$, we have that $a_n>a_{n+1}$), so this assumption will be made for every partition from here on. For both the $\alpha$-L\"uroth and $\alpha$-Farey systems, the fractal-geometric description of the Lyapunov spectra was obtained by employing the general multifractal results of Jaerisch and Kesseb\"ohmer \cite{JaerischKess09}. First, let the $\alpha$-Farey free-energy function $v:\mathbb{R} \to \mathbb{R}$ be defined by
\[
v(u):= \inf\left\{r\in\mathbb{R}: \sum_{n=1}^\infty a_n^u\exp(-rn)\leq 1\right\}.
\]
Let us also remind the reader that the Lyapunov exponent of a differentiable map $S:[0,1]\to [0,1]$ at a point $x\in[0,1]$ is defined, provided the limit exists, by
\[
\Lambda(S, x):=\lim_{n\to \infty}\frac1n \sum_{k=0}^{n-1}\log|S'(S^k(x))|.
\]
The following result can be found in \cite{KMS}. (Here we have omitted the discussion of phase transitions and the boundary points of the spectrum, as they are not relevant to this paper.)
\noindent{\bf Theorem.} \cite[Theorem 3]{KMS} {\em Let $\alpha$ be either expanding or expansive of exponent $\tau\geq0$ and eventually decreasing. Then, where $s_-:=\inf\{-\log(a_n)/n:n\in\mathbb{N}\}$ and $s_+:=\sup\{-\log(a_n)/n:n\in\mathbb{N}\}$, we have that if $s\in (s_-, s_+)$, then
\[
\dim_{\mathrm{H}}(\{x\in [0,1]:\Lambda(F_\alpha, x)=s\})=\inf_{u\in\mathbb{R}}\left\{u+s^{-1}v(u)\right\}.
\]}
We observe that it is equivalent to consider the free-energy function
\[
t(v):=\inf\left\{u\in\mathbb{R}: \sum_{n=1}^\infty a_n^u\exp(-nv)\leq 1\right\},
\]
in line with \cite{JaerischKess09}. The outcome then for the $\alpha$-Farey spectrum is that $\dim_{\mathrm{H}}(\{x\in [0,1]:\Lambda(F_\alpha, x)=s\})=t^*(s):=\inf_{v\in\mathbb{R}}\{t(v)+vs^{-1}\}$.
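Although the following elementary observations are not needed for the proofs below, they may help to orient the reader; they follow directly from the definitions just given and are not taken from \cite{KMS}. For any partition $\alpha$ one has
\[
v(0)=\inf\Big\{r\in\mathbb{R}:\sum_{n=1}^{\infty}e^{-rn}\leq 1\Big\}=\log 2
\qquad\text{ and }\qquad
v(1)=\inf\Big\{r\in\mathbb{R}:\sum_{n=1}^{\infty}a_n e^{-rn}\leq 1\Big\}=0,
\]
the first identity being yet another manifestation of the fact that $\log 2$ is the topological entropy of each map $F_\alpha$, and the second simply reflecting that $\sum_{n=1}^\infty a_n=1$. Moreover, for the harmonic partition $\alpha_H$, which is expansive of exponent $1$ and eventually decreasing, we have $a_n=1/(n(n+1))$, so one readily checks that $s_-=\inf_{n}\log(n(n+1))/n=0$ and $s_+=\sup_{n}\log(n(n+1))/n=\log(6)/2$; in particular, $\log 2$ lies strictly inside the interval $(s_-, s_+)$ in this case.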
In light of the results of the previous section, as already mentioned in the introduction, we can split the unit interval into three disjoint subsets, namely, $[0,1]=\Theta_0\cup \Theta_\infty\cup\Theta_\sim$. Recall that these sets are defined by $\Theta_0:=\{x\in [0,1]: \theta_\alpha'(x)=0\}$, $\Theta_\infty:=\{x\in [0,1]: \theta_\alpha'(x)=\infty\}$ and, finally, $\Theta_{\sim}:=[0,1]\setminus (\Theta_0\cup\Theta_\infty)$. Observe that $\Theta_\sim$ can also be described as the set of points in $[0,1]$ at which the derivative of $\theta_\alpha$ does not exist. We already have that $\lambda(\Theta_0)=\dim_{\mathrm{H}}(\Theta_0)=1$. The aim of this section is to prove Theorem \ref{mainthm}, which describes the Hausdorff dimensions of the other two sets. First, for $s\geq0$, recall the definition of the set $\mathcal{L}(s)$ from the introduction:
\[
\mathcal{L}(s):=\left\{x\in [0,1]:\lim_{n\to\infty}\frac{\log(\lambda(I^{(\alpha)}_n(x)))}{-n}=s\right\}.
\]
Let us now prove the following useful lemma.
\begin{lem}\label{ctblsets}
For each $s\geq0$, we have that
\[
\dim_{\mathrm{H}}\left(\left\{x\in [0,1]:\Lambda(F_\alpha, x)=s\right\}\right)=\dim_{\mathrm{H}}\left(\mathcal{L}(s)\right).
\]
\end{lem}
\begin{proof}
Firstly, from Proposition 4.2 in \cite{KMS}, where
\[
\Pi(L_{\alpha},x):=\lim_{n\to\infty}\frac{\sum_{k=1}^n\log(a_{\ell_{k}(x)})}{-\sum_{k=1}^n\ell_k(x)},
\]
we have that the sets
\[
\left\{ x\in[0,1]:\Pi(L_{\alpha},x)=s\right\} \hbox{ and } \left\{ x\in[0,1]:\Lambda(F_{\alpha},x)=s\right\}
\]
coincide up to a countable set of points. An almost identical argument (using \cite[Lemma 4.1~(1)]{KMS} as opposed to \cite[Lemma 4.1~(3)]{KMS}), shows that the same statement is true with the set $\left\{ x\in[0,1]:\Lambda(F_{\alpha},x)=s\right\}$ replaced by the set $\mathcal{L}(s)$. Combining these two statements yields the result.
\end{proof}
\begin{rem}
Notice that it follows immediately from Lemma \ref{ctblsets}, combined with the theorem stated above, that $\dim_{\mathrm{H}}(\mathcal{L}(s))=t^*(s)$.
\end{rem}
\begin{prop}\label{6.1} \
\begin{itemize}
\item[(a)] If $s\in(\log2, s_+]$, then
\[
\mathcal{L}(s)\subset \Theta_\infty.
\]
\item[(b)] If $s\in[s_-, \log2)$, then
\[
\mathcal{L}(s)\subset \Theta_0.
\]
\item[(c)]
\[
\left\{x\in[0,1]:\liminf_{n\to\infty} \frac{\sum_{i=1}^n\log(a_{\ell_{i}(x)})}{-\sum_{i=1}^n \ell_{i}(x)}<\log2<\limsup_{n\to\infty} \frac{\sum_{i=1}^n\log(a_{\ell_{i}(x)})}{-\sum_{i=1}^n \ell_{i}(x)}\right\}\subset \Theta_\sim.
\]
\end{itemize}
\end{prop}
\begin{proof}
Let $x\in \mathcal{L}(s)$ be given. Then, for each $\varepsilon>0$, there exists $N_\varepsilon\in \mathbb{N}$ such that for all $n\geq N_\varepsilon$,
\[
n(s-\varepsilon)\leq \log\left(\frac1{\lambda(I_n^{(\alpha)}(x))}\right)\leq n(s+\varepsilon).
\] In other words, recalling that $\mu_\alpha(I_n^{(\alpha)}(x))=2^{-n}$, we have that \[ e^{-n(s+\varepsilon-\log2)}\leq \frac{\lambda(I_n^{(\alpha)}(x))}{\mu_\alpha(I_n^{(\alpha)}(x))}\leq e^{-n(s-\varepsilon-\log2)}, \] for all $n\geq N_{\varepsilon}$. Thus, if $s\in (\log2, s_+]$, we deduce that \[ \lim_{n\to\infty} \frac{\lambda(I_n^{(\alpha)}(x))}{\mu_\alpha(I_n^{(\alpha)}(x))}=0. \] By Corollary \ref{liminfderinfcor}, we then infer that $\theta_\alpha'(x)=\infty$ and so $x\in \Theta_\infty$. This proves part (a). In order to prove part (b), first notice (where the first equality can be proved similarly to Lemma \ref{ctblsets} and the second comes from the proof of Lemma \ref{ctblsets}), that \[ \lim_{n\to\infty} \frac{-\log(a_{\ell_1}\ldots a_{\ell_n}t_{\ell_{n+1}})}{\ell_1+\cdots +\ell_{n+1}}=\lim_{n\to\infty} \frac{-\log(a_{\ell_1}\ldots a_{\ell_{n+1}})}{\ell_1+\cdots +\ell_{n+1}}=\lim_{n\to\infty} \frac{\log(\lambda(I_n^{(\alpha)}(x)))}{-n}=s<\log2. \] Using this observation, a straightforward calculation along the lines of that done for part (a) shows that we have $\lim_{n\to\infty}2^{-(\ell_1+\cdots +\ell_n)}/a_{\ell_1}\ldots a_{\ell_n}=0$. In light of Proposition \ref{limitto0}, we obtain that $\theta_\alpha'(x)=0$ and this finishes the proof of part (b). Finally, to prove part (c), one immediately verifies that if $\liminf_{n\to\infty} \frac{\sum_{i=1}^n\log(a_{\ell_{i}(x)})}{-\sum_{i=1}^n \ell_{i}(x)}<\log2$, then there exists $0<c<1$ such that \[ \liminf_{n\to\infty} \frac{a_{\ell_{1}(x)}\ldots a_{\ell_{n}(x)}}{2^{-\sum_{i=1}^n \ell_{i}(x)}}\leq e^c. \] Similarly, if $\limsup_{n\to\infty} {\sum_{i=1}^n\log(a_{\ell_{i}(x)})}/{(-\sum_{i=1}^n \ell_{i}(x))}>\log2$, then there exists $C>1$ such that \[ \limsup_{n\to\infty} \frac{a_{\ell_{1}(x)}\ldots a_{\ell_{n}(x)}}{2^{-\sum_{i=1}^n \ell_{i}(x)}}\geq e^C. \] In other words, the limit as $n$ tends to infinity of the sequence $\left(({a_{\ell_{1}(x)}\ldots a_{\ell_{n}(x)}})/(2^{-\sum_{i=1}^n \ell_{i}(x)})\right)_{n\geq1}$ does not exist. Therefore, the limit of the sequence $\left(\lambda(I_n^{(\alpha)}(x))/\mu_\alpha(I_n^{(\alpha)}(x))\right)_{n\geq1}$ does not exist either, and, in light of Lemma \ref{lemma1}, we have that the derivative $\theta_\alpha'(x)$ also cannot exist. This shows that $x\in\Theta_\sim$ and hence finishes the proof.
\end{proof} For the next proposition, we define: \begin{eqnarray*} \mathcal{L}^{*}(s) & := & \left\{ x\in\mathcal{U}:\limsup_{n\rightarrow\infty}\frac{\log(a_{\ell_{1}(x)}\ldots a_{\ell_{n}(x)})}{{-\sum_{i=1}^n \ell_{i}(x)}}\geq s\right\} ,\\ \mathcal{L}_{*}(s) & := & \left\{ x\in\mathcal{U}:\liminf_{n\rightarrow\infty}\frac{\log(a_{\ell_{1}(x)}\ldots a_{\ell_{n}(x)})}{{-\sum_{i=1}^n \ell_{i}(x)}}\geq s\right\} ,\\ \mathcal{L}\left(s,t\right) & := & \left\{ x\in\mathcal{U}:\liminf_{n\to\infty}\frac{\log(a_{\ell_{1}(x)}\ldots a_{\ell_{n}(x)})}{{-\sum_{i=1}^n \ell_{i}(x)}}\leq s,\limsup_{n\to\infty}\frac{\log(a_{\ell_{1}(x)}\ldots a_{\ell_{n}(x)})}{{-\sum_{i=1}^n \ell_{i}(x)}}\geq t\right\} .\end{eqnarray*} \begin{prop}\label{6.4} \ \begin{itemize} \item [(a)] For each $s\in (s_-, s_+)$, we have that \[ \dim_{\mathrm{H}}\left(\mathcal{L}_*(s)\right)=\dim_{\mathrm{H}}\left(\mathcal{L}^*(s)\right)=\dim_{\mathrm{H}}\left(\mathcal{L}(s)\right). \] \item [(b)] For each $s_-<s_0\leq s_1<s_+$, we have that \[ \dim_{\mathrm{H}}\left(\mathcal{L}(s_0, s_1)\right)=\dim_{\mathrm{H}}\left(\mathcal{L}(s_1)\right). \] \end{itemize} \end{prop} \begin{proof} Towards part (a), the inequality $\dim_{\mathrm{H}}\left(\mathcal{L}_*(s)\right)\leq\dim_{\mathrm{H}}\left(\mathcal{L}^*(s)\right)$ is immediate from the fact that $\mathcal{L}_*(s)\subset\mathcal{L}^*(s)$. Also, notice that $\mathcal{L}(s)\subset\mathcal{L}_*(s)$, so we have the inequality $\dim_{\mathrm{H}}\left(\mathcal{L}(s)\right)\leq\dim_{\mathrm{H}}\left(\mathcal{L}_*(s)\right)$. To finish the proof of part (a), we will show, via a covering argument, that $\dim_{\mathrm{H}}\left(\mathcal{L}^*(s)\right)\leq t^*(s)$. For ease of exposition, let us define the two potential functions $\varphi$ and $\psi$ by setting \[ \varphi(x):=\log(a_n)\ \text{ and }\ \psi(x):=-n, \ \text{ for }x\in A_n. \] Then, for each $x\in \mathcal{L}^*(s)$, every $\varepsilon>0$ and every $\delta>0$, we can choose $n_{k(x, \delta)}$ such that for all $k\geq k(x, \delta)$ we have that \[ \diam(C_\alpha(\ell_1(x), \ldots, \ell_{n_k}(x)))=a_{\ell_1(x)}\ldots a_{\ell_{n_k}(x)}<\delta \] and \[ 0<\frac{S_{n_k}\psi(x)}{S_{n_k}\varphi(x)}\leq \frac1s+\frac\varepsilon2, \] where the notation $S_n\varphi$ denotes the $n$-th Birkhoff sum $\sum_{k=0}^{n-1}\varphi\circ L_\alpha^k$. Thus, removing duplicates as necessary, we can cover the set $\mathcal{L}^*(s)$ with the family $\mathcal{A}_\delta$ of at most countably many cylinder sets, where \[ \mathcal{A}_\delta:=\left\{C_i:=C_{\alpha}\left(\ell_1(x^{(i)}), \ldots, \ell_{n_{k(x^{(i)}, \delta)}}(x^{(i)})\right):i\in A\subseteq \mathbb{N}\right\}.
\] Then, for all $\varepsilon>0$, where to shorten notation we have set $n_k:=n_{k(x, \delta)}$, we have that \begin{eqnarray*} \mathcal{H}_\delta^{t(v)+vs^{-1}+\varepsilon}\left(\mathcal{L}^*(s)\right)&\leq& \sum_{C_i\in \mathcal{A}_\delta}|C_i|^{t(v)+vs^{-1}+\varepsilon}\\ &=&\sum_{i\in A}\left(a_{\ell_1(x^{(i)})}\ldots a_{\ell_{n_k}(x^{(i)})}\right)^{t(v)+vs^{-1}+\varepsilon}\\ &=&\sum_{i\in A}\exp\left(S_{n_k}\varphi(x^{(i)})(t(v)+vs^{-1}+\varepsilon)\right)\\ &\leq& \sum_{i\in A}\exp\left(S_{n_k}\varphi(x^{(i)})\left(t(v)+v\frac{S_{n_k}\psi(x^{(i)})}{S_{n_k}\varphi(x^{(i)})}+\frac{\varepsilon}{2}\right)\right)\\&\leq& \sum_{n\in\mathbb{N}}\sum_{\ell_1, \ldots, \ell_n\in \mathbb{N}^n}\exp \sup_{y\in C_\alpha(\ell_1, \ldots, \ell_{n})}\left\{S_n\left(\left(t(v)+\frac\varepsilon2\right)\varphi+v\psi\right)(y)\right\}. \end{eqnarray*} Recalling that the free-energy function $t$ is defined in terms of the pressure function $\mathcal{P}(t\varphi+v\psi):=\log\sum_{n=1}^\infty a_n^t\exp(-vn)$ and that $\mathcal{P}$ is strictly decreasing as a function of $t$, from the definition of $t(v)$ it follows that $\mathcal{P}((t(v)+\varepsilon/2)\varphi+v\psi)=\eta<0$. Consequently, for arbitrarily small $\delta$, we have that \[ \mathcal{H}_\delta^{t(v)+vs^{-1}+\varepsilon}\left(\mathcal{L}^*(s)\right)\leq \sum_{n\in\mathbb{N}}e^{n\eta}<\infty, \] which is summable since $\eta<0$. Therefore, for every $\varepsilon>0$ and every $v\in \mathbb{R}$, we have that $\dim_{\mathrm{H}}\left(\mathcal{L}^*(s)\right)\leq t(v)+vs^{-1}+\varepsilon$. Finally, then, we obtain that \[ \dim_{\mathrm{H}}\left(\mathcal{L}^*(s)\right)\leq \dim_{\mathrm{H}}\left(\mathcal{L}(s)\right). \] Now, for the proof of part (b), first notice that since $\mathcal{L}(s_0, s_1)\subseteq\mathcal{L}^*(s_1)$ and $\dim_{\mathrm{H}}\left(\mathcal{L}^*(s_1)\right)=\dim_{\mathrm{H}}\left(\mathcal{L}(s_1)\right)$, it is clear that \[ \dim_{\mathrm{H}}\left(\mathcal{L}(s_0, s_1)\right)\leq\dim_{\mathrm{H}}\left(\mathcal{L}(s_1)\right). \] To obtain the lower bound, where we denote by $C_n(x)$ the $n$-th level cylinder set containing the point $x$, it suffices to show (by, for instance, \cite[Proposition 2.3~(a)]{Fal2}), that there exists a finite measure $\mu$ such that \begin{itemize} \item[(i)] $\mu\left(\mathcal{L}\left(s_{0},s_{1}\right)\right)>0$, \item[(ii)] ${\displaystyle \liminf_{n\to\infty}\frac{-\log\mu\left(C_{n}(x)\right)}{S_{n}\varphi(x)}\geq\dim_{\mathrm{H}}\left(\mathcal{L}\left(s_{1}\right)\right)}$, for all $x$ in a subset of $\mathcal{L}\left(s_{0},s_{1}\right)$ of positive $\mu$-measure. \end{itemize} In order to construct such a measure $\mu$, first note that it was shown in the proof of Theorem 3 in \cite{KMS} that for every $u<1$, there exists $v(u)$ such that \begin{eqnarray}\label{..} \sum_{n=1}^\infty a_n^u \exp({-nv(u)})=1. \end{eqnarray} Therefore, for $s_0$ and $s_1$ we can find corresponding Bernoulli measures $\mathbb{P}_{s_0}$ and $\mathbb{P}_{s_1}$ which are defined by the probability vectors given by $p_n(s_0):=a_n^{u_{s_0}}\exp(-nv(u_{s_0}))$ and $p_n(s_1):=a_n^{u_{s_1}}\exp(-nv(u_{s_1}))$, respectively.
Note that the relation between $u$ and $s$ is given by $-v'(u_{s_i})=s_i$, for $i=0,1$. It is then straightforward to show, by differentiating (\ref{..}) with respect to $u$, that $\int\varphi\ \mathrm{d}\mathbb{P}_{s_i}/\int\psi\ \mathrm{d}\mathbb{P}_{s_i}=s_i$, again for $i=0,1$. We also have that for $\mathbb{P}_{s_i}$-a.e. $x\in [0,1]$, \[ \lim_{n\to\infty}\frac{1}{n}S_n\varphi(x)=\int\varphi\ \mathrm{d}\mathbb{P}_{s_i}\in(0, \infty) \] and \[ \lim_{n\to\infty}\frac{-\log\mathbb{P}_{s_i}(C_n(x))}{S_n\varphi(x)}=u_{s_i}+s_i^{-1}v(u_{s_i}). \] Therefore, by Egoroff's Theorem, there exists an increasing sequence of natural numbers $(m_k)_{k\geq1}$ and a sequence $(A_k)_{k\geq1}$ of Borel subsets of $[0,1]$ such that $\mathbb{P}_{s_0}(A_{2k})\geq 1-2^{-(2k+1)}$, $\mathbb{P}_{s_1}(A_{2k-1})\geq 1-2^{-2k}$ and such that for all $x\in A_{2k}$ and all $n\geq m_{2k}$, \begin{eqnarray}\label{eq4.2} \left|\frac{1}{n}S_n\varphi(x)-\int\varphi\ \mathrm{d}\mathbb{P}_{s_0}\right|<\frac{1}{2k}\ \text{ and }\ \frac{-\log\mathbb{P}_{s_0}(C_n(x))}{S_n\varphi(x)}>\dim_{\mathrm{H}}(\mathcal{L}(s_0))-\frac1{2k}, \end{eqnarray} whereas for all $x\in A_{2k-1}$ and all $n\geq m_{2k-1}$, \begin{eqnarray}\label{eq4.3} \left|\frac{1}{n}S_n\varphi(x)-\int\varphi\ \mathrm{d}\mathbb{P}_{s_1}\right|<\frac{1}{2k-1}\ \text{ and }\ \frac{-\log\mathbb{P}_{s_1}(C_n(x))}{S_n\varphi(x)}>\dim_{\mathrm{H}}(\mathcal{L}(s_1))-\frac1{2k-1}. \end{eqnarray} We now aim to use the sets $A_k$ to construct a set $\mathcal{M}\subset \mathcal{L}(s_0, s_1)$ by defining certain families of cylinder sets coded by increasingly long words and taking their intersection. To that end, set $n_0:=1+1/m_1$ and $n_k:=\prod_{i=1}^k(1+m_i)$, for each $k\geq1$. Then define the countable family of cylinder sets \[ \mathcal{C}_k:=\{C_{n_{k-1}m_k}(x):x\in A_k\}, \ \text{ for each }k\geq1. \] Further define a second countable family of cylinder sets by setting $\mathcal{D}_1:=\mathcal{C}_1$ and setting \[ \mathcal{D}_k:=\{DC:D\in \mathcal{D}_{k-1}, C\in \mathcal{C}_k\}, \ \text{ for each }k\geq2, \] where the cylinder set $DC$ is obtained by concatenating the length $n_{k-1}$ word that defines $D$ and the length $n_{k-1}m_k$ word that defines $C$ and using this length $n_k$ word to define $DC$. Observe that if $x\in DC\in \mathcal{D}_k$, then $L_\alpha^{n_{k-1}}(x)\in C\in \mathcal{C}_k$. Finally, define \[ \mathcal{M}:=\bigcap_{k\in\mathbb{N}}\bigcup_{I\in \mathcal{D}_k}I. \] Now, let $x\in \mathcal{D}_k$. Then, \begin{eqnarray*} \frac{S_{n_k}\varphi(x)}{n_k}&=&\frac{S_{n_{k-1}}\varphi(x)+S_{n_{k-1}m_k}\varphi(L_\alpha^{n_{k-1}}(x))}{n_{k-1}(1+m_k)}\\ &=& \frac{1}{1+m_k}\cdot\frac{S_{n_{k-1}}\varphi(x)}{n_{k-1}}+\frac{m_k}{1+m_k}\cdot\frac{S_{n_{k-1}m_k}\varphi(L_\alpha^{n_{k-1}}(x))}{n_{k-1}m_k}, \end{eqnarray*} and, since the latter equality is a convex combination, it follows immediately that the sequence $S_{n_k}\varphi(x)/n_k$ is bounded.
Therefore, where we have set $i(k):=k$ (mod 2), and recalling that $L_\alpha^{n_{k-1}}(x)\in A_k$, \[ \lim_{k\to\infty}\left|\frac{S_{n_k}\varphi(x)}{n_k}-\int\varphi\ \mathrm{d}\mathbb{P}_{s_{i(k)}}\right| =0. \] This shows that for all $x\in \mathcal{M}$ we have two subsequences $(n_{2k})_{k\geq1}$ and $(n_{2k-1})_{k\geq1}$ along which we have that $\lim_{k\to\infty}S_{n_{2k}}\varphi(x)/n_{2k}=\int\varphi\ \mathrm{d}\mathbb{P}_{s_0}$ and $\lim_{k\to\infty}S_{n_{2k-1}}\varphi(x)/n_{2k-1}=\int\varphi\ \mathrm{d}\mathbb{P}_{s_1}$, which proves that $\mathcal{M}\subset \mathcal{L}(s_0, s_1)$. Now, using the Kolmogorov consistency theorem, define the probability measure $\mu$ on $[0,1]$ by setting $\mu(C):=\mathbb{P}_{s_1}(C)$ for all length $n_1$ cylinder sets $C$ and, for all cylinder sets $I$ of the form $I=DC$, with $D$ of length $n_{k-1}$ and $C$ of length $n_{k-1}m_k$, setting $\mu(I):=\mu(D)\mathbb{P}_{s_{i(k)}}(C)$. Then, by construction, \[ \mu(\mathcal{M})\geq \prod_{k\in \mathbb{N}}(1-2^{-k})>0. \] Thus, the measure $\mu$ satisfies condition (i). To see that $\mu$ satisfies condition (ii), first note that every length $n_k$ cylinder set $C_{n_k}(x)$ for $x\in \mathcal{M}$ and $k\geq1$ can be split as follows: $C_{n_{k}}\left(x\right)=C_{n_{k-1}}\left(x\right)C_{m_{k}n_{k-1}}\left(L_\alpha^{n_{k-1}}(x)\right)$. Using this, we obtain that \begin{eqnarray*} \frac{-\log\left(\mu\left(C_{n_{k}}\left(x\right)\right)\right)}{S_{n_{k}}\varphi\left(x\right)} & = & \frac{-\log\left(\mu\left(C_{n_{k-1}}\left(x\right)\right)\right)}{S_{n_{k-1}}\varphi\left(x\right)}\cdot\frac{\frac{S_{n_{k-1}}\varphi\left(x\right)} {n_{k-1}}}{\frac{S_{n_{k}}\varphi\left(x\right)}{n_{k}}}\cdot\frac{n_{k-1}}{n_{k}}\\ & & +\frac{-\log\left(\mathbb{P}_{s_{i(k)}}\left(C_{m_{k}n_{k-1}}\left(L_\alpha^{n_{k-1}}(x)\right)\right)\right)} {S_{m_{k}n_{k-1}}\varphi\left(L_\alpha^{n_{k-1}}(x)\right)}\cdot{\frac{\frac{S_{m_{k}n_{k-1}}\varphi\left(L_\alpha^{n_{k-1}}(x)\right)}{m_{k}n_{k-1}}} {\frac{S_{n_{k}}\varphi\left(x\right)}{n_{k}}}\frac{m_{k}n_{k-1}}{n_{k}}},\end{eqnarray*} where the last ratio in the second term tends to 1 as $k$ tends to infinity. This shows, similarly to the argument for condition (i), that since the above sum is a convex combination, the sequence $-\log\left(\mu\left(C_{n_{k}}\left(x\right)\right)\right)/S_{n_{k}}\varphi\left(x\right)$ is also bounded. Therefore, given that $\dim_{\mathrm{H}}(\mathcal{L}(s_1))\leq \dim_{\mathrm{H}}(\mathcal{L}(s_0))$, we have that \begin{equation}\label{eq6.3} \liminf_{k\to\infty}\frac{-\log\left(\mu\left(C_{n_{k}}\left(x\right)\right)\right)}{S_{n_{k}}\varphi\left(x\right)}\geq \dim_{\mathrm{H}}(\mathcal{L}(s_1)). \end{equation} This shows that (ii) is satisfied along the subsequence $(n_k)_{k\geq1}$. To complete the proof, we must consider $n_k<n<n_{k+1}$. We will split this into two cases.
Firstly, for $n_k<n<n_{k}+m_k$, one immediately verifies that \[ \frac{-\log\left(\mu\left(C_{n}\left(x\right)\right)\right)}{S_{n}\varphi\left(x\right)}\geq \frac{-\log\left(\mu\left(C_{n_k}\left(x\right)\right)\right)}{S_{n_k+m_k}\varphi\left(x\right)} =\frac{-\log\left(\mu\left(C_{n_{k}}\left(x\right)\right)\right)} {S_{n_{k}}\varphi(x)}{\cdot\frac{S_{n_{k}}\varphi(x)/n_{k}}{S_{n_{k}+m_{k}}\varphi(x)/(n_{k}+m_{k})}\cdot\frac{n_{k}}{n_{k}+m_{k}}}, \] where again the last ratio on the right-hand side tends to 1 as $k$ (and therefore $n$) tends to infinity. Secondly, if $n_{k}+m_{k}\leq n<n_{k+1}$ then $C_n(x)$ is equal to some length $n_k$ cylinder $D\in \mathcal{D}_k$ concatenated with the cylinder $C:=C_{n-n_k}(L_\alpha^{n_{k}}(x))$, which has length at least equal to $m_k$. Since $x$ is assumed to belong to the set $\mathcal{M}$, the cylinder set $C$ contains some other cylinder set $I\in \mathcal{C}_{k+1}$. Thus, \begin{eqnarray*} \frac{-\log\left(\mu\left(C_{n}\left(x\right)\right)\right)}{S_{n}\varphi\left(x\right)} &\geq& \frac{-\log\left(\mu\left(C_{n_{k}}\left(x\right)\right)\right)- \log\mathbb{P}_{s_{i(k)}}\left(C_{n-n_k}\left(L_\alpha^{n_{k}}\left(x\right)\right)\right)}{S_{n}\varphi\left(x\right)}\\ &\geq& \frac{-\log\left(\mu\left(C_{n_{k}}\left(x\right)\right)\right)}{S_{n_k}\varphi(x)}\cdot\frac{S_{n_k}\varphi(x)}{S_{n}\varphi(x)} + \frac{- \log\mathbb{P}_{s_{i(k)}}\left(C_{n-n_k}\left(L_\alpha^{n_{k}}\left(x\right)\right)\right)}{S_{n-n_k}\varphi(L_\alpha^{n_{k}}\left(x\right))} \cdot\frac{S_{n-n_k}\varphi(L_\alpha^{n_{k}}\left(x\right))}{S_{n}\varphi(x)}. \end{eqnarray*} Then, by (\ref{eq6.3}), for all $\varepsilon>0$ and all sufficiently large $k$ (and hence large $n$), we have that \[ \frac{-\log\left(\mu\left(C_{n_{k}}\left(x\right)\right)\right)}{S_{n_k}\varphi(x)}\geq \dim_{\mathrm{H}}(\mathcal{L}(s_1))-\varepsilon. \] Also, recalling that $n-n_k\geq m_k$, in light of (\ref{eq4.2}) and (\ref{eq4.3}), we obtain that \[ \frac{- \log\mathbb{P}_{s_{i(k)}}\left(C_{n-n_k}\left(L_\alpha^{n_{k}}\left(x\right)\right)\right)}{S_{n-n_k}\varphi(L_\alpha^{n_{k}}\left(x\right))} \geq \dim_{\mathrm{H}}(\mathcal{L}(s_{i(k)}))-\varepsilon\geq\dim_{\mathrm{H}}(\mathcal{L}(s_1))-\varepsilon. \] Finally, letting $\varepsilon$ tend to zero and combining (\ref{eq6.3}) with the calculations given above for the two cases $n_k<n<n_{k}+m_k$ and $n_{k}+m_{k}\leq n<n_{k+1}$, we obtain that \[ \liminf_{n\to\infty}\frac{-\log\left(\mu\left(C_{n}\left(x\right)\right)\right)}{S_{n}\varphi(x)}\geq \dim_{\mathrm{H}}(\mathcal{L}(s_1)), \] which finishes the proof. \end{proof} \begin{rem} The proof of the lower bound for Proposition \ref{6.4}~(b) follows along the same lines as the proof of \cite[Proposition 6.4]{mink?}, which in turn was inspired by the argument in \cite[Theorem 6.7(3)]{BS00}. \end{rem} We are now in a position to prove the main theorem. \begin{proof}[Proof of Theorem \ref{mainthm}] Firstly, that $\dim_{\mathrm{H}}(\mathcal{L}(\log2))<1$ follows immediately from the multifractal results in \cite[Theorem 3]{KMS}.
In order to prove that $\dim_{\mathrm{H}}\left(\Theta_\infty\right)=\dim_{\mathrm{H}}\left(\mathcal{L}\left(\log2\right)\right)$, it suffices to show that for every small enough $\delta>0$ we have \[ \mathcal{L}(\log2 + \delta)\subset \Theta_\infty\subset \mathcal{L}_*(\log2). \] The first inclusion above is simply the statement of Proposition \ref{6.1}~(a). To demonstrate the second inclusion, let $x\in \Theta_\infty$ be given. Then, by Corollary \ref{liminfderinfcor}, we have that $\lim_{n\to\infty} 2^n\lambda(I^{(\alpha)}_n)=0$. Hence, for all $\varepsilon>0$ there exists $n_\varepsilon\in\mathbb{N}$ such that for all $n\geq n_\varepsilon$ we have that \begin{eqnarray*} 2^n\lambda(I^{(\alpha)}_n)<\varepsilon& \Rightarrow & \log\left(\lambda(I^{(\alpha)}_n)\right)<-n\log2 +\log\varepsilon\\ & \Rightarrow &\frac{\log\left(\lambda(I^{(\alpha)}_n)\right)}{-n}>\log2 -\frac{\log\varepsilon}n. \end{eqnarray*} Therefore it follows that \[ \liminf_{n\to\infty}\frac{S_n\varphi(x)}{S_n\psi(x)}\geq \liminf_{n\to\infty}\frac{\log\left(\lambda(I^{(\alpha)}_n)\right)}{-n}\geq\log2, \] which shows that $x\in\mathcal{L}_*(\log2)$. Consequently, $\Theta_\infty\subset\mathcal{L}_*(\log2)$, as required. To prove that $\dim_{\mathrm{H}}\left(\Theta_\sim\right)\leq\dim_{\mathrm{H}}\left(\mathcal{L}\left(\log2\right)\right)$, by Proposition \ref{6.4}~(a), it is enough to show that $\Theta_\sim\subset \mathcal{L}^*\left(\log2\right)$. Towards this end, let $x\in \Theta_\sim$. Hence $x\in [0,1]\setminus \Theta_0$ and, according to Proposition \ref{limitto0}, we have that \[ \limsup_{k\to\infty}\frac{\mu_\alpha\left(\left[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)\right]\right)}{\lambda\left(\left[r_k^{(\alpha)}(x), r_{k+1}^{(\alpha)}(x)\right]\right)}\cdot \frac{t_{\ell_{k+1}(x)}}{a_{\ell_{k+1}(x)}}=\limsup_{k\to\infty}\frac{2^{-(\ell_1(x)+\cdots +\ell_{k}(x))}}{a_{\ell_1(x)}\ldots a_{\ell_k(x)}}>0\Rightarrow\limsup_{n\to\infty}\frac{S_n\varphi(x)}{S_n\psi(x)}\geq \log2. \] This implies that $x\in \mathcal{L}^*(\log2)$ and so $\Theta_\sim\subset \mathcal{L}^*\left(\log2\right)$. For the lower bound, $\dim_{\mathrm{H}}\left(\Theta_\sim\right)\geq\dim_{\mathrm{H}}\left(\mathcal{L}\left(\log2\right)\right)$, recall that in Proposition \ref{6.1}~(c) we proved that \[ \left\{x\in[0,1]:\liminf_{n\to\infty} \frac{\sum_{i=1}^n\log(a_{\ell_{i}(x)})}{-\sum_{i=1}^n \ell_{i}(x)}<\log2<\limsup_{n\to\infty} \frac{\sum_{i=1}^n\log(a_{\ell_{i}(x)})}{-\sum_{i=1}^n \ell_{i}(x)}\right\}\subset \Theta_\sim. \] Then, due to Proposition \ref{6.4}~(b), we have that $\dim_{\mathrm{H}}\left(\Theta_\sim\right)\geq\dim_{\mathrm{H}}\left(\mathcal{L}(s_1)\right)$ for all $s_1\in(\log2, s_+)$. This finishes the proof. \end{proof} \begin{thebibliography}{31} \bibitem{BS00}L.~Barreira, J.~Schmeling.
\newblock Sets of {}``non-typical'' points have full topological entropy and full Hausdorff dimension. \newblock \emph{Israel J. Math.}, 116:29--70, 2000. \bibitem{BBDK} J. Barrionuevo, R. M. Burton, K. Dajani and C. Kraaikamp. \newblock Ergodic properties of generalised L\"uroth series. \newblock \emph{Acta Arith.}, {\bf LXXIV}(4):311--327, 1996. \bibitem{Fal2} K. Falconer. \newblock \emph{Techniques in Fractal Geometry}. \newblock John Wiley, New York, 1997. \bibitem{JaerischKess09} J. Jaerisch, M. Kesseb\"ohmer. \newblock Regularity of multifractal spectra of conformal iterated function systems. \newblock \emph{Trans. Amer. Math. Soc.}, 363(1):313--330, 2011. \bibitem{KMS} M.~Kesseb{\"o}hmer, S. Munday and B.O. Stratmann. \newblock Strong renewal theorems and Lyapunov spectra for $\alpha$-Farey and $\alpha$-L\"uroth systems. \newblock \emph{Ergodic Theory Dynam. Systems}, 32(3):989--1017, 2012. \bibitem{KesseboehmerStratmann:07} M.~Kesseb{\"o}hmer, B.O.~Stratmann. \newblock A multifractal analysis for Stern-Brocot intervals, continued fractions and Diophantine growth rates. \newblock \emph{J. Reine Angew. Math.}, 605:133--163, 2007. \bibitem{mink?} M.~Kesseb{\"o}hmer, B.O.~Stratmann. \newblock Fractal analysis for sets of non-differentiability of Minkowski's question mark function. \newblock \emph{J. Number Theory}, 128:2663--2686, 2008. \bibitem{min} H. Minkowski. \newblock \emph{Geometrie der Zahlen}. Gesammelte Abhandlungen, Vol. 2, 1911; reprinted by Chelsea, New York, 43--52, 1967. \bibitem{sarathesis} S. Munday. \newblock \emph{Finite and Infinite Ergodic Theory for Linear and Conformal Dynamical Systems}. PhD Thesis, University of St Andrews, 2011. \bibitem{PV} J. Parad\'{\i}s, P. Viader. \newblock The derivative of Minkowski's $?(x)$ function. \newblock \emph{J. Math. Anal. Appl.}, 253:107--125, 2001. \bibitem{PVB} J. Parad\'{\i}s, P. Viader, L. Bibiloni. \newblock Riesz-N\'{a}gy singular functions revisited. \newblock \emph{J. Math. Anal. Appl.}, 329:592--602, 2007. \bibitem{royden} H.L. Royden. \newblock \emph{Measure Theory, 3rd ed.} Prentice-Hall, New Jersey, 1988. \bibitem{Salem} R. Salem. \newblock On some singular monotonic functions which are strictly increasing. \newblock \emph{Trans. Amer. Math. Soc.}, 53(3):427--439, 1943. \bibitem{tseng} J. Tseng. \newblock Schmidt games and Markov partitions. \newblock \emph{Nonlinearity}, {\bf 22}:525--543, 2009. \end{thebibliography} \end{document}
\begin{document} \title{Dynamic Pricing and Distributed Energy Management for Demand Response} \author{Liyan Jia and Lang Tong, {\em Fellow, IEEE} \thanks{Liyan Jia and Lang Tong are with the School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA {\tt\small \{lj92,lt35\}@cornell.edu} This manuscript has been accepted by IEEE Transactions on Smart Grid. This work is supported in part by the National Science Foundation under Grant CNS-1135844 and Grant 15499. } } \maketitle \begin{abstract} The problem of dynamic pricing of electricity in a retail market is considered. A Stackelberg game is used to model interactions between a retailer and its customers; the retailer sets the day-ahead hourly price of electricity and consumers adjust real-time consumptions to maximize individual consumer surplus. For thermostatic demands, the optimal aggregated demand is shown to be an affine function of the day-ahead hourly price. A complete characterization of the trade-offs between consumer surplus and retail profit is obtained. The Pareto front of achievable trade-offs is shown to be concave, and each point on the Pareto front is achieved by an optimal day-ahead hourly price. Effects of integrating renewables and local storage are analyzed. It is shown that benefits of renewable integration all go to the retailer when the capacity of renewable is relatively small. As the capacity increases beyond a certain threshold, the benefit from renewable that goes to consumers increases. \end{abstract} \begin{IEEEkeywords} demand response; dynamic pricing; thermostatic control; renewable integration; home energy storage. \end{IEEEkeywords} \section{Introduction}\label{sec:intro} Demand response in a smart grid is expected to offer economic benefits to consumers while improving overall operation efficiency and reliability. A properly designed demand response program can reduce the peak load, compensate for uncertainties associated intermittent renewables, and reduce the cost of system operation. Two types of demand response are commonly used. One gives the retail utility direct control of consumers' consumptions. See, {\it e.g., \/} \cite{DemandResponsePJM,Chao&Etal88,CLRC:05}. Although this form of demand response provides flexibilities to the system operator, a consumer loses the ability to manage consumption based on her own preferences. An alternative is to reshape demand response through dynamic pricing. Examples of this type of programs include the use of critical pricing \cite{Faruqui&etal:09} and real time pricing \cite{Borenstein&etc:02,Borenstein&Holland:05}. Such schemes allow the consumers to mange consumptions individually. In this paper, we consider a demand response scheme of the second type. We focus on a particular type of dynamic pricing referred to as the {\em day-ahead hourly pricing} (DAHP). DAHP was first considered in \cite{Borenstein&etc:02,Borenstein&Holland:05} and has already been implemented by some utility companies in U.S. \cite{Hopper&Goldman&Neenan:05EJ}. Under DAHP, the hourly retail prices of electricity are set one day ahead of the actual consumptions, thus providing a level of price certainty to consumers. DAHP also allows the retailer to adjust prices on a day-to-day basis, taking into account operating conditions at the wholesale market. 
The empirical study of DAHP reported in \cite{Hopper&Goldman&Neenan:05EJ} concludes that DAHP ``not only improves the linkage between wholesale and retail markets, but also promotes the development of retail competition.'' \subsection{Summary of Main Results} We analyze in this paper interactions between a retailer and its customers. We use the term retailer to include the traditional retail utilities as well as energy aggregators in a smart grid setting. A Stackelberg game model is used to capture retailer-consumer interactions in which the retailer is the leader who sets the DAHP and the consumers the followers who adjust individual consumption. For thermostatically controlled load, we show in Section~\ref{ssec:demand} that the optimal aggregated demand is an affine function of the DAHP, and the optimal DAHP can be obtained via convex optimizations. \begin{figure} \caption{\small CS-RP trade-off curve with different dynamic pricing schemes.} \label{fig:trade-off1} \end{figure} Main results of this paper are built upon the characterization of the Pareto front of trade-offs between consumer surplus (CS) and retail profit (RP), as illustrated in Fig.~\ref{fig:trade-off1}, where each point on the Pareto front is associated with an optimized DAHP. We show in Section~\ref{sec:Game} that the Pareto front is concave and monotonically decreasing. Several well known pricing schemes are shown in Fig.~\ref{fig:trade-off1}. In particular, in the absence of renewable integration and energy storage, the price $\pi^{\text{sw}}$ that maximizes the sum of consumer surplus and retail profit (the social welfare) is shown to result in zero retail profit. This means that the social welfare maximization price is not economically viable to the retailer. The optimal regulated monopoly price $\pi^{\text{r}}$ is associated with the point on the Pareto front where the retailer's profit is fixed at some $\Delta$. The retail profit maximization price is $\pi^{\text{o}}$ when no constraint is imposed on the retailer. We note also that benchmark pricing schemes such as constant pricing, time of use (ToU) and proportional markup pricing are suboptimal; they are in general inside the Pareto front, indicating that improvement can be achieved by applying the optimization framework developed in Section~\ref{sec:tradeoff}. We also consider effects of incorporating renewable energy and energy storage. We show that, when the retailer integrates renewable sources, the retail profit at the social welfare optimizing DAHP is positive, making social welfare maximization potentially a viable operation objective. It is shown that the benefit of renewable integration all goes to the retailer when the capacity of renewable integration is small. As the capacity increases, so does the benefit to consumers. For the consumer side storage, we show that the CS-RP trade-off can be formulated under the same Stackelberg game framework. Simulation studies are presented using realistic electricity prices and home energy management models to illustrate the benefits of distributed energy management and DAHP-based demand response. The effects of renewable generation on the trade-off curve between consumer surplus and retail profits are illustrated via simulations. \subsection{Related work} The literature on dynamic pricing at retail level appeared along with the beginning of wholesale market deregulation. 
In \cite{Borenstein&etc:02, Borenstein:05}, the authors demonstrated the benefit of DAHP and showed that DAHP could attract consumers in the long run. In \cite{Carrion&Etal:07TPS, Conejo&Etal:08TPS}, retail price optimization was formulated by considering consumer side uncertainties and financial risks. Relative to this line of existing work, the main contribution of this paper is a full characterization of the trade-off between the achievable retail profit and consumer surplus. The use of a Stackelberg game model to study interactions between a retailer and its consumers goes back to \cite{Luh&etal:82TAC}. The authors of \cite{Luh&etal:82TAC} presented a general game theoretic framework and a load adaptive pricing scheme. The pricing scheme considered in \cite{Luh&etal:82TAC} is different from the DAHP treated here, primarily because the load adaptive pricing was designed for a vertically integrated utility whereas the DAHP considered in this paper is applicable to a deregulated market operating under a two-settlement mechanism. Instead of aiming at the single objective of global social optimality as in \cite{Luh&etal:82TAC}, we give a full characterization of optimal solutions for a set of (local) objectives. Extensive results exist on optimizing demand response operations. For example, a distributed algorithm was developed to achieve social welfare maximization in \cite{Li&Etal:11PESGM}. A hierarchical market structure and distributed optimization are also considered in \cite{Joo&Ilic:13TSG}. Contract design between renewable generators and the aggregator is studied in \cite{Papavasiliou&Oren:11PESGM}, where the aggregator is responsible for large-scale PHEV charging. A particularly relevant work, from a modeling perspective, is \cite{Hao&Etal}, which succinctly captures thermostatically controlled loads as a generalized battery model. The demand model used in this paper falls in this category. \subsection{Organization and notation} This paper is organized as follows. In Section \ref{sec:Game}, a Stackelberg game is introduced and rational behaviors of the retailer and its customers are analyzed. Section \ref{sec:tradeoff} characterizes the tradeoff between the retail profit and consumer surplus. In Section \ref{sec:renewable}, renewable integration by the retail utility is considered. Section~\ref{sec:storage} considers the problem when storage devices are used at the consumer side. In Section \ref{sec:sim}, we conduct numerical simulations to compare different benchmark schemes and investigate the effects of renewable integration. Throughout the paper, $(x_1,\cdots, x_N)$ denotes a column vector and $[x_1,\cdots, x_M]$ a row vector. We use $x^+$ to represent $\max\{0,x\}$, $x^-$ to represent $\max\{0,-x\}$, and $\bar{x}$ the expected value $\mathbb{E}[x]$. \section{A Stackelberg game model} \label{sec:Game} \subsection{DAHP and a Stackelberg game model} We model the retailer-consumer interactions through DAHP using a Stackelberg game where the retailer is the leader, setting the DAHP, and the consumers are the followers, adjusting their individual consumptions. \begin{enumerate} \item The DAHP $\pi=(\pi_1,\cdots, \pi_{24})$ is a 24-dimensional real vector where $\pi_i$ is the retail price of electricity at hour $i$. The DAHP is set one day ahead and fixed throughout the day of operation. The pay-off function of the retailer will be defined later in Section~\ref{ssec:retail}. \item In real time, a consumer dynamically determines her consumption.
If storage is available, the consumer may also charge or discharge the storage. Selling back surplus power in a net metering setting is allowed. The payment from a consumer to the retailer is settled as the inner product of $\pi$ and the real-time energy consumption. The pay-off function of consumers is the consumer surplus that measures the satisfaction of the consumer. One candidate is provided in Section~\ref{ssec:demand}. \item The retailer meets the aggregated demand by purchasing electricity at the wholesale market, possibly complementing the purchase by its own renewables. \end{enumerate} The Stackelberg game can be solved via a backward induction. We thus present first the analysis of consumers' response to a fixed DAHP $\pi$. The problem of optimizing $\pi$ from the retailer's point of view is then considered. \subsection{Consumer action: optimal demand response} \label{ssec:demand} We start our analysis of the Stackelberg game from the consumer side. After the DAHP $\pi$ is given, a consumer dynamically determines her own consumption to optimize the consumer surplus defined by the difference between the utility of consumption and the electricity payment. It is assumed that the consumer in a smart grid can adjust her consumption based on real-time measurements. The optimal control policy and the resulted demand response serve as the predicted behavior of a rational consumer in the Stackelberg game described above. In this paper, we consider price elastic demands that are results of optimal control of a linear system. A typical example is the thermostatically controlled loads. See \cite{Hao&Etal} for a general model for other types of applications. For thermostatic control involving heating-ventilation and air conditioning (HVAC) units, empirical study \cite{Bargiotas&Birddwell:88ITPD} shows that the dynamic equation that governs the temperature evolution is given by \begin{equation} \begin{array}{r l} x_i&=x_{i-1}+ \alpha(a_i-x_{i-1}) - \beta p_i + w_i, \\ y_i& = (x_i, a_i) + v_i, \label{eq:hvacmodel} \end{array} \end{equation} where $i$ is the index of period, $y_i$ the noisy measurements of the indoor temperature $x_i$ and outdoor temperature $a_i$, $p_i$ the power drawn by the HVAC unit, $w_i$ the process noise and $v_i$ the measurement noise. Both $w_i$ and $v_i$ are assumed to be zero mean and Gaussian. System parameters $\alpha$ ($0<\alpha<1$) and $\beta$ model the insolation of the building and efficiency of the HVAC unit. Note that the above equation applies to both heating and cooling scenarios. But we exclude the scenario that the HVAC does both heating and cooling during the same day. We focus herein on the cooling scenario ($\beta > 0$) and the results apply to heating ($\beta < 0$) as well. In the following, we will use superscript $(j)$ to denote the variables associated with consumer $j$. For consumer $j$, assume that she wants to keep the indoor temperature close to her own desired temperature $t_i^{(j)}$ for the $i$th hour. The deviation of the actual indoor temperature $x_i^{(j)}$ from the desired temperature $t_i^{(j)}$ can be used to measure consumer $j$'s discomfort level. By assuming symmetric upward and downward discomfort measure, a quadratic form of consumer utility over $N$ periods is given by \begin{equation} u^{(j)}= -\mu^{(j)} \sum_{i=1}^{N} (x_i^{(j)}-t_i^{(j)})^2, \label{eq:utility} \end{equation} where $\mu^{(j)}$ is consumer $j$'s weight factor to convert the temperature deviation to a monetary value. 
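To make the thermal model concrete, the following minimal Python sketch simulates the indoor temperature under Eq.~(\ref{eq:hvacmodel}) for a given cooling profile and evaluates the discomfort-based utility in Eq.~(\ref{eq:utility}). The parameter values and the outdoor-temperature profile are illustrative assumptions, not the data used later in Section~\ref{sec:sim}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, mu = 0.5, 0.1, 0.5                  # insulation, HVAC eff., weight
hours = 24
a = 30.0 + 5.0 * np.sin(np.pi * (np.arange(hours) - 6) / 12)   # outdoor temp (C)
t_des = np.full(hours, 18.0)                     # desired indoor temperature
# Cooling power that would hold the indoor temperature at t_des in steady state.
p = np.maximum(alpha * (a - t_des) / beta, 0.0)

x = np.empty(hours)
x_prev = 24.0                                    # initial indoor temperature
for i in range(hours):
    w = rng.normal(0.0, 0.1)                     # process noise w_i
    x[i] = x_prev + alpha * (a[i] - x_prev) - beta * p[i] + w
    x_prev = x[i]

utility = -mu * np.sum((x - t_des) ** 2)         # quadratic discomfort utility
print(np.round(x, 2), round(float(utility), 2))
\end{verbatim}
Replacing the fixed profile $p$ by the price-responsive consumption derived below recovers the demand response studied in the rest of the paper.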
The consumer surplus of consumer $j$ is defined by the difference between the consumer utility and the total payment from the consumers. In particular, given a DAHP $\pi=(\pi_1,\cdots, \pi_{24})$ and energy consumption $p_i^{(j)}$ in the $i$th hour, the payment for the real-time consumption from consumer $j$ is settled as $\sum_{i=1}^{24} \pi_i p_i^{(j)}$. Therefore, her consumer surplus is a random variable given by \begin{equation} \textsf{\text{cs}}^{(j)} {\stackrel{\Delta}{=}} u^{(j)} - \pi^{{\mbox{\tiny T}}} p^{(j)}, \label{eq:cs} \end{equation} where $p^{(j)}=(p^{(j)}_1,\cdots, p^{(j)}_{24})$. We use the consumer surplus as the payoff function of consumers in the Stackelberg game, and the action of a consumer is to decide the electricity consumption during each period. As an assumption, a rational consumer maximizes consumer surplus in real time according to DAHP and her current energy state $x_i^{(j)}$. Therefore, the solution to the following stochastic control program characterizes the demand response to DAHP. \begin{equation} \begin{array}{r l} \max_{p^{(j)}} & \mathbb{E}\left\{\sum_{i=1}^{24} [- \mu^{(j)} (x_i^{(j)} - t_{i}^{(j)})^2 ] - \pi^{{\mbox{\tiny T}}} p^{(j)} \right\} \\ \mbox{s.t.} & x^{(j)}_{i} = x^{(j)}_{i-1} + \alpha^{(j)} (a_{i} - x^{(j)}_{i-1}) - \beta^{(j)} p^{(j)}_i + w^{(j)}_i, \\ & y^{(j)}_i=(x^{(j)}_i,a_i)+ v^{(j)}_i . \\ \end{array} \label{eq:opt_d} \end{equation} Under mild conditions where the price doesn't vary too much during a day and $\mu^{(j)}$ is large, we ignore the positivity constraint and the rate constraint for energy consumption $p^{(j)}$. The backward induction gives the optimal demand response as shown in the following theorem first appeared in \cite{Jia&Tong:12Allerton}. \begin{theorem} \label{thm:opt_demand} Under DAHP, the optimal aggregated residential demand response for a fixed DAHP $\pi$ has the following form, \begin{equation} d(\pi)= -G \pi+b, \label{eq:demand} \end{equation} where $d(\pi)$ is a random vector whose $i$th entry is the aggregated demand from all consumers in hour $i$, $G$ a {\em deterministic} and positive definite matrix, depending only on system parameters and user preferences, and $b$ a Gaussian random vector corresponding to the overall process and observation noise. \end{theorem} {\em Proof: } see the Appendix. $\Box$ Theorem~(\ref{thm:opt_demand}) establishes an affine relationship between the optimal demand response and day-ahead hour price under some mild conditions. It is shown that the sensitivity matrix of demand with respect to price, $-G$, is not affected by the realization of randomness. This means that the change of price will have deterministic effect on the expected value of demand. Also, we see that the retailer does not need to estimate each consumer's parameters in Eq.~(\ref{eq:opt_d}), including $\mu$, $\alpha$, $\beta$ and etc.. It only needs to fit the affine model as in Eq.~(\ref{eq:demand}) based on historical data. Herein, we assume that rational consumers follow the optimal demand response according to Eq.~(\ref{eq:demand}). 
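Since the retailer only needs the aggregate affine relationship and not the individual consumer parameters, $G$ and $\bar b$ can be estimated from historical price and consumption data. The following Python sketch illustrates this with ordinary least squares on synthetic data; the ``true'' $G$ and $b$ used to generate the data are placeholders, not estimates from any real data set.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
H, days = 24, 200
G_true = 2.0 * np.eye(H) + 0.1                       # assumed sensitivity matrix
b_true = 50.0 + 10.0 * rng.random(H)                 # assumed offset

Pi = 3.0 + rng.random((days, H))                     # historical DAHP vectors
D = Pi @ (-G_true.T) + b_true + rng.normal(0, 0.5, (days, H))   # observed demand

# Regress each hour's demand on the full price vector plus an intercept.
X = np.hstack([Pi, np.ones((days, 1))])              # days x (H + 1)
coef, *_ = np.linalg.lstsq(X, D, rcond=None)         # (H + 1) x H
G_hat = -coef[:H, :].T                               # recovered sensitivity matrix
b_hat = coef[H, :]                                   # recovered offset
print(np.abs(G_hat - G_true).max(), np.abs(b_hat - b_true).max())
\end{verbatim}
In practice the regression would use realized prices and metered consumption, and the estimate of $G$ can be constrained to be positive definite as required by Theorem~\ref{thm:opt_demand}.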
Given the retail price $\pi$, by substituting the optimal demand response in Theorem~\ref{thm:opt_demand} back into the consumer optimization problem, the expected total consumer surplus (summed over all consumers) is given by \begin{eqnarray} \overline{\textsf{\text{cs}}}(\pi) & =& \sum_j \overline{\textsf{\text{cs}}}^{(j)}(\pi), \nonumber\\ &=& \sum_j \mathbb{E}\left([- \sum_{i=1}^{24} \mu (x^{*(j)}_i - t_{i})^2] - \pi^{{\mbox{\tiny T}}}p^{*(j)}(\pi)\right), \nonumber\\ &=& \pi^{{\mbox{\tiny T}}}G\pi/2 -\pi^{{\mbox{\tiny T}}}\bar{b} + c, \label{eq:avg_cs} \end{eqnarray} where we used the fact that $\mathbb{E}(p^{*(j)}(\pi))=-G^{(j)}\pi+\bar{b}^{(j)}$ for the $j$th consumer and $G {\stackrel{\Delta}{=}} \sum_j G^{(j)}$, $\bar{b}{\stackrel{\Delta}{=}} \sum_j \bar{b}^{(j)}$. Here $x^{*(j)}$ and $p^{*(j)}(\pi)$ are the same as the values in the proof of Theorem~\ref{thm:opt_demand} in the Appendix, and $c$ is a constant depending on the variance of the noise. \subsection{Retailer's action: optimal dynamic pricing} \label{ssec:retail} In this paper, we assume that the retailer is a price taker in the wholesale real time market. This means that the aggregated demand from consumers in real time does not affect the wholesale price. Additionally, we assume that the Stackelberg game discussed in this paper is one with perfect information, which means the leader has complete knowledge of the followers' payoff functions. In the real-time operation, the retailer is required to satisfy the aggregated demand by paying for the distribution cost and the cost of procuring power from the wholesale market. Let $\lambda = (\lambda_1, \lambda_2,...,\lambda_{24})$ denote the random vector of average marginal cost during each hour, which includes the wholesale electricity price, distribution cost and other service cost. The total expected daily retail profit $\textsf{\text{rp}}(\pi)$, defined as the difference between the expected real time retail revenue and the retail cost, is given by \begin{eqnarray} \overline{\textsf{\text{rp}}}(\pi) &=& \mathbb{E}\left((\pi-\lambda)^{{\mbox{\tiny T}}}d(\pi)\right), \nonumber\\ &=& (\pi - \bar{\lambda})^{{\mbox{\tiny T}}}(-G \pi+\bar{b}), \label{eq:avg_rp} \end{eqnarray} where we use the assumption that, given a fixed DAHP $\pi$, the real-time wholesale price is independent of the real-time consumption. The retailer's pricing decision depends on its own payoff function. A particularly relevant objective is the social welfare defined as the sum of consumer surplus and retail profit, ${\it i.e.,\ \/}$ \begin{equation} \overline{\textsf{\text{sw}}}(\pi) = \overline{\textsf{\text{rp}}}(\pi) + \overline{\textsf{\text{cs}}}(\pi). \end{equation} Using (\ref{eq:avg_cs}-\ref{eq:avg_rp}), we obtain the following theorem that characterizes the social welfare maximizing DAHP. \begin{theorem} \label{thm:SW} The optimal retail price $\pi^{\text{sw}}$ that maximizes the expected social welfare is the expected real time retail cost, \[ \pi^{\text{sw}}=\bar{\lambda}, \] and the expected retail profit under $\pi^{\text{sw}}$ is $\overline{\textsf{\text{rp}}}(\pi^{\text{sw}})=0$. Moreover, for any $\pi'$ such that $\overline{\textsf{\text{rp}}}(\pi') \ge 0$, $\overline{\textsf{\text{cs}}}(\pi') \le \overline{\textsf{\text{cs}}}(\pi^{\text{sw}})$. \end{theorem} {\em Proof:} See the Appendix. $\Box$ Theorem~\ref{thm:SW} shows that, if the social welfare is to be maximized, the retailer generates no profit.
It is also shown that when social welfare is the payoff function, the retailer simply matches the DAHP with the expected retail cost. \section{Pareto Front of Tradeoffs} \label{sec:tradeoff} In this section, we solve the Stackelberg game with a weighted social welfare as the retailer's payoff function. Our goal is to characterize the Pareto front of the CS vs. RP tradeoffs. In particular, we consider the following optimization for the retailer \begin{equation} \label{eq:optII} \mbox{max}~ \{\overline{\textsf{\text{rp}}}(\pi)+ \eta \overline{\textsf{\text{cs}}}(\pi)\}, \end{equation} where $\eta$ is a parameter that allows the retailer to weigh its profit against consumer satisfaction. In practice, $\eta$ can be set according to the operating cost, long term business plan, social impact, etc. The two extremes are the social welfare optimization when $\eta=1$ and profit maximization when $\eta=0$. For a profit regulated monopoly, the retailer sets $\eta$ to maintain the allowed level of profit. In this case, the retailer may want to maximize consumer surplus subject to obtaining its regulated profit in order to maintain or attract additional customers. For the defined Stackelberg game, the action of the retailer is to determine the day-ahead hourly price to maximize the pay-off function. Given the expected cost $\bar{\lambda}$ of procuring electricity from the wholesale market, and the preference parameter $\eta$, we can solve Eq.~(\ref{eq:optII}) by substituting (\ref{eq:avg_cs}-\ref{eq:avg_rp}) in (\ref{eq:optII}), yielding \begin{eqnarray} \overline{\textsf{\text{rp}}}(\pi)+ \eta \overline{\textsf{\text{cs}}}(\pi) &=& (\frac{\eta}{2}-1)\pi^{{\mbox{\tiny T}}}G\pi \nonumber\\ & & +\pi^{{\mbox{\tiny T}}}((1-\eta)\bar{b}+G\bar{\lambda}) + \eta c,\nonumber \end{eqnarray} from which we obtain the optimal price as \begin{equation} \label{eq:opt_pi} \pi^{*}(\bar{\lambda}, \eta) = \frac{1}{2-\eta}\bar{\lambda} + \frac{1-\eta}{2-\eta}G^{-1}\bar{b}. \end{equation} Note that the optimized DAHP in (\ref{eq:opt_pi}) is made up of two terms. The first depends only on the expected wholesale price of electricity, which represents the cost to the retailer. As an economic signal to consumers, this term corresponds to the behavior that the retail price follows the average wholesale price of electricity. The second term depends only on the consumer preference and randomness associated with consumer environments. With (\ref{eq:opt_pi}), the Pareto front of CS-RP tradeoffs is characterized by \begin{equation} \label{eq:P} \mathscr{P}_{\bar{\lambda}}=\Bigg\{ \Big(\overline{\textsf{\text{cs}}}(\pi^{*}(\bar{\lambda},\eta)),\overline{\textsf{\text{rp}}}(\pi^{*}(\bar{\lambda},\eta))\Big): \eta \in [0,1]\Bigg\}, \end{equation} where each element of $\mathscr{P}_{\bar{\lambda}}$ can be computed in closed form. The shape of the trade-off region is characterized by the following theorem. \begin{theorem} \label{thm:property} The Pareto front of $(\overline{\textsf{\text{cs}}},\overline{\textsf{\text{rp}}})$ is concave and decreasing. The area above the Pareto front is infeasible for the retailer to achieve under DAHP whereas the closed area on and below the Pareto front can be achieved by DAHP. \end{theorem} {\em Proof:} See the Appendix. $\Box$ We can now revisit Fig.~\ref{fig:trade-off1} in light of the analysis developed in this section.
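Concretely, each point of the Pareto front $\mathscr{P}_{\bar{\lambda}}$ can be generated from the closed-form price (\ref{eq:opt_pi}) together with (\ref{eq:avg_cs}) and (\ref{eq:avg_rp}). The short Python sketch below traces the front by sweeping $\eta$; the matrix $G$, the offset $\bar b$, the cost vector $\bar\lambda$ and the constant $c$ are illustrative placeholders rather than values from the case study in Section~\ref{sec:sim}.
\begin{verbatim}
import numpy as np

H = 24
G = 2.0 * np.eye(H) + 0.1                 # assumed positive-definite sensitivity
b_bar = np.full(H, 60.0)                  # assumed mean demand offset
lam_bar = 3.0 + 2.0 * np.sin(np.pi * (np.arange(H) - 6) / 12)  # expected cost
c = 0.0                                   # additive constant in consumer surplus

def cs(pi):   # expected consumer surplus: pi'G pi/2 - pi'b_bar + c
    return 0.5 * pi @ G @ pi - pi @ b_bar + c

def rp(pi):   # expected retail profit: (pi - lam_bar)'(-G pi + b_bar)
    return (pi - lam_bar) @ (-G @ pi + b_bar)

Ginv_b = np.linalg.solve(G, b_bar)
for eta in np.linspace(0.0, 1.0, 5):
    pi_star = lam_bar / (2.0 - eta) + (1.0 - eta) / (2.0 - eta) * Ginv_b
    print(f"eta={eta:.2f}  cs={cs(pi_star):10.1f}  rp={rp(pi_star):10.1f}")
\end{verbatim}
As expected, the printout shows the retail profit decreasing to zero and the consumer surplus increasing as $\eta$ moves from 0 to 1, tracing the concave front of Theorem~\ref{thm:property}.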
As summarized in Sec~\ref{sec:intro}.A, the achievable tradeoff region is convex in the CS-RP plane. The Pareto front $\mathscr{P}_{\bar{\lambda}}$ is a collection of CS-RP tradeoffs achieved by the optimal DAHP. The social welfare maximizing pricing $\pi^{\text{sw}}$ results in zero retail profit, and the profit regulated monopoly price $\pi^{\text{r}}$ is located at the point on the Pareto front where the retail profit equals the regulated profit $\Delta$. Prices such as constant prices, ToU, and markups on the day-ahead wholesale price are typically inside the Pareto front $\mathscr{P}_{\bar{\lambda}}$, thus suboptimal in general. See validations from numerical simulations in Sec~\ref{sec:sim}. An alternative formulation is based on a practical situation where the retailer optimizes its profit under the constraint that the consumer surplus exceeds a certain level. In particular, the problem is formulated as \begin{equation} \label{eq:optI} \max_\pi \overline{\textsf{\text{rp}}}(\pi)~~\mbox{subject to}~~\overline{\textsf{\text{cs}}}(\pi) \ge \tau. \end{equation} The following theorem shows the equivalence between (\ref{eq:optI}) and (\ref{eq:optII}) in obtaining the Pareto front in (\ref{eq:P}). \begin{theorem} \label{thm:equivalence} For any specific $\eta$, if the solution in (\ref{eq:optII}) is $\pi^*$, then $\pi^*$ is also a solution to (\ref{eq:optI}) with $\tau = \overline{\textsf{\text{cs}}}(\pi^*)$. Varying $\tau$ in optimization (\ref{eq:optI}) and varying $\eta$ in optimization (\ref{eq:optII}) results in the same Pareto front $\mathscr{P}_{\bar{\lambda}}$ in (\ref{eq:P}). \end{theorem} {\em Proof:} See the Appendix. $\Box$ \section{Effects of renewable integration} \label{sec:renewable} We consider in this section the role of renewable energy at the retailer side. As a large load aggregator, the retailer may have the financial ability and incentive to have its own or contracted solar/wind farms. We restrict ourselves to the scenario in which the retailer-owned renewables are used to compensate the real time loads; the extra renewable power is spilled. Denote the vector of hourly marginal cost of renewable generation as $\nu$. It is reasonable to assume that $\lambda - \nu > 0$ holds everywhere in practice. Let $q = ( q_1,...,q_{24})$ be the maximum renewable power accessible to the retailer in each hour. Accordingly, the retailer's profit with renewable integration is changed to \begin{equation} \label{eq:opt_w} \begin{array}{rcl} \textsf{\text{rp}}_{\mbox{\tiny{Renew}}}(\pi) & = & \pi^{{\mbox{\tiny T}}}d(\pi) - \min_{0 \le \tilde{q} \le q } \{ (\lambda- \nu)^{{\mbox{\tiny T}}}(d(\pi) - \tilde{q})^+ \\ && + \nu^{{\mbox{\tiny T}}} \tilde{q}\},\\ & = & \pi^{{\mbox{\tiny T}}}d(\pi) - \nu^{{\mbox{\tiny T}}}d(\pi)- (\lambda - \nu )^{{\mbox{\tiny T}}}(d(\pi) - q)^+, \end{array} \end{equation} where $\tilde{q}$ is the actual renewable power the retailer uses, and the function $(x)^+$ is the positive part of $x$, defined as $\mbox{max}\{x,0\}$. Notice that the expected retail profit is also a concave function of the DAHP vector, $\pi$. Therefore, if the retailer's objective is profit maximization, the optimal price can be easily solved by a convex program. Following similar discussions in Section \ref{sec:tradeoff}, we focus on the problem in which the retailer's payoff function is the weighted social welfare. Notice here that the consumer surplus does not change while the retail profit $\textsf{\text{rp}}(\pi)$ is replaced by $\textsf{\text{rp}}_{\mbox{\tiny{Renew}}}(\pi)$.
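Because of the $(d(\pi)-q)^+$ term, the expected profit with renewables generally has no simple closed form, but it is easy to estimate by simulation. The following Python sketch is a minimal illustration (it assumes, as in the uniform-capacity setting analyzed below, that each $q_i$ is uniform on $[0,K]$, and it replaces the random demand and wholesale cost by their means); the numbers are placeholders, not the values used in Section~\ref{sec:sim}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
H, K, M = 24, 20.0, 20000                      # hours, renewable capacity, draws
lam = 3.0 + 2.0 * np.sin(np.pi * (np.arange(H) - 6) / 12)   # mean marginal cost
nu = np.zeros(H)                               # marginal cost of renewables
pi = 1.1 * lam                                 # a candidate DAHP
d = np.full(H, 40.0)                           # mean hourly demand at this price

q = rng.uniform(0.0, K, size=(M, H))           # renewable availability draws
shortfall = np.maximum(d - q, 0.0)             # (d - q)^+ for each draw
rp_renew = pi @ d - nu @ d - np.mean(shortfall @ (lam - nu))
rp_plain = (pi - lam) @ d                      # profit without renewables
print(round(float(rp_plain), 1), round(float(rp_renew), 1))
\end{verbatim}
With $K$ well below the hourly demand, the estimate simply shifts the profit upward by roughly $K/2$ times the avoided marginal cost in each hour, which is the regime in which the entire renewable benefit accrues to the retailer.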
Solving the following optimization problem will give the Pareto front for the RP vs. CS trade-offs in the presence of renewable integration. \begin{equation} \label{eq:opt_ew} \mbox{max}~ \{\overline{\textsf{\text{rp}}}_{\mbox{\tiny{Renew}}}(\pi)+ \eta \overline{\textsf{\text{cs}}}(\pi)\}. \end{equation} Intuitively, the achievable trade-off region between CS and RP is enlarged by renewables toward the upper right of the CS-RP plane. The following theorem verifies this intuition and further shows how the benefit of renewable integration is distributed between the retailer and its consumers. \begin{theorem} Assume that for each hour $i$, the renewable power is uniformly distributed with maximum capacity $K$, ${\it i.e.,\ \/}$ $q_i \sim U[0, K]$. For each preference parameter $\eta$, let the optimal price in (\ref{eq:opt_ew}) be $\pi_{\mbox{\tiny{Renew}}}^{\eta}$ and the corresponding expected demand $\bar{d}(\eta)$. Define $\Delta \overline{\textsf{\text{rp}}}(\eta)$ and $\Delta \overline{\textsf{\text{cs}}}(\eta)$ as the increases of retail profit and consumer surplus due to renewable integration, respectively. We then have the following statements: \begin{enumerate} \item Renewable integration always results in a positive retail profit, ${\it i.e.,\ \/}$ $\overline{\textsf{\text{rp}}}_{\mbox{\tiny{Renew}}}(\pi_{\mbox{\tiny{Renew}}}^{\eta}) > 0$ for all $\eta$. \item When $K \le \min_{i \in \{1,...,24\}} \bar{d}_i(\eta)$, $\Delta \overline{\textsf{\text{cs}}}(\eta) = 0$. Otherwise, $\Delta \overline{\textsf{\text{cs}}}(\eta) > 0$. \item As $K \rightarrow \infty$, the fraction of the renewable integration benefit that goes to consumer surplus, $\frac{\Delta \overline{\textsf{\text{cs}}}(\eta)}{\Delta \overline{\textsf{\text{cs}}}(\eta)+ \Delta \overline{\textsf{\text{rp}}}(\eta)}$, converges to $\frac{1}{3-2\eta}$. \end{enumerate} \label{thm:wind} \end{theorem} {\em Proof:} See the Appendix. $\Box$ Theorem~\ref{thm:wind} shows that the benefit of renewables all goes to the retailer when the capacity is small. The intuition is that the retailer would naturally use the low cost energy to fulfill the need of demand and pocket the benefit. As the capacity of renewable integration increases, the additional renewable energy will be spilled unless the consumption increases. The only way to increase the retail profit is to increase consumption by lowering the price of electricity. As a result, all consumers benefit from the price reduction. As $K \rightarrow \infty$, the fraction of the renewable integration benefit that goes to CS converges to a particular limit, $\frac{1}{3-2\eta}$, depending only on the weighting factor $\eta$. The above theorem also shows that when the retailer cares only about its own profit, the fraction of the renewable integration benefit that goes to CS converges to $\frac{1}{3}$. On the other hand, if the retailer's objective is social welfare maximization, as the capacity of renewable power goes to infinity, the fraction of the renewable integration benefit that goes to consumers converges to 1 while the retail profit converges to zero. \section{Effects of storage at the consumer side} \label{sec:storage} In this section, we consider the role of storage at the consumer side. In particular, we assume the net-metering option where a consumer can sell back excess energy at the same purchase price. In the presence of storage, a consumer can arbitrage over the hourly varying DAHP. For hour $i$, denote the energy level in the battery as $B_i$ and the energy charged to the battery as $r_i$ (when $r_i \le 0$, it means discharging from the battery).
Let $r_i = r_i^+ - r_i^-$ where $r_i^+ \ge 0$ and $r_i^- \ge 0$ are the positive and negative parts of $r_i$, respectively. Then the dynamics of the battery can be expressed as \begin{equation} B_{i+1} = \kappa(B_{i} + \tau r_i^+ - r_i^-/\rho), \end{equation} where $\kappa \in (0,1)$ is the storage efficiency, $\tau \in (0,1)$ the charging efficiency and $\rho \in (0,1)$ the discharging efficiency. These efficiency factors model the potential energy loss during the process of storage, charging and discharging. After incorporating the storage devices, the optimal demand response problem at the consumer side is changed to \begin{equation} \begin{array}{r l} \max_{p,r,B} & \mathbb{E}\left\{\sum_{i=1}^{24} [- \mu (x_i - t_{i})^2] - \pi^{{\mbox{\tiny T}}}(p + r) \right\} \\ \mbox{s.t.} & x_{i} = x_{i-1} + \alpha (a_{i} - x_{i-1}) - \beta p_i + w_i, \\ & y_i=(x_i,a_i)+ v_i, \\ & B_{i+1} = \kappa(B_{i} + \tau r_i^+ - r_i^-/\rho), \\ & B_{24} = B_{0}, 0 \le B_{i} \le C, \\ & 0 \le r_i^+ \le r^{{\mbox{\tiny u}}}, 0 \le r_i^- \le r^{{\mbox{\tiny d}}}, \end{array} \end{equation} where $B_0$ is the initial energy level in the storage, $C$ the capacity of the battery, $r^{{\mbox{\tiny u}}}$ the charging limit, and $r^{{\mbox{\tiny d}}}$ the discharging limit. Under the net-metering assumption, the optimization problem can be divided into two independent sub problems, where the first one is the same as the previous optimal stochastic HVAC control (\ref{eq:opt_d}) and the second one is purely energy arbitrage \cite{Xu&Tong:14PESGM}. This means that adding storage on the demand side does not change the original linear relationship between the actual HVAC consumption and the retail price; the benefit of storage goes to the consumers only in the form of arbitrage options. Therefore, given the day-ahead price $\pi$, the optimal charging vector $r^*(\pi)$ can be solved via the following deterministic linear program: \begin{equation} \begin{array}{r l} \max_{r, B} & -\pi^{{\mbox{\tiny T}}}r \\ \mbox{subject to} & B_{i+1} = \kappa(B_{i} + \tau r_i^+ - r_i^-/\rho), \\ & B_{24} = B_{0}, 0 \le B_{i} \le C, \\ & 0 \le r_i^+ \le r^{{\mbox{\tiny u}}}, 0 \le r_i^- \le r^{{\mbox{\tiny d}}}. \end{array} \end{equation} Using the original form of RP and CS without storage, $\textsf{\text{rp}}(\pi)$ and $\textsf{\text{cs}}(\pi)$, the retailer's payoff function in (\ref{eq:optII}) changes to \begin{equation} \label{eq:opt_b} \mbox{max}~~\{\overline{\textsf{\text{rp}}}(\pi)+(\pi - \lambda)^{{\mbox{\tiny T}}}r(\pi) + \eta ( \overline{\textsf{\text{cs}}}(\pi)- \pi^{{\mbox{\tiny T}}} r(\pi))\}. \end{equation} Solving (\ref{eq:opt_b}) gives the induced tradeoff curve between CS and RP under different weighting factors $\eta$. \section{Numerical Simulations} \label{sec:sim} In this section, we present simulation results that help to gain insights into the effects of demand response with the optimized DAHP. The parameters used in the simulations were extracted from actual temperature records in Hartford, CT, from July 1st, 2012 to July 30th, 2012. For the same period, we used the day-ahead wholesale prices for the same location from ISO New England. The parameters for the HVAC thermal dynamic model (\ref{eq:hvacmodel}) were set at $\alpha = 0.5$, $\beta = 0.1$, $\mu=0.5$. The desired indoor temperature was set to be $18^{\circ}C$ for all hours. We assumed zero marginal renewable power cost.
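As an aside before the comparisons, the deterministic arbitrage subproblem of Section~\ref{sec:storage} is a small linear program and can be solved directly. The Python sketch below is only an illustration of that formulation: the storage parameters and the price profile are hypothetical, and the battery level is kept as an explicit decision variable rather than being eliminated.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

H = 24
pi = 3.0 + 2.0 * np.sin(np.pi * (np.arange(H) - 6) / 12)    # assumed DAHP
kappa, tau, rho = 0.98, 0.95, 0.95        # storage / charging / discharging eff.
C, B0, r_up, r_dn = 10.0, 5.0, 3.0, 3.0   # capacity, initial level, rate limits

# Decision vector x = [r_plus (H), r_minus (H), B (H)]; minimize pi'(r+ - r-),
# i.e. maximize the arbitrage revenue -pi' r.
c = np.concatenate([pi, -pi, np.zeros(H)])

A_eq = np.zeros((H + 1, 3 * H)); b_eq = np.zeros(H + 1)
for i in range(H):          # B_{i+1} = kappa (B_i + tau r+_i - r-_i / rho)
    A_eq[i, 2 * H + i] = 1.0
    A_eq[i, i] = -kappa * tau
    A_eq[i, H + i] = kappa / rho
    if i == 0:
        b_eq[i] = kappa * B0
    else:
        A_eq[i, 2 * H + i - 1] = -kappa
A_eq[H, 3 * H - 1] = 1.0; b_eq[H] = B0    # end-of-day level returns to B0

bounds = [(0, r_up)] * H + [(0, r_dn)] * H + [(0, C)] * H
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
r = res.x[:H] - res.x[H:2 * H]            # optimal net charging schedule
print(res.status, round(float(-pi @ r), 2))
\end{verbatim}
Because charging and discharging both lose energy, the schedule only moves energy from low-price hours to high-price hours when the price spread exceeds the round-trip losses.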
\subsection{Benchmark comparisons} \label{ssec:benchmark} We first present direct comparisons between the optimized DAHP and some simple benchmarks. Specifically, we considered the following well-known schemes: \begin{itemize} \item Constant pricing (CP): in this case, the price remained constant for the whole day, ${\it i.e.,\ \/}$ $\pi_1=\pi_2=...=\pi_{24}=x$. Each value of $x$ had a corresponding CS and RP pair. By varying $x$, we traced the performance of constant pricing on the CS-RP plane. \item Time of Usage (ToU): in this case, a single day was divided into two parts: peak hours and normal hours. For a normal hour $i$, $\pi_i = \pi^{\mbox{\tiny norm}}$, and for a peak hour $j$, $\pi_j = \pi^{\mbox{\tiny peak}}$, where $\pi^{\mbox{\tiny peak}} > \pi^{\mbox{\tiny norm}}$. In this section, we fixed $\pi^{\mbox{\tiny peak}} = 1.2 \pi^{\mbox{\tiny norm}}$ and set the peak hours from 9am to 5pm. \item Proportional mark-up pricing (PMP): in this case, the retail pricing was indexed by the day-ahead wholesale price. Specifically, the ratio of the DAHP at each hour to the expected day-ahead wholesale price at the same hour was constant, ${\it i.e.,\ \/}$ $\frac{\pi_1}{\mathbb{E}\lambda_1}=\frac{\pi_2}{\mathbb{E}\lambda_2}=...=\frac{\pi_{24}}{\mathbb{E}\lambda_{24}}=\gamma$. By varying $\gamma$, we traced the performance of this pricing on the CS-RP plane. \end{itemize} Qualitatively, we expected that the CS-RP performance of these schemes would fall inside the region defined by the Pareto front. \begin{center} \begin{figure} \caption{Comparison of three pricing schemes} \label{fig:cs_rp_tradeoff_nowind} \end{figure} \end{center} The trade-off curves under the four schemes were plotted in Fig.~\ref{fig:cs_rp_tradeoff_nowind}. The theoretical statements established in the paper were evident: the trade-off curves between CS and RP were downward sloping and concave. The social welfare optimal operating point ($\eta=1$) for the optimal DAHP resulted in zero retail profit. For the other pricing schemes, the points with zero profit were quite close to the social welfare optimal operating point under DAHP. In particular, since the social welfare maximizing price was equal to the proportional price with $\gamma=1$, the social welfare optimal operating point was also on the trade-off curve under PMP. As expected, the traces of CP, PMP and ToU all fell below that of the optimal DAHP, which defines the Pareto front. \begin{table} \begin{center} \begin{tabular}{|r|c|c|c|c|} \hline Regulated RP & DAHP & PMP & ToU & CP \\ \hline 85 & -152.0 & -154.3 & -170.2 & -183.4 \\ \hline 0 & -60.3 & -60.3 & -60.9 & -61.1 \\ \hline \end{tabular} \label{tab:compare} \caption{Consumer surplus under different regulated profits with four pricing schemes} \end{center} \end{table} Table I illustrates quantitatively the gain of the optimized DAHP over typical benchmarks. The retail profits were fixed and the associated consumer surplus values were read from the CS-RP trade-off curves for all the pricing schemes. The performance of PMP was close to that of DAHP since PMP also took advantage of the varying environment during a day, whereas ToU and CP performed much worse due to their lack of flexibility; DAHP achieved a $10.7\%$ gain over ToU and a $17.1\%$ gain over CP when the regulated RP was fixed at 85. We also compared the payoffs to consumers with and without optimal demand response. Here we assumed a baseline control policy aimed at maintaining the indoor temperature below the desired temperature plus some tolerance.
Table~II shows the results of optimal demand response and the scenarios with tolerances set to 0 and 2 degrees, under the social welfare maximizing price. From the results we can see that as the tolerance increased, the payment to the retailer decreased but the discomfort level grew quickly. The optimal demand response performed best as measured by the consumer surplus defined in Eq. (\ref{eq:cs}). \begin{table} \begin{center} \begin{tabular}{|r|c|c|c|} \hline & DR & Tolerance = 0 & Tolerance = $2^{\circ}C$ \\ \hline Payment & 37.7 & 40.8 & 28.0 \\ \hline Discomfort Level & 1.6 & 0 & 33.2 \\ \hline Consumer Surplus & -39.3 & -40.8 & -61.2 \\ \hline \end{tabular} \label{tab:optimalresponse} \caption{Comparison of consumer payoff with and without optimal demand response} \end{center} \end{table} \subsection{Characteristics of DAHP} To gain insights into how the optimal DAHP balances the tradeoff between RP and CS, we plotted the DAHP at each hour for different values of $\eta$ in Fig.~\ref{fig:prices}. The two extreme points were $\eta=0$ for profit maximization and $\eta=1$ for social welfare optimization. In fact, the DAHP curve with $\eta=1$ was exactly the expected retail cost. We can see that the RP-maximizing DAHP showed a temporal pattern that matched the demand-supply dynamics. The prices were higher at peak hours. What was not shown in this figure was that the consumers used less energy in response to higher prices and consumer surplus decreased. The CS-RP pair moved northwest along the Pareto front. In contrast, the social welfare optimal price showed more consistent pricing across all hours. \begin{center} \begin{figure} \caption{Optimal DAHP at each hour for different values of $\eta$} \label{fig:prices} \end{figure} \end{center} \subsection{Renewable integration} \label{ssec:sim_w} Assuming that the renewable power in each hour was uniformly distributed over $[0,K]$, we solved problem (\ref{eq:opt_ew}) and plotted the trade-off curves between CS and RP in Fig.~\ref{fig:cs_rp_tradeoff_wind}. Not surprisingly, renewable integration enlarged the trade-off region and all parties benefited from the low-cost energy. In particular, the social welfare optimal pricing became economically viable, ${\it i.e.,\ \/}$ the social welfare maximizing prices (the rightmost points on the trade-off curves) resulted in positive retail profit. Furthermore, when the renewable capacity was small ($K=20$), the trade-off curve moved upward, indicating that almost all the benefit from renewable integration went to the retailer. When the capacity was larger ($K=50$), the trade-off curve moved toward the upper right, which showed that part of the renewable integration benefit went to the consumers. \begin{center} \begin{figure} \caption{Trade-off curve with renewable integration} \label{fig:cs_rp_tradeoff_wind} \end{figure} \end{center} Fig.~\ref{fig:wind_benefit_dist} provided more insights into how the benefits of renewable integration were distributed between the retailer and the consumers. As the level of integration ($K$) increased, so did the fraction of the renewable integration benefit to CS, $\frac{\Delta \overline{\textsf{\text{cs}}}(\eta)}{\Delta \overline{\textsf{\text{cs}}}(\eta)+\Delta \overline{\textsf{\text{rp}}}(\eta)}$, and it converged to $\frac{1}{3-2\eta}$ as $K \rightarrow \infty$. More interestingly, as $\eta$ increased and more emphasis was placed on consumer surplus, substantial gains were achieved by the consumers.
\begin{center} \begin{figure} \caption{Fraction of renewable benefit to consumer} \label{fig:wind_benefit_dist} \end{figure} \end{center} \section{Conclusion} In this paper, we have studied a day-ahead hourly pricing (DAHP) mechanism for distributed demand response in uncertain and dynamic environments. Such a pricing scheme has the advantage of reducing consumer anxiety about pricing uncertainties and allowing the retail utility to optimize its retail pricing adaptively. The main contribution is a full characterization of the tradeoff between consumer surplus and retail profit for a class of demands. This result allows the retailer to optimize its pricing strategy, taking into consideration its access to renewable energy. A number of important issues not treated in this paper have been or are being addressed. For example, the system parameters in the affine mapping of the aggregated load are assumed to be known here. In practice, obtaining these parameters requires machine learning techniques. See \cite{Jia&Tong&Zhao:13Allerton}. The model considered here does not take into account the closed-loop behavior in the sense that demand response in the distribution system has an impact on the wholesale market. This is a challenging topic that deserves further study. The developed theory can also be used to study the effects of demand-side storage and renewable integration. See \cite{Jia&Tong:15PESGM}. \section*{APPENDIX} \subsection*{Proof of Theorem~\ref{thm:opt_demand}} For simplicity, we consider the case where the control period length is 1 hour. For the general case where the control period length differs from 1 hour, a similar proof follows with some transformations. See \cite{Jia&Tong:13CDC} for details. Through backward induction, the optimal control for consumer $j$ can be obtained as \begin{equation*} \begin{tabular}{r c l} $p_i^{*(j)}(\pi)$ &$=$&$ \frac{1}{\beta}\left(\hat{x}_{i-1|i-1}^{(j)}+\alpha^{(j)}(\hat{a}_{i|i-1}-\hat{x}^{(j)}_{i-1|i-1})-x_i^{*(j)} \right) $,\\ $x_i^{*(j)}$ & ${\stackrel{\Delta}{=}}$ & $\frac{\pi_i-(1-\alpha^{(j)})\pi_{i+1}}{2\mu^{(j)} \beta^{(j)}} + t_i^{(j)}$, \end{tabular} \label{eq:solution} \end{equation*} where $\hat{x}_{i-1|i-1}^{(j)}$ and $\hat{a}_{i|i-1}$ are the minimum mean squared error estimates of the indoor and outdoor temperatures, respectively, given the observations up to hour $i-1$. Here $\pi_{25}$ is assumed to be 0, and $x_i^{*(j)}$ is an ancillary value. Expanding the solution above, the total demand of consumer $j$ is \[ p^{*(j)}(\pi) = -G^{(j)}\pi + b^{(j)}, \] where $b^{(j)}$ is Gaussian, independent of $\pi$, and \[ G^{(j)}_{ik} = \left\{ \begin{array}{l l} [1+(1-\alpha^{(j)})^2]/[2\mu^{(j)} (\beta^{(j)})^2], & \quad \text{if $i=k\neq1$};\\ 1 / [2\mu^{(j)} (\beta^{(j)})^2], & \quad \text{if $i=k=1$};\\ (-1+\alpha^{(j)})/[2\mu^{(j)} (\beta^{(j)})^2], & \quad \text{if $|i-k|=1$};\\ 0, & \quad \text{o.w.}\\ \end{array} \right. \] Notice that for $0 < \alpha^{(j)} < 1$, \[ 1 / [2\mu^{(j)} (\beta^{(j)})^2] > |(-1+\alpha^{(j)})/[2\mu^{(j)} (\beta^{(j)})^2]|, \] \[ [1+(1-\alpha^{(j)})^2]/[2\mu^{(j)} (\beta^{(j)})^2] > 2|(-1+\alpha^{(j)})/[2\mu^{(j)} (\beta^{(j)})^2]|. \] Therefore, $G^{(j)}$ is deterministic and diagonally dominant with positive diagonal elements. Hence, $G^{(j)}$ is positive definite. On the other hand, the optimal aggregated demand is \[ d(\pi)=\sum_j p^{*(j)}(\pi)=-G\pi+b, \] where $b =\sum_j b^{(j)}$, $G = \sum_j G^{(j)}$. $G$ is positive definite and deterministic, depending only on the parameters.
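As a quick numerical sanity check of the diagonal dominance argument above (not part of the proof), the tridiagonal matrix $G^{(j)}$ can be formed and its smallest eigenvalue inspected. A minimal sketch in Python with illustrative parameter values (the helper name \texttt{build\_G} is hypothetical) is:
\begin{verbatim}
import numpy as np

def build_G(alpha, beta, mu, T=24):
    # Tridiagonal matrix G^(j) from the proof above
    # (0-based indexing: row/column 0 corresponds to hour 1 in the paper).
    s = 1.0 / (2.0 * mu * beta ** 2)
    G = np.zeros((T, T))
    for i in range(T):
        G[i, i] = s if i == 0 else s * (1.0 + (1.0 - alpha) ** 2)
        if i + 1 < T:
            G[i, i + 1] = G[i + 1, i] = s * (alpha - 1.0)
    return G

G = build_G(alpha=0.5, beta=0.1, mu=0.5)
print(np.linalg.eigvalsh(G).min() > 0)   # True: G^(j) is positive definite
\end{verbatim}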
\subsection*{Proof of Theorem~\ref{thm:SW}} Setting the derivative of $\overline{\textsf{\text{sw}}}(\pi)$ to zero gives the optimal price and the resulting retail profit as $\pi^{\text{sw}} = \bar{\lambda}$ and $ \overline{\textsf{\text{rp}}}(\pi^{\text{sw}}) = 0$. For any $\pi'$ such that $\overline{\textsf{\text{rp}}}(\pi') \ge 0$, we have \[ \overline{\textsf{\text{cs}}}(\pi') = \overline{\textsf{\text{sw}}}(\pi') - \overline{\textsf{\text{rp}}}(\pi') \le \overline{\textsf{\text{sw}}}(\pi^{\text{sw}}) - 0 = \overline{\textsf{\text{cs}}}(\pi^{\text{sw}}). \] \subsection*{Proof of Theorem~\ref{thm:property}} For $\eta \in [0,1]$, the solution to (\ref{eq:optII}) is given by \[ \pi^* = \frac{1}{2-\eta}G^{-1}[(1-\eta)\bar{b} + G\bar{\lambda}]. \] Define the resulting retail profit and consumer surplus as $\textsf{\text{rp}}^*(\eta) {\stackrel{\Delta}{=}} \overline{\textsf{\text{rp}}}(\pi^*(\eta))$, $\textsf{\text{cs}}^*(\eta) {\stackrel{\Delta}{=}} \overline{\textsf{\text{cs}}}(\pi^*(\eta))$. Then, \[ \frac{\partial \textsf{\text{rp}}^*(\eta)}{\partial \textsf{\text{cs}}^*(\eta)} = \frac{\frac{\partial \textsf{\text{rp}}^*(\eta)}{\partial \eta}}{\frac{\partial \textsf{\text{cs}}^*(\eta)}{\partial \eta}} = -\eta. \] Because $\textsf{\text{cs}}^*(\eta)$ is an increasing function of $\eta$, $\frac{\partial \textsf{\text{rp}}^*(\eta)}{\partial \textsf{\text{cs}}^*(\eta)}$ decreases as $\textsf{\text{cs}}^*(\eta)$ increases. The curve is therefore concave. According to Theorem~{\ref{thm:equivalence}}, $\textsf{\text{rp}}^*(\eta)$ is the optimal value of (\ref{eq:optI}) when consumer surplus is at least $\textsf{\text{cs}}^*(\eta)$. Therefore, no CS-RP pair can be above the trade-off curve. \subsection*{Proof of Theorem~\ref{thm:equivalence}} With a particular $\eta$, assume $\pi^*$ is the solution to (\ref{eq:optII}). Let $\tau = \overline{\textsf{\text{cs}}}(\pi^*)$ in (\ref{eq:optI}). Then $\pi^*$ is in the feasible set of (\ref{eq:optI}). If there exists $\pi'$ such that $\overline{\textsf{\text{rp}}}(\pi') > \overline{\textsf{\text{rp}}}(\pi^*)$ and $\overline{\textsf{\text{cs}}}(\pi') \ge \tau$, then \[ \overline{\textsf{\text{rp}}}(\pi')+ \eta \overline{\textsf{\text{cs}}}(\pi')> \overline{\textsf{\text{rp}}}(\pi^*) + \eta \tau = \overline{\textsf{\text{rp}}}(\pi^*)+ \eta \overline{\textsf{\text{cs}}}(\pi^*). \] Hence, $\pi^*$ would not be the solution to (\ref{eq:optII}), since $\pi'$ achieves a better objective value, which contradicts the assumption. Therefore, $\pi^*$ is also a solution to (\ref{eq:optI}). \subsection*{Proof of Theorem~\ref{thm:wind}} Before renewable integration, for any $\eta$, the first order condition gives that the optimal demand level $d(\eta)$ satisfies \[ b - (2-\eta)d(\eta)= G\lambda. \] After renewable integration, for a particular $\eta$, the first order condition gives that the optimal demand level $d_w(\eta)$ satisfies \[ b - G\nu - (2-\eta)d_w(\eta) = G ((\lambda - \nu) \circ F(d_w(\eta))), \] where $\circ$ denotes the Hadamard product, ${\it i.e.,\ \/}$ the pointwise product of two vectors, and $F$ is the cdf of the renewable power. When $K < \min_i d_i(\eta)$, we can see that $d(\eta)$ satisfies the optimality condition; therefore $d_w(\eta) = d(\eta)$ and $\Delta \overline{\textsf{\text{cs}}}(\eta)=0$. Otherwise, $d_w(\eta) = d(\eta) + G\delta/(2-\eta)$, where $\delta = (\lambda - \nu) \circ (1 - F(d_w(\eta))) \ge 0$.
\[ \begin{array}{rl} \Delta \overline{\textsf{\text{cs}}}(\eta) & = \frac{1}{2}\{(d_w(\eta))^{{\mbox{\tiny T}}} G^{-1} d_w(\eta) - (d(\eta))^{{\mbox{\tiny T}}} G^{-1} d(\eta)\} \\ & = \frac{1}{2}\{ 2 \delta^{{\mbox{\tiny T}}} d(\eta) + \delta^{{\mbox{\tiny T}}} G \delta \} > 0. \\ \end{array} \] For the RP, for all $K$ and $\eta$, \[ \begin{array}{rl} \Delta \overline{\textsf{\text{rp}}}(\eta)& =(1-\eta)\{(d_w(\eta))^{{\mbox{\tiny T}}} G^{-1} d_w(\eta) - (d(\eta))^{{\mbox{\tiny T}}} G^{-1} d(\eta)\} \\ &~~+ \frac{1}{2K}\{(d_w(\eta))^{{\mbox{\tiny T}}} \Lambda d_w(\eta)\}>0, \end{array} \] where $\Lambda = \text{diag}(\bar{\lambda}_1,\ldots,\bar{\lambda}_{24})$. As $K$ goes to infinity, $d_w(\eta)$ remains bounded, and $\frac{\Delta \overline{\textsf{\text{cs}}}(\eta)}{\Delta \overline{\textsf{\text{cs}}}(\eta)+\Delta \overline{\textsf{\text{rp}}}(\eta)}$ goes to $\frac{1}{3-2\eta}$. \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{liyan.eps}}]{Liyan Jia} \small received his B.E. degree from the Department of Automation, Tsinghua University, in 2009. He is currently working toward the Ph.D. degree in the School of Electrical and Computer Engineering, Cornell University. His current research interests are in smart grid, electricity markets, and demand response. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{langtong.eps}}]{Lang Tong} \small \noindent (S'87,M'91,SM'01,F'05) is the Irwin and Joan Jacobs Professor in Engineering of Cornell University and the site director of the Power Systems Engineering Research Center (PSERC). He received the B.E. degree from Tsinghua University in 1985, and the M.S. and Ph.D. degrees in electrical engineering in 1987 and 1991, respectively, from the University of Notre Dame. He was a Postdoctoral Research Affiliate at the Information Systems Laboratory, Stanford University, in 1991. He was the 2001 Cor Wit Visiting Professor at the Delft University of Technology and has held visiting positions at Stanford University and the University of California at Berkeley. Lang Tong's research is in the general area of statistical inference, communications, and complex networks. His current research focuses on inference, optimization, and economic problems in energy and power systems. He received the 1993 Outstanding Young Author Award from the IEEE Circuits and Systems Society, the 2004 Best Paper Award from the IEEE Signal Processing Society, and the 2004 Leonard G. Abraham Prize Paper Award from the IEEE Communications Society. He is also a coauthor of seven student paper awards. He received a Young Investigator Award from the Office of Naval Research. He was a Distinguished Lecturer of the IEEE Signal Processing Society. \end{IEEEbiography} \end{document}
\begin{document} \title{High-dimensional Fused Lasso Regression using Majorization-Minimization and Parallel Processing} \date{} \renewcommand\footnotemark{} \author[1]{Donghyeon Yu\thanks{E-mail addresses : \texttt{[email protected]} (D. Yu), \texttt{[email protected]} (J. Won),\\ \texttt{[email protected]} (T. Lee), \texttt{[email protected]} (J. Lim), \texttt{[email protected]} (S. Yoon).}} \affil[1]{Department of Statistics, Seoul National University} \author[2]{Joong-Ho Won} \affil[2]{School of Industrial Management Engineering, Korea University} \author[3]{Taehoon Lee} \author[1]{Johan Lim} \author[3]{Sungroh Yoon} \affil[3]{Department of Electrical and Computer Engineering, Seoul National University} \maketitle \begin{abstract} \noindent In this paper, we propose a majorization-minimization (MM) algorithm for high-dimensional fused lasso regression (FLR) suitable for parallelization using graphics processing units (GPUs). The MM algorithm is stable and flexible as it can solve the FLR problems with various types of design matrices and penalty structures within a few tens of iterations. We also show that the convergence of the proposed algorithm is guaranteed. We conduct numerical studies to compare our algorithm with other existing algorithms, demonstrating that the proposed MM algorithm is competitive in many settings including the two-dimensional FLR with arbitrary design matrices. The merit of GPU parallelization is also exhibited. \vskip0.5cm \noindent{\bf keywords:} fused lasso regression; majorization-minimization algorithm; parallel computation; graphics processing unit. \end{abstract} \section{Introduction} \label{sec:intro} Regression methods using $\ell_1$-regularization are commonly employed to analyze high-dimensional data, as they tend to yield sparse estimates of regression coefficients. In this paper, we consider the fused lasso regression (FLR), an important special case of $\ell_1$-regularized regression. The FLR minimizes \begin{align} \label{eqn:obj} f(\beta) = \frac{1}{2} \big\| {\bf y } - {\bf X} \beta \big\|_2^2 + \lambda_1 \sum_{j=1}^p | \beta_j | + \lambda_2 \sum_{(j,k) \in E} | \beta_{j} - \beta_{k}|, \end{align} where ${\bf y} \in \mathbb{R}^n$ is the response vector, and ${\bf X} \in \mathbb{R}^{n \times p}$ is the design matrix; $n$ is the sample size, and $p$ is the number of variables; $\lambda_1$ and $\lambda_2$ are non-negative regularization parameters, and $E$ is the unordered index set of the adjacent pairs of variables specified in the model. Thus the FLR not only promotes sparsity among the coefficients, but also encourages adjacent coefficients to have the same values (``fusion''), where adjacency is determined by the application. The above specification of the FLR \eqref{eqn:obj} generalizes the original proposal by \citet{Tibshirani2005}, which considers adjacency on a one-dimensional chain graph, i.e., $E = \big\{(j-1,j)|~ j = 2,\ldots,p \big\}$. We refer to this particular case as the \emph{standard FLR}. Problem \eqref{eqn:obj} also covers a wider class of the index set $E$, as considered by \cite{Chen2012} (see Section \ref{sec:review}). Thus we call the penalty terms associated with $\lambda_2$ the \emph{generalized fusion penalty}. 
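To fix notation, a minimal numerical sketch of evaluating the objective \eqref{eqn:obj} for a given index set $E$ is shown below (Python with \texttt{numpy}; the data and the helper name \texttt{flr\_objective} are purely illustrative and not part of our implementation). The example uses the one-dimensional chain index set just described; examples of other index sets follow.
\begin{verbatim}
import numpy as np

def flr_objective(beta, y, X, lam1, lam2, E):
    # Fused lasso regression objective (1); E is a list of index pairs (j, k).
    resid = y - X @ beta
    fit = 0.5 * float(resid @ resid)
    sparsity = lam1 * np.sum(np.abs(beta))
    fusion = lam2 * sum(abs(beta[j] - beta[k]) for j, k in E)
    return fit + sparsity + fusion

# Standard FLR: one-dimensional chain adjacency for p = 5 coefficients (0-based).
p = 5
E_chain = [(j - 1, j) for j in range(1, p)]
rng = np.random.default_rng(0)
X = rng.standard_normal((10, p))
beta = np.array([1.0, 1.0, 0.0, 0.0, -2.0])
y = X @ beta + 0.1 * rng.standard_normal(10)
print(flr_objective(beta, y, X, lam1=0.1, lam2=0.5, E=E_chain))
\end{verbatim}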
For instance, the \emph{two-dimensional FLR} uses a two-dimensional lattice $E = \big\{ \big( (i,j-1), (i,j) \big) |~ 1\le i \le q, j=2,3,\ldots,q \big\} \cup \big\{ \big( (i-1,j), (i,j) \big) |~ i= 2,3,\ldots,q, 1 \le j \le q \big\}$ for coefficients $\big(\beta_{ij}\big)_{1\le i,j\le q}$; the \emph{clustered lasso} \citep{She2010} and the \emph{pairwise fused lasso} \citep{Petry2011} use all pairs of the variables as the index set. A closely related problem that is not exactly specified by \eqref{eqn:obj} is the \emph{generalized lasso} \citep{TibshiraniRyan2011}. Regardless of the penalty structure, or the choice of $E$, when ${\bf X}={\bf I}$ the FLR is called the \emph{fused lasso signal approximator} (FLSA). The major challenge in the FLR, as compared to the classical lasso regression \citep{Tibshirani1996} that corresponds to setting $\lambda_2=0$ in \eqref{eqn:obj}, is that the generalized fusion penalty terms are non-separable as well as non-smooth. The FLR is basically a quadratic program (QP) and can in principle be solved using general purpose QP solvers \citep{Gill1997,Grant2008}. However, the standard QP formulation introduces a large number of additional variables and constraints, and general purpose QP solvers become inadequate as the dimension $p$ increases. To resolve this difficulty, many special algorithms for solving the FLR problem have been proposed. Some are based on LARS \citep{Efron2004}-flavored solution path methods \citep{Friedman2007, Hoefling2010, TibshiraniRyan2011}, and others solve the problem for a fixed set of regularization parameters $(\lambda_1, \lambda_2)$ \citep{Liu2010,Ye2011,Lin2011,Chen2012}. Although these algorithms are adequate for high-dimensional settings, restrictions may apply: some are only applicable to special design matrices \citep{Friedman2007,Hoefling2010,TibshiraniRyan2011} or to the standard FLR \citep{Liu2010}. In addition, computational issues persist in very high dimensional settings ($p \gg 1000$). In this paper, we apply the majorization-minimization (MM) algorithm \citep{Lange2000} to solve the FLR problem. The MM algorithm iteratively finds and minimizes a surrogate function called the \emph{majorizing function} that is generally convex and differentiable. It is well known that the efficiency of the MM algorithm depends largely on the choice of the majorizing function. Hence we begin the main body of the paper by proposing a majorizing function suitable for the FLR problem. Motivated by the work of \cite{HunterLi2005} on the classical lasso, we first introduce a perturbed version of the objective function (\ref{eqn:obj}) and propose a majorizing function for this perturbed objective. The MM algorithm we propose is based on this perturbed objective function and its majorizing function. The MM algorithm has several advantages over other existing algorithms. First, it offers flexibility in the choice of both design matrix and penalty structure. We show that the proposed MM algorithm converges to the optimal objective (\ref{eqn:obj}) regardless of the rank of the design matrix. Furthermore, it can be applied to general penalty structures while achieving performance comparable to state-of-the-art FLR solvers, some of which require a special penalty structure. Second, as numerically demonstrated in Section 5, our MM algorithm is stable in the number of iterations required to converge, regardless of the choice of design matrix and penalty structure.
Finally, an additional benefit of our MM formulation is that it opens up the possibility of solving the FLR problem in a massively parallel fashion using graphics processing units (GPUs). This paper is organized as follows. In Section 2, we review existing algorithms for the FLR. In Section 3, we propose an MM algorithm for the FLR with the generalized fusion penalty and prove the convergence of the algorithm. We also introduce the preconditioned conjugate gradient (PCG) method to accelerate the proposed MM algorithm. In Section 4, we explain how to parallelize the MM algorithm using GPUs. In Section 5, we conduct numerical studies to compare the proposed MM algorithm with the other existing algorithms. In Section 6, we conclude the paper with a brief summary. \section{Review of the existing algorithms}\label{sec:review} Existing algorithms for solving the FLR problem can be classified into two categories. One is based on solution path methods, whose goal is to find the whole solution path for all values of the regularization parameters. The other is based on first-order methods, which attempt to solve a large scale problem for a fixed set of regularization parameters using first-order approximations. \subsection{Solution path methods}\label{sec:review:path} Solution path methods aim to provide all the solutions of interest at a small computational cost, after first finding all change points while varying the regularization parameters $\lambda_1$ and $\lambda_2$. A change point refers to a value of $\lambda_1$ or $\lambda_2$ at which some coefficients fuse, or coefficients that used to be fused split. \paragraph{Path-wise optimization algorithm} ~\\ \cite{Friedman2007} propose the path-wise optimization algorithm for the ``standard'' FLSA, \begin{equation} \nonumber \min_\beta f(\beta) \equiv \frac{1}{2} \| {\bf y} - \beta \|_2^2 + \lambda_1 \sum_{j=1}^n | \beta_j | + \lambda_2 \sum_{j=2}^n | \beta_{j} - \beta_{j-1} |. \end{equation} In this case, it can be shown that $\widehat{\beta}_j(\lambda_1,\lambda_2)$, the optimal solution for the standard FLSA for regularization parameters $(\lambda_1,\lambda_2)$, can be obtained from $\widehat{\beta}_j(0,\lambda_2)$ by soft-thresholding: \begin{align}\label{eqn:soft} \widehat{\beta}_j(\lambda_1,\lambda_2) = {\rm sign}\big(\widehat{\beta}_j(0,\lambda_2)\big) \cdot \max\big( |\widehat{\beta}_j(0,\lambda_2)| - \lambda_1, 0 \big) ~~{\rm for}~ j=1,2,\ldots,p. \end{align} Hence we can set $\lambda_1 = 0$ without loss of generality and focus on the regularization path by varying only $\lambda_2$. Because of the non-separable nature of the generalized fusion penalty terms, typical coordinate descent (CD) algorithms may fail to reach the optimal solution, despite the convexity of the objective function \citep{Tseng2001}. This is in contrast to the classical lasso, where the penalty terms (associated with $\lambda_1$ in the case of the FLR) are separable. Hence the path-wise optimization algorithm uses a modified version of the CD algorithm so that two coefficients move together if the coordinate-wise move fails to reduce the objective function. An iteration of this modified CD algorithm comprises three cycles: descent, fusion, and smoothing. The descent cycle updates the $i$th coordinate $\widehat{\beta}_i$ of the current solution $\widehat{\beta}$ in the usual coordinate descent fashion: minimize $f(\beta)$ with respect to $\beta_i$, holding all the other coefficients fixed.
The subdifferential of $f$ with respect to $\beta_i$ is \begin{align*} \frac{\partial f(\beta)}{\partial \beta_i} = -(y_i - \beta_i) -\lambda_2 {\rm sgn}(\tilde{\beta}_{i+1} - \beta_i) + \lambda_2 {\rm sgn} (\beta_i - \tilde{\beta}_{i-1}), \end{align*} where $\tilde{\beta}$ is the solution from the previous iteration, and ${\rm sgn}(x)$ is the sign of $x$ if $x \neq 0$ and any number in $[-1, 1]$ if $x=0$. Since this subdifferential is piecewise linear in $\beta_i$ with breaks at $\tilde{\beta}_{i-1}$ and $\tilde{\beta}_{i+1}$, the desired coordinate-wise minimization is simple. As discussed above, the descent cycle may not reduce the objective function $f(\beta)$; hence the fusion cycle minimizes $f(\beta)$ over two adjacent variables $\beta_{i-1}$ and $\beta_i$ under the constraint $\beta_{i-1}=\beta_i$, i.e., the two variables are fused. This cycle in effect reduces the number of variables by one. Still, fusion of two neighboring variables is not enough to reduce the objective function. To overcome this drawback, the smoothing cycle increases $\lambda_2$ by a small amount and updates the solution $\widehat\beta(0,\lambda_2)$ by repeatedly applying the descent and fusion cycles, keeping track of the variable reduction. This strategy guarantees convergence to the exact solution if it starts from $\lambda_2=0$, where $\widehat{\beta}={\bf y}$, because in the standard FLSA a fusion occurs between at most two neighboring variables if the increment of $\lambda_2$ is sufficiently small, and the fused variables do not split for larger values of $\lambda_2$. The resulting algorithm produces a forward stagewise-type path over a fine grid of $\lambda_2$, which can be obtained for all values of $\lambda_1$ due to \eqref{eqn:soft}. This algorithm has also been applied to the two-dimensional FLSA in \cite{Friedman2007}. However, its convergence is not guaranteed since the conditions described at the end of the last paragraph do not hold for this case. For the same reason, its extension to the FLR with general design matrices is limited. \paragraph{Path algorithm for the FLSA} ~\\ \cite{Hoefling2010} proposes a path algorithm for the FLSA with the generalized fusion penalty, \begin{align} \label{eqn:genflsa} \min_\beta f(\beta) \equiv \frac{1}{2} \big\| {\bf y } - \beta \big\|_2^2 + \lambda_1 \sum_{i=1}^n |\beta_i| +\lambda_2 \sum_{(j,k) \in E} |\beta_j - \beta_k|, \end{align} which gives an exact regularization path, in contrast to the approximate one for the standard FLSA by \cite{Friedman2007}. The main idea of the path algorithm arises from the observation that the sets of fused coefficients do not change with $\lambda_2$ except at finitely many points. (Again \eqref{eqn:soft} holds and we can assume $\lambda_1=0$ without loss of generality.) For each interval of $\lambda_2$ determined by these finitely many change points, the objective function $f(\beta)$ in \eqref{eqn:genflsa} can be written as \begin{align}\label{eqn:genflsa2} f^*(\beta) = \displaystyle \frac{1}{2} \sum_{i=1}^{n_F} \Big(\sum_{j\in F_i(\lambda_2)} \big(y_j - \beta_{F_i}(\lambda_2)\big)^2 \Big) \displaystyle +\lambda_2 \sum_{1\le i<j \le n_F} n_{ij}|\beta_{F_i}(\lambda_2) - \beta_{F_j}(\lambda_2)|, \end{align} where $F_1(\lambda_2), F_2(\lambda_2), \ldots, F_{n_F}(\lambda_2)$ are the sets of fused coefficients, $n_F=n_F(\lambda_2)$ is the number of such sets, and $n_{ij} = |\{(k,l)\in E ~|~ k\in F_i(\lambda_2), l\in F_j(\lambda_2) \}|$ is the number of edges connecting the sets $F_i(\lambda_2)$ and $F_j(\lambda_2)$.
The minimizer of \eqref{eqn:genflsa2}, hence of \eqref{eqn:genflsa}, varies linearly with $\lambda_2$ within this interval: \begin{align}\label{eqn:flsaslope} \frac{\partial \widehat{\beta}_{F_i}(\lambda_2)}{\partial \lambda_2} = - \frac{\sum_{j\neq i}n_{ij}\mbox{sgn}\left(\widehat{\beta}_{F_i}(\lambda_2)-\widehat{\beta}_{F_j}(\lambda_2)\right)}{|F_i(\lambda_2)|}, \end{align} i.e., the solution path is piecewise linear. Since at $\lambda_2 = 0$, $\widehat{\beta}_{F_i}(0) = y_i$ and $F_i = \{i\}$ for $i=1,2,\ldots,n$, by keeping track of the change points at which the solution path changes its slope \eqref{eqn:flsaslope} starting from $\lambda_2=0$, we can determine the entire solution path. Regarding the change points, it can be shown that the optimality condition for \eqref{eqn:genflsa} \[ \frac{\partial f}{\partial \beta_k}(\widehat{\beta}) = \widehat\beta_k - y_k + \lambda_2 \sum_{l: (k,l)\in E} \hat{t}_{kl}(\lambda_2) = 0, \] where $\hat{t}_{kl}(\lambda_2) = \mbox{sgn}\big( \widehat{\beta}_k(\lambda_2) - \widehat{\beta}_l(\lambda_2) \big)$, is satisfied by affine functions $\widehat{\beta}_k(\lambda_2)$ and $\hat{\tau}_{kl}(\lambda_2)=\lambda_2 \hat{t}_{kl}(\lambda_2)$ for an interval $[\lambda_2^0, \lambda_2^0+\Delta]$, with $\lambda_2^0$ a change point. In this case, we have \begin{align}\label{eqn:affine} \frac{\partial\widehat{\beta}_{F_i}}{\partial\lambda_2}(\lambda_2^0) + \sum_{l \in F_j(\lambda_2^0), j\neq i,(k,l)\in E} \hat{t}_{kl}(\lambda_2^0) + \sum_{l \in F_i(\lambda_2^0), (k,l)\in E} \frac{\partial\hat{\tau}_{kl}}{\partial\lambda_2}(\lambda_2^0) = 0, \end{align} for $k \in F_i(\lambda_2^0)$, $i=1, 2, \ldots, n_F(\lambda_2^0)$. At the beginning of the interval, $\partial\widehat{\beta}_{F_i}/\partial \lambda_2$ is given by \eqref{eqn:flsaslope}, and $\hat{t}_{kl}=\pm 1$ for $(k,l)\in E$, $l \in F_j(\lambda_2^0)$ with $i \neq j$. The length of the interval $\Delta$, hence the next change point, can be determined by examining where the sets of fused coefficients merge or split. Similar to the standard FLSA, at most two sets can merge, and a set splits into at most two sets for a sufficiently small increase in $\lambda_2$. The first merge after $\lambda_2^0$ may occur when the paths for any two sets of fused coefficients meet: \begin{align*} h(\lambda_2^0) = \min_{i=1,\ldots,n_F(\lambda_2^0)} \min_{j:h_{ij}(\lambda_2^0)>\lambda_2^0} h_{ij}(\lambda_2^0), \quad\mbox{where}\quad h_{ij}(\lambda_2^0) = \frac{\widehat{\beta}_{F_i}(\lambda_2^0) - \widehat{\beta}_{F_j}(\lambda_2^0)}{\frac{\partial \widehat{\beta}_{F_j}(\lambda_2^0)}{\partial \lambda_2^0} - \frac{\partial \widehat{\beta}_{F_i}(\lambda_2^0)}{\partial \lambda_2^0}} + \lambda_2^0, \end{align*} unless a split occurs before this point. 
The first split may occur at the point where the condition \eqref{eqn:affine} is violated, i.e., \begin{align*} v(\lambda_2^0) = \min_{i=1,\ldots,n_F(\lambda_2^0)} \delta_i + \lambda_2^0, \end{align*} where $\delta_i$ can be found by solving the linear program (LP) \begin{align}\label{eqn:flsalp} \begin{array}{ll} \mbox{minimize} & 1/\delta \\ \mbox{subject to} & \displaystyle \sum_{l \in F_i(\lambda_2^0), (k,l)\in E} f_{kl} = p_k, \quad k \in F_i(\lambda_2^0), \\ ~ & -1 - (1/\delta)(\lambda_2^0+\hat{\tau}_{kl}(\lambda_2^0)) \le f_{kl} \le 1 + (1/\delta)(\lambda_2^0-\hat{\tau}_{kl}(\lambda_2^0)), \quad k,l \in F_i(\lambda_2^0), (k,l) \in E, \end{array} \end{align} in $1/\delta$ and $f_{kl}$, where $p_k = -\frac{\partial\widehat{\beta}_{F_i}}{\partial\lambda_2}(\lambda_2^0) - \sum_{l \in F_j(\lambda_2^0), j\neq i,(k,l)\in E} \hat{t}_{kl}(\lambda_2^0)$, for each $i=1,2,\ldots,n_F(\lambda_2^0)$. This LP can be solved by iteratively applying a maximum flow algorithm, e.g., that of \citet{Ford1956}. The maximum flow algorithm solves \eqref{eqn:flsalp} for fixed $\delta$, and the final solution $\hat{f}_{kl}$ to \eqref{eqn:flsalp} gives the value of $\frac{\partial\hat{\tau}_{kl}}{\partial\lambda_2}(\lambda_2^0)$ in \eqref{eqn:affine}. Thus, $\Delta=\min\{h(\lambda_2^0),v(\lambda_2^0)\}-\lambda_2^0$. At a split, \eqref{eqn:flsalp} also determines how the set $F_i$ is partitioned. The computational complexity of this path algorithm depends almost entirely on that of solving the maximum flow problem corresponding to \eqref{eqn:flsalp}. It becomes inefficient as the dimension $p$ increases because a large scale maximum flow problem is difficult to solve. However, when restricted to the standard FLSA, this path algorithm is very efficient, since the solution path of the standard FLSA has only fusions and not splits, and hence does not require solving the maximum flow problem. \paragraph{Path algorithm for the generalized lasso} ~\\ \cite{TibshiraniRyan2011} propose a path algorithm for the generalized lasso problem, which replaces the penalty terms in (\ref{eqn:obj}) with a generalized $\ell_1$-norm penalty $\lambda \| {\bf D} \beta \|_1$: \begin{align} \label{eqn:genlasso} \min_{\beta} f(\beta) \equiv \frac{1}{2} \big\| {\bf y} -{\bf X} \beta \big\|_2^2 + \lambda \big\| {\bf D}\beta \big\|_1, \end{align} where $\lambda$ is a non-negative regularization parameter and ${\bf D}$ is an $m \times p$ matrix corresponding to the dependence structure of the coefficients. The generalized lasso has the FLR with the generalized fusion penalty as a special case. The dual problem of \eqref{eqn:genlasso} is given by \begin{align} \label{eqn:gen_gen} \begin{array}{ll} \mbox{minimize} & \displaystyle \frac{1}{2} \big\|\tilde{\bf y} -\tilde{\bf D}^T{\bf u} \big\|_2^2\\ {\rm subject ~ to~} & \displaystyle \| {\bf u} \|_\infty \le \lambda,~ {\bf D}^T {\bf u} \in {\rm row}({\bf X}) \end{array} \end{align} on ${\bf u} \in {\mathbb R}^m$, where $\|{\bf u}\|_\infty = \max_j |u_j|$, $\tilde{\bf y} = {\bf X} ({\bf X}^T {\bf X})^{\dagger} {\bf X}^T {\bf y}$, $\tilde{\bf D} = {\bf D} ({\bf X}^T {\bf X})^{\dagger} {\bf X}^T$, and ${\rm row}({\bf X})$ is the row space of ${\bf X}$; $A^{\dagger}$ denotes the Moore-Penrose pseudo-inverse of the matrix $A$. This dual problem is difficult to solve due to the constraint ${\bf D}^T {\bf u} \in {\rm row}({\bf X})$, but this row space constraint can be removed when ${\rm rank}({\bf X}) = p$. Thus the path algorithm focuses on the case when the design matrix $\bf X$ has full rank.
The algorithm starts from $\lambda=\infty$ and decreases $\lambda$. Suppose $\lambda_k$ is the current value of the regularization parameter $\lambda$. The current dual solution $\widehat{\bf u}(\lambda_k)$ is given by \begin{align*} \displaystyle \widehat{\bf u}_{\mathcal{B}}(\lambda_k) &= \lambda_k {\bf s},\\ \displaystyle \widehat{\bf u}_{-\mathcal{B}}(\lambda_k) &= \big({\bf D}_{-\mathcal{B}}({\bf D}_{-\mathcal{B}})^T \big)^{\dagger} {\bf D}_{-\mathcal{B}} \big( {\bf y}- \lambda_k ({\bf D}_{\mathcal{B}})^T {\bf s} \big), \end{align*} where $\mathcal{B} = \mathcal{B}(\lambda) = \{~ j ~\big| ~|u_j| = \lambda, j=1,2,\ldots,m\}$ is the active set of the constraint $\| {\bf u} \|_\infty \le \lambda$; $-\mathcal{B}=\mathcal{B}^c$; for an index set $A \subset \{1,2,\ldots,m\}$ and for a vector ${\bf v}=(v_i)_{1 \le i \le m} \in \mathbb{R}^m$, ${\bf v}_A$ denotes the $|A|$-dimensional vector such that ${\bf v}_A=(v_i)_{i\in A}$, and for a matrix ${\bf D} = ({\bf d}_i)_{1 \le i \le m} \in \mathbb{R}^{p \times m}$ with ${\bf d}_i \in \mathbb{R}^p$, ${\bf D}_A$ denotes the $p \times |A|$ dimensional matrix such that ${\bf D}_A=({\bf d}_i)_{i\in A}$; and ${\bf s} = (s_1,\ldots,s_{|\mathcal{B}|})^T$ with $s_k = \mbox{sign}(u_{n_k}(\lambda_k))$, $n_k \in \mathcal{B}$. The primal solution is obtained from the dual solution by the following primal-dual relationship \begin{align}\label{eqn:genlassoprimaldual} \widehat{\beta}(\lambda) = \tilde{\bf y} - \tilde{\bf D}^T \widehat{\bf u}(\lambda), \end{align} hence the solution path is piecewise linear. The intervals in which all $\widehat{\beta}_j(\lambda)$ have constant slopes can be found by keeping track of the active set $\mathcal{B}$. An inactive coordinate $j \in -\mathcal{B}$ at $\lambda=\lambda_k$, possibly joins $\mathcal{B}$ at the hitting time \begin{align*} t_j^{\rm (hit)} = \frac{ \Big[ \big( {\bf D}_{-\mathcal{B}} ({\bf D}_{-\mathcal{B}})^T\big)^{\dagger} {\bf D}_{-\mathcal{B}} {\bf y} \Big]_j }{ \Big[ \big( {\bf D}_{-\mathcal{B}} ({\bf D}_{-\mathcal{B}})^T\big)^{\dagger} {\bf D}_{-\mathcal{B}} ({\bf D}_{\mathcal{B}})^T {\bf s} \Big]_j \pm 1}, \end{align*} for which only one is less than $\lambda_k$, and an active coordinate $j' \in \mathcal{B}$, possibly leaves $\mathcal{B}$ at the leaving time \begin{align*} t_{j'}^{\rm (leave)} = \begin{cases} \displaystyle \frac{ s_{j'} \Big[ {\bf D}_{\mathcal{B}} \big[ {\bf I} - {\bf D}_{-\mathcal{B}}^T \big( {\bf D}_{-\mathcal{B}} {\bf D}_{-\mathcal{B}}^T\big)^{\dagger} {\bf D}_{-\mathcal{B}} \big] {\bf y} \Big]_{j'} }{ s_{j'} \Big[ {\bf D}_{\mathcal{B}} \big[ {\bf I} - {\bf D}_{-\mathcal{B}}^T \big( {\bf D}_{-\mathcal{B}} {\bf D}_{-\mathcal{B}}^T\big)^{\dagger} {\bf D}_{-\mathcal{B}} \big] {\bf D}_{\mathcal{B}}^T {\bf s} \Big]_{j'} }\equiv \frac{c_{j'}}{d_{j'}}, & \mbox{if} ~ c_{j'},d_{j'}<0,\\ 0, & \mbox{otherwise}. \end{cases} \end{align*} Thus $\mathcal{B}$ changes at $\lambda=\lambda_{k+1}$ where \[ \lambda_{k+1} = \max\{ \max_{j \in -\mathcal{B}}~ t_j^{\rm{(hit)}}, \max_{j \in \mathcal{B}}~ t_j^{\rm{(leave)}} \}. \] The path algorithm for the generalized lasso problem can solve various $\ell_1$-regularization problems and obtains the exact solution path. After sequentially solving the dual, the solution path is obtained by the primal-dual relationship \eqref{eqn:genlassoprimaldual}. It is also efficient for solving the standard FLSA. 
However, if the design matrix ${\bf X}$ does not have full rank, the additional constraint ${\bf D}^T {\bf u} \in {\rm row}({\bf X})$ does not disappear, making the dual \eqref{eqn:gen_gen} difficult to solve. To resolve this problem, \cite{TibshiraniRyan2011} suggest adding an additional penalty $\epsilon \|\beta\|_2^2$ to the primal \eqref{eqn:genlasso}, which in effect substitutes ${\bf y}$ and ${\bf X}$ with ${\bf y}^* = ({\bf y}^T, {\bf 0}^T)^T$ and ${\bf X}^* = \left( {\bf X}^T ,\sqrt{\epsilon}~ {\bf I} \right)^T$, respectively. This modification makes the new design matrix ${\bf X}^*$ full rank, but a small value of $\epsilon$ may lead to an ill-conditioned matrix $({\bf X}^*)^T {\bf X}^*$. A further difficulty is that as the number of rows $m$ of the matrix $\bf D$, i.e., the number of penalty terms on $\beta$, increases, the path algorithm becomes less efficient since $m$ is the number of dual variables. \subsection{First-order methods} To avoid the restrictions in the solution path methods, several optimization algorithms, which aim to solve the FLR problem for a fixed set of regularization parameters, have been developed. These algorithms can solve the FLR problem with a general design matrix ${\bf X}$ regardless of its rank. For scalability with the dimension $p$, they employ gradient descent-flavored first-order optimization methods. \paragraph{Efficient fused lasso algorithm} ~\\ \cite{Liu2010} propose the efficient fused lasso algorithm (EFLA) that solves the standard FLR \begin{align} \label{eqn:org_obj} \min_\beta f(\beta) &\equiv \displaystyle \frac{1}{2} \big\| {\bf y } - {\bf X} \beta \big\|_2^2 + \lambda_1 \sum_{j=1}^p | \beta_j | + \lambda_2 \sum_{j=2}^p | \beta_{j} - \beta_{j-1}|. \end{align} At the $(r+1)$th iteration, the EFLA solves \eqref{eqn:org_obj} by minimizing an approximation of $f(\beta)$ in which the squared error term is replaced by its first-order Taylor expansion at the approximate solution $\widehat{\beta}^{(r)}$ obtained from the $r$th iteration, plus an additional quadratic regularization term $\frac{L_r}{2}\| \beta - \widehat{\beta}^{(r)}\|_2^2$, $L_r>0$. Minimization of this approximate objective can be written as \begin{align}\label{eqn:EFLA} \min_\beta \frac{1}{2} \| {\bf v}^{(r)} - \beta\|_2^2 +\frac{\lambda_1}{L_r} \sum_{j=1}^p | \beta_j |+ \frac{\lambda_2}{L_r} \sum_{j=2}^p | \beta_j - \beta_{j-1}|, \end{align} where ${\bf v}^{(r)} = \widehat{\beta}^{(r)} -({\bf X}^{T} {\bf X} \widehat{\beta}^{(r)} - {\bf X}^T {\bf y})/{L_r} $. The minimizer of \eqref{eqn:EFLA} is denoted by $\widehat{\beta}^{(r+1)}$. Note that this problem is the standard FLSA. Although \eqref{eqn:EFLA} can be solved by the path algorithms reviewed in Section \ref{sec:review}, for fixed $\lambda_1$ and $\lambda_2$, more efficient methods can be employed. \cite{Liu2010} advocate the use of a gradient descent method on the dual of \eqref{eqn:EFLA}, given by \begin{align} \label{eqn:sfa_obj} \min_{\|{\bf u}\|_\infty \le \lambda_2} \frac{1}{2} {\bf u}^T{\bf D}{\bf D}^T {\bf u} - {\bf u}^T {\bf D} {\bf v}^{(r)}, \end{align} where ${\bf D} \in {\mathbb R}^{(p-1) \times p}$ is the finite difference matrix on the one-dimensional grid of size $p$, a special case of the $m \times p$ penalty matrix of the generalized lasso \eqref{eqn:genlasso}. Note here that we set $\lambda_1=0$ without loss of generality, due to the relation \eqref{eqn:soft}.
The solution to \eqref{eqn:EFLA} is obtained from the primal-dual relationship $\widehat{\beta}^{(r+1)} = {\bf v}^{(r)} - {\bf D}^T {\bf u}^{\star}$, where ${\bf u}^{\star}$ is the solution to \eqref{eqn:sfa_obj}. Since \eqref{eqn:sfa_obj} is a box-constrained quadratic program, it is solved efficiently by the following projected gradient method: \begin{align*} \widehat{\bf u}^{(k+1)} = P_{\lambda_2}(\widehat{\bf u}^{(k)} - \alpha ({\bf D} {\bf D}^T\widehat{\bf u}^{(k)} - {\bf D}{\bf v}^{(r)}) ), \end{align*} where $P_{\lambda_2}(\cdot)$ is the projection onto the $l_{\infty}$-ball $\mathcal{B}=\{{\bf u}:\|{\bf u}\|_\infty \le \lambda_2 \}$ with radius $\lambda_2$, and $\alpha$ is the reciprocal of the largest eigenvalue of the matrix $\bf{D}\bf{D}^T$. Further acceleration is achieved by using a restart technique that keeps track of the coordinates of $\bf u$ for which the box constraints are active, and by replacing $\widehat{\beta}^{(r)}$ in \eqref{eqn:org_obj} with $\bar{\beta}^{(r)} = \widehat{\beta}^{(r)} + \eta_r (\widehat{\beta}^{(r)} - \widehat{\beta}^{(r-1)})$. The acceleration constant $\eta_r$ and the additional regularization constant $L_r$ in \eqref{eqn:EFLA} are chosen using Nesterov's method \citep{Nesterov2003,Nesterov2007}. The efficiency of the EFLA strongly depends on the special structure that the finite difference matrix $\bf D$ takes. Hence its generalizability to fusion penalties other than the standard one is limited. One may want to use one of the path algorithms for the FLSA reviewed in Section \ref{sec:review} instead of solving the dual \eqref{eqn:sfa_obj}. However, this modification does not assure the efficiency of the algorithm, because a path algorithm always starts from $\lambda_2=0$ \citep{Hoefling2010} or $\lambda_2=\infty$ \citep{TibshiraniRyan2011}, while for the EFLA a fixed value of $\lambda_2$ suffices. \paragraph{Smoothing proximal gradient method} ~\\ \cite{Chen2012} propose the smoothing proximal gradient (SPG) method that solves regression problems with structured penalties, including the generalized fusion penalty. Recall that the FLR problem \eqref{eqn:obj} can be written as \begin{align}\label{eqn:spg} \displaystyle \min_\beta f(\beta) &\equiv \displaystyle \frac{1}{2} \big\| {\bf y } - {\bf X} \beta \big\|_2^2 + \lambda_2 \sum_{(j,k) \in E} | \beta_{j} - \beta_{k}| + \lambda_1 \sum_{j=1}^p | \beta_j | \nonumber \\ & \equiv \displaystyle \frac{1}{2} \big\| {\bf y } - {\bf X} \beta \big\|_2^2 + \lambda_2 \|{\bf D}\beta\|_1 + \lambda_1 \| \beta \|_1, \end{align} where ${\bf D}$ is an $m \times p$ matrix such that $\|{\bf D}\beta\|_1 = \sum_{(j,k)\in E} |\beta_j - \beta_k|$. The main idea of the SPG method is to approximate the non-smooth and non-separable penalty term $\| {\bf D} \beta \|_1$ by a smooth function and to solve the resulting smooth surrogate problem iteratively in a computationally efficient fashion. For the smooth approximation, recall that the penalty term $\|{\bf D} \beta \|_1$ can be reformulated as a maximization problem with an auxiliary vector $\alpha \in {\mathbb R}^{m}$, \begin{align*} \|{\bf D} \beta \|_1 \equiv \max_{\|\alpha\|_\infty \le 1} \alpha^T {\bf D} \beta \equiv \Omega(\beta).
\end{align*} Now consider the smooth function \begin{align}\label{eqn:norm} \tilde{\Omega}_{\mu}(\beta) \equiv \max_{\|\alpha\|_\infty \le 1} \big( \alpha^T {\bf D} \beta - \frac{\mu}{2} \| \alpha\|_2^2 \big), \end{align} which approximates $\Omega(\beta)$ with a positive smoothing parameter $\mu$, chosen as $\epsilon/m$ to achieve a desired accuracy $\epsilon>0$ of the approximation. The function $\tilde{\Omega}_{\mu}(\beta)$ is convex and continuously differentiable with respect to $\beta$ \citep{Nesterov2005}. The SPG method then solves the following smooth surrogate problem: \begin{align}\label{eqn:spgsurrogate} \displaystyle \min_\beta \tilde{f}(\beta) & \equiv \displaystyle \underbrace{\frac{1}{2}\| {\bf y} - {\bf X}\beta \|_2^2 + \lambda_2 \tilde{\Omega}_{\mu}(\beta)}_{h(\beta)} + \lambda_1 \| \beta \|_1. \end{align} This is the sum of a smooth convex function $h(\beta)$ and a non-smooth but \emph{separable} function $\lambda_1 \| \beta \|_1$, and hence can be solved efficiently using a proximal gradient method, e.g., the fast iterative shrinkage-thresholding algorithm (FISTA, \citet{Beck2009}). The FISTA iteratively approximates $h(\beta)$ by its first-order Taylor expansion with an additional quadratic regularization term of the form $\frac{L}{2}\| \beta - \widehat{\beta}^{(r)} \|_2^2$, in essentially the same manner as the EFLA step that yields \eqref{eqn:EFLA}. As a result, the FISTA solves at each iteration a subproblem that minimizes the sum of a quadratic function of $\beta$ whose Hessian is the identity, and the classical lasso penalty $\lambda_1 \| \beta \|_1$. The solution of this subproblem is readily given by soft-thresholding. The computational complexity of the SPG method depends on that of determining the constant $L$ for the additional quadratic regularization term, which the FISTA chooses as the Lipschitz constant of the gradient of $h(\beta)$, i.e., $L = \|{\bf X}\|_2^2 +\frac{ \lambda_2 }{\mu}\|{\bf D}\|_2^2$, where $\| {\bf A}\|_2$ is the spectral norm of a matrix ${\bf A}$. The computational complexity for computing $L$ is $O\big( \min(m^2p,mp^2) \big)$, which is costly if either $m$ or $p$ increases. To reduce this computational cost, \cite{Chen2012} suggest using a line search on $L$, which only needs to ensure that the objective of the FISTA subproblem, evaluated at its minimizer, is not less than the surrogate function $\tilde{f}(\beta)$ evaluated at the same minimizer. However, since such an evaluation of $\tilde{f}(\beta)$ takes $O\big(\max(mp,np) \big)$ time, this scheme does not save much cost if the surrogate function is evaluated frequently, especially when either $m$ or $p$ is large. \paragraph{Split Bregman algorithm} ~\\ \cite{Ye2011} propose the split Bregman (SB) algorithm for the standard FLR; it directly extends to the general FLR \eqref{eqn:obj} and \eqref{eqn:spg}. Note that \eqref{eqn:spg} can be reformulated in a constrained form: \begin{align}\label{eqn:sb} \begin{array}{ll} \min_{\beta, {\bf a}, {\bf b}} & \displaystyle \frac{1}{2} \big\| {\bf y } - {\bf X} \beta \big\|_2^2 + \lambda_1 \| {\bf a} \|_1 + \lambda_2 \|{\bf b} \|_1 \\ \mbox{subject~to} & {\bf a} = \beta \\ ~ & {\bf b} = {\bf D}\beta, \end{array} \end{align} with auxiliary variables ${\bf a} \in {\mathbb R}^p$ and $ {\bf b} \in {\mathbb R}^m$.
The SB algorithm is derived from the augmented Lagrangian \citep{Hestenes1969,Rockafellar1973} of \eqref{eqn:sb} given by \begin{align}\label{eqn:aug} \displaystyle L(\beta,{\bf a},{\bf b},{\bf u},{\bf v}) &= \displaystyle \frac{1}{2} \big\| {\bf y } - {\bf X} \beta \big\|_2^2 + \lambda_1 \|{\bf a}\|_1 + \lambda_2 \|{\bf b}\|_1 + {\bf u}^T(\beta - {\bf a}) + {\bf v}^T({\bf D}\beta - {\bf b}) \notag\\ & \qquad \displaystyle + \frac{\mu_1}{2} \| \beta - {\bf a} \|_2^2 + \frac{\mu_2}{2} \| {\bf D}\beta -{\bf b} \|_2^2, \end{align} where ${\bf u} \in {\mathbb R}^p$, ${\bf v} \in {\mathbb R}^m$ are dual variables, and $\mu_1$, $\mu_2$ are positive constants. This is the usual Lagrangian of \eqref{eqn:sb} augmented with the quadratic penalty terms $\frac{\mu_1}{2}\| \beta - {\bf a} \|_2^2$ and $\frac{\mu_2}{2}\| {\bf D}\beta -{\bf b} \|_2^2$, which penalize violation of the equality constraints in \eqref{eqn:sb}. The primal problem associated with the augmented Lagrangian \eqref{eqn:aug} is to minimize $\sup_{{\bf u},{\bf v}}L(\beta,{\bf a},{\bf b},{\bf u},{\bf v})$ over $(\beta,{\bf a},{\bf b})$, and the dual is to maximize $\inf_{\beta,{\bf a},{\bf b}}L(\beta,{\bf a},{\bf b},{\bf u},{\bf v})$ over $({\bf u},{\bf v})$. The SB algorithm solves the primal and the dual in an alternating fashion, in which the primal is solved separately for each of $\beta$, $\bf a$, and $\bf b$: \begin{align} \widehat{\beta}^{(r)} &= \displaystyle \argmin_{\beta}~\displaystyle \frac{1}{2} \big\| {\bf y } - {\bf X} \beta \big\|_2^2 + ({\bf u}^{(r-1)})^{T}(\beta - {\bf a}^{(r-1)}) \displaystyle + ({\bf v}^{(r-1)})^T({\bf D}\beta - {\bf b}^{(r-1)}) \nonumber\\ & \qquad \qquad \quad \displaystyle+ \frac{\mu_1}{2} \big\|\beta-{\bf a}^{(r-1)}\big\|_2^2 + \frac{\mu_2}{2} \big\|{\bf D}\beta-{\bf b}^{(r-1)}\big\|_2^2, \label{eqn:sbprimal1}\\ {\bf a}^{(r)} &= \displaystyle \argmin_{\bf a}~\displaystyle \lambda_1 \|{\bf a}\|_1 + (\widehat{\beta}^{(r)} -{\bf a})^T{\bf u}^{(r-1)} \displaystyle + \frac{\mu_1}{2} \big\|\widehat{\beta}^{(r)}-{\bf a}\big\|_2^2,\label{eqn:sbprimal2}\\ {\bf b}^{(r)} &= \displaystyle \argmin_{\bf b}~\displaystyle \lambda_2 \|{\bf b}\|_1 + ({\bf D}\widehat{\beta}^{(r)} -{\bf b})^T{\bf v}^{(r-1)} \displaystyle + \frac{\mu_2}{2} \big\|{\bf D}\widehat{\beta}^{(r)}-{\bf b}\big\|_2^2, \label{eqn:sbprimal3} \end{align} while the dual is solved by gradient ascent: \begin{align} {\bf u}^{(r)} &= {\bf u}^{(r-1)} + \delta_1 (\widehat{\beta}^{(r)} - {\bf a}^{(r)}), \label{eqn:sbdual1}\\ {\bf v}^{(r)} &= {\bf v}^{(r-1)} + \delta_2 ({\bf D}\widehat{\beta}^{(r)} - {\bf b}^{(r)}), \label{eqn:sbdual2} \end{align} where $\delta_1$ and $\delta_2$ are the step sizes. The dual updates \eqref{eqn:sbdual1} and \eqref{eqn:sbdual2} are simple; the primal updates for the auxiliary variables \eqref{eqn:sbprimal2} and \eqref{eqn:sbprimal3} are readily given by soft-thresholding, in a similar manner to the SPG case \eqref{eqn:spgsurrogate}. The primal update for the coefficient $\beta$ \eqref{eqn:sbprimal1} is equivalent to solving the linear system \begin{align} \label{eqn:sb_lin} ({\bf X}^T {\bf X} + \mu_1 {\bf I} + \mu_2 {\bf D}^T {\bf D}) \beta = {\bf X}^T {\bf y} + (\mu_1 {\bf a}^{(r-1)} - {\bf u}^{(r-1)}) + {\bf D}^T (\mu_2 {\bf b}^{(r-1)} - {\bf v}^{(r-1)}), \end{align} whose computational complexity is $O(p^3)$ for general $\bf D$.
However, for the standard FLR, in which $\bf D$ is the finite difference matrix on a one-dimensional grid, \eqref{eqn:sb_lin} can be solved quickly by using the preconditioned conjugate gradient (PCG) algorithm \citep{Demmel1997}. In this case, ${\bf M}=\mu_1 {\bf I} + \mu_2 {\bf D}^T {\bf D}$ is used as a preconditioner because $\bf M$ is tridiagonal and easy to invert. Note that the separation of the updating equations \eqref{eqn:sbprimal1}, \eqref{eqn:sbprimal2} and \eqref{eqn:sbprimal3} for the primal is possible because the non-differentiable and non-separable $\ell_1$-norm penalties on the coefficients are transferred to the auxiliary variables, which are completely decoupled and separable. Also note that the updating equations for the auxiliary variables \eqref{eqn:sbprimal2} and \eqref{eqn:sbprimal3} reduce to soft-thresholding due to the augmented quadratic terms and the separable non-differentiable terms. The SB algorithm needs to choose the augmentation constants $\mu_1$ and $\mu_2$ and the step sizes $\delta_1$ and $\delta_2$. In the implementation, \citet{Ye2011} use a common value $\mu$ for all four quantities and consider a pre-trial procedure to obtain the highest convergence rate. Although the choice of $\mu$ does not affect the convergence of the algorithm, it is known that the rate of convergence is very sensitive to this choice \citep{Lin2011,Ghadimi2012}. This can make the SB algorithm stall or fail to reach the optimal solution; see Appendix B for further details. \paragraph{Alternating linearization} ~\\ \citet{Lin2011} propose the alternating linearization (ALIN) algorithm for the generalized lasso \eqref{eqn:genlasso}. Rewrite the problem \eqref{eqn:genlasso} as a sum of two convex functions \begin{align} \label{eqn:sum} \min_{\beta} f(\beta) \equiv \underbrace{\frac{1}{2} \big\| {\bf y} -{\bf X} \beta \big\|_2^2}_{g(\beta)} + \underbrace{\lambda \big\| {\bf D}\beta \big\|_1}_{h(\beta)}. \end{align} The main idea of the ALIN algorithm is to alternately solve two subproblems, in which $g(\beta)$ is linearized and augmented with a quadratic regularization term (the $h$-subproblem), and likewise for $h(\beta)$ (the $g$-subproblem). This algorithm maintains three sequences of solutions: $\{\widehat{\beta}^{(r)}\}$ is the sequence of solutions of the problem (\ref{eqn:sum}); $\{\tilde{\beta}_h^{(r)} \}$ and $\{\tilde{\beta}_g^{(r)} \}$ are the sequences of solutions of the $h$-subproblem and the $g$-subproblem, respectively. At the $r$th iteration, the $h$-subproblem is given by \begin{align}\label{eqn:hsub1} \tilde{\beta}_h^{(r)} = \argmin_{\beta} \underbrace{\frac{1}{2} \| {\bf y} -{\bf X} \tilde{\beta}_g^{(r-1)} \|_2^2 + {\bf s}_g^T (\beta - \tilde{\beta}_g^{(r-1)})}_{\tilde{g}(\beta)} + \frac{1}{2} \| \beta - \widehat{\beta}^{(r-1)} \|_{\bf A}^2 + \lambda \|{\bf D}\beta \|_1 \end{align} where $\tilde{g}(\beta)$ is a linearization of $g(\beta)$ at $\tilde{\beta}_g^{(r-1)}$, ${\bf s}_g = \nabla g(\tilde{\beta}_g^{(r-1)}) = {\bf X}^T({\bf X} \tilde{\beta}_g^{(r-1)} - {\bf y})$, and $\| \beta - \widehat{\beta} \|_{\bf A}^2 = (\beta - \widehat{\beta})^{T} {\bf A} (\beta - \widehat{\beta})$ with a diagonal matrix ${\bf A} = \mbox{diag}\big({\bf X}^T {\bf X}\big)$. Problem \eqref{eqn:hsub1} is equivalent to \begin{align}\label{eqn:hsub2} \min_{\beta,{\bf z}}~ {\bf s}_g^T \beta + \frac{1}{2}\| \beta - \widehat{\beta}^{(r-1)} \|_{\bf A}^2 + \lambda \|{\bf z}\|_1 \quad {\rm subject~to} \quad {\bf D}\beta = {\bf z}.
\end{align} Similar to \eqref{eqn:gen_gen}, the dual of \eqref{eqn:hsub2} is given by \begin{align} \label{eqn:alin} \displaystyle \min_{{\bf u}} \frac{1}{2} {\bf u}^T {\bf D} {\bf A}^{-1} {\bf D}^T {\bf u} - {\bf u}^T{\bf D}(\widehat{\beta}^{(r-1)} - {\bf A}^{-1} {\bf s}_g) \quad \displaystyle {\rm subject~ to} \quad \| {\bf u} \|_\infty \le \lambda. \end{align} The solution of the $h$-subproblem is obtained from the primal-dual relationship $\tilde{\beta}_h^{(r)} = \widehat{\beta}^{(r-1)} - {\bf A}^{-1}({\bf s}_g + {\bf D}^T {\bf u}^*)$, where ${\bf u}^*$ is the optimal solution of \eqref{eqn:alin}. The dual \eqref{eqn:alin} is efficiently solved by an active-set box-constrained PCG algorithm with $\mbox{diag}({\bf D} {\bf A}^{-1} {\bf D}^T)$ as a preconditioner. \noindent The $g$-subproblem is given analogously by \begin{align}\label{eqn:gsub1} \tilde{\beta}_g^{(r)} &= \argmin_{\beta} \frac{1}{2} \big\| {\bf y} -{\bf X}\beta \big\|_2^2 + \underbrace{\lambda \|{\bf D}\tilde{\beta}_h^{(r)} \|_1 + {\bf s}_h^T (\beta - \tilde{\beta}_h^{(r)})}_{\tilde{h}(\beta)} + \frac{1}{2} \| \beta - \widehat{\beta}^{(r-1)} \|_{\bf A}^2 \nonumber \\ &= \argmin_{\beta} {\bf s}_h^T \beta + \frac{1}{2} \| {\bf y} - {\bf X}\beta \|_2^2 + \frac{1}{2} \| \beta - \widehat{\beta}^{(r-1)} \|_{\bf A}^2, \end{align} where ${\bf s}_h$ is the subgradient of $h(\beta)$ at $\tilde{\beta}_h^{(r)}$, which can be calculated as ${\bf s}_h = -{\bf s}_g - {\bf A}(\tilde{\beta}_h^{(r)} - \widehat{\beta}^{(r-1)})$. Problem \eqref{eqn:gsub1} is equivalent to solving the following linear system: \begin{equation} \label{eqn:alin_fsub} ({\bf X}^T {\bf X} + {\bf A})(\beta - \widehat{\beta}^{(r-1)}) = {\bf X}^T({\bf y}-{\bf X}\widehat{\beta}^{(r-1)}) - {\bf s}_h, \end{equation} which can be solved by using the PCG algorithm with ${\bf A} = \mbox{diag}({\bf X}^T {\bf X})$ as a preconditioner. After solving each subproblem, the ALIN algorithm checks its stopping and updating criteria. The stopping criteria for the $h$-subproblem and the $g$-subproblem are defined as \begin{align*} \begin{array}{l} h\mbox{-sub} ~:~ \tilde{g}(\tilde{\beta}_h^{(r)}) + {h}(\tilde{\beta}_h^{(r)}) \ge g(\widehat{\beta}^{(r-1)}) + h(\widehat{\beta}^{(r-1)}) - \epsilon,\\ g\mbox{-sub} ~:~ g(\tilde{\beta}_g^{(r)}) + \tilde{h}(\tilde{\beta}_g^{(r)}) \ge g(\widehat{\beta}^{(r-1)}) + h(\widehat{\beta}^{(r-1)}) - \epsilon, \end{array} \end{align*} where $\epsilon$ is a tolerance of the algorithm. If one of the stopping criteria is met, then the algorithm terminates; otherwise it proceeds to check the updating criteria: \begin{align*} \begin{array}{l} h\mbox{-sub} ~:~ g(\tilde{\beta}_h^{(r)}) + {h}(\tilde{\beta}_h^{(r)}) \le (1-\gamma)[g(\widehat{\beta}^{(r-1)}) + h(\widehat{\beta}^{(r-1)})] + \gamma [\tilde{g}(\tilde{\beta}_h^{(r)}) + h(\tilde{\beta}_h^{(r)})],\\ g\mbox{-sub} ~:~ g(\tilde{\beta}_g^{(r)}) + {h}(\tilde{\beta}_g^{(r)}) \le (1-\gamma)[g(\widehat{\beta}^{(r-1)}) + h(\widehat{\beta}^{(r-1)})] + \gamma [g(\tilde{\beta}_g^{(r)}) + \tilde{h}(\tilde{\beta}_g^{(r)})], \end{array} \end{align*} where $\gamma \in (0,1)$. If one of the updating criteria is satisfied, then $\widehat{\beta}^{(r)}$ is updated to the solution of the corresponding subproblem, i.e., $\tilde{\beta}_g^{(r)}$ or $\tilde{\beta}_h^{(r)}$. Otherwise $\widehat{\beta}^{(r)}$ remains unchanged: $\widehat{\beta}^{(r)}=\widehat{\beta}^{(r-1)}$.
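The stopping and updating tests have the same form for both subproblems. The following NumPy sketch is an illustration only, with hypothetical function names, and is not the code of \citet{Lin2011}; it evaluates the two tests given the objective value at the current iterate, the true objective at the subproblem solution, and the value of the partially linearized model at that solution.
\begin{verbatim}
import numpy as np

def objective_parts(y, X, D, lam, beta):
    # smooth part g(beta) and nonsmooth part h(beta) of the generalized lasso
    g = 0.5 * np.sum((y - X @ beta) ** 2)
    h = lam * np.sum(np.abs(D @ beta))
    return g, h

def alin_tests(F_prev, F_cand, F_model, gamma, eps):
    # F_prev  : g(beta_hat) + h(beta_hat) at the current iterate
    # F_cand  : g + h at the subproblem solution (true objective)
    # F_model : objective with the linearized piece (g~ + h, or g + h~)
    stop = F_model >= F_prev - eps                              # stopping test
    accept = F_cand <= (1 - gamma) * F_prev + gamma * F_model   # updating test
    return stop, accept
\end{verbatim}
If \texttt{accept} is true, the iterate $\widehat{\beta}^{(r)}$ is replaced by the corresponding subproblem solution; otherwise it is kept, exactly as described above.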
As it solves the dual problem \eqref{eqn:alin}, the ALIN algorithm has a drawback similar to that of the path algorithm for the generalized lasso: as the number of rows $m$ of $\bf D$ increases, solving the dual becomes less efficient than solving the primal. Moreover, the matrix ${\bf D}{\bf A}^{-1} {\bf D}^T$ in \eqref{eqn:alin} is not positive definite when $m$ is greater than $p$, which violates the assumption of the PCG method. \subsection{Summary of the reviewed algorithms} Table 1 summarizes the existing algorithms explained in Sections 2.1 and 2.2 and the MM algorithm proposed in Section 3 according to the type of the design matrix ${\bf X}$ and the penalty structure. The items marked with a filled circle ($\bullet$) denote that the algorithm is applicable. \begin{table}[!htb] \caption{Summary of algorithms for solving the FLSA and the FLR problems.} \begin{minipage}{\textwidth} \centering \begin{tabular}{|l|c|c|c|c|} \hline \multirow{2}*{Method\let\thefootnote\relax\footnotetext{Abbreviations of the names of the algorithms are given within the parentheses for future reference.}} & \multicolumn{2}{c|}{FLSA (${\bf X} = {\bf I}$)} & \multicolumn{2}{c|}{FLR (general ${\bf X}$)} \\ \cline{2-5} &STD\let\thefootnote\relax\footnotetext{STD denotes the standard fusion penalty $\sum_{j=2}^p |\beta_j -\beta_{j-1}|$.} & GEN\let\thefootnote\relax\footnotetext{GEN denotes the generalized fusion penalty $\sum_{(j,k) \in E} |\beta_j -\beta_{k}|$ for a given index set $E$.} &$\phantom{a}$STD$\phantom{a}$ & GEN\\ \hline Path-wise optimization (\textsf{pathwise}) & $\bullet$\let\thefootnote\relax\footnotetext{$\bullet$ denotes that the method is applicable.}& & & \\ Path algorithm for the FLSA (\textsf{pathFLSA}) &$\bullet$& $\bullet$ & & \\ Path algorithm for the generalized lasso (\textsf{genlasso}) & $\bullet$& $\vartriangle$\let\thefootnote\relax\footnotetext{$\vartriangle$ denotes that the dual solution path is computationally infeasible in large-scale image denoising.}& $\blacktriangle$\let\thefootnote\relax\footnotetext{$\blacktriangle$ denotes that the method is only applicable when ${\bf X}$ has full rank.} & $\blacktriangle$ \\ Efficient Fused Lasso Algorithm (\textsf{EFLA}) & $\bullet$& & $\bullet$& \\ Smooth Proximal Gradient (\textsf{SPG}) & $\bullet$&$\circ$\let\thefootnote\relax\footnotetext{$\circ$ denotes that the method is not adequate since it requires the computation of a spectral matrix norm, which is costly when $p$ is large.} &$\bullet$ &$\circ$\\ Split Bregman (\textsf{SB}) & $\bullet$&$\bullet$ &$\bullet$ &$\bullet$ \\ Alternating Linearization (\textsf{ALIN}) & $\bullet$&$\bullet$ &$\bullet$ &$\bullet$ \\ \hline Majorization-Minimization (\textsf{MM}) &$\bullet$&$\bullet$ &$\bullet$ &$\bullet$ \\ MM with GPU parallelization (\textsf{MMGPU}) &$\bullet$& &$\bullet$ & \\ \hline \end{tabular} \end{minipage} \end{table} \section{MM algorithm for the fused lasso problem} \subsection{MM algorithm} In this section we propose an MM algorithm to solve the FLR (\ref{eqn:obj}). The MM algorithm iterates between two steps, the majorization step and the minimization step. Given the current estimate $\widehat{\beta}^{0}$ of the optimal solution to (\ref{eqn:obj}), the majorization step finds a majorizing function $g\big(\beta \big|\widehat{\beta}^0 \big)$ such that $f\big(\widehat{\beta}^0\big) = g \big(\widehat{\beta}^0 \big|\widehat{\beta}^0 \big)$ and $f \big(\beta \big) \le g \big( \beta \big|\widehat{\beta}^0\big)$ for all $\beta \neq \widehat{\beta}^0$.
The minimization step updates the estimate with \begin{equation} \nonumber \widehat{\beta} = \argmin_{\beta} ~g \big( \beta \big| \widehat{\beta}^0\big). \end{equation} It is known that the MM algorithm has a descent property in the sense that $f\big(\widehat{\beta}\big) \le f \big(\widehat{\beta}^0 \big)$. Furthermore, if $f\big(\beta\big)$ and $g \big( \beta \big|\widehat{\beta}^0\big)$ satisfy some regularity conditions, e.g., those in \cite{Vaida2005}, the MM algorithm converges to the optimal solution of $f\big(\beta\big)$. The MM algorithm has been applied to the classical lasso problem by \citet{HunterLi2005}. They propose to minimize a perturbed version of the objective function of the classical lasso \begin{equation} \nonumber f_{\lambda,\epsilon}(\beta) = \frac{1}{2} \|{\bf y}- {\bf X}\beta\|_2^2 + \lambda \sum_{j=1}^p \Big(|\beta_j| - \epsilon \log \big( 1 + \frac{|\beta_j|}{\epsilon} \big) \Big) \end{equation} instead of \begin{equation} \nonumber f_\lambda(\beta) = \frac{1}{2} \|{\bf y}- {\bf X}\beta\|_2^2 + \lambda \|\beta\|_1 \end{equation} to avoid division by zero in the algorithm. The majorizing function for $f_{\lambda,\epsilon}(\beta)$ at $\widehat{\beta}^0$ is given by \begin{eqnarray} g_{\lambda,\epsilon}(\beta |\widehat{\beta}^0) &=& \frac{1}{2} \|{\bf y}- {\bf X}\beta\|_2^2 ~+ \lambda \sum_{j=1}^p \left\{|\widehat{\beta}^0_j| - \epsilon \log \left( 1 + \frac{|\widehat{\beta}^0_j|} {\epsilon} \right) + \frac{\beta_j^2 - (\widehat{\beta}^0_j)^2}{2 \Big(|\widehat{\beta}^0_j| + \epsilon\Big)} \right\}. \nonumber \end{eqnarray} \citet{HunterLi2005} show that the sequence $\{ \widehat{\beta}^{(r)}\}_{r \ge 0}$ with $\widehat{\beta}^{(r+1)} = \argmin_\beta g_{\lambda,\epsilon}\big(\beta \big| \widehat{\beta}^{(r)}\big)$ converges to the minimizer of $f_{\lambda,\epsilon}(\beta)$, and that $f_{\lambda,\epsilon}(\beta)$ converges to $f_{\lambda}(\beta)$ uniformly as $\epsilon$ approaches $0$. Motivated by the above development, we introduce a perturbed version $f_\epsilon(\beta)$ of the objective (\ref{eqn:obj}) and the majorizing function $g_\epsilon(\beta|\widehat{\beta}^0)$ for the FLR problem (\ref{eqn:obj}): \begin{align} \label{eqn:pert} f_{\epsilon} (\beta) &= \displaystyle \frac{1}{2} \| {\bf y}- {\bf X} \beta \|_2^2 + \lambda_1 \sum_{j=1}^p \left\{|\beta_j | - \epsilon \log \Big(1 + \frac{|\beta_j|}{\epsilon} \Big) \right\} \nonumber \\ & \displaystyle + \lambda_2 \sum_{(j,k) \in E} \left\{|\beta_j - \beta_{k}| - \epsilon \log \Big(1 + \frac{|\beta_j - \beta_{k}|}{\epsilon} \Big) \right\}, \end{align} and \begin{align} \label{eqn:major} g_{\epsilon} (\beta | \widehat{\beta}^0) &= \displaystyle \frac{1}{2} \| {\bf y}- {\bf X} \beta \|_2^2 \displaystyle + \lambda_1 \sum_{j=1}^p \Bigg\{|\widehat{\beta}^0_j| - \epsilon \log \Big(1 + \frac{|\widehat{\beta}^0_j|}{\epsilon} \Big) + \frac{\beta_j^2 - (\widehat{\beta}^0_j)^2}{2(|\widehat{\beta}^0_j|+\epsilon)} \Bigg\} \nonumber \\ & \quad \displaystyle + \lambda_2 \sum_{(j,k) \in E} \Bigg\{ \big|\widehat{\beta}^0_j-\widehat{\beta}^0_k \big| - \epsilon \log \Big(1 + \frac{|\widehat{\beta}^0_j -\widehat{\beta}^0_k|}{\epsilon} \Big) + \frac{(\beta_j-\beta_{k})^2 - (\widehat{\beta}^0_j-\widehat{\beta}^0_k)^2}{2(|\widehat{\beta}^0_j-\widehat{\beta}^0_k|+\epsilon)} \Bigg\}. \end{align} Note that $g_\epsilon(\beta | \widehat{\beta}^0)$ is a smooth convex function in $\beta$. 
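As a concrete rendering of \eqref{eqn:pert} and \eqref{eqn:major}, the following NumPy sketch is an illustration only (not the C implementation used in the numerical studies); it evaluates the perturbed objective and its majorizer, where \texttt{E} is a list of the (here zero-based) index pairs $(j,k)$ appearing in the generalized fusion penalty.
\begin{verbatim}
import numpy as np

def f_eps(y, X, beta, lam1, lam2, E, eps):
    # perturbed objective f_eps(beta)
    fit = 0.5 * np.sum((y - X @ beta) ** 2)
    pen1 = lam1 * np.sum(np.abs(beta) - eps * np.log1p(np.abs(beta) / eps))
    d = np.array([beta[j] - beta[k] for (j, k) in E])
    pen2 = lam2 * np.sum(np.abs(d) - eps * np.log1p(np.abs(d) / eps))
    return fit + pen1 + pen2

def g_eps(y, X, beta, beta0, lam1, lam2, E, eps):
    # majorizing surrogate g_eps(beta | beta0)
    fit = 0.5 * np.sum((y - X @ beta) ** 2)
    a0 = np.abs(beta0)
    pen1 = lam1 * np.sum(a0 - eps * np.log1p(a0 / eps)
                         + (beta ** 2 - beta0 ** 2) / (2.0 * (a0 + eps)))
    d = np.array([beta[j] - beta[k] for (j, k) in E])
    d0 = np.array([beta0[j] - beta0[k] for (j, k) in E])
    pen2 = lam2 * np.sum(np.abs(d0) - eps * np.log1p(np.abs(d0) / eps)
                         + (d ** 2 - d0 ** 2) / (2.0 * (np.abs(d0) + eps)))
    return fit + pen1 + pen2
\end{verbatim}
On any test point one can check numerically that $g_\epsilon(\beta\,|\,\beta^0) \ge f_\epsilon(\beta)$ with equality at $\beta = \beta^0$, which is the content of part (iii) of the proposition below.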
Below we see that $g_\epsilon(\beta | \widehat{\beta}^0)$ indeed majorizes $f_\epsilon(\beta)$ at $\widehat{\beta}^0$, and has a unique global minimum regardless of the rank of ${\bf X}$, together with several properties of the perturbed objective $f_{\epsilon}(\beta)$: \begin{proposition}\label{prop:mm} For $\epsilon >0$ and $\lambda_1>0$, (i) $f_\epsilon (\beta)$ is continuous, convex, and satisfies $\lim_{ \|\beta\|_2 \rightarrow \infty} f_{\epsilon} (\beta) = \infty$; (ii) $f_{\epsilon}(\beta)$ converges to $f(\beta)$ uniformly on any compact set $\bf C$ as $\epsilon$ approaches zero; (iii) $g_\epsilon (\beta|\beta')$ majorizes $f_\epsilon (\beta)$ at $\beta'$, and $g_\epsilon(\beta|\beta')$ has a unique global minimum for all $\beta'$. \end{proposition} \begin{proof} See Appendix A.1. \end{proof} Now let $\widehat{\beta}^{(r)}$ be the current estimate in the $r$th iteration. The majorizing function $g_{\epsilon}(\beta |\widehat{\beta}^{(r)})$ is minimized over $\beta$ when $\partial g_\epsilon (\beta|\widehat{\beta}^{(r)}) / \partial \beta = 0$, which can be written as a linear system of equations \begin{equation} \label{eqn:lin-sys} \big( {\bf X}^{T} {\bf X} + \lambda_1 {\bf A}^{(r)} + \lambda_2 {\bf B}^{(r)} \big) \beta = {\bf X}^{T} {\bf y}, \end{equation} where ${\bf A}^{(r)} = {\rm diag}(a_1^{(r)},a_2^{(r)},\ldots,a_p^{(r)})$ with $a_{j}^{(r)} = 1/(|\widehat{\beta}_j^{(r)}|+\epsilon)$ for $1 \le j \le p$, and ${\bf B}^{(r)}= \big(b_{jk}^{(r)} \big)_{1\le j,k\le p}$ is a symmetric and positive semidefinite matrix with \begin{eqnarray} b_{jj}^{(r)} &=& \sum_{k: (j,k)\in E} \frac{1}{|\widehat{\beta}_j^{(r)} - \widehat{\beta}_k^{(r)}| + \epsilon}, \quad j=1,\ldots ,p, \nonumber \\ b_{jk}^{(r)} &=&b_{kj}^{(r)} =-\frac{1}{|\widehat{\beta}_j^{(r)} - \widehat{\beta}_k^{(r)}| + \epsilon}, \quad \forall (j,k) \in E, \nonumber \end{eqnarray} and $b_{jk}^{(r)} = 0$ otherwise. The procedure described so far is summarized as Algorithm \ref{mm_general}. Note that once the matrix $ {\bf X}^{T} {\bf X} + \lambda_1 {\bf A}^{(r)} + \lambda_2 {\bf B}^{(r)}$ is constructed, the linear system (\ref{eqn:lin-sys}) can be efficiently solved. Since this matrix is symmetric and positive definite, Cholesky decomposition and back substitution can be applied, e.g., using the \texttt{dpotrf} and \texttt{dpotri} functions in LAPACK \citep{Anderson1995}. Furthermore, the construction of the matrix $ \lambda_1 {\bf A}^{(r)} + \lambda_2 {\bf B}^{(r)}$ is efficient in the MM algorithm. Recall from \eqref{eqn:spg} that the generalized fusion penalty in \eqref{eqn:obj} can be written as $\| {\bf D} \beta \|_1$ for an $m \times p$ matrix ${\bf D}$, where $m$ is the number of penalty terms. In Algorithm \ref{mm_general}, the matrix $ \lambda_1 {\bf A}^{(r)} + \lambda_2 {\bf B}^{(r)}$ is constructed in $O(m)$ time, i.e., independent of the problem dimension $p$. This is a great advantage of the MM algorithm over the other algorithms reviewed in Section \ref{sec:review}: \textsf{SPG} requires checking a specific condition to guarantee its convergence, which takes at least $O\big(np\big)$ time for every iteration; \textsf{SB} needs to construct the matrix $ \mu {\bf I} + \mu {\bf D}^T {\bf D}$, where $\mu$ is the parameter discussed in Section 2.2, and this takes $O(mp^2)$ time; \textsf{ALIN} and \textsf{genlasso} need to compute the matrix ${\bf D}{\bf D}^T$ for solving the dual, and hence require $O(m^2p)$ time.
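To illustrate the update, the following NumPy sketch (an illustration only; the actual implementation uses the LAPACK routines described above) assembles $\lambda_1 {\bf A}^{(r)} + \lambda_2 {\bf B}^{(r)}$ in one pass over the $m$ edges and then solves \eqref{eqn:lin-sys} directly. A dense matrix is used here for clarity, whereas a banded or sparse representation would be used in practice.
\begin{verbatim}
import numpy as np

def mm_update(y, X, beta, lam1, lam2, E, eps):
    # one MM iteration: build lam1*A + lam2*B from the current iterate and
    # solve (X^T X + lam1*A + lam2*B) beta = X^T y
    # E: list of 0-based index pairs (j, k), each pair listed once
    M = np.diag(lam1 / (np.abs(beta) + eps))   # lam1 * A^{(r)} (diagonal part)
    for (j, k) in E:                           # one pass over the m edges
        w = lam2 / (np.abs(beta[j] - beta[k]) + eps)
        M[j, j] += w
        M[k, k] += w
        M[j, k] -= w
        M[k, j] -= w
    lhs = X.T @ X + M
    return np.linalg.solve(lhs, X.T @ y)       # Cholesky-based solve in practice
\end{verbatim}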
Although in the MM algorithm the matrix $ \lambda_1 {\bf A}^{(r)} + \lambda_2 {\bf B}^{(r)}$ needs to be constructed for each iteration, our experience is that the MM algorithm tends to converge within a few tens of iterations; see Section \ref{sec:numerical}. Thus if $p$ is more than a few tens, which is the setting of primary interest here, the MM algorithm is expected to run faster than the other algorithms. In case the matrix ${\bf M}^{(r)}= \lambda_1 {\bf A}^{(r)} +\lambda_2 {\bf B}^{(r)}$ is tridiagonal as in the standard FLR, or block tridiagonal as in the two-dimensional FLR, we can employ the preconditioned conjugate gradient (PCG) method by using the matrix ${\bf M}^{(r)}$ as the preconditioner to efficiently solve the linear system (\ref{eqn:lin-sys}). The PCG method is an iterative method for solving a linear system ${\bf Q}{\bf x} ={\bf c}$, where ${\bf Q}$ is a symmetric positive definite matrix, with a preconditioner matrix ${\bf M}$ \citep{Demmel1997}. For example, we can use ${\bf M}={\rm diag}({\bf X}^T {\bf X})$ for ${\bf Q} = {\bf X}^T {\bf X}$. The PCG method achieves the solution by iteratively updating four sequences $\{{\bf x}_j\}_{j\ge0}$, $\{{\bf r}_j\}_{j\ge0}$, $\{{\bf z}_j\}_{j\ge0}$, and $\{{\bf p}_j\}_{j\ge1}$ with \begin{equation} \nonumber \begin{array}{ccl} {\bf x}_j &=& {\bf x}_{j-1} + \nu_j {\bf p}_j, \\ {\bf r}_j &=& {\bf r}_{j-1} - \nu_j {\bf Q} {\bf p}_j,\\ {\bf z}_j &=& {\bf M}^{-1} {\bf r}_j,\\ {\bf p}_{j+1} &=& {\bf z}_j + \gamma_{j} {\bf p}_j,\\ \end{array} \end{equation} where $\nu_j = ({\bf z}_{j-1}^T {\bf r}_{j-1})/({\bf p}_j^T {\bf Q} {\bf p}_j)$, $\gamma_{j} = ({\bf z}_j^T {\bf r}_j)/({\bf z}_{j-1}^T {\bf r}_{j-1})$, ${\bf x}_0 = 0$, ${\bf r}_0 = {\bf c}$, ${\bf z}_0 = {\bf M}^{-1} {\bf c}$, and ${\bf p}_1 = {\bf z}_0$. For the standard FLR, the matrix ${\bf M}^{(r)}$ is tridiagonal and its Cholesky decomposition can be evaluated within $ {O\big( p \big)}$ time, much faster than that of general positive definite matrices. In this case, we can solve the linear system ${\bf M}^{(r)} {\bf z}_j = {\bf r}_j$ using the LAPACK functions \texttt{dpbtrf} and \texttt{dpbtrs}, which perform Cholesky decomposition and linear system solving for symmetric positive definite band matrices, respectively. Algorithm \ref{mm_pcg} describes the proposed MM algorithm using the PCG method for the standard FLR. \begin{algorithm} \caption{MM algorithm for the FLR}\label{mm_general} \begin{algorithmic}[1] \Require ${\bf y}$, ${\bf X}$, $\lambda_1$, $\lambda_2$, convergence tolerance $\delta$, perturbation constant $\epsilon$. \For{$j = 1,\cdots, p$} \State $\displaystyle \widehat{\beta}_j^{(0)} \gets \frac{ X_j^{T} {\bf y} }{ X_j^{T}X_j}$\Comment{initialization} \EndFor \Repeat{~~$r = 0,1,2,\ldots$} \State $ {\bf M}^{(r)} \gets {\bf X}^{T} {\bf X} + \lambda_1 {\bf A}^{(r)} + \lambda_2 {\bf B}^{(r)}$ \State $\widehat{\beta}^{(r+1)} \gets \big({\bf M}^{(r)}\big)^{-1} {\bf X}^{T} {\bf y}$\Comment{using \texttt{dpotrf} and \texttt{dpotri} in LAPACK } \Until{ $ \displaystyle \frac{| f(\widehat{\beta}^{(r+1)}) - f(\widehat{\beta}^{(r)}) |}{ | f(\widehat{\beta}^{(r)}) | }\le \delta$} \Ensure $\widehat{\beta}_\epsilon = \widehat{\beta}^{(r+1)}$ \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{\newline MM algorithm using the PCG method for the standard FLR and the two-dimensional FLR} \label{mm_pcg} \begin{algorithmic}[1] \Require ${\bf y}$, ${\bf X}$, $\lambda_1$, $\lambda_2$, convergence tolerance $\delta$, perturbation constant $\epsilon$.
\For{$j = 1,\cdots, p$} \State $\displaystyle \widehat{\beta}_j^{(0)} \gets \frac{ X_j^{T} {\bf y} }{ X_j^{T}X_j}$\Comment{initialization} \EndFor \Repeat{ $r = 0,1,2,\ldots$} \State ${\bf M }^{(r)} \gets \lambda_1 {\bf A}^{(r)} + \lambda_2 {\bf B}^{(r)}$\Comment{preconditioner} \State ${\bf x}_0 \gets \widehat{\beta}^{(r)}$ \Comment{begin the PCG method} \State ${\bf r}_0 \gets {\bf X}^T {\bf y} - ({\bf X}^T {\bf X} + {\bf M}^{(r)}) {\bf x}_0$ \State ${\bf z}_0 \gets \big({\bf M}^{(r)}\big)^{-1} {\bf r}_0$ \State ${\bf p}_1 \gets {\bf z}_0$ \Repeat{ ~~$j = 1,2,\ldots$} \State $ \displaystyle \nu_j \gets \frac{{\bf r}^T_{j-1} {\bf z}_{j-1} }{ {\bf p}^T_j ({\bf X}^T {\bf X} + {\bf M}^{(r)}) {\bf p}_j}$ \State $ {\bf x}_j \gets {\bf x}_{j-1} + \nu_j {\bf p}_j$ \State $ {\bf r}_j \gets {\bf r}_{j-1} - \nu_j ({\bf X}^T {\bf X} + {\bf M}^{(r)}) {\bf p}_j$ \State Solve ${\bf M}^{(r)} {\bf z}_j = {\bf r}_j$\Comment{using \texttt{dpbtrf} and \texttt{dpbtrs} in LAPACK } \State $\displaystyle \gamma_j \gets \frac{{\bf r}^T_j {\bf z}_j}{ {\bf r}^T_{j-1} {\bf z}_{j-1}}$ \State ${\bf p}_{j+1} \gets {\bf z}_j + \gamma_j {\bf p}_{j}$\Comment{update conjugate gradient} \Until{ $\displaystyle \frac{\| {\bf r}_j\|_2 }{ \|{\bf X}^T {\bf y}\|_2} \le \delta$} \Comment{end the PCG method} \State $\widehat{\beta}^{(r+1)} \gets {\bf x}_j$\Comment{update solution} \Until{ $ \displaystyle \frac{| f(\widehat{\beta}^{(r+1)}) - f(\widehat{\beta}^{(r)}) |}{ | f(\widehat{\beta}^{(r)}) | }\le \delta$} \Ensure $\widehat{\beta}_\epsilon = \widehat{\beta}^{(r+1)}$ \end{algorithmic} \end{algorithm} \subsection{Convergence} In this section we provide some results on the convergence of the proposed MM algorithm for the FLR. We first show that a minimizer $\widehat{\beta}_\epsilon$ of the perturbed version \eqref{eqn:pert} of the objective \eqref{eqn:obj} is arbitrarily close to a minimizer $\widehat{\beta}$ of the true objective \eqref{eqn:obj} for sufficiently small $\epsilon$. We then see that the sequence of the solutions $\{\widehat{\beta}_{\epsilon}^{(r)}\}_{r \ge 0}$ of the MM algorithm converges to $\widehat{\beta}_{\epsilon}$. The key to the proof of the following lemma is to see that the level sets of both the perturbed objective $f_{\epsilon}(\beta)$ and the true objective $f(\beta)$ are compact, which is a consequence of Proposition \ref{prop:mm}. \begin{lemma}\label{lemma:perturbed} Consider an arbitrary decreasing sequence $\big\{\epsilon_n, n=1,2,\ldots \big\}$ that converges to 0. Then, any limit point of $\widehat{\beta}_{\epsilon_n}$ is a minimizer of $f\big(\beta \big)$, provided that $\{ \beta ~|~ f(\beta) = f(\widehat{\beta})\}$ is non-empty. \end{lemma} \begin{proof} See Appendix A.2. \end{proof} Lemma \ref{lemma:lyapunov} states that $f_{\epsilon}(\beta)$ serves as a Lyapunov function for the nonlinear recursive updating rule, implicitly defined by the proposed MM algorithm, that generates the sequence $\{\widehat{\beta}_{\epsilon}^{(r)}\}$. Monotone convergence immediately follows from the descent property of the MM algorithm. \begin{lemma}\label{lemma:lyapunov} Let ${\bf S}$ be the set of stationary points of $f_{\epsilon} (\beta)$ and $\{ \widehat{\beta}_{\epsilon}^{(r)} \}_{r\ge 0}$ be the sequence of solutions of the proposed MM algorithm given by $\widehat{\beta}_{\epsilon}^{(r+1)} = M(\widehat{\beta}_{\epsilon}^{(r)}) = \argmin_{\beta} g_\epsilon(\beta|\widehat{\beta}_{\epsilon}^{(r)})$.
Then the limit points of $\{\widehat{\beta}_\epsilon^{(r)} \}_{r \ge 0}$ are stationary points of $f_\epsilon(\beta)$, i.e., $\partial f_{\epsilon}(\beta)/\partial \beta = 0$ at these limit points. Moreover, $f_{\epsilon}(\widehat{\beta}_{\epsilon}^{(r)})$ converges monotonically to $f_\epsilon(\beta_{\epsilon}^*)$ for some $\beta_{\epsilon}^* \in {\bf S}$. \end{lemma} \begin{proof} See Appendix A.3. \end{proof} Combining Lemmas 1 and 2, it is straightforward to see that the MM algorithm generates solutions that converge to an optimal solution of the FLR problem \eqref{eqn:obj} as $\epsilon$ decreases to zero. \begin{theorem} The sequence of the solutions $\{\widehat{\beta}_{\epsilon}^{(r)}\}_{r\ge 0}$ generated by the proposed MM algorithm converges to a minimizer of $f(\beta)$. Moreover, the sequence of objective values $\{f_{\epsilon}(\widehat{\beta}_{\epsilon}^{(r)})\}_{r\ge 0}$ converges to the minimum value of $f(\beta)$. \end{theorem} \begin{proof} For the first part, note that \[ \big\| \widehat{\beta}_{\epsilon}^{(r)} - \beta^{**} \big\| \le \big\| \widehat{\beta}_{\epsilon}^{(r)} - \beta_{\epsilon}^* \big\| + \big\| \beta_{\epsilon}^* - \beta^{**} \big\|, \] where $\beta_{\epsilon}^*$ is a stationary point of, hence minimizes, $f_{\epsilon}(\beta)$, and $\beta^{**}$ is a limit point of $\{\beta_{\epsilon}^*\}$ as $\epsilon \downarrow 0$. The first term on the right-hand side becomes arbitrarily small for sufficiently large $r$ by Lemma \ref{lemma:lyapunov}, whereas the second term does for sufficiently small $\epsilon$ by Lemma \ref{lemma:perturbed}. The limit point $\beta^{**}$ is a minimizer of $f(\beta)$, also by Lemma \ref{lemma:perturbed}. For the second part, we see \[ \big| f_{\epsilon}(\widehat{\beta}_{\epsilon}^{(r)})-f(\beta^{**}) \big| \le \big| f_{\epsilon}(\widehat{\beta}_{\epsilon}^{(r)})-f_{\epsilon}(\beta_{\epsilon}^*) \big| + \big| f_{\epsilon}(\beta_{\epsilon}^{*})-f(\beta_{\epsilon}^{*}) \big| + \big| f(\beta_{\epsilon}^{*})-f(\beta^{**}) \big|. \] The first term on the right-hand side vanishes by the continuity of $f_{\epsilon}(\beta)$; the second term by the uniform convergence of $f_{\epsilon}(\beta)$ to $f(\beta)$, as shown in the proof of Lemma \ref{lemma:perturbed}; and the third term by the continuity of $f(\beta)$. \end{proof} \section{Parallelization of the MM algorithm with GPU} The graphics processing unit (GPU) was originally developed as a co-processor to reduce the workload of the central processing unit (CPU) for computationally expensive graphics operations. A GPU has a collection of thousands of light-weight computing cores, which makes it suitable for massively parallel applications \citep{Owens2008}. While early GPUs could only perform simple bitmap processing for generating pixels on screen, as applications came to require high-level three-dimensional graphics performance, it became possible for users to program so-called shaders that directly control each step of the graphics pipeline, which generates pixels from three-dimensional geometry models. Since a GPU generates a massive number of pixels on screen simultaneously, shader programming can be understood as carrying out an identical computation on many geometric primitives (such as vertices) as the input in order to determine the output, namely the intensities of the pixels.
Thus for a general-purpose computing problem, if the problem has \emph{data-level parallelism}, that is, the computation can be executed on many portions of the data at the same time, then by writing an appropriate shader program we can map the data to geometric primitives and parallelize the general-purpose computation. Due to this single-instruction multiple-data (SIMD) parallel architecture, together with its economic attractiveness, the use of the GPU has expanded to general-purpose scientific computing beyond graphics computation. As the demand for general-purpose computing has increased, both the architecture and the libraries for the GPU have evolved to support such computation, e.g., NVIDIA's Compute Unified Device Architecture (CUDA) \citep{Kirk2010,Farber2011}. As mentioned above, applications in which the GPU is effective involve separation of data and parameters, such as nonnegative matrix factorization, positron emission tomography, and multidimensional scaling \citep{Zhou2010}. For the standard FLR, the PCG step in (\ref{eqn:lin-sys}) that is frequently encountered in the inner iteration of Algorithm \ref{mm_pcg} can be greatly accelerated by GPU parallelization. The motivation for the parallelization comes from parallel algorithms such as cyclic reduction (CR), parallel cyclic reduction (PCR), and recursive doubling (RD) \citep{Hockney1965,Hockney1981,Stone1973} that can solve a tridiagonal system, to which \eqref{eqn:lin-sys} reduces. Although these parallel algorithms require more operations than basic Gaussian elimination for a tridiagonal system \citep{Zhang2010}, they can be executed more efficiently than basic Gaussian elimination if many cores are utilized simultaneously. Moreover, a hybrid algorithm, which combines the CR and the PCR, has been developed for GPUs recently \citep{Zhang2010} and is implemented in the CUDA sparse matrix library (\texttt{cuSPARSE}) as the function \texttt{cusparseDgtsv}. We use this function to solve the linear system ${\bf M}^{(r)} {\bf z}_j = {\bf r}_j$ in the GPU implementation of Algorithm \ref{mm_pcg}. In addition to the tridiagonal system, Algorithm \ref{mm_pcg} also contains a set of linear operations suitable for GPU parallelization. For instance, lines 7, 11--13, and 15--17 rely on basic linear algebra operations such as matrix-matrix multiplication, matrix-vector multiplication, vector-vector addition, inner products, and $\ell_2$-norm computations. These operations involve identical arithmetic for each coordinate, hence the SIMD architecture of the GPU is appropriate for parallelizing the algorithm. The CUDA basic linear algebra subroutines library (\texttt{cuBLAS}) provides an efficient implementation of such operations, hence we use \texttt{cuBLAS} for parallelizing lines 7, 11--13, and 15--17 on the GPU. Unfortunately, our parallelization of the PCG step is limited to the standard FLR and at this moment is not extendable to the generalized fusion penalty. In this case, we recommend solving the linear system \eqref{eqn:lin-sys} with the LAPACK functions \texttt{dpotrf} and \texttt{dpotri} using CPUs as in Algorithm \ref{mm_general}. Currently this is more efficient than using GPUs. \section{Numerical studies}\label{sec:numerical} In this section, we compare the performance of the MM algorithm with the other existing algorithms using several data sets that generally fit into the ``large $p$, small $n$'' setting. The general QP solvers are excluded from the comparison, since they are slower than the specialized algorithms for the FLR.
These numerical studies include five scenarios for the FLR problem. In the first three scenarios, we consider the standard FLR with various sparsities of the true coefficients. Since the path algorithms have restrictions on solving the FLR with general design matrices, we only compare the proposed MM algorithms (with and without GPU parallelization, denoted by \textsf{MMGPU} and \textsf{MM}, respectively) with the \textsf{EFLA}, \textsf{SPG}, and \textsf{SB} algorithms. In the next two scenarios, we consider the two-dimensional FLR with a general design matrix and its application to the image denoising problem. In the two-dimensional FLR with a general design matrix, only \textsf{MM}, \textsf{SPG}, and \textsf{SB} are available. In the image denoising problem, which is equivalent to the two-dimensional FLSA problem, we additionally consider the \textsf{pathFLSA} algorithm. The \textsf{MM} and \textsf{MMGPU} algorithms are implemented in C and CUDA C. The convergence of the algorithms is measured using the relative error of the objective function at each iteration. The relative error at the $r$th iteration is defined as ${\rm RE}(\widehat{\beta}^{(r)}) = |f(\widehat{\beta}^{(r)}) - f(\widehat{\beta}^{(r-1)})|/f(\widehat{\beta}^{(r-1)})$, where $\widehat{\beta}^{(r)}$ is the estimate in the $r$th iteration (see Algorithms \ref{mm_general} and \ref{mm_pcg}). All the algorithms terminate when the relative error becomes smaller than $10^{-5}$ or the number of iterations exceeds $10000$. The \textsf{SB} and \textsf{MM} algorithms have additional parameters $\mu$ and $\epsilon$, respectively. The details of choosing $\mu$ for the \textsf{SB} are given in Appendix B. In the \textsf{MM} and \textsf{MMGPU}, we set the perturbation $\epsilon = 10^{-8}$ to avoid machine precision error.\footnote[1]{Although setting $\epsilon$ to a constant does not exactly satisfy the condition for Theorem 1, this choice is sufficiently small within machine precision and prevents division by zero.} The algorithms are compared in terms of the computation time and the number of iterations for the aforementioned five scenarios. We also investigate the sensitivity of the SB algorithm and the MM algorithm to the choice of their additional parameters $\mu$ and $\epsilon$. All algorithms are implemented in MATLAB except for \textsf{pathFLSA}, which is implemented in R. The computation times are measured in CPU time by using a desktop PC (Intel Core2 extreme X9650 CPU (3.00 GHz) and 8 GB RAM) with an NVIDIA GeForce GTX 465 GPU.
\subsection{Standard FLR} We consider three scenarios for the standard FLR and check the efficiency and the stability of the five algorithms, i.e., \textsf{MMGPU}, \textsf{MM}, \textsf{EFLA}, \textsf{SPG}, and \textsf{SB}, with four sets of regularization parameters $(\lambda_1,\lambda_2) \in \big\{ (0.1,0.1), (0.1,1),(1,0.1),(1,1) \big\}$.\footnote[2]{ These choices of regularization parameters are similar to those in \cite{Liu2010, Lin2011, Ye2011}.} We generate $n$ samples $X_1, X_2, \ldots, X_n$ from a $p$-dimensional multivariate normal distribution $N({\bf 0}, {\bf I}_p)$. The response variable ${\bf y}$ is generated from the model \begin{equation} \label{eqn:gener} {\bf y} = {\bf X} \beta + \epsilon, \end{equation} where $\epsilon \sim N\big(0,{\bf I}_n\big)$ and ${\bf X} = (X_1, X_2, \ldots, X_n)^T$. To examine the performance with the dimension $p$, we try $p = 200,~1000,~10000,~{\rm and}~ 20000$ for sample size $n=1000$. We generate 10 data sets according to the following scenarios ${\bf (C1)}$--${\bf (C3)}$. \begin{itemize} \item[{\bf (C1)}] Sparse case. Following \cite{Ye2011}, we set 41 coefficients of $\beta$ nonzero: \begin{equation} \nonumber \beta_j= \left\{ \begin{array}{lll} 2 & & {\rm for}~~j=1,2,\ldots,20, 121,\ldots,125\\ 3 & & {\rm for}~~j = 41\\ 1 & & {\rm for}~~j=71,\ldots,85\\ 0 & & {\rm otherwise} \end{array} \right. \end{equation} \item[{\bf (C2)}] Moderately sparse case. Following \cite{Lin2011}, we set $30$\% of the coefficients nonzero: \begin{equation} \nonumber \beta_j= \left\{ \begin{array}{lll} 1 & & {\rm for}~~ j=p/10+1,~ p/10+2,~\ldots,~2p/10\\ 2 & & {\rm for}~~ j=2p/10 + 1,~2p/10+2,~\ldots,~4p/10\\ 0 & & {\rm otherwise} \end{array} \right. \end{equation} \item[{\bf (C3)}] Dense case. We set all the coefficients nonzero: \begin{equation} \nonumber \beta_j= \left\{ \begin{array}{lll} 1 & & {\rm for}~~ j=1,~ 2,~\ldots,~5p/10\\ -1 & & {\rm for}~~ j=5p/10 + 1,~5p/10+2,~\ldots,~p\\ \end{array} \right.
\end{equation} \end{itemize} \begin{figure} \caption{Plots of the true coefficients in ${\bf (C1)}$--${\bf (C3)}$.} \label{tr_c1} \end{figure} \noindent These true coefficient structures are illustrated in Figure \ref{tr_c1}. We report the average computation time and the average number of iterations in Appendix C, Tables C.1--C.3, and summarize the average computation times in Figure \ref{res_c1}. Excluding the \textsf{MMGPU} (discussed below), the \textsf{EFLA} is generally the fastest in most of the scenarios considered. For the relatively dense cases (${\bf (C2)}$ and ${\bf (C3)}$) with a small penalty ($\lambda_1=\lambda_2=0.1$) and large dimensions ($p=10000, 20000$), the \textsf{MM} is the fastest. We see that the \textsf{MM} is $1.14 \sim 4.85$ times slower than the \textsf{EFLA}, and is comparable to the \textsf{SPG} and to the \textsf{SB} for high dimensions ($p=10000$ or $20000$), each of which contends with the \textsf{MM} for second place in the respective dimensions. Taking into account that the \textsf{EFLA} is only applicable to the standard FLR, we believe that the performance of the \textsf{MM} is acceptable. The benefit of parallelization is visible in large dimensions. The \textsf{MMGPU} requires only about 10\% of the computation time of the \textsf{MM} and is the fastest among all the algorithms considered when $p = 10000$ and $20000$. The \textsf{MMGPU} is $2 \sim 63$ times faster than the \textsf{EFLA}, which in turn is faster than the \textsf{MM}, \textsf{SPG}, and \textsf{SB}. When the dimension is small ($p=200$), however, the \textsf{MMGPU} is the slowest among the five algorithms. This is due to the memory transfer overhead between the CPU and the GPU. When $p$ is small, memory transfer occupies most of the computation time. As the dimension increases, the relative portion of the memory transfer in the computation time decreases, and the \textsf{MMGPU} becomes efficient. Focusing on the number of iterations, we see that the \textsf{MM} converges within a few tens of iterations whereas the other algorithms require up to hundreds of iterations, especially for large $p$. The number of iterations of the \textsf{MM} is also insensitive to the sparsity structure and the choice of $\lambda_1$ and $\lambda_2$. (\textsf{EFLA} and \textsf{SPG} need more iterations as the coefficients become dense; together with \textsf{SB}, they also require more iterations for small $\lambda_1$ or $\lambda_2$.) This stability in the number of iterations contributes to the performance gain of the \textsf{MMGPU}. The \textsf{MMGPU} speeds up each iteration, hence if the number of iterations does not depend much on the input, we can consistently expect a gain from parallelization. \begin{figure} \caption{Summary of the results of ${\bf (C1)}$--${\bf (C3)}$.} \label{res_c1} \end{figure} \subsection{Two-dimensional FLR} We consider two scenarios for the two-dimensional FLR, where the coefficients are indexed on a $q \times q$ lattice (hence $p = q^2$) and the penalty structure is imposed on the horizontally and vertically adjacent coefficients. The first is the two-dimensional FLR with a general design matrix ${\bf X}$. The second is image denoising, which is equivalent to the two-dimensional FLSA (${\bf X} = {\bf I}$). Note that the \textsf{EFLA} and \textsf{MMGPU} are not applicable to these scenarios. Instead, \textsf{pathFLSA} is considered for the second scenario, since it is known as one of the most efficient algorithms for the two-dimensional FLSA. For the example of the two-dimensional FLR problem with general ${\bf X}$, we generate 10 data sets according to the following scenario.
\begin{itemize} \item[{\bf (C4)}] Two-dimensional FLR with general design matrix, moderately sparse case. We set the $q \times q$ matrix $B = (b_{ij})$ as follows. \begin{equation} \nonumber b_{ij}= \left\{ \begin{array}{lll} 2 & & {\rm for}~~ \frac{q}{4}k +1 \le i,j \le \frac{q}{4}(k+1), ~~k=0,1,2,3\\ -2 & & {\rm for}~~ \frac{q}{4}k +1 \le i \le \frac{q}{4}(k+1),~ \frac{q}{4}(3-k) +1 \le j \le \frac{q}{4}(4-k),~~ k=0,1,2,3\\ 0 & & {\rm otherwise} \end{array}\right. \end{equation} The true coefficient vector $\beta$ is the vectorization of the matrix $B$, denoted by $\beta = {\rm vec}(B)$. The structure of the two-dimensional coefficient matrix $B$ is shown in Figure \ref{tr_c4} (a). \end{itemize} As in the previous scenarios, we generate ${\bf X}$ and ${\bf y}$ from equation (\ref{eqn:gener}) with the true coefficients defined in ${\bf (C4)}$. The candidate dimensions are $q= 16,~32,~64,~{\rm and}~128$ for sample size $n=1000$. We also consider four sets of regularization parameters $(\lambda_1,\lambda_2) \in \big\{ (0.1,0.1), (0.1,1),(1,0.1),(1,1) \big\}$ as in Section 5.1. The computation times for ${\bf (C4)}$ are shown in Figure \ref{tr_c4} (b) and (c). The \textsf{MM} becomes more competitive as the dimension $p~(= q^2)$ increases. The \textsf{MM} is $1.6 \sim 19.5$ times faster than the \textsf{SB} when $p >n$, has performance similar to the \textsf{SPG} when $p=10000$, and is $1.3 \sim 12.3$ times faster than the \textsf{SPG} when $p=20000$. A detailed report on the computation time can be found in Appendix C, Table C.4. It can be seen that the variation in the number of iterations across both the dimensions and the values of $(\lambda_1,\lambda_2)$ is the smallest for the \textsf{MM}. It can also be seen that the \textsf{MM} tends to converge within a few tens of iterations. This stability is consistent with the observations from ${\bf (C1)}$--${\bf (C3)}$. \begin{figure} \caption{Summary of ${\bf (C4)}$: (a) the true coefficient matrix $B$; (b), (c) computation times.} \label{tr_c4} \end{figure} \begin{itemize} \item[{\bf (C5)}] Image denoising (${\bf X} = {\bf I}$). We use a $256 \times 256$ gray-scale image (i.e., $q=256$, $p=q^2 = 65536$) of R.A. Fisher, as used in \cite{Friedman2007}. After standardizing pixel intensities, we add Gaussian noise with a standard deviation of $0.3$. We set $\lambda_1 = 0$ because the application is image denoising. \end{itemize} We consider two values of the regularization parameter, $\lambda_2 =0.1$ and $1$. We report the average computation times and the average numbers of iterations in Table \ref{summ_c5}. We present the true image, the noisy image, and the denoising results at $\lambda_2 = 0.1$ in Figure \ref{image_res}. Since computing the entire solution path of the two-dimensional FLSA is computationally infeasible, we terminate the \textsf{pathFLSA} at the desired value of $\lambda_2$. For small $\lambda_2$ ($\lambda_2 = 0.1$), the \textsf{MM} is slower than the \textsf{SPG} and \textsf{pathFLSA}, but much faster than the \textsf{SB}. As $\lambda_2$ increases, the \textsf{MM} becomes faster than \textsf{pathFLSA} since \textsf{pathFLSA} always has to start from $\lambda_2 = 0$. In addition, the \textsf{SPG} fails to obtain the solution at $\lambda_2 = 1$ (see Figure \ref{image_res_2}). The \textsf{MM} also exhibits a smaller variation in the number of iterations and in the objective function value at the obtained solution than the \textsf{SPG} and \textsf{SB}. Again, this stability is a consistent behavior of the \textsf{MM} throughout ${\bf (C1)}$--${\bf (C5)}$. \begin{table}[htb!] \caption{Summary of the computation time and the numbers of iterations of the \textsf{MM}, \textsf{SPG}, \textsf{SB}, and \textsf{pathFLSA} algorithms for case ${\bf (C5)}$.
N/A denotes that the method fails to obtain the optimal solution.} \label{summ_c5} \begin{minipage}{\textwidth} \centering {\small \begin{tabular}{|c|l|c|c|c|}\hline $(\lambda_1, \lambda_2)$ &Method & Computation time (sec.)& \# of iterations$^*$\let\thefootnote\relax\footnotetext{$*$ Since \textsf{pathFLSA} is a path algorithm, the number of iterations of \textsf{pathFLSA} is omitted.} & $f(\widehat{\beta})$\\\hline \multirow{4}*{$(0,0.1)$} & \textsf{MM} & 21.2082 & 26 & 2810.16 \\ & \textsf{SPG} & 1.9964 & 140 & 2817.31 \\ & \textsf{SB} & 217.2167 & 698 & 2822.40 \\ & \textsf{pathFLSA} & 3.2151 & - & 2809.96 \\ \hline \multirow{4}*{$(0,1)$} & \textsf{MM} & 48.7812 & 56 & 6920.43 \\ & \textsf{SPG}$^\vartriangle$\let\thefootnote\relax\footnotetext{$\vartriangle$ \textsf{SPG} stops after 11 iterations with $f(\widehat{\beta})= 35595.82$. } & N/A & N/A & N/A \\ & \textsf{SB} & 635.8651 & 4431 & 7308.07 \\ & \textsf{pathFLSA} & 359.2660 & - & 6919.06 \\ \hline \end{tabular} } \end{minipage} \end{table} \begin{figure} \caption{Image denoising with various algorithms for the FLR ($\lambda_2 = 0.1$)} \label{image_res} \end{figure} \begin{figure} \caption{Image denoising with various algorithms for the FLR ($\lambda_2 = 1$). SPG fails to obtain the optimal solution.} \label{image_res_2} \end{figure} \section{Conclusion} In this paper, we have proposed an MM algorithm for the fused lasso regression (FLR) problem with the generalized fusion penalty. The proposed algorithm is flexible in the sense that it can be applied to the FLR with a wide class of design matrices and penalty structures. It is stable in the sense that the convergence of the algorithm is not sensitive to the dimension of the problem, the choice of the regularization parameters, or the sparsity of the true model. Even when a special structure on the design matrix or the penalty is imposed, the MM algorithm shows performance comparable to that of the algorithms tailored to the special structure. These features make the proposed algorithm an attractive choice as an off-the-shelf FLR algorithm. Moreover, the performance of the MM algorithm can be improved by parallelizing it with a GPU when the standard fused lasso penalty is imposed and the dimension is large. Extension of the GPU algorithm to the two-dimensional FLR problem is a direction of future research. \section*{Supplementary Materials} \begin{description} \item[Source codes and data sets:] The supplementary materials contain the C, CUDA C, and MATLAB source codes and the data sets used in the numerical studies. All source codes run on the 64-bit Windows operating system. To run the CUDA C codes, a graphics device supporting CUDA (e.g., NVIDIA's GeForce GTX 465) is required. Detailed descriptions of the files are given in \texttt{Readme.txt}, which is also enclosed in the supplementary materials. (\texttt{FusedGPU\_supp.tar.zip}, GNU zipped tar file) \end{description} \appendix \section{Proofs} \subsection{Proof of Proposition 1}\label{sec:proofs:prop1} For part (i), write \[ f_{\epsilon} (\beta) = \displaystyle \frac{1}{2} \| {\bf y}- {\bf X} \beta \|_2^2 + \lambda_1 \sum_{j=1}^p a_j (\beta) + \lambda_2 \sum_{(j,k) \in E} b_{jk} (\beta), \] where \begin{align*} a_j(\beta) &= |\beta_j | - \epsilon \log \Big(1 + \frac{|\beta_j|}{\epsilon} \Big), \\ b_{jk}(\beta)&= |\beta_j - \beta_{k}| - \epsilon \log \Big(1 + \frac{|\beta_j - \beta_{k}|}{\epsilon} \Big), \end{align*} for $j,k=1,2,\ldots,p$. The functions $a_j(\beta)$ and $b_{jk}(\beta)$ are continuous and convex in $\beta \in \mathbb{R}^p$ and so is $f_{\epsilon} (\beta)$.
In addition, for each $j=1,2,\ldots,p$, we see that $\lim_{|\beta_j| \rightarrow \infty} f_{\epsilon} (\beta) = \infty$ with the other coordinates held fixed. This, along with the convexity of $f_{\epsilon}(\beta)$, shows that $\lim_{ \|\beta\|_2 \rightarrow \infty} f_{\epsilon} (\beta) = \infty$. For part (ii), recall that \begin{align*} 0 \le f(\beta) - f_\epsilon(\beta) & = \lambda_1 \sum_{j=1}^p \epsilon\log \left(1+ \frac{|\beta_j|}{\epsilon} \right) + \lambda_2 \sum_{(j,k)\in E} \epsilon \log \left(1+ \frac{|\beta_j-\beta_{k}|}{\epsilon} \right) \\ & \le \lambda_1 \sum_{j=1}^p \epsilon\log \left(1+ \frac{\sup_{\beta \in \bf{C}} \big| \beta_j \big|}{\epsilon} \right) + \lambda_2 \sum_{(j,k)\in E} \epsilon\log \left(1+ \frac{\sup_{\beta \in \bf{C}} \big| \beta_j-\beta_{k}\big|}{\epsilon} \right). \end{align*} The suprema on the rightmost side are achieved and finite because $\bf C$ is compact. Then the rightmost side monotonically decreases to $0$ as $\epsilon$ goes to $0$. To see part (iii), for a given $\epsilon>0$ and $x^0 \in \mathbb{R}$, consider the functions \begin{eqnarray} p_\epsilon(x) &=& |x|-\epsilon \log \big(1+ \frac{|x|}{\epsilon} \big) \nonumber \\ q_\epsilon(x|x^0) &=& |x^0|-\epsilon \log \bigg(1+ \frac{|x^0|}{\epsilon} \bigg) + \frac{x^2 - (x^0)^2}{2(|x^0|+\epsilon)}, \nonumber \end{eqnarray} and their difference $ h(y |y^0 ) = q_\epsilon(x|x^0) - p_\epsilon(x)$, where $y=|x|$ and $y^0=|x^0|$. The function $h(y|y^0)$ is twice differentiable, and simple algebra shows that it is increasing for $y \ge y^0$ and decreasing for $y < y^0$. We have \[ h\big(y \big| y^0 \big) = q_\epsilon(x|x^0) - p_\epsilon(x) \ge h \big(y^0 \big| y^0 \big)=0, \] where the equality holds if and only if $y=y^0$ (equivalently, $|x|=|x^0|$). This shows that $q_\epsilon(x|x^0)$ majorizes $p_\epsilon(x)$ at $x =x^0$, thus $g_\epsilon(\beta|\beta')$ majorizes $f_\epsilon(\beta)$ at $\beta =\beta'$. Finally, the majorizing function $g_\epsilon(\beta|\beta')$ is strictly convex for fixed $\beta'$ since \begin{equation} \nonumber \frac{\partial^2 g_\epsilon(\beta|\beta')}{\partial \beta^2} = {\bf X}^{T} {\bf X} + \lambda_1 {\bf A}' + \lambda_2 {\bf B}' \end{equation} is positive definite, where ${\bf A}'$ and ${\bf B}'$ are defined as in (\ref{eqn:lin-sys}). Hence it has a unique global minimum. \subsection{Proof of Lemma \ref{lemma:perturbed}}\label{sec:proofs:lemma_perturbed} Define the level set \[ \Omega \big( \epsilon, c \big) = \big\{ \beta \in \mathbb{R}^p \big| f_\epsilon \big( \beta \big) \le c \big\}, \] for every $\epsilon >0$ and $c>0$. Similarly define \[ \Omega \big( 0, c \big) = \big\{ \beta \in \mathbb{R}^p \big| f \big( \beta \big) \le c \big\}. \] Then, from part (i) of Proposition 1, $\Omega \big( \epsilon, c \big)$ is compact for every $\epsilon>0$ and $c>0$. Compactness of $\Omega \big( 0, c \big)$ follows from the uniform convergence of $f_{\epsilon}(\beta)$ to $f(\beta)$. Note that $\Omega\big(\epsilon, f( \widehat{\beta}) \big)$ is a non-empty compact set by the assumption that the solution set $\{ \beta ~|~ f(\beta) = f(\widehat{\beta})\}$ is non-empty. Then we have $f_{\epsilon}(\beta) \le f(\widehat{\beta})$ for every $\beta \in \Omega\big(\epsilon, f(\widehat{\beta}) \big)$. By construction, $f_{\epsilon} (\widehat{\beta}_{\epsilon}) \le f_{\epsilon}(\beta)$. Thus, \[ \min_{\gamma} f_{\epsilon} (\gamma)=f_{\epsilon} ( \widehat{\beta}_\epsilon ) \le f_{\epsilon} (\beta ) \le f \big(\widehat{\beta}\big) = \min_{\gamma} f\big(\gamma \big).
\] In other words, $\widehat{\beta}_\epsilon \in \Omega\big(\epsilon, f( \widehat{\beta})\big)$. Similarly we see $\widehat{\beta} \in \Omega\big(0, f( \widehat{\beta})\big)$, which is non-empty. Since $f(\widehat{\beta}_\epsilon) \ge f(\widehat{\beta})$ by the definition of $\widehat{\beta}$, \begin{align*} 0 \le f \big(\widehat{\beta}_\epsilon \big) - f \big(\widehat{\beta} \big) & \le f \big(\widehat{\beta}_\epsilon \big) - f_\epsilon \big(\widehat{\beta}_\epsilon \big) + f_\epsilon \big(\widehat{\beta} \big) - f \big(\widehat{\beta} \big) \nonumber \\ & \le \big|f \big(\widehat{\beta}_\epsilon \big) - f_\epsilon\big(\widehat{\beta}_\epsilon \big) \big| + \big|f_\epsilon \big(\widehat{\beta} \big) - f\big(\widehat{\beta} \big) \big|. \end{align*} The rightmost side of the above inequality goes to zero because $f_{\epsilon}(\beta)$ converges to $f(\beta)$ uniformly on both $\Omega\big(\epsilon,f(\widehat{\beta})\big)$ and $\Omega\big(0,f(\widehat{\beta})\big)$. Then, for a limit point $\beta^*$ of the sequence $\{ \widehat{\beta}_{\epsilon_n} \}_{n\ge 1}$ with $\epsilon_n \downarrow 0$, we see \[ \lim_{n \rightarrow \infty} f \big( \widehat{\beta}_{\epsilon_n}\big) = f\big( \beta^* \big) = f \big(\widehat{\beta} \big)= \min_{\gamma} f (\gamma ) \] by the continuity of $f(\beta)$, i.e., $\beta^*$ minimizes $f(\beta)$. \subsection{Proof of Lemma \ref{lemma:lyapunov}}\label{sec:proofs:lemma_lyapunov} We first claim that the recursive updating rule implicitly defined by the MM algorithm is continuous. \begin{claim}\label{claim:continuity} The function $M(\alpha) = \argmin_{\beta} g_{\epsilon}(\beta | \alpha)$ is continuous in $\alpha$, where $g_\epsilon(\beta|\alpha) $ is the proposed majorizing function of $f_\epsilon(\beta)$ at $\alpha$. \end{claim} \begin{proof} Consider a sequence $\{\alpha_k\}_{k\ge 0}$ converging to $\tilde{\alpha}$ with finite $f_\epsilon(\tilde{\alpha})$ and satisfying $f_{\epsilon}(\alpha_k) \le f_{\epsilon}(\alpha_0) < \infty$ for $k\ge 1$. We want to show that $M(\alpha_k)$ converges to $M(\tilde{\alpha})$. The descent property of the MM algorithm states that \[ f_\epsilon\big( M(\alpha_k)\big) \le g_{\epsilon} \big( M(\alpha_k ) \big| \alpha_k \big) \le g_{\epsilon} \big( \alpha_k \big| \alpha_k \big) = f_\epsilon\big(\alpha_k\big) \] for every $k$. Thus the sequence $\big\{ M(\alpha_k ), k=0,1,2,\ldots \big\}$ is a subset of the level set $\Omega \big( \epsilon, f_{\epsilon} (\alpha_0) \big) = \{ \beta \in \mathbb{R}^p \big| f_{\epsilon} (\beta ) \le f_{\epsilon} (\alpha_0) \}$. Since the level set $\Omega \big( \epsilon, f_{\epsilon} (\alpha_0) \big)$ is compact (see Section \ref{sec:proofs:lemma_perturbed}), the sequence $\{ M(\alpha_k)\}_{k\ge 0}$ is bounded. Hence we can find a convergent subsequence $\{ M(\alpha_{k_n} )\}_{n\ge 1}$; denote its limit point by $\tilde{M}$. Then, by the continuity of $g_{\epsilon} (\beta \big| \alpha )$ in both $\beta$ and $\alpha$, for all $\beta$, \[ g_{\epsilon}( \tilde{M} \big| \tilde{\alpha} ) = \overline{\lim}_{n \rightarrow \infty} g_{\epsilon}( M (\alpha_{k_n} ) | \alpha_{k_n} ) \le \overline{\lim}_{n \rightarrow \infty} g_{\epsilon}( \beta | \alpha_{k_n} ) =g_{\epsilon}(\beta \big| \tilde{\alpha} ). \] Thus $\tilde{M}$ minimizes $g_{\epsilon} \big(\beta \big| \tilde{\alpha} \big)$. Since the minimizer of $g_{\epsilon} \big(\beta \big| \alpha\big)$ is unique by Proposition \ref{prop:mm}, we have $\tilde{M} = M \big(\tilde{\alpha} \big)$, i.e., $M(\alpha_{k_n})$ converges to $M(\tilde{\alpha})$.
\end{proof} We then state a result on the global convergence of the updating rule. \begin{claim}\label{claim:global} (Convergence Theorem A, \citet[p.91]{Zangwill1969}) Let the point-to-set map $M: X \rightarrow X$ determine an algorithm that, given a point $x_1 \in X$, generates the sequence $\{ x_k\}_{k=1}^\infty$. Also let a solution set $\Gamma \subset X$ be given. Suppose that: (i) all points $x_k$ are in a compact set $C \subset X$; (ii) there is a continuous function $u : X \rightarrow \mathbb{R}$ such that (a) if $x \notin \Gamma$, $u(y) > u(x)$ for all $y \in M(x)$, and (b) if $x \in \Gamma$, then either the algorithm terminates or $u(y) \ge u(x)$ for all $y \in M(x)$; (iii) the map $M$ is closed at $x$ if $x \notin \Gamma$. Then either the algorithm stops at a solution, or the limit of any convergent subsequence is a solution. \end{claim} \noindent Closedness of point-to-set maps is an extension of continuity of point-to-point maps: a point-to-set map $A:V \rightarrow V$ on a set $V$ is said to be closed if (a) $z^k \rightarrow z^{\infty}$, (b) $y^k \in A(z^k)$, and (c) $y^k \rightarrow y^{\infty}$ imply $y^{\infty} \in A(z^{\infty})$. The map $A$ is closed on a set $X \subset V$ if it is closed at each $z \in X$ \citep[p.88]{Zangwill1969}. The proof of this claim is similar to that in \cite{Vaida2005}. Finally we are ready to show the main result of this section. \begin{proof}[Proof of Lemma \ref{lemma:lyapunov}] We apply Claim \ref{claim:global} with $M(\alpha) = \argmin_\beta g_{\epsilon}(\beta|\alpha)$, $\Gamma = \{ \beta \,|\, 0 \in \partial_{\beta} f_\epsilon(\beta) \}$, and $u(\beta) = - f_\epsilon(\beta)$, where $\partial_{\beta} f_\epsilon(\beta)$ is the sub-differential of $f_{\epsilon}(\beta)$ at $\beta$. We check the three conditions for Claim \ref{claim:global} as follows. Define the level set of $f_\epsilon(\beta)$ as $\Omega \big( \epsilon, f_{\epsilon} (\widehat{\beta}^{(0)}) \big) = \big\{ \beta \in \mathbb{R}^p \big| f_{\epsilon} (\beta ) \le f_{\epsilon} (\widehat{\beta}^{(0)}) \big\}$, which is compact (see Section \ref{sec:proofs:lemma_perturbed}). Since $f_{\epsilon}(\widehat{\beta}^{(r+1)}) \le f_{\epsilon}(\widehat{\beta}^{(r)})$ for every $r = 0,1,2,\ldots$, $\{ \widehat{\beta}^{(r)}\}_{r\ge 0}$ is a subset of $ \Omega \big( \epsilon, f_{\epsilon} (\widehat{\beta}^{(0)}) \big)$ (see Section \ref{sec:proofs:lemma_perturbed}) and condition (i) of Claim \ref{claim:global} is satisfied. Claim \ref{claim:continuity} states that $M(\alpha)$ is a continuous point-to-point map in $\alpha$. That is, $M(\alpha)$ is a closed map on any compact set of $\mathbb{R}^p$, and in particular on $\Gamma^c$. Thus condition (iii) of Claim \ref{claim:global} is met. Finally, condition (ii) is guaranteed by the strict convexity of $g_{\epsilon}(\beta|\alpha)$ (see Proposition \ref{prop:mm}) and the definition of $M(\alpha)$. Therefore all the limit points of $\{ \widehat{\beta}^{(r)}\}_{r\ge 0}$ are stationary points of $f_{\epsilon}(\beta)$. \end{proof} \section{Choice of $\mu$ in the SB algorithm}\label{sec:app_SB} In the SB algorithm, the current solution $\widehat{\beta}^{(r+1)}$ is obtained from the solution of the following linear system \begin{equation} \nonumber \big( {\bf X}^T {\bf X} + \mu {\bf I} + \mu {\bf D}^T {\bf D} \big) \beta = {\bf c}^{(r)}(\mu). \end{equation} The additional parameter $\mu$ affects the convergence rate, but there is no optimal rule for its choice in this problem \citep{Ghadimi2012}.
Thus, \cite{Ye2011} suggest a pre-trial procedure to choose $\mu$ as one of $\{0.2, 0.4, 0.6, 0.8, 1 \} \times \|{\bf y}\|_2 $ by testing computation times for given $(\lambda_1, \lambda_2)$. In addition to this recipe, we consider $\frac{1}{n}\|{\bf y}\|_2$ and $\frac{1}{n^2}\|{\bf y}\|_2$ as candidate values of $\mu$. In our design of examples, we observe that the values other than $\frac{1}{n}\|{\bf y}\|_2$ makes the SB algorithm prematurely stop before reaching the optimal solution. The value $\frac{1}{n}\|{\bf y}\|_2$ also leads the best convergence rate from our pretrials in Figure \ref{SB_mu}. Thus, we set $\mu = \frac{1}{n}\|{\bf y}\|_2$ in the numerical studies. \begin{figure} \caption{The number of iterations in SB algorithm as $\mu$ changes for $(\lambda_1,\lambda_2) = (0.1,0.1)$} \label{SB_mu} \end{figure} \section{Tables of results for {\bf (C1)}--{\bf (C4)}} \begin{landscape} \begin{table}[!htb] \centering \caption*{Table C.1: Summary of ${\bf (C1)}$ with computation times and the numbers of iterations for $n=1000$.} \label{tb_c1} { \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{$\lambda_1$} & \multirow{2}{*}{$\lambda_2$} & \multirow{2}{*}{$p$} & \multicolumn{5}{|c|}{Computation time (sec.)} & \multicolumn{5}{|c|}{Number of iterations}\\ \cline{4-13} & & & MMGPU & MM & EFLA & SPG & SB & MMGPU & MM & EFLA & SPG & SB\\ \hline \multirow{4}{*}{0.1} & \multirow{4}{*}{0.1} & 200 & 0.2858 & 0.0090 & 0.0093 & 0.0152 & 0.0089 & 3.0 & 3.0 & 27.8 & 42.5 & 3.0\\ & & 1000 & 2.6552& 1.5070& 0.3085& 0.5799& 2.1276& 22.6& 23.0& 258.9& 405.3& 15.1\\ & & 10000 & 7.2544& 66.9573& 32.8320& 102.5558& 99.1599& 84.1& 83.9& 1067.8& 1756.1& 158.2\\ & & 20000 & 13.3395& 133.2365& 77.4856& 281.4834& 209.6247& 90.3& 90.3& 1287.3& 2437.4& 206.6\\ \hline \multirow{4}{*}{0.1} & \multirow{4}{*}{1.0} & 200 & 0.2207 & 0.0092 & 0.0067 & 0.0273 & 0.0656 & 4.0 & 4.0 & 24.3 & 73.8 & 28.7\\ & & 1000 & 2.2735& 1.1956& 0.1987& 0.4140& 10.0084& 26.3& 26.4& 166.9& 285.8& 75.3\\ & & 10000 & 5.3980& 51.1480& 19.3404& 58.0435& 117.2644& 86.3& 86.1& 618.7& 988.9& 186.9\\ & & 20000 & 9.8197& 101.1593& 49.4796& 163.0641& 238.4468& 94.2& 93.7& 808.7& 1411.3& 234.9\\ \hline \multirow{4}{*}{1.0} & \multirow{4}{*}{0.1} & 200 & 0.1469 & 0.0074 & 0.0069 & 0.0168 & 0.0150 & 3.2 & 3.2 & 26.1 & 43.0 & 5.8\\ & & 1000 & 1.6121& 0.8684& 0.2128& 0.4359& 9.2411& 23.2& 23.0& 166.5& 266.7& 69.7\\ & & 10000 & 7.4584& 75.5762& 15.5698& 48.1091& 54.5518& 82.9& 82.9& 506.3& 822.1& 87.7\\ & & 20000 & 14.9428& 163.5872& 44.3698& 133.7532& 152.5818& 92.8& 92.8& 737.6& 1155.5& 152.4\\ \hline \multirow{4}{*}{1.0} & \multirow{4}{*}{1.0} & 200 & 0.1477 & 0.0072 & 0.0068 & 0.0157 & 0.0873 & 4.0 & 4.0 & 25.0 & 42.2 & 37.8\\ & & 1000 & 1.2295& 0.6092& 0.1527& 0.2986& 10.1666& 24.4& 24.4& 128.1& 209.6& 79.2\\ & & 10000 & 4.5901& 44.8417& 14.6557& 32.6244& 67.7480& 82.5& 82.4& 465.8& 554.9& 110.9\\ & & 20000 & 8.4576& 90.2439& 35.3691& 96.1971& 199.6160& 88.6& 88.5& 575.4& 830.1& 201.5\\ \hline \end{tabular} } \end{table} \begin{table}[!htb] \centering \caption*{Table C.2: Summary of ${\bf (C2)}$ with computation times and the numbers of iterations for $n=1000$. The gray cells denote that the EFLA fails to reach the optimal solution for one or two samples. We report the average computation times of the EFLA removed the failed cases. 
} \label{tb_c2} { \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{$\lambda_1$} & \multirow{2}{*}{$\lambda_2$} & \multirow{2}{*}{$p$} & \multicolumn{5}{|c|}{Computation time (sec.)} & \multicolumn{5}{|c|}{Number of iterations}\\ \cline{4-13} & & & MMGPU & MM & EFLA & SPG & SB & MMGPU & MM & EFLA & SPG & SB\\ \hline \multirow{4}{*}{0.1} & \multirow{4}{*}{0.1} & 200 & 0.2247 & 0.0108 & 0.0077 & 0.0170 & 0.0088 & 3.0 & 3.0 & 25.0 & 46.8 & 3.0\\ & & 1000 & 3.0984& 1.4673& 0.3732& 0.7777& 1.6792& 22.2& 21.8& 311.8& 508.8& 19.9\\ & & 10000 & 4.1122& 45.8289& \cellcolor{gray!20}91.7927& 263.7253& 266.5620& 36.3& 35.5& \cellcolor{gray!20}2976.3& 4527.5& 466.7\\ & & 20000 & 4.4272& 52.7964& \cellcolor{gray!20}242.0749& 821.5516& 704.5284& 27.2& 27.6& \cellcolor{gray!20} 4043.3& 7108.9& 776.4\\ \hline \multirow{4}{*}{0.1} & \multirow{4}{*}{1.0} & 200 & 0.2276 & 0.0115 & 0.0072 & 0.0168 & 0.0595 & 4.2 & 4.2 & 25.4 & 41.9 & 26.8\\ & & 1000 & 3.6282& 1.5755& 0.2211& 0.5595& 4.7977& 28.1& 28.2& 180.5& 359.1& 57.7\\ & & 10000 & 10.3537& 109.7741& 47.8311& 132.3191& 107.4803& 53.9& 53.9& 1526.8& 2272.4& 188.4\\ & & 20000 & 23.6943& 271.3948& 236.7397& 503.1258& 232.9924& 63.9& 63.7& 3915.2& 4347.2& 256.3\\ \hline \multirow{4}{*}{1.0} & \multirow{4}{*}{0.1} & 200 & 0.1279 & 0.0086 & 0.0061 & 0.0171 & 0.0081 & 3.1 & 3.1 & 21.8 & 44.6 & 2.7\\ & & 1000 & 1.7817& 0.9148& 0.1901& 0.5022& 2.9606& 20.3& 20.3& 160.7& 325.7& 34.9\\ & & 10000 & 10.8871& 127.3424& 51.5586& 169.2509& 142.5682& 91.6& 91.5& 1665.9& 2910.1& 249.8\\ & & 20000 & 20.1332& 244.0547& 141.8118& 526.4848& 313.0687& 88.7& 89.7& 2383.4& 4556.9& 344.7\\ \hline \multirow{4}{*}{1.0} & \multirow{4}{*}{1.0} & 200 & 0.1467 & 0.0084 & 0.0065 & 0.0176 & 0.0633 & 4.2 & 4.2 & 23.4 & 44.6 & 28.8\\ & & 1000 & 1.7161& 0.8342& 0.1476& 0.4186& 3.0164& 22.5& 22.5& 122.6& 269.4& 36.0\\ & & 10000 & 7.7781& 91.8040& 38.7895& 120.4112& 64.9480& 92.1& 92.2& 1226.0& 2077.8& 114.0\\ & & 20000 & 17.6899& 212.5753& 106.2446& 339.2643& 174.6900& 102.3& 102.3& 1756.6& 2929.0& 192.6\\ \hline \end{tabular} } \end{table} \begin{table}[!htb] \centering \caption*{Table C.3: Summary of ${\bf (C3)}$ with computation times and the numbers of iterations for $n=1000$. The gray cells denote that the EFLA fails to reach the optimal solution for one sample. 
We report the average computation times of the EFLA removed failed case.} \label{tb_c3} { \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{$\lambda_1$} & \multirow{2}{*}{$\lambda_2$} & \multirow{2}{*}{$p$} & \multicolumn{5}{|c|}{Computation time (sec.)} & \multicolumn{5}{|c|}{Number of iterations}\\ \cline{4-13} & & & MMGPU & MM & EFLA & SPG & SB & MMGPU & MM & EFLA & SPG & SB\\ \hline \multirow{4}{*}{0.1} & \multirow{4}{*}{0.1} & 200 & 0.2511 & 0.0171 & 0.0081 & 0.0170 & 0.0095 & 3.0 & 3.0 & 26.7 & 45.2 & 3.0\\ & & 1000 & 2.9360& 1.5933& 0.3793& 0.8082& 1.2007& 21.3& 21.9& 327.4& 511.3& 13.2\\ & & 10000 & 3.7909& 42.9165& 90.8098& 282.7641& 286.2912& 34.7& 34.0& 3025.2& 4789.1& 498.3\\ & & 20000 & 4.0806& 48.1163& 248.1919& 871.9119& 756.4445& 26.0& 25.8& 4108.3& 7553.7& 824.2\\ \hline \multirow{4}{*}{0.1} & \multirow{4}{*}{1.0} & 200 & 0.3401 & 0.0133 & 0.0069 & 0.0174 & 0.0662 & 4.2 & 4.2 & 25.3 & 44.9 & 27.4\\ & & 1000 & 3.9890& 1.8233& 0.2031& 0.5462& 3.8157& 27.4& 27.3& 170.8& 352.3& 43.5\\ & & 10000 & 15.0492& 180.0133& 54.7479& 144.0007& 108.6189& 91.8& 91.9& 1795.4& 2446.6& 189.3\\ & & 20000 & 28.2846& 343.3138& 169.7143& 505.0450& 238.1494& 96.8& 97.0& 2762.1& 4376.6& 259.7\\ \hline \multirow{4}{*}{1.0} & \multirow{4}{*}{0.1} & 200 & 0.0890 & 0.0058 & 0.0072 & 0.0178 & 0.0130 & 3.4 & 3.4 & 26.5 & 46.3 & 5.0\\ & & 1000 & 1.6862& 0.8142& 0.3392& 0.8342& 2.3927& 23.8& 23.8& 287.1& 555.2& 24.8\\ & & 10000 & 10.7328& 127.3884& 50.5020& 176.1017& 127.5324& 88.6& 88.6& 1681.8& 2985.3& 222.3\\ & & 20000 & 19.5732& 239.8993& 150.7861& 520.9941& 307.9786& 85.4& 85.8& 2497.4& 4511.6& 335.6\\ \hline \multirow{4}{*}{1.0} & \multirow{4}{*}{1.0} & 200 & 0.1840 & 0.0085 & 0.0067 & 0.0169 & 0.0094 & 4.4 & 4.4 & 24.9 & 44.3 & 3.4\\ & & 1000 & 1.4819& 0.7155& 0.1548& 0.4869& 1.9491& 19.7& 19.6& 124.3& 308.7& 23.0\\ & & 10000 & 8.1713& 97.7493& \cellcolor{gray!20} 38.0236& 129.9449& 89.1512& 95.4& 95.5& \cellcolor{gray!20} 1240.0& 2206.9& 155.5\\ & & 20000 & 17.7725& 219.5605& 110.7899& 365.3513& 173.9217& 101.9& 101.9& 1799.7& 3156.2& 189.4\\ \hline \end{tabular} } \end{table} \begin{table}[!htb] \centering \caption*{Table C.4: Summary of ${\bf (C4)}$ with computation times and the numbers of iterations for $n=1000$.} \label{tb_c4} { \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{$\lambda_1$} & \multirow{2}{*}{$\lambda_2$} & \multirow{2}{*}{$q \times q$} & \multicolumn{3}{|c|}{Computation time (sec.)} &\multicolumn{3}{|c|}{Number of iterations}\\ \cline{4-9} & & & MM & SPG & SB & MM & SPG & SB \\ \hline \multirow{4}{*}{0.1} & \multirow{4}{*}{0.1} & $16 \times 16$ & 0.0477 & 0.0252 & 0.0230 & 5.0 & 54.9 & 2.8\\ & & $32 \times 32$ & 1.9948& 1.1403& 1.9887& 21.3& 603.6& 16.5\\ & & $64 \times 64$ & 45.1773& 68.6246& 164.9881& 63.7& 2745.9& 456.0\\ & & $128 \times 128$ & 50.2078& 615.4028& 979.8359& 28.8& 6414.6& 1066.6\\ \hline \multirow{4}{*}{0.1} & \multirow{4}{*}{1.0} & $16 \times 16$ & 0.0549 & 0.0234 & 0.0971 & 8.3 & 53.3 & 26.7\\ & & $32 \times 32$ & 1.1902& 0.5846& 4.1854& 23.0& 294.8& 37.9\\ & & $64 \times 64$ & 15.1176& 23.0897& 33.2731& 39.6& 923.6& 90.8\\ & & $128 \times 128$ & 72.8389& 208.2940& 119.7563& 49.8& 2169.3& 123.5\\ \hline \multirow{4}{*}{1.0} & \multirow{4}{*}{0.1} & $16 \times 16$ & 0.0612 & 0.0232 & 0.0180 & 7.3 & 56.2 & 3.0\\ & & $32 \times 32$ & 1.4630& 0.6096& 2.3580& 20.6& 326.3& 21.5\\ & & $64 \times 64$ & 48.5678& 46.7797& 92.2762& 80.1& 1876.4& 254.2\\ & & $128 \times 128$ & 241.6816& 343.2788& 657.3494& 85.3& 3579.7& 714.9\\ \hline 
\multirow{4}{*}{1.0} & \multirow{4}{*}{1.0} & $16 \times 16$ & 0.0476 & 0.0221 & 0.1006 & 9.5 & 51.5 & 27.3\\ & & $32 \times 32$ & 0.8718& 0.4960& 3.3169& 20.5& 273.1& 30.3\\ & & $64 \times 64$ & 30.0718& 24.6371& 35.1351& 63.5& 978.6& 95.9\\ & & $128 \times 128$ & 189.6904& 237.8033& 233.3495& 109.0& 2476.8& 248.1\\ \hline \end{tabular} } \end{table} \end{landscape} \end{document}
math
\begin{document} \title[On the third-order Horadam and geometric mean sequences]{On the third-order Horadam and geometric mean sequences} \author[G. Cerda-Morales]{Gamaliel Cerda-Morales} \address{Instituto de Matem\'aticas, Pontificia Universidad Cat\'olica de Valpara\'iso, Blanco Viel 596, Valpara\'iso, Chile.} \email{[email protected]} \subjclass{11B37, 11B39, 11K31.} \keywords{Generalized Fibonacci number, generalized Tribonacci number, geometric mean sequence, third-order Horadam number.} \begin{abstract} This paper, in considering aspects of the geometric mean sequence, offers new results connecting generalized Tribonacci and third-order Horadam numbers which are established and then proved independently. \end{abstract} \maketitle \section {Introduction} The Horadam numbers have many interesting properties and applications in many fields of science (see, e.g., \cite{Lar1,Lar2}). The Horadam numbers $H_{n}(a,b;r,s)$ or $H_{n}$ are defined by the recurrence relation \begin{equation}\label{e1} H_{0}=a,\ H_{1}=b,\ H_{n+2}=rH_{n+1}+sH_{n},\ n\geq0. \end{equation} Another important sequence is the generalized Fibonacci sequence $\{h_{n}^{(3)}\}_{n\in \mathbb{N}}$. This sequence is defined by the recurrence relation $h_{n+2}=rh_{n+1}+sh_{n}$, with $h_{0}=0$, $h_{1}=1$ and $n\geq0$. In \cite{Sha,Wa} the Horadam recurrence relation (\ref{e1}) is extended to higher order recurrence relations and the basic list of identities provided by A. F. Horadam is expanded and extended to several identities for some of the higher order cases. In fact, third-order Horadam numbers, $\{H_{n}^{(3)}(a,b,c;r,s,t)\}$, and generalized Tribonacci numbers, $\{h_{n}^{(3)}(0,1,r;r,s,t)\}_{n\geq0}$, are defined by \begin{equation}\label{ec:5} H_{n+3}^{(3)}=rH_{n+2}^{(3)}+sH_{n+1}^{(3)}+tH_{n}^{(3)},\ H_{0}^{(3)}=a,\ H_{1}^{(3)}=b,\ H_{2}^{(3)}=c,\ n\geq0, \end{equation} and \begin{equation}\label{ec:6} h_{n+3}^{(3)}=rh_{n+2}^{(3)}+sh_{n+1}^{(3)}+th_{n}^{(3)},\ h_{0}^{(3)}=0,\ h_{1}^{(3)}=1,\ h_{2}^{(3)}=r,\ n\geq0, \end{equation} respectively. Some of the following properties given for third-order Horadam numbers and generalized Tribonacci numbers are revisited in this paper (for more details, see \cite{Cer1,Sha,Wa}). \begin{equation}\label{e4} H_{n+m}^{(3)}=h_{n}^{(3)}H_{m+1}^{(3)}+\left(sh_{n-1}^{(3)}+th_{n-2}^{(3)}\right)H_{m}^{(3)}+th_{n-1}^{(3)}H_{m-1}^{(3)}, \end{equation} \begin{equation}\label{e5} \left(h_{n}^{(3)}\right)^{2}+s\left(h_{n-1}^{(3)}\right)^{2}+2th_{n-1}^{(3)}h_{n-2}^{(3)}=h_{2n-1}^{(3)} \end{equation} and \begin{equation}\label{ec5} \left(H_{n}^{(3)}\right)^{2}+s\left(H_{n-1}^{(3)}\right)^{2}+2tH_{n-1}^{(3)}H_{n-2}^{(3)}=\left\lbrace \begin{array}{c} cH_{2n-2}^{(3)}+\left(sb+ta\right)H_{2n-3}^{(3)}\\ +tbH_{2n-4}^{(3)}, \end{array} \right\rbrace, \end{equation} where $n\geq 2$ and $m\geq 1$. As the elements of this Tribonacci-type number sequence provide third order iterative relation, its characteristic equation is $x^{3}-rx^{2}-sx-t=0$, whose roots are $\alpha=\frac{r}{3}+A+B$, $\omega_{1}=\frac{r}{3}+\epsilon A+\epsilon^{2} B$ and $\omega_{2}=\frac{r}{3}+\epsilon^{2}A+\epsilon B$, where $$A=\sqrt[3]{\frac{r^{3}}{27}+\frac{rs}{6}+\frac{t}{2}+\sqrt{\Delta}},\ B=\sqrt[3]{\frac{r^{3}}{27}+\frac{rs}{6}+\frac{t}{2}-\sqrt{\Delta}},$$ with $\Delta=\Delta(r,s,t)=\frac{r^{3}t}{27}-\frac{r^{2}s^{2}}{108}+\frac{rst}{6}-\frac{s^{3}}{27}+\frac{t^{2}}{4}$ and $\epsilon=-\frac{1}{2}+\frac{i\sqrt{3}}{2}$. 
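For example, with $(r,s,t)=(1,3,9)$ (the parameter choice behind the sequence $\{T_{n}^{(3)}\}$ used later in this paper) one finds $\Delta=\frac{1}{3}-\frac{1}{12}+\frac{9}{2}-1+\frac{81}{4}=24>0$, and the characteristic equation factors as $$x^{3}-x^{2}-3x-9=(x-3)(x^{2}+2x+3)=0,$$ so that $\alpha=3$ and $\omega_{1,2}=-1\pm i\sqrt{2}$; we record this simple illustrative computation here since these values reappear below.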
Throughout this paper we assume $\Delta>0$; then the cubic equation $x^{3}-rx^{2}-sx-t=0$ has one real and two nonreal solutions, the latter being complex conjugates. Thus, the Binet formula for the third-order Horadam numbers can be expressed as: \begin{equation}\label{eq:8} H_{n}^{(3)}=\frac{P\alpha^{n}}{(\alpha-\omega_{1})(\alpha-\omega_{2})}-\frac{Q\omega_{1}^{n}}{(\alpha-\omega_{1})(\omega_{1}-\omega_{2})}+\frac{R\omega_{2}^{n}}{(\alpha-\omega_{2})(\omega_{1}-\omega_{2})}, \end{equation} where the coefficients are $P=c-(\omega_{1}+\omega_{2})b+\omega_{1}\omega_{2}a$, $Q=c-(\alpha+\omega_{2})b+\alpha\omega_{2}a$ and $R=c-(\alpha+\omega_{1})b+\alpha\omega_{1}a$. In particular, if $a=0$, $b=1$ and $c=r$, we obtain $H_{n}^{(3)}=h_{n}^{(3)}$. In this case, $P=\alpha$, $Q=\omega_{1}$ and $R=\omega_{2}$ in Eq. (\ref{eq:8}). In fact, the third-order Horadam sequence is a generalization of well-known sequences such as the Tribonacci, Padovan, Narayana and third-order Jacobsthal sequences (see \cite{Cer}). Consider the (scaled) geometric mean sequence $\{g_{n}^{(3)}(a,b,c;\epsilon)\}_{n=0}^{\infty}$ defined, given $g_{0}^{(3)}=a$, $g_{1}^{(3)}=b$ and $g_{2}^{(3)}=c$, through the recurrence \begin{equation}\label{e0} g_{n+3}^{(3)}=\epsilon \sqrt[3]{g_{n+2}^{(3)}g_{n+1}^{(3)}g_{n}^{(3)}},\ n\geq0, \end{equation} where $\epsilon \in \mathbb{N}$ is a scaling constant. In this paper we assume $\epsilon=1$ and begin by finding the growth rate of the sequence \begin{equation}\label{e00} \left\lbrace g_{n}^{(3)}\right\rbrace=\left\lbrace a, b, c, \left(abc\right)^{\frac{1}{3}}, \left(ab^{4}c^{4}\right)^{\frac{1}{9}}, \left(a^{4}b^{7}c^{16}\right)^{\frac{1}{27}}, \left(a^{16}b^{28}c^{37}\right)^{\frac{1}{81}}, \cdots \right\rbrace \end{equation} using two alternative approaches, one of which is routine, the other being based on a connection between this sequence and the generalized Tribonacci sequence $\{T_{n}^{(3)}\}=\{H_{n}^{(3)}(0,0,1;1,3,9)\}$ defined by \begin{equation}\label{ecua:6} T_{n+3}^{(3)}=T_{n+2}^{(3)}+3T_{n+1}^{(3)}+9T_{n}^{(3)},\ T_{0}^{(3)}=T_{1}^{(3)}=0,\ T_{2}^{(3)}=1, \end{equation} a connection discernible in the powers of $a$, $b$ and $c$ in Eq. (\ref{e00}). Other results follow accordingly, with generalized Tribonacci numbers expressed in terms of families of parameterized third-order Horadam numbers in two particular identities that are established in different ways. \section {Growth Rate of $\{g_{n}^{(3)}(a,b,c;1)\}_{n=0}^{\infty}$} We begin by showing that the growth rate of the sequence $\{g_{n}^{(3)}(a,b,c;1)\}_{n=0}^{\infty}$ is 1. That is to say, we have the following. \begin{thm}\label{t1} The sequence $\{g_{n}^{(3)}(a,b,c;1)\}_{n=0}^{\infty}$ grows according to \begin{equation}\label{p1} \lim_{n\rightarrow \infty}\frac{g_{n+1}^{(3)}(a,b,c;1)}{g_{n}^{(3)}(a,b,c;1)}=1. \end{equation} \end{thm} As alluded to in the Introduction, we approach the proof of Theorem \ref{t1} in two ways. \textbf{Method I}. This is elementary, and parallels that seen in \cite{Shi} for the case $\{g_{n}^{(2)}(a,b;1)\}_{n=0}^{\infty}$. \begin{proof} Writing $\mu=\lim_{n\rightarrow \infty}\frac{g_{n+1}^{(3)}}{g_{n}^{(3)}}\in \mathbb{R}^{+}$ and using Eq. (\ref{e0}), we have \begin{align*} \mu^{3}&=\left(\lim_{n\rightarrow \infty}\frac{g_{n+3}^{(3)}}{g_{n+2}^{(3)}}\right)^{3}\\ &=\lim_{n\rightarrow \infty}\frac{g_{n+2}^{(3)}g_{n+1}^{(3)}g_{n}^{(3)}}{\left(g_{n+2}^{(3)}\right)^{3}}\\ &=\lim_{n\rightarrow \infty}\frac{g_{n+1}^{(3)}g_{n}^{(3)}g_{n+1}^{(3)}}{g_{n+2}^{(3)}g_{n+2}^{(3)}g_{n+1}^{(3)}}=\frac{1}{\mu^{3}}.
\end{align*} Hence $\mu^{6}=1$, and since $\mu>0$ it follows that $\mu=1$. \end{proof} \textbf{Method II}. This is rather more interesting, since it relies on a closed form for the terms of the sequence $\{g_{n}^{(3)}(a,b,c;1)\}_{n=0}^{\infty}$ in which generalized Tribonacci numbers make an appearance. We first establish a preliminary result. \begin{lem}\label{lem} For $n\geq 0$, \begin{equation}\label{gam} g_{n+2}^{(3)}(a,b,c;1)=\left(a^{T_{n+1}^{(3)}}b^{T_{n+1}^{(3)}+3T_{n}^{(3)}}c^{T_{n+2}^{(3)}}\right)^{3^{-n}}, \end{equation} where $T_{n}^{(3)}$ is as in Eq. (\ref{ecua:6}). \end{lem} \begin{proof} To prove Eq. (\ref{gam}), we use induction on $n$. If $n=0$, the claim is immediate since $T_{0}^{(3)}=T_{1}^{(3)}=0$, $T_{2}^{(3)}=1$ and $$g_{2}^{(3)}(a,b,c;1)=c=\left(a^{T_{1}^{(3)}}b^{T_{1}^{(3)}+3T_{0}^{(3)}}c^{T_{2}^{(3)}}\right)^{3^{-0}};$$ the cases $n=1$ and $n=2$ are verified in the same way, directly from Eq. (\ref{e00}). Let us assume that Eq. (\ref{gam}) holds for all values $m$ less than or equal to $n$. Now we have to show that the result is true for $n+1$: \begin{align*} \left(g_{(n+1)+2}^{(3)}\right)^{3}&=g_{n+2}^{(3)}g_{n+1}^{(3)}g_{n}^{(3)}\\ &=\left(a^{T_{n+1}^{(3)}}b^{T_{n+1}^{(3)}+3T_{n}^{(3)}}c^{T_{n+2}^{(3)}}\right)^{3^{-n}}\times \left(a^{T_{n}^{(3)}}b^{T_{n}^{(3)}+3T_{n-1}^{(3)}}c^{T_{n+1}^{(3)}}\right)^{3^{-(n-1)}}\\ &\ \ \times \left(a^{T_{n-1}^{(3)}}b^{T_{n-1}^{(3)}+3T_{n-2}^{(3)}}c^{T_{n}^{(3)}}\right)^{3^{-(n-2)}}\\ &=\left\lbrace\begin{array}{c}a^{T_{n+1}^{(3)}+3T_{n}^{(3)}+9T_{n-1}^{(3)}}\\ \times b^{\left(T_{n+1}^{(3)}+3T_{n}^{(3)}+9T_{n-1}^{(3)}\right)+3\left(T_{n}^{(3)}+3T_{n-1}^{(3)}+9T_{n-2}^{(3)}\right)}\\ \times c^{T_{n+2}^{(3)}+3T_{n+1}^{(3)}+9T_{n}^{(3)}}\end{array}\right\rbrace^{3^{-n}}\\ &=\left(a^{T_{n+2}^{(3)}}b^{T_{n+2}^{(3)}+3T_{n+1}^{(3)}}c^{T_{n+3}^{(3)}}\right)^{3^{-n}}, \end{align*} using Eq. (\ref{ecua:6}) in the last step. Taking cube roots gives Eq. (\ref{gam}) with $n$ replaced by $n+1$, which completes the induction. \end{proof} The proof of Theorem \ref{t1} is now immediate. \begin{proof} From Eq. (\ref{gam}), we have \begin{align*} \lim_{n\rightarrow \infty}\frac{g_{n+3}^{(3)}}{g_{n+2}^{(3)}}&=\lim_{n\rightarrow \infty}\frac{\left(a^{T_{n+2}^{(3)}}b^{T_{n+2}^{(3)}+3T_{n+1}^{(3)}}c^{T_{n+3}^{(3)}}\right)^{3^{-(n+1)}}}{\left(a^{T_{n+1}^{(3)}}b^{T_{n+1}^{(3)}+3T_{n}^{(3)}}c^{T_{n+2}^{(3)}}\right)^{3^{-n}}}\\ &=\lim_{n\rightarrow \infty}\left\lbrace \begin{array}{c}a^{T_{n+2}^{(3)}-3T_{n+1}^{(3)}}b^{\left(T_{n+2}^{(3)}-3T_{n+1}^{(3)}\right)+3\left(T_{n+1}^{(3)}-3T_{n}^{(3)}\right)}\\ \times c^{T_{n+3}^{(3)}-3T_{n+2}^{(3)}} \end{array}\right\rbrace^{3^{-(n+1)}}\\ &=\lim_{n\rightarrow \infty}a^{\frac{T_{n+2}^{(3)}-3T_{n+1}^{(3)}}{3^{n+1}}}b^{\frac{T_{n+2}^{(3)}-9T_{n}^{(3)}}{3^{n+1}}}c^{\frac{T_{n+3}^{(3)}-3T_{n+2}^{(3)}}{3^{n+1}}}. \end{align*} Now, using Eq. (\ref{eq:8}) and $\{T_{n}^{(3)}\}=\{H_{n}^{(3)}(0,0,1;1,3,9)\}$, we obtain the Binet formula \begin{align*} T_{n+1}^{(3)}&=\frac{1}{6}\left[3^{n}+\left(\frac{-1-i\sqrt{2}}{2}\right)(-1+i\sqrt{2})^{n}+\left(\frac{-1+i\sqrt{2}}{2}\right)(-1-i\sqrt{2})^{n}\right]\\ &=\frac{1}{6}\left[3^{n}+\frac{3}{2}\left((-1+i\sqrt{2})^{n-1}+(-1-i\sqrt{2})^{n-1}\right)\right]\\ &=\frac{1}{6}\left[3^{n}+V_{n}^{(2)}\right],\ n\geq 1, \end{align*} where $V_{n}^{(2)}=\frac{3}{2}\left((-1+i\sqrt{2})^{n-1}+(-1-i\sqrt{2})^{n-1}\right)$.
In fact, $V_{n}^{(2)}$ satisfies the following properties $V_{n+2}^{(2)}=-2V_{n+1}^{(2)}-3V_{n}^{(2)}$, $V_{0}^{(2)}=-1$, $V_{1}^{(2)}=3$ and $$\lim_{n\rightarrow \infty}\frac{V_{n}^{(2)}}{3^{n}}=\lim_{n\rightarrow \infty}\frac{1}{2}\left(\left(\frac{-1+i\sqrt{2}}{3}\right)^{n-1}+\left(\frac{-1-i\sqrt{2}}{3}\right)^{n-1}\right)=0.$$ Then, we have \begin{equation}\label{teo1} \lim_{n\rightarrow \infty}\frac{g_{n+3}^{(3)}}{g_{n+2}^{(3)}}=\lim_{n\rightarrow \infty}\left\lbrace \begin{array}{c}a^{\frac{1}{6}\left(\frac{V_{n+1}^{(2)}-3V_{n}^{(2)}}{3^{n+1}}\right)} \times b^{\frac{1}{6}\left(\frac{V_{n+1}^{(2)}-9V_{n-1}^{(2)}}{3^{n+1}}\right)}\\ \times c^{\frac{1}{2}\left(\frac{V_{n+2}^{(2)}-3V_{n+1}^{(2)}}{3^{n+2}}\right)}\end{array}\right\rbrace=1 \end{equation} as required in Eq. (\ref{p1}). \end{proof} \section{Generalized Tribonacci and Third-order Horadam Identities} We now develop a couple of identities that link terms of the generalized Tribonacci sequence $T_{n}^{(3)}$ with those of particular third-order Horadam sequences; these are then generalized, with proofs given. The power product recurrence \begin{equation}\label{n1} z_{n+3}^{(3)}=\left(z_{n+2}^{(3)}\right)^{r}\left(z_{n+1}^{(3)}\right)^{s}\left(z_{n}^{(3)}\right)^{t},\ n\geq 0, \end{equation} with initial values $z_{0}^{(3)}=a$, $z_{1}^{(3)}=b$ and $z_{2}^{(3)}=c$, is known to produce a sequence $\{z_{n}^{(3)}\}_{n\geq 0}=\{z_{n}^{(3)}(a,b,c;r,s,t)\}$ for which \begin{equation}\label{n2} z_{n}^{(3)}(a,b,c;r,s,t)=a^{H_{n}^{(3)}(1,0,0;r,s,t)}b^{H_{n}^{(3)}(0,1,0;r,s,t)}c^{H_{n}^{(3)}(0,0,1;r,s,t)},\end{equation} with $H_{n}^{(3)}$ as in Eq. (\ref{eq:8}). It was first put forward by Bunder in 1975 \cite{Bu} for the case $t=0$, having recently been proved inductively and generalized by Larcombe and Bagdasar in \cite{Lar3}. Since our recursion in Eq. (\ref{e0}) (with $\epsilon=1$) is the case $r=s=t=\frac{1}{3}$ of Eq. (\ref{n1}) we can infer immediately from Eq. (\ref{n2}) and Lemma \ref{lem} that \begin{equation}\label{n3} \begin{aligned} g_{n}^{(3)}(a,b,c;1)&=\left(a^{T_{n-1}^{(3)}}b^{T_{n-1}^{(3)}+3T_{n-2}^{(3)}}c^{T_{n}^{(3)}}\right)^{3^{-(n-2)}}\\ &=z_{n}^{(3)}\left(a,b,c;\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)\\ &=a^{H_{n}^{(3)}\left(1,0,0;\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)}b^{H_{n}^{(3)}\left(0,1,0;\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)}c^{H_{n}^{(3)}\left(0,0,1;\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)}, \end{aligned} \end{equation} delivering \begin{equation}\label{i1} T_{n-1}^{(3)}=3^{n-2}H_{n}^{(3)}\left(1,0,0;\frac{1}{3},\frac{1}{3},\frac{1}{3}\right),\ n\geq 1, \end{equation} \begin{equation}\label{i2} T_{n-1}^{(3)}+3T_{n-2}^{(3)}=3^{n-2}H_{n}^{(3)}\left(0,1,0;\frac{1}{3},\frac{1}{3},\frac{1}{3}\right),\ n\geq 2 \end{equation} and \begin{equation}\label{i3} T_{n}^{(3)}=3^{n-2}H_{n}^{(3)}\left(0,0,1;\frac{1}{3},\frac{1}{3},\frac{1}{3}\right),\ n\geq 0, \end{equation} which we believe are new relations in that they express generalized Tribonacci numbers $T_{n}^{(3)}$ in terms of third-order Horadam numbers $H_{n}^{(3)}\left(a,b,c;r,s,t\right)$ of Eq. (\ref{eq:8}). \begin{rem} We remark, for completeness, that values $r=s=t=1$ in Eq. (\ref{n1}) yield, by Eq. (\ref{n2}), \begin{align*} z_{n}^{(3)}(a,b,c;1,1,1)&=a^{H_{n}^{(3)}(1,0,0;1,1,1)}b^{H_{n}^{(3)}(0,1,0;1,1,1)}c^{H_{n}^{(3)}(0,0,1;1,1,1)}\\ &=a^{T_{n-2}}b^{T_{n-2}+T_{n-3}}c^{T_{n-1}},\ n\geq 3, \end{align*} where $\{T_{n}\}_{n\geq 0}=\{0,1,1,2,4,7,... 
\}$ is the classic Tribonacci sequence defined by $$T_{n}=T_{n-1}+T_{n-2}+T_{n-3},\ T_{0}=0,\ T_{1}=T_{2}=1,\ n\geq 3.$$ \end{rem} Relations (\ref{i1}), (\ref{i2}) and (\ref{i3}) emerge naturally as a result of what we know about the sequence $\{z_{n}^{(3)}(a,b,c;r,s,t)\}_{n\geq 0}$. It is readily seen, however, that they are merely instances of more general ones. In fact, setting $a=1$ and $b=c=0$ in Eq. (\ref{eq:8}) gives \begin{equation}\label{n5} \begin{aligned} H_{n}^{(3)}&=\frac{P\alpha^{n}}{(\alpha-\omega_{1})(\alpha-\omega_{2})}-\frac{Q\omega_{1}^{n}}{(\alpha-\omega_{1})(\omega_{1}-\omega_{2})}+\frac{R\omega_{2}^{n}}{(\alpha-\omega_{2})(\omega_{1}-\omega_{2})}\\ &=\frac{t\alpha^{n-1}}{(\alpha-\omega_{1})(\alpha-\omega_{2})}-\frac{t\omega_{1}^{n-1}}{(\alpha-\omega_{1})(\omega_{1}-\omega_{2})}+\frac{t\omega_{2}^{n-1}}{(\alpha-\omega_{2})(\omega_{1}-\omega_{2})}, \end{aligned} \end{equation} where the coefficients are $P=\omega_{1}\omega_{2}$, $Q=\alpha\omega_{2}$ and $R=\alpha\omega_{1}$, and choosing further, for arbitrary $\lambda$, $r(\lambda)=\lambda$, $s(\lambda)=3\lambda^{2}$ and $t(\lambda)=9\lambda^{3}$ ($\alpha(\lambda)=3\lambda$, $\omega_{1}(\lambda)=-\lambda+i\lambda\sqrt{2}$ and $\omega_{2}(\lambda)=-\lambda-i\lambda\sqrt{2}$), then \begin{align*} H_{n}^{(3)}(1,0,0;\lambda,3\lambda^{2},9\lambda^{3})&=9\lambda^{n}\left\lbrace \begin{array}{c} \frac{\alpha^{n-1}}{(\alpha-\omega_{1})(\alpha-\omega_{2})}-\frac{\omega_{1}^{n-1}}{(\alpha-\omega_{1})(\omega_{1}-\omega_{2})}\\ +\frac{\omega_{2}^{n-1}}{(\alpha-\omega_{2})(\omega_{1}-\omega_{2})}\end{array}\right\rbrace\\ &=9\lambda^{n}T_{n-1}^{(3)} \end{align*} by Eq. (\ref{eq:8}), and we have a generalized form of Eq. (\ref{i1}), parameterized by $\lambda$, which recovers it when $\lambda=\frac{1}{3}$: \begin{prop}\label{prop1} For $n\geq 2$, we have \begin{equation}\label{iden1} T_{n-1}^{(3)}=\frac{1}{9\lambda^{n}}H_{n}^{(3)}(1,0,0;\lambda,3\lambda^{2},9\lambda^{3}). \end{equation} \end{prop} In a similar fashion, Eq. (\ref{eq:8}) gives \begin{equation}\label{n6} H_{n}^{(3)}(0,0,1;r,s,t)=\left\lbrace \begin{array}{c} \frac{\alpha^{n}}{(\alpha-\omega_{1})(\alpha-\omega_{2})}-\frac{\omega_{1}^{n}}{(\alpha-\omega_{1})(\omega_{1}-\omega_{2})}\\ +\frac{\omega_{2}^{n}}{(\alpha-\omega_{2})(\omega_{1}-\omega_{2})}\end{array}\right\rbrace \end{equation} (the general term of the fundamental sequence mentioned earlier), with \begin{align*} H_{n}^{(3)}(0,0,1;\lambda,3\lambda^{2},9\lambda^{3})&=\lambda^{n-2}\left\lbrace \begin{array}{c} \frac{\alpha^{n}}{(\alpha-\omega_{1})(\alpha-\omega_{2})}-\frac{\omega_{1}^{n}}{(\alpha-\omega_{1})(\omega_{1}-\omega_{2})}\\ +\frac{\omega_{2}^{n}}{(\alpha-\omega_{2})(\omega_{1}-\omega_{2})}\end{array}\right\rbrace\\ &=\lambda^{n-2}T_{n}^{(3)} \end{align*} and so we obtain a general form of Eq. (\ref{i3}), which is likewise recovered for the same value $\lambda=\frac{1}{3}$: \begin{prop}\label{prop2} For $n\geq 1$, we have \begin{equation}\label{iden2} T_{n}^{(3)}=\frac{1}{\lambda^{n-2}}H_{n}^{(3)}(0,0,1;\lambda,3\lambda^{2},9\lambda^{3}). \end{equation} \end{prop} Here, we will just prove Eq. (\ref{iden2}), since (\ref{iden1}) can be dealt with in the same manner. We give two different proofs of the result. \begin{proof}[Proof 1 of Eq. (\ref{iden2})] This proof uses a matrix approach. Writing the recurrence Eq.
(\ref{ec:5}) as $$\left[\begin{array}{c} H_{n+1}^{(3)}\\H_{n}^{(3)} \\ H_{n-1}^{(3)}\end{array}\right]=\left[\begin{array}{ccc} r&s&t\\1&0&0 \\ 0&1&0\end{array}\right]\left[\begin{array}{c} H_{n}^{(3)}\\H_{n-1}^{(3)} \\ H_{n-2}^{(3)}\end{array}\right],\ n\geq 2$$ in matrix form leads iteratively to the matrix power equation \begin{equation}\label{n7} \left[\begin{array}{c} H_{n+1}^{(3)}\\H_{n}^{(3)} \\ H_{n-1}^{(3)}\end{array}\right]=\left[\begin{array}{ccc} r&s&t\\1&0&0 \\ 0&1&0\end{array}\right]^{n-1}\left[\begin{array}{c} H_{2}^{(3)}\\H_{1}^{(3)} \\ H_{0}^{(3)}\end{array}\right] \end{equation} which holds for $n\geq 1$. Thus, \begin{equation}\label{n8} H_{n}^{(3)}(a,b,c;r,s,t)=\left[\begin{array}{ccc} 0&1&0\end{array}\right]\left[\begin{array}{ccc} r&s&t\\1&0&0 \\ 0&1&0\end{array}\right]^{n-1}\left[\begin{array}{c} c\\b \\ a\end{array}\right]. \end{equation} Let us define matrices $$\textbf{F}(\lambda)=\left[\begin{array}{ccc} 1&0&0\\0&\lambda&0\\ 0&0&\lambda^{2}\end{array}\right],\ \ \textbf{C}(\lambda)=\left[\begin{array}{ccc} \lambda&3\lambda^{2}&9\lambda^{3}\\1&0&0 \\ 0&1&0\end{array}\right].$$ Then, observing the decomposition \begin{equation}\label{n10} \frac{1}{\lambda}\textbf{F}(\lambda)\textbf{C}(\lambda)\textbf{F}^{-1}(\lambda)=\left[\begin{array}{ccc} 1&3&9\\1&0&0\\ 0&1&0\end{array}\right],\ \lambda \neq 0, \end{equation} it follows, using Eq. (\ref{ecua:6}) as a starting point, that $T_{n}^{(3)}=H_{n}^{(3)}(0,0,1;1,3,9)$. Then, we obtain \begin{equation}\label{ga} \begin{aligned} T_{n}^{(3)}&=\left[\begin{array}{ccc} 0&1&0\end{array}\right]\left[\begin{array}{ccc} 1&3&9\\1&0&0 \\ 0&1&0\end{array}\right]^{n-1}\left[\begin{array}{c} 1\\0 \\ 0\end{array}\right]\\ &=\left[\begin{array}{ccc} 0&1&0\end{array}\right]\left[\frac{1}{\lambda}\textbf{F}(\lambda)\textbf{C}(\lambda)\textbf{F}^{-1}(\lambda)\right]^{n-1}\left[\begin{array}{ccc} 1&0&0\end{array}\right]^{T}\\ &=\left[\begin{array}{ccc} 0&1&0\end{array}\right]\left[\lambda^{-(n-1)}\textbf{F}(\lambda)\textbf{C}^{n-1}(\lambda)\textbf{F}^{-1}(\lambda)\right]\left[\begin{array}{ccc} 1&0&0\end{array}\right]^{T}, \end{aligned} \end{equation} having employed Eqs. (\ref{n8}) and (\ref{n10}). Further, noting that $\left[\begin{array}{ccc} 0&1&0\end{array}\right]\textbf{F}(\lambda)= \left[\begin{array}{ccc} 0&\lambda&0\end{array}\right]$ and the relation $\textbf{F}^{-1}(\lambda)\left[\begin{array}{ccc} 1&0&0\end{array}\right]^{T}=\left[\begin{array}{ccc} 1&0&0\end{array}\right]^{T}$, the last identity can be written as \begin{equation}\label{n12} \begin{aligned} T_{n}^{(3)}&=\lambda^{-(n-2)}\left[\begin{array}{ccc} 0&1&0\end{array}\right]\textbf{C}^{n-1}(\lambda)\left[\begin{array}{ccc} 1&0&0\end{array}\right]^{T}\\ &=\frac{1}{\lambda^{n-2}}H_{n}^{(3)}(0,0,1;\lambda,3\lambda^{2},9\lambda^{3}) \end{aligned} \end{equation} using Eqs. (\ref{n8}) and (\ref{ga}). The proof is completed. \end{proof} \begin{proof}[Proof 2 of Eq. (\ref{iden2})] This proof takes a different route. Here, we define a sequence $D_{n}^{(3)}=\lambda^{n-2}T_{n}^{(3)}$, where initial conditions are $D_{0}^{(3)}=\lambda^{-2}T_{0}^{(3)}=0$, $D_{1}^{(3)}=\lambda^{-1}T_{1}^{(3)}=0$ and $D_{2}^{(3)}=T_{2}^{(3)}=1$. Then, for all $n\geq 2$, we have \begin{align*} D_{n+1}^{(3)}&=\lambda^{n-1}T_{n+1}^{(3)}\\ &=\lambda^{n-1}\left(T_{n}^{(3)}+3T_{n-1}^{(3)}+9T_{n-2}^{(3)}\right)\ \ (\textrm{By Eq. 
(\ref{ecua:6})})\\ &=\lambda\left(\lambda^{n-2}T_{n}^{(3)}\right)+3\lambda^{2}\left(\lambda^{n-3}T_{n-1}^{(3)}\right)+9\lambda^{3}\left(\lambda^{n-4}T_{n-2}^{(3)}\right)\\ &=\lambda D_{n}^{(3)}+3\lambda^{2}D_{n-1}^{(3)}+9\lambda^{3}D_{n-2}^{(3)}. \end{align*} Since this is the third-order Horadam recurrence of Eq. (\ref{ec:5}) for the sequence $D_{n}^{(3)}$ (with $r=\lambda$, $s=3\lambda^{2}$ and $t=9\lambda^{3}$), and since $D_{0}^{(3)}=D_{1}^{(3)}=0$ and $D_{2}^{(3)}=1$, we have, for $n\geq 0$, $$\lambda^{n-2}T_{n}^{(3)}=D_{n}^{(3)}=H_{n}^{(3)}(0,0,1;\lambda,3\lambda^{2},9\lambda^{3})$$ as required. \end{proof} \begin{rem} Using $\lambda=1$ in Eq. (\ref{iden1}), we obtain $$T_{n-1}^{(3)}=\frac{1}{9}H_{n}^{(3)}(1,0,0;1,3,9).$$ \end{rem} \section{Conclusions} Geometric mean sequences, however defined, have been studied in the past, but not to any great extent. This paper considers the notion of the growth rate of what we regard as the standard geometric mean sequence, and from that develops new identities which express generalized Tribonacci numbers in terms of parameterized families of third-order Horadam numbers. As an extension of this article, future work will examine properties of the scaled version of the recurrence in Eq. (\ref{e0}), as well as the case of higher-order Horadam sequences, following the idea of Larcombe and Bagdasar in \cite{Lar3}. \end{document}
math
\begin{equation}gin{document} \title{Equivalent norms and Hardy-Littlewood type Theorems, and their applications} \author[Shaolin Chen and Hidetaka Hamada]{Shaolin Chen and Hidetaka Hamada} \address{S. L. Chen, College of Mathematics and Statistics, Hengyang Normal University, Hengyang, Hunan 421002, People's Republic of China; Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, 421002, People's Republic of China.} \email{[email protected]} \address{H. Hamada, Faculty of Science and Engineering, Kyushu Sangyo University, 3-1 Matsukadai 2-Chome, Higashi-ku, Fukuoka 813-8503, Japan.} \email{ [email protected]} \maketitle \def\@arabic\c@footnote{} \footnotetext{2010 Mathematics Subject Classification. Primary: 31A05, 30H05, 47B33; Secondary: 30C62, 46E15.} \footnotetext{Keywords. Lipschitz space, Equivalent norms, Harmonic function, Composition operator} \makeatletter\def\@arabic\c@footnote{\@arabic\c@footnote}\makeatother \begin{equation}gin{abstract} The main purpose of this paper is to develop some methods to investigate equivalent norms and Hardy-Littlewood type Theorems on Lipschitz type spaces of analytic functions and complex-valued harmonic functions. Initially, some characterizations of equivalent norms on Lipschitz type spaces of analytic functions and complex-valued harmonic functions will be given. In particular, we give an answer to an open problem posed by Dyakonov in ({\it Math. Z.} 249(2005), 597--611). Furthermore, some Hardy-Littlewood type Theorems of complex-valued harmonic functions are established. The obtained results improve and extend the main results in ({\it Acta Math.} 178(1997), 143--167). Additionally, we apply the equivalent norms and Hardy-Littlewood type Theorems to study composition operators between Lipschitz type spaces. \end{abstract} \maketitle \tableofcontents \section{Introduction}\label{sec1} Let $\mathbb{C}$ be the complex plane. For $a\in\mathbb{C}$ and $\rho>0$, let $\mathbb{D}(a,\rho)=\{z:~|z-a|<\rho\}$. In particular, we use $\mathbb{D}_{\rho}$ to denote the disk $\mathbb{D}(0,\rho)$ and $\mathbb{D}$ to denote the unit disk $\mathbb{D}_{1}$. Moreover, let $\mathbb{T}:=\partial\mathbb{D}$ be the unit circle. A twice differentiable complex-valued function $f=u+iv$ is harmonic defined in a domain $\Omega\subset\mathbb{C}$ if the real-valued functions $u$ and $v$ satisfy Laplace's equations $\Delta u=\Delta v=0$, where $$\Delta:=\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}.$$ For some more details of complex-valued harmonic functions, see \cite{Du}. It is well-known that every complex-valued harmonic function $f$ defined in a simply connected domain $\Omega$ admits a decomposition $f = h + \overlineerline{g}$, where $h$ and $g$ are analytic in $\Omega$. This decomposition is unique up to an additive constant. Let us recall that the Poisson integral, $P[\varphi]$, of a function $\varphi\in L^{1}(\mathbb{T})$ is defined by $$P[\varphi](z)=\frac{1}{2\pi}\int_{0}^{2\pi}\varphi(e^{i\tau})\mathbf{P}(z,e^{i\tau})d\tau$$ or $$P[\varphi](z)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\varphi(e^{i\tau})\mathbf{P}(z,e^{i\tau})d\tau,$$ where $\mathbf{P}(z,e^{i\tau})=\frac{1-|z|^{2}}{|e^{i\tau}-z|^{2}}$ is the Poisson kernel. In the following, we use $\mathscr{H}(\mathbb{D})$ to denote the set of all complex-valued harmonic functions of $\mathbb{D}$ into $\mathbb{C}$. 
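For instance (a simple illustration of the notions just introduced), for the boundary function $\varphi(e^{i\tau})=\cos\tau$ one has $P[\varphi](z)=\operatorname{Re}\,z=\frac{1}{2}(z+\overline{z})$, so that the harmonic extension of $\varphi$ decomposes as $h+\overline{g}$ with $h(z)=g(z)=z/2$.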
For $z=x+iy\in\mathbb{C}$, the complex formal differential operators are defined by $\partial/\partial z=1/2(\partial/\partial x-i\partial/\partial y)$ and $\partial/\partial \overlineerline{z}=1/2(\partial/\partial x+i\partial/\partial y)$, where $x,y\in\mathbb{R}$. For a differentiable function $f$ on a domain $\Omega\subset \mathbb{C}$, let $$\mathscr{M}_{f}(z):=|f_{z}(z)|+|f_{\overlineerline{z}}(z)|, \quad z\in \Omega. $$ For $\theta\in[0,2\pi]$, the directional derivative of a complex-valued harmonic function $f$ at $z\in\mathbb{D}$ is defined by $$\partial_{\theta}f(z)=\lim_{\rho\rightarrow0^{+}}\frac{f(z+\rho e^{i\theta})-f(z)}{\rho}=f_{z}(z)e^{i\theta} +f_{\overlineerline{z}}(z)e^{-i\theta},$$ where $f_{z}:=\partial f/\partial z,$ $f_{\overlineerline{z}}:=\partial f/\partial \overlineerline{z}$ and $\rho$ is a positive real number such that $z+\rho e^{i\theta}\in\mathbb{D}$. Then $$\mathscr{M}_{f}(z)=\max\{|\partial_{\theta}f(z)|:\; \theta\in[0,2\pi]\}. $$ A mapping $f:~\Omega\rightarrow\mathbb{C}$ is said to be absolutely continuous on lines, $ACL$ in brief, in the domain $\Omega$ if for every closed rectangle $R\subset\Omega$ with sides parallel to the axes $x$ and $y$, $f$ is absolutely continuous on almost every horizontal line and almost every vertical line in $R$. Such a mapping has, of course, partial derivatives $f_{x}$ and $f_{y}$ a.e. in $\Omega$. Moreover, we say $f\in ACL^{2}$ if $f\in ACL$ and its partial derivatives are locally $L^{2}$ integrable in $\Omega$. A sense-preserving and continuous mapping $f$ of $\mathbb{D}$ into $\mathbb{C}$ is called a $K$-quasiregular mapping if \begin{equation}gin{enumerate} \item $f$ is $ACL^{2}$ in $\mathbb{D}$ and $J_{f}>0$ a.e. in $\mathbb{D}$, where $J_{f}=|f_{z}|^{2}-|f_{\overlineerline{z}}|^{2}$ is the Jacobian of $f$; \item there is a constant $K\geq1$ such that $$\mathscr{M}_{f}^{2}\leq KJ_{f}~\mbox{ a.e. in $\mathbb{D}$}.$$ \end{enumerate} Throughout of this paper, we use the symbol $M$ to denote the various positive constants, whose value may change from one occurrence to another. \section{Preliminaries and main results}\label{sec2-1} A continuous increasing function $\omega:[0,\infty)\rightarrow[0,\infty)$ with $\omega(0)=0$ is called a majorant if $\omega(t)/t$ is non-increasing for $t\in(0,\infty)$ (see \cite{Dy1,Dy2,P}). For $\delta_{0}>0$ and $0<\delta<\delta_{0}$, we consider the following conditions on a majorant $\omega$: \begin{equation}\label{eq2x} \int_{0}^{\delta}\frac{\omega(t)}{t}\,dt\leq\, M\omega(\delta) \end{equation} and \begin{equation}\label{eq3x} \delta\int_{\delta}^{\infty}\frac{\omega(t)}{t^{2}}\,dt\leq\, M \omega(\delta), \end{equation} where $M$ denotes a positive constant. A majorant $\omega$ is henceforth called fast (resp. slow) if condition (\ref{eq2x}) (resp. (\ref{eq3x}) ) is fulfilled. In particular, a majorant $\omega$ is said to be regular if it satisfies the conditions (\ref{eq2x}) and (\ref{eq3x}) (see \cite{Dy1,Dy2}). 
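For instance (an elementary example recorded here for convenience), for $\omega_{\alpha}(t)=t^{\alpha}$ with $0<\alpha<1$ one has $\int_{0}^{\delta}\frac{\omega_{\alpha}(t)}{t}\,dt=\frac{\delta^{\alpha}}{\alpha}$ and $\delta\int_{\delta}^{\infty}\frac{\omega_{\alpha}(t)}{t^{2}}\,dt=\frac{\delta^{\alpha}}{1-\alpha}$, so $\omega_{\alpha}$ is both fast and slow, and hence regular; for $\alpha=1$ the first integral equals $\delta=\omega_{1}(\delta)$ while the second diverges, so $\omega_{1}(t)=t$ is fast but not slow.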
Given a majorant $\omega$ and a subset $\Omega$ of $\mathbb{C}$, a function $f$ of $\Omega$ into $\mathbb{C}$ is said to belong to the Lipschitz space $\Lambda_{\omega}(\Omega)$ if there is a positive constant $M$ such that \begin{equation}\label{rrt-1}|f(z_{1})-f(z_{2})| \leq\,M\omega\left(|z_{1}-z_{2}|\right), \quad z_{1},z_{2} \in \Omega.\end{equation} Furthermore, let $$\|f\|_{\Lambda_{\omega}(\Omega),s}:=\sup_{z_{1},z_{2}\in\Omega,z_{1}\neq\,z_{2}}\frac{|f(z_{1})-f(z_{2})|}{\omega(|z_{1}-z_{2}|)}<\infty.$$ Note that if $\Omega$ is a proper subdomain of $\mathbb{C}$ and $f\in \Lambda_{\omega}(\Omega)$, then $f$ is continuous on $\overlineerline{\Omega}$ and (\ref{rrt-1}) holds for $z,w \in \overlineerline{\Omega}$ (see \cite{Dy2}). Furthermore, we use $\Lambda_{\omega,p}(\mathbb{D})$ to denote the class of all Borel functions $f$ of $\mathbb{D}$ into $\mathbb{C}$ such that, for $z_{1},~z_{2}\in\mathbb{D}$, $$\mathcal{L}_{p}[f](z_{1},z_{2})\leq M\omega(|z_{1}-z_{2}|),$$ where $M$ is a positive constant and $$\mathcal{L}_{p}[f](z_{1},z_{2})= \begin{equation}gin{cases} \displaystyle\left(\int_{0}^{2\pi}|f(e^{i\eta}z_{1})-f(e^{i\eta}z_{2})|^{p}d\eta\right)^{\frac{1}{p}} & \mbox{if } p\in(0,\infty),\\ \displaystyle|f(z_{1})-f(z_{2})| &\mbox{if } p=\infty. \end{cases} $$ The Lipschitz constant of $f\in\Lambda_{\omega,p}(\mathbb{D})$ is defined as follows $$\|f\|_{\Lambda_{\omega,p}(\Omega),s}:=\sup_{z_{1},z_{2}\in\Omega,z_{1}\neq\,z_{2}}\frac{\mathcal{L}_{p}[f](z_{1},z_{2})}{\omega(|z_{1}-z_{2}|)}<\infty.$$ Obviously, $\|f\|_{\Lambda_{\omega,\infty}(\mathbb{D}),s}=\|f\|_{\Lambda_{\omega}(\mathbb{D}),s}$ and $\Lambda_{\omega,\infty}(\mathbb{D})=\Lambda_{\omega}(\mathbb{D})$. Moreover, we define the space $\Lambda_{\omega,p}(\mathbb{T})$ consisting of those $f\in L^{p}(\mathbb{T})$ for which $$\mathcal{L}_{p}[f](\xi_{1},\xi_{2})\leq M\omega(|\xi_{1}-\xi_{2}|),~\xi_{1},~\xi_{2}\in\mathbb{T},$$ where $M>0$ is a constant and $$\mathcal{L}_{p}[f](\xi_{1},\xi_{2})= \begin{equation}gin{cases} \displaystyle\left(\int_{0}^{2\pi}|f(e^{i\eta}\xi_{1})-f(e^{i\eta}\xi_{2})|^{p}d\eta\right)^{\frac{1}{p}} & \mbox{if } p\in(0,\infty),\\ \displaystyle|f(\xi_{1})-f(\xi_{2})| &\mbox{if } p=\infty. \end{cases} $$ In particular, we say that a function $f$ belongs to the local Lipschitz space $\mbox{loc}\Lambda_{\omega}(\Omega)$ if (\ref{rrt-1}) holds, with a fixed positive constant $M$, whenever $z\in \Omega$ and $|z-w|<\frac{1}{2}d_{\Omega}(z)$ (cf. \cite{Dy2,GM,L}), where $d_{\Omega}(z)$ is the Euclidean distance between $z$ and the boundary of $\Omega$. Moreover, $\Omega$ is called a $\Lambda_{\omega}$-extension domain if $\Lambda_{\omega}(\Omega)=\mbox{loc}\Lambda_{\omega}(\Omega).$ On the geometric characterization of $\Lambda_{\omega}$-extension domains, see \cite{GM}. In \cite{L}, Lappalainen generalized the characterization of \cite{GM}, and proved that $\Omega$ is a $\Lambda_{\omega}$-extension domain if and only if each pair of points $z_{1},z_{2}\in \Omega$ can be joined by a rectifiable curve $\gamma\subset \Omega$ satisfying \[ \int_{\gamma}\frac{\omega(d_{\Omega}(\zeta))}{d_{\Omega}(\zeta)}\,ds(\zeta) \leq M\omega(|z_{1}-z_{2}|) \] with some fixed positive constant $M$, where $ds$ stands for the arc length measure on $\gamma$. Furthermore, Lappalainen \cite[Theorem 4.12]{L} proved that $\Lambda_{\omega}$-extension domains exist only for fast majorants $\omega$. In particular, $\mathbb{D}$ is a $\Lambda_{\omega}$-extension domain of $\mathbb{C}$ for fast majorant $\omega$ (see \cite{Dy2}). 
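We also record the following elementary observation on the scales $\Lambda_{\omega,p}$ (included here for orientation): if $f\in\Lambda_{\omega}(\mathbb{D})$, then for every $p\in(0,\infty)$ and $z_{1},z_{2}\in\mathbb{D}$, $$\mathcal{L}_{p}[f](z_{1},z_{2})\leq\left(\int_{0}^{2\pi}\big(\|f\|_{\Lambda_{\omega}(\mathbb{D}),s}\,\omega(|e^{i\eta}z_{1}-e^{i\eta}z_{2}|)\big)^{p}d\eta\right)^{\frac{1}{p}}=(2\pi)^{\frac{1}{p}}\|f\|_{\Lambda_{\omega}(\mathbb{D}),s}\,\omega(|z_{1}-z_{2}|),$$ since $|e^{i\eta}z_{1}-e^{i\eta}z_{2}|=|z_{1}-z_{2}|$; in particular, $\Lambda_{\omega}(\mathbb{D})=\Lambda_{\omega,\infty}(\mathbb{D})\subset\Lambda_{\omega,p}(\mathbb{D})$ for every $p\in(0,\infty)$.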
\subsection{Equivalent norms and Hardy-Littlewood type theorems on Lipschitz type spaces} Let $\mathscr{A}(\mathbb{D})$ denote the disk algebra, i.e., the class of analytic functions in $\mathbb{D}$ that are continuous up to the boundary. In \cite{Dy1}, Dyakonov gave some characterizations of functions $f\in\mathscr{A}(\mathbb{D})\cap\Lambda_{\omega,\infty}(\mathbb{D})$ in terms of their equivalent norms (or their moduli). Let's recall the main results of \cite{Dy1} as follows. \begin{equation}gin{ThmA}\label{Dya-1}{\rm(see \cite[Theorem 2]{Dy1}; cf. \cite[p. 598]{Dy3} \mbox{or} \cite[Theorem A]{P})} Let $\omega$ be a fast majorant. Then $f\in\mathscr{A}(\mathbb{D})\cap\Lambda_{\omega,\infty}(\overlineerline{\mathbb{D}})$ if and only if $|f|\in\mathscr{A}(\mathbb{D})\cap\Lambda_{\omega,\infty}(\overlineerline{\mathbb{D}})$. \end{ThmA} In particular, if we take $\omega=\omega_{\alpha}$ in Theorem A, where $\alpha\in(0,1]$ is a constant and $\omega_{\alpha}(t)=t^{\alpha}$ for $t\geq0$, then we have \begin{equation}\label{NB-Lip-1}|f|\in\Lambda_{\omega_{\alpha},\infty}(\mathbb{D})\Leftrightarrow f\in\Lambda_{\omega_{\alpha},\infty}(\mathbb{D}).\end{equation} Furthermore, in \cite{Dy3}, Dyakonov posed an open problem on the extension of (\ref{NB-Lip-1}). Let us recall it as follows. \begin{equation}gin{Prob}\label{Problem-1} We do not know how to extend (\ref{NB-Lip-1}) to the whole $\Lambda_{\omega_{\alpha},\infty}(\mathbb{D})$-scale with $\alpha\in(0,\infty)$ (higher order $\Lambda_{\omega_{\alpha},\infty}(\mathbb{D})$-spaces being defined in terms of higher order derivatives); to find the ``right" extension of (\ref{NB-Lip-1}) is an open problem that puzzles us (see \cite[page 606 and lines 5-7]{Dy3}). \end{Prob} We give an answer to Problem \ref{Problem-1} as follows. \begin{equation}gin{Thm}\label{Open-1} It is impossible to extend {\rm(\ref{NB-Lip-1})} to the whole $\Lambda_{\omega_{\alpha},\infty}(\mathbb{D})$-scale with $\alpha\in(0,\infty)$ unless the analytic function $f$ is a constant. \end{Thm} In \cite{Dy3}, Dyakonov also proved that the following original Hardy-Littlewood theorem follows from (\ref{NB-Lip-1}). \begin{equation}gin{CorA}\label{CorA-HL}{\rm (Hardy-Littlewood's Theorem)} Suppose $u$ is a real-valued harmonic function in $\mathbb{D}$ with $u\in\Lambda_{\omega_{\alpha},\infty}(\mathbb{D})$, where $\alpha\in(0,1]$ and $\omega_{\alpha}(t)=t^{\alpha}$ for $t\geq0$. Let $v$ be a harmonic conjugate of $u$ with $v(0)=0$. Then $v\in\Lambda_{\omega_{\alpha},\infty}(\mathbb{D})$. \end{CorA} The following results are Lipschitz characterizations of boundary functions and their harmonic extensions. \begin{equation}gin{ThmB}\label{Dya-2}{\rm(see \cite[Corollary 1 (ii)]{Dy1} \mbox{or} \cite[Theorem B]{P})} Let $\omega$ be a regular majorant, $f\in\mathscr{A}(\mathbb{D})$, and let the boundary function of $|f|$ belong to $\Lambda_{\omega,\infty}(\mathbb{T})$. Then $f$ is in $\Lambda_{\omega,\infty}(\mathbb{D})$ if and only if $$P[|f|](z)-|f(z)|\leq M\omega(d_{\mathbb{D}}(z))$$ for some positive constant $M$. 
\end{ThmB} \begin{equation}gin{ThmC}\label{Dya-3}{\rm(\cite[Theorem 1]{Dy1})} If $f\in\mathscr{A}(\mathbb{D})$ and if both $\omega$ and $\omega^{2}$ are regular majorants, then there is a positive constant $M$ such that \begin{equation}\label{jjp-1}\frac{1}{M}\|f\|_{\Lambda_{\omega}(\overlineerline{\mathbb{D}}),s} \leq\sup_{z\in\mathbb{D}}\left\{\frac{\left(P[|f|^{2}](z)-|f(z)|^{2}\right)^{\frac{1}{2}}}{\omega\big(d_{\mathbb{D}}(z)\big)}\right\}\leq M\|f\|_{\Lambda_{\omega}(\overlineerline{\mathbb{D}}),s}.\end{equation} \end{ThmC} We remark that the norm appearing in the middle of the inequality (\ref{jjp-1}) can be regarded as an analogue of the so-called Garsia norm on the space $BMO$ (see \cite{Dy1} or \cite[Chapter VI]{Ga}). Recently, the equivalent norms and Hardy-Littlewood type theorems on Lipschitz type spaces have attracted much attention of many authors (see \cite{AKM,AP-2017,MVM,CH-Riesz,CSR,CPR-2014,CPW-11,CP-2013,Dy2,Dy3,Dy4,P,Pav-2007,Q-W}). The first purpose of this paper is to use some new techniques, in conjunction with some methods of Dyakonov \cite{Dy1} and Pavlovi\'c \cite{P}, to study the equivalent norms and Hardy-Littlewood type theorems for $f\in\mathscr{H}(\mathbb{D})\cap\Lambda_{\omega,p}(\mathbb{D})$ and $f\in\mathscr{A}(\mathbb{D})\cap\Lambda_{\omega,p}(\mathbb{D})$, where $p\in[1,\infty]$. In the following, we will show that Theorem A is equivalent to Corollary A. It is read as follows. \begin{equation}gin{Thm}\label{thm-1.0} Suppose that $p\in[1,\infty]$ is a constant, $\omega$ is a fast majorant and $f=u+iv=h+\overlineerline{g}\in\mathscr{H}(\mathbb{D})$ is continuous up to the boundary $\mathbb{T}$ of $\mathbb{D}$, where $h,~g\in\mathscr{A}(\mathbb{D})$ and $u, v$ are real-valued functions in $\mathbb{D}$. Then the following statements are equivalent. \begin{equation}gin{enumerate} \item[{\rm ($\mathscr{A}_{1}$)}] There is a positive constant $M$ such that, for $z\in\mathbb{D}$, $$ \left\{ \begin{equation}gin{array}{ll} \left(\int_{0}^{2\pi}\left(\mathscr{M}_{f}(ze^{i\eta})\right)^{p}d\eta\right)^{\frac{1}{p}}\leq M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}, & p\in [1,\infty), \\ \mathscr{M}_{f}(z) \leq M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}, & p=\infty ; \end{array} \right. $$ \item[{\rm ($\mathscr{A}_{2}$)}] $f\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{A}_{3}$)}] $h,~g\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{A}_{4}$)}] $|h|,~|g|\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \end{enumerate} Furthermore, assume that $f$ is also $K$-quasiregular in $\mathbb{D}$, where $K\in[1,\infty)$ is a constant. Then, the conditions {\rm ($\mathscr{A}_{1}$)}$\sim${\rm ($\mathscr{A}_{4}$)} are equivalent to the conditions {\rm ($\mathscr{A}_{5}$)}$\sim${\rm ($\mathscr{A}_{11}$)}. 
\begin{equation}gin{enumerate} \item[{\rm ($\mathscr{A}_{5}$)}] $|f|\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{A}_{6}$)}] $u\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{A}_{7}$)}] $|u|\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{A}_{8}$)}] $v\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{A}_{9}$)}] $|v|\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{A}_{10}$)}] There is a positive constant $M$ such that, for $z\in\mathbb{D}$, $$\mathscr{Y}_{u,q,p}(z) \leq M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)},$$ where $p\vee q:=\{p,q:~0<q\leq p<\infty\}$ or $p\wedge q:=\{p,q:~0<q<p=\infty\}$, $$\mathscr{Y}_{u,q,p}(z)= \left\{ \begin{equation}gin{array}{ll} \left(\int_{0}^{2\pi}\left(\frac{\int_{\mathbb{D}\left(z,d_{\mathbb{D}}(z)/2\right)}\left(\mathscr{M}_{u}(\xi e^{i\eta})\right)^{q}dA(\xi)} {\left|\mathbb{D}\left(z,d_{\mathbb{D}}(z)/2\right)\right|} \right)^{\frac{p}{q}}d\eta\right)^{\frac{1}{p}}, & p\vee q, \\ \left(\frac{1} {\left|\mathbb{D}\left(z,d_{\mathbb{D}}(z)/2\right)\right|}\int_{\mathbb{D}\left(z,d_{\mathbb{D}}(z)/2\right)}\left(\mathscr{M}_{u}(\xi)\right)^{q}dA(\xi) \right)^{\frac{1}{q}}, & p\wedge q, \end{array} \right. $$ $\left|\mathbb{D}\big(z,d_{\mathbb{D}}(z)/2\big)\right|$ denotes the area of $\mathbb{D}\big(z,d_{\mathbb{D}}(z)/2\big)$, and $dA$ denotes the Lebesgue area measure on $\mathbb{D}$; \item[{\rm ($\mathscr{A}_{11}$)}] There is a positive constant $M$ such that, for $z\in\mathbb{D}$, $$\mathscr{Y}_{v,q,p}(z) \leq M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)},$$ where $0<q\leq p<\infty$ or $0<q<p=\infty$. \end{enumerate} \end{Thm} We remark that ($\mathscr{A}_{1}$) is equivalent to $f\in\Lambda_{\omega,p}(\mathbb{D})$ without the assumption ``$f$ is continuous on $\mathbb{T}$" in Theorem \ref{thm-1.0}. The following result easily follows from Theorem \ref{thm-1.0}. \begin{equation}gin{Cor}\label{An-th-0.1} Suppose that $p\in[1,\infty]$ is a constant, $\omega$ is a fast majorant and $f=u+iv\in\mathscr{A}(\mathbb{D})$, where $u$ and $v$ are real-valued functions in $\mathbb{D}$ that are continuous up to $\mathbb{T}$. Then the following statements are equivalent. \begin{equation}gin{enumerate} \item[{\rm ($\mathscr{B}_{1}$)}] There is a positive constant $M$ such that, for $z\in\mathbb{D}$, $$ \left\{ \begin{equation}gin{array}{ll} \left(\int_{0}^{2\pi}|f'(ze^{i\eta})|^{p}d\eta\right)^{\frac{1}{p}}\leq M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}, & p\in [1,\infty)\\ |f'(z)|\leq M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}, & p=\infty; \end{array} \right. 
$$ \item[{\rm ($\mathscr{B}_{2}$)}] $f\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{B}_{3}$)}] $|f|\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{B}_{4}$)}] $u\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{B}_{5}$)}] $|u|\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{B}_{6}$)}] $v\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{B}_{7}$)}] $|v|\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}});$ \item[{\rm ($\mathscr{B}_{8}$)}] There is a positive constant $M$ such that, for $z\in\mathbb{D}$, $$\mathscr{Y}_{u,q,p}(z) \leq M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)},$$ where $0<q\leq p<\infty$ or $0<q<p=\infty$; \item[{\rm ($\mathscr{B}_{9}$)}] There is a positive constant $M$ such that, for $z\in\mathbb{D}$, $$\mathscr{Y}_{v,q,p}(z) \leq M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)},$$ where $0<q\leq p<\infty$ or $0<q<p=\infty$. \end{enumerate} \end{Cor} We remark that Corollary \ref{An-th-0.1} is an improvement and extension of Theorem A and Corollary A. In order to extend Theorems B and C, we need to establish another version of Hardy-Littlewood type theorem on complex-valued harmonic functions. Before presenting our results, let us recall the classical Hardy-Littlewood theorem of complex-valued harmonic functions as follows (cf. \cite{H-L-31,H-L,Pav-2008,Pri}). \begin{equation}gin{ThmD}\label{H-L}{\rm (Hardy-Littlewood's Theorem)} If $\varphi\in\Lambda_{\omega_{\begin{equation}ta},\infty}(\mathbb{T})$, then $P[\varphi]\in\Lambda_{\omega_{\begin{equation}ta},\infty}(\overlineerline{\mathbb{D}})$, where $\begin{equation}ta\in(0,1)$ and $\omega_{\begin{equation}ta}(t)=t^{\begin{equation}ta}$ for $t\geq0$. \end{ThmD} Nolder and Oberlin generalized Theorem D, and established a Hardy-Littlewood theorem for a differentiable majorant (see \cite[Theorem 1.5]{No}). Later, Dyakonov \cite{Dy1} generalized Theorem D to complex-valued harmonic functions as follows. For some related studies, we refer the readers to \cite{AKM,Ch-18,CSR,Dy2,Dy3,GK,No,P,Pav-2007,P-08} for details. \begin{equation}gin{ThmE}\label{Th-B}{\rm (\cite[Lemma 4]{Dy1})} Let $\omega$ be a regular majorant. If $\varphi\in\Lambda_{\omega,\infty}(\mathbb{T})$, then $P[\varphi]\in\Lambda_{\omega,\infty}(\overlineerline{\mathbb{D}})$. \end{ThmE} Replacing a regular majorant by a weaker one, we use some new techniques to improve and extend Theorem E into the following form. \begin{equation}gin{Thm}\label{thm-5.0} Let $\omega$ be a majorant, and let $p\in[1,\infty]$ be a constant. \begin{equation}gin{enumerate} \item[{\rm ($\mathscr{C}_{1}$)}] If $\varphi\in\Lambda_{\omega,p}(\mathbb{T})$ and $\varphi$ is continuous on $\mathbb{T}$, then $P[\varphi]\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}})$; \item[{\rm ($\mathscr{C}_{2}$)}] There is a positive constant $M$ such that for all $\delta\in[0,\pi]$, $$\delta\int_{\delta}^{\pi}\frac{\omega(t)}{t^{2}}dt\leq\,M\omega(\delta).$$ \end{enumerate} Then $(\mathscr{C}_{2})\Rightarrow(\mathscr{C}_{1})$ for $p\in[1,\infty]$, and $(\mathscr{C}_{1})\Leftrightarrow(\mathscr{C}_{2})$ for $p=\infty$. \end{Thm} In particular, if $\omega$ is a slow majorant, then the following result easily follows from Theorem \ref{thm-5.0}. \begin{equation}gin{Cor}\label{cor-0.11} Let $\omega$ be a slow majorant, and let $p\in[1,\infty]$ be a constant. 
If $\varphi\in\Lambda_{\omega,p}(\mathbb{T})$ and $\varphi$ is continuous on $\mathbb{T}$, then $P[\varphi]\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}})$. \end{Cor} By using Theorem \ref{thm-5.0}, we improve and generalize Theorem B into the following form. \begin{equation}gin{Thm}\label{t-3} Suppose that $p\in[1,\infty]$ is a constant, $\omega$ is a regular majorant and $f\in\mathscr{H}(\mathbb{D})$ is continuous up to the boundary $\mathbb{T}$ of $\mathbb{D}$. If the boundary function of $|f|\in\Lambda_{\omega,p}(\mathbb{T})$ and $f$ is $K$-quasiregular in $\mathbb{D}$, where $K\in[1,\infty)$ is a constant, then the following statements are equivalent. \begin{equation}gin{enumerate} \item[{\rm ($\mathscr{D}_{1}$)}] $f\in\Lambda_{\omega,p}(\mathbb{D})$; \item[{\rm ($\mathscr{D}_{2}$)}] There is a positive constant $M$ such that $$ \left\{ \begin{equation}gin{array}{ll} \left(\int_{0}^{2\pi}\left(P[|f|](ze^{i\eta})-|f(ze^{i\eta})|\right)^{p}d\eta\right)^{\frac{1}{p}} \leq M\omega(d_{\mathbb{D}}(z)), &p\in[1,\infty), \\ P[|f|](z)-|f(z)|\leq M\omega(d_{\mathbb{D}}(z)), &p=\infty. \end{array} \right. $$ \end{enumerate} \end{Thm} \begin{equation}gin{Rem} We remark that $|f|^{\lambda}$ is subharmonic, and therefore the Poisson integral, $P[|f|^{\lambda}]$, of the boundary function of $|f|^{\lambda}$, is equal to the smallest harmonic majorant of $|f|^{\lambda}$ in $\mathbb{D}$, where $K\in[1,\infty)$, $\lambda\in\left[1-1/K^{2}, \infty\right)$ and $f\in\mathscr{H}(\mathbb{D})$ is a $K$-quasiregular mapping $($see \cite{KP-08}$)$. In particular, $P[|f|^{\lambda}]-|f|^{\lambda}\geq0$ in $\mathbb{D}$, which implies that $P[|f|]-|f|\geq0$ in $\mathbb{D}$ holds in Theorem \ref{t-3}. \end{Rem} From Theorem \ref{t-3}, we obtain the following result. \begin{equation}gin{Cor}\label{yy-cor} Suppose that $p\in[1,\infty]$ is a constant, $\omega$ is a regular majorant and $f\in\mathscr{A}(\mathbb{D})$. If the boundary function of $|f|\in\Lambda_{\omega,p}(\mathbb{T})$, then the following statements are equivalent. \begin{equation}gin{enumerate} \item[{\rm ($\mathscr{E}_{1}$)}] $f\in\Lambda_{\omega,p}(\mathbb{D})$; \item[{\rm ($\mathscr{E}_{2}$)}] There is a positive constant $M$ such that $$ \left\{ \begin{equation}gin{array}{ll} \left(\int_{0}^{2\pi}\left(P[|f|](ze^{i\eta})-|f(ze^{i\eta})|\right)^{p}d\eta\right)^{\frac{1}{p}} \leq M\omega(d_{\mathbb{D}}(z)), &p\in[1,\infty), \\ P[|f|](z)-|f(z)|\leq M\omega(d_{\mathbb{D}}(z)), &p=\infty. \end{array} \right. $$ \end{enumerate} \end{Cor} Furthermore, if we replace $|f|$ by $|f|^{2}$ in ($\mathscr{E}_{2}$) of Corollary \ref{yy-cor}, then we obtain the following result which is an extension of Theorem C. \begin{equation}gin{Thm}\label{thm-8} If $f\in\mathscr{A}(\mathbb{D})$ and if both $\omega$ and $\omega^{2}$ are regular majorants, then $f\in\Lambda_{\omega,p}(\overlineerline{\mathbb{D}})$ if and only if there is a positive constant $M$ such that \begin{equation}gin{eqnarray}q\label{df-1-1} \left\{ \begin{equation}gin{array}{ll} \left(\int_{0}^{2\pi}\left(P[|f|^{2}](ze^{i\eta})-|f(ze^{i\eta})|^{2}\right)^{\frac{p}{2}}d\eta\right)^{\frac{1}{p}} \leq M\omega(d_{\mathbb{D}}(z)), &p\in[2,\infty), \\ \left(P[|f|^{2}](z)-|f(z)|^{2}\right)^{\frac{1}{2}}\leq M\omega(d_{\mathbb{D}}(z)), &p=\infty. \end{array} \right. \end{equation}qq \end{Thm} We remark that if we take $p=\infty$, then Theorem \ref{thm-8} coincides with Theorem C. 
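To illustrate the quantity appearing in (\ref{df-1-1}) on a concrete function (an elementary example included only for orientation), take $f(z)=z$. Since $P[1]\equiv1$, we have $$P[|f|^{2}](z)-|f(z)|^{2}=1-|z|^{2}\leq 2\,d_{\mathbb{D}}(z),$$ so that $\left(P[|f|^{2}](z)-|f(z)|^{2}\right)^{\frac{1}{2}}\leq\sqrt{2}\,\big(d_{\mathbb{D}}(z)\big)^{\frac{1}{2}}\leq\sqrt{2}\,\omega_{\alpha}\big(d_{\mathbb{D}}(z)\big)$ for $\omega_{\alpha}(t)=t^{\alpha}$ with $0<\alpha<\frac{1}{2}$ (for which both $\omega_{\alpha}$ and $\omega_{\alpha}^{2}$ are regular), in accordance with the fact that $f\in\Lambda_{\omega_{\alpha},p}(\overline{\mathbb{D}})$.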
\subsection{Applications of equivalent norms and Hardy-Littlewood type theorems} Let $\phi$ be a analytic function of $\mathbb{D}$ into itself. We define the composition operator $C_{\phi}$ on $\mathscr{H}(\mathbb{D})$ by $C_{\phi}(f)(z)=f(\phi(z))$ for $z\in\mathbb{D}$. For two sets $X$, $Y\subset \mathscr{H}(\mathbb{D})$, the set of all $\phi$ for which $C_{\phi}(X)\subset Y$ is denoted by $\mathscr{F}(X,Y)$. In \cite{Sha}, Shapiro gave some compete characterizations of compact composition operators of some analytic function spaces. Recently, the studies of composition operators on holomorphic and harmonic functions have been attracted much attention of many mathematicians (see \cite{C-H,CHZ2022MZ,CPR,P-08,Pav-2008,P-R-0,P-R-1,Sha,Z2}). The second purpose of this paper is to apply Theorems \ref{thm-1.0} and \ref{thm-5.0} to study the composition operators between the Lipschitz type spaces. In the following, for $x\in\mathbb{R}$, let $$\{x\}_{+}=\max\{x,0\}.$$ By using Theorems \ref{thm-1.0} and \ref{thm-5.0}, we obtain the following result. \begin{equation}gin{Thm}\label{th-3} Suppose that $p\in[1,\infty)$ is a constant, and $\omega_{1}$ and $\omega_{2}$ are fast majorants. Let $\phi$ be an analytic function of $\mathbb{D}$ into itself. Then the following conditions are equivalent. \begin{equation}gin{enumerate} \item[{\rm $(\mathscr{F}_{1})$}] $\phi\in\mathscr{F}\big(\Lambda_{\omega_{1},\infty}(\mathbb{D})\cap\mathscr{H}(\mathbb{D}),\Lambda_{\omega_{2},p}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})\big)$; \item[{\rm $(\mathscr{F}_{2})$}] There is a positive constant $M$ such that, for $z\in\mathbb{D}$, $$\left(\int_{0}^{2\pi}\left(|\phi'(ze^{i\theta})| \frac{\omega_{1}\big(d_{\mathbb{D}}(\phi(ze^{i\theta}))\big)}{d_{\mathbb{D}}(\phi(ze^{i\theta}))}\right)^{p}d\theta\right)^{\frac{1}{p}} \leq M\frac{\omega_{2}\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)};$$ \end{enumerate} Furthermore, assume that $\phi$ is continuous in $\overlineerline{\mathbb{D}}$, $\omega_{1}\in\mathscr{S}$ and $\omega_{2}$ is regular, where $\mathscr{S}$ denotes the set consisting of those majorant $\omega_{1}$ for which $\omega_{1}$ is differentiable on $(0,1]$ and $\omega_{1}'$ is also non-increasing on $(0,1]$ with $\sup_{t\in (0,1]}\frac{\omega_1(t)}{t\omega_1'(t)}<\infty$. Then, the conditions {\rm $(\mathscr{F}_{1})$} and {\rm $(\mathscr{F}_{2})$} are equivalent to the conditions $(\mathscr{F}_{3})$$\sim$$(\mathscr{F}_{5})$. 
\begin{equation}gin{enumerate} \item[{\rm $(\mathscr{F}_{3})$}] $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\overlineerline{\mathbb{D}});$ \item[{\rm $(\mathscr{F}_{4})$}] $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\mathbb{T})$ and there is a positive constant $M$ such that, for $r\in(0,1)$, $$\left(\int_{0}^{2\pi}\left\{\omega_{1}(d_{\mathbb{D}}(\phi(re^{i\theta})))-\omega_{1}(d_{\mathbb{D}}(\phi(e^{i\theta})))\right\}_{+}^{p}d\theta\right)^{\frac{1}{p}} \leq\,M\omega_{2}(d_{\mathbb{D}}(r));$$ \item[{\rm $(\mathscr{F}_{5})$}] $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\mathbb{T})$ and there is a positive constant $M$ such that, for $r\in(0,1)$, $$\left(\int_{0}^{2\pi}\left(\omega_{1}(d_{\mathbb{D}}(\phi(re^{i\theta})))-P[\omega_{1}(d_{\mathbb{D}}(\phi))](re^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}} \leq\,M\omega_{2}(d_{\mathbb{D}}(r)).$$ \end{enumerate} \end{Thm} \begin{equation}gin{Rem} We remark that $$\omega_{1}(d_{\mathbb{D}}(\phi(re^{i\theta})))-P[\omega_{1}(d_{\mathbb{D}}(\phi))](re^{i\theta})\geq0$$ in Theorem \ref{th-3} because $\omega_{1}(d_{\mathbb{D}}(\phi))$ is superharmonic in $\mathbb{D}$ {\rm (see Lemma \ref{le-9})}. In particular, Theorem \ref{th-3} is also true for $p=\infty$. The proof method used to prove Theorem \ref{th-3} for $p\in[1,\infty)$ is still valid for $p=\infty$. \end{Rem} For $t\geq0$, let $\omega_{\alpha}(t)=t^{\alpha}$ and $\omega_{\begin{equation}ta}(t)=t^{\begin{equation}ta}$, where $\alpha, \begin{equation}ta\in(0,1]$ are constants. In particular, if we take $\omega_{1}=\omega_{\alpha}$ and $\omega_{2}=\omega_{\begin{equation}ta}$ in Theorem \ref{th-3}, then we obtain the following result. \begin{equation}gin{Cor} Suppose that $p\in[1,\infty)$ is a constant and $\phi$ is an analytic function of $\mathbb{D}$ into itself. Then the following conditions are equivalent. \begin{equation}gin{enumerate} \item[{\rm $(\mathscr{G}_{1})$}] $\phi\in\mathscr{F}\big(\Lambda_{\omega_{\alpha},\infty}(\mathbb{D})\cap\mathscr{H}(\mathbb{D}),\Lambda_{\omega_{\begin{equation}ta},p}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})\big)$; \item[{\rm $(\mathscr{G}_{2})$}] There is a positive constant $M$ such that, for $z\in\mathbb{D}$, $$\left(\int_{0}^{2\pi}\left(|\phi'(ze^{i\theta})| {\big(d_{\mathbb{D}}(\phi(ze^{i\theta}))\big)^{\alpha-1}}\right)^{p}d\theta\right)^{\frac{1}{p}} \leq M\big(d_{\mathbb{D}}(z)\big)^{\begin{equation}ta-1};$$ \end{enumerate} Furthermore, assume that $\phi$ is continuous in $\overlineerline{\mathbb{D}}$ and $\begin{equation}ta\neq1$. Then, the conditions {\rm $(\mathscr{G}_{1})$} and {\rm $(\mathscr{G}_{2})$} are equivalent to the conditions $(\mathscr{G}_{3})$$\sim$$(\mathscr{G}_{5})$. 
\begin{enumerate} \item[{\rm $(\mathscr{G}_{3})$}] ${\big(d_{\mathbb{D}}(\phi)\big)^{\alpha}}\in\Lambda_{\omega_{\beta},p}(\overline{\mathbb{D}});$ \item[{\rm $(\mathscr{G}_{4})$}] ${\big(d_{\mathbb{D}}(\phi)\big)^{\alpha}}\in\Lambda_{\omega_{\beta},p}(\mathbb{T})$ and there is a positive constant $M$ such that, for $r\in(0,1)$, $$\left(\int_{0}^{2\pi}\left\{(d_{\mathbb{D}}(\phi(re^{i\theta})))^{\alpha}- (d_{\mathbb{D}}(\phi(e^{i\theta})))^{\alpha}\right\}_{+}^{p}d\theta\right)^{\frac{1}{p}} \leq\,M(d_{\mathbb{D}}(r))^{\beta};$$ \item[{\rm $(\mathscr{G}_{5})$}] $(d_{\mathbb{D}}(\phi))^{\alpha}\in\Lambda_{\omega_{\beta},p}(\mathbb{T})$ and there is a positive constant $M$ such that, for $r\in(0,1)$, $$\left(\int_{0}^{2\pi}\left((d_{\mathbb{D}}(\phi(re^{i\theta})))^{\alpha}-P[(d_{\mathbb{D}}(\phi))^{\alpha}](re^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}} \leq\,M(d_{\mathbb{D}}(r))^{\beta}.$$ \end{enumerate} \end{Cor} The proofs of Theorems \ref{Open-1}, \ref{thm-1.0}, \ref{thm-5.0}, \ref{t-3} and \ref{thm-8} will be presented in Sec. \ref{sec2}, and the proof of Theorem \ref{th-3} will be given in Sec. \ref{sec3}. \section{Equivalent norms and Hardy-Littlewood type theorems on Lipschitz type spaces}\label{sec2} \subsection{The proof of Theorem \ref{Open-1}} Let $f$ be analytic in $\mathbb{D}$. We will show that if $|f|\in\Lambda_{\omega_{\alpha},\infty}(\mathbb{D})$ for some $\alpha>1$, then $f$ is a constant function, and consequently $f\in\Lambda_{\omega_{\alpha},\infty}(\mathbb{D}).$ It follows from the assumptions that there is a positive constant $M$ such that, for all $z,w\in\mathbb{D}$, \begin{equation}\label{o-p-1}\big||f(z)|-|f(w)|\big|\leq M|z-w|^{\alpha}.\end{equation} Let $E=\{\varsigma\in\mathbb{D}:~f(\varsigma)=0\}$. We may assume that $\mathbb{D}\setminus E \neq \emptyset$, for otherwise $f\equiv0$ in $\mathbb{D}$ and there is nothing to prove. For $z\in\mathbb{D}\setminus E$, let $w=z+re^{i\theta}\in\mathbb{D}$, where $r\in(0,d_{\mathbb{D}}(z))$ and $\theta\in[0,2\pi]$. Then, by (\ref{o-p-1}), we have \begin{eqnarray*} 0\leq\lim_{r\rightarrow0^{+}}\frac{\big||f(z)|-|f(z+re^{i\theta})|\big|}{r}\leq M\lim_{r\rightarrow0^{+}}r^{\alpha-1}=0, \end{eqnarray*} which implies that \begin{eqnarray*} 0&=&\max_{\theta\in[0,2\pi]}\lim_{r\rightarrow0^{+}}\frac{\big||f(z)|-|f(z+re^{i\theta})|\big|}{r}\\ &=&\max_{\theta\in[0,2\pi]}\big||f(z)|_{x}\cos\theta+|f(z)|_{y}\sin\theta\big|\\ &=&\max_{\theta\in[0,2\pi]}\frac{1}{2}\left|\big(|f(z)|_{x}+i|f(z)|_{y}\big)e^{-i\theta}+\big(|f(z)|_{x}-i|f(z)|_{y}\big)e^{i\theta}\right|\\ &=&\big||f(z)|_{z}\big|+\big||f(z)|_{\overline{z}}\big|\\ &=&{|f'(z)|}. \end{eqnarray*} Hence $f'(z)=0$ for all $z\in\mathbb{D}\setminus E$. Since $\mathbb{D}\setminus E$ is a nonempty open subset of $\mathbb{D}$, the identity theorem yields $f'(z)=0$ for all $z\in\mathbb{D}$. Therefore, $f$ is a constant function in $\mathbb{D}$, and $f\in\Lambda_{\omega_{\alpha},\infty}(\mathbb{D}).$ The proof of this theorem is complete. \qed
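The restriction $\alpha>1$ in Theorem \ref{Open-1} cannot be removed. The following elementary observation, recorded here only for illustration, shows that for the power majorant $\omega_{\alpha}(t)=t^{\alpha}$ with $\alpha=1$ the conclusion fails: the non-constant analytic function $f(z)=z$ satisfies $$\big||f(z)|-|f(w)|\big|=\big||z|-|w|\big|\leq|z-w|=\omega_{\alpha}(|z-w|),\qquad z,w\in\mathbb{D},$$ so that $|f|\in\Lambda_{\omega_{\alpha},\infty}(\mathbb{D})$ while $f$ is not constant.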
\begin{LemF}{\rm (\cite[p.19]{Zy})}\label{Mink} For $\nu\geq1$, Minkowski's inequality in integral form reads $$\left(\int_{A_{1}}\left|\int_{B_{1}}\mathscr{X}(\zeta,\xi)d\mu_{\xi}\right|^{\nu}d\mu_{\zeta}\right)^{\frac{1}{\nu}}\leq \int_{B_{1}}\left(\int_{A_{1}}\left|\mathscr{X}(\zeta,\xi)\right|^{\nu}d\mu_{\zeta}\right)^{\frac{1}{\nu}}d\mu_{\xi},$$ where $A_{1}$ and $B_{1}$ are measurable sets with positive measures $d\mu_{\zeta}$ and $d\mu_{\xi}$, respectively, and $\mathscr{X}$ is integrable on $A_{1}\times B_{1}$. \end{LemF} It follows from Lewy's Theorem that $f\in\mathscr{H}(\mathbb{D})$ is locally univalent and sense-preserving in $\mathbb{D}$ if and only if $J_{f}>0$ in $\mathbb{D}$, which means that $f_{z}\neq 0$ in $\mathbb{D}$ and the second complex dilatation $\mu_{f} =\overline{f_{\bar{z}}}/f_{z}$ satisfies $|\mu_{f} (z)|<1$ in $\mathbb{D}$ (see \cite{Lewy}). From the definition of $K$-quasiregular mappings, we know that $f\in\mathscr{H}(\mathbb{D})$ is a harmonic $K$-quasiregular mapping if and only if $J_{f}(z)>0$ and $\mathscr{M}_{f}^{2}(z)\leq KJ_{f}(z)$ for all $z\in\mathbb{D}$. By using almost the same proof method as in \cite[Proposition 3.1]{KM}, we get the following result. \begin{Lem}\label{mate-07} For $K\geq1$, let $f$ be a harmonic $K$-quasiregular mapping of $\mathbb{D}$ into itself. Then, for $z\in\mathbb{D}$, $$\mathscr{M}_{f}(z)\leq K\frac{1-|f(z)|^{2}}{1-|z|^{2}}.$$ \end{Lem} Using Lemma \ref{mate-07}, we obtain the following lemma. \begin{Lem}\label{mate-07b} For $K\geq1$, let $f$ be a harmonic $K$-quasiregular mapping of $\mathbb{D}$ into $\mathbb{C}$. Then, for $z\in\mathbb{D}$ and $0<\delta<d_{\mathbb{D}}(z)$, \begin{equation} \label{mate07c} \mathscr{M}_{f}(z)\leq \frac{2K\left(M_{z, \delta}-|f(z)|\right)}{\delta}, \end{equation} where $M_{z,\delta}=\sup\{|f(w)|:~|w-z|<\delta \}$. \end{Lem} \begin{proof} We may assume that $f$ is not a constant. For a fixed point $z\in\mathbb{D}$, let $$F(\zeta)=\frac{f(z+\delta \zeta )}{M_{z,\delta}},~\zeta\in\mathbb{D}.$$ Elementary calculations lead to \begin{equation}\label{eq-rtp-1} F(0)=\frac{f(z)}{M_{z,\delta}}~\mbox{and}~\mathscr{M}_{F}(0)= \frac{\delta}{M_{z, \delta}}\mathscr{M}_{f}(z). \end{equation} Since $F$ is a harmonic $K$-quasiregular mapping of $\mathbb{D}$ into itself, by (\ref{eq-rtp-1}) and Lemma \ref{mate-07}, we see that \begin{eqnarray*} \frac{\delta}{M_{z,\delta}}\mathscr{M}_{f}(z)&=&\mathscr{M}_{F}(0)\leq K\left(1-|F(0)|^{2}\right)\leq2K(1-|F(0)|)\\ &=&2K\left(1-\frac{|f(z)|}{M_{z,\delta}}\right). \end{eqnarray*} Consequently, \[ \delta\mathscr{M}_{f}(z)\leq 2K\left(M_{z,\delta}-|f(z)|\right), \] which implies (\ref{mate07c}). \end{proof} \begin{Lem}\label{mate-07d} Suppose that $p\in[1,\infty)$ and $0<\varepsilon<1-|z|$, $z\in \mathbb{D}$. For $K\geq1$, let $f$ be a harmonic $K$-quasiregular mapping of $\mathbb{D}$ into $\mathbb{C}$. Then, for $z\in\mathbb{D}$, \[ \mathscr{M}_{f}^{p}(z)\leq \frac{M}{\varepsilon^{p+2}} \int_{\mathbb{D}(z,\varepsilon)}\{|f(w)|-|f(z)|\}_+^p dm(w), \] where $M$ is a positive constant which depends only on $p$ and $K$, and $dm=dA/\pi$ $($see Theorem \ref{thm-1.0}$)$. \end{Lem} \begin{proof} Let $z\in \mathbb{D}$ be fixed with $0<\varepsilon<1-|z|$. The function $w\mapsto\{ |f(w)|-|f(z)|\}_+$ is subharmonic in $\mathbb{D}$, being the composition of the subharmonic function $|f|$ with the convex non-decreasing function $t\mapsto\{t-|f(z)|\}_{+}$.
By applying Lemma \ref{mate-07b} with $\delta=\varepsilon/2$, we have \begin{equation} \label{eq-3-1} \mathscr{M}_{f}(z)\leq \frac{4K\left(M_{z,\varepsilon/2}-|f(z)|\right)}{\varepsilon} =\frac{4K}{\varepsilon}\sup_{w\in \mathbb{D}(z,\varepsilon/2)}\{ |f(w)|-|f(z)|\}_+. \end{equation} Since $\{ |f(w)|-|f(z)|\}_{+}^p$ is subharmonic in $\mathbb{D}$, for $w\in \mathbb{D}(z,\delta)$, we have \[ \{ |f(w)|-|f(z)|\}_{+}^p \leq \frac{1}{\delta^2}\int_{\mathbb{D}(z,\varepsilon)}\{ |f(\eta)|-|f(z)|\}_{+}^{p}dm(\eta), \] which combined with (\ref{eq-3-1}) implies that \[ \mathscr{M}_{f}^p(z)\leq \frac{4^{p+1}K^p}{\varepsilon^{p+2}}\int_{\mathbb{D}(z,\varepsilon)}\{ |f(\eta)|-|f(z)|\}_{+}^{p}dm(\eta). \] This completes the proof. \end{proof} From \cite[Theorem 1.8]{KM-2011}, we obtain the following result. \begin{Lem}\label{real-harmonic-07} Let $f$ be a real harmonic mapping of $\mathbb{D}$ into $(-1,1)$. Then, there exists a constant $M>0$ which is independent of $f$ such that for $z\in\mathbb{D}$, $$\mathscr{M}_{f}(z)\leq M\frac{1-|f(z)|^{2}}{1-|z|^{2}}.$$ \end{Lem} Using Lemma \ref{real-harmonic-07} and an argument similar to that in the proof of Lemma \ref{mate-07b}, we obtain the following lemma. \begin{Lem}\label{real-harmonic-07b} Let $f$ be a real harmonic mapping of $\mathbb{D}$ into $\mathbb{R}$. Then, there exists a constant $M>0$ which is independent of $f$ such that for $z\in\mathbb{D}$ and $0<\delta<d_{\mathbb{D}}(z)$, \[ \mathscr{M}_{f}(z)\leq \frac{2M\left(M_{z, \delta}-|f(z)|\right)}{\delta}, \] where $M_{z,\delta}=\sup\{|f(w)|:~|w-z|<\delta \}$. \end{Lem} \begin{Lem}\label{real-harmonic-07d} Suppose that $p\in[1,\infty)$ and $0<\varepsilon<1-|z|$, $z\in \mathbb{D}$. Let $f$ be a real harmonic mapping of $\mathbb{D}$ into $\mathbb{R}$. Then, for $z\in\mathbb{D}$, \[ \mathscr{M}_{f}^{p}(z)\leq \frac{M}{\varepsilon^{p+2}} \int_{\mathbb{D}(z,\varepsilon)}\{|f(w)|-|f(z)|\}_+^p dm(w), \] where $M$ is a positive constant which depends only on $p$, and $dm=dA/\pi$ $($see Theorem \ref{thm-1.0}$)$. \end{Lem} \subsection{The proof of Theorem \ref{thm-1.0}} \noindent $\mathbf{Case~1.}$ We first give a proof in the case $p\in[1,\infty)$. We split the proof of this case into twelve steps. \noindent $\mathbf{Step~1.}$ ``$(\mathscr{A}_{2})\Rightarrow(\mathscr{A}_{1})$".\\ For any fixed $z\in\mathbb{D}$, let $r=2d_{\mathbb{D}}(z)/3$. By assumption, there is a positive constant $M$ such that \begin{equation}\label{eq-0.01k} \left(\int_{0}^{2\pi}|f(e^{i\theta}z)-f(e^{i\theta}\xi)|^{p}d\theta\right)^{\frac{1}{p}}\leq M\omega(|z-\xi|)\leq M\omega(r) \end{equation} for $\xi\in\overline{\mathbb{D}(z,r)}$. For $\theta\in[0,2\pi]$ and $e^{i\theta}\xi\in\mathbb{D}(e^{i\theta}z,r)$, we have $$f(e^{i\theta}\xi)=\frac{1}{2\pi}\int_{0}^{2\pi}P_{r}(\xi,e^{i\eta})f(ze^{i\theta}+e^{i\theta}re^{i\eta})d\eta,$$ where $$P_{r}(\xi,e^{i\eta})=\frac{r^{2}-|\xi-z|^{2}}{|re^{i\eta}-(\xi-z)|^{2}}.$$ Elementary calculations lead to \begin{eqnarray*} \frac{\partial}{\partial\xi}P_{r}(\xi,e^{i\eta})&=&\frac{\overline{z}-\overline{\xi}}{|re^{i\eta}-(\xi-z)|^{2}}\\ &&+ \frac{(r^{2}-|\xi-z|^{2})(re^{-i\eta}-(\overline{\xi}-\overline{z}))}{|re^{i\eta}-(\xi-z)|^{4}}. \end{eqnarray*} Then, for $e^{i\theta}\xi\in\mathbb{D}(e^{i\theta}z,3r/4)$, we have \begin{equation}\label{eq-0.1k} \left|\frac{\partial}{\partial\xi}P_{r}(\xi,e^{i\eta})\right|\leq\frac{40}{r}.
\end{equation} Indeed, for such $\xi$ we have $|re^{i\eta}-(\xi-z)|\geq r/4$ and $r^{2}-|\xi-z|^{2}\leq \frac{7r}{4}\,|re^{i\eta}-(\xi-z)|$, so the two terms above are bounded by $12/r$ and $28/r$, respectively. A similar computation yields \begin{equation}\label{eq-0.2k} \left|\frac{\partial}{\partial\overline{\xi}}P_{r}(\xi,e^{i\eta})\right|\leq\frac{40}{r}. \end{equation} It follows from (\ref{eq-0.1k}), (\ref{eq-0.2k}) and H\"{o}lder's inequality that \begin{eqnarray}\label{eq-0.3k} \big(\mathscr{M}_{f}(e^{i\theta}\xi)\big)^{p}&\leq&\int_{0}^{2\pi}\left(\mathscr{M}_{P_{r}}(\xi, e^{i\eta})\right)^p \left|\mathscr{P}_{f}(\theta,\eta)\right|^{p}\frac{d\eta}{2\pi}\\ \nonumber &\leq&\frac{80^{p}}{r^{p}}\int_{0}^{2\pi} \left|\mathscr{P}_{f}(\theta,\eta)\right|^{p}\frac{d\eta}{2\pi}, \end{eqnarray} where $\mathscr{P}_{f}(\theta,\eta)=f(ze^{i\theta}+e^{i\theta}re^{i\eta})-f(ze^{i\theta})$. By taking $\xi=z$ in (\ref{eq-0.3k}) and integrating both sides of (\ref{eq-0.3k}) with respect to $\theta$ from $0$ to $2\pi$, we obtain from (\ref{eq-0.01k}) that \begin{eqnarray*} \int_{0}^{2\pi}\left(\mathscr{M}_{f}(ze^{i\theta})\right)^{p}d\theta &\leq& \frac{(80)^{p}}{r^{p}}\int_{0}^{2\pi}\left(\int_{0}^{2\pi} \big|\mathscr{P}_{f}(\theta,\eta)\big|^{p}\frac{d\eta}{2\pi}\right) d\theta \\ &=& \frac{(80)^{p}}{r^{p}}\int_{0}^{2\pi}\left(\int_{0}^{2\pi} \big|\mathscr{P}_{f}(\theta,\eta)\big|^{p}d\theta\right) \frac{d\eta}{2\pi} \\ &\leq& (80M)^p\left(\frac{\omega\left(\frac{2}{3}d_{\mathbb{D}}(z)\right)}{\frac{2}{3}d_{\mathbb{D}}(z)}\right)^{p}\\ &\leq& (120M)^{p}\left(\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}\right)^{p}. \end{eqnarray*} \noindent $\mathbf{Step~2.}$ ``$(\mathscr{A}_{1})\Rightarrow(\mathscr{A}_{2})$".\\ Since $\mathbb{D}$ is a $\Lambda_{\omega}$-extension domain, for each pair of points $z_{1},z_{2}\in \mathbb{D}$ there is a rectifiable curve $\gamma\subset \mathbb{D}$ joining $z_{1}$ to $z_{2}$ such that \begin{equation}\label{eq-0.4k} \int_{\gamma}\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}ds(z)\leq\,M\omega(|z_{1}-z_{2}|), \end{equation} where $M$ is a positive constant. By Lemma F, we have $$\left(\int_{0}^{2\pi}\left(\int_{\gamma}\mathscr{M}_{f}(ze^{i\eta})ds(z)\right)^{p}d\eta\right)^{\frac{1}{p}} \leq\int_{\gamma}\left(\int_{0}^{2\pi}\left(\mathscr{M}_{f}(ze^{i\eta})\right)^{p}d\eta\,\right)^{\frac{1}{p}}ds(z),$$ which, together with the assumption and (\ref{eq-0.4k}), implies that there is a positive constant $M$ such that \begin{eqnarray*} \mathcal{L}_{p}[f](z_{1},z_{2})&\leq& \left(\int_{0}^{2\pi}\left(\int_{\gamma}\mathscr{M}_{f}(ze^{i\eta})ds(z)\right)^{p}d\eta\right)^{\frac{1}{p}}\\ &\leq& \int_{\gamma}\left(\int_{0}^{2\pi}\left(\mathscr{M}_{f}(ze^{i\eta})\right)^{p}d\eta\right)^{\frac{1}{p}}ds(z)\\ &\leq& M\int_{\gamma}\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}ds(z)\\ &\leq&\,M^{2}\omega(|z_{1}-z_{2}|). \end{eqnarray*} Since $f$ is continuous on $\overline{\mathbb{D}}$, the above inequality implies that $f\in\Lambda_{\omega,p}(\overline{\mathbb{D}})$. \noindent $\mathbf{Step~3.}$ ``$(\mathscr{A}_{2})\Rightarrow(\mathscr{A}_{3})$".\\ Let $f=h+\overline{g}=u+iv$, where $h=u_{1}+iv_{1}$ and $g=u_{2}+iv_{2}$. Then $f\in\Lambda_{\omega,p}(\overline{\mathbb{D}})$ if and only if $u,~v\in\Lambda_{\omega,p}(\overline{\mathbb{D}})$. Let $F=h+g$ and $\widetilde{v}=\mbox{Im}(F)$, where ``$\mbox{Im}$" is the imaginary part of a complex number.
Since $\mbox{Im}(iF)=\mbox{Im}(if)=u$, we see that \begin{equation}\label{eq-gj-0.1}\mathscr{M}_{\mbox{Re}(iF)}=\mathscr{M}_{\widetilde{v}}=\mathscr{M}_{u},\end{equation} where ``$\mbox{Re}$" is the real part of a complex number. By (\ref{eq-gj-0.1}) and Step 2, we have $\widetilde{v}\in\Lambda_{\omega,p}(\overline{\mathbb{D}}),$ which implies that \begin{equation}\label{eq-gj-0.2}v_{1}=\frac{v+\widetilde{v}}{2}\in\Lambda_{\omega,p}(\overline{\mathbb{D}})\end{equation} and \begin{equation}\label{eq-gj-0.3}v_{2}=\frac{\widetilde{v}-v}{2}\in\Lambda_{\omega,p}(\overline{\mathbb{D}}).\end{equation} It follows from $\mathscr{M}_{u_{1}}=\mathscr{M}_{v_{1}}$, $\mathscr{M}_{u_{2}}=\mathscr{M}_{v_{2}}$ and Step 2 that \begin{equation}\label{eq-gj-0.4}u_{1}\in\Lambda_{\omega,p}(\overline{\mathbb{D}})\end{equation} and \begin{equation}\label{eq-gj-0.5}u_{2}\in\Lambda_{\omega,p}(\overline{\mathbb{D}}).\end{equation} Consequently, $h\in\Lambda_{\omega,p}(\overline{\mathbb{D}})$ follows from $(\ref{eq-gj-0.2})$ and $(\ref{eq-gj-0.4})$, and $g\in\Lambda_{\omega,p}(\overline{\mathbb{D}})$ follows from $(\ref{eq-gj-0.3})$ and $(\ref{eq-gj-0.5})$. \noindent $\mathbf{Step~4.}$ ``$(\mathscr{A}_{2})\Rightarrow(\mathscr{A}_{6})$", ``$(\mathscr{A}_{2})\Rightarrow(\mathscr{A}_{8})$", ``$(\mathscr{A}_{3})\Rightarrow(\mathscr{A}_{2})$", ``$(\mathscr{A}_{3})\Rightarrow(\mathscr{A}_{4})$", ``$(\mathscr{A}_{2})\Rightarrow(\mathscr{A}_{5})$", ``$(\mathscr{A}_{6})\Rightarrow(\mathscr{A}_{7})$" and ``$(\mathscr{A}_{8})\Rightarrow(\mathscr{A}_{9})$" are obvious. \noindent $\mathbf{Step~5.}$ ``$(\mathscr{A}_{5})\Rightarrow(\mathscr{A}_{2})$".\\ Let $f$ be a harmonic $K$-quasiregular mapping in $\mathbb{D}$ with $|f|\in\Lambda_{\omega,p}(\overline{\mathbb{D}}).$ Then, there is a positive constant $M$ such that \begin{equation} \label{Lip-5} \left(\int_{0}^{2\pi}||f(e^{i\eta}z_{1})|-|f(e^{i\eta}z_{2})||^{p}d\eta\right)^{\frac{1}{p}}\leq M\omega(|z_{1}-z_{2}|), \end{equation} for $z_1, z_2\in \overline{\mathbb{D}}$. Let $z\in \mathbb{D}$ be fixed, and let $\varepsilon_0=d_{\mathbb{D}}(z)/2$. Set $$\mathscr{N}_{p}=\int_{0}^{2\pi}\left(\mathscr{M}_{f}(ze^{i\eta})\right)^{p}d\eta.$$ Then, by Lemma \ref{mate-07d}, there is a positive constant $M^{\ast}$ such that \begin{eqnarray*} \mathscr{N}_{p} &\leq & \int_{0}^{2\pi}\frac{M^{\ast}}{\varepsilon_0^{p+2}} \int_{\mathbb{D}(ze^{i\eta},\varepsilon_0)}\{|f(w)|-|f(ze^{i\eta})|\}_+^p dm(w)d\eta \\ &=& \int_{0}^{2\pi}\frac{M^{\ast}}{\varepsilon_0^{p+2}} \int_{\mathbb{D}(z,\varepsilon_0)}\{|f(we^{i\eta})|-|f(ze^{i\eta})|\}_+^p dm(w)d\eta \\ &=& \frac{M^{\ast}}{\varepsilon_0^{p+2}} \int_{\mathbb{D}(z,\varepsilon_0)} \int_{0}^{2\pi}\{|f(we^{i\eta})|-|f(ze^{i\eta})|\}_+^p d\eta dm(w), \end{eqnarray*} which, together with (\ref{Lip-5}), implies that there is a positive constant $M$ such that \begin{eqnarray}\label{eq-kp-0.1} \mathscr{N}_{p}&\leq& \frac{M^{\ast}M^{p}}{\varepsilon_0^{p+2}} \int_{\mathbb{D}(z,\varepsilon_0)} \omega(|w-z|)^pdm(w) \\ \nonumber &\leq & \frac{M^{\ast}M^{p}}{\varepsilon_0^{p}} \big(\omega(\varepsilon_0)\big)^p \\ \nonumber &\leq & 2^pM^{\ast}M^{p}\frac{\big(\omega(d_{\mathbb{D}}(z))\big)^p}{\big(d_{\mathbb{D}}(z)\big)^{p}}. \end{eqnarray} Combining (\ref{eq-kp-0.1}) and Step 2 gives $f\in\Lambda_{\omega,p}(\overline{\mathbb{D}}).$ \noindent $\mathbf{Step~6.}$ ``$(\mathscr{A}_{6})\Rightarrow(\mathscr{A}_{2})$".
\\ Since $u\in\Lambda_{\omega,p}(\overline{\mathbb{D}}),$ by the same reasoning as in the proof of ``$(\mathscr{A}_{2})\Rightarrow(\mathscr{A}_{1})$", we see that there is a positive constant $M$ such that, for $z\in\mathbb{D}$, \begin{equation}\label{h-j-0.1}\left(\int_{0}^{2\pi}\left(\mathscr{M}_{u}(ze^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}\leq M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}.\end{equation} Note that $u=(h+g+\overline{h+g})/2$, which gives that $\mathscr{M}_{u}=|h'+g'|$. Since $f$ is a harmonic $K$-quasiregular mapping in $\mathbb{D}$, we see that \begin{equation}\label{h-j-0.2}\frac{1}{K}\mathscr{M}_{f}\leq |h'|-|g'|\leq|h'+g'|=\mathscr{M}_{u}. \end{equation} Hence $(\mathscr{A}_{2})$ follows from (\ref{h-j-0.1}), (\ref{h-j-0.2}) and Step 2. \noindent $\mathbf{Step~7.}$ ``$(\mathscr{A}_{8})\Rightarrow(\mathscr{A}_{2})$".\\ Since $v\in\Lambda_{\omega,p}(\overline{\mathbb{D}}),$ by the same reasoning as in the proof of ``$(\mathscr{A}_{2})\Rightarrow(\mathscr{A}_{1})$", we see that there is a positive constant $M$ such that, for $z\in\mathbb{D}$, \begin{equation}\label{h-j-0.4}\left(\int_{0}^{2\pi}\left(\mathscr{M}_{v}(ze^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}\leq M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}.\end{equation} It is easy to see that $v=(h-g-\overline{(h-g)})/(2i)$, which yields that $\mathscr{M}_{v}=|h'-g'|$. Since $f$ is a harmonic $K$-quasiregular mapping in $\mathbb{D}$, we see that \begin{equation}\label{h-j-0.5}\frac{1}{K}\mathscr{M}_{f}\leq |h'|-|g'|\leq|h'-g'|=\mathscr{M}_{v}. \end{equation} Hence $(\mathscr{A}_{2})$ follows from (\ref{h-j-0.4}), (\ref{h-j-0.5}) and Step 2. \noindent $\mathbf{Step~8.}$ ``$(\mathscr{A}_{4})\Rightarrow(\mathscr{A}_{3})$" follows from ``$(\mathscr{A}_{5})\Rightarrow(\mathscr{A}_{2})$". \noindent $\mathbf{Step~9.}$ ``$(\mathscr{A}_{7})\Rightarrow(\mathscr{A}_{6})$" and ``$(\mathscr{A}_{9})\Rightarrow(\mathscr{A}_{8})$" follow from Lemma \ref{real-harmonic-07d} and the argument in ``$(\mathscr{A}_{5})\Rightarrow(\mathscr{A}_{2})$". \noindent $\mathbf{Step~10.}$ ``$(\mathscr{A}_{2})\Rightarrow(\mathscr{A}_{10})$".\\ By the same reasoning as in the proof of (\ref{eq-0.3k}), we see that, for $e^{i\eta}\xi\in\mathbb{D}(e^{i\eta}z,d_{\mathbb{D}}(z)/2)$, there is a positive constant $M$ such that $$\int_{0}^{2\pi}\left(\mathscr{M}_{f}(e^{i\eta}\xi)\right)^{p}d\eta \leq M^{p}\left(\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}\right)^{p}.$$ Then, by Lemma F, we have \begin{eqnarray*} \mathscr{Y}_{u,q,p}(z)&\leq&\mathscr{Y}_{f,q,p}(z) \\ &=& V_{q}(z) \left(\int_{0}^{2\pi}\left({\int_{\mathbb{D}\left(z,d_{\mathbb{D}}(z)/2\right)}\left(\mathscr{M}_{f}(\xi e^{i\eta})\right)^{q}dA(\xi)} \right)^{\frac{p}{q}}d\eta\right)^{\frac{1}{p}} \\ &\leq& V_{q}(z) \left(\int_{\mathbb{D}\left(z,d_{\mathbb{D}}(z)/2\right)}\left({\int_{0}^{2\pi}\left(\mathscr{M}_{f}(\xi e^{i\eta})\right)^{p}d\eta} \right)^{\frac{q}{p}}dA(\xi)\right)^{\frac{1}{q}} \\ &\leq& V_{q}(z) \left(\int_{\mathbb{D}\left(z,d_{\mathbb{D}}(z)/2\right)}\left(M^{p}\left(\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}\right)^{p} \right)^{\frac{q}{p}}dA(\xi)\right)^{\frac{1}{q}} \\ &=& M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}, \end{eqnarray*} where $ V_{q}(z)=1/\left|\mathbb{D}\left(z,d_{\mathbb{D}}(z)/2\right)\right|^{\frac{1}{q}}$ and $q\leq p<\infty$.
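For clarity, we record the instance of Lemma F used in passing from the second to the third line of the last display (a routine special case, stated here only for the reader's convenience): taking $\nu=p/q\geq1$, $d\mu_{\zeta}=d\eta$, $d\mu_{\xi}=dA(\xi)$ and $\mathscr{X}(\eta,\xi)=\left(\mathscr{M}_{f}(\xi e^{i\eta})\right)^{q}$ in Lemma F gives $$\left(\int_{0}^{2\pi}\left(\int_{\mathbb{D}\left(z,d_{\mathbb{D}}(z)/2\right)}\left(\mathscr{M}_{f}(\xi e^{i\eta})\right)^{q}dA(\xi)\right)^{\frac{p}{q}}d\eta\right)^{\frac{q}{p}} \leq \int_{\mathbb{D}\left(z,d_{\mathbb{D}}(z)/2\right)}\left(\int_{0}^{2\pi}\left(\mathscr{M}_{f}(\xi e^{i\eta})\right)^{p}d\eta\right)^{\frac{q}{p}}dA(\xi),$$ and raising both sides to the power $1/q$ yields the inequality used above.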
\noindent $\mathbf{Step~11.}$ ``$(\mathscr{A}_{10})\Rightarrow(\mathscr{A}_{2})$".\\ Since $\mathscr{M}_{u}=|h'+g'|$, we see that $\mathscr{M}_{u}^{q}$ is subharmonic in $\mathbb{D}$ for $q\in (0,\infty)$. Then we have \begin{eqnarray*} \big(\mathscr{M}_{u}(ze^{i\eta})\big)^{q}\leq\frac{1}{\left|\mathbb{D}\big(ze^{i\eta},\frac{d_{\mathbb{D}}(z)}{2}\big)\right|} \int_{\mathbb{D}\big(ze^{i\eta},\frac{d_{\mathbb{D}}(z)}{2}\big)}\left(\mathscr{M}_{u}(\xi )\right)^{q}dA(\xi), \end{eqnarray*} which, together with (\ref{h-j-0.2}) and $(\mathscr{A}_{10})$, implies that there is a positive constant $M$ such that \begin{eqnarray}\label{h-j-0.9} \frac{1}{K}\left(\int_{0}^{2\pi}\left(\mathscr{M}_{f}(ze^{i\eta})\right)^{p}d\eta\right)^{\frac{1}{p}}&\leq& \left(\int_{0}^{2\pi}\left(\mathscr{M}_{u}(ze^{i\eta})\right)^{p}d\eta\right)^{\frac{1}{p}}\\ \nonumber &\leq&\mathscr{Y}_{u,q,p}(z)\\ \nonumber &\leq& M\frac{\omega\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}, \end{eqnarray} where $0<q\leq p<\infty$. It follows from (\ref{h-j-0.9}) and ``$(\mathscr{A}_{1})\Rightarrow(\mathscr{A}_{2})$" that $f\in\Lambda_{\omega,p}(\overline{\mathbb{D}}).$ \noindent $\mathbf{Step~12.}$ The proof of $(\mathscr{A}_{11})\Leftrightarrow(\mathscr{A}_{2})$ is similar to that of $(\mathscr{A}_{10})\Leftrightarrow(\mathscr{A}_{2})$.\\ $\mathbf{Case~2.}$ $p=\infty$. The argument used in Case 1 remains valid for Case 2; one only needs to replace the integral means appearing for $p\in[1,\infty)$ with the corresponding formulas for $p=\infty$. We therefore omit the details. The proof of this theorem is complete. \qed \begin{Lem}\label{lem-4.0} Let $\omega$ be a majorant such that, for all $\delta\in[0,\pi]$, \begin{equation}\label{HJK-1}\delta\int_{\delta}^{\pi}\frac{\omega(t)}{t^{2}}dt\leq\,M\omega(\delta),\end{equation} where $M$ is a positive constant. For $p\in[1,\infty]$, if $\varphi\in\Lambda_{\omega,p}(\mathbb{T})$, then there is a positive constant $M$ which depends only on $\| \varphi\|_{\Lambda_{\omega,p}(\mathbb{T}),s}$ such that $$ \left\{ \begin{array}{ll} \left(\int_{0}^{2\pi}|\varphi\left(\widetilde{z}e^{i\eta}\right)-P[\varphi](ze^{i\eta})|^{p}d\eta\right)^{\frac{1}{p}}\leq M\omega(d_{\mathbb{D}}(z)), &p\in[1,\infty), \\ |\varphi\left(\widetilde{z}\right)-P[\varphi](z)|\leq M\omega(d_{\mathbb{D}}(z)), &p=\infty, \end{array} \right. $$ where $z\in\mathbb{D}\backslash\{0\}$ and $\widetilde{z}=z/|z|$. \end{Lem} \begin{proof} Without loss of generality, we assume that $p\in[1,\infty)$. For $z\in\mathbb{D}\setminus\{ 0\}$, let $$J(z)=\left(\int_{0}^{2\pi}|\varphi\left(\widetilde{z}e^{i\eta}\right)-P[\varphi](ze^{i\eta})|^{p}d\eta\right)^{\frac{1}{p}}.$$ We divide the proof of this lemma into two cases. \noindent $\mathbf{Case~1.}$ Let $z=re^{i\theta}\in\mathbb{D}$ with $r\in(0,1/4)$.
Then \begin{equation}\label{eq-re-0}\mathbf{P}(z,e^{i\tau})\leq\frac{1-r^{2}}{(1-r)^{2}}\leq\frac{5}{3}.\end{equation} It follows from $\varphi\in\Lambda_{\omega,p}(\mathbb{T})$, (\ref{eq-re-0}) and Lemma F that there is a positive constant $M$ which depends only on $\| \varphi\|_{\Lambda_{\omega,p}(\mathbb{T}),s}$ such that \begin{eqnarray*} J(z)&=& \left(\int_{0}^{2\pi}\left|\int_{0}^{2\pi}\big(\varphi(e^{i(\theta+\eta)})-\varphi(e^{i(\tau+\eta)})\big)\mathbf{P}(z,e^{i\tau})\frac{d\tau}{2\pi}\right|^{p} d\eta\right)^{\frac{1}{p}}\\ &\leq&\frac{5}{3}\left(\int_{0}^{2\pi}\left(\int_{0}^{2\pi}\big|\varphi(e^{i(\theta+\eta)})-\varphi(e^{i(\tau+\eta)})\big|\frac{d\tau}{2\pi}\right)^{p} d\eta\right)^{\frac{1}{p}}\\ &\leq&\frac{5}{3}\int_{0}^{2\pi}\left(\int_{0}^{2\pi}\big|\varphi(e^{i(\theta+\eta)})-\varphi(e^{i(\tau+\eta)})\big|^{p}d\eta\right)^{\frac{1}{p}} \frac{d\tau}{2\pi}\\ &\leq&\frac{5}{3}M\omega(2), \end{eqnarray*} which implies that \begin{eqnarray*}\frac{J(z)}{\omega(d_{\mathbb{D}}(z))}\leq\frac{\frac{5}{3}M\omega(2)}{\omega\left(\frac{3}{4}\right)}. \end{eqnarray*} \noindent $\mathbf{Case~2.}$ Let $z=re^{i\theta}\in\mathbb{D}$ with $r\geq1/4$. By Lemma F and the assumption that $\varphi\in\Lambda_{\omega,p}(\mathbb{T})$, there is a positive constant $M$ which depends only on $\| \varphi\|_{\Lambda_{\omega,p}(\mathbb{T}),s}$ such that \begin{eqnarray}\label{eq-rt-2.0}\nonumber J(z) &\leq&\left(\int_{0}^{2\pi}\left(\int_{0}^{2\pi}\big|\varphi(e^{i(\theta+\eta)})-\varphi(e^{i(\tau+\eta)})\big|\mathbf{P}(z,e^{i\tau})\frac{d\tau}{2\pi}\right)^{p} d\eta\right)^{\frac{1}{p}}\\ \nonumber &\leq&\int_{0}^{2\pi}\left(\int_{0}^{2\pi}\big|\varphi(e^{i(\theta+\eta)})-\varphi(e^{i(\tau+\eta)})\big|^{p}\left(\mathbf{P}(z,e^{i\tau})\right)^{p}d\eta\right)^{\frac{1}{p}} \frac{d\tau}{2\pi}\\ &\leq&MJ_{1}(z), \end{eqnarray} where $$J_{1}(z)=\frac{1}{2\pi}\int_{0}^{2\pi}\mathbf{P}(z,e^{i\tau})\omega(|e^{i\tau}-e^{i\theta}|)d\tau.$$ Next, we estimate $J_{1}$. Let $E_{1}(\theta)=\{\tau\in[-\pi+\theta, \pi+\theta]:~|\tau-\theta|\leq1-r\}$ and $E_{2}(\theta)=[-\pi+\theta, \pi+\theta]\backslash E_{1}(\theta)$. Since $\sin x\geq2x/\pi $ for $x\in[0,\pi/2]$, we see that \begin{eqnarray*} J_{1}(z)&=&\frac{1}{2\pi} \int_{-\pi+\theta}^{\pi+\theta}\frac{1-r^{2}}{(1-r)^{2}+4r\left(\sin\frac{\theta-\tau}{2}\right)^{2}}\omega(|e^{i\tau}-e^{i\theta}|)d\tau\\ &\leq&\frac{1}{2\pi}\int_{E_{1}(\theta)\cup E_{2}(\theta)}\frac{1-r^{2}}{(1-r)^{2}+\frac{4}{\pi^{2}}r(\theta-\tau)^{2}}\omega(|\tau-\theta|)d\tau\\ &\leq&\frac{1}{2\pi}\int_{E_{1}(\theta)}\frac{1-r^{2}}{(1-r)^{2}}\omega(1-r)d\tau+\frac{\pi}{2}\int_{ E_{2}(\theta)}\frac{1-r^{2}}{(\theta-\tau)^{2}}\omega(|\tau-\theta|)d\tau\\ &\leq&\frac{2}{\pi}\omega(1-r)+2\pi(1-r)\int_{1-r}^{\pi}\frac{\omega(t)}{t^{2}}dt, \end{eqnarray*} which, together with (\ref{HJK-1}) and (\ref{eq-rt-2.0}), gives \begin{eqnarray*} J(z)\leq\, M\left(2\pi+\frac{2}{\pi}\right)\omega(d_{\mathbb{D}}(z)).\end{eqnarray*} Combining Cases 1 and 2 gives the desired result. \end{proof} \subsection{The proof of Theorem \ref{thm-5.0}} We split the proof of this theorem into two steps. \noindent $\mathbf{Step~1.}$ We first prove ``$(\mathscr{C}_{2})\Rightarrow(\mathscr{C}_{1})$" for $p\in[1,\infty]$. Without loss of generality, we assume that $p\in[1,\infty)$. It then suffices to prove that $P[\varphi]\in\Lambda_{\omega,p}({\mathbb{D}})$.
For $z_{1},~z_{2}\in{\mathbb{D}}$, let $$\mathscr{J}=\left(\int_{0}^{2\pi}\left|P[\varphi](z_{1}e^{i\eta})-P[\varphi](z_{2}e^{i\eta})\right|^{p}d\eta\right)^{\frac{1}{p}}.$$ We divide the proof of this step into two cases. \noindent $\mathbf{Case~1.}$ Let $z_{1},~z_{2}\in\mathbb{D}$ with $\max\{|z_{1}|,~|z_{2}|\}\leq1/2$. Since \begin{eqnarray*} \mathbf{P}(z,e^{i\tau})=\frac{1-|z|^{2}}{|z-e^{i\tau}|^{2}}=\frac{e^{i\tau}}{e^{i\tau}-z}+\frac{\overline{z}}{e^{-i\tau}-\overline{z}}, \end{eqnarray*} we see that, for $z\in\mathbb{D}$ with $|z|\leq1/2$, \begin{eqnarray*} \mathscr{M}_{\mathbf{P}}(z)=\frac{1}{|e^{i\tau}-z|^{2}}+\frac{1}{|e^{-i\tau}-\overline{z}|^{2}}\leq8. \end{eqnarray*} Consequently, \begin{equation}\label{eq-rt-2.2} \left|\varpi_{\tau}(z_{1},z_{2})\right|\leq\max_{|z|\leq1/2}\{\mathscr{M}_{\mathbf{P}}(z)\}|z_{1}-z_{2}| \leq\frac{16}{\omega(2)}\omega(|z_{1}-z_{2}|), \end{equation} where $\varpi_{\tau}(z_{1},z_{2})=\mathbf{P}(z_{1},e^{i\tau})-\mathbf{P}(z_{2},e^{i\tau}).$ It follows from (\ref{eq-rt-2.2}), $\varphi\in\Lambda_{\omega,p}(\mathbb{T})$ and Lemma F that there is a positive constant $M$ such that \begin{eqnarray*} \mathscr{J}&=&\left(\int_{0}^{2\pi}\left|\int_{0}^{2\pi}\big(\varphi(e^{i(\tau+\eta)})-\varphi(e^{i\eta})\big) (\varpi_{\tau}(z_{1},z_{2}))\frac{d\tau}{2\pi}\right|^{p} d\eta\right)^{\frac{1}{p}}\\ &\leq&\int_{0}^{2\pi}\left(\int_{0}^{2\pi}|\varphi(e^{i(\tau+\eta)})-\varphi(e^{i\eta})|^{p} \left|\varpi_{\tau}(z_{1},z_{2})\right|^{p}d\eta\right)^{\frac{1}{p}}\frac{d\tau}{2\pi}\\ &\leq&\frac{16}{\omega(2)}\omega(|z_{1}-z_{2}|)\int_{0}^{2\pi}\left(\int_{0}^{2\pi}|\varphi(e^{i(\tau+\eta)})-\varphi(e^{i\eta})|^{p} d\eta\right)^{\frac{1}{p}}\frac{d\tau}{2\pi}\\ &\leq&16M\omega(|z_{1}-z_{2}|). \end{eqnarray*} \noindent $\mathbf{Case~2.}$ Let $z_{1},~z_{2}\in\mathbb{D}$ with $\max\{|z_{1}|,~|z_{2}|\}>1/2$. In this case, we write $z_{j}=r_{j}e^{i\theta_{j}}$ for $j\in\{1,2\}$, where $r_{j}=|z_{j}|$. Without loss of generality, we assume that $r_{2}\leq r_{1}$. For any fixed $r\in(0,1)$, let $$F_{r}(z)=P[\varphi](rz),~z\in\overline{\mathbb{D}}.$$ Moreover, for $\xi_{1},~\xi_{2}\in\mathbb{T}$, let \begin{eqnarray*} \mathscr{J}_{1}=\left(\int_{0}^{2\pi}|F_{r}(e^{i\eta}\xi_{1})-F_{r}(e^{i\eta}\xi_{2})|^{p}d\eta\right)^{\frac{1}{p}}. \end{eqnarray*} Then, by $\varphi\in\Lambda_{\omega,p}(\mathbb{T})$ and Lemma F, we see that there is a positive constant $M$ which is independent of $r$ such that \begin{eqnarray}\label{eq-rt-2.4}\nonumber \mathscr{J}_{1}&\leq&\int_{0}^{2\pi}\mathbf{P}(r,e^{i\tau}) \left(\int_{0}^{2\pi}\left|\varphi(e^{i(\tau+\eta)}\xi_{1})-\varphi(e^{i(\tau+\eta)}\xi_{2})\right|^{p} d\eta\right)^{\frac{1}{p}} \frac{d\tau}{2\pi}\\ &\leq&M\omega(|\xi_{1}-\xi_{2}|), \end{eqnarray} which implies that $F_{r}|_{\mathbb{T}}\in\Lambda_{\omega,p}(\mathbb{T})$ with a seminorm $\| F_{r}|_{\mathbb{T}}\|_{\Lambda_{\omega,p}(\mathbb{T}),s}$ that is bounded independently of $r\in (0,1)$.
By Minkowski's inequality, we have \begin{eqnarray*} \mathscr{J}&=&\bigg(\int_{0}^{2\pi}\big|F_{r_{1}}(e^{i(\theta_{1}+\eta)})-F_{r_{1}}(e^{i(\theta_{2}+\eta)})\\ &&+ F_{r_{1}}(e^{i(\theta_{2}+\eta)})-F_{r_{2}}(e^{i(\theta_{2}+\eta)})\big|^{p}d\eta\bigg)^{\frac{1}{p}}\\ &\leq&\mathscr{J}_{2}+\mathscr{J}_{3}, \end{eqnarray*} where $$\mathscr{J}_{2}=\left(\int_{0}^{2\pi}\left|F_{r_{1}}(e^{i(\theta_{1}+\eta)})-F_{r_{1}}(e^{i(\theta_{2}+\eta)})\right|^{p}d\eta\right)^{\frac{1}{p}}$$ and $$\mathscr{J}_{3}=\left(\int_{0}^{2\pi}\left|F_{r_{1}}(e^{i(\theta_{2}+\eta)})-F_{r_{2}}(e^{i(\theta_{2}+\eta)})\right|^{p}d\eta\right)^{\frac{1}{p}}.$$ By (\ref{eq-rt-2.4}), we see that there is a positive constant $M$ such that \begin{equation}\label{eq-rt-2.5} \mathscr{J}_{2}\leq M\omega(|e^{i\theta_{1}}-e^{i\theta_{2}}|). \end{equation} Since, for any fixed $\eta\in[0,2\pi]$, $$F_{r_{2}}(e^{i(\theta_{2}+\eta)})=F_{r_{1}}\left(\frac{r_{2}}{r_{1}}e^{i(\theta_{2}+\eta)}\right)= P[F_{r_{1}}|_{\mathbb{T}}]\left(\frac{r_{2}}{r_{1}}e^{i(\theta_{2}+\eta)}\right),$$ and since $F_{r_{1}}|_{\mathbb{T}}\in\Lambda_{\omega,p}(\mathbb{T})$ with $\| F_{r_1}|_{\mathbb{T}}\|_{\Lambda_{\omega,p}(\mathbb{T}),s}$ bounded independently of $r_1\in (0,1)$, Lemma \ref{lem-4.0} shows that there is a positive constant $M$ such that \begin{equation}\label{eq-rt-2.6} \mathscr{J}_{3}\leq M\omega\left(1-\frac{r_{2}}{r_{1}}\right). \end{equation} From $r_{1}>1/2$, we obtain that $$|e^{i\theta_{1}}-e^{i\theta_{2}}|\leq4|z_{1}-z_{2}|~\mbox{and}~1-\frac{r_{2}}{r_{1}}\leq2|z_{1}-z_{2}|,$$ which, together with (\ref{eq-rt-2.5}) and (\ref{eq-rt-2.6}), implies that there is a positive constant $M$ such that \[ \mathscr{J}\leq M\omega(|z_{1}-z_{2}|). \] Hence $(\mathscr{C}_{1})$ follows from Cases 1 and 2 for $p\in[1,\infty)$. \noindent $\mathbf{Step~2.}$ ``$(\mathscr{C}_{1})\Rightarrow(\mathscr{C}_{2})$" for $p=\infty$. Let \begin{equation}\label{bjh-1} \varphi(e^{i\tau})= \left\{ \begin{array}{ll} \omega(\tau), & \tau\in[0,\pi], \\ \omega(2\pi-\tau), & \tau\in[\pi,2\pi]. \end{array} \right. \end{equation} Since \begin{equation}\label{lk-0.1}\omega(t_{1}+t_{2})\leq\omega(t_{1})+\omega(t_{2})\end{equation} for $t_{1},~t_{2}\in[0,\infty)$, we see that $\varphi\in\Lambda_{\omega,\infty}(\mathbb{T})$. Then, by $(\mathscr{C}_{1})$, we have $P[\varphi]\in\Lambda_{\omega,\infty}(\overline{\mathbb{D}})$. In the following, we split the remaining proof of this step into two cases. \noindent $\mathbf{Case~3.}$ Let $\delta\in[0,1/2]$. In this case, let $r=1-\delta$. Since $$P[\varphi](1)=\varphi(1)=\omega(0)=0,$$ by $P[\varphi]\in\Lambda_{\omega,\infty}(\overline{\mathbb{D}})$, we see that there is a positive constant $M$ such that \begin{eqnarray*} \left|P[\varphi](r)\right|=\left|P[\varphi](r)-P[\varphi](1)\right|\leq M\omega(1-r), \end{eqnarray*} which gives that \begin{eqnarray*} \mathscr{J}_{4}&\leq&2\int_{1-r}^{\pi}\frac{(1-r)\omega(t)}{(1-r)^{2}+t^{2}}dt \leq 2\int_{1-r}^{\pi}\frac{(1-r^{2})\omega(t)dt}{(1-r)^{2}+4r\sin^{2}\left(\frac{t}{2}\right)}\\ &\leq& 2\left|\int_{0}^{\pi}\mathbf{P}(r,e^{i\tau})\omega(\tau)d\tau +\int_{\pi}^{2\pi}\mathbf{P}(r,e^{i\tau})\omega(2\pi-\tau)d\tau\right|\\ &=& 4\pi \left|P[\varphi](r)\right|\\ &\leq& 4\pi M\omega(1-r), \end{eqnarray*} where $$\mathscr{J}_{4}=(1-r)\int_{1-r}^{\pi}\frac{\omega(t)}{t^{2}}dt.$$ \noindent $\mathbf{Case~4.}$ Let $\delta\in[1/2, \pi]$.
For this case, we have $$ \delta\int_{\delta}^{\pi}\frac{\omega(t)}{t^{2}}dt\leq\pi\int_{\frac{1}{2}}^{\pi}\frac{\omega(t)}{t^{2}}dt\leq 2\pi\int_{\frac{1}{2}}^{\pi}\frac{\omega(t)}{t}dt\leq2\pi^{2}\frac{\omega\left(\frac{1}{2}\right)}{\frac{1}{2}}\leq4\pi^{2}\omega(\delta). $$ Combining Cases 3 and 4 gives $(\mathscr{C}_{2})$. The proof of this theorem is complete. \qed \subsection{The proof of Theorem \ref{t-3}} \noindent $\mathbf{Case~1.}$ We first give a proof in the case $p\in[1,\infty)$. We divide the proof of this case into two steps. \noindent $\mathbf{Step~1.}$ ``$(\mathscr{D}_{1})\Rightarrow(\mathscr{D}_{2})$".\\ Since $f$ is continuous up to the boundary, $f\in\Lambda_{\omega,p}(\mathbb{D})$ implies that \begin{equation}\label{eq-jl-1}|f|\in\Lambda_{\omega,p}(\overline{\mathbb{D}}).\end{equation} Since $|f|\in\Lambda_{\omega,p}(\mathbb{T})$ and $\omega$ is a regular majorant, by Theorem \ref{thm-5.0} (or Corollary \ref{cor-0.11}), we see that \begin{equation}\label{eq-jl-2}P[|f|]\in\Lambda_{\omega,p}(\overline{\mathbb{D}}).\end{equation} Let $$\mathscr{J}_{5}=\left(\int_{0}^{2\pi}\left(P[|f|](ze^{i\theta})-|f(ze^{i\theta})|\right)^{p}d\theta\right)^{\frac{1}{p}}.$$ Then, by (\ref{eq-jl-1}), (\ref{eq-jl-2}) and the Minkowski inequality, we see that there is a positive constant $M$ such that \begin{eqnarray*} \mathscr{J}_{5} &\leq& \left(\int_{0}^{2\pi}\left|P[|f|](ze^{i\theta})-|f(\widetilde{z}e^{i\theta})|\right|^{p}d\theta\right)^{\frac{1}{p}}\\ &&+\left(\int_{0}^{2\pi}\left||f(\widetilde{z}e^{i\theta})|-|f(ze^{i\theta})|\right|^{p}d\theta\right)^{\frac{1}{p}}\\ &\leq&M\omega(d_{\mathbb{D}}(z)), \end{eqnarray*} where $z\in\mathbb{D}\backslash\{0\}$ and $\widetilde{z}=z/|z|$. Hence $(\mathscr{D}_{2})$ holds. \noindent $\mathbf{Step~2.}$ ``$(\mathscr{D}_{2})\Rightarrow(\mathscr{D}_{1})$".\\ For a fixed point $z\in\mathbb{D}$ and a fixed $\theta\in[0,2\pi]$, we have \begin{eqnarray}\label{kkl-1} \{|f(we^{i\theta})|-|f(ze^{i\theta})|\}_{+}&\leq&\{P[|f|](we^{i\theta})-|f(ze^{i\theta})|\}_{+}\\ \nonumber &\leq&\{P[|f|](we^{i\theta})-P[|f|](ze^{i\theta})\}_{+}\\ \nonumber &&+\{P[|f|](ze^{i\theta})-|f(ze^{i\theta})|\}_{+}, \end{eqnarray} where $w\in\{\varsigma:~|\varsigma-z|\leq d_{\mathbb{D}}(z)\}.$ From $(\mathscr{D}_{2})$, we know that there is a positive constant $M$ such that \begin{equation} \label{kkl-2} \left(\int_{0}^{2\pi}\left(P[|f|](ze^{i\theta})-|f(ze^{i\theta})|\right)^{p}d\theta\right)^{\frac{1}{p}} \leq M\omega(d_{\mathbb{D}}(z)).\end{equation} Since $|f|\in\Lambda_{\omega,p}(\mathbb{T})$ and $\omega$ is a regular majorant, by Theorem \ref{thm-5.0} (or Corollary \ref{cor-0.11}), we see that there is a positive constant $M$ such that \begin{equation} \label{kkl-3} \left(\int_{0}^{2\pi}\left|P[|f|](we^{i\theta})-P[|f|](ze^{i\theta})\right|^{p}d\theta\right)^{\frac{1}{p}} \leq M\omega(d_{\mathbb{D}}(z)),\end{equation} for $w\in\{\varsigma:~|\varsigma-z|\leq d_{\mathbb{D}}(z)\}.$ Combining (\ref{kkl-1}), (\ref{kkl-2}), (\ref{kkl-3}) and the Minkowski inequality gives that \begin{eqnarray*} \left(\int_{0}^{2\pi}\left\{|f(we^{i\theta})|-|f(ze^{i\theta})|\right\}_{+}^{p}d\theta\right)^{\frac{1}{p}} \leq 2M\omega(d_{\mathbb{D}}(z)), \end{eqnarray*} for $w\in\{\varsigma:~|\varsigma-z|\leq d_{\mathbb{D}}(z)\}.$ Arguing as in Step 5 of the proof of Theorem \ref{thm-1.0}, we have $f\in\Lambda_{\omega,p}(\mathbb{D})$. $\mathbf{Case~2.}$ $p=\infty$.
The proof method used in Case 1 remains valid for Case 2: one only needs to replace the formulas for $p\in[1,\infty)$ with the corresponding formulas for $p=\infty$. We therefore omit the details here. The proof of this theorem is finished. \qed \subsection{The proof of Theorem \ref{thm-8}} We only need to prove the case $p\in[2,\infty)$, because the case $p=\infty$ follows from Theorem C. We first prove the necessity. For $z\in\mathbb{D}$, let $$\mathscr{S}_{p}(z)=\left(\int_{0}^{2\pi}\left(P[|f|^{2}](ze^{i\eta})-|f(ze^{i\eta})|^{2}\right)^{\frac{p}{2}}d\eta\right)^{\frac{1}{p}}.$$ Since $f\in\Lambda_{\omega,p}(\overline{\mathbb{D}})$, we see that there is a positive constant $M$ such that \begin{equation}\label{xx-1} \int_{0}^{2\pi}|f(e^{i(\tau+\eta)})-f(ze^{i\eta})|^{p}d\eta\leq M^{p}\big(\omega(|z-e^{i\tau}|)\big)^{p}. \end{equation} Then, by (\ref{xx-1}) and Lemma F, there is a positive constant $M$ such that \begin{eqnarray} \nonumber \mathscr{S}_{p}(z)&=&\left(\int_{0}^{2\pi}\left(\int_{0}^{2\pi}|f(e^{i(\tau+\eta)})-f(ze^{i\eta})|^{2}\mathbf{P}(z,e^{i\tau}) \frac{d\tau}{2\pi}\right)^{\frac{p}{2}}d\eta\right)^{\frac{1}{p}}\\ \nonumber &\leq&\left(\int_{0}^{2\pi}\mathbf{P}(z,e^{i\tau})\left(\int_{0}^{2\pi}|f(e^{i(\tau+\eta)})-f(ze^{i\eta})|^{p}d\eta\right)^{\frac{2}{p}}\frac{d\tau}{2\pi}\right)^{\frac{1}{2}}\\ &\leq&M\left(\int_{0}^{2\pi}\mathbf{P}(z,e^{i\tau})\big(\omega(|z-e^{i\tau}|)\big)^{2}\frac{d\tau}{2\pi}\right)^{\frac{1}{2}}. \label{xx-2} \end{eqnarray} On the other hand, for $z\neq0$ and $\widetilde{z}=z/|z|$, it follows from (\ref{lk-0.1}) that \begin{eqnarray}\label{xx-3} \big(\omega(|z-e^{i\tau}|)\big)^{2}&\leq&\left(\omega(|\widetilde{z}-e^{i\tau}|)+\omega(|z-\widetilde{z}|)\right)^{2}\\ \nonumber &\leq&2\left(\big(\omega(|\widetilde{z}-e^{i\tau}|)\big)^{2}+\big(\omega(d_{\mathbb{D}}(z))\big)^{2}\right). \end{eqnarray} Combining (\ref{xx-2}), (\ref{xx-3}) and \cite[Lemma 2]{Dy1} yields that there is a positive constant $M$ such that $$ \left(\int_{0}^{2\pi}\left(P[|f|^{2}](ze^{i\theta})-|f(ze^{i\theta})|^{2}\right)^{\frac{p}{2}}d\theta\right)^{\frac{1}{p}} \leq M\omega(d_{\mathbb{D}}(z)).$$ Next, we prove the sufficiency. For $z\in\mathbb{D}$ and fixed $\eta\in[0,2\pi]$, it follows from the Cauchy integral formula and the Cauchy-Schwarz inequality that \begin{eqnarray}\label{xx-4} \nonumber d_{\mathbb{D}}(z)|f'(ze^{i\eta})|&=&\left|\int_{|\zeta|=1}\left(f(\zeta e^{i\eta})-f(ze^{i\eta})\right)\frac{d_{\mathbb{D}}(z)}{(\zeta-z)^{2}}\frac{d\zeta}{2\pi i}\right|\\ \nonumber &\leq&\int_{0}^{2\pi}|f(\zeta e^{i\eta})-f(ze^{i\eta})|\mathbf{P}(z,e^{i\tau})\frac{d\tau}{2\pi}\\ &\leq&\left(\int_{0}^{2\pi}|f(\zeta e^{i\eta})-f(ze^{i\eta})|^{2}\mathbf{P}(z,e^{i\tau})\frac{d\tau}{2\pi}\right)^{\frac{1}{2}}, \end{eqnarray} where $\zeta=e^{i\tau}$. For $z\in\mathbb{D}$, let $$\mathscr{Q}_{p}(z)=d_{\mathbb{D}}(z)\left(\int_{0}^{2\pi}|f'(ze^{i\eta})|^{p}d\eta\right)^{\frac{1}{p}}.$$ Then, by (\ref{xx-2}) and (\ref{xx-4}), we see that there is a positive constant $M$ such that \begin{eqnarray*}\mathscr{Q}_{p}(z) &\leq& \left(\int_{0}^{2\pi}\left(\int_{0}^{2\pi}|f(e^{i(\tau+\eta)})-f(ze^{i\eta})|^{2}\mathbf{P}(z,e^{i\tau})\frac{d\tau}{2\pi}\right)^{\frac{p}{2}}d\eta\right)^{\frac{1}{p}}\\ &=&\mathscr{S}_{p}(z)\\ &\leq&M\omega(d_{\mathbb{D}}(z)), \end{eqnarray*} which, together with Corollary \ref{An-th-0.1}, gives that $f\in\Lambda_{\omega,p}(\overline{\mathbb{D}})$. The proof of this theorem is complete. \qed
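Before turning to the applications, we record a simple illustration of the quantity estimated in the proof above; this observation is not used elsewhere. For an analytic function $f(z)=\sum_{n\geq0}a_{n}z^{n}$ in $\mathbb{D}$ which is continuous on $\overline{\mathbb{D}}$, Parseval's formula gives, at the origin, $$P[|f|^{2}](0)-|f(0)|^{2}=\frac{1}{2\pi}\int_{0}^{2\pi}|f(e^{i\tau})|^{2}d\tau-|a_{0}|^{2}=\sum_{n=1}^{\infty}|a_{n}|^{2}.$$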
\section{Applications of equivalent norms and Hardy-Littlewood type theorems}\label{sec3} A continuous non-decreasing function $\psi:~[0,1)\rightarrow(0,\infty)$ is called a weight if $\psi$ is unbounded (see \cite{AD}). Moreover, a weight $\psi$ is called doubling if there is a constant $M>1$ such that $$\psi(1-s/2)< M\psi(1-s)$$ for $s\in(0,1]$. The following result easily follows from \cite[Lemma 1]{AD} and \cite[Theorem 2]{AD}. \begin{Lem}\label{lem-0.2} Let $\psi$ be a doubling weight. Then there exist holomorphic functions $f_{j}~(j\in\{1,2\})$ with \[ \sup_{z\in \mathbb{D}}\frac{|f_j'(z)|}{\psi(|z|)}<\infty \] such that for $z\in\mathbb{D}$, \[ \sum_{j=1}^{2}|f_{j}'(z)|\geq\psi(|z|). \] \end{Lem} \begin{LemG}{\rm (\cite[Lemma 2]{P})}\label{L-3} Suppose that $f\in \mathscr{A}(\mathbb{D})$. For $z\in\mathbb{D}$, let $D_{z}=\{w:~|w-z|\leq d_{\mathbb{D}}(z)\}$ and $M_{z}=\max\{|f(w)|:~w\in D_{z}\}$. Then, for $z\in\mathbb{D}$, $$\frac{1}{2}(1-|z|)|f'(z)|+|f(z)|\leq\,M_{z}.$$ \end{LemG} \begin{Lem}\label{lem-0.4} Let $\omega_{1}$ be a majorant such that $\omega_{1}$ is differentiable on $(0,1]$ and $\omega_{1}'$ is also non-increasing on $(0,1]$, and let $\omega_{2}$ be a fast majorant. For $p\in[1,\infty]$, if $\phi$ is a holomorphic function of $\mathbb{D}$ into itself, then $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\mathbb{D})$ if and only if there is a positive constant $M$ such that for $z\in\mathbb{D}$, \begin{equation}\label{lem-3j} \left\{ \begin{array}{ll} \left(\int_{0}^{2\pi}\left(\mathscr{M}_{\omega_{1}(d_{\mathbb{D}}(\phi))}(ze^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}} \leq\,M\frac{\omega_{2}(d_{\mathbb{D}}(z))}{d_{\mathbb{D}}(z)}, &p\in[1,\infty), \\ \mathscr{M}_{\omega_{1}(d_{\mathbb{D}}(\phi))}(z) \leq\,M\frac{\omega_{2}(d_{\mathbb{D}}(z))}{d_{\mathbb{D}}(z)}, &p=\infty. \end{array} \right. \end{equation} \end{Lem} \begin{proof} Without loss of generality, we assume that $\phi$ is non-constant and $p\in[1,\infty)$. We first prove the sufficiency. For $z\in\mathbb{D}$, elementary calculations give $$ \left|\frac{\partial\omega_{1}(d_{\mathbb{D}}(\phi(z)))}{\partial z}\right|= \left|\frac{\partial d_{\mathbb{D}}(\phi(z))}{\partial z}\right|\omega_{1}'(d_{\mathbb{D}}(\phi(z)))=\frac{|\phi'(z)|\omega_{1}'(d_{\mathbb{D}}(\phi(z)))}{2} $$ and $$ \left|\frac{\partial\omega_{1}(d_{\mathbb{D}}(\phi(z)))}{\partial \overline{z}}\right|= \left|\frac{\partial d_{\mathbb{D}}(\phi(z))}{\partial \overline{z}}\right|\omega_{1}'(d_{\mathbb{D}}(\phi(z)))=\frac{|\phi'(z)|\omega_{1}'(d_{\mathbb{D}}(\phi(z)))}{2}, $$ which gives that \begin{equation}\label{eq-t6} \mathscr{M}_{\omega_{1}(d_{\mathbb{D}}(\phi))}(z)=|\phi'(z)|\omega_{1}' (d_{\mathbb{D}}(\phi(z))), \quad z\in \mathbb{D}. \end{equation} Since $\mathbb{D}$ is a $\Lambda_{\omega_{2}}$-extension domain of $\mathbb{C}$ for the fast majorant $\omega_{2}$, we see that, for $z,~w\in\mathbb{D}$, there is a rectifiable curve $\gamma\subset\mathbb{D}$ joining $z$ to $w$ such that \begin{equation}\label{eq-t7}\int_{\gamma}\frac{\omega_{2}(d_{\mathbb{D}}(\zeta))}{d_{\mathbb{D}}(\zeta)}ds(\zeta)\leq M\omega_{2}(|z-w|),\end{equation} where $M$ is a positive constant.
By (\ref{lem-3j}), (\ref{eq-t6}), (\ref{eq-t7}) and Lemma F, we conclude that, for $z,~w\in\mathbb{D}$, there is a rectifiable curve $\gamma\subset\mathbb{D}$ joining $z$ to $w$ such that \begin{eqnarray*} I(z,w)&\leq& \left(\int_{0}^{2\pi}\left(\int_{\gamma}\mathscr{M}_{\omega_{1}(d_{\mathbb{D}}(\phi))}(e^{i\theta}\zeta)ds(\zeta)\right)^{p}d\theta\right)^{\frac{1}{p}}\\ &\leq&\int_{\gamma}\left(\int_{0}^{2\pi}\left(\mathscr{M}_{\omega_{1}(d_{\mathbb{D}}(\phi))}(\zeta e^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}ds(\zeta)\\ &\leq&M\int_{\gamma}\frac{\omega_{2}(d_{\mathbb{D}}(\zeta))}{d_{\mathbb{D}}(\zeta)}ds(\zeta)\\ &\leq&M^{2}\omega_{2}(|z-w|), \end{eqnarray*} where $$I(z,w)=\left(\int_{0}^{2\pi}\left|\omega_{1}(d_{\mathbb{D}}(\phi(ze^{i\theta})))- \omega_{1}(d_{\mathbb{D}}(\phi(we^{i\theta})))\right|^{p}d\theta\right)^{\frac{1}{p}}.$$ Next, we prove the necessity. For $a,b\in [0,1]$ with $b>a$, it follows from the Lagrange mean value theorem that there is an $a_{0}\in(a,b)$ such that \begin{equation}\label{eq-t4} \frac{\omega_{1}(a)-\omega_{1}(b)}{a-b}=\omega_{1}'(a_{0})\geq\omega_{1}'(b).\end{equation} Since $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\mathbb{D})$, we see that there is a positive constant $M$ such that \begin{equation}\label{eq-t1}\mathscr{J}_{6} \leq\,M\omega_{2}(|z-w|) \end{equation} for $z,w\in{\mathbb{D}}$, where $$\mathscr{J}_{6}=\left(\int_{0}^{2\pi}\left|\omega_{1}(d_{\mathbb{D}}(\phi(ze^{i\theta})))- \omega_{1}(d_{\mathbb{D}}(\phi(we^{i\theta})))\right|^{p}d\theta\right)^{\frac{1}{p}}.$$ By (\ref{eq-t4}) and (\ref{eq-t1}), we have \[ \mathscr{J}_{7} \leq\,M\omega_{2}(|z-w|) \] for $z,w\in{\mathbb{D}}$, where $$\mathscr{J}_{7}=\left(\int_{0}^{2\pi}\left(\left\{ |\phi(we^{i\theta})|-|\phi(ze^{i\theta})| \right\}_{+}\omega_{1}'(d_{\mathbb{D}}(\phi(ze^{i\theta})))\right)^{p}d\theta\right)^{\frac{1}{p}}.$$ Then, for $z,w\in {\mathbb{D}}$ with $|w-z|\leq d_{\mathbb{D}}(z)/2$, we have \begin{equation}\label{eq-t2b} \mathscr{J}_{7} \leq\,M\omega_{2}(d_{\mathbb{D}}(z)). \end{equation} Let $$\mathscr{W}(z)=\int_{0}^{2\pi}\left(\mathscr{M}_{\omega_{1}(d_{\mathbb{D}}(\phi))}(ze^{i\theta})\right)^{p}d\theta$$ and $$\mathscr{J}_{8}(w,ze^{i\theta})=\{|\phi(w)|-|\phi(ze^{i\theta})|\}_+ \omega_{1}'(d_{\mathbb{D}}(\phi(ze^{i\theta}))).$$ For a fixed $z\in \mathbb{D}$ and $\varepsilon_0=d_{\mathbb{D}}(z)/2$, it follows from Lemma \ref{mate-07d} that there is a positive constant $C$ such that \begin{eqnarray*} \mathscr{W}(z) &\leq& \int_{0}^{2\pi}\frac{C}{\varepsilon_0^{p+2}} \int_{\mathbb{D}(ze^{i\theta},\varepsilon_0)}\left(\mathscr{J}_{8}(w,ze^{i\theta})\right)^p dm(w)d\theta \\ &=& \int_{0}^{2\pi}\frac{C}{\varepsilon_0^{p+2}} \int_{\mathbb{D}(z,\varepsilon_0)}\left(\mathscr{J}_{8}(we^{i\theta},ze^{i\theta})\right)^p dm(w)d\theta \\ &=& \frac{C}{\varepsilon_0^{p+2}} \int_{\mathbb{D}(z,\varepsilon_0)} \int_{0}^{2\pi}\left(\mathscr{J}_{8}(we^{i\theta},ze^{i\theta})\right)^p d\theta dm(w), \end{eqnarray*} which, together with (\ref{eq-t2b}), yields that \begin{eqnarray*} \mathscr{W}(z)&\leq& \frac{C}{\varepsilon_0^{p+2}} \int_{\mathbb{D}(z,\varepsilon_0)} M^p\omega_{2}(d_{\mathbb{D}}(z))^pdm(w) \\ &\leq& \frac{C}{\varepsilon_0^{p}} M^p\big(\omega_{2}(d_{\mathbb{D}}(z))\big)^p \\ &\leq&2^pCM^p \frac{\big(\omega_{2}(d_{\mathbb{D}}(z))\big)^p}{\big(d_{\mathbb{D}}(z)\big)^{p}}. \end{eqnarray*} The proof of this lemma is complete. \end{proof}
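For orientation, we point out how (\ref{eq-t6}) reads for a power majorant; this specialization is not needed in what follows. If $\omega_{1}=\omega_{\alpha}$ with $\alpha\in(0,1]$, then $\omega_{1}'(t)=\alpha t^{\alpha-1}$ and hence $$\mathscr{M}_{(d_{\mathbb{D}}(\phi))^{\alpha}}(z)=\alpha|\phi'(z)|\big(d_{\mathbb{D}}(\phi(z))\big)^{\alpha-1},\qquad z\in\mathbb{D},$$ which is, up to the constant factor $\alpha$, the expression appearing in condition $(\mathscr{G}_{2})$.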
From the proof of the necessity part of Lemma \ref{lem-0.4}, we also obtain the following result. Here we only need to assume that $\omega_{2}$ is a majorant, because the ``fast" condition is used only in the proof of the sufficiency part of Lemma \ref{lem-0.4}. \begin{Lem}\label{cor-1.0} Let $\omega_{1}$ be a majorant such that $\omega_{1}$ is differentiable on $(0,1]$ and $\omega_{1}'$ is also non-increasing on $(0,1]$, and let $\omega_{2}$ be a majorant. Suppose that $\phi$ is a holomorphic function of $\mathbb{D}$ into itself. For $p\in[1,\infty]$, if $$ \left\{ \begin{array}{ll} \chi_{\omega_{1}}(z,w) \leq\omega_{2}(d_{\mathbb{D}}(z)), &p\in[1,\infty), \\ \left\{\omega_{1}(d_{\mathbb{D}}(\phi(z)))- \omega_{1}(d_{\mathbb{D}}(\phi(w)))\right\}_{+} \leq\omega_{2}(d_{\mathbb{D}}(z)), &p=\infty, \end{array} \right. $$ whenever $z\in\mathbb{D}$ and $w\in\{\varsigma\in\mathbb{D}:~|\varsigma-z|\leq\,d_{\mathbb{D}}(z)/2\}$, where $$\chi_{\omega_{1}}(z,w)=\left(\int_{0}^{2\pi}\left\{\omega_{1}(d_{\mathbb{D}}(\phi(ze^{i\theta})))- \omega_{1}(d_{\mathbb{D}}(\phi(we^{i\theta})))\right\}_{+}^{p}d\theta\right)^{\frac{1}{p}},$$ then there is a positive constant $M$ such that for $z\in \mathbb{D}$, $$ \left\{ \begin{array}{ll} \left(\int_{0}^{2\pi}\left(\mathscr{M}_{\omega_{1}(d_{\mathbb{D}}(\phi))}(ze^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}} \leq\,M\frac{\omega_{2}(d_{\mathbb{D}}(z))}{d_{\mathbb{D}}(z)}, &p\in[1,\infty), \\ \mathscr{M}_{\omega_{1}(d_{\mathbb{D}}(\phi))}(z) \leq\,M\frac{\omega_{2}(d_{\mathbb{D}}(z))}{d_{\mathbb{D}}(z)}, &p=\infty. \end{array} \right. $$ \end{Lem} \begin{Lem}\label{le-9} Suppose that $\omega$ is a majorant such that $\omega$ is differentiable on $(0,1]$ and $\omega'$ is also non-increasing on $(0,1]$. Let $\phi$ be an analytic function of $\mathbb{D}$ into itself. Then $\omega(d_{\mathbb{D}}(\phi))$ is superharmonic in $\mathbb{D}$. \end{Lem} \begin{proof} Without loss of generality, we assume that $\phi$ is non-constant. Since $\phi$ is analytic in $\mathbb{D}$, we see that $|\phi(z)|$ is subharmonic in $\mathbb{D}$. Let $\varphi(t)=-\omega(1-t)$ for $t\in (-\infty,1]$. Then, $\varphi'(t)$ exists and is non-decreasing on $[0,1)$. Let $t_0\in [0,1)$ be arbitrarily fixed. For $0\leq t<t_0< 1$, it follows from the Lagrange mean value theorem that there is an $s\in(t,t_0)$ such that \[ \frac{\varphi(t_0)-\varphi(t)}{t_0-t}=\varphi'(s)\leq \varphi'(t_0), \] which implies that \begin{equation} \label{varphi-1} \varphi(t) \geq \varphi(t_0)+\varphi'(t_0)(t-t_0). \end{equation} For $0\leq t_0<t< 1$, it follows from the Lagrange mean value theorem that there is an $s\in(t_0,t)$ such that \[ \frac{\varphi(t)-\varphi(t_0)}{t-t_0}=\varphi'(s)\geq \varphi'(t_0), \] which implies that \begin{equation} \label{varphi-2} \varphi(t) \geq \varphi(t_0)+\varphi'(t_0)(t-t_0). \end{equation} From (\ref{varphi-1}) and (\ref{varphi-2}), we have \begin{equation} \label{varphi-3} \varphi(t) \geq \varphi(t_0)+\varphi'(t_0)(t-t_0), \quad t\in [0,1).
\end{equation} For $z\in \mathbb{D}$, $r\in (0,d_{\mathbb{D}}(z))$ and $\zeta\in \mathbb{T}$, letting $t=|\phi(z+r\zeta)|$ and \[ t_0=\int_{0}^{2\pi}|\phi(z+re^{i\tau})|\frac{d\tau}{2\pi} \] in the inequality (\ref{varphi-3}), and integrating over $\mathbb{T}$, we have \[ \int_{\mathbb{T}} \varphi(|\phi(z+r\zeta)|)\frac{|d\zeta|}{2\pi} \geq \varphi\left( \int_{\mathbb{T}}|\phi(z+r\zeta)|\frac{|d\zeta|}{2\pi}\right), \] which, together with the subharmonicity of $|\phi|$ and the fact that $\varphi$ is increasing, yields that \[ \varphi(|\phi(z)|) \leq \varphi\left( \int_{\mathbb{T}}|\phi(z+r\zeta)|\frac{|d\zeta|}{2\pi}\right) \leq \int_{\mathbb{T}} \varphi(|\phi(z+r\zeta)|)\frac{|d\zeta|}{2\pi}. \] Consequently, $\varphi(|\phi(z)|)$ is subharmonic in $\mathbb{D}$, which implies that $\omega(d_{\mathbb{D}}(\phi))$ is superharmonic in $\mathbb{D}$. The proof of this lemma is finished. \end{proof} The following result easily follows from Theorem \ref{thm-1.0}. \begin{Lem}\label{L-1} Let $\omega$ be a fast majorant, and let $f\in\mathscr{H}(\mathbb{D})$. Then $f\in\Lambda_{\omega}(\mathbb{D})$ if and only if there is a positive constant $M$ such that $$\mathscr{M}_{f}(z)\leq\,M\frac{\omega(d_{\mathbb{D}}(z))}{d_{\mathbb{D}}(z)}, \quad z\in \mathbb{D}. $$ Moreover, there is a positive constant $M$ which is independent of $f$ such that $$\frac{1}{M}\|f\|_{\Lambda_{\omega}(\mathbb{D}),s}\leq \sup_{z\in\mathbb{D}}\left\{\mathscr{M}_{f}(z) \frac{d_{\mathbb{D}}(z)}{\omega(d_{\mathbb{D}}(z))}\right\}\leq\,M\|f\|_{\Lambda_{\omega}(\mathbb{D}),s}.$$ \end{Lem} \subsection{The proof of Theorem \ref{th-3}} We divide the proof of this theorem into seven steps. \noindent $\mathbf{Step~1.}$ ``$(\mathscr{F}_{2})\Rightarrow(\mathscr{F}_{1})$". For $f\in\Lambda_{\omega_{1}}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})$, it follows from Lemma \ref{L-1} that there is a positive constant $M$ such that, for $w\in \mathbb{D}$, $$\mathscr{M}_{f}(w)\leq\,M\frac{\omega_{1}(d_{\mathbb{D}}(w))}{d_{\mathbb{D}}(w)}, $$ which, together with the assumption $(\mathscr{F}_{2})$, implies that \begin{eqnarray}\label{eq-0.6k}\nonumber \int_{0}^{2\pi}\left(\mathscr{M}_{C_{\phi}(f)}(ze^{i\theta})\right)^{p}d\theta&=& \int_{0}^{2\pi}\left(\mathscr{M}_{f}(\phi(ze^{i\theta}))|\phi'(ze^{i\theta})|\right)^{p}d\theta\\ \nonumber &\leq&\,M^{p}\int_{0}^{2\pi}\left(\frac{\omega_{1}(d_{\mathbb{D}}(\phi(ze^{i\theta})))}{d_{\mathbb{D}}(\phi(ze^{i\theta}))} |\phi'(ze^{i\theta})|\right)^{p}d\theta\\ &\leq&M^{2p}\left(\frac{\omega_{2}\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}\right)^{p}. \end{eqnarray} By (\ref{eq-0.6k}) and Theorem \ref{thm-1.0}, we have $C_{\phi}(f)\in\Lambda_{\omega_{2},p}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})$.\\ \noindent $\mathbf{Step~2.}$ ``$(\mathscr{F}_{1})\Rightarrow(\mathscr{F}_{2})$". Suppose that $C_{\phi}(f)\in\Lambda_{\omega_{2},p}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})$ for every $f\in\Lambda_{\omega_{1}}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})$. We split the proof of this step into two cases. \noindent $\mathbf{Case~1.}$ Suppose that $$\lim_{t\rightarrow0^{+}}\frac{\omega_{1}(t)}{t}=\infty.$$ For this case, let $$\psi(t)=\frac{\omega_{1}(1-t)}{1-t}$$ for $t\in[0,1)$. Since for $s\in (0,1]$, $$ \psi\left(1-\frac{s}{2}\right)=\frac{\omega_1(\frac{s}{2})}{\frac{s}{2}}\leq 2\frac{\omega_1(s)}{s}=2\psi(1-s), $$ we see that $\psi$ is a doubling weight.
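For orientation only, we note the special case of a power majorant (not needed in the argument): if $\omega_{1}(t)=t^{\alpha}$ with $\alpha\in(0,1)$, then $$\psi(t)=\frac{\omega_{1}(1-t)}{1-t}=(1-t)^{\alpha-1},\qquad t\in[0,1),$$ which is a continuous, non-decreasing and unbounded function on $[0,1)$, so the present case indeed applies to it.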
Then, by Lemma \ref{lem-0.2}, there are holomorphic functions $f_{j}$ $(j\in\{1,2\})$ with \[ \sup_{\xi\in \mathbb{D}}\frac{|f_j'(\xi)|}{\psi(|\xi|)}<\infty \] such that for $\xi\in\mathbb{D}$, \[ \sum_{j=1}^{2}|f_{j}'(\xi)|\geq\psi(|\xi|), \] which yields that \begin{equation}\label{eq-0.8k} |f_{1}'(\phi(z))|+|f_{2}'(\phi(z))|\geq\,\frac{\omega_{1}(d_{\mathbb{D}}(\phi(z)))}{d_{\mathbb{D}}(\phi(z))}, \quad z\in \mathbb{D}. \end{equation} It follows from Lemma \ref{L-1} that $f_{1},~f_{2}\in\Lambda_{\omega_{1}}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})$. Consequently, $f=f_{1}+\overline{f_{2}}\in\Lambda_{\omega_{1}}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})$. Since $C_{\phi}(f)\in\Lambda_{\omega_{2},p}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})$, by Theorem \ref{thm-1.0}, we conclude that there is a positive constant $M$ such that, for $z\in\mathbb{D}$, \begin{equation}\label{eq-0.9k}\mathscr{J}_{9} \leq M\frac{\omega_{2}\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}, \end{equation} where $$\mathscr{J}_{9}=\left(\int_{0}^{2\pi}\left(|f'_{1}(\phi(ze^{i\theta}))|+|f'_{2}(\phi(ze^{i\theta}))|\right)^{p}|\phi'(ze^{i\theta})|^{p}d\theta\right)^{\frac{1}{p}}.$$ Then $(\mathscr{F}_{2})$ follows from (\ref{eq-0.8k}) and (\ref{eq-0.9k}). \noindent $\mathbf{Case~2.}$ Suppose that $$\lim_{t\rightarrow0^{+}}\frac{\omega_{1}(t)}{t}<\infty.$$ For this case, let $f_{0}(\xi)=\xi$ for $\xi\in\mathbb{D}$. Since $\omega_1(t)/t$ is non-increasing for $t>0$, we see that there is a positive constant $M$ such that, for $z\in\mathbb{D}$, \begin{equation}\label{eq-10k} |f_{0}'(\phi(z))|\geq\, M\frac{\omega_{1}(d_{\mathbb{D}}(\phi(z)))}{d_{\mathbb{D}}(\phi(z))}. \end{equation} On the other hand, for $t\in(0,2]$, $$\frac{\omega_{1}(t)}{t}\geq \frac{\omega_{1}(2)}{2},$$ which gives that $f_{0}\in\Lambda_{\omega_{1}}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})$. By Theorem \ref{thm-1.0}, there is a positive constant $M$ such that, for $z\in\mathbb{D}$, \begin{eqnarray*} \left(\int_{0}^{2\pi}\left(|f'_{0}(\phi(ze^{i\theta}))||\phi'(ze^{i\theta})|\right)^{p}d\theta\right)^{\frac{1}{p}}\leq M\frac{\omega_{2}\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}, \end{eqnarray*} which, together with (\ref{eq-10k}), implies $(\mathscr{F}_{2})$.
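Before proceeding, we note how condition $(\mathscr{F}_{2})$ reads in the simplest situation; this observation is recorded only for orientation. For the Lipschitz majorant $\omega_{1}(t)=t$ one has $\omega_{1}(d_{\mathbb{D}}(w))/d_{\mathbb{D}}(w)\equiv1$, so $(\mathscr{F}_{2})$ reduces to the requirement that there is a positive constant $M$ such that $$\left(\int_{0}^{2\pi}|\phi'(ze^{i\theta})|^{p}d\theta\right)^{\frac{1}{p}}\leq M\,\frac{\omega_{2}\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)},\qquad z\in\mathbb{D}.$$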
\noindent $\mathbf{Step~3.}$ ``$(\mathscr{F}_{1})\Rightarrow(\mathscr{F}_{3})$".\\ By Step 2, we see that there is a positive constant $M$ such that, for $z\in\mathbb{D}$, \begin{equation}\label{eq-1.0k}\left(\int_{0}^{2\pi}\left(|\phi'(ze^{i\theta})| \frac{\omega_{1}\big(d_{\mathbb{D}}(\phi(ze^{i\theta}))\big)}{d_{\mathbb{D}}(\phi(ze^{i\theta}))}\right)^{p}d\theta\right)^{\frac{1}{p}} \leq M\frac{\omega_{2}\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}.\end{equation} It follows from $\omega_{1}\in\mathscr{S}$ and (\ref{eq-t4}) that, for $z\in\mathbb{D}$, \begin{equation}\label{L-9} \omega_{1}'(d_{\mathbb{D}}(\phi(z)))\leq\frac{\omega_{1}(d_{\mathbb{D}}(\phi(z)))-\omega_{1}(0)}{d_{\mathbb{D}}(\phi(z))-0}.\end{equation} By (\ref{eq-t6}) and (\ref{L-9}), we have $$\mathscr{M}_{\omega_{1}(d_{\mathbb{D}}(\phi))}(z)=|\phi'(z)|\omega_{1}' (d_{\mathbb{D}}(\phi(z)))\leq|\phi'(z)|\frac{\omega_{1}(d_{\mathbb{D}}(\phi(z)))}{d_{\mathbb{D}}(\phi(z))},$$ which, together with (\ref{eq-1.0k}), yields that there is a positive constant $M$ such that, for $z\in\mathbb{D}$, \begin{eqnarray*} \mathscr{J}_{10} &\leq& \left(\int_{0}^{2\pi}\left(|\phi'(ze^{i\theta})| \frac{\omega_{1}\big(d_{\mathbb{D}}(\phi(ze^{i\theta}))\big)}{d_{\mathbb{D}}(\phi(ze^{i\theta}))}\right)^{p}d\theta\right)^{\frac{1}{p}}\\ &\leq& M\frac{\omega_{2}\big(d_{\mathbb{D}}(z)\big)}{d_{\mathbb{D}}(z)}, \end{eqnarray*} where $$\mathscr{J}_{10}=\left(\int_{0}^{2\pi}\left(\mathscr{M}_{\omega_{1}(d_{\mathbb{D}}(\phi))}(ze^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}.$$ Then, $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\mathbb{D})$ by Lemma \ref{lem-0.4}. Consequently, $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\overline{\mathbb{D}})$, since $\phi$ is continuous in $\overline{\mathbb{D}}$. \\ \noindent $\mathbf{Step~4.}$ ``$(\mathscr{F}_{3})\Rightarrow(\mathscr{F}_{1})$".
Since $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\mathbb{D})$, by Lemma \ref{lem-0.4}, we see that there is a positive constant $M$ such that \begin{equation}\label{eq-1.1k} \mathscr{J}_{11} \leq\,M\frac{\omega_{2}(d_{\mathbb{D}}(z))}{d_{\mathbb{D}}(z)},~z\in\mathbb{D}, \end{equation} where $$\mathscr{J}_{11}=\left(\int_{0}^{2\pi}\left(|\phi'(ze^{i\theta})|\omega_{1}'(d_{\mathbb{D}}(\phi(ze^{i\theta})))\right)^{p}d\theta\right)^{\frac{1}{p}}.$$ It follows from Lemma \ref{L-1} that there is a positive constant $M$ such that, for $f\in\Lambda_{\omega_{1}}(\mathbb{D})\cap\mathscr{H}(\mathbb{D})$, \begin{eqnarray*} \mathscr{M}_{C_{\phi}(f)}(ze^{i\theta})=\mathscr{M}_{f}(\phi(ze^{i\theta}))|\phi'(ze^{i\theta})| \leq\,M\frac{\omega_{1}(d_{\mathbb{D}}(\phi(ze^{i\theta})))}{d_{\mathbb{D}}(\phi(ze^{i\theta}))}|\phi'(ze^{i\theta})|, \end{eqnarray*} which, together with (\ref{eq-1.1k}), gives that \begin{eqnarray}\label{eq-1.2k}\mathscr{J}_{12} &=& \int_{0}^{2\pi}\left(\mathscr{M}_{f}(\phi(ze^{i\theta}))|\phi'(ze^{i\theta})|\right)^{p}d\theta\\ \nonumber &\leq&\,M^{p}\int_{0}^{2\pi}\left(\frac{\omega_{1}(d_{\mathbb{D}}(\phi(ze^{i\theta})))}{d_{\mathbb{D}}(\phi(ze^{i\theta}))} |\phi'(ze^{i\theta})|\right)^{p}d\theta\\ \nonumber &\leq&M^{p}M_{1}^{p}\int_{0}^{2\pi}\left(|\phi'(ze^{i\theta})|\omega_{1}'(d_{\mathbb{D}}(\phi(ze^{i\theta})))\right)^{p}d\theta\\ \nonumber &\leq&M^{2p}M_{1}^{p}\left(\frac{\omega_{2}(d_{\mathbb{D}}(z))}{d_{\mathbb{D}}(z)}\right)^{p}, \end{eqnarray} where $$\mathscr{J}_{12}=\int_{0}^{2\pi}\left(\mathscr{M}_{C_{\phi}(f)}(ze^{i\theta})\right)^{p}d\theta$$ and $$M_{1}=\sup_{z\in \mathbb{D}}\left\{\frac{\omega_{1}(d_{\mathbb{D}}(\phi(z)))} {d_{\mathbb{D}}(\phi(z))\omega_{1}'(d_{\mathbb{D}}(\phi(z)))}\right\}<\infty.$$ Hence $(\mathscr{F}_{1})$ follows from (\ref{eq-1.2k}) and Theorem \ref{thm-1.0}.\\ \noindent $\mathbf{Step~5.}$ ``$(\mathscr{F}_{3})\Rightarrow(\mathscr{F}_{4})$" is obvious.\\ \noindent $\mathbf{Step~6.}$ ``$(\mathscr{F}_{4})\Rightarrow(\mathscr{F}_{5})$".\\ For $r\in(0,1)$ and $\theta\in[0,2\pi]$, let $$\mathscr{K}_{1}(re^{i\theta})=\left(\omega_{1}(d_{\mathbb{D}}(\phi(re^{i\theta})))-P[\omega_{1}(d_{\mathbb{D}}(\phi))](re^{i\theta})\right),$$ $$\mathscr{K}_{2}(re^{i\theta})=\left\{\omega_{1}(d_{\mathbb{D}}(\phi(re^{i\theta})))-\omega_{1}(d_{\mathbb{D}}(\phi(e^{i\theta})))\right\}_{+}$$ and $$\mathscr{K}_{3}(re^{i\theta})=\left|\omega_{1}(d_{\mathbb{D}}(\phi(e^{i\theta})))-P[\omega_{1}(d_{\mathbb{D}}(\phi))](re^{i\theta})\right|.$$ Then, by $(\mathscr{F}_{4})$, we see that there is a positive constant $M$ such that \begin{equation}\label{eq-1.5k} \left(\int_{0}^{2\pi} \left(\mathscr{K}_{2}(re^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}\leq\, M\omega_{2}(d_{\mathbb{D}}(r)). \end{equation} Since $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\mathbb{T})$ and $\omega_2$ is a regular majorant, by Theorem \ref{thm-5.0}, there is a positive constant $M$ such that \begin{equation}\label{eq-1.6k} \left(\int_{0}^{2\pi} \left(\mathscr{K}_{3}(re^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}\leq\, M\omega_{2}(d_{\mathbb{D}}(r)). 
\end{equation} On the other hand, since $\omega_{1}(d_{\mathbb{D}}(\phi))$ is superharmonic in $\mathbb{D}$ (see Lemma \ref{le-9}) and hence dominates its Poisson integral, we have \begin{eqnarray*} \mathscr{K}_{1}(re^{i\theta})&=&\left\{\omega_{1}(d_{\mathbb{D}}(\phi(re^{i\theta})))-P[\omega_{1}(d_{\mathbb{D}}(\phi))](re^{i\theta})\right\}_{+}\\ &\leq&\mathscr{K}_{2}(re^{i\theta})+\left\{\omega_{1}(d_{\mathbb{D}}(\phi(e^{i\theta})))-P[\omega_{1}(d_{\mathbb{D}}(\phi))](re^{i\theta})\right\}_{+}\\ &\leq& \mathscr{K}_{2}(re^{i\theta})+\mathscr{K}_{3}(re^{i\theta}), \end{eqnarray*} which, together with (\ref{eq-1.5k}), (\ref{eq-1.6k}) and the Minkowski inequality, implies that there is a positive constant $M$ such that \begin{eqnarray*} \left(\int_{0}^{2\pi}\left(\mathscr{K}_{1}(re^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}&\leq& \left(\int_{0}^{2\pi} \left(\mathscr{K}_{2}(re^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}\\ &&+\left(\int_{0}^{2\pi} \left(\mathscr{K}_{3}(re^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}\\ &\leq&M\omega_{2}(d_{\mathbb{D}}(r)). \end{eqnarray*} \noindent $\mathbf{Step~7.}$ ``$(\mathscr{F}_{5})\Rightarrow(\mathscr{F}_{1})$".\\ For $z,~w\in\mathbb{D}$ and $\theta\in[0,2\pi]$, let $$\mathscr{K}_{4}(ze^{i\theta},we^{i\theta})=\left\{\omega_{1}(d_{\mathbb{D}}(\phi(ze^{i\theta})))-P[\omega_{1}(d_{\mathbb{D}}(\phi))](we^{i\theta})\right\}_{+},$$ $$\mathscr{K}_{5}(ze^{i\theta},we^{i\theta})=\left\{\omega_{1}(d_{\mathbb{D}}(\phi(ze^{i\theta})))-\omega_{1}(d_{\mathbb{D}}(\phi(we^{i\theta})))\right\}_{+}$$ and $$\mathscr{K}_{6}(ze^{i\theta},we^{i\theta})=\left\{P[\omega_{1}(d_{\mathbb{D}}(\phi))](ze^{i\theta})-P[\omega_{1}(d_{\mathbb{D}}(\phi))](we^{i\theta})\right\}_{+}.$$ Since $\omega_{1}(d_{\mathbb{D}}(\phi))$ is superharmonic in $\mathbb{D}$ (see Lemma \ref{le-9}), we see that \begin{eqnarray}\label{eq-1.6.0k}\nonumber \mathscr{K}_{5}(ze^{i\theta},we^{i\theta})&\leq&\mathscr{K}_{4}(ze^{i\theta},we^{i\theta})\\ &\leq&\mathscr{K}_{4}(ze^{i\theta},ze^{i\theta})+\mathscr{K}_{6}(ze^{i\theta},we^{i\theta}). \end{eqnarray} From $(\mathscr{F}_{5})$, we see that there is a positive constant $M$ such that \begin{equation}\label{eq-1.7k}\left(\int_{0}^{2\pi}\left(\mathscr{K}_{4}(ze^{i\theta},ze^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}} \leq\,M\omega_{2}(d_{\mathbb{D}}(z)).\end{equation} Since $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\mathbb{T})$ and $\omega_2$ is a regular majorant, by Theorem \ref{thm-5.0}, there is a positive constant $M$ such that \begin{equation}\label{eq-1.8k}\left(\int_{0}^{2\pi}\left(\mathscr{K}_{6}(ze^{i\theta},we^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}} \leq\,M\omega_{2}(|z-w|).\end{equation} For $z\in\mathbb{D}$ and $|z-w|\leq d_{\mathbb{D}}(z)$, it follows from (\ref{eq-1.6.0k}), (\ref{eq-1.7k}), (\ref{eq-1.8k}) and the Minkowski inequality that \begin{eqnarray}\label{eq-1.9k}\nonumber \left(\int_{0}^{2\pi}\left(\mathscr{K}_{5}(ze^{i\theta},we^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}&\leq& \left(\int_{0}^{2\pi}\left(\mathscr{K}_{4}(ze^{i\theta},ze^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}\\ \nonumber &&+\left(\int_{0}^{2\pi}\left(\mathscr{K}_{6}(ze^{i\theta},we^{i\theta})\right)^{p}d\theta\right)^{\frac{1}{p}}\\ &\leq&M\omega_{2}(d_{\mathbb{D}}(z)). \end{eqnarray} Combining (\ref{eq-1.9k}), Lemmas \ref{lem-0.4} and \ref{cor-1.0} gives $\omega_{1}(d_{\mathbb{D}}(\phi))\in\Lambda_{\omega_{2},p}(\mathbb{D}),$ which, together with Step 4, implies that $(\mathscr{F}_{1})$ holds. The proof of this theorem is complete. 
\qed \section*{Statements and Declarations} \subsection*{Competing interests} There are no competing interests. \subsection*{Data availability} Data sharing not applicable to this article as no datasets were generated or analysed during the current study. \section{Acknowledgments} The research of the first author was partly supported by the National Science Foundation of China (grant no. 12071116), the Hunan Provincial Natural Science Foundation of China (No. 2022JJ10001), the Key Projects of Hunan Provincial Department of Education (grant no. 21A0429), the Double First-Class University Project of Hunan Province (Xiangjiaotong [2018]469), the Science and Technology Plan Project of Hunan Province (2016TP1020), and the Discipline Special Research Projects of Hengyang Normal University (XKZX21002); The research of the second author was partly supported by JSPS KAKENHI Grant Number JP22K03363. \begin{equation}gin{thebibliography}{1} \bibitem{AD} E. Abakumov and E. Doubtsov, Reverse estimates in growth spaces, \textit{Math. Z.} {\bf 271} (2012), 399--413. \bibitem{AKM} M. Arsenovi\'c, V. Koji\'c and M. Mateljevi\'c, On Lipschitz continuity of harmonic quasiregular maps on the unit ball in $\mathbb{R}^{n}$, \textit{Ann. Acad. Sci. Fenn. Math.} {\bf 33} (2008), 315--318. \bibitem{AP-2017} M. Arsenovi\'c and M. Pavlovi\'c, On Dyakonov type theorems for harmonic quasiregular mappings, \textit{Czech. Math. J.} {\bf 67} (2017), 289--296. \bibitem{MVM} M. Arsenovi\'c, V. Manojlovi\'c and M. Mateljevi\'c, Lipschitz-type spaces and harmonic mappings in the space, \textit{Ann. Acad. Sci. Fenn. Math.} {\bf 35} (2010), 379--378. \bibitem{Ch-18} J. L. Chen, M.Z. Huang, A. Rasila and X.T. Wang, On Lipschitz continuity of solutions of hyperbolic Poisson's equation, \textit{Calc. Var. Partial Differ. Equ.} {\bf 57} (2018), 32 p. \bibitem{C-H} S. L. Chen and H. Hamada, Harmonic Lipschitz type spaces and composition operators meet majorants, \textit{J. Geom. Anal.}, to appear. \bibitem{CH-Riesz} S. L. Chen and H. Hamada, On Riesz type inequalities, Hardy-Littlewood type theorems and smooth moduli, submitted. \bibitem{CHZ2022MZ} S. L. Chen, H. Hamada and J. -F. Zhu, Composition operators on Bloch and Hardy type spaces, \textit{Math. Z.} {\bf 301} (2022), 3939--3957. \bibitem{CSR} S. L. Chen, S. Ponnusamy and A. Rasila, Lengths, areas and Lipschitz-type spaces of planar harmonic mappings, \textit{Nonlinear Anal.} {\bf 115} (2015), 62--70. \bibitem{CPR} S. L. Chen, S. Ponnusamy, and A. Rasila, On characterizations of Bloch-type, Hardy-type, and Lipschitz-type spaces, \textit{Math. Z.} {\bf 279} (2015), 163--183. \bibitem{CPR-2014} S. L. Chen, S. Ponnusamy, and A. Rasila, Coefficient estimates, Landau's theorem and Lipschitz-type spaces on planar harmonic mappings, \textit{J. Aust. Math. Soc.} {\bf 96} (2014), 198--215. \bibitem{CPW-11} S. L. Chen, S. Ponnusamy, and X. Wang, On planar harmonic Lipschitz and planar harmonic Hardy classes, \textit{Ann. Acad. Sci. Fenn. Math.} {\bf 36} (2011), 567--576. \bibitem{CP-2013} S. L. Chen and S. Ponnusamy, Lipschitz-type spaces and Hardy spaces on some classes of complex-valued functions, \textit{Integral Equations Oper. Theory} {\bf 77} (2013), 261--278. \bibitem{Du} P. Duren, {\it Harmonic mappings in the plane,} Cambridge Univ. Press, 2004. \bibitem{Dy1} K. M. Dyakonov, Equivalent norms on Lipschitz-type spaces of holomorphic functions, \textit{Acta Math.} {\bf 178} (1997), 143--167. \bibitem{Dy2} K. M. 
Dyakonov, Holomorphic functions and quasiconformal mappings with smooth moduli, \textit{Adv. Math.} {\bf 187}(2004), 146--172. \bibitem{Dy3} K. M. Dyakonov, Strong Hard-Littlewood theorems for analytic functions and mappings of finite distortion, \textit{Math. Z.} {\bf 249}(2005), 597--611. \bibitem{Dy4} K. M. Dyakonov, Functions in Bloch-type spaces and their moduli, \textit{Ann. Acad. Sci. Fenn., Math.} {\bf 41}(2016), 705--712. \bibitem{Ga} J. B. Garnett, Bounded analytic functions. Academic Press, New York, 1981. \bibitem{GK} A. Gjokaj and D. Kalaj, Quasiconformal harmonic mappings between the unit ball and spatial domain with $C^{1,\alpha}$ boundary, \textit{Potential Anal.} {\bf 57} (2022), 367--377. \bibitem{GM} F. W. Gehring and O. Martio, Lipschitz-classes and quasiconformal mappings, \textit{Ann. Acad. Sci. Fenn. Ser. A I Math.} {\bf 10} (1985), 203--219. \bibitem{H-L-31} G. H. Hardy and J. E. Littlewood, Some properties of conjugate functions, \textit{J. Reine Angew. Math.} {\bf 167} (1931), 405--423. \bibitem{H-L} G. H. Hardy and J. E. Littlewood, Some properties of fractional integrals II, \textit{Math. Z.} {\bf 34} (1932), 403--439. \bibitem{KM-2011} D. Kalaj and M. Vuorinen, On harmonic functions and the Schwarz lemma, \textit{Proc. Am. Math. Soc.} {\bf 140} (2012), 161--165. \bibitem{KM} M. Kne{\rm$\check{z}$}evi\'c and M. Mateljevi\'c, On the quasi-isometries of harmonic quasiconformal mappings, \textit{J. Math. Anal. Appl.} {\bf 334} (2007), 404--413. \bibitem{KP-08} V. Koji\'c and M. Pavlovi\'c, Subharmonicity of $|f|^{p}$ for quasiregular harmonic functions with applications, \textit{J. Math. Anal. Appl.} {\bf 342} (2008), 742--746. \bibitem{L} V. Lappalainen, Lip$_{h}$-extension domains, \textit{Ann. Acad. Sci. Fenn. Ser. A I Math. Dissertationes} {\bf 56} (1985). \bibitem{Lewy} H. Lewy, On the non-vanishing of the Jacobian in certain one-to-one mappings, \textit{Bull. Amer. Math. Soc.} {\bf 42} (1936), 689--692. \bibitem{No} C. A. Nolder and D. M. Oberlin, Moduli of continuity and a Hardy-Littlewood theorem, Complex analysis, Proc. 13th Rolf Nevanlinna-Colloq., Joensuu/Finl. 1987, \textit{Lect. Notes Math.} 1351(1988), 265-272. \bibitem{P} M. Pavlovi\'c, On Dyakonov's paper Equivalent norms on Lipschitz-type spaces of holomorphic functions, \textit{Acta Math.} {\bf 183} (1999), 141--143. \bibitem{Pav-2007} M. Pavlovi\'c, Lipschitz conditions on the modulus of a harmonic function, \textit{Rev. Mat. Iberoamericana} {\bf 23} (2007), 831--845. \bibitem{P-08} M. Pavlovi\'c, Division by inner functions in a class of composition operators on Lipschitz spaces, \textit{Bull. London Math. Soc.} {\bf 40} (2008), 199--209. \bibitem{Pav-2008} M. Pavlovi\'c, Derivative-free characterizations of bounded composition operators between Lipschitz spaces, \textit{Math. Z.} {\bf 258} (2008), 81--86. \bibitem{P-R-0} J. A. Pel\'aez and J. R\"atty\"a, Generalized Hilbert operators on weighted Bergman spaces, \textit{Adv. Math.} {\bf 240} (2013), 227--267. \bibitem{P-R-1} J. A. Pel\'aez and J. R\"atty\"a, Trace class criteria for Toeplitz and composition operators on small Bergman spaces, \textit{Adv. Math.} {\bf 293} (2016), 606--643. \bibitem{Pri} I. I. Privalov, Sur les fonctions conjugees, \textit{Bull. Soc. Math. France} {\bf 44}(1916), 100--103. \bibitem{Q-W} J. Qiao and X. T. Wang, Lipschitz-type spaces of pluriharmonic mappings, \textit{Filomat} {\bf 27} (2013), 693--702. \bibitem{Sha} J. H. Shapiro, The essential norm of a composition operator, \textit{Ann. 
Math.} {\bf 125} (1987), 375--404. \bibitem{Z2} K. Zhu, \textit{Operator theory in function spaces}, Mathematical surveys and monographs, American Mathematical Society, V. 138, 2nd ed., 2007. \bibitem{Zy} A. Zygmund, \textit{Trigonometric series}, Cambridge University Press, 1959. \end{thebibliography} \end{document}
\begin{document} \title{Relations of the Nuclear Norms of a Tensor and its Matrix Flattenings} \author{Shenglong~Hu}\thanks{This work is partially supported by National Science Foundation of China (Grant No. 11401428). } \address{Department of Mathematics, School of Science, Tianjin University, Tianjin, China.} \email{[email protected]} \address{Department of Mathematics, National University of Singapore} \email{[email protected]} \begin{abstract} For a $3$-tensor of dimensions $I_1\times I_2\times I_3$, we show that the nuclear norm of each of its matrix flattenings is a lower bound for the tensor nuclear norm, which in turn is bounded above by $\sqrt{\min\{I_i : i\neq j\}}$ times the nuclear norm of the matrix flattening in mode $j$, for each $j=1,2,3$. The results can be generalized to $N$-tensors with any $N\geq 3$. Both the lower and upper bounds for the tensor nuclear norm are sharp in the case $N=3$. A computable criterion for the lower bound being tight is given as well. \end{abstract} \keywords{tensor, nuclear norm} \subjclass[2010]{15A60; 15A69} \maketitle \section{Introduction}\label{sec:intro} The fundamental significance of the matrix nuclear norm is widely recognized, in both theory and applications, especially in matrix completion problems; see \cite{horn-johnson,golub-vanloan,candes-recht} and references therein. Likewise, the tensor nuclear norm has very recently been recognized to be of great interest and importance \cite{friedland-lim,yuan-zhang,derksen,lim13}. Despite much effort in recent years to develop theory and tools for handling tensors, these tools are, compared with those for matrices, still in their infancy \cite{landsberg,lim13}. As a result, in the very important problem of tensor completion, nuclear norms of the matrix flattenings of the underlying tensor are popularly used as alternatives to the less explored tensor nuclear norm \cite{gandy-recht-yamada}. However, it has been shown very recently that using the tensor nuclear norm allows a drastically smaller sample size to guarantee exact recovery of low rank tensors in large dimensions \cite{yuan-zhang}. Therefore, it is of interest to know some relationships between the two approaches to the tensor completion problem. In particular, the practical power of the approach through matrix flattenings suggests that there should be close relationships between the tensor nuclear norm, which possesses the stronger theoretical recovery property, and the nuclear norms of its matrix flattenings. On the other hand, it has also been shown very recently that the computation of the tensor nuclear norm is NP-hard \cite{friedland-lim}, which implies that the tensor completion problem based on the tensor nuclear norm is also NP-hard. So, it is of interest to obtain approximations of the tensor nuclear norm with known worst-case bounds. In view of the approaches to the tensor completion problem, the nuclear norms of the matrix flattenings are natural first choices. This article establishes some relationships between them. Thus, it provides a rationale for the usage of nuclear norms of the matrix flattenings in tensor completion from a new perspective, and also computable bounds for the NP-hard tensor nuclear norm. We will first focus on third order tensors (or $3$-tensors) in Sections~\ref{sec:nuclear}, \ref{sec:flatten} and \ref{sec:bound}, and then extend the results to tensors of higher orders in the last section (cf.\ Section~\ref{sec:general}). 
\section{Tensor Nuclear Norm}\label{sec:nuclear} Let $\mathbb R^{I\times J\times K}$ be space of third order tensors (or $3$-tensors) of dimensions $I\times J\times K$ with entries in the field of real numbers. A tensor $\mathcal A\in\mathbb R^{I\times J\times K}$ consists of $IJK$ entries $a_{ijk}$ with $i\in\{1,\dots,I\}$, $j\in\{1,\dots,J\}$ and $k\in\{1,\dots,K\}$. Associated with the tensor space $\mathbb R^{I\times J\times K}$ are the natural inner product: \[ \langle\mathcal A,\mathcal B\rangle:=\sum_{1\leq i\leq I}\sum_{1\leq j\leq J}\sum_{1\leq k\leq K}a_{ijk} b_{ijk} , \] and the induced norm \[ \|\mathcal A\|_{\operatorname{HS}}:=\sqrt{\langle\mathcal A,\mathcal A\rangle}, \] which is referred as the Hilbert-Schmidt norm in the literature \cite{lim13}. Note that when $\mathcal A$ degenerates (i.e., $\min\{I,J,K\}=1$), the Hilbert-Schmidt norm reduces to the Frobenius norm of a matrix or the Euclidean norm of a vector. Tensors are generalizations of matrices. Two of the most important norms of a matrix are the spectral norm and its dual (i.e., the nuclear norm) \cite{horn-johnson, golub-vanloan}. Likewise, we can define the \textit{spectral norm of a tensor} $\mathcal A\in\mathbb R^{I\times J\times K}$ as \begin{equation}\label{sepctralnorm} \|\mathcal A\|:=\max\big\{\langle\mathcal A,\mathbf x\otimes\mathbf y\otimes\mathbf z\rangle : \mathbf x\in\mathbb R^I,\ \mathbf y\in\mathbb R^J,\ \mathbf z\in\mathbb R^K,\ \|\mathbf x\|=\|\mathbf y\|=\|\mathbf z\|=1\big\}. \end{equation} Here $\mathbf x\otimes\mathbf y\otimes\mathbf z\in\mathbb R^{I\times J\times K}$ is a rank one tensor with its $ijk$th entry being $x_iy_jz_k$. Conveniently, we use the same $\|\cdot\|$ to mean the spectral norm of a tensor (which is denoted in calligraphic letter) and a matrix (which is denoted in capital letter), and the Euclidean norm of a vector (which is denoted in bold lower case letter). It is clear that $\|\cdot\|$ defines a norm over $\mathbb R^{I\times J\times K}$. The dual norm of $\|\cdot\|$ is defined as \begin{equation}\label{nuclearnorm} \|\mathcal A\|_\ast:=\max\big\{\langle\mathcal A,\mathcal B\rangle : \mathcal B\in\mathbb R^{I\times J\times K},\ \|\mathcal B\|=1\big\}. \end{equation} It can be proved that $\|\cdot\|_\ast$ is also a norm over $\mathbb R^{I\times J\times K}$. From the definitions, we see that the spectral norm and its dual norm of a tensor are generalizations of the spectral norm and the nuclear norm of a matrix respectively. We call $\|\mathcal A\|_\ast$ the \textit{nuclear norm} of the tensor $\mathcal A$. It is a fact that every tensor $\mathcal A\in\mathbb R^{I\times J\times K}$ can be decomposed into a sum of rank one tensors \cite{lim13,landsberg}: \[ \mathcal A=\sum_{s=1}^r\lambda_s\mathbf x_s\otimes\mathbf y_s\otimes\mathbf z_s, \] with $\lambda_s\in\mathbb R$, and unit vectors $\mathbf x_s\in\mathbb R^I$, $\mathbf y_s\in\mathbb R^J$, and $\mathbf z_s\in\mathbb R^K$. It can be shown that \cite{lim13,friedland-lim} \begin{equation}\label{decomp} \|\mathcal A\|_\ast=\min\bigg\{\sum_{s=1}^r|\lambda_s| : \mathcal A=\sum_{s=1}^r\lambda_s\mathbf x_s\otimes\mathbf y_s\otimes\mathbf z_s, \|\mathbf x_s\|=\|\mathbf y_s\|=\|\mathbf z_s\|=1,\ s=1,\dots,r\bigg\}. \end{equation} Note that the matrix nuclear norm has a similar characterization, i.e., the singular value decomposition. It is well-known that both the spectral norm and the nuclear norm of a matrix can be computed out very efficiently, in polynomial time complexity up to the machine accuracy \cite{golub-vanloan}. 
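As a quick sanity check of these definitions (a small illustration, not needed in the sequel), consider a rank one tensor $\mathcal A=\mathbf x\otimes\mathbf y\otimes\mathbf z$ with $\|\mathbf x\|=\|\mathbf y\|=\|\mathbf z\|=1$. Then \[ \|\mathcal A\|_{\operatorname{HS}}=\|\mathbf x\|\|\mathbf y\|\|\mathbf z\|=1 \quad\text{and}\quad \|\mathcal A\|=\max\big\{\langle\mathbf x,\mathbf u\rangle\langle\mathbf y,\mathbf v\rangle\langle\mathbf z,\mathbf w\rangle : \|\mathbf u\|=\|\mathbf v\|=\|\mathbf w\|=1\big\}=1, \] while taking $\mathcal B=\mathbf x\otimes\mathbf y\otimes\mathbf z$ in \eqref{nuclearnorm} and the one-term decomposition in \eqref{decomp} gives $\|\mathcal A\|_\ast=1$ as well; thus all three norms coincide on rank one tensors of unit length. 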
However, both the spectral norm and the nuclear norm of a tensor are successively proven to be NP-hard to compute \cite{hillar-lim,friedland-lim}. Despite the general NP-hardness, the nuclear norms of some special tensors can be determined, see \cite{lim13,friedland-lim,derksen}. \section{Matrix Flattening}\label{sec:flatten} Given a $3$-tensor $\mathcal A\in\mathbb R^{I\times J\times K}$, we can regard it as a collection of $I$-vectors ($J$-vectors, $K$-vectors) $\mathbf a_{\cdot jk}$'s ($\mathbf a_{i\cdot k}$'s, $\mathbf a_{ij\cdot}$'s respectively), where for example \[ \mathbf a_{\cdot jk}=(a_{1jk},\dots,a_{Ijk})^\mathsf{T}\in\mathbb R^I. \] Let us focus on the $I$-vectors for a moment. There are altogether $JK$ $I$-vectors, and they are denoted by $\mathbf a_{\cdot jk}$ for $j=1,\dots,J$ and $k=1,\dots,K$. If we list all of them into a $I\times JK$ matrix with respect to a prefixed order of the set $\{(j,k) : 1\leq j\leq J,\ 1\leq k\leq K\}$ (eg.\ lexicographic order) as \[ A_{(1)}:=\big[\mathbf a_{\cdot 11},\dots,\mathbf a_{\cdot JK}\big], \] the resulting matrix is called the \textit{matrix flattening of the tensor $\mathcal A$ in mode $1$}. Similarly, we have the matrix flattenings $A_{(2)}$ and $A_{(3)}$ of the tensor $\mathcal A$ in mode $2$ and mode $3$ respectively. The next lemma is immediate. \begin{lemma}[Isomorphism]\label{lem:corres} For a fixed order of the set $\{(j,k) : 1\leq j\leq J,\ 1\leq k\leq K\}$, there is a one to one correspondence between the space $\mathbb R^{I\times J\times K}$ of $3$-tensors and the space of $\mathbb R^{I\times JK}$ of matrices through the matrix flattening in mode $1$. Similar results hold for mode $2$ and $3$. \end{lemma} Therefore, we would like to fix an order of the set $\{(j,k) : 1\leq j\leq J,\ 1\leq k\leq K\}$, say the lexicographic order. Then, under the matrix flattening in mode $1$, the unique matrix associated to a tensor $\mathcal A$ is denoted by $A_{(1)}$ as before; while the unique tensor associated to a matrix $A\in\mathbb R^{I\times JK}$ is denoted by $\operatorname{ten_1}(A)$. \section{Bounds from Matrix Flattening Nuclear Norms}\label{sec:bound} As the nuclear norm of a tensor is NP-hard to compute, whereas the nuclear norms of matrices are easy to compute in any given accuracy, it becomes popular in applications, such as tensor completion \cite{gandy-recht-yamada}, to use \begin{equation}\label{norm-av} \|\mathcal A\|_{\#}:=\frac{1}{3}\big(\|A_{(1)}\|_\ast+\|A_{(2)}\|_\ast+\|A_{(3)}\|_\ast\big) \end{equation} or some other functionals over $(\|A_{(1)}\|_\ast,\|A_{(2)}\|_\ast,\|A_{(3)}\|_\ast)^\mathsf{T}$ as alternatives for $\|\mathcal A\|_\ast$. We first show that every matrix flattening nuclear norm is a lower bound of the tensor nuclear norm. \begin{proposition}[Lower Bound]\label{prop:lowerbound} For any $3$-tensor $\mathcal A\in\mathbb R^{I\times J\times K}$, we have \[ \|A_{(i)}\|_\ast\leq \|\mathcal A\|_\ast,\ \text{for all }i=1,2,3. \] Therefore, \begin{equation}\label{lowerbound} \|\mathcal A\|_{\#}\leq\|\mathcal A\|_\ast. \end{equation} \end{proposition} \begin{proof} We prove the case $\|A_{(1)}\|_\ast\leq\|\mathcal A\|_\ast$ and the others follow similarly. Moreover, \eqref{lowerbound} is an immediate consequence of these inequalities. Let \[ \mathfrak F:=\{\mathbf u\otimes\mathbf v\otimes\mathbf w : \mathbf u\in\mathbb R^I,\ \mathbf v\in\mathbb R^J,\ \mathbf w\in\mathbb R^K,\ \|\mathbf u\|=\|\mathbf v\|=\|\mathbf w\|=1\} \] be the set of all rank one tensors of unit length. 
Let \[ \mathfrak D:=\{\mathcal U\in\mathbb R^{I\times J\times K} : \|\mathcal U\|\leq 1\} \] be the set of all tensors of length smaller than one. Likewise, let \[ \mathfrak M:=\{\mathbf u\otimes\mathbf z : \mathbf u\in\mathbb R^I,\ \mathbf z\in\mathbb R^{JK},\ \|\mathbf u\|=\|\mathbf z\|=1\} \] be the set of all rank one matrices of unit length, and \[ \mathfrak E:=\{ V\in\mathbb R^{I\times JK} : \|V\|\leq 1\} \] be the set of matrices of length smaller than one. It is easy to see that under the isomorphism with the matrix flattening in mode $1$ (cf.\ Lemma~\ref{lem:corres}), \[ \mathfrak F\subset\mathfrak M, \] since \[ \|\mathbf v\otimes\mathbf w\|=\|\mathbf v\|\|\mathbf w\|. \] For a matrix $V\in\mathbb R^{I\times JK}$, we have \[ \|V\|= \max\{\langle V,\mathbf u\otimes\mathbf z\rangle : \mathbf u\otimes\mathbf z\in\mathfrak M\}. \] Therefore, \[ \|V\|\geq\max\{\langle \operatorname{ten_1}(V),\mathbf u\otimes\mathbf v\otimes \mathbf w\rangle : \mathbf u\otimes\mathbf v\otimes\mathbf w\in\mathfrak F\}=\|\operatorname{ten_1}(V)\|. \] Thus, under the isomorphism with the matrix flattening in mode $1$, \[ \mathfrak E\subset\mathfrak D. \] On the other side, by the definition of nuclear norm, it follows that \[ \|A_{(1)}\|_\ast=\max\{\langle A_{(1)},V\rangle : V\in\mathfrak E\}. \] Henceforth, these, together with \[ \langle A_{(1)},V\rangle=\langle\mathcal A,\operatorname{ten_1}(V)\rangle, \] imply that \[ \|A_{(1)}\|_\ast\leq \max\{\langle\mathcal A,\operatorname{ten_1}(V)\rangle, \operatorname{ten_1}(V)\in\mathfrak D\}=\|\mathcal A\|_\ast. \] \end{proof} Given a tensor $\mathcal A\in\mathbb R^{I\times J\times K}$, let \[ \sum_{i=1}^r\sigma_i\mathbf x_i\otimes \mathbf z_i \] be the singular value decomposition of the matrix $A_{(1)}$, see \cite{horn-johnson,golub-vanloan}. Then, \[ \|A_{(1)}\|_\ast=\sum_{i=1}^r\sigma_i, \] and both \[ \{\mathbf x_1,\dots,\mathbf x_r\}\ \text{and }\{\mathbf z_1,\dots,\mathbf z_r\} \] are orthonormal. Under the order of the set $\{(j,k) : 1\leq j\leq J,\ 1\leq k\leq K\}$, we can reformulate the vectors $\mathbf z_i$'s as $J\times K$ matrices $Z_i$'s. Then, \begin{equation}\label{hs-norm} \|Z_i\|_{\operatorname{HS}}=\|\mathbf z_i\|=1,\ \text{for all }i=1,\dots,r. \end{equation} Define \begin{equation}\label{sum-nuclear} \|\vee Z_i\|_\ast:=\max\{\|Z_i\|_\ast : 1\leq i\leq r\}. \end{equation} Then, we have the following result. \begin{proposition}[Upper Bound]\label{prop:svd-nuclear} For any $3$-tensor $\mathcal A\in\mathbb R^{I\times J\times K}$, let $A_{(1)}$ be its matrix flattening in mode $1$ and \[ \sum_{i=1}^r\sigma_i\mathbf x_i\otimes \mathbf z_i \] be the singular value decomposition of $A_{(1)}$. Let $Z_i$ be the matrix reformulation of $\mathbf z_i$ as above, then \begin{equation}\label{bound-nuclear} \|\mathcal A\|_\ast\leq \sum_{i=1}^r\sigma_i\|Z_i\|_\ast\leq \|A_{(1)}\|_\ast\|\vee Z_i\|_\ast. \end{equation} Similar results hold for matrix flattenings in mode $2$ and mode $3$. \end{proposition} \begin{proof} Let \[ Z_i=\sum_{j=1}^{r_i}\mu_{i,j}\mathbf v_{i,j}\otimes\mathbf w_{i,j} \] be the singular value decomposition of the matrix $Z_i$ for all $i=1,\dots,r$. Then, \[ \|Z_i\|_\ast=\sum_{j=1}^{r_i}\mu_{i,j}. \] Since \[ A_{(1)}=\sum_{i=1}^r\sigma_i\mathbf x_i\otimes \mathbf z_i, \] under the isomorphism with the matrix flattening in mode $1$ (cf.\ Lemma~\ref{lem:corres}), we have that \[ \mathcal A=\sum_{i=1}^r\sigma_i\mathbf x_i\otimes(\sum_{j=1}^{r_i}\mu_{i,j}\mathbf v_{i,j}\otimes\mathbf w_{i,j}). 
\] It follows from the characterization \eqref{decomp} that \[ \|\mathcal A\|_\ast\leq \sum_{i=1}^r\sigma_i\big(\sum_{j=1}^{r_i}\mu_{i,j}\big)=\sum_{i=1}^r\sigma_i\|Z_i\|_\ast\leq \|\vee Z_i\|_\ast (\sum_{i=1}^r\sigma_i)=\|A_{(1)}\|_\ast\|\vee Z_i\|_\ast. \] Therefore, the result follows. \end{proof} \begin{corollary}[A Criterion]\label{cor:nuclear} Let $\mathcal A\in\mathbb R^{I\times J\times K}$ be a $3$-tensor. If all the matrices $Z_i$ as above have nuclear norm $1$, then \[ \|\mathcal A\|_\ast=\|A_{(1)}\|_\ast. \] In this case, \begin{equation}\label{diagonaldec} \mathcal A=\sum_{i=1}^r\sigma_i\mathbf x_i\otimes\mathbf u_i\otimes\mathbf v_i \end{equation} for a set of orthonormal vectors $\{\mathbf x_i\in\mathbb R^I : 1\leq i\leq r\}$, and unit vectors $\{\mathbf u_i\in \mathbb R^J : 1\leq i\leq r\}$ and $\{\mathbf v_i\in\mathbb R^K : 1\leq i\leq r\}$ satisfying \[ \langle\mathbf u_i,\mathbf u_j\rangle\langle \mathbf v_i,\mathbf v_j\rangle=\delta_{ij},\quad \text{for all }i,j=1,\dots,r, \] where $\delta_{ij}$ is the Kronecker symbol. Similar results hold for the matrix flattenings in mode $2$ and $3$. \end{corollary} \begin{proof} The first part follows from Propositions~\ref{prop:lowerbound} and \ref{prop:svd-nuclear}. The orthonormality follows from those of the $\mathbf x_i$'s and $Z_i$'s. The rest follows from \eqref{hs-norm} and the hypothesis that $\|Z_i\|_\ast=1$, which together imply that the $Z_i$'s are all rank one matrices. \end{proof} Corollary~\ref{cor:nuclear} gives a computable criterion for the equivalence between the tensor nuclear norm and its matrix flattening nuclear norm. If a tensor has the decomposition \eqref{diagonaldec}, then $\|\mathcal A\|_\ast=\|A_{(1)}\|_\ast$ \cite{yuan-zhang}. Proposition~\ref{prop:svd-nuclear} has the merit of measuring how far the computed matrix flattening nuclear norm is from the true nuclear norm of the tensor. It may happen that \[ \|\vee Z_i\|_\ast:=\max\{\|Z_i\|_\ast : 1\leq i\leq r\} \] is much larger than most of the $\|Z_i\|_\ast$'s. Therefore, in practical computation, we can determine the accuracy of the nuclear norm of the tensor by the interval \[ \bigg[\max\big\{\|A_{(i)}\|_\ast : i=1,2,3\big\},\ \ \min\bigg\{\sum_{i=1}^r\sigma_i\|Z_i\|_\ast, \sum_{j=1}^s\mu_j\|S_j\|_\ast, \sum_{k=1}^t\gamma_k\|T_k\|_\ast\bigg\}\bigg], \] where $\sum_{j=1}^s\mu_j\|S_j\|_\ast$ is the upper bound given by the matrix flattening in mode $2$, and $\sum_{k=1}^t\gamma_k\|T_k\|_\ast$ is that for mode $3$. We arrive at the main theorem. \begin{theorem}[The Relation]\label{thm:bound} For any $3$-tensor $\mathcal A\in\mathbb R^{I\times J\times K}$, we have \begin{equation}\label{upperbound} \|A_{(1)}\|_\ast\leq \|\mathcal A\|_\ast\leq \sqrt{\min\{J,K\}}\|A_{(1)}\|_\ast. \end{equation} Moreover, both bounds on $\|\mathcal A\|_\ast$ are sharp. Similar results hold for matrix flattenings in mode $2$ and mode $3$. \end{theorem} \begin{proof} The results follow from Propositions~\ref{prop:lowerbound} and \ref{prop:svd-nuclear}, and the fact that a $J\times K$ matrix of Hilbert-Schmidt norm (i.e., Frobenius norm) $1$ has nuclear norm at most $\sqrt{\min\{J,K\}}$. The sharpness of the left hand side inequality follows when $\min\{J,K\}=1$, in which case the tensor is essentially a matrix and hence the inequality becomes an equality. 
For the right hand side inequality, let $I=1$ and consider the tensor \[ \mathcal A= \sum_{i=1}^J\frac{1}{\sqrt{J}}1\otimes\mathbf e_{2, i}\otimes\mathbf e_{3,i}, \] where $\{\mathbf e_{2,i} : i=1,\dots,J\}$ is the standard orthonormal basis of $\mathbb R^J$ and $\{\mathbf e_{3,i} : i=1,\dots,J\}$ are $J$ standard basis vectors in $\mathbb R^K$. Then, since $A_{(1)}$ has a single row, it follows that \[ \|A_{(1)}\|_\ast=\|\sum_{i=1}^J\frac{1}{\sqrt{J}}\mathbf e_{2, i}\otimes\mathbf e_{3,i}\|_{\operatorname{HS}}=1, \] and \[ \|\mathcal A\|_\ast=\|\sum_{i=1}^J\frac{1}{\sqrt{J}}\mathbf e_{2, i}\otimes\mathbf e_{3,i}\|_\ast=\sqrt{J}. \] Therefore, the inequality becomes an equality. \end{proof} \begin{corollary}\label{cor:av} Let $I\leq J\leq K$. For any $3$-tensor $\mathcal A\in\mathbb R^{I\times J\times K}$, we have \begin{equation*} \|\mathcal A\|_{\#}\leq \|\mathcal A\|_\ast\leq \sqrt{J}\|\mathcal A\|_{\#}. \end{equation*} \end{corollary} \section{Generalization}\label{sec:general} For any positive integers $N\geq 3$, $I_1\leq\dots\leq I_N$ and an $N$-tensor $\mathcal A\in\mathbb R^{I_1\times\dots\times I_N}$, we can define in a similar fashion the matrix flattenings in mode $1$ up to mode $N$, and \[ \|\mathcal A\|_{\#}:=\frac{1}{N}\sum_{i=1}^N\|A_{(i)}\|_\ast. \] All the results in the previous sections can be generalized to tensors of higher orders, except the sharpness of the upper bound in Theorem~\ref{thm:bound}, which is unknown and suspected to be false. To this end, only the next lemma needs to be established. \begin{lemma}\label{lem:general} For any positive integers $N\geq 3$, $I_1\leq\dots\leq I_N$ and an $N$-tensor $\mathcal A\in\mathbb R^{I_1\times\dots\times I_N}$ with $\|\mathcal A\|_{\operatorname{HS}}=1$, we have \[ \|\mathcal A\|_\ast \leq \sqrt{\prod_{i=1}^{N-1}I_i}. \] \end{lemma} \begin{proof} Let $\{\mathbf e_{i,j} \in\mathbb R^{I_i} : j=1,\dots,I_i\}$ be the standard orthonormal basis of $\mathbb R^{I_i}$ for $i=1,\dots,N-1$. Then, \[ \mathcal A=\sum_{1\leq i_j\leq I_j,\ 1\leq j\leq N-1}\mathbf a_{i_1\dots i_{N-1}\cdot}\otimes\mathbf e_{1,i_1}\otimes\dots\otimes\mathbf e_{N-1,i_{N-1}}, \] where $\mathbf a_{i_1\dots i_{N-1}\cdot}$'s are the mode $N$ vectors of $\mathcal A$. Since $\{\mathbf e_{1,i_1}\otimes\dots\otimes\mathbf e_{N-1,i_{N-1}} : 1\leq i_j\leq I_j,\ 1\leq j\leq N-1\}$ is the standard orthonormal basis of $\mathbb R^{I_1\times\dots\times I_{N-1}}$, we have that \begin{align*} \|\mathcal A\|_{\operatorname{HS}}^2&=\sum_{1\leq i_j\leq I_j,\ 1\leq j\leq N-1}\langle \mathbf a_{i_1\dots i_{N-1}\cdot}\otimes\mathbf e_{1,i_1}\otimes\dots\otimes\mathbf e_{N-1,i_{N-1}}, \mathbf a_{i_1\dots i_{N-1}\cdot}\otimes\mathbf e_{1,i_1}\otimes\dots\otimes\mathbf e_{N-1,i_{N-1}}\rangle\\ &=\sum_{1\leq i_j\leq I_j,\ 1\leq j\leq N-1}\|\mathbf a_{i_1\dots i_{N-1}\cdot}\|^2. \end{align*} On the other hand, \[ \sum_{1\leq i_j\leq I_j,\ 1\leq j\leq N-1}\|\mathbf a_{i_1\dots i_{N-1}\cdot}\otimes\mathbf e_{1,i_1}\otimes\dots\otimes\mathbf e_{N-1,i_{N-1}}\|=\sum_{1\leq i_j\leq I_j,\ 1\leq j\leq N-1}\|\mathbf a_{i_1\dots i_{N-1}\cdot}\|. \] Since $\|\mathcal A\|_{\operatorname{HS}}=1$, by the Cauchy-Schwarz inequality (the sum has $\prod_{i=1}^{N-1}I_i$ terms), we have that \[ \sum_{1\leq i_j\leq I_j,\ 1\leq j\leq N-1}\|\mathbf a_{i_1\dots i_{N-1}\cdot}\|\leq \sqrt{\prod_{i=1}^{N-1}I_i}. \] The result on $\|\mathcal A\|_\ast$ then follows from \eqref{decomp}. \end{proof} We then have a similar theorem to Theorem~\ref{thm:bound}. 
\begin{theorem}[General Relation]\label{thm:bound-general} For any positive integers $N\geq 3$, $I_1\leq\dots\leq I_N$ and an $N$-tensor $\mathcal A\in\mathbb R^{I_1\times\dots\times I_N}$, we have \[ \|A_{(1)}\|_\ast\leq \|\mathcal A\|_\ast\leq \sqrt{\prod_{i=2}^{N-1}I_i}\|A_{(1)}\|_\ast. \] Therefore, \[ \|\mathcal A\|_{\#}\leq \|\mathcal A\|_\ast\leq \sqrt{\prod_{i=2}^{N-1}I_i}\|\mathcal A\|_{\#}. \] \end{theorem} \end{document}
\begin{document} \title[The Largest Irreducible Representations of Simple Groups] {The Largest Irreducible Representations\\ of Simple Groups} \author{Michael Larsen} \address{Department of Mathematics\\ Indiana University \\ Bloomington, IN 47405\\ U.S.A.} \email{[email protected]} \author{Gunter Malle} \address{FB Mathematik, TU Kaiserslautern, 67653 Kaiserslautern, Germany} \email{[email protected]} \author{Pham Huu Tiep} \address{Department of Mathematics\\ University of Arizona\\ Tucson, AZ 85721\\ U. S. A.} \email{[email protected]} \date{Oct. 7, 2010} \subjclass{20C15, 20C20, 20C30, 20C33} \thanks{The authors are grateful to Marty Isaacs for suggesting this problem to them. Michael Larsen was partially supported by NSF Grant DMS-0800705, and Pham Huu Tiep was partially supported by NSF Grant DMS-0901241.} \begin{abstract} Answering a question of I. M. Isaacs, we show that the largest degree of irreducible complex representations of any finite non-abelian simple group can be bounded in terms of the smaller degrees. We also study the asymptotic behavior of this largest degree for finite groups of Lie type. Moreover, we show that for groups of Lie type, the Steinberg character has largest degree among all unipotent characters. \end{abstract} \maketitle \section{Introduction} For any finite group $G$, let $b(G)$ denote the largest degree of any irreducible complex representation of $G$. Certainly, $b(G)^2 \leq |G|$, and this trivial bound is best possible in the following sense. One can write $|G| = b(G)(b(G)+e)$ for some non-negative integer $e$. Then $e = 0$ if and only if $|G| = 1$. Y. Berkovich showed that $e=1$ precisely when $|G| = 2$ or $G$ is a $2$-transitive Frobenius group, cf. \cite[Theorem 7]{Be}. In particular, there is no upper bound on $|G|$ when $e=1$. On the other hand, it turns out that $|G|$ \emph{can} be bounded in terms of $e$ if $e > 1$. Indeed, N. Snyder showed in \cite{Sn} that $|G| \leq ((2e)!)^2$. One can ask whether the largest degree $b(G)$ can be bounded in terms of the remaining degrees of $G$. More precisely, can one bound $$\varepsilon(G) := \dfrac{\sum_{\chi \in {\operatorname{Irr}}(G),~\chi(1) < b(G)}|\chi(1)|^{2}}{b(G)^{2}}$$ away from $0$ for all finite groups $G$? The aforementioned result of Berkovich immediately implies a negative answer to this question for general groups. M. Isaacs raised the question whether there exists a universal constant $\varepsilon > 0$ such that $\varepsilon(S) \geq \varepsilon$ for all \emph{simple} groups $S$. Assuming an affirmative answer to this question, he has improved Snyder's bound to the polynomial bound $|G| \leq Be^6$ (for some universal constant $B$ and for all finite groups $G$ with $e > 1$), cf. \cite{I}. The main goal of this paper is to answer Isaacs' question in the affirmative: \begin{thm} \label{main1} There exists a universal constant $\varepsilon > 0$ such that $\varepsilon(S) \geq \varepsilon$ for all finite non-abelian simple groups $S$. \end{thm} We note that our $\varepsilon$ is implicit because of the proof of Theorem \ref{sym}. It would be interesting to get an explicit $\varepsilon$; also, we do not know of any non-abelian simple group $S$ where $\varepsilon(S) < 1$. As pointed out by Isaacs in \cite{I}, if $\varepsilon(S) \geq 1$ for all non-abelian simple groups $S$, then his polynomial bound $Be^6$ can be improved to $|G| \leq e^6 +e^4$. For many simple groups $S$, one knows exactly what $b(S)$ is. 
However, for alternating groups $\mathsf{A}_n$ there are only asymptotic formulae, see \cite{VK} and \cite{LS}. For simple classical groups over small fields ${\mathbb{F}}_q$, the right asymptotic for $b(S)$ has not been determined. In this paper we provide the following lower and upper bounds: \begin{thm} \label{main2} For any $1 > \varepsilon > 0$, there are some (explicit) constants $A, B > 0$ depending on $\varepsilon$ such that, for any simple algebraic group ${\mathcal{G}}$ in characteristic $p$ of rank $n$ and any Frobenius map $F : {\mathcal{G}} \to {\mathcal{G}}$, the largest degree $b(G)$ of the corresponding finite group $G:= {\mathcal{G}}^F$ over ${\mathbb{F}}_q$ satisfies the following inequalities: $$A(\log_qn)^{(1-\varepsilon)/\gamma} < \frac{b(G)}{|G|_p} < B(\log_qn)^{2.54/\gamma}$$ if $G$ is classical, and $$1 \leq \frac{b(G)}{|G|_p} < B$$ if $G$ is an exceptional group of Lie type. Here, $\gamma = 1$ if $G$ is untwisted of type $A$, and $\gamma = 2$ otherwise. \end{thm} Even more explicit lower and upper bounds for $b(G)$ are proved in \S5 for finite classical groups $G$, cf. Theorems \ref{bound4}, \ref{bound5}, and \ref{bound6}. Certainly, any upper bound for $b(G)$ also holds for the largest degree $b_{\ell}(G)$ of the $\ell$-modular irreducible representations of $G$. Here is a lower bound for $b_{\ell}(G)$: \begin{thm} \label{main3} There exists an (explicit) constant $C > 0$ such that, for any simple algebraic group ${\mathcal{G}}$ in characteristic $p$, any Frobenius map $F : {\mathcal{G}} \to {\mathcal{G}}$, and any prime $\ell$, the largest degree $b_{\ell}(G)$ of the $\ell$-modular irreducible representations of $G := {\mathcal{G}}^F$ satisfies the inequality $b_{\ell}(G)/|G|_p \geq C$. \end{thm} \section{Symmetric Groups} \label{sec:sym} We recall some basic combinatorics connected with symmetric groups. By a \emph{Young diagram}, we mean a finite subset $\Delta$ of ${\mathbb{Z}}^{>0}\times {\mathbb{Z}}^{>0}$ such that for all $(x,y)\in {\mathbb{Z}}^{>0}\times {\mathbb{Z}}^{>0}$, $(x+1,y)\in \Delta$ or $(x,y+1)\in \Delta$ implies $(x,y)\in\Delta$. Elements of $\Delta$ are called \emph{nodes}. We denote by $Y(n)$ the set of Young diagrams of cardinality $n$. For any fixed $\Delta$, we let $l$ and $k$ denote the largest $x$-coordinate and $y$-coordinate in $\Delta$ respectively and define $a_j$ for $1\le j\le k$ and $b_i$ for $1\le i\le l$ by $$a_j := \max \{i\mid (i,j)\in \Delta\}$$ and likewise $$b_i := \max \{j\mid (i,j)\in \Delta\}.$$ Thus, for each $\Delta$, we have a pair of mutually transpose partitions $$n = a_1+\cdots +a_k = b_1+\cdots+b_l.$$ For each $(i,j)\in\Delta$, we define the \emph{hook} $H_{i,j}:=H_{i,j}(\Delta)$ to be the set of $(i',j')\in \Delta$ such that $i'\ge i$, $j'\ge j$, and equality holds in at least one of these two inequalities. We define the \emph{hook length} $$h(i,j) := h_{i,j}(\Delta) := |H_{i,j}(\Delta)| = 1+a_j-i+b_i-j,$$ and set $$P := P(\Delta) := \prod_{(i,j)\in \Delta} h_{i,j}.$$ Define ${\mathcal{A}}(\Delta)$ (resp. ${\mathcal{B}}(\Delta)$) to be the set of nodes that can be added (resp. 
removed) from $\Delta$ to produce another Young diagram: $${\mathcal{A}}(\Delta):= \{(i,j)\in {\mathbb{Z}}^{>0}\times {\mathbb{Z}}^{>0}\mid \Delta\cup\{(i,j)\}\in Y(n+1)\}$$ and $${\mathcal{B}}(\Delta):= \{(i,j)\in \Delta\mid \Delta\setminus \{(i,j)\}\in Y(n-1)\}.$$ Thus ${\mathcal{A}}(\Delta)$ consists of the pair $(1,k+1)$ and pairs $(a_j+1,j)$ where $j=1$ or $a_j < a_{j-1}$. In particular, the values $i$ for $(i,j)\in {\mathcal{A}}(\Delta)$ are pairwise distinct, so $$n \ge \sum_{(i,j)\in {\mathcal{A}}(\Delta)} (i-1) \ge \frac{|{\mathcal{A}}(\Delta)|^2 - |{\mathcal{A}}(\Delta)|}2,$$ and $|{\mathcal{A}}(\Delta)| < \sqrt{2n}+1$. Similarly, ${\mathcal{B}}(\Delta)$ consists of the pairs $(a_j,j)$ where either $j=k$ or $a_j > a_{j+1}$. Hence $$n \ge \sum_{(i,j)\in {\mathcal{B}}(\Delta)} i \ge \frac{|{\mathcal{B}}(\Delta)|^2 + |{\mathcal{B}}(\Delta)|}2,$$ and $|{\mathcal{B}}(\Delta)| < \sqrt{2n}$. For $(i,j)\in {\mathcal{A}}(\Delta)$, the symmetric difference between ${\mathcal{A}}(\Delta)$ and ${\mathcal{A}}(\Delta\cup \{(i,j)\})$ consists of at most three elements: $(i,j)$ itself and possibly $(i+1,j)$ and/or $(i,j+1)$. Likewise, the symmetric difference between ${\mathcal{B}}(\Delta)$ and ${\mathcal{B}}(\Delta\setminus \{(i,j)\})$ consists of at most three elements: $(i,j)$ and possibly $(i-1,j)$ and/or $(i,j-1)$. There are bijective correspondences between elements of $Y(n)$, partitions $n=\sum_j a_j$, dual partitions $n = \sum_i b_i$, and complex irreducible characters of $\mathsf{S}_n$. By the hook length formula, the degree of the character associated to $\Delta$ is $n!/P(\Delta)$. The branching rule for $\mathsf{S}_{n-1}<\mathsf{S}_n$ asserts that the restriction to $\mathsf{S}_{n-1}$ of the irreducible representation $\rho(\Delta)$ of $\mathsf{S}_n$ associated to $\Delta\in Y(n)$ is the direct sum of $\rho(\Delta\setminus(i,j))$ over all $(i,j)\in {\mathcal{B}}(\Delta)$. By Frobenius reciprocity, it follows that the induction from $\mathsf{S}_n$ to $\mathsf{S}_{n+1}$ of $\rho(\Delta)$ is the direct sum of $\rho(\Delta\cup\{(i,j)\})$ over all $(i,j)\in {\mathcal{A}}(\Delta)$. We can now prove the main theorem of this section. \begin{thm} \lambdabel{sym} Let $S\subset {\mathbb{R}}$ be a finite set. Then there exists $N$ and $\delta>0$ such that for all $n>N$ and every irreducible character $\phi$ of $\mathsf{S}_n$, there exists an irreducible character $\psi$ of $\mathsf{S}_n$ such that $$\frac{\psi(1)}{\phi(1)} \in [\delta,\infty) \setminus S.$$ \epsilonnd{thm} \begin{proof} Equivalently, we prove that for all $\Delta\in Y(n)$ there exists $\Gamma\in Y(n)$ such that $$\frac{\dim\rho(\Gamma)}{\dim\rho(\Delta)} = \frac{P(\Delta)}{P(\Gamma)} \in [\delta,\infty) \setminus S.$$ Consider the decomposition of \begin{equation} \lambdabel{downup} {\operatorname{Ind}}_{\mathsf{S}_{n-1}}^{\mathsf{S}_n}{\mathbb{R}}es _{\mathsf{S}_{n-1}}^{\mathsf{S}_n} \rho(\Delta) \epsilonnd{equation} into irreducible summands of the form $\rho(\Gamma)$. These summands are indexed by the set $N(\Delta)$ of quadruples $(i_1,j_1,i_2,j_2)$, where $(i_1,j_1)\in {\mathcal{B}}(\Delta)$ and $(i_2,j_2)\in {\mathcal{A}}(\Delta\setminus\{(i_1,j_1)\})$. Clearly, $|N(\Delta)| < \sqrt{2n}(\sqrt{2n}+1)$, while the degree of the representation (\ref{downup}) equals $n\dim \rho(\Delta)$. 
Thus, there exists $\epsilonpsilon>0$, depending only on $S$, such that if $n$ is sufficiently large, either there exists an element of $N(\Delta)$ with corresponding diagram $\Gamma\in Y(n)$ such that $$\frac{\dim\rho(\Gamma)}{\dim\rho(\Delta)} \in [\delta,\infty) \setminus S$$ or there exist at least $\epsilonpsilon n$ elements of $N(\Delta)$ with corresponding diagrams $\Gamma$ such that \begin{equation} \lambdabel{inS} \frac{\dim\rho(\Gamma)}{\dim\rho(\Delta)} \in S. \epsilonnd{equation} We need only treat the latter case. Consider octuples $(i_1,j_1,\ldots,i_4,j_4)$ such that $(i_1,j_1,i_2,j_2)$ and $(i_3,j_3,i_4,j_4)$ are in $N(\Delta)$, every $\Gamma$ corresponding to either of them satisfies (\ref{inS}), the coordinates $i_1,i_2,i_3,i_4$ are pairwise distinct, and the same is true for the coordinates $j_1,j_2,j_3,j_4$. The number of such octuples must be at least $\epsilonpsilon^2 n^2/2$ if $n$ is sufficiently large. Let us fix one. We set $$\Delta_{12} := (\Delta\setminus \{(i_1,j_1)\})\cup\{(i_2,j_2)\}$$ and $$\Delta_{34} := (\Delta\setminus \{(i_3,j_3)\})\cup\{(i_4,j_4)\}.$$ By the distinctness of the $i$ and $j$ coordinates, we have $$(i_3,j_3)\in {\mathcal{B}}(\Delta_{12})$$ and $$(i_4,j_4)\in {\mathcal{A}}(\Delta_{12}\setminus \{(i_3,j_3)\}).$$ Let $$\Delta_{1234} := ((\Delta_{12}\setminus \{(i_3,j_3)\})\cup \{(i_4,j_4)\}.$$ Given $(i,j), (i',j')\in {\mathcal{A}}(\Delta)$, we can compare $h_{(i',j')}(\Delta)$ to $h_{(i',j')}(\Delta\cup \{(i,j)\})$. If $i\neq i'$ and $j\neq j'$, the hook lengths are equal, but if $i=i'$ or $j=j'$, then $$h_{(i',j')}(\Delta\cup \{(i,j)\}) = h_{(i',j')}(\Delta)+1.$$ From this formula, we deduce that $$\frac{P(\Delta)P(\Delta_{1234})}{P(\Delta_{12})P(\Delta_{34})} = \frac{a(a+2)}{(a+1)^2}\cdot\frac{b(b-2)}{(b-1)^2},$$ where $$a = h_{(\min(i_2,i_4),\min(j_2,j_4))}(\Delta),\ b = h_{(\min(i_1,i_3),\min(j_1,j_3))}(\Delta).$$ Letting $S^2 = \{s_1 s_2\mid s_1,s_2\in S\}$, we conclude that $$\frac{P(\Delta_{1234})}{P(\Delta)} \in \left(\frac{a(a+2)}{(a+1)^2}\cdot \frac{b(b-2)}{(b-1)^2}\right)S^2.$$ As long as $\delta$ is chosen less than $(9/16)(\min S)^2$, this value is automatically greater than $\delta$. It remains to show that we can choose the octuple $(i_1,\ldots,j_4)$ such that the value is not in $S$. There are finitely many values of $t$ such that $tS^2 \cap S$ is non-empty, and we need only consider values of $a$ and $b$ for which $$\frac{a(a+2)}{(a+1)^2}\cdot\frac{b(b-2)}{(b-1)^2}$$ lies in this finite set. We claim that for each value $t$, the set of octuples which achieves this value is $o(n^2)$. The claim implies the theorem. To prove the claim we note that there are $O(n^{3/2})$ possibilities for $(i_1,j_1,i_2,j_2,i_3,j_3)$. Given one such value, $b$ is determined, so if $t$ is fixed, so is $a$. For a given value of $a$ and given $i_2$ and $j_2$, there are at most two possibilities for $(i_4,j_4)\in {\mathcal{A}}(\Delta)$ with $h_{(\min(i_2,i_4),\min(j_2,j_4))}(\Delta)$ achieving this fixed value. The claim follows. \epsilonnd{proof} \begin{cor} \lambdabel{alt} There is some constant $\varepsilon > 0$ such that $\varepsilon(\mathsf{A}_n) \geq \varepsilon$ for all $n \geq 5$. \epsilonnd{cor} \begin{proof} Choose $S := \{2,1,1/2\}$ and apply Theorem \ref{sym}. Let $\chi \in {\operatorname{Irr}}(\mathsf{A}_n)$ be of degree $b := b(\mathsf{A}_n)$ and let $\phi \in {\operatorname{Irr}}(\mathsf{S}_n)$ be lying above $\chi$; in particular $\phi(1) = rb$ with $r = 1$ or $2$. 
By Theorem \ref{sym}, there is some $\psi \in {\operatorname{Irr}}(\mathsf{S}_n)$ such that $\psi(1)/\phi(1) \geq \delta$ and $\psi(1)/\phi(1) \notin S$. Now let $\rho \in {\operatorname{Irr}}(\mathsf{A}_n)$ be lying under $\psi$; in particular, $\rho(1) = \psi(1)/s$ with $s = 1$ or $2$. Then $\rho(1)/\chi(1) = (\psi(1)/\phi(1)) \cdot (r/s)$, and so $\rho(1)/\chi(1) \geq \delta/2$ and $\rho(1)/\chi(1) \neq 1$. It follows that $\varepsilon(\mathsf{A}_n) \geq \delta^2/4$ for $n\ge N$. \epsilonnd{proof} \section{Comparing Unipotent Character Degrees of Simple Groups of Lie Type} Each finite simple group $S$ of Lie type, say in characteristic $p$, has the \epsilonmph{Steinberg character} ${\operatorname{St}}$, which is irreducible of degree $|S|_p$. We refer the reader to \cite{C} and \cite{DM} for this, as well as basic facts on Deligne-Lusztig theory. The main aim of this section is the proof of the following comparison result: \begin{thm} \lambdabel{thm:steinberg} Let ${\mathcal{G}}$ be a simple algebraic group in characteristic $p$, $F : {\mathcal{G}} \to {\mathcal{G}}$ a Frobenius map, and $G = {\mathcal{G}}^F$ be the corresponding finite group of Lie type. Then the degree of the Steinberg character of $G$ is strictly larger than the degree of any other unipotent character. \epsilonnd{thm} By the results of Lusztig, unipotent characters of isogenous groups have the same degrees, so it is immaterial here whether we speak of groups of adjoint or of simply connected type; moreover, all unipotent characters have the center in their kernel, so they can all be considered as characters of the corresponding simple group. It is easily checked from the formulas in \cite[\S13]{C} and the data in \cite{Lu} that Theorem~\ref{thm:steinberg} does in fact hold for exceptional groups of Lie type. The six series of classical groups are handled in Corollaries~\ref{cor:StGL} and~\ref{cor:StGU} and Proposition~\ref{prop:Stclass} after some combinatorial preparations. On the way we derive some further interesting relations between unipotent character degrees. \subsection{Type ${\operatorname{GL}}_n$} For $q>1$ and ${\underline c}=(c_1<\ldots<c_s)$ a strictly increasing sequence we set $$[{\underline c}]:=\prod_{i=1}^s(q^{c_i}-1)$$ and $\underline c+m:=(c_1+m<\ldots<c_s+m)$ for an integer $m$. \begin{lem} \lambdabel{lem:ineq} Let $q\ge2$, $s\ge1$. \begin{enumerate} \item[\rm(i)] $\frac{q^a-1}{q^{a-1}-1}\le\frac{q^b-1}{q^{b-1}-1}$ if and only if $a\ge b$. \item[\rm(ii)] Let ${\underline c}=(c_1<\ldots<c_s)$ be a strictly increasing sequence of integers, with $c_1\ge2$. Then: $$q^s< \frac{[{\underline c}]}{[{\underline c}-1]}<q^{s+1}.$$ \epsilonnd{enumerate} \epsilonnd{lem} \begin{proof} The first part is obvious, and then the second follows by a $2s$-fold application of~(i) since $$q^s< \frac{q^{c_s}-1}{q^{c_s-s}-1} =\prod_{i=1}^s\frac{q^{c_s-s+i}-1}{q^{c_s-s+i-1}-1} \le\prod_{i=1}^s\frac{q^{c_i}-1}{q^{c_i-1}-1} \le \prod_{i=1}^s\frac{q^{i+1}-1}{q^i-1}=\frac{q^{s+1}-1}{q-1}<q^{s+1}. $$ \epsilonnd{proof} We denote by $\chi_\lambda$ the unipotent character of ${\operatorname{GL}}_n(q)$ parametrized by the partition $\lambda$ of $n$. Its degree is given by the quantized hook formula $$\chi_\lambda(1) =q^{a(\lambda)}\frac{(q-1)\cdots(q^n-1)}{\prod_{h}(q^{l(h)}-1)},$$ where $h$ runs over the hooks of $\lambda=(a_1\ge\ldots\ge a_r)$, and $a(\lambda)=\sum_{i=1}^r(i-1)a_i$ (see for example \cite[(21)]{Ol} or \cite{Ma1}). 
\begin{prop} \lambdabel{prop:compGL} Let $\lambda=(a_1\ge\ldots\ge a_{r-1}>0)\vdash n-1$ be a partition of $n-1$ and $\mu,\nu$ the partitions of $n$ obtained by adding a node at $(r,1)$, $(i,j)$ respectively, where $i<r$ and $a_i=j-1$. Then for all $q\ge2$ the corresponding unipotent character degrees of ${\operatorname{GL}}_n(q)$ satisfy $$q^{-j-1}\chi_{\mu}(1)<\chi_{\nu}(1)<q^{2-j}\chi_{\mu}(1)\le\chi_{\mu}(1).$$ \epsilonnd{prop} \begin{proof} According to the hook formula, we have to consider the hooks in $\mu,\nu$ of different lengths. These lie in the $1$st column, the $i$th rows and in the $j$th column. Let ${\underline h}=(1<h_2<\ldots<h_r)$ denote the hook lengths in the $1$st column, ${\underline k}=(k_1<\ldots<k_{j-1})$ the hook lengths in the $i$th row and ${\underline l}=(l_1<\ldots<l_{i-1})$ the hook lengths in the $j$th column of $\mu$. Write ${\underline h}'=(h_2<\ldots<h_r)$ and ${\underline k}'=(0<k_1<\ldots<k_{j-1})$. Then a threefold application of Lemma~\ref{lem:ineq}(ii) shows that $$\begin{aligned} \chi_{\nu}(1) &=q^{a(\nu)-a(\mu)}\frac{[{\underline h}]}{[{\underline h}'-1]} \frac{[{\underline k}]}{[{\underline k}'+1]}\frac{[{\underline l}]}{[{\underline l}+1]}\chi_{\mu}(1)\\ &< q^{-r+i}q^rq^{1-j}q^{1-i}\chi_{\mu}(1) =q^{2-j}\chi_{\mu}(1) \le \chi_{\mu}(1), \epsilonnd{aligned}$$ since $j\ge2$. The other inequality is then also immediate. \epsilonnd{proof} Note that $\nu$ is the partition obtained from $\mu$ by moving one node from the last row (which contains a single node) to some row higher up. Since clearly any partition of $n$ can be reached by a finite number of such operations from $(1)^n$, we conclude: \begin{cor} \lambdabel{cor:StGL} Any unipotent character of ${\operatorname{GL}}_n(q)$ other than the Steinberg character ${\operatorname{St}}$ has smaller degree than ${\operatorname{St}}$. \epsilonnd{cor} A better result can be obtained when $q\ge3$, since then the upper bound in Lemma~\ref{lem:ineq}(ii) can be improved to $q^{s+1/2}$. In that case, `moving up' any node in a partition leads to a smaller unipotent degree: \begin{prop} \lambdabel{prop:dominance} Let $q\ge3$, and $\nu\ne\mu$ two partitions of $n$ with $\nu\rhd\mu$ in the dominance order. Then the corresponding unipotent character degrees of ${\operatorname{GL}}_n(q)$ satisfy $\chi_{\nu}(1)<\chi_{\mu}(1)$. \epsilonnd{prop} \begin{proof} In our situation, $\nu$ can be reached from $\mu$ by a sequence of steps of moving up a node in a partition. Consider one such step, where the node at position $(r,s)$ is moved to position $(i,j)$, with $j>s$. A similar estimate as in the proof of Proposition~\ref{prop:compGL}, but with the improved upper bound from Lemma~\ref{lem:ineq}, leads to the result. \epsilonnd{proof} \begin{exmp} The previous result fails for $q=2$; the smallest counterexample occurs for $n=6$, $\mu=(2)^3\lhd\nu=(3)(2)(1)$, where $\chi_\mu(1)=5952<\chi_\nu(1)=6480$. \epsilonnd{exmp} \subsection{Type ${\operatorname{GU}}_n$} The analogue of Proposition~\ref{prop:compGL} is no longer true for the unipotent characters of unitary groups, in general. Still, we can obtain a characterization of the Steinberg character by comparing with character degrees in ${\operatorname{GL}}_n(q)$. \begin{prop} Any partition $\lambda$ of $n$ has $r=\lceil n/2\rceil$ distinct hooks $h_1,\ldots,h_r$ of odd lengths $l(h_i)\le 2i-1$, $1\le i\le r$. \epsilonnd{prop} \begin{proof} We proceed by induction on $n$. The result is clear for 2-cores, i.e., triangular partitions. 
Now let $\lambda=(a_1\le\ldots\le a_r)$ be a partition of~$n$ which is not a 2-core, with corresponding $\beta$-set $B=\{a_1,a_2+1,\ldots,a_r+r-1\}$. The hook lengths of $\lambda$ are just the differences $j-i$ with $j\in B$, $i\notin B$, $i<j$ (see \cite[Lemma~2]{Ol}). Since $\lambda$ is not a 2-core, there exists $j\in B$ with $j-2\notin B$. Let $B'=\{j-2\}\cup B\setminus\{j\}$, the $\beta$-set of a partition $\mu$ of $n-2$. We now compare hook lengths in $B'$ and in $B$: hooks in $B'$ from $k>j$, $k\in B'$, to $j$ become hooks from $k$ to $j-2$ in $B$, and hooks from $j-2$ to $k\notin B'$, $k<j-2$, become hooks from $j$ to $k$ in $B$. In both cases, the length has increased by~2. But we have one further new hook in $B$: either from $j$ to $j-1$ (if $j-1\notin B'$), or from $j-1$ to $j-2$ (if $j-1\in B'$), of length~1. So indeed, in both cases we've produced hooks of the required odd lengths in $\lambda$. \epsilonnd{proof} \begin{prop} \lambdabel{prop:GLvsGU} Let $\lambda$ be a partition of $n$. Then the degree of the unipotent character of ${\operatorname{GL}}_n(q)$ indexed by $\lambda$ is at least as big as the corresponding one of ${\operatorname{GU}}_n(q)$. \epsilonnd{prop} \begin{proof} It's well-known that the degree of the unipotent character of ${\operatorname{GU}}_n(q)$ indexed by $\lambda$ is obtained from the one for ${\operatorname{GL}}_n(q)$ by formally replacing $q$ by$-q$ in the hook formula above and adjusting the sign. Now let $h_1,\ldots,h_r$ denote the sequence of hooks of odd length from the previous result. Observe that the numerators in the hook formula for ${\operatorname{GL}}_n(q)$ and ${\operatorname{GU}}_n(q)$ differ by the factor $\prod_{i=1}^r (q^{2i-1}+1)/(q^{2i-1}-1)$. Since $(q^a+1)/(q^b+1)<(q^a-1)/(q^b-1)$ when $b<a$, the claim now follows from the hook formula. \epsilonnd{proof} It seems that the only case with equality, apart from the trivial cases $1$ and ${\operatorname{St}}$, occurs for the partition $(2)^2$ of~$4$. Since the degree of the Steinberg character of ${\operatorname{GL}}_n(q)$ and ${\operatorname{GU}}_n(q)$ is the same, the following is immediate from Corollary~\ref{cor:StGL} and Proposition~\ref{prop:GLvsGU}: \begin{cor} \lambdabel{cor:StGU} Any unipotent character of ${\operatorname{GU}}_n(q)$ other than the Steinberg character ${\operatorname{St}}$ has smaller degree than ${\operatorname{St}}$. \epsilonnd{cor} \subsection{Other classical types} The unipotent characters of the remaining classical groups $G = G(q)$ (i.e. symplectic and orthogonal groups) are labelled by \epsilonmph{symbols}, whose definition and basic combinatorics we now recall (we refer to \cite{Ma1} for the version of the hook formula given here). A \epsilonmph{symbol} $S=(X,Y)$ is a pair of strictly increasing sequences $X=(x_1<\ldots<x_r)$, $Y=(y_1<\ldots<y_s)$ of non-negative integers. The \epsilonmph{rank} of $S$ is then $${\operatorname{rk}}(S)=\sum_{i=1}^r x_i+\sum_{j=1}^s y_j -\left\lfloor\left(\frac{r+s-1}{2}\right)^2\right\rfloor.$$ The symbol $S'=(\{0\}\cup (X+1),\{0\}\cup (Y+1))$ is said to be \epsilonmph{equivalent} to $S$, and so is the symbol $(Y,X)$. The rank is constant on equivalence classes. The \epsilonmph{defect} of $S$ is $d(S)=||X|-|Y||$, which clearly is also invariant under equivalence. 
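To fix ideas with a small example: the symbol $S=((0,2),(1))$ has $r=2$, $s=1$, so $${\operatorname{rk}}(S)=(0+2)+1-\left\lfloor\left(\frac{2+1-1}{2}\right)^2\right\rfloor=2, \qquad d(S)=|2-1|=1,$$ and the equivalent symbol $((0,1,3),(0,2))$ has the same rank $(0+1+3)+(0+2)-4=2$ and the same defect $|3-2|=1$, illustrating the invariance just stated. 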
\par Lusztig has shown that the unipotent characters of classical groups of rank~$n$ are naturally parametrized by equivalence classes of symbols of rank~$n$, with those of odd defect parametrizing characters in type $B_n$ and $C_n$, those of defect $\epsilonquiv0\pmod4$ characters in type $D_n$, and those of defect $\epsilonquiv2\pmod4$ characters in type $\tw2D_n$. (Here, each so-called degenerate symbol, where $X=Y$, parametrizes two unipotent characters in type $D_n$.) \par The degrees of unipotent characters are most conveniently given by an analogue of the hook formula for ${\operatorname{GL}}_n(q)$, as follows. A \epsilonmph{hook of $S$} is a pair $(b,c)\in{\mathbb{N}}_0^2$ with $b<c$ and either $b\notin X$, $c\in X$, or $b\notin Y$, $c\in Y$. Thus, a hook of $S$ is nothing else but a hook (as considered in Section~\ref{sec:sym} and for type $A$ above) of the permutation with associated $\beta$-set either $X$ or $Y$. A \epsilonmph{cohook of $S$} is a pair $(b,c)\in{\mathbb{N}}_0^2$ with $b\le c$ and either $b\notin Y$, $c\in X$, or $b\notin X$, $c\in Y$. (Note that the possibility $b=c$ for cohooks was excluded in \cite{Ol} which led to a less smooth hook formula than in \cite{Ma1}.) We also set $$a(S):=\sum_{\{b,c\}\subseteq S}\min\{b,c\}-\sum_{i\ge1}\binom{r+s-2i}{2},$$ where the sum runs over all 2-element subsets of the multiset $X\cup Y$ of entries of $S$. The degree of the unipotent character $\chi_S$ of a finite classical group $G=G(q)$ parametrized by $S$ is then given as $$\chi_S(1)=q^{a(S)}\frac{|G|_{q'}}{\prod_{(b,c)\text{ hook}}(q^{c-b}-1) \prod_{(b,c)\text{ cohook}}(q^{c-b}+1)},$$ where the products run over hooks, respectively cohooks of $S$ (see \cite[Bem.~3.12 and~6.8]{Ma1}). It can be checked that this is constant on equivalence classes. It is also clear from this that the unipotent characters in types $B_n$ and $C_n$ have the same degrees. \begin{lem} \lambdabel{lem:ineq2} Let $q\ge2$, $s\ge1$. \begin{enumerate} \item[\rm(i)] $\frac{q^a+1}{q^{a-1}+1}\le\frac{q^b+1}{q^{b-1}+1}$ if and only if $a\le b$. \item[\rm(ii)] Let $(c_1<\ldots<c_s)$ be a strictly increasing sequence of integers, with $c_1>0$. Then: $$q^{s-1}\le q^s/2< \prod_{i=1}^s \frac{q^{c_i}+1}{q^{c_i-1}+1}<q^s.$$ \epsilonnd{enumerate} \epsilonnd{lem} The proof is immediate and entirely similar to the one of Lemma~\ref{lem:ineq}. \begin{prop} \lambdabel{prop:Stclass} Let $S$ be a symbol of rank~$n$, parametrizing a unipotent character of $G=G(q)$ of rank~$n$. Then $\chi_S(1)\le{\operatorname{St}}(1)$, where ${\operatorname{St}}$ denotes the Steinberg character of $G$, with equality only if $\chi_S={\operatorname{St}}$. \epsilonnd{prop} \begin{proof} We'll describe an algorithm changing the entries of a given symbol, preserving its rank and the parity of its defect, but increasing the corresponding character degree, which eventually leads to the symbol parametrizing the Steinberg character. Note that the Steinberg characters of $D_n(q)$ and $\tw2D_n(q)$ have the same degree, so that we may switch freely between symbols of any even defect. \par First assume that $S=((0,1,\ldots,x),())$ is a so-called cuspidal symbol. Then the symbol $S'=((0,\ldots,x-1),(x))$ has same rank, same $a$-value and same parity of defect, but larger degree unless $x=0$, in which case $S$ parametrizes the trivial character of $B_0$. So from now on we may suppose that $S$ contains at least one 'hole', that's to say, not both sequences $X,Y$ of $S$ are complete intervals starting at~0. 
First replace $S$ by an equivalent symbols such that $0\in X\cap Y$ and $1\notin X\cap Y$. We may and will then assume that $1\notin X$. Now let $b$ be maximal such that $S$ has a hook $(b,b+1)$. (Such an $b$ exists since $S$ is not cuspidal.) Let $x=|\{i\in X\mid i\le b\}|$, $y=|\{i\in Y\mid i\le b\}|$ and $m=x+y$ (the number of entries in $S$ below $b+1$). By the definition of $b$, there are no holes in $S$ above $b$. \par First assume that $m\le 2b-2$ and let $S'$ be the symbol obtained from $S$ as follows: first replace $0$ by $1$ in $X$, then replace $b+1$ by $b$ in $X$ if $(b,b+1)$ was a hook of $X$, respectively in $Y$ if it was a hook of $Y$. Clearly the new symbol has the same rank and the same defect. We have $a(S')=a(S)+m-1$, and the quotient of the contributions by the hooks and cohooks can be seen to be larger than $q^{2b-2m-1}$ by Lemmas~\ref{lem:ineq} and~\ref{lem:ineq2}, so that $\chi_{S'}(1)>q^{2b-m-2}\chi_S(1)\ge \chi_S(1)$ by our assumption on~$m$. Thus any unipotent character degree is smaller than one for a symbol $S$ with $m\ge 2b-1$, so there are at most three 'holes' in the entries of $S$. But in fact, if $m=2b-1$ the above process leads to the better inequality $\chi_{S'}(1)>q^{2b-m-1}\chi_S(1)\ge \chi_S(1)$, whence $S$ has at most two holes. \par Now note that if $S=(X,Y)$, with $x=\max X>\max Y+1$ and $x-1\in X$, then $S'=(X\setminus\{x\},Y\cup\{x\})$ has at least as many holes as $S$ and larger associated character degree, since $\prod_{i=2}^k(q^i-1)/(q^i+1)>(q-1)/(q+1)$, so we may assume that $\max X$, $\max Y$ differ by at most~1. \par If $S$ has just one hole, up to equivalence it is of the form $S=((1,\ldots,x),(0,\ldots,y))$ with $y\in\{x+1,x,x-1\}$. But such a symbol parametrizes the Steinberg character of $G$ of type $D_n$ when $y=x-1$, of $B_n$ (and $C_n$) when $y=x$, respectively of $\tw2D_n$ when $y=x+1$. \par Next assume that $S$ has two holes. If there is $b\le\max(X\cup Y)$ such that $b\notin X\cup Y$, then we may pass to an equivalent symbol $S=((1,\ldots,x),(1,\ldots,y))$, where without loss $y\in\{x,x-1\}$ and so $n=x+y$. If $n$ is odd, so $x=(n+1)/2$, $y=(n-1)/2$, we have $$\chi_S(1)=q^{a(S)}\prod_{i=1}^y\frac{q^{2(x+i)}-1}{q^{2i}-1} \le q^{\binom{n}{2}}\prod_{i=1}^y q^{2x+1}=q^{n^2-1}<q^{n^2}={\operatorname{St}}(1),$$ while for $n$ even, $x=y=n/2$, we find $$\chi_S(1)=\frac{q^{a(S)}}{q^n+1} \prod_{i=1}^y\frac{q^{2(x+i)}-1}{q^{2i}-1} \le \frac{q^{\binom{n}{2}}}{q^n+1}\prod_{i=1}^y q^{2x+1} =\frac{q^{n^2}}{q^n+1}<q^{n^2-n}={\operatorname{St}}(1).$$ \par So finally, $S$ has two holes with different values, in which case there is an equivalent symbol $S$ obtained from $S_0:=((0,\ldots,x),(1,\ldots,y))$ by removing at most one entry, different from~0, and $x\in\{y,y\pm1\}$. If $x>y$ and $S,S_0$ differ in the second row, then the symbol obtained from $S$ by moving $x$ from the first to the second row has larger degree. If $x=y$ and the two holes lie in different rows, moving one of them to the other row generates a larger degree. In the last two remaining cases (with $x>y$ and $|X|\le|Y|$, respectively $x=y$ and $|X|=|Y|-2$) the above procedure of reducing the smallest and increasing the largest hole leads to a symbol with larger degree. \epsilonnd{proof} This completes the proof of Theorem~\ref{thm:steinberg}. 
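The symbol combinatorics entering the preceding proof are likewise easy to experiment with by machine. The following Python sketch (ad hoc code, only a sanity check and not part of the argument) verifies that the rank and the defect of a symbol, as defined above, are indeed constant on equivalence classes, for randomly generated pairs of strictly increasing sequences.
\begin{verbatim}
# Sanity check (not part of the proofs): rank and defect of a symbol S=(X,Y)
# are invariant under the swap and shift equivalences defined above.
import random

def rank(X, Y):
    t = len(X) + len(Y)
    return sum(X) + sum(Y) - ((t - 1) ** 2) // 4   # floor(((r+s-1)/2)^2)

def defect(X, Y):
    return abs(len(X) - len(Y))

def shift(X, Y):
    return [0] + [x + 1 for x in X], [0] + [y + 1 for y in Y]

random.seed(1)
for _ in range(1000):
    X = sorted(random.sample(range(12), random.randint(1, 6)))
    Y = sorted(random.sample(range(12), random.randint(0, 6)))
    r, d = rank(X, Y), defect(X, Y)
    assert (rank(Y, X), defect(Y, X)) == (r, d)      # (Y,X) ~ (X,Y)
    Xs, Ys = shift(X, Y)
    assert (rank(Xs, Ys), defect(Xs, Ys)) == (r, d)  # shifted symbol ~ (X,Y)
\end{verbatim}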
\section{Theorem \ref{main1} for Simple Groups of Lie Type} Notice that to prove Theorem \ref{main1} we can ignore any finite number of non-abelian simple groups, in particular the $26$ sporadic groups (of course one can find out the exact value of $\varepsilon(S)$ for each of them; in particular, one can check using \cite{Atlas} that $\varepsilon(S)>1$ for all the sporadic groups). Thus, in view of Corollary~\ref{alt}, it remains to prove Theorem~\ref{main1} for simple groups of Lie type. We begin with some estimates: \begin{lem} \label{sum} Let $q \geq 2$. Then the following inequalities hold. \begin{enumerate} \item[\rm (i)] $\prod^{\infty}_{i=1}(1-1/q^i) > 1-1/q -1/q^2 +1/q^5 \geq \exp(-\alpha/q)$, where $\alpha = 2\ln(32/9) \approx 2.537$. \item[\rm (ii)] $\prod^{\infty}_{i=2}(1-1/q^i) > 9/16$. \item[\rm (iii)] $\prod^{\infty}_{i=k}(1+1/q^i)$ is smaller than $2.4$ if $k = 1$, $1.6$ if $k = 2$, $1.28$ if $k = 3$, and $16/15$ if $k = 5$. \item[\rm (iv)] $1 < \prod^{n}_{i=1}(1-(-1/q)^{i}) \leq 3/2$. \end{enumerate} \end{lem} \begin{proof} (i) As mentioned in \cite{FG} (see the paragraph after Lemma 3.4 of \cite{FG}), a convenient way to prove these estimates is to use Euler's pentagonal number theorem \cite[p. 11]{A}: \begin{equation}\label{euler} \begin{array}{ll}\prod^{\infty}_{i=1}(1-\frac{1}{q^i}) & = 1 + \sum^{\infty}_{n=1}(-1)^n(q^{-n(3n-1)/2} + q^{-n(3n+1)/2})\\ & = 1 - q^{-1} - q^{-2} + q^{-5} + q^{-7} - q^{-12} - q^{-15} + \cdots \end{array} \end{equation} Since $q^{-m} \geq \sum^{\infty}_{i=m+1}q^{-i}$, finite partial sums of this series yield arbitrarily accurate upper and lower bounds for $\prod^{\infty}_{i=1}(1-q^{-i})$. In particular, truncating the series (\ref{euler}) at the term $q^{-5}$ yields the first inequality. Next, consider the function $$f(x) := 1-x-x^2+x^5 - \exp(-\alpha x)$$ for the chosen $\alpha$. The choice of $\alpha$ ensures that $f(1/2) = 0 = f(0)$, and $f''(x) < 0$ for all $x\in [0,1/2]$. It follows that $f(x) \geq 0$ on $[0,1/2]$, yielding the second inequality. (ii) Clearly, $$\prod^{\infty}_{i=2}(1-\frac{1}{q^{i}}) \geq \prod^{\infty}_{i=2}(1-\frac{1}{2^{i}}) = 2\prod^{\infty}_{i=1}(1-\frac{1}{2^{i}}) > 2(1-\frac{1}{2}-\frac{1}{4}+\frac{1}{32}) = \frac{9}{16}.$$ (iii) Applying (\ref{euler}) with $q = 4$ and truncating the series at the term $q^{-7}$ we get $\prod^{\infty}_{i=1}(1-4^{-i}) < 0.6886$. Applying (\ref{euler}) with $q = 2$ and truncating the series at the term $q^{-15}$ we get $\prod^{\infty}_{i=1}(1-2^{-i}) > 0.2887$. Now $$\prod^{\infty}_{i=1}(1+\frac{1}{q^{i}}) \leq \prod^{\infty}_{i=1}(1+\frac{1}{2^{i}}) = \frac{\prod^{\infty}_{i=1}(1-\frac{1}{4^{i}})} {\prod^{\infty}_{i=1}(1-\frac{1}{2^{i}})} < \frac{0.6886}{0.2887} < 2.386.$$ The other bounds can be obtained by using this bound and noting that $$\prod^{\infty}_{i=k}(1+\frac{1}{q^{i}}) \leq \prod^{\infty}_{i=k}(1+\frac{1}{2^{i}}) = \frac{\prod^{\infty}_{i=1}(1+\frac{1}{2^{i}})} {\prod^{k-1}_{i=1}(1+\frac{1}{2^{i}})}.$$ (iv) follows from the estimates $$(1-\frac{1}{q^{2k}})(1+\frac{1}{q^{2k+1}}) < 1 < (1+\frac{1}{q^{2k-1}})(1-\frac{1}{q^{2k}})$$ for any $k \geq 1$. \end{proof} First we prove Theorem~\ref{main1} for the exceptional groups of Lie type: \begin{prop} \label{exc} There is a constant $\varepsilon > 0$ such that $\varepsilon(S) \geq \varepsilon$ for all simple exceptional groups $S$ of Lie type. 
\epsilonnd{prop} \begin{proof} It suffices to show that there is some constant $\delta > 0$ such that every simple exceptional group $S$ of Lie type, in characteristic say $p$, has an irreducible character $\rho$ such that $\delta b(S) \leq \rho(1) < b(S)$. In this proof, we will use the notation $^aX_r(q^a)$ to indicate the type of $S$, where $a = 1$ for untwisted groups and $a = 2,3$ for twisted groups (so $q$ can be irrational for the Suzuki and the Ree groups, and $r \leq 8$ is the rank of the underlying algebraic group). Let $H$ be the group of adjoint type corresponding to $S$; in particular, $S = [H,H]$, and $|H/S| \leq 3$. The list of character degrees of $H$ is given on Frank L\"ubeck's website \cite{Lu}. A detailed inspection of this list would yield an explicit $\delta$ (and hence $\varepsilon$), but we will give a short, implicit proof. One can see that there is a finite list of monic polynomials $f_i(q)$, $1 \leq i \leq n_0$ (which are divisors of cyclotomic polynomials ${\mathbb{P}}hi_m(q^a)$ with $m \leq 30$) such that every character degree $f(q)$ of $H$ is $c(f)$ times a product of some powers of these $f_i(q)$, where $c(f)$ is the leading coefficient for $f$. It follows that there is an absolute constant $\beta > 0$ such that $f(q) \leq \beta q^{\deg(f)}$ for all such $f$ and all $q$. If $|H|_p = q^N$, then $\deg(f) \leq N$ (as one can easily check). Hence we have shown that $b(H) \leq \beta q^N$. First assume that $b(S)$ is attained at some character $\chi$ of $S$ of degree $\neq q^N = {\operatorname{St}}(1)$. Then we can set $\rho := {\operatorname{St}}$ and observe that $$1 > \rho(1)/\chi(1) = q^N/b(S) \geq q^N/b(H) \geq 1/\beta.$$ Now assume that $b(S) = q^N$. Then the finite group $L$ of simply connected type corresponding to $S$ contains a regular semisimple element $t$. Consider the semisimple character $\chi_t$ of $H$ labeled by the conjugacy class of $t$ (recall that $H$ is adjoint, so the centralizer of $t$ in the underlying algebraic group is connected). Then $C_L(t)$ is a maximal torus of $L$, and so $|C_L(t)| \leq (q+1)^r$. Since $\chi_t(1) = |H|_{p'}/|C_L(t)|$ and $r \leq 8$, we see that there is a universal constant $\gamma > 0$ such that $\chi_t(1) > \gamma q^N$ for all $q$. Now let $\rho \in {\operatorname{Irr}}(S)$ lie below $\chi_t$. Since $\chi_t(1)$ is coprime to $p$, $\rho(1) \neq q^N = b(S)$. Finally, $\rho(1) \geq \chi_t(1)/|H/S| \geq \chi_t(1)/3$, whence $\rho(1)/b(S) \geq \gamma/3$. \epsilonnd{proof} The same proof as above establishes Theorem \ref{main1} for simple classical groups of \epsilonmph{bounded} rank. To handle simple classical groups of arbitrary rank, we will need the following result of G. M. Seitz \cite[Theorem 2.1]{S}: \begin{thm}[Seitz] \lambdabel{bound1} Let ${\mathcal{G}}$ be a simple, simply connected algebraic group over the algebraic closure of a finite field of characteristic~$p$, $F~:~{\mathcal{G}} \to {\mathcal{G}}$ a Frobenius endomorphism of ${\mathcal{G}}$, and let $L := {\mathcal{G}}^F$. Then $b(L) \leq |L|_{p'}/|T_0|$, where $T_0$ is a maximal torus of $L$ of minimal order. For $q$ sufficiently large, this is in fact an equality (namely, whenever that torus contains at least one regular element). \epsilonnd{thm} In what follows, we will view our simple classical group $S$ as $L/Z(L)$, where $L = {\mathcal{G}}^F$ as in Theorem \ref{bound1}. We also consider the pair $({\mathcal{G}}D,{\mathbb{F}}D)$ dual to $({\mathcal{G}},F)$ and the group $H := ({\mathcal{G}}D)^{{\mathbb{F}}D}$ dual to $L$. 
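The estimates of Lemma~\ref{sum} are used repeatedly in the arguments below; they are easy to confirm numerically. The following Python sketch (with our own truncation of the infinite products, and a small tolerance where the bound in (i) is attained with equality at $q=2$) is only a sanity check, not a proof.
\begin{verbatim}
# Numerical check of Lemma sum (ad hoc truncations, not a proof).
import math

def prod(q, start=1, sign=-1.0, terms=200):
    """Approximate prod_{i >= start} (1 + sign*q^(-i))."""
    p = 1.0
    for i in range(start, start + terms):
        p *= 1.0 + sign * q ** (-i)
    return p

alpha = 2 * math.log(32 / 9)                       # ~ 2.537
for q in range(2, 10):
    lhs = 1 - 1 / q - 1 / q ** 2 + 1 / q ** 5
    assert prod(q) > lhs                           # (i), first inequality
    assert lhs >= math.exp(-alpha / q) - 1e-12     # (i), equality at q = 2
    assert prod(q, start=2) > 9 / 16               # (ii)

# (iii): each factor decreases as q grows, so q = 2 is the worst case
for k, bound in [(1, 2.4), (2, 1.6), (3, 1.28), (5, 16 / 15)]:
    assert prod(2, start=k, sign=+1.0) < bound

# (iv): every partial product lies in (1, 3/2]
for q in range(2, 10):
    p = 1.0
    for i in range(1, 60):
        p *= 1 - (-1.0 / q) ** i
        assert 1 < p <= 1.5
\end{verbatim}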
\begin{thm} \lambdabel{clas} Let $S$ be a finite simple classical group. Suppose that $S$ is not isomorphic to any of the following groups: $$\left\{\begin{array}{l} {\operatorname{SL}}_n(2),~{\operatorname{Sp}}_{2n}(2),~\Omega^{\pm}_{2n}(2),\\ {\operatorname{PSL}}_n(3) \mbox{ with }5 \leq n \leq 14,~ {\operatorname{PSU}}_n(2) \mbox{ with }7 \leq n \leq 14,\\ {\operatorname{PSp}}_{2n}(3) \mbox{ or } \Omega_{2n+1}(3) \mbox{ with }4 \leq n \leq 17, ~{\operatorname{P\Omega}}^{\pm}_{2n}(3) \mbox{ with }4 \leq n \leq 30,\\ {\operatorname{P\Omega}}^{\pm}_{8}(7), ~{\operatorname{P\Omega}}^{\pm}_{2n}(5) \mbox{ with }4 \leq n \leq 6. \epsilonnd{array}\right.$$ Then $\varepsilon(S) > 1$. \epsilonnd{thm} \begin{proof} 1) First we consider the case $S = {\operatorname{PSL}}_n(q)$ with $q \geq 3$. Then ${\mathcal{G}} = {\operatorname{SL}}_n(\overline{\F}_q)$, ${\mathcal{G}}D = {\operatorname{PGL}}_n(\overline{\F}_q)$, $L = {\operatorname{SL}}_n(q)$, $H = {\operatorname{PGL}}_n(q)$, and the maximal tori of minimal order in Theorem \ref{bound1} are the maximally split ones, of order $(q-1)^{n-1}$. Hence $$b(S) \leq b(L) \leq B/(q-1)^{n-1}, \mbox{ where } B:= |L|_{p'} = (q^2-1)(q^3-1) \cdots (q^n-1).$$ Now we consider a maximal torus $T$ of order $q^{n-1}-1$ in $H$, with full inverse image $\hat{T} = C_{q^{n-1}-1} \times C_{q-1}$ in ${\operatorname{GL}}_n(q)$. We will show that, in the generic case, the regular semisimple elements in $T$ will produce enough irreducible characters of $S$, all of degree less than $b(S)$, and with the sum of squares of their degrees exceeding $b(S)^2$. Assume $n \geq 4$. A typical element $\hat{s}$ of $\hat{T} \cap L$ is ${\operatorname{GL}}_n(\overline{\F}_q)$-conjugate to $${\operatorname{diag}}\left(\alpha,\alpha^q, \ldots ,\alpha^{q^{n-2}},\alpha^{(1-q^{n-1})/(q-1)}\right),$$ where $\alpha \in {\mathbb{F}}_{q^{n-1}}^{\times}$. Let $$X := \left(\cup^{n-2}_{i=1}{\mathbb{F}}_{q^i}^{\times} \cup \{x \in \overline{\F}_q^{\times} \mid x^{n(q-1)} = 1 \}\right) \cap {\mathbb{F}}_{q^{n-1}}^{\times}.$$ Also set $m := \lfloor (n-1)/2 \rfloor$. Then for $n \geq 6$ we have $n-m \geq 4$, and so $$|X| < \sum^{m}_{i=0}q^i +n(q-1) \leq \frac{q^{m+1}-1}{2} + n(q-1) \leq \frac{q^{n-3}-1}{2} + n(q-1) < \frac{q^{n-1}-1}{2}$$ since $q \geq 3$. Direct calculations show that $|X| < (q^{n-1}-1)/2$ also for $n = 4,5$. Thus there are at least $(q^{n-1}-1)/2$ elements $\alpha$ in ${\mathbb{F}}_{q^{n-1}}^{\times}$ that do not belong to $X$. Consider $\hat{s}$ for any such $\alpha$. Then all the $n$ eigenvalues of $\hat{s}$ are distinct, and exactly one of them (namely $\beta:=\alpha^{-(q^{n-1}-1)/(q-1)}$) belongs to ${\mathbb{F}}_q$. Suppose that $x \in {\operatorname{GL}}_n(\overline{\F}_q)$ centralizes $\hat{s}$ modulo $Z({\operatorname{GL}}_n(\overline{\F}_q))$: $x\hat{s} x^{-1} = \gamma \hat{s}$. Comparing the determinant, we see that $\gamma^n = 1$. Suppose that for some $i$ with $0 \leq i \leq n-2$, $\gamma\beta = \alpha^{q^i}$. Then $(\alpha^{q^i})^{n(q-1)} = (\gamma\beta)^{n(q-1)} = 1$, and so $\alpha \in X$, a contradiction. Hence $\gamma\beta = \beta$, i.e. $\gamma = 1$ and $x$ centralizes $\hat{s}$, and clearly $C_{{\operatorname{GL}}_n(\overline{\F}_q)}(\hat{s})$ is a maximal torus. So if $s \in T$ is the image of $\hat{s}$, then $C_{{\mathcal{G}}D}(s) = C_{{\operatorname{GL}}_n(\overline{\F}_q)}(\hat{s})/Z({\operatorname{GL}}_n(\overline{\F}_q))$ is connected and a maximal torus of ${\mathcal{G}}D$; in particular, $s$ is regular. Also, $s \in {\operatorname{PSL}}_n(q) = [H,H]$. 
Hence each such $s$ defines an irreducible character $\chi_s$ of $L$, of degree $B/|T|$, which is trivial at $Z(L)$. So we can view $\chi_s$ as an irreducible character of $S$. Each such $s$ has at most $q-1$ inverse images $\hat{s} \in \hat{T} \cap L$. Moreover, since $|N_{H}(T)/T| = n-1$, the $H$-conjugacy class of $s$ intersects $T$ at $n-1$ elements. We have therefore produced at least $(q^{n-1}-1)/2(q-1)(n-1)$ irreducible characters $\chi_s$ of $S$, each of degree $$\chi_s(1) = B/|T| = (q^2-1)(q^{3}-1) \ldots (q^{n-2}-1)(q^n-1).$$ Note that $\chi_s(1) < q^{2+3+ \ldots +(n-2)+n} = {\operatorname{St}}(1) \leq b(S)$. Hence, to show that $\varepsilon(S) > 1$, it suffices to verify that $$\frac{q^{n-1}-1}{2(q-1)(n-1)} \cdot \left(\frac{B}{q^{n-1}-1}\right)^2 > \left(\frac{B}{(q-1)^{n-1}}\right)^2,$$ equivalently, $(q-1)^{2n-3} > 2(n-1)(q^{n-1}-1)$. The latter inequality holds if $q = 3$ and $n \geq 15$, or if $q = 4$ and $n \geq 5$, or if $q \geq 5$ and $n \geq 4$. It is straightforward to check that $\varepsilon(S) > 1$ when $n = 2,3$ or $(n,q) = (4,4)$ (using \cite{GAP} for the last case). 2) Next let $S = {\operatorname{PSU}}_n(q)$ with $n \geq 3$. Then ${\mathcal{G}} = {\operatorname{SL}}_n(\overline{\F}_q)$, ${\mathcal{G}}D = {\operatorname{PGL}}_n(\overline{\F}_q)$, $L = {\operatorname{SU}}_n(q)$, $H = {\operatorname{PGU}}_n(q)$. The maximal tori of minimal order in Theorem \ref{bound1} have order at least $(q^2-1)^{n/2}/(q+1)$. Hence $$b(S) \leq b(L) \leq B(q+1)/(q^2-1)^{n/2}, \mbox{ where } B:= |L|_{p'} = \prod^{n}_{i=2}(q^i-(-1)^i).$$ Now we consider a maximal torus $T$ of order $q^{n-1}-(-1)^{n-1}$ in $H$, with full inverse image $\hat{T} = C_{q^{n-1}-(-1)^{n-1}} \times C_{q+1}$ in ${\operatorname{GU}}_n(q)$. We will follow the same approach as in the case of ${\operatorname{PSL}}_n(q)$. Assume that $n \geq 4$, and moreover $q \geq 3$ if $4 \leq n \leq 7$. A typical element $\hat{s}$ of $\hat{T} \cap L$ is ${\operatorname{GL}}_n(\overline{\F}_q)$-conjugate to $${\operatorname{diag}}\left(\alpha,\alpha^{-q}, \ldots ,\alpha^{(-q)^{n-2}},\alpha^{((-q)^{n-1}-1)/(q+1)}\right),$$ where $\alpha \in C_{q^{n-1}-(-1)^{n-1}} < \overline{\F}_{q}^{\times}$. Let $Y$ be the set of elements in $C_{q^{n-1}-(-1)^{n-1}}$ that belong to a cyclic subgroup $C_{q^k-(-1)^k}$ of $\overline{\F}_{q}^{\times}$ for some $1 \leq k < n-1$ or have order dividing $n(q+1)$. Assume $n \geq 9$ and set $m:= \lfloor (n-1)/2 \rfloor$. Then $n-m \geq 5$ and so $$\begin{array}{ll}|Y| & \leq \sum^{m}_{i=1}(q^i-(-1)^i)+n(q+1) \leq \sum^{m}_{i=0}q^i+n(q+1) \\ \\ & = \dfrac{q^{m+1}-1}{q-1}+n(q+1) \leq \dfrac{q^{n-3}-1}{2} + n(q+1) < \dfrac{q^{n-1}-1}{2}.\epsilonnd{array}$$ Direct calculations show that $|Y| < (q^{n-1}-1)/2$ also for $4 \leq n \leq 8$ (recall that we are assuming $q \geq 3$ when $4 \leq n \leq 7$). Thus there are at least $(q^{n-1}-(-1)^{n-1})/2$ elements of $C_{q^{n-1}-(-1)^{n-1}}$ that do not belong to $Y$. Consider $\hat{s}$ for any such $\alpha$. Then all the $n$ eigenvalues of $\hat{s}$ are distinct, and exactly one of them (namely $\alpha^{((-q)^{n-1}-1)/(q+1)}$) belongs to $C_{q+1} < \overline{\F}_q^{\times}$. Arguing as in the ${\operatorname{PSL}} $-case, we see that if $x \in {\operatorname{GL}}_n(\overline{\F}_q)$ centralizes $\hat{s}$ modulo $Z({\operatorname{GL}}_n(\overline{\F}_q))$, then $x$ actually centralizes $\hat{s}$, and $C_{{\operatorname{GL}}_n(\overline{\F}_q)}(\hat{s})$ is a maximal torus. 
So if $s \in T$ is the image of $\hat{s}$, then $C_{{\mathcal{G}}D}(s) = C_{{\operatorname{GL}}_n(\overline{\F}_q)}(\hat{s})/Z({\operatorname{GL}}_n(\overline{\F}_q))$ is connected and a maximal torus of ${\mathcal{G}}D$; in particular, $s$ is regular. Also, $s \in {\operatorname{PSU}}_n(q) = [H,H]$. Hence each such $s$ defines an irreducible character $\chi_s$ of $L$, of degree $B/|T|$, which is trivial at $Z(L)$. So we can view $\chi_s$ as an irreducible character of $S$. Each such $s$ has at most $q+1$ inverse images $\hat{s} \in \hat{T} \cap L$. Moreover, since $|N_{H}(T)/T| = n-1$, the $H$-conjugacy class of $s$ intersects $T$ at $n-1$ elements. We have therefore produced at least $(q^{n-1}-(-1)^{n-1})/2(q+1)(n-1)$ irreducible characters $\chi_s$ of $S$, each of degree $$\chi_s(1) = \frac{B}{|T|} = \frac{\prod^{n}_{i=2}(q^{i}-(-1)^i)}{q^{n-1}-(-1)^{n-1}}.$$ Note that $\chi_s(1) < q^{2+3+ \ldots +(n-2)+n} = {\operatorname{St}}(1) \leq b(S)$. Hence, to show that $\varepsilon(S) > 1$, it suffices to verify that $$\frac{q^{n-1}-(-1)^{n-1}}{2(q+1)(n-1)} \cdot \left(\frac{B}{q^{n-1}-(-1)^{n-1}} \right)^2 > \left(\frac{B(q+1)}{(q^2-1)^{n/2}}\right)^2,$$ equivalently, $(q^2-1)^n > 2(n-1)(q^{n-1}-(-1)^{n-1})(q+1)^3$. The latter inequality holds if $q = 2$ and $n \geq 15$, or if $q = 3$ and $n \geq 6$, or if $q \geq 4$ and $n \geq 4$. It is straightforward to check that $\varepsilon(S) > 1$ when $n = 3$ (and $q \geq 3$), or $(n,q) = (4,2)$, $(5,2)$, $(6,2)$, $(4,3)$, $(5,3)$ (using \cite{Lu} in the last case). 3) Here we consider the case $S = {\operatorname{PSp}}_{2n}(q)$ or $\Omega_{2n+1}(q)$ with $n \geq 2$ and $q \geq 3$ (and $q$ is odd in the $\Omega$-case). Then $L = {\operatorname{Sp}}_{2n}(q)$, resp. $L = {\operatorname{Sp}}in_{2n+1}(q)$. The maximal tori of minimal order in Theorem \ref{bound1} have order at least $(q-1)^n$. Hence $$b(S) \leq b(L) \leq B/(q-1)^n, \mbox{ where } B:= |L|_{p'} = \prod^{n}_{i=1}(q^{2i}-1).$$ To simplify the computation, we will view $S$ as a normal subgroup of index $\leq \kappa := \gcd(2,q-1)$ of the Lie-type group of adjoint type $K := {\operatorname{PCSp}}_{2n}(q)$, resp. $K := {\operatorname{SO}}_{2n+1}(q)$. Then any semisimple element in the dual group $K^* = {\operatorname{Sp}}in_{2n+1}(q)$, resp. ${\operatorname{Sp}}_{2n}(q)$, has connected centralizer (in the underlying algebraic group). Now we consider a maximal torus $T$ of order $q^n-1$ in $K^*$, and let $X$ be the set of elements in $T$ of order dividing $q^k \pm 1$ for some $k$ with $1 \leq k \leq n-1$. Setting $m := \lfloor n/2 \rfloor$ we have $$|X| \leq \sum^{m}_{i=1}((q^i+1)+(q^i-1)) < 2\frac{q^{m+1}-1}{q-1} \leq q^{n-1}-1 < \frac{q^n-1}{3},$$ if $n \geq 3$. One can also check by direct computation that $|X| \leq (q^n-1)/2$ if $n = 2$. Hence there are at least $(q^n-1)/2$ elements of $T$ that are regular semisimple. Each such $s$ defines an irreducible character $\chi_s$ of $K$ of degree $B/|T|$. Moreover, since $|N_{K^*}(T)/T| = 2n$, the $K^*$-conjugacy class of $s$ intersects $T$ at $2n$ elements. We have therefore produced at least $(q^n-1)/4n$ irreducible characters $\chi_s$ of $K$, each of degree $$\chi_s(1) = \frac{B}{|T|} = \frac{\prod^{n}_{i=1}(q^{2i}-1)}{q^n-1} < q^{n^2} = {\operatorname{St}}(1) \leq b(S).$$ First we consider the characters $\chi_s$ which split over $S$. They exist only when $|K/S| = \kappa = 2$. 
Then the irreducible constituents of their restrictions to $S$ are all distinct, and the sum of squares of the degrees of the irreducible components of each $(\chi_s)|_S$ is $\chi_s(1)^2/\kappa$. On the other hand, among the $\chi_s$ which are irreducible over $S$, at most $\kappa$ of them can restrict to the same (given) irreducible character of $S$. Hence, to show that $\varepsilon(S) > 1$, it suffices to verify that $$\frac{1}{\kappa} \cdot \frac{q^n-1}{4n} \cdot \left(\frac{B}{q^n-1}\right)^2 > \left(\frac{B}{(q-1)^n}\right)^2,$$ equivalently, $(q-1)^{2n} > 4\kappa n(q^n-1)$. The latter inequality holds if $q = 3$ and $n \geq 18$, or if $q = 4$ and $n \geq 4$, or if $q = 5$ and $n \geq 3$, or if $q \geq 7$ and $n \geq 2$. Using \cite{Atlas} and \cite{GAP} one can check that $\varepsilon(S) > 1$ when $(n,q) = (2,3)$, $(2,4)$, $(2,5)$, $(3,3)$, $(3,4)$. 4) Finally, we consider the cases $S = {\operatorname{P\Omega}}^{\pm}_{2n}(q)$, where $n \geq 4$ and $q \geq 3$. We set $\epsilon$ to $1$ or $-1$ in the split and non-split cases, respectively. Then $L = {\operatorname{Sp}}in^{\pm}_{2n}(q)$, and the maximal tori of minimal order in Theorem \ref{bound1} have order at least $(q-1)^n$. Hence $$b(S) \leq b(L) \leq B/(q-1)^n, \mbox{ where } B:= |L|_{p'} = (q^n-\epsilon) \cdot \prod^{n-1}_{i=1}(q^{2i}-1).$$ As in 3), we will view $S$ as a normal subgroup of index $\leq \kappa := \gcd(4,q^n-\epsilon)$ of the Lie-type group of adjoint type $H := P({\operatorname{CO}}^{\pm}_{2n}(q)^\circ)$. Then any semisimple element in the dual group $L$ has connected centralizer. Now we consider a maximal torus $T$ of order $q^n-\epsilon$ in $L$, and let $Y$ be the set of elements in $T$ of order dividing $q^k + 1$ or $q^k-1$ for some $k$ with $1 \leq k \leq n-1$. As in 3) we see that $|Y| \leq (q^n-\epsilon)/3$ since $n \geq 4$. Hence there are at least $2(q^n-\epsilon)/3$ elements of $T$ that are regular semisimple. Each such $s$ defines an irreducible character $\chi_s$ of $H$ of degree $B/|T|$. Moreover, since $|N_{L}(T)/T| = 2n$, the $L$-conjugacy class of $s$ intersects $T$ at $2n$ elements. We have therefore produced at least $(q^n-\epsilon)/3n$ irreducible characters $\chi_s$ of $H$, each of degree $$\chi_s(1) = B/|T| = \prod^{n-1}_{i=1}(q^{2i}-1) < q^{n(n-1)} = {\operatorname{St}}(1) \leq b(S).$$ The restriction $(\chi_s)|_S$ contains an irreducible constituent $\rho_s$ of degree at least $\chi_s(1)/\kappa$. Conversely, each $\rho \in {\operatorname{Irr}}(S)$ can lie under at most $\kappa$ distinct irreducible characters of $H$. Hence, to show that $\varepsilon(S) > 1$, it suffices to verify that $$\frac{1}{\kappa} \cdot \frac{q^n-\epsilon}{3n} \cdot \left(\frac{B}{\kappa(q^n-\epsilon)}\right)^2 > \left(\frac{B}{(q-1)^n}\right)^2,$$ equivalently, $(q-1)^{2n} > 3\kappa^{3}n(q^n-\epsilon)$. The latter inequality holds unless $q = 3$ and $n \leq 30$, or $q = 5$ and $n \leq 6$, or $q = 7$ and $n = 4$. \end{proof} Now we handle the remaining infinite families of simple classical groups over~${\mathbb{F}}_2$. \begin{thm} \label{clas2} If $S$ is any of the following simple classical groups over ${\mathbb{F}}_2$: $${\operatorname{SL}}_n(2), ~~{\operatorname{Sp}}_{2n}(2)', ~~\Omega^{\epsilon}_{2n}(2),$$ then one of the following statements holds: {\rm (i)} There exists $\psi \in {\operatorname{Irr}}(S)$ with $81/512 \leq \psi(1)/b(S) < 1$; {\rm (ii)} $\varepsilon(S) > 9/16$. 
\epsilonnd{thm} \begin{proof} The ``small'' groups ${\operatorname{SL}}_3(2)$ and ${\operatorname{Sp}}_4(2)' \cong \mathsf{A}_6$ are easily handled using \cite{Atlas}. Also set $q = 2$. 1) First we consider the case $S = {\operatorname{SL}}_n(2)$ with $n \geq 4$. Then the dual group is isomorphic to $S$. In particular, any character $\chi \in {\operatorname{Irr}}(S)$ of largest degree $b(S)$ can be parametrized by $((s),\phi)$, where $(s)$ is the conjugacy class of a semisimple element $s \in S$ and $\phi$ is a unipotent character of the centralizer $C := C_S(s)$. Such a centralizer is isomorphic to $${\operatorname{GL}}_{k_1}(2^{d_1}) \times \ldots \times {\operatorname{GL}}_{k_r}(2^{d_r}),$$ where $k_i, d_i \geq 1$, $k_1d_1 \geq k_2d_2 \geq \ldots \geq k_rd_r$, and $\sum^{r}_{i=1}k_id_i = n$. Moreover, for each $d$, the number of indices $i$ such that $d_i = d$ is at most the number of conjugacy classes of semisimple elements in ${\operatorname{GL}}_{kd}(2)$ with centralizer $\cong {\operatorname{GL}}_{k}(2^d)$, i.e. the number of monic irreducible polynomials $f(t)$ of degree $d$ over ${\mathbb{F}}_2$. Since $\chi(1) = (S:C)_{2'} \cdot \psi(1)$ and $\chi(1) = b(S)$, by Corollary~\ref{cor:StGL} $\psi$ must be the Steinberg character ${\operatorname{St}}_C$ of $C$, and so $$\psi = \psi_1 \otimes \psi_2 \otimes \ldots \otimes \psi_r,$$ where $\psi_i$ is the Steinberg character of ${\operatorname{GL}}_{k_i}(q^{d_i})$, of degree $q^{d_ik_i(k_i-1)/2}$. Observe that $s \neq 1$, i.e. $\chi$ is not unipotent. Otherwise $b(S) = {\operatorname{St}}(1) = q^{n(n-1)/2}$. However, the character $\rho \in {\operatorname{Irr}}(S)$ labeled by $((u), {\operatorname{St}}_{C_S(u)})$, where $u \in S$ is an element of order $3$ with centralizer $C_S(u) \cong C_3 \times {\operatorname{GL}}_{n-2}(2)$, has degree $$\frac{(q^n-1)(q^{n-1}-1)}{3} \cdot q^{(n-2)(n-3)/2} > q^{n(n-1)/2} = b(S)$$ as $n \geq 4$, a contradiction. Next we show that $r > 1$. Assume the contrary: $C \cong {\operatorname{GL}}_k(q^d)$ with $kd = n$ and $d > 1$. Then by Lemma \ref{sum}(ii) we have $$\chi(1) = q^{dk(k-1)/2} \cdot \frac{(q-1)(q^2-1) \ldots (q^n-1)} {(q^d-1)(q^{2d}-1) \ldots (q^{kd}-1)} < q^{dk(k-1)/2} \cdot \frac{q^{n(n+1)/2-1}}{\frac{9}{16}q^{dk(k+1)/2}} = \frac{8}{9}q^{n(n-1)/2}$$ as $q = 2$. Thus $\chi(1) < {\operatorname{St}}(1)$, a contradiction. Thus we must have that $r \geq 2$. Observe that there is a semisimple element $t \in S$ with centralizer $$C_S(t) \cong {\operatorname{GL}}_1(q^{k_1d_1+k_2d_2}) \times {\operatorname{GL}}_{k_3}(q^{d_3}) \times \ldots \times {\operatorname{GL}}_{k_r}(q^{d_r}).$$ Choose $\psi \in {\operatorname{Irr}}(S)$ to be labeled by $((t),{\operatorname{St}}_{C_S(t)})$. Then $$\frac{\psi(1)}{\chi(1)} = \frac{\prod^{k_1}_{i=1}(q^{id_1}-1)\cdot \prod^{k_2}_{i=1}(q^{id_2}-1)} {q^{d_1k_1(k_1-1)/2} \cdot q^{d_2k_2(k_2-1)/2} \cdot (q^{k_1d_1+k_2d_2}-1)}.$$ By Lemma \ref{sum}(ii), $1> \prod^{k_j}_{i=1}(q^{id_j}-1)/q^{d_jk_j(k_j+1)/2} > 9/32$ for $j = 1,2$ (in fact we can replace $9/32$ by $9/16$ if $d_j > 1$). Since $(d_1,d_2) \neq (1,1)$, it follows that $1 > \psi(1)/\chi(1) > 81/512$. 2) Next we consider the case $S = {\operatorname{Sp}}_{2n}(2)$ with $n \geq 3$. Then the dual group is again isomorphic to $S$, and we can view $S = {\mathcal{G}}^F$ for a Frobenius map $F$ of ${\mathcal{G}} = {\operatorname{Sp}}_{2n}(\overline{{\mathbb{F}}}_2)$. 
Since $Z({\mathcal{G}}) = 1$, by Corollary 14.47 and Proposition 14.42 of \cite{DM}, $S$ has a unique Gelfand-Graev character $\Gamma$, which is the sum of $2^n$ \epsilonmph{regular} irreducible characters $\chi_{(s)}$. Each such $\chi_{(s)}$ has Lusztig label $((s),{\operatorname{St}}_{C_S(s)})$, where $(s)$ is any semisimple class in $S$, see e.g. \cite{H1}. Note that $\Gamma(1) = |S|_{2'} = \prod^{n}_{i=1}(2^{2i}-1) > 2^{n(n+1)} \cdot (9/16)$, with the latter inequality following from Lemma \ref{sum}(ii). Hence by the Cauchy-Schwarz inequality we have $$\sum_{(s)}\chi_{(s)}(1)^2 \geq \frac{(\sum_{(s)}\chi_{s}(1))^2}{2^n} = \frac{(|S|_{2'})^{2}}{2^n} > \frac{9}{16} \cdot |S| \geq \frac{9}{16} \cdot b(S)^2.$$ In particular, $\varepsilon(S) > 9/16$ if $b(S)$ is not achieved by any regular character $\chi_{(s)}$. So we will assume that $b(S)$ is achieved by a regular character $\chi = \chi_{(s)}$. According to \cite[Lemma 3.6]{TZ1}, $C:= C_S(s) = D_1 \times \ldots \times D_r$ is a direct product of groups of the form ${\operatorname{GL}}^{\epsilon}_{k}(q^d)$ (where $\epsilon = +1$ for ${\operatorname{GL}}$ and $\epsilon = -1$ for ${\operatorname{GU}}$) or ${\operatorname{Sp}}_{2m}(q)$. Note that, since $q=2$, $C$ contains at most one factor of the latter form, and no factor of the former form with $(d,\epsilon) = (1,1)$. First suppose that all of the factors $D_i$ are of the second form. It follows that $s = 1$, $\chi_{(1)} = {\operatorname{St}}$. Choosing $\psi \in {\operatorname{Irr}}(S)$ to be labeled by $((u), {\operatorname{St}}_{C_S(u)})$, where $u \in S$ is an element of order $3$ with centralizer $C_S(u) \cong C_3 \times {\operatorname{Sp}}_{2n-2}(2)$, we see that $$b(S)= \chi(1) = 2^{n^2} > \psi(1) = \frac{(2^{2n}-1)}{3} \cdot 2^{(n-1)^2} > b(S)/2$$ as $n \geq 2$. Next we consider the case where exactly one of the factors $D_i$ is of the form ${\operatorname{GL}}^{\epsilon}_k(q^d)$. Then, since $q=2$ we must actually have $r \leq 2$, $C = {\operatorname{GL}}^{\epsilon}_{k}(q^d) \times {\operatorname{Sp}}_{2m}(q)$ with $m := n-kd$, and $(d,\epsilon) \neq (1,1)$. Hence by Lemma \ref{sum}(ii) $$\chi(1) = q^{dk(k-1)/2+m^2} \cdot \frac{\prod^{n}_{j=m+1}(q^{2j}-1)} {\prod^{k}_{j=1}(q^{jd}-\epsilon^j)} < q^{dk(k-1)/2+m^2} \cdot \frac{q^{n(n+1)-m(m+1)}}{\frac{9}{16}q^{dk(k+1)/2}} < \frac{16}{9}q^{n^2}.$$ It is easy to check that $\chi(1) \neq q^{n^2}$. Choosing $\psi = {\operatorname{St}}$, we then have $1 > \psi(1)/b(S) > 9/16$. Lastly, we consider the case where at least two of the factors $D_i$ are of form ${\operatorname{GL}}^{\epsilon}_k(q^d)$: $$C = {\operatorname{GL}}^{\epsilon_1}_{k_1}(q^{d_1}) \times {\operatorname{GL}}^{\epsilon_2}_{k_2}(q^{d_2}) \times \ldots \times {\operatorname{GL}}^{\epsilon_{r-1}}_{k_{r-1}}(q^{d_{r-1}}) \times {\operatorname{Sp}}_{2m}(q),$$ where $r-1 \geq 2$ and $m$ can be zero. We will assume that $k_1d_1 \geq k_2d_2 \geq \ldots \geq k_{r-1}d_{r-1} \geq 1$. Observe that there is a semisimple element $t \in S$ with centralizer $$C_S(t) \cong {\operatorname{GU}}_1(q^{k_1d_1+k_2d_2}) \times {\operatorname{GL}}^{\epsilon_3}_{k_3}(q^{d_3}) \times \ldots \times {\operatorname{GL}}^{\epsilon_{r-1}}_{k_{r-1}}(q^{d_{r-1}}) \times {\operatorname{Sp}}_{2m}(q).$$ Choose $\psi \in {\operatorname{Irr}}(S)$ to be labeled by $((t),{\operatorname{St}}_{C_S(t)})$. 
Then $$\frac{\psi(1)}{\chi(1)} = \frac{\prod^{k_1}_{i=1}(q^{id_1}-\epsilon_1^i)\cdot \prod^{k_2}_{i=1}(q^{id_2}-\epsilon_2^i)} {q^{d_1k_1(k_1-1)/2} \cdot q^{d_2k_2(k_2-1)/2} \cdot (q^{k_1d_1+k_2d_2}+1)}.$$ By Lemma \ref{sum}(ii), $\prod^{k_j}_{i=1}(q^{id_j}-\epsilon_j^i)/q^{d_jk_j(k_j+1)/2} > 9/16$ for $j = 1,2$ since $(d_j,\epsilon_j) \neq (1,1)$. Furthermore, since $k_1d_1+k_2d_2 \geq 2$ we have $q^{k_1d_1+k_2d_2}+1 \leq (5/4)q^{k_1d_1+k_2d_2}$. Thus $\psi(1)/\chi(1) > 81/320$, and so we are done if $\psi(1) \neq \chi(1)$. Suppose that $\psi(1) = \chi(1)$. Then $k_1=k_2=1$, $(d_1,\epsilon_1)=(2,1)$, and $(d_2,\epsilon_2) = (1,-1)$. In this case we can replace $t$ by a semisimple element $t'$ with $$C_S(t') \cong {\operatorname{GL}}_1(q^{k_1d_1+k_2d_2}) \times {\operatorname{GL}}^{\epsilon_3}_{k_3}(q^{d_3}) \times \ldots \times {\operatorname{GL}}^{\epsilon_{r-1}}_{k_{r-1}}(q^{d_{r-1}}) \times {\operatorname{Sp}}_{2m}(q)$$ and repeat the above argument. 3) Finally, let us consider the case $S = \Omega^{\epsilon}_{2n}(q)$ with $n \geq 4$. Again, the dual group of $S$ can be identified with $S$, and we can view $S = {\mathcal{G}}^F$ for a Frobenius map $F$ of ${\mathcal{G}} = {\operatorname{SO}}_{2n}(\overline{{\mathbb{F}}}_2)$. Since ${\mathcal{G}}$ is simply connected (as $q=2$), the centralizers of any semisimple element in ${\mathcal{G}}$ are connected. Arguing as in the ${\operatorname{Sp}}$-case, we may assume that $b(S)$ is attained at a regular character $\chi = \chi_{(s)}$. One can show (see also \cite[Lemma 3.7]{TZ1}) that $$C := C_S(s) = K_1 \times H_3 \times \ldots \times H_r$$ where each $H_i$ with $3 \leq i \leq r$ is of the form ${\operatorname{GL}}^{\beta}_k(q^d)$ with $\beta = \pm 1$. Furthermore, $K_1$ has a normal subgroup $H_1 \cong \Omega^{\pm}_{2m}(q)$ (where $m$ can be zero) such that $K_1/H_1$ is either trivial, or isomorphic to ${\operatorname{GU}}_2(2)$. In the latter case, the Steinberg character of the finite connected reductive group $K_1$ has degree equal to $|K_1|_2 = 2^{m(m-1)+1}$. Thus in either case we may replace $C_S(s)$ by $$H_1 \times H_2 \times H_3 \times \ldots \times H_r$$ (where each $H_i$ is of the form ${\operatorname{GL}}^{\beta}_k(q^d)$ with $\beta = \pm 1$ or $\Omega^{\pm}_{2m}(q)$, and the latter form can occur for at most one factor $H_i$), and identify the Steinberg character ${\operatorname{St}}_C$ with ${\operatorname{St}}_{H_1} \otimes \ldots \otimes {\operatorname{St}}_{H_r}$. First we consider the case $s = 1$, i.e. $C = H_1 = S$, and $\chi(1) = {\operatorname{St}}$. Choosing $\psi \in {\operatorname{Irr}}(S)$ to be labeled by $((u), {\operatorname{St}}_{C_S(u)})$, where $u \in S$ is an element of order $3$ with centralizer $C_S(u) \cong C_3 \times \Omega^{-\epsilon}_{2n-2}(2)$, we see that $$b(S)= \chi(1) = 2^{n(n-1)} > \psi(1) = \frac{(2^{n}-\epsilon)(2^{n-1}-\epsilon)}{3} \cdot 2^{(n-1)(n-2)} > b(S)/4$$ as $n \geq 4$. Next we consider the case where exactly one of the factors $H_i$ is of the form ${\operatorname{GL}}^{\alpha}_k(q^d)$. Then, since $q=2$ we must actually have $r \leq 2$, $C = {\operatorname{GL}}^{\alpha}_{k}(q^d) \times \Omega^{\beta}_{2m}(q)$ with $m := n-kd$, and $(d,\alpha) \neq (1,1)$. 
Hence by Lemma \ref{sum}(ii) \begin{align*} \chi(1) & = q^{dk(k-1)/2+m(m-1)} \cdot \dfrac{(q^m+\beta) \cdot \prod^{n}_{j=m+1}(q^{2j}-1)} {(q^n+\epsilon) \cdot \prod^{k}_{j=1}(q^{jd}-\alpha^j)}\\ & < q^{dk(k-1)/2+m(m-1)} \cdot \dfrac{3}{2} \cdot \dfrac{16}{15} \cdot \dfrac{q^{m+ n(n+1)-m(m+1)}}{\frac{9}{16}q^{n+ dk(k+1)/2}} < \frac{128}{45}q^{n(n-1)}. \epsilonnd{align*} It is easy to check that $\chi(1) \neq q^{n(n-1)}$. Choosing $\psi = {\operatorname{St}}$ we then have $1 > \psi(1)/b(S) > 45/128$. Lastly, we consider the case where at least two of the factors $H_i$ are of the form ${\operatorname{GL}}^{\pm}_k(q^d)$: $$C = {\operatorname{GL}}^{\epsilon_1}_{k_1}(q^{d_1}) \times {\operatorname{GL}}^{\epsilon_2}_{k_2}(q^{d_2}) \times \ldots \times {\operatorname{GL}}^{\epsilon_{r-1}}_{k_{r-1}}(q^{d_{r-1}}) \times \Omega^{\beta}_{2m}(q),$$ where $r-1 \geq 2$ and $m$ can be zero. We will assume that $k_1d_1 \geq k_2d_2 \geq \ldots \geq k_{r-1}d_{r-1} \geq 1$. Observe that there is a semisimple element $t \in S$ with centralizer $$C_S(t) \cong {\operatorname{GL}}^{\alpha}_1(q^{k_1d_1+k_2d_2}) \times {\operatorname{GL}}^{\epsilon_3}_{k_3}(q^{d_3}) \times \ldots \times {\operatorname{GL}}^{\epsilon_{r-1}}_{k_{r-1}}(q^{d_{r-1}}) \times \Omega^{\beta}_{2m}(q)$$ for some $\alpha = \pm 1$. Choose $\psi \in {\operatorname{Irr}}(S)$ to be labeled by $((t),{\operatorname{St}}_{C_S(t)})$. Then $$\frac{\psi(1)}{\chi(1)} = \frac{\prod^{k_1}_{i=1}(q^{id_1}-\epsilon_1^i)\cdot \prod^{k_2}_{i=1}(q^{id_2}-\epsilon_2^i)} {q^{d_1k_1(k_1-1)/2} \cdot q^{d_2k_2(k_2-1)/2} \cdot (q^{k_1d_1+k_2d_2}-\alpha)}.$$ By Lemma \ref{sum}(ii), $\prod^{k_j}_{i=1}(q^{id_j}-\epsilon_j^i)/q^{d_jk_j(k_j+1)/2} > 9/16$ for $j = 1,2$ since $(d_j,\epsilon_j) \neq (1,1)$. Furthermore, since $k_1d_1+k_2d_2 \geq 2$ we have $q^{k_1d_1+k_2d_2}-\alpha \leq (5/4)q^{k_1d_1+k_2d_2}$. Thus $\psi(1)/\chi(1) > 81/320$, and so we are done if $\psi(1) \neq \chi(1)$. Suppose that $\psi(1) = \chi(1)$. Then $k_1=k_2=1$, which forces $\alpha = \epsilon_1\epsilon_2$, $(d_1,\epsilon_1)=(2,1)$, and $(d_2,\epsilon_2) = (1,-1)$. In this last case we must have that $r=3$, $$C = {\operatorname{GL}}_1(4) \times {\operatorname{GU}}_1(2) \times \Omega^{-\epsilon}_{2n-6}(2),$$ and $$\chi(1) = \frac{1}{9} \cdot 2^{(n-3)(n-4)}(2^n-\epsilon)(2^{n-3}-\epsilon)(2^{2n-2}-1)(2^{2n-4}-1).$$ In particular, $1 \neq \chi(1)/{\operatorname{St}}(1) < 4/3$, and so we are done. \epsilonnd{proof} \section{The Largest Degrees of Simple Groups of Lie Type} Let $L$ be a finite Lie-type group of simply connected type over ${\mathbb{F}}_q$. When $q$ is large enough in comparison to the rank of $L$, Theorem~\ref{bound1} gives us the precise value of $b(L)$. However, we do not have a formula for $b(L)$ for small values of $q$. In the extreme case $L = {\operatorname{SL}}_n(2)$, there does not even seem to exist a decent upper bound on $b(L)$ in the literature, aside from the trivial bound $b(L) < |L|^{1/2}$. On the other hand, as a polynomial of $q$, the degree of the Steinberg character ${\operatorname{St}}$ is the same as that of the bound in Theorem~\ref{bound1}. So it is an interesting question to study the asymptotic of the quantity $c(L) := b(L)/{\operatorname{St}}(1)$. In this section we will prove upper and lower bounds for $c(L)$ for finite classical groups. \begin{thm} \lambdabel{bound4} Let $G$ be any of the following Lie-type groups of type $A$: ${\operatorname{GL}}_n(q)$, ${\operatorname{PGL}}_n(q)$, ${\operatorname{SL}}_n(q)$, or ${\operatorname{PSL}}_n(q)$. 
Then the following inequalities hold: $$\max\left\{1, \frac{1}{4} \left(\log_{q}((n-1)(1-\frac{1}{q})+q^2)\right)^{3/4}\right\} \leq \frac{b(G)}{q^{n(n-1)/2}} < 13(\log_q(n(q-1)+q))^{2.54}.$$ In particular, $$\frac{1}{4}\left(\log_{q}\frac{n+7}{2}\right)^{3/4} < \frac{b(G)}{q^{n(n-1)/2}} < 13(1+\log_q(n+1))^{2.54}.$$ \epsilonnd{thm} \begin{proof} 1) Since the Steinberg character of ${\operatorname{GL}}_n(q)$ is trivial at $Z({\operatorname{GL}}_n(q))$ and stays irreducible as a character of ${\operatorname{PSL}}_n(q)$, the inequality $b(G) \geq q^{n(n-1)/2}$ is obvious. Next we prove the upper bound $$c(G) := \frac{b(G)}{q^{n(n-1)/2}} < 13(\log_q(n(q-1)+q))^{2.54}$$ for $G = {\operatorname{GL}}_n(q)$, which then also implies the same bound for all other groups of type $A$. It is not hard to see that the arguments in p. 1) of the proof of Theorem~\ref{clas2} also carry over to the case of $G = {\operatorname{GL}}_n(q)$. It follows that $c(G)$ is just the maximum of $$P := \frac{\prod^{n}_{i=1}(1-q^{-i})} {\prod^m_{j=1}\prod^{k_j}_{i=1}(1-q^{-id_j})},$$ where the maximum is taken over all possible $m,k_j,d_j \geq 1$ with $k_1d_1 \geq \ldots \geq k_md_m$, $\sum^{m}_{j=1}k_jd_j = n$, and for each $d = 1,2, \ldots$, the number $a_d$ of the values of $j$ such that $d_j$ equals to $d$ does not exceed the number ${\mathfrak{n}}_d$ of monic irreducible polynomials $f(t)$ of degree $d$ over ${\mathbb{F}}_q$; in particular, $a_d < q^d/d$. By Lemma \ref{sum}(ii), the numerator of $P$ is bounded between $9/32$ and $1$ for all $q \geq 2$. It remains to bound (the natural logarithm $L$ of) the denominator of $P$. By Lemma \ref{sum}(i), $\prod^{\infty}_{i=1}(1-q^{-id_j}) > \epsilonxp(-\alpha q^{-d_j})$ with $\alpha = 2\ln(32/9)$. Hence, $$-L/\alpha < \sum_{j=1}^{m} q^{-d_j} = \sum^{n}_{d=1}a_d q^{-d}.$$ The constraints imply $\sum^n_{d=1} d a_d \leq n$ and $a_d < q^d/d$. Replacing the $a_d$ with real numbers $x_d$, we want to maximize $\sum^{\infty}_{d=1}x_dq^{-d}$ subject to the constraints $$\sum^{\infty}_{d=1}d x_d \leq n, ~~0 \leq x_d \leq q^d/d.$$ Since the function $q^{-t}/t$ is decreasing on $(0,\infty)$, we see that there exists some $d_0$ (depending on $n$) such that the sum is optimized when $x_i = q^i/i$ for all $i<d_0$ and $x_i = 0$ for all $i>d_0$. Thus $d_0$ is the largest integer such that $\sum^{d_0-1}_{d=1}(q^d/d)d = \sum^{d_0-1}_{d=1}q^d = (q^{d_0}-q)/(q-1)$ does not exceed $n$, whence $$d_0 \leq \log_q(n(q-1)+q) < 1+ \log_q(n +1).$$ On the other hand, $$\sum^{d_0}_{d=1}x_d q^{-d} \leq \sum_{d=1}^{d_0} \frac{1}{d} < 1 + \ln(d_0).$$ Thus $L > -\alpha(1+\ln(d_0))$ and so $$P < e^{-L} < e^{\alpha(1+\ln(d_0))} = e^{\alpha}d_{0}^{\alpha} < 13(\log_q(n(q-1)+q))^{2.54}$$ by the choice $\alpha = 2\ln(32/9)$. 2) Now we prove the lower bound $$c(S) := \frac{b(S)}{q^{n(n-1)/2}} > \frac{1}{4} \left(\log_{q}((n-1)(1-\frac{1}{q})+q^2)\right)^{3/4}$$ for $S = {\operatorname{PSL}}_n(q)$, which then also implies the same bound for all other groups of type $A$. As above, let ${\mathfrak{n}}_d$ be the number of monic irreducible polynomials $f(t)$ over ${\mathbb{F}}_q$. Arguing as in p. 1) of the proof of Theorem \ref{clas}, we see that the total number of elements of ${\mathbb{F}}_{q^d}$ which do not belong to any proper subfield of ${\mathbb{F}}_{q^d}$ is at least $3q^d/4$ when $d \geq 3$ and at most $q^d-1$. It follows that for $d \geq 3$ we have \begin{equation}\lambdabel{poly} \frac{3q^d}{4d} \leq {\mathfrak{n}}_d < \frac{q^d}{d}. 
\epsilonnd{equation} Since $b(S) \geq {\operatorname{St}}(1)$, the claim is obvious if $n \leq q^3$. Hence we may assume that $n \geq q^3+1 \geq 3{\mathfrak{n}}_3+3$. Let $d^* \geq 3$ be the largest integer such that $m:= \sum^{d^*}_{d=3}d{\mathfrak{n}}_d \leq n-3$. In particular, $$\sum^{d^*+1}_{d=3}q^d > \sum^{d^*+1}_{d=3}d {\mathfrak{n}}_d \geq n-2,$$ and so \begin{equation}\lambdabel{for-d1} d^*+1 \geq \log_q((n-1)(1-1/q)+q^2). \epsilonnd{equation} Observe that $G_1 := {\operatorname{GL}}_m(q)$ contains a semisimple element $s_1$ with $$C_{G_1}(s_1) = {\operatorname{GL}}_{1}(q^3)^{{\mathfrak{n}}_3} \times {\operatorname{GL}}_{1}(q^4)^{{\mathfrak{n}}_4} \times \ldots \times {\operatorname{GL}}_{1}(q^{d^*})^{{\mathfrak{n}}_{d^*}}.$$ (Indeed, each of the ${\mathfrak{n}}_d$ monic irreducible polynomials of degree $d$ over ${\mathbb{F}}_q$ gives us an embedding ${\operatorname{GL}}_1(q^d) \hookrightarrow {\operatorname{GL}}_d(q)$.) If $\det(s_1) = 1$, then choose $s := {\operatorname{diag}}(I_{n-m},s_1)$ so that $$C_G(s) = {\operatorname{GL}}_{n-m}(q) \times C_{G_1}(s_1).$$ Otherwise we choose $s := {\operatorname{diag}}(I_{n-m-1},\det(s_1)^{-1},s_1)$ so that $$C_G(s) = {\operatorname{GL}}_{n-m-1}(q) \times {\operatorname{GL}}_1(q) \times C_{G_1}(s_1).$$ In either case, $\det(s) = 1$ and so $s \in [G,G]$ for $G = {\operatorname{GL}}_n(q)$. Now consider the (regular) irreducible character $\chi$ labeled by $((s),{\operatorname{St}}_{C_G(s)})$. The inclusion $s \in [G,G]$ implies that $\chi$ is trivial at $Z(G)$. Also, our choice of $s$ ensures that $s$ has at most two eigenvalues in ${\mathbb{F}}_q^{\times}$: the eigenvalue $1$ with multiplicity $\geq n-m-1 \geq 2$, and at most one more eigenvalue with multiplicity $1$. Hence, for any $1 \neq t \in {\mathbb{F}}_q^{\times}$, $s$ and $st$ are not conjugate in $G$. To each such $t$ one can associate a linear character $\hat{t} \in {\operatorname{Irr}}(G)$ in such a way that the multiplication by $\hat{t}$ yields a bijection between the Lusztig series ${\mathcal E}(G,(s))$ and ${\mathcal E}(G,(st))$ labeled by the conjugacy classes of $s$ and $st$, cf. \cite[Proposition 13.30]{DM}. Since the distinct Lusztig series are disjoint, we conclude that the number of linear characters $\hat{t} \in {\operatorname{Irr}}(G)$ such that $\chi\hat{t} = \chi$ is exactly one. It then follows by \cite[Lemma 3.2(i)]{KT} that $\chi$ is irreducible over ${\operatorname{SL}}_n(q)$. Thus we can view $\chi$ as an irreducible character of $S = {\operatorname{PSL}}_n(q)$. Next, in the case $\det(s_1) = 1$ we have $$\frac{\chi(1)}{q^{n(n-1)/2}} = \frac{\prod^{n}_{i=n-m+1}(1-q^{-i})} {\prod^{d^*}_{j=3}(1-q^{-j})^{{\mathfrak{n}}_j}},$$ whereas in the case $\det(s_1) \neq 1$ we have that $$\frac{\chi(1)}{q^{n(n-1)/2}} = \frac{\prod^{n}_{i=n-m}(1-q^{-i})} {(1-q^{-1})\prod^{d^*}_{j=3}(1-q^{-j})^{{\mathfrak{n}}_j}}.$$ Since $n-m \geq 3$, in either case we have $$\frac{\chi(1)}{q^{n(n-1)/2}} > \frac{\prod^{\infty}_{i=4}(1-q^{-i})} {\prod^{d^*}_{j=3}(1-q^{-j})^{{\mathfrak{n}}_j}}.$$ By Lemma \ref{sum}(ii), the numerator is at least $(9/16) \cdot (4/3) \cdot (8/7) = 6/7$. To estimate the denominator, observe that $1/(1-x) > e^{x}$ for $0 < x < 1$. Applying (\ref{poly}) we now see that $$\begin{aligned} \ln\left(\frac{1}{\prod^{d^*}_{j=3}(1-q^{-j})^{{\mathfrak{n}}_j}}\right) &> \sum^{d^*}_{j=3}q^{-j}{\mathfrak{n}}_j \geq \sum^{d^*}_{j=3}\frac{3}{4j}\\ &\geq \frac{3}{4}(\ln(d^*+1) - 1 - \frac{1}{2}) = \frac{3\ln(d^*+1)}{4} - \frac{9}{8}. 
\epsilonnd{aligned}$$ Together with (\ref{for-d1}) this implies that $$\frac{\chi(1)}{q^{n(n-1)/2}} > \frac{6}{7e^{9/8}}\left(\log_q((n-1)(1-\frac{1}{q})+q^2)\right)^{3/4}\!\! > \frac{1}{4}\left(\log_{q}((n-1)(1-\frac{1}{q})+q^2)\right)^{3/4}\!\!.$$ \epsilonnd{proof} Next we handle the other classical groups. Abusing the notation, by \epsilonmph{a group of type $C_n$ over ${\mathbb{F}}_q$} we mean any of the following groups: ${\operatorname{Sp}}_{2n}(q)$ (of simply connected type), ${\operatorname{PCSp}}_{2n}(q)$ (of adjoint type), or ${\operatorname{PSp}}_{2n}(q)$ (the simple group, except for a few ``small'' exceptions). Similarly, by \epsilonmph{a group of type $B_n$ over ${\mathbb{F}}_q$} we mean any of the following group: ${\operatorname{Sp}}in_{2n+1}(q)$ (of simply connected type), ${\operatorname{SO}}_{2n+1}(q)$ (of adjoint type), or $\Omega_{2n+1}(q)$ (the simple group, except for a few ``small'' exceptions). By \epsilonmph{a group of type $D_n$ or $^2 D_n$ over ${\mathbb{F}}_q$} we mean any of the following group: ${\operatorname{Sp}}in^{\epsilon}_{2n}(q)$ (of simply connected type), ${\operatorname{PCO}}^{\epsilon}_{2n}(q)^{\circ}$ (of adjoint type), ${\operatorname{P\Omega}}^{\epsilon}_{2n}(q)$ (the simple group, except for a few ``small'' exceptions), ${\operatorname{SO}}^{\epsilon}_{2n}(q)$, as well as the half-spin group ${\operatorname{HS}}_{2n}(q)$. We refer the reader to \cite{C} for the definition of these finite groups of Lie type. \begin{thm} \lambdabel{bound5} Let $G$ be a group of type $B_n$, $C_n$, $D_n$, or $^2D_n$ over ${\mathbb{F}}_q$. If $q$ is odd, then the following inequalities hold: $$\max\left\{1, \frac{1}{5} \left(\log_{q}\frac{4n+25}{3}\right)^{3/8}\right\} \leq \frac{b(G)}{{\operatorname{St}}(1)} < 38(1+\log_q(2n+1))^{1.27}.$$ If $q$ is even, then the following inequalities hold: $$\max\left\{1, \frac{1}{5} \left(\log_{q}(n+17)\right)^{3/8}\right\} \leq \frac{b(G)}{{\operatorname{St}}(1)} < 8(1+\log_q(2n+1))^{1.27}.$$ \epsilonnd{thm} \begin{proof} In all cases, the bound $b(G)/{\operatorname{St}}(1) \geq 1$ is obvious since ${\operatorname{St}}$ is irreducible over any of the listed possibilities for $G$. 1) We begin by proving the upper bound in the cases where $G$ is of type $B_n$, respectively of type $D_n$ or $^2D_n$, and $q$ is odd and $n \geq 3$. Let $V = {\mathbb{F}}_q^{2n+1}$, respectively $V = {\mathbb{F}}_q^{2n}$, be endowed with a non-degenerate quadratic form. Then it is convenient to work with the \epsilonmph{special Clifford group} $G := \Gamma^+(V)$ associated to the quadratic space $V$, see for instance \cite{TZ2}. In particular, $G$ maps onto ${\operatorname{SO}}(V)$ with kernel $C_{q-1}$, and contains ${\operatorname{Sp}}in(V)$ as a normal subgroup of index $q-1$. Furthermore, the dual group $G^*$ can be identified with the conformal symplectic group ${\operatorname{CSp}}_{2n}(q)$ in the $B$-case, and with the group ${\operatorname{CO}}(V)^{\circ}$ in the $D$-case, cf. \cite[\S3]{FS}. Observe that the adjoint group ${\operatorname{PCO}}(V)^{\circ}$ contains ${\operatorname{PSO}}(V)$ as a normal subgroup of index $2$. Similarly, the half-spin group contains a quotient of ${\operatorname{Sp}}in(V)$ (by a central subgroup of order $2$) as a normal subgroup of index $2$. Hence, it suffices to prove the indicated upper bound (with constant $19$) for this particular $G$. Similarly, it will suffice to prove the indicated lower bound (with constant $1/5$) for the simple group $S = {\operatorname{P\Omega}}(V)$. 
Let $s \in G^*$ be any semisimple element. Consider first the $B$-case and let $\tau(s) \in {\mathbb{F}}_q^{\times}$ denote the factor by which the conformal transformation $s \in {\operatorname{CSp}}_{2n}(q)$ changes the corresponding symplectic form. Also set $H := {\operatorname{Sp}}_{2n}(q)$ and denote by ${\mathbb{F}}_q^{\times 2}$ the set of squares in ${\mathbb{F}}_q^{\times}$. Then by \cite[Lemma 2.4]{N} we have that $$C := C_{G^*}(s) = C_{H}(s) \cdot C_{q-1},$$ with $$C_{H}(s) = \prod_{i}{\operatorname{GL}}^{\epsilon_i}_{k_i}(q^{d_i}) \times \left\{ \begin{array}{lll} {\operatorname{Sp}}_l(q^2), & \tau(s) \notin {\mathbb{F}}_q^{\times 2}, & (B1)\\ {\operatorname{Sp}}_{2k}(q) \times {\operatorname{Sp}}_{2l-2k}(q), & \tau(s) \in {\mathbb{F}}_q^{\times 2}, & (B2), \end{array}\right.$$ where $\sum_ik_id_i = n-l$, $\epsilon_i = \pm 1$, and $0 \leq k \leq l \leq n$ (and we use $(B1)$ and $(B2)$ to label the two subcases which can arise). In the $D$-case, let $\tau(s) \in {\mathbb{F}}_q^{\times}$ denote the factor by which the conformal transformation $s \in {\operatorname{CO}}(V)^{\circ}$ changes the corresponding quadratic form; also set $H := {\operatorname{SO}}(V)$. Then by \cite[Lemma 2.5]{N} we have that $$C := C_{G^*}(s) = C_{H}(s) \cdot C_{q-1},$$ with $$C_{{\operatorname{GO}}(V)}(s)\cong \prod_{i}{\operatorname{GL}}^{\epsilon_i}_{k_i}(q^{d_i}) \times \left\{ \begin{array}{lll} {\operatorname{GO}}^{\pm}_l(q^2), & \tau(s)\notin {\mathbb{F}}_q^{\times 2}, & (D1)\\ {\operatorname{GO}}^{\pm}_{2k}(q) \times {\operatorname{GO}}^{\pm}_{2l-2k}(q), & \tau(s) \in {\mathbb{F}}_q^{\times 2}, & (D2), \end{array}\right.$$ where $\sum_ik_id_i = n-l$, $\epsilon_i = \pm 1$, and $0 \leq k \leq l \leq n$ (and we use $(D1)$ and $(D2)$ to label the two subcases which can arise). On the set of monic irreducible polynomials of degree $d$ in ${\mathbb{F}}_q[t]$ (regardless of whether $q$ is odd or not), one can define the involutive map $f \mapsto \check{f}$ such that $x^df(1/x)$ is a scalar multiple of $\check{f}(x)$. One can show that, for $d>1$, such an $f$ can satisfy the equality $f = \check{f}$ only when $2|d$ and $\alpha^{q^{d/2}+1} = 1$ for every root $\alpha$ of $f$. Hence, if ${\mathfrak{n}}s_d$ denotes the number of monic irreducible polynomials of degree $d$ over ${\mathbb{F}}_q$ with $f \neq \check{f}$, then \begin{equation}\label{poly2} {\mathfrak{n}}s_d < \frac{q^d}{d}, \mbox{ and }{\mathfrak{n}}s_d \geq \frac{3q^d}{4d} \mbox{ if }d \geq 3 \mbox{ and }q \geq 3, \mbox{ or if } d \geq 5 \mbox{ and }q = 2. \end{equation} The former inequality is obvious. The latter inequality follows from (\ref{poly}) when $d$ is odd (as ${\mathfrak{n}}s_d = {\mathfrak{n}}_d$ in this case), and by direct check when $d = 4$ or $(q,d) = (2,6)$. Assume $d = 2e \geq 6$ and $(q,d) \neq (2,6)$. Then the number of elements of ${\mathbb{F}}_{q^d}$ which belong to a proper subfield of ${\mathbb{F}}_{q^d}$ or to the subgroup $C_{q^{d/2}+1}$ of ${\mathbb{F}}_{q^{d}}^{\times}$ is at most $$q^e + \sum^e_{i=1}q^i < q^e(q+1) < q^d/4,$$ whence ${\mathfrak{n}}s_d > 3q^d/(4d)$ as stated. In either case, decompose the characteristic polynomial of the transformation $s$ into a product of powers of distinct monic irreducible polynomials over ${\mathbb{F}}_q$. 
Then the factor $GL^{\epsilon_i}_{k_i}(q^{d_i})$ in $C$ with $\epsilon_i = 1$, respectively with $\epsilon_i = -1$, corresponds to a factor $(f_i\check{f}_i)^{k_i}$ in this decomposition with $\deg(f_i) = d_i$ and $f_i \neq \check{f}_i$, respectively to a factor $f_i^{k_i}$ in this decomposition with $\deg(f_i) = 2d_i$ and $f_i = \check{f}_i$. In particular, if $a_d$ denotes the number of factors $GL_{k_i}(q^d_i)$ in $C$ with $d_i = d$, then \begin{equation}\lambdabel{for-ad} a_d \leq \frac{{\mathfrak{n}}s_d}{2} < \frac{q^d}{2d}, ~~~\sum_dda_d \leq n. \epsilonnd{equation} Certainly, ${\operatorname{St}}_C(1) = |C|_{p}$ for the prime $p$ dividing $q$. But, since the centralizer of $s$ in the corresponding algebraic group is (most of the time) disconnected, we cannot apply Theorem \ref{thm:steinberg} directly to say that $b(G)$ is attained by the regular character $\chi=\chi_{(s)}$ labeled by $((s),{\operatorname{St}}_C)$. Nevertheless, we claim that the degree of any unipotent character $\psi$ of $C$ is at most $2\kappa\cdot{\operatorname{St}}_C(1)$ and so $b(G) \leq 2\kappa\chi(1)$, where $\kappa = 2$ in the $(D2)$-case and $\kappa = 1$ otherwise. Indeed, by definition the unipotent character $\psi$ of the (usually disconnected) group $C$ is an irreducible constituent of ${\operatorname{Ind}}^{C}_{D}(\varphi)$ for some unipotent character $\varphi$ of $D := Z(G^*)C_H(s)$. In turn, $\varphi$ restricts irreducibly to a unipotent character of the (usually disconnected) group $C_H(s)$. It is easy to see that $C_H(s)$ contains a normal subgroup $D_1$ of index $\kappa$, which is a finite connected group, in fact a direct product of subgroups of form ${\operatorname{GL}}^{\epsilon_i}_{k_i}(q^{d_i})$, ${\operatorname{Sp}}_l(q^2)$, ${\operatorname{Sp}}_l(q)$, ${\operatorname{SO}}^{\pm}_l(q^2)$, or ${\operatorname{SO}}^{\pm}_l(q)$. Again by definition $\varphi|_{C_H(s)}$ is an irreducible constituent of ${\operatorname{Ind}}^{C_H(s)}_{D_1}(\varphi_1)$ for some unipotent character $\varphi_1$ of $D_1$. Now we can apply Theorem \ref{thm:steinberg} to $D_1$ to see that $\varphi_1(1) \leq {\operatorname{St}}_{D_1}(1) = |D_1|_p = |C|_p$. Since $|C/D| = 2$ and $|C_H(s)/D_1| = \kappa$, we conclude that $\psi(1) \leq 2\kappa|C|_p$, as stated. Observe that $(G^*:C)_{p'} = (H:D_1)_{p'}/\kappa$. We have therefore shown that $$b(G) = \chi(1) \leq 2(H:D_1)_{p'}\cdot |D_1|_p.$$ By Lemma \ref{sum}(i), (iv), $\prod^{\infty}_{i=1}(1-q^{-2i}) > 71/81$ since $q \geq 3$, and $\prod^{k_j}_{i=1}(1-(\epsilon_jq^{-d_j})^{i}) > 1$ if $\epsilon_j = -1$. Furthermore, $q^l \pm 1 \geq (2/3)q^l$ and $q^n \pm 1 \leq (28/27)q^n$ since $n,q \geq 3$. Using these estimates, we see that \begin{equation}\lambdabel{for-cg1} c(G) \leq \frac{A}{\prod_{j~:~\epsilon_j = 1}\prod^{k_j}_{i=1}(1-q^{-id_j})}. \epsilonnd{equation} Here, $A = 2 \cdot (28/27) \cdot (81/71) \cdot (3/2)^2 = 378/71$ in the $(D2)$-case. Similarly, $A = 2$ in the $(B1)$-case, $A = 162/71$ in the $(B2)$-case, and $A = 28/9$ in the $(D1)$-case. By Lemma~\ref{sum}(i), $c(G) \leq A\cdot \epsilonxp(\alpha\sum_{d}a_dq^{-d})$ with $\alpha = 2\ln(32/9)$, and $a_d$ is subject to the constraints (\ref{for-ad}). Now we can argue as in p. 1) of the proof of Theorem \ref{bound4} to bound $\sum_da_dq^{-d}$ from above. In particular, we get $\sum_da_dq^{-d} \leq (1+\ln(d_0))/2$, where $d_0$ is the largest integer such that $\sum^{d_0-1}_{d=1}d(q^d/2d) \leq n$, i.e. 
$$d_0 \leq \log_q(2n(q-1)+q) < 1 + \log_q(2n+1).$$ Putting everything together, we obtain \begin{equation}\label{for-cg2} c(G) \leq Ae^{\alpha/2}d_0^{\alpha/2} < Ae^{1.27}(1+\log_q(2n+1))^{1.27} \end{equation} and so we are done, as $Ae^{1.27} < 19$.

2) Next we briefly discuss how one can prove the upper bound in the remaining cases.

2a) Consider the case $G$ is of type $C_n$ over ${\mathbb{F}}_q$ with $q$ odd. As above, it suffices to prove the upper bound with the constant $19$ for $G = {\operatorname{Sp}}_{2n}(q)$. In this case, $G^* = {\operatorname{SO}}_{2n+1}(q)$, and if $s \in G^*$ is a semisimple element, then the structure of $C_{{\operatorname{GO}}_{2n+1}(q)}(s)$ is as described in the $(D2)$-case. Arguing as above, we arrive at (\ref{for-cg1}) and (\ref{for-cg2}) with $Ae^{1.27} = 2 \cdot (81/71) \cdot (3/2)^2 \cdot e^{1.27} < 18.3$.

2b) Next suppose that $G$ is of type $C_n$ over ${\mathbb{F}}_q$ with $q$ even. In this case, $G^* \cong {\operatorname{Sp}}_{2n}(q)$, and if $s \in G^*$ is a semisimple element, then the structure of $C_{G^*}(s)$ is as described in the $(B2)$-case with $k=0$. Arguing as above, we arrive at (\ref{for-cg1}) and (\ref{for-cg2}) with $Ae^{1.27} = e^{1.27} < 3.6$.

2c) Finally, let $G = \Omega^{\epsilon}_{2n}(q)$ with $q$ even and $n \geq 4$; in particular, $G^* \cong G$. If $s \in G^*$ is a semisimple element, then the structure of $C_{{\operatorname{GO}}^{\epsilon}_{2n}(q)}(s)$ is as described in the $(D2)$-case with $k=0$. Arguing as above and using the estimates $q^l \pm 1 \geq q^l/2$ and $q^n \pm 1 \leq (17/16)q^n$, we arrive at (\ref{for-cg1}) and (\ref{for-cg2}) with $Ae^{1.27} = 2 \cdot (17/16) \cdot e^{1.27} < 7.6$ for $q \geq 4$. As in part 3) of the proof of Theorem \ref{clas2}, in the case $q=2$ we need some extra care if $C_G(s)$ contains a factor $K_1 := ((C_3 \times C_3) \times \Omega^{\pm}_{2r}(2)) \cdot 2$, where $C_3 \times C_3$ is the (unique) subgroup of index $2$ in ${\operatorname{GU}}_2(2)$. We claim that we still have the bound $\theta(1) \leq |K_1|_2$ for any unipotent character $\theta$ of $K_1$. Indeed, $K_1$ is a normal subgroup of index $2$ of $\tilde{K}_1 := {\operatorname{GU}}_2(2) \times {\operatorname{GO}}^{\pm}_{2r}(2)$. Now $\theta$ is an irreducible constituent of some unipotent character $\tilde{\theta} = \lambda \otimes \mu$ of $\tilde{K}_1$, where $\lambda \in {\operatorname{Irr}}({\operatorname{GU}}_2(2))$ and $\mu \in {\operatorname{Irr}}({\operatorname{GO}}^{\pm}_{2r}(2))$ are unipotent. It follows that the irreducible constituents of $\theta|_{\Omega^{\pm}_{2r}(2)}$ are unipotent characters of $H_1 := \Omega^{\pm}_{2r}(2)$ and so have degree at most ${\operatorname{St}}_{H_1}(1)$ by Theorem \ref{thm:steinberg}. But $C_3 \times C_3$ is abelian, so $\theta(1) \leq 2 \cdot {\operatorname{St}}_{H_1}(1) = |K_1|_2$. Now we can proceed as in the case $q \geq 4$.

3) Now we proceed to establish the logarithmic lower bound for the simple groups $S$ of type $D_n$ or $^2D_n$ over ${\mathbb{F}}_q$ with $q$ odd and $n \geq 4$. It is convenient to work instead with $G := {\operatorname{SO}}^{\epsilon}_{2n}(q)$, since $G^* \cong G$. Since the lower bound is obvious when $n \leq q^3$, we will assume that $n > q^3 > 3{\mathfrak{n}}s_3+2$. Let $d^* \geq 3$ be the largest integer such that $m:= \sum^{d^*}_{d=3}d({\mathfrak{n}}s_d/2) \leq n-2$.
In particular, $$\sum^{d^*+1}_{d=3}\frac{q^d}{2} > \sum^{d^*+1}_{d=3}d({\mathfrak{n}}s_d/2) \geq n-1,$$ and so \begin{equation}\label{for-d2} d^*+1 \geq \log_q((2n-1)(1-1/q)+q^2). \end{equation} Observe that $G_1 := {\operatorname{SO}}^{+}_{2m}(q)$ contains a semisimple element $s_1$ with $$C_{G_1}(s_1) = {\operatorname{GL}}_{1}(q^3)^{{\mathfrak{n}}s_3/2} \times {\operatorname{GL}}_{1}(q^4)^{{\mathfrak{n}}s_4/2} \times \ldots \times {\operatorname{GL}}_{1}(q^{d^*})^{{\mathfrak{n}}s_{d^*}/2}.$$ (Indeed, each of the ${\mathfrak{n}}s_d/2$ pairs $\{f,\check{f}\}$ of monic irreducible polynomials of degree $d$ over ${\mathbb{F}}_q$ with $f \neq \check{f}$ gives us an embedding ${\operatorname{GL}}_1(q^d) \hookrightarrow {\operatorname{SO}}^{+}_{2d}(q)$.) If $s_1 \in \Omega^+_{2m}(q)$, then choose $s := {\operatorname{diag}}(I_{2n-2m},s_1)$ so that $$C_G(s) = {\operatorname{SO}}^{\epsilon}_{2n-2m}(q) \times C_{G_1}(s_1).$$ Suppose for the moment that $s_1 \notin \Omega^+_{2m}(q)$. Note that there is some $\delta \in {\mathbb{F}}_{q^2} \setminus (C_{q+1} \cup {\mathbb{F}}_{q})$ such that $h \neq \check{h}$ for the minimal (monic) polynomial $h \in {\mathbb{F}}_q[t]$ of $\delta$ and moreover the ${\mathbb{F}}_q$-norm of $\delta$ is a non-square in ${\mathbb{F}}_q^{\times}$. Hence by \cite[Lemma 2.7.2]{KL}, under the embedding ${\operatorname{GL}}_{1}(q^2) \hookrightarrow {\operatorname{GL}}_2(q) \hookrightarrow {\operatorname{SO}}^+_4(q)$, $\delta$ gives rise to an element $s_2 \in {\operatorname{SO}}^+_4(q) \setminus \Omega^+_4(q)$. Now we choose $s := {\operatorname{diag}}(I_{2n-2m-4},s_2,s_1)$ so that $$C_G(s) = {\operatorname{SO}}^{\epsilon}_{2n-2m-4}(q) \times {\operatorname{GL}}_1(q^2) \times C_{G_1}(s_1).$$ Our construction ensures that $s \in [G,G] = \Omega^{\epsilon}_{2n}(q)$. Next we consider the (regular) irreducible character $\rho$ labeled by $((s),{\operatorname{St}}_{C_G(s)})$. The inclusion $s \in [G,G]$ implies that $\rho$ is trivial at $Z(G)$. Since $S = {\operatorname{P\Omega}}^{\epsilon}_{2n}(q)$ is a normal subgroup of index $2$ in $G/Z(G)$, we see that $S$ has an irreducible character $\chi$ of degree at least $\rho(1)/2$. Hence in the case $s_1 \in \Omega^+_{2m}(q)$ we have $$\frac{\chi(1)}{{\operatorname{St}}(1)} \geq \frac{1}{2} \cdot \frac{\prod^{n-1}_{i=n-m}(1-q^{-2i}) \cdot (1-\epsilon q^{-n})} {\prod^{d^*}_{j=3}(1-q^{-j})^{{\mathfrak{n}}s_j/2} \cdot (1-\epsilon q^{m-n})},$$ whereas in the case $s_1 \notin \Omega^{+}_{2m}(q)$ we have that $$\frac{\chi(1)}{{\operatorname{St}}(1)} \geq \frac{1}{2} \cdot \frac{\prod^{n-1}_{i=n-m-2}(1-q^{-2i}) \cdot (1-\epsilon q^{-n})} {\prod^{d^*}_{j=3}(1-q^{-j})^{{\mathfrak{n}}s_j/2} \cdot (1-\epsilon q^{m-n+2}) \cdot(1-q^{-2})}$$ (with the convention that $1-\epsilon q^{m-n+2} = 1$ when $m=n-2$). Observe that $(1-\epsilon q^{-n})/(1-\epsilon q^{-k}) > q/(q+1) \geq 3/4$ for $0 \leq k \leq n$. Furthermore, since $n \geq m+2$ we have $$\prod^{n-1}_{i=n-m}(1-q^{-2i}) > \frac{\prod^{n-1}_{i=n-m-2}(1-q^{-2i})}{1-q^{-2}} > \prod^{\infty}_{i=2}(1-q^{-2i}) > \frac{71}{72}$$ by Lemma \ref{sum}(i). Thus \begin{equation}\label{for-cs1} c(S) \geq \frac{\chi(1)}{{\operatorname{St}}(1)} > \frac{B}{\prod^{d^*}_{j=3}(1-q^{-j})^{{\mathfrak{n}}s_j/2}}, \end{equation} with $B = (71/72) \cdot (3/4) \cdot (1/2) = 71/192$.
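As an informal check of the constant (not needed for the argument): the factor $1/2$ accounts for passing to the index-$2$ subgroup $S$, the factor $3/4$ for the bound on $(1-\epsilon q^{-n})/(1-\epsilon q^{-k})$, and the factor $71/72$ for the product estimate above, and indeed $$\frac{71}{72} \cdot \frac{3}{4} \cdot \frac{1}{2} = \frac{213}{576} = \frac{71}{192} \approx 0.37.$$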
Applying (\ref{poly2}) we now see that $$\ln\left(\frac{1}{\prod^{d^*}_{j=3}(1-q^{-j})^{{\mathfrak{n}}s_j/2}}\right) > \sum^{d^*}_{j=3}q^{-j}\frac{{\mathfrak{n}}s_j}{2} \geq \sum^{d^*}_{j=3}\frac{3}{8j} \geq \frac{3\ln(d^*+1)}{8} - \frac{9}{16}.$$ Together with (\ref{for-d2}) this implies that \begin{equation}\label{for-cs2} c(S) > \frac{B}{e^{9/16}}\left(\log_q((2n-1)(1-\frac{1}{q})+q^2)\right)^{3/8} > \frac{B}{e^{9/16}}\left(\log_{q}\frac{4n+25}{3}\right)^{3/8}, \end{equation} and so we are done, since $Be^{-9/16} > 1/5$.

4) We will now briefly discuss how one can prove the lower bound in the remaining cases. Notice that the lower bound is obvious when $n \leq q^6$, so we will assume $n > q^6$.

4a) Consider the case $G$ is of type $C_n$ over ${\mathbb{F}}_q$ with $q$ odd; in particular, $G^* = {\operatorname{SO}}_{2n+1}(q)$. Choose $d^* \geq 3$ largest possible such that $m := \sum^{d^*}_{d=3}d{\mathfrak{n}}s_d/2 \leq n-2$, and so (\ref{for-d2}) holds. Also choose $s_1 \in G_1 := {\operatorname{SO}}_{2m+1}(q)$ a semisimple element with $$C_{G_1}(s_1) = \prod^{d^*}_{d=3}{\operatorname{GL}}_{1}(q^d)^{{\mathfrak{n}}s_d/2}.$$ If $s_1 \in \Omega_{2m+1}(q)$ then we choose $s := {\operatorname{diag}}(I_{2n-2m+1},s_1)$, and if $s_1 \notin \Omega_{2m+1}(q)$ then we choose $s:={\operatorname{diag}}(I_{2n-2m-3},s_2,s_1)$ where $s_2$ is defined as in 3). As above, $s$ gives rise to a regular character $\rho$ of $G$ which is trivial at $Z(G)$, so $\rho$ can be viewed as an irreducible character of $S := {\operatorname{PSp}}_{2n}(q)$. The same arguments as in 3) now show that (\ref{for-cs1}) holds with $B = 71/72$, and (\ref{for-cs2}) holds with $Be^{-9/16} > 1/2$.

4b) Assume now that $G = {\operatorname{SO}}_{2n+1}(q)$ with $q$ odd. Then we choose $d^*$ and $m$ as in 4a), and choose $s \in G^* = {\operatorname{Sp}}_{2n}(q)$ a semisimple element with $$C_{G^*}(s) = {\operatorname{Sp}}_{2n-2m}(q) \times \prod^{d^*}_{d=3}{\operatorname{GL}}_{1}(q^d)^{{\mathfrak{n}}s_d/2}.$$ Let $\chi$ be an irreducible constituent of the restriction to $S := \Omega_{2n+1}(q)$ of the regular character labeled by $(s)$. The same arguments as in 3) now show that (\ref{for-cs1}) holds with $B = (71/72) \cdot (1/2)$, and (\ref{for-cs2}) holds with $Be^{-9/16} > 1/4$.

4c) Next suppose that $G$ is of type $C_n$ over ${\mathbb{F}}_q$ with $q$ even; in particular, $G^* \cong {\operatorname{Sp}}_{2n}(q)$. Choose $d^* \geq 5$ largest possible such that $m := \sum^{d^*}_{d=5}d{\mathfrak{n}}s_d/2 \leq n$, and so instead of (\ref{for-d2}) we now have \begin{equation}\label{for-d3} d^*+1 > \log_q((2n+3)(1-1/q)+q^4). \end{equation} We can find a semisimple element $s \in G^*$ such that $$C_{G^*}(s) = {\operatorname{Sp}}_{2n-2m}(q) \times \prod^{d^*}_{d=5}{\operatorname{GL}}_{1}(q^d)^{{\mathfrak{n}}s_d/2}.$$ By Lemma \ref{sum}(i), $\prod^{\infty}_{i=1}(1-q^{-2i}) > 11/16$. Considering the regular character $\rho$ labeled by $(s)$, we now obtain \begin{equation}\label{for-cs3} c(S) \geq \frac{\rho(1)}{{\operatorname{St}}(1)} > \frac{B}{\prod^{d^*}_{j=5}(1-q^{-j})^{{\mathfrak{n}}s_j/2}}, \end{equation} with $B = (11/16)$. Applying (\ref{poly2}) and arguing as in 3), we arrive at \begin{equation}\label{for-cs4} c(S) > \frac{B}{e^{25/32}}\left(\log_q((2n+3)(1-\frac{1}{q})+q^4)\right)^{3/8} > \frac{B}{e^{25/32}}\left(\log_{q}(n+17)\right)^{3/8}, \end{equation} and so we are done, since $Be^{-25/32} > 0.3$.

4d) Finally, let $G = \Omega^{\epsilon}_{2n}(q)$ with $q$ even and $n \geq 4$; in particular, $G^* \cong G$.
Now we choose $d^*$ and $m$ as in 4c), and fix a semisimple element $s \in G^*$ with $$C_{G^*}(s) = \Omega^{\epsilon}_{2n-2m}(q) \times \prod^{d^*}_{d=5}{\operatorname{GL}}_{1}(q^d)^{{\mathfrak{n}}s_d/2}.$$ Using the estimate $(1-\epsilon q^{-n})/(1-\epsilon q^{m-n}) > 2/3$ and arguing as in 4c), we see that (\ref{for-cs3}) holds with $B = (11/16) \cdot (2/3) = 11/24$. Consequently, (\ref{for-cs4}) holds with $Be^{-25/32} > 1/5$. \end{proof}

Following the same approach, A. Schaeffer has proved:

\begin{thm} \label{bound6} {\rm \cite{Sc}} Let $G$ be any of the following twisted Lie-type groups of type $A$: ${\operatorname{GU}}_n(q)$, ${\operatorname{PGU}}_n(q)$, ${\operatorname{SU}}_n(q)$, or ${\operatorname{PSU}}_n(q)$. Then the following inequalities hold: $$\max\left\{1, \frac{1}{4} \left(\log_{q}((n-1)(1-\frac{1}{q^2})+q^4)\right)^{2/5}\right\} \leq \frac{b(G)}{q^{n(n-1)/2}} < 2(\log_q(n(q^2-1)+q^2))^{1.27}.$$ \end{thm}

\begin{proof}[Proof of Theorem \ref{main2}] The cases where $G$ is an exceptional group of Lie type follow from the proof of Proposition \ref{exc}. Consider the case $G$ is classical. Then the upper bound follows from Theorems \ref{bound4}, \ref{bound5}, and \ref{bound6}. We need only to add some explanation for the groups of type $A$, twisted or untwisted. For instance, let $G$ be a group of Lie type in the same isogeny class as $L := {\operatorname{SL}}_n(q)$. Then $G \cong (L/Z) \cdot C_d$, where $Z$ is a central subgroup of order $d$ of $L$, and furthermore the subgroup of all automorphisms of $L/Z$ induced by conjugations by elements in $G$ is contained in ${\operatorname{PGL}}_n(q)$. Now consider any $\chi \in {\operatorname{Irr}}(G)$. Let $\chi_1$ be an irreducible constituent of $\chi|_{L/Z}$ viewed as a character of $L$ and let $\chi_2$ be an irreducible constituent of ${\operatorname{Ind}}^{H}_{L}(\chi_1)$, where $H := {\operatorname{GL}}_n(q)$. Since the quotients $G/(L/Z)$ and $H/L$ are cyclic, we see that $\chi(1)/\chi_1(1)$ is the index (in $G$) of the inertia group of $\chi_1$ in $G$, which is at most the index (in $H$) of the inertia group of $\chi_1$ in $H$, and the latter index is just $\chi_2(1)/\chi_1(1)$. It follows that $\chi(1) \leq \chi_2(1) \leq b(H)$. The same argument applies to the twisted case of type $A$. For the lower bound, observe that there is some $d_{\varepsilon} \geq 5$ depending on $\varepsilon$ such that $${\mathfrak{n}}_d \geq {\mathfrak{n}}s_d > (1-\varepsilon)\frac{q^d}{d} \quad \mbox{for all } d \geq d_{\varepsilon}.$$ Choosing $A \leq (d_{\varepsilon})^{(\varepsilon-1)/\gamma}$ we can guarantee that the lower bound holds for $n \leq q^{d_{\varepsilon}}$. Hence we may assume that $n \geq q^{d_{\varepsilon}}+1 \geq d_{\varepsilon}{\mathfrak{n}}_{d_{\varepsilon}}+3$. Now we can repeat the proofs of Theorems \ref{bound4} and \ref{bound5}, replacing the products $\prod^{d^*}_{d=3}$, respectively $\prod^{d^*}_{d=5}$, by $\prod^{d^*}_{d = d_{\varepsilon}}$. \end{proof}

\begin{proof}[Proof of Theorem \ref{main3}] To guarantee the lower bound in the case $\ell = p$ we can take $C \leq 1$, since the Steinberg character, being of $p$-defect~0, is irreducible modulo $p$. Assume that $\ell \neq p$. As usual, by choosing $C$ small enough we can ignore any finite number of simple groups; also, it suffices to prove the lower bound for the unique non-abelian composition factor $S$ of $G$.
So we will work with $S = G/Z(G)$, where ${\mathcal{G}}$ is a simple simply connected algebraic group and $G = {\mathcal{G}}^F$ is the corresponding finite group over ${\mathbb{F}}_q$. Consider the pair $({\mathcal{G}}^*,F^*)$ dual to $({\mathcal{G}},F)$ and the dual group $G^*:=({\mathcal{G}}^*)^{F^*}$. It is well known that, for $q \geq 5$, ${\mathrm {IBr}}_{\ell}({\operatorname{PSL}}_2(q))$ contains a character of degree $\geq q-1$, so we may assume that $r:={\operatorname{rank}}({\mathcal{G}}) > 1$. We will show that, with a finite number of exceptions, $[G^*,G^*]$ contains a regular semisimple $\ell'$-element $s$ with connected centralizer and such that $C_{G^*}(s)$ is a torus of order $\leq 2q^r$. For such an $s$, the corresponding semisimple character $\chi = \chi_s$ can be viewed as an irreducible character of $S$ of degree $|G|_{p'}/|C_{G^*}(s)| > C\cdot|G|_p$ (with $C > 0$ suitably chosen). Moreover, any Brauer character in the $\ell$-block of $G$ containing $\chi$ has degree divisible by $\chi(1)$ as a consequence of a result of Brou\'e--Michel, see \cite[Prop. 1]{HM}. Hence the reduction modulo $\ell$ of $\chi$ is irreducible and so $b_{\ell}(S) \geq \chi(1)$. To find such an $s$, we will work with two specific tori $T_1$ and $T_2$ of $G^*$. For $G={}^3D_4(q)$ we can choose $|T_1|=q^4-q^2+1$ and $|T_2|=(q^2-q+1)^2$. For $G = {\operatorname{SU}}_n(q)$, we choose $$(|T_1|,|T_2|) = \begin{cases} \left(\frac{(q^{n/2}+1)^2}{q+1}, q^{n-1}+1\right)& \text{if $n \equiv 2\pmod4$},\\ \left(\frac{q^n+1}{q+1},(q^{(n-1)/2}+1)^2\right)& \text{if $n \equiv 3\pmod4$}. \end{cases}$$ If $G$ is of type $B_n$ or $C_n$ with $2|n$, we can choose $$(|T_1|,|T_2|) = (q^n+1,(q^{n/2}+1)^2).$$ For all other $G$, $T_1$ and $T_2$ can be chosen of order indicated in \cite[Tables~3.5 and~4.2]{Ma2}. We may assume that either $q$ or the rank of $G$ is sufficiently large, so in particular Zsigmondy primes $r_i$ \cite{Zs} exist for the cyclotomic polynomials $\Phi_{m_i}(q)$ of largest possible $m_i$ dividing the orders $|T_i|$. Here $i = 1,2$, and, furthermore, for $i = 2$ we need to assume that $G$ is not ${\operatorname{SL}}_3(q)$, ${\operatorname{SU}}_3(q)$, or ${\operatorname{Sp}}_4(q) \cong {\operatorname{Spin}}_5(q)$. According to \cite{F}, either $r_2 > m_2+1$ or $r_2^2$ divides $\Phi_{m_2}(q)$, again with finitely many exceptions. Now if $r_1 \neq \ell$, respectively if $\ell = r_1 \neq r_2$ and $r_2$ is larger than all torsion primes of ${\mathcal{G}}$ (see e.g. \cite[Table 2.3]{MT} for the list of them), we can choose $s \in T_i$ of prime order $r_i$, with $i = 1$, respectively $i=2$, and observe that $r_i$ is coprime to all torsion primes of ${\mathcal{G}}$ as well as to $|G^*/[G^*,G^*]|$. It follows that $C_{{\mathcal{G}}^*}(s)$ is connected (cf. \cite[Prop. 14.20]{MT} for instance), $s \in [G^*,G^*]$, and moreover $s$ can be chosen so that $|C_{G^*}(s)| = |T_i|$. Thus $s$ has the desired properties, and so we are done. We observe that $r_2$ can be a torsion prime for ${\mathcal{G}}$ only when $r_2 = m_2+1$ and $(G,r_2)$ is $({\operatorname{SL}}_n(q),n)$, or $({\operatorname{SU}}_n(q),n)$ with $n \equiv 3 \pmod4$. In either case we can choose $s \in T_2 \cap [G^*,G^*]$ of order $r_2^2$.
Furthermore, if $G = {\operatorname{SL}}^{\epsilon}_3(q)$ with $q \geq 5$, we fix $\alpha \in {\mathbb{F}}_{q^2}^{\times}$ of order $q+\epsilon$ and choose $s \in T_2$ with an inverse image ${\operatorname{diag}}(\alpha,\alpha^{-1},1)$ in ${\mathcal{G}}$. If $G = {\operatorname{Sp}}_4(q)$ with $q \geq 8$, we fix $\beta \in {\mathbb{F}}_{q^2}^{\times}$ of order $q+1$ and choose $s \in T_2$ with an inverse image ${\operatorname{diag}}(\beta,\beta^{-1},\beta^2,\beta^{-2})$ in ${\operatorname{Sp}}_4(\overline{{\mathbb{F}}}_q) \cong {\operatorname{Spin}}_5(\overline{{\mathbb{F}}}_q)$. It remains to show that in these cases the element $s$ has the desired properties. In fact, it suffices to show that $C_{{\mathcal{G}}^*}(s)$ is a torus. Consider for instance the case $G = {\operatorname{SL}}_n(q)$ (so $r_2 = n$). Then $s$ can be chosen to have an inverse image $g := {\operatorname{diag}}(\gamma,\gamma^q, \ldots,\gamma^{q^{n-2}},1)$ in the simply connected group $\hat{\mathcal{G}}^*$, where $|\gamma| = n^2$ and ${\mathcal{G}}^* = \hat{\mathcal{G}}^*/Z(\hat{\mathcal{G}}^*)$. Suppose $x \in \hat{\mathcal{G}}^*$ centralizes $g$ modulo $Z(\hat{\mathcal{G}}^*)$. Then $xgx^{-1} = \delta g$ for some $\delta \in \overline{{\mathbb{F}}}_q^{\times}$ with $\delta^n = 1$. It follows that $\delta$ is an eigenvalue of $g$ of order dividing $n$, and so $\delta = 1$. Thus $C_{{\mathcal{G}}^*}(s)$ equals $C_{\hat{\mathcal{G}}^*}(g)/Z(\hat{\mathcal{G}}^*)$ and so it is a torus. Similar arguments apply to all the remaining cases. \end{proof}

\begin{thebibliography}{ABC}

\bibitem[A]{A} G. Andrews, `\emph{The Theory of Partitions}'. Addison-Wesley, Reading, Mass., 1976.

\bibitem[Be]{Be} Y. Berkovich, Groups with few characters of small degrees. \emph{Israel J. Math.} {\bf 110} (1999), 325--332.

\bibitem[C]{C} R. W. Carter, `\emph{Finite Groups of Lie Type. Conjugacy classes and Complex Characters}'. Wiley and Sons, New York et al, $1985$.

\bibitem[Atlas]{Atlas} J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker, and R. A. Wilson, `\emph{ATLAS of Finite Groups}'. Clarendon Press, Oxford, $1985$.

\bibitem[DM]{DM} F. Digne and J. Michel, `\emph{Representations of Finite Groups of Lie Type}'. London Math. Soc. Student Texts $21$, Cambridge University Press, $1991$.

\bibitem[F]{F} W. Feit, On large Zsigmondy primes, \emph{Proc. Amer. Math. Soc.} {\bf 102} (1988), 29--36.

\bibitem[FS]{FS} P. Fong and B. Srinivasan, The blocks of finite classical groups. \emph{J. reine angew. Math.} {\bf 396} (1989), 122--191.

\bibitem[FG]{FG} J. Fulman and R. M. Guralnick, Bounds on the number and sizes of conjugacy classes in finite Chevalley groups. \emph{Trans. Amer. Math. Soc.} (to appear).

\bibitem[GAP]{GAP} The GAP group, `\emph{{\sf GAP} - groups, algorithms, and programming}', Version 4.4. 2004, {\tt http://www.gap-system.org}.

\bibitem[H1]{H1} G. Hiss, Regular and semisimple blocks of finite reductive groups. \emph{J. London Math. Soc.} {\bf 41} (1990), 63--68.

\bibitem[HM]{HM} G. Hiss and G. Malle, Low-dimensional representations of special unitary groups. \emph{J. Algebra} {\bf 236} (2001), 745--767.

\bibitem[I]{I} I. M. Isaacs, Bounding the order of a group with a large character degree. Preprint.

\bibitem[KL]{KL} P. B. Kleidman and M. W. Liebeck, `\emph{The Subgroup Structure of the Finite Classical Groups}'. London Math. Soc. Lecture Note Ser. no. $129$, Cambridge University Press, $1990$.

\bibitem[KT]{KT} A. S.
Kleshchev and P.~H.~Tiep, Representations of finite special linear groups in non-defining characteristic. \emph{Adv. Math.} {\bf 220} (2009), 478--504.

\bibitem[LS]{LS} B. F. Logan and L. A. Shepp, A variational problem for random Young tableaux. \emph{Advances in Math.} {\bf 26} (1977), 206--222.

\bibitem[Lu]{Lu} F. L\"ubeck, Data for Finite Groups of Lie Type and Related Algebraic Groups. \texttt{http://www.math.rwth-aachen.de/$\sim$Frank.Luebeck/chev}

\bibitem[Ma1]{Ma1} G. Malle, Unipotente Grade imprimitiver komplexer Spiegelungsgruppen. \emph{J. Algebra} {\bf 177} (1995), 768--826.

\bibitem[Ma2]{Ma2} G. Malle, Almost irreducible tensor squares. \emph{Comm. Algebra} {\bf 27} (1999), 1033--1051.

\bibitem[MT]{MT} G. Malle and D. Testerman, `\emph{Linear Algebraic Groups and Finite Groups of Lie Type}'. Cambridge University Press, to appear.

\bibitem[N]{N} H.~N.~Nguyen, Low-dimensional complex characters of the symplectic and orthogonal groups. \emph{Comm. Algebra} {\bf 38} (2010), 1157--1197.

\bibitem[Ol]{Ol} J. B. Olsson, Remarks on symbols, hooks and degrees of unipotent characters. \emph{J. Combin. Theory Ser. A} {\bf 42} (1986), 223--238.

\bibitem[Sc]{Sc} A. Schaeffer, Representations of finite unitary groups. Preprint.

\bibitem[S]{S} G. M. Seitz, Cross-characteristic embeddings of finite groups of Lie type. \emph{Proc. London Math. Soc.} {\bf 60} (1990), 166--200.

\bibitem[Sn]{Sn} N. Snyder, Groups with a character of large degree. \emph{Proc. Amer. Math. Soc.} {\bf 136} (2008), 1893--1903.

\bibitem[TZ1]{TZ1} P.~H.~Tiep and A. E. Zalesskii, Unipotent elements of finite groups of Lie type and realization fields of their complex representations. \emph{J. Algebra} {\bf 271} (2004), 327--390.

\bibitem[TZ2]{TZ2} P.~H.~Tiep and A. E. Zalesskii, Real conjugacy classes in algebraic groups and finite groups of Lie type. \emph{J. Group Theory} {\bf 8} (2005), 291--315.

\bibitem[VK]{VK} A. M. Vershik and S. V. Kerov, Asymptotic behavior of the Plancherel measure of the symmetric group and the limit form of Young tableaux. (Russian), \emph{Dokl. Akad. Nauk SSSR} {\bf 233} (1977), 1024--1027; English translation: \emph{Soviet Math. Dokl.} {\bf 233} (1977), 527--531.

\bibitem[Zs]{Zs} K. Zsigmondy, Zur Theorie der Potenzreste. \emph{Monatsh. Math. Phys.} {\bf 3} (1892), 265--284.

\end{thebibliography} \end{document}
\begin{document} \title{Uniformly Convex-Transitive Function Spaces} \author[F. Rambla]{Fernando Rambla-Barreno} \address{Fernando Rambla\\ Universidad de C\'{a}diz, Departamento de Matem\'{a}ticas, 11510, Puerto Real, Spain} \email{[email protected]} \author[J. Talponen]{Jarno Talponen} \address{Jarno Talponen\\ University of Helsinki, Department of Mathematics and Statistics, Box 68, FI-00014 University of Helsinki, Finland} \email{[email protected]} \subjclass[2000]{Primary 46B04, 46B20; Secondary 46B25} \date{\today} \begin{abstract} We introduce a property of Banach spaces, called uniform convex-transitivity, which falls between almost transitivity and convex-transitivity. We will provide examples of uniformly convex-transitive spaces. This property behaves nicely in connection with some vector-valued function spaces. As a consequence, we obtain some new examples of convex-transitive Banach spaces. \end{abstract} \maketitle \section{Introduction} In this paper we study the symmetries of some well-known, in fact, almost classical Banach spaces. We denote the closed unit ball of a Banach space $\mathrm{X}$ by $\mathbf{B}_{\mathrm{X}}$ and the unit sphere of $\mathrm{X}$ by $\mathbf{S}_{\mathrm{X}}$. A Banach space $\mathrm{X}$ is called \emph{transitive} if for each $x\in \mathbf{S}_{\mathrm{X}}$ the orbit $\mathcal{G}_{\mathrm{X}}(x)\stackrel{\cdot}{=}\{T(x)|\ T\colon \mathrm{X}\rightarrow \mathrm{X}\ \mathrm{is\ an\ isometric\ automorphism}\}=\mathbf{S}_{\mathrm{X}}$. If $\overline{\mathcal{G}_{\mathrm{X}}(x)}=\mathbf{S}_{\mathrm{X}}$ (resp. $\overline{\mathrm{conv}}(\mathcal{G}_{\mathrm{X}}(x))=\mathbf{B}_{\mathrm{X}}$) for all $x\in\mathbf{S}_{\mathrm{X}}$, then $\mathrm{X}$ is called \emph{almost transitive} (resp. \emph{convex-transitive}). These concepts are motivated by the \emph{Banach-Mazur rotation problem} appearing in \cite[p.242]{Ba}, which remains unsolved. We refer to \cite{BR2} and \cite{Ca0} for a survey and discussion on the matter. The known concrete examples of convex-transitive spaces are scarce, and the ultimate aim of this paper is to provide more examples by establishing the convex-transitivity of some vector-valued function spaces and other natural spaces. It was first reported by Pelczynski and Rolewicz \cite{PR} in 1962 that the space $L^p$ is almost transitive for $p\in [1,\infty)$ and convex-transitive for $p=\infty$ (see also \cite{Rol}). Later, Wood \cite{Wo} characterized the spaces $C_0^{\mathbb{R}}(L)$ whose norm is convex-transitive (see Preliminaries). Greim, Jamison and Kaminska \cite{GJK} proved that if $\mathrm{X}$ is almost transitive and $1\leq p<\infty$, then the Lebesgue-Bochner space $L^p(\mathrm{X})$ is also almost transitive. Recently, an analogous study of the spaces $C_0(L,\mathrm{X})$ was done by Aizpuru and Rambla \cite{AR}, and some related spaces were studied by Talponen \cite{conv}. For some other relevant contemporary results, see \cite{Ca?}, \cite{Kawamura} and \cite{Rambla}. We will extend these investigations into the vector-valued convex-transitive setting, which differs considerably in many respects from the scalar-valued almost transitive one. For this purpose we will introduce a new concept which is (formally) stronger than convex-transitivity and weaker than almost transitivity, called \emph{uniform convex-transitivity}. With the aid of this class of Banach spaces we produce new natural examples of convex-transitive vector-valued function spaces.
The main results of this paper are the following: \begin{itemize} \item Characterization of locally compact Hausdorff spaces $L$ such that $C_0^{\mathbb{R}}(L)$ is uniformly convex-transitive. \item If $\mathrm{X}$ is a uniformly convex-transitive Banach space, then so is $L_{\mathbb{K}}^{\infty}(\mathrm{X})$. \item If $\mathrm{X}$ and $C_0^{\mathbb{R}}(L)$ are uniformly convex-transitive, then so is $C_0^{\mathbb{K}}(L,\mathrm{X})$. \end{itemize} \subsection*{Preliminaries} The scalar field of a Banach space $\mathrm{X}$ is denoted by $\mathbb{K}$ and whenever there are several Banach spaces under discussion, then $\mathbb{K}$ is the scalar field of the space denoted by $\mathrm{X}$. The open unit ball of $\mathrm{X}$ is denoted by $\mathbf{U}_{\mathrm{X}}$. The group of rotations $\mathcal{G}_{\mathrm{X}}$ of $\mathrm{X}$ consists of isometric automorphisms $T\colon\mathrm{X}\rightarrow\mathrm{X}$, the group operation being the composition of the maps and the neutral element being the identity map $\mathbf{I}\colon \mathrm{X}\rightarrow\mathrm{X}$. We will always consider $\mathcal{G}_{\mathrm{X}}$ equipped with the strong operator topology (SOT). An element $x\in \mathbf{S}_{\mathrm{X}}$ is called a big point if $\overline{\mathrm{conv}}(\mathcal{G}_{\mathrm{X}}(x))=\mathbf{B}_{\mathrm{X}}$. Thus $\mathrm{X}$ is convex-transitive if and only if each $x\in\mathbf{S}_{\mathrm{X}}$ is a big point. Recall that a topological space is totally disconnected if each connected component of the space is a singleton. In what follows $L$ is a locally compact Hausdorff space and $K$ is a compact Hausdorff space, unless otherwise stated. In \cite{Wo} Wood characterized convex-transitive $C_0^{\mathbb{R}}(L)$ spaces. Namely, $C_0^{\mathbb{R}}(L)$ is convex-transitive if and only if $L$ is totally disconnected and for every regular probability measure $\mu$ on $L$ and $t\in L$ there exists a net $\{\gamma_{\alpha}\}_{\alpha}$ of homeomorphisms on $L$ such that the net $\{\mu\circ \gamma_{\alpha}\}_{\alpha}$ is $\omega^{\ast}$-convergent to the Dirac measure $\delta_{t}$. The above mapping $\mu\circ \gamma_{\alpha}$ is given by $\mu\circ \gamma_{\alpha}(A)=\mu(\gamma_{\alpha}(A))$ for Borel sets $A\subset L$. We refer to \cite{Lac} for background information on measure algebras and isometries of $L^p$-spaces and to \cite{HHZ} for a suitable source on Banach spaces in general. In what follows $\Sigma$ is the completed $\sigma$-algebra of Lebesgue measurable sets on $[0,1]$ and we denote by $m\colon \Sigma\rightarrow \mathbb{R}$ the Lebesgue measure. Define an equivalence relation $\ae$ on $\Sigma$ by setting $A\ae B$ if $m((A\cup B)\setminus (A\cap B))=0$. Recall that a rotation $R$ on the space $C_0^{\mathbb{K}}(L,\mathrm{X})$ is said to be of the \emph{Banach-Stone type}, if $R$ can be written as \[R(f)(t)=\sigma(t)(f\circ\phi(t)),\quad f\in C_0^{\mathbb{K}}(L,\mathrm{X}),\] where $\phi\colon L\rightarrow L$ is a homeomorphism and $\sigma\colon L\rightarrow \mathcal{G}_{\mathrm{X}}$ is a continuous map.
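To illustrate the Banach-Stone form by a simple instance (included only as an example): on $C^{\mathbb{C}}(S^{1})$ the operator \[R(f)(z)=z\cdot f(\bar{z}),\quad f\in C^{\mathbb{C}}(S^{1}),\ z\in S^{1},\] is a surjective linear isometry (indeed $R\circ R=\mathbf{I}$), and it is of the Banach-Stone type with $\phi(z)=\bar{z}$ and $\sigma(z)\in\mathcal{G}_{\mathbb{C}}$ given by multiplication by $z$.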
A Banach space $\mathrm{Y}$ is said to be contained \emph{almost isometrically} in a Banach space $\mathrm{X}$ if for each $\varepsilon>0$ there is a linear map $\psi\colon \mathrm{Y}\rightarrow \mathrm{X}$ such that \[||y||_{\mathrm{Y}}\leq ||\psi(y)||_{\mathrm{X}}\leq (1+\varepsilon)||y||_{\mathrm{Y}}\quad \mathrm{for}\ y\in \mathrm{Y}.\] \section{Uniform convex-transitivity} Provided that the space $\mathrm{X}$ under discussion is understood, we denote \[C_{n}(x)=\left\{\sum_{i=1}^{n}a_{i}T_{i}(x)|\ T_{1},\ldots,T_{n}\in \mathcal{G}_{\mathrm{X}},\ a_{1},\ldots,a_{n}\in [0,1],\ \sum_{i=1}^{n}a_{i}=1\right\}\]for $n\in \mathbb{N}$ and $x\in \mathbf{S}_{\mathrm{X}}$. We call a Banach space $\mathrm{X}$ \emph{uniformly convex-transitive} if for each $\varepsilon>0$ there exists $n\in \mathbb{N}$ satisfying the following condition: For all $x\in \mathbf{S}_{\mathrm{X}}$ and $y\in \mathbf{B}_{\mathrm{X}}$ it holds that $\mathrm{dist}(y,C_{n}(x))\leq \varepsilon$, that is \[\lim_{n\rightarrow \infty}\sup_{x \in \mathbf{S}_{\mathrm{X}}, y \in \mathbf{B}_{\mathrm{X}}}\mathrm{dist}(y,C_{n}(x))=0.\] We denote by $K_{\varepsilon}$ the least integer $n$, which satisfies the above inequality involving $\varepsilon$ and such $K_{\varepsilon}$ is called \emph{the constant of uniform convex transitivity of $\mathrm{X}$ associated to $\varepsilon$.} We call $x\in \mathbf{S}_{\mathrm{X}}$ a \emph{uniformly big point} if \[\lim_{n\rightarrow \infty}\sup_{y\in \mathbf{B}_{\mathrm{X}}}\mathrm{dist}(y,C_{n}(x))=0.\] Clearly almost transitive spaces are uniformly convex-transitive, and uniformly convex-transitive spaces are convex-transitive. It is well-known that $C^{\mathbb{C}}(S^{1})$ is a convex-transitive, non-almost transitive space, and it is easy to see (see e.g. the subsequent Theorem \ref{thm: C0char}) that it is even uniformly convex-transitive. Unfortunately, we have not been able so far to find an example of a convex-transitive space which is not uniformly convex-transitive. However, we suspect that such examples exist and we note that the absence of such a complicated space would make some proofs regarding convex-transitive spaces much more simple. Observe that the canonical unit vectors $e_{k}\in \ell^{1}$ are far from being uniformly big points: \[\lim_{n\rightarrow \infty}\sup_{y\in \mathbf{B}_{\ell^{1}}}\mathrm{dist}(y,C_{n}(e_{k}))=1,\] even though they are big points, i.e. $\overline{\mathrm{conv}}(\mathcal{G}_{\ell^{1}}(e_{k}))=\mathbf{B}_{\ell^{1}}$ for $k\in \mathbb{N}$. In any case, we will provide examples of uniformly convex-transitive spaces, most of which are not previously known to be even convex-transitive. We note that if $\mathrm{X}$ is convex-transitive and there exists a uniformly big point $x\in \mathbf{S}_{\mathrm{X}}$, then each $y\in \mathbf{S}_{\mathrm{X}}$ is a uniformly big point. This does not mean, a priori, that $\mathrm{X}$ should be uniformly convex-transitive. Next we give an equivalent condition to uniform convex transitivity, which is more applicable in calculations than the condition introduced above. \begin{proposition} Let $\mathrm{X}$ be a Banach space. 
The following condition of $\mathrm{X}$ is equivalent to $\mathrm{X}$ being uniformly convex-transitive: For each $\varepsilon>0$ there is $N_{\varepsilon}\in \mathbb{N}$ such that for each $x\in \mathbf{S}_{\mathrm{X}}$ and $y\in \mathbf{B}_{\mathrm{X}}$ there are $T_{1},\ldots, T_{N_{\varepsilon}}\in \mathcal{G}_{\mathrm{X}}$ such that \begin{equation}\label{eq: bullet} \begin{array}{l} \left|\left|y-\frac{1}{N_{\varepsilon}}\sum_{i=1}^{N_{\varepsilon}}T_{i}(x)\right|\right|\leq \varepsilon. \end{array} \end{equation} \end{proposition} \begin{proof} It is clear that \eqref{eq: bullet} implies uniform convex transitivity, even for the value $K_{\varepsilon}=N_{\varepsilon}$ for each $\varepsilon>0$. Towards the other direction, let $\mathrm{X}$ be a uniformly convex-transitive Banach space, $\varepsilon>0$ and $x\in \mathbf{S}_{\mathrm{X}},\ y\in \mathbf{B}_{\mathrm{X}}$. Let $K$ be the constant of uniform convex-transitivity of $\mathrm{X}$ associated to $\frac{\varepsilon}{4}$. Then there are $a_{1},\ldots ,a_{K}\in [0,1],\ \sum_{i}a_{i}=1$ and $T_{1},\ldots,T_{K}\in \mathcal{G}_{\mathrm{X}}$ such that \begin{equation*} \begin{array}{l} \left|\left|y-\sum_{i=1}^{K}a_{i}T_{i}(x)\right|\right|\leq \frac{\varepsilon}{4}. \end{array} \end{equation*} Put $m=\lceil\frac{4K}{\varepsilon}\rceil\in \mathbb{N}$, so that $K\cdot\frac{1}{m}\leq \frac{\varepsilon}{4}$. Next we define an $m$-uple $(S_{1},\ldots,S_{m})\subset \mathcal{G}_{\mathrm{X}}$ as follows: For each $j\in \{1,\ldots,m\}$ and $i\in \{1,\ldots,K\}$ we put $S_{j}=T_{i}$ if $\lceil m\sum_{n<i}a_{n}\rceil < j\leq \lfloor m\sum_{n\leq i}a_{n}\rfloor$. (Here $\sum_{\emptyset}a_{n}=0$.) By applying the triangle inequality several times, we obtain that \begin{equation*} \begin{array}{l} \left|\left|y-\frac{1}{m}\sum_{j=1}^{m}S_{j}(x)\right|\right|\leq \varepsilon. \end{array} \end{equation*} Hence it suffices to put $N_{\varepsilon}=m=\lceil\frac{4K}{\varepsilon}\rceil$, where $K$ depends only on the Banach space $\mathrm{X}$ and the value of $\varepsilon$. \end{proof} In what follows, we will apply the constant $N_{\varepsilon}$ freely without explicit reference to the previous proposition, and if there is no danger of confusion, also without mentioning explicitly $\mathrm{X}$ and $\varepsilon$, either. The following condition on the locally compact space $L$ turns out to be closely related to the uniform convex-transitivity of $C_0^{\mathbb{K}}(L)$: \begin{enumerate} \item[$(\ast)$]{For each $\varepsilon>0$ there is $M_{\varepsilon}\in \mathbb{N}$ such that for every non-empty open subset $U\subset L$ and compact $K\subset L$ there are homeomorphisms $\phi_{1},\ldots,\phi_{M_{\varepsilon}}\colon L\rightarrow L$ with \[\frac{1}{M_{\varepsilon}}\sum_{i=1}^{M_{\varepsilon}}\chi_{\phi_{i}^{-1}(U)}(t)\geq 1-\varepsilon\quad \mathrm{for}\ t\in K.\]} \end{enumerate} This condition should be compared with the conditions found by Cabello (see \cite[p.110-113]{Ca?}, especially condition (g)), which characterize the convex transitivity of $C_0(L)$. Next we will give this characterization the uniformly convex-transitive counterpart. If $L$ is a locally compact Hausdorff space, by $\alpha L$ we denote its one-point compactification and if $L$ is noncompact, we denote such point by $\infty$. Prior to the theorem we need the following two lemmas. \begin{lemma}\label{lm: from Ra}(\cite[Thm. 3.1]{Rambla}) Let $T$ be a normal topological space with $\mathrm{dim}\ T \leq 1$. 
If $F\subseteq T$ is a closed subset and $f: F \to S_{\mathbb{C}}$ is a continuous map, then $f$ admits a continuous extension $g: T \to S_{\mathbb{C}}$. \end{lemma} \begin{lemma}\label{lm: technical uct} Let $L$ be a locally compact, Hausdorff, $0$-dimensional space. Then for every $g \in \mathbf{B}_{C_0^{\mathbb{R}}(L)}$ and $k\in \mathbb{N}$ there exist disjoint clopen sets $C_1, C_2, \dots, C_{2k-1}$ such that the function $h \in \mathbf{B}_{C_0^{\mathbb{R}}(L)}$ defined by $h=\sum_{i=1}^{2k-1}\frac{i-k}{k}\chi_{C_i}$ satisfies $\|h-g\| \leq \frac{3}{2k}$. \end{lemma} \begin{proof} We regard $g$ as defined in $\alpha L$. Consider $i \in \{-k,-k+1,\dots,k-1\}$ and let $K_i=g^{-1}[\frac{i}{k},\frac{i+1}{k}]$. Every $x \in K_i$ has a clopen neighbourhood $A_x$ such that $g(A_x)\subseteq [\frac{2i-1}{2k},\frac{2i+3}{2k}]$. By compactness there exist $x_1, \dots, x_n$ such that $K_i \subseteq \bigcup_{j=1}^n A_{x_j}\stackrel{\cdot}{=}B_i$. Finally, define $C_0=B_0$, $C_1=B_{-k} \setminus C_0$, \dots, $C_k=B_{-1}\setminus (C_0 \cup \dots \cup C_{k-1})$, $C_{k+1}=B_1\setminus (C_0 \cup \dots \cup C_k)$, \dots, $C_{2k-1}=B_{k-1}\setminus (C_0 \cup \dots \cup C_{2k-2})$. Note that the $C_i$'s are a partition of $\alpha L$. Now take $h: L \to \mathbb{R}$ given by $h=\sum_{i=1}^{2k-1}\frac{i-k}{k}\chi_{C_i}$. It is easy to check that $\|h-g\| \leq \frac{3}{2k}$ and $h \in \mathbf{B}_{C_0^{\mathbb{R}}(L)}$. \end{proof} \begin{theorem}\label{thm: C0char} Let $L$ be a locally compact Hausdorff space. The space $C_0^{\mathbb{R}}(L)$ is uniformly convex-transitive if and only if $L$ is totally disconnected and satisfies $(\ast)$. If the space $C_0^{\mathbb{C}}(L)$ is uniformly convex-transitive, then $L$ satisfies $(\ast)$. Moreover, if $\mathrm{dim}(\alpha L)\leq 1$, then also the converse implication holds. \end{theorem} Before the proof we comment on the above assumptions. \begin{remark}\label{remark} The spaces $C^{\mathbb{R}}(S^{1},\mathbb{R}^{2})$ and $C^{\mathbb{C}}(S^{1},\mathbb{C})$ are uniformly convex-transitive, their rotations are of the Banach-Stone type, and clearly $S^{1}$, $\mathcal{G}_{\mathbb{R}^{2}}$ and $\mathcal{G}_{\mathbb{C}}$ are not totally disconnected. \end{remark} \begin{proof}[Proof of Theorem \ref{thm: C0char}] Let us first consider the {\it only if} directions. Since uniformly convex-transitive spaces are convex-transitive, we may apply Wood's characterization for convex-transitive $C_0^{\mathbb{R}}(L)$ spaces, and thus we obtain that $L$ must be totally disconnected. Let $C_0^{\mathbb{K}}(L),\ \mathbb{K}\in\{\mathbb{R},\mathbb{C}\},$ be uniformly convex-transitive. Next we aim to check that $L$ satisfies $(\ast)$, so let $U\subset L$ be a non-empty open subset and $K\subset L$ a compact subset. Fix $x_{0}\in U$. Since $\alpha L$ is normal, there exist continuous functions $f,g: \alpha L \to [-1,1]$ satisfying $f(\alpha L \setminus U)=\{0\}$, $f(x_0)=1$, $g(K)=\{1\}$ and $g(\infty)=0$. Since both functions vanish at infinity, we can consider that $f,g \in \mathbf{S}_{C_0^{\mathbb{K}}(L)}$. Fix $\varepsilon>0$ appearing in condition $(\ast)$. Let $N_{\varepsilon}$ be the associated constant provided by the uniform convex-transitivity and condition \eqref{eq: bullet}. 
Then by the definition of $N_{\varepsilon}$ and the Banach-Stone characterization of rotations of $C_0^{\mathbb{K}}(L)$ we obtain that there are continuous functions $\sigma_{1},\ldots ,\sigma_{N_{\varepsilon}}\colon L\rightarrow \mathbb{K}$ and homeomorphisms $\phi_{1},\ldots,\phi_{N_{\varepsilon}}\colon L\rightarrow L$ such that \begin{equation}\label{eq: gNe} \begin{array}{l} \left|\left|g-\frac{1}{N_{\varepsilon}}\sum_{i=1}^{N_{\varepsilon}}\sigma_{i}(f \circ\phi_{i})\right|\right|\leq\varepsilon. \end{array} \end{equation} In particular, this yields for each $t\in K$ that \begin{equation*} \begin{array}{lcl} \varepsilon &\geq& |1-\frac{1}{N_{\varepsilon}}\sum_{i=1}^{N_{\varepsilon}}\sigma_{i}f(\phi_{i}(t))|=|\frac{1}{N_{\varepsilon}}\sum_{i=1}^{N_{\varepsilon}}1-\sigma_{i}f(\phi_{i}(t))|\\ &\geq& \frac{1}{N_{\varepsilon}}\sum_{i=1}^{N_{\varepsilon}}\chi_{L\setminus \phi_{i}^{-1}(U)}(t), \end{array} \end{equation*} where we applied the fact that $f$ vanishes outside $U$. This justifies $(\ast)$ for $M_{\varepsilon}=N_{\varepsilon}$. Let us see the {\it if} direction for $C_0^{\mathbb{R}}(L)$. Let $k \in \mathbb{N}, f \in \mathbf{S}_{C_0^{\mathbb{R}}(L)}$ and $g \in \mathbf{B}_{C_0^{\mathbb{R}}(L)}$. We may assume $\max f=1$. Take $h$ as in Lemma \ref{lm: technical uct}, i.e. $h=\sum_{i=1}^{2k-1}\frac{i-k}{k}\chi_{C_i}$ with each $C_i$ clopen and $\|h-g\| \leq \frac{3}{2k}$. Note that $K\stackrel{\cdot}{=}\bigcup_{i=1}^{2k-1}C_i$ is compact and apply $(*)$ to this $K$, the subset $U\stackrel{\cdot}{=}\{t\in L: f(t) >1-k^{-1}\}$ and $\varepsilon=\frac{1}{k}$. Write $M\stackrel{\cdot}{=}M_\varepsilon$. There exist homeomorphisms $\phi_1, \dots, \phi_M$ such that if $t \in K$ then $\frac{1}{M}\sum_{i=1}^M\chi_{\phi_i^{-1}(U)}(t) \geq 1-k^{-1}$. For each $j \in \{1, \dots, 2k-1\}$, define $B_j=\bigcup_{s=j}^{2k-1}C_s$ and let $T_j$ be the rotation on $C_0^{\mathbb{R}}(L)$ given by $T_jx=(\chi_{B_j}-\chi_{B_j^c})\cdot x$ if $j\leq k$ and $T_jx=(\chi_{B_j}-\chi_{B_j^c}+2\chi_{L\setminus K})\cdot x$ if $j > k$. Now only a few calculations are needed to see that \begin{equation*} \begin{array}{l} \left|\left|g-\frac{1}{M(2k-1)}\sum_{j=1}^{2k-1}\sum_{i=1}^MT_j(f\circ \phi_i)\right|\right| \leq 6k^{-1} \end{array} \end{equation*} and thus $C_0^{\mathbb{R}}(L)$ is uniformly convex transitive. In order to justify the last claim it is required to verify that if $L$ satisfies $\mathrm{dim}(\alpha L)\leq 1$ and $(\ast)$, then $C_0^{\mathbb{C}}(L)$ is uniformly convex-transitive. Let $k\in \mathbb{N}$ and let $M_{k}$ be the corresponding constant in condition $(\ast)$ associated to value $k^{-1}$. Fix $f\in \mathbf{S}_{C_0^{\mathbb{C}}(L)}$ and $g\in\mathbf{B}_{C_0^{\mathbb{C}}(L)}$. We may assume without loss of generality, possibly by multiplying $f$ with a suitable complex number of modulus $1$, that $f(t_{0})=1$ for a suitable $t_{0}\in L$. Let $U\stackrel{\cdot}{=}\{t\in L:\ |1-f(t)|<k^{-1}\}$ and $K=\{t\in L:\ |g(t)|\geq k^{-1}\}$. Let $\phi_{1},\ldots,\phi_{M_{k}}\colon L\rightarrow L$ be homeomorphisms such that $\frac{1}{M_{k}}\sum_{i=1}^{M_{k}}\chi_{\phi_{i}^{-1}(U)}(t)\geq 1-k^{-1}$ for $t\in K$. This means that the average \begin{equation}\label{eq: F} \begin{array}{l} F\stackrel{\cdot}{=}\frac{1}{M_{k}}\sum_{i=1}^{M_{k}}f\circ \phi_{i}\in \mathbf{B}_{C_0^{\mathbb{C}}(L)} \end{array} \end{equation} satisfies $|1-F(t)|\leq 3k^{-1}$ for each $t\in K$. Next we will define some auxiliary mappings. 
Put $\alpha\colon \mathbf{S}_{\mathbb{C}}\times [0,1]\rightarrow \mathbf{S}_{\mathbb{C}};\ \alpha(z,s)=-i^{2s}z$. Note that this is a continuous map, and $\alpha(z,0)=-z$, $\alpha(z,1)=z$ for $z\in\mathbf{S}_{\mathbb{C}}$. Taking into account Lemma \ref{lm: from Ra} with $T=\alpha L$, let $\beta_{g}\colon L\rightarrow \mathbf{S}_{\mathbb{C}}$ be a continuous extension of the function $\frac{g(\cdot)}{|g(\cdot)|}$ defined on $K$. For $j\in \{1,\ldots,k\}$ we define rotations on $C_0^{\mathbb{C}}(L)$ by putting $e_{ja}(x)(t)=\beta_{g}(t)\cdot x(t)$ and $e_{jb}(x)(t)=\alpha(\beta_{g}(t),\min(1,\max(0, k|g(t)|-j)))\cdot x(t)$. The main point above is that $(e_{ja}+e_{jb})(F)(t)=0$ for $(j,t)\in \{1,\ldots,k\}\times L$ such that $|g(t)|\leq \frac{j}{k}$ and $(e_{ja}+e_{jb})(F)(t)=2F(t)\beta_{g}(t)$ for $(j,t)\in \{1,\ldots,k\}\times L$ such that $g(t)\geq \frac{j+1}{k}$. Thus, by using \eqref{eq: F} we obtain that \begin{equation*} \begin{array}{l} \left|\left|\ |g(t)|\beta_{g}(t)-\frac{1}{2k}\sum_{j=1}^{k}(e_{ja}+e_{jb})(F)(t)\right|\right|\leq 2k^{-1}\quad \mathrm{for}\ t\in L. \end{array} \end{equation*} Here $||g(\cdot)-|g(\cdot)|\beta_{g}(\cdot)||\leq k^{-1}$, so that $C_0^{\mathbb{C}}(L)$ is uniformly convex-transitive. \end{proof} Note that Theorem \ref{thm: C0char} yields the fact that if $C_0^{\mathbb{R}}(L)$ is uniformly convex-transitive, then so is $C_0^{\mathbb{C}}(L)$. By the above reasoning one can also see that if $C_0^{\mathbb{K}}(L)$ is convex-transitive and $|L|>1$, then $L$ contains no isolated points and thus it follows that each non-empty open subset of $L$ is uncountable. Cabello pointed out \cite[Cor.1]{Ca?} that locally compact spaces $L$ having a basis of clopen sets $C$ such that $L\setminus C$ is homeomorphic to $C$, have the property that $C_0^{\mathbb{R}}(L)$ is convex-transitive. Consequently, this provides a route to the fact that the spaces $L^{\infty},\ \ell^{\infty}/ c_0$ and $C(\Delta)$ over $\mathbb{R}$, where $\Delta$ is the Cantor set, are convex-transitive. By applying Theorem \ref{thm: C0char} and following Cabello's argument with slight modifications, one arrives at the conclusion that these spaces are in fact uniformly convex-transitive. When studying \cite{Ca?} it is helpful to observe that each occurence of 'basically disconnected' in the paper must be read as \emph{totally disconnected}, (\cite{CaTa}). It is quite easy to verify that if $L_{1},\ldots,L_{n}$, where $n\in \mathbb{N}$, are totally disconnected locally compact Hausdorff spaces satisfying $(\ast)$, then so is the product $L_{1}\times \dots \times L_{n}$. It follows that the space $C_0^{\mathbb{R}}(L_{1}\times \dots \times L_{n})$ (also known as the injective tensor product $C_0^{\mathbb{R}}(L_{1})\hat{\otimes}_{\varepsilon} \dots \hat{\otimes}_{\varepsilon} C_0^{\mathbb{R}}(L_{n})$, up to isometry) is uniformly convex-transitive. \section{Uniform convex-transitivity of Banach-valued function spaces} With a proof similar to that of lemma \ref{lm: technical uct}, we obtain the following: \begin{lemma}\label{lm: technical uctx} Let $L$ be a locally compact, Hausdorff, $0$-dimensional space and $X$ a Banach space over $\mathbb{K}$. Given $g \in \mathbf{B}_{C_0^{\mathbb{K}}(L,X)}$ and $j\in \mathbb{N}$, there exist nonzero $x_1, \ldots, x_n \in \mathbf{B}_{\mathrm{X}}$ and disjoint clopen sets $C_1, C_2, \ldots, C_n\subset L$ such that the function $h \in \mathbf{B}_{C_0^{\mathbb{K}}(L,\mathrm{X})}$ defined by $h(t)=\sum_{i=1}^n \chi_{C_i}(t)x_i$ satisfies $\|h-g\|<\frac{1}{j}$. 
\end{lemma} \begin{theorem}\label{thm: CLX} Let $L$ be a locally compact Hausdorff space and $\mathrm{X}$ a Banach space over $\mathbb{K}$. Consider the following conditions: \begin{enumerate} \item[(1)]{$L$ is totally disconnected and satisfies $(\ast)$, i.e. $C_0^{\mathbb{R}}(L)$ is uniformly convex-transitive.} \item[(2)]{$\mathrm{X}$ is uniformly convex-transitive.} \item[(3)]{$C_0^{\mathbb{K}}(L,\mathrm{X})$ is uniformly convex-transitive.} \end{enumerate} We have the implication $(1)+(2)\implies (3)$. If the rotations of $C_0^{\mathbb{K}}(L,\mathrm{X})$ are of the Banach-Stone type and $\mathrm{dim}_{\mathbb{K}}(\mathrm{X})\geq 1$, then $(3)\implies (\ast)+(2)$. If additionally $\mathbb{K}=\mathbb{R}$ and $\mathcal{G}_{\mathrm{X}}$ is totally disconnected, then $(3)\implies (1)+(2)$. \end{theorem} Recall Remark \ref{remark} related to the last claim above. \begin{proof}[Proof of Theorem \ref{thm: CLX}] We begin by proving the implication $(1)+(2)\implies (3)$. Fix $k\in \mathbb{N}$. Then condition $(\ast)$ provides us with an integer $N_{k}$ associated to $\frac{1}{4k}$. Let $f\in \mathbf{S}_{C_0^{\mathbb{K}}(L,\mathrm{X})}$ and $g\in \mathbf{B}_{C_0^{\mathbb{K}}(L,\mathrm{X})}$. Take $h$ and $C_{1},\ldots,C_{n}\subset L$ as in Lemma $\ref{lm: technical uctx}$ with $j=2k$. Let $B=\bigcup_{i=1}^n C_i$ and $K=\{t \in L: \|g(t)\| \geq k^{-1}\}$. Note that $B$ is a compact clopen set and $K\subset B$. There are $y\in \mathbf{S}_{\mathrm{X}}$ and $T_{1},\ldots,T_{N_{k}} \in \mathcal{G}_{C_{0}^{\mathbb{K}}(L,\mathrm{X})}$ such that \begin{equation}\label{eq: yNk} \begin{array}{l} \left|\left|y-\left(\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}T_{i}f\right)(t)\right|\right|<\frac{1}{k},\quad \mathrm{for}\ t\in B. \end{array} \end{equation} Indeed, observe that the continuous map $L\rightarrow \mathbb{R};\ t\mapsto ||f(t)||$ attains its supremum, the value $1$. Thus, let $t_{0}\in L$ be such that $||f(t_{0})||=1$ and let $y=f(t_{0})\in \mathbf{S}_{\mathrm{X}}$. Write $V=\{t\in L:\ ||f(t)-y||<\frac{1}{2k}\}$. By using $(\ast)$ there are homeomorphisms $\sigma_{1},\ldots, \sigma_{N_{k}}\colon L\rightarrow L$ such that \begin{equation}\label{eq: NkB} \begin{array}{l} \frac{1}{N_{k}}\sum_{i=1}^{N_{k}}\chi_{V}(\sigma_{i}(t))\geq 1-\frac{1}{4k},\quad \mathrm{for}\ t\in B. \end{array} \end{equation} Let $T_{i}\in\mathcal{G}_{C_{0}^{\mathbb{K}}(L,\mathrm{X})}$ be given by $(T_{i}F)(t)=F(\sigma_{i}(t))$ for $1\leq i\leq N_{k}$ and $F\in C_{0}^{\mathbb{K}}(L,\mathrm{X})$. Thus, for all $t\in B$ we obtain by \eqref{eq: NkB} and the definition of $V$ that \begin{equation*} \begin{array}{lll} & &\left|\left|y-\left(\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}T_{i}f\right)(t)\right|\right|\phantom{\bigg |}\\ &=& \left|\left|\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}y-\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}f(\sigma_{i}(t))\right|\right|\leq \frac{1}{N_{k}}\sum_{i=1}^{N_{k}}||y-f(\sigma_{i}(t))||\phantom{\bigg |}\\ &<&(1-\frac{1}{4k})\cdot \frac{1}{2k}+\frac{1}{4k}\cdot 2<\frac{1}{k}.\phantom{\bigg |} \end{array} \end{equation*} Since $X$ is uniformly convex-transitive, there is an integer $2M=N_{\varepsilon}$ satisfying \eqref{eq: bullet} for the value $\varepsilon=k^{-1}$. Let $S^{(i)}_{1},\ldots , S^{(i)}_{2M}\in \mathcal{G}_{\mathrm{X}}$ for $1\leq i\leq n$ such that \begin{equation}\label{eq: xMl} \begin{array}{l} \left|\left|x_{i}-\frac{1}{2M}\sum_{l=1}^{2M}S^{(i)}_{l}(y)\right|\right|<k^{-1}\quad \mathrm{for}\ 1\leq i\leq n. 
\end{array} \end{equation} Then for each $1\leq l\leq 2M$ we define a rotation on $C_0^{\mathbb{K}}(L,\mathrm{X})$ by \[R_{l}(F)(t)=\chi_{L\setminus B}(t)(-1)^{l}F(t)+\sum_{i=1}^{n}\chi_{C_{i}}S^{(i)}_{l}(F(t)), \quad F\in C_0^{\mathbb{K}}(L,\mathrm{X}),\ t\in L.\] Indeed, this defines rotations, since the sets $L\setminus B$ and $C_{i}$ are clopen. It is easy to see by combining \eqref{eq: yNk} and \eqref{eq: xMl} that \begin{equation*} \begin{array}{l} \left|\left|g-\frac{1}{2M}\sum_{l=1}^{2M}R_{l}\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}T_{i}f\right|\right|<2k^{-1}. \end{array} \end{equation*} This verifies the first implication. Next we will prove the implication $(3)\implies (\ast)+(2)$ under the assumption that the rotations are of the Banach-Stone type. In fact, the verification of claim $(3)\implies (\ast)$ reduces to the analogous scalar-valued case, which was treated in the proof of Theorem \ref{thm: C0char}. Moreover, by using the Banach-Stone representation of rotations and functions of type $f\otimes x, g\otimes y\in \mathbf{S}_{C^{\mathbb{K}}_{0}(L,\mathrm{X})}$ it is easy to verify that the uniform convex-transitivity of $C_0^{\mathbb{K}}(L,\mathrm{X})$ implies that of $\mathrm{X}$. Finally, let us prove the total disconnectedness of $L$ in the case when $\mathcal{G}_{\mathrm{X}}$ is totally disconnected and $\mathbb{K}=\mathbb{R}$. Assume to the contrary that $L$ contains a connected subset $C$, which is not a singleton. Pick $t,s\in C,\ t\neq s,$ and $x\in \mathbf{S}_{\mathrm{X}}$. Let $x^{\ast}\in \mathbf{S}_{\mathrm{X}^{\ast}}$ with $x^{\ast}(x)=1$. Let $f,g\in \mathbf{S}_{C_0^{\mathbb{R}}(L)}$ be functions with disjoint supports and such that $f(t)=g(s)=1$. Consider $f\otimes x, f\otimes x -g\otimes x\in \mathbf{S}_{C_0^{\mathbb{R}}(L,\mathrm{X})}$. Since $C_0^{\mathbb{R}}(L,\mathrm{X})$ is convex-transitive we obtain that $f\otimes x -g\otimes x\in\overline{\mathrm{conv}}(\mathcal{G}_{C_0^{\mathbb{R}}(L,\mathrm{X})}(f\otimes x))$. It follows easily by using the Banach-Stone representation of rotations of $C_0^{\mathbb{R}}(L,\mathrm{X})$ that there exists a continuous map $\sigma\colon L\rightarrow \mathcal{G}_{\mathrm{X}}$ such that \[x^{\ast}(\sigma(t)(x)),x^{\ast}(-\sigma(s)(x))>0.\] By using the facts that $\sigma(t)\neq\sigma(s)$ and that $\mathcal{G}_{\mathrm{X}}$ is totally disconnected we obtain that $\sigma(C)$ is not connected. However, we have a contradiction, since $\sigma(C)$ is a continuous image of a connected set. This contradiction shows that $L$ must be totally disconnected. \end{proof} By following the argument in the previous proof with slight modifications one obtains an analogous result in the convex-transitive setting. \begin{theorem} If $C_0^{\mathbb{R}}(L)$ is convex-transitive and $\mathrm{X}$ is a convex-transitive space over $\mathbb{K}$, then $C_0^{\mathbb{K}}(L,\mathrm{X})$ is convex-transitive. \end{theorem} \begin{proof} The proof of Theorem \ref{thm: CLX} has the convex-transitive counterpart with convex combinations of rotations in place of averages of rotations. Indeed, in the equation \eqref{eq: yNk} one uses the convex-transitivity of $C_0^{\mathbb{R}}(L)$ and the corresponding Banach-Stone type rotations applied on $C_0^{\mathbb{K}}(L,\mathrm{X})$. After equation \eqref{eq: yNk} the argument proceeds similarly. Note that in the convex-transitive setting there does not exist, a priori, an upper bound $M$ depending only on $\epsilon$.
\end{proof} Recall that the Lebesgue-Bochner space $L^p(\mathrm{X})$ consists of strongly measurable maps $f\colon [0,1]\rightarrow \mathrm{X}$ endowed with the norm \[||f||_{L^p(\mathrm{X})}^{p}=\int_{0}^{1}||f(t)||_{\mathrm{X}}^{p}\ \mathrm{d}t,\quad \mathrm{for}\ p\in [1,\infty)\] and $||f||_{L^{\infty}(\mathrm{X})}=\underset{t\in [0,1]}{\mathrm{ess\ sup}}||f(t)||_{X}$. We refer to \cite{DU} for precise definitions and background information regarding the Banach-valued function spaces appearing here. Recall that $L^{\infty}$ is convex-transitive (see \cite{PR} and \cite{Rol}). Greim, Jamison and Kaminska proved that $L^p(\mathrm{X})$ is almost transitive if $\mathrm{X}$ is almost transitive and $1\leq p<\infty$, see \cite[Thm. 2.1]{GJK}. We will present the analogous result for uniformly convex-transitive spaces, that is, if $\mathrm{X}$ is uniformly convex-transitive, then $L^p(\mathrm{X})$ are also uniformly convex-transitive for $1\leq p\leq \infty$. \begin{theorem}\label{thm: LPX} Let $\mathrm{X}$ be a uniformly convex-transitive space over $\mathbb{K}$. Then the Bochner space $L_{\mathbb{K}}^{p}(\mathrm{X})$ is uniformly convex-transitive for $1\leq p\leq\infty$. \end{theorem} We will make some preparations before giving the proof. Suppose that $(A_{n})_{n\in\mathbb{N}}$ is a countable measurable partition of the unit interval and $(x_{n})_{n\in\mathbb{N}}\subset\mathrm{X}$. We will use the short-hand notation $F=\sum_{n}\chi_{A_{n}}x_{n}$ for the function $F\in L^{\infty}(\mathrm{X})$ defined by $F(t)=x_{n}$ for a.e. $t\in A_{n}$ for each $n\in\mathbb{N}$. The following two auxiliary observations are obtained immediately from the fact that the countably valued functions are dense in $L^{\infty}(\mathrm{X})$ and the triangle inequality, respectively. \begin{fact}\label{fact1} Consider $F=\sum_{n}\chi_{A_{n}}x_{n}$, where $(A_{n})$ is a measurable partition of $[0,1]$ and $(x_{n})\subset \mathbf{B}_{\mathrm{X}}$. Functions $F$ of such type are dense in $\mathbf{B}_{L^{\infty}(\mathrm{X})}$. \end{fact} \begin{fact}\label{fact2} Let $\mathrm{X}$ be a Banach space and $T_{1},...,T_{n}\in \mathcal{G}_{\mathrm{X}},\ n\in \mathbb{N}$. Assume that $x,y,z\in \mathrm{X}$ satisfy $||y-\frac{1}{n}\sum_{i}T_{i}(x)||=\varepsilon\geq 0$ and $||x-z||=\delta\geq 0$. Then $||y-\frac{1}{n}\sum_{i}T_{i}(z)||\leq \varepsilon +\delta$. \end{fact} \begin{proof}[Proof of Theorem \ref{thm: LPX}] We mainly concentrate on the case $p=\infty$. Fix $k\in \mathbb{N}$, $x\in \mathbf{S}_{\mathrm{X}}$, $(x_{n}),(y_{n})\subset\mathbf{B}_{\mathrm{X}}$ and measurable partitions $(A_{n})$ and $(B_{n})$ of the unit interval. Let $N_{k}$ be the integer provided by the uniform convex transitivity of $\mathrm{X}$ associated to the value $\varepsilon=\frac{1}{k}$. Write \[F=\sum_{n}\chi_{A_{n}}x_{n}\ \mathrm{and}\ G=\sum_{n}\chi_{B_{n}}y_{n}.\] We assume additionally that $||F||=1$. For each $n\in \mathbb{N}$ there are isometries $\{T_{i}^{(n)}\}_{i\leq N_{k}}\subset\mathcal{G}_{\mathrm{X}}$ such that \begin{equation}\label{eq: yj} \begin{array}{cc} &\left|\left|\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}T_{i}^{(n)}(x)-y_{n}\right|\right|<\frac{1}{k}\quad \mathrm{for}\ n\in\mathbb{N}. \end{array} \end{equation} Observe that one obtains rotations on $L^{\infty}(\mathrm{X})$ by putting \[R_{i}(f)(t)=\sum_{n}\chi_{B_{n}}T_{i}^{(n)}(f(t))\] for a.e. $t\in [0,1]$, where $f\in L^{\infty}(\mathrm{X})$, $i\leq N_{k}$, and the above summation is understood in the sense of pointwise convergence almost everywhere. 
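Let us spell out why each $R_{i}$ is a rotation (an elementary verification included only for convenience): since $(B_{n})_{n}$ is a measurable partition of $[0,1]$ and each $T_{i}^{(n)}$ is an isometric automorphism of $\mathrm{X}$, we have \[||R_{i}(f)||_{L^{\infty}(\mathrm{X})}=\underset{t\in [0,1]}{\mathrm{ess\ sup}}\,||T_{i}^{(n(t))}(f(t))||_{\mathrm{X}}=||f||_{L^{\infty}(\mathrm{X})},\] where $n(t)$ denotes the index with $t\in B_{n(t)}$, and $R_{i}$ is invertible with $R_{i}^{-1}(f)(t)=\sum_{n}\chi_{B_{n}}(T_{i}^{(n)})^{-1}(f(t))$.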
We define a convex combination of elements of $\mathcal{G}_{L^{\infty}(\mathrm{X})}$ by \begin{equation*} \begin{array}{l} \mathrm{A}_{1}(f)=\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}R_{i}(f),\quad f\in L^{\infty}(\mathrm{X}). \end{array} \end{equation*} Condition \eqref{eq: yj} implies that \begin{equation}\label{eq: GCC} ||G-\mathrm{A}_{1}(\chi_{[0,1]}x)||<\frac{1}{k}. \end{equation} By the definition of $F$ one can find $n_{0}\in\mathbb{N}$ such that $m(A_{n_{0}})>0$ and \begin{equation}\label{eq: supAFx} ||x_{n_{0}}||_{\mathrm{X}}>1-\frac{1}{k}. \end{equation} Put $\Delta_{n}=[1-2^{-n},1-2^{-(n+1)}]$ for $n\leq k$. By composing suitable bijective transformations one can construct measurable mappings $g_{n}\colon [0,1]\rightarrow [0,1]$ and $\hat{g}_{n}\colon [0,1]\rightarrow [0,1]$ such that \begin{equation} g_{n}(A_{n_{0}})\ae [0,1]\setminus \Delta_{n}\ \mathrm{and}\ g_{n}([0,1]\setminus A_{n_{0}})\ae \Delta_{n}, \end{equation} \begin{equation}\label{eq: mequiv} \mathrm{the\ measure}\ \mu_{n}(\cdot)\stackrel{\cdot}{=}m(g_{n}(\cdot))\colon\Sigma\rightarrow\mathbb{R}\ \mathrm{is\ equivalent\ to}\ m \end{equation} and \begin{equation}\label{eq: hatg} \hat{g}_{n}\circ g_{n}(t)=t\quad \mathrm{for}\ \mathrm{a.e.}\ t\in [0,1] \end{equation} for each $n\leq k$. Next we will apply some observations which appear e.g. in \cite{Greim_Lp} and \cite{Greim_Linfty}. Denote by $\Sigma\setminus_{m}$ the quotient $\sigma$-algebra of Lebesgue measurable sets on $[0,1]$ formed by identifying the $m$-null sets with $\emptyset$. Note that \eqref{eq: mequiv} gives in particular that the map $\phi_{n}\colon\Sigma\setminus_{m}\rightarrow\Sigma\setminus_{m}$ determined by $\phi_{n}(A)\ae g_{n}(A)$ for $A\in \Sigma$ is a Boolean isomorphism for each $n\leq k$. Observe that $\hat{g}_{n}(A)\ae \phi_{n}^{-1}(A)$ for $A\in\Sigma$ and $n\leq k$. By \eqref{eq: supAFx} there are rotations $\{T_{i}\}_{i\leq N_{k}}\subset \mathcal{G}_{\mathrm{X}}$ such that \begin{equation}\label{eq: xsumd} \begin{array}{cc} &\left|\left|x-\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}T_{i}(x_{n_{0}})\right|\right|_{\mathrm{X}}<\frac{2}{k}. \end{array} \end{equation} According to \eqref{eq: hatg} we may define mappings $S_{i}^{(n)}\colon L^{\infty}(\mathrm{X})\rightarrow L^{\infty}(\mathrm{X})$ for $n\leq k$ and $i\leq N_{k}$ by putting \[S_{i}^{(n)}(F)(t)=T_{i}(F(\hat{g}_{n}(t)))\quad \mathrm{for\ a.e.}\ t\in[0,1],\ F\in L^{\infty}(\mathrm{X}).\] By \eqref{eq: mequiv} we get that $S_{i}^{(n)}\in\mathcal{G}_{L^{\infty}(\mathrm{X})}$ (see also \cite[p.467-468]{Greim_Linfty}). The function $\chi_{[0,1]}x$ can be approximated by convex combinations as follows: \begin{equation}\label{eq: conclusion} \begin{array}{cc} &\left|\left|\chi_{[0,1]}x-\frac{1}{k}\sum_{n=1}^{k}\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}S_{i}^{(n)}(F)\right|\right|_{L^{\infty}(\mathrm{X})} \leq \frac{1}{k}(2+\sum_{i=1}^{k-1}2k^{-1}). \end{array} \end{equation} Indeed, for $n\leq k$ and a.e. $t\in [0,1]\setminus \Delta_{n}$ it holds by \eqref{eq: xsumd} that \begin{equation*} \begin{array}{cc} &\left|\left|x-\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}S_{i}^{(n)}(F)(t)\right|\right|_{\mathrm{X}}= \left|\left|x-\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}T_{i}(x_{n_{0}})\right|\right|_{\mathrm{X}}< \frac{2}{k}. \end{array} \end{equation*} On the other hand, $||x-\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}S_{i}^{(n)}(F)(t)||_{\mathrm{X}}\leq 2$ for a.e. $t\in \Delta_{n}$. In \eqref{eq: conclusion} we apply the fact that $\Delta_{n}$ are pairwise essentially disjoint.
Denote $\mathrm{A}_{2}=\frac{1}{k}\sum_{n=1}^{k}\frac{1}{N_{k}}\sum_{i=1}^{N_{k}}S_{i}^{(n)}\in \mathrm{conv}(\mathcal{G}_{L^{\infty}(\mathrm{X})})$. By combining the estimates \eqref{eq: GCC} and \eqref{eq: conclusion} we obtain by Fact \ref{fact2} that \[||G-\mathrm{A}_{1}\mathrm{A}_{2}(F)||<\frac{5}{k}.\] Observe that $\mathrm{A}_{1}\mathrm{A}_{2}$ is an average of $kN_{k}^{2}$ many rotations on $L^{\infty}(\mathrm{X})$. We conclude by Fact \ref{fact1} that $L^{\infty}(\mathrm{X})$ is uniformly convex-transitive. The case $1\leq p<\infty$ is a straightforward modification of the proof of \cite[Thm. 2.1]{GJK}, where one replaces $U_{i}x_{i}$ by suitable averages belonging to $\mathrm{conv}(\mathcal{G}_{\mathrm{X}}(x_{i}))$ for each $i$. \end{proof} In fact, it is not difficult to check the following: If the rotations of $L^{\infty}(\mathrm{X})$ are of the Banach-Stone type, then $L^{\infty}(\mathrm{X})$ is convex-transitive if and only if each $x\in \mathbf{S}_{\mathrm{X}}$ is a uniformly big point. We already mentioned that $\ell^{\infty}/ c_0$ is uniformly convex-transitive as a real space. Next we generalize this result to the vector-valued setting. \begin{theorem} Let $\mathrm{X}$ be a uniformly convex-transitive Banach space over $\mathbb{K}$. Then $\ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})$ (over $\mathbb{K}$) is uniformly convex-transitive. \end{theorem} \begin{proof} Observe that the formula \begin{equation}\label{eq: Tsum} T((x_{n})_{n})=(S_{n}x_{\pi(n)})_{n}, \end{equation} where $\pi\colon \mathbb{N}\rightarrow\mathbb{N}$ is a bijection and $S_{n}\in\mathcal{G}_{\mathrm{X}},\ n\in \mathbb{N}$, defines a rotation on $\ell^{\infty}(\mathrm{X})$. Also note that such an isometry $T$ restricted to $c_0(\mathrm{X})$ is a member of $\mathcal{G}_{c_0(\mathrm{X})}$. If $T\in\mathcal{G}_{\ell^{\infty}(\mathrm{X})}$ is as in \eqref{eq: Tsum}, then $\widehat{T}\colon x+c_0(\mathrm{X})\mapsto T(x)+c_0(\mathrm{X})$, for $x\in\ell^{\infty}(\mathrm{X})$, defines a rotation $\ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})\rightarrow \ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})$. Indeed, it is clear that $\widehat{T}\colon \ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})\rightarrow \ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})$ is a linear bijection. Moreover, \[\inf_{z\in c_0(\mathrm{X})}||x-z||=\inf_{z\in c_0(\mathrm{X})}||T(x)-T(z)||=\inf_{z\in c_0(\mathrm{X})}||T(x)-z||,\] so that $\widehat{T}\colon \ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})\rightarrow \ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})$ is an isometry. Fix $u,v\in \mathbf{S}_{\ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})}$. If $x,y\in \ell^{\infty}(\mathrm{X})$ are such that $u=x+c_0(\mathrm{X})$ and $v=y+c_0(\mathrm{X})$, then \begin{equation}\label{eq: distsup} \mathrm{dist}(x,c_0(\mathrm{X}))=\limsup_{n\rightarrow\infty}||x_{n}||=1=\mathrm{dist}(y,c_0(\mathrm{X}))=\limsup_{n\rightarrow\infty}||y_{n}||, \end{equation} since $u,v\in \mathbf{S}_{\ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})}$. Hence we may pick $x,y\in \mathbf{S}_{\ell^{\infty}(\mathrm{X})}$ such that $u=x+c_0(\mathrm{X})$ and $v=y+c_0(\mathrm{X})$. Fix $k\in\mathbb{N}$, $e\in \mathbf{S}_{\mathrm{X}}$ and let $A=\{n\in \mathbb{N}:\ ||x_{n}||\geq 1-\frac{1}{2k}\}$. Observe that $A$ is an infinite set by \eqref{eq: distsup}.
Since $\mathrm{X}$ is uniformly convex-transitive, there exists $N_{(k)}\in\mathbb{N}$ such that for each $n\in A$ there are $T_{1}^{(n)},\ldots,T_{N_{(k)}}^{(n)}\in \mathcal{G}_{\mathrm{X}}$ such that \begin{equation}\label{eq: e} \begin{array}{l} \left|\left|e-\frac{1}{N_{(k)}}\sum_{l=1}^{N_{(k)}}T_{l}^{(n)}x_{n}\right|\right|<\frac{1}{k}. \end{array} \end{equation} Fix $j_{(k)}\in \mathbb{N}$ such that \begin{equation}\label{eq: jk} \frac{1}{j_{(k)}}(2+(j_{(k)}-1)(\frac{1}{k}))<\frac{2}{k}. \end{equation} Denote by $p_{1},\ldots,p_{j_{(k)}}\in \mathbb{N}$ the $j_{(k)}$ first primes. Let $\phi_{1},\ldots, \phi_{j_{(k)}}\colon \mathbb{N}\rightarrow \mathbb{N}$ be permutations such that \begin{equation}\label{eq: phiNA} \phi_{i}(\mathbb{N}\setminus A)\subset \{p_{i}^{m}|\ m\in \mathbb{N}\}\quad \mathrm{for}\ i\in \{1,\ldots,j_{(k)}\}. \end{equation} For $l\in \{1,\ldots,N_{(k)}\}$ put $S_{i,n,l}=T_{l}^{(\phi^{-1}_{i}(n))}$ if $\phi^{-1}_{i}(n)\in A$ and otherwise put $S_{i,n,l}=\mathbf{I}$. Define a convex combination of rotations on $\ell^{\infty}(\mathrm{X})$ by letting \begin{equation*} \begin{array}{l} \mathrm{A}_{1}(z)|_{n}=\frac{1}{j_{(k)}}\sum_{i=1}^{j_{(k)}}\frac{1}{N_{(k)}}\sum_{l=1}^{N_{(k)}}S_{i,n,l}(z_{\phi^{-1}_{i}(n)}), \end{array} \end{equation*} where $(z_{n})_{n\in\mathbb{N}}\in \ell^{\infty}(\mathrm{X})$. Consider $\mathrm{A}_{1}\in L(\ell^{\infty}(\mathrm{X}))$ and $\overline{e}=(e,e,e,\ldots)\in \ell^{\infty}(\mathrm{X})$. We obtain that \begin{equation}\label{eq: eee} ||\overline{e}-\mathrm{A}_{1}((x_{n}))||_{\ell^{\infty}(\mathrm{X})}<\frac{2}{k}. \end{equation} Indeed, for each $n\in \mathbb{N}$ it holds for at least $j_{(k)}-1$ many indices $i$ that \begin{equation*} \begin{array}{l} \frac{1}{N_{(k)}}\sum_{l=1}^{N_{(k)}}S_{i,n,l}(x_{\phi^{-1}_{i}(n)})=\frac{1}{N_{(k)}}\sum_{l=1}^{N_{(k)}}T_{l}^{(\phi_{i}^{-1}(n))}(x_{\phi_{i}^{-1}(n)}), \end{array} \end{equation*} where one uses the definition of $S_{i,n,l}$, \eqref{eq: phiNA} and the fact that the sets $\{p_{i}^{m}|\ m\in \mathbb{N}\}, \{p_{j}^{m}|\ m\in\mathbb{N}\}$ are mutually disjoint for $i\neq j$. Thus \eqref{eq: e} and \eqref{eq: jk} yield that \begin{equation*} \begin{array}{l} \left|\left|e-\frac{1}{j_{(k)}}\sum_{i=1}^{j_{(k)}}\frac{1}{N_{(k)}}\sum_{l=1}^{N_{(k)}}S_{i,n,l}(x_{\phi^{-1}_{i}(n)})\right|\right|< \frac{2}{k} \end{array} \end{equation*} holds for all $n\in \mathbb{N}$. Next we will define another convex combination $\mathrm{A}_{2}$ of rotations on $\ell^{\infty}(\mathrm{X})$ as follows. By using again the uniform convex transitivity of $\mathrm{X}$ we obtain $T_{n,l}\in \mathcal{G}_{\mathrm{X}},\ 1\leq l\leq N_{(k)},\ n\in \mathbb{N},$ such that \begin{equation*} \begin{array}{l} \left|\left|y_{n}-\frac{1}{N_{(k)}}\sum_{l=1}^{N_{(k)}}T_{n,l}e\right|\right|<\frac{1}{k} \end{array} \end{equation*} holds for $n\in\mathbb{N}$. Define \begin{equation*} \begin{array}{l} \mathrm{A}_{2}(z)|_{n}=\frac{1}{N_{(k)}}\sum_{l=1}^{N_{(k)}}T_{n,l}z_{n}. \end{array} \end{equation*} Combining the convex combinations yields \[||y-\mathrm{A}_{2}\mathrm{A}_{1}x||_{\ell^{\infty}(\mathrm{X})}<\frac{3}{k}\] according to Fact \ref{fact2}.
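In more detail, the two estimates that feed into Fact \ref{fact2} here are \[||y-\mathrm{A}_{2}(\overline{e})||_{\ell^{\infty}(\mathrm{X})}=\sup_{n\in\mathbb{N}}\left|\left|y_{n}-\frac{1}{N_{(k)}}\sum_{l=1}^{N_{(k)}}T_{n,l}e\right|\right|\leq\frac{1}{k}\quad \mathrm{and}\quad ||\overline{e}-\mathrm{A}_{1}((x_{n}))||_{\ell^{\infty}(\mathrm{X})}<\frac{2}{k},\] and adding them gives the stated bound $\frac{1}{k}+\frac{2}{k}=\frac{3}{k}$.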
Since the applied rotations induce rotations on $\ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})$, we may consider the corresponding convex combinations in $L(\ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X}))$ and thus \[||v-\widehat{\mathrm{A}_{2}\mathrm{A}_{1}}u||_{\ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})}<\frac{3}{k}.\] Tracking the formation of the convex combinations reveals that $\widehat{\mathrm{A}_{2}\mathrm{A}_{1}}$ can be written as an average of $N_{(k)}j_{(k)}N_{(k)}$ many rotations on $\ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})$. \end{proof} Since $C(\beta\mathbb{N}\setminus \mathbb{N})$ is linearly isometric to $\ell^{\infty}/ c_0$, an application of Theorem \ref{thm: CLX} yields that $C(\beta\mathbb{N}\setminus\mathbb{N},\mathrm{X})$ is uniformly convex-transitive if $\mathrm{X}$ is uniformly convex-transitive. However, let us recall that this space is linearly isometric to $\ell^{\infty}(\mathrm{X})/ c_0(\mathrm{X})$ if and only if $\mathrm{X}$ is finite-dimensional. \section{Roughness and projections} Let $\mathrm{X}$ be a Banach space. For each $x \in \mathbf{S}_{\mathrm{X}}$ we denote \[\eta(\mathrm{X},x)\stackrel{\cdot}{=}\limsup_{\|h\|\to 0} \frac{\|x+h\|+\|x-h\|-2}{\|h\|}.\] Given $\varepsilon > 0$, the space $\mathrm{X}$ is said to be \emph{$\varepsilon$-rough} if $\displaystyle \inf_{x\in \mathbf{S}_{\mathrm{X}}} \eta(\mathrm{X},x) \geq \varepsilon$. In addition, $2$-rough spaces are usually called \emph{extremely rough}. We will denote the \emph{coprojection constant} of $\mathrm{X}$ by \[\rho(\mathrm{X})=\sup_{P}||\mathbf{I}-P||,\] where the supremum is taken over all linear norm-$1$ projections $P\colon \mathrm{X}\rightarrow \mathrm{Y}$ onto subspaces $\mathrm{Y}\subseteq\mathrm{X}$. A Banach space $\mathrm{X}$ is called \emph{uniformly non-square} if there exists $a\in (0,1)$ such that if $x,y \in \mathbf{B}_{\mathrm{X}}$ and $\|x+y\| \geq 2a$ then $\|x-y\| < 2a$. These spaces were introduced in \cite{James} by R. C. James, who also proved that this property lies strictly between uniform convexity and reflexivity. Next we will illustrate how the previous concepts are related. \begin{theorem}\label{thm: rough} Let $\mathrm{X}$ be a Banach space. Then the following conditions are equivalent: \begin{enumerate} \item[(1)]{$\mathrm{X}$ contains $\ell^{1}(2)$ almost isometrically.} \item[(2)]{$\mathrm{X}$ is not uniformly non-square.} \item[(3)]{$\rho(\mathrm{X})=2$.} \end{enumerate} Moreover, if $\sup_{x\in\mathbf{S}_{\mathrm{X}}}\eta(\mathrm{X},x)=2$, then $\rho(\mathrm{X})=2$. \end{theorem} We will require some preparations before the proof. Recall that given $x,y\in \mathrm{X}$ the function $t\mapsto \frac{||x+ty||-||x||}{t}$ is monotone in $t$ and thus the limit $\lim_{t\rightarrow 0^{+}}\frac{||x+ty||-||x||}{t}$ exists and is finite. \begin{lemma}\label{lm: canonical} Let $\mathrm{X}$ be a Banach space and $x,y\in \mathrm{X},\ x\neq 0$. Then \begin{equation*} \begin{array}{lll} & & \phantom{\Big |}\lim_{t\rightarrow 0^{+}}\frac{||x+t(y+\theta x)||-||x||}{t}=\lim_{t\rightarrow 0^{+}}\frac{||x-t(y+\theta x)||-||x||}{t}\\ &=&\phantom{\Big |}\lim_{t\rightarrow 0^{+}}\frac{||x+t(y+\theta x)||+||x-t(y+\theta x)||-2||x||}{2t} \end{array} \end{equation*} for $\theta\stackrel{\cdot}{=}\lim_{t\rightarrow 0^{+}}\frac{||x-ty||-||x+ty||}{2t||x||}$.
\end{lemma} \begin{proof} Observe that for all maps $a\colon [0,1]\rightarrow \mathbb{R}$ such that $\lim_{t\rightarrow 0^{+}}a(t)>0$ it holds that \begin{equation} \begin{array}{lll}\label{eq: a} & &\phantom{\Big |}\lim_{t\rightarrow 0^{+}}\frac{||a(t)x+ty||-||a(t)x||}{t}=\lim_{t\rightarrow 0^{+}}\frac{||a(t)x+\frac{a(t)}{a(t)}ty||-||a(t)x||}{t}\\ &=&\phantom{\Big |}\lim_{t\rightarrow 0^{+}}\frac{||x+\frac{t}{a(t)}y||-||x||}{\frac{t}{a(t)}}=\lim_{t\rightarrow 0^{+}}\frac{||x+ty||-||x||}{t}. \end{array} \end{equation} We will also apply the fact that \begin{equation}\label{eq: b} \lim_{t\rightarrow 0^{+}}\frac{t\theta-t\,\frac{||x-ty||-||x+ty||}{2t||x||}}{t}=\lim_{t\rightarrow 0^{+}}\left(\theta-\frac{||x-ty||-||x+ty||}{2t||x||}\right)=0, \end{equation} which holds by the very definition of $\theta$. The claimed one-sided limits are calculated as follows: \begin{eqnarray*} & &\lim_{t\rightarrow 0^{+}}\frac{||x+t(y+\theta x)||-||x||}{t}\\ &=&\lim_{t\rightarrow 0^{+}}\frac{||(1+\frac{||x-ty||-||x+ty||}{2||x||})x+ty||-||x||}{t}\\ &=&\lim_{t\rightarrow 0^{+}}\frac{||(1+\frac{||x-ty||-||x+ty||}{2||x||})x+ty||-(1+\frac{||x-ty||-||x+ty||}{2||x||})||x||}{t}\\ &+&\lim_{t\rightarrow 0^{+}}\frac{(1+\frac{||x-ty||-||x+ty||}{2||x||})||x||-||x||}{t}\\ &=&\lim_{t\rightarrow 0^{+}}\frac{||x+ty||-||x||}{t}+\lim_{t\rightarrow 0^{+}}\frac{||x-ty||-||x+ty||}{2t}\\ &=&\lim_{t\rightarrow 0^{+}}\frac{||x+ty||+||x-ty||-2||x||}{2t}. \end{eqnarray*} In the first equality above we applied the fact \eqref{eq: b}, and in the third equality the fact \eqref{eq: a}. The calculations for the equation \begin{equation*} \lim_{t\rightarrow 0^{+}}\frac{||x-t(y+\theta x)||-||x||}{t}=\lim_{t\rightarrow 0^{+}}\frac{||x+ty||+||x-ty||-2||x||}{2t} \end{equation*} are similar. \end{proof} \begin{proof}[Proof of Theorem \ref{thm: rough}] The equivalence of conditions (1) and (2) is well-known. The direction (1)$\implies$(3) is established by using the Hahn-Banach Theorem to obtain suitable rank-$1$ projections $P$. Towards the implication (3)$\implies$(2), suppose that $\rho(\mathrm{X})=2$. Given $\delta>0$ there exists a projection $P\colon \mathrm{X}\rightarrow \mathrm{Y}$, which satisfies $||P||=1$ and $||\mathbf{I}-P||>2-\frac{\delta}{2}$. Choose $x\in\mathbf{S}_{\mathrm{X}}$ such that $||x-P(x)||>2-\frac{\delta}{2}$. This gives that $||P(x)||\geq 1-\frac{\delta}{2}$. Put $y=\frac{P(x)}{||P(x)||}$ and note that $y\in \mathbf{S}_{\mathrm{X}}$ and $||y-P(x)||<\frac{\delta}{2}$. Moreover, \begin{eqnarray*} & &||x-y||\geq ||x-P(x)||-||y-P(x)||>2-\delta>2(1-\delta)\\ \mathrm{and}& & \\ & &||x+y||\geq ||x+P(x)||-||y-P(x)||>||x+P(x)||-\frac{\delta}{2}\\ &=&||2x+P(x)-x||\ ||P||-\frac{\delta}{2}\\ &\geq &||P(2x+P(x)-x)||-\frac{\delta}{2}=||P(2x)||-\frac{\delta}{2}>2-\delta-\frac{\delta}{2}>2(1-\delta). \end{eqnarray*} Thus $\mathrm{X}$ is not uniformly non-square. To check the latter part of the claim, an application of Lemma \ref{lm: canonical} yields that if $\sup_{x\in \mathbf{S}_{\mathrm{X}}}\eta(\mathrm{X},x)=2$, then $\mathrm{X}$ is not uniformly non-square. Hence $\rho(\mathrm{X})=2$. \end{proof} The extreme roughness of $\mathrm{X}$ is a much stronger condition than $\rho(\mathrm{X})=2$. For example, if $(F_{n})$ is a sequence of finite-dimensional smooth spaces such that $\rho(F_{n})\rightarrow 2$ as $n\rightarrow \infty$, then the space \[\mathrm{X}=\bigoplus_{n\in \mathbb{N}}F_{n}\quad \quad \mathrm{ (summation\ in\ the }\ \ell^{2}\mathrm{ -sense ) }\] is Fr\'{e}chet-smooth but $\rho(\mathrm{X})=2$.
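A sketch of why such a sum indeed has $\rho(\mathrm{X})=2$ (the maps $\pi_{n}\colon\mathrm{X}\rightarrow F_{n}$ and $\iota_{n}\colon F_{n}\rightarrow\mathrm{X}$ below denote the coordinate projection and the natural inclusion, and are introduced only for this remark): if $P\colon F_{n}\rightarrow \mathrm{Y}$ is a norm-$1$ projection, then $\widetilde{P}=\iota_{n}P\pi_{n}$ is a norm-$1$ projection on $\mathrm{X}$ and \[||\mathbf{I}-\widetilde{P}||\geq \sup_{z\in \mathbf{B}_{F_{n}}}||\iota_{n}(z)-\iota_{n}(P(z))||=||\mathbf{I}-P||,\] whence $\rho(\mathrm{X})\geq\sup_{n}\rho(F_{n})=2$. On the other hand, Fr\'{e}chet-smoothness gives $\eta(\mathrm{X},x)=0$ for every $x\in\mathbf{S}_{\mathrm{X}}$, so $\mathrm{X}$ is not $\varepsilon$-rough for any $\varepsilon>0$.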
However, for convex-transitive spaces $\mathrm{X}$ the condition of being extremely rough is equivalent to the condition $\rho(\mathrm{X})=2$. Indeed, if a convex-transitive space is not extremely rough then, by \cite[Thm. 6.8]{BR2}, it must be uniformly convex, hence uniformly non-square, and thus $\rho(\mathrm{X})<2$ by Theorem \ref{thm: rough}. It is unknown to us whether a convex-transitive Banach space is reflexive if it does not contain an isomorphic copy of $\ell^1$. In the same spirit as in this section, the projection constants of $L^p$ spaces were discussed in \cite{asytrans}. \section{Final Remarks: On the universality of transitivity properties} The well-known Banach-Mazur problem mentioned in the introduction asks whether every transitive, separable Banach space must be linearly isometric to a Hilbert space. It is well-known that all such (transitive and separable) spaces must be smooth; beyond that, not much is known. Even adding properties such as being a dual space, or even reflexivity, has not sufficed, to date, to prove that the norm is Hilbertian. Let us make a few remarks on the universality of some spaces of continuous functions. It is well-known that $C(\Delta)$ contains $C([0,1])$ isometrically; hence, the former space is universal for the property of being uniformly convex-transitive and separable. However, it is not almost transitive. To get a space which is universal for the property of being almost transitive and separable, just consider the almost transitive space $X=C_0^{\mathbb{C}}(L)$ where $L$ is the pseudo-arc with one point removed (\cite{Kawamura} or \cite{Rambla}). Since $[0,1]$ is a continuous image of $L$, every separable Banach space is isometrically contained in $X$ (complex case) or $X_{\mathbb{R}}$ (real case). Finally, note that the almost transitivity of a complex Banach space implies that of its underlying real space. \end{document}
\begin{document} \thanks{2010 {\it Mathematics Subject Classification.} Primary: 06D35. Secondary: 03B50, 03D40, 05E45, 06F20, 08A50, 08B30, 52B11, 52B20, 54C15, 55U10, 57Q05, 57Q25} \keywords{MV-algebra, lattice ordered abelian group, strong order unit, unital $\ell$-group, projective MV-algebra, infinite-valued \L u\-ka\-s\-ie\-wicz\ calculus, Stone-Weierstrass theorem, retraction, McNaughton function, piecewise linear function, basis, Schauder hat, regular triangulation, unimodular triangulation, duality, polyhedron, finite presentation, isomorphism problem, Markov unrecognizability theorem.} \begin{abstract} Working jointly in the equivalent categories of MV-al\-ge\-bras and lattice-ordered abelian groups with strong order unit (for short, unital $\ell$-groups), we prove that isomorphism is a sufficient condition for a separating subalgebra $A$ of a finitely presented algebra $F$ to coincide with $F$. The separation and isomorphism conditions do not individually imply $A=F$. Various related problems, like the separation property of $A$, or $A\cong F$ (for $A$ a separating subalgebra of $F$), are shown to be (Turing-)decidable. We use tools from algebraic topology, category theory, polyhedral geometry and computational algebraic logic. \end{abstract} \maketitle \section{Introduction} A {\it unital $\ell$-group} $(G,u)$, \cite{bigkeiwol, gla}, is an abelian group $G$ equipped with a translation invariant lattice structure, together with a distinguished element $u\geq 0$ whose positive integer multiples eventually dominate every element of $G$. An {\it MV-algebra} $A=(A,0,\oplus,\neg)$ is an abelian monoid $(A,0,\oplus)$ equipped with an operation $\neg$ such that $\neg\neg x=x, \,\,\,x\oplus \neg 0=\neg 0$ and $\neg(\neg x \oplus y )\oplus y=\neg(\neg y \oplus x )\oplus x$. Recently, MV-algebras, and especially finitely presented MV-algebras, \cite{marspa}, \cite[\S 6]{mun11}, have found applications to lattice-ordered abelian groups \cite{cab, cabmun-ja, cabmun-ccm, mmm, mun08}, the Farey-Stern-Brocot AF C$^*$-algebra of \cite{mun88} (see \cite{boc,eck,mun-lincei}), probability and measure theory \cite{forum,mun-cpc}, multisets \cite{cigmar}, and vector lattices \cite{ped}. The versatility of MV-algebras stems from a number of factors, including: \begin{itemize} \item[(i)] The categorical equivalence $\Gamma$ between unital $\ell$-groups and MV-algebras \cite{mun86}, which endows unital $\ell$-groups with the equational machinery of free algebras, finite presentability, and word problems, even though the archimedean property of the unit is not definable by equations. \item[(ii)] The duality between finitely presented MV-algebras and rational polyhedra, \cite{cab, marspa, mun11}. \item[(iii)] The one-to-one correspondence, via $\Gamma$ and Grothendieck's $K_0$, between countable MV-algebras and AF C$^*$-algebras whose Murray-von Neumann order of projections is a lattice, \cite{mun86}. \item[(iv)] The deductive-algorithmic machinery of the infinite-valued \L u\-ka\-s\-ie\-wicz\ calculus \L$_\infty$ is immediately applicable to MV-algebras, \cite{cigdotmun,mun11}. \end{itemize} This paper deals with finitely generated subalgebras of finitely presented MV-algebras and unital $\ell$-groups. We will mostly focus on the equational class of MV-algebras, where freeness and finite presentations are immediately definable. Finitely presented MV-algebras are the Lindenbaum algebras of finitely axiomatizable theories in \L$_\infty$.
Finitely generated projective MV-algebras, in particular, are a key tool for the proof-theory of \L$_\infty$ (see \cite{cab, jer, marspa} for an algebraic analysis of admissibility, exactness and unification in \L$_\infty$). Remarkably enough, the characterization of projective MV-algebras and unital $\ell$-groups is a deep {\it open} problem in algebraic topology, showing that unital $\ell$-groups have a greater complexity than $\ell$-groups (see \cite{cabmun-ja, cabmun-ccm}). As a matter of fact, while the well-known Baker-Beynon duality (\cite{bey} and references therein) shows that finitely presented $\ell$-groups coincide with finitely generated projective $\ell$-groups, the class of finitely generated projective unital $\ell$-groups (resp., finitely generated projective MV-algebras) is strictly contained in the class of finitely presented unital $\ell$-groups (resp., finitely presented MV-algebras). Let $A$ be a finitely generated subalgebra of a finitely presented MV-algebra (or unital $\ell$-group) $F$. In Theorem \ref{theorem:separation-is-decidable} we prove: it is decidable whether $A$ is separating. If $A$ is separating and $F$ is a finitely generated free MV-algebra then $A$ is projective. This is proved in Theorem \ref{theorem:projective}. Further, if $A$ is separating then $A=F$ iff $A\cong F$: this is our MV-algebraic Stone-Weierstrass theorem (\ref{theorem:sw}). For separating subalgebras $A$ of free $n$-generator MV-algebras or unital $\ell$-groups, the isomorphism problem $A\cong F$ is decidable. See Theorem \ref{theorem:decidable-sw}. As is well known, \cite[9.1]{cigdotmun}, the free $n$-generator MV-algebra $\McNn$ consists of all $n$-variable McNaughton functions defined on $\I^n$. For any MV-term $\tau=\tau(X_1,\ldots,X_n)$ we let $\hat\tau\in \McNn$ be obtained by evaluating $\tau$ in $\McNn$. In Section \ref{section:method} we introduce a method to write down a list of MV-terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\ldots,X_n$ in such a way that the subalgebra of $\McNn$ generated by $\hat\tau_1,\ldots,\hat\tau_k$ is {\it separating and distinct} from $\McNn$. In Theorem \ref{theorem:delta-basis} we prove that every finitely generated separating proper subalgebra $A$ of $\McNn$ is obtainable by this method. In the light of Theorem \ref{theorem:projective}, a large class of projective non-free MV-algebras can be effectively introduced. In Section \ref{section:basis}, we connect our presentation of a finitely generated separating subalgebra $A$ of $\McNn$ with the notion of basis \cite[\S 6]{mun11}. In Theorem \ref{theorem:effective}, for any finitely generated separating subalgebra $A$ of $\McNn$ we provide an effective method to transform every generating set of $A$ into a basis of $A$. As another application, in Section \ref{section:recognizing}, we prove the decidability of the problem of recognizing whether two different sets of terms generate the same separating subalgebra of a free MV-algebra. Finally, in Section \ref{section:final} we discuss the mutual relations between presentations of MV-algebras as {\it finitely generated subalgebras} of free MV-algebras, and the traditional finite presentations in terms of {\it principal quotients} of free MV-algebras. Many results proved in this paper for separating finitely generated unital $\ell$-subgroups of free unital $\ell$-groups fail for finitely presented $\ell$-groups, and are open problems for finitely presented unital $\ell$-groups.
The separation hypothesis plays a crucial role in most decidability results of the earlier sections. Actually, the final two results of this paper (Theorems \ref{theorem:generators-to-quotient} and \ref{theorem:finale}) show that without this hypothesis, decision problems for a subalgebra $A$ of $\McNn$ generated by $\hat\tau_1,\ldots,\hat\tau_k$ become as difficult as their classical counterparts where $A$ is presented as a principal quotient of $\McNn$. \section{MV-algebras, rational polyhedra, regular triangulations} \label{section:presentation} \subsubsection*{MV-algebras, \cite{cigdotmun,mun11}} We assume familiarity with the categorical equivalence $\Gamma$ between MV-algebras and unital $\ell$-groups, \cite{mun86,cigdotmun}. For any closed set $Y \subseteq \cube$ we let $\McN(Y)$ denote the MV-algebra of restrictions to $Y$ of all McNaughton functions defined on $\cube$. A set $S\subseteq \McN(Y)$ is said to be {\it separating} (or, $S$ {\it separates points of $Y$}) if for all $x,y\in Y$ such that $x\neq y$ there is $f\in S$ with $f(x)\not=f(y)$. Unless otherwise specified, $Y$ will be nonempty, whence the MV-algebra $ \McN(Y)$ will be nontrivial. By an {\it MV-term} $\tau = \tau(X_{1},\ldots,X_{n})$ we mean a string of symbols obtained from the variable symbols $X_{i}$ and the constant symbol $0$ by a finite number of applications of the MV-algebraic connectives $\neg,\oplus$. The map $\,\,\,\hat{}\,\,\,$ sending $X_{i}$ to the $i$th coordinate function $\pi_{i}\colon [0,1]^{n}\to [0,1]$ canonically extends to a map interpreting each MV-term $\tau(X_1,\ldots,X_n)$ as a McNaughton function $\hat{\tau}\in \McNn$. McNaughton's theorem \cite[9.1.5]{cigdotmun} states that this map is onto the MV-algebra $\McNn$. The set $\{\pi_1,\ldots,\pi_n\}$ freely generates the free MV-algebra $\McNn$. \subsubsection*{Rational polyhedra and their regular triangulations, \cite{mun11, sta}} Let $n=1,2,\ldots$ be a fixed integer. A point $x$ lying in the $n$-cube $\cube$ is said to be {\it rational} if so are its coordinates. In this case, there are uniquely determined integers $0\leq c_i \leq d_i$, with $d_i\geq 1$, such that $x$ can be written as $ x=(c_1/d_1,\ldots,c_n/d_n),\,$ with $ c_i \,\,{\rm and} \,\, d_i$ relatively prime for each $i=1,\ldots,n$. By definition, the {\it homogeneous correspondent} of $x$ is the integer vector $ \tilde x = (dc_1/d_1,\ldots,dc_n/d_n,d) \in \mathbb Z^{n+1}, $ where $d>0$ is the least common multiple of $d_1,\ldots,d_n$. The integer $d$ is said to be the {\it denominator of } $x$, denoted $\den(x)$. A {\it rational polyhedron $P$ in $\cube$} is a subset of $\cube$ which is a finite union of simplexes in $\cube$ with rational vertices. A polyhedral complex $\Pi$ is said to be {\it rational} if the vertices of each polyhedron in $\Pi$ are rational. A rational $m$-dimensional simplex $T=\conv(v_0,\ldots,v_m)\subseteq \cube$ is {\it regular} if the set $\{\tilde v_0,\ldots,\tilde v_m\}$ of homogeneous correspondents of its vertices can be extended to a basis of the free abelian group $\mathbb Z^{n+1}$. An (always finite) simplicial complex $\Delta$ is {\it regular} if each one of its simplexes is regular. The {\it support} $|\Delta|$ of $\Delta$, i.e., the point-set union of all simplexes of $\Delta$, is the most general possible rational polyhedron in $\mathbb R^n$ (see \cite[2.10]{mun11}). We also say that $\Delta$ is a regular {\it triangulation} of the rational polyhedron $|\Delta|$.
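By way of illustration (a small worked instance of these definitions, not taken from the cited sources): the rational points $1/2$ and $2/3$ of $\I$ have denominators $2$ and $3$ and homogeneous correspondents $(1,2)$ and $(2,3)$. The $1$-simplex $\conv(1/2,2/3)$ is regular, since \[\det\left(\begin{array}{cc}1&2\\2&3\end{array}\right)=-1,\] so that $(1,2),(2,3)$ is a basis of $\mathbb Z^{2}$; by contrast, $\conv(0,2/3)$ is not regular, because the homogeneous correspondents $(0,1)$ and $(2,3)$ generate a subgroup of index $2$ in $\mathbb Z^{2}$, and hence cannot be extended to a basis.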
Regular simplexes and complexes are called ``unimodular'' in \cite{cigdotmun, mmm, marspa, mun08}, and ``(Farey) regular'' in \cite{mun-cpc}. In the literature on polyhedral topology, notably in \cite{sta}, the adjective ``regular'' has a different meaning. Throughout this paper, the adjective ``linear'' is to be understood in the affine sense. \subsubsection*{Finitely presented MV-algebras are dual to rational polyhedra, \cite{cab,marspa, mun11}} An MV-algebra $A$ is said to be {\it finitely presented} if it is isomorphic to the quotient MV-algebra $\McNn/\mathfrak j$, for some $n=1,2,\ldots,$ and some principal ideal $\mathfrak j$ of $\McNn$. If $\mathfrak j$ is generated by $g\in \McNn$ then $\McNn/\mathfrak j\cong\McN(g^{-1}(0))$, and $g^{-1}(0)$ is a (possibly empty) rational polyhedron in $\I^n$. Given two rational polyhedra $P\subseteq \cube$ and $Q\subseteq \kube$, a {\it $\mathbb{Z}$-map} is a continuous piecewise linear map $\eta\colon P \to Q$ such that each linear piece of $\eta$ has integer coefficients. Following \cite[3.2]{mun11}, given rational polyhedra $P\subseteq \cube$ and $Q\subseteq \kube$, we write $P\cong_\mathbb Z Q$ (and say that $P$ and $Q$ are {\it $\mathbb Z$-homeomorphic}) if there is a homeomorphism $h$ of $P$ onto $Q$ such that both $h$ and $h^{-1}$ are $\mathbb Z$-maps. We also say that $h$ is a {\it $\mathbb Z$-homeomorphism}. The functor $\McN$ sending each polyhedron $P$ to the MV-algebra $\McN(P)$, and each $\mathbb{Z}$-map $\eta\colon P\to Q$ to the map $\McN(\eta)\colon \McN(Q)\to \McN(P)$ defined by \begin{equation} \label{equation:functor} \McN(\eta)\colon f\mapsto f\circ \eta,\,\,\,\mbox{for any }f\in\McN(Q), \,\, \mbox{\rm where $\circ$ denotes composition,} \end{equation} determines a categorical equivalence between the category of rational polyhedra with $\mathbb Z$-maps and the opposite of the category of finitely presented MV-algebras with homomorphisms. For short, $\McN$ is a {\it duality} between these categories. See \cite{cab, marspa, mun11} for further details. \section{``Presenting'' MV-algebras by a finite list of MV-terms} Every finite set of MV-terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\ldots,X_n$ determines the subalgebra $A$ of $\McNn$ generated by the McNaughton functions $\hat\tau_1,\ldots,\hat\tau_k$. Then $A$ is finitely presented \cite[6.6]{mun11}. As we will show throughout this paper, when $A$ is presented via generators $\hat\tau_1,\ldots,\hat\tau_k,\,\,$ several decision problems turn out to be solvable. Our first example is as follows: \begin{theorem} \label{theorem:separation-is-decidable} The following {\rm separation problem} is decidable: \noindent ${\mathsf{INSTANCE}:}$ A list of MV-terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\ldots,X_n$. \noindent ${\mathsf{QUESTION}:}$ Does the set of functions $\{\hat\tau_1,\ldots,\hat\tau_k\}$ separate points of $\cube$? \end{theorem} \begin{proof} Let $g =(\hat\tau_1,\ldots,\hat\tau_k)\colon\cube\to\kube$ be defined by $g(x)=(\hat\tau_1(x),\dots,\hat\tau_k(x))$ for all $x \in \cube$. By \cite[3.4]{mun11}, the range $R$ of $g$ is a rational polyhedron in $\kube$. The separation problem equivalently asks if $g$ is one-to-one. Equivalently, is $g$ a piecewise linear homeomorphism of $\cube$ onto $R$? Fix $i\in \{1,\dots,k\}$. By induction on the number of connectives in the subterms of $\tau_i$ one can effectively list the linear pieces $l_{i1},\dots,l_{it_i}$ of the piecewise linear function $\hat\tau_i$.
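To sketch the induction (only the connectives $\neg,\oplus$ need to be handled, and the resulting list may contain redundant pieces): the only linear piece of $\hat X_{j}$ is the coordinate function $\pi_{j}$, and the only linear piece of $\hat 0$ is the constant $0$; if $l$ ranges over the linear pieces of $\hat\sigma$, then the linear pieces of $\widehat{\neg\sigma}=1-\hat\sigma$ are the functions $1-l$; and every linear piece of $\widehat{\sigma\oplus\rho}=\min(1,\hat\sigma+\hat\rho)$ occurs among the constant $1$ and the functions $l+l'$, with $l$ a linear piece of $\hat\sigma$ and $l'$ a linear piece of $\hat\rho$.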
Since all these linear pieces have integer coefficients, the routine stratification argument of \cite[2.1]{mun11} yields (as the list of the sets of vertices of its simplexes) a rational polyhedral complex $\Pi_i$ over $\cube$ such that $\hat\tau_i$ is linear on each simplex of $\Pi_i$. As in \cite[1.4]{sta}, we now subdivide $\Pi_i$ into a rational triangulation $\Delta_i$ without adding new vertices. By induction on $k$, we compute a rational triangulation $\Delta$ of $\cube$ which is a joint subdivision of $\Delta_1,\dots,\Delta_k$. Thus $g$ is linear over every simplex of $\Delta$. (Using the effective desingularization procedure of \cite[2.9--2.10]{mun11} we can even insist that $\Delta$ is regular.) We next write down the set $ \Delta' = \{g(T)\subseteq \I^k\mid T\in \Delta\}$ as the list of the sets of vertices of each convex polyhedron $g(T)$. The following two conditions are now routinely checked: \begin{itemize} \item[(I)] $ \Delta'$ is a rational triangulation (of $R$), and \item[(II)] $g$ maps the set of vertices of $\Delta$ one-to-one into the set of vertices of $\Delta'$. (By definition of $\Delta'$, $g$ maps vertices of $\Delta$ onto vertices of $\Delta'$.) \end{itemize} Indeed, the separation problem has a positive answer iff both (I) and (II) are satisfied. This completes the proof of the decidability of the separation problem. \end{proof} An MV-algebra $D$ is said to be {\it projective} \index{Projective} if whenever $\psi\colon A\to B$ is a surjective homomorphism and $\phi\colon D\to B$ is a homomorphism, there is a homomorphism $\theta\colon D\to A$ such that $\phi= \psi\circ \theta$. As is well known, $D$ is projective iff it is a {\it retract} of a free MV-algebra $F$: in other words, there is a homomorphism $\omega$ of $F$ onto $D$ and a one-to-one homomorphism $\iota$ of $D$ into $F$ such that the composite function $\omega\circ \iota$ is the identity function on $D$. A large class of finitely generated projective MV-algebras, and automatically, of finitely generated projective unital $\ell$-groups, can be effectively presented by their generators, combining the following theorem with the techniques of Section \ref{section:method} below: \begin{theorem} \label{theorem:projective} Let $A$ be a finitely generated subalgebra of the free MV-algebra $\McNn$. Suppose $A$ is separating (a decidable property, by Theorem \ref{theorem:separation-is-decidable}). Then $A$ is projective. \end{theorem} \begin{proof} Let $\{g_1,\ldots,g_k\}$ be a generating set of $A$. As already noted, the separation hypothesis means that the map $g\colon \cube\to [0,1]^k$ defined by $g(x)=(g_1(x),\ldots,g_k(x)),$ $\,\,(x\in\cube)$ is one-to-one. Since $g$ is continuous and $\cube$ is compact, $g$ is a homeomorphism of $\cube$ onto its range $R \subseteq [0,1]^k$. Further, $g$ is piecewise linear and each linear piece of $g$ is a polynomial with integer coefficients. Thus, $g$ is a $\mathbb Z$-map. By \cite[3.4]{mun11}, $R$ is a rational polyhedron. Further: \begin{itemize} \item[(i)] The piecewise linearity of $g$ yields a triangulation $\Delta_g$ of $\cube\,\,$ such that $g$ is linear over every simplex of $\Delta_g$. Since $g$ is a homeomorphism, the set $g(\Delta_g)=\{g(T)\mid T\in \Delta_g\}$ is a rational triangulation of $R$, making $R$ into what is known as an $n$-dimensional {\it PL-ball}. A classical result of Whitehead \cite{whi} shows that $R$ has a collapsible triangulation $\nabla$.
\item[(ii)] By \cite[4.10]{cab}, the polyhedron $R=g(\cube)$ has a point of denominator $1$, and has a {\it strongly regular} triangulation $\Delta$, in the sense that $\Delta$ is regular and for every maximal simplex $M$ of $\Delta$ the greatest common divisor of the denominators of the vertices of $M$ is equal to 1. \end{itemize} By \cite[6.1(III)]{cabmun-ccm}, the MV-algebra $\McN(R)$ is projective. By \cite[3.6]{mun11}, $A$ is isomorphic to $\McN(R)$, whence the desired conclusion follows. \end{proof} We refer to \cite{mun86} and \cite{cigdotmun} for background on unital $\ell$-groups and their categorical equivalence $\Gamma$ with MV-algebras. In particular, \cite[4.16]{mun86} deals with the freeness properties of the unital $\ell$-group $\McN_{\rm group}(\cube)$ of all (continuous) piecewise linear functions $f\colon \cube\to \mathbb R$ where each linear piece of $f$ has integer coefficients, and with the constant function 1 as the distinguished order unit. An equivalent definition of $\McN_{\rm group}(\cube)$ is given by \begin{equation} \label{equation:gamma} \Gamma(\McN_{\rm group}(\cube))=\McNn. \end{equation} Projective unital $\ell$-groups are the main concern of \cite{cabmun-ja, cabmun-ccm}. From the foregoing theorem we immediately have: \begin{corollary} If $(G,u)$ is a finitely generated unital $\ell$-subgroup of $\McN_{\rm group}(\cube)$, and is {\rm separating}, in the sense that for each $x\neq y\in\cube$ there exists $f\in G$ such that $f(x)\neq f(y)$, then $(G,u)$ is projective. {$\Box$} \end{corollary} \section{An MV-algebraic Stone-Weierstrass theorem} Let $P\subseteq\cube$ be a rational polyhedron and $\{g_1,\ldots,g_k\}$ a generating set of a subalgebra $A$ of $\McN(P)$. Under which conditions does $A$ coincide with $\McN(P)$? One obvious {\it necessary} condition is that $A$ be isomorphic to $\McN(P)$---but this condition alone is {\it not sufficient}: for instance, by \cite[3.6]{mun11}, the subalgebra of $\McN([0,1])$ generated by $x\oplus x$ is isomorphic to $\McN([0,1])$ but does not coincide with it, because the points $1/2$ and $1$ are not separated by this subalgebra, but are separated by the identity function $x\in \McN(\I)$. Another {\it necessary} condition is given by observing that $A$ must separate points of $P$. Again, this condition alone is {\it not sufficient} for $A$ to coincide with $\McN(P)$, as the following example shows: \begin{example} \label{example:schauder} {\rm Let $A$ be the subalgebra of the free one-generator MV-algebra $\McN([0,1])$ generated by the two elements $x\odot x$ and $\neg(x\oplus x)$. See the picture below. It is easy to see that $A$ is separating. However, $A$ does not coincide with $\McN([0,1])$: since both generators vanish at $1/2$, every $f\in A$ satisfies $f(1/2)\in\{0,1\}$; in particular no function $f\in A$ satisfies $f(1/2)=1/2$, while the McNaughton function $x\in \McN([0,1])$ does.} \end{example} \unitlength0.8cm \begin{picture}(5,4) \multiput(2.5,0)(5,0){2}{\line(1,0){4}} \multiput(2.5,0)(5,0){2}{\line(0,1){4}} \multiput(2.5,4)(5,0){2}{\line(1,0){4}} \multiput(6.5,0)(5,0){2}{\line(0,1){4}} \thicklines \put(2.5,4){\line(1,-2){2}} \put(4.5,0){\line(1,0){2}} \put(3.5,2){$\neg (x\oplus x)$} \put(9.5,0){\line(1,2){2}} \put(7.5,0){\line(1,0){2}} \put(9.5,2){$x\odot x$} \end{picture} While, individually taken, separation and isomorphism are necessary but not sufficient conditions for a subalgebra $A$ of $\McN(P)$ to coincide with $\McN(P)$, putting these two conditions together we will obtain in Theorem \ref{theorem:sw} an MV-algebraic variant of the Stone-Weierstrass theorem.
To this purpose, let us agree to say that a subalgebra $A$ of an MV-algebra $B$ is an {\it epi-subalgebra} if the inclusion map is an {\it epi-homomorphism}. Stated otherwise, for any two homomorphisms $h,g\colon B\to C$, if $h\,{\mathbin{\vert\mkern-0.3mu\grave{}}}\, A=g\,{\mathbin{\vert\mkern-0.3mu\grave{}}}\, A$ then $h=g$. We then have: \begin{lemma}\label{Lem:SepEpi} Let $P\subseteq [0,1]^n$ be a rational polyhedron and $A$ a finitely generated subalgebra of $\McN(P)$. Then the following conditions are equivalent: \begin{itemize} \item[(i)] $A$ is a separating subalgebra of $\McN(P)$; \item[(ii)] $A$ is an epi-subalgebra of $\McN(P)$. \end{itemize} \end{lemma} \begin{proof} Let $g_1,\ldots,g_n\in \McN(P)$ be a set of generators for $A$. Then $A$ is separating iff the map $g=(g_1,\ldots,g_n)\colon P\to[0,1]^n$ is one-to-one. By \cite[Theorem 3.2]{cab}, $g$ is one-to-one iff $g$ is a mono $\mathbb{Z}$-map. Recalling \eqref{equation:functor}, this latter condition is equivalent to stating that the map $\McN(g)\colon \McN([0,1]^n)\to \McN(P)$ is an epi-homomorphism. Equivalently, the range $A$ of $\McN(g)$ is an epi-subalgebra of~$\McN(P)$. \end{proof} \begin{lemma}\label{Lemma_iso} Let $P$ and $Q$ be rational polyhedra in $\cube$ and $\eta\colon P\to Q$ a one-to-one $\mathbb Z$-map. Then the following are equivalent: \begin{itemize} \item[(i)] $P$ is $\,\,\mathbb Z$-homeomorphic to $\eta(P)$; \item[(ii)] $\eta$ is a $\,\mathbb Z$-homeomorphism of $P$ onto $\eta(P)$. \end{itemize} \end{lemma} \begin{proof} For the nontrivial direction, let $\gamma\colon \eta(P)\to P$ be a $\mathbb Z$-ho\-m\-e\-o\-mor\-ph\-ism. Then $\gamma\circ \eta$ is a one-to-one $\mathbb Z$-map from $P$ into $P$. By \cite[Theorem 3.6]{cab}, $\gamma\circ \eta$ is a $\mathbb Z$-homeomorphism. It follows that $\eta=\gamma^{-1}\circ(\gamma\circ \eta)$ is a $\mathbb Z$-homeomorphism. \end{proof} We are now ready to prove the main result of this section: \begin{theorem} \label{theorem:sw} Let $P\subseteq [0,1]^n$ be a rational polyhedron and $A$ a subalgebra of $\McN(P)$. Then the following conditions are equivalent: \begin{itemize} \item[(i)] $A=\McN(P)$; \item[(ii)] $A$ is isomorphic to $\McN(P)$ and is a separating subalgebra of $\McN(P)$; \item[(iii)] $A$ is isomorphic to $\McN(P)$ and is an epi-subalgebra of $\McN(P)$. \end{itemize} \end{theorem} \begin{proof} (ii)$\Leftrightarrow$(iii) follows directly from Lemma \ref{Lem:SepEpi}. (i)$\Rightarrow$(ii) is trivial. To prove (ii)$\Rightarrow$(i), let $e\colon \McN(P)\to A$ be an isomorphism, and $\iota \colon A\to \McN(P)$ be the inclusion map. Since $e$ is bijective and $\iota$ is epi, then $\iota\circ e$ is epi. Similarly, since $e$ and $\iota$ are one-to-one, then so is $\iota\circ e$. Let $\eta\colon P \to P $ be the unique $\mathbb{Z}$-map such that $\iota\circ e(f)=f\circ \eta$ for each $f\in\McN(P)$. By \cite[Theorem 3.2]{cab}, $\eta$ is one-to-one and onto. By \cite[Theorem 3.6]{cab}, $\eta$ is a $\mathbb{Z}$-homeomorphism, that is, $\eta^{-1}$ is a $\mathbb{Z}$-map. Again with reference to \eqref{equation:functor}, the map $\McN(\eta^{-1})\colon \McN(P)\to A$ is the inverse of $\iota\circ e$. As a consequence, $\iota$ is surjective and $A=\iota(\McN(P))=\McN(P)$. \end{proof} Recalling \eqref{equation:gamma} we immediately have: \begin{corollary} \label{corollary:sw} A finitely generated separating unital $\ell$-subgroup of $\McN_{\rm group}(\cube)$ isomorphic to $\McN_{\rm group}(\cube)$ coincides with $\McN_{\rm group}(\cube)$. 
{$\Box$} \end{corollary} When $P=\cube$, Theorem \ref{theorem:sw} has the following stronger form: \begin{theorem} \label{theorem:sw-plus} Any $n$-generator separating subalgebra $A$ of $\McNn$ (equivalently, any $n$-generator epi-subalgebra of $\McNn$) coincides with $\McNn$. \end{theorem} \begin{proof} Let $\{g_1,\ldots,g_n\}$ be a generating set of $A$ in $\McNn$. By the separation hypothesis, the map $g=(g_1,\ldots,g_n)\colon \cube\to \cube$ is one-to-one. From \cite[Theorem 3.6]{cab} it follows that $g$ is a $\mathbb{Z}$-homeomorphism. With reference to \eqref{equation:functor}, the map $\McN(g)$ yields an isomorphism from $\McNn$ onto $\McNn$, whence $\McNn=A$. \end{proof} \begin{problem} {\rm Prove or disprove}: {\it Any epi-subalgebra $B$ of a semisimple MV-algebra $C$ isomorphic to $C$ coincides with $C$.} \end{problem} \begin{theorem} \label{theorem:decidable-sw} The following {\rm isomorphism problem} is decidable: \noindent ${\mathsf{INSTANCE}:}$ MV-terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\ldots,X_n$ such that the subalgebra $A$ of $\McNn$ generated by $\hat\tau_1,\ldots,\hat\tau_k$ is separating (a decidable condition, by Theorem \ref{theorem:separation-is-decidable}). \noindent ${\mathsf{QUESTION}:}$ Is $A$ isomorphic to $\McNn$? \end{theorem} \begin{proof} Let us write $g =(\hat\tau_1,\ldots,\hat\tau_k)\colon \cube\to\kube$. By hypothesis $g$ is a homeomorphism. Let $R$ be the range of $g$. As in the proof of Theorem~\ref{theorem:separation-is-decidable}, let $\Delta'$ be a rational triangulation of $[0,1]^n$ such that $g$ is linear on each simplex of $\Delta'$. Using the desingularization procedure of \cite[2.8]{mun11} (which is also found in \cite[Theorem 9.1.2]{cigdotmun}), we compute a regular subdivision $\Delta$ of $\Delta'$, by listing the sets of vertices of its simplexes. We have the following chain of equivalences: \begin{itemize} \item[] $A\cong\McNn$ \item[$\Leftrightarrow$] $\McN(R)\cong\McNn$ (because $A\cong \McN(R)$, by \cite[3.6]{mun11}) \item[$\Leftrightarrow$] $R \cong_{\mathbb Z}\cube$ (by duality, \cite[3.10]{mun11}) \item[$\Leftrightarrow$] $g$ is a $\mathbb Z$-homeomorphism of $\cube$ onto $R$ (by Lemma~\ref{Lemma_iso}) \item[$\Leftrightarrow$] $g(S)$ is regular for each $S\in\Delta$, and $\den(g(v))=\den(v)$ for each vertex $v$ of $\Delta$. \end{itemize} The last equivalence follows because the homeomorphism $g$ is a $\mathbb Z$-map of $\cube$ onto $R$ (see \cite[3.15(i)$\leftrightarrow$(iii)]{mun11}). \end{proof} \begin{proposition} \label{proposition:partial} The following problem is decidable: \noindent ${\mathsf{INSTANCE}:}$ MV-terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\ldots,X_n$. \noindent ${\mathsf{QUESTION}:}$ Is the subalgebra $A$ of $\McNn$ generated by $\hat\tau_1,\ldots,\hat\tau_k$ free and separating? \end{proposition} \begin{proof} Again, let us write $g =(\hat\tau_1,\ldots,\hat\tau_k)\colon \cube\to\kube$. Let $R$ be the range of $g$. We first {\it claim} that $A$ is separating and free iff $A$ is separating and isomorphic to $\McNn$. For the nontrivial direction, as repeatedly noted, since $A$ is separating and ${A\cong \McN(R)}$ then $g$ is a homeomorphism of $\cube$ onto $R$. Since, by \cite[4.18]{mun11}, $R$ is homeomorphic to the maximal spectral space $\mu(A)$ of $A$, it follows that $\mu(A)$ is homeomorphic to $\cube$. Further, for each $m=1,2,\ldots,$ the maximal spectral space of the free $m$-generator MV-algebra $\McN(\I^m)$ is homeomorphic to $\I^m$. As is well known, whenever $m\not=n$ the $m$-cube $\I^m$ is not homeomorphic to $\cube$.
Since by hypothesis $A$ is free and finitely generated, the only possibility for $A$ to be isomorphic to some free MV-algebra $ \McN(\I^m)$ is for $m=n$, which settles our claim. To conclude the proof, by Theorem \ref{theorem:sw}, $A$ is free and separating iff $A$ is (separating and) equal to $\McNn$. This is decidable, by Theorems \ref{theorem:separation-is-decidable} and \ref{theorem:decidable-sw}. \end{proof} \begin{problem}{\rm Prove or disprove the decidability of the following problems: \noindent (a) ${\mathsf{INSTANCE}:}$ MV-terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\ldots,X_n$. $\,{\mathsf{QUESTION}:}$ Is the subalgebra $A$ of $\McNn$ generated by $\hat\tau_1,\ldots,\hat\tau_k$ free? \noindent (b) ${\mathsf{INSTANCE}:}$ MV-terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\ldots,X_n$. $\,{\mathsf{QUESTION}:}$ Is the subalgebra $A$ of $\McNn$ generated by $\hat\tau_1,\ldots,\hat\tau_k$ isomorphic to $\McNn$? } \end{problem} By a quirk of fate, replacing isomorphism by equality in Problem (b) we have: \begin{proposition} \label{proposition:urto} The following problem is decidable: \noindent ${\mathsf{INSTANCE}:}$ MV-terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\ldots,X_n$. \noindent ${\mathsf{QUESTION}:}$ Does the subalgebra $A$ of $\McNn$ generated by $\hat\tau_1,\ldots,\hat\tau_k$ coincide with $\McNn$? \end{proposition} \begin{proof} By Theorem \ref{theorem:sw}, $A=\McNn$ iff $A$ is separating and isomorphic to $\McNn$ iff $A$ is separating and free, by the claim in the proof of Proposition \ref{proposition:partial}. The latter conjunction of properties is decidable, by the same proposition. \end{proof} \section{Subalgebras of $\McNn$ and rational triangulations of $\cube$} \label{section:method} In this section a method is introduced to write down a list of MV-terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\ldots,X_n$ in such a way that the subalgebra of $\McNn$ generated by the McNaughton functions $\hat\tau_1,\ldots,\hat\tau_k$ is simultaneously {\it separating and distinct} from $\McNn$. Conversely, every finitely generated separating proper subalgebra of $\McNn$ is obtainable by this method. In combination with Theorem \ref{theorem:projective}, a large class of projective MV-algebras can be effectively introduced by this method. The procedure starts with a rational triangulation $\Delta$ of $\cube$ equipped with a set $\mathcal H$ of functions $f\in \McNn$, called ``hats''. Each hat of $\mathcal H$ is pyramid-shaped and linear on each simplex of $\Delta$. One then lets $A$ be the algebra generated by $\mathcal H$. In more detail: \begin{definition} \label{definition:delta-basis} A {\it weighted triangulation} of $\cube$ is a pair $(\Delta,(a_1,\ldots,a_u)),$ where $\Delta$ is a triangulation of $\cube$ with rational vertices $v_1,\ldots,v_u$ and their {\it associated} positive integers $\,\, a_1,\dots,a_u, $ where for each $i=1,\dots,u$, $\,\,a_i$ is a divisor of $\den(v_i)$. We write $(\Delta,{\bf a})$ as an abbreviation of $(\Delta,(a_1,\ldots,a_u))$. The function $h_i\colon \cube\to \I$ which is linear on every simplex of $\Delta$ and satisfies $h_i(v_i)=a_i/\den(v_i)$ and $h_i(v_j)=0$ for each $j\not= i$ is called the $i$th {\it hat} of~$(\Delta,{\bf a})$. Given a weighted triangulation $(\Delta, {\bf a})$ of $\cube$, the set of its hats is denoted $\mathcal H_{\Delta, {\bf a}}$. 
If each linear piece of every hat $h_i\in \mathcal H_{\Delta, {\bf a}}$ has integer coefficients (i.e., $h_i\in \McNn$), we say that the set $\mathcal H_{\Delta, {\bf a}}$ is {\it basic}. \end{definition} \begin{lemma} \label{lemma:pugliese} {\rm (i)} Let $T$ be an $n$-simplex with rational vertices $w_0,\ldots,w_n\in \cube$. For each $i=0,\ldots,n\,$ let $\,l_i\colon T\to \I$ be the linear function satisfying $l_i(w_i)=1/\den(w_i)$ and $l_i(w_j)=0$ for $j\not=i$. Then $T$ is regular iff $\,l_i$ has integer coefficients, for each $i=0,\dots,n$. {\rm (ii)} Let $\Delta$ be a regular triangulation of $\cube$ with vertices $v_1,\ldots,v_u$. For each $i=1,\ldots,u$ let $a_i\geq 1$ be a divisor of $\den(v_i)$. Let ${\bf a}=(a_1,\ldots,a_u)$. Then $(\Delta, {\bf a})$ is a weighted triangulation and $\mathcal H_{\Delta, {\bf a}}$ is a basic set. {\rm (iii)} There is an effective (=Turing-computable) procedure to test if a weighted triangulation $(\Delta,{\bf a})$ of $\cube$ determines a basic set $\mathcal H_{\Delta, {\bf a}}$. \end{lemma} \begin{proof} (i) Let $M$ be the $(n+1)\times(n+1)$ matrix whose $i$th row consists of the integer coordinates of the homogeneous correspondent $\tilde{w_i}$ of $w_i$. Assume $T$ is not regular. By definition, $|\det(M)|\geq 2$. The absolute value of the determinant of the inverse matrix $M^{-1}$ is a rational number lying in the open interval $(0,1)$. So $M^{-1}$ is not an integer matrix. Since the $i$th column of $M^{-1}$ yields the coefficients of the linear function $l_i\,,$ not all these functions can have integer coefficients. Conversely, if $T$ is regular then $M^{-1}$ is an integer matrix, whose columns yield the coefficients of the linear functions $l_i, \,\,\,i=0,\dots,n$. (ii) Evidently, $(\Delta, {\bf a})$ is a weighted triangulation. Fix an $n$-simplex $T$ of $\Delta$ with its vertices $w_0,\ldots,w_n$ and corresponding linear functions $l_0,\dots,l_n.$ By (i), the coefficients of each $l_i$ are integers, and so are the coefficients of $a_il_i$. Thus, $\mathcal H_{\Delta, {\bf a}}$ is a basic set. (iii) For every $n$-simplex $T$ of $\Delta$ let $M_{T}$ be the $(n+1)\times(n+1)$ integer-valued matrix whose rows are the homogeneous correspondents of the vertices of $T$. Let $D_T$ be the $(n+1)\times(n+1)$ diagonal matrix whose diagonal entries are given by the subsequence of ${\bf a}$ associated to the vertices of $T$. The rational matrix $M_T^{-1}D_T$ is effectively computable from the input data $(\Delta,{\bf a})$. Arguing as in (i), it is easy to see that the set $\mathcal H_{\Delta, {\bf a}}$ is basic iff $M_T^{-1}D_T$ is an integer matrix for each $T$. \end{proof} The following is an example of a basic set $\mathcal H_{\Delta, {\bf a}}$ where the triangulation $\Delta$ is not regular. \begin{example} \label{example:continuation} {\rm Fix an integer $u\geq 3$, and let $V=\{k/u\mid k=0,1,\ldots,u\}$. Let $\Delta$ be the rational triangulation of $\I$ whose vertices are precisely those in $V$. Assume each vertex $v_k$ of $\Delta$ is associated to the integer $a_k=\den(v_k)$. We then have a weighted triangulation $(\Delta,{\bf a})$ of $\I$ and $\mathcal H_{\Delta, {\bf a}}$ is a basic set. For $k=1,\ldots,u-1$, the hat $h_k$ of $\mathcal H_{\Delta,{\bf a}}$ is a piecewise linear function with four linear pieces, connecting the five points of the unit square $(0,0),((k-1)/u,0),(k/u,1),((k+1)/u,0),(1,0)$. Each hat $h_k$ of $\mathcal H_{\Delta,{\bf a}}$ has value 1 at $v_k$.
In detail: \[ h_k(x)=\begin{cases} 0&\mbox{if }\,\, 0\leq x<\frac{k-1}{u} \\ u x-(k-1)&\mbox{if }\,\, \frac{k-1}{u}\leq x<\frac{k}{u} \\ -u x+k+1&\mbox{if }\,\, \frac{k}{u}\leq x<\frac{k+1}{u} \\ 0&\mbox{if }\,\, \frac{k+1}{u}\leq x\leq 1. \\ \end{cases} \] Further, \[ h_0(x)=\begin{cases} -u x+1&\mbox{if }\,\, 0\leq x<\frac{1}{u} \\ 0&\mbox{if } \,\,\frac{1}{u}\leq x\leq 1. \\ \end{cases}\quad h_u(x)=\begin{cases} 0&\mbox{if }\,\, 0\leq x<\frac{u-1}{u} \\ u x-(u-1)&\mbox{if }\,\, \frac{u-1}{u}\leq x\leq 1.\\ \end{cases} \] A moment's reflection shows that the subalgebra of $\McN(\I)$ generated by $\mathcal H_{\Delta,{\bf a}}$ is separating and differs from $\McN(\I)$. } \end{example} The next two results show that basic sets $\mathcal H_{\Delta, {\bf a}}$ generate {\it all possible} separating proper subalgebras of free MV-algebras and unital $\ell$-groups: \begin{theorem} \label{theorem:delta-basis} Let $A$ be a subalgebra of $\McNn$. \begin{itemize} \item[(i)] $A$ is finitely generated, separating, and distinct from $\McNn$ iff $A$ is generated by a basic set $\mathcal H_{\Delta,{\bf a}}\,,$ for some weighted triangulation $(\Delta,{\bf a})= (\Delta,(a_1,\ldots,a_u))$ of $\cube$ such that $a_j\not=1$ for some $j =1,\dots,u$. \item[(ii)] If $A$ is finitely generated, separating, and distinct from $\McNn$ then for every weighted triangulation $(\Delta,{\bf a})= (\Delta,(a_1,\ldots,a_u))$ of $\cube$ such that $\mathcal H_{\Delta,{\bf a}}$ is a basic generating set of $A$, it follows that $a_j\not=1$ for some $j=1,\dots,u$. \end{itemize} \end{theorem} \begin{proof} (i) $(\Leftarrow)$ We have only to check $A\not=\McNn$. Let $v_1,\ldots,v_u$ be the vertices of $\Delta$, and fix an index $j$ such that $a_j\not=1$. For each $f\in A$ the value $f(v_j)$ is an integer multiple of $a_j/\den(v_j)$, whence $f(v_j) \not=1/\den(v_j)$. We {\it claim} that some function in $\McNn$ attains the value $1/\den(v_j)$ at $v_j$. As a matter of fact, desingularization as in \cite{mun88} or \cite[5.2]{mun11} yields a regular triangulation $\Sigma$ of $\I^n$ such that $v_j$ is one of the vertices of $\Sigma$. Let $h_j\colon \cube \to \I$ be the Schauder hat of $\Sigma$ at $v_j$, as in \cite[9.1.3--9.1.5]{cigdotmun}. By definition, $h_j$ is linear on every simplex of $\Sigma,\,\,$ $h_j(v_j)=1/\den(v_j)$ and $h_j(w)=0$ at every other vertex $w$ of $\Sigma$. Our claim is settled. By \cite[9.1.4]{cigdotmun}, $h_j$ belongs to $\McNn$. So $A$ is strictly contained in $\McNn$. $(\Rightarrow)$ Let $\{g_1,\ldots,g_k\}$ be a generating set of $A$. Let $g=(g_1,\ldots,g_k)\colon \cube\to [0,1]^k$, and $R$ be the range of $g$. Since $A$ is separating, $g^{-1}$ is a piecewise linear homeomorphism of $R$ onto $\cube$ and each linear piece of $g^{-1}$ has rational coefficients---just because each linear piece of $g$ has integer coefficients. Let $\nabla$ be a regular triangulation of $R$ such that $g^{-1}$ is linear over every simplex of $\nabla$. The computability of $\nabla$ follows by direct inspection of the proof of \cite[2.9]{mun11}. Let $w_1,\ldots,w_u$ be the vertices of $\nabla$. For each $i=1,\ldots,u$ let $v_i=g^{-1}(w_i)$. Since $g$ has integer coefficients there is an integer $1\leq a_i$ such that $\den(v_i)=a_i\cdot \den(w_i)$. The set of simplexes $$ \Delta=\{g^{-1}(T)\subseteq \cube \mid T\in\nabla\} $$ is a rational triangulation of $\cube$. Since $\McNn$ strictly contains $A$, by Theorem \ref{theorem:sw} it is impossible for $A$ to be isomorphic to $\McNn$. Since $A\cong\McN(R)$ by \cite[3.6]{mun11}, it follows that $\McN(R)$ is not isomorphic to $\McNn$.
By duality \cite[3.10]{mun11}, $\cube$ is not $\mathbb Z$-homeomorphic to $R$. As observed in the proof of Theorem~\ref{theorem:decidable-sw}, $g$ is not a $\mathbb Z$-homeomorphism of $\cube$ onto $R$. By \cite[3.15]{mun11} there is a rational point $r\in\cube$ such that $\den(g(r))$ is a divisor of $\den(r)$ different from $\den(r)$. Stated otherwise, $\den(r)=m\cdot \den(g(r))$ with $m\not= 1$. By \cite[5.2]{mun11}, it is no loss of generality to assume that $\nabla$ has $g(r)$ among its vertices. Thus, for some $j$ we can assume that $r$ is the $j$th vertex of $\Delta$ and write $$ r=v_j,\,\,\,g(r)=w_j,\,\,\,\,1\not=a_j=\frac{\den(v_j)}{\den(w_j)}. $$ As in the proof of (i) above, let $\mathcal H_{\nabla}=\{h_1,\ldots,h_u\}$ be the set of Schauder hats of $\nabla$. By Lemma \ref{lemma:pugliese}(i), for every $i=1,\ldots,u$ each linear piece of $h_i$ has integer coefficients. By \cite[5.8]{mun11}, $\,\,\,\mathcal H_\nabla$ generates $\McN(R)$. By construction of $\nabla$, the composite function $h_i\circ g$ belongs to $\McNn$, has value $a_i/\den(v_i)\geq1/\den(v_i)$ at $v_i$, has value zero at any other vertex of $\Delta,$ and is linear over every simplex of $\Delta$. Therefore, the weighted triangulation $(\Delta,(a_1,\ldots,a_u))=(\Delta,{\bf a})$ determines the basic set $$ \mathcal H_{\Delta,{\bf a}}=\mathcal H_\nabla\circ g=\{h_i\circ g\mid h_i\in \mathcal H_\nabla, \,\,\, i=1,\ldots,u \}, $$ which generates $A$, just as $\mathcal H_\nabla$ generates $\McN(R)\cong A$. (ii) We argue by cases: In case $\Delta$ is not regular, let $T=\conv(w_0,\ldots,w_n)$ be an $n$-simplex of $\Delta$ that fails to be regular. By hypothesis, the linear pieces of each hat of $\mathcal H_{\Delta,{\bf a}}$ have integer coefficients, and so do, in particular, the linear functions $l_0,\ldots,l_n\colon \mathbb R^n\to\mathbb R$ given by the following stipulations, for each $t=0,\ldots,n:$ \begin{itemize} \item[---] $\,\,\,l_t$ is linear over $T$, \item[---] $\,\,\,l_t(w_t)=a_t/\den(w_t)$, where $a_t$ denotes the integer associated to the vertex $w_t$, and \item[---] $\,\,\,l_t(w_s)=0$ for each $s\not=t$. \end{itemize} By Lemma \ref{lemma:pugliese}(i), not all $a_t$ can be equal to 1. In case $\Delta$ is regular, suppose $a_i=1$ for each $i=1,\ldots,u$ (absurdum hypothesis). Then $\mathcal H_{\Delta,{\bf a}}$ is precisely the set of Schauder hats of $\Delta$. By \cite[5.8]{mun11}, $\,\,\mathcal H_{\Delta,{\bf a}}$ generates $\McNn$, which contradicts the hypothesis $A\not=\McNn$. \end{proof} \begin{corollary} \label{corollary:delta-basis} Let $(G,u)$ be a unital $\ell$-subgroup of $\McN_{\rm group}(\I^n)$. \begin{itemize} \item[(i)] $(G,u)$ is finitely generated, separating, and distinct from $\McN_{\rm group}(\I^n)$ iff $(G,u)$ is generated by a basic set $\mathcal H_{\Delta,{\bf a}}$ for some weighted triangulation $(\Delta,{\bf a})$ of $\cube$ such that $a_j\not=1$ for some $j$. \item[(ii)] If $(G,u)$ is finitely generated, separating, and distinct from $\McN_{\rm group}(\I^n)$ then for every weighted triangulation $(\Delta,{\bf a})$ of $\cube$ such that $\mathcal H_{\Delta,{\bf a}}$ is a basic generating set of $(G,u)$, it follows that $a_j\not=1$ for some $j$. $\Box$ \end{itemize} \end{corollary} \section{Computing a basis of a finitely generated subalgebra of $\McNn$} \label{section:basis} By \cite[6.6]{mun11}, every finitely generated subalgebra $A$ of $\McNn$ is finitely presented, i.e., $A$ is a principal quotient of a free MV-algebra.
Equivalently, \cite[6.1, 6.3]{mun11}, $\,\,A$ has a {\it basis}, i.e., a set of nonzero elements $\mathcal B=\{b_1,\ldots,b_z\}$, together with integers $1\leq m_1,\dots,m_z$ (called ``multipliers'') such that \begin{itemize} \item[(a)] $\mathcal B$ generates $A$. \item[(b)] $m_1 b_1+\dots+m_z b_z=1$ where the sum is computed in the unital $\ell$-group $(G,u)$ of $A$ given by $\Gamma(G,u)=A$. See \cite[6.1(iii')]{mun11}. \item[(c)] For each $k$-element subset $C=\{b_{i_1},\ldots,b_{i_k}\}$ of $\mathcal B$ with $b_{i_1}\wedge\ldots\wedge b_{i_k}\not=0$, the set of maximal ideals of $A$ containing $\mathcal B\setminus C$ is homeomorphic to a $(k-1)$-simplex, \,\,\,$(k=1,2,\ldots)$. \end{itemize} We now prove that every basic set is a basis of the MV-algebra it generates. \begin{proposition} \label{proposition:delta-basis} Suppose the weighted triangulation $(\Delta,(a_1,\ldots,a_u))=(\Delta,{\bf a})$ of $\cube$ determines the basic set $\mathcal H_{\Delta,{\bf a}}$. Let $v_1,\dots,v_u$ be the vertices of $\Delta$. Then the MV-subalgebra $A$ of $\McNn$ generated by $\mathcal H_{\Delta,{\bf a}}$ is separating, and $\mathcal H_{\Delta,{\bf a}}$ is a basis of $A$, whose multipliers $m_i$ coincide with $\den(v_i)/a_i$ for each $i=1,\dots,u$. \end{proposition} \begin{proof} $A$ is a separating subalgebra of $\McNn$ because it is generated by the hats of a triangulation of $\cube$. From the definition of $\mathcal H_{\Delta,{\bf a}}$ together with \cite[4.18]{mun11} and \cite[6.1(ii')]{mun11}, it follows that the conclusion of condition (c) above is equivalent to saying that the set of points $x\in\cube$ such that $m_{i_1}b_{i_1}(x)+\cdots+m_{i_k}b_{i_k}(x)=1$ is homeomorphic to a $(k-1)$-simplex. It is now easy to see that $\mathcal H_{\Delta,{\bf a}}$ satisfies condition (c). (See the proof of \cite[5.8(ii)]{mun11}). Condition (b) is trivially satisfied. Therefore, $\mathcal H_{\Delta,{\bf a}}$ is a basis of $A$. \end{proof} \begin{remark} By Lemma \ref{lemma:pugliese}(iii), one can decide whether a weighted triangulation $\Delta$ with multiplicities ${\bf a}$ determines a basic set $\mathcal H_{\Delta,{\bf a}}$. By contrast, the decidability of the problem whether $\{\hat\tau_1,\ldots,\hat\tau_k\}$ is a basis of the MV-algebra it generates is open. See \cite[p.213]{mun11}. Interestingly enough, perusal of \cite[\S 6.5]{mun11} shows that {\it every basis of $A\subseteq \McNn$ becomes a basic generating set of $A$ after finitely many binary algebraic blowups.} \end{remark} In the light of the foregoing proposition, the following theorem provides an effective method to transform every generating set of $A\subseteq \McNn$ into a basis of~$A$: \begin{theorem} [Effective basis generation] \label{theorem:effective} Every list of terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\ldots,X_n,$ such that the subalgebra $A$ of $\McNn$ generated by the McNaughton functions $\hat\tau_1,\ldots,\hat\tau_k$ is separating, can be effectively transformed into a weighted triangulation $(\Delta,{\bf a})= (\Delta, (a_1,\dots,a_z))$ of the $n$-cube $\cube$, with vertices $v_1,\dots,v_z$, in such a way that $\mathcal H_{\Delta, {\bf a}}$ is a basic generating set of $A$. We can effectively write down MV-terms $\sigma_1,\ldots,\sigma_z$ representing the hats of $\mathcal H_{\Delta, {\bf a}}$. \end{theorem} \begin{proof} Let $g =(\hat\tau_1,\ldots,\hat\tau_k) \colon \cube\to\kube$, and let $R$ be the range of $g$. Since $A$ is separating, $g$ is a piecewise linear homeomorphism onto $R$.
The transformation proceeds as follows: \begin{itemize} \item[(i)] Arguing as in the proof of Theorem \ref{theorem:separation-is-decidable}, we first compute from $\tau_1,\ldots,\tau_k$ a regular triangulation $\Sigma$ of $\cube$ such that $g$ is linear over every simplex of $\Sigma$ (also see \cite[18.1]{mun11}). $\Sigma$ is written down as the list of the sets of rational vertices of its simplexes. \item[(ii)] We write down the rational triangulation $g(\Sigma)=\{g(T)\mid T\in \Sigma\}$ of $R$ given by the $g$-images of the simplexes of $\Sigma$. This is effective, because the linear pieces of each function $\hat\tau_i$ are computable from the MV-term $\tau_i,$ and so is the linear piece $l_T$ of $g\,{\mathbin{\vert\mkern-0.3mu\grave{}}}\, T.$ \item[(iii)] Using desingularization (\cite[2.9, 18.1]{mun11}), we subdivide $g(\Sigma)$ into a {\it regular} triangulation $\nabla$ of $R$. Let $\Delta$ be the rational subdivision of $\Sigma$ defined by $g(\Delta)=\nabla$. Since $g$ determines a computable one-to-one correspondence between the $n$-simplexes of $\nabla$ and those of $\Delta$, then also $\Delta$ is computable. \item[(iv)] Let us write $\mathcal H_\nabla$ for the set of Schauder hats of $\nabla$, \cite[5.7]{mun11}. For each vertex $w$ of $\nabla$, we can effectively write down an MV-term $\gamma_w(X_1,\dots,X_k)$ such that the restriction to $R$ of the associated McNaughton function $\hat\gamma_w$ is the hat of $\mathcal H_\nabla$ with vertex $w$. Thus $\mathcal H_\nabla$ can be effectively computed. The routine verification can be made arguing as in the proof of \cite[9.1.4]{cigdotmun}. \item[(v)] Observe that $g^{-1}$ is linear on every simplex of $\nabla$, and is an explicitly given piecewise linear function. As a matter of fact, for each $n$-simplex $U$ of $\nabla$, the map $g^{-1}\,{\mathbin{\vert\mkern-0.3mu\grave{}}}\, U$ is an $n$-tuple of linear polynomials with rational coefficients that can be effectively computed from the $k$-tuple of linear polynomials with integer coefficients $(\hat\tau_1,\ldots,\hat\tau_k)\,{\mathbin{\vert\mkern-0.3mu\grave{}}}\, g^{-1}(U)$. For each $i=1,\dots,k$, arguing by induction on the number of connectives of all subterms of $\tau_i$ one effectively computes the linear function $\hat\tau_i\,{\mathbin{\vert\mkern-0.3mu\grave{}}}\, g^{-1}(U)$. \item[(vi)] Let $v_1,\dots,v_z$ be the vertices of $\Delta$. For each $j=1,\dots,z$, we set \begin{equation} \label{equation:ratios} a_j=\den(v_j)/\den(g(v_j)). \end{equation} Since $g$ is piecewise linear with integer coefficients, each $a_j$ is an integer. Recalling that $\circ$ denotes composition, the weighted triangulation $(\Delta,{\bf a})$ determines the basic set $\mathcal H_{\Delta,{\bf a}}=\{h\circ g\mid h\in \mathcal H_\nabla\}$. By \cite[5.8]{mun11}, $\mathcal H_\nabla$ generates $\McN(R)$, whence $\mathcal H_{\Delta,{\bf a}}$ generates $A$. \end{itemize} Since $\Delta$ is explicitly given by listing the sets of the vertices of its simplexes, and the integers $a_1,\dots,a_z$ associated to the vertices $v_1,\dots,v_z$ of $\Delta$ are computed from \eqref{equation:ratios}, the proof of \cite[9.1.4]{cigdotmun} yields an effective procedure to write down MV-terms $\sigma_1,\ldots,\sigma_z$ such that $\{\hat\sigma_1,\ldots,\hat\sigma_z\}= \mathcal H_{\Delta,{\bf a}}$. 
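To illustrate the arithmetic in \eqref{equation:ratios} on a hypothetical one-variable instance (not taken from the construction above): if $n=k=1$, $v_j=\frac{1}{6}$, and the linear piece of $g$ containing $v_j$ is the integer-coefficient map $x\mapsto 2x$, then $g(v_j)=\frac{1}{3}$, so $\den(v_j)=6$, $\den(g(v_j))=3$, and the weight attached to $v_j$ is $a_j=6/3=2$. In general, $\den(g(v_j))$ divides $\den(v_j)$ precisely because the linear pieces of $g$ have integer coefficients, so each $a_j$ is a positive integer.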
\end{proof} \section{Recognizing subalgebras of $\McNn$} \label{section:recognizing} A main application of Theorem \ref{theorem:effective} is given by the following decidability result: \begin{theorem} \label{theorem:identity} The following problem is decidable: \noindent ${\mathsf{INSTANCE}:}$ MV-terms $\tau_1,\ldots,\tau_k$ and $ \sigma_1,\ldots,\sigma_l$ in the variables $X_1,\ldots,X_n$, such that both sets $\{\hat\tau_1,\ldots,\hat\tau_k\}$ and $\{\hat\sigma_1,\ldots,\hat\sigma_l\}$ separate points (the separation property being decidable by Theorem \ref{theorem:separation-is-decidable}). \noindent ${\mathsf{QUESTION}:}$ Does the subalgebra $A$ of $\McNn$ generated by $\hat\tau_1,\ldots,\hat\tau_k$ coincide with the subalgebra $A'$ of $\McNn$ generated by $\hat\sigma_1,\ldots,\hat\sigma_l$? \end{theorem} \begin{proof} It is enough to decide $A\supseteq A'$. By Theorem \ref{theorem:effective} we can safely suppose that for some weighted triangulations $(\Delta,{\bf a})$ and $(\Delta',{\bf a}')$, the MV-algebras $A$ and $A'$ are respectively generated by the basic sets \begin{equation} \label{equation:indici} \mathcal H_{\Delta,{\bf a}}=\{h_1,\ldots,h_r\} \mbox{ \,\,\,and\,\,\, } \mathcal H_{\Delta',{\bf a}'}=\{h'_1,\ldots,h'_s\}. \end{equation} Let ${\bf a}= (a_1,\ldots,a_r),\,\,\,{\bf a}'=(a'_1,\ldots,a'_s)$. Let $h=(h_1,\ldots,h_r)\colon \cube\to \kube$ and let $R=h(\cube)$ be the range of $h$. The separation hypothesis ensures that $h$ is a piecewise linear homeomorphism of $\cube$ onto $R$. For each $i=1,\dots,r,$ all linear pieces of $h_i$ have integer coefficients. As in the proof of Theorem \ref{theorem:separation-is-decidable} (also see \cite[18.1]{mun11}), we now compute a rational subdivision $\Delta^*$ of $\Delta'$ such that $h$ is linear on each simplex of $\Delta^*$ (by definition of $\mathcal H_{\Delta,{\bf a}}$, $\Delta^*$ is automatically a subdivision of $\Delta$). Then the set $$ h(\Delta^*)=\{h(T)\mid T\in \Delta^*\} $$ is a rational triangulation of $R$. Let $\nabla$ be the regular subdivision of $h(\Delta^*)$ obtained by the desingularization process in \cite[2.9]{mun11}. Let $w_1,\ldots,w_u$ be the vertices of $\nabla$ and $p_1,\ldots,p_u$ their respective Schauder hats. Since all steps of the desingularization process in \cite[2.9]{mun11} are effective, $\nabla$ is effectively computable. Upon writing $$ \Sigma=h^{-1}(\nabla)=\{h^{-1}(U)\mid U\in \nabla\}, $$ we get a rational triangulation $\Sigma$ of $\cube$ that jointly subdivides $\Delta$ and $\Delta'$ (whence $h$ is linear on each simplex of $\Sigma$). Evidently, $\Sigma$ is computable as the list of the sets of vertices of its simplexes. Let $v_1=h^{-1}(w_1),\ldots,v_u=h^{-1}(w_u)$ be the vertices of $\Sigma$. The piecewise linear functions $$ q_1=p_1\circ h,\ldots,q_u=p_u\circ h $$ have integer coefficients. Since $\den(h(v_i))$ is a divisor of $\den(v_i)$, the rational number $q_i(v_i)=1/\den(h(v_i))$ is an integer multiple of $1/\den(v_i),$ say $$\mathbb Z \ni c_i=\den(v_i)/\den(h(v_i)),\,\,\, (i=1,\dots,u).$$ Letting now ${\bf c}=(c_1,\ldots,c_u)$, we have a weighted triangulation $(\Sigma,{\bf c})$ such that $\mathcal H_{\Sigma,{\bf c}}$ is a basic generating set of $A$, just as the set $\{p_1,\ldots,p_u\}$ generates $\McN(R)$, by \cite[5.8]{mun11}.
Recalling \eqref{equation:indici}, we are now ready to decide whether $A\supseteq A'$ as follows: For every $j=1,\ldots,s$ and $n$-simplex $T\in \Sigma$ let $f_{T,j}=h'_j\,{\mathbin{\vert\mkern-0.3mu\grave{}}}\, T$ be the restriction to $T$ of the $j$th hat of the basic set $\mathcal H_{\Delta',{\bf a}'}$. Observe that $f_{T,j}$ is linear on $T$ and has integer coefficients. We now check whether $f_{T,j}$ is obtainable as a sum of positive ($>0$) integer multiples of some of the hats $q_i\,{\mathbin{\vert\mkern-0.3mu\grave{}}}\, T$. Let $V_T$ be the set of vertices of $\Sigma$ lying in $T$, with the corresponding set of hats $H_T=\{q_v\mid v\in V_T\} \subseteq \{q_1,\ldots,q_u\}$, the latter being the set of hats of the basic set $\mathcal H_{\Sigma,{\bf c}}$ of $A$. \noindent {\it Case 1:} For each $j=1,\ldots,s,$ $\,\,\,T\in \Sigma$ and $v\in V_T$, $q_{v}(v)$ is a divisor of $f_{T,j}(v)$. Then the linear function $f_{T,j}$ coincides (over $V_T$, and hence) over $T$ with a suitable sum of positive integer multiples of the hats of $H_T$. Direct inspection shows that $h'_j$ is a sum of integer multiples of some hats in $\mathcal H_{\Sigma,{\bf c}}$. It follows that every $h'_j$ belongs to the subalgebra generated by $\mathcal H_{\Sigma,{\bf c}}$, and we conclude $A\supseteq A'$. \noindent {\it Case 2:} For some $j=1,\ldots,s,$ $\,\,\,T\in \Sigma$ and $v\in V_T$, $q_{v}(v)$ is not a divisor of $f_{T,j}(v)$. The possible values of functions in $A$ at $v$ are integer multiples of $q_{v}(v)$, because all other hats of $\mathcal H_{\Sigma,{\bf c}}$ vanish at $v$. Therefore, $h'_j$ does not belong to $A$, whence the inclusion $A\supseteq A'$ fails. This completes the decision procedure for $A= A'$. \end{proof} \begin{problem} {\rm Prove or disprove the decidability of the following problem: \noindent ${\mathsf{INSTANCE}:}$ MV-terms $\tau_1,\ldots,\tau_k$ and $\sigma_1,\ldots,\sigma_l$ in the variables $X_1,\ldots,X_n$. \noindent ${\mathsf{QUESTION}:}$ Is the subalgebra of $\McNn$ generated by $\hat\tau_1,\ldots,\hat\tau_k$ equal to the subalgebra of $\McNn$ generated by $\hat\sigma_1,\ldots,\hat\sigma_l$? } \end{problem} \section{Conclusions: Two types of presentations} \label{section:final} Throughout this paper, MV-algebras $A\subseteq \McNn$ (resp., unital $\ell$-groups $(G,u)\subseteq \McN_{\rm group}(\I^n)$\,) have been ``effectively presented'' by a finite string of symbols $\tau_1,\ldots,\tau_u$, where each $\tau_i= \tau_i(X_1,\ldots,X_n)$ is an MV-term (resp., a unital $\ell$-group term) in the variables $X_1,\ldots,X_n$. Traditional finite presentations are instead defined as we did in Section \ref{section:presentation}, by a single $k$-variable term $\sigma$, letting $Z_\sigma\subseteq \I^k$ be the zeroset $\hat\sigma^{-1}(0)$ of the McNaughton function $\hat\sigma\colon \I^k\to \I$ associated to $\sigma$, and setting \begin{equation} \label{equation:finale} A_\sigma=\McN(\I^k)/ \mathfrak j_\sigma \cong \McN(Z_\sigma), \end{equation} where $\mathfrak j_\sigma$ is the principal ideal of $\McN(\I^k)$ generated by $\hat\sigma$. One similarly defines finite presentations of unital $\ell$-groups, \cite{cab,cabmun-ccm, mun08}. By \cite[6.6]{mun11}, every finitely generated subalgebra of $\McNn$ is finitely presented, but not every finitely presented MV-algebra is isomorphic to a subalgebra of a free MV-algebra. For instance, $\{0,1\}\times\{0,1\}$ (and more generally, every non-simple finite MV-algebra) is finitely presented but is not isomorphic to a subalgebra of a free MV-algebra, because its maximal spectral space is disconnected.
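To make the last example concrete (a one-variable illustration): with $k=1$ and $\sigma(Y)=Y\wedge\neg Y$, the McNaughton function $\hat\sigma(y)=\min(y,1-y)$ has zeroset $Z_\sigma=\{0,1\}$, so \eqref{equation:finale} gives $A_\sigma\cong\McN(\{0,1\})$. Every McNaughton function takes values in $\{0,1\}$ at the two points $0$ and $1$, and all four pairs of values are attained (by the functions $0$, $1$, $y$ and $\neg y$), whence $A_\sigma\cong\{0,1\}\times\{0,1\}$; its maximal spectral space is the disconnected two-point space $Z_\sigma$.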
Thus one may reasonably expect that decision problems that are unsolvable for finitely presented $\ell$-groups and open for finitely presented unital $\ell$-groups (equivalently, for finitely presented MV-algebras), turn out to be decidable for {\it separating} subalgebras of $\McNn$ presented via their generators $\hat\tau_1,\ldots,\hat\tau_u$. Here is an example of this state of affairs: \begin{itemize} \item[---] As shown in \cite[Theorem D]{glamad}, for each fixed $k\geq 6$ the property of being a free $k$-generator $\ell$-group is undecidable. This follows from Markov's celebrated unrecognizability theorems (see \cite{sht} for a detailed account). \item[---] The same problem for unital $\ell$-groups and MV-algebras is open, except for $k=1$, where the problem is decidable (see \cite[18.3]{mun11}). \item[---] Theorem \ref{theorem:decidable-sw} shows the decidability of the problem whether a finitely generated {\it separating} subalgebra of $\McNn$ is isomorphic to $\McNn$. \end{itemize} The separation hypothesis plays a crucial role in most decidability results of the earlier sections. As a matter of fact, the final two results of this paper will show that (without the separation hypothesis), for all decision problems concerning finitely generated subalgebras $A$ of free algebras, it is immaterial whether $A$ is presented by a list of generators $\hat\tau_1,\ldots,\hat\tau_u$ or by a principal ideal $\mathfrak j_\sigma$ of some free algebra. \begin{theorem} \label{theorem:generators-to-quotient} There is a computable transformation of every presentation of an MV-algebra $A\subseteq \McNn$ by a list of generators $\hat\tau_1,\ldots,\hat\tau_u$, into a presentation of an isomorphic copy $A_\sigma$ of $A$ as a principal quotient of some finitely generated free MV-algebra as in \eqref{equation:finale}. \end{theorem} \begin{proof} Following \cite[18.1]{mun11}, from the input MV-terms $\tau_1,\ldots,\tau_u$ we first compute the rational polyhedron $R\subseteq \I^u$ given by the range of the function $g=(\hat\tau_1,\ldots, \hat\tau_u)$. By \cite[3.6]{mun11}, $A\cong \McN(R)$. Next, in the light of \cite[2.9, 18.1]{mun11}, we list the sets of vertices of the simplexes of a regular triangulation $\Delta$ of $\kube$ such that the set $\Delta_R=\{T\in \Delta\mid T\subseteq R\}$ is a triangulation of $R$. Without loss of generality, $\Delta_R$ is {\it full}: any simplex of $\Delta$ all of whose vertices lie in $\Delta_R$ is a simplex of $\Delta_R$. Following the proof of \cite[9.1.4(ii)]{cigdotmun}, we compute MV-terms $\rho_1,\dots, \rho_w$ in the variables $Y_1,\dots,Y_u,$ whose associated McNaughton functions $\hat\rho_1,\dots, \hat\rho_w$ constitute the set $\mathcal H_\Delta$ of Schauder hats of $\Delta$, as defined in \cite[9.1.3]{cigdotmun}. The $\oplus$-sum of all hats with vertices not belonging to $R$ (which coincides with their pointwise sum taken in the unital $\ell$-group $\McN_{\rm group}(R)$) provides an MV-term $\sigma(Y_1,\ldots,Y_u)$, together with its associated McNaughton function $\hat\sigma\in \McN(\I^u)$. Since $\Delta_R $ is full, the zeroset $Z_\sigma$ of $\hat\sigma$ coincides with $R$. The isomorphisms $$A\cong\McN(g(\I^n)) = \McN(R)=\McN(Z_{\sigma})\cong \McN(\I^u)/\mathfrak j_\sigma = A_\sigma $$ yield a finite presentation of $A$ as a principal quotient of $ \McN(\I^u)$.
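By way of illustration (a toy one-dimensional run of the above transformation, not needed for the proof): take $n=u=1$ and $\tau_1=X\wedge\neg X$, so that $R=\hat\tau_1(\I)=[0,\frac{1}{2}]$. The regular triangulation $\Delta$ of $\I$ with vertices $0,\frac{1}{2},1$ satisfies $\Delta_R=\{[0,\frac{1}{2}]\}$ together with its faces, and this $\Delta_R$ is full. The only Schauder hat of $\Delta$ whose vertex lies outside $R$ is the hat at $1$, namely $y\mapsto\max(0,2y-1)=\hat\sigma(y)$ for $\sigma=Y_1\odot Y_1$, and indeed $Z_\sigma=[0,\frac{1}{2}]=R$, as the proof predicts.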
\end{proof} Conversely, we can prove: \begin{theorem} \label{theorem:finale} For an arbitrary input MV-term $\sigma=\sigma(Y_1,\ldots,Y_k)$, we have: \begin{itemize} \item[(i)] It is decidable whether the MV-algebra $A_\sigma=\McN(\I^k)/ \mathfrak j_\sigma\cong \McN(Z_\sigma)$ is isomorphic to a subalgebra of a free MV-algebra. \item[(ii)] In case $A_\sigma$ is isomorphic to a subalgebra of a free MV-algebra, $\sigma$ can be effectively transformed into a finite list of MV-terms $\tau_i$ in $n$ variables, in such a way that $A_\sigma$ is isomorphic to the subalgebra $A$ of the free MV-algebra $\McNn$ generated by the set of $\hat\tau_i$. \end{itemize} \end{theorem} \begin{proof} Using a variant of the algorithm $\mathsf{Mod}$ of \cite[18.1]{mun11}, we first compute a regular triangulation $\Delta$ whose support is the zeroset $Z_\sigma \subseteq \kube$ of $\hat\sigma$. The proof of (i) and (ii) then proceeds as follows: (i) In \cite[4.10]{cab} it is proved that $A_\sigma$ is isomorphic to a (necessarily finitely generated) subalgebra of a free MV-algebra iff the following three conditions hold: \begin{itemize} \item[(a)] $Z_\sigma$ intersects the set of vertices of $\kube$; \item[(b)] $Z_\sigma$ is connected; \item[(c)] $Z_\sigma$ is strongly regular. \end{itemize} From $\Delta$, explicitly given by the sets of vertices of its simplexes, conditions (a)-(b) can be immediately checked. To check condition (c), for each maximal simplex $M$ of $\Delta$, one checks whether the greatest common divisor of the denominators of the vertices of $M$ equals 1. This completes the proof of (i). (ii) Assuming all three checks (a)-(c) are successful, following the proof of the characterization theorem \cite[4.10]{cab} we output the desired MV-terms $\tau_i$ through the following steps: \begin{itemize} \item[(d)] From $\Delta$ we compute a {\it collapsible} triangulation $\Delta'$ whose support lies in the cube $\cube$, for some suitably large integer $n$. \item[(e)] Next we compute a simplicial map from $\Delta'$ to $\Delta$, providing a $\mathbb Z$-map $\eta$ from the support of $\Delta'$ onto the support $Z_\sigma $ of $\Delta$. \item[(f)] Following now the proof of \cite[5.1]{cabmun-ccm}, we compute a $\mathbb Z$-map $\gamma$ which is a retraction of $\cube$ onto the support of $\Delta'$. \item[(g)] Letting $\pi_i\colon \I^k \to \I$ denote the $i$th coordinate function, we observe that for each $i\in\{1,\ldots,k\}$ the $\mathbb Z$-map $ \pi_i\circ \eta\circ\gamma\mbox{ belongs to } \McN(\cube). $ Since both $\mathbb Z$-maps $\gamma$ and $\eta$ are explicitly given, an application of \cite[9.1.5]{cigdotmun} yields MV-terms $\tau_1,\ldots,\tau_k$ in the variables $X_1,\dots,X_n$ such that $\hat\tau_i=\pi_i\circ\eta\circ \gamma$. \end{itemize} Let $A$ be the subalgebra of $\McNn$ generated by $\hat\tau_1,\dots, \hat\tau_k$. The final part of the proof of \cite[4.10]{cab} yields \[A_\sigma= \McN(\I^k)/\mathfrak j_\sigma \cong \McN(Z_{\sigma})= \McN((\eta\circ\gamma)(\I^n))\cong A. \qedhere \] \end{proof} \section*{Funding} The first author was supported by a Marie Curie Intra European Fellowship within the 7th European Community Framework Program (ref. 299401, FP7-PEOPLE-2011-IEF). \end{document}
\begin{document} \title[Unipotent frame flows]{On topological and measurable dynamics of unipotent frame flows for hyperbolic manifolds} \author{ Fran\c cois MAUCOURANT, Barbara SCHAPIRA} \begin{abstract} We study the dynamics of unipotent flows on frame bundles of hyperbolic manifolds of infinite volume. We prove that they are topologically transitive, and that the natural invariant measure, the so-called ``Burger-Roblin measure'', is ergodic, as soon as the geodesic flow admits a finite measure of maximal entropy, and this entropy is strictly greater than the codimension of the unipotent flow inside the maximal unipotent flow. The latter result generalises a Theorem of Mohammadi and Oh. \end{abstract} \maketitle \section{Introduction} \subsection{Problem and State of the art} For $d\geq 3$, let $\Gamma$ be a Zariski-dense, discrete subgroup of $G={{\bf\operatorname{SO}}}_o(d,1)$. Let $N$ be a maximal unipotent subgroup of $G$ (hence isomorphic to $\R^{d-1}$), and $U\subset N$ a nontrivial connected subgroup (hence isomorphic to some $\R^k$ in $\R^{d-1}$). The main topic of this paper is the study of the action of $U$ on the space $\Gamma \backslash G$. Geometrically, this is the space $\mathcal{FM}$ of orthonormal frames of the hyperbolic manifold $\mathcal{M}=\Gamma \backslash \H^d$, and the $N$ (and $U$)-action moves the frame in a parallel way on the stable horosphere defined by the first vector of the frame. There are a few cases where such an action is well understood, from both the topological and the ergodic point of view. \subsubsection{Lattices} If $\Gamma$ has finite covolume, then Ratner's theory provides a complete description of closures of $U$-orbits as well as of ergodic $U$-invariant measures. If $\Gamma$ has infinite covolume, while it no longer provides information about the topology of the orbits, it still classifies finite $U$-invariant measures. Unfortunately, the dynamically relevant measures happen to be of infinite mass. In the rest of the paper, we will always think of $\Gamma$ as a subgroup having infinite covolume. \subsubsection{Full horospherical group} If one looks at the action of the whole horospherical group $U=N$, an $N$-orbit projects on $T^1\mathcal{M}$ onto a leaf of the strong stable foliation for the geodesic flow, a well-understood object, at least in the case of geometrically finite manifolds. In particular, the results of Dal'bo \cite{MR1779902} imply that for a geometrically finite manifold, such a leaf is either closed, or dense in an appropriate subset of $T^1\mathcal{M}$.\\ From the ergodic point of view, there is a natural good $N$-invariant measure, the so-called {\em Burger-Roblin measure}, unique with certain natural properties. Recall briefly its construction. The measure of maximal entropy of the geodesic flow on $T^1\mathcal{M}$, the {\em Bowen-Margulis-Sullivan measure}, when {\em finite}, induces a transverse invariant measure to the strong stable foliation. This transverse measure is often seen as a measure on the space of horospheres, invariant under the action of $\Gamma$. Integrating the Lebesgue measure along these leaves leads to a measure on $T^1\mathcal{M}$, which lifts naturally to $\mathcal{F}\mathcal{M}$ into an $N$-invariant measure, the {\em Burger-Roblin measure}.
In \cite{Ro}, Roblin extended a classical result of Bowen-Marcus \cite{MR0451307}, and showed that, when the Bowen-Margulis-Sullivan measure is finite, it induces (up to scalar multiple) the unique invariant measure on this space of horospheres that is supported in the set of horospheres based at conical (radial) limit points. In particular, if the manifold $\mathcal{M}$ is geometrically finite, this gives a complete classification of $\Gamma$-invariant (Radon) measures on the space of horospheres, or equivalently of transverse invariant measures to the strong stable foliation. In general, Roblin's result says that there is a unique (up to scaling) transverse invariant measure of full support in the set of vectors whose geodesic orbit returns i.o. in a compact set. It is natural to try to ``lift'' this classification along the principal bundle $\mathcal{FM}\to T^1\mathcal{M}$, since the structure group is compact. This was done by Winter \cite{Winter}, who proved that, up to scaling, the only $N$-invariant measure of full support in the set of frames whose $A$-orbit returns i.o. in a compact set is the Burger-Roblin measure, i.e. the natural $M$-invariant lift of the above measure. On geometrically finite manifolds, this statement is simpler: the Burger-Roblin measure is the unique (up to scaling) $N$-invariant ergodic measure of full support. \subsubsection{A Theorem of Mohammadi and Oh} However, if one considers only the action of a proper subgroup $U\subset N$, the situation changes dramatically, and much less is known, because ergodicity or conservativeness of a measure with respect to a group does not imply in any way the same properties with respect to proper subgroups. In this direction, the first result is a Theorem of Mohammadi and Oh \cite{MO}, which states that, in dimension $d=3$ (in which case $\dim(U)=1$) and for convex-cocompact manifolds, the Burger-Roblin measure is ergodic and conservative for the $U$-action if and only if the critical exponent $\delta_\Gamma$ of $\Gamma$ satisfies $\delta_\Gamma>1$. \subsubsection{Dufloux recurrence results} In \cite{Dufloux2016, Dufloux2017}, Dufloux investigates the case of small critical exponent. Without any assumption on the manifold, when the Bowen-Margulis-Sullivan measure is finite (an assumption satisfied in particular when $\Gamma$ is convex-cocompact, but not only, see \cite{Peigne}), he proves in \cite{Dufloux2016} that the Bowen-Margulis-Sullivan measure is totally $U$-dissipative when $\delta_\Gamma\le\dim N-\dim U$, and totally recurrent when $\delta_\Gamma>\dim N-\dim U$. In \cite{Dufloux2017}, when the group $\Gamma$ is convex-cocompact, he proves that when $\delta_\Gamma=\dim N-\dim U$, the Burger-Roblin measure is $U$-recurrent. \subsubsection{Rigid acylindrical 3-manifolds} There is one last case where more is known about the topological properties of the $U$-action, in fact in a very strong form. Assuming $\mathcal{M}$ is a rigid acylindrical 3-manifold, McMullen, Mohammadi and Oh recently managed in \cite{McMullen2} to classify the $U$-orbit closures, which are very rigid. Their analysis relies on their previous classification of ${\bf\operatorname{SL}}(2,\R)$-orbits \cite{McMullen}.\\ Unfortunately, their method relies heavily on the particular shape of the limit set (the complement of a countable union of disks), and such a strong result is certainly false for general convex-cocompact manifolds.
\subsection{Results} The results that we prove here divide into two distinct parts, a topological one and an ergodic one. Although they are independent, the strategies of their proofs follow similar patterns, a fact we will try to emphasise. \subsubsection{Topological properties} Let $A\subset G$ be a Cartan subgroup. Denote by $\Omega\subset \mathcal{FM}$ the non-wandering set for the geodesic flow (or equivalently, the $A$-action), and by $\mathcal{E}$ the non-wandering set for the $N$-action. For more precise definitions and descriptions of these objects, see section \ref{section2}.\\ Using a Theorem of Guivarc'h and Raugi \cite{MR2339285}, we show: \begin{theo} \label{Atopmix} Assume that $\Gamma$ is Zariski-dense. The action of $A$ on $\Omega$ is topologically mixing. \end{theo} This allows us to deduce: \begin{theo} \label{topologicallytransitive} Assume that $\Gamma$ is Zariski-dense. The action of $U$ on $\mathcal{E}$ is topologically transitive. \end{theo} Both results are new. Note that, for example in the case of a general convex-cocompact manifold with low critical exponent, the existence of a non-divergent $U$-orbit is itself non-trivial, and was previously unknown. \subsubsection{Ergodic properties} We will assume that $\Gamma$ is of divergent type, and denote by $\mu$ the Bowen-Margulis-Sullivan measure, or more precisely its natural lift to $\mathcal{FM}$, normalised to be a probability measure. We are interested in the case where $\mu$ is a finite measure. Denote by $\nu$ the Patterson-Sullivan measure on the limit set, and by $\lambda$ the Burger-Roblin measure on $\mathcal{FM}$. A more detailed description of these objects is given in section \ref{section4}. \\ The following is a strengthening of the Theorem of Mohammadi and Oh \cite{MO}. \begin{theo} \label{ergodicity} Assume that $\Gamma$ is Zariski-dense. If $\mu$ is finite and $\delta_\Gamma+\dim(U)>d-1$, then both measures $\mu$ and $\lambda$ are $U$-ergodic. \end{theo} The hypothesis that $\mu$ is finite is satisfied for example when $\Gamma$ is geometrically finite, see Sullivan \cite{Sullivan1979}. But there are many other examples, see \cite{Peigne}, \cite{Ancona}. Note that the measure $\mu$ is {\em not} $U$-invariant, or even quasi-invariant; in this case, ergodicity simply means that $U$-invariant sets have zero or full measure. Apart from the use of Marstrand's projection Theorem, our proof differs significantly from the one of \cite{MO}, and does not use compactness arguments, allowing us to go beyond the convex-cocompact case. It is also, in our opinion, simpler. Note that the work of Dufloux \cite{Dufloux2016} uses the same assumptions as ours. \\ For the opposite direction, we prove: \begin{theo} \label{nonergodicity} Assume that $\Gamma$ is Zariski-dense. If $\mu$ is finite with $\delta_\Gamma+\dim(U)<d-1$, then $\lambda$-almost every frame is divergent. \end{theo} In fact, in the convex-cocompact case, a stronger result holds: for all vectors $v\in T^1\mathcal{M}$ and almost all frames $\textbf{x}$ in the fiber of $v$, the orbit $\textbf{x} U$ is divergent, see Theorem \ref{divergence} for details. \subsection{Overview of the proofs} \subsubsection{Topological transitivity} The proof of the topological transitivity can be summarised as follows. \begin{itemize} \item The $U$-orbit of $\Omega$ is dense in $\mathcal{E}$ (Proposition \ref{UOmegadense}).
\item The mixing of the $A$-action (Theorem \ref{Atopmix}) implies that there are couples $(\textbf{x},\textbf{y})\in \Omega^2$, generic in the sense that their orbit under the diagonal action of $A$ in negative times is dense in $\Omega^2$. \item But one can ``align'' such couples of frames so that $\textbf{x}$ and $\textbf{y}$ are in the same $U$-orbit, that is $\textbf{x} U=\textbf{y} U$ (Lemma \ref{UGeneric}). \end{itemize} These facts easily imply topological transitivity of $U$ on $\mathcal{E}$ (see section \ref{prooftoptrans}).\\ \subsubsection{Ergodicity of $\mu$ and $\lambda$} In the convex-cocompact case, the Patterson-Sullivan measure $\nu$ is Ahlfors-regular of dimension $\delta_\Gamma$. To go beyond that case, we will need to consider the lower dimension of the Patterson-Sullivan measure: $$\underline{\dim} \, \nu =\textrm{infess}\liminf_{r\to 0}\frac{\log \nu(B(\xi,r))}{\log r},$$ which satisfies the following important property. \begin{prop}[Ledrappier \cite{Ledrappier-Platon}]\label{dim} If $\mu$ is finite, then $\underline{\dim} \, \nu=\delta_\Gamma $. \end{prop} The first step in the proof of topological transitivity is the proof that the closure of the set of $U$-orbits intersecting $\Omega$ is $\mathcal{E}$. The analogue here is to show that for a $U$-invariant set $E$, it is sufficient to prove that $\mu(E)=0$ or $\mu(E)=1$ to deduce that $\lambda(E)=0$ or $\lambda(E^c)=0$ respectively. Marstrand's projection Theorem and the hypothesis $\delta_\Gamma +\dim(U)>d-1$ allow us to prove that the ergodicity of $\lambda$ is in fact equivalent to the ergodicity of $\mu$ (Proposition \ref{BMandBRequivalent}). Although it is highly unusual to study the ergodicity of non-quasi-invariant measures, it turns out here to be easier, thanks to the finiteness of $\mu$. \\ For the second step, we know thanks to Winter \cite{Winter} that the $A$-action on $(\Omega^2,\mu \otimes\mu)$ is mixing. So we can find couples $(\textbf{x},\textbf{y})\in \Omega^2$ which are typical in the sense that they satisfy the Birkhoff ergodic Theorem for the diagonal action of $A$ in negative times and continuous test-functions. By the same alignment argument as in the topological part, one can find such typical couples in the same $U$-orbit.\\ Unfortunately, from the point of view of measures, the existence of one individual orbit with some specified properties is meaningless. To circumvent this difficulty, we have to consider plenty of such typical couples on the same $U$-orbit. More precisely, we consider a measure $\eta$ on $\Omega^2$ such that almost surely, a couple $(\textbf{x},\textbf{y})$ picked at random using $\eta$ consists of two frames in the same $U$-orbit, and is typical for the diagonal $A$-action. For this to make sense when comparing with the measure $\mu$, we also require that both marginal laws of $\eta$ on $\Omega$ are absolutely continuous with respect to $\mu$. We check in section \ref{subsectionplenty} that the existence of such a measure $\eta$ is sufficient to prove Theorem \ref{ergodicity}. This measure $\eta$ is a kind of self-joining of the dynamical system $(\Omega,\mu)$, but instead of being invariant under a diagonal action, we ask that it reflects both the structure of $U$-orbits and the mixing property of $A$.\\ It remains to show that such a measure $\eta$ actually exists.
In dimension $d=3$, we can construct it (at least locally on $\mathcal{F}\H^3$) as the direct image of $\mu\otimes \mu$ by the alignment map, so we present the simpler 3-dimensional case separately in section \ref{constrplentydim3}. The fact that $\eta$ is supported by typical couples on the same $U$-orbit is tautological from the chosen construction. The difficult part is to show that its marginal laws are absolutely continuous. This is a consequence of the following fact:\\ {\em If two compactly supported probability measures $\nu_1,\nu_2$ on the plane have finite $1$-energy, then for $\nu_1$-almost every $x$, the radial projection of $\nu_2$ on the unit circle around $x$ is absolutely continuous with respect to the Lebesgue measure on the circle.}\\ Although probably unsurprising to specialists, as there exist many related statements in the literature (see e.g. \cite{Mattila},\cite{MR1333890}), we were unable to find a reference. We prove this implicitly in our situation, using the $L^2$-regularity of the orthogonal projection in Marstrand's Theorem, and the maximal inequality of Hardy and Littlewood.\\ In dimension $d\geq 4$, the construction of $\eta$, done in section \ref{higherdimeta}, is a bit more involved since there is not a unique couple aligned on the same $U$-orbit, especially if $\dim(U)\geq 2$, so we have to choose randomly amongst them, using smooth measures on Grassmannian manifolds. Again, the absolute continuity follows from Marstrand's projection Theorem and the maximal inequality. \subsection{Organization of the paper} Section 2 is devoted to introductory material. In section 3, we prove our results on topological dynamics. In section 4, we introduce the measures $\mu$ and $\lambda$, establish the dimensional properties that we need, and prove Theorem \ref{divergence} and the fact that $U$-ergodicity of $\mu$ and $U$-ergodicity of $\lambda$ are equivalent. Finally, we prove Theorem \ref{ergodicity} in section 5. \section{Setup and Notations} \label{section2} \subsection{Lie groups, Iwasawa decomposition} Let $d \geq 2$, and let $G={\bf\operatorname{SO}}^o (d,1)$, i.e. the subgroup of ${\bf\operatorname{SL}}(d+1,\R)$ preserving the quadratic form $q(x_1,\dots,x_{d+1})=x_1^2+\cdots+x_d^2-x_{d+1}^2$. It is the group of direct isometries of the hyperbolic $d$-space $\mathbb{H}^d=\{x\in\R^{d+1}, q(x)=-1, x_{d+1}>0\}$. Define $K<G$ as \[ K= \left\lbrace \left( \begin{array}{cc} k & 0 \\ 0 & 1 \\ \end{array} \right) : k \in {\bf\operatorname{SO}}(d) \right\rbrace. \] It is a maximal compact subgroup of $G$, and it is the stabilizer of the origin $x=(0,\dots,0,1)\in \H^d$. We choose the one-dimensional Cartan subgroup $A$, defined by \[ A = \left\lbrace a_t = \left(\begin{array}{cc} I_{d-1} & 0 \\ 0 & \begin{array}{cc} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \\ \end{array} \\ \end{array} \right) \, : \, t\in \R \right\rbrace. \] It commutes with the following subgroup $M$, which can be identified with ${\bf\operatorname{SO}}(d-1)$. \[ M=\left\lbrace \left( \begin{array}{cc} m & 0 \\ 0 & I_2 \\ \end{array} \right) : m \in {\bf\operatorname{SO}}(d-1) \right\rbrace. \] In other words, the group $M$ is the centralizer of $A$ in $K$. The stabilizer of any vector $v\in T^1\H^d$ identifies with a conjugate of $M$, so that $T^1\H^d={\bf\operatorname{SO}}^o(d,1)/M$. Let $\mathfrak{n}\subset \mathfrak{so}(d,1)$ be the eigenspace of $Ad(a_t)$ with eigenvalue $e^{-t}$. Let $$ N=\exp(\mathfrak{n}) \,.$$ It is an abelian, maximal unipotent subgroup, normalized by $A$.
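For later use, we record an elementary consequence of this definition: for every $X\in\mathfrak{n}$ and $t\in\R$, $$a_t\exp(X)a_{-t}=\exp\big({\rm Ad}(a_t)X\big)=\exp\big(e^{-t}X\big),$$ so that $a_tna_{-t}\to e$ for every $n\in N$ as $t\to+\infty$. This contraction relation is exactly what is invoked in the proof of Theorem \ref{Atopmix} below.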
The group $G$ is diffeomorphic to the product $K\times A\times N$. This decomposition is the {\em Iwasawa decomposition} of the group $G$. The subgroup $N$ is normalized by $M$, and $M\ltimes N$ is a closed subgroup isomorphic to the orientation-preserving affine isometry group of a $(d-1)$-dimensional Euclidean space. If $U$ is any closed, connected unipotent subgroup of $G$, it is conjugated to a subgroup of $N$ (see for example \cite{MR3012160}). Therefore, it is isomorphic to $\R^k$, for some $k \in \{ 0,\dots, d-1\}$. Throughout the article, we will always assume that $k\geq 1$. In this paper, we are interested in the dynamical properties of the right actions of the subgroups $A,N,U$ on the space $\Gamma\backslash G$. \subsection{Geometry} \subsubsection*{Fundamental group, critical exponent, limit set} Let $\Gamma \subset G=\mbox{Isom}^+(\H^d)$ be a discrete group. Let $\mathcal{M}= \Gamma \backslash \H^d$ be the corresponding hyperbolic manifold. The limit set $\Lambda_\Gamma$ is the set of accumulation points in $\partial \H^d \simeq \mathbb{S}^{d-1}$ of any orbit $\Gamma o$, where $o \in \H^d$. We will always assume that the group $\Gamma$ is nonelementary, that is $\#\Lambda_\Gamma=+\infty$. The critical exponent $\delta$ of the group $\Gamma$ is the infimum of the $s>0$ such that the Poincar\'e series $$P_\Gamma(s)=\sum_{\gamma\in \Gamma} e^{-sd(o,\gamma o)}$$ is finite, where $o$ is a fixed point in $\H^d$. In the convex-cocompact case, the critical exponent $\delta$ equals the Hausdorff dimension of the limit set $\Lambda_\Gamma$. Since $\Gamma$ is non-elementary, we have $0<\delta\leq d-1$. \subsubsection*{Frames} The space of orthonormal, positively oriented frames over $\H^d$ (resp. $\mathcal{M}$) will be denoted by $\mathcal{F}\H^d$ (resp. $\mathcal{FM}$). As $G$ acts simply transitively on $\mathcal F\H^d$, $\mathcal{F}\H^d$ (resp. $\mathcal{FM}$) can be identified with $G$ (resp. $\Gamma\backslash G$) by the map $g\mapsto g.\textbf{x}_0$, where $\textbf{x}_0$ is a fixed reference frame. Note that $\mathcal F\H^d$ is an $M$-principal bundle over $T^1\H^d$, and so is $\mathcal F\mathcal{M}$ over $T^1\mathcal{M}$. Denote by $\pi_1:\mathcal F \mathcal{M} \to T^1\mathcal{M}$ (resp. $\mathcal F\H^d\to T^1\H^d$) the projection of a frame onto its first vector. As said above, we are interested in the properties of the right actions of $A,N,U$ on $\mathcal{FM}$. \begin{figure} \caption{Right actions of $A$, $N$, $U$ } \label{groupactions} \end{figure} Given a subset $E\subset \mathcal{M}$ (resp. $T^1 \mathcal{M}$, $\mathcal F \mathcal{M}$), we will write $\tilde{E}$ for its lift to $\H^d$ (resp. $T^1 \H^d$, $\mathcal F \H^d$). Denote by $\mathcal F \mathbb{S}^{d-1}$ the set of (positively oriented) frames over $\partial \H^d=\mathbb{S}^{d-1}$. We will write $\mathcal F \Lambda_\Gamma$ for the subset of frames which are based at $\Lambda_\Gamma$. \subsubsection*{Generalised Hopf coordinates} Choose $o$ to be the point $(0,\dots, 0, 1)\in\H^d$.
Recall that the Busemann cocycle is defined on $\mathbb{S}^{d-1}\times \H^d\times \H^d$ by \[\beta_\xi(x,y)=\lim_{z\to \xi} d(x,z)-d(y,z).\] By abuse of notation, if $\textbf{x}, \textbf{x}'$ are frames (or $v,v'$ vectors) with basepoints $x,x'\in\H^d$, we will write $\beta_\xi(\textbf{x},\textbf{x}')$ or $\beta_\xi(v,v')$ for $\beta_\xi(x,x')$. We will use the following extension of the classical Hopf coordinates to describe frames. To a frame $\textbf{x} \in \mathcal F\H^d$, we associate \begin{align*} \mathcal F\H^d & \to (\mathcal F \mathbb{S}^{d-1} \times_\Delta \mathbb{S}^{d-1}) \times \R,\\ \textbf{x}=(v_1,\dots, v_d) & \mapsto (\textbf{x}^+,x^-,t_\textbf{x}), \end{align*} where $x^-$ (resp. $x^+$) is the negative (resp. positive) endpoint in $\mathbb{S}^{d-1}$ of the geodesic $\textbf{x} A$, $t_\textbf{x}=\beta_{x^+}(o,\textbf{x})$, and $\textbf{x}^+\in \mathcal F\mathbb{S}^{d-1}$ is the frame over $x^+$ obtained for example by parallel transport along $\textbf{x} A$ of the $(d-1)$-dimensional frame $(v_2,\dots, v_d)$. The subscript $\Delta$ in $(\mathcal F \mathbb{S}^{d-1} \times_\Delta \mathbb{S}^{d-1})$ indicates that this is the product set, minus the diagonal, i.e. the set of $(\textbf{x}^+,x^-)$ where $\textbf{x}^+$ is not based at $x^-$. \begin{figure} \caption{Hopf frame coordinates } \label{hopf} \end{figure} Define the following subsets of frames in Hopf coordinates \[ \widetilde{\Omega}=(\mathcal F \Lambda_\Gamma \times_\Delta \Lambda_\Gamma) \times \R, \] and \[ \tilde{\mathcal{E}} = (\mathcal F \Lambda_\Gamma \times_\Delta \partial \H^d) \times \R. \] Consider their quotients $\Omega=\Gamma \backslash \widetilde{\Omega}$ and $\mathcal{E}=\Gamma \backslash \tilde{\mathcal{E}}$. These are closed invariant subsets of $\mathcal F \mathcal{M}$ for the dynamics of $M\times A$ and $(M\times A) \ltimes N$ respectively, where all the dynamics happens. Let us state it more precisely. The {\em non-wandering set} of the action of $N$ (resp. $U$) on $\Gamma\backslash G$ is the set of frames $\textbf{x}\in \mathcal F\mathcal M$ such that given any neighbourhood $\mathcal{O}$ of $\textbf{x}$ there exists a sequence $n_k\in N$ (resp. $u_k\in U$) going to $\infty$ such that $\mathcal{O}n_k\cap \mathcal{O}\neq\emptyset$. As a consequence of Theorem \ref{topologicallytransitive}, the following result holds. \begin{prop} The set $\mathcal{E}$ is the nonwandering set of $N$ and of any unipotent subgroup $\{0\}\neq U<N$. \end{prop} \section{Topological dynamics of geodesic and unipotent frame flows} \label{sectiontop} \subsection{Dense leaves and periodic vectors} For the proof of Theorem \ref{Atopmix}, we will need the following intermediate result, of independent interest. \begin{prop} \label{densityperiodic} Let $\Gamma$ be a Zariski-dense subgroup of ${\bf\operatorname{SO}}^o(d,1)$. Let {\rm $\textbf{x}\in\Omega$} be a frame such that {\rm $\pi_1(\textbf{x})$} is a periodic orbit of the geodesic flow on $T^1\mathcal M$. Then its $N$-orbit {\rm $\textbf{x} N$} is dense in $\mathcal{E}$. \end{prop} \begin{proof} First, observe that if $v=\pi_1(\textbf{x}) \in T^1\mathcal M$ is a periodic vector for the geodesic flow, then its strong stable manifold $W^{ss}(v)$ is dense in $\pi_1(\mathcal{E})$ \cite[Proposition B]{MR1779902}.
Therefore, $\pi_1^{-1}(W^{ss}(\pi_1(\textbf{x})))=\textbf{x} NM=\textbf{x} MN$ is dense in $\mathcal{E}$. Thus it is enough to prove that \[ \textbf{x} M \subset \overline{\textbf{x} N}. \] The crucial tool is a Theorem of Guivarc'h and Raugi \cite[Theorem 2]{MR2339285}. We will use it in two different ways depending on whether $G={\bf\operatorname{SO}}^o(3,1)$ or $G={\bf\operatorname{SO}}^o(d,1)$, for $d\ge 4$, the reason being that $M={\bf\operatorname{SO}}(d-1)$ is abelian in the case $d=3$. Choose a lift $\tilde{\textbf{x}}$ of $\textbf{x}$ to $\widetilde{\Omega}$. As $\pi_1(\textbf{x})$ is periodic, say of period $l_0>0$, but $\textbf{x}$ itself has no reason to be periodic, there exist $\gamma_0\in \Gamma$ and $m_0\in M$ such that \[ \tilde{\textbf{x}}a_{l_0}m_0=\gamma_0\tilde{\textbf{x}}. \] First assume $d=3$, so both $M$ and $MA$ are abelian groups. Let $C$ be the connected compact abelian group $C=MA/\langle a_{l_0}m_0 \rangle$. Let $\rho$ be the homomorphism from $MAN$ to $C$ defined by $\rho(man)=ma$ mod $\langle a_{l_0}m_0 \rangle$. Define $X^\rho=G\times C/\sim$, where $(g,c)\sim (gman, \rho(man)^{-1}.c)$. The set $X^\rho$ is a fiber bundle over $G/MAN=\partial \H^d$, whose fibers are isomorphic to $C$. In other terms, it is an extension of the boundary containing additional information on how $g$ is positioned along $AM$, modulo $a_{l_0}m_0$. Let $\Lambda_\Gamma^\rho$ be the preimage of $\Lambda_\Gamma\subset \partial \H^d$ inside $X^\rho$. Now, since $C$ is connected, \cite[Theorem 2]{MR2339285} asserts that the action of $\Gamma$ on $\Lambda_\Gamma^\rho$ is minimal. Denote by $[g,c]$ the class of $(g,c)$ in $X^\rho$. Let us deduce that $\textbf{x} M \subset \overline{\textbf{x} N}$. Choose some $m\in M$. As $\Gamma$ acts minimally on $\Lambda_\Gamma^\rho$, there exists a sequence $(\gamma_k)_{k \geq 1}$ of elements of $\Gamma$, such that $\gamma_k[\tilde{\textbf{x}},e]$ converges to $[\tilde{\textbf{x}}m,e]$. It means that there exist sequences $(m_k)_k\in M^\N$, $(a_k)_k\in A^\N$, $(n_k)_k\in N^\N$, such that $\gamma_k\tilde{\textbf{x}}m_ka_kn_k \to \tilde{\textbf{x}}m$ in $G$, whereas $\rho(m_ka_kn_k)\to e$ in $C$, which means that there exists some sequence $j_k$ of integers, such that $d_k:=(m_ka_k)^{-1}(a_{l_0}m_0)^{j_k}\to e $ in $MA$. Now observe that the sequence \[ \gamma_k\tilde{\textbf{x}}(a_{l_0}m_0)^{j_k}(d_k^{-1}n_kd_k)=(\gamma_k\gamma_0^{j_k})\tilde{\textbf{x}}(d_k^{-1}n_kd_k)\in \Gamma \tilde{\textbf{x}} N \] has the same limit as the sequence \[ \gamma_k\tilde{\textbf{x}}(a_{l_0}m_0)^{j_k}d_k^{-1}n_k=\gamma_k\tilde{\textbf{x}}m_ka_kn_k, \] which by construction converges to $ \tilde{\textbf{x}}m$. On $\mathcal F\mathcal M=\Gamma\backslash G$, it proves precisely that $\textbf{x} m\in \overline{\textbf{x} N}$. As $m$ was arbitrary, it concludes the proof in the case $d=3$. \\ In dimension $d\ge 4$, $\langle a_{l_0}m_0 \rangle$ is not always a normal subgroup of $MA$ anymore, so we have to modify the argument as follows. Denote by $M_\textbf{x}$ the set $$M_\textbf{x}=\{m\in M, \textbf{x} m\in\overline{\textbf{x} N}\}\,.$$ This is a closed subgroup of $M$; indeed, if $m_1,m_2 \in M_\textbf{x}$, then $\textbf{x} m_1 \in \overline{\textbf{x} N}$, so $\textbf{x} m_1m_2 \in \overline{\textbf{x} N}m_2=\overline{\textbf{x} m_2 N}$ since $m_2$ normalises $N$.
Since $\textbf{x} m_2 \in \overline{\textbf{x} N}$, we have $\textbf{x} m_2 N\subset \overline{\textbf{x} N}$. So $\textbf{x} m_1 m_2 \in \overline{\textbf{x} m_2 N}\subset \overline{\textbf{x} N}$. Thus $M_\textbf{x}$ is a subsemigroup, non-empty since it contains $e$, and closed. Since $M$ is a compact group, such a closed semigroup is automatically a group.\\ We aim to show that the group $M_\textbf{x}$ is necessarily equal to $M$.\\ Let $C=MA/\langle a_{l_0} \rangle$. It is a compact connected group. Consider $\rho(man)=ma$ mod $\langle a_{l_0} \rangle$, and the associated boundary $X^\rho=G\times C/\sim$. Choose some $m\in M$. As above, \cite[Theorem 2]{MR2339285} asserts that the action of $\Gamma$ on $\Lambda_\Gamma^\rho$ is minimal. Therefore, there exists a sequence $(\gamma_k)_{k \geq 1}$ of elements of $\Gamma$, such that $\gamma_k[\tilde{\textbf{x}},e]$ converges to $[\tilde{\textbf{x}}m,e]$. As above, consider sequences $(m_k)_k\in M^\N$, $(a_k)_k\in A^\N$, $(n_k)_k\in N^\N$, such that $\gamma_k\tilde{\textbf{x}}m_ka_kn_k \to \tilde{\textbf{x}}m$ in $G$, whereas $\rho(m_ka_kn_k)\to e$ in $C$, which, with this new group $C$, means that there exists some sequence $j_k$ of integers, such that $d_k:=(m_ka_k)^{-1}(a_{l_0})^{j_k}\to e $ in $MA$. Similarly to the $3$-dimensional case, we can write \[ \gamma_k\tilde{\textbf{x}}m_ka_kn_k d_k= \gamma_k \tilde{\textbf{x}} a_{l_0}^{j_k}(d_k^{-1} n_k d_k)= (\gamma_k \gamma_0^{j_k}) \tilde{\textbf{x}} m_0^{-j_k} (d_k^{-1} n_k d_k). \] The above argument shows that some sequence of frames in $ \textbf{x} \langle m_0 \rangle N=\textbf{x} N \langle m_0 \rangle$ converges to $\textbf{x} m$. This implies that the set of products $M_\textbf{x}.\overline{\langle m_0\rangle} $ is equal to $M$. We use a dimension argument to conclude the proof. The group $\overline{ \langle m_0 \rangle}$ is a torus inside $M={\bf\operatorname{SO}}(d-1)$, therefore of dimension at most $\frac{d-1}{2}$. The group $M$ has dimension $\frac{(d-1)(d-2)}{2}$, so that $M_\textbf{x}.\overline{\langle m_0\rangle}=M$ implies that $\dim M_\textbf{x}\ge \frac{(d-1)(d-3)}{2}$. By \cite[Lemma 4]{montgomery-samelson}, the dimension of any proper closed subgroup of $M={\bf\operatorname{SO}}(d-1)$ is at most $\dim {\bf\operatorname{SO}}(d-2)=\frac{(d-2)(d-3)}{2}$. Therefore, $M_\textbf{x}$ cannot be a proper subgroup of $M$, so that $M_\textbf{x}=M$. \end{proof} The following corollary is a generalization to $\mathcal F \mathcal M$ of a well-known result on $T^1\mathcal M$, due to Eberlein. A vector $v\in T^1\mathcal M$ is said to be quasi-minimizing if there exists a constant $C>0$ such that for all $t\ge 0$, $d(g^t v,v)\ge t-C$. In other terms, the geodesic $(g^t v)$ goes to infinity at maximal speed. We will say that a frame $\textbf{x}\in \mathcal F\mathcal M$ is quasi-minimizing if its first vector $\pi_1(\textbf{x})$ is quasi-minimizing. \begin{coro}\label{density-N-orbites-pas-quasi-minimisantes} Let $\Gamma$ be a Zariski dense subgroup of $G= {\bf\operatorname{SO}}^o(d,1)$. A frame {\rm $\textbf{x}\in \Omega$} is not quasi-minimizing if and only if {\rm $\textbf{x} N$} is dense in $\mathcal{E}$. \end{coro} \begin{proof} First, observe that when $\textbf{x}\in\Omega$ is quasi-minimizing, then the strong stable manifold $W^{ss}(\pi_1(\textbf{x}))$ of its first vector is not dense in $\pi_1(\Omega)$. Therefore, $\textbf{x} N\subset \pi_1^{-1}(W^{ss}(\pi_1(\textbf{x})))$ cannot be dense in $\Omega$.
Now, let $\textbf{x}\in \Omega$ be a non-quasi-minimizing frame. Then $W^{ss}(\pi_1(\textbf{x}))$ is dense in $\pi_1(\Omega)$, so that $\textbf{x} NM=\textbf{x} MN=\pi_1^{-1}(W^{ss}(\pi_1(\textbf{x})))$ is dense in $\Omega$, and therefore in $\mathcal{E}=\Omega N$. Choose some $\textbf{y}\in\Omega$ such that $\pi_1(\textbf{y}) $ is a periodic orbit of the geodesic flow. By the above proposition, $\textbf{y} N$ is dense in $\mathcal{E}$. As $\textbf{x} NM$ is dense in $\mathcal{E}\supset \Omega$, we have $\textbf{y} M\subset \overline{\textbf{x} NM}=\overline{\textbf{x} N}M$ (this last equality following from the compactness of $M$), so that there exists $m\in M$ with $\textbf{y} m\in\overline{\textbf{x} N}$. But $\pi_1(\textbf{y} m)=\pi_1(\textbf{y})$ is periodic, so that $\textbf{y} mN$ is dense in $\mathcal{E}$ and $\overline{\textbf{x} N}\supset \overline{\textbf{y} mN}\supset \mathcal{E}$. \end{proof} \subsection{Topological mixing of the geodesic frame flow} Recall that the continuous flow $(\phi_t)_{t\in \R}$ (or a continuous transformation $(\phi_k)_{k\in \Z}$) on the topological space $X$ is {\em topologically mixing} if for any two non-empty open sets $\mathcal{U},\mathcal{V} \subset X$, there exists $T>0$ such that for all $t>T$, \[ \phi_{-t} \mathcal{U}\cap \mathcal{V} \neq \emptyset. \] Let us now prove Theorem \ref{Atopmix}, by a refinement of an argument of Shub, also used by Dal'bo \cite[p988]{MR1779902}. \begin{proof} We will proceed by contradiction and assume that the action of $A$ is not mixing. Thus there exist two non-empty open sets $\mathcal{U},\mathcal{V}$ in $\Omega$, and a sequence $t_k\to +\infty$, such that $\mathcal{U}.a_{t_k}\cap \mathcal{V}=\emptyset$. Choose $\textbf{x}\in \mathcal{V}$ such that $\pi_1(\textbf{x})$ is periodic for the geodesic flow; this is possible by density of periodic orbits in $\pi_1(\Omega)$ \cite[Theorem 3.10]{MR0310926}. Let $l_0>0, m_0\in M$ be such that $\textbf{x} a_{l_0}m_0=\textbf{x}$. In particular, we have $\textbf{x} a_{l_0}^{j}=\textbf{x} m_0^{-j}$ for all $j\in \Z$. We can find integers $(j_k)_k$ (the integer parts of $t_k/l_0$) and real numbers $(s_k)_k$ such that: $$ t_k=j_kl_0+s_k, \, \mathrm{with} \; 0\le s_k< l_0. $$ Without loss of generality, we can assume that the sequence $(s_k)_{k\geq 0}$ converges to some $s_\infty\in [0,l_0]$, and that $m_0^{j_k}$ converges in the compact group $M$ to some $m_\infty\in M$. By Proposition \ref{densityperiodic}, the $N$-orbit $\textbf{x} a_{-s_\infty}m_\infty N$ is dense in $\mathcal{E}$. Notice that $\mathcal{U}N$ is an open subset of $\mathcal{E}$; therefore one can choose a point $\textbf{w}=\textbf{x} a_{-s_\infty}m_\infty n\in \mathcal{U}$, for some $n \in N$. \begin{figure} \caption{The frame flow is mixing } \label{mixing-figure} \end{figure} We have \begin{align*} \textbf{w} a_{t_{k}} & = \textbf{x} a_{-s_\infty}m_\infty n a_{t_{k}} \\ & =\textbf{x} ( a_{l_0}m_0)^{j_k} (m_0^{-j_k} m_\infty) (a_{s_k-s_\infty}) (a_{t_k} n a_{-t_k})\\ & = \textbf{x} (m_0^{-j_k} m_\infty) (a_{s_k-s_\infty}) (a_{t_k} n a_{-t_k}). \end{align*} Observe that, as $N$-orbits are strong stable manifolds for the $A$-action, $$\lim_k a_{t_k} n a_{-t_k}=e.$$ By definition of $m_\infty$ and $s_\infty$, $\lim_k m_0^{-j_k} m_\infty=e$ and $\lim_k a_{s_k-s_\infty}=e$. Therefore, $\textbf{w} a_{t_{k}}$ tends to the frame $\textbf{x}$ in the open set $\mathcal{V}$. Thus, we found a frame $\textbf{w}\in \mathcal{U}$, with $\textbf{w} a_{t_{k}}\in \mathcal{V}$ for all $k$ large enough.
Contradiction. \end{proof} \subsection{Dense orbits for the diagonal frame flow on $\Omega^2$} Recall that a continuous flow $(\phi_t)_{t\in \R}$ (or a continuous transformation $(\phi_k)_{k\in \Z}$) on the topological space $X$ is said to be {\em topologically transitive} if any nonempty invariant open set is dense. In the case of a continuous transformation on a complete separable metric space without isolated points, topological transitivity is equivalent to the existence of a dense positive orbit, or equivalently, to the fact that the set of dense positive orbits is a dense $G_\delta$ set (see for example \cite{MR1973093}). It is clear that topological mixing implies topological transitivity. Moreover, as is easily checked, topological mixing of $(X,\phi_t)$ implies topological mixing for the diagonal action on the product $(X \times X,(\phi_t,\phi_t))$. A couple $(\textbf{x},\textbf{y}) \in \Omega^2$ will be said to be {\em generic} if the negative diagonal, discrete-time orbit $(\textbf{x} a_{-k} ,\textbf{y} a_{-k} )_{k\geq 0}$ is dense in $\Omega^2$. Theorem \ref{Atopmix} about topological mixing of the $A$-action on $\Omega$ has the following corollary, which will be useful in the proof of Theorem \ref{topologicallytransitive}. \begin{coro} \label{densediagonal} If $\Gamma\subset G={{\bf\operatorname{SO}}^o(d,1)}$ is a Zariski-dense discrete subgroup, then there exists a generic couple $(\textbf{x},\textbf{y}) \in \Omega^2$. \end{coro} \begin{proof} By Theorem \ref{Atopmix}, the geodesic frame flow is topologically mixing. Therefore, so is the diagonal flow action of $A$ on $\Omega^2$. This implies that the transformation $(a_{-1},a_{-1})$ on $\Omega^2$ is also topologically mixing, hence topologically transitive, i.e. has a dense positive orbit. \end{proof} \subsection{Existence of a generic couple on the same $U$-orbit} \begin{lem} \label{UGeneric} There exists a generic couple of the form $(\textbf{x} ,\textbf{x} u)$, with $\textbf{x}\in \Omega$ and $u\in U$. \end{lem} \begin{proof} By Corollary \ref{densediagonal}, there exists a generic couple. Let $(\textbf{y},\textbf{z})\in (\mathcal F\H^d)^2$ be the lift of a generic couple. Notice that, since the right actions of $A$ and $M$ commute with the diagonal $A$-action, the set of generic couples is invariant under the action of $(A\times M) \times (A\times M)$. This means that in Hopf coordinates, being the lift of a generic couple does not depend on the orientation of the frames $\textbf{y}^+,\textbf{z}^+$, nor on the times $t_\textbf{y},t_\textbf{z}$. Moreover, since being generic is defined as density for {\em negative} times, one can also freely change the base-points of $\textbf{y}^+,\textbf{z}^+$ because the new negative orbit will be exponentially close to the old one. In short, being the lift of a generic couple (or not) depends only on the past endpoints $(y^-,z^-)$, or equivalently, is $\left( (M\times A)\ltimes N^- \right)^2$-invariant. Obviously, $y^-\neq z^-$ since a generic couple cannot lie on the diagonal. Up to conjugation by elements of $M$, we can freely assume that $U$ contains the subgroup corresponding to following the direction given by the second vector of a frame. Pick a third point $\xi \in \Lambda_\Gamma$ distinct from $y^-$ and $z^-$, and choose a frame $\textbf{x}^+\in \mathcal F \Lambda_\Gamma$ based at $\xi$, whose first vector is tangent to the circle determined by $(\xi,y^-,z^-)$.
Therefore, the two frames with Hopf coordinates $\textbf{x}=(\textbf{x}^+,y^-,0)$ and $( \textbf{x}^+,z^-,0)$ lie in the same $U$-orbit, thus $(\textbf{x}^+,z^-,0)=\textbf{x} u$ for some $u \in U$. By construction, the couple $( \textbf{x},\textbf{x} u)$ is the lift of a generic couple. \end{proof} \subsection{Minimality of $\Gamma$ on $\mathcal F\Lambda_\Gamma$} We recall the following known fact. \begin{prop} \label{minimalboundaryset} Let $\Gamma$ be a Zariski-dense subgroup of ${{\bf\operatorname{SO}}}_o(d,1)$. Then the action of $\Gamma$ on $\mathcal F \Lambda_\Gamma$ is minimal. \end{prop} In dimension $d=3$, this is due to Fert\'e \cite[Corollaire E]{MR1934285}. In general, this is again a consequence of Guivarc'h-Raugi \cite[Theorem 2]{MR2339285}, applied with $G={{\bf\operatorname{SO}}}_o(d,1)$, $C=M$. The set $\mathcal F\H^d$ identifies with $G\times M/\sim$ where $(g,m)\sim (gm'an,m'^{-1}m)$. \cite[Theorem 2]{MR2339285} asserts that the $\Gamma$-action on $\mathcal F\H^d=G\times M/\sim$ has a unique minimal set, which is necessarily $\mathcal F\Lambda_\Gamma$. \subsection{Density of the orbit of $\Omega$} \begin{prop}\label{UOmegadense} The $U$-orbit of $\Omega$ is dense in $\mathcal{E}$. \end{prop} \begin{proof} Up to conjugation by an element of $M$, it is sufficient to prove the proposition in the case where $U$ contains the subgroup corresponding to shifting in the direction of the first vector of the frame $\textbf{x}^+$. Consider the subset $E$ of $\mathcal F \Lambda_\Gamma$ defined by $(\xi,R) \in E$ if $\xi \in \Lambda_\Gamma$ and there exists a sequence $(\xi_n)_{n\geq 0} \subset \Lambda_\Gamma \setminus\{ \xi \}$ such that $\xi_n \to \xi$ tangentially to the direction of the first vector of $R$, in the sense that the direction of the geodesic (on the sphere $\partial \H^d$) from $\xi$ to $\xi_n$ converges to the direction of the first vector of $R$. Clearly, $E$ is a non-empty, $\Gamma$-invariant set. By Proposition \ref{minimalboundaryset}, it is dense in $\mathcal F \Lambda_\Gamma$. Let $\textbf{x}$ be a frame in $\tilde{\mathcal{E}}$; we wish to find a frame arbitrarily close to $\textbf{x}$ which is in the $U$-orbit of $\widetilde{\Omega}$. Let $\textbf{x}=(\textbf{x}^+,x^-,t_\textbf{x})$ be its Hopf coordinates; by assumption $\textbf{x}^+ \in \mathcal F \Lambda_\Gamma$. Pick $(\xi,R) \in E$ very close to $\textbf{x}^+$. By definition of $E$, there exists $\xi' \in \Lambda_\Gamma$, very close to $\xi$, such that the direction $(\xi\xi')$ is close to the first vector of the frame $R$. We can find a frame $\textbf{y}^+\in \mathcal F \Lambda_\Gamma$, based at $\xi$, close to $\textbf{x}^+$, whose first vector is tangent to the circle going through $(\xi,\xi',x^-)$. By construction, the two frames $\textbf{y}=(\textbf{y}^+,x^-,t_\textbf{x})$ and $\textbf{z}=(\textbf{y}^+,\xi',t_\textbf{x})$ belong to the same $U$-orbit; notice that $\textbf{z}\in \widetilde{\Omega}$, so we have $\textbf{y}\in \widetilde{\Omega}U$. Since $\textbf{y}^+$ and $\textbf{x}^+$ are arbitrarily close, so are $\textbf{x}$ and $\textbf{y}$.
\end{proof} \subsection{Proof of Theorem \ref{topologicallytransitive}} \label{prooftoptrans} Let $\mathcal{O},\mathcal{O}' \subset \mathcal{E}$ be non-empty open sets. We wish to prove that $\mathcal{O}'U\cap \mathcal{O}U\neq \emptyset$. By Proposition \ref{UOmegadense}, $\mathcal{O}\cap \Omega U \neq \emptyset$, therefore $\mathcal{O}U \cap \Omega$ is an open nonempty subset of $\Omega$. Similarly, $\mathcal{O}'U\cap \Omega\neq \emptyset$. Let $(\textbf{x}, \textbf{x} u)$ a generic couple given by Lemma \ref{UGeneric}. By density, there exists a $k\geq 0$ such that $(\textbf{x} a_{-k}, \textbf{x} ua_{-k}) \in (\mathcal{O}U \cap \Omega)\times (\mathcal{O}'U\cap \Omega)$. But since $A$ normalizes $U$, $\textbf{x} ua_{-k} \in \textbf{x} a_{-k}U\subset \mathcal{O}U$. Therefore $\textbf{x} ua_{-k} \in \mathcal{O}'U\cap \mathcal{O}U$, which is thus non-empty, as required. \section{Mesurable dynamics} \label{section4} \subsection{Measures} Let us introduce the measures that will play a role here. \\ The {\em Patterson-Sullivan measure on the limit set} is a measure $\nu $ on the boundary, whose support is $\mathcal Lambda_\mathcal Gamma$, which is quasi-invariant under the action of $\mathcal Gamma$, and more precisely satisfies for all $\gamma\in\mathcal Gamma$ and $\nu$-almost every $\textbf{x}i\in\mathcal Lambda_\mathcal Gamma$, \[\frac{d\gamma_*\nu}{d\nu}(\textbf{x}i)=e^{-\delta \beta_\textbf{x}i(o,\gamma o)}\,. \] When $\mathcal Gamma$ is convex-cocompact, this measure is proportional to the Hausdorff measure of the limit set \cite{Sullivan84}, it is the intuition to keep in mind here. \\ On the unit tangent bundle $T^1\H^d$, let us define a $\mathcal Gamma$-invariant measure by $$ d\tilde{m}_{BM}(v)=e^{\delta\beta_{v^-}(o,v)+\delta\beta_{v^+}(o,v)}d\nu(v^-)d\nu(v^+)dt^,. $$ By construction, this measure is invariant under the geodesic flow and induces on the quotient on $T^1\mathcal{M}$ the so-called {\em Bowen-Margulis-Sullivan measure $m_{BMS}$}. When finite, it is the unique measure of maximal entropy of the geodesic flow, and is ergodic and mixing. On the frame bundle $\mathcal{F}\H^d$ (resp. $\mathcal F\mathcal M$), there is a unique way to define a $M$-invariant lift of the Bowen-Margulis measure, that we will denote by $\tilde{\mu}$ (resp. $\mu$). We still call it the {\em Bowen-Margulis-Sullivan measure}. On $\mathcal F\mathcal M$, this measure has support $\Omega$. When it is finite, it is ergodic and mixing \cite{Winter}. The key point in our proofs will be that it is mixing, and that it is locally equivalent to the product $d\nu(x^-)d\nu(x^+)\,dt\,dm_\textbf{x}$, where $dm_\textbf{x}$ denotes the Haar measure on the fiber of $\pi_1(\textbf{x})$, for the fiber bundle $\mathcal F\mathcal M\to T^1\mathcal M$. This measure is $MA$-invariant, but not $N$-(or $U$)-invariant, nor even quasi-invariant. \\ The {\em Burger-Roblin measure} is defined locally on $T^1\H^d$ as \[d\tilde{m}_{BR}(v)=e^{ (d-1)\beta_{v^-}(o,v)+\delta\beta_{v^+}(o,v)}d\mathcal L(v^-)d\nu(v^+)dt\,,\] where $\mathcal L$ denotes the Lebesgue measure on the boundary $\mathbb{S}^{d-1}=\partial \H^d$, invariant under the stabiliser $K \simeq {\bf\operatorname{SO}}(d)$ of $o$. We denote its $M$-invariant extension to $\mathcal F\H^d$ (resp. $\mathcal F\mathcal M$), still called the {\em Burger-Roblin measure}, by $\tilde{\lambda}$ (resp. $\lambda$). This measure is infinite, $A$-quasi-invariant, $N$-invariant. It is $N$-ergodic as soon as $\mu$ is finite. This has been proven by Winter \cite{Winter}. 
See also \cite{Schapira2015} for a short proof that it is the unique $N$-invariant measure supported in $\mathcal{E}_{rad}$. In some proofs, we will need to use the properties of the conditional measures of $\mu$ on the strong stable leaves of the $A$-orbits, that is the $N$-orbits. These conditional measures can easily be expressed as $$d\mu_{\textbf{x} N}(\textbf{x} n)=e^{\delta\beta_{(\textbf{x} n)^-}(\textbf{x},\textbf{x} n)}\,d\nu((\textbf{x} n)^-), $$ and the quantity $e^{\delta\beta_{(\textbf{x} n)^-}(x,\textbf{x} n)}$ is equivalent to $|n|^{2\delta}$ when $|n|\to +\infty$. Observe also that by construction, the measure $\mu_{\textbf{x} N}$ has full support in the set $\{\textbf{y}\in \textbf{x} N, y^-\in\mathcal Lambda_\mathcal Gamma\}$. Another useful fact is that $\mu_{\textbf{x} N}$ does not depend really on $\textbf{x}$ in the sense that it comes from a measure on $\partial \H^n\setminus \{x^+\}$. In other terms, if $m\in M$ and $\textbf{y}\in \textbf{x} m N$, and $\textbf{z}\in \textbf{x} N$ is a frame with $\pi_1(\textbf{z})=\pi_1(\textbf{y})$, one has $d\mu_{\textbf{x} mN}(\textbf{y})=d\mu_{\textbf{x} N}(\textbf{z})$. \subsection{Dimension properties on the measure $\nu$} Most results in this paper rely on certain dimension properties on $\nu$, allowing to use projection theorems due to Marstrand \cite{MR0063439}, and explained in the books of Falconer \cite{MR2118797} and Mattila \cite{MR1333890}. These properties are easier to check in the convex-cocompact case, relatively easy in the geometrically finite, and more subtle in general, under the sole assumption that $\mu$ is finite.\\ Define the dimension of $\nu$, like in \cite{LL}, by $$\underline{\dim} \, \nu =\textrm{infess}\liminf_{r\to 0}\frac{\log\nu(B(\textbf{x}i,r))}{\log r}\,.$$ Denote by $g^t$ the geodesic flow on $T^1\mathcal{M}$. For $v\in T^1\mathcal{M}$, let $d(v,t)$ be the distance between the base point of $g^t v$ and the point $\mathcal Gamma.o$. Proposition \ref{dim} in the introduction has been established by Ledrappier \cite{Ledrappier-Platon} when $\mu$ is finite. It is also an immediate consequence of Proposition \ref{dim-mesfinie} and Lemma \ref{dist} below, as it is well known that when the measure $\mu$ is finite, it is ergodic and conservative. \begin{prop}\label{dim-mesfinie} If $\mu$-almost surely, we have $\frac{d(v,t)}{t}\to 0$, then $\underline{\dim} \, \nu \ge \delta_\mathcal Gamma$. If $\mu$ is ergodic and conservative, then $\underline{\dim}\,\nu\le\delta_\mathcal Gamma$. \end{prop} \begin{proof} We will come back to the original proof of the Shadow Lemma, of Sullivan, and adapt it (the proof, not the statement) to our purpose. The Shadow $\mathcal{O}_o(B(x,R))$ of the ball $B(x,R)$ viewed from $o$ is the set $\{\textbf{x}i\in\partial\H^d$, $[o\textbf{x}i)\cap B(x,R)\neq\emptyset\}$. Denote by $\textbf{x}i(t)$ the point at distance $t$ of $o$ on the geodesic $[o\textbf{x}i)$. It is well known that for the usual spherical distance, a ball $B(\textbf{x}i,r)$ in the boundary is comparable to a shadow $\mathcal{O}_o(B(\textbf{x}i(-\log r),R))$. More precisely, there exists a universal constant $t_1>0$ such that for all $\textbf{x}i\in\partial \H^d$ and $0<r<1$, one has $$ \mathcal{O}_o(B(\textbf{x}i(-\log r+t_1),1)) \subset B(\textbf{x}i,r)\subset \mathcal{O}_o(B(\textbf{x}i(-\log r-t_1),1)) $$ Denote by $d(\textbf{x}i,t)$ the distance $d(\textbf{x}i(t),\mathcal Gamma.o)$. 
By assumption (in the application this will be given by Lemma \ref{dist}), for $\nu$-almost all $\textbf{x}i\in\partial\H^d$ and $0<r<1$ small enough, the quantity $d(\textbf{x}i,-\log r\pm t_1 )\le t_1+d(\textbf{x}i,-\log r)$ is negligible compared to $t=-\log r$. Let $\gamma\in \mathcal Gamma$ be an element minimizing this distance $d(\textbf{x}i,t)$. It satisfies obviously $|d(o,\gamma o)-t|\le d(\textbf{x}i,t)$. Observe that, by a very naive inclusion, using just $1\le 1+(C+1)d(\textbf{x}i,t)$, $$ \mathcal{O}_o(B(\textbf{x}i(t-t_1),1)\subset \mathcal{O}_o(B(\gamma.o, 1+d(\textbf{x}i,t-t_1)) $$ Now, using the $\mathcal Gamma$-invariance properties of the probability measure $\nu$, and the fact that for $\eta\in \mathcal{O}_o(B(\gamma.o,1+d(\textbf{x}i,t-t_1))$, the quantity $|-\beta_\eta(o,\gamma o)+d(o,\gamma.o)-2d(\textbf{x}i,t) | $ is bounded by some universal constant $c$, one can compute \begin{eqnarray*} \nu(B(\textbf{x}i,r))&\le& \nu(\mathcal{O}_o(B(\gamma.o,1+d(\textbf{x}i,t-t_1))))\\ &=&\int_{O_o\left(B(\gamma.o,1+d(\textbf{x}i,t-t_1))\right)} e^{-\delta_\mathcal Gamma\beta_{\eta}(o,\gamma o) }d\gamma_*\nu(\eta)\\ &\le& e^{\delta_\mathcal Gamma c}\, e^{-\delta_\mathcal Gamma d(o,\gamma o)+ 2\delta_\mathcal Gamma d(\textbf{x}i,t)}\gamma_*\nu(O_o(B(\gamma.o,1+d(\textbf{x}i,t-t_1))))\\ &\le& e^{\delta_\mathcal Gamma c}\, e^{-\delta_\mathcal Gamma t+ 2\delta_\mathcal Gamma d(\textbf{x}i,t)} \end{eqnarray*} Recall that $t=-\log r$. Up to some universal constants, we deduce that \begin{eqnarray}\label{estimee-nu}\nu(B(\textbf{x}i,r))\le r^{\delta_\mathcal Gamma}e^{2\delta_\mathcal Gamma \,d(\textbf{x}i,\log r)} \end{eqnarray} It follows immediately that $\underline{\dim} \, \nu\ge \delta_\mathcal Gamma$. The other inequality follows easily from the classical version of Sullivan's Shadow Lemma, or from the well known fact that $\delta_\mathcal Gamma$ is the Hausdorff dimension of the radial limit set, which has full $\nu$-measure. \end{proof} \begin{lem}\label{dist} The following assertions are equivalent, and hold when $\mu$ is finite. \begin{itemize} \item for $\mu$-a.e. {\rm $\textbf{x}\in \mathcal F M$}, one has {\rm $$ \lim_{t\to +\infty}\frac{d(\textbf{x},\textbf{x} a_t)}{t}=0\,. $$ } \item for $\lambda$-a.e. {\rm $\textbf{x}\in \mathcal F M$}, one has {\rm $$ \lim_{t\to +\infty}\frac{d(\textbf{x},\textbf{x} a_t)}{t}=0\,. $$} \item for $m_{BM}$ or $m_{BR}$ a.e. $v\in T^1M$, one has $$ \lim_{t\to +\infty}\frac{d(v,g^tv)}{t}=\lim_{t\to +\infty}\frac{d(v,t)}{t}=0\,. $$ \item $\nu$-almost surely, $$ \lim_{t\to +\infty} \frac{d(\textbf{x}i(t),\mathcal Gamma.o)}{ t}=\lim_{t\to +\infty} \frac{d(\textbf{x}i,t)}{ t}=0\,. $$ \end{itemize} \end{lem} When $\mathcal Gamma$ is geometrically finite, a much better estimate is known thanks to Sullivan's logarithm law (see \cite{MR688349}, \cite{MR1327939}, \cite[Theorem 5.6]{MR2732978}), since the distance grows typically in a logarithmic fashion. However, this may not hold for geometrically infinite manifolds with finite $\mu$. In any case, the above sublinear growth is sufficient for our purposes. \begin{proof} First, observe that all statements are equivalent. Indeed, first, as $m_{BR}$ and $m_{BM}$ differ only by their conditionals on stable leaves, and the limit $d(v,g^t v)/t$ when $t\to +\infty$ depends only on the stable leaf $W^{ss}(v)$, this property holds (or not) equivalently for $m_{BR}$ and $m_{BM}$. 
Moreover, as $\mathcal F M$ is a compact extension of $T^1M$, this property holds (or not) equivalently for $\lambda$ on $\mathcal F M$ and $m_{BR}$ on $T^1M$ or $\mu$ on $\mathcal F M$ and $m_{BM}$ on $T^1M$. As this limit depends only on the endpoint $v^+$ of the geodesic, and not really on $v$, the product structure of $m_{BR}$ implies that this property holds true equivalently for $m_{BM}$-a.e. $v\in T^1M$ and $\nu$ almost surely on the boundary. Let us prove that all these equivalent properties indeed hold when $\mu$ is finite. Let $f(v)=d(v,1)-d(v,0)$. As the geodesic flow is $1$-lipschitz, this map is bounded, and therefore $\mu$-integrable. Thus, $\frac{S_n f}{n}$ converges a.s. to $\int f\,d\mu$, and therefore $d(v,t)/t\to \int f\,d\mu$, $\mu$-a.s. It is now enough to show that this integral is $0$. This would be obvious if we knew that the distance $d(v,0)$ is $\mu$-integrable. Divide $\Omega$ in annuli $K_n=\{v\in T^1\mathcal{M},d(\pi(v),o)\in (n,n+1)\}$, and set $B_n=T^1B(o,n+1)$. If $a_n=\mu(K_n)$, we have $\sum_n a_n=1$. Observe that $\int f\,d\mu=\lim_{n\to \infty}\int_{B_n} f\,d\mu$. It is enough to find a sequence $n_k\to +\infty $ such that these integrals are arbitrarily small. Observe that \[ \int_{B_N} f(x)d\mu(x)=\int_ {g^1(B_N) } d(v,0)d\mu- \int_{B_N} d(v,0)d\mu \] But now, the symmetric difference between $g^1B_N$ and $B_N$ is included in $K_N\cup K_{N+1}$. As $d(v,0)\le N+2$ in this union, we get \[ \left| \int_{B_N} f(x)d\mu(x) \right| \leq (N+2)(a_N+a_{N+1}). \] As $\sum a_n=1$, there exists a subsequence $n_k\to +\infty$, such that $(n_k+2)(a_{n_k}+a_{n_k+1})\to 0$. This proves the lemma. \end{proof} \subsection{Energy of the measure $\nu$} The $t$-energy of $\nu$ is defined as $$ I_t(\nu)=\int\int_{\mathcal Lambda^2}\frac{1}{|\textbf{x}i-\eta|^{t}}d\nu(\textbf{x}i)\,d\,\nu(\eta) \, . $$ The finiteness of a $t$-energy is sufficient to get the absolute continuity of the projection of $\nu$ on almost every $k$-plane of dimension $k<t$. However, a weaker form of finiteness of energy will be sufficient for our purposes, namely \begin{lem}\label{energie-finie} For all $t<\underline{\dim} \,\nu$, there exists an increasing sequence $(A_k)_{k\geq 0}$ such that $I_t(\nu_{|A_k})<\infty$, and $\nu(\cup_k A_k)=1$. \end{lem} \begin{proof} When $t<\dim\nu$, choose some $t<t'<\dim\nu$. One has, for $\nu$-almost all $x$, and $r$ small enough, $\nu(B(x,r))\le Cst. r^{t'}$. It implies the convergence of the integral $$\int_{\mathcal Lambda}\frac{1}{|\textbf{x}i-\eta|^{t}}d\,\nu(\eta)=t\int_0^\infty\frac{\nu(B(\textbf{x}i,r))}{r^{t-1}}\,dr<\infty $$ Therefore, the sequence of sets $A_M=\{x\in\partial\H^n,\int_{\mathcal Lambda_\mathcal Gamma}\frac{1}{|\textbf{x}i-\eta|^{t}}d\,\nu(\eta)\le M\}$ is an increasing sequence whose union has full measure. And of course, $I_t(\nu_{|A_M})<\infty$. \end{proof} It is interesting to know when the following stronger assumption of finiteness of energy is satisfied. In \cite{MO}, when $\dim N=2$ and $\dim U=1$, Mohammadi and Oh used the following: \begin{lem} If $\mathcal Gamma$ is convex-cocompact and $\delta>d-1-\dim U$ then $I_{d-1-\dim U}(\nu)<\infty$. 
\end{lem} \begin{proof} For $\textbf{x}i\in\mathcal Lambda_\mathcal Gamma$, define $A_k=\{\eta\in\partial \H^d,\,|\textbf{x}i-\eta|\in ]2^{-k-1},2^{-k}]\}$, and compute \[ \int_{\mathcal Lambda_\mathcal Gamma}\frac{1}{|\textbf{x}i-\eta|^{\dim N-\dim U}}d\,\nu(\eta) \simeq \sum_{k\in \N^*} 2^{k(\dim N-\dim U)}\nu(A_k) \] Denote by $\textbf{x}i_{k\log 2}$ the point at distance $k\log 2$ of $o$ on the geodesic ray $[o\textbf{x}i)$. As $\mathcal Gamma$ is convex-cocompact, $\Omega$ is compact, so that $\textbf{x}i_{k\log 2}$ is at bounded distance from $\mathcal Gamma o$. Sullivan' Shadow lemma implies that, up to some multiplicative constant, $\nu(A_k)\le \nu(B(\textbf{x}i,2^{-k}))\le Cst. 2^{-k\delta}$. We deduce that, up to multiplicative constants, \begin{eqnarray*} \int_{\mathcal Lambda_\mathcal Gamma}\frac{1}{|\textbf{x}i-\eta|^{\dim N-\dim U}} d\,\nu(\eta) &\le & \sum_{k } 2^{k(\dim N-\dim U-(1-\varepsilon)k\delta)} \end{eqnarray*} If $\delta>\dim N-\dim U$, for $\varepsilon>0$ small enough, the above series converges, uniformly in $\textbf{x}i\in\mathcal Lambda_\mathcal Gamma$, so that the integral $\int\int_{\mathcal Lambda_\mathcal Gamma^2}\frac{1}{|\textbf{x}i-\eta|^{\dim N-\dim U}} d\,\nu(\eta)d\,\nu(\textbf{x}i)$ is finite, and the Lemma is proven. \end{proof} As mentioned before, the reason we have to be interested in these energies is the following version of Marstrand's projection theorem, see for example \cite[thm 9.7]{MR1333890}. \begin{theo} \label{projL2} Let $\nu_1$ be a finite measure with compact support in $\R^m$, such that $I_t(\nu_1)<\infty$, for some $0<t<m$. For all integer $k<t$, and almost all $k$-planes $P$ of $\R^m$, the orthogonal projection $(\Pi_P)_*\nu_1$ of $\nu_1$ on $P$ is absolutely continuous w.r.t. the $k$-dimensional Lebesgue measure of $P$. Moreover, its Radon-Nikodym derivative satisfies the following inequality $$ \int_{\mathcal{G}_k^m} \int_P \left(\frac{d (\Pi_P)_*\nu_1}{d\mathcal{L}_P}\right)^2 d\mathcal{L}_P d\sigma_k^m < c. I_k(\nu_1) $$ where $\sigma_k^m$ is the natural measure on the Grassmannian $\mathcal{G}_k^m$, invariant by isometry, and $c$ some constant depending only on $k$ and $m$. \end{theo} \subsection{Conservativity/ Dissipativity of $\lambda$} In this section, we aim to prove Theorem \ref{nonergodicity}. The measure $\lambda$ is $N$-invariant (and $N$-ergodic), therefore, $U$-invariant for all unipotent subgroups $U<N$. It is $U$-conservative iff for all sets $E\subset \mathcal F\mathcal M$ with positive measure, and $\lambda$-almost all frames $\textbf{x}\in\mathcal F\mathcal M$, the integral $\int_0^\infty {\bf 1}_E(\textbf{x} u)du$ diverges, where $du$ is the Haar measure of $U$. In other words, it is conservative when it satisfies the conclusion of Poincar\'e recurrence theorem (always true for a finite measure). It is $U$-dissipative iff for all sets $E\subset \mathcal F\mathcal M$ with positive finite measure, and $\lambda$-almost all frames $\textbf{x}\in\mathcal F\mathcal M$, the integral $\int_0^\infty {\bf 1}_E(\textbf{x} u)du$ converges. A measure supported by a single orbit can be both ergodic and dissipative. In other cases, ergodicity implies conservativity \cite{Aar}. Therefore, Theorem \ref{ergodicity} implies that when the Bowen-Margulis-Sullivan measure is finite, and $\delta_\mathcal Gamma>\dim N-\dim U=d-1-\dim U$, the Burger-Roblin measure $\lambda$ is $U$-conservative. In the case $\delta_\mathcal Gamma<\dim N-\dim U$, we prove below (Theorem \ref{divergence}) that the measure $\lambda$ is $U$-dissipative. 
Unfortunately, our method does not work in the case $\delta_\mathcal Gamma=\dim N-\dim U$. We refer to works of Dufloux \cite{Dufloux2016} and \cite{Dufloux2017} for the proof that \begin{itemize} \item When $\mu$ is finite and $\mathcal Gamma$ Zariski dense, the measure $\mu$ is $U$-dissipative iff $\delta_\mathcal Gamma\le \dim N-\dim U$\\ \item When moreover $\mathcal Gamma$ is convex-cocompact, if $\delta_\mathcal Gamma=\dim N-\dim U$, then $\lambda$ is $U$-conservative. \end{itemize} \begin{theo}\label{divergence} Let $\mathcal Gamma$ be a discrete Zariski dense subgroup of $G={{\bf\operatorname{SO}}}_o(d,1)$ group and $U<G$ a unipotent subgroup. If $\delta<d-1-\dim U$, then for all compact sets $K\subset \mathcal F\mathcal M$ and $\lambda$-almost all {\rm $\textbf{x}\in \mathcal F M$} the time spent by {\rm$ \textbf{x} U$} in $K$ is finite. \end{theo} Let $d=\dim U$. Let $r>0$. Let $N_r\subset N$ (resp. $U_r\subset U$) be the closed ball of radius $r>0$ and center $0 $ in $ N$ (resp. in $U$). Let $K_r=K.N_r $ be the $r$-neighbourhood of $K $ along the $N$-direction. Let $\mu_{\textbf{x} N}$ be the conditional measure on $W^{ss} (\textbf{x})=\textbf{x} N$ of the Bowen-Margulis measure. \begin{figure} \caption{Intersection of a $U$-orbit with the $\mathcal Gamma$-orbit of a compact set $\textbf{w} \label{dissipative } \end{figure} \begin{lem}\label{comparison-noncompact}\rm For all compact sets $K\subset \Omega$, and all $\textbf{x}\in \mathcal{E}$, if $K_r=K.N_r$, for all $r>0$, there exists $c=c(x,r,K)>0$ such that \[ \int_ U 1_{K_{2r}}(\textbf{x} u ) du \leq c \, \mu_{\textbf{x} N }(\textbf{x} U N_{2r} ). \] \end{lem} \begin{proof} Let $r>0$. First, the map $\textbf{x}\in\Omega \mapsto \mu_{ \textbf{x} N }(\textbf{x} N_r )$ is continuous. It is an immediate consequence of \cite[Cor. 1.4]{FS}, where they establish that $\mu_{\textbf{x} N}(\partial\textbf{x} N_r)=0$ for all $\textbf{x}\in \mathcal Gamma\backslash G$ and $r>0$. In this reference, they assume at the beginning $\mathcal Gamma$ to be convex-cocompact, but they use in the proof of corollary 1.4 only the finiteness of $\mu$. The above map is also positive, and therefore bounded away from $0$ and $+\infty$ on any compact set. Let $0<c_r=\inf_{\textbf{z} \in K_{4r} } \mu_{\textbf{z} N }(\textbf{z} N_r )\le C_r=\sup_{\textbf{z} \in K_{4r} } \mu_{ \textbf{z} N }(\textbf{z} N_r )<\infty$. Let us work now on $G$ and not on $\mathcal Gamma\backslash G$. Fix a frame $\textbf{x}\in \textbf{w}idetilde{\mathcal{E}}\subset \mathcal F \H^d$. For all $\textbf{y}\in \textbf{x} U\cap \mathcal Gamma K N_r$, choose some $\textbf{z} \in \textbf{y} N_{r}\cap \mathcal Gamma K$ and consider the ball $\textbf{z} N_r$. Choose among them a maximal (countable) family of balls $\textbf{z}_i N_{r}\subset \textbf{x} U N_{2r}$ which are pairwise disjoint. By maximality, the family of balls $z_i N_{4r}$ cover $\textbf{x} U N_{r}\cap \mathcal Gamma K N_r$. We deduce on the one hand \[ \int_\mathcal U 1_{K_r}(\textbf{x} u) du \leq\sum_i \mu_{\textbf{x} N} (\textbf{z}_i N_{4r}) \le C_{4r} |I|\,. \] On the other hand, as the balls $\textbf{z}_i N_{r}$ are disjoint, \[ \mu_{\textbf{x} N}(\textbf{x} U N_{2r})\ge \sum_i \mu_{\textbf{x} N} (\textbf{z}_i N_r)\ge c_r |I|. \] This proves the lemma. \end{proof} To prove Theorem \ref{divergence}, it is therefore sufficient to prove the following lemma. \begin{lem}\label{integrale-finie} Assume that $\delta_\mathcal Gamma<\dim N-\dim U$. 
Then for all {\rm $\textbf{x} \in \mathcal{E}$} such that {\rm $\frac{d(\textbf{x}, \textbf{x} a_t)}{t}\to 0$} when $t\to +\infty$, we have \rm \[ \int_{M} \mu_{\textbf{x} m N}(\textbf{x} m U N_r) dm<\infty. \] \end{lem} Indeed, Lemma \ref{dist} ensures that the assumption of Lemma \ref{integrale-finie} is satisfied $\lambda$-almost surely. And by Lemma \ref{comparison-noncompact}, its conclusion implies that for $\lambda$-a.e. $\textbf{x}\in\mathcal{E}$ and almost all $m\in M$, the orbit $\textbf{x} m U$ does not return infinitely often in a compact set $K$. As $\lambda$ is by construction the lift to $\mathcal F M$ of $m_{BR}$ on $T^1M$, with the Haar measure of $M$ on the fibers, this implies that for $\lambda$-almost all $\textbf{x}$, the orbit $\textbf{x} m U$ does not return infinitely often in a compact set $K$. This implies the dissipativity of $\lambda$ w.r.t. the action of $U$, so that Theorem \ref{divergence} is proved. \begin{proof} Recall first that for $n\in N$ not too small, one has $d\mu_{\textbf{x} N}(\textbf{x} n)\simeq |n|^{2\delta} d\nu((\textbf{x} n)^-)$. We want to estimate the integral $\int_{M} \mu_{\textbf{x} m N }(\textbf{x} m U N_r ) d m$. First, observe that the measure $\mu_{\textbf{x} N}$ on $\textbf{x} N$ does not depend really on the orbit $\textbf{x} N$, in the sense that it is the lift of a measure on $W^{ss}(\pi_1(\textbf{x}))$ through the inverse of the canonical projection $\textbf{y}\in\textbf{x} N\to \pi_1(\textbf{y})$ from $\textbf{x} N$ to $W^{ss}(\pi_1(\textbf{x}))$. Therefore, one has $\mu_{\textbf{x} m N}( \textbf{x} m U N_r)=\mu_{\textbf{x} N}(\textbf{x} m U m^{-1} N_r)$. Thus, by Fubini Theorem, one can compute\,: \begin{align*} F(\textbf{x})& =\int_{M} \mu_{\textbf{x} m N }(\textbf{x} m U N_r ) d m \\ & =\int_{M} \mu_{\textbf{x} N}(\textbf{x} m U m^{-1} N_r ) d m \\ & =\int_{M\times N} 1_{m \in M, m U m^{-1}\cap n N_r\neq\emptyset }(m)d m d\mu_{\textbf{x} N}(n)\\ & \simeq \int_{N_0} r^{\dim N-\dim U} |n|^{\dim U-\dim N} d\mu_{\textbf{x} N}(n)\\ & \simeq \int_{N_0} r^{\dim N-\dim U} |n|^{ \dim U-\dim N+2\delta} d\nu ((\textbf{x} n)^-) \\ \end{align*} where $N_t=\{n\in N; |n|\geq 2^t\}$. The estimate comes from the probability that a point in the sphere of dimension $k-2$ falls in the $r/|n|$-neigborhood of a fixed subsphere of dimension $d-1$, see for example \cite[chapter 3]{MR1333890}. Therefore, we get \[F(\textbf{x}) \le \sum_{l\ge 0} 2^{l(\dim U-\dim N+2\delta)} \nu((\textbf{x} N_l)^-) \] Now, observe that $(\textbf{x} N_l)^-$ is comparable to the ball of center $x^+$ and radius $2^{-l}$ on the boundary. By Inequality (\ref{estimee-nu}), we deduce that $$\nu((\textbf{x} N_l)^-)\le 2^{-\delta l}e^{\delta_\mathcal Gamma d(\textbf{x} a_{l\log 2},\mathcal Gamma o)}$$ For all $\varepsilon >0$, there exists $l_0\ge 0$, such that $d(\textbf{x} a_{l\log 2})\leq \varepsilon l\log 2$ for $l\ge l_0$. Thus, up to the $l_0$ first terms of the series, we get the following upper bound for $F(\textbf{x})$. \begin{eqnarray*} F(\textbf{x}) & \le & \sum_{l=0}^{l_0-1}\dots +\sum_{l\ge 0} 2^{l(\dim U-\dim N+\delta)}e^{\delta_\mathcal Gamma d(\textbf{x} a_{l\log 2},\mathcal Gamma o)}\\ &\le &\sum_{l=0}^{l_0-1}\dots +\sum_{l\ge l_0} 2^{l(\dim U-\dim N+\delta+\varepsilon\delta)} \end{eqnarray*} Thus, if $\delta<\dim N-\dim U$, we can choose $\varepsilon>0$ so that $\dim U-\dim N+\delta+\varepsilon\delta<0$, and $F(\textbf{x})$ is finite. 
\end{proof} \begin{rema}\rm Observe that the above argument, in the case $\delta+\dim U= \dim N$, would lead to the fact that \[ \int_{M} \mu_{\textbf{x} m N}(\textbf{x} m U N_r) dm=\infty, \] which is not enough to conclude to the conservativity, that is that almost surely, $\mu_{\textbf{x} m N}(\textbf{x} m U N_r)=+\infty$. We refer to the works of Dufloux for a finer analysis in this case. \end{rema} \subsection{Equivalence of the Bowen-Margulis-Sullivan measure and the Burger-Roblin measure for invariants sets} As claimed in the introduction, we reduce the study of ergodicity of the Burger-Roblin measure $\lambda$ to the ergodicity of the Bowen-Margulis-Sullivan measure $\mu$. The rest of the section is devoted to the proof of the following Proposition: \begin{prop} \label{BMandBRequivalent} Assume that $\mathcal Gamma$ is Zariski-dense. If $\mu$ finite and $\delta_\mathcal Gamma +\dim(U)>d-1$, then for any $U$-invariant Borel set $E$, we have $\lambda(E)>0$ if and only if $\mu(E)>0$. \end{prop} We denote by $\mathcal{B}$ the Borel $\sigma$-algebra of $\mathcal{E}$, and $\mathcal{I}_U \subset \mathcal{B}$ the sub-$\sigma$-algebra of $U$-invariant sets. The first part of the proof of Proposition \ref{BMandBRequivalent} is the following. \begin{lem} \label{muposimplieslamdapos} Assume that $\mathcal Gamma$ is Zarisi-dense in ${\bf\operatorname{SO}}_o(d,1)$ and that $\mu$ is finite. If $\delta>\dim N-\dim U$ and $E$ is a Borel $U$-invariant set such that $\mu(E)>0$, then $\lambda(E)>0$. \end{lem} \begin{proof} Let $E$ be a Borel $U$-invariant set with $\mu(E)>0$. It is sufficient to show that $\tilde{\lambda}(\tilde{E})>0$. Let $\textbf{x}_0=(\textbf{x}_0^+,x_0^-,t_{\textbf{x}_0})$ be a frame in the support of the (non-zero) measure $1_{\tilde{E}}\tilde{\mu}$, and $F$ be a small neighbourhood of $\textbf{x}_0$. Denote by $\mathcal{H}(x^+,t_\textbf{x})$ the horosphere passing through the base-point of the frame $\textbf{x}$. The measure $\tilde{\mu}(\tilde{E}\cap F)$ can be written $$ \tilde{\mu}(\tilde{E}\cap F)= \int_{\mathcal{F}\mathcal Lambda_\mathcal Gamma \times \R} \left( \int_{\mathcal{H}(x^+,t)} 1_{\tilde{E}\cap F} (\textbf{x}^+,x^-,t)\,.g d\nu(x^-) \right) d\tilde{\nu}(\textbf{x}^+)dt_\textbf{x}, $$ where $g$ is a positive continuous function, namely the exponential of some Busemann functions, and $\tilde{\nu}$ the $M$-invariant lift of $\nu$ to $\mathcal{F}\mathcal Lambda_\mathcal Gamma$. The main point is that it is positive, so for a set $J\subset \mathcal{F}\mathcal Lambda_\mathcal Gamma \times \R$ of positive $\tilde{\nu}\otimes dt$ measure, for any $(\textbf{x}^+,t_\textbf{x})\in J$, the set $$E_{\textbf{x}^+,t}^F=\{x^- \; :\; (\textbf{x}^+,x^-,t_\textbf{x})\in \tilde{E}\cap F, \},$$ has positive $\nu$-measure. \\ \begin{figure} \caption{ } \label{comparison-mu-lambda} \end{figure} Since similarly, $$\tilde{\mu}(\tilde{E})= \int_{\mathcal{F}\mathcal Lambda_\mathcal Gamma \times \R} \left( \int_{\mathcal{H}(x^+,t_\textbf{x})} g'.1_{\tilde{E}} (\textbf{x}^+,x^-,t_\textbf{x}) d\mathcal{L}(x^-) \right) d\tilde{\nu}(\textbf{x}^+)dt_\textbf{x},$$ with $g'>0$, it is sufficient to show that for a subset of $(\textbf{x}^+,t_\textbf{x}) \in J$ of positive measure, the set $$ E_{\textbf{x}^+,t}=\{x^- \; :\; (\textbf{x}^+,x^-,t_\textbf{x})\in \tilde{E}, \} $$ has positive Lebesgue $\mathcal{L}$-measure.\\ On each horosphere $\mathcal{H}(x^+,t_\textbf{x})$, we wish to use Marstrand's projection Theorem, and therefore to use an identification of the horosphere with $\R^{d-1}$. 
A naive way would be to say that $\mathcal{H}(x^+,t_\textbf{x})$ is diffeomorphic to $\textbf{x} N$, and therefore to $N\simeq \R^{d-1}$. However, it will be more convenient to use an identification of these horospheres with $N\simeq \R^{d-1}$ which does not depend on a frame $\textbf{x}$ in $\pi_1^{-1}(\mathcal{H}(x^+,t_\textbf{x}))$. In order to obtain these convenient coordinates, we fix a smooth section $s$ from a neighbourhood of $x_0^+$ to $\mathcal{F} \partial \H^d$. If $\textbf{x} \in F$, the horosphere $\mathcal{H}(x^+,t_\textbf{x})$ can be identified (in a non-canonical way) with $N$ the following way: let $n\in N$, we associate to it the base-point of $(s(x^+),x_0^-,t_\textbf{x})n$. This way, the identification does depend only on the $MN$-orbit of $\textbf{x}$, that is depends on the horosphere only.\\ For $\textbf{x}^+\in \mathcal{F}\mathcal Lambda_\mathcal Gamma$, define $m=m(\textbf{x}^+) \in M$ by the relation $\textbf{x}^+=s(x^+)m$. If $\textbf{x} \in \tilde{E}$, then so does $\textbf{x} u= (s(x^+)m,x_0^-,t_\textbf{x})u =(s(x^+),x_0^-,t_\textbf{x})mu$, which has the same base-point as $(s(x^+),x_0^-,t_\textbf{x})mum^{-1}$. This means that the set $E_{\textbf{x}^+,t}$, viewed as a subset of $N$, is invariant by translations by the subspace $mUm^{-1}$ in these coordinates. From now on, $E_{\textbf{x}^+,t}$ will always be seen as a subset of $N$. \\ Let $V$ be the orthogonal complement of $U$ in $N$, and $\Pi_{mVm^{-1}}:N\to mVm^{-1}$ be the orthogonal projection onto $mVm^{-1}$. What we saw is that the set $E_{\textbf{x}^+,t}$ is a product of $mUm^{-1}$ and $\Pi_{mVm^{-1}}(E_{\textbf{x}^+,t})$. Clearly, it contains the product of $mUm^{-1}$ and $\Pi_{mVm^{-1}}(E_{\textbf{x}^+,t}^F)$, so it is of positive Lebesgue $V$-measure as soon as $\Pi_{mVm^{-1}}(E_{\textbf{x}^+,t}^F)$ has positive Lebesgue measure in $mVm^{-1}$. The strategy is now to use the projection Theorem \ref{projL2} on each horosphere to deduce that $\Pi_{mVm^{-1}}(E_{\textbf{x}^+,t}^F)$ is of positive Lebesgue measure for almost every $m \in M$. Unfortunately, we cannot apply it to the measure $1_{E_{\textbf{x}^+,t}^F}\nu$ directly, since the set $E_{\textbf{x}^+,t}^F$ depends on the orientation $m$ of the frame $\textbf{x}^+=s(x^+)m$ (and not only on the Horosphere $\mathcal{H}(x^+,t_\textbf{x})$), so it depends on $M$. \\ By Lemma \ref{energie-finie}, we can find a subset $L\subset \mathcal Lambda_\mathcal Gamma$, such that $\nu_{|L}$ has finite $\dim(N)-\dim(U)$-energy, and $E_{\textbf{x}^+,t}^F\cap L$ has positive $\nu$-measure for any $(\textbf{x}^+,t)\in J'$, where $J'\subset J$ is of positive $\tilde{\nu}\otimes dt$- measure. \\ One can moreover assume that for every horosphere $\mathcal{H}(x^+,t_\textbf{x})$ with $\textbf{x} \in F$, $L$ lies in a fixed compact set of $N$ using both identifications of the horosphere with $\partial \H^d$ and $N$. Notice that these identifications are smooth maps, so the finiteness of the energy of $\nu_{|L}$ does not depend on the model metric space chosen.\\ By Theorem \ref{projL2}, applied on each horosphere $\mathcal{H}(x^+,t)\simeq N=U\oplus V$, the orthogonal projection $(\Pi_{mVm^{-1}})_*\nu_{|L}$ is $m$-almost surely absolutely continuous with respect to the Lebesgue measure on $mVm^{-1}$. 
But since $1_{E_{\textbf{x}^+,t}^F}\nu_{|L} \ll \nu_{|L}$, we have for almost every $m$ $$(\Pi_{mVm^{-1}})_* (1_{E_{\textbf{x}^+,t}^F}\nu_{|L}) \ll (\Pi_{mVm^{-1}})_*\nu_{|L} \ll \mathcal{L}_{mVm^{-1}}.$$ This forces the projection set $\Pi_{mVm^{-1}} E_{\textbf{x}^+,t}^F$ to be of positive $\mathcal{L}_{mVm^{-1}}$-measure $m$-almost surely, for those $m$ such that $(s(x^+)m,t)\in J'$. \end{proof} The second step of the proof is the following. \begin{lem} Assume that $\mathcal Gamma$ is Zariski-dense in ${SO^o(d,1)}$, that $\mu$ is ergodic and conservative, and $\delta>\dim N-\dim U$. If $E$ is a Borel $U$-invariant set such that $\mu(E)=1$, then $\lambda(E\triangle \mathcal{E})=0$. \end{lem} \begin{proof} First, pick some element $a\in A$ whose adjoint action has eigenvalue $\log(\lambda_a)>0$ on $\mathfrak{n}$ such that $a n a^{-1} =\lambda_a v$ for all $n \in N$. Replacing $E$ by $\cap_{k\in \Z}E.a^k$ (another set of full $\mu$-measure), we can freely assume that $E$ is $a$-invariant. \\ By Lemma \ref{muposimplieslamdapos}, we already know that $\lambda(E)>0$. As above also, let $V\subset N$ be a supplementary of $U$ in $N$. As $\lambda(E)>0$, we know that for $\lambda$-almost all $\textbf{x}\in E$, the set $V(\textbf{x},E)=\{v\in V, \textbf{x} v\in E\}$ has positive $V$-Lebesgue (Haar) measure $dv$. The Lebesgue density points of $V(\textbf{x},E)$ have full $dv$-measure. Recall that $V_t$ is the ball of radius $t$ in $V$. Let $\epsilon \in (0,1)$, and define for all $\textbf{x}\in \mathcal{E}$ (not only $\textbf{x}\in E$) \[ F_{\varepsilon,E}(\textbf{x})=\sup\left\{T>0 \, : \, \forall t \in (0,T), \int_V {\bf 1}_{\textbf{x} V_t \cap E} dv \ge (1-\varepsilon) |V_t|\right\}\,, \] with the convention that it is zero if no such $T$ exists; it may take the value $+\infty$. Observe that $F_{\varepsilon,E}$ is a $U$-invariant map, because $E$ is $U$-invariant. \\ Since the Lebesgue density points of $V(\textbf{x},E)$ have full $dv$-measure, then for $\lambda$-almost all $\textbf{x} \in E$, and $dv$-almost all $v\in V(\textbf{x},E)$, $F_{\varepsilon,E}(\textbf{x} v)>0$. Moreover, this statement stay valid for other $U$-invariant sets $E'$ of positive $\lambda$-measure.\\ We claim that for $\mu$-almost every $\textbf{x} \in E$, $F_{\varepsilon,E}(\textbf{x})>0$. Assuming the contrary, $E'=F_{\varepsilon,E}^{-1}(0)\cap E$ is a $U$-invariant set of positive $\mu$-measure, so by Lemma \ref{muposimplieslamdapos}, it is also of positive $\lambda$-measure. As $E'\subset E$, $F_{\varepsilon,E'}\leq F_{\varepsilon,E}$, so that the function $F_{\varepsilon, E'}$ is identically zero on $E'$. But there exists $\textbf{x} \in E'$ and $v\in V(\textbf{x},E')$ such that $\textbf{x} v\in E'$ (by definition of $V(\textbf{x},E')$) and $F_{\varepsilon,E'}(\textbf{x} v)>0$, by the previous consideration of Lebesgue density points, leading to an absurdity.\\ We will now show that $F_{\varepsilon,E}$ is in fact infinite, $\mu$-almost surely. First, the classical commutation relations between $A$ and $N$ (and therefore $A$ and $V\subset N$) give $a V_T a^{-1}=V_{\lambda_a T}$. Observe also that,by $a$-invariance of $E$, \[ V(\textbf{x} a, E)=\{v,\textbf{x} a v\in E\}=\{v\in V, \textbf{x} a v a^{-1}=\textbf{x}.(\lambda_a.v)\in E \}= \lambda_a^{-1} V(\textbf{x},E). \] Therefore, $F_{\varepsilon,E}(\textbf{x} a)= \lambda_a F_{\varepsilon,E}(\textbf{x})$, i.e. it is a function increasing along the dynamic of an ergodic and conservative measure-preserving system. 
This situation is constrained by the conservativity of $\mu$. Indeed, assume there exists $t_1<t_2$ such that $\mu(F_{\varepsilon,E}^{-1}(t_1,t_2))>0$. Then for all $k$ large enough (namely s.t. $\lambda_a^k>t_2/t_1$), we have $$ \left(F_{\varepsilon,E}^{-1}(t_1,t_2) \right) a^k\cap \left(F_{\varepsilon,E}^{-1}(t_1,t_2) \right)=\emptyset, $$ in contradiction to the conservativity of $\mu$ w.r.t. the action of $a$.\\ This shows that $F_{\varepsilon,E}(\textbf{x})=+\infty$ for $\mu$-almost all $\textbf{x} \in \mathcal{E}$. Define now $\mathcal{I}_E=\cap_{j\in\N^*} F_{1/j,E}^{-1}(+\infty)$. It is a $U$-invariant set of full $\mu$-measure as a countable intersection of sets of full $\mu$-measure. Therefore $\lambda(\mathcal{I})>0$ by Lemma \ref{muposimplieslamdapos}. By definition of $F_{\varepsilon, E}$, $\mathcal{I}$ consists of the frames $\textbf{x}$ such that $V(\textbf{x},E)$ is of full measure in $V$, a property that is $V$-invariant. Hence $\mathcal{I}$ is $N$-invariant of positive $\lambda$-measure, so by ergodicity of $(N,\lambda)$, it is of full $\lambda$-measure.\\ Unfortunately, we know that $E\subset \mathcal{I}_E$ but $\mathcal{I}_E$ does not have to be a subset of $E$. To be able to conclude the proof (i.e. show that $\lambda(E^c)=0$), we consider the complement set $E'=E^c$, and assume it to be of positive $\lambda$-measure. For any $\textbf{x} \in \mathcal{I}_E$ and $v\in V$, by definition of $\mathcal{I}_E$, $F_{\varepsilon,E^c}(\textbf{x} v)=0$. So the intersection of $\mathcal{I}_E$ and $E^c$ is of zero measure, and thus $\lambda(E^c)=0$. \end{proof} Let us now conclude the proof of Proposition \ref{BMandBRequivalent}. Let $E$ be a $U$-invariant set. We already know that $\mu(E)>0$ implies $\lambda(E)>0$. For the other direction, assume that $\mu(E)=0$, so that $\mu(E^c)=1$. The above Lemma applied to $E^c$ therefore would imply $E^c=\mathcal{E}$ $\lambda$-almost surely, so that $\lambda(E)=0$. Thus, $\lambda(E)>0$ implies $\mu(E)>0$. \section{Ergodicity of the Bowen-Margulis-Sullivan measure} \subsection{Typical couples for the negative geodesic flow} Let us say that a couple $(\textbf{x},\textbf{y})\in \Omega^2$ is {\em typical} (for $\mu\otimes \mu$) if for {\em every} compactly supported continuous function $f\in C^0_c(\mathcal{E}^2)$, the conclusion of the Birkhoff ergodic Theorem holds for the couple $(\textbf{x},\textbf{y})$ in negative discrete time for the action of $a_1$, more precisely: \[ \lim_{N\rightarrow +\infty} \frac1{N} \sum_{k=0}^{N-1} f(\textbf{x} a_{-k},\textbf{y} a_{-k}) = \mu\otimes \mu(f). \] Write $\mathcal{T}$ for the set of typical couples, which is a subset of the set of generic couples. Let us explain briefly why this is a set of full $\mu\otimes \mu$-measure. Since the action of $A$ on $(\Omega,\mu)$ is mixing, so is the action of $a_{-1}$. A fortiori, the action of $a_{-1}$ on $(\Omega,\mu)$ is weak-mixing, so the diagonal action of $a_{-1}$ on $(\Omega^2,\mu\otimes \mu)$ is ergodic. It follows from the Birkhoff ergodic Theorem applied to a countable dense subset of the separable space $(C^0_c(\mathcal{E}^2),\|.\|_\infty)$ that $\mu\otimes\mu$-almost every couple is typical. As the set of generic couples used in the topological part of the article (see section \ref{sectiontop}), the subset of typical couples enjoys the same nice invariance properties by $\left( (M\times A)\ltimes N^-\right)^2$. That is, $(\textbf{x},\textbf{y})\in (\mathcal F \H^d)^2$ being the lift of a typical couple only depends on $(x^-,y^-)$ in Hopf coordinates. 
This follows from the fact that $M\times A$ acts isometrically on $C_c^0(\mathcal{E}^2)$ and commutes with $a_{-1}$, so $\mathcal{T}$ is $(M\times A)^2$-invariant, and the fact that, since elements of $C^0_c(\mathcal{E}^2)$ are uniformly continuous, two orbits in the same strong unstable leaf have the same limit for their ergodic averages. \subsection{Plenty of typical couples on the same $U$-orbit} \label{subsectionplenty} We will say that there are {\em plenty of typical couples on the same $U$-orbit} if there exists a probability measure $\eta$ on $\Omega^2$ such that the three following conditions are satisfied: \begin{enumerate}\label{plenty} \item Typical couples are of full $\eta$-measure, that is $\eta(\mathcal{T})=1$. \item Let $ p_1(\textbf{x},\textbf{y})=\textbf{x}, p_2(\textbf{x},\textbf{y})=\textbf{y}$ be the coordinates projections. We assume that, for $i=1,2$, $( p_i)_*\eta$ is absolutely continuous with respect to $\mu$. We denote by $D_1$, $D_2$ their respective Radon-Nikodym derivatives, so that $( p_i)_*\eta=D_i \mu$. We assume moreover that $D_2 \in L^{2}(\mu)$. \item Let $\eta_\textbf{x}$ and $\eta^\textbf{y}$ be the measures on $\Omega$ obtained by disintegration of $\eta$ along the maps $ p_i$, $i=1,2$ respectively. More precisely, for any $f\in L^1(\eta)$, \[ \int_{\Omega^2} f d\eta= \int_\Omega \left( \int_\Omega f(\textbf{x},\textbf{y}) d\eta_\textbf{x}(\textbf{y}) \right) d\mu(\textbf{x})= \int_\Omega \left( \int_\Omega f(\textbf{x},\textbf{y}) d\eta^\textbf{y}(\textbf{x}) \right) d\mu(\textbf{y}). \] Note that $\eta_\textbf{x}$ (resp $\eta^\textbf{y}$) have total mass $D_1(\textbf{x})$ (resp. $D_2(\textbf{y})$). Whenever this makes sense, define the operator $\Phi$ which to a function $f$ on $\Omega$ associates the following function on $\Omega$: \[ \Phi(f)(\textbf{x})=\int_{\Omega} f(\textbf{y})d\eta_\textbf{x}(\textbf{y}). \] The condition (3) here is that if $f$ is a bounded, measurable $U$-invariant function, then \[ \Phi(f)(\textbf{x})=f(\textbf{x})D_1(\textbf{x}) \] for $\mu$-almost every $\textbf{x}\in \Omega$. Note that even if $f$ is bounded, $\Phi(f)$ may not be defined everywhere. \end{enumerate} \begin{rema}\rm Observe that we do not require any invariance of the measure $\eta$. Condition (1) replaces the $A$-invariance, whereas Condition (3) establish a link between the structure of $U$-orbits and $\eta$. \end{rema} \begin{rema}\label{eta-supported-on-U-orbits}\rm Let us comment a little bit on condition (3): it is obviously satisfied if, for example, $\eta_\textbf{x}$ is supported on $\textbf{x} U$ for almost every $\textbf{x}$, that is, $\eta$ is supported on couples of the form $(\textbf{x},\textbf{x} u)$ with $u\in U$. It will be the case for the measures $\eta$ we will construct in section \ref{constrplentydim3} and \ref{higherdimeta} in dimension 3 and higher respectively. \\ A good example of a measure $\eta$ satisfying (2) and (3) is the following: let $(\mu_\textbf{x})_{\textbf{x}\in \Omega}$ be the conditional measures of $\mu$ with respect to the $\sigma$-algebra of $U$-invariant sets, and define $\eta$ as the measure on $\Omega^2$ such that $\eta_\textbf{x}=\mu_\textbf{x}$ by the above disintegration along $ p_1$. However, its seems difficult to prove directly that it also satisfies (1). This example also highlights that condition (3) is in fact weaker than requiring that $\eta_\textbf{x}$ is supported on $\textbf{x} U$. 
\end{rema} \begin{rema}\label{pas-besoin-de-L2}\rm The condition that the Radon-Nikodym derivatives $D_i$ be in $L^2$ is not restrictive. Indeed , we will construct a measure $\eta'$ satisfying all above conditions except this $L^2$-condition. The Radon-Nikodym derivatives $D_i$ are integrable, so that they are bounded on a set of large measure. We will simply restrict $\eta'$ to this subset, and normalize it, to get the desired probability measure $\eta$. \end{rema} The interest we have in finding plenty of typical couples on the same $U$-orbit is due to the following key observation. \begin{lem} \label{reductiontwo} {\em To prove Theorem \ref{ergodicity}, it is sufficient to prove that there are plenty of typical couples on the same $U$-orbit, that is that there exists a probability measure $\eta$ satisfying (1),(2) and (3).} \end{lem} The next section is devoted to the proof of this observation. The idea is the following: suppose $g$ is a bounded $U$-invariant function. We aim to prove that $g$ is constant $\mu$-almost everywhere. Consider the integral of the ergodic averages for the function $g\otimes g$ on $\Omega^2$ with respect to $\eta$, $$ J_N=\int_{\Omega^2} \frac1{N} \sum_{k=0}^{N-1} g\otimes g(\textbf{x} a_{-k} , \textbf{y} a_{-k}) d\eta(\textbf{x},\textbf{y}).$$ If $\eta$ is supported only on couples on the same $U$-orbit, then since $g$ is constant on $U$-orbits, $g(\textbf{x} a_{-k})=g( \textbf{y} a_{-k})$ for $\eta$-almost every $(\textbf{x},\textbf{y})$, so \begin{align*} J_N & = \int_{\Omega^2} \frac1{N} \sum_{k=0}^{N-1} g(\textbf{x} a_{-k})^2 d\eta(\textbf{x},\textbf{y})\\ &=\int_\Omega \frac1{N} \sum_{k=0}^{N-1} g(\textbf{x} a_{-k})^2 D_1(\textbf{x}) d\mu(\textbf{x}), &=\int_\Omega g(\textbf{x})^2 \left( \frac1{N} \sum_{k=0}^{N-1} D_1(\textbf{x} a _k) \right) d\mu(\textbf{x}), \end{align*} so $J_N\to \int_\Omega g^2 d\mu$ by the Birkhoff ergodic Theorem applied to $D_1$. Observe that Property (3) is used in the first equality, and Property (2) in the second. For the sake of the argument, assume that $g$ is moreover {\em continuous with compact support}. Then by Condition (1) on typical couples, since $g\otimes g$ is continuous with compact support, the same sequence $J_N$ tends to $\int_{\Omega^2} g\otimes g d\mu=(\int_\Omega g d\mu)^2.$ Hence $g$ has zero variance, so is constant. Unfortunately, one cannot assume $g$ to be continuous, nor approximate it by continuous functions in $L^\infty(\mu)$. The regularity Condition (2) that $D_2 \in L^{2}$ will nevertheless allow us to use continuous approximations in $L^2(\mu)$. \subsection{Proof of Lemma \ref{reductiontwo} } We first need to collect some facts about the operator $\Phi$, and its behaviour in relationship with ergodic averages for the negative-time geodesic flow $a_{-1}$. \begin{lem} The operator $\Phi$ is a continuous linear operator from $L^2(\mu)$ to $L^1(\mu)$. \end{lem} As we will see, Property (2) of the measure $\eta$ is crucial here. \begin{proof} Let $f\in L^2(\mu)$, we compute \begin{align*} \|\Phi(f)\|_{L^1(\mu)} & = \int_\Omega \left| \Phi(f)(\textbf{x}) \right| d\mu(\textbf{x})\leq \int_\Omega \left( \int_\Omega |f(\textbf{y})| d\eta_\textbf{x}(\textbf{y}) \right) d\mu(\textbf{x}), \\ & \leq \int_{\Omega^2} |f(\textbf{y})| d\eta(\textbf{x},\textbf{y}) \leq \int_\Omega |f(\textbf{y})| \left( \int_\Omega d\eta^\textbf{y}(\textbf{x}) \right) d\mu(\textbf{y}),\\ & \leq \int_\Omega |f(\textbf{y})| D_2(\textbf{y}) d\mu(\textbf{y})\leq \|f\|_{L^2(\mu)} \, \|D_2\|_{L^{2}(\mu)}. 
\end{align*} \end{proof} Given $f,g$ two functions on $\Omega$, write $f\otimes g$ for the function $f\otimes g(\textbf{x},\textbf{y})=f(\textbf{x})g(\textbf{y})$ on $\Omega^2$. Denote by $\langle f,g \rangle_\mu=\int_\Omega f.g\,d\mu$ the usual scalar product on $L^2(\mu)$. For $f \in L^\infty(\mu)$ and $g\in L^2(\mu)$, a simple calculation gives \[ \int_{\Omega^2}f\otimes g \, d\eta= \langle f, \Phi(g)\rangle_\mu. \] Let $\Psi$ be the Koopman operator associated to $a_{1}$, that is $\Psi(f)(\textbf{x})=f(\textbf{x} a_1 )$. The Ergodic average of a tensor product can be written in terms of $\Phi$ and $\Psi$ the following way: \begin{align*} \int_{\Omega^2} \frac1{N} \sum_{k=0}^{N-1} f\otimes g(\textbf{x} a_{-k} , \textbf{y} a_{-k}) d\eta(\textbf{x},\textbf{y}) &= \frac1{N} \sum_{k=0}^{N-1} \langle \Psi^{-k}(f), \Phi( \Psi^{-k}(g))\rangle_\mu,\\ & = \langle f, \frac1{N} \sum_{k=0}^{N-1} \Psi^k \circ \Phi \circ \Psi^{-k}(g) \rangle_\mu \\ & = \langle f, \Xi_N(g) \rangle_\mu, \end{align*} where $\Xi_N$ is the operator $\Xi_N =\frac1{N}\sum_{k=0}^{N-1} \Psi^k \circ \Phi \circ \Psi^{-k}$. Since the Koopman operator is an isometry from $L^q(\mu)$ to $L^q(\mu)$ for both $q=1$ and $q=2$, the operator $\Xi_N$ from $L^2(\mu)$ to $L^1(\mu)$ has norm at most $$\|\Xi_N\|_{L^2 \to L^1}\leq \|\Phi\|_{L^2 \to L^1}.$$ Notice also that if $f,g$ are continuous with compact support, the above ergodic average converges toward $\langle f,1\rangle_\mu \langle g,1\rangle_\mu$ for $\eta$-almost every $x,y$, by Condition (1). By the Lebesgue dominated convergence Theorem, we also have \begin{equation}\label{Xi_n-converge} \lim_{N\to \infty} \langle f,\Xi_N(g)\rangle_\mu =\langle f,1\rangle_\mu \langle g,1\rangle_\mu. \end{equation} Let $g$ be a bounded measurable, $U$-invariant function. Since $\Psi^{-k}(g)$ is also bounded and $U$-invariant, by property (3), we have \[ \Phi(\Psi^{-k}(g))(\textbf{x})=g(\textbf{x} a_{-k})D_1(\textbf{x}). \] Therefore, \[ \Xi_N(g)(\textbf{x})=g(\textbf{x})\left(\frac1{N}\sum_{k=0}^{N-1} D_1(\textbf{x} a_k )\right). \] By the Birkhoff $L^1$-ergodic Theorem and boundedness of $g$, it follows that $\Xi_N(g)$ tends to $g$ in $L^1(\mu)$-topology. Our aim is to show that $g$ has variance zero. Let $(g_n)_{n\geq 0}$ be a sequence of uniformly bounded continuous functions with compact support converging to $g$ in $L^2(\mu)$ (and hence also in $L^1(\mu)$). Let $D>0$ be such that $\|g_n\|_\infty\leq D$ for all $n$. For $n,N$ positive integers, we have \begin{align*} \langle g,g \rangle_\mu - \langle g,1\rangle_\mu^2 = & \, \langle g-g_n,g \rangle_\mu + \langle g_n, g-\Xi_N(g) \rangle_\mu + \langle g_n, \Xi_N(g-g_n) \rangle_\mu \\ & \, + \left( \langle g_n, \Xi_N(g_n) \rangle_\mu-\langle g_n,1\rangle_\mu^2 \right)+ \left( \langle g_n,1\rangle_\mu^2 - \langle g,1\rangle_\mu^2\right). \end{align*} Therefore, \begin{align*} \left| \langle g,g \rangle_\mu -\langle g, 1\rangle_\mu^2 \right| \leq & \|g-g_n\|_1\|g\|_\infty + D \|g-\Xi_N(g)\|_1 + D \|\Xi_N\|_{L^2\to L^1} \|g-g_n\|_2 \\ & + \left| \langle g_n, \Xi_N(g_n) \rangle_\mu-\langle g_n,1\rangle_\mu^2 \right|+ \|g-g_n\|_1\|g+g_n\|_1. \end{align*} First fix $n$ and let $N$ goes to infinity. By what precedes, $\Xi_N(g)$ converges to $g$ in $L^1$ so that the second term vanishes. Since $g_n$ is continuous, by (\ref{Xi_n-converge}), the last but one term of the upper bound vanishes. We obtain \[ \left| \langle g,g \rangle_\mu -\langle g, 1\rangle_\mu^2 \right| \leq \|g-g_n\|_1\|g\|_\infty + D \|\Phi\|_{L^2\to L^1} \|g-g_n\|_2 + 2D \|g-g_n\|_1 . 
\] We now let $n$ go to infinity, and we get \[ \langle g,g \rangle_\mu -\langle g, 1\rangle_\mu^2=0 \] Therefore, $g$ has variance zero, so is constant. \subsection{Constructing plenty of typical couples : the dimension 3 case} \label{constrplentydim3} \subsubsection*{The candidate to be the measure $\eta$, in dimension $3$} First, recall that $N$ is identified with $\R^{d-1}=\R^2$. Fix also an isomorphism $U\simeq \R$, so that the set $U^+$ of positive elements is well defined. Consider the map $\textbf{w}idetilde{\mathcal{R}}: \textbf{w}idetilde{\Omega}^2\to \textbf{w}idetilde{\Omega}^2$ defined as follows. The image $(\textbf{x}',\textbf{y}')$ of $(\textbf{x},\textbf{y})$ is the unique couple such that $x'^+=x^+=y'+$, $x'^-=x^-$, $y'^-=y^-$, $t_{\textbf{x}'}=t_\textbf{x}=t_\textbf{y}$, and $\textbf{x}^+,\textbf{y}^+$ are the unique frames such that there exists $u\in U^+$ with $\textbf{x}' u=\textbf{y}'$. \begin{figure} \caption{The alignement map $\mathcal{R} \label{align} \end{figure} Consider the restriction of this map to couples $(\textbf{x},\textbf{y})$ inside some fundamental domain for the action of $\mathcal Gamma$ on $\textbf{w}idetilde{\Omega}$, so that we get a well defined map $\mathcal{R}:\Omega^2\to \Omega^2$. Define $\eta$ as the image $\eta:= \mathcal{R}_*(\mu\otimes\mu)$. Observe that condition (1) in \ref{plenty} is automatic, as being typical depends only on $x^-$ and $y^-$. Remark \ref{eta-supported-on-U-orbits} shows that condition (3) is also automatic. By Remark \ref{pas-besoin-de-L2}, we only need to show that its projections $ (p_1)_*\eta$ and $ (p_2)_*\eta$ are absolutely continuous w.r.t. $\mu$. That is the crucial part of the proof. We do it in the next sections. The key assumption will of course be our dimension assumption on $\delta_\mathcal Gamma>\dim N-\dim U$. Then, we will try to follow the classical strategy of Marstrand, Falconer, Mattila. However, a new technical difficulty will appear, because we will need to do radial projections on circles instead of orthogonal projections on lines. The length of the proof below is due to this technical obstacle. \subsubsection*{Projections} First of all, by lemma \ref{energie-finie}, we can restrict the measure $\nu$ to some subset $A\subset\mathcal Lambda_\mathcal Gamma$ of measure as close to $1$ as we want, with $I_1(\nu_{|A})<\infty$. In the sequel, we denote by $\nu_A$ the measure restricted to $A$ and normalized to be a probability measure. Fix four disjoint compact subsets $X_+,X_-,Y_+,Y_ -$ of $A\subset\mathcal Lambda_\mathcal Gamma$, each of positive $\nu $-measure, and write $\nu_{X_+},\nu_{X_-},\nu_{Y_+},\nu_{Y_-}$ for the Patterson measures restricted to each of these sets, normalized to be probability measures. Therefore, all their energies $I_1(\nu_{X_\pm})$ and $I_1(\nu_{Y_\pm})$ are finite. In fact, the definition of the measure $\eta$ will be slightly different than said above. First, $\tilde{\eta}$ will be the image by the projection map $\textbf{w}idetilde{\mathcal{R}}$ defined above of the restriction of $\tilde{\mu}\otimes\tilde{\mu}$ to the set of couples $(\textbf{x},\textbf{y})\in\textbf{w}idetilde{\Omega}^2$, such that $x^\pm\in X_\pm$ and $y^\pm \in Y_\pm$, $t_\textbf{x}\in [0,1]$, $t_\textbf{y}\in[0,1]$. Then $\eta$ will be defined on $\Omega^2$ as the image of $\tilde{\eta}$. Pick two distinct points outside $X_+$, called 'zero' and 'one'. 
For any $x^+\in X_+$, we identify $\partial\H^3\setminus \{x^+\}$ to the complex plane $\mathbb C$ by the unique homography, say $h^{x^+}:\partial\H^3\to \mathbb C\cup\{\infty\}$, sending $x^+$ to $+\infty$, zero to $0$ and one to $1$. We get a well defined parametrization of angles, as soon as $x^+$ is fixed. \begin{rema}\label{varierxplus} Observe that when $x^+$ varies in the compact set $X_+$, as $0$ and $1$ do not belong to $X_+$, all the quantities defined geometrically (projections, intersections of circles, ...) vary analytically in $x_+$. \end{rema} In particular, if $\textbf{x}\in \Omega$ is a frame, the frame $\textbf{x}^+$ in the boundary determines a unique half-circle from $x^+$ to $x^-$ in $\partial\H^3$, which is tangent to the first direction of $\textbf{x} ^+$ at $x^+$, and therefore, a unique half-line originating from $x^-$ in $\mathbb C\simeq \partial \H^3\setminus\{x^+\}$. We use therefore an angular coordinate $\theta_\textbf{x}\in [0,2\pi)$ instead of $\textbf{x}^+$. Let $\vec{u}_\theta$ be the unit vector $e^{i(\theta+\pi/2)}$ in the complex plane. Define the projection $ \pi_\theta^{x^+}$ in the direction $\theta$ from $\partial \H^3\setminus\{x^+\}$ to itself as $\pi_\theta^{x^+}(z)=z.\vec{u}_\theta$. Observe that the line $\R\vec{u}_\theta$ in $\mathbb C$, orthogonal to $\theta$, has a canonical parametrization, and a Lebesgue measure, denoted by $\ell^{x^+}_\theta$. \begin{figure} \caption{Angular parameter on $\mathbb{C} \label{angles} \end{figure} Once again, the variations of $x^+\mapsto \pi_\theta^{x^+}$ and $x^+\mapsto \ell^{x^+}_\theta$ are as regular as possible. For measures, it means that the Lebesgue measures $\ell^{x^+}_\theta$ are equivalent one to another when $x^+$ varies, with analytic Radon-Nikodym derivatives in $x^+$ in restriction to any compact set of $\partial\H^d$ which does not contain $x^+$. Observe also that when $x^+$ varies in $X_+$, the distances $d^{x^+}$ induced by the complex metric on $\mathbb C\simeq \partial\H^3\setminus\{x^+\}$, when restricted to the compact set $X_-\cup Y_-$, are uniformly equivalent to the usual metric on $\partial \H^3$. In particular, if we denote by $I_1^{x^+}$ the energy of a measure relatively to the distance $d^{x^+}$, there exists a constant $c=c(X_+,X_-,Y_+)$ such that for all $x^+\in X_+$, \begin{eqnarray}\label{energies-unif-equiv} \frac{1}{c}I_1(\nu_A)\le I_1^{x^+}(\nu_{X_-})\le c I_1(\nu_A) \quad\mbox{and}\quad \frac{1}{c}I_1(\nu_A)\le I_1^{x^+}(\nu_{Y_-})\le c I_1(\nu_A) \end{eqnarray} Rephrasing Marstrand's projection Theorem in dimension 2, we have: \begin{theo}(Falconer, \cite[p82]{MR2118797}, Mattila \cite[th 4.5]{Mattila}) \label{projection-Falconer} Assume that $I_1(\nu_A)<\infty$. Then for all fixed $x^+\in X^+$, and almost all $\theta\in[0,\pi)$, the projection $(\pi_\theta^{x^+})_*\nu_{Y^-}$ (resp. $(\pi_\theta^{x^+})_*\nu_{X^-}$) is absolutely continuous w.r.t $\ell_\theta^{x^+}$. Moreover, the map $H^{x^+}$ defined as \[ H^{x^+}:(\theta,\textbf{x}i)\in [0,\pi)\times \R \mapsto \frac{d(\pi_\theta^{x^+})_*\nu_{Y^-}}{d\ell^{x^+}_\theta}(\textbf{x}i) \] belongs to $L^2([0,\pi)\times \R)$, and we have $\|H^{x^+}\|^2_{L^2([0,\pi)\times \R)}\le C I_1(\nu_A)$, with $C$ a universal constant which does not depend on $x^+\in X_+$. In particular, as the variation in $x^+$ is analytic and $X^+$ compact, the map $(x^+, \theta,\textbf{x}i)\to H^{x^+}_\theta(\textbf{x}i)$ belongs to $L^2(X^+\times [0,\pi]\times\R)$, with $L^2$-norm bounded by the same upper bound $C I_1(\nu_A)$. 
The same result is true when replacing $Y^-$ with $X^-$. \end{theo} \begin{proof} Thanks to the comparison (\ref{energies-unif-equiv}) between the different notions of energy, we can replace $I_1^{x^+}(\nu_{X_+})$ by $I_1(\nu_A)$, and get the desired result. \end{proof} \subsubsection*{Hardy-Littlewood Maximal Inequality} Let $H^{x^+}_\theta$ be the map \[H^{x^+}_\theta:\textbf{x}i\in\R.\vec{u}_\theta\mapsto \frac{d(\pi_\theta^{x^+})_*\nu_{Y^-}}{d\ell^{x^+}_\theta}(\textbf{x}i) \] Its maximal function is defined as \[ MH^{x^+}_\theta(t)=\sup_{\varepsilon>0}\frac{1}{2\varepsilon}\int_{t-\varepsilon}^{t+\varepsilon}\frac{d(\pi_\theta^{x^+})_*\nu_{Y^-}}{d\ell^{x^+}_\theta}(\textbf{x}i)d\textbf{x}i= \sup_{\varepsilon>0}\frac{1}{2\varepsilon}\nu_{Y^-}(\{y\in Y^-, \pi_\theta^{x^+}(y)\in [t-\varepsilon,t+\varepsilon]\})\,. \] The strong maximal inequality of Hardy-Littlewood \cite{HL} with $p=2$ on $\R$ (of dimension $1$) asserts that there exists $C=C_{2,1}$ independent of $\theta$ such that for all $\theta\in[0,\pi)$, \[ \|MH^{x^+}_\theta\|_{L^2(\R)}\le C_{2,1}\|H_\theta^{x^+}\|_{L^2(\R)} \] We deduce that \[ \|MH^{x^+}\|_{L^2([0,\pi)\times\R)} \le \int_0^\pi C_{2,1}^2 \|H^{x^+}_\theta\|_{L^2(\R)}^2d\theta =C_{2,1}^2 \|H^{x^+}_\theta\|_{L^2([0,\pi)\times\R)}^2 <+\infty \] The above also holds for the map $G^{x^+}$ defined by \[G^{x^+}_\theta:\textbf{x}i\in\R.\vec{u}_\theta\mapsto \frac{d(\pi_\theta^{x^+})_*\nu_{X^-}}{d\ell^{x^+}_\theta}(\textbf{x}i)\,, \] with the same constants. \subsubsection*{A geometric inequality} We want to show that the projections $( p_i)_*\eta$ on $\Omega$ are absolutely continuous w.r.t. $\mu$. We will first prove it for $ p_1$, and then observe that for $ p_2$, the situation is completely symmetric, when reversing the role of $x^-$ and $y^-$. Given a Borel set $P=E_+\times E_-\times E_t\times E_\theta\subset X_+\times X_-\times [0,1]\times [0,2\pi)$, observe that \[ ( p_1)_*\eta(P)=\\ \tilde{\mu}\otimes\tilde{\mu}(\{(\textbf{x},\textbf{y})\in \textbf{w}idetilde{\Omega}^2,\, x^+\in E_+,x^-\in E_-, t_\textbf{x}\in E_t, y^-\in C^{x^+}(x^-,E_\theta)\,\} \] where $C^{x^+}(x^-,E_\theta)$ is the cone of center $x^-$ with angles in $E_\theta$ in the complex plane $\mathbb C\simeq\partial\H^3\setminus\{x^+\}$. Similarly, \[ ( p_2)_*\eta(P)=\tilde{\mu}\otimes\tilde{\mu}(\{(\textbf{x},\textbf{y})\in \textbf{w}idetilde{\Omega}^2,\, x^+\in E_+,y^-\in E_-,t_\textbf{x}\in E_t, x^-\in C^{x^+}(y^-,E_\theta)\,\}\,. \] \begin{lem} To prove that $( p_1)_*\eta$ (resp. $( p_2)_*\eta$) is absolutely continuous w.r.t. $\mu$, it is enough to show that there exists a nonnegative measurable map $F_1$ (resp. $F_2$) such that for all rectangles $P=E_+\times E_-\times E_t\times E_\theta\in X_+\times X_-\times [0,1]\times M$ (resp. $P=E_+\times E_-\times E_t\times E_\theta\in X_+\times Y_-\times [0,1]\times M$ ) we have \[ ( p_1)_*\eta(P)\le \int_P F_1(x^+,x^-,\theta)d\nu_{X^+}(x^+)d\nu_{X^-}(x^-)dt d\theta \] and \[ ( p_2)_*\eta(P)\le \int_P F_2(x^+,y^-,\theta)d\nu_{X^+}(x^+)d\nu_{Y^-}(y^-)dt d\theta \] with $F_1\in L^1(\nu_{X^+}\times\nu_{X_-}\times [0,\pi])$, and $F_2\in L^1(\nu_{X^+}\times\nu_{Y_-}\times [0,\pi))$ \end{lem} \begin{proof} It is clear that $\mu(P)=0$ will imply $( p_1)_*\eta(P)=0$ for all rectangles. As they generate the $\sigma$-algebra of $\textbf{w}idetilde{\Omega}\cap(X_+\times X_-\times[0,1]\times[0,\pi)$ it implies that $( p_1)_*\eta$ is absolutely continuous w.r.t. $\mu$. The proof is the same with $ p_2$. 
\end{proof}
Let us show that such integrable maps $F_1$ and $F_2$ exist. In fact, we will prove that for every given $x^+$, $F_i(x^+,\cdot)$ is integrable. The fact that, as usual, all the quantities involved vary analytically in $x^+$ will then imply that $\|F_i(x^+,\cdot)\|$ is also integrable in $x^+$. As said above, for $P=E_+\times E_-\times E_t\times E_\theta$ we have
\[ ( p_1)_*\eta(P)=\int_{E_+\times E_-\times E_t}\int_{Y^-} {\bf 1}_{C^{x^+}(x^-,E_\theta)}(y^-)d\nu_{Y_-}(y^-)d\nu_{X_-}(x^-)d\nu_{X_+}(x^+) dt \]
Now, we wish to study the quantity $\nu_{Y^-}(C^{x^+}(x^-,E_\theta))$ in order to prove that, $x^+$ being fixed, the radial projection of $\nu_{Y^-}$ on the circle of directions around $x^-$ is absolutely continuous w.r.t. the Lebesgue measure $d\theta$, and to control the norm of the Radon-Nikodym derivative, which a priori depends on the variable $x^+$, and needs to be integrable in $x^+$. It now seems appropriate to use Theorem \ref{projection-Falconer} to conclude. Unfortunately, we have to prove that a radial projection is absolutely continuous, whereas Theorem \ref{projection-Falconer} deals with the orthogonal projection in a given direction. The Hardy-Littlewood maximal $L^2$-inequality will allow us to overcome this difficulty. Denote by $\Theta^{x^+}(x^-,y^-)$ the angle in $\partial \H^3\setminus \{x^+\}\simeq \mathbb C$ at $x^-$ of the half-line from $x^-$ to $y^-$. First, as the distance from $X^-$ to $Y^-$ is uniformly bounded from below, the cone $C^{x^+}(x^-,[\theta_0-\varepsilon,\theta_0+\varepsilon])$ intersected with $Y^-$ is uniformly included in a strip of the form $\{y^-\in Y^-, |\pi_{\theta_0}^{x^+}(y^-)-\pi_{\theta_0}^{x^+}(x^-)|\le c_0\varepsilon\}$, for some uniform constant $c_0$ depending only on the sets $X_\pm$ and $Y_\pm$, and not on $\varepsilon, x^\pm,y^\pm$. In particular, the following result holds. \\
\begin{figure}
\caption{Radial versus orthogonal projections of $\nu_{Y^-}$}
\label{radial-orthogonal}
\end{figure}
\begin{lem}\label{radial-versus-orthogonal} There exists a geometric constant $c_0>0$ depending only on the sizes and respective distances of the sets $X_\pm$ and $Y_\pm$, such that
\[ \nu_{Y_-}( C^{x^+}(x^-,[\theta_0-\varepsilon,\theta_0+\varepsilon])\cap Y^-)\le 2c_0\varepsilon MH_{\theta_0}^{x^+}(\pi_{\theta_0}^{x^+}(x^-))\,. \]
\end{lem}
\subsubsection*{Conclusion of the argument}
The above inequality does not directly allow us to conclude. Let us integrate it in $\theta$, to recover the $L^2$-norm of the Hardy-Littlewood maximal function. In the chain of inequalities below, the first inequality follows from the inclusion $[\theta_0-\varepsilon,\theta_0+\varepsilon]\subset [\theta-2\varepsilon,\theta+2\varepsilon]$ for $\theta$ in the first interval, and the second one from Lemma \ref{radial-versus-orthogonal}.
\begin{eqnarray*} \nu_{Y_-}(&C^{x^+}&(x^-,[\theta_0-\varepsilon,\theta_0+\varepsilon])\cap Y^-)\\ &\le &\int_{\theta_0-\varepsilon}^{\theta_0+\varepsilon}\nu_{Y^-}(\{y^-\in Y^-, \Theta^{x^+}(x^-,y^-)\in [\theta-2\varepsilon,\theta+2\varepsilon]\})\,\frac{d\theta}{2\varepsilon}\\ &\le& 4c_0\varepsilon\int_{\theta_0-\varepsilon}^{\theta_0+\varepsilon} MH_{\theta}^{x^+}(\pi_{\theta}^{x^+}(x^-))\frac{d\theta}{2\varepsilon} \\ &=& 2c_0 \int_{\theta_0-\varepsilon}^{\theta_0+\varepsilon}MH_{\theta}^{x^+}(\pi_{\theta}^{x^+}(x^-))\,d\theta \end{eqnarray*}
Define $F_1(x^+,x^-,\theta)$ as
$$ F_1(x^+,x^-,\theta)=2c_0MH_{\theta}^{x^+}\left(\pi_{\theta}^{x^+}(x^-)\right)= 2c_0\sup_{\varepsilon>0}\frac{1}{2\varepsilon}\int_{\pi_\theta^{x^+}(x^-)-\varepsilon}^{\pi_\theta^{x^+}(x^-)+\varepsilon} H_\theta^{x^+}(t)dt\,. $$
The absolute continuity of $(\pi_\theta^{x^+})_*\nu_{X^-}$ w.r.t. $\ell_\theta^{x^+}$, the Cauchy-Schwarz inequality and the Hardy-Littlewood maximal inequality imply that
\begin{eqnarray*} \|F_1(x^+,\cdot,\cdot)\|_{L^1(X^-\times[0,\pi])}&=& 2c_0 \int_{X^-}\int_0^\pi MH_{\theta}^{x^+}(\pi_{\theta}^{x^+}(x^-)) d\nu_{X^-}(x^-)d\theta\\ &=&2c_0\int_\R\int_0^\pi MH_{\theta}^{x^+}(\xi)\frac{d(\pi_\theta^{x^+})_*\nu_{X^-}}{d\ell_\theta^{x^+}}(\xi)d\xi d\theta\\ &\le & 2c_0\|MH^{x^+}\|_{L^2(\R\times[0,\pi])}\times\left\|\frac{d(\pi_\theta^{x^+})_*\nu_{X^-}}{d\ell_\theta^{x^+}}\right\|_{L^2(\R\times[0,\pi])}\\ &\le& C_1\|H^{x^+}\|_{L^2(\R\times[0,\pi])}\times\|G^{x^+}\|_{L^2(\R\times[0,\pi])} \end{eqnarray*}
where $C_1$ absorbs $2c_0$ and the Hardy-Littlewood constant $C_{2,1}$. This last product is, by Theorem \ref{projection-Falconer}, bounded from above by $C_1 C\, I_1(\nu_A)<\infty$. The uniformity of the bound in $x^+\in X^+$ allows us to integrate the above quantities once again and deduce that $F_1\in L^1(X^-\times X^+\times[0,\pi])$.
\subsection{The higher dimensional case} \label{higherdimeta}
In higher dimension, the strategy of the proof is similar. We want to build a measure $\eta$ on $\Omega^2$ which gives positive measure to {\em plenty of couples on the same $U$-orbit}. We will build $\eta$ from the measure $\mu\otimes\mu$, to obtain a measure defined on (a subset of) $\{(\textbf{x},\textbf{y})\in\Omega^2,\,\textbf{x} U=\textbf{y} U\}$, which gives full measure to typical couples $(\textbf{x},\textbf{y})$ (whose negative orbit satisfies the Birkhoff ergodic theorem for the diagonal action of $a_{-1}$), and whose projections $(p_1)_*\eta$ and $(p_2)_*\eta$ on $\Omega$ are absolutely continuous w.r.t. $\mu$. Contrary to the dimension $3$ case, we will not define any ``alignment map''. Indeed, given a typical couple $(\textbf{x},\textbf{y})$, one can begin as in dimension $3$, and try to find a frame $\textbf{x}'\in \textbf{x} M$ and a frame $\textbf{y}'\in \textbf{x}' U$ (or in other words $\textbf{y}'U=\textbf{x}'U$), so that in particular $y'^+=x'^+=x^+$, with the same past as $\textbf{y}$ (that is, $y'^-=y^-$). However, there is no canonical choice of such $\textbf{x}'$, $\textbf{y}'$, due to the fact that the dimension and/or the codimension of $U$ in $N$ will be greater than one. Therefore, we will directly define the new measure $\eta$, by a kind of averaging procedure over all good choices of couples $(\textbf{x}',\textbf{y}')$. \\ Identify the horosphere $\textbf{x} NM=\textbf{x} MN$ in $T^1\H^d$ with a $(d-1)$-dimensional affine space.
As in dimension $3$, we wish that the frames $\textbf{x}'$ and $\textbf{y}'$ have their first vectors on $\textbf{x} NM$, that $\textbf{x}'$ belongs to the fiber $\textbf{x} M$ of the vector $\pi_1(\textbf{x})$, and $y'^-=y^-$, so that $\textbf{y}'$ belongs to the fiber $\textbf{y}' M$ (with an abuse of notation, as $\textbf{y}'$ is not well defined) of the well defined vector $v_\textbf{y}=(y^-,x^+,t_\textbf{x})$ of $\textbf{x} MN$. These fibers $\textbf{x} M$ and $\textbf{y}' M$ are well defined, so that the line from $\textbf{x} M$ to $\textbf{y}' M$ (that is, from $\pi_1(\textbf{x})$ to $v_\textbf{y}$) in the affine space $\textbf{x} NM$ is also well defined. Now, given any two frames $\textbf{x}'$ and $\textbf{y}'$ in the respective fibers $\textbf{x} M$ and $\textbf{y}' M$, such that $\textbf{x}' U=\textbf{y}' U$, the $k$-dimensional oriented linear space $P=\textbf{x}'UM$ contains the line from $\textbf{x} M$ to $\textbf{y}' M$. The set of such $P$ can be identified with ${ SO}(d-2)/\left( SO (k-1)\times{ SO}(d-k-1)\right)$. We will first choose $P$ randomly, using the ${ SO}(d-2)$-invariant measure on the latter space. Now, given $P$, the set of frames $\textbf{x}'$ such that the direction of the affine subspace $\textbf{x}'UM$ is $P$ can be identified with ${ SO}(k)\times { SO}(d-k-1)$, and we choose $\textbf{x}'$ randomly using the Haar measure of this group. This determines the element $u \in U$ such that $\textbf{x}'uM=\textbf{y}'M$, so it determines $\textbf{y}'=\textbf{x}'u$ completely. \\
As in dimension 3, the non-trivial part is to show that the measure obtained by this construction has absolutely continuous marginals. We first describe the construction more precisely to fix notation.
\subsubsection{Restriction of the support of $\mu\otimes\mu$}
Recall that the lift $\tilde{\mu}$ of the measure $\mu$ on $\widetilde{\Omega}$ can be written locally as
\[ d\tilde{\mu}(\textbf{x})=d\nu(x^-)d\nu(x^+)dt_{\textbf{x}}dm\,, \]
where $dm$ denotes the Haar measure on the fiber $\textbf{x} M$ over $\pi_1(\textbf{x})$. Remember that a frame $\textbf{x}$ with first vector $\pi_1(\textbf{x})$ induces (by parallel transport to infinity) a frame at infinity in $T^1_{x^+}\partial \H^d$, or $T^1_{x^-}\partial \H^d$, so that $dm$ can also be seen as the Haar measure on the set of frames based at $x^-$ inside $T^1_{x^-}\partial\H^d$. As in dimension $3$, consider a subset $A\subset\Lambda_\Gamma$ of positive $\nu$-measure such that $I_1(\nu_{|A})<\infty$. Choose four compact sets $X^\pm, Y^\pm$ inside $A$, pairwise disjoint, and restrict $\tilde{\mu}\otimes\tilde{\mu}$ to the couples $(\textbf{x},\textbf{y})\in\widetilde{\Omega}^2$ such that $x^\pm\in X^\pm$ and $y^\pm \in Y^\pm$, and $t_\textbf{x},t_\textbf{y}\in [0,1]$.
\subsubsection{Coordinates on $\partial\mathbb{H}^d$}
For the purpose of constructing $\eta$, it will be convenient to have a family of identifications of horospheres, or here the complement of a point $x^+$ in $\partial \H^d$, with the vector space $\R^{d-1}$. Let $(e_i)_{1\le i\le d-1}$ be the canonical basis of $\R^{d-1}$. Choose three different points $x_0^+ \in X^+$, $x_0^-\in X^-$ and $y_0^-\in Y^-$, in the support of $\nu_{|X^+},\nu_{|X^-}$ and $\nu_{|Y^-}$ respectively. Now we want to get a unique homography $h_{x^+}$ from $\partial\H^d\setminus\{x^+\}$ to $\R^{d-1}$ sending $x_0^-$ to $0$, $y_0^-$ to $e_1$, and $x^+$ to infinity, with a smooth dependence in $x^+$. To do so, choose successively $d-3$ other points, say $q_2$, \dots,
$q_{d-2}$ in $\partial\H^d$, in such a way that, uniformly in $x^+\in X^+$, none of the points $x^+,x_0^-, y_0^-, q_2,\dots, q_{d-2}$ belongs to a circle containing three of the other points. Now, it is elementary to check that there is a unique conformal map $h_{x^+}$ sending $x^+$ to infinity, $x_0^-$ to $0$, $y_0^-$ to $e_1$, $q_2$ inside the half-plane $\R e_1 + \R_+ e_2$, $q_3$ inside the half-space $\R e_1+\R e_2+\R_+ e_3$, and so on up to $q_{d-2}$. This is the desired map. Up to decreasing the size of $X^+$, $X^-$ and $Y^-$ using neighbourhoods of $x_0^+,x_0^-,y_0^-$ respectively, we can moreover assume that, for all these conformal maps, uniformly in $x^+\in X^+$, $x^-\in X^-$, $y^-\in Y^-$, the first coordinate of the vector $\overrightarrow{h_{x^+}(x^-)h_{x^+}(y^-)}$ belongs to $[\frac{1}{2}, 2]$, and the norm of this vector is bounded by $3$. In the sequel, we use the coordinates induced by $h_{x^+}$ on $\partial\H^d$.
\subsubsection{A nice bundle}
We will construct a measure $\eta$ on the set
$$ \mathcal{S}_\eta=\{(\textbf{x},\textbf{y})\in\Omega^2 \, : \, x^+=y^+ \in X^+, x^-\in X^-, y^-\in Y^- , \textbf{x} U=\textbf{y} U\}\,, $$
and prove that it satisfies assumptions (1),(2),(3) of Lemma \ref{reductiontwo}, so that Theorem \ref{ergodicity} follows. Observe that this space $\mathcal{S}_\eta$ is a fiber bundle over some subset
$$ \mathcal{P} \subset X^+\times X^-\times Y^-\times \mathcal{G}_{k }^{d-1}\, , $$
whose projection is simply
$$ (\textbf{x},\textbf{y})\in \mathcal{S}_\eta\mapsto (x^+,x^-,y^-,Vect(x_1,\dots, x_{k})) \in\mathcal{P} \,, $$
where $Vect(x_1,\dots,x_{k})$ is the oriented $k$-dimensional linear space spanned by the $k$ first vectors of the frame $\textbf{x}^+$ at infinity with orientation $x_1\wedge...\wedge x_{k}$, or equivalently the $k$-plane spanned by these $k$ vectors viewed around $x^-$ at infinity, i.e. inside $\R^{d-1}$ identified with $\partial\H^d\setminus\{x^+\}$ using the map $h_{x^+}$. Moreover, observe that it is a principal bundle, whose fibers are isomorphic to ${SO}(k)\times { SO}(d-1-k)\times A$. Indeed, given a couple $(\textbf{x},\textbf{y})$ in the fiber of $(x^+,x^-,y^-,P)$, after possibly letting $A$ act diagonally so that both frames are based on the horosphere passing through the origin $o\in \H^d$, any other couple differs from $(\textbf{x},\textbf{y})$ only by changing $(x_1,\dots, x_{k})$ into another orthonormal basis of $Vect(x_1,\dots, x_{k})$, and $(x_{k+1},\dots,x_{d-1})$ into another orthonormal basis of $Vect(x_{k+1},\dots, x_{d-1})$, preserving the orientation.
\subsubsection{Defining the measure}
Given $x^+\in X^+$, we first define a measure $\bar{\eta}_{x^+}$ supported on the set
\[ \mathcal{P}_{x^+}=\{(x^-,y^-,P) \; : \; x^-\in X^-, y^- \in Y^-, P\in \mathcal{G}_{k}^{d-1}\,,\,\, \overrightarrow{h_{x^+}(x^-)h_{x^+}(y^-)} \in P \} \]
(a subset of $X^-\times Y^-\times \mathcal{G}_{k}^{d-1}$) as follows. Observe that, thanks to our choice of coordinates, the vector $\overrightarrow{h_{x^+}(x^-)h_{x^+}(y^-)}$ always has a nonzero coordinate along $e_1$. Therefore, any $k$-plane $P$ containing $\overrightarrow{h_{x^+}(x^-)h_{x^+}(y^-)}$ is uniquely determined by its $(k-1)$-dimensional intersection $P\cap e_1^\perp$ with $e_1^\perp$, through $P=(P\cap e_1^\perp)\oplus\R\,\overrightarrow{h_{x^+}(x^-)h_{x^+}(y^-)}$.
Thus, we have a well defined measure on $\mathcal{P}_{x^+}$:
\[ d\bar{\eta}_{x^+}(x^-,y^-,P)=d\nu_{X^-}(x^-) d\nu_{Y^-}(y^-)d\sigma_{k-1}^{d-2}(P\cap e_1^\perp)\,, \]
where $\sigma_{k-1}^{d-2}$ is the $SO(d-2)$-invariant probability measure on the Grassmannian manifold of $(k-1)$-planes in $e_1^\perp$.\\
Now, $\mathcal{P}$ is a bundle over $X^+$ with fibers $\mathcal{P}_{x^+}$. Define $\bar{\eta}$ on $\mathcal{P}$ as the measure which disintegrates as $\nu_{X^+}$ on the base $X^+$ and $\bar{\eta}_{x^+}$ in the fibers. Pick $\epsilon$ small enough, and lift $\bar{\eta}$ to $\widetilde{\eta}$ on $\widetilde{\Omega}^2$, or more precisely on its subset
$$ \widetilde{S}_\eta=\{(\textbf{x},\textbf{y})\in \widetilde{\Omega}^2, \,\textbf{x} U=\textbf{y} U,\,\,t_\textbf{x}=t_\textbf{y}\in[0,\epsilon]\} \, $$
by endowing the fibers with the Haar measure of ${ SO}(k )\times { SO}(d-1-k)$ times the uniform probability measure on the interval $[0,\epsilon]$. If $X^\pm,Y^\pm$ and $\epsilon$ are small enough, we can assume that the support of $\widetilde{\eta}$ is included inside the product of two single fundamental domains of the action of $\Gamma$ on ${ SO}^o(d,1)$, so that it induces a well defined measure $\eta$ on the quotient. By construction, it is supported on couples $(\textbf{x},\textbf{y})$ in the same $U$-orbit, and as in dimension $3$, it gives full measure to couples $(\textbf{x},\textbf{y})$ which are typical in the past, because this property of being typical depends only on $x^-,y^-$, and $\nu_{|X^-}\otimes\nu_{|Y^-}$ gives full measure to the pairs $(x^-,y^-)$ which are negative endpoints of typical couples $(\textbf{x},\textbf{y})$. The main point to check is that $(p_1)_*\eta$ and $(p_2)_*\eta$ are absolutely continuous w.r.t. $\mu$.
\subsubsection{Absolute continuity}
Let us reduce the absolute continuity of $(p_i)_*\eta$ to another absolute continuity property, by a succession of elementary observations. First, to prove that $(p_1)_*\eta$ and $(p_2)_*\eta$ are absolutely continuous w.r.t. $\mu$, it is sufficient to prove that $(\tilde{p}_1)_*\widetilde{\eta}$ and $(\tilde{p}_2)_*\widetilde{\eta}$, where $\tilde{p}_i:\widetilde{\Omega}^2\to \widetilde{\Omega}$ are the coordinate maps, are both absolutely continuous with respect to $\tilde{\mu}$. Both measures are defined on the compact set
$$ T=\{ \textbf{x} \in \widetilde{\Omega} \, : \, t_\textbf{x} \in [0,\epsilon], \, x^+\in X^+, \, x^- \in (X^-\cup Y^-)\} \, . $$
This set $T$ is fibered over
$$ X^+\times(X^-\cup Y^-) \times \mathcal{G}_{k}^{d-1} \, , $$
with projection map $\textbf{x}\mapsto (x^+,x^-,\textbf{x} MU)$ and fiber isomorphic to $ { SO}(k)\times {SO}(d-k-1)\times A$.
$$\xymatrix{ { \widetilde{S}_\eta \subset\widetilde{\Omega}^2 } \ar[d] \ar[r]^{\tilde{p}_i} & { T\subset\widetilde{\Omega} } \ar[d] \\ {\mathcal{P} \subset X^+\times X^-\times Y^-\times\mathcal{G}_k^{d-1} } \ar[d] \ar[r]^-{\bar{p}_i} & { X^+\times( X^-\cup Y^-)\times \mathcal{G}_k^{d-1} } \ar[d]\\ {X^+} & X^+ \\} $$
On the upper left part of this diagram, observe that the measure $\tilde{\eta}$ disintegrates over $\mathcal{P}$, with the Haar measure of $SO(k)\times SO(d-1-k)\times A$ in the fibers, and $\bar{\eta}$ on $\mathcal{P}$.
Similarly, on the upper right of the diagram, the measure $\tilde{\mu}$ restricted to $T$ disintegrates over $X^+\times (X^-\cup Y^-)\times \mathcal{G}_k^{d-1}$, with measure $\nu_{X^+}\otimes \nu_{X^-\cup Y^-}\otimes \sigma_k^{d-1}$ on the base, and Haar measure of $SO(k)\times SO(d-1-k)\times A$ in the fibers. Therefore, to prove that $(\tilde{p}_i)_*\tilde{\eta}$ is absolutely continuous w.r.t. $\tilde{\mu}$, it is enough to prove that $(\bar{p}_i)_*\bar{\eta}$ is absolutely continuous w.r.t. $\nu_{X^+}\otimes \nu_{X^-\cup Y^-}\otimes \sigma_k^{d-1}$. Look at the lower part of the diagram now. The measure $\bar{\eta}$ itself disintegrates over $X^+$, with $\nu_{X^+}$ on the base and $\bar{\eta}_{x^+}$ on each fiber $\mathcal{P}_{x^+}$, whereas the measure $(\bar{p}_i)_* \bar{\eta}$ also disintegrates over $\nu_{X^+}$, with the image of $\bar{\eta}_{x^+}$ on each fiber $\{x^+\}\times (X^-\cup Y^-)\times \mathcal{G}_k^{d-1}$. Thus, it is in fact enough to prove that for $\nu_{X^+}$-almost every $x^+$, the image of the measure $\bar{\eta}_{x^+}$ under the natural projection map $\mathcal{P}_{x^+}\to \{x^+\}\times (X^-\cup Y^-)\times \mathcal{G}_k^{d-1}$ is absolutely continuous w.r.t. $\nu_{X^-\cup Y^-}\otimes \sigma_k^{d-1}$. The precise statement that we will prove is Lemma \ref{abscontdimsup}. By the above discussion, it implies that $(p_i)_*\eta$ is absolutely continuous w.r.t. $\mu$, and therefore, as in dimension $3$, Theorem \ref{ergodicity} follows from Lemma \ref{reductiontwo}.
\subsubsection{Absolute continuity of conditional measures}
We now discuss the absolute continuity of the marginal laws of $\bar{\eta}_{x^+}$. In order to do so, it is necessary to say a few words about the distance on the Grassmannian manifolds of oriented subspaces that we shall use. As we are only interested in the local properties of the distance, we will (abusively) define it only on the Grassmannian manifold of unoriented subspaces. If $P$ is an $l$-dimensional subspace of a Euclidean space of dimension $n$, we write $\Pi_P$ for the orthogonal projection on $P$. If $P,P' \in \mathcal{G}_l^n$ are two $l$-dimensional subspaces, a distance between $P$ and $P'$ can be defined as the operator norm of $\Pi_P-\Pi_{P'}$ (which is also the operator norm of $\Pi_{P^\perp}-\Pi_{(P')^\perp}$). We will use the following facts.
\begin{enumerate}
\item The above distance is Lipschitz-equivalent to any Riemannian metric on $\mathcal{G}_l^n$, and $\sigma_l^n$ is a smooth measure. In particular, up to multiplicative constants, the measure of a ball of sufficiently small radius $r$ around a point $P$ is
$$ \sigma_{l}^{n}(B_{\mathcal{G}_{l}^{n}}(P,r)) \simeq r^{l(n-l)}. $$
\item Identify $e_1^\perp$ with $\R^{d-2}$. Define
$$ (\mathcal{G}_{k}^{d-1})'=\{P \in \mathcal{G}_{k}^{d-1} \; : \; P \not\subset e_1^\perp\}. $$
The map $P \in (\mathcal{G}_k^{d-1})'\mapsto P\cap e_1^\perp \in \mathcal{G}_{k-1}^{d-2}$ is well-defined and smooth, so that its restriction to any compact set is Lipschitz.
\item Let $P, P_1$ be two $k$-dimensional subspaces of $\R^{d-1}$. If $v\in P$, $\|v\|\leq 3$ and $d_{\mathcal{G}_k^{d-1}}(P,P_1)\leq r$, then, since $\Pi_{P_1^\perp}(v)=v-\Pi_{P_1}(v)=\Pi_P(v)-\Pi_{P_1}(v)$,
$$ \| \Pi_{P_1^\perp}(v) \| \leq 3r .
$$
\end{enumerate}
\begin{lem} \label{abscontdimsup} There exist two functions $F_{x^+,1} \in L^1(\nu_{X^-}\otimes \sigma_k^{d-1})$, $F_{x^+,2} \in L^1(\nu_{Y^-}\otimes \sigma_k^{d-1})$ such that for any Borel set $E \subset (X^-\cup Y^-)$, any ball $B=B(P_0,r)\subset \mathcal{G}_k^{d-1}$ of sufficiently small radius $r$ around some $P_0 \in \mathcal{G}_k^{d-1}$, and any $x^+ \in X^+$,
$$ \bar{\eta}_{x^+}(\{ (x^-,y^-,P)\in \mathcal{P}_{x^+} \; : \; (x^-,P) \in E\times B \}) \leq \int_{E\times B} F_{x^+,1} \, d\nu_{X^-}\otimes \sigma_k^{d-1}, $$
and
$$ \bar{\eta}_{x^+}(\{ (x^-,y^-,P)\in \mathcal{P}_{x^+} \; : \; (y^-,P) \in E\times B \}) \leq \int_{E\times B} F_{x^+,2} \, d\nu_{ Y^-}\otimes \sigma_k^{d-1}. $$
Moreover, the $L^1$-norms of $F_{x^+,i}$ are uniformly bounded on $X^+$.
\end{lem}
\begin{proof} We prove only the second inequality; the first one is similar and only exchanges the roles of $x^-$ and $y^-$ in what follows. \\
First choose some $P_1 \in B_{\mathcal{G}_k^{d-1}}(P_0,r)$. If $(x^-,y^-,P)\in \mathcal{P}_{x^+}$ with $P \in B_{\mathcal{G}_k^{d-1}}(P_1,2r)$, then, provided $r$ is small enough, both $P$ and $P_1$ are in a fixed compact subset of $(\mathcal{G}_{k}^{d-1})'$. This implies that for some fixed $c_0>0$,
$$ Q=P\cap e_1^\perp \in B_{\mathcal{G}_{k-1}^{d-2}}(P_1\cap e_1^\perp,c_0 r).$$
We also have, by the third fact above,
$$ d_{P_1^\perp}(\Pi_{P_1^\perp}(h_{x^+}(x^-)),\Pi_{P_1^\perp}(h_{x^+}(y^-)))\leq 6r. $$
Thus we have the inequalities
\begin{align*} & \bar{\eta}_{x^+} (\{ (x^-,y^-,P)\in \mathcal{P}_{x^+} \; : \; (y^-,P) \in E\times B_{\mathcal{G}_k^{d-1}}(P_1,2r) \})\\ & = \int 1_E(y^-)\, 1_{B_{\mathcal{G}_k^{d-1}}(P_1,2r)}(Q\oplus \R\,\overrightarrow{h_{x^+}(x^-)h_{x^+}(y^-)}) \, d\nu_{X^-}(x^-) d\nu_{Y^-}(y^-) d\sigma_{k-1}^{d-2}(Q)\\ & \leq \sigma_{k-1}^{d-2}(B_{\mathcal{G}_{k-1}^{d-2}}(P_1\cap e_1^\perp,c_0 r)) \int_{E}\int_{X^-} 1_{B(\Pi_{P_1^\perp}(h_{x^+}(y^-)),6r)}(\Pi_{P_1^\perp}(h_{x^+}(x^-)))d\nu_{ X^-}(x^-)d\nu_{ Y^-}(y^-)\\ & \leq \sigma_{k-1}^{d-2}(B_{\mathcal{G}_{k-1}^{d-2}}(P_1\cap e_1^\perp,c_0 r)) \int_{E} (\Pi_{P_1^\perp}\circ h_{x^+})_*\nu_{ X^-}(B(\Pi_{P_1^\perp}(h_{x^+}(y^-)),6r))d\nu_{ Y^-}(y^-)\\ & \leq \sigma_{k-1}^{d-2}(B_{\mathcal{G}_{k-1}^{d-2}}(P_1\cap e_1^\perp,c_0 r)) \int_{E} (6r)^{d-k-1} \, MH_{x^+,P_1}(\Pi_{P_1^\perp}(h_{x^+}(y^-)))d\nu_{ Y^-}(y^-), \end{align*}
where $MH_{x^+,P_1}$ is the maximal function
$$ MH_{x^+,P_1}(v)=\sup_{\rho>0} \rho^{-(d-k-1)} \int_{B_{P_1^\perp}(v,\rho)} \frac{d(\Pi_{P_1^\perp}\circ h_{x^+})_* \nu_{ X^-}}{dw}(w) dw\,. $$
We now integrate this inequality over $P_1 \in B_{\mathcal{G}_k^{d-1}}(P_0,r)$ using the uniform measure and the fact that
$$ B_{\mathcal{G}_k^{d-1}}(P_0,r)\subset B_{\mathcal{G}_k^{d-1}}(P_1,2r). $$
We obtain
\begin{align*} & \bar{\eta}_{x^+} (\{ (x^-,y^-,P)\in \mathcal{P}_{x^+} \; : \; (y^-,P) \in E\times B_{\mathcal{G}_k^{d-1}}(P_0,r) \})\\ & \leq \int_{B_{\mathcal{G}_{k}^{d-1}}(P_0,r)} \bar{\eta}_{x^+} (\{ (x^-,y^-,P)\in \mathcal{P}_{x^+} \; : \; (y^-,P) \in E\times B_{\mathcal{G}_k^{d-1}}(P_1,2 r) \}) \frac{d\sigma_k^{d-1}(P_1)}{\sigma_{k}^{d-1}(B_{\mathcal{G}_{k}^{d-1}}(P_0,r))}\\ & \leq \int_{E\times B_{\mathcal{G}_{k}^{d-1}}(P_0,r)} \frac{ \sigma_{k-1}^{d-2}(B_{\mathcal{G}_{k-1}^{d-2}}(P_1\cap e_1^\perp,c_0 r)) (6r)^{d-k-1}} {\sigma_{k}^{d-1}(B_{\mathcal{G}_{k}^{d-1}}(P_0,r)) } MH_{x^+,P_1}(\Pi_{P_1^\perp}(h_{x^+}(y^-)))d\nu_{Y^-}(y^-)d\sigma_k^{d-1}(P_1).
\end{align*}
Now, the ratio
$$ \frac{ \sigma_{k-1}^{d-2}(B_{\mathcal{G}_{k-1}^{d-2}}(P_1\cap e_1^\perp,c_0 r)) (6r)^{d-k-1}} {\sigma_{k}^{d-1}(B_{\mathcal{G}_{k}^{d-1}}(P_0,r)) } $$
is bounded by a uniform constant $c>0$: indeed, since the dimension of the Grassmannian manifold $\mathcal{G}_{l}^{n}$ is $l(n-l)$, the above ratio is comparable, up to multiplicative constants, with $\frac{r^{(k-1)(d-k-1)}\times r^{d-k-1}}{r^{k(d-k-1)}}= 1$.\\
This proves an inequality of the desired form with the function
$$F_{x^+,2}(y^-,P)=c \, MH_{x^+,P}(\Pi_{P^\perp}(h_{x^+}(y^-))).$$
We still have to show that this function is in $L^1(\nu_{Y^-}\otimes \sigma_k^{d-1})$. Let us compute its norm
\begin{align*} \mathcal{N}&=\int_{Y^-\times \mathcal{G}_{k}^{d-1}} MH_{x^+,P}(\Pi_{P^\perp}(h_{x^+}(y^-)))\,d\nu_{ Y^-}(y^-)d\sigma_{k}^{d-1}(P)\\ &=\int_{\mathcal{G}_{k}^{d-1}} \left( \int_{P^\perp} MH_{x^+,P}(v)\,d(\Pi_{P^\perp}\circ h_{x^+})_*\nu_{|Y^-}(v)\right) d\sigma_{k}^{d-1}(P)\\ &=\int_{\mathcal{G}_{k}^{d-1}} \left( \int_{P^\perp} MH_{x^+,P}(v)\,\frac{d(\Pi_{P^\perp}\circ h_{x^+})_*\nu_{|Y^-}}{dv}(v) dv\right) d\sigma_{k}^{d-1}(P). \end{align*}
By \cite[Theorem 9.7]{MR1333890}, the two Radon-Nikodym derivatives
$$\frac{d(\Pi_{P^\perp}\circ h_{x^+})_*\nu_{|Y^-}}{dv}, \; \frac{d(\Pi_{P^\perp}\circ h_{x^+})_*\nu_{|X^-}}{dv},$$
have the square of their $L^2$-norms bounded by a constant times the respective energies
$$I_{d-1-k}((h_{x^+})_* \nu_{|Y^-}), \; I_{d-1-k}((h_{x^+})_* \nu_{|X^-}).$$
By the Hardy-Littlewood inequality \cite[Theorem 2.19]{MR1333890}, this is also true for their maximal functions, with a different constant. By the choices of $X^+,X^-,Y^-$ and $h_{x^+}$, the family of maps $(h_{x^+})_{x^+\in X^+}$ is uniformly bi-Lipschitz when restricted to the compact set $X^-\cup Y^-$. In particular, the above energies are in turn bounded by a constant times $I_{d-1-k}(\nu)$.\\
The integral $\mathcal{N}$ is thus the scalar product of two $L^2$ functions, each one of norm less than a fixed multiple of $\sqrt{I_{d-1-k}(\nu)}$.\\
This implies that there exists a constant $c>0$ such that
$$\mathcal{N} \leq c \, I_{d-1-k}(\nu).$$
\end{proof}
\section{Acknowledgments}
The authors warmly thank S\'ebastien Gou\"ezel for all the interesting discussions and useful comments on the subject.
\end{document}
\begin{document} \title{New examples of compact special Lagrangian submanifolds embedded in hyper-K\"ahler manifolds} \author{Kota Hattori} \date{} \maketitle
{\abstract We construct smooth families of compact special Lagrangian submanifolds embedded in some toric hyper-K\"ahler\ manifolds, which never become holomorphic\ Lagrangian\ submanifolds via any hyper-K\"ahler\ rotations. These families converge to special\ Lagrangian\ immersions with self-intersection points in the sense of currents. To construct them, we apply the desingularization method developed by Joyce.}
\section{Introduction}
In $1982$, Harvey and Lawson introduced in \cite{harvey1982calibrated} the notion of calibrated submanifolds of Riemannian manifolds. They were recognized by many researchers as an important class of minimal submanifolds, which had already been well-studied for a long time. One of the important features of calibrated submanifolds is the volume minimizing property, that is, every compact calibrated submanifold minimizes the volume functional in its homology class. Several kinds of calibrated submanifolds are defined in Riemannian manifolds with special holonomy. For example, special Lagrangian submanifolds are the middle-dimensional calibrated submanifolds embedded in Riemannian manifolds with $SU(n)$ holonomy, the so-called Calabi-Yau manifolds. In hyper-K\"ahler\ manifolds, which are Riemannian manifolds with $Sp(n)$ holonomy, there is a notion of holomorphic\ Lagrangian\ submanifolds, which are calibrated by the $n$-th power of the K\"ahler form. At the same time, since hyper-K\"ahler\ manifolds are naturally regarded as Calabi-Yau manifolds, special\ Lagrangian\ submanifolds also make sense in these manifolds. Hence there are two kinds of calibrated submanifolds in hyper-K\"ahler\ manifolds, and it is well-known that every holomorphic\ Lagrangian\ submanifold becomes special\ Lagrangian\ after a hyper-K\"ahler\ rotation. The converse may not hold, although counterexamples had not been found. Another important aspect of calibrated geometry is that some calibrated submanifolds have moduli spaces with good structure. For instance, McLean has shown that the moduli space of compact special Lagrangian submanifolds becomes a smooth manifold, whose dimension is equal to the first Betti number of the special Lagrangian submanifold \cite{mclean1996deformations}. Although the construction of compact special\ Lagrangian\ submanifolds embedded in Calabi-Yau manifolds is not easy in general, Y. I. Lee \cite{lee2003embedded}, Joyce \cite{joyce2003special}\cite{joyce2004special} and D. A. Lee \cite{lee2004connected} developed gluing methods for the construction of families of compact special\ Lagrangian\ submanifolds converging to special\ Lagrangian\ immersions with self-intersection points in the sense of currents. Moreover, D. A. Lee constructed a non-totally geodesic special\ Lagrangian\ submanifold in the flat torus by applying his gluing method. After these works, several concrete examples of special\ Lagrangian\ submanifolds were constructed by gluing methods. See \cite{haskins2007slag}\cite{chan2009desingularization1}\cite{chan2009desingularization2}, for example. In this paper we apply the results in \cite{joyce2003special}\cite{joyce2004special} to the construction of new examples of compact special\ Lagrangian\ submanifolds embedded in toric hyper-K\"ahler\ manifolds.
Moreover, these examples never become holomorphic\ Lagrangian\ submanifolds with respect to any complex structure given by the hyper-K\"ahler\ rotations. A hyper-K\"ahler\ manifold is a Riemannian manifold $(M^{4n},g)$ equipped with an integrable hypercomplex structure $(I_1,I_2,I_3)$, such that $g$ is Hermitian with respect to every $I_\alpha$, and $\omega_\alpha:=g(I_\alpha\cdot,\cdot)$ are closed. For any $\theta\in \mathbb{R}$, note that $e^{\sqrt{-1}\theta}(\omega_2+\sqrt{-1}\omega_3)$ becomes a holomorphic symplectic $2$-form with respect to $I_1$. If the holomorphic symplectic form vanishes on a submanifold $L^{2n}\subset M$, $L$ is called a holomorphic\ Lagrangian\ submanifold. Clearly, this definition does not depend on $\theta$. Similarly, we can define the notion of holomorphic\ Lagrangian\ submanifold with respect to a complex structure $aI_1+bI_2+cI_3$ for every unit vector $(a,b,c)$ in $\mathbb{R}^3$. The new complex structure $aI_1+bI_2+cI_3$ is called a hyper-K\"ahler\ rotation of $(M,g,I_1,I_2,I_3)$. The hyper-K\"ahler\ manifold $M$ is naturally regarded as a Calabi-Yau manifold by the complex structure $I_1$, the K\"ahler form $\omega_1$ and the holomorphic volume form $(\omega_2 + \sqrt{-1}\omega_3)^n$. Then it is easy to see that holomorphic\ Lagrangian\ submanifolds with respect to $\cos(\alpha\pi/n) I_2 + \sin(\alpha\pi/n) I_3$ are special\ Lagrangian\ for every $\alpha =1,\cdots, 2n$. Conversely, it has been unknown whether there exist special\ Lagrangian\ submanifolds embedded in hyper-K\"ahler\ manifolds which never come from holomorphic\ Lagrangian\ submanifolds with respect to any complex structure given by the hyper-K\"ahler\ rotations. The main result of this paper is described as follows.
\begin{thm} Let $n\ge 2$. There exist smooth compact special\ Lagrangian\ submanifolds $\{ \tilde{L}_t\}_{0<t< \delta}$ and $\{ L_\alpha\}_{\alpha = 1,\cdots,2n}$ embedded in a hyper-K\"ahler\ manifold $M^{4n}$, which satisfy $\lim_{t\to 0}\tilde{L}_t = \bigcup_\alpha L_\alpha$ in the sense of currents, and $\tilde{L}_t$ is diffeomorphic to $2n(\mathbb{P}^1)^n \# (S^1\times S^{2n-1})$. Moreover, each $L_\alpha$ is a holomorphic\ Lagrangian\ submanifold of $M$ with respect to $\cos(\alpha\pi/n) I_2 + \sin(\alpha\pi/n) I_3$, although $\tilde{L}_t$ never becomes a holomorphic\ Lagrangian\ submanifold with respect to any complex structure given by the hyper-K\"ahler\ rotations. \label{main1}
\end{thm}
This is one of the examples we obtain in this article. Furthermore, we obtain a special\ Lagrangian\ $2\mathbb{P}^2 \# 2\overline{\mathbb{P}^2} \# (S^1\times S^3)$ embedded in an $8$-dimensional hyper-K\"ahler\ manifold and a special\ Lagrangian\ $(3N+1)(\mathbb{P}^1)^2 \# N(S^1\times S^3)$ embedded in another $8$-dimensional hyper-K\"ahler\ manifold, both of which never become holomorphic\ Lagrangian\ submanifolds with respect to any complex structure given by the hyper-K\"ahler\ rotations. Theorem \ref{main1} has another significance from the point of view of the compactification of the moduli spaces of compact special\ Lagrangian\ submanifolds. In general, the moduli space $\mathcal{M}(L)$ of the deformations of a compact special\ Lagrangian\ submanifold $L\subset X$ is not necessarily compact; consequently, the study of its compactification is an important problem. It is known that the compactification of $\mathcal{M}(L)$ is given by geometric measure theory.
The special\ Lagrangian\ immersion $\bigcup_\alpha L_\alpha$ appearing in Theorem \ref{main1} is a concrete example of an element of $\overline{\mathcal{M}(\tilde{L}_{t_0})}\backslash\mathcal{M}(\tilde{L}_{t_0})$. D. A. Lee also considered a similar situation; however, in \cite{lee2004connected} the Calabi-Yau structure of the ambient space of $\tilde{L}_t$ is deformed with the parameter $t$. Here, we describe the outline of the proof. Let $(M,J,\omega,\Omega)$ be a K\"ahler manifold of complex dimension $m \ge 3$ with holomorphic volume form $\Omega\in H^0(K_M)$, and $L_{\alpha}\subset M$ be connected special\ Lagrangian\ submanifolds, where $\alpha = 1,\cdots,A$. Put $\mathcal{V}=\{1,\cdots, A\}$, and suppose we have a quiver $(\mathcal{V},\mathcal{E},s,t)$, namely, $\mathcal{V}$ consists of finitely many vertices, $\mathcal{E}$ consists of finitely many directed edges, and $s,t$ are maps $\mathcal{E} \to \mathcal{V}$ such that $s(h)$ is the source of $h\in \mathcal{E}$ and $t(h)$ is the target. A subset $S\subset \mathcal{E}$ is called a cycle if it is written as $S=\{ h_1,h_2,\cdots,h_l\}$ such that $t(h_k) = s(h_{k+1})$ holds for all $k=1,\cdots,l-1$ and $t(h_l) = s(h_1)$. Then $\mathcal{E}$ is said to be {\it covered by cycles} if every edge $h\in \mathcal{E}$ is contained in some cycle of $\mathcal{E}$. If there are two special\ Lagrangian\ submanifolds $L_0,L_1\subset M$ intersecting transversely at $p\in L_0\cap L_1$, then we can define a type at the intersection point $p$, which is a positive integer less than $m$. Then we have the next result, which follows from Theorem 9.7 of \cite{joyce2003special} by some additional arguments.
\begin{thm} Let $(\mathcal{V},\mathcal{E},s,t)$ be a quiver, and $L_\alpha$ be connected compact special\ Lagrangian\ submanifolds embedded in a Calabi-Yau manifold $(M,J,\omega,\Omega)$ for every $\alpha\in \mathcal{V}$. Assume that $L_{s(h)}$ and $L_{t(h)}$ intersect transversely at only one point $p$ for each $h \in \mathcal{E}$, that $p$ is an intersection point of type $1$, and that $L_\alpha \cap L_\beta$ is empty if $\alpha\neq \beta$ and there are no edges connecting $\alpha$ and $\beta$. Then, if $\mathcal{E}$ is covered by cycles, there exists a family of compact special\ Lagrangian\ submanifolds $\{ \tilde{L}_t\}_{0<t<\delta}$ embedded in $M$ which satisfies $\lim_{t\to 0}\tilde{L}_t = \bigcup_{\alpha\in \mathcal{V}}L_\alpha$ in the sense of currents. \label{gluing}
\end{thm}
To obtain Theorem \ref{main1}, we apply Theorem \ref{gluing} to the case that $M$ is a toric hyper-K\"ahler\ manifold and $L_\alpha$ is a holomorphic\ Lagrangian\ submanifold with respect to $\cos(\alpha\pi/n) I_2 + \sin(\alpha\pi/n) I_3$. Accordingly, the proof is reduced to looking for toric hyper-K\"ahler\ manifolds $M$ and their holomorphic\ Lagrangian\ submanifolds $L_1,\cdots,L_{2n}$ satisfying the assumption of Theorem \ref{gluing}. In particular, finding $L_\alpha$'s such that $\mathcal{E}$ is covered by cycles is not so easy. The author has not developed a systematic way to find such examples in toric hyper-K\"ahler\ manifolds; however, we give some concrete examples in this article. In toric hyper-K\"ahler\ manifolds, many holomorphic\ Lagrangian\ submanifolds are obtained as the inverse images of some special polytopes under the hyper-K\"ahler\ moment maps, where the polytopes are naturally given by the hyperplane arrangements which determine the toric hyper-K\"ahler\ manifolds.
We can compute the type at the intersection point of two holomorphic\ Lagrangian\ submanifolds, if the intersection point is a fixed point of the torus action. Finally, we can find examples of toric hyper-K\"ahler\ manifolds and such polytopes, which satisfy the assumption of Theorem \ref{gluing}. Next we have to show that these examples of special\ Lagrangian\ submanifolds never become holomorphic\ Lagrangian\ submanifolds. Since $\tilde{L}_t$ is contained in the homology class $\sum_{\alpha}(-1)^\alpha [L_{\alpha}]$, we obtain the volume of $\tilde{L}_t$ by integrating the real part of the holomorphic volume form over $\sum_{\alpha}(-1)^\alpha [L_{\alpha}]$. On the other hand, if $\tilde{L}_t$ is a holomorphic\ Lagrangian\ submanifold with respect to some $aI_1+bI_2+cI_3$, then the volume can also be computed by integrating $(a\omega_1+b\omega_2+c\omega_3)^n$ over $\sum_{\alpha}(-1)^\alpha [L_{\alpha}]$, since $a\omega_1+b\omega_2+c\omega_3$ should restrict to the K\"ahler form on $\tilde{L}_t$. These two values of the volume do not coincide, so we have a contradiction. At the same time, we have another, simpler proof if the first Betti number of $\tilde{L}_t$ is odd, since any holomorphic\ Lagrangian\ submanifold is a K\"ahler manifold, and compact K\"ahler manifolds always have even first Betti number. The example constructed in Theorem \ref{main1} satisfies $b_1 = 1$, hence we can use this proof. However, we have other examples in Section \ref{sec6} whose first Betti number may be even. This article is organized as follows. First of all we define $\sigma$-holomorphic\ Lagrangian\ submanifolds in Section \ref{sec2} and review the construction of such submanifolds in toric hyper-K\"ahler\ manifolds in Section \ref{sec3}. Next we review the definition of the type at the intersection point of two special\ Lagrangian\ submanifolds, and then compute it in the case of toric hyper-K\"ahler\ manifolds in Section \ref{sec4}. In Section \ref{sec5}, we prove Theorem \ref{gluing} by using Theorem 9.7 of \cite{joyce2003special}. In Section \ref{sec6}, we find toric hyper-K\"ahler\ manifolds and their holomorphic\ Lagrangian\ submanifolds which satisfy the assumption of Theorem \ref{gluing}, and obtain compact special\ Lagrangian\ submanifolds embedded in some toric hyper-K\"ahler\ manifolds. In Section \ref{sec7}, we show that the examples obtained in Section \ref{sec6} never become $\sigma$-holomorphic\ Lagrangian\ submanifolds for any $\sigma\in S^2$.
{\bf Acknowledgment.} The author would like to express his gratitude to Professor Dominic Joyce for his advice on this article. The author is also grateful to Dr. Yohsuke Imagi for useful discussions and advice.
\section{Holomorphic Lagrangian submanifolds}\label{sec2}
\begin{definition} {\rm A Riemannian manifold $(M,g)$ equipped with integrable complex structures $(I_1,I_2,I_3)$ is a} hyper-K\"ahler\ manifold {\rm if each $I_{\alpha}$ is orthogonal with respect to $g$, the $I_\alpha$ satisfy the quaternionic relation $I_1I_2I_3 = -1$, and the fundamental $2$-forms $\omega_{\alpha}:=g(I_{\alpha}\cdot,\cdot)$ are closed.}
\end{definition}
We put $\omega=(\omega_1,\omega_2,\omega_3)$ and call it the hyper-K\"ahler\ structure. For each
\begin{eqnarray} \sigma = (\sigma_1,\sigma_2,\sigma_3) \in S^2=\{ (a,b,c)\in \mathbb{R}^3;\ a^2+b^2+c^2=1 \},\nonumber \end{eqnarray}
we have another K\"ahler structure
\begin{eqnarray} (M, I^\sigma, \omega^\sigma) := (M, \sum_{i=1}^3 \sigma_iI_i, \sum_{i=1}^3\sigma_i\omega_i).
\nonumber \end{eqnarray}
Take $\sigma',\sigma''\in S^2$ so that $(\sigma,\sigma',\sigma'')$ forms an orthonormal basis of $\mathbb{R}^3$. Suppose it has the positive orientation, that is,
\begin{eqnarray} \sigma \wedge \sigma' \wedge \sigma'' = (1,0,0) \wedge (0,1,0) \wedge (0,0,1)\nonumber \end{eqnarray}
holds. Then we have another hyper-K\"ahler\ structure $(\omega^\sigma, \omega^{\sigma'}, \omega^{\sigma''})$ which is called the hyper-K\"ahler\ rotation of $\omega$.
\begin{definition} {\rm Let $(M,g,I_1,I_2,I_3)$ be a hyper-K\"ahler\ manifold of real dimension $4n$, and $L\subset M$ be a $2n$-dimensional submanifold. Fix $\sigma\in S^2$ arbitrarily. Then $L$ is a $\sigma$}-holomorphic\ Lagrangian\ submanifold {\rm if $\omega^{\sigma'}|_{L} = \omega^{\sigma''}|_{L} = 0$.}
\end{definition}
It is easy to see that the above definition does not depend on the choice of $\sigma',\sigma''$. Any hyper-K\"ahler\ manifold can be regarded as a Calabi-Yau manifold by considering the pair of the K\"ahler manifold $(M,I_1,\omega_1)$ and the holomorphic volume form $(\omega_2+\sqrt{-1}\omega_3)^n \in H^0(M,K_M)$, where $K_M$ is the canonical line bundle of the complex manifold $(M,I_1)$. Therefore, we can consider the notion of special\ Lagrangian\ submanifolds in $M$ as follows.
\begin{definition} {\rm Let $(M,g,I_1,I_2,I_3)$ be a hyper-K\"ahler\ manifold of real dimension $4n$, and $L\subset M$ be a $2n$-dimensional submanifold. Then $L$ is a} special\ Lagrangian\ submanifold {\rm if $\omega_1|_{L} = {\rm Im}(\omega_2+\sqrt{-1}\omega_3)^n|_L = 0$.}
\end{definition}
\section{Toric hyper-K\"ahler\ manifolds}\label{torichk}\label{sec3}
\subsection{Construction}\label{const}
In this subsection we briefly review the construction of toric hyper-K\"ahler\ manifolds. Let $u_{\mathbb{Z}} : \mathbb{Z}^d \to \mathbb{Z}^n$ be a surjective $\mathbb{Z}$-linear map, which induces homomorphisms between the tori and between their Lie algebras, denoted by $\hat{u}: T^d \to T^n$ and $u : \mathbf{t}^d \to \mathbf{t}^n$, respectively. We put $K:={\rm Ker}\ \hat{u}\subset T^d$ and $\mathbf{k}:= {\rm Ker}\ u\subset \mathbf{t}^d$, where $\mathbf{k}$ is the Lie algebra of the subtorus $K$. The adjoint map of $u$ is denoted by $u^* : (\mathbf{t}^n)^*\to (\mathbf{t}^d)^*$, and it induces a map $u^*:V\otimes (\mathbf{t}^n)^*\to V\otimes (\mathbf{t}^d)^*$ naturally for any vector space $V$, which is also denoted by the same symbol. Next we consider the action of $T^d$ on the quaternionic vector space $\mathbb{H}^d$ given by $(x_1,\cdots,x_d)\cdot (g_1,\cdots, g_d) := (x_1g_1,\cdots,x_dg_d)$ for $x_k\in \mathbb{H}$ and $g_k\in S^1$. Then this action preserves the standard hyper-K\"ahler\ structure on $\mathbb{H}^d$, and the hyper-K\"ahler\ moment map $\mu_d:\mathbb{H}^d\to {\rm Im}\mathbb{H} \otimes (\mathbf{t}^d)^*$ is given by $\mu_d(x_1,\cdots,x_d) = (x_1i\overline{x_1},\cdots,x_di\overline{x_d})$. Here, ${\rm Im}\mathbb{H} \cong \mathbb{R}^3$ is the pure imaginary part of $\mathbb{H}$. Let $\hat{\iota} : K\to T^d$ and $\iota : \mathbf{k}\to \mathbf{t}^d$ be the inclusion maps, and let $\mu_K:=\iota^*\circ\mu_d : \mathbb{H}^d\to {\rm Im}\mathbb{H}\otimes\mathbf{k}^*$ be the hyper-K\"ahler\ moment map with respect to the $K$-action on $\mathbb{H}^d$. For each $\lambda=(\lambda_1,\cdots,\lambda_d)\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^d)^*$, we obtain the hyper-K\"ahler\ quotient $X(u,\lambda):= \mu_K^{-1}(\iota^*(\lambda))/K$, called a toric hyper-K\"ahler\ variety.
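To fix ideas, let us spell out a minimal and well-known instance of this construction; it serves only as an illustration and is not used in the sequel. Take $d=2$, $n=1$ and $u_{\mathbb{Z}}(a_1,a_2):=a_1+a_2$. Then $\mathbf{k}={\rm Ker}\ u=\{(t,-t);\ t\in\mathbb{R}\}$ and $K=\{(g,g^{-1});\ g\in S^1\}\cong S^1$. Identifying $\mathbf{k}$ with $\mathbb{R}$ by $t\mapsto (t,-t)$, the map $\iota^*$ sends $(a_1,a_2)$ to $a_1-a_2$, hence
\begin{eqnarray}
\mu_K(x_1,x_2) = x_1i\overline{x_1}-x_2i\overline{x_2},\qquad \iota^*(\lambda)=\lambda_1-\lambda_2,\nonumber
\end{eqnarray}
and
\begin{eqnarray}
X(u,\lambda)=\{ (x_1,x_2)\in\mathbb{H}^2;\ x_1i\overline{x_1}-x_2i\overline{x_2}=\lambda_1-\lambda_2 \}/S^1,\nonumber
\end{eqnarray}
where $S^1$ acts by $(x_1,x_2)\cdot g=(x_1g,x_2g^{-1})$. For $\lambda_1\neq\lambda_2$ this quotient is smooth, and it is the well-known Eguchi-Hanson space, diffeomorphic to $T^*\mathbb{P}^1$.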
The complex structures on $X(u,\lambda)$ are denoted by $I_{\lambda,1}, I_{\lambda,2},I_{\lambda,3}$, and the corresponding K\"ahler forms are denoted by $\omega_{\lambda}=(\omega_{\lambda,1}, \omega_{\lambda,2},\omega_{\lambda,3})$. Although $X(u,\lambda)$ is not necessarily a smooth manifold, an equivalent condition for smoothness was obtained by Bielawski-Dancer in \cite{bielawski2000geometry}. Let $e_1,\cdots,e_d\in \mathbb{R}^d$ be the standard basis and $u_k:=u(e_k) \in \mathbf{t}^n$. Put
\begin{eqnarray} H_k = H_k(\lambda) := \{ y\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^n)^* ;\ \langle y, u_k \rangle + \lambda_k = 0 \},\nonumber \end{eqnarray}
where
\begin{eqnarray} \langle y, u_k \rangle = (\langle y_1, u_k \rangle,\ \langle y_2, u_k \rangle,\ \langle y_3, u_k \rangle) \in \mathbb{R}^3 = {\rm Im}\mathbb{H} \nonumber \end{eqnarray}
for $y = (y_1,y_2,y_3)$.
\begin{thm}[\cite{bielawski2000geometry}] The hyper-K\"ahler\ quotient $X(u,\lambda)$ is a smooth manifold if and only if both of the following conditions $(*1)$ and $(*2)$ are satisfied.
$(*1)$ For any $\tau \subset \{1,2,\cdots, d \}$ with $\#\tau = n + 1$, the intersection $\bigcap_{k\in \tau} H_k$ is empty.
$(*2)$ For every $\tau \subset \{1,2,\cdots, d \}$ with $\#\tau = n$, the intersection $\bigcap_{k\in \tau} H_k$ is nonempty if and only if $\{u_k;\ k\in\tau \}$ is a $\mathbb{Z}$-basis of $\mathbb{Z}^n$. \label{smooth}
\end{thm}
The $T^d$-action on $\mathbb{H}^d$ induces a $T^n=T^d/K$ action on $X(u,\lambda)$ preserving the hyper-K\"ahler\ structure of $X(u,\lambda)$, and the hyper-K\"ahler\ moment map $\mu_{\lambda} = (\mu_{\lambda,1},\ \mu_{\lambda,2},\ \mu_{\lambda,3}): X(u,\lambda)\to {\rm Im}\mathbb{H}\otimes (\mathbf{t}^n)^*$ is defined by
\begin{eqnarray} u^*(\mu_{\lambda}([x])):= \mu_d(x) - \lambda,\nonumber \end{eqnarray}
where $[x]\in X(u,\lambda)$ is the equivalence class represented by $x\in \mu_K^{-1}(\iota^*(\lambda))$. Let $\sigma\in S^2$. A $T^n$-invariant submanifold $L\subset X(u,\lambda)$ becomes a $\sigma$-holomorphic\ Lagrangian\ submanifold if $\mu_\lambda(L)$ is contained in $q+\sigma\otimes (\mathbf{t}^n)^*$ for some $q\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^n)^*$.
\subsection{Local model of the neighborhood of a fixed point}\label{localmodel}
Let $X=X(u,\lambda)$ be a smooth toric hyper-K\"ahler\ manifold of real dimension $4n$, $\omega=\omega_{\lambda}$ and $\mu=\mu_{\lambda}$. Denote by $X^*$ the maximal subset of $X$ on which $T^n$ acts freely. Let $p\in X$ be a fixed point of the $T^n$-action. Then we can see that
\begin{eqnarray} H_{k_1}\cap H_{k_2}\cap \cdots\cap H_{k_n} = \{ \mu(p) \}\nonumber \end{eqnarray}
for some $k_1,\cdots, k_n$, and we may suppose $k_i=i$ without loss of generality. By the result of \cite{pedersen1988hyper}, the hyper-K\"ahler\ metric on $X^*$ can be described using $\mu$, a $T^n$-connection on $X^*$ and some functions defined on $\mu(X^*)$. Using their result, we can see that $\omega$ can be decomposed into two parts as
\begin{eqnarray} \omega = \omega_{\mathbb{H}^n} + \mu^*\eta \nonumber \end{eqnarray}
on $U$, where $U$ is a $T^n$-invariant neighborhood of $p$, $\omega_{\mathbb{H}^n}$ has the same form as the standard hyper-K\"ahler\ structure on $\mathbb{H}^n$, and $\eta\in \Omega^2(\mu(U))$. Hence we have the following.
\begin{prop} Let $(X,\omega,\mu)$ and $p$ be as above.
There is a $T^n$-invariant neighborhood $U\subset X$ of $p$, a $T^n$-equivariant diffeomorphism $F:U \to B_{\mathbb{H}^n}(\varepsilon)$ and $\eta\in \Omega^2(\mu(U))$ which satisfy
\begin{eqnarray} \omega|_U &=& F^*\omega_{\mathbb{H}^n}|_U+ \mu^*\eta, \nonumber\\ \mu_n|_{B_{\mathbb{H}^n}(\varepsilon)} \circ F &=& \mu|_U - \mu(p) \nonumber \end{eqnarray}
where $B_{\mathbb{H}^n}(\varepsilon)=\{ x\in \mathbb{H}^n;\ \| x\| < \varepsilon\}$, and $\omega_{\mathbb{H}^n}$ is the standard hyper-K\"ahler\ structure on $\mathbb{H}^n$. \label{3.2}
\end{prop}
\section{Characterizing angles}\label{sec4}
\subsection{Calabi-Yau case}
For the desingularization of special Lagrangian immersions which intersect transversely at a point, one should consider the characterizing angles introduced by Lawlor \cite{lawlor1989angle}. Let $(M,J,\omega)$ be a K\"ahler manifold, where $J$ is a complex structure and $\omega$ is a K\"ahler form. Suppose that there is a Lagrangian immersion $\iota : L \to M$, where $\iota$ is an embedding on $L\backslash \{p_+,p_-\}$ and $\iota(L)$ intersects itself transversely at $\iota(p_+) = \iota(p_-) = p \in M$. We do not assume that $L$ is connected, and the orientation of $L$ is fixed.
\begin{thm}[\cite{joyce2003special}\cite{joyce2004special}] Let $(J_0,\omega_0)$ be the standard K\"ahler structure on $\mathbb{C}^m$. There exists a linear map $v:T_pM\to \mathbb{C}^m$ satisfying the following conditions: (i) $v$ is a $\mathbb{C}$-linear isomorphism preserving the K\"ahler forms, (ii) there is $\varphi =(\varphi_1, \cdots, \varphi_m)\in \mathbb{R}^m$ which satisfies $0<\varphi_1\le \cdots\le \varphi_m<\pi$ and
\begin{eqnarray} v\circ\iota_* (T_{p_+} L) &=& \mathbb{R}^m = \{ (t_1,\cdots, t_m)\in \mathbb{C}^m;\ t_i\in\mathbb{R} \},\nonumber\\ v\circ\iota_* (T_{p_-} L) &=& \mathbb{R}^m_{\varphi}=\{ (t_1 e^{\sqrt{-1}\varphi_1},\cdots, t_m e^{\sqrt{-1}\varphi_m})\in \mathbb{C}^m;\ t_i\in\mathbb{R} \}.\nonumber \end{eqnarray}
(iii) $v$ maps the orientation of $\iota_* (T_{p_+} L)$ to the standard orientation of $\mathbb{R}^m$. Moreover, $\varphi_1, \cdots, \varphi_m$ and the orientation of $\mathbb{R}^m_{\varphi}$ induced by $v$ and $\iota_* (T_{p_-} L)$ do not depend on the choice of $v$. \label{joyce}
\end{thm}
\begin{proof} Choose a $\mathbb{C}$-linear map $v_0:T_pM\to \mathbb{C}^m$ which preserves the K\"ahler metrics and the orientations of $\iota_* (T_{p_+} L)$ and $\mathbb{R}^m$. Then $V_{\pm}:=v_0\circ\iota_* (T_{p_{\pm}} L)$ are Lagrangian subspaces of $\mathbb{C}^m$. It is well-known that any Lagrangian subspace in $\mathbb{C}^m$ is written as $g\cdot\mathbb{R}^m$ for some $g\in U(m)$, where $\mathbb{R}^m\subset \mathbb{C}^m$ is the standard real form of $\mathbb{C}^m$. Therefore there are $g_{\pm}\in U(m)$ so that $g_+\cdot V_+ = g_-\cdot V_- = \mathbb{R}^m$. We may choose $g_+$ so that it preserves the orientations of $V_+$ and $\mathbb{R}^m$. Once we find such $g_\pm$, we may replace them by $h_\pm g_\pm$ for any $h_+\in SO(m)$ and $h_-\in O(m)$, respectively. Now, let $v:= h_+ g_+ v_0$. Then we have $v\circ\iota_* (T_{p_+} L) = \mathbb{R}^m$ and
\begin{eqnarray} v\circ\iota_* (T_{p_-} L) &=& h_+ g_+ V_- = (h_+ g_+g_-^{-1}h_-^{-1})h_-g_- V_- \nonumber\\ &=& (h_+ g_+g_-^{-1}h_-^{-1}) \mathbb{R}^m. \nonumber \end{eqnarray}
Accordingly, it suffices to show that $h_\pm \in O(m)$ can be chosen so that $h_+ g_+g_-^{-1}h_-^{-1}$ is a diagonal matrix. Put $P= g_+g_-^{-1} \in U(m)$.
Since ${}^tPP $ is a unitary and symmetric matrix, it can be diagonalized by some $Q\in O(m)$, that is, ${}^tQ\,{}^tPP Q = {\rm diag}(e^{\sqrt{-1}\theta_1},\cdots,e^{\sqrt{-1}\theta_m})$ holds for some $0\le \theta_1\le \cdots \le\theta_m <2\pi$. Note that $Q$ can be chosen so that either ${\rm det}(Q) = 1$ or ${\rm det}(Q) = -1$. If we put $R:= P Q\, {\rm diag}(e^{-\sqrt{-1}\theta_1/2},\cdots,e^{-\sqrt{-1}\theta_m/2}) \in U(m)$, then ${}^tR=R^*=R^{-1}$ holds, hence $R$ is contained in $O(m)$. We determine the value of ${\rm det}(Q)$ so that ${\rm det}(R) = 1$. Hence the assertion follows by putting $h_+ = R^{-1}$, $h_- = Q^{-1}$ and $\varphi_i = \theta_i /2$. Here, $\varphi_i$ is never $0$ since $V_+$ and $V_-$ intersect transversely. Next we show the uniqueness of $\varphi_1\le \cdots \le\varphi_m$ and of the orientation of $\mathbb{R}^m_{\varphi}$. Assume that we have another $\hat{v}:T_pM\to \mathbb{C}^m$ satisfying $(i)(ii)(iii)$, and suppose
\begin{eqnarray} \hat{v}\circ\iota_* (T_{p_-} L) = \{ (t_1 e^{\sqrt{-1}\hat{\varphi}_1},\cdots, t_m e^{\sqrt{-1}\hat{\varphi}_m})\in \mathbb{C}^m;\ t_i\in\mathbb{R} \}\nonumber \end{eqnarray}
holds for some $0<\hat{\varphi}_1 \le \cdots \le \hat{\varphi}_m < \pi$. If we put $\hat{g}:=\hat{v}v^{-1}$, then $\hat{g}$ is in $U(m)$ and preserves the subspace $\mathbb{R}^m\subset\mathbb{C}^m$, hence $\hat{g} \in O(m)$. Moreover $\hat{g}\in SO(m)$ holds since $v$ and $\hat{v}$ satisfy $(iii)$. Moreover $\hat{g}(\mathbb{R}^m_{\varphi}) = \mathbb{R}^m_{\hat{\varphi}}$ also holds, consequently we can see that
\begin{eqnarray} G:= {\rm diag}(e^{-\sqrt{-1}\hat{\varphi}_1},\cdots,e^{-\sqrt{-1}\hat{\varphi}_m})\cdot \hat{g}\cdot {\rm diag}(e^{\sqrt{-1}\varphi_1},\cdots,e^{\sqrt{-1}\varphi_m})\nonumber \end{eqnarray}
is a real matrix, hence we can deduce that
\begin{eqnarray} \hat{g}\cdot {\rm diag}(e^{2\sqrt{-1}\varphi_1},\cdots,e^{2\sqrt{-1}\varphi_m})\cdot \hat{g}^{-1} = {\rm diag}(e^{2\sqrt{-1}\hat{\varphi}_1},\cdots,e^{2\sqrt{-1}\hat{\varphi}_m}) \nonumber \end{eqnarray}
from $\overline{G}=G$. Thus we obtain $e^{2\sqrt{-1}\varphi_i}=e^{2\sqrt{-1}\hat{\varphi}_i}$ for all $i=1,\cdots,m$, which implies $\varphi_i=\hat{\varphi}_i$, since both families are nondecreasing and taken from $(0,\pi)$. Now we have two orientations on $\mathbb{R}^m_{\varphi}$ induced from $v$ and $\hat{v}$ respectively. These coincide since ${\rm det}(G) = {\rm det}(\hat{g}) = 1$.
\end{proof}
Here, $\varphi=(\varphi_1, \cdots, \varphi_m)$ are called the characterizing angles between $(L,p_+)$ and $(L,p_-)$. Under the above situation, assume that there is a holomorphic volume form $\Omega$ on $M$ satisfying $\omega^m/m!= (-1)^{m(m-1)/2}(\sqrt{-1}/2)^m\Omega\wedge\overline{\Omega}$, where $m$ is the complex dimension of $M$. Let $\Omega_0:=dz_1\wedge \cdots\wedge dz_m$ be the standard holomorphic volume form on $\mathbb{C}^m$, and assume that $\iota : L\to M$ is a special\ Lagrangian\ immersion. Then there exists $v:T_pM\to \mathbb{C}^m$ satisfying Theorem \ref{joyce}. By the condition $(ii)$, we can see that $v^*\Omega_0 = \Omega_p$. Since both of $\iota_* (T_{p_\pm} L)$ are special Lagrangian subspaces, there is a positive integer $k \in\{1,2,\cdots, m-1\}$ such that $\varphi_1+ \cdots + \varphi_m =k\pi$ holds. Then the intersection point $p\in M $ is said to be of type $k$. Note that the type depends on the order of $p_+,p_-$. If we take the opposite order, the characterizing angles become $\pi -\varphi_m,\cdots,\pi -\varphi_1$ and the type becomes $m-k$.
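As a minimal illustration of the type, which is not taken from the references above, consider in $\mathbb{C}^3$ with $\Omega_0=dz_1\wedge dz_2\wedge dz_3$ the two Lagrangian planes $\mathbb{R}^3$ and $\mathbb{R}^3_{\varphi}$ with $\varphi=(\pi/3,\pi/3,\pi/3)$. In the parametrization $(t_1,t_2,t_3)\mapsto (t_1e^{\sqrt{-1}\pi/3},t_2e^{\sqrt{-1}\pi/3},t_3e^{\sqrt{-1}\pi/3})$ we have
\begin{eqnarray}
\Omega_0|_{\mathbb{R}^3_\varphi} = e^{\sqrt{-1}(\varphi_1+\varphi_2+\varphi_3)}\,dt_1\wedge dt_2\wedge dt_3 = -dt_1\wedge dt_2\wedge dt_3,\nonumber
\end{eqnarray}
so ${\rm Im}\,\Omega_0$ vanishes on both planes and both are special Lagrangian subspaces. Since $\varphi_1+\varphi_2+\varphi_3=\pi$, the intersection point $0$ is of type $k=1$ for this order of the two branches; for the opposite order the characterizing angles are $(2\pi/3,2\pi/3,2\pi/3)$ and the type is $m-k=2$.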
\subsection{Hyper-K\"ahler\ case}
An irreducible decomposition of the $T^n$-action on $\mathbb{H}^n$ is given by
\begin{eqnarray} \mathbb{H}^n = \bigoplus_{i=1}^n Z_i \oplus \bigoplus_{i=1}^n W_i, \nonumber \end{eqnarray}
where $Z_i$ and $W_i$ are complex $1$-dimensional representations of $T^n$ defined by
\begin{eqnarray} (g_1,\cdots, g_n)z_i:= g_i z_i,\quad (g_1,\cdots, g_n)w_i:= g_i^{-1} w_i\nonumber \end{eqnarray}
for $(g_1,\cdots, g_n)\in T^n$ and $z_i\in Z_i,\ w_i\in W_i$. Note that $Z_i$ and $W_i$ are not isomorphic as $\mathbb{C}$-representations, but the complex conjugation restricted to $Z_i$ gives an isomorphism of $\mathbb{R}$-representations $Z_i\to W_i$. For $(\alpha,\beta)\in S^3 \subset \mathbb{C}^2$, put $h(\alpha,\beta) := (|\alpha|^2 - |\beta|^2, 2{\rm Im}(\alpha \beta), -2{\rm Re}(\alpha \beta)) \in \mathbb{R}^3$. Then $h:S^3\to S^2$ is the Hopf fibration and the $S^1$-action is given by $e^{\sqrt{-1}t}\cdot(\alpha,\beta) = (e^{\sqrt{-1}t}\alpha,e^{-\sqrt{-1}t}\beta)$. Now we put
\begin{eqnarray} V_i(y) := \{(\alpha z_i, \beta \overline{z_i})\in Z_i\oplus W_i;\ z_i\in Z_i\}\nonumber \end{eqnarray}
for $y\in S^2$, where $(\alpha,\beta)\in S^3$ is taken so that $h(\alpha,\beta) = y$. Then $V_i(y)$ does not depend on the choice of $(\alpha,\beta)$, and $V_i(y)$ is a sub $\mathbb{R}$-representation of $Z_i\oplus W_i$. Conversely, any nontrivial sub $\mathbb{R}$-representation of $Z_i\oplus W_i$ is obtained in this way. Note that $V_i(y) = V_i(y')$ holds if and only if $y=y'$.
\begin{prop} Let $V\subset \mathbb{H}^n$ be a $\sigma$-holomorphic\ Lagrangian\ subspace which is closed under the $T^n$ action. Then we have
\begin{eqnarray} V = \bigoplus_{i=1}^n V_i(\varepsilon_i \sigma) \nonumber \end{eqnarray}
for some $\varepsilon_i = \pm 1$, and its hyper-K\"ahler\ moment map image is given by
\begin{eqnarray} \mu_n(V) = \{ \sigma\otimes (x_1,\cdots, x_n)\in {\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*;\ \varepsilon_i x_i\ge 0 \}. \nonumber \end{eqnarray}
\label{model}
\end{prop}
\begin{proof} Take $\sigma',\sigma''\in S^2$ so that $(\sigma,\sigma',\sigma'')$ is an orthonormal basis of $\mathbb{R}^3$ with positive orientation. Let $I_1,I_2,I_3$ be the standard basis of the pure imaginary part of $\mathbb{H}$, and $I^\sigma,I^{\sigma'},I^{\sigma''}$ be the corresponding hyper-K\"ahler\ rotation. Let $V\subset \mathbb{H}^n$ be a $\sigma$-holomorphic Lagrangian subspace which is closed under the $T^n$ action. Since $V$ is Lagrangian with respect to $\omega^{\sigma'}$, we have an orthogonal decomposition $\mathbb{H}^n = V \oplus I^{\sigma'} V$. Then $V$ and $I^{\sigma'} V$ are isomorphic as real representations of $T^n$, therefore $V$ should be isomorphic to $\oplus_{i=1}^n Z_i$ as a real representation of $T^n$, and can be written as $V = \oplus_{i=1}^nV_i(y_i)$ for some $y_i\in S^2$ by Schur's Lemma. Here, every $y_i$ is determined uniquely. Next we calculate the restriction of $\omega$ to $V$. Let $(z_1,\cdots, z_n,w_1,\cdots, w_n)$ be the holomorphic coordinates on $\mathbb{H}^n$, where $z_i\in Z_i \cong \mathbb{C}$ and $w_i \in W_i \cong \mathbb{C}$. Then $\omega$ can be written as
\begin{eqnarray} \omega_1 &=& \frac{\sqrt{-1}}{2}\sum_{i=1}^n(dz_i\wedge d\overline{z_i} + dw_i\wedge d\overline{w_i}),\nonumber\\ \omega_2+\sqrt{-1}\omega_3 &=& \sum_{i=1}^n dz_i\wedge dw_i.\nonumber \end{eqnarray}
Take $(\alpha_i,\beta_i)\in S^3$ such that $h(\alpha_i,\beta_i) = y_i$.
Then $P,Q\in V=\oplus_{i=1}^nV_i(y_i)$ can be written as
\begin{eqnarray} P &=& (\alpha_1p_1,\cdots, \alpha_n p_n, \beta_1\overline{p_1},\cdots \beta_n\overline{p_n}),\nonumber\\ Q &=& (\alpha_1q_1,\cdots, \alpha_n q_n, \beta_1\overline{q_1},\cdots \beta_n\overline{q_n})\nonumber \end{eqnarray}
for some $p_i,q_i\in Z_i\cong\mathbb{C}$. Then we obtain
\begin{eqnarray} \omega_1(P,Q) &=& \sum_{i=1}^n(|\alpha_i|^2 - |\beta_i|^2 ){\rm Im}(\overline{p_i}q_i),\nonumber\\ (\omega_2+\sqrt{-1}\omega_3)(P,Q) &=& -2\sqrt{-1}\sum_{i=1}^n \alpha_i\beta_i{\rm Im}(\overline{p_i}q_i).\nonumber \end{eqnarray}
Hence the $\sigma$-holomorphic\ Lagrangian\ condition for $V$ is equivalent to the condition that the vector
\[ \left ( \begin{array}{c} \sum_{i=1}^n(|\alpha_i|^2 - |\beta_i|^2 ){\rm Im}(\overline{p_i}q_i) \\ 2\sum_{i=1}^n {\rm Im}(\alpha_i\beta_i){\rm Im}(\overline{p_i}q_i) \\ -2\sum_{i=1}^n {\rm Re}(\alpha_i\beta_i){\rm Im}(\overline{p_i}q_i) \end{array} \right )\in \mathbb{R}^3 \]
is orthogonal to $\sigma'$ and $\sigma''$ for any $p_i,q_i\in\mathbb{C}$. Thus every
\[ y_i = \left ( \begin{array}{c} |\alpha_i|^2 - |\beta_i|^2 \\ 2 {\rm Im}(\alpha_i\beta_i) \\ -2 {\rm Re}(\alpha_i\beta_i) \end{array} \right ) \]
is equal to $\pm \sigma$ because $\{\sigma,\sigma',\sigma''\}$ is an orthonormal basis.
\end{proof}
\begin{prop} Let $(X,\omega,\mu)$ and $p$ be as in Proposition \ref{3.2}. Let $L\subset X$ be a $\sigma$-holomorphic\ Lagrangian\ submanifold containing $p$, and assume that there exist a sufficiently small $r >0$ and $\varepsilon_i,\varepsilon'_i = \pm 1$ such that
\begin{eqnarray} (\mu (L)-\mu(p))\cap B(r) &=& \sigma \otimes \{ x \in (\mathbf{t}^n)^*;\ \| x\| < r, \varepsilon'_i x_i \ge 0\},\nonumber\\ \mu_n (V) &=& \sigma \otimes \{ x \in (\mathbf{t}^n)^*;\ \varepsilon_i x_i \ge 0\}\nonumber \end{eqnarray}
hold, where $V= dF_p(T_p L)$ and $B(r) = \{ y\in{\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*; \| y\| <r\}$. Then $\varepsilon_i = \varepsilon'_i$ holds for every $i=1,\cdots, n$. \label{apex}
\end{prop}
\begin{proof} By the second equation of Proposition \ref{3.2}, we have $\mu (L)-\mu(p) = \mu_n(F(L))$. Since $T_0F(L) = dF_p(T_pL)$, $\mu_n (V) = \mu_n(T_0F(L))$ holds. Now we have open neighborhoods $U_0\subset F(L)$ of $0$, $U_1\subset T_0F(L)$ of $0$, and a diffeomorphism $f:U_0\to U_1$ such that $f(0)= 0$ and $df_0= {\rm id}$. Next we take a smooth map $\gamma:(-1,1) \to F(L)$ which satisfies $\gamma(0) = F(p) = 0$ and $\varepsilon'_i\mu_n^i(\gamma(t)) >0$ for $t\neq 0$. Here $\mu_n^i$ is the $i$-th component of $\mu_n$. Since $\| f(x)-x\| = \mathcal{O}(\| x\|^2)$ and $\mu_n(x + \delta x) = \mu_n(x) + \mathcal{O}(\| x\| \| \delta x\|)$ hold, we have
\begin{eqnarray} \mu_n(f\circ\gamma(t)) &=& \mu_n(\gamma(t) + \mathcal{O}(\| \gamma(t)\|^2))\nonumber\\ &=& \mu_n(\gamma(t)) + \mathcal{O}(\| \gamma(t)\|^3)\nonumber \end{eqnarray}
If we take $t$ sufficiently close to $0$, then $\| \gamma(t)\|$ is sufficiently small but $\varepsilon'_i\mu_n^i(\gamma(t)) >0$, hence $\varepsilon'_i\mu_n^i(f\circ\gamma(t))$ should be positive for small $t$, since $\mu_n$ is a quadratic polynomial. Since $\mu_n(f\circ\gamma(t)) \in \mu_n (V)$, $\varepsilon_i = \varepsilon'_i$ must hold. Since $i$ was taken arbitrarily, $\varepsilon_i = \varepsilon'_i$ holds for every $i=1,\cdots,n$.
\end{proof}
Let
\begin{eqnarray} \sigma(\theta)=(0,\cos\theta,\sin\theta) \in S^2.\nonumber \end{eqnarray}
Then every $\sigma(\theta)$-holomorphic Lagrangian submanifold is special Lagrangian, if $n\theta \in \pi \mathbb{Z}$.
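Let us include a short verification of this claim, under the definitions of Section \ref{sec2}; it is only an expanded version of the statement above. Put $\sigma'=(0,-\sin\theta,\cos\theta)$ and $\sigma''=(1,0,0)$, so that $(\sigma(\theta),\sigma',\sigma'')$ is an orthonormal basis with positive orientation. If $L$ is $\sigma(\theta)$-holomorphic\ Lagrangian, then $\omega^{\sigma''}|_L=\omega_1|_L=0$ and $\omega^{\sigma'}|_L=0$. Since
\begin{eqnarray}
\omega_2+\sqrt{-1}\omega_3 = e^{\sqrt{-1}\theta}\left( \omega^{\sigma(\theta)} + \sqrt{-1}\,\omega^{\sigma'}\right),\nonumber
\end{eqnarray}
we obtain $(\omega_2+\sqrt{-1}\omega_3)^n|_L = e^{\sqrt{-1}n\theta}(\omega^{\sigma(\theta)}|_L)^n$, whose imaginary part is $\sin(n\theta)\,(\omega^{\sigma(\theta)}|_L)^n$. Hence ${\rm Im}(\omega_2+\sqrt{-1}\omega_3)^n|_L=0$ exactly when $n\theta\in\pi\mathbb{Z}$, and then $L$ is special\ Lagrangian.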
\begin{prop} Let $n\theta_{\pm} \in \pi\mathbb{Z}$ and $V_{\pm}$ be $T^n$-invariant $\sigma(\theta_{\pm})$-holomorphic Lagrangian subspaces of $\mathbb{H}^n$ given by \begin{eqnarray} V_+ := \bigoplus_{i=1}^n V_i( \sigma(\theta_+) ),\quad V_- := \bigoplus_{i=1}^n V_i( \sigma(\theta_-) ).\nonumber \end{eqnarray} Then the characterizing angles between $V_+$ and $V_-$ are given by $(\theta_{-} - \theta_{+})/2$ with multiplicity $2n$. \label{angle} \end{prop} \begin{proof} Since $h(\sqrt{-1}/\sqrt{2},e^{\sqrt{-1} \theta_{\pm}} / \sqrt{2}) = \sigma(\theta_{\pm})$, we have \begin{eqnarray} V_{\pm}=\{(\frac{\sqrt{-1}}{\sqrt{2}}z_1, \frac{e^{\sqrt{-1} \theta_{\pm}}}{\sqrt{2}}\overline{z_1}, \cdots, \frac{\sqrt{-1}}{\sqrt{2}}z_n, \frac{e^{\sqrt{-1} \theta_{\pm}}}{\sqrt{2}}\overline{z_n}) \in \mathbb{H}^n;\ z_1,\cdots, z_n\in \mathbb{C} \}\nonumber \end{eqnarray} respectively. Put \[ A(\theta) := \frac{1}{\sqrt{2}} \left ( \begin{array}{ccc} -\sqrt{-1} & e^{-\sqrt{-1} \theta} \\ -1 & \sqrt{-1}e^{-\sqrt{-1} \theta} \end{array} \right ), \] and \[ g_+ := \left ( \begin{array}{ccc} A(\theta_{+}) & & \rm{O} \\ & \ddots & \\ \rm{O} & & A(\theta_{+}) \end{array} \right ),\quad g_- := \left ( \begin{array}{ccc} A(\theta_{-}) & & \rm{O} \\ & \ddots & \\ \rm{O} & & A(\theta_{-}) \end{array} \right ). \] Since $g_+V_+ = g_-V_- = \mathbb{R}^{2n}$ holds, the characterizing angles are the arguments of the square roots of the eigenvalues of ${}^tPP$, where $P = g_+g_-^{-1}$, by the proof of Theorem \ref{joyce}. Since \begin{eqnarray} {}^t(A(\theta_{+})A(\theta_{-})^{-1})A(\theta_{+})A(\theta_{-})^{-1} = e^{\sqrt{-1}(\theta_{-} - \theta_{+})}{\rm Id},\nonumber \end{eqnarray} the characterizing angles turn out to be $(\theta_{-} - \theta_{+})/2$ with multiplicity $2n$. \end{proof} Now we consider the case that \begin{eqnarray} (M,J,\omega,\Omega)=(X(u,\lambda),I_1,\omega_{\lambda,1},(\omega_{\lambda,2} + \sqrt{-1} \omega_{\lambda,3})^n)\nonumber \end{eqnarray} and $L=L_+\sqcup L_- $, where $L_{\pm}$ are embedded as $\sigma(\theta_{\pm})$-holomorphic Lagrangian submanifolds, respectively, for some $\theta_{\pm}\in\mathbb{R}$. Denote by $\iota: L\to X(u,\lambda)$ the immersion. Assume that the image of $L$ is a $T^n$-invariant subset of $X(u,\lambda)$, and $p$ is a fixed point of the torus action. In this subsection, we compute the characterizing angles between $(L,p_+)$ and $(L,p_-)$ in this situation. Take $F:U\to B_{\mathbb{H}^n}(\varepsilon)$ as in Proposition \ref{3.2}. Then $dF_p:T_pX(u,\lambda) \to \mathbb{H}^n$ is $T^n$-equivariant and satisfies $dF_p^*(\omega_{\mathbb{H}^n}|_0) = \omega|_p$ by the first equation in Proposition \ref{3.2} since $d\mu_p=0$. Here, a $T^n$-action on $T_pX(u,\lambda)$ is induced from the torus action on $X(u,\lambda)$ since $p$ is fixed by the action. Then $V_{\pm}:=dF_p\circ\iota_*(T_{p_{\pm}}L)$ is a $\sigma_{\pm}$-holomorphic Lagrangian subspace of $\mathbb{H}^n$, respectively. Moreover, $V_{\pm}$ are closed under the $T^n$-action. \begin{prop} Under the above setting, assume that there is a sufficiently small $r>0$ such that \begin{eqnarray} (\mu (L_\pm)-\mu(p))\cap B(r) &=& \sigma_{\pm} \otimes \{ x \in (\mathbf{t}^n)^*;\ \| x\| < r, x_i \ge 0\}\nonumber \end{eqnarray} holds, respectively. Then the characterizing angles between $(L,p_+)$ and $(L,p_-)$ are given by $(\theta_- - \theta_+)/2$ with multiplicity $2n$. 
\end{prop} \begin{proof} By combining Propositions \ref{model} and \ref{apex}, we can see that \begin{eqnarray} V_\pm = \bigoplus_{i=1}^n V_i( \sigma(\theta_\pm) )\nonumber \end{eqnarray} respectively. Thus we have the assertion by Proposition \ref{angle}. \end{proof} \section{Proof of Theorem \ref{gluing}}\label{sec5} In this section we prove Theorem \ref{gluing}. Although Theorem \ref{gluing} essentially follows from Theorem 9.7 of \cite{joyce2003special}, we need some additional argument about the quivers. Let $Q = (\mathcal{V},\mathcal{E},s,t)$ be a quiver, that is, $\mathcal{V}$ is a finite set of vertices, $\mathcal{E}$ is a finite set of directed edges, and $s,t:\mathcal{E} \to \mathcal{V}$ are maps. Here, $s(h)$ and $t(h)$ mean the source and the target of $h\in\mathcal{E}$ respectively. The quiver is said to be connected if any two vertices are connected by a path of edges. Given the quiver, we have operators \begin{eqnarray} \partial &:& \mathbb{R}^\mathcal{E} \to \mathbb{R}^\mathcal{V}\nonumber\\ \partial^* &:& \mathbb{R}^\mathcal{V} \to \mathbb{R}^\mathcal{E}\nonumber \end{eqnarray} defined by \begin{eqnarray} \partial (\sum_{h\in \mathcal{E}}A_h\cdot h) &:=& \sum_{h\in \mathcal{E}}A_h\cdot (s(h) - t(h)),\nonumber\\ \partial^*(\sum_{k\in \mathcal{V}}x_k\cdot k) &:=& \sum_{h\in \mathcal{E}}(x_{s(h)} - x_{t(h)})\cdot h.\nonumber \end{eqnarray} Here, $\mathbb{R}^\mathcal{E}$ and $\mathbb{R}^\mathcal{V}$ are the free $\mathbb{R}$-modules generated by elements of $\mathcal{E}$ and $\mathcal{V}$ respectively. Since $\partial^*$ is the adjoint of $\partial$, we have \begin{eqnarray} \mathbf{h}_0(Q) - \mathbf{h}_1(Q) = \#\mathcal{V} - \#\mathcal{E},\label{euler} \end{eqnarray} where $\mathbf{h}_0(Q) = {\rm dim\ Ker} \partial^*$ and $\mathbf{h}_1(Q) = {\rm dim\ Ker} \partial$. Note that $\mathbf{h}_0(Q)$ is equal to the number of connected components of $Q$. We need the following lemmas for the proof of Theorem \ref{gluing}. \begin{lem} Let $Q$ be as above. The set $(\mathbb{R}_{>0})^\mathcal{E}\cap {\rm Ker}(\partial)$ is nonempty if and only if $\mathcal{E}$ is covered by cycles. \label{quiver1} \end{lem} \begin{proof} Suppose that $\mathcal{E} = \bigcup_{k=1}^{N} S_k$ holds for some cycles $S_1,\cdots, S_N$. For a subset $S\subset \mathcal{E}$, define $\chi_S\in \mathbb{R}^\mathcal{E}$ by \[ (\chi_{S})_h := \left\{ \begin{array}{cc} 1 & (h\in S), \\ 0 & (h\notin S). \end{array} \right. \] Then $\sum_{k=1}^N \chi_{S_k}$ is contained in $(\mathbb{R}_{>0})^\mathcal{E}\cap {\rm Ker}(\partial)$. Conversely, assume that there exists $A = \sum_{h\in\mathcal{E}}A_h\cdot h\in (\mathbb{R}_{>0})^\mathcal{E}\cap {\rm Ker}(\partial)$, and take $h_0\in \mathcal{E}$ arbitrarily. Since $\partial(A) = 0$, we have \begin{eqnarray} \sum_{h\in s^{-1}(t(h_0))}A_h = \sum_{h\in t^{-1}(t(h_0))}A_h \ge A_{h_0}>0.\nonumber \end{eqnarray} Hence $s^{-1}(t(h_0))$ is nonempty, and we can take $h_1\in s^{-1}(t(h_0))$. By repeating this procedure, we obtain $h_0,h_1,\cdots,h_l$ so that $t(h_k)=s(h_{k+1})$ for $k=0,\cdots,l-1$. Stop this procedure when $t(h_l) = s(h_k)$ holds for some $k=0,\cdots,l$. Since $\mathcal{V}$ is finite, this procedure always stops for some $l<+\infty$. Then we have a nonempty cycle $S_0 = \{ h_k,h_{k+1},\cdots,h_l \}$. 
If $h_0$ is contained in $S_0$, then we have the assertion, hence we suppose $h_0\notin S_0$. Put $A_0:=\min_{h\in S_0}A_h >0$ and \begin{eqnarray} P_0 &:=& \{ h\in S_0 ;\ A_h = A_0\},\nonumber\\ \mathcal{E}_1 &:=& \mathcal{E}\backslash P_0.\nonumber \end{eqnarray} Then we have a new quiver $(\mathcal{V},\mathcal{E}_1,s,t)$ and the boundary operator $\partial_1:\mathbb{R}^{\mathcal{E}_1} \to \mathbb{R}^\mathcal{V}$. Now, put $A^{(1)}:=A- A_0\chi_{S_0} \in \mathbb{R}^{\mathcal{E}_1}$. Then each component of $A^{(1)}$ is positive. Moreover we can see that \begin{eqnarray} \partial_1 (A^{(1)}) &=& \sum_{h\in \mathcal{E} \backslash S_0}A_h (s(h) - t(h)) + \sum_{h\in S_0\backslash P_0} (A_h - A_0)(s(h) - t(h)) \nonumber\\ &=& \sum_{h\in \mathcal{E}}A_h (s(h) - t(h)) - \sum_{h\in S_0}A_h (s(h) - t(h))\nonumber\\ &\quad & \quad + \sum_{h\in S_0} (A_h - A_0)(s(h) - t(h)) \nonumber\\ &=& \partial (A) - \sum_{h\in S_0}A_0(s(h) - t(h)) \nonumber\\ &=& - A_0\partial (\chi_{S_0}) = 0,\nonumber \end{eqnarray} thus $A^{(1)}$ is contained in $(\mathbb{R}_{>0})^{\mathcal{E}_1}\cap {\rm Ker}(\partial_1)$. Then we can apply the above procedure to $h_0\in \mathcal{E}_1$ and we can construct $S_k$ inductively. Since $\mathcal{E}$ is finite and $\#\mathcal{E} > \#\mathcal{E}_1> \cdots $ holds, there is $k_0$ such that $h_0\in S_{k_0}$. \end{proof} \begin{lem} Let $Q=(\mathcal{V},\mathcal{E},s,t)$ be as above. Then $Q'=(\mathcal{V},\mathcal{E}\backslash \{ h\},s,t)$ satisfies either $(\mathbf{h}_0(Q'),\mathbf{h}_1(Q')) = (\mathbf{h}_0(Q) + 1,\mathbf{h}_1(Q))$ or $(\mathbf{h}_0(Q'),\mathbf{h}_1(Q')) = (\mathbf{h}_0(Q),\mathbf{h}_1(Q) - 1)$ for any $h\in \mathcal{E}$. \label{quiver2} \end{lem} \begin{proof} Put \begin{eqnarray} \mathcal{E}_1 &:=& \{ h\in \mathcal{E}; A_h = 0\ {\rm for\ any\ }A\in {\rm Ker}(\partial)\},\nonumber\\ \mathcal{E}_2 &:=& \mathcal{E}\backslash \mathcal{E}_1.\nonumber \end{eqnarray} First of all, we show that there exists $A\in (\mathbb{R}_{\neq 0})^{\mathcal{E}_2}\cap {\rm Ker}(\partial_2)$, where $\partial_2:\mathbb{R}^{\mathcal{E}_2} \to \mathbb{R}^\mathcal{V}$ is the restriction of $\partial$ to $\mathbb{R}^{\mathcal{E}_2} \subset \mathbb{R}^\mathcal{E}$. By the definition of $\mathcal{E}_2$, it is easy to see that ${\rm Ker}(\partial_2) = {\rm Ker}(\partial)$ holds, and we can take $A^h=\sum_{h'\in \mathcal{E}_2}A^h_{h'}\cdot h'\in {\rm Ker}(\partial_2)$ for every $h\in \mathcal{E}_2$ so that $A^h_h \neq 0$. Since $\mathcal{E}_2$ is a finite set, we may write $\mathcal{E}_2 = \{ 1,\cdots, \#\mathcal{E}_2\}$. Let $h_1$ be the minimum number so that $A^1_{h_1} = 0$. Then put $B^2:= A^1 + a_1A^{h_1}$ for some $a_1\neq 0$. Choose $a_1$ sufficiently close to $0$ so that $B^2_h\neq 0$ for all $h\le h_1$. By defining $B^k$ inductively, finally we obtain $A = B^N\in (\mathbb{R}_{\neq 0})^{\mathcal{E}_2}\cap {\rm Ker}(\partial_2)$ for some $N$. Since $\mathbf{h}_i(Q)$ is independent of the orientation of each edge in $\mathcal{E}$, we can replace $h\in \mathcal{E}_2$ by the edge with the opposite orientation if $A_h<0$. Consequently, we may suppose $A_h >0$ for any $h\in \mathcal{E}_2$ without loss of generality. Hence $\mathcal{E}_2$ is covered by cycles by Lemma \ref{quiver1}. Next we consider $Q'=(\mathcal{V},\mathcal{E}'=\mathcal{E}\backslash \{ h\},s,t)$. Let $\partial':=\partial|_{\mathcal{E}'}$ and $(\partial')^*:\mathbb{R}^{\mathcal{V}}\to\mathbb{R}^{\mathcal{E}'}$ be the adjoint operator. If $h\in \mathcal{E}_1$, then we can see ${\rm Ker}(\partial) = {\rm Ker}(\partial')$. 
In this case $(\mathbf{h}_0(Q'),\mathbf{h}_1(Q')) = (\mathbf{h}_0(Q) + 1,\mathbf{h}_1(Q))$ holds by the equation (\ref{euler}). If $h\in \mathcal{E}_2$, then $h$ is contained in a cycle $S\subset \mathcal{E}_2$. Then $s(h)$ and $t(h)$ are still connected in $\mathcal{E}'$, hence the number of connected components of $Q'$ is equal to that of $Q$. Thus we have $(\mathbf{h}_0(Q'),\mathbf{h}_1(Q')) = (\mathbf{h}_0(Q),\mathbf{h}_1(Q) - 1)$. \end{proof} Let $L_\alpha$ be a compact connected smooth special\ Lagrangian\ submanifold of the Calabi-Yau manifold $(M,J,\omega,\Omega)$ with ${\rm dim}_{\mathbb{C}}M=m$ for every $\alpha\in\mathcal{V}$. For every $h\in\mathcal{E}$, suppose $L_{s(h)}$ and $L_{t(h)}$ intersect transversely at $p_h\in L_{s(h)} \cap L_{t(h)}$, where $p_h$ is an intersection point of type $1$. Assume that $p_h\neq p_{h'}$ if $h\neq h'$, and assume that $\bigcup_{\alpha\in\mathcal{V}}L_\alpha\backslash\{ p_h;h\in\mathcal{E}\}$ is embedded in $M$. Let $L_{Q}$ be the differentiable manifold obtained by taking the connected sum of $L_{s(h)}$ and $L_{t(h)}$ at $p_h$ for every $h\in \mathcal{E}$. By Theorem 9.7 of \cite{joyce2003special}, if $(\mathbb{R}_{>0})^\mathcal{E}\cap {\rm Ker}(\partial)$ is nonempty, there exists a family of compact smooth special\ Lagrangian\ submanifolds $\{\tilde{L}_t\}_{0<t<\delta}$ which converges to $\bigcup_{\alpha\in\mathcal{V}}L_\alpha$ as $t\to 0$ in the sense of current. Here, $\tilde{L}_t$ is diffeomorphic to $L_{Q}$. By Lemma \ref{quiver1}, the assumption that $(\mathbb{R}_{>0})^\mathcal{E}\cap {\rm Ker}(\partial)$ is nonempty can be replaced by the assumption that $\mathcal{E}$ is covered by cycles. Consequently, the proof of Theorem \ref{gluing} is completed by the next proposition. \begin{prop} If $Q=(\mathcal{V},\mathcal{E},s,t)$ is a connected quiver, then $L_{Q}$ is diffeomorphic to \begin{eqnarray} L_1\# L_2\# \cdots \# L_A\# N (S^1\times S^{m-1}),\nonumber \end{eqnarray} where $\mathcal{V}=\{ 1,\cdots, A\}$ and $N={\rm dim}\ {\rm Ker}(\partial)$, and the orientation of each $L_\alpha$ is determined by ${\rm Re}\Omega|_{L_\alpha}$. \label{topology} \end{prop} \begin{proof} Let $Q=(\mathcal{V},\mathcal{E},s,t)$ be a connected quiver and $Q'=(\mathcal{V},\mathcal{E}',s|_{\mathcal{E}'},t|_{\mathcal{E}'})$, where $\mathcal{E}'=\mathcal{E}\backslash \{ h\}$. Let $\mathcal{E}_1,\mathcal{E}_2$ be as in the proof of Lemma \ref{quiver2}. If $h\in \mathcal{E}_1$, then the quiver $Q'$ consists of two connected components $Q_1 = (\mathcal{W}_1,\mathcal{F}_1,s|_{\mathcal{F}_1},t|_{\mathcal{F}_1})$ and $Q_2 = (\mathcal{W}_2,\mathcal{F}_2,s|_{\mathcal{F}_2},t|_{\mathcal{F}_2})$, where $\mathcal{V} = \mathcal{W}_1\sqcup \mathcal{W}_2$ and $\mathcal{F}_i = \mathcal{E}'\cap (s^{-1}(\mathcal{W}_i)\cup t^{-1}(\mathcal{W}_i))$. Then we can see that $L_Q = L_{Q_1}\# L_{Q_2}$. If $h\in \mathcal{E}_2$, then $Q'=(\mathcal{V},\mathcal{E}',s|_{\mathcal{E}'},t|_{\mathcal{E}'})$ is also connected, hence $L_Q$ is constructed from $L_{Q'}$ in the following way. Take any distinct points $p_+,p_-\in L_{Q'}$ and neighborhoods $B_{p_\pm}\subset L_{Q'}$ so that $B_{p_+}\cap B_{p_-}$ is empty and $B_{p_\pm}$ are diffeomorphic to the Euclidean unit ball. Now we have polar coordinates $(r_\pm,\Theta_\pm) \in B_{p_\pm}\backslash \{ p_\pm\}$, where $r_\pm \in (0,1)$ is the distance from $p_\pm$, and $\Theta_\pm\in S^{m-1}$. By taking a diffeomorphism $\psi: (r,\Theta) \mapsto (1-r,\varphi (\Theta))$, we can glue $B_{p_+}\backslash \{ p_+\}$ and $B_{p_-}\backslash \{ p_-\}$, and thus obtain $L_Q$. 
Here, $\varphi:S^{m-1} \to S^{m-1}$ is a diffeomorphism which reverses the orientation. Note that the differentiable structure of $L_Q$ is independent of the choice of $p_\pm$, $B_{p_\pm}$ and $\varphi$. Therefore we may suppose that $p_+$ and $p_-$ are contained in an open subset $U\subset L_{Q'}$, where $U= B(0,10)$ and $B_{p_\pm} = B(\pm 5, 1)$, respectively. Here $B(x,r) = \{ x'\in\mathbb{R}^m;\ \| x' - x\| < r\}$. Then $(U\backslash \{ p_+,p_-\} )/\psi$ is diffeomorphic to $(S^1\times S^{m-1})\backslash \{ {\rm pt.}\}$, hence $L_Q$ is diffeomorphic to $L_{Q'}\# (S^1\times S^{m-1})$. By repeating these two types of procedures, we finally obtain a quiver $Q''=(\mathcal{V},\emptyset,s,t)$, and we have $(\mathbf{h}_0(Q''),\mathbf{h}_1(Q'')) = (\#\mathcal{V},0)$. By counting $(\mathbf{h}_0,\mathbf{h}_1)$ at each step, it turns out that we have to follow the former procedure $\#\mathcal{V} -1$ times and the latter procedure $\mathbf{h}_1(Q)$ times until we reach $Q''$. Therefore we obtain the assertion by considering the procedures inductively. \end{proof} \section{The construction of compact special Lagrangian submanifolds in $X(u,\lambda)$}\label{sec6} Here we construct examples of compact special Lagrangian submanifolds in $X(u,\lambda)$, using Theorem \ref{gluing}. We construct a one-parameter family of compact special Lagrangian submanifolds which degenerates to the union $\bigcup_i L_i$ of some $\sigma_i$-holomorphic Lagrangian submanifolds $L_i$ in Subsection \ref{5.1}. Let $X(u,\lambda)$ be a smooth toric hyper-K\"ahler\ manifold. \begin{definition} {\rm We call $\triangle \subset {\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*$} a $\sigma$-Delzant polytope {\rm if it is a compact convex set in \begin{eqnarray} V(q,\sigma):=q + \sigma\otimes (\mathbf{t}^n)^* \nonumber \end{eqnarray} for some $q\in {\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*$, and the boundary of $\triangle$ in $V(q,\sigma)$ satisfies \begin{eqnarray} \partial \triangle = \triangle \cap \bigg( \bigcup_{k=1}^NH_k\bigg).\nonumber \end{eqnarray} } \end{definition} It is easy to see that $L_{\triangle} := \mu_{\lambda}^{-1}(\triangle)$ is $\sigma$-holomorphic\ Lagrangian\ if it is smooth. Since the $T^n$-action preserves $L_{\triangle}$, we may regard $(L_{\triangle},I_{\lambda,1}^\sigma|_{L_{\triangle}})$ as a toric variety, equipped with a K\"ahler form $\omega_{\lambda,1}^\sigma|_{L_{\triangle}}$ and a K\"ahler moment map $\mu_{\lambda,1}^\sigma: L_{\triangle} \to (\mathbf{t}^n)^*$. In particular, $L_{\triangle}$ is an oriented manifold whose orientation is induced naturally from $I_{\lambda,1}^\sigma$. We denote by $\overline{L}_{\triangle}$ the oriented manifold diffeomorphic to $L_{\triangle}$ with the opposite orientation. Since $X(u,\lambda)$ is assumed to be smooth, $u$ and $\lambda$ satisfy $(*1)(*2)$ of Theorem \ref{smooth}; then it is easy to see that $\triangle$ is a Delzant polytope in the ordinary sense, and consequently $L_{\triangle}$ turns out to be a smooth toric variety. \begin{definition} {\rm For $\alpha=0,1$, let $\triangle_\alpha$ be a $\sigma(\theta_\alpha)$-Delzant polytope. 
Put \begin{eqnarray} Q(r) := \{ (t_1,\cdots, t_n)\in (\mathbf{t}^n)^*; \ t_1\ge 0,\cdots, t_n\ge 0, t_1^2+\cdots + t_n^2 <r^2\}.\nonumber \end{eqnarray} Then $\triangle_0$ and $\triangle_1$ are said to be} intersecting standardly with angle $\theta$ {\rm if $\triangle_0 \cap \triangle_1 = \{ q\}$ and there are $\psi\in GL_n\mathbb{Z}$, $\theta_0\in\mathbb{R}$ and sufficiently small $r >0$ such that \begin{eqnarray} \psi(\triangle_0 -q)\cap B(r) &=& \sigma(\theta_0)\otimes Q(r),\nonumber\\ \psi(\triangle_1 -q)\cap B(r) &=& \sigma(\theta_0 + \theta)\otimes Q(r),\nonumber \end{eqnarray} where $\psi : (\mathbb{Z}^n)^* \to (\mathbb{Z}^n)^*$ extends to ${\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^* \to {\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*$ naturally. } \label{def.intersection} \end{definition} For $m\in\mathbb{Z}_{>0}$, let \begin{eqnarray} d_m(l_1,l_2) := \min \{ |l_1 - l_2 + mk|;\ k\in\mathbb{Z} \},\nonumber \end{eqnarray} for $l_1,l_2\in \mathbb{Z}$, which induces a distance function on $\mathbb{Z} /m\mathbb{Z}$. The main result of this article is described as follows. \begin{thm} Let $X(u,\lambda)$ be a smooth toric hyper-K\"ahler\ manifold, and $\triangle_k$ be a $\sigma(k\pi/n)$-Delzant polytope for each $k = 1,\cdots, 2n$. Assume that $\triangle_k \cap \triangle_l = \emptyset$ if $d_{2n}(k,l)>1$, and that $\triangle_k$ and $\triangle_{k+1}$ intersect standardly with angle $\pi/n$. Then there exists a family of compact special Lagrangian submanifolds $\{ \tilde{L}_t \}_{0<t<\delta}$ which converges to $\bigcup_{k=1}^{2n}L_{\triangle_k}$ as $t\to 0$ in the sense of current. Moreover, $\tilde{L}_t$ is diffeomorphic to $L_{\triangle_1}\# \overline{L}_{\triangle_2}\# \cdots \# L_{\triangle_{2n-1}}\# \overline{L}_{\triangle_{2n}}\# (S^1\times S^{2n-1})$. \label{main} \end{thm} \begin{proof} We apply Theorem \ref{gluing}. By combining Propositions \ref{model} and \ref{angle}, we can see that the characterizing angles between $L_{\triangle_k}$ and $L_{\triangle_{k+1}}$ are $\frac{\pi}{2n}$ with multiplicity $2n$. Then the intersection point $L_{\triangle_k}\cap L_{\triangle_{k+1}}$ is of type $1$. Next we consider the topology of $\tilde{L}_t$. When we take a connected sum, we should determine the orientation of $L_{\triangle_k}$ uniformly by the calibration ${\rm Re}\Omega$, where $\Omega=(\omega_{\lambda,2} + \sqrt{-1}\omega_{\lambda,3})^n$. Now $\Omega|_{L_{\triangle_k}} = (-1)^k(\omega_1^{\sigma(k\pi/n)})^n|_{L_{\triangle_k}}$ holds, therefore $\tilde{L}_t$ is diffeomorphic to \begin{eqnarray} L_{\triangle_1}\# \overline{L}_{\triangle_2}\# \cdots \# L_{\triangle_{2n-1}}\# \overline{L}_{\triangle_{2n}}\# (S^1\times S^{2n-1}).\nonumber \end{eqnarray} \end{proof} \subsection{Example $(1)$}\label{5.1} Let \begin{eqnarray} u = (I_n\ I_n\ \cdots\ I_n) \in {\rm Hom}(\mathbb{Z}^{2n^2}, \mathbb{Z}^n) \nonumber \end{eqnarray} and $\lambda = (\lambda_{1,1},\cdots,\lambda_{1,n},\lambda_{2,1},\cdots,\lambda_{2,n},\cdots,\lambda_{2n,1},\cdots,\lambda_{2n,n})$, where $I_n$ is the identity matrix. Then $X(u,\lambda)$ is smooth if $\lambda_{k,\alpha} = \lambda_{l,\alpha}$ holds only if $k=l$. We assume this condition and that $-\lambda_{k,\alpha} = (0, \rho_{k,\alpha}) \in \{ 0\} \oplus \mathbb{C}$ holds for every $k,\alpha$, where ${\rm Im}\mathbb{H}$ is identified with $\mathbb{R} \oplus \mathbb{C}$. Moreover we suppose that \begin{eqnarray} \arg (\rho_{k + 1,\alpha} - \rho_{k,\alpha} ) = \theta_0 + \frac{n+1}{n}k\pi \end{eqnarray} for some $\theta_0 \in \mathbb{R}$. 
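As an aside, the matrix identity used in the proof of Proposition \ref{angle}, and hence the computation of the characterizing angles in the proof of Theorem \ref{main}, can be confirmed numerically. The following Python snippet is only such a check; the sample angles are ours.
\begin{verbatim}
import numpy as np

def A(theta):
    return np.array([[-1j, np.exp(-1j*theta)],
                     [-1.0, 1j*np.exp(-1j*theta)]]) / np.sqrt(2)

t_plus, t_minus = 0.37, 1.92      # sample values of theta_+ and theta_-
P = A(t_plus) @ np.linalg.inv(A(t_minus))
# transpose(P) P should equal e^{i(theta_- - theta_+)} Id
print(np.allclose(P.T @ P, np.exp(1j*(t_minus - t_plus)) * np.eye(2)))   # True
\end{verbatim}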
Note that $X(u,\lambda)$ is a direct product of multi-Eguchi-Hanson spaces. Next we put $q_k:= -(\lambda_{k,1},\cdots,\lambda_{k,n})\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^{n})^*$, and \begin{eqnarray} \square_k &:=& q_k + (0, e^{\sqrt{-1}(\theta_0 + \frac{n+1}{n}k\pi) })\otimes \square(r_{k,1},\cdots, r_{k,n})\nonumber\\ &\subset& V(q_k, \sigma(\theta_0 + \frac{n+1}{n}k\pi)), \nonumber \end{eqnarray} where $r_{k,\alpha} = |\rho_{k + 1,\alpha} - \rho_{k,\alpha}|$, and a hyperrectangle $\square(r_{1},\cdots, r_{n}) \subset (\mathbf{t}^{n})^*\cong \mathbb{R}^n$ is defined by \begin{eqnarray} \square(r_{1},\cdots, r_{n}) := \{ (t_1,\cdots, t_n) \in \mathbb{R}^n ;\ 0\le t_1 \le r_1,\ \cdots,\ 0\le t_n \le r_n\}.\nonumber \end{eqnarray} Let $H_{k,\alpha} = \{ y\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^{n})^* ;\ y_\alpha + \lambda_{k,\alpha} = 0 \}$. Then it is easy to see that $\square_k$ is compact, convex and \begin{eqnarray} \partial \square_k \subset \bigcup_{\alpha = 1}^{n}(H_{k,\alpha}\cup H_{k+1,\alpha}).\nonumber \end{eqnarray} Therefore, $\square_k$ is a $\sigma(\theta_0 + (n+1)k\pi/n)$-Delzant polytope if \begin{eqnarray} \square_k \cap \bigcup_{l, \alpha}H_{l,\alpha} \subset \partial \square_k \label{inclusion} \end{eqnarray} holds. Next we study the intersection of $\square_{k-1}$ and $\square_k$. We can check that $\square_{k-1} \cap \square_k = \{ q_k\}$ and every element in $\square_{k-1}$ satisfies \begin{eqnarray} &\ & q_{k-1} + (0, e^{\sqrt{-1}(\theta_0 + \frac{n+1}{n}(k-1)\pi )})\otimes (t_1,\cdots, t_n) \nonumber\\ &=& q_{k-1} + (0, e^{\sqrt{-1}(\theta_0 + \frac{n+1}{n}(k-1)\pi )})\otimes (r_{k-1,1},\cdots, r_{k-1,n}) \nonumber\\ &\ & - (0, e^{\sqrt{-1}(\theta_0 + \frac{n+1}{n}(k-1)\pi )})\otimes (r_{k-1,1} - t_1,\cdots, r_{k-1,n}-t_n)\nonumber\\ &=& q_{k-1} + (\lambda_{k-1,1} - \lambda_{k,1},\cdots, \lambda_{k-1,n} - \lambda_{k,n}) \nonumber\\ &\ & + (0, e^{\sqrt{-1}(\theta_0 + (\frac{n+1}{n}(k-1)+1)\pi) })\otimes (r_{k-1,1} - t_1,\cdots, r_{k-1,n}-t_n)\nonumber\\ &=& q_k + (0, e^{\sqrt{-1}(\theta_0 + \frac{(n+1)k -1}{n}\pi) })\otimes (r_{k-1,1} - t_1,\cdots, r_{k-1,n}-t_n).\nonumber \end{eqnarray} Therefore, $\square_{k-1}$ and $\square_k$ intersect standardly with angle $\pi/n$. Of course, the same argument goes through for $\square_{2n}$ and $\square_1$. To apply Theorem \ref{main}, it suffices to show that $\square_k \cap \square_l$ is empty if $d_{2n}(k,l)>1$. However, this condition does not hold in general, so we need to choose the $\rho_{k,\alpha}$ carefully. Unfortunately, the author cannot find a good criterion for $\rho_{k,\alpha}$ satisfying the above condition. Here we show one example of $\rho_{k,\alpha}$ which satisfies the assumption of Theorem \ref{main}. First of all, take $a_1,\cdots,a_n\in \mathbb{R}$ so that every $a_m$ is larger than $1$, and put \begin{eqnarray} \rho_{2m-1} &:=& e^{\sqrt{-1}\frac{2(m-1)}{n}\pi} + a_m(e^{\sqrt{-1}\frac{2m}{n}\pi} - e^{\sqrt{-1}\frac{2(m-1)}{n}\pi}),\nonumber\\ \rho_{2m} &:=& e^{\sqrt{-1}\frac{2(m+1)}{n}\pi} + a_m(e^{\sqrt{-1}\frac{2m}{n}\pi} - e^{\sqrt{-1}\frac{2(m+1)}{n}\pi})\nonumber \end{eqnarray} for each $m=1,\cdots, n$. Denote by $\mathbf{l}_k\subset \mathbb{C}$ the segment connecting $\rho_k$ and $\rho_{k+1}$. Then we can easily see that $\mathbf{l}_{k-1} \cap \mathbf{l}_{k} = \{ \rho_k \}$ and \begin{eqnarray} \arg(\rho_{k+1} -\rho_{k}) = \frac{n-2}{2n}\pi + \frac{n+1}{n}k\pi.\nonumber \end{eqnarray} Note that we can regard $k\in\mathbb{Z} /2n \mathbb{Z}$ and $m\in \mathbb{Z}/ n\mathbb{Z}$. 
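The following Python snippet numerically checks the key property of this choice of $\rho_k$, namely that the directions of consecutive segments $\mathbf{l}_k$ differ by the angle $(n+1)\pi/n$ modulo $2\pi$, which is the form of the condition required at the beginning of this subsection. It does not check the precise value of the constant $\theta_0$, and the sample values of $n$ and $a_m$ are ours.
\begin{verbatim}
import numpy as np

n = 4
a = 1 + 0.05*np.arange(1, n + 1)               # a_m > 1 with a_m - 1 small
w = lambda k: np.exp(2j*np.pi*k/n)              # e^{2 pi i k / n}
rho = []
for m in range(1, n + 1):
    rho.append(w(m - 1) + a[m - 1]*(w(m) - w(m - 1)))   # rho_{2m-1}
    rho.append(w(m + 1) + a[m - 1]*(w(m) - w(m + 1)))   # rho_{2m}

# directions of the segments l_k = [rho_k, rho_{k+1}], indices taken mod 2n
d = [rho[(k + 1) % (2*n)] - rho[k] for k in range(2*n)]
target = np.exp(1j*(n + 1)*np.pi/n)
ratios = [d[(k + 1) % (2*n)]/d[k] for k in range(2*n)]
print(all(abs(r/abs(r) - target) < 1e-12 for r in ratios))   # True
\end{verbatim}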
\begin{prop} Let $\rho_1,\cdots,\rho_{2n}$ be as above. If every $a_k-1$ is sufficiently small, then $\mathbf{l}_{2m-1} \cap \mathbf{l}_{k}$ are empty for all $m=1,\cdots,n$ and $k=1,\cdots,2n$ with $d_{2n}(k,2m-1)>1$. \label{disjoint} \end{prop} \begin{proof} Let ${\rm Re}:\mathbb{C} \to \mathbb{R}$ be the projection given by taking the real part. It suffices to show that ${\rm Re}(\mathbf{l}_{2m-1}e^{-\sqrt{-1}\frac{2m}{n}\pi}) \cap {\rm Re}(\mathbf{l}_{k}e^{-\sqrt{-1}\frac{2m}{n}\pi})$ is empty under the given assumptions. Let $\rho_{2m-1} + t(\rho_{2m}-\rho_{2m-1})\in \mathbf{l}_{2m-1}$. Then we can check that \begin{eqnarray} {\rm Re}(\rho_{2m-1}e^{-\sqrt{-1}\frac{2m}{n}\pi} + t(\rho_{2m}-\rho_{2m-1})e^{-\sqrt{-1}\frac{2m}{n}\pi}) = (1-a_m)\cos \frac{2\pi}{n} + a_m,\nonumber \end{eqnarray} which implies ${\rm Re}(\mathbf{l}_{2m-1}e^{-\sqrt{-1}\frac{2m}{n}\pi}) = \{ -(a_m-1)\cos \frac{2\pi}{n} + a_m\}$. If we can see that \begin{eqnarray} {\rm Re}(\rho_k e^{-\sqrt{-1}\frac{2m}{n}\pi}) < -(a_m-1)\cos \frac{2\pi}{n} + a_m\label{ineq1} \end{eqnarray} for all $k$, we have the assertion. Since \begin{eqnarray} {\rm Re}(\rho_{2l} e^{-\sqrt{-1}\frac{2m}{n}\pi}) &=& -(a_{l}-1)\cos(\frac{2(1 + l-m)}{n}\pi) \nonumber\\ &\ &\quad + a_{l}\cos(\frac{2(l-m)}{n}\pi),\nonumber\\ {\rm Re}(\rho_{2l'-1} e^{-\sqrt{-1}\frac{2m}{n}\pi}) &=& -(a_{l'} -1)\cos(\frac{2(1 - l'+m)}{n}\pi) \nonumber\\ &\ &\quad + a_{l'}\cos(\frac{2(l'-m)}{n}\pi)\nonumber \end{eqnarray} and $d_{2n}(2l,2m)>1$, $d_{2n}(2l'-1,2m)>1$ holds, we have $\cos(2(l-m)\pi /n) \le \cos(2\pi / n)$ and $\cos(2(l'-m)\pi /n) \le \cos(2\pi / n)$. By using the inequality $\cos(\frac{2(1 + l-m)}{n}\pi) \ge -1$ and $\cos(\frac{2(1 - l'+m)}{n}\pi) \ge -1$, we obtain \begin{eqnarray} {\rm Re}(\rho_{2l} e^{-\sqrt{-1}\frac{2m}{n}\pi}) &\le& (a_{l}-1) + a_{l}\cos\frac{2\pi}{n} \nonumber\\ &=&(a_{l}-1)(1+ \cos\frac{2\pi}{n}) + \cos\frac{2\pi}{n},\nonumber\\ {\rm Re}(\rho_{2l'-1} e^{-\sqrt{-1}\frac{2m}{n}\pi}) &\le& (a_{l'} -1) + a_{l'}\cos\frac{2\pi}{n}\nonumber\\ &=& (a_{l'}-1)(1+ \cos\frac{2\pi}{n}) + \cos\frac{2\pi}{n}\nonumber \end{eqnarray} Now, if we assume $a_l-1<(1-\cos\frac{2\pi}{n})/(1+ \cos\frac{2\pi}{n})$, then the left-hand-side of (\ref{ineq1}) is less than $1$. Since \begin{eqnarray} -(a_m-1)\cos \frac{2\pi}{n} + a_m = (a_m-1)(1- \cos \frac{2\pi}{n}) + 1,\nonumber \end{eqnarray} the right-hand-side of (\ref{ineq1}) is always larger than $1$ and we obtain the inequality (\ref{ineq1}). \end{proof} Now, divide $\{ 1,\cdots, n\}$ into two nonempty sets \begin{eqnarray} \{ 1,\cdots, n\} = A_+\sqcup A_-, \nonumber \end{eqnarray} and define $\rho_{k,\alpha}$ by $\rho_{k,\alpha} = \rho_k$ if $\alpha \in A_+$, and $\rho_{k,\alpha} = \rho_ke^{\sqrt{-1}\pi/n}$ if $\alpha \in A_-$. Here, we suppose $a_k-1$ are sufficiently small so that Proposition \ref{disjoint} holds. Denote by $\mathbf{l}_{k,\alpha}$ the segment in ${\rm Im}\mathbb{H}$ connecting $(0,\rho_{k,\alpha})$ and $(0,\rho_{k+1,\alpha})$. Then we can see that \begin{eqnarray} \square_k = q_k + \mathbf{l}_{k,1}\times \cdots \times \mathbf{l}_{k,n},\nonumber \end{eqnarray} and $\square_k$ satisfies (\ref{inclusion}). \begin{prop} Let $\square_1,\cdots,\square_{2n}$ be as above. Then $\square_k \cap \square_l$ is empty if $d_{2n}(k,l)>1$. \end{prop} \begin{proof} Suppose there is an element $\hat{x}\in \square_k \cap \square_l$. 
Then $\hat{x}$ can be written as $\hat{x}=(0,x)$ for some $x = (x_1,\cdots,x_n)\in \mathbb{C}^n$, and \begin{eqnarray} x_\alpha = \rho_{k,\alpha} + t_{k,\alpha} (\rho_{k+1,\alpha} - \rho_{k,\alpha}) = \rho_{l,\alpha} + t_{l,\alpha} (\rho_{l+1,\alpha} - \rho_{l,\alpha}) \label{lineareq} \end{eqnarray} holds for some $0\le t_{k,\alpha},t_{l,\alpha}\le 1$. Now, assume that $l$ is odd. Then (\ref{lineareq}) has no solution for $\alpha \in A_+$ by Proposition \ref{disjoint}. Similarly, if $l$ is supposed to be even, then (\ref{lineareq}) has no solution for $\alpha \in A_-$. Hence $\square_k \cap \square_l$ should be empty. \end{proof} Since $L_{\square_k} = (\mathbb{P}^1)^n$, and there is an orientation preserving diffeomorphism between $(\mathbb{P}^1)^n$ and $(\overline{\mathbb{P}^1})^n$, we obtain the following example. \begin{thm} Let $X(u,\lambda)$ be as above. Then there exists a compact smooth special\ Lagrangian\ submanifold diffeomorphic to \begin{eqnarray} 2n(\mathbb{P}^1)^n \# (S^1\times S^{2n-1})\nonumber \end{eqnarray} embedded in $X(u,\lambda)$. \end{thm} \subsection{Example $(2)$}\label{5.2} Here we construct one more example in an $8$ dimensional toric hyper-K\"ahler\ manifolds. Let \[ u := \left ( \begin{array}{ccccc} 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 \end{array} \right ) \in {\rm Hom}(\mathbb{Z}^5, \mathbb{Z}^2), \] and $\lambda=(\lambda_0,\cdots,\lambda_4)\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^5)^*$. Put \begin{eqnarray} q_1:=-(\lambda_1,\lambda_2),\ q_2:=-(\lambda_3,\lambda_2),\ q_3:=-(\lambda_3,\lambda_4),\ q_4:=-(\lambda_1,\lambda_4)\nonumber \end{eqnarray} and $\triangle_k := q_k + \tau_k\otimes \triangle$ for $k=1,\cdots,4$, where \begin{eqnarray} \tau_1 &:=& \lambda_1 + \lambda_2 -\lambda_0, \nonumber\\ \tau_2 &:=& \lambda_3 + \lambda_2 -\lambda_0, \nonumber\\ \tau_3 &:=& \lambda_3 + \lambda_4 -\lambda_0, \nonumber\\ \tau_4 &:=& \lambda_1 + \lambda_4 -\lambda_0, \nonumber \end{eqnarray} and \begin{eqnarray} \triangle := \{ (t_1,t_2) \in (\mathbf{t}^2)^*\cong \mathbb{R}^2; \ t_1\ge 0,\ t_2\ge 0,\ t_1+t_2 \le 1\}.\nonumber \end{eqnarray} If we assume that $\tau_1 = (0,\sqrt{-1}r_1)$, $\tau_2 = (0,-r_2)$, $\tau_3 = (0,-\sqrt{-1}r_1)$, $\tau_4 = (0,r_2)\in \mathbb{R} \oplus \mathbb{C}$, where $r_1,r_2>0$, then $\triangle_k$ is a $\sigma(k\pi/2)$-Delzant polytope. For example, put $\lambda_0 = \lambda_1 = 0$, $\lambda_2 = (0,\sqrt{-1}r_1)$, $\lambda_3 = (0,-r_2 - \sqrt{-1}r_1)$ and $\lambda_4 = (0,r_2)$. \begin{prop} Under the above setting, $\triangle_k$ and $\triangle_{k+1}$ are intersecting standardly with angle $\pi/2$ for every $k=1,\cdots,4$. Here we suppose $\triangle_5 = \triangle_1$. \end{prop} \begin{proof} We check the case of $k=1$, because other cases can be shown similarly. Let $q_k + \tau_k\otimes (t_1,t_2) \in \triangle_k$. Then we have \begin{eqnarray} q_1 + \tau_1\otimes (t_1,t_2) &=& q_1 + \tau_1\otimes (1,0) + \tau_1\otimes (t_1-1,t_2)\nonumber\\ &=& (\lambda_2-\lambda_0,-\lambda_2) + \sigma(\frac{\pi}{2}) \otimes r_1(t_1-1,t_2),\nonumber\\ q_2 + \tau_2\otimes (t_1,t_2) &=& q_2 + \tau_2\otimes (1,0) + \tau_2\otimes (t_1-1,t_2)\nonumber\\ &=& (\lambda_2-\lambda_0,-\lambda_2) + \sigma(\pi)\otimes r_2(t_1-1,t_2),\nonumber \end{eqnarray} therefore $\triangle_1$ and $\triangle_2$ are intersecting standardly with angle $\pi/2$. Note that we have to take \[ \psi = \left ( \begin{array}{cc} -1 & 0\\ 0 & 1 \end{array} \right ) \] in Definition \ref{def.intersection}, since $t_1 - 1$ is nonpositive in this case. 
\end{proof} \begin{prop} Under the above setting, $\triangle_1 \cap \triangle_3$ and $\triangle_2 \cap \triangle_4$ are empty sets. \end{prop} \begin{proof} Every $x\in \triangle_1 \cap \triangle_3$ can be written as \begin{eqnarray} x = q_1 + \tau_1\otimes (t_1,t_2) = q_3 + \tau_3\otimes (s_1,s_2)\nonumber \end{eqnarray} for some $0\le t_1,t_2,s_1,s_2\le 1$. The first component of the above equation gives \begin{eqnarray} t_1\tau_1 - s_1\tau_3 = \lambda_1 - \lambda_3 = \tau_1 - \tau_2.\nonumber \end{eqnarray} By substituting $\tau_1 = (0,\sqrt{-1}r_1)$, $\tau_2 = (0,-r_2)$ and $\tau_3 = (0,-\sqrt{-1}r_1)$, we obtain \begin{eqnarray} (t_1 + s_1)\sqrt{-1}r_1 = r_2 + \sqrt{-1}r_1,\nonumber \end{eqnarray} which has no solution $(t_1,s_1) \in \mathbb{R}^2$. $\triangle_2 \cap \triangle_4 = \emptyset$ also follows by the same argument. \end{proof} Thus we obtain the following example. \begin{thm} Let $X(u,\lambda)$ be as above. Then there exists a compact smooth special\ Lagrangian\ submanifold diffeomorphic to $2\mathbb{P}^2 \# 2\overline{\mathbb{P}^2} \# (S^1\times S^3)$ embedded in $X(u,\lambda)$. \end{thm} \subsection{Example $(3)$}\label{5.3} We can describe a generalization of Theorem \ref{main} in the more complicated situation. \begin{thm} Let $(\mathcal{V},\mathcal{E},s,t)$ be a quiver, $X(u,\lambda)$ be a smooth toric hyper-K\"ahler\ manifold, and $\{\triangle_k \}_{k\in \mathcal{V}}$ be a family of subsets of ${\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*$. Assume that every $\triangle_k$ is a $\sigma(\theta_k)$-Delzant polytope for some $\theta_k\in\mathbb{R}$, $\triangle_{s(h)}$ and $\triangle_{t(h)}$ intersecting standardly with angle $\pi/n$ if $h \in\mathcal{E}$, otherwise $\triangle_{k_1} \cap \triangle_{k_2} = \emptyset$ or $k_1=k_2$. Moreover, suppose that $\mathcal{E}$ is covered by cycles. Then there exists a family of compact special\ Lagrangian\ submanifolds $\{ \tilde{L}_t\}_{0<t<\delta}$ which converges to $\bigcup_{k\in \mathcal{V}} L_{\triangle_k}$ in the sense of current. \label{graph} \end{thm} \begin{proof} The proof is same as that of Theorem \ref{main}. \end{proof} Fix positive real numbers $a,b,c,a_m$ for $m=1,\cdots,N$ so that $0<a_1<a_2<\cdots <a_N$. Let \[ u = \left ( \begin{array}{cccccccc} 1 & 1 & 1 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & \cdots & 1 \end{array} \right ) \in {\rm Hom}(\mathbb{Z}^{2N+6}, \mathbb{Z}^2) \] and \begin{eqnarray} \lambda = (\lambda_{-3}, \lambda_{-2}, \lambda_{-1}, \lambda_0, \lambda_1,\cdots,\lambda_{2N+2}) \in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^{2N+6})^* ,\nonumber \end{eqnarray} where $-\lambda_0=0$, $-\lambda_{-1}=(0,\sqrt{-1}b)$, $-\lambda_{-2}=(0,a+\sqrt{-1}b)$, $-\lambda_{-3}=(0,a)$, $-\lambda_{2m+1}=(0,a_m + \sqrt{-1}c)$ and $-\lambda_{2m+2}=(0,a_m)$ for $m=0,1,\cdots,N$. Here, we put $a_0=0$. Then $X(u,\lambda)$ is smooth and become the direct product $X(u',\lambda')\times X(u'',\lambda'')$ where $u'=(1,1,1,1) \in {\rm Hom}(\mathbb{Z}^4,\mathbb{Z})$, $u''=(1,\cdots,1) \in {\rm Hom}(\mathbb{Z}^{2N+2},\mathbb{Z})$, $\lambda' =(\lambda_{-3}, \lambda_{-2}, \lambda_{-1}, \lambda_0)$ and $\lambda''=(\lambda_1,\cdots,\lambda_{2N+2})$. 
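The quiver conditions appearing in Theorem \ref{graph}, as well as Lemma \ref{quiver1} and the equation (\ref{euler}) of Section \ref{sec5}, are easy to experiment with on small examples. The following Python sketch builds the boundary operator $\partial$ for a toy quiver of our own choosing (it is not the quiver constructed below) and checks the Euler characteristic relation and the positivity criterion; it is an illustration only.
\begin{verbatim}
import numpy as np

V = [0, 1, 2]
E = [(0, 1), (1, 2), (2, 0), (1, 0), (0, 1)]   # (source, target); covered by two cycles
D = np.zeros((len(V), len(E)))                  # matrix of the boundary operator
for j, (s, t) in enumerate(E):
    D[s, j] += 1.0                              # coefficient of s(h)
    D[t, j] -= 1.0                              # coefficient of -t(h)

h1 = len(E) - np.linalg.matrix_rank(D)          # dim Ker of the boundary operator
h0 = len(V) - np.linalg.matrix_rank(D)          # dim Ker of its adjoint
print(h0, h1, h0 - h1 == len(V) - len(E))       # 1 3 True

# E is covered by cycles, and indeed the kernel meets the positive orthant:
# the triangle plus the 2-cycle through the last two edges give the all-ones vector.
print(np.allclose(D @ np.ones(len(E)), 0))      # True
\end{verbatim}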
Denote by $[p,q]\subset {\rm Im}\mathbb{H}$ the segment connecting $p,q\in {\rm Im}\mathbb{H}$, and put $\mathbf{A}_{-}:=[-\lambda_0,-\lambda_{-1}]$, $\mathbf{A}_{+}:=[-\lambda_{-2},-\lambda_{-3}]$, $\mathbf{S}_{+}:=[-\lambda_{-1},-\lambda_{-2}]$, $\mathbf{S}_{-}:=[-\lambda_{-3},-\lambda_0]$, $\mathbf{A}_{m}:=[-\lambda_{2m+1},-\lambda_{2m+2}]$ for $m=0,1,\cdots,N$, $\mathbf{S}_{+,m}:=[-\lambda_{2m-1},-\lambda_{2m+1}]$ and $\mathbf{S}_{-,m}:=[-\lambda_{2m},-\lambda_{2m+2}]$ for $m=1,\cdots, N$. Let \begin{eqnarray} \square_{2l,1} &:=& \mathbf{A}_{-}\times \mathbf{A}_{2l},\nonumber\\ \square_{2l,2} &:=& \mathbf{S}_{+}\times \mathbf{S}_{+,2l+1}, \nonumber\\ \square_{2l,3} &:=& \mathbf{A}_{+}\times \mathbf{A}_{2l+1}, \nonumber\\ \square_{2l,4} &:=& \mathbf{S}_{-}\times \mathbf{S}_{-,2l+1}\nonumber \end{eqnarray} for $l=0,1,\cdots, [(N-1)/2]$, and \begin{eqnarray} \square_{2l-1,1} &:=& \mathbf{A}_{-}\times \mathbf{A}_{2l}, \nonumber\\ \square_{2l-1,2} &:=& \mathbf{S}_{+}\times \mathbf{S}_{-,2l}, \nonumber\\ \square_{2l-1,3} &:=& \mathbf{A}_{+}\times \mathbf{A}_{2l-1}, \nonumber\\ \square_{2l-1,4} &:=& \mathbf{S}_{-}\times \mathbf{S}_{+,2l}\nonumber \end{eqnarray} for $l=1,\cdots, [N/2]$. Then $\square_{m,j}$ is a $\sigma(j\pi/2)$-Delzant polytope satisfying $\square_{2l-1,1}=\square_{2l,1}$ and $\square_{2l,3}=\square_{2l+1,3}$, moreover $\square_{m,j}$ and $\square_{m,j+1}$ intersect standardly with angle $\pi/2$ for $j=1,2,3,4$, where we put $\square_{m,5}=\square_{m,1}$. Otherwise, $\square_{m,j} \cap \square_{m',j'}$ is empty. Now let \begin{eqnarray} \mathcal{V} &:=& (\{ 0,1,\cdots,N\}\times \{ 1,2,3,4\})/\sim,\nonumber \end{eqnarray} where $\sim$ is defined by $(2l-1,1)\sim (2l,1)$ and $(2l,3)\sim (2l+1,3)$. We denote by $[m,j]\in \mathcal{V}$ the equivalence class represented by $(m,j)$. Put \begin{eqnarray} \mathcal{E} &:=& \{ [m,j]\to [m,j+1], [m,4]\to [m,1];\ m=0,\cdots, N,\ j=1,2,3\},\nonumber \end{eqnarray} where $x\to y$ means the directed edge whose source is $x$ and whose target is $y$. Then we obtain a quiver $(\mathcal{V},\mathcal{E},s,t)$ and it is easy to see that $\mathcal{E}$ is covered by cycles. With this setting, we can see that $\{ \square_{m,j}\}_{[m,j]\in\mathcal{V}}$ satisfies the assumption of Theorem \ref{graph}, and we have the following result. \begin{thm} Let $X(u,\lambda)$ be as above. Then there exists a compact smooth special\ Lagrangian\ submanifold diffeomorphic to \begin{eqnarray} (3N+1)(\mathbb{P}^1)^2 \# N(S^1\times S^3)\nonumber \end{eqnarray} embedded in $X(u,\lambda)$. \end{thm} \section{Obstruction}\label{sec7} Here we introduce obstructions to the existence of holomorphic\ Lagrangian\ and special\ Lagrangian\ submanifolds in hyper-K\"ahler\ manifolds. Throughout this section, let $(M^{4n},g,I_1,I_2,I_3)$ be a hyper-K\"ahler\ manifold. \begin{prop} Let $L\subset M$ be a special\ Lagrangian\ submanifold, and also a $\sigma$-holomorphic\ Lagrangian\ submanifold for some $\sigma\in S^2$. Then $\sigma = \sigma(k\pi/n)$ for some $k = 1,\cdots, 2n$. \label{6.1} \end{prop} \begin{proof} By decomposing $\mathbb{R}^3$ into $\mathbb{R}\sigma$ and its orthogonal complement, we have \begin{eqnarray} (1,0,0) = p\sigma + q\tau\nonumber \end{eqnarray} for some $p,q\in \mathbb{R}$ and $\tau\in S^2$, where $\tau$ is orthogonal to $\sigma$. 
Then we have $\omega_1= p\omega^\sigma + q\omega^\tau$ and \begin{eqnarray} 0= \omega_1|_L= p\omega^\sigma|_L + q\omega^\tau|_L = p\omega^\sigma|_L,\nonumber \end{eqnarray} since $L$ is a special\ Lagrangian\ and $\sigma$-holomorphic\ Lagrangian\ submanifold. Hence $p$ should be $0$ since $\omega^\sigma$ is non-degenerate on $L$. Thus we have $(1,0,0) = q\tau$, which means that $\sigma$ is orthogonal to $(1,0,0)$. Then we may write $\sigma = \sigma(\theta)$ for some $\theta\in \mathbb{R}$. By the condition ${\rm Im}(\omega_2 + \sqrt{-1}\omega_3)^n|_L = 0$, we obtain $\theta = k\pi/n$ for some $k=1,\cdots, 2n$. \end{proof} \begin{prop} Let $L$ be a compact $\sigma(\theta)$-holomorphic\ Lagrangian\ submanifold in $M$ for some $\theta$, and the orientation of $L$ be determined by $\omega^{\sigma(\theta)}$. Then the pairing of the de Rham cohomology class $[\omega_2+\sqrt{-1}\omega_3]^n$ and the homology class $[L]\in H_{2n}(M,\mathbb{Z})$ is given by \begin{eqnarray} \langle [\omega_2+\sqrt{-1}\omega_3]^n, [L]\rangle = e^{\sqrt{-1}n \theta} V(L), \nonumber \end{eqnarray} where $V(L) >0$ is the volume of $L$. \label{6.2} \end{prop} \begin{proof} Since $L$ is $\sigma(\theta)$-holomorphic Lagrangian, we have \begin{eqnarray} \omega_1|_L = \omega^{\hat{\sigma}(\theta)}|_L=0,\nonumber \end{eqnarray} where $\hat{\sigma}(\theta) = (0,-\sin\theta,\cos\theta) \in S^2$. Then we obtain \begin{eqnarray} \langle [\omega_2+\sqrt{-1}\omega_3]^n, [L]\rangle &=& \int_{L}(\omega_2+\sqrt{-1}\omega_3)^n\nonumber\\ &=& \int_{L}e^{\sqrt{-1}n \theta}\{ e^{-\sqrt{-1} \theta}(\omega_2+\sqrt{-1}\omega_3) \}^n\nonumber\\ &=& e^{\sqrt{-1}n \theta}\int_{L}(\omega^{\sigma(\theta)}+\sqrt{-1}\omega^{\hat{\sigma}(\theta)})^n \nonumber\\ &=& e^{\sqrt{-1}n \theta}\int_{L}(\omega^{\sigma(\theta)})^n = e^{\sqrt{-1}n \theta} V(L). \nonumber \end{eqnarray} \end{proof} Let $L_1,L_2,\cdots, L_A$ be compact smooth submanifolds of dimension $2n$ embedded in $M$. Assume that each $L_\alpha$ is a $\sigma(\theta_\alpha)$-holomorphic\ Lagrangian\ submanifold for some $\theta_\alpha\in \frac{\pi}{n} \mathbb{Z}$, and the orientation of $L_\alpha$ is determined by $\omega^{\sigma(\theta_\alpha)}$. Put $\varepsilon_\alpha =1$ if $\frac{n}{\pi}\theta_\alpha$ is even, and $\varepsilon_\alpha =-1$ if $\frac{n}{\pi}\theta_\alpha$ is odd. \begin{prop} Under the above setting, assume that there exists a compact smooth $\sigma(\theta)$-holomorphic\ Lagrangian\ submanifold $L$ in the homology class $\sum_{\alpha=1}^A \varepsilon_\alpha [L_\alpha]$ for some $\theta\in \mathbb{R}$. Then $\{ \theta_1,\theta_2,\cdots,\theta_A\}$ is contained in $\theta + \pi\mathbb{Z}$. \label{homology} \end{prop} \begin{proof} Since $L_\alpha$ is a $\sigma(\theta_\alpha)$-holomorphic\ Lagrangian\ submanifold, we have, by Proposition \ref{6.2}, \begin{eqnarray} \langle [\omega_2+\sqrt{-1}\omega_3]^n, \sum_{\alpha=1}^A \varepsilon_\alpha [L_\alpha ] \rangle &=& \sum_{\alpha=1}^A \varepsilon_\alpha e^{\sqrt{-1}n \theta_\alpha} V(L_\alpha)\nonumber\\ &=& \sum_{\alpha=1}^A \varepsilon_\alpha^2 V(L_\alpha) = \sum_{\alpha=1}^A V(L_\alpha). \label{eq5} \end{eqnarray} Since $L$ is a compact smooth $\sigma(\theta)$-holomorphic\ Lagrangian\ submanifold, $\omega_1|_L = \omega^{\hat{\sigma}(\theta)}|_L=0$ holds, where we put ${\hat{\sigma}(\theta)}$ as in the proof of Proposition \ref{6.2}. 
Therefore we obtain \begin{eqnarray} \langle [\omega_2+\sqrt{-1}\omega_3]^n, [L] \rangle = e^{\sqrt{-1}n\theta} \langle [\omega^{\sigma(\theta)}]^n, [L] \rangle \label{eq6} \end{eqnarray} by the same computation as in the proof of Proposition \ref{6.2}. Then by combining (\ref{eq5}) and (\ref{eq6}), $\theta$ is given by $\theta = k\pi/n$ for an integer $k=1,\cdots,2n$. Now we have \begin{eqnarray} \omega^{\sigma(\theta)} &=& {\rm Re}(e^{-\sqrt{-1}\theta}(\omega_2+\sqrt{-1}\omega_3)) \nonumber\\ &=& {\rm Re}(e^{-\sqrt{-1}(\theta - \theta_\alpha)} e^{-\sqrt{-1}\theta_\alpha}(\omega_2+\sqrt{-1}\omega_3 ))\nonumber\\ &=& {\rm Re}(e^{-\sqrt{-1}(\theta - \theta_\alpha)} (\omega^{\sigma(\theta_\alpha)}+\sqrt{-1}\omega^{\hat{\sigma}(\theta_\alpha)}))\nonumber\\ &=& \cos (\theta - \theta_\alpha)\omega^{\sigma(\theta_\alpha)} + \sin (\theta - \theta_\alpha)\omega^{\hat{\sigma}(\theta_\alpha)}\nonumber \end{eqnarray} and $\omega^{\hat{\sigma}(\theta_\alpha)}|_{L_\alpha} = 0$, hence we obtain \begin{eqnarray} \langle [\omega^{\sigma(\theta)}]^n, [L] \rangle &=& \sum_{\alpha=1}^A \varepsilon_\alpha \langle [\omega^{\sigma(\theta)}]^n, [L_\alpha ] \rangle \nonumber\\ &=& \sum_{\alpha=1}^A \varepsilon_\alpha \cos^n (\theta - \theta_\alpha)\langle [\omega^{\sigma(\theta_\alpha)} ]^n, [L_\alpha ] \rangle \nonumber\\ &=& \sum_{\alpha=1}^A \varepsilon_\alpha \cos^n (\theta - \theta_\alpha) V(L_\alpha ). \label{eq7} \end{eqnarray} By combining (\ref{eq5}), (\ref{eq6}) and (\ref{eq7}) and putting $\theta=k\pi/n$, we obtain \begin{eqnarray} \sum_{\alpha=1}^A V(L_\alpha) = (-1)^k\sum_{\alpha=1}^A \varepsilon_\alpha \cos^n \bigg(\frac{k\pi}{n} - \theta_\alpha\bigg) V(L_\alpha). \nonumber \end{eqnarray} Next we put $\theta_\alpha = k_\alpha\pi/n$ for $k_\alpha=1,\cdots,2n$. Then $\varepsilon_\alpha = (-1)^{k_\alpha}$ and \begin{eqnarray} \sum_{\alpha=1}^A V(L_\alpha) = \sum_{\alpha=1}^A (-1)^{k-k_\alpha} \cos^n \bigg(\frac{k-k_\alpha}{n}\pi\bigg) V(L_\alpha) \nonumber \end{eqnarray} holds. Since every $V(L_\alpha)$ is positive and each coefficient $(-1)^{k-k_\alpha} \cos^n (\frac{k-k_\alpha}{n}\pi)$ is at most $1$, we obtain \begin{eqnarray} (-1)^{k-k_\alpha} \cos^n \bigg(\frac{k-k_\alpha}{n}\pi\bigg) = 1, \label{eq8} \end{eqnarray} which forces $|\cos (\frac{k-k_\alpha}{n}\pi)| = 1$, hence $k-k_\alpha$ should be contained in $n\mathbb{Z}$. If $k-k_\alpha = nl$ for some $l\in \mathbb{Z}$, then $\cos (\frac{k-k_\alpha}{n}\pi) = \cos l\pi = (-1)^l$ holds, which gives \begin{eqnarray} (-1)^{k-k_\alpha} \cos^n \bigg(\frac{k-k_\alpha}{n}\pi\bigg) = (-1)^{nl} (-1)^{nl} = 1.\nonumber \end{eqnarray} Thus the assertion follows since (\ref{eq8}) holds if and only if $k-k_\alpha \in n\mathbb{Z}$. \end{proof} \begin{cor} Under the assumption of Theorem \ref{graph}, let $\mathcal{E}$ be nonempty. Then the special\ Lagrangian\ submanifolds $\tilde{L}_t$ obtained in Theorem \ref{graph} are not $\sigma$-holomorphic\ Lagrangian\ submanifolds for any $\sigma\in S^2$. \end{cor} \begin{proof} First of all, note that $\tilde{L}_t$ lies in the homology class $\sum_{k\in\mathcal{V}}\varepsilon_{k}[L_{\triangle_k}]$. Let $k_1 \to k_2\in \mathcal{E}$. Then $\triangle_{k_1}$ and $\triangle_{k_2}$ intersect standardly with angle $\pi/n$, hence we have $\theta_{k_2} = \theta_{k_1} + \pi/n$, which implies that $\{\theta_k;\ k\in\mathcal{V}\}$ contains $\theta_{k_1}$ and $\theta_{k_1} + \pi/n$. Thus $\{\theta_k;\ k\in\mathcal{V}\}$ can never be contained in $\theta+\pi\mathbb{Z}$ for any $\theta$ since $n>1$. 
By Propositions \ref{6.1} and \ref{homology}, $\tilde{L}_t$ never becomes a $\sigma$-holomorphic\ Lagrangian\ submanifold for any $\sigma\in S^2$. \end{proof} Keio University, 3-14-1 Hiyoshi, Kohoku, Yokohama 223-8522, Japan,\\ [email protected] \end{document}
\begin{document} \date{\today} \maketitle \section{Introduction} \label{sect:intro} Let $X$ be an irreducible holomorphic symplectic manifold, i.e., a compact K\"ahler simply-connected manifold admitting a unique nondegenerate holomorphic two-form. Let $\left(,\right)$ denote the Beauville--Bogomolov form on the cohomology group $\mathrm{H}^2(X,{\mathbb Z})$, normalized so that it is integral and primitive. When $X$ is a K3 surface this coincides with the intersection form. In higher dimensions, the form induces an inclusion \begin{equation} \label{eqn:incl} \mathrm{H}^2(X,{\mathbb Z}) \subset \mathrm{H}_2(X,{\mathbb Z}), \end{equation} which allows us to extend $\left(,\right)$ to a ${\mathbb Q}$-valued quadratic form. Lagrangian projective spaces play a fundamental r\^ole in the birational geometry of these classes of manifolds. If $X$ contains a holomorphically embedded projective space ${\mathbb P}^{\dim(X)/2}$ we can consider the {\em Mukai flop} of $X$, obtained by blowing up the projective space and blowing down the exceptional divisor $$E\simeq {\mathbb P}(\Omega^1_{{\mathbb P}^{\dim(X)/2}})$$ along the opposite ruling. Our goal is to characterize possible homology classes of such submanifolds, modulo the monodromy representation on the cohomology of $X$. Assuming $X$ contains a Lagrangian projective space ${\mathbb P}^{\dim(X)/2}$, let $\ell\in \mathrm{H}_2(X,{\mathbb Z})$ denote the class of a line in ${\mathbb P}^{\dim(X)/2}$, and $\lambda=N\ell\in \mathrm{H}^2(X,{\mathbb Z})$ a positive integer multiple. We can take $N$ to be the index of $\mathrm{H}^2(X,{\mathbb Z}) \subset \mathrm{H}_2(X,{\mathbb Z})$. Hodge theory \cite{Ran,Voisin} shows that the deformations of $X$ containing a deformation of the Lagrangian space coincide with the deformations of $X$ for which $\lambda \in \mathrm{H}^2(X,{\mathbb Z})$ remains of type $(1,1)$. Infinitesimal Torelli implies this is a divisor in the deformation space, i.e., $$\lambda^{\perp} \subset \mathrm{H}^1(X,\Omega^1_X) \simeq \mathrm{H}^1(X,{\mathcal T}_X).$$ We seek to establish intersection theoretic properties of $\ell$ for various deformation-equivalence classes of holomorphic symplectic manifolds. Previous results in this direction include \begin{enumerate} \item If $X$ is a K3 surface then $\left(\ell,\ell\right)=-2$. \item If $X$ is deformation equivalent to the Hilbert scheme of length-two subschemes of a K3 surface then $\left(\ell,\ell\right)=-5/2$. \cite{HTGAFA08} \item If $X$ is deformation equivalent to a generalized Kummer fourfold then $\left(\ell,\ell\right)=-3/2$. \cite{HT10} \end{enumerate} Here we prove \begin{theo} \label{theo:main} Let $X$ be a six-dimensional K\"ahler manifold, deformation equivalent to the Hilbert scheme of length-three subschemes of a K3 surface. Let ${\mathbb P}^3 \subset X$ be a smooth subvariety and $\ell \subset {\mathbb P}^3$ a line. Then $\left(\ell,\ell\right)=-3$ and $\rho=2\ell \in \mathrm{H}^2(X,{\mathbb Z})$. Furthermore, we have $$\left[ {\mathbb P}^3 \right]=\frac{1}{48}\left( \rho^3 + \rho^2c_2(X)\right).$$ \end{theo} This uniquely characterizes the class of the Lagrangian plane, modulo the monodromy action, which acts transitively on the $\rho \in \mathrm{H}^2(X,{\mathbb Z})$ with $\left(\rho,\rho\right)=-12$ and $\left(\rho,\mathrm{H}^2(X,{\mathbb Z})\right)=2{\mathbb Z}$ \cite[\S 3]{GHS}. In general, we conjectured in \cite{HT09} that if $X$ is of dimension $2n$ then $\left(\ell,\ell\right)= -(n+3)/2$, if $X$ is deformation equivalent to a Hilbert scheme of a K3 surface. 
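For the deformation type of Hilbert schemes of K3 surfaces, the known cases listed above are consistent with this conjecture; the following one-line check (ours, and purely arithmetic) records this. The generalized Kummer case belongs to a different deformation type and is not covered by the formula.
\begin{verbatim}
# (ell, ell) = -(n+3)/2 for X deformation equivalent to S^[n]:
# n = 1 (a K3 surface), n = 2, and n = 3 (the theorem above) give -2, -5/2, -3.
known = {1: -2, 2: -2.5, 3: -3}
print(all(-(n + 3)/2 == v for n, v in known.items()))   # True
\end{verbatim}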
Our main motivation for making these conjectures is to achieve a classification of extremal rational curves on irreducible holomorphic symplectic varieties (i.e., generators of extremal rays of birational contractions) in terms of intersection properties under the Beauville-Bogomolov form. The structure of this paper is as follows: Section~\ref{sect:cohomology} reviews the cohomology groups of Hilbert schemes of K3 surfaces; Section~\ref{sect:ring} focuses on the ring structure. We employ representation theory to get results on the Hodge classes in Section~\ref{sect:representation}. The Hilbert scheme of length-three subschemes is studied in detail in Section~\ref{sect:lengththree}. We extract the distinguished absolute Hodge class in the middle cohomology in Section~\ref{sect:indecomp}; here `absolute Hodge classes' are those that remain Hodge under arbitrary deformations of complex structure. The computation of the class of the Lagrangian three planes is worked out in Section~\ref{sect:LTP}, modulo a number theoretic result. This is proved in Section~\ref{sect:DA}. {\bf Acknowledgments:} We are grateful to Noam Elkies, Lothar G\"ott\-sche, Manfred Lehn, Eyal Markman, and Christoph Sorger for useful conversations. The second author was supported by National Science Foundation Grant 0554491 and 0901645; the third author was supported by National Science Foundation Grants 0554280 and 0602333. We appreciate the hospitality of the American Institute of Mathematics, where some of this work was done. \section{Cohomology of Hilbert schemes} \label{sect:cohomology} Let $X$ be deformation equivalent to the punctual Hilbert scheme $S^{[n]}$, where $S$ is a K3 surface. For $n>1$ the Beauville-Bogomolov form can be written \cite[\S 8]{beauville} $$\mathrm{H}^2(X,{\mathbb Z}) \simeq \mathrm{H}^2(S,{\mathbb Z})_{\left(,\right)} \oplus_{\perp} {\mathbb Z} \delta, \quad \left(\delta,\delta\right)=-2(n-1)$$ where $2\delta$ is the class of the `diagonal' divisor $\Delta^{[n]} \subset S^{[n]}$ parameterizing nonreduced subschemes. For each homology class $f\in \mathrm{H}^2(S,{\mathbb Z})$, let $f \in \mathrm{H}^2(X,{\mathbb Z})$ denote the class parameterizing subschemes with some support along $f$. This is compatible with the lattice embedding above. Duality gives a ${\mathbb Q}$-valued form on homology $$ \mathrm{H}_2(X,{\mathbb Z}) \simeq \mathrm{H}_2(S,{\mathbb Z})_{\left(,\right)} \oplus_{\perp} {\mathbb Z} \delta^{\vee}, \quad \left(\delta^{\vee},\delta^{\vee}\right)=-\frac{1}{2(n-1)},$$ where $\delta^{\vee}$ is characterized as the homology class orthogonal to $\mathrm{H}^2(S,{\mathbb Z})$ and satisfying $\delta^{\vee}\cdot \delta =1$. \begin{theo} \cite{Gott90} Let $S$ be a K3 surface and $S^{[n]}$ its Hilbert scheme. Consider the Poincar\'e polynomial $$p(S^{[n]},z)=\sum_{j=0}^{4n} \beta_j(S^{[n]})z^j.$$ Then $$\sum_{n=0}^{\infty} p(S^{[n]},z)t^n= \prod_{m=1}^{\infty} (1-z^{2m-2}t^m)^{-1}(1-z^{2m}t^m)^{-22}(1-z^{2m+2}t^m)^{-1}.$$ \end{theo} To save space, we write $$q(S^{[n]},z)=\sum_{j=0}^{n} \beta_{2j} z^j,$$ which determines the Poincar\'e polynomial by Poincar\'e duality. We have $$\begin{array}{rcl} q(S,z)&=& 1+ 22z \\ q(S^{[2]},z) &=& 1 + 23 z + 276 z^2 \\ q(S^{[3]},z) &=& 1 + 23 z + 299 z^2 + 2554 z^3. \end{array} $$ A theorem of Verbitsky \cite[Theorem 1.5]{Verb} asserts that the homomorphism arising from the cup product $$\mu_{k,n}:\mathrm{Sym}^k \mathrm{H}^2(S^{[n]},{\mathbb Q}) \rightarrow \mathrm{H}^{2k}(S^{[n]},{\mathbb Q})$$ is injective for $k \le n$. 
Thus its image has dimension $$\binom{22+k}{k}.$$ In light of the computations above, $\mu_{2,2}$ is an isomorphism, $\mu_{2,3}$ has cokernel of dimension $23$, and $\mu_{3,3}$ has cokernel of dimension $$2554-\binom{25}{3}=254=\binom{23}{2}+1.$$ The cup product also induces a homomorphism $$\mathrm{coker}(\mu_{2,3}) \otimes \mathrm{H}^2(S^{[3]},{\mathbb Q}) \rightarrow \mathrm{coker}(\mu_{3,3}).$$ This homomorphism has been observed by Markman \cite[p. 80]{MarkCrelle}. More generally, he analyzes what classes are needed to generate the cohomology ring $\mathrm{H}^*(S^{[n]},{\mathbb Q})$, beyond those coming $\mathrm{H}^2(S^{[2]},{\mathbb Q})$. Markman uses Chern classes of universal sheaves over the product $S^{[n]} \times S$; a detailed discussion of the $n=3$ case is given in \cite[Ex. 14]{MarkCrelle}. \section{The ring structure on cohomology} \label{sect:ring} Lehn-Sorger \cite{LS} and Nakajima \cite{Nakajima} described $\mathrm{H}^*(S^{[n]},{\mathbb Q})$ in terms of $\mathrm{H}^*(S,{\mathbb Q})$. We review the Lehn-Sorger formalism for the cup product on the cohomology ring. Let $S$ be a K3 surface and $A=\mathrm{H}^*(S,{\mathbb Q})(1)$, the cohomology ring shifted so that it has weights $-2,0$, and $2$; this is written as $\mathrm{H}^*(S,{\mathbb Q})[2]$ in their paper. Shifting the weights changes the sign of the intersection form, which is denoted by $\left<,\right>$; this has signature $(20,4)$. Let $T:A \rightarrow {\mathbb Q}$ denote the linear form $$\gamma \mapsto -\int_S \gamma$$ and $\left<,\right>$ the induced bilinear form $$\left<\gamma_1,\gamma_2\right>=T(\gamma_1\gamma_2)=-\int_S \gamma_1 \gamma_2.$$ For each $n \in {\mathbb N}$, we endow $A^{\otimes n}$ with an analogous structure. We shall use the fact that $A$ has only graded pieces of {\em even} degrees to simplify the description in \cite{LS}. In this situation, graded commutative multiplication rules are in fact commutative, given by the rule $$(a_1 \otimes \cdots \otimes a_n)\cdot (b_1 \otimes \cdots \otimes b_n)= (a_1b_1)\otimes \cdots \otimes (a_nb_n). $$ The linear form $$T:A^{\otimes n} \rightarrow {\mathbb Q}$$ is defined by $$T(a_1\otimes \cdots \otimes a_n) =T(a_1)\cdots T(a_n).$$ Let $\left<,\right>$ denote the associated bilinear form $$\left<a,b\right>=T(a\cdot b).$$ The symmetric group ${\mathfrak S}_n$ acts on $A^{\otimes n}$ by the rule $$\pi(a_1 \otimes \cdots \otimes a_n)= a_{\pi^{-1}(1)} \otimes \cdots \otimes a_{\pi^{-1}(n)}.$$ Given a partition $n=n_1+\ldots+n_k$ with $n_1,\ldots,n_k \in {\mathbb N}$, we have a generalized multiplication map $$\begin{array}{rcl} A^{\otimes n} & \rightarrow & A^{\otimes k}\\ a_1\otimes \cdots \otimes a_n & \mapsto & (a_1\cdots a_{n_1}) \otimes \cdots \otimes (a_{n_1+\cdots+n_{k-1}+1}\cdots a_{n_1+\cdots+n_k}). \end{array} $$ Given a finite set $I\subset \{1,\ldots,n\}$, let $A^{\otimes I}$ denote the tensor power with factors indexed by elements of $I$. Given a surjection $\phi:I \rightarrow J$, there is an induced multiplication $$ \phi^* \colon A^{\otimes I} \rightarrow A^{\otimes J} $$ defined as above. Let $$\phi_*: A^{\otimes J} \rightarrow A^{\otimes I}$$ denote the {\em adjoint} of $\phi^*$, i.e., $$\left< \phi^*a,b \right>=\left<a,\phi_* b\right>$$ for $a \in A^{\otimes I}$ and $b\in A^{\otimes J}$. We have the composite $$A \stackrel{\Delta_*}{\rightarrow} A\otimes A \rightarrow A,$$ where the first map is adjoint comultiplication and the second is multiplication. Let $e:=e(A)$ denote the image of $1$ under the composed map. 
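Before turning to signs, we record a quick computational check of the numerology quoted earlier in this section. The following SymPy script (ours, and independent of the arguments here) expands G\"ottsche's generating function and verifies the cokernel dimension count for $\mu_{3,3}$.
\begin{verbatim}
from sympy import symbols, series, prod, binomial

z, t = symbols('z t')
# Goettsche's product, truncated: factors with m >= 4 contribute only t^4 and higher
f = prod((1 - z**(2*m - 2)*t**m)**(-1) * (1 - z**(2*m)*t**m)**(-22)
         * (1 - z**(2*m + 2)*t**m)**(-1) for m in range(1, 4))
p3 = series(f, t, 0, 4).removeO().expand().coeff(t, 3)
print([p3.coeff(z, 2*j) for j in range(7)])   # expect [1, 23, 299, 2554, 299, 23, 1]

print(2554 - binomial(25, 3), binomial(23, 2) + 1)   # 254 254
\end{verbatim}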
\begin{rema} We evaluate the signs of $\Delta_*1$ and $e(A)$. Let $\Delta_S$ denote the fundamental class of the diagonal in $\mathrm{H}^*(S\times S,{\mathbb Z})=\mathrm{H}^*(S,{\mathbb Z}) \otimes \mathrm{H}^*(S,{\mathbb Z})$. Using the adjoint property, we have $$\begin{array}{rcl} \left<\Delta_*1, \alpha \otimes \beta \right> &=& \left< 1, \alpha\beta \right> \\ &=& T(\alpha\beta) \\ &=& -\int_S \alpha \beta \end{array} $$ whereas $$\begin{array}{rcl} \left<\Delta_S, \alpha \otimes \beta \right> &=& \left< \sum_j e_j \otimes e_j^{\vee}, \alpha\otimes \beta \right> \\ &=& \sum_j T(e_j \alpha) T(e_j^{\vee}\beta)\\ &=& \int_S \alpha \beta, \end{array} $$ where $\{e_j\}$ is a homogeneous basis for $\mathrm{H}^*(S,{\mathbb Q})$ with Poincar\'e-dual basis $e_j^{\vee}$. Therefore, we find \begin{equation} \Delta_*1=-[\Delta_S]. \label{eqnsignswitch} \end{equation} Furthermore, we have $$\int_S e(A)=-T(e(A))=-\left<e(A),1\right>=-\left<\Delta_*1,\Delta_*1\right>=-\chi(S)=-24,$$ so $e(A)$ is a {\em negative} multiple of the point class. Nevertheless, we still have (cf. \cite[\S 2.2]{LS}) $$e(A)=\chi(S) \mathrm{vol}, \quad \text{ where } \quad T(\mathrm{vol})=1,$$ but $\mathrm{vol}$ differs from the standard volume form by sign. \end{rema} Let $\left<\pi\right> \backslash [n]$ denote the set of orbits of $[n]=\{1,2,\ldots,n\}$ under the action of $\pi$. Set $$A\{{\mathfrak S}_n\}=\oplus_{\pi\in {\mathfrak S}_n} A^{\otimes \left<\pi\right> \backslash [n]} \cdot \pi$$ which admits an action of ${\mathfrak S}_n$. First, note that $\sigma \in {\mathfrak S}_n$ induces a bijection $$\begin{array}{rcl} \sigma:\left<\pi\right> \backslash [n] & \rightarrow & \left<\sigma \pi \sigma^{-1}\right> \backslash [n] \\ x & \mapsto & \sigma x. \end{array} $$ Thus we obtain an isomorphism $$\begin{array}{rcl} \tilde{\sigma}: A\{{\mathfrak S}_n\} & \rightarrow & A\{{\mathfrak S}_n\} \\ a \pi & \mapsto & \sigma^* \sigma\pi \sigma^{-1}. \end{array} $$ \begin{exam} \cite[2.9, 2.17]{LS} We have $A\{{\mathfrak S}_2\}=A^{\otimes 2}\mathrm{id} \oplus A (12)$ and $$ A\{{\mathfrak S}_3\}=A^{\otimes 3}\mathrm{id}\oplus A^{\otimes 2}(12) \oplus A^{\otimes 2}(13)\oplus A^{\otimes 2}(23) \oplus A(123) \oplus A(132). $$ \end{exam} Let $A^{[n]}\subset A\{{\mathfrak S}_n\}$ denote the invariants under this action. Then we have \cite[\S 2]{LS} $$A^{[n]}=\sum_{\| \alpha \|=n} \bigotimes_{i}\mathrm{Sym }^{\alpha_i}A,$$ where $\alpha$ corresponds to a partition $$\underbrace{1+\cdots+1}_{\alpha_1 \text{ times}}+ \underbrace{2+\cdots+2}_{\alpha_2 \text{ times}}+\cdots$$ and $$n=\|\alpha \|=\alpha_1+2\alpha_2+\cdots+n\alpha_n.$$ Note that this is compatible with Hodge structures; in particular, $A^{[n]}$ is a representation of the Hodge group of $S$ and the special orthogonal group $G_S$ associated with the intersection form on $\mathrm{H}^2(S,{\mathbb R})$. We interpret this as acting on $A$, trivially on the summands $\mathrm{H}^0(S,{\mathbb R})$ and $\mathrm{H}^4(S,{\mathbb R})$. \begin{theo} \cite[Theorem 3.2]{LS} Let $S$ be a K3 surface. Then there is a canonical isomorphism of graded rings $$(\mathrm{H}^*(S,{\mathbb Q})[2])^{[n]} \stackrel{\sim}{\rightarrow}\mathrm{H}^*(S^{[n]},{\mathbb Q})[2n].$$ \end{theo} In the cohomology of the Hilbert scheme, the subring generated by $\mathrm{H}^2(S^{[n]})$ plays a special role. We have an isomorphism $$\mathrm{H}^2(S^{[n]},{\mathbb Z}) = \mathrm{H}^2(S,{\mathbb Z}) \oplus {\mathbb Z} \delta,$$ where $2\delta$ parameterizes the non-reduced subschemes of $S$. We express this in terms of our presentation. 
Given $D \in \mathrm{H}^2(S,{\mathbb Z})$, the class $$\sum_{i=1}^n 1_{\{1\}} \otimes \cdots \otimes 1_{\{i-1\}} \otimes D_{\{i\}} \otimes 1_{\{i+1\}}\otimes \cdots \otimes 1_{\{n\}} (\mathrm{id})$$ is the corresponding class in $\mathrm{H}^2(S^{[n]},{\mathbb Q})[2n]$. Using the explicit form of the isomorphism in \cite[2.7]{LS} and Nakajima's isomorphism (\cite[Thm. 3.6]{LS}), we find that $$\delta=\sum_{1 \le i<j \le n} 1_{\{1\}} \otimes \cdots \otimes 1_{\{i-1\}} \otimes 1_{\{i,j\}} \otimes 1_{\{i+1\}} \otimes \cdots \otimes 1_{\{j-1\}} \otimes 1_{\{j+1 \}} \otimes \cdots \otimes 1_{\{n\}}(ij).$$ Here is the essence of the computation: the interpretation of the nonreduced subschemes via the correspondence $$Z_2=\{(\xi,x,\xi'):|\xi'|-|\xi|=2x \} \subset S^{[n-2]} \times S \times S^{[n]}$$ allows us to express $\delta$ in terms of Nakajima's creation and annihilation operators, and thus in $$\mathrm{H}^*(S^{[n]},{\mathbb Q})[2n].$$ We describe the general rule for evaluating the fundamental class in $A^{[n]}$. Let $$[\mathrm{pt}]\in \mathrm{H}^4(S,{\mathbb Z})[2] \subset A$$ be the point class, which is of degree $-2$. Let $$[\mathrm{pt}]_{\{1\}} \otimes \cdots \otimes [\mathrm{pt}]_{\{n\}} (\mathrm{id}) \in A^{[n]}$$ denote the unique class of degree $-2n$ up to scalar. Then the class of a point in $S^{[n]}$ is equal to \cite[2.10]{LS} \begin{equation} \label{eqn:pointclass} [\mathrm{pt}_{S^{[n]}}]=\frac{1}{n!} [\mathrm{pt}]_{\{1\}} \otimes \cdots \otimes [\mathrm{pt}]_{\{n\}} (\mathrm{id}) . \end{equation} \section{Decomposition of the cohomology representation} \label{sect:representation} We summarize general results from representation theory. For an orthogonal group of odd dimension $2r+1$, the highest weights $\lambda=(\lambda_1,\ldots,\lambda_r)$ of irreducible representations $V(\lambda)$ are vectors consisting entirely of integers (or half integers) in the fundamental chamber $$\{\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_{r-1} \ge \lambda_r \ge 0 \}.$$ Since we only consider even-weight representations, we ignore cases where the $\lambda_j$ are half-integers. For orthogonal groups of even dimension $2r$, the fundamental chamber is $$\{\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_{r-1} \ge |\lambda_r| \ge 0 \}.$$ Recall that \begin{itemize} \item{$V(1,0,\ldots)$ is the standard representation $V$.} \item{We have $$V(\underbrace{1,\cdots,1}_{k \text{ times}},0,\cdots)=\bigwedge^k V,$$ provided $k < r$ (in the even case) or $k\le r$ (in the odd case); see, for instance, \cite[Thms. 19.2 and 19.14]{FulHar}.} \item{$V(k,0,\ldots)=\mathrm{Sym }^k(V)/\mathrm{Sym }^{k-2}(V)$, where $\mathrm{Sym }^{k-2}(V)$ is embedded in $\mathrm{Sym }^k(V)$ via multiplication by the dual of the quadratic form on $V$.} \item{For the odd orthogonal group, we have $$\dim V(\lambda)=\prod_{i<j} \frac{\ell_i-\ell_j}{j-i} \prod_{i \le j}\frac{\ell_i+\ell_j}{2r+1-i-j}$$ where $\ell_i=\lambda_i+r-i+\frac{1}{2}$ \cite[p. 408]{FulHar}.} \item{For the even orthogonal group, we have $$\dim V(\lambda)=\prod_{i<j} \frac{\ell^2_i-\ell^2_j}{(j-i)(2r-i-j)}$$ where $\ell_i=\lambda_i+r-i$ \cite[p. 410]{FulHar}.} \item{Let $V_X(\lambda)$ denote an irreducible representation of an orthogonal group $G_X$ of dimension $2r+1$, let $G_S \subset G_X$ be the orthogonal subgroup of dimension $2r$ fixing a non-isotropic vector with negative self-intersection, and let $V_S(\overline{\lambda})$ be the representation of $G_S$ with highest weight $\overline{\lambda}$. Then we have the branching rule \cite[p. 
426]{FulHar} $$\mathrm{Res}^{G_X}_{G_S}V_X(\lambda)= \oplus_{{\overline \lambda}} V_S(\overline \lambda),$$ where the sum ranges over all $\overline{\lambda}$ with $$\lambda_1 \ge {\overline \lambda_1} \ge \lambda_2 \ge {\overline \lambda_2} \ge \cdots \ge \lambda_r \ge |{\overline \lambda_r}|.$$} \end{itemize} Let $X$ be a generic deformation of $S^{[n]}$. Our goal is to decompose $\mathrm{H}^*(X,{\mathbb Q})$ into irreducible representations for the action of the identity component $G_X$ of the special orthogonal group associated with the Beauville-Bogomolov form on $\mathrm{H}^2(X,{\mathbb Q})$. Let $G_S$ denote the identity component of the special orthogonal group associated with the intersection form on $\mathrm{H}^2(S,{\mathbb Q})$. The decomposition $$\mathrm{H}^2(S^{[n]},{\mathbb Z})=\mathrm{H}^2(S,{\mathbb Z}) \oplus_{\perp} {\mathbb Z} \delta$$ induces an inclusion $G_S \subset G_X$. \begin{prop} Let $X$ be deformation equivalent to $S^{[n]}$ for some $n$. Then $G_X$ admits a representation on the cohomology ring of $X$. \end{prop} \begin{proof} Let $\mathrm{Mon}\subset \mathrm{Aut}(\mathrm{H}^*(X,{\mathbb Z}))$ denote the monodromy group, i.e., the group generated by the monodromy representations of all connected families containing $X$. Let $\mathrm{Mon}^2\subset \mathrm{Aut}(\mathrm{H}^2(X,{\mathbb Z}))$ denote its image under projection to the second cohomology group, so we have an exact sequence $$ 1 \rightarrow K \rightarrow \mathrm{Mon} \rightarrow \mathrm{Mon}^2 \rightarrow 1.$$ Markman has shown \cite[\S 4.3]{Mark1} that $K$ is finite. Note that $G_X$ is a connected component of the Zariski closure of $\mathrm{Mon}^2$ (see, for example, \cite[\S 1.8]{Mark1}). Since $\mathrm{Mon}$ and $\mathrm{Mon}^2$ differ only by finite subgroups, it follows that the universal cover $\widetilde{G_X}\rightarrow G_X$ acts on the cohomology ring of $X$. Since the cohomology of $X$ is nonzero only in even degrees, this representation passes to $G_X$. \end{proof} In principle, we can decompose $\mathrm{H}^*(X,{\mathbb R})$ explicitly into isotypic components as follows: \begin{enumerate} \item{Fix an embedding $G_S \subset G_X$, e.g., using the isomorphism $$\mathrm{H}^2(X,{\mathbb Z})\simeq \mathrm{H}^2(S,{\mathbb Z}) \oplus_{\perp} {\mathbb Z} \delta,$$ and compatible maximal tori (both of which have rank $11$).} \item{Identify the highest-weight irreducible $G_S$-representation $V_{S}(\lambda) \subset \mathrm{H}^*(S^{[n]},{\mathbb R})$, which is a summand of the restriction of an irreducible $V_{X}(\lambda) \subset \mathrm{H}^*(X,{\mathbb R})$. Decompose $V_X(\lambda)$ into irreducible $G_S$-representations.} \item{Repeat step two for $\mathrm{H}^*(X,{\mathbb R})/V_{X}(\lambda)$ and subsequent quotients.} \end{enumerate} First consider $X=S^{[2]}$. 
We have decompositions $$\mathrm{H}^*(S^{[2]})=A \oplus \mathrm{Sym }^2(A)$$ inducing $$\begin{array}{rcl} \mathrm{H}^2(S^{[2]})&=& \mathrm{H}^0(S) \oplus (\mathrm{H}^0(S)\otimes \mathrm{H}^2(S))={\bf 1}_S \oplus V_S(1,0,\ldots) \\ \mathrm{H}^4(S^{[2]})&=& \mathrm{H}^2(S) \oplus (\mathrm{H}^0(S)\otimes \mathrm{H}^4(S)) \oplus \mathrm{Sym }^2(\mathrm{H}^2(S)) \\ &=& V_S(1,0,\ldots) \oplus {\bf 1}_S^{\oplus 2} \oplus V_S(2,0,\ldots) \end{array} $$ Let $V_{X}(2,0,\ldots,0)$ denote the highest-weight representation associated to $\mathrm{Sym }^2(\mathrm{H}^2(X))$ so that $$\mathrm{Sym }^2(\mathrm{H}^2(X))=V_{X}(2,0,\ldots) \oplus {\bf 1}_X.$$ The branching rule gives $$V_{X}(1,0,\ldots)= V_S(1,0,\ldots) \oplus {\bf 1}_S$$ and $$V_{X}(2,0,\ldots)= V_S(2,0,\ldots) \oplus V_S(1,0,\ldots) \oplus {\bf 1}_S.$$ Therefore we obtain $$\begin{array}{rcl} \mathrm{H}^2(X) &=& V_X(1,0,\ldots) \\ \mathrm{H}^4(X) &=& V_X(2,0,\ldots)\oplus {\bf 1}_X. \end{array} $$ Now consider $X=S^{[3]}$. We have $$\mathrm{H}^*(S^{[3]})= A \oplus (A\otimes A) \oplus \mathrm{Sym }^3(A)$$ inducing the following decompositions (as described in \cite[Example 2.9]{LS}): $$ \begin{array}{rcl} \mathrm{H}^2(S^{[3]}) &=& (\mathrm{H}^0(S)^{\otimes 2}) \oplus (\mathrm{H}^2(S)\otimes \mathrm{H}^0(S)^{\otimes 2}) \\ &=& {\bf 1}_S \oplus V_S(1,0,\ldots) \\ \mathrm{H}^4(S^{[3]}) &=& \mathrm{H}^0(S) \oplus (\mathrm{H}^0(S)\otimes \mathrm{H}^2(S))^{\oplus 2} \\ & &\oplus (\mathrm{Sym }^2(\mathrm{H}^2(S))\otimes \mathrm{H}^0(S)) \oplus (\mathrm{H}^4(S)\otimes \mathrm{H}^0(S)^{\otimes 2}) \\ &=& {\bf 1}_S^{\oplus 3} \oplus V_S(1,0,\ldots)^{\oplus 2} \oplus V_S(2,0,\ldots) \\ \mathrm{H}^6(S^{[3]}) &=& \mathrm{H}^2(S) \oplus (\mathrm{H}^2(S)\otimes \mathrm{H}^2(S)) \oplus (\mathrm{H}^0(S) \otimes \mathrm{H}^4(S))^{\oplus 2} \\ & &\oplus \mathrm{Sym }^3(\mathrm{H}^2(S)) \oplus (\mathrm{H}^4(S) \otimes \mathrm{H}^2(S) \otimes \mathrm{H}^0(S)) \\ &=& {\bf 1}_S^{\oplus 3} \oplus V_S(1,0,\ldots)^{\oplus 3} \oplus V_S(1,1,0,\ldots)\\ & & \oplus V_S(2,0,\ldots) \oplus V_S(3,0,\ldots). \end{array} $$ Let $V_{X}(1,1,0,\ldots)=\bigwedge^2 V_X(1,0,\ldots)$, and let $V_X(3,0,\ldots)$ denote the highest-weight representation in $\mathrm{Sym }^3(V_X(1,0,\ldots))$ so that $$\mathrm{Sym }^3(V_X(1,0,\ldots))=V_X(3,0,\ldots) \oplus V_X(1,0,\ldots).$$ Therefore we obtain $$\begin{array}{rcl} \mathrm{H}^2(X) &=& V_X(1,0,\ldots) \\ \mathrm{H}^4(X) &=& V_X(2,0,\ldots) \oplus V_X(1,0,\ldots) \oplus {\bf 1}_X \\ \mathrm{H}^6(X) &=& V_X(3,0,\ldots) \oplus V_X(1,1,0,\ldots)\oplus V_X(1,0,\ldots) \oplus {\bf 1}_X. \end{array} $$ The trivial factor in $\mathrm{H}^4(X)$ corresponds to the Chern class $c_2(X)$; our main task is to analyze the trivial factor in $\mathrm{H}^6(X)$. \section{Cohomology computations for length-three subschemes} \label{sect:lengththree} The general rule for multiplication in $A\{{\mathfrak S}_n\}$ is fairly complicated, so we will only give formulas in the case ($n=3$) that we need. 
The fact that $A$ only has terms of even degree simplifies the expressions of \cite[2.17]{LS}: $$\begin{array}{rcl} (\alpha_{\{1,2\}}\otimes \beta_{\{3\}})(12) \cdot (\gamma_{\{1,3\}} \otimes \delta_{\{2\}})(13) &=& \alpha \beta \gamma \delta (132) \\ (\alpha_{\{1,2\}} \otimes \beta_{\{3\}})(12) \cdot (\gamma_{\{1,2\}} \otimes \delta_{\{3\}})(12) &=& \Delta_*(\alpha \gamma) \otimes (\beta \delta) (\mathrm{id}) \\ \alpha_{\{1,2,3\}} (123) \cdot \beta_{\{1,2,3\}}(123) &=& (\alpha \beta \, e(A))(132) \\ \alpha_{\{1,2,3\}}(123) \cdot \beta_{\{1,2,3\}}(132) &=& (\Delta_*(\alpha \beta))_{\{1,2,3\}} (\mathrm{id}), \end{array} $$ where $\Delta_*$ is the adjoint of the threefold multiplication $A\otimes A \otimes A \rightarrow A$. The remaining products can be deduced as formal consequences using the associativity of the multiplication, e.g., $$\begin{array}{l} (\alpha_{\{1,2\}} \otimes \beta_{\{3\}})(12) \cdot \gamma_{\{1,2,3\}}(132) \\ \quad = (\alpha_{\{1,2\}} \otimes \beta_{\{3\}})(12) \cdot (\gamma_{\{1,2\}} \otimes 1_{\{3\}})(12)\cdot (13) \\ \quad = (\Delta_*(\alpha \gamma)_{\{1,2\}} \otimes \beta_{\{3\}})(\mathrm{id}) \cdot (1_{\{1,3\}} \otimes 1_{\{2\}})(13) \\ \quad = \alpha \beta \gamma (\Delta_*(1))_{\{1,3\}, \{2\}} (13), \end{array} $$ where $\alpha,\beta,$ and $\gamma$ act on the diagonal via either the first or second variable. Thus in particular $$(12)\cdot (132)=(\Delta_*(1))_{\{1,3\},\{2\}}(13).$$ We compute intersections among the absolute Hodge classes for $S^{[3]}$, i.e., classes that are Hodge for general K3 surfaces $S$. From now on, to condense notation we omit factors of the form $1_{\{i\}},1_{\{i,j\}}$, etc. from our expressions. Based on the representation-theoretic analysis in Section~\ref{sect:representation}, we expect one independent class in codimension one, three in codimension two, and three in codimension three. We have the unique divisor $$\delta = (12)+(13)+(23).$$ In codimension two, we have $$\begin{array}{rcl} P &=& [\mathrm{pt}]_{\{1\}}+ [\mathrm{pt}]_{\{2\}} + [\mathrm{pt}]_{\{3\}} \\ Q &=& \sum_{j=1}^{22} {e_j}_{\{1\}} \otimes {e^{\vee}_j}_{\{2\}} + {e_j}_{\{1\}} \otimes {e^{\vee}_j}_{\{3\}} + {e_j}_{\{2\}} \otimes {e^{\vee}_j}_{\{3\}} \\ R &=& (132)+(123). \end{array} $$ In codimension three, we have $$\begin{array}{rcl} U &=& [\mathrm{pt}]_{\{1,2\}} (12) + [\mathrm{pt}]_{\{1,3\}} (13) + [\mathrm{pt}]_{\{2,3\}} (23) \\ V &=& [\mathrm{pt}]_{\{3\}} (12) + [\mathrm{pt}]_{\{2\}} (13) + [\mathrm{pt}]_{\{1\}} (23) \\ W &=& \sum_{j=1}^{22} {e_j}_{\{1,2\}} \otimes {e_j^{\vee}}_{\{3\}} (12) + {e_j}_{\{1,3\}} \otimes {e_j^{\vee}}_{\{2\}} (13) + {e_j}_{\{2,3\}} \otimes {e_j^{\vee}}_{\{1\}} (23). \end{array} $$ Thus we have $$\begin{array}{rcl} \delta^2&=& (\Delta_*1)_{\{1,2\}}(12)+ (\Delta_*1)_{\{1,3\}}(13)+(\Delta_*1)_{\{2,3\}}(23) \\ & & \mathbin{+} \,\, 3((132)+(123)) \\ &=&-2P-Q+3R. 
\end{array} $$ Furthermore, we have $$\begin{array}{rcl} \delta \cdot P &=& ((12)+(13)+(23))\cdot ([\mathrm{pt}]_{\{1\}} + [\mathrm{pt}]_{\{2\}} + [\mathrm{pt}]_{\{3\}}) \\ &=& 2U+V \\ \delta \cdot Q &=& ((12)+(13)+(23))\cdot (\sum_{j=1}^{22} {e_j}_{\{1\}} \otimes {e^{\vee}_j}_{\{2\}} + {e_j}_{\{1\}} \otimes {e^{\vee}_j}_{\{3\}} \\ & & \quad \quad +\,\, {e_j}_{\{2\}} \otimes {e^{\vee}_j}_{\{3\}} ) \\ &=& 22([\mathrm{pt}]_{\{1,2\}}(12) +[\mathrm{pt}]_{\{1,3\}}(13) + [\mathrm{pt}]_{\{2,3\}}(23)) \\ & & +2 ( \sum_{j=1}^{22} {e_j}_{\{1,2\}} \otimes {e_j^{\vee}}_{\{3\}} (12) + {e_j}_{\{1,3\}} \otimes {e_j^{\vee}}_{\{2\}} (13) + {e_j}_{\{2,3\}} \otimes {e_j^{\vee}}_{\{1\}} (23)) \\ &=& 22U + 2W \\ \delta \cdot R &=&((12)+(13)+(23))((132)+(123))\\ &=&2({\Delta_*1}_{\{1,2\},\{3\}}(12)+{\Delta_*1}_{\{1,3\},\{2\}}(13)+{\Delta_*1}_{\{2,3\},\{1\}}(23))\\ &=& -2(U+V+W). \end{array} $$ We deduce then that $$\delta^3=\delta(-2P-Q+3R)=-32U-8V-8W.$$ Finally, we compute the intersection pairing on the subspace of the middle cohomology spanned by $U,V,$ and $W$. Dimensional considerations give vanishing $$U^2=V^2=U\cdot W=V\cdot W=0.$$ For the remaining numbers, we get $$\begin{array}{rcl} U\cdot V &=& ([\mathrm{pt}]_{\{1,2\}} (12) + [\mathrm{pt}]_{\{1,3\}} (13) + [\mathrm{pt}]_{\{2,3\}} (23) ) \\ & & \quad \quad \cdot ([\mathrm{pt}]_{\{3\}} (12) + [\mathrm{pt}]_{\{2\}} (13) + [\mathrm{pt}]_{\{1\}} (23)) \\ &=&-3 [\mathrm{pt}]_{\{1\}} \otimes [\mathrm{pt}]_{\{2\}} \otimes [\mathrm{pt}]_{\{3\}} \mathrm{id} \end{array} $$ and $$\begin{array}{rcl} W^2 &=& (\sum_{j=1}^{22} {e_j}_{\{1,2\}} \otimes {e_j^{\vee}}_{\{3\}} (12) + {e_j}_{\{1,3\}} \otimes {e_j^{\vee}}_{\{2\}} (13) + {e_j}_{\{2,3\}} \otimes {e_j^{\vee}}_{\{1\}} (23))^2 \\ &=& -3\cdot 22 \cdot [\mathrm{pt}]_{\{1\}} \otimes [\mathrm{pt}]_{\{2\}} \otimes [\mathrm{pt}]_{\{3\}} \mathrm{id}. \end{array} $$ \begin{rema} As a consistency check, we evaluate $$ \begin{array}{rcl} \delta^6&=&(-32U-8V-8W)^2=2^6 (8UV+W^2)\\ &=&2^6 (-24-66) [\mathrm{pt}]_{\{1\}} \otimes [\mathrm{pt}]_{\{2\}} \otimes [\mathrm{pt}]_{\{3\}} \mathrm{id}. \end{array} $$ Using the formula for the point class (Equation~\ref{eqn:pointclass}), we obtain $$\delta^6=-\frac{2^7 \cdot 3^2 \cdot 5}{2\cdot 3}=-2^6 \cdot 3 \cdot 5.$$ This is compatible with the Fujiki-type identity $$D^6=15\left(D,D\right)^3, \quad D\in \mathrm{H}^2(S^{[3]},{\mathbb Q}),$$ as $\left(\delta,\delta\right)=-4$. \end{rema} \section{Evaluation of the distinguished absolute Hodge class} \label{sect:indecomp} Let $S$ be a general K3 surface and $X$ a general deformation of $S^{[3]}$. The computations above show that the middle cohomology of $X$ admits one Hodge class $$\mathrm{H}^6(X,{\mathbb Q})\cap \mathrm{H}^{3,3}(X)={\mathbb Q} \eta$$ and the middle cohomology of $S^{[3]}$ admits three Hodge classes $$\mathrm{H}^6(S^{[3]},{\mathbb Q}) \cap \mathrm{H}^{3,3}(S^{[3]})= {\mathbb Q} \eta \oplus {\mathbb Q} \delta^3 \oplus {\mathbb Q} c_2(X)\delta.$$ Our goal is to compute the self-intersection of $\eta$, at least up to the square of a rational number. Note that $\eta$ is orthogonal to $\delta^3$ and $\delta c_2(X)$ under the intersection form, by the analysis in Section~\ref{sect:representation}. The analysis here gives the one structure constant left open in \cite[Ex. 14]{MarkCrelle}. \begin{prop} \label{prop:eta} Let $X$ be deformation equivalent to $S^{[3]}$, for $S$ a K3 surface. Let $\eta\in \mathrm{H}^6(X,{\mathbb Q})$ denote the unique (up to scalar) absolute Hodge class. Then $\eta^2=-3\cdot 443$. 
\end{prop} \begin{proof} The argument relies heavily on the analysis in Section~\ref{sect:lengththree}. We extract the decomposable classes in codimension three. We have $\delta^3$ already and $$\delta \cdot P=2U+V.$$ Hence the subspace $\mathrm{span}\{2U+V,V-W \}$ is spanned by decomposable classes and has orthogonal complement spanned by $2U-V+11W$. Thus we have $$\eta=2U-V+11W$$ and $$\begin{array}{rcl} \eta^2&=&-4UV+121W^2 \\ &=&(12-121 \times 66)([\mathrm{pt}] \otimes [\mathrm{pt}] \otimes [\mathrm{pt}] )\mathrm{id} \\ &=&-3 \cdot 443. \end{array} $$ Here the final evaluation uses Equation~\ref{eqn:pointclass}, as in the computation of $\delta^6$ above. \end{proof} \section{Proof of the main theorem} \label{sect:LTP} We compute the cohomology class of a Lagrangian subspace ${\mathbb P}^3 \subset X$, where $X$ is deformation equivalent to the Hilbert scheme of length-three subschemes. As we shall see, the formula for $[{\mathbb P}^3]$ involves only decomposable classes, and not the absolute Hodge class $\eta$: \begin{lemm} Let ${\mathbb P}^n \subset X$ be embedded in a general irreducible holomorphic symplectic variety of dimension $2n$. Then we have $$c_{2j}({\mathcal T}_X|{\mathbb P}^n)=(-1)^jh^{2j}\binom{n+1}{j},$$ where $h$ is the hyperplane class. \end{lemm} This is proved using the exact sequence $$0 \rightarrow {\mathcal T}_{{\mathbb P}^n} \rightarrow {\mathcal T}_X|{\mathbb P}^n \rightarrow {\mathcal N}_{{\mathbb P}^n/X} \rightarrow 0$$ and $${\mathcal N}_{{\mathbb P}^n/X}\simeq {\mathcal T}_{{\mathbb P}^n}^{\vee},$$ reflecting the fact that ${\mathbb P}^n$ is a Lagrangian subvariety of $X$. Let $\ell \in \mathrm{H}_2(X,{\mathbb Z})$ denote the class of a line in ${\mathbb P}^3$. Regarding $$ \mathrm{H}^2(X,{\mathbb Z}) \subset \mathrm{H}_2(X,{\mathbb Z}) $$ as a subgroup of index four, we can express $\ell=\lambda/4$ for some divisor class $\lambda \in \mathrm{H}^2(X,{\mathbb Z})$. (This might not be primitive.) Given a deformation of $X$ such that $\lambda$ remains algebraic, the subvariety ${\mathbb P}^3$ deforms as well \cite{HTGAFA99}. Without loss of generality, we can deform $X$ to a variety containing a ${\mathbb P}^3$, but otherwise having a general Hodge structure. In particular, we have an injection $$\mathrm{Sym }(\mathrm{H}^2(X,{\mathbb Q})) \hookrightarrow \mathrm{H}^*(X,{\mathbb Q}).$$ We expect to be able to write $$\left[ {\mathbb P}^3 \right]=a\lambda c_2(X) + b \lambda^3 + d \eta$$ for some $a,b,d \in {\mathbb Q}$. Furthermore, the Fujiki relations \cite{Fujiki} imply that for each $f\in \mathrm{H}^2(X,{\mathbb Z})$, $$f^6= e_0 \left(f,f\right)^3, \quad c_2(X)f^4= e_2 \left(f,f\right)^2, \quad c_4(X)f^2 = e_4 \left(f,f\right) $$ for suitable rational constants $e_0,e_2,e_4$. 
Precisely, we have \cite{EGL} $$c_2^2(X)f^2=\frac{5}{2}c_4(X)f^2.$$ The Riemann-Roch formula gives $$\chi({\mathcal O}_X(f))=\frac{f^6}{6!}+ \frac{c_2(X)f^4}{12 \cdot 4!}+ \frac{f^2(3c_2^2-c_4)}{720 \cdot 2!}+4.$$ On the other hand, we know that $$\chi({\mathcal O}_X(f))=\frac{1}{3!2^3}(\left(f,f\right)+8)(\left(f,f\right)+6) (\left(f,f\right)+4).$$ Perhaps the quickest way to check this formula is to observe that if $X=S^{[3]}$ and $f$ is a very ample divisor on $S$ with no higher cohomology then the induced sheaf ${\mathcal O}_X(f)$ has no higher cohomology and $$\dim \Gamma({\mathcal O}_X(f))=\dim \mathrm{Sym }^3(\Gamma({\mathcal O}_S(f)))= \binom{\chi({\mathcal O}_S(f))+2}{3}.$$ Equating coefficients, we find $$\begin{array}{rcl} f^6 &=& 15 \left(f,f\right)^3 \\ f^4c_2 &=& 108 \left(f,f\right)^2 \\ f^2c_4 &=& 480 \left(f,f\right) \\ f^2c_2^2 &=& 1200 \left(f,f\right) \end{array} $$ We generate Diophantine equations for $a,b,\left(\lambda,\lambda\right)$ and eventually, $d$. First, observe that $$\left(\lambda,\ell \right)=\lambda\cdot \ell =\deg \lambda|{\mathbb P}^3$$ so that $\lambda|{\mathbb P}^3$ is $\left(\lambda,\lambda \right)/4$ times the hyperplane class. Thus we have $$\left[{\mathbb P}^3 \right]\lambda^3 = (\left(\lambda,\lambda \right)/4)^3$$ and $$\left[{\mathbb P}^3 \right]\lambda^3 = a\lambda^4c_2(X)+b\lambda^6.$$ Equating these expressions and evaluating the terms, we find $$\left(\lambda,\lambda\right) (15 b -1/64)+108a=0.$$ We have divided out by $\left(\lambda,\lambda \right)^2$; the solution $\left(\lambda,\lambda\right)=0$ is not possible for geometric reasons, and we shall exclude it algebraically below. Second, the Lemma on restrictions of Chern classes implies $$\left[{\mathbb P}^3 \right] \lambda c_2(X)=-\left(\lambda,\lambda\right)$$ whereas the formula for the class of ${\mathbb P}^3$ yields $$\left[{\mathbb P}^3 \right]\lambda c_2(X)=a\lambda^2c_2(X)^2 + b \lambda^4 c_2(X).$$ Thus we obtain $$108 b \left(\lambda,\lambda \right) + (1200 a + 1)=0.$$ \begin{rema} The cup product of $\mathrm{H}^*(X)$ is compatible with the $G_X$-action, so the subring generated by Chern classes and elements of $\mathrm{H}^2(X)$ is orthogonal to $\eta$. Thus even if the decomposition of $[{\mathbb P}^3]$ were to involve $\eta$, the computations up to this point would not reflect this. \end{rema} Finally, the fact that $$ \left[{\mathbb P}^3 \right]^2=c_3({\mathcal N}_{{\mathbb P}^3/X})=c_3({\mathcal T}_{{\mathbb P}^3}^{\vee})=-4$$ yields the {\em cubic} equation $$15b^2 \left(\lambda,\lambda\right)^3 + 216 ab \left(\lambda,\lambda\right)^2 + 1200 \left(\lambda,\lambda \right)a^2 + d^2 \eta \cdot \eta =-4.$$ Proposition~\ref{prop:eta} implies that $\eta \cdot \eta = -3\cdot 443$. In particular, $\left(\lambda,\lambda\right)=0$ is excluded. Eliminating $a$ and $b$ from these equations and setting $L=\left(\lambda,\lambda\right)$, we obtain \begin{equation} \label{eqn:elliptic} 2^{14}\cdot 3^2\cdot 11\cdot 443 d^2 = 5^2 L^3 + 2^5\cdot 3^2 L^2 + 2^8\cdot 5 L + 2^{16}\cdot 3\cdot 11. \end{equation} We know, {\em a priori}, that $L \in {\mathbb Z}$ and $d\in {\mathbb Q}$. \begin{prop} The only solution to (\ref{eqn:elliptic}) with $L \in {\mathbb Z}$ and $d\in {\mathbb Q}$ is $d=0$ and $L=-48$. \end{prop} We assume this for the moment; its proof can be found in Section~\ref{sect:DA}. Back-substitution yields $$a=1/96, \quad b=1/384, \quad \left(\ell,\ell\right)=-3.$$ We claim that $\lambda/2 \in \mathrm{H}^2(X,{\mathbb Z})$, i.e., $\lambda$ is not primitive. 
Using the isomorphism $$\mathrm{H}_2(X,{\mathbb Z})=\mathrm{H}_2(S,{\mathbb Z}) \oplus_{\perp} {\mathbb Z} \delta^{\vee}, \quad \left(\delta^{\vee},\delta^{\vee}\right)=-1/4$$ we can express $$\ell=D+m\delta^{\vee}, \quad D \in \mathrm{H}_2(S,{\mathbb Z}),m \in {\mathbb Z}.$$ If $\lambda$ were primitive then $m$ would have to be odd and $$-3=\left(\ell,\ell\right)=\left(D,D\right) - m^2 /4.$$ Since $\left(D,D\right) \in 2{\mathbb Z}$, we have a contradiction. \section{Diophantine analysis} \label{sect:DA} \begin{theo} The only solution to \[ 2^{14}\mathord{\cdot} 3^2\mathord{\cdot} 11\mathord{\cdot}443 d^2 = 5^2 L^3 + 2^5\mathord{\cdot} 3^2 L^2 + 2^8\mathord{\cdot} 5 L + 2^{16}\mathord{\cdot}3\mathord{\cdot}11 \] with $L \in {\mathbb Z}$ and $d \in {\mathbb Q}$ is $L = -48$, $d = 0$. \end{theo} \begin{proof} Put $x = 2^{-4}\mathord{\cdot}5^2\mathord{\cdot}11\mathord{\cdot}443 (L + 48)$ and $y = 2\mathord{\cdot}3\mathord{\cdot}5^2\mathord{\cdot}11^2\mathord{\cdot}443^2d$. The equation then takes the form \begin{equation} \label{eq:weierstrass} E: y^2 = x^3 + ax^2 + bx \end{equation} where \[ a = -3^2\mathord{\cdot}11\mathord{\cdot}23\mathord{\cdot}443, \qquad b = 2^2\mathord{\cdot}5^2\mathord{\cdot}11^3\mathord{\cdot}13\mathord{\cdot}443^2. \] It suffices to prove the stronger statement that there are no solutions to \eqref{eq:weierstrass} with $x, y \in {\mathbb Z}[\frac12]$, apart from $x = y = 0$. The proof is given in two steps. Proposition \ref{prop:MW} below determines explicitly the structure of the Mordell--Weil group $E({\mathbb Q})$. Proposition \ref{prop:integral} then identifies the integral points. \end{proof} Algorithms for both of these steps are implemented in computer algebra systems such as {\tt Sage} \cite{sage-4.4.1} and {\tt Magma} \cite{magma}, and the theorem may be verified this way. To avoid depending on the correctness of these systems, we give alternative proofs that use as little machine assistance as possible. The only step that is perhaps unreasonable to verify by hand is that a certain point $P$ with large coordinates (about $30$ digits) lies in $E({\mathbb Q})$. We first set notation and briefly recall some facts about point multiplication on elliptic curves. Let $O$ denote the zero element of $E({\mathbb Q})$ (the point at infinity). For nonzero $R \in E({\mathbb Q})$ we write \[ R = (x(R), y(R)) = \left( \frac{\alpha(R)}{e(R)^2}, \frac{\beta(R)}{e(R)^3} \right), \] where $\alpha, \beta, e \in {\mathbb Z}$, $e \geq 1$ and $(\alpha, e) = (\beta, e) = 1$. Let $R \in E({\mathbb Q})$, $R \neq O$. If $p$ is a prime, then $p \mathbin{|} e(R)$ if and only if $R$ reduces to the identity in $E({\mathbb F}_p)$. If $m \geq 1$ and $mR \neq O$, then $e(R) \mathbin{|} e(mR)$. For $m = 2$ we have the following formula: \begin{equation} \label{eq:double} x(2R) = \frac{\alpha(2R)}{e(2R)^2} = \frac{(\alpha(R)^2 - b \mathord{\cdot} e(R)^4)^2}{4e(R)^2\big(\alpha(R)^3 + a \mathord{\cdot} \alpha(R)^2 e(R)^2 + b \mathord{\cdot} \alpha(R) e(R)^4\big)}. \end{equation} Moreover, if $R$ reduces to a nonsingular point in $E({\mathbb F}_p)$, then $p$ cannot divide both the numerator and denominator of the fraction on the right side of \eqref{eq:double}. In other words, there is no cancellation locally at $p$. One proof of this is given in \cite[Prop.~IV.2]{Wut-thesis}; as pointed out in that paper, it can also be proved from properties of real-valued non-archimedean local heights. 
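The change of variables above is mechanical and can be checked with exact rational arithmetic. The following Python sketch (an illustration added for convenience, not part of the original verification; it assumes nothing beyond the formulas displayed above) confirms that the substitution carries equation~(\ref{eqn:elliptic}) into the Weierstrass form~(\ref{eq:weierstrass}) with the stated coefficients $a$ and $b$, and that $(L,d)=(-48,0)$ corresponds to the two-torsion point $(x,y)=(0,0)$:
\begin{verbatim}
from fractions import Fraction as F

a = -(3**2) * 11 * 23 * 443
b = (2**2) * (5**2) * (11**3) * 13 * (443**2)

def to_xy(L, d):
    # x = 2^{-4} * 5^2 * 11 * 443 * (L + 48),  y = 2 * 3 * 5^2 * 11^2 * 443^2 * d
    return F(5**2 * 11 * 443, 2**4) * (L + 48), F(2 * 3 * 5**2 * 11**2 * 443**2) * d

def dioph(L, d):   # left-hand side minus right-hand side of (eqn:elliptic)
    return (2**14 * 3**2 * 11 * 443) * d**2 \
        - (5**2 * L**3 + 2**5 * 3**2 * L**2 + 2**8 * 5 * L + 2**16 * 3 * 11)

def weier(x, y):   # left-hand side minus right-hand side of (eq:weierstrass)
    return y**2 - (x**3 + a * x**2 + b * x)

# The two equations differ by the constant factor (2*3*5^2*11^2*443^2)^2 / (2^14*3^2*11*443).
scale = F((2 * 3 * 5**2 * 11**2 * 443**2)**2, 2**14 * 3**2 * 11 * 443)
for L in range(-60, 61, 7):
    for d in (F(0), F(1, 3), F(5, 2)):
        x, y = to_xy(L, d)
        assert weier(x, y) == scale * dioph(L, d)
assert to_xy(-48, F(0)) == (F(0), F(0))
print("change of variables verified")
\end{verbatim}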
The discriminant of the Weierstrass equation \eqref{eq:weierstrass} is given by \[ \Delta = 16b^2(a^2 - 4b) = -2^8 \mathord{\cdot} 5^4 \mathord{\cdot} 11^8 \mathord{\cdot} 13^2 \mathord{\cdot} 113 \mathord{\cdot} 127 \mathord{\cdot} 443^6, \] so the model is minimal, and the primes of bad reduction are $2$, $5$, $11$, $13$, $113$, $127$ and $443$. For $p = 2, 5, 11, 13, 443$, we have that $p \mathbin{|} \alpha(R)$ if and only if $R$ reduces to a singular point of $E({\mathbb F}_p)$, i.e.~the only singular point of $E({\mathbb F}_p)$ is $(0:0:1)$ for these primes. The point $Q = (0, 0)$ has order two, and addition with $Q$ is given by the formula \begin{equation} \label{eq:add-Q} R + Q = \left(\frac{b}{x(R)}, \frac{-b \mathord{\cdot} y(R)}{x(R)^2}\right). \end{equation} \begin{prop} \label{prop:MW} We have $E({\mathbb Q}) \cong {\mathbb Z} \times ({\mathbb Z}/2{\mathbb Z})$, where the free part is generated by the point $P$ with coordinates \[ \left(\frac{2 \mathord{\cdot} 3^2 \mathord{\cdot} 11^2 \mathord{\cdot} 83^2 \mathord{\cdot} 443^2 \mathord{\cdot} 6481^2}{7^4 \mathord{\cdot} 41^2 \mathord{\cdot} 71^2 \mathord{\cdot} 193^2}, \frac{2 \mathord{\cdot} 3 \mathord{\cdot} 11^3 \mathord{\cdot} 31 \mathord{\cdot} 83 \mathord{\cdot} 163 \mathord{\cdot} 443^2 \mathord{\cdot} 6481 \mathord{\cdot} 240623 \mathord{\cdot} 3691717}{7^6 \mathord{\cdot} 41^3 \mathord{\cdot} 71^3 \mathord{\cdot} 193^3} \right) \] and the torsion part by $Q = (0, 0)$. \end{prop} \begin{proof} We first check that the torsion subgroup is as described. We have $E({\mathbb F}_3) = {\mathbb Z}/2{\mathbb Z} \times {\mathbb Z}/2{\mathbb Z}$ and $E({\mathbb F}_{19}) = {\mathbb Z}/2{\mathbb Z} \times {\mathbb Z}/7{\mathbb Z}$. For $\ell$ prime, by \cite[Prop.~VII.3.1]{Sil-AEC} we see that $E({\mathbb Q})[\ell]$ injects into $E({\mathbb F}_3)$ for $\ell \neq 3$ and that $E({\mathbb Q})[\ell]$ injects into $E({\mathbb F}_{19})$ for $\ell \neq 19$. These facts force $E({\mathbb Q})[2] = {\mathbb Z}/2{\mathbb Z}$, $E({\mathbb Q})[3] = 0$, and $E({\mathbb Q})[\ell] = 0$ for $\ell \neq 2, 3$. Hence $E_{\text{tors}}({\mathbb Q}) = \langle Q \rightarrowngle$. Now we consider the free part. The point $P$ was found using Cremona's mwrank library \cite{Cre-mwrank} included in {\tt Sage} \cite{sage-4.4.1}. We may check that $P \in E({\mathbb Q})$ using a computer; this shows that $\mathrm{rank}\, E \geq 1$. (The point $P$ is reasonably difficult to find from scratch; indeed the standard functions for computing $E({\mathbb Q})$ in both {\tt Magma} and {\tt Sage} fail to find $P$.) To show that $\mathrm{rank} E \leq 1$ we use a standard $2$-descent strategy (see for example \cite[Ch.~III]{ST-ratpoints}). Consider the auxiliary curve \[ E' : y^2 = x^3 - 2ax^2 + (a^2 - 4b)x. \] There are isogenies $\phi : E \to E'$ and $\hat\phi : E' \to E$ of degree $2$, and injections \begin{gather*} E({\mathbb Q})/\hat\phi(E'({\mathbb Q})) \overset{\psi}{\hookrightarrow} S \subset {\mathbb Q}^*/({\mathbb Q}^*)^2, \\ E'({\mathbb Q})/\phi(E({\mathbb Q})) \overset{\psi'}{\hookrightarrow} S' \subset {\mathbb Q}^*/({\mathbb Q}^*)^2, \end{gather*} where $S$ consists of the cosets $\delta({\mathbb Q}^*)^2$ for $\delta \mathbin{|} 2 \mathord{\cdot} 5 \mathord{\cdot} 11 \mathord{\cdot} 13 \mathord{\cdot} 443$, and $S'$ of the cosets for $\delta \mathbin{|} 11 \mathord{\cdot} 113 \mathord{\cdot} 127 \mathord{\cdot} 443$ (these are the primes dividing $b$ and $a^2 - 4b$ respectively). 
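As a quick sanity check on the finite-field input to the torsion argument above, one can count points by brute force. The following Python sketch (again an added illustration, not part of the original proof) verifies that $|E({\mathbb F}_3)| = 4$ with every affine point of order two, so that $E({\mathbb F}_3) \cong {\mathbb Z}/2{\mathbb Z} \times {\mathbb Z}/2{\mathbb Z}$, and that $|E({\mathbb F}_{19})| = 14$, so that $E({\mathbb F}_{19}) \cong {\mathbb Z}/2{\mathbb Z} \times {\mathbb Z}/7{\mathbb Z}$:
\begin{verbatim}
a = -(3**2) * 11 * 23 * 443
b = (2**2) * (5**2) * (11**3) * 13 * (443**2)

def affine_points(p):
    # affine points of E: y^2 = x^3 + a x^2 + b x over the prime field F_p
    return [(x, y) for x in range(p) for y in range(p)
            if (y * y - (x**3 + a * x**2 + b * x)) % p == 0]

pts3 = affine_points(3)
assert len(pts3) + 1 == 4              # |E(F_3)| = 4, counting the point at infinity
assert all(y == 0 for (_, y) in pts3)  # every affine point is two-torsion

pts19 = affine_points(19)
assert len(pts19) + 1 == 14            # |E(F_19)| = 14, hence Z/2 x Z/7
print("point counts over F_3 and F_19 verified")
\end{verbatim}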
We must determine which elements of $S$ and $S'$ arise from points in $E({\mathbb Q})$ and $E'({\mathbb Q})$. This is achieved by testing for the existence of rational points on the two families of quartic curves \begin{gather} \notag C_\delta : \delta w^2 = \delta^2 z^4 + \delta a z^2 + b, \qquad \delta \in S, \\ \label{eq:Cprime} C'_\delta : \delta w^2 = \delta^2 z^4 - 2\delta a z^2 + (a^2 - 4b), \qquad \delta \in S'. \end{gather} We first consider the $C'_\delta$. If $443 \mathbin{|} \delta$ then \eqref{eq:Cprime} has no solution in ${\mathbb Q}_{443}$; if $(\delta/5) = -1$ then it has no solution in ${\mathbb Q}_5$; and if $\delta \neq 1 \pmod 8$ then it has no solution in ${\mathbb Q}_2$. These conditions rule out all but $\delta = 1$ and $\delta = -113\mathord{\cdot}127$. These correspond to the classes in $E'({\mathbb Q})/\phi(E({\mathbb Q}))$ of $O$ and the unique two-torsion point of $E'({\mathbb Q})$; both have trivial image in $\hat\phi(E'({\mathbb Q}))/2E({\mathbb Q})$. Now we examine the $C_\delta$. For $\delta = 11\mathord{\cdot}13$ there is the trivial rational point $z = 0$, $w = 2\mathord{\cdot}5\mathord{\cdot}11\mathord{\cdot}443$, corresponding to the class of $Q$ in $E({\mathbb Q})/\hat\phi(E'({\mathbb Q}))$. For $\delta = 2$ there is a (highly nontrivial) rational point corresponding to $P$, namely $z = (\frac 12x(P))^{1/2}$, $w = y(P) (2x(P))^{-1/2}$. Rational points are automatic for $\delta = 1$ and $\delta = 2\mathord{\cdot}11\mathord{\cdot}13$ since the image of $\psi$ is a subgroup of $S$. We will show that $C_\delta({\mathbb Q}) = \emptyset$ for all other $\delta$. Rewriting the equation for $C_\delta$ as $4\delta w^2 = (2\delta z^2 + a)^2 - (a^2 - 4b)$, we see immediately that $\delta > 0$ since $a^2 - 4b < 0$. Next, note that $(p/113) = 1$ for $p = 2, 11, 13, 443$, but $(5/113) = -1$. Thus if $5 \mathbin{|} \delta$ we have $(\delta/113) = -1$; this is impossible as $v_{113}(a^2 - 4b) = 1$. Therefore $5 \nmid \delta$. To finish the argument for the $C_\delta$ it suffices to show that $C_\delta({\mathbb Q}) = \emptyset$ for $\delta = 11$, $443$ and $11\mathord{\cdot}443$; the statement for the remaining $\delta$ will then follow automatically from the subgroup property. Let $\delta = 11$, $443$, or $11\mathord{\cdot}443$. Let $u = z^2$ and let $(u, w) = (u_0/t, w_0/t)$ be a rational point on the conic $\delta w^2 = \delta^2 u^2 + \delta a u + b$, where $u_0, w_0, t \in {\mathbb Z}$. Intersecting the conic with a line of slope $X/Y$ through $(u_0/t, w_0/t)$, we obtain the parameterization $z^2 = f(X,Y)/g(X,Y)$ where \begin{align*} f(X, Y) & = u_0 X^2 - 2w_0 XY + (ta + \delta u_0)Y^2, \\ g(X, Y) & = t(X^2 - \delta Y^2), \end{align*} and where we may assume that $X, Y \in {\mathbb Z}$ and $(X, Y) = 1$. Taking resultants, we find that any prime $p$ dividing $f(X, Y)$ and $g(X, Y)$ must divide $t$ or $a^2 - 4b = -11^2\mathord{\cdot}113\mathord{\cdot}127\mathord{\cdot}443^2$. Thus \begin{align} \label{eq:eps-1} f(X, Y) & = \varepsilon Z^2, \\ \label{eq:eps-2} g(X, Y) & = \varepsilon W^2 \end{align} for some $\varepsilon \mathbin{|} 11\mathord{\cdot}113\mathord{\cdot}127\mathord{\cdot}443 t$, and some $W, Z \in {\mathbb Z}$. We now consider each $\delta$ in turn, summarizing the local obstructions encountered for each possible $\varepsilon$. Let $\delta = 11$. We take $u_0 = 3 \mathord{\cdot} 5^2 \mathord{\cdot} 443$, $w_0 = 2^2 \mathord{\cdot} 5 \mathord{\cdot} 11 \mathord{\cdot} 443$, $t = 1$. 
Then $\varepsilon \mathbin{|} 11\mathord{\cdot}113\mathord{\cdot}127\mathord{\cdot}443$. If $443 \mathbin{|} \varepsilon$ then \eqref{eq:eps-2} has no solution in ${\mathbb Q}_{443}$. If $11 \mathbin{|} \varepsilon$ then \eqref{eq:eps-1} has no solution in ${\mathbb Q}_{11}$. If $(\varepsilon/11) = -1$ then \eqref{eq:eps-2} has no solution in ${\mathbb Q}_{11}$. This leaves $\varepsilon \in \{1, 113, -127, -113\mathord{\cdot}127\}$. For these $\varepsilon$ we have $(\varepsilon/443) = 1$. Equation \eqref{eq:eps-1} implies that $X = 14Y$ or $X = 110Y \pmod{443}$; both options contradict \eqref{eq:eps-2}. Now consider $\delta = 443$. We take $u_0 = -3 \mathord{\cdot} 11 \mathord{\cdot} 13$, $w_0 = 11 \mathord{\cdot} 13 \mathord{\cdot} 443$, $t = 2$. Then $\varepsilon \mathbin{|} 2\mathord{\cdot}11\mathord{\cdot}113\mathord{\cdot}127\mathord{\cdot}443$. Suppose that $443 \nmid \varepsilon$. If $(\varepsilon/443) = 1$ then \eqref{eq:eps-2} has no solution in ${\mathbb Q}_{443}$, and if $(\varepsilon/443) = -1$ then \eqref{eq:eps-1} has no solution in ${\mathbb Q}_{443}$. Now let $\varepsilon = 443\varepsilon'$. If $(\varepsilon'/443) = -1$ then \eqref{eq:eps-2} has no solution in ${\mathbb Q}_{443}$. Now assume that $(\varepsilon'/443) = 1$. Observe that $(p/443) = (p/11)$ for $p \in \{-1, 2, 113, 127\}$, but $(11/443) = -1$. This implies that either $11 \nmid \varepsilon'$ and $(\varepsilon'/11) = 1$, or $11 \mathbin{|} \varepsilon'$ and $(\frac{\varepsilon'}{11}/11) = -1$. In both cases, \eqref{eq:eps-1} forces $Y = 10X \pmod{11}$, and this contradicts \eqref{eq:eps-2}. Finally let $\delta = 11\mathord{\cdot}443$. We take $u_0 = 5^3 \mathord{\cdot} 11$, $w_0 = 2 \mathord{\cdot} 5 \mathord{\cdot} 11 \mathord{\cdot} 443$, $t = 3^2$. Then $\varepsilon \mathbin{|} 3\mathord{\cdot}11\mathord{\cdot}113\mathord{\cdot}127\mathord{\cdot}443$. If $11 \nmid \varepsilon$ then there are no solutions to \eqref{eq:eps-1} in ${\mathbb Q}_{11}$. Suppose that $11 \mathbin{|} \varepsilon$. Then since $(p/13) = 1$ for $p \in \{-1, 3, 113, 127, 443\}$ and $(11/13) = -1$, we have $(\varepsilon/13) = -1$; then \eqref{eq:eps-1} has no solution in ${\mathbb Q}_{13}$. This completes the $2$-descent. In particular, we have found that \[ |E({\mathbb Q})/2E({\mathbb Q})| = |E({\mathbb Q})/\hat\phi(E'({\mathbb Q}))| \cdot |\hat\phi(E'({\mathbb Q}))/2E({\mathbb Q})| = 4 \cdot 1 = 4, \] and that $E({\mathbb Q})/2E({\mathbb Q})$ is generated by $P$ and $Q$. Moreover, for $R \neq O, Q$ the image of $x(R)$ in ${\mathbb Q}^*/({\mathbb Q}^*)^2$ is one of $\{1, 2, 11\mathord{\cdot}13, 2\mathord{\cdot}11\mathord{\cdot}13\}$. At this stage we know that $\langle P, Q \rightarrowngle$ is of finite index in $E({\mathbb Q})$; we must still check that it exhausts $E({\mathbb Q})$. Suppose not; then for some prime $\ell$ and some $R \in E({\mathbb Q})$ we have $\ell R = P$ or $\ell R = P + Q$. We cannot have $2R = P$ as $P$ is not divisible by $2$ in $E({\mathbb F}_3)$; similarly $2R = P + Q$ is excluded by considering $E({\mathbb F}_7)$. Thus we may assume that $\ell$ is odd. If $\ell R = P + Q$ we replace $R$ by $R + Q$, so we may now assume that $\ell R = P$ and $\ell (R+Q) = P + Q$. In this case $e(R) \mathbin{|} e(P) = 7^2 \mathord{\cdot} 41 \mathord{\cdot} 71 \mathord{\cdot} 193$. 
From \eqref{eq:add-Q} we have \[ x(P + Q) = \frac{2 \mathord{\cdot} 5^2 \mathord{\cdot} 7^4 \mathord{\cdot} 11 \mathord{\cdot} 13 \mathord{\cdot} 41^2 \mathord{\cdot} 71^2 \mathord{\cdot} 193^2}{3^2 \mathord{\cdot} 83^2 \mathord{\cdot} 6481^2}, \] so similarly $e(R + Q) \mathbin{|} 3 \mathord{\cdot} 83 \mathord{\cdot} 6481$. Moreover by \eqref{eq:add-Q} we have $$ \alpha(R) \alpha(R+Q) = b \mathord{\cdot} e(R)^2 e(R+Q)^2. $$ Since $(\alpha(R), e(R)) = (\alpha(R+Q), e(R+Q)) = 1$ this implies that $\alpha(R) = b_1 e(R+Q)^2$ and $\alpha(R+Q) = (b/b_1) e(R)^2$ for some $b_1 \mathbin{|} b$. Since $P$ has singular reduction at $p = 2, 11, 443$, so does $R$, so $2\mathord{\cdot}11\mathord{\cdot}443 \mathbin{|} b_1$. Similarly we find that $2\mathord{\cdot}5\mathord{\cdot}11\mathord{\cdot}13 \mathbin{|} (b/b_1)$. Comparing with the classes of ${\mathbb Q}^*/({\mathbb Q}^*)^2$ found by the $2$-descent shows that we must have $b_1 = 2\mathord{\cdot}11^2\mathord{\cdot}443^2$ and $b/b_1 = 2\mathord{\cdot}5^2\mathord{\cdot}11\mathord{\cdot}13$. At this point we have reduced to $24$ possibilities for $e(R)$ and $8$ possibilities for $\alpha(R)$, and it is straightforward to check using a computer that the only pair defining a point on $E({\mathbb Q})$ is $R = P$. Alternatively one may finish the argument using congruences. We sketch one quick way to do it: first prove that $3 \mathbin{|} \alpha(R)$ by considering images in $E({\mathbb F}_3)$. Then for only 10 remaining values of $x(R)$ is $x^3 + ax^2 + bx$ a square in ${\mathbb Q}_{11}$, and for only one of these is it a square in ${\mathbb Q}_{31}$. \end{proof} \begin{prop} \label{prop:integral} The only solution to \eqref{eq:weierstrass} with $x, y \in {\mathbb Z}[\frac12]$ is $x = y = 0$. \end{prop} \begin{proof} Let $n \in {\mathbb Z}$, $k \in \{0, 1\}$. We must prove that $x(nP + kQ) \notin {\mathbb Z}[\frac12]$ for $n \neq 0$. We consider several cases. First suppose that $k = 0$ and $n \neq 0$. Since $7 \mathbin{|} e(P)$, also $7 \mathbin{|} e(nP)$, so $x(nP) \notin {\mathbb Z}[\frac12]$. Next suppose that $k = 1$ and that $n$ is odd. Since $3 \mathbin{|} e(P + Q)$, we have $3 \mathbin{|} e(n(P + Q)) = e(nP + Q)$, so $x(nP + Q) \notin {\mathbb Z}[\frac12]$. Now suppose that $k = 1$ and $n = 2r$ where $r$ is odd. Since $79 \mathbin{|} e(2P + Q)$, we have $79 \mathbin{|} e(r(2P + Q)) = e(nP + Q)$, so $x(nP + Q) \notin {\mathbb Z}[\frac12]$. The last case is $k = 1$, $n = 0 \pmod 4$, $n \neq 0$. Write $n = 2^i r$ for some $i \geq 2$ and odd $r$. To continue the pattern we must find a prime $q$ playing the same role as $79$ from the previous case. For this, we first establish that \begin{equation} \label{eq:mod7} \alpha(2^j P) = \pm 4 \pmod 7 \qquad \text{for $j \geq 2$.} \end{equation} Indeed, one checks that $4P$ has nonsingular reduction for all $p$. The doubling formula \eqref{eq:double} and the comments regarding cancellation immediately following it then imply that $\alpha(2^{j+1}P) = \pm (\alpha(2^j P)^2 - b e(2^j P)^4)^2$ for all $j \geq 2$. Since $7 \mathbin{|} e(2^j P)$ and $\alpha(4P) = \pm 4 \pmod 7$, identity \eqref{eq:mod7} follows by induction. In particular $\alpha(2^i P) = \pm 4 \pmod 7$, so there must exist some prime $q$, not congruent to $1$ modulo $7$, dividing $\alpha(2^i P)$. We cannot have $q = 113$ or $q = 127$, as both of these are $1 \pmod 7$. Also, $q \notin \{2, 5, 11, 13, 443\}$, since for all of these primes the point $(0:0:1)$ is singular in $E({\mathbb F}_p)$, whereas $2^i P$ has nonsingular reduction for all $p$. 
Therefore $q$ is not a prime of bad reduction. From \eqref{eq:add-Q} we obtain $q \mathbin{|} e(2^i P + Q)$. Finally, since $nP + Q = r(2^i P + Q)$, we have also $q \mathbin{|} e(nP + Q)$, so that $x(nP + Q) \notin {\mathbb Z}[\frac12]$. \end{proof} \begin{rema} In several places in the above proof we use certain facts about $2P$ and $4P$. It is not necessary to compute their full coordinates, which are quite large (for example $\alpha(4P)$ has 256 digits). In every case it is possible to work $p$-adically to low precision. For example, to check that $4P$ has nonsingular reduction at $2$, it suffices to apply the doubling formula twice, using as input $a = 1 \pmod{2^3}$, $b = 28 \pmod{2^5}$ and $x(P) = 2 \pmod{2^4}$, to find that $x(2P) = 4 \pmod{2^5}$ and $x(4P) = 2^{-4} \pmod{2^{-3}}$. \end{rema} \end{document} \documentclass[11pt]{article} \usepackage{amsmath} \usepackage{amssymb} \usepackage{amsthm} \newtheorem{thm}{Theorem} \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}[thm]{Lemma} \theoremstyle{remark} \newtheorem*{rem}{Remark} \begin{document} \title{Current situation} \maketitle \end{document}
math
\begin{document} \title{Relations Between Greedy and Bit-Optimal LZ77~Encodings} \begin{abstract} This paper investigates the size in bits of the LZ77 encoding, which is the most popular and efficient variant of the Lempel--Ziv encodings used in data compression. We prove that, for a wide natural class of variable-length encoders for LZ77 phrases, the size of the greedily constructed LZ77 encoding on constant alphabets is within a factor $O(\frac{\log n}{\log\log\log n})$ of the optimal LZ77 encoding, where $n$ is the length of the processed string. We describe a series of examples showing that, surprisingly, this bound is tight, thus improving both the previously known upper and lower bounds. Further, we obtain a more detailed bound $O(\min\{z, \frac{\log n}{\log\log z}\})$, which uses the number $z$ of phrases in the greedy LZ77 encoding as a parameter, and construct a series of examples showing that this bound is tight even for a binary alphabet. We then investigate the problem on non-constant alphabets: we show that the known $O(\log n)$ bound is tight even for alphabets of logarithmic size, and provide tight bounds for some other important cases. \end{abstract} \section{Introduction} The Lempel--Ziv encoding~\cite{LZ77} (LZ77 for short) is one of the most popular and efficient compression techniques used in data compression, stringology, and algorithms in general. The LZ77 encoding lies at the heart of common compressors such as {\tt gzip}, {\tt 7zip}, {\tt pkzip}, {\tt rar}, etc., and serves as a basis for modern compressed text indexes on highly repetitive data (e.g., see~\cite{GGKNP2,KreftNavarroTCS,MakinenNavarro}). Numerous papers on LZ77 have been published during the last 40 years. In these works, it was proved that LZ77 is superior to many other compression schemes both in practice and in theory. For instance, in~\cite{KosarajuManzini,LZ78,WynerZiv} it was shown that LZ77 is asymptotically optimal with respect to different entropy-related measures; further, in~\cite{CharikarEtAl} it was proved that many other reference-based encoders (including LZ78~\cite{LZ78}) use polynomially (in the length of the uncompressed data) more space than LZ77 in the worst case and, in a sense, are never significantly better than LZ77. However, many problems related to LZ77 are still not completely solved. In this paper we investigate how good the popular greedy LZ77 encoder is in a class of practically motivated models with variable-length encoders for LZ77 phrases; to formulate the problem that we study more precisely, let us first discuss what is known about different LZ77 encoders. LZ77 is a dictionary-based compression scheme that replaces a string with phrases that are actually references to strings in a dictionary. Each phrase of an LZ77 encoding can be viewed as a triple $\langle d,\ell,c\rangle$, where $\ell$ is the length of the phrase, $d$ is the distance to a string of length $\ell{-}1$ from the dictionary such that this string is a prefix of the phrase, and $c$ is the last letter of the phrase (the precise definition follows); we use the definition from~\cite{LZ77} but all our results can be adapted for the version of LZ77 from~\cite{StorerSzymanski}, in which phrases are encoded by pairs $\langle d,\ell\rangle$ (throughout the paper, we provide the reader with separate remarks in cases where such adaptation is not straightforward). The same string can have many different LZ77 encodings. 
It is well known that the greedily constructed LZ77 encoding, which builds the encoding from left to right, making each phrase as long as possible during this process, is optimal in the sense that it produces the minimal number of phrases among all LZ77 encodings of the string (see~\cite{CharikarEtAl,Rytter03,StorerSzymanski}). The same optimality property holds for the versions of LZ77 with ``sliding window''~\cite{CrochemoreLangiuMignosi}, which is a restriction that is important for practical applications. However, in practice, compressors usually use variable-length encoders for phrases and, in this case, it is not clear whether the greedy LZ77 encoder is optimal in the sense that it outputs the minimal number of bits. The question of finding an optimal LZ77 encoding for variable-length phrase encoders was raised in~\cite{RajpootSahinalp} and the first attempts to solve this problem were given in~\cite{FerraginaNittoVenturini}. The authors of~\cite{FerraginaNittoVenturini} also conducted the first theoretical studies to find how bad the greedy LZ77 encoding is compared to an optimal LZ77 encoding. Such questions make sense only if we state formally which kinds of phrase encoders are used in the LZ77 encoder. As in~\cite{FerraginaNittoVenturini}, we investigate encoders that encode each phrase $\langle d,\ell,c\rangle$ using $\Theta(\log d + \log \ell + \log c)$ bits\footnote{Throughout the paper all logarithms have base $2$ if it is not explicitly stated otherwise.} (see a more formal discussion below). This class of phrase encoders includes a broad range of practically used encoders and, among others, Elias's~\cite{Elias} and Levenshtein's~\cite{Levenshtein} encoders, which produce asymptotically optimal universal codes for the numbers $d, \ell, c$; we refer the reader to~\cite{FerraginaNittoVenturini} for further discussions on the motivation. In the described model, there are two ways to optimize the size of the produced LZ77 encoding. The first way is to minimize $d$ in the triples $\langle d,\ell,c\rangle$. This problem was addressed already in~\cite{FerraginaNittoVenturini} for the greedy LZ77 encoder, where one must find the rightmost occurrence of the referenced part of each phrase; several improvements on this result of~\cite{FerraginaNittoVenturini} and related questions were given in~\cite{AmirLandauUkkonen,BelazzouguiPuglisi,BilleCordingFischerGortz,CrochemoreLangiuMignosi2,Larsson}. The second way is to consider both parameters $\ell$ and $d$, i.e., to build an optimal LZ77 encoding. There are very few works in this direction (see~\cite{CrochemoreEtAl2} and~\cite{FerraginaNittoVenturini}) and there is still room for improvement in such results. Due to the overall difficulty of the problem of finding an optimal LZ77 encoding, real compressors usually construct an LZ77 encoding greedily. Thus, this raises the following question: how bad can the produced greedy LZ77 encoding be compared to an optimal LZ77 encoding? For a given string of length $n$, denote by $\mathsf{LZ_{gr}}$ and $\mathsf{LZ_{opt}}$ the sizes in bits of, respectively, the greedily constructed and an optimal LZ77 encodings from the special class of encodings that we consider in this paper (see clarifications in Section~\ref{SectPreliminaries}). We investigate the ratio $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}}$. 
Upper bounds on this ratio are provided in terms of the parameters $n$, $z$, and $\sigma$, where $z$ is the number of phrases in the greedy LZ77 encoding of the considered string (it is well known that any other LZ77 encoding contains at least $z$ phrases; see~\cite{CharikarEtAl,Rytter03,StorerSzymanski}) and $\sigma$ is the alphabet size. We are also interested in upper bounds that use only the parameter $n$. In~\cite{FerraginaNittoVenturini} it was proved that $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} = O(\log n)$ and there is a series of examples on which $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} = \Omega(\frac{\log n}{\log\log n})$. In this paper we improve these results and our bounds in many cases are tight in the sense that there are series of examples on which these bounds are attained; our main contributions are summarized in Table~\ref{tbl:results}. \begin{table}[ht] \caption{Upper bounds on $\mathsf{LZ_{gr}}/ \mathsf{LZ_{opt}}$; tight bounds are denoted by $\Theta$.} \begin{center} \begin{tabular}{r|c|c}\hline ~ & parameter $n$ & parameters $n, z, \sigma$ \\\midrule $\sigma = O(1)$ & $\Theta(\frac{\log n}{\log\log\log n})$ & $\Theta(\min\{z, \frac{\log n}{\log\log z}\})$ \\ arbitrary $\sigma$ & $\Theta(\log n)$ & $O(\min\{z, \frac{\log n}{\log\log_\sigma z}\})$ \\\hline \end{tabular} \label{tbl:results} \end{center} \end{table} First, we study the case of constant alphabets and completely solve it. Namely, in Theorem~\ref{MainTheorem}, we find the following detailed upper bound on the ratio $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}}$ (note that this bound is also applicable for arbitrary alphabets): $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} = O(\min\{z, \frac{\log n}{\log\log_\sigma z}\}).$ In the case of constant alphabets this upper bound degenerates to $O(\min\{z, \frac{\log n}{\log\log z}\})$. In Theorem~\ref{ExampleTheorem} we construct a series of examples on the binary alphabet showing that this simplified bound is tight, thus closing the problem for constant alphabets. Theorem~\ref{ExampleTheorem} actually provides a more elaborate lower bound $\Omega(\min\{z, \frac{\log n}{\log\log_\sigma z + \log\sigma}\})$, which is applicable for arbitrary alphabets. From these general results, we deduce in Corollary~\ref{OnlyNbound} that $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} = O(\frac{\log n}{\log\log\log n})$ for constant alphabets, and this upper bound is tight. Then, we consider the case of arbitrary alphabets. It is shown in Theorem~\ref{ExampleTheorem2} that the upper bound $O(\log n)$ on the ratio $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}}$ is tight even if the input alphabet has logarithmic size. Thus, we solve the problem in the general case and find that the tight upper bounds, expressed in terms of $n$, for constant and arbitrary alphabets differ by a $\Theta(\log\log\log n)$ factor. As a side note, for polylogarithmic alphabets and $z \ge 2^{\log^\epsilon n}$, where $\epsilon > 0$ is an arbitrary constant, we obtain in Corollary~\ref{MainIsTight2} the upper bound $O(\frac{\log n}{\log\log n})$ and show that this bound is tight for such alphabets and such $z$. Informally, the strings for which the condition $z \ge 2^{\log^\epsilon n}$ holds (which includes the case $z \ge n^\delta$, where $\delta > 0$ is an arbitrary constant) can be called ``non-extremely compressible'' strings. Thus, we, in a sense, solve the problem in the arguably most important case of ``non-extremely compressible'' strings drawn from polylogarithmic alphabets. 
The paper is organized as follows. In the following Section~\ref{SectPreliminaries} we introduce some basic notions used throughout the text and, in particular, formally define LZ77 parsings and encodings. Section~\ref{SectUpperBound} describes a detailed upper bound on the ratio of the sizes in bits of the greedy and optimal LZ77 encodings. In Section~\ref{SectLowerBound} it is shown that, on constant alphabets, this bound is tight. The material of these two sections provides a complete solution of the problem for constant alphabets, which turns out to be quite simple. We then consider arbitrary alphabets in Section~\ref{SectArbitraryAlphabet} and find tight bounds for several important cases, including the general case of arbitrary alphabet and arbitrary $z$, for which, as it turns out, the known $O(\log n)$ bound is tight. Finally, we conclude with some remarks and open problems in Section~\ref{SectConclusion}. \section{Preliminaries}\label{SectPreliminaries} A \emph{string $s$} over an alphabet $\Sigma$ is a map $\{1,2,\ldots,n\} \to \Sigma$, where $n$ is referred to as the \emph{length of $s$}, denoted by $|s|$. In this paper we assume that the alphabet is a set of non-negative integers that are less than or equal to $n$, which is a common and natural assumption in the problem under investigation. We write $s[i]$ for the $i$th letter of $s$ and $s[i..j]$ for $s[i]s[i{+}1]\cdots s[j]$. A string $u$ is a \emph{substring} of $s$ if $u = s[i..j]$ for some $i$ and $j$; the pair $(i,j)$ is not necessarily unique and we say that $i$ specifies an \emph{occurrence} of $u$ in $s$ starting at position $i$. A substring $s[1..j]$ (resp., $s[i..n]$) is a \emph{prefix} (resp., \emph{suffix}) of $s$. We say that substrings $s[i..j]$ and $s[i'..j']$ \emph{overlap} if $j \ge i'$ and $i \le j'$. For any $i,j$, the set $\{k\in \mathbb{Z} \colon i \le k \le j\}$ (possibly empty) is denoted by $[i..j]$. An \emph{LZ77 parsing} of a given string $s$ is a parsing $s = f_1f_2\cdots f_z$ such that all the strings $f_1, \ldots, f_z$ (called \emph{phrases}) are non-empty and, for any $i \in [1..z]$, either $f_i$ is a letter, or $|f_i| > 1$ and the string $f_i[1..|f_i|{-}1]$ has an earlier occurrence starting at some position $j \le |f_1f_2\cdots f_{i-1}|$ (note that this occurrence can overlap $f_i$). The \emph{greedy LZ77 parsing} is a special LZ77 parsing built by the greedy procedure that constructs all phrases from left to right by choosing each phrase $f_i$ as the longest substring starting at a given position such that $f_i[1..|f_i|{-}1]$ has an earlier occurrence in the string (see~\cite{LZ77}). For instance, the greedy LZ77 parsing of the string $s = abababbbaba$ is $a.b.ababb.baba$. The following lemma is straightforward. \begin{lemma} All phrases in the greedy LZ77 parsing of a given string (except, possibly, for the last phrase) are distinct. \label{DistinctPhrases} \end{lemma} It is also well known that, for a given string, the greedy LZ77 parsing has the minimal number of phrases among all LZ77 parsings (e.g., see~\cite{CharikarEtAl,Rytter03,StorerSzymanski}). This implies that, when each phrase of the parsing is encoded by a fixed number of bits, the greedy LZ77 parsing is optimal, i.e., it produces an encoding of the minimal size in bits. However, the greedy LZ77 parsing does not necessarily produce an encoding of the minimal size when one uses a variable-length encoder for phrases; the latter is usually the case in most common compressors. 
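For concreteness, the greedy procedure just described admits the following short Python sketch (an illustration of the definition above, not an efficient implementation). It reproduces the example parsing $a.b.ababb.baba$:
\begin{verbatim}
def greedy_lz77(s):
    # Greedy LZ77 parsing: each phrase f_i is the longest substring starting at the
    # current position whose prefix f_i[1..|f_i|-1] occurs starting at an earlier
    # position (the earlier occurrence may overlap the phrase itself).
    phrases, i, n = [], 0, len(s)
    while i < n:
        ell = 1  # a single letter is always allowed: its empty prefix trivially occurs earlier
        while i + ell < n and any(s[j:j + ell] == s[i:i + ell] for j in range(i)):
            ell += 1
        phrases.append(s[i:i + ell])
        i += ell
    return phrases

assert greedy_lz77("abababbbaba") == ["a", "b", "ababb", "baba"]
print(greedy_lz77("abababbbaba"))
\end{verbatim}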
Let us clarify what kinds of variable-length phrase encoders we are to consider in this paper. A given LZ77 parsing $f_1f_2\cdots f_z$ is encoded as follows. Each phrase $f_i$ is represented by a triple $\langle d, \ell, c\rangle$, where $\ell = |f_i|$, $c = f_i[|f_i|]$, and $d = |f_1f_2\cdots f_{i-1}| - j$, where $j$ is the position of an earlier occurrence of $f_i[1..|f_i|{-}1]$ (assuming that $d = 0$ if $|f_i| = 1$). We choose three encoders $e_{d}, e_{\ell}, e_c$, each of which maps non-negative integers to bit strings. We then transform each triple $\langle d,\ell,c\rangle$ into the binary string $e_{d}(d)e_{\ell}(\ell)e_{c}(c)$ and concatenate all these binary strings, thus producing an \emph{LZ77 encoding} corresponding to the given LZ77 parsing. In this paper we consider only encoders $e_{d}, e_{\ell}, e_c$ that map any positive integer $x$ to a bit string of length $\Theta(\log(x+1))$. This family of encoders includes most widely used encoders such as Elias's~\cite{Elias} and Levenshtein's~\cite{Levenshtein} ones (see~\cite{FerraginaNittoVenturini} for further motivation). We fix three encoders $e_{d}, e_{\ell}, e_c$ satisfying the above property and, hereafter, assume that all considered LZ77 encodings are obtained using these $e_{d}, e_{\ell}, e_c$. We say that an LZ77 encoding is \emph{optimal} if it has the minimal size in bits. It is shown below that, unlike the case of fixed-length phrase encoders, for the family of phrase encoders under investigation, the LZ77 encoding generated by the greedy LZ77 parsing (which is called the \emph{greedy LZ77 encoding}) is not necessarily optimal. Among all possible greedy LZ77 encodings we always consider those that occupy the minimal number of bits; usually, such an encoding is obtained by minimizing the numbers $d$ in the triples $\langle d,\ell,c\rangle$ representing the phrases of the greedy LZ77 parsing. \begin{remark} Most common compressors actually use a different variant of the LZ77 parsing (which was introduced in~\cite{StorerSzymanski}), defining each phrase $f_i$ as either a letter or a string that has an earlier occurrence (note that in the definition of LZ77 parsings only the prefix $f_i[1..|f_i|{-}1]$ of $f_i$ must have an earlier occurrence). We call this variant a \emph{nonclassical LZ77 parsing} (as it differs from the original parsing proposed in~\cite{LZ77}). The \emph{greedy nonclassical LZ77 parsing} is defined by analogy with the greedy LZ77 parsing. In an encoding corresponding to a nonclassical LZ77 parsing, each phrase is represented either by a pair $\langle d,\ell\rangle$ that is defined analogously to the triples $\langle d,\ell,c\rangle$, or by one letter. This variant of LZ77 is very similar to the one that we investigate and, moreover, all our results can be adapted for this variant. In the sequel, we provide separate remarks that explicitly show how to generalize our results to nonclassical LZ77 parsings if it is not straightforward. \end{remark} \section{Upper Bound}\label{SectUpperBound} Our proof of the upper bound on the ratio between the sizes of the greedy and optimal LZ77 encodings is as follows: first, we obtain an upper bound $U$ on the size of the greedy LZ77 encoding, then we find a lower bound $L$ on the size of any LZ77 encoding, and finally, we derive the estimate $\frac{U}{L}$ for the ratio. The details follow. Let $s$ be a string of length $n$. Recall that any letter of $s$ is an integer from the range $[0..n]$. 
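Before bounding these sizes, here is one possible concrete choice of the encoders $e_d, e_\ell, e_c$, sketched in Python (an added illustration consistent with the requirements above, not the specific encoders fixed in the paper): Elias gamma codes applied to each component shifted by one, which use $\Theta(\log(x+1))$ bits for positive $x$. The triples below are those of the greedy parsing $a.b.ababb.baba$ from Section~\ref{SectPreliminaries}, with rightmost earlier occurrences so that the distances $d$ are minimized; the letters are mapped to integer codes ($a \to 0$, $b \to 1$) only for the purpose of the illustration.
\begin{verbatim}
def elias_gamma(x):
    # Elias gamma code of a positive integer x: floor(log2 x) zeros, then x in binary;
    # its length is 2*floor(log2 x) + 1 bits.
    binary = bin(x)[2:]
    return "0" * (len(binary) - 1) + binary

def encode_phrase(d, ell, c):
    # encode the triple <d, ell, c>, each component being a non-negative integer
    return elias_gamma(d + 1) + elias_gamma(ell + 1) + elias_gamma(c + 1)

# triples of the greedy parsing a.b.ababb.baba of "abababbbaba", with letters a -> 0, b -> 1
triples = [(0, 1, 0), (0, 1, 1), (1, 5, 1), (3, 4, 0)]
encoding = "".join(encode_phrase(d, ell, c) for d, ell, c in triples)
print(len(encoding), "bits:", encoding)
\end{verbatim}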
Based on the above mentioned properties of the phrase encoders $e_{d}, e_{\ell}, e_c$, one can easily show that each phrase of any LZ77 encoding of $s$ occupies $O(\log n)$ bits. Therefore, we obtain the following upper bound on the size of the greedy LZ77 encoding. \begin{lemma} Let $\mathsf{LZ_{gr}}$ be the size in bits of the greedy LZ77 encoding of a given string of length $n$. Then, we have $\mathsf{LZ_{gr}} = O(z\log n)$, where $z$ is the number of phrases in the encoding.\label{GreedyLZ77upper} \end{lemma} The lower bound on any LZ77 encoding is more complicated. Lemmas~\ref{TechLemma},~\ref{LZ77intersect},~\ref{LongPhrasesSet} below are well known but we, nevertheless, provide their proofs for the sake of completeness. \begin{lemma} For any positive integers $t, t_1, \ldots, t_k$ such that $\sum_{i=1}^k t_i \ge t$, we have $\sum_{i=1}^k\log t_i \ge \log(t-k+1)$. \label{TechLemma} \end{lemma} \begin{proof} Note that $\sum_{i=1}^k\log t_i = \log\prod_{i=1}^k t_i$. Since for any $t_j$ and $t_{j'}$ such that $t_j \ge t_{j'}$, we have $(t_j + 1)(t_{j'} - 1) = t_jt_{j'} - (t_j - t_{j'} + 1) < t_jt_{j'}$, the product $\prod_{i=1}^k t_i$ is minimized when $t_1 = t - k + 1$ and $t_2 = t_3 = \cdots = t_{k} = 1$ (recall that every number $t_i$ must be a positive integer). Therefore, we obtain $\sum_{i=1}^k\log t_i \ge \log(t-k+1)$. \end{proof} \begin{lemma} Any phrase of an LZ77 parsing of a string can overlap with at most two phrases of the greedy LZ77 parsing of the same string. \label{LZ77intersect} \end{lemma} \begin{proof} Suppose, for the sake of contradiction, that a phrase $f$ of an LZ77 parsing overlaps with at least three phrases of the greedy LZ77 parsing. Then, $f[1..|f|{-}1]$ must contain a phrase $f'$ of the greedy LZ77 parsing as a proper substring. But then the string $f'$ occurs in an earlier occurrence of the string $f[1..|f|{-}1]$ and, therefore, the greedy construction procedure could choose a longer phrase during the construction of the phrase $f'$, which is a contradiction. \end{proof} \begin{lemma} In the greedy LZ77 parsing of any string of length $n$ over an alphabet of size $\sigma \ge 2$, at least $z - 2\sqrt{z}$ phrases have length ${\ge}\frac{1}{2}\log_\sigma z$, where $z$ is the number of phrases.\label{LongPhrasesSet} \end{lemma} \begin{proof} Denote by $f_1f_2\cdots f_z$ the greedy LZ77 parsing of a given string of length $n$ over an alphabet of size $\sigma$. By Lemma~\ref{DistinctPhrases}, all the phrases $f_1, \ldots, f_{z-1}$ are distinct. Therefore, for any $\ell > 0$, at most $\sum_{i=0}^{\ell} \sigma^i = \frac{\sigma^{\ell+1} - 1}{\sigma - 1}$ of these phrases have length at most $\ell$. Since for any $\ell < \frac{1}2\log_\sigma z$, we have $\sum_{i=0}^{\ell}\sigma^i < \frac{\sqrt{z}\sigma - 1}{\sigma - 1}$, the number of phrases with length at least $\frac{1}2 \log_\sigma z$ must be greater than $(z - 1) - \frac{\sqrt{z}\sigma - 1}{\sigma - 1}$. Thus, it remains to prove that $1 + \frac{\sqrt{z}\sigma - 1}{\sigma - 1} \le 2\sqrt{z}$. It is easy to show that, for $\sigma \ge 2$, the function $\frac{\sqrt{z}\sigma - 1}{\sigma - 1}$ decreases as $\sigma$ grows. Hence, we deduce $1 + \frac{\sqrt{z}\sigma - 1}{\sigma - 1} \le 1 + \frac{2\sqrt{z} - 1}{2 - 1} = 2\sqrt{z}$. \end{proof} \begin{lemma} Let $\mathsf{LZ_{opt}}$ be the size in bits of an optimal LZ77 encoding of a string of length $n$ over an alphabet of size $\sigma \ge 2$. 
Then, we have $\mathsf{LZ_{opt}} = \Omega(\log n + z\log\log_\sigma z)$, where $z$ is the number of phrases in the greedy LZ77 parsing of this string. \label{OptLZ77lower} \end{lemma} \begin{proof} Denote by $f_1f_2\cdots f_{z'}$ the LZ77 parsing corresponding to an optimal LZ77 encoding of the string under consideration. By the definition of the phrase encoders, we have $\mathsf{LZ_{opt}} \ge \Omega(\sum_{i=1}^{z'} \log |f_i|)$. It follows from Lemma~\ref{TechLemma} that $\mathsf{LZ_{opt}} \ge \Omega(\log(n - z'))$. Since, obviously, $\mathsf{LZ_{opt}} \ge z'$, the latter implies $\mathsf{LZ_{opt}} \ge \Omega(z' + \log(n - z')) \ge \Omega(\log n)$. Denote by $f'_1f'_2\cdots f'_z$ the greedy LZ77 parsing of the same string. Let $S$ be the set of all phrases in this parsing with lengths at least $\frac{1}2 \log_\sigma z$. By Lemma~\ref{LongPhrasesSet}, we have $|S| \ge z - 2\sqrt{z} = \Theta(z)$. Consider a phrase $f' \in S$. Let $f_g, f_{g+1}, \ldots, f_{h}$ be all phrases in the parsing $f_1f_2\cdots f_{z'}$ that overlap with the phrase $f'$. Since $|f_gf_{g+1}\cdots f_h| \ge |f'|$, Lemma~\ref{TechLemma} implies that $(h - g) + \log|f_g| + \log|f_{g+1}| + \cdots + \log|f_h| \ge (h - g) + \log(|f'| - (h - g)) \ge \Omega(\log|f'|)$. Thus, the encodings of the phrases $f_g, f_{g+1}, \ldots, f_h$ all together occupy $\Omega(\log|f'|)$ bits. By Lemma~\ref{LZ77intersect}, any phrase $f_i$ of the parsing $f_1\cdots f_{z'}$ overlaps with at most two phrases of the parsing $f'_1\cdots f'_z$. Therefore, the encodings of all phrases $f_1, \ldots, f_{z'}$ occupy $\frac{1}{2}\Omega(\sum_{f' \in S} \log|f'|) \ge \Omega(|S|\log\log_\sigma z) = \Omega(z\log\log_\sigma z)$ overall bits. \end{proof} \begin{theorem} Let $z$ be the number of phrases in the greedy LZ77 parsing of a given string of length $n$ drawn from an alphabet of size $\sigma$. Denote by $\mathsf{LZ_{gr}}$ and $\mathsf{LZ_{opt}}$ the sizes in bits of, respectively, the greedy and optimal LZ77 encodings of this string. Then, we have $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} = O(\min\{z, \frac{\log n}{\log\log_\sigma z}\})$.\label{MainTheorem} \end{theorem} \begin{proof} By Lemmas~\ref{GreedyLZ77upper} and~\ref{OptLZ77lower}, $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \le \frac{O(z\log n)}{\Omega(\log n + z\log\log_\sigma z)} = O(\frac{z\log n}{\log n + z\log\log_\sigma z})$. Since $\frac{z\log n}{\log n + z\log\log_\sigma z} \le \frac{z\log n}{\log n} = z$ and $\frac{z\log n}{\log n + z\log\log_\sigma z} \le \frac{\log n}{\log\log_\sigma z}$, the result follows. \end{proof} \begin{corollary} For constant alphabet, $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} = O(\frac{\log n}{\log\log\log n})$.\label{OnlyNbound} \end{corollary} \begin{proof} We have $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} = O(\min\{z, \frac{\log n}{\log\log z}\})$ due to Theorem~\ref{MainTheorem}. The functions $z \mapsto z$ and $z \mapsto \frac{\log n}{\log\log z}$, respectively, increase and decrease as $z$ grows. Therefore, the maximum of the function $\min\{z, \frac{\log n}{\log\log z}\}$ is reached when $z = \frac{\log n}{\log\log z}$. Solving this equation, we obtain $z = \Theta(\frac{\log n}{\log\log\log n})$, which proves the result. \end{proof} \begin{remark} To generalize the described results to nonclassical LZ77 parsings, one should use, instead of Lemma~\ref{DistinctPhrases}, the following straightforward lemma. 
\begin{lemma} Suppose that $s = f_1f_2\cdots f_z$ is the greedy nonclassical LZ77 parsing of a given string $s$; then, all the strings $f_i\cdot f_{i+1}[1]$, for $i \in [1..z{-}1]$, are distinct.\label{DistinctPhrases2} \end{lemma} The rest can be easily reconstructed by analogy. \end{remark} \section{Lower Bound}\label{SectLowerBound} We now construct a series of examples showing that, for several important cases, the upper bound given in Theorem~\ref{MainTheorem} is tight. In particular, on constant alphabets, i.e., when $\sigma = O(1)$, Theorem~\ref{ExampleTheorem} complements Theorem~\ref{MainTheorem}, showing that the bound $O(\min\{z, \frac{\log n}{\log\log z}\})$ is tight. Further, putting $z = \frac{\log n}{\log\log\log n}$ and $\sigma = 2$ in Theorem~\ref{ExampleTheorem}, we show that the upper bound given in Corollary~\ref{OnlyNbound} is tight. \begin{theorem} For any given integers $n > 1$, $\sigma \in [2..n]$, and $z \in [\sigma..\frac{n}{\log_\sigma n}]$, there is a string of length $n$ over an alphabet of size $\sigma$ such that the number of phrases in the greedy LZ77 parsing of this string is $\Theta(z)$ and the sizes $\mathsf{LZ_{gr}}$ and $\mathsf{LZ_{opt}}$ of, respectively, the greedy and optimal LZ77 encodings of this string are related as $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \ge \Omega(\min\{z, \frac{\log n}{\log\log_\sigma z + \log\sigma}\})$. \label{ExampleTheorem} \end{theorem} \begin{proof} If $\sigma \ge n/4$, then any LZ77 encoding of a string of length $n$ containing $\sigma$ distinct letters obviously occupies $\Theta(\sigma\log\sigma) = \Theta(n\log n)$ bits and, hence, the statement of the theorem, which degenerates to $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \ge \Omega(1)$, trivially holds. Assume that $\sigma < n/4$. We first consider the case $\sigma \ge 3$ as it is simpler. Suppose that the alphabet is the set $[1..\sigma]$. Denote $b = 1$ and $\tau = \sigma - 1$ ($b$ is a special separator letter with a small code and $\tau$ is the size of the set $[1..\sigma] \setminus \{b\} = [2..\sigma]$). Let $m$ be the minimal integer such that $\tau^m \ge z$, i.e., $m = \lceil\log_{\tau} z\rceil$. Note that $m = \Theta(\log_\sigma z)$. In~\cite{Cohn} it is shown that all $\tau^m$ possible strings of length $m$ over the alphabet $[2..\sigma]$ can be arranged in a sequence $s_1, s_2, \ldots, s_{\tau^m}$ (called a \emph{$\tau$-ary Gray code}~\cite{Cohn,Gray}) such that, for any $i \in [2..\tau^m]$, the strings $s_{i-1}$ and $s_i$ differ in exactly one position. Moreover, we can choose such a sequence so that $s_{\tau^m} = a^m$, where $a$ is an arbitrary letter from $[2..\sigma]$. Let $k$ and $\ell$ be positive integers such that $k < \tau^m$ and $\ell > m$. Our example is the following string (the numbers $k$ and $\ell$ will be adjusted below so that $k = \Theta(z)$ and $\ell \ge \frac{1}{2}n$): $$ s = s_1 s_2 \cdots s_k\cdot a^{\ell}\cdot bs_1bs_2b \cdots s_k b. $$ Let us consider the greedy LZ77 parsing of $s$ and the corresponding greedy LZ77 encoding. Since the letter $b$ first occurs in the substring $a^\ell b$, the greedy construction procedure builds the parsing of $s_1 b s_2 b \cdots s_k b$ starting from the first position of this substring.
Since $k < \tau^m$ and $s_{\tau^m} = a^m$, it follows from the definition of the sequence $s_1, \ldots, s_{k}$ that, for any $i\in[1..k]$, the longest prefix of the string $s_i b s_{i+1} b \cdots s_k b$ that has an earlier occurrence in $s$ is $s_i$ and this earlier occurrence is a substring of the prefix $s_1 s_2 \cdots s_ka^m$ of $s$. Therefore, the greedy algorithm decomposes the suffix $s_1 b s_2 b \cdots s_k b$ into $k$ phrases $s_i b$, for $i \in [1..k]$. It is easy to see that each of these phrases is encoded in $\Omega(\log\ell)$ bits (this is the number of bits required to encode the distance between the phrase and its earlier occurrence). Hence, the size in bits of the greedy LZ77 encoding of $s$ is $\mathsf{LZ_{gr}} \ge \Omega(k\log\ell)$. Now let us consider a better encoding of the same string $s$. For simplicity, we omit the description of the encoding of the prefix $s_1 s_2 \cdots s_k$ as it is very similar to the encoding of the suffix $s_1 b s_2 b \cdots s_k b$ discussed below. First, we parse the substring $a^{\ell}b$ into two phrases $a$ and $a^{\ell-1}b$, which are encoded in $O(\log\ell + \log\sigma)$ bits (the referenced part $a^{\ell-1}$ of $a^{\ell-1}b$ is self-referential). Then, we encode the substring $s_1 b$ as in the greedy approach by one phrase taking $O(\log\ell)$ bits (recall that $\ell > m$ and $b = 1$ and, hence, the length $|s_1b| = m + 1$ and the letter $b$ are encoded in $O(\log\ell)$ bits). Now we consecutively encode each substring $s_i b$, for $i \in [2..k]$, as follows. Suppose that the strings $s_i$ and $s_{i-1}$ differ at position $j$, i.e., $s_{i-1}[1..j{-}1] = s_i[1..j{-}1]$ and $s_{i-1}[j{+}1 .. m] = s_i[j{+}1 .. m]$. We decompose $s_i b$ into two phrases $s_i[1..j]$ and $s_i[j{+}1..m]b$. Since the strings $s_i[1..j{-}1]$ and $s_i[j{+}1..m]$ both are substrings of the string $s_{i-1}$ and have length $O(m)$, the encoding of the produced two phrases occupies $O(\log m + \log\sigma)$ bits. Hence, the whole suffix $s_1bs_2b\cdots s_kb$ can be encoded in $O(k\log m + k\log\sigma)$ bits; the prefix $s_1 s_2 \cdots s_k$ can be encoded similarly in $O(k\log m + k\log\sigma)$ bits. Thus, we obtain an encoding of the string $s$ that occupies $O(\log\ell + k\log m + k\log\sigma)$ bits. Therefore, the size in bits of the optimal LZ77 encoding of $s$ is $\mathsf{LZ_{opt}} = O(\log\ell + k\log m + k\log\sigma)$. Recall that $m = \Theta(\log_\sigma z)$. Combining the estimations on $\mathsf{LZ_{gr}}$ and $\mathsf{LZ_{opt}}$, we obtain $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \ge \frac{\Omega(k\log\ell)}{O(\log\ell + k(\log m + \log\sigma))} \ge \Omega(\frac{k\log\ell}{\log\ell + k(\log\log_\sigma z + \log\sigma)})$. Since $\frac{k\log\ell}{\log\ell + k(\log\log_\sigma z + \log\sigma)} \ge \frac{k\log\ell}{2\cdot\max\{\log\ell, k(\log\log_\sigma z + \log\sigma)\}} = \frac{1}{2} \min\{k, \frac{\log\ell}{\log\log_\sigma z + \log\sigma}\}$, we obtain $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \ge \Omega(\min\{k, \frac{\log\ell}{\log\log_\sigma z + \log\sigma}\})$. Note that the number of phrases in the greedy LZ77 parsing of $s$ is $\Theta(k)$ and $|s| = \ell + 1 + k(2m + 1)$. We put $\ell = n - k(2m + 1) - 1$ so that $|s| = n$. Since $z \in [2..\frac{n}{\log_\sigma n}]$ and $m = \Theta(\log_{\sigma} z)$, we have $k(2m + 1) \le O(n)$ if $k = \Theta(z)$. Then, it is straightforward that the parameter $k$ can be chosen so that $k = \Theta(z)$ and $\ell = n - k(2m + 1) - 1 \ge \frac{1}{2}n$. 
Hence, we derive $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \ge \Omega(\min\{z, \frac{\log n}{\log\log_\sigma z + \log\sigma}\})$. (If not all letters of the alphabet $[1..\sigma]$ indeed occur in the constructed string, we append all unused letters to the end of $s$ and reduce $\ell$ appropriately; as $\sigma < n/4$, we have $\ell \ge \frac{1}{4}n$ in the end.) Now assume that $\sigma = 2$. Let $\{0,1\}$ be the alphabet. Similarly to the above analysis, we fix a sequence $s_1, \ldots, s_{2^m}$ of all binary strings of length $m = \lceil\log z\rceil$ such that, for $i \in [2..2^m]$, $s_{i-1}$ and $s_i$ differ in exactly one position, and we choose two parameters $\ell > 4m$ and $k < 2^m$, which will be adjusted later so that $\ell \ge \frac{1}2n$ and $k = \Theta(z)$. It is well known that one can fix the sequence $s_1, \ldots, s_{2^m}$ so that $s_{2^m} = 0^m$. Our example is defined as follows: $$ s = s_1 0^m 1 s_2 0^m 1 \cdots s_k 0^m 1 0^{\ell}1 s_1 0^m 1 c_1 s_2 0^m 1 c_2\cdots s_k 0^m 1 c_k, $$ where $c_k = 1$ and, for $i \in [1..k{-}1]$, $c_i = 0$ if $s_{i+1}[1] = 1$, and $c_i = 1$ otherwise. Since, for any $i \in [1..k]$, $s_i \ne 0^m$ (as $s_i = 0^m$ iff $i = 2^m$, and $k < 2^m$) and $\ell > 4m$, the greedy LZ77 parser necessarily makes a phrase that is a suffix of the substring $0^\ell1$ and, then, parses the suffix $s_1 0^m 1 c_1 s_2 0^m 1 c_2\cdots s_k 0^m 1 c_k$ from the first position. It is straightforward that, for any $i \in [1..k]$, the string $0^m1$ has only one occurrence in the strings $1s_i0^m1$ and $1c_{i-1}s_i0^m1$ (for $i > 1$). Therefore, for any $i \in [1..k]$, the string $s_i 0^m 1$ has only one occurrence in the prefix $s_1 0^m 1 s_2 0^m 1 \cdots s_k 0^m 1$ and the string $s_i 0^m 1 c_i$ has only one occurrence in the whole string $s$. Then, the greedy parser parses the suffix $s_1 0^m 1 c_1 s_2 0^m 1 c_2\cdots s_k 0^m 1 c_k$ into $k$ phrases $s_i 0^m 1 c_i$, for $i\in[1..k]$. This parsing produces an encoding of size $\Omega(k\log\ell)$ bits. At the same time, there is an LZ77 encoding for $s$ of size $O(\log\ell + k\log m)$ bits. The further analysis is very similar to the analysis of the case $\sigma \ge 3$: we put $\ell = n - k(4m + 3) - 1$ so that $|s| = n$, and we adjust $k$ so that $k = \Theta(z)$ and $\ell \ge \frac{1}{2}n$, which is possible because $m \le \log z + 1$ and $z \le \frac{n}{\log n}$. We omit the details as they are analogous. \end{proof} \begin{remark} The condition $\sigma \le z \le \frac{n}{\log_\sigma n}$ from Theorem~\ref{ExampleTheorem} is justified by the following observations. First, it is obvious that any LZ77 parsing has at least $\sigma$ phrases and, hence, the inequality $\sigma \le z$ holds. Secondly, by Lemma~\ref{LongPhrasesSet}, at least $z - 2\sqrt{z}$ phrases in the greedy LZ77 parsing have length at least $\frac{1}{2}\log_\sigma z$, where $z$ is the total number of phrases; hence, we obtain $z\log_\sigma z \le O(n)$ and, solving this inequality, $z = O(\frac{n}{\log_\sigma n})$, which justifies the condition $z \le \frac{n}{\log_\sigma n}$. \end{remark} \begin{remark} Let us sketch the way in which the constructions from the proof of Theorem~\ref{ExampleTheorem} can be adapted to nonclassical LZ77 encodings. For the case $\sigma \ge 3$, the corresponding string is as follows (the notation is from the proof of Theorem~\ref{ExampleTheorem}): $$ s = bs_1bs_2\cdots bs_kb\cdot a^\ell\cdot bs_1bbs_2bb\cdots bbs_kb. 
$$ The suffix $bs_1bbs_2bb\cdots bbs_kb$ of this string is greedily parsed into the phrases $bs_ib$, for $i \in [1..k]$. For the case $\sigma = 2$, the corresponding string is as follows: $$ s = 10s_1\alpha 10s_2\alpha 1\cdots 10s_k\alpha\cdot 0^\ell\cdot 1 0s_1\alpha 0s_2\alpha 0\cdots 0s_k\alpha 0, $$ where $\alpha = 0^{m+1}1$. The suffix $1 0s_1\alpha 0s_2\alpha 0\cdots 0s_k\alpha 0$ of $s$ is greedily parsed into the phrases $10s_1\alpha$ and $0s_i\alpha$, for $i \in [2..k]$. We omit the detailed analysis as it is analogous to the analysis in the proof of Theorem~\ref{ExampleTheorem}. \end{remark} \section{Arbitrary Alphabets}\label{SectArbitraryAlphabet} The following corollary shows that, in the case of ``non-extremely compressible'' strings ($z \ge 2^{\log^\epsilon n}$) over a polylogarithmic alphabet ($\sigma \le \log^{O(1)} n$), which is arguably the most important case in practice, the upper and lower bounds from Theorems~\ref{MainTheorem} and~\ref{ExampleTheorem} degenerate to $\Theta(\frac{\log n}{\log\log n})$ and, hence, are tight. (Note that $2^{\log^\epsilon n} = o(n^\delta)$ for any fixed constants $\epsilon \in (0,1)$ and $\delta \in (0,1)$.) \begin{corollary} Let $z$ be the number of phrases in the greedy LZ77 parsing of a given string of length $n$ drawn from an alphabet of size $\sigma$. Suppose that $\sigma \le \log^{O(1)} n$ and $z \ge 2^{\log^\epsilon n}$, for a fixed constant $\epsilon \in (0,1)$. Denote by $\mathsf{LZ_{gr}}$ and $\mathsf{LZ_{opt}}$ the sizes in bits of, respectively, the greedy and optimal LZ77 encodings of this string. Then, we have $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \le O(\frac{\log n}{\log\log n})$ and this upper bound is tight. \label{MainIsTight2} \end{corollary} \begin{proof} The result follows from Theorems~\ref{MainTheorem} and~\ref{ExampleTheorem} since $\log\log n \ge \log\log_\sigma z \ge \log\frac{\log^\epsilon n}{O(\log\log n)} = \Theta(\log\log n)$. \end{proof} Now let us consider bounds on the ratio $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}}$ that are independent of the parameters $z$ and $\sigma$. In~\cite{FerraginaNittoVenturini} it was proved that $O(\log n)$ is an upper bound on the ratio $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}}$. It turns out that this bound is tight on sufficiently large non-constant alphabets. Precisely, a series of examples on which $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} = \Omega(\log n)$ can be constructed on an alphabet of size $O(\log n)$. Therefore, the upper bound $O(\frac{\log n}{\log\log\log n})$ on the ratio $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}}$, which, by Corollary~\ref{OnlyNbound}, holds for constant alphabets and is tight, does not hold, in general, even for alphabets of logarithmic size. In examples showing this, we use the following well-known combinatorial structure. A \emph{Steiner system} $S(t,k,n)$ is a set $S$ of size $n$ and a family of $k$-element subsets of $S$, called \emph{blocks}, such that each subset of $S$ of size $t$ is contained in exactly one block. We are particularly interested in the Steiner systems $S(2, 2^{2^{i-1}}, 2^{2^i})$, which can be constructed for any positive integer $i$ (the structure is realized on a finite affine plane of order $2^{2^{i-1}}$ and the blocks are lines in the plane; see~\cite{ColbournDinitz}). It is well known that the number of blocks in the Steiner system $S(2, 2^{2^{i-1}}, 2^{2^i})$ is ${2^{2^i} \choose 2} / {2^{2^{i-1}} \choose 2}$.
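To make the combinatorics concrete, the following sketch (given only for illustration) builds such a system for a prime order $q$, realizing the blocks as the lines of the affine plane over $\mathbb{Z}_q$ and checking that every pair of points lies in exactly one block; the orders $2^{2^{i-1}}$ used below are powers of two, for which the same construction goes through with arithmetic in $GF(2^k)$ instead of arithmetic modulo $q$.
\begin{verbatim}
from itertools import combinations

def affine_plane_blocks(q):
    # Lines of the affine plane of order q over Z_q (q prime), i.e. the
    # blocks of a Steiner system S(2, q, q^2).
    points = [(x, y) for x in range(q) for y in range(q)]
    blocks = [frozenset((x, (a * x + b) % q) for x in range(q))
              for a in range(q) for b in range(q)]      # lines y = a*x + b
    blocks += [frozenset((c, y) for y in range(q)) for c in range(q)]  # x = c
    return points, blocks

points, blocks = affine_plane_blocks(3)
assert len(blocks) == 12                      # = C(9,2) / C(3,2)
for pair in combinations(points, 2):          # every pair lies in one block
    assert sum(set(pair) <= B for B in blocks) == 1
\end{verbatim}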
\begin{theorem} For any integer $n > 1$, there is a string of length $n$ over an alphabet of size $O(\log n)$ such that the sizes $\mathsf{LZ_{gr}}$ and $\mathsf{LZ_{opt}}$ of, respectively, the greedy and optimal LZ77 encodings of this string are related as $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \ge \Omega(\log n)$. \label{ExampleTheorem2} \end{theorem} \begin{proof} Let us first discuss the high-level idea of our construction. Consider the following string: $$ t\cdot b_1cb'_1\cdot b_2cb'_2\cdots b_kcb'_k\cdot c^{\Theta(n)}\cdot td\cdot b_1cb'_1d\cdot b_2cb'_2d\cdots b_kcb'_kd, $$ where $t = a_1a_2\cdots a_{\sigma-2}$ is a string consisting of $\sigma{-}2$ distinct letters, the sets $\{b_i, b'_i\}$ run through all $k = {\sigma{-}2 \choose 2}$ two-element subsets of the set $\{a_1, a_2, \ldots, a_{\sigma-2}\}$, and $c$ and $d$ are two special letters with constant codes (say, $0$ and $1$) that do not occur in $t$. The greedy LZ77 parser parses the suffix $b_1cb'_1d\cdot b_2cb'_2d\cdots b_kcb'_kd$ into phrases $b_icb'_id$ encoded by references to the substrings $b_icb'_i$ of the prefix $t\cdot b_1cb'_1\cdot b_2cb'_2\cdots b_kcb'_k$. Each such reference takes $\Omega(\log n)$ bits and, therefore, the greedy encoding occupies $\Omega({\sigma{-}2 \choose 2}\log n) = \Omega(\sigma^2\log n)$ bits. Obviously, any LZ77 encoding spends $\Theta(\log n)$ bits to encode the substring $c^{\Theta(n)}$. If we were able to encode the prefix and the suffix surrounding the substring $c^{\Theta(n)}$ in $O(\sigma^2)$ bits, then we would obtain $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \ge \Omega(\frac{\sigma^2\log n}{\sigma^2 + \log n}) = \Omega(\frac{\sigma^2\log n}{\max\{\sigma^2, \log n\}}) = \Omega(\min\{\log n, \sigma^2\})$, which is $\Omega(\log n)$ for $\sigma = \Omega(\sqrt{\log n})$. Unfortunately, it seems that the best encoding that one can find for the suffix $b_1cb'_1d\cdot b_2cb'_2d\cdots b_kcb'_kd$ parses each substring $b_icb'_id$ into two phrases $b_ic$ and $b'_id$, encoding each of them by a reference to a letter in $t = a_1a_2\cdots a_{\sigma-2}$, thus spending $\Theta({\sigma{-}2 \choose 2}\log {\sigma{-}2 \choose 2}) = \Theta(\sigma^2\log\sigma)$ bits for the whole suffix, which is larger than $\Theta(\sigma^2)$ by a factor of $\log\sigma$. To address this issue, we construct a more sophisticated string equipped with additional ``infrastructure'' that helps to cheaply ``deliver'' letters from a ``dictionary'' substring (like $t$) to the places where these letters are used. Let us formalize this intuition. Choose the minimal positive integer $x$ such that $2^{2^x} > \sqrt{\log n}$. The alphabet for our example will consist of two special letters $c$ and $d$ with codes $0$ and $1$, and of the set $A$ of $2^{2^x}$ letters with codes larger than $1$. Obviously, the alphabet size $\sigma = 2^{2^x} + 2$ is at most $\log n + 2$. Let us assign to each subset $S$ of $A$ such that $|S| = 2^{2^i}$, for some $i \in [1..x]$, a Steiner system $S(2, 2^{2^{i-1}}, 2^{2^i})$ with the set of blocks denoted by $B_S$. Denote by $q$ a mapping that maps every such $S$ to a string $q(S) = a_{j_1}da_{j_2}d\cdots a_{j_{|S|}}d$, where $a_{j_1}, a_{j_2}, \ldots, a_{j_{|S|}}$ are all letters from $S$ in an arbitrarily chosen order. The basic building elements for our string are defined recursively as follows. $$ \begin{array}{l} r(S) = q(S)\prod_{B \in B_S} r(B) \quad\text{ if }|S| > 2,\\ r(S) = bcb'cbcb'dd \quad\text{ if }S = \{b, b'\}\text{ for distinct letters }b, b'.
\end{array} $$ Analogously, we define: $$ \begin{array}{l} r'(S) = q(S)\prod_{B \in B_S} r'(B) \quad\text{ if }|S| > 2,\\ r'(S) = bcb'cbcb'dc \quad\text{ if }S = \{b, b'\}\text{ for distinct letters }b, b'. \end{array} $$ To break ties on the lowest levels of recursion where $|S| = 2$, we assume that $b$ is the letter from $S$ with the smallest code. Our string on which $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \ge \Omega(\log n)$ is $s = r'(A)c^\ell r(A)$, where $\ell$ is chosen so that $\ell = \Theta(n)$ (see the text below, where we discuss the lengths of $r(S)$ and $r'(S)$). Let us first show that the greedy LZ77 encoding of this string has size $\Omega(\sigma^2\log n)$ bits. By the definition of Steiner systems, for any subset $S\subseteq A$ of size $2^{2^i}$, each pair $\{b, b'\}$ of distinct letters from $S$ is contained in exactly one block (of size $2^{2^{i-1}}$) from $B_S$. Then, it is straightforward that any given pair $\{b, b'\}$ of distinct letters from $A$ occurs exactly once as a parameter of $r$ on the lowest level of the recursion $r(A)$. An analogous claim holds for $r'(A)$. Hence, the string $bcb'cbcb'd$ (we assume that the code of $b$ is smaller than the code of $b'$) occurs in $s$ exactly twice: in the prefix $r'(A)$ and in the suffix $r(A)$. Further, it is easy to see that the string $bcb'$ occurs in $s$ only as a substring of $bcb'cbcb'd$. By a straightforward case analysis, one can show that this implies that the greedy LZ77 parsing of $s$ has a phrase $f$ containing the substring $bcb'dd$ of $r(A)$: $f$ either is a phrase starting at one of the first five positions of $bcb'cbcb'dd$ (greedily ``eating'' the remaining part) or is a phrase containing the prefix $bcb'cb$ of $bcb'cbcb'dd$ (the part $bcb'c$ can be copied only from $bcb'cbcb'dc$ in $r'(A)$ and, thus, again $f$ greedily ``eats'' the remaining part). The encoding of $f$ copies the part $bcb'd$ by reference from its occurrence in $r'(A)$, thus spending $\Omega(\log\ell) = \Omega(\log n)$ bits. Since the two occurrences of $bcb'cbcb'd$ in $s$ are followed by distinct letters ($c$ in $r'(A)$ and $d$ in $r(A)$), the string $bcb'dd$ must be a suffix of $f$. Hence, there is a one-to-one correspondence between the pairs $\{b,b'\}$ of distinct letters from $A$ and the phrases containing the substrings $bcb'dd$. Therefore, the greedy LZ77 encoding of $s$ occupies $\Omega({|A| \choose 2}\log n) = \Omega(\sigma^2\log n)$ bits. Now it remains to show that there is an LZ77 encoding of the string $s$ that occupies $O(\sigma^2 + \log n)$ bits. This will imply that $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \ge \frac{\Omega(\sigma^2\log n)}{O(\sigma^2 + \log n)} \ge \Omega(\min\{\log n, \sigma^2\})$, which is $\Omega(\log n)$ since, by construction, $\sigma > \sqrt{\log n}$. We decompose the substring $c^\ell$ of $s = r'(A)c^\ell r(A)$ into two phrases $c$ and $c^{\ell-1}$, encoding these phrases in $O(\log n)$ bits. All other phrases in our parsing will have length either one or two. For simplicity of the exposition, we consider only the encoding of the suffix $r(A)$; the encoding for $r'(A)$ is analogous and occupies asymptotically the same space. By definition, $q(A)$ is a prefix of $r(A)$. The string $q(A)$ serves as a ``dictionary'' of letters similar to the string $t$ in the preliminary example. We encode each letter of $q(A)$ as a phrase of length one, thus spending $O(\sigma\log\sigma)$ bits.
These are the only ``heavy'' phrases of length one in our encoding of $r(A)$: all other phrases of length one will be either $c$ or $d$, the letters with codes $0$ and $1$, which can be encoded in $O(1)$ bits. All phrases of length two will have the form either $ac$ or $ad$, where $a \in A$; thus, the ``heavy'' part of the encoding of such phrases of length two is an \mbox{$O(\log\delta)$-bit} encoding of the distance $\delta$ to an occurrence of $a$ preceding this phrase. Let us consider a substring $r(S) = q(S)\prod_{B \in B_S} r(B)$ of $r(A)$, where $S \subseteq A$ is a set of size $2^{2^i}$ that occurs in the expansion of the recursion $r(A)$. Suppose that $i > 1$. Then, each substring $r(B)$, for $B \in B_S$, has a prefix $q(B) = a_1da_2d\cdots a_{|B|}d$, where $a_1, a_2, \ldots, a_{|B|}$ are members of $B$. We parse $q(B)$ into phrases $a_1d, a_2d, \ldots, a_{|B|}d$, encoding each phrase $a_id$ by a reference to the letter $a_i$ of the prefix $q(S)$ of $r(S)$. Suppose that $i = 1$. Then, each block $B \in B_S$ is just a pair $\{b,b'\}$ of distinct letters from $S$, and $r(B) = bcb'cbcb'dd$. We parse $r(B)$ into phrases $bc, b'c, bc, b'd, d$, encoding each phrase of length two by a reference to a letter from the prefix $q(S)$ of the string $r(S)$. Denote by $E(i)$ the maximum size in bits of the encoding for the suffix $\prod_{B \in B_S} r(B)$ of some string $r(S)$, among all subsets $S \subseteq A$ such that $|S| = 2^{2^i}$. Then, $E(i)$ can be expressed by the following recursion (recall that $|B_S| = {2^{2^i} \choose 2} / {2^{2^{i-1}} \choose 2}$): $$ \begin{array}{l} E(i) \le \left({2^{2^i} \choose 2} / {2^{2^{i-1}} \choose 2}\right) (2^{2^{i-1}} \alpha\log L(i) + E(i-1)),\quad\text{ for }i > 1,\\ E(1) \le {4 \choose 2} (4\alpha\log L(1) + \alpha), \end{array} $$ where $L(i)$ denotes the length of the string $r(S)$ (obviously, $L$ depends only on the size $2^{2^i}$ of $S$) and $\alpha$ is a positive constant that depends on the chosen phrase encoder. Consider the prefix $q(B)$ of a substring $r(B)$ of $r(S)$, where $B \in B_S$ and $|B| > 2$. Each phrase $ad$ from the parsing of $q(B)$ is encoded in $O(\log\delta)$ bits, where $\delta$ is the distance to the letter $a$ from the prefix $q(S)$ of $r(S)$. Obviously, we have $\delta < L(i)$. Therefore, choosing an appropriate constant $\alpha > 0$, we can estimate the number of bits required to encode all $2^{2^{i-1}}$ phrases from the parsing of $q(B)$ as $2^{2^{i-1}} \alpha\log L(i)$; hence, the expression for $E(i)$ with $i \ne 1$. Analogously, the size in bits of the encoding for $bcb'cbcb'dd$ can be estimated as $4\alpha\log L(1) + \alpha$; hence, the expression for $E(1)$. Thus, the whole encoding of the string $s$ requires $O(\log n + \sigma\log\sigma + E(x))$ bits. It remains to show that $E(x) \le O(\sigma^2)$. Before finding a closed form for $E(i)$, let us consider $L(i)$, which can be expressed by the following recursion: $$ \begin{array}{l} L(i) = 2\cdot 2^{2^i} + \left({2^{2^i} \choose 2} / {2^{2^{i-1}} \choose 2}\right) L(i-1),\quad\text{ for }i > 0,\\ L(0) = 9. \end{array} $$ Here, $L(0) = |bcb'cbcb'dd| = 9$. Let us find a closed form for $L(i)$. Note that $2^{2^z} / {2^{2^z} \choose 2} = \frac{2}{2^{2^z} - 1}$ for any integer $z \ge 0$. 
Expanding the recursion for $L(i)$, we obtain: $$ \begin{array}{l} L(i) = 2\cdot 2^{2^i} + \frac{{2^{2^i} \choose 2}}{{2^{2^{i-1}} \choose 2}} L(i-1)\\ = 2^{2^i+1} + \frac{{2^{2^i} \choose 2}}{{2^{2^{i-1}} \choose 2}} \left(2\cdot 2^{2^{i-1}} + \frac{{2^{2^{i-1}} \choose 2}}{{2^{2^{i-2}} \choose 2}} L(i - 2)\right)\\ = 2^{2^i+1} + \frac{4\cdot {2^{2^i} \choose 2}}{2^{2^{i-1}} - 1} + \frac{{2^{2^i} \choose 2}}{{2^{2^{i-2}} \choose 2}} L(i-2)\\ = 2^{2^i+1} + \left(\frac{4\cdot {2^{2^i} \choose 2}}{2^{2^{i-1}} - 1} + \frac{4\cdot {2^{2^i} \choose 2}}{2^{2^{i-2}} - 1} + \cdots + \frac{4\cdot {2^{2^i} \choose 2}}{2^{2^{1}} - 1}\right) + 9\cdot {2^{2^i} \choose 2}\\ = 2^{2^i+1} + {2^{2^i} \choose 2}\left(\frac{4}{2^{2^{i-1}} - 1} + \frac{4}{2^{2^{i-2}} - 1} + \cdots + \frac{4}{2^{2^{1}} - 1} + 9\right). \end{array} $$ The term $9\cdot {2^{2^i} \choose 2}$ appears because of the last level of the recursion $L(i)$. Now it is easy to see that $L(i) \le \beta\cdot {2^{2^i} \choose 2}$ for a constant $\beta > 0$. In particular, we obtain $|r(A)| = |r'(A)| = L(x) \le \beta\cdot{2^{2^x} \choose 2} \le O(\sigma^2)$ (recall that $\sigma = 2^{2^x} + 2$). Since, as it was noted above, $\sigma \le \log n + 2$, we obtain $L(x) \le O(\log^2 n)$. Hence, for large enough $n$, we have $\ell = n - |r(A)| - |r'(A)| = n - 2L(x) = n - O(\log^2 n) \ge \tfrac{1}2 n$, i.e., $\ell = \Theta(n)$, as it was announced above. Let us similarly estimate $E(i)$. Denote $\gamma_i = \alpha\log L(i)$ for brevity. $$ \begin{array}{l} E(i) \le \frac{{2^{2^i} \choose 2}}{{2^{2^{i-1}} \choose 2}} (2^{2^{i-1}} \gamma_i + E(i-1))\\ = \frac{2\cdot{2^{2^i} \choose 2}}{2^{2^{i-1}} - 1}\gamma_i + \frac{{2^{2^i} \choose 2}}{{2^{2^{i-1}} \choose 2}} E(i-1)\\ = \frac{2\cdot{2^{2^i} \choose 2}}{2^{2^{i-1}} - 1}\gamma_i + \frac{{2^{2^i} \choose 2}}{{2^{2^{i-1}} \choose 2}}\left(\frac{{2^{2^{i-1}} \choose 2}}{{2^{2^{i-2}} \choose 2}} (2^{2^{i-2}} \gamma_{i-1} + E(i-2))\right)\\ = \frac{2\cdot{2^{2^i} \choose 2}}{2^{2^{i-1}} - 1}\gamma_i + \frac{{2^{2^i} \choose 2}}{{2^{2^{i-2}} \choose 2}} 2^{2^{i-2}} \gamma_{i-1} + \frac{{2^{2^i} \choose 2}}{{2^{2^{i-2}} \choose 2}} E(i-2) \\ = \frac{2\cdot{2^{2^i} \choose 2}}{2^{2^{i-1}} - 1}\gamma_i + \frac{2\cdot{2^{2^i} \choose 2}}{2^{2^{i-2}} - 1} \gamma_{i-1} + \frac{{2^{2^i} \choose 2}}{{2^{2^{i-2}} \choose 2}} E(i-2)\\ = \frac{2\cdot{2^{2^i} \choose 2}}{2^{2^{i-1}} - 1}\gamma_i + \frac{2\cdot{2^{2^i} \choose 2}}{2^{2^{i-2}} - 1} \gamma_{i-1} + \cdots + \frac{2\cdot{2^{2^i} \choose 2}}{2^{2^{1}} - 1} \gamma_{2} + {2^{2^i} \choose 2}(4\gamma_1{+}\alpha)\\ = 2\cdot{2^{2^i} \choose 2}\left(\frac{\gamma_i}{2^{2^{i-1}} - 1} + \frac{\gamma_{i-1}}{2^{2^{i-2}} - 1} + \cdots + \frac{\gamma_2}{2^{2^{1}} - 1} + 2\gamma_1 + \frac{\alpha}2\right). \end{array} $$ The term ${2^{2^i} \choose 2}(4\gamma_1 + \alpha)$ appear because of the last level of the recursion $E(i)$. Note that $\gamma_i = \alpha\log L(i) \le \alpha\log(\beta\cdot{2^{2^i} \choose 2}) = O(2^{i})$. It is well known that $\sum_{k=0}^\infty \frac{2^k}{2^{2^k} - 1} = O(1)$. Therefore, $E(i)$ can be estimated as $O({2^{2^i} \choose 2})$. Thus, we obtain $E(x) \le O({2^{2^x} \choose 2})$, which is $O(\sigma^2)$ since $\sigma = 2^{2^x} + 2$. \end{proof} \begin{remark} For nonclassical LZ77 encodings, we can use exactly the same example as in the proof of Theorem~\ref{ExampleTheorem2}. 
In this case, the substrings $q(B) = a_1da_2d\cdots a_{|B|}d$ and $bcb'cbcb'dd$ of each string $r(S)$ are parsed into one-letter phrases: the phrases $c$ and $d$ are encoded in $O(1)$ bits using the codes of these letters, and the phrases $a_1, a_2, \ldots, a_{|B|}, b, b'$ are encoded using references to letters of the prefix $q(S)$ of $r(S)$. The analysis of the size of the encoding thus obtained is analogous. \end{remark} \section{Concluding Remarks}\label{SectConclusion} The upper and lower bounds $O(\min\{z, \frac{\log n}{\log\log_\sigma z}\})$ and $\Omega(\min\{z, \frac{\log n}{\log\log_\sigma z + \log\sigma}\})$, established in Theorems~\ref{MainTheorem} and~\ref{ExampleTheorem}, completely solve the problem for the case of constant alphabets and for some cases of arbitrary alphabets. But the general case of arbitrary alphabets with bounds expressed in terms of the parameters $n, z, \sigma$ remains open (see Table~\ref{tbl:results} in the introduction). Note that the examples constructed in the proof of Theorem~\ref{ExampleTheorem2} to show that $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \ge \Omega(\log n)$ are extremely compressible strings with $z = O(\log^2 n)$, and it is not clear whether the upper bound $\frac{\mathsf{LZ_{gr}}}{\mathsf{LZ_{opt}}} \le O(\log n)$ remains tight if we consider ``non-extremely compressible'' strings (but not necessarily on polylogarithmic alphabets). It is interesting to consider other encoders for LZ77. Many practical compressors utilize a type of phrase encoder that is strikingly different from ours: such encoders use entropy compression as a component. DEFLATE and LZMA are important examples of compression schemes using such techniques. It is a major open problem to formalize these schemes and to conduct a similar theoretical analysis of the efficiency of the popular greedy approach. \end{document}
\begin{document} \title[] {Invariant theoretic approach to uncertainty relations for quantum systems} \author{J. Solomon Ivan} \email{[email protected]} \affiliation{Raman Research Institute, C. V. Raman Avenue, Sadashivanagar, Bangalore 560 080.} \author{Krishna Kumar Sabapathy} \email{[email protected]} \affiliation{The Institute of Mathematical Sciences, C. I. T. Campus, Chennai 600113.} \author{N. Mukunda} \email{[email protected]} \affiliation{The Institute of Mathematical Sciences, C. I. T. Campus, Chennai 600113.} \author{R. Simon} \email{[email protected]} \affiliation{The Institute of Mathematical Sciences, C. I. T. Campus, Chennai 600113.} \begin{abstract} We present a general framework and procedure to derive uncertainty relations for observables of quantum systems in a covariant manner. All such relations are consequences of the positive semidefiniteness of the density matrix of a general quantum state. Particular emphasis is given to the action of unitary symmetry operations of the system on the chosen observables, and the covariance of the uncertainty relations under these operations. The general method is applied to the case of an $n$-mode system to recover the $Sp(2n,\,R)$-covariant multi mode generalization of the single mode Schr\"{o}dinger-Robertson Uncertainty Principle; and to the set of all polynomials in canonical variables for a single mode system. In the latter situation, the case of the fourth order moments is analyzed in detail, exploiting covariance under the homogeneous Lorentz group $SO(2,\,1)$ of which the symplectic group $Sp(2,\,R)$ is the double cover. \end{abstract} \pacs{03.65.-w, 03.65.Fd, 03.65.Ca, 03.65.Wj} \maketitle \section{Introduction} It is a well known historical fact that the 1925\,--\,1926 discoveries of two equivalent mathematical formulations of quantum mechanics---Heisenberg's matrix form followed by Schr\"{o}dinger's wave mechanical form---preceded the development of a physical interpretation of these formalisms\,\cite{sch-hei}. The first important ingredient of the conventional interpretation was Born's 1926 identification of the squared modulus of a complex Schr\"{o}dinger wavefunction as a probability\,\cite{born26}. The second ingredient developed in 1927 was Heisenberg's Uncertainty Principle (UP)\,\cite{heisenbergup}. To these may be added Bohr's Complementarity Principle which has a more philosophical flavour\,\cite{bohrcp}. Heisenberg's original derivation of his position-momentum UP combined the formula for the resolving power of an optical microscope extrapolated to a hypothetical gamma ray microscope, with the energy and momentum relations for a single photon, in analysing the inherent limitations in simultaneous determinations of the position and momentum of an electron. His result indicated the limits of applicability of classical notions, in particular the spatial orbit of a point particle, in quantum mechanics. More formal mathematical derivations of the UP, using the Born probability interpretation, soon followed. Prominent among them are the treatments of Kennard, Schr\"{o}dinger, and Robertson\,\cite{uncer0}. Such a derivation was also presented by Heisenberg in his 1930 Chicago lectures\,\cite{heisenberg-chicago}. The Heisenberg position-momentum UP is basically kinematical in nature. In contrast, the Bohr UP for time and energy involves quantum dynamics in an essential manner\,\cite{bohr-te-up}. 
Later work on the UP has introduced a wide variety of ideas\,\cite{ideas} and interpretations of the fluctuations or the uncertainties involved\,\cite{interpretUP}, such as in entropic\,\cite{EUP} and other formulations\,\cite{otherUP}. Even for a one-dimensional quantum system, the Schr\"{o}dinger-Robertson form of the UP displays more invariance than the Heisenberg form. Thus while the latter is invariant only under reciprocal scalings of position and momentum, and their interchange amounting to Fourier transformation, the former is invariant under the three-parameter Lie group $Sp(2,\,R)$ of linear canonical transformations. Fourier transformation, as well as reciprocal scalings, belong to $Sp(2,\,R)$\,\cite{simon88}. The generalisation of the Schr\"{o}dinger-Robertson UP to any finite number, $n$, of degrees of freedom displays invariance under the group $Sp(2n,\,R)$\,\cite{dutta94}. The purpose of this paper is to outline an invariant theoretic approach to general uncertainty relations for quantum systems. It combines a recapitulation and reexpression of some past results\,\cite{moments2012} with some new ones geared to practical applications. The analysis throughout is in the spirit of the Schr\"{o}dinger-Robertson treatment, and, in particular, our considerations do not cover the entropic type uncertainty relations. All our considerations will be kinematical in nature. The material of this paper is presented as follows. Section II sets up a general framework and procedure for deriving consequences of the positive semidefiniteness of the density matrix of a general quantum state, for the expectation values and fluctuations of a chosen (linearly independent) set of observables for the system. This has the form of a general uncertainty relation. A natural way to separate the expressions entering it into a symmetric fluctuation part, and an antisymmetric part contributed by commutators among the observables, hence specifically quantum in origin, is described. With respect to any unitary symmetry operation associated with the system, under which the chosen observables transform in a suitable manner, the uncertainty relation is shown to transform covariantly and to be preserved in content. In Section III this general framework is applied to the case of a quantum system involving $n$ Cartesian canonical Heisenberg pairs, i.e., an $n$-mode system\,; and to the fluctuations in canonical `coordinates' and `momenta' in any state. The resulting $n$-mode generalization of the original Schr\"{o}dinger-Robertson UP is seen to be explicitly covariant under the group $Sp(2n,\,R)$ of linear homogeneous canonical transformations. Section IV returns to the single mode system, but considers as the system of observables the infinite set of operator polynomials of all orders in the two canonical variables. The treatment is formal to the extent that unbounded operators are involved. An important role is played by the set of all finite-dimensional real nonunitary irreducible representations of the covariance group $Sp(2,\,R)$. We follow in spirit the structure of the basic theorems in the classical theory of moments. Thus the formal infinite-dimensional matrix uncertainty relation is reduced to a nested sequence of finite-dimensional requirements, of steadily increasing dimensions. While this case has been treated elsewhere\,\cite{moments2012}, some of the subtler aspects are now carefully brought out. 
In this and the subsequent Sections the method of Wigner distributions is used as an extremely convenient technical tool. Section V treats in more detail the uncertainty relations of Section IV that go one step beyond the original Schr\"{o}dinger-Robertson UP. Here all the fourth order moments of the canonical variables in a general state are involved. Their fully covariant treatment brings in the defining and some other low dimensional representations of the three-dimensional Lorentz group $SO(2,\,1)$. It is shown that the uncertainty relations (to the concerned order) are all expressible in terms of $SO(2,\,1)$ invariants. In Section VI we describe an interesting aspect of the Schr\"{o}dinger-Robertson UP in the light of three-dimensional Lorentz geometry, which becomes particularly apparent through the use of Wigner distribution methods. We argue that this should generalise to the conditions on fourth (and higher) order moments as well. The paper ends with some concluding remarks in Section VII. \section{General Framework} We consider a quantum system with associated Hilbert space ${\cal H}$, state vectors $|\psi \rangle$, $|\phi \rangle$, $\cdots$ and inner product $\langle \phi | \psi \rangle$ as usual. A general (mixed) state is determined by a density operator or density matrix $\hat{\rho}$ acting on ${\cal H}$ and obeying \begin{eqnarray} \hat{\rho}^{\dagger} =\hat{\rho} \geq 0,\,\,\,\,\,{\rm Tr}\,\hat{\rho}=1. \label{2.1} \end{eqnarray} Then ${\rm Tr}\,\hat{\rho}^2=1$ or $< 1$ distinguishes between pure and mixed states. Any hermitian observable $\hat{A}$ of the system possesses the expectation value \begin{eqnarray} \langle \hat{A} \rangle = {\rm Tr}\,(\hat{\rho}\,\hat{A}) \label{2.2} \end{eqnarray} in the state $\hat{\rho}$, the dependence of the left hand side on $\hat{\rho}$ being generally left implicit. We now set up a general method which allows the drawing out of the consequences of the nonnegativity of $\hat{\rho}$ in a systematic manner. This along with two elementary lemmas will be the basis of our considerations. Let $\hat{A}_{a}$, $a=1,\,2,\,\cdots,\,N$ be a set of $N$ {\em linearly} independent {\em hermitian} operators, each representing some observable of the system. We set up two formal $N$-component and $(N+1)$-component column vectors with hermitian operator entries as follows\,: \begin{eqnarray} \hat{A}=\left( \begin{array}{c} \hat{A}_{1} \\ \vdots \\ \vdots \\ \hat{A}_{N} \end{array} \right), \,\,\,\,\, \hat{{\cal A}}=\left( \begin{array}{c} 1 \\ \hat{A} \end{array}\right) = \left( \begin{array}{c} 1\\ \hat{A}_{1} \\ \vdots \\ \vdots \\ \hat{A}_{N} \end{array} \right), \label{2.3} \end{eqnarray} From $\hat{{\cal A}}$ we construct a square $(N+1)$-dimensional `matrix' with operator entries as \begin{eqnarray} \hat{\Omega} = \hat{{\cal A}}\hat{{\cal A}}^{T} = \left(\begin{array}{ccccc} 1 & \cdots & \cdots & \hat{A}_{b} & \cdots \\ \vdots &&&\vdots&\\ \hat{A}_{a}&\cdots & \cdots & \hat{A}_{a}\hat{A}_{b} & \cdots \\ \vdots &&&\vdots& \end{array} \right). \label{2.4} \end{eqnarray} Since $(\hat{A}_{a}\hat{A}_{b} )^{\dagger}= \hat{A}_{b}\hat{A}_{a}$, $\hat{\Omega}$ is `hermitian' in the following sense\,: taking the operator hermitian conjugate of each element and then transposing the rows and columns leaves $\hat{\Omega}$ unchanged. 
In a state $\hat{\rho}$ we then have an $(N+1)$-dimensional numerical hermitian matrix $\Omega$ of the expectation values of the elements of $\hat{\Omega}$\,: \begin{eqnarray} \Omega = \langle \hat{\Omega} \rangle &=& {\rm Tr}(\hat{\rho}\,\hat{\Omega})= \left(\begin{array}{ccccc} 1 & \cdots & \cdots & \langle \hat{A}_{b}\rangle & \cdots \\ \vdots &&&\vdots&\\ \langle \hat{A}_{a}\rangle &\cdots & \cdots & \langle \hat{A}_{a}\hat{A}_{b}\rangle & \cdots \\ \vdots &&&\vdots& \end{array} \right), \nonumber \\ {\rm i.e.}, ~ \Omega_{ab} &=& {\rm Tr}(\hat{\rho}\,\hat{\Omega}_{ab})\,;\nonumber\\ \Omega^{\dagger}&=& \Omega. \label{2.5} \end{eqnarray} Now for any complex $(N+1)$ component column vector ${\bf C}= (c_0,\,c_1,\, \cdots, \, c_N)^T$ we have \begin{eqnarray} {\bf C}^{\dagger}\,\hat{\Omega}\,{\bf C}&=& {\bf C}^{\dagger}\hat{{\cal A}}\,({\bf C}^{\dagger}\hat{{\cal A}})^{\dagger}\geq 0, \nonumber \\ \langle {\bf C}^{\dagger}\,\hat{\Omega}\,{\bf C} \rangle &=& {\bf C}^{\dagger}\,{\Omega}\,{\bf C} \geq 0, \label{2.6} \end{eqnarray} leading immediately to\,: \begin{theorem} Positivity of $\hat{\rho}$ imputes positivity to the matrix $\Omega$, for every choice of $\hat{{\cal A}}$\,: {\begin{equation} \hat{\rho}\geq 0 \Rightarrow \Omega = \langle \hat{\Omega} \rangle = {\rm Tr}(\hat{\rho}\,\hat{\Omega})\geq 0\,, ~~ \forall \,\hat{\cal A}\,. \label{2.7} \end{equation}} \end{theorem} This is thus an uncertainty relation valid in every physical state $\hat{\rho}$. \vskip0.2cm \noindent {\bf Remark}\,: It is for the sake of definiteness and keeping in view the ensuing applications that we have assumed the entries $\hat{{A}}_{a}$ of $\hat{\cal A}$ and $\hat{A}$ to be all hermitian. This can be relaxed and each $\hat{A}_{a}$ can be a general linear operator pertinent to the system. The only change would be the replacement of $\hat{\cal A}^{T}$ in Eq.\,(\ref{2.4}) by $\hat{\cal A}^{\dagger}$, leading to a result similar to Theorem 1. \vskip0.2cm Depending on the basic kinematics of the system we can imagine various choices of the $\hat{A}_{a}$ geared to exhibiting corresponding symmetries or covariance properties of the uncertainty relation (\ref{2.7}). Specifically suppose there is a unitary operator $\overline{U}$ on ${\cal H}$ such that under conjugation the $\hat{A}_{a}$ go into (necessarily real) linear combinations of themselves\,: \begin{eqnarray} \overline{U}\,\overline{U}^{\,\dagger}&=& \overline{U}^{\,\dagger}\,\overline{U}=1\!\!1, \nonumber \\ \overline{U}^{\,-1}\,\hat{A}_{a}\,\overline{U}&=& R_{ab}\,\hat{A}_{b}, \nonumber \\ \overline{U}^{\,-1}\,\hat{\cal A}\,\overline{U}&=& {\cal R} \,\hat{\cal A}, \nonumber \\ {\cal R}&=& \left( \begin{array}{cc} 1 & 0 \\ 0 & R \end{array} \right), \,\,\,\, R=(R_{ab}). \label{2.8} \end{eqnarray} The matrix $R$ here is real $N$-dimensional nonsingular. Then combined with Eq.\,(\ref{2.5}) we have\,: \begin{eqnarray} \hat{\rho}^{\, \prime} = \overline{U}\,\hat{\rho}\,\overline{U}^{\,-1} \Rightarrow \Omega^{\, \prime} &=&{\rm Tr}(\hat{\rho}^{\, \prime}\,\hat{\Omega})={\rm Tr}(\hat{\rho}\,\overline{U}^{\,-1}\,\hat{\cal A}\,\hat{\cal A}^{T}\,\overline{U}) \nonumber \\ &=& {\cal R} \,\Omega \,{\cal R}^{T}, \nonumber \\ \Omega \geq 0 &\Leftrightarrow& \Omega^{\, \prime} \geq 0. \label{2.9} \end{eqnarray} This is because the passage $\Omega \rightarrow \Omega^{\, \prime}$ is a congruence transformation. 
Thus the uncertainty relation (\ref{2.7}) is covariant or explicitly preserved under the conjugation of the state $\hat{\rho}$ by the unitary transformation $\overline{U}$. We now introduce two lemmas concerning (finite-dimensional) nonnegative matrices, whose proofs are elementary\,: \begin{lemma} For a hermitian positive definite matrix in block form, \begin{eqnarray} Q=Q^{\dagger}= \left( \begin{array}{cc} A & C^{\dagger} \\ C & B \end{array} \right),\,\,\,A^{\dagger}=A\,,\;\,\,B^{\dagger}= B, \label{2.10} \end{eqnarray} we have \begin{eqnarray} Q > 0 \,\, \Leftrightarrow \,\, A > 0\,\,\,{\rm and}\,\,\,B-C\,A^{-1}C^{\dagger} >0. \label{2.11} \end{eqnarray} \end{lemma} The proof consists in noting that by a congruence we can pass from $Q$ to a block diagonal form\,\cite{hjbook}\,: \begin{eqnarray} Q = \left( \begin{array}{cc} 1\!\!1 & 0 \\ -CA^{-1} & 1\!\!1 \end{array} \right) \left( \begin{array}{cc} A & 0 \\ 0 & B- C A^{-1}C^{\dagger} \end{array} \right) \left( \begin{array}{cc} 1\!\!1 & 0 \\ -CA^{-1} & 1\!\!1 \end{array} \right)^{\dagger}. \label{2.12} \end{eqnarray} \begin{lemma} If we separate a hermitian matrix $Q$ into real symmetric and pure imaginary antisymmetric parts $R,\,S$ then \begin{eqnarray} Q=Q^{\dagger} = R + iS\geq 0, \,\,{\rm det}\,S \not= 0 \,\Rightarrow R > 0. \label{2.13} \end{eqnarray} \end{lemma} The nonsingularity of $S$ means that $Q$ must be even dimensional. (The proof, which is elementary, is omitted). Now we apply Lemma 1 to the $(N+1)$-dimensional matrix $\Omega$ in Eq.\,(\ref{2.5}), choosing a partitioning where $B$ is $N \times N$, $C$ is $N \times 1$ and $C^{\dagger}$ is $1 \times N$\,: \begin{eqnarray} \Omega = \left( \begin{array}{cc} A & C^{\dagger} \\ C & B \end{array} \right):\,\, A=1,\,\,B=(\langle \hat{A}_{a}\hat{A}_{b}\rangle),\,\,\,C=(\langle \hat{A_a} \rangle). \label{2.14} \end{eqnarray} Then from Eq.\,(\ref{2.11}) we conclude\,: \begin{theorem} \begin{eqnarray} \hat{\rho}\geq 0 &\Rightarrow & \Omega \geq 0 \Leftrightarrow \nonumber \\ \tilde{\Omega}&=& (\langle(\hat{A}_{a} - \langle\hat{A}_{a} \rangle)(\hat{A}_{b}- \langle \hat{A}_{b} \rangle) \rangle ) \geq 0. \label{2.15} \end{eqnarray} \end{theorem} All expectation values involved in the elements of the $N\times N$ matrix $\tilde{\Omega}$ are with respect to the state $\hat{\rho}$. The motivation for the definitions of $\hat{\cal A}$, $\hat{\Omega}$ as in Eq.\,(\ref{2.3}) is now clear\,: after an application of Lemma 1 we immediately descend from the matrix $\Omega$ to the matrix $\tilde{\Omega}$ involving only expectation values of products of deviations from means. It is then natural to write the elements of $\tilde{\Omega}$ as follows\,: \begin{eqnarray} \Delta \hat{A}_{a}&=& \hat{A}_{a}- \langle \hat{A}_{a} \rangle, \nonumber \\ \tilde{\Omega}_{ab}&=& \langle\Delta \hat{A}_{a}\Delta \hat{A}_{b} \rangle. \label{2.16} \end{eqnarray} We revert to this form shortly. 
The covariance of the statement (\ref{2.15}), Theorem 2, under a unitary symmetry $\overline{U}$ acting as in Eq.\,(\ref{2.8}) follows from a brief calculation\,: \begin{eqnarray} \hat{\rho} \rightarrow \hat{\rho}^{\, \prime} = \overline{U} \hat{\rho}\,\overline{U}^{\,-1} & \Rightarrow & \nonumber \\ \overline{U}^{\,-1} (\hat{A}_{a} - {\rm Tr}(\hat{\rho}^{\, \prime} \hat{A}_{a}))\overline{U} &=&\overline{U}^{\,-1} \hat{A}_{a} \overline{U} -{\rm Tr}(\hat{\rho}\,\overline{U}^{\,-1}\hat{A}_{a} \overline{U}) \nonumber \\ &=& R_{ab}(\hat{A}_{b}- {\rm Tr}(\hat{\rho}\hat{A}_{b})); \nonumber \\ \tilde{\Omega}\rightarrow \tilde{\Omega}^{\, \prime}&=& R \tilde{\Omega} R^{T}; \nonumber \\ \tilde{\Omega} \geq 0 & \Leftrightarrow & \tilde{\Omega}^{\, \prime} \geq 0. \label{2.17} \end{eqnarray} We now return to Eq.\,(\ref{2.16}). The state $\hat{\rho}$ being kept fixed, we can split the hermitian $N \times N$ matrix $\tilde{\Omega}$ into real symmetric and pure imaginary antisymmetric parts as follows\,: \begin{eqnarray} \tilde{\Omega}_{ab} &=& V_{ab}(\hat{\rho}\,;\,\hat{A})+ \frac{i}{2}\,\omega_{ab}(\hat{\rho}\,; \,\hat{A}), \nonumber \\ V_{ab}(\hat{\rho}\,;\,\hat{A})&=&V_{ba}(\hat{\rho}\,;\,\hat{A})= \frac{1}{2}\langle\{\Delta \hat{A}_{a}, \, \Delta \hat{A}_{b} \} \rangle \nonumber \\ &=& \frac{1}{2}\langle\{ \hat{A}_{a}, \, \hat{A}_{b} \} \rangle - \langle \hat{A}_{a}\rangle \langle \hat{A}_{b} \rangle; \nonumber \\ \omega_{ab}(\hat{\rho}\,;\,\hat{A})&=&- \omega_{ba}(\hat{\rho}\,;\,\hat{A})= \,- \, i\, \langle [ \hat{A}_{a}, \, \hat{A}_{b}] \rangle . \label{2.18} \end{eqnarray} The brackets $[\,\cdot\,,\,\cdot\,]$ and $\{\,\cdot\,,\,\cdot\,\}$ denote, as usual, the commutator and anticommutator respectively. The natural physical identification of the $N \times N$ real symmetric matrix $V(\hat{\rho}\,;\,\hat{A})=(V_{ab}(\hat{\rho}\,;\,\hat{A}))$ is that it is the variance matrix (or matrix of covariances) associated with the set $\{\hat{A}_{a} \}$ in the state $\hat{\rho}$. The uncertainty relation (\ref{2.15}) now reads\,: \begin{eqnarray} \hat{\rho}\geq 0 \,\,\Rightarrow \,\,V(\hat{\rho}\,;\,\hat{A})+ \frac{i}{2}\,\omega(\hat{\rho}\,;\,\hat{A}) \geq 0, \label{2.19} \end{eqnarray} and then by Lemma 2 we have the possible further consequence\,: \begin{eqnarray} {\rm det}\,\omega(\hat{\rho}\,;\,\hat{A}) \not= 0 \Rightarrow V(\hat{\rho}\,;\,\hat{A}) > 0\,. \label{2.20} \end{eqnarray} \vskip0.2cm \noindent {\bf Remark}\,: In case the operators $\hat{A}_{a}$ commute pairwise, in any state $\hat{\rho}$ there is a `classical' joint probability distribution over the sets of simultaneous eigenvalues of all the $\hat{A}_{a}$. In such a case, the term $\omega$ in Eqs.\,(\ref{2.18},\,\ref{2.19}) vanishes identically, and the uncertainty relation (\ref{2.19}) is a `classical' statement\,\cite{fine-jmp82}. Therefore in the general case a good name for $\omega_{ab}(\hat{\rho}\,; \,\hat{A})$ is that it is the `commutator correction' term. \vskip0.2cm It is instructive to appreciate that while the original definitions of $\Omega$ and $\tilde{\Omega}$, starting from the operator sets $\hat{\cal A}$ and $\hat{\Omega}$, make it essentially trivial to see that they must be nonnegative, the form (\ref{2.19}) of the general uncertainty relation gives prominence to the variance matrix $V(\hat{\rho}\,;\,\hat{A})$. In addition, as seen earlier, the matrix $\Omega$ does not directly deal with fluctuations. It is after the use of Lemma 1 that we obtain the matrix $\tilde{\Omega}$ involving the fluctuations.
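As a simple illustrative check of Eq.\,(\ref{2.19}) (an aside, not needed for the development), take a single qubit with the three Pauli matrices as the observables $\hat{A}_{a}$; the sketch below, for an arbitrarily chosen mixed state, computes $V$ and $\omega$ as in Eq.\,(\ref{2.18}) and verifies that $V + \frac{i}{2}\,\omega$ has no negative eigenvalues.
\begin{verbatim}
import numpy as np

# Pauli matrices chosen as the hermitian observables A_1, A_2, A_3.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = [sx, sy, sz]

# An arbitrarily chosen mixed qubit state rho = (1 + r.sigma)/2, |r| < 1.
r = (0.3, 0.2, 0.5)
rho = 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

mean = [np.trace(rho @ a).real for a in A]
V = np.zeros((3, 3))
omega = np.zeros((3, 3))
for a in range(3):
    for b in range(3):
        dA = A[a] - mean[a] * np.eye(2)
        dB = A[b] - mean[b] * np.eye(2)
        V[a, b] = 0.5 * np.trace(rho @ (dA @ dB + dB @ dA)).real   # Eq. (2.18)
        omega[a, b] = (-1j * np.trace(rho @ (A[a] @ A[b] - A[b] @ A[a]))).real

M = V + 0.5j * omega                              # the matrix in Eq. (2.19)
print(np.all(np.linalg.eigvalsh(M) >= -1e-12))    # True
\end{verbatim}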
From Eqs.\,(\ref{2.8}, \ref{2.17}), the effect of a unitary symmetry transformation on the real matrices $V(\hat{\rho}\,;\, \hat{A})$ and $\omega(\hat{\rho}\,;\, \hat{A})$ is seen to be\,: \begin{eqnarray} \hat{\rho}^{\, \prime}=\overline{U}\,\hat{\rho}\,\overline{U}^{\,-1}\,: &&~ V(\hat{\rho}^{\, \prime}\,;\,\hat{A})=R\,V(\hat{\rho}\,;\, \hat{A})\, R^{T}, \nonumber \\ &&~\omega(\hat{\rho}^{\, \prime}\,;\,\hat{A})=R\,\omega(\hat{\rho}\,; \hat{A})\, R^{T}, \label{2.21} \end{eqnarray} so that the form (\ref{2.19}) of the uncertainty relation is manifestly preserved. In later work, when there is no danger of confusion, we sometimes omit the arguments $\hat{\rho}$ and $\hat{A}$ in $V$ and $\omega$. \section{The multi mode Schr\"{o}dinger-Robertson Uncertainty Principle} As a first example of the general framework we consider briefly the Schr\"{o}dinger-Robertson UP for an $n$-mode system, which has been extensively discussed elsewhere\,\cite{dutta94,gaussians}. The basic operators, Cartesian coordinates and momenta, consist of $n$ pairs of canonical $\hat{q}$ and $\hat{p}$ variables obeying the Heisenberg canonical commutation relations. The operator properties and relations are\,: \begin{eqnarray} a=1,\,2,\,\cdots,\,2n\,: \;\; \hat{\xi}_{a} &=& \left\{ \begin{array}{cc} \hat{q}_{(a+1)/2}, \;& a ~ {\rm odd}\,, \\ \hat{p}_{a/2},\; & a ~ {\rm even}\,; \end{array} \right. \nonumber \\ \hat{\xi}_{a}^{\dagger} &=& \hat{\xi}_{a} \,; \nonumber \\ \left[ \hat{\xi}_{a},\,\hat{\xi}_{b} \right] &=& i \hbar \beta_{ab}\,,\nonumber\\ \beta = \,{\rm block ~ diag}\,(\,i\sigma_2,\,i\sigma_2,\,\cdots,\,i\sigma_2\,) &=& {1\!\!1}_{n \times n} \otimes i\sigma_2\,. \label{3.1} \end{eqnarray} These operators act irreducibly on the system Hilbert space ${\cal H}=L^{2}(\mathbb{R}^n)$. We take these $\hat{\xi}_{a}$ as the $\hat{A}_{a}$ of Eq.\,(\ref{2.3}), so here $N=2n$\,: \begin{eqnarray} \hat{{\cal A}} \rightarrow \left( \begin{array}{c} 1 \\ \hat{\xi} \end{array} \right),\,\,\, \hat{A}\rightarrow \hat{\xi} =\left( \begin{array}{c} \hat{\xi}_{1} \\ \vdots \\\hat{\xi}_{2n} \end{array}\right) =\left( \begin{array}{c} \hat{q}_{1} \\ \hat{p}_{1}\\ \vdots \\ \hat{q}_{n} \\ \hat{p}_{n} \end{array}\right). \label{3.2} \end{eqnarray} Then for any state $\hat{\rho}$, the variance matrix $V$ has elements \begin{eqnarray} V_{ab}&=& \frac{1}{2}\,{\rm Tr} \left(\hat{\rho}\,\{\hat{\xi}_{a}-{\rm Tr}(\hat{\rho}\,\hat{\xi}_{a}),\, \hat{\xi}_{b} - {\rm Tr}(\hat{\rho}\,\hat{\xi}_{b}) \}\right) \nonumber \\ &=&\frac{1}{2}\langle\{ \hat{\xi}_{a},\, \hat{\xi}_{b}\} \rangle - \langle \hat{\xi}_{a} \rangle\,\langle \hat{\xi}_{b} \rangle, \label{3.3} \end{eqnarray} while the antisymmetric matrix $\omega$ is just the {\em state-independent} numerical `symplectic metric matrix' $\beta$\,: \begin{eqnarray} \omega_{ab}= -i \, \langle\left[\hat{\xi}_{a},\,\hat{\xi}_{b} \right] \rangle = \hbar \beta_{ab}. \label{3.4} \end{eqnarray} The uncertainty relation (\ref{2.19}) then becomes the $n$-mode Schr\"{o}dinger-Robertson UP\,: \begin{eqnarray} \hat{\rho}\geq 0 \Rightarrow V + i \frac{\hbar}{2} \beta \geq 0\,\,(\Rightarrow V > 0), \label{3.5} \end{eqnarray} the second step following from Eq.\,(\ref{2.20}) as $\beta$ is nonsingular. 
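As a quick numerical illustration of the condition (\ref{3.5}) (an aside, with $\hbar$ set to unity and a variance matrix quoted here without derivation), one may check it for the familiar two-mode squeezed vacuum state in the ordering $(\hat{q}_1,\,\hat{p}_1,\,\hat{q}_2,\,\hat{p}_2)$ of Eq.\,(\ref{3.2}):
\begin{verbatim}
import numpy as np

# Variance matrix of a two-mode squeezed vacuum (hbar = 1, vacuum V = 1/2),
# quoted without derivation, in the ordering (q1, p1, q2, p2).
r = 0.8
ch, sh = np.cosh(2 * r), np.sinh(2 * r)
V = 0.5 * np.array([[ch,   0,  sh,   0],
                    [ 0,  ch,   0, -sh],
                    [sh,   0,  ch,   0],
                    [ 0, -sh,   0,  ch]])
beta = np.kron(np.eye(2), np.array([[0, 1], [-1, 0]]))   # Eq. (3.1)
M = V + 0.5j * beta                                      # Eq. (3.5), hbar = 1
print(np.all(np.linalg.eigvalsh(M) >= -1e-9))            # True: physical state
\end{verbatim}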
For $n=1$, a single mode, the matrices $V$ and $\beta$ are two-dimensional\,: \begin{eqnarray} && V = \left( \begin{array}{cc} (\Delta q)^2 & \Delta (q,p) \\ \Delta (q,p) & (\Delta p)^2 \end{array} \right), \nonumber \\ && (\Delta q)^2 = \langle (\hat{q} -\langle \hat{q} \rangle)^2 \rangle,\,\,\,\, (\Delta p)^2 = \langle (\hat{p} -\langle \hat{p} \rangle)^2\rangle, \nonumber \\ && \Delta (q,p) = \frac{1}{2}\langle \{\hat{q}-\langle \hat{q} \rangle,\, \hat{p} - \langle \hat{p} \rangle \} \rangle\,; \nonumber \\ && \beta = i \sigma_{2} = \left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right). \label{3.6} \end{eqnarray} Then (\ref{3.5}) simplifies to \begin{eqnarray} \left( \begin{array}{cc} (\Delta q)^2 & \Delta (q,p)+\frac{i}{2} \hbar \\ \Delta (q,p)-\frac{i}{2}\hbar & (\Delta p)^2 \end{array} \right) &\geq& 0,\nonumber\\ i.e., \; {\rm det}\left( V + \frac{i}{2}\hbar \beta \right)\equiv (\Delta q)^2 (\Delta p)^2 -(\Delta(q,p))^2 -\frac{\hbar^2}{4} &\geq& 0,\nonumber\\ i.e., \; {\rm det}\,V &\geq& \frac{\hbar^2}{4}\,, \label{3.7} \end{eqnarray} the original Schr\"{o}dinger-Robertson UP. Returning to $n$ modes, the $Sp(2n,\,R)$ covariance of the Schr\"{o}dinger-Robertson UP (\ref{3.5}) takes the following form\,: If $S \in Sp(2n,\, R)$, i.e., any real $2n \times 2n$ matrix obeying $S \beta S^T = \beta$, then the new operators \begin{eqnarray} \hat{\xi}_{a}^{'}=S_{ab} \, \hat{\xi}_{b} \label{3.8} \end{eqnarray} preserve the commutation relations in Eq.\,(\ref{3.1}) and hence are unitarily related to the $\hat{\xi}_{a}$. These unitary transformations constitute the double-valued metaplectic unitary representation of $Sp(2n,\,R)$\,\cite{pramana95}\,: \begin{eqnarray} S \in Sp(2n,\,R) &\rightarrow& \overline{U}(S)={\rm unitary\,\,operator\,\, on\,\,{\cal H}}, \nonumber \\ \overline{U}(S^{\, \prime})\overline{U}(S) &=& \pm \overline{U}(S^{\, \prime} S)\,; \nonumber \\ \overline{U}(S)^{-1}\,\hat{\xi}_{a}\, \overline{U}(S)&=& S_{ab}\,\hat{\xi}_{b}. \label{3.9} \end{eqnarray} Then, as an instance of Eqs.\,(\ref{2.21}), we have the results\,: \begin{eqnarray} &&\hat{\rho}\rightarrow \hat{\rho}^{\, \prime} =\overline{U}(S)\,\hat{\rho}\, \overline{U}(S)^{-1} \Rightarrow V \rightarrow V^{\, \prime} = S\,V \, S^{T}, \nonumber \\ &&V + \frac{i}{2}\,\hbar\, \beta \geq 0 \,\, \Leftrightarrow \,\, V^{\, \prime} + \frac{i}{2}\, \hbar\, \beta \geq 0. \label{3.10} \end{eqnarray} \vskip0.2cm \noindent {\bf Remark}\,: The $n$-mode Schr\"{o}dinger-Robertson UP (\ref{3.5}), with its explicit $Sp(2n,\,R)$ covariance (\ref{3.10}), constitutes the answer to an important question raised by Littlejohn\,\cite{littlejohn86}: under what conditions is a real normalized Gaussian function on a $2n$-dimensional phase space the Wigner distribution for some quantum state? The answer is stated in terms of the variance matrix which of course determines the Gaussian up to phase space displacements [and these phase space displacements have no bearing on the `Wigner quality' of a phase space distribution]. This result has been used extensively in both classical and quantum optics\,\cite{gaussians}, and more recently in quantum information theory of continuous variable canonical systems\,\cite{cv}. \vskip0.2cm As a last comment we mention that, since according to Eq.\,(\ref{3.5}) the variance matrix $V$ is always positive definite, by Williamson's celebrated theorem an $S \in Sp(2n,\,R)$ can be found such that $V^{\, \prime}$ in Eq.\,(\ref{3.10}) becomes diagonal\,\cite{Williamson,RSSCVS}.
In general, though, the diagonal elements of $V^{\, \prime}$ will not be the eigenvalues of $V$. \section{Higher order moments for single mode system} We now revert to the $n=1$ case of one canonical pair of hermitian operators $\hat{q}$ and $\hat{p}$, but consider expectation values of expressions in these operators of order greater than two. The relevant Hilbert space is of course ${\cal H}= L^{2}(\mathbb{R})$. As a useful computational tool we work with the Wigner distribution description of quantum states, and the associated Weyl rule of association of (hermitian) operators with (real) classical phase space functions. Given a quantum mechanical state $\hat{\rho}$, the corresponding Wigner distribution is a function on the classical two-dimensional phase space\,: \begin{eqnarray} W(q, p)=\frac{1}{2 \pi \hbar} \int_{-\infty}^{\infty} dq^{\, \prime} \left\langle q -\frac{1}{2} q^{\, \prime} \right|\hat{\rho} \left| q + \frac{1}{2} q^{\, \prime} \right\rangle e^{ipq^{\, \prime}/\hbar}. \label{4.1} \end{eqnarray} Thus it is a partial Fourier transform of the position space matrix elements of $\hat{\rho}$. This function is real and normalised to unity, but need not be pointwise nonnegative\,: \begin{eqnarray} \hat{\rho}^{\dagger}=\hat{\rho} & \Rightarrow & W(q,p)^{*}=W(q,p)\,; \nonumber \\ {\rm Tr}\,\hat{\rho}=1 &\Rightarrow& \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}dqdp \,W(q,p)=1. \label{4.2} \end{eqnarray} The operator $\hat{\rho}$ and the function $W(q,p)$ determine each other uniquely. The {\em key property} is that the quantum expectation values of operator exponentials are equal to the classical phase space averages of classical exponentials with respect to $W(q,p)$ \,\cite{Cahill}\,: \begin{eqnarray} {\rm Tr}(\hat{\rho}\,e^{i(\theta\, \hat{q}- \tau\, \hat{p})}) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} dq dp \,W(q,p) e^{i(\theta\, {q}- \tau\, {p})},\,\,\,-\infty <\, \theta,\, \tau\,< \infty. \label{4.3} \end{eqnarray} By expanding the exponentials and comparing powers of $\theta$ and $\tau$ we get\,: \begin{eqnarray} {\rm Tr}(\hat{\rho}\, \widehat{(q^n\,p^{n'})}) &=& \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} dq dp \,W(q,p) q^{n}p^{n'}, \nonumber \\ \widehat{(q^n\,p^{n'})}&=&{\rm coefficient\,\,of\,\,} \frac{(i\theta)^n}{n!}\,\frac{(-i\tau)^{n'}}{{n'}!}\,\,{\rm in}\,\,\ e^{i(\theta\, \hat{q}- \tau\, \hat{p})} \nonumber \\ &=&\frac{n!\,{n'}!}{(n+n')!} \times \,\,{\rm coefficient \,\,of \,\,} \theta^n(-\tau)^{n'} \,\,{\rm in}\,\,(\theta\,\hat{q}- \tau\,\hat{p})^{n + n'}, \nonumber \\ && n,\,n'=0,\,1,\,2,\, \cdots. \label{4.4} \end{eqnarray} Thus $\widehat{(q^n\,p^{n'})}$ is an hermitian operator polynomial in $\hat{q}$ and $\hat{p}$ associated to the classical real monomial $q^np^{n'}$. This is the Weyl rule of association indicated by \begin{eqnarray} \widehat{(q^n\,p^{n'})}={(q^n\,p^{n'})}_{W}, \label{4.5} \end{eqnarray} so Eq.\,(\ref{4.4}) appears as \begin{eqnarray} {\rm Tr}(\hat{\rho}\,{(q^n\,p^{n'})}_{W} )= \int \int dq dp\, W(q,p) \,q^n p^{n'}. \label{4.6} \end{eqnarray} We regard the polynomials ${(q^n\,p^{n'})}_{W}$ as the basic `quantum monomials'. By linearity the association (\ref{4.5}) can be extended to general functions on the classical phase space, leading to the scheme\,: \begin{eqnarray} f(q,p)={\rm real\,\,classical\,\,function\,\,} &\rightarrow & \hat{F}= (f(q,p))_{W}={\rm hermitian\,\, operator\,\, on \,\,{\cal H}}, \nonumber \\ {\rm Tr}(\hat{\rho}\,\hat{F})&=& \int \int dqdp\, W(q,p)\,f(q,p). 
\label{4.7} \end{eqnarray} \vskip 0.2cm \noindent {\bf Remarks}: Two useful comments may be made at this point. For any pair of states $\hat{\rho},\;\hat{\rho}^{\prime}$ we have \begin{eqnarray} {\rm Tr}(\hat{\rho}\,\hat{\rho}^{\prime})&=& \int \int dq dp\, W(q,p)\,W^{\prime}(q,p)\geq 0. \label{4.7a} \end{eqnarray} Based on this, one can see the following\,: a given real normalised phase space function $W(q,p)$ is a Wigner distribution (corresponding to some physical state $\hat{\rho}$) if and only if the overlap integral on the right hand side of Eq.\,(\ref{4.7a}) is nonnegative for all Wigner distributions $W^{\, \prime}(q,p)$. Secondly, we refer to the remarks made following Eq.\,(\ref{2.20}) concerning the commutative case $\left[\hat{A}_{a},\,\hat{A}_{b} \right]=0$. This happens for instance when $\hat{A}_{a}=f_{a}(\hat{q})$ for all $a$. In that case, only the integral of $W(q,p)$ over $p$ is relevant, and this is known to be the coordinate space probability density in the state $\hat{\rho}$\,\cite{hilleryPR}. In the multi mode case this generalizes to the following statement\,: the result of integrating $W(q_1,p_1,\,q_2,p_2,\,\cdots\,,\,q_n,p_n)$ over any ($n$-dimensional) linear Lagrangian subspace in phase space is always a genuine probability distribution (the marginal) over the `remaining' $n$ phase space variables. {\em This marginal is basically the squared modulus, or probability density in the Born sense, of a wavefunction on the corresponding `configuration space', generalised to the case of a mixed state}\,\cite{hilleryPR}. The covariance group of the canonical commutation relation obeyed by $\hat{q}$ and $\hat{p}$ is (apart from phase space translations) the group $Sp(2,\,R)$\,: \begin{eqnarray} Sp(2,\,R)&=& \left\{ S=\left(\begin{array}{cc} a & b \\ c & d \end{array} \right) ={\rm real \,\,} 2\times 2\,\,{\rm matrix} \,\,|\,\,S\,\sigma_2\,S^{T} =\sigma_2,\,\;{\rm i.e.,}\; {\rm det}\,S =1 \right\}\,.~~ \label{4.8} \end{eqnarray} The actions on $\hat{q}$ and $\hat{p}$ by matrices and by the unitary metaplectic representation of $Sp(2,\,R)$ are connected in this manner\,: \begin{eqnarray} S \in Sp(2,\,R) &\rightarrow &\overline{U}(S)={\rm unitary\,\,operator\,\,on\,\,{\cal H}}\,; \nonumber \\ \xi = \left(\begin{array}{c} q \\ p \end{array} \right) & \rightarrow & \hat{\xi} =(\xi)_{W}= \left(\begin{array}{c} \hat{q} \\ \hat{p} \end{array} \right)\,: \nonumber \\ \overline{U}(S)^{-1}\,\hat{\xi}\,\overline{U}(S)&=& S\,\hat{\xi}. \label{4.9} \end{eqnarray} The effect on $W(q,p)\equiv W(\xi)$ is then given as\,\,\cite{dutta94,gaussians}\,: \begin{eqnarray} \hat{\rho}^{\, \prime}=\overline{U}(S)\,\hat{\rho}\,\overline{U}(S)^{-1} \leftrightarrow W^{\, \prime}(\xi)=W(S^{-1}\,\xi). \label{4.10} \end{eqnarray} We now introduce a more suggestive notation for the classical monomials $q^n p^{n'}$ and their operator counterparts $(q^n p^{n'})_{W}$. This is taken from the quantum theory of angular momentum (QTAM) and uses the fact that finite-dimensional nonunitary irreducible real representations of $Sp(2,\,R)$ are related to the unitary irreducible representations of $SU(2)$ by analytic continuation. (Indeed the two sets of generators are related by the unitary Weyl trick). 
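Equation\,(\ref{4.1}) and the marginal property just quoted are easy to verify numerically. The following minimal sketch (Python with numpy, $\hbar = 1$; the first excited oscillator state and the grid parameters are illustrative choices) evaluates the partial Fourier transform on a grid, checks the normalisation of Eq.\,(\ref{4.2}), exhibits the negativity of $W$, and recovers the position probability density as the $p$-marginal\,:
\begin{verbatim}
# Minimal numerical sketch of Eq. (4.1) for the first excited oscillator
# state (hbar = 1); checks Eq. (4.2) and the p-marginal quoted above.
import numpy as np

hbar = 1.0
psi = lambda x: (1 / np.pi) ** 0.25 * np.sqrt(2.0) * x * np.exp(-x ** 2 / 2)

qs = np.linspace(-5, 5, 201); dq = qs[1] - qs[0]
ps = np.linspace(-5, 5, 201); dp = ps[1] - ps[0]
qprime = np.linspace(-10, 10, 801); dqp = qprime[1] - qprime[0]

W = np.zeros((len(qs), len(ps)))
for i, q in enumerate(qs):
    kernel = psi(q - qprime / 2) * np.conj(psi(q + qprime / 2))
    for j, p in enumerate(ps):
        phase = np.exp(1j * p * qprime / hbar)
        W[i, j] = np.real(np.sum(kernel * phase) * dqp) / (2 * np.pi * hbar)

print(np.sum(W) * dq * dp)                       # ~ 1  (normalisation)
print(W.min())                                   # < 0  (not a probability)
marginal = np.sum(W, axis=1) * dp                # integrate over p
print(np.max(np.abs(marginal - psi(qs) ** 2)))   # ~ 0  (Born-rule marginal)
\end{verbatim}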
We use `quantum numbers' $j=0,\, \frac{1}{2},\,1,\, \cdots$, $m = j,\,j-1,\, \cdots,\, -j$ as in QTAM and define the hermitian monomial basis for operators on ${\cal H}$ in this way\,: \begin{eqnarray} \hat{T}_{jm}=(q^{j+m}p^{j-m})_{W} &=& {\rm coefficient \,\,of\,\,} \frac{(2j)!}{(j+m)!(j-m)!} \,\theta^{j+m} (-\tau)^{j-m}\,\,{\rm in} \,\, (\theta\,\hat{q}-\tau\,\hat{p})^{2j},\nonumber \\ &&j=0,\,\frac{1}{2},\,1,\,\cdots,\,;\,\,\,m=j,\,j-1,\,\cdots,\, -j. \label{4.11} \end{eqnarray} For the first few values of $j$ we have \begin{eqnarray} &&(\hat{T}_{\frac{1}{2} m})= \left(\begin{array}{c} \hat{q}\\ \hat{p} \end{array} \right);\,\,\, (\hat{T}_{1 m})= \left(\begin{array}{c} \hat{q}^2\\ \frac{1}{2}\{\hat{q},\, \hat{p}\} \\ \hat{p}^2 \end{array} \right);\,\,\, (\hat{T}_{\frac{3}{2} m})= \left(\begin{array}{c} \hat{q}^3\\ \frac{1}{3}(\hat{q}^2\hat{p} + \hat{q}\hat{p}\hat{q}+ \hat{p} \hat{q}^2)\\ \frac{1}{3}(\hat{q}\hat{p}^2 + \hat{p}\hat{q}\hat{p}+ \hat{p}^2 \hat{q})\\ \hat{p}^3 \end{array} \right); \nonumber \\ &&(\hat{T}_{2 m})= \left(\begin{array}{c} \hat{q}^4\\ \frac{1}{4}(\hat{q}^3\hat{p} + \hat{q}^2\hat{p}\hat{q}+ \hat{q}\hat{p}\hat{q}^2 + \hat{p} \hat{q}^3)\\ \frac{1}{6}(\hat{q}^2 \hat{p}^2 +\hat{q}\hat{p}\hat{q}\hat{p} +\hat{q}\hat{p}^2 \hat{q} + \hat{p}\hat{q}^2 \hat{p} + \hat{p}\hat{q}\hat{p}\hat{q} + \hat{p}^2 \hat{q}^2) \\ \frac{1}{4}(\hat{q}\hat{p}^3 + \hat{p}\hat{q}\hat{p}^2+ \hat{p}^2\hat{q}\hat{p} + \hat{p}^3\hat{q})\\ \hat{p}^4 \end{array} \right). \label{4.12} \end{eqnarray} Then we have the consequences\,: \begin{eqnarray} && {\rm Tr}(\hat{\rho}\,\hat{T}_{jm}) = \int \int dqdp\, W(q,p) \,q^{j+m}\,p^{j-m} \equiv \overline{q^{j+m}\,p^{j-m}}\,; \nonumber \\ && S\in Sp(2,\,R):\;\, \overline{U}(S)^{-1}\, \hat{T}_{jm}\, \overline{U}(S) = \sum_{m'= -j}^{j} K^{(j)}_{m m'} (S)\,\hat{T}_{jm'}. \label{4.13} \end{eqnarray} The quantum expectation values of the $\hat{T}_{jm}$ are phase space moments of $W(q,p)$, denoted for convenience with an overhead bar. The matrices $K^{(j)}(S)$ constitute the $(2j+1)$-dimensional real nonunitary irreducible representation of $Sp(2,\,R)$ obtained from the familiar `spin $j$' unitary irreducible representation of $SU(2)$ by analytic continuation. For $j = \frac{1}{2}$, we have $K^{(1/2)}(S)=S$. The representation $K^{(1)}(S)$ corresponding to $j=1$ will be seen to engage our sole attention in Section~V. The noncommutative (but associative) product law for the hermitian monomial operators $\hat{T}_{jm}$ has an interesting form, being essentially determined by the $SU(2)$ Clebsch-Gordan coefficients. This is not surprising, in view of the connection between $SU(2)$ and $Sp(2,\,R)$ representations (in finite dimensions) mentioned above. In fact for these representations and in chosen bases, $SU(2)$ and $Sp(2,\, R)$ share the same Clebsch-Gordan coefficients\,\cite{moments2012}. The product formula has a particularly simple structure if we (momentarily) use suitable numerical multiples of $\hat{T}_{jm}$\,: \begin{eqnarray} \hat{\tau}_{jm}= \hat{T}_{jm}/\sqrt{(j+m)!(j-m)!}. \label{4.14} \end{eqnarray} Then we find\,\cite{moments2012} \begin{eqnarray} \hat{\tau}_{jm}\,\hat{\tau}_{j' m'} &=& \sum _{j'' = |j-j'|}^{j+j'} \left(\frac{i \hbar}{2} \right)^{j+j'-j''} \sqrt{\frac{(j+j' + j'' +1)!}{(2j'' +1)(j+j'-j'')! (j'+j'' -j)! (j'' +j -j')!}} \nonumber \\ && \times C^{j\,j'\,j''}_{m\,m'\, m+m'}\, \hat{\tau}_{j'',\,m +m'}. \label{4.15} \end{eqnarray} The $C^{j\,j'\,j''}_{m\,m'\,m''}$ are the $SU(2)$ Clebsch-Gordan coefficients familiar from QTAM\,\cite{edmonds}. 
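The explicitly symmetrised forms displayed in Eq.\,(\ref{4.12}) can be generated mechanically: for the cases shown there, $(q^{n}p^{n'})_{W}$ is the equal-weight average of $q^{n}p^{n'}$ over all distinct operator orderings. A minimal symbolic sketch (Python with sympy; the helper name {\tt weyl\_monomial} is ours) reproducing two entries of Eq.\,(\ref{4.12}) is\,:
\begin{verbatim}
# Minimal sketch: the Weyl-ordered monomial as an equal-weight average over
# all distinct operator orderings, compared with Eq. (4.12).
import sympy as sp
from sympy.utilities.iterables import multiset_permutations

q, p = sp.symbols('q p', commutative=False)
sym = {'q': q, 'p': p}

def weyl_monomial(n, nprime):
    words = list(multiset_permutations('q' * n + 'p' * nprime))
    total = sp.Integer(0)
    for word in words:
        term = sp.Integer(1)
        for ch in word:
            term = term * sym[ch]
        total += term
    return sp.expand(total / len(words))

print(weyl_monomial(2, 1))   # (q**2*p + q*p*q + p*q**2)/3 : T_{3/2, 1/2}
print(weyl_monomial(2, 2))   # six orderings, weight 1/6 each : T_{2, 0}
\end{verbatim}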
We will use this product rule in the sequel. Now we apply the general framework of Section II to the present situation. We will use a notation similar to that in the main theorems of the classical theory of moments. We take $\hat{A}$ and $\hat{{\cal A}}$ to formally be infinite component column vectors with hermitian entries\,: \begin{eqnarray} \hat{A} &=& \left(\begin{array}{c} \vdots \\ \hat{T}_{jm} \\ \vdots \end{array} \right) = (\hat{T}_{\frac{1}{2}\, \frac{1}{2}}, \,\hat{T}_{\frac{1}{2}\, \frac{-1}{2}},\, \hat{T}_{1\,1},\, \hat{T}_{1\,0},\,\hat{T}_{1\, -1},\,\cdots, \,\hat{T}_{jj},\,\cdots,\, \hat{T}_{j,-j},\,\cdots)^{T}, \nonumber \\ \hat{\cal A}&=& \left(\begin{array}{c}1 \\ \hat{A} \end{array} \right). \label{4.16} \end{eqnarray} Thus the subscript $a$ of Eq.\,(\ref{2.4}) is now the pair $jm$ taking values in the sequence given above. To simplify notation, as $\hat{A}$ is kept fixed, we will not indicate it as an argument in various quantities. The general entries in the infinite-dimensional matrices $\hat{\Omega}$, $\Omega$, $\tilde{\Omega}$ in Eqs.\,(\ref{2.4},\,\,\ref{2.5}) are then\,: \begin{eqnarray} \hat{\Omega}_{jm, j'm'} &=& \hat{T}_{jm}\,\hat{T}_{j'm'}\,; \nonumber \\ \Omega_{jm,j'm'}(\hat{\rho}) &=&{\rm Tr}(\hat{\rho}\,\hat{T}_{jm}\,\hat{T}_{j'm'})= \langle \hat{T}_{jm}\,\hat{T}_{j'm'} \rangle\,; \nonumber \\ \tilde{\Omega}_{jm,j'm'}(\hat{\rho}) &=& \langle (\hat{T}_{jm}- \langle\hat{T}_{jm} \rangle)\,(\hat{T}_{j'm'}-\langle \hat{T}_{j'm'} \rangle) \rangle. \label{4.17} \end{eqnarray} (In $\hat{\Omega}$ and $\Omega$, for $j=m=0$, we have $\hat{T}_{00} =1$). By using the product rule (\ref{4.15}) the (generally nonhermitian) operator $\hat{T}_{jm}\,\hat{T}_{j'm'}$ can be written as a complex linear combination of $\hat{T}_{j'',m+m'}$ with $j'' = j+j',\, j+j'-1,\,\cdots,\, |j-j'|$. The variance matrix $V(\hat{\rho})$ in Eq.\,(\ref{2.18}) has the elements \begin{eqnarray} V_{jm,j'm'}(\hat{\rho}) =\frac{1}{2} \langle \{\hat{T}_{jm}, \,\hat{T}_{j'm'} \} \rangle - \langle \hat{T}_{jm} \rangle\,\langle \hat{T}_{j'm'} \rangle. \label{4.18} \end{eqnarray} From the known symmetry relation \,\cite{edmonds} \begin{eqnarray} C^{j'\,j\,j''}_{m'\,m\, m+m'} = (-1)^{j+j'-j''}\,C^{j\,j'\,j''}_{m\,m'\,m+m'} \label{4.19} \end{eqnarray} we see that in the anticommutator term in Eq.\,(\ref{4.18}) only $\hat{T}_{m+m'}^{j''}$ for $j''= j+j',\,j+j'-2,\, j+j'-4, \, \cdots$ will appear with real coefficients. On the other hand, for the antisymmetric part $\omega_{ab}$ of Eq.\,(\ref{2.18}) we have \begin{eqnarray} \omega_{jm,j'm'}(\hat{\rho}) = -i \left\langle \left[\hat{T}_{jm} ,\,\hat{T}_{j'm'}\right] \right\rangle , \label{4.20} \end{eqnarray} so now by Eq.\,(\ref{4.19}) the commutator here is a linear combination of terms $\hat{T}_{m+m'}^{j''}$ for $j''= j+j'-1,\,j+j'-3,\, \cdots$ with pure imaginary coefficients. There is, therefore, a clean separation of the product $\hat{T}_{jm}\, \hat{T}_{j'm'}$ into a hermitian part in $V$ and an antihermitian part in $\omega$. With these facts in mind, the uncertainty relation (\ref{2.19}) is in hand\,: \begin{eqnarray} V_{jm,j'm'}(\hat{\rho})&=&\sum_{j+j'-j''\,\,{\rm even}} \cdots\,\,\langle \hat{T}_{j'',m+m'} \rangle - \langle \hat{T}_{jm} \rangle\, \langle \hat{T}_{j'm'} \rangle, \nonumber \\ \omega_{jm,j'm'}(\hat{\rho})&=& \sum_{j+j'-j''\,\,{\rm odd}} \cdots\,\,\langle \hat{T}_{j'',m+m'} \rangle\,; \nonumber \\ (\tilde{\Omega}_{jm,j'm'}(\hat{\rho})) &=&(V_{jm,j'm'}(\hat{\rho})) +\frac{i}{2}\,(\omega_{jm,j'm'}(\hat{\rho})) \geq 0. 
\label{4.21} \end{eqnarray} Each matrix element of $V(\hat{\rho})$ (apart from the subtracted term) and of $\omega(\hat{\rho})$ appears as some real linear combination of expectation values of hermitian monomial operators, i.e., of moments of $W(q,p)$\,; however, in this way of writing, the essentially trivial nature of the statement $\tilde{\Omega}(\hat{\rho}) \geq 0$ is not manifest. The covariance group in this problem is of course $Sp(2,\,R)$. From Eq.\,(\ref{4.13}) we see that under conjugation by the metaplectic group unitary operator $\overline{U}(S)$, the column vector $\hat{A}$ of Eq.\,(\ref{4.16}) transforms as a direct sum of the sequence of finite-dimensional real irreducible nonunitary representation matrices $K^{(1/2)}(S)=S$, $K^{(1)}(S)$, $K^{(3/2)}(S)\,\cdots$\,; so Eq.\,(\ref{2.8}) in the present context is\,\cite{moments2012}\,: \begin{eqnarray} &&S\in Sp(2,\,R)\,:\,\,\,\, \overline{U}(S)^{-1}\, \hat{A}\, \overline{U}(S) = K(S)\,\hat{A}, \nonumber \\ &&K(S)= K^{(1/2)}(S)\oplus K^{(1)}(S)\oplus K^{(3/2)}(S)\,\oplus \,\cdots \label{4.22} \end{eqnarray} From Eq.\,(\ref{2.21}), when $\hat{\rho}\rightarrow \hat{\rho}'= \overline{U}(S) \,\hat{\rho}\, \overline{U}(S)^{-1}$ both $V(\hat{\rho})$ and $\omega(\hat{\rho})$ experience congruence transformations by $K(S)$, and the formal uncertainty relation (\ref{4.21}) is preserved. Up to this point the use of infinite component $\hat{A}$ and infinite-dimensional $\Omega$, $\tilde{\Omega}$, $V$ and $\omega$ has been formal. We may now interpret the uncertainty relation (\ref{4.21}) in practical terms to mean that for each finite $N=1,\,2,\,\cdots\,$, the principal submatrix of $\tilde{\Omega}(\hat{\rho})$ formed by its first $N$ rows and columns should be nonnegative. However, in order to maintain $Sp(2,\,R)$ covariance, a slight modification of this procedure is desirable. If for each $J=\frac{1}{2},\,1,\,\frac{3}{2},\,\cdots$ we include all values of $j\,m$ for $j\leq J$, the number of rows (and columns) involved is $N_{J}=J(2J+3)$, the sequence of integers $2,\,5,\,9,\,14,\,\cdots$. Let us then define hierarchies of $N_{J}$-dimensional matrices as\,: \begin{eqnarray} J=\frac{1}{2},\,1,\,\frac{3}{2},\,\cdots\,:~~&& \nonumber\\ \tilde{\Omega}^{(J)}(\hat{\rho})&=& (\tilde{\Omega}_{jm,j'm'}(\hat{\rho})), \nonumber \\ V^{(J)}(\hat{\rho}) &=& (V_{jm,j'm'}(\hat{\rho})), \nonumber \\ {\omega}^{(J)}(\hat{\rho}) &=& ({\omega}_{jm,j'm'}(\hat{\rho})), \,\,\,j,\,j'=\frac{1}{2},\,1,\,\cdots,\,J\,; \nonumber \\ \tilde{\Omega}^{(J)}(\hat{\rho}) &=& V^{(J)}(\hat{\rho})+\frac{i}{2} {\omega}^{(J)}(\hat{\rho}). \label{4.23} \end{eqnarray} However, in each of these matrices {\em the matrix elements themselves have no $J$ dependence}. Each also naturally breaks up into blocks of dimension $(2j+1)\times(2j^{\, \prime}+1)$ for each pair $(j,j^{\, \prime})$ present, and these can be denoted by $\tilde{\Omega}^{(j,j')}(\hat{\rho})$, $V^{(j,j')}(\hat{\rho})$, ${\omega}^{(j,j')}(\hat{\rho})$. Symbolically, \begin{eqnarray} \tilde{\Omega}^{(J)}(\hat{\rho})= \left( \begin{array}{ccc} & \vdots & \\ \cdots & \tilde{\Omega}^{(j,j')}(\hat{\rho}) & \cdots \\ & \vdots & \end{array} \right) \label{4.24} \end{eqnarray} and similarly for $V^{(J)}(\hat{\rho})$ and $\omega^{(J)}(\hat{\rho})$.
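As a trivial bookkeeping check (an illustrative Python fragment, not part of the formalism), one may enumerate the index pairs $(j,m)$ with $\frac{1}{2}\leq j \leq J$ and confirm the counting $N_{J}=J(2J+3)=2,\,5,\,9,\,14,\,\cdots$ quoted above\,:
\begin{verbatim}
# Enumerate the index pairs (j, m), 1/2 <= j <= J, and check N_J = J(2J+3).
from fractions import Fraction

def index_set(J):
    J = Fraction(J)
    idx, j = [], Fraction(1, 2)
    while j <= J:
        m = j
        while m >= -j:
            idx.append((j, m))
            m -= 1
        j += Fraction(1, 2)
    return idx

for J in [Fraction(1, 2), Fraction(1), Fraction(3, 2), Fraction(2)]:
    print(J, len(index_set(J)), J * (2 * J + 3))   # 2, 5, 9, 14, ...
\end{verbatim}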
As examples we have\,: \begin{eqnarray} \tilde{\Omega}^{(1/2)}(\hat{\rho})&=&(\tilde{\Omega}^{\left( \frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho}))\,; \nonumber \\ \tilde{\Omega}^{(1)}(\hat{\rho})&=& \left(\begin{array}{cc} \tilde{\Omega}^{\left( \frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho}) & \tilde{\Omega}^{\left( \frac{1}{2}, 1 \right)}(\hat{\rho})\\ \tilde{\Omega}^{\left( 1 , \frac{1}{2} \right)}(\hat{\rho}) & \tilde{\Omega}^{\left( 1,1 \right)}(\hat{\rho}) \end{array} \right), \label{4.25} \end{eqnarray} and correspondingly for $V^{(J)}$, $\omega^{(J)}$. Moreover, in going from $J$ to $J +\frac{1}{2}$, we have an augmentation of each matrix with $2(J+1)$ new rows and columns, \begin{eqnarray} \tilde{\Omega}^{(J + 1/2)}(\hat{\rho})= \left( \begin{array}{ccc} \tilde{\Omega}^{(J)}(\hat{\rho}) & \begin{array}{c} \vdots \\ \vdots \\ \vdots \end{array} & \begin{array}{c} \tilde{\Omega}^{\left( \frac{1}{2}, J + \frac{1}{2} \right)}(\hat{\rho}) \\ \vdots \\ \tilde{\Omega}^{\left( J, J+ \frac{1}{2} \right)}(\hat{\rho}) \end{array} \\ \begin{array}{ccccc} \cdots\,\,\,\, &\,\,\,\, \cdots\,\,\,\, &\,\,\,\, \cdots\,\,\,\, & \,\,\,\,\cdots\,\,\,\,& \,\,\,\,\cdots \end{array} && \cdots\,\,\,\,\cdots \\ \begin{array}{ccc} \tilde{\Omega}^{\left( J+ \frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho}) & \cdots & \tilde{\Omega}^{\left( J + \frac{1}{2}, J \right)}(\hat{\rho}) \end{array} &\vdots & \tilde{\Omega}^{\left( J+ \frac{1}{2}, J+ \frac{1}{2} \right)}(\hat{\rho}) \end{array}\right). \label{4.26} \end{eqnarray} The formal uncertainty relation (\ref{4.21}) now translates into a hierarchy of finite-dimensional matrix conditions \begin{eqnarray} \tilde{\Omega}^{(J )}(\hat{\rho}) = V^{(J )}(\hat{\rho})+ \frac{i}{2} {\omega}^{(J)}(\hat{\rho}) \geq 0,\,\,J=\frac{1}{2},\,1,\, \frac{3}{2},\, \cdots. \label{4.27} \end{eqnarray} (Of course, for a given state $\hat{\rho}$, moments may exist and be finite only up to some value $J_{\rm max}$ of $J$, so the hierarchy (\ref{4.27}) also terminates at this point). 
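As a concrete numerical illustration of the first two members of the hierarchy (\ref{4.27}), the following sketch (Python with numpy, $\hbar=1$; the Fock-space cutoff and the state $(|0\rangle + |1\rangle)/\sqrt{2}$ are arbitrary choices, and the truncation is harmless because the state occupies only the lowest levels) forms $\tilde{\Omega}^{(1)}(\hat{\rho})$ and verifies that it, and its upper-left $2\times 2$ block, are nonnegative\,:
\begin{verbatim}
# Minimal sketch of Eq. (4.27) at J = 1 in a truncated Fock basis (hbar = 1).
import numpy as np

hbar, D = 1.0, 40
a = np.diag(np.sqrt(np.arange(1, D)), k=1)        # annihilation operator
q = np.sqrt(hbar / 2) * (a + a.T)
p = -1j * np.sqrt(hbar / 2) * (a - a.T)

A = [q, p, q @ q, 0.5 * (q @ p + p @ q), p @ p]   # T_{1/2,m} and T_{1,m}

psi = np.zeros(D, dtype=complex)
psi[0] = psi[1] = 1 / np.sqrt(2)                  # (|0> + |1>)/sqrt(2)
rho = np.outer(psi, psi.conj())

mean = [np.trace(rho @ X).real for X in A]
Omega = np.array([[np.trace(rho @ (A[i] - mean[i] * np.eye(D))
                                @ (A[j] - mean[j] * np.eye(D)))
                   for j in range(5)] for i in range(5)])

print(np.linalg.eigvalsh(Omega))          # all >= 0 : J = 1 member of (4.27)
print(np.linalg.eigvalsh(Omega[:2, :2]))  # J = 1/2 member: Schroedinger-Robertson
\end{verbatim}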
The lowest condition in this hierarchy, $J=\frac{1}{2}$, takes us back to Eqs.\,(\ref{3.6},\,\,\ref{3.7})\,: \begin{eqnarray} \tilde{\Omega}^{\left(\frac{1}{2} \right)}(\hat{\rho}) &=& \tilde{\Omega}^{\left( \frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho})= V^{\left(\frac{1}{2} \right)}(\hat{\rho})+ \frac{i}{2} {\omega}^{\left( \frac{1}{2} \right)}(\hat{\rho})\,; \nonumber \\ V^{\left(\frac{1}{2} \right)}(\hat{\rho}) &=& V^{\left(\frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho}) = \left( \left\langle \frac{1}{2} \{\hat{T}_{\frac{1}{2}, m}, \,\hat{T}_{\frac{1}{2}, m'} \}\right\rangle\right)- \left( \begin{array}{c} \langle \hat{q} \rangle \\ \langle \hat{p} \rangle \end{array} \right)\, \begin{array}{c} \left( \begin{array}{cc} \langle\hat{q}\rangle &\langle\hat{p} \rangle \end{array} \right) \\ \\ \end{array} \nonumber \\ &=& \left( \begin{array}{cc} (\Delta q)^2 & \Delta(q,p) \\ \Delta (q,p) & (\Delta p)^2 \end{array} \right)\,; \nonumber \\ {\omega}^{\left(\frac{1}{2} \right)}(\hat{\rho}) &=& {\omega}^{\left( \frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho})= -i \left( \begin{array}{cc} 0 & \left[\hat{q},\,\hat{p} \right] \\ \left[\hat{p},\,\hat{q} \right] & 0 \end{array} \right)= i\,\hbar\, \sigma_2\,; \nonumber \\ \tilde{\Omega}^{\left(\frac{1}{2} \right)}(\hat{\rho}) &\geq& 0\,\,\,\,\,\Leftrightarrow \,\,\,\,\, \left( \begin{array}{cc} (\Delta q)^2 & \Delta(q,p)+ i \,\frac{\hbar}{2} \\ \Delta (q,p)-i\, \frac{\hbar}{2} & (\Delta p)^2 \end{array} \right)\,\ge 0\,\,\Leftrightarrow \nonumber \\ && (\Delta q)^2\,(\Delta p)^2 - (\Delta (q,p))^2 \geq \frac{\hbar^2}{4}, \label{4.28} \end{eqnarray} the original Schr\"{o}dinger-Robertson UP. It is natural to ask for the new conditions that appear at each step in the hierarchy (\ref{4.27}), in passing from $J$ to $J +\frac{1}{2}$. In the generic case, when we have a strict inequality we can find the answer using Lemma 1 of Section II. Comparing $\tilde{\Omega}^{\left(J+ \frac{1}{2} \right)}(\hat{\rho})$ and $\tilde{\Omega}^{\left(J\right)}(\hat{\rho})$, in the notation of Eq.\,(\ref{2.10}) and using Eq.\,(\ref{4.26}) we have\,: \begin{eqnarray} &&\tilde{\Omega}^{\left(J+ \frac{1}{2} \right)}(\hat{\rho})= \left( \begin{array}{cc} A & C^{\dagger} \\ C & B \end{array} \right)\,;\nonumber \\ && A = \tilde{\Omega}^{\left(J \right)}(\hat{\rho})\,,\,\,\, \,\,B = \tilde{\Omega}^{\left(J+ \frac{1}{2}, J +\frac{1}{2} \right)}(\hat{\rho}), \nonumber \\ && C= \left( \begin{array}{ccc} \tilde{\Omega}^{\left(J+ \frac{1}{2}, \frac{1}{2} \right)}(\hat{\rho})& \cdots & \tilde{\Omega}^{\left(J+ \frac{1}{2}, J \right)}(\hat{\rho}) \end{array} \right). \label{4.29} \end{eqnarray} The `dimensions' are $N_{J} \times N_{J}$, $2(J+1)\times 2(J+1)$, $2(J+1)\times N_{J}$ respectively. Then \begin{eqnarray} \tilde{\Omega}^{\left(J +\frac{1}{2} \right)}(\hat{\rho}) > 0 \,\, \Leftrightarrow \,\, \tilde{\Omega}^{\left(J \right)}(\hat{\rho}) > 0, \,\,\,\,B -C \,A^{-1}\,C^{\dagger} > 0, \label{4.30} \end{eqnarray} where $A$, $B$, $C$ are taken from Eq.\,(\ref{4.29}). One can see that some complication arises from the need to compute $A^{-1}$ in the new condition. In the next Section, we analyse the case $J=\frac{1}{2} \rightarrow J +\frac{1}{2}=1$ in some detail, as the first nontrivial step going beyond the Schr\"{o}dinger-Robertson UP (\ref{3.7},\,\,\ref{4.28}). Before we turn to this task, however, a note on the non-generic case of singular $A$ seems to be in order. 
\vskip0.2cm \noindent {\bf Remark}\,: Lemma 1 expresses the positive definiteness of a hermitian matrix $Q$ in the block form (\ref{2.10}) in terms of conditions on the lower dimensional blocks. The block form itself is a description of $Q$ with respect to a given breakup of the underlying vector space on which $Q$ acts, into two mutually orthogonal subspaces. Both $A$ and $B$ are hermitian. For the case of positive semidefinite $Q$, there are two possibilities at the level of $A$, $B$, $C$. If $A^{-1}$ exists, then $Q\geq 0$ translates into $A > 0$, $B-C\,A^{-1}\,C^{\dagger}\geq 0$. In case $A$ is singular, while of course $Q \geq 0$ implies $A \geq 0$, the question is what other condition on $B$, $C$ is implied. To answer this, we further separate the subspace on which $A$ acts into two mutually orthogonal subspaces---one corresponding to the null subspace of $A$, and the other on which $A$ acts invertibly, say as $A_{1}$. Then in such a description, the block form of $Q$ is initially refined to the form \begin{eqnarray} Q \simeq \left( \begin{array}{ccc} 0 & 0 & C_{2}^{\dagger} \\ 0 & A_{1} & C_{1}^{\dagger} \\ C_{2} & C_{1} & B \end{array} \right), \end{eqnarray} with the original $A$ and $C$ being respectively $\left(\begin{array}{cc} 0 & 0 \\ 0 & A_1\end{array} \right)$ and $(C_2,\,C_1)$. But now one sees easily that $Q \geq 0$ implies $C_2=0$, so as $A_1^{\,-1}$ exists, we have in this situation \begin{eqnarray} Q \geq 0 \,\,\,\Leftrightarrow \,\,\, A_{1} > 0,\,\,\, B- C_{1}\,A_{1}^{-1}\, C_{1}^{\dagger} \geq 0. \end{eqnarray} This is the description of the nongeneric situation mentioned above. \section{$SO(2,1)$ analysis of fourth order moments} The first nontrivial step in the hierarchy of uncertainty relations (\ref{4.27}), after the Schr\"{o}dinger-Robertson UP (\ref{3.7},\,\,\ref{4.28}), occurs in going from $J =\frac{1}{2}$ to $J + \frac{1}{2}=1$. We study this in some detail, especially as it brings into evidence the equivalence of the irreducible representation $K^{(1)}(S)$ of $Sp(2,\,R)$ and the defining representation of the three-dimensional proper homogeneous Lorentz group $SO(2,1)$\,\cite{simon84}. Indeed $K^{(2)}(S)$, $K^{(3)}(S)$, $\cdots$ are all true representations of $SO(2,1)$\,\cite{LCB}. It is useful to introduce specific symbols for the operators $\hat{T}_{\frac{1}{2}m}$, $\hat{T}_{1m}$ in the present context. We write \begin{eqnarray} (\hat{T}_{\frac{1}{2}m})&=&(\hat{\xi}_m) =\left( \begin{array}{c} \hat{q} \\ \hat{p} \end{array} \right),\,\,\,\,m=\frac{1}{2},\,-\frac{1}{2}\,; \nonumber \\ (\hat{T}_{1m})&=&(\hat{X}_{m})= \left( \begin{array}{c} \hat{q}^2 \\ \frac{1}{2}\,\{\hat{q},\,\hat{p} \} \\ \hat{p}^2 \end{array} \right),\,\,\,\,m=1,\,0,\,-1\,; \label{5.1} \end{eqnarray} so that one immediately recognises that $\hat{\xi}$ is a two-component spinor, and $\hat{X}$ a three-component vector, with respect to $SO(2,1)$ (see below). Their products can be computed using Eq.\,(\ref{4.15}) or more directly by simple algebra\,: \begin{equation} \begin{array}{rclc} \hat{\xi}_{m}\,\hat{\xi}_{m'} &=& \hat{X}_{m+m'} + i\,\frac{\hbar}{2} (-1)^{m-\frac{1}{2}} \,\delta_{m+m',\,0}\,; & (a) \\ \hat{\xi}_{m}\,\hat{X}_{m'}&=& \hat{T}_{\frac{3}{2},\,m+m'} + i \,\frac{\hbar}{2} (-1)^{m-\frac{1}{2}}\,\hat{\xi}_{m+m'}\,; & (b) \\ \hat{X}_{m}\,\hat{X}_{m'} &=& \hat{T}_{2,\,m+m'} + \frac{\hbar^2}{4}(-1)^m\, (1+m^2)\,\delta_{m+m',0} + i\,\hbar\,(m-m')\,\hat{X}_{m+m'}. 
&(c) \end{array} \label{5.2} \end{equation} In ($5.2a$) the leading $J=1$ term is symmetric in $m$, $m^{\, \prime}$\,; while the pure imaginary $J=0$ second term is antisymmetric. In ($5.2b$) it is understood that $\hat{\xi}_{\pm \frac{3}{2}}=0$. In ($5.2c$) the first two $J=2$ and $J=0$ terms are symmetric in $m$, $m^{\, \prime}$\,; while the third $J=1$ term is antisymmetric. These features agree with the pattern in Eq.\,(\ref{4.21}). For $J=\frac{1}{2}$ in Eq.\,(\ref{4.29}) we have \begin{eqnarray} A=\tilde{\Omega}^{\left(\frac{1}{2},\,\frac{1}{2} \right)}(\hat{\rho}),\,\,\,\, B=\tilde{\Omega}^{\left(1,\,1 \right)}(\hat{\rho}),\,\,\,\, C=\tilde{\Omega}^{\left( 1,\,\frac{1}{2} \right)}(\hat{\rho}),\,\,\,\, \label{5.3} \end{eqnarray} with `dimensions' $2 \times 2$, $3 \times 3$, $3 \times 2$ respectively. (Throughout this Section, $A$, $B$, $C$ will have these meanings). Their behaviours under $Sp(2,\,R)$ are \begin{eqnarray} \hat{\rho}\rightarrow \hat{\rho}^{\, \prime} =\overline{U}(S)\,\hat{\rho}\,\overline{U}(S)^{-1} &\Rightarrow & A \rightarrow S\,A\,S^{T},\,\,\,\,B \rightarrow K^{(1)}(S)\, B \, K^{(1)}(S)^{T}, \nonumber \\ && C \rightarrow K^{(1)}(S) \,C \,S^{T}. \label{5.4} \end{eqnarray} Assuming $A^{-1}$ exists, we have \begin{eqnarray} A^{-1} \rightarrow (S^{-1})^{T}\, A^{-1}\, S^{-1}, \label{5.5} \end{eqnarray} and consequently, \begin{eqnarray} B- C \,A^{-1}\,C^{\dagger} &\rightarrow& K^{(1)}(S)\,(B - C\, A^{-1}\,C^{\dagger})\,K^{(1)}(S)^{T}, \label{5.6} \end{eqnarray} which as expected is a congruence. The matrix $K^{(1)}(S)$ is easily found. At the level of classical variables\,: \begin{eqnarray} &&S=\left( \begin{array}{cc} a & b\\ c&d \end{array} \right) \in Sp(2,\,R)\,:\,\,\,\, \left(\begin{array}{c} q \\ p \end{array} \right) \rightarrow S\,\left(\begin{array}{c} q \\ p \end{array} \right) \Rightarrow \nonumber \\ &&(X_{m}(q,\,p))= \left( \begin{array}{c} q^2 \\ qp \\ p^2 \end{array} \right) \rightarrow K^{(1)}(S)\,(X_{m}(q,p)), \nonumber \\ &&K^{(1)}(S)=\left( \begin{array}{ccc} a^2 & 2ab & b^2 \\ ac & ad+bc & bd \\ c^2 & 2cd & d^2 \end{array} \right). \label{5.7} \end{eqnarray} The link to $SO(2,1)$ can be seen in two (essentially equivalent) ways, either through $A$ or through $(X_{m}(q,p))$. We now outline both. We introduce indices $\mu$, $\nu$, $\cdots$ going over values $0,\,3,\,1$ (in that sequence) and a three-dimensional Lorentz metric $g_{\mu \,\nu}={\rm diag}(+1,\,-1,\,-1)$. This metric and its inverse $g^{\mu\,\nu}$ are used for lowering and raising Greek indices. The defining representation of the proper homogeneous Lorentz group $SO(2,1)$ is then\,: \begin{eqnarray} SO(2,1)=\{ \Lambda =({\Lambda^{\mu}}_{\nu})=3\times3 {\rm\,\, real\,\,matrix\,\,}&|&\,\,{\Lambda^{\mu}}_{\nu}\,\Lambda_{\mu \lambda} \equiv g_{\mu \tau}\,{\Lambda^{\mu}}_{\nu}\,{\Lambda^{\tau}}_{\lambda}=g_{\nu \lambda},\nonumber \\ &&~~~{\rm det}\,\Lambda =+1,\,\,\,{\Lambda^{0}}_{0}\geq 1 \}. \label{5.8} \end{eqnarray} This is a three-parameter noncompact Lie group. Now expand $A=\tilde{\Omega}^{\left(\frac{1}{2},\,\frac{1}{2} \right)}(\hat{\rho})$ in terms of Pauli matrices as follows\,: \begin{eqnarray} A=\tilde{\Omega}^{\left(\frac{1}{2},\,\frac{1}{2} \right)}(\hat{\rho}) = x^{\mu}\,\sigma_{\mu} - \frac{\hbar}{2}\,\sigma_2 = \left(\begin{array}{cc} x^{0}+x^{3} & x^{1} \\ x^{1} & x^{0} - x^{3} \end{array}\right) - \frac{\hbar}{2}\,\sigma_2 . 
\label{5.9} \end{eqnarray} From Eqs.\,(\ref{3.6},\,\,\ref{4.28}) we have (indicating $\hat{\rho}$ dependences)\,: \begin{eqnarray} x^{0}(\hat{\rho})=\frac{1}{2}((\Delta q)^2 + (\Delta p)^2),\,\,\, x^{3}(\hat{\rho})= \frac{1}{2}((\Delta q)^2 - (\Delta p)^2),\,\,\, x^{1}(\hat{\rho})=\Delta(q,p)\,. \label{5.10} \end{eqnarray} Then the transformation rule for $A$ in Eq.\,(\ref{5.4}), combined with $S\,\sigma_2 \,S^T = \sigma_2$, leads to a rule for $x^{\mu}(\hat{\rho})$\,: \begin{eqnarray} && \hat{\rho} \rightarrow \overline{U}(S)\,\hat{\rho}\,\overline{U}(S)^{-1}\,\,\Rightarrow \,\, A \rightarrow S\,A\,S^{T}\,\,\Rightarrow \,\, x^{\mu}(\hat{\rho}) \rightarrow {\Lambda^{\mu}}_{\nu}(S)\,x^{\nu}(\hat{\rho}), \nonumber \\ \Lambda(S)&=&\left( \begin{array}{ccc} \frac{1}{2}(a^2 + b^2 + c^2 + d^2) &\frac{1}{2}(a^2 - b^2 + c^2 - d^2) & ab + cd \\ \frac{1}{2}(a^2 + b^2 - c^2 - d^2) &\frac{1}{2}(a^2 - b^2 - c^2 + d^2) & ab-cd \\ ac+bd & ac-bd & ad+bc \end{array} \right) \,\,\in SO(2,1)\,. \label{5.11} \end{eqnarray} Thus $x^{\mu}(\hat{\rho})$ transforms as a Lorentz three-vector, and the associated invariant is seen to be \begin{eqnarray} x^{\mu}(\hat{\rho})\,x_{\mu}(\hat{\rho})=g_{\mu \nu} \,x^{\mu}(\hat{\rho})\,x^{\nu}(\hat{\rho}) = (\Delta q)^2\,(\Delta p)^2 - (\Delta(q,p))^2 \geq \frac{\hbar^2}{4}, \label{5.12} \end{eqnarray} so the Schr\"{o}dinger-Robertson UP implies the geometrical statement that $x^{\mu}(\hat{\rho})$ is positive time-like. The matrices $K^{(1)}(S)$ by which $\hat{X}_{m}$ transform under $Sp(2,\,R)$ are related by a fixed similarity transform to the $\Lambda(S)$ above. If in terms of classical variables we pass from the components $X_{m}(q,p)$ in Eq.\,(\ref{5.7}) to a new set of components $X^{\mu}(q,p)$ by \begin{eqnarray} (X^{\mu}(q,p))&=& \left( \begin{array}{c} \frac{1}{2}(q^2 + p^2) \\ \frac{1}{2}(q^2 -p^2) \\ qp \end{array} \right) = M\,\left( \begin{array}{c} q^2 \\ qp \\ p^2 \end{array} \right), \nonumber \\ X^{\mu}(q,p)&=& {M^{\mu}}_{m}\,X_{m}(q,p),\,\,\,\,X_{m}(q,p)=M^{-1}_{m \mu}\,X^{\mu}(q,p), \nonumber \\ M&=& ({M^{\mu}}_{m})=\left( \begin{array}{rrr} \frac{1}{2} & ~0 & \frac{1}{2} \\ \frac{1}{2} & ~0 & - \frac{1}{2} \\ 0 & ~1 & 0 \end{array} \right),\,\,\,\, M^{-1}=(M^{-1}_{m \mu})= \left( \begin{array}{rrr} 1 & 1 & ~0 \\ 0 & 0 & ~1 \\ 1 & -1 & ~0 \end{array} \right), \label{5.13} \end{eqnarray} then in place of Eq.\,(\ref{5.7}) we have \begin{eqnarray} \left( \begin{array}{c}q \\p \end{array} \right) \rightarrow S\,\left( \begin{array}{c}q \\p \end{array} \right) \Rightarrow X^{\mu}(q,p)&\rightarrow &{M^{\mu}}_{m} \, K^{(1)}_{mm'}(S)\, M^{-1}_{m' \nu}\,X^{\nu}(q,p) \nonumber \\ &=& {\Lambda^{\mu}}_{\nu}(S)\,X^{\nu}(q,p), \nonumber \\ K^{(1)}(S)&=& M^{-1}\,\Lambda(S)\,M. \label{5.14} \end{eqnarray} At the operator level we have \begin{eqnarray} &&\hat{X}^{0}=\frac{1}{2}(\hat{q}^2 + \hat{p}^2),\,\,\,\hat{X}^{3}= \frac{1}{2}(\hat{q}^2 - \hat{p}^2),\,\,\,\hat{X}^1=\frac{1}{2}\{\hat{q},\, \hat{p} \},\nonumber \\ && \hat{X}^{\mu}= {M^{\mu}}_{m}\,\hat{X}_{m}, \label{5.15} \end{eqnarray} and, as a consequence of Eq.\,(\ref{4.13}), the twin equivalent transformation laws\,: \begin{eqnarray} S \in Sp(2,\,R)\,:~~&& \overline{U}(S)^{-1}\,\hat{X}_{m}\,\overline{U}(S)=K^{(1)}_{mm'}(S) \,\hat{X}_{m'} , \nonumber \\ &&\overline{U}(S)^{-1}\,\hat{X}^{\mu}\,\overline{U}(S)={\Lambda^{\mu}}_{\nu}(S) \,\hat{X}^{\nu}. \label{5.16} \end{eqnarray} The upshot is that the matrices $K^{(1)}(S)$ are just the `ordinary' homogeneous Lorentz transformation matrices $\Lambda(S)$ in a `tilted' basis.
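Relations (\ref{5.7}), (\ref{5.11}) and (\ref{5.14}) are easily confirmed numerically. The minimal sketch below (Python with numpy; the particular $S\in Sp(2,\,R)$ is an arbitrary illustrative choice) checks that $\Lambda(S)$ preserves $g={\rm diag}(1,-1,-1)$ and that $K^{(1)}(S)=M^{-1}\,\Lambda(S)\,M$\,:
\begin{verbatim}
# Minimal sketch verifying Eqs. (5.7), (5.11), (5.14) for an arbitrary S.
import numpy as np

a, b, c = 1.3, 0.4, -0.7
d = (1 + b * c) / a                    # enforce det S = ad - bc = 1
S = np.array([[a, b], [c, d]])

K1 = np.array([[a*a, 2*a*b, b*b],
               [a*c, a*d + b*c, b*d],
               [c*c, 2*c*d, d*d]])                       # Eq. (5.7)

Lam = np.array([[(a*a + b*b + c*c + d*d)/2, (a*a - b*b + c*c - d*d)/2, a*b + c*d],
                [(a*a + b*b - c*c - d*d)/2, (a*a - b*b - c*c + d*d)/2, a*b - c*d],
                [a*c + b*d, a*c - b*d, a*d + b*c]])      # Eq. (5.11)

g = np.diag([1.0, -1.0, -1.0])
M = np.array([[0.5, 0.0, 0.5], [0.5, 0.0, -0.5], [0.0, 1.0, 0.0]])  # Eq. (5.13)

print(np.allclose(Lam @ g @ Lam.T, g))                 # Lambda(S) in SO(2,1)
print(np.allclose(K1, np.linalg.inv(M) @ Lam @ M))     # Eq. (5.14)
\end{verbatim}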
The metric preserved by them is easily found though unfamiliar\,: \begin{eqnarray} &&K^{(1)}(S)\,g_K \,K^{(1)}(S)^T = g_K , \nonumber \\ && g_{K}=M^{-1}\,g\, (M^{-1})^T = \left( \begin{array}{ccc} 0 & 0 & 2 \\ 0 &-1 & 0 \\ 2 & 0 &0 \end{array} \right). \label{5.17} \end{eqnarray} This enables us to use the nomenclature and geometrical features of three-dimensional Minkowski space even while working with operators $\hat{X}_{m}$ and transformation matrices $K^{(1)}(S)$. Now we proceed to analyse the three matrices $A$, $B$, $C$ and the combination $B- C\,A^{-1}\,C^{\dagger}$. (We have already parametrised $A$ in Eqs.\,(\ref{5.9},\,\,\ref{5.10})). Using Eqs.\,(\ref{5.2}), their matrix elements are \begin{eqnarray} A_{mm'}&=& \langle\hat{\xi}_{m}\,\hat{\xi}_{m'} \rangle - \langle\hat{\xi}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle \nonumber \\ &=& \langle \hat{X}_{m+m'} \rangle - \langle\hat{\xi}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle +i\,\frac{\hbar}{2}\,(-1)^{m-\frac{1}{2}}\,\delta_{m,\,-m'} \nonumber \\ &=&(x^{\mu}\,\sigma_{\mu})_{mm'} +i \,\frac{\hbar}{2}\,(-1)^{m-\frac{1}{2}}\,\delta_{m,\,-m'}\,; \nonumber \\ B_{mm'}&=& \langle\hat{X}_{m}\,\hat{X}_{m'} \rangle - \langle\hat{X}_{m} \rangle \,\langle \hat{X}_{m'} \rangle \nonumber \\ &=& \langle \hat{T}_{2,m+m'} \rangle + \frac{\hbar^2}{4}\,(-1)^{m}\,(1+m^2)\,\delta_{m,-m'}- \langle\hat{X}_{m} \rangle \,\langle \hat{X}_{m'} \rangle + i\,\hbar \,(m-m')\,\langle \hat{X}_{m+m'} \rangle ; \nonumber \\ C_{mm'}&=& \langle\hat{X}_{m}\,\hat{\xi}_{m'} \rangle - \langle\hat{X}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle \nonumber \\ &=& \langle \hat{T}_{\frac{3}{2},m+m'} \rangle - \langle\hat{X}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle - i\,\frac{\hbar}{2}\,(-1)^{m'-\frac{1}{2}}\,\langle \hat{\xi}_{m+m'} \rangle. \label{5.18} \end{eqnarray} In each of these expressions, the possible values for $m$, $m^{\, \prime}$ are evident from the context. We now note an important fact in respect of the final forms of all three expressions\,: apart from explicit appearances of $i$ in the last terms, {\em all other quantities are real}. This allows us to easily separate each of $A$, $B$, $C$ into real and imaginary parts, which in the cases of $A$ and $B$ are respectively symmetric and antisymmetric in $m$ and $m^{\, \prime}$. [\,This is already seen in Eq.\,(\ref{5.9}) for $A$\,]. We write these as follows\,: \begin{equation} \begin{array}{rclc} A &=& A_{1}+ i\,A_{2}, & \\ (A_{1})_{mm'}&=& \langle\hat{X}_{m+m'} \rangle - \langle\hat{\xi}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle =(x^{\mu}\,\sigma_{\mu})_{mm'}\,, & \\ (A_{2})_{mm'}&=& \,\frac{\hbar}{2}\,(-1)^{m-\frac{1}{2}}\,\delta_{m,\,-m'}; & ~~(a) \\ B&=&B_{1} + i\,B_{2}, & \\ (B_{1})_{mm'}&=& \langle \hat{T}_{2,m+m'} \rangle + \frac{\hbar^2}{4}\,(-1)^{m}\,(1+m^2)\,\delta_{m,-m'}- \langle\hat{X}_{m} \rangle \,\langle \hat{X}_{m'} \rangle, & \\ (B_{2})_{mm'} & =& \,\hbar \,(m-m')\,\langle \hat{X}_{m+m'} \rangle ; & ~~(b) \\ C&=&C_{1}+ i\,C_{2}; & \\ (C_{1})_{mm'}&=& \langle \hat{T}_{\frac{3}{2},m+m'} \rangle - \langle\hat{X}_{m} \rangle \,\langle \hat{\xi}_{m'} \rangle, & \\ (C_{2})_{mm'}&=& - \,\frac{\hbar}{2}\,(-1)^{m'-\frac{1}{2}}\,\langle \hat{\xi}_{m+m'} \rangle\,; & \\ C^{\dagger}&=& C_{1}^{T} - i\,C_{2}^{T}\,. & ~~(c) \end{array} \label{5.19} \end{equation} To deal similarly with $B - C\,A^{-1} \,C^{\dagger}$, we need an expression for $A^{-1}$.
We will assume the generic situation in which $A$ is nonsingular, \begin{eqnarray} {\rm det}\,A \equiv \kappa^{-1}=x^{\mu}\,x_{\mu} - \frac{\hbar^2}{4} > 0, \label{5.20} \end{eqnarray} so that \begin{eqnarray} A^{-1}&=& \kappa (x^0 - x^3 \sigma_3 -x^1 \sigma_1 + \frac{\hbar}{2}\,\sigma_2) \nonumber \\ &=& \kappa(\tilde{x}^{\mu}\,\sigma_{\mu} + \frac{\hbar}{2}\,\sigma_2), \nonumber \\ \tilde{x}^{\mu}&=& (x^0,\,-x^3,\,-x^1). \label{5.21} \end{eqnarray} The transformation law for $A^{-1}$ under $S \in Sp(2,\,R)$ given in Eq.\,(\ref{5.5}) is different from (though equivalent to) the law for $A$. Thus, while the $\tilde{x}^{\mu}$ do follow a definite (i.e., well defined tensorial\,\cite{simon84}) transformation law, there are some differences (in signs) compared to the law followed by $x^{\mu}$. Clearly the two terms in Eq.\,(\ref{5.21}) are, as they stand, the real symmetric and the pure imaginary antisymmetric parts of $A^{-1}$. We can now handle $B- C\, A^{-1}\,C^{\dagger}$ in the same manner as above\,: \begin{eqnarray} B - C\,A^{-1}\,C^{\dagger} &=& B_{1}+ i\,B_{2}- \kappa (C_{1}+ i\,C_{2})\,(\tilde{x}\cdot \sigma +\frac{\hbar}{2}\,\sigma_2)\,(C_{1}^{T}-i\,C_{2}^{T}) \nonumber \\ &=&V^{({\rm eff})}+ \frac{i}{2} \,\omega^{({\rm eff})}, \nonumber \\ V^{({\rm eff})}&=& B_{1} - \kappa(C_{1}\,\tilde{x}\cdot\sigma\,C_{1}^{T} +C_{2}\,\tilde{x}\cdot\sigma C_{2}^{T} + i \,\frac{\hbar}{2} C_{2}\,\sigma_{2}\,C_{1}^{T} -i\,\frac{\hbar}{2}\, C_{1}\,\sigma_2\,C_{2}^{T}), \nonumber \\ \frac{1}{2}\,\omega^{({\rm eff})}&=& B_{2} - \kappa(C_{2}\,\tilde{x}\cdot\sigma\,C_{1}^{T} - C_{1}\,\tilde{x}\cdot\sigma C_{2}^{T} - i \,\frac{\hbar}{2} C_{1}\,\sigma_{2}\,C_{1}^{T} -i\,\frac{\hbar}{2}\, C_{2}\,\sigma_2\,C_{2}^{T}). ~~ \label{5.22} \end{eqnarray} This decomposition is in the spirit and notation of Eq.\,(\ref{2.18}) of the general framework. However, $V^{({\rm eff})}$ and $\omega^{({\rm eff})}$ do not correspond any longer to expectation values of simple anticommutators and commutators among relevant operators, as was the case in Eqs.\,(\ref{2.18},\,\ref{4.18},\,\ref{4.20}). Both $V^{({\rm eff})}$ and $\omega^{({\rm eff})}$ are real three-dimensional matrices with elements $V^{({\rm eff})}_{m m'}$, $\omega^{({\rm eff})}_{m m'}$, where $m,\,m^{\, \prime} = 1,\,0,\,-1$\,; and they are respectively symmetric and antisymmetric. It does not seem possible to simplify the expressions (\ref{5.22}) to any significant extent, as they are already expressed in terms of the independent real expectation values $\langle \hat{\xi}_m \rangle$, $\langle \hat{X}_{m} \rangle$, $\langle \hat{T}_{\frac{3}{2}, m} \rangle$, $\langle \hat{T}_{2,m} \rangle$ which are the moments of the Wigner distribution $W(q,\, p)$ of orders up to and including the fourth. Under action by $S\in Sp(2,\,R)$ we have from Eq.\,(\ref{5.6})\,: \begin{eqnarray} \hat{\rho}\rightarrow \overline{U}(S)\,\hat{\rho}\, \overline{U}(S)^{-1}\,\, &\Rightarrow& \,\,V^{\rm (eff)} \rightarrow K^{(1)}(S)\,V^{\rm (eff)}\,K^{(1)}(S)^{T}, \nonumber \\ && \,\,\omega^{\rm (eff)} \rightarrow K^{(1)}(S)\, \omega^{\rm (eff)}\,K^{(1)}(S)^{T}. \label{5.23} \end{eqnarray} The added uncertainty relation up to the fourth order going beyond the Schr\"{o}dinger-Robertson UP (\ref{3.7},\,\,\ref{4.28}), reads [\,in the generic case ${\rm det}\,A > 0$\,]\,: \begin{eqnarray} V^{\rm (eff)} + \frac{i}{2}\,\omega^{\rm (eff)} \geq 0, \label{5.24} \end{eqnarray} which is an $SO(2,1)$ covariant statement by virtue of Eq.\,(\ref{5.23}). 
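The condition (\ref{5.24}) can be probed numerically along the same lines as the earlier sketches. The following fragment (Python with numpy, $\hbar=1$; the Fock-space cutoff and the pure state $\propto |0\rangle + |1\rangle + |2\rangle$ are arbitrary illustrative choices) builds the blocks $A$, $B$, $C$ of Eq.\,(\ref{5.3}) and checks that the Schur complement $B - C\,A^{-1}\,C^{\dagger}$ is nonnegative\,:
\begin{verbatim}
# Minimal sketch of the fourth-order condition (5.24)/(4.30) in a truncated
# Fock basis (hbar = 1).
import numpy as np

hbar, D = 1.0, 40
a = np.diag(np.sqrt(np.arange(1, D)), k=1)
q = np.sqrt(hbar / 2) * (a + a.T)
p = -1j * np.sqrt(hbar / 2) * (a - a.T)

xi = [q, p]                                   # hat{xi}_m of Eq. (5.1)
X = [q @ q, 0.5 * (q @ p + p @ q), p @ p]     # hat{X}_m  of Eq. (5.1)

psi = np.zeros(D, dtype=complex)
psi[0] = psi[1] = psi[2] = 1 / np.sqrt(3)
rho = np.outer(psi, psi.conj())

def cov(ops1, ops2):
    m1 = [np.trace(rho @ O).real for O in ops1]
    m2 = [np.trace(rho @ O).real for O in ops2]
    return np.array([[np.trace(rho @ ops1[i] @ ops2[j]) - m1[i] * m2[j]
                      for j in range(len(ops2))] for i in range(len(ops1))])

A = cov(xi, xi)                               # 2 x 2 block of Eq. (5.3)
B = cov(X, X)                                 # 3 x 3 block
C = cov(X, xi)                                # 3 x 2 block
schur = B - C @ np.linalg.inv(A) @ C.conj().T

print(np.linalg.eigvalsh(A))                  # > 0  : det A > 0, Eq. (5.20)
print(np.linalg.eigvalsh(schur))              # >= 0 : Eqs. (4.30), (5.24)
\end{verbatim}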
For further analysis it is rather awkward to work with $SO(2,1)$ matrices and Lorentz metric in the form $K^{(1)}(S)$, $g_{K}$; we therefore pass to the `standard' forms via the matrices $M$, $M^{-1}$ in Eq.\,(\ref{5.13})\,: \begin{eqnarray} && V^{\rm (eff)\,\mu \nu} ={M^{\mu}}_{m}\,{M^{\nu}}_{m'}\,V^{\rm (eff)}_{m m'}, \nonumber \\ && \omega^{\rm (eff)\,\mu \nu} = {M^{\mu}}_{m}\,{M^{\nu}}_{m'}\,\omega^{\rm (eff)}_{m m'}\,, \label{5.25} \end{eqnarray} which are congruences. Then the $Sp(2,\,R)$ or $SO(2,\,1)$ actions (\ref{5.23}) appear as\,: \begin{eqnarray} &&V^{\rm (eff)\,\mu \nu} \rightarrow {\Lambda^{\mu}}_{\mu'}(S)\,{\Lambda^{\nu}}_{\nu'}(S)\, V^{\rm (eff)\, \mu' \nu'}, \nonumber \\ &&\omega^{\rm (eff)\,\mu \nu} \rightarrow {\Lambda^{\mu}}_{\mu'}(S)\,{\Lambda^{\nu}}_{\nu'}(S)\, \omega^{\rm (eff)\, \mu' \nu'}, \label{5.26} \end{eqnarray} and the condition (\ref{5.24}) becomes\,: \begin{eqnarray} (V^{\rm (eff) \,\mu \nu}) + \frac{i}{2}\,(\omega^{\rm (eff)\, \mu \nu}) \geq 0. \label{5.27} \end{eqnarray} While $V^{\rm (eff)\,\mu \nu}$ transforms as a symmetric second rank $SO(2,1)$ tensor, $\omega^{\rm (eff)\,\mu \nu}$ is an antisymmetric second rank tensor, which by the use of the Levi-Civita invariant tensor is the same as a three-vector. Thus we can write, with $\epsilon^{031}=\epsilon_{031} =+1$, \begin{eqnarray} \omega^{\rm (eff)\,\mu \nu} &=& \epsilon^{\mu \nu \lambda}\, a_{\lambda}, \nonumber \\ (\omega^{\rm (eff)\,\mu \nu})&=& \left( \begin{array}{ccc} 0 & a_1 & -a_3 \\ -a_1 & 0 & a_0 \\ a_3 & -a_0 & 0 \end{array} \right), \label{5.28} \end{eqnarray} with transformation law \begin{eqnarray} a^{\mu} \rightarrow {\Lambda^{\mu}}_{\nu}(S)\, a^{\nu}. \label{5.29} \end{eqnarray} Of course, $V^{\rm (eff) \,\mu \nu}$ itself is made up of two irreducible parts\,: the symmetric second rank `trace-free' part belonging to the $SO(2,\,1)$ representation $K^{(2)}(S)$, and the $SO(2,\,1)$ invariant trace which is a scalar. We now appeal to a remarkable result\,\cite{RSSCVS}, which is similar in spirit to the Williamson theorem for $Sp(2n,\,R)$ quoted in Section III. It states that if $V^{\rm (eff)\,\mu \nu}$ transforming as in Eq.\,(\ref{5.26}) is positive definite, it is possible to bring it to a diagonal form by a suitable choice of $\Lambda \in SO(2,1)$\,; however, in general the resulting diagonal values are not the eigenvalues of the initial matrix. This diagonal form may be called the `SCS normal form' of $V^{\rm (eff)}$, which in the generic case is unique. Passing to this normal form of $V^{\rm (eff)}$, and transforming $\omega^{\rm (eff)}$ as well by the same (generically unique) $\Lambda \in SO(2,1)$, these matrices appear as \begin{eqnarray} V^{\rm (eff)} \rightarrow \left(\begin{array}{ccc} v^{00} & 0 & 0 \\ 0 & v^{33} & 0 \\ 0 & 0 & v^{11} \end{array} \right),\,\,\, \omega^{\rm (eff)}\rightarrow \left( \begin{array}{ccc} 0 & -b^1 & b^3 \\ b^1 & 0 & b^0 \\-b^3 & -b^0 &0 \end{array} \right), \label{5.30} \end{eqnarray} with all the quantities $v^{00}$, $v^{33}$, $v^{11}$, $b^0$, $b^3$, $b^1$ being real $SO(2,1)$ (and $Sp(2,\,R)$) invariants. The uncertainty relation (\ref{5.27}) expressed in terms of these invariants, and in its maximally simplified form thanks to the SCS theorem, is \begin{eqnarray} \left(\begin{array}{ccc} v^{00} & 0 & 0 \\ 0 & v^{33} & 0 \\ 0 & 0 & v^{11} \end{array} \right) +\frac{i}{2}\, \left( \begin{array}{ccc} 0 & -b^1 & b^3 \\ b^1 & 0 & b^0 \\-b^3 & -b^0 &0 \end{array} \right)\geq 0.
\label{5.31} \end{eqnarray} As an (admittedly elementary) example of the discussion of this Section, we consider the Fock states $|n \rangle$, $n \geq 0$. The $(\hat{q},\,\hat{p})$ --- $(\hat{a},\,\hat{a}^{\dagger})$ relations are \begin{align} \hat{a} = (\hat{q} + i \hat{p})/\sqrt{2 \hbar}, ~~~ \hat{a}^{\dagger} = (\hat{q} - i \hat{p})/\sqrt{2 \hbar}, \label{5.32} \end{align} so both $\hat{q}$ and $\hat{p}$ have dimensions $\hbar^{{{{1}/{2}}}}$. In the Fock states $|n\rangle$, by parity arguments we have \begin{align} \langle n | \hat{\xi}_m |n\rangle = \langle n | \hat{T}_{3/2,m}| n \rangle =0. \label{5.33} \end{align} For $\hat{X}_{m}, \, \hat{T}_{2,m}$ explicit calculations give\,: \begin{align} \langle n | \hat{X}_m | n \rangle &= \hbar (n+{{\frac{1}{2}}}) (1,\,0,\,1), ~~ m = 1,\,0,\,-1;\nonumber\\ \langle n | \hat{T}_{2,m} | n \rangle &= \frac{1}{2} \hbar^2 (n^2 + n+ {{\frac{1}{2}}}) (3,\,0,\,1,\,0,\,3), ~~ m = 2,\,1,\,0,\,-1,\,-2. \label{5.34} \end{align} Then the matrices $A,\, B,\, C$ of Eq.\,\eqref{5.3} follow easily\,: \begin{align} \begin{array}{rclc} \left(A_{mm^{\,'}}\right) &=& \hbar (n+{{\frac{1}{2}}}) 1\!\!1 - \frac{\hbar}{2}\, \sigma_2, & \\ x^0 &=& \hbar (n+{{\frac{1}{2}}}), ~~ x^3 = x^1 =0; & (a)\\ \left(B_{mm^{\,'}}\right) &=& \frac{\hbar^2}{2} (n^2 + n +1 ) \left( \begin{array}{rrr}1& ~0&-1 \\ 0&1&0 \\ -1&0&1 \end{array} \right) + i \hbar^2 (n + {{\frac{1}{2}}}) \left( \begin{array}{rrr} 0&1&~0 \\ -1&0 &1 \\ 0 & -1 & 0\end{array} \right); & ~~(b) \\ \left( C_{mm^{\,'}}\right) &=& 0. & (c) \end{array} \label{5.35} \end{align} Therefore the combination $B - CA^{-1} C^{\dagger} = B$, and from Eq.\,\eqref{5.22}, \begin{align} \left(V^{(\rm eff)}_{mm^{\,'}} \right) &= \frac{\hbar^2}{2} (n^2 + n+ 1) \left( \begin{array}{rrr} 1& ~0&-1 \\ 0&1&0 \\ -1&0&1\end{array} \right), \nonumber\\ \frac{1}{2}\left(\omega^{(\rm eff)}_{mm^{\,'}} \right) &= \hbar^2 (n+{{\frac{1}{2}}}) \left( \begin{array}{rrr} 0&1& ~0 \\ -1&0 &1 \\ 0 & -1 & 0\end{array} \right). \label{5.36} \end{align} Transforming to the standard $SO(2,1)$ tensor components by the congruence transformation\,\eqref{5.25} we find\,: \begin{align} \left(V^{(\rm eff)\,\mu\nu} \right) &= \frac{\hbar^2}{2} (n^2 + n+ 1) \left( \begin{array}{ccc} 0&~0~&0 \\ 0&1&0 \\ 0&0&1\end{array} \right), \nonumber\\ \frac{1}{2}\left(\omega^{(\rm eff)\,\mu\nu} \right) &= \hbar^2 (n+{{\frac{1}{2}}}) \left( \begin{array}{rrr} 0&0&~0 \\ 0&0 &1 \\ 0 & -1 & 0\end{array} \right). \label{5.37} \end{align} As expected, both these matrices are invariant under the $SO(2)$ subgroup of $SO(2,\,1)$, as the Fock states are eigenstates of the phase space rotation generator $\hat{a}^\dagger\hat{a}$. We see that $\left(V^{(\rm eff)\,\mu\nu} \right)$ is already in the $SCS$ normal form, and as the eigenvalues of $\left(V^{(\rm eff)\,\mu\nu} \right) + \frac{\,i\,}{2}\, \left(\omega^{(\rm eff)\,\mu\nu} \right)$ are $\,0,\, \frac{\hbar^2}{2}(n^2+n+1) \pm \hbar^2 (n+ {{\frac{1}{2}}})$, \,i.e., $\,0,\, \frac{\hbar^2}{2}(n+1)(n+2),\, \frac{\hbar^2}{2} n(n-1)$, the uncertainty relation\,\eqref{5.27} is clearly respected; indeed it is saturated! \section{Lorentz geometry and the Schr\"{o}dinger-Robertson UP} The original Schr\"{o}dinger-Robertson UP has a very interesting character when viewed in the Wigner distribution language, bringing out the role of the group $SO(2,\,1)$ in a rather striking manner. This seems worth exploring in some detail. 
For a given state $\hat{\rho}$ with Wigner distribution $W(q,\,p)$, the means are \begin{align} \overline{q} = \int\int dqdp\,q\,W(q,\,p)\,, ~~~ \overline{p} = \int\int dqdp\,p\,W(q,\,p)\,. \label{6.1} \end{align} Referring to Eq.\,\eqref{5.13}, at each point $(q,\,p)$ in the phase plane we define the $SO(2,\,1)$ three-vector (a displaced form of $(X^{\mu}(q,p))$ in Eq.\,\eqref{5.13}) \begin{align} (\,X^\mu(q,\,p)\,) = \left(\begin{array}{c} \frac{1}{2}\,[\,(q - \overline{q})^2 + (p - \overline{p})^2\,]\\ \frac{1}{2}\,[\,(q - \overline{q})^2 - (p - \overline{p})^2\,]\\ (q - \overline{q})(p - \overline{p}) \end{array} \right), \label{6.2} \end{align} which (\,except at $q = \overline{q},\,p=\overline{p}$\,) is pointwise positive light-like. The elements of the variance matrix $V$ in Eqs.\,(\ref{3.6},\,\ref{4.28}) are obtained by `averaging' this three-vector over the phase plane with the quasiprobability $W(q,p)$ as `weight' function, resulting in the three-vector $x^{\mu}(\hat{\rho})$ of Eq.\,\eqref{5.10}\,: \begin{align} (x^{\mu}(\hat{\rho})) = \left( \begin{array}{c} \frac{1}{2}\, [\,(\triangle q )^2+(\triangle p)^2\,]\\ \frac{1}{2}\, [\,(\triangle q )^2-(\triangle p)^2\,]\\ \triangle (q,p) \end{array} \right) = \int \int dqdp\, W(q,p)\, (\,X^\mu(q,\,p)\,). \label{6.3} \end{align} Given that $W(q,p)$ can in principle be negative over certain regions of the phase space, this `averaging' could have led to a result which need not be either time-like or light-like positive. However, the Schr\"{o}dinger-Robertson UP assures us that in fact the result has to be a time-like positive three-vector, thus implying a subtle limit on the extent to which $W(q,p)$ could become negative. In fact it specifies that the three-vector obtained as a result of the `averaging' must be within or on the positive time-like (solid) hyperboloid $x^{\mu}(\hat{\rho})\, x_{\mu}(\hat{\rho}) \geq \hbar^2/4$ corresponding to `squared mass' $\hbar^2/4$ presented in Eq.\,\eqref{5.12}. On the other hand, while pointwise nonnegativity of $W(q,p)$ will certainly ensure that the averaging in Eq.\,\eqref{6.3} takes $\left( x^{\mu}(\hat{\rho})\right)$ inside the time-like positive cone, it will not itself ensure that it is taken all the way inside the said hyperboloid. To ensure the latter, $W(q,p)$ must in addition have a sufficiently large effective spread. Thus, pointwise nonnegativity is neither a necessary nor a sufficient condition for the `Wigner quality' of $W(q,p)$, as is known from other considerations. The argument above has been presented in such a way that the interpretation in terms of Lorentz geometry in $2+1$ dimensions is obvious. However, comparing Eqs.\,\eqref{5.7} and \eqref{5.13}, we see that it could be expressed equally well as follows. At each point $(q,p)$ in the phase plane we define a $2\times 2$ real symmetric matrix \begin{align} V(q,p) = \left( \begin{array}{c} q -\overline{q} \\ p-\overline{p} \end{array} \right) \, \begin{array}{c} \left( \begin{array}{cc} q-\overline{q} & p-\overline{p} \end{array} \right) \\ ~~~~ \end{array} . \label{6.4} \end{align} Pointwise (except at $q = \overline{q},\, p = \overline{p}$ ) this is proportional to a one-dimensional projection matrix, and in particular it has vanishing determinant.
After `averaging' with $W(q,p)$ as weight function, however, we obtain the $2 \times 2$ variance matrix $V$ in Eq.\,\eqref{4.28}\,: \begin{align} V = \int \int dp\,dq \, W(q,p) V(q,p) = \left( \begin{array}{cc} (\triangle q)^2 & \triangle(q,p) \\ \triangle(q,p) & (\triangle p)^2 \end{array} \right), \label{6.5} \end{align} and now the Schr\"{o}dinger-Robertson UP shows that $V$ is non-singular and has determinant bounded below by the `squared mass' $\hbar^2/4$. In this form, just like the Schr\"{o}dinger-Robertson UP, this geometrical picture based on the Wigner distribution language generalises in both directions---second order moments for a multi mode system, and higher order moments for a single mode system. As an example of the former, consider a two-mode system for simplicity. The classical phase space variables are $\xi_a$ and the hermitian quantum operators obeying Eq.\,\eqref{3.1} are $\hat{\xi}_a$, for $a=1,\cdots,4$. Given a two-mode state $\hat{\rho}$, we pass to its Wigner distribution $W(\xi)$ (something we did not do in Section III) and compute the means \begin{align} \langle \hat{\xi}_a \rangle = {\rm Tr}(\hat{\rho}\,\hat{\xi}_a) = \int d^4 \xi \, \xi_a W(\xi) = \overline{\xi}_a,\, a = 1,\cdots,4. \label{6.6} \end{align} Then, generalising Eq.\,\eqref{6.4} above, at each point $\xi$ in the 4-dimensional phase space we define a real symmetric $4\times 4 $ matrix \begin{align} V(\xi) &= (V_{ab}(\xi)) = ((\xi_a - \overline{\xi}_a)(\xi_b - \overline{\xi}_b)) = x(\xi) x(\xi)^T,\nonumber\\ x_a(\xi) &= \xi_a - \overline{\xi}_a. \label{6.7} \end{align} At each point $\xi$ (except at $\xi = \overline{\xi}$) we have here a real symmetric positive semidefinite matrix $V(\xi)$ which is essentially a one-dimensional projection matrix\,: the eigenvalues of $V(\xi)$ are $x(\xi)^T x(\xi),0,0,0$. The variance matrix $V$ for the state $\hat{\rho}$ is then obtained by `averaging' $V(\xi)$ using the real normalised quasiprobability $W(\xi)$\,: \begin{align} V = \int d^4 \xi \, W(\xi) V(\xi). \label{6.8} \end{align} Since in general $W(\xi)$ can assume negative values at some places in phase space, it may appear at first sight that some of the properties of $V(\xi)$ described above may be lost by the `averaging' process leading to $V$. However the UP\,\eqref{3.5} guarantees that this will not happen; indeed by Lemma 2, Section II, in Eq.\,\eqref{2.13}, $V$ is seen to be positive definite. Quantitatively we have the following situation\,: Williamson's theorem assures us that under the congruence transformation by a suitable $S_0 \in Sp(4, {\cal R})$, $V $ is taken to a diagonal form\,: \begin{align} V_0 = S_0 V S_0^T = diag(\kappa_1,\kappa_1,\kappa_2,\kappa_2), ~~\kappa_{1,2} >0. \label{6.9} \end{align} The congruence transformation becomes a similarity transformation on $V \beta^{-1} $\,\cite{dutta94}, since\,: \begin{align} S \in Sp(4,{\cal R}) \,: V^{\,'} = SVS^T \leftrightarrow V^{\,'} \beta^{-1} = S V \beta^{-1} S^{-1}. \label{6.10} \end{align} Applying this to the transition $V \to V_0$ we see that as \begin{align} V_0 \beta^{-1} = -i \left( \begin{array}{cc} \kappa_1 \sigma_2 &0 \\ 0& \kappa_2 \sigma_2 \end{array} \right) , \label{6.11} \end{align} the eigenvalues of $i V \beta^{-1}$ are $\pm \kappa_1, \, \pm \kappa_2 $; and the UP\,\eqref{3.5} ensures that $\kappa_{1,2} \geq \hbar/2$. 
The $\kappa$'s themselves are determined, upto an interchange, by the $Sp(4,{\cal R})$ invariant traces \begin{align} {\rm Tr}(V\beta^{-1})^2 = -2(\kappa_1^2 + \kappa_2^2),\nonumber\\ {\rm Tr}(V\beta^{-1})^4 = 2(\kappa_1^4 + \kappa_2^4). \label{6.12} \end{align} The manner in which the geometrical picture, and the constraint on the extent to which $W(\xi)$ can become negative, both generalise in going to the multi mode situation is now clear. A qualitatively similar situation (even if algebraically more involved) obtains when we generalise in the other direction---to higher order moments for a single mode system, and their uncertainty relations handled in the Wigner distribution language. Limiting ourselves to the moments upto fourth order, we are concerned in the notation of Eq.\,\eqref{4.23} with the uncertainty relation \begin{align} \tilde{\Omega}^{(1)}(\hat{\rho}) \geq 0 \label{6.13} \end{align} contained in the hierarchy\,\eqref{4.27}, and its rendering in the Wigner distribution language. Combining the notations of Sections II and V, we have a set of five real phase space functions $A_{a}(q,p)$, $a=1,2,\cdots,5$, and their hermitian operator counterparts in the Weyl sense\,: \begin{align} (A_a(q,p)) &= (q,\,p,\,q^2,\,qp,\,p^2)^T;\nonumber\\ (\hat{A}_a) &= ((A_a(q,p))_W) = (\hat{q},\,\hat{p},\,\hat{q}^2,\,\frac{1}{2} \{\hat{q},\hat{p}\},\,\hat{p}^2)^T, \label{6.14} \end{align} a listing of the components $\hat{\xi}_m$, $\hat{X}_m$. In a given state $\hat{\rho}$ with Wigner distribution $W(q,p)$ we have the means \begin{align} \langle \hat{A}_a \rangle = {\rm Tr}(\hat{\rho} \hat{A}_a) = \int \int dp\,dq \, W(q,p) A_a(q,p) = \overline{A}_a. \label{6.15} \end{align} To calculate the elements of $\tilde{\Omega}^{(1)}(\hat{\rho})$ we need to deal with the products $\hat{A}_a \hat{A}_b $. For these, using Eq.\,\eqref{5.2} we find\,: \begin{align} \hat{A}_a \hat{A}_b &= (A_a(q,p)A_b(q,p))_W + (C_{ab}(q,p))_W,\nonumber\\ (C_{ab}(q,p)) &= \left( \begin{array}{ccccc} 0 & \frac{i\hbar}{2} &~ 0 ~& \frac{i\hbar q }{2} & \frac{i\hbar p }{2}\\ -\frac{i\hbar}{2} & 0 &~ -\frac{i\hbar q}{2} ~& -\frac{i\hbar p}{2} & 0 \\ 0 & \frac{i\hbar q }{2} &~ 0~& i\hbar q^2 &~ -\frac{\hbar^2}{2} + 2i \hbar q p ~\\ -\frac{i\hbar q}{2} & \frac{i\hbar p}{2} &~ -i\hbar q^2 ~& \frac{\hbar^2}{4} & i\hbar p^2 \\ -\frac{i\hbar p}{2} & 0 &~~-\frac{\hbar^2}{2} - 2i \hbar q p~ ~& -i \hbar p^2 & 0 \end{array} \right). \label{6.16} \end{align} (We note that the real symmetric part of the matrix $C(q,p)$ is $-\frac{\hbar^2}{4} g_K$ in the lower $3 \times 3$ block, where $g_K$ is the tilted form of the $(2+1)$ Lorentz metric in Eq.\,\eqref{5.17}). With these ingredients and referring to the general structure\,\eqref{2.16} we have the expression for $\tilde{\Omega}^{(1)}(\hat{\rho})$ in the Wigner distribution language\,: \begin{align} \tilde{\Omega}^{(1)}(\hat{\rho}) &= \left(\tilde{\Omega}^{(1)}_{ab}(\hat{\rho})\right) = \left({\rm Tr}(\hat{\rho}(\hat{A}_a - \langle \hat{A}_a \rangle)(\hat{A}_b - \langle \hat{A}_b \rangle)) \right) \nonumber\\ &= \left({\rm Tr}(\hat{\rho}\,(\,(A_a(q,p)A_b(q,p))_W - \overline{A}_a\overline{A}_b + (C_{ab}(q,p))_W)\,)\right) \nonumber\\ &= \int \int dp dq \,W(q,p)\left( x(q,p) x(q,p)^T + (C_{ab}(q,p))\right),\nonumber\\ x(q,p)^T&=(q-\overline{q},\,p-\overline{p},\,q^2-\overline{q^2},\, qp-\overline{qp},\,p^2-\overline{p^2} ). 
\label{6.17} \end{align} At each point $(q,p)$ in the phase plane, we have essentially a one-dimensional projector $x(q,p) x(q,p)^T$, together with a five-dimensional hermitian matrix $(C_{ab}(q,p))$ with elements involving $\hbar$ and $\hbar^2$ terms. The uncertainty relation\,\eqref{6.13} demands that the phase plane `average' of this expression (hermitian matrix) with $W(q,p)$ as weight function be nonnegative. After this `averaging', the leading term is no longer a one-dimensional projector; moreover, the pure imaginary antisymmetric part coming from this part of $C(q,p)$ being singular, Lemma 2 of Section II does not apply to the real symmetric part of $\tilde{\Omega}^{(1)}(\hat{\rho})$. In any event, \eqref{6.13} again constrains the extent to which $W(q,p)$ can become negative. \section{Concluding Remarks} In this paper we have set up a systematic procedure to obtain covariant uncertainty relations for general quantum systems. It applies equally well to continuous variable systems and to systems described by finite-dimensional Hilbert spaces, and even to systems based on the tensor product of the two, and consists of two ingredients\,: the choice of a collection of observables, and the action of unitary symmetry operations on them. We have shown that the uncertainty relations are automatically covariant---preserved in content---under every symmetry operation. We have applied this to two important special cases\,: the fluctuations and covariances in coordinates and momenta of an $n$-mode canonical system; and to the set of all hermitian operator `monomials' in canonical variables $\hat{q},\,\hat{p}$ of a single mode system. These are both generalisations of the Schr\"{o}dinger-Robertson UP in two distinct directions. The latter generalisation has been treated for definiteness using the Wigner distribution method. We hope to have set up a robust yet flexible formalism which can be applied to all quantum systems, in particular to composite, for instance bipartite, systems. In such a case, by judicious choices of the operator sets $\{\hat{A}_a \}$ of Section II, one can devise tests for entanglement, exhibiting covariance under corresponding local symmetry operations. If for a bipartite system the operator $\hat{\rho}^{\,{\rm PT}}$\,\cite{peres96}, arising from partial transpose of a physical state $\hat{\rho}$, violates any uncertainty relation, the presence of entanglement in $\hat{\rho}$ follows\,\cite{recent3,solomonnpt12}. A systematic analysis along these lines of higher order moments in the bipartite multi-mode case will be presented elsewhere, keeping in mind that our general methods are applicable for both discrete and continuous variable systems, and even to composite systems consisting of either or both types as subsystems. \end{document}
\begin{document} \title{QND Measurement of Large-Spin Ensembles by Dynamical Decoupling} \author{M. Koschorreck} \email{[email protected]} \affiliation{ICFO-Institut de Ciencies Fotoniques, 08860 Castelldefels (Barcelona), Spain} \author{M. Napolitano} \affiliation{ICFO-Institut de Ciencies Fotoniques, 08860 Castelldefels (Barcelona), Spain} \author{B. Dubost} \affiliation{ICFO-Institut de Ciencies Fotoniques, 08860 Castelldefels (Barcelona), Spain} \affiliation{Laboratoire Mat\'{e}riaux et Ph\'{e}nom\`{e}nes Quantiques, Universit\'{e} Paris Diderot et CNRS, \\UMR 7162, B\^{a}t. Condorcet, 75205 Paris Cedex 13, France} \author{M. W. Mitchell} \affiliation{ICFO-Institut de Ciencies Fotoniques, 08860 Castelldefels (Barcelona), Spain} \begin{abstract} Quantum non-demolition (QND) measurement of collective variables by off-resonant optical probing has the ability to create entanglement and squeezing in atomic ensembles. Until now, this technique has been applied to real or effective spin one-half systems. We show theoretically that the build-up of Raman coherence prevents the naive application of this technique to larger spin atoms, but that dynamical decoupling can be used to recover the ideal QND behavior. We experimentally demonstrate dynamical decoupling by using a two-polarization probing technique. The decoupled QND measurement achieves a sensitivity 5.7(6) dB better than the spin projection noise. \end{abstract} \pacs{42.50.Lc, 07.55.Ge, 42.50.Dv, 03.67.Bg} \maketitle Quantum non-demolition measurement plays a central role in quantum networking and quantum metrology for its ability to simultaneously detect and generate non-classical quantum states. The original proposal by Braginsky \cite{Braginsky1974UFNv114p41} in the context of gravitational wave detection has been generalized to the optical \cite{Poizat1994APv19p265,Holland1990PRAv42p2995}, atomic \cite{Kuzmich1998ELv42p481} and nano-mechanical \cite{Ruskov2005PRBv71p235407} domains. In the atomic domain, QND by dispersive optical probing of spins or pseudo-spins has been demonstrated using ensembles of cold atoms on a clock transition \cite{Windpassinger2009MSTv20p55301, Schleier-Smith2010PRLv104p73604}, and with polarization variables \cite{Takano2009PRLv102p33601,Koschorreck2010PRLv104p93602}, but thus far only with real or effective spin-1/2 systems. QND measurement of larger spin systems offers a metrological advantage, e.g., in magnetometry \cite{Geremia2005PRLv94p203002}, and may be essential for the detection of different quantum phases of degenerate atomic gases that intrinsically rely on large-spin systems \cite{Eckert2007NPv4p50,Eckert2007PRLv98p100404,Roscilde2009NJPv11p55041}. Dispersive interactions with large-spin atoms are complicated by the presence of non-QND-type terms in the effective Hamiltonian describing the interaction \cite{Geremia2006PRAv73p42112,Madsen2004PRAv70p52324, Echaniz2005JOBv7p548}. As we show, and contrary to what has often been assumed \cite{Kuzmich2000PRLv85p1594,Eckert2007NPv4p50,Eckert2007PRLv98p100404,Roscilde2009NJPv11p55041}, these terms spoil the QND performance, even in the large-detuning limit. The non-QND terms introduce noise into the measured variable, or equivalently decoherence into the atomic state. The problem is serious for both large and small ensembles, so that naive application of dispersive probing fails for several of the above-cited proposals. 
We approach this problem using the methods of dynamical decoupling \cite{Viola1998PRAv58p2733, Viola1999PRLv82p2417,Facchi2005PRAv71p22302}, which allow us to effectively cancel the non-QND terms in the Hamiltonian while retaining the QND term. To our knowledge, this is the first application of this method to quantum non-demolition measurements. Dynamical decoupling has been extensively applied in magnetic resonance \cite{Morton2008Nv455p1085, Biercuk2009Nv458p996}, used to suppress collisional decoherence in a thermal vapor \cite{Search2000PRLv85p2272}, to extend coherence times in solids \cite{ Taylor2008NPv4p810}, in Rydberg atoms \cite{Minns2006PRLv97p}, and with photon polarization \cite{Damodarakurup2009PRLv103p40502}. Other approaches include application of a static perturbation \cite{Smith2004PRLv93p163602,Fraval2004PRLv92p}. \global\long\def^{87}\mathrm{Rb}{^{87}\mathrm{Rb}} \global\long\def5S_{1/2}{5S_{1/2}} \global\long\def5P_{3/2}{5P_{3/2}} \global\long\def\inlket#1#2{\left|\right.\!#1,\,#2\!\left.\right\rangle } \global\long\def\ket#1#2{\left|#1,\,#2\right\rangle } \global\long\def\sket#1{\left|#1\right\rangle } \global\long\def\var#1{\mathrm{var}(#1)} \global\long\def\cov#1{\mathrm{cov}(#1)} \global\long\def^{87}\mathrm{Rb}i{^{87}\mathrm{Rb}} \global\long\defG_{\gamma-V}{G_{\gamma-V}} \global\long\def\hat{S}_{\mathrm{\mathbf{x}}}{\hat{S}_{\mathrm{\mathbf{x}}}} \global\long\def\hat{S}_{\mathrm{\mathbf{y}}}{\hat{S}_{\mathrm{\mathbf{y}}}} \global\long\def\hat{S}_{\mathrm{\mathbf{z}}}{\hat{S}_{\mathrm{\mathbf{z}}}} \global\long\def\hat{J}_{\mathbf{x}}{\hat{J}_{\mathbf{x}}} \global\long\def\hat{J}_{\mathrm{\mathbf{y}}}{\hat{J}_{\mathrm{\mathbf{y}}}} \global\long\def\hat{J}_{\mathrm{\mathbf{z}}}{\hat{J}_{\mathrm{\mathbf{z}}}} \global\long\def\hat{s}_{\mathrm{\mathbf{x}}}{\hat{s}_{\mathrm{\mathbf{x}}}} \global\long\def\sy#1{\hat{s}_{\mathrm{\mathbf{y}}#1}} \global\long\def\hat{s}_{\mathrm{\mathbf{z}}}{\hat{s}_{\mathrm{\mathbf{z}}}} \global\long\def\hat{j}_{\mathbf{x}}{\hat{j}_{\mathbf{x}}} \global\long\def\hat{j}_{\mathrm{\mathbf{y}}}{\hat{j}_{\mathrm{\mathbf{y}}}} \global\long\def\hat{j}_{\mathrm{\mathbf{z}}}{\hat{j}_{\mathrm{\mathbf{z}}}} \global\long\def\hat{T}_{\mathrm{\mathbf{x}}}{\hat{T}_{\mathrm{\mathbf{x}}}} \global\long\def\hat{T}_{\mathrm{\mathbf{y}}}{\hat{T}_{\mathrm{\mathbf{y}}}} \global\long\def\hat{T}_{\mathrm{\mathbf{z}}}{\hat{T}_{\mathrm{\mathbf{z}}}} \global\long\def\hat{K}_{\mathrm{\mathbf{x}}}{\hat{K}_{\mathrm{\mathbf{x}}}} \global\long\def\hat{K}_{\mathrm{\mathbf{y}}}{\hat{K}_{\mathrm{\mathbf{y}}}} \global\long\def\hat{K}_{\mathrm{\mathbf{z}}}{\hat{K}_{\mathrm{\mathbf{z}}}} \global\long\defS_{\mathrm{\mathbf{x}}}{S_{\mathrm{\mathbf{x}}}} \global\long\defS_{\mathrm{\mathbf{y}}}{S_{\mathrm{\mathbf{y}}}} \global\long\defS_{\mathrm{\mathbf{z}}}{S_{\mathrm{\mathbf{z}}}} \global\long\defJ_{\mathbf{x}}{J_{\mathbf{x}}} \global\long\defJ_{\mathrm{\mathbf{y}}}{J_{\mathrm{\mathbf{y}}}} \global\long\defJ_{\mathrm{\mathbf{z}}}{J_{\mathrm{\mathbf{z}}}} \global\long\defs_{\mathrm{\mathbf{x}}}{s_{\mathrm{\mathbf{x}}}} \global\long\defs_{\mathrm{\mathbf{y}}}{s_{\mathrm{\mathbf{y}}}} \global\long\defs_{\mathrm{\mathbf{z}}}{s_{\mathrm{\mathbf{z}}}} \global\long\defj_{\mathbf{x}}{j_{\mathbf{x}}} \global\long\defj_{\mathrm{\mathbf{y}}}{j_{\mathrm{\mathbf{y}}}} \global\long\defj_{\mathrm{\mathbf{z}}}{j_{\mathrm{\mathbf{z}}}} \global\long\def\dexpect#1{\left\langle \right.\!#1\!\left.\right\rangle } \global\long\def\var#1{\mathrm{var}(#1)} \global\long\def\sexpect#1#2{\left\langle 
#2\right|#1\left|#2\right\rangle } \global\long\def\hat{F}_{x}{\hat{F}_{x}} \global\long\def\hat{F}_{y}{\hat{F}_{y}} \global\long\def\hat{F}_{z}{\hat{F}_{z}} \global\long\def\fx#1{\hat{f}_{x,#1}} \global\long\def\fy#1{\hat{f}_{y,#1}} \global\long\def\fz#1{\hat{f}_{z,#1}} \newcommand{\hat{v}}{\hat{v}} \newcommand{\hat{w}}{\hat{w}} \newcommand{\hat{J}}{\hat{J}} \newcommand{\hat{S}}{\hat{S}} \newcommand{\hat{K}}{\hat{K}} \newcommand{\hat{T}}{\hat{T}} \newcommand{\hat{U}}{\hat{U}} \newcommand{\hat{U}_{b}}{\hat{U}_{b}} \newcommand{{\bf S}}{{\bf S}} \newcommand{{\bf J}}{{\bf J}} \newcommand{{\bf j}}{{\bf j}} \newcommand{{\bf f}}{{\bf f}} \newcommand{{\bf \alpha}}{{\bf \alpha}} \newcommand{^{({1})} }{^{({1})} } \newcommand{^{({2})} }{^{({2})} } \newcommand{^{({\rm in})} }{^{({\rm in})} } \newcommand{^{({\rm mid})} }{^{({\rm mid})} } \newcommand{^{({\rm out})} }{^{({\rm out})} } \newcommand{^{({1})} in}{^{(1,{\rm in})} } \newcommand{^{({1})} out}{^{(1,{\rm out})} } \newcommand{^{({2})} in}{^{(2,{\rm in})} } \newcommand{^{({2})} out}{^{(2,{\rm out})} } \newcommand{S_{\rm SNR}}{S_{\rm SNR}} \newcommand{\hat{j}_{[\mathbf{x,y}]}}{\hat{j}_{[\mathbf{x,y}]}} \newcommand{\hat{J}_{[\mathbf{x,y}]}}{\hat{J}_{[\mathbf{x,y}]}} \newcommand{\mathbbm{1}}{\mathbbm{1}} \newcommand{\expect}[1]{\left<#1\right>} We consider an ensemble of spin-$f$ atoms interacting with a pulse of near-resonant polarized light. As described in references \cite{Geremia2006PRAv73p42112, Madsen2004PRAv70p52324, Echaniz2005JOBv7p548}, the light and atoms interact by the effective Hamiltonian $\hat{H}_{\rm eff}$ \begin{equation} \tau {\hat{H}}_{\mathrm{eff}} = G_1 \hat{S}_{\mathrm{\mathbf{z}}} \hat{J}_{\mathrm{\mathbf{z}}}+G_2 ( \hat{S}_{\mathrm{\mathbf{x}}} \hat{J}_{\mathbf{x}} + \hat{S}_{\mathrm{\mathbf{y}}} \hat{J}_{\mathrm{\mathbf{y}}})\,\,, \label{eq:H_full} \end{equation} where $\tau$ is the duration of the pulse and $G_{1,2}$ are coupling constants that depend on the atomic absorption cross section, the beam geometry, the detuning from resonance $\Delta$, and the hyperfine structure of the atom \cite{Kubasik2009PRAv79p43815}. The atomic variables $\hat{{\bf J}}$ (described below) are collective spin and alignment operators. The light is described by the Stokes operators $\hat{{\bf S}}$ defined as $\hat{S}_i \equiv \frac{1}{2}(\hat{a}_+^\dagger,\hat{a}_-^\dagger) \sigma_i (\hat{a}_+,\hat{a}_-)^T$, where the $\sigma_i$ are the Pauli matrices and $\hat{a}_\pm$ are annihilation operators for the temporal mode of the pulse and circular plus/minus polarization. Bold subscripts, e.g., $\mathbf{x}$, are used to label non-spatial directions for atomic and light variables. The $G_1$ term describes a QND interaction, while the $G_2$ describes a more complicated coupling. In the dispersive, i.e. far-detuned, regime, $G_1$ and $G_2$ scale as $\Delta^{-1}$ and $\Delta^{-2}$, respectively. It has sometimes been assumed that the $G_2$ terms can be neglected for sufficiently large $\Delta$, leaving an approximate QND interaction. As we show below, this scaling argument fails, and the $G_2$ terms remain important. We note an important symmetry: $\hat{H}_{\rm eff}$ commutes with $\hat{S}_{\mathrm{\mathbf{z}}} + \hat{J}_{\mathrm{\mathbf{z}}}$, and is thus invariant under simultaneous rotation of $\hat{{\bf J}}$ and $\hat{{\bf S}}$ about the $z$ axis. 
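For concreteness (this merely spells out the definition just given and adds nothing beyond it), the Stokes components read, in terms of the circular-polarization mode operators,
\begin{equation}
\hat{S}_{\mathrm{\mathbf{x}}} = \frac{1}{2}\left(\hat{a}_+^{\dagger}\hat{a}_- + \hat{a}_-^{\dagger}\hat{a}_+\right),\quad
\hat{S}_{\mathrm{\mathbf{y}}} = \frac{1}{2i}\left(\hat{a}_+^{\dagger}\hat{a}_- - \hat{a}_-^{\dagger}\hat{a}_+\right),\quad
\hat{S}_{\mathrm{\mathbf{z}}} = \frac{1}{2}\left(\hat{a}_+^{\dagger}\hat{a}_+ - \hat{a}_-^{\dagger}\hat{a}_-\right),
\end{equation}
so that $\hat{S}_{\mathrm{\mathbf{z}}}$ is one half the photon-number difference between the two circular polarizations.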
The atomic collective variables are $\hat{J}_k \equiv \sum_{i}^{N_{A}} \hat{j}^{(i)}_k$ where the superscript indicates the $i$-th atom and $\hat{j}_{\mathbf{x}} \equiv (\hat{f}_x^2 - \hat{f}_y^2 )/2$, $\hat{j}_{\mathrm{\mathbf{y}}} \equiv (\hat{f}_x \hat{f}_y + \hat{f}_y \hat{f}_x)/2$, $\hat{j}_{\mathrm{\mathbf{z}}} \equiv \hat{f}_z/2$ and $\hat{j}_{[\mathbf{x,y}]} \equiv -i [\hat{j}_{\mathbf{x}},\hat{j}_{\mathrm{\mathbf{y}}}] = \hat{f}_z (\hat{f}^2 - \hat{f}_z^2 -1/2 ) $. These obey commutation relations $[\hat{j}_{\mathrm{\mathbf{z}}},\hat{j}_{\mathbf{x}}] = i \hat{j}_{\mathrm{\mathbf{y}}}$, $ [\hat{j}_{\mathrm{\mathbf{y}}},\hat{j}_{\mathrm{\mathbf{z}}}] = i \hat{j}_{\mathbf{x}}$, $ [\hat{j}_{\mathbf{x}},\hat{j}_{\mathrm{\mathbf{y}}}] = i \hat{j}_{[\mathbf{x,y}]}$. For $f=1/2$, $\hat{j}_{\mathbf{x}},\hat{j}_{\mathrm{\mathbf{y}}}$ and $\hat{j}_{[\mathbf{x,y}]}$ vanish identically while for $f=1$, $\hat{j}_{[\mathbf{x,y}]}=\hat{j}_{\mathrm{\mathbf{z}}}$ so that $\hat{j}_{\mathbf{x}},\hat{j}_{\mathrm{\mathbf{y}}},$ and $\hat{j}_{\mathrm{\mathbf{z}}}$ describe a pseudo-spin $\hat{\bf j}$. In the QND scenario, an initial coherent polarization state with $\dexpect{\hat{{\bf S}}} = (N_L/2,0,0)$ is passed through the ensemble and experiences a rotation due to the $G_1$ term such that the component $\hat{S}_{\mathrm{\mathbf{y}}}$ (the `meter' variable) indicates the value of $\hat{J}_{\mathrm{\mathbf{z}}}$ (the `system' variable). We assume that $\hat{J}_{\mathbf{x}} = N_A/2$. For a weak pulse, i.e., for $\dexpect{\hat{{\bf S}}} $ sufficiently small, we have the $\tau$-linear input-output relations $\hat{A}^{({\rm out})} = \hat{A}^{({\rm in})} - i \tau [\hat{A}^{({\rm in})} ,\hat{H}_{\rm eff}]$. Of specific interest are \newcommand{\spacer}{} \begin{eqnarray} \hat{J}_{\mathrm{\mathbf{z}}}^{({\rm out})} &=& \hat{J}_{\mathrm{\mathbf{z}}}^{({\rm in})} \spacer+ G_2 \hat{S}_{\mathrm{\mathbf{x}}} \hat{J}_{\mathrm{\mathbf{y}}}^{({\rm in})} -{G_2 \hat{S}_{\mathrm{\mathbf{y}}}^{({\rm in})} \hat{J}_{\mathbf{x}}} \,\,,\label{JzInOut} \\ \hat{J}_{\mathrm{\mathbf{y}}}^{({\rm out})} &=& \hat{J}_{\mathrm{\mathbf{y}}}^{({\rm in})} - G_1 \hat{S}_{\mathrm{\mathbf{z}}}^{({\rm in})} \hat{J}_{\mathbf{x}} - G_2 \hat{S}_{\mathrm{\mathbf{x}}} \hat{J}_{[\mathbf{x,y}]} ^{({\rm in})} \,\,, \label{JyInOut} \\ \hat{S}_{\mathrm{\mathbf{y}}}^{({\rm out})} &=& \hat{S}_{\mathrm{\mathbf{y}}}^{({\rm in})} + G_1 \hat{S}_{\mathrm{\mathbf{x}}} \hat{J}_{\mathrm{\mathbf{z}}}^{({\rm in})} -{G_2 \hat{S}_{\mathrm{\mathbf{z}}}^{({\rm in})} \hat{J}_{\mathrm{\mathbf{y}}}} \,\,,\label{SyInOut} \end{eqnarray} which describe the change in the system variable, its conjugate, and the meter variable. In the case of $f=1/2$, the $G_2$ terms vanish identically and we have a pure QND measurement: information about $\hat{J}_{\mathrm{\mathbf{z}}}$ enters $\hat{S}_{\mathrm{\mathbf{y}}}$ and there is a back-action on $\hat{J}_{\mathrm{\mathbf{y}}}$, but not on $\hat{J}_{\mathrm{\mathbf{z}}}$. The input noise $\var{\hat{S}_{\mathrm{\mathbf{y}}}^{({\rm in})} } = S_{\mathrm{\mathbf{x}}}/2$ limits the performance of the measurement, and corresponds to a spin sensitivity of $\delta \hat{J}_{\mathrm{\mathbf{z}}}^2 = (2 G_1^2 \hat{S}_{\mathrm{\mathbf{x}}})^{-1}$. For comparison, the projection noise of an ${\mathbf{x}}$-polarized spin state is $\var{\hat{J}_{\mathrm{\mathbf{z}}}} = \hat{J}_{\mathbf{x}}/2$, so that projection noise sensitivity is achieved for $\hat{S}_{\mathrm{\mathbf{x}}} = (G_1^2 \hat{J}_{\mathbf{x}})^{-1} \equiv S_{\rm SNR}$. 
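As a small numerical aside (not part of the original text), the single-atom operators just defined can be checked explicitly for $f=1$. The sketch below, using the standard spin-1 matrices, verifies the stated commutation relations and that $\hat{j}_{[\mathbf{x,y}]} = \hat{j}_{\mathrm{\mathbf{z}}}$ in this case, so that $\hat{j}_{\mathbf{x}}, \hat{j}_{\mathrm{\mathbf{y}}}, \hat{j}_{\mathrm{\mathbf{z}}}$ indeed form a pseudo-spin.
\begin{verbatim}
# Illustrative check for f = 1: build j_x, j_y, j_z, j_[x,y] from the spin-1
# matrices and verify [j_z,j_x]=i j_y, [j_y,j_z]=i j_x, [j_x,j_y]=i j_[x,y],
# and j_[x,y] = j_z (so the j's form a pseudo-spin for f = 1).
import numpy as np

s2 = np.sqrt(2.0)
fx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / s2
fy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / s2
fz = np.diag([1.0, 0.0, -1.0])

jx = (fx @ fx - fy @ fy) / 2
jy = (fx @ fy + fy @ fx) / 2
jz = fz / 2
jxy = fz @ (2.0 * np.eye(3) - fz @ fz - 0.5 * np.eye(3))   # f(f+1) = 2 for f = 1

comm = lambda a, b: a @ b - b @ a
print(np.allclose(comm(jz, jx), 1j * jy))    # True
print(np.allclose(comm(jy, jz), 1j * jx))    # True
print(np.allclose(comm(jx, jy), 1j * jxy))   # True
print(np.allclose(jxy, jz))                  # True: f = 1 pseudo-spin
\end{verbatim}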
This ideal QND regime does not occur naturally except for $f=1/2$. In the interesting regime $\hat{S}_{\mathrm{\mathbf{x}}} \approx S_{\rm SNR}$, we find that ${G_2 \hat{S}_{\mathrm{\mathbf{x}}} \hat{J}_{\mathrm{\mathbf{y}}}} \approx \hat{J}_{\mathrm{\mathbf{y}}}(G_{2}/G_{1}^2)/\hat{J}_{\mathbf{x}}$ is independent of $\Delta$, and cannot be neglected based on detuning. To get an order of magnitude, we note that for large detuning, $G_1 \approx {\sigma_0 \Gamma}/{4 A \Delta}$, $G_2 \approx G_1 \Delta_{\rm HFS}/\Delta$ where $\sigma_0$ is the on-resonance scattering cross-section, $A$ is the effective area of the beam, and $\Gamma$ and $\Delta_{\rm HFS}$ are the natural linewidth and hyperfine splitting, respectively, of the excited states. In terms of the on-resonance optical depth $d_0 \equiv \sigma_0 N_A / A$, we find $G_2/G_1^2 J_{\mathbf{x}} \approx 8 \Delta_{\rm HFS}/d_0 \Gamma$. In a typical experiment with rubidium on the $D_2$ line, $\Delta_{\rm HFS}/\Gamma \sim 30$ and $d_0 \sim 50$ \cite{Kubasik2009PRAv79p43815}, so the contribution of this term is important. In contrast, the last term in Eq. (\ref{JyInOut}) and (\ref{SyInOut}), respectively, contribute variances $\left<G_2^2 \hat{S}_{\mathrm{\mathbf{y}}}^2 \hat{J}_{\mathbf{x}}^2\right>$ and $\left<G_2^2 \hat{S}_{\mathrm{\mathbf{z}}}^2 \hat{J}_{\mathrm{\mathbf{y}}}^2\right>$ which scale as $\Delta^{-2}$. We will henceforth drop these terms. The system variable $\hat{J}_{\mathrm{\mathbf{z}}}$ is coupled to a degree of freedom, $\hat{J}_{\mathrm{\mathbf{y}}}$, which is neither system nor meter in the QND measurement. This coupling introduces noise into the system variable, and decoherence into the state of the ensemble. To remove the decoherence associated with this coupling $G_2 \hat{S}_{\mathrm{\mathbf{x}}} \hat{J}_{\mathrm{\mathbf{y}}}$, we adopt the strategy of ``bang-bang'' dynamical decoupling \cite{Viola1998PRAv58p2733, Viola1999PRLv82p2417, Facchi2005PRAv71p22302}. In this method, a unitary $\hat{U}_{b}$ and its inverse $\hat{U}_{b}^\dagger$ are alternately and periodically applied to the system $p$ times during the evolution, so that the total evolution is $[\hat{U}_{b}^\dagger \hat{U}_H(t/2p) \hat{U}_{b} \hat{U}_H(t/2p)]^p$ where $\hat{U}_H(t)$ describes unitary evolution under $\hat{H}$ for a time $t$. With this evolution, those system variables that are unchanged by $\hat{U}_{b}$ continue to evolve under $\hat{H}$, while others are rapidly switched from one value to another, preventing coherent evolution. For large $p$, the system evolves under a modified Hamiltonian $\hat{H}' = \hat{P} \hat{H}$, where $\hat{P}$ projects onto the commutant (i.e., the set of operators which commute with) of $\{ \hat{U}_{b},\hat{U}_{b}^\dagger \}$ \cite{Facchi2005PRAv71p22302}. To eliminate $G_2 (\hat{S}_{\mathrm{\mathbf{x}}} \hat{J}_{\mathbf{x}} + \hat{S}_{\mathrm{\mathbf{y}}} \hat{J}_{\mathrm{\mathbf{y}}})$, while keeping $G_1 \hat{S}_{\mathrm{\mathbf{z}}} \hat{J}_{\mathrm{\mathbf{z}}}$ we choose a $\hat{U}_{b}$ which commutes with $\hat{J}_{\mathrm{\mathbf{z}}}$, but not with $\hat{J}_{\mathbf{x}}$ or $\hat{J}_{\mathrm{\mathbf{y}}}$, namely a $\pi$ rotation about $\hat{J}_{\mathrm{\mathbf{z}}}$, $\hat{U}_{b} = \exp[i \pi \hat{J}_{\mathrm{\mathbf{z}}}]$. This leaves $\hat{J}_{\mathrm{\mathbf{z}}}$ unchanged, but inverts $\hat{J}_{\mathbf{x}}$ and $\hat{J}_{\mathrm{\mathbf{y}}}$. 
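The effect of the bang-bang unitary can likewise be checked numerically for $f=1$. In the toy sketch below (an illustration only: the Stokes operators are replaced by c-number placeholders and the coupling constants are arbitrary), $\hat{U}_{b}=\exp[i\pi\hat{j}_{\mathrm{\mathbf{z}}}]$ leaves $\hat{j}_{\mathrm{\mathbf{z}}}$ unchanged, inverts $\hat{j}_{\mathbf{x}}$ and $\hat{j}_{\mathrm{\mathbf{y}}}$, and the average of the two bang-bang steps retains only the $G_1$ coupling.
\begin{verbatim}
# Illustrative sketch, f = 1, single atom, c-number stand-ins for the light:
# U_b = exp(i*pi*j_z) leaves j_z invariant, inverts j_x and j_y, so averaging
# the two bang-bang steps cancels every term linear in j_x, j_y.
import numpy as np

fz = np.diag([1.0, 0.0, -1.0])
jz = fz / 2
jx = 0.5 * np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
jy = 0.5 * np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])

Ub = np.diag(np.exp(1j * np.pi * np.diag(jz)))   # j_z is diagonal here
conj = lambda A: Ub.conj().T @ A @ Ub

print(np.allclose(conj(jz), jz))     # True: system variable untouched
print(np.allclose(conj(jx), -jx))    # True: inverted
print(np.allclose(conj(jy), -jy))    # True: inverted

# Atomic side of H_eff = G1*Sz*jz + G2*(Sx*jx + Sy*jy), with Sx, Sy, Sz treated
# as arbitrary illustrative numbers rather than operators.
G1, G2, Sx, Sy, Sz = 0.3, 0.1, 1.0, 0.2, 0.4
H = G1 * Sz * jz + G2 * (Sx * jx + Sy * jy)
H_avg = (H + conj(H)) / 2
print(np.allclose(H_avg, G1 * Sz * jz))          # True: only the QND term survives
\end{verbatim}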
By the symmetry of $\hat{H}_{\rm eff}$, this is equivalent to inverting $\hat{S}_{\mathrm{\mathbf{x}}}$ and $\hat{S}_{\mathrm{\mathbf{y}}}$, which suggests a practical implementation: probe with pulses of alternating $\hat{S}_{\mathrm{\mathbf{x}}}$, and define a `meter' variable taking into account the inversion of $\hat{S}_{\mathrm{\mathbf{y}}}$. We consider sequential interaction of the ensemble with a pair of pulses, with $\hat{S}_{\mathrm{\mathbf{x}}}^{(1)} = -\hat{S}_{\mathrm{\mathbf{x}}}^{(2)}=N_L/4p$. We define also the new `meter' variable $S_y^{(\rm diff)} \equiv \hat{S}_{\mathrm{\mathbf{y}}}^{(1)} -\hat{S}_{\mathrm{\mathbf{y}}}^{(2)}$. We describe the atomic variables before, between, and after the two pulses with superscripts $(\rm in),(mid),(out)$, respectively. We apply Equations (\ref{JzInOut}-\ref{SyInOut}) to find: \begin{eqnarray} \hat{J}_{\mathrm{\mathbf{z}}}^{({\rm mid})} &=& \hat{J}_{\mathrm{\mathbf{z}}}^{({\rm in})} \spacer+ G_2 \hat{S}_{\mathrm{\mathbf{x}}}^{({1})} \hat{J}_{\mathrm{\mathbf{y}}}^{({\rm in})} \label{JzInOutFP} \\ \hat{J}_{\mathrm{\mathbf{y}}}^{({\rm mid})} &=& \hat{J}_{\mathrm{\mathbf{y}}}^{({\rm in})} - G_1 \hat{S}_{\mathrm{\mathbf{z}}}^{({1})} in \hat{J}_{\mathbf{x}} - G_2 \hat{S}_{\mathrm{\mathbf{x}}}^{({1})} \hat{J}_{[\mathbf{x,y}]}^{({\rm in})} \label{JzInOutFP} \\ \hat{S}_{\mathrm{\mathbf{y}}}^{({1})} out &=& \hat{S}_{\mathrm{\mathbf{y}}}^{({1})} in + G_1 \hat{S}_{\mathrm{\mathbf{x}}}^{({1})} \hat{J}_{\mathrm{\mathbf{z}}}^{({\rm in})} \label{SyInOutFP} \end{eqnarray} and \begin{eqnarray} \hat{J}_{\mathrm{\mathbf{z}}}^{({\rm out})} &=& \hat{J}_{\mathrm{\mathbf{z}}}^{({\rm in})} \label{JzInOutTP} \\ \hat{S}_{\mathrm{\mathbf{y}}}^{(\rm diff,out)} &=& \hat{S}_{\mathrm{\mathbf{y}}}^{(\rm diff,in)} + 2G_1 \hat{S}_{\mathrm{\mathbf{x}}}^{({1})} \hat{J}_{\mathrm{\mathbf{z}}}^{({\rm in})} \label{SyInOutTP} \end{eqnarray} plus terms in $G_1 G_2 \hat{S}_{\mathrm{\mathbf{x}}} \hat{S}_{\mathrm{\mathbf{z}}} \hat{J}_{\mathbf{x}} $, $G_2^2 \hat{S}_{\mathrm{\mathbf{x}}}^2 \hat{J}_{[\mathbf{x,y}]}$ and $G_1 G_2 \hat{S}_{\mathrm{\mathbf{x}}}^2 \hat{J}_{\mathrm{\mathbf{y}}}$ which become negligible in the limit of large $p$. The ideal QND form is recovered by the dynamical decoupling. The presence of the $G_2$ term can be detected by noise scaling properties. While in the ideal QND of Equations (\ref{JzInOutTP}),(\ref{SyInOutTP}) the variance of the system variable is $\propto \hat{J}_{\mathbf{x}}$ giving a variance for the meter variable linear in $\hat{J}_{\mathbf{x}}$, for the imperfect QND of Equations (\ref{JzInOut}) to (\ref{SyInOut}) this is not the case: from Equation (\ref{JzInOutFP}), we see that $\hat{J}_{\mathrm{\mathbf{y}}}$ acquires a back-action variance $\propto \hat{J}_{\mathbf{x}}^2$, which then is fed into the system variable by the $G_2$ term. This additional $\hat{J}_{\mathbf{x}}^2$ noise is also reflected in the meter variable, and provides a measurable indication of $G_2$. We use the two-polarization decoupling technique to perform QND measurement on an ensemble of $\sim10^{6}$ laser cooled $^{87}$Rb atoms in the $F=1$ ground state. In the atomic ensemble system, described in detail in reference \cite{Kubasik2009PRAv79p43815}, $\mu$s pulses interact with an elongated atomic cloud and are detected by a shot-noise-limited polarimeter. The experiment achieves projection noise limited sensitivity, as calibrated against a thermal spin state \cite{Koschorreck2010PRLv104p93602}. 
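The noise-scaling diagnostic described above can be emulated on synthetic data; the short sketch below is purely a toy, with assumed coefficients rather than experimental numbers, and fits the normalized noise versus atom number with a quadratic polynomial, the quadratic coefficient signalling any residual $G_2$ back-action.
\begin{verbatim}
# Toy illustration (synthetic data, assumed coefficients): the G_2 back-action
# adds a var(S_y) component growing as J_x^2 ~ N_A^2, so the quadratic term of
# a polynomial fit of noise versus atom number diagnoses residual coupling.
import numpy as np

rng = np.random.default_rng(0)
NA = np.linspace(1e5, 1e6, 20)
shot, lin, quad = 1.0, 2.0e-6, 5.0e-13           # assumed toy coefficients
var_naive = shot + lin * NA + quad * NA**2       # single-polarization probing
var_decoupled = shot + lin * NA                  # two-polarization probing
var_naive += rng.normal(0, 0.02, NA.size)        # small measurement scatter
var_decoupled += rng.normal(0, 0.02, NA.size)

for label, v in [("naive", var_naive), ("decoupled", var_decoupled)]:
    c2, c1, c0 = np.polyfit(NA, v, 2)            # quadratic, linear, offset
    print(label, "quadratic coefficient:", c2)
# The decoupled data show a quadratic coefficient consistent with zero.
\end{verbatim}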
\begin{figure} \caption{Schematic of the experimental sequence for the two-polarization QND probing.\label{fig:TPP scheme}} \end{figure} The experimental sequence is shown schematically in Fig.~\ref{fig:TPP scheme}. In each measurement cycle the atom number $N_A$ is first measured by a dispersive atom-number measurement (DANM) \cite{Koschorreck2010PRLv104p93602}. A $\hat{J}_{\mathbf{x}}$-polarized coherent spin state (CSS) is then prepared and probed with pulses of alternating polarization to find the QND signal $\hat{S}_{\mathrm{\mathbf{y}}} \equiv \sum_i \hat{s}_{\mathbf{y},i}^{({\rm out})} (-1)^{i+1}$. Immediately after, $\dexpect{\hat{J}_{\mathbf{x}}}$ is measured to quantify depolarization of the sample, and any atoms having made transitions to the $F=2$ manifold are removed from the trap, reducing $N_A$ for the next cycle and allowing a range of $N_A$ to be probed on a single loading. This sequence of state preparation and probing is repeated ten times for each loading of the trap. The trap is loaded 350 times to acquire statistics. The optical dipole trap, formed by a weakly-focused ($52\,\mu$m) beam of a Yb:YAG laser at $1030\,$nm with $6\,$W of optical power, is loaded from a conventional two-stage magneto-optical trap (MOT) over $4\,$s. Sub-Doppler cooling produces atom temperatures down to $25\,\mu$K as measured in the dipole trap \cite{Kubasik2009PRAv79p43815}. In the DANM, we prepare a $\hat{J}_{\mathbf{x}}$-polarized CSS, i.e., all atoms in a coherent superposition of hyperfine states $\left|\uparrow/\downarrow\right\rangle \equiv \left|F=1,m_F = \pm1\right\rangle $, by optically pumping with vertically-polarized light tuned to the transition $F=1\rightarrow F'=1$, while also applying repumping on the $F=2\rightarrow F'=2$ transition and a weak magnetic field along $x$ to prevent spin precession. The atoms reach this dark state after scattering fewer than two photons on average. To measure $\dexpect{\hat{J}_{\mathbf{x}}}$, we send ten circularly-polarized probe pulses, i.e., with $\dexpect{\hat{S}_{\mathrm{\mathbf{z}}}} = N_L/2$, tuned $190\,$MHz to the red of the transition $F=1\rightarrow F'=0$. Each pulse, of $1\,\mu$s duration, contains $2.6\times10^{6}$ photons and produces a signal $\dexpect{\hat{S}_{\mathrm{\mathbf{y}}}} \propto G_{2}\dexpect{ \hat{S}_{\mathrm{\mathbf{z}}}}\dexpect{ \hat{J}_{\mathbf{x}}}$. The coherent state for the QND measurement is prepared in the same way, but in zero magnetic field. To measure $ \hat{J}_{\mathrm{\mathbf{z}}}$, i.e., one half the population difference between $\left|\uparrow\right\rangle $ and $\left|\downarrow\right\rangle $, we send probe pulses of either vertical $s_{\mathrm{\mathbf{x}}} = n_L/2$ or horizontal $s_{\mathrm{\mathbf{x}}} = -n_L/2$ polarization through the atomic sample and record their polarization rotation as $\hat{s}_{\mathbf{y},i}^{\rm (out)}$. The number of individual probe pulses is $2p$ and the total number of probe photons is $N_{L}=2pn_{L}$. \begin{figure} \caption{Measured noise versus atom number, compared with covariance-matrix simulations.\label{fig:Exp and Sim}} \end{figure} In Fig.~\ref{fig:Exp and Sim} we plot the measured noise versus atom number, which confirms the linear scaling characteristic of the QND measurement. The black squares indicate the variance $\var{\hat{S}_{\mathrm{\mathbf{y}}}}$ normalized to the optical polarization noise, measured in the absence of atoms. Independent measurements confirm the polarimetry is shot-noise limited in this regime.
The black solid line is the expected projection noise scaling $4\var{ \hat{S}_{\mathrm{\mathbf{y}}}}/N_L=1+G_{1}^{2}N_{L}\var{ \hat{J}_{\mathrm{\mathbf{z}}}}$, calculated from the independently measured interaction strength $G_{1}$ and number of probe photons $N_{L}=8\times10^{8}$. The QND measurement achieves projection-noise limited sensitivity, i.e., the measurement noise is $5.7(6)\,$dB below the projection noise. Also shown are results of covariance matrix calculations, following the techniques of reference \cite{Koschorreck2009JPBv42p9}, including loss and photon scattering. The scenarios considered include the naive QND measurement, i.e., with a single polarization, and the ``bang-bang'' or two-polarization QND measurement, with $p=1,2,5$. These show a rapid decrease in the quadratic component with increasing $p$. This confirms the removal of $G_2$ due to the dynamical decoupling. Also included in these simulations is the term $ \hat{S}_{\mathrm{\mathbf{y}}} \hat{J}_{\mathrm{\mathbf{y}}}$ which introduces noise into $ \hat{J}_{\mathrm{\mathbf{z}}}$ proportional to $G_{2}^{2}\var{ \hat{S}_{\mathrm{\mathbf{y}}}}\dexpect{ \hat{J}_{\mathbf{x}}}^{2}$. For our experimental parameters this term leads to an increase of $\var{ \hat{J}_{\mathrm{\mathbf{z}}}}$ of less than $2\,$\% and, as noted above, could be reduced with increased detuning. The dynamical decoupling also suppresses technical noise which would otherwise enter into $ \hat{J}_{\mathrm{\mathbf{z}}}$ through the interaction $G_{2}( \hat{S}_{\mathrm{\mathbf{x}}} \hat{J}_{\mathbf{x}}+ \hat{S}_{\mathrm{\mathbf{y}}} \hat{J}_{\mathrm{\mathbf{y}}})$. An imperfect preparation of the atomic and/or light state, e.g., $\dexpect{ \hat{J}_{\mathrm{\mathbf{y}}}}\neq0$ or $\dexpect{ \hat{S}_{\mathrm{\mathbf{y}}}}\neq0$, would otherwise be transferred into $ \hat{J}_{\mathrm{\mathbf{z}}}$. Using dynamical decoupling techniques, we have demonstrated optical quantum non-demolition measurement of a large-spin system. We first identify an often-overlooked impediment to this goal: the tensorial polarizability causes decoherence of the measured variable, and prevents (naive) QND measurement of small ensembles. We then identify an appropriate dynamical decoupling strategy to cancel the tensorial components of the effective Hamiltonian, and implement the strategy with an ensemble of $\sim 10^6$ cold $^{87}$Rb atoms and two-polarization probing. The dynamically-decoupled QND measurement achieves a sensitivity $5.7(6)$ dB better than the projection noise level. The technique will enable the use of large-spin ensembles in quantum metrology and quantum networking, and permit the QND measurement of exotic phases of large-spin condensed atomic gases. We gratefully acknowledge fruitful discussions with Ivan H. Deutsch and Robert Sewell. This work was funded by the Spanish Ministry of Science and Innovation under the ILUMA project (Ref. FIS2008-01051) and the Consolider-Ingenio 2010 Project ``QOIT''. \end{document}
\begin{document} \title{Classifying Finitely Generated Indecomposable RA Loops} \author{Mariana G. Cornelissen \\ \small\emph{{Universidade Federal de São João Del Rei, Brazil}} \\ \\ C. Polcino Milies \\ \small\emph{{Universidade de São Paulo, Brazil}}} \date{} \maketitle \begin{abstract} In \cite{casofinito}, E. Jespers, G. Leal and C. Polcino Milies classified all finite ring alternative loops (RA loops for short) which are not direct products of proper subloops. In this paper we extend this result to finitely generated RA loops and provide an explicit description of all such loops. \end{abstract} \section{Basic Definitions} A loop is a pair $(L,.)$ where $L$ is a nonempty set and $(a,b) \mapsto a.b$ is a closed binary operation on $L$ which has a two-sided identity element $1$ and with the property that the equation $a.b=c$ determines a unique element $b \in L$ for given $a,c \in L$ and a unique element $a \in L$ for given $b,c \in L$. A subloop of a loop $(L,.)$ is a subset of $L$ which, under the binary operation, is also a loop. Given a commutative and associative ring $R$ with unity and a loop $L$, we can construct the loop algebra $RL$ of $L$ over $R$ as the free $R$-module with basis $L$ in which the multiplication is defined by extending that of $L$ via the distributive laws. We recall that a ring $R$ (not necessarily associative) is called alternative if it satisfies the identities $$ [x,x,y]=0 \mbox{ and } [y,x,x]=0$$ for all $x,y \in R$, where $[a,b,c]=(ab)c - a(bc)$ is the \emph{associator} of $a,b,c$. A \textbf{R}ing \textbf{A}lternative Loop (RA loop) is a loop whose loop ring, over some commutative and associative ring with unity of characteristic different from 2, is alternative but not associative. In this paper we provide an explicit description of all finitely generated indecomposable RA loops, i.e., finitely generated RA loops which are not direct products of proper subloops. \section{Some Results} We list below some results about loops and groups that are used in this work. Their proofs can be found in \cite{livro}, \cite{artigogrupos}. Suppose that $G$ is a non-abelian group with center $Z(G)$, $g_o$ is an element of $Z(G)$ and $g \mapsto g^*$ is an involution of $G$ such that ${g_{o}}^* =g_o$ and $gg^* \in Z(G)$ for every $g \in G$. Let $L = G \cup Gu$ where $u$ is an indeterminate and extend the binary operation from $G$ to $L$ by the rules: \[ g(hu)=(hg)u \] \[ (gu)h=(gh^*)u \] \[ (gu)(hu)=g_oh^*g \] for $g,h \in G$. It can be shown that $L$ is a non-associative loop in which each element has a unique two-sided inverse. This loop is denoted by $M(G,*,g_o)$. The theorem below shows that the loops obtained in this way are, in fact, all the RA loops. \begin{theorem}[\cite{livro}, Theorem IV.3.1] \label{raloops} Let $L$ be an RA loop. Then there exists a group $G \subset L$ and an element $u \in L$ such that $L= G \cup Gu $, $L'=G'= \{1,s\} \subseteq Z(G) = Z(L)$, where $L'$ is the commutator-associator subloop of $L$, $L / Z(L) \simeq C_2 \times C_2 \times C_2$, where $C_2$ denotes a cyclic group of order 2, and consequently $G / Z(G) \simeq C_2 \times C_2$. \\ Furthermore, the map $*: L \rightarrow L$ given by \begin{eqnarray}\label{involution} g^* = \left\{ \begin{array}{ll} g, & \hbox{if $g \in Z(G)$;} \\ sg, & \hbox{if $g \notin Z(G)$,} \\ \end{array} \right. \end{eqnarray} is an involution of $L$, and, setting $u^2 =g_o$, we have that $g_o \in Z(G)$ and $L = M(G,*,g_o)$.
Conversely, for any nonabelian group $G$ such that $G / Z(G) \simeq C_2 \times C_2$, with the involution $*$ given by (\ref{involution}), the loop $M(G,*,g_o)$ is an RA loop for any $g_o \in Z(G)$. \end{theorem} According to the theorem above, in order to classify the finitely generated RA loops, we have to study the finitely generated groups $G$ such that $G/Z(G) \simeq C_2 \times C_2$. In \cite{artigogrupos}, M. Cornelissen and C. Polcino Milies classified all finitely generated groups $G$ such that $G/Z(G) \simeq C_p \times C_p$ where $C_p$ denotes a cyclic group of prime order $p$. We quote below two important results of that work. \begin{theorem}[\cite{artigogrupos}, Theorem 2.1] \label{decomposition} A finitely generated group $G$ is such that $G/Z(G)\simeq C_p \times C_p$, where $C_p$ denotes a cyclic group of prime order $p$, if and only if it can be written in the form $G = D \times A$ where $A$ is a finitely generated abelian group and $D$ is an indecomposable group such that $D = \langle x,y,Z(D)\rangle$ where $Z(D)$ is of the form $Z(D) = \langle t_1\rangle \times \langle z_2\rangle \times \langle z_3\rangle$, with: $(i)$ $o(t_1)= p^{m_1}, m_1 \geq 1$ and $s = [x,y]=x^{-1}y^{-1}xy = t_1^{p^{m_1-1}}$, $(ii)$ either $o(z_i)= p^{m_i}$ with $m_i \geq 0$ or $o(z_i) = \infty$, for $i=2,3$, $(iii)$ $x^p \in \langle t_1\rangle \times \langle z_2\rangle$ and $y^p \in \langle t_1\rangle \times \langle z_2\rangle \times \langle z_3\rangle$. \end{theorem} \begin{theorem}[\cite{artigogrupos}, Theorem 3.8]\label{groupclassification} Let $G$ be a finitely generated indecomposable group such that $G/Z(G) \simeq C_p \times C_p$ where $C_p$ denotes a cyclic group of prime order $p$. Then $G$ is of the form $G = \langle x,y, Z(G) \rangle$ with $x^p, y^p \in Z(G)$ and belongs to one of the nine types of non-isomorphic groups listed in the table below. In each case we have that $o(t_i) = p^{m_i}, i=1,2,3$ and $o(u_j) = \infty, j = 1,2$. \begin{center} \begin{table}[htb] \begin{center} \caption{Classification Group Table} \begin{tabular}{||c||c||c||c||} \hline $G$ & $Z(G)$ & $x^p$ & $y^p$ \\ \hline 1 & $\langle t_1 \rangle$ & 1 & 1 \\ 2 & $\langle t_1 \rangle$ & $t_1$ & $t_1$ \\ 3 & $\langle t_1 \rangle \times \langle t_2 \rangle$ & 1 & $t_2$ \\ 4 & $\langle t_1 \rangle \times \langle t_2 \rangle$ & $t_1$ & $t_2$ \\ 5 & $\langle t_1 \rangle \times \langle u_1 \rangle$ & 1 & $u_1$ \\ 6 & $\langle t_1 \rangle \times \langle u_1 \rangle$ & $t_1$ & $u_1$ \\ 7 & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle t_3 \rangle$ & $t_2$& $t_3$ \\ 8 & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ \\ 9 & $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle u_2 \rangle$ & $u_1$ & $u_2$ \\ \hline \end{tabular} \end{center} \end{table} \end{center} Hence, in this work, we will use the above classification with $p=2$ to describe all the finitely generated indecomposable loops of the type $L=M(G,*,g_o)$. \section{Classifying Finitely Generated Indecomposable RA Loops} With the results of the previous section and a few other remarks we shall show how to construct all the indecomposable finitely generated RA loops. \begin{lemma}[\cite{livro}, Proposition V.1.6]\label{lema} Let $L = M(G,*,g_o)$ be an RA loop and $A$ an abelian group. Then $(g,a)^* = (g^*,a)$ defines an involution in $G \times A$ and $M(G \times A, *,(g_o,1))$ is an RA loop isomorphic to $M(G,*,g_o) \times A$.
Conversely, if $A$ is an abelian group such that $M(G \times A, *, (g_o,1))$ is an RA loop for some non-abelian group $G$ and some $g_o \in Z(G)$, then the restriction of $*$ to $G$ is an involution of $G$ and $M(G \times A, *,(g_o,1))\simeq M(G,*,g_o)\times A$. \end{lemma} We know by [\cite{livro}, Theorem IV.2.1] that squares are central in RA loops. Using the same techniques as in [\cite{artigogrupos}, Lemma 2.2], one can prove the following lemma, which will be used frequently in what follows. \begin{lemma}\label{expoente} Let $v$ be an element of the indecomposable RA loop $L = M(G,*, g_o)$. Then $v$ can be chosen in such a way that if $w^{\alpha}$, $\alpha \in \mathbb{N}$, is a factor of $v^2 \in Z(L)$, we may take $\alpha = 0$ if $\alpha$ is even and $\alpha =1$ if $\alpha$ is odd. \end{lemma} Now we observe that if $L=M(G,*,g_o)$ is an indecomposable RA loop, then $G$ is close to being itself indecomposable. \begin{theorem}Let $L=M(G,*,g_o)$ be a finitely generated indecomposable RA loop. Then $G = D \times H$ where $D$ is a finitely generated indecomposable group and $H$ is a cyclic group. If $H = \langle h \rangle$ is non-trivial, then $g_o = dh$ with $d \in Z(D)$ and $h \in H$. \end{theorem} \emph{Proof:} Let $L = G \cup Gu$, $u \notin G$, $u^2=g_o \in Z(G)$. By Theorem \ref{decomposition} we have that $G = D \times A$ with $D = \langle x,y,Z(D) \rangle$ indecomposable, ${\rm rank}[Z(D)] \leq 3$ and $A$ a finitely generated abelian group. If $A$ is trivial, we have nothing to prove. Suppose $A \not= \{1\}$ and write $g_o = da$, $d \in Z(D), a\in A$. We claim that $a \not= 1$. Otherwise, $g_o = d \in Z(D)$ and, by Lemma \ref{lema}, we can write $L = M(D \times A, *, (g_o,1)) \simeq M(D,*,g_o) \times A$, which contradicts the indecomposability of $L$. We can write $A = B \times C \times F$ where $B$ is a 2-group, $|C|$ is odd and $F$ is a free abelian group of finite rank. We can suppose that $a \in B \times F$. In fact, if $a = a'c$ with $a' \in B \times F$ and $c \in C$ such that $c^n =1$, then we can change $u$ to $u'= \gamma u$ where $\gamma \in C$ is such that $\gamma ^2 = c^{n-1}$. (Observe that such a $\gamma$ exists since the map $x \mapsto x^2$ is an automorphism of $C$.) Hence, $(u')^2 = (\gamma u)^2 = \gamma ^2 u^2 = c^{n-1}g_o = c^{n-1}da = c^{n-1}da'c = da' \in D \times B \times F$. (Note that $L = G \cup Gu = G \cup Gu'$.) Write $A = \langle t_1 \rangle \times \langle t_2 \rangle \times \ldots \times \langle t_k \rangle \times \langle x_1 \rangle \times \ldots \times \langle x_l \rangle \times C$ with $o(t_i)= 2^{m_i}$, $o(x_j)= \infty$ and $g_o = dt_1^{a_1}\ldots t_m^{a_m} x_1^{b_1} \ldots x_n^{b_n}$ with $m \leq k$, $n \leq l$. Using Lemma \ref{expoente} for $u^2 = g_o$, and recalling that $Z(G) = Z(L)$ and $L = G \cup Gu = G \cup G(\alpha u)$ with $\alpha \in Z(G)$, we can assume that $a_i, b_j \in \{0,1\}$ for $i = 1, \ldots ,m$ and $j = 1, \ldots, n$, i.e., $g_o = dt_1\ldots t_m x_1 \ldots x_n$, $m \leq k$, $n \leq l$, and reordering, if necessary, we can suppose that $o(t_1) \geq o(t_2) \geq \ldots \geq o(t_m)$. Let $H = \langle t_1\ldots t_m x_1 \ldots x_n \rangle$ be a cyclic group. If $x_1 \ldots x_n =1$ then $H = \langle t_1 \ldots t_m \rangle, g_o = dt_1 \ldots t_m$ and $A = \langle t_1\ldots t_m \rangle \times \langle t_2\rangle \times \ldots \times \langle t_m \rangle \times \ldots \times \langle t_k \rangle \times \langle x_1 \rangle \times \ldots \times \langle x_l \rangle \times C$.
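As a purely illustrative aside, the construction $M(G,*,g_o)$ recalled in Section 2 can be tested computationally on the smallest classical example, $G = Q_8$ (the quaternion group, with $s=-1$ and $g_o = 1$); the choice of $Q_8$ and $g_o=1$ is made only for this illustration. The Python sketch below builds the resulting loop of order 16 and checks that it is a loop with unique two-sided inverses, that it is not associative, and that it satisfies the alternative laws $(xx)y=x(xy)$ and $y(xx)=(yx)x$.
\begin{verbatim}
# Illustrative sketch: the loop M(Q_8, *, 1), built from the rules
# g(hu)=(hg)u, (gu)h=(gh*)u, (gu)(hu)=h*g, with g* = g if central, g* = -g else.
from itertools import product

def qmul(a, b):                      # Hamilton product, exact integer arithmetic
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

units = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]
G = [tuple(s*c for c in q) for q in units for s in (1, -1)]          # Q_8
centre = {(1,0,0,0), (-1,0,0,0)}
star = lambda g: g if g in centre else tuple(-c for c in g)          # g* = sg, s = -1

def mul(a, b):                       # multiplication in L = G + Gu, g_o = 1
    g, eg = a
    h, eh = b
    if eg == 0 and eh == 0: return (qmul(g, h), 0)
    if eg == 0 and eh == 1: return (qmul(h, g), 1)        # g(hu) = (hg)u
    if eg == 1 and eh == 0: return (qmul(g, star(h)), 1)  # (gu)h = (gh*)u
    return (qmul(star(h), g), 0)                          # (gu)(hu) = h*g

L = [(g, e) for e in (0, 1) for g in G]
one = ((1,0,0,0), 0)

# Loop axioms: identity element and each row/column of the table a permutation.
assert all(mul(one, a) == a == mul(a, one) for a in L)
assert all(len({mul(a, b) for b in L}) == 16 for a in L)
assert all(len({mul(a, b) for a in L}) == 16 for b in L)
# Unique two-sided inverses.
assert all(sum(1 for b in L if mul(a, b) == one == mul(b, a)) == 1 for a in L)
# Non-associative, yet left and right alternative.
assoc = lambda a, b, c: mul(mul(a, b), c) == mul(a, mul(b, c))
print("associative:", all(assoc(a, b, c) for a, b, c in product(L, L, L)))  # False
print("alternative:", all(assoc(a, a, b) and assoc(b, a, a) for a, b in product(L, L)))  # True
\end{verbatim}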
If $x_1\ldots x_n \not= 1$ we can then write $A = \langle t_1\ldots t_mx_1\ldots x_n \rangle \times \langle x_2 \rangle \times \ldots \times \langle x_n \rangle \times \ldots \times \langle x_l \rangle \times \langle t_1 \rangle \times \ldots \times \langle t_k \rangle \times C$. In both cases, we have that $A = H \times K$ with $H$ a cyclic group and $K$ an abelian group. Hence $ G = D \times H \times K$ with $g_o \in D \times H$ and $K$ abelian. Using Lemma \ref{lema} again, we have that $L = M(D \times H, *, g_o) \times K$. The indecomposability of $L$ implies that $K = \{1\}$, so $ G = D \times H$ with $H$ cyclic, $H = \langle h \rangle$ and $g_o = dh$. $ {\fimprova}$ Using the previous results, we can write the indecomposable RA loop $L$ in the form $L = M(G,*,g_o) = G \cup Gu$ with: $(i)$ $u^2 = g_o \in Z(G)$; $(ii)$ $G = D \times H = \langle x,y,Z(D) \rangle \times H$; $(iii)$ $Z(L) = Z(G) = Z(D) \times H = \langle t_1 \rangle \times \langle z_2 \rangle \times \langle z_3 \rangle \times \langle h \rangle$, where $o(t_1) = 2^{m_1}, m_1 \geq 1$, $o(z_i) = 2^{m_i}, m_i \geq 0$ or $o(z_i) = \infty$ for $i=2,3$, and $o(h) = 2^k, k \geq 0$ or $o(h) = \infty$; $(iv)$ $x^2, y^2 \in Z(D)$. Note that if $z$ is a non-trivial factor of $x^2$ or $y^2$ then we can choose $u$ such that $u^2$ does not have the factor $z$ in its decomposition, except in the case where $z = t_1$ and $m_1 =1$, i.e., ${t_1}^2=1$. In fact, if $u^2$ and $x^2$ both have $z$ in their decompositions, set $ u' =xu$; then $L = G \cup Gu = G \cup Gu'$ and $u'^2 = sx^2u^2$, where $s$ is the unique non-trivial commutator-associator of $L$. Since $s = t_1^{2^{m_1-1}}$, the exponent of $z$ in $u'^2$ is equal to two (unless $z = t_1$ and $m_1=1$). Hence, by Lemma \ref{expoente}, we can make a new choice of $u'$ such that $u'^2$ does not have the factor $z$. A similar statement is true for $y^2$. We are now ready to enumerate all possible indecomposable RA loops. Using Theorem \ref{groupclassification} with $p=2$, it follows that there exist nine types of groups $D$ as above and, for each one, six possibilities of indecomposable RA loops, as we can see in the tables below.
\begin{quote} \begin{center} Table 1: RA Loops when $D = G$ of type 1 \end{center} \end{quote} \begin{center} \begin{tabular}{||c||c||c||c||c||c||} \hline Loop $L$ & $Z(D)$ & $x^2$ & $y^2$ & $G$ & $u^2=g_{o}$ \\ \hline $L_1$ & $ \langle t_1 \rangle$ & 1 & 1 & $G_1$ & 1 \\ $L_2$ & $\langle t_1 \rangle$ & 1 & 1 & $G_1$ & $t_1$ \\ $L_3$ & $\langle t_1 \rangle$ & 1 & 1 & $G_1 \times \langle t \rangle$ & $t$ \\ $L_4$ & $\langle t_1 \rangle$ & 1 & 1 & $G_1 \times \langle t \rangle$ & $t_1t$ \\ $L_5$ & $\langle t_1 \rangle$ & 1 & 1 & $G_1 \times \langle w \rangle$ & $w$ \\ $L_6$ & $\langle t_1 \rangle$ & 1 & 1 & $G_1 \times \langle w \rangle$ & $t_1w$ \\ \hline \end{tabular} \end{center} \begin{quote} \begin{center} Table 2: RA Loops when $D = G$ of type 2 \end{center} \end{quote} \begin{center} \begin{tabular}{||c||c||c||c||c||c||} \hline Loop $L$ & $Z(D)$ & $x^2$ & $y^2$ & $G$ & $u^2=g_{o}$ \\ \hline $L_7$ & $\langle t_1 \rangle$ & $t_1$ & $t_1$ & $G_2$ & 1 \\ $L_8^*$ & $\langle t_1 \rangle$ & $t_1$ & $t_1$ & $G_2$ & $t_1$ \\ $L_9$ & $\langle t_1 \rangle$ & $t_1$ & $t_1$ & $G_2 \times \langle t \rangle$ & $t$ \\ $L_{10}^*$ & $\langle t_1 \rangle$ & $t_1$ & $t_1$ & $G_2 \times \langle t \rangle$ & $t_1t$ \\ $L_{11}$ & $ \langle t_1 \rangle$ & $t_1$ & $t_1$ & $G_2 \times \langle w \rangle$ & $w$ \\ $L_{12}^*$ & $\langle t_1 \rangle$ & $t_1$ & $t_1$ & $G_2 \times \langle w \rangle$ & $t_1w$ \\ \hline \end{tabular} \end{center} \begin{quote} \begin{center} Table 3: RA Loops when $D = G$ of type 3 \end{center} \end{quote} \begin{center} \begin{tabular}{||c||c||c||c||c||c||} \hline Loop $L$ & $Z(D)$ & $x^2$ & $y^2$ & $G$ & $u^2=g_{o}$ \\ \hline $L_{13}$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & 1 & $t_2$ & $G_3$ & 1 \\ $L_{14}$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & 1 & $t_2$ & $G_3$ & $t_1$ \\ $L_{15}$ & $\langle t_1\rangle \times \langle t_2 \rangle$ & 1 & $t_2$ & $G_3 \times \langle t \rangle$ & $t$ \\ $L_{16}$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & 1 & $t_2$ & $G_3 \times \langle t \rangle$ & $t_1t$ \\ $L_{17}$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & 1 & $t_2$ & $G_3 \times \langle w \rangle$ & $w$ \\ $L_{18}$ & $\langle t_1\rangle \times \langle t_2 \rangle$ & 1 & $t_2$ & $G_3 \times \langle w \rangle$ & $t_1w$ \\ \hline \end{tabular} \end{center} \begin{quote} \begin{center} Table 4: RA Loops when $D = G$ of type 4 \end{center} \end{quote} \begin{center} \begin{tabular}{||c||c||c||c||c||c||} \hline Loop $L$ & $Z(D)$ & $x^2$ & $y^2$ & $G$ & $u^2=g_{o}$ \\ \hline $L_{19}$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & $t_1$ & $t_2$ & $G_4$ & 1 \\ $L_{20}^*$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & $t_1$ & $t_2$ & $G_4$ & $t_1$ \\ $L_{21}$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & $t_1$ & $t_2$ & $G_4 \times \langle t \rangle$ & $t$ \\ $L_{22}^*$ & $ \langle t_1 \rangle \times \langle t_2 \rangle$ & $t_1$ & $t_2$ & $G_4 \times \langle t \rangle$ & $t_1t$ \\ $L_{23}$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & $t_1$ & $t_2$ & $G_4 \times \langle w \rangle$ & $w$ \\ $L_{24}^*$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & $t_1$ & $t_2$ & $G_4 \times \langle w \rangle$ & $t_1w$ \\ \hline \end{tabular} \end{center} \begin{quote} \begin{center} Table 5: RA Loops when $D = G$ of type 5 \end{center} \end{quote} \begin{center} \begin{tabular}{||c||c||c||c||c||c||} \hline Loop $L$ & $Z(D)$ & $x^2$ & $y^2$ & $G$ & $u^2=g_{o}$ \\ \hline $L_{25}$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & 1 & $u_1$ &
$G_5$ & 1 \\ $L_{26}$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & 1 & $u_1$ & $G_5$ & $t_1$ \\ $L_{27}$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & 1 & $u_1$ & $G_5 \times \langle t \rangle$ & $t$ \\ $L_{28}$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & 1 & $u_1$ & $G_5 \times \langle t \rangle$ & $t_1t$ \\ $L_{29}$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & 1 & $u_1$ & $G_5 \times \langle w \rangle$ & $w$ \\ $L_{30}$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & 1 & $u_1$ & $G_5 \times \langle w \rangle$ & $t_1w$ \\ \hline \end{tabular} \end{center} \begin{quote} \begin{center} Table 6: RA Loops when $D = G$ of type 6 \end{center} \end{quote} \begin{center} \begin{tabular}{||c||c||c||c||c||c||} \hline Loop $L$ & $Z(D)$ & $x^2$ & $y^2$ & $G$ & $u^2=g_{o}$ \\ \hline $L_{31}$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & $t_1$ & $u_1$ & $G_6$ & 1 \\ $L_{32}^*$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & $t_1$ & $u_1$ & $G_6$ & $t_1$ \\ $L_{33}$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & $t_1$ & $u_1$ & $G_6 \times \langle t \rangle$ & $t$ \\ $L_{34}^*$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & $t_1$ & $u_1$ & $G_6 \times \langle t \rangle$ & $t_1t$ \\ $L_{35}$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & $t_1$ & $u_1$ & $G_6 \times \langle w \rangle$ & $w$ \\ $L_{36}^*$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & $t_1$ & $u_1$ & $G_6 \times \langle w \rangle$ & $t_1w$ \\ \hline \end{tabular} \end{center} \begin{quote} \begin{center} Table 7: RA Loops when $D = G$ of type 7 \end{center} \end{quote} \begin{center} \begin{tabular}{||c||c||c||c||c||c||} \hline Loop $L$ & $Z(D)$ & $x^2$ & $y^2$ & $G$ & $u^2=g_{o}$ \\ \hline $L_{37}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle t_3 \rangle$ & $t_2$ & $t_3$ & $G_7$ & 1 \\ $L_{38}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle t_3 \rangle$ & $t_2$ & $t_3$ & $G_7$ & $t_1$ \\ $L_{39}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle t_3 \rangle$ & $t_2$ & $t_3$ & $G_7 \times \langle t \rangle$ & $t$ \\ $L_{40}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle t_3 \rangle$ & $t_2$ & $t_3$ & $G_7 \times \langle t \rangle$ & $t_1t$ \\ $L_{41}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle t_3 \rangle$ & $t_2$ & $t_3$ & $G_7 \times \langle w \rangle$ & $w$ \\ $L_{42}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle t_3 \rangle$ & $t_2$ & $t_3$ & $G_7 \times \langle w \rangle$ & $t_1w$ \\ \hline \end{tabular} \end{center} \begin{quote} \begin{center} Table 8: RA Loops when $D = G$ of type 8 \end{center} \end{quote} \begin{center} \begin{tabular}{||c||c||c||c||c||c||} \hline Loop $L$ & $Z(D)$ & $x^2$ & $y^2$ & $G$ & $u^2=g_{o}$ \\ \hline $L_{43}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ & $G_8$ & 1 \\ $L_{44}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ & $G_8$ & $t_1$ \\ $L_{45}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ & $G_8 \times \langle t \rangle$ & $t$ \\ $L_{46}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ & $G_8 \times \langle t \rangle$ & $t_1t$ \\ $L_{47}$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ & $G_8 \times \langle w \rangle$ & $w$ \\ $L_{48}$ & $\langle t_1 \rangle \times \langle t_2 
\rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ & $G_8 \times \langle w \rangle$ & $t_1w$ \\ \hline \end{tabular} \end{center} \begin{quote} \begin{center} Table 9: RA Loops when $D = G$ of type 9 \end{center} \end{quote} \begin{center} \begin{tabular}{||c||c||c||c||c||c||} \hline Loop $L$ & $Z(D)$ & $x^2$ & $y^2$ & $G$ & $u^2=g_{o}$ \\ \hline $L_{49}$ & $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle u_2 \rangle$ & $u_1$ & $u_2$ & $G_9$ & 1 \\ $L_{50}$ & $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle u_2 \rangle$ & $u_1$ & $u_2$ & $G_9$ & $t_1$ \\ $L_{51}$ & $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle u_2 \rangle$ & $u_1$ & $u_2$ & $G_9 \times \langle t \rangle$ & $t$ \\ $L_{52}$ & $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle u_2 \rangle$ & $u_1$ & $u_2$ & $G_9 \times \langle t \rangle$ & $t_1t$ \\ $L_{53}$ & $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle u_2 \rangle$ & $u_1$ & $u_2$ & $G_9 \times \langle w \rangle$ & $w$ \\ $L_{54}$ & $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle u_2 \rangle$ & $u_1$ & $u_2$ & $G_9 \times \langle w \rangle$ & $t_1w$ \\ \hline \end{tabular} \end{center} In each of the tables above, we assume that $o(t_i)=2^{m_i}, m_i \geq 1$, $o(u_j) = \infty, j = 1,2$, $o(t) = 2^k, k \geq 1$ and $o(w) = \infty$. The rows marked with $*$ arise from the exceptional case where $m_1 =1$, so that $s = t_1$ and $t_1^2 = 1$. \section{Classification Theorem} In this section, we will see that many of the RA loops listed in the tables of the previous section are isomorphic. The next theorem shows that, in fact, we have only sixteen different types of finitely generated indecomposable RA loops. \begin{theorem} \label{classification} Any finitely generated indecomposable RA loop is in one of the sixteen types of loops listed in the table below.
\begin{quote} \begin{center} Table 10: Classification Loop Table \end{center} \end{quote} \begin{center} \begin{tabular}{||c||c||c||c||c||c||} \hline $L$ & $Z(D)$ & $x^2$ & $y^2$ & $G$ & $u^2=g_{o}$ \\ \hline $1$ & $\langle t_1 \rangle$ & 1 & 1 & $G_1$ & 1 \\ $2$ & $\langle t_1 \rangle$ & $t_1$ & $t_1$ & $G_2$ & $t_1$ \\ $3$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & 1 & $t_2$ & $G_3$ & 1 \\ $4$ & $\langle t_1 \rangle \times \langle t_2 \rangle$ & $t_1$ & $t_2$ & $G_4$ & $t_1$ \\ $5$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & 1 & $u_1$ & $G_5$ & 1 \\ $6$ & $\langle t_1 \rangle \times \langle u_1 \rangle$ & $t_1$ & $u_1$ & $G_6$ & $t_1$ \\ $7$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle t_3 \rangle$ & $t_2$ & $t_3$ & $G_7$ & 1 \\ $8$ & $ \langle t_1 \rangle \times \langle t_2 \rangle \times \langle t_3 \rangle$ & $t_2$ & $t_3$ & $G_7$ & $t_1$ \\ $9$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle t_3 \rangle$ & $t_2$ & $t_3$ & $G_7 \times \langle t \rangle$ & $t$ \\ $10$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ & $G_8$ & 1 \\ $11$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ & $G_8$ & $t_1$ \\ $12$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ & $G_8 \times \langle t \rangle$ & $t$ \\ $13$ & $\langle t_1 \rangle \times \langle t_2 \rangle \times \langle u_1 \rangle$ & $t_2$ & $u_1$ & $G_8 \times \langle w \rangle$ & $w$ \\ $14$ & $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle u_2 \rangle$ & $u_1$ & $u_2$ & $G_9$ & 1 \\ $15$ & $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle u_2 \rangle$ & $u_1$ & $u_2$ & $G_9$ & $t_1$ \\ $16$ & $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle u_2 \rangle$ & $u_1$ & $u_2$ & $G_9 \times \langle w \rangle$ & $w$ \\ \hline \end{tabular} \end{center} \end{theorem} \emph{Proof:} We will prove, case by case, that each type of loop listed in the nine tables of Section 3 appears in the above table. In this proof we will often make use of the fact that if $u,v,w$ are any three elements of $L$ that do not associate, then no two distinct elements of $\{u,v,w\}$ commute, $G = \langle u,v,Z(L) \rangle$ is a non-abelian group with $G' = L' = \{1,s\}$, and $L = G \cup Gw = M(G,*,w^2)$. A proof of this fact can be found in [\cite{livro}, Corollary IV.2.3]. Note that the types of loops $L_1$ to $L_4$, $L_7$ to $L_{10}$, $L_{13}$ to $L_{16}$, $L_{19}$ to $L_{22}$ and $L_{37}$ to $L_{40}$ are finite RA loops. In [\cite{casofinito}, Section 4], it was shown that these loops belong to the families listed in lines 1, 2, 3, 4, 7, 8, 9 of Table 10, which are the seven families of finite indecomposable RA loops that exist. Let $L \in L_5 = M(\langle x,y,t_1,w \rangle, *, u^2)$ with $x^2=y^2=1$ and $u^2=w$. In this case, $ L = M(\langle x,u,t_1,w \rangle, *, y^2)$ is of type 5. Now suppose that $L$ is a loop of the type listed in $L_6$, i.e., $L=M(\langle x,y,t_1,w \rangle,*,u^2)$ with $x^2=y^2=1$, $u^2=t_1w$. Setting $t_1w = w'$, we have $\langle t_1 \rangle \times \langle w\rangle = \langle t_1\rangle \times \langle w' \rangle$, and in this way $L=M(\langle x,u,t_1,w'\rangle,*,y^2)$ is of type 5. If $L=M(\langle x,y,t_1,w \rangle,*,u^2) \in L_{11}$, with $x^2=y^2=t_1$ and $u^2=w$, we have that $L=M(\langle x,u,t_1,w \rangle,*,y^2)$ is of type 6. Suppose $L$ is a loop in $L_{12}$.
So, $L=M(\langle x,y,t_1,w \rangle,*,u^2)$ where $x^2=y^2=t_1$ and $u^2=t_1w$. Setting $t_1w=w'$ and changing the generator $w$ to $w'$, we have that $L=M(\langle x,u,t_1,w' \rangle,*,y^2)$ is of type 6. Next, if $L=M(\langle x,y,t_1,t_2,w \rangle,*,u^2) \in L_{17}$ where $x^2=1, y^2=t_2$ and $u^2=w$, then $ L=M(\langle y,u,t_1,t_2,w \rangle,*,x^2)$ is of type 10. If $L \in L_{18}$, then setting $w' = t_1w$ we have that $L= M(\langle y,u,t_1,t_2,w'\rangle,*,x^2)$ with $y^2 = t_2, u^2 = w', x^2 = 1$, which is a loop of type 10. Let $L=M(\langle x,y,t_1,t_2,w \rangle,*,u^2) \in L_{23}$, $x^2=t_1,y^2=t_2,u^2=w$. Then $L=M(\langle y,u,t_1,t_2,w \rangle,*,x^2)$, which is a loop of type 11. Suppose now that $L$ is a loop in $L_{24}$. Setting $t_1w=w'$ we have that $L=M(\langle y,u,t_1,t_2,w' \rangle,*,x^2)$ is a loop of type 11. Observe that the loops in $L_{25}$ are those of type 5. Now, if $L \in L_{26}$ then $L=M(\langle x,y,t_1,u_1 \rangle, *,u^2) = M(\langle y,u,t_1,u_1 \rangle,*, x^2) \in L_{31}$ and, as we will see below, the loops in $L_{31}$ are of type 5 or 6. If $L \in L_{27}$, then $L=M(\langle x,y,t_1,u_1,t \rangle,*,u^2) = M(\langle u,y,t_1,u_1,t \rangle,*,x^2)$ which is of type 10. Let $L$ be a loop in $L_{28}$. If $o(t_1)\leq o(t)$ then $\langle t_1 \rangle \times \langle u_1 \rangle \times \langle t \rangle = \langle t_1 \rangle \times \langle u_1 \rangle \times \langle t_1t \rangle$. Setting $t_1t = t'$, we have $L=M(\langle x,y,t_1,u_1,t' \rangle,*,u^2) \in L_{27}$ which, as shown above, is of type 10; if $o(t_1) > o(t)$ then $s = t_1^{2^{m_1-1}} = (t_1t)^{2^{m_1-1}}$ and, setting $t_1t = {t_1}'$, we have that $L=M(\langle x,y,{t_1}',u_1,t \rangle,*,u^2) = M(\langle x,y,{t_1}',u_1 \rangle,*,u^2) \times \langle t \rangle$, which is not indecomposable. If $L \in L_{29}$ then $L=M(\langle x,y,t_1,u_1,w \rangle,*,u^2) = M(\langle u,y,t_1,u_1,w \rangle,*,x^2)$ which is of type 14. Next, if $L \in L_{30}$, setting $t_1w = w'$, we see that $L=M(\langle u,y,t_1,u_1,w'\rangle,*,x^2)$ which is of type 14. Suppose $L \in L_{31}$. If $m_1 = 1$ then, changing $x$ to $xu$, we have that $(xu)^2 = {t_1}^2$ and, using Lemma \ref{expoente}, we can suppose that $x^2=1$. Hence $L=M(\langle x,y,t_1,u_1 \rangle ,*,u^2)$ with $x^2=1,y^2=u_1,u^2=1$, which implies that $L$ is of type 5. If $m_1 >1$ then, changing $u$ to $xu$, we have that $(xu)^2 = {t_1}^{\alpha}$ with $\alpha$ odd. So, using Lemma \ref{expoente}, we can suppose that $(xu)^2 = t_1$, which implies that $L$ is of type 6. Observe that the loops in $L_{32}$ are loops of type 6. If $L=M(\langle x,y,t_1,u_1,t \rangle,*,u^2) \in L_{33}$ then $L = M(\langle u,y,t_1,u_1,t \rangle,*,x^2)$ which is of type 11. Now, suppose that $L=M(\langle x,y,t_1,u_1,t \rangle,*,u^2) \in L_{34}$ where $x^2=t_1, y^2=u_1$ and $u^2=t_1t$. If $o(t)\geq o(t_1)$ then we change $t$ to $t'=t_1t$; in this case $L \in L_{33}$ which, as shown above, is of type 11. If $o(t) < o(t_1)$ then, changing $x$ to $xu$, we see that $(xu)^2=t$ and setting $t_1t = {t_1}'$ implies that $L=M(\langle x,y,t_1,u_1,t \rangle,*,u^2)$ with $x^2=t,y^2=u_1$ and $u^2 = {t_1}'$, which belongs to the family $\mathcal{L}_{11}$. If $L \in L_{35}$ then $L=M(\langle x,y,t_1,u_1,w \rangle,*,u^2) = M(\langle y,u,t_1,u_1,w \rangle,*,x^2)$ which is a loop of type 15. Suppose now that $L=M(\langle x,y,t_1,u_1,w \rangle,*,u^2) \in L_{36}$. Setting $t_1w = w'$ we have that $L = M(\langle x,y,t_1,u_1,w' \rangle, *, u^2)$ with $x^2=t_1,y^2=u_1$ and $u^2=w'$.
Hence $L_{35}$ are loops of type 15. If $L \in L_{41}$ then $L=M(\langle x,y,t_1,t_2,t_3,w \rangle,*,u^2) = M(\langle x,u,t_1,t_2,w,t_3 \rangle,*,y^2)$ which is of type 12. If $L \in L_{42}$ then changing the generator $w$ to $t_1w = w'$ and using the same argument as in $L_{41}$ we have that $L_{42}$ is still of type 12. Note that loops in $L_{43}, L_{44}$ and $L_{45}$ are, respectively, of the types 10,11 and 12. Let $L=M(\langle x,y,t_1,t_2,u_1,t \rangle,*,u^2) \in L_{46}$. If $o(t) \geq o(t_1)$ then we can change $t$ to $t_1t$ which implies that $L$ is of type 12. If $o(t) < o(t_1)$ then we can change $t_1$ to $t_1t$ obtaining $L=M(\langle x,y,t_1,t_2,u_1 \rangle,*,u^2) \times \langle t \rangle$ which is not indecomposable. Observe that those loops in $L_{47}$ are precisely those of type 13. If we change the generator $w$ to $ t_1w$ of the loops in $L_{48}$ we see that these loops are of type 13. Loops in $L_{49}$ and in $L_{50}$ are, respectively, loops of type 14 and 15. If $L \in L_{51} = M(\langle x,y, t_1, u_1,u_2, t \rangle, *, u^2)$ with $x^2 = u_1, y^2 = u_2$ and $u^2 = t$ then $L = M(\langle x,u,t_1,u_1,u_2,t \rangle, *, y^2)$ which is a loop of type 13. Let $L=M(\langle x,y,t_1,u_1,u_2,t \rangle, *,u^2) \in L_{52}$. If $o(t) \geq o(t_1)$ then we can change $t$ to $t_1t$ and $L= M(\langle u,y,t_1,t,u_1,u_2\rangle,*,x^2)$ which is of type 13. If $o(t) < o(t_1)$ then changing $t_1$ to $t_1t$ we have that $L=M(\langle x,y,t_1,u_1,u_2\rangle,*,u^2) \times \langle t \rangle$ which is not indecomposable. Note that loops in $L_{53}$ are of type 16. Finally, if we have a loop $L \in L_{54}$, setting $t_1w = w'$, we see that $L=M(\langle x,y,t_1,u_1,u_2,w' \rangle,*,u^2)$ is a loop of type 16. $ {\fimprova}$ We saw that, up to isomorphism, there are at most sixteen types of indecomposable finitely generated RA loops. Now we will show that these types of loops listed in the Table 10 are not isomorphic. \begin{theorem} The sixteen types of loops listed in the Theorem \ref{classification} are distinct: loops of different types are not isomorphic. \end{theorem} \emph{Proof:} Loops of the types 1,2,3,4, 7, 8 and 9 are those of finite R.A. loops. In \cite{casofinito}, the authors had proved that they are not isomorphic. Elementary considerations of the ranks of the centers, shows that we only have to prove that loops of type 4 are not isomorphic to loops of type 6, loops of type 10 are not isomorphic of those in type 11 and loops of type 14 are not isomorphic of those in type 15. Remember that in $\mathcal{L}_6$ we have that $m_1 =1$. Observe that in $\mathcal{L}_4, u$ is a non central element of order 2. We will show that do no exist such element in loops of $\mathcal{L}_6$. Suppose that $w = x^{a}y^{b}t_1^{c}u_1^{d}u^{e} \in L$ where $L$ is a loop of type 6, $ w \notin Z(L)=Z(G)$ and $w^2=1$. So, $w^2=x^{2a}y^{2b}u^{2e}s^{ae+be+ab}{t_1}^{2c}{u_1}^{2d} = {t_1}^{a+e+2c+ae+be+ab}{u_1}^{b+2d} = 1$. Hence $b=d=0$ and $a+e+ae$ is even which implies $a$ and $e$ are even and then $w \in Z(G)$, a contradiction. Now, we are going to do the same with $\mathcal{L}_{14}$ and $\mathcal{L}_{15}$. In $\mathcal{L}_{14}, u$ is a non central element of order 2. Let $w = x^{a}y^{b}{t_1}^{c}{u_1}^{d}{u_2}^{e}u^{f} \in L$ where $L$ is a loop of type 15, $w \notin Z(L)$ such that $w^2=1$. Using that $x^2=u_1, y^2=u_2$ and $u^2=t_1$ in $\mathcal{L}_{15}$ we have that $w^2 = {u_1}^{a+2d}{u_2}^{b+2e}{t_1}^{2^{m_1-1}(ab+af+bf)+f+2c}=1$. 
Hence $a=b=d=e=0$ and $f +2c$ is even, which implies $f$ even and then $w \in Z(G)$, a contradiction. Similar arguments shows that $\mathcal{L}_{10} \not\simeq \mathcal{L}_{11}$ and we really have sixteen non isomorphic families of indecomposable loops. $ {\fimprova}$ \end{document}
math
{\sf b}egin{document} \title{On the multi dimensional modal logic of substitutions} {\sf b}egin{myabstract} We prove completeness, interpolation, decidability and an omitting types theorem for certain multi dimensional modal logics where the states are not abstract entities but have an inner structure. The states will be sequences. Our approach is algebraic addressing (varieties generated by) complex algebras of Kripke semantics for such logic. Those algebras, whose elements are sets of states are common reducts of cylindric and polyadic algebras. \footnote{Mathematics Subject Classification. 03G15; 06E25 Key words: multimodal logic, substitution algebras, interpolation} \end{myabstract} \section{Introduction} We study certain propositional multi dimensional modal logics. These logics arise from the modal view of certain algebras that are cylindrifier free reducts of both cylindric algebras and polyadic algebras of dimension $n$, $n{\sf g}eq 2$. We can view the class of representable such algebras from the perspective of (multi dimensional) modal logic as complex algebras of certain relational structures. This approach was initiated by Venema, by viewing quantifiers which are the most prominent citizens of first order logic as modalities. Modal logic can be studied as fragments of first order logic. But one can turn the glass around and examine first order logic as if it were a modal formalism. The basic idea enabling this perspective is that we may view assignments the functions that give first order variables their value in a first order structure as states of a modal model, and this makes standard first order connectives behave just like modal diamonds and boxes. First order logic then forms an example of a multi dimensional modal system. Multi dimensional modal logic is a branch of modal logic dealing with special relational structures in which the states, rather than being abstract entities, have some inner structure. These states are tuples over some base set, the domain of the first order structure. S\'agi studied $n$ dimensional relational structures, referred to also as Kripke frames, of the form $(V, S_j^i)_{i,j\in n}$ where $V\subseteq {}^nU$ and for $x,y\in V$, $(x, y)\in S_j^i$, iff $x\circ [i/j]=y$. Here $[i/j]$ is the replacement that sends $i$ to $j$ and is the identity map otherwise. In this paper we study Kripke frames of the form $(V, S_{[i,j]})$ where where $[i,j]$ is the transposition that swaps $i$ and $j$, $V\subseteq {}^nU$ and for $x,y\in V$, $(x, y)\in { S}_{[i,j]}$, iff $x\circ [i,j]=y$. By dealing with such cases, the case when we have both kinds of substitutions corresponding to replacements and transpositions cannot help but come to mind. We shall show that this case, is also very interesting, and gives a sort of unifying framework for our investigations. A Kripke frame is called square if $V={}^nU$ for some set $U$. The complex algebras of such frames are called full set algebras. Subdirect products of such set algebras is called the class of representable algebras of dimension $n$. S\'agi, answering a question posed by Andr\'eka, proves that in the case of replacements, the class of representable algebras is a finitely axiomatizable quasi variety, that is {\it not} a variety, and that the variety generated by this class (closing it under homomorphic images) is obtained by relativizing units to so-called diagonalizable sets; $V$ is diagonalizable if whenever $s\in V$, then $s\circ [i/j]\in V$. 
He also provides finite axiomatizations for both the quasivariety (by quasi-equations) and the generated variety (by equations). This provides a strong completeness theorem for such modal logics: There is a fairly simple finite Hilbert style proof system, that captures all the valid multimodal formulas with the above Kripke semantics, when the assignments are only restricted to diagonalizable sets. In other words, this modal logic is complete with respect to diagonalizable frames, but not relative to square frames. The idea of finding such an axiomatization is not too hard; it basically involves translating finite presentations of the semigroup generated by replacements to equations, and further stipulating that the substitution operators on algebra are Boolean endomorphisms. We show that in the case when we have, transpositions and replacements, the subdirect products of complex algebras of square frames, is a finitely axiomatizable variety, while in the case when only transpositions are available, this class is only a quasivariety, that is not a variety. In all cases we obtain axiomatizations for the varieties in question (closing our quasivariety under homomorphic images in case of transpositions only), by translating finite presentations of the semigroup $^nn$, and the symmetric group $S_n.$ We can also view such logics as a natural fragment of $L_n$ the first order logic restricted to $n$ variables, that is strong enough to describe certain combinatorial properties. Here we have finitely many variables, namely exactly $n$ variables, atomic formulas are of the form $R(x_0,\ldots x_{n-1})$ with the variables occurring in their natural order. The substitution operators are now viewed as unary connectives. In this case, it is more appropriate to deal with square frames, representing ordinary models, and substitutions have their usual meaning of substituting variables for free variables such that the substitution is free. From the perspective of algebraic logic, we are dealing with a non-trivial instance of the so called representation problem, which roughly says the following. Given a class $K$ of concrete algebras, like set algebras, can we find a simple, hopefully finite axiomatization by equations of $V(K)$, the variety generated by $K$. The representation problem for Boolean algebras is completely settled by Stones theorem; every abstract Boolaen algebra is isomorphic to a set algebra. The representation problem, for cylindric-like algebras is more subtle; indeed not every cylindric algebra is representable, and any axiomatization of the variety generated by the set algebras of dimension $>2$ is highly complex. A recent negative extremely strong result, proved by Hodkinson for cylindric algebras, is that the it is undecidable whether a finite cylindric algebra is representable. The representation problem have provoked extensive research, and, indeed, it is still an active part of algebraic logic. In this paper we obtain a clean cut solution to the representation problem for several natural classes of set algebras, whose elements are sets of sets endowed with concrete set theoretic operations, those of substitutions, and possibly diagonal elements. We also discuss the infinite version of such logics. Algebraically we deal with infinite dimensional algebras; and from the modal point of view we deal with infinitely many modalities. 
In case of the presence of both transpositions and replacements, we show that like the finite dimensional case, subdirect product of full set algebras forms a variety. The dual facet of our logics (as multi dimensional modal logics and fragments of first order logic) enables us to go further in the analysis and give two proofs that such logics have the Craig interpolation property and enjoy other definability properties like Beth definability and Robinson's joint consistency theorem. This works for all dimensions. Our results readily follows from the fact that, in all cases, the variety generated by the set algebras has the superamalgamation property. Our first proof, inspired by the 'first order view' is a Henkin construction. In this case, we show that the free algebras have the interpolation property. The second proof inspired by the 'modal view' uses known correspondence theorems between Kripke frames and complex algebras, and closure properties that Kripke frames should satisfy. In the case of transpositions, for finite as well as for infinite dimensions, we prove an omitting types theorem for countable languages using the Baire Category theorem. We also show in this case that atomic algebras posses complete representations, i.e representations preserving infinitary meets carrying them to set-theoretic intersection. Indeed, there are various types of representations in algebraic logic. Ordinary representations are just isomorphisms from Boolean algebras with operators to a more concrete structure (having the same similarity type) whose elements are sets endowed with set-theoretic operations like intersection and complementation. Complete representations, on the other hand, are representations that preserve arbitrary conjunctions whenever defined. The notion of complete representations has turned out to be very interesting for cylindric algebras, where it is proved by Hirsch and Hodkinson that the class of completely representable algebras is not elementary. The correlation of atomicity to complete representations has caused a lot of confusion in the past. It was mistakenly thought for a long time, among algebraic logicians, that atomic representable relation and cylindric algebras are completely representable, an error attributed to Lyndon and now referred to as Lyndon's error. Here we show, that atomic algebras {\it are} completely representable. Another property that is important in algebraic logic generally, and particulary in cylindric and polyadic algebras is the question whether the Boolean reducts of the free algebras are atomic or not. N\'emeti has shown that for cylindric algebras, this corresponds to a form of Godel's incompleteness theorem. More precisely, N\'emeti showed that finite variable fragments of first order logic enjoy a Godel's incompleteness theorem; there are formulas that cannot be extended to complete recursive theories; in the formula (free) algebras such formulas cannot be atoms, further they cannot be isolated from the bottom element by atoms. In short the free algebras are not atomic. Here we show, in sharp contrast to the cylindric case, that finitely generated algebras are finite, hence atomic. In particular, our varieties are locally finite. 
We learn from our results that subdirect products of full transposition set algebras, is only a quasivariety (it is not closed under taking homomorphic images, theorem {\sf r}ef{not}), but the algebraic closure of this quasivariety share a lot of properties with Boolean algebras; it is finitely axiomatizable, theorem {\sf r}ef{rep}, it is locally finite, theorem {\sf r}ef{locallyfinite}, atomic algebras are completely representable, theorem {\sf r}ef{com}, and it has the superamalgamation property, theorem {\sf r}ef{SUPAP}. (this occurs in all dimensions). When we have both transpositions and replacements, our quasivariety turns out to be a variety, theorems {\sf r}ef{variety}, {\sf r}ef{v2}, but we lose complete representability of atomic algebras, or rather we do not know whether atomic algebras are completely representable. In this case, we could only show that the canonical extension of any algebra is always completely representable, and that the minimal completion of a completely representable algebra, stays completely representable, theorems {\sf r}ef{canonical}, {\sf r}ef{completion}. Superamalgamation and local finiteness hold, theorems {\sf r}ef{SUPAP}, {\sf r}ef{locallyfinite} in this case as well. An oddity that occurs here, is that we could provide a finite schema axiomatizing such varieties, endowed with diagonal elements in the infinite dimensional case, theorem {\sf r}ef{infinite}, but we did not succeed to provide a finite axiomatization for finite dimensions (with diagonal elements). Finally, we prove that the validity problem for our logics in all dimensions is decidable. \subsection*{Summary} In section 2 we fix our notation and recall the basic concepts. In section 3 we study transposition algebras obtaining a finite axiomatization of the quasi-variety (that is shown not to be closed under taking homomorphic images) of representable algebras and proving strong representability results. In section 3 we deal with algebras when we have transpositions and replacements together. In this case we show that the class of subdirect products of algebras (in all dimensions) is a variety and we provide a simple equational axiomatization this of variety. In section 4 we deal with free algebras, and we formulate and prove some general theorems adressing free algebras of classes of Boolean algebras with operators. In section 5, we also formulate and prove general theorems on the amalgamation property, and we prove that the varieties (generated by the quasi-varieties in case of replacements only and transpositions only) have the superamalgamation theorem. Logical counterpart of our algebraic theorems are carefully formulated in the final section. \section{Notation and Preliminaries} Our system of notation is mostly standard, or self-explainatory, but the following list may be useful. Throughout, for every natural number $n$ we follow Von-Neuman's convention; $n=\{0,\ldots,n-1\}.$ Let $A$ and $B$ be sets. Then ${}^AB$ denotes the set of functions whose domain is $A$ and whose range is a subset of $B.$ In addition, $A^*$ denotes the set of finite sequences over $A$ and $\mathcal{P}(A)$ denotes the power set of $A$, that is, the set of all subsets of $A$. For a cardinal $\kappa$, we let $A\subseteq_\kappa B$ mean that $A$ is a subset of $B$ with cardinality $|A|=\kappa.$ If $f:A\longrightarrow B$ is a function and $X\subseteq A,$ then $f\upharpoonright X$ is the restriction of $f$ to $X$. 
If $K$ is a class of algebras, then $\mathbf{H}K,\;\mathbf{P}K,\;\mathbf{S}K,\;\mathbf{Up}K$ denote the classes of homomorphic images, (isomorphic copies of) direct products, (isomorphic copies of) subalgebras, (isomorphic copies of) ultraproducts of members of $K$, respectively. If $\Sigma$ is a set of formulas, then $\mathbf{Mod}(\Sigma)$ denotes the class of all models of $\Sigma.$ By a \emph{quasi-equation} we mean a universal formula of the form $e_0{\sf w}edge\ldots{\sf w}edge e_ncylindric algebral{R}ightarrow f,$ where $e_0,\ldots, e_n,$ and $f$ are equations. Let $K$ be a class of algebras. Then, $K$ is a \emph{variety} iff $K$ is axiomatizable by equations iff $\mathbf{HSP}K=K,$ and $K$ is a \emph{quasi-variety} iff $K$ is axiomatizable by quasi-equations iff $\mathbf{SPUp}K=K$. We distinguish notationally between an algebra and its universe. Algebras are denoted by Gothic letters, and when we write, for example ${\mathfrak A}$, for an algebra, we will be tacitly assuming that the corresponding Roman letter $A$ is its universe. The word \emph{BAO} will abbreviate \emph{Boolean algebra with operators} (an algebra that has a Boolean reduct and every non-Boolean operation on it is additive in each of its arguments). For $K\subseteq BAO$ and a set $X$ and ${\mathfrak{Fr}}_XK$ denotes the algebra freely generated by $X$. We sometimes identify ${\mathfrak{Fr}}_XK$ with ${\mathfrak{Fr}}_{|X|}K$, and we say that ${\mathfrak{Fr}}_XK$ is the $K$ free algebra on the set of generators $X$, or simply on $X$. We do not assume that ${\mathfrak{Fr}}_XK$, belongs to $K$, when $K$ is not a variety. If $K$ is a class of algebras, then $Fin(K)$ is the class of finite algebras in $K$ and $V(K)$ denotes ${{\sf b}f HSP}(K)$, that is, the variety generated by $K$. If ${\mathfrak A}$ is a $BAO$, and $X\subseteq {\mathfrak A}$, then ${\mathfrak{B}}l{\mathfrak A}$, denotes its Boolean reduct, ${\sf At}{\mathfrak A}$ denotes the set of atoms of ${\mathfrak{B}}l{\mathfrak A}$, ${\mathfrak Sg}^{{\mathfrak A}}X$, and ${\mathfrak{Ig}}^{{\mathfrak A}}X$ denote the subalgebra, ideal, generated by $X$. Every class $K$ of $BAO$'s correspond to a multi dimensional modal logic. For an algebra ${\mathfrak A}$, ${\mathfrak A}_+$ denotes its ultrafilter atom structure, and ${\mathfrak A}^+$ denotes ${\mathfrak{C}}m{\mathfrak A}_{+}$ the canonical extension of ${\mathfrak A}$. If ${\mathfrak A}$ is atomic then ${\sf At}{\mathfrak A}$ denotes its atom structure, and ${\mathfrak{C}}m{\sf At}{\mathfrak A}$ is its minimal completion. The next theorem is folklore: {\sf b}egin{theorem} Suppose ${\mathfrak A}$ is a BAO and for every $0^{\mathfrak A}\neq a\in {\mathfrak A}$ there is a homomorphism $h_a:{\mathfrak A}\longrightarrow\mathcal{B}_a$ such that $h_a(a)\neq 0^{\mathcal{B}_a}.$ Then ${\mathfrak A}$ is embeddable in the direct product $\prod_{0^{\mathfrak A}\neq a\in A}\mathcal{B}_a,$ or equivalently, ${\mathfrak A}\in\mathbf{SP}\langle\mathcal{B}_a, 0^{\mathfrak A}\neq a\in A{\sf r}angle.$ \end{theorem} {\sf b}egin{proof} Define $g:{\mathfrak A}\to \prod_{a\in A}{\mathfrak{B}}_a$ by $g(x)=(h_a(x): a\in A)$. Then $g$ is an embedding. \end{proof} \section{Transposition Algebras} In this section our work closely resembles that of S\'agi's, but we obtain stronger representability results. We refer to the algebras studied by S\'agi as replacement algebras. 
We not only prove the representability of abstract algebra defined via a finite set of equations, but we further show that such representations can be chosen to respect infinitary meets and joins. In particular, we show that atomic algebras are completely representable. We also show that the subdirect products of full set algebras, those algebras whose units are squares is not a variety. Our treatment of the infinite dimensional case is also different than S\'agi's for replacement algebras; we alter the units of representable algebras, dealing with a different generating class (in this case we show that it suffices to take only one set algebra, that is not the free algebra on $\omega$ generators, as a generating set). {\sf b}egin{definition}[Transposition Set Algebras] Let $U$ be a set. \emph{The full transposition set algebra of dimension} $\alpha$ \emph{with base} $U$ is the algebra $$\langle\mathcal{P}({}^\alpha U); cylindric algebrap,-,S_{ij}{\sf r}angle_{i\neq j\in\alpha};$$ where $S_{ij}$'s are unary operations defined by $$S_{ij}(X)=\{q\in {}^\alpha U:q\circ [i,j]\in X\}.$$ Recall that $[i,j]$ denotes that transposition of $\alpha$ that permutes $i,j$ and leaves any other element fixed. The class of Transposition Set Algebras of dimension $\alpha$ is defined as follows: $$SetTA_\alpha=\mathbf{S}\{{\mathfrak A}:{\mathfrak A}\text{ is a full transposition set algebra of dimension }\alpha $$$$ \text{ with base }U,\text{ for some set }U\}.$$ \end{definition} {\sf b}egin{definition}[Representable Transposition Set Algebras] The class of Transposition Set Algebras of dimension $\alpha$ is defined to be $$RTA_\alpha=\mathbf{SP}SetTA_\alpha.$$ \end{definition} {\sf b}egin{definition}[Permutable Set] Let $U$ be a given set, and let $D\subseteq{}^\alpha U.$ We say that $D$ is permutable iff $$(\forall i\neq j\in\alpha)(\forall s\in{}^\alpha U)(s\in D{\mathfrak{L}}ongrightarrow s\circ [i,j]\in D).$$ \end{definition} {\sf b}egin{definition}[Permutable Algebras] The class of \emph{Permutable Algebras} of dimension $\alpha$, $\alpha$ an ordinal, is defined to be $$PTA_n=\mathbf{SP}\{\langle\mathcal{P}(D); cylindric algebrap,-,S_{ij}{\sf r}angle_{i\neq j\in \alpha}: U\text{ \emph{is a set}}, D\subset{}^{\alpha} U\text{\emph{permutable}}\}.$$ Here $S_{ij}(X)=\{q\in D:q\circ [i,j]\in X\}$, and $-$ is complement w.r.t. $D$.\\ If $D$ is a permutable set then the algebra ${\sf w}p(D)$ is defined to be $${\sf w}p(D)=\langle\mathcal{P}(D);cylindric algebrap,-,S_{ij}{\sf r}angle_{i\neq j\in\alpha}$$ So ${\sf w}p(D)\in PTA_\alpha.$ \end{definition} Note that ${\sf w}p(^{\alpha}U)$ can be viewed as the complex algebra of the atom structure $(^{\alpha}U, S_{ij})_{i,j\in \alpha}$ where for all $i,j, S_{ij}$ is a binary relation, such that for $s,t\in {}^{\alpha}U$, $(s,t)\in S_{ij}$ iff $s\circ [i,j]=t.$ When we consider permutable sets then from the modal point of view we are restricting the states or assignments to $D$. This process is also referred to as relativization. for some time to come we restrict ourselves to finite $\alpha$, which we denote by $n$. {\sf b}egin{theorem} Let $U$ be a set and suppose $G\subseteq{}^n U$ is permutable. Let ${\mathfrak A}=\langle\mathcal{P}({}^n U);cylindric algebrap,-,S_{ij}{\sf r}angle_{i\neq j\in n}$ and let $\mathcal{B}=\langle\mathcal{P}(G);cylindric algebrap,-,S_{ij}{\sf r}angle_{i\neq j\in n}$. Then the following $h:{\mathfrak A}\longrightarrow\mathcal{B}$ defined by $h(x)=xcylindric algebrap G$ is a homomorphism. 
\end{theorem} {\sf b}egin{proof} It is easy to see that $h$ preserves $cylindric algebrap,-$ so it remains to show that the $S_{ij}$'s are also preserved. To do this let $i\neq j\in n$ and $x\in A.$ Now{\sf b}egin{align*} h({S_{ij}}^{\mathfrak A} x)= {S_{ij}}^{\mathfrak A} x cylindric algebrap G &=\{q\in{}^n U:q\circ [i,j]\in x\}cylindric algebrap G\\ &=\{q\in G:q\circ [i,j]\in x\}\\ &=\{q\in G:q\circ [i,j]\in xcylindric algebrap G\}\\ &=\{q\in G:q\circ [i,j]\in h(x) \}\\ &={S_{ij}}^\mathcal{B} h(x) \end{align*} \end{proof} The function $h$ will be called \emph{relativization by} $G$. Now we distinguish certain elements of $SetTA_n$ ($n>1$) which play an important role. {\sf b}egin{definition}[Small algebras] For any natural number $k\leq n$ the algebra ${\mathfrak A}_{nk}$ is defined to be $${\mathfrak A}_{nk}=\langle\mathcal{P}({}^nk);cylindric algebrap,-,S_{ij}{\sf r}angle_{i\neq j\in n}.$$ So ${\mathfrak A}_{nk}\in SetTA_n$. \end{definition} {\sf b}egin{theorem} $RTA_n=\mathbf{SP}\{{\mathfrak A}_{nk}:k\leq n\}.$ \end{theorem} {\sf b}egin{proof} The proof is exactly like that of Theorem 4.9 in\cite{sagiphd}. Clearly, $\{{\mathfrak A}_{nk}:k\leq n\}\subseteq RTA_n,$ and since, by definition, $RTA_n$ is closed under the formation of subalgebras and direct products, $RTA_n\supseteq\mathbf{SP}\{{\mathfrak A}_{nk}:k\leq n\}.$ To prove the other inclusion, it is enough to show $SetTA_n\subseteq \mathbf{SP}\{{\mathfrak A}_{nk}:k\leq n\}.$ Let ${\mathfrak A}\in SetTA_n$ and suppose that $U$ is the base of ${\mathfrak A}.$ If $U$ is empty, then ${\mathfrak A}$ has one element, and one can easily show ${\mathfrak A}\cong{\mathfrak A}_{n0}.$ Otherwise for every $0^{\mathfrak A}\neq a\in A$ we can construct a homomorphism $h_a$ such that $h_a(a)\neq 0$ as follows. If $a\neq 0^{\mathfrak A}$ then there is a sequence $q\in a.$ Let $U_0^a=range(q)$. Clearly, $^nU_0^a$ is permutable, therefore by Theorem 3.5 relativizing by $^nU_0^a$ is a homomorphism to ${\mathfrak A}_{nk_a}$ (where $k_a:=|range(q)|\leq n$). Let $h_a$ be this homomorphism. Since $q\in ^nU_0^a$ we have $h_a(a)\neq0^{{\mathfrak A}_{nk_a}}.$ Applying Theorem 2.1 one concludes that ${\mathfrak A}\in\mathbf{SP}\{{\mathfrak A}_{nk}:k\leq n\}$ as desired. \end{proof} \subsection{Axiomatizing the Variety Generated by $RTA_n$} We know that the variety generated by $RTA_n$ is finitely axiomatizable since it is generated by finitely many finite algebras, and because, having a Boolean reduct, it is congruence distributive. This follows from a famous theorem by Baker. Throughout, given a set $U$, we let $S_U$ denote the set of permutations on $U$. In this section we will provide a finite set of equations axiomatizing $\mathbf{HSP}RTA_n.$ To do this, we first note that the set of transpositions $\{[i,j]:i\neq j\in n\}$, with composition of maps, generates the group $S_n$ of permutations of $n$ To obtain our desired axiomatization we need to recall some concepts from the presentation theory of groups. In particular, we need a concrete presentation of the Symmetric Group $S_n$. Let $\sigma_i=[i-1,i]$, then $S_n$ is generated by $\sigma_1,\ldots,\sigma_{n-1}$ governed by the following relations: {\sf b}egin{enumerate} \item $\sigma_i^2=1$ \item $\sigma_i\sigma_j=\sigma_j\sigma_i$ for $i\neq j\pm1$ \item $\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}$ \end{enumerate} Throughout, $s$ will denote the operation symbol corresponding to $S$ in the language level. 
{\sf b}egin{definition}[The axiomatization]\label{ax} For all natural numbers $n\in\omega,\;n>1,$ let $\Sigma_n$ be the following finite set of equations. {\sf b}egin{enumerate} \item The usual Boolean algebra axioms. \item $s_{ij}(x{\sf w}edge y)=s_{ij}(x){\sf w}edge s_{ij}(y)$. \item $s_{ij}(-x)=-s_{ij}(x)$. \item $s_{ij}s_{ij}x=x.$ \item $s_{i\; i+1}s_{j\;j+1}x=s_{j\;j+1}s_{i\;i+1}x.$ \item $s_{i\;i+1}s_{j\;j+1}s_{i\;i+1}x=s_{j\;j+1}s_{i\;i+1}s_{j\;j+1}x$ \end{enumerate} \end{definition} Let $TA_n$ be the abstractly defined class $\mathbf{Mod}(\Sigma_n)$. Our goal in this section is to show that $TA_n=\mathbf{HSP}RTA_n$. {\sf b}egin{definition} Given a set $U$, let $T(U)=\{s_{ij}:i\neq j\in U\}$ and let $\hat{}:T(U)^*\longrightarrow S_U$ which maps each $q=(s_{i_1j_1}\ldots s_{i_kj_k})\in T(U)^*$ to $$(s_{i_1j_1}\ldots s_{i_kj_k})^{\hat{}}= [i_1,j_1]\circ\ldots\circ[i_k,j_k]\in S_U.$$ When $q$ is the empty word, $q\hat{}=Id_U.$ \end{definition} {\sf b}egin{theorem} For all $n\in\omega$ the set of (all instances of the) axiom-schemas 4,5,6 is a presentation of the group $S_n$ via generators $T(n)$. That is, for all $t_1,t_2\in T(n)^*$ we have $$axioms\;4,5,6\vdash t_1=t_2\text{ iff }t_1^{\hat{}}=t_2^{\hat{}}.$$ Here $\vdash$ denotes derivability using Birkhoff's calculus for equational logic. \end{theorem} {\sf b}egin{proof} This is clear because $\Sigma_n$ corresponds exactly to the set of relations governing the generators of $S_n$. \end{proof} {\sf b}egin{definition}For every $\xi\in S_n$ we associate a sequence $s_\xi\in T(n)^*$ such that $s_\xi^{\hat{}}=\xi.$ Such an $s_\xi$ exists, since the transpositions of $n$ generate $S_n.$ In other words, in any model, the term $s_\xi (x)$ has the same interpretation as the term $s_{i_1j_1}(\ldots (s_{i_kj_k}(x)))$ if $\xi=[i_1,j_1]\circ\ldots\circ[i_k,j_k].$ In our algebras (${\mathfrak A}=\langle \mathcal{P}(V),cylindric algebrap,-,S_{ij}{\sf r}angle_{i,j\in\alpha}$ with permutable $V$), $s_\xi$ corresponds to the unary operator $S_\xi$ defined as follows. For $X\subset V,$ $$S^{\mathfrak A}_\xi(X)=\{p\in V:p\circ \xi\in X\}.$$ \end{definition} Now we turn to prove that $ PTA_n\subseteq \mathbf{HSP}RTA_n\subseteq TA_n\subseteq PTA_n$ which achieves our goal. {\sf b}egin{theorem} $PTA_n\subseteq\mathbf{HSP}RTA_n$ \end{theorem} {\sf b}egin{proof} It is enough to show that ${\mathfrak A}=\langle\mathcal{P}(G);cylindric algebrap,-,S_{ij}{\sf r}angle_{i\neq j\in n}\in \mathbf{HSP}RTA_n$ whenever $G$ permutable. Let $U={\sf b}igcup_{q\in G}range(q)$ and let $$\mathcal{B}=\langle\mathcal{P}({}^nU);cylindric algebrap,-,S_{ij}{\sf r}angle_{i\neq j\in n}.$$ Clearly, $\mathcal{B}\in SetTA_n$ and so ${\mathfrak A}$ is a homomorphic image of $\mathcal{B}$. Thus, ${\mathfrak A}\in\mathbf{H}SetTA_n\subseteq \mathbf{HSP}RTA_n$ \end{proof} {\sf b}egin{theorem} $\mathbf{HSP}RTA_n\subseteq TA_n,$ or equivalently $RTA_n\models\Sigma_n.$ \end{theorem} {\sf b}egin{proof}It is enough to prove that $SetTA_n\models\Sigma_n$ which is a routine computation. \end{proof} The proof of the following lemma is completely analogous to that of Theorem 4.17 in \cite{sagiphd}, so it is omitted. {\sf b}egin{lemma} Let ${\mathfrak A}$ be an $RTA_n$ type $BAO$. 
Suppose $G\subset {}^nn$ is a permutable set, and $\langle\mathcal{F}_\xi:\xi\in G{\sf r}angle$ is a system of ultrafilters of ${\mathfrak A}$ such that for all $\xi\in G,\;i\neq j\in n$ and $a\in{\mathfrak A}$ the following condition holds: $${S_{ij}}^{\mathfrak A}(a)\in\mathcal{F}_\xi{\mathfrak{L}}eftrightarrow a\in \mathcal{F}_{\xi\circ[i,j]}\quad\quad (*).$$ Then the following function $h:{\mathfrak A}\longrightarrow{\sf w}p(G)$ is a homomorphism$$h(a)=\{\xi\in G:a\in \mathcal{F}_\xi\}.$$ \end{lemma} {\sf b}egin{theorem}\label{rep} $TA_n=\mathbf{Mod}(\Sigma_n)\subseteq PTA_n.$ \end{theorem} {\sf b}egin{proof} Let ${\mathfrak A}\in TA_n$ and let $0^{\mathfrak A}\neq a\in A$. We construct a homomorphism $h$ $$h:{\mathfrak A}\longrightarrow{\sf w}p(S_n)$$ such that $h(a)\neq 0^{{\sf w}p(S_n)}.$ Let us choose an ultrafilter $\mathcal{F}\subseteq A$ containing $a$. Let $h:{\mathfrak A}\longrightarrow {\sf w}p(S_n)$ be the following function $h(z)=\{\xi\in S_n:S_{\xi}^{\mathfrak A}(z)\in\mathcal{F}\}.$ Now, if $\mathcal{F}_\xi=\{z\in A: S_\xi(z)\in\mathcal{F}\},$ then $\langle\mathcal{F}_\xi:\xi\in S_n{\sf r}angle$ is a system of ultrafilters of ${\mathfrak A}$ satisfying $(*)$. To see this, let $i\neq j\in n,\;z\in A$ and $\xi\in S_n.$ Suppose $S_{ij}^{\mathfrak A}(z)\in\mathcal{F}_\xi.$ This implies $$S_\xi^{\mathfrak A} S_{ij}^{\mathfrak A}(z)\in \mathcal{F}\quad\quad(**).$$ Observe that $(s_\xi s_{ij})^{\hat{}}=\xi\circ[i,j].$ Therefore, by Theorem 2.11, $$Axioms\;4,5,6\vdash s_\xi s_{ij}(x)=s_{\xi\circ[i,j]}(x)\quad (***)$$So by $(**)$ we have $S^{\mathfrak A}_{\xi\circ[i,j]}(z)\in\mathcal{F}.$ But then $z\in\mathcal{F}_{\xi\circ[i,j]}.$\\ Conversely, if $z\in\mathcal{F}_{\xi\circ[i,j]}$ then $S^{\mathfrak A}_{\xi\circ[i,j]}(z)\in\mathcal{F}$ so by $(***)$ $S_\xi^{\mathfrak A} S_{ij}^{\mathfrak A}(z)\in \mathcal{F}$ hence $S_{ij}^{\mathfrak A}(z)\in\mathcal{F}_{\xi}.$ Now, by previous lemma, $h$ is the desired homomorphism. \end{proof} {\sf b}egin{theorem}\label{not} For $n{\sf g}eq 2$, $RTA_n$ is not a variety. \end{theorem} {\sf b}egin{proof} Let us denote $\sigma$ the quasi-equation $$s_f(x)=-x\longrightarrow 0=1,$$ where $f$ is a permutation that we will define shortly. It is easy to see that for all $k\leq n,$ the small algebra ${\mathfrak A}_{nk}$ ( or more generally, any set algebra with square unit) models $\sigma.$ This can be seen using a constant map in $^nk.$ More precisely, let $q\in {}^nk$ be an arbitrary constant map and let $X$ be any subset of $^nk.$ We have two cases for $q$ which are $q\in X$ or $q\in -X$. In either case, noticing that $q\in X{\mathfrak{L}}eftrightarrow q\in S_f(X),$ it cannot be the case that $S_f(X)=-X.$ Thus, the implication $\sigma$ holds in ${\mathfrak A}_{nk}.$ It follows then, from Theorem 3.7, that $RTA_n\models\sigma$ (because the operators $\mathbf{S}$ and $\mathbf{P}$ preserve quasi-equations). Now we are going to show that there is some element ${\mathfrak{B}}\in PTA_n$ such that ${\mathfrak{B}}\nvDash\sigma.$ Let $G\subseteq {}^nn$ be the following permutable set $$G=\{s\in {}^n2:|\{i:s(i)=0\}|=1\}.$$ Let ${\mathfrak{B}}={\sf w}p(G)$, then ${\sf w}p(G)\in PTA_n.$ Let $f$ be the permutation defined as follows \[ f = {\sf b}egin{cases} [0,1]\circ[2,3]\circ\ldots\circ[n-2,n-1] & \text{if $n$ is even}, \\ [0,1]\circ[2,3]\circ\ldots\circ[n-3,n-2] & \text{if $n$ is odd} \end{cases} \] Notice that $f$ is the composition of disjoint transpositions. 
Let $X$ be the following subset of $G,$ $$X=\{e_i:i\mbox{ is odd, }i<n\},$$ where $e_i$ denotes the map that maps every element to $1$ except that the $i$th elementis mapped to $0$. It is easy to see that, for all odd $i<n,$ $e_i\circ f=e_{i-1}.$ This clearly implies that $$S_f^{\mathfrak{B}}(X)=-X=\{e_i:i\mbox{ is even, }i<n\}.$$ Since $0^{\mathfrak{B}}\neq 1^{\mathfrak{B}},$ $X$ falsifies $\sigma$ in ${\mathfrak{B}}.$ Since ${\mathfrak{B}}\in {{\sf b}f H}{{\sf w}p(^nn)}$ we are done. \end{proof} \subsection{Complete representability} In this section, we consider the following question. If ${\mathfrak A}$ is a transposition algebra, is there a representation of ${\mathfrak A}$ that preserves a given set of infinite meets and joins? Is there one, perhaps, which represents all existing meets and joins? First we investigate the case when homomorphisms respect only a given set of meets, not necessarily all. For a substitution algebra ${\mathfrak A}$, and $a\in A$, $N_a$ denotes the set of all Boolean ultrafilters of ${\mathfrak A}$ containing $a$. Recall that $\{N_a:a\in A\}$ is a clopen base for the Stone topology whose underlying set consists of all ultrafilters of ${\mathfrak A}$. In what follows $\prod$ and $\sum$ denote infimum and supremum, respectively Throughout this section $n$ will be a finite ordinal $>1$. We start by a crucial easy lemma. Recall that $TA_n=Mod(\Sigma_n)$. {\sf b}egin{lemma} Let ${\mathfrak A}\in TA_n$ and let $i,j\in n$, then $s_{[i,j]}$ is a complete Boolean endomorphism \end{lemma} {\sf b}egin{proof} Let $X$ be a subset of $A$. Since $s_{[i,j]}$ is a Boolean endomorphism, and $\prod X\leq x$ for all $x\in X,$ so $s_{[i,j]}\prod X\leq s_{[i,j]}x$ for all $x\in X$. Therefore, $s_{[i,j]}\prod X\leq \prod s_{[i,j]}X.$ Conversely, $$s_{[i,j]}\prod X\leq \prod s_{[i,j]}X$$ implies that $$\prod X\leq s_{[i,j]}\prod s_{[i,j]}X.$$ But from what we have already done, $$s_{[i,j]}\prod s_{[i,j]}X\leq\prod s_{[i,j]}s_{[i,j]}X= \prod X.$$ Thus, $$\prod X= s_{[i,j]}\prod s_{[i,j]}X,$$ which implies that $$s_{[i,j]}\prod X=\prod s_{[i,j]}X.$$ \end{proof} {\sf b}egin{theorem}\label{OTT} Let ${\mathfrak A}\in TA_n$ be countable, let $a\in A$ non-zero, and let $X\subseteq A$ be such that such that $\prod X=0$. Then there exists permutable $V$ and a representation $h:{\mathfrak A}\to {\sf w}p(V)$ such that ${\sf b}igcap_{x\in X}h(x)=\emptyset$ and $h(a)\neq\phi$. \end{theorem} {\sf b}egin{demo}{Proof} Each $\eta\in S_n$ is a composition of transpositions, so that $s_{\eta}$, a composition of complete endomorphisms, is itself complete. Therefore $\prod s_{\eta}X=0$ for all $\eta\in S_n$. Then for all $\eta\in S_n$, $B_{\eta}={\sf b}igcap_{x\in X} N_{s_{\eta}}x$ is nowhere dense in the Stone topology and $B={\sf b}igcup_{\eta\in S_n} B_{\eta}$ is of the first category (In fact, $B$ is also nowhere dense). Let $F$ be an ultrafilter that contains $a$ and is outside $B$ which exists by the Baire category theorem, since the complement of $B$ is dense. Then for all $\eta\in S_n$, there exists $x\in X$ such that ${s}_{\tau}x\notin F$. Let $h:{\mathfrak A}\to {\sf w}p(S_n)$ be the usual representation function; $h(x)=\{\eta\in S_n: { s}_{\eta}x\in F\}$. Then clearly ${\sf b}igcap_{x\in X} h(x)=\emptyset.$ \end{demo} We refer to $X$ as a non-principal types, and to $V$ as a model omitting $X$. The proof can be easily generalized to countably many non-principal types. We now ask for the preservation of possibly uncountably many meets. 
We will see that we are actually touching upon somewhat deep issues in set theory here. We let $MA$ denote Martin's axiom. But first a piece of notation: For a cardinal $\kappa$, $OTT(\kappa)$ stands for the above statement when we have $\kappa$ many meets. {\sf b}egin{theorem}\label{t} {\sf b}egin{enumerate} \item The statement $``(\forall \kappa<{2}^{\omega})$$OTT(\kappa)"$ is provable in $ZFC$ +$MA$. \item The statement $``(\forall k< 2^{\omega})(OTT(\kappa))"$ is independent of $ZFC+\neg CH$. \end{enumerate} \end{theorem} {\sf b}egin{demo}{Proof} {\sf b}egin{enumerate}\item Martin's axiom implies that, in the Stone space, the union of $\kappa$ many $(\kappa<{2}^{\omega})$ nowhere dense sets is actually a countable union and so the Baire category theorem readily applies. \item We have proved consistency since $MA$ implies the required statement. We now prove independence. Let $covK$ be the least cardinal $\kappa$ such that the real line can be covered by $\kappa$ many closed disjoint nowhere dense sets. It is known that $\omega<covK\leq 2^{\omega}$. In any Polish space the intersection of $< covK$ dense sets is dense. But then if $\kappa<covK$, then $OTT(\kappa)$ is true. The independence is proved using standard iterated forcing to show that it is consistent that $covK$ could be literally anything greater than $\omega$ and $\leq 2^{\omega}$, and then show that $OTT(covK)$ is false. \end{enumerate} \end{demo} {\sf b}egin{corollary} Let ${\mathfrak A}\in TA_n$ be countable and $X\subseteq A$ be such that $\prod X=0$. Then there is a permutable set $V$ and and an embedding $f:{\mathfrak A}\to {\sf w}p(V)$ such that ${\sf b}igcap_{x\in X}f(x)=\emptyset.$ \end{corollary} {\sf b}egin{demo}{Proof} For each $a\neq 0$, let $f_a:{\mathfrak A}\to {\sf w}p(S_n)$ be a homomorphism such that ${\sf b}igcap_{x\in X}f_a(x)=\emptyset$. For each $a\in A$, let $V_a=S_n$ and let $V$ be the disjoint union of the $V_a$'s. Then $\prod_{a\in A} {\sf w}p(V_a)\cong {\sf w}p(V)$, via $(a_i:i\in I)\mapsto {\sf b}igcup a_i$. Let $g$ denote this isomorphism. Define $f:{\mathfrak A}\to {\sf w}p(V)$ by $f(x)=g[(f_ax: a\in A)]$.Then $f$ is the desired embedding. Indeed, if $s\in {\sf b}igcap_{x\in X} f(x)$, then $s\in V_a$ for some $a\in A$, and $s\in {\sf b}igcap_{x\in X}f_a(x)$, and this cannot happen. \end{demo} Now we turn to the problem of preserving all meets. To make the problem more tangible we need a few preparations. For some tome to come we restrict the notion of representation. We stipulate that a representation of an algebra ${\mathfrak A}$, is a one to one homomorphism $f:{\mathfrak A}\longrightarrow {\sf w}p(V)$ for a permutable set $V$. Notice that ${\mathfrak A}$ could be infinite, and so $V$ could be infinite as well. Let ${\mathfrak A}$ be a substitution algebra and $f:{\mathfrak A}\longrightarrow {\sf w}p(V)$ be a representation of ${\mathfrak A}$. If $s\in V$, we let $$f^{-1}(s)=\{a\in {\mathfrak A}: s\in f(a)\}.$$ An atomic representation $f:{\mathfrak A}\to {\sf w}p(V)$ is a representation such that for each $s\in V$, the ultrafilter $f^{-1}(s)$ is principal. A complete representation of ${\mathfrak A}$ is a representation $f$ satisfying $$f(\prod X)={\sf b}igcap f[X]$$ whenever $X\subseteq {\mathfrak A}$ and $\prod X$ is defined. When we ask for representations that respect all existing meets, it turns out that the Boolean reduct of algebras in question have to be atomic in the first place. 
Atomicity a necessary condition for complete representability may not sufficient, as is the case example for cylindric algebras. For transposition algebras we show that complete representability and atomicity (of the Boolean reduct) are equivalent. To prove this, we first recall the following result established by Hirsch and Hodkinson for cylindric algebras. The proof works verbatim for $TA$'s. {\sf b}egin{lemma}Let ${\mathfrak A}\in TA_{n}$. {\sf b}egin{enumarab} \item A representation of ${\mathfrak A}$ is atomic if and only if it is complete. \item A representation $f:{\mathfrak A}\to {\sf w}p(V)$ is atomic if and only if ${\mathfrak{B}}l{\mathfrak A}$ is atomic and ${\sf b}igcup_{x\in X}f(x)=V$, where $X$ is the set of atoms. \item If ${\mathfrak A}$ has a complete representation $f$, then ${\mathfrak{B}}l{\mathfrak A}$ is atomic. \end{enumarab} \end{lemma} {\sf b}egin{proof}See \cite{Hirsh}. \end{proof} We say that an algebra is atomic, if its Boolean reduct is atomic. Contrary to the cylindric case, we have: {\sf b}egin{theorem}\label{com} If ${\mathfrak A}\in TA_n$ is atomic, then ${\mathfrak A}$ ise completely representable \end{theorem} {\sf b}egin{proof} Let ${\mathfrak{B}}$ be an atomic transposition algebra, let $X$ be the set of atoms, and let $c\in {\mathfrak{B}}$ be non-zero. Let $S$ be the Stone space of ${\mathfrak{B}}$, whose underlying set consists of all Boolean ulltrafilters of ${\mathfrak{B}}$. Let $X^*$ be the set of principal ultrafilters of ${\mathfrak{B}}$ (those generated by the atoms). These are isolated points in the Stone topology, and they form a dense set in the Stone topology since ${\mathfrak{B}}$ is atomic. So we have $X^*cylindric algebrap T=\emptyset$ for every nowhere dense set $T$ (since principal ultrafilters, which are isolated points in the Stone topology, lie outside nowhere dense sets). Recall that for $a\in {\mathfrak{B}}$, $N_a$ denotes the set of all Boolean ultrafilters containing $a$. Now for all $\tau\in S_n$, we have $G_{X, \tau}=S\sim {\sf b}igcup_{x\in X}N_{s_{\tau}x}$ is nowhere dense. Let $F$ be a principal ultrafilter of $S$ containing $c$. This is possible since ${\mathfrak{B}}$ is atomic, so there is an atom $x$ below $c$; just take the ultrafilter generated by $x$. Also $F$ lies outside the $G_{X,\tau}$'s, for all $\tau\in S_n$ Define, as we did before, $f_c$ by $f_c(b)=\{\tau\in S_n: s_{\tau}b\in F\}$. Then clearly for every $\tau\in S_n$ there exists an atom $x$ such that $\tau\in f_c(x)$. As before, for each $a\in A$, let $V_a=S_n$ and let $V$ be the disjoint union of the $V_a$'s. Then $\prod_{a\in A} {\sf w}p(V_a)\cong {\sf w}p(V)$. Define $f:{\mathfrak A}\to {\sf w}p(V)$ by $f(x)=g[(f_ax: a\in A)]$. Then $f: {\mathfrak A}\to {\sf w}p(V)$ is an embedding such that ${\sf b}igcup_{x\in {\sf At}{\mathfrak A}}f(x)=V$. Hence $f$ is a complete representation. \end{proof} A classical theorem of Vaught for first order logic says that countable atomic theories have countable atomic models, such models are necessarily prime, and a prime model omits all non principal types. We have a similar situation here: {\sf b}egin{theorem} Let $f:{\mathfrak A}\to {\sf w}p(V)$ be an atomic representation of ${\mathfrak A}\in TA_n$. Then for any $Y\subseteq A$, if $\prod Y=0$, then ${\sf b}igcap_{y\in Y} f(y)=\emptyset$. 
\end{theorem} {\sf b}egin{proof}Follows from the simple observation that ${\sf b}igcup_{x\in {\sf At}{\mathfrak A}}f(x)\leq {\sf b}igcup_{y\in y} f(-y).$ \end{proof} Seemingly a second order condition, in contrast to cylindric algebras, we get {\sf b}egin{corollary} The class of completely representable algebras is elementary, and is axiomatized by a finite set of first order sentences \end{corollary} {\sf b}egin{demo}{Proof} Atomicity can be expressed by a first order sentence. \end{demo} \subsection{The infinite dimensional case} S\'agi dealt with infinite dimensional algebras; but he only dealt with square units. We give a reasonable generalization to the above theorems for the infinite dimensional case, by allowing weak sets as units, a weak set being a set of sequences that agree cofinitely with some fixed sequence. That is a weak set is one of the form $\{s\in {}^{\alpha}U: |\{i\in \alpha, s_i\neq p_i\}|<\omega\}$, where $U$ is a set, $\alpha$ an ordinal and $p\in {}^{\alpha}U$. This set will be denoted by $^{\alpha}U^{(p)}$. The set $U$ is called the base of the weak set. A set $V\subseteq {}^{\alpha}\alpha^{(Id)}$, is defined to be permutable just like the finite dimensional case. Altering top elements to be weak sets, rather than squares, turns out to be a fruitful approach and a rewarding task. {\sf b}egin{definition} We let $PTA_{\alpha}$ be the variety generated by $${\sf w}p(V)=\langle\mathcal{P}(V),cylindric algebrap,-,S_{ij}{\sf r}angle_{i,j\in\alpha},$$ where $V\subseteq {}^\alpha\alpha^{Id}$ is permutable. \end{definition} Let $\Sigma_{\alpha}$ be the set of finite schemas obtained from $\Sigma_n$ but now allowing indices from $\alpha$. Obviously $\Sigma_{\alpha}$ is infinite. $(Mod\Sigma_{\alpha}: \alpha{\sf g}eq \omega)$ is a system of varieties definable by schemes which means that it is enough to specify $\Sigma_{\omega}$, to define $\Sigma_{\alpha}$ for all $\alpha{\sf g}eq \omega$. Indeed, let ${\sf r}ho:\alpha\to {\sf b}eta$ be an injection. One defines for a term $t$ in $L_{\alpha}$ a term ${\sf r}ho(t)$ in $L_{{\sf b}eta}$ by recursion ${\sf r}ho(v_i)=v_i$ and ${\sf r}ho(f(\tau))=f({\sf r}ho(\tau))$. Then one defines ${\sf r}ho(\sigma=\tau)={\sf r}ho(\sigma)={\sf r}ho(\tau)$. Then there exists a finite set $\Sigma\subseteq \Sigma_{\omega}$ such that $\Sigma_{\alpha}=\{{\sf r}ho(e): {\sf r}ho:\omega\to \alpha \text { is an injection }, e\in \Sigma\}.$ We give two proofs of the following main representation theorem but first we give a definition. Let $\alpha\leq{\sf b}eta$ be ordinals and let ${\sf r}ho:\alpha{\sf r}ightarrow{\sf b}eta$ be an injection. For any ${\sf b}eta$-dimensional algebra ${\mathfrak{B}}$) we define an $\alpha$-dimensional algebra ${\mathfrak{Rd}}^{\sf r}ho(\c B)$, with the same base and Boolean structure as $\c B$, where the $(i,j)$th substitution of ${\mathfrak{Rd}}^{\sf r}ho(\c B)$ is $s_ {{\sf r}ho(i){\sf r}ho(j)}\in\c B$. For a class $K$, ${\mathfrak{Rd}}^{{\sf r}ho}K=\{{\mathfrak{Rd}}^{{\sf r}ho}{\mathfrak A}: {\mathfrak A}\in K\}$. When $\alpha\subseteq {\sf b}eta$ and ${\sf r}ho$ is the identity map on $\alpha$, then we write ${\mathfrak{Rd}}_{\alpha}{\mathfrak{B}}$, for ${\mathfrak{Rd}}^{{\sf r}ho}{\mathfrak{B}}$. 
We let $TA_{\alpha}$ denote $Mod(\Sigma_{\alpha})$ {\sf b}egin{theorem} For any infinite ordinal $\alpha$, $TA_{\alpha}=PTA_{\alpha}.$ \end{theorem} {\sf b}egin{proof} {{\sf b}f First proof} {\sf b}egin{enumarab} \item First for ${\mathfrak A}\models \Sigma_{\alpha}$ and ${\sf r}ho:n\to \alpha,$ $n\in \omega$ and ${\sf r}ho$ one to one, define ${\mathfrak{Rd}}^{{\sf r}ho}{\mathfrak A}$ as in \cite{HMT1} def. 2.6.1. Then ${\mathfrak{Rd}}^{{\sf r}ho}{\mathfrak A}\in TA_n$. \item For any $n{\sf g}eq 2$ and ${\sf r}ho:n\to \alpha$ as above, $TA_n\subseteq\mathbf{S}{\mathfrak{Rd}}^{{\sf r}ho}PTA_{\alpha}$ as in \cite{HMT2} theorem 3.1.121. \item $PTA_{\alpha}$ is closed under ultraproducts, cf \cite{HMT2}, lemma 3.1.90. \end{enumarab} Now we show that if ${\mathfrak A}\models \Sigma_{\alpha}$, then ${\mathfrak A}$ is representable. First, for any ${\sf r}ho:n\to \alpha$, ${\mathfrak{Rd}}^{{\sf r}ho}{\mathfrak A}\in TA_n$. Hence it is in $PTA_n$ and so it is in $S{\mathfrak{Rd}}^{{\sf r}ho}PTA_{\alpha}$. Let $I$ be the set of all finite one to one sequences with range in $\alpha$. For ${\sf r}ho\in I$, let $M_{{\sf r}ho}=\{\sigma\in I:{\sf r}ho\subseteq \sigma\}$. Let $U$ be an ultrafilter of $I$ such that $M_{{\sf r}ho}\in U$ for every ${\sf r}ho\in I$. Then for ${\sf r}ho\in I$, there is ${\mathfrak{B}}_{{\sf r}ho}\in PTA_{\alpha}$ such that ${\mathfrak{Rd}}^{{\sf r}ho}{\mathfrak A}\subseteq {\mathfrak{Rd}}^{{\sf r}ho}{\mathfrak{B}}_{{\sf r}ho}$. Let ${\mathfrak{C}}=\prod{\mathfrak{B}}_{{\sf r}ho}/U$; it is in $\mathbf{Up}PTA_{\alpha}=PTA_{\alpha}$. Define $f:{\mathfrak A}\to \prod{\mathfrak{B}}_{{\sf r}ho}$ by $f(a)_{{\sf r}ho}=a$ , and finally define $g:{\mathfrak A}\to {\mathfrak{C}}$ by $g(a)=f(a)/U$. Then $g$ is an embedding. \end{proof} The \textbf{second proof} follows from the next lemma, whose proof is identical to the finite dimensional case with obvious modifications. Here, for $\xi\in {}^\alpha\alpha^{Id},$ the operator $S_\xi$ works as $S_{\xi\upharpoonright J}$ (which can be defined as in Def. 2.12) where $J=\{i\in\alpha:\xi(i)\neq i\}$ (in case $J$ is empty, i.e., $\xi=Id_\alpha,$ $S_\xi$ is the identity operator). {\sf b}egin{lemma}\label{f} Let ${\mathfrak A}$ be a $TA_\alpha$ type $BAO$ and $G\subseteq{}^\alpha\alpha^{Id}$ permutable. Let $\langle\mathcal{F}_\xi:\xi\in G{\sf r}angle$ is a system of ultrafilters of ${\mathfrak A}$ such that for all $\xi\in G,\;i\neq j\in \alpha$ and $a\in{\mathfrak A}$ the following condition holds:$$S_{ij}^{\mathfrak A}(a)\in\mathcal{F}_\xi{\mathfrak{L}}eftrightarrow a\in \mathcal{F}_{\xi\circ[i,j]}\quad\quad (*).$$ Then the following function $h:{\mathfrak A}\longrightarrow{\sf w}p(G)$ is a homomorphism $$h(a)=\{\xi\in G: a\in \mathcal{F}_\xi\}.$$ \end{lemma} Using this lemma one proves the above theorem by the method used for the finite dimensional case replacing $S_n$ by $S=\{s\in{}^\alpha\alpha^{Id}:s\mbox{ bijective }\}$ which is permutable. {\sf b}egin{corollary} ${{\sf b}old H}RTA_{\alpha}=\mathbf{HSP}(\{{\sf w}p(V):V\subseteq{}^\alpha\alpha^{Id}\mbox{ permutable}\})$ \end{corollary} Theorems on complete representations also generalize verbatim with the same proofs using the technique in {\sf r}ef{f}. Let $covK$ denote the cardinal defined in the proof of theorem {\sf r}ef{t}. Then we have {\sf b}egin{theorem} {\sf b}egin{enumarab} \item Let ${\mathfrak A}\in TA_{\alpha}$ be a countable transposition algebra $a\in A$ non-zero, and $X_i\subseteq A$, $i<covK$, such that $\prod X_i=0$. 
Then there exists a transposition set algebra ${\mathfrak{B}}$ with a weak unit and a representation $h:{\mathfrak A}\to {\mathfrak{B}}$ such that ${\sf b}igcap_{x\in X}h(x)=\emptyset$ and $h(a)\neq 0$. \item Let ${\mathfrak A}$ and $(X_i:i<covK)$ be as above. Then there exists a permutable $V$ and an embedding $f:{\mathfrak A}\to {\sf w}p(V)$ such that ${\sf b}igcup_{x\in X_i}f(x)=\emptyset$ \end{enumarab} \end{theorem} The next theorem is also in essence an omitting types theorem. First a definition. Let ${\mathfrak{B}}$ be a substitution algebra. Say that an ultrafilter $F$ in ${\mathfrak{B}}$ is realized in the representation $f:{\mathfrak{B}}\to {\sf w}p(V)$ if ${\sf b}igcap_{x\in F}f(x)\neq \emptyset.$ {\sf b}egin{theorem} Let ${\mathfrak{B}}\in TA_{\omega}$ be countable. Then there exists two representations of ${\mathfrak{B}}$ such that if $F$ is an ultrafilter realized in both, then $F$ is principal. \end{theorem} {\sf b}egin{proof} We construct two distinct representations of ${\mathfrak{B}}$ such that if $F$ is an ultrafilter in $B$ that is realized in both representations, then $F$ is necessarily principal, that is $\prod F$ is an atom generating $F$. We construct two ultrafilter $T$ and $S$ of ${\mathfrak{B}}$ such that (*) $\forall \tau_1, \tau_2\in {}^{\omega}\omega^{(Id)}( G_1=\{a\in {\mathfrak{B}}: {\sf s}_{\tau_1}a\in T\}, G_2=\{a\in {\mathfrak{B}}: s_{\tau_1}a\in S\})\\\implies G_1\neq G_2 \text { or $G_1$ is principal.}$ Note that $G_1$ and $G_2$ are indeed ultrafilters. We construct $S$ and $T$ as a union of a chain. We carry out various tasks as we build the chains. The tasks are as in (*), as well as (**) for all $a\in A$ either $a\in T$ or $-a\in T$, and same for $S$. We let $S_0=T_0=\{1\}$. There are countably many tasks. Metaphorically we hire countably many experts and give them one task each. We partition $\omega$ into infinitely many sets and we assign one of these tasks to each expert. When $T_{i-1}$ and $S_{i-1}$ have been chosen and $i$ is in the set assigned to some expert $E$, then $E$ will construct $T_i$ and $S_i$. For consider the expert who handles task (***). Let $X$ be her subset of $\omega$. Let her list as $(a_i: i\in X)$ all elements of $X$. When $T_{i-1}$ has been chosen with $i\in X$, she should consider whether $T_{i-1}\cup \{a_i\}$ is consistent. If it is she puts $T_i=T_{i-1}\cup \{a_i\}$. If not she puts $T_i=T_{i-1}\cup \{-a_i\}$. Same for $S_i$. Now finally consider the tasks in (*). Suppose that $X$ contains $i$ , and $S_{i-1}$ and $T_{i-1}$ have been chosen. Let $e={\sf b}igwedge S_{i-1}$ and $f={\sf b}igwedge T_{i-1}$. We have two cases. If $e$ is an atom in $B$ then the ultrafilter $F$ containing $e$ is principal so our expert can put $S_i=S_{i-1}$ and $T_i=T_{i-1}$. If not, then let $F_1$ , $F_2$ be distinct ultrafilters containing $e$. Let $G$ be an ultrafilter containing $f$. Say $F_1$ is different from $G$. Let $\theta$ be in $F_1-G$. Then put $S_i=S_{-1}\cup \{\theta\}$ and $T_i=T_{i-1}\cup \{-\theta\}.$ It is not hard to check that the canonical models, defined the usual way, corresponding to $S$ and $T$ are as required. \end{proof} \subsection*{Remark} The above technique is important in omitting types theorems, since it can, in certain contexts, allow omitting $\kappa< {} ^{\omega}2$ types, given that they are maximal (i.e ultrafilters). The idea is to construct ${}^{\omega}2$ pairwise non-isomorphic models, such that given any ultrafilter $F$ realized in two of them is necessarily principal. 
One then defines for $i<{}^{\omega}2$, $K_i=\{{\mathfrak{M}}: {\mathfrak{M}} \text { omits }F_i\}$, and for limits $K_{\mu}={\sf b}igcap_{i<\mu}K_i$. Since $\kappa < {}^{\omega}2$, there will be a model ${\mathfrak{M}}\in {\sf b}igcap_{i< ^{\omega}2}K_i$, and ${\mathfrak{M}}$ will omit all given types. {\sf b}egin{theorem} Atomic transposition algebras of infinite dimensions are completely representable on weak sets \end{theorem} {\sf b}egin{proof} Like the finite dimensional case. \end{proof} Finally we show that: {\sf b}egin{theorem} For infinite ordinals $\alpha$, $RQA_{\alpha}$ is not a variety. \end{theorem} {\sf b}egin{proof} Assume to the contrary that $RQA_{\alpha}$ is a variety and that $RQA_{\alpha}={{\sf b}f Mod}\Sigma_{\alpha}$ for some countable schema $\Sigma_{\alpha}.$ Fix $n{\sf g}eq 2.$ We show that for any set $U$ and any ideal $I$ of ${\mathfrak A}={\sf w}p(^nU)$, we have ${\mathfrak A}/I\in RQA_n$, which is not possible since we know that there are relativized set algebras to permutable sets that are not in $RQA_n$. Define $f:{\mathfrak A}\to {\sf w}p(^{\alpha}U)$ by $f(X)=\{s\in {}^{\alpha}U: f\upharpoonright n\in X\}$. Then $f$ is an embedding of ${\mathfrak A}$ into ${\mathfrak{Rd}}_n({\sf w}p({}^nU))$, so that we can assume that ${\mathfrak A}\subseteq {\mathfrak{Rd}}_n{\mathfrak{B}}$, for some ${\mathfrak{B}}\in RQA_{\alpha}.$ Let $I$ be an ideal of ${\mathfrak A}$, and let $J={\mathfrak{Ig}}^{{\mathfrak{B}}}I$. Then we claim that $Jcylindric algebrap {\mathfrak A}=I$. One inclusion is trivial; we need to show $Jcylindric algebrap {\mathfrak A}\subseteq I$. Let $y\in Acylindric algebrap J$. Then $y\in {\mathfrak{Ig}}\{I\}$ and so, there is a term $\tau$, and $x_1,\ldots x_n\in I$ such that $y\leq \tau(x_1,\dots x_n)$. But $\tau(x_1,\ldots x_{n-1})\in I$ and $y\in A$, hence $y\in I$, since ideals are closed downwards. It follows that ${\mathfrak A}/I$ embeds into ${\mathfrak{Rd}}_n({\mathfrak{B}}/J)$ via $x/I\mapsto x/J$. The map is well defined since $I\subseteq J$, and it is one to one, because if $x,y\in A$, such that $x\delta y\in J$, then $x\delta y\in I$. We have ${\mathfrak{B}}/J\models \Sigma_{\alpha}$. For ${\sf b}eta$ an ordinal, let $K_{{\sf b}eta}$ denote the class of all full set algebras of dimension ${\sf b}eta$. Then $K_n=S{\mathfrak{Rd}}_nK_{\alpha}$. One inclusion is obvious. To prove the other inclusion, it suffices to show that that if ${\mathfrak A}\subseteq {\mathfrak{Rd}}_n({\sf w}p({}^{\alpha}U))$, then ${\mathfrak A}$ is embeddable in ${\sf w}p(^nW)$, for some set $W$. Simply take $W=U$ and define $g:{\mathfrak A}\to {\sf w}p(^nU)$ by $g(X)=\{f\upharpoonright n: f\in X\}$. Now let ${\mathfrak{B}}'={\mathfrak{B}}/I$, then ${\mathfrak{B}}'\in {{\sf b}f SP}K_{\alpha}$, so ${\mathfrak{Rd}}_n{\mathfrak{B}}'\in {\mathfrak{Rd}}_n{{\sf b}f SP}K_{\alpha}={{\sf b}f SP}{\mathfrak{Rd}}_nK_{\alpha}\subseteq {{\sf b}f SP}K_n$. Hence ${\mathfrak A}/I\in RTA_n$. But this cannot happen for all ${\mathfrak A}\in K_n$ and we are done. \end{proof} \section{Substitution Algebras with Transpositions} In this section we study the common expansion of S\'agi's algebras and ours when we have all replacements and substitutions. We provide a finite axiomatization of the variety of the generated by the set algebras. Unlike the other two cases, we show that the class of subdirect products of set algebras is a variety, for finite as well as of for infinite dimensions {\sf b}egin{definition}[Substitution Set Algebras with Transpositions] Let $U$ be a set. 
\emph{The full substitution set algebra with transpositions of dimension} $\alpha$ \emph{with base} $U$ is the algebra $$\langle\mathcal{P}({}^\alpha U); \cap,-,S^i_j,S_{ij}\rangle_{i\neq j\in\alpha},$$ where the $S^i_j$'s are as in Definition 2.10 in \cite{sagiphd} and the $S_{ij}$'s are the unary operations defined before. The class of substitution set algebras with transpositions of dimension $\alpha$ is defined as follows:
$$SetSA_\alpha=\mathbf{S}\{{\mathfrak A}:{\mathfrak A}\text{ is a full substitution set algebra with transpositions}$$$$\text{of dimension }\alpha \text{ with base }U,\text{ for some set }U\}.$$
\end{definition}
\begin{definition}[Representable Substitution Set Algebras with Transpositions] The class of representable substitution set algebras with transpositions of dimension $\alpha$ is defined to be $$RSA_\alpha=\mathbf{SP}SetSA_\alpha.$$
\end{definition}
We adapt the definition of ``permutable set'' to the new context, as expected:
\begin{definition}[Dipermutable Set] Let $U$ be a given set, and let $D\subseteq{}^\alpha U.$ We say that $D$ is dipermutable iff it satisfies the following condition: $$(\forall i\neq j\in\alpha)(\forall s\in{}^\alpha U)(s\in D\Longrightarrow s\circ [i/j]\in D\mbox{ and }s\circ [i,j]\in D).$$
\end{definition}
\begin{definition}[Dipermutable Algebras] The class of \emph{Dipermutable Set Algebras} (of dimension $n<\omega$) is defined to be $$DPSA_n=\mathbf{SP}\{\langle\mathcal{P}(D); \cap,-,S^i_j,S_{ij}\rangle_{i\neq j\in n}: U\text{ \emph{is a set}},\ D\subseteq{}^n U\text{ \emph{dipermutable}}\}.$$ Here $S_j^i(X)=\{q\in D:q\circ [i/j]\in X\}$ and $S_{ij}(X)=\{q\in D:q\circ [i,j]\in X\}$, and $-$ is complement w.r.t.\ $D$.\\
If $D$ is a dipermutable set, then the algebra $\wp(D)$ is defined to be $$\wp(D)=\langle\mathcal{P}(D);\cap,-,S^i_j,S_{ij}\rangle_{i\neq j\in n},$$ so $\wp(D)\in DPSA_n.$
\end{definition}
We later define dipermutable set algebras of infinite dimension $\alpha$. Analogous to our earlier investigations, we have:
\begin{theorem} Let $U$ be a set and suppose $G\subseteq{}^n U$ is dipermutable. Let ${\mathfrak A}=\langle\mathcal{P}({}^n U);\cap,-,S^i_j,S_{ij}\rangle_{i\neq j\in n}$ and let $\mathcal{B}=\langle\mathcal{P}(G);\cap,-,S^i_j,S_{ij}\rangle_{i\neq j\in n}$. Then the following function $h$ is a homomorphism: $$h:{\mathfrak A}\longrightarrow\mathcal{B},\quad h(x)=x\cap G.$$
\end{theorem}
\begin{definition}[Small algebras] For any natural number $k\leq n$, the algebra ${\mathfrak A}_{nk}$ is defined to be $${\mathfrak A}_{nk}=\langle\mathcal{P}({}^nk);\cap,-,S^i_j,S_{ij}\rangle_{i\neq j\in n},$$ so ${\mathfrak A}_{nk}\in SetSA_n$.
\end{definition}
\begin{theorem} $RSA_n=\mathbf{SP}\{{\mathfrak A}_{nk}:k\leq n\}.$
\end{theorem}
\begin{proof} Exactly like before.
\end{proof}
\subsection{Axiomatizing $RSA_n$}
In this section we show that $RSA_n$ is a variety by giving a set $\Sigma'_n$ of equations such that $\mathbf{Mod}(\Sigma'_n)=RSA_n$.
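Before giving the axioms, we pause for a small illustrative example (ours, included only to fix ideas) of a dipermutable set that is neither empty nor a full Cartesian space. Take $n=2$ and $U=\{0,1,2\}$, and let
$$D={}^2\{0,1\}\cup\{(2,2)\}\subseteq {}^2U.$$
Every $s\in{}^2\{0,1\}$ takes its values in $\{0,1\}$, so $s\circ[0,1]$, $s\circ[0/1]$ and $s\circ[1/0]$ again lie in ${}^2\{0,1\}$, while $(2,2)$ is fixed by all of these maps; hence $D$ is dipermutable and $\wp(D)\in DPSA_2$, although $D\neq{}^2U$. By the relativization theorem above, $h(x)=x\cap D$ is then a homomorphism from the full set algebra $\wp({}^2U)$ onto $\wp(D)$.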
\begin{definition}[The Axiomatization] For all natural $n>1$, let $\Sigma'_n$ be the set of equations $\Sigma_n$ defined before together with the following, for distinct $i,j,k,l$:
\begin{enumerate}
\item $s_{kl}s^j_is_{kl}x=s^j_ix$
\item $s_{jk}s^j_is_{jk}x=s^k_ix$
\item $s_{ki}s^j_is_{ki}x=s^j_kx$
\item $s_{ij}s^j_is_{ij}x=s^i_jx$
\item $s^j_is^k_lx=s^k_ls^j_ix$
\item $s^j_is^k_ix=s^k_is^j_ix=s^j_is^k_jx$
\item $s^j_is^i_kx=s^j_ks_{ij}x$
\item $s^j_is^j_kx=s^j_kx$
\item $s^j_is^j_ix=s^j_ix$
\item $s^j_is^i_jx=s^j_ix$
\item $s^j_is_{ij}x=s_{ij}x$
\end{enumerate}
\end{definition}
We let $SA_n$ denote $\mathbf{Mod}(\Sigma'_n).$
\begin{definition} Let $R(U)=\{s_{ij}:i\neq j\in U\}\cup\{s^i_j:i\neq j\in U\}$, and let $\hat{}:R(U)^*\longrightarrow {}^UU$ be defined inductively as follows: it maps the empty string to $Id_U$, and for any string $t$, $$(s_{ij}t)^{\hat{}}=[i,j]\circ t^{\hat{}}\quad\text{and}\quad (s^i_jt)^{\hat{}}=[i/j]\circ t^{\hat{}}.$$
\end{definition}
\begin{theorem} For all $n\in\omega$, the set of (all instances of the) axiom schemas 4, 5, 6 of Def.\ 3.8 and 1 to 11 of Def.\ 4.8 is a presentation of the semigroup ${}^nn$ via the generators $R(n)$. That is, for all $t_1,t_2\in R(n)^*$ we have $$4,5,6\mbox{ of Def.\ 3.8 and 1 to 11 of Def.\ 4.8 }\vdash t_1=t_2\text{ iff }t_1^{\hat{}}=t_2^{\hat{}}.$$ Here $\vdash$ denotes derivability in Birkhoff's calculus for equational logic.
\end{theorem}
\begin{proof} This is clear because the mentioned schemas correspond exactly to the set of relations governing the generators of ${}^nn$ (see \cite{semigroup}).
\end{proof}
\begin{definition} For every $\xi\in {}^nn$ we associate a sequence $s_\xi\in R(n)^*$ such that $s_\xi^{\hat{}}=\xi.$ Such an $s_\xi$ exists, since $R(n)$ generates ${}^nn.$ This is a combination of Def.\ 2.12 here and Def.\ 4.13 in \cite{sagiphd}.
\end{definition}
Like before, we have:
\begin{lemma}\label{lemma} Let ${\mathfrak A}$ be a $BAO$ of $RSA_n$ type. Suppose $G\subseteq {}^nn$ is a dipermutable set, and $\langle\mathcal{F}_\xi:\xi\in G\rangle$ is a system of ultrafilters of ${\mathfrak A}$ such that for all $\xi\in G$, $i\neq j\in n$ and $a\in{\mathfrak A}$ the following conditions hold: $${S_{ij}}^{\mathfrak A}(a)\in\mathcal{F}_\xi\Leftrightarrow a\in \mathcal{F}_{\xi\circ[i,j]}\quad\quad (*),\text{ and}$$ $${S^i_j}^{\mathfrak A}(a)\in\mathcal{F}_\xi\Leftrightarrow a\in \mathcal{F}_{\xi\circ[i/j]}\quad\quad (**).$$ Then the following function $h:{\mathfrak A}\longrightarrow\wp(G)$ is a homomorphism: $$h(a)=\{\xi\in G:a\in \mathcal{F}_\xi\}.$$
\end{lemma}
Now we show, unlike the case of replacement algebras, that $RSA_n$ is a variety.
\begin{theorem}\label{variety} For any finite $n\geq 2$, $RSA_n=SA_n$.
\end{theorem}
\begin{proof} Clearly $RSA_n\subseteq SA_n$, because $SetSA_n\models\Sigma'_n$ (checking this is a routine computation). Conversely, $RSA_n\supseteq SA_n$. To see this, let ${\mathfrak A}\in SA_n$ be arbitrary. We may suppose that ${\mathfrak A}$ has at least two elements; otherwise it is easy to represent ${\mathfrak A}$. For every $0^{\mathfrak A}\neq a\in A$ we construct a homomorphism $h_a$ into ${\mathfrak A}_{nn}$ such that $h_a(a)\neq 0^{{\mathfrak A}_{nn}}$. To do this, let $0^{\mathfrak A}\neq a\in A$ be an arbitrary element. Let $\mathcal{F}$ be an ultrafilter of ${\mathfrak A}$ containing $a$, and for every $\xi\in {}^nn$ let $\mathcal{F}_\xi=\{z\in A: S^{\mathfrak A}_\xi(z)\in\mathcal{F}\}$ (which is an ultrafilter).
Then $h:{\mathfrak A}\longrightarrow{\mathfrak A}_{nn}$ defined by $h(z)=\{\xi\in{}^nn:z\in\mathcal{F}_{\xi}\}$ is a homomorphism by Lemma \ref{lemma}, since $(*)$ and $(**)$ hold.
\end{proof}
Also here, by slight modifications of the arguments in the previous section, we have $$SA_n\subseteq DPSA_n\subseteq\mathbf{HSP}RSA_n,$$ which is equivalent to $$SA_n= DPSA_n=RSA_n.$$
\subsection{Complete representations}
For $SA_n$, the problem of complete representations is more delicate, since the substitutions corresponding to replacements are not necessarily complete endomorphisms, or at least we could not prove that they are. However, we can obtain several results on complete representations in this context.
\begin{definition} Let ${\mathfrak A}\in SA_n$ and $b\in A$; then $\mathcal{R}l_{b}{\mathfrak A}=\{x\in {\mathfrak A}: x\leq b\}$, with the operations relativized to $b$.
\end{definition}
\begin{theorem}\label{atomic} Let ${\mathfrak A}\in SA_n$ be atomic, let $X={\sf At}{\mathfrak A}$, and assume that $\sum_{x\in X} s_{\tau}x=b$ for all $\tau\in {}^nn$. Then $\mathcal{R}l_{b}{\mathfrak A}$ is completely representable. In particular, if $\sum_{x\in X}s_{\tau}x=1$, then ${\mathfrak A}$ is completely representable.
\end{theorem}
\begin{proof} Clearly ${\mathfrak{B}}=\mathcal{R}l_{b}{\mathfrak A}$ is atomic, and for $\tau\in {}^nn$, $\sum_{x\in X}s_{\tau}(x.b)=b$.
\end{proof}
For an algebra ${\mathfrak A}$, ${\mathfrak A}^+$ denotes its canonical extension.
\begin{theorem}\label{canonical} Let ${\mathfrak A}\in SA_n$. Then ${\mathfrak A}^+$ is completely representable. In fact, any representation of ${\mathfrak A}^+$ is complete.
\end{theorem}
\begin{proof} Let $X={\sf At}{\mathfrak A}^+$. Then $\sum X=1$, and so there exists a finite $X'\subseteq X$ such that $\sum X'=\sum X=1$, and so $\sum s_{\tau}X=s_{\tau}\sum X=1$ for every $\tau\in {}^nn$.
\end{proof}
\begin{lemma} For ${\mathfrak A}\in SA_n$, the following two conditions are equivalent:
\begin{enumarab}
\item There exist a dipermutable set $V$ and a complete representation $f:{\mathfrak A}\to \wp(V)$.
\item For all non-zero $a\in A$, there exists a homomorphism $f:{\mathfrak A}\to \wp({}^nn)$ such that $f(a)\neq 0$ and $\bigcup_{x\in {\sf At}{\mathfrak A}} f(x)={}^nn$.
\end{enumarab}
\end{lemma}
\begin{demo}{Proof} We have already proved that (2) implies (1). Conversely, let a complete representation $g:{\mathfrak A}\to \wp(V)$ be given. Then $\wp(V)\subseteq \prod_{i\in I} {\mathfrak A}_i$ for some set $I$, where ${\mathfrak A}_i=\wp({}^nn)$. Assume that $a$ is non-zero; then $g(a)$ is non-zero, hence $g(a)_i$ is non-zero for some $i$. Let $\pi_j$ be the $j$th projection $\pi_j:\prod {\mathfrak A}_i\to {\mathfrak A}_j$, $\pi_j[(a_i: i\in I)]=a_j$. Define $f:{\mathfrak A}\to {\mathfrak A}_i$ by $f(x)=\pi_i(g(x)).$ Then clearly $f$ is as required.
\end{demo}
The following theorem is a converse to Theorem \ref{atomic}.
\begin{theorem}\label{converse} Assume that ${\mathfrak A}$ is a substitution algebra that is completely representable, and let $f:{\mathfrak A}\to \wp(V)$ be a complete representation. Then $\sum s_{\tau}{\sf At}{\mathfrak A}=1$ for every $\tau\in {}^nn$.
\end{theorem}
\begin{proof} Let $a\in A$ be non-zero, and let $f:{\mathfrak A}\to \wp({}^nn)$ be such that $f(a)\neq 0$ and $\bigcup_{x\in {\sf At}{\mathfrak A}} f(x)={}^nn$. Let $F=\{a\in A: Id\in f(a)\}$. Then $F$ is an ultrafilter. For $a\in A$, let $rep_F(a)=\{\tau\in {}^nn: s_{\tau}a\in F\}$.
Then $rep_F:{\mathfrak A}\to \wp({}^nn)$ and $rep_F=f$. Indeed, for $b\in {\mathfrak A}$ and $\tau\in {}^nn$, we have $\tau\in rep_F(b)$ iff $s_{\tau}b\in F$ iff $Id\in f(s_{\tau}b)$ iff $\tau \in f(b).$ Assume, seeking a contradiction, that there exist $\tau\in {}^nn$ and $y\in A$, $y<1$, such that $s_{\tau}x\leq y$ for all $x\in {\sf At}{\mathfrak A}$. Then for all $x\in {\sf At}{\mathfrak A}$, $\tau\notin rep_{F}(x)$: if $\tau\in rep_F(x)$ for some $x\in {\sf At}{\mathfrak A}$, then $s_{\tau}x\in F$, which is not possible. This means that $\bigcup_{x\in {\sf At}{\mathfrak A}} rep_F(x)\neq {}^nn$, and this contradicts complete representability.
\end{proof}
There is another kind of completion for a $BAO$, called its minimal completion. For an algebra ${\mathfrak A}$, its canonical extension and its minimal completion coincide if and only if ${\mathfrak A}$ is finite. If ${\mathfrak A}$ is atomic, then its minimal completion is easy to construct; it is just the complex algebra of its atom structure. If ${\mathfrak A}$ is an algebra and ${\mathfrak{B}}$ is its minimal completion, then for any $X\subseteq {\mathfrak A}$ we have $\sum^{{\mathfrak A}}X=\sum^{{\mathfrak{B}}}X$ whenever the former exists.
\begin{theorem}\label{completion} If ${\mathfrak A}$ is completely representable, then so is its minimal completion.
\end{theorem}
\begin{proof} Let ${\mathfrak A}$ be given and let ${\mathfrak{B}}$ denote its minimal completion. Then ${\mathfrak A}$ is atomic, and since ${\mathfrak A}$ is completely representable, by Theorem \ref{converse} we have $\sum s_{\tau}^{{\mathfrak A}}{\sf At}{\mathfrak A}=1$ for every $\tau\in {}^nn$. But suprema are preserved in the completion, hence $\sum s_{\tau}^{{\mathfrak{B}}}{\sf At}{\mathfrak{B}}=1$ for every such $\tau$, and we are done by Theorem \ref{atomic}.
\end{proof}
\subsection{The Infinite Dimensional Case}
Also here, for $SA$'s, we can lift our results to infinite dimensions. Surprisingly, while we could not capture the extension by diagonal elements in the finite dimensional case, it turns out that here we can, when we have enough spare dimensions; in fact we have infinitely many.
\begin{definition} We let $DPSA_{\alpha}$ be the variety generated by the algebras $$\wp(V)=\langle\mathcal{P}(V),\cap,-,S^i_j,S_{ij}\rangle_{i,j\in\alpha},$$ where $V\subseteq{}^\alpha\alpha^{(Id)}$ is dipermutable.
\end{definition}
In the next theorem we show that weak spaces can be squared using a non-trivial ultraproduct construction. Let $\Sigma_{\alpha}$ be the set of finite schemas obtained from the $\Sigma_n$ but now allowing indices from $\alpha$. We know that if ${\mathfrak A}\subseteq \wp({}^{\alpha}U)$ and $a\in A$ is non-zero, then there exists a homomorphism $f:{\mathfrak A}\to \wp(V)$ for some permutable $V$ such that $f(a)\neq 0$. We now prove a converse of this result. But first, a definition and a result on the number of non-isomorphic models.
\begin{definition} Let ${\mathfrak A}$ and ${\mathfrak{B}}$ be set algebras with bases $U$ and $W$ respectively. Then ${\mathfrak A}$ and ${\mathfrak{B}}$ are base isomorphic if there exists a bijection $f:U\to W$ such that $\bar{f}:{\mathfrak A}\to {\mathfrak{B}}$, defined by $\bar{f}(X)=\{y\in {}^{\alpha}W: f^{-1}\circ y\in X\}$, is an isomorphism from ${\mathfrak A}$ onto ${\mathfrak{B}}$.
\end{definition}
\begin{definition} An algebra ${\mathfrak A}$ is hereditary atomic if each of its subalgebras is atomic.
\end{definition}
Finite Boolean algebras are of course hereditary atomic, but there are infinite hereditary atomic Boolean algebras. What characterizes such algebras is that the base of their Stone space, that is, the set of all ultrafilters, is countable.
\begin{theorem} Let ${\mathfrak A}\in SA_{\omega}$ be countable and simple. Then the number of non-base-isomorphic representations of ${\mathfrak A}$ is either $\leq \omega$ or ${}^{\omega}2$.
\end{theorem}
\begin{proof} If ${\mathfrak A}$ is hereditary atomic, then the number of models is at most the number of ultrafilters. Else ${\mathfrak A}$ is non-atomic, and then it has ${}^{\omega}2$ ultrafilters. For an ultrafilter $F$, let $h_F(a)=\{\tau \in V: s_{\tau}a\in F\}$. Then $h_F:{\mathfrak A}\to \wp(V)$. We have that $h_F({\mathfrak A})$ is base isomorphic to $h_G({\mathfrak A})$ iff there exists a finite bijection $\sigma\in V$ such that $s_{\sigma}F=G$. Define the equivalence relation $\sim$ on the set of ultrafilters by $F\sim G$ iff there exists a finite permutation $\sigma$ such that $F=s_{\sigma}G$. Then every equivalence class is countable, and so we have ${}^{\omega}2$ many orbits, which correspond to the non-base-isomorphic representations of ${\mathfrak A}$.
\end{proof}
We shall prove that weak set algebras are strongly isomorphic to set algebras in the sense of the following definition.
\begin{definition} Let ${\mathfrak A}$ and ${\mathfrak{B}}$ be set algebras with units $V_0$ and $V_1$ and bases $U_0$ and $U_1,$ respectively, and let $F$ be an isomorphism from ${\mathfrak{B}}$ to ${\mathfrak A}$. Then $F$ is a strong ext-isomorphism if $F=\langle X\cap V_0: X\in B\rangle$. In this case $F^{-1}$ is called a strong subisomorphism. An isomorphism $F$ from ${\mathfrak A}$ to ${\mathfrak{B}}$ is a strong ext base isomorphism if $F=g\circ h$ for some base isomorphism $h$ and some strong ext-isomorphism $g$. In this case $F^{-1}$ is called a strong sub base isomorphism.
\end{definition}
\begin{theorem} If ${\mathfrak{B}}$ is a subalgebra of $\wp({}^{\alpha}\alpha^{(Id)})$, then there exists a set algebra ${\mathfrak{C}}$ with unit ${}^{\alpha}U$ such that ${\mathfrak{B}}\cong {\mathfrak{C}}$. Furthermore, the isomorphism is a strong sub base isomorphism.
\end{theorem}
\begin{proof} We square the unit using ultraproducts. We prove the theorem for $\alpha=\omega$. Let $F$ be a non-principal ultrafilter over $\omega$. Then there exists a function $h: \omega\to \{\Gamma: \Gamma\subseteq_{\omega} \omega\}$ such that $\{i\in \omega: \kappa\in h(i)\}\in F$ for all $\kappa<\omega$. Let $M={}^{\omega}U/F$. $M$ will be the base of our desired algebra; that is, ${\mathfrak{C}}$ will have unit ${}^{\omega}M.$ Define $\epsilon: U\to {}^{\omega}U/F$ by $$\epsilon(u)=\langle u: i\in \omega\rangle/F.$$ Then it is clear that $\epsilon$ is one to one. For $Y\subseteq {}^{\omega}U$, let $$\bar{\epsilon}(Y)=\{y\in {}^{\omega}({}^{\omega}U/F): \epsilon^{-1}\circ y\in Y\}.$$ By an $(F, (U:i\in \omega), \omega)$ choice function we mean a function $c$ mapping $\omega\times {}^{\omega}U/F$ into ${}^{\omega}U$ such that for all $\kappa<\omega$ and all $y\in {}^{\omega}U/F$ we have $c(\kappa,y)\in y.$ Let $c$ be an $(F, (U:i\in \omega), \omega)$ choice function satisfying the following condition: for all $\kappa, i<\omega$ and all $y\in {}^{\omega}U/F$, if $\kappa\notin h(i)$ then $c(\kappa,y)_i=\kappa$, and if $\kappa\in h(i)$ and $y=\epsilon u$ with $u\in U$, then $c(\kappa,y)_i=u$.
Let $\delta: {\mathfrak{B}}\to {}^{\omega}{\mathfrak{B}}/F$ be the following monomorphism: $$\delta(b)=\langle b: i\in \omega\rangle/F.$$ Let $t$ be the unique homomorphism mapping ${}^{\omega}{\mathfrak{B}}/F$ into $\wp({}^{\omega}({}^{\omega}U/F))$ such that for any $a\in {}^{\omega}B$, $$t(a/F)=\{q\in {}^{\omega}({}^{\omega}U/F): \{i\in \omega: (c^+q)_i\in a_i\}\in F\}.$$ Here $(c^+q)_i=\langle c(\kappa,q_\kappa)_i: \kappa<\omega\rangle.$ Let $g=t\circ \delta$. Then for $a\in B$, $$g(a)=\{q\in {}^{\omega}({}^{\omega}U/F): \{i\in \omega: (c^+q)_i\in a\}\in F\}.$$ Let ${\mathfrak{C}}=g({\mathfrak{B}})$. Then $g:{\mathfrak{B}}\to {\mathfrak{C}}$. We show that $g$ is an isomorphism onto a set algebra. First, it is clear that $g$ is a monomorphism into an algebra with unit $g(V)$. Recall that $M={}^{\omega}U/F$. Evidently $g(V)\subseteq {}^{\omega}M$. We show the other inclusion. Let $q\in {}^{\omega}M$. It suffices to show that $(c^+q)_i\in V$ for all $i\in\omega$. So let $i\in \omega$. Note that $(c^+q)_i\in {}^{\omega}U$. If $\kappa\notin h(i)$ then we have $$(c^+q)_i\kappa=c(\kappa, q\kappa)_i=\kappa.$$ Since $h(i)$ is finite, the conclusion follows. We now prove that for $a\in B$: $$(*) \ \ \ g(a)\cap \bar{\epsilon}V=\{\epsilon\circ s: s\in a\}.$$ Let $\tau\in V$. Then there is a finite $\Gamma\subseteq \omega$ such that $$\tau\upharpoonright (\omega\smallsetminus \Gamma)= Id\upharpoonright (\omega\smallsetminus \Gamma).$$ Let $Z=\{i\in \omega: \Gamma\subseteq h(i)\}$. By the choice of $h$ we have $Z\in F$. Let $\kappa<\omega$ and $i\in Z$. We show that $c(\kappa,\epsilon\tau \kappa)_i=\tau \kappa$. If $\kappa\in \Gamma,$ then $\kappa\in h(i)$ and so $c(\kappa,\epsilon \tau \kappa)_i=\tau \kappa$. If $\kappa\notin \Gamma,$ then $\tau \kappa=\kappa$ and $c(\kappa,\epsilon \tau \kappa)_i=\tau\kappa.$ We now prove $(*)$. Suppose first that $q\in g(a)\cap \bar{\epsilon}V$. Since $q\in \bar{\epsilon}V$ there is an $s\in V$ such that $q=\epsilon\circ s$. Choose $Z\in F$ such that $$c(\kappa, \epsilon(s\kappa))_i=s\kappa\quad\text{for all }\kappa<\omega\text{ and all }i\in Z.$$ This is possible by the above. Let $H=\{i\in \omega: (c^+q)_i\in a\}$. Then $H\in F$. Since $H\cap Z$ is in $F$, we can choose $i\in H\cap Z$. Then we have $$s=\langle s\kappa: \kappa<\omega\rangle= \langle c(\kappa, \epsilon(s\kappa))_i:\kappa<\omega\rangle= \langle c(\kappa,q\kappa)_i:\kappa<\omega\rangle=(c^+q)_i\in a.$$ Thus $q\in \{\epsilon\circ s: s\in a\}$. Now suppose that $q=\epsilon\circ s$ with $s\in a$. Since $a\subseteq V$ we have $q\in \bar{\epsilon}V$. Again let $Z\in F$ be such that $c(\kappa, \epsilon(s \kappa))_i=s\kappa$ for all $\kappa<\omega$ and all $i\in Z$. Then $(c^+q)_i=s\in a$ for all $i\in Z.$ So $q\in g(a).$ Note that $\bar{\epsilon}V\subseteq {}^{\omega}({}^{\omega}U/F)$. Let $rl_{\bar{\epsilon}V}^{{\mathfrak{C}}}$ be the function with domain ${\mathfrak{C}}$ (onto $\bar{\epsilon}({\mathfrak{B}})$) such that $$rl_{\bar{\epsilon}V}^{{\mathfrak{C}}}Y=Y\cap \bar{\epsilon}V.$$ Then we have proved that $$\bar{\epsilon}=rl_{\bar{\epsilon}V}^{{\mathfrak{C}}}\circ g.$$ It follows that $g$ is a strong sub-base-isomorphism of ${\mathfrak{B}}$ onto ${\mathfrak{C}}$.
\end{proof}
Like the finite dimensional case, we get:
\begin{corollary}\label{v2} $\mathbf{SP}\{ \wp({}^{\alpha}U): \text{$U$ a set}\}$ is a variety.
\end{corollary}
\begin{proof} Let ${\mathfrak A}\in SA_\alpha$. Then for $a\neq 0$ there exist a weak set algebra ${\mathfrak{B}}$ and $f:{\mathfrak A}\to {\mathfrak{B}}$ such that $f(a)\neq 0$. By the previous theorem there is a set algebra ${\mathfrak{C}}$ such that ${\mathfrak{B}}\cong {\mathfrak{C}}$, via $g$ say. Then $g\circ f(a)\neq 0$, and we are done.
\end{proof}
\subsection{Adding Diagonals}
\begin{definition} Let $\Sigma'^d_{\alpha}$ be the axiomatization obtained by adding to $\Sigma'_{\alpha}$ the following equations for all $i,j,k<\alpha$:
\begin{enumerate}
\item $d_{ii}=1$
\item $d_{i,j}=d_{j,i}$
\item $d_{i,k}.d_{k,j}\leq d_{i,j}$
\item $s_{\tau}d_{i,j}=d_{\tau(i), \tau(j)}$, for $\tau\in \{[i,j], [i/j]\}$.
\end{enumerate}
\end{definition}
\begin{theorem}\label{infinite} Every substitution algebra with transpositions and diagonals of infinite dimension is representable.
\end{theorem}
\begin{proof} Let ${\mathfrak A}\in\mathbf{Mod}(\Sigma'^d_{\alpha})$ and let $0^{\mathfrak A}\neq a\in A$. We construct a homomorphism $h:{\mathfrak A}\longrightarrow \wp({}^{\alpha}\alpha^{(Id)})$ such that $h(a)\neq 0$. Like before, choose an ultrafilter $\mathcal{F}\subseteq A$ containing $a$. Let $h:{\mathfrak A}\longrightarrow \wp({}^{\alpha}\alpha^{(Id)})$ be the following function: $h(z)=\{\xi\in {}^{\alpha}\alpha^{(Id)}:S_{\xi}^{\mathfrak A}(z)\in\mathcal{F}\}.$ Now, if $\mathcal{F}_\xi=\{z\in A: S_\xi(z)\in\mathcal{F}\},$ then $\langle\mathcal{F}_\xi:\xi\in {}^{\alpha}\alpha^{(Id)}\rangle$ is a system of ultrafilters of ${\mathfrak A}$ satisfying $(*)$ and $(**)$. The function $h$ respects the substitutions, but it may not respect the diagonal elements. To ensure that it does, we factor out the base $\alpha$ of the set algebra by a congruence relation. Define the following equivalence relation $\sim$ on $\alpha$: $i\sim j$ iff $d_{ij}\in \mathcal{F}$. Using the axioms for the diagonals, $\sim$ is indeed an equivalence relation. Let $V={}^{\alpha}\alpha^{(Id)}$; for $\tau\in V$ let $\bar{\tau}$ be defined by $\bar{\tau}(i)=\tau(i)/\sim$ for all $i\in \alpha$, and let $M=\{\bar{\tau}:\tau\in V\}$. Of course, the representing $\tau$ of a given $\bar{\tau}$ may not be unique. Now define $f(z)=\{\bar{\xi}\in M: S_{\xi}^{{\mathfrak A}}(z)\in \mathcal{F}\}$. We first check that $f$ is well defined. We use extensively the property $(s_{\tau}\circ s_{\sigma})x=s_{\tau\circ \sigma}x$ for all $\tau,\sigma\in {}^{\alpha}\alpha^{(Id)}$, a property that can be inferred from our axiomatization. We show that $f$ is well defined by induction on the cardinality of $$J=\{i\in \alpha: \sigma (i)\neq \tau (i)\},$$ where $\sigma,\tau\in V$ are such that $\bar{\sigma}=\bar{\tau}$. Of course $J$ is finite. If $J$ is empty, the result is obvious. Otherwise assume that $k\in J$. We introduce a piece of notation: for $\eta\in V$ and $k,l<\alpha$, write $\eta(k\mapsto l)$ for the $\eta'\in V$ that is the same as $\eta$ except that $\eta'(k)=l.$ Now take any $$\lambda\in \{\eta\in \alpha: \sigma^{-1}\{\eta\}= \tau^{-1}\{\eta\}=\{\eta\}\}.$$ We have (a) $$s_{\sigma}x=s_{\sigma k}^{\lambda}s_{\sigma (k\mapsto \lambda)}x.$$ Also we have (b) $$s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}.s_{\sigma} x) =d_{\tau k, \sigma k}.s_{\sigma} x,$$ and (c) $$s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}.s_{\sigma(k\mapsto \lambda)}x)= d_{\tau k, \sigma k}.s_{\sigma(k\mapsto \tau k)}x,$$ and (d) $$d_{\lambda, \sigma k}.s_{\sigma k}^{\lambda}s_{{\sigma}(k\mapsto \lambda)}x= d_{\lambda, \sigma k}.s_{{\sigma}(k\mapsto \lambda)}x.$$ Then by (b), (a), (d) and (c), we get $$d_{\tau k, \sigma k}.s_{\sigma} x= s_{\tau k}^{\lambda}(d_{\lambda,\sigma k}.s_{\sigma}x) =s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}.s_{\sigma k}^{\lambda} s_{{\sigma}(k\mapsto \lambda)}x) =s_{\tau k}^{\lambda}(d_{\lambda, \sigma k}.s_{{\sigma}(k\mapsto \lambda)}x) = d_{\tau k, \sigma k}.s_{\sigma(k\mapsto \tau k)}x.$$ The conclusion now follows from the induction hypothesis. Finally, it is clear that $f$ respects the diagonal elements.
\end{proof}
\begin{question} Are the atomic algebras completely representable?
\end{question}
\section{Atomicity of free algebras}
\subsection{General results on free algebras}
In cylindric algebra theory, whether or not the free algebras are atomic is an important topic. In fact, N\'emeti proved that for $n\geq 3$ the free algebras of dimension $n$ on a finite set of generators are not atomic, and this is closely related to G\"odel's incompleteness theorems for the finite $n$-variable fragments of first order logic. We first prove some slightly new results concerning free algebras of classes of $BAO$'s.
\begin{definition} Let $K$ be a variety of $BAO$'s, and let ${\mathfrak{L}}$ be the corresponding multimodal logic. We say that ${\mathfrak{L}}$ has the G\"odel incompleteness property if there exists a formula $\phi$ that cannot be extended to a recursive complete theory. Such a formula is called incompletable.
\end{definition}
Let ${\mathfrak{L}}$ be a general modal logic, and let ${\mathfrak{Fm}}_{\equiv}$ be the Tarski--Lindenbaum formula algebra on finitely many generators.
\begin{theorem}(Essentially N\'emeti's) If ${\mathfrak{L}}$ has the G\"odel incompleteness property, then the algebra ${\mathfrak{Fm}}_{\equiv}$ is not atomic.
\end{theorem}
\begin{proof} Assume that ${\mathfrak{L}}$ has the G\"odel incompleteness property. Let $\phi$ be an incompletable formula. We show that there is no atom in the Boolean algebra ${\mathfrak{Fm}}_{\equiv}$ below $\phi/\equiv.$ Note that because $\phi$ is consistent, $\phi/\equiv$ is non-zero. Now, assume to the contrary that there is such an atom $\tau/\equiv$ for some formula $\tau.$ This means that $(\tau\land \phi)/\equiv=\tau/\equiv$, and hence $\vdash\tau\implies \phi$. Let $T=\{\tau,\phi\}$ and let $Consq(T)=\{\psi\in Fm: T\vdash \psi\};$ $Consq(T)$ is short for the set of consequences of $T$. We show that $T$ is complete and that $Consq(T)$ is decidable. Let $\psi$ be an arbitrary formula. Then either $\tau/\equiv\leq \psi/\equiv$ or $\tau/\equiv\leq \neg \psi/\equiv$, because $\tau/\equiv$ is an atom. Thus $T\vdash\psi$ or $T\vdash \neg \psi$, and here the {\it or} is exclusive, i.e.\ the two cases cannot occur together. Clearly $Consq(T)$ is recursively enumerable. By completeness of $T$ we have $Fm\smallsetminus Consq(T)=\{\psi\in Fm: \neg \psi\in Consq(T)\},$ hence the complement of $Consq(T)$ is recursively enumerable as well, and so $Consq(T)$ is decidable. Here we are using the trivial fact that the set $Fm$ of formulas is recursive. Thus $T$ is a complete theory containing $\phi$ whose set of consequences is recursive, contradicting the incompletability of $\phi$. This contradiction proves that ${\mathfrak{Fm}}_{\equiv}$ is not atomic.
\end{proof}
\begin{definition} An element $a\in A$ is closed if $f_i(a)=a$ for every $i\in I$.
\end{definition}
In the following theorem, (1) holds for cylindric algebras, Pinter's substitution algebras (which are replacement algebras endowed with cylindrifiers) and quasipolyadic algebras with and without equality, when the dimension is $\leq 2$. (3) holds for such algebras for all finite dimensions. (4) is due to J\'onsson and Tarski. In fact, (1) holds for any discriminator variety $V$ of $BAO$'s with finitely many operators, when $V$ is generated by a discriminator class $K$.
\begin{theorem} Let $K$ be a variety of Boolean algebras with finitely many operators.
\begin{enumarab}
\item Assume that $K=V(Fin(K))$, and that for any ${\mathfrak{B}}\in K$ and $b'\in {\mathfrak{B}}$ there exists $b\in {\mathfrak{B}}$ such that ${\mathfrak{Ig}}^{{\mathfrak{B}}}\{b'\}={\mathfrak{Ig}}^{{\mathfrak{Bl}}{\mathfrak{B}}}\{b\}$. If ${\mathfrak A}$ is finitely generated, then ${\mathfrak A}$ is atomic. In particular, the finitely generated free algebras are atomic.
\item Assume the condition above on principal ideals, together with the condition that if the generators $b_0', b_1'$ of two given ideals happen to be disjoint, then $b_0, b_1$ can be chosen to be disjoint as well. Then ${\mathfrak{Fr}}_{\beta}K\times {\mathfrak{Fr}}_{\beta}K\cong {\mathfrak{Fr}}_{|\beta+1|}K.$ In particular, if $\beta$ is infinite and ${\mathfrak A}={\mathfrak{Fr}}_{\beta}K$, then ${\mathfrak A}\times {\mathfrak A}\cong {\mathfrak A}$. (This happens when $b_0, b_1$ are closed.)
\item Assume that $\beta<\omega$, and assume the above condition on principal ideals. Suppose further that for every $k\in \omega$ there exists an algebra in $K$ with at least $k$ atoms that is generated by a single element. Then ${\mathfrak{Fr}}_{\beta}K$ has infinitely many atoms.
\item Assume that $K=V(Fin(K))$. Suppose ${\mathfrak A}$ is $K$-freely generated by a finite set $X$ and ${\mathfrak A}={\mathfrak Sg}\, Y$ with $|Y|=|X|$. Then ${\mathfrak A}$ is $K$-freely generated by $Y.$
\end{enumarab}
\end{theorem}
\begin{proof}
\begin{enumarab}
\item Assume that $a\in A$ is non-zero. Let $h:{\mathfrak A}\to {\mathfrak{B}}$ be a homomorphism of ${\mathfrak A}$ into a finite algebra ${\mathfrak{B}}$ such that $h(a)\neq 0$. Let $I=\ker h.$ We claim that $I$ is a finitely generated ideal. Let $R_I$ be the congruence relation corresponding to $I$, that is, $R_I=\{(a,b)\in A\times A: h(a)=h(b)\}$. Let $X$ be a finite set such that $X$ generates ${\mathfrak A}$ and $h(X)=B$. Such a set obviously exists. Let $X'=X\cup \{x+y: x, y\in X\}\cup \{-x: x\in X\}\cup \bigcup_{f\in t}\{f(x): x\in X\}.$ Let $R$ be the congruence on ${\mathfrak A}$ generated by $R_I\cap (X\times X')$. Clearly $R$ is a finitely generated congruence and $R\subseteq R_I$. We show that the converse inclusion also holds. For this purpose we first show that $R(X)=\{a\in A: \exists x\in X\, (x,a)\in R\}=A.$ Assume that $xRa$ and $yRb$ with $x,y\in X$; then $(x+y)R(a+b)$, but there exists $z\in X$ such that $h(z)=h(x+y)$ and $zR(x+y)$, hence $zR(a+b)$, so that $a+b\in R(X)$. Similarly for all the other operations. Thus $R(X)=A$. Now assume that $a,b\in A$ are such that $h(a)=h(b)$. Then there exist $x, y\in X$ such that $xRa$ and $yRb$. Since $R\subseteq \ker h$, we have $h(x)=h(a)=h(b)=h(y)$, and so $xRy$, hence $aRb$ and $R_I\subseteq R$. So $I={\mathfrak{Ig}}\{b'\}$ for some element $b'$.
Then there exists $b\in {\mathfrak A}$ such that ${\mathfrak{Ig}}^{{\mathfrak{Bl}}{\mathfrak A}}\{b\}={\mathfrak{Ig}}\{b'\}.$ Since $h(b)=0$ and $h(a)\neq 0,$ we have $a.-b\neq 0$. Now $h({\mathfrak A})\cong {\mathfrak A}/{\mathfrak{Ig}}^{{\mathfrak{Bl}}{\mathfrak A}}\{b\}$ as $K$ algebras. Let $\mathcal{R}l_{-b}{\mathfrak A}=\{x: x\leq -b\}$. Let $f:{\mathfrak A}/{\mathfrak{Ig}}^{{\mathfrak{Bl}}{\mathfrak A}}\{b\}\to \mathcal{R}l_{-b}{\mathfrak A}$ be defined by $\bar{x}\mapsto x.-b$. Then $f$ is an isomorphism of Boolean algebras (recall that the operations of $\mathcal{R}l_{-b}{\mathfrak A}$ are defined by relativizing the Boolean operations to $-b$). Indeed, the map is well defined, by noting that if $x\delta y\in {\mathfrak{Ig}}^{{\mathfrak{Bl}}{\mathfrak A}}\{b\}$, where $\delta$ denotes symmetric difference, then $x.-b=y.-b$, because $(x\delta y).-b=0$. Since $\mathcal{R}l_{-b}{\mathfrak A}$ is finite, and $a.-b\in \mathcal{R}l_{-b}{\mathfrak A}$ is non-zero, there exists an atom $x\in \mathcal{R}l_{-b}{\mathfrak A}$ below $a.-b$; but clearly ${\sf At}(\mathcal{R}l_{-b}{\mathfrak A})\subseteq {\sf At}{\mathfrak A}$ and we are done.
\item Let $(g_i:i\in \beta+1)$ be the free generators of ${\mathfrak A}={\mathfrak{Fr}}_{\beta+1}K$. We first show that $\mathcal{R}l_{g_{\beta}}{\mathfrak A}$ is freely generated by $\{g_i.g_{\beta}:i<\beta\}$. Let ${\mathfrak{B}}$ be in $K$ and $y\in {}^{\beta}{\mathfrak{B}}$. Then there exists a homomorphism $f:{\mathfrak A}\to {\mathfrak{B}}$ such that $f(g_i)=y_i$ for all $i<\beta$ and $f(g_{\beta})=1$. Then $f\upharpoonright \mathcal{R}l_{g_{\beta}}{\mathfrak A}$ is a homomorphism such that $f(g_i.g_{\beta})=y_i$. Similarly, $\mathcal{R}l_{-g_{\beta}}{\mathfrak A}$ is freely generated by $\{g_i.-g_{\beta}:i<\beta\}$. Let ${\mathfrak{B}}_0=\mathcal{R}l_{g_{\beta}}{\mathfrak A}$ and ${\mathfrak{B}}_1=\mathcal{R}l_{-g_{\beta}}{\mathfrak A}$. Let $t_0=g_{\beta}$ and $t_1=-g_{\beta}$. Let $x_i$ be such that $J_i={\mathfrak{Ig}}\{t_i\}={\mathfrak{Ig}}^{{\mathfrak{Bl}}{\mathfrak A}}\{x_i\}$ and $x_0.x_1=0$; such $x_i$ exist by assumption. Assume that $z\in J_0\cap J_1$. Then $z\leq x_i$ for $i=0, 1$, and so $z=0$. Thus $J_0\cap J_1=\{0\}$. Let $y\in A\times A$, and let $z=y_0.x_0+y_1.x_1$; then $y_i.x_i=z.x_i$ for each $i\in\{0,1\}$, and so $z\in y_0/J_0\cap y_1/J_1$. Thus ${\mathfrak A}/J_i\cong {\mathfrak{B}}_i$, and so ${\mathfrak A}\cong {\mathfrak{B}}_0\times {\mathfrak{B}}_1$.
\item Let ${\mathfrak A}={\mathfrak{Fr}}_{\beta}K.$ Let ${\mathfrak{B}}$ have $k$ atoms and be generated by a single element. Then there exists a surjective homomorphism $h:{\mathfrak A}\to {\mathfrak{B}}$. Then, as in the first item, ${\mathfrak A}/{\mathfrak{Ig}}^{{\mathfrak{Bl}}{\mathfrak A}}\{b\}\cong {\mathfrak{B}}$, and so $\mathcal{R}l_{-b}{\mathfrak A}$ has $k$ atoms. Hence ${\mathfrak A}$ has at least $k$ atoms for every $k$, and we are done.
\item Let ${\mathfrak A}={\mathfrak{Fr}}_XK$, let ${\mathfrak{B}}\in Fin(K)$ and let $f:X\to B$. Then $f$ can be extended to a homomorphism $f':{\mathfrak A}\to {\mathfrak{B}}$. Let $\bar{f}=f'\upharpoonright Y$. If $f, g\in {}^XB$ and $\bar{f}=\bar{g}$, then $f'$ and $g'$ agree on the generating set $Y$, so $f'=g',$ hence $f=g$.
Therefore we obtain a one to one mapping from ${}^XB$ to ${}^YB$; but $|X|=|Y|$ and $B$ is finite, hence this map is surjective. In other words, for each $h\in {}^YB$ there exists a unique $f\in {}^XB$ such that $\bar{f}=h$, and then $f'$ with domain ${\mathfrak A}$ extends $h.$ Since ${\mathfrak{Fr}}_XK={\mathfrak{Fr}}_X(Fin(K))$, we are done.
\end{enumarab}
\end{proof}
For cylindric algebras, Pinter's algebras and quasipolyadic equality algebras, though the free algebras of dimension $>2$ contain infinitely many atoms, they are not atomic.
\begin{definition} Let $K$ be a class of $BAO$'s with operators $(f_i: i\in I)$. Let ${\mathfrak A}\in K$. An element $b\in A$ is called hereditary closed if for all $x\leq b$, $f_i(x)=x$.
\end{definition}
\begin{theorem}
\begin{enumarab}
\item Let ${\mathfrak A}={\mathfrak Sg}\, X$ with $|X|=n<\omega$, and let $b\in {\mathfrak A}$ be hereditary closed. Then $|{\sf At}{\mathfrak A}\cap \mathcal{R}l_{b}{\mathfrak A}|\leq 2^{n}$. If ${\mathfrak A}$ is freely generated by $X$, then $|{\sf At}{\mathfrak A}\cap \mathcal{R}l_{b}{\mathfrak A}|= 2^{n}.$
\item If every atom of ${\mathfrak A}$ is below $b,$ then ${\mathfrak A}\cong \mathcal{R}l_{b}{\mathfrak A}\times \mathcal{R}l_{-b}{\mathfrak A}$, and $|\mathcal{R}l_{b}{\mathfrak A}|=2^{2^n}$. If in addition ${\mathfrak A}$ is infinite, then $\mathcal{R}l_{-b}{\mathfrak A}$ is atomless.
\end{enumarab}
\end{theorem}
\begin{proof} Write $X=\{x_i: i<n\}$. We have $|{\sf At}{\mathfrak A}\cap \mathcal{R}l_b{\mathfrak A}|=|\{\prod Y.-\sum(X\sim Y).b: Y\subseteq X\}\sim \{0\}|\leq 2^{n}.$ Let ${\mathfrak{B}}=\mathcal{R}l_b{\mathfrak A}$. Then ${\mathfrak{B}}={\mathfrak Sg}^{{\mathfrak{B}}}\{x_i.b: i<n\}={\mathfrak Sg}^{{\mathfrak{Bl}}{\mathfrak{B}}}\{x_i.b:i<n\}$, since $b$ is hereditary closed. For $\Gamma\subseteq n$, let $$x_{\Gamma}=\prod_{i\in \Gamma}(x_i.b).\prod_{i\in n\sim \Gamma}(-x_i.b).$$ Let ${\mathfrak{C}}$ be the two element algebra. Then for each $\Gamma\subseteq n$ there is a homomorphism $f:{\mathfrak A}\to {\mathfrak{C}}$ such that $f(x_i)=1$ iff $i\in \Gamma$. This shows that $x_{\Gamma}\neq 0$ for every $\Gamma\subseteq n$, while it is easily seen that $x_{\Gamma}$ and $x_{\Delta}$ are distinct for distinct $\Gamma, \Delta\subseteq n$. We now show that ${\mathfrak A}\cong \mathcal{R}l_{b}{\mathfrak A}\times \mathcal{R}l_{-b}{\mathfrak A}$. Let ${\mathfrak{B}}_0=\mathcal{R}l_{b}{\mathfrak A}$ and ${\mathfrak{B}}_1=\mathcal{R}l_{-b}{\mathfrak A}$. Let $t_0=b$ and $t_1=-b$, and let $J_i={\mathfrak{Ig}}\{t_i\}$. Assume that $z\in J_0\cap J_1$. Then $z\leq t_i$ for $i=0, 1$, and so $z=0$. Thus $J_0\cap J_1=\{0\}$. Let $y\in A\times A$, and let $z=y_0.t_0+y_1.t_1$; then $y_i.t_i=z.t_i$ for each $i\in\{0,1\}$, and so $z\in y_0/J_0\cap y_1/J_1$. Thus ${\mathfrak A}/J_i\cong {\mathfrak{B}}_i$, and so ${\mathfrak A}\cong {\mathfrak{B}}_0\times {\mathfrak{B}}_1$.
\end{proof}
The following theorem holds for any class of $BAO$'s.
\begin{theorem} The free algebra on an infinite generating set $X$ is atomless.
\end{theorem}
\begin{proof} Let $a\in A$ be non-zero. Then there is a finite $Y\subseteq X$ such that $a\in {\mathfrak Sg}^{{\mathfrak A}} Y$. Let $y\in X\sim Y$.
Then by freeness there exist endomorphisms $f:{\mathfrak A}\to {\mathfrak A}$ and $h:{\mathfrak A}\to {\mathfrak A}$ such that $f(\mu)=h(\mu)=\mu$ for all $\mu\in Y$, while $f(y)=1$ and $h(y)=0$. Then $f(a)=h(a)=a$. Hence $f(a.y)=a\neq 0$ and $h(a.-y)=a\neq 0$, and so $a.y\neq 0$ and $a.-y\neq 0$. Thus $a$ cannot be an atom.
\end{proof}
\subsection{Specific results on algebras of substitutions}
\begin{definition} A variety $V$ is locally finite if every finitely generated algebra in $V$ is finite.
\end{definition}
The most famous locally finite variety is that of Boolean algebras. It turns out that our varieties are also locally finite, as the next simple proof shows.
\begin{theorem}\label{locallyfinite} If ${\mathfrak A}$ is generated by $X$ and $|X|=m$, then $|{\mathfrak A}|\leq 2^{2^{m\cdot n!}}$.
\end{theorem}
\begin{proof} We first prove the theorem when ${\mathfrak A}$ is diagonal free. Let $Y=\{s_{\tau}x_i: i<m, \tau\in S_n\}$. Then ${\mathfrak A}={\mathfrak Sg}^{{\mathfrak{Bl}}{\mathfrak A}}Y$. This follows from the fact that the substitutions are Boolean endomorphisms. Since $|Y|\leq m\cdot n!,$ the Boolean algebra generated by $Y$ has at most $2^{2^{m\cdot n!}}$ elements, so $|{\mathfrak A}|\leq 2^{2^{m\cdot n!}}$. When we have diagonal elements we take the larger, but still finite, set $Y=\{s_{\tau}x_i: i<m, \tau\in S_n\}\cup \{d_{ij}: i,j\in n\}$.
\end{proof}
\begin{theorem} Let ${\mathfrak A}={\mathfrak{Fr}}_XV$ with $X=\{g_i:i\in m\}$. Let $Y=\{s_{\tau}x: \tau\in S_n,\ x\in X\}$ and $\beta=|Y|.$ Then $\beta\leq m\cdot n!$, and there exists $i\in \omega$ with $m\leq i\leq \beta$ such that $|{\sf At} {\mathfrak A}|= 2^i$.
\end{theorem}
\begin{proof} We have ${\mathfrak A}={\mathfrak Sg}^{{\mathfrak{Bl}}{\mathfrak A}}Y$. Hence ${\sf At} {\mathfrak A}=\{\prod Z.-\sum (Y\sim Z): Z\subseteq Y\}\sim \{0\}$, and so $|{\sf At} {\mathfrak A}|\leq 2^{\beta}.$ Conversely, for each $\Gamma\subseteq m$, let $x_{\Gamma}=\prod_{\eta\in \Gamma}g_{\eta}.\prod_{\eta\in m \sim \Gamma}-g_{\eta}$. Let ${\mathfrak{C}}$ be the two element substitution algebra. Then for each $\Gamma\subseteq m$ there is a homomorphism $f:{\mathfrak A}\to {\mathfrak{C}}$ such that $f(s_{[i,j]}g_{\eta})=1$ iff $\eta\in \Gamma$ and $i,j\in n$, and hence $f(x_{\Gamma})=1$. This shows that $x_{\Gamma}\neq 0$ for every $\Gamma\subseteq m$, while it is easily seen that $x_{\Gamma}$ and $x_{\Delta}$ are distinct for distinct $\Gamma, \Delta\subseteq m$. Hence $|{\sf At}{\mathfrak A}|\geq 2^{m}$.
\end{proof}
One can easily show that if $\alpha$ or $\beta$ is infinite, then $|{\mathfrak{Fr}}_{\beta}SA_{\alpha}|=|\alpha|\cup |\beta|$.
\begin{question} Does ${\mathfrak{Fr}}_{\beta}SA_n$, for finite $\beta$ and any ordinal $n$, have exactly $2^{\beta}$ atoms?
\end{question}
\section{Amalgamation}
\subsection{General theorems}
In this section we show that all the varieties considered have the superamalgamation property, a strong form of amalgamation. As in the previous section, we start by formulating slightly new results in the general setting of $BAO$'s. The main novelty in our general approach is that we consider classes of algebras that are not necessarily varieties. We start with some definitions.
\begin{definition}
\begin{enumarab}
\item $K$ has the \emph{amalgamation property} if for all ${\mathfrak A}_1, {\mathfrak A}_2\in K$ and monomorphisms $i_1:{\mathfrak A}_0\to {\mathfrak A}_1,$ $i_2:{\mathfrak A}_0\to {\mathfrak A}_2$ there exist ${\mathfrak{D}}\in K$ and monomorphisms $m_1:{\mathfrak A}_1\to {\mathfrak{D}}$ and $m_2:{\mathfrak A}_2\to {\mathfrak{D}}$ such that $m_1\circ i_1=m_2\circ i_2$.
\item If, in addition, $(\forall x\in A_j)(\forall y\in A_k) (m_j(x)\leq m_k(y)\implies (\exists z\in A_0)(x\leq i_j(z)\land i_k(z) \leq y))$, where $\{j,k\}=\{1,2\}$, then we say that $K$ has the superamalgamation property $(SUPAP)$.
\end{enumarab}
\end{definition}
\begin{definition} An algebra ${\mathfrak A}$ has the strong interpolation property, $SIP$ for short, if for all $X_1, X_2\subseteq A$, $a\in {\mathfrak Sg}^{{\mathfrak A}}X_1$ and $c\in {\mathfrak Sg}^{{\mathfrak A}}X_2$ with $a\leq c$, there exists $b\in {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)$ such that $a\leq b\leq c$.
\end{definition}
For an algebra ${\mathfrak A}$, $Co{\mathfrak A}$ denotes the set of congruences on ${\mathfrak A}$.
\begin{definition} An algebra ${\mathfrak A}$ has the congruence extension property, $CP$ for short, if for any $X_1, X_2\subseteq A$, whenever $R\in Co\, {\mathfrak Sg}^{{\mathfrak A}}X_1$, $S\in Co\, {\mathfrak Sg}^{{\mathfrak A}}X_2$ and $$R\cap {}^2{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)=S\cap {}^2{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2),$$ there exists a congruence $T$ on ${\mathfrak A}$ such that $$T\cap {}^2 {\mathfrak Sg}^{{\mathfrak A}}X_1=R \text{ and } T\cap {}^2{\mathfrak Sg}^{{\mathfrak A}}X_2=S.$$
\end{definition}
Maksimova and Madar\'asz proved that interpolation in the free algebras of a variety implies that the variety has the superamalgamation property. Using a similar argument, we prove this implication in a slightly more general setting. But first an easy lemma:
\begin{lemma} Let $K$ be a class of $BAO$'s. Let ${\mathfrak A}, {\mathfrak{B}}\in K$ with ${\mathfrak{B}}\subseteq {\mathfrak A}$. Let $M$ be an ideal of ${\mathfrak{B}}$. We then have:
\begin{enumarab}
\item ${\mathfrak{Ig}}^{{\mathfrak A}}M=\{x\in A: x\leq b \text{ for some } b\in M\}$;
\item $M={\mathfrak{Ig}}^{{\mathfrak A}}M\cap {\mathfrak{B}}$;
\item if ${\mathfrak{C}}\subseteq {\mathfrak A}$ and $N$ is an ideal of ${\mathfrak{C}}$, then ${\mathfrak{Ig}}^{{\mathfrak A}}(M\cup N)=\{x\in A: x\leq b+c \text{ for some } b\in M \text{ and } c\in N\}$;
\item for every ideal $N$ of ${\mathfrak A}$ such that $N\cap B\subseteq M$, there is an ideal $N'$ of ${\mathfrak A}$ such that $N\subseteq N'$ and $N'\cap B=M$. Furthermore, if $M$ is a maximal ideal of ${\mathfrak{B}}$, then $N'$ can be taken to be a maximal ideal of ${\mathfrak A}$.
\end{enumarab}
\end{lemma}
\begin{demo}{Proof} Only (4) deserves attention. The special case when $N=\{0\}$ is straightforward. The general case follows from this one by considering ${\mathfrak A}/N$, ${\mathfrak{B}}/(N\cap {\mathfrak{B}})$ and $M/(N\cap {\mathfrak{B}})$ in place of ${\mathfrak A}$, ${\mathfrak{B}}$ and $M$, respectively.
\end{demo}
The previous lemma will be used frequently without being explicitly mentioned.
\begin{theorem} Let $K$ be a class of $BAO$'s such that $\mathbf{H}K=\mathbf{S}K=K$.
Assume that for all ${\mathfrak A}, {\mathfrak{B}}, {\mathfrak{C}}\in K$ and inclusions $m:{\mathfrak{C}}\to {\mathfrak A}$, $n:{\mathfrak{C}}\to {\mathfrak{B}}$, there exist ${\mathfrak{D}}$ with $SIP$ and homomorphisms $h:{\mathfrak{D}}\to {\mathfrak{C}}$, $h_1:{\mathfrak{D}}\to {\mathfrak A}$, $h_2:{\mathfrak{D}}\to {\mathfrak{B}}$ such that for $x\in h^{-1}({\mathfrak{C}})$, $$h_1(x)=m\circ h(x)=n\circ h(x)=h_2(x).$$ Then $K$ has $SUPAP$.
\bigskip \bigskip \bigskip \bigskip \bigskip \bigskip \bigskip \bigskip \bigskip \bigskip \bigskip
\begin{picture}(10,0)(-30,70)
\thicklines
\put (-10,0){${\mathfrak{D}}$}
\put(5,0){\vector(1,0){70}}\put(80,0){${\mathfrak{C}}$}
\put(5,5){\vector(2,1){100}}\put(110,60){${\mathfrak A}$}
\put(5,-5){\vector(2,-1){100}}\put(110,-60){${\mathfrak{B}}$}
\put(85,10){\vector(1,2){20}}
\put(85,-5){\vector(1,-2){20}}
\put(40,5){$h$}
\put(100,25){$m$}
\put(100,-25){$n$}
\put(50,45){$h_1$}
\put(50,-45){$h_2$}
\end{picture}
\end{theorem}
\bigskip
\bigskip
\begin{proof} Let ${\mathfrak{D}}_1=h_1^{-1}({\mathfrak A})$ and ${\mathfrak{D}}_2=h_2^{-1}({\mathfrak{B}})$. Then $h_1:{\mathfrak{D}}_1\to {\mathfrak A}$ and $h_2:{\mathfrak{D}}_2\to {\mathfrak{B}}$. Let $M=\ker h_1$ and $N=\ker h_2$, and let $\bar{h_1}:{\mathfrak{D}}_1/M\to {\mathfrak A}$, $\bar{h_2}:{\mathfrak{D}}_2/N\to {\mathfrak{B}}$ be the induced isomorphisms. Let $l_1:h^{-1}({\mathfrak{C}})/(h^{-1}({\mathfrak{C}})\cap M)\to {\mathfrak{C}}$ be defined via $\bar{x}\mapsto h(x)$, and let $l_2:h^{-1}({\mathfrak{C}})/(h^{-1}({\mathfrak{C}})\cap N)\to {\mathfrak{C}}$ be defined via $\bar{x}\mapsto h(x)$. These maps are well defined, and hence $h^{-1}({\mathfrak{C}})\cap M=h^{-1}({\mathfrak{C}})\cap N$. We show that $P={\mathfrak{Ig}}(M\cup N)$ is a proper ideal and that ${\mathfrak{D}}/P$ is the desired algebra. Let $x\in {\mathfrak{Ig}}(M\cup N)\cap {\mathfrak{D}}_1$. Then there exist $b\in M$ and $c\in N$ such that $x\leq b+c$. Thus $x-b\leq c$. But $x-b\in {\mathfrak{D}}_1$ and $c\in {\mathfrak{D}}_2$; it follows that there exists an interpolant $d\in {\mathfrak{D}}_1\cap {\mathfrak{D}}_2$ such that $x-b\leq d\leq c$. We have $d\in N$, therefore $d\in M$, and since $x\leq d+b$, we get $x\in M$. It follows that ${\mathfrak{Ig}}(M\cup N)\cap {\mathfrak{D}}_1=M$, and similarly ${\mathfrak{Ig}}(M\cup N)\cap {\mathfrak{D}}_2=N$. In particular, $P={\mathfrak{Ig}}(M\cup N)$ is a proper ideal. Let $k:{\mathfrak{D}}_1/M\to {\mathfrak{D}}/P$ be defined by $k(a/M)=a/P$, and let $h':{\mathfrak{D}}_2/N\to {\mathfrak{D}}/P$ be defined by $h'(a/N)=a/P$. Then $k\circ m$ and $h'\circ n$ are one to one and $k\circ m \circ f=h'\circ n\circ g$. We now prove that ${\mathfrak{D}}/P$ is actually a superamalgam, i.e.\ we prove that $K$ has the superamalgamation property. Assume that $k\circ m(a)\leq h'\circ n(b)$. There exists $x\in {\mathfrak{D}}_1$ such that $x/P=k(m(a))$ and $m(a)=x/M$. Also there exists $z\in {\mathfrak{D}}_2$ such that $z/P=h'(n(b))$ and $n(b)=z/N$. Now $x/P\leq z/P$, hence $x-z\in P$. Therefore there are $r\in M$ and $s\in N$ such that $x-r\leq z+s$. Now $x-r\in {\mathfrak{D}}_1$ and $z+s\in{\mathfrak{D}}_2$; it follows that there is an interpolant $u\in {\mathfrak{D}}_1\cap {\mathfrak{D}}_2$ such that $x-r\leq u\leq z+s$.
Let $t\in {\mathfrak{C}}$ be such that $m\circ f(t)=u/M$ and $n\circ g(t)=u/N.$ We have $x/P\leq u/P\leq z/P$. Now $m(f(t))=u/M\geq x/M=m(a),$ thus $f(t)\geq a$. Similarly $n(g(t))=u/N\leq z/N=n(b)$, hence $g(t)\leq b$. By total symmetry, we are done.
\end{proof}
The intimate relationship between $CP$ and $AP$ has been worked out extensively by Pigozzi for cylindric algebras. Here we prove an implication in one direction for $BAO$'s.
\begin{theorem} Let $K$ be such that $\mathbf{H}K=\mathbf{S}K=K$. If $K$ has the amalgamation property, then the free algebras of $V(K)$ have $CP$.
\end{theorem}
\begin{proof} For $R\in Co{\mathfrak A}$ and $X\subseteq A$, by $({\mathfrak A}/R)^{(X)}$ we understand the subalgebra of ${\mathfrak A}/R$ generated by $\{x/R: x\in X\}.$ Let ${\mathfrak A}$, $X_1$, $X_2$, $R$ and $S$ be as specified in the definition of $CP$. Define $$\theta: {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)\to {\mathfrak Sg}^{{\mathfrak A}}(X_1)/R$$ by $$a\mapsto a/R.$$ Then $\ker\theta=R\cap {}^2{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)$ and ${\rm Im}\,\theta=({\mathfrak Sg}^{{\mathfrak A}}(X_1)/R)^{(X_1\cap X_2)}$. It follows that $$\bar{\theta}:{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)/R\cap {}^2{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)\to ({\mathfrak Sg}^{{\mathfrak A}}(X_1)/R)^{(X_1\cap X_2)},$$ defined by $$a/R\cap {}^{2}{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)\mapsto a/R,$$ is a well defined isomorphism. Similarly $$\bar{\psi}:{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)/S\cap {}^2{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)\to ({\mathfrak Sg}^{{\mathfrak A}}(X_2)/S)^{(X_1\cap X_2)},$$ defined by $$a/S\cap {}^{2}{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)\mapsto a/S,$$ is also a well defined isomorphism. But $$R\cap {}^2{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)=S\cap {}^2{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2),$$ hence $$\phi: ({\mathfrak Sg}^{{\mathfrak A}}(X_1)/R)^{(X_1\cap X_2)}\to ({\mathfrak Sg}^{{\mathfrak A}}(X_2)/S)^{(X_1\cap X_2)},$$ defined by $$a/R\mapsto a/S,$$ is a well defined isomorphism. Now $({\mathfrak Sg}^{{\mathfrak A}}(X_1)/R)^{(X_1\cap X_2)}$ embeds into ${\mathfrak Sg}^{{\mathfrak A}}(X_1)/R$ via the inclusion map; it also embeds into ${\mathfrak Sg}^{{\mathfrak A}}(X_2)/S$ via $i\circ \phi$, where $i$ is also an inclusion map. For brevity, let ${\mathfrak A}_0=({\mathfrak Sg}^{{\mathfrak A}}(X_1)/R)^{(X_1\cap X_2)}$, ${\mathfrak A}_1={\mathfrak Sg}^{{\mathfrak A}}(X_1)/R$, ${\mathfrak A}_2={\mathfrak Sg}^{{\mathfrak A}}(X_2)/S$ and $j=i\circ \phi$. Then ${\mathfrak A}_0$ embeds in ${\mathfrak A}_1$ and ${\mathfrak A}_2$ via $i$ and $j$ respectively. By the amalgamation property there exist ${\mathfrak{B}}\in K$ and monomorphisms $f$ and $g$ from ${\mathfrak A}_1$ and ${\mathfrak A}_2$ respectively to ${\mathfrak{B}}$ such that $f\circ i=g\circ j$. Let $$\bar{f}:{\mathfrak Sg}^{{\mathfrak A}}(X_1)\to {\mathfrak{B}}$$ be defined by $$a\mapsto f(a/R),$$ and let $$\bar{g}:{\mathfrak Sg}^{{\mathfrak A}}(X_2)\to {\mathfrak{B}}$$ be defined by $$a\mapsto g(a/S).$$ Let ${\mathfrak{B}}'$ be the algebra generated by ${\rm Im}\bar{f}\cup {\rm Im}\bar{g}$.
Then $(\bar{f}\cup \bar{g})\upharpoonright (X_1\cup X_2): X_1\cup X_2\to {\mathfrak{B}}'$ is a well defined function, since $\bar{f}$ and $\bar{g}$ coincide on $X_1\cap X_2$. By freeness of ${\mathfrak A}$, there exists $h:{\mathfrak A}\to {\mathfrak{B}}'$ such that $h\upharpoonright (X_1\cup X_2)=(\bar{f}\cup \bar{g})\upharpoonright (X_1\cup X_2)$. Let $T=\ker h.$ Then it is not hard to check that $$T\cap {}^2 {\mathfrak Sg}^{{\mathfrak A}}(X_1)=R \text{ and } T\cap {}^2{\mathfrak Sg}^{{\mathfrak A}}(X_2)=S.$$
\end{proof}
Finally we show that $CP$ implies a weak form of interpolation.
\begin{theorem} If an algebra ${\mathfrak A}$ has $CP$, then for $X_1, X_2\subseteq {\mathfrak A}$, if $x\in {\mathfrak Sg}^{{\mathfrak A}}X_1$ and $z\in {\mathfrak Sg}^{{\mathfrak A}}X_2$ are such that $x\leq z$, then there exist $y\in {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)$ and a term $\tau$ such that $x\leq y\leq \tau(z)$. If ${\mathfrak{Ig}}^{{\mathfrak{Bl}}{\mathfrak A}}\{z\}={\mathfrak{Ig}}^{{\mathfrak A}}\{z\},$ then $\tau$ can be chosen to be the identity term; in particular, this latter case occurs when $z$ is closed.
\end{theorem}
\begin{proof} Let $x\in {\mathfrak Sg}^{{\mathfrak A}}(X_1)$ and $z\in {\mathfrak Sg}^{{\mathfrak A}}(X_2)$, and assume that $x\leq z$. Then $$x\in ({\mathfrak{Ig}}^{{\mathfrak A}}\{z\})\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1).$$ Let $$M={\mathfrak{Ig}}^{{\mathfrak Sg}^{{\mathfrak A}}(X_2)}\{z\}\text{ and } N={\mathfrak{Ig}}^{{\mathfrak Sg}^{{\mathfrak A}}(X_1)}(M\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)).$$ Then $$M\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)=N\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2).$$ By identifying ideals with congruences and using the congruence extension property, there is an ideal $P$ of ${\mathfrak A}$ such that $$P\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1)=N\text{ and }P\cap {\mathfrak Sg}^{{\mathfrak A}}(X_2)=M.$$ It follows that $${\mathfrak{Ig}}^{{\mathfrak A}}(N\cup M)\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1)\subseteq P\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1)=N.$$ Hence $$({\mathfrak{Ig}}^{{\mathfrak A}}\{z\})\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1)\subseteq N,$$ and so we have $$x\in {\mathfrak{Ig}}^{{\mathfrak Sg}^{{\mathfrak A}}(X_1)}\bigl[{\mathfrak{Ig}}^{{\mathfrak Sg}^{{\mathfrak A}}(X_2)}\{z\}\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)\bigr].$$ This implies that there is an element $y\in {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)$ such that $x\leq y$ and $y\in {\mathfrak{Ig}}^{{\mathfrak Sg}^{{\mathfrak A}}(X_2)}\{z\}$, which gives the first assertion. The second assertion also follows immediately: in that case $y\leq z$, because ${\mathfrak{Ig}}^{{\mathfrak A}}\{z\}=\mathcal{R}l_z{\mathfrak A}$.
\end{proof}
\subsection{Specific theorems for algebras of substitutions}
By an algebra ${\mathfrak A}$ we mean a substitution algebra. For an algebra ${\mathfrak A}$ and $X\subseteq A$, $fl^{{\mathfrak A}}X$ denotes the Boolean filter generated by $X$.
\begin{theorem} Let ${\mathfrak A}={\mathfrak{Fr}}_XV$, and let $X_1, X_2\subseteq X$ be such that $X_1\cup X_2=X$. Assume that $a\in {\mathfrak Sg}^{{\mathfrak A}}X_1$ and $c\in {\mathfrak Sg}^{{\mathfrak A}}X_2$ are such that $a\leq c$. Then there exists an interpolant $b\in {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)$ such that $a\leq b\leq c$.
\end{theorem}
\begin{proof} We prove the theorem for the finite dimensional case and for $S_n$. All the other cases, for finite as well as infinite dimensions, can be handled in exactly the same manner, with the obvious modifications. In case we have diagonals, we just factor out the base of the representations constructed, as in the previous proofs. Assume that $a\leq c$ but that there is no such interpolant $b$; we will reach a contradiction. Let $$H_1=fl^{{\mathfrak{Bl}}{\mathfrak Sg}^{{\mathfrak A}}X_1}\{a\}=\{x: x\geq a\},$$ $$H_2=fl^{{\mathfrak{Bl}}{\mathfrak Sg}^{{\mathfrak A}}X_2}\{-c\}=\{x: x\geq -c\},$$ and $$H=fl^{{\mathfrak{Bl}}{\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)}\bigl[(H_1\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2))\cup (H_2\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2))\bigr].$$ We show that $H$ is a proper filter of ${\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)$. For this, it suffices to show that for any $b_0,b_1\in {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)$, any $x_1\in H_1$ and any $x_2\in H_2$, if $a.x_1\leq b_0$ and $-c.x_2\leq b_1$, then $b_0.b_1\neq 0$. Now $a.x_1=a$ and $-c.x_2=-c$. So assume, to the contrary, that $b_0.b_1=0$. Then $a\leq b_0$ and $-c\leq b_1$, and so $a\leq b_0\leq-b_1\leq c$, which is impossible because we assumed that there is no interpolant. Hence $H$ is a proper filter. Let $H^*$ be an ultrafilter of ${\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)$ containing $H$, and let $F$ be an ultrafilter of ${\mathfrak Sg}^{{\mathfrak A}}X_1$ and $G$ an ultrafilter of ${\mathfrak Sg}^{{\mathfrak A}}X_2$ such that $$F\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)=H^*=G\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2).$$ Such ultrafilters exist; for instance, one can take $F$ to be any ultrafilter of ${\mathfrak Sg}^{{\mathfrak A}}X_1$ extending $H_1\cup H^*$ and $G$ any ultrafilter of ${\mathfrak Sg}^{{\mathfrak A}}X_2$ extending $H_2\cup H^*$, so that in particular $a\in F$ and $-c\in G$. For simplicity of notation, let ${\mathfrak A}_1={\mathfrak Sg}^{{\mathfrak A}}(X_1)$ and ${\mathfrak A}_2={\mathfrak Sg}^{{\mathfrak A}}(X_2).$ Define $h_1:{\mathfrak A}_1\to \wp(S_n)$ by $$h_1(x)=\{\eta\in S_n: s_{\eta}x\in F\},$$ and $h_2:{\mathfrak A}_2\to \wp(S_n)$ by $$h_2(x)=\{\eta\in S_n: s_{\eta}x\in G\}.$$ Then $h_1, h_2$ are homomorphisms, and they agree on ${\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2).$ Indeed, let $x\in {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)$. Then $\eta\in h_1(x)$ iff $s_{\eta}x\in F$ iff $s_{\eta}x\in F\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)=H^*=G\cap {\mathfrak Sg}^{{\mathfrak A}}(X_1\cap X_2)$ iff $s_{\eta}x\in G$ iff $\eta\in h_2(x)$. Thus $h_1\cup h_2$ is a function. By freeness there is an $h:{\mathfrak A}\to \wp(S_n)$ extending $h_1$ and $h_2$. Now $Id\in h(a)\cap h(-c)$, so $h(a)\cap h(-c)\neq \emptyset$, which contradicts $a\leq c$.
\end{proof}
\begin{corollary}\label{SUPAP} All the varieties considered have the superamalgamation property.
\end{corollary}
\begin{proof} Let ${\mathfrak A}, {\mathfrak{B}}, {\mathfrak{C}}\in K$ and inclusions $m:{\mathfrak{C}}\to {\mathfrak A}$, $n:{\mathfrak{C}}\to {\mathfrak{B}}$ be given. Take ${\mathfrak{D}}$ to be the free algebra on a set $I\cup J$ of generators such that $|I|=|A|$, $|J|=|B|$ and $|I\cap J|=|C|$.
Then clearly there exist $h:{\mathfrak{D}}\to {\mathfrak{C}}$, $h_1:{\mathfrak{D}}\to {\mathfrak A}$, $h_2:{\mathfrak{D}}\to {\mathfrak{B}}$ such that for $x\in h^{-1}(C)$, $h_1(x)=m\circ h(x)=n\circ h(x)=h_2(x).$ \end{proof} \subsection*{Remark} Like representability, the infinite dimensional case may be inferred from the finite dimensional case as follows. Let $\tau$ and $\sigma$ be terms in the language of $SA_{\alpha}$ and assume that $K\models \tau \leq \sigma$. Then there is a finite $n$ such that $K_n\models \tau\leq \sigma$, and we can find an interpolant. \subsubsection{Another proof} Here we give a different syntactical proof, depending on the fact that our varieties can be axiomatized by a set of positive equations. This follows from the simple observation that Boolean algebras can be defined by equations involving only meet and join, and so Boolean homomorphisms can be defined to respect only those two operations, so that we can get rid of any reference to negation in our axioms. We prove our theorem only for transposition algebras; the rest of the cases are the same. \begin{definition} \begin{enumarab} \item A frame of type $TA_{\alpha}$ is a first order structure ${\mathfrak F}=(V, S_{ij})_{i,j\in \alpha}$ where $V$ is an arbitrary set and $S_{ij}$ is a binary relation on $V$ for all $i, j\in \alpha$. \item Given a frame ${\mathfrak F}$, its complex algebra, denoted by ${\mathfrak F}^+$, is the algebra $(\wp({\mathfrak F}), s_{ij})_{i,j}$ where for $X\subseteq V$, $s_{ij}(X)=\{s\in V: \exists t\in X, (t, s)\in S_{i,j} \}$. \item Given $K\subseteq TA_{\alpha},$ then ${\mathfrak{C}}m^{-1}K=\{{\mathfrak F}: {\mathfrak F}^+\in K\}.$ \item Given a family $({\mathfrak F}_i)_{i\in I}$ of frames, a zigzag product of these frames is a substructure $S$ of $\prod_{i\in I}{\mathfrak F}_i$ such that the projection maps restricted to $S$ are onto. \end{enumarab} \end{definition} \begin{theorem}(Marx) Assume that $K$ is a canonical variety and $L={\mathfrak{C}}m^{-1}K$ is closed under finite zigzag products. Then $K$ has the superamalgamation property. \end{theorem} \begin{theorem} The variety $TA_{\alpha}$ has $SUPAP$. \end{theorem} \begin{proof} Since $TA_{\alpha}$ is defined by positive equations, it is canonical. In this case $L={\mathfrak{C}}m^{-1}TA_{\alpha}$ consists of frames $(V, S_{i,j})$ such that if $s\in V$, then $s\circ [i,j]\in V$. The first order correspondents of the positive equations, translated to the class of frames, will be Horn formulas, hence clausifiable, and so $L$ is closed under finite zigzag products. Marx's theorem finishes the proof. \end{proof} \subsection*{Remark} When we add cylindrifications things blow up. For such algebras the class of subdirect products of set algebras is a variety, but it is neither finitely axiomatizable nor decidable, and the free algebras are not atomic. If we do not insist on commutativity of cylindrifiers we get a nice representation theory; witness the theorems of Andr\'eka, Resek, Thompson and Ferenczi [reference to be provided]. \section{Logical consequences} Let ${\mathfrak{L}}_n$ denote the fragment of first order logic with $n$ many variables. A particular language has countably many relation symbols of the form $R(x_0,\ldots,x_{n-1})$, where the variables occur in their natural order and the $s_{[i,j]}$'s are treated as connectives.
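For instance, when $n=3$, typical formulas of ${\mathfrak{L}}_3$ include $R(x_0,x_1,x_2)$, $\neg R(x_0,x_1,x_2)\vee S(x_0,x_1,x_2)$ and $s_{[0,1]}R(x_0,x_1,x_2)$, where $R$ and $S$ stand for any two of the relation symbols; the particular symbols are chosen here only by way of illustration.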
A structure ${\mathfrak{M}}$ is specified, as in ordinary first order logic, by specifying for every relation symbol an $n$-ary relation on $M$, the domain of ${\mathfrak{M}}$. Satisfiability is defined inductively in the usual way: for $s\in {}^nM$ and a formula $\phi$, ${\mathfrak{M}}\models s_{[i,j]}\phi[s]$ iff $s\circ [i,j]$ satisfies $\phi$. For a structure ${\mathfrak{M}}$, a formula $\phi$, and an assignment $s\in {}^nM$, we write ${\mathfrak{M}}\models \phi[s]$ if $s$ satisfies $\phi$ in ${\mathfrak{M}}$. We write $\phi^{{\mathfrak{M}}}$ for the set of all assignments satisfying $\phi.$ Then the algebra with universe $\{\phi^{{\mathfrak{M}}}:\phi\in {\mathfrak{L}}\}$ is a set algebra. Now consider the basic declarative statement in this fragment of first order logic concerning the truth of a formula in a model under an assignment $s$: ${\mathfrak{M}}\models \phi[s].$ We can read this from a modal perspective as `the formula $\phi$ is true in ${\mathfrak{M}}$ at state $s$'. Indeed we can replace the above truth definition with the modal equivalent: ${\mathfrak{M}}\models s_{[i,j]}\phi[s]$ iff there is $t\in {}^nM$ with $t\equiv_{i,j}s$ and ${\mathfrak{M}}\models \phi[t]$, where $\equiv_{i,j}$ is the relation on ${}^nM$ defined by $s\equiv_{i,j} t$ iff $s\circ [i,j]=t.$ In other words, the substitutions behave like a modal diamond having $\equiv_{i,j}$ as its accessibility relation. So we can look at set algebras as complex algebras of frames of the form $({}^nU, \equiv_{i,j}).$ Since the semantics of the Boolean connectives in the predicate calculus is the same as in modal logic, this shows that the inductive clauses in the truth definition of first order logic neatly fit a modal mould. In fact, the modal disguise of this fragment of first order logic is so thin that there is an absolutely straightforward translation mapping formulas to modal ones. But we can relativize the set of states to permutable sets of sequences. Now, bearing this double view in mind, we consider the multi dimensional modal logic ${\mathfrak{L}}$ corresponding to $TA_n$. We choose to work with $S_n$ since it has more (metalogical) theorems. The infinite dimensional case is also identical, modulo replacing $S_n$ by the finite permutations in ${}^{\alpha}\alpha^{(Id)}$ where $\alpha$ is the dimension. The proofs of the common theorems are the same, with the obvious modifications. ${\mathfrak{L}}$ has a set $P$ of countably many propositional variables, the Boolean connectives and a modality $R_{i,j}$ for every $i,j\in n$. A frame is a tuple $(V, R_{i,j})$ where $V$, called the set of states, is permutable, and the $R_{i,j}$ are binary relations on $V$ defined by $(s,t)\in R_{i,j}$ iff $s\circ [i,j]=t$. A model ${\mathfrak{M}}$ is a triple $(V, R_{i,j}, s)$ where $(V,R_{i,j})$ is a frame and $s:P\to \wp(V)$. The notion of satisfiability in ${\mathfrak{M}}$ of a formula $\phi$ at a state $w$ is defined inductively in the usual way, and the semantical relation $\models$ is defined accordingly. Now terms in the language of $TA_n$ translate to formulas in the usual way. One effectively translates the set of axioms of $TA_n$ to a finite set of formula schemas $Ax$, each in the form of an equivalence. This can be done inductively. For a term $t$ write $\phi_t$ for the corresponding formula schema. Then we have $TA_n\models t_1=t_2$ iff $Ax\vdash \phi_{t_1}\leftrightarrow \phi_{t_2}$. We now formulate the metalogical counterparts of our algebraic results using standard machinery of algebraic logic.
\begin{theorem} $Ax$ with modus ponens is a finite complete Hilbert-style axiomatization. That is, for any set $\Gamma$ of formulas, if $\Gamma\models \phi$ then $\Gamma\vdash \phi$. Furthermore, there is an effective proof of $\phi$. \end{theorem} \begin{proof} We prove that any consistent set $T$ of formulas is satisfiable, and indeed satisfiable in a finite model. Assume that $T$ and $\phi$ are given. Form the Lindenbaum-Tarski algebra ${\mathfrak A}={\mathfrak{Fm}}_T$ and let $a=\phi/T$. Then $a$ is non-zero, because $\phi$ is consistent with $T$. Let ${\mathfrak{B}}$ be a set algebra with unit $D$ and let $f:{\mathfrak A}\to \wp(D)$ be a representation such that $f(a)\neq 0$. We extract a model $D$ of $T$, with base $M$, from ${\mathfrak{B}}$ as follows. For a relation symbol $R$ and $s\in D$, we let $D,s\models R$ if $s\in f(R(x_0,x_1,\ldots,x_{n-1})/T)$. Here the variables occur in their natural order. \end{proof} \begin{corollary} ${\mathfrak{L}}$ has the finite base property, that is, if $\phi$ is satisfiable in a model, then it is satisfiable in a finite model. In particular, ${\mathfrak{L}}$ is complete with respect to the class of finite frames. \end{corollary} \begin{proof} This follows since the variety considered is locally finite. \end{proof} \begin{theorem} \begin{enumarab} \item ${\mathfrak{L}}$ has the Craig interpolation property. That is to say, if $\phi, \psi$ are formulas such that $\vdash \phi\to \psi$, then there is a formula $\theta$ in their common vocabulary such that $\vdash \phi\to \theta$ and $\vdash \theta\to \psi.$ \item ${\mathfrak{L}}$ has the joint consistency property: if $T_1$ and $T_2$ are consistent theories such that $T_1\cap T_2$ is complete, then $T_1\cup T_2$ is consistent. \item ${\mathfrak{L}}$ has the Beth definability property. \end{enumarab} \end{theorem} \begin{proof} By Corollary~\ref{SUPAP}, noting that $SUPAP$ implies $CP$, and that it implies that epimorphisms are surjective, which is the algebraic equivalent of Beth definability. \end{proof} \begin{definition} Let $T$ be a theory. A set $\Gamma$ of formulas is principal if there exists $\phi$ consistent with $T$ such that $T\models \phi\to \psi$ for every $\psi\in\Gamma$. Otherwise $\Gamma$ is non-principal. \end{definition} \begin{theorem} If $\Gamma$ is non-principal, then there is a model ${\mathfrak{M}}$ of $T$ for which there is no $w$ such that ${\mathfrak{M}},w\models \phi$ for all $\phi$ in $\Gamma.$ \end{theorem} \begin{proof} By Theorem~\ref{OTT}. \end{proof} The above theorem extends to omitting $< {\sf cov}K$ many types. \begin{definition} Let $T$ be a given ${\mathfrak{L}}$ theory. \begin{enumarab} \item A formula $\phi$ is said to be complete in $T$ iff for every formula $\psi$ exactly one of $$T\models \phi\to \psi, \quad T\models \phi\to \neg \psi$$ holds. \item A formula $\theta$ is completable in $T$ iff there is a complete formula $\phi$ with $T\models \phi\to \theta$. \item $T$ is atomic iff every formula consistent with $T$ is completable in $T.$ \item A model ${\mathfrak{M}}$ of $T$ with set of states $V$ is atomic iff for every $s\in V$, there is a complete formula $\phi$ such that ${\mathfrak{M}}, s\models \phi.$ \end{enumarab} \end{definition} \begin{theorem} If $T$ is atomic, then $T$ has a model ${\mathfrak{M}}$ such that for any state $w$ there is a complete formula $\phi$ such that ${\mathfrak{M}},w\models \phi$. \end{theorem} \begin{proof} From Theorem~\ref{com}. \end{proof} \begin{thebibliography}{} \bibitem{Burris} S.
Burris and H. P. Sankappanavar, \emph{A Course in Universal Algebra}, Graduate Texts in Mathematics, Springer-Verlag, New York, 1981. \bibitem{Hirsh} R. Hirsch and I. Hodkinson, \emph{Complete representations in algebraic logic}, The Journal of Symbolic Logic, Vol. 62, No. 3 (1997), pp. 816--847. \bibitem{HMT1} L. Henkin, J. D. Monk and A. Tarski, \emph{Cylindric Algebras Part I}, North Holland, 1971. \bibitem{HMT2} L. Henkin, J. D. Monk and A. Tarski, \emph{Cylindric Algebras Part II}, North Holland, 1985. \bibitem{semigroup} O. Ganyushkin and V. Mazorchuk, \emph{Classical Finite Transformation Semigroups: An Introduction}, Springer, 2009. \bibitem{sagiphd} G. S\'agi, \emph{A note on algebras of substitutions}, Studia Logica, Volume 72, Issue 2 (2002), pp. 265--284. \bibitem{Tarek} T. S. Ahmed, \emph{Cylindric-like Algebras and Algebraic Logic}, Bolyai Society Mathematical Studies, Vol. 22, H. Andr\'eka, M. Ferenczi and I. N\'emeti (Eds.), 2013. \end{thebibliography} \end{document}
\begin{document} \title{On the number of cycles in a random permutation} \author[K. Maples]{Kenneth Maples} \address{Institut f\"ur Mathematik\\ Universit\"at Z\"urich\\ Winterthurerstrasse 190\\ 8057-Z\"urich, Switzerland} \email{[email protected]} \author[A. Nikeghbali]{Ashkan Nikeghbali} \address{Institut f\"ur Mathematik\\ Universit\"at Z\"urich\\ Winterthurerstrasse 190\\ 8057-Z\"urich, Switzerland} \email{[email protected]} \author[D. Zeindler]{Dirk Zeindler} \address{ Sonderforschungsbereich 701\\ Fakult\"at f\"ur Mathematik\\ Universit\"at Bielefeld\\ Postfach 10 01 31\\ 33501 Bielefeld\\ Germany} \email{[email protected]} \subjclass[2000]{Primary 60C05; Secondary 60F05, 60F10} \maketitle \thispagestyle{empty} \begin{abstract} We show that the number of cycles in a random permutation chosen according to generalized Ewens measure is normally distributed and compute asymptotic estimates for the mean and variance. \end{abstract} \section{Introduction} Let $S_n$ denote the permutation group on $n$ letters. For each permutation $\sigma \in S_n$, we write $c_j(\sigma)$ for the number of disjoint cycles of length $j$ in $\sigma$. For any permutation, we let $K_{0n}(\sigma) := \sum_{j=1}^n c_j(\sigma)$ denote the number of cycles in $\sigma$. We are interested in the statistics of permutations produced in a random way. Random (uniform) permutations and their cycle structures have received much attention and have a long history (see e.g. the first chapter of \cite{ABT02} for a detailed account with references). The literature on the topic has grown quickly in recent years in relation to mathematical biology and theoretical physics, where models of non-uniform permutations are considered (see e.g. \cite{BeUe09, BeUe10, BeUeVe11, ErUe11}). We will restrict our attention to random permutations with cycle weights as considered in the recent work of Betz, Ueltschi and Velenik \cite{BeUeVe11} and Ercolani and Uelstschi \cite{ErUe11}. These are families of probability measures on $S_n$ that are constant on conjugacy classes with the distribution \[ \mathbb{P}(\sigma) := \frac1{h_n n!} \prod_{j=1}^n \theta_j^{c_j(\sigma)} \] where $\theta_j \geq 0$ is a given sequence of weights and $h_n$ is a normalization constant. If $\theta_j = 1$ for all $j$ then this is the uniform measure on $S_n$, while if $\theta_j = \theta_0$ is constant then this gives Ewens measure, which plays an important role in mathematical biology. A situation of interest which appears in the study of the quantum gas in statistical mechanics is when the asymptotic behavior of $\theta_j$ is fixed for large $j$ (see \cite{BeUeVe11} and \cite{ErUe11}). Natural important historical questions arise on the behavior of $c_j(\sigma)$ or $K_{0n}(\sigma)$. For instance, it is known that under the Ewens measure, or in special cases of weighted random permutations, the cycle counts $(c_j(\sigma))_j$ converge to independent Poisson distributions (see \cite{ABT02} for the Ewens measure and \cite{ErUe11} and \cite{NikZei11} for weighted random permutations). The case of $K_{0n}(\sigma)$ is in fact more delicate and less results are available in the general case with cycle weights. It is well known that under the Ewens measure $K_{0n}(\sigma)$ satisfies a central limit theorem (see \cite{ABT02} for details and historical references). The methods used in this case are very probabilistic and rely on the Feller coupling. However, the Feller coupling does not exist in the model of random permutations with cycle weights. 
Ercolani and Ueltschi (\cite{ErUe11}) used generating series and refined saddle point analysis to obtain some asymptotic estimates for the mean of $K_{0n}(\sigma)$ in some special cases but were not able to prove any central limit theorem. In \cite{NikZei11} the second and third authors used generating series and singularity analysis to prove a central limit theorem and some large deviations estimates in the cases where the generating series exhibit some logarithmic singularities, but the important cases corresponding to subexponential and algebraic growth of the generating series were still open (see the corollaries below for a more precise statement). In this paper we propose yet another, but more elementary, approach based on Cauchy's integral theorem for analytic functions to solve these problems. More precisely, with a sequence $\theta = \{\theta_j\}_{j=1}^\infty$ fixed, we write \[ g_\theta(t) = \sum_{k=1}^\infty \frac{\theta_k}{k} t^k \] for the indicated generating function. We will always assume that the series for $g_\theta$ converges in a neighborhood of the origin. We will also require that $g_\theta$ satisfies a technical condition which we call $\log$-admissibility, which will be defined in Section~\ref{sec:estmgf}. Our main result is the following. \begin{theorem}[Central Limit Theorem for $K_{0n}$] \label{thm:main} Suppose that $g_\theta(t)$ is defined for $t \in [0,1)$ with $g_\theta(t) \to \infty$ as $t \to 1$. Suppose further that $g$ is $\log$-admissible (see Definition~\ref{def:admissible}). Then there are sequences $\mu_n$ and $\sigma_n$ such that \[ \frac{K_{0n} - \mu_n}{\sigma_n} \] converges to a standard normal distribution. \end{theorem} The mean and standard deviation can also be explicitly computed. For now we will state explicit results for two cases of interest, and defer the general result to Section~\ref{sec:totalnumber}. \begin{corollary} \label{cor:genalgebraic} Let $g(t) = \gamma (1 - t)^{-\beta}$ for some $\beta > 0$ and $\gamma > 0$. Then $K_{0n}$ converges asymptotically to a normal distribution with mean \[ \mu_n = (\beta \gamma)^{-\beta/(\beta + 1)} n^{\beta/(\beta+1)} (1 + o(1)) \] and variance \[ \sigma_n^2 = ((\beta \gamma)^{-1} - (\beta + 1)^{-1}) (\beta \gamma)^{1/(\beta + 1)} n^{\beta/(\beta + 1)} (1 + o(1)). \] \end{corollary} \begin{corollary} \label{cor:exponential} Let $g(t) = \exp (1 - t)^{-\beta}$ for some $\beta > 0$. Then $K_{0n}$ converges asymptotically to a normal distribution with mean \[ \mu_n = \frac{n}{(\log n)^{1 + 1/\beta}} (1 + o(1)) \] and variance \[ \sigma_n^2 = (2 + \beta^{-1}) \frac{n}{(\log n)^{2 + 1/\beta}} \] \end{corollary} Our approach relies in the following well-known calculation of the moment generating function for $K_{0n}$. \begin{proposition} \label{prop:mgf} We have the power series identity \[ \exp( e^{-s} g_\theta(t)) = \sum_{n=0}^\infty h_n \mathbb{E} \exp(-s K_{0n}) t^n \] for all $s \in \mathbb{R}$, with the conventions that $h_0=1$ and $K_{00}=0$. \end{proposition} This proposition follows immediately from Polya's enumeration theorem with a small calculation. More details can be found for instance in \cite[Section 4]{NikZei11} \begin{remark} In \cite{NikZei11} the characteristic function $\mathbb{E} \exp(i t K_{0n})$ of $K_{0n}$ was considered. In our new approach it is crucial to rather consider the Laplace transform $\mathbb{E} \exp(-s K_{0n})$ where the variable $s$ is real in order to be able to evaluate the relevant contour integrals. \end{remark} The outline of this article is as follows. 
In Section~\ref{sec:estmgf} we define $\log$-admissible $g(t)$ and derive a formula for the coefficients of its generating function. In Section~\ref{sec:totalnumber} we use the formula to compute asymptotics for $\mathbb{E} \exp(-s K_{0n})$, of which Theorem~\ref{thm:main} is a direct consequence. We also prove Corollary~\ref{cor:genalgebraic} and Corollary~\ref{cor:exponential}. Finally, in Section~\ref{sec:largedeviation} we show how the proof of Theorem~\ref{thm:main} can be modified to give large deviation estimates for $K_{0n}$. \subsection*{Notation} We will also freely employ asymptotic notation as follows; let $f$, $g$, and $h$ be arbitrary functions of a parameter $n$. Then we write $f = O(g)$ to indicate the existence of a constant $C$ and threshold $n_0$ such that for all $n > n_0$, $\abs{f(n)} \leq C \abs{g(n)}$; the constant and threshold may be different in each use of the notation. We also write $f = h + O(g)$ to indicate $\abs{f - h} = O(g)$. We similarly write $f = o(g)$ to indicate that \[ \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0. \] It is convenient to also employ Vinogradov notation: we write $f \lesssim g$ (and equivalently $g \gtrsim f$) for $f = O(g)$. \section{Estimates for the moment generating function} \label{sec:estmgf} We shall now introduce the class of functions $g_\theta(t)$ where we can compute the asymptotic behavior of $\mathbb{E}\exp(-s K_{0n})$. \begin{definition} \label{def:admissible} Let $g(t) = \sum_{n=0}^\infty g_n t^n$ be given with radius of convergence $\rho > 0$ and $g_n \geq 0$. We say that $g(t)$ is \emph{$\log$-admissible} if there exist functions $a,b,\delta:[0,\rho) \to \mathbb{R}^+$ and $R : [0,\rho) \times (-\pi/2, \pi/2) \to \mathbb{R}^+$ with the following properties. \begin{description} \item[approximation] For all $\abs{\varphi} \leq \delta(r)$ we have the expansion \begin{align} g(re^{i\varphi}) = g(r) + i\varphi a(r)-\frac{\varphi^2}{2} b(r) + R(r,\varphi) \label{eq_admissible_expansion} \end{align} where $R(r,\varphi) = o(\varphi^3 \delta(r)^{-3})$ and the implied constant is uniform. \item[divergence] $a(r) \to \infty$, $b(r) \to \infty$ and $\delta(r) \to 0$ as $r \to \rho$. \item[width of convergence] For all $\epsilon > 0$, we have $\epsilon\delta^2(r)b(r) - \log b(r) \to \infty$ as $r \to \rho$. \item[monotonicity] For all $\abs{\varphi} > \delta(r)$, we have \begin{align} \mathbb{R}e g(re^{i\varphi}) \leq \mathbb{R}e g(re^{\pm i\delta(r)}). \end{align} \end{description} \end{definition} These properties can be interpreted as a logarithmic analogue of Hayman-admissibility \cite[Chapter~VIII.5]{FlSe09}. In fact, if $g(t)$ is $\log$-admissible then $\exp g(t)$ is Hayman-admissible. We have also introduced the $\epsilon$ term in the width condition, which is required for uniformity in the error term of Proposition~\ref{prop:generalasymptotic}. The approximation condition allows us to compute the functions $a$ and $b$ exactly. We have \begin{align} a(r) &= rg'(r), \\ b(r) &= rg'(r) +r^2 g''(r). \end{align} Clearly $a$ and $b$ are strictly increasing real analytic functions in $[0, \rho)$. The error in the approximation can similarly be bounded, so that \[ R(r,\varphi) \lesssim rg'(r) + 3r^2 g''(r) + r^3 g'''(r). \] Note that for Ewens measure ($\theta_j = 1$ for all $j \geq 0$), we have $g(r) = - \log(1 - r)$, so we can compute \begin{align*} b(r) = \frac{r}{(1-r)^2} \text{ and } R(r,\varphi) \approx \frac{r^2 + r}{(1-r)^3}. 
\end{align*} Therefore $g(r) = - \log(1-r)$ is not $\log$-admissible and we cannot apply this method to such distributions. We will frequently require the inverse function of $a$ on the interval $[0,\rho)$, and define $r_x$ to be the (unique) solution to $a(r) = x$ there. It is easy to see that $r_x$ is strictly increasing real analytic function in $x$ and $r_x \to \rho $ if $x$ tends to infinity. We now define for $s \in \mathbb{R}$ the sequence $G_{n,s}$ by \begin{align}\label{eq:generating_general} \sum_{n=0}^\infty G_{n,s} t^n = \exp (e^{-s} g(t)). \end{align} If $g(t)$ is $\log$-admissible, then we can compute the asymptotic behavior of $G_{n,s}$ for $n\to \infty$. The generating function in Proposition~\ref{prop:mgf} has the same form as \eqref{eq:generating_general} and we thus can compute also the asymptotic behavior of the moment generating function for $K_{0n}$ if $g_\theta$ is $\log$-admissible. \begin{proposition} \label{prop:generalasymptotic} Let $s\in \mathbb{R}$ and let $g$ be $\log$-admissible with associated functions $a$, $b$. Let further be $r_x$ to be the (unique) solution of $a(r) = x$. Then $G_{n,s}$ has the asymptotic expansion \[ G_{n,s} = \frac{1}{\sqrt{2 \pi}} e^{s/2} r_{e^s n}^{-n} b(r_{e^s n})^{-1/2} \exp(e^{-s} g(r_{e^s n})) (1 + o(1)) \] as $n \to \infty$ with the implied constant uniform in $s$ for $s$ bounded. \end{proposition} \begin{proof} We apply Cauchy's integral formula to the circle $\gamma$ centered at $0$ of radius $r$. We get \begin{align*} G_{n,s} &= \frac{1}{2 \pi i} \int_{\gamma} \exp( e^{-s} g(z) ) \, \frac{dz}{z^{n+1}} \\ &= \frac{1}{2\pi r^n} \int_{-\pi}^\pi \exp( e^{-s} g(re^{i\varphi}) - in\varphi ) \, d\varphi \end{align*} with $r = r_{e^s n}$. We now split the integral into the parts $[-\delta(r),\delta(r)]$ and $[-\pi, -\delta(r)) \cup (\delta(r), \pi]$. We first look at the minor arc $[-\delta(r),\delta(r)]$. By hypothesis on $g$ we can expand in $\varphi$, giving \begin{align*} I_1 &:= \int_{-\delta(r)}^{\delta(r)} \exp( e^{-s} g(re^{i\varphi}) - in\varphi ) \, d\varphi \\ &= \int_{-\delta(r)}^{\delta(r)}\exp( e^{-s} (g(r) + i\varphi a(r) - b(r)\varphi^2/2 + o(\varphi^3 \delta(r)^{-3}) - in\varphi)) \, d\varphi. \end{align*} We have $e^{-s} a(r_{e^s n}) = n$ since $r = r_{e^s n}$, which cancels the linear terms in $\varphi$. We get \[ I_1 = \int_{-\delta(r)}^{\delta(r)}\exp( e^{-s} (g(r) - b(r)\varphi^2/2 + \varphi^3 o(\delta(r)^{-3}) )) \, d\varphi. \] We now observe that $\varphi^3 o(\delta(r)^{-3}) = o(1)$ on $[-\delta(r), \delta(r)]$ as $r = r_{e^s n} \to \rho$, or equivalently as $n \to \infty$. Rearranging, we get \[ I_1 = \exp(e^{-s} g(r)) \int_{- \delta(r)}^{\delta(r)} \exp(-e^{-s} b(r) \varphi^2 / 2) \, d\varphi (1 + o(1)). \] By assumption on the width of convergence of $g$, the integral converges to \[ e^{s/2} b(r)^{-1/2} \int_{- \delta(r) e^{-s/2} b(r)^{1/2}}^{\delta(r) e^{-s/2} b(r)^{1/2}} \exp(-x^2/2) \, dx = \sqrt{2 \pi} e^{s/2} b(r)^{-1/2} (1 + o(1)) \] so that \[ I_1 = \sqrt{2 \pi} \exp(e^{-s} g(r)) e^{s/2} b(r)^{-1/2} (1 + o(1)). \] Next we estimate the integral over the major arc. 
By the monotonicity assumption on $g$, we have \[ I_2 := \int_{[-\pi, -\delta(r)] \cup [\delta(r), \pi]} \exp(e^{-s} g(r e^{i\varphi}) - i n \varphi) \, d\varphi \leq 2 \pi \exp( \mathbb{R}e (e^{-s} g(r e^{i \delta(r)}) )). \] We can apply the approximation for $g$ at $\varphi = \delta(r)$, giving \[ \mathbb{R}e g(r e^{i \delta(r)}) = g(r) - \frac{\delta(r)^2}{2} b(r) + o(1). \] Collecting terms and rearranging, \begin{align}\label{eq:I_2_estimates} I_2 &\leq 2 \pi \exp(e^{-s} g(r)) b(r)^{-1/2} \exp(- e^{-s} \delta(r)^2 b(r) / 2 + \frac12 \log b(r)) \nonumber\\ &= o( \exp(e^{-s} g(r)) b(r)^{-1/2} ) \end{align} where at the last step we used the width of approximation for $g$. Combining the estimates for $I_1$ and $I_2$, we find that \[ G_{n,s} = \frac{1}{\sqrt{2 \pi}} r_{e^{s} n}^{-n} e^{s/2} b(r_{e^s n})^{-1/2} \exp(e^{-s} g(r_{e^s n})) (1 + o(1)), \] where the error term is uniform in $s$ for $s$ in a fixed compact set. Note that all error-terms in this proof are uniform in $s$ for $s$ in a fixed compact set. \end{proof} Note that the $\epsilon$ in the definition of $\log$-admissibility is required to make the error in \eqref{eq:I_2_estimates} uniform in $s$. \section{The total number of cycles} \label{sec:totalnumber} We are now ready to compute the asymptotic number of cycles in a random permutation as described in the introduction. We will restrict our attention to those examples in \cite{ErUe11} where the limiting behavior was not known, namely where the generating function $g_\theta$ is of the form \[ g_\theta(r) = \gamma (1 - r)^{-\beta} \] or \[ g_\theta(r) = \exp (1 - r)^{-\beta}. \] We will refer to such functions as exhibiting algebraic and sub-exponential growth, respectively. We begin by observing that a formula for the moment generating function of $K_{0n}$ can be determined from Proposition~\ref{prop:generalasymptotic}. \begin{corollary} \label{cor:mgfexpansion} Let $s\in \mathbb{R}$, $g_\theta(t)$ be $\log$-admissible with associated functions $a$, $b$. Let further $r_x$ be the (unique) solution to $a(r) = x$. Then \[ h_n = \frac{1}{\sqrt{2\pi b(r_n)} r_n^n} \exp(g_\theta(r_n)) (1 + o(1)) \] and \[ \mathbb{E} \exp(-s K_{0n}) = e^{s/2} \left( \frac{r_n}{r_{e^s n}} \right)^n \frac{\exp(e^{-s} g_\theta(r_{e^s n})) }{\exp(g_\theta(r_n))} \left( \frac{b(r_n)}{b(r_{e^s n})} \right)^{1/2} (1 + o(1)), \] where the error terms are uniform in $s$ for $s$ in a fixed compact set. \end{corollary} \begin{proof} By Proposition~\ref{prop:mgf}, we have \[ h_n \mathbb{E} \exp(-s K_{0n}) = G_{n,s}. \] Apply Proposition~\ref{prop:generalasymptotic} at $0$ to get the desired formula for $h_n$, and apply it again at $s$ to find the formula for $\mathbb{E} \exp(-s K_{0n})$. \end{proof} \subsection{A simple example} Before we consider more complicated functions, we will illustrate the method with $g_\theta$ given by the equation \[ g_\theta(t) = \frac{1}{1 - t}. \] This generating function corresponds to the sequence $\theta_k = k$. Our first step is to compute the moment generating function of $K_{0n}$ by finding an asymptotic expansion of the formula in Corollary~\ref{cor:mgfexpansion}. \begin{proposition} \label{prop:algebraic} Let $g_\theta(t) = (1-t)^{-1}$. Then $g_\theta$ is admissible, \[ h_n = \frac{1}{\sqrt{4 \pi}} n^{-3/4} \exp(2 \sqrt{n} (1 + o(1))), \] and \[ \mathbb{E} \exp(-s K_{0n}) = e^{-s/4} \exp(2 \sqrt{n} (e^{-s/2} - 1) (1 + o(1))) \] where the errors are uniform in $s$ for $s$ in a fixed compact set. \end{proposition} We will prove Proposition~\ref{prop:algebraic} in a moment.
We first see how to use this result to prove a central limit theorem for $K_{0n}$. \begin{corollary} \label{cor:algebraic} Let $g_\theta(t) = (1-t)^{-1}$. Then \[ \frac{K_{0n} - n^{1/2}}{2^{-1/2} n^{1/4}} \stackrel{d}{\longrightarrow} N. \] where $N$ is the standard normal distribution. \end{corollary} \begin{proof}[Proof of Corollary~\ref{cor:algebraic}] It suffices to show that the moment generating function of the renormalized cycle count converges to $e^{s^2/2}$ for bounded $s$ (in fact, we only need this for $s$ sufficiently small, but our theorem gives a stronger result). We apply Theorem~\ref{prop:algebraic} at $s / (2^{-1/2} n^{1/4})$ to find \[ \mathbb{E} \exp(-s \frac{K_{0n}} {2^{-1/2} n^{1/4}}) = \exp(- \sqrt{2} s n^{1/4} + s^2/2 + O(s^3 n^{-1/4})). \] Here we used the uniformity of the error for bounded $s$. Multiplying both sides by $\exp(\sqrt{2} s n^{1/4} )$ and letting $n$ tend to $\infty$ completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{prop:algebraic}] We first show that $g_\theta(t) = (1 - t)^{-1}$ is admissible. The monotonicity condition is obvious. We compute \[ a(t) = \frac{t}{(1 - t)^2} \] and \[ b(t) = \frac{t}{(1 - t)^2} + \frac{2 t^2}{(1 - t)^3} \] and note that $a, b \to \infty$ as $t \to 1$ from the left. It suffices to choose a function $\delta(t)$ that satisfies the remaining hypotheses. We observe that the width condition on $\delta$ is satisfied if $\delta(t) \lesssim (1 - t)^{3/2 - \eta}$ for some $\eta > 0$. Likewise, for the error in the approximation condition to be satisfied we need $\delta(t)^3 (1 - t)^{-4} = o(1)$. It therefore suffices to choose $\delta(t) = (1 - t)^\alpha$ for any $4/3 < \alpha \leq 3/2$. We first calculate $r_x$. By definition we have \[ \frac{r_x}{(1 - r_x)^2} = x, \] which can be inverted to find \[ r_x = 1 - x^{-1/2} (1 + o(1)). \] We then compute \[ g(r_x) = \sqrt{x} (1 + o(1)) \] and \[ b(r_x) = 2 x^{3/2} (1 + o(1)). \] With the approximation $(1 - \eta)^n \sim \exp(- \eta n)$ for $\eta$ sufficiently small, we apply Corollary~\ref{cor:mgfexpansion} to find \[ h_n = \frac{1}{\sqrt{4 \pi}} n^{-3/4} \exp(2 \sqrt{n} (1 + o(1))) \] and \[ \mathbb{E} \exp(-s K_{0n}) = e^{-s/4} \exp(2 \sqrt{n} (e^{-s/2} - 1) (1 + o(1))) \] as required. \end{proof} \subsection{The general case} The previous calculation suggests how to transform the formula in Corollary~\ref{cor:mgfexpansion} into a form that is easier to manage. We will restrict our attention to those functions $g$ where the induced functions $r_x$ satisfy a family of inequalities \begin{equation} \abs{r_x^{(k)} x^k} \lesssim \abs{r_x^{(k-1)} x^{(k-1)}} \label{eqn:technicalhelper} \end{equation} for all $x$ sufficiently large and $2 \leq k \leq 4$. This is easy to verify in practice and avoids some technical details for functions $g$ which diverge slowly at $1$ (i.e.~slower than $(1 - r)^{-\epsilon}$ for any $\epsilon > 0$). It is convenient to define the functions \[ \eta_k(x) := (-1)^k \left. \frac{\partial^k}{\partial^k s} \right|_{s=0} \log( r_{e^sx}). \] For example, this gives \[ \eta_1(x) = -x \frac{r_x'}{r_x} \ \text{ and } \eta_2(x) = - \eta_1(x) + x^2 \left( \frac{r_x''}{r_x} + \left( \frac{r'_x}{r_x} \right)^2 \right). \] \begin{proposition} \label{prop:expand} Let $g$ be $\log$-admissible and $r_x$ be defined as above, satisfying condition \ref{eqn:technicalhelper}. Fix $M > 0$. 
Then for every $-M < s < M$, we have \begin{align*} \log \mathbb{E} \exp(-s K_{0n}) = &- g(r_n) (1 + o(1)) s \\ &+ (g(r_n) + n\eta_1(n)) (1 + o(1)) s^2/2\\ &+ O(\xi(n)s^3) \end{align*} where \[ \xi(n) = \sup_{x\in[e^{-M} n,e^M n]} g(r_x) + x\abs{\eta_1(x)}. \] \end{proposition} \begin{proof} We use Corollary~\ref{cor:mgfexpansion} and expand each factor with respect to $s$. We start with the first term. Taking logarithms, we find \begin{align*} \log (r_n r^{-1}_{e^s n})^n &= n \log(r_n) - n \log(r_{e^s n}) \\ &= n\eta_1(n) s - n\eta_2(n) s^2/2 + O(\xi_1(n) s^3) . \end{align*} where \[ \xi_1(n) := \sup_{x\in[e^{-M} n,e^M n]} |\eta_3(x)|. \] Next we consider $\exp(e^{-s} g(r_{e^s n}) - g(r_n))$. We use $x = a(r_x) = r_x g'(r_x)$ and get \[ \frac{\partial}{\partial s} g(r_{e^s n}) = g'(r_{e^s n})r'_{e^s n} e^s n = e^{2s} n^2 \frac{r'_{e^s n}}{r_{e^s n}} = - e^s n \cdot \eta_1(e^s n). \] This gives \begin{align*} e^{-s} g(r_{e^s n}) - g(r_n) = &- (n \eta_1(n) + g(r_n)) s\\ &+ (n \eta_1(n) + n \eta_2(n) + g(r_n)) s^2/2 \\ &+ O(\xi_2(n)s^3) \end{align*} where \[ \xi_2(n) = \sup_{x\in[e^{-M} n,e^M n]} g_r(x) + x\abs{\eta_1(x)} + x\abs{\eta_2 (x)} + x\abs{\eta_3(x)}. \] Finally we consider $(b(r_n) b(r_{e^s n})^{-1})^{1/2}$. Writing $b$ in terms of $a$, we get \[ b(r_x) = a(r_x) + r_x^2 g''(r_x) = x + r_x^2 g''(r_x) \] Differentiating the defining equation $ x = r_x g'(r_x) $ in $x$, we find that \[ 1 = r_x' g'(r_x) + r_x r_x' g''(r_x) \] so that \[ r_x^2 g''(r_x) = \frac{r_x}{r_x'} - x \] and thus \[ b(r_x) = \frac{r_x}{r_x'} = \frac{-x}{\eta_1(x)}. \] This then gives \[ \log \left( \frac{b(r_n)}{b(r_{e^s n})} \right)^{1/2} = s/2 - \frac{\eta_2(n)}{2\eta_1(n)} s + \left( \frac{\eta_3(n)}{2\eta_1(n)} - \frac{\eta_2^2(n)}{2\eta_1^2(n)} \right) \frac{s^2}{2} + O( \xi_3(n) s^3) \] where \[ \xi_3(n) = \sup_{x\in[n,e^s n]} \left|\frac{\eta_4(x)}{ \eta_1(x)} \right| + \left|\frac{\eta_2(x)\eta_3(x)}{ \eta_1^2(x)} \right| + \left|\frac{\eta_2^3(x)}{ \eta_1^3(x)} \right|. \] We now apply the technical assumption to see that $\eta_k(n) \lesssim \eta_1(n)$ for $k = 2, 3, 4$. In particular, we see that the $(b(r_n) b(r_{e^s n})^{-1})^{1/2}$ is dominated by the other terms. Collapsing other redundant terms, we find that \begin{align*} \log \mathbb{E} \exp(-s K_{0n}) = &- g(r_n) (1 + o(1)) s \\ &+ (g(r_n) + n\eta_1(n)) (1 + o(1)) s^2/2\\ &+ O(s^3 \sup_{x\in[e^{-M} n,e^M n]} g(r_x) + x\abs{\eta_1(x)}) \end{align*} as desired. \end{proof} As above, this has the following immediate corollary. \begin{corollary} \label{cor:convergencetheorem} Let $\theta$ be the defining sequence for a generalized Ewens measure. Suppose that $g_\theta$ is $\log$-admissible and $r_x$ satisfies the technical condition \ref{eqn:technicalhelper}. Then there are functions $\mu_n$ and $\sigma_n$ such that \[ \frac{K_{0n} - \mu_n}{\sigma_n} \stackrel{d}{\longrightarrow} N(0,1) \] where $\mu_n$, $\sigma_n$ satisfy the asymptotics \[ \mu_n = g(r_n) (1 + o(1)) \] and \[ \sigma_n^2 = (g(r_n) + n\eta_1(n))(1 + o(1)). \] \end{corollary} \begin{proof} The only thing to check is whether the coefficient of $s^3$ is bounded by $(g(r_x) + x\abs{\eta_1(x)})^{3/2}$, but this is obvious. \end{proof} \subsection{Computing $g(r_n)$ and $\eta_1(n)$} We now give two examples of how to apply Corollary~\ref{cor:convergencetheorem} to compute explicit asymptotics for given $g_\theta$. First we prove Corollary~\ref{cor:genalgebraic}. 
\begin{proposition} \label{prop:generalalgebraic} Let $g_\theta(t) = \gamma (1 - t)^{-\beta}$ for some $\beta > 0$ and $\gamma > 0$. Then $g_\theta$ is $\log$-admissible, $r_x$ satisfies the technical condition \ref{eqn:technicalhelper}, and there are asymptotic expansions \[ g_\theta(r_n) = (n \beta^{-1} \gamma^{-1})^{\frac{\beta}{\beta + 1}} (1 + o(1)) \] and \[ n \eta_1(n) = - \frac{(\beta \gamma)^{1/(\beta + 1)}}{\beta + 1} n^{\beta/(\beta + 1)} (1 + o(1)). \] \end{proposition} \begin{proof} Admissibility follows once we construct an explicit $\delta(t)$. It suffices to find a function that satisfies the inequalities \[ \epsilon \delta(t)^2 b(t) - \log b(t) \to \infty \] for all $\epsilon > 0$, and \[ \delta(t)^3 R(t,\varphi) \to 0. \] For $t \to 1$, we have the lower bound $b(t) \gtrsim_{\gamma, \beta} (1 - t)^{-\beta-2}$ and the upper bound $R(t,\varphi) \lesssim_{\gamma, \beta} (1 - t)^{-\beta-3}$. Thus we see that any $\delta$ of the form $\delta(t) = (1 - t)^\alpha$ with $1 + \frac{\beta}{3} \leq \alpha < 1 + \frac{\beta}{2}$ suffices. We compute $r_n$ by inverting $n = a(r_n) = \beta \gamma r_n (1 - r_n)^{-\beta-1}$, so that \[ r_n = 1 - (\beta \gamma n^{-1})^{\frac{1}{\beta + 1}} (1 + o(1)). \] The derivatives of $r_n$ can be approximated in an analogous way, so that \[ \abs{r_n^{(k)}} = C_{\beta, \gamma} n^{\frac{-1}{\beta + 1} - k} \] and the technical condition is clear. We then estimate \[ g_\theta(r_n) = (n \beta^{-1} \gamma^{-1})^{\frac{\beta}{\beta + 1}} (1 + o(1)). \] and \[ n \eta_1(n) = - \frac{(\beta \gamma)^{1/(\beta + 1)}}{\beta + 1} n^{\beta/(\beta + 1)} (1 + o(1)) \] with our estimate for $r_n$ and $r_n'$. \end{proof} Next we prove Corollary~\ref{cor:exponential}. \begin{proposition} \label{prop:subexponential} Let $g_\theta(r) = \exp (1 - r)^{-\beta}$ for some $\beta > 0$. Then $g_\theta$ is $\log$-admissible, $r_x$ satisfies the technical condition \ref{eqn:technicalhelper}, and there are asymptotic expansions \[ g_\theta(r_n) = \frac{n}{(\log n)^{1 + 1/\beta}} (1 + (\log n)^{-1/\beta} + (1 + \beta^{-1}) \frac{\log \log x}{\log x}(1 + o(1))) \] and \[ n \eta_1(n) = - \frac{n}{(\log n)^{1 + 1/\beta}} (1 + (\log n)^{-1/\beta} - (1 + \beta^{-1}) \frac{1}{\log x}(1 + o(1))) \] \end{proposition} \begin{proof} First we verify that $g_\theta$ is admissible. Monotonicity is obvious. We compute \[ a(r) = r g'(r) = r \beta (1 - r)^{-\beta - 1} \exp (1 - r)^{-\beta} \] and \begin{align*} b(r) &= r g'(r) + r^2 g''(r) \\ &= r^2 (\beta (\beta + 1) (1 - r)^{-\beta-2} + \beta^2 (1 - r)^{-2 \beta - 2}) \exp(1 - r)^{-\beta} \\ &= r^2 \beta^2 (1 - r)^{-2 \beta - 2} \exp (1 - r)^{-\beta} (1 + O_\beta(1 - r)^\beta). \end{align*} These diverge at $1$. Once we estimate \begin{align*} R(r,\varphi) &= r g'(r) + 3 r^2 g''(r) + r^3 g'''(r) \\ &= r^3 \beta^3 (1 - r)^{-3 \beta - 3} \exp (1 - r)^{-\beta} (1 + O_\beta(1 - r)^\beta) \end{align*} we see that it remains to choose any $\delta$ that satisfies the pair of inequalities \[ \epsilon \delta(r)^2 r^2 \beta^2 (1 - r)^{-2 \beta - 2} \exp(1 - r)^{-\beta} \to \infty \] and \[ r^3 \beta^3 (1 - r)^{-3 \beta - 3} \exp (1 - r)^{-\beta} \lesssim \delta(r)^{-3}. \] Any $\delta$ of the form \[ \delta(r) = \exp (\alpha (1 - t)^{-\beta}) \] with $1/3 \leq \alpha < 1/2$ suffices. We next need an asymptotic approximation for $r_x$. This is provided by the following lemma. \begin{lemma} \label{lem:expandradius} Let $f(x) := (1 - r_x)^{-\beta}$. 
Then we have the asymptotic expansion \begin{multline*} f(x) = \log x - (1 + \beta^{-1}) \log \log x - \log \beta \\ + ((\log x)^{-1/\beta} + (1 + \beta^{-1}) \frac{\log \log x}{\log x})(1 + o(1)) \end{multline*} Furthermore, we have the estimates \[ f^{(k)}(x) = (-1)^{k+1} (k-1)! \frac{1}{x^k} (1 - \frac{1 + \beta^{-1}}{\log x}) + O_{\beta,k} (\frac{\log \log x}{x^k (\log x)^2}) \] \end{lemma} \begin{proof} Once we make the substitution $f(x) = (1 - r_x)^{-\beta}$ in the equation $a(r_x) = x$, we see that $f$ is implicitly defined by the equation \[ x = \beta (1 - f(x)^{-1/\beta}) f(x)^{1 + 1/\beta} \exp f(x). \] We then substitute $f(x) = \log x - (1 + \beta^{-1}) \log \log x + w$ and observe that $w = ((\log x)^{-1/\beta} + (1 + \beta^{-1}) \frac{\log \log x}{\log x})(1 + o(1))$. For the estimates on the derivatives of $f$, we differentiate the defining equation for $f$ to find \[ 1 = (f(x)^{1/\beta} + \beta (f(x)^{1/\beta} - 1) + \beta(f(x)^{1/\beta} - 1) f(x)) f'(x) \exp f(x). \] We can use the defining equation again to eliminate the exponential term, which gives us \[ f'(x) = \frac1x (1 - \frac{\beta^{-1}(1 - f(x)^{-1/\beta})^{-1} + 1}{\beta^{-1}(1 - f(x)^{-1/\beta})^{-1} + 1 + f(x)}). \] This gives us the lemma for $k = 1$. For the higher derivatives, we differentiate by parts and apply our earlier asymptotics. \end{proof} Note that this lemma also shows that $r_x$ satisfies the technical condition. We apply this formula to $g(r_n)$ to get \[ g(r_n) = \frac{n}{(\log n)^{1 + 1/\beta}} (1 + (\log n)^{-1/\beta} + (1 + \beta^{-1}) \frac{\log \log x}{\log x}(1 + o(1)))\] and to $n \eta_1(n)$ to get \[ n \eta_1(n) = - \frac{n}{(\log n)^{1 + 1/\beta}} (1 + (\log n)^{-1/\beta} - (1 + \beta^{-1}) \frac{1}{\log x}(1 + o(1))). \qedhere \] \end{proof} Other $g(t)$ can be computed in similar ways. Note that in the proof of Corollary~\ref{cor:exponential}, it was crucial to develop $g(r_n)$ and $n \eta_1(n)$ beyond the first term; this reflects the reduced variance of the number of cycles when most of the cycles are of logarithmic length. \section{Large deviation estimates} \label{sec:largedeviation} The method developed in the previous two sections actually gives more information than a central limit theorem. In fact, it was enough for us to show that \[ \mathbb{E} \exp\left(s \frac{K_{0n} - \mu_n}{\sigma_n}\right) = \exp(\frac{s^2}{2}(1 + o(1))) \] for $s$ arbitrarily close to $0$, but our method applied to all $s$ in a fixed compact set. In this section we will briefly indicate how to use this extra information to prove large deviation estimates for $K_{0n}$. Let $M(s)$ denote the moment generating function for the renormalized cycle count; i.e.~ \[ M(s) = \mathbb{E} \exp(s \frac{K_{0n} - \mu_n}{\sigma_n}) \] and let $\Lambda(s) = \log M(s)$ denote its logarithm. We restate the corollary of Proposition~\ref{prop:expand} as follows. \begin{proposition} There are functions $\sigma_n^2$, $\xi(n)$ such that for all $s = O(\sigma_n)$, we have the estimate \[ \Lambda(s) = s^2 / 2 + O(\xi(n) \sigma_n^{-3}) s^3. \] \end{proposition} As an immediate consequence, we also have \[ \Lambda'(s) = s + O(\xi(n) \sigma_n^{-3}) s^2 \] and \[ \Lambda''(s) = 1 + O(\xi(n) \sigma_n^{-3}) s. \] Furthermore, $\Lambda'(s)$ is monotone increasing (hence injective) for such $s$.
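Before stating the two-sided estimate, we note in passing the standard Chernoff-type upper bound that already follows from this expansion: for $0 \leq a = O(\sigma_n)$ and any $s > 0$ with $s = O(\sigma_n)$, Markov's inequality applied to $\exp(s (K_{0n}-\mu_n)/\sigma_n)$ gives \[ \mathbb{P}\left( \frac{K_{0n} - \mu_n}{\sigma_n} \geq a \right) \leq \exp(-sa + \Lambda(s)) = \exp\left(-sa + \frac{s^2}{2} + O(\xi(n) \sigma_n^{-3}) s^3\right), \] and the choice $s = a$ yields the bound $\exp(-a^2/2 + O(\xi(n) \sigma_n^{-3} a^3))$. The theorem below refines this to a two-sided local estimate.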
\begin{theorem} For all $a = O(\sigma_n)$ we have \[ \mathbb{P}(\abs{\frac{K_{0n} - \mu_n}{\sigma_n} - a} < \epsilon) = (1 - \epsilon^{-2} (1 + \delta)) \exp(- a^2 / 2 + O(\delta + \epsilon a)) \] where \[ \delta = O(\xi(n) \sigma_n^{-3} a) \] and the errors are absolute. \end{theorem} \begin{proof} Let $X_n := (K_{0n} - \mu_n) / \sigma_n$ and let $\eta$ denote the pdf for $X_n$. We define a pdf $\nu_s$ depending on $s \in \mathbb{R}$ by \[ d\nu_s(x) = \frac{1}{M(s)} e^{sx} d\eta(x). \] Then if $Y$ is a random variable with pdf $\nu_s$, we see that \[ \mathbb{P}(\abs{X_n - a} < \epsilon) = M(s) \mathbb{E} e^{-s Y} 1_{\abs{Y - a} < \epsilon} \] We want to choose $s$ so that $Y$ has mean $a$. In fact, \[ \mathbb{E} Y = M(s)^{-1} \mathbb{E} X_n \exp(s X_n)] = \Lambda'(s) \] Therefore, because $\Lambda'$ is injective we solve $s = a + O(\xi(n) \sigma_n^{-3} a^2)$. On the event that $\abs{Y - a} < \epsilon$, we see that $e^{-s Y} = e^{-sa + O(s\epsilon)}$ so that \[ \mathbb{P}(\abs{X_n - a} < \epsilon) = \exp(- a^2 / 2 + O(\alpha a^3 + \epsilon a)) \mathbb{P}(\abs{Y - a} < \epsilon). \] It is not hard to show that for $s$ chosen so that $a = \mathbb{E}Y$, \[ \mathbb{E} \abs{Y - a}^2 = \Lambda''(s) \] so that by the second moment method, \[ \mathbb{P}(\abs{Y - a} < \epsilon) = 1 - \epsilon^{-2} (1 + O(\xi(n) \sigma_n^{-3} a)), \] and the result follows. \end{proof} \end{document}
\begin{document} \title{A brief introduction on residue theory of holomorphic foliations} \hyphenation{ho-mo-lo-gi-cal} \hyphenation{fo-lia-tion} \hyphenation{ge-ne-ra-li-za-ti-on} \begin{abstract} This is a survey paper dealing with holomorphic foliations, with emphasis on residue theory and its applications. We start by recalling the definition of holomorphic foliations as a subsheaf of the tangent sheaf of a manifold. The theory of characteristic classes of vector bundles is approached from this perspective. We define Chern classes of holomorphic foliations using the Chern-Weil theory and we remark that the Baum-Bott residue is a great tool that helps us to classify some foliations. We present throughout the survey several recent results and advances in residue theory. We finish by presenting some applications of residues, for example to the Poincar\'e problem and to the existence of minimal sets for foliations. \end{abstract} \author{Fernando Louren\c co} \address{ Fernando Louren\c co \\ DMM - UFLA , Campus Universit\'ario, Lavras MG, Brazil, CEP 37200-000} \email{[email protected]} \author{Fernando Reis} \address{Fernando Reis \\ DMA - UFES, Rodovia BR-101, Km 60, Bairro Litor\^aneo, S\~ao Mateus-ES, Brazil, CEP 29932-540} \email{[email protected] } \maketitle \noindent {\bf{Keywords:}} Holomorphic foliation, flags, residues, characteristic classes. \noindent{\bf{Mathematics Subject Classification:}} 32S65, 58K45, 57R30, 53C12. \section{Introduction} The residue theory of holomorphic foliations began with the work of P. Baum and R. Bott \cite{Article-2} in 1972. In their paper the authors developed a class for foliations, associated with the singular set, using differential geometry based on the Bott vanishing theorem. However this class, called the residue, appears only as an element of a homology group. The question that arises is: \textit{how to compute the residue?} We consider a holomorphic foliation $\mathcal{F}$ of dimension $q$ on a complex manifold $M$ of dimension $n$. If we take $\varphi$ a homogeneous symmetric polynomial of degree $d$, then for each compact connected component $Z \subset {\rm{Sing}}(\mathcal{F})$ of the singular set of the foliation $\mathcal{F}$, there exists a homology class ${\rm{Res}}_{\varphi}(\mathcal{F};Z)$ in the group $H_{2(n-d)}(Z; \mathbb{C})$ such that, under certain conditions on $M$, one has $$ \varphi(\mathcal{N}_{\mathcal{F}}) \frown [M] = \sum {\rm{Res}}_{\varphi}(\mathcal{F};Z), $$ \noindent where $\mathcal{N}_{\mathcal{F}}$ represents the normal sheaf of the foliation $\mathcal{F}$. This survey addresses the problem of computing this residue ${\rm{Res}}_{\varphi}(\mathcal{F};Z)$ in some cases. In one complex variable, we have the Cauchy residue of a meromorphic function and the Cauchy integral formula, which helps us to compute it. On the other hand, in several complex variables, as a generalization of the Cauchy residue, we have the Grothendieck residue associated with a meromorphic form. Let $f = (f_{1}, \ldots , f_{n}) : U \subset \mathbb{C}^{n} \longrightarrow \mathbb{C}^{n}$ be a finite holomorphic map, such that $f(0) = 0,$ and let $g$ be a holomorphic function on $U$.
We define the Grothendieck residue by $${\rm{Res}}_{0}(g,f) = \Big(\dfrac{1}{2\pi \sqrt{-1}}\Big)^{n} \int_{\gamma} \dfrac{g(z)\, dz_{1}\wedge \ldots \wedge dz_{n}}{f_{1}\cdots f_{n}},$$ \noindent where $\gamma$ is an $n$-cycle with orientation prescribed by the $n$-form $d(\arg (f_{1})) \wedge \cdots \wedge d(\arg (f_{n}))\geq 0.$ If we denote the meromorphic $n$-form $\dfrac{g(z)\, dz_{1}\wedge \cdots \wedge dz_{n} }{f_{1}\cdots f_{n}}$ by $\omega$, we may use the notation \begin{equation}\label{eq 01(groth)} {\rm{Res}}_{0}(g,f) = {\rm{Res}}_{0}\left[\begin{array}{cc} \omega \\ f_{1}, \ldots , f_{n} \end{array}\right]. \end{equation} We observe that for $n=1$, this residue is just the Cauchy residue. Baum and Bott \cite{Article-2} also show how to compute the residue of a higher dimensional foliation, provided certain assumptions hold on each irreducible component of the singular set of the foliation. Taking $Z$ an irreducible component of ${\rm{Sing}}(\mathcal{F})$ with $\dim Z = \dim (\mathcal{F}) - 1$, and under other generic hypotheses, one has $${\rm{Res}}_{\varphi}(\mathcal{F}; Z) = {\rm{Res}}_{\varphi}(\mathcal{F}|_{B_{p}}; p)[Z],$$ \noindent where ${\rm{Res}}_{\varphi}(\mathcal{F}|_{B_{p}}; p)$ is a certain Grothendieck residue. Furthermore, in the same work \cite{Article-2}, Baum and Bott prove that the residue of a dimension one foliation at an isolated singular point can also be expressed by the Grothendieck residue in (\ref{eq 01(groth)}), where $f_{1}, \ldots , f_{n}$ are the components of the vector field that locally induces $\mathcal{F}$. In 1984, T. Suwa in \cite{Article-29} considers a foliation of complete intersection type and expresses a certain class of residues in terms of the Chern classes and the local Chern classes of the sheaf $\mathcal{E} xt_{\mathcal{O}_{M}}^{1}(\Omega_{\mathcal{F}}, \mathcal{O}_{M})$. As an application, in (\cite{Article-29}, 3.8 Corollary, p.41) he gives a partial answer to the Rationality Conjecture, see (\cite{Article-2}, p.287). As another consequence, he shows how to compute residues at isolated singularities in the case that the foliation has codimension one. T. Suwa in \cite{book-5} developed a residue theory for distributions, where the localization considered there arises from a rather primitive fact, namely that the Chern forms of degree greater than the rank of the vector bundle vanish. Hence the involutivity has nothing to do with it. For this reason the localization of some characteristic classes, and the associated residues of the normal sheaf of the distribution, can be defined by rank reasons alone. Also in (\cite{book-5}, Proposition 4.4, p.15), the author shows in particular that the residues for distributions coincide with the Baum-Bott residues for foliations when the distribution is involutive (i.e., the distribution is integrable and hence defines a foliation). A few years later, in 2005, based on the classical Camacho-Sad residue (or index) theorem, T. Suwa and F. Bracci in \cite{Article-6} developed a residue theory for adequate singular pairs, which generalizes, in some sense, the classical Camacho-Sad residue. The authors show a Bott vanishing theorem by adapting the \v{C}ech-de Rham theory and localization to adequate singular pairs, and prove that the residue exists in this situation. More recently, in 2015, F. Bracci and T. Suwa in \cite{Article-7} provide another way to compute residues.
The authors consider a deformation of a complex manifold $M$, denoted by $\tilde{M} = \{ M_{t} \}_{t \in \hat{P}}$, where the parameter space $\hat{P}$ is a $\mathcal{C}^{\infty}$ manifold, and a deformation of a holomorphic foliation $\mathcal{F}$ of $M$, denoted by $\mathcal{\tilde{F}} = \{ \mathcal{F}_{t} \}$. For each parameter $t \in \hat{P}$ they assume that the singular set $S_{t}$ is compact. Let $\varphi$ be a homogeneous symmetric polynomial of degree $d$ and ${\rm{Res}}_{\varphi}(\mathcal{F}_{t}; S_{t}^{\lambda})$ the Baum-Bott residue of $\mathcal{F}_{t}$ at the compact connected set $S_{t}^{\lambda}$. It is proved that the Baum-Bott residue varies continuously in this smooth deformation, i.e., $$\lim_{t \rightarrow t_{0}} \sum_{\lambda}{\rm{Res}}_{\varphi}(\mathcal{F}_{t}; S_{t}^{\lambda}) = {\rm{Res}}_{\varphi}(\mathcal{F}_{t_{0}}; S_{t_{0}}^{\lambda}).$$ Subsequently, the authors consider $\mathcal{F}$ as a germ at $ 0 \in \mathbb{C}^{n}$ of a simple almost Liouvillian foliation of codimension one and $V$ a divisor of poles. Then it is shown that the residue of $\mathcal{F}$ at $Z$, an irreducible component of the singular set of $\mathcal{F}$ of codimension 2, can be written as a sum (over the irreducible components of $V$ that contain $Z$) in terms of Lehmann-Suwa residues \cite{Article-23}. This represents an effective way to compute residues: $$BB(\mathcal{F}; Z) = \sum_{j=1}^{k} {\rm{Res}}(\gamma_{0},V_{j})Var(\mathcal{F},V_{j};Z).$$ In the paper \cite{Article-31} the author proves a slight generalization of the Bott residue theorem for holomorphic foliations of dimension one. The proof is based on a localization formula of Duistermaat and Heckman type, which was first discussed in \cite{Article-5}. Recently, in 2016 (see \cite{Article-14.1}), Corr\^ea, Pe\~na and Soares studied several residue formulas for vector fields on compact complex orbifolds with isolated singularities, which are a special type of singular variety. It is worth remarking that there are other types of residues and invariants associated with foliations and distributions. For instance, residues of logarithmic vector fields in \cite{Article-14.3}, the Camacho-Sad index in \cite{Article-11}, and the GSV-index of foliations and Pfaff systems in \cite{Article-14.2,Article-20.1}. Another interesting topic that has been studied recently is the residue theory for flags of foliations and distributions. In the works \cite{Article-9, Article-20}, a general theory of residues for $2$-flags was developed. There are many topics which are closely related to $2$-flags and naturally appear in the theory of foliations. For example, there is a conjecture due to Marco Brunella (see \cite{Article-12}, p.443) which says that a two-dimensional holomorphic foliation $\mathcal{F}$ on $\mathbb{P}^{3}$ either admits an invariant algebraic surface or it composes a flag of holomorphic foliations. Finally, we finish the survey with some applications. We discuss how residue theory has been used to investigate two classical open problems: the Poincar\'e problem and the existence or not of minimal sets for foliations. It is important to note that throughout this paper we maintain all the notations and symbols exactly as they are in the original works, respecting the choices of the referenced authors.
\section{Chern-Weil Theory of Characteristic classes of holomorphic foliations} \subsection{Holomorphic foliations} Let us begin by recalling the basic definitions of singular holomorphic foliations and distributions. Let $M$ be a complex manifold of dimension $n$ and denote by $\Theta_{M}$ and $\Omega_{M}$, respectively, the sheaves of germs of holomorphic vector fields and holomorphic $1$-forms on $M$. There are two definitions of singular foliations that turn out to be equivalent as long as we consider only reduced foliations. For this section on foliation theory we refer to \cite{Article-2, Article-9, Article-24, book-4, book-5}. A singular holomorphic distribution of dimension $q$ on $M$ is a coherent subsheaf $\mathcal{F}$ of $\Theta_{M}$ of rank $q$. Moreover, if $\mathcal{F}$ satisfies the following integrability condition $$ [\mathcal{F}_{x} , \mathcal{F}_{x}] \subset \mathcal{F}_{x} \quad \mbox{for all } x \in M,$$ \noindent we say that $\mathcal{F}$ is a holomorphic foliation. The normal sheaf of $\mathcal{F}$ is defined as the quotient sheaf $\mathcal{N}_{\mathcal{F}} := \Theta_{M}/ \mathcal{F}$, which we require to be torsion free (this means that $\mathcal{F}$ is saturated). With this definition we have the following exact sequence $$ 0 \longrightarrow \mathcal{F} \longrightarrow \Theta_{M} \longrightarrow \mathcal{N}_{\mathcal{F}} \longrightarrow 0. $$ We define the singular set of the distribution $\mathcal{F}$ by \begin{center} ${\rm{Sing}}(\mathcal{F}) := {\rm{Sing}}(\mathcal{N}_{\mathcal{F}} ) = \{p \in M ; \mathcal{N}_{\mathcal{F},p} \mbox{ is not locally free} \}.$ \end{center} We assume that ${\rm codim}({\rm{Sing}}(\mathcal{F})) \geq 2.$ For the second definition, a singular distribution $\mathcal{G}$ can be defined in a dual way by means of differential forms, i.e., as a coherent subsheaf of $\Omega_{M}$. If $\mathcal{G}$ satisfies the integrability condition, i.e., $$d \mathcal{G}_{x} \subset (\Omega_{M} \wedge \mathcal{G})_{x} \quad \mbox{for all } x \in M \setminus {\rm{Sing}}(\mathcal{G}),$$ \noindent we say that $\mathcal{G}$ is a foliation, where ${\rm{Sing}}(\mathcal{G}) := {\rm{Sing}}(\Omega_{M} / \mathcal{G}).$ The two definitions of foliations are equivalent and related by taking the annihilator of each other. If $\mathcal{F}$ is a foliation on $M$ of dimension $q$, its annihilator is defined by $$ \mathcal{F}^{a} = \{ v \in \Omega_{M} ; \langle v, \omega \rangle = 0 \ \mbox{ for all } \omega \in \mathcal{F} \}.$$ We say that $\mathcal{F}$ is reduced if for any open set $U$ in $M$, $$\Gamma(U, \Theta_{M}) \cap \Gamma(U\setminus {\rm{Sing}}(\mathcal{F}), \mathcal{F}) = \Gamma(U, \mathcal{F}). $$ If we consider only reduced foliations, then $\mathcal{G} = \mathcal{F}^{a}$, and the converse is also true (see \cite{book-4}). To conclude this subsection we present the following definition. \begin{definition} Let $V$ be an analytic subspace of a complex manifold $X$.
We say that $V$ is invariant by a foliation $\mathcal{F}$ if $$T\mathcal{F}|_{V} \subset (\Omega_{V}^{1})^{\ast}.$$ \noindent In particular cases: $\bullet$ if $V$ is a hypersurface, we say that $\mathcal{F}$ is \textit{logarithmic along} $V$; $\bullet$ if $V$ is a reduced complete intersection of dimension $n-k$, defined by the intersection of $k$ hypersurfaces, we say that $\mathcal{F}$ is \textit{multi-logarithmic along} $V$. \end{definition} \subsubsection{Flag of holomorphic foliations} In this subsection, we define flags of foliations and present their main properties. For more details we refer to \cite{Article-9, Article-12.22, Article-20, Article-24}. A flag of singular holomorphic foliations on a complex manifold $M$ of dimension $n$ can be defined as a finite sequence of $k$ foliations $\mathcal{F} = (\mathcal{F}_{1},\ldots , \mathcal{F}_{k})$ such that, outside the singular sets, each foliation $\mathcal{F}_{i}$ is a subfoliation of $\mathcal{F}_{i+1}$, and we denote $\mathcal{F}_{i} \subset \mathcal{F}_{i+1}$, for each $i = 1, \ldots, k-1$. In a more formal manner, and for $k=2$, one has the following. \begin{definition}\label{def.2.1} Let $\mathcal{F}_{1},\mathcal{F}_{2}$ be two holomorphic foliations on $M$ of dimensions $q = (q_{1},q_{2})$. We say that $\mathcal{F} := (\mathcal{F}_{1},\mathcal{F}_{2})$ is a $2$-flag of holomorphic foliations if $\mathcal{F}_{1}$ is a coherent sub $\mathcal{O}_{M}$-module of $\mathcal{F}_{2}.$ \end{definition} We note that, for $ x \in M \setminus \cup_{i = 1}^{2} {\rm{Sing}} (\mathcal{F}_{i})$, the inclusion relation $T_{x}\mathcal{F}_{1} \subset T_{x}\mathcal{F}_{2}$ holds, namely, the leaves of $\mathcal{F}_{1}$ are contained in leaves of $\mathcal{F}_{2}$. Here $T\mathcal{F}_{i}$ represents the tangent sheaf of the foliation $\mathcal{F}_{i}$, but throughout the text we will abuse notation and denote it simply by $\mathcal{F}_{i}$. Now, we would like to highlight a diagram of short exact sequences of sheaves, called the ``turtle diagram''. $$ \xymatrix{ 0 \ar[rd] & & 0 \ar[ld] & & 0 \\ & \mathcal{F}_{1} \ar[rd] \ar[dd] & & \mathcal{N}_{2} \ar[lu] \ar[ru] & \\ & & \Theta_{M} \ar[ru] \ar[rd] & & \\ & \mathcal{F}_{2} \ar[ru] \ar[rd] & & \mathcal{N}_{1} \ar[uu] \ar[rd] & \\ 0 \ar[ru] & & \mathcal{N}_{12} \ar[ru] \ar[rd] & & 0 \\ & 0 \ar[ru] & & 0 & }$$ Let us give some definitions and notations. We write $\mathcal{N}_{j}$ for the quotient sheaf $\mathcal{N}_{\mathcal{F}_{j}}$. Also, we denote by ${\rm{Sing}}(\mathcal{F})$ the singular set of the flag $\mathcal{F}$, which is the analytic set ${\rm{Sing}}(\mathcal{F}_{1})\cup {\rm{Sing}}(\mathcal{F}_{2})$, and $\mathcal{N}_{\mathcal{F}} := \mathcal{N}_{12} \oplus \mathcal{N}_{2}$ is the normal sheaf of the flag, where $\mathcal{N}_{12}$ is the relative quotient sheaf $ \mathcal{F}_{2} / \mathcal{F}_{1}$. \subsection{Characteristic classes via Chern-Weil theory} In this section, we review the basic tools of the Chern-Weil theory for working with residues and characteristic classes of vector bundles and sheaves. The residue theory of foliations was first introduced by Baum and Bott using differential geometry, in a series of papers (\cite{Article-2, Article-3, Article-4}). Later, in the 1980s and 1990s, Lehmann and Suwa presented a new approach to residue theory using the Chern-Weil theory (see \cite{Article-23}).
\subsection{Characteristic classes via Chern-Weil theory} In this section, we review the basic tools of Chern-Weil theory needed to work with residues and characteristic classes of vector bundles and sheaves. The residue theory of foliations was first introduced by Baum and Bott, using differential geometry, in a series of papers (\cite{Article-2, Article-3, Article-4}). Later, in the 1980s and 1990s, Lehmann and Suwa presented a new approach to residue theory via Chern-Weil theory (see \cite{Article-23}). \begin{definition} A connection for a complex vector bundle $E$ of rank $r$ on $M$ is a $\mathbb{C}$-linear map $$ \nabla : A^{0} (M,E) \longrightarrow A^{1} (M, E) $$ \noindent that satisfies $$ \nabla (f.s) = df \otimes s + f. \nabla (s) \ \ \mbox{for} \ \ f \in A^{0}(M) \ \ \mbox{and} \ \ s \in A^{0}(M,E).$$ \end{definition} If $\nabla$ is a connection for $E$, then it induces a $\mathbb{C}$-linear map $$ \nabla : A^{1}(M,E) \longrightarrow A^{2}(M,E) $$ \noindent satisfying $$ \nabla (\omega \otimes s) = d\omega \otimes s - \omega \wedge \nabla (s), \ \ \omega \in A^{1}(M), \ \ s \in A^{0} (M,E).$$ \noindent We define the composition $K := \nabla \circ \nabla$ from $A^{0} (M,E)$ to $A^{2} (M,E)$ to be the curvature of the connection $\nabla$. If $ s = (s_{1}, \ldots , s_{r})$ is a frame of $E$ on an open set $U$, we obtain $\theta = (\theta_{ij})$, the connection matrix of $E$ with respect to the frame $s$ (where the $\theta_{ij}$ are $1$-forms on $U$). In the same way, we obtain $K = (k_{ij})$, the curvature matrix of $E$ with respect to $s$. Considering $\sigma_{i}$, $i = 1,\ldots,r$, the $i$-th elementary symmetric functions in the eigenvalues of the matrix $K$, $$ \det(I + tK) = 1 + \sigma_{1}(K)t + \sigma_{2}(K)t^{2} + \cdots + \sigma_{r}(K)t^{r}, $$ \noindent we may define a $2i$-form, the $i$-th Chern form $c_{i}$, on $U$ by $$ c_{i}(K) := \sigma_{i} \Big(\frac{\sqrt{-1}}{2 \pi}K\Big). $$ In general, if $\varphi$ is a homogeneous symmetric polynomial in $r$ variables of degree $d$, we may write $\varphi = \tilde{P}(c_{1},\ldots,c_{r})$ for some polynomial $\tilde{P}$. Then we can define $$ \varphi(K) := \tilde{P}(c_{1}(K),\ldots,c_{r}(K)),$$ \noindent which is a closed form on $M$. Therefore, we obtain a cohomology class of $E$ on $M$, $\varphi (E) := \varphi(K) \in H^{2d}(M; \mathbb{C})$.
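To fix ideas, let us record the first and the last of these forms explicitly (a standard computation, valid for any connection): since $\sigma_{1}$ is the trace and $\sigma_{r}$ is the determinant of an $r \times r$ matrix, $$ c_{1}(K) = \frac{\sqrt{-1}}{2\pi}\,{\rm tr}(K), \qquad c_{r}(K) = \det\Big(\frac{\sqrt{-1}}{2\pi}K\Big). $$ In particular, for a line bundle ($r=1$) the curvature is a single $2$-form and $c_{1}(K) = \frac{\sqrt{-1}}{2\pi}K$ is the only Chern form.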
Let $\mathcal{G}$ be a sheaf on $M$, $S$ a compact connected subset of $M$ and $U$ a relatively compact open neighborhood of $S$ in $M$. We may consider the covering $\mathcal{U} = \{U_{0}, U_{1} \}$ of $U$, where $U_{1} = U$ and $U_{0} = U \setminus S$, and, since there exists \cite{Article-1} a resolution of $\mathcal{G}$ $$0 \longrightarrow \mathcal{A}_{U}(E_{r}) \longrightarrow \cdots \longrightarrow \mathcal{A}_{U}(E_{0}) \longrightarrow \mathcal{A}_{U} \otimes_{\mathcal{O}_{U}}\mathcal{G} \longrightarrow 0,$$ \noindent we can define the characteristic class $\varphi(\mathcal{G})$ on $U$ using the \textit{virtual bundle} $\xi := \sum_{i=0}^{r} (-1)^{i}E_{i}$, i.e., $\varphi(\mathcal{G}) :=\varphi(\xi).$ Given a holomorphic foliation $\mathcal{F}$ on $M$ and a homogeneous symmetric polynomial $\varphi$ of degree $d$, one has the short exact sequence $$ 0 \longrightarrow \mathcal{F} \longrightarrow \Theta_{M} \longrightarrow \mathcal{N}_{\mathcal{F}} \longrightarrow 0.$$ Then $\varphi(\mathcal{N}_{\mathcal{F}})$ denotes the characteristic class of $\mathcal{F}$; it is an element of the cohomology group $H^{2d}(M; \mathbb{C}).$ We denote by $P$ the Poincar\'e homomorphism (an isomorphism if $M$ is nonsingular) from $H^{2d}(M;\mathbb{C})$ to $H_{2(n-d)}(M;\mathbb{C})$ and by $A$ the Alexander homomorphism (an isomorphism if $S$ is nonsingular) $A : H^{2d}(M, M \setminus S; \mathbb{C}) \longrightarrow H_{2(n-d)}(S; \mathbb{C})$. We have the following commuting diagram: $$ \xymatrix{ H^{2d}(M, M \setminus S; \mathbb{C}) \ar[r] \ar[d]_{A} & H^{2d}(M ; \mathbb{C}) \ar[d]^{P} \\ H_{2(n-d)}(S; \mathbb{C}) \ar[r]_{i_{\ast}} & H_{2(n-d)}(M; \mathbb{C}) }$$ \noindent where the map $H^{2d}(M, M \setminus S; \mathbb{C}) \longrightarrow H^{2d}(M ; \mathbb{C}) $ represents a lift that can be interpreted, in terms of foliation theory, by the Bott vanishing theorem (see \cite{book-4}, Theorem 9.11). Thus we obtain the residue of the foliation $\mathcal{F}$, denoted by ${\rm{Res}}_{\varphi}(\mathcal{F}, \mathcal{N}_{\mathcal{F}}; S) \in H_{2(n-d)}(S; \mathbb{C})$, as the image of the localized class $\varphi_{S}(\mathcal{N}_{\mathcal{F}}) \in H^{2d}(M, M \setminus S; \mathbb{C})$ under the Alexander homomorphism; it is independent of all the choices involved. In general it is not possible to compute such a residue directly from the above definition. It is therefore important to be able to compute it using tools from differential geometry, foliation theory, complex analysis or singularity theory. The goal of this survey is to present some results in this direction. \subsection{Some results about residues of holomorphic foliations} The residue theory of holomorphic foliations has been developed by several authors in recent years. We can cite, for instance, Baum and Bott \cite{Article-2, Article-4}, Brasselet and Suwa \cite{Article-8}, and Bracci and Suwa \cite{Article-6} and \cite{Article-7}. We also refer to the book \cite{book-0}, which is dedicated to the study of indices of holomorphic foliations of dimension one with isolated singularities, in the cases where the underlying space is either a smooth variety or a singular variety. The authors define several notions of index, such as the Poincar\'e-Hopf index, the Schwartz index, the GSV index, the virtual index, the homological index, among others. We begin with the classical Grothendieck residue; see \cite{book-2, Article-28, book-4} for more details. Let us take a germ $\omega$ of holomorphic $n$-form at $0$, a neighborhood $U$ of $0$ in $\mathbb{C}^{n}$, and germs $a_{1}, \ldots , a_{n}$ of holomorphic functions such that $V(a_{1}, \ldots , a_{n}) = \{0\}$. The Grothendieck residue of $\omega$ at $0$ is defined by $$ {\rm{Res}}_{0}\left[\begin{array}{c} \omega \\ a_{1}, \ldots , a_{n} \end{array}\right] = \dfrac{1}{(2\pi \sqrt{-1})^{n}} \int_{\gamma} \dfrac{\omega}{a_{1}\cdots a_{n}}, $$ \noindent where $\gamma$ is the $n$-cycle in $U$ defined by $$\gamma = \{ z \in U ;\ |a_{1}(z)| = \cdots = |a_{n}(z)|= \epsilon \}$$ \noindent and oriented by $d(\arg a_{1}) \wedge \cdots \wedge d(\arg a_{n}) \geq 0.$ We remark that, when $n=1$, the above residue is the usual Cauchy residue at $0$ of the meromorphic $1$-form $\omega /a_{1}$.
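Two standard computations help to fix ideas (they are classical and can be found in the references just cited). If $a_{i} = z_{i}$ and $\omega = h\, dz_{1}\wedge \cdots \wedge dz_{n}$ with $h$ holomorphic, an iterated application of the Cauchy integral formula gives $$ {\rm{Res}}_{0}\left[\begin{array}{c} h\, dz_{1}\wedge \cdots \wedge dz_{n} \\ z_{1}, \ldots , z_{n} \end{array}\right] = h(0). $$ More generally, for the Jacobian determinant $J_{a} = \det\big(\partial a_{i}/\partial z_{j}\big)$ one has $$ {\rm{Res}}_{0}\left[\begin{array}{c} J_{a}\, dz_{1}\wedge \cdots \wedge dz_{n} \\ a_{1}, \ldots , a_{n} \end{array}\right] = \dim_{\mathbb{C}} \mathcal{O}_{\mathbb{C}^{n},0}/(a_{1}, \ldots , a_{n}), $$ the intersection multiplicity of $a_{1}, \ldots , a_{n}$ at the origin.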
Next, we give the first important result about residues in the theory of foliations. This theorem relates the Baum-Bott residue of certain foliations to the Grothendieck residue. For this, let $\mathcal{F}$ be a holomorphic foliation of dimension one on a compact complex manifold $M$ of dimension $n$. We assume that $\mathcal{F}$ has only isolated singularities. Let $\varphi$ be a homogeneous symmetric polynomial of degree $n$ and let $p \in {\rm{Sing}}(\mathcal{F})$ be an isolated singular point. Then: \begin{theorem}(\cite{Article-2}, Theorem 1) $${\rm{Res}}_{\varphi}(\mathcal{F}; p) = {\rm{Res}}_{p}\left[\begin{array}{c} \varphi(J X) \\ X_{1}, \ldots , X_{n} \end{array}\right],$$ \noindent where $X = (X_{1}, \ldots , X_{n})$ is a germ of holomorphic vector field at $p$ which is a local representative of $\mathcal{F}$. \end{theorem} In \cite{Article-29}, (3.12) Proposition, T. Suwa considers a holomorphic distribution $\mathcal{F}$ (not necessarily involutive) of codimension one and, taking $0$ (it could be any other point $p$) as an isolated singularity of this distribution, shows how to compute the residues. \begin{theorem} Let $U$ be a polydisk about the origin $0$ in $\mathbb{C}^{n}$ and let $\mathcal{F} = \langle\omega\rangle$ be a codimension one holomorphic foliation on $U$ with an isolated singularity at $0$. Then we have $$ {\rm{Res}}_{c_{n}}(\mathcal{F};0) = (-1)^{n}(n-1)! \dim_{\mathbb{C}}{\rm Ext}^{1}_{\mathcal{O}}(\Omega_{\mathcal{F}}, \mathcal{O}) \ \ \ \mbox{in} \ \ \ H_{0}(\{0\}; \mathbb{Q}) = \mathbb{Q}.$$ \end{theorem} The reader interested in residues for distributions may also consult the survey \cite{book-5}. With the same aim, F. Bracci and T. Suwa in \cite{Article-7} consider smooth deformations of holomorphic foliations and verify that this provides an effective way to compute residues. \begin{theorem} Let $(\tilde{M}, \hat{P}, \pi)$ be a deformation of manifolds and $\tilde{\mathcal{F}}$ a deformation of foliations on $\tilde{M}$ of rank $p$. Suppose that $\mathcal{N}_{\tilde{\mathcal{F}}}$ admits a $\mathcal{C}^{\infty}$ locally free resolution. Let $S^{'}(\tilde{\mathcal{F}}) \subset S(\tilde{\mathcal{F}})$ be a connected component of the singular set of $\tilde{\mathcal{F}}$ and let $S_{t} :=M_{t} \cap S^{'}(\tilde{\mathcal{F}})$. Assume that for all $t \in \hat{P}$ the set $S_{t}$ is compact and $S_{t} \neq M_{t}$. Let $\varphi$ be a homogeneous symmetric polynomial of degree $d > n-p$. Under these assumptions, the Baum-Bott residue $BB_{\varphi}(\mathcal{F}_{t};S_{t})$ is continuous in $t \in \hat{P}$. Namely, for any $\mathcal{C}^{\infty} \ (2n-2d)$-form $\tilde{\tau}$ on $\tilde{M}$ such that $i_{t}^{\ast}(\tilde{\tau})$ is closed for all $t \in \hat{P}$, $$\lim_{t \rightarrow t_{0}} BB_{\varphi} (\mathcal{F}_{t};S_{t})(i_{t}^{\ast}(\tilde{\tau})) = BB_{\varphi} (\mathcal{F}_{t_{0}};S_{t_{0}})(i_{t_{0}}^{\ast}(\tilde{\tau})).$$ \end{theorem}
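It is worth illustrating the first theorem of this subsection in the simplest situation (a standard computation which follows directly from the properties of the Grothendieck residue recalled above). Suppose the singularity $p$ is nondegenerate, that is, the linear part $JX(p)$ of a local representative $X$ is invertible, with eigenvalues $\lambda_{1}, \ldots , \lambda_{n}$. Since $JX(p)$ is invertible, the Grothendieck residue of a holomorphic function $h$ with respect to $(X_{1}, \ldots , X_{n})$ equals $h(p)/\det JX(p)$, and therefore $$ {\rm{Res}}_{\varphi}(\mathcal{F}; p) = \frac{\varphi(\lambda_{1}, \ldots , \lambda_{n})}{\lambda_{1}\cdots \lambda_{n}}, $$ where $\varphi$ is evaluated by substituting the elementary symmetric functions of the $\lambda_{i}$ for $c_{1}, \ldots , c_{n}$. In particular ${\rm{Res}}_{c_{n}}(\mathcal{F}; p) = 1$ and ${\rm{Res}}_{c_{1}^{n}}(\mathcal{F}; p) = (\lambda_{1}+\cdots +\lambda_{n})^{n}/(\lambda_{1}\cdots \lambda_{n})$.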
Also for foliations of higher dimension it is possible to relate the Baum-Bott residue to the Grothendieck residue. Vishik, in \cite{Article-30}, found such a relation under the hypothesis that the foliation has locally free tangent sheaf. In \cite{Article-2}, Baum and Bott, earlier and independently of Vishik, proved a similar result under a genericity assumption on the singular set of the foliation. Let us consider a holomorphic foliation $\mathcal{F}$ of codimension $k$ on a complex manifold $M$ and $\varphi$ a homogeneous symmetric polynomial of degree $k+1$. Note that $\deg \varphi > n - \dim (\mathcal{F})$, which is the condition in the Bott vanishing theorem. We assume that the singular set of $\mathcal{F}$ has pure expected codimension, i.e., ${\rm codim}({\rm{Sing}}(\mathcal{F})) = k+1$. In this case, the notation ${\rm{Sing}}_{k+1}(\mathcal{F})$ is usually used to denote the union of the irreducible components of the singular set of pure codimension $k+1$ of $\mathcal{F}$. If $ Z \subset {\rm{Sing}}_{k+1}(\mathcal{F})$ is such an irreducible component, of pure codimension $k+1$, we consider a $(k+1)$-dimensional ball $B_{p}$, centered at a generic point $p \in Z$, sufficiently small and transversal to $Z$ at $p$. We remark that the restricted foliation $\mathcal{F}|_{B_{p}}$ is a foliation of dimension one with an isolated singularity at $p$. In \cite{Article-14} (Theorem 1.2), M. Corr\^ea and F. Louren\c co relate the Baum-Bott residue of $\mathcal{F}$ along $Z$ to the Grothendieck residue of $\mathcal{F}|_{B_{p}}$ at $p$. \begin{theorem} Let $\mathcal{F}$ be a singular holomorphic foliation of codimension $k$ on a compact complex manifold $M$ such that ${\rm codim}({\rm{Sing}}(\mathcal{F} )) \geq k + 1.$ Then $${\rm{Res}}_{\varphi}(\mathcal{F}; Z) = {\rm{Res}}_{\varphi}(\mathcal{F}|_{B_{p}}; p)[Z],$$ \noindent where ${\rm{Res}}_{\varphi}(\mathcal{F}|_{B_{p}}; p)$ represents the Grothendieck residue at $p$ of the one-dimensional foliation $\mathcal{F}|_{B_{p}}$ on a $(k + 1)$-dimensional transversal ball $B_{p}.$ \end{theorem} The next result is due to Fern\'andez-P\'erez and Tamara in \cite{Article-19} (Theorem 6.2), and it provides another effective way of computing Baum-Bott residues of codimension one holomorphic foliations. First, we need a definition. Consider a germ of holomorphic foliation $\mathcal{F}$ at $0\in \mathbb{C}^n$, defined by a germ of integrable holomorphic $1$-form $\omega$. \begin{definition} We say that $\mathcal{F}$ is an \textit{almost Liouvillian foliation} at $0 \in \mathbb{C}^{n}$ if there exist a germ of closed meromorphic $1$-form $\gamma_{0}$ and a germ of holomorphic $1$-form $\gamma_{1}$ at $0 \in \mathbb{C}^{n}$ such that $$d \omega = (\gamma_{0} + \gamma_{1}) \wedge \omega.$$ \noindent We say that $\mathcal{F}$ is a \textit{simple almost Liouvillian foliation} at $0 \in \mathbb{C}^{n}$ if we can choose $\gamma_{0}$ having only first-order poles. \end{definition} \begin{theorem} Let $\mathcal{F}$ be a germ at $0 \in \mathbb{C}^{n}$, $n \geq 3$, of a simple almost Liouvillian foliation defined by $\omega \in \Omega^{1}(\mathbb{C}^{n},0)$ such that $$ d\omega = (\gamma_{0}+ \gamma_{1}) \wedge \omega.$$ \noindent Let $V$ be the divisor of poles of $\gamma = \gamma_{0} + \gamma_{1}$ and $V_{1}, \ldots , V_{l}$ the irreducible components of $V$. Let $Z$ be an irreducible component of ${\rm{Sing}}_{2}(\mathcal{F})$. Then $$ BB(\mathcal{F}; Z) = \sum_{j=1}^{k} {\rm{Res}}(\gamma_{0}, V_{j})\, Var(\mathcal{F}, V_{j}; Z), $$ \noindent where $V_{1}, \ldots , V_{k}$ are the irreducible components containing $Z$ and $Var(\mathcal{F}, V_{j}; Z)$ denotes the variational index defined by Khanedani and Suwa in \cite{Article-21}. \end{theorem} In \cite{Article-8, Article-27} the notion of the Nash residue of a foliation was developed, which immediately yields a partial answer to the rationality conjecture of Baum and Bott (see \cite{Article-2}, p.287). Let $M$ be a complex manifold of dimension $n$, and let $\mathcal{F}$ be a singular holomorphic foliation of dimension $q$ on $M$. Let us consider, for each point $x$ in $M$, the following set \begin{equation}\label{eq1} F(x) := \{ v(x);\ v \in \mathcal{F}_{x} \}, \end{equation} \noindent where $\mathcal{F}_{x}$ denotes the stalk of $\mathcal{F}$ at $x$. We observe that $F(x)$ is a subspace of the tangent space $T_{x}M$ of dimension $q$ if, and only if, $x \in M \setminus {\rm{Sing}}(\mathcal{F})$. In general, $\dim F(x) \leq q$. In the following, $G(q,n)$ denotes the Grassmannian bundle of $q$-planes in $TM$.
Using the expression $(\ref{eq1})$ we can define a section of $G(q,n)$ outside the singular set of $\mathcal{F}$, $$s: M \setminus {\rm{Sing}}(\mathcal{F}) \longrightarrow G(q,n),$$ \noindent given by $s(x) := F(x).$ We define $M^{\nu} := \overline{Im (s)}$ in $G(q,n)$ and call it the Nash modification of $M$ with respect to the foliation $\mathcal{F}$. We consider ${\rm{Sing}}(\mathcal{F})^{\nu} := \pi^{-1}{\rm{Sing}}(\mathcal{F})$, where $\pi$ is the restriction to $M^{\nu}$ of the projection of the bundle $G(q,n)$; it is a birational map $$\pi : M^{\nu} \longrightarrow M.$$ \noindent Moreover, it is a biholomorphism from $M^{\nu} \setminus {\rm{Sing}}(\mathcal{F})^{\nu}$ to $M \setminus {\rm{Sing}}(\mathcal{F})$. In some cases, one may assume $M^{\nu}$ to be a smooth manifold (see \cite{Article-27}). We denote by $\tilde{T}^{\nu}$ and $\tilde{N}^{\nu}$, respectively, the tautological bundle and the tautological quotient bundle on $G(q,n)$. One then has a short exact sequence $$ 0 \longrightarrow T^{\nu} \longrightarrow \pi^{\ast}TM \longrightarrow N^{\nu} \longrightarrow 0, $$ \noindent where $T^{\nu}$ and $N^{\nu}$ are, essentially, the restrictions of $\tilde{T}^{\nu}$ and $\tilde{N}^{\nu}$ to $M^{\nu}$. It is possible to show that the characteristic class $\varphi(N^{\nu})$, for a homogeneous symmetric polynomial $\varphi$ of degree $d > n- \dim (\mathcal{F})$, localizes at ${\rm{Sing}}(\mathcal{F})^{\nu}$, giving rise to residues $${\rm{Res}}_{\varphi}(N^{\nu}, \mathcal{F}; {\rm{Sing}}(\mathcal{F})^{\nu} ) \in H_{2(n-d)}({\rm{Sing}}(\mathcal{F})^{\nu}; \mathbb{C}),$$ \noindent which we call the Nash residue of $\mathcal{F}$ with respect to $\varphi$ at ${\rm{Sing}}(\mathcal{F})^{\nu}$. In 1989, Sert\"oz \cite{Article-27} (Theorem IV.4, p.238) showed that the difference between the Baum-Bott residue and the Nash residue is an integral class, under the assumption that $M^{\nu}$ is nonsingular. \begin{theorem} Let $S$ be a connected component of the singular set of $\mathcal{F}$ and $\varphi$ a homogeneous symmetric polynomial of degree $d > n- \dim (\mathcal{F})$. Then $${\rm{Res}}_{\varphi}(N_{\mathcal{F}}, \mathcal{F}; S) = {\rm{Res}}_{\varphi}(N^{\nu}, \mathcal{F}; S^{\nu}) +k,$$ \noindent where $k$ is a homology cycle in $S$ computed by a Grassmann graph construction. \end{theorem} In 2000, Brasselet and Suwa \cite{Article-8} (Theorem 4.1, p. 44) gave a result similar to Sert\"oz's, dropping the hypothesis that $M^{\nu}$ is smooth. \begin{theorem} Let $\varphi$ be a homogeneous symmetric polynomial of degree $d > n - \dim (\mathcal{F})$ with integral coefficients. Then the difference $${\rm{Res}}_{\varphi}(N_{\mathcal{F}}, \mathcal{F}; S) - {\rm{Res}}_{\varphi}(N^{\nu}, \mathcal{F}; S^{\nu})$$ \noindent lies in the image of the canonical homomorphism $$H_{2(n-p)}(S; \mathbb{Z}) \longrightarrow H_{2(n-p)}(S; \mathbb{C}).$$ \end{theorem} In the following result, F. Bracci and T. Suwa develop the residue theory for foliations on adequate singular pairs (see \cite{Article-6}). In short, consider a complex manifold $M$ of dimension $m$ and let $P \subset M$ be a complex submanifold of dimension $r$; then we have a short exact sequence $$ 0 \longrightarrow TP \longrightarrow TM|_{P} \longrightarrow N_{P,M} \longrightarrow 0,$$ \noindent where $N_{P,M}$ denotes the normal bundle of $P$ in $M$.
We pick another submanifold $X$ of $M$, of dimension $n$, which intersects $P$ along a submanifold $Y \subset M$ of dimension $n+r-m$, the intersection being everywhere transversal. We say that $(X, Y)$ is an adequate singular pair in $M$ if $r = m+l-n$ and the data satisfy the following: \\ 1) $Y = X \cap P;$ 2) $\dim ({\rm{Sing}}(X) \cap P) < l$; 3) $X_{reg}$ intersects $P$ generically transversely. \\ \begin{theorem}(\cite{Article-6}, Theorem 2.1, p.7) Let $(X,Y)$ be an adequate singular pair in $M$ and let $\mathcal{F}$ be a holomorphic foliation on $X$ of dimension $d \leq l$ which leaves $Y$ invariant. Let $\Sigma = ({\rm{Sing}}(\mathcal{F}) \cup {\rm{Sing}}(Y)) \cap Y$ and assume that $\dim (\Sigma) < l$. Let $\Sigma = \sum_{\gamma} \Sigma_{\gamma}$ be the decomposition into connected components and let $ i_{\gamma} : \Sigma_{\gamma} \hookrightarrow Y$ denote the inclusion. Let $\varphi$ be a symmetric homogeneous polynomial of degree $t>l-d$. Then \begin{enumerate} \item[i)] For each compact connected component $\Sigma_{\gamma}$ there exists a class ${\rm{Res}}_{\varphi}(\mathcal{F}, Y; \Sigma_{\gamma}) \in H_{2l-2t}(\Sigma_{\gamma}; \mathbb{C})$, called the ``residue'', which depends only on the local behavior of $\mathcal{F}$ near $\Sigma_{\gamma}$; \item[ii)] If $Y$ is compact we have $$ \sum_{\gamma} (i_{\gamma})_{\ast} {\rm{Res}}_{\varphi}(\mathcal{F}, Y; \Sigma_{\gamma}) = \varphi(N_{P,M}) \cap [Y] \ \ \mbox{in} \ \ H_{2l-2t}(Y; \mathbb{C}).$$ \end{enumerate} \end{theorem} We note that this ``new concept'' of residue of a foliation can be understood as a generalization of the classical Camacho-Sad residue theorem (see \cite{Article-11}). Corr\^ea et al. in \cite{Article-14.1} proved the following residue formula for orbifolds. Let $X$ be a complex orbifold of dimension $n$ and let $L$ be a line $V$-bundle over $X$; one considers certain Chern classes of the virtual bundle $TX-L^{\vee}$. Moreover, for each point $p$ at which a section $\xi$ of $TX \otimes L$ vanishes, let $$\pi_{p} : (\tilde{U}, \tilde{p}) \rightarrow (U,p)$$ \noindent be a smoothing covering of $X$ at $p$, and write $\tilde{\xi} = \pi_{p}^{\ast} \xi$. One has: \begin{theorem}(\cite{Article-14.1}, Theorem 3.1, p.2897) Let $X$ be a compact orbifold of dimension $n$ with only isolated singularities, let $L$ be a locally $V$-free sheaf of rank $1$ over $X$ and let $L$ also denote the associated line $V$-bundle. Suppose $\xi$ is a holomorphic section of $TX\otimes L$ with isolated zeros. If $P$ is an invariant polynomial of degree $n$, then $$\int_{X} P(TX- L^{\vee}) = \sum_{p \,|\, \xi(p) = 0} \dfrac{1}{\#G_{p}} {\rm{Res}}_{\tilde{p}}\Big[P(J \tilde{\xi}) \dfrac{d\tilde{z}_{1}\wedge \ldots \wedge d\tilde{z}_{n}}{\tilde{\xi}_{1}\cdots \tilde{\xi}_{n}}\Big],$$ \noindent where $G_{p} \subset Gl(n, \mathbb{C})$ denotes a small finite group. \end{theorem} Most of the well-known results in residue theory assume that the components of the singular set are nondegenerate; see for instance \cite{Article-2}. The paper \cite{Article-17} provides a slight improvement of the results of Baum-Bott by considering the degenerate case, under restrictions. Let $v$ be a holomorphic vector field on $V$ and let $W$ be a component of the singular set of $v$ along which the vector field is degenerate.
M. Dia proves a residue formula for $v$ along $W$ under the condition that there is a biholomorphism between a neighborhood of $W$ and a neighborhood of the zero section of the normal bundle of $W$; see Th\'eor\`emes A, B, C and D in \cite{Article-17}. \subsection{Residues of logarithmic foliations of dimension one} This section is dedicated to some results about residues associated with logarithmic holomorphic foliations of dimension one. For this we refer the reader to \cite{Article-0, Article-12.1,Article-14.3, Article-20.11, Article-20.2} and the references therein. The general index of a vector field tangent to hypersurfaces was defined and studied in terms of the homology of the complex of differential forms by X. Gomez-Mont, L. Giraldo and P. Mardesi\'c; see \cite{Article-20.11, Article-20.2}. The first result of this section deals with the logarithmic index and is due to A. G. Aleksandrov \cite{Article-0}. The author defines a logarithmic index using differential forms with logarithmic poles and proves that the homological index can be expressed via the logarithmic index. Let $M$ be a complex manifold of dimension $n$, and let $\Omega_{M}^{q}$, $q \geq 0$, and $Der(M)$ be the sheaves of germs of holomorphic $q$-forms and of holomorphic vector fields on $M$, respectively. Let $D\subset M$ be a divisor all of whose irreducible components have multiplicity one. Given a vector field $V \in Der(M)$ with an isolated singularity at a point $x \in D$, the $\iota_{V}$-homology groups of the complex $(\Omega_{D,x}^{\bullet}, \iota_{V})$ are finite-dimensional vector spaces, where $\iota_{V} : \Omega_{M}^{q} \rightarrow \Omega_{M}^{q-1}$ is the interior multiplication (contraction). We can define the homological index of the vector field $V$ at the point $x \in D$ by $$Ind_{hom,D,x}(V) := \sum_{i=0}^{n}(-1)^{i}\dim H_{i}(\Omega_{D,x}^{\bullet},\iota_{V}).$$ Given a divisor $D$ we can consider the coherent analytic sheaves $\Omega_{M}^{q}(\log D)$, $q >0$, and $ Der_{M}(\log D) = T_{M}(-\log D)$, as in \cite{Article-0}. Take a vector field $V \in Der_{M} (\log D)$. As above, the interior multiplication $\iota_{V}$ defines the structure of a complex on $\Omega_{M}^{\bullet}(\log D).$ \begin{lemma}\label{lemma: important}(\cite{Article-0}, Lemma 1, p.247) If all singularities of $V$ are isolated, then the $\iota_{V}$-homology groups of the complex $\Omega_{M}^{\bullet}(\log D)$ are finite-dimensional vector spaces. \end{lemma} It follows from Lemma \ref{lemma: important} that the \textit{logarithmic index} of the field $V$ is well defined at the point $x$: $$ Ind_{\log D,x}(V) := \sum_{i=0}^{n}(-1)^{i}\dim H_{i}(\Omega_{M,x}^{\bullet}(\log D),\iota_{V}).$$ These indices are related as follows. \begin{proposition}(\cite{Article-0}, Proposition 1, p. 248) Suppose that a point $x \in D$ is an isolated singularity of a vector field $V \in Der(\log D)$, let the germs $V_{i} \in \mathcal{O}_{M,x}$ be determined by the expansion $V = \sum_{i} V_{i} \dfrac{\partial}{\partial z_{i}}$, and set $J_{x}V = (V_{1},\ldots, V_{n})\mathcal{O}_{M,x}$. Then $$Ind_{hom,D,x}(V) = \dim \mathcal{O}_{M,x}/J_{x}V - Ind_{\log D,x}(V).$$ \end{proposition}
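Before stating the residue formulas for logarithmic foliations, let us recall the basic example of these sheaves (a classical computation going back to K. Saito, included here only for illustration): for the normal crossings divisor $D = \{z_{1}\cdots z_{k} = 0\} \subset \mathbb{C}^{n}$, the sheaf $Der_{\mathbb{C}^{n}}(\log D)$ is freely generated by $$ z_{1}\frac{\partial}{\partial z_{1}},\ \ldots,\ z_{k}\frac{\partial}{\partial z_{k}},\ \frac{\partial}{\partial z_{k+1}},\ \ldots,\ \frac{\partial}{\partial z_{n}}, $$ while $\Omega^{1}_{\mathbb{C}^{n}}(\log D)$ is freely generated by $\dfrac{dz_{1}}{z_{1}}, \ldots , \dfrac{dz_{k}}{z_{k}}, dz_{k+1}, \ldots , dz_{n}$.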
In \cite{Article-14.3} the authors consider logarithmic foliations along $D$ and prove residue formulas, namely Baum-Bott type formulas, for non-compact complex manifolds, still in the setting of logarithmic vector fields. \begin{theorem}(\cite{Article-14.3}, Theorem 1, p. 6403) Let $\tilde{X}$ be an $n$-dimensional complex manifold such that $\tilde{X} = X-D$, where $X$ is an $n$-dimensional compact complex manifold and $D$ is a smooth hypersurface on $X$. Let $\mathcal{F}$ be a foliation of dimension one on $X$ with isolated singularities and logarithmic along $D$. Suppose that $Ind_{\log D,p}(\mathcal{F}) = 0$ for all $p \in {\rm{Sing}}(\mathcal{F}) \cap D$. Then $$\int_{X} c_{n}(T_{X}(-\log D)-T_{\mathcal{F}}) = \sum_{p \in {\rm{Sing}}(\mathcal{F})\cap (X \setminus D)} \mu_{p}(\mathcal{F}).$$ \end{theorem} In the same work, the authors consider the case where the divisor $D$ is a normal crossing hypersurface, and one has: \begin{theorem}(\cite{Article-14.3}, Theorem 2, p. 6404) Let $\tilde{X}$ be an $n$-dimensional complex manifold such that $\tilde{X} = X-D$, where $X$ is an $n$-dimensional compact complex manifold and $D$ is a normal crossing hypersurface on $X$. Let $\mathcal{F}$ be a foliation on $X$ of dimension one, with isolated (non-degenerate) singularities, logarithmic along $D$. Then $$\int_{X} c_{n}(T_{X}(-\log D)-T_{\mathcal{F}}) = \sum_{p \in {\rm{Sing}}(\mathcal{F})\cap \tilde{X}} \mu_{p}(\mathcal{F}).$$ \end{theorem} In \cite{Article-12.1} the authors prove new versions of the Gauss-Bonnet and Poincar\'e-Hopf theorems for complex $\partial$-manifolds of the type $\tilde{X}= X-D$, where $\dim X = n \geq 3$ and $D$ is a reduced divisor. More precisely: \begin{theorem}(\cite{Article-12.1}, Theorem 1.1, p. 495) Let $\tilde{X}$ be a complex manifold such that $\tilde{X} = X - D$, where $X$ is an $n$-dimensional ($n\geq 3$) compact complex manifold and $D$ is a reduced divisor on $X$. Given any (not necessarily irreducible) decomposition $D = D_1\cup D_2$, where $D_1$, $D_2$ have isolated singularities and $C=D_1\cap D_2$ is a codimension $2$ variety with isolated singularities, \begin{itemize} \item [(i)] (Gauss-Bonnet type formula) the following formula holds\\ \small{ $$ \int_{X}c_{n}(\Omega^1_X(\log\, D)) = (-1)^{n}\chi(\tilde{X})+\mu( D_1,S(D_1))+ \mu( D_2,S(D_2))- \mu( C,S( C)); $$} \item [(ii)] (Poincar\'e-Hopf type formula) if $v$ is a holomorphic vector field on $X$, with isolated singularities and logarithmic along $ D$, we have that \small{ $$ \chi(\tilde{X}) = PH(v, {\rm{Sing}}(v)) - GSV(v, D_1, S(v, D_1)) - GSV(v, D_2, S(v, D_2)) + $$ $$ +GSV(v, C, S(v, C))+(-1)^{n-1}\left[ \mu( D_1,S(D_1))+ \mu( D_2,S(D_2))- \mu( C, S( C)) \right]. $$} \end{itemize} \end{theorem} \subsection{Residues of flags} In the following, we review some results on residues of flags of foliations which have emerged in recent years. For background on flags, we refer to \cite{Article-9, Article-12.22, Article-20, Article-24} and the references therein. The next result (\cite{Article-9}, Theorem 2) serves two purposes. The first is to show that if $\mathcal{F} = (\mathcal{F}_{1}, \mathcal{F}_{2})$ is a flag of holomorphic foliations on $M$, then we can define the residue of $\mathcal{F}$ as an element of a homology group. The second is to establish a Baum-Bott type theorem for the flag $\mathcal{F}$. \begin{theorem}\label{1.0.14} Let $\mathcal{F} = (\mathcal{F}_{1}, \mathcal{F}_{2} )$ be a 2-flag of holomorphic foliations on a compact complex manifold $M$ of dimension $n$. Let $\varphi_{1}, \varphi_{2}$ be homogeneous symmetric polynomials, of degrees $d_{1}$ and $d_{2}$ respectively, satisfying the Bott vanishing theorem for flags.
Then for each compact connected component $S$ of ${\rm{Sing}}(\mathcal{F})$ there exists a class ${\rm{Res}}_{\varphi_{1}, \varphi_{2}} (\mathcal{F}, \mathcal{N}_{\mathcal{F}}; S ) \in H_{2n - 2(d_{1} + d_{2} )} (S; \mathbb{C})$, which we call the Baum-Bott residue of the flag, such that \begin{equation}\label{eq.0.2} \sum_{\lambda}(\iota_{\lambda})_{\ast} \mbox{{\rm{Res}}}_{\varphi_{1}, \varphi_{2}} (\mathcal{F} , \mathcal{N}_{\mathcal{F}}; S_{\lambda} )= (\varphi_{1}(\mathcal{N}_{12})\cdot \varphi_{2}(\mathcal{N}_{2})) \frown [M] \end{equation} \noindent in $H_{2n - 2(d_{1} + d_{2} )} (M; \mathbb{C})$, where $\iota_{\lambda}$ denotes the embedding of $S_{\lambda}$ in $M$. \end{theorem} It is very useful to have effective ways of computing these residues, and the remainder of this section is devoted to this topic. First, we point out how the residues of the two foliations in a flag are related (see \cite{Article-9}, Proposition 3, p. 1169). \begin{theorem} For a flag $\mathcal{F} = (\mathcal{F}_{1}, \mathcal{F}_{2} )$ on $M$ with $\dim(\mathcal{F}_{1}) = {\rm codim}(\mathcal{F}_{2}) = 1$ and ${\rm{Sing}}(\mathcal{F}_{1}) \cap {\rm{Sing}}(\mathcal{F}_{2})$ consisting only of isolated singularities, we have $$ \mbox{{\rm{Res}}}_{c_{n}} (\mathcal{F}_{2}, \mathcal{N}_{2}; p) = (-1)^{n} (n-1)! \, \mbox{{\rm{Res}}}_{c_{n}} (\mathcal{F}_{1}, \mathcal{N}_{1}; p), $$ \noindent where the residues involved are those of the foliations $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$. \end{theorem} Let us introduce some notation. Given a flag $\mathcal{F} = (\mathcal{F}_{1}, \mathcal{F}_{2})$ on $M$ with ${\rm codim} (\mathcal{F}_{i}) = k_{i}$, $i =1,2$, we denote by $\mbox{{\rm{Sing}}}_{k_{i}+1} \mathcal{F}_{i}$ the union of the irreducible components of ${\rm{Sing}}(\mathcal{F}_{i})$ of pure codimension $k_{i}+1$. Take an irreducible component $Z \subset \mbox{{\rm{Sing}}}_{k_{1} + 1}(\mathcal{F}_{1})$ and a generic point $p \in Z$. Denote by $B_{p}$ a small ball centered at $p$ such that $S(B_{p}) \subset B_{p}$ is a sub-ball of dimension $n - k_{1} - 1$ (i.e., of the same dimension as $Z$). Under the above setting (see \cite{book-0, book-4}) the de Rham class can be integrated over an oriented $(2k_{1} + 1)$-sphere $L_{p} \subset B_{p}^{\ast}$, and this defines the Baum-Bott residue of $\mathcal{F}$ at $Z$: $$ BB^{j}(\mathcal{F}; Z) := (2 \pi i)^{-k_{1} - 1} \int_{L_{p}} \theta^{12}\wedge (d \theta^{2})^{j} \wedge (d \theta^{12})^{k_{1} - j}\ \ \ \mbox{for each} \ \ \ 0 \leq j \leq k_{2}. $$ In particular, in the case where ${\rm codim}({\rm{Sing}}(\mathcal{F})) \geq k_{1} + 1$, we obtain a formula for the residue and a Baum-Bott type theorem; see (\cite{Article-9}, Theorem 4, p.1173). \begin{theorem} Let $\mathcal{F} = (\mathcal{F}_{1}, \mathcal{F}_{2})$ be a 2-flag of codimension $(k_{1}, k_{2})$ on a compact complex manifold $M$. If ${\rm codim}({\rm{Sing}}(\mathcal{F})) \geq k_{1} + 1$, then for each $0 \leq j \leq k_{2}$ we have $$c_{1}^{k_{1} - j + 1}(\mathcal{N}_{12}) \smile c_{1}^{j}(\mathcal{N}_{2}) = \sum_{Z \ \in \ \mbox{{\rm{Sing}}}_{k_{1}+1}(\mathcal{F}_{1}) \cup \mbox{{\rm{Sing}}}_{k_{1}+1}(\mathcal{F}_{2}) } \lambda_{Z}^{j}(\mathcal{F})[Z], $$ \noindent where $\lambda_{Z}^{j}(\mathcal{F}) = BB^{j}(\mathcal{F}, Z)$. \end{theorem} In 2020, Ferreira and Louren\c co \cite{Article-20} extended the residue theory to flags of holomorphic distributions. The next result concerns isolated singularities.
\begin{theorem}(\cite{Article-20}, Theorem 1.2)\label{2.11} Let $\mathcal{F} = (\mathcal{F}_{1}, \mathcal{F}_{2} )$ be a $2$-flag of holomorphic distributions on a compact complex manifold $M$ of dimension $n$, let $\varphi_{1}$ and $\varphi_{2}$ be homogeneous symmetric polynomials of degrees $d_{1}>0$ and $d_{2}> 0$ respectively, and let $p$ be an isolated point of ${\rm{Sing}}(\mathcal{F})$. Then $${\rm{Res}}_{\varphi_{1}, \varphi_{2}} (\mathcal{F} , \mathcal{N}_{\mathcal{F}}; p ) = 0.$$ \end{theorem} In the particular case where $M$ is the projective space $\mathbb{P}^3$ we obtain some progress towards the computation of residues of flags. Furthermore, if the singular scheme of the flag has only one irreducible component, the following result gives an effective way to compute residues; see ([36], Theorem 1.3). \begin{theorem}\label{1.2} Let $\mathcal{F} = (\mathcal{F}_{1}, \mathcal{F}_{2} )$ be a $2$-flag of holomorphic foliations on $\mathbb{P}^{3}$ with $\deg(\mathcal{F}_{i}) = d_{i}$. Then \begin{equation}\label{2} (1 + d_{1} - d_{2})\sum_{Z \in S_{1}(\mathcal{F}_{2})}\deg(Z){\rm{Res}}_{\varphi_{2}}(\mathcal{F}_{2}|_{B_{p}}; p)=\sum_{Z \in S_{1}(\mathcal{F})} {\rm{Res}}_{c_{1}\varphi_{2}}(\mathcal{F}, \mathcal{N}_{\mathcal{F}}; Z), \end{equation} \noindent where $\deg(Z)$ is the degree of the irreducible component $Z$, ${\rm{Res}}_{\varphi_{2}}(\mathcal{F}_{2}|_{B_{p}}; p)$ represents the Grothendieck residue of the foliation $\mathcal{F}_{2}|_{B_{p}}$ at $\{p\} = Z \cap B_{p}$, with $B_{p}$ a transversal ball, and either $\varphi_{2}=c_{1}^{2}$ or $\varphi_{2}=c_{2}.$ \end{theorem} We believe that by working with the theory of residue currents (see \cite{Article-25}) one can obtain a more general expression for computing residues, in particular in the degenerate case. \section{Some applications} \subsection{Residues and the Poincar\'e problem} Motivated by Darboux's work and by the problem of algebraic integrability of differential equations, in 1891 H. Poincar\'e formulated the following question (see \cite{Article-24.1}): \\ \noindent \textit{``Is it possible to bound the degree of an irreducible curve which is invariant by a foliation in terms of the degree of the foliation?''} \\ \noindent This problem is equivalent to asking when a holomorphic foliation on $\mathbb{P}^{2}$ admits a rational first integral. This question is known as the \textit{Poincar\'e problem}. It is well known that such a bound does not exist in general, see \cite{Article-11.1}. However, under certain hypotheses, there are several partial answers and generalizations, even for flags and for Pfaff systems; see for instance \cite{Article-9.1, Article-9.2, Article-11.1, Article-12.2, Article-12.22,Article-14.2, Article-17.1,Article-20, Article-20.12, Article-25.1, Article-27.1}. Residue theory, and especially the Baum-Bott theorem, is a powerful tool and a source of obstructions for several problems related to foliations with singularities. One hundred years later, in (\cite{Article-11.1}, Theorem 1, p.891), Cerveau and Lins Neto gave a first partial answer to the Poincar\'e problem. For this, consider a projective nodal curve $S$, that is, a curve all of whose singularities are of normal crossing type, with reduced homogeneous equation $f=0$, of degree $m$. \begin{theorem} Let $\mathcal{F}$ be a foliation in $\mathbb{CP}(2)$ of degree $n$, having $S$ as a separatrix. Then $m \leq n+2$. Moreover, if $m=n+2$ then $f$ is reducible and $\mathcal{F}$ is of logarithmic type, that is, it is given by a rational closed form $\sum \lambda_{i} \dfrac{df_{i}}{f_{i}}$, where $\lambda_{i} \in \mathbb{C}$ and the $f_{i}$ are homogeneous polynomials. \end{theorem}
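It is instructive to recall the standard example showing why some hypothesis on the curve or on the singularities is necessary (the example is classical; we include the computation only for illustration). For distinct positive integers $p \neq q$, the degree one foliation of $\mathbb{P}^{2}$ given in an affine chart by the $1$-form $$ \omega = p\,x\,dy - q\,y\,dx $$ has the rational first integral $y^{p}/x^{q}$, since $d(y^{p}/x^{q}) = x^{-q-1}y^{p-1}\,\omega$. Hence every member of the pencil $\{y^{p} = c\, x^{q}\}$ is an invariant algebraic curve, and these curves have degree $\max(p,q)$, which is arbitrarily large, while the degree of the foliation stays equal to one.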
Some years later, Carnicer in (\cite{Article-9.2}, Theorem, p.289) obtained the same bound for the Poincar\'e problem under a different hypothesis, namely that the foliation has no dicritical singularities on the curve. Roughly speaking, a foliation $\mathcal{F}$ is dicritical if, and only if, there exists an irreducible surface $Z$ such that the restriction of $\mathcal{F}$ to $Z$ admits infinitely many distinct separatrices. \begin{theorem} Let $\mathcal{F}$ be a foliation of $\mathbb{P}^{2}$ and let $C$ be an algebraic curve in $\mathbb{P}^{2}$. Suppose that $C$ is invariant by $\mathcal{F}$ and that there are no dicritical singularities of $\mathcal{F}$ on $C$. Then $$\deg C \leq \deg \mathcal{F} + 2.$$ \end{theorem} M. Soares in (\cite{Article-27.1}, Theorem B, p.496) extended the Poincar\'e problem from $\mathbb{P}^{2}$ to $\mathbb{P}^{n}$. Using the Baum-Bott theorem, Soares improved the above bound. In fact, let $V \hookrightarrow \mathbb{P}^{n}$, $n \geq 2$, be an irreducible non-singular algebraic hypersurface of degree $d_{0}$, invariant by $\mathcal{F}^{d}$, a non-degenerate one-dimensional holomorphic foliation of degree $d \geq 2$. Then we have: \begin{theorem} $$d_{0} \leq d + 1.$$ \end{theorem} In \cite{Article-9.01} Brunella shows that the GSV-index is the obstruction to a positive solution of the Poincar\'e problem and gives a simple condition implying the non-negativity of this index. Motivated by Brunella's work, Corr\^ea and Machado in \cite{Article-14.2} introduced a GSV type index for varieties invariant by holomorphic Pfaff systems on projective manifolds. The authors proved, under certain hypotheses, a non-negativity property for this index. As a consequence one has the following result. \begin{theorem} Let $\omega \in H^{0}(\mathbb{P}^{n}, \Omega^{k}_{\mathbb{P}^{n}}(d+k+1))$ be a holomorphic Pfaff system of rank $k$ and degree $d$. Let $V \subset \mathbb{P}^{n}$ be a reduced complete intersection variety, of codimension $k$ and multidegree $(d_{1},\ldots, d_{k})$, invariant by $\omega$. Suppose that ${\rm{Sing}}(\omega, V )$ has codimension one in $V$. Then $$\sum_{i}GSV(\omega,V, S_{i}) \deg(S_{i}) = [d+k+1-(d_{1}+ \dots +d_{k})](d_{1}\cdots d_{k}),$$ \noindent where $S_{i}$ denotes an irreducible component of ${\rm{Sing}}(\omega, V)$. In particular, if $GSV(\omega, V, S_{i}) \geq 0$ for all $i$, we have $$d_{1} +\dots + d_{k} \leq d + k + 1.$$ \end{theorem} \subsection{Residues and the non-existence of minimal sets} Let $\mathcal{F}$ be a holomorphic foliation on a compact manifold $X.$ A compact non-empty subset $\mathcal{M} \subset X$ is said to be a \textit{nontrivial minimal set} for $\mathcal{F}$ if the following properties are satisfied: \begin{enumerate} \item[a)] $\mathcal{M}$ is invariant by $\mathcal{F};$ \item[b)] $\mathcal{M} \cap {\rm{Sing}}(\mathcal{F}) = \emptyset;$ \item[c)] $\mathcal{M}$ is minimal with respect to properties $a)$ and $b)$. \end{enumerate} Next, we shall see how residue theory can be used in the problem of the existence or non-existence of nontrivial minimal sets for foliations.
The above problem for codimension one holomorphic foliations on complex projective spaces was considered by Camacho, Lins Neto and Sad in \cite{Article-9.22}. The authors address the problem on $\mathbb{P}^{2}$. More precisely, they prove a geometric property of minimal sets: by applying the maximum principle for harmonic functions, they show that $\mathcal{F}$ has at most one nontrivial minimal set. Moreover, under generic conditions imposed on the singularities of the foliation, all leaves accumulate on that set. In general, however, the question on $\mathbb{P}^{2}$ remains an open problem. For complex projective spaces of higher dimension, Lins Neto (\cite{Article-23.1}, Theorem 1, p. 1370) extended the above discussion and proved the following. \begin{theorem}\label{alcideSO} Codimension 1 foliations on $\mathbb{P}^{n}$, $n\geq 3$, have no nontrivial minimal sets. \end{theorem} We would like to emphasize that residue theory is essential in the proof of Theorem \ref{alcideSO}. In fact, one of the main arguments in the proof is the following proposition (\cite{Article-23.1}, Proposition 1, p. 1372), which is a consequence of the Baum-Bott theorem. \begin{proposition} Let $\mathcal{F}$ be a foliation of degree $k$ on $\mathbb{CP}^{2}$, with singular set ${\rm{Sing}}(\mathcal{F})$ of complex codimension $2$. Then \[ \sum_{p \, \in\, {\rm{Sing}}(\mathcal{F})} BB(\mathcal{F},p) = (k+2)^{2}. \] In particular $\displaystyle{\sum_{p \, \in\, {\rm{Sing}}(\mathcal{F})} BB(\mathcal{F},p)}$ is positive for any foliation $\mathcal{F}$ on $\mathbb{CP}^2$. \end{proposition} In (\cite{Article-9.11}, Theorem 1.2, p.296) the authors, using the Baum-Bott theorem, generalized Lins Neto's result to codimension one holomorphic foliations on complex projective manifolds whose Picard group is cyclic. \begin{theorem}\label{burnella} Let $X$ be a complex projective manifold of dimension at least $3$ with Pic$(X) = \mathbb{Z}$, and let $\mathcal{F}$ be a codimension one foliation on $X$. Then every leaf $L$ of $\mathcal{F}$ accumulates on ${\rm{Sing}}(\mathcal{F})$: $$\overline{L} \cap {\rm{Sing}}(\mathcal{F}) \neq \emptyset.$$ \end{theorem} Theorem \ref{burnella} is a partial answer to Brunella's conjecture (see \cite{Article-9.001}, Conjecture 1.1, p.3102) in the special case when Pic$(X) = \mathbb{Z}$. Recently, in 2021, M. Adachi and J. Brinkschulte proved Brunella's conjecture with no additional hypotheses on the manifold $X$; see (\cite{Article-1.1}, Main Theorem, p.1). \begin{theorem} Let $X$ be a compact complex manifold of dimension $\geq 3$. Let $\mathcal{F}$ be a codimension one holomorphic foliation on $X$ with ample normal bundle $N_{\mathcal{F}}$. Then every leaf of $\mathcal{F}$ accumulates on ${\rm{Sing}}(\mathcal{F})$. \end{theorem} Brunella's conjecture has also been stated for foliations of higher codimension (\cite{Article-14.22}, Conjecture 1.2, p. 1236), and in that generality it remains an open problem. However, in \cite{Article-14.222} the authors, by using a Brunella-Khanedani-Suwa variational type residue theorem for currents invariant by holomorphic foliations, provide conditions for the accumulation of the leaves to the intersection of the singular set of a holomorphic foliation with the support of an invariant current. \end{document}
\begin{document} \chapter{Introduction} The aim of the thesis at hand is to analyse the possible extension of the Lebesgue density theorem to infinite dimensions. We will attempt to explain why it is natural and interesting to ask this question. The direction in which we were planning to take the original research project is quite different from the results which we now present and here we will explain where the motivation comes from. \section{Motivation} \subsection{Similarity Search} The idea which grew into this thesis was the following: investigate the existence of an efficient algorithm which, given a point $q$ known as a query point, will search a high-dimensional object (call it $D$ for database) and return the nearest point or the nearest points to $q$. This is the basic idea behind the well known similarity search, also known as neighbour search and closest point search, and in the case where we accept more than one output point it is also known as $k$-nearest neighbour search and proximity search. For a good introduction to high-dimensional similarity search, see chapter 9 in \cite{SS}. Similarity search has applications in coding theory, database and data mining, statistics and data analysis, information retrieval, machine learning and pattern recognition amongst other fields. If these applications mean nothing to you, some more specific applications would be searching a database of resum\'es for an applicant who suits a particular job description, searching for a used car by price, history, mileage, make, model, year and colour and even searching for a companion through an online dating service. Each of our points (entries) in the database is described by a number of features, and this number may range from a small number like two or three, which means that our objects are described with little detail, to hundreds of thousands and even millions, which means our objects are extremely detailed. For many applications, we may not be able to get a match satisfying all our desired features, in which case it would suffice to return matches satisfying most of the features. This may be decided by giving different weights to the features and choosing matches with the highest weight. This is not necessary if all features have the same priority. Also, we may not want only one match. We may want our algorithm to return a number of matches and leave it up to the human to choose the best match according to their own discretion, personal preference, or according to some criteria which is not taken into consideration in the database. For example, when searching for a companion, what one person finds visually appealing is personal and there is no criteria which will allow an algorithm to pick out an exact match. In these cases, $k$-nearest neighbour search is preferred over nearest neighbour search. Now, a very relevant question would be, what do we mean by a high-dimensional object? Well, this object is usually a space in which our data points lie. The dimension, $d$, of our space depends on the number of features of each point. It is greater than or equal to the number of features of the point with the highest number of features. For example, if our data consists of information about an employee at some company, then an entry may include features such as the worker's employee number, first name, last name, gender, position at the company and maybe even marital status. Thus, a 6-dimensional database is enough to store information about employees. 
So, the question can now be stated as follows: given a database of employees, $D$, with the features mentioned above stored in $D$, and some data, $q$, about someone, find the employee in $D$ who is the closest match to $q$, or return the $k$ workers who best fit the description. For simplicity, and since we are studying the asymptotic case, we choose as our domain $\mathbb{R}^\infty$ to be the set of all possible points. A typical example of a domain would be a Euclidean space with Euclidean distance or some $\ell_p$ metric. There are many algorithms to search through data structures like k-d trees, M-trees and locality-sensitive hashing (again see \cite{SS} for more examples and for references to literature as this work has veered away from the topic). A model that we wanted to investigate uses a random geometric graph and we assumed a greedy walk type algorithm. Start at a random vertex. Take one step to another node which is closer to the query. From this new vertex, repeat the last step and continue until we can get no closer to the query point, then stop. We do not deal with the dynamic case where nodes are removed, instead, in our case, new nodes are added but the older ones remain. The most difficult concept to grasp is the assumption that as our dimension grows, and more is known about each object, our overview of them changes only slightly in that as we see more data we expect it to agree in some sense to what we have already seen. In the case where data lies in a low-dimensional space, very efficient algorithms exist to find the solution to our query \cite{HE}. However, as dimension increases algorithms fall prey to what is known as the ``curse of dimensionality'' (see \cite{SS} for a discussion). Informally, this means that the higher our dimension, the more difficult it is to find the nearest neighbour(s) of a given point. In fact, given a high enough dimension, known algorithms perform no better than a brute force search \cite{WSB}. It is widely believed that in high-dimensional databases there are no efficient search algorithms, but it remains unproven at a mathematical level, so we are interested in a framework for the analysis of this problem. In an effort to solve this problem of inefficiency in high dimensions, it would be nice to find an algorithm which is independent of dimension. Of course, this is impossible, so we settle for something a bit more realistic, and that is an algorithm with time complexity of order $ O(\log d)$. Now, the amount of work which goes into searching our database depends on how our data is stored. If data is stored haphazardly, then the time to find the nearest neighbour(s) of a query point may increase exponentially. Since we are concerned with efficiency, we have decided that the most suitable structure to model our data would be a random graph, but not just any random graph, a random geometric graph. \subsection{Random Geometric Graphs} Whenever a mathematician hears the words `random graph', the first model which comes to mind is the Erd\H os-R\'enyi model, $G(n,p)$, where we start with $n$ vertices and choose each of the edges of the graph with equal probability $p$. Whilst the study of these kinds of graphs is useful, they are not very realistic and in this situation are of little use to us. If we attempt to search through this model of a random graph, then there is no assurance that `jumping' from one vertex to another will bring us any closer to our target, $q$. 
However, random graphs of this type are still useful for proving the existence of graphs satisfying certain properties. In our investigation, we planned to study a different kind of random graph model which has been of interest recently. These graphs have a little more structure than those of the Erd\H os-R\'enyi model, and it is this structure which we will try to take advantage of. By their very nature, they are an excellent choice to model data in a high-dimensional space. They are still random, and no less random than any other model, but random in a sense which is more suitable to our situation. Random geometric graphs are formed by choosing $n$ points in a space at random according to some probability distribution, and joining any two points, separated by less than some specified distance, with an edge. This model is used to study distributed wireless networks, sensory based communication networks, Percolation Theory and cluster analysis --- which is a powerful technique with applications in Medicine, Biology and Ecology. An excellent reference which introduces and goes in-depth into random geometric graphs is \cite{MP}. Our initial aim was to use random geometric graphs in the study of similarity search. Of course, random geometric graphs are an obvious model on which such work can be done. Given a set (database) of $n$ points in a $d$-dimensional metric space and a specific query point $q$, we would like to find the nearest neighbour, $p$, to that point $q$. When using random geometric graphs, we would connect all points which are within a distance $r$ from each other, then choose any point to start our search. By using the greedy algorithm described earlier, we would get a result. The asymptotic setting in which we would like to work, and in which researchers model similarity search, is obtained by letting both $n$ and $d$ tend to infinity. This setting requires us to work in $\mathbb{R}^{\infty}$. \section{Our Setting} Let $f$ be a probability density function on $\mathbb{R}^{\infty}$ which is bounded and integrable with respect to a certain measure, $\lambda^{\infty}$, that is, $\int_{\mathbb{R}^{\infty}} f \, d \lambda^{\infty} = 1$. Let us assume that our data is modelled by a sequence $X_1, X_2, \ldots$ of independent and identically distributed random variables taking their values in $\mathbb{R}^\infty$. Let $\mathbb{R}^{\infty}$ be equipped with a metric, $\rho(\cdot, \cdot)$, which induces the product topology. \begin{egg} The metric \[ \rho(x,y) = \sum_{i=1}^{\infty} 2^{-i} \frac{|x_i - y_i|}{1 + |x_i - y_i|} \] where $x = (x_i), y = (y_i) \in \mathbb{R}^{\infty}$, induces the product topology on $\mathbb{R}^{\infty}$. \end{egg} Let $\mathfrak{X}_n = \{ X_1, X_2, \ldots, X_n \} \subset \mathbb{R}^{\infty}$ be the vertex set of an undirected graph $G(\mathfrak{X}_n, r)$ where any two vertices $X_i, X_j$ are joined if and only if $\rho(X_i, X_j) < r$, where $r$ denotes the radius. We call $G(\mathfrak{X}_n, r)$ a random geometric graph. Now if we let our random geometric graph have a fixed radius $r$, but continue to add vertices (increase $n$ while keeping $d$ constant), then the average degree of each vertex is guaranteed to rise. This is undesirable if we are using a greedy algorithm to search our graph, because it increases search time. To curb this phenomenon and others similar to it, we introduce a sequence of radii $(r_n)$ and limiting regimes \cite{MP}. We are interested in one particular limiting regime for $(r_n)$, namely the \emph{thermodynamic limit}.
In this limiting regime, the expected degree of a typical vertex tends to be constant. Here, $r_n = cn^{-\frac{1}{d}}$ for some constant $c$. It can be shown that if the limiting constant is taken to be above some critical value, then with high probability a giant component will arise in $G(\mathfrak{X}_n; r_n)$. This giant component is a connected subgraph which contains most of the vertices of the entire graph. It is studied in \cite{CL, MP}. There are other limiting regimes which we may come across; these include: \begin{itemize} \item[(i)] the sparse limit regime: $nr_n^d \rightarrow 0$; \item[(ii)] the dense limit regime: $nr_n^d \rightarrow \infty$; \item[(iii)] the connectivity regime: $r_n \propto ((\log n)/n)^{\frac{1}{d}}$ (this is a special case of the dense limit regime). \end{itemize} Knowing these limits is of extreme importance, and here is why. If we are looking for the nearest neighbours to our query point $q$, we would hope that $q$ is not part of the giant component, because our search becomes much easier when we have eliminated a large portion of vertices. Another scenario which may be of importance is finding a subgraph in our model which is isomorphic to some other graph $\Gamma$. We will look at the latter in more detail as it will propel us to our main objective. We have found that proving analogues of certain results about the behaviour of random geometric graphs in $\mathbb{R}^d$ as $d$ tends to infinity requires using the Lebesgue density theorem in $\mathbb{R}^\infty$, and so we have concentrated on the task of finding an analogue of this result. Let us say a word about the so-called Lebesgue measure $\lambda^{\infty}$ on $\mathbb{R}^\infty$. It is known that given any infinite-dimensional locally convex topological vector space, $X$, there does not exist a non-trivial translation-invariant sigma-finite Borel measure on $X$ \cite{RB, HSY}. In other words, Haar measure does not exist in an infinite-dimensional setting. However, there have been a few mathematicians who have written of analogues to the Lebesgue measure in infinite dimensions; they include R. Baker \cite{RB}, Y. Yamasaki \cite{YY}, and the trio of N. Tsilevich, A. Vershik and M. Yor \cite{TVY}. In \cite{RB}, a nontrivial translation invariant Borel measure, $\lambda^{\infty}$, on $\mathbb{R}^{\infty}$ is introduced. This is a sigma-additive Borel measure which is analogous to the Lebesgue measure, but it is of course not sigma-finite. We focus on this measure in the hope that it will lead us to a positive result. An interesting circumstance of both $n$ and $d$ tending to infinity is that at any point in time, only the first $d$ coordinates (features) are revealed to us while the rest are kept hidden. As time goes by, however, in addition to obtaining new data points, our knowledge of the data increases as we discover new features and their values at previous data points. Thus, at any step in time we have $n$ points modelled by independent and identically distributed random variables distributed with respect to $f_d \cdot \lambda^d$, where $f_d$ and $\lambda^d$ can be thought of as the projection of our probability density function, $f$, and the push-forward of our measure, $\lambda^{\infty}$, to the finite-dimensional setting. Now, these functions, $f_d$, depend only on the first $d$ coordinates, and as we get to know more information, our functions do not vary greatly. This is intuitively what we want.
We do not want or expect the data that is obtained in the future to be extremely different from what we have at any point in time. Thus, we are not dealing with complete independence. And let us not forget that $n$ is also increasing, but $n$ is tied to $d$ in the sense that it is not independent. It grows faster than $d$, but only sub-exponentially. More precise definitions of $f_d$ and $\lambda^{\infty}$ will be given later on. $\lambda^d$ is just the $d$-dimensional Lebesgue measure, which is well known. What we should keep in mind is that at any instant in time, we have a random geometric graph in $\mathbb{R}^d$. \section{Overview} The plan for the rest of the paper is as follows: In chapter 2, we provide the base and background needed to continue through the paper without getting lost. The terminology and notation introduced there will be used throughout the rest of the paper. New or specific terms will be defined in the chapter in which they are put forth. Chapter 3 explains the reason for embarking on this project. We go through the theorem which started it all. We see for the first time the Lebesgue density theorem, the extension of which is the purpose of this paper. We review briefly the concept of a Lebesgue point and state Lebesgue's density theorem. We see why it is not possible to simply extend this concept, as is, to an infinite-dimensional case. In chapter 4, we deal with the measure which we will be using in our analysis. It is a very interesting measure and it is supposedly very nice to work with, seeing as it is analogous to the Lebesgue measure. We start by showing that a non-trivial translation-invariant locally finite measure is not possible in an infinite-dimensional setting. This is a weaker statement than that proved by Andr\'e Weil, but it still shows us that working in infinite dimensions is not easy. We then go through the construction of the ``Lebesgue measure'' on $\mathbb{R}^\infty$. Chapter 5 is concerned with a not very well known example by Jean Dieudonn\'e. It was instrumental in shaping the outcome of this paper. We go through his paper in detail, as this is probably the first English translation of the result, after which we point out the complication which arose due to the result. Chapter 6 introduces a new class of functions which satisfy the analogous Lebesgue density theorem and brings to the fore a very interesting theorem by Jessen. This theorem was crucial in attempting to solve our problem. It is just as interesting, and complex, that the generalisation of this theorem does not hold, as shown by Dieudonn\'e in chapter 5. The final chapter exhibits some routes on which we embarked but later abandoned for one reason or another. It also shows some of the difficulties we had in general. In the last section we produce the final result showing the impossibility of the density theorem on $\mathbb{R}^\infty$, in full generality as required by our original goals. The last detail we would like to point out before continuing is that double daggers $(\ddag)$ will indicate new results. \cleardoublepage \chapter{Preliminaries} \section{Some Important Definitions} The purpose of this chapter is to introduce some of the terms (and general notation) that will be used throughout the paper. As we come across or introduce specific terms, they will be defined in the chapter in which they appear. The definitions here have been compiled from \cite{RD, HP, RHK, JLK, SL, JEM, JRM, CAR, WR}.
\subsection{Topology} A collection $\tau$ of subsets of a set $X$ is said to be a \emph{topology} in $X$ if $\tau$ has the following three properties: \begin{itemize} \item $\emptyset \in \tau$ and $X \in \tau$ \item If $A_i \in \tau$ for $i = 1, \ldots, n$, then $A_1 \cap A_2 \cap \cdots \cap A_n \in \tau$ \item If $\{A_{\alpha}\}$ is an arbitrary collection of members of $\tau$, then $\cup_{\alpha} A_{\alpha} \in \tau$. \end{itemize} \begin{egg} Let $X_1 = \{1, 2, 3, 4\}$, then the following are topologies of $X_1$: \\ $\tau_1 = \{\emptyset, \{1\}, \{2\}, \{1,2\}, \{1,2,3,4\}\}$ \\ $\tau_2 = \{\emptyset, \{3\}, \{1,2,3\}, \{1,2,3,4\}\}$ \end{egg} If $\tau$ is a topology on $X$, then $(X, \tau)$ is a \emph{topological space}, and the members of $\tau$ are called \emph{open sets} in $X$. A set $B \subseteq X$ is \emph{closed} if its complement $B^c$ is open. Both $\emptyset$ and $X$ are closed, finite unions of closed sets are closed and arbitrary intersections of closed sets are closed. A set can be both open and closed and a set can be neither open nor closed. $X$ is called \emph{connected} if and only if the only subsets of $X$ which are both open and closed in $\tau$ are the empty set, $\emptyset$, and $X$ itself. The \emph{closure} of a set $B \subseteq X$ is the smallest closed set in $X$ which contains $B$. \begin{egg} The sets $\emptyset$ and $X_1$ are always both open and closed. $\{3\} \subset X_1$ is neither open nor closed with respect to $\tau_1$, however, it is an open set with respect to $\tau_2$. The closure of $\{3\}$ with respect to $\tau_2$ is $X_1$. \end{egg} A set $U$ in a topological space $(X, \tau)$ is a \emph{neighbourhood} of a point $x$ if and only if $U$ contains an open set to which $x$ belongs. A neighbourhood of a point need not be an open set, but every open set is a neighbourhood of each of its points. A family $\mathcal{B}$ of sets is a \emph{base} for a topology $\tau$ if and only if $\mathcal{B}$ is a subfamily of $\tau$ and for each point $x$ of the space, and each neighbourhood $U$ of $x$, there is a member $V$ of $\mathcal{B}$ such that $x \in V \subset U$. A space whose topology has a countable base is called \emph{second countable}. Any second countable space is \emph{separable} but not vice versa. A family $\mathcal{S}$ of sets is a \emph{subbase} for a topology $\tau$ if and only if the class of finite intersections of members of $\mathcal{S}$ is a base for $\tau$. Equivalently, iff each member of $\tau$ is the union of finite intersections of the members of $\mathcal{S}$, then $\mathcal{S}$ is a subbase for $\tau$. A map $f$ of a topological space $(X, \tau)$ into a topological space $(Y, \upsilon)$ is \emph{continuous} if and only if $f^{-1}(U) \in \tau$ for each $U \in \upsilon$. That is, iff the inverse of each open set is open then $f$ is continuous. A family $\mathcal{C}$ of sets is a \emph{cover} of a set $X$ if and only if $X$ is a subset of the union $\bigcup\{A:A \in \mathcal{C}\}$. That is, iff each member of $X$ belongs to some $A \in \mathcal{C}$. It is an open cover if and only if each $A \in \mathcal{C}$ is an open set. A \emph{subcover} of $\mathcal{C}$ is a subset of $\mathcal{C}$ that still covers $X$. A set $B \subset X$ is \emph{compact} if every open cover of $B$ contains a finite subcover. If $X$ is compact, it is called a compact space. 
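For small finite examples such as $\tau_1$ and $\tau_2$ above, the three axioms can be checked mechanically. The following Python sketch is an informal aside rather than part of the formal development; the function name \texttt{is\_topology} and the brute-force approach are our own. For a finite family it suffices to check the unions of all subfamilies.
\begin{verbatim}
from itertools import combinations

def is_topology(tau, X):
    """Check the three topology axioms for a family tau of subsets
    of a finite set X, by brute force."""
    tau = {frozenset(A) for A in tau}
    X = frozenset(X)
    if frozenset() not in tau or X not in tau:
        return False
    # closure under pairwise (hence all finite) intersections
    if any(A & B not in tau for A in tau for B in tau):
        return False
    # closure under arbitrary unions: for a finite family, it is
    # enough to check the union of every subfamily
    members = list(tau)
    for k in range(len(members) + 1):
        for sub in combinations(members, k):
            if frozenset().union(*sub) not in tau:
                return False
    return True

X1   = {1, 2, 3, 4}
tau1 = [set(), {1}, {2}, {1, 2}, {1, 2, 3, 4}]
tau2 = [set(), {3}, {1, 2, 3}, {1, 2, 3, 4}]
print(is_topology(tau1, X1), is_topology(tau2, X1))   # True True
\end{verbatim}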
\begin{defn} \label{D:Ideal} Given a set $X$, a non-empty subset, $I$, of the power set of $X$ is called an \emph{ideal} on $X$ if: \begin{itemize} \item $A \in I$ and $B \subseteq A$ implies $B \in I$ and \item $A, B \in I$ implies $A \cup B \in I$. \end{itemize} \end{defn} \subsection{Metric Spaces} The most familiar topological spaces are metric spaces. A \emph{metric space} is a set $X$ in which a distance function (or metric) $\rho$ is defined, with the following properties: \begin{itemize} \item $0 \leq \rho(x,y) < \infty$ for all $x, y \in X$ \item $\rho(x,y) = 0$ iff $x=y$ \item $\rho(x,y) = \rho(y,x)$ for all $x, y \in X$ \item $\rho(x,y) \leq \rho(x,z) + \rho(z,y)$ for all $x, y, z \in X$ \end{itemize} \begin{defn} \label{D:PositivelySeparated} Two sets $A, B$ in a metric space $X$ with metric $\rho$ are \emph{positively separated} if \[ \inf_{\substack{a \in A\\b \in B}} \rho(a, b) > 0. \] \end{defn} Let $(X, \rho_1)$ and $(Y, \rho_2)$ be two metric spaces. A function $f:X \rightarrow Y$ is called \emph{uniformly continuous} if for every $\epsilon > 0$, there exists a $\delta > 0$ such that for all $x_1, x_2 \in X$ with $\rho_1(x_1, x_2) < \delta$ we have $\rho_2(f(x_1), f(x_2)) < \epsilon$. Every uniformly continuous function is continuous, but not vice versa. \begin{defn} Let $x_n$ be a sequence of numbers. We say $x_n$ \emph{converges} to $x$ if for any number $\epsilon > 0$ there is an integer $N$ such that $|x_n-x| < \epsilon$ for all integers $n \geq N$. This is written as $\lim_{n \rightarrow \infty} x_n = x$ or $x_n \rightarrow x$ as $n \rightarrow \infty$. A sequence $x_n$ in $\mathbb{R}$ is called a \emph{Cauchy sequence} if for every number $\epsilon > 0$ there is an integer $N$ (depending on $\epsilon$), such that $|x_n-x_m| < \epsilon$ whenever $n \geq N$ and $m \geq N$. \end{defn} \subsection{Measure Theory} A collection $\mathfrak{M}$ of subsets of a set $X$ is said to be a \emph{$\sigma$-algebra} in $X$ if it has the following properties: \begin{itemize} \item $X \in \mathfrak{M}$ \item If $A \in \mathfrak{M}$, then $A^c \in \mathfrak{M}$, where $A^c$ is the complement of $A$ relative to $X$. \item If $A = \bigcup_{n=1}^\infty A_n$ and if $A_n \in \mathfrak{M}$ for $n = 1, 2, \ldots$, then $A \in \mathfrak{M}$. \end{itemize} \begin{egg} Let $X_1 = \{1, 2, 3, 4\}$, then the following are both $\sigma$-algebras of $X_1$: \\ $\mathfrak{M}_1 = \{\emptyset, \{1\}, \{2,3,4\}, \{1,2,3,4\}\}$ \\ $\mathfrak{M}_2 = \{\emptyset, \{1\}, \{2\}, \{1,2\}, \{3,4\}, \{2,3,4\}, \{1,3,4\}, \{1,2,3,4\}\}$ \end{egg} If $\mathfrak{M}$ is a $\sigma$-algebra in $X$, then $(X, \mathfrak{M})$ is called a \emph{measurable space}, and the members of $\mathfrak{M}$ are called \emph{measurable sets} in $X$. If $X$ is a measurable space, $Y$ is a topological space, and $f$ is a mapping of $X$ into $Y$, then $f$ is said to be \emph{measurable} provided that $f^{-1}(V)$ is a measurable set in $X$ for every open set $V$ in $Y$. \begin{defn} \label{D:Premeasure} A function $\xi$ on a family $\mathfrak{C}$ of subsets of $X$ is called a pre-measure if: \begin{itemize} \item $\emptyset \in \mathfrak{C}$ and $\xi(\emptyset) = 0$; \item $0 \leq \xi(C) \leq +\infty$ for all $C$ in $\mathfrak{C}$. 
\end{itemize} \end{defn} \begin{defn} \label{OuterMeasure} A function $\mu^\star$ defined on the sets of a space $X$ is called an \emph{outer measure} on $X$ if it satisfies the conditions: \begin{itemize} \item $\mu^\star(E)$ takes values in $[0, + \infty]$ for each subset $E$ of $X$; \item $\mu^\star(\emptyset) = 0$; \item if $E_1 \subset E_2$ then $\mu^\star(E_1) \leq \mu^\star(E_2)$; and \item if $\{E_i\}$ is any sequence of subsets of $X$ then \[ \mu^\star \left ( \bigcup_{i=1}^\infty E_i \right ) \leq \sum_{i=1}^\infty \mu^\star(E_i). \] \end{itemize} \end{defn} \begin{defn} \label{D:MetricOuterMeasure} An outer measure $\mu^\star$ defined on a metric space $X$ is called a \emph{metric outer measure} if \[ \mu^\star(A \cup B) = \mu^\star(A) + \mu^\star(B) \] for all pairs of positively separated sets $A, B$. \end{defn} \begin{defn} A \emph{measure}, $\mu$, is an outer measure restricted to a $\sigma$-algebra $\mathfrak{M}$, and which is countably additive. This means if $\{A_i\}$ is a disjoint countable collection of members of $\mathfrak{M}$, then \[ \mu \left ( \bigcup_{i=1}^{\infty} A_i \right ) = \sum_{i=1}^{\infty} \mu(A_i) \] \end{defn} \begin{defn} A \emph{measure space} is a measurable space which has a measure defined on the $\sigma$-algebra of its measurable sets. \end{defn} \begin{defn} The measure $\mu$ is called \emph{$\sigma$-finite} if $X$ is a countable union of measurable sets of finite measure. \end{defn} Let $\mu$ be a measure on a $\sigma$-algebra $\mathfrak{M}$, then \begin{itemize} \item[(i)] $\mu(\emptyset) = 0$. \item[(ii)] $\mu(A_1 \cup \cdots \cup A_n) = \mu(A_1) + \ldots + \mu(A_n)$ if $A_1, \ldots, A_n$ are pairwise disjoint members of $\mathfrak{M}$. \item[(iii)] $A \subseteq B$ implies $\mu(A) \leq \mu(B)$ for $A, B \in \mathfrak{M}$. \item[(iv)] $\mu(A_n) \rightarrow \mu(A)$ as $n \rightarrow \infty$ if $A = \bigcup_{n=1}^{\infty}A_n,\;A_n \in \mathfrak{M}$, and $A_1 \subset A_2 \subset \cdots$ \item[(v)] $\mu(A_n) \rightarrow \mu(A)$ as $n \rightarrow \infty$ if $A = \bigcap_{n=1}^{\infty}A_n,\;A_n \in \mathfrak{M}$, and $A_1 \supset A_2 \supset \cdots$ and $\mu(A_1)$ is finite. \end{itemize} \begin{defn} Let $(X, \tau)$ be a topological space; let $\mathfrak{B}$ denote the Borel $\sigma$-algebra on $X$, that is, the smallest $\sigma$-algebra on $X$ that contains all open sets $U \in \tau$. Let $\mu$ be a measure on $\mathfrak{B}$. Then $\mu$ is called a \emph{Borel measure}. \end{defn} \begin{defn} The \emph{support} of a measure $\mu$ is defined to be the set of all points $x$ in $X$ for which every open neighbourhood of $x$ has positive measure: \[ \mbox{supp }(\mu) := \left \{ x \in X: x \in U \in \tau \Rightarrow \mu(U) > 0 \right \} \] \end{defn} \begin{defn} $L^1(X, \mu)$ is the set of equivalence classes of all real-valued functions on a topological space $X$ which have finite integral with respect to the measure $\mu$, where $f \sim g \mbox{ iff } \mu(\{x: f(x) \neq g(x)\}) = 0$. It may be written as $L^1(X)$ when the measure is understood. This space is equipped with the norm \[ ||f||_1 = \int_X |f(x)| \, d\mu(x). 
\] \end{defn} \begin{theo} [Egorov's theorem] \label{T:EgorovTheorem} Given a sequence $(f_n)$ of real-valued functions on some measure space $(X, \mathfrak{M}, \mu)$ and a measurable set $A$ with $\mu(A) < \infty$ such that $(f_n)$ converges $\mu$-almost everywhere on $A$ to a limit function $f$, the following result holds: for every $\epsilon > 0$, there exists a measurable subset $B \subset A$ such that $\mu(B) < \epsilon$, and $(f_n)$ converges to $f$ uniformly on $A \setminus B$. \end{theo} In other words, pointwise convergence on $A$ implies uniform convergence everywhere except on some subset $B$ of fixed small measure. \subsection{Graph Theory} A \emph{graph} is a pair $G = (V, E)$ of sets such that $E \subseteq [V]^2$, where $[V]^2$ denotes the collection of all 2-element subsets of $V$, and $V \cap E = \emptyset$. The elements of $V$ are called \emph{vertices} (or \emph{points}) of $G$ and the elements of $E$ are called \emph{edges}. We will write $xy$ for an edge between two elements $x,y \in V$. The number of edges at each vertex is called the \emph{degree} of the vertex. The number of vertices of a graph is called its \emph{order}. If we let $G = (V, E)$ and $G^\prime = (V^\prime, E^\prime)$ be two graphs, then $G$ and $G^\prime$ are \emph{isomorphic} if there exists a bijection $\Phi: V \rightarrow V^\prime$ with $xy \in E \Leftrightarrow \Phi(x)\Phi(y) \in E^\prime$ for all $x, y \in V$. Such a map $\Phi$ is called and \emph{isomorphism} and we write $G \cong G^\prime$. If $V^\prime \subseteq V$ and $E^\prime \subseteq E$, then $G^\prime$ is a \emph{subgraph} of $G$, written $G^\prime \subseteq G$. If $G^\prime \subseteq G$ and $G^\prime$ contains all the edges $xy \in E$ with $x, y \in V^\prime$, then $G^\prime$ is an \emph{induced subgraph} of $G$. \section{Product Topology vs. Box Topology} Let $X$ and $Y$ be topological spaces, we shall call them \emph{coordinate spaces}. Let $U \subseteq X$ and $V \subseteq Y$, with $U$ and $V$ open with respect to the topologies on $X$ and $Y$. The family of all cartesian products $U \times V$ forms a base for a topology for $X \times Y$. This topology is called the \emph{product topology} of $X \times Y$. The functions $\pi_0: X \times Y \rightarrow X$ and $\pi_1: X \times Y \rightarrow Y$, which take $(x,y) \in X \times Y$ to $x \in X$ and $y \in Y$ respectively, are called \emph{projections} onto the coordinate spaces. These functions are continuous because given any open set $U \subseteq X$, we have $\pi_0^{-1}(U) = U \times Y$ which is an open set in $X \times Y$. The product topology is the coarsest topology (topology with the fewest open sets) for which all projections into their respective coordinate spaces are continuous. Now let us generalise this idea to the case of an arbitrary number of coordinate spaces. Let $\{X_i: i \in I\}$ be a collection of topological spaces indexed by $I$ ($I$ may be finite, countable or uncountable), and let $X = \prod_i X_i$ be the cartesian product of these topological spaces. A subbase for the product topology is formed by the collection of sets $\pi_i^{-1}(U)$ where $U \subseteq X_i$ is open. A base for the product topology is the family of all finite intersections of these subbase elements. If our index set $I$ contains an infinite number of elements, then as a base we have the family of all cartesian products of open sets from each coordinate space with all but finitely many factors equal to the entire space. 
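To make this last point concrete: a basic open set of a countable product constrains only finitely many coordinates, so membership can be decided by inspecting finitely many entries of a point. The Python sketch below is purely illustrative; the encoding of a basic open set as a dictionary of open-interval constraints is our own choice.
\begin{verbatim}
def in_basic_product_open(x, constraints):
    """Membership of a point x (a sequence of reals, indexed from 0) in a
    basic open set of the product topology, encoded as a dictionary
    {coordinate index: (a, b)} of finitely many open-interval constraints.
    Every unconstrained coordinate ranges over the whole factor space."""
    return all(a < x[i] < b for i, (a, b) in constraints.items())

# the basic open set (0,1) x R x (2,3) x R x R x ... :
U = {0: (0.0, 1.0), 2: (2.0, 3.0)}
x = (0.5, 100.0, 2.5, -7.0, 3.1)   # coordinates beyond the constrained
                                   # ones are irrelevant
print(in_basic_product_open(x, U))  # True
\end{verbatim}
In the box topology, introduced next, a basic open set may impose a non-trivial constraint on every coordinate at once, so no such finite check suffices.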
A more na\"ive approach to the above generalisation leads to what is known as the box topology. In this approach, our base consists of the family of sets formed by taking the product of infinitely many open subsets, one in each coordinate space. Examples of topologies other than the box topology and the product topology on the cartesian product of a collection of topological vector spaces are given in \cite{CJK}. It is obvious that for a finite number of coordinate spaces the box and product topologies agree. It is also obvious that the box topology is finer than the product topology. However, the most important reason why the product topology is chosen over the box topology is because many theorems about finite products hold for arbitrary products if the product topology is used, but not for the box topology \cite{JRM}. That being said, the box topology is still useful for constructing counterexamples. The box topology came before the product and was studied first. It was not until Tychonoff that the product topology became the canonical topology for a cartesian product of an infinite number of topological spaces. Let us see how these two topologies differ by some examples. The following examples are taken from \cite{DS}. \begin{egg} In the product topology, the product of compact spaces is compact --- this is the famous Tychonoff product theorem. This fails in the box topology. Consider $\mathbb{I}^{\infty}$ --- the countable product of copies of the unit interval, $\mathbb{I}$. If $A_0 = [0,1) \mbox{ and } A_1 = (0,1]$, then the collection of all open sets of the form $A_{\epsilon_1} \times A_{\epsilon_2} \times \cdots$, where $\epsilon_i = 0,1$, is an uncountable open cover of $\mathbb{I}^{\infty}$ with no proper subcover. For if $A_{\epsilon_1} \times A_{\epsilon_2} \times \cdots$ is excluded from the cover, the point $(\epsilon_1,\epsilon_2,\ldots)$ is not covered. \end{egg} \begin{egg} In the product topology, the product of connected spaces is connected. This is not true in the box topology. Consider, for example, $\mathbb{R}^{\infty}$ --- the countable product of real lines. The set \[ A = \{(x_1,x_2,\ldots): {x_i} \mbox{ is a bounded sequence}\} \] is both open and closed in the box topology, and thus $\mathbb{R}^{\infty}$ is not connected with respect to the box topology. \end{egg} The following theorem is taken from \cite{CJK}: \begin{theo} \label{T:DiscontBoxProd} Let $\mathbb{I}^\infty$ be equipped with the box topology. Let $x, y \in \mathbb{I}^\infty$. Then $x$ and $y$ are in the same connected component of $\mathbb{I}^\infty$ if and only if the set $\{i: x_i \neq y_i\}$ has finite cardinality. If the number of members of $\{i: x_i \neq y_i\}$ is infinite then there exists a $U \subseteq \mathbb{I}^\infty$ which is open and closed at the same time and for which $x \in U$ while $y \notin U$. \end{theo} \begin{rmk} Here are some other interesting facts about the box topology: \begin{itemize} \item Every basic open set in the product topology is in the box topology but not every box open set is in product topology. Basic open sets are countable unions of intersections of finitely many sets of the form $\pi_i^{-1}(U)$ where $U \subseteq X_i$ is open. On the other hand, box open sets can be arbitrary unions of intersections of infinitely many sets of the form $\pi_i^{-1}(U)$. \item Parallelepipeds are an infinite-dimensional generalisation of the cuboid with sides parallel to the coordinate axes. They are not open in the product topology but they are open in the box topology. 
\end{itemize} \end{rmk} So, as follows from theorem \ref{T:DiscontBoxProd}, $\mathbb{I}^\infty$ with the box topology has uncountably many disconnected components, and because of this there are lots of continuous functions. This is a strange property of the box topology --- a property that we do not want, and this is one reason we will do away with the box topology. The purpose for the brief appearance of the box topology here is that during the course of our research we put our hopes on it having a connection with the Lebesgue density theorem, but these hopes were dashed. Next, we turn our attention to the issue at hand. We give a brief introduction of the Lebesgue density theorem in finite dimension and the reason we have chosen to extend it. \cleardoublepage \chapter{Lebesgue Density Theorem} The purpose of this work is to prove an extension of the theorem --- which goes by the name of this chapter --- to infinite-dimensional spaces and give examples of functions which satisfy it. But first, seeing that there is no obvious connection between the Lebesgue density theorem and random graph theory, we show what role the density theorem plays in graph theory by going through a proof which uses it. Then we state what the Lebesgue density theorem and examine it superficially. Later on we attempt to explain why it is non-trivial to find an extension to infinite dimensions. \section{Justification for Use} We came across the Lebesgue density theorem trying to obtain an analogue of the proposition which follows (the proposition and some of the terms used here are taken from \cite{MP}, chapter 3). The proposition may seem incomprehensible if we do not explain some of the notation first. So let's get right to it. We will use $\Gamma$ to denote a \emph{feasible} connected graph of order $k$ which means that the probability that some random geometric graph on $k$ vertices with radius $r$ is isomorphic to $\Gamma$ is strictly positive, for some $r > 0$. As an example, take the 2-dimensional case with the normal Euclidean distance. The star graphs $S_1 \ldots S_6$ are feasible, but $S_7$ and above are not. A star graph on $n$ vertices, $S_n$, is a graph with one vertex having degree $n-1$ and the remaining $n-1$ vertices each having degree $1$. We will call a vertex of degree $1$ a \emph{leaf}. If we assume that the radius from the centre vertex to each leaf is one, then for $S_7$, which has 6 leaves, there are two consecutive leaves for which the distance between them will be less than or equal to one and so we would need to have the edge between those two leaves added in order to make it a random geometric graph. However, the additional edge leaves us with a graph which is not $S_7$, and thus the probability of getting a graph isomorphic to $S_7$ is zero. Moving on, let $(x_1, x_2) \succeq (y_1, y_2)$ if and only if either $x_1 > y_1$ or $x_1 = y_1$ and $x_2 \geq y_2$, then $\succeq$ is called the lexicographic order on the plane $\mathbb{R}^2$. The generalisation to higher dimensions is obvious. Now, given a finite set of points $\mathfrak{Y} \subset \mathbb{R}^d$, let the first element of $\mathfrak{Y}$ according to the lexicographic ordering of $\mathbb{R}^d$ be called the left-most point of $\mathfrak{Y}$ (LMP($\mathfrak{Y}$)). Let $G_{n,A}$ be the number of unlabelled induced subgraphs of $G$, with radius $r_n$, isomorphic to $\Gamma$ for which the left-most point of the vertex set lies in $A \subseteq \mathbb{R}^d$, and $E[G_{n,A}(\Gamma)]$ is the expectation of that number. 
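For small instances, the quantity $G_{n,A}(\Gamma)$ can be computed directly by brute force, which may help fix the definitions just given. The sketch below is purely illustrative: the helper names and the uniform sample standing in for the density $f$ are our own, and the isomorphism test is a naive one that is only sensible for very small $k$. It builds $G(\mathfrak{X}_n; r_n)$ from a finite point set, runs over all $k$-subsets, and counts those whose induced subgraph is isomorphic to $\Gamma$ and whose left-most point lies in $A$.
\begin{verbatim}
import itertools, random

def rgg_edges(pts, r):
    """Edge set of the geometric graph G(pts; r), Euclidean distance."""
    n = len(pts)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if sum((a - b) ** 2 for a, b in zip(pts[i], pts[j])) <= r * r}

def induced_edges(edges, subset):
    """Edges of the induced subgraph, relabelled on {0, ..., k-1}."""
    idx = {v: k for k, v in enumerate(sorted(subset))}
    return {(idx[i], idx[j]) for (i, j) in edges if i in idx and j in idx}

def isomorphic(e1, e2, k):
    """Brute-force isomorphism test for graphs on vertex set {0,...,k-1}."""
    norm = lambda e: (min(e), max(e))
    e2 = {norm(e) for e in e2}
    return any({norm((p[i], p[j])) for (i, j) in e1} == e2
               for p in itertools.permutations(range(k)))

def count_G_nA(pts, r, gamma_edges, k, in_A):
    """G_{n,A}(Gamma): k-subsets inducing a copy of Gamma whose
    lexicographically least (left-most) point lies in A."""
    edges = rgg_edges(pts, r)
    return sum(1 for S in itertools.combinations(range(len(pts)), k)
               if in_A(min(pts[i] for i in S))        # LMP, lexicographic
               and isomorphic(induced_edges(edges, S), gamma_edges, k))

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(60)]
path3 = {(0, 1), (1, 2)}                 # Gamma: the path on 3 vertices
print(count_G_nA(pts, 0.12, path3, 3, lambda p: p[0] < 0.5))
\end{verbatim}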
$\partial A$ denotes the boundary of $A$ and is defined as the intersection of the closure of $A$ with the closure of $A$'s complement. We take the Lebesgue measure of the boundary of $A$ to be zero. Given a connected graph $\Gamma$ on $k$ vertices and $A \subseteq \mathbb{R}^d$, define, for all finite $\mathfrak{Y} \subset \mathbb{R}^d$: \[ h_\Gamma(\mathfrak{Y}) := 1_{\{G(\mathfrak{Y};1) \cong \Gamma\}} \] \[ h_{\Gamma, n, A}(\mathfrak{Y}) := 1_{\{G(\mathfrak{Y};r_n) \cong \Gamma\} \cap \{LMP(\mathfrak{Y}) \in A\}} \] The first indicator function equals 1 when the point set $\mathfrak{Y}$ with radius $r = 1$ forms a graph isomorphic to $\Gamma$. The second equals 1 when the point set $\mathfrak{Y}$ with radius $r = r_n$ forms a graph isomorphic to $\Gamma$ and the left-most point of $\mathfrak{Y}$ lies in the set $A$. It should be noted that both functions vanish on any set $\mathfrak{Y}$ of fewer than $k$ vertices. Finally, we set \[ \mu_{\Gamma, A} := k!^{-1} \int_A f(x)^k dx \int_{(\mathbb{R}^d)^{k-1}} h_\Gamma(\{0,x_1, \dots, x_{k-1}\})d(x_1, \dots, x_{k-1}) \] Points to note about this quantity $\mu_{\Gamma, A}$: \begin{enumerate} \item $k!^{-1}$ makes sure that we count each graph only once, since there are $k!$ ways to choose the same $k$ points. \item $\int_A f(x)^k dx$ reflects the fact that all $k$ points of such a subgraph must fall near a common location $x \in A$, each with density approximately $f(x)$; this integral is finite since $f$ is bounded and integrable. \item $\int_{(\mathbb{R}^d)^{k-1}} h_\Gamma(\{0,x_1, \dots, x_{k-1}\})d(x_1, \dots, x_{k-1})$ measures the set of configurations $(x_1, \dots, x_{k-1})$ for which the graph on $\{0,x_1, \dots, x_{k-1}\}$ with radius $1$ is isomorphic to $\Gamma$. \end{enumerate} Here, $f$ is some specified probability density function on $\mathbb{R}^d$ which is bounded. It should not be confused with the $f$ used elsewhere in this work to mean a probability density function on $\mathbb{R}^\infty$. \begin{prop}\label{T:SubgraphCount} Suppose that $\Gamma$ is a feasible connected graph of order $k \geq 2$, that $A \subseteq \mathbb{R}^d$ is open with $\lambda^d(\partial A) = 0$, and that $\lim_{n \rightarrow \infty}(r_n) = 0$. Then \[ \lim_{n \rightarrow \infty} r_n^{-d(k-1)} n^{-k} E[G_{n,A}(\Gamma)] = \mu_{\Gamma,A} \qquad (*) \] \end{prop} \begin{proof} Clearly $E[G_{n,A}(\Gamma)] = {n \choose k} E[h_{\Gamma, n, A}(X_k)]$. Hence, \begin{align} E[G_{n,A}(\Gamma)] = & {n \choose k} \int_{\mathbb{R}^d} \ldots \int_{\mathbb{R}^d} h_{\Gamma, n, A}(\{x_1, \ldots, x_k\})f(x_1)^k dx_k \ldots dx_1 \notag \\ &+ {n \choose k} \int_{\mathbb{R}^d} \ldots \int_{\mathbb{R}^d} h_{\Gamma, n, A}(\{x_1, \ldots, x_k\}) \notag \\ &\times \left ( \prod_{i=1}^k f(x_i) - f(x_1)^k \right ) \prod_{i=1}^k dx_i. \qquad (**) \notag \end{align} By the change of variables $x_i = x_1 + r_{n}y_{i}$ for $2 \leq i \leq k$, and $x_1 = x$, the first term on the right-hand side of (**) equals \[ {n \choose k} r_n^{d(k-1)} \int_{\mathbb{R}^d} \ldots \int_{\mathbb{R}^d} h_{\Gamma, n, A}(\{x, x + r_{n}y_{2}, \ldots, x + r_{n}y_{k}\}) dy_k \ldots dy_2 f(x)^k dx. \] Since $A$ is open, for $x \in A$ the function $h_{\Gamma, n, A}(\{x, x + r_{n}y_{2}, \ldots, x + r_{n}y_{k}\})$ equals $h_\Gamma(\{0, y_2, \ldots, y_k\})$ for all large enough $n$, while for $x \notin A \cup \partial A$ it equals zero for all $n$. Also, $h_{\Gamma, n, A}(\{x, x + r_{n}y_{2}, \ldots, x + r_{n}y_{k}\})$ is zero except for $(y_2, \ldots, y_k)$ in a bounded region of $(\mathbb{R}^d)^{k-1}$, while $f(x)^k$ is integrable over $x \in \mathbb{R}^d$ since $f$ is assumed bounded. Therefore, by the dominated convergence theorem for integrals, the first term on the right-hand side of (**) is asymptotic to $n^k r_n^{d(k-1)} \mu_{\Gamma, A}$. 
On the other hand, the absolute value of the second term on the right-hand side of (**) multiplied by $n^{-k} r_n^{-d(k-1)}$ is bounded by $\int_{\mathbb{R}^d} w_n(x_1) f(x_1) dx_1$, where we set \[ w_n(x) := \int_{B(x;kr_n)} \ldots \int_{B(x;kr_n)} r_n^{-d(k-1)} \left | \prod_{i=2}^k f(x_i) - f(x)^{k-1} \right | dx_2 \ldots dx_k. \] If $f$ is continuous at $x$, then clearly $w_n(x)$ tends to zero. Even if $f$ is not almost everywhere continuous, we assert that $w_n(x)$ still tends to zero if $x$ is a Lebesgue point of $f$. This is proved by induction on $k$; the inductive step is to bound the integrand by \[ r_n^{-d(k-1)} \left ( |f(x_k) - f(x)| \prod_{i=2}^{k-1} f(x_i) \right ) + r_n^{-d(k-1)} \left | \prod_{i=2}^{k-1} f(x_i) - f(x)^{k-2} \right | f(x). \] The integral of the first expression over $B(x;kr_n)^{k-1}$ tends to zero by the definition of a Lebesgue point (and boundedness of $f$), while that of the second tends to zero by the inductive hypothesis. Hence, by the Lebesgue density theorem and the dominated convergence theorem, $\int_{\mathbb{R}^d} w_n(x_1) f(x_1) dx_1$ tends to zero, and combined with the asymptotics of the first term this proves (*). \end{proof} So, as we see, the Lebesgue density theorem is of use in dispensing with any assumption of continuity on $f$. For this reason we would like to have an analogue of it in the infinite-dimensional case. Before we get to that, let us see what exactly this Lebesgue density theorem is and how it works in finite dimensions. Then, once we have an understanding of the basics, we proceed to expand on that knowledge. \section{The Classical Case for $\mathbb{R}^d$} To understand the Lebesgue density theorem, we first need to grasp the concept of a Lebesgue point. Let us look at the definition. \begin{defn} If $f \in L^1(\mathbb{R}^d)$, any $x \in \mathbb{R}^d$ for which it is true that \[ \lim_{\epsilon \to 0} \frac{1}{\lambda^d(B_x(\epsilon))} \int_{B_x(\epsilon)} |f(y) - f(x)|d\lambda^d(y) = 0, \] where $\lambda^d$ is the $d$-dimensional Lebesgue measure, is called a \emph{Lebesgue point} of $f$. \end{defn} The definition of a Lebesgue point is simple and straightforward, but the Lebesgue density theorem is amazing and takes some time to get used to. \begin{theo}[Lebesgue density theorem] If $f \in L^1(\mathbb{R}^d)$, then almost every $x \in \mathbb{R}^d$ is a Lebesgue point of $f$. \end{theo} The Lebesgue density theorem has a well-known proof which can be found in \cite{WR} p. 139. Here we are going to work on it and transform it into some form that we can use. We cannot use the proof as is for a couple of reasons: the first is that it is proved for finite dimensions only, and another is that the tools used to prove it, such as the Hardy-Littlewood maximal function, may not survive the generalisation to infinite dimensions. In fact, the Hardy-Littlewood maximal function has properties which are proved using the Vitali covering theorem, which itself does not hold in the infinite-dimensional setting in which we are working. This fact was proven by David Preiss \cite{DP} in 1979 and the result was strengthened by Jaroslav Ti\v ser \cite{JT1} in 2003. However, the Lebesgue differentiation theorem, of which the Lebesgue density theorem is a special case, has been shown to hold for some class of Gaussian measures and all integrable functions provided that we change almost everywhere convergence to convergence in measure. This is so even though the Vitali covering theorem fails in general for Gaussian measures \cite{JT2}. 
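Before examining the theorem further, here is a small numerical sketch of the definition (an informal aside, assuming Python with \texttt{numpy}; the function names are our own). For $f = 1_{[0,1]}$ on $\mathbb{R}$, the averaged deviation $\frac{1}{\lambda^1(B_x(\epsilon))}\int_{B_x(\epsilon)}|f(y)-f(x)|\,d\lambda^1(y)$ tends to $0$ at the interior point $x = 0.5$, which is therefore a Lebesgue point, while at the boundary point $x = 0$ it stays near $\frac{1}{2}$, so $0$ is not a Lebesgue point.
\begin{verbatim}
import numpy as np

def f(y):
    """Indicator function of the interval [0, 1]."""
    y = np.asarray(y, dtype=float)
    return ((0.0 <= y) & (y <= 1.0)).astype(float)

def local_mean_deviation(x, eps, n=200001):
    """Average of |f(y) - f(x)| over a uniform grid on B_x(eps) = (x-eps, x+eps),
    approximating (1 / lambda(B_x(eps))) * integral of |f(y) - f(x)| dy."""
    y = np.linspace(x - eps, x + eps, n)
    return float(np.mean(np.abs(f(y) - f(x))))

for eps in [0.1, 0.01, 0.001]:
    print(eps, local_mean_deviation(0.5, eps), local_mean_deviation(0.0, eps))
# first column: tends to 0   (0.5 is a Lebesgue point of f)
# second column: stays near 0.5   (0 is not a Lebesgue point of f)
\end{verbatim}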
\begin{rmk} It is easy to see that the theorem holds true for locally integrable functions, but for our purpose in the case of $\mathbb{R}^\infty$, local integrability is not defined. \end{rmk} Let us look at how the Lebesgue density theorem works for continuous functions. All continuous functions are locally $L^1$ and so satisfy the theorem: every point has a bounded neighbourhood, the restriction of a continuous function to such a neighbourhood is bounded, and a bounded function on a set of finite measure is in $L^1$. If $f$ is continuous, then the following proof (taken from \cite{NV}) shows that every point $x$ of the domain of $f$ is a Lebesgue point. \begin{theo} \label{T:LebAtContinuousPt} Let $\mu$ be a measure on $\mathbb{R}^d$ and let $f$ be a function on $\mathbb{R}^d$ which is continuous at a point $x_0$. Then $x_0$ is a Lebesgue point of $f$. \end{theo} \begin{proof} Since $f$ is continuous at $x_0$, for every $\varepsilon > 0$ there exists a cube, $\Pi$, of strictly positive measure containing $x_0$ such that $|f(y) - f(x_0)| \leq \varepsilon$ for all $y \in \Pi$. Suppose $\Pi^{\prime} \subseteq \Pi$ is a cube containing $x_0$ with $0 < \mu(\Pi^{\prime}) < \mu(\Pi)$; then: \[ \frac{1}{\mu(\Pi^{\prime})} \int_{\Pi^{\prime}} |f(y) - f(x_0)| \, d\mu(y) \leq \frac{1}{\mu(\Pi^{\prime})} \int_{\Pi^{\prime}} \varepsilon \, d\mu(y) = \varepsilon \] Therefore: \[ \lim_{\mu(\Pi^{\prime}) \rightarrow 0} \frac{1}{\mu(\Pi^{\prime})} \int_{\Pi^{\prime}} |f(y) - f(x_0)| \, d\mu(y) = 0 \] and $x_0$ is a Lebesgue point of $f$. \end{proof} As a result of this simple proof, we are assured that every point of the domain of a continuous function is a Lebesgue point. Note that although $\Pi$ was chosen to be a cube, it could have been a ball or any Borel neighbourhood of $x_0$. \begin{egg} The function \( f(x) = \left \{ \begin{array}{cl} 0 & \mbox{ if } x \in \mathbb{R} \setminus \mathbb{Q} \\ x & \mbox{ if } x \in \mathbb{Q} \end{array} \right. \) is continuous at only one point, namely $x = 0$, and in particular that point is a Lebesgue point of the function. \end{egg} At this moment, we should clarify a point which may have been missed. A point of continuity of $f$ is a Lebesgue point of $f$, but a Lebesgue point of $f$ is not necessarily a point of continuity of $f$. The following example demonstrates this fact. \begin{egg} The function \( 1_{\mathbb{Q}}(x) = \left \{ \begin{array}{cl} 0 & \mbox{ if } x \notin \mathbb{Q} \\ 1 & \mbox{ if } x \in \mathbb{Q} \end{array} \right. \) is nowhere continuous; however, every point $x \in \mathbb{R} \setminus \mathbb{Q}$ is a Lebesgue point of $1_{\mathbb{Q}}$, even though every point is a point of discontinuity of $1_{\mathbb{Q}}$. \end{egg} As mentioned earlier, the Lebesgue density theorem is a special case of the Lebesgue differentiation theorem and we may sometimes prove results for the differentiation theorem which will still hold for the density theorem. We will not stress the distinction between the two concepts; in fact, they are very similar. The Lebesgue differentiation theorem states that: \begin{theo} For almost every $x \in \mathbb{R}^d$: \[ \lim_{\epsilon \to 0} \frac{1}{\lambda^d(B_x(\epsilon))} \int_{B_x(\epsilon)} f(y) \, d\lambda^d(y) = f(x) \] where $f: \mathbb{R}^d \to \mathbb{R}$ is Lebesgue-integrable, and $B_x(\epsilon)$ is the ball of radius $\epsilon$ around $x$. 
\end{theo} \cite{NV} demonstrates how easy it is to show that if $x \in \mathbb{R}^d$ is a Lebesgue point of $f$, then \[ \lim_{\epsilon \to 0} \frac{1}{\lambda^d(B_x(\epsilon))} \int_{B_x(\epsilon)} f(y) \, d\lambda^d(y) = f(x). \] Let us assume $x \in \mathbb{R}^d$ is a Lebesgue point of $f$. Then for all $\epsilon > 0$, we have: \begin{align} \left | \frac{1}{\lambda^d(B_x(\epsilon))} \int_{B_x(\epsilon)} f(y) \, d\lambda^d(y) - f(x) \right | &= \left | \frac{1}{\lambda^d(B_x(\epsilon))} \int_{B_x(\epsilon)} (f(y) - f(x)) \, d\lambda^d(y) \right | \notag \\ &\leq \frac{1}{\lambda^d(B_x(\epsilon))} \int_{B_x(\epsilon)} |f(y) - f(x)| \, d\lambda^d(y) \notag \end{align} Hence, from \[ \lim_{\epsilon \to 0} \frac{1}{\lambda^d(B_x(\epsilon))} \int_{B_x(\epsilon)} |f(y) - f(x)| \, d\lambda^d(y) = 0 \] we conclude that \[ \lim_{\epsilon \to 0} \frac{1}{\lambda^d(B_x(\epsilon))} \int_{B_x(\epsilon)} f(y) \, d\lambda^d(y) = f(x). \] We have just scratched the surface of the Lebesgue density theorem. Later on, we look at densities of functions in $\mathbb{I}^\infty$ and $\mathbb{R}^\infty$ and we also look at a more general forms of the Lebesgue density theorem. As stated earlier, the Lebesgue density theorem, as is, fails in infinite dimensions but we will attempt to remedy this by finding the correct version of the theorem which will work in infinite dimensions and in particular, $\mathbb{R}^{\infty}$. We will also detail the difficulties faced in extending it. Before we do that, however, we need an interesting tool. As seen in the proof of \ref{T:SubgraphCount}, it is of importance that we have a translation-invariant measure because we always rescale our points according to the left-most point, and if our measure is not translation-invariant then it is obvious that we would not be able to count the number of subgraphs using the same proof. This measure we are interested in should be on $\mathbb{R}^\infty$ and we should be able to take projections down into any space of finite dimension $\mathbb{R}^d$. Once we find this measure, we will need a Lebesgue density theorem for it. Next, we reveal the resulting candidate of our search for such a non-trivial translation-invariant measure on $\mathbb{R}^\infty$. \cleardoublepage \chapter{``Lebesgue Measure'' on $\mathbb{R}^\infty$} Is it possible to measure the volume of an infinite-dimensional object? If so, how do we do it? And is it possible to do so in an easy and intuitive manner? Can an infinite-dimensional object even have finite volume? Of course it can, take as an example the infinite-dimensional cube of side 1, which we will denote by $C_1$. Its volume is expected to be 1. Now consider the parallelepiped (an infinite-dimensional generalisation of the cuboid with sides parallel to the coordinate axes) formed from $C_1$ by shortening one of its sides to $\frac{1}{2}$. Its volume, also, is expected to be $\frac{1}{2}$. Thus, from this example, we see that there exists uncountably many infinite-dimensional objects with finite volume. The purpose of this chapter is to find a ``nice'', intuitive way to measure the volumes of such objects. By nice we mean as simple as the Lebesgue measure for finite-dimensional objects. Of course, we also want to go beyond well shaped objects such as cubes and parallelepipeds. We need to make it as general as possible. If a Lebesgue measure on $\mathbb{R}^\infty$ already existed, then there would be no reason for this chapter. 
And in the true sense of what a measure is, a non-trivial one does not exist and cannot exist. Clearly, the trivial measure where we assign a `size' of zero to each set will work, but we require something with more substance. Now, if we lessen our expectations slightly, we may be able to obtain what we desire. We will settle for a measure which is not $\sigma$-finite. Our setting requires a nice translation-invariant measure on $\mathbb{R}^\infty$. However, it is well known in functional analysis that the trivial measure is the only $\sigma$-finite translation-invariant Borel measure on an infinite-dimensional locally convex topological vector space \cite{HSY, YY}. The proof of this statement can be found in \cite{YY} pp. 138 - 143. As such, there is no analogue to the Lebesgue measure on a space of infinite dimensions. Although this is well known, it does not stop mathematicians from coming up with measures on these spaces. Richard Baker, amongst others \cite{TVY, YY}, has shown the existence of and constructed a non-trivial translation-invariant Borel measure which is almost as nice to work with as the Lebesgue measure, but on the infinite-dimensional space $\mathbb{R}^\infty$. It should be noted that this measure is not $\sigma$-finite as we mentioned earlier. What does it mean when we say that this measure is almost as nice to work with as the Lebesgue measure? As Baker puts it, it means that if $R = \prod_{i=1}^{\infty}(a_i,b_i)$ is any infinite-dimensional parallelepiped such that the ``volume'' $\prod_{i=1}^{\infty}(b_i-a_i)$ of $R$ is a non-negative real number, then \[ \lambda^\infty(R) = \prod_{i=1}^{\infty}(b_i-a_i) \] where $\lambda^\infty$ is our so-called \emph{infinite-dimensional Lebesgue measure on $\mathbb{R}^\infty$}. This is as intuitive and as easy as it gets. In this chapter we attempt to summarize (and reproduce some proofs) of Richard Baker's paper which goes by the same title as the chapter. This is a survey of his paper and so no new results will be given in this chapter except for a lemma toward the end of it which tries to help us understand $\lambda^\infty$. We will soon go through the construction of this measure in detail, but first, we provide a weaker proof than the one which states that there is only one $\sigma$-finite translation-invariant Borel measure on an infinite-dimensional space and it is the trivial measure. \section{Impossibility Theorem} One important thing to note is that the measure we are going to use, as simple and intuitive as it is, lacks a key property, that of being $\sigma$-finite. This section attempts to show that it is actually impossible to have such a nice measure in $\mathbb{R}^\infty$, not including the trivial measure. To see that this measure is not $\sigma$-finite, let us refer back to our example of the cube. However, instead of having a cube with sides of length 1, let it have sides of length 2 and denote it by $C_2$. This cube exists in $\mathbb{R}^\infty$. The volume of this cube is obviously infinity, but the question is, can it be covered by a countable number of cubes of finite volume each? The answer is no, for if we were to cover it with cubes of side length 1 (which have finite volume as noted earlier, that being 1), we would need $2^\infty$ of these cubes to cover $C_2$, and that is an uncountable number. \begin{theo} Let $(X, \| \cdot \|)$ be an infinite-dimensional, separable Banach space. 
Then the only locally finite (every point of the measure space has a neighbourhood of finite measure) and translation-invariant Borel measure $\mu$ on $X$ is the trivial measure, with $\mu(A) = 0$ for every measurable set $A \subseteq X$. \end{theo} \begin{proof} Equip $X$ with a locally finite, translation-invariant Borel measure $\mu$. By local finiteness, choose a $\delta > 0$ such that the open ball $B_x(\delta)$ of radius $\delta$ around an arbitrary element $x \in X$ has finite $\mu$-measure. Since $X$ is infinite-dimensional, the ball $B_x(\delta)$ is not totally bounded, and so there exists a $\gamma > 0$ such that $B_x(\delta)$ cannot be covered with finitely many balls of radius $\gamma$. By this fact, it is easy to construct an infinite sequence of points $x_1, x_2, \ldots, x_n, \ldots \in B_x(\delta)$ so that the open balls $B_{x_n}(\frac{\gamma}{2})$, $n \in \mathbb{N}$, of radius $\frac{\gamma}{2}$, are pairwise disjoint and are contained in $B_x(\delta)$. By translation-invariance, all of the smaller balls $B_{x_n}(\frac{\gamma}{2})$ have the same measure, and as the sum of these measures is finite, the smaller balls must all have $\mu$-measure zero. Now, since $X$ is separable, it can be covered by a countable collection of balls of radius $\frac{\gamma}{2}$, and because each of these balls has $\mu$-measure zero, so must the whole space $X$; thus $\mu$ is the trivial measure. \end{proof} Simply speaking, one can fit infinitely many disjoint smaller balls of equal size inside a ball of finite measure, which means, by translation-invariance and $\sigma$-additivity, that each of these smaller balls has measure zero. On the other hand, because the space is separable, it can be covered by countably many small balls of measure zero, which means that it, too, has measure zero. \section{Construction of the Measure $\lambda^\infty$} Here we get to the heart of the chapter and one of the main points of interest of this paper. This construction of an analogue to the Lebesgue measure follows the same plan as that of the regular Lebesgue measure construction given in C. A. Rogers' ``Hausdorff Measures'' \cite{CAR}. The idea behind this measure is as follows: cover our object (set) with as few infinite-dimensional parallelepipeds of finite volume as needed, then take the sum of the volumes of these parallelepipeds. This gives a rough estimate of the volume of the object. As there may be overlaps of these covering parallelepipeds, we take the smallest sum that we can get (which is obtained where the least overlap occurs). Since it may not be possible to cover the object with totally disjoint parallelepipeds, the infimum of the above sum will suffice. At this point, we would like to introduce some new notation and recall a few definitions, namely that of: positively separated (\ref{D:PositivelySeparated}), pre-measure (\ref{D:Premeasure}), outer measure (\ref{OuterMeasure}) and metric outer measure (\ref{D:MetricOuterMeasure}). Then we state a few theorems without proof. The proofs of these theorems and most of the terms defined here can be found in Chapter 1 of \cite{CAR}. \begin{defn} Let $\mathfrak{R}$ be the family of all infinite-dimensional parallelepipeds $R~\subseteq~\mathbb{R}^{\infty}$ of the form \[ R = \prod_{i=1}^{\infty}(a_i,b_i), -\infty < a_i \leq b_i < +\infty, \] such that $0 \leq \prod_{i=1}^{\infty}(b_i - a_i) < +\infty$ where the product converges in the normal sense. \end{defn} \begin{defn} Let $\xi$ be the function on $\mathfrak{R}$ defined by \[ \xi(R) = \prod_{i=1}^{\infty}(b_i - a_i), \qquad R \in \mathfrak{R}. 
\] \end{defn} \begin{defn} Let $\lambda^\infty$ be the function defined on all subsets of $\mathbb{R}^{\infty}$ by \[ \lambda^\infty(E) = \inf_{\substack{R_j \in \mathfrak{R}\\\cup R_j \supseteq E}} \sum_{j=1}^{\infty} \xi(R_j), \qquad E \subseteq \mathbb{R}^{\infty}. \] \end{defn} Let us agree that any infimum taken over an empty set of real numbers has the value $+\infty$. \begin{theo} \label{T:TransInvBorelRinfty} The $\lambda^\infty$ defined above is a translation-invariant Borel measure on $\mathbb{R}^{\infty}$ such that for all $R = \prod_{i=1}^{\infty}(a_i,b_i) \in \mathfrak{R}$, we have \[ \lambda^\infty(R) = \prod_{i=1}^{\infty}(b_i - a_i). \] \end{theo} Theorem \ref{T:TransInvBorelRinfty} is the main theorem of interest. The purpose of the present chapter is to prove this theorem. \begin{theo} [Method I] If $\xi$ is a pre-measure defined on a family $\mathfrak{C}$ of subsets of $X$, the function \[ \mu(E) = \inf_{\substack{C_j \in \mathfrak{C}\\\cup C_j \supseteq E}} \sum_{j=1}^{\infty} \xi(C_j) \] is an outer measure on $X$. \end{theo} \begin{theo} \label{T:UniqueXsi} Let $X$ be an arbitrary non-empty set and let $\mu$ be an outer measure on $X$. Let $\xi = \mu$; then $\xi$ is a pre-measure. Let $\lambda$ be the outer measure constructed by Method I from the pre-measure $\xi$; then $\lambda$ coincides with $\mu$. \end{theo} \begin{theo} [Method II] If $\xi$ is a pre-measure defined on a family $\mathfrak{C}$ of subsets of a metric space $X$ with metric $\rho$, the set function \[ \mu(E) = \sup_{\delta > 0} \mu_\delta(E) = \lim_{\delta \rightarrow 0} \mu_\delta(E), \] where \[ \mu_\delta(E) = \inf_{\substack{C_j \in \mathfrak{C}\\\mbox{diam}(C_j) \leq \delta\\\cup C_j \supseteq E}} \sum_{j=1}^{\infty} \xi(C_j) \] is an outer measure on $X$. Here, $\mbox{diam}(C)$ is the diameter of $C \subseteq X$ with respect to $\rho$. \end{theo} \begin{theo}\label{T:MetricOuterMeasure} Let $\mu$ be an outer measure on $X$ (a metric space with metric $\rho$), constructed by Method II from the pre-measure $\xi$. Then $\mu$ is a metric outer measure on $X$. \end{theo} \begin{theo}\label{T:AllBorelMeasurable} If $\mu$ is a metric outer measure on a metric space $X$, then every Borel set in $X$ is $\mu$-measurable. \end{theo} The plan for the remainder of the chapter is to construct the measure $\lambda^\infty$ on $\mathbb{R}^\infty$ satisfying the properties that all Borel sets in $\mathbb{R}^\infty$ are $\lambda^\infty$-measurable and $\lambda^\infty$ is translation-invariant on $\mathbb{R}^\infty$. We continue as follows: \begin{itemize} \item By Method I and the definitions of $\mathfrak{R}$, $\xi$ and $\lambda^\infty$ we see that $\lambda^\infty$ is an outer measure on $\mathbb{R}^\infty$. \item Prove that $\forall R \in \mathfrak{R}, \xi(R) = \lambda^\infty(R)$. \item Prove that $\forall R \in \mathfrak{R}, \nu(R) = \xi(R)$, where $\nu$ is an outer measure constructed by Method II. \item Show that $\lambda^\infty(E) = \nu(E)$ for all $E \subseteq \mathbb{R}^\infty$, and thus $\lambda^\infty$ is a metric outer measure by \ref{T:MetricOuterMeasure} above. \item By \ref{T:AllBorelMeasurable} above, every Borel set in $\mathbb{R}^\infty$ is $\lambda^\infty$-measurable. \item Finally, show that $\lambda^\infty$ is translation-invariant on $\mathbb{R}^\infty$. 
\end{itemize} \begin{theo} \label{T:TauLeqLambda} Let $I = \prod_{i=1}^\infty[a_i, b_i],\; - \infty < a_i \leq b_i < + \infty$, be an infinite-dimensional compact parallelepiped in $\mathbb{R}^\infty$ such that $0 \leq \prod_{i=1}^\infty(b_i - a_i) < + \infty$, then \[ \prod_{i=1}^\infty(b_i - a_i) \leq \lambda^\infty(I). \] \end{theo} \begin{proof} If $I \subseteq \bigcup_{j=1}^\infty R_j$, where $R_j \in \mathfrak{R}$, then it is enough to show that \[ \xi(I) \leq \sum_{j=1}^\infty\xi(R_j) \qquad (*) \] where $\xi(I) = \prod_{i=1}^\infty(b_i - a_i)$. Assume the strict inequalities $0 < \xi(I)$ and $\sum_{j=1}^\infty\xi(R_j)~<~\infty$ hold as the proof is trivial in those cases where they are equal. For all $d, j \geq 1$, let us introduce the following notation \[ R_j = \prod_{i=1}^\infty(a_{ij}, b_{ij}), \qquad R_{dj} = \prod_{i=1}^d (a_{ij}, b_{ij}) \times \prod_{i=d+1}^\infty \mathbb{R}. \] \noindent Choose $\epsilon > 0$. \noindent If we assume $\xi(I)$ to be finite, then there must exist a $d \in \mathbb{N}$ such that: \begin{itemize} \item[(1)] $\prod_{i=d+1}^\infty (b_i - a_i) < 1 + \epsilon$ (see lemma \ref{L:SidesApproachOne}). \end{itemize} \noindent Let $\mathfrak{F}$ be the family of sets containing parallelepipeds $R_{j}$ satisfying (1) above and either: \begin{itemize} \item[(2)] $\prod_{i=d+1}^\infty(b_{ij} - a_{ij}) > 1 - \epsilon$; or \item[(3)] $\prod_{i=1}^{d}(b_{ij} - a_{ij}) < \frac{\epsilon}{2^j}$. \end{itemize} \noindent Since $0 < \xi(I) < + \infty$ and $0 \leq \xi(R_j) < + \infty$, $\mathfrak{F}$ clearly covers $I$ and as parallelepipeds in $\mathfrak{F}$ are open and $I$ is compact, there exists a finite subfamily $\{R_{d_{p}j_{p}} | 1 \leq p \leq k\}$ of $\mathfrak{F}$ that covers $I$. \noindent Choose $d > max\{d_1, \ldots, d_k\}$ and for $1 \leq p \leq k$, define \[ I_d = \prod_{i=1}^d [a_i, b_i], \qquad S_{dp} = \prod_{i=1}^{d_p}(a_{ij_p}, b_{ij_p}) \times \prod_{i=d_p+1}^d [a_i, b_i]. \] \noindent It is easy to see that $I \subseteq \bigcup_{p=1}^k R_{d_{p}j_{p}}$ and thus $I_d \subseteq \bigcup_{p=1}^k S_{dp}$. \noindent Let $\lambda^d$ be the usual Lebesgue measure on $\mathbb{R}^d$, then \begin{align} \prod_{i=1}^d (b_i - a_i) = \lambda^d (I_d) &\leq \sum_{p=1}^k \lambda^d (S_{dp}) \notag \\ &= \sum_{p=1}^k \left \{\prod_{i=1}^{d_p}(b_{ij_p} - a_{ij_p}) \cdot \prod_{i=d_p+1}^d (b_i - a_i) \right \}. \notag \end{align} \noindent Taking the limit as $d \rightarrow \infty$, we get \begin{align} \xi(I) &\leq \sum_{p=1}^k \left \{\prod_{i=1}^{d_p}(b_{ij_p} - a_{ij_p}) \cdot \prod_{i=d_p+1}^\infty (b_i - a_i) \right \} \notag \\ &\leq (1 + \epsilon)\sum_{p=1}^k \prod_{i=1}^{d_p}(b_{ij_p} - a_{ij_p}) \mbox{ (by (1)) } \notag \\ &\leq (1 + \epsilon)\sum_{p=1}^k {\,}^{{\!}^{\prime}} \prod_{i=1}^{d_p}(b_{ij_p} - a_{ij_p}) + (1 + \epsilon)\sum_{p=1}^k {\,}^{{\!}^{\prime\prime}} \prod_{i=1}^{d_p}(b_{ij_p} - a_{ij_p}) \mbox{ (by (2) and (3)) } \notag \end{align} where $\Sigma^{{\,}^\prime}$ is the sum over those $p$ for which $\prod_{i=d_p+1}^\infty (b_{ij_p} - a_{ij_p}) > 1 - \epsilon$ and $\Sigma^{{\,}^{\prime \prime}}$ is the sum over those $p$ for which $\prod_{i=1}^{d_p}(b_{ij_p} - a_{ij_p}) < \frac{\epsilon}{2^{j_p}}$. 
It follows that \begin{align} \xi(I) &\leq \left ( \frac{1 + \epsilon}{1 - \epsilon} \right ) \sum_{p=1}^k {\,}^{{\!}^{\prime}} \xi(R_{j_p}) + (1 + \epsilon)\epsilon \sum_{p=1}^k {\,}^{{\!}^{\prime\prime}} \frac{1}{2^{j_p}} \mbox{ (by (2) and (3)) } \notag \\ &\leq \left ( \frac{1 + \epsilon}{1 - \epsilon} \right ) \sum_{j=1}^\infty \xi(R_j) + (1 + \epsilon)\epsilon \notag. \end{align} As $\epsilon \to 0$, (*) holds. \end{proof} \begin{theo} \label{T:TauEqLambda} For every $R \in \mathfrak{R}$, $\xi(R) = \lambda^\infty(R)$. \end{theo} \begin{proof} Let $R \in \mathfrak{R}$. Clearly $\lambda^\infty(R) \leq \xi(R)$, and we may assume that $\xi(R)~>~0$. Let $\epsilon > 0$. There exists a compact parallelepiped $I = \prod_{i=1}^\infty[a_i, b_i] \subseteq R$ such that $\prod_{i=1}^\infty(b_i~-~a_i)~=~(1~-~\epsilon)\xi(R)$. By \ref{T:TauLeqLambda}, $\prod_{i=1}^\infty(b_i - a_i) \leq \lambda^\infty(I)$, thus \\$(1~-~\epsilon)\xi(R)~\leq~\lambda^\infty(R)$. As $\epsilon \to 0$, we obtain the desired result. \end{proof} \begin{theo} \label{T:OuterMeasPreMeas} Let $\nu$ be the outer measure on $\mathbb{R}^{\infty}$ constructed from the pair $\xi, \rho$ by Method II, where $\rho$ is the usual product metric on $\mathbb{R}^{\infty}$, \[ \rho(x,y) = \sum_{i=1}^{\infty} 2^{-i} \frac{|x_i - y_i|}{1 + |x_i - y_i|}. \] Then for all $R \in \mathfrak{R}$, $\nu(R) = \xi(R)$. \end{theo} \begin{proof} Let $R \in \mathfrak{R}$. By \ref{T:TauEqLambda}, $\lambda^\infty(R) = \xi(R)$. Clearly $\lambda^\infty(R) \leq \nu(R)$, so it is sufficient to show that \[ \nu(R) \leq \xi(R). \qquad (**) \] Let $R = \prod_{i=1}^{\infty}(a_i,b_i), -\infty < a_i \leq b_i < +\infty$. Assume $R \neq \emptyset$ as the proof is trivial if it is. Thus, for all $i$, $a_i < b_i$. Now, if $\xi(R) = 0$, then for all $d$, we have $\prod_{i=d+1}^{\infty} (b_i - a_i) = 0$. However, if $\xi(R) > 0$, then $\lim_{d \rightarrow \infty} \prod_{i=d+1}^{\infty} (b_i - a_i) = 1$ (see lemma \ref{L:SidesApproachOne}). Choose $\delta, \epsilon > 0$. Then there exists a $d$ such that $\prod_{i=d+1}^{\infty} (b_i - a_i) < 1 + \epsilon$ and $\sum_{i=d+1}^{\infty} 2^{-i} < \frac{\delta}{2}$. Define $R_d = \prod_{i=1}^{d} (a_i,b_i)$. For $x = (x_i)_{i=1}^d, y = (y_i)_{i=1}^d \in \mathbb{R}^d$, let $\rho_d(x,y)$ be defined as \[ \rho_d(x,y) = \sum_{i=1}^{d} 2^{-i} \frac{|x_i - y_i|}{1 + |x_i - y_i|}. \] Cover the parallelepiped $R_d$ by parallelepipeds $R_{d1}, \ldots, R_{dm}$ in $\mathbb{R}^d$ such that \begin{itemize} \item[(a)] For $1 \leq j \leq m$, $R_{dj} = \prod_{i=1}^{d} (a_{ij},b_{ij}), \; -\infty < a_{ij} \leq b_{ij} < +\infty$. \item[(b)] For $1 \leq j \leq m$, $\sup_{x, y \in R_{dj}} ||x-y|| < \frac{\delta}{2d}$, where $||\cdot||$ is the Euclidean norm on $\mathbb{R}^d$. \item[(c)] $\sum_{j=1}^{m} \lambda^d (R_{dj}) < \prod_{i=1}^{d} (b_i - a_i) + \epsilon$. \end{itemize} For $1 \leq j \leq m$, define $R_j = R_{dj} \times \prod_{i=d+1}^{\infty} (a_i,b_i)$. Let $x = (x_i), y = (y_i) \in R_j$, and define $x^{(d)} = (x_i)_{i=1}^d, y^{(d)} = (y_i)_{i=1}^d$; then, by the choice of $d$ and by (b), we have \begin{align} \rho(x,y) &= \rho_d(x^{(d)},y^{(d)}) + \sum_{i=d+1}^{\infty} 2^{-i} \frac{|x_i - y_i|}{1 + |x_i - y_i|} \notag \\ &< \rho_d(x^{(d)},y^{(d)}) + \frac{\delta}{2} \notag \\ &\leq d||x^{(d)}-y^{(d)}|| + \frac{\delta}{2} \notag \\ &< \delta. \notag \end{align} Hence, for all $1 \leq j \leq m$, we have $\mbox{diam}(R_j) < \delta$. It is clear that $R \subseteq \cup_{j=1}^m R_j$, hence by definition, $\nu_{\delta}(R) \leq \sum_{j=1}^{m} \xi(R_j)$. 
By (c), \begin{align} \sum_{j=1}^{m} \xi(R_j) &= \sum_{j=1}^{m} \left \{ \lambda^d(R_{dj}) \cdot \prod_{i=d+1}^{\infty} (b_i - a_i) \right \} \notag \\ &\leq \left \{ \prod_{i=1}^{d} (b_i - a_i) + \epsilon \right \} \cdot \prod_{i=d+1}^{\infty} (b_i - a_i) \notag \\ &= \xi(R) + \epsilon \prod_{i=d+1}^{\infty} (b_i - a_i) \notag \\ &< \xi(R) + \epsilon(1 + \epsilon). \notag \end{align} As $\epsilon \to 0$, we get $\nu_{\delta}(R) \leq \xi(R)$. But $\delta \to 0$ also, hence $\nu(R) \leq \xi(R)$. Therefore, (**) holds. \end{proof} \begin{theo} For all $E \subseteq \mathbb{R}^{\infty}$, $\lambda^\infty(E) = \nu(E)$. Hence, by Theorem \ref{T:MetricOuterMeasure}, $\lambda^\infty$ is a metric outer measure on $\mathbb{R}^{\infty}$. \end{theo} \begin{proof} For $E \subseteq \mathbb{R}^{\infty}$, we have $\lambda^\infty(E) \leq \nu(E)$, hence it suffices to prove that \[ \nu(E) \leq \lambda^\infty(E), \qquad E \subseteq \mathbb{R}^{\infty}. \qquad (***) \] Fix $E \subseteq \mathbb{R}^{\infty}$. By Theorem \ref{T:UniqueXsi}, we have \[ \nu(E) = \inf_{\substack{C_j \subseteq \mathbb{R}^{\infty}\\\cup C_j \supseteq E}} \sum_{j=1}^{\infty} \nu(C_j). \] Hence we see that \[ \nu(E) \leq \inf_{\substack{R_j \in \mathfrak{R}\\\cup R_j \supseteq E}} \sum_{j=1}^{\infty} \nu(R_j). \] For all $R \in \mathfrak{R}$, Theorem \ref{T:OuterMeasPreMeas} implies that $\nu(R) = \xi(R)$, therefore we have $\nu(E)~\leq~\lambda^\infty(E)$. This proves (***). \end{proof} \begin{theo} The outer measure $\lambda^\infty$ is translation invariant on $\mathbb{R}^\infty$. \end{theo} \begin{proof} This follows from the fact that if $R = \prod_{i=1}^{\infty}(a_i,b_i) \in \mathfrak{R}$ and $x \in \mathbb{R}^\infty$, then $R + x \in \mathfrak{R}$ and $\xi(R + x) = \xi(R)$. \end{proof} To illustrate the properties of the measure $\lambda^\infty$ we prove the following lemma which we have already referred to and will be using again later on: \begin{lem} \label{L:SidesApproachOne} Let $\lambda^\infty(\Pi) = c$ with $0 < c < \infty$ where $\Pi = \prod_{i=1}^\infty [a_i, b_i]$ and each interval has positive length $l_i = b_i - a_i$. Then $(l_i)_{i \to \infty} \to 1$. \end{lem} \begin{proof} If $\lambda^\infty(\Pi) = c$ then \[ \lambda^\infty(\Pi) = \lim_{d \to \infty} \prod_{i=1}^d l_i = c. \qquad (****) \] Taking $\log$s of both sides of (****) we get \begin{align} \lim_{d \to \infty} &\sum_{i=1}^d \log l_i \to \log c \notag \\ &\Rightarrow \sum_{i=1}^\infty \log l_i < \infty \notag \\ &\Rightarrow \log l_i \to 0 \notag \\ &\Rightarrow l_i \to 1 \notag \end{align} Thus, $\forall \epsilon > 0, \exists N \mbox{ such that } \forall n \geq N, 1 - \epsilon < l_i < 1 + \epsilon$. \end{proof} It is interesting to note that lemma \ref{L:SidesApproachOne} implies that if we go to sufficiently high dimensions, a parallelepiped of finite volume looks more and more like the unit cube. There are cases where this product may not be defined, like in \[ \Pi = [0, \frac{1}{2}] \times [0,2] \times [0, \frac{1}{2}] \times [0,2] \times \ldots, \] but this does not concern us for we are only interested in ``nice'' parallelepipeds which are well behaved as in they have finite measure. In trying to understand $\lambda^\infty$ let us look at the following example: \begin{egg} Denote the ``boundary'' of $\mathbb{I}^\infty$ by the following set: \[ \partial \mathbb{I}^\infty = \{x \in \mathbb{I}^\infty: \exists i, x_i \in \{0,1\}\}. \] and let \[ \partial_j \mathbb{I}^\infty = \{x \in \mathbb{I}^\infty: x_j = 0 \mbox{ or } 1\}. 
\] Thus, we can write $\partial \mathbb{I}^\infty = \cup_{j=1}^\infty \partial_j \mathbb{I}^\infty$. Let $\epsilon > 0$ and write \[ \epsilon = \sum_{j=1}^\infty 2^{-j} \cdot \epsilon. \] Let \begin{align} C_j = &[0,1] \times \ldots \times [0,1] \times [-2^{-j-2}\epsilon, 2^{-j-2}\epsilon] \times [0,1] \times \ldots \notag \\ &\cup [0,1] \times \ldots \times [0,1] \times [1-2^{-j-2}\epsilon, 1+2^{-j-2}\epsilon] \times [0,1] \times \ldots \notag \end{align} Then $\partial_j \mathbb{I}^\infty \subseteq C_j$, and each of the two parallelepipeds making up $C_j$ has one side of length $2^{-j-1}\epsilon$ and all other sides of length $1$, so that $\lambda^\infty(C_j) \leq 2^{-j}\epsilon$. Hence $\partial \mathbb{I}^\infty$ is covered by countably many sets of total measure at most $\sum_{j=1}^\infty 2^{-j}\epsilon = \epsilon$, and since $\epsilon > 0$ was arbitrary, the ``boundary'' of $\mathbb{I}^\infty$ has measure zero. \end{egg} Having gone through the construction of this measure, which was a bit technical, we are now prepared for the next chapter, which presents a paper by Jean Dieudonn\'e. The chapter highlights the non-triviality of extending the Lebesgue density theorem to the infinite-dimensional case. \cleardoublepage \chapter{Dieudonn\'e's Example}\label{C:Dieudonne} In this chapter we survey a paper by Jean Dieudonn\'e, one of the founding members of Bourbaki and a major contributor to the field of functional analysis. The paper \cite{JD} is not famous and is only available in French. Here, we reproduce it in English and attempt to explain its importance, both in our situation and more generally. In brief, it explains the construction of a set which shows that there is no straightforward generalisation to infinite dimensions of the Lebesgue density theorem for finite dimensions. If we put forward the most natural version of the extension, it simply fails. Before we get down to the details, let us set the foundation on which we will build. First, the space we will work in is the Hilbert Cube, $\mathbb{I}^{\infty}$. It is easier to work with because it has some desirable properties which are noticeably absent from $\mathbb{R}^{\infty}$. Later on we will undertake the task of extending our results from $\mathbb{I}^{\infty}$ to $\mathbb{R}^{\infty}$. Now, let $x \in \mathbb{I}^{\infty}$ be written as $(x^{\prime}, x^{\prime\prime})$ where $x^{\prime} \in \mathbb{I}^d$ and $x^{\prime\prime} \in \mathbb{I}^{\mathbb{N} \setminus [1,2,\ldots,d]}$ and denote by $f_d$ the function obtained by integrating along the tail. That is, \[ f_d(x^{\prime}) := \int_{\mathbb{I}^{\mathbb{N} \setminus [1,2,\ldots,d]}} f(x^{\prime}, x^{\prime\prime}) d\lambda^{\mathbb{N} \setminus [1,2,\ldots,d]}(x^{\prime\prime}). \] Jessen's theorem then states that $f_d \rightarrow f$ almost everywhere as $d \rightarrow \infty$. So a natural question to ask, in order to make Jessen's theorem more general, is: does it still work if we choose arbitrary finite subsets of the coordinate spaces? To be more precise, given $J \in F$, where $F$ is the set of all finite subsets of $\mathbb{N}$ ordered by inclusion, does $f_J \rightarrow f$ almost everywhere as $J$ increases along $F$? The answer, unintuitively, is no, and we shall see why. Also, as a consequence of Dieudonn\'e's example and to our detriment, suppose we have a bounded, measurable (thus integrable) function $f: \mathbb{I}^\infty \rightarrow \mathbb{R}$, and the functions $f_d$ obtained by integrating along the tail. 
Then, by looking at \[ \lim_{d \rightarrow \infty} \left ( \lim_{n \rightarrow \infty} \frac{1}{\lambda^d(\Pi_n)} \int_{\Pi_n} |f_d(x^{\prime})| \, d \lambda^d \right ) \] where $\lambda^d(\Pi_n) \rightarrow 0 \mbox{ as } n \rightarrow \infty$, the inner limit converges for fixed $d$ by the Lebesgue differentiation theorem, and the outer limit converges by Jessen's theorem. However, because of Dieudonn\'e's example, we cannot send the outer and inner limits to infinity at the same time, for then we will not obtain $f(x)$. It must be done in the correct order: fix $d$, send $n$ to infinity, and only then send $d$ to infinity. Unfortunately, we are on the bad side of Dieudonn\'e's example. Thus, the most general form of the Lebesgue density theorem does not hold in $\mathbb{I}^\infty$ and so there is no hope for an extension to $\mathbb{R}^\infty$. \section{Dieudonn\'e's Example} For the next two sections, to avoid excessive clutter, let us agree to denote all finite-dimensional or infinite-dimensional Lebesgue measures by $\mu$. Let $F$ be the countable family of all finite subsets of $\mathbb{N}$, ordered by inclusion; since the union of two finite sets is finite, $F$ is an ideal (see definition \ref{D:Ideal}). The theorem of Jessen would lead us to think that for almost every $x \in \mathbb{I}^\infty$, $f_J(x) \rightarrow f(x)$ according to the ideal $F$ (that is to say, to each $x \in \mathbb{I}^\infty$ not belonging to a certain set of measure zero and to each $\epsilon > 0$ there corresponds a finite subset $J_0(x, \epsilon)$ such that for all finite sets $J \supset J_0$, $|f_J(x) - f(x)| \leq \epsilon$). The goal of Dieudonn\'e's paper is to prove that this conjecture is inaccurate by building an example where the property fails. Since the ideal $F$ has a countable basis, the reasoning which proves the Egorov theorem (see theorem \ref{T:EgorovTheorem}) for the almost everywhere convergence of a sequence of functions still applies to the almost everywhere convergence according to $F$ of a family $(g_J)_{J \in F}$ of measurable functions. More precisely, if such a family $(g_J) \rightarrow g$ almost everywhere according to the sets of the ideal $F$, then for each $\delta > 0$ there exists a set $H \subset \mathbb{I}^\infty$, with $\mu(H) > 1 - \delta$, such that on $H$ the family $(g_J) \rightarrow g$ uniformly. In the example which Dieudonn\'e constructed, the function $f = 1_A$ is the characteristic (indicator) function of a set $A$ with $\mu(A) < \frac{1}{8}$. For all $J \in F$, we will write $\phi_J(x) = \sup_{K \in F, \, K \supseteq J} f_K(x)$. If $f_J(x) \rightarrow f(x)$ almost everywhere according to $F$, it follows that there would exist a subset $J_0 \in F$ such that, for $J \supset J_0$, $\mu(\{x \in \mathbb{I}^\infty: \phi_J(x) > \frac{1}{2}\}) < \frac{1}{4}$. However, the set $A$ will be such that there exists an increasing sequence $(J_n)$ of finite subsets of $\mathbb{N}$ such that for all $n$, $\mu(\{x \in \mathbb{I}^\infty: \phi_{J_n}(x) > \frac{1}{2}\}) > \frac{7}{16}$. That will prove that $f_J(x) \nrightarrow f(x)$ almost everywhere according to $F$. \subsection{Notation} \noindent The sets $J_n = [1, q_n]$, where $(q_n)$ is an increasing sequence of integers.\\ \noindent We will divide $J_{n+1} \setminus J_n$, the block of integers between $q_n$ and $q_{n+1}$, into $h_n$ intervals labelled $J_{n,1}, J_{n,2}, \ldots, J_{n,h_n}$ and let $p_{n,r}$ denote the number of elements of $J_{n,r}$. Thus, $q_{n+1} - q_n = p_{n,1} + p_{n,2} + \dots + p_{n,h_n}$.
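To fix ideas, here is a purely illustrative choice of numbers (these particular values are not from Dieudonn\'e's paper): take $q_n = 4$, $q_{n+1} = 12$, $h_n = 2$, $J_{n,1} = \{5, 6, 7\}$ and $J_{n,2} = \{8, 9, \ldots, 12\}$; then \[ p_{n,1} = 3, \qquad p_{n,2} = 5, \qquad q_{n+1} - q_n = 8 = p_{n,1} + p_{n,2}. \]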
The numbers $p_{n,r}$ and $h_n$ will be determined by induction and at the same time we define $A$ to be the union of pairwise disjoint sets $A_{n,r}$.\\ \noindent Denote by $k_n$ the total number of intervals $J_{m,r}$ $(m < n)$ defined before stage $n$ (these intervals partition $J_n \setminus J_1$). So $k_{n+1} = k_n + h_n$.\\ \noindent We have a decreasing sequence of positive numbers $(a_n)$ satisfying the following conditions: \begin{itemize} \item[(a)] the series $\sum_{n=1}^{\infty} a_n$ converges, with $\sum_{n=1}^{\infty} a_n < \frac{1}{8}$. \item[(b)] the series $\sum_{n=1}^{\infty} a_n \log \frac{1}{a_n}$ diverges and each term satisfies $a_n \log \frac{1}{a_n} < \frac{1}{4}$. \end{itemize} \begin{egg} Taking \[ a_n = \frac{c}{n(\log n)^2}, \qquad n \geq 2, \] with $a_1$ chosen suitably and $c$ small enough, will satisfy these conditions. \end{egg} \noindent We will denote $a_{k_n + r}$ by $a_{n,r}$ to simplify the notation.\\ \noindent Suppose that the $k_n$ sets $A_{m,r}$ $(m < n)$ are of the form $\bar{A}_{m,r} \times I^{J_n^\prime}$ where $\bar{A}_{m,r} \subseteq I^{J_n}$. Moreover, suppose $\mu(A_{m,r}) < a_{m,r}$ for the $k_n$ sets. Let $\bar{B}_n = I^{J_n} \setminus \bigcup_1^{k_n} \bar{A}_{m,r}$; then one has $\mu_{J_n}(\bar{B}_n) > \frac{7}{8}$. \subsection{Basis} \noindent Let us start by defining the number $p_{n,1}$ and the set $A_{n,1}$.\\ \noindent So let $K_{n,1} = J_n \cup J_{n,1}$.\\ \noindent Take $A_{n,1} = \bar{B}_n \times \bar{C}_{n,1} \times I^{K_{n,1}^\prime}$, where $\bar{C}_{n,1} \subseteq I^{J_{n,1}}$ is a set which we will now define.\\ \noindent We will take $\bar{C}_{n,1} = \prod_1^{p_{n,1}} T_j$, where each $T_j$ is an interval contained in the corresponding factor $I_j$ of $I^{J_{n,1}}$ and has length \[ 1 - \frac{1}{p_{n,1}} \log \frac{1}{a_{n,1}}. \] Thus \[ \mu(\bar{C}_{n,1}) = \left (1 - \frac{1}{p_{n,1}} \log \frac{1}{a_{n,1}} \right ) ^ {p_{n,1}}, \] and $\mu(\bar{C}_{n,1}) \rightarrow a_{n,1}$ as $p_{n,1} \rightarrow \infty$ (a short verification of this limit is given at the end of this subsection).\\ \noindent Take $p_{n,1}$ large enough so that $\frac{1}{2} a_{n,1} \leq \mu(\bar{C}_{n,1}) \leq a_{n,1}$ and \[ 1 - \frac{1}{p_{n,1}} \log \frac{1}{a_{n,1}} > \frac{1}{2}. \] \noindent For each index $j \in J_{n,1}$, let $S_j= \prod_{\substack{i \in J_{n,1}\\i \neq j}} T_i$; thus $A_{n,1} = (\bar{B}_n \times S_j) \times (T_j \times I^{K_{n,1}^\prime})$.\\ \noindent For $J = J_n \cup (J_{n,1} - \{j\})$, we have $f_J(x) > \frac{1}{2}$ for all $x \in (\bar{B}_n \times S_j) \times (I_j \times I^{K_{n,1}^\prime})$.\\ \noindent Let $D_{n,1}$ be the union over $j \in J_{n,1}$ of these $p_{n,1}$ sets $(\bar{B}_n \times S_j) \times (I_j \times I^{K_{n,1}^\prime})$. It is clear that $\phi_{J_n}(x) > \frac{1}{2}$ in the set $D_{n,1}$.\\ \noindent We can write $D_{n,1} = \bar{B}_n \times \bar{D}_{n,1} \times I^{K_{n,1}^\prime}$, where $\bar{D}_{n,1} = \cup_1^{p_{n,1}} (S_j \times I_j)$. It follows immediately that the measure of $\bar{D}_{n,1}$ (in $I^{J_{n,1}}$) is \[ \mu(\bar{D}_{n,1}) = \delta_{n,1} = \left (1 - \frac{1}{p_{n,1}} \log \frac{1}{a_{n,1}} \right ) ^ {p_{n,1}} + \left (\log \frac{1}{a_{n,1}} \right ) \left (1 - \frac{1}{p_{n,1}} \log \frac{1}{a_{n,1}} \right ) ^ {p_{n,1} - 1}. \] \noindent It is clear that $p_{n,1}$ can be taken large enough so that $\frac{1}{2} a_{n,1} \log \frac{1}{a_{n,1}} \leq \delta_{n,1} \leq 2 a_{n,1} \log \frac{1}{a_{n,1}}$.\\ \noindent We let $\bar{E}_{n,1} = I^{J_{n,1}} \setminus \bar{D}_{n,1}$, which has measure $1 - \delta_{n,1} > \frac{1}{2}$.
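The following short computation is added here for the reader's convenience and is not part of Dieudonn\'e's original text. Writing $t = \log \frac{1}{a_{n,1}}$, we have \[ \log \mu(\bar{C}_{n,1}) = p_{n,1} \log \Big( 1 - \frac{t}{p_{n,1}} \Big) \longrightarrow -t \quad \mbox{as } p_{n,1} \to \infty, \] so that $\mu(\bar{C}_{n,1}) \rightarrow e^{-t} = a_{n,1}$, and likewise \[ \delta_{n,1} = \mu(\bar{C}_{n,1}) + t \Big( 1 - \frac{t}{p_{n,1}} \Big)^{p_{n,1} - 1} \longrightarrow a_{n,1} + a_{n,1} \log \frac{1}{a_{n,1}}. \] Since $a_{n,1} < \frac{1}{8} < e^{-1}$, this last limit lies strictly between $\frac{1}{2} a_{n,1} \log \frac{1}{a_{n,1}}$ and $2 a_{n,1} \log \frac{1}{a_{n,1}}$, which is why $p_{n,1}$ can indeed be chosen so that the two sandwich bounds above hold.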
\subsection{Inductive Step} \noindent Suppose now the sets $J_{n,1}, \ldots, J_{n,r}$ have all been defined and in each product $I^{J_{n,s}}$ for $s~\leq~r$, the two sets $\bar{C}_{n,s}$ and $\bar{D}_{n,s}$ are such that $\bar{C}_{n,s} \subset \bar{D}_{n,s}$ and that we have the following: \begin{itemize} \item [(1)] $H_{n,s}$ is the union of $J_{n,1}, \ldots, J_{n,s}$; $K_{n,s}$ is the union of $J_n$ and $H_{n,s}$. Define by recurrence the sets $\bar{F}_{n,s}$ and $\bar{E}_{n,s}$ in $I^{H_{n,s}}$ as being the complement in the product, and as $\bar{F}_{n,1} = \bar{D}_{n,1}$ and \[ \bar{F}_{n,s} = \left (\bar{F}_{n,s-1} \times I^{J_{n,s}} \right ) \cup \left (\bar{E}_{n,s-1} \times \bar{D}_{n,s} \right ) \] then $A_{n,s} = \bar{B}_{n} \times \bar{E}_{n,s-1} \times \bar{C}_{n,s} \times I^{K_{n,s}^\prime}$ with $\mu(A_{n,s}) \leq a_{n,s}$. \item [(2)] $\frac{1}{2} a_{n,s} \log \frac{1}{a_{n,s}} \leq \mu(\bar{E}_{n,s-1} \times \bar{D}_{n,s}) \leq 2 a_{n,s} \log \frac{1}{a_{n,s}}$.\\Moreover, $(\delta_{n,1} + \delta_{n,2} + \dots + \delta_{n,r}) \leq \frac{1}{2}$ in $\bar{F}_{n,r}$. \item [(3)] For $D_{n,s} = \bar{B}_{n} \times \bar{E}_{n,s-1} \times \bar{D}_{n,s} \times I^{K_{n,s}^\prime}$, we have $\phi_{J_n}(x) > \frac{1}{2}$ on the set $D_{n,s}$. \end{itemize} \noindent So, $\mu(\bar{E}_{n,r}) = \beta_r = 1 - (\delta_{n,1} + \delta_{n,2} + \dots + \delta_{n,r}) \geq \frac{1}{2}$.\\ \noindent We take $\bar{C}_{n,r+1} = \prod_1^{p_{n,r+1}} T_j$ (each $T_j$ of length $1 - \frac{1}{p_{n,r+1}} \log \frac{\beta_r}{a_{n,r+1}}$ in $I_j$, $\forall j \in J_{n,r+1}$). Thus \[ \mu(\bar{C}_{n,r+1}) = \left (1 - \frac{1}{p_{n,r+1}} \log \frac{\beta_r}{a_{n,r+1}} \right ) ^ {p_{n,r+1}} \] and $\mu(\bar{C}_{n,r+1}) \rightarrow \frac{a_{n,r+1}}{\beta_r}$ as $p_{n,r+1} \rightarrow \infty$.\\ \noindent We can thus take $p_{n,r+1}$ large enough so that $\frac{1}{2} \frac{a_{n,r+1}}{\beta_r} \leq \mu(\bar{C}_{n,r+1}) \leq \frac{a_{n,r+1}}{\beta_r}$ and that \[ 1 - \frac{1}{p_{n,r+1}} \log \frac{\beta_r}{a_{n,r+1}} > \frac{1}{2}; \] \noindent If we take $A_{n,r+1} = \bar{B}_{n} \times \bar{E}_{n,r} \times \bar{C}_{n,r+1} \times I^{K_{n,r+1}^\prime}$, then $\mu(A_{n,r+1}) \leq a_{n,r+1}$.\\ \noindent For each $j \in J_{n,r+1}$, again $S_j = \prod_{\substack{i \in J_{n,r+1}\\i \neq j}} T_i$.\\ \noindent For $J = K_{n,r} \cup (J_{n,r+1} - \{j\})$, we have $f_J(x) > \frac{1}{2}$ for all $x \in \bar{B}_{n} \times \bar{E}_{n,r} \times S_j \times I_j \times I^{K_{n,r+1}^\prime}$.\\ \noindent If $D_{n,r+1}$ is the union of the $p_{n,r+1}$ sets for each $j \in J_{n,r+1}$, it is clear that $\phi_{J_n}(x) > \frac{1}{2}$ in $D_{n,r+1}$.\\ \noindent However, we have $D_{n,r+1} = \bar{B}_{n} \times \bar{E}_{n,r} \times \bar{D}_{n,r+1} \times I^{K_{n,r+1}^\prime}$, where\\$\bar{D}_{n,r+1} = \bigcup_1^{p_{n,r+1}} (S_j \times I_j)$ and thus \[ \mu(\bar{D}_{n,r+1}) = \left (1 - \frac{1}{p_{n,r+1}} \log \frac{\beta_r}{a_{n,r+1}} \right ) ^ {p_{n,r+1}} + \left (\log \frac{\beta_r}{a_{n,r+1}} \right ) \left (1 - \frac{1}{p_{n,r+1}} \log \frac{\beta}{a_{n,r+1}} \right ) ^ {p_{n,r+1} - 1}. 
\] \noindent Taking into account the assumption that $\beta_r \geq \frac{1}{2}$, we can suppose $p_{n,r+1}$ is large enough that $\frac{1}{2} \frac{a_{n,r+1}}{\beta_r} \log \frac{1}{a_{n,r+1}} \leq \mu(\bar{D}_{n,r+1}) \leq 2 \frac{a_{n,r+1}}{\beta_r} \log \frac{1}{a_{n,r+1}}$.\\ \noindent We deduce at once that $\frac{1}{2} a_{n,r+1} \log \frac{1}{a_{n,r+1}} \leq \mu(\bar{E}_{n,r} \times \bar{D}_{n,r+1}) \leq 2 a_{n,r+1} \log \frac{1}{a_{n,r+1}}$.\\ The recurrence on $r$ can thus continue just like this until we arrive at an $r$ such that $\delta_{n,1} + \delta_{n,2} + \dots + \delta_{n,r} > \frac{1}{2}$ and as $(a_n \log \frac{1}{a_n})$ diverges by assumption, there always exists a smaller $r$ having this property; it is this $r$ which we will take to be our $h_n$. It is clear then that the $h_n$ sets $A_{n,1}, \dots, A_{n,h_n}$ are pairwise disjoint; so are the $h_n$ sets $D_{n,1}, \dots, D_{n,h_n}$. Moreover, one has the union $D_n$ of $D_{n,r}$ has measure greater than $\frac{7}{16}$, and in all points of $D_n$ we have $\phi_{J_n}(x) > \frac{1}{2}$. The union $A$ of all the sets $A_{n,r}$ thus answers the question. \section{Consequence of Dieudonn\'e's example} The preceding example allows us to respond in the negative to an analogous question concerning derivation bases\footnote{A net $\mathfrak{H}$ is a countable family of bounded, non-empty Borel sets such that: \begin{itemize} \item $\{B: \mathfrak{H} \ni H \subset B\} \cap \mathfrak{H}$ is a finite family. \item If $B_1, B_2 \in \mathfrak{H} \mbox{ and } B_1 \cap B_2 \neq \emptyset$, then either $B_1 \subset B_2 \mbox{ or } B_2 \subset B_1$. \end{itemize} Let $\mathfrak{M}$ be a $\sigma$-algebra on a set $X$, and let $E \subseteq X$ be fixed. For each $x$ in $E$, let $(M_i(x))$ be a net. The family of all $(M_i(x))$ forms a \emph{prebasis} $\mathcal{B}$. Thus, the elements of $\mathcal{B}$ are converging sequences together with their convergence points. We allow two or more points corresponding to the same sequence. Let $\mathcal{D}$ be the family of all sets occurring in the sequences $(M_i(x))$ for all $x$ in $E$. If we provide $\mathfrak{M}$ with a measure $\mu$ and if the sets of $\mathcal{D}$ are of finite measure, then $\mathcal{B}$ is a \emph{derivation basis}.}in the cube $\mathbb{I}^\infty$. For the cube of finite dimension $\mathbb{I}^d$, a classical theorem of Vitali shows that the cubes $\Pi_n(x)$ with centre $x \in \mathbb{I}^d$ and sides $\frac{1}{n}$ form a derivation basis for the functions. In particular, for all measurable, bounded $f$ defined in $\mathbb{I}^d$, the functions \[ g_n(x) = \frac{1}{\mu(\Pi_n(x))} \int_{\Pi_n(x)} f \, d\mu \] tend almost everywhere to $f(x)$ in $\mathbb{I}^d$. One naturally wonders whether the following similar result is valid in $\mathbb{I}^\infty$: for all finite subsets $J$ of $\mathbb{N}$ and all $n$, denote by $\Pi_{n,J}(x)$ the product of the cube with centre $x_J$ and with sides $\frac{1}{n}$ in $\mathbb{I}^J$ by $\mathbb{I}^{J^\prime}$, and for all measurable, bounded functions $f$ in $\mathbb{I}^\infty$, consider the function \[ g_{n,J}(x) = \frac{1}{\mu(\Pi_{n,J}(x))} \int_{\Pi_{n,J}(x)} f \, d\mu; \] This function tends almost everywhere to $f(x)$ according to the ideal product $\mathbb{N} \times F$ (with order relation $(n_1, J_1) \leq (n_2, J_2)$ signifies ``$n_1 \leq n_2$ and $J_1 \subset J_2$''). The example constructed shows that the answer is no. 
Indeed, let us write $\Pi_{n,J}(x) = \bar{\Pi}_{n,J}(x) \times \mathbb{I}^{J^\prime}$, where $\bar{\Pi}_{n,J}(x)$ is the cube with centre $x_J$ and side $\frac{1}{n}$. With the notations above, and by virtue of Fubini's theorem, one has \[ \int_{\Pi_{n,J}(x)} f \, d\mu = \int_{\bar{\Pi}_{n,J}(x)} f_J \, d\mu; \] and $\mu(\Pi_{n,J}(x)) = \mu(\bar{\Pi}_{n,J}(x))$. Recall that Vitali's theorem above shows that, for all $J$, one has \[ \lim_{n \rightarrow \infty} g_{n,J}(x) = f_J(x) \mbox{ almost everywhere.} \] Since $F$ is countable, there thus exists in $\mathbb{I}^\infty$ a set of measure zero in the complement of which one has \[ \lim_{n \rightarrow \infty} g_{n,J}(x) = f_J(x) \mbox{ for all } J \in F. \] But then if $g_{n,J}(x)$ tended almost everywhere to $f(x)$ according to $\mathbb{N} \times F$, the theorem of double limits \cite{NB} would show that $f_J(x)$ tends almost everywhere to $f(x)$ according to $F$, which is what we showed to be inaccurate. Thus, the most general form fails, but maybe a form more suited to our situation will suffice. \cleardoublepage \chapter{Slowly Oscillating Functions} \section{Jessen's Theorem} Jessen showed that in countable cartesian product spaces, the infinite-dimensional integral is the limit (in the sense of the $L^1$-norm) of the corresponding integrals over the first $d$ spaces. Jessen proved that as $d \rightarrow \infty$ and for almost every $x = (x_1, x_2, \ldots)$: \begin{itemize} \item [(1)] $\int \int \ldots f(x) \, dx_d dx_{d+1} \ldots \rightarrow f(x)$ \item [(2)] $\int \int \ldots \int f(x) \, dx_1 dx_2 \ldots dx_d \rightarrow \int_X f(x) dx$ \end{itemize} where $X$ is the product of a countable sequence of measure spaces $X_1, X_2, \ldots, X_d, \ldots$, each of measure 1, and $f$ is a summable real-valued function on $X$. As pointed out by examiner W. Jaworski, Jessen's theorem is an immediate consequence of martingale theory (martingale convergence in the case of formula (1) and reverse martingale convergence in the case of formula (2)). Martingales were not available to Jessen, but are a standard tool today. Dorothy Maharam shows in her paper \cite{DM} how to extend (2) to the product of arbitrarily many coordinate spaces by taking the integrals over all finite subsets of the coordinate spaces. As we saw in the previous chapter, Dieudonn\'e shows that such an extension is false for (1). The point is very subtle. Jessen shows that $f_{[1,2,\ldots,d]} \rightarrow f$ almost everywhere as $d \rightarrow \infty$, not that $f_J \rightarrow f$ almost everywhere along the directed set $Fin$ of all finite subsets of $\mathbb{N}$, where $J \in Fin$ and $J^{\prime}$ denotes its complement. So the order of factors is important. Maharam does show that an extension for (1) is possible if we use factors which are well-ordered and transfinite limits instead of directed limits. Recall from chapter \ref{C:Dieudonne} that Jessen works in $\mathbb{I}^\infty$ and that for $x \in \mathbb{I}^\infty$ we can write $x = (x^\prime, x^{\prime \prime})$ where $x^\prime \in \mathbb{I}^J$ and $x^{\prime \prime} \in \mathbb{I}^{J^\prime}$. Consider on each $\mathbb{I}_n$ (that is, on each copy of $\mathbb{I}$) the Lebesgue measure, and denote by $\mu$ the product measure on the measurable sets of $\mathbb{I}^\infty$. We denote in the same way by $\mu_J$ the product measure on $\mathbb{I}^J$. That being so, let $f$ be a function defined on $\mathbb{I}^\infty$ which is integrable with respect to $\mu$.
According to the Fubini theorem, if $J$ is any subset of $\mathbb{N}$ and $J^\prime$ is the complement, for almost all $x_J \in I^{J^\prime}$, the function $x_J \rightarrow f(x_J, x_{J^\prime})$ is integrable, the function \[ f_J(x) = \int_{I^{J^\prime}} f(x_J, x_{J^\prime}) d\mu_{J^\prime} \] defined almost everywhere in $I^J$ is integrable in this set, and one has \[ \int_{\mathbb{I}^\infty} f d\mu = \int_{I^J} f_J(x_J) d\mu_J. \] That being, the theorem by Jessen is as follows: \begin{theo} If $(J_n)$ is an increasing sequence of finite subsets of $\mathbb{N}$, the functions $f_{J_n}$ converge almost everywhere to f in $\mathbb{I}^\infty$ as $n$ goes to infinity. \end{theo} \section{Results on $\mathbb{I}^\infty$} \label{S:ResultsOnIinfty} For continuous functions the Lebesgue density theorem is true in a very general sense. A more general proof of theorem \ref{T:LebAtContinuousPt} is as follows: \begin{theo} [\ddag] \label{T:LebMetricSpace} Let $(X, \rho, \mu)$ be a metric space with Borel probability measure and suppose $\mu$ has full support, that is, $\mbox{supp }(\mu) = X$. Let $f:(X, \rho) \rightarrow \mathbb{R}$ be a Borel function which is continuous at some point $x$. Then \[ \displaystyle \lim_{\mu(V) \rightarrow 0} \frac{1}{\mu(V)} \int_V |f(x) - f(y)| d\mu(y) = 0 \] in the following sense: for every $\epsilon > 0$, there is an open neighbourhood $V$ of $x$ such that whenever $B \subseteq V$ is a Borel subset of strictly positive measure, \[ \frac{1}{\mu(B)} \int_{B} |f(x) - f(y)| d\mu(y) < \epsilon. \] \end{theo} \begin{proof} Let $x \in supp(\mu)$. Since $f$ is continuous at $x$, given $\epsilon > 0$, $\exists V \ni x$ such that $\forall y \in V, |f(x) - f(y)| < \epsilon$. This implies that for every Borel set $B$ with $\mu(B) > 0, \int_{B} |f(x) - f(y)| d\mu(y) < \epsilon \cdot \mu(B)$ and so $\frac{1}{\mu(B)} \int_{B} |f(x) - f(y)| d\mu(y) < \epsilon$ holds. \end{proof} \begin{rmk} Since $(X, \rho)$ is a metric space, we can always find a V as above. Take as an example, $V_n = B(x, \frac{1}{n})$, the ball of radius $\frac{1}{n}$ around $x$. These balls form a basis for the topology on X. Taking $\mu \left (\bigcap_{n \in \mathbb{N}} V_n \right ) = \lim_{n \rightarrow \infty} \mu(V_n) = 0$, we have a sequence of sets with their measures tending to zero. Also, the proof of theorem \ref{T:LebMetricSpace} does not apply to $\mathbb{R}^\infty$ since our measure $\lambda^\infty$ is not a probability measure. \end{rmk} Now the space $(\mathbb{I}^{\infty}, \rho)$ is a compact metric space when equipped with the metric which induces the product topology, for example, \[ \rho(x,y) = \sum_{n=1}^{\infty} 2^{-n} \frac{|x_n - y_n|}{1 + |x_n - y_n|}. \] Let $\mathbb{R}$ be equipped with the standard Euclidean norm. Let $\pi_d: \mathbb{I}^{\infty} \rightarrow \mathbb{I}^{d}$ denote a projection, that is, if $x \in \mathbb{I}^{\infty}$ and $x = \{x_1,x_2,\ldots,x_{d-1},x_d,x_{d+1},\ldots\}$, then $\pi_d$ truncates $x$ to $x_d = \{x_1,x_2,\ldots,x_{d-1},x_d\}$. Let $f: \mathbb{I}^{\infty} \rightarrow \mathbb{R}$ be a continuous function with respect to the product topology, then by the Heine-Cantor theorem, $f$ is also uniformly continuous. Thus, $\forall \epsilon > 0$, $\exists \delta > 0$ such that for all $x, y \in \mathbb{I}^{\infty}$ with $\rho(x,y) < \delta$, one has $|f(x) - f(y)| < \epsilon$. \begin{theo} [\ddag] \label{T:UniformContinuity} Let $f:\mathbb{I}^\infty \to \mathbb{R}$ be continuous. 
For all $\epsilon > 0$, $\exists D$ such that $\forall d \geq D$, if $\pi_d(x) = \pi_d(y)$, then $|f(x) - f(y)| < \epsilon$. \end{theo} \begin{proof} Given any $\epsilon > 0$, choose a $D$ so that for $x, y \in \mathbb{I}^{\infty}$ one has: \[ \sum_{n=1}^{D} 2^{-n} \frac{|x_n - y_n|}{1 + |x_n - y_n|} = 0; \sum_{n=D+1}^{\infty} 2^{-n} \frac{|x_n - y_n|}{1 + |x_n - y_n|} < \delta \] Now if $d \geq D$, and $\pi_d(x) = \pi_d(y)$, one has: \[ \rho(x,y) = \sum_{n=1}^{D} 2^{-n} \frac{|x_n - y_n|}{1 + |x_n - y_n|} + \sum_{n=D+1}^{\infty} 2^{-n} \frac{|x_n - y_n|}{1 + |x_n - y_n|} \\ < \delta \] Thus one has that $\forall \epsilon > 0$, $\exists D$ such that $\forall d \geq D$, if $\pi_d(x) = \pi_d(y)$, then \\$|f(x) - f(y)| < \epsilon$. \end{proof} In other words, what we have just proven is that, in the case where $f$ is continuous, the function $f_{d}$ which is obtained from $f$ by integration along the fibres (full spaces) in dimensions $d$ and greater, differs from $f$ by less than $\epsilon$. For such $f$, the Lebesgue density theorem works. Functions like these are what we want and need. Let us denote by $S$ the above family of bounded functions $f$ on $\mathbb{I}^\infty$ such that for every $\epsilon > 0$ there exists a dimension $D$ such that for all $d \geq D$, if the truncations of two elements $x, y \in \mathbb{I}^\infty$ are equal then $|f(x) - f(y)| < \epsilon$. These are functions which oscillate `slowly' in high dimensions along the fibres $\mathbb{I}^{J^\prime}$. That is, as we go up to a certain high dimension, we can be sure that the function will not change by much. What kind of functions exist in $S$? All continuous functions live in that space. By the definition of the continuity for product topology we will be able to find a sufficiently high dimension that the function does not change much along the fibres. Also, given a function $g:\mathbb{R}^d \rightarrow \mathbb{R}$ with $g \in L^1(X, \mu)$, then $g \circ \pi_d \in S$. This is a function which is exactly constant along the fibres in dimension $d$ and higher. Not every $L^1(X, \mu)$ function satisfies this however. It can be possible that this is the only family of functions for which Lebesgue theorem holds. The next natural question is: What is a natural metric for which this class is a complete metric space? We shall use the same norm as in the space $L^{\infty}(X,\mu)$, that is, the \emph{essential supremum}. The norm is defined as follows: \[ \|f\|_\infty = \inf\{C \geq 0: |f(x)| \leq C \mbox{ for almost every } x\}. \] Functions $f$ and $g$ are in the same equivalence class, $f \sim g$, if they are equal almost everywhere, and so belong to the same equivalence class. Suppose $(X, \Sigma, \mu)$ is a space with measure, then for two functions $f,g: X \rightarrow \mathbb{R}$ we have: \[ \esssup_{x \in X} f = \inf_{g: \mu(\{x:f(x) \neq g(x)\}) = 0} \sup_{x \in X} g(x) \] Let us try to equip $S$ with the above norm and see what happens. We end up with the following result: \[ f \in S \Longleftrightarrow f_d \stackrel{L^{\infty}}{\rightarrow} f. \] Put otherwise, the class $S$ consists of all functions $f$ which satisfy a stronger version of Jessen's theorem; the functions $f_d$ converge to $f$ not almost everywhere as with Jessen's theorem, but in $L^{\infty}$ norm, and this condition means that along the fibres they get smaller and smaller (uniformly smaller). It means that every such function is measurable but not vice versa. Define $\mbox{ess osc } f = \inf_{g \sim f }\sup |g(x) - g(y)|$. 
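Before sketching the argument, here is a simple illustration, constructed here purely for exposition (it does not appear in the sources discussed), of a bounded, measurable, integrable function which is \emph{not} in $S$. Let \[ f = \chi_B, \qquad B = \{x \in \mathbb{I}^\infty : x_n < 2^{-n} \mbox{ for some } n\}, \] so that $0 < \mu(B) = 1 - \prod_{n=1}^\infty (1 - 2^{-n}) < 1$. Fix any $d$ and consider the set of $x^\prime \in \mathbb{I}^d$ with $x_n \geq 2^{-n}$ for all $n \leq d$; it has measure $\prod_{n=1}^d (1 - 2^{-n}) > 0$. Along the fibre $\pi_d^{-1}(x^\prime)$ above such an $x^\prime$, the function $f$ takes the value 1 on a set of measure $1 - \prod_{n > d}(1 - 2^{-n}) \in (0,1)$ and the value 0 on the complementary set, also of positive measure, so the essential oscillation of $f$ along the fibre equals 1. Since this happens for every $d$ on a set of $x^\prime$ of positive measure, $f \notin S$; equivalently, $f_d$ converges to $f$ almost everywhere (by Jessen's theorem) but not in the $L^\infty$ norm.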
A proof may emulate the following reasoning: Suppose $f \notin S$, then there exists an $\epsilon > 0$ such that for all $d$ there exists an $A \subseteq \mathbb{I}^d$ with $\mu(A) > 0$ and for all $x \in A$, $\mbox{ ess osc } (f(\pi_d^{-1}(x))) \geq \epsilon$. If $||f - g||_{\infty} < \frac{\epsilon}{2}$ then for all $x \in A$, $\mbox{ ess osc }~(f(\pi_d^{-1}(x)))~\geq~\frac{\epsilon}{2}$. This can be restated more accurately as: Let $f:\mathbb{I}^{\infty} \rightarrow \mathbb{R}$. Then $\forall \epsilon, \exists d$ such that for almost every $x \in \mathbb{I}^d$ we have $\mbox{ess osc } (f(\pi_d^{-1}(x))) < \epsilon$. \begin{theo} [\ddag] Let $S$ be the space of all bounded functions $f:~\mathbb{I}^{\infty}~\rightarrow~\mathbb{R}$, with the property that $\forall \epsilon$, $\exists d$ such that $\forall d^{\prime} > d$, \[ \inf_{g \sim f} \sup_{\pi_d(x) = \pi_d(y)} |g(x) - g(y)| < \epsilon. \] Equip $S$ with the following norm: $||f|| = \mbox{ess sup } |f|$ where $\mbox{ess sup }_{x \in X} |f|$ is as defined earlier. The space $S$ is complete. \end{theo} \begin{proof} Let $(f_n)$ be a Cauchy sequence in $S$ converging to some function $f$ almost everywhere. It is enough to show that $f \in S$. In other words, given a sequence $(f_n)$ satisfying $\forall \epsilon$, $\exists d$ such that $\forall d^{\prime} > d$, if $\inf_{g \sim f} \sup_{\pi_d(x) = \pi_d(y)} |g(x) - g(y)| < \epsilon$ and $(f_n)$ converges to $f$, then $f$ satisfies the above as well. Let $\gamma > 0$ and choose for every $n$ a $g_n \sim f_n$ so that $\sup |g(x) - g(y)| < \epsilon + \gamma$. Now, for almost every $x, y$ such that $\pi_d(x) = \pi_d(y)$ we have \[ |f(x) - f(y)| < |f(x) - g_n(x)| + |g_n(x) - g_n(y)| + |g_n(y) - f(y)| \] and we already know that $f_n(x)$ converges to $f(x)$, thus $|f(x) - f_n(x)| < \epsilon$ almost everywhere and likewise $|g_n(y) - f(y)| < \epsilon$ almost everywhere. Also, $|g_n(x) - g_n(y)| < \epsilon + \gamma$. Thus $|f(x) - f(y)| < 3\epsilon + \gamma$. Since $\gamma$ is arbitrary, we get $|f(x) - f(y)| \leq 3\epsilon$ and thus belongs to $S$ as well since $\epsilon$ is arbitrarily chosen. \end{proof} So far we have seen that examples of functions which are slowly oscillating include: \begin{enumerate} \item continuous functions with regard to the product topology, and \item functions that factorise through projections. \end{enumerate} Now that we have positive results for $\mathbb{I}^\infty$, let us see if we can extend this to $\mathbb{R}^\infty$ with the hope that they will survive. \section{Results on $\mathbb{R}^\infty$} We begin this section by asking a simple, yet relevant question. What does it mean when we say that $f: R^\infty \rightarrow R$ is integrable with regard to our infinite-dimensional Lebesgue measure? Simply put, the integral exists and is finite, but for our purpose, let us examine it further. \begin{itemize} \item For bounded functions whose values are within some interval $-N$ to $N$ and whose domain of integration has finite measure, we subdivide this integral into small subintervals so that for every $i$, we consider a partition \[ -N < -N + \frac{2N}{i} < -N + \frac{4N}{i} < \ldots < N. \] Then we form a Lebesgue integral sum, \[ L_i(f) = \sum_{k=0}^{i-1} \lambda^\infty(f^{-1}(-N + \frac{2kN}{i},-N + \frac{2(k+1)N}{i}))\cdot(-N + \frac{2kN}{i}) \] and we say that the bounded function is integrable if $\lim_{i \rightarrow \infty} L_i(f)$ exists and is finite. That is \[ \int f(x) d\lambda^\infty(x) = \lim_{i \rightarrow \infty} L_i(f) < \infty. 
\] \item If the domain of integration, $A$, is unbounded, we write it as the union of pairwise disjoint sets, $A_i$, each of finite measure and then \[ \int_A f \, d\lambda^\infty = \sum_i \int_{A_i} f \, d\lambda^\infty. \] This sum does not depend on the choice of family as long as it is the union of disjoint sets of finite measure. \item If the function is unbounded, then for every natural number $N$, we define a cut-off function $F_N$ as follows \[ \forall N \in \mathbb{R}, f_N := \min\{N, \max\{-N,f\}\}; \mbox{ and } \int f(x) d\lambda^\infty(x) = \lim_{N \rightarrow \infty} \int f_N(x) d\lambda^\infty(x) \] and we say that the function is integrable if this limit exists and is finite. \end{itemize} Now, it is obvious that $\mathbb{R}^{\infty}$ can be covered by an uncountably infinite family of cubes. But $f$ being integrable means that the set of the points for which it is non-zero is contained in the union of countably many of these cubes. For the simple reason that $\int f = \sum_\Pi \int_\Pi f|_{\Pi}$ and this sum cannot be uncountable. Most of its terms should be zero for it to be sound. So on most cubes the restriction of $f$ to them is zero. Only on countably many cubes will the restriction be non-zero. Thus, we can apply the theorem of this result to every cube and since there are countably many, it will survive. Let us try to explain. \begin{lem} \label{L:BorelSetSigmaFinite1} Let $f:\mathbb{R}^\infty \rightarrow \mathbb{R}$ be Borel measurable and $f \geq 0$, and \\$\int_{\mathbb{R}^\infty}~f(x)~d\lambda^\infty(x)~=~1$. Then $S = \{x: f(x) > 0\}$ has $\sigma$-finite measure. \end{lem} \begin{proof} Let $S_n = \{x: f(x) \geq \frac{1}{n}\}, n = 1, 2, 3, \ldots, S = \cup_{n=1}^\infty S_n$. Let us look at $S_1$: \[ 1 = \int_S f d\lambda^\infty \geq \int_{S_1} f d\lambda^\infty \geq \int_{S_1} 1 d\lambda^\infty = \mu(S_1) \] Similarly, the same argument works for $S_n$: \[ 1 = \int_S f d\lambda^\infty \geq \int_{S_n} f d\lambda^\infty \geq \int_{S_n} \frac{1}{n} d\lambda^\infty = \frac{1}{n} \mu(S_n) \] This means that for every $n$, the measure of $S_n$ is finite. \end{proof} By the following lemma we can cover the set of all those points where $f(x) > 0$ by cubes and restrict the consideration of our function to the union of cubes. \begin{lem} [\ddag] \label{L:BorelSetSigmaFinite2} For a Borel set $A \subseteq \mathbb{R}^\infty$, the following are equivalent: \begin{itemize} \item[(i)] $A$ is $\sigma$-finite, that is, $A \subseteq \cup A_i$, where $\lambda^\infty(A_i) < \infty$ for all $i$ \item[(ii)] $A$ is contained in a union of a countable family of parallelepipeds of finite volume each. \end{itemize} \end{lem} \begin{proof} $(ii) \Rightarrow (i)$ is trivial. $(i) \Rightarrow (ii)$ follows from definition of $\lambda^\infty$. Let $\epsilon$ be any positive number, for example, $\epsilon = 1$. Suppose $A \subseteq \cup_{i=1}^{\infty} A_i \mbox{ with } \lambda^\infty(A_i) < \infty$. For each $i$, by definition there exists parallelepipeds $(C_{i,j})_{j=1}^{\infty}$ of finite measure such that $\lambda^\infty(A_i) \leq \sum_j \lambda^\infty(C_{i,j} \cap A_i) \leq \lambda^\infty(A_i) + \epsilon < \infty$. Then $A \subseteq \cup_{i,j = 1}^{\infty} C_{i,j}$, each $C_{i,j}$ has finite volume. \end{proof} Note however that it may not always be possible to cover a Borel set $A \subseteq \mathbb{R}^\infty$ by countably many cubes of side one, even in that case where the measure of $A$ is finite. The following example shows this. 
\begin{egg} Let us suppose that we have a parallelepiped with sides of length greater than one which converge to one very fast so that the product exists and is finite. Can it be covered with countably many cubes of finite volume? No. We can claim that by definition, a set of finite measure is contained in the union of countably many parallelepipeds of finite volume, but not necessarily with unit side. Let $(a_n)~\downarrow~1 \mbox{ such that } \prod_{n=1}^\infty a_n < \infty$. Let us examine the parallelepiped $[0, a_1] \times [0, a_2] \times \ldots$. It requires uncountably many unit cubes to be covered. Let $(C_1^n)_{n=1}^\infty$ be an infinite sequence of unit cubes. We will define an $x \in [0, a_1] \times [0, a_2] \times \ldots$ for which $x \notin \cup_{n=1}^\infty C_1^n$. For all $n, \mbox{ let } \mathcal{I}_j = [c_j, c_j+1]$ for some constant $c_j$ be the $j^{th}$ side of $C_1^n$. Since $\mathcal{I}_j$ is a unit interval, looking at the $j^{th}$ interval of our parallelepiped, there exists an $x_j \in [0, a_j] \setminus [c_j, c_j+1]$. This point is different from any $j^{th}$ coordinate of the $n^{th}$ cube so it misses all the cubes. Thus we cannot cover the support of a measurable function with countably many cubes of side one. We can do it with countably many parallelepipeds of finite volume though. \end{egg} \section{Peculiarities of $\mathbb{R}^\infty$} There are a few things to note about $\mathbb{R}^\infty$ which are very peculiar. The following theorem illustrates a problem that we came across. \begin{theo} [$\ddag$] There are no functions $f:\mathbb{R}^\infty \rightarrow \mathbb{R}$ for which: \begin{itemize} \item[(i)] $f \geq 0$ \item[(ii)] $\int_{\mathbb{R}^\infty} f \, d\lambda^\infty = 1$ \item[(iii)] $f$ is continuous with regard to the product topology. \end{itemize} \end{theo} \begin{proof} Let $x_0 \in \mathbb{R}^\infty$ with $f(x_0) > 0$. Set $\epsilon = \frac{f(x_0)}{2} > 0$. Since $f$ is continuous, it is continuous at $x_0$, and there exists a neighbourhood $V$ of $x_0$ in the product topology such that $\forall y \in V, |f(x_0) - f(y)| < \epsilon$. Thus, for all $y$ in $V$, $f(y) \geq f(x_0) - \epsilon = \frac{f(x_0)}{2}$. By definition of the product topology, without loss of generality we can replace $V$ with a standard basic neighbourhood $V^\prime = \prod_{i=1}^d (a_i,b_i) \times \mathbb{R}^{\{d+1, d+2, \ldots\}}$. Now, notice that: \[ 1 = \int_{\mathbb{R}^\infty} f(x) d\lambda^\infty(x) \geq \int_V f(x) d\lambda^\infty(x) \geq \int_V \epsilon d\lambda^\infty(x) = \epsilon \cdot \lambda^\infty(V) = +\infty \] \end{proof} \begin{rmk} Moreover, the following stronger statement follows from the same argument. If $f$ is a probability density function on $\mathbb{R}^\infty$, then $f$ has to be discontinuous at every $x$ where $f(x) > 0$. It can only be continuous at points where $f(x) = 0$. \end{rmk} \cleardoublepage \chapter{Non-Density Theorem} In this chapter we first discard an approach which once seemed plausible. We then go on to give a simple example which explains the ideas of the main theorem. Finally, we generalise the concepts of this example to prove the main theorem of this work. \section{Approach using Vitali systems} Since the proof of the Lebesgue density theorem does not generalise to the infinite-dimensional case, what approach will we take? Let us look at the most general form of the density theorem that is known as presented in \cite{SG} (section 10.3). To state it, we need first to have the following definitions. 
\begin{defn} A \emph{set function} is any function whose domain is a collection of sets and whose range is the (finite) real numbers. \end{defn} \begin{egg} We can take any measurable function $g$ and associate to it a set function $\phi$ as follows: $\phi(A) = \int_A g d\mu$ where $A$ belongs to the $\sigma$-algebra on which $\mu$ is defined. \end{egg} \begin{defn} A countably additive set function is a map $\phi: \mathfrak{B} \rightarrow \mathbb{R}$ such that if $A_i, A_j \in \mathfrak{B} \mbox{ and } A_i \cap A_j = \emptyset \mbox{ for } i \neq j \mbox{ and } i,j \in \mathbb{N}, $ then $\phi(\cup_{i=1}^\infty A_i) = \sum_{i=1}^\infty \phi(A_i)$, where $\mathfrak{B}$ denotes a $\sigma$-algebra. \end{defn} \begin{defn} Let $X$ be a metric space equipped with a Borel measure $\mu$. Suppose every set $\{x\}$ consisting of a single point is measurable with measure zero ($\mu(\{x\}) = 0$). A \emph{Vitali system} for (X, $\mu$) is a family $\mathfrak{V}$ of Borel sets $E \subseteq X$ with \begin{enumerate} \item Given any Borel set $E$, there are countably many sets $A_i, i = 1,\ldots,n,\ldots$ such that $E \subseteq \cup_{i=1}^{\infty}A_i \mbox{ and } \mu(\cup A_i \setminus E) < \epsilon$. \item each $E \in \mathfrak{V}$ has a ``boundary'' $\partial E$ such that \begin{enumerate} \item if $x \in E \setminus \partial E$ then all Vitali sets of sufficiently small measure containing $x$ are contained in $E \setminus \partial E$. \item if $x \notin E \cup \partial E$ then all Vitali sets of sufficiently small measure containing $x$ are contained in $X \setminus (E \cup \partial E)$. \end{enumerate} \end{enumerate} \end{defn} \begin{egg} The following are Vitali systems for the Euclidean space $\mathbb{R}^d$ equipped with the Lebesgue measure: \begin{enumerate} \item The balls $B_x(\epsilon)$ of radius $\epsilon$ with centre $x \in \mathbb{R}^d$ and $\epsilon > 0$. \item All cubes. \end{enumerate} \end{egg} \begin{defn} Let $\phi(E)$ be a countably additive set function defined on a metric space $X$ (and hence on $\mathfrak{V}$). Also, let $X$ be equipped with a Borel measure $\mu$. Then, by the derivative of $\phi(E)$ at the point $x_0$ with respect to the Vitali system $\mathfrak{V}$ we mean the quantity \[ D_{\mathfrak{V}} \phi(x_0) = \lim_{\epsilon \to 0} \frac{\phi(A_{x_0}(\epsilon))}{\mu(A_{x_0}(\epsilon))} \] (provided the limit exists), where $A_{x_0}(\epsilon)$ is any Vitali set of measure less than $\epsilon$ containing the point $x_0$. \end{defn} Put differently, differentiating $\phi$ at any point $x_0$ gives us $D_{\mathfrak{V}} \phi(x_0)$. The following theorem is taken from \cite{SG} (section 10.3) and is the most general form of the theorem which we have come across. \begin{theo} [Lebesgue-Vitali Theorem] Let $\mathfrak{V}$ be a Vitali system of Borel subsets of $X$ and let $\phi(E)$ be a countably additive set function on $X$. Then the derivative: \[ D_{\mathfrak{V}} \phi(x_0) = \lim_{\epsilon \to 0} \frac{\phi(A_{x_0}(\epsilon))}{\mu(A_{x_0}(\epsilon))} \] exists almost everywhere. \end{theo} Let $\mathfrak{V}$ be a Vitali system of sets in $(X, \mu)$ where $\mu$ is $\sigma$-finite and $\sigma$-additive and let $f: X \rightarrow \mathbb{R}$ be integrable. Then this corollary follows immediately from the above theorem. \begin{cor} Let $f$ be a measurable function on a metric space. Then for almost every $x \in X$, \[ \lim_{\epsilon \to 0} \frac{1}{\mu(A)} \int_A f(y) \, d\mu(y) = f(x) \] where $A \subseteq X$ is an element of a Vitali system containing $x$ and $\mu(A) < \epsilon$. 
\end{cor} Unfortunately, this approach cannot be made to work in $\mathbb{R}^\infty$ for two main reasons. First of all, the theorem uses a $\sigma$-finite measure, $\mu$, while the measure, $\lambda^\infty$, we are interested in is not $\sigma$-finite. Second, there are no obvious candidates for Vitali systems. Even in $\mathbb{I}^\infty$, the sets $\{[a_1, b_1] \times \ldots \times [a_n, b_n] \times \mathbb{I}^{J^\prime}\}$ do not form a Vitali system because the ``boundary'' condition does not work. A very relaxed approach to his first axiom is as follows: take a parallelepiped $C$ such that $0 < \lambda^\infty(C) < \infty, \forall x \in C, \exists \epsilon > 0 \mbox{ such that } \forall C^\prime \ni x, \lambda^\infty(C^\prime) < \epsilon \Rightarrow C^\prime \subseteq C$. Take for example, $C = [0, 1]^\infty$. If we take parallelepipeds, then we can always find a parallelepiped, $C^\prime$, of small measure which sticks out (thus violating the ``boundary'' condition) of our chosen parallelepiped of finite volume, $C$. Notice that in finite dimensions there is a very rigid dependence between the volume of a ball, or cube, and its radius, respectively the length of its side. This is a one-to-one correspondence and by changing the volume we can make the radius as small as we wish. In infinite dimensions, there is only one cube of finite dimension. The cubes in $\mathbb{R}^\infty$ have the property that their volumes are either 0, 1, or +$\infty$. This is why we are forced to use parallelepipeds. But parallelepipeds do not form a Vitali system in $\mathbb{R}^\infty$ as we just saw. Let us thus abandon the Vitali approach and recall that parallelepipeds do work in $\mathbb{R}^d$. In \cite{HP} (Part I, Chapter V) a theorem by Jessen, Marcinkiewicz and Zygmund shows that the density theorem holds in $\mathbb{R}^d$ using a parallelepiped basis. Their theorem is stated as follows: \begin{theo} The interval basis $\mathfrak{I} = [\mathcal{I}, \delta]$ in $\mathbb{R}^d$ derives the Lebesgue integral of each measurable function $f$ for which the function $|f|(\log^+ |f|)^{m-1}$ is Lebesgue integrable over the open cube \[ Q_0 = \{x: 0 < x_i < 1, i = 1,2,\ldots,d\}, \] and the $\mathfrak{I}$-derivative coincides with $f$ except on a set of Lebesgue measure zero. \end{theo} Here, $\mathcal{I}$ is the family of closed non-degenerate $d$-dimensional parallelepipeds \[ I = \{x: \alpha_i \leq x_i \leq \beta_i, i = 1,2,\ldots,d\} \] and $\alpha_i < \beta_i \mbox{ for } i = 1,2,\ldots,d$. We will not analyse it but the condition on $f$ is satisfied by all bounded functions for example. In different words, the above theorem states that the derivative of any Lebesgue integral of a measurable function $f$, satisfiying certain conditions, can be calculated using the interval basis and the derivative will coincide with $f$ except on a null set. \begin{rmk} Interestingly enough, the same source (\cite{HP}, p. 104) shows that in the more general case, where the basis consists of all rectangular parallelepipeds whose sides may or may not be parallel to the coordinate axes, the density theorem does not even hold in $\mathbb{R}^2$. \end{rmk} \section{Interpretation of Density} It is important to realize that since we are dealing with parallelepipeds, there are two ways in which we can interpret density. 
The reason for this is that if we are given a parallelepiped $\Pi$ with centre $x$ and measure $0 < \lambda^\infty(\Pi) < \infty$ and another parallelepiped $\Pi^\prime$ with centre $x$ and measure $\lambda^\infty(\Pi^\prime) < \lambda^\infty(\Pi)$, then one of two possible situations holds: \begin{itemize} \item The first situation is the one which comes to mind immediately, namely $\Pi^\prime \subseteq \Pi$. For this case, the definition of a Lebesgue point is as follows: \begin{defn} Call $x \in \mathbb{R}^\infty$ a Lebesgue point for $f:\mathbb{R}^\infty \rightarrow \mathbb{R}$ if $\forall \epsilon > 0, \exists \Pi$, a parallelepiped centred at $x$ with $\lambda^\infty(\Pi) > 0$, such that $\forall \Pi^\prime$ centred at $x$ with $\lambda^\infty(\Pi^\prime) > 0$ and $\Pi^\prime \subseteq \Pi$, \[ \left | \frac{1}{\lambda^\infty(\Pi^\prime)} \int_{\Pi^\prime} f(y) \, d\lambda^\infty(y) - f(x) \right | < \epsilon. \] \end{defn} \item The second situation stems from the fact that the geometry of a parallelepiped is not determined by its measure, and so even though $\lambda^\infty(\Pi^\prime) < \lambda^\infty(\Pi)$, it does not follow that $\Pi^\prime$ is contained in $\Pi$. There may be dimensions in which $\Pi^\prime$ sticks out of $\Pi$. As a simple example in $\mathbb{R}^2$, take the cube $Q$ of side 2 units. Its area is 4 units$^2$ of course. Now, take a rectangle $R$ with the same centre as $Q$, but whose length is 3 and width is 1. Its area is 3 units$^2$, yet $R$ cannot be contained in $Q$. In this case, the definition of a Lebesgue point has to be modified to the following stronger statement: \begin{defn} Call $x \in \mathbb{R}^\infty$ a Lebesgue point for $f:\mathbb{R}^\infty \rightarrow \mathbb{R}$ if $\forall \epsilon > 0, \exists \delta > 0$ such that for every parallelepiped $\Pi$ centred at $x$ with $0 < \lambda^\infty(\Pi) < \delta$, \[ \left | \frac{1}{\lambda^\infty(\Pi)} \int_{\Pi} f(y) \, d\lambda^\infty(y) - f(x) \right | < \epsilon. \] \end{defn} \end{itemize} The second condition is the stronger one: if the density exists in that sense, then it also exists in the first sense and, furthermore, the two limits are equal. However, it is not known whether the density exists in the second sense even in finite dimensions, and for this reason we will only work with the first case, where the restriction is by geometry as well as by volume. \section{Example: Calculating the density of the cube} We need to realise some facts about densities. First, they do not always exist for measurable functions at all points, even in $\mathbb{R}^d$. The following example illustrates this. \begin{egg} The function \( f(x) = \left \{ \begin{array}{cl} 1 & \mbox{ if } x \in (2^{-(n + 1)}, 2^{-n}] \mbox{ for $n = 0, 2, 4, \ldots$ } \\ 0 & \mbox{ otherwise } \end{array} \right. \) has integral $\sum_{n=1}^{\infty} 2^{-(2n-1)} = \frac{2}{3}$ over the interval $[0,1]$, and thus average value $\frac{1}{3}$ over the interval $[-1,1]$. What is the density of $f$ at $x = 0$? \[ \lim_{n \to \infty} \frac{1}{|[-2^{-n}, 2^{-n}]|} \int_{[-2^{-n}, 2^{-n}]} f(y) \, dy = ? \] Realise that if we take a ball of decreasing radius about $x = 0$, the density will never exist, since the above limit does not exist: the averages keep fluctuating, and do so all the more as the ball gets smaller. \end{egg} Can we state a reasonable analogue of the Lebesgue density theorem for $\lambda^\infty$? Let us begin with an example which contains the ideas of the main theorem. \begin{egg} Let us compute the Lebesgue density of $A = \mathbb{I}^\infty \subseteq \mathbb{R}^\infty$.
\begin{align} density(A,x) &= \lim_{\substack{x \text{ is the centre of } \Pi\\0 < \lambda^\infty(\Pi) < \epsilon\\\epsilon \rightarrow 0}} \frac{1}{\lambda^\infty(\Pi)} \int_\Pi \chi_A d\lambda^\infty \notag \\ &= \lim_{\epsilon \to 0} \frac{\lambda^\infty(\Pi \cap A)}{\lambda^\infty(\Pi)} \notag \end{align} First, let us study the case where $x \in \mathbb{I}^\infty$. Consider the set of sequences $S = \{x \in \mathbb{I}^\infty: \exists D, \forall d > D, x_d \geq \frac{1}{4}\}$. This set has $\lambda^\infty$-measure zero. Indeed, let $S = \cup_D S_D \mbox{ where } S_D = \{x: \forall d > D, x_d \geq \frac{1}{4}\} \subseteq [0,1] \times \ldots \times [0,1] \times [\frac{1}{4}, 1] \times [\frac{1}{4}, 1] \times \ldots$, which has measure zero. We draw the conclusion by using the $\sigma$-additivity of $\lambda^\infty$ that for almost every $x \in \mathbb{I}^\infty, \forall D, \exists d > D, x_d < \frac{1}{4}$. Now, let $x \in \mathbb{I}^\infty \setminus S$. Take $C_1$ to be the unit cube centred at $x$. Clearly, one has \[ C_1 \cap \mathbb{I}^\infty \subseteq [0,1] \times \ldots \times [0, \frac{3}{4}] \times \ldots \times [0, \frac{3}{4}] \times \ldots \] where the interval $[0, \frac{3}{4}]$ occurs and infinite number of times. This is a set that has measure zero. It follows that \( density(\mathbb{I}^\infty, x) = 0. \) Finally, consider the case where $x \notin \mathbb{I}^\infty$. Then there is a coordinate, $x_i$, which sits outside $\mathbb{I}^\infty$. Take a parallelepiped centred at $x_i$ which is so small in the $i^{th}$ dimension that it does not touch $\mathbb{I}^\infty$. Since this parallelepiped is disjoint from $\mathbb{I}^\infty$, the measure of their intersection is zero. Overall, we conclude that \[ \frac{\lambda^\infty(\Pi \cap \mathbb{I}^\infty)}{\lambda^\infty(\Pi)} = \frac{0}{1} = 0 \] and the density of the cube $\mathbb{I}^\infty$ is zero at $\lambda^\infty$-almost every point of $\mathbb{R}^\infty$. \end{egg} \section{Main Theorem} Before we extend the previous ideas to a more general situation, let us introduce one more concept that is needed to prove our main result. \begin{defn} [$\ddag$] Let $\Pi$ be a parallelepiped with centre $x$. That is, let \( \Pi~=~\prod_{i=1}^\infty [a_i, b_i] \) and \( x = \mbox{centre}(\Pi) = \left ( \frac{a_i + b_i}{2} \right )_{i=1}^\infty. \) Let $0 \leq \delta \leq 1$. The \emph{$\delta$-core} of $\Pi$ is the set: \[ \mbox{core}_\delta(\Pi) =\bigcup_{D=1}^\infty \left ( \prod_{i=1}^D [a_i, b_i] \times \prod_{i=D+1}^\infty [\frac{a_i + b_i}{2} - \frac{\delta \cdot (b_i - a_i)}{2}, \frac{a_i + b_i}{2} + \frac{\delta \cdot (b_i - a_i)}{2}] \right ). \] \end{defn} The core consists of all sequences whose coordinates eventually end up close to the centre of the parallelepiped, and stay there forever. Note that if $\delta = 1$, we get back to our ``parent'' parallelepiped. \begin{lem} [$\ddag$] \label{L:NullCore} $\lambda^\infty(\mbox{core}_\delta(\Pi)) = 0$ if $\delta < 1$, provided $\lambda^\infty(\Pi) < \infty$. \end{lem} \begin{proof} It is enough to prove that for each $D$, \[ \prod_{i=1}^D [a_i, b_i] \times \prod_{i=D+1}^\infty [\frac{a_i + b_i}{2} - \frac{\delta \cdot (b_i - a_i)}{2}, \frac{a_i + b_i}{2} + \frac{\delta \cdot (b_i - a_i)}{2}] \] has measure zero because the core is the union of countable many sets and if each set has measure zero, then by the $\sigma$-subadditivity of measure, their union is also zero. 
So: \begin{align} \lambda^\infty & \left ( \prod_{i=1}^D [a_i, b_i] \times \prod_{i=D+1}^\infty [\frac{a_i + b_i}{2} - \frac{\delta \cdot (b_i - a_i)}{2}, \frac{a_i + b_i}{2} + \frac{\delta \cdot (b_i - a_i)}{2}] \right ) \notag \\ &= \prod_{i=1}^D (b_i - a_i) \times \prod_{i=D+1}^\infty \delta \cdot (b_i - a_i) \notag \\ &= \prod_{i=1}^D l_i \times \prod_{i=D+1}^\infty \delta \cdot l_i \notag \end{align} But $(l_i)_{i \to \infty} \to 1$ (see lemma \ref{L:SidesApproachOne}), so as $i \to \infty$ the lengths of the sides of the core converge to $\delta$. But $\delta < 1$ and so $\prod_{i=D+1}^\infty \delta = 0$. \end{proof} We now arrive at the main theorem of this work. \begin{theo} [$\ddag$] \label{T:NonDensity} Let $f:\mathbb{R}^\infty \rightarrow \mathbb{R}$ be a measurable function such that $\int_{\mathbb{R}^\infty}{f \, d\lambda^\infty}~=~1$ and $f \geq 0$. Then for almost every $x$, \[ \lim_{\substack{0 < \lambda^\infty(\Pi) < \infty\\\lambda^\infty(\Pi) \rightarrow 0}} \frac{1}{\lambda^\infty(\Pi)} \int_{\Pi} f(y) d\lambda^\infty(y) = 0. \] \end{theo} \noindent [Proof Idea] Since $\int_{\mathbb{R}^\infty} f \, d\lambda^\infty = 1$, the set of points where $f$ is non-zero, call it $A$, is $\sigma$-finite and so $A \subseteq \cup_{i=1}^\infty \Pi_i$ where $\Pi_i$ are parallelepipeds with $\lambda^\infty(\Pi_i) < \infty$. For each parallelepiped, the measure of the points at the $\delta$-core (for $\delta < 1$) is zero. Since a countable union of negligible sets is still negligible, the measure of the union of the cores of all our parallelepipeds is still zero. Now, take a point which is not at the union of the cores and take the unit cube, $C_1$, around it. Then $\lambda^\infty(C_1 \cap \Pi_i) = 0$ and so, \[ \frac{1}{\lambda^\infty(C_1)} \int_{\{y \in C_1;f(y) > 0\}} f(y) \, d\lambda^\infty(y) = 0 \] and $A$ has density zero almost everywhere. \begin{proof} Let $A = \{x: f(x) > 0\}$, then $A \subseteq \cup_{j=1}^\infty \Pi_j$ (see lemmas \ref{L:BorelSetSigmaFinite1} and \ref{L:BorelSetSigmaFinite2}). Set $\delta = \frac{1}{2}$. We have that $\lambda^\infty(\cup_{j=1}^\infty \mbox{ core}_\frac{1}{2} (\Pi_j)) = 0$ (see lemma \ref{L:NullCore}). Without loss of generality, suppose $x \in \mathbb{R}^\infty \setminus \{\cup_{j=1}^\infty \text{ core}_\frac{1}{2} (\Pi_j)\}$. Denote by $C_1$ the unit cube centred at $x$. Fix an arbitrary $j$. We know that $x \notin \mbox{core}_\frac{1}{2} (\Pi_j)$ and this means that \[ \left \vert \left \{ i: | x_i - \frac{b_i + a_i}{2} | > \frac{l_i}{4} \right \} \right \vert = \infty, \] where $l_i$ is the length of the sides of the parallelepiped in the $i^{th}$ dimension. Set $\epsilon = \frac{1}{8}$ and choose $D$ so that $\forall i > D$, $\frac{7}{8} < l_i < \frac{9}{8}$. In particular, for an infinite set of $i$'s we have both $\frac{7}{8} < l_i < \frac{9}{8}$ and $| x_i - \frac{b_i - a_i}{2} | > \frac{l_i}{4}$ being satisfied. Let us deduce that $\lambda^\infty(C_1 \cap \Pi_j) = 0$. We do so by bounding the length of the intersection of the sides of $C_1$ and $\Pi_j$. The worst cases are: \begin{enumerate} \item when $x$ comes close to the centre of the $i^{th}$ interval and \item when the length of the interval is at its longest \end{enumerate} because in either case the intersection can be large. So take the longest possible length which is $l_i = \frac{9}{8}$, and the $x$ closest to the centre. 
This means that $x$ would be at a distance of $\frac{1}{4} \times \frac{9}{8} = \frac{9}{32}$ which is midway between the start and centre of the interval. Now, let us calculate the overlap with the unit interval whose centre would be at that point $x$. We see that the overlap has length $\frac{1}{2} + \frac{9}{32} = \frac{25}{32} < 1$. Recall that this happens infinitely many times. So \[ \lambda^\infty(C_1 \cap \Pi_j) \leq 1 \times \ldots \times \frac{25}{32} \times \ldots \times \frac{25}{32} \times \ldots = 0. \] And as this happens for all $j$, it follows that \[ \lambda^\infty(C_1 \cap \cup_{j=1}^\infty \Pi_j) \leq \sum_{j=1}^\infty \lambda^\infty(C_1 \cap \Pi_j) = 0 \] and also, \[ \frac{1}{\lambda^\infty(C_1)} \int_{C_1} f(y) \, d\lambda^\infty(y) = 0. \] \end{proof} The next corollary follows directly from the preceding theorem. \begin{cor} [$\ddag$] Let $A \subseteq \mathbb{R}^\infty$ be a Borel set with $0 < \lambda^\infty(A) < \infty$. Then for almost every $x$, $density(A, \lambda^\infty) = 0$, that is, \[ \lim_{\substack{\Pi \text{ with centre } x\\0 < \lambda^\infty(\Pi) < \infty\\\lambda^\infty(\Pi) \to 0}} \frac{\lambda^\infty(\Pi \cap A)}{\lambda^\infty(\Pi)} = 0. \] \end{cor} At this point, we have come to the end of this work. Essentially, we have shown that the approach used is not plausible. It is not to be discarded though as there have been positive results. However, it does not mean that positive results regarding densities may not be obtained by using a different measure. \cleardoublepage \end{document}
\begin{document} \title{Counting processes in $p$-variation with applications to recurrent events} \begin{abstract} Convergence results for averages of independent replications of counting processes are established in a $p$-variation setting and under certain assumptions. Such convergence results can be combined with functional differentiability results in $p$-variation in order to study the asymptotic properties of estimators that can be considered functionals of such averages of counting processes. Examples of this are given in recurrent events settings, confirming known results while also establishing the appropriateness of the pseudo-observation method for regression analysis. In a recurrent events setting with a terminal event, it is also established that it is more efficient to discard complete information on a censoring time and instead consider the censoring times censored by the terminal event. \end{abstract} \section{Introduction} The concept of $p$-variation allows for an elegant way of studying asymptotic properties of estimators depending on the data through the empirical distribution function of one or more one-dimensional variables. This is because of two things. Firstly, the empirical distribution function, based on independent and identically distributed observations, converges to the true distribution function in $p$-variation for many $p$. Secondly, many such estimators may be described as differentiable functionals of the empirical distribution function in a $p$-variation setting, at least for some values of~$p$. This means that asymptotic properties can be derived by what is essentially a functional delta method. Unfortunately, many estimators do not depend only on the empirical distribution of one-dimensional variables, but will rather be more complex. This motivates looking for extensions to this approach, which involves proving convergence in $p$-variation of more general averages to the true mean. In this paper, we will see how the approach described above can be used for estimators based on averages of counting processes. The main result is a convergence result for counting processes in $p$-variation which builds on and extends results by Qian \cite{qian1998}. Applications to different estimators of the mean function in a recurrent events setting serve as demonstrations of how the described approach can be used to derive asymptotic properties of the different estimators. Expressions of influence functions and asymptotic variances and suggestions for natural variance estimators are also given for the different estimators and the appropriateness of the pseudo-observation method based on these estimators is established under some conditions. \section{Main results} \label{sec:main} A counting process $N$ is characterized by taking only non-negative integer values, $N \in \mathbb{N}_0$, and being increasing, $N(t) \geq N(s)$ for $t \geq s$. In our setting, counting processes are also continuous on the right with limits on the left and are defined on the time interval $[0, \infty]$ with the requirements $N(0) = 0$ and $N(\infty) = \lim_{t \to \infty} N(t)$. A counting process which is no larger than 1 is called simple. For a counting process $N$, the time points $T_k := \inf\{s : N(s) \geq k\}$ for $k \in \mathbb{N}$ characterize the process.
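As a purely illustrative aside (not part of the paper), the jump-time characterization can be made concrete computationally. The helper functions below are hypothetical names chosen for this sketch; they evaluate $N(s)$, the jump times $T_k$, and the simple processes $N^{(k)}(s) = 1\{T_k \leq s\}$ appearing in the decomposition described next. \begin{verbatim}
import numpy as np

# Illustrative sketch only: a counting-process path represented by its jump times.

def N(jump_times, s):
    """Value N(s) of the counting process at time s."""
    return int(np.sum(np.asarray(jump_times) <= s))

def T(jump_times, k):
    """k-th jump time T_k = inf{s : N(s) >= k}; infinity if fewer than k jumps."""
    ordered = np.sort(np.asarray(jump_times))
    return ordered[k - 1] if k <= len(ordered) else np.inf

def N_k(jump_times, k, s):
    """Simple counting process N^(k)(s) = 1{T_k <= s}."""
    return int(T(jump_times, k) <= s)

jumps = [0.7, 1.3, 2.9]   # a hypothetical sample path with three events
s = 1.5
# N(s) agrees with the sum of the simple processes (the decomposition below):
assert N(jumps, s) == sum(N_k(jumps, k, s) for k in range(1, 11))
print(N(jumps, s), T(jumps, 2), N_k(jumps, 3, s))   # prints: 2 1.3 0
\end{verbatim}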
Specifically, we can define simple counting processes by $N^{(k)}(s) = \indic{T_k \leq s}$, $s \in [0, \infty)$ for $k \in \mathbb{N}$, in which case \begin{equation} \label{eq:counting_decomposition} N(s) = \sum_{k = 1}^\infty N^{(k)}(s) \end{equation} for all $s \in [0, \infty]$. If $E(N(\infty)) < \infty$ for a counting process, $N$, as defined above then $\operatorname{E}(N(s)) < \infty$ for all $s \in [0,\infty]$. In this case, the notation $F$ for the mean function given by $F(s) = \operatorname{E}(N(s))$ will be used in the following. Consider $p \geq 1$. For a real function $f$ defined on an interval $J$ the $p$-variation is defined by \begin{equation} v_p(f;J) = \sup_{m \in \mathbb{N}, t_0, \hspace{1pt} \mathrm{d}ots, t_m \in J} \sum_{j=1}^m |f(t_j)-f(t_{j-1})|^p \end{equation} and $f$ is of bounded $p$-variation when $v_p(f;J) < \infty$. An equivalent definition and the following results can be found in the book by Dudley and Norvai\v{s}a ^{\mathsf{c}}ite{Dudley2010}. The definition $\|f\|_{(p)} = v_p(f;J)^{\frac{1}{p}}$ leads to a seminorm and if $\|f\|_\infty = \sup_{t \in J} |f(t)|$, norms are defined by both $\| f \|_\infty$, the supremum norm, and $\| f\|_{[p]} = \|f\|_{(p)} + \|f\|_{\infty}$, the $p$-variation norm. The space $\mathcal{W}_p(J)$ of all functions $f ^{\mathsf{c}}olon J \to \mathbb{R}$ of bounded $p$-variation, $v_p(f;J) < \infty$, is a Banach space, a complete normed vector space, when equipped with the $p$-variation norm. This is similarly the case for the subset $\mathcal{W}_p^{\mathsf{r}}(J)$ of right-continuous functions. The interval $J$, which will be $[0,\infty)$ in the following, is generally dropped from the notation when it is clear from the context as has already been done in the cases of $\|^{\mathsf{c}}dot\|_{\infty}$, $\|^{\mathsf{c}}dot\|_{(p)}$, and $\|^{\mathsf{c}}dot\|_{[p]}$. An important convergence result by Qian, Theorem~3.2 of ^{\mathsf{c}}ite{qian1998}, implies the following result. \begin{prop} \label{prop:QianE} Let $F$ be a cumulative distribution function with mass on $(0, \infty)$ and let, for each $n \in \mathbb{N}$, $F_n$ be the empirical distribution of $n$ independent observations $X_1, \hspace{1pt} \mathrm{d}ots, X_n \in [0,\infty)$ from the distribution given by $F$, that is, given by $F_n(t) = \frac{1}{n} \sum_{i=1}^n \indic{X_i \leq t}$. Then, for $p \in [1, 2)$, a constant $C_p$, only depending on~$p$, exists such that \begin{equation} \label{eq:QianE} \operatorname{E}(v_p(F_n - F)) \leq C_p n^{1-p} \end{equation} for any $n \in \mathbb{N}$. \end{prop} The inequality of \eqref{eq:QianE} also implies \begin{equation} \label{eq:QianE_alt} \operatorname{E}(\|F_n - F\|_{[p]}) \leq 2C_p^{\frac{1}{p}} n^{\frac{1-p}{p}} \end{equation} according to Jensen's inequality with the convex function $x \mapsto x^p$ and the fact that $F_n(0)-F(0)=0$ under the same conditions. The processes $t \mapsto \indic{X_i \leq t}$ at play above are simple counting processes. Proposition~\ref{prop:QianE} is the main ingredient in obtaining the following convergence result for more general simple counting processes, which generalizes~\eqref{eq:QianE_alt}. \begin{theorem} \label{theorem:simple} Consider a simple counting process $N$ with mean function $F$. For each $n \in \mathbb{N}$, let $F_n = n^{-1} \sum_{i=1}^n N_{i}$ be the average of independent replications $N_1, \hspace{1pt} \mathrm{d}ots, N_n$ of $N$. 
Then, for any $p \in [1, 2)$ a constant $K_p$, only depending on $p$, exists such that \begin{equation} \operatorname{E}(\| F_n - F \|_{[p]}) \leq K_p F(\infty)^{\frac{1}{p}} n^{\frac{1-p}{p}} \end{equation} for any $n \in \mathbb{N}$. \end{theorem} \begin{proof} Let $p \in [1,2)$, $n \in \mathbb{N}$ and the processes $N_1, \dots, N_n$ be given and let $\tilde n = \# \{i : N_i(\infty) = 1 \}$ be the number of observed jumps. Interpreting division by 0 as 0 allows for the equations \begin{equation} \label{eq:FnFsplit} \begin{aligned} F_n - F &= \frac{\tilde n}{n} \frac{1}{\tilde n}\sum_{i=1}^n N_i - F(\infty) \frac{F}{F(\infty)} \\ &= \frac{\tilde n}{n} \big( \frac{1}{\tilde n} \sum_{i=1}^n N_i - \frac{F}{F(\infty)} \big) + ( \frac{\tilde n}{n} - F(\infty)) \frac{F}{F(\infty)}. \end{aligned} \end{equation} Because $F$ is 0 at 0 and increasing, we have $\| F /F(\infty)\|_{[p]} \leq 2$. Note that $F(\infty) \in [0,1]$ because $N$ is simple. Since $\tilde n$ follows a binomial distribution of $n$ trials with probability $F(\infty)$, we have $\operatorname{E}(| \tilde n - n F(\infty)|) \leq \operatorname{E}(\tilde n(1-F(\infty)) + (n-\tilde n) F(\infty)) = 2 n F(\infty)(1-F(\infty))$ and $\operatorname{E}(| \tilde n - n F(\infty)|^2) = \Var(\tilde n) = nF(\infty)(1-F(\infty))$ and so $\operatorname{E}(| \tilde n - n F(\infty)|^p) \leq \operatorname{E}(\max(| \tilde n - n F(\infty)|, | \tilde n - n F(\infty)|^2)) \leq \operatorname{E}(| \tilde n - n F(\infty)|) + \operatorname{E}(| \tilde n - n F(\infty)|^2) \leq 3 n F(\infty)(1-F(\infty))$. It is then an application of Jensen's inequality with the convex function $x \mapsto x^p$ which reveals \begin{equation} \begin{aligned} \operatorname{E} \big(\big| \frac{\tilde n}{n} - F(\infty) \big|\big) &\leq \frac{1}{n} \operatorname{E}(| \tilde n - n F(\infty)|^p)^{\frac{1}{p}} \leq \frac{1}{n} \big(3n F(\infty)(1-F(\infty))\big)^{\frac{1}{p}} \\ & \leq 3^{\frac{1}{p}} n^{\frac{1-p}{p}} F(\infty)^{\frac{1}{p}}. \end{aligned} \end{equation} The function $F/F(\infty)$ is a cumulative distribution function. In the conditional distribution given $N_1(\infty), \dots, N_n(\infty)$, the average $\tilde n^{-1} \sum_{i=1}^n N_i$ is an empirical distribution of $\tilde n$ independent observations from the distribution given by $F/F(\infty)$. Let $\sigAlg{A}$ be the $\sigma$-algebra generated by $N_1(\infty), \dots, N_n(\infty)$. An application of Proposition~\ref{prop:QianE} and Jensen's inequality as before in the conditional distribution reveals, almost surely, \begin{equation} \begin{aligned} \operatorname{E} \big(\big\|\frac{1}{\tilde n} \sum_{i=1}^n N_i - \frac{F}{F(\infty)} \big\|_{[p]} \mathbin{|} \sigAlg{A} \big) &\leq 2\operatorname{E} \big(v_p \big(\frac{1}{\tilde n} \sum_{i=1}^n N_i - \frac{F}{F(\infty)} \big) \mathbin{|} \sigAlg{A} \big)^{\frac{1}{p}} \\ &\leq 2C_p^{\frac{1}{p}} \tilde n^{\frac{1-p}{p}}, \end{aligned} \end{equation} where $C_p$ is the constant from Proposition~\ref{prop:QianE}. It is worth noting that $\sigma(\tilde n) \subseteq \sigAlg{A}$ and that the same constant $C_p$ can be used no matter the distribution according to Proposition~\ref{prop:QianE}.
The considerations above mean that equation~\eqref{eq:FnFsplit} leads to \begin{equation} \begin{aligned} \operatorname{E}(\|F_n - F\|_{[p]}) &\leq 2C_p^{\frac{1}{p}} \frac{1}{n} \operatorname{E}(\tilde n^{\frac{1}{p}}) + 3^{\frac{1}{p}} 2 n^{\frac{1-p}{p}} F(\infty)^{\frac{1}{p}} \\ &\leq 2(C_p^{\frac{1}{p}} + 3^{\frac{1}{p}}) n^{\frac{1-p}{p}} F(\infty)^{\frac{1}{p}} \end{aligned} \end{equation} where Jensen's inequality is again used to establish that $\operatorname{E}((\tilde n / n)^{1/p}) \leq \operatorname{E}(\tilde n/ n)^{1/p} = F(\infty)^{1/p}$. This shows the desired upper bound with the constant $K_p = 2(C_p^{\frac{1}{p}} + 3^{\frac{1}{p}})$. \end{proof} Let us now turn our attention to more general counting processes as defined in the beginning of this section. The characterization of a counting process as a sum of simple counting processes allows us to establish the following convergence result by appealing to Theorem~\ref{theorem:simple}. \begin{theorem} \label{theorem:generalE} Let $p \in [1,2)$ be given and consider a counting process $N$ with mean function $F$ such that $N(\infty)$ has finite moment of order $p+\varepsilon$ for some $\varepsilon > 0$. For each $n \in \mathbb{N}$, let $F_n = n^{-1} \sum_{i=1}^n N_{i}$ be the average of independent replications $N_1, \dots, N_n$ of $N$. Then a constant $C$ exists, depending on $p$ as well as the distribution of $N(\infty)$, such that \begin{equation} \label{eq:multjumpE} \operatorname{E}(\| F_n - F \|_{[p]}) \leq C n^{\frac{1-p}{p}}, \end{equation} for any $n \in \mathbb{N}$. \end{theorem} \begin{proof} Let $n \in \mathbb{N}$ and the processes $N_1, \dots, N_n$ be given. For each $k \in \mathbb{N}$, let $N^{(k)}$ denote the simple counting process corresponding to the decomposition of $N$ as in \eqref{eq:counting_decomposition} and similarly with $N_i^{(k)}$ for $i = 1, \dots, n$. Also, for $k \in \mathbb{N}$, let $F^{(k)}$ denote the mean function of $N^{(k)}$ and let $F_n^{(k)} = n^{-1} \sum_{i=1}^n N_i^{(k)}$ be the empirical mean function. We then obtain the identities $F = \sum_{k=1}^\infty F^{(k)}$ and $F_n = \sum_{k=1}^\infty F_n^{(k)}$. By the triangle inequality, the monotone convergence theorem, and Theorem~\ref{theorem:simple}, we now have \begin{equation} \label{eq:FnF_E_bound_general} \operatorname{E}(\|F_n - F\|_{[p]}) \leq \sum_{k=1}^\infty \operatorname{E}(\|F_n^{(k)} - F^{(k)}\|_{[p]}) \leq K_p \sum_{k=1}^\infty F^{(k)}(\infty)^{\frac{1}{p}} n^{\frac{1-p}{p}}. \end{equation} Now, $F^{(k)}(\infty) = \operatorname{P}(N(\infty) \geq k)$ and by Markov's inequality, we have, for the given $\varepsilon > 0$, $\operatorname{P}(N(\infty) \geq k) \leq \operatorname{E}(N(\infty)^{p+\varepsilon})/k^{p+\varepsilon}$. Since $\operatorname{E}(N(\infty)^{p+\varepsilon}) < \infty$ by the moment condition, then \begin{equation} C:= K_p \sum_{k=1}^\infty F^{(k)}(\infty)^{\frac{1}{p}} \leq K_p \operatorname{E}(N(\infty)^{p+\varepsilon})^{\frac{1}{p}} \sum_{k=1}^{\infty} k^{-1-\frac{\varepsilon}{p}} < \infty \end{equation} and this proves the desired result. \end{proof} The bound in expectation in \eqref{eq:multjumpE} immediately, by Markov's inequality, gives the useful bound in probability, \begin{equation} \| F_n - F \|_{[p]} = O_{\operatorname{P}}(n^{\frac{1-p}{p}}) \end{equation} under the same assumptions. The following result gives an almost sure bound in some cases and has its inspiration in Theorem~4.2 of \cite{qian1998}.
The proof will rely heavily on a lemma from \cite{dudley1983invariance} in addition to the results of Theorem~\ref{theorem:generalE} above. \begin{theorem} \label{theorem:almost_sure} Consider a counting process $N$ such that $N(\infty) \leq B$ almost surely for some $B > 0$. For each $n \in \mathbb{N}$, let $F_n = n^{-1} \sum_{i=1}^n N_{i}$ be the average of independent replications $N_1, \dots, N_n$ of $N$. Then for any $p \in [1,2)$ a constant $\lambda$ exists, depending on $p$ and $B$, such that \begin{equation} \label{eq:as_bound} \limsup_{n \to \infty} n^{\frac{p-1}{p}} \| F_n - F \|_{[p]} \leq \lambda \end{equation} almost surely. In particular, $\|F_n - F\|_{[p]} = O(n^{(1-p)/p})$ almost surely in this case. \end{theorem} \begin{proof} The statement is trivial for $p=1$ where $\|F_n - F\|_{[p]} \leq \lambda$ for all $n$ for $\lambda = 4B$. Consider a given $p \in (1,2)$ and, for now, a given $n \in \mathbb{N}$ and the processes $N_1, \dots, N_n$. We let $X_j = N_j -F$, for $j=1, \dots, n$, as well as $S_n = \sum_{j=1}^n X_j$ be random elements of $\mathcal{W}_p$. The $\{X_j\}_{j=1}^n$ can be considered an independent sequence in the terminology of Section~2 of \cite{dudley1983invariance}. Owing to the upper bound of $N$ by $B$, we have $F(s) \leq B$ and so $\|X_j\|_{[p]} \leq 4B =: M$ and $\sum_{j=1}^n \operatorname{E}(\|X_j\|_{[p]}^2) \leq n (4B)^{2} =: \tau_n$. Lemma~2.6 of \cite{dudley1983invariance} now states that \begin{equation} \label{eq:dudley_bound} \operatorname{P}(\|S_n\|_{[p]} \geq K) \leq \exp(3 \gamma^2 \tau_n - \gamma (K - \operatorname{E}(\|S_n\|_{[p]}))) \end{equation} for any $\gamma \in [0, (2M)^{-1}]$ and any $K > 0$. Now, supposing $n$ is sufficiently large that $n^{(1-p)/p} \leq (2M)^{-1}$, we want to use this with $\gamma = n^{(1-p)/p}$ and $K = t n^{1/p}$ for a given $t > 0$. Note that $S_n = n(F_n - F)$ and use that $\operatorname{E}(\|S_n\|_{[p]}) \leq C n^{1/p}$ for some $C > 0$ according to Theorem~\ref{theorem:generalE} to see that equation~\eqref{eq:dudley_bound} implies \begin{equation} \label{eq:tail_DP} \begin{aligned} \operatorname{P}(n^{\frac{p-1}{p}}\|F_n - F\|_{[p]} \geq t) &\leq \exp \big( 3 n^{2\frac{1-p}{p}} n(4B)^2 - n^{\frac{1-p}{p}}(t n^{\frac{1}{p}} - C n^{\frac{1}{p}})\big) \\ &= \exp\big(-n^{\frac{2-p}{p}}(t - (C+3(4B)^2))\big) \end{aligned} \end{equation} for any $t > 0$ for sufficiently large $n$. Let $\lambda = C+3(4B)^2$. Since $p < 2$ and if $t > \lambda$ is considered, the tail probability from~\eqref{eq:tail_DP} vanishes rapidly as $n$ increases. In particular, $\sum_{n=1}^\infty \operatorname{P}(n^{(p-1)/p} \|F_n - F\|_{[p]} \geq t)$ converges for $t > \lambda$. The Borel--Cantelli lemma then reveals that $\operatorname{P}( \limsup_{n \to \infty} \{n^{(p-1)/p} \|F_n - F\|_{[p]} \geq t\}) = 0$ for $t > \lambda$. This implies $\limsup_{n \to \infty} n^{(p-1)/p} \|F_n - F\|_{[p]} \leq \lambda$ almost surely, which is the desired result. Looking at the proof of Theorem~\ref{theorem:generalE} and equation~\eqref{eq:FnF_E_bound_general} in particular reveals that $C \leq K_p B$, for $K_p$ from Theorem~\ref{theorem:simple}, can be used in this case such that $\lambda$ can be taken to only depend on $p$ and $B$ if desired. The statement $\|F_n - F\|_{[p]} = O(n^{(1-p)/p})$ almost surely means exactly the existence of a $\lambda$ such that \eqref{eq:as_bound} holds almost surely.
\end{proof} \section{Application in a recurrent events setting} \label{sec:count} An example of a counting process is a process counting the number of recurrent events a study participant has experienced. Targets such as the expected number of events by a certain time point may be estimated in a straightforward manner by an average over independent replications when the process is completely observed. When the counting of events of interest is sometimes prevented by censoring, estimation may be more complicated. When the censoring time is itself censored, perhaps by a terminal event, then estimation is further complicated. In this section, we will see how the convergence results of Section~\ref{sec:main} may be applied to study the asymptotic properties of estimators in this setting by appealing to differentiability properties of the involved estimating functionals. Here, differentiability means Fréchet differentiability. Appendix~\ref{appendix:differentiability} includes the most important definitions and properties for our purposes in a general Banach space-based setting, primarily based on Chapter~5 of \cite{Dudley2010}. As mentioned in Section~\ref{sec:main}, the function space $\mathcal{W}_p$ of functions of bounded $p$-variation is a Banach space when equipped with the $p$-variation norm, as is the case for the subspace $\mathcal{W}_p^{\mathsf{r}}$ of right-continuous functions of bounded $p$-variation. In particular, owing to the inequalities $\|f g\|_{[p]} \leq \|f\|_{[p]} \|g\|_{[p]}$ and $\|\int_0^{(\cdot)} g(s) f(\mathrm{d} s)\|_{[p]} \leq k_p \|f\|_{[p]} \|g\|_{[p]}$ for a constant $k_p > 0$ for $f,g \in \mathcal{W}_p$ for $p \in [1,2)$, many important functionals are differentiable as functionals between $p$-variation-based spaces. In addition to the two implied bilinear functionals, these differentiable functionals include mapping to the inverse element and product integration. Above, $\int_0^{(\cdot)} g(s) f(\mathrm{d} s)$ for $f,g \in \mathcal{W}_p$ should be considered a Young integral, see for instance \cite{Dudley1992}. The Young integral does correspond to the Lebesgue--Stieltjes integral when $f$ is of bounded variation, $f \in \mathcal{W}_1$, here. More details on these topics can be found in \cite{Dudley2010} and \cite{Dudley1999}. The supplements to the papers \cite{overgaard2017asymptotic, Overgaard2019} include some important details in a more condensed form. Let $N$ be the counting process of interest, which is assumed square integrable. In this section, we let $\mu$ denote the mean function of $N$ such that $\mu(s) = \operatorname{E}(N(s))$, reserving the notation $F$ for a collection of such means. The mean function $\mu$ is the target of estimation in this section. \begin{example} \label{example:uncens} If information is available on $N_1, \dots, N_n$ which are $n$ independent replications of $N$, estimation may be performed by simply taking the average, $\hat \mu_n(t) = n^{-1} \sum_{i=1}^n N_i(t)$. In this case, $\sqrt{n}(\hat \mu_n(t) - \mu(t))$ has an asymptotic normal distribution with variance $\Var(N(t))$ according to the central limit theorem. The convergence result of Theorem~\ref{theorem:generalE} implies that $\| \hat \mu_n - \mu \|_{[p]} = O_{\operatorname{P}}(n^{(1-p)/p})$ for $p \in [1,2)$ and also opens up for use of the functional delta method.
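The rate of Theorem~\ref{theorem:generalE} can also be illustrated numerically. The following minimal sketch is only an illustration and not part of the formal development; it assumes that $N$ is a homogeneous Poisson process with rate $1.2$ stopped at time $5$ and approximates $\|\hat \mu_n - \mu\|_{[p]}$ on a grid, where the grid-restricted $p$-variation used is a lower bound for the exact one.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
t_max, lam, p = 5.0, 1.2, 1.5
grid = np.linspace(0.0, t_max, 501)
mu = lam * grid  # true mean function of the stopped Poisson process

def sample_paths(n):
    # n independent replications of N evaluated on the grid
    paths = np.zeros((n, grid.size))
    for i in range(n):
        k = rng.poisson(lam * t_max)
        jumps = np.sort(rng.uniform(0.0, t_max, size=k))
        paths[i] = np.searchsorted(jumps, grid, side="right")
    return paths

def p_var_lower(f, p):
    # p-variation of f restricted to grid points at local extrema;
    # any such restriction gives a lower bound for v_p(f)
    d = np.diff(f)
    keep = np.concatenate(([True], np.sign(d[1:]) != np.sign(d[:-1]), [True]))
    return np.sum(np.abs(np.diff(f[keep])) ** p)

for n in (100, 400, 1600):
    diff = sample_paths(n).mean(axis=0) - mu
    norm_p = p_var_lower(diff, p) ** (1 / p) + np.abs(diff).max()
    print(n, n ** ((p - 1) / p) * norm_p)
\end{verbatim}
The printed values of $n^{(p-1)/p}\|\hat \mu_n - \mu\|_{[p]}$ should then remain of roughly constant order as $n$ grows, in line with the $O_{\operatorname{P}}(n^{(1-p)/p})$ bound.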
\end{example} \begin{example} \label{example:cens_obs} If observation of events is prevented after a right-censoring time $C$, we do not generally have information on the entire $N$, but only on $\tilde N$ given by \begin{equation} \tilde N(s) = \int_0^s \indic{C \geq u} N(\mathrm{d} u). \end{equation} Here, $\tilde N$ is simply another counting process. Assume that $N$ is independent of $C$ and let $K(s) = \operatorname{P}(C \geq s)$. If $\nu$ denotes the mean function of $\tilde N$, then \begin{equation} \nu(s) = \int_0^s K(u) \mu(\mathrm{d} u), \end{equation} and so, for $s$ such that $K(s) > 0$, \begin{equation} \label{eq:ipcw_motivation} \mu(s) = \int_0^s \frac{1}{K(u)} \nu(\mathrm{d} u). \end{equation} Let $X = (\tilde N, C)$ and suppose information is available on $X_1, \dots, X_n$ which are independent replications of $X$ with $X_i = (\tilde N_i, C_i)$. Then $\nu(s)$ can be estimated as in Example~\ref{example:uncens} by $\hat \nu_n(s) = n^{-1} \sum_{i=1}^n \tilde N_i(s)$, and $K(s)$ can be estimated by $\hat K_n(s) = n^{-1} \sum_{i=1}^n \indic{C_i \geq s}$. Equation~\eqref{eq:ipcw_motivation} now suggests the estimate \begin{equation} \label{eq:ipcw_estimate_emp} \hat \mu_n(s) = \int_0^s \frac{1}{\hat K_n(u)} \hat \nu_n(\mathrm{d} u) \end{equation} of $\mu(s)$. This corresponds to the estimator from~(2.2) of \cite{lawless1995some}. The estimate of \eqref{eq:ipcw_estimate_emp} relies on empirical means of two counting processes, namely $\tilde N$ and $N_C$ given by $N_C(s) = \indic{C \leq s}$. The estimate is in fact obtained from the empirical means of those counting processes by a functional which is differentiable of any order in a $p$-variation setting. Consider a given $t > 0$ such that $K(t) > 0$. Interest will now be in properties of $\hat \mu_n(s)$ from \eqref{eq:ipcw_estimate_emp} for $s \in [0,t]$. This will be studied through a functional approach based on $p$-variation. To be specific, consider the Banach space $\banach{F} = \mathcal{W}_p^{\mathsf{r}}([0, \infty))^2$ for a $p \in [1,2)$, with a general element $f = (f_1, f_2) \in \banach{F}$ and norm given by $\|f\|_{\banach{F}} = \max(\|f_1\|_{[p]}, \|f_2\|_{[p]})$. The functional given by $K(f) = (s \mapsto K(f;s))$ with $K(f;s) = f_2(\infty) - f_2(s-)$ is linear and continuous as a functional from $\banach{F}$ to $\mathcal{W}_p([0,t])$. It is therefore differentiable of any order with a first order derivative at $f \in \banach{F}$ in direction $g \in \banach{F}$ which is $K_f'(g) = K(g)$. As can be seen from Theorem~4.16 of \cite{Dudley2010} since $\mathcal{W}_p([0,t])$ is a unital Banach algebra, if $U \subseteq \mathcal{W}_p([0,t])$ is the open subset of $\mathcal{W}_p([0,t])$ of $f$s for which $1/f \in \mathcal{W}_p([0,t])$, then $f \mapsto 1/f$ is differentiable of any order with a first order derivative at $f \in U$ in direction $g \in \mathcal{W}_p([0,t])$ which is $-g/f^2$. Also, $(f_1,f_2) \mapsto \int_0^{(\cdot)} f_2(s) f_1(\mathrm{d} s)$ is, as a functional from $\mathcal{W}_p([0,t])^2$ to $\mathcal{W}_p([0,t])$, bilinear and continuous and thus differentiable of any order with first order derivative given by $\int_0^{(\cdot)} f_2(s) g_1(\mathrm{d} s) + \int_0^{(\cdot)} g_2(s) f_1(\mathrm{d} s)$. An $f \in U$ is characterized by being bounded uniformly away from 0.
It is now the chain rule, see Appendix~\ref{appendix:differentiability}, which reveals that \begin{equation} \mu \colon f \mapsto \int_0^{(\cdot)} \frac{1}{K(f;s)} f_1(\mathrm{d} s) \end{equation} is differentiable of any order, at least in a neighborhood of an $f$ such that $K(f)$ is bounded uniformly away from 0, as a functional from $\banach{F}$ into $\mathcal{W}_p^{\mathsf{r}}([0,t])$. For $f$ where the functional $\mu$ is differentiable, the derivative at $f$ in direction $g$ is, by the chain rule, see~\eqref{eq:chain_rule} of Appendix~\ref{appendix:differentiability}, \begin{equation} \mu_f'(g) = \int_0^{(\cdot)} \frac{1}{K(f;s)} g_1(\mathrm{d} s) - \int_0^{(\cdot)} \frac{K(g;s)}{K(f;s)^2} f_1(\mathrm{d} s). \end{equation} If we let $G(s) = \operatorname{P}(C \leq s)$ and $F = (\nu, G)$, then the functional $\mu$ is, in particular, differentiable of any order in a neighborhood of $F \in \banach{F}$. With the notation $x = (\tilde n, c)$ for $\tilde n \in \mathcal{W}_p^{\mathsf{r}}([0, \infty))$ and $c > 0$, $N_c(s) = \indic{c \leq s}$, and $\delta_x \in \banach{F}$ given by $\delta_x(s) = (\tilde n(s), N_c(s))$, the empirical mean of the counting processes is $F_n = n^{-1} \sum_{i=1}^n \delta_{X_i}$ and we see that $\hat \mu_n$ from \eqref{eq:ipcw_estimate_emp} is obtained by $\hat \mu_n = \mu(F_n)$. The influence function, defined by $\dot \mu(x) = \mu_F'(\delta_x - F)$, of this estimator can be expressed as \begin{equation} \dot \mu(x) = \int_0^{(\cdot)} \frac{1}{K(s)} \tilde n(\mathrm{d} s) - \int_0^{(\cdot)} \frac{\indic{c \geq s}}{K(s)} \mu(\mathrm{d} s) \end{equation} by using that $K(F;s) = K(s)$ and $\mu(s) = \int_0^s K(u)^{-1} \nu(\mathrm{d} u)$. The differentiability of any order of $\mu$ in a neighborhood of $F$ is enough to establish \begin{equation} \label{eq:taylor1} \mu(F_n) = \mu(F) + \mu_F'(F_n - F) + O(\|F_n - F\|_{\banach{F}}^2) \end{equation} as in \eqref{eq:Taylor1+Lip} of Appendix~\ref{appendix:differentiability}. We have already assumed square integrability of $N$ and thus of $\tilde N \leq N$ while this is trivially the case for the bounded counting process $N_C$. This means that Theorem~\ref{theorem:generalE} ensures $\|F_n - F\|_{\banach{F}} = O_{\operatorname{P}}(n^{(1-p)/p})$. Take $p \in (4/3, 2)$, then we have, in particular, $\|F_n - F\|_{\banach{F}} = o_{\operatorname{P}}(n^{-1/4})$. From \eqref{eq:taylor1} and linearity of $\mu_F'$ we obtain \begin{equation} \sqrt{n}(\hat \mu_n - \mu) = \sqrt{n}\frac{1}{n} \sum_{i=1}^n \dot \mu(X_i) + o_{\operatorname{P}}(1) \end{equation} in $p$-variation. Evaluating at $s \in [0,t]$, this ensures that $\sqrt{n}(\hat \mu_n(s) - \mu(s))$ has the same asymptotic distribution as $n^{-1/2} \sum_{i=1}^n \dot \mu(X_i;s)$ which is a normal distribution with mean $\operatorname{E}(\dot \mu(X;s)) = 0$ and variance $\Var(\dot \mu(X;s))$.
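In practice, $\hat \nu_n$ places mass $1/n$ at each observed event time, so the estimate \eqref{eq:ipcw_estimate_emp} reduces to a finite sum over observed event times. The following sketch is included only as an illustration; it assumes the observed (uncensored) event times of each $\tilde N_i$ are stored as a list per subject and computes $\hat \mu_n(s)$ in this way.
\begin{verbatim}
import numpy as np

def ipcw_mean(event_times, censor_times, s):
    # hat mu_n(s): each observed event at time u <= s contributes
    # 1 / (n * hat K_n(u)), where hat K_n(u) is the fraction of C_i >= u
    C = np.asarray(censor_times, dtype=float)
    n = C.size
    total = 0.0
    for events in event_times:           # events: observed event times of tilde N_i
        for u in np.asarray(events, dtype=float):
            if u <= s:
                total += 1.0 / np.mean(C >= u)
    return total / n

# e.g. ipcw_mean([[0.5, 2.1], [], [1.3]], [3.0, 0.9, 2.5], s=2.0)
\end{verbatim}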
From the alternative expression of the influence function as \begin{equation} \label{eq:C_obs_infl} \dot \mu(X;s) = N(s) - \mu(s) - \int_0^{s-} \frac{(N(s) - \mu(s) - (N(u) - \mu(u)))}{K(u+)} M_C(\mathrm{d} u), \end{equation} where $M_C(s) = N_C(s) - \int_0^s \indic{C \geq u} \Lambda(\mathrm{d} u)$ and $\Lambda$ denotes the cumulative hazard function of the censoring time $C$, the variance can be seen to be \begin{equation} \label{eq:var_expr_C_obs} \Var(\dot \mu(X;s)) = \Var(N(s)) + \int_0^{s-} \Var(N(s) - N(u)) \frac{1}{K(u+)} \Lambda(\mathrm{d} u) \end{equation} under the independence assumption $N \independent C$. This variance can be estimated by $n^{-1} \sum_{i=1}^n \mu_{F_n}'(\delta_{X_i} - F_n;s)^2$ which turns out to be the variance estimate of (2.3) from \cite{lawless1995some}. Some more details on the derivations of~\eqref{eq:C_obs_infl} and~\eqref{eq:var_expr_C_obs} can be found in Appendix~\ref{appendix:infl_var}. In comparison to Example~\ref{example:uncens} with no censoring, the last term of~\eqref{eq:var_expr_C_obs} can be seen as the added variance due to censoring when using the estimator $\hat \mu_n$ from \eqref{eq:ipcw_estimate_emp}. \end{example} \begin{example} \label{example:cens_unobs} Consider the setting of Example~\ref{example:cens_obs} with $N$ censored by $C$, leaving $\tilde N(s) = \int_0^s \indic{C \geq u} N(\mathrm{d} u)$ observed. As mentioned in the beginning of the section, $C$ may itself be censored such that the empirical estimate of $K(s) = \operatorname{P}(C \geq s)$ is not generally available. This is the setting considered in this example. Suppose a terminal event at time $T$ right-censors observation of $C$. Here, $T$ is terminal in the sense that $N(s) = N(T \wedge s)$ for all $s$. We let $\tilde C = C \wedge T$ and $\tilde D = \indic{C < T}$. The function $K$ is the left-continuous version of a survival function for $C$ and takes the form $K(s) = \prodi_0^{s-}(1-\Lambda(\mathrm{d} u))$ for a right-continuous cumulative censoring hazard function $\Lambda$. If we let $G(s) = \operatorname{P}(C \leq s)$, we have $\Lambda(s) = \int_0^s K(u)^{-1} G(\mathrm{d} u)$. We will assume independence of $C$ and $(N,T)$. Then we also have \begin{equation} \label{eq:Lambda_identified} \Lambda(s) = \int_0^s \frac{1}{K^{\mathsf{c}}(u)} G_1^{\mathsf{c}}(\mathrm{d} u), \end{equation} for $s$ such that $K^{\mathsf{c}}(s) > 0$, where $K^{\mathsf{c}}(s) = \operatorname{P}(\tilde C > s) + \operatorname{P}(\tilde C = s, \tilde D = 1)$ and $G_1^{\mathsf{c}}(s) = \operatorname{P}(\tilde C \leq s, \tilde D = 1)$ since, owing to the independence of $C$ and $T$, $K^{\mathsf{c}}(s) = K(s) \operatorname{P}(T > s)$ and $G_1^{\mathsf{c}}(s) = \int_0^s \operatorname{P}(T > u) G(\mathrm{d} u)$. In this example, the basic observation is $X = (\tilde N, \tilde C, \tilde D)$. If information is available on $X_1, \dots, X_n$ which are independent replications of $X = (\tilde N, \tilde C, \tilde D)$ with $X_i =(\tilde N_i, \tilde C_i, \tilde D_i)$, we may estimate $K^{\mathsf{c}}(s)$ by $\hat K_n^{\mathsf{c}}(s) = n^{-1} \sum_{i=1}^n (\indic{\tilde C_i > s} + \indic{\tilde C_i=s, \tilde D_i = 1})$ and $G_1^{\mathsf{c}}(s)$ by $\hat G_{n,1}^{\mathsf{c}}(s) = n^{-1} \sum_{i=1}^n \indic{\tilde C_i \leq s, \tilde D_i = 1}$.
Equation~\eqref{eq:Lambda_identified} suggests the estimate \begin{equation} \hat \Lambda_n(s) = \int_0^s \frac{1}{\hat K_n^{\mathsf{c}}(u)} \hat G_{n,1}^{\mathsf{c}}(\mathrm{d} u) \end{equation} of $\Lambda(s)$. This estimate then leads to the estimate of $K$ given by $\hat K_n(s) = \prodi_0^{s-}(1-\hat \Lambda_n(\mathrm{d} u))$. This is basically the Kaplan--Meier estimator, but for the censoring distribution and in a left-continuous version. Finally, if $\hat \nu_n(s) = n^{-1} \sum_{i=1}^n \tilde N_i(s)$ as before, an estimate of $\mu(s)$ is obtained by \begin{equation} \label{eq:ipcw_estimate} \hat \mu_n(s) = \int_0^s \frac{1}{\hat K_n(u)} \hat \nu_n(\mathrm{d} u). \end{equation} This estimate corresponds to the estimate of~\eqref{eq:ipcw_estimate_emp} from Example~\ref{example:cens_obs} if no censoring of the censoring times occurs before time $s$, and is in this sense a generalization. The estimate of~\eqref{eq:ipcw_estimate} also corresponds to the estimate studied by \cite{ghosh2000}, which is also considered in \cite{Andersen2019}. The estimate in \eqref{eq:ipcw_estimate} relies on empirical means of three counting processes, namely $\tilde N$, $N_{X,0}$, and $N_{X,1}$, where $N_{X,j}(s) = \indic{\tilde C \leq s, \tilde D = j}$ for $j=0, 1$, through a functional which is differentiable of any order in a $p$-variation setting. This allows us to take a similar approach as in Example~\ref{example:cens_obs} when studying the asymptotic properties of the estimator. Specifically, we let $\banach{F} = \mathcal{W}_p^{\mathsf{r}}([0, \infty))^3$, for a $p \in [1,2)$, with a general element of the form $f= (f_1, f_2, f_3) \in \banach{F}$ and a norm given by $\|f\|_{\banach{F}} = \max(\|f_1\|_{[p]}, \|f_2\|_{[p]}, \|f_3\|_{[p]})$. In particular, $F := (\nu, G_1^{\mathsf{c}}, G_0^{\mathsf{c}}) \in \banach{F}$, where $G_0^{\mathsf{c}}(s) = \operatorname{P}(\tilde C \leq s, \tilde D = 0)$. With $x = (\tilde n, \tilde c, \tilde d)$ for $\tilde n \in \mathcal{W}_p^{\mathsf{r}}$, $\tilde c > 0$, and $\tilde d \in \{0,1\}$, define $\delta_x$ by $\delta_x(s) = (\tilde n(s), N_{x,1}(s), N_{x,0}(s))$, where $N_{x,1}(s) = \indic{\tilde c \leq s, \tilde d = 1}$ and $N_{x,0}(s) = \indic{\tilde c \leq s, \tilde d = 0}$. Based on $n$ independent replications of $X = (\tilde N, \tilde C, \tilde D)$, the empirical version of $F$ is $F_n = n^{-1} \sum_{i=1}^n \delta_{X_i}$. In the following, the estimate $\hat \mu_n(s)$ from \eqref{eq:ipcw_estimate} is studied as a functional of $F_n$. This is done for $s \in [0,t]$ for a given $t > 0$ that satisfies $K^{\mathsf{c}}(t) > 0$. Define a $K^{\mathsf{c}}$ functional by $K^{\mathsf{c}}(f;s) = f_2(\infty) + f_3(\infty) - f_2(s-) - f_3(s)$. This functional is continuous and linear and so differentiable of any order as a functional from $\banach{F}$ to $\mathcal{W}_p([0,t])$ with first order derivative given by ${K^{\mathsf{c}}}_f'(g;s) = K^{\mathsf{c}}(g;s)$. Since $K^{\mathsf{c}}(F;s) = K^{\mathsf{c}}(s)$ and $K^{\mathsf{c}}(t) > 0$, a $\Lambda$ functional can, at least in a neighborhood of $F$, be defined by \begin{equation} \Lambda(f;s) = \int_0^s \frac{1}{K^{\mathsf{c}}(f;u)} f_2(\mathrm{d} u), \end{equation} such that $f \mapsto \Lambda(f)$ is mapping into $\mathcal{W}_p^{\mathsf{r}}([0,t])$.
We see that $\Lambda(F;s) = \int_0^s K^{\mathsf{c}}(u)^{-1} G_1^{\mathsf{c}}(\mathrm{d} u) = \Lambda(s)$ as well as $\Lambda(F_n;s) = \hat \Lambda_n(s)$. By the arguments in Example~\ref{example:cens_obs}, the $\Lambda$ functional is differentiable of any order in a neighborhood of $F$ with first order derivative \begin{equation} \Lambda_f'(g;s) = \int_0^s \frac{1}{K^{\mathsf{c}}(f;u)} g_2(\mathrm{d} u) - \int_0^s \frac{K^{\mathsf{c}}(g;u)}{K^{\mathsf{c}}(f;u)^2} f_2(\mathrm{d} u). \end{equation} Note how $\Lambda_F'(\delta_x - F;s) = \Lambda_F'(\delta_x;s) = \int_0^s K^{\mathsf{c}}(u)^{-1} M_{x,1}(\mathrm{d} u)$ where $M_{x,1}(s) = N_{x,1}(s) - \int_0^s \indic{\tilde c \geq u} \Lambda(\mathrm{d} u)$. Next, a $K$ functional can be defined by $K(f;s) = \prodi_0^{s-}(1- \Lambda(f; \mathrm{d} u))$ as a functional from $\banach{F}$ to $\mathcal{W}_p([0,t])$. This will then satisfy $K(F;s) = K(s)$ and $K(F_n;s) = \hat K_n(s)$. The product integral $f \mapsto \prodi_0^{(\cdot)}(1+f(\mathrm{d} u))$, as a functional from $\mathcal{W}_p^{\mathsf{r}}([0,t])$ to $\mathcal{W}_p^{\mathsf{r}}([0,t])$ for a $p \in [1,2)$, is differentiable of any order with a first order derivative at $f$ in direction $g$ which is $\int_0^{(\cdot)} \prodi_0^{s-}(1+f(\mathrm{d} u)) g(\mathrm{d} s) \prodi_s^{(\cdot)}(1+f(\mathrm{d} u))$, which when $\Delta f(s):=f(s)-f(s-) \neq -1$ for all $s \in [0,t]$ can also be given as $\prodi_0^{(\cdot)}(1+f(\mathrm{d} u)) \int_0^{(\cdot)} (1+\Delta f(s))^{-1} g(\mathrm{d} s)$. Since $\Delta \Lambda(s) < 1$ for all $s \in [0,t]$, the chain rule now reveals that the $K$ functional is differentiable of any order in a neighborhood of $F \in \banach{F}$ with first order derivative given by \begin{equation} K_f'(g;s) = -K(f;s) \int_0^{s-} \frac{1}{1-\Delta \Lambda(f;u)} \Lambda_f'(g;\mathrm{d} u). \end{equation} Using the expression of $\Lambda_F'(\delta_x - F;s)$ given above, the expression \begin{equation} \label{eq:K_infl} K_F'(\delta_x - F;s) = -K(s) \int_0^{s-} \frac{1}{1-\Delta \Lambda(u)} \frac{1}{K^{\mathsf{c}}(u)} M_{x,1}(\mathrm{d} u) \end{equation} can be obtained. Lastly, the $\mu$ functional is defined by \begin{equation} \mu(f;s) = \int_0^s \frac{1}{K(f;u)} f_1(\mathrm{d} u) \end{equation} as a functional from $\banach{F}$ to $\mathcal{W}_p^{\mathsf{r}}([0,t])$. The functional satisfies $\mu(F;s) = \mu(s)$ and $\mu(F_n;s) = \hat \mu_n(s)$ for $\hat \mu_n$ from~\eqref{eq:ipcw_estimate}. As in Example~\ref{example:cens_obs}, the $\mu$ functional is differentiable of any order in a neighborhood of $F \in \banach{F}$. The first order derivative is given by \begin{equation} \label{eq:cens_unobs_mu_deriv} \mu_f'(g;s) = \int_0^s \frac{1}{K(f; u)} g_1(\mathrm{d} u) - \int_0^s \frac{K_f'(g;u)}{K(f;u)^2} f_1(\mathrm{d} u).
\end{equation} Using the expression of $K_F'(\delta_x - F;s)$ from \eqref{eq:K_infl} the influence function $\dot \mu$ can be expressed as \begin{equation} \begin{aligned} \dot \mu(x;s) &= \int_0^s \frac{1}{K(u)} \tilde n(\mathrm{d} u) - \mu(s)\\ &\phantom{{}=} + \int_0^s \int_0^{u-} \frac{1}{1-\Delta \Lambda(v)} \frac{1}{K^{\mathsf{c}}(v)} M_{x,1}(\mathrm{d} v) \mu(\mathrm{d} u) \end{aligned} \end{equation} for $x = (\tilde n, \tilde c, \tilde d)$. As was the case in Example~\ref{example:cens_obs}, using $p \in (4/3,2)$ allows for the conclusion that, for $s \in [0,t]$, $\sqrt{n}(\hat \mu_n(s) - \mu(s))$ has an asymptotic normal distribution with mean $\operatorname{E}(\dot \mu(X;s)) = 0$ and a variance of $\Var(\dot \mu(X;s))$. In terms of the potentially unobserved $N$, $T$ and $C$, the influence function at $X$ can also be expressed as \begin{equation} \label{eq:C_unobs_infl} \begin{aligned} \dot \mu(X;s) &= N(s) - \mu(s) \\ & \hspace{-12pt}- \int_0^{s-} \big(N(s) - N(u) - \frac{\indic{T > u}}{\operatorname{P}(T > u)} (\mu(s) - \mu(u))\big)\frac{1}{K(u+)} M_C(\mathrm{d} u). \end{aligned} \end{equation} This expression leads to the variance expression \begin{equation} \label{eq:var_expr_C_unobs} \begin{aligned} \Var(\dot \mu(X;s)) &= \Var(N(s)) + \int_0^{s-} \Var(N(s) - N(u)) \frac{1}{K(u+)} \Lambda(\mathrm{d} u) \\ &\phantom{{}=} - \int_0^{s-} (\mu(s) - \mu(u))^2 \frac{\operatorname{P}(T \leq u)}{\operatorname{P}(T > u)} \frac{1}{K(u+)} \Lambda(\mathrm{d} u) \end{aligned} \end{equation} under the independence assumption $(N,T) \independent C$. This variance can be estimated by $n^{-1} \sum_{i=1}^n \mu_{F_n}'(\delta_{X_i} - F_n;s)^2$ where the expression of $\mu_{F_n}'(\delta_x - F_n; s)$ can be obtained by insertion in \eqref{eq:cens_unobs_mu_deriv}. This variance estimate will be very similar to the one suggested by \cite{ghosh2000} and seemingly identical in the absence of ties. Some more details on the derivations of~\eqref{eq:C_unobs_infl} and~\eqref{eq:var_expr_C_unobs} can be found in Appendix~\ref{appendix:infl_var}. In comparison to Example~\ref{example:cens_obs} where the actual censoring times are available, the last term of~\eqref{eq:var_expr_C_unobs} reveals that this asymptotic variance is smaller than for the estimator of Example~\ref{example:cens_obs}. This means that, in a setting with a terminal event and from an asymptotic point of view, even when information is available on the potential censoring times $C_1, \dots, C_n$, the analyst is better off disregarding this complete information and relying only on the censored censoring times. \end{example} \begin{example} \label{example:pseudo} The pseudo-observation method is a method for regression analysis of an outcome such as $N(t)$ when the outcomes are incompletely observed such as in examples~\ref{example:cens_obs} and~\ref{example:cens_unobs}.
Given $n$ independent replications $X_1, \dots, X_n$ of $X$, the method works by replacing all the potentially unobserved outcomes $N_1(t), \dots, N_n(t)$ with jack-knife pseudo-values, $\hat \mu_{n,1}(t), \dots, \hat \mu_{n,n}(t)$ with $\hat \mu_{n,i}(t) = n \hat \mu_n(t) - (n-1) \hat \mu_n^{(i)}(t)$, and proceeding by performing whatever regression analysis was intended for $N_1(t), \dots, N_n(t)$. Here, $\hat \mu_n(t)$ is an estimator of the expectation $\mu(t) = \operatorname{E}(N(t))$ based on the sample $X_1, \dots, X_n$ and $\hat \mu_n^{(i)}(t)$ is the same estimator applied to the sample where the $i$th observation has been left out. Suppose $Z$ denotes covariates and the regression analysis concerns a model of $\operatorname{E}(N(t) \mathbin{|} Z)$. According to \cite{overgaard2017asymptotic}, this pseudo-observation approach will work, under some regularity conditions, in a setting where the estimator can be seen as a functional applied to a sample average, $\hat \mu_n(t) = \mu(F_n;t)$ for a functional $\mu(\cdot; t) \colon \banach{F} \to \mathbb{R}$ defined on a Banach space $ \banach{F}$ where $F_n = n^{-1} \sum_{i=1}^n \delta_{X_i}$ for some function $x \mapsto \delta_x$ applied to the observed $X_1, \dots, X_n$, if \begin{enumerate}[label=(\alph*)] \item \label{it:conv} an $F \in \banach{F}$ and an $\varepsilon \in (0, 1/4]$ exist such that $\|F_n - F\|_{\banach{F}} = o_{\operatorname{P}}(n^{-1/4 - \varepsilon/2})$ and $\lim_{y \to \infty} y^{1/\varepsilon}\operatorname{P}(\|\delta_X\| > y) = 0$, \item \label{it:diff} the functional $f \mapsto \mu(f;t)$ is continuously differentiable of order 2 with a Lipschitz continuous second order derivative in a neighborhood of $F$, \item \label{it:cond} the influence function $\dot \mu(x;t) = \mu_F'(\delta_x - F;t)$ satisfies \begin{equation} \operatorname{E}(\dot \mu(X;t) \mathbin{|} Z) = \operatorname{E}(N(t) \mathbin{|} Z) - \mu(F;t). \end{equation} \end{enumerate} Conditions~\ref{it:conv} and~\ref{it:diff} agree well with the estimators of examples~\ref{example:cens_obs} and~\ref{example:cens_unobs} above. The condition that $\lim_{y \to \infty} y^{1/\varepsilon}\operatorname{P}(\|\delta_X\| > y) = 0$ is fulfilled in either example if $N(t)$ has finite moment of order a little higher than $1/\varepsilon$, that is, at least a little more than fourth order with the choice $\varepsilon = 1/4$. The convergence order $\|F_n - F\|_{\banach{F}} = o_{\operatorname{P}}(n^{-3/8})$ with this choice of $\varepsilon$ is achieved for the $p$-variation-based norm for $p \in (8/5,2)$. The range is $p \in (4/(3-2 \varepsilon), 2)$ more generally for the relevant $\varepsilon \in (0,1/4]$. The functionals involved in the estimators of the examples are differentiable of any order for such choices of $p$, and so condition~\ref{it:diff} is met in this setting since the Lipschitz continuity of the second order derivative in a neighborhood of $F$ follows from third order differentiability in a neighborhood of $F$. This leaves us with condition~\ref{it:cond}.
If we assume $(N,Z) \independent C$ or $(N,T,Z) \independent C$ in the examples, respectively, this can easily be seen to be fulfilled by appealing to the expressions of~\eqref{eq:C_obs_infl} and~\eqref{eq:C_unobs_infl}, respectively, since $\operatorname{E}(M_C(s) \mathbin{|} N, Z) = 0$ or similarly $\operatorname{E}(M_C(s) \mathbin{|} N, T, Z) = 0$ for $s \in [0,t]$ in these settings. The pseudo-observation method with pseudo-observations based on the estimator of Example~\ref{example:cens_unobs} has been suggested and applied by Andersen, Angst, and Ravn in \cite{Andersen2019}, where their equation~(4) corresponds to~\eqref{eq:ipcw_estimate} of this paper. With conditions \ref{it:conv}, \ref{it:diff}, and \ref{it:cond} fulfilled, the results of \cite{overgaard2017asymptotic} now bring a theoretical justification to this approach. \end{example} \section{Concluding remarks} In many cases counting processes may go to infinity as time passes. Such a setting does not fit well with an assumption of finite $p$-variation or finite moment conditions and the approach described in this paper is not directly applicable. It may be useful to consider stopped or localized versions of such counting processes, and such stopped or localized counting processes may perhaps be studied using the $p$-variation approach described here. Concretely, the settings of Examples~\ref{example:uncens}--\ref{example:pseudo} reduced interest to the interval $[0,t]$ or simply $t$ for some time point $t > 0$ and the stopped processes $N(\cdot \wedge t)$ can replace $N$ without issues in such cases. The convergence results in $p$-variation of Section~\ref{sec:main} are likely not the best possible and further studies of this subject are called for. It is, for instance, not clear whether the moment condition of Theorem~\ref{theorem:generalE} or the boundedness condition of Theorem~\ref{theorem:almost_sure} are necessary or if such convergence results apply more generally to averages of independent replications of random elements in $\mathcal{W}_p$ spaces. In Banach spaces, measurability is not a straightforward matter. Measurability has not been touched upon in any detail here. When a counting process $N$ is considered a random element, it is in the sense of measurable coordinate projections, that is, $N(s)$ is a random variable for all relevant $s$. In the examples of Section~\ref{sec:count}, the various $\mu$ functionals have not been formalized as measurable maps from $\banach{F}$ to $\mathcal{W}_p^{\mathsf{r}}$. It is however clear by inspection that at $s \in [0,t]$, the various $\hat \mu_n(s)$ and $\dot \mu(X;s)$ are random variables. Owing to right-continuity, the $\|F_n - F\|_{[p]}$ of Section~\ref{sec:main}, and so similarly the various $\|F_n - F\|_{\banach{F}}$ of Section~\ref{sec:count}, are random variables since the involved suprema can be taken over the rationals. Various convergence results exist in $p$-variation for $p \geq 2$, see for instance Theorems~3.1 and~4.1 of \cite{qian1998} and Theorem~1 of \cite{huang2001}. Since somewhat fewer functionals are differentiable in a $p$-variation setting for such $p$s, this has not been considered here.
\appendix \section{Fréchet differentiability} \label{appendix:differentiability} A functional $\phi$ defined on an open subset $U$ of a Banach space $\banach{D}$ and with values in a Banach space $\banach{E}$ is said to be differentiable at $f \in U$ if a linear continuous operator $\phi_f' \in L(\banach{D}, \banach{E})$ exists such that \begin{equation} \|\phi(f + g) - \phi(f) - \phi_f'(g)\|_{\banach{E}} = o(\|g\|_{\banach{D}}) \end{equation} as $\|g\|_{\banach{D}} \to 0$. In that case, $\phi_f' \in L(\banach{D}, \banach{E})$ is the first order derivative of $\phi$ at $f$ and, for any $g \in \banach{D}$, $\phi_f'(g) \in \banach{E}$ is called the first order derivative of $\phi$ at $f$ in direction $g$. The space of linear, continuous operators $L(\banach{D}, \banach{E})$ is itself a Banach space when equipped with the operator norm given by \begin{equation} \|\lambda\|_{L(\banach{D}, \banach{E})} = \inf \{c \geq 0 : \|\lambda(f)\|_{\banach{E}} \leq c \|f \|_{\banach{D}} \textup{ for all } f \in \banach{D}\} \end{equation} and the first order derivative $\phi' \colon U \to L(\banach{D}, \banach{E})$, given by $f \mapsto \phi_f'$, is simply another functional. Higher order differentiability of the functional $\phi$ can then iteratively be defined in terms of differentiability properties of the functional $\phi'$. If $\phi$ is differentiable of order $k$, the $k$th order derivative can be identified with a functional $\phi^{(k)} \colon U \to L^k(\banach{D}, \banach{E})$ where $L^k(\banach{D}, \banach{E})$ is the space of $k$-linear, continuous operators. For $\lambda \in L^k(\banach{D}, \banach{E})$ a $c > 0$ exists such that \begin{equation} \|\lambda(f_1, \dots, f_k)\|_{\banach{E}} \leq c \|f_1\|_{\banach{D}} \cdots \|f_k\|_{\banach{D}} \end{equation} and, in similarity to the $k=1$ case, the norm of $\lambda$ in the Banach space $L^k(\banach{D}, \banach{E})$ is given by the infimum over such constants $c$. The $k$th order derivative is not only continuous and $k$-linear, but also symmetric in its arguments. If $\phi \colon U \to \banach{E}$, in the setting from before, is continuously differentiable of order $k$, a $k$th order Taylor approximation in line with \begin{equation} \phi(f+g) = \phi(f) + \sum_{j=1}^k \frac{1}{j!} \phi_f^{(j)}(g, \dots, g) + o(\|g\|_{\banach{D}}^k) \end{equation} applies as $\|g\|_{\banach{D}} \to 0$. In particular if $\phi$ is continuously differentiable of order 2 in a neighborhood of $f \in \banach{D}$, or weaker still if $\phi'$ is Lipschitz continuous in a neighborhood of $f \in \banach{D}$, then \begin{equation} \label{eq:Taylor1+Lip} \phi(f+g) = \phi(f) + \phi_f'(g) + O(\|g\|_{\banach{D}}^2) \end{equation} as $\|g\|_{\banach{D}} \to 0$. If $\banach{D}$, $\banach{E}$, and $\banach{F}$ are Banach spaces and $\phi$ is a functional defined and differentiable on a neighborhood of $f \in \banach{D}$ as a functional into $\banach{E}$, whereas $\psi$ is a functional defined and differentiable on a neighborhood of $\phi(f) \in \banach{E}$ as a functional into $\banach{F}$, then $\psi \circ \phi$ is differentiable in a neighborhood of $f \in \banach{D}$ as a functional into $\banach{F}$. The derivative is given by \begin{equation} \label{eq:chain_rule} (\psi \circ \phi)_f'(g) = \psi_{\phi(f)}'(\phi_f'(g)), \end{equation} the derivative of $\psi$ at $\phi(f)$ in direction $\phi_f'(g)$. This is the chain rule.
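As a brief added illustration of the chain rule~\eqref{eq:chain_rule}, using only objects already introduced in Example~\ref{example:cens_obs}: there, the functional $K$ is linear, so $K_f' = K$, and the reciprocal map $h \mapsto 1/h$ on the unital Banach algebra $\mathcal{W}_p([0,t])$ has derivative $g \mapsto -g/h^2$ at $h$. The chain rule therefore gives \begin{equation*} \big(f \mapsto 1/K(f)\big)_f'(g) = -\frac{K(g)}{K(f)^2}, \end{equation*} which is exactly the integrand of the second term in the derivative $\mu_f'$ of Example~\ref{example:cens_obs}, once it is integrated against $f_1$ by the bilinearity of the integration functional.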
\section{Influence functions and the variance expressions} \label{appendix:infl_var} In obtaining the desired expressions of the influence functions and variance expressions in examples~\ref{example:cens_obs} and~\ref{example:cens_unobs}, an important identity is \begin{equation} \frac{\indic{C \geq s}}{K(s)} - 1 = -\int_0^{s-} \frac{1}{K(u+)} M_C(\mathrm{d} u), \end{equation} which can be seen as a consequence of the Duhamel equation, see for instance \cite{Gill1990}, here in the form \begin{equation} \indic{C \geq s} - K(s) = \int_0^{s-} \indic{C \geq u}(\Lambda - N_C)(\mathrm{d} u) \frac{K(s)}{K(u+)}. \end{equation} Since $\tilde N(s) = \int_0^s \indic{C \geq u} N(\mathrm{d} u)$, we have in Example~\ref{example:cens_obs} \begin{equation} \begin{aligned} &\int_0^s \frac{1}{K(u)} \tilde N(\mathrm{d} u) - \int_0^s \frac{\indic{C \geq u}}{K(u)} \mu(\mathrm{d} u) \\ =& \int_0^s \frac{\indic{C \geq u}}{K(u)}(N - \mu)(\mathrm{d} u) \\ =& N(s) - \mu(s) + \int_0^s \big(\frac{\indic{C \geq u}}{K(u)} - 1\big)(N - \mu)(\mathrm{d} u) \\ =& N(s) - \mu(s) - \int_0^s \int_0^{u-} \frac{1}{K(v+)} M_C(\mathrm{d} v)(N - \mu)(\mathrm{d} u) \\ =& N(s) - \mu(s) - \int_0^{s-} \frac{N(s) - \mu(s) - N(v) + \mu(v)}{K(v+)} M_C(\mathrm{d} v), \end{aligned} \end{equation} which shows the alternative expression of the influence function in Example~\ref{example:cens_obs}. For Example~\ref{example:cens_unobs}, it can be noted that we have $M_{X,1}(s) = \int_0^s \indic{T > u} M_C(\mathrm{d} u)$ and also $(1- \Delta \Lambda(s))K^{\mathsf{c}}(s) = \operatorname{P}(T > s) K(s+)$. It is now a similar argument as above which yields \begin{equation} \begin{aligned} &\int_0^s \frac{1}{K(u)} \tilde N(\mathrm{d} u) - \mu(s) + \int_0^s \int_0^{u-} \frac{1}{1-\Delta \Lambda(v)} \frac{1}{K^{\mathsf{c}}(v)} M_{X,1}(\mathrm{d} v) \mu(\mathrm{d} u) \\ =& N(s) - \mu(s) + \int_0^s \big(\frac{\indic{C \geq u}}{K(u)} - 1\big) N(\mathrm{d} u) \\ &+ \int_0^s \int_0^{u-} \frac{\indic{T > v}}{\operatorname{P}(T > v)} \frac{1}{K(v+)} M_{C}(\mathrm{d} v) \mu(\mathrm{d} u) \\ =& N(s) - \mu(s) \\ &- \int_0^{s-} \big(N(s) - N(v) - \frac{\indic{T > v}}{\operatorname{P}(T > v)}(\mu(s) - \mu(v))\big) \frac{1}{K(v+)} M_{C}(\mathrm{d} v). \end{aligned} \end{equation} The process given by $M_C(s) = N_C(s) - \int_0^s \indic{C \geq u} \Lambda(\mathrm{d} u)$ is a martingale with respect to the natural filtration of $N_C$. If we assume $N \independent C$ or $(N,T) \independent C$ this is the case even in the conditional distribution given $N$ or given $(N,T)$. So, for certain stochastic processes $A(s)$ and $B(s)$ that are measurable with respect to $\sigAlg{A} = \sigma(N)$ or $\sigAlg{A} = \sigma(N, T)$, we have $\Var(A(s) + \int_0^s B(u) K(u+)^{-1}M_{C}(\mathrm{d} u) \mathbin{|} \sigAlg{A}) = \int_0^s B(u)^2 K(u+)^{-1} \Lambda(\mathrm{d} u)$ by martingale properties since, for instance, the optional variation process of $M_C$ is given by $[M_C](s) = \int_0^s (1-\Delta \Lambda(u)) N_C(\mathrm{d} u) - \int_0^s \Delta \Lambda(u) M_C(\mathrm{d} u)$ with conditional expectation $\operatorname{E}([M_C](s) \mathbin{|} \sigAlg{A} ) = \int_0^s (1- \Delta \Lambda(u)) G(\mathrm{d} u) = \int_0^s K(u+) \Lambda(\mathrm{d} u)$.
The law of total variation then reveals $\Var(A(s) + \int_0^s B(u) K(u+)^{-1} M_C(\mathrm{d} u)) = \Var(A(s)) + \int_0^s \operatorname{E}(B(u)^2) K(u+)^{-1} \Lambda(\mathrm{d} u)$. Both~\eqref{eq:var_expr_C_obs} and~\eqref{eq:var_expr_C_unobs} follow this structure, although establishing~\eqref{eq:var_expr_C_unobs} requires an additional direct calculation as follows. In this case, $N(s) - N(u) - \indic{T > u} \operatorname{P}(T > u)^{-1}(\mu(s) - \mu(u))$ is playing the role of $B(u)$. It is the fact that $T$ is terminal for $N$ which implies $(N(s) - N(u)) \indic{T > u} = N(s) - N(u)$ and so \begin{equation} \begin{aligned} &\phantom{{}=}\operatorname{E}\big(\big(N(s) - N(u) - \frac{\indic{T > u}}{\operatorname{P}(T > u)}(\mu(s) - \mu(u))\big)^2\big) \\ &=\operatorname{E}((N(s)-N(u))^2) - 2 \frac{\operatorname{E}(N(s) - N(u))}{\operatorname{P}(T > u)} (\mu(s) - \mu(u)) \\ &\phantom{{}=}+ \frac{\operatorname{E}(\indic{T > u})}{\operatorname{P}(T > u)^2}(\mu(s) - \mu(u))^2 \\ &= \operatorname{E}((N(s)-N(u))^2) - \frac{1}{\operatorname{P}(T > u)}(\mu(s) - \mu(u))^2 \\ &= \Var(N(s) - N(u)) - \frac{\operatorname{P}(T \leq u)}{\operatorname{P}(T > u)}(\mu(s) - \mu(u))^2 \end{aligned} \end{equation} as desired. \end{document}
\begin{document} \begin{frontmatter} \title{Fully Bayesian Estimation under Dependent and Informative Cluster Sampling} \runtitle{Bayes Estimation/Informative Cluster Sampling} \begin{aug} \author{\fnms{Luis G.} \snm{Le\'on-Novelo}\thanksref{addr1}\ead[label=e1] {[email protected]}} \and \author{\fnms{Terrance D.} \snm{Savitsky}\thanksref{addr2} \ead[label=e2]{[email protected]} } \runauthor{Le\'on-Novelo \& Savitsky} \address[addr1]{ Assistant Professor. University of Texas Health Science Center at Houston-School of Public Health, 1200 Pressler St. Suite E805, Houston, TX, 77030, USA \printead{e1} } \address[addr2]{ Senior Research Mathematical Statistician. Office of Survey Methods Research, U.S. Bureau of Labor Statistics, Washington, DC, 20212-0001, USA \printead{e2} } \end{aug} \begin{abstract} Survey data are often collected under multistage sampling designs where units are binned to clusters that are sampled in a first stage. The unit-indexed population variables of interest are typically dependent within cluster. We propose a Fully Bayesian method that constructs an exact likelihood for the observed sample to incorporate unit-level marginal sampling weights for performing unbiased inference for population parameters while simultaneously accounting for the dependence induced by sampling clusters of units to produce correct uncertainty quantification. Our approach parameterizes cluster-indexed random effects in both a marginal model for the response and a conditional model for published, unit-level sampling weights. We compare our method to plug-in Bayesian and frequentist alternatives in a simulation study and demonstrate that our method most closely achieves correct uncertainty quantification for model parameters, including the generating variances for cluster-indexed random effects. We demonstrate our method in an application with NHANES data. \\ \\ \textbf{Statement of Significance}\\ We propose a fully Bayesian framework for parameter estimation of a population model from survey data obtained via a multi-stage sampling design. Inference incorporates sampling weights. Our framework delivers estimates that achieve asymptotically correct uncertainty quantification unlike popular Bayesian and frequentist alternatives. In particular, our method provides asymptotically unbiased point and variance estimates under the sampling of clusters of units. This type of sampling design is common in national and large surveys. \begin{keyword} \kwd{Inclusion probabilities} \kwd{mixed effects linear model} \kwd{NHANES} \kwd{primary stage sampling unit} \kwd{sampling weights} \kwd{survey sampling}. \end{keyword} \end{abstract} \end{frontmatter} \section{Introduction} Inference with data from a complex sampling scheme, such as that collected by the National Health and Nutrition Examination Survey (NHANES), requires consideration of the sampling design. A common multistage sampling scheme in public survey datasets is formulated as: \begin{enumerate} \item Divide survey population into $H$ strata. \item Each stratum is assigned $N_h$ clusters of individuals called \textit{primary stage sampling units} (PSUs) from which $J_h$ PSUs are selected. PSU $hj$ is selected with probability $\pi_{1hj}$. By design, at least one PSU is selected in each stratum, \ie\ $J_h\geq 1, \forall h$. \item Within each selected PSU, $n_{hj}$ individuals are sampled out of the total $N_{hj}$ population units nested in the PSU. Each individual or last stage unit is sampled with probability $\pi_{i\mid hj}$, $i=1,\dots,N_{hj}$. 
\end{enumerate} The indices $i,j,h$ index individual, PSU and stratum, respectively. The marginal probability of including an individual in the sample is then $\pi_{ihj}^\prime=\pi_{i\mid hj}\pi_{1hj}$. In addition to sampling clusters of dependent individuals, both clusters and individuals-within-clusters are typically selected with unequal sampling inclusion probabilities in order to improve estimation power for a population subgroup or to reduce variance of a global estimator. The sample inclusion probabilities are constructed to be correlated with or ``informative'' about the response variable of interest to reduce variance of the estimator. On the one hand, stratification reduces the standard error (SE) of estimates while, on the other hand, clustering tends to increase the standard error since clustering induces dependence and is used for convenience and to reduce cost. Utilizing unequal inclusion probabilities can reduce the variance of the estimator where a subset of units is highly influential for the estimator of interest, such as is the case where larger-sized employers drive the estimation of total employment for the Current Employment Statistics survey administered by the U.S. Bureau of Labor Statistics; more often, the use of unequal inclusion probabilities tends to increase the variance of the estimator due to the variation in the information about the population reflected in observed samples. Ignoring PSU and unequal sampling inclusion probabilities underestimates the SE because of the dependence among individuals within a PSU and the variation of information about the population reflected in informative samples drawn from it. The statistical analyst receives variables of interest for each survey participant along with the stratum and PSU identifiers to which s/he belongs, as well as sampling weights, $w_{ihj}\propto 1/\pi_{ihj}$. The inclusion probability, $\pi_{ihj}$, is proportional to $\pi_{ihj}^\prime$ after adjusting for oversampling of subpopulations and nonresponse. In the context of NHANES, a stratum is defined by the intersection of geography with concentrations of minority populations and a PSU is constructed as a county or a group of geographically contiguous counties. Secondary and tertiary stage sampling units include segments (contiguous census blocks) and households. The final unit is an eligible participant in the selected household. NHANES releases masked stratum and PSU information to protect participants' privacy. Every 2-year NHANES-data cycle \citep{CDCNHANESSurveyDesign} releases information obtained from $H=15$ strata with $J_h=2$ PSUs per stratum. In this paper, we focus, in the sequel, on a two-stage sampling design that excludes strata for both our simulation study and application. This is done without loss of generality since the inclusion of strata would be expected to improve estimation by reducing variability across realized samples. Our two-stage sampling design of focus is characterized by the first stage sampling of PSUs and the subsequent second stage sampling of units. We employ a fully Bayesian estimation approach that co-models the response variable of interest and the marginal inclusion probabilities as introduced in \cite{leon2019fully}, hereafter referred to as LS2019. We extend their approach by constructing PSU-indexed random effects specified in both the marginal model for the response variable and the conditional model for the sampling inclusion probabilities.
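As a purely illustrative, self-contained sketch (the probabilities below are arbitrary and not taken from NHANES documentation), the two-stage design described above can be mimicked without strata as follows: PSUs are drawn in a first stage with unequal probabilities $\pi_{1j}$, units are drawn within sampled PSUs in a second stage with probabilities $\pi_{i\mid j}$, and each sampled unit carries the marginal inclusion probability $\pi^{\prime}_{ij} = \pi_{i\mid j}\pi_{1j}$ and weight $w_{ij} \propto 1/\pi^{\prime}_{ij}$. For simplicity the sketch uses independent (Poisson-type) selection at both stages rather than fixed sample sizes.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
J_pop, N_j = 50, 200                  # PSUs in the population, units per PSU

# First stage: PSUs sampled independently with unequal probabilities pi_1j.
pi_1 = rng.uniform(0.1, 0.5, size=J_pop)
psu_in = rng.uniform(size=J_pop) < pi_1

# Second stage: units within each sampled PSU sampled with probabilities pi_{i|j}.
records = []
for j in np.where(psu_in)[0]:
    pi_cond = rng.uniform(0.05, 0.3, size=N_j)
    unit_in = rng.uniform(size=N_j) < pi_cond
    for i in np.where(unit_in)[0]:
        pi_marginal = pi_cond[i] * pi_1[j]   # pi'_{ij} = pi_{i|j} * pi_{1j}
        records.append((j, i, pi_marginal, 1.0 / pi_marginal))  # last entry: weight

print(len(records), "sampled units; first record (PSU, unit, pi, w):", records[0])
\end{verbatim}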
Since our sampling design does not utilize strata, we do not consider the subindex $h$ that indexes strata in the discussion above. Sampled individual $ij$ denotes individual $i \in \{1,\ldots,n_{j}\}$ in cluster $j \in \{1,\ldots,J_{pop}\}$ included in the sample, where $J_{pop}$ denotes the total number of PSUs in the population. Let $J \leq J_{pop}$ denote the number of PSUs actually sampled. We assume that the sampling weight, $w_{ij}$, is proportional to the inverse marginal inclusion probability, $\pi_{ij}$, of individual $ij$ being included in the sample; or $\pi_{ij}\propto 1/w_{ij}$. We denote the vector of predictors associated to individual $ij$ as $\bx_{ij}$. The data analyst aims to estimate the parameters, $\boldsymbol\theta$, of a \emph{population} model, $p(y\mid\boldsymbol\theta,\bx)$, that they specify from these data. Relabeling PSU indices in the sample so they run from $1,\ldots,J$, the analyst observes a sample of size $n=\sum_{j=1}^J n_j$ and the associated variables, $\{\is{y}_{ij},\sampled{\bx}_{ij},\is{\pi}_{ij}\propto 1/\is{w}_{ij},j\}_{i=1,\dots,n_j,j=1,\dots,J}$, where $n_j$ is the number of participants from PSU $j$ and the superindex $(s)$ denotes membership in the sample. By contrast, $y_{ij}$ without the superindex $(s)$ denotes a response of an individual in the survey population but not, necessarily, a survey participant included in the observed sample. The probability of inclusion of each PSU (denoted as $\pi_{1j}$ in point 2, above) is unknown to the analyst because it is not typically published for the observed sample. The PSU inclusion probabilities are, however, used to construct the published unit-level marginal inclusion probabilities, such that inclusion probabilities within the same PSU tend to be more similar or correlated; even so, the dependence of units in the same PSU may not be fully accounted for by the dependence among their inclusion probabilities. A sampling design is informative for inference about individuals within a group when $y_{ij}\not\perp \pi_{ij}\mid \bx_{ij}$. A sampling design will also be informative for PSUs in the case that $\bar{y}_{\cdot j} - \bar{y} = (1/N_{j})\mathop{\sum}_{i=1}^{N_{j}} y_{ij} - \bar{y} \not\perp \pi_{1j}\mid \bar{\bx}_{j}$ with $\bar{y}$ the population mean response and $\bar{\bx}_{j}=(1/N_j) \sum_{i=1}^{N_j} \bx_{ij} $. Even if a sampling design is not informative for individuals and/or groups, however, there are typically scale effects induced by within-group dependence that must be accounted for to produce correct uncertainty quantification. \begin{comment} LS2019\ propose a model-based Bayesian approach appropriate under informative sampling that incorporates the sampling weights into the model by modelling both the response given the parameter of interest and the inclusion probability given the response, $\pi_{ij}\mid y_{ij}$. The main advantages of this approach is that it yields (1) consistent point estimates [LS2019] (point estimates converge in probability to true values), (2) credible intervals that achieve nominal (frequentist) coverage, and (3) robust inference against mis-specification of $\pi_{ij}\mid y_{ij}$. \end{comment} LS2019\ only focus on fixed effect models and ignore the dependence induced by the sampling design; that is, both association among the responses within the same PSU (that we label dep-$y$) and possible association among inclusion probabilities within the same PSU (that we label dep-$\pi$). This paper extends the approach of LS2019\ to account for these associations via mixed effect models.
More specifically, we include PSU-specific random effects (PSU-REs) in both the model for the responses and in the model for the inclusion probabilities. \cite{makela2018bayesian} propose, as we do, Bayesian inference under a two-stage sampling design. \begin{comment} in particular, they consider the case where clusters/PSUs are selected with probability, \ie $\pi_{1j}$, proportional to a measure of PSU size (that is commonly the number of individuals in the PSU). They require $\pi_{1j}$ to be available and published to the data analyst for the sampled PSUs. They assume that individuals nested in PSUs are drawn under simple random sampling (SRS) in a second stage. \end{comment} Their estimation focus is on the population mean or proportion. By contrast, we focus on estimation of model parameters and assume that the analyst does not know $\pi_{1j}$ (because it is not published), but instead only knows $\pi_{ij}$ (up to a multiplying constant) for the sampled individuals, as is the case for NHANES data. We do not assume that individuals within the sampled PSUs are selected under SRS, but allow for informativeness. We introduce our general approach in Section \ref{sec:inclusionofPSU}, though in the rest of the paper we focus on the linear regression setting for ease-of-illustration. Competing methods are summarized in this section as well. In Section \ref{sec:simulationstudy}, we show via simulation that our approach yields credible intervals with nominal (frequentist) coverage, while the competing methods do not in some simulation scenarios. In Section \ref{sec:applications} we demonstrate our approach by applying it to an NHANES dataset to estimate the daily kilocalorie consumption of persons in different demographic groups in the U.S. population. Inference under our Fully Bayes approach is compared against inference under competing plug-in Bayesian and frequentist methods. We provide a final discussion section and an Appendix containing details not discussed, but referred to in the main paper. \section{Review of LS2019}\label{sec:review} LS2019\ introduces the inclusion probabilities into the Bayesian paradigm by assuming them to be random. In this section we review their approach before we extend it to include PSU information, in the next section. The superpopulation approach in LS2019\ assumes that the finite population of size $N$, $(y_1,\pi_1,\bx_1),\dots (y_N,\pi_N,\bx_N)$ is a realization of \begin{equation}\label{eq:population} (y_i,\pi_i)\mid \bx_i,\boldsymbol\theta,\hbox{$\boldsymbol{\kappa$}} \sim p(y_i,\pi_i\mid \bx_i,\boldsymbol\theta,\hbox{$\boldsymbol{\kappa$}})= p(\pi_i\mid y_i,\bx_i,\hbox{$\boldsymbol{\kappa$}})\ p(y_i\mid \bx_i,\boldsymbol\theta), \quad i=1,\dots,N. \end{equation} Here, $y_i$ is the response for individual $i$ with vector of covariates $\bx_i$ and $\pi_i\in [0,1]$ is a proper survey sampling inclusion probability for individual $i$ being sampled. It is assumed that $(y_i,\pi_i)\perp (y_{i^\prime},\pi_{i^\prime})\mid \bx_i,\bx_{i^\prime},\boldsymbol\theta,\hbox{$\boldsymbol{\kappa$}}$, for $i\neq i^\prime$, and $\bx_i$ is assumed known; that is, the unit responses and inclusion probabilities are conditionally (on the model parameters) independent. Note also that \eqref{eq:population} above presumes that $\pi_i\perp \boldsymbol\theta \mid y_i,\bx_i,\hbox{$\boldsymbol{\kappa$}}$ and $y_i\perp \hbox{$\boldsymbol{\kappa$}} \mid \bx_i,\boldsymbol\theta$; that is, the parameters for the models for the response and weights are \emph{a priori} independent. 
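To fix ideas, the following is a minimal R sketch (ours, not code from LS2019) of generating one finite population, and one informative sample from it, under a joint model of the form \eqref{eq:population}; the Gaussian response model, the lognormal conditional model for $\pi_i\mid y_i$, the parameter values, and the Poisson-type sampling step are all illustrative choices.
\begin{verbatim}
## One synthetic population from p(pi_i | y_i, x_i, kappa) p(y_i | x_i, theta)
set.seed(1)
N     <- 5000                                  # finite population size (arbitrary)
x     <- runif(N)                              # a single covariate
beta  <- c(0, 1); sigma_y <- 0.5               # theta = (beta, sigma_y), arbitrary
y     <- rnorm(N, beta[1] + beta[2] * x, sigma_y)
kappa <- c(-4, 0.5); sigma_pi <- 0.3           # kappa = (kappa_0, kappa_y), arbitrary
pi_i  <- pmin(1, rlnorm(N, kappa[1] + kappa[2] * y, sigma_pi))  # informative in y
s     <- which(runif(N) < pi_i)                # Poisson-type informative sample
length(s)                                      # realized sample size
\end{verbatim}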
The population parameter $\boldsymbol\theta$ determines the relationship between $\bx_i$ and $y_i$, and is of main interest. The parameter $\hbox{$\boldsymbol{\kappa$}}$ is a nuisance parameter that allows modeling the association between $\pi_i$ and $y_i$, though we later see in our simulation study section that it provides insight on the informativeness of the sampling design for a particular response variable of interest. The informative sample of size $n$ is drawn so that $P[\hbox{individual $i$ in sample}]=\pi_i$, a proper sampling inclusion probability. Bayes theorem implies, \begin{align}\label{eq:samplingdist} p(y_{i},\pi_{i}\vert \bx_{i}, \boldsymbol\theta, \hbox{$\boldsymbol{\kappa$}},&\hbox{individual $i$ in sample})\\ =&\frac {\mbox{Pr}(\hbox{individual $i$ in sample} \vert y_{i},\pi_{i},\bx_{i}, \boldsymbol\theta, \hbox{$\boldsymbol{\kappa$}} )\times p(y_i,\pi_i\vert \bx_i, \boldsymbol\theta, \hbox{$\boldsymbol{\kappa$}}) \nonumber} {\hbox{denominator}} \end{align} By the way the informative sample is drawn, and the population model in \eqref{eq:population}, the numerator in \eqref{eq:samplingdist} is \begin{equation}\label{eq:numerator} \pi_i \times p(\pi_i\mid y_i,\bx_i,\hbox{$\boldsymbol{\kappa$}})\ p(y_i\mid \bx_i,\boldsymbol\theta) \end{equation} The denominator is obtained by integrating out $(y_i,\pi_i)$ in the numerator, \begin{equation}\label{eq:denominator} \int \pi_i^\star p(\pi_i^\star \mid y_i^\star,\bx_i,\hbox{$\boldsymbol{\kappa$}})\ p(y_i^\star\mid \bx_i,\boldsymbol\theta)\, d\pi_i^\star dy_i^\star= E_{y_i^\star\mid \bx_i,\boldsymbol\theta}\left[E\left(\pi_i^\star\mid y_i^\star,\bx_i,\hbox{$\boldsymbol{\kappa$}}\right)\right] \end{equation} The superindex $\star$ is used to distinguish the quantities integrated out from the ones in the numerator. Plugging \eqref{eq:numerator} and \eqref{eq:denominator} in \eqref{eq:samplingdist} we obtain Equation (5) in LS2019, and also Equation (7.1) in \cite{pfeffermann1998parametric}, given by, \begin{equation}\label{eq:IScorrection} p_s(y_{i},\pi_{i}\vert \bx_{i}, \boldsymbol\theta, \hbox{$\boldsymbol{\kappa$}})= \left\{\frac {\pi_i\, p(\pi_i\vert y_i,\bx_i,\hbox{$\boldsymbol{\kappa$}}) } {E_{y_i^\star\vert \bx_i,\boldsymbol\theta}\left[E(\pi_i^\star\vert y_i^\star, \bx_i, \hbox{$\boldsymbol{\kappa$}}) \right]}\right\} \times p( y_i\vert \bx_i, \boldsymbol\theta) \end{equation} where the LHS, $p_s(\cdots\mid \cdots)$, denotes the joint distribution of $(y_i,\pi_i)$ conditioned on the individual $i$ being in the sample, \ie, the LHS of \eqref{eq:samplingdist} is equal to $p(\dots\mid \cdots,\hbox{individual $i$ in sample})$. Inference is based on this \emph{exact} likelihood for the observed sample with, \begin{equation*}\label{eq:likelihood} \ell(\boldsymbol\theta,\hbox{$\boldsymbol{\kappa$}};\sampled{y},\sampled{\pi},\sampled{\bx})=\prod_{i=1}^n\left[p_s(\is{y_i},\is{\pi}_i\mid \sampled{x_i},\boldsymbol\theta,\hbox{$\boldsymbol{\kappa$}}) \right] \end{equation*} where the superindex $(s)$ is used to emphasize that these are the values observed in the sample. We also relabel the index $i$ running from $1,\dots,N$ in the population so it runs from $1,\dots,n$ in the sample. A Bayesian inference model is completed by assigning priors to $\boldsymbol\theta$ and $\hbox{$\boldsymbol{\kappa$}}$. 
Note that under noninformative sampling, \ie\ when $y_i\perp \pi_i\mid \bx_i$, the quantity between curly brackets in \eqref{eq:IScorrection} does not depend on $y_i$ and therefore inference on $\boldsymbol\theta$ does not depend on the inclusion probabilities, or the $\pi_i$s. In other words, inference using \eqref{eq:IScorrection} is the same as treating the sample as an SRS from the model $y_i\sim p(y_i\mid \bx_i,\boldsymbol\theta)$. For the informative sampling case, in theory, we can assume any distribution for $y_i\mid \bx_i,\boldsymbol\theta$ and $\pi_i\mid y_i,\bx_i,\hbox{$\boldsymbol{\kappa$}}$. In practice, the calculation of $E_{y_i^\star\vert \bx_i,\boldsymbol\theta}[\cdots]$ in \eqref{eq:IScorrection} is a computational bottleneck. Theorem 1 in LS2019, stated below, provides conditions to obtain a closed form for this expected value. Let $\bxp_i$ and $\bxy_i$ be subvectors of $\bx_i$, the covariates used to specify the conditional distribution of $\pi_i\mid y_i,\bx_i,\hbox{$\boldsymbol{\kappa$}}$ and $y_i\mid \bx_i,\boldsymbol\theta$, respectively; that is, $\pi_i\mid y_i,\bx_i,\hbox{$\boldsymbol{\kappa$}} \sim \pi_i \mid y_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}}$ and $y_i\mid \bx_i,\boldsymbol\theta\sim y_i\mid \bxy_i,\boldsymbol\theta$. Note that we allow for $\bxp_i$ and $\bxy_i$ to have common covariates. Let $\hbox{normal}(x\mid\mu,s^2)$ denote the normal distribution pdf with mean $\mu$ and variance $s^2$ evaluated at $x$, and $\hbox{lognormal}(\cdot\mid\mu,s^2)$ denote the lognormal pdf, so that $X\sim \hbox{lognormal}(\mu,s^2)$ is equivalent to $\log X\sim\hbox{normal}(\mu,s^2)$. \begin{thm}\label{th:closeform} (Theorem 1 in LS2019) If $p(\pi_i\mid y_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}}) =\emph{lognormal}(\pi_i\mid h(y_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}}),\sigma_{\pi}^2)$, with the function $h(y_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}})$ of the form $h(y_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}})=g(y_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}})+t(\bxp_i,\hbox{$\boldsymbol{\kappa$}})$, where $\sigma_{\pi}^2=\sigma_{\pi}^2(\hbox{$\boldsymbol{\kappa$}},\bxp_i)$ is possibly a function of $(\hbox{$\boldsymbol{\kappa$}},\bxp_i)$, then $$ p_s(y_i,\pi_i\mid \bxy_i,\bxp_i,\boldsymbol\theta,\hbox{$\boldsymbol{\kappa$}})= \frac{\emph{normal}\left(\log \pi_i\mid g(y_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}})+t(\bxp_i,\hbox{$\boldsymbol{\kappa$}}),\sigma_\pi^2\right)} {\exp\left\{t(\bxp_i,\hbox{$\boldsymbol{\kappa$}})+\sigma^2_\pi/2\right\} \times M_y(\hbox{$\boldsymbol{\kappa$}};\bxy_i,\bxp_i,\boldsymbol\theta) } \times p(y_i\mid \bxy_i,\boldsymbol\theta)\nonumber $$ with $M_y(\hbox{$\boldsymbol{\kappa$}};\bxy_i,\bxp_i,\boldsymbol\theta):=E_{y^\star_i\mid \bxy_i,\boldsymbol\theta}\left[\exp\left\{g(y^\star_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}})\right\}\right]$. \end{thm} If both $M_y$ and $p(y_i\mid\cdots)$ admit closed form expressions, then $p_s(y_i,\pi_i\mid\cdots)$ has a closed form, as well; for example, when $g(y_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}})=\kappa_y y_i$, where $\kappa_y \in \mathbb{R}$ is an element of the parameter vector $\hbox{$\boldsymbol{\kappa$}}$, then $M_y(\hbox{$\boldsymbol{\kappa$}};\bxy_i,\bxp_i,\boldsymbol\theta)$ is the moment generating function (MGF) of $y_i\mid \boldsymbol\theta$ evaluated at $\kappa_y$, which may have a closed form defined on $\mathbb{R}$. This implies a closed form for $p_s(y_i,\pi_i\mid\cdots)$.
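As a quick numerical sanity check (ours) of this device in the Gaussian case, the closed-form MGF evaluated at $\kappa_y$ can be compared against a Monte Carlo estimate:
\begin{verbatim}
## Check that E[exp(kappa_y * y)] = exp(kappa_y * m + kappa_y^2 * s^2 / 2)
## for y ~ normal(m, s^2); all values are arbitrary illustrations.
set.seed(1)
kappa_y <- 0.3; m <- 1.2; s <- 0.8
c(closed_form = exp(kappa_y * m + kappa_y^2 * s^2 / 2),
  monte_carlo = mean(exp(kappa_y * rnorm(1e6, m, s))))
\end{verbatim}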
Analogously, we may consider an interaction between $y$ and $\bxp$, using $g(y_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}})=(\kappa_y+\bxp_i^t \hbox{$\boldsymbol{\kappa$}}_\bxp) y_i\equiv r y_i$ with $\hbox{$\boldsymbol{\kappa$}}=(\kappa_y,\hbox{$\boldsymbol{\kappa$}}_\bxp,\sigma_\pi^2)$. In this case, we achieve $M_y(r;\cdots)$, the MGF evaluated at $r$. As mentioned in LS2019, the assumption of a lognormal distribution for $\pi_{i}$ is mathematically appealing. The inclusion probability, $\propto \pi_i$, for individual, $i$, is composed from the product of inclusion probabilities of selection across the stages of the multistage survey design. If each of these stagewise probabilities is lognormal, then their product, $\propto\pi_i$, is lognormal as well. This is particularly helpful in the setting that includes PSUs, discussed in the next section. For implementation, we \emph{observe} sampled $\{(\sampled{\pi}_i,\sampled{y}_i)\}_{i=1,\dots,n}$ and we estimate the exact posterior distributions for the population model parameters on the observed sample. Under our lognormal conditional model for the $\pi_i$s, there is no restriction imposed on $\sum_{i=1}^n \sampled{\pi}_i$, such that we may normalize the $\sampled{\pi}_i$ to sum to any positive constant, $\sum_{i=1}^n \sampled{\pi}_i = c$, as long as $h(y_i,\bxp_i,\hbox{$\boldsymbol{\kappa$}})=\kappa_0+\dots$ includes an intercept parameter that we label $\kappa_0$. Since $\pi_i\sim\hbox{lognormal}(\kappa_0+\dots,\dots)$ is equivalent to $\pi_i/c\sim\hbox{lognormal}(\kappa_0-\log c+\dots,\dots)$, the estimated intercept is either $\kappa_0$ or a shifted version, $\kappa_0-\log c$, and inference is unaffected. \section{Inclusion of PSU Information into the Fully Bayesian Approach}\label{sec:inclusionofPSU} In Subsection \ref{subsec:reviewLN}, we extend the approach of LS2019, reviewed in Section \ref{sec:review}, that co-models the response variable and sampling weights, modifying their notation by adding cluster-indexed parameters; this prepares for the inclusion of PSU information into the analysis in Subsection \ref{subsec:includingPSU} to capture within-PSU dependence in the response and sample inclusion probabilities. In Subsection \ref{subsec:LRM} we introduce the Fully Bayes joint population model for the response and the sample inclusion probabilities in the linear regression case. In Subsections \ref{subsec:pseudo} and \ref{subsec:freq} we briefly review competing approaches to analyze informative samples that we will compare in a simulation study. \subsection{Extending the Joint Population Model to Incorporate PSU-indexed Parameters \label{subsec:reviewLN}} We assume a population with a total of $J_{pop}$ PSUs and size $N=\sum_{j=1}^{J_{pop}} N_j$ with $N_j$ the number of population individuals or units in PSU $j$. More specifically, the population consists of $$ \begin{array}{rl} \underbrace{(y_{1,1},\pi_{1\mid1},\bx_{1,1}),\dots,(y_{N_1,1},\pi_{N_1\mid1},\bx_{N_1,1} )}_{\hbox{PSU }1},&\dots, \underbrace{(y_{1,j},\pi_{1\mid j},\bx_{1,j}),\dots,(y_{N_j,j},\pi_{N_j\mid j},\bx_{N_j,j})}_{\hbox{PSU }j},\dots\\ \multicolumn{2}{c}{ \underbrace{(y_{1,J_{pop}},\pi_{1\mid J_{pop}},\bx_{1,J_{pop}}), \dots,(y_{N_{J_{pop}},J_{pop}},\pi_{N_{J_{pop}} \mid J_{pop}},\bx_{N_{J_{pop}},J_{pop}} )}_{\hbox{PSU }J_{pop}} } \end{array} $$ and also of $ \pi_{11},\pi_{12},\dots,\pi_{1j},\dots ,\pi_{1J_{pop}}>0 $, with $\pi_{i\mid j}\in (0,1]$ for all $i,j$.
The sample of size $n=\sum_{j=1}^J n_j$ (with $n_j$ and $J$ specified by the survey sampler) is drawn in two steps: \begin{itemize} \item{Step 1: PSU sampling.} Sample $J$ different PSUs $j_1,\dots,j_J\in\{1,\dots,J_{pop}\}$ so that \begin{equation*} Pr[\hbox{PSU } j \hbox{ is in the sample}] = \pi_{1j} \end{equation*} \item {Step 2: Sampling of individuals.} Within each PSU in observed sample $j\in\{j_1,\dots,j_J\}$, draw $n_j$ different individuals so that individual $i$ (in the sampled PSU $j$) is in the sample with probability $$ P[\hbox{Individual } {ij} \hbox{ is in the sample}\mid \hbox{PSU } j\hbox { is in the sample}]= \pi_{i\mid j} $$ \end{itemize} and therefore the marginal inclusion probability is proportional to $\pi_{ij}:=\pi_{1j}\pi_{i\mid j}$. The superpopulation approach assumes that the population is a realization of the joint distribution for values of the response variable and inclusion probabilities, \begin{align}\label{eq:jointyandpi} (y_{ij},\pi_{ij})\mid \bx_{ij},\boldsymbol\theta,{\eta}^{y}_j,\hbox{$\boldsymbol{\kappa$}},\pi_{1j} \sim&\ p(y_{ij},\pi_{ij}\vert \bx_{ij}, \boldsymbol\theta,{\eta}^{y}_j,\hbox{$\boldsymbol{\kappa$}},\pi_{1j})\\%=&p(\pi_i\vert y_i,\bx_i,\boldsymbol\theta, \hbox{$\boldsymbol{\kappa$}}) p(y_i\vert \bx_i,\boldsymbol\theta, \hbox{$\boldsymbol{\kappa$}})\\ =&\ p(\pi_{i j}\vert y_{ij},\bx_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j})\, p(y_{ij}\vert \bx_{ij},\boldsymbol\theta,{\eta}^{y}_j) \nonumber \end{align} We model $\pi_{ij}\mid y_{ij}$ with $p(\pi_{ij}\vert y_{ij},\bx_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j})$. The (population) parameter of interest is $\boldsymbol\theta$. This construction allows for an informative sampling design by modeling $\pi_{ij}$ conditioned on $y_{ij}$. While $(\pi_{ij},y_{ij})$ are assumed to be conditionally independent over PSUs $j$ and units $i$, they are unconditionally (on model parameters) dependent under our construction. We have augmented the parameters used in LS2019, given in \eqref{eq:population}, to incorporate ${\eta}^{y}_j$ and $\pi_{1j}$ that are shared by all observations in PSU $j$. Parameters, ${\eta}^{y}_j$, induce a correlation in the response for individuals in the same PSU (dep-$y$) while $\pi_{1j}$ induces association of marginal inclusion probabilities (dep-$\pi$) among respondents nested in the same PSU. We will later construct priors on these parameters to define PSU-indexed random effects. $\hbox{$\boldsymbol{\kappa$}}$ is a nuisance parameter used to model the inclusion probabilities. After relabeling the sampled PSU indices $j_1,\dots j_J$ to $1,\dots J$, and the indices $i$ in the sample to run from $i=1,\dots,n_j$, the sample of size $n=\sum_{j=1}^J n_j$ consists of $$ \hbox{\emph{data} }:= \{\is{y}_{ij},\sampled{\bx}_{ij},\is{\pi}_{ij},j\}_{i=1,\dots,n_j; j=1,\dots,J}$$ with $j$ indicating from which PSU the individual was sampled, $n_j$ the number of participants from PSU $j$, and $J$ the total number of sampled PSUs. Recall, superindex $(s)$ denotes in the sample. The equality in \eqref{eq:jointyandpi} assumes that $y_{ij}\perp (\hbox{$\boldsymbol{\kappa$}},\pi_{1j})\mid \bx_{ij},\boldsymbol\theta,{\eta}^{y}_j$ and $\pi_{ij}\perp (\boldsymbol\theta,{\eta}^{y}_j) \mid y_{ij},\bx_{ij}, \hbox{$\boldsymbol{\kappa$}},\pi_{1j}$. \begin{comment} Examples of noninformative sample are \begin{enumerate} \item SRS: equivalent to $J_{pop}=J=1$ a and $\pi_{i\mid 1}=1$ for $i=1,\dots,N$. 
\item SRS within PSU with PSU sampling probability $\pi_{1j}$ independent of the response, equivalent to $\pi_{1j}\perp y_{ij}\mid \bx_{ij}$ $\forall i$ and $\pi_{i\mid j}=1$. \end{enumerate} \end{comment} We extend \eqref{eq:IScorrection}, which captures the joint probability model for the sample, by replacing $\boldsymbol\theta$ and $\hbox{$\boldsymbol{\kappa$}}$ with $(\boldsymbol\theta,{\eta}^{y}_j)$ and $(\hbox{$\boldsymbol{\kappa$}},\pi_{1j})$, respectively, to achieve, \begin{equation}\label{eq:IScorrectionPSU} p_s(y_{ij},\pi_{ij}\vert \bx_{ij}, \boldsymbol\theta, {\eta}^{y}_j,\hbox{$\boldsymbol{\kappa$}},\pi_{1j})=\frac {\pi_{ij}\, p(\pi_{ij}\vert y_{ij},\bx_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j}) } {E_{y_{ij}^\star\vert \bx_{ij},\boldsymbol\theta,{\eta}^{y}_j}\left[E(\pi_{ij}^\star\vert y_{ij}^\star, \bx_{ij}, \hbox{$\boldsymbol{\kappa$}},\pi_{1j}) \right]} \times p( y_{ij}\vert \bx_{ij}, \boldsymbol\theta,{\eta}^{y}_j) \end{equation} The subindex $s$ on the joint distribution $p_s$ on the LHS denotes that we condition on individual $ij$ being in the sample; that is, $p_s(y_{ij},\pi_{ij}\mid \dots)=p(y_{ij},\pi_{ij}\mid \hbox{individual } ij \hbox{ is in the sample},\dots)$. In contrast, the distributions on the RHS are population distributions. Inference on $\boldsymbol\theta$ utilizes the joint likelihood for the observed sample, \begin{equation*} \ell(\boldsymbol\theta,\boldsymbol{{\eta}^{y}},\hbox{$\boldsymbol{\kappa$}},\boldsymbol{\pi}_1; \hbox{\emph{data}} )= \prod_{j=1}^J \prod_{i=1}^{n_j} \left[p_s(\is{y_{ij}},\is{\pi}_{ij}\mid \sampled{x}_{ij},\boldsymbol\theta,{\eta}^{y}_j,\hbox{$\boldsymbol{\kappa$}},\pi_{1j}) \right] \end{equation*} with $\boldsymbol{{\eta}^{y}}:=({\eta}^{y}_1,\dots,{\eta}^{y}_J)$, $\boldsymbol{\pi}_1:=(\pi_{11},\dots,\pi_{1J})$. Inference for $\boldsymbol\theta$ is achieved via the posterior distribution of the model parameters: \begin{align*} p_s\left(\boldsymbol\theta,\boldsymbol{{\eta}^{y}},\hbox{$\boldsymbol{\kappa$}},\boldsymbol{\pi}_1 \mid \hbox{\emph{data}} \right)\propto& \ \ell\left(\boldsymbol\theta,\boldsymbol{{\eta}^{y}},\hbox{$\boldsymbol{\kappa$}},\boldsymbol{\pi}_1; \hbox{\emph{data}} \right) \times \hbox{Prior}(\boldsymbol\theta) \times \hbox{Prior}(\boldsymbol{{\eta}^{y}}) \times \hbox{Prior}(\boldsymbol{\pi}_1) \times \hbox{Prior}(\hbox{$\boldsymbol{\kappa$}}). \end{align*} To obtain a closed form for the likelihood we need, in turn, a closed form for the expected value in the denominator of \eqref{eq:IScorrectionPSU}. Theorem \ref{th:closeform} (Theorem 1 in LS2019) is here extended under our PSU-indexed parameterization, $(\boldsymbol\theta,{\eta}^{y}_j)$ and $(\hbox{$\boldsymbol{\kappa$}},\pi_{1j})$, to provide conditions that allow a closed form expression for this expected value. Similar to the set-up for Theorem \ref{th:closeform}, let $\bxp$ and $\bxy$ be subvectors of $\bx$, the covariates used to specify the conditional distribution of $\pi_{ij}\mid y,\bx,\hbox{$\boldsymbol{\kappa$}},\pi_{1j}$ and $y\mid \bx,\boldsymbol\theta,{\eta}^{y}$, respectively; that is, $\pi_{ij}\mid y_{ij},\bx_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j} \sim \pi_{ij} \mid y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j}$ and $y_{ij}\mid \bx_{ij},\boldsymbol\theta,{\eta}^{y}\sim y_{ij}\mid \bxy_{ij},\boldsymbol\theta,{\eta}^{y}$.
\begin{thm}\label{th:closeformPSU} If $p(\pi_{ij}\mid y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j}) =\emph{lognormal}(\pi_{ij}\mid h(y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j}),\sigma_{\pi}^2)$, with the function $h(y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j})$ of the form $h(y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j})= g(y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}})+ t(\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j})$, where $\sigma_{\pi}^2=\sigma_{\pi}^2(\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j})$ is possibly a function of $(\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j})$, then \begin{align*} p_s(y_{ij},\pi_{ij}\mid \bxy_{ij},\bxp_{ij},\boldsymbol\theta,{\eta}^{y}_j,\hbox{$\boldsymbol{\kappa$}},\pi_{1j})=& \frac{\emph{normal}\left(\log \pi_{ij}\mid g(y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}})+t(\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j}),\sigma_\pi^2\right)} {\exp\left\{t(\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j})+\sigma^2_\pi/2\right\} \times M_y(\hbox{$\boldsymbol{\kappa$}};\bxy_{ij},\bxp_{ij},\boldsymbol\theta,{\eta}^{y}_j) }\\ &\times p(y_{ij}\mid \bxy_{ij},\boldsymbol\theta,{\eta}^{y}_j) \end{align*} with $M_y(\hbox{$\boldsymbol{\kappa$}};\bxy_{ij},\bxp_{ij},\boldsymbol\theta,{\eta}^{y}_j):= E_{y_{ij}^\star\mid \bxy_{ij},\boldsymbol\theta,{\eta}^{y}_j}\left[\exp\left\{g(y_{ij}^\star,\bxp_{ij},\hbox{$\boldsymbol{\kappa$}})\right\}\right]$. \end{thm} So, analogously to the discussion after Theorem \ref{th:closeform}, if $g(y,\bxp,\hbox{$\boldsymbol{\kappa$}})=\kappa_y y$ with $\kappa_y$ depending on $\hbox{$\boldsymbol{\kappa$}}$ and, perhaps, on $\bxp$, then $$M_y(\hbox{$\boldsymbol{\kappa$}};\bxy_{ij},\bxp_{ij},\boldsymbol\theta,{\eta}^{y}_j):= E_{y_{ij}^\star\mid \bxy_{ij},\boldsymbol\theta,{\eta}^{y}_j}\left[\exp\left(\kappa_y y_{ij}^\star\right)\right]$$ is the moment generating function of $y$ evaluated at $\kappa_y$. When both the population distribution of $y$, $p(y\mid \boldsymbol\theta,{\eta}^{y},\bxy)$, and the moment generating function have closed forms, the latter over the real line, the likelihood, $p_s$, has a closed form, as well. \subsection{Inclusion of PSU Information into Conditional Population Model for Weights \label{subsec:includingPSU}} The marginal inclusion probability of individual $ij$, $\propto\pi_{ij}$, is the product of the probability of selecting PSU $j$, $\propto\pi_{1j}$, and the probability of selecting individual $i$ conditioned on the PSU being in the sample, $\propto\pi_{i\mid j}$, such that $\pi_{ij}=\pi_{1j} \pi_{i\mid j}.$ Therefore, $$ \log \pi_{ij}=\log \pi_{i\mid j}+\log \pi_{1j} $$ We specify $\log \pi_{1j}\sim \hbox{normal}(\mu_j,\sigma_{{\eta}^{\pi}}^2)$, where $\mu_j$ could depend on PSU covariates (\eg\ county population), but, for simplicity, we assume that it does not and set $\mu_j=0$. Choosing a normal distribution for $ \log \pi_{i\mid j}\sim \hbox{normal}\left( g(y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}})+t^\prime(\bxp_{ij},\hbox{$\boldsymbol{\kappa$}}),\sigma_{\pi}^2\right)$ yields \begin{equation}\label{eq:logpiij} \log \pi_{ij}\mid y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},{\eta}^{\pi}_j \sim \hbox{normal}\left(g(y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}})+ t^\prime(\bxp_{ij},\hbox{$\boldsymbol{\kappa$}})+\eta^{\pi}_{j},\sigma_{\pi}^2\right) \end{equation} with ${\eta}^{\pi}_j:=\log \pi_{1j}\iid \hbox{normal}(0,\sigma_{{\eta}^{\pi}}^2)$ PSU-specific random effects.
So defining $t(\bxp,\hbox{$\boldsymbol{\kappa$}},\pi_{1j}):=t^\prime(\bxp,\hbox{$\boldsymbol{\kappa$}})+\log \pi_{1j}=t^\prime(\bxp,\hbox{$\boldsymbol{\kappa$}})+{\eta}^{\pi}_j$, the distribution of $\pi_{ij}$ satisfies the conditions of Theorem \ref{th:closeformPSU}. This set-up is coherent with our assumption that the data analyst does not have information about the PSU-indexed sampling weights for either the population or sampled units because they are not typically published by survey administrators. Nevertheless, our derivation of \eqref{eq:logpiij} by factoring the marginal inclusion probabilities, $\pi_{ij}$, demonstrates how we may capture within-PSU dependence among $(\pi_{ij})$ by inclusion of random effects, ${\eta}^{\pi}_j$. Notice that, as before, since we will include an intercept parameter, $\kappa_0$, in the model for $\pi_{ij}$ in \eqref{eq:logpiij}, \ie, $t^\prime(\bxp,\hbox{$\boldsymbol{\kappa$}})=\kappa_0+\dots$, we do not impose any restriction on $\sum_{i=1}^{n_j} \pi_{ij}$ or $\sum_{j=1}^J \sum_{i=1}^{n_j} \pi_{ij}$. \subsection{Linear Regression Joint Population Model}\label{subsec:LRM} We construct a linear regression model for the population with, \begin{equation}\label{eq:SLR_likelihood} {y_{ij}\mid \bxy_{ij},\boldsymbol\theta,{\eta}^{y}_j}\sim\text{normal}\left(\bxy_{ij}^t\hbox{$\boldsymbol{\beta$}}+{\eta}^{y}_j ,\sigma_y^2 \right) , \quad\hbox{with }\boldsymbol\theta=(\hbox{$\boldsymbol{\beta$}},\sigma_y^2) \end{equation} with the PSU-specific random effect ${\eta}^{y}_j$ in \eqref{eq:SLR_likelihood} playing the role of ${\eta}^{y}_j$ in \eqref{eq:jointyandpi}. The conditional population model for inclusion probabilities is specified as in \eqref{eq:logpiij}, with \begin{equation}\label{eq:lonnormalpriorforpi} {\pi_{ij}\mid y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},{\eta}^{\pi}_j}\sim \text{lognormal}\Big(\kappa_y y_{ij}+\bxp_{ij}^t \hbox{$\boldsymbol{\kappa$}}_\bxp+{\eta}^{\pi}_j, \sigma_\pi^2\Big),\quad\hbox{with } \hbox{$\boldsymbol{\kappa$}}=(\kappa_y,\hbox{$\boldsymbol{\kappa$}}_\bxp,\sigma_\pi^2) \end{equation} This construction results from setting $g(y_{ij},\bxp_{ij},\hbox{$\boldsymbol{\kappa$}})=\kappa_y y_{ij}$, $t(\bxp_{ij},\hbox{$\boldsymbol{\kappa$}},\pi_{1j})=\bxp_{ij}^t \hbox{$\boldsymbol{\kappa$}}_\bxp+{\eta}^{\pi}_j$ (recall ${\eta}^{\pi}_j=\log \pi_{1j}$), and $\sigma_\pi^2(\hbox{$\boldsymbol{\kappa$}},\bxp_{ij},\pi_{1j})=\sigma_\pi^2$ in~\eqref{eq:logpiij}. Here $\hbox{$\boldsymbol{\beta$}}$ and $\hbox{$\boldsymbol{\kappa$}}_\bxp$ are vectors of regression coefficients that include an intercept, so the first entry of both $\bxy_{ij}$ and $\bxp_{ij}$ equals 1. We select prior distributions, \begin{equation}\label{eq:priors} \begin{array}{c} \hbox{$\boldsymbol{\beta$}} \sim \hbox{MVN}(\mathbf{0},100 \mathbf{I}), \quad \hbox{$\boldsymbol{\kappa$}} \sim \hbox{MVN}(\mathbf{0},100 \mathbf{I}), \quad {\eta}^{y}_1,\dots,{\eta}^{y}_J\iid \hbox{normal}(0,\sigma_{{\eta}^{y}}^2), \\ {\eta}^{\pi}_1,\dots,{\eta}^{\pi}_J\iid \hbox{normal}(0,\sigma_{{\eta}^{\pi}}^2), \quad \hbox{and} \quad \sigma_y,\sigma_\pi,\sigma_{{\eta}^{y}},\sigma_{{\eta}^{\pi}} \iid \hbox{normal}^+(0,1) \end{array} \end{equation} with $\hbox{normal}^+(m,s^2)$ denoting a normal distribution with mean $m$ and variance $s^2$ restricted to the positive real line; $\hbox{MVN}(\mathbf{m},\bm{\Sigma})$ the multivariate normal distribution with mean vector $\mathbf{m}$ and variance-covariance matrix $\bm{\Sigma}$; and $\mathbf{I}$ the identity matrix.
Since $y\sim\hbox{normal}(m,s^2)$ admits a closed form expression for its moment generating function, $M_y(t)=\exp(tm+t^2 s^2/2)$, we apply Theorem \ref{th:closeformPSU} to obtain, \begin{align}\label{eq:LRp_s} p_s\left(y_{ij},\pi_{ij}\mid \bxy_{ij},\bxp_{ij},\boldsymbol\theta,{\eta}^{y}_j,\hbox{$\boldsymbol{\kappa$}},{\eta}^{\pi}_j\right) =&\frac{\hbox{normal}\left(\log \pi_{ij}\mid \kappa_y y_{ij}+\bxp_{ij}^t\hbox{$\boldsymbol{\kappa$}}_\bxp+{\eta}^{\pi}_j,\sigma_\pi^2\right)} {\exp\left\{\bxp^t_{ij} \hbox{$\boldsymbol{\kappa$}}_\bxp+{\eta}^{\pi}_j +\sigma^2_\pi/2+ \kappa_y (\bxy^t_{ij}\hbox{$\boldsymbol{\beta$}}+{\eta}^{y}_j)+\kappa_y^2\sigma_y^2/2 \right\}}\nonumber \\ &\times \hbox{normal}\left(y_{ij}\mid \bxy^t_{ij}\hbox{$\boldsymbol{\beta$}}+{\eta}^{y}_j ,\sigma^2_y\right) \end{align} The implementation of the Gibbs sampler is not straightforward in this case due to non-conjugacy under the exact likelihood in \eqref{eq:LRp_s}. To obtain a posterior sample of the model parameters, we rely on the ``black box'' solver ``Stan'' \citep{carpenter2016stan}, which implements an efficiently-mixing Hamiltonian Monte Carlo sampling algorithm and readily accommodates non-conjugate model specifications. \subsection{Pseudolikelihood Approach}\label{subsec:pseudo} \cite{10.1214/18-BA1143} propose an approach to incorporate sampling weights using a plug-in observed data pseudolikelihood: \begin{equation}\label{eq:fullpseudo} \left[\prod_{j=1}^J \prod_{i=1}^{n_j} p(\sampled{y}_{ij}\mid \theta)^ {\sampled{w}_{ij}}\right] \times \prod_{j=1}^J p({\eta}^{y}_{j}\mid \sigma^{2}_{{\eta}^{y}}) \end{equation} where we start with a pseudolikelihood that exponentiates each observed data likelihood contribution by its marginal unit sampling weight, $\sampled{w}_{ij}$, to re-balance the information in the observed sample to approximate that in the population. So the observed data pseudolikelihood (in square brackets) is not an exact likelihood, but an approximation for the unobserved population. We apply the pseudolikelihood approach by augmenting it with the prior for the unobserved, linked random effect to form an augmented pseudolikelihood. Inference on $\boldsymbol\theta$ utilizes the pseudoposterior. In Section \ref{sec:simulationstudy} we label this approach ``Pseudo''. \cite{10.1214/18-BA1143} standardize marginal individual sampling weights so that $\sum_{j=1}^J\sum_{i=1}^{n_j} \sampled{w}_{ij}=n$ to approximately reflect the amount of posterior uncertainty in the sample. Nevertheless, as in LS2019, they account for neither dep-$y$ nor dep-$\pi$, so the resulting credible intervals are overly optimistic (too short). In related work, \citet{williams2021} propose a separate post-processing step applied to the pseudoposterior samples that produces posterior draws whose variance matches the sandwich variance estimator (which depends on $(y_{ij},w_{ij})$) of the pseudo MLE, in order to account for the dependence induced by clustering. The added post-processing step, which corrects the posterior variance to the sandwich form that characterizes the frequentist construction, is required because the pseudolikelihood treats the weights as fixed. In our fully Bayesian approach, by contrast, the frequentist sandwich form collapses to the Bayesian estimator of the asymptotic covariance matrix because the response variable and the sampling weights are jointly modeled under an assumed joint population generating model \citep{kleijn2012}.
Related frequentist approaches of \cite{pfeffermann1998weighting} and \cite{rabe2006multilevel} also employ an augmented likelihood similar to that of \eqref{eq:fullpseudo}, but where they also weight the prior for the random effect with the marginal group (PSU)-level weight, $\sampled{w}_{1j}$. They proceed to integrate out the random effect, $\eta^{y}_{j}$, to perform estimation. They also focus on consistent estimation of parameters, rather than correct uncertainty quantification. We will see in the sequel that because our approach uses a joint model for $(y_{ij},\pi_{ij}\mid \bx_{ij})$, the asymptotic covariance matrix of the joint posterior, $H^{-1}$, is the same as that of the MLE, such that we achieve correct uncertainty quantification. In the context of the linear regression in \eqref{eq:SLR_likelihood}, the quantity between square brackets in \eqref{eq:fullpseudo} matches the likelihood of the weighted regression model ${y_{ij}\mid \bxy_{ij},\boldsymbol\theta,{\eta}^{y}_j}\sim\text{normal}\left(\bxy_{ij}^t\hbox{$\boldsymbol{\beta$}}+{\eta}^{y}_j ,\sigma_y^2/w_{ij}\right)$ (see Appendix \ref{subsec:pseudoandweightedreg} for details). So \eqref{eq:fullpseudo} becomes \begin{equation*} \left[\prod_{j=1}^J \prod_{i=1}^{n_j} \text{normal}\left(\sampled{y}_{ij}\mid \bxy_{ij}^t\hbox{$\boldsymbol{\beta$}}+{\eta}^{y}_j ,\sigma_y^2/\sampled{w}_{ij} \right) \right] \times \prod_{j=1}^J p(\eta^{y}_{j}\mid \sigma^{2}_{\eta^{y}}) \end{equation*} This becomes useful under estimation in Stan, where one can specify the weighted linear regression model and add the log of $p({\eta}^{y}\mid \cdots)$ to the log of the full conditional for joint sampling of the model parameters. \subsection{Frequentist Approach}\label{subsec:freq} Frequentist estimation approaches are design-based, assuming the population is fixed. The formulation that we highlight employs the pseudolikelihood construction, but without PSU-REs. The point estimate of $\boldsymbol\theta$, called $\tilde{\boldsymbol\theta}_{freq}$, maximizes $p_{pseudo}(\boldsymbol\theta; \sampled{y},\sampled{\pi},\sampled{x})= \prod_{j=1}^J \prod_{i=1}^{n_j} \left[p(\sampled{y}_{ij}\vert \sampled{\bx}_{ij}, \boldsymbol\theta)\right]^{\sampled{w}_{ij}}$ with $\sampled{w}_{ij}\propto 1/\sampled{\pi}_{ij}$ standardized so that $\sum_{ij} \sampled{w}_{ij}=n$. The PSU indices, together with $p_{pseudo}$, are used to estimate the standard error of $\tilde{\boldsymbol\theta}_{freq}$ via resampling methods (\ie, balanced repeated replication, jackknife, bootstrap) or Taylor series linearization. The R function svyglm in the R package \hbox{survey}~\citep{lumley2019surveyR} uses the latter (by default) to fit common generalized regression models such as linear, Poisson, logistic, etc.
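For reference, a minimal R sketch (with hypothetical column names) of this design-based comparator: the PSU identifiers and standardized weights enter through svydesign, and svyglm applies Taylor series linearization for the standard errors by default.
\begin{verbatim}
library(survey)
## samp: observed sample with columns y, x, psu_id and weights w (proportional to 1/pi)
des <- svydesign(ids = ~psu_id, weights = ~w, data = samp)
fit <- svyglm(y ~ x, design = des)      # design-based point estimates
confint(fit)                            # confidence intervals based on design df
\end{verbatim}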
For multiple linear regression with $\boldsymbol\theta=(\hbox{$\boldsymbol{\beta$}},\sigma_y^2)$, inference, in particular the construction of confidence regions, for the $(p+1)$-dimensional vector of regression coefficients $\hbox{$\boldsymbol{\beta$}}$ (that includes an intercept) is based on the asymptotic result, $\tilde{\Sigma}^{-1/2} (\tilde{\hbox{$\boldsymbol{\beta$}}}_{freq}-\hbox{$\boldsymbol{\beta$}})\sim (p+1)\hbox{-variate Student-t}$ with degrees of freedom equal to $df=\# PSUs-\#Strata$, which represents the design-based degrees of freedom; $\tilde{\Sigma}^{1/2}$ is a lower triangular scale matrix such that $\tilde{\Sigma}^{1/2}(\tilde{\Sigma}^{1/2})^t=\tilde{\Sigma}$ with $\tilde{\Sigma}$ the estimate of the variance-covariance matrix of $\tilde{\hbox{$\boldsymbol{\beta$}}}_{freq}$. No stratification is equivalent to having one stratum and the degrees of freedom reduce to $df=J-1$ (recall, $J:=\#PSUs$). This frequentist approach for uncertainty quantification is similar to the post-processing correction of \citet{williams2021} in that the analysis model for the population does not employ a PSU-indexed random effects term; rather, the resampling of clusters captures the dependence within clusters. Both methods perform nearly identically in practice, so we focus on comparing our Fully Bayes approach to this frequentist resampling method in the simulation study that follows. \section{Simulation \label{sec:simulationstudy}} We perform a Monte Carlo simulation study to compare the performance of our fully Bayes method of \eqref{eq:LRp_s}, which employs PSU-indexed random effects in both the models for the response and the inclusion probabilities, to the pseudoposterior and frequentist methods presented in Subsections \ref{subsec:pseudo} and \ref{subsec:freq}, respectively. In each Monte Carlo iteration, we generate a population of $J_{pop}$ clusters and $N_{j}$ individuals per cluster. The response variable is generated to depend on the size-based group and marginal inclusion probabilities in order to induce informativeness (dependence between the response variable and the inclusion probabilities). We next take a sample of groups and, subsequently, individuals within each sampled group. A clustered simple random sample (cSRS) is also generated from the same population. The cSRS is included to serve as a gold standard for point estimation and uncertainty quantification (under the population model) and is compared to our model alternatives designed for estimation on the informative sample taken from the same population. For each population and sample we utilize the Fully Bayes method and the associated comparator methods. We assess the bias, MSE and coverage properties under each model formulation. \subsection{Monte Carlo Simulation Scheme}\label{subsec:MCscheme} Steps 1-5 describe how the synthetic population dataset is generated, steps 6-7 how the samples are drawn and steps 8-10 how they are analyzed. We use the superindex `DG' to refer to the data generating (population) model as opposed to the analysis model. In the sequel, gamma$(a,b)$ denotes the gamma distribution with shape and rate parameters $a$ and $b$ (\ie, mean $a/b$). \renewcommand{\theenumi}{\arabic{enumi}} \begin{enumerate} \item Generate $\pi_{i\mid j}\iid \hbox{gamma}(a_\pi=2,b_\pi=2)$ for $i=1,\dots, (N_j=20)$ individuals nested in PSU, $j=1,\dots,(J_{pop}=10^3)$ total PSUs. The total population size is $J_{pop}\times N_j = 20,000$.
\item Define PSU $j$ inclusion probability $\pi_{1j}^{tem}:=\sum_{i=1}^{N_j}\pi_{i\mid j}$ (therefore $\pi_{1j}^{tem}\iid$ $\hbox{gamma}(N_j a_\pi,b_\pi)$). \item Standardize $\pi_{1j}:=\pi_{1j}^{tem}/\sum_{j^\prime=1}^{J_{pop}} \pi_{1j^\prime}^{tem}$, so $$(\pi_{1,1},\pi_{1,2},\dots,\pi_{1,J_{pop}})\sim\hbox{Dirichlet}\left( a_\pi \times(N_1,N_2,\dots,N_{J_{pop}})\right).$$ (Thus, $b_\pi$ does not play a role in the distribution of $\pi_{1j}$.) \item Generate ${\eta}^{y,DG}_j\iid \hbox{normal}(0,\sigma_{{\eta}^{y,DG}}^2=0.1^2)$ PSU-specific random effects and predictor $x_{ij}\iid \hbox{Uniform}(0,1)$. \item Generate the response. We consider three simulation scenarios to generate the response by different settings for coefficients in the following generating expression, $$y_{ij}= \beta_0^{DG}+\beta_1^{DG} x_{ij}+ \beta_{\pi,1} {\pi}_{1j}+ \beta_{\pi,2} {\pi_{i\mid j}}+ \beta_{{\eta}^{y,DG}} {\eta}^{y,DG}_j+ \epsilon_{ij}^{DG},$$ with $\epsilon^{DG}_{ij}\iid \hbox{normal}(0,(\sigma_y^{DG})^2)$. The three scenarios each set the last three regression coefficients, as follows: \begin{center} \begingroup \setlength{\tabcolsep}{4pt} \begin{tabular}{l| ccc} Scenario &$\beta_{\pi,1}$&$\beta_{\pi,2}$& $\beta_{{\eta}^{y,DG}}$\\ \hline {\hbox{$\mathcal {S}1$}}: Informative PSU-RE &$J_{pop}$ & 1 &0\\ {\Sc}: Non-informative PSU-RE &0 & 1 &1 \\ {\Sd}: No stage is informative &0 & 0 & 1 \\ \end{tabular} \endgroup \end{center} with $(\sigma_y^{DG})^{2}=0.1^2$, $\beta_0^{DG}=0$, and $\beta_1^{DG}=1$. Note that, in scenario \hbox{$\mathcal {S}1$}, $\beta_{\pi,1}=1/E(\pi_{1j})=J_{pop}$ so that $\beta_{\pi,1} E(\pi_{1j})=1$. Informative random effects are instantiated in Scenario \hbox{$\mathcal {S}1$}\ by generating $y_{ij}$ from $\pi_{1j}$, where $\pi_{1j}$, the inclusion probability for PSU, $j$, is equivalent to a PSU-indexed random effect. We set the regression coefficient for $\pi_{1j}$ equal to $0$ and that for ${\eta}^{y,DG}_j$ equal to $1$ in Scenario \Sc, where we generate the random effects to be non-informative (uncorrelated with the selection probabilities). \item Take a clustered simple random sample (cSRS) from the population: From each population dataset we draw two samples, one informative and the other under two-stage cluster SRS. Both samples contain $J=30$ PSUs and $n_j=5$ individuals per PSU, which produces a total sample size of $n=\sum_{j=1}^J n_j=150$. Results under cSRS will serve as a gold standard and will be compared to the results under the comparator methods designed to analyze informative samples. To implement the clustered simple random sample, we \begin{enumerate} \item Draw an SRS (without replacement) of size $J$ of PSU indices from $\{1,\dots,J_{pop}\}$. \item Within each drawn PSU $j$, obtain an SRS (without replacement) of size $n_j$. \item Relabel PSU indices to run from $1$ to $J$ and individual indices to run from $1$ to $n_j$. \item The cSRS consists of $\{(y_{ij},x_{ij},j)\}_{i=1,\dots,n_j;j=1,\dots,J}$ \end{enumerate} \item Take an informative sample: \begin{enumerate} \item Draw, without replacement, $J$ PSU indices $j_1,\dots,j_J\in\{1,\dots,J_{pop}\}$ with $Pr(j\in\hbox{sample})=\pi_{1j}$. \item For each $j\in\{j_1,\dots,j_J\}$ drawn, sample, without replacement, $n_j$ individual indices $i\in\{1,\dots, N_{j}\}$ with probability $Pr(i\in \hbox{ sample from PSU }j)=\pi_{i\mid j}/\sum_{i^\prime=1}^{N_j} \pi_{i^\prime\mid j}$.
\item Define $\pi_{ij}=\pi_{1j}\pi_{i\mid j}$ and relabel the PSU and individual indices so they run from $1$ to $J$ and from $1$ to $n_j$, respectively, and add superindex ``$(s)$'' to denote sampled quantities. \item The informative sample consists of $\{(\sampled{y}_{ij},\sampled{x}_{ij},\sampled{\pi}_{ij},j)\}_{ i=1,\dots,n_j, j=1,\dots,J}$. \end{enumerate} \item Analyze the realized informative sample by estimating parameters under the following modeling approaches: \begin{enumerate} \item \textbf{FULL.both}: Denotes the approach enumerated in Subsection \ref{subsec:includingPSU} that employs PSU-REs in models for both response and inclusion probability; the model for the response includes PSU-indexed random effects with, \begin{equation}\label{eq:AnalysisModel} y_{ij}=\beta_{0}^{Ana}+\beta_{1}^{Ana} x_{ij}+{\eta}^{y,Ana}_j+\epsilon^{Ana}_{ij} \quad\hbox{with }\epsilon^{Ana}_{ij}\iid \hbox{normal}(0,(\sigma_y^{Ana})^2) \end{equation} where the superscript, ``$Ana$" denotes the model for analysis or estimation as contrasted with the $DG$ model used for population data generation. We are interested in estimating $\beta_0^{Ana}$ and the standard deviation of the PSU-REs, $\sigma_{{\eta}^{y,Ana}}$ where ${\eta}^{y,Ana}_j\iid \hbox{normal}(0,(\sigma_{{\eta}^{y,Ana}})^2)$. (Note that the estimation of $\beta_1^{Ana}$ is unbiased regardless of the sampling scheme) We, subsequently, use the conditional estimation model for the marginal inclusion probabilities of \eqref{eq:lonnormalpriorforpi} to include a PSU-indexed random effects term, \begin{equation*} \log \pi_{ij}\mid y_{ij},{\eta}^{\pi}_j\sim \hbox{normal}(\kappa_0+\kappa_y y_{ij}+\kappa_x x_{ij}+{\eta}^{\pi}_j,\sigma_\pi^2) \end{equation*} These two formulations describe the joint population model that employs random effects in both the marginal model for the response and conditional model for the inclusion probabilities. We leverage the Bayes rule approach of Section~\ref{subsec:reviewLN} under the linear regression population to produce \eqref{eq:LRp_s} that adjusts the population model to condition on the observed sample. We use this equation to estimate the Fully Bayes population model on the observed sample. It bears noting that FULL.both assumes that the data analyst does not have access to PSU-indexed sampling weights ($\propto 1/\pi_{1j}$). Yet, we show in the simulation study results that FULL.both is able to adjust for informative sampling of PSUs for estimation of population model parameters (\eg, the intercept and PSU random effects variance). This relatively good result owes to the inclusion of PSU-indexed random effects in the conditional model for the inclusion probabilities, $\pi_{ij}$, because it captures the within PSU dependence among them. Note that the Full.both analysis assumes that $\log \pi_{ij}\mid y_{ij},\dots$ is a normal distribution, but that does not hold under any simulation scenario, which allows our assessment of the robustness of FULL.both to model misspecification. \item \textbf{FULL.y}: This alternative is a variation of FULL.both that uses the same population estimation for the response stated in ~\eqref{eq:AnalysisModel}. In this option, however, PSU-REs are \emph{excluded} from the conditional model for the marginal inclusion probabilities; \ie, $\log \pi_{ij}\mid y_{ij},{\eta}^{\pi}_j\sim \hbox{normal}(\kappa_0+\kappa_y y_{ij}+\kappa_x x_{ij},\sigma_\pi^2)$ does not include PSU-REs. 
\item \textbf{Pseudo}: Denotes the pseudolikelihood that exponentiates the likelihood by marginal sampling weights, as described in Subsection~\ref{subsec:pseudo}. \item \textbf{Freq}: Denotes the frequentist, design-based, inference under the simple linear regression model as described in Subsection \ref{subsec:freq}. Note that this analysis model does not include PSU-REs because we employ a step that resamples the PSUs in order to estimate confidence intervals. To fit the model, we use the R function svyglm in the package survey~\citep{lumley2019surveyR}. \item \textbf{Pop}: Ignore the informative sampling and fit the model in \eqref{eq:AnalysisModel} (as if the sample were a cSRS). The inclusion probabilities do not play a role in the inference, though the model for the response \emph{includes} PSU-REs. This is equivalent to Pseudo with all sampling weights set equal to 1. \end{enumerate} \item Analyze the cluster simple random sample. \textbf{cSRS}: Fit the model in \eqref{eq:AnalysisModel} to the sample taken under the cSRS design (generated in step 6); this is the same as Pop but applied to the cSRS. \item Save parameter estimates to compute bias, MSE, coverage probability of central 95\% credible intervals and their expected length. The parameters of inferential interest are the point estimate of $\beta_0^{Ana,TRUE}$: $\tilde{\beta}_0^{Ana}:=E(\beta_0^{Ana}\mid data)$ (or $\tilde{\beta}_{0,freq}^{Ana}$ for Freq), and its central 95\% credible (or confidence for Freq) interval lower and upper limits. We also produce point and interval estimates for $\sigma_{{\eta}^{y,Ana}}^{TRUE}$ for those methods that include PSU-REs in the marginal response model (\ie\ all except Freq). The computation of $\beta_0^{Ana,TRUE}$ and $\sigma_{{\eta}^{y,Ana}}^{TRUE}$ is discussed in Subsection \ref{subsec:TrueModelParameters}. \end{enumerate} Once we have run steps 1-10 $1000$ times, we use the quantities stored in step 10 to estimate the bias and MSE of the point estimate of $\beta_0^{Ana,TRUE}$ (defined below in Subsection \ref{subsec:TrueModelParameters}) for each method as the average of $\tilde{\beta}^{Ana}_0-\beta^{Ana,TRUE}_0$ and the average of $(\tilde{\beta}^{Ana}_0-\beta^{Ana,TRUE}_0)^2$, respectively. The coverage and expected length of the 95\% credible (or confidence) intervals are estimated as the proportion of times that the intervals contain $\beta^{Ana,TRUE}_0$ and their average length. We do the same for $\sigma_{{\eta}^{y,Ana}}^{TRUE}$, also defined in Subsection \ref{subsec:TrueModelParameters}. \subsection{True Model Parameters under Analysis Model}\label{subsec:TrueModelParameters} In this section we compute the true values of the intercept and random effects variance parameters for the analysis ($Ana$) models, obtained by associating parameters of the analysis model with those of the data generating ($DG$) model. Having true values for the intercept and random effect variance under our analysis models allows our assessment of bias, MSE and coverage. We use the superindex ``$TRUE$'' to refer to the true parameter values for the $Ana$ model implied by the simulation true parameter values in the $DG$ model. The true value of the intercept parameter under the analysis model is obtained by integration, $$\beta_0^{Ana,TRUE}=E(y_{ij}\mid x_{ij}=0)= \beta_0^{DG}+\beta_{\pi,1} E(\pi_{1j})+\beta_{\pi,2} E(\pi_{i\mid j}) $$ yielding $\beta_0^{Ana,TRUE}=2$, $1$ and $0$ under simulation scenarios \hbox{$\mathcal {S}1$}, \Sc\ and \Sd, respectively.
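These values can be verified in a line or two of R (ours), using $E(\pi_{i\mid j})=a_\pi/b_\pi=1$ for the gamma$(2,2)$ draws of step 1 and $E(\pi_{1j})=1/J_{pop}$ from the symmetric Dirichlet construction of step 3:
\begin{verbatim}
J_pop <- 1e3
E_pi_psu  <- 1 / J_pop   # E(pi_{1j})
E_pi_unit <- 2 / 2       # E(pi_{i|j}) for gamma(a_pi = 2, b_pi = 2)
beta0_true <- function(b_pi1, b_pi2) 0 + b_pi1 * E_pi_psu + b_pi2 * E_pi_unit
c(S1 = beta0_true(J_pop, 1), S2 = beta0_true(0, 1), S3 = beta0_true(0, 0))  # 2 1 0
\end{verbatim}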
The true value of the population random effect is ${{\eta}^{y,Ana}_j}^{TRUE}=\beta_{\pi,1} \left[\pi_{1j}-E(\pi_{1j})\right]+ \beta_{{\eta}^{y,DG}} {\eta}^{y,DG}_j $ and the true values of the random errors under the analysis model are $\epsilon_{ij}^{Ana,TRUE}=\beta_{\pi,2} \pi_{i\mid j}+\epsilon_{ij}^{DG}$. Since the $\pi_{i\mid j}$ are not normally distributed, the $\epsilon_{ij}^{Ana,TRUE}$ are also not normally distributed. Because the normality assumption on the errors of the simple regression model is violated, the variance of the random effects for the marginal population model for the response, $$ \begin{array}{rl} \mbox{Var}({{\eta}^{y,Ana}_j}^{TRUE})=&\beta_{\pi,1}^2 \mbox{Var}(\pi_{1j})+ \beta_{{\eta}^{y,DG}}^2 \underbrace{\mbox{Var}({\eta}^{y,DG}_j)}_{\sigma_{{\eta}^{y,DG}}^2} \end{array} $$ is different from $(\sigma_{{\eta}^{y,Ana}}^{TRUE})^2$. Nevertheless, we may compute $\sigma_{{\eta}^{y,Ana}}^{TRUE}$ by fitting the linear mixed effect population model in \eqref{eq:AnalysisModel}, which corresponds to the analysis model for the response, directly to the population dataset via the R function lmer: lmer($y\sim x + (1\mid \hbox{PSU index})$, data = population). In practice, the PSU inclusion probabilities, $(\pi_{1j})$, are not available to the data analyst for either sampled or non-sampled individuals from the population. Under scenario \hbox{$\mathcal {S}1$}, $\sigma_{{\eta}^{y,Ana}}^{TRUE}\approx 0.27$, and under the other two scenarios $\sigma_{{\eta}^{y,Ana}}^{TRUE}\approx 0.1$. \subsection{Simulation Results}\label{subsection:simresults} Tables \ref{tab:informativePSURE}, \ref{tab:Non-informativePSU-RE} and \ref{tab:nostageinformative} show the simulation results under all scenarios. As expected, in all scenarios the cSRS credible intervals have coverage close to the nominal level (0.95) and the lowest MSE. In the informative scenarios, \hbox{$\mathcal {S}1$}\ and \Sc, (i) Pop performs poorly, showing the consequences of ignoring the informative sampling scheme, (ii) all methods to analyze informative samples yield point estimators of similar quality (similar MSE), and (iii) FULL.both and FULL.y credible intervals maintain nominal coverage while Pseudo and Freq do not. Under the non-informative scenario \Sd, all methods to analyze informative samples yield results similar to Pop (now the correctly specified model). Both Pseudo and Freq underestimate uncertainty such that they both undercover in the informative sampling scenarios. Interestingly, only Freq pays the price of the noise introduced by the non-informative sampling weights, producing considerably wider confidence intervals for $\beta_0^{Ana,TRUE}$ than all other methods. Overall, Tables \ref{tab:informativePSURE}-\ref{tab:nostageinformative} show that FULL.both and FULL.y are the best methods to analyze informative samples, particularly in terms of uncertainty quantification. But, so far, the results have not shown an advantage of FULL.both over FULL.y. To explore this, under scenario \hbox{$\mathcal {S}1$}, we increase the level of informativeness of the PSUs by increasing the value of $\beta_{\pi,1}$ (see step 5 in Subsection \ref{subsec:MCscheme}) from $J_{pop}$ to $2J_{pop}$ and $3J_{pop}$. As shown in Table \ref{tab:coverage}, coverage of the FULL.y credible intervals deteriorates as informativeness increases while FULL.both, in contrast to all other methods, maintains coverage similar to cSRS (at the nominal level). The strength of FULL.both over all other considered approaches is that it accounts for the association among the inclusion probabilities within the same PSU.
Table \ref{tab:coverage} shows that FULL.both is the only approach that performs well under simulation scenario \hbox{$\mathcal {S}1$}\ when the level of informativeness of $\pi_{ij}$ (or the correlation between $y_{ij}$ and $\pi_{1j}$) increases. FULL.both is the only method whose inference quality is not affected when $\pi_{1j}\not\perp y_{ij}\mid \bx_{ij}$ for all $i,j$. Since the simulation true population distribution of $\pi_{ij}$, given in step 7(c) of Subsection \ref{subsec:MCscheme}, is not lognormal, the simulation shows that FULL.both is robust to misspecification of the distribution of $\pi_{ij}\mid y_{ij},\cdots$.
\begin{table}[ht] \centering \begin{small} \begin{tabular}{rrrr rrr} & FULL.both & FULL.y & Pseudo & Freq & Pop & cSRS\\ \hline & \multicolumn{6}{c}{$\beta_0^{Ana}$}\\ Bias & 0.035 & 0.046 & 0.067 & 0.015 & 0.518 & 0.005 \\ MSE & 0.028 & 0.028 & 0.028 & 0.030 & 0.294 & 0.016 \\ 95\% CI Coverage & 0.949 & 0.948 & 0.902 & 0.905 & 0.088 & 0.957 \\ 95\% CI Length & 0.670 & 0.668 & 0.538 & 0.617 & 0.620 & 0.520 \\ \hline & \multicolumn{6}{c}{$\sigma_{{\eta}^{y,Ana}}$}\\ Bias & 0.012 & 0.012 & 0.043 &NA & 0.014 & -0.012 \\ MSE & 0.010 & 0.010 & 0.010 &NA & 0.010 & 0.008 \\ 95\% CI Coverage & 0.964 & 0.967 & 0.902 &NA & 0.961 & 0.951 \\ 95\% CI Length & 0.424 & 0.422 & 0.360 &NA & 0.416 & 0.357 \\ \hline \end{tabular} \end{small} \caption{Simulation Scenario \hbox{$\mathcal {S}1$}: Informative PSU-RE. cSRS analyzes the cSRS sample while all other approaches analyze the informative sample. CI denotes central credible interval except for Freq, where it denotes confidence interval. NA stands for not applicable; Freq does not include PSU-REs. \label{tab:informativePSURE} } \end{table}
\begin{table}[ht] \centering \begin{small} \begin{tabular}{rrrrrrr} & FULL.both & FULL.y & Pseudo & Freq & Pop & cSRS\\ \hline & \multicolumn{6}{c}{$\beta_0^{Ana}$}\\ Bias & 0.008 & 0.009 & 0.055 & 0.011 & 0.494 & 0.001\\ MSE & 0.020 & 0.020 & 0.022 & 0.025 & 0.263 & 0.014\\ 95\% CI Coverage& 0.958 & 0.962 & 0.908 & 0.926 & 0.064 & 0.962\\ 95\% CI Length & 0.623 & 0.624 & 0.496 & 0.565 & 0.574 & 0.479\\ \hline & \multicolumn{6}{c}{$\sigma_{{\eta}^{y,Ana}}$}\\ Bias & 0.063 & 0.065 & 0.107 &NA & 0.065& 0.049 \\ MSE & 0.008 & 0.008 & 0.017 &NA & 0.008 & 0.006\\ 95\% CI Coverage& 0.967 & 0.971 & 0.752 &NA & 0.966 & 0.956\\ 95\% CI Length & 0.332 & 0.331 & 0.312 &NA & 0.329& 0.285 \\ \hline \end{tabular} \end{small} \caption{Simulation Scenario \Sc: Non-informative PSU-REs. Same as Table \ref{tab:informativePSURE} but under \Sc. \label{tab:Non-informativePSU-RE} } \end{table}
\begin{table}[ht] \centering \begin{small} \begin{tabular}{rrrrrrr} & FULL.both & FULL.y & Pseudo & Freq& Pop & cSRS\\ \hline & \multicolumn{6}{c}{$\beta_0^{Ana}$}\\ Bias & 0.000 & 0.000 & -0.000 & 0.000 & -0.000 & 0.000 \\ MSE & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 & 0.001 \\ 95\% CI Coverage & 0.957 & 0.964 & 0.944 & 0.947 & 0.956 & 0.951 \\ 95\% CI Length & 0.103 & 0.103 & 0.104 & 0.134 & 0.102 & 0.103 \\ \hline & \multicolumn{6}{c}{$\sigma_{{\eta}^{y,Ana}}$}\\ Bias & 0.002 & 0.002 & 0.006 &NA & 0.002 & 0.003 \\ MSE & 0.000 & 0.000 & 0.000 &NA & 0.000 & 0.000 \\ 95\% CI Coverage & 0.935 & 0.936 & 0.933 &NA & 0.938 & 0.955 \\ 95\% CI Length & 0.067 & 0.067 & 0.068 &NA & 0.067 & 0.068 \\ \hline \end{tabular} \end{small} \caption{Simulation Scenario \Sd: No stage informative. Same as Table \ref{tab:informativePSURE} but under \Sd, where Pop is correctly specified.
\label{tab:nostageinformative} } \end{table} \begin{table} [!htb] \begin{tabular}{ l | ccccc} $\beta_{\pi,1}$&FULL.both & FULL.y & Pseudo &Freq &cSRS \\ \hline $J_{pop}$ &.949,.964 &.948,.967 &.902,.902 & .905,NA &.957,.951 \\ $2J_{pop}$ &.949,.929 &.914,.927 &.852,.929 & .908,NA &.961,.927\\ $3J_{pop}$ &.949,.946 &.85,.954 &.808,.914 & .910,NA &.957,.949\\ \end{tabular} \caption{ Coverage of central 95\% credible (confidence for Freq) intervals for ($\beta_0^{Ana,TRUE}$, $\sigma_{{\eta}^{y,Ana}}^{TRUE}$) under scenario \hbox{$\mathcal {S}1$}, increasing the level of informativeness of the PSUs (by increasing $\beta_{\pi,1}$). NA stands for not applicable; Freq does not include PSU-REs. \label{tab:coverage} } \end{table} \section{Application}\label{sec:applications} The National Health and Nutrition Examination Survey (NHANES) is designed to assess the health and nutritional status of the non-institutionalized civilian population living in one of the 50 U.S. states and Washington D.C. Although nationally representative, NHANES is designed to oversample specific subpopulations (\eg\ persons 60 and older, African Americans, Asians, and Hispanics) and follows a complex sampling design \citep{CDCG}. The NHANES sampling design is constructed as multi-stage with stages that include sampling strata and nested primary sampling units (PSUs) that further nest respondents. A PSU is a cluster or grouping of spatially contiguous counties, while a stratum is a region nesting multiple PSUs. NHANES publishes respondent-level marginal sampling weights based on the resulting respondent marginal inclusion probabilities in the sample after accounting for clustering. \begin{comment} The sampling weights measure the number of people in the population represented by that sampled individual, reflecting unequal probability of selection, nonresponse adjustment, and adjustment to independent population controls. \end{comment} The survey consists of both interviews and physical examinations. The NHANES interview includes demographic, socioeconomic, dietary, and health-related questions. The examination component consists of medical, dental, and physiological measurements, as well as laboratory tests. Data obtained from $J=30$ PSUs, corresponding to 15 strata with two PSUs per stratum, are released in two-year cycles. The example considers the dietary data. The analyses consider PSU information and sampling weights as provided by NHANES but do not incorporate strata information. \begin{comment} Priors are assigned in \eqref{eq:priors}, and posterior inference under non-frequentist methods is based on a posterior sample of the model parameters of size 10,000. The Gibbs sampler was run 10,000 iterations, after a burn-in period of another 10,000 iterations, on Stan. \end{comment} \subsection{Proportion of Body Fat and BMI} In \cite{10.3945/ajcn.111.025171}, hereafter referred to as H2012, the authors model the relationship between percentage of body fat (PBF) and body mass index (BMI in $kg/m^2$) using the simple linear regression (SLR) model $\hbox{PBF}=\beta_0+\beta_1 (1/\hbox{BMI})$. H2012 combine data from three NHANES two-year cycles: 1999-2000, 2001-2002 and 2003-2004. They fit an SLR model for each combination of sex (men and women), 3 age groups (18–29, 30–49, and 50–84 years of age), and 3 race-ethnicity groups (non-Hispanic Whites, non-Hispanic Blacks, and Mexican Americans). Table 3 of H2012 reports the estimated values and standard errors of $\beta_0$ and $\beta_1$.
Their table 4 reports the predicted PBF, $\hat{\beta}_0+\hat{\beta}_1/\hbox{BMI}$, for individuals with BMI levels of $18.5, 25, 30, 35$ and $40$ that represent BMI cutoffs for underweight, normal, overweight, and obesity classes I, II, and III, respectively. The PBF variable expresses a high rate of missing values. NHANES releases two datasets (per cycle) with five sets of imputed PBF values; the first dataset includes participants with observed PBF or with imputed PBF values with low variability, while the second data set participants with high variability in their imputed values. H2012 analysis considers sampling design and multiple imputation of missing values. In this section we mimic their analysis but with the 2005-2006 NHANES dataset. We use a multiple linear regression model where we control for stratification variables in H2012. Since PBF in the $2005-2006$ cycle is reported only for $18-69$ year old participants, we categorize age into $3$ groups: $18-29, 30-49,$ and $50-69$ years of age and excluded participants not in these age ranges. We also include two more race/ethnicity groups: ``Other Hispanic'' and ``Other Race - Including Multi-Racial''. As in H2012, we exclude participants with missing BMI, women who tested positive in pregnant test or who claimed to be pregnant (for which by design PBF is not measured), or, with PBF imputed values with high variability. Our final sample size is $n=4099$. The analysis model of the non-frequentist methods is the mixed effect linear regression with PBF as the response variable, along with the following predictors: (1/BMI), gender, age group and race ethnicity, with male, 18-29 age group and non-Hispanic White as reference groups, and PSU-REs. The frequentist analysis model is same (now fixed effect) model but without PSU-REs. We recall that $\bxy$ denotes predictors in the marginal model for $y$ in \eqref{eq:SLR_likelihood}, and construct, \begin{equation}\label{eq:bxyinfirstapplication} \begin{array}{rl} \bxy^t=\big(&1,1/\hbox{BMI},1(gender=\hbox{Female}), 1(Age\in [30,49]),1(Age\in [50,69])),\\ &1(Race/Eth=\hbox{MX-AME}),1(Race/Eth=\hbox{Other-Hisp}),\\ &1(Race/Eth=\hbox{NonHisp black}), 1(Race/Eth=\hbox{Other or Multiracial})\big) \end{array} \end{equation} with dimension $p+1=9$, where $1(A)$ denotes the indicator function of the individual in the set $A$. We analyze the dataset with the first set of PBF imputed values under the following comparator models used in the simulation study: Full.both, Full.y, Pseudo.w, Pseudo, Freq and Pop. We recall that we jointly model the response and sampling inclusion probabilities under Full.both and Full.y, with the response denoted as $y=\hbox{PBF}$, and we use the same predictors in the conditional model for the inclusion probabilities and the marginal model for the response such that $\bxp=\bxy$ in \eqref{eq:lonnormalpriorforpi}. In the implementation of pseudo.w we use sampling weights, $\sampled{w}_{\cdot j}$ of \eqref{eq:fullpseudo}, that sum individual sampling weights for those units nested in each PSU in order to exponentiate the random effects prior distribution. Our analyses consider PSU information and sampling weights as provided by NHANES but not strata information. We add the comparator method Freq.strata that does consider strata information, which would be expected to produce more optimistic confidence intervals, fitted using the R package Survey~\citep{lumley2019surveyR}. 
We recall that the NHANES design employs 15 strata with 2 PSUs per stratum such that frequentist inferences will be based on the Student-$t$ distribution with $df=\# PSU-\#strata-p$ degrees of freedom, equal to $df=21$ and $df=7$ under Freq and Freq.strata, respectively. The left panel in Figure \ref{fig:beta0beta1reference} compares violin plots and central 95\% credibility (or confidence) intervals for the expected PBF value for a person in the reference group with ``normal" BMI or $\hbox{BMI}=18.5$ (where $\hbox{BMI}<18.5$ is labeled as underweight), which represents uncertainty intervals of $\beta_0+\beta_1/18.5$. All point estimates are close to the value reported in Table 4 of H2012 of 14.5\% for this group with FULL.both and FULL.y at $14.7\%$ are closest to H2012. The right panel of Figure \ref{fig:beta0beta1reference} depicts the same point estimates and uncertainty intervals but now for non-Hispanic White woman with $\hbox{BMI}=18.5$, which are computed computed as the uncertainty intervals of $\beta_0+\beta_1/18.5+\beta_2$. Here, again, all CIs contain the PBF estimated in H2012 for this group of 26.9\%. In both figures, the Frequentist CIs (\ie\ Freq and Freq.strata) are much wider than the other methods, which indicates an inefficiency of this Freq.strata despite it's consideration of strata should produce smaller uncertainty intervals. By contrast, inference under Full-both and Full-y is similar to Pop indicating the possibility of a non-informative design. This is confirmed by the central 95\% CI for $\kappa_y, (-0.479,0.397)$ in FULL.both and FULL.y that contains 0 indicating a non-informative sample; more formally, $y_{ij}\perp \pi_{ij}\mid \bx_{ij}$. The posterior mean estimates for the plug-in Pseudo.w and Pseudo express slightly more bias (relative to the $14.5\%$ of H2012) because of the use of use of noisy weights, which are not necessary since the NHANES design is non-informative for PBF. The fully Bayesian methods (FULL.both, FULL.y), by contrast, performs weight smoothing to mitigate bias induced by noisy weights. Figure \ref{fig:ApplicationREy} displays violin plots of the posterior distribution of the standard deviation of the PSU-REs, $\sigma_{{\eta}^{y}}$ (in the marginal model for $y$), for the non-frequentist methods (as we recall that the frequentist comparator methods do not include PSU-REs). As before, the inference under Full.Both and Full.y are similar to Pop due to the non-informativeness of the sampling design for PBF. We now discuss inference under Full.Both. The estimate of correlation between the PBF of individuals in the same cluster (after controlling for BMI and other predictors) is low $E[\sigma^2_{{\eta}^{y}}/(\sigma^2_{{\eta}^{y}}+\sigma_y^2) \mid data]\approx 0.01553$. Table \ref{tab:Full.bothApp} shows the point estimates of the inference under Full.both when using the first set of PBF imputed values. The estimated correlation between the log of inclusion probabilities in the same cluster ($\approx 10\%$), as expected by the way the inclusion probabilities are built, is greater than zero, (\ie, $cor(\log \pi_{ij},\log \pi_{i^\prime j})=E[\sigma^2_{{\eta}^{\pi}}/(\sigma^2_{{\eta}^{\pi}}+\sigma_\pi^2)\mid data]\approx 0.0991$). But this fact has little impact in the inference since the sample is non-informative. 
Interpreting the coefficients in the model for $y$ (top rows in Table \ref{tab:Full.bothApp}), after controlling for BMI, women have, on average, 12.3\% higher PBF than men, PBF increases with age and Other or multiracial and MX-AME people have the highest PDF followed by white and other Hispanic while nonhispanic blacks have the lowest PBF. \end{comment} \begin{comment} Since we are using just one set of imputed values we are underestimating the SE in the discussion above. We adjust our estimates and standard error following \cite{toutenburg1990rubin} \citep[See also][p.448]{heeringa2017applied}. In short, assume we have $M$ completed data sets with a missing imputation algorithm, let $\tilde{\theta}_m$ and $var(\tilde{\theta}_m)$ the points estimates of the generic parameter $\theta$ and its variance using completed dataset $m$, the point estimate of $\theta$ is $\bar{\theta}:=(1/M)\sum_{m=1}^M \tilde{\theta}_m$ with $var(\bar{\theta})=U+[(M+1)/M]B$, with $U:=(1/M)\sum_{m=1}^M var(\tilde{\theta}_m)$ the within imputation variance and $B:=[1/(M-1)] \sum_{m=1}^M (\tilde{\theta}_m-\bar{\theta})^2$ the between imputation variance. In our example $m=1,\dots,5$ corresponding to the five sets of PMF of imputed values, results are shown in table \ref{tab:MultImp}. The point estimates under all models are similar. The confidence intervals under the frequentist approaches tend to be wider that the confidence intervals under all other approaches. This phenomenon was observed under simulation scenario d (See Table \ref{tab:nostageinformative}); when the sample is non-informative, the frequentist confidence intervals are wider than the credible intervals under all other methods here considered. \end{comment} \begin{comment} When implementing our comparator frequentist approaches \cite{lohr2019sampling} \citep[see also][p 62]{heeringa2017applied} recommends to fit the model with and without weights. If the point and standard error estimates are different between the two, they recommend that the analyst should explain the difference based on the construction of the sampling weights, \eg\ oversampling of certain minority or age groups. In our example, inference under Freq and Pop generate similar point estimated but Freq yields, in general, greater standard errors. The analyst needs to decide and justify which model to use inference under frequentist modeling; though under NHANES guidelines \citep{CDCNHANESDemoD} the weights should be used for all NHANES 2005-2006 analyses. By contrast, the fully Bayesian approaches do not require the data analyst to make this choice about whether to use the weighted or unweighted estimates. Inference for model parameter $\kappa_y$, under Full.Both or Full.y, informs the analyst if the design is informative; $\kappa_y=0$ implies non-informative design and the magnitude of $\kappa_y$ is a measure of informativeness (in the scale of $y$). Full.both and Full.y correct inference when the design is informative, but also mitigate against bias induced by weights when the sampling design is non-informative (through weight smoothing), as in this particular application, and produces results that are similar to the Pop method that ignores the sampling weights. 
Full.Both, is the only method that, as shown in Subsection \ref{subsection:simresults}, also provides appropriate model uncertainty estimation when the PSUs are informative (not the case in this application); e.g., when $y_{ij}\not\perp j\mid \bx_{ij}$ for some $i$ and $j$.\end{comment} \begin{comment} \begin{table}[ht] \centering \begin{tabular}{lrrrr} \hline Parameter & mean & sd & 2.5\% & 97.5\% \\ \multicolumn{5}{c} {Parameters for $y_{ij}\mid \cdots$ }\\ \hline intercept & 0.518 & 0.003 & 0.512 & 0.524 \\ $1/\hbox{BMI}$ & -6.862 & 0.071 & -6.999 & -6.721 \\ gender: Female & 0.123 & 0.001 & 0.121 & 0.125 \\ Age: 30-49 & 0.005 & 0.001 & 0.002 & 0.007 \\ Age: 50-69 & 0.022 & 0.001 & 0.019 & 0.025 \\ Race/eth: MX-AME & 0.004 & 0.002 & 0.001 & 0.007 \\ Race/eth: Other-Hisp & -0.001 & 0.003 & -0.007 & 0.005 \\ Race/eth: Non Hisp Black & -0.018 & 0.002 & -0.021 & -0.015 \\ Race/eth: Other-Multiracial & 0.007 & 0.003 & 0.002 & 0.013 \\ $\sigma_{{\eta}^{y}}$ & 0.004 & 0.001 & 0.003 & 0.006 \\ $\sigma_y$ & 0.035 & 0.000 & 0.034 & 0.036 \\ \hline \multicolumn{5}{c} {Parameters for $\pi_{ij}\mid y_{ij},\cdots$ }\\ $\hbox{PBF}$ & -0.042 & 0.223 & -0.479 & 0.397 \\ Intercept & -0.429 & 0.128 & -0.682 & -0.177 \\ $1/\hbox{BMI}$ &2.240 & 1.850 & -1.375 & 5.812 \\ gender: Female & -0.025 & 0.031 & -0.086 & 0.038 \\ Age: 30-49 & -0.549 & 0.020 & -0.587 & -0.510 \\ Age: 50-69 & -0.207 & 0.021 & -0.249 & -0.167 \\ Race/eth: MX-AME & 1.589 & 0.025 & 1.539 & 1.637 \\ Race/eth: Other-Hisp & 0.454 & 0.045 & 0.366 & 0.542 \\ Race/eth: Non Hisp Black & 1.277 & 0.023 & 1.231 & 1.323 \\ Race/eth: Other-Multiracial & 0.367 & 0.040 & 0.290 & 0.446 \\ $\sigma_{{\eta}^{\pi}}$ & 0.166 & 0.025 & 0.124 & 0.221 \\ $\sigma_{\pi}$ & 0.500 & 0.006 & 0.490 & 0.512 \\ \hline \end{tabular} \caption{Inferende under Full.both using the first set of NHANES imputed values of PBF. Column headers: mean, sd, 2.5\% and 97.5\% denote the posterior expected value, standard deviation, and 0.025 and 97.5 quantiles. \label{tab:Full.bothApp_To_delete_later} } \end{table} \end{comment} \begin{comment} \begin{table}[ht] \centering \begin{tabular}{l| rrr | rrr} Model & \multicolumn{3}{c|} {for $y_{ij}\mid \cdots$ }& \multicolumn{3}{c} {for $\pi_{ij}\mid y_{ij},\cdots$ }\\ Parameter & mean & 2.5\% & 97.5\% & mean & 2.5\% & 97.5\% \\ \hline $\hbox{PBF}$ & $-\ \ $ &$-\ \ $ &$-\ \ $ & -0.042 & -0.479 & 0.397 \\ intercept & 0.518 & 0.512 & 0.524 & -0.429 & -0.682 & -0.177 \\ $1/\hbox{BMI}$ & -6.862 & -6.999 & -6.721 &2.240 & -1.375 & 5.812 \\ gender: Female & 0.123 & 0.121 & 0.125 & -0.025 & -0.086 & 0.038 \\ Age: 30-49 & 0.005 & 0.002 & 0.007 & -0.549 & -0.587 & -0.510 \\ Age: 50-69 & 0.022 & 0.019 & 0.025 & -0.207 & -0.249 & -0.167 \\ Race/eth: MX-AME & 0.004 & 0.001 & 0.007 & 1.589 & 1.539 & 1.637 \\ Race/eth: Other-Hisp & -0.001 & -0.007 & 0.005 & 0.454 & 0.366 & 0.542 \\ Race/eth: Non Hisp Black & -0.018 & -0.021 & -0.015 & 1.277 & 1.231 & 1.323 \\ Race/eth: Other-Multiracial & 0.007 & 0.002 & 0.013 & 0.367 & 0.290 & 0.446 \\ PSU-RE SD & 0.004 & 0.003 & 0.006 & 0.166 & 0.124 & 0.221 \\ Error SD & 0.035 & 0.034 & 0.036 & 0.500 & 0.490 & 0.512\\ \hline \end{tabular} \caption{Inference under Full.both using the first set of NHANES imputed values of PBF. Column headers: mean, 2.5\% and 97.5\% denote the posterior expected value, and 0.025 and 97.5 quantiles. 
PSU-RE SD (Error SD) represents the standard deviation of the PSU specific random effect (of the error), \ie, $\sigma_{{\eta}^{y}}$ (and $\sigma_y$) in the model for $y_{ij\mid \dots}$ and $\sigma_{{\eta}^{\pi}}$ (and $\sigma_{\pi}$) in the model for $\pi_{ij}\mid y_{ij},\dots$ \label{tab:Full.bothApp} } \end{table} \end{comment} \begin{comment} \begin{table}[ht] \centering \begin{tabular}{rrrrr} \hline & Estimate & Std. Error & t value & Pr($>$$|$t$|$) \\ \hline (Intercept) & 0.5174 & 0.0039 & 132.39 & 0.0000 \\ InvBMI & -6.8402 & 0.1029 & -66.44 & 0.0000 \\ gender\_Fem & 0.1219 & 0.0010 & 124.86 & 0.0000 \\ X29.48 & 0.0040 & 0.0019 & 2.11 & 0.0474 \\ X49.Inf & 0.0209 & 0.0018 & 11.41 & 0.0000 \\ MX.AME & 0.0040 & 0.0023 & 1.74 & 0.0972 \\ Other.Hisp & 0.0011 & 0.0049 & 0.22 & 0.8267 \\ NONHisp.Black & -0.0153 & 0.0020 & -7.65 & 0.0000 \\ Other.Multiracial & 0.0102 & 0.0034 & 3.02 & 0.0066 \\ \hline \end{tabular} \caption{Freq fitted values using the first set of imputed values of PBF.} \end{table} \end{comment} \begin{comment} \begin{table}[ht] \centering \begin{tiny} \begin{tabular}{rllllllll} \hline Parameter & FULL.both & FULL.y & Pseudo.w & Pseudo & Freq & FreqwStrata & Pop \\ \hline $\beta_0$ & 0.52(0.0035) & 0.52(0.0035) & 0.519(0.0036) & 0.519(0.0036) & 0.519(0.004) & 0.519(0.0036) & 0.52(0.0035) \\ $\beta_1$ & -6.888(0.074) & -6.887(0.0745) & -6.872(0.0806) & -6.869(0.079) & -6.882(0.1021) & -6.882(0.0939) & -6.886(0.074) \\ $\beta0+\beta_1/18.5$ & 0.147(0.0019) & 0.147(0.0019) & 0.148(0.0021) & 0.148(0.002) & 0.147(0.0024) & 0.147(0.0024) & 0.147(0.0019) \\ $\sigma_y$ & 0.035(5e-04) & 0.035(4e-04) & 0.035(5e-04) & 0.035(5e-04) & 0.001(NA) & 0.001(NA) & 0.035(5e-04) \\ $\sigma_{{\eta}^{y}}$ & 0.004(9e-04) & 0.004(9e-04) & 0.005(0.001) & 0.005(0.001) & NA(NA) & NA(NA) & 0.004(9e-04) \\ \hline \end{tabular} \end{tiny} \caption{\label{tab:MultImp} Point estimate (SE) after adjusting for multiple imputation. L may need to add all the other parameters. Luis needs to rerun this to make a little mistake in the age groups Results very likely to NOT change} \end{table} \end{comment} \begin{comment} \begin{figure} \caption{\label{fig:beta0beta1reference} \label{fig:beta0beta1reference} \end{figure} \end{comment} \begin{comment} \begin{figure} \caption{\label{fig:ApplicationREy} \label{fig:ApplicationREy} \end{figure} \end{comment} In this application, we estimate the average kilocalories (kcal) consumed in each one of the gender, age and ethnicity groupings. We use dietary data from the 2015-2016 NHANES cycle. Each participant answers a 24-hour dietary recall interview in two days: Day 1 and Day 2. The Day 1 recall interview takes place when the participant visits the Mobile Exam Center (MEC) unit where other NHANES measurements are taken. The Day 2 recall interview is collected by telephone and it is scheduled for 3 to 10 days later (See \cite{CDCF} for more details). Based on theses interviews NHANES provides datasets with estimates of kilocalories (and many nutrients) ingested by the participant 24 hours before the interview along with their dietary sampling weight. In this application, we consider the Day 1 dataset and the sampling weights that come in it. There are 8,506 participants who completed the Day 1 dietary recall, of which this analysis considers the $n=8,327$ with positive sampling weights or, equivalently, with recall status labeled ``complete and reliable'' by NHANES \cite{CDCF}. 
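As a rough illustration of this data-preparation step, the following minimal sketch selects the complete and reliable Day 1 recalls and builds the response used below; it assumes the Day 1 total-nutrient file has been downloaded as DR1TOT\_I.XPT and that the kilocalorie and dietary-weight columns are named DR1TKCAL and WTDRD1 (column names and the exact filtering rule should be checked against the NHANES documentation).
\begin{verbatim}
# Sketch: keep Day 1 recalls with positive dietary weight (n = 8,327 expected)
# and construct the response y = log(kcal + 1).
import numpy as np
import pandas as pd

day1 = pd.read_sas("DR1TOT_I.XPT", format="xport")  # Day 1 total nutrient intakes
day1 = day1[day1["WTDRD1"] > 0]                     # positive dietary weights only
day1["y"] = np.log(day1["DR1TKCAL"] + 1)            # response in the analysis model
print(len(day1))
\end{verbatim}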
The underlying analysis model for the non-frequentist methods (FULL.both, FULL.y, Pseudo and Pop) is the mixed effect linear regression with response $y=\log(\hbox{kcal}+1)$, where kcal is the NHANES estimate of kilocalorie consumption based on the Day 1 recall interview; with predictors gender, age group and race/ethnicity; and with PSU-REs. The frequentist analysis model, Freq, is the same (now fixed effect) model but without PSU-REs. Age is categorized into $5$ groups: $[0,8],[9,17],[18,29],[30,49]$ and $[50,80]$ years old, while race/ethnicity categories are non-Hispanic White, Mexican American, other-Hispanic, non-Hispanic Black, and other or multiracial. Male, the $[0,8]$ age group and non-Hispanic White are the reference groups. We recall that $\bxy$ denotes the predictors in the marginal model for $y$ in \eqref{eq:SLR_likelihood}, and construct
\begin{equation} \begin{array}{rl} \bxy^t=\big(&1,1(gender=\hbox{Female}),\\ &1(Age\in [9,17]),1(Age\in [18,29]), 1(Age\in [30,49]),1(Age\in [50,80]),\\ &1(Race/Eth=\hbox{Mexican American}),1(Race/Eth=\hbox{other Hispanic}),\\ &1(Race/Eth=\hbox{non-Hispanic Black}), 1(Race/Eth=\hbox{other or multiracial})\big) \end{array} \end{equation}
with dimension $p+1=10$, where $1(A)$ denotes the indicator function of the individual being in the set $A$. In this application, we set $\bxp=\bxy$ in \eqref{eq:lonnormalpriorforpi}. For the non-frequentist methods, priors are assigned in \eqref{eq:priors}, and posterior inference is based on a posterior sample of the model parameters of size 10,000. The MCMC sampler was run for 10,000 iterations, after a burn-in period of another 10,000 iterations, in Stan. Relatively fewer draws are required when using Stan's Hamiltonian Monte Carlo (HMC) than with a Gibbs sampler because the Stan draws are less correlated.
Figure \ref{fig:beta0dr1tkcal} depicts violin plots of the estimated mean daily kcal consumption for White males in age groups $[0,8]$ (left) and $[30,49]$ (right). More specifically, the left panel depicts violin plots of the posterior distribution of $\exp(\beta_0)-1$ for the set of non-frequentist methods. For the frequentist method (Freq), it depicts the distribution of $\exp(\beta_0)-1$ with $\beta_0$ drawn from $\hat{\beta}_0+t\times SE(\hat{\beta}_0)$, where $t\sim \hbox{Student-t}$ with $J-1=30-1=29$ degrees of freedom. The right panel depicts the violin plot of the posterior distributions of $\exp(\beta_0+\beta_4)-1$ for the non-frequentist methods. It also depicts the violin plot of $\exp(\beta_0+\beta_4)-1$ for Freq, though with $(\beta_0,\beta_4)$ drawn from the distribution $(\hat{\beta}_0,\hat{\beta}_4)^t+ \hat{\Sigma}_{0,4}^{1/2} \mathbf{t}$, where the random vector $\mathbf{t}=(t_0,t_4)^t$ has entries $t_0,t_4 \iid \hbox{Student-t}$ with 29 degrees of freedom, and $\hat{\Sigma}_{0,4}^{1/2} \hat{\Sigma}_{0,4}^{1/2} =\hat{\Sigma}_{0,4}$, with $\hat{\Sigma}_{0,4}$ the estimated variance-covariance matrix of $(\hat{\beta}_0,\hat{\beta}_4)^t$.
Figure \ref{fig:beta0dr1tkcal} shows that for these groups, inferences under FULL.both and FULL.y are similar to one another but differ from those under Pop. The FULL.both central 95\% credible interval (and also the FULL.y one, not shown) for $\kappa_y$, $(-0.110,-0.048)$, does not contain zero, indicating that the sampling design is informative. FULL.both and FULL.y correct for this.
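The Freq violin plots just described are based on simulating coefficient draws from a Student-$t$ approximation to their sampling distribution. The following is a minimal sketch of that Monte Carlo construction for $\exp(\beta_0+\beta_4)-1$; the point estimates and their covariance matrix are placeholders (to be taken from the fitted frequentist model), and the Cholesky factor is used as one convenient choice of the square root $\hat{\Sigma}_{0,4}^{1/2}$.
\begin{verbatim}
# Sketch: Monte Carlo interval for exp(beta0 + beta4) - 1 under Freq.
# (beta0, beta4) is drawn as betahat + Sigma^{1/2} t with iid Student-t entries.
import numpy as np

rng = np.random.default_rng(0)
betahat = np.array([7.26, 0.40])            # placeholder (beta0_hat, beta4_hat)
Sigma   = np.array([[2.5e-4, -1.0e-4],      # placeholder covariance of (beta0, beta4)
                    [-1.0e-4, 7.0e-4]])
df, ndraws = 29, 10_000                     # J - 1 = 29 degrees of freedom

L = np.linalg.cholesky(Sigma)               # one choice of square root of Sigma
t = rng.standard_t(df, size=(ndraws, 2))    # iid Student-t entries
draws = betahat + t @ L.T                   # simulated (beta0, beta4) pairs
kcal = np.exp(draws.sum(axis=1)) - 1        # implied mean kcal, [30,49] age group
print(np.percentile(kcal, [2.5, 97.5]))     # central 95% uncertainty interval
\end{verbatim}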
The point estimates under Pseudo and Freq are close to one another, but differ from those for FULL.both and FULL.y, indicating that the weight smoothing provided by the fully Bayesian methods makes them more robust to noise in the weights that may otherwise degrade point estimation. \begin{comment} Table \ref{table:Fullboth_kcal} displays inference for the model parameters under Full.Both. Table \ref{table:Fullboth_kcal} confirms the expected pattern of kcal consumption; it increases with age when young, plateau at middle age and decreases in the oldest age group. Table \ref{table:Fullboth_kcal} also shows that, in average, White people consumes more kcals than each other race/ethnicity groups.\footnote{T: maybe White people are taller. We could have controlled for height, when controlling for BMI the design becomes non-informative.} In contrast, Freq.strata (See Table \ref{table:Freq.strata_kcal} in the appendix subsection \ref{subsec:kilocalappdetails}) concludes that the only group with, statistically significant (p-value<0.05) lower kcal consumption than the White people group is the non-Hispanic Black people group. \end{comment}
Figure \ref{fig:sigma_ydr1tkcal} depicts the violin plots for the standard deviation of the PSU-RE, $\sigma_{{\eta}^{y}}$, under the non-frequentist methods (Freq does not model PSU-REs). Inference for this parameter under Pseudo differs from that under the two fully Bayesian methods. Figure \ref{fig:PSUREinresponse15_27dr1tkcal} shows that the posterior distributions of individual PSU random effects in the marginal response model (PSU-REs) also differ. The figure focuses on two particular random effects, ${\eta}^{y}_{15}$ and ${\eta}^{y}_{27}$, which are representative of the general pattern we see across the random effects; in particular, the fully Bayesian methods express less estimation uncertainty than does Pseudo, indicating a greater estimation efficiency from jointly modeling the response and inclusion probabilities.
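Once the Stan fit is available, summaries such as those displayed in the violin plots, and the implied within-PSU correlation $\sigma^2_{{\eta}^{y}}/(\sigma^2_{{\eta}^{y}}+\sigma^2_y)$, can be computed directly from the posterior draws. A minimal sketch is given below; it assumes the draws of $\sigma_{{\eta}^{y}}$ and $\sigma_y$ have already been extracted into NumPy arrays (the file names are placeholders).
\begin{verbatim}
# Sketch: posterior summaries for the PSU-RE standard deviation and the
# within-PSU correlation sigma_eta^2 / (sigma_eta^2 + sigma_y^2).
import numpy as np

def summarize(draws, label):
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{label}: mean = {draws.mean():.4f}, 95% CI = ({lo:.4f}, {hi:.4f})")

sigma_eta_y = np.load("sigma_eta_y_draws.npy")  # placeholder: draws of sigma_eta^y
sigma_y     = np.load("sigma_y_draws.npy")      # placeholder: draws of sigma_y

summarize(sigma_eta_y, "sigma_eta^y")
icc = sigma_eta_y**2 / (sigma_eta_y**2 + sigma_y**2)
summarize(icc, "within-PSU correlation")
\end{verbatim}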
\begin{figure}
\caption{Violin plots of the estimated mean daily kcal consumption for White males in age groups $[0,8]$ (left) and $[30,49]$ (right), under each comparator method.}
\label{fig:beta0dr1tkcal}
\end{figure}
\begin{figure}
\caption{Violin plots of the posterior distribution of the standard deviation of the PSU-REs in the marginal model for the response, $\sigma_{{\eta}^{y}}$, under the non-frequentist methods.}
\label{fig:sigma_ydr1tkcal}
\end{figure}
\begin{figure}
\caption{Posterior distributions of the individual PSU random effects ${\eta}^{y}_{15}$ and ${\eta}^{y}_{27}$ in the marginal response model, under the non-frequentist methods.}
\label{fig:PSUREinresponse15_27dr1tkcal}
\end{figure}
\begin{comment} \begin{table}[ht] \centering \begin{tabular}{rrrrr} \hline & mean & sd & 2.5\% & 97.5\% \\ \hline \multicolumn{5}{c} {Parameters for $y_{ij}\mid \cdots$ }\\ intercept & 7.242 & 0.016 & 7.212 & 7.273 \\ gender:Female & 0.009 & 0.010 & -0.011 & 0.030 \\ age 9-17 & 0.295 & 0.017 & 0.261 & 0.329 \\ age 18-29 & 0.394 & 0.019 & 0.358 & 0.431 \\ age 30-49 & 0.401 & 0.016 & 0.369 & 0.433 \\ age.50-80 & 0.269 & 0.015 & 0.239 & 0.299 \\ MX.AME & -0.031 & 0.015 & -0.060 & -0.002 \\ Other.Hisp & -0.049 & 0.017 & -0.083 & -0.016 \\ NONHisp.Black & -0.053 & 0.014 & -0.082 & -0.025 \\ Other.Multiracial& -0.054 & 0.017 & -0.087 & -0.022 \\ $\sigma_{{\eta}^{y}}$ & 0.012 & 0.007 & 0.001 & 0.028 \\ $\sigma_y$ & 0.473 & 0.004 & 0.466 & 0.481 \\ \hline \multicolumn{5}{c} {Parameters for $\pi_{ij}\mid y_{ij},\cdots$ }\\ log(kcal+1) & -0.079 & 0.016 & -0.110 & -0.048 \\ Intercept & 0.213 & 0.117 & -0.015 & 0.443 \\ gender:Female & 0.005 & 0.015 & -0.024 & 0.034 \\ age 9-17 & -0.249 & 0.025 & -0.299 & -0.199 \\ age 18-29 & -0.769 & 0.028 & -0.823 & -0.715 \\ age 30-49 & -0.710 & 0.025 & -0.759 & -0.660 \\ age 50-80 & -0.350 & 0.022 & -0.394 & -0.307 \\ MX.AME & 1.103 & 0.022 & 1.061 & 1.146 \\ Other.Hisp & 1.215 & 0.024 & 1.167 & 1.262 \\ NONHisp.Black & 1.128 & 0.021 & 1.087 & 1.168 \\ Other.Multiracial & 0.991 & 0.023 & 0.945 & 1.037 \\ $\sigma_{{\eta}^{\pi}}$ & 0.026 & 0.013 & 0.003 & 0.052 \\ $\sigma_\pi$ & 0.678 & 0.005 & 0.668 & 0.689 \\ \hline \end{tabular} \caption{ \label{table:Fullboth_kcal} Parameter estimates for the regression model with response $\log(kcal+1)$ and predictors gender, race/ethnicity and age group using FULL.Both } \end{table} \end{comment}
{\FloatBarrier}
\section{Discussion}
We have extended our work in LS2019\ to include PSU information in a model-based, fully Bayesian analysis of informative samples. The extension consists of replacing the fixed effect model by a mixed effect model that includes PSU/cluster-indexed random effects in the marginal model for $y$ and the conditional model for $\pi\mid y$ to capture dependence induced by the clustering structure. We have shown via simulation that our fully Bayesian approach yields correct uncertainty quantification, that is, CIs with coverage close to the nominal level, including for the random effects variances. Competing methods fail to do so in at least one simulation scenario. In particular, FULL.both is the only appropriate method, of all those considered here, when the sampling design is informative for the selection of PSUs. The results in simulation scenario \Sd, where the design is not informative, revealed that the method is also robust to noise in the weights. The fully Bayesian methods proposed here are mixed effect linear models that not only take into account the possible association among individuals within the same cluster but also, in contrast to the design-based frequentist methods, quantify this association; that is, the within-PSU correlation can be estimated. We demonstrated our method with an NHANES dietary dataset whose sampling design includes stratification. The next natural step is to include strata information in the analysis.
This application only analyzed data from the Day 1 dietary questionnaire. To analyze Day 1 and Day 2 data with one model, we need to adapt our approach to repeated measures. This is another current line of research. To implement our Bayesian method, we derived an \emph{exact} likelihood for the observed sample. In principle, this likelihood can also be used for maximum likelihood estimation, opening the door to model-based frequentist inference. Our approach requires the modeler to specify a distribution for $\pi_i\mid y_i,\cdots$. Estimation requires the computation of an expected value, the denominator in \eqref{eq:IScorrectionPSU}. We assume a lognormal conditional likelihood for the marginal inclusion probability, given the response, with a linear relationship between the location parameter and the response, both of which facilitate use of Theorem \ref{th:closeformPSU} to obtain a closed form for this expected value. Our simulation study showed that the Bayesian method is robust against misspecification of these assumptions. Future work is needed to relax the conditions of Theorem \ref{th:closeformPSU}. To sum up, we have presented the first model-based Bayesian estimation approach that produces correct uncertainty quantification both when sampling is informative for individuals within the same PSU and when the selection of the PSUs themselves is informative.
\appendix
\begin{comment} \subsection{Application Details for PBF and BMI analysis}\label{subsec:applicationdetails} \subsubsection{Data used} Three publicly available datatset were downloaded from the NHANES website. \begin{itemize} \item Demographic Variables \& Sample Weights: DEMO\_D.XPT (\cite{CDCNHANESDemoD}, \url{https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Demographics&CycleBeginYear=2005}). Used columns: \begin{itemize} \item sdmvstra: Stratum to which the participant belongs \item sdmvpsu: PSU indicator \item wtmec2yr: Full Sample 2 Year MEC Exam Weight \item riagendr: Sex \item ridageyr: age in years \item ridreth1:Race/ethnicity \end{itemize} \item Body Measures: BMX\_D.XPT\\ (\url{https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Examination&CycleBeginYear=2005}). Used columns: \begin{itemize} \item bmxbmi: BMI ($kg/m^2$) \end{itemize} \item Dual-Energy X-ray Absorptiometry - Whole Body: dxx\_d.XPT (\url{https://wwwn.cdc.gov/Nchs/Nhanes/Dxa/Dxa.aspx}) \begin{itemize} \item dxdtopf: Total percent body fat (\%) \item \_MULT\_: Imputation Version in $1,\dots,5$. For Figures \ref{fig:beta0beta1reference} and \ref{fig:ApplicationREy} and Table \ref{tab:Full.bothApp} we use the first set of imputations. This is, rows with \_MULT\_=1. \end{itemize} \end{itemize} \subsection{Application Details for Daily Kilocalories Analysis}\label{subsec:kilocalappdetails} \subsubsection{Dataset} Two publicly available datatset were downloaded from the NHANES website. \begin{itemize} \item Demographic Variables and Sample Weights: DR1TOT\_I.XPT\\ (\url{https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Dietary&CycleBeginYear=2015}). Columns used \begin{itemize} \item sdmvstra: Stratum to which the participant belongs \item sdmvpsu: PSU indicator \item riagendr: Sex \item ridageyr: age in years \item ridreth1:Race/ethnicity \end{itemize} \item Dietary Interview - Total Nutrient Intakes, First Day: DR1TOT\_I.XPT\\ (\url{https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Dietary&CycleBeginYear=2015}).
Columns used \begin{itemize} \item dr1tkcal: Energy (kcal) \item wtdrd1: Dietary day one sample weight \end{itemize} \end{itemize} Notice that, in this example, following the NHANES guidelines for the analisys of dietary data, we are not using the Full Sample 2 Year MEC Exam Weights ``wtmec2yr'' included in the demographic dataset DR1TOT\_I.XPT but instead dietary day one sample weights ``wtdrd1'' constructed based on ``MEC sample weights and further adjusting for (a) the additional non-response and (b) the differential allocation by weekdays (Monday through Thursday), Fridays, Saturdays and Sundays for the dietary intake data collection'' \citep{CDCF}. \end{comment}
\section{Quantity between Brackets in Augmented Pseudolikelihood in \eqref{eq:fullpseudo} Matches Likelihood under Weighted Linear Regression \label{subsec:pseudoandweightedreg}}
The weighted linear regression model is ${y_{ij}\mid \bxy_{ij},\boldsymbol\theta,{\eta}^{y}_j}\sim\text{normal}\left(\bxy_{ij}^t\boldsymbol{\beta}+{\eta}^{y}_j ,\sigma_y^2/w_{ij}\right)$ with $w_{ij}>0$ known. Using the fact that
$$\left[\text{normal}(y\mid \mu,\sigma^2)\right]^w= \frac{1}{w^{1/2}(2\pi)^{(w-1)/2}} \frac{1}{(\sigma^2)^{(w-1)/2}} \times \text{normal}(y\mid \mu,\sigma^2/w), $$
we obtain that the expression between brackets in \eqref{eq:fullpseudo}, $\prod_{j=1}^J \prod_{i=1}^{n_j} p(\sampled{y}_{ij}\mid \theta)^{\sampled{w}_{ij}}$, equals
\begin{align*} \prod_{j=1}^J \prod_{i=1}^{n_j} \left[\text{normal}\left(\sampled{y}_{ij}\mid \bxy_{ij}^t\boldsymbol{\beta}+{\eta}^{y}_j ,\sigma_y^2 \right) \right]^{\sampled{w}_{ij}} \propto& \frac{1}{(\sigma_y^2)^{\left(\sum_{j,i}[\sampled{w}_{ij}-1]\right)/2}}\\ &\times \left[\prod_{j=1}^J \prod_{i=1}^{n_j} \text{normal}\left(\sampled{y}_{ij}\mid \bxy_{ij}^t\boldsymbol{\beta}+{\eta}^{y}_j ,\sigma_y^2/\sampled{w}_{ij} \right) \right] \end{align*}
but, by construction, $\sum_{j=1}^J\sum_{i=1}^{n_j} \sampled{w}_{ij}=n$, and therefore the exponent of $\sigma_y^2$ in the denominator of the expression above is zero, so the quantity between brackets is the likelihood of the weighted linear regression model.
\item Address that failure to include random effects will produce overly confident credibility intervals that will fail to achieve nominal coverage even under non-informative sampling. The likelihood for the sample must still account for the dependence induced by the sampling design. \item Introduce the Pseudo posterior (PP) comparator model that includes a random effects term. \item Discuss both non-informative and informative sampling of groups. Under the latter, introduce the pseudo posterior that exponentiates both the likelihood *and* prior for random effects by the sampling weights. The pseudo prior for the random effects functions as a pseudo likelihood for the generating parameters of the random effects. \end{enumerate} \item Simulation study \begin{enumerate} \item Non-informative sampling of random effects \item Informative sampling of random effects \begin{enumerate} \item Introduce our diagnostic procedure to differentiate one from the other to decide whether to employ a RE term solely in the model for y or also for $pi\mid y$. \end{enumerate} \end{enumerate} \item Application \begin{enumerate} \item Run the diagnostic and decide on the fully Bayesian model. Compare the PP (with and without a random effects term and both weights for the random effects prior). \end{enumerate} \end{enumerate} \end{comment} \bibhang=1.7pc \bibsep=2pt \fontsize{9}{14pt plus.8pt minus .6pt}\selectfont \renewcommand\bibname{\large \bf References} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\defURL{URL}\fi \end{document}
\begin{document} \title{\sc Inflations of Geometric Grid Classes of Permutations} \pagestyle{main} \begin{abstract} Geometric grid classes and the substitution decomposition have both been shown to be fundamental in the understanding of the structure of permutation classes. In particular, these are the two main tools in the recent classification of permutation classes of growth rate less than $\kappa\approx2.20557$ (a specific algebraic integer at which infinite antichains begin to appear). Using language- and order-theoretic methods, we prove that the substitution closures of geometric grid classes are partially well-ordered, finitely based, and that all their subclasses have algebraic generating functions. We go on to show that the inflation of a geometric grid class by a strongly rational class is partially well-ordered, and that all its subclasses have rational generating functions. This latter fact allows us to conclude that every permutation class with growth rate less than $\kappa$ has a rational generating function. This bound is tight as there are permutation classes with growth rate $\kappa$ which have nonrational generating functions. \end{abstract} \title{\sc Inflations of Geometric Grid Classes of Permutations} \section{Introduction}\label{infinite-simples-intro} The celebrated proof of the Stanley--Wilf Conjecture by Marcus and Tardos~\cite{marcus:excluded-permut:} establishes that all nontrivial permutation classes have at most exponential growth. A prominent line of subsequent research has focused on determining the possible growth rates of these classes. In particular, Vatter~\cite{vatter:small-permutati:} characterised all growth rates up to \[ \kappa=\mbox{the unique real root of $x^3-2x^2-1$}\approx 2.20557. \] The number $\kappa$ is the threshold of a sharp phase transition: there are only countably many permutation classes of growth rate less than $\kappa$, but uncountably many of growth rate $\kappa$. Furthermore, it is the first growth rate at which permutation classes may contain infinite antichains, which in turn is the cause of much more complicated structure. For this reason we single out classes of growth rate less than $\kappa$ as \emph{small}. In this work we elucidate the enumerative structure of small permutation classes, essentially completing this research programme by proving that \emph{all small permutation classes have rational generating functions}. Our work combines and extends two of the most useful techniques for analysing the structure of permutation classes: geometric grid classes and the substitution decomposition. The conclusion about small permutation classes is obtained from a general enumerative result showing that the inflation of a geometric grid class by a strongly rational class is itself strongly rational. This also places Theorem 3.5 of Albert, Atkinson, and Vatter~\cite{albert:subclasses-of-t:} in a wider theoretical context. We introduce our results gradually, and along the way prove another structural result of independent interest: the substitution closure of a geometric grid class is partially well-ordered, finitely based, and all its subclasses have algebraic generating functions. Thereby, we generalise one of the main theorems of Albert and Atkinson~\cite{albert:simple-permutat:} to a more natural and applicable setting. For the rest of the introduction, we give just enough notation to motivate and precisely state our main results. 
Sections~\ref{sec-inflate} and \ref{sec-geomgrid} offer a more thorough review of the substitution decomposition and geometric grid classes. Our new results are proved in Sections~\ref{sec-infinite-finite-bases-pwo}--\ref{sec-spc-rational}, and Section~\ref{sec-conclusion} concludes by outlining several directions for further investigation. Given permutations $\pi$ and $\sigma$, we say that $\pi$ \emph{contains} $\sigma$, and write $\sigma\le\pi$, if $\pi$ has a subsequence $\pi(i_1)\cdots\pi(i_k)$ of length $k$ which is order isomorphic to $\sigma$; otherwise, we say that $\pi$ \emph{avoids} $\sigma$. For example, $\pi=391867452$ (written in list, or one-line notation) contains $\sigma=51342$, as can be seen by considering the subsequence $\pi(2)\pi(3)\pi(5)\pi(6)\pi(9)=91672$. A \emph{permutation class} is a downset, say $\mathcal{C}$, of permutations under this order; i.e., if $\pi\in\mathcal{C}$ and $\sigma\le\pi$, then $\sigma\in\mathcal{C}$. For any permutation class $\mathcal{C}$ there is a unique, possibly infinite, antichain $B$ such that \[ \mathcal{C}=\operatorname{Av}(B)=\{\pi: \pi \not \geq\beta\mbox{ for all } \beta \in B\}. \] This antichain $B$, which consists of all the minimal permutations \emph{not} in $\mathcal{C}$, is called the \emph{basis} of $\mathcal{C}$. If $B$ happens to be finite, we say that $\mathcal{C}$ is \emph{finitely based}. For $n\in\mathbb{N}$, we denote by $\mathcal{C}_n$ the set of permutations in $\mathcal{C}$ of length $n$, and we refer to \[ \sum_{n=0}^\infty |\mathcal{C}_n|x^n=\sum_{\pi\in\mathcal{C}} x^{|\pi|} \] as the \emph{generating function} of $\mathcal{C}$ (here $|\pi|$ denotes the length of the permutation $\pi$). Since proper permutation classes are of exponentially bounded size, they have associated parameters of interest related to their asymptotic growth. Specifically, every class $\mathcal{C}$ has \emph{upper} and \emph{lower growth rates} given, respectively, by \[ \overline{\gr}(\mathcal{C})=\limsup_{n\rightarrow\infty}\sqrt[n]{|\mathcal{C}_n|} \quad\mbox{and}\quad \underline{\gr}(\mathcal{C})=\liminf_{n\rightarrow\infty}\sqrt[n]{|\mathcal{C}_n|}. \] It is conjectured that the actual limit of $\sqrt[n]{|\mathcal{C}_n|}$ exists for every permutation class; whenever this limit is known to exist we call it the \emph{(proper) growth rate} of $\mathcal{C}$ and denote it by $\mathrm{gr}(\mathcal{C})$. (All growth rates mentioned in the first paragraph are proper growth rates.) A partially ordered set (poset for short) is said to be \emph{partially well-ordered (pwo)} if it contains neither an infinite strictly descending sequence nor an infinite antichain. In permutation classes, pwo is synonymous with the absence of infinite antichains, since they cannot contain infinite strictly decreasing sequences . Geometric grid classes are the first of our major tools, and may be defined as follows. Suppose that $M$ is a $0/\mathord{\pm} 1$ matrix. The \emph{standard figure} of $M$ is the point set in $\mathbb{R}^2$ consisting of: \begin{itemize} \item the line segment from $(k-1,\ell-1)$ to $(k,\ell)$ if $M_{k,\ell}=1$ or \item the line segment from $(k-1,\ell)$ to $(k,\ell-1)$ if $M_{k,\ell}=-1$. \end{itemize} The \emph{geometric grid class} of $M$, denoted by $\operatorname{Geom}(M)$, is then the set of all permutations that can be drawn on this figure in the following manner. Choose $n$ points in the figure, no two on a common horizontal or vertical line. 
Then label the points from $1$ to $n$ from bottom to top and record these labels reading left to right. An example is shown in Figure~\ref{fig-example-ggc}. Note that in order for the cells of the matrix $M$ to be compatible with plots of permutations, we use Cartesian coordinates for our matrices, indexing them first by column, from left to right starting with $1$, and then by row, from bottom to top. A permutation class is said to be \emph{geometrically griddable} if it is contained in a geometric grid class. These classes are known to be well behaved: \begin{figure}\label{fig-example-ggc} \end{figure} \begin{theorem}[Albert, Atkinson, Bouvel, Ru\v{s}kuc and Vatter~\cite{albert:geometric-grid-:}] \label{thm-geom-griddable-all-summary} Every geometrically griddable class $\mathcal{C}$ is finitely based, pwo, and has a rational generating function. \end{theorem} Theorem~\ref{thm-geom-griddable-all-summary} implies that every geometrically griddable class $\mathcal{C}$ is \emph{strongly rational}, in the sense that $\mathcal{C}$ and all of its subclasses have rational generating functions. Strong rationality has numerous consequences for permutation classes, such as the following, which can be established by a simple counting argument. \begin{proposition}[Albert, Atkinson, and Vatter~\cite{albert:subclasses-of-t:}] \label{prop-strong-rat-pwo} Every strongly rational class is pwo. \end{proposition} Our second major tool is the \emph{substitution decomposition} of permutations into intervals. An \emph{interval} in the permutation $\pi$ is a set of contiguous indices $I=\interval{a}{b}=\{a,a+1,\dots,b\}$ such that the set of values $\pi(I)=\{\pi(i) : i\in I\}$ is also contiguous. Given a permutation $\sigma$ of length $m$ and nonempty permutations $\alpha_1,\dots,\alpha_m$, the \emph{inflation} of $\sigma$ by $\alpha_1,\dots,\alpha_m$ is the permutation $\pi=\sigma[\alpha_1,\dots,\alpha_m]$ obtained by replacing each entry $\sigma(i)$ by an interval that is order isomorphic to $\alpha_i$. For example, \[ 2413[1,132,321,12]=4\ 798\ 321\ 56. \] Given two classes $\mathcal{C}$ and $\mathcal{U}$, the \emph{inflation} of $\mathcal{C}$ by $\mathcal{U}$ is defined as \[ \mathcal{C}[\mathcal{U}]=\{\sigma[\alpha_1,\dots,\alpha_m]\::\:\mbox{$\sigma\in\mathcal{C}_m$ and $\alpha_1,\dots,\alpha_m\in\mathcal{U}$}\}. \] The class $\mathcal{C}$ is said to be \emph{substitution closed} if $\mathcal{C}[\mathcal{C}]\subseteq\mathcal{C}$. The \emph{substitution closure} $\langle\mathcal{C}\rangle$ of a class $\mathcal{C}$ is defined as the smallest substitution closed class containing $\mathcal{C}$. A standard argument shows that $\langle\mathcal{C}\rangle$ exists, and a detailed construction of $\langle\mathcal{C}\rangle$ is provided by Proposition~\ref{prop-subst-completion-const}. The main results of this paper can now be stated. \begin{itemize} \item If the class $\mathcal{C}$ is geometrically griddable, then every subclass of $\langle\mathcal{C}\rangle$ is finitely based and pwo (Theorem~\ref{thm-geom-simples-pwo-basis}). \item If the class $\mathcal{C}$ is geometrically griddable, then every subclass of $\langle\mathcal{C}\rangle$ has an algebraic generating function (Theorem~\ref{thm-context-free}). \item If the class $\mathcal{C}$ is geometrically griddable and $\mathcal{U}$ is strongly rational, then $\mathcal{C}[\mathcal{U}]$ is also strongly rational (Theorem~\ref{thm-geom-inflate-enum}). \item Every small permutation class has a rational generating function (Theorem~\ref{thm-small-rational}). 
\end{itemize} As mentioned earlier, there are uncountably many classes with growth rate $\kappa$ which give rise to uncountably many different generating functions. Hence there are permutation classes of growth rate $\kappa$ with nonrational, nonalgebraic, and even nonholonomic generating functions, so that the final result above is best possible. \section{Substitution Closures and Simple Permutations} \label{sec-inflate} Every permutation of length $n\ge 1$ has \emph{trivial} intervals of lengths $0$, $1$, and $n$; all other intervals are termed \emph{proper}. A nontrivial permutation is said to be \emph{simple} if it has no proper intervals. The shortest simple permutations are thus $12$ and $21$, there are no simple permutations of length three, and the simple permutations of length four are $2413$ and $3142$. Several examples of simple permutations are plotted throughout the paper, for instance in Figures~\ref{fig-par-alts} and \ref{fig-osc}. Inflations of $12$ and $21$ generally require special treatment and have their own names. The \emph{(direct) sum} of the permutations $\alpha_1$ and $\alpha_2$ is $\alpha_1\oplus\alpha_2=12[\alpha_1,\alpha_2]$. The permutation $\pi$ is said to be \emph{sum decomposable} if it can be expressed as a sum of two nonempty permutations, and \emph{sum indecomposable} otherwise. For every permutation $\pi$ there are unique sum indecomposable permutations $\alpha_1,\dots,\alpha_k$ (called the \emph{sum components} of $\pi$) such that $\pi=\alpha_1\oplus\dots\oplus\alpha_k$. A \emph{sum-prefix} and a \emph{sum-suffix} of $\pi$ are any of the permutations $\alpha_1\oplus\dots\oplus\alpha_i$ and $\alpha_i\oplus\dots\oplus \alpha_k$ for $i=1,\dots,k$. The \emph{skew sum} operation is defined by $\alpha_1\ominus\alpha_2=21[\alpha_1,\alpha_2]$ and the notions of \emph{skew decomposable}, \emph{skew indecomposable}, \emph{skew components}, \emph{skew-prefix}, and \emph{skew-suffix} are defined analogously. Simple permutations and inflations are linked by the following result. \begin{proposition}[Albert and Atkinson~\cite{albert:simple-permutat:}] \label{simple-decomp-unique} Every nontrivial permutation $\pi$ is an inflation of a unique simple permutation $\sigma$. Moreover, if $\pi=\sigma[\alpha_1,\dots,\alpha_m]$ for a simple permutation $\sigma$ of length $m\ge 4$, then each $\alpha_i$ is unique. If $\pi$ is an inflation of $12$ (i.e., is sum decomposable), then there is a unique sum indecomposable $\alpha_1$ such that $\pi=12[\alpha_1,\alpha_2]$. The same holds, mutatis mutandis, with $12$ replaced by $21$ and sum replaced by skew. \end{proposition} We need several technical details about inflations, and we begin by investigating the intervals of an inflation $\pi = \sigma[\alpha_1, \dots, \alpha_m]$. Trivially, any subinterval of any $\alpha_i$ will be an interval of $\pi$, and if $\tau$ is an interval of $\sigma$ corresponding to indices $i$ through $j$, then the entries corresponding to $\tau[\alpha_i,\dots,\alpha_j]$ will also form an interval of $\pi$. Any other interval contains an interval of this second type, possibly together with some entries of $\alpha_{i-1}$ and $\alpha_{j+1}$. \begin{figure} \caption{An illustration of an interval in an inflation of $285746319$. The small boxes represent the inflations of each point. 
The large shaded box captures the complete inflations of the interval $5746$ along with (possibly) some skew-suffix of the inflation of the entry $8$, and some skew-prefix of the inflation of the entry $3$.} \label{fig-example-interval-in-inflation} \end{figure} \begin{proposition} \label{intervals-in-inflation} Suppose that $\pi = \sigma[\alpha_1,\dots,\alpha_m]$ and that $\theta$ is an interval of $\pi$ not contained in a single $\alpha_i$. Then there exist a (possibly empty) interval $\interval{i}{j}$ of indices, and intervals $\gamma_{i-1}$ and $\gamma_{j+1}$ of $\alpha_{i-1}$ and $\alpha_{j+1}$ respectively, such that $\tau = \sigma(\interval{i}{j})$ is an interval of $\sigma$, and the entries of $\theta$ correspond to \[ \gamma_{i-1} \oplus \tau[\alpha_i,\dots,\alpha_j] \oplus \gamma_{j+1} \quad \mbox{or} \quad \gamma_{i-1} \ominus \tau[\alpha_i,\dots,\alpha_j] \ominus \gamma_{j+1}. \] In the first case $\gamma_{i-1}$ may be nonempty only if $\sigma(\interval{i-1}{j})$ is also an interval of $\sigma$ of the form $1 \oplus \tau$ and $\gamma_{i-1}$ is a sum-suffix of $\alpha_{i-1}$, while $\gamma_{j+1}$ may be nonempty only if $\sigma(\interval{i}{j+1})$ is also an interval of $\sigma$ of the form $\tau \oplus 1$ and $\gamma_{j+1}$ is a sum-prefix of $\alpha_{j+1}$. Analogous conditions apply for the second alternative. \end{proposition} \begin{proof} The set of indices $k$ such that $\alpha_k$ is wholly contained in $\theta$ forms an interval $I = \interval{i}{j}$. Suppose first that $I$ is nonempty. Observe that if two intervals of a permutation overlap, but neither is contained in the other, then their intersection is a sum- or skew-suffix of one and a sum- or skew-prefix of the other. So, in the case where $\alpha_{i-1}$ and $\theta$ have entries in common, these entries $\beta_{i-1}$ must lie with respect to $\tau[\alpha_i,\dots,\alpha_j]$ as specified in the statement of the proposition, and similar conclusions apply to $\alpha_{j+1}$. In case that $I$ is empty, $\theta$ must have nontrivial intersection with exactly two consecutive intervals $\alpha_{i-1}$ and $\alpha_{j+1}$. Then a minor modification of the argument above (relating $\beta_{i-1}$ directly to $\beta_{j+1}$) completes the proof. \end{proof} Inflations $\mathcal{C}[\mathcal{U}]$ of one class by another feature prominently in what follows. In this area we follow in the footsteps of Brignall~\cite{brignall:wreath-products:}. Let us define a \emph{$\mathcal{U}$-inflation} of $\sigma$ to be any permutation of the form \[ \pi=\sigma[\alpha_1,\dots,\alpha_m]\mbox{ with $\alpha_1,\dots,\alpha_m\in\mathcal{U}$;} \] we also refer to the above expression as a \emph{$\mathcal{U}$-decomposition} of $\pi$. \begin{proposition}[Brignall~\cite{brignall:wreath-products:}] \label{prop-U-profile} For every nonempty permutation class $\mathcal{U}$ and every permutation $\pi$, the set \[ \{\sigma\::\:\mbox{\textup{$\pi$ can be expressed as a $\mathcal{U}$-inflation of $\sigma$}}\} \] has a unique minimal element (with respect to the permutation containment order). \end{proposition} This unique minimal $\sigma$ is called the \emph{$\mathcal{U}$-profile} of $\pi$; note that the $\mathcal{U}$-profile of $\pi$ is $1$ precisely if $\pi\in\mathcal{U}$. To verify that $\pi\in\mathcal{C}[\mathcal{U}]$, it suffices to check whether the $\mathcal{U}$-profile of $\pi$ lies in $\mathcal{C}$. 
However, for our enumerative intents, Proposition~\ref{prop-U-profile} is not sufficient, because it does not guarantee uniqueness of substitution decomposition, even after the $\mathcal{U}$-profile has been singled out. For a very simple example, consider the permutation $12345$. The $\operatorname{Av}(123)$-profile of this permutation is $123$, but it has three decompositions with respect to this profile: \[ 12345=123[12,12,1]=123[12,1,12]=123[1,12,12]. \] To address this problem we introduce the \emph{left-greedy} $\mathcal{U}$-decomposition of the permutation $\pi$ as $\sigma[\alpha_1,\dots,\alpha_m]$, where $\sigma$ is the $\mathcal{U}$-profile of $\pi$ and the $\alpha_i\in\mathcal{U}$ are chosen from left to right to be as long as possible. We may also refer to such $\sigma[\alpha_1,\dots,\alpha_m]$ as a \emph{left-greedy} $\mathcal{U}$-inflation of $\sigma$. By definition the left-greedy $\mathcal{U}$-decomposition is unique, and the question arises as to how to distinguish it from other $\mathcal{U}$-decompositions of $\pi$. Our next proposition shows that if a $\mathcal{U}$-decomposition is not left-greedy then either several of the $\alpha_i$ can be merged, or a sum- or skew-prefix of one $\alpha_i$ can be appended to $\alpha_{i-1}$. \begin{proposition} \label{prop-left-greedy-U-inflate} Let $\mathcal{U}$ be a nonempty permutation class. A $\mathcal{U}$-decomposition \[ \pi=\theta[\gamma_1, \dots, \gamma_k]\mbox{ where $\gamma_1,\dots,\gamma_k\in\mathcal{U}$} \] is \emph{not} the left-greedy $\mathcal{U}$-decomposition of $\pi$ if and only if there is an interval $\interval{i}{j}$ of length at least $2$ giving rise to an interval of $\theta$ which is order isomorphic to a permutation $\tau$ such that \begin{enumerate} \item[(G1)] $\tau[\gamma_i,\gamma_{i+1},\dots,\gamma_j]\in\mathcal{U}$ (implying that $\theta$ is in fact not the $\mathcal{U}$-profile of $\pi$); or \item[(G2)] $\tau=12$ and the sum of $\gamma_i$ with the first sum component of $\gamma_{i+1}$ lies in $\mathcal{U}$; or, similarly, \item[(G3)] $\tau=21$ and the skew sum of $\gamma_i$ with the first skew component of $\gamma_{i+1}$ lies in $\mathcal{U}$. \end{enumerate} \end{proposition} \begin{proof} One direction is trivial. To prove the other direction, let the left-greedy $\mathcal{U}$-decomposition of $\pi$ be \[ \pi = \sigma[\alpha_1, \dots, \alpha_m]. \] Choose the minimum index $i$ such that $|\gamma_i| < |\alpha_i|$. By Proposition~\ref{intervals-in-inflation} applied to $\theta[\gamma_1,\dots,\gamma_k]$, the interval $\alpha_i$ of $\pi$ must be of the form \[ \tau[\gamma_i, \dots, \gamma_j] \oplus \beta_{j+1} \quad \mbox{or} \quad \tau[\gamma_i, \dots, \gamma_j] \ominus \beta_{j+1} \] for some sum- or skew-prefix $\beta_{j+1}$ of $\gamma_{j+1}$. If $j > i$ then we already see that the condition (G1) is met. Otherwise, the values corresponding to $\gamma_i$ (which form an interval) together with the subpermutation of $\beta_{j+1}$ corresponding to the first sum or skew component of $\gamma_{j+1}$ show that either (G2) or (G3) is met. \end{proof} We now turn our attention to the substitution closure $\langle \mathcal{C} \rangle$ of a class $\mathcal{C}$. Since the intersection of any family of substitution closed classes is substitution closed, $\langle \mathcal{C} \rangle$ can be defined nonconstructively as the intersection of all substitution closed classes containing $\mathcal{C}$. For our purposes, the following constructive description is more useful. 
\begin{proposition} \label{prop-subst-completion-const} The substitution closure of a nonempty class $\mathcal{C}$ is given by \[ \langle\mathcal{C}\rangle = \bigcup_{i=0}^{\infty} \mathcal{C}^{[i]}, \] where $\mathcal{C}^{[0]}=\{1\}$ and $\mathcal{C}^{[i+1]} = \mathcal{C}[\mathcal{C}^{[i]}]$ for $i\ge0$. \end{proposition}
\begin{proof} Since $\langle\mathcal{C}\rangle$ is substitution closed, it contains all $\mathcal{C}^{[i]}$. The other inclusion is proved in a standard way by establishing that $\bigcup \mathcal{C}^{[i]}$ is substitution closed, and appealing to the minimality of $\langle\mathcal{C}\rangle$. Inflations obey the associative law, i.e.\ $\mathcal{X}[\mathcal{Y}[\mathcal{Z}]]=(\mathcal{X}[\mathcal{Y}])[\mathcal{Z}]$ for any classes $\mathcal{X},\mathcal{Y},\mathcal{Z}$. From this it readily follows that $\mathcal{C}^{[i]}[\mathcal{C}^{[j]}]=\mathcal{C}^{[i+j]}$. Now consider an inflation $\sigma[\alpha_1,\dots,\alpha_m]$ where $\sigma$ and each $\alpha_i$ are contained in $\bigcup\mathcal{C}^{[i]}$. Because $\mathcal{C}^{[0]}\subseteq \mathcal{C}^{[1]}\subseteq \mathcal{C}^{[2]}\subseteq\dots$, there is some $k$ such that $\sigma,\alpha_1,\dots,\alpha_m\in\mathcal{C}^{[k]}$. It then follows that $\sigma[\alpha_1,\dots,\alpha_m]\in\mathcal{C}^{[k]}[\mathcal{C}^{[k]}]=\mathcal{C}^{[2k]}\subseteq\bigcup\mathcal{C}^{[i]}$, as desired. \end{proof}
Another basic result about substitution closures is the following.
\begin{proposition} \label{simples-in-substitution-completion} The substitution closure $\langle \mathcal{C} \rangle$ of a class $\mathcal{C}$ contains exactly the same set of simple permutations as $\mathcal{C}$. A permutation class $\mathcal{D}$ is contained in $\langle\mathcal{C}\rangle$ if and only if all simple permutations of $\mathcal{D}$ belong to $\mathcal{C}$. \end{proposition}
\begin{proof} The first assertion and the forward implication of the second assertion follow immediately from Proposition \ref{prop-subst-completion-const} and the observation that a simple permutation can never be obtained by a nontrivial inflation. For the converse implication of the second assertion, suppose that all simple permutations of a class $\mathcal{D}$ belong to $\mathcal{C}$. Let $\pi\in\mathcal{D}$ be a nontrivial permutation, and decompose it as $\pi=\sigma[\alpha_1,\dots,\alpha_m]$ where $\sigma$ is simple. By assumption $\sigma\in\mathcal{C}$, and if we inductively suppose $\alpha_1,\dots,\alpha_m\in\langle\mathcal{C}\rangle$, we obtain $\pi\in\langle\mathcal{C}\rangle$ by Proposition \ref{prop-subst-completion-const}, as required. \end{proof}
For the remainder of this section we concern ourselves with the basis of the substitution closure of a class.
\begin{proposition}[Albert and Atkinson~\cite{albert:simple-permutat:}]\label{substitution-completion-basis} The basis of the substitution closure of a class $\mathcal{C}$ consists of all minimal simple permutations not contained in $\mathcal{C}$. \end{proposition}
\begin{proof} Suppose that $\beta \not \in \langle \mathcal{C} \rangle$ is not simple. Then $\beta = \sigma[\alpha_1,\dots,\alpha_m]$ for some simple permutation $\sigma$ and permutations $\alpha_1, \dots, \alpha_m$ all strictly contained in $\beta$. Were $\sigma$ and $\alpha_1, \dots, \alpha_m$ all in $\langle \mathcal{C} \rangle$, we would have $\beta\in\langle\mathcal{C}\rangle$. Hence $\beta$ cannot be a basis element of $\langle \mathcal{C} \rangle$, since all the proper subpermutations of a basis element of a class must lie in the class.
Therefore every basis element of $\langle \mathcal{C} \rangle$ is simple, and the proposition follows by the first part of Proposition \ref{simples-in-substitution-completion}. \end{proof} A \emph{parallel alternation} is a permutation whose plot can be divided into two parts, by a single horizontal or vertical line, so that the points on either side of this line are both either increasing or decreasing and for every pair of points from the same part there is a point from the other part which {\it separates\/} them, i.e. lies either horizontally or vertically between them. It is easy to see that a parallel alternation of length at least four is simple if and only if its length is even and it does not begin with its smallest entry. Thus there are precisely four simple parallel alternations of each even length at least six, shown in Figure~\ref{fig-par-alts}, and no simple parallel alternations of odd length. \begin{figure} \caption{The four orientations of parallel alternations.} \label{fig-par-alts} \end{figure} \begin{theorem}[Schmerl and Trotter~\cite{schmerl:critically-inde:}]\label{thm-schmerl-trotter} Every simple permutation of length $n\ge 4$ which is not a parallel alternation contains a simple permutation of length $n-1$. A simple parallel alternation of length $n\geq 4$ contains a simple permutation of length $n-2$. \end{theorem} Theorem~\ref{thm-schmerl-trotter} leads rapidly to a sufficient condition for $\langle\mathcal{C}\rangle$ to be finitely based. Given any permutation class $\mathcal{C}$, we let $\mathcal{C}^{+1}$ denote the class of \emph{one point extensions} of elements of $\mathcal{C}$, i.e., the class of all permutations $\pi$ which contain an entry whose removal yields a permutation in $\mathcal{C}$. \begin{proposition}\label{prop-vector-simples-basis} Let $\mathcal{C}$ be a class of permutations. If $\mathcal{C}^{+1}$ is pwo, then the substitution closure $\langle\mathcal{C}\rangle$ is finitely based. \end{proposition} \begin{proof} The basis elements of $\langle\mathcal{C}\rangle$ are the minimal simple permutations not contained in $\mathcal{C}$ by Proposition~\ref{substitution-completion-basis}. Clearly there are only finitely many (indeed, at most four) minimal parallel alternations not contained in $\mathcal{C}$. By Theorem~\ref{thm-schmerl-trotter}, every other basis permutation $\beta$ of length $n$ contains a simple permutation $\sigma$ of length $n-1$ which by minimality belongs to $\mathcal{C}$. Hence $\beta\in\mathcal{C}^{+1}$, and the proposition follows from the assumption that $\mathcal{C}^{+1}$ has no infinite antichains. \end{proof} \section{Geometric Grid Classes and Regular Languages} \label{sec-geomgrid} We say that a $0/\mathord{\pm} 1$ matrix $M$ of size $t\times u$ is a \emph{partial multiplication matrix} if there exist \emph{column and row signs} \[ c_1,\ldots,c_t,r_1,\ldots,r_u\in \{1,-1\} \] such that every entry $M_{k,\ell}$ is equal to either $c_kr_\ell$ or $0$. Given a $0/\mathord{\pm} 1$ matrix $M$, we form a new matrix $M^{\times 2}$ by replacing each $0$, $1$, and $-1$ by \[ \fnmatrix{rr}{0&0\\0&0}, \fnmatrix{rr}{0&1\\1&0} \mbox{, and } \fnmatrix{rr}{-1&0\\0&-1}, \] respectively. It is easy to see that the standard figure of $M^{\times 2}$ is simply a scaled copy of the standard figure of $M$, and thus $\operatorname{Geom}(M^{\times 2})=\operatorname{Geom}(M)$ for all matrices $M$. 
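For a concrete illustration of this construction (on a small matrix chosen here purely as an example), the matrix $\fnmatrix{rr}{1&-1}$ is turned into \[ \fnmatrix{rrrr}{0&1&-1&0\\1&0&0&-1}. \]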
Moreover, the column and row signs $c_k=(-1)^k$, $r_\ell=(-1)^\ell$, show that $M^{\times 2}$ is a partial multiplication matrix, giving the following result. \begin{proposition}[Albert, Atkinson, Bouvel, Ru\v{s}kuc and Vatter~\cite{albert:geometric-grid-:}] \label{prop-geom-pmm} Every geometric grid class is the geometric grid class of a partial multiplication matrix. \end{proposition} One useful aspect of geometric grid classes is that they provide a link between permutations and words. Before explaining this connection, we briefly review a few relevant facts about words. Given a finite alphabet (merely a set of symbols) $\Sigma$, $\Sigma^\ast$ denotes the set of all words (i.e. finite sequences) over $\Sigma$. The set $\Sigma^\ast$ is partially ordered by the \emph{subword} (or \emph{subsequence}) order in which $v\le w$ if one can obtain $v$ from $w$ by deleting letters. Subsets of $\Sigma^\ast$ are called \emph{languages}, and one particular type, the \emph{regular languages}, plays a central role in our work. The empty set, the singleton $\{\varepsilon\}$ containing only the empty word, and the singletons $\{a\}$ for each $a\in\Sigma$ are all regular languages; moreover, given two regular languages $K,L\subseteq\Sigma^\ast$, their union $K\cup L$, their concatenation $KL=\{vw\::\: v\in K\mbox{ and }w\in L\}$, and the star $K^\ast=\{v^{(1)}\cdots v^{(m)}\::\: m\ge 0\mbox{ and }v^{(1)},\dots,v^{(m)}\in K\}$ are also regular. Every regular language can be obtained by a finite sequence of applications of these rules. Alternatively, one may define regular languages as those accepted by finite state automata, but we will not require this description. A language $L$ is \emph{subword closed} if for every $w\in L$ and every subword $v\leq w$ we have $v\in L$. The \emph{generating function} of the language $L$ is $\sum x^{|w|}$, where the sum is taken over all $w\in L$, and $|w|$ denotes the \emph{length} of $w$. In addition to the above defining properties of regular languages, we will require only a few other basic facts: \begin{itemize} \item All finite languages are regular. \item If $K$ and $L$ are regular languages then so are $K\cap L$ and $K\setminus L$. \item Every subword closed language is regular. \item The class of regular languages is closed under homomorphic images and inverse homomorphic images. \item Every regular language has a rational generating function. \end{itemize} For a systematic introduction to regular languages we refer the reader to Hopcroft, Motwani, and Ullman~\cite{hopcroft:introduction-to:a}, or, for a more combinatorial slant, to Flajolet and Sedgewick~\cite[Section I.4 and Appendix A.7]{flajolet:analytic-combin:}. The regularity of subword closed languages is folklore, but is specifically proved in Haines~\cite{haines:on-free-monoids:}. Returning to geometric grid classes, given a partial multiplication matrix $M$ with standard figure $\Lambda$ we define the \emph{cell alphabet} of $M$ as \[ \Sigma=\{\cell{k}{\ell} \::\: M_{k,\ell}\neq 0\}. \] The permutations in $\operatorname{Geom}(M)$ will be represented, or encoded, by words over $\Sigma$. Intuitively, the letter $\cell{k}{\ell}$ represents an instruction to place a point in an appropriate position on the line in the $(k,\ell)$ cell of $\Lambda$. This appropriate position is determined as follows, and the whole process is depicted in Figure~\ref{fig-example-ggc-bij}.
\begin{figure}\label{fig-example-ggc-bij} \end{figure} We say that the \emph{base line} of a \emph{column} of $\Lambda$ is the grid line to the left (resp., right) of that column if the corresponding column sign is $1$ (resp., $-1$). Similarly, the \emph{base line} of a \emph{row} of $\Lambda$ is the grid line below (resp., above) that row if the corresponding row sign is $1$ (resp., $-1$). We designate the intersection of the two base lines of a cell as its \emph{base point}. Note that the base point is an endpoint of the line segment of $\Lambda$ lying in this cell. As this definition indicates, we interpret the column and row signs as specifying the direction in which the columns and rows are `read'. Owing to this interpretation, we represent the column and row signs in our figures by arrows, as shown in Figure~\ref{fig-example-ggc-bij}. To every word $w=w_1\cdots w_n\in\Sigma^\ast$ we associate a permutation $\varphi(w)$. First we choose arbitrary distances $0<d_1<\cdots<d_n<1$. For each $1\le i\le n$, we choose a point $p_i$ corresponding to $w_i$. If $w_i=\cell{k}{\ell}$, then the point $p_i$ is chosen on the line segment in the $(k,\ell)$ cell of $\Lambda$, at infinity-norm distance $d_i$ from the base point of this cell. Finally, $\varphi(w)$ denotes the permutation defined by the set $\{p_1,\dots,p_n\}$ of points. It is a routine exercise to show that $\varphi(w)$ does not depend on the particular choice of $d_1,\dots,d_n$, and thus $\varphi\::\:\Sigma^\ast\to \operatorname{Geom}(M)$ is a well-defined mapping. The basic properties of $\varphi$ are described by the following result. \begin{proposition}[Albert, Atkinson, Bouvel, Ru\v{s}kuc and Vatter~\cite{albert:geometric-grid-:}] \label{prop-properties-of-bij} The mapping $\varphi$ is length-preserving, finite-to-one, onto, and order-preserving. \end{proposition} We then have the following more detailed version of Theorem~\ref{thm-geom-griddable-all-summary}. \begin{theorem}[Albert, Atkinson, Bouvel, Ru\v{s}kuc and Vatter~\cite{albert:geometric-grid-:}] \label{thm-geom-griddable-all} Suppose that $\mathcal{C}\subseteq\operatorname{Geom}(M)$ is a permutation class and $M$ is a partial multiplication matrix with cell alphabet $\Sigma$. Then the following hold: \begin{enumerate} \item[(i)] $\mathcal{C}$ is partially well-ordered. \item[(ii)] $\mathcal{C}$ is finitely based. \item[(iii)] There is a regular language $L\subseteq\Sigma^\ast$ such that $\varphi$ restricts to a bijection $ L\rightarrow\mathcal{C}$. \item[(iv)] There is a regular language $L_S$, contained in the regular language from (iii), such that $\varphi$ restricts to a bijection between $L_S$ and the simple permutations in $\mathcal{C}$. \end{enumerate} \end{theorem} We also need the following result. \begin{theorem}[Albert, Atkinson, Bouvel, Ru\v{s}kuc and Vatter~\cite{albert:geometric-grid-:}] \label{thm-geom-griddable-one-point-extension} If the class $\mathcal{C}$ is geometrically griddable, then the class $\mathcal{C}^{+1}$ is also geometrically griddable. \end{theorem} We end this section with a technical note. The mapping $\varphi$ `jumbles' the entries, in the sense that the $i$th letter of a word $w\in\Sigma^\ast$ typically does not correspond to the $i$th entry in the permutation $\varphi(w)$. To account for this, we define the \emph{index correspondence} $\psi$ associated to the pair $(\varphi,w)$ by letting $\psi(i)$ denote the index of the letter of $w$ which corresponds to the $i$th entry of $\varphi(w)$.
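As a small illustration of these definitions (using a matrix chosen here purely for the example, not the one of Figure~\ref{fig-example-ggc-bij}), let $M$ be the $1\times 1$ matrix whose only entry is $-1$, with column sign $c_1=-1$ and row sign $r_1=1$, and write $a=\cell{1}{1}$, so that $\Sigma=\{a\}$. The base point of the single cell is its lower right corner, an endpoint of the decreasing line segment in that cell. The word $w=aaa$ then places three points on this segment at increasing distances from the base point, that is, from right to left, so $\varphi(aaa)=321$. The first (leftmost) entry of $\varphi(aaa)$ comes from the third letter of $w$, and accordingly the index correspondence is given by $\psi(1)=3$, $\psi(2)=2$ and $\psi(3)=1$.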
\section{Finite Bases and Partial Well-Order} \label{sec-infinite-finite-bases-pwo} Two general types of classes will be under investigation in this paper: \begin{itemize} \item[(1)] subclasses of substitution closures of geometric grid classes; and \item[(2)] subclasses of inflations of geometric grid classes by strongly rational classes. \end{itemize} In this section we establish the pwo property for both these types. As a consequence we deduce that all classes of type (1) are finitely based. Note that we cannot hope to have a general finite basis result for type (2), since strongly rational classes need not themselves be finitely based (see Section \ref{sec-conclusion}). It follows immediately from Proposition~\ref{prop-vector-simples-basis}, Theorem \ref{thm-geom-griddable-all} (i) and Theorem~\ref{thm-geom-griddable-one-point-extension} that $\langle\operatorname{Geom}(M)\rangle$ is finitely based. The basis of $\mathcal{C}\subseteq\langle\operatorname{Geom}(M)\rangle$ therefore consists of an antichain in $\langle\operatorname{Geom}(M)\rangle$ together with, possibly, some of the finitely many basis elements of $\langle\operatorname{Geom}(M)\rangle$ itself. Hence we need only prove that $\langle\operatorname{Geom}(M)\rangle$ is pwo. Morally, owing to the tree-like structure of nested substitutions, this is a consequence of Kruskal's Tree Theorem~\cite{kruskal:well-quasi-orde:}. However, there are several technical issues that would need to be resolved in such an approach, so we give a proof from first principles. Given a poset $(P,\le)$, consider the set $P^\ast$ of words with letters from $P$. The \emph{generalised subword order} on $P^\ast$ is defined by stipulating that $v=v_1\dots v_k$ is contained in $w=w_1\dots w_n$ if $w$ has a subsequence $w_{i_1}w_{i_2}\cdots w_{i_k}$ such that $v_j\le w_{i_j}$ for all $j$. Note that the usual subword ordering on $\Sigma^\ast$ is obtained as a special case where the letters of $\Sigma$ are taken to be an antichain. We then have the following result from \cite{higman:ordering-by-div:}. \newtheorem*{higmans-theorem}{\rm\bf Higman's Theorem} \begin{higmans-theorem} If $(P,\le)$ is pwo then $P^*$, ordered by the generalised subword order, is also pwo. \end{higmans-theorem} We can immediately deduce the pwo property for inflations of geometrically griddable classes. \begin{proposition}\label{prop-inflate-pwo} If $\mathcal{C}$ is a geometrically griddable class and $\mathcal{U}$ is a pwo class then the inflation $\mathcal{C}[\mathcal{U}]$ is pwo. \end{proposition} \begin{proof} It suffices to prove that $\operatorname{Geom}(M)[\mathcal{U}]$ is pwo for all partial multiplication matrices $M$. Suppose that the cell alphabet of $M$ is $\Sigma$ and consider the map \[ \varphi^\mathcal{U}:\left(\Sigma\times\mathcal{U}\right)^\ast\rightarrow\operatorname{Geom}(M)[\mathcal{U}] \] which sends $(w_1,\alpha_1)\cdots(w_m,\alpha_m)$ to $\varphi(w_1\cdots w_m)[\alpha_{\psi(1)},\dots,\alpha_{\psi(m)}]$ where $\varphi$ is the encoding mapping, and $\psi$ is the index correspondence associated to $(\varphi,w)$ for the word $w=w_1\cdots w_m$, both of which have been introduced in Section \ref{sec-geomgrid}. This maps onto $\operatorname{Geom}(M)[\mathcal{U}]$ because $\varphi$ maps onto $\operatorname{Geom}(M)$ by Proposition \ref{prop-properties-of-bij}.
Order $\left(\Sigma\times\mathcal{U}\right)^\ast$ as follows: $\Sigma\times\mathcal{U}$ is ordered by the direct product ordering, where $\Sigma$ is considered to be an antichain, and then $\left(\Sigma\times\mathcal{U}\right)^\ast$ is ordered by the generalised subword ordering. Using the fact that $\varphi$ is order-preserving (Proposition~\ref{prop-properties-of-bij} again), it can be seen that $\varphi^\mathcal{U}$ is order-preserving as well. Since $\mathcal{U}$ is pwo, $\Sigma\times\mathcal{U}$ is also pwo (as it is simply a union of $|\Sigma|$ copies of $\mathcal{U}$) and thus $\left(\Sigma\times\mathcal{U}\right)^\ast$ is pwo by Higman's Theorem. It immediately follows that $\operatorname{Geom}(M)[\mathcal{U}]$ is pwo as well. \end{proof} In order to show that substitution closures of geometrically griddable classes are pwo, we borrow a few ideas from the study of posets. For the purposes of this discussion, we restrict ourselves to posets (such as the poset of all permutations) which are \emph{well-founded}, meaning that they have no infinite strictly decreasing sequences. Gustedt~\cite{gustedt:finiteness-theo:} defines a partial order on the infinite antichains of a poset, implicit in Nash-Williams~\cite{nash-williams:on-well-quasi-o:}, in which $A\preceq B$ if for every $b\in B$ there exists $a\in A$ such that $a\le b$. Note that $\preceq$ reverses the set inclusion order: if two infinite antichains satisfy $B\subseteq A$, then $A\preceq B$. \begin{proposition}[Gustedt~{\cite[Lemma 5]{gustedt:finiteness-theo:}}]\label{prop-minimal-antichain} For a well-founded poset $P$, the poset of infinite antichains of $P$ under $\preceq$ is also well-founded. In particular, for every infinite antichain $A\subseteq P$ there is a $\preceq$-minimal infinite antichain $B$ such that $B\preceq A$. \end{proposition} \begin{proposition}[Gustedt~{\protect\cite[Theorem 6]{gustedt:finiteness-theo:}}]\label{prop-preceq-min-pcl-pwo} Suppose that the poset $P$ is well-founded and that the antichain $A$ is $\preceq$-minimal. Then the \emph{proper closure} of $A$, \[ A^{\mbox{\begin{tiny}\ensuremath{<}\end{tiny}}}=\{b : b < a\mbox{ for some $a\in A$}\}, \] is pwo. \end{proposition} As an easy consequence we now have: \begin{theorem}\label{thm-geom-simples-pwo-basis} If the class $\mathcal{C}$ is geometrically griddable, then every subclass of $\langle\mathcal{C}\rangle$ is finitely based and pwo. \end{theorem} \begin{proof} From our prior discussion, it suffices to prove that $\langle\operatorname{Geom}(M)\rangle$ is pwo for every partial multiplication matrix $M$. Suppose $\langle\operatorname{Geom}(M)\rangle$ contains an infinite antichain; then it contains an infinite $\preceq$-minimal antichain $A$ by Proposition~\ref{prop-minimal-antichain}. By Proposition~\ref{prop-preceq-min-pcl-pwo} the permutation class $A^{\mbox{\begin{tiny}\ensuremath{<}\end{tiny}}}$ is pwo. By Proposition~\ref{prop-subst-completion-const} every element $\pi\in A$ can be decomposed as $\pi=\sigma[\alpha_1,\dots,\alpha_m]$, where $\sigma\in\operatorname{Geom}(M)$ and $\alpha_1,\dots,\alpha_m$ are properly contained in $\pi$. In other words, $A\subseteq \operatorname{Geom}(M)[A^{\mbox{\begin{tiny}\ensuremath{<}\end{tiny}}}]$, which cannot happen since $\operatorname{Geom}(M)[A^{\mbox{\begin{tiny}\ensuremath{<}\end{tiny}}}]$ is pwo by Proposition~\ref{prop-inflate-pwo}.
\end{proof} In their early investigations of simple permutations, Albert and Atkinson~\cite{albert:simple-permutat:} proved that every permutation class with only finitely many simple permutations is pwo. Theorem~\ref{thm-geom-simples-pwo-basis} generalises this result, as every finite set of permutations is trivially contained in some geometric grid class. \section{Properties and Frameworks}\label{sec-frameworks} In order to establish our enumerative results we adapt ideas introduced by Brignall, Huczynska and Vatter~\cite{brignall:simple-permutat:}. A {\it property\/} is any set $P$ of permutations, and we say that $\pi$ {\it satisfies\/} $P$ if $\pi\in P$. Given a family $\mathcal{P}$ of properties and a permutation $\pi$, we write $\mathcal{P}(\pi)$ for the collection of properties in $\mathcal{P}$ satisfied by $\pi$. In this section we use only two types of properties. An \emph{avoidance property} is one of the form $\operatorname{Av}(\beta)$ for some permutation $\beta$. Note that if $\mathcal{P}$ is a family of avoidance properties and $\sigma\le\pi$, then $\sigma$ must avoid every permutation avoided by $\pi$, so $\mathcal{P}(\sigma)\supseteq\mathcal{P}(\pi)$. Additionally we will need the properties $D_{\mathord{\oplus}}$ and $D_{\mathord{\ominus}}$, which denote, respectively, the sets of sum decomposable permutations and skew decomposable permutations. A \emph{$\mathcal{P}$-framework} $\mathfrak{F}$ is a (formal) expression $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ where $\sigma$ is a permutation of length $m$, called the \emph{skeleton} of $\mathfrak{F}$, and $\mathcal{Q}_i\subseteq\mathcal{P}$ for all $i$. We say that $\mathfrak{F}$ \emph{describes} the set of permutations \[ \{\sigma[\alpha_1,\dots,\alpha_m]\::\: \mbox{$\mathcal{P}(\alpha_i)=\mathcal{Q}_i$ for all $i$}\}. \] Informed by Proposition~\ref{simple-decomp-unique}, we say that a $\mathcal{P}$-framework $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ is \emph{simple} if $\sigma$ is simple and $D_\oplus\notin\mathcal{Q}_1$ (resp., $D_\ominus\notin\mathcal{Q}_1$) if $\sigma=12$ (resp., $\sigma=21$). We then have the following result. \begin{proposition} \label{simple-framework-unique} If $\mathcal{P}$ is a family of properties containing $D_{\mathord{\oplus}}$ and $D_{\mathord{\ominus}}$ then every non-trivial permutation is described by a unique simple $\mathcal{P}$-framework. \end{proposition} We say that the $\mathcal{P}$-framework $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ is \emph{nonempty} if it describes at least one permutation; this condition is equivalent to requiring that there be at least one permutation $\alpha_i$ with $\mathcal{P}(\alpha_i)=\mathcal{Q}_i$ for every $i$. The family $\mathcal{P}$ of properties is \emph{query-complete} if the collection of properties $\mathcal{P}(\sigma[\alpha_1,\dots,\alpha_m])$ is completely determined by $\sigma$ and the collections $\mathcal{P}(\alpha_1)$, $\dots$, $\mathcal{P}(\alpha_m)$. In other words, $\mathcal{P}$ is query-complete if \[ \mathcal{P}(\sigma[\alpha_1,\dots,\alpha_m])=\mathcal{P}(\sigma[\alpha_1',\dots,\alpha_m']) \] for all permutations $\sigma$ of length $m$, and all $m$-tuples $(\alpha_1,\dots,\alpha_m)$ and $(\alpha_1',\dots,\alpha_m')$ which satisfy $\mathcal{P}(\alpha_i)=\mathcal{P}(\alpha_i')$ for all $i$. 
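To illustrate this definition, the singleton family $\{\operatorname{Av}(21)\}$ is query-complete, since $\sigma[\alpha_1,\dots,\alpha_m]$ avoids $21$ if and only if $\sigma$ and every $\alpha_i$ avoid $21$. By contrast, the family $\{\operatorname{Av}(123)\}$ is not query-complete: the inflations $12[1,12]=123$ and $12[1,21]=132$ are built from skeletons and intervals with identical $\{\operatorname{Av}(123)\}$-properties, yet only the second avoids $123$. Adjoining the properties $\operatorname{Av}(1)$ and $\operatorname{Av}(12)$ remedies this; indeed, as noted below, the family $\{\operatorname{Av}(\delta)\::\:\delta\le\beta\}$ is query-complete for every permutation $\beta$.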
When $\mathcal{P}$ is query-complete, we may refer to the \emph{properties} of a nonempty $\mathcal{P}$-framework $\mathfrak{F}$, for which we use the notation $\mathcal{P}(\mathfrak{F})$, defined as $\mathcal{P}(\pi)$ where $\pi$ is any permutation described by $\mathfrak{F}$. The situation we are interested in is when $\mathcal{C}\subseteq\langle\operatorname{Geom}(M)\rangle$, i.e., when the simple permutations of $\mathcal{C}$ are contained in a geometric grid class (see Proposition~\ref{simples-in-substitution-completion}). Without loss of generality we will suppose that $M$ is a partial multiplication matrix (Proposition \ref{prop-geom-pmm}). Let $B$ be the basis of $\mathcal{C}$; recall that $B$ is finite by Theorem \ref{thm-geom-simples-pwo-basis}. In order to enumerate $\mathcal{C}$, the properties we are interested in are \[ \mathcal{P}_B=\{D_{\mathord{\oplus}},D_{\mathord{\ominus}}\}\cup\{\operatorname{Av}(\delta)\::\: \mbox{$\delta\le\beta$ for some $\beta\in B$}\}. \] For instance, if $B=\{123\}$ then $\mathcal{P}_B$ consists of $D_{\mathord{\oplus}}$, $D_{\mathord{\ominus}}$, $\operatorname{Av}(1)$, $\operatorname{Av}(12)$, and $\operatorname{Av}(123)$. Intuitively, these properties allow us to `monitor', as substitutions are iteratively formed to build $\mathcal{C}\subseteq\langle\operatorname{Geom}(M)\rangle$, `how much' of any basis element from $B$ the resulting permutations contain. Let us first verify that $\mathcal{P}_B$ is query-complete; as the union of query-complete sets of properties is again query-complete, we may prove this piece by piece. First, $\{D_{\mathord{\oplus}}\}$ is query-complete: $\sigma[\alpha_1,\dots,\alpha_m]\in D_{\mathord{\oplus}}$ if and only if $\sigma\in D_{\mathord{\oplus}}$, or $\sigma=1$ and $\alpha_1\in D_{\mathord{\oplus}}$. The case of $\{D_{\mathord{\ominus}}\}$ is similar. For the rest of $\mathcal{P}_B$, we claim that for every $\beta\in B$, the set $\{\operatorname{Av}(\delta)\::\:\delta\le\beta\}$ is query-complete. This is equivalent to stating that knowing the skeleton $\sigma$ and exactly which of the relevant subpermutations of $\beta$ each interval contains allows us to determine whether $\sigma[\alpha_1,\dots,\alpha_m]$ contains a given $\delta\le\beta$; a formal proof is given in Brignall, Huczynska and Vatter~\cite{brignall:simple-permutat:}. Now let $\mathcal{P}\supseteq\mathcal{P}_B$ be a query-complete set of properties consisting of $\mathcal{P}_B$ together, possibly, with finitely many additional avoidance properties. Since $B$ is the basis of $\mathcal{C}$ and the properties $\operatorname{Av}(\beta)$ ($\beta\in B$) are in $\mathcal{P}$, it follows that every subset of $\mathcal{P}$ `knows' whether the permutations it describes belong to $\mathcal{C}$ or not. More precisely, for a $\mathcal{P}$-framework $\mathfrak{F}$, either every permutation described by $\mathfrak{F}$ lies in $\mathcal{C}$ or none of them do. The first step of our enumeration of $\mathcal{C}$ is to encode the nonempty, simple $\mathcal{P}$-frameworks which describe permutations in $\mathcal{C}$. Let $\Sigma$ be the cell alphabet of $M$, and let $\varphi:\Sigma^\ast\rightarrow \operatorname{Geom}(M)$ be the mapping defined in Section \ref{sec-geomgrid}. Since the set $S$ of simple permutations in $\mathcal{C}$ is contained in $\operatorname{Geom}(M)$, Theorem \ref{thm-geom-griddable-all} (iv) applied to the subclass $\mathcal{C} \cap \operatorname{Geom}(M)$ of $\operatorname{Geom}(M)$ yields a regular language $L_S\subseteq \Sigma^\ast$ such that $\varphi$ induces a bijection between $L_S$ and $S$.
In order to encode $\mathcal{P}$-frameworks, we extend our alphabet to $\Sigma\times 2^{\mathcal{P}}$, that is, to ordered pairs whose first component is a letter from $\Sigma$, and whose second component is a subset of $\mathcal{P}$. We now define the mapping $\varphi^{\mathcal{P}}$ from words in $\left(\Sigma\times2^{\mathcal{P}}\right)^\ast$ to ${\mathcal{P}}$-frameworks whose skeletons lie in $\operatorname{Geom}(M)$ by \[ \varphi^{\mathcal{P}}:(w_1,\mathcal{Q}_1)\cdots (w_m,\mathcal{Q}_m)\mapsto \varphi(w)[\mathcal{Q}_{\psi(1)},\dots,\mathcal{Q}_{\psi(m)}], \] where $w=w_1\cdots w_m$, and $\psi$ is the index correspondence associated to $(\varphi,w)$ defined at the end of Section~\ref{sec-geomgrid}. Since $\varphi$ is onto (Proposition \ref{prop-properties-of-bij}), so is $\varphi^{\mathcal{P}}$. Given two $\mathcal{P}$-frameworks we write \[ \tau[\mathcal{R}_1,\dots,\mathcal{R}_k]\le\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m] \] if there are indices $1\le i_1<\cdots<i_k\le m$ such that $\tau$ is order isomorphic to $\sigma(i_1)\cdots\sigma(i_k)$ and $\mathcal{R}_j=\mathcal{Q}_{i_j}$ for all $1\le j\le k$. From Proposition~\ref{prop-properties-of-bij} it follows readily that $\varphi^\mathcal{P}$ is order-preserving when considered as a mapping from $\left(\Sigma\times2^{\mathcal{P}}\right)^\ast$ under the subword order to the set of all $\mathcal{P}$-frameworks under the above ordering. The main result of this section shows that the $\mathcal{P}$-frameworks we are interested in are described by a finite family of regular languages. \begin{theorem} \label{thm-framework-regular} Let $M$ be a partial multiplication matrix with cell alphabet $\Sigma$, let $B$ be any finite set of permutations, and let $\mathcal{P}$ be a query-complete set of properties consisting of $\mathcal{P}_B$ together, possibly, with finitely many additional avoidance properties. For every subset $\mathcal{Q}\subseteq\mathcal{P}$ of properties, there is a regular language $L_\mathcal{Q}\subseteq\left(\Sigma\times2^{\mathcal{P}}\right)^\ast$ such that the mapping $\varphi^{\mathcal{P}}$ is a bijection between $L_\mathcal{Q}$ and the nonempty, simple $\mathcal{P}$-frameworks $\mathfrak{F}=\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ satisfying $\sigma\in\operatorname{Geom}(M)$ and $\mathcal{P}(\mathfrak{F})=\mathcal{Q}$. \end{theorem} \begin{proof} Let $L_S\subseteq\Sigma^\ast$ be the regular language such that the mapping $\varphi$ is a bijection between $L_S$ and the simple permutations of $\operatorname{Geom}(M)$. The language \[ L_S^{\mathcal{P}} = \{(w_1,\mathcal{Q}_1)\cdots(w_m,\mathcal{Q}_m)\::\: w_1\cdots w_m\in L_S\} \subseteq \left(\Sigma\times 2^{\mathcal{P}}\right)^\ast \] is the inverse image of $L_S$ under the first projection homomorphism, and is thus regular. Consider first the case where $D_{\mathord{\oplus}}\in\mathcal{Q}$. Thus we must generate all nonempty, simple $\mathcal{P}$-frameworks which describe sum decomposable permutations $\pi$ with $\mathcal{P}(\pi)=\mathcal{Q}$. Clearly there are only finitely many such frameworks because they have the form $12[\mathcal{Q}_1,\mathcal{Q}_2]$. Choosing a single preimage under $\varphi^{\mathcal{P}}$ for each framework yields a finite, and hence regular, set $L_\mathcal{Q}$ with the desired properties. The case where $D_{\mathord{\ominus}}\in\mathcal{Q}$ is dual. Now suppose that $D_{\mathord{\oplus}},D_{\mathord{\ominus}}\notin \mathcal{Q}$.
In this case we must ensure that the $\mathcal{P}$-frameworks we build describe permutations which are neither sum decomposable nor skew decomposable. This is equivalent to insisting that the skeleton have length at least four. Consider the set $\{\mathfrak{F} \::\: \mathcal{Q}\subseteq\mathcal{P}(\mathfrak{F})\}$ of $\mathcal{P}$-frameworks which satisfy at least the properties of $\mathcal{Q}$. (Note that here we do not require that the skeleton be simple -- it can be any element of $\operatorname{Geom}(M)$.) As all of the properties of $\mathcal{Q}$ are avoidance properties, this set of $\mathcal{P}$-frameworks is closed downward under the $\mathcal{P}$-framework ordering. Therefore, because $\varphi^\mathcal{P}$ is order-preserving, the set \[ \{w\in\left(\Sigma\times2^{\mathcal{P}}\right)^\ast \::\: \mathcal{Q}\subseteq\mathcal{P}(\varphi^{\mathcal{P}}(w))\} \] is subword-closed, and thus regular. Dually, the set \[ \{w\in\left(\Sigma\times2^{\mathcal{P}}\right)^\ast \::\: \mathcal{P}(\varphi^{\mathcal{P}}(w))\subseteq\mathcal{Q}\} \] is upward-closed, and thus also regular. The regular language $L_\mathcal{Q}$ we need to produce is simply the intersection of the two sets above (to ensure that $\mathcal{P}(\mathfrak{F})=\mathcal{Q}$ for every resulting framework $\mathfrak{F}$) with the regular language $L_S^{\mathcal{P}}$ (to ensure that the skeleton of $\mathfrak{F}$ is simple) and the regular language of words of length at least four (to ensure that $D_{\mathord{\oplus}},D_{\mathord{\ominus}}\not\in\mathcal{P}(\mathfrak{F})$), completing the proof. \end{proof} \section{Algebraic Generating Functions} \label{sec-alg-gf} Our goal now is to utilise Theorem~\ref{thm-framework-regular} ($\mathcal{P}$-frameworks specified by any $\mathcal{Q}\subseteq \mathcal{P}$ are in bijection with a regular language) to show that the generating function for a subclass $\mathcal{C}$ of the substitution closure of a geometrically griddable class is algebraic. It obviously suffices to consider the case where $\mathcal{C}\subseteq\langle\operatorname{Geom}(M)\rangle$. Without loss of generality suppose that $M$ is a partial multiplication matrix (Proposition \ref{prop-geom-pmm}), and denote the corresponding cell alphabet by $\Sigma$. Let $B$ be the (finite) basis of $\mathcal{C}$. For the purposes of this section, it will be helpful to insist that \[ \mathcal{P}=\mathcal{P}_B\cup\{\operatorname{Av}(21),\operatorname{Av}(12)\}, \] which is easily seen to be query-complete. With these two extra properties, the family $\mathcal{Q}^\bullet$ consisting of all avoidance properties in $\mathcal{P}$ except $\operatorname{Av}(1)$ satisfies \[ \mathcal{P}(\pi)=\mathcal{Q}^\bullet \mbox{ if and only if } \pi=1. \] For every subset $\mathcal{Q}\subseteq \mathcal{P}$ let $f_\mathcal{Q}$ be the generating function for the set \[ \Delta(\mathcal{Q})=\{ \pi\in\langle\operatorname{Geom}(M)\rangle \::\: \mathcal{P}(\pi)=\mathcal{Q}\} \] of all permutations in $\langle\operatorname{Geom}(M)\rangle$ described by $\mathcal{Q}$. Clearly \[ \Delta(\mathcal{Q}^\bullet)=\{1\}. \] For every other $\mathcal{Q}$ we have \[ \Delta(\mathcal{Q})=\bigcup \sigma[\Delta(\mathcal{Q}_1),\dots,\Delta(\mathcal{Q}_m)], \] where the (disjoint) union is taken over all simple $\mathcal{P}$-frameworks $\mathfrak{F}=\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ with $\sigma\in\operatorname{Geom}(M)$ and $\mathcal{P}(\mathfrak{F})=\mathcal{Q}$.
This latter set of frameworks is bijectively encoded by the language $L_\mathcal{Q}\subseteq (\Sigma \times 2^\mathcal{P})^\ast$ via the mapping $\varphi^\mathcal{P}$, as described in Section \ref{sec-frameworks}. Let $g_\mathcal{Q}$ be the generating function for $L_\mathcal{Q}$ in non-commuting variables representing the letters of our alphabet: \begin{equation} \label{nreq3} g_\mathcal{Q}=\sum_{w\in L_\mathcal{Q}} w. \end{equation} Due to the recursive description of the sets $\Delta(\mathcal{Q})$ above, and the fact that every non-trivial permutation in $\langle\operatorname{Geom}(M)\rangle$ is described by a unique simple $\mathcal{P}$-framework, a system of equations for the $f_\mathcal{Q}$ ($\mathcal{Q}\subseteq\mathcal{P}$) can be obtained by stipulating \begin{equation} \label{nreq4} f_{\mathcal{Q}^\bullet}=x, \end{equation} and performing the following substitutions in \eqref{nreq3}: \begin{equation} \label{nreq5} g_\mathcal{Q}\leftarrow f_\mathcal{Q},\ (u,\mathcal{R})\leftarrow f_\mathcal{R}\ (\mathcal{R}\subseteq\mathcal{P}). \end{equation} The resulting system is finite, although a typical right-hand side of an equation is an infinite series. On the other hand, the language $L_\mathcal{Q}$ is regular by Theorem~\ref{thm-framework-regular}. Therefore, as is well known (see Flajolet and Sedgewick~\cite[Proposition I.3]{flajolet:analytic-combin:}), each $g_\mathcal{Q}$ is the solution of a finite system of linear equations (which almost certainly includes auxiliary variables). We then take these systems together and perform substitutions \eqref{nreq5} on them. The resulting system, together with the equation \eqref{nreq4}, is a finite algebraic system for the $f_\mathcal{Q}$. We may then perform algebraic elimination (see Flajolet and Sedgewick~\cite[Appendix B.1]{flajolet:analytic-combin:}) to produce a single polynomial equation for each $f_\mathcal{Q}$. The generating function $f$ of $\mathcal{C}$ is $f=\sum f_\mathcal{Q}$, where the sum is taken over all $\mathcal{Q}$ satisfying $\{ \operatorname{Av}(\beta)\::\: \beta\in B\}\subseteq \mathcal{Q}\subseteq\mathcal{P}$, thus proving the following result. \begin{theorem} \label{thm-context-free} Every subclass of the substitution closure of a geometrically griddable class has an algebraic generating function. \end{theorem} While we have established Theorem~\ref{thm-context-free} in a purely algebraic manner, it would not be difficult to express our proof in terms of formal languages. In such an approach, the above considerations would translate into a proof that the class $\mathcal{C}$ is in bijection with a context-free language, together with an unambiguous grammar for this language. Theorem \ref{thm-context-free} would follow from the fact that such languages have algebraic generating functions; see Flajolet and Sedgewick~\cite[Proposition I.7]{flajolet:analytic-combin:}. \section{Inflations by Strongly Rational Classes} We now consider inflations of the form $\mathcal{C}[\mathcal{U}]$ where $\mathcal{C}$ is geometrically griddable and $\mathcal{U}$ is \emph{strongly rational}, meaning that $\mathcal{U}$ and all its subclasses have rational generating functions. Recall that $\mathcal{C}[\mathcal{U}]$ is defined as \[ \mathcal{C}[\mathcal{U}]=\{\sigma[\alpha_1,\dots,\alpha_m]\::\:\mbox{$\sigma\in\mathcal{C}$ is of length $m$, and $\alpha_1,\dots,\alpha_m\in\mathcal{U}$}\}.
\] We cannot hope to prove the main result of this section by encoding the permutations of $\mathcal{C}[\mathcal{U}]$ as a regular language, simply because we do not know how to encode an arbitrary strongly rational class. Thus we must consider generating functions for various subsets of $\mathcal{U}$. The following result is our starting point. \begin{proposition}[Albert, Atkinson, and Vatter~\cite{albert:subclasses-of-t:}] \label{prop-strong-rat-indecomps} If the class $\mathcal{U}$ is strongly rational, then each of the following sets has a rational generating function: \begin{itemize} \item the sum indecomposable permutations in $\mathcal{U}$; \item the sum decomposable permutations in $\mathcal{U}$; \item the skew indecomposable permutations in $\mathcal{U}$; \item the skew decomposable permutations in $\mathcal{U}$; and \item the permutations in $\mathcal{U}$ which are both sum and skew indecomposable. \end{itemize} \end{proposition} As in Section~\ref{sec-frameworks}, given a finite set $B$ of permutations, we define the family of properties $\mathcal{P}_B$ by \[ \mathcal{P}_B=\{D_{\mathord{\oplus}},D_{\mathord{\ominus}}\}\cup\{\operatorname{Av}(\delta)\::\: \mbox{$\delta\le\beta$ for some $\beta\in B$}\}. \] \begin{proposition} \label{prop-strong-rat-properties} Let $\mathcal{U}$ be a strongly rational permutation class, and let $B$ be a finite set of permutations. For every subset $\mathcal{Q}\subseteq\mathcal{P}_B$ of properties, the generating function for the permutations in $\mathcal{U}$ satisfying $\mathcal{P}_B(\pi)=\mathcal{Q}$ is rational. \end{proposition} \begin{proof} Let $g_\mathcal{Q}$ denote the generating function for the permutations we want to count, i.e., the permutations in $\mathcal{U}$ which satisfy precisely the properties $\mathcal{Q}$. Further, given a set $\mathcal{R}\subseteq\mathcal{P}_B$ of properties, let $f_\mathcal{R}$ denote the generating function for the permutations in $\mathcal{U}$ which satisfy at least the properties of $\mathcal{R}$, but possibly more. Because $\mathcal{P}_B$ consists of the properties of being sum- and skew decomposable, together with a collection of avoidance properties, each $f_\mathcal{R}$ corresponds to one of the bullet points in Proposition~\ref{prop-strong-rat-indecomps} for a subclass of $\mathcal{U}$. Specifically, letting $B'=\{ \delta\::\: \operatorname{Av}(\delta)\in\mathcal{R}\}$ and $\mathcal{V}=\mathcal{U}\cap \operatorname{Av}(B')$, we have: \begin{itemize} \item if $D_{\mathord{\oplus}},D_{\mathord{\ominus}}\not\in\mathcal{R}$ then $f_\mathcal{R}$ is the generating function for the class $\mathcal{V}$; \item if $D_{\mathord{\oplus}}\in\mathcal{R}$ and $D_{\mathord{\ominus}}\not\in\mathcal{R}$ then $f_\mathcal{R}$ is the generating function for the sum decomposable permutations in $\mathcal{V}$; \item if $D_{\mathord{\oplus}}\not\in\mathcal{R}$ and $D_{\mathord{\ominus}}\in\mathcal{R}$ then $f_\mathcal{R}$ is the generating function for the skew decomposable permutations in $\mathcal{V}$; \item if $D_{\mathord{\oplus}},D_{\mathord{\ominus}}\in\mathcal{R}$ then $f_\mathcal{R}=0$. \end{itemize} In any case, $f_\mathcal{R}$ is rational. To complete the proof, we need only note that \[ g_\mathcal{Q} = \sum_{\mathcal{R}\::\:\mathcal{Q}\subseteq\mathcal{R}\subseteq\mathcal{P}_B} (-1)^{|\mathcal{R}\setminus\mathcal{Q}|} f_\mathcal{R} \] by inclusion-exclusion. 
\end{proof} Our argument that inflations of geometrically griddable classes by strongly rational classes are strongly rational (Theorem~\ref{thm-geom-inflate-enum}) is fairly technical, but the underlying idea is quite simple: Given such a class $\mathcal{D}\subseteq \mathcal{C}[\mathcal{U}]$, where $\mathcal{C}$ is geometrically griddable and $\mathcal{U}$ is strongly rational, we find a suitable set of properties $\mathcal{P}$ so that we can encode all the requisite $\mathcal{P}$-frameworks by a regular language $L_\mathcal{D}$. Then we use a variant of Proposition \ref{prop-strong-rat-properties} to show that the generating functions for permutations in $\mathcal{U}$ described by arbitrary $\mathcal{Q}\subseteq\mathcal{P}$ are rational. Finally, we substitute these rational generating functions into the rational generating function for the language $L_\mathcal{D}$, yielding a rational generating function for $\mathcal{D}$. There are two major obstacles to this programme. The first is that for obvious reasons we need our set of properties to discriminate between inflations $\sigma[\alpha_1,\dots,\alpha_m]$ that belong to $\mathcal{D}$ and those that don't, and also, for technical reasons which will become apparent shortly, between the inflations that belong to $\mathcal{U}$ and those that don't. But we cannot assume that $\mathcal{U}$ (and hence $\mathcal{D}$) are finitely based and then use the basis permutations to construct $\mathcal{P}$. Fortunately, the pwo property comes to the rescue (Proposition \ref{prop-inflate-pwo}), and we can use \emph{relative bases} instead. Specifically, let $B_\mathcal{D}$ (respectively $B_\mathcal{U}$) be the set of all basis elements of $\mathcal{D}$ (respectively, $\mathcal{U}$) that belong to $\mathcal{C}[\mathcal{U}]$; note that $B_\mathcal{D}$ and $B_\mathcal{U}$ are both finite because $\mathcal{C}[\mathcal{U}]$ is pwo, and that \begin{eqnarray*} \mathcal{D}&=&\mathcal{C}[\mathcal{U}]\cap \operatorname{Av}(B_\mathcal{D}),\\ \mathcal{U}&=&\mathcal{C}[\mathcal{U}]\cap \operatorname{Av}(B_\mathcal{U}). \end{eqnarray*} We let $B=B_\mathcal{D}\cup B_\mathcal{U}$, and construct the set of properties $\mathcal{P}_B$ as in Section \ref{sec-frameworks}. The second obstacle is that as it stands, the family of properties $\mathcal{P}_B$ is still not sufficiently discriminating. Indeed, Proposition~\ref{prop-left-greedy-U-inflate} demonstrates that a single $\mathcal{P}_B$-framework may well describe both left-greedy and non-left-greedy $\mathcal{U}$-inflations. Consider, for example, a $\mathcal{P}_B$-framework of the form $12[\mathcal{Q}_1,\mathcal{Q}_2]$. Some $\mathcal{U}$-inflations described by this $\mathcal{P}_B$-framework will be left-greedy (if the first sum component of the second interval cannot `slide' to the first interval), while the others will not be. To address this issue, we say that the \emph{first component} of the permutation $\pi$ is the first sum component of $\pi$ if $\pi$ is sum decomposable, the first skew component of $\pi$ if $\pi$ is skew decomposable, and $\pi$ itself otherwise (if $\pi$ is neither sum nor skew decomposable). Observe that this notion is well-defined because no permutation is both sum and skew decomposable. We can now introduce the \emph{first component avoidance properties}: \[ \operatorname{Av}^{\rm \#1}(\delta) = \{\pi \::\: \mbox{the first component of $\pi$ avoids $\delta$}\}. 
\] We need the full range of these properties, \[ \mathcal{P}_B^{\rm \#1} = \{\operatorname{Av}^{\rm \#1}(\delta) \::\: \mbox{$\delta\le\beta$ for some $\beta\in B$}\}. \] The enlarged family of properties $\tilde{\mathcal{P}}_B=\mathcal{P}_B\cup\mathcal{P}_B^{\rm \#1}$ is query-complete. Indeed, it suffices to show that the properties $\mathcal{P}_B^{\rm \#1}$ of $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ are completely determined by $\sigma$ and the sets $\mathcal{Q}_i\cap\mathcal{P}_B$ of properties. If $\sigma=\tau\oplus\xi$ for a sum indecomposable $\tau$ and nonempty $\xi$, we see that \[ \operatorname{Av}^{\rm \#1}(\delta)\in\mathcal{P}_B^{\rm \#1}(\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]) \mbox{ if and only if } \operatorname{Av}(\delta)\in\mathcal{P}_B(\tau[\mathcal{Q}_1,\dots,\mathcal{Q}_{|\tau|}]), \] which can be determined from $\tau$ and $\mathcal{Q}_1,\dots,\mathcal{Q}_{|\tau|}$ because $\mathcal{P}_B$ is query-complete. The analogous assertion holds when $\sigma=\tau\ominus\xi$ for a skew indecomposable $\tau$ and nonempty $\xi$. If $\sigma$ is neither sum nor skew decomposable, then the criterion is even simpler: \[ \operatorname{Av}^{\rm \#1}(\delta)\in\mathcal{P}_B^{\rm \#1}(\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]) \mbox{ if and only if } \operatorname{Av}(\delta)\in\mathcal{P}_B(\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]), \] which again can be determined from $\sigma$ and $\mathcal{Q}_1,\dots,\mathcal{Q}_m$ because $\mathcal{P}_B$ is query-complete. We begin by reiterating that, because $B$ contains the relative bases for $\mathcal{D}$ and $\mathcal{U}$ in $\mathcal{C}[\mathcal{U}]$, the $\tilde{\mathcal{P}}_B$ frameworks respect the boundary between each of these two classes and its complement in $\mathcal{C}[\mathcal{U}]$. \begin{proposition} \label{nrlemma1} Given a $\tilde{\mathcal{P}}_B$-framework $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ with $\sigma\in\mathcal{C}$, either all $\mathcal{U}$-inflations described by it lie in $\mathcal{D}$ (respectively $\mathcal{U}$) or none do. \end{proposition} Next we show that $\tilde{\mathcal{P}}_B$-frameworks can also distinguish between left-greedy and non-left-greedy $\mathcal{U}$-inflations. \begin{proposition} \label{lem-firstcomp-capture} Given a $\tilde{\mathcal{P}}_B$-framework $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ with $\sigma\in\mathcal{C}$, either all $\mathcal{U}$-inflations described by it are left-greedy or none are. \end{proposition} \begin{proof} We need to show that either every $\mathcal{U}$-inflation described by $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ satisfies one of the three conditions (G1)--(G3) of Proposition~\ref{prop-left-greedy-U-inflate} or that none do. Suppose that some $\mathcal{U}$-inflation, say $\pi=\sigma[\alpha_1,\dots,\alpha_m]$, described by $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ satisfies at least one of these conditions. If this condition is (G1), the assertion follows from Proposition~\ref{nrlemma1}. Now suppose that (G1) is not satisfied, and that (G2) is. Thus $\sigma$ contains an increasing run $\sigma(i+1)=\sigma(i)+1$ and the sum of $\alpha_i$ and the first sum component of $\alpha_{i+1}$ lies in $\mathcal{U}$. Furthermore, since (G1) does not hold, this first sum component is not the entire $\alpha_{i+1}$. 
Translating to our properties, this will happen if and only if $D_{\mathord{\oplus}}\in \mathcal{Q}_{i+1}$, $D_{\mathord{\ominus}}\not\in\mathcal{Q}_{i+1}$, and for all $\delta_1,\delta_2$ such that $\operatorname{Av}(\delta_1)\not\in\mathcal{Q}_i\cap \mathcal{P}_B$ and $\operatorname{Av}^{\rm \#1}(\delta_2)\not\in\mathcal{Q}_{i+1}\cap\mathcal{P}_B^{\rm \#1}$ we have $\delta_1\oplus\delta_2\not\in B_\mathcal{U}$. Therefore (G2) will hold for $\pi$ if and only if it holds for all $\mathcal{U}$-inflations described by $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$. A similar argument applies in the case that (G3) is satisfied but (G1) is not, completing the proof. \end{proof} Proposition~\ref{lem-firstcomp-capture} allows us to call a non-empty $\tilde{\mathcal{P}}_B$-framework \emph{left-greedy} if every $\mathcal{U}$-inflation it describes is left-greedy. The price we pay for this additional discriminating power of $\tilde{\mathcal{P}}_B$ is that we must strengthen Proposition~\ref{prop-strong-rat-properties} to include first component properties. \begin{proposition} \label{prop-strong-rat-properties-firstcomp} Let $\mathcal{U}$ be a strongly rational permutation class and $B$ be a finite set of permutations. For every subset $\mathcal{Q}\subseteq\tilde{\mathcal{P}}_B$ of properties, the generating function for the set of permutations $\pi\in\mathcal{U}$ satisfying $\tilde{\mathcal{P}}_B(\pi)=\mathcal{Q}$ is rational. \end{proposition} \begin{proof} For any set $\mathcal{S}\subseteq\mathcal{P}_B$ of properties, let $g_\mathcal{S}$ denote the generating function for permutations in $\mathcal{U}$ satisfying $\mathcal{P}_B(\pi)=\mathcal{S}$; all such generating functions are rational by Proposition~\ref{prop-strong-rat-properties}. Further define \[ \mathcal{R}=\{\operatorname{Av}(\delta)\::\: \operatorname{Av}^{\rm \#1}(\delta)\in\mathcal{Q}\}, \] and let $f$ denote the generating function of permutations in $\mathcal{U}$ satisfying $\tilde{\mathcal{P}}_B(\pi)=\mathcal{Q}$. There are three cases to consider. First suppose that $D_{\mathord{\oplus}},D_{\mathord{\ominus}}\notin\mathcal{Q}$. Thus if $\pi$ satisfies $\tilde{\mathcal{P}}_B(\pi)=\mathcal{Q}$ then the first component of $\pi$ is $\pi$ itself, so $\pi\in\operatorname{Av}^{\rm \#1}(\delta)$ if and only if $\pi\in\operatorname{Av}(\delta)$. Therefore $f=0$ unless $\mathcal{R}=\mathcal{Q}\cap\mathcal{P}_B$, in which case $f=g_{\mathcal{Q}\cap\mathcal{P}_B}$, which is rational. Now suppose that $D_{\mathord{\oplus}}\in\mathcal{Q}$, so we aim to count sum decomposable permutations. If $D_{\mathord{\ominus}}\in\mathcal{Q}$ then $f=0$, so we may assume that $D_{\mathord{\ominus}}\notin\mathcal{Q}$. We now see that $f$ counts permutations of the form $\sigma\oplus\tau$ where $\mathcal{P}_B(\sigma)$ is equal to $\mathcal{R}$ or to $\mathcal{R}'=\mathcal{R}\cup\{D_{\mathord{\ominus}}\}$, and $\mathcal{P}_B(\sigma\oplus\tau)=\mathcal{Q}\cap\mathcal{P}_B=\mathcal{T}$. Thus we have that \[ f = \sum_{\mathcal{S}\::\:\mathcal{P}_B(12[\mathcal{R},\mathcal{S}])=\mathcal{T}} g_{\mathcal{R}}g_{\mathcal{S}} + \sum_{\mathcal{S}\::\:\mathcal{P}_B(12[\mathcal{R}',\mathcal{S}])=\mathcal{T}} g_{\mathcal{R}'}g_{\mathcal{S}}, \] which is also rational. The case where $D_{\mathord{\ominus}}\in\mathcal{Q}$ is analogous, completing the proof. \end{proof} A crucial step in our argument is a regular encoding of left-greedy $\tilde{\mathcal{P}}_B$-frameworks. 
While we know (in principle) how to recognise a single left-greedy framework, this does not help to encode them all simultaneously. To address this issue, we adapt the marking technique from Albert, Atkinson, Bouvel, Ru\v{s}kuc, and Vatter~\cite{albert:geometric-grid-:}. A \emph{marked permutation} is a permutation in which the entries are allowed to be marked, which we designate with an overline. The intention of a marking is to highlight some special characteristic of the marked entries. A \emph{marked $\mathcal{P}$-framework} is then a $\mathcal{P}$-framework $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ in which the skeleton $\sigma$ is marked. The mapping $\varphi^{\mathcal{P}}$ defined in Section \ref{sec-frameworks} can be extended in a natural manner to a mapping $\overline{\varphi}^{\mathcal{P}}$ from $\left(\left(\Sigma\times 2^{\mathcal{P}}\right)\cup \left(\overline{\Sigma}\times 2^{\mathcal{P}}\right)\right)^\ast$ to the set of marked $\mathcal{P}$-frameworks $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ with $\sigma\in\mathcal{C}$; here $\overline{\Sigma}=\{\overline{a}\::\: a\in\Sigma\}$ is the \emph{marked cell alphabet}, and $\overline{\varphi}^{\mathcal{P}}$ maps marked letters to marked entries. We can extend the order on $\mathcal{P}$-frameworks defined in Section~\ref{sec-frameworks} to this context as follows: for two marked frameworks we write \[ \tau[\mathcal{R}_1,\dots,\mathcal{R}_k]\le\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m] \] if there are indices $1\le i_1<\cdots<i_k\le m$ such that \begin{itemize} \item $\sigma(i_1),\sigma(i_2),\dots,\sigma(i_k)$ is order isomorphic to $\tau$ (as ordinary, unmarked permutations); and \item for all $1\le j\le k$, $\sigma(i_j)$ is marked if and only if $\tau(j)$ is marked; and \item for all $1\le j\le k$, $\mathcal{R}_j=\mathcal{Q}_{i_j}$. \end{itemize} With this order, it follows from Proposition~\ref{prop-properties-of-bij} that the mapping $\overline{\varphi}^{\mathcal{P}}$ is order-preserving. \begin{theorem} \label{thm-geom-inflate-enum} The class $\mathcal{C}[\mathcal{U}]$ is strongly rational for all geometrically griddable classes $\mathcal{C}$ and strongly rational classes $\mathcal{U}$. \end{theorem} \begin{proof} We retain the set-up introduced so far, so $\mathcal{C}\subseteq\operatorname{Geom}(M)$ for a partial multiplication matrix $M$ with cell alphabet $\Sigma$, $\mathcal{D}$ is a subclass of $\mathcal{C}[\mathcal{U}]$, and $B=B_\mathcal{D}\cup B_\mathcal{U}$ where $B_\mathcal{D}$ and $B_\mathcal{U}$ denote, respectively, the relative bases of $\mathcal{D}$ and $\mathcal{U}$ in $\mathcal{C}[\mathcal{U}]$. We now seek to encode the nonempty, left-greedy $\tilde{\mathcal{P}}_B$-frameworks by means of a regular language and the mapping $\varphi^{\tilde{\mathcal{P}}_B}$. To do this we mark an interval of $\sigma$ which might satisfy one of the conditions (G1)--(G3) of Proposition~\ref{prop-left-greedy-U-inflate}.
To this end, we say that a marking of a $\tilde{\mathcal{P}}_B$-framework $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ is \emph{threatening} if the marked entries constitute a (possibly trivial) interval of $\sigma$ given by the indices $\interval{i}{j}$ which is order isomorphic to a permutation $\tau$, and either \begin{itemize} \item the permutations described by $\tau[\mathcal{Q}_i,\mathcal{Q}_{i+1},\dots,\mathcal{Q}_j]$ lie in $\mathcal{U}$ (corresponding to (G1)); or \item $|\tau|=2$ and $\tau[\mathcal{Q}_i,\mathcal{Q}_{i+1}]$ is not a left-greedy $\tilde{\mathcal{P}}_B$-framework (corresponding to (G2) and (G3)). \end{itemize} Note that every marked $\tilde{\mathcal{P}}_B$-framework with zero, one, or all marked entries is threatening, and thus every $\tilde{\mathcal{P}}_B$-framework has several threatening markings. However, if a $\tilde{\mathcal{P}}_B$-framework has a threatening marking with two or more but not all marked entries, then that $\tilde{\mathcal{P}}_B$-framework is not left-greedy. Therefore, the left-greedy $\tilde{\mathcal{P}}_B$-frameworks are precisely the $\tilde{\mathcal{P}}_B$-frameworks which do not have such markings, and our goal is to identify them. Importantly, given a threateningly marked $\tilde{\mathcal{P}}_B$-framework $\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ which describes permutations from $\mathcal{D}$ (recall Proposition~\ref{nrlemma1}), if $\tau[\mathcal{R}_1,\dots,\mathcal{R}_k]\le\sigma[\mathcal{Q}_1,\dots,\mathcal{Q}_m]$ in the order defined above, then $\tau[\mathcal{R}_1,\dots,\mathcal{R}_k]$ is also threateningly marked (and also describes permutations in $\mathcal{D}$). Therefore the set of all threateningly marked $\tilde{\mathcal{P}}_B$-frameworks is a downset. Since $\overline{\varphi}^{\tilde{\mathcal{P}}_B}$ is order-preserving, the pre-image of this downset \[ \overline{J}= \left\{ w\in\left(\left(\Sigma\times 2^{\tilde{\mathcal{P}}_B}\right)\cup \left(\overline{\Sigma}\times 2^{\tilde{\mathcal{P}}_B}\right)\right)^\ast \::\: \overline{\varphi}^{\tilde{\mathcal{P}}_B}(w) \mbox{ is threateningly marked} \right\} \] is subword-closed, and thus regular. Now let \[ \Gamma \::\: \left(\left(\Sigma\times 2^{\tilde{\mathcal{P}}_B}\right)\cup \left(\overline{\Sigma}\times 2^{\tilde{\mathcal{P}}_B}\right)\right)^\ast \rightarrow \left(\Sigma\times 2^{\tilde{\mathcal{P}}_B}\right)^\ast \] denote the homomorphism which removes markings. Because every $\tilde{\mathcal{P}}_B$-framework has a threatening marking, $\Gamma(\overline{J})$ encodes all $\tilde{\mathcal{P}}_B$-frameworks. We want to remove from $\Gamma(\overline{J})$ the encodings of the non-left-greedy $\tilde{\mathcal{P}}_B$-frameworks. These non-left-greedy frameworks are precisely the frameworks which have a marked encoding in $\overline{J}\cap\overline{K}$ where \[ \overline{K} = \left\{ \begin{array}{l} \mbox{words in $\left(\left(\Sigma\times 2^{\tilde{\mathcal{P}}_B}\right)\cup \left(\overline{\Sigma}\times 2^{\tilde{\mathcal{P}}_B}\right)\right)^\ast$ with at}\\ \mbox{least two marked letters and at least one unmarked letter} \end{array}\right\}. \] The language $\overline{K}$ is clearly regular. Furthermore, $\Gamma(\overline{J})$ and $\Gamma(\overline{J}\cap\overline{K})$ are both regular as homomorphic images of regular languages. Therefore the language $L_\mathcal{D}=\Gamma(\overline{J})\setminus\Gamma(\overline{J}\cap\overline{K})$ is regular, and it encodes the nonempty, left-greedy $\tilde{\mathcal{P}}_B$-frameworks describing permutations from $\mathcal{D}$.
Recall that every permutation $\pi\in\mathcal{D}$ has a unique left-greedy $\mathcal{U}$-decomposition $\pi=\sigma[\alpha_1,\dots,\alpha_m]$ with $\sigma\in\mathcal{C}$ and $\alpha_1,\dots,\alpha_m\in\mathcal{U}$. Furthermore, recall that every $\alpha\in\mathcal{U}$ is described by a unique $\tilde{\mathcal{P}}_B$ framework. Therefore, a generating function for the class $\mathcal{D}$ is obtained by taking the generating function $g=\sum_{w\in L_\mathcal{D}} w$ in non-commuting variables representing the letters of our alphabet, and substituting for each letter $(u,\mathcal{Q})$ the generating function $f_\mathcal{Q}$ for the set of all permutations in $\mathcal{U}$ described by $\mathcal{Q}$. The function $g$ is rational because $L_\mathcal{D}$ is a regular language, and the functions $f_\mathcal{Q}$ are rational by Proposition~\ref{prop-strong-rat-properties-firstcomp}. It follows that $\mathcal{D}$ itself has a rational generating function, and the theorem is proved. \end{proof} \section{Small Permutation Classes} \label{sec-spc-rational} With Theorem~\ref{thm-geom-inflate-enum}, we have all the enumerative machinery we need to prove that small permutation classes are strongly rational, but we must spend a bit of time beforehand aligning the results of Vatter~\cite{vatter:small-permutati:} with those of this paper. One of the biggest differences between the two approaches is that the grid classes we have discussed so far are much more constrained than the generalised grid classes of Vatter~\cite{vatter:small-permutati:}. Suppose that $\mathcal{M}$ is a $t\times u$ matrix of permutation classes (we use calligraphic font for matrices containing permutation classes). An {\it $\mathcal{M}$-gridding\/} of the permutation $\pi$ of length $n$ in this context is a pair of sequences $1=c_1\le\cdots\le c_{t+1}=n+1$ (the column divisions) and $1=r_1\le\cdots\le r_{u+1}=n+1$ (the row divisions) such that for all $1\le k\le t$ and $1\le\ell\le u$, the entries of $\pi$ from indices $c_k$ up to but not including $c_{k+1}$, which have values from $r_{\ell}$ up to but not including $r_{\ell+1}$ are either empty or order isomorphic to an element of $\mathcal{M}_{k,\ell}$. The {\it grid class of $\mathcal{M}$\/}, written $\operatorname{Grid}(\mathcal{M})$, consists of all permutations which possess an $\mathcal{M}$-gridding. Furthermore, we say that the permutation class $\mathcal{C}$ is {\it $\mathcal{D}$-griddable\/} if $\mathcal{C}\subseteq\operatorname{Grid}(\mathcal{M})$ for some (finite) matrix $\mathcal{M}$ whose entries are all equal to $\mathcal{D}$. Between these generalised grid classes and the geometric grid classes we have been considering lie the \emph{monotone grid classes}, of the form $\operatorname{Grid}(\mathcal{M})$ for a matrix $\mathcal{M}$ whose entries are restricted to $\emptyset$, $\operatorname{Av}(21)$, and $\operatorname{Av}(12)$. When considering monotone grid classes we abbreviate these three classes to $0$, $1$, and $-1$ (respectively). The class $\mathcal{C}$ is \emph{monotone griddable} if $\mathcal{C}$ lies in $\operatorname{Grid}(M)$ for some $0/\mathord{\pm} 1$ matrix $M$. To explain the relationship between monotone and geometric grid classes we need to introduce a graph. The \emph{row-column graph} of a $t\times u$ matrix $M$ is the bipartite graph on the vertices $x_1$, $\dots$, $x_t$, $y_1$, $\dots$, $y_u$ where $x_k\sim y_\ell$ if and only if $M_{k,\ell}\neq 0$. 
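For instance, the row-column graph of $\fnmatrix{rr}{1&-1}$ is a path on three vertices, and hence a forest, while the row-column graph of $\fnmatrix{rr}{1&-1\\-1&1}$, all of whose entries are nonzero, is a cycle on four vertices.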
It can then be shown (see Albert, Atkinson, Bouvel, Ru\v{s}kuc, and Vatter~{\cite[Theorem 3.2]{albert:geometric-grid-:}}) that $\operatorname{Geom}(M)=\operatorname{Grid}(M)$ if and only if the row-column graph of $M$ is a forest. In order to use grid classes to describe small permutation classes one needs the following generalisation of a result of Huczynska and Vatter~\cite{huczynska:grid-classes-an:}. \begin{theorem}[Vatter~{\cite[Theorem 3.1]{vatter:small-permutati:}}] \label{gridding-characterization} A permutation class is $\mathcal{D}$-griddable if and only if it does not contain arbitrarily long sums or skew sums of basis elements of $\mathcal{D}$. \end{theorem} We now introduce a specific class. The \emph{increasing oscillating sequence} is the infinite sequence defined by \[ 4,1,6,3,8,5,\dots,2k+2,2k-1,\dots \] (which contains every positive integer except $2$). An {\it increasing oscillation\/} is any sum indecomposable permutation that is contained in the increasing oscillating sequence (this term dates back to at least Pratt~\cite{pratt:computing-permu:}). A \emph{decreasing oscillation} is the reverse of an increasing oscillation, and collectively these permutations are called \emph{oscillations}. \begin{figure} \caption{The four oscillations of length $9$.} \label{fig-osc} \end{figure} We let $\mathcal{O}$ denote the \emph{downward closure} of the set of (increasing and decreasing) oscillations; in other words, $\mathcal{O}$ consists of all oscillations and their subpermutations. Further let $\mathcal{O}_k$ denote the downward closure of the set of oscillations of length at most $k$; this is a finite class. Using Theorem~\ref{gridding-characterization} and Schmerl and Trotter's Theorem~\ref{thm-schmerl-trotter}, Vatter~\cite{vatter:small-permutati:} showed via a computational argument that every small permutation class is $\langle\mathcal{O}\rangle$-griddable. In fact, a much stronger result holds. It can be shown that the growth rate of the downward closure of the set of increasing oscillations is precisely equal to $\kappa$. Therefore, if $\mathcal{C}$ contains all increasing oscillations, it is not small. Moreover, the increasing oscillations `almost' form a chain, and so if a class does not contain one increasing oscillation, there is a bound on the length of the increasing oscillations it can contain. By symmetry, every small permutation class also has a bound on the length of decreasing oscillations it can contain, and thus every small permutation class is actually $\langle\mathcal{O}_k\rangle$-griddable for some integer $k$. Because classes containing permutations with complicated substitution decompositions can be shown to have growth rates greater than $\kappa$ (via another computational argument), we can say more about the griddability of small permutation classes. First, define the class $\tilde{\mathcal{O}}_k$ by \[ \tilde{\mathcal{O}}_k=\mathcal{O}_k\cup\operatorname{Av}(21)\cup\operatorname{Av}(12), \] i.e., the oscillations of length at most $k$ together with the monotone permutations of all lengths. Via a minor translation in notation, and recalling the $\mathcal{C}^{[d]}$ construction from Proposition \ref{prop-subst-completion-const}, we quote the following result. \begin{theorem}[Vatter~{\cite[Theorems 4.3 and 4.4]{vatter:small-permutati:}}] \label{thm-subst-gridding-main} Every small permutation class is $\tilde{\mathcal{O}}_k^{[d]}$-griddable for some choice of integers $k$ and $d$. 
\end{theorem} The restriction to $\tilde{\mathcal{O}}_k^{[d]}$-griddings is important for two reasons. The first is that these classes are strongly rational. Indeed, by iterating Theorem~\ref{thm-geom-inflate-enum}, we obtain the following. \begin{corollary} \label{cor-bounded-depth-strong-rat} If the class $\mathcal{C}$ is geometrically griddable, then the class $\mathcal{C}^{[d]}$ is strongly rational for every $d$. \end{corollary} Clearly $\tilde{\mathcal{O}}_k$, which contains only finitely many nonmonotone permutations, is geometrically griddable. Therefore Corollary~\ref{cor-bounded-depth-strong-rat} implies that $\tilde{\mathcal{O}}_k^{[d]}$ is strongly rational for all $d$ and $k$. The second benefit of the restriction to $\tilde{\mathcal{O}}_k^{[d]}$-griddings is that, because $\tilde{\mathcal{O}}_k^{[d]}$ contains neither long simple permutations nor complicated substitution decompositions, a quite technical argument allows us to `slice' the $\tilde{\mathcal{O}}_k^{[d]}$-griddings of small permutation classes, as formalised below. \begin{theorem}[Vatter~{\cite[Theorem 5.4]{vatter:small-permutati:}}] \label{small-classes-gridding} Every small permutation class is $M$-griddable for a matrix $M$ in which: \begin{enumerate} \item[(S1)] every entry is $\tilde{\mathcal{O}}_k^{[d]}$, $\operatorname{Av}(21)$, $\operatorname{Av}(12)$, or the empty set; \item[(S2)] every entry equal to $\tilde{\mathcal{O}}_k^{[d]}$ is the unique nonempty entry in its row and column; and \item[(S3)] if two nonempty entries share a row or a column with each other (in which case they both must be monotone by (S2)), then neither shares a row or column with any other nonempty entry. \end{enumerate} \end{theorem} Condition (S2) shows that every small permutation class is $M$-griddable for a matrix $M$ in which every pair of `interacting' cells is monotone. Therefore, if $M$ contains any nonmonotone cells, we may as well view them as inflations of a singleton cell, or indeed, of any type of monotone cell at all. We can express this consequence of Theorem~\ref{small-classes-gridding} in the language of monotone grid classes by saying that every small permutation class is contained in $\operatorname{Grid}(M)[\tilde{\mathcal{O}}_k^{[d]}]$ for some $0/\mathord{\pm} 1$ matrix $M$ and integers $k$ and $d$. Furthermore, condition (S3) implies that this matrix $M$ can be taken so that its row-column graph is a forest, so $\operatorname{Grid}(M)=\operatorname{Geom}(M)$. Therefore we see that, for every small permutation class $\mathcal{C}$, there is a $0/\mathord{\pm} 1$ matrix $M$ and integers $k$ and $d$ such that \[ \mathcal{C}\subseteq\operatorname{Geom}(M)[\tilde{\mathcal{O}}_k^{[d]}]. \] From here, we need only apply Theorem~\ref{thm-geom-inflate-enum} to establish the desired result. \begin{theorem} \label{thm-small-rational} All small permutation classes have rational generating functions. \end{theorem} \section{Conclusion} \label{sec-conclusion} We have extended the substitution decomposition to handle enumeration far beyond the initial investigations of Albert and Atkinson~\cite{albert:simple-permutat:}, to the point where these techniques apply to all permutation classes of growth rate less than $\kappa\approx 2.20557$. Still, it is worth reflecting on how difficult the enumeration of permutation classes remains. Over fifteen years ago Noonan and Zeilberger~\cite{noonan:the-enumeration:} suggested that every finitely based permutation class has a holonomic generating function. 
Roughly ten years after that, Zeilberger (see Elder and Vatter~\cite{elder:problems-and-co:}) conjectured precisely the opposite, in fact specifying a potential counterexample: he speculated that $\operatorname{Av}(1324)$ might not have a holonomic generating function. Perhaps even if the generating functions of permutation classes are not well behaved, their growth rates might be. Balogh, Bollob\'as, and Morris~\cite{balogh:hereditary-prop:ordgraphs} were overly optimistic in this direction: they made a conjecture whose truth would have implied that all growth rates of permutation classes are algebraic numbers, which was disproved by Albert and Linton~\cite{albert:growing-at-a-pe:} (and even more starkly by Vatter~\cite{vatter:permutation-cla}). However, Klazar~\cite{klazar:overview-of-som} has suggested that their conjecture might be true for all finitely based classes. Moving from general concerns to more local matters, throughout this work we have routinely required geometric griddability as a hypothesis, and it is natural to ask if this can be replaced by the weaker condition of strong rationality. In general the answer is no, and essentially all attempts are thwarted by a particular strongly rational class. We feel it might be edifying to ponder this class and what extensions of our results it \emph{does not} rule out, so we describe it in some detail. To begin with, we need the \emph{increasing oscillating antichain}. To construct this antichain, take the set of increasing oscillations (oriented as on the far left of Figure~\ref{fig-osc}) of odd length at least three and `anchor' the two ends of the increasing oscillations by inflating the first and the greatest entry of each by the permutation $12$. Thus the first element of the antichain is $231[12,12,1]=23451$, while the fourth element is \[ 241638597[12,1,1,1,1,1,1,12,1]=2\ 3\ 5\ 1\ 7\ 4\ 9\ 6\ 10\ 11\ 8, \] shown on the left of Figure~\ref{fig-u4} (which also gives a sketch of the proof that it \emph{is} an antichain; numerous formal proofs exist elsewhere). Let $A$ denote this antichain. \begin{figure} \caption{On the left, a member of the infinite antichain $A$. It is easiest to see that $A$ forms an antichain by considering the inversion graphs (or permutation graphs) of its members (right), which form an infinite antichain of graphs under the induced subgraph order.} \label{fig-u4} \end{figure} By definition we see that $A\subseteq\mathcal{O}[\{1,12\}]$. Moreover, $\mathcal{O}$ can be seen to be strongly rational in several ways. Perhaps the most systematic method is to consider the rank encodings of Albert, Atkinson, and Ru\v{s}kuc~\cite{albert:regular-closed-:}; in this encoding the class $\mathcal{O}$ and all its subclasses are in bijection with regular languages. The class $\mathcal{O}[\{1,12\}]$ contains $A$ and thus is not pwo. As observed in Proposition~\ref{prop-strong-rat-pwo}, this implies that $\mathcal{O}[\{1,12\}]$ is not strongly rational, and this fact dooms all naive generalisations of our results. However, the rank encoding can be used to show that every \emph{finitely based} subclass of $\mathcal{O}[\{1,12\}]$ has a rational generating function, so one might optimistically hope that the following question has a positive answer.
\begin{question} \label{ques-strong-rat-completion-enum} If $\mathcal{C}$ and $\mathcal{U}$ are both strongly rational classes, does every finitely based subclass of $\langle\mathcal{C}\rangle$ (resp., $\mathcal{C}[\mathcal{U}]$) have an algebraic (resp., a rational) generating function? \end{question} As stated above, the conclusion about rationality holds for $\mathcal{C}=\mathcal{O}$ and $\mathcal{U}=\{1,12\}$. It may be enlightening to answer this question in the special case where $\mathcal{C}=\mathcal{O}$ and $\mathcal{U}$ is an arbitrary strongly rational class. Answering Question~\ref{ques-strong-rat-completion-enum} in general would almost surely require a greater understanding of the simple permutations in strongly rational classes. Although Proposition~\ref{prop-strong-rat-indecomps} gives us a very good idea of the structure and enumeration of sum indecomposable permutations in a strongly rational class, its simple permutation analogue is still open: \begin{conjecture} If the class $\mathcal{C}$ is strongly rational, then the simple permutations in $\mathcal{C}$ have a rational generating function. \end{conjecture} \end{document}
math
\begin{document} \title[Large time behavior of unbounded solutions]{Large time behavior of unbounded solutions of first-order Hamilton-Jacobi equations in the whole space} \author[G. Barles, O. Ley, T.-T. Nguyen, T. V. Phan]{Guy Barles \and Olivier Ley \and Thi-Tuyen Nguyen \and Thanh Viet Phan} \address{LMPT, F\'ed\'eration Denis Poisson, Universit\'e Fran\c{c}ois-Rabelais Tours, France} \email{[email protected]} \address{IRMAR, INSA Rennes, France} \email{[email protected]} \address{Dipartimento di Matematica, Universit\`a di Padova, Italy} \email{[email protected]} \address{Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam} \email{[email protected]} \begin{abstract} We study the large time behavior of solutions of first-order convex Hamilton-Jacobi equations of Eikonal type set in the whole space. We assume that the solutions may have arbitrary growth. A complete study of the structure of solutions of the ergodic problem is provided: contrary to the periodic setting, the ergodic constant is no longer unique, leading to different large time behaviors for the solutions. We establish the ergodic behavior of the solutions of the Cauchy problem (i) when starting with a bounded from below initial condition and (ii) for some particular unbounded from below initial condition, two cases in which different ergodic constants play a role. When the solution is not bounded from below, an example showing that the convergence may fail in general is provided. \end{abstract} \subjclass[2010]{Primary 35F21; Secondary 35B40, 35Q93, 49L25} \keywords{Hamilton-Jacobi equations; asymptotic behavior; ergodic problem; unbounded solutions; viscosity solutions} \maketitle \section{Introduction} This work is concerned with the large time behavior of unbounded solutions of the first-order Hamilton-Jacobi equation \begin{eqnarray} \label{lc-cauchy} \begin{cases} u_t(x,t) + H(x,Du(x,t)) = l(x), \qquad {\rm in } \ \mathbb R^N \times (0, +\infty),\\ u(\cdot,0) = u_0(\cdot) \qquad {\rm in } \ \mathbb R^N, \end{cases} \end{eqnarray} where $H\in W_{\rm loc}^{1,\infty}(\mathbb R^N\times \mathbb R^N)$ satisfies \begin{eqnarray} && \text{There exists $\nu\in C(\mathbb R^N),$ $\nu >0$ such that } H(x,p)\geq \nu (x) |p|, \label{Hcoerc}\\ && 0 =H(x,0) < H(x,p) \ \ \text{ for $p\not= 0$},\label{Hnulle}\\ && H(x,\cdot) \text{ is convex,} \label{Hconvex}\\ && \text{There exist a constant $C_H>0$ and, for all $R>0$, a constant $k_R$ such that}\nonumber\\ && |H(x,p)-H(y,q)|\leq k_R(1+|p|)|x-y|+C_H |p-q|,\label{Hlip}\\ && \text{for all $|x|,|y|\leq R$, $p,q\in\mathbb R^N.$}\nonumber \end{eqnarray} We always assume $u_0, l\in C(\mathbb R^N)$ and \begin{eqnarray}\label{bd-below} l \geq 0 \quad\text{in $\mathbb R^N$}\; . \end{eqnarray} These assumptions are those used in the so-called Namah-Roquejoffre case introduced in~\cite{nr99} in the periodic setting, and in Barles-Roquejoffre~\cite{br06} in the unbounded setting. They are not the most general ones but, for simplicity, we choose to state them as above since they are tailored to encompass the classical Eikonal equation \begin{equation}\label{eik} u_t(x,t) + a(x)|Du(x,t)| = l(x), \quad \hbox{in }\mathbb R^N\times (0,+\infty), \end{equation} where $a(\cdot)$ is a locally Lipschitz, bounded function such that $a(x)>0$ in $\mathbb R^N$.
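As a quick check that these assumptions indeed cover~\eqref{eik}, note that the Hamiltonian $H(x,p)=a(x)|p|$ satisfies~\eqref{Hcoerc} with $\nu=a$, satisfies~\eqref{Hnulle} and~\eqref{Hconvex} trivially, and satisfies~\eqref{Hlip} with $C_H=\sup_{\mathbb R^N}a$ and $k_R$ a Lipschitz constant of $a$ on $\overline{B}(0,R)$, since \begin{eqnarray*} && \big|a(x)|p|-a(y)|q|\big|\leq |a(x)-a(y)|\,|p|+a(y)\,\big||p|-|q|\big| \leq k_R(1+|p|)|x-y|+C_H|p-q| \end{eqnarray*} for all $|x|,|y|\leq R$ and $p,q\in\mathbb R^N$.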
The assumption~\eqref{Hcoerc} is a coercivity assumption, which may be replaced by~\eqref{Hcoerc-gene}. We may also replace~\eqref{bd-below} by the assumption that $l$ is bounded from below, provided that $H(x,0)- {\rm inf}_{\mathbb R^N}l=0$ in~\eqref{Hnulle}. Our goal is to prove that, under suitable additional assumptions, there exists a unique viscosity solution $u$ of~\eqref{lc-cauchy} and that this solution satisfies \begin{eqnarray*} && u(x,t)+ct \to v(x) \ \ \text{in $C(\mathbb R^N)$ as $t\to +\infty,$} \end{eqnarray*} where $(c,v)\in \mathbb R_+\times C(\mathbb R^N)$ is a solution to the ergodic problem \begin{eqnarray}\label{lc-erprob} && H(x,Dv(x)) = l(x) + c \qquad \text{in } \mathbb R^N. \end{eqnarray} This problem has not been widely studied compared to the periodic case~\cite{fathi98, nr99, bs00, fs04, ds06, bim13, abil13} and references therein. The main works in the unbounded setting are Barles-Roquejoffre~\cite{br06}, which extends the well-known periodic result of Namah-Roquejoffre~\cite{nr99}, the works of Ishii~\cite{ishii08} and Ichihara-Ishii~\cite{ii09}. A very interesting reference is the review of Ishii~\cite{ishii09}. We will compare our results with the existing ones more precisely below, but let us mention that our main goal is to make more precise the large time behavior for the Eikonal Equation~\eqref{eik} in a setting where the equation is well-posed for solutions with arbitrary growth, which brings delicate issues. Most of our results were already obtained or are close to results of~\cite{br06, ii09}, but we use pure PDE arguments to prove them, without using Weak KAM methods or making a priori assumptions on the structure of solutions or subsolutions of~\eqref{lc-erprob}. Changing $u(x,t)$ into $u(x,t)-\inf_{\mathbb R^N}\{l\}t$ allows us to reduce to the case when $\inf_{\mathbb R^N}l=0$ and we are going to work in that case to simplify the exposition. Taking this into account, we use below the assumption \begin{eqnarray}\label{lc-argmin-l} && \mathop{\rm lim\,inf}_{|x|\to +\infty} l(x) > \inf_{\mathbb R^N} (l)=0, \end{eqnarray} which is a compactness assumption in the sense that it implies \begin{eqnarray*} && \mathcal{A}:= {\rm argmin}\, l = \{x\in\mathbb R^N : l(x)=0\} \ \text{ is a nonempty compact subset of $\mathbb R^N$.} \end{eqnarray*} This subset corresponds to the Aubry set in the framework of Weak KAM theory. Our first main result collects all the properties we obtain for the solutions of~\eqref{lc-erprob}. \begin{theorem}\label{lc-thm:ergodic} (Ergodic problem)\ \\ Assume that $0\leq l\in C(\mathbb R^N)$ and $H\in C(\mathbb R^N\times \mathbb R^N).$ \noindent (i) If $H$ satisfies~\eqref{Hcoerc} and $H(x,0)=0$ for all $x\in \mathbb R^N$ then, for all $c\geq 0,$ there exists a solution $(c,v) \in \mathbb R_+ \times W_{\rm loc}^{1,\infty}(\mathbb R^N)$ of~\eqref{lc-erprob}. \noindent (ii) Assume that~\eqref{lc-cauchy} satisfies a comparison principle in $C(\mathbb R^N \times [0,+\infty)).$ If $(c,v)$ and $(d,w)$ are solutions of~\eqref{lc-erprob} with $\sup_{\mathbb R^N}|v-w| <\infty,$ then $c=d$.
\noindent (iii) If $\mathcal{A}\not=\emptyset$ and $H$ satisfies~\eqref{Hcoerc} and~\eqref{Hnulle}, then there exists a solution $(c,v) \in \mathbb R_+ \times W_{\rm loc}^{1,\infty}(\mathbb R^N)$ of~\eqref{lc-erprob} with $c= 0$ and $v\geq 0.$ If, in addition, \begin{eqnarray}\label{Hbornesup} && \text{$H(x,p)\leq m(|p|)$ for some increasing function $m\in C(\mathbb R_+,\mathbb R_+)$,} \end{eqnarray} and $\mathcal{A}$ satisfies~\eqref{lc-argmin-l}, then $v(x)\to +\infty$ as $|x|\to \infty.$ \noindent (iv) Let $c>0.$ If $H$ satisfies~\eqref{Hbornesup} then any solution $(c,v)$ of~\eqref{lc-erprob} is unbounded from below. If $H$ satisfies $H(x,0)=0$ and~\eqref{Hconvex} and $(c,v),$ $(c,w)$ are two solutions of~\eqref{lc-erprob} with $v(x)-w(x)\to 0$ as $|x|\to\infty,$ then $v=w.$ \end{theorem} \noindent The situation is completely different from the periodic setting, where there is a unique ergodic constant (or critical value) for which~\eqref{lc-erprob} has a solution (e.g., Lions-Papanicolaou-Varadhan~\cite{lpv86} or Fathi-Siconolfi~\cite{fs05}). We recover some results of Barles-Roquejoffre~\cite{br06} and Fathi-Maderna~\cite{fm07}, see Remark~\ref{disc-ergo} for a discussion. As far as the case of unbounded solutions of elliptic equations is concerned, let us mention the recent work of Barles-Meireles~\cite{bm16} and the references therein. Coming back to~\eqref{lc-cauchy}, when $H$ satisfies~\eqref{Hlip}, we have a comparison principle by a ``finite speed of propagation'' type argument, which allows us to compare sub- and supersolutions without growth conditions (\cite{ishii84, ley01} and Theorem~\ref{lc-compathm}). It follows that there exists a unique continuous solution defined for all time as soon as there exist a sub- and a supersolution. \begin{proposition}\label{lc-exis-uniq-cauchy} Assume that $l\geq 0$ and $H$ satisfies~\eqref{Hcoerc} and~\eqref{Hlip}. Let $u_0\in C(\mathbb R^N)$ and $c\geq 0.$ \noindent (i) There exists a smooth supersolution $(c,v^+)$ of~\eqref{lc-erprob} satisfying $u_0\leq v^+$ in $\mathbb R^N.$ \noindent (ii) If there exists a subsolution $(c,v^-)$ of~\eqref{lc-erprob} satisfying $v^-\leq u_0$ in $\mathbb R^N,$ then there exists a unique viscosity solution $u\in C(\mathbb R^N\times [0,+\infty))$ of~\eqref{lc-cauchy} such that \begin{eqnarray}\label{encadrement-u} && v^-(x)\leq u(x,t)+ct\leq v^+(x) \qquad\text{for all $(x,t)\in\mathbb R^N\times [0,+\infty).$} \end{eqnarray} \end{proposition} \noindent Notice that the existence of a subsolution is given by~\eqref{Hnulle} for instance. We give two convergence results depending on the critical value $c=0$ or $c> 0.$ \begin{theorem}\label{lc-tm:longtime} (Large time behavior starting with bounded from below initial data) Assume~\eqref{Hcoerc}-\eqref{Hnulle}-\eqref{Hconvex}-\eqref{Hlip}, $l\geq 0,$ and \eqref{lc-argmin-l}. Then, for every bounded from below initial data $u_0$, the unique viscosity solution $u$ of~\eqref{lc-cauchy} satisfies \begin{eqnarray}\label{cv-thm-main} u(x,t) \mathop{\to}_{t\to +\infty} v(x) \quad \text{locally uniformly in $\mathbb R^N,$} \end{eqnarray} where $(0,v)$ is a solution to~\eqref{lc-erprob}.
\end{theorem} \begin{theorem}\label{lc-tm:longtime-infty} (Large time behavior starting from particular unbounded from below initial data) Assume~\eqref{Hcoerc}-\eqref{Hnulle}-\eqref{Hconvex}-\eqref{Hlip}, $l\geq 0$, and let $(c,v)$ be a solution of~\eqref{lc-erprob} with $c> 0.$ If there exists a subsolution $(0,\psi)$ of~\eqref{lc-erprob} such that the initial data $u_0$ satisfies \begin{eqnarray}\label{u0-v-to0} && \min \{\psi(x),u_0(x)\}- \min \{\psi(x),v(x)\} \mathop{\to} 0 \quad \text{as $|x|\to +\infty$,} \end{eqnarray} then there exists a unique viscosity solution $u$ of~\eqref{lc-cauchy} and $u(x,t)+ct \mathop{\to} v(x)$ locally uniformly in $\mathbb R^N$ as $t\to +\infty.$ \end{theorem} \noindent Let us comment on these results. The first convergence result means that, starting from any bounded from below initial condition (with arbitrary growth from above), the unique viscosity solution of~\eqref{lc-cauchy} converges to a solution $(c,v)$ of the ergodic problem~\eqref{lc-erprob}, which is given by Theorem~\ref{lc-thm:ergodic}(iii), i.e., with $c=0$ and $v\geq 0,$ $v\to +\infty$ at infinity. When $u_0$ is not bounded from below, even if it is close to a solution of the ergodic problem, we give an example showing that the convergence may fail, see Section~\ref{lc-sec:exple}, where several examples and interpretations in terms of the underlying optimal control problem are given. To describe the second convergence result, suppose that~\eqref{u0-v-to0} holds with the particular constant subsolution $(0,M)$ for some constant $M$. In this case,~\eqref{u0-v-to0} is equivalent to $(u_0-v)(x)\to 0$ when $v(x)\to -\infty.$ Since, for $c>0$, any solution $(c,v)$ of the ergodic problem is necessarily unbounded from below (by Theorem~\ref{lc-thm:ergodic}(iv)), Condition~\eqref{u0-v-to0} may only hold for initial data $u_0$ which are unbounded from below. In this sense, Theorem~\ref{lc-tm:longtime-infty} sheds new light on the picture of the asymptotic behavior for~\eqref{lc-cauchy}, bringing a positive result for some particular unbounded from below initial data. Theorem~\ref{lc-tm:longtime} and Theorem~\ref{lc-tm:longtime-infty} generalize and make more precise~\cite[Theorem 4.1]{br06} and~\cite[Theorem 4.2]{br06} respectively. In~\cite{br06}, $H$ is bounded and uniformly continuous in $\mathbb R^N\times B(0,R)$ for any $R>0$ and $u_0$ is bounded from below and Lipschitz continuous. Our results are also close to~\cite[Theorem 6.2]{ii09} as far as Theorem~\ref{lc-tm:longtime} is concerned, and~\cite[Theorem 5.3]{ii09} is very close to Theorem~\ref{lc-tm:longtime-infty}, see Remark~\ref{rmqII}. In~\cite{ii09}, $H$ may have arbitrary growth with respect to $p$ (\eqref{Hlip} is not required) and the initial condition is bounded from below with possibly arbitrary growth from above. The results apply to more general equations than ours. The counterpart is that the unique solvability of~\eqref{lc-cauchy} is not ensured by the assumptions, so {\em the} solution of~\eqref{lc-cauchy} is the one given by the representation formula in the optimal control framework. The assumptions are given in terms of the existence of particular sub- or supersolutions of~\eqref{lc-erprob}, which may be difficult to check in some cases.
Finally, let us point out that the proofs of~\cite{br06, ii09} use in a crucial way the interpretation of~\eqref{lc-cauchy}-\eqref{lc-erprob} in terms of control problems and need some arguments of Weak KAM theory. In this work, we give pure PDE proofs, which are interesting by themselves. Let us also underline that, for solutions with arbitrary growth, we do not have at hand local Lipschitz bounds, i.e., bounds $|u_t|, |Du|\leq C$ with $C$ independent of $t.$ These bounds are easy consequences of the coercivity of $H$ in the periodic setting and in the Lipschitz setting of~\cite{br06}. In the general unbounded case, such bounds require additional restrictive assumptions. Instead, we provide a more involved proof without further assumptions, see the proof of Theorem~\ref{lc-tm:longtime}. Let us also mention that several other convergence results are established in~\cite{ishii08} and~\cite{ii09} in the case of {\em strictly} convex Hamiltonians $H$, and~\cite{ii08} is devoted to a precise study in the one dimensional case. We refer the reader again to the review~\cite{ishii09} for details and many examples. The paper is organized as follows. We start by solving the ergodic problem~\eqref{lc-erprob}, see Section~\ref{lc-sec:erg}. Then, we consider the evolution problem~\eqref{lc-cauchy} in Section~\ref{lc-sec:cauchy}. Section~\ref{lc-sec:cv} is devoted to the proofs of the theorems of convergence. Finally, Section~\ref{lc-sec:exple} provides several examples based both on the Hamilton-Jacobi equations~\eqref{lc-cauchy}-\eqref{lc-erprob} and on the associated optimal control problem. \noindent{\bf Acknowledgements:} Part of this work was done during the stay of T.-T. Nguyen as a Ph.D. student at IRMAR and she would like to thank University of Rennes~1 \& INSA for the hospitality. The work of O. Ley and T.-T. Nguyen was partially supported by the Centre Henri Lebesgue ANR-11-LABX-0020-01. The authors would like to thank the referees for the careful reading of the manuscript and their useful comments. \section{The Ergodic problem} \label{lc-sec:erg} Before giving the proof of Theorem~\ref{lc-thm:ergodic}, we start with a lemma based on the coercivity of $H.$ \begin{lemma}\label{sous-sol-lip} Let $\Omega\subset\mathbb R^N$ be an open bounded subset and assume that $H$ satisfies~\eqref{Hcoerc}. For every subsolution $(c,v)\in \mathbb R_+ \times USC(\overline{\Omega})$ of~\eqref{lc-erprob}, we have $v\in W^{1,\infty}(\Omega)$ and \begin{eqnarray}\label{apriori-v} && |Dv(x)|\leq \max_{y\in\overline{\Omega}}\left\{\frac{l(y)+c}{\nu(y)}\right\} \quad \text{for a.e. $x\in \Omega.$} \end{eqnarray} \end{lemma} \begin{remark} Assumption~\eqref{Hcoerc} was stated in that way having in mind the Eikonal Equation~\eqref{eik} but it can be replaced by the classical coercivity assumption \begin{eqnarray}\label{Hcoerc-gene} && \mathop{\rm lim}_{|p|\to +\infty} \mathop{\rm inf}_{x\in B(0,R)} H(x,p) = +\infty \quad \text{for all $R>0.$} \end{eqnarray} \end{remark} \begin{proof}[Proof of Lemma~\ref{sous-sol-lip}] Let $B(x_0,R)$ be any ball contained in $\Omega$.
Since, by~\eqref{Hcoerc}, $v$ is a viscosity subsolution of \begin{eqnarray*} && |Dv(x)|\leq \max_{y\in\overline{\Omega}}\left\{\frac{l(y)+c}{\nu(y)}\right\} \quad \text{in $\Omega$}, \end{eqnarray*} we see from \cite[Proposition 1.14, p.140]{abil13} that $v$ is Lipschitz continuous in $B(x_0,R)$ with Lipschitz constant $\max_{\overline{\Omega}}\left\{\frac{l+c}{\nu}\right\}$, which, together with Rademacher's theorem, implies \begin{eqnarray*} && |Dv(x)|\leq \max_{y\in\overline{\Omega}}\left\{\frac{l(y)+c}{\nu(y)}\right\} \quad \text{for a.e. $x\in B(x_0,R)$.} \end{eqnarray*} \end{proof} We are now able to give the proof of Theorem~\ref{lc-thm:ergodic}. \begin{proof}[Proof of Theorem~\ref{lc-thm:ergodic}] \ \noindent (i) We follow some arguments of the proof of~\cite[Theorem 2.1]{br06}. Fix $c\geq 0.$ Noticing that $l(x)+c\geq 0$ for every $x\in\mathbb R^N$ and recalling that $H(x,0)=0,$ we infer that $0$ is a subsolution of~\eqref{lc-erprob}. For $R>0,$ we consider the Dirichlet problem \begin{eqnarray}\label{pb-dirichl} && H(x,Dv)=l(x)+c \ \ \text{in $B(0,R)$,} \quad v=0 \ \ \text{on $\partial B(0,R).$} \end{eqnarray} If $p_R\in \mathbb R^N$ and $C_R>0$ are such that $|p_R|$ and $C_R$ are large enough, then, using~\eqref{Hcoerc}, $C_R+\langle p_R,x\rangle$ is a supersolution of~\eqref{pb-dirichl}. By Perron's method up to the boundary (\cite[Theorem 6.1]{dalio02}), the function \begin{eqnarray*} && \hspace*{-2cm}V_R(x):=\mathop{\rm sup}\{v\in USC(\overline{B}(0,R)) \text{ subsolution of~\eqref{pb-dirichl}}:\nonumber\\ && \hspace*{2cm} 0\leq v(x)\leq C_R+\langle p_R,x\rangle \text{ for } x\in \overline{B}(0,R) \}, \end{eqnarray*} is a discontinuous viscosity solution of~\eqref{pb-dirichl}. Recall that the boundary conditions are satisfied in the viscosity sense, meaning that either the viscosity inequality or the boundary condition for the semicontinuous envelopes holds at the boundary. We claim that $V_R\in W^{1,\infty}(\overline{B}(0,R))$ and $V_R(x)=0$ for every $x\in \partial B(0,R),$ i.e., the boundary conditions are satisfied in the classical sense. At first, from Lemma~\ref{sous-sol-lip}, $V_R\in W^{1,\infty}(B(0,R)).$ By definition, $V_R\geq 0$ in $\overline{B}(0,R),$ so $(V_R)_*\geq 0$ on $\partial B(0,R)$ and the boundary condition holds in the classical sense for the supersolution. It remains to check that $(V_R)^*\leq 0$ on $\partial B(0,R).$ We argue by contradiction, assuming that there exists $\hat{x}\in \partial B(0,R)$ such that $(V_R)^*(\hat{x})> 0.$ It follows that the viscosity inequality for subsolutions holds at $\hat{x},$ i.e., for every $\varphi\in C^1(\overline{B}(0,R))$ such that $\varphi\geq (V_R)^*$ over $\overline{B}(0,R)$ with $(V_R)^*(\hat{x})=\varphi(\hat{x}),$ we have $H(\hat{x}, D\varphi(\hat{x}))\leq l(\hat{x})+c$ and there exists at least one such $\varphi.$ Consider, for $K>0,$ $\tilde{\varphi}(x):=\varphi(x)-K\langle\frac{\hat{x}}{|\hat{x}|},x-\hat{x}\rangle.$ We still have $\tilde{\varphi}\geq (V_R)^*$ over $\overline{B}(0,R)$ and $(V_R)^*(\hat{x})=\tilde{\varphi}(\hat{x}).$ Therefore $H(\hat{x}, D\varphi(\hat{x})-K\frac{\hat{x}}{|\hat{x}|})\leq l(\hat{x})+c$ for every $K>0,$ which is absurd for large $K$ by~\eqref{Hcoerc}. This ends the proof of the claim. We set $v_R(x)=V_R(x)-V_R(0).$ By Lemma~\ref{sous-sol-lip}, for every $R>R',$ we have \begin{eqnarray} && |Dv_R(x)|=|DV_R(x)|\leq C_{R'}:= \max_{\overline{B}(0,R')}\left\{\frac{l+c}{\nu}\right\} \quad \text{a.e.
$x\in B(0,R'),$}\label{estim-Dv-123}\\ && |v_R(x)|=|V_R(x)-V_R(0)|\leq C_{R'}R'\quad \text{for $x\in B(0,R').$}\label{estim-v-123} \end{eqnarray} Up to extraction, by Ascoli's theorem and a diagonal process, $v_R$ converges in $C(\mathbb R^N)$ to a function $v$ as $R\to +\infty,$ which still satisfies~\eqref{estim-Dv-123}-\eqref{estim-v-123}. By stability of viscosity solutions, $(c,v)$ is a solution of~\eqref{lc-erprob}. \noindent (ii) Let $(c,v)$ and $(d,w)$ be two solutions of~\eqref{lc-erprob} and set \begin{eqnarray*} && V(x,t) = v(x) - ct,\\ && W(x,t) = w(x) -dt. \end{eqnarray*} To show that $c=d$, we argue by contradiction, assuming that $c < d$. Obviously, $V$ is a viscosity solution of~\eqref{lc-cauchy} with $u_0 = v$ and $W$ is a viscosity solution of~\eqref{lc-cauchy} with $u_0 = w$. Using the comparison principle for~\eqref{lc-cauchy}, we get that $$ V(x,t) - W(x,t) \leq \sup_{\mathbb R^N}\{v - w\} \qquad \text{ for all } (x,t) \in \mathbb R^N \times [0,\infty). $$ This means that $$ (d-c)t + v(x) - w(x) \leq \sup_{\mathbb R^N}\{v - w\} \qquad \text{ for all } (x,t) \in \mathbb R^N \times [0,\infty). $$ Recalling that $\sup_{\mathbb R^N}|v-w| < \infty$, we get a contradiction for $t$ large enough. By exchanging the roles of $v,w,$ we conclude that $c=d.$ \noindent(iii) Let $c=0.$ We apply Perron's method, using in a crucial way that $\mathcal{A}\not=\emptyset.$ Let $\mathcal{S}=\{ w\in USC(\mathbb R^N) \text{ subsolution of~\eqref{lc-erprob}}: 0\leq w \text{ and } w=0 \text{ on } \mathcal{A}\}$ and set $$ v(x):=\mathop{\rm sup}_{w\in \mathcal{S}} w(x). $$ Noticing that $l+c\geq 0$ and since $H(x,0)=0,$ we have $0\in \mathcal{S}.$ Let $x\in\mathbb R^N$ and let $R>0$ be large enough so that $x\in B(0,R)$ and there exists $x_\mathcal{A}\in B(0,R)\cap \mathcal{A}.$ For all $w\in \mathcal{S},$ by Lemma~\ref{sous-sol-lip}, we have \begin{eqnarray*} && 0\leq w(x)\leq w(x_\mathcal{A})+\max_{\overline{B}(0,R)}\left\{\frac{l+c}{\nu}\right\}\ |x-x_\mathcal{A}|\leq 2R \max_{\overline{B}(0,R)}\left\{\frac{l+c}{\nu}\right\}, \end{eqnarray*} since $w(x_\mathcal{A})=0.$ The above upper bound does not depend on $w\in \mathcal{S},$ so we deduce that $0\leq v(x)<+\infty$ for every $x\in\mathbb R^N.$ We claim that $v$ is a solution of~\eqref{lc-erprob}. First, by classical arguments (\cite{barles94}), $v$ is still a subsolution of~\eqref{lc-erprob} satisfying $v\geq 0$ in $\mathbb R^N$ and $v=0$ on $\mathcal{A}.$ By Lemma~\ref{sous-sol-lip}, $v\in W_{\rm loc}^{1,\infty}(\mathbb R^N).$ To prove that $v$ is a supersolution, we argue as usual by contradiction, assuming that there exist $\hat{x}$ and $\varphi\in C^1(\mathbb R^N)$ such that $\varphi\leq v,$ $v(\hat{x})=\varphi(\hat{x})$ and the viscosity supersolution inequality does not hold, i.e., $H(\hat{x}, D\varphi(\hat{x}))< l(\hat{x})+c.$ To reach a contradiction, one slightly modifies $v$ near $\hat{x}$ in order to build a new subsolution $\hat{v}$ in $\mathcal{S},$ which is strictly bigger than $v$ near $\hat{x}.$ To be able to proceed as in the classical proof, it is enough to check that $\hat{x}\not\in \mathcal{A}$; otherwise $\hat{v}$ would not vanish on $\mathcal{A}$, so that $\hat{v} \not\in \mathcal{S}.$ If $\hat{x}\in \mathcal{A},$ then $l(\hat{x})+c=0.$ By~\eqref{Hnulle}, we obtain $0\leq H(\hat{x}, D\varphi(\hat{x}))< l(\hat{x})+c=0,$ which is not possible. This ends the proof of the claim.
From~\eqref{lc-argmin-l}, there exist $\varepsilon_\mathcal{A}, R_\mathcal{A} >0$ such that $l(x) > \min _{\mathbb R^N}l+\varepsilon_\mathcal{A}$ for all $x\in \mathbb R^N\setminus B(0,R_\mathcal{A}).$ By~\eqref{Hbornesup}, $v$ satisfies, in the viscosity sense, \begin{eqnarray*} m(|Dv|) \geq H(x,Dv)\geq l(x)+c \geq \varepsilon_\mathcal{A} \quad \text{in $\mathbb R^N\setminus B(0,R_\mathcal{A}).$} \end{eqnarray*} Therefore, for all $x\in\mathbb R^N$ and every $p$ in the viscosity subdifferential $D^- v(x)$ of $v$ at $x,$ we have $|p|\geq m^{-1}(\varepsilon_\mathcal{A}) >0.$ By the viscosity decrease principle~\cite[Lemma 4.1]{ley01}, for all $B(x,R)\subset \mathbb R^N\setminus B(0,R_\mathcal{A}),$ we obtain \begin{eqnarray*} && \mathop{\rm inf}_{B(x,R)} v \leq v(x)- m^{-1}(\varepsilon_\mathcal{A}) R. \end{eqnarray*} Since $v\geq 0,$ for any $R>0$ and $x$ such that $|x|> R_\mathcal{A}+R,$ we conclude $v(x)\geq m^{-1}(\varepsilon_\mathcal{A}) R,$ which proves that $v(x)\to +\infty$ as $|x|\to +\infty.$ \noindent(iv) Since $c> 0,$ there exists $\alpha >0$ such that $l(x)+c\geq \alpha$ for all $x\in\mathbb R^N.$ To prove that $v$ is unbounded from below, we use again the viscosity decrease principle~\cite[Lemma 4.1]{ley01}. By~\eqref{Hbornesup}, $v$ satisfies, in the viscosity sense, \begin{eqnarray*} m(|Dv|) \geq H(x,Dv)\geq \alpha \quad \text{in $\mathbb R^N,$} \end{eqnarray*} which implies, for all $R>0,$ \begin{eqnarray*} \mathop{\rm inf}_{B(0,R)} v \leq v(0)- m^{-1}(\alpha) R \end{eqnarray*} and so $v$ cannot be bounded from below. For the second part of the result, we argue by contradiction, assuming that $v\not= w.$ Without loss of generality, there exist $\eta >0$ and $\hat{x}\in\mathbb R^N$ such that $(v-w)(\hat{x})>3\eta.$ Since $(v-w)(x)\to 0$ as $|x|\to +\infty,$ there exists $R>0$ such that $|(v-w)(x)|<\eta$ when $|x|\geq R.$ Choosing $0<\mu <1$ sufficiently close to $1$, we have $|(\mu v-w)(\hat{x})|> 2\eta$ and, by compactness of $\partial B(0,R),$ $|(\mu v-w)(x)|< 2\eta$ for all $x\in \partial B(0,R).$ It follows that $M:=\max_{\overline{B}(0,R)}(\mu v-w)$ cannot be achieved at the boundary of $\overline{B}(0,R).$ Consider \begin{eqnarray*} M_\varepsilon := \max_{x,y\in \overline{B}(0,R)} \left\{\mu v(x)-w(y)-\frac{|x-y|^2}{\varepsilon^2}\right\}, \end{eqnarray*} which is achieved at some $(\bar{x},\bar{y}).$ By classical properties (\cite{bcd97, barles94}), up to extracting a subsequence $\varepsilon\to 0,$ \begin{eqnarray*} && \frac{|\bar{x}-\bar{y}|^2}{\varepsilon^2}\to 0, \\ && \bar{x}, \bar{y} \to x_0 \quad\text{for some $x_0\in \overline{B}(0,R),$}\\ && M_\varepsilon \to M.
\end{eqnarray*} It follows that $M= (\mu v-w )(x_0)$ and therefore, for $\varepsilon$ small enough, neither $\bar{x}$ nor $\bar{y}$ is on the boundary of $\overline{B}(0,R).$ We can write the viscosity inequalities for the subsolution $v$ at $\bar{x}$ and the supersolution $w$ at $\bar{y}$, for small $\varepsilon$, leading to \begin{eqnarray*} && H(\bar{x},\frac{\bar{p}}{\mu})\leq l(\bar{x})+c,\\ && H(\bar{y},\bar{p})\geq l(\bar{y})+c, \end{eqnarray*} where we set $\displaystyle \bar{p}=2\frac{(\bar{x}-\bar{y})}{\varepsilon^2}.$ Noticing that \begin{eqnarray*} && \mu v(\bar{x})- w(\bar{x})\leq \mu v(\bar{x})- w(\bar{y})-\frac{|\bar{x}-\bar{y}|^2}{\varepsilon^2} \end{eqnarray*} and using that $w$ is Lipschitz continuous with some constant $C_R$ in $\overline{B}(0,R)$ by Lemma~\ref{sous-sol-lip}, we obtain $|\bar{p}|\leq C_R.$ Therefore, up to extracting a subsequence $\varepsilon\to 0,$ we have $\bar{p}\to p_0.$ By the convexity of $H$, \begin{eqnarray*} && H(\bar{x}, \bar{p})=H(\bar{x}, \mu \frac{\bar{p}}{\mu}+(1-\mu)0) \leq \mu H(\bar{x}, \frac{\bar{p}}{\mu})+(1-\mu) H(\bar{x}, 0). \end{eqnarray*} Using $H(\bar{x},0)=0,$ we get \begin{eqnarray*}\label{convexityH} && 0\leq \mu H(\bar{x}, \frac{\bar{p}}{\mu})- H(\bar{x}, \bar{p}). \end{eqnarray*} Subtracting the viscosity inequalities and using the above estimates, we obtain \begin{eqnarray*} && 0\leq \mu H(\bar{x},\frac{\bar{p}}{\mu})-H(\bar{x},\bar{p}) \leq H(\bar{y},\bar{p})-H(\bar{x},\bar{p})+\mu (l(\bar{x})+c)-(l(\bar{y})+c). \end{eqnarray*} Sending $\varepsilon\to 0,$ we reach $0\leq (\mu -1)(l(x_0)+c)\leq (\mu -1) \alpha <0,$ which is a contradiction. This ends the proof of the theorem. \end{proof} \begin{remark}\label{disc-ergo} \ \\ (i) In the periodic setting, there is a unique constant $c$ for which~\eqref{lc-erprob} has a solution, namely $c=0$ here. This is no longer the case in the unbounded setting, where there exist solutions for all $c\geq 0.$ The proof is adapted from~\cite[Theorem 2.1]{br06}. Similar issues are studied in~\cite{fm07}. Notice that, when $c< 0,$ there is no subsolution (thus no solution) because of~\eqref{Hnulle}.\\ (ii) In the periodic setting, the classical proof of existence of a solution to~\eqref{lc-erprob} (\cite{lpv86}) uses the auxiliary approximate equation \begin{eqnarray}\label{eq-lambda} && \lambda v^\lambda + H(x, Dv^\lambda )=l(x) \quad\text{in $\mathbb R^N.$} \end{eqnarray} In our case, it gives only the existence of a solution $(c,v)$ with $c= 0$ but not for all $c\geq 0.$ \\ (iii) Neither the proof using~\eqref{eq-lambda}, nor the proof of Theorem~\ref{lc-thm:ergodic}(i) using the Dirichlet problem~\eqref{pb-dirichl}, yields a nonnegative (or bounded from below) solution $v$ of~\eqref{lc-erprob} for $c= 0$. See Section~\ref{sec:exple-sol-ergo} for an explicit computation of the solution of~\eqref{pb-dirichl}. This is why we need another proof to construct such a solution. See~\cite{bm16} for the same result in the viscous case. \\ (iv) For $c= 0,$ bounded solutions to the ergodic problem may exist, e.g., when $l$ is periodic (\cite{lpv86} and the example in Remark~\ref{rmk-perio}). If $\mathcal{A}$ is bounded, we can prove, with arguments similar to those in the proof of the theorem, that all solutions of the ergodic problem are unbounded. \\ (v) When $c> 0,$ there is no bounded solution to~\eqref{lc-erprob}, even if $l$ is periodic or bounded.
\\ (vi) Theorem~\ref{lc-thm:ergodic} does not require $H$ to satisfy~\eqref{Hlip}, so it applies to more general equations than~\eqref{eik}, for instance with quadratic Hamiltonians. \\ (vii) The assumption that a comparison principle in $C(\mathbb R^N\times [0,+\infty))$ holds for~\eqref{lc-cauchy} in Theorem~\ref{lc-thm:ergodic}(ii) may seem strong, but it holds for the Eikonal equation, i.e., when $H$ satisfies~\eqref{Hlip}, see Theorem~\ref{lc-compathm}. In this case, $H$ automatically satisfies~\eqref{Hbornesup} with $m(r)=C_H r.$ \end{remark} \section{The Cauchy problem} \label{lc-sec:cauchy} In this section we study the Cauchy problem~\eqref{lc-cauchy}. We start with some comments about Proposition~\ref{lc-exis-uniq-cauchy} and then we prove it. Existence and uniqueness are based on the comparison result of Theorem~\ref{lc-compathm}, which requires no growth condition and holds when~\eqref{Hlip} is satisfied thanks to the finite speed of propagation. When $u_0$ is bounded from below and~\eqref{Hnulle} holds, $\mathop{\rm inf}_{\mathbb R^N}u_0$ is a subsolution of~\eqref{lc-cauchy} and~\eqref{encadrement-u} takes the simpler form \begin{eqnarray*} && \mathop{\rm inf}_{\mathbb R^N}u_0 \leq u(x,t)+ct\leq v^+(x). \end{eqnarray*} \begin{proof}[Proof of Proposition~\ref{lc-exis-uniq-cauchy}] \ \\ (i) Let \begin{eqnarray*} && v^+(x):= f_0(|x|)+\int_0^{|x|} f_1(s)ds, \end{eqnarray*} where \begin{eqnarray*} && \left\{ \begin{array}{l} \text{$f_0:\mathbb R_+\to\mathbb R_+$ $C^1$ nondecreasing, $f_0'(0)=0$ and $f_0(|x|)\geq u_0(x)$}\\ \text{$f_1:\mathbb R_+\to\mathbb R_+$ continuous, $f_1(0)=0$ and $\displaystyle f_1(|x|)\geq \frac{l(x)+c}{\nu(x)},$}\\ \end{array} \right. \end{eqnarray*} where $\nu$ appears in~\eqref{Hcoerc}. The existence of such functions $f_0, f_1$ is classical (see~\cite[Proof of Theorem 2.2]{barles94} for instance). It is straightforward to see that $v^+\in C^1(\mathbb R^N),$ $v^+\geq u_0$ and $(c,v^+)$ is a supersolution of~\eqref{lc-erprob} thanks to~\eqref{Hcoerc}. \noindent (ii) It is obvious that $(c,v)$ is a solution (respectively a subsolution, supersolution) of~\eqref{lc-erprob} if and only if $V(x,t)=v(x)-ct$ is a solution (respectively a subsolution, supersolution) of~\eqref{lc-cauchy} with initial data $V(x,0)=v(x).$ We have $v^-\leq u_0\leq v^+,$ where $v^-$ is the subsolution given by assumption and $v^+$ is the supersolution built in (i). Using Perron's method and Theorem~\ref{lc-compathm}, which holds thanks to~\eqref{Hlip}, we conclude that there exists a unique viscosity solution $u\in C(\mathbb R^N\times [0,+\infty))$ of~\eqref{lc-cauchy} such that \begin{eqnarray*} && v^-(x)-ct\leq u(x,t)\leq v^+(x)-ct. \end{eqnarray*} \end{proof} \section{Large time behavior of solutions} \label{lc-sec:cv} \subsection{Proof of Theorem~\ref{lc-tm:longtime}} We first consider the case when $u_0$ is bounded. Recalling that $c=0$ and that $u$ is the solution of~\eqref{lc-cauchy}, we see by Proposition~\ref{lc-exis-uniq-cauchy} that \begin{eqnarray}\label{borneLinfini} && \mathop{\rm inf}_{\mathbb R^N}u_0 \leq u(x,t)\leq v^+(x), \end{eqnarray} where $(0,v^+)$ is a supersolution of~\eqref{lc-erprob} satisfying $v^+\geq u_0.$ The first step is to obtain better estimates for the large time behavior of $u$.
To do so, we consider two solutions $(0,v_1)$ and $(0,v_2)$ of~\eqref{lc-erprob}. Such solutions exist by Theorem~\ref{lc-thm:ergodic}(iii) since $c=0$ and $\mathcal{A}\not=\emptyset.$ Moreover, $v_1(x), v_2 (x)\to +\infty$ as $|x|\to +\infty$ since $\mathcal{A}$ is compact and~\eqref{Hbornesup} holds thanks to Assumptions~\eqref{Hnulle} and~\eqref{Hlip}. We have \begin{lemma}\label{IneqLT} There exist two constants $k_1,k_2\geq 0$ such that $$ v_1(x) -k_1 \leq \liminf_{t \to +\infty} u(x,t) \leq \limsup_{t \to +\infty} u(x,t)\leq v_2(x)+k_2\quad\hbox{in }\mathbb R^N .$$ As a consequence, for any solutions $(0,v_1)$ and $(0,v_2)$ of~\eqref{lc-erprob}, $v_1-v_2$ is bounded. \end{lemma} \begin{proof}[Proof of Lemma~\ref{IneqLT}] The proof of the third inequality in Lemma~\ref{IneqLT} is straightforward: since $u_0$ is bounded and $v_2 (x)\to +\infty$ as $|x|\to +\infty$, there exists $k_2$ such that $u_0\leq v_2+k_2 $ in $\mathbb R^N.$ Then, by comparison (Theorem~\ref{lc-compathm}) $$ u(x,t) \leq v_2(x)+k_2\quad\hbox{in }\mathbb R^N\times [0,+\infty)\; ,$$ which implies the $\limsup$-inequality. The $\liminf$-inequality is less standard. Let $R_\mathcal{A}>0$ be such that $\mathcal{A}\subset B(0,R_\mathcal{A}/2)$ and set \begin{eqnarray*} && C_1=C_1(\mathcal{A},v_1):= \mathop{\rm sup}_{\overline{B}(0,R_\mathcal{A})} v_1 +1. \end{eqnarray*} Notice that, by definition of $\mathcal{A}$ and~\eqref{lc-argmin-l}, there exists $\eta_\mathcal{A} >0$ such that \begin{eqnarray} \label{prop-hors-A} && l(x)\geq \eta_\mathcal{A} >0 \quad \text{for all $x\in \mathbb R^N\setminus B(0,R_\mathcal{A}).$} \end{eqnarray} Using that ${\rm min}\{v_1,C_1\}$ is bounded from above and $u_0$ is bounded, there exists $k_1=k_1(\mathcal{A},v_1,u_0)$ such that \begin{eqnarray}\label{ineg720} && {\rm min}\{v_1,C_1\}-k_1 \leq u_0 \quad \text{in $\mathbb R^N.$} \end{eqnarray} Next we have to examine the large time behavior of the solution associated with the initial condition ${\rm min}\{v_1,C_1\}-k_1$ and, to do so, we use the following result of Barron and Jensen (see Appendix). \begin{lemma}\label{barron-jensen} \cite{bj90} Assume~\eqref{Hconvex} and let $u,\tilde{u}$ be locally Lipschitz subsolutions (resp. solutions) of~\eqref{lc-cauchy}. Then ${\rm min} \{u,\tilde{u}\}$ is still a subsolution (resp. a solution) of~\eqref{lc-cauchy}. \end{lemma} To use it, we remark that the function $w^-(x,t):= C_1+\eta_\mathcal{A} t$ is a smooth subsolution of~\eqref{lc-cauchy} in $(\mathbb R^N\setminus \overline{B}(0,R_\mathcal{A}))\times (0,+\infty).$ Indeed, for all $|x|>R_\mathcal{A},$ $t>0,$ \begin{eqnarray*} && w_t^- +H(x,Dw^-)= \eta_\mathcal{A} +H(x,0)=\eta_\mathcal{A} \leq l(x) \end{eqnarray*} using~\eqref{Hnulle} and~\eqref{prop-hors-A}. Since $v_1$ is a locally Lipschitz continuous subsolution of~\eqref{lc-cauchy} in $\mathbb R^N\times (0,+\infty)$, we can use Lemma~\ref{barron-jensen} in $(\mathbb R^N\setminus \overline{B}(0,R_\mathcal{A}))\times (0,+\infty)$ to conclude that $\min\{v_1,w^-\}-k_1$ is a subsolution, while in a neighborhood of $\overline{B}(0,R_\mathcal{A})\times (0,+\infty)$, we have $\min\{v_1,w^-\}-k_1=v_1-k_1$ by definition of $C_1$.
Then, by comparison (Theorem~\ref{lc-compathm}) $$ \min\{v_1(x),C_1+\eta_\mathcal{A} t\}-k_1 \leq u(x,t) \quad\hbox{in }\mathbb R^N\times [0,+\infty)\; ,$$ and one concludes easily. The last assertion of Lemma~\ref{IneqLT} is obvious since $v_1, v_2$ are arbitrary solutions of~\eqref{lc-erprob} and we can exchange their roles. \end{proof} The next step of the proof of Theorem~\ref{lc-tm:longtime} consists in introducing the half-relaxed limits~\cite{cil92, barles94} $$ \underline{u}(x)=\mathop{{\rm lim\,inf}_*}_{t\to +\infty} u(x,t), \qquad \overline{u}(x)=\mathop{{\rm lim\,sup}^*}_{t\to +\infty} u(x,t). $$ They are well-defined for all $x\in\mathbb R^N$ thanks to~\eqref{borneLinfini} or Lemma~\ref{IneqLT}. We recall that $\underline{u}\leq \overline{u}$ by definition and $\underline{u} =\overline{u}$ if and only if $u(x,t)$ converges locally uniformly in $\mathbb R^N$ as $t\to +\infty.$ Therefore, to prove~\eqref{cv-thm-main}, it is enough to prove $\overline{u}\leq \underline{u}$ in $\mathbb R^N.$ A formal direct proof of this inequality is easy: $\overline{u}$ is a subsolution of~\eqref{lc-erprob}, while $\underline{u}$ is a supersolution of~\eqref{lc-erprob}; by Lemma~\ref{barron-jensen}, for any constant $C>0$, $\min\{\overline{u},C\}$ is still a subsolution of~\eqref{lc-erprob} and Lemma~\ref{IneqLT} shows that $\min\{\overline{u},C\}-\underline{u} \to -\infty$ as $|x| \to +\infty$. Moreover $0$ is a strict subsolution of~\eqref{lc-erprob} outside $\mathcal{A}$, therefore, by comparison arguments of Ishii~\cite{ishii87a}, $$ \max_{\mathbb R^N} \{\min\{\overline{u},C\}-\underline{u}\} \leq \max_{\mathcal{A}} \{\min\{\overline{u},C\}-\underline{u}\}\; ,$$ and letting $C$ tend to $+\infty$ gives $\overline{u}-\underline{u} \leq \max_{\mathcal{A}} \{\overline{u}-\underline{u}\}.$ But the right-hand side is $0$ since $u(x,t)$ is nonincreasing in $t$ on $\mathcal{A}$, using $H(x,p) \geq 0$ and $l(x)=0$ for $x \in \mathcal{A}$. This gives the result. This formal proof, although almost correct, is not rigorous since we do not have local uniform convergence of $u$ in a neighborhood of $\mathcal{A}$, in particular because we do not have equicontinuity of the family $\{u(\cdot,t), t\geq 0\}.$ To overcome this difficulty, we use approximations by inf- and sup-convolutions in time. For all $\varepsilon >0,$ we introduce \begin{eqnarray*} && u_\varepsilon (x,t)=\mathop{\rm inf}_{s\in (0,+\infty)}\{u(x,s)+\frac{|t-s|^2}{\varepsilon^2}\},\\ && u^\varepsilon (x,t)=\mathop{\rm sup}_{s\in (0,+\infty)}\{u(x,s)-\frac{|t-s|^2}{\varepsilon^2}\}. \end{eqnarray*} By~\eqref{borneLinfini}, they are well-defined for all $(x,t)\in\mathbb R^N\times [0,+\infty)$ and we have \begin{eqnarray}\label{ineg-sup-inf241} && \mathop{\rm inf}_{\mathbb R^N}u_0 \leq u_\varepsilon (x,t)\leq u(x,t)\leq u^\varepsilon (x,t)\leq v^+(x). \end{eqnarray} Notice that the infimum and the supremum are achieved in $u_\varepsilon(x,t)$ and $u^\varepsilon(x,t)$ respectively. Moreover Lemma~\ref{IneqLT} still holds for $u_\varepsilon$ and $u^\varepsilon$.
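Let us briefly indicate why~\eqref{ineg-sup-inf241} holds: choosing $s=t$ in the two definitions gives $u_\varepsilon(x,t)\leq u(x,t)\leq u^\varepsilon(x,t)$, while~\eqref{borneLinfini} yields $u_\varepsilon(x,t)\geq \mathop{\rm inf}_{s>0}u(x,s)\geq \mathop{\rm inf}_{\mathbb R^N}u_0$ and $u^\varepsilon(x,t)\leq \mathop{\rm sup}_{s>0}\{v^+(x)-\frac{|t-s|^2}{\varepsilon^2}\}\leq v^+(x)$.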
Taking the half-relaxed limits for $u_\varepsilon$ and $u^\varepsilon$ in the same way, we obtain (with obvious notation) \begin{eqnarray*} && \mathop{\rm inf}_{\mathbb R^N}u_0 \leq \underline{u_\varepsilon} \leq \underline{u} \leq \overline{u} \leq \overline{u^\varepsilon} \leq v^+ \quad \text{in $\mathbb R^N.$} \end{eqnarray*} To prove the convergence result~\eqref{cv-thm-main}, it is therefore sufficient to establish \begin{eqnarray}\label{but-goal} \overline{u^\varepsilon}\leq \underline{u_\varepsilon} \quad \text{in $\mathbb R^N,$} \end{eqnarray} which is our purpose from now on. The following lemma, the proof of which is standard and left to the reader, collects some useful properties of $u_\varepsilon$ and $u^\varepsilon.$ \begin{lemma}\label{prop-convol}\ \\ (i) The functions $u_\varepsilon$ and $u^\varepsilon$ converge locally uniformly to $u$ in $\mathbb R^N\times [0,+\infty)$ as $\varepsilon\to 0.$ \noindent (ii) The functions $u_\varepsilon$ and $u^\varepsilon$ are Lipschitz continuous with respect to $t$ locally uniformly in space, i.e., for all $R>0,$ there exists $C_{\varepsilon,R}>0$ such that, for all $x\in B(0,R),$ $t,t'\geq 0,$ \begin{eqnarray}\label{lipt-eps} && |u_\varepsilon (x,t)-u_\varepsilon (x,t')|, |u^\varepsilon (x,t)-u^\varepsilon (x,t')| \leq C_{\varepsilon,R}|t-t'|. \end{eqnarray} (iii) For every open bounded subset $\Omega\subset\mathbb R^N$, there exists $t_{\varepsilon,\Omega} >0$ with $t_{\varepsilon,\Omega}\to 0$ as $\varepsilon\to 0,$ such that $u_\varepsilon$ is a solution of~\eqref{lc-cauchy} and $u^\varepsilon$ is a subsolution of~\eqref{lc-cauchy} in $\Omega\times (t_{\varepsilon,\Omega},+\infty).$ \noindent (iv) For all $R>0$, there exist $C_{\varepsilon,R}, t_{\varepsilon, R}>0$ such that, for all $t>t_{\varepsilon,R},$ $u_\varepsilon(\cdot,t)$ and $u^\varepsilon(\cdot,t)$ are subsolutions of \begin{eqnarray} \label{sous-sol-enx} H(x,Dw(x)) \leq l(x)+c+2C_{\varepsilon,R}, \qquad {\rm in } \ B(0,R). \end{eqnarray} Therefore, $u_\varepsilon(\cdot,t)$ and $u^\varepsilon(\cdot,t)$ are locally Lipschitz continuous in space with a Lipschitz constant independent of $t.$ \end{lemma} We are now ready to prove that $u_\varepsilon(\cdot,t)$ and $u^\varepsilon(\cdot,t)$ converge uniformly on $\mathcal{A}$ as $t\to +\infty.$ We follow the arguments of~\cite{nr99} (or alternatively, one may use~\cite[Theorem I.14]{cl83}). We fix $R>0$ such that $\mathcal{A}\subset B(0,R)$ and consider $t_{\varepsilon, R}>0$ given by Lemma~\ref{prop-convol}. Since $w=u_\varepsilon$ or $w=u^\varepsilon$ is a locally Lipschitz continuous subsolution of~\eqref{lc-cauchy} in $B(0,R)\times (t_{\varepsilon,R},+\infty),$ we have \begin{eqnarray}\label{lc-eq-lip123} && w_t(x,t)\leq w_t(x,t)+H(x,Dw(x,t))\leq l(x),\quad \text{a.e. $(x,t)$,} \end{eqnarray} since $H\geq 0$ by~\eqref{Hnulle}.
Let $x \in \mathcal{A}$, $t>t_{\varepsilon,R},$ and $h,r>0.$ We have \begin{eqnarray*} && \frac{1}{|B(x,r)|}\int_{B(x,r)}(w(y,t+h)-w(y,t))dy \\ &=& \frac{1}{|B(x,r)|}\int_{B(x,r)} \int_t^{t+h} w_t(y,s)dsdy\\ &\leq& \frac{1}{|B(x,r)|} \int_{B(x,r)} \int_t^{t+h}l(y)dsdy. \end{eqnarray*} Using the continuity of $w$ and $l$, the fact that $l(x)=0,$ and letting $r\to 0,$ we obtain \begin{eqnarray}\label{decroiss-eps} w(x,t+h) \leq w(x,t) \qquad \text{for all $x\in \mathcal{A},$ $t>t_{\varepsilon,R},$ $h \geq 0$.} \end{eqnarray} Therefore $t\mapsto w(x,t)$ is a nonincreasing function on $[t_{\varepsilon,R}, \infty),$ Lipschitz continuous in space on the compact subset $\mathcal{A}$ (uniformly in time) and bounded from below according to~\eqref{ineg-sup-inf241}. By Dini's theorem, $w(\cdot ,t)$ converges uniformly on $\mathcal{A}$ as $t\to +\infty$ to a Lipschitz continuous function. Therefore, there exist Lipschitz continuous functions $\phi_\varepsilon,\phi^\varepsilon:\mathcal{A}\to\mathbb R$ with $\phi_\varepsilon\leq \phi^\varepsilon$ and \begin{eqnarray*} && u_\varepsilon(x,t)\to \phi_\varepsilon(x), \quad u^\varepsilon(x,t)\to \phi^\varepsilon(x), \quad \text{uniformly on $\mathcal{A}$ as $t\to +\infty.$} \end{eqnarray*} We now use the previous results to prove the convergence of $u$ on $\mathcal{A}.$ By Lemma~\ref{prop-convol}(i), we first obtain that~\eqref{decroiss-eps} holds for $u.$ Therefore $t\mapsto u(x,t)$ is nonincreasing for $x\in \mathcal{A}$, so $u(\cdot,t)$ converges pointwise as $t\to +\infty$ to some function $\phi : \mathcal{A} \to \mathbb R.$ Notice that we cannot conclude that the convergence is uniform at this step since we do not know that $\phi$ is continuous. We claim that $u_\varepsilon (x,t),u^\varepsilon (x,t)\to \phi(x)$ as $t\to +\infty$, for all $x\in\mathcal{A}$. The proof is similar in both cases, so we only provide it for $u_\varepsilon (x,t)$. Let $x\in \mathcal{A}$ and, for $t>0$, let $\overline{s}=\overline{s}(t)>0$ be such that \begin{eqnarray}\label{equa173} && u(x,\overline{s})+\frac{|t-\overline{s}|^2}{\varepsilon^2} = u_\varepsilon (x,t)\leq u(x,t). \end{eqnarray} By~\eqref{ineg-sup-inf241}, we have \begin{eqnarray*} && \frac{|t-\overline{s}|^2}{\varepsilon^2}\leq v^+(x)- \inf u_0. \end{eqnarray*} It follows that $\overline{s}\to +\infty$ as $t\to +\infty$. Thanks to the pointwise convergence $u(x,s)\to \phi(x)$ as $s\to +\infty,$ sending $t$ to $+\infty$ in~\eqref{equa173}, we obtain \begin{eqnarray*} && \phi(x)+\mathop{\rm lim\,sup}_{t\to +\infty} \frac{|t-\overline{s}|^2}{\varepsilon^2}\leq \phi(x), \end{eqnarray*} from which we infer $\mathop{\rm lim}_{t\to +\infty} \frac{|t-\overline{s}|^2}{\varepsilon^2}=0$. Therefore, by~\eqref{equa173}, $u_\varepsilon (x,t)\to \phi(x)$. The claim is proved, which implies $\phi_\varepsilon=\phi^\varepsilon=\phi$ on $\mathcal{A}$. At this stage, we can apply the above formal argument to the locally Lipschitz continuous functions $u^\varepsilon$ and $u_\varepsilon$, noticing that $\overline{u^\varepsilon}$ and $\underline{u_\varepsilon}$ are also locally Lipschitz continuous functions. We deduce that $$ \max_{\mathbb R^N} \{\min\{\overline{u^\varepsilon},C\}-\underline{u_\varepsilon}\} \leq \max_{\mathcal{A}} \{\min\{\overline{u^\varepsilon},C\}-\underline{u_\varepsilon}\} =\max_{\mathcal{A}} \{\min\{\phi,C\}-\phi\}, $$ and therefore, letting $C$ tend to $+\infty$, we obtain $\overline{u^\varepsilon}=\underline{u_\varepsilon}$ in $\mathbb R^N$.
Recalling that $\underline{u_\varepsilon} \leq \underline{u} \leq \overline{u} \leq \overline{u^\varepsilon} $ in $\mathbb R^N$, we also have $\underline{u} = \overline{u}$ in $\mathbb R^N$, and the conclusion follows, completing the proof of the case when $u_0$ is bounded. We now consider the case when $u_0$ is only bounded from below (but not necessarily from above). We set $u_0^C = \min\{u_0,C\}$. If $w$ denotes the solution of~\eqref{lc-cauchy} associated to the initial data $0$, then, because of the Barron-Jensen results, the solution associated to $u_0^C$ is $\min\{u,w+C\}$. But, from the first step, we know that (i) $w$ converges locally uniformly to some solution $v_1$ of~\eqref{lc-erprob}, (ii) $\min\{u,w+C\}$ converges to some solution $v^C_2$ of~\eqref{lc-erprob} (depending perhaps on $C$) and (iii) we have \eqref{borneLinfini} for $u$. Let $\mathcal{K}$ be any compact subset of $\mathbb R^N$. If $C$ is large enough in order to have $v_1+C>v^+$ on $\mathcal{K}$ (the size of such $C$ depends only on $\mathcal{K}$), then for large $t,$ $\min\{u,w+C\}=u$ on $\mathcal{K}$ by the uniform convergence of $w$ to $v_1$ on $\mathcal{K}$. It follows that $u$ converges locally uniformly to $v_2^C$ on $\mathcal{K},$ which is independent of $C$. The proof of Theorem~\ref{lc-tm:longtime} is complete. $\Box$ To conclude this section, we point out the following result which is a consequence of the comparison argument we used in the proof. \begin{corollary}\label{meme-comport} Assume~\eqref{Hcoerc}-\eqref{Hnulle}-\eqref{Hconvex}-\eqref{Hlip}, $l\geq 0$ and~\eqref{lc-argmin-l}. Then, for all bounded from below solutions $(0,v_1)$ and $(0,v_2)$ of~\eqref{lc-erprob}, $v_1, v_2\to +\infty$ as $|x|\to +\infty$ and \begin{eqnarray*} \mathop{\rm sup}_{\mathbb R^N}\{v_1-v_2\}\leq \mathop{\rm max}_{\mathcal{A}}\{v_1-v_2\}<+\infty. \end{eqnarray*} \end{corollary} \begin{remark} \label{rmk-perio}It is quite surprising that, though a lot of different solutions to~\eqref{lc-erprob} may exist (see Section~\ref{sec:exple-sol-ergo}), all the bounded from below solutions associated to $c=-{\rm min}\,l$ have the same growth at infinity. This is not true when $\mathcal{A}$ is not compact, e.g., in the periodic case. Consider for instance \begin{eqnarray*} && |Dv|=|{\rm sin}(x)|\quad\text{in $\mathbb R.$} \end{eqnarray*} For $c=-\mathop{\rm min}_{\mathbb R}|{\rm sin}(x)|=0,$ it is possible to build infinitely many solutions with very different behaviors by gluing some branches of cosine functions, see Figure~\ref{dess-period}. \begin{figure}[ht] \begin{center} \includegraphics[width=14cm]{construct-fct-period-glob.pdf} \end{center} \caption{Some solutions of $|Dv|=|{\rm sin}(x)|$ in $\mathbb R,$ $\mathcal{A}=\pi\mathbb Z$} \label{dess-period} \end{figure} \end{remark} \subsection{Proof of Theorem~\ref{lc-tm:longtime-infty}} By~\eqref{u0-v-to0}, there exists a subsolution $(0,\psi)$ of~\eqref{lc-erprob} such that, for every $\varepsilon >0,$ there exists $R_\varepsilon >0$ such that, for all $|x|\geq R_\varepsilon,$ \begin{eqnarray}\label{u0vproches} && {\rm min} \{\psi(x), v(x)\}-\varepsilon \leq {\rm min} \{\psi(x), u_0(x)\} \leq {\rm min} \{\psi(x), v(x)\}+\varepsilon. 
\end{eqnarray} Let $M_\varepsilon >0$ be such that \begin{eqnarray*} && -M_\varepsilon\leq u_0(x), v(x) \qquad \text{for all $|x|\leq R_\varepsilon.$} \end{eqnarray*} Setting $\psi_\varepsilon(x)= {\rm min} \{\psi(x), -M_\varepsilon\},$ we claim that, for all $x\in\mathbb R^N,$ \begin{eqnarray}\label{ineg602} && {\rm min} \{\psi_\varepsilon(x), v(x)\} -\varepsilon \leq {\rm min} \{\psi_\varepsilon(x), u_0(x)\} \leq {\rm min} \{\psi_\varepsilon(x), v(x)\} +\varepsilon. \end{eqnarray} Indeed, this inequality follows from~\eqref{u0vproches} if $|x|\geq R_\varepsilon,$ while it is obvious, by the choice of $M_\varepsilon,$ if $|x|\leq R_\varepsilon.$ From Lemma~\ref{barron-jensen}, $(0,\psi_\varepsilon)$ is a subsolution of~\eqref{lc-erprob} as a minimum of subsolutions. Since $c\geq 0$, $(c,\psi_\varepsilon)$ is also a subsolution of~\eqref{lc-erprob}. Applying again Lemma~\ref{barron-jensen} to $(c,\psi_\varepsilon)$ and $(c,v)$, we obtain that $(c,{\rm min} \{\psi_\varepsilon, v\})$ is a subsolution of~\eqref{lc-erprob}. From~\eqref{ineg602}, we have ${\rm min} \{\psi_\varepsilon, v\} -\varepsilon\leq u_0$. It follows from Proposition~\ref{lc-exis-uniq-cauchy} that there exists a unique viscosity solution $u$ of~\eqref{lc-cauchy} with initial data $u_0$ and it satisfies \begin{eqnarray}\label{ineg693} {\rm min} \{\psi_\varepsilon, v\} -\varepsilon\leq u(x,t)+ct\leq v^+(x), \end{eqnarray} where $(c,v^+)$ is a supersolution of~\eqref{lc-erprob} such that $u_0\leq v^+$. In the same way, there exist unique viscosity solutions $w_\varepsilon$ and $w$ of~\eqref{lc-cauchy} associated with initial data $\psi_\varepsilon$ and 0 respectively. Since $\psi_\varepsilon\leq -M_\varepsilon$, by comparison and Proposition~\ref{lc-exis-uniq-cauchy}, we have \begin{eqnarray}\label{ineg694} && \psi_\varepsilon(x)\leq w_\varepsilon(x,t)\leq -M_\varepsilon +w(x,t)\leq \tilde{v}^+(x) \quad \text{for $(x,t)\in\mathbb R^N\times [0,+\infty)$,} \end{eqnarray} where $(0,\tilde{v}^+)$ is a supersolution of~\eqref{lc-erprob} such that $-M_\varepsilon\leq \tilde{v}^+$. Arguing as at the end of the proof of Theorem~\ref{lc-tm:longtime}, the solutions of~\eqref{lc-cauchy} associated to the initial data $$ {\rm min} \{\psi_\varepsilon(x), v(x)\} -\varepsilon, \quad {\rm min} \{\psi_\varepsilon(x), u_0(x)\} \quad\text{and}\quad {\rm min} \{\psi_\varepsilon(x), v(x)\} +\varepsilon $$ are respectively $$ {\rm min} \{w_\varepsilon(x,t), v(x)-ct\} -\varepsilon, \ {\rm min} \{w_\varepsilon(x,t), u(x,t)\} \ \text{and}\ {\rm min} \{w_\varepsilon(x,t), v(x)-ct\} +\varepsilon. $$ By comparison, we have, in $\mathbb R^N\times (0,+\infty)$ $$ {\rm min} \{w_\varepsilon(x,t), v(x)-ct\} -\varepsilon \leq {\rm min} \{w_\varepsilon(x,t), u(x,t)\}\; ,$$ and $$ {\rm min} \{w_\varepsilon(x,t), u(x,t)\} \leq {\rm min} \{w_\varepsilon(x,t), v(x)-ct\} +\varepsilon. $$ Recalling that $c$ is positive and using~\eqref{ineg694}, if $\mathcal{K}$ is a compact subset of $\mathbb R^N$, then for $t$ large enough and $x \in \mathcal{K}$ $$ {\rm min} \{w_\varepsilon(x,t), v(x)-ct\} = v(x)-ct\; ,$$ leading to the inequality $$v(x)-ct -\varepsilon \leq {\rm min} \{w_\varepsilon(x,t), u(x,t)\} \leq v(x)-ct +\varepsilon. 
$$ From~\eqref{ineg693} and~\eqref{ineg694}, $t$ can be chosen large enough to have $w_\varepsilon(x,t) +ct>u(x,t)+ct $, so we end up with $$ v(x)-ct -\varepsilon \leq u(x,t) \leq v(x)-ct +\varepsilon,$$ for $t$ large enough and $x$ in $\mathcal{K}$. Since $\varepsilon$ is arbitrary, the conclusion follows. $\Box$ \begin{remark}\label{rmqII} Theorem~\ref{lc-tm:longtime-infty} is very close to \cite[Theorem 5.3]{ii09}. In the latter paper, the authors obtain the convergence assuming that $\mathop{\rm inf}_{\mathbb R^N}\{u_0-{\rm min}\{\psi,v \}\}> -\infty$ and \begin{eqnarray}\label{hypII} \mathop{\rm lim}_{r\to +\infty}\{|(u_0-v)(x)| : \psi(x)>v(x)+r\}=0. \end{eqnarray} We do not know if this assumption is equivalent to ours. But in both assumptions, the point is that $u_0(x)$ must be close to $v(x)$ when $v(x)$ is ``far below'' $\psi(x)$, which means $\psi(x)>v(x)+r$ for large $r$ in~\eqref{hypII} and ${\rm min}\{\psi(x), -r\}> v(x)$ for large $r$ in our case. This situation occurs for instance if $v,u_0$ are unbounded from below and close when $v(x)\to -\infty$. \end{remark} \section{Optimal control problem and examples} \label{lc-sec:exple} Consider the one-dimensional Hamilton-Jacobi equation \begin{eqnarray}\label{lc-exp} \begin{cases} u_t(x,t) + |Du(x,t)| = 1+ |x| \qquad \text{in } \mathbb R \times (0, \infty)\\ u(x,0) = u_0(x), \end{cases} \end{eqnarray} where $l(x)=|x|+1\geq 0,$ $\min_{\mathbb R} l= 1,$ and $\mathcal{A}={\rm argmin}\,l=\{0\}$ satisfies~\eqref{lc-argmin-l}. We can come back to our framework by looking at $\tilde u (x,t)=u (x,t)-t$ which solves $$\tilde u_t(x,t) + |D\tilde u(x,t)| = |x| \qquad \text{in } \mathbb R \times (0, \infty)\; ,$$ where $\tilde l(x):= |x|$ satisfies the assumptions of our results. There exists a unique continuous solution $u$ of~\eqref{lc-exp} for every continuous $u_0$ satisfying $|u_0(x)|\leq C(1+|x|^2)$ (use Theorem~\ref{lc-compathm} and the fact that $\pm Ke^{Kt}(1+|x|^2)$ are respectively a supersolution and a subsolution for large $K$). We can represent $u$ as the value function of the following associated deterministic optimal control problem. Consider the controlled ordinary differential equation \begin{eqnarray}\label{lc-expcontrol} \begin{cases} \dot{X}(s) = \alpha(s),\\ X(0) = x \in \mathbb R, \end{cases} \end{eqnarray} where the control $\alpha(\cdot)\in L^\infty([0,+\infty); [-1,1])$ (i.e., $|\alpha(t)| \leq 1$ a.e. $t\geq 0$). For any given control $\alpha$,~\eqref{lc-expcontrol} has a unique solution $X(t) =X_{x,\alpha (\cdot)}(t)= x + \int_0^t \alpha(s)ds$. We define the cost functional $$J(x,t,\alpha) = \int_0^t(|X(s)| + 1)ds + u_0(X(t)),$$ and the value function $$V(x,t) = \inf_{\alpha\in L^\infty([0,+\infty); [-1,1])} J(x,t,\alpha).$$ It is classical to check that $V(x,t)=u(x,t)$ is the unique viscosity solution of~\eqref{lc-exp}, see~\cite{barles94, bcd97}. \subsection{Solutions to the ergodic problem} \label{sec:exple-sol-ergo} There are infinitely many essentially different solutions, with different constants, to the associated ergodic problem \begin{eqnarray}\label{exple-ergo} |Dv(x)|=1+|x|+c \qquad \text{in } \mathbb R. \end{eqnarray} Define $S(x)=\int_0^x |y|dy.$ The following pairs $(c,v)$ are solutions. 
\begin{itemize} \item $(-1, \frac{1}{2}x^2)$ and $(-1, -\frac{1}{2}x^2).$ They are bounded from below (respectively from above) with $c=-\min\,l$; \item $(-1,S(x))$ and $(-1,-S(x)).$ They are neither bounded from below nor from above and $c=-\min\,l$; \item $(\lambda -1, \lambda x+S(x))$ and $(\lambda -1, -\lambda x-S(x))$ for every $\lambda >0.$ They are neither bounded from below nor from above and $c>-\min\,l$; \item $(\lambda -1, -\frac{1}{2}x^2-\lambda|x|)$ for every $\lambda >0.$ These solutions are nonsmooth (notice that $-v$ is no longer a viscosity solution), and they are not bounded from below. Actually, they are the solutions obtained by the constructive proof of Theorem~\ref{lc-thm:ergodic}(i). Indeed, the unique solution $V_R$ of the Dirichlet problem~\eqref{pb-dirichl} is $V_R(x)= \frac{R^2-x^2}{2}+\lambda(R-|x|)$ for $x\in [-R,R],$ leading to $v(x)={\rm lim}_{R\to \infty}\{V_R(x)-V_R(0)\}= -\frac{1}{2}x^2-\lambda|x|.$ \item $(c,v)$ where $(c,v_1)$ and $(c,v_2)$ are solutions, $v=\min \{v_1+C_1, v_2+C_2\}$ and $C_1,C_2\in\mathbb R.$ This is a consequence of Lemma~\ref{barron-jensen}. \end{itemize} \subsection{Equation~\eqref{lc-exp} with $u_0(x)=S(x)$} \label{lc-exple:S} For any solution $(c,v)$ to~\eqref{exple-ergo}, it is obvious that $u(x,t)= -ct+v(x)$ is the unique solution to~\eqref{lc-exp} with $u_0(x)=v(x)$ and the convergence holds, i.e., $u(x,t)+ct\to v(x)$ as $t\to +\infty.$ In particular, if $u_0(x)=S(x),$ the solution of~\eqref{lc-exp} is $u(x,t)= t+S(x).$ Let us recover the solution in another way by computing the value function of the control problem stated above. Let $t>0.$ We compute $V(x,t)$ for any $x\in\mathbb R$ by determining the optimal controls and trajectories. \noindent{\it 1st case:} $x\geq 0.$\\ There are infinitely many optimal strategies: they consist in going as quickly as possible to 0 ($={\rm argmin}\,l$), waiting at 0 for a while, and then going as quickly as possible towards $-\infty.$ For any $0\leq \tau \leq t-x,$ this corresponds to the optimal controls and trajectories \begin{eqnarray}\label{lc-optS} && \scriptsize\alpha(s)= \left\{\begin{array}{cl} -1, & 0\leq s\leq x,\\ 0, & x\leq s\leq x+\tau,\\ -1, & x+\tau\leq s\leq t, \end{array}\right. \qquad \scriptsize X(s)= \left\{\begin{array}{cl} x-s, & 0\leq s\leq x,\\ 0, & x\leq s\leq x+\tau,\\ -(s-x-\tau), & x+\tau\leq s\leq t. \end{array}\right. \end{eqnarray} They lead to $V(x,t)=J(x,t,\alpha)=t+S(x).$ Among these optimal strategies, two are of particular interest: \begin{itemize} \item The first one is to go as quickly as possible to 0 and to remain there ($\tau=t-x$). This strategy is typical of what happens in the periodic case: the optimal trajectories are attracted by $\mathcal{A}={\rm argmin}\,l.$ \item The second one is to go as quickly as possible towards $-\infty$ during all the available time $t$ ($\tau=0$). This situation is very different from the periodic case. Due to the unbounded (from below) final cost $u_0,$ some optimal trajectories are no longer attracted by ${\rm argmin}\,l$ and are unbounded. \end{itemize} \noindent{\it 2nd case:} $x< 0.$\\ In this case there are no longer bounded optimal trajectories. The only optimal strategy is to go as quickly as possible towards $-\infty.$ The optimal control is $\alpha(s)=-1,$ with $X(s)=x-s$ for $0\leq s\leq t$, leading to $V(x,t)=J(x,t,-1)=t+S(x).$ The analysis of this case in terms of control will help us for the following examples. 
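For the reader's convenience, we sketch the computation giving $V(x,t)=J(x,t,\alpha)=t+S(x)$ in the first case (the second case is analogous): with $\alpha$ and $X$ given by~\eqref{lc-optS}, $x\geq 0$ and $0\leq \tau\leq t-x,$ splitting the integral along the three phases of the trajectory and using $S(y)=\frac{1}{2}y|y|,$ we find \begin{eqnarray*} J(x,t,\alpha)&=&\int_0^{x}(x-s+1)\,ds+\int_{x}^{x+\tau}1\,ds+\int_{x+\tau}^{t}(s-x-\tau+1)\,ds+S(-(t-x-\tau))\\ &=&\frac{x^2}{2}+x+\tau+\frac{(t-x-\tau)^2}{2}+(t-x-\tau)-\frac{(t-x-\tau)^2}{2}\\ &=&t+\frac{x^2}{2}\;=\;t+S(x), \end{eqnarray*} independently of $\tau,$ which explains why all the strategies described above yield the same cost.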
\subsection{Equation~\eqref{lc-exp} with $u_0(x)=\frac{1}{2}x^2+b(x)$, where $b$ is bounded from below} \label{lc-exple:carreb} To illustrate Theorem~\ref{lc-tm:longtime}, we choose an initial condition which is a bounded perturbation of a bounded from below solution of the ergodic problem. To simplify the computations, we choose a periodic perturbation $b$. For any $x$, an optimal strategy can be chosen among those described in Example~\ref{lc-exple:S}. More precisely: go as quickly as possible to 0, wait nearly until time $t$ and move a little to reach the minimum of the periodic perturbation. For $t$ large enough (at least $t>x$), we compute the cost with $\alpha, X$ given by~\eqref{lc-optS}, $$ J(x,t,\alpha)= t+\frac{1}{2}x^2+b(-t+x+\tau). $$ For every $t$ large enough, there exists $0\leq \tau=\tau_t <t-x$ such that $b(-t+x+\tau_t)=\min \, b.$ It leads to $$ V(x,t)= J(x,t,\alpha)= t+\frac{1}{2}x^2 + \min \, b. $$ Therefore, we have the convergence as announced in Theorem~\ref{lc-tm:longtime}. \subsection{Equation~\eqref{lc-exp} with $u_0(x)=S(x)+b(x)$, where $b$ is bounded and Lipschitz continuous} We compute the value function as above. Due to the unboundedness from below of $u_0$ we need to distinguish the cases $x\geq 0$ and $x<0$ as in Example~\ref{lc-exple:S}. \noindent{\it 1st case:} $x\geq 0.$\\ We use the same strategy as in Example~\ref{lc-exple:carreb}, leading to $V(x,t)= J(x,t,\alpha)= t+\frac{1}{2}x^2 + \min \, b.$ \noindent{\it 2nd case:} $x< 0.$\\ In this case, the optimal strategy suggested by Examples~\ref{lc-exple:S} and~\ref{lc-exple:carreb} is to start by waiting a small time $\tau$ before going as quickly as possible towards $-\infty.$ The waiting time corresponds to an attempt to reach a minimum of $b$ at the left end of the trajectory. It corresponds to the control and trajectory \begin{eqnarray*} &&\scriptsize\alpha(s)= \left\{\begin{array}{cl} 0, & 0\leq s\leq \tau,\\ -1, & \tau\leq s\leq t, \end{array}\right. \qquad \scriptsize X(s)= \left\{\begin{array}{cl} x, & 0\leq s\leq \tau,\\ x-(s-\tau), & \tau\leq s\leq t, \end{array}\right. \end{eqnarray*} leading to $$ J(x,t,\alpha)=t+S(x)+\tau |x|+b(x-t+\tau). $$ Due to the boundedness of $b,$ in order to be optimal, necessarily $\tau=O(1/|x|)$ so as to keep the positive term $\tau |x|$ in $J(x,t,\alpha)$ bounded. So, for large $|x|,$ $x<0,$ we have $b(x-t+\tau)\approx b(x-t).$ When $b$ is not constant, $b(x-t)$ has no limit as $t\to +\infty,$ so the convergence of $V(x,t)$ cannot hold. In this case, $u_0$ is a bounded perturbation of a solution $(c,v)=(-1,S(x))$ of the ergodic problem with $c = -\min \,l$ but $v$ is not bounded from below and the convergence of the value function may not hold. It follows that the assumptions of Theorem~\ref{lc-tm:longtime} cannot be weakened easily. In particular, the boundedness from below of the solution of the ergodic problem seems to be crucial. \subsection{Equation~\eqref{lc-exp} with $u_0(x)=S(x)+x+{\rm sin}(x)$} The solution of~\eqref{lc-exp} is $u(x,t)=S(x)+x+{\rm sin}(x-t).$ Clearly, we do not have the convergence. In this case, $u_0$ is a bounded perturbation of the solution $(0, S(x)+x)$ of the ergodic problem with $c> -\min \,l$ and $S(x)+x-u_0(x)\not\to 0$ as $x\to -\infty$ (where $S(x)+x\to -\infty$). This example shows that the convergence in Theorem~\ref{lc-tm:longtime-infty} may fail when~\eqref{u0-v-to0} does not hold. 
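As a sanity check, let us verify the formula for the solution in this last example: since $S'(x)=|x|,$ the function $u(x,t)=S(x)+x+{\rm sin}(x-t)$ is $C^1$ and satisfies $$ u_t+|Du|=-{\rm cos}(x-t)+\big|\,|x|+1+{\rm cos}(x-t)\,\big|=-{\rm cos}(x-t)+|x|+1+{\rm cos}(x-t)=1+|x|, $$ because $|x|+1+{\rm cos}(x-t)\geq 0,$ together with $u(x,0)=S(x)+x+{\rm sin}(x)=u_0(x).$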
\appendix \section{Comparison principle for the solutions of~\eqref{lc-cauchy}} The comparison result for the unbounded solutions of~\eqref{lc-cauchy} is a consequence of a general comparison result for first-order Hamilton-Jacobi equations which holds without growth conditions at infinity. \begin{theorem} \label{lc-compathm} \cite{ishii84, ley01} Assume that $H$ satisfies~\eqref{Hlip} and that $u \in USC(\mathbb R^N \times [0, T])$ and $v \in LSC(\mathbb R^N \times [0, T])$ are respectively a subsolution of~\eqref{lc-cauchy} with initial data $u_0\in C(\mathbb R^N)$ and a supersolution of~\eqref{lc-cauchy} with initial data $v_0\in C(\mathbb R^N).$ Then, for every $x_0\in\mathbb R^N$ and $r>0,$ \begin{eqnarray*} && u(x,t) - v(x,t) \leq \sup_{\overline{B}(x_0,r)}\{u_0(y) - v_0(y) \} \quad \text{for every $(x,t) \in \bar{\mathcal{D}}(x_0,r)$,} \end{eqnarray*} where $$ \bar{\mathcal{D}}(x_0,r) = \lbrace (x,t) \in B(x_0,r) \times (0,T): e^{C_H T}(1 + |x-x_0|) - 1 \leq r \rbrace. $$ \end{theorem} When $\sup_{\mathbb R^N} \{u_0-v_0\} < +\infty$, a straightforward consequence is \begin{eqnarray*} && u(x,t) - v(x,t) \leq \sup_{\mathbb R^N}\{u_0 - v_0 \} \quad \text{for every $(x,t) \in \mathbb R^N\times [0,+\infty)$.} \end{eqnarray*} \section{Barron-Jensen solutions of convex HJ equations} \begin{theorem} \label{thm-bj} Assume that $H$ satisfies~\eqref{Hconvex} and~\eqref{Hlip}. Then $u\in W_{\rm loc}^{1,\infty}(\mathbb R^N\times (0,+\infty))$ is a viscosity solution (respectively subsolution) of~\eqref{lc-cauchy} if and only if it is a Barron-Jensen solution (respectively subsolution) of~\eqref{lc-cauchy}, i.e., for every $(x,t)\in \mathbb R^N\times (0,+\infty)$ and $\varphi\in C^1(\mathbb R^N\times (0,+\infty))$ such that $u-\varphi$ has a local minimum at $(x,t),$ one has \begin{eqnarray*} && \varphi_t(x,t)+ H(x,D\varphi(x,t))=l(x) \ \ \text{(respectively $\leq l(x)$)}. \end{eqnarray*} \end{theorem} This result is due to Barron and Jensen~\cite{bj90} and we refer to Barles~\cite[p. 89]{abil13}. Lemmas~\ref{prop-convol}(iii) and~\ref{barron-jensen} are consequences of this theorem. As far as Lemma~\ref{prop-convol}(iii) is concerned, the fact that the inf-convolution (respectively the sup-convolution) preserves the supersolution (respectively the subsolution) property is classical (\cite{barles94, bcd97}). What is more surprising is the preservation of the subsolution property by the inf-convolution, which comes from the convexity of $H$ and the Barron-Jensen Theorem~\ref{thm-bj}. For a proof, notice first that $U$, being a solution of~\eqref{lc-cauchy}, is a Barron-Jensen solution of~\eqref{lc-cauchy}. We then apply~\cite[Lemma 3.2]{ley01} using that $H,l$ are independent of $t.$ For Lemma~\ref{barron-jensen}, we refer the reader to~\cite[Theorem 9.2, p.90]{abil13}. \begin{thebibliography}{10} \bibitem{abil13} Yves Achdou, Guy Barles, Hitoshi Ishii, and Grigory~L. Litvinov. \newblock {\em Hamilton-{J}acobi equations: approximations, numerical analysis and applications}, volume 2074 of {\em Lecture Notes in Mathematics}. \newblock Springer, Heidelberg; Fondazione C.I.M.E., Florence, 2013. \newblock Lecture Notes from the CIME Summer School held in Cetraro, August 29--September 3, 2011, Edited by Paola Loreti and Nicoletta Anna Tchou, Fondazione CIME/CIME Foundation Subseries. \bibitem{bcd97} M.~Bardi and I.~Capuzzo~Dolcetta. 
\newblock {\em Optimal control and viscosity solutions of {H}amilton-{J}acobi-{B}ellman equations}. \newblock Birkh\"auser Boston Inc., Boston, MA, 1997. \bibitem{barles94} G.~Barles. \newblock {\em Solutions de viscosit\'e des \'equations de {H}amilton-{J}acobi}. \newblock Springer-Verlag, Paris, 1994. \bibitem{br06} G.~Barles and J.-M. Roquejoffre. \newblock Ergodic type problems and large time behaviour of unbounded solutions of {H}amilton-{J}acobi equations. \newblock {\em Comm. Partial Differential Equations}, 31(7-9):1209--1225, 2006. \bibitem{bs00} G.~Barles and P.~E. Souganidis. \newblock On the large time behavior of solutions of {H}amilton-{J}acobi equations. \newblock {\em SIAM J. Math. Anal.}, 31(4):925--939 (electronic), 2000. \bibitem{bim13} Guy Barles, Hitoshi Ishii, and Hiroyoshi Mitake. \newblock A new {PDE} approach to the large time asymptotics of solutions of {H}amilton-{J}acobi equations. \newblock {\em Bull. Math. Sci.}, 3(3):363--388, 2013. \bibitem{bm16} Guy Barles and Joao Meireles. \newblock On unbounded solutions of ergodic problems in {$\Bbb{R}^m$} for viscous {H}amilton-{J}acobi equations. \newblock {\em Comm. Partial Differential Equations}, 41(12):1985--2003, 2016. \bibitem{bj90} E.~N. Barron and R.~Jensen. \newblock Semicontinuous viscosity solutions of {H}amilton-{J}acobi equations with convex {H}amiltonians. \newblock {\em Comm. Partial Differential Equations}, 15(12):1713--1740, 1990. \bibitem{cil92} M.~G. Crandall, H.~Ishii, and P.-L. Lions. \newblock User's guide to viscosity solutions of second order partial differential equations. \newblock {\em Bull. Amer. Math. Soc. (N.S.)}, 27(1):1--67, 1992. \bibitem{cl83} M.~G. Crandall and P.-L. Lions. \newblock Viscosity solutions of {H}amilton-{J}acobi equations. \newblock {\em Trans. Amer. Math. Soc.}, 277(1):1--42, 1983. \bibitem{dalio02} F.~Da~Lio. \newblock Comparison results for quasilinear equations in annular domains and applications. \newblock {\em Comm. Partial Differential Equations}, 27(1-2):283--323, 2002. \bibitem{ds06} A.~Davini and A.~Siconolfi. \newblock A generalized dynamical approach to the large time behavior of solutions of {H}amilton-{J}acobi equations. \newblock {\em SIAM J. Math. Anal.}, 38(2):478--502 (electronic), 2006. \bibitem{fathi98} A.~Fathi. \newblock Sur la convergence du semi-groupe de {L}ax-{O}leinik. \newblock {\em C. R. Acad. Sci. Paris S\'er. I Math.}, 327(3):267--270, 1998. \bibitem{fs04} A.~Fathi and A.~Siconolfi. \newblock Existence of {$C^1$} critical subsolutions of the {H}amilton-{J}acobi equation. \newblock {\em Invent. Math.}, 155(2):363--388, 2004. \bibitem{fs05} A.~Fathi and A.~Siconolfi. \newblock P{DE} aspects of {A}ubry-{M}ather theory for quasiconvex {H}amiltonians. \newblock {\em Calc. Var. Partial Differential Equations}, 22(2):185--228, 2005. \bibitem{fm07} Albert Fathi and Ezequiel Maderna. \newblock Weak {KAM} theorem on non compact manifolds. \newblock {\em NoDEA Nonlinear Differential Equations Appl.}, 14(1-2):1--27, 2007. \bibitem{ii08} Naoyuki Ichihara and Hitoshi Ishii. \newblock The large-time behavior of solutions of {H}amilton-{J}acobi equations on the real line. \newblock {\em Methods Appl. Anal.}, 15(2):223--242, 2008. \bibitem{ii09} Naoyuki Ichihara and Hitoshi Ishii. \newblock Long-time behavior of solutions of {H}amilton-{J}acobi equations with convex and coercive {H}amiltonians. 
\newblock {\em Arch. Ration. Mech. Anal.}, 194(2):383--419, 2009. \bibitem{ishii84} H.~Ishii. \newblock Uniqueness of unbounded viscosity solution of {H}amilton-{J}acobi equations. \newblock {\em Indiana Univ. Math. J.}, 33(5):721--748, 1984. \bibitem{ishii87a} H.~Ishii. \newblock A simple, direct proof of uniqueness for solutions of the {H}amilton-{J}acobi equations of eikonal type. \newblock {\em Proc. Amer. Math. Soc.}, 100(2):247--251, 1987. \bibitem{ishii08} H.~Ishii. \newblock Asymptotic solutions for large time of {H}amilton-{J}acobi equations in {E}uclidean {$n$} space. \newblock {\em Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire}, 25(2):231--266, 2008. \bibitem{ishii09} H.~Ishii. \newblock Asymptotic solutions of {H}amilton-{J}acobi equations for large time and related topics. \newblock In {\em I{CIAM} 07---6th {I}nternational {C}ongress on {I}ndustrial and {A}pplied {M}athematics}, pages 193--217. Eur. Math. Soc., Z\"urich, 2009. \bibitem{ley01} O.~Ley. \newblock Lower-bound gradient estimates for first-order {H}amilton-{J}acobi equations and applications to the regularity of propagating fronts. \newblock {\em Adv. Differential Equations}, 6(5):547--576, 2001. \bibitem{lpv86} P.-L. Lions, G.~Papanicolaou, and S.~R.~S. Varadhan. \newblock Homogenization of {H}amilton-{J}acobi equations. \newblock {\em Unpublished}, 1986. \bibitem{nr99} G.~Namah and J.-M. Roquejoffre. \newblock Remarks on the long time behaviour of the solutions of {H}amilton-{J}acobi equations. \newblock {\em Comm. Partial Differential Equations}, 24(5-6):883--893, 1999. \end{thebibliography} \end{document}
\begin{document} \setcounter{page}{1} \title[ Pseudo-differential operators in H\"older spaces ]{ Pseudo-differential operators in H\"older spaces revisited. Weyl-H\"ormander calculus and Ruzhansky-Turunen classes.} \author[D. Cardona]{Duv\'an Cardona} \address{ Duv\'an Cardona: \endgraf Department of Mathematics \endgraf Pontificia Universidad Javeriana. \endgraf Bogot\'a \endgraf Colombia \endgraf {\it E-mail address} {\rm [email protected]; [email protected]} } \subjclass[2010]{Primary 35J70; Secondary 35A27, 47G30.} \keywords{ Pseudo-differential operators on $\mathbb{R}^n$; Toroidal pseudo-differential operators; Weyl-H\"ormander calculus; Ruzhansky-Turunen classes; H\"older spaces} \begin{abstract} In this work we obtain continuity results on H\"older spaces for operators belonging to a Weyl-H\"ormander calculus associated to metrics for which the corresponding classes of operators contain, in particular, certain hypoelliptic Laplacians. With our results we recover some historical H\"older boundedness theorems (see R. Beals \cite{Be,Be2}). The action of (periodic) Ruzhansky-Turunen classes of pseudo-differential operators on H\"older spaces will also be investigated. \textbf{MSC 2010.} {Primary: 35J70, Secondary: 35A27, 47G30.} \end{abstract} \maketitle \tableofcontents \section{Introduction} \subsection{Outline of the paper} For every $0<s<1,$ the (Lipschitz) H\"older space $\Lambda^s(\mathbb{R}^n)$ consists of those functions $f$ satisfying \begin{equation}\label{Holder} \Vert f\Vert_{\Lambda^s}:=\sup_{x,y\in\mathbb{R}^n}|f(x+y)-f(y)||x|^{-s}<\infty. \end{equation} In this work we study pseudo-differential operators on H\"older spaces. These are linear operators of the form \begin{equation}\label{pseudo} Af(x)\equiv \sigma(x,D_x)f(x):=\int_{\mathbb{R}^n}e^{i2\pi x\cdot \xi}\sigma(x,\xi)\hat{f}(\xi)d\xi,\,\,\,f\in C^{\infty}_0(\mathbb{R}^n), \end{equation} where $\widehat{f}$ is the Fourier transform of $f$ and the function $\sigma$ is the so-called symbol associated to the operator $\sigma(x,D_x).$ In this paper we give mapping properties on H\"older spaces for pseudo-differential operators with symbols in Weyl-H\"ormander classes. These classes, usually denoted by $S(m,g),$ are associated to a H\"ormander metric $g=\{g_{(x,\xi)}:(x,\xi)\in\mathbb{R}^{2n}\}$ on the phase space $\mathbb{R}^n\times \mathbb{R}^n$ and a weight $m$ on $\mathbb{R}^{2n}.$ The particular case $$g=g^{\rho,\delta}:=\langle \xi\rangle^{2\delta}dx^{2}+\langle \xi\rangle^{-2\rho}d\xi^{2},\,\,m(x,\xi)=\langle \xi\rangle^{m},\,\,0\leq \delta\leq \rho\leq 1,\,\delta<1,$$ where $\langle \xi\rangle:=(1+|\xi|^2)^{\frac{1}{2}},$ corresponds to the $(\rho,\delta)-$H\"ormander class $S^{m}_{\rho,\delta}(\mathbb{R}^{2n}),$ which consists of those symbols satisfying \begin{equation}\label{RDHC} |\partial_{x}^{\beta}\partial_\xi^{\alpha}\sigma(x,\xi)|\leq C_{\alpha,\beta}\langle \xi\rangle^{m-\rho|\alpha|+\delta|\beta|}. \end{equation} These classes were introduced by L. H\"ormander in 1967, motivated by the study of hypoelliptic problems such as the heat equation. 
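As a simple illustration of the conventions above, which we record only for the reader's convenience, observe that $\partial_{x_j}f(x)=\int_{\mathbb{R}^n}e^{i2\pi x\cdot \xi}(2\pi i\xi_j)\hat{f}(\xi)d\xi,$ so that under the quantisation \eqref{pseudo} $$ -\Delta_x f(x)=-\sum_{j=1}^{n}\partial_{x_j}^{2}f(x)=\int_{\mathbb{R}^n}e^{i2\pi x\cdot \xi}\,4\pi^2|\xi|^2\,\hat{f}(\xi)d\xi, $$ i.e., $-\Delta_x$ has symbol $\sigma(x,\xi)=4\pi^2|\xi|^2,$ which satisfies \eqref{RDHC} with $m=2,$ $\rho=1,$ $\delta=0.$ Similarly, $\langle \xi\rangle^{m}$ belongs to $S^{m}_{1,0}(\mathbb{R}^{2n})$ for every $m\in\mathbb{R}.$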
Another important family is given by the Shubin classes $\Sigma^m(\mathbb{R}^n\times \mathbb{R}^n),$ defined by the condition \begin{equation}\label{Shubin} |\partial_{x}^{\beta}\partial_\xi^{\alpha}\sigma(x,\xi)|\leq C_{\alpha,\beta}\langle x,\xi\rangle^{m-\rho(|\alpha|+|\beta|)}, \end{equation} where $\langle x, \xi\rangle:=(1+|x|^2+|\xi|^2)^{\frac{1}{2}},$ which from the Weyl-H\"ormander calculus can be obtained with $$ g=g^{\rho}:=\langle x,\xi \rangle^{-2\rho}(dx^2+d\xi^2),\,\,\,m(x,\xi)=\langle x,\xi \rangle^{m},\,\,\,0\leq \rho\leq 1.$$ For symbol classes associated to anharmonic oscillators we refer the reader to Chatzakou, Delgado and Ruzhansky \cite{ChatDelRuz}. The quantisation process of pseudo-differential operators associates to each function $\sigma(x,\xi)$ in suitable classes (of symbols) on the phase space $\mathbb{R}^n\times \mathbb{R}^n,$ a (densely defined) linear operator $\sigma(x,D_{x})$ on the Hilbert space $L^2(\mathbb{R}^n),$ in such a way that the coordinate functions $x_j$ and $\xi_j$ correspond to the operators $x_{j}$ and $D_{j}=(2\pi i)^{-1}\frac{\partial}{\partial x_j},$ and such that the properties of the symbols $\sigma(x,\xi)$ (positivity, boundedness, differentiability, invertibility, homogeneity, integrability, etc.) are reflected in some sense in the properties of the operators $\sigma(x,D_{x})$ (positivity, mapping properties, invertibility, Fredholmness, geometric information, compactness). To our knowledge, the first general quantization procedure in pseudo-differential operator theory, and in many ways still the most satisfactory one, was proposed by Weyl (see H\"ormander \cite{Hor2}) not long after the invention of quantum mechanics. The symbols $\sigma(x,\xi)$ considered in this work belong to Weyl-H\"ormander classes $S(m^{-n\varepsilon},g),$ which are associated to the metric $ g_{(x,\xi)}(dx,d\xi)=m(x,\xi)^{-2}(\langle \xi \rangle^2 dx^2+d\xi^2),$ $x,\xi\in\mathbb{R}^n, $ and the weight $ m(x,\xi)=(a(x,\xi)+\langle \xi\rangle)^{\frac{1}{2}}. $ The symbol $a,$ which is assumed positive and classical, is the principal part of the second order differential operator $L$ defined by \begin{equation}\label{ST} {Lf}=-\sum_{ij}a_{ij}(x)\frac{\partial^2}{\partial{x_i\partial x_j}}f+[\textnormal{partial derivatives of lower order }]f,\,\,\, f\in C_{0}(\mathbb{R}^n). \end{equation} We also assume, for every $x\in\mathbb{R}^n,$ that the matrix $A(x)=(a_{ij}(x))$ is a positive semi-definite matrix of rank $r(x):=\textnormal{rank}(A(x))\geq r_0\geq 1.$ The coefficients $a_{ij}$ are smooth functions which are uniformly bounded on $\mathbb{R}^n$, together with all their derivatives. Important examples arise from operators of the form \begin{equation}\label{Ex1} L=-\sum_{j}X_{j}^{*}X_{j}+X_0,\,\,\,L=-\sum_{j}X_{j}^{*}X_{j}, \end{equation} where $\{X_i\}$ is a system of vector fields on $\mathbb{R}^n$ satisfying the H\"ormander condition of order 2 (this means that the vector fields $X_i$ and their commutators span $\mathbb{R}^n,$ or equivalently that $\mathbb{R}^{n}=\textnormal{Lie}\{X_i\}_{i}$). 
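To get a first feeling for the classes used in this work, let us point out the following elementary computation, included only as an illustration and under the additional assumption that $L$ is uniformly elliptic, so that $a(x,\xi)\asymp |\xi|^{2}$ uniformly in $x$ for $|\xi|$ large. In that case $$ m(x,\xi)^{2}=a(x,\xi)+\langle \xi\rangle\asymp \langle \xi\rangle^{2},\qquad g_{(x,\xi)}=m(x,\xi)^{-2}(\langle \xi\rangle^{2}dx^{2}+d\xi^{2})\asymp dx^{2}+\langle \xi\rangle^{-2}d\xi^{2}=g^{1,0}_{(x,\xi)}, $$ and, since equivalent metrics and weights define the same symbol classes, $S(1,g)$ is nothing but the class $S^{0}_{1,0}(\mathbb{R}^{2n})$; this is precisely the situation in which our results recover the classical H\"older estimates of Beals for the classes $S(1,g)$, as explained below.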
On the other hand, H\"older spaces on the torus $\mathbb{T}^n=\mathbb{R}^n/\mathbb{Z}^n,$ are the Banach spaces defined for each $0<s\leq 1$ by $$\Lambda^{s}(\mathbb{T}^n)=\{f:\mathbb{T}^n\rightarrow \mathbb{C} : |f|_{\Lambda^s}=\sup_{x,h\in \mathbb{T}^n} {|f(x+h)-f(x)|} {|h|^{-s}} <\infty \}$$ together with the norm $\Vert f\Vert_{\Lambda^{s}}=|f|_{\Lambda^s}+\sup_{x\in \mathbb{T}^n}|f(x)|.$ In the second part of this work we investigate the action of periodic operators on toroidal H\"older spaces. These operators have the form \begin{equation} a(x,D_{x})f(x)\equiv \textnormal{Op}(a)f(x)=\sum_{\eta\in \mathbb{Z}^n}e^{i2\pi \langle x, \eta\rangle}a(x,\eta)\hat{f}(\eta), \end{equation} where the function $a(x,\eta)$ is called, the symbol of $a(x,D_{x}),$ and $\widehat{f}$ is the periodic Fourier transform of $f.$ Periodic pseudo-differential operators were introduced by M. Agranovich \cite{ag} who proposed a global quantization of periodic pseudo-differential operators on the one dimensional torus $\mathbb{R}/\mathbb{Z}\equiv\mathbb{T}$. Later, this theory was widely developed by G. Vainikko, M. Ruzhansky and V. Turunen (see \cite{Ruz}), and subsequently generalised on compact Lie groups. For some recent work on boundedness results of periodic pseudo-differential operators we refer the reader to the references \cite{Duvan2,Duvan3,periodic,Profe2, s2, Ruz, Ruz-2} where the subject was treated in $L^p$-spaces. Pseudo-differential operators with symbols in H\"ormander classes can be defined on smooth closed manifolds by using local charts (localizations). Agranovich (see \cite{ag}) gives a global definition of pseudo-differential operators on the circle $\mathbb{S}^1,$ (instead of the local formulation on the circle as a manifold). By using the Fourier transform Agranovich's definition was readily generalizable to the $n-$dimensional torus $\mathbb{T}^n.$ Indeed, G. Vainikko, M. Ruzhansky and V. Turunen introduced H\"ormander classes on any $n$-torus $\mathbb{T}^n$ of the following way: given $m\in \mathbb{R},\, 0\leq \rho,\delta \leq 1,$ we say that $a \in S^{m}_{\rho,\delta}(\mathbb{T}^n\times \mathbb{Z}^n)$ ($(\rho,\delta)-$ symbol classes) if \begin{equation} \forall \alpha,\beta\in\mathbb{N}^n,\exists C_{\alpha,\beta}>0,\,\, |\Delta^{\alpha}_{\xi}\partial^{\beta}_{x}a(x,\xi)|\leq C_{\alpha,\beta}\langle \xi \rangle^{m-\rho|\alpha|+\delta|\beta|}. \end{equation} Here $\Delta_\xi$ is the usual difference operator on sequences defined by \begin{equation} \Delta_{\xi_j} a(\xi):=a(\xi+e_j)-a(\xi);\,\,\,\,e_{j}(k):=\delta_{jk},\,\,\,\Delta^{\alpha}_{\xi}=\Delta_{\xi_1}^{\alpha_1}\cdots \Delta_{\xi_n}^{\alpha_n}, \end{equation} where $\alpha=(\alpha_1,\cdots,\alpha_n)\in\mathbb{N}_0^n.$ It is a non-trivial result, that the pseudo-differential calculus with symbols on $(\rho,\delta)-$classes by Vainikko-Ruzhansky-Turunen and the periodic H\"ormander calculus via localisations, are equivalent, this means that $$ \Psi^m_{\rho,\delta}(\mathbb{T}^n\times \mathbb{Z}^n)=\Psi^m_{\rho,\delta}(\mathbb{T}^n\times \mathbb{R}^n) . $$ This fact is known as the McLean equivalence theorem (see \cite{Mc}). The principal consequence of the McLean equivalence is that we can transfer the boundedness properties of pseudo-differential operators on $\mathbb{R}^n$ (in H\"ormander classes) to pseudo-differential operators on the torus. 
So, if we denote by $\Lambda^s(\mathbb{R}^n)$ the H\"older space with regularity order $s,$ $0<s<1,$ then it is well known that $T_\sigma\in \Psi_{1,0}^0(\mathbb{R}^n\times \mathbb{R}^n )$ extends to a bounded operator on $\Lambda^s(\mathbb{R}^n)$ (see E. Stein \cite{Eli}, p. 253). In view of the McLean equivalence, if $\sigma(x,D_x)\in \Psi^0_{1,0}(\mathbb{T}^n\times \mathbb{Z}^n) $ is a periodic operator, then \begin{eqnarray} \sigma(x,D_x):\Lambda^s(\mathbb{T}^n)\rightarrow \Lambda^s(\mathbb{T}^n),\,\,0<s<1, \end{eqnarray} extends to a bounded operator. So, the problem of the boundedness of periodic pseudo-differential operators can be reduced to considering symbols of limited regularity. \subsection{State of the art and main results} In general, classes of bounded pseudo-differential operators on Lebesgue spaces $L^p{(\mathbb{R}^n)},$ $1<p<\infty,$ also provide bounded operators on H\"older spaces $\Lambda^s(\mathbb{R}^n),$ Besov spaces $B^s_{p,q}(\mathbb{R}^n),$ Triebel-Lizorkin spaces $F^s_{p,q}(\mathbb{R}^n),$ and several other types of function spaces. Through the metric $g^{1,0}_{(x,\xi)}(dx,d\xi)=dx^2+\langle\xi\rangle^{-2}d\xi^2$ and the weight $m(x,\xi)=1$ we recover the Kohn-Nirenberg class of order zero $S^0(\mathbb{R}^n)=S(1,g^{1,0}),$ which gives bounded pseudo-differential operators on $L^p$ spaces, $1<p<\infty,$ and on every H\"older space $\Lambda^s.$ In fact, for certain metrics, R. Beals showed that the classes $S(1,g)$ give bounded pseudo-differential operators in $L^p$ spaces as well as in H\"older spaces (see the classical references Beals \cite{Be} and Beals \cite{Be2}; the case of periodic operators was treated in \cite{Car,Car1}). On the other hand, if we define \begin{equation}\label{q0} Q_0:=r_0+2(n-r_0),\,\,\varepsilon_0:=\frac{Q_0}{2n}-\frac{1}{2}, \end{equation} it was proved by J. Delgado in \cite{Profe2} that the set of operators with symbols in the classes $S(m^{-n\varepsilon},g)$ provides bounded operators on $L^p$-spaces for all $1<p<\infty$ and $\varepsilon_0\leq \varepsilon<\frac{Q_0}{2n}.$ Moreover, for $0\leq \beta<\varepsilon_0,$ the operators in the classes $S(m^{-\beta},g)$ are $L^p$-bounded provided that $|1/2-1/p|\leq \beta/2n\varepsilon_0.$ These are important extensions of some results by C. Fefferman and R. Beals. In this paper we will prove the following. \begin{itemize} \item Let us assume $\varepsilon_0\leq \varepsilon<\frac{Q_0}{2n}$ and let $\sigma\in S(m^{-n\varepsilon},g).$ Then the pseudo-differential operator $\sigma(x,D_x)$ extends to a bounded operator on $\Lambda^s(\mathbb{R}^n)$ for all $0<s<1.$ \item Let us assume $0\leq \beta \leq n\varepsilon_0$ and let $\sigma\in S(m^{-\beta},g).$ Then the pseudo-differential operator $\sigma(x,D_x)$ extends to a bounded operator from $\Lambda^s(\mathbb{R}^n)$ into $\Lambda^{s-(n\varepsilon_0-\beta)}(\mathbb{R}^n)$ for all $0<s<1$ provided that $0<s-(n\varepsilon_0-\beta)<1.$ \end{itemize} In particular, if the operator $L$ is elliptic we have $Q_0=n,$ $\varepsilon_0=0,$ and then we recover the classical H\"older estimate by R. Beals for $S(1,g)$-classes (see Beals \cite{Be,Be2}). On the other hand, our results for periodic pseudo-differential operators can be summarised as follows. 
\begin{itemize} \item Let $0\leq \varepsilon <1$ and $k:=[\frac{n}{2}]+1,$ and let $\sigma:\mathbb{T}^n\times \mathbb{Z}^n\rightarrow \mathbb{C}$ be a symbol such that $|\Delta_{\xi}^{\alpha}\sigma(x,\xi)|\leq C_{\alpha}\langle \xi \rangle^{-\frac{n}{2}\varepsilon-(1-\varepsilon)|\alpha|},$ for $|\alpha| \leq k.$ Then $\sigma(x,D):\Lambda^s(\mathbb{T}^n)\rightarrow\Lambda^s(\mathbb{T}^n)$ extends to a bounded linear operator for all $0<s<1.$ \item Let $0<\rho\leq 1,$ $0\leq \delta\leq 1,$ $\ell\in\mathbb{N},$ $k:=[\frac{n}{2}]+1,$ and let $A:C^{\infty}(\mathbb{T}^n)\rightarrow C^{\infty}(\mathbb{T}^n)$ be a pseudo-differential operator with symbol $\sigma$ satisfying \begin{equation}\label{eqMaint'} \vert \partial_x^\beta\Delta_{\xi}^{\alpha}\sigma(x,\xi)\vert\leq C_{\alpha}\langle \xi \rangle^{-m-\rho|\alpha|+\delta|\beta|} \end{equation} for all $|\alpha| \leq k,$ $|\beta|\leq \ell.$ Then $A:\Lambda^s(\mathbb{T}^n)\rightarrow \Lambda^s(\mathbb{T}^n)$ extends to a bounded linear operator for all $0<s<1$ provided that $m\geq \delta \ell+\frac{n}{2}(1-\rho).$ \end{itemize} The periodic conditions above provide Fefferman-type conditions for the H\"older boundedness of periodic operators. These estimates extend the H\"older results in \cite{Car} and \cite{Car1}. This paper is organized as follows. In Section \ref{preliminaries} we present some basics on Weyl-H\"ormander classes and the periodic Ruzhansky-Turunen classes. Finally, in Section \ref{proofs} we prove our main results. \section{Weyl-H\"ormander and Ruzhansky-Turunen classes}\label{preliminaries} \subsection{Weyl-H\"ormander classes: $S(m,g)$} In this subsection we provide some notions on the Weyl-H\"ormander calculus. An extensive treatment of the subject can be found in H\"ormander \cite{Hor2}; here we only present the elements needed for our purposes. The Kohn-Nirenberg quantisation procedure (or classical quantisation) associates to every $\sigma\in\mathscr{S}'(\mathbb{R}^{2n})$ the operator \begin{equation}\label{cl} \sigma(x,D_x)f(x):=\int_{\mathbb{R}^n}e^{i2\pi x\cdot \xi}\sigma(x,\xi)\widehat{f}(\xi)d\xi=\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}e^{i2\pi (x-y)\cdot \xi}\sigma(x,\xi)d\xi f(y)dy, \end{equation} where $f\in C^{\infty}_0(\mathbb{R}^n). $ On the other hand, the Weyl quantization of $\sigma\in\mathscr{S}'(\mathbb{R}^{2n})$ is given by the operator \begin{equation}\label{Wq} \sigma^{\omega}(x,D_x)f(x):= \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}e^{i2\pi (x-y)\cdot \xi}\sigma(\frac{1}{2}(x+y),\xi)d\xi f(y)dy,\,\,\,f\in C^{\infty}_0(\mathbb{R}^n). \end{equation} There is a relation between the classical quantization and the Weyl quantization (see \cite{Profe3,Profe2}). For the Weyl quantisation, the composition rule can be formulated in terms of the symplectic form $\langle (x,\xi),(y,\eta)\rangle_\omega:=y\cdot \xi-x\cdot \eta,$ through the operation \begin{equation}\label{operation} a\# b(X):=\frac{1}{\pi^{2n}}\int_{\mathbb{R}^{2n}}\int_{\mathbb{R}^{2n}}e^{-2i \langle X-Y,X-Z\rangle_{\omega}}a(Y)b(Z)dYdZ,\,\,\,\,\,X=(x,\xi), \end{equation} which satisfies the identity \begin{equation}\label{identitycompo} (a\#b)^\omega=a^\omega\circ b^\omega. \end{equation} $S(m,g)$-classes are associated to H\"ormander metrics, which are special Riemannian metrics $g=(g_X)_{X\in \mathbb{R}^{2n}}$ on $\mathbb{R}^{2n}$ satisfying the following properties. 
\begin{itemize} \item{Continuity}: there exist positive constants $C,c,c'$ such that $g_{X}(Y)\leq C$ implies \begin{equation}\label{continuity} c'\cdot g_{X+Y}(Z)\leq g_{X}(Z)\leq c\cdot g_{X+Y}(Z), \textnormal{ where } X,Y,Z\in\mathbb{R}^{2n}. \end{equation} \item{Uncertainty principle}: for $X,T,Z\in\mathbb{R}^{2n}$ and \begin{equation}\label{Incer} g_{X}^{\langle\cdot,\cdot \rangle_{\omega}}(T)=\sup_{Z\neq 0}\frac{ \langle T,Z \rangle_{\omega}^2}{g_X(Z)}, \end{equation} the metric $g$ satisfies the uncertainty principle if \begin{equation}\label{eqincert} \lambda_g(X):= \inf_{T\neq 0}(\frac{g_{X}^{\langle\cdot,\cdot \rangle_{\omega}}(T) }{g_X(T)})^{\frac{1}{2}}\geq 1. \end{equation} \item{Temperancy}: we say that $g$ is a temperate metric if there exist $\overline{C}>0$ and $J\in\mathbb{N}$ satisfying \begin{equation}\label{temperancy} (\frac{g_X(\cdot)}{g_Y(\cdot)})^{\pm 1}\leq \overline{C}(1+g_Y^{\langle\cdot,\cdot\rangle_\omega}(X-Y))^{J}. \end{equation} \end{itemize} \ On the other hand, the classes $S(m,g)$ associated to a H\"ormander metric $g$ require the notion of a $g$-admissible weight $M,$ which is a strictly positive function satisfying: \begin{itemize} \item{Continuity:} there exists $D>0$ such that \begin{equation}\label{continuityweigh} (\frac{M(X+Y)}{M(X)})^{\pm 1}\leq D,\,\,\,\,\textnormal{if} \,\,\,\,g_{X}(Y)\leq \frac{1}{D}. \end{equation} \item{Temperancy:} there exist $D'>0$ and $N\in\mathbb{N}$ such that \begin{equation}\label{temperancyweigh} (\frac{M(Y)}{M(X)})^{\pm 1}\leq D'(1+g^{ \langle\cdot,\cdot\rangle_\omega }_{Y}(X-Y))^{N},\,\,\,\, \textnormal{for all} \,\,\,\,X,Y\in\mathbb{R}^{2n}. \end{equation} \end{itemize} We end this section with the definition of the $S(m,g)$ Weyl-H\"ormander classes. \begin{definition}[Weyl-H\"ormander classes] Let us assume that $g$ is a H\"ormander metric and let $m$ be a $g$-admissible weight. The class $S(m,g)$ consists of those smooth functions $\sigma$ on $\mathbb{R}^{2n}$ satisfying the symbol inequalities \begin{equation}\label{Smgdefi} |\sigma^{(k)}(X)(T_1,T_2,\cdots,T_k)|\leq C_k\, m(X) \cdot (g_X(T_1)g_X(T_2)\cdots g_X(T_k))^{\frac{1}{2}},\qquad X,T_1,\ldots,T_k\in\mathbb{R}^{2n},\ k\in\mathbb{N}_0. \end{equation} \end{definition} For every symbol $\sigma\in S(m,g)$ we denote by $\Vert \sigma \Vert_{S(m,g),k}$ the smallest constant $C_k$ satisfying the inequality \eqref{Smgdefi}. \subsection{Ruzhansky-Turunen classes} The toroidal H\"older spaces are the Banach spaces defined for each $0<s\leq 1$ by $$\Lambda^{s}(\mathbb{T}^n)=\{f:\mathbb{T}^n\rightarrow \mathbb{C} : |f|_{\Lambda^s}=\sup_{x,h\in \mathbb{T}^n} {|f(x+h)-f(x)|} {|h|^{-s}} <\infty \}$$ together with the norm $\Vert f\Vert_{\Lambda^{s}}=|f|_{\Lambda^s}+\sup_{x\in \mathbb{T}^n}|f(x)|.$ In our analysis of periodic operators on H\"older spaces we use the standard notation (see \cite{ Hor1, Hor2, Ruz, Wong}). 
The discrete Schwartz space $\mathcal{S}(\mathbb{Z}^n)$ denotes the space of discrete functions $\phi:\mathbb{Z}^n\rightarrow \mathbb{C}$ such that \begin{equation} \forall M\in\mathbb{R}, \exists C_{M}>0,\, |\phi(\xi)|\leq C_{M}\langle \xi \rangle^M, \end{equation} where $\langle \xi \rangle=(1+|\xi|^2)^{\frac{1}{2}}.$ The toroidal Fourier transform is defined for any $f\in C^{\infty}(\mathbb{T}^n)$ by $$\hat{f}(\xi)=\int_{\mathbb{T}^n}e^{-i2\pi\langle x,\xi\rangle}f(x)dx,\,\,\xi\in\mathbb{Z}^n,\,\,\,\,\langle x,\xi\rangle=x_1\xi_1+\cdots +x_n\xi_n.$$ The Fourier inversion formula is given by $$f(x)=\sum_{\xi\in\mathbb{Z}^n}e^{i2\pi\langle x,\xi \rangle }\hat{f}(\xi),\,\,x\in\mathbb{T}^n.$$ Now, the periodic H\"ormander class $S^m_{\rho,\delta}(\mathbb{T}^n\times \mathbb{R}^n),$ where $0\leq \rho,\delta\leq 1,$ consists of those complex functions $a(x,\xi)$ which are 1-periodic in $x$, smooth in $(x,\xi)\in \mathbb{T}^n\times \mathbb{R}^n$ and which satisfy the toroidal symbol inequalities \begin{equation}\label{css} |\partial^{\beta}_{x}\partial^{\alpha}_{\xi}a(x,\xi)|\leq C_{\alpha,\beta}\langle \xi \rangle^{m-\rho|\alpha|+\delta|\beta|}. \end{equation} Symbols in $S^m_{\rho,\delta}(\mathbb{T}^n\times \mathbb{R}^n)$ are symbols in $S^m_{\rho,\delta}(\mathbb{R}^n\times \mathbb{R}^n)$ (see \cite{Hor1, Ruz}) of order $m$ which are 1-periodic in $x.$ If $a(x,\xi)\in S^{m}_{\rho,\delta}(\mathbb{T}^n\times \mathbb{R}^n),$ the corresponding pseudo-differential operator is defined by \begin{equation}\label{hh} a(x,D_x)u(x)=\int_{\mathbb{T}^n}\int_{\mathbb{R}^n}e^{i2\pi\langle x-y,\xi \rangle}a(x,\xi)u(y)d\xi dy,\,\, u\in C^{\infty}(\mathbb{T}^n). \end{equation} The set $S^m_{\rho,\delta}(\mathbb{T}^n\times \mathbb{Z}^n),\, 0\leq \rho,\delta\leq 1,$ consists of those functions $a(x, \xi)$ which are smooth in $x$ for all $\xi\in\mathbb{Z}^n$ and which satisfy \begin{equation}\label{cs} \forall \alpha,\beta\in\mathbb{N}^n,\exists C_{\alpha,\beta}>0,\,\, |\Delta^{\alpha}_{\xi}\partial^{\beta}_{x}a(x,\xi)|\leq C_{\alpha,\beta}\langle \xi \rangle^{m-\rho|\alpha|+\delta|\beta|}. \end{equation} The operator $\Delta_\xi^\alpha$ in \eqref{cs} is the difference operator, which is defined as follows. First, if $f:\mathbb{Z}^n\rightarrow \mathbb{C}$ is a discrete function and $(e_j)_{1\leq j\leq n}$ is the canonical basis of $\mathbb{R}^n,$ $ (\Delta_{\xi_{j}} f)(\xi)=f(\xi+e_{j})-f(\xi). $ If $k\in\mathbb{N},$ denote by $\Delta^k_{\xi_{j}}$ the composition of $\Delta_{\xi_{j}}$ with itself $k$ times. Finally, if $\alpha\in\mathbb{N}^n,$ $\Delta^{\alpha}_{\xi}= \Delta^{\alpha_1}_{\xi_{1}}\cdots \Delta^{\alpha_n}_{\xi_{n}}.$ The main objects here are the toroidal operators (or periodic operators) with symbols $a(x,\xi).$ They are defined (in the sense of Vainikko, Ruzhansky and Turunen) by \begin{equation}\label{aa} a(x,D_x)u(x):=\textnormal{Op}(a)u=\sum_{\xi\in\mathbb{Z}^n}e^{i 2\pi\langle x,\xi\rangle}a(x,\xi)\hat{u}(\xi),\,\, u\in C^{\infty}(\mathbb{T}^n). \end{equation}
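To illustrate how the differences $\Delta^{\alpha}_{\xi}$ in \eqref{cs} play the role of the derivatives $\partial^{\alpha}_{\xi}$ in \eqref{css}, let us record the following elementary estimate, included only as an illustration and not needed in the sequel: by the mean value theorem and Peetre's inequality, $$ |\Delta_{\xi_j}\langle \xi\rangle^{m}|=\Big|\int_0^1 \partial_{\xi_j}\langle \xi+te_j\rangle^{m}\,dt\Big|\leq C_m\langle \xi\rangle^{m-1},\qquad \xi\in\mathbb{Z}^n, $$ and, iterating this integral representation, $|\Delta^{\alpha}_{\xi}\langle \xi\rangle^{m}|\leq C_{m,\alpha}\langle \xi\rangle^{m-|\alpha|}$ for every $\alpha\in\mathbb{N}_0^n.$ In particular, the symbol $a(x,\xi)=\langle \xi\rangle^{m}$ belongs to $S^{m}_{1,0}(\mathbb{T}^n\times \mathbb{Z}^n).$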
The relation between toroidal and Euclidean symbols can be explained as follows. There exists a process to interpolate the second argument of symbols on $\mathbb{T}^n\times \mathbb{Z}^n$ in a smooth way to get a symbol defined on $\mathbb{T}^n\times \mathbb{R}^n.$ \begin{proposition}\label{eq} Let $0\leq \delta \leq 1,$ $0< \rho\leq 1.$ The symbol $a\in S^m_{\rho,\delta}(\mathbb{T}^n\times \mathbb{Z}^n)$ if and only if there exists a Euclidean symbol $a'\in S^m_{\rho,\delta}(\mathbb{T}^n\times \mathbb{R}^n)$ such that $a=a'|_{\mathbb{T}^n\times \mathbb{Z}^n}.$ \end{proposition} \begin{proof} The proof can be found in \cite{Mc, Ruz}. \end{proof} It is a non-trivial fact, however, that the definitions of pseudo-differential operator on the torus given by Agranovich (equation \eqref{aa}) and by H\"ormander (equation \eqref{hh}) are equivalent. McLean (see \cite{Mc}) proved this result for all the H\"ormander classes $S^m_{\rho,\delta}(\mathbb{T}^n\times \mathbb{Z}^n).$ A different proof of this fact can be found in \cite{Ruz}, Corollary 4.6.13, by using a periodisation technique. \begin{proposition}[Equality of classes]\label{eqc} For every $0\leq \delta \leq 1$ and $0<\rho\leq 1,$ we have $\Psi^{m}_{\rho,\delta}(\mathbb{T}^n\times \mathbb{Z}^n)=\Psi^{m}_{\rho,\delta}(\mathbb{T}^n\times \mathbb{R}^n).$ \end{proposition} For the multilinear aspects of the theory of periodic operators we refer the reader to Cardona and Kumar \cite{CardonaKumar}. \section{Pseudo-differential operators in H\"older spaces }\label{proofs} \subsection{$S(m,g)$-classes in H\"older spaces} Now we prove our H\"older estimates. Our main tool will be the characterisation of H\"older spaces in terms of dyadic decompositions. The symbols $\sigma(x,\xi)$ considered in this section belong to Weyl-H\"ormander classes $S(m^{-n\varepsilon},g),$ which are associated to the metric $$g_{(x,\xi)}(dx,d\xi)=m(x,\xi)^{-2}(\langle \xi \rangle^2 dx^2+d\xi^2),$$ $x,\xi\in\mathbb{R}^n, $ and the weight $ m(x,\xi)=(a(x,\xi)+\langle \xi\rangle)^{\frac{1}{2}}. $ The symbol $a,$ which is assumed positive and classical, is the principal part of the second order differential operator $L$ defined by \begin{equation}\label{ST'} {Lf}=-\sum_{ij}a_{ij}(x)\frac{\partial^2}{\partial{x_i\partial x_j}}f+[\textnormal{partial derivatives of lower order }]f,\,\,\, f\in C_{0}(\mathbb{R}^n). \end{equation} We also assume, for every $x\in\mathbb{R}^n,$ that the matrix $A(x)=(a_{ij}(x))$ is a positive semi-definite matrix of rank $r(x):=\textnormal{rank}(A(x))\geq r_0\geq 1.$ The coefficients $a_{ij}$ are smooth functions which are uniformly bounded on $\mathbb{R}^n$, together with all their derivatives. Important examples arise from operators of the form \begin{equation}\label{Ex1'} L=-\sum_{j}X_{j}^{*}X_{j}+X_0,\,\,\,L=-\sum_{j}X_{j}^{*}X_{j}, \end{equation} where $\{X_i\}$ is a system of vector fields on $\mathbb{R}^n$ satisfying the H\"ormander condition of order 2 (this means that the vector fields $X_i$ and their commutators span $\mathbb{R}^n,$ or equivalently that $\mathbb{R}^{n}=\textnormal{Lie}\{X_i\}_{i}$). 
\begin{remark}Classical examples of operators as in \eqref{Ex1} are the following: \begin{itemize} \item the Laplacian $-\Delta_{x}:=-\sum_{i=1}^{n}\partial_{x_i}^2$ on $\mathbb{R}^{n}.$ \item The heat operator $-\Delta_x+\partial_{t}$ on $\mathbb{R}^{n+1}.$ \item The Mumford operator on $\mathbb{R}^4:$ \begin{equation}\label{mumford} M=-X_{1}^2-X_{0}=-\partial_\theta^{2}+\cos(\theta)\partial_x-\sin(\theta)\partial_y+\partial_t,\,\,X_1=\partial_\theta, \end{equation} where we have denoted by $(\theta,x,y,t)$ the coordinates of $\mathbb{R}^4.$ In this case, $X_0=-M-X_1^{2},$ $X_2=[X_1,X_0],$ $X_3=[X_1,X_2]$ and $\textnormal{span}\{ X_0,X_1,X_2,X_3\}=\mathbb{R}^4,$ which shows that $M$ satisfies the H\"ormander condition of order 2. \item The Kolmogorov operator on $\mathbb{R}^{3}:$ \begin{equation}\label{Kolmogorov} K=-X_1^{2}-X_0=-\partial_{x}^{2}-x\partial_{y}+\partial_t,\,\,X_{1}=\partial_x. \end{equation} An analysis similar to the one for the Mumford operator shows that $K$ satisfies the H\"ormander condition of order 2. \item The operator \begin{equation} L=\frac{\partial^2}{\partial t^2} +\frac{\partial^2}{\partial x^2}+e^{-\frac{1}{|x|^\delta}}\frac{\partial^2}{\partial y^2} \end{equation} on $\mathbb{R}^3.$ \end{itemize} \end{remark} Our starting point is the following lemma, which is a slight variation of one proved by J. Delgado (see Lemma 3.2 of \cite{Profe2}); its proof is a straightforward adaptation of Delgado's argument. We will use the following parameters \begin{equation}\label{q0} Q_0:=r_0+2(n-r_0),\,\,\varepsilon_0:=\frac{Q_0}{2n}-\frac{1}{2}. \end{equation} \begin{lemma} \label{lemma} Let us assume $\varepsilon_0\leq \varepsilon<\frac{Q_0}{2n}$ and let $\sigma\in S(m^{-n\varepsilon},g)$ be supported in $R\leq a(x,\xi)+\langle \xi\rangle\leq \omega' R$ for $\omega',R>1.$ Then \begin{equation} \Vert \sigma(x,D_x)f\Vert_{L^\infty(\mathbb{R}^n)}\leq C \Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) }\Vert f \Vert_{L^\infty(\mathbb{R}^n)}, \end{equation} holds true for every $f\in L^{\infty}(\mathbb{R}^n).$ Moreover, the constant $C>0$ does not depend on the parameters $R$ and $\omega'.$ \end{lemma} Now we will prove our main theorem. \begin{theorem} \label{MainT} Let us assume $\varepsilon_0\leq \varepsilon<\frac{Q_0}{2n}$ and let $\sigma\in S(m^{-n\varepsilon},g).$ Then the pseudo-differential operator $\sigma(x,D_x)$ extends to a bounded operator on $\Lambda^s(\mathbb{R}^n)$ for all $0<s<1.$ Moreover, there exists $\ell$ such that \begin{equation} \Vert \sigma(x,D_x)f\Vert_{\Lambda^s(\mathbb{R}^n)}\leq C \Vert \sigma\Vert_{\ell, S(m^{-n\varepsilon},g) }\Vert f \Vert_{\Lambda^s(\mathbb{R}^n)}. \end{equation} \end{theorem} \begin{remark} If we suppose that $L$ is an elliptic operator, then we deduce that $Q_0=n$ and $\varepsilon_0=0.$ Consequently, we recover the classical H\"older estimate due to R. Beals for $S(1,g)$-classes (see Theorem 4.1 of Beals \cite{Be} and Beals \cite{Be2}) and also the H\"older result for the class $S^0(\mathbb{R}^n)$ mentioned in the introduction. \end{remark} \begin{proof}[Proof of Theorem \ref{MainT}] Let us fix $f\in \Lambda^s(\mathbb{R}^n),$ $0<s<1.$ We will show that some positive integer $\ell$ satisfies \begin{equation} \Vert \sigma(x,D_x)f\Vert_{\Lambda^s(\mathbb{R}^n)}\leq C \Vert \sigma\Vert_{\ell, S(m^{-n\varepsilon},g) }\Vert f \Vert_{\Lambda^s(\mathbb{R}^n)}, \end{equation} for some positive constant $C$ independent of $f$. We split the proof into two parts. 
In the first one, we prove the statement of the theorem for Fourier multipliers, i.e., pseudo-differential operators whose symbols depend only on the Fourier variable $\xi.$ In the second step, we extend the result to general pseudo-differential operators.\\ \noindent{{\textit{Step 1.}}} Let us consider the Bessel potential of first order $\mathcal{R}:=(I-\frac{1}{4\pi^2}\Delta_x)^{\frac{1}{2}}$, where $\Delta_x$ is the Laplacian on $\mathbb{R}^n,$ and let us fix a dyadic decomposition of its spectrum: we choose a function $\psi_0\in C^{\infty}_{0}(\mathbb{R})$ with $\psi_0(\lambda)=1$ if $|\lambda|\leq 1$ and $\psi_0(\lambda)=0$ for $|\lambda|\geq 2.$ For every $j\geq 1,$ let us define $\psi_{j}(\lambda)=\psi_{0}(2^{-j}\lambda)-\psi_{0}(2^{-j+1}\lambda).$ Then we have \begin{eqnarray}\label{deco1} \sum_{l\in\mathbb{N}_{0}}\psi_{l}(\lambda)=1,\,\,\, \text{for every}\,\,\, \lambda>0. \end{eqnarray} We will use the following characterization of H\"older spaces in terms of dyadic decompositions (see Stein \cite{Eli}): for $0<s<1,$ a function $f$ belongs to $\Lambda^s(\mathbb{R}^n)$ if and only if the right-hand side below is finite, and in this case \begin{equation} \Vert f \Vert_{\Lambda^s(\mathbb{R}^n)}\asymp \Vert f\Vert_{B^s_{\infty,\infty}(\mathbb{R}^n)}:=\sup_{l\geq 0}2^{ls}\Vert \psi_l(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)}, \end{equation} where $\psi_l(\mathcal{R})$ is defined by the functional calculus associated to the self-adjoint operator $\mathcal{R}.$ If $\sigma(D_x)=\sigma(x,D_x )$ has a symbol depending only on the Fourier variable, then \begin{equation} \Vert \sigma(D_x) f \Vert_{\Lambda^s(\mathbb{R}^n)}\asymp \Vert \sigma(D_x)f\Vert_{B^s_{\infty,\infty}(\mathbb{R}^n)}:=\sup_{l\geq 0}2^{ls}\Vert \psi_l(\mathcal{R})\sigma(D_x)f \Vert_{L^\infty(\mathbb{R}^n)}. \end{equation} Let us note that $\sigma(D_x)$ commutes with $\psi_l(\mathcal{R})$ for every $l,$ so that \begin{equation} \psi_l(\mathcal{R})\sigma(D_x)= \sigma(D_x)\psi_l(\mathcal{R})=: \sigma_{l}(D_x), \end{equation} where $\sigma_{l}(D_x)$ has a smooth symbol supported in $\{\xi:2^{l-1}\leq \langle \xi\rangle\leq 2^{l+1} \}.$ Now let us observe that, from the estimate $0\leq a(x,\xi)\lesssim \langle \xi \rangle^2,$ every $\xi$ in the support of $\sigma_l(\xi)$ satisfies $4^{l-1}\leq \langle \xi \rangle^2\leq 4^{l+1}$ and \begin{equation} 4^{l-1}\leq a(x,\xi)+\langle \xi \rangle\lesssim \langle \xi \rangle^2+\langle \xi \rangle\lesssim 4^{l+1}. \end{equation} This analysis shows that the support of $\sigma_{l}$ lies in the set \begin{equation} \{ (x,\xi) \in\mathbb{R}^n_x\times \mathbb{R}^n_\xi:4^{l-1}\leq a(x,\xi)+\langle \xi \rangle \leq \omega 4^{l+1}\}, \end{equation} for some $\omega>0$ which we can assume larger than one. 
So, by Lemma \ref{lemma} with $\omega'=4^2\omega$ and $R=4^{l-1},$ we deduce that $\sigma_{l}(D_x)$ is a bounded operator on $L^{\infty}(\mathbb{R}^n)$ with operator norm independent of $l.$ In fact, $\sigma_{l}\in S(m^{-n\varepsilon},g)$ for all $l,$ and consequently \begin{equation} \Vert \sigma_{l}(D_x)\Vert_{\mathscr{B}(L^{\infty})}\leq C \Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) }.\end{equation} So, we have \begin{align*} \Vert \psi_l(\mathcal{R})\sigma(D_x) & f \Vert_{L^\infty(\mathbb{R}^n)} \\ &=\Vert \sigma_l(D_x) \sum_{l'\in \mathbb{N}_{0}}\psi_{l'}(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)} \\ &=\Vert \sigma_l(D_x) \sum_{l'=l-1}^{l+1}\psi_{l'}(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)} \\ &\leq \sum_{l'=l-1}^{l+1} \Vert \sigma_{l}(D_x)\Vert_{\mathscr{B}(L^{\infty})} \Vert \psi_{l'}(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)} \\ & \lesssim \Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) } \sum_{l'=l-1}^{l+1} \Vert \psi_{l'}(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)}. \end{align*} As a consequence, we obtain \begin{align*}\Vert \sigma(D_x) f \Vert_{\Lambda^s(\mathbb{R}^n)} &\asymp \sup_{l\geq 0}2^{ls}\Vert \psi_l(\mathcal{R})\sigma(D_x)f \Vert_{L^\infty(\mathbb{R}^n)}\\ & \lesssim \Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) }\sup_{l\geq 0} 2^{ls} \Vert \psi_l(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)}. \end{align*} Indeed, \begin{align*} &\sup_{l\geq 0}2^{ls}\Vert \psi_l(\mathcal{R})\sigma(D_x)f \Vert_{L^\infty(\mathbb{R}^n)}\\ &\lesssim \Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) }\sup_{l\geq 0}2^{ls} \sum_{l'=l-1}^{l+1} \Vert \psi_{l'}(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)}\\ &= \Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) }\sup_{l\geq 0} \sum_{l'=l-1}^{l+1} 2^{(l-l')s} 2^{l's} \Vert \psi_{l'}(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)}\\ &\leq \Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) }\left( 2^{s} +1+2^{-s} \right)\sup_{l'\geq 0}2^{l's}\Vert \psi_{l'}(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)}. \end{align*} Finally, we finish the first step by observing that \begin{align*} &\sup_{l\geq 0}2^{ls}\Vert \psi_l(\mathcal{R})\sigma(D_x)f \Vert_{L^\infty(\mathbb{R}^n)}\\ & \lesssim C\Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) }\sup_{l'\geq 0}2^{l's}\Vert \psi_{l'}(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)} \asymp \Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) }\Vert f\Vert_{\Lambda^s(\mathbb{R}^n)}, \end{align*}where $C=\left( 2^{s} +1+2^{-s} \right).$\\ \noindent{{\textit{Step 2.}}} Now, we extend the H\"older boundedness from multipliers to pseudo-differential operators by adapting to the non-compact setting of $\mathbb{R}^n$ a technique developed by Ruzhansky and Turunen for operators on compact Lie groups (see Ruzhansky and Turunen \cite{Ruz} and Ruzhansky and Wirth \cite{Ruz3}).
So, let us define, for every $z\in\mathbb{R}^n,$ the multiplier \begin{center} $ \sigma_z(D_{x})f(x)=\int_{\mathbb{R}^n}e^{i2\pi \langle x, \eta\rangle}\sigma(z,\eta)\hat{f}(\eta)d\eta.$ \end{center} For every $x\in\mathbb{R}^n$ we have the equality \begin{center} $ \sigma_{x}(D_{x})f(x)=\sigma(x,D_x)f(x),$ \end{center} and we can estimate the H\"older norm of the function $\sigma(x,D_x)f$ as follows: \begin{align*} \Vert \sigma_{x}(D_x)f(x)\Vert_{\Lambda^s} &\asymp \sup_{l\geq 0}2^{ls} \textnormal{esssup}_{x\in\mathbb{R}^n}|\psi_l(\mathcal{R})\sigma_x(D_x)f(x) |\\ &\leq \sup_{l\geq 0}2^{ls} \textnormal{esssup}_{x\in\mathbb{R}^n} \sup_{z\in\mathbb{R}^n}|\psi_l(\mathcal{R})\sigma_z(D_x)f(x) | \\ &= \sup_{l\geq 0}2^{ls} \textnormal{esssup}_{x\in\mathbb{R}^n} \sup_{z\in\mathbb{R}^n}| \sigma_z(D_x) \psi_l(\mathcal{R})f(x) | \\ &\leq \sup_{l\geq 0}2^{ls} \textnormal{esssup}_{x\in\mathbb{R}^n} \sup_{z\in\mathbb{R}^n} \textnormal{esssup}_{\varkappa\in\mathbb{R}^n} |\sigma_z(D_\varkappa) \psi_l(\mathcal{R})f(\varkappa) | \\ &= \sup_{l\geq 0}2^{ls} \sup_{z\in\mathbb{R}^n} \Vert \sigma_z(D_\varkappa) \psi_l(\mathcal{R})f(\varkappa) \Vert_{L^\infty(\mathbb{R}^n_\varkappa)}. \end{align*} From the estimate for the operator norm of multipliers proved in the first step, we deduce \begin{align*}\sup_{z\in\mathbb{R}^n}\Vert \sigma_{z}(D_x) \psi_l(\mathcal{R}) f \Vert_{L^\infty(\mathbb{R}^n)}\lesssim \Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) }\Vert \psi_l(\mathcal{R})f\Vert_{L^\infty(\mathbb{R}^n)}. \end{align*} So, we have \begin{equation} \Vert \sigma_{x}(D_x)f(x)\Vert_{\Lambda^s} \lesssim \Vert \sigma\Vert_{[\frac{n}{2}]+1, S(m^{-n\varepsilon},g) }\Vert f\Vert_{\Lambda^s(\mathbb{R}^n)}. \end{equation} Thus, we finish the proof. \end{proof} Following the notation in Delgado \cite{Profe2}, we extend the previous result to the case $0\leq \beta\leq n\varepsilon_0.$ \begin{theorem} \label{MainT'''} Let us assume $0\leq \beta \leq n\varepsilon_0$ and let $\sigma\in S(m^{-\beta},g).$ Then the pseudo-differential operator $\sigma(x,D_x)$ extends to a bounded operator from $\Lambda^s(\mathbb{R}^n)$ into $\Lambda^{s-(n\varepsilon_0-\beta)}(\mathbb{R}^n)$ for all $0<s<1$ provided that $0<s-(n\varepsilon_0-\beta)<1.$ \end{theorem} \begin{proof} Let us consider the following factorization of $\sigma(x,D_x):$ \begin{eqnarray} \sigma(x,D_x)=m(x,D)^{\gamma}m(x,D)^{-\gamma}\sigma(x,D_x), \,\,\gamma:=n\varepsilon_0-\beta. \end{eqnarray} Taking into account that $m(x,D)^{-\gamma}$ has symbol in $S(m^{-\gamma},g),$ the symbol of the operator $m(x,D)^{-\gamma}\sigma(x,D_x)$ belongs to $S(m^{-n\varepsilon_0},g).$ Consequently, we deduce the boundedness of $m(x,D)^{-\gamma}\sigma(x,D_x)$ on $\Lambda^s(\mathbb{R}^n),$ and from the continuity of $m(x,D)^\gamma$ from $\Lambda^s(\mathbb{R}^n)$ into $\Lambda^{s-\gamma}(\mathbb{R}^n)$ we finish the proof of the theorem. \end{proof} Let us observe that, for all $-\infty<s<\infty,$ the Besov space $B^s_{\infty,\infty}(\mathbb{R}^n)$ is defined by the norm \begin{equation}\Vert f\Vert_{B^s_{\infty,\infty}(\mathbb{R}^n)}:=\sup_{l\geq 0}2^{ls}\Vert \psi_l(\mathcal{R})f \Vert_{L^\infty(\mathbb{R}^n)}. \end{equation} An inspection of the proofs above yields the following result.
\begin{corollary} Let us assume $\varepsilon_0\leq \varepsilon<\frac{Q_0}{2n}$ and let $\sigma\in S(m^{-n\varepsilon},g).$ Then the pseudo-differential operator $\sigma(x,D_x)$ extends to a bounded operator on $B^s_{\infty,\infty}(\mathbb{R}^n)$ for all $-\infty<s<\infty.$ Moreover, there exists $\ell$ such that \begin{equation} \Vert \sigma(x,D_x)f\Vert_{B^s_{\infty,\infty}(\mathbb{R}^n) }\leq C \Vert \sigma\Vert_{\ell, S(m^{-n\varepsilon},g) }\Vert f \Vert_{B^s_{\infty,\infty}(\mathbb{R}^n)}. \end{equation} Moreover, for $0\leq \beta \leq n\varepsilon_0$ and $\sigma\in S(m^{-\beta},g),$ the pseudo-differential operator $\sigma(x,D_x)$ extends to a bounded operator from $B^s_{\infty,\infty}(\mathbb{R}^n)$ into $B^{s-(n\varepsilon_0-\beta)}_{\infty,\infty}(\mathbb{R}^n)$ for all $-\infty<s<\infty.$ \end{corollary} \subsection{Ruzhansky-Turunen classes in H\"older spaces} In this section we prove our H\"older estimate for periodic (toroidal) pseudo-differential operators. Our starting point is the following lemma, which is a slight variation of one due to J. Delgado (see Lemma 3.6 of \cite{Profe2}); its proof is a repetition of Delgado's argument. \begin{lemma}\label{lemmatorus} Let $0\leq \varepsilon <1$ and $k:=[\frac{n}{2}]+1,$ and let $\sigma:\mathbb{T}^n\times \mathbb{Z}^n\rightarrow \mathbb{C}$ be a symbol such that $|\Delta_{\xi}^{\alpha}\sigma(x,\xi)|\leq C_{\alpha}\langle \xi \rangle^{-\frac{n}{2}\varepsilon-(1-\varepsilon)|\alpha|},$ for $|\alpha| \leq k.$ Let us assume that $\sigma$ is supported in $\{\xi:|\xi|\leq 1\}$ or in $\{\xi:R\leq |\xi|\leq 2R\}$ for some $R>0.$ Then $\sigma(x,D_x):L^{\infty}(\mathbb{T}^n)\rightarrow L^{\infty}(\mathbb{T}^n)$ extends to a bounded linear operator with operator norm independent of $R.$ Moreover,\begin{equation} \Vert \sigma(x,D_x)\Vert_{\mathscr{B}(L^{\infty})}\leq C \sup\{C_{\alpha}: {|\alpha|\leq k}\}.\end{equation} \end{lemma} \begin{theorem}\label{MainT*} Let $0\leq \varepsilon <1$ and $k:=[\frac{n}{2}]+1,$ and let $\sigma:\mathbb{T}^n\times \mathbb{Z}^n\rightarrow \mathbb{C}$ be a symbol such that $|\Delta_{\xi}^{\alpha}\sigma(x,\xi)|\leq C_{\alpha}\langle \xi \rangle^{-\frac{n}{2}\varepsilon-(1-\varepsilon)|\alpha|},$ for $|\alpha| \leq k.$ Then $\sigma(x,D_x):\Lambda^s(\mathbb{T}^n)\rightarrow\Lambda^s(\mathbb{T}^n)$ extends to a bounded linear operator for all $0<s<1.$ Moreover,\begin{equation} \Vert \sigma(x,D_x)\Vert_{\mathscr{B}(\Lambda^s)}\leq C \sup\{C_{\alpha}: {|\alpha|\leq k}\}.\end{equation} \end{theorem} \begin{proof}[Proof of Theorem \ref{MainT*}] Our proof consists of two steps. In the first one, we prove the statement of the theorem for periodic Fourier multipliers, i.e., toroidal pseudo-differential operators depending only on the Fourier variable $\xi.$ Later, in the second step, we extend the result to general periodic operators.\\ \noindent{{\textit{Step 1.}}} Let us consider the operator $\mathcal{R}:=(I-\frac{1}{4\pi^2}\mathcal{L}_{\mathbb{T}^n})^{\frac{1}{2}},$ where $\mathcal{L}_{\mathbb{T}^n}$ is the Laplacian on the torus $\mathbb{T}^n,$ and let us fix a dyadic decomposition of its spectrum: we choose a function $\psi_0\in C^{\infty}_{0}(\mathbb{R})$ with $\psi_0(\lambda)=1$ if $|\lambda|\leq 1$ and $\psi_0(\lambda)=0$ for $|\lambda|\geq 2.$ For every $j\geq 1,$ let us define $\psi_{j}(\lambda)=\psi_{0}(2^{-j}\lambda)-\psi_{0}(2^{-j+1}\lambda).$ Then we have \begin{eqnarray}\label{deco2} \sum_{l\in\mathbb{N}_{0}}\psi_{l}(\lambda)=1,\,\,\, \text{for every}\,\,\, \lambda>0.
\end{eqnarray} We will use the following characterization of H\"older spaces in terms of dyadic decompositions (see Stein \cite{Eli}): $f\in \Lambda^s(\mathbb{T}^n)$ if and only if \begin{equation} \Vert f \Vert_{\Lambda^s(\mathbb{T}^n)}\asymp \Vert f\Vert_{B^s_{\infty,\infty}(\mathbb{T}^n)}:=\sup_{l\geq 0}2^{ls}\Vert \psi_l(\mathcal{R})f \Vert_{L^\infty(\mathbb{T}^n)}, \end{equation} where $\psi_l(\mathcal{R})$ is defined by the functional calculus associated to the self-adjoint operator $\mathcal{R}.$ If $\sigma(D_x)=\sigma(x,D_x )$ has a symbol depending only on the Fourier variable, then \begin{equation} \Vert \sigma(D_x) f \Vert_{\Lambda^s(\mathbb{T}^n)}\asymp \Vert \sigma(D_x)f\Vert_{B^s_{\infty,\infty}(\mathbb{T}^n)}:=\sup_{l\geq 0}2^{ls}\Vert \psi_l(\mathcal{R})\sigma(D_x)f \Vert_{L^\infty(\mathbb{T}^n)}. \end{equation} Taking into account that the operator $\sigma(D_x)$ commutes with $\psi_l(\mathcal{R})$ for every $l,$ that \begin{equation} \psi_l(\mathcal{R})\sigma(D_x)= \sigma(D_x)\psi_l(\mathcal{R})= \sigma_{l}(D_x)\psi_l(\mathcal{R}), \end{equation} where $\sigma_l(D_x)$ is the pseudo-differential operator with symbol $$\sigma_{l}(\xi)=\sigma(\xi) \cdot 1_{\{\xi:2^{l-1}\leq \langle \xi\rangle\leq 2^{l+1} \}},$$ and that $\sigma_{l}(D_x)$ has a symbol supported in $\{\xi:2^{l-1}\leq \langle \xi\rangle\leq 2^{l+1} \},$ by Lemma \ref{lemmatorus} we deduce that $\sigma_{l}(D_x)$ is a bounded operator on $L^{\infty}(\mathbb{T}^n)$ with operator norm independent of $l.$ In fact, $\sigma_{l}$ satisfies the symbol inequalities $$|\Delta_{\xi}^{\alpha}\sigma_l(\xi)|\leq C_{\alpha}\langle \xi \rangle^{-\frac{n}{2}\varepsilon-(1-\varepsilon)|\alpha|},$$ for all $|\alpha| \leq k,$ and consequently \begin{equation} \Vert \sigma_{l}(D_x)\Vert_{\mathscr{B}(L^{\infty})}\leq C \sup\{C_{\alpha}: {|\alpha|\leq k}\}.\end{equation} So, we have \begin{align*} \Vert \psi_l(\mathcal{R})\sigma(D_x) & f \Vert_{L^\infty(\mathbb{T}^n)} \\ &=\Vert \sigma_l(D_x) \psi_l(\mathcal{R})f \Vert_{L^\infty(\mathbb{T}^n)}\leq \Vert \sigma_{l}(D_x)\Vert_{\mathscr{B}(L^{\infty})} \Vert \psi_l(\mathcal{R})f \Vert_{L^\infty(\mathbb{T}^n)} \\ & \lesssim \sup\{C_{\alpha}: {|\alpha|\leq k}\}\Vert \psi_l(\mathcal{R})f \Vert_{L^\infty(\mathbb{T}^n)}. \end{align*} As a consequence, we obtain \begin{align*}\Vert \sigma(D_x) f \Vert_{\Lambda^s(\mathbb{T}^n)} &\asymp \sup_{l\geq 0}2^{ls}\Vert \psi_l(\mathcal{R})\sigma(D_x)f \Vert_{L^\infty(\mathbb{T}^n)}\\ & \lesssim \sup\{C_{\alpha}: {|\alpha|\leq k}\}\sup_{l\geq 0} 2^{ls} \Vert \psi_l(\mathcal{R})f \Vert_{L^\infty(\mathbb{T}^n)}\\ &\asymp \sup\{C_{\alpha}: {|\alpha|\leq k}\}\Vert f\Vert_{\Lambda^s(\mathbb{T}^n)}. \end{align*} \noindent{{\textit{Step 2.}}} Now, we extend the H\"older boundedness from multipliers to pseudo-differential operators by using a technique developed by Ruzhansky, Turunen and Wirth (see Ruzhansky and Turunen \cite{Ruz} and Ruzhansky and Wirth \cite{Ruz3}).
So, let us define, for every $z\in\mathbb{T}^n,$ the multiplier \begin{center} $ \sigma_z(D_{x})f(x)=\sum_{\eta\in \mathbb{Z}^n}e^{i2\pi \langle x, \eta\rangle}\sigma(z,\eta)\widehat{f}(\eta).$ \end{center} For every $x\in\mathbb{T}^n$ we have the equality \begin{center} $ \sigma_{x}(D_{x})f(x)=\sigma(x,D_x)f(x),$ \end{center} and we can estimate the H\"older norm of the function $\sigma(x,D_x)f$ as follows: \begin{align*} \Vert \sigma_{x}(D_x)f(x)\Vert_{\Lambda^s} &\asymp \sup_{l\geq 0}2^{ls} \textnormal{esssup}_{x\in\mathbb{T}^n}|\psi_l(\mathcal{R})\sigma_x(D_x)f(x) |\\ &\leq \sup_{l\geq 0}2^{ls} \textnormal{esssup}_{x\in\mathbb{T}^n} \sup_{z\in\mathbb{T}^n}|\psi_l(\mathcal{R})\sigma_z(D_x)f(x) | \\ &= \sup_{l\geq 0}2^{ls} \textnormal{esssup}_{x\in\mathbb{T}^n} \sup_{z\in\mathbb{T}^n}| \sigma_z(D_x) \psi_l(\mathcal{R})f(x) | \\ &\leq \sup_{l\geq 0}2^{ls} \textnormal{esssup}_{x\in\mathbb{T}^n} \sup_{z\in\mathbb{T}^n} \textnormal{esssup}_{\varkappa\in\mathbb{T}^n} |\sigma_z(D_\varkappa) \psi_l(\mathcal{R})f(\varkappa) | \\ &= \sup_{l\geq 0}2^{ls} \sup_{z\in\mathbb{T}^n} \Vert \sigma_z(D_\varkappa) \psi_l(\mathcal{R})f(\varkappa) \Vert_{L^\infty(\mathbb{T}^n)}. \end{align*} From the estimate for the operator norm of multipliers proved in the first step, we deduce \begin{align*}\sup_{z\in\mathbb{T}^n}\Vert \sigma_{z}(D_x) \psi_l(\mathcal{R}) f \Vert_{L^\infty(\mathbb{T}^n)}\lesssim \sup\{C_{\alpha}: {|\alpha|\leq k}\}\Vert \psi_l(\mathcal{R})f\Vert_{L^\infty(\mathbb{T}^n)}. \end{align*} So, we have \begin{equation} \Vert \sigma_{x}(D_x)f(x)\Vert_{\Lambda^s} \lesssim \sup\{C_{\alpha}: {|\alpha|\leq k}\}\Vert f\Vert_{\Lambda^s(\mathbb{T}^n)}. \end{equation} Thus, we finish the proof. \end{proof} \begin{corollary}\label{MainT''''} Let $0<\rho\leq 1,$ $0\leq \delta\leq 1,$ $\ell\in\mathbb{N},$ $k:=[\frac{n}{2}]+1,$ and let $A:C^{\infty}(\mathbb{T}^n)\rightarrow C^{\infty}(\mathbb{T}^n)$ be a pseudo-differential operator with symbol $\sigma$ satisfying \begin{equation}\label{eqMaint'''''''} \vert \partial_x^\beta\Delta_{\xi}^{\alpha}\sigma(x,\xi)\vert\leq C_{\alpha}\langle \xi \rangle^{-m-\rho|\alpha|+\delta|\beta|} \end{equation} for all $|\alpha| \leq k,$ $|\beta|\leq \ell.$ Then $A:\Lambda^s(\mathbb{T}^n)\rightarrow \Lambda^s(\mathbb{T}^n)$ extends to a bounded linear operator for all $0<s<1$ provided that $m\geq \delta \ell+\frac{n}{2}(1-\rho).$ \end{corollary} \begin{proof} Let us observe that $ \langle \xi \rangle^{-m-\rho|\alpha|+\delta|\beta|}\leq \langle \xi \rangle^{-\frac{n}{2}(1-\rho)-\rho|\alpha|} $ for $|\beta|\leq \ell,$ when $m\geq \delta \ell+\frac{n}{2}(1-\rho).$ So, we finish the proof by applying Theorem \ref{MainT*} with $\varepsilon=1-\rho.$ \end{proof} \begin{remark} In the proofs of our H\"older estimates we have used the equivalence between the Besov space $B^{s}_{\infty,\infty}(\mathbb{T}^n)$ and the H\"older space $\Lambda^s(\mathbb{T}^n),$ for $0<s<1.$ Consequently, the hypotheses of Theorem \ref{MainT*} and Corollary \ref{MainT''''} also ensure the boundedness of $\sigma(x,D_x)$ on $B^{s}_{\infty,\infty}(\mathbb{T}^n)$ for all $-\infty<s<\infty.$ \end{remark} Besov boundedness results for operators in Ruzhansky-Turunen classes for $1<p<\infty$ and $0<q<\infty$ can be found in Cardona \cite{CardBesovCras}, Cardona and Ruzhansky \cite{CardonaRuzhansky2019} and references therein. \end{document}
\begin{document} \title[Entanglement quantification from incomplete measurements]{Entanglement quantification from incomplete measurements: Applications using photon-number-resolving weak homodyne detectors} \author{Graciana Puentes} \address{Clarendon Laboratory, Department of Physics, University of Oxford, OX1 3PU Oxford, UK\\ Email : [email protected]} \author{Animesh Datta} \address{Institute for Mathematical Sciences, Imperial College London, London SW7 2PG, UK} \address{QOLS, The Blackett Laboratory, Imperial College London, London SW7 2BW, UK} \author{Alvaro Feito} \address{Institute for Mathematical Sciences, Imperial College London, London SW7 2PG, UK} \address{QOLS, The Blackett Laboratory, Imperial College London, London SW7 2BW, UK} \author{Jens Eisert} \address{Institut f{\"u}r Physik, Universit{\"a}t Potsdam, D-14469 Potsdam, Germany} \address{Institute for Advanced Study Berlin, D-14193 Berlin, Germany} \address{Institute for Mathematical Sciences, Imperial College London, London SW7 2PG, UK} \author{Martin B.\ Plenio} \address{Institut f{\"u}r Theoretische Physik, Universit{\"a}t Ulm, D-89069 Ulm, Germany} \address{Institute for Mathematical Sciences, Imperial College London, London SW7 2PG, UK} \address{QOLS, The Blackett Laboratory, Imperial College London, London SW7 2BW, UK} \author{Ian A.\ Walmsley} \address{Clarendon Laboratory, Department of Physics, University of Oxford, OX1 3PU Oxford, UK} \begin{abstract} The certificate of success for a number of important quantum information processing protocols, such as entanglement distillation, is based on the difference in the entanglement content of the quantum states before and after the protocol. In such cases, effective bounds need to be placed on the entanglement of non-local states consistent with statistics obtained from local measurements. In this work, we study numerically the ability of a novel type of homodyne detector which combines phase sensitivity and photon-number resolution to set accurate bounds on the entanglement content of two-mode quadrature squeezed states without the need for full state tomography. We show that it is possible to set tight lower bounds on the entanglement of a family of two-mode degaussified states using only a few measurements. This presents a significant improvement over the resource requirements for the experimental demonstration of continuous-variable entanglement distillation, which traditionally relies on full quantum state tomography. \end{abstract} \maketitle \section{Introduction} \label{sec:Intro} Entanglement is a fundamental characteristic of quantum systems and a primary resource in quantum information science. Therefore methods to experimentally measure the entanglement of the quantum state of a system are important both for the interpretation of experiments involving quantum systems and for verifying the operation and capacity of a quantum processor or communications system. The most common approach to this problem is to perform quantum tomography of the unknown state of the system~\cite{dariano03}. Quantum state tomography amounts to measuring a tomographically complete set of observables, followed by suitably postprocessing the data.
For example, in systems specified by continuous variables (such as the quadrature amplitudes of an optical field, or the position and momentum of a mechanical oscillator), the basic theoretical principle is that a collection of probability distributions of the transformed continuous variables is the Radon transform of the state's Wigner function. Starting from experimentally measured marginals, therefore, an inverse Radon transform gives the Wigner function from which elements of the density matrix can be obtained. The notion was first experimentally realized in the domain of quantum optics~\cite{Raymer, Dunn}. Since then quantum state tomography has been improved to give controlled statistical errors using maximum-likelihood or least squares \cite{LSE}, made more efficient for low-rank states using ideas of compressed sensing \cite{CS}, and equipped with statistical error bars~\cite{as09}. This is of particular importance in the case of density matrices of non-classical states, which are typically characterized by a negative quasi-probability distribution, such as the Wigner function~\cite{Wigner}. Reconstruction of such non-classical states is indeed part and parcel of experimental demonstrations of quantum information protocols. Non-classical features may be difficult to reconstruct. In photonic applications, this is often due to low quantum detection efficiencies, leading to noisy measurements~\cite{lr09,Ourjoumtsev07}. Typically, overall detection efficiencies above $50 \%$ are required. However, direct detection of other non-classical signatures may be effected using different sorts of detectors. For example, weak-field homodyne detection coupled with photon counting provides a means to detect entanglement in Gaussian states~\cite{Grangier88, Kuzmich00}. In this work we present an extensive numerical study of a strategy that provides robust direct quantitative estimates for the entanglement content of a state, without the need for full quantum state tomography. In order to accomplish this task, we systematically investigate the performance of a weak-field homodyne detector with photon-number resolution as an experimentally feasible component for the construction of local measurement operators. These will be a set of positive operator valued measurements (POVMs). The POVM elements required for such a construction are characterized by a model of the homodyne detector, based on a previous characterization of the time-multiplexed photon-number-resolving detectors~\cite{Puentes}. This detector has also been characterized using the nascent field of detector tomography~\cite{detectomo}. The fundamental question we will answer here is the entanglement content of the least entangled state consistent with the available measurement data~\cite{ap06,eba07,grw07,p09}. Thus, we will be left with a \textit{lower} bound on the entanglement of the state in question. In particular, this procedure can be used for setting a lower bound on the Logarithmic Negativity \cite{p05}, the evaluation of which can be reduced to an efficiently solvable class of convex optimization problems called semidefinite programs~\cite{ap06,eba07,grw07,p09,bv05}. We apply our technique to two-mode photon-subtracted quadrature squeezed states. Setting bounds on such a family of non-Gaussian quantum states is of major significance for the implementation of a continuous-variable entanglement distillation protocol~\cite{Eisert,distillation}.
Although we will primarily be concerned with continuous-variable entanglement distillation~\cite{distillation} in this work, we must make it clear at the outset that the technique studied here can be applied to any task that aims to manipulate entanglement between spatially separated observers by local operations and classical communications (LOCC), and subsequently confirm the outcome, also by means of local operations and classical communications. This goes back to the resource nature of entanglement, and the ability to manipulate it by LOCC. Continuous-variable entanglement distillation is an important instance of such a situation. It should also be clear that we are not limited to entirely optical settings, and similar techniques should be helpful to eventually identify entanglement in opto-mechanical settings, say entanglement between a micromirror and an optical mode. In cases such as these, it is often possible to gather enough information by a limited number of measurements to assess the correlations in the state. The natural question then is whether the correlations revealed by these local measurements (aided possibly by classical communication) represent classical correlations, or entanglement~\cite{ap06,p09}. This circumvents the necessity of the resource-intensive process of quantum state tomography. The method is also more robust with respect to measurement errors than full state tomography. Importantly, no a-priori assumptions concerning the purity or the specific form of the states enter the certifiable bound on the degree of entanglement. The paper is structured as follows. In Sec.~(\ref{sec:SDPs}), we formalize as a semidefinite program (SDP) the problem of putting lower bounds on the entanglement content of states using localized measurement statistics. Sec.~(\ref{sec:PNR}) describes the specific time-multiplexed homodyne detector that we use to build these localized measurements. In Sec.~(\ref{sec:CVentdist}), we present the numerical results on the bounds set on the entanglement content of a two-mode photon-subtracted quadrature squeezed state, for different values of relevant experimental parameters. We also present an extensive numerical exploration of the performance of the detector under different experimental conditions. In particular, we analyze the required phase accuracy and phase stability in our homodyne scheme. We also discuss the tolerance of the convex optimization algorithm to experimental noise. Finally, in Sec.~(\ref{sec:conc}) we report the conclusions. As a matter of notation, all logarithms in this paper are taken to base $2$. \section{Lower bounds using convex optimization} \label{sec:SDPs} As stated in the introduction, we are seeking the amount of entanglement in the least entangled state compatible with a set of measurement results. Mathematically, this can be presented as \begin{equation} \label{eq:Emin} E_{\min} = \min_{\rho}\{E(\rho): \mathrm{Tr}(\rho M_i)=m_i\}, \end{equation} where $E$ is the measure of entanglement, and $M_i$ are the measurements made with measurement data $m_i.$ Additional constraints that $\rho$ is a density matrix, i.e., positive, and $\mathrm{Tr}(\rho)=1$, are also imposed. The latter is easily done by setting $M_0=\mathbb{I}$ and $m_0=1.$ Depending on the measure of entanglement, and the measurements chosen, the minimization in Eq.\ (\ref{eq:Emin}) can even be performed analytically, but generically that is not the case.
Here, we briefly present a technique following the presentation in Refs.\ \cite{ap06,eba07} that allows the above problem to be cast as a semidefinite program when the entanglement measure is the Logarithmic Negativity~\cite{p05}. The Logarithmic Negativity is defined as the logarithm of the 1-norm of the partially transposed density matrix, $\|\rho^{T_1}\|_1.$ The 1-norm can be expressed as~\cite{b97} \begin{equation} \|\rho^{T_1}\|_1=\max_{\|H\|_{\infty}=1}\mathrm{Tr}(H\rho^{T_1}) = \max_{\|H\|_{\infty}=1}\mathrm{Tr}(H^{T_1}\rho), \end{equation} with the maximization being over all hermitian operators $H$, where $\|\cdot\|_\infty$ denotes the standard matrix operator norm, namely the largest singular value of the matrix. Using the monotonicity of the logarithm, the minimization in Eq.\ (\ref{eq:Emin}) can be rewritten as \begin{equation} \label{eq:minmax} \mathcal{N}_{\min} = \log\min_{\rho}\left\{ \max_{H}\{\mathrm{Tr}(H^{T_1}\rho)\big| \|H\|_{\infty}=1\} : \mathrm{Tr}(\rho M_i)=m_i\right\}. \end{equation} The minimax equality allows us to interchange the maximization and the minimization, leading to \begin{equation} \label{eq:maxmin} \mathcal{N}_{\min} = \log\max_{H}\left\{\min_{\rho} \{\mathrm{Tr}(H^{T_1}\rho): \mathrm{Tr}(\rho M_i)=m_i\}: \|H\|_{\infty}=1 \right\}. \end{equation} For any real numbers $\{\nu_i\}$ for which \begin{equation} H^{T_1}\geq\sum_i\nu_iM_i, \end{equation} the lower bound \begin{equation} \mathrm{Tr}(H^{T_1}\rho)\geq \sum_i\nu_i \mathrm{Tr}(M_i\rho) = \sum_i\nu_i m_i \end{equation} clearly holds true for states $\rho$. Thus we get \begin{equation} \label{eq:maxmax} \mathcal{N}_{\min} \geq \log\max_{H}\left\{\max_{\nu_i} \big\{\sum_i \nu_im_i: H^{T_1}\geq\sum_i\nu_iM_i \big\} : \|H\|_{\infty}=1 \right\}. \end{equation} Note that the state $\rho$ drops completely out of contention now. Since the inner minimization in Eq.\ (\ref{eq:maxmin}) is a semidefinite program, strong duality in the strictly feasible case ensures equality in Eq.\ (\ref{eq:maxmax}). Thus, having fixed the operators that we choose to measure $M_i,$ any choice of $H$ and $\nu_i$ such that $H^{T_1}\geq\sum_i\nu_iM_i$ and $\|H\|_{\infty}=1$ provides us with a lower bound on the Logarithmic Negativity of states which provide expectation values of $m_i.$ Finally, we can rewrite Eq.\ (\ref{eq:maxmax}) as \begin{eqnarray} \label{eq:sdp} &&\mathrm{maximize}\;\;\;\log\Big(\sum_i \nu_im_i\Big), \\ &&\mathrm{subject\;to}\;\;H^{T_1}\geq\sum_i\nu_iM_i,\nonumber\\ && \;\;\;\;\;\; \mathrm{and}\;\;\; -\mathbb{I} \leq H \leq \mathbb{I},\nonumber \end{eqnarray} which can be solved quite easily using standard SDP solvers, like SeDuMi~\cite{sedumi}, once we have decided what our measurements $M_i$ are. Since these are to be local, the typical form of the measurement, in the case of bipartite states, is \begin{equation} \label{eq:localmeas} M_i = \Pi^{1}_j\otimes\Pi^{2}_k. \end{equation} The problem is thus reduced to the construction of the operators $\Pi^{1,2}_j,$ which is what we move on to in the next section. In passing, we mention that the choice of these measurement operators can also be cast as an SDP, although it is more challenging to incorporate the locality constraint into its framework. This idea gives useful and practically tight bounds to the entanglement content, without having to assume any a-priori knowledge about the state, or properties of it such as its purity.
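For concreteness, we note that the optimization can also be set up directly in its primal form, Eq.\ (\ref{eq:Emin}). The following is only an illustrative minimal sketch (the numerical results in this paper were obtained with SeDuMi~\cite{sedumi}): it uses the Python modelling package cvxpy, and the inputs \texttt{povms} (the operators $M_i$ as matrices) and \texttt{data} (the values $m_i$) are hypothetical placeholders.
\begin{verbatim}
import numpy as np
import cvxpy as cp

def partial_transpose(X, d1, d2):
    # partial transpose on the first (d1-dimensional) subsystem:
    # block (i, j) of the output is block (j, i) of X
    return cp.bmat([[X[j*d2:(j+1)*d2, i*d2:(i+1)*d2] for j in range(d1)]
                    for i in range(d1)])

def negativity_lower_bound(povms, data, d1, d2):
    """Least logarithmic negativity compatible with Tr(rho M_i) = m_i."""
    d = d1 * d2
    rho = cp.Variable((d, d), hermitian=True)
    # ||rho^{T_1}||_1 = min Tr(P + Q) with rho^{T_1} = P - Q, P >= 0, Q >= 0
    P = cp.Variable((d, d), hermitian=True)
    Q = cp.Variable((d, d), hermitian=True)
    constraints = [rho >> 0, cp.real(cp.trace(rho)) == 1,
                   P >> 0, Q >> 0,
                   partial_transpose(rho, d1, d2) == P - Q]
    constraints += [cp.real(cp.trace(rho @ M)) == m for M, m in zip(povms, data)]
    problem = cp.Problem(cp.Minimize(cp.real(cp.trace(P + Q))), constraints)
    problem.solve()
    return np.log2(problem.value)  # logarithms are taken to base 2 throughout
\end{verbatim}
The trace norm is rewritten here as $\min\{\mathrm{Tr}(P+Q):\rho^{T_1}=P-Q,\ P,Q\geq 0\}$ so that only standard semidefinite constraints appear, which is one common way of exposing the SDP structure of the problem.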
If the set of expectation values $\{ \mathrm{Tr}(M_i\rho) \}$ is tomographically complete, the bound obviously gives the exact value, but in practice, a much smaller number of measurements is sufficient to arrive at good bounds. Data sets of expectation values can also be composed: if two sets of expectation values are combined, the resulting bound can only become better, to the extent that two sets that individually give rise only to trivial bounds can together provide tight bounds. The approach presented here is perfectly suitable for any finite-dimensional system, and also for continuous-variable systems, as long as the observables $M_i$ are bounded operators. Photon counting with a phase reference gives rise to such operators, as we will see. Note also that similar ideas, formulating lower bounds on entanglement measures constrained by expectation values of observables, can be developed for other measures of entanglement \cite{eba07,grw07}. This is in line with the idea from system identification of directly estimating relevant quantities, instead of taking the detour of reconstructing the quantum states first. \section{Photon-number-resolved weak homodyne detection} \label{sec:PNR} We now consider the application of entanglement quantification to the detection of entangled photonic states. In this application we propose to make use of photon-number-resolving detectors. These have several useful features that make them well-suited to the measurement of non-classical signatures of light beams. First, weak-field homodyne detection provides a way to demonstrate the entangled character of EPR-like two-mode squeezed states~\cite{Grangier88, Kuzmich00, Banaszek9899}, in contrast to strong-field homodyne detection~\cite{Ou92}. Second, because the amplitude of the local oscillator is comparable to that of the signal, the phase sensitivity of the photon counting distribution is much smaller than that of a regular homodyne detector. This greatly reduces the problem of synchronizing local reference frames, which generally becomes more difficult with increasing distance. Within the framework of the quantum theory of measurement, the action of a detector is completely specified by its positive operator valued measurement (POVM) set \cite{Holevo}. A POVM element is a positive semidefinite operator $ \Pi_{\beta, \gamma}\geq 0$, which represents the outcome $\beta$ of a given detector, for a setting $\gamma$ corresponding to a particular value for a tunable parameter in the detector. In the case of a homodyne detector, $\gamma$ would correspond to the phase or the amplitude of the local oscillator. The complete set should satisfy $\sum_{\beta} \Pi_{\beta, \gamma}=\mathbb{I}$. The probability $p_{\beta, \gamma}$ of obtaining outcome $\beta$ for setting $\gamma$ can be related to the state of the system $ \rho$ by $p_{\beta, \gamma}=\mathrm{Tr}( \rho \Pi_{\beta, \gamma})$. Our detection scheme consists of a weak local oscillator (LO) mixed with the signal $ \rho$ at a variable reflectivity ($R$) beam-splitter (BS). The outcome of such an interference is collected by time-multiplexed photon-number-resolving (PNR) detectors~\cite{Achilles}. The time-multiplexed detectors (TMD) split the incoming pulses into $8$ distinct modes, which are eventually detected by binary avalanche photo-diodes (APDs) which can register either $0$ or $1$ click.
Thus there are $9$ possible outcomes for a given TMD, which are labelled by the number of clicks $\beta= 0,\dots,8$. The settings of the detector $\gamma$, in turn, are determined by a number of experimental parameters, such as LO amplitude $|\alpha|$ and phase $\theta$, BS reflectivity $R$, and detector efficiency $\eta$. By tuning the detector settings it is possible to prepare POVM elements able to project onto a large variety of radiation field states, ranging from Fock states to quadrature squeezed states \cite{Puentes}. \subsection{Detector model} \begin{figure}[h!] \begin{center} \includegraphics[angle=0,width=10truecm]{Graphic2.pdf} \caption{Homodyne detection scheme for (a) balanced and (b) unbalanced configuration. $D_{c,d}$ are PNR detectors of the time-multiplexed type.} \label{fig:1} \end{center} \end{figure} In Fig.~(\ref{fig:1} (a)) we show a schematic of our detection system for the balanced configuration. The BS input modes, labeled $a$ and $b$, correspond to the LO and the signal ($ \rho $), respectively. The output modes, labeled by $c$ and $d$, are detected by PNR detectors $D_{c}$ and $D_{d}$. The joint detection events, denoted $\{\beta=(n_{c},n_{d})\}$, are recorded for different LO settings $\gamma=(|\alpha|,\theta)$. The LO is prepared in a coherent state with state vector $|\alpha \rangle $ of complex amplitude $\alpha=|\alpha|e^{i\theta}$ and provides the phase reference needed to access off-diagonal elements in $ \rho$, as the PNR detectors alone have no phase sensitivity \cite{Achilles}. For ideal PNR detectors, the probability of obtaining measurement outcome $\beta$ for LO setting $\gamma $ is related to $\rho$ by \cite{Pregnell} \begin{equation} \label{eq:3} p_{\beta, \gamma }=\mathrm{Tr}_{c,d}[{U}\sigma{U}^{\dagger } |n_c, n_d\rangle \langle n_c, n_d|], \end{equation} \noindent where $U=e^{i\chi(b^{\dagger}a+a^{\dagger}b)}$ is the unitary operator representing the BS, $R=\cos^2(\chi)$ is the BS reflectivity, $ \sigma =|\alpha \rangle \langle\alpha |_a \otimes \rho_b$ the two-mode input state and $|n_c, n_d \rangle=|n_c\rangle_c |n_d\rangle _d$ the photon number state vectors of mode $c$ ($d$) to be detected at PNR detectors $D_{c}$ ($D_{d}$). In order to account for the imperfections of the time-multiplexed PNR detectors we use a well-tested model of the TMDs \cite{Achilles}. Within this model, the TMD operation can be described as a map from the incoming photon-number distribution $\vec{r}$ (i.e., the vector of diagonal components of the density matrix) to the measured click statistics $\vec{k}$ by $\vec{k} =C \cdot L\cdot \vec{r} $. Here $L$ and $C$ are matrices accounting for loss and the intrinsic detector structure \cite{Achilles}, respectively. To calculate the POVM elements implemented by our PNR homodyne detector, the POVMs for TMD detectors $D_c$ and $D_d$ are determined from the $C$ and $L$ matrices (characterized by independent methods \cite{detectomo}). The TMD POVMs are then substituted into Eq.\ (\ref{eq:3}), in place of the photon number projectors $|n_c, n_d\rangle \langle n_c, n_d|$, to obtain the final expression for the imperfect POVM elements $ \Pi _{\beta,\gamma }$. We note that our TMDs can resolve up to $8$ photons, setting the number of possible outcomes to $81$.
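To make the map $\vec{k}=C\cdot L\cdot \vec{r}$ concrete, the following short sketch constructs the two matrices for an idealized TMD. It assumes a uniform splitting of the pulse over the $N=8$ output bins and a single overall efficiency $\eta$; the matrices actually used in this work come from an independent characterization of the detectors \cite{Achilles,detectomo}, so the sketch should be read only as an illustration of the structure of the model.
\begin{verbatim}
import numpy as np
from math import comb

def loss_matrix(n_max, eta):
    # L[m, n]: probability that m out of n incident photons survive a loss
    # channel of efficiency eta (binomial loss model)
    L = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        for m in range(n + 1):
            L[m, n] = comb(n, m) * eta**m * (1 - eta)**(n - m)
    return L

def click_matrix(n_max, n_bins=8):
    # C[k, n]: probability of k clicks when n photons are spread uniformly
    # over n_bins binary (click/no-click) APD bins, by inclusion-exclusion
    C = np.zeros((n_bins + 1, n_max + 1))
    for n in range(n_max + 1):
        for k in range(min(n, n_bins) + 1):
            C[k, n] = comb(n_bins, k) * sum((-1)**j * comb(k, j) * (k - j)**n
                                            for j in range(k + 1)) / n_bins**n
    return C

# click statistics k = C . L . r for an incoming photon-number distribution r,
# e.g. a three-photon Fock state detected with efficiency eta = 0.10
n_max, eta = 8, 0.10
r = np.zeros(n_max + 1)
r[3] = 1.0
k = click_matrix(n_max) @ loss_matrix(n_max, eta) @ r
\end{verbatim}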
\subsection{Unbalanced detection scheme} Our aim is to use such homodyne PNR detectors to provide lower bounds on the entanglement of bipartite quantum states, in which case two such devices should be employed. To this end, the joint POVM statistics of the four modes involved in the detection need to be measured, increasing the total number of POVM elements to $81^2=6561.$ In order to simplify the experimental arrangement, we use the detector in an unbalanced configuration, so that we only detect one of the outgoing modes of each homodyne BS. In this way only two modes need to be jointly detected and the total number of POVM elements is reduced to $9^2=81$. This unbalanced scheme can be modeled by setting the efficiency $\eta$ of one of the PNR detectors to zero (see Fig.~(\ref{fig:1} (b))). The only disadvantage of this unbalanced scheme is that the overall efficiency is in principle reduced by $50\%$, but this limitation can be overcome by increasing the BS reflectivity $R$. Additionally, as we will show in the next section, our partial detection approach alleviates the strong efficiency requirements of full tomography, allowing for the additional losses of the unbalanced scheme. \begin{figure}[h!] \begin{center} \includegraphics[angle=0,width=9truecm]{Graphic1.pdf} \caption{Wigner representation of selected POVM elements $ \Pi_{\beta, \gamma}$ for the unbalanced scheme described in Fig.~(\ref{fig:1} (b)). Different columns correspond to LO phase settings $\theta=0$ and $\theta=\pi/2$. The rows correspond to three different detector outcomes $\beta=1,2,3$. The amplitude of the LO, BS reflectivity and detector efficiency are $|\alpha|=1$, $R=50\%$ and $\eta=10\%$, respectively.} \label{fig:2} \end{center} \end{figure} In Fig.~(\ref{fig:2}), we show numerically constructed Wigner functions corresponding to $6$ different POVM elements $ \Pi_{\beta, \gamma}$ characterizing the unbalanced scheme. The axes $(x,p)$ label the phase space quadratures. The different columns correspond to different LO phases $\theta =0$ and $\theta= \pi/2$. The rows correspond to three consecutive outcomes $\beta=1,2,3$, labelling the corresponding number of detector clicks. For these simulations we fixed the amplitude of the LO to $|\alpha|=1$, the BS reflectivity to $R=50\%$ and the detector efficiency to $\eta=10\%$, which is a realistic value for a single-mode TMD. The figure shows that $ \Pi_{\beta, \gamma}$ are not rotationally symmetric, as expected for a phase-sensitive detector. The oscillations in the Wigner functions are due to the low efficiency of the detectors, which mixes different photon-number states, whose phase-space representations are given by consecutive rings of increasing radii. Also, as is expected, a change in the LO phase setting by $\pi/2$ corresponds to an overall phase-space rotation in the Wigner function. In the next section we show that by using $8$ POVM elements of the type shown in Fig.~(\ref{fig:2}) for each subsystem, we can construct the measurements $M_i$ mentioned in Sec.~(\ref{sec:SDPs}), which can be employed to bound the entanglement content of two-mode degaussified states. \section{Application to continuous-variable entanglement distillation} \label{sec:CVentdist} We will now apply the above methods to a setting that plays a central role in continuous-variable entanglement distillation.
Entanglement distillation aims at producing more highly entangled states out of a situation where entanglement is present only in a dilute and noisy form, presumably generated by some lossy quantum channel. It provides the key step in quantum repeater ideas allowing for the distribution of long-range entanglement in the presence of noise. Crudely speaking, one may distinguish between actual distillation schemes that involve more than one specimen of an entangled state at each step of the protocol, and ``Procrustean'' or local filtering approaches that take a single copy of a state and under appropriate local filtering give rise -- if successful -- to a more highly entangled state. In the setting of strict Gaussian operations, continuous-variable entanglement distillation of neither kind is possible \cite{distillation}, but this obstacle can be overcome with the help of non-Gaussian ingredients such as photon addition or subtraction \cite{Eisert}. Such first Procrustean steps can also be used as starting points in full entanglement distillation protocols. Indeed, quite exciting first steps towards full continuous-variable entanglement distillation have recently been taken experimentally \cite{Ourjoumtsev07,Dong,Takahashi,Xiang,Bellini}. In the subsequent discussion we show the use of quantitative tests to certify success in such a scheme. Needless to say, we discuss specific input states, but it should be clear that the given entanglement bounds do not make use of that a-priori knowledge. We consider as our initial state vector $|\psi^{\mathrm{ini}}\rangle$ an ideal pure two-mode quadrature squeezed state of the form \begin{equation} \label{eq:9} |\psi^{\mathrm{ini}}\rangle = \sqrt{1-\lambda^2}\sum_{n=0}^\infty\lambda^{n}|n,n\rangle_{1,2}, \end{equation} \noindent where $\lambda$ represents the squeezing parameter, and the subindices $(1,2)$ label the two spatial modes. Such states are produced in the laboratory by the non-linear process of spontaneous parametric down conversion (SPDC) in non-linear crystals \cite{Mosley}. In order to simplify our numerical calculations, we will restrict the maximum photon-number per mode to $n_{\mathrm{max}}=3$. Thus the bipartite initial states $\rho_{1,2}^{\mathrm{ini}}$ are given by the $16\times16$ density matrices $\rho_{1,2}^{\mathrm{ini}}= |\psi^{\mathrm{ini}}\rangle\langle\psi^{\mathrm{ini}}|$. The Logarithmic Negativity for the bipartite state in Eq.\ (\ref{eq:9}) takes the simple form \begin{equation} \mathcal{N}(\rho_{1,2}^{\mathrm{ini}})=\log\|(\rho_{1,2}^{\mathrm{ini}} )^{T_1}\|_1=\log\left(\frac{1+\lambda}{1-\lambda}\right), \end{equation} as can readily be verified. \begin{figure}[h!] \begin{center} \includegraphics[angle=0,width=12truecm]{Graphic7.pdf} \caption{(Color online) Scheme describing the bipartite initial state $ \rho_{1,2}^{\mathrm{ini}}$ produced by spontaneous parametric down conversion (SPDC). Next, the photon-subtracted state $ \rho_{1,2} ^{\mathrm{subt}}$ is prepared by local subtraction of a single photon at a tunable subtraction beam splitter (SBS). The entanglement content in $\rho_{1,2} ^{\mathrm{subt}}$ is quantified by our partial detection approach. This type of scheme is one of the main components of an entanglement distillation protocol \cite{Eisert}.
\label{fig:7}} \end{center} \end{figure} \subsection{Two-mode photon-subtracted quadrature squeezed state} In order to distill continuous-variable entanglement from Gaussian states, such as the two-mode quadrature squeezed state described by Eq.\ (\ref{eq:9}), an operation that removes the Gaussian nature of the probability distribution is required \cite{Eisert}. Examples of such non-Gaussian operations are the conditional subtraction or addition of a photon \cite{Ourjoumtsev07,Bellini}. An ideal two-mode photon-subtracted quadrature squeezed state can be modeled by inserting a BS of transmission $T$ (the so-called subtraction beam-splitter, SBS) in one spatial mode. The reflected mode from the SBS is then detected by a standard (ideal) avalanche photodetector (APD) (this is schematized in Fig.~(\ref{fig:7})). The photon-subtracted state can thus, in the approximation of a very weakly reflecting beam-splitter, be written as \begin{equation} \label{eq:substate} |\psi^{\mathrm{subt}}\rangle=C\sum_{n=1}^{n_{\max}}(\lambda T)^{n}\sqrt{n}|n-1,n\rangle_{1,2}, \end{equation} where $C$ is a normalization constant and in our simulations $n_{\mathrm{max}}=3$. The corresponding density matrix is $\rho_{1,2}^{\mathrm{subt}}=|\psi^{\mathrm{subt}}\rangle\langle\psi ^{\mathrm{subt}}|$. We note that the family of states described by Eq.\ (\ref{eq:substate}) is of current interest in the realm of continuous-variable entanglement distillation, as they represent a particular kind of non-Gaussian state (i.e., a state whose Wigner representation is not Gaussian), whose entanglement content $\mathcal{N}(\rho^{\mathrm{subt}}_{1,2})$ is predicted to be larger than $\mathcal{N}(\rho^{\mathrm{ini}}_{1,2})$, for suitable experimental parameters ($\lambda$, $T$)~\cite{distillation}. \subsection{Construction of the observables} Our aim is to construct bipartite measurement operators $M_i$ as a tensor product of the POVM elements corresponding to each subsystem $ \Pi_{1}\otimes \Pi_{2}$. In particular, we will consider $8$ POVM elements for each subsystem $(1,2)$, specified by four different outcomes $\beta =0,1,2,3$ and two different settings $\gamma$ corresponding to $\theta=0$ and $\theta=\pi/2$. Thus the selected POVM subset for each mode consists of $8$ elements $ \Pi_{\beta, \gamma}$, collected as \begin{equation} \{ \Pi_{0,0},\Pi_{1,0},\Pi_{2,0}, \Pi_{3,0}, \Pi_{0,\pi/2}, \Pi_{1,\pi/2},\Pi_{2,\pi/2}, \Pi_{3,\pi/2}\}, \end{equation} where we keep this ordering of the POVM elements for the rest of the paper. We measure $8\times8$ configurations, which determine $64$ POVMs $M_{i}$, labelled by the index $i$, of the form in Eq.\ (\ref{eq:localmeas}) with $j,k=1,\dots,8$ being the indices labelling the POVM elements of modes $(1,2)$ respectively, and $i=(j,k)$ the joint index, labelling the bipartite measurement operator. For example, the observable $ M_{(6,1)}$ corresponds to the POVM elements $\Pi_{1,\pi/2}$ for mode $1$ and $\Pi_{0,0}$ for mode $2$. This gives a total of 64 measurements, which in turn determine 64 expectation values $\mathrm{Tr}( \rho_{1,2} M_{i})=m_{i}$, with $i=1,\dots,64$. This is a clear reduction with respect to full state tomography, which would require (at least) $16^2-1=255$ measurements in order to reconstruct $ \rho_{1,2}$ in a truncated Hilbert space of dimension $16$.
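The assembly of the bipartite observables and of the corresponding expectation values is a direct tensor-product construction. As a brief illustration (not the code used for the simulations), assume the hypothetical arrays \texttt{povm\_1} and \texttt{povm\_2} hold the eight truncated single-mode POVM elements of each mode as $4\times 4$ matrices, and \texttt{rho} is the $16\times 16$ two-mode density matrix; then:
\begin{verbatim}
import numpy as np

def bipartite_observables(povm_1, povm_2):
    # M_{(j,k)} = Pi_j^{(1)} (x) Pi_k^{(2)} for all 8 x 8 joint settings/outcomes
    return [np.kron(P1, P2) for P1 in povm_1 for P2 in povm_2]

def expectation_values(rho, observables):
    # m_i = Tr(rho M_i); in an experiment these are replaced by measured
    # relative frequencies of the corresponding joint click events
    return [np.real(np.trace(rho @ M)) for M in observables]
\end{verbatim}
The 64 numbers returned by \texttt{expectation\_values} are exactly the data $m_i$ entering the semidefinite program of Sec.~(\ref{sec:SDPs}).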
In order to find the lower bound on the Logarithmic Negativity of the photon-subtracted states $ \rho_{1,2}^{\mathrm{subt}}$ described in Eq.\ (\ref{eq:substate}), by means of the set of measurement observables $M_i$ as defined in Eq.\ (\ref{eq:localmeas}), we follow the procedure described in Sec.~(\ref{sec:SDPs}). Note that in a real experiment $\mathrm{Tr}( \rho_{1,2}^{\mathrm{subt}} M_i)$ should be replaced by the actual experimental probability estimates, which will be subject to different sources of noise. We will discuss the effect of experimental noise on entanglement bounds in the final subsection. \subsection{Numerical results} \begin{figure}[h!] \begin{center} \includegraphics[angle=0,width=10truecm]{Graphic3.pdf} \caption{(Color online) Logarithmic Negativity (LN) for the initial state (red curve), subtracted state (blue curve) and lower bound on LN of subtracted state obtained by convex optimization (green curve), vs.\ squeezing parameter ($\lambda$). Percentage entanglement increase with a single photon-subtraction step and percentage difference between the actual LN of the subtracted state and the one obtained by convex optimization are indicated. The SBS transmission was fixed at $T=90\%$. \label{fig:3}} \end{center} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[angle=0,width=10truecm]{Graphic4.pdf} \caption{(Color online) Logarithmic Negativity (LN) for the initial state (red curve), subtracted state (blue curve) and lower bound on LN of subtracted state obtained by convex optimization (green curve), vs.\ SBS transmission ($T$). Percentage entanglement increase with a single photon-subtraction step and percentage difference between the actual LN of the subtracted state and the one obtained by convex optimization are indicated. The squeezing parameter was fixed at $\lambda=0.2$. \label{fig:4}} \end{center} \end{figure} Fig.~(\ref{fig:3}) presents the percentage entanglement increase between $|\psi^{\mathrm{ini}}\rangle$ (red curve) and $|\psi^{\mathrm{subt}}\rangle$ (blue curve) for different squeezing parameters $\lambda$ ranging from $\lambda=0.1$ to $\lambda=0.3$. Percentage differences between the actual Logarithmic Negativity characterizing the photon-subtracted state and the lower bound obtained by convex optimization (green curve) are also indicated. The transmission coefficient of the subtraction beam-splitter (SBS) in Fig.~(\ref{fig:7}) was fixed at $T=90\%$, and the LO amplitude and detector efficiency were set to $|\alpha|=1$ and $\eta=10\%$, respectively. It is noticeable that while a single-photon-subtraction step produces a larger entanglement increase for lower values of $\lambda$, the lower bound on the entanglement becomes tighter for higher squeezing parameter $\lambda$. The percentage error in the lower bound is in all cases below $9\%$, which reveals the accuracy of our partial detection scheme in characterizing entanglement. Fig.~(\ref{fig:4}) presents the percentage entanglement increase between $|\psi^{\mathrm{ini}}\rangle$ (red curve) and $|\psi^{\mathrm{subt}}\rangle$ (blue curve) for different SBS transmission $T$, ranging from $T=80\%$ to $T=99\%$. Percentage differences between the actual Logarithmic Negativity characterizing the photon-subtracted state vector ($|\psi^{\mathrm{subt}}\rangle$) and the lower bound obtained by convex optimization (green curve) are also indicated.
The squeezing parameter was fixed at $\lambda=0.2$, and the LO amplitude and detector efficiency were set to $|\alpha|=1$ and $\eta=10\%$, respectively. Fig.~(\ref{fig:4}) shows that for a fixed squeezing parameter the single-photon-subtraction step produces a larger entanglement increase for higher SBS transmission $T$ and that the lower bound becomes tighter for higher $T$. In all cases, the percentage error in the lower bound remains below $11\%$. Next, we tested the robustness of the measurement scheme with respect to different types of LO phase noise. To this end we constructed a set of 64 POVM elements as described in the previous section, for a fixed LO amplitude $\alpha=1$, detector efficiency $\eta=0.10$, SBS transmission $T=0.90$ and squeezing parameter $\lambda=0.2$. The two fixed phase settings $\theta_0=(0,\pi/2)$ were subjected to different types of fluctuations. In particular, we investigated the required precision in the LO phase $\theta$ by adding different amounts of random phase noise $\epsilon$ in the form $\theta=(0\pm \epsilon/10, \pi/2(1 \pm \epsilon/10))$, where $0 \leq \epsilon \leq 1$ is a random number with a uniform distribution. In our numerical simulations we found that for a phase error of up to $10\%,$ the lower bound differs from the actual Logarithmic Negativity by less than $1\%$. This means that an LO phase precision of at most $5$ degrees is required for the bounds to produce a tight estimate. This is shown in Fig.~(\ref{fig:5} (a)), for $\theta_0=(0,\pi/2)$, a squeezing parameter $\lambda=0.2$, an SBS transmission $T=90\%$, an LO amplitude $|\alpha|=1$ and a detector efficiency $\eta=0.10$. We also analyzed the effect of temporal phase fluctuations, by modelling the LO as a phase-averaged coherent state described by the complex amplitude $\alpha=\sum_{j} |\alpha| e^{i(\theta_0 + \delta \theta_j)}$ with $j=1,\dots,100$ and $\delta \theta$ a random phase with a uniform distribution centered around $\theta_0\in\{0,\pi/2\}$ and with width $\Delta \theta$. We found that a phase width of up to $0.6$ radians ($\approx 30$ degrees) introduces a percentage difference in the lower bound of up to $15\%$. For a phase width $\Delta \theta$ of up to $0.4$ radians ($\approx 20$ degrees) the lower bound on the Logarithmic Negativity is within $10\%$. This is shown in Fig.~(\ref{fig:5} (b)), for a squeezing parameter $\lambda=0.2$, an SBS transmission $T=90\%$, an LO amplitude $|\alpha|=1$ and a detector efficiency $\eta=0.10$. \begin{figure}[h!] \begin{center} \includegraphics[angle=0,width=13.5truecm]{Graphic5.pdf} \caption{(Color online) Exact Logarithmic Negativity (red curve) and lower bound obtained by convex optimization (blue curve) for the photon-subtracted state vs.\ (a) dimensionless phase noise $\epsilon$ and (b) phase noise standard deviation $\Delta \theta$ in radians for LO phase settings $\theta_0=(0,\pi/2)$. Maximum and minimum percentage differences are indicated. The squeezing parameter was fixed at $\lambda=0.2$.\label{fig:5} } \end{center} \end{figure} Finally, we analyzed the impact of a different homodyne BS reflectivity $R$ on the overall accuracy of the entanglement quantification scheme. We found that for $R\geq 80\%$ the lower bound on the Logarithmic Negativity differs by less than $0.2\%$ from the actual value, as long as the LO amplitude remains small enough ($|\alpha|\leq2.5$) due to the limited photon-number resolution in the time-multiplexed detectors. This is shown in Fig.~(\ref{fig:6} (a)).
Fig.~(\ref{fig:6} (b)) shows a complete simulation for $50 \%\leq R \leq 99 \%$, $|\alpha|=2.5$, $\lambda=0.1$ and $T=90\%$. In all the simulations, the TMD efficiencies were set to $\eta=0.10$ and the LO phase settings were chosen as $\theta=(0,\pi/2)$. Additionally, the subtraction APD in Fig.~(\ref{fig:7}) is assumed to have a limited efficiency, which is modelled by interposing a beamsplitter with transmittivity of $15\%$. The numerical simulations in this work were implemented using the convex optimization package SeDuMi \cite{sedumi}. \begin{figure}[h!] \begin{center} \includegraphics[angle=0,width=14truecm]{Graphic6.pdf} \caption{(Color online) (a) Percentage error in the lower bound set by convex optimization for different homodyne BS reflectivities $R$. The error remains below $ 0.2 \%$ for a sufficiently weak LO amplitude $|\alpha| < 2.5$. (b) Extension to a larger range of BS reflectivities $R$, for $\lambda=0.1$, $|\alpha|=2.5$, $\theta=(0,\pi/2)$. \label{fig:6}} \end{center} \end{figure} \subsection{Tolerance to experimental measurement errors} In the numerical simulations presented here we have used the exact expectation values $m_i=\mathrm{Tr}( \rho^{\mathrm{subt}} M_i)$ for the minimization of the Logarithmic Negativity. However, in a real experiment such expectation values are affected by different sources of noise. In this subsection, we test the tolerance of the scheme to experimental errors. There are several ways to include such an error. One approach would be to estimate variances of measured values, and then make a model including Gaussian-distributed errors for the measured variables. Another would be a hard bound on the degree of entanglement as a function of a small norm deviation from the perfect data, giving rise to a box error model. This latter error model even allows for a malicious correlation in the errors, in that all errors add constructively. Clearly, independent, identically distributed errors would give rise to much more robust bounds. Nonetheless, in order to evaluate the achievable robustness of our method, we will now refer to this latter, more demanding error model: We merely require for an $\epsilon>0$ that the measured value $n_i$ and the true expectation value $m_i= \mathrm{Tr}{(M_i \rho)}$ satisfy $m_i \in [(1-\epsilon)n_i , (1+\epsilon)n_i]$ for all $i$. Hence, the problem to be solved becomes \begin{equation} \label{eq:Eminbox} \mathcal{N}_{\min} = \min_{\rho}\{\mathcal{N}(\rho): \mathrm{Tr}(\rho M_i)=m_i,\, m_i \in [(1-\epsilon)n_i , (1+\epsilon)n_i] \,\,\, \forall i \}. \end{equation} Including such measurement errors, Eq.\ (\ref{eq:sdp}) then clearly becomes \begin{eqnarray} \label{eq:exptsdp} &&\mathrm{maximize}\;\;\;\; \log\Big(\sum_i \nu_i n_i \Big), \\ &&\mathrm{subject\;to}\;\;H^{T_1}\geq \sum_i \nu_i M_i ,\nonumber\\ && \hspace{2.0cm} m_i \in [(1-\epsilon)n_i , (1+\epsilon)n_i] \,\, \forall i,\nonumber\\ && \;\;\;\;\;\; \mathrm{and}\;\;\; -\mathbb{I} \leq H \leq \mathbb{I},\nonumber \end{eqnarray} which can be solved as easily as Eq.\ (\ref{eq:sdp}) using SeDuMi~\cite{sedumi}. Note that the resulting bound is valid even if the errors in the measured data are maliciously correlated.
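On the primal side, the box error model amounts to nothing more than relaxing the equality constraints of the illustrative sketch given in Sec.~(\ref{sec:SDPs}) to two-sided inequalities. A minimal illustration, with the same hypothetical \texttt{povms}, a cvxpy variable \texttt{rho}, and \texttt{data} now holding the measured values $n_i$:
\begin{verbatim}
import cvxpy as cp

def box_constraints(rho, povms, data, eps):
    # Tr(rho M_i) is only required to lie within a relative eps-interval
    # of the measured value n_i, as in the box error model above
    lower = [cp.real(cp.trace(rho @ M)) >= (1 - eps) * n
             for M, n in zip(povms, data)]
    upper = [cp.real(cp.trace(rho @ M)) <= (1 + eps) * n
             for M, n in zip(povms, data)]
    return lower + upper
\end{verbatim}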
In this subsection, we use as an example a two-mode squeezed state with $\lambda=0.2$ from which a photon is subtracted using an SBS with $T=95\%.$ The APD in Fig.~(\ref{fig:7}) is assumed to have a limited efficiency, which is achieved by interposing a beamsplitter with transmittivity of $20\%.$ In the short table below, we present the bounds attained by solving the SDP in Eq.\ (\ref{eq:exptsdp}) for some representative values of $\epsilon.$ \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $\epsilon$ & 0.0 & 0.001 & 0.01 & 0.1 \\ \hline $\mathcal{N}_{\min}$ & 0.7308 & 0.7185 & 0.6660 & 0.3034\\ \hline \end{tabular} \end{center} These numbers must be compared with the entanglement of the initial two-mode squeezed state, which has $\mathcal{N}=0.5803,$ and of the ideal photon-subtracted state, which has $\mathcal{N}=0.7309.$ Note that the state on which we put the lower bounds is inevitably mixed, and the table shows the robustness and effectiveness of our scheme. $\epsilon =0.01$ is enough to demonstrate the enhancement of entanglement by distillation with experimentally realistic parameters, without having to undertake a full tomography of the quantum states involved. This value of $\epsilon$ translates to about $10000$ data points, via the central limit theorem, for each measurement configuration. This is in line with the number of data points taken in other experiments involving reconstruction of non-Gaussian states~\cite{Ourjoumtsev07}. \section{Conclusions} \label{sec:conc} We have presented quantitative numerical evidence that a novel homodyne detection scheme with photon-number resolution is able to set accurate bounds on the entanglement content of a family of two-mode photon-subtracted quadrature squeezed states. The entanglement lower bounds retrieved by the measurement scheme are accurate to within $10\%$ for the full range of squeezing parameters $\lambda=0.1-0.3$ and subtraction beam-splitter transmissions $T=80\%-99\%$. We found that the bounds become tighter for higher $\lambda$ and $T$. We also analyzed the required phase precision and stability in the local oscillator (LO), and found that a precision of better than $5$ degrees is required for a bound accuracy within $1\%$, while temporal phase fluctuations of up to $20$ degrees can be accepted for a lower bound with $10\%$ accuracy. Additionally we found that a homodyne beam-splitter reflectivity $R$ above $60\%$, for an LO amplitude up to $|\alpha|=2.5$, is sufficient to obtain a lower bound on the Logarithmic Negativity which agrees to within $2\%$ with the actual Logarithmic Negativity value characterizing the photon-subtracted state. The results reported here provide strong numerical evidence of the suitability of our partial detection scheme for entanglement quantification of bipartite degaussified states. We note that this type of partial detection approach is not only attractive due to its accuracy but also due to its scalability. This is of importance for the application of an entanglement distillation protocol combining two degaussified sources \cite{Eisert, distillation}. In particular, our scheme can be easily scaled to the detection of four spatial modes, in which case it would require the measurement of only $64^2=4096$ outcome probabilities. In contrast, full state tomography would require (at least) $16^4 -1=65535$ different measurements.
Therefore, our method provides a feasible, direct and resilient way of accurately characterizing entanglement in continuous-variable quantum systems experimentally. Finally, we anticipate the amount of data required in order to obtain an adequate precision in the measurement-outcome probabilities characterizing our partial measurement scheme to be considerably lower than that required for full state tomography. \section*{Acknowledgments} This work was supported by the EPSRC through the QIP IRC, the EU through the IST directorate FET Integrated Project QAP, and through STREP projects CORNER, HIP, COMPAS and MINOS. JE acknowledges an EURYI Award, MP and IAW Royal Society Research Merit Awards, and MP an Alexander von Humboldt Professorship. \section*{References} \begin{thebibliography}{10} \bibitem{dariano03} G.\ M.\ D'Ariano, M.\ G.\ A.\ Paris, and M.\ F.\ Sacchi, Advances in Imaging and Electron Physics \textbf{128}, 205 (2003). \bibitem{Raymer} D.\ Smithey, M.\ Beck, M.\ Raymer, and A.\ Faridani, Phys.\ Rev.\ Lett.\ \textbf{70}, 1244 (1993). \bibitem{Dunn} T.\ Dunn, I.\ A.\ Walmsley, and S.\ Mukamel, Phys.\ Rev.\ Lett.\ \textbf{74}, 884 (1995). \bibitem{LSE} M.\ P.\ A.\ Branderhorst, I.\ A.\ Walmsley, R.\ L.\ Kosut, and H.\ Rabitz, J.\ Phys.\ B \textbf{41}, 074004 (2008); R.\ L.\ Kosut, I.\ A.\ Walmsley, and H.\ Rabitz, arXiv:quant-ph/0411093 (2004). \bibitem{CS} D.\ Gross, Y.-K.\ Liu, S.\ Flammia, S.\ Becker, and J.\ Eisert, arXiv:0909.3304; R.\ L.\ Kosut, arXiv:0812.4323; A.\ Shabani, R.\ L.\ Kosut, and H.\ Rabitz, arXiv:0910.5498. \bibitem{as09} K.\ M.\ R.\ Audenaert and S.\ Scheel, New J.\ Phys.\ \textbf{11}, 023028 (2009). \bibitem{Wigner} M.\ Hillery, R.\ F.\ O'Connell, M.\ O.\ Scully, and E.\ P.\ Wigner, Phys.\ Rep.\ \textbf{106}, 121 (1984); W.\ P.\ Schleich, \emph{Quantum optics in phase space}, Wiley, Berlin (2001). \bibitem{lr09} A.\ I.\ Lvovsky and M.\ G.\ Raymer, Rev.\ Mod.\ Phys.\ \textbf{81}, 299 (2009). \bibitem{Ourjoumtsev07} A.\ Ourjoumtsev, A.\ Dantan, R.\ Tualle-Brouri, and P.\ Grangier, Phys.\ Rev.\ Lett.\ \textbf{98}, 030502 (2007). \bibitem{Grangier88} P.\ Grangier, M.\ J.\ Potasek, and B.\ Yurke, Phys.\ Rev.\ A \textbf{38}, R3132 (1988). \bibitem{Kuzmich00} A.\ Kuzmich, I.\ A.\ Walmsley, and L.\ Mandel, Phys.\ Rev.\ Lett.\ \textbf{85}, 1349 (2000). \bibitem{Puentes} G.\ Puentes, J.~S.\ Lundeen, M.~P.~A.\ Branderhorst, H.\ B.\ Coldenstrodt-Ronge, B.\ J.\ Smith, and I.\ A.\ Walmsley, Phys.\ Rev.\ Lett.\ \textbf{102}, 080404 (2009). \bibitem{detectomo} J.\ S.\ Lundeen, A.\ Feito, H.\ Coldenstrodt-Ronge, K.\ L.\ Pregnell, Ch.\ Silberhorn, T.\ C.\ Ralph, J.\ Eisert, M.\ B.\ Plenio, and I.\ A.\ Walmsley, Nature Physics \textbf{5}, 27 (2009); A.\ Feito, J.\ S.\ Lundeen, H.\ Coldenstrodt-Ronge, J.\ Eisert, M.\ B.\ Plenio, and I.\ A.\ Walmsley, New J.\ Phys.\ \textbf{11}, 093038 (2009); H.\ B.\ Coldenstrodt-Ronge, J.\ S.\ Lundeen, A.\ Feito, B.\ J.\ Smith, W.\ Mauerer, Ch.\ Silberhorn, J.\ Eisert, M.\ B.\ Plenio, and I.\ A.\ Walmsley, J.\ Mod.\ Opt.\ {\bf 56}, 432 (2009). \bibitem{ap06} K.\ M.\ R.\ Audenaert and M.\ B.\ Plenio, New J.\ Phys.\ \textbf{8}, 266 (2006). \bibitem{eba07} J.\ Eisert, F.\ G.\ S.\ L.\ Brand{\~a}o, and K.\ M.\ R.\ Audenaert, New J.\ Phys.\ \textbf{8}, 46 (2007). \bibitem{grw07} O.\ G\"{u}hne, M.\ Reimpell, and R.\ F.\ Werner, Phys.\ Rev.\ Lett.\ {\bf 98}, 110502 (2007). \bibitem{p09} M.\ B.\ Plenio, Science \textbf{324}, 342 (2009).
\bibitem{p05} M.\ B.\ Plenio, Phys.\ Rev.\ Lett.\ \textbf{95}, 090503 (2005); J.\ Eisert, PhD thesis (Potsdam, February 2001); G.\ Vidal and R.\ F.\ Werner, Phys.\ Rev.\ A {\bf 65}, 032314 (2002). \bibitem{bv05} S.\ Boyd and L.\ Vandenberghe, \textit{Convex optimization}, Cambridge University Press, Cambridge (2005). \bibitem{Eisert} D.\ E.\ Browne, J.\ Eisert, S.\ Scheel, and M.\ B.\ Plenio, Phys.\ Rev.\ A \textbf{67}, 062320 (2003); J.\ Eisert, D.~E.\ Browne, S.\ Scheel, and M.~B.\ Plenio, Annals of Physics (NY) {\bf 311}, 431 (2004). \bibitem{distillation} J.\ Eisert, S.\ Scheel, and M.~B.\ Plenio, Phys.\ Rev.\ Lett.\ {\bf 89}, 137903 (2002); J.\ Fiurasek, Phys.\ Rev.\ Lett.\ {\bf 89}, 137904 (2002); G.\ Giedke and J.~I.\ Cirac, Phys.\ Rev.\ A {\bf 66}, 032316 (2002). \bibitem{b97} R.\ Bhatia, \emph{Matrix analysis}, Springer, New York (1997). \bibitem{sedumi} All the simulations in this work were performed using the convex optimization routine SeDuMi 1.1, available for free download at \emph{http://sedumi.ie.lehigh.edu}. \bibitem{Banaszek9899} K.\ Banaszek and K.\ Wodkiewicz, Phys.\ Rev.\ A \textbf{58}, 4345 (1998); Phys.\ Rev.\ Lett.\ \textbf{82}, 2009 (1999). \bibitem{Ou92} Z.\ Y.\ Ou, S.\ F.\ Pereira, H.\ J.\ Kimble, and K.\ C.\ Peng, Phys.\ Rev.\ Lett.\ \textbf{68}, 3663 (1992). \bibitem{Holevo} A.\ S.\ Holevo, \emph{Probabilistic and statistical aspects of quantum theory}, North Holland, Amsterdam (1982). \bibitem{Achilles} D.\ Achilles, C.\ Silberhorn, C.\ Sliwa, K.\ Banaszek, I.\ A.\ Walmsley, M.\ J.\ Fitch, B.\ C.\ Jacobs, T.\ B.\ Pittman, and J.\ D.\ Franson, J.\ Mod.\ Opt.\ \textbf{51}, 1499 (2004). \bibitem{Pregnell} K.\ L.\ Pregnell and D.\ T.\ Pegg, Phys.\ Rev.\ A \textbf{66}, 013810 (2002). \bibitem{Mosley} P.~J.\ Mosley, J.~S.\ Lundeen, B.~J.\ Smith, P.\ Wasylczyk, A.~B.\ U'Ren, Ch.\ Silberhorn, and I.~A.\ Walmsley, Phys.\ Rev.\ Lett.\ {\bf 100}, 133601 (2008). \bibitem{Dong} R.\ Dong, M.\ Lassen, J.\ Heersink, Ch.\ Marquardt, R.\ Filip, G.\ Leuchs, and U.\ Andersen, Nature Physics {\bf 4}, 919 (2008); B.\ Hage, A.\ Samblowski, J.\ DiGuglielmo, A.\ Franzen, J.\ Fiurasek, and R.\ Schnabel, Nature Physics {\bf 4}, 915 (2008). \bibitem{Takahashi} H.\ Takahashi, J.\ Neergaard-Nielsen, M.\ Takeuchi, M.\ Takeoka, K.\ Hayasake, A.\ Furusawa, and M.\ Sasaki, arXiv:0907.2159. \bibitem{Xiang} G.\ Xiang, T.\ Ralph, A.\ Lund, N.\ Walk, and G.~J.\ Pryde, arXiv:0907.3638. \bibitem{Bellini} V.\ Parigi, A.\ Zavatta, M.~S.\ Kim, and M.\ Bellini, Science \textbf{317}, 1890 (2007). \end{thebibliography} \end{document}
\begin{document} \title{Invariant generalized almost complex structures on real flag manifolds} \begin{abstract} We characterize those real flag manifolds that can be endowed with invariant generalized almost complex structures. We show that no $GM_2$-maximal real flag manifold admits integrable invariant generalized almost complex structures. We give a concrete description of the generalized complex geometry on the maximal real flags of type $B_2$, $G_2$, $A_3$, and $D_l$ with $l\geq 5$, where we prove that the space of invariant generalized almost complex structures modulo invariant $B$-transformations is homotopy equivalent to a torus, and we classify all invariant generalized almost Hermitian structures on them. \end{abstract} \tableofcontents \section{Introduction} Generalized complex geometry is a theory recently introduced by Hitchin \cite{H} and further developed by both Gualtieri \cite{G1,G2,G3} and Cavalcanti \cite{Ca2}. It provides a unified framework in which it is possible to generalize several classical geometric structures, such as symplectic, complex, K\"ahler, and Calabi--Yau structures, among others. It is worth mentioning that both complex and symplectic geometries are extreme cases of the theory, and their detailed study has made it possible to extend several classical results to the context of generalized complex geometry. Important developments in generalized complex geometry, in both mathematics and physics, have been made in recent years. For instance, some of them aim at providing applications to mathematical physics, as may be seen in: \begin{itemize} \item \cite{HH} where the authors gave an extension of the Hitchin--Kobayashi correspondence for quiver bundles over generalized K\"ahler manifolds, thus obtaining applications to Yang--Mills theory and analysis, \item \cite{Gr,BLPZ} where various aspects of string theory are developed using techniques from generalized K\"ahler geometry, \item \cite{CG2} where the authors applied their study of generalized complex geometry to problems related to mirror symmetry and $T$-duality, and \item \cite{LT} where an extension of the notion of moment map is given, which allows one both to study Hamiltonian mechanics in a more general setting and to obtain a notion of reduction in generalized complex geometry as well as in generalized K\"ahler geometry. \end{itemize} The aim of this paper is to initiate the study of generalized complex geometry on real flag manifolds. Examples of generalized complex structures constructed by using Lie theory can be found, for instance, in: \begin{itemize} \item \cite{CG} where the authors gave a classification of all 6-dimensional nilmanifolds admitting generalized complex structures, \item \cite{AD} where a regular class of invariant generalized complex structures on a real semisimple Lie group was described, \item \cite{BMW} where all left invariant generalized complex and K\"ahler structures on simply connected 4-dimensional Lie groups were classified and their invariant cohomologies were studied, and, more importantly for us, \item \cite{VS,V,GVV} where the corresponding authors studied invariant generalized complex geometry on complex flag manifolds. Firstly, in \cite{VS} the set of all invariant generalized almost complex structures on a maximal complex flag manifold was described and the integrability conditions of each of these structures were presented in a very concrete way.
Secondly, in \cite{V} all invariant generalized complex structures on a partial flag manifold with at most four isotropy summands were classified. Thirdly, in \cite{GVV} all invariant generalized K\"ahler structures on a complex maximal flag were classified, the quotient spaces of all invariant generalized complex and K\"ahler structures on a complex maximal flag up to the action by invariant $B$-transformations were described, and an explicit expression for the invariant pure spinor of each of these structures was given. \end{itemize} A flag manifold associated to a non-compact semisimple Lie algebra $\mathfrak{g}$ is a homogeneous space $\mathbb{F}_\Theta=G/P_\Theta$ where $G$ is a connected Lie group with Lie algebra $\mathfrak{g}$ and $P_\Theta$ is a parabolic subgroup. If $K$ is a maximal compact subgroup of $G$ and $K_\Theta=K\cap P_\Theta$, then the flag manifold $\mathbb{F}_\Theta$ can also be written as $\mathbb{F}_\Theta=K/K_\Theta$. In this paper we study the existence, integrability and geometry of those generalized almost complex structures on real flag manifolds $\mathbb{F}_\Theta$ that are invariant with respect to the isotropy representation, in the case that $\mathfrak{g}$ is a split real form of a complex simple Lie algebra. We mainly focus on the case of real maximal flags. Unlike what happens in the case of complex flag manifolds, for real flag manifolds there is no systematic way to study invariant geometric structures. Some advances have been made, for instance, in: \begin{itemize} \item \cite{PS} where the authors provided a detailed analysis of the isotropy representations for the flag manifolds of split real forms of complex simple Lie algebras, \item \cite{FBS} where the existence of $K$-invariant almost complex structures on real flag manifolds was studied, with the conclusion that only a few flag manifolds (associated to split real forms) admit $K$-invariant complex structures, and \item \cite{GG} where the authors studied the existence of invariant Einstein metrics on real flag manifolds associated to simple and non-compact split real forms of complex classical Lie algebras whose isotropy representation decomposes into two or three irreducible sub-representations. \end{itemize} This paper is structured as follows. In Section \ref{S:2} we introduce the basic concepts of both generalized complex geometry and Lie theory that we will be using throughout the paper. In Section \ref{S:3} we develop preliminary results which form the basis for the computations that follow; they will allow us to state our main results. Using the notion of $M$-equivalence classes introduced in \cite{PS}, which are defined by an equivalence relation between isotropy representations of the real flag $\mathbb{F}_\Theta$, we give a concrete description of the invariant maximal isotropic subspaces in $T_{b_\Theta} \mathbb{F}_\Theta\oplus T_{b_\Theta}^\ast \mathbb{F}_\Theta$, where $b_\Theta$ denotes the origin of $\mathbb{F}_\Theta$. Accordingly, we can state a generalization of Theorem 1
from \cite{FBS} as follows: \begin{theorem*}[\ref{P1}] A real flag manifold $\mathbb{F}_\Theta=K/K_\Theta$ admits a $K$-invariant generalized almost complex structure if and only if it is a maximal flag of type $A_3$, $B_2$, $G_2$, $C_l$ for $l$ even, $D_l$ for $l\geq 4$, or it is one of the following intermediate flags: \begin{itemize} \item[$-$] of type $B_3$ and $\Theta=\lbrace \lambda_1-\lambda_2,\lambda_2-\lambda_3\rbrace$, \item[$-$] of type $C_l$ with $\Theta=\lbrace \lambda_{d}-\lambda_{d+1},\cdots, \lambda_{l-1}-\lambda_{l}, 2\lambda_{l}\rbrace$ for $d>1$ odd, and \item[$-$] of type $D_l$ with $l=4$ and $\Theta$ being one of the sets of roots: $\lbrace \lambda_1-\lambda_2,\lambda_3-\lambda_4\rbrace$, $\lbrace \lambda_1-\lambda_2,\lambda_3+\lambda_4\rbrace$, $\lbrace \lambda_3-\lambda_4,\lambda_3+\lambda_4\rbrace$. \end{itemize} \end{theorem*} We also obtain a more explicit and useful expression for both the Courant bracket and the Nijenhuis operator at the origin $b_\Theta$ of $\mathbb{F}_\Theta$ (see Proposition \ref{CaurentB}). There is a special kind of maximal real flag manifold admitting invariant generalized almost complex structures in which we are especially interested. Namely, a maximal real flag manifold of those described in Theorem \ref{P1} is said to be a $GM_2$-\emph{maximal real flag} if it admits at least one $M$-equivalence class root subspace of dimension $2$. In Section \ref{S:4} we deal with the integrability of invariant generalized almost complex structures on $GM_2$-maximal real flag manifolds. By means of very explicit computations, using the preliminary results of Section \ref{S:3}, we obtain: \begin{theorem*}[\ref{NoIntegrables}] No $GM_2$-maximal real flag manifold admits integrable $K$-invariant generalized almost complex structures. \end{theorem*} It is important to point out that the computations needed to prove this result strongly depend on the classification of $M$-classes of isotropy representations which was carried out in \cite{PS}. Thus, given the nature of the integrability problem and the rich algebraic and analytic structure of the manifolds we are working with, we reduced the integrability problem to performing computations with the algebraic information provided by the root systems involved in each case. Finally, Section \ref{S:5} is devoted to giving a concrete description of the generalized complex geometry on the maximal real flags $\mathbb{F}$ of type $B_2$, $G_2$, $A_3$, and $D_l$ with $l\geq 5$. For these cases, we study the effect of the action by invariant $B$-transformations on the space of invariant generalized almost complex structures. We prove that, for every $M$-class root space, any generalized complex structure which is not of complex type is a $B$-transform of a structure of symplectic type. We also show that every element in the set of generalized complex structures of complex type is fixed by the action induced by $B$-transformations. Adapting these results to the general case and denoting by $\mathfrak{M}_a(\mathbb{F})$ the quotient space obtained from the set of all invariant generalized almost complex structures modulo the action by invariant $B$-transformations, we obtain: \begin{theorem*}[\ref{BModuli2.1}] Suppose that $\displaystyle \Pi^-=\bigcup_{j=1}^d[\alpha_j]_M$ where $d$ is the number of $M$-equivalence classes.
Then $$\mathfrak M_a (\mathbb F)= \prod_{[\alpha_j]_M\subset\Pi^-} \mathfrak{M}_{\alpha_j}(\mathbb{F})=(\mathbb{R}^\ast \cup (\mathbb{R}^\ast\times \mathbb{R}))_{\alpha_1} \times \cdots \times (\mathbb{R}^\ast \cup (\mathbb{R}^\ast\times \mathbb{R}))_{\alpha_d}.$$ In particular, $\mathfrak M_a (\mathbb F)$ admits a natural topology induced from $\mathbb{R}^{2d}$ with which it is homotopy equivalent to the $d$-torus $\mathbb{T}^d$. \end{theorem*} Moreover, as a consequence of this result we obtain an explicit expression for the invariant pure spinor associated to each invariant generalized almost complex structure on $\mathbb{F}$ (see Corollary \ref{PureSpinor}). We also characterize all invariant generalized almost Hermitian structures on these maximal real flags (see Proposition \ref{GKhaler3}). In particular, if $\mathfrak{G}_a(\mathbb{F})$ denotes the quotient space obtained from the set of all invariant generalized metrics modulo the action by invariant $B$-transformations, we have: \begin{proposition*}[\ref{GMetrics}] Suppose that $\displaystyle \Pi^-=\bigcup_{j=1}^d[\alpha_j]_M$ where $d$ is the number of $M$-equivalence classes. Then $$\mathfrak{G}_a(\mathbb{F})=\lbrace((\mathbb{R}^+)^2\times\mathbb{R}) \cup ((\mathbb{R}^-)^2\times\mathbb{R})\rbrace_{\alpha_1}\times \cdots \times \lbrace((\mathbb{R}^+)^2\times\mathbb{R}) \cup ((\mathbb{R}^-)^2\times\mathbb{R})\rbrace_{\alpha_d}.$$ \end{proposition*} It is worth noticing that the only two maximal real flag manifolds mentioned in Theorem \ref{P1} that are not $GM_2$-maximal real flags are the particular cases of type $C_4$ and $D_4$. This is because each $M$-equivalence class root subspace associated to them has dimension $4$. Such a small variation in the dimension greatly increases the difficulty of the computations needed to determine whether or not an invariant generalized almost complex structure is integrable. Therefore, given how extensive the computations required to show that there are no integrable invariant generalized almost complex structures on a $GM_2$-maximal real flag already are, we leave the study of integrability, as well as of other aspects of generalized geometry, on the maximal real flags of type $C_4$ and $D_4$ and on the intermediate real flags for a forthcoming work. It may be carried out by using the present work together with Remark \ref{PartialCases}. Based on the results obtained in this paper and Theorem 2 from \cite{FBS}, we conjecture the following: \begin{conjecture*}[I] No maximal real flag manifold of type $C_4$ or $D_4$ admits integrable $K$-invariant generalized almost complex structures. \end{conjecture*} \begin{conjecture*}[II] A real flag manifold $\mathbb{F}_\Theta=K/K_\Theta$ admits a $K$-invariant generalized complex structure if and only if it is of type $C_l$ and $\Theta=\lbrace \lambda_{d}-\lambda_{d+1},\cdots, \lambda_{l-1}-\lambda_{l}, 2\lambda_{l}\rbrace$ for $d>1$ odd. \end{conjecture*} {\bf Acknowledgments:} We would like to express our most sincere gratitude to Luiz San Martin, Viviana del Barco, Cristi\'an Ortiz, and Sebasti\'an Herrera for their valuable comments and suggestions during several stages of the present work. Varea thanks the Instituto de Matem\'atica e Estat\'istica - Universidade de S\~ao Paulo for the support provided while this work was carried out. Valencia was supported by Grant 2020/07704-7 S\~ao Paulo Research Foundation - FAPESP. Varea was supported by Grant 2020/12018-5 S\~ao Paulo Research Foundation - FAPESP.
\section{Preliminaries}\label{S:2} In this section we will introduce notations and summarize the most important concepts and results from both generalized complex geometry and Lie theory which we will be using throughout the paper. For more details the reader is referred, for instance, to \cite{CD,G1,G2,G3,H,K}. \subsection{Generalized complex structures} Let $M$ be a $2n$-dimensional smooth manifold. The \emph{generalized tangent bundle} on $M$ is defined to be the vector bundle $\mathbb{T}M:=TM\oplus T^\ast M$ whose space of sections is identified with $\mathfrak{X}(M)\oplus \Omega^1(M)$. On this vector bundle we have a natural indefinite inner product $\langle\cdot,\cdot\rangle$ of signature $(n,n)$ that is given by \begin{equation}\label{InnerProduct} \langle X+\xi,Y+\eta \rangle =\dfrac{1}{2}(\xi(Y)+\eta(X)). \end{equation} The bundle $\bigwedge^\bullet T^\ast M$ of differential forms can be viewed as a {\it spinor bundle} for $(\mathbb{T}M,\langle\cdot,\cdot\rangle)$ where the Clifford action of an element $X+\xi\in\mathbb{T}M$ on a differential form $\varphi$ is given by $$(X+\xi)\cdot \varphi=i_X\varphi+\xi\wedge \varphi.$$ It is simple to check that $(X+\xi)^2\varphi=\langle X+\xi,X+\xi\rangle\varphi$. A spinor $\varphi \in \bigwedge^\bullet T^\ast M$ is said to be {\it pure} if its null space $$L_\varphi=\{ X+\xi\in \mathbb{T}M:\ (X+\xi)\cdot \varphi=0\},$$ is maximal isotropic. At each point $p\in M$, the set $\mathfrak{so}(\mathbb{T}M_p)$ of infinitesimal orthogonal transformations of $\mathbb{T}M_p$ with respect to $\langle\cdot,\cdot\rangle_p$ is identified as a vector space with $\textnormal{End}(T_pM)\oplus \bigwedge^2T_p^\ast M\oplus\bigwedge^2T_pM$. Thus, using the exponential map into the Lie group $\textnormal{SO}(\mathbb{T}M_p)$, we get that to each 2-form $B\in \Omega^2(M)$ we can associate an orthogonal transformation $e^{B}=\left( \begin{array}{cc} 1 & 0\\ B & 1 \end{array} \right)$ which acts on $\mathbb{T}M$ by sending $X+\xi\mapsto X+\xi+i_XB$. This transformation will be called a {\it $B$-transformation}. Let us now consider the extension of $\langle\cdot,\cdot\rangle$ to the complexification $\mathbb{T}M\otimes \mathbb{C}$. We can define a generalized almost complex structure on $M$ in three equivalent ways: \begin{definition}\cite{H}\label{Def1} A \emph{generalized almost complex structure} on $M$ is determined by any of the following equivalent objects: \begin{enumerate} \item[$\iota.$] An automorphism $\mathbb{J}:\mathbb{T}M\to \mathbb{T}M$ such that $\mathbb{J}^2=-1$ and which is orthogonal with respect to the natural indefinite inner product $\langle\cdot,\cdot\rangle$ defined in \eqref{InnerProduct}. \item[$\iota\iota.$] A subbundle $L$ of $\mathbb{T}M\otimes \mathbb{C}$ which is maximal isotropic with respect to $\langle\cdot,\cdot\rangle$ and satisfies $L\cap \overline{L}=\{0\}$. \item[$\iota\iota\iota.$] A line subbundle of $\bigwedge^\bullet T^\ast M\otimes \mathbb{C}$ generated at each point by a complex pure spinor $\varphi$ such that its null space satisfies $L_\varphi\cap \overline{L}_\varphi=\{0\}$. \end{enumerate} \end{definition} On the one hand, if $\mathbb{J}$ is a generalized almost complex structure on $M$ then it induces a maximal isotropic subbundle in $\mathbb{T}M\otimes \mathbb{C}$ which is given by its associated $+i$-eigenbundle.
On the other hand, at each point, a pure spinor $\varphi\in \bigwedge^\bullet T^\ast M\otimes \mathbb{C}$ must have the form $$\varphi=e^{B+i\omega}\theta_1\wedge\cdots\wedge\theta_k,$$ where $B$ and $\omega$ are the real and imaginary parts of a complex $2$-form and $(\theta_1,\cdots,\theta_k)$ are linearly independent complex 1-forms spanning $\Delta^\circ$ where $\Delta=\pi_1(L)$; see \cite{C}. Here $\pi_1$ denotes the projection from $\mathbb{T}M\otimes\mathbb{C}$ onto $TM\otimes\mathbb{C}$. If we write $\Omega=\theta_1\wedge\cdots\wedge\theta_k$, then the requirement $L_\varphi\cap \overline{L}_\varphi=\{0\}$ from part $\iota\iota\iota.$ of Definition \ref{Def1} is equivalent to asking that $$(\varphi,\overline{\varphi})=\Omega\wedge \overline{\Omega}\wedge\omega^{n-k}\neq 0.$$ We can also speak about the notion of integrability of a generalized almost complex structure in four equivalent ways. For this purpose we introduce the \emph{Courant bracket} on sections of $\mathbb{T}M$ which is defined as $$[X+\xi,Y+\eta]=[X,Y]+\mathcal{L}_X\eta -\mathcal{L}_Y\xi-\dfrac{1}{2}d(i_X\eta-i_Y\xi).$$ Assuming the notation used in Definition \ref{Def1} we have: \begin{definition}\cite{H,G1}\label{Def2} A generalized almost complex structure on $M$ is said to be \emph{integrable} if any of the following equivalent facts occurs: \begin{enumerate} \item[$\iota.$] The Nijenhuis tensor $N_\mathbb{J}$ of $\mathbb{J}$ with respect to the Courant bracket vanishes: $$N_\mathbb{J}(A,B)=[\mathbb{J}A,\mathbb{J}B]-[A,B]-\mathbb{J}[\mathbb{J}A,B]-\mathbb{J}[A,\mathbb{J}B]=0.$$ \item[$\iota\iota.$] The maximal isotropic subbundle $L$ of $\mathbb{T}M\otimes \mathbb{C}$ is involutive with respect to the Courant bracket. \item[$\iota\iota\iota.$] The Nijenhuis operator $\textnormal{Nij}$ restricted to the maximal isotropic subbundle $L$ of $\mathbb{T}M\otimes \mathbb{C}$ vanishes. That is, $\textnormal{Nij}|_L=0$ where \begin{equation}\label{NijOperator} \textnormal{Nij}(A,B,C)=\dfrac{1}{3}(\langle[A,B],C \rangle+\langle[B,C],A \rangle+\langle[C,A],B \rangle). \end{equation} \item[$\iota\nu.$] There exists a section $X+\xi\in \mathfrak{X}(M)\oplus \Omega^1(M)$ such that $(X+\xi)\cdot \varphi =\textnormal{d}\varphi$. \end{enumerate} If this is the case, the geometric object that we obtain is called a \emph{generalized complex structure}. \end{definition} The following natural number, which is pointwise defined, will allow us to distinguish generalized complex structures from one another. \begin{definition}\cite{H,G1} At each point $p\in M$, the \emph{type} of a generalized almost complex structure $\mathbb{J}$ on $M$ is defined as $$\textnormal{Type}(\mathbb{J})_p=\textnormal{dim}(\Delta_p^\circ)=n-\dim(\Delta_p),$$ where $\Delta_p=\pi_1(L_p)$. This is not necessarily constant along $M$. When $\textnormal{Type}(\mathbb{J})$ is locally constant we say that $\mathbb{J}$ is \emph{regular}. \end{definition} Using generalized complex structures we can establish a notion of generalized K\"ahler structure which generalizes the classical notion of a K\"ahler manifold. For our purposes it is more appropriate to introduce such a notion in terms of generalized almost Hermitian structures as studied in \cite{CD}. Namely: \begin{definition}\cite{CD}\cite{G2}\label{ke} A {\it generalized almost Hermitian structure} on $M$ is a pair $(\mathbb{J},\mathbb{J'})$ of commuting generalized almost complex structures such that $G=-\mathbb{J}\mathbb{J'}$ is positive definite.
If moreover both $\mathbb{J}$ and $\mathbb{J'}$ are integrable then we say that the pair $(\mathbb{J},\mathbb{J'})$ is a {\it generalized K\"ahler structure}. The corresponding metric on $\mathbb{T}M$ given by $$G(X+\alpha, Y+\beta)=\langle \mathbb{J}(X+\alpha),\mathbb{J'}(Y+\beta)\rangle$$ is called a {\it generalized metric}. \end{definition} Several equivalent characterizations of a generalized K\"ahler structure can be found in \cite{CD}. One of the most important reasons for introducing $B$-transformations above is the following result: \begin{proposition}\cite{H,G1,G2}\label{ModuliM} Let $B$ be a 2-form on $M$. \begin{enumerate} \item[$\iota.$] If $\mathbb{J}$ is a generalized almost complex structure on $M$, then so is $e^{-B}\mathbb{J}e^B$ and $\textnormal{Type}(\mathbb{J})_p=\textnormal{Type}(e^{-B}\mathbb{J}e^B)_p$ for all $p\in M$. If $\mathbb{J}$ is integrable, then $e^{-B}\mathbb{J}e^B$ is integrable if and only if $B$ is closed. \item[$\iota\iota.$] If $(\mathbb{J},\mathbb{J'})$ is a generalized almost Hermitian structure on $M$ with generalized metric $G$, then so is $(e^{-B}\mathbb{J}e^B,e^{-B}\mathbb{J}'e^B)$ and its generalized metric is given by $G_B:=e^{-B}Ge^B$. \item[$\iota\iota\iota.$] Given a generalized almost Hermitian structure $(\mathbb{J},\mathbb{J'})$ on $M$ with generalized metric $G$ there always exist a Riemannian metric $g$ on $M$ and a $2$-form $b\in \Omega^2(M)$ such that $$G=e^{b}\left( \begin{array}{cc} & g^{-1}\\ g & \end{array} \right)e^{-b}.$$ In particular, the signature of any generalized metric is $(n,n)$. \end{enumerate} \end{proposition} Let $\mathcal{M}_a$ denote the space of all generalized almost complex structures on $M$ and let $\mathcal{K}_a\subseteq \mathcal{M}_a\times \mathcal{M}_a$ denote the set of all generalized almost Hermitian structures on $M$. The set $\mathcal{B}:=\lbrace e^B\vert\ B\in \Omega^2(M)\rbrace$ is an Abelian group with the commutative product $e^{B_1}\cdot e^{B_2}=e^{B_1+B_2}$. By Proposition \ref{ModuliM} we have that the map $\cdot:\mathcal{B}\times \mathcal{M}_a\to \mathcal{M}_a$ given by $e^{B}\cdot \mathbb{J}:=e^{-B}\mathbb{J}e^B$ is a well-defined action of $\mathcal{B}$ on $\mathcal{M}_a$. In a similar way we obtain a well-defined diagonal action of $\mathcal{B}$ on $\mathcal{K}_a$. These facts motivate the following definition: \begin{definition}\cite{GVV}\label{ModuliDef} The \emph{moduli space} of \begin{enumerate} \item[$\iota.$] generalized almost complex structures on $M$ under $B$-transformations is defined as the quotient space $\mathfrak{M}_a=\mathcal{M}_a/\mathcal{B}$ determined by the natural action of $\mathcal{B}$ on $\mathcal{M}_a$; and \item[$\iota\iota.$] generalized almost Hermitian structures on $M$ under $B$-transformations is defined as the quotient space $\mathfrak{K}_a=\mathcal{K}_a/\mathcal{B}$ determined by the diagonal action of $\mathcal{B}$ on $\mathcal{K}_a$. \end{enumerate} \end{definition} These quotient spaces were well described in \cite{GVV} for the case of complex maximal flag manifolds when one considers the action by invariant $B$-transformations. It is simple to see that we may define similar quotient spaces if we consider the set of all generalized complex (K\"ahler) structures on $M$; we just need to act by the subgroup $\mathcal{B}_{cl}\subset \mathcal{B}$ determined by the closed 2-forms on $M$. However, for our purposes, in this paper we will just deal with the cases introduced in Definition \ref{ModuliDef}.
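Before turning to the examples, the following short numerical sketch (added here only as an illustration, and not part of the original computations) verifies Proposition \ref{ModuliM} in the simplest situation: for the symplectic-type structure $\mathbb{J}_\omega$ recalled in Example \ref{ExampleS} below, conjugation by $e^B$ again yields a generalized almost complex structure which is orthogonal for $\langle\cdot,\cdot\rangle$, and the explicit block formula for $e^{-B}\mathbb{J}_\omega e^B$ is recovered.
\begin{verbatim}
# Numerical check of Proposition (ModuliM), part i, on R^{2n} with n = 2.
import numpy as np

n = 2
I2n = np.eye(2 * n)
Z = np.zeros((2 * n, 2 * n))
# standard symplectic form omega and a random antisymmetric 2-form B
omega = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
A = np.random.randn(2 * n, 2 * n)
B = A - A.T

J = np.block([[Z, -np.linalg.inv(omega)], [omega, Z]])   # symplectic-type J
eB = np.block([[I2n, Z], [B, I2n]])                      # X + xi -> X + xi + i_X B
eBinv = np.block([[I2n, Z], [-B, I2n]])
Q = 0.5 * np.block([[Z, I2n], [I2n, Z]])                 # the pairing <.,.>

JB = eBinv @ J @ eB
assert np.allclose(eB.T @ Q @ eB, Q)             # e^B is orthogonal for <.,.>
assert np.allclose(JB @ JB, -np.eye(4 * n))      # the B-transform squares to -1
assert np.allclose(JB.T @ Q @ JB, Q)             # and is orthogonal as well
# explicit block formula of Example (ExampleS):
w = np.linalg.inv(omega)
assert np.allclose(JB, np.block([[-w @ B, -w], [omega + B @ w @ B, B @ w]]))
\end{verbatim}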
Let us now introduce some basic but instructive examples. \begin{example}[Symplectic type $k=0$]\label{ExampleS} Given an {\it almost symplectic} manifold $(M,\omega)$ (that is, $\omega$ is a nondegenerate 2-form on $M$, not necessarily closed) we get that $$\mathbb{J}_\omega=\left( \begin{array}{cc} 0 & -\omega^{-1}\\ \omega & 0 \end{array} \right),$$ defines a generalized almost complex structure on $M$. Such a structure is integrable if and only if $\omega$ is closed, in which case we say that this is a generalized complex structure of {\it symplectic type}. We have that $\mathbb{J}_\omega$ determines a maximal isotropic subbundle $L_\omega=\lbrace X-i\omega(X):\ X\in TM\otimes \mathbb{C}\rbrace$ and a pure spinor line generated by $\varphi=e^{i\omega}$. This is a regular generalized almost complex structure of type $k=0$. We may transform this example by a $B$-transformation and obtain another generalized almost complex structure of type $k=0$ as follows: $$e^{-B}\mathbb{J}_\omega e^B=\left( \begin{array}{cc} -\omega^{-1}B & -\omega^{-1}\\ \omega+B\omega^{-1}B& B\omega^{-1} \end{array} \right).$$ This has maximal isotropic subbundle $e^{-B}(L_\omega)=\lbrace X-(B+i\omega)(X):\ X\in TM\otimes \mathbb{C}\rbrace$ and pure spinor $e^{B}\cdot\varphi=e^{B+i\omega}$. \end{example} \begin{example}[Complex type $k=n$]\label{ExampleC} If $(M,J)$ is an almost complex manifold, then we have that $$\mathbb{J}_c=\left( \begin{array}{cc} -J & 0\\ 0 & J^\ast \end{array} \right),$$ defines a generalized almost complex structure on $M$, which is integrable if and only if $J$ is integrable in the classical sense. We refer to such a generalized complex structure as being of {\it complex type}. This is a regular generalized almost complex structure of type $k=n$. We have that $\mathbb{J}_c$ determines a maximal isotropic subbundle $L=TM_{0,1}\oplus T^\ast M_{1,0} \subset \mathbb TM\otimes \mathbb C$ where $TM_{1,0}=\overline{TM_{0,1}}$ is the $+i$-eigenspace of $J$ and a pure spinor line generated by $\varphi=\Omega^{n,0}$ where $\Omega^{n,0}$ is any generator of the $(n,0)$-forms for the almost complex manifold $(M,J)$. This example may be transformed by a $B$-transformation $$e^{-B}\mathbb{J}_ce^B=\left( \begin{array}{cc} -J & 0\\ BJ+J^\ast B& J^\ast \end{array} \right).$$ The obtained structure has maximal isotropic subbundle $e^{-B}(L)=\lbrace X+\xi-i_XB:\ X+\xi\in TM_{0,1}\oplus T^\ast M_{1,0}\rbrace$ and pure spinor $e^{B}\cdot\varphi=e^B\Omega^{n,0}$. \end{example} \begin{example}\label{typical} Assume that $(M,J,\omega)$ is a K\"ahler manifold satisfying the conditions of both Examples \ref{ExampleS} and \ref{ExampleC}. This means that $J$ is a complex structure on $M$ and $\omega$ is a symplectic form of type $(1,1)$, verifying $\omega J=-J^\ast \omega$, which implies that $\mathbb{J}_c\mathbb{J}_\omega=\mathbb{J}_\omega\mathbb{J}_c$. It is simple to check that $$G=-\mathbb{J}_c\mathbb{J}_\omega=\left(\begin{array}{cc} 0 & g^{-1}\\ g & 0 \end{array} \right),$$ defines a generalized metric on $\mathbb{T} M$, thus obtaining that $(\mathbb{J}_c,\mathbb{J}_\omega)$ is a generalized K\"ahler structure on $M$. Here $g$ denotes the Riemannian metric on $M$ induced by the formula $g(X,Y)=\omega(X,JY)$. \end{example} \subsection{Real flag manifolds} We introduce real flag manifolds by following the nice and brief exposition provided in \cite{FBS}. Let us assume that $\mathfrak{g}$ is the split real form of a complex semisimple Lie algebra $\mathfrak{g}_\mathbb{C}:=\mathfrak{g}\otimes_\mathbb{R}\mathbb{C}$. 
If $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{n}$ is an Iwasawa decomposition, then we get that $\mathfrak{a}$ is a Cartan subalgebra of $\mathfrak{g}$. Let $\Pi$ denote the root system associated to the pair $(\mathfrak{g},\mathfrak{a})$. If $\alpha\in\mathfrak{a}^\ast$ is a root, then we write $$\mathfrak{g}_\alpha=\lbrace X\in\mathfrak{g}\vert\ \textnormal{ad}(H)(X)=\alpha(H)X,\ \forall H\in\mathfrak{a}\rbrace,$$ for its corresponding root space, which is $1$-dimensional since $\mathfrak{g}$ is split. Let $\Pi^+$ be a choice of positive roots with corresponding set of positive simple roots $\Sigma$. The set of parabolic subalgebras of $\mathfrak{g}$ is parametrized by the subsets $\Theta$ of $\Sigma$. Namely, given $\Theta\subset \Sigma$, the corresponding parabolic subalgebra is given by $$\mathfrak{p}_\Theta=\mathfrak{a}\oplus \sum_{\alpha\in \Pi^+}\mathfrak{g}_\alpha\oplus\sum_{\alpha\in \langle\Theta\rangle^{-}}\mathfrak{g}_\alpha=\mathfrak{a}\oplus \sum_{\alpha\in \langle\Theta\rangle^+\cup \langle\Theta\rangle^{-}}\mathfrak{g}_\alpha\oplus\sum_{\alpha\in \Pi^+\backslash \langle\Theta\rangle^{+}}\mathfrak{g}_\alpha,$$ where $\langle \Theta \rangle^\pm$ denotes the set of positive/negative roots generated by $\Theta$. Let $G$ denote the Lie group of inner automorphisms of $\mathfrak{g}$ which is connected and generated by $\textnormal{Exp}\ \textnormal{ad}(\mathfrak{g})$ inside $\textnormal{GL}(\mathfrak{g})$. Let $K$ be the maximal compact subgroup of $G$. This is generated by $\textnormal{Exp}\ \textnormal{ad}(\mathfrak{k})$. The standard parabolic subgroup $P_\Theta$ of $G$ associated to $\Theta$ is the normalizer of $\mathfrak{p}_\Theta$ in $G$ and its associated \emph{flag manifold} is defined as the homogeneous space $\mathbb{F}_\Theta:=G/P_\Theta$. Given that $K$ acts transitively on $\mathbb{F}_\Theta$ we may also identify $\mathbb{F}_\Theta=K/K_\Theta$ where $K_\Theta=P_\Theta\cap K$. Fixing an arbitrary origin $b_\Theta$ in $\mathbb{F}_\Theta$, we identify the tangent space $T_{b_\Theta}\mathbb{F}_\Theta$ with the nilpotent Lie algebra $$\mathfrak{n}_\Theta^{-}=\sum_{\alpha\in \Pi^-\backslash\langle\Theta\rangle^{-}}\mathfrak{g}_\alpha.$$ It is important to notice that under this identification the isotropy representation of $K_\Theta$ on $T_{b_\Theta}\mathbb{F}_\Theta$ is just the adjoint representation since $\mathfrak{n}_\Theta^{-}$ is normalized by $K_\Theta$. The Lie algebra $\mathfrak{k}_\Theta$ of $K_\Theta$ is given by $$\mathfrak{k}_\Theta=\sum_{\alpha\in \langle\Theta\rangle^+\cup \langle\Theta\rangle^{-}}(\mathfrak{g}_\alpha\oplus \mathfrak{g}_{-\alpha})\cap \mathfrak{k}.$$ The compactness of $K$ implies that $\mathfrak{k}_\Theta$ has a reductive complement $\mathfrak{m}_\Theta$ so that $\mathfrak{k}=\mathfrak{k}_\Theta\oplus \mathfrak{m}_\Theta$ and $T_{b_\Theta}\mathbb{F}_\Theta$ is also identified with $\mathfrak{m}_\Theta$. Indeed, the map $X_\alpha\mapsto X_\alpha-X_{-\alpha}$ for $\alpha\in \Pi^-\backslash\langle\Theta\rangle^{-}$ is an invariant map from $\mathfrak{n}_\Theta^{-}$ to $\mathfrak{m}_\Theta$. Throughout this paper we will refer to the representation of $K_\Theta$ on either $\mathfrak{n}_\Theta^{-}$ or $\mathfrak{m}_\Theta$ as the \emph{isotropy representation}, without making any distinction or special mention. In some cases, we will even use $\displaystyle \mathfrak{n}_\Theta^{+}=\sum_{\alpha\in \Pi^-\backslash\langle\Theta\rangle^{-}}\mathfrak{g}_{-\alpha}$ instead of $\mathfrak{n}_\Theta^{-}$. Let $M$ be the centralizer of $\mathfrak{a}$ in $K$.
Then $K_\Theta=M\cdot (K_\Theta)_0$ where $(K_\Theta)_0$ is the connected component of the identity in $K_\Theta$. Thus $M$ acts on $T_{b_\Theta}\mathbb{F}_\Theta$ by restricting the isotropy representation of $K_\Theta$. The group $M$ is finite and acts on each root space $\mathfrak{g}_\alpha$ leaving it invariant. \begin{definition}\cite{PS} Two roots $\alpha$ and $\beta$ are called $M$-\emph{equivalent}, which we will write as $\alpha\sim_M\beta$, if the representations of $M$ on $\mathfrak{g}_\alpha$ and $\mathfrak{g}_\beta$ are equivalent. \end{definition} See \cite{PS} for more details about the description of the $M$-equivalence classes. Let $\langle \cdot,\cdot \rangle$ be the Cartan--Killing form of $\mathfrak{g}$. As the restriction of $\langle \cdot,\cdot \rangle$ to $\mathfrak{a}$ is non-degenerate, for every root $\alpha\in\mathfrak{a}^\ast$, we denote by $H_\alpha$ the unique element in $\mathfrak{a}$ such that $\alpha(\cdot)=\langle H_\alpha,\cdot\rangle$. Thus: \begin{remark}\cite{PS} If $\mathfrak{g}$ is a split real form of a complex semisimple Lie algebra $\mathfrak{g}_\mathbb{C}$, then $M$ is the finite abelian group given by $$M=\lbrace m_\gamma=\exp(\pi iH^\vee_\gamma)\vert\ \gamma\in \Pi \rbrace,$$ where $H^\vee_\gamma=\dfrac{2H_\gamma}{\langle \gamma,\gamma\rangle}$ is the co-root associated to $\gamma\in\Pi$. For the formula above, the exponential $\exp(t iH^\vee_\gamma)$ is taken in the complex group $\textnormal{Aut}(\mathfrak{g}_\mathbb{C})$ but for $t=\pi$ we get $m_\gamma\in \textnormal{Aut}(\mathfrak{g})$ which acts on the root spaces $\mathfrak{g}_\alpha$ by $\textnormal{Ad}(m_\gamma)(X)=\pm X$. Moreover, two roots $\alpha$ and $\beta$ are $M$-equivalent if and only if for every $\gamma\in\Pi$ we get that \begin{equation}\label{Mrelation} \dfrac{2\langle \gamma,\alpha\rangle}{\langle \gamma,\gamma\rangle}\equiv \dfrac{2\langle \gamma,\beta\rangle}{\langle \gamma,\gamma\rangle}\ \textnormal{mod}\ 2. \end{equation} \end{remark} For the purposes of the next sections, we fix from now on a Weyl basis for $\mathfrak{g}$, which amounts to taking $X_\alpha\in\mathfrak{g}_\alpha$ such that $\langle X_\alpha,X_{-\alpha}\rangle=1$ and $[X_\alpha,X_\beta]=m_{\alpha,\beta}X_{\alpha+\beta}$ with $m_{\alpha,\beta}\in\mathbb{R}$, $m_{-\alpha,-\beta}=-m_{\alpha,\beta}$, and $m_{\alpha,\beta}=0$ if $\alpha+\beta$ is not a root. \section{Invariant generalized almost complex structures}\label{S:3} From the general theory of invariant tensors on homogeneous spaces, we get that $K$-invariant generalized almost complex structures on the flag manifold $\mathbb{F}_\Theta=K/K_\Theta$ are in one-to-one correspondence with complex structures $\mathbb{J}:T_{b_\Theta} \mathbb{F}_\Theta\oplus T_{b_\Theta}^\ast \mathbb{F}_\Theta\to T_{b_\Theta} \mathbb{F}_\Theta\oplus T_{b_\Theta}^\ast \mathbb{F}_\Theta$ such that: \begin{enumerate} \item[$\iota.$] $\langle \mathbb{J}(x) , \mathbb{J}(y)\rangle=\langle x,y \rangle$ for all $x,y\in T_{b_\Theta} \mathbb{F}_\Theta\oplus T_{b_\Theta}^\ast \mathbb{F}_\Theta$ where $\langle \cdot,\cdot\rangle$ is the natural indefinite inner product of signature $(n,n)$ defined on $T_{b_\Theta} \mathbb{F}_\Theta\oplus T_{b_\Theta}^\ast \mathbb{F}_\Theta$, which actually, after fixing a Weyl basis, agrees with the Cartan--Killing form; and \item[$\iota\iota.$] $(\textnormal{Ad}\oplus \textnormal{Ad}^\ast)(g)\circ \mathbb{J}=\mathbb{J}\circ (\textnormal{Ad}\oplus \textnormal{Ad}^\ast)(g)$ for all $g\in K_\Theta=M\cdot(K_\Theta)_0$.
\end{enumerate} Making use of the Cartan--Killing form we can identify $\mathfrak{n}_\Theta^{+}$ with the dual space $T_{b_\Theta}^\ast \mathbb{F}_\Theta$. Therefore, we are interested in determining $K_\Theta$-invariant orthogonal complex structures $\mathbb{J}:\mathfrak{n}_\Theta^{-}\oplus \mathfrak{n}_\Theta^{+}\to \mathfrak{n}_\Theta^{-}\oplus \mathfrak{n}_\Theta^{+}$. Since generalized almost complex structures are in correspondence with maximal isotropic subspaces, we should look at the invariant maximal isotropic subspaces of $\mathfrak{n}_\Theta^{-}\oplus \mathfrak{n}_\Theta^{+}$ with respect to the Cartan--Killing form $\langle \cdot,\cdot\rangle$. \begin{remark}\label{PartialCases} Recall that $K_\Theta=M\cdot(K_\Theta)_0$ and suppose that $\mathbb{J}:\mathfrak{n}_\Theta^{-}\oplus \mathfrak{n}_\Theta^{+}\to \mathfrak{n}_\Theta^{-}\oplus \mathfrak{n}_\Theta^{+}$ is just $M$-invariant. A straightforward computation allows us to prove that $\mathbb{J}$ is also $K_\Theta$-invariant if and only if it is $(K_\Theta)_0$-invariant, which, by the connectedness, is equivalent to saying that $(\textnormal{ad}\oplus\textnormal{ad}^\ast)(X)\circ \mathbb{J}=\mathbb{J}\circ (\textnormal{ad}\oplus\textnormal{ad}^\ast)(X)$ for all $X\in\mathfrak{k}_\Theta$. \end{remark} For every $\alpha\in \Pi^-\backslash\langle\Theta\rangle^{-}$ we denote its $M$-equivalence class by $[\alpha]_M$. We also denote $$\displaystyle V_{[\alpha]_M}=\sum_{\alpha \sim_M \beta} \mathfrak{g}_\beta\qquad\textnormal{and}\qquad\displaystyle V^\ast _{[\alpha]_M}= \sum_{\alpha \sim_M \beta} \mathfrak{g}^\ast _{\beta} =\sum_{\alpha \sim_M \beta} \mathfrak{g}_{-\beta}.$$ \begin{lemma}\label{decomposition} Each $M$-invariant subspace $L$ in $\mathfrak{n}_\Theta^{-}\oplus \mathfrak{n}_\Theta^{+}$ has the form \begin{equation*} L=\sum_{[\alpha]_M}L\cap (V_{[\alpha]_M} \oplus V^\ast _{[\alpha]_M}). \end{equation*} \end{lemma} \begin{proof} Consider $\alpha\in\Pi^-\backslash\langle\Theta\rangle^{-}$ and $m_\gamma=\exp(\pi iH^\vee_\gamma)=e^{\pi iH^\vee_\gamma}\in M$. Given that $\textnormal{Ad}^\ast(m_\gamma)=\textnormal{Ad}(m_\gamma^{-1})$ we get that $$(\textnormal{Ad}\oplus \textnormal{Ad}^\ast)(m_\gamma)(X_\alpha+X_{-\alpha})=\textnormal{Ad}(e^{\pi iH^\vee_\gamma})X_\alpha+\textnormal{Ad}(e^{-\pi iH^\vee_\gamma})X_{-\alpha}.$$ Now, as we also have that $\textnormal{Ad}(e^Z)=e^{\textnormal{ad}(Z)}$ it follows that \begin{eqnarray*} (\textnormal{Ad}\oplus \textnormal{Ad}^\ast)(m_\gamma)(X_\alpha+X_{-\alpha})& = & e^{i\pi \textnormal{ad}(H^\vee_\gamma)}X_\alpha+e^{-i\pi \textnormal{ad}(H^\vee_\gamma)}X_{-\alpha}\\ & = & e^{i\pi \alpha(H^\vee_\gamma)}X_\alpha+e^{i\pi \alpha(H^\vee_\gamma)}X_{-\alpha}\\ & = & e^{i\pi \alpha(H^\vee_\gamma)}(X_\alpha+X_{-\alpha})\in \mathfrak{g}_\alpha\oplus \mathfrak{g}_{-\alpha}. \end{eqnarray*} Then, $e^{i\pi \alpha(H^\vee_\gamma)}$ is an eigenvalue of $(\textrm{Ad} \oplus \textrm{Ad}^\ast )(M)$. Note that, from \cite{PS}, we know that $\alpha \sim_M \beta$ if and only if the congruence \eqref{Mrelation} holds true which is actually equivalent to asking that $e^{i\pi \alpha(H^\vee_\gamma)}=e^{i\pi \beta(H^\vee_\gamma)}$ for all $\gamma\in \Pi$. Therefore, $\displaystyle V_{[\alpha]_M} \oplus V^\ast _{[\alpha]_M}=\sum_{\alpha \sim_M \beta} \mathfrak{g}_\beta\oplus \mathfrak{g}_{-\beta}$ is a generalized eigenspace of $(\textrm{Ad} \oplus \textrm{Ad}^\ast )(M)$ associated with the eigenvalue $e^{i\pi \alpha(H^\vee_\gamma)}$.
Since $L$ is invariant, it follows that $L$ is written as a sum of generalized eigenspaces of $(\textrm{Ad} \oplus \textrm{Ad}^\ast )(M)$, that is, $$L=\sum_{[\alpha]_M}L\cap (V_{[\alpha]_M} \oplus V^\ast _{[\alpha]_M}).$$ \end{proof} \begin{lemma}\label{isotropic} Let $\displaystyle L=\sum_{[\alpha]_M}L\cap (V_{[\alpha]_M} \oplus V^\ast _{[\alpha]_M})$ be an $M$-invariant subspace. Then $L$ is isotropic if and only if for each $\alpha$ we have that \begin{equation*} L_{[\alpha]_M} = L\cap(V_{[\alpha]_M}\oplus V^\ast _{[\alpha]_M}), \end{equation*} is isotropic. Moreover, if $L$ is maximal isotropic, then $L_{[\alpha]_M}$ is also maximal isotropic for each $\alpha$. \end{lemma} \begin{proof} If $L$ is isotropic then it follows immediately that $L_{[\alpha]_M}$ is also isotropic. Conversely, suppose that $L_{[\alpha]_M}$ is isotropic for each $\alpha$. Thus, $\langle X,Y\rangle=0$ for all $X,Y\in L_{[\alpha]_M}$. If $\alpha \nsim_M \beta$, then we have that $\langle X,Y\rangle=0$ for all $X \in V_{[\alpha]_M} \oplus V^\ast _{[\alpha]_M}$ and $Y\in V_{[\beta]_M} \oplus V^\ast _{[\beta]_M}$, because $\langle \mathfrak{g}_\alpha,\mathfrak{g}_\beta\rangle=0$ unless $\beta=-\alpha$, which implies that $L$ is isotropic. Finally, if $L$ is maximal isotropic, then the fact that $\displaystyle L = \sum_{[\alpha]_M} L_{[\alpha]_M}$ ensures that each $L_{[\alpha]_M}$ is maximal isotropic. \end{proof} Lemmas \ref{decomposition} and \ref{isotropic} imply that if $\displaystyle L = \sum_{[\alpha]_M} L_{[\alpha]_M}$ is an $M$-invariant maximal isotropic subspace in $\mathfrak{n}_\Theta^{-}\oplus \mathfrak{n}_\Theta^{+}$, then $\displaystyle L\otimes \mathbb{C} = \sum_{[\alpha]_M} (L_{[\alpha]_M}\otimes\mathbb{C})$ is an $M$-invariant maximal isotropic subspace of $(\mathfrak{n}_\Theta^{-}\oplus \mathfrak{n}_\Theta^{+})\otimes \mathbb{C}$ with respect to $\langle\cdot,\cdot\rangle$ extended to the complexification. Therefore, because of the bijective correspondence between maximal isotropic subspaces and generalized complex structures, we have that if $\mathbb{J}$ is an $M$-invariant generalized almost complex structure on $\mathbb{F}_\Theta$, then \[ \mathbb{J} = \sum_{[\alpha]_M} \mathbb{J}_{[\alpha]_M}, \] where $\mathbb{J}_{[\alpha]_M}$ is the restriction of $\mathbb{J}$ to the subspace $V_{[\alpha]_M} \oplus V^\ast _{[\alpha]_M}$. Here $\mathbb{J}_{[\alpha]_M}$ is a generalized complex structure on $V_{[\alpha]_M}$ with associated $M$-invariant maximal isotropic subspace $L_{[\alpha]_M}\otimes\mathbb{C}$. As a consequence, the dimension of $V_{[\alpha]_M}$ must be even and hence the number of roots in every $M$-equivalence class $[\alpha]_M$ must be even. Summing up, using the description of the $M$-equivalence classes found in \cite{PS} we get: \begin{theorem}\label{P1} A real flag manifold $\mathbb{F}_\Theta$ admits an $M$-invariant generalized almost complex structure if and only if the number of roots in every $M$-equivalence class $[\alpha]_M$ is even. As a consequence, the real flag manifolds $\mathbb{F}_\Theta$ admitting $K_\Theta$-invariant generalized almost complex structures are those with $\Theta$ described in Table \ref{table1}. \end{theorem} \begin{proof} We only have to prove the sufficiency. Let $[\alpha]_M$ be an $M$-equivalence class such that the subspace $\displaystyle V_{[\alpha]_M}=\sum_{\alpha \sim_M \beta} \mathfrak{g}_\beta$ has even dimension. Recall that $\textnormal{Ad}(m)(X_\beta)=\pm X_\beta$ for all $X_\beta\in\mathfrak{g}_\beta$ and $m\in M$.
In this equality, the sign does not change when $\beta$ runs through an $M$-equivalence class. Given that $\textnormal{Ad}^\ast(m)=\textnormal{Ad}(m^{-1})$ we also have that $(\textnormal{Ad}\oplus \textnormal{Ad}^\ast)(m)(X_\beta+X_{-\beta})=\pm(X_\beta+X_{-\beta})$. Thus, we get that $(\textnormal{Ad}\oplus \textnormal{Ad}^\ast)(m)=\pm 1$ on $V_{[\alpha]_M} \oplus V^\ast _{[\alpha]_M}$. Therefore, all orthogonal complex structures on $V_{[\alpha]_M} \oplus V^\ast _{[\alpha]_M}$ are clearly $M$-invariant. Finally, by taking direct sums of orthogonal complex structures on the several $V_{[\alpha]_M} \oplus V^\ast _{[\alpha]_M}$ we obtain $M$-invariant generalized almost complex structures on $\mathfrak{n}_\Theta^-=T_{b_\Theta}\mathbb{F}_\Theta$. \end{proof} \begin{table} \begin{tabular}{c|r} $\textnormal{Lie algebra type}$ & $\Theta$ \\ \hline $A_3$ & $\emptyset$\\ $B_2$ & $\emptyset$\\ $B_3$ & $\lbrace \lambda_1-\lambda_2,\lambda_2-\lambda_3\rbrace$\\ $C_4$ & $\emptyset$, $\lbrace \lambda_1-\lambda_2,\lambda_3-\lambda_4\rbrace$, $\lbrace \lambda_3-\lambda_4,2\lambda_4\rbrace$\\ $C_l$ with $l\neq 4$ & $\emptyset$ only when $l$ is even\\ & $\lbrace\lambda_d-\lambda_{d+1},\cdots, \lambda_{l-1}-\lambda_l,2\lambda_l\rbrace$ for $1<d\leq l-1$ with $d$ odd, for all $l$ \\ $D_4$ & $\emptyset$, $\lbrace \lambda_1-\lambda_2,\lambda_3-\lambda_4\rbrace$, $\lbrace \lambda_1-\lambda_2,\lambda_3+\lambda_4\rbrace$, $\lbrace \lambda_3-\lambda_4,\lambda_3+\lambda_4\rbrace$\\ & $\lbrace \lambda_1-\lambda_2,\lambda_2-\lambda_3,\lambda_3+\lambda_4\rbrace$, $\lbrace \lambda_2-\lambda_3,\lambda_3-\lambda_4,\lambda_3+\lambda_4\rbrace$\\ $D_l$ with $l\geq 5$ & $\emptyset$, $\lbrace\lambda_d-\lambda_{d+1},\cdots,\lambda_{l-1}-\lambda_l,\lambda_{l-1}+\lambda_l \rbrace$ for $1<d\leq l-1$\\ $G_2$ & $\emptyset$ \end{tabular} \caption{$M$-equivalence classes in $\Pi^-\backslash\langle\Theta\rangle^{-}$ with an even number of elements.}\label{table1} \end{table} \subsection{The Courant bracket at the origin $b_\Theta$} Recall that the Courant bracket on sections of $\mathbb{T}M$ is given by $$[X+\xi,Y+\eta]=[X,Y]+\mathcal{L}_X\eta -\mathcal{L}_Y\xi-\dfrac{1}{2}\textnormal{d}(i_X\eta-i_Y\xi).$$ We want to describe this bracket at the origin $b_\Theta$ of $\mathbb{F}_\Theta$. Let $\langle \cdot,\cdot\rangle$ denote the Cartan--Killing form on $\mathfrak{g}$. As we said before, the tangent and the cotangent spaces of $\mathbb{F}_\Theta$ at $b_\Theta$ are respectively identified with the spaces $$\mathfrak{n}_\Theta^{-}=\sum_{\alpha\in \Pi^-\backslash\langle\Theta\rangle^{-}}\mathfrak{g}_\alpha\qquad\textnormal{and}\qquad (\mathfrak{n}_\Theta^{-})^\ast= \sum_{\alpha\in \Pi^-\backslash\langle\Theta\rangle^{-}}\mathfrak{g}_\alpha^\ast=\sum_{\alpha\in \Pi^-\backslash\langle\Theta\rangle^{-}}\mathfrak{g}_{-\alpha}.$$ The identification of $\mathfrak{g}_\alpha^\ast$ with $\mathfrak{g}_{-\alpha}$ is obtained by using the Cartan--Killing form. Indeed, as $\langle X_\alpha, X_{-\alpha}\rangle=1$ we have that every element $X_\alpha^\ast\in\mathfrak{g}_\alpha^\ast$ can be represented as $X_\alpha^\ast=\langle X_{-\alpha},\cdot\rangle\approx X_{-\alpha}\in \mathfrak{g}_{-\alpha}$. For each $X\in \mathfrak{n}_\Theta^{-}$, we denote its respective element in $(\mathfrak{n}_\Theta^{-})^\ast$ by $X^\ast=k^\flat(X^-):=\langle X^-,\cdot\rangle$, where if $\displaystyle X=\sum_{\alpha\in \Pi^-\backslash\langle\Theta\rangle^{-}} X_\alpha$, then $\displaystyle X^-=\sum_{\alpha\in \Pi^-\backslash\langle\Theta\rangle^{-}} X_{-\alpha}$.
Therefore, for all $X,Y\in \mathfrak{n}_\Theta^{-}$ and $Z^\ast, W^\ast\in (\mathfrak{n}_\Theta^{-})^\ast$, we have: \begin{eqnarray*} [X+Z^\ast,Y+W^\ast] &=& [X,Y]+\mathcal{L}_{X}W^\ast-\mathcal{L}_{Y}Z^\ast-\dfrac{1}{2}\textnormal{d}(i_{X}W^\ast-i_{Y}Z^\ast)\\ & = & [X,Y]+\mathcal{L}_{X}k^\flat(W^-)-\mathcal{L}_{Y}k^\flat(Z^-)-\dfrac{1}{2}\textnormal{d}(i_{X}k^\flat(W^-)-i_{Y}k^\flat(Z^-))\\ & = & [X,Y]+\textnormal{d}(i_Xk^\flat(W^-))+i_X(\textnormal{d}k^\flat(W^-))-\textnormal{d}(i_Yk^\flat(Z^-))-i_Y(\textnormal{d}k^\flat(Z^-))\\ & - & \dfrac{1}{2}\textnormal{d}(i_{X}k^\flat(W^-))+\dfrac{1}{2}\textnormal{d}(i_{Y}k^\flat(Z^-))\\ & = & [X,Y]+\dfrac{1}{2}\textnormal{d}(i_Xk^\flat(W^-))+i_X(\textnormal{d}k^\flat(W^-))-\dfrac{1}{2}\textnormal{d}(i_Yk^\flat(Z^-))-i_Y(\textnormal{d}k^\flat(Z^-)). \end{eqnarray*} Given that the Cartan--Killing form $\langle\cdot,\cdot\rangle$ induces a bi-invariant metric on $G$, we have that its Levi--Civita connection is given by $$\nabla_{X}Y=-\nabla_{Y}X=\dfrac{1}{2}[X,Y],\quad \textnormal{for all}\quad X,Y\in\mathfrak{g},$$ and moreover, each element $X\in \mathfrak{g}$ is a Killing vector field, that is, $\mathcal{L}_{X}\langle\cdot,\cdot\rangle=0$. In terms of the Levi--Civita connection this means: $$\langle \nabla_YX,Z\rangle+\langle Y,\nabla_ZX\rangle=\dfrac{1}{2}\left(\langle [Y,X],Z\rangle+\langle Y,[Z,X]\rangle\right)=0,\quad\textnormal{for all}\quad X,Y,Z\in\mathfrak{g}.$$ On the one hand, $\textnormal{d}(i_Xk^\flat(W^-))=\textnormal{d}\langle W^-,X\rangle$. So, for all $A\in \mathfrak{n}_\Theta^{-}$: $$\textnormal{d}\langle W^-,X\rangle (A)=A\cdot \langle W^-,X\rangle=\langle \nabla_AW^-,X\rangle+\langle W^-,\nabla_AX\rangle=-\langle \nabla_{W^-}A,X\rangle-\langle W^-,\nabla_XA\rangle=0.$$ On the other hand, $i_X(\textnormal{d}k^\flat(W^-))=\textnormal{d}k^\flat(W^-)(X)$. Thus, for all $A\in \mathfrak{n}_\Theta^{-}$: \begin{eqnarray*} \textnormal{d}k^\flat(W^-)(X)(A) & = & \textnormal{d}k^\flat(W^-)(X,A)\\ & = & X\cdot k^\flat(W^-)(A)-A\cdot k^\flat(W^-)(X)-k^\flat(W^-)([X,A])\\ & = & X\cdot \langle W^-,A\rangle-A\cdot \langle W^-,X\rangle- \langle W^-,[X,A]\rangle\\ & = & -2 \langle W^-,\nabla_XA\rangle\\ & = & 2 \langle W^-,\nabla_AX\rangle\\ & = & -2 \langle \nabla_{W^-}X,A\rangle\\ & = & k^\flat([X,W^-])(A), \end{eqnarray*} that is, $i_X(\textnormal{d}k^\flat(W^-))=k^\flat([X,W^-])$. So, it follows that the Courant bracket reduces to $$[X+Z^\ast,Y+W^\ast]=[X,Y]+k^\flat([X,W^-])-k^\flat([Y,Z^-]).$$ With similar computations we obtain that $k^\flat([X,W^-])=\textnormal{ad}^\ast_X(W^\ast)$ where $\textnormal{ad}^\ast$ denotes the co-adjoint representation of $\mathfrak{g}$. Therefore, we get that \begin{equation*} [X+Z^\ast,Y+W^\ast]=[X,Y]+\textnormal{ad}^\ast_X(W^\ast)-\textnormal{ad}^\ast_Y(Z^\ast). \end{equation*} Summing up, \begin{proposition}\label{CaurentB} At the origin $b_\Theta$ of $\mathbb{F}_\Theta$ we have that the Courant bracket takes the form \begin{equation}\label{CourantBracket} [X+Z^\ast,Y+W^\ast]=[X,Y]+\textnormal{ad}^\ast_X(W^\ast)-\textnormal{ad}^\ast_Y(Z^\ast), \end{equation} and the Nijenhuis operator \begin{equation}\label{NijenhuisTensor} \textnormal{Nij}(A_1+A_2^\ast,B_1+B_2^\ast,C_1+C_2^\ast) = \dfrac{1}{2}\left(\langle A_2^-,[B_1,C_1] \rangle+\langle B_2^-,[C_1,A_1] \rangle +\langle C_2^-,[A_1,B_1] \rangle\right), \end{equation} for all $X,Y,A_1,B_1,C_1\in \mathfrak{n}_\Theta^{-}$ and $Z^\ast, W^\ast, A_2^\ast,B_2^\ast,C_2^\ast\in (\mathfrak{n}^{-}_\Theta)^\ast$.
\end{proposition} \begin{proof} The expression \eqref{NijenhuisTensor} for the Nijenhuis operator at the origin follows directly from substituting the expression \eqref{CourantBracket} obtained for the Courant bracket into \eqref{NijOperator}. \end{proof} Using a Weyl basis for $\mathfrak{g}$ it is simple to check that the expression we obtained for the Nijenhuis operator \eqref{NijenhuisTensor} just depends on triples of roots $(\alpha,\beta,\alpha+\beta)$. Thus, we have: \begin{corollary}\label{usefulformula1} For every triple of roots $(\alpha,\beta,\alpha+\beta)$ we have that $$\Nij(X_\alpha,X_\beta,X_{\alpha+\beta} ^\ast)=\dfrac{1}{2}m_{\alpha,\beta},$$ and all other possible combinations are zero unless they are cyclic permutations of the elements $X_\alpha, X_\beta, X^\ast _{\alpha+\beta}$. \end{corollary} \begin{proof} This result follows from a direct computation using formula \eqref{NijenhuisTensor}. In particular, for the nonzero cases \begin{eqnarray*} \Nij(X_\alpha,X_\beta,X_{\alpha+\beta} ^\ast) & = & \dfrac{1}{2}\langle X_{-(\alpha+\beta)},[X_\alpha,X_\beta]\rangle \\ & = & \dfrac{1}{2} \langle X_{-(\alpha+\beta)},m_{\alpha,\beta}X_{\alpha+\beta} \rangle \\ & = & \dfrac{1}{2}m_{\alpha,\beta}. \end{eqnarray*} \end{proof} It is worth noticing that from Equation \eqref{CourantBracket} the Courant bracket for the basis vectors of a Weyl basis is given by \[ [X_{\alpha},X_{\beta}] = \left\lbrace \begin{array}{ll} m_{\alpha,\beta}X_{\alpha+\beta}, \quad \textrm{if } \alpha+\beta \textrm{ is a root} \\ 0, \quad \textrm{otherwise} \end{array}\right. \quad [X_{\alpha},X^\ast _{\beta}] = \left\lbrace \begin{array}{ll} m_{\alpha,-\beta}X^\ast _{\beta-\alpha}, \quad \textrm{if } \beta-\alpha \textrm{ is a root} \\ 0, \quad \textrm{otherwise} \end{array} \right. \] and $[X^\ast _{\alpha},X^\ast _{\beta}] = 0$. \section{Maximal real flag manifolds}\label{S:4} Maximal real flag manifolds are those with $\Theta=\emptyset$. In this case the isotropy subgroup $K_\Theta$ is the centralizer of $\mathfrak{a}$ in $K$, that is, $K_\Theta=M$. From now on we will drop the subscript $\Theta$. Recall that $\textnormal{Ad}(m)(X_\alpha)=\pm X_\alpha$ for all $X_\alpha\in\mathfrak{g}_\alpha$ and $m\in M$. Given that $\textnormal{Ad}^\ast(m)=\textnormal{Ad}(m^{-1})$ we get that $(\textnormal{Ad}\oplus \textnormal{Ad}^\ast)(m)(X_\alpha+X_{-\alpha})=\pm(X_\alpha+X_{-\alpha})$. Therefore, invariant generalized almost complex structures on $\mathbb{F}$ are complex structures $\mathbb{J}:\mathfrak{n}^{-}\oplus \mathfrak{n}^{+}\to \mathfrak{n}^{-}\oplus \mathfrak{n}^{+}$ that are orthogonal with respect to $\langle\cdot,\cdot\rangle$. The representation matrix of $\langle\cdot,\cdot\rangle$ restricted to $\mathfrak{n}^{-}\oplus \mathfrak{n}^{+}$ with respect to a Weyl basis is $$Q=\left( \begin{array}{cc} 0 & I_d\\ I_d & 0 \end{array} \right),$$ where $I_d$ denotes the $d\times d$ identity matrix with $d$ the number of elements in $\Pi^-$. Thus, we are interested in describing structures $\mathbb{J}$ such that $\mathbb{J}^2=-1$ and $\mathbb{J}^TQ\mathbb{J}=Q$. More precisely, according to what we did in the previous section, we just need to describe the structures $\mathbb{J}_{[\alpha]_M}$ on $V_{[\alpha]_M}$ such that $\mathbb{J}_{[\alpha]_M}^2=-1$ and $\mathbb{J}_{[\alpha]_M}^TQ\mathbb{J}_{[\alpha]_M}=Q$.
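The $M$-equivalence classes listed case by case below can be obtained mechanically from congruence \eqref{Mrelation}. As a purely illustrative aside (this sketch is not part of the original computations), the following few lines carry this out for $B_2$, realising the root system in $\mathbb{R}^2$ with the standard inner product; the output reproduces the two classes appearing in the $B_2$ case below.
\begin{verbatim}
# Grouping roots of B_2 into M-equivalence classes via congruence (Mrelation).
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# B_2 realised in R^2: short roots +-e1, +-e2 and long roots +-e1 +- e2.
roots = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def signature(alpha):
    """Cartan integers 2<gamma,alpha>/<gamma,gamma> mod 2, over all gamma."""
    sig = []
    for gamma in roots:
        c = Fraction(2 * dot(gamma, alpha), dot(gamma, gamma))
        assert c.denominator == 1          # Cartan integers are integers
        sig.append(int(c) % 2)
    return tuple(sig)

# The four roots spanning n^- in the B_2 case treated below.
negative = [(-1, 1), (-1, -1), (-1, 0), (0, -1)]
classes = {}
for alpha in negative:
    classes.setdefault(signature(alpha), []).append(alpha)
print(list(classes.values()))
# expected: [[(-1, 1), (-1, -1)], [(-1, 0), (0, -1)]], i.e. the classes
# {lambda_2-lambda_1, -lambda_2-lambda_1} and {-lambda_1, -lambda_2}.
\end{verbatim}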
For the case of maximal real flag manifolds the $M$-equivalence classes of non-negative roots are: \begin{enumerate} \item[-] {\it Case} $A_3$: $$\lbrace \lambda_2-\lambda_1,\lambda_4-\lambda_3\rbrace,\quad\lbrace \lambda_3-\lambda_1,\lambda_4-\lambda_2\rbrace,\quad\textnormal{and}\quad \lbrace \lambda_4-\lambda_1,\lambda_3-\lambda_2\rbrace.$$ \item[-] {\it Case} $B_2$: $$\lbrace \lambda_2-\lambda_1,-\lambda_2-\lambda_1\rbrace\quad\textnormal{and}\quad \lbrace -\lambda_1,-\lambda_2\rbrace.$$ \item[-] {\it Case} $C_4$: $$\lbrace \pm\lambda_2-\lambda_1,\pm\lambda_4-\lambda_3\rbrace,\quad\lbrace \pm\lambda_3-\lambda_1,\pm\lambda_4-\lambda_2\rbrace,\quad \lbrace \pm\lambda_4-\lambda_1,\pm\lambda_3-\lambda_2\rbrace,\quad \textnormal{and}$$ $$\lbrace -2\lambda_i:\ i=1,\cdots,4\rbrace.$$ \item[-] {\it Case} $C_l$ with $l$ even and $l\geq 6$: $$A.\ \lbrace\pm \lambda_s-\lambda_i\rbrace,\ 1\leq i<s\leq l\quad \textnormal{and}\quad B.\ \lbrace 2\lambda_1,\cdots,2\lambda_l\rbrace.$$ \item[-] {\it Case} $D_4$: $$\lbrace \pm\lambda_2-\lambda_1,\pm\lambda_4-\lambda_3\rbrace,\quad\lbrace \pm\lambda_3-\lambda_1,\pm\lambda_4-\lambda_2\rbrace,\quad \textnormal{and}\quad\lbrace \pm\lambda_4-\lambda_1,\pm\lambda_3-\lambda_2\rbrace.$$ \item[-] {\it Case} $D_l$ with $l\geq 5$: $$\lbrace\pm \lambda_j-\lambda_i\rbrace,\ 1\leq i<j\leq l.$$ \item[-] {\it Case} $G_2$: $$\lbrace -\lambda_1,-2\lambda_2-\lambda_1\rbrace,\quad\lbrace -\lambda_2-\lambda_1,-3\lambda_2-\lambda_1\rbrace\quad \textnormal{and}\quad\lbrace -\lambda_2,-3\lambda_2-2\lambda_1\rbrace.$$ \end{enumerate} Motivated by this we set up the following definition. \begin{definition} A maximal real flag manifold of those described in Theorem \ref{P1} is said to be a $GM_2$-\emph{maximal real flag} if it admits at least one $M$-equivalence class root subspace of dimension $2$. \end{definition} Note that the only two maximal real flag manifolds admitting invariant generalized almost complex structures that are not $GM_2$-maximal real flags are those particular cases of type $C_4$ and $D_4$. This is because each $M$-equivalence class root subspace associated to them has dimension $4$. \subsection{Integrability on $GM_2$-maximal real flags}\label{Sub:Integrability} We will start by analyzing the integrability of the invariant generalized almost complex structures on maximal flags $\mathbb{F}$ obtained from the cases $A_3$, $B_2$, $D_l$ with $l\geq 5$, and $G_2$. We will end the subsection by studying the integrability on maximal flags $\mathbb{F}$ of type $C_l$ with $l$ even and $l\geq 6$. \begin{remark}\label{2DimensionalCases} Following the computations made in \cite{VS}, for almost all these cases we just have two possibilities for $\mathbb{J}_{[\alpha]_M}$ since $\dim V_{[\alpha]_M}=2$ for all of them.
Namely, $$\mathcal{J}^c_{[\alpha]_M}=\left( \begin{array}{cccc} b_\alpha & \dfrac{-(1+b_\alpha^2)}{c_\alpha} & 0 & 0\\ c_\alpha & -b_\alpha & 0 & 0\\ 0 & 0 & -b_\alpha & -c_\alpha \\ 0 & 0 & \dfrac{1+b_\alpha^2}{c_\alpha} & b_\alpha \end{array} \right):=\left( \begin{array}{cc} -J^c & 0\\ 0 & (J^c)^\ast \end{array} \right)\qquad \textbf{complex type},$$ or else $$\mathcal{J}^{nc}_{[\alpha]_M}=\left( \begin{array}{cccc} a_\alpha & 0 & 0 & -x_\alpha\\ 0 & a_\alpha& x_\alpha & 0\\ 0 & -y_\alpha & -a_\alpha & 0\\ y_\alpha & 0 & 0 & -a_\alpha \end{array} \right):=\left( \begin{array}{cc} \mathcal{A}_{\alpha} & \mathcal{X}_{\alpha} \\ \mathcal{Y}_{\alpha} & -\mathcal{A}_{\alpha} \end{array} \right)\qquad \textbf{noncomplex type},$$ with $a_\alpha,b_\alpha,c_\alpha,x_\alpha,y_\alpha\in\mathbb{R}$ such that $c_\alpha\neq 0$ and $a_\alpha^2=x_\alpha y_\alpha-1$. \end{remark} Consider the basis $(X_\alpha,X_\beta,X^\ast _\alpha,X_\beta ^\ast)$. For simplicity when performing the computations that we will do below, we forget up the root sub-index notation in the entries of the matrices introduced in Remark \ref{2DimensionalCases}. A straightforward computation allows us to get the following result: \begin{lemma} Let $\mathbb{J}_{[\alpha]_M}$ be a generalized complex structure on $V_{[\alpha]_M}$ where $\dim V_{[\alpha]_M}=2$. If $\mathbb{J}_{[\alpha]_M}=\mathcal{J}^c_{[\alpha]_M}$ is of complex type, then its $+i$-eigenspace is given by \[ L_c = \textrm{span} \{ (b+i)X_\alpha + cX_\beta, -cX_\alpha ^\ast +(b+i)X_\beta ^\ast \}. \] Otherwise, if $\mathbb{J}_{[\alpha]_M}=\mathcal{J}^{nc}_{[\alpha]_M}$ is of noncomplex type, then its $+i$-eigenspace is given by \[ L_{nc} = \textrm{span} \{ xX_\alpha + (a-i)X_\beta ^\ast, -xX_\beta +(a-i)X_\alpha ^\ast \}. \] \end{lemma} As consequence of Corollary \ref{usefulformula1} we get: \\ \\ \noindent {\bf Case $B_2$.} In this case the $M$-equivalence classes of roots are given by \[ \lbrace \lambda_2-\lambda_1,-\lambda_2-\lambda_1\rbrace\quad\textnormal{and}\quad \lbrace -\lambda_1,-\lambda_2\rbrace. \] If we fix the simple root system $\Sigma = \{ \alpha,\beta\}$ with $\alpha = \lambda_2 - \lambda_1$ and $\beta = -\lambda_2$, then we have that the $M$-equivalence classes are $\lbrace \alpha,\alpha + 2\beta \rbrace$ and $\lbrace \alpha+\beta,\beta\rbrace$, respectively. Thus, if $\mathbb{J}= \mathbb{J}_{[\alpha]} \oplus \mathbb{J}_{[\alpha+\beta]}$ is an invariant generalized almost complex structure then we just need to look at the following four possibilities: \\ \\ \noindent {\bf 1.} $\mathbb{J}_{[\alpha]}$ and $\mathbb{J}_{[\alpha+\beta]}$ of complex type. In this case we have that $L = \textrm{span} \{(b_1+i)X_\alpha + c_1X_{\alpha+2\beta}, -c_1 X_\alpha ^\ast +(b_1+i)X_{\alpha+2\beta} ^\ast, (b_2+i)X_{\alpha+\beta} + c_2X_\beta, -c_2X_{\alpha+\beta} ^\ast +(b_2+i)X_\beta ^\ast \}$. Then, we get that \[ \Nij ((b_1+i)X_\alpha + c_1X_{\alpha+2\beta}, (b_2+i)X_{\alpha+\beta} + c_2X_\beta, -c_2X_{\alpha+\beta} ^\ast +(b_2+i)X_\beta ^\ast) = -\frac{1}{2}(b_1+i)c_2 ^2 m_{\alpha, \beta}\neq 0. \] \\ {\bf 2.} $\mathbb{J}_{[\alpha]}$ of complex type and $\mathbb{J}_{[\alpha+\beta]}$ of noncomplex type. In this case $L = \textrm{span} \{ (b+i)X_\alpha + cX_{\alpha+2\beta}, -cX_\alpha ^\ast +(b+i)X_{\alpha+2\beta} ^\ast, xX_{\alpha+\beta} + (a-i)X_\beta ^\ast, -xX_\beta +(a-i)X_{\alpha+\beta} ^\ast \}$. 
Then, we have that \[ \Nij (-cX_\alpha ^\ast +(b+i)X_{\alpha+2\beta} ^\ast, xX_{\alpha+\beta} + (a-i)X_\beta ^\ast, -xX_\beta +(a-i)X_{\alpha+\beta} ^\ast) = -\frac{1}{2}x^2(b+i)m_{\alpha+\beta,\beta}\neq 0. \] \\ {\bf 3.} $\mathbb{J}_{[\alpha]}$ of noncomplex type and $\mathbb{J}_{[\alpha+\beta]}$ of complex type. Here $L = \textrm{span} \{ xX_\alpha + (a-i)X_{\alpha+2\beta} ^\ast, -xX_{\alpha+2\beta} +(a-i)X_\alpha ^\ast, (b+i)X_{\alpha+\beta} + cX_\beta, -cX_{\alpha+\beta} ^\ast +(b+i)X_\beta ^\ast \}$. Then, we obtain that \[ \Nij (xX_\alpha + (a-i)X_{\alpha+2\beta} ^\ast, (b+i)X_{\alpha+\beta} + cX_\beta, -cX_{\alpha+\beta} ^\ast +(b+i)X_\beta ^\ast) = -\frac{1}{2}xc^2 m_{\alpha,\beta}\neq 0. \] \\ {\bf 4.} $\mathbb{J}_{[\alpha]}$ and $\mathbb{J}_{[\alpha+\beta]}$ of noncomplex type. Here $L= \textrm{span} \{x_1X_\alpha + (a_1-i)X_{\alpha+2\beta} ^\ast, -x_1X_{\alpha+2\beta} +(a_1-i)X_\alpha ^\ast, x_2X_{\alpha+\beta} + (a_2-i)X_\beta ^\ast, -x_2X_\beta +(a_2-i)X_{\alpha+\beta} ^\ast \}$. Then, we have that \[ \Nij (x_1X_\alpha + (a_1-i)X_{\alpha+2\beta} ^\ast, x_2X_{\alpha+\beta} + (a_2-i)X_\beta ^\ast, -x_2X_\beta +(a_2-i)X_{\alpha+\beta} ^\ast) = -\frac{1}{2}x_2 ^2 (a_1 - i)m_{\alpha+\beta,\beta}\neq 0. \] \begin{proposition} Let $\mathbb{F}$ be the maximal flag manifold of type $B_2$. Then, the invariant generalized almost complex structures on $\mathbb{F}$ are not integrable. \end{proposition} \noindent {\bf Case $A_3$.} The $M$-equivalence classes of roots are given by \[ \lbrace \lambda_1 - \lambda_2, \lambda_3 - \lambda_4\rbrace, \quad \lbrace \lambda_1-\lambda_3,\lambda_2-\lambda_4\rbrace \quad \textrm{and} \quad \lbrace\lambda_1 - \lambda_4,\lambda_2-\lambda_3\rbrace. \] If we fix the simple root system $\Sigma = \lbrace \alpha_1 = \lambda_1 - \lambda_2, \alpha_2 = \lambda_2-\lambda_3, \alpha_3 = \lambda_3 - \lambda_4\rbrace$, then we have that the $M$-equivalence classes are respectively given by \[ \lbrace \alpha_1, \alpha_3\rbrace, \quad \lbrace \alpha_1+\alpha_2, \alpha_2+\alpha_3\rbrace \quad \textrm{and} \quad \lbrace \alpha_1+\alpha_2+\alpha_3, \alpha_2\rbrace. \] If $\mathbb{J}$ is an invariant generalized almost complex structure on $\mathbb{F}$, then we have one of the possibilities presented in Table \ref{table2}. \begin{table}[htb!] \begin{tabular}{|r|c|c|c|} \hline & $\mathbb{J}_{[\alpha_1]}$ & $\mathbb{J}_{[\alpha_1+\alpha_2]}$ & $\mathbb{J}_{[\alpha_1+\alpha_2+\alpha_3]}$ \\ \hline {\bf 1.} & complex & complex & complex\\ {\bf 2.} & complex & complex & noncomplex\\ {\bf 3.} & complex & noncomplex & complex\\ {\bf 4.} & noncomplex & complex & complex\\ {\bf 5.} & complex & noncomplex & noncomplex\\ {\bf 6.} & noncomplex & complex & noncomplex\\ {\bf 7.} & noncomplex & noncomplex & complex\\ {\bf 8.} & noncomplex & noncomplex & noncomplex \\ \hline \end{tabular} \caption{Possible combinations for $\mathbb{J}$ on $A_3$}\label{table2} \end{table} We must carefully analyze the integrability of all possibilities presented in Table \ref{table2}. \\ \\ \noindent {\bf 1.} This case comes from the invariant almost complex structures found in \cite{FBS}. This is not integrable.
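Before analyzing the remaining possibilities, let us indicate once and for all how the displayed values of $\Nij$ are obtained; this is a routine expansion which we record only for the reader's convenience. Since $\Nij$ is trilinear, one expands the three arguments and, by Corollary \ref{usefulformula1}, keeps only the triples of basis elements of the form $(X_\gamma,X_\delta,X^\ast_{\gamma+\delta})$ up to permutation. For instance, in possibility {\bf 1} of the case $B_2$ above, the only pair of roots coming from the first two arguments whose sum is a root dual to a covector appearing in the third argument is $(\alpha,\beta)$, so that $$\Nij ((b_1+i)X_\alpha + c_1X_{\alpha+2\beta}, (b_2+i)X_{\alpha+\beta} + c_2X_\beta, -c_2X_{\alpha+\beta} ^\ast +(b_2+i)X_\beta ^\ast)$$ $$= (b_1+i)\, c_2\, (-c_2)\,\Nij(X_\alpha,X_\beta,X_{\alpha+\beta} ^\ast) = -\frac{1}{2}(b_1+i)c_2 ^2 m_{\alpha,\beta},$$ the remaining sums $2\alpha+\beta$, $2\alpha+3\beta$ and $\alpha+3\beta$ not being roots of $B_2$. All the other values below are computed in exactly the same way.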
\\ \\ \noindent {\bf 2.} Here the $+i$-eigenspace is given by $L = \textrm{span}\lbrace (b_1+i)X_{\alpha_1} +c_1 X_{\alpha_3}, -c_1X_{\alpha_1} ^\ast +(b_1 + i)X_{\alpha_3} ^\ast, (b_2+i)X_{\alpha_1+\alpha_2} +c_2 X_{\alpha_2+\alpha_3}, -c_2X_{\alpha_1+\alpha_2} ^\ast +(b_2 + i)X_{\alpha_2+\alpha_3} ^\ast, xX_{\alpha_1+\alpha_2+\alpha_3}+(a-i)X^\ast _{\alpha_2}, -xX_{\alpha_2} +(a-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast \rbrace$. On the one hand, we have that \begin{eqnarray*} \Nij ((b_1+i)X_{\alpha_1} +c_1 X_{\alpha_3},(b_2+i)X_{\alpha_1+\alpha_2} +c_2 X_{\alpha_2+\alpha_3},-xX_{\alpha_2} +(a-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast) \\ = \frac{1}{2}(a-i)(c_2(b_1+i)m_{\alpha_1,\alpha_2+\alpha_3}+c_1(b_2+i)m_{\alpha_3,\alpha_1+\alpha_2}) \end{eqnarray*} and \begin{eqnarray*} \Nij ((b_1+i)X_{\alpha_1} +c_1 X_{\alpha_3},-xX_{\alpha_2} +(a-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast, -c_2X_{\alpha_1+\alpha_2} ^\ast +(b_2 + i)X_{\alpha_2+\alpha_3} ^\ast) \\ = \dfrac{1}{2}x( (b_1+i)c_2 m_{\alpha_1,\alpha_2}- c_1 (b_2+i) m_{\alpha_3,\alpha_2}). \end{eqnarray*} On the other hand, suppose that $\mathbb{J}$ is integrable. So, the condition $\Nij |_L = 0$ implies that \[ \left\lbrace \begin{array}{l} c_2(b_1+i)m_{\alpha_1,\alpha_2+\alpha_3}+c_1(b_2+i)m_{\alpha_3,\alpha_1+\alpha_2} = 0\\ (b_1+i)c_2 m_{\alpha_1,\alpha_2}- c_1 (b_2+i) m_{\alpha_3,\alpha_2} = 0. \end{array}\right. \] In particular, it follows that \begin{equation}\label{eq1} \left\lbrace \begin{array}{l} c_2 m_{\alpha_1,\alpha_2+\alpha_3}+c_1 m_{\alpha_3,\alpha_1+\alpha_2} = 0\\ c_2 m_{\alpha_1,\alpha_2}- c_1 m_{\alpha_3,\alpha_2} = 0 \end{array}\right. \quad\textnormal{implies}\quad \frac{m_{\alpha_1,\alpha_2}}{m_{\alpha_3,\alpha_2}} = \frac{c_1}{c_2} = -\frac{m_{\alpha_1,\alpha_2+\alpha_3}}{m_{\alpha_3,\alpha_1+\alpha_2}}. \end{equation} However, observe that by the Jacobi identity we have \begin{eqnarray*} 0 & = & [X_{\alpha_3},[X_{\alpha_1},X_{\alpha_2}]]-[[X_{\alpha_3},X_{\alpha_1}],X_{\alpha_2}]-[X_{\alpha_1},[X_{\alpha_3},X_{\alpha_2}]] \\ & = & m_{\alpha_1,\alpha_2}[X_{\alpha_3},X_{\alpha_1+\alpha_2}]-m_{\alpha_3,\alpha_2}[X_{\alpha_1},X_{\alpha_2+\alpha_3}]\\ & = & (m_{\alpha_1,\alpha_2}m_{\alpha_3,\alpha_1+\alpha_2}-m_{\alpha_3,\alpha_2}m_{\alpha_1,\alpha_2+\alpha_3})X_{\alpha_1+\alpha_2+\alpha_3}. \end{eqnarray*} Thus \begin{equation}\label{eq2} m_{\alpha_1,\alpha_2}m_{\alpha_3,\alpha_1+\alpha_2}-m_{\alpha_3,\alpha_2}m_{\alpha_1,\alpha_2+\alpha_3} = 0 \quad\textnormal{implies}\quad \frac{m_{\alpha_1,\alpha_2}}{m_{\alpha_3,\alpha_2}} = \frac{m_{\alpha_1,\alpha_2+\alpha_3}}{m_{\alpha_3,\alpha_1+\alpha_2}}. \end{equation} Note that Equation \eqref{eq2} contradicts Equation \eqref{eq1}. Therefore, $\mathbb{J}$ can not be integrable. \\ \\ {\bf 3.} In this case the $+i$-eigenspace is $L = \textrm{span}\lbrace (b_1+i)X_{\alpha_1} +c_1 X_{\alpha_3}, -c_1X_{\alpha_1} ^\ast +(b_1 + i)X_{\alpha_3} ^\ast, xX_{\alpha_1+\alpha_2} +(a-i) X^\ast _{\alpha_2+\alpha_3}, -xX_{\alpha_2+\alpha_3} +(a- i)X_{\alpha_1+\alpha_2} ^\ast, (b_2+i)X_{\alpha_1+\alpha_2+\alpha_3}+c_2X_{\alpha_2}, -c_2 X_{\alpha_2} ^\ast +(b_2+i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast \rbrace$. Then, we have \begin{eqnarray*} \Nij ((b_1+i)X_{\alpha_1} +c_1 X_{\alpha_3},(b_2+i)X_{\alpha_1+\alpha_2+\alpha_3}+c_2X_{\alpha_2},xX_{\alpha_1+\alpha_2} +(a-i) X^\ast _{\alpha_2+\alpha_3}) \\ = \frac{1}{2}c_1 c_2 (a-i)m_{\alpha_3,\alpha_2} \not= 0, \end{eqnarray*} which immediately implies that $\mathbb{J}$ is not integrable. 
\\ \\ {\bf 4.} Here we obtain that $L = \textrm{span}\lbrace xX_{\alpha_1} +(a-i) X^\ast _{\alpha_3}, -xX_{\alpha_3} +(a-i)X_{\alpha_1} ^\ast, (b_1+i)X_{\alpha_1+\alpha_2} +c_1 X_{\alpha_2+\alpha_3}, -c_1X_{\alpha_1+\alpha_2} ^\ast +(b_1 + i)X_{\alpha_2+\alpha_3} ^\ast, (b_2+i)X_{\alpha_1+\alpha_2+\alpha_3}+c_2X_{\alpha_2}, -c_2 X_{\alpha_1+\alpha_2+\alpha_3} ^\ast +(b_2+i)X_{\alpha_2} ^\ast \rbrace$. Thus \begin{eqnarray*} \Nij ( xX_{\alpha_1} +(a-i) X^\ast _{\alpha_3},(b_1+i)X_{\alpha_1+\alpha_2} +c_1 X_{\alpha_2+\alpha_3},-c_2 X_{\alpha_1+\alpha_2+\alpha_3} ^\ast +(b_2+i)X_{\alpha_2} ^\ast) \\ = -\frac{1}{2}xc_1c_2m_{\alpha_1,\alpha_2+\alpha_3}\not= 0, \end{eqnarray*} and hence $\mathbb{J}$ is not integrable. \\ \\ {\bf 5.} In this case $L = \textrm{span}\lbrace (b+i)X_{\alpha_1} +c X_{\alpha_3}, -cX_{\alpha_1} ^\ast +(b+i)X_{\alpha_3} ^\ast, x_1 X_{\alpha_1+\alpha_2} +(a_1 - i) X^\ast _{\alpha_2+\alpha_3}, -x_1X_{\alpha_2+\alpha_3} +(a_1 - i)X_{\alpha_1+\alpha_2} ^\ast, x_2 X_{\alpha_1+\alpha_2+\alpha_3}+(a_2-i)X^\ast _{\alpha_2}, -x_2 X_{\alpha_2} +(a_2-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast \rbrace$. Then, we have that \begin{eqnarray*} \Nij ((b+i)X_{\alpha_1} +c X_{\alpha_3},x_1 X_{\alpha_1+\alpha_2} +(a_1 - i) X^\ast _{\alpha_2+\alpha_3},-x_2 X_{\alpha_2} +(a_2-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast) \\ = \frac{1}{2}c(x_1(a_2-i)m_{\alpha_3,\alpha_1+\alpha_2}+x_2(a_1-i)m_{\alpha_3,\alpha_2}), \end{eqnarray*} and \begin{eqnarray*} \Nij ((b+i)X_{\alpha_1} +c X_{\alpha_3},-x_1X_{\alpha_2+\alpha_3} +(a_1 - i)X_{\alpha_1+\alpha_2} ^\ast,-x_2 X_{\alpha_2} +(a_2-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast) \\ = -\frac{1}{2}(b+i)(x_1(a_2-i)m_{\alpha_1,\alpha_2+\alpha_3}+x_2(a_1-i)m_{\alpha_1,\alpha_2}). \end{eqnarray*} Therefore, if $\Nij |_L = 0$ then we must obtain \[ \left\lbrace \begin{array}{l} x_1(a_2-i)m_{\alpha_3,\alpha_1+\alpha_2}+x_2(a_1-i)m_{\alpha_3,\alpha_2} = 0\\ x_1(a_2-i)m_{\alpha_1,\alpha_2+\alpha_3}+x_2(a_1-i)m_{\alpha_1,\alpha_2}=0. \end{array}\right. \] In particular, from the imaginary part we have \begin{equation}\label{eq3} \frac{m_{\alpha_1,\alpha_2}}{m_{\alpha_1,\alpha_2+\alpha_3}} = \frac{x_1}{x_2} = -\frac{m_{\alpha_3,\alpha_2}}{m_{\alpha_3,\alpha_1+\alpha_2}}. \end{equation} But observe that Equation \eqref{eq3} contradicts Equation \eqref{eq2} which we got by using the Jacobi identity. Therefore, $\mathbb{J}$ can not be integrable. \\ \\ {\bf 6.} Here we have that $L = \textrm{span}\lbrace x_1 X_{\alpha_1} +(a_1-i) X^\ast _{\alpha_3}, -x_1X_{\alpha_3} +(a_1-i)X_{\alpha_1} ^\ast, (b+i) X_{\alpha_1+\alpha_2} +c X_{\alpha_2+\alpha_3}, -cX_{\alpha_1+\alpha_2} ^\ast +(b+i)X_{\alpha_2+\alpha_3} ^\ast, x_2 X_{\alpha_1+\alpha_2+\alpha_3}+(a_2-i)X^\ast _{\alpha_2}, -x_2 X_{\alpha_2} +(a_2-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast \rbrace$. Thus \begin{eqnarray*} \Nij (x_1 X_{\alpha_1} +(a_1-i) X^\ast _{\alpha_3},(b+i) X_{\alpha_1+\alpha_2} +c X_{\alpha_2+\alpha_3},-x_2 X_{\alpha_2} +(a_2-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast) \\ = \frac{1}{2}x_1 c (a_2-i)m_{\alpha_1,\alpha_2+\alpha_3} \not= 0, \end{eqnarray*} which means that $\mathbb{J}$ is not integrable.
\\ \\ {\bf 7.} This case is also not integrable because the $+i$-eigenspace is $L = \textrm{span}\lbrace x_1 X_{\alpha_1} +(a_1-i) X^\ast _{\alpha_3}, -x_1X_{\alpha_3} +(a_1-i)X_{\alpha_1} ^\ast, x_2 X_{\alpha_1+\alpha_2} +(a_2-i) X^\ast _{\alpha_2+\alpha_3}, -x_2X_{\alpha_2+\alpha_3} +(a_2-i) X_{\alpha_1+\alpha_2} ^\ast, (b+i) X_{\alpha_1+\alpha_2+\alpha_3}+cX _{\alpha_2}, -c X_{\alpha_1+\alpha_2+\alpha_3} ^\ast +(b+i)X_{\alpha_2} ^\ast \rbrace$ and \begin{eqnarray*} \Nij (x_1 X_{\alpha_1} +(a_1-i) X^\ast _{\alpha_3},-x_2X_{\alpha_2+\alpha_3} +(a_2-i) X_{\alpha_1+\alpha_2} ^\ast,-c X_{\alpha_1+\alpha_2+\alpha_3} ^\ast +(b+i)X_{\alpha_2} ^\ast) \\ =-\frac{1}{2}x_1x_2(b+i)m_{\alpha_1,\alpha_2+\alpha_3} \not= 0. \end{eqnarray*} \\ \\ {\bf 8.} Finally, for this case we have $L = \textrm{span}\lbrace x_1 X_{\alpha_1} +(a_1-i) X^\ast _{\alpha_3}, -x_1X_{\alpha_3} +(a_1-i)X_{\alpha_1} ^\ast, x_2 X_{\alpha_1+\alpha_2} +(a_2-i) X^\ast _{\alpha_2+\alpha_3}, -x_2X_{\alpha_2+\alpha_3} +(a_2-i) X_{\alpha_1+\alpha_2} ^\ast, x_3 X_{\alpha_1+\alpha_2+\alpha_3}+(a_3-i)X _{\alpha_2} ^\ast, -x_3 X_{\alpha_2} +(a_3-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast \rbrace$. Note that \begin{eqnarray*} \Nij (x_1 X_{\alpha_1} +(a_1-i) X^\ast _{\alpha_3},-x_2X_{\alpha_2+\alpha_3} +(a_2-i) X_{\alpha_1+\alpha_2} ^\ast,-x_3 X_{\alpha_2} +(a_3-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast) \\ = -\frac{1}{2}x_1(x_2(a_3 - i)m_{\alpha_1,\alpha_2+\alpha_3}-x_3(a_2-i)m_{\alpha_1,\alpha_2}), \end{eqnarray*} and \begin{eqnarray*} \Nij (-x_1X_{\alpha_3} +(a_1-i)X_{\alpha_1} ^\ast,x_2 X_{\alpha_1+\alpha_2} +(a_2-i) X^\ast _{\alpha_2+\alpha_3},-x_3 X_{\alpha_2} +(a_3-i)X_{\alpha_1+\alpha_2+\alpha_3} ^\ast )\\ = -\frac{1}{2}x_1 (x_2(a_3 - i)m_{\alpha_3,\alpha_1+\alpha_2}+x_3(a_2-i)m_{\alpha_3,\alpha_2}). \end{eqnarray*} So, if $\Nij|_L = 0$ then we must have \[ \left\lbrace \begin{array}{l} x_2(a_3 - i)m_{\alpha_1,\alpha_2+\alpha_3}-x_3(a_2-i)m_{\alpha_1,\alpha_2} = 0\\ x_2(a_3 - i)m_{\alpha_3,\alpha_1+\alpha_2}+x_3(a_2-i)m_{\alpha_3,\alpha_2} = 0. \end{array}\right. \] In particular, the imaginary part vanishes, which gives us \begin{equation}\label{eq4} \frac{m_{\alpha_1,\alpha_2}}{m_{\alpha_1,\alpha_2+\alpha_3}} = \frac{x_2}{x_3} = -\frac{m_{\alpha_3,\alpha_2}}{m_{\alpha_3,\alpha_1+\alpha_2}}. \end{equation} Again, we have that Equation \eqref{eq4} contradicts Equation \eqref{eq2} and, therefore, $\mathbb{J}$ can not be integrable. \begin{proposition} Let $\mathbb{F}$ be the maximal flag manifold of type $A_3$. Then, the invariant generalized almost complex structures on $\mathbb{F}$ are not integrable. \end{proposition} \noindent {\bf Case $G_2$.} As we did in the previous cases, here it is simple to check that the $M$-equivalence classes of roots are given by \[ \lbrace \alpha, \alpha+2\beta \rbrace, \quad \lbrace \alpha+\beta,\alpha+3\beta \rbrace \quad \textrm{and} \quad \lbrace \beta,2\alpha+3\beta\rbrace. \] Thus, as in the case of $A_3$, we have that an invariant generalized almost complex structure $\mathbb{J}$ on $\mathbb{F}$ takes one of the possibilities presented in Table \ref{table3}. \begin{table}[htb!]
\begin{tabular}{|r|c|c|c|} \hline & $\mathbb{J}_{[\alpha]}$ & $\mathbb{J}_{[\alpha+\beta]}$ & $\mathbb{J}_{[\beta]}$ \\ \hline {\bf 1.} & complex & complex & complex\\ {\bf 2.} & complex & complex & noncomplex\\ {\bf 3.} & complex & noncomplex & complex\\ {\bf 4.} & noncomplex & complex & complex\\ {\bf 5.} & complex & noncomplex & noncomplex\\ {\bf 6.} & noncomplex & complex & noncomplex\\ {\bf 7.} & noncomplex & noncomplex & complex\\ {\bf 8.} & noncomplex & noncomplex & noncomplex \\ \hline \end{tabular} \caption{Possible combinations for $\mathbb{J}$ on $G_2$}\label{table3} \end{table} \\ \\ \noindent {\bf 1.} This case comes from the invariant almost complex structures found in \cite{FBS}. This is not integrable. \\ \\ \noindent {\bf 2.} Here the $+i$-eigenspace is given by $L = \textrm{span}\lbrace (b_1+i)X_{\alpha} +c_1 X_{\alpha+2\beta}, -c_1X_{\alpha} ^\ast +(b_1 + i)X_{\alpha+2\beta} ^\ast, (b_2+i)X_{\alpha+\beta} +c_2 X_{\alpha+3\beta}, -c_2X_{\alpha+\beta} ^\ast +(b_2 + i)X_{\alpha+3\beta} ^\ast, xX_{\beta}+(a-i)X^\ast _{2\alpha+3\beta}, -xX_{2\alpha+3\beta} +(a-i)X_{\beta} ^\ast \rbrace$. Observe that \begin{eqnarray*} \Nij ((b_1+i)X_{\alpha} +c_1 X_{\alpha+2\beta}, (b_2+i)X_{\alpha+\beta} +c_2 X_{\alpha+3\beta}, xX_{\beta}+(a-i)X^\ast _{2\alpha+3\beta}) \\ = \frac{1}{2}(a-i)((b_1+i)c_2m_{\alpha,\alpha+3\beta}+(b_2+i)c_1m_{\alpha+2\beta,\alpha+\beta}), \end{eqnarray*} and \begin{eqnarray*} \Nij ((b_1+i)X_{\alpha} +c_1 X_{\alpha+2\beta}, xX_{\beta}+(a-i)X^\ast _{2\alpha+3\beta}, -c_2X_{\alpha+\beta} ^\ast +(b_2 + i)X_{\alpha+3\beta} ^\ast) \\ = - \frac{1}{2}x((b_1+i)c_2m_{\alpha,\beta}-(b_2+i)c_1m_{\alpha+2\beta,\beta}). \end{eqnarray*} Suppose that $\mathbb{J}$ is integrable; then the condition $\Nij|_L = 0$ implies that \[ \left\lbrace \begin{array}{l} (b_1+i)c_2m_{\alpha,\alpha+3\beta}+(b_2+i)c_1m_{\alpha+2\beta,\alpha+\beta}=0\\ (b_1+i)c_2m_{\alpha,\beta}-(b_2+i)c_1m_{\alpha+2\beta,\beta} = 0. \end{array}\right. \] In particular, we have \begin{equation}\label{g2-1} \left\lbrace \begin{array}{l} c_2m_{\alpha,\alpha+3\beta}+c_1m_{\alpha+2\beta,\alpha+\beta}=0\\ c_2m_{\alpha,\beta}-c_1m_{\alpha+2\beta,\beta} = 0 \end{array}\right. \quad \textnormal{implies} \quad \frac{m_{\alpha,\beta}}{m_{\alpha+2\beta,\beta}} = \frac{c_1}{c_2} = -\frac{m_{\alpha,\alpha+3\beta}}{m_{\alpha+2\beta,\alpha+\beta}}. \end{equation} However, as a consequence of the Jacobi identity we get \begin{eqnarray*} 0 & = & [X_{\alpha+2\beta},[X_{\alpha},X_\beta]]-[[X_{\alpha+2\beta},X_\alpha],X_\beta]-[X_\alpha,[X_{\alpha+2\beta},X_\beta]]\\ & = & m_{\alpha,\beta}[X_{\alpha+2\beta},X_{\alpha+\beta}]- m_{\alpha+2\beta,\beta}[X_\alpha,X_{\alpha+3\beta}]\\ & = & (m_{\alpha,\beta}m_{{\alpha+2\beta},{\alpha+\beta}}-m_{\alpha+2\beta,\beta}m_{\alpha,\alpha+3\beta})X_{2\alpha+3\beta}, \end{eqnarray*} which tells us that \begin{equation}\label{g2-2} m_{\alpha,\beta}m_{{\alpha+2\beta},{\alpha+\beta}}-m_{\alpha+2\beta,\beta}m_{\alpha,\alpha+3\beta} = 0 \quad \textnormal{implies} \quad \frac{m_{\alpha,\beta}}{m_{\alpha+2\beta,\beta}} = \frac{m_{\alpha,\alpha+3\beta}}{m_{\alpha+2\beta,\alpha+\beta}}. \end{equation} Note that Equation \eqref{g2-2} contradicts Equation \eqref{g2-1}, thus $\mathbb{J}$ can not be integrable.
\\ \\ {\bf 3.} In this case the $+i$-eigenspace is $L = \textrm{span}\lbrace (b_1+i)X_{\alpha} +c_1 X_{\alpha+2\beta}, -c_1X_{\alpha} ^\ast +(b_1 + i)X_{\alpha+2\beta} ^\ast, xX_{\alpha+\beta} +(a-i) X^\ast _{\alpha+3\beta}, -xX_{\alpha+3\beta} +(a- i)X_{\alpha+\beta} ^\ast, (b_2+i)X_{\beta}+c_2X_{2\alpha+3\beta}, -c_2 X_{\beta} ^\ast +(b_2+i)X_{2\alpha+3\beta} ^\ast \rbrace$. Then \begin{eqnarray*} \Nij ((b_1+i)X_{\alpha} +c_1 X_{\alpha+2\beta}, xX_{\alpha+\beta} +(a-i) X^\ast _{\alpha+3\beta}, -c_2 X_{\beta} ^\ast +(b_2+i)X_{2\alpha+3\beta} ^\ast) \\ = \frac{1}{2}c_1x(b_2+i)m_{\alpha+2\beta,\alpha+\beta}\not= 0, \end{eqnarray*} from which we immediately get that $\mathbb{J}$ is not integrable. \\ \\ {\bf 4.} Here we have $L = \textrm{span}\lbrace xX_{\alpha} +(a-i) X^\ast _{\alpha+2\beta}, -xX_{\alpha+2\beta} +(a-i)X_{\alpha} ^\ast, (b_1+i)X_{\alpha+\beta} +c_1 X_{\alpha+3\beta}, -c_1X_{\alpha+\beta} ^\ast +(b_1 + i)X_{\alpha+3\beta} ^\ast, (b_2+i)X_{\beta}+c_2X_{2\alpha+3\beta}, -c_2 X_{\beta} ^\ast +(b_2+i)X_{2\alpha+3\beta} ^\ast \rbrace$. Then \begin{eqnarray*} \Nij (-xX_{\alpha+2\beta} +(a-i)X_{\alpha} ^\ast, (b_2+i)X_{\beta}+c_2X_{2\alpha+3\beta}, -c_1X_{\alpha+\beta} ^\ast +(b_1 + i)X_{\alpha+3\beta} ^\ast) \\ = -\frac{1}{2}x(b_1+i)(b_2+i)m_{\alpha+2\beta,\beta}\not=0, \end{eqnarray*} which means that $\mathbb{J}$ is not integrable. \\ \\ {\bf 5.} This case is also not integrable because $L = \textrm{span}\lbrace (b+i)X_{\alpha} +c X_{\alpha+2\beta}, -cX_{\alpha} ^\ast +(b+i)X_{\alpha+2\beta} ^\ast, x_1 X_{\alpha+\beta} +(a_1 - i) X^\ast _{\alpha+3\beta}, -x_1X_{\alpha+3\beta} +(a_1 - i)X_{\alpha+\beta} ^\ast, x_2 X_{\beta}+(a_2-i)X^\ast _{2\alpha+3\beta}, -x_2 X_{2\alpha+3\beta} +(a_2-i)X_{\beta} ^\ast \rbrace$ and \begin{eqnarray*} \Nij (x_2 X_{\beta}+(a_2-i)X^\ast _{2\alpha+3\beta}, x_1 X_{\alpha+\beta} +(a_1 - i) X^\ast _{\alpha+3\beta},-cX_{\alpha} ^\ast +(b+i)X_{\alpha+2\beta} ^\ast) \\ = \frac{1}{2}x_1x_2(b+i)m_{\beta,\alpha+\beta} \not= 0. \end{eqnarray*} \\ \\ {\bf 6.} Now we have $L = \textrm{span}\lbrace x_1 X_{\alpha} +(a_1-i) X^\ast _{\alpha+2\beta}, -x_1X_{\alpha+2\beta} +(a_1-i)X_{\alpha} ^\ast, (b+i) X_{\alpha+\beta} +c X_{\alpha+3\beta}, -cX_{\alpha+\beta} ^\ast +(b+i)X_{\alpha+3\beta} ^\ast, x_2 X_{\beta}+(a_2-i)X^\ast _{2\alpha+3\beta}, -x_2 X_{2\alpha+3\beta} +(a_2-i)X_{\beta} ^\ast \rbrace$, then observe that \begin{eqnarray*} \Nij (-x_1X_{\alpha+2\beta} +(a_1-i)X_{\alpha} ^\ast, x_2 X_{\beta}+(a_2-i)X^\ast _{2\alpha+3\beta},-cX_{\alpha+\beta} ^\ast +(b+i)X_{\alpha+3\beta} ^\ast) \\ = -\frac{1}{2}x_1x_2(b+i)m_{\alpha+2\beta,\beta}\not= 0. \end{eqnarray*} Therefore $\mathbb{J}$ is not integrable. \\ \\ {\bf 7.} Here the $+i$-eigenspace is given by $L = \textrm{span}\lbrace x_1 X_{\alpha} +(a_1-i) X^\ast _{\alpha+2\beta}, -x_1X_{\alpha+2\beta} +(a_1-i)X_{\alpha} ^\ast, x_2 X_{\alpha+\beta} +(a_2-i) X^\ast _{\alpha+3\beta}, -x_2X_{\alpha+3\beta} +(a_2-i) X_{\alpha+\beta} ^\ast, (b+i) X_{\beta}+cX _{2\alpha+3\beta}, -c X_{\beta} ^\ast +(b+i)X_{2\alpha+3\beta} ^\ast \rbrace$. Thus \begin{eqnarray*} \Nij (x_1 X_{\alpha} +(a_1-i) X^\ast _{\alpha+2\beta}, -x_2X_{\alpha+3\beta} +(a_2-i) X_{\alpha+\beta} ^\ast, -c X_{\beta} ^\ast +(b+i)X_{2\alpha+3\beta} ^\ast) \\ = -\frac{1}{2}x_1x_2(b+i)m_{\alpha,\alpha+3\beta}\not= 0, \end{eqnarray*} and hence $\mathbb{J}$ is not integrable.
\\ \\ {\bf 8.} Finally, we have $L = \textrm{span}\lbrace x_1 X_{\alpha} +(a_1-i) X^\ast _{\alpha+2\beta}, -x_1X_{\alpha+2\beta} +(a_1-i)X_{\alpha} ^\ast, x_2 X_{\alpha+\beta} +(a_2-i) X^\ast _{\alpha+3\beta}, -x_2X_{\alpha+3\beta} +(a_2-i) X_{\alpha+\beta} ^\ast, x_3 X_{\beta}+(a_3-i)X _{2\alpha+3\beta} ^\ast, -x_3 X_{2\alpha+3\beta} +(a_3-i)X_{\beta} ^\ast \rbrace$. On the one hand, note that \begin{eqnarray*} \Nij (x_1 X_{\alpha} +(a_1-i) X^\ast _{\alpha+2\beta}, -x_2X_{\alpha+3\beta} +(a_2-i) X_{\alpha+\beta} ^\ast, x_3 X_{\beta}+(a_3-i)X _{2\alpha+3\beta} ^\ast) \\ = -\frac{1}{2}x_1(x_2(a_3-i)m_{\alpha,\alpha+3\beta}+x_3(a_2-i)m_{\alpha,\beta}) \end{eqnarray*} and \begin{eqnarray*} \Nij (-x_1X_{\alpha+2\beta} +(a_1-i)X_{\alpha} ^\ast, x_2 X_{\alpha+\beta} +(a_2-i) X^\ast _{\alpha+3\beta}, x_3 X_{\beta}+(a_3-i)X _{2\alpha+3\beta} ^\ast) \\ = -\frac{1}{2}x_1(x_2(a_3-i)m_{\alpha+2\beta,\alpha+\beta}-x_3(a_2-i)m_{\alpha+2\beta,\beta}). \end{eqnarray*} On the other hand, if we assume that $\mathbb{J}$ is integrable it follows from the condition $\Nij|_L=0$ that \[ \left\lbrace\begin{array}{l} x_2(a_3-i)m_{\alpha,\alpha+3\beta}+x_3(a_2-i)m_{\alpha,\beta} = 0\\ x_2(a_3-i)m_{\alpha+2\beta,\alpha+\beta}-x_3(a_2-i)m_{\alpha+2\beta,\beta} = 0. \end{array}\right. \] In particular, from the imaginary part we have \begin{equation}\label{g2-3} \left\lbrace\begin{array}{l} x_2m_{\alpha,\alpha+3\beta}+x_3m_{\alpha,\beta} = 0\\ x_2m_{\alpha+2\beta,\alpha+\beta}-x_3m_{\alpha+2\beta,\beta} = 0 \end{array}\right. \quad \textnormal{implies} \quad \frac{m_{\alpha,\beta}}{m_{\alpha,\alpha+3\beta}}= -\frac{x_2}{x_3} = -\frac{m_{\alpha+2\beta,\beta}}{m_{\alpha+2\beta,\alpha+\beta}}. \end{equation} But, observe that Equation \eqref{g2-3} contradicts Equation \eqref{g2-2} and hence $\mathbb{J}$ can not be integrable. \begin{proposition} Let $\mathbb{F}$ be the maximal flag manifold of type $G_2$. Then, the invariant generalized almost complex structures on $\mathbb{F}$ are not integrable. \end{proposition} \noindent {\bf Case $D_l$ with $l\geq 5$.} The $M$-equivalence classes of roots are given by \[ \lbrace\pm \lambda_j-\lambda_i\rbrace,\ 1\leq i<j\leq l. \] For the conclusion that we are going to obtain here, it will be enough to consider the $M$-equivalence classes $\lbrace \lambda_1-\lambda_2,\lambda_1+\lambda_2\rbrace$, $\lbrace \lambda_2-\lambda_3,\lambda_2+\lambda_3\rbrace$ and $\lbrace \lambda_1-\lambda_3,\lambda_1+\lambda_3\rbrace$. If $\mathbb{J}$ is an invariant generalized almost complex structure on $\mathbb{F}$, then we have that the possibilities for $\mathbb{J}$ at these $M$-equivalence classes are given in Table \ref{table4}. \begin{table}[htb!]
\begin{tabular}{|r|c|c|c|} \hline & $\mathbb{J}_{[\lambda_1-\lambda_2]}$ & $\mathbb{J}_{[\lambda_2-\lambda_3]}$ & $\mathbb{J}_{[\lambda_1-\lambda_3]}$ \\ \hline {\bf 1.} & complex & complex & complex\\ {\bf 2.} & complex & complex & noncomplex\\ {\bf 3.} & complex & noncomplex & complex\\ {\bf 4.} & noncomplex & complex & complex\\ {\bf 5.} & complex & noncomplex & noncomplex\\ {\bf 6.} & noncomplex & complex & noncomplex\\ {\bf 7.} & noncomplex & noncomplex & complex\\ {\bf 8.} & noncomplex & noncomplex & noncomplex \\ \hline \end{tabular} \caption{Possible combinations for $\mathbb{J}$ on $D_l$ with $l\geq 5$}\label{table4} \end{table} \noindent {\bf 1.} The $+i$-eigenspace is given by $L = \textrm{span}\lbrace (b_1+i)X_{\lambda_1-\lambda_2} +c_1 X_{\lambda_1+\lambda_2},-c_1X_{\lambda_1-\lambda_2} ^\ast + (b_1+i)X^\ast _{\lambda_1+\lambda_2}, (b_2+i)X_{\lambda_2-\lambda_3} +c_2X_{\lambda_2+\lambda_3}, -c_2X^\ast _{\lambda_2-\lambda_3} + (b_2+i)X_{\lambda_2+\lambda_3} ^\ast, (b_3+i)X_{\lambda_1-\lambda_3} +c_3X_{\lambda_1+\lambda_3}, -c_3X_{\lambda_1-\lambda_3} ^\ast +(b_3+i)X_{\lambda_1+\lambda_3} ^\ast\rbrace$. Thus we have \begin{eqnarray*} \Nij ((b_1+i)X_{\lambda_1-\lambda_2} +c_1 X_{\lambda_1+\lambda_2}, (b_2+i)X_{\lambda_2-\lambda_3} +c_2X_{\lambda_2+\lambda_3}, -c_3X_{\lambda_1-\lambda_3} ^\ast +(b_3+i)X_{\lambda_1+\lambda_3} ^\ast) \\ = -\frac{1}{2}(b_1+i)(c_3(b_2+i)m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3} -c_2(b_3+i)m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}), \end{eqnarray*} and \begin{eqnarray*} \Nij (-c_1X_{\lambda_1-\lambda_2} ^\ast + (b_1+i)X^\ast _{\lambda_1+\lambda_2}, (b_2+i)X_{\lambda_2-\lambda_3} +c_2X_{\lambda_2+\lambda_3},(b_3+i)X_{\lambda_1-\lambda_3} +c_3X_{\lambda_1+\lambda_3}) \\ =\frac{1}{2}(b_1+i)(c_3(b_2+i)m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}+c_2(b_3+i)m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3}). \end{eqnarray*} So, if $\Nij|_L = 0$ then we get \[ \left\lbrace \begin{array}{l} c_3(b_2+i)m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3} -c_2(b_3+i)m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3} = 0\\ c_3(b_2+i)m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}+c_2(b_3+i)m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3} = 0. \end{array}\right. \] In particular, from the imaginary part we obtain the equation \begin{equation}\label{dl-1} \left\lbrace \begin{array}{l} c_3m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3} -c_2m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3} = 0\\ c_3m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}+c_2m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3} = 0 \end{array}\right. \quad \textnormal{implies} \quad \frac{m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}}{m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}}=\frac{c_3}{c_2}= -\frac{m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3}}{m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}}. \end{equation} However, by applying the Jacobi identity to the vectors $X_{\lambda_1-\lambda_2}$, $X_{\lambda_2-\lambda_3}$ and $X_{\lambda_2+\lambda_3}$ we obtain \begin{equation}\label{dl-2} \frac{m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}}{m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}} = \frac{m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3}}{m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}}. \end{equation} Since Equation \eqref{dl-2} contradicts Equation \eqref{dl-1}, then $\mathbb{J}$ is not integrable.
\\ \\ {\bf 2.} Here the $+i$-eigenspace is given by $L = \textrm{span}\lbrace (b_1+i)X_{\lambda_1-\lambda_2} +c_1 X_{\lambda_1+\lambda_2}, -c_1X_{\lambda_1-\lambda_2} ^\ast +(b_1 + i)X_{\lambda_1+\lambda_2} ^\ast, (b_2+i)X_{\lambda_2-\lambda_3} +c_2 X_{\lambda_2+\lambda_3}, -c_2X_{\lambda_2-\lambda_3} ^\ast +(b_2 + i)X_{\lambda_2+\lambda_3} ^\ast, xX_{\lambda_1-\lambda_3}+(a-i)X^\ast _{\lambda_1+\lambda_3}, -xX_{\lambda_1+\lambda_3} +(a-i)X_{\lambda_1-\lambda_3} ^\ast \rbrace$. Observe that \begin{eqnarray*} \Nij (-c_1X_{\lambda_1-\lambda_2} ^\ast +(b_1 + i)X_{\lambda_1+\lambda_2} ^\ast, (b_2+i)X_{\lambda_2-\lambda_3} +c_2 X_{\lambda_2+\lambda_3}, xX_{\lambda_1-\lambda_3}+(a-i)X^\ast _{\lambda_1+\lambda_3}) \\ =\frac{1}{2} (b_1+i)c_2xm_{\lambda_2+\lambda_3,\lambda_1-\lambda_3} \not= 0, \end{eqnarray*} which immediately implies that $\mathbb{J}$ is not integrable. \\ \\ {\bf 3.} This case is also not integrable because the $+i$-eigenspace is $L = \textrm{span}\lbrace (b_1+i)X_{\lambda_1-\lambda_2} +c_1 X_{\lambda_1+\lambda_2}, -c_1X_{\lambda_1-\lambda_2} ^\ast +(b_1 + i)X_{\lambda_1+\lambda_2} ^\ast, xX_{\lambda_2-\lambda_3} +(a-i) X^\ast _{\lambda_2+\lambda_3}, -xX_{\lambda_2+\lambda_3} +(a- i)X_{\lambda_2-\lambda_3} ^\ast, (b_2+i)X_{\lambda_1-\lambda_3}+c_2X_{\lambda_1+\lambda_3}, -c_2 X_{\lambda_1-\lambda_3} ^\ast +(b_2+i)X_{\lambda_1+\lambda_3} ^\ast \rbrace$ and we have that \begin{eqnarray*} \Nij ((b_1+i)X_{\lambda_1-\lambda_2} +c_1 X_{\lambda_1+\lambda_2}, xX_{\lambda_2-\lambda_3} +(a-i) X^\ast _{\lambda_2+\lambda_3}, -c_2 X_{\lambda_1-\lambda_3} ^\ast +(b_2+i)X_{\lambda_1+\lambda_3} ^\ast) \\ = -\frac{1}{2} xc_2(b_1+i)m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}\not= 0. \end{eqnarray*} \\ {\bf 4.} In this case we have $L = \textrm{span}\lbrace xX_{\lambda_1-\lambda_2} +(a-i) X^\ast _{\lambda_1+\lambda_2}, -xX_{\lambda_1+\lambda_2} +(a-i)X_{\lambda_1-\lambda_2} ^\ast, (b_1+i)X_{\lambda_2-\lambda_3} +c_1 X_{\lambda_2+\lambda_3}, -c_1X_{\lambda_2-\lambda_3} ^\ast +(b_1 + i)X_{\lambda_2+\lambda_3} ^\ast, (b_2+i)X_{\lambda_1-\lambda_3}+c_2X_{\lambda_1+\lambda_3}, -c_2 X_{\lambda_1-\lambda_3} ^\ast +(b_2+i)X_{\lambda_1+\lambda_3} ^\ast \rbrace$. On the one hand, we get that \begin{eqnarray*} \Nij (xX_{\lambda_1-\lambda_2} +(a-i) X^\ast _{\lambda_1+\lambda_2}, (b_1+i)X_{\lambda_2-\lambda_3} +c_1 X_{\lambda_2+\lambda_3}, -c_2 X_{\lambda_1-\lambda_3} ^\ast +(b_2+i)X_{\lambda_1+\lambda_3} ^\ast) \\ = -\frac{1}{2}x(c_2(b_1+i)m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}-c_1(b_2+i)m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}) \end{eqnarray*} and \begin{eqnarray*} \Nij (xX_{\lambda_1-\lambda_2} +(a-i) X^\ast _{\lambda_1+\lambda_2}, (b_1+i)X_{\lambda_2-\lambda_3} +c_1 X_{\lambda_2+\lambda_3}, (b_2+i)X_{\lambda_1-\lambda_3}+c_2X_{\lambda_1+\lambda_3}) \\ = \frac{1}{2}(a-i)(c_2(b_1+i)m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}+c_1(b_2+i)m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3}). \end{eqnarray*} So, if we assume that $\mathbb{J}$ is integrable then the condition $\Nij|_L=0$ implies that \[ \left\lbrace \begin{array}{l} c_2(b_1+i)m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}-c_1(b_2+i)m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3} =0 \\ c_2(b_1+i)m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}+c_1(b_2+i)m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3} = 0. \end{array}\right.
\] In particular, \begin{equation}\label{dl-3} \left\lbrace \begin{array}{l} c_2m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}-c_1m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3} =0 \\ c_2m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}+c_1m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3} = 0 \end{array}\right. \quad \textnormal{implies} \quad \frac{m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}}{m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}}=\frac{c_2}{c_1}=-\frac{m_{\lambda_1-\lambda_3,\lambda_2+\lambda_3}}{m_{\lambda_1+\lambda_3,\lambda_2-\lambda_3}} \end{equation} which contradicts Equation \eqref{dl-2}. Thus, $\mathbb{J}$ can not be integrable. \\ \\ {\bf 5.} Here we have that $L = \textrm{span}\lbrace (b+i)X_{\lambda_1-\lambda_2} +c X_{\lambda_1+\lambda_2}, -cX_{\lambda_1-\lambda_2} ^\ast +(b+i)X_{\lambda_1+\lambda_2} ^\ast, x_1 X_{\lambda_2-\lambda_3} +(a_1 - i) X^\ast _{\lambda_2+\lambda_3}, -x_1X_{\lambda_2+\lambda_3} +(a_1 - i)X_{\lambda_2-\lambda_3} ^\ast, x_2 X_{\lambda_1-\lambda_3}+(a_2-i)X^\ast _{\lambda_1+\lambda_3}, -x_2 X_{\lambda_1+\lambda_3} +(a_2-i)X_{\lambda_1-\lambda_3} ^\ast \rbrace$. Then, we get \begin{eqnarray*} \Nij ((b+i)X_{\lambda_1-\lambda_2} +c X_{\lambda_1+\lambda_2}, x_1 X_{\lambda_2-\lambda_3} +(a_1 - i) X^\ast _{\lambda_2+\lambda_3}, -x_2 X_{\lambda_1+\lambda_3} +(a_2-i)X_{\lambda_1-\lambda_3} ^\ast) \\ =\frac{1}{2}x_1(a_2-i)(b+i)m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}\not= 0, \end{eqnarray*} and hence $\mathbb{J}$ is not integrable. \\ \\ {\bf 6.} In this case $L = \textrm{span}\lbrace x_1 X_{\lambda_1-\lambda_2} +(a_1-i) X^\ast _{\lambda_1+\lambda_2}, -x_1X_{\lambda_1+\lambda_2} +(a_1-i)X_{\lambda_1-\lambda_2} ^\ast, (b+i) X_{\lambda_2-\lambda_3} +c X_{\lambda_2+\lambda_3}, -cX_{\lambda_2-\lambda_3} ^\ast +(b+i)X_{\lambda_2+\lambda_3} ^\ast, x_2 X_{\lambda_1-\lambda_3}+(a_2-i)X^\ast _{\lambda_1+\lambda_3}, -x_2 X_{\lambda_1+\lambda_3} +(a_2-i)X_{\lambda_1-\lambda_3} ^\ast \rbrace$, then observe that \begin{eqnarray*} \Nij (x_1 X_{\lambda_1-\lambda_2} +(a_1-i) X^\ast _{\lambda_1+\lambda_2}, (b+i) X_{\lambda_2-\lambda_3} +c X_{\lambda_2+\lambda_3}, -x_2 X_{\lambda_1+\lambda_3} +(a_2-i)X_{\lambda_1-\lambda_3} ^\ast) \\ = \frac{1}{2}(b+i)(x_1(a_2-i)m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}-x_2(a_1-i)m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}) \end{eqnarray*} and \begin{eqnarray*} \Nij (x_1 X_{\lambda_1-\lambda_2} +(a_1-i) X^\ast _{\lambda_1+\lambda_2}, (b+i) X_{\lambda_2-\lambda_3} +c X_{\lambda_2+\lambda_3}, x_2 X_{\lambda_1-\lambda_3}+(a_2-i)X^\ast _{\lambda_1+\lambda_3}) \\ =\frac{1}{2}c(x_1(a_2-i)m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}+x_2(a_1-i)m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3}). \end{eqnarray*} If $\Nij|_L =0$, then we have \[ \left\lbrace \begin{array}{l} x_1(a_2-i)m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}-x_2(a_1-i)m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3} =0\\ x_1(a_2-i)m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}+x_2(a_1-i)m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3} =0. \end{array}\right.
\] From the imaginary part we obtain that \begin{equation}\label{dl-4} \left\lbrace \begin{array}{l} x_1m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}-x_2m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3} =0\\ x_1m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}+x_2m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3} =0 \end{array}\right.\quad \textnormal{implies} \quad \frac{m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}}{m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}}=\frac{x_1}{x_2}= -\frac{m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3}}{m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}} \end{equation} a fact which also contradicts Equation \eqref{dl-2} and hence $\mathbb{J}$ can not be integrable. \\ \\ {\bf 7.} Here the $+i$-eigenspace is $L = \textrm{span}\lbrace x_1 X_{\lambda_1-\lambda_2} +(a_1-i) X^\ast _{\lambda_1+\lambda_2}, -x_1X_{\lambda_1+\lambda_2} +(a_1-i)X_{\lambda_1-\lambda_2} ^\ast, x_2 X_{\lambda_2-\lambda_3} +(a_2-i) X^\ast _{\lambda_2+\lambda_3}, -x_2X_{\lambda_2+\lambda_3} +(a_2-i) X_{\lambda_2-\lambda_3} ^\ast, (b+i) X_{\lambda_1-\lambda_3}+cX _{\lambda_1+\lambda_3}, -c X_{\lambda_1-\lambda_3} ^\ast +(b+i)X_{\lambda_1+\lambda_3} ^\ast \rbrace$ and \begin{eqnarray*} \Nij (x_1 X_{\lambda_1-\lambda_2} +(a_1-i) X^\ast _{\lambda_1+\lambda_2}, x_2 X_{\lambda_2-\lambda_3} +(a_2-i) X^\ast _{\lambda_2+\lambda_3}, -c X_{\lambda_1-\lambda_3} ^\ast +(b+i)X_{\lambda_1+\lambda_3} ^\ast) \\ =-\frac{1}{2}x_1x_2cm_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}\not= 0. \end{eqnarray*} Therefore $\mathbb{J}$ is not integrable. \\ \\ {\bf 8.} Finally, we have $L = \textrm{span}\lbrace x_1 X_{\lambda_1-\lambda_2} +(a_1-i) X^\ast _{\lambda_1+\lambda_2}, -x_1X_{\lambda_1+\lambda_2} +(a_1-i)X_{\lambda_1-\lambda_2} ^\ast, x_2 X_{\lambda_2-\lambda_3} +(a_2-i) X^\ast _{\lambda_2+\lambda_3}, -x_2X_{\lambda_2+\lambda_3} +(a_2-i) X_{\lambda_2-\lambda_3} ^\ast, x_3 X_{\lambda_1-\lambda_3}+(a_3-i)X _{\lambda_1+\lambda_3} ^\ast, -x_3 X_{\lambda_1+\lambda_3} +(a_3-i)X_{\lambda_1-\lambda_3} ^\ast \rbrace$. Note that \begin{eqnarray*} \Nij (x_1 X_{\lambda_1-\lambda_2} +(a_1-i) X^\ast _{\lambda_1+\lambda_2}, x_2 X_{\lambda_2-\lambda_3} +(a_2-i) X^\ast _{\lambda_2+\lambda_3}, -x_3 X_{\lambda_1+\lambda_3} +(a_3-i)X_{\lambda_1-\lambda_3} ^\ast) \\ = \frac{1}{2}x_2(x_1 (a_3-i)m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}-x_3(a_1-i)m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}), \end{eqnarray*} and \begin{eqnarray*} \Nij (x_1 X_{\lambda_1-\lambda_2} +(a_1-i) X^\ast _{\lambda_1+\lambda_2}, -x_2X_{\lambda_2+\lambda_3} +(a_2-i) X_{\lambda_2-\lambda_3} ^\ast, x_3 X_{\lambda_1-\lambda_3}+(a_3-i)X _{\lambda_1+\lambda_3} ^\ast) \\ =-\frac{1}{2}x_2(x_1(a_3-i)m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}+x_3(a_1-i)m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3}). \end{eqnarray*} Now, if we assume that $\mathbb{J}$ is integrable then we must have \[ \left\lbrace \begin{array}{l} x_1 (a_3-i)m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}-x_3(a_1-i)m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3} = 0\\ x_1(a_3-i)m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}+x_3(a_1-i)m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3} = 0. \end{array}\right.
\] From the imaginary part we get that \begin{equation}\label{dl-5} \left\lbrace \begin{array}{l} x_1 m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}-x_3 m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3} = 0\\ x_1 m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}+x_3 m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3} = 0 \end{array}\right.\quad \textnormal{implies} \quad \frac{m_{\lambda_1-\lambda_2,\lambda_2+\lambda_3}}{m_{\lambda_2+\lambda_3,\lambda_1-\lambda_3}}=-\frac{x_3}{x_1} =-\frac{m_{\lambda_1-\lambda_2,\lambda_2-\lambda_3}}{m_{\lambda_2-\lambda_3,\lambda_1+\lambda_3}}. \end{equation} It is simple to see that Equation \eqref{dl-5} contradicts Equation \eqref{dl-2} and hence $\mathbb{J}$ can not be integrable. \begin{proposition} Let $\mathbb{F}$ be the maximal flag manifold of type $D_l$ with $l\geq 5$. Then, the invariant generalized almost complex structures on $\mathbb{F}$ are not integrable. \end{proposition} For the remaining case we will use the expression described in Equation \eqref{CourantBracket} for the Courant bracket. More specifically, we will use the fact that a generalized complex structure $\mathbb{J}$ is integrable if and only if $N_\mathbb{J} \equiv 0$, where the `Nijenhuis tensor' is given by \begin{equation} N_{\mathbb{J}} (A,B) = [\mathbb{J}A,\mathbb{J}B]-[A,B]-\mathbb{J}[A,\mathbb{J}B]-\mathbb{J}[\mathbb{J}A,B], \end{equation} with $[\cdot,\cdot]$ denoting the Courant bracket. To make the computations easier recall that the Courant bracket for the basic vectors of a Weyl basis is given by \[ [X_{\alpha},X_{\beta}] = \left\lbrace \begin{array}{ll} m_{\alpha,\beta}X_{\alpha+\beta}, \quad \textrm{if } \alpha+\beta \textrm{ is a root} \\ 0, \quad \textrm{otherwise} \end{array}\right. \quad [X_{\alpha},X^\ast _{\beta}] = \left\lbrace \begin{array}{ll} m_{\alpha,-\beta}X^\ast _{\beta-\alpha}, \quad \textrm{if } \beta-\alpha \textrm{ is a root} \\ 0, \quad \textrm{otherwise} \end{array} \right. \] and $[X^\ast _{\alpha},X^\ast _{\beta}] = 0$.\\ \noindent {\bf Case $C_l$ with $l\geq 6$ and $l$ even.} The $M$-equivalence classes are given by \[ \lbrace \lambda_i-\lambda_s,\lambda_i+\lambda_s \rbrace,\ 1\leq i<s\leq l\quad \textnormal{and}\quad \lbrace 2\lambda_1,\cdots,2\lambda_l \rbrace. \] Let $\mathbb{J}$ be an invariant generalized almost complex structure on $\mathbb{F}$. Observe that $\mathbb{J}_{[\lambda_i-\lambda_s]}$ can be either of complex or noncomplex type, for all $1\leq i<s\leq l$. Therefore, we will see what happens with the integrability of $\mathbb{J}$ when analyzing $\mathbb{J}_{[\lambda_1-\lambda_2]}$ in both cases, complex and noncomplex. \\ \\ {\bf 1.} Suppose that $\mathbb{J}_{[\lambda_1-\lambda_2]}$ is of complex type and let us write $\mathbb{J}(X_{2\lambda_2}) = \sum_{j=1} ^l c_jX_{2\lambda_j} + \sum_{j=1} ^l d_jX^\ast _{2\lambda_j}$.
Assuming the notation of Remark \ref{2DimensionalCases} we have \begin{eqnarray*} N_{\mathbb{J}}(X_{\lambda_1+\lambda_2},X_{2\lambda_2}) & = & \left[ -\frac{(1+b^2)}{c}X_{\lambda_1-\lambda_2}-bX_{\lambda_1+\lambda_2},\sum_{j=1} ^l c_jX_{2\lambda_j} + \sum_{j=1} ^l d_jX^\ast _{2\lambda_j}\right] \\ & & -\mathbb{J}\left[X_{\lambda_1+\lambda_2},\sum_{j=1} ^l c_jX_{2\lambda_j} + \sum_{j=1} ^l d_jX^\ast _{2\lambda_j}\right] - \mathbb{J}\left[-\frac{(1+b^2)}{c}X_{\lambda_1-\lambda_2}-bX_{\lambda_1+\lambda_2},X_{2\lambda_2}\right]\\ & = & -\frac{(1+b^2)}{c}c_2m_{\lambda_1-\lambda_2,2\lambda_2}X_{\lambda_1+\lambda_2}-\frac{(1+b^2)}{c}d_1m_{\lambda_1-\lambda_2,-2\lambda_1}X_{\lambda_1+\lambda_2} ^\ast \\ & & - bd_1m_{\lambda_1+\lambda_2,-2\lambda_1}X_{\lambda_1-\lambda_2} ^\ast -\mathbb{J}\left( d_1m_{\lambda_1+\lambda_2,-2\lambda_1} X_{\lambda_1-\lambda_2} ^\ast \right) \\ & & - \mathbb{J}\left( -\frac{(1+b^2)}{c}m_{\lambda_1-\lambda_2,2\lambda_2}X_{\lambda_1+\lambda_2} \right)\\ & = & -\frac{(1+b^2)}{c}c_2m_{\lambda_1-\lambda_2,2\lambda_2}X_{\lambda_1+\lambda_2}-\frac{(1+b^2)}{c}d_1m_{\lambda_1-\lambda_2,-2\lambda_1}X_{\lambda_1+\lambda_2} ^\ast \\ & & -bd_1m_{\lambda_1+\lambda_2,-2\lambda_1}X_{\lambda_1-\lambda_2} ^\ast - d_1m_{\lambda_1+\lambda_2,-2\lambda_1} \left( -bX_{\lambda_1-\lambda_2} ^\ast + \frac{1+b^2}{c}X_{\lambda_1+\lambda_2} ^\ast \right) \\ & & + \frac{(1+b^2)}{c}m_{\lambda_1-\lambda_2,2\lambda_2} \left( -\frac{(1+b^2)}{c} X_{\lambda_1-\lambda_2} - bX_{\lambda_1+\lambda_2} \right)\\ & = & -\frac{(1+b^2)}{c}m_{\lambda_1-\lambda_2,2\lambda_2}(c_2+b)X_{\lambda_1+\lambda_2} - 2\frac{(1+b^2)}{c}d_1m_{\lambda_1-\lambda_2,-2\lambda_1}X^\ast _{\lambda_1+\lambda_2} \\ & & - \left( \frac{(1+b^2)}{c}\right)^2 m_{\lambda_1-\lambda_2,2\lambda_2}X_{\lambda_1-\lambda_2} \neq 0, \end{eqnarray*} since $\frac{(1+b^2)}{c} \neq 0$ and $m_{\lambda_1-\lambda_2,2\lambda_2}\neq 0$. \\ \\ {\bf 2.} Now suppose $\mathbb{J}_{[\lambda_1-\lambda_2]}$ of noncomplex type. Thus, \begin{eqnarray*} N_{\mathbb{J}}(X_{\lambda_1-\lambda_2},X_{\lambda_1+\lambda_2}) & = & [\mathbb{J}X_{\lambda_1-\lambda_2},\mathbb{J}X_{\lambda_1+\lambda_2}]- [X_{\lambda_1-\lambda_2},X_{\lambda_1+\lambda_2}]-\mathbb{J}[X_{\lambda_1-\lambda_2},\mathbb{J}X_{\lambda_1+\lambda_2}]\\ & & -\mathbb{J}[\mathbb{J}X_{\lambda_1-\lambda_2},X_{\lambda_1+\lambda_2}]\\ & = & [aX_{\lambda_1-\lambda_2}+yX_{\lambda_1+\lambda_2} ^\ast ,aX_{\lambda_1+\lambda_2}-yX_{\lambda_1-\lambda_2} ^\ast] - m_{\lambda_1-\lambda_2,\lambda_1+\lambda_2}X_{2\lambda_1}\\ & & -\mathbb{J}[X_{\lambda_1-\lambda_2},aX_{\lambda_1+\lambda_2}-yX_{\lambda_1-\lambda_2} ^\ast]-\mathbb{J}[aX_{\lambda_1-\lambda_2}+yX_{\lambda_1+\lambda_2} ^\ast, X_{\lambda_1+\lambda_2}]\\ & = & a^2 m_{\lambda_1-\lambda_2 ,\lambda_1+\lambda_2}X_{2\lambda_1} - m_{\lambda_1-\lambda_2 ,\lambda_1+\lambda_2}X_{2\lambda_1}-am_{\lambda_1-\lambda_2,\lambda_1+\lambda_2}\mathbb{J}X_{2\lambda_1}\\ & & - am_{\lambda_1-\lambda_2,\lambda_1+\lambda_2}\mathbb{J}X_{2\lambda_1} \\ & = & (a^2 -1)m_{\lambda_1-\lambda_2,\lambda_1+\lambda_2}X_{2\lambda_1} - 2am_{\lambda_1-\lambda_2,\lambda_1+\lambda_2}\mathbb{J}X_{2\lambda_1}. \end{eqnarray*} Observe that if $a=0$, then $N_{\mathbb{J}}(X_{\lambda_1-\lambda_2},X_{\lambda_1+\lambda_2}) = -m_{\lambda_1-\lambda_2,\lambda_1+\lambda_2}X_{2\lambda_1} \neq 0$. Let us now suppose that $a\neq 0$ and write $\mathbb{J}X_{2\lambda_1} = \sum_{j=1} ^l c_jX_{2\lambda_j} + \sum_{j=1} ^l d_jX_{2\lambda_j} ^\ast$.
If $N_\mathbb{J} \equiv 0$, then we obtain from the expression obtained above for $N_{\mathbb{J}}(X_{\lambda_1-\lambda_2},X_{\lambda_1+\lambda_2})$ that $c_j = 0$ for $j=2,3,\cdots,l$ and $d_j = 0$ for $j=1,2\cdots,l$. Therefore, we have that $\mathbb{J}X_{2\lambda_1} = c_1X_{2\lambda_1}$ which is a contradiction since $\mathbb{J}^2 = -1$ and $c_1 \in \mathbb{R}$. So, we conclude: \begin{proposition} Let $\mathbb{F}$ be the maximal flag manifold of type $C_l$, with $l\geq 6$ and $l$ even. Then, the invariant generalized almost complex structures on $\mathbb{F}$ are not integrable. \end{proposition} Summing up, we have proved that: \begin{theorem}\label{NoIntegrables} No $GM_2$-maximal real flag manifold admits integrable $K$-invariant generalized almost complex structures. \end{theorem} \section{Generalized geometry on the special cases $B_2$, $G_2$, $A_3$, and $D_l$ with $l\geq 5$}\label{S:5} Motivated by the study of several geometric aspects involving invariant generalized complex structures on complex flag manifolds developed in \cite{GVV}, in this section we give a concrete description of the generalized geometry on those maximal real flag manifolds of type $B_2$, $G_2$, $A_3$, and $D_l$ with $l\geq 5$. What makes these cases special among all maximal real flag manifolds is the fact that $\dim V_{[\alpha]_M}=2$ and because of this, for a fixed root $\alpha\in \Pi^{-}$, a generalized complex structure $\mathbb{J}_{[\alpha]_M}$ on $V_{[\alpha]_M}$ is given by either the structure $\mathcal{J}^c_{[\alpha]_M}$ or $\mathcal{J}^{nc}_{[\alpha]_M}$ introduced in Remark \ref{2DimensionalCases}. In the sequel $\mathbb{F}$ will denote any of the maximal real flag manifolds determined by the special cases mentioned above. Consider the set $\mathcal M_a(\mathbb{F})$ of all invariant generalized almost complex structures on $\mathbb F$. For a fixed root $\alpha \in \Pi^-$, let $\mathcal M_\alpha(\mathbb{F})$ be the restriction of $\mathcal M_a(\mathbb{F})$ to the subspace $V_{[\alpha]_M}$. Thus, the following result is clear: \begin{lemma} \label{Aalpha} $\mathcal M_\alpha(\mathbb{F})$ is a disjoint union of: \begin{enumerate} \item[$\iota$.] a 2-dimensional family of structures of noncomplex type parametrized by the real algebraic surface cut out by $a_\alpha^2-x_\alpha y_\alpha =-1$ in $\mathbb R^3$, and \item[$\iota\iota$.] a 2-dimensional family of structures of complex type. \end{enumerate} \end{lemma} It is worth mentioning that this clearly differs from what was known for maximal complex flags where there were only 2 isolated structures of complex type inside $\mathcal M_\alpha(\mathbb{F})$; compare \cite{GVV}. \subsection{Effects of the action by invariant $B$-transformations} Let $\alpha \in \Pi^-$ be a fixed root. First of all, let us look at the effects of the action by $B$-transformations on the generalized complex structures on $V_{[\alpha]_M}$ which are of noncomplex type. Similar to how it was done in \cite{GVV}, if $\mathbb{J}_{[\alpha]_M}=\mathcal{J}^{nc}_{[\alpha]_M}$ is of noncomplex type with $a=0$ then because the equation $xy=1$ we get a structure of symplectic type since \begin{equation}\label{ECType1} \mathbb{J}_{[\alpha]_M}=\left( \begin{array}{cc} 0 & -\omega_\alpha^{-1}\\ \omega_\alpha & 0 \end{array} \right), \end{equation} where $\omega_\alpha=\dfrac{1}{x}\left(\begin{array}{cc} 0 & -1\\ 1 & 0 \end{array} \right)\approx \dfrac{1}{x}X_\alpha^\ast\wedge X_\beta^\ast$, with $\alpha\sim_M \beta$, defines a symplectic form on $V_{[\alpha]_M}$. 
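Note that \eqref{ECType1} is nothing but the matrix $\mathcal{J}^{nc}_{[\alpha]_M}$ of Remark \ref{2DimensionalCases} written for $a=0$ and $y=1/x$; we record this elementary identification for the reader's convenience. Indeed, in the basis $(X_\alpha,X_\beta,X^\ast_\alpha,X^\ast_\beta)$ one has $$\omega_\alpha=\dfrac{1}{x}\left( \begin{array}{cc} 0 & -1\\ 1 & 0 \end{array} \right)=\mathcal{Y}_{\alpha}\quad \textnormal{and}\quad -\omega_\alpha^{-1}=\left( \begin{array}{cc} 0 & -x\\ x & 0 \end{array} \right)=\mathcal{X}_{\alpha},$$ so that \eqref{ECType1} is exactly the standard block form of a generalized complex structure of symplectic type.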
The structure showed in \eqref{ECType1} will be denoted by $\mathbb{J}_{\omega_\alpha}$. In general, if $\mathbb{J}_{[\alpha]_M}=\mathcal{J}^{nc}_{[\alpha]_M}$ is of noncomplex type verifying the equation $a^2=x y-1$ and $\mathbb{J}_{\omega_\alpha}$ is the structure of symplectic type defined as in \eqref{ECType1} using the nonzero parameter $x$, then it is simple to check that the equation $a^2=x y-1$ implies \begin{equation}\label{AllSymplectic} e^{-B_\alpha}\mathbb{J}_{\omega_\alpha}e^{B_\alpha}=\left( \begin{array}{cc} -\omega_\alpha^{-1}B_\alpha & -\omega_\alpha^{-1}\\ \omega_\alpha+B_\alpha\omega_\alpha^{-1}B_\alpha& B_\alpha\omega_\alpha^{-1} \end{array} \right)=\left( \begin{array}{cccc} a & 0 & 0 & -x\\ 0 & a & x & 0\\ 0 & -y & -a & 0\\ y & 0 & 0 & -a \end{array} \right)=\mathcal{J}^{nc}_{[\alpha]_M}, \end{equation} where $B_\alpha=\dfrac{a}{x} X_\alpha^\ast\wedge X_\beta^\ast$ with $\beta$ the other one root such that $\alpha\sim_M \beta$. In particular, $\textnormal{Type}(\mathbb{J}_{[\alpha]_M})=0$ and we have: \begin{lemma}\label{LemmaBsymplectic} Suppose that both $\mathbb{J}_{[\alpha]_M}$ and $\mathbb{J}_{[\alpha]_M}'$ are of noncomplex type with $$\mathcal{J}^{nc}_{[\alpha]_M}={\tiny\left( \begin{array}{cccc} a & 0 & 0 & -x\\ 0 & a & x & 0\\ 0 & -y & -a & 0\\ y & 0 & 0 & -a \end{array} \right)} \quad \textnormal{and}\quad \mathcal{J'}^{nc}_{[\alpha]_M}={\tiny\left( \begin{array}{cccc} a' & 0 & 0 & -x\\ 0 & a' & x & 0\\ 0 & -y' & -a' & 0\\ y' & 0 & 0 & -a' \end{array} \right)}.$$ Then, there exists a $B$-transformation $B_\alpha\in \bigwedge^2 V_{[\alpha]_M}^\ast$ such that $$e^{-B_\alpha}\mathbb{J}_\alpha e^{B_\alpha} =\mathbb{J}_\alpha'.$$ \end{lemma} \begin{proof} It is clear that $\mathbb{J}_{\omega_\alpha}=\mathbb{J}_{\omega_\alpha}'$. Thus, as consequence of the arguments given above there exist $\hat{B}_\alpha$ and $B_\alpha'$ in $\bigwedge^2 V_{[\alpha]_M}^\ast$ such that $$e^{\hat{B}_\alpha}\mathbb{J}_\alpha e^{-\hat{B}_\alpha}=e^{B_\alpha'}\mathbb{J}_\alpha' e^{-B_\alpha'}.$$ Therefore, the identity $e^{\hat{B}+B'}=e^{\hat{B}}\cdot e^{B'}$ allows us to ensure that the result follows for $B_\alpha=B_\alpha'-\hat{B}_\alpha$. \end{proof} Let us now suppose that $\mathbb{J}_{[\alpha]_M}=\mathcal{J}^{c}_{[\alpha]_M}$ is of complex type. Here we have that $\mathcal{J}^{c}_{[\alpha]_M}=\left( \begin{array}{cc} -J^c & 0\\ 0 & (J^c)^\ast \end{array} \right)$ where $J^c=\left( \begin{array}{cc} -b & \dfrac{1+b^2}{c}\\ -c & b \end{array} \right)$ with $c\neq 0$ is a complex structure on $V_{[\alpha]_M}$. In this case, a straightforward computation allows us to get that for every $B$-transformation $B\in \bigwedge^2 V_{[\alpha]_M}^\ast$ we obtain $$e^{-B}\mathcal{J}^{c}_{[\alpha]_M}e^B=\left( \begin{array}{cc} -J^c & 0\\ BJ^c+(J^c)^\ast B& (J^c)^\ast \end{array} \right)=\mathcal{J}^{c}_{[\alpha]_M}.$$ This is because $BJ^c+(J^c)^\ast B=0$ for every $B=rX_\alpha^\ast\wedge X_\beta^\ast$ with $r\in\mathbb{R}$ and $\alpha \sim_M \beta$. \begin{lemma}\label{LemmaBcomplex} If $\mathbb{J}_{[\alpha]_M}$ is of complex type, then for every $B$-transformation $B\in \bigwedge^2 V_{[\alpha]_M}^\ast$ we have $$e^{-B}\mathbb{J}_{[\alpha]_M} e^{B} =\mathbb{J}_{[\alpha]_M}.$$ \end{lemma} In other words, the action by $B$-transformations leaves fixed every element in the 2-dimensional family of generalized complex structures of complex type on $V_{[\alpha]_M}$. 
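The vanishing $BJ^c+(J^c)^\ast B=0$ used above can also be checked directly in matrix form; we record this routine computation since it makes clear that no restriction on $r$ appears. With respect to the basis $(X_\alpha,X_\beta)$, and with the same sign convention used for $\omega_\alpha$ (the opposite sign would work equally well, since the expression is linear in $B$), the $2$-form $B=rX_\alpha^\ast\wedge X_\beta^\ast$, viewed as a map $V_{[\alpha]_M}\to V_{[\alpha]_M}^\ast$, is represented by the skew-symmetric matrix $r\left( \begin{array}{cc} 0 & -1\\ 1 & 0 \end{array} \right)$, and hence $$BJ^c+(J^c)^\ast B= r\left( \begin{array}{cc} 0 & -1\\ 1 & 0 \end{array} \right)\left( \begin{array}{cc} -b & \dfrac{1+b^2}{c}\\ -c & b \end{array} \right)+\left( \begin{array}{cc} -b & -c\\ \dfrac{1+b^2}{c} & b \end{array} \right)r\left( \begin{array}{cc} 0 & -1\\ 1 & 0 \end{array} \right)= r\left( \begin{array}{cc} c & -b\\ -b & \dfrac{1+b^2}{c} \end{array} \right)+r\left( \begin{array}{cc} -c & b\\ b & -\dfrac{1+b^2}{c} \end{array} \right)=0,$$ for every $r\in\mathbb{R}$, in agreement with Lemma \ref{LemmaBcomplex}.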
Let $\displaystyle \mathfrak{M}_\alpha(\mathbb{F})=\mathcal M_\alpha(\mathbb{F})/\mathcal{B}$ denote the moduli space of generalized complex structures on $V_{[\alpha]_M}$ under $B$-transformations. Thus, we obtain: \begin{proposition}\label{Bmoduli1} The quotient space $\displaystyle \mathfrak{M}_\alpha(\mathbb{F})$ consists of 2 disjoint sets: \begin{enumerate} \item[$\iota$.] a punctured real line $\mathbb{R}^\ast$ parametrizing structures of symplectic type, and \item[$\iota\iota$.] a plane minus a line $\mathbb{R}^\ast\times \mathbb{R}$ parametrizing structures of complex type. \end{enumerate} That is $$\displaystyle \mathfrak{M}_\alpha(\mathbb{F})=\mathbb{R}^\ast \cup (\mathbb{R}^\ast\times \mathbb{R}).$$ In particular, $\mathfrak{M}_\alpha(\mathbb{F})$ admits a natural topology with which it is homotopy equivalent to $S^1$. \end{proposition} \begin{proof} On the one hand, by Lemma \ref{LemmaBcomplex} we have that structures in $\mathcal M_\alpha(\mathbb{F})$ which are of complex type are fixed points of the action by $B$-transformations. This implies that up to action by $B$-transformations the whole 2-dimensional family of structures of complex type on $V_{[\alpha]_M}$ mentioned in Lemma \ref{Aalpha} is parametrized by the plane minus a line $\mathbb{R}^\ast\times \mathbb{R}$ holding the values of $(c,b)$. On the other hand, by Lemma \ref{Aalpha}, structures in $\mathcal M_\alpha(\mathbb{F})$ which are of noncomplex type are parameterized by a real surface $a^2 -x y=-1$ in $\mathbb{R}^3$, and from equation \eqref{AllSymplectic} we conclude that every point on this surface is the image by a $B$-transformation of a generalized complex structure of symplectic type determined by $x$. Hence the quotient of this real surface by $B$-transformations reduces to a punctured line $\mathbb{R}^\ast$ holding the values of $x$. If $x \neq x'$ are distinct then we get different generalized complex structures of symplectic type. Therefore, from Lemma \ref{LemmaBsymplectic} we conclude that points on this real punctured line represent non equivalent classes. Finally, if we consider both sets $\mathbb{R}^\ast\approx \{0\}\times \mathbb{R}^\ast$ and $\mathbb{R}^\ast\times \mathbb{R}$ inside $\mathbb{R}^2$ with the subspace topology, then we may induce a natural topology to $\mathfrak{M}_\alpha(\mathbb{F})=\mathbb{R}^\ast \cup (\mathbb{R}^\ast\times \mathbb{R})$ which comes from the natural topology of $\mathbb{R}^2$. Thus, if we take the inclusions $\iota_1: \{0\}\times \mathbb{R}^\ast \hookrightarrow \mathbb{R}^2\backslash\{(0,0)\}$ and $\iota_2: \mathbb{R}^\ast\times \mathbb{R} \hookrightarrow \mathbb{R}^2\backslash\{(0,0)\}$, then using the pasting lemma we get a continuous function $f:\{0\}\times \mathbb{R}^\ast\cup(\mathbb{R}^\ast\times \mathbb{R})\to \mathbb{R}^2\backslash\{(0,0)\}$ which is actually a homeomorphism. In consequence, we may identify the quotient space $\mathfrak{M}_\alpha(\mathbb{F})\approx \mathbb{R}^2\backslash\{(0,0)\}$ and hence $\mathfrak{M}_\alpha(\mathbb{F})$ is homotopy equivalent to $S^1$. \end{proof} The proof of the next result follows the same steps used to prove \cite[Theorem 5.13]{GVV}. For this reason we will only give a few details about it. This result is essential for two reasons. The first one is because this allows us to describe the moduli space $\mathfrak{M}_a(\mathbb{F})$ for the maximal real flag manifolds $\mathbb{F}$ that we are dealing with in this section. 
The second one is because, as a consequence of this result and Lemmas \ref{LemmaBsymplectic} and \ref{LemmaBcomplex}, we may give an explicit expression for the invariant pure spinor associated to each element in $\mathcal{M}_a(\mathbb{F})$. \begin{proposition}\label{BModuli2} Let $\mathbb{J}$ and $\mathbb{J}'$ be two invariant generalized almost complex structures on $\mathbb{F}$ such that for each $M$-equivalence class $[\alpha]_M$ the following conditions hold true: \begin{enumerate} \item[$\iota$.] if $\mathbb{J}_{[\alpha]_M}$ is of complex type, then $\mathbb{J}_{[\alpha]_M}=\mathbb{J}_{[\alpha]_M}'$, and \item[$\iota\iota$.] if $\mathbb{J}_{[\alpha]_M}$ is of noncomplex type, then $\mathbb{J}_{[\alpha]_M}'$ is also of noncomplex type with $$\mathcal{J}_{[\alpha]_M}^{nc}=\left( \begin{array}{cc} \mathcal{A}_{\alpha} & \mathcal{X}_{\alpha} \\ \mathcal{Y}_{\alpha} & -\mathcal{A}_{\alpha} \end{array} \right) \quad \textnormal{and}\quad \mathcal{J'}_{[\alpha]_M}^{nc}=\left( \begin{array}{cc} \mathcal{A}_{\alpha}' & \mathcal{X}_{\alpha} \\ \mathcal{Y}_{\alpha}' & -\mathcal{A}_{\alpha}' \end{array} \right).$$ \end{enumerate} Then there exists a $B$-transformation $B\in \wedge^2 (\mathfrak{n}^-)^\ast$ such that $$e^{-B}\mathbb{J}e^{B}=\mathbb{J}'.$$ \end{proposition} \begin{proof} Suppose that $\displaystyle \Pi^-=\bigcup_{j=1}^d[\alpha_j]_M$ where $d$ is the number of $M$-equivalence classes. For each $M$-equivalence class $[\alpha_j]_M$ we set $$\mathbb{J}_{[\alpha_j]_M}=\left( \begin{array}{cc} \mathcal{A}_{\alpha_j} & \mathcal{X}_{\alpha_j} \\ \mathcal{Y}_{\alpha_j} & -\mathcal{A}_{\alpha_j} \end{array} \right) \quad \textnormal{and}\quad \mathbb{J}_{[\alpha_j]_M}'=\left( \begin{array}{cc} \mathcal{A}_{\alpha_j}' & \mathcal{X}_{\alpha_j} \\ \mathcal{Y}_{\alpha_j}' & -\mathcal{A}_{\alpha_j}' \end{array} \right),$$ where, by hypothesis, if $\mathbb{J}_{[\alpha_j]_M}$ is of complex type, then $\mathbb{J}_{[\alpha_j]_M}=\mathbb{J}_{[\alpha_j]_M}'=\mathcal{J}_{[\alpha_j]_M}^{c}$, which implies that $\mathcal{A}_{\alpha_j}=\mathcal{A}_{\alpha_j}'=-J^{c}$ and $-\mathcal{A}_{\alpha_j}=-\mathcal{A}_{\alpha_j}'=(J^{c})^\ast$ and $\mathcal{X}_{\alpha_j}=\mathcal{Y}_{\alpha_j}'=\mathcal{Y}_{\alpha_j}=0$. Otherwise, if $\mathbb{J}_{[\alpha_j]_M}$ is of noncomplex type, then $\mathbb{J}_{[\alpha_j]_M}'$ is also of noncomplex type. By Lemmas \ref{LemmaBsymplectic} and \ref{LemmaBcomplex} we have that for all $j=1,2,\cdots,d$ there exist $B$-transformations $B_j\in\bigwedge^2V_{[\alpha_j]_M}^\ast$ such that \begin{equation}\label{tired} e^{-B_j}\mathbb{J}_{[\alpha_j]_M} e^{B_j} =\mathbb{J}_{[\alpha_j]_M}'. \end{equation} If we define $B\in\bigwedge^2(\mathfrak{n}^-)^\ast$ by $B={\tiny \left( \begin{array}{cccc} B_{1} & & & \\ & B_{2}& & \\ & & \ddots & \\ & & & B_{d} \end{array} \right)}$, then setting $$\mathbb{J}={\tiny\left( \begin{array}{ccccccccc} \mathcal{A}_{\alpha_1} & & & & \mathcal{X}_{\alpha_1} & & \\ & \mathcal{A}_{\alpha_2} & & & & \mathcal{X}_{\alpha_2}& \\ & & \ddots & & & &\ddots \\ & & & \mathcal{A}_{\alpha_d} & & & & \mathcal{X}_{\alpha_d} \\ \mathcal{Y}_{\alpha_1} & & & & -\mathcal{A}_{\alpha_1} & &\\ & \mathcal{Y}_{\alpha_2} & & & & -\mathcal{A}_{\alpha_2} &\\ & & \ddots & & & &\ddots\\ & & &\mathcal{Y}_{\alpha_d} & & & & -\mathcal{A}_{\alpha_d}\\ \end{array} \right)},$$ the identities deduced from \eqref{tired} imply that $e^{-B}\mathbb{J}e^{B}=\mathbb{J}'$.
\end{proof} As an immediate consequence of Propositions \ref{Bmoduli1} and \ref{BModuli2} we obtain: \begin{corollary}\label{BModuli2.1} Suppose that $\displaystyle \Pi^-=\bigcup_{j=1}^d[\alpha_j]_M$ where $d$ is the number of $M$-equivalence classes. Then $$\mathfrak M_a (\mathbb F)= \prod_{[\alpha_j]_M\subset\Pi^-} \mathfrak{M}_{\alpha_j}(\mathbb{F})=(\mathbb{R}^\ast \cup (\mathbb{R}^\ast\times \mathbb{R}))_{\alpha_1} \times \cdots \times (\mathbb{R}^\ast \cup (\mathbb{R}^\ast\times \mathbb{R}))_{\alpha_d}.$$ In particular, $\mathfrak M_a (\mathbb F)$ admits a natural topology induced from $\mathbb{R}^{2d}$ with which it is homotopy equivalent to the $d$-torus $\mathbb{T}^d$. \end{corollary} Let us decompose the set of roots $\Pi^-= \Pi^-_{nc}\cup\Pi^-_{c}$ where $\mathbb{J}_{[\alpha]_M}=\mathcal{J}_{[\alpha]_M}^{cn}$ is of noncomplex type for all $[\alpha]_M\subset \Pi^-_{nc}$ and $\mathbb{J}_{[\alpha]_M}=\mathcal{J}_{[\alpha]_M}^{c}$ is of complex type for all $[\alpha]_M\subset \Pi^-_{c}$. It is clear that because of the invariance all our generalized almost complex structures $\mathbb{J}$ are regular. In particular, $\textnormal{Type}(\mathbb{J})=\textnormal{Type}(\mathbb{J})_{b_0}=\vert \Pi^-_{c}/\sim_M\vert$. From previous results, when $\Pi^-_{c}=\emptyset$ we get that $\mathbb{J}$ is a $B$-transformation of a structure of symplectic type and when $\Pi^-_{c}=\Pi^-$ we obtain that $\mathbb{J}$ is of complex type. Otherwise, when $\Pi^-_{nc}\subset\Pi^-$ is nonempty we get a generalized almost complex structure which is neither complex nor symplectic. Since finite products of generalized almost complex manifolds are still generalized complex with the obvious induced structure \cite{H,G3}, as an immediate consequence of Lemmas \ref{LemmaBsymplectic}, \ref{LemmaBcomplex} and Proposition \ref{BModuli2} we get an explicit expression for the invariant pure spinor associated to each structure in $\mathcal{M}_a(\mathbb{F})$. \begin{corollary}\label{PureSpinor} Let $\mathbb{J}$ be an invariant generalized almost complex structure on $\mathbb{F}$. Then the invariant pure spinor line $K_\mathcal{L}<\bigwedge^\bullet (\mathfrak{n}^-)^\ast\otimes \mathbb{C}$ associated to $\mathbb{J}$ is generated by $$\varphi=e^{\sum_{[\alpha]_M\subset \Pi^-_{nc}}(B_\alpha +i\omega_\alpha)}\bigwedge_{[\alpha]_M\subset \Pi^-_{c}}\Omega_\alpha,$$ where $B_\alpha=\dfrac{a}{x}X_\alpha^\ast\wedge X_\beta^\ast$ and $\omega_\alpha=\dfrac{1}{x}X_\alpha^\ast\wedge X_\beta^\ast$ with $\alpha\sim_M \beta$ for all $[\alpha]_M\subset \Pi^-_{nc}$ and $\Omega_\alpha\in \wedge^{1,0}V_{[\alpha]_M}^\ast$ defines a complex structure on $V_{[\alpha]_M}$ for all $[\alpha]_M\subset \Pi^-_{c}$. \end{corollary} As $\varphi$ is initially defined on $T_{b_0}\mathbb{F}=\mathfrak{n}^-$ we may also use invariance to define $\varphi$ on $\mathbb{F}$ as follows. Assume that $\varphi\in \bigwedge^r (\mathfrak{n}^-)^\ast\otimes \mathbb{C}$. Thus, at $x=gM\in \mathbb{F}=K/M$ for some $g\in K$ we define $$\widetilde{\varphi}_x(X_1(x),\cdots, X_r(x))=\varphi(\Ad(g^{-1})_{\ast,x}X_1(x),\cdots,\Ad(g^{-1})_{\ast,x}X_r(x)).$$ In this case $\widetilde{\varphi}\in \bigwedge^r T^\ast\mathbb{F}\otimes \mathbb{C}$ would be a pure spinor determined by $\mathbb{J}: \mathbb{TF}\to \mathbb{TF}$ which satisfies $(\Ad(g))^\ast\widetilde{\varphi}=\widetilde{\varphi}$ for all $g\in K$. The same thing that we did with $\varphi$ can be done with the $B$-transformation constructed in Proposition \ref{BModuli2}. 
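As an illustration (a remark added here for convenience; it is not part of the original statement), when $\Pi^-$ consists of a single $M$-equivalence class $[\alpha]_M=\lbrace\alpha,\beta\rbrace$ of noncomplex type, the $2$-form $B_\alpha+i\omega_\alpha$ is decomposable, so its wedge square vanishes and the exponential in Corollary \ref{PureSpinor} truncates after the linear term: $$\varphi=e^{B_\alpha+i\omega_\alpha}=1+\dfrac{a+i}{x}\,X_\alpha^\ast\wedge X_\beta^\ast,$$ which is the familiar pure spinor $e^{B+i\omega}$ of a $B$-field transform of a symplectic structure.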
\subsection{Invariant generalized almost Hermitian structures} Let us now classify the invariant generalized almost Hermitian structures on $\mathbb{F}$. Given the requirement of invariance we need to find pairs of commuting invariant generalized almost complex structures $(\mathbb{J},\mathbb{J'})$ such that $G:=-\mathbb{J}\mathbb{J'}$ defines a positive definite metric on $\mathfrak{n}^-\oplus (\mathfrak{n}^-)^\ast$. Because every invariant generalized almost complex structure on $\mathbb{F}$ has the form $\displaystyle \mathbb{J} = \sum_{[\alpha]_M} \mathbb{J}_{[\alpha]_M}$, we just need to determinate those commuting pairs $(\mathbb{J}_{[\alpha]_M},\mathbb{J'}_{[\alpha]_M})$ such that $G_\alpha=-\mathbb{J}_{[\alpha]_M}\mathbb{J'}_{[\alpha]_M}$ defines a positive definite metric on $V_{[\alpha]_M}\oplus V_{[\alpha]_M}^\ast$ of signature $(2,2)$ for all $[\alpha]_M\subset \Pi^-$. Thus, latter requirement allows us to conclude that we only have to work with pairs of the form $(\mathcal{J}^c_{[\alpha]_M},\mathcal{J}^{nc}_{[\alpha]_M})$. Let $\alpha\in\Pi^-$ be a fixed root. After a straightforward computation we may easily see that $\mathcal{J}^c_{[\alpha]_M}$ and $\mathcal{J}^{nc}_{[\alpha]_M}$ always commute. Indeed, \begin{equation}\label{GKhaler1} \mathcal{J}^c_{[\alpha]_M}\mathcal{J}^{nc}_{[\alpha]_M}={\tiny\left( \begin{array}{cccc} ab & -\dfrac{a(1+b^2)}{c} & -\dfrac{x(1+b^2)}{c} & -bx\\ ac & -ab & -bx & -cx\\ -cy & by & ab & ac\\ by & -\dfrac{y(1+b^2)}{c} & -\dfrac{a(1+b^2)}{c} & -ab \end{array} \right)}=\mathcal{J}^{nc}_{[\alpha]_M}\mathcal{J}^c_{[\alpha]_M}. \end{equation} We will mainly use Proposition \ref{ModuliM}, Lemma \ref{LemmaBsymplectic}, and the fact concluded from Equation \eqref{AllSymplectic}. Let $\mathbb{J}_{\omega_\alpha}$ be the structure of symplectic type defined by means of $\mathcal{J}^{nc}_{[\alpha]_M}$ using the parameter $x$ (look at Expression \eqref{ECType1}) and let us consider \begin{equation}\label{GKhaler1.1} \widetilde{G}_\alpha:=-\mathbb{J}_{\omega_\alpha}\mathcal{J}^c_{[\alpha]_M}=-\mathcal{J}^c_{[\alpha]_M}\mathbb{J}_{\omega_\alpha}={\tiny\left( \begin{array}{cccc} 0 & 0 & \dfrac{x(1+b^2)}{c} & bx\\ 0 & 0 & bx & cx\\ \dfrac{c}{x} & -\dfrac{b}{x} & 0 & 0\\ -\dfrac{b}{x} & \dfrac{1+b^2}{cx} & 0 & 0 \end{array} \right)}. \end{equation} It is simple to check that $\widetilde{G}_\alpha=\left( \begin{array}{cc} 0 & g^{-1}\\ g & 0 \end{array} \right)$ where $g=\left( \begin{array}{cc} \dfrac{c}{x} & -\dfrac{b}{x}\\ -\dfrac{b}{x} & \dfrac{1+b^2}{cx} \end{array} \right)$. \begin{lemma}\label{GKhaler2} The matrix $g$ allows us to define an inner product on $V_{[\alpha]_M}$ if and only if $cx>0$. \end{lemma} \begin{proof} Since $g$ is clearly symmetric, we only need to look at the conditions under which the $2$ eigenvalues of $g$ are positive. They are given by $$\lambda_1=\dfrac{c^2+b^2+1+\sqrt{(c^2+b^2+1)^2-4c^2}}{2cx}\quad \textnormal{and} \quad \lambda_2=\dfrac{c^2+b^2+1-\sqrt{(c^2+b^2+1)^2-4c^2}}{2cx}.$$ Recall that both $x$ and $c$ are nonzero. Obviously $\lambda_1>0$ if and only if $cx>0$ and because $c^2+b^2+1\geq \sqrt{(c^2+b^2+1)^2-4c^2}$ always holds true, this is the same condition for $\lambda_2$ being positive. \end{proof} Given the generalized complex structures $\mathcal{J}^c_{[\alpha]_M}$ and $\mathcal{J}^{nc}_{[\alpha]_M}$, we denote them respectively by $(\mathcal{J}^c_{[\alpha]_M})^+$ and $(\mathcal{J}^{nc}_{[\alpha]_M})^+$ if $c>0$ and $x>0$. 
Otherwise, we denote them respectively by $(\mathcal{J}^c_{[\alpha]_M})^-$ and $(\mathcal{J}^{nc}_{[\alpha]_M})^-$. So, in terms of this notation we have: \begin{proposition}\label{GKhaler3} Let $(\mathbb{J},\mathbb{J}')$ be an invariant generalized almost Hermitian structure on $\mathbb{F}$. Then for every $M$-equivalence class $[\alpha]_M\subset \Pi^-$, the pair $(\mathbb{J}_{[\alpha]_M},\mathbb{J}_{[\alpha]_M}')$ takes one of the following values \begin{center} \begin{tabular}{c|c} $\mathbb{J}_{[\alpha]_M}$ & $\mathbb{J}_{[\alpha]_M}'$\\ \hline $(\mathcal{J}^c_{[\alpha]_M})^+$ & $(\mathcal{J'}^{nc}_{[\alpha]_M})^+$ \\ $(\mathcal{J}^{nc}_{[\alpha]_M})^+$ & $(\mathcal{J'}^c_{[\alpha]_M})^+$ \\ $(\mathcal{J}^c_{[\alpha]_M})^-$ & $(\mathcal{J'}^{nc}_{[\alpha]_M})^-$ \\ $(\mathcal{J}^{nc}_{[\alpha]_M})^-$ & $(\mathcal{J'}^c_{[\alpha]_M})^-$. \end{tabular} \end{center} \end{proposition} \begin{proof} As a consequence of Lemma \ref{GKhaler2}, the values in the previous table are precisely those that either the pair $(\mathbb{J}_{\omega_\alpha},\mathcal{J'}^c_{[\alpha]_M})$ or $(\mathcal{J}^c_{[\alpha]_M},\mathbb{J'}_{\omega'_\alpha})$ can take. In any case we get a generalized metric $\widetilde{G}_\alpha$ as the one given by Equation \eqref{GKhaler1.1}. By Lemma \ref{LemmaBsymplectic} and Equation \eqref{AllSymplectic} we have that $$e^{B_\alpha}\cdot (\mathbb{J}_{\omega_\alpha},\mathcal{J'}^c_{[\alpha]_M})=(e^{-B_\alpha}\mathbb{J}_{\omega_\alpha}e^{B_\alpha},e^{-B_\alpha}\mathcal{J'}^c_{[\alpha]_M}e^{B_\alpha})=(\mathbb{J}_{[\alpha]_M},\mathcal{J'}^c_{[\alpha]_M})=(\mathbb{J}_{[\alpha]_M},\mathbb{J'}_{[\alpha]_M}),$$ for $B_\alpha=\dfrac{a}{x}X_\alpha^\ast\wedge X_\beta^\ast$ with $\alpha\sim_M \beta$. Analogously, $e^{B'_\alpha}\cdot(\mathcal{J}^c_{[\alpha]_M},\mathbb{J'}_{\omega'_\alpha})=(\mathbb{J}_{[\alpha]_M},\mathbb{J'}_{[\alpha]_M})$ for $B'_\alpha=\dfrac{a'}{x'}X_\alpha^\ast\wedge X_\beta^\ast$. Therefore, by Proposition \ref{ModuliM}, the generalized metric associated to the pair $(\mathbb{J}_{[\alpha]_M},\mathbb{J'}_{[\alpha]_M})$ is either $G_\alpha=e^{-B_\alpha}\widetilde{G}_\alpha e^{B_\alpha}$ or $G_\alpha'=e^{-B'_\alpha}\widetilde{G'}_\alpha e^{B'_\alpha}$ and, in any case, we obtain precisely minus the matrix given in Equation \eqref{GKhaler1}. The last claim is a consequence of the constraints $a^2=xy-1$ or $a'^2=x'y'-1$ for the structures of noncomplex type. So, the result follows. \end{proof} Let $\displaystyle \mathfrak{K}_\alpha(\mathbb{F})=\mathcal K_\alpha(\mathbb{F})/\mathcal{B}\subset \mathfrak{M}_\alpha(\mathbb{F})\times \mathfrak{M}_\alpha(\mathbb{F})$ denote the moduli space of generalized almost Hermitian structures on $V_{[\alpha]_M}$ under $B$-transformations. Thus, we obtain \begin{corollary} Suppose that $\displaystyle \Pi^-=\bigcup_{j=1}^d[\alpha_j]_M$ where $d$ is the number of $M$-equivalence classes. Then $$\mathfrak K_a (\mathbb F)= \prod_{[\alpha_j]_M\subset\Pi^-} \mathfrak{K}_{\alpha_j}(\mathbb{F})=\mathbb{R}^\dagger_{\alpha_1} \times \cdots \times \mathbb{R}^\dagger_{\alpha_d},$$ where $\mathbb{R}^\dagger=\lbrace\mathbb{R}^+\times(\mathbb{R}^+\cup \mathbb{R}) \rbrace\cup \lbrace (\mathbb{R}^+\cup \mathbb{R})\times \mathbb{R}^+ \rbrace\cup\lbrace\mathbb{R}^-\times(\mathbb{R}^-\cup \mathbb{R}) \rbrace\cup \lbrace (\mathbb{R}^-\cup \mathbb{R})\times \mathbb{R}^- \rbrace$. Moreover, $\mathfrak K_a (\mathbb F)$ admits a natural topology induced by the product topology of $\mathfrak M_a (\mathbb F)\times \mathfrak M_a (\mathbb F)$.
\end{corollary} \begin{proof} As consequence of Proposition \ref{GKhaler3} and the arguments used in its proof, we easily conclude that $\mathfrak{K}_{\alpha_j}(\mathbb{F})=\mathbb{R}^\dagger$ for every $M$-equivalence class $[\alpha_j]_M\subset\Pi^-$. Therefore, the result immediately follows from Proposition \ref{BModuli2} and Corollary \ref{BModuli2.1}. \end{proof} Finally, let $\mathcal{G}_a(\mathbb{F})$ denote the set of all invariant generalized metrics on $\mathbb{F}$. Motivated by Proposition \ref{ModuliM} we can define an action of $\mathcal{B}$ on $\mathcal{G}_a(\mathbb{F})$ as $e^B\cdot G=e^{-B}Ge^B$. The quotient space $\mathfrak{G}_a(\mathbb{F}):=\mathcal{G}_a(\mathbb{F})/\mathcal{B}$ induced by this action is called {\it moduli space of invariant generalized metrics on $\mathbb{F}$ under invariant $B$-transformations}; see \cite{GVV}. \begin{corollary}\label{GMetrics} Suppose that $\displaystyle \Pi^-=\bigcup_{j=1}^d[\alpha_j]_M$ where $d$ is the number of $M$-equivalence classes. Then $$\mathfrak{G}_a(\mathbb{F})=\lbrace((\mathbb{R}^+)^2\times\mathbb{R}) \cup ((\mathbb{R}^-)^2\times\mathbb{R})\rbrace_{\alpha_1}\times \cdots \times \lbrace((\mathbb{R}^+)^2\times\mathbb{R}) \cup ((\mathbb{R}^-)^2\times\mathbb{R})\rbrace_{\alpha_d}.$$ \end{corollary} \begin{proof} As we saw before, up to action by $B$-transformations a generalized metric on $V_{[\alpha]_M}$ has the form $\widetilde{G}_\alpha=\left( \begin{array}{cc} 0 & g^{-1}\\ g & 0 \end{array} \right)$ where $$g=\left( \begin{array}{cc} \dfrac{c}{x} & -\dfrac{b}{x}\\ -\dfrac{b}{x} & \dfrac{1+b^2}{cx} \end{array} \right)\qquad \textnormal{verifying}\qquad cx>0.$$ Therefore, up to action by $B$-transformations we get that the set of generalized metrics on $V_{[\alpha]_M}$ is parametrized by $((\mathbb{R}^+)^2\times\mathbb{R}) \cup ((\mathbb{R}^-)^2\times\mathbb{R})$. So, the result follows from Proposition \ref{BModuli2} as desired. \end{proof} \end{document}
\begin{document} \title{Smile Asymptotics II: Models with Known Moment Generating Function} \author{Shalom Benaim and Peter Friz\\Statistical Laboratory, University of Cambridge} \maketitle \begin{abstract} In a recent article the authors obtained a formula which relates explicitly the tail of risk neutral returns with the wing behavior of the Black Scholes implied volatility smile. In situations where precise tail asymptotics are unknown but a moment generating function is available we first establish, under easy-to-check conditions, tail asymptotics on logarithmic scale as soft applications of standard Tauberian theorems. Such asymptotics are enough to make the tail-wing formula work and so we obtain a version of Lee's moment formula with the novel guarantee that there is indeed a limiting slope when plotting implied variance against log-strike. We apply these results to time-changed L\'{e}vy models and the Heston model. In particular, the term-structure of the wings can be analytically understood. \end{abstract} \section{ Introduction} Consider a random variable $X$ whose moment generating function (mgf)\ $M$ is known in closed form, but whose density $f$ (if it exists) and distribution function $F$ are, even asymptotically, unknown. For a large class of distributions used for modelling (risk-neutral) returns in finance, $M$ is finite only on part of the real line. Let us define $\bar{F}\equiv1-F $ and $r^{\ast}$ as the least upper bound of all real $r$ for which $M\left( r\right) \equiv E[e^{rX}]<\infty$ and assume $r^{\ast}\in\left( 0,\infty\right) $. An easy Chebyshev argument gives \begin{equation} \lim\sup_{x\rightarrow\infty}\frac{-\log\bar{F}(x)}{x}=r^{\ast}, \label{limsupStmt} \end{equation} but counter-examples show that the stronger statement \begin{equation} -\log\bar{F}(x)\sim r^{\ast}x\text{ \ as }x\rightarrow\infty\text{ } \label{as} \end{equation} may not be true\footnote{We use the standard notation $g\left( x\right) \sim h\left( x\right) \equiv\lim g\left( x\right) /h\left( x\right) =1$ as $x\rightarrow\infty.$}. However, we do expect (\ref{as}) to be true if the (right) tail of the distribution is reasonably behaved. Our interest in such distributions stems from the fact that the crude tail asymptotics (\ref{as}) and the mild integrability condition $p^{\ast}=r^{\ast}-1>0$ are enough, via the tail-wing formula \cite{BF}, to assert existence of a limiting slope of Black Scholes implied variance $V^{2}$ as a function of log-strike $k$. Indeed, in standard notation, reviewed in section \ref{ApplSmile}, one has \begin{equation} \lim_{k\rightarrow\infty}V^{2}(k)/k=2-4\left( \sqrt{(p^{\ast})^{2}+p^{\ast} }-p^{\ast}\right) . \label{rSlope} \end{equation} Similarly, if $q^{\ast}\equiv\sup\left\{ q\in\mathbb{R}:M\left( -q\right) \equiv E[e^{-qX}]<\infty\right\} \in(0,\infty)$ and the (left) tail is reasonably behaved one expects $\log F(-x)\sim-q^{\ast}x$ as $x\rightarrow \infty$ in which case the tail-wing formula gives \begin{equation} \lim_{k\rightarrow\infty}V^{2}\left( -k\right) /k=2-4\left( \sqrt{(q^{\ast })^{2}+q^{\ast}}-q^{\ast}\right) . \label{lSlope} \end{equation} It was already pointed out in \cite{BF} that the tail-wing formulae sharpen Lee's celebrated moment formulae \cite{Lee, Ga}. In the present context, this amounts to having a $\lim$ instead of a $\lim\sup$\footnote{Remark that, at least when $p^{\ast}>0$, the moment formula is in fact recovered from the tail-wing formula and (\ref{limsupStmt}).}.
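To fix ideas (a small numerical sketch, not part of the original text; it assumes Python with NumPy, and the values of $r^{\ast}$ and $q^{\ast}$ below are arbitrary), the limiting slopes in (\ref{rSlope}) and (\ref{lSlope}) are obtained from the critical exponents as follows.
\begin{verbatim}
# Limiting slopes of implied variance from the critical exponents (sample values).
import numpy as np

def psi(u):
    # psi(u) = 2 - 4*(sqrt(u^2 + u) - u), the slope function of the moment formula
    return 2.0 - 4.0*(np.sqrt(u*u + u) - u)

r_star, q_star = 3.0, 1.5         # arbitrary critical exponents, with p* = r* - 1
p_star = r_star - 1.0
right_slope = psi(p_star)         # lim_{k -> +oo} V^2(k)/k
left_slope  = psi(q_star)         # lim_{k -> +oo} V^2(-k)/k
print(right_slope, left_slope)    # both values lie in (0, 2)
\end{verbatim}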
It must be noted that the tail-wing formula requires some knowledge of the tails whereas the moment formula is conveniently applicable by looking at the mgf (to obtain the critical values $r^{\ast}$ and $-q^{\ast}$ ). In this paper we develop criteria, checkable by looking \textit{a little closer} at the mgf (near $r^{\ast}$ and $-q^{\ast}$), which will guarantee that (\ref{rSlope}) resp. (\ref{lSlope}) hold. In view of the tail-wing formula the problem is reduced to obtain criteria for (\ref{as}) resp. its left-sided analogue. The proofs rely on Tauberian theorems and, as one expects, the monograph \cite{BGT} is our splendid source. The criteria are then fine-tuned to the fashionable class of time-changed L\'{e}vy models \cite{S, CT} and checked explicitly for the examples of Variance Gamma under Gamma-OU clock and Normal Inverse Gaussian with CIR clock. We also check the criteria for the Heston model. In fact, it appears to us that most (if not all) sensible models for stock returns with known mgf and $p^{\ast},q^{\ast}\in\left( 0,\infty\right) $ satisfy one of our criteria so that (\ref{rSlope}) and (\ref{lSlope}) will hold. Finally, we present some numerical results. The asymptotic regime becomes visible for remarkably low log-strikes which underlines the practical value of moment - and tail-wing formulae. \section{Background\label{background} in Regular Variation} \subsection{Asymptotic inversion} If $f=f\left( x\right) $ is defined and locally bounded on $\,[X,\infty)$, and tends to $\infty$ as $x\rightarrow\infty$ then the generalized inverse \[ f^{\leftarrow}\left( x\right) :=\inf\left\{ y\in\lbrack X,\infty):f\left( y\right) >x\right\} \] is defined on $[f\left( X\right) ,\infty)$ and is monotone increasing to $\infty$. This applies in particular to $f\in R_{\alpha}$ with $\alpha>0$ and Thm 1.5.12 in \cite{BGT} asserts that $f^{\leftarrow}\in R_{1/\alpha}$ and \[ f\left( f^{\leftarrow}\left( x\right) \right) \sim f^{\leftarrow}\left( f\left( x\right) \right) \sim x\text{ as }x\rightarrow\infty\text{.} \] Given $f$ one can often compute $f^{\leftarrow}$ (up the asymptotic equivalence) in terms of the \textit{Bruijn conjugate} of slowly varying functions (Prop. 1.5.15, Section 5.2. and Appendix 5 in \cite{BGT}). \subsection{Smooth Variation} A positive function \thinspace$g$ defined in some neighbourhood of $\infty$ \textit{varies smoothly with index} $\alpha$, $g\in SR_{\alpha}$, iff $h\left( x\right) :=\log\left( g\left( e^{x}\right) \right) $ is $C^{\infty} $ and \[ h^{\prime}\left( x\right) \rightarrow\alpha,\,\,\,h^{\left( n\right) }\left( x\right) \rightarrow0\text{\thinspace\thinspace\thinspace for }n=2,3,...\text{ as}\ x\rightarrow\infty.\text{ } \] \begin{theorem} [Smooth Variation Theorem, Thm 1.8.2 in \cite{BGT}]If $f\in R_{\alpha}$ then there exist $f_{i}\in SR_{\alpha}$, $i=1,2$, with $f_{1}\sim f_{2}$ and $f_{1}\leq f\leq f_{2}$ on some neighbourhood of $\infty$. \end{theorem} When $\alpha>0$ we can assume that $f_{1}$ and $f_{2}$ are strictly increasing in some neighbourhood of $\infty$. In fact, we have \begin{proposition} \label{SVTcor}Let $\alpha>0$ and $g\in SR_{\alpha}$. Then $g$ is strictly increasing in some neighbourhood of $\infty$ and $g^{\prime}\in SR_{\alpha-1}$. \end{proposition} \begin{proof} By definition of $SR_{\alpha}$, \[ \frac{\partial}{\partial x}\log\left( g\left( e^{x}\right) \right) =\frac{1}{g\left( e^{x}\right) }g^{\prime}\left( e^{x}\right) e^{x}\rightarrow\alpha>0\text{ as }x\rightarrow\infty. 
\] This shows that, in some neighbourhood of $\infty$, $g^{\prime}$ is strictly positive, which implies that $g$ is strictly increasing. From Prop 1.8.1 in \cite{BGT}, $g^{\prime}=\left\vert g^{\prime}\right\vert \in SR_{\alpha-1}$. \end{proof} \begin{remark} In the situation of the last Proposition we have $\lim_{x\rightarrow\infty }g\left( x\right) =\infty$ and hence, in some neighbourhood of $\infty$ , $g$ has a genuine inverse $g^{-1}$ which coincides with the generalized inverse $g^{\leftarrow}$. \end{remark} \subsection{Exponential Tauberian Theory} \begin{theorem} [Kohlbecker's Theorem, Thm 4.12.1 and Cor 4.12.6 in \cite{BGT}]Let $U$ be a non-decreasing right-continuous function on $\mathbb{R}$ with $U\left( x\right) =0$ for all $x<0$. Set \[ N\left( \lambda\right) :=\int_{[0,\infty)}e^{-x/\lambda}dU\left( x\right) ,\,\,\,\lambda>0. \] Let $\alpha>1$ and $\chi\in R_{\alpha/\left( \alpha-1\right) }$. Then \[ \log N\left( \lambda\right) \sim\left( \alpha-1\right) \chi\left( \lambda\right) /\lambda\text{ as }\lambda\rightarrow\infty \] iff \[ \log U\left( \left[ 0,x\right] \right) \sim\alpha x/\chi^{\leftarrow}\left( x\right) \text{ as }x\rightarrow\infty. \] \end{theorem} \begin{theorem} [Karamata's Tauberian Theorem, Thm 1.7.1 in \cite{BGT}]Let $U$ be a non-decreasing right-continuous function on $\mathbb{R}$ with $U\left( x\right) =0$ for all $x<0$. If $l\in R_{0}$ and $c\geq0,\rho\geq0$, the following are equivalent: \begin{align*} U\left( x\right) & \sim cx^{\rho}l\left( x\right) /\Gamma\left( 1+\rho\right) \text{ as }x\rightarrow\infty\\ \hat{U}\left( s\right) & \equiv\int_{0}^{\infty}e^{-sx}dU\left( x\right) \sim c s^{-\rho}l\left( 1/s\right) \text{ as }s\rightarrow0+. \end{align*} (When $c=0$ the asymptotic relations are interpreted in the sense that $U\left( x\right) =o\left( x^{\rho}l\left( x\right) \right) $ and similar for $\hat{U}$.) \end{theorem} \begin{theorem} [Bingham's Lemma, Thm 4.12.10 in \cite{BGT}]Let $f\in R_{\alpha}$ with $\alpha>0$ such that $e^{-f}$ is locally integrable at $+\infty$. Then \[ -\log\int_{x}^{\infty}e^{-f\left( y\right) }dy\sim f\left( x\right) \text{.} \] \end{theorem} \section{Moment generating functions and log-tails\label{MgfLT}} Let $F$ be a finite Borel measure on $\mathbb{R}$, identified with its (bounded, non-decreasing, right-continuous) distribution function, $F\left( x\right) \equiv F\left( (-\infty,x]\right) $. Its mgf is defined as \[ M\left( s\right) :=\int e^{sx}dF\left( x\right) . \] We define the critical exponents $q^{\ast}$ and $r^{\ast}$ via \[ -q^{\ast}\equiv\inf\left\{ s:M\left( s\right) <\infty\right\} ,r^{\ast }\equiv\sup\left\{ s:M\left( s\right) <\infty\right\} \] and make the \textbf{standing assumption} that \[ r^{\ast},q^{\ast}\in(0,\infty)\text{.} \] In this section, we develop criteria which will imply the asymptotic relations \[ \log F\left( (-\infty,-x]\right) \sim-q^{\ast}x\text{, }\log F\left( (x,\infty)\right) \sim-r^{\ast}x\text{ as }x\rightarrow\infty. \] The assumption in the following Criterion I is simply that some derivative of the mgf (at the critical exponent) blows up in a regularly varying way.
\begin{theorem} [Criterion I]Let $F$ be a bounded non-decreasing right-continuous function on $\mathbb{R}$ and define $M=M\left( s\right) ,$ $q^{\ast}$ and $r^{\ast}$ as above.\newline(i) If for some $n\geq0$, $M^{\left( n\right) }\left( -q^{\ast}+s\right) \sim s^{-\rho}l_{1}(1/s)$ for some $\rho>0$, $l_{1}\in R_{0}$ as $s\rightarrow0+$ then \[ \log F\left( (-\infty,-x]\right) \sim-q^{\ast}x \] \newline(ii) If for some $n\geq0$, $M^{\left( n\right) }\left( r^{\ast }-s\right) \sim s^{-\rho}l_{1}(1/s)$ for some $\rho>0$, $l_{1}\in R_{0}$ as $s\rightarrow0+$ then \[ \log F\left( (x,\infty)\right) \sim-r^{\ast}x. \] \end{theorem} \begin{proof} Let us focus on case (ii), noting that case (i) is similar. We first discuss $n=0$. The idea is an Esscher-type change of measure followed by an application of Karamata's Tauberian Theorem. We define a new measure $U$ on $[0,\infty)$ by a change-of-measure designed to get rid of the exponential decay, \[ dU\left( x\right) :=\exp\left( r^{\ast}x\right) dF\left( x\right) . \] We identify $U$ with its non-decreasing right-continuous distribution function $x\mapsto U\left( \left[ 0,x\right] \right) $. The Laplace transform of $U$ is given by \[ \hat{U}\left( s\right) =\int_{0}^{\infty}e^{-sx}dU\left( x\right) =\int_{0}^{\infty}e^{\left( r^{\ast}-s\right) x}dF\left( x\right) =M\left( r^{\ast}-s\right) -\int_{-\infty}^{0}e^{\left( r^{\ast}-s\right) x}dF\left( x\right) \] so that \[ \left\vert \hat{U}\left( s\right) -M\left( r^{\ast}-s\right) \right\vert \leq\int_{-\infty}^{0}e^{\left( r^{\ast}-s\right) x}dF\left( x\right) \leq F\left( 0\right) -F\left( -\infty\right) \leq2\left\Vert F\right\Vert _{\infty}<\infty. \] Since $M\left( r^{\ast}-s\right) $ goes to $\infty$ as $s\rightarrow0+$, we see that $\hat{U}\left( s\right) \sim M\left( r^{\ast}-s\right) $ so that $\hat{U}\in R_{\rho}$ as $s\rightarrow0$. Hence, there exists $l\in R_{0}$ so that $\hat{U}\left( s\right) =\left( 1/s\right) ^{\rho}l\left( 1/s\right) $ and Karamata's Tauberian theorem tells us that $U\in R_{\rho}$, namely \[ U\left( x\right) \sim x^{\rho}l\left( x\right) /\Gamma\left( 1+\rho\right) \equiv x^{\rho}l^{\prime}\left( x\right) \text{ as }x\rightarrow\infty \] where $l^{\prime}\in R_{0}$. Going back to the right-tail of $F$, we have for $x\geq0$ \[ F\left( (x,\infty)\right) =\int_{(x,\infty)}dF\left( y\right) =\int_{(x,\infty)}\exp\left( -r^{\ast}y\right) dU\left( y\right) \text{.} \] We first assume that $U\in SR_{\rho}$. Under this assumption $U$ is smooth with derivative $u=U^{\prime}$ $\in SR_{\rho-1}$ and we can write \[ u\left( y\right) =y^{\rho-1}l^{\prime\prime}\left( y\right) \text{ with }l^{\prime\prime}\in R_{0}. \] Then \begin{align*} F\left( (x,\infty)\right) & =\int_{(x,\infty)}\exp\left( -r^{\ast }y\right) y^{\rho-1}l^{\prime\prime}\left( y\right) dy\\ & =\int_{(x,\infty)}\exp\left[ -r^{\ast}y+\left( \rho-1\right) \log y+\log l^{\prime\prime}\left( y\right) \right] dy. \end{align*} Since $-\left[ -r^{\ast}y+\left( \rho-1\right) \log y+\log l^{\prime\prime }\left( y\right) \right] \sim r^{\ast}y\in R_{1}$ as $y\rightarrow\infty$ we can use Bingham's lemma to obtain \begin{equation} -\log F\left( (x,\infty)\right) =-\log\int_{(x,\infty)}\exp\left[ -r^{\ast }y+\left( \rho-1\right) \log y+\log l^{\prime\prime}\left( y\right) \right] dy\sim r^{\ast}x. \label{TailForSR} \end{equation} We now deal with the general case of non-decreasing $U\in R_{\rho}$.
From the Smooth Variation Theorem and Proposition \ref{SVTcor} we can find $U_{-} ,U_{+}\in SR_{\rho}$, strictly increasing in a neighbourhood of $\infty$, so that \[ U_{-}\leq U\leq U_{+}\text{ and }U_{-}\sim U\sim U_{+}\text{.} \] Below we use the change of variable $z=U\left( y\right) $ and $w=$ $U_{+}^{-1}\left( z\right) $. Noting that $U_{+}^{-1}\leq U^{\leftarrow}\leq U_{-}^{-1}$ and using change-of-variable formulae, as found in \cite[p7-9]{RY} for instance, we have \begin{align*} F\left( (x,\infty)\right) & =\int_{(x,\infty)}\exp\left( -r^{\ast }y\right) dU\left( y\right) \\ & =\int_{(U\left( x\right) ,\infty)}\exp\left( -r^{\ast}U^{\leftarrow }\left( z\right) \right) dz\\ & \leq\int_{(U\left( x\right) ,\infty)}\exp\left( -r^{\ast}U_{+} ^{-1}\left( z\right) \right) dz\\ & =\int_{(U_{+}^{-1}\left( U\left( x\right) \right) ,\infty)}\exp\left( -r^{\ast}w\right) dU_{+}\left( w\right) \text{.} \end{align*} Similar to the derivation of (\ref{TailForSR}), Bingham's lemma leads to \[ -\log\int_{(U_{+}^{-1}\left( U\left( x\right) \right) ,\infty)}\exp\left( -r^{\ast}w\right) dU_{+}\left( w\right) \sim r^{\ast}U_{+}^{-1}\left( U\left( x\right) \right) . \] Noting that $U_{+}^{-1}$ is non-decreasing, $U_{+}^{-1}\left( U\left( x\right) \right) \leq U_{+}^{-1}\left( U_{+}\left( x\right) \right) =x$ so that\footnote{By $g\lesssim h$ we mean $\lim\sup g\left( x\right) /h\left( x\right) \leq1$ as $x\rightarrow\infty$.} \[ -\log F\left( (x,\infty)\right) \lesssim r^{\ast}x \] The same argument gives the lower bound $-\log F\left( (x,\infty)\right) \gtrsim r^{\ast}x$ and we conclude that \[ -\log F\left( (x,\infty)\right) \sim r^{\ast}x. \] We now show how $n>0$ follows from $n=0$. Define $V$ on $[0,\infty)$ by \[ dV\left( x\right) :=x^{n}dF\left( x\right) . \] Clearly, $V$ induces a non-decreasing, right continuous distribution on $\mathbb{R}$, $V\left( x\right) :=V\left( [0,x]\right) $ for $x\geq0$ and $V\left( x\right) \equiv0$ for $x<0$. The distribution function $V\left( x\right) $ is also bounded since \[ \int_{0}^{\infty}x^{n}dF\left( x\right) <\infty \] which follows a fortiori from the standing assumption of exponential moments. We will write $\bar{V}(x)$ for $V(x,\infty).$ Note that $V$ has an mgf $M_{V}\left( s\right) $, finite at least for $s\in(0,r^{\ast})$, given by \begin{align*} M_{V}\left( s\right) & \equiv\int e^{sx}dV\left( x\right) =\int _{0}^{\infty}x^{n}e^{sx}dF\\ & =\int x^{n}e^{sx}dF+C=M^{(n)}\left( s\right) +C \end{align*} where\footnote{One could do without the assumption $\int_{-\infty} ^{0}\left\vert x\right\vert ^{n}dF<\infty$ (which follows a fortiori from the standing assumption $q^{\ast}>0$). Finiteness of $F$ on $\left( -\infty,0\right) $ is enough.} \[ 0\leq C\equiv-\int_{-\infty}^{0}x^{n}e^{sx}dF\leq\int_{-\infty}^{0}\left\vert x\right\vert ^{n}dF<\infty. \] By assumption, $M^{(n)}$ is regularly varying with index $\rho$ at $r^{\ast}$ and it follows that, as $s\rightarrow0+$, \[ M_{V}\left( r^{\ast}-s\right) =M^{(n)}\left( r^{\ast}-s\right) +O\left( 1\right) \sim s^{-\rho}l_{1}(1/s). \] We now use the ``$n=0$'' result on the distribution function $V$ resp. its mgf $M_{V}$ and obtain \[ -\log V\left( [x,\infty)\right) \equiv-\log\bar{V}\left( x\right) \sim r^{\ast}x\in R_{1} \] Assume first that $-\log\bar{V}\left( x\right) \in SR_{1}$.
Then $V$ has a density $V^{\prime}\equiv v$ and \[ v\left( x\right) =\partial_{x}(V(\infty)-\bar{V}(x))=-\bar{V}\left( x\right) \partial_{x}\left( \log\bar{V}\left( x\right) \right) \sim r^{\ast}\bar{V}\left( x\right) \text{ as }x\rightarrow\infty \] since functions in $SR_{1}$ are stable under differentiation in the sense that $\partial_{x}\left( -\log\bar{V}\left( x\right) \right) \sim\partial _{x}\left( r^{\ast}x\right) =r^{\ast}$. In particular, we have $\log v\left( x\right) \sim\log\bar{V}\left( x\right) \sim-r^{\ast}x$. After these preparations we can write \begin{align*} F\left( (x,\infty)\right) & =\int_{(x,\infty)}dF\left( y\right) \\ & =\int_{(x,\infty)}\frac{1}{y^{n}}v\left( y\right) dy\\ & =\int_{(x,\infty)}\exp\left[ \log v\left( y\right) -n\log y\right] dy \end{align*} and Bingham's lemma implies that $\log F\left( (x,\infty)\right) \sim-r^{\ast}x$. The general case of $\log\bar{V}\left( x\right) \in R_{1}$ follows by a smooth variation and comparison argument as earlier. \end{proof} The next criterion deals with exponential blow-up of $M$ at its critical values. \begin{theorem} [Criterion II]Let $F,M,q^{\ast},r^{\ast}$ be as above.\newline(i) If $\log M\left( -q^{\ast}+s\right) \sim s^{-\rho}l_{1}(1/s)$ for some $\rho>0$, $l_{1}\in R_{0}$ as $s\rightarrow0+$ then \[ \log F\left( (-\infty,-x]\right) \sim-q^{\ast}x \] \newline(ii) If $\log M\left( r^{\ast}-s\right) \sim s^{-\rho}l_{1}(1/s)$ for some $\rho>0$, $l_{1}\in R_{0}$ as $s\rightarrow0+$ then \[ \log F\left( (x,\infty)\right) \sim-r^{\ast}x. \] \end{theorem} \begin{proof} As for Criterion I, the idea is an Escher-type change of measure followed by a suitable Tauberian theorem; in the present case we need Kohlbecker's Theorem. Let us focus on case (ii), noting that case (i) is similar. A new measure $U$ on $[0,\infty)$ is defined by \[ dU\left( x\right) :=\exp\left( r^{\ast}x\right) dF\left( x\right) . \] We identify $U$ with its non-decreasing right-continuous distribution function $x\mapsto U\left( \left[ 0,x\right] \right) $ and define the transform \[ N\left( \lambda\right) =\int_{0}^{\infty}e^{-x/\lambda}dU\left( x\right) =\int_{0}^{\infty}e^{\left( r^{\ast}-1/\lambda\right) x}dF\left( x\right) =M\left( r^{\ast}-1/\lambda\right) -\int_{-\infty}^{0}e^{\left( r^{\ast }-1/\lambda\right) x}dF\left( x\right) \] so that \[ \left\vert N\left( \lambda\right) -M\left( r^{\ast}-1/\lambda\right) \right\vert \leq\int_{-\infty}^{0}e^{\left( r^{\ast}-1/\lambda\right) x}dF\left( x\right) \leq F\left( 0\right) -F\left( -\infty\right) \leq2\left\Vert F\right\Vert _{\infty}<\infty. \] Thus, \[ N\left( \lambda\right) =M\left( r^{\ast}-1/\lambda\right) +O\left( 1\right) \text{ as }\lambda\rightarrow\infty \] and, in particular, since $\lim_{\lambda\rightarrow\infty}\log M\left( r^{\ast}-1/\lambda\right) =\lim_{\lambda\rightarrow\infty}M\left( r^{\ast }-1/\lambda\right) =\infty$ from the assumption (ii) we see that \[ \log N\left( \lambda\right) \sim\log M\left( r^{\ast}-1/\lambda\right) \text{ }\sim\lambda^{\rho}l_{1}(\lambda)\in R_{\rho}\text{ as }\lambda \rightarrow\infty. \] Define $\alpha\in(1,\infty)$ as the unique solution to $\rho+1=\alpha/\left( \alpha-1\right) $ and note \[ \chi\left( \lambda\right) :=\frac{\lambda}{\left( \alpha-1\right) }\log N\left( \lambda\right) \in R_{\rho+1}=R_{\alpha/\left( \alpha-1\right) }. 
\] Using that $\chi^{\leftarrow}\in R_{\left( \alpha-1\right) /\alpha }=R_{1-1/\alpha}$, Kohlbecker's Tauberian Theorem tells us that \[ \log U\left( \left[ 0,x\right] \right) \equiv\log U\left( x\right) \sim\alpha x/\chi^{\leftarrow}\left( x\right) \in R_{1/\alpha}\text{ as }x\rightarrow\infty. \] In particular, there exists $l\in R_{0}$ so that $\log U\left( x\right) =\alpha x^{1/\alpha}l\left( x\right) $. We first assume that $\log U\in SR_{1/\alpha}$. Then $U$ has a density $u\left( .\right) \in SR_{1/\alpha -1}$ and \[ u\left( x\right) =U\left( x\right) \partial_{x}\left( \log U\left( x\right) \right) \sim U\left( x\right) x^{1/\alpha-1}l\left( x\right) . \] In particular, \[ \log u\left( x\right) \sim\log U\left( x\right) \in R_{1/\alpha}\text{ as }x\rightarrow\infty. \] Now, $y\mapsto r^{\ast}y\in R_{1}$ dominates $R_{1/\alpha}$ (since $1/\alpha<1$) in the sense that \[ r^{\ast}y-\log u\left( y\right) \sim r^{\ast}y. \] Thus, from \begin{align*} F\left( (x,\infty)\right) & =\int_{(x,\infty)}dF\left( y\right) =\int_{[x,\infty)}\exp\left( -r^{\ast}y\right) u\left( y\right) dy\\ & =\int_{(x,\infty)}\exp\left[ -r^{\ast}y+\log u\left( y\right) \right] \end{align*} and Bingham's lemma we deduce that \[ -\log F\left( (x,\infty)\right) \sim r^{\ast}x\text{.} \] The general case, $\log U\in R_{1/\alpha}$, is handled via smooth variation as earlier. Namely, we can find smooth minorizing and majorizing functions for $\log U$, say $G\_$ and $G_{+}$. After defining $U_{\pm}=\exp G_{\pm}$ we have \[ \log U_{-}\sim\log U\sim\log U_{+}\text{ and }U_{-}\leq U\leq U_{+}. \] Then, exactly as in the last step of the proof of Criterion I, \[ F\left( (x,\infty)\right) =\int_{(x,\infty)}\exp\left( -r^{\ast}y\right) dU\left( y\right) \leq\int_{(U_{+}^{-1}\left( U\left( x\right) \right) ,\infty)}\exp\left( -r^{\ast}w\right) dU_{+}\left( w\right) \] and from Bingham's lemma, \[ -\log F\left( (x,\infty)\right) \lesssim r^{\ast}U_{+}^{-1}\left( U\left( x\right) \right) \sim r^{\ast}x\text{.} \] Similarly, $-\log F\left( (x,\infty)\right) \gtrsim r^{\ast}x$ and the proof is finished. \end{proof} \section{ Application to Smile Asymptotics\label{ApplSmile}} We start with a few recalls to settle the notation. The normalized price of a Black-Scholes call with log-strike $k$ is given by \[ c_{BS}\left( k,\sigma\right) =\Phi\left( d_{1}\right) -e^{k}\Phi\left( d_{2}\right) \, \] with $d_{1,2}\left( k\right) =-k/\sigma\pm\sigma/2$. If one models risk-neutral returns with a distribution function $F$, the implied volatility is the (unique) value $V\left( k\right) $ so that \[ c_{BS}\left( k,V\left( k\right) \right) =\int_{k}^{\infty}\left( e^{x}-e^{k}\right) dF\left( x\right) =:c\left( k\right) \text{.} \] Set $\psi\left[ x\right] \equiv2-4\left( \sqrt{x^{2}+x}-x\right) $ and recall $\bar{F}\equiv1-F$. The following is a special case of the tail-wing formula \cite{BF}. \begin{theorem} \label{FromBF} Assume that $-\log F\left( -k\right) /k\sim q^{\ast}$ for some $q^{\ast}\in\left( 0,\infty\right) .$ Then \[ V(-k)^{2}/k\sim\psi\left[ -\log F\left( -k\right) /k\right] \sim \psi\left( q^{\ast}\right) . \] Similarly, assume that $-\log\bar{F}\left( k\right) /k\sim p^{\ast}+1$ for some $p^{\ast}\in\left( 0,\infty\right) $. Then \[ V(k)^{2}/k\sim\psi\left[ -1-\log\bar{F}\left( k\right) /k\right] \sim \psi\left( p^{\ast}\right) . 
\] \end{theorem} As earlier, let $M\left( s\right) =\int\exp\left( sx\right) dF\left( x\right) $ denote the mgf of risk-neutral returns and now \textit{define} the critical exponents $r^{\ast}$and $-q^{\ast}$ exactly as in the beginning of the last section \ref{MgfLT}. Combining the results therein with Theorem \ref{FromBF} we obtain \begin{theorem} \label{main} If $q^{\ast}\in\left( 0,\infty\right) $ and $M$ satisfies part (i) of Criteria I or II then \[ V(-k)^{2}/k\sim\psi\left( q^{\ast}\right) \text{ as }k\rightarrow \infty\text{.} \] Similarly, if $\ r^{\ast}\equiv p^{\ast}+1\in\left( 1,\infty\right) $ and $M$ satisfies part (ii) of Criteria I or II then \[ V(k)^{2}/k\sim\psi\left( p^{\ast}\right) \text{ as }k\rightarrow \infty\text{.} \] \end{theorem} \section{First Examples\label{Examples}} The examples discussed in this section model risk-neutral log-price by L\'{e}vy processes and there is no loss of generality to focus on unit time.\footnote{In fact, L\'{e}vy models that satisfy one of our criteria have no term structure of implied variance slopes.} \subsection{Criterion I with n=0: the Variance Gamma Model} The Variance Gamma model $VG=VG\left( m,g,C\right) $ has mgf \[ M(s)=\left( \frac{gm}{gm+(m-g)s-s^{2}}\right) ^{C}=\left( \frac {gm}{(m-s)(s+g)}\right) ^{C}\text{.} \] The critical exponents are obviously given by $r^{\ast}=m$ and $q^{\ast}=g$. Focusing on the first, we have \[ M(r^{\ast}-s)\sim\left( \frac{gm}{m+g}\right) ^{C}s^{-C}\text{ as }s\rightarrow0+ \] which shows the Criterion I is satisfied with $n=0$. Theorem \ref{main} now identifies the asymptotic slope of the implied variance to be $\psi\left( r^{\ast}-1\right) =\psi\left( m-1\right) $. Similarly, the left slope is seen to be $\psi\left( q^{\ast}\right) =\psi\left( g\right) $. We remark that \cite{AB1} contains tail estimates for $VG$ which lead, via the tail-wing formula, to the same result. \subsection{ Criterion\ I with n $>$ 0: the Normal Inverse Gaussian Model} The Normal Inverse Gaussian Model $NIG=$ $NIG\left( \alpha,\beta,\mu ,\delta\right) $ has mgf given by \[ M\left( s\right) =\exp\left\{ \delta\left\{ \sqrt{\alpha^{2}-\beta^{2} }-\sqrt{\alpha^{2}-\left( \beta+s\right) ^{2}}\right\} +\mu s\right\} . \] By looking at the endpoints of the strip of analyticity the critical exponents are immediately seen to be $r^{\ast}=$ $\alpha-\beta,\,q^{\ast}=\alpha+\beta$ and we focus again on the first. While $M(s)$ converges to the finite constant $M(r^{\ast})$ as $s\rightarrow r^{\ast}-$ we have \begin{align*} M^{\prime}(s)/M\left( s\right) & =(2\delta(\beta+s)[\alpha^{2} -(\beta+s)^{2}]^{-1/2}+\mu)\\ \text{and }M^{\prime}(r^{\ast}-s) & \sim2\delta\alpha\sqrt{2\alpha} s^{-1/2}M(r^{\ast})\text{ as }s\rightarrow0+\text{.} \end{align*} We see that Criterion I is satisfied with $n=1$ and Theorem \ref{main} gives the asymptotic slope $\psi\left( r^{\ast}-1\right) =\psi\left( \alpha -\beta-1\right) $. Similarly, the left slope is seen to be $\psi\left( q^{\ast}\right) =\psi\left( \alpha+\beta\right) $. We remark that the same slopes were computed in \cite{BF} using the tail-wing formula and explicitly known density asymptotics for $NIG$. \subsection{Criterion\ II: the Double Exponential Model} The double exponential model $DE=DE\left( \sigma,\mu,\lambda,p,q,\eta _{1},\eta_{2}\right) $ has mgf \[ \log M(s)=\frac{1}{2}\sigma^{2}s^{2}+\mu s+\lambda\left( \frac{p\eta_{1} }{\eta_{1}-s}+\frac{q\eta_{2}}{\eta_{2}+s}-1\right) . 
\] Clearly, $r^{\ast}=\eta_{1}$ and as $s\rightarrow0+$ \[ \log M(\eta_{1}-s)\sim\frac{1}{2}\sigma^{2}\eta_{1}^{2}+\mu\eta_{1}+\lambda\left( \frac{p\eta_{1}}{s}+\frac{q\eta_{2}}{\eta_{2}+\eta_{1}}-1\right) \sim\lambda p\eta_{1}s^{-1} \] and we see that Criterion II is satisfied. As above, this implies asymptotic slopes $\psi\left( r^{\ast}-1\right) =\psi\left( \eta_{1}-1\right) $ on the right and $\psi\left( \eta_{2}\right) $ on the left. \section{ Time changed L\'{e}vy processes} We now discuss how to apply our results to time changed L\'{e}vy processes \cite{S, SST, CT}. To do this, we only need to check that the moment generating function of the marginals of the process satisfies one of our criteria. To this end, consider a L\'{e}vy process $L=L\left( t\right) $ described through its cumulant generating function (cgf) $K_{L}$ at time $1$, that is, \[ K_{L}\left( v\right) =\log E\left[ \exp\left( vL_{1}\right) \right] \] and an independent random clock $T=T\left( \omega\right) \geq0$ with cgf $K_{T}$. It follows that the mgf of $L\circ T$ is given by \[ M(v)=\mathbb{E}\left[ \mathbb{E}\left( e^{vL_{T}}|T\right) \right] =\mathbb{E}\left[ e^{K_{L}\left( v\right) T}\right] =\exp\left[ K_{T}(K_{L}(v))\right] . \] Therefore, in order to apply our Theorem \ref{main} to time-changed L\'{e}vy models we need to check if $M=\exp\left[ K_{T}(K_{L}(\cdot))\right] $ satisfies criterion I or II so that $-\log\bar{F}\left( x\right) /x$ tends to a positive constant. Here, as earlier, $F$ denotes the distribution function of $L\circ T$ and $\bar{F}\equiv1-F$. The following theorem gives sufficient conditions for this in terms of $K_{T}$ and $K_{L}$. We shall write $M_{T}\equiv\exp\left( K_{T}\right) $ and $M_{L}\equiv\exp\left( K_{L}\right) $. For brevity, we only discuss the right tail\footnote{In fact, the elegant change-of-measure argument in Lee \cite{Lee} allows a formal reduction of the left tail behaviour to the right tail behaviour.} and set \[ p_{L}=\sup\left\{ s:M_{L}\left( s\right) <\infty\right\} ,\,\,\,p_{T} =\sup\left\{ s:M_{T}\left( s\right) <\infty\right\} . \] \begin{theorem} \label{TCLT}With notation as above, assuming $p_{L},p_{T}>0$, we have:\newline(i.1) If $K_{L}(p)=p_{T}$ for some $p\in\lbrack0,p_{L})$ and $M_{T}$ satisfies either criterion then \[ \log\bar{F}(x)\sim-px. \] (i.2) If $K_{L}(p)=p_{T}$ for $p=p_{L}$ and $M_{T},M_{L}$ satisfy either criterion then \[ \log\bar{F}(x)\sim-px. \] (ii) If $K_{L}(p)<p_{T}$ for all $p\in\lbrack0,p_{L}]$ and $M_{L}$ satisfies either criterion then \[ \log\bar{F}(x)\sim-p_{L}x. \] \end{theorem} \begin{remark} \label{UniqueSolForP}It is worth noting that there cannot be more than one solution to $K_{L}(p)=p_{T}$. To see this, take any $v$ such that $v>0$ and $K_{L}\left( v\right) >0$ (any solution to $K_{L}(p)=p_{T}>0$ will satisfy this!). From $M_{L}\equiv\exp K_{L}$ it follows that $M_{L}\left( 0\right) =1$ and $M_{L}\left( v\right) >1$. By convexity of $M_{L}\left( \cdot\right) $ it is easy to see that $M_{L}^{\prime}\left( v\right) $ is strictly positive and the same is true for $K_{L}^{\prime}\left( v\right) =M_{L}^{\prime}\left( v\right) /M_{L}\left( v\right) $. It follows that $w\geq v\implies$ $K_{L}\left( w\right) \geq K_{L}\left( v\right) >0$ and the set of all $v>0:K_{L}\left( v\right) >0$ is connected and $K_{L}$ restricted to this set is strictly increasing. This shows that there is at most one solution to $K_{L}(p)=p_{T}$.
\end{remark} \begin{proof} (i.1) Noting that $p>0$ let us first assume that $M_{T}$ satisfies criterion I (at $K_{L}(p)=p_{T}$ with some $n\geq0\,$) so that for some $\rho>0$ and $l\in R_{0},$ \[ M_{T}^{\left( n\right) }\left( u\right) \sim\left( p_{T}-u\right) ^{-\rho}l\left( \left( p_{T}-u\right) ^{-1}\right) \text{ as }u\uparrow p_{T}. \] From $M=M_{T}\circ K_{L}$ we have $M^{\prime}=M_{T}^{\prime}\left( K_{L}\right) K_{L}^{\prime}$ and, by iteration, $M^{\left( n\right) }$ equals $M_{T}^{\left( n\right) }\left( K_{L}\right) \left( K_{L}^{\prime }\right) ^{n}$ plus a polynomial in $M_{T}\left( \cdot\right) ,...,M_{T}^{\left( n-1\right) }\left( \cdot\right) $ which remains bounded when the argument approaches $p_{T}$. Noting that $K_{L}^{\prime}\left( p\right) >0$ (see remark above) we absorb the factor $[K_{L}^{\prime}\left( p\right) ]^{n}$ into the slowly varying function and see that \[ M^{\left( n\right) }(v)\sim(p_{T}-K_{L}(v))^{-\rho}l((p_{T}-K_{L} (v))^{-1})\text{ for }\rho\text{ as above and some }l\text{ }\in R_{0} \] as $K_{L}\left( v\right) $ tends to $p_{T}$, which follows from $v\uparrow p$. Using analyticity of $K_{L}$ in $(0,p_{L})$ and $K_{L}^{\prime}$ $\left( p\right) \neq0$ it is clear that $p_{T}-K_{L}(v)\sim K_{L}^{\prime}(p)(p-v)$ as $v\uparrow p$ and so \[ M^{\left( n\right) }(p-v)\sim K_{L}^{\prime}(p)^{-\rho}v^{-\rho}l(1/v)\text{ as }v\rightarrow0+. \] This shows that $M$ satisfies Criterion I (with the same $n$ as $M_{T}$). A similar argument shows that $M$ satisfies Criterion II if $M_{T}$ does. Either way, the asserted tail behaviour of $\log\bar{F}$ follows.\newline(i.2) The (unlikely!)\ case $K_{L}(p_{L})=p_{T}$ involves similar ideas and is left to the reader.\newline(ii)\ We now assume that \[ \sup_{p\in\left[ 0,p_{L}\right] }K_{L}(p)<p_{T}<\infty \] and $M_{L}$ satisfies either criterion (at $p_{L}$). Since $M_{L}=\exp K_{L}$ stays bounded as its argument approaches the critical value $p_{L}$ it is clear that $M_{L}$ cannot satisfy criterion II or criterion I with $n=0$ and there must exist a smallest integer $n$ such that \[ M_{L}^{(n)}(p_{L}-x)\sim x^{-\rho}l(1/x)\text{ \ \ as }x\rightarrow0+ \] for some $\rho>0$ and $l\in R_{0}$. We note that \[ M^{(n)}(v)=(K_{L}^{(n)}(v)K_{T}^{\prime}(K_{L}(v))+f(v))\exp(K_{T}(K_{L}(v))) \] where $f(v)$ is a polynomial function of the first $\left( n-1\right) $ derivatives of $K_{L}$ and the first $n$ derivatives of $K_{T}$ evaluated at $K_{L}(v)$, which are all bounded for $0\leq v\leq p_{L}$. Noting that positivity of $T$ implies $M_{T}^{\prime}>0$ and hence $K_{T}^{\prime}>0$ we see that as $v\uparrow p_{L}$ \[ M^{(n)}(v)\sim K_{L}^{(n)}(v)K_{T}^{\prime}(K_{L}(p_{L}))M\left( p_{L}\right) . \] Applying this to $K_{T}\left( x\right) \equiv x$ leads immediately to \[ K_{L}^{(n)}(v)\sim M_{L}^{(n)}(v)/M_{L}(v)\sim (p_{L}-v)^{-\rho}l((p_{L}-v)^{-1})/M_{L}(p_{L}) \] as $v\uparrow p_{L}$, and so $M$ satisfies criterion I.\newline \end{proof} We now discuss examples to which the above analysis is applicable. For all examples we plot the total variance smile\footnote{That is, $V^{2}\left( k,t\right) \equiv\sigma^{2}\left( k,t\right) t.$} for several maturities and compare with straight lines\footnote{These lines have been parallel-shifted so that they are easier to compare with the actual smile.} of correct slope as predicted by Theorem \ref{main}. All plots are based on the calibrations obtained in \cite{SST}. This is also where the reader can find more details about the respective model parameters.
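Before turning to the examples, we note that the critical value $p$ in part (i.1) of Theorem \ref{TCLT} is easy to locate numerically once $K_{L}$ and $p_{T}$ are known. The following sketch (an illustration only, using Python with SciPy; the Variance Gamma parameters and the clock explosion point $p_{T}$ are arbitrary placeholder values) brackets the root of $K_{L}(p)=p_{T}$ and returns the corresponding right wing slope $\psi(p-1)$.
\begin{verbatim}
# Locate p solving K_L(p) = p_T for a time-changed Levy model (placeholder values).
import numpy as np
from scipy.optimize import brentq

C, g, m = 1.3, 20.0, 10.0          # placeholder Variance Gamma parameters
p_T = 4.0                          # placeholder explosion point of the clock mgf

def K_L(v):                        # VG cumulant generating function, v in (-g, m)
    return C*np.log(g*m/((m - v)*(v + g)))

# K_L(0) = 0 and K_L(v) -> infinity as v -> m-, so K_L(p) = p_T has a root in
# (0, m); by the remark preceding the proof it is the only one.
p = brentq(lambda v: K_L(v) - p_T, 0.0, m - 1e-10)

def psi(u):
    return 2.0 - 4.0*(np.sqrt(u*u + u) - u)

print(p, psi(p - 1.0))             # critical moment and right wing slope
\end{verbatim}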
\subsection{Variance Gamma with Gamma-OU time change} We will consider the Variance Gamma process with a Gamma-Ornstein-Uhlenbeck time change and refer to \cite{SST} for details. From earlier, the Variance Gamma process has cumulant generating function \[ K_{L}(v)=C\log\left( \frac{gm}{(m-v)(v+g)}\right) \text{ for }v\in(-g,m). \] We note that $K_{L}\left( \left[ 0,m\right) \right) =\left[ 0,\infty\right) $ so that $p_{L}=\infty$. The Gamma-Ornstein-Uhlenbeck clock $T=T\left( \omega,t\right) $ has cgf \[ K_{T}(v)=vy_{0}\lambda^{-1}(1-e^{-\lambda t})+\frac{\lambda a}{v-\lambda b}\left[ b\log\left( \frac{b}{b-v\lambda^{-1}(1-e^{-\lambda t})}\right) -vt\right]. \] We need to examine how this function behaves around the endpoint of its strip of regularity. At first glance, it appears that the function tends to infinity as $v\uparrow\lambda b$, because of the $\frac{\lambda a}{v-\lambda b}$ term. However, upon closer examination, we can see that this is in fact a removable singularity, and the term of interest to us is the $\log(...)$ term. This term tends to infinity as $v\rightarrow\lambda b(1-e^{-\lambda t})^{-1}=:p_{T}$. After some simple algebra, we see that \begin{align*} e^{K_{T}(v)} & =\left( \frac{b}{b-v\lambda^{-1}(1-e^{-\lambda t})}\right) ^{\frac{\lambda ab}{v-\lambda b}}\exp\left\{ vy_{0}\lambda^{-1}(1-e^{-\lambda t})-\frac{vt\lambda a}{v-\lambda b}\right\} \\ & \sim\left( \frac{p_{T}}{p_{T}-v}\right) ^{\frac{\lambda ab}{p_{T}-\lambda b}}\exp\left\{ p_{T}y_{0}\lambda^{-1}(1-e^{-\lambda t})-\frac{p_{T}t\lambda a}{p_{T}-\lambda b}\right\} \text{ as }v\uparrow p_{T}. \end{align*} Therefore, $\exp\left( K_{T}\right) $ satisfies Criterion I with $n=0$, and part (i.1) of Theorem \ref{TCLT} shows that $M$ does too, so that $\log \bar{F}(x)\sim-px$ where $p$ is determined by the equation \[ K_{L}(p)=p_{T}=\lambda b(1-e^{-\lambda t})^{-1} \] and can be calculated explicitly, \[ p=\frac{m-g+\sqrt{(m-g)^{2}+4gm\left(1-\exp\left(-\frac{\lambda b}{C(1-e^{-\lambda t})}\right)\right)}}{2}. \] \begin{figure} \caption{VG with Gamma-OU time change. Parameters from \cite{SST}.} \end{figure} \subsection{ Normal Inverse Gaussian with CIR time change} The cgf of the Cox-Ingersoll-Ross (CIR)\ clock $T=T\left( \omega,t\right) $ is given by \[ K_{T}(v)=\kappa^{2}\eta t/\lambda^{2}+2y_{0}v/(\kappa+\gamma\coth(\gamma t/2))-\frac{2\kappa\eta}{\lambda^{2}}\log[\sinh(\gamma t/2)(\coth(\gamma t/2)+\frac{\kappa}{\gamma})]\text{ } \] where \[ \gamma=\sqrt{\kappa^{2}-2\lambda^{2}v}\text{.} \] This clearly tends to infinity as $I(v)\equiv\kappa+\gamma(v)\coth (\gamma(v)t/2)\rightarrow0$, and we can define $p_{T}$ as solution to the equation $I\left( p_{T}\right) =0$. Using l'H\^{o}pital's rule, it is easy to check that \[ \frac{p_{T}-v}{\kappa+\gamma(v)\coth(\gamma(v)t/2)}t \] tends to a constant as $v\rightarrow p_{T}$, and so $2y_{0}v/(\kappa +\gamma\coth(\gamma t/2))$ is regularly varying of index $1$ as a function of $\left( p_{T}-v\right) ^{-1}$. It is clear that this is the dominant term in this limit, and so $M_{T}\equiv\exp\left( K_{T}\right) $ satisfies criterion II (at $p_{T}$).
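For completeness (a numerical aside, not part of the original text; the CIR parameters below are placeholders rather than the calibrated values of \cite{SST}), the explosion point $p_{T}$ of the CIR clock can be located as the first zero of $I(v)=\kappa+\gamma(v)\coth(\gamma(v)t/2)$ beyond $\kappa^{2}/(2\lambda^{2})$, where $\gamma$ becomes purely imaginary and $\gamma\coth(\gamma t/2)$ turns into a cotangent.
\begin{verbatim}
# Locate the explosion point p_T of the CIR clock mgf (placeholder parameters).
import numpy as np
from scipy.optimize import brentq

kappa, lam, t = 1.2, 0.8, 1.0      # placeholder CIR parameters and maturity

def I(v):
    g = np.sqrt(complex(kappa**2 - 2.0*lam**2*v))   # gamma(v), possibly imaginary
    return (kappa + g/np.tanh(g*t/2.0)).real        # kappa + gamma*coth(gamma*t/2)

# Scan beyond kappa^2/(2*lambda^2) for the first sign change of I, then refine.
v0 = kappa**2/(2.0*lam**2)
grid = v0 + np.linspace(1e-6, 50.0, 20000)
vals = np.array([I(v) for v in grid])
k = int(np.argmax(vals < 0.0))     # first grid index with I(v) < 0 (exists here)
p_T = brentq(I, grid[k-1], grid[k])
print(p_T)
\end{verbatim}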
From earlier, the NIG cgf is\footnote{Following \cite{SST} we take $\mu=0$ here.} \[ K_{L}(v)=-\delta(\sqrt{\alpha^{2}-(\beta+v)^{2}}-\sqrt{\alpha^{2}-\beta^{2} })\text{ for }v\leq\alpha-\beta \] from which we see that $p_{L}=\alpha-\beta>0$ and \[ \sup_{v\in\lbrack0,\alpha-\beta]}K_{L}(v)=\delta\sqrt{\alpha^{2}-\beta^{2} }\text{.} \] Therefore, the behavior of $M$ on the edge of the strip of analyticity, and the location of the critical value, will depend on whether this supremum is more or less than $p_{T}$; if it is less than $p_{T}$, the latter is never reached. Recalling that $\exp\left( K_{L}\right) $ satisfies Criterion I with $n=1$, we apply part (ii) of Theorem \ref{TCLT} and obtain \[ -\log\bar{F}(x)\sim p_{L}x=(\alpha-\beta)x. \] Otherwise, there exists $p\in(0,\alpha-\beta]$ such that $K_{L}(p)=p_{T}$, and since $M_{T}$ was seen to satisfy one of the criteria (to be precise: Criterion II) we can apply part (i) of Theorem \ref{TCLT} and obtain \[ -\log\bar{F}(x)\sim px. \] In particular, we see that for all possible parameters in the NIG-CIR model the formula (\ref{as}) holds true. Smile-asymptotics are now an immediate consequence of Theorem \ref{FromBF}. \begin{figure} \caption{NIG with CIR time change. Parameters from \cite{SST}.} \end{figure} \subsection{ The Heston Model} The Heston model is a stochastic volatility model defined by the following stochastic differential equations: \begin{align*} \frac{dS_{t}}{S_{t}} & =\sqrt{v_{t}}dW_{t}^{1}\\ dv_{t} & =\kappa(\eta-v_{t})dt+\theta\sqrt{v_{t}}\,dW_{t}^{2} \end{align*} where $d\left\langle W_{t}^{1},W_{t}^{2}\right\rangle =\rho dt$ is the correlation of the two Brownian motions. $\log S_{t}$ therefore has the distribution of a Brownian motion with drift $-1/2$ evaluated at a random time $T\left( \omega,t\right) =\int_{0}^{t}v_{s}ds$ with the distribution of an integrated CIR\ process, as in the previous example. When $\rho=0$, the L\'{e}vy process $L\equiv W^{1}$ and $T$ are independent and we can apply the same analysis as above. Namely, the cgf of the Brownian motion with drift $-1/2$ at time $1$ is \[ K_{L}(v)=(v^{2}-v)/2,\text{ } \] so that $p_{L}=\infty,$ and $M_{T}=\exp\left( K_{T}\right) $ satisfies Criterion II; hence, by part (i) of Theorem \ref{TCLT}, \[ \log\bar{F}(x)\sim-px \] where $p$ is determined by the equation $K_{L}\left( p\right) =p_{T}$. When $\rho\leq0$, we can analyze the mgf of $\log S_{t}$ directly, and we can apply the same reasoning as for the mgf of the CIR\ process, to deduce that criterion II is satisfied. The distribution function for the Heston returns hence satisfies $\log\bar{F}(x)\sim-px$ where $p$ is the solution to (see \cite{AP}) \[ \left. (\kappa-\rho v\theta)+(\theta^{2}(v^{2}-v)-(\kappa-\rho v\theta )^{2})^{1/2}\cot\{(\theta^{2}(v^{2}-v)-(\kappa-\rho v\theta)^{2} )^{1/2}t/2\}\right\vert _{v=p}=0. \] When $\rho>0$, which is of little practical importance (at least in equity markets), the mgf may explode at a different point, see \cite{AP}, but criterion II will still be satisfied. \begin{figure} \caption{Heston Model. Parameters from \cite{SST}.} \end{figure} \end{document}
\begin{document} \newtheorem{definition}{Definition} \newtheorem{theorem}{Theorem} \newtheorem{remark}{Remark} \newtheorem*{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newtheorem*{question}{Question} \title{A short proof of nonhomogeneity of the pseudo-circle} \author{Krystyna~Kuperberg} \address{Department of Mathematics and Statistics, Auburn University, Auburn, AL 36849, USA} \email{[email protected]} \author{Kevin~Gammon} \address{Department of Mathematics and Statistics, Auburn University, Auburn, AL 36849, USA} \email{[email protected]} \subjclass[2000]{54F15; 54F50} \keywords{pseudo-circle, pseudo-arc, homogeneous, composant, indecomposable continuum} \dedicatory{Dedicated to James T.~Rogers,~Jr. on the occasion of his 65th birthday} \begin{abstract} The pseudo-circle is known to be nonhomogeneous. The original proofs of this fact were discovered independently by L.~Fearnley~\cite{Fearnleyhomogeneous} and J.T.~Rogers, Jr.~\cite{Rogers1}. The purpose of this paper is to provide an alternative, very short proof based on a result of D.~Bellamy and W.~Lewis~\cite{BellamyLewis}. \end{abstract} \maketitle \section{Introduction} A {\em pseudo-arc} is a hereditarily indecomposable, chainable continuum. In 1948, E.E.~Moise~\cite{Moise} constructed a pseudo-arc as an indecomposable continuum homeomorphic to each of its subcontinua. Moise correctly conjectured that the hereditarily indecomposable continuum given by B.~Knaster~\cite{Knaster} in 1922 is a pseudo-arc. Also in 1948, R.H. Bing~\cite{Bing1} proved that Moise's example is homogeneous. In 1951, Bing~\cite{Bing2} proved that every hereditarily indecomposable chainable continuum is a pseudo-arc and that all pseudo-arcs are homeomorphic. In 1959, Bing~\cite{Bing3} gave another characterization of the pseudo-arc: a homogeneous chainable continuum. The history of many other aspects of the pseudo-arc can be found in survey papers by W.~Lewis \cite{Lewis2} and \cite{Lewis1}. In 1951, Bing~ \cite{Bing2} described a {\em pseudo-circle}, a planar hereditarily indecomposable circularly chainable continuum which separates the plane and whose every proper subcontinuum is a pseudo-arc. It has been shown by L.~Fearnley in~\cite{Fearnleyhomogeneous} and J. T.~Rogers, Jr. in ~\cite{Rogers1} that the pseudo-circle is not homogeneous. Fearnley also proved that the pseudo-circle is unique~\cite{Fearnleyunique1} and \cite{Fearnleyunique2}. The fact that the pseudo-circle is not homogeneous also follows from more general theorems proved in~\cite{Hagopian}, \cite{KennedyRogers}, \cite{Lewis3}, and \cite{Rogers2}. This paper offers yet another, very short proof, a consequence of a result of D.~Bellamy and W.~Lewis~\cite{BellamyLewis}. Similarly as in~\cite{Rogers2}, an infinite covering space of a plane separating continuum is used. \section{Preliminaries}\label{prelim} Throughout the paper, a {\em continuum\/} will refer to a nondegenerate compact and connected metric space. A continuum is {\em indecomposable} if it is not the union of two proper subcontinua. A continuum is {\em hereditarily indecomposable} if every subcontinuum is also indecomposable. For a point $a$ in $X$, the {\em composant\/} $K(a)$ of $a $ in $X$ is the union of all proper subcontinua of $X$ containing $a$. An indecomposable continuum contains uncountably many pairwise disjoint composants, see~\cite{Kuratowski} Theorem 7, page 212. 
A topological space $X$ is {\em homogeneous} if for any two points in $X$ there is a homeomorphism of $X$ onto itself mapping one point onto the other. Let $C$ denote the pseudo-circle. We may assume that $C$ is contained in a planar annulus $A$ in such a way that the winding number of each circular chain in the sequence of crooked circular chains defining $C$ is one. Any homeomorphism $h:C \to C$ extends to a continuous map $f: A \to A$ of degree $\pm 1$. (First extend $h$ to a map $\overline{U}\to A$ for some closed annular neighborhood $\overline{U}$ of $C$, then compose a retraction of $A$ onto $\overline{U}$ with this extension.) Let $\widetilde{A}$ be the universal covering space of $A$ with projection $p$. For any $\widetilde{x}\in \widetilde{A}$ and $\widetilde{y}\in p^{-1}(f(p(\widetilde{x})))$ there is a map $\widetilde{f}$ such that the diagram $$ \begin{array}{ccc} &\widetilde{f} & \\ \widetilde{A} & \longrightarrow & \widetilde{A}\\ p\downarrow & & \downarrow p\\ A & \longrightarrow & A\\ & f & \end{array} $$ \noindent commutes and $\widetilde{f}(\widetilde{x})=\widetilde{y}$; see for example~\cite{Hu}, Theorem 16.3. Let $\widehat{A}$ be the disc that is a two-point compactification of $\widetilde{A}$. Denote the two added points of the compactification by $a$ and $b$. The map $\widetilde{f}$ extends uniquely to a map $F: \widehat{A} \to \widehat{A}$. Let $\widetilde{C}=p^{-1}(C)$, and let $P=\widetilde{C} \cup \{a,b\}$, a two-point compactification of $p^{-1}(C)$. D.~Bellamy and W.~Lewis considered this set in~\cite{BellamyLewis} and proved that $P$ is a pseudo-arc. Denote by $H$ the restriction of $F$ to $P$ and note that \begin{enumerate} \item either $H(a)=a$ and $H(b)=b$, or $H(a)=b$ and $H(b)=a$, \item $\widetilde{f}(\widetilde{C})=\widetilde{C}$ and hence $H(P)=P$, \item $\widetilde{f}_{|\widetilde{C}}$ is one-to-one. \end{enumerate} Thus \begin{lemma} $H$ is a homeomorphism from $P$ to $P$.\end{lemma} \section{Proof of nonhomogeneity of the pseudo-circle} \begin{theorem} The pseudo-circle is not homogeneous. \end{theorem} \begin{proof} Let $K(a)$ and $K(b)$ be the composants of $a$ and $b$, respectively, in the pseudo-arc $P$. Let $\widetilde{x}$ and $\widetilde{y}$ be two points in $P$ such that $\widetilde{x} \in (K(a)\cup K(b)) -\{a,b\}$ and $\widetilde{y}\in P-(K(a)\cup K(b))$. If $C$ were homogeneous, then there would be a homeomorphism $h:C\to C$ taking $x=p(\widetilde{x})$ onto $y=p(\widetilde{y})$. Then there would be a homeomorphism $H:P\to P$ as described in section~\ref{prelim} taking $\widetilde{x}$ onto $\widetilde{y}$. This is not possible since under every such homeomorphism, the set $K(a)\cup K(b)$ is invariant; the image of a composant is a composant. \end{proof} \noindent {\bf Remark.} It is not important for this proof that $K(a)$ and $K(b)$ are not the same set, but the authors are grateful to D.~Bellamy and W.~Lewis for showing that $K(a)$ and $K(b)$ were indeed different composants. \begin{theorem} If for some $x$, the composant $K(a)$ intersects the fiber $p^{-1}(x)$, then it contains $p^{-1}(x)$. \end{theorem} \begin{proof} If $y \in p^{-1}(x)\cap K(a)$, then by the definition of a composant, there is a proper subcontinuum $W$ of $P$ that contains both $a$ and $y$. Let $g:\widetilde{C}\to \widetilde{C}$ be a deck transformation such that $p^{-1}(x)=\{g^n(y)\}_{n\in \mathbb{Z}}$, $\mathbb{Z}$ being the set of integers. Denote by $G$ the extension of $g$ to $P$. The set $W_n=G^n(W)$ is a continuum containing $a$ and $g^n(y)$. 
Since $G$ is a homeomorphism of $P$ onto itself, each $W_{n}$ is moreover a proper subcontinuum of $P$; hence $g^{n}(y)\in K(a)$ for every $n$, and thus $p^{-1}(x)\subset K(a)$. \end{proof} Note that Theorem 2 and the Remark above imply that $p(K(a)-\{a\})\cap p(K(b)-\{b\})=\emptyset$. \begin{question} Can the sets $p(K(a)-\{a\})$ and $p(K(b)-\{b\})$ be used to classify the composants of the pseudo-circle $C$?\end{question} The authors would like to thank Jim Rogers, David Bellamy, and Wayne Lewis for their comments. \end{document}
\begin{document} \title{Cores of imprimitive symmetric graphs of order a product of two distinct primes} \author{Ricky Rotheram and Sanming Zhou \smallskip \\ \small School of Mathematics and Statistics\\ \small The University of Melbourne\\ \small Parkville, VIC 3010, Australia} \date{} \openup 0.25\jot \begin{abstract} A retract of a graph $\Gamma$ is an induced subgraph $\Psi$ of $\Gamma$ such that there exists a homomorphism from $\Gamma$ to $\Psi$ whose restriction to $\Psi$ is the identity map. A graph is a core if it has no nontrivial retracts. In general, the minimal retracts of a graph are cores and are unique up to isomorphism; they are called the core of the graph. A graph $\Gamma$ is $G$-symmetric if $G$ is a subgroup of the automorphism group of $\Gamma$ that is transitive on the vertex set and also transitive on the set of ordered pairs of adjacent vertices. If in addition the vertex set of $\Gamma$ admits a nontrivial partition that is preserved by $G$, then $\Gamma$ is an imprimitive $G$-symmetric graph. In this paper cores of imprimitive symmetric graphs $\Gamma$ of order a product of two distinct primes are studied. In many cases the core of $\Gamma$ is determined completely. In other cases it is proved that either $\Gamma$ is a core or its core is isomorphic to one of two graphs, and conditions for when each of these possibilities occurs are given. \emph{Key words:} graph homomorphism; core graph; core of a graph; symmetric graph; arc-transitive graph \emph{AMS Subject Classification (2010):} 05C60, 05C25 \end{abstract} \section{Introduction} \label{sec:intro} All graphs in this paper are finite and undirected without loops or multi-edges. The \textit{order} of a graph is its number of vertices. A \textit{homomorphism} from a graph $\Gamma$ to a graph $\Psi$ is a map $\phi: V(\Gamma) \rightarrow V(\Psi)$ such that whenever $x, y\in V(\Gamma)$ are adjacent in $\Gamma$, $\phi(x)$ and $\phi(y)$ are adjacent in $\Psi$. The subsets $\phi^{-1}(v) := \{x \in V(\Gamma): \phi(x) = v\}$ of $V(\Gamma)$, $v \in V(\Psi)$, are called the {\em fibres} of $\phi$. It is readily seen that all fibres are (possibly empty) independent sets of $\Gamma$ (see e.g. \cite[Proposition 2.11]{MR1468789}). Whenever there exists a homomorphism $\phi$ from $\Gamma$ to $\Psi$, we write $\phi: \Gamma \rightarrow \Psi$ or simply $\Gamma \rightarrow \Psi$. For example, if $\Gamma$ is a subgraph of $\Psi$, then $\Gamma \rightarrow \Psi$ by the \textit{inclusion homomorphism}, that is, the homomorphism that maps each vertex of $\Gamma$ to itself. A homomorphism $\phi$ from $\Gamma$ onto an induced subgraph $\Psi$ of $\Gamma$ is called a \textit{retraction} if the restriction of $\phi$ to $V(\Psi)$ (denoted by $\phi \mid_{\Psi}$) is the identity map; in this case $\Psi$ is called a \textit{retract} of $\Gamma$. A graph is called a \textit{core} if it has no nontrivial retracts. In general, the minimal retracts of a graph are cores and are unique up to isomorphism. So we can speak of \textit{the core} of a graph $\Gamma$, denoted by $\Gamma^\ast$.
Thus there exists a retraction $\phi: \Gamma \rightarrow \Gamma^*$ (so that $\phi\mid_{\Gamma^\ast}$ is the identity map from $V(\Gamma^*)$ to $V(\Gamma^*)$). A homomorphism from $\Gamma$ to itself is called an {\em endomorphism} of $\Gamma$. A core can be equivalently defined (see e.g. \cite[Proposition 2.22]{MR1468789}) as a graph whose endomorphisms are all automorphisms. A core can also be defined by virtue of the homomorphism equivalence relation. Two graphs $\Gamma$ and $\Psi$ are said to be {\em homomorphically equivalent}, denoted by $\Gamma \leftrightarrow \Psi$, if we have both $\Gamma \rightarrow \Psi$ and $\Psi \rightarrow \Gamma$. This defines an equivalence relation that is coarser than isomorphism. It can be verified that each equivalence class contains a unique graph (up to isomorphism) with smallest order; such a graph is a core, or the core of any graph in the class. Cores play an important role in the study of homomorphisms and graph colourings. For instance, a graph has a complete graph as its core if and only if its clique and chromatic numbers are equal, and any graph and its core have the same chromatic number. Unfortunately, in general it is difficult to determine the core of a graph. In fact, not many families of graphs whose cores have been determined are known so far, the simplest being non-empty bipartite graphs of which the cores are the complete graph $K_2$ of order $2$. The reader is referred to \cite{MR1468789} for a survey on homomorphisms, retracts and cores of graphs. In \cite[Theorem 3.7]{MR1468789} it was proved that the core of any vertex-transitive graph is vertex-transitive. In \cite[Theorem 3.9]{MR1468789} it was proved further that, for a vertex-transitive graph $\Gamma$, the order of $\Gamma^*$ divides the order of $\Gamma$. In particular, vertex-transitive graphs of prime orders are cores. In \cite{R} the problem of when the vertex set of a vertex-transitive graph can be partitioned into subsets each inducing a copy of its core was studied. It was proved that Cayley graphs with connection sets closed under conjugation and vertex-transitive graphs with cores half their order admit such partitions. In \cite[Theorem 7.9.1]{MR1829620} it was proved that Kneser graphs (which are vertex-transitive) are cores. Complete graphs (which are clearly vertex-transitive) are also cores. The proof of \cite[Theorem 3.7]{MR1468789} can be extended to prove that the core of a symmetric (arc-transitive) graph is also symmetric (see Theorem \ref{thm:sym cores}). Two-arc-transitive graphs form a proper subfamily of the family of symmetric graphs, and in \cite[Theorem 6.13.5]{MR1829620} it was proved that any connected non-bipartite two-arc-transitive graph is a core. A \textit{rank-three graph} is a graph whose automorphism group is transitive on vertices, ordered pairs of adjacent vertices and ordered pairs of non-adjacent vertices. Thus rank-three graphs are necessarily symmetric and strongly regular. In \cite{MR2470534} it was proved (as a consequence of a more general result) that if $\Gamma$ is a rank-three graph then either $\Gamma$ is a core or $\Gamma^*$ is a complete graph. In the same paper the authors asked whether the same result holds for all strongly regular graphs. This was confirmed in \cite{MR2813515} for two families of strongly regular graphs that are not necessarily rank-three graphs.
In \cite{MR2813515} it was also proved that any distance-transitive graph is either a core or has a complete core. In this paper we study the cores of imprimitive symmetric graphs of order a product of two distinct primes. All symmetric graphs of order a product of two distinct primes were classified in \cite{MR884254, MR1223702, MR1244933, MR1223693}, and many interesting graphs arose from this classification. (In fact, all vertex-transitive graphs of order a product of two distinct primes were classified in \cite{MR1244933} and \cite{MR1289072} independently.) In \cite{D} imprimitive automorphism groups of metacirculant graphs of order a product of two distinct primes were classified. This together with previously known results completed the classification of automorphism groups of vertex-transitive graphs of order a product of two distinct primes. The fact that imprimitive symmetric metacirculants are circulants can be derived from \cite{D} and will be used in our study in the present paper. The main result in this paper is as follows. (The graphs in Tables \ref{tab:cir} and \ref{tab:2} will be defined in \S\ref{subsec:circu}, \S\ref{sec:incidence} and \S\ref{subsec:ms}. All graphs $\Gamma$ in Table \ref{tab:cir} are circulant graphs as we will justify in \S\ref{subsec:circu}.) \begin{theorem} \label{thm:main} Let $p$ and $q$ be primes with $2 \le p<q$, and let $\Gamma$ be an imprimitive symmetric graph of order $pq$. Then the core $\Gamma^\ast$ of $\Gamma$ is given in the third column of Tables \ref{tab:cir} and \ref{tab:2}. \end{theorem} As shown in Tables \ref{tab:cir}-\ref{tab:2}, in many cases we determine $\Gamma^\ast$ completely. In other cases we prove that either $\Gamma$ is a core or $\Gamma^*$ is isomorphic to one of two graphs. As seen in rows 10-12 of Table \ref{tab:cir}, determining the core of $G(q,r)[\overline{K}_{p}]-pG(q,r)$ is reduced to the problems of computing the chromatic and clique numbers of a circulant graph. Unfortunately, the latter problems are both NP-hard even for circulant graphs \cite{CGV}. (In fact, determining the clique number remains NP-hard even for circulant graphs of prime orders \cite[Theorem 2]{CGV}.) Nevertheless, we notice that if $r \le p$ then $\chi(G(q,r)) \le r \le p$ by Brooks' theorem and so the core of $G(q,r)[\overline{K}_{p}]-pG(q,r)$ is $G(q,r)$. By rows 13-18 of Table \ref{tab:cir}, determining the core of $G(pq;r,s,u)$ is reduced to the problem of deciding whether there exists a homomorphism, or a homomorphism with certain properties, between $G(p,s)$ and $G(q,u)$. The latter problem is, unfortunately, difficult in general. We notice that the condition $G(p,s)\rightarrow G(q,u)$ in row 13 of Table \ref{tab:cir} can not be satisfied unless $\omega(G(p,s)) = \omega(G(q,u)) = \omega(G(pq;r,s,u))$. (In fact, if $G(p,s)\rightarrow G(q,u)$, then the core of $G(pq;r,s,u)$ is $G(p, s)$ and hence $\omega(G(q,u)) \le \omega(G(p,s)) = \omega(G(pq;r,s,u)) = \min\{\omega(G(p,s)), \omega(G(q,u))\}$ by Lemma \ref{prop:inva} and \cite[Observation 5.1]{MR1468789}.) Similarly, the condition $G(q,u)\rightarrow G(p,s)$ in row 14 can not be satisfied unless these three clique numbers are equal.
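To illustrate the use of Brooks' theorem mentioned above with one concrete choice of parameters (the values are chosen purely for illustration), take $p=5$, $q=13$ and $r=4$. Then $G(13,4)$ is a connected symmetric circulant graph of valency $4$ which is neither complete nor an odd cycle, so $\chi(G(13,4))\leq 4<5=p$ by Brooks' theorem; hence, by row 10 of Table \ref{tab:cir}, the core of $G(13,4)[\overline{K}_{5}]-5G(13,4)$ is $G(13,4)$.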
The proof of Theorem \ref{thm:main} relies on the classification of imprimitive symmetric graphs of order a product of two distinct primes, obtained in \cite[Theorem 2.4]{MR884254}, \cite[Theorems 3-4]{MR1223693} and \cite[Theorem]{MR1223702} collectively. These graphs are given in the second column of Tables \ref{tab:cir} and \ref{tab:2} (where $K_n$ is the complete graph of order $n$ and $\overline{K}_n$ its complement), and their definitions will be given in \S\ref{subsec:circu}, \S\ref{sec:incidence} and \S\ref{subsec:ms}, respectively. Along the way to the proof of Theorem \ref{thm:main}, we will prove some properties of such graphs; see Lemmas \ref{lem:3qr}, \ref{lem:pqrsu cat prod} and Theorems \ref{thm:clq>p}, \ref{thm:clq<q} and \ref{thm:pq ms ind<q}.
\begin{center} \begin{table} \begin{tabular}{p{0.7cm} | p{4.4cm} | p{2.3cm} | p{5.6cm} | p{0.9cm}}
\hline Row & $\Gamma$ & $\Gamma^\ast$ & Condition & Proof \\
\hline 1 & $G(2q,r)$ & $K_2$ & $q \ge 3$ & L\ref{lem:2qr} \\
\hline 2 & $G(2,q,r)\cong G(q,r)[\overline{K}_2]$ & $G(q,r)$ & $q \ge 3$ & T\ref{lem:2qr1}\\
\hline 3 & $G(3q,r)\cong G(3q;r,2,r)$ & $G(3q;r,2,r)^*$ & $q \ge 5$, $r$ even & L\ref{lem:3qr}\\
\hline 4 & $G(3q,r)\cong G(3q;r,2,2r)$ & $G(3q;r,2,2r)^*$ & $q \ge 5$, $r$ odd & L\ref{lem:3qr}\\
\hline 5 & $K_3[\overline{K}_{q}]$ & $K_3$ & $q \ge 5$ & T\ref{thm:lexprod} \\
\hline 6 & $G(q,r)[\overline{K}_{3}]$ & $G(q,r)$ & $q \ge 5$ & T\ref{thm:lexprod}\\
\hline 7 & $G(p,s)[\overline{K}_{q}]$ & $G(p,s)$ & $p \ge 2$ & T\ref{thm:lexprod} \\
\hline 8 & $G(q,r)[\overline{K}_{p}]$ & $G(q,r)$ & $p \ge 5$ & T\ref{thm:lexprod} \\
\hline 9 & $G(p,s)[\overline{K}_{q}]-qG(p,s)$ & $G(p,s)$ & $p \ge 5$ & T\ref{thm:del lex prod}\\
\hline 10 & $G(q,r)[\overline{K}_{p}]-pG(q,r)$ & $G(q,r)$ & $p \ge 5$, $\chi(G(q,r))\leq p$ & T\ref{thm:del lex prod}\\
\hline 11 & $G(q,r)[\overline{K}_{p}]-pG(q,r)$ & $K_p $ & $p \ge 5$, $\omega(G(q,r))\geq p$ & T\ref{thm:del lex prod} \\
\hline 12 & $G(q,r)[\overline{K}_{p}]-pG(q,r)$ & $\Gamma$ & $p \ge 5$, $\chi(G(q,r)) > p > \omega(G(q,r))$ & T\ref{thm:del lex prod} \\
\hline 13 & $G(pq;r,s,u)$, $t\in H(q,r)$ & $G(p,s)$ & $p\geq 3$, $G(p,s)\rightarrow G(q,u)$ & T\ref{thm:pqrsu cat} \\
\hline 14 & $G(pq;r,s,u)$, $t\in H(q,r)$ & $G(q,u)$ & $p\geq 3$, $G(q,u)\rightarrow G(p,s)$ & T\ref{thm:pqrsu cat} \\
\hline 15 & $G(pq;r,s,u)$, $t\in H(q,r)$ & $\Gamma$ & $p\geq 3$, $G(p,s)\nrightarrow G(q,u)$, $G(q,u)\nrightarrow G(p,s)$ & T\ref{thm:pqrsu cat} \\
\hline 16 & $G(pq;r,s,u)$, $t\notin H(q,r)$ & $G(p,s)$ & $p\geq 3$, $\exists \eta: G(p,s)\rightarrow G(q,u)$ such that each arc $(i,j)$ of $G(p,s)$ with $j-i=a^l$ satisfies $\eta(j)-\eta(i) \in t^l H(q,r)$ & T\ref{thm:pqrsu} \\
\hline 17 & $G(pq;r,s,u)$, $t\notin H(q,r)$ & $G(q,u)$ & $p\geq 3$, $\exists \zeta: G(q,u)\rightarrow G(p,s)$ such that each arc $(x,y)$ of $G(q,u)$ with $y-x\in t^l H(q,r)$ satisfies $\zeta(y) - \zeta(x) = a^l$ & T\ref{thm:pqrsu} \\
\hline 18 & $G(pq;r,s,u)$, $t\notin H(q,r)$ & $\Gamma$ & $p\geq 3$, neither $\eta$ nor $\zeta$ above exists & T\ref{thm:pqrsu} \\
\hline \end{tabular} \caption{\small Imprimitive symmetric circulant graphs of order $pq$ ($2 \le p < q$) and their cores.
In row 7, if $p=3$, then the graph $\Gamma$ is $K_3[\overline{K}_{q}]$ and in this case the result is the same as that given in row 5. Acronym: L = Lemma, T = Theorem, $\chi =$ chromatic number, $\omega =$ clique number.} \label{tab:cir} \end{table} \end{center}
\begin{center} \begin{table} \begin{tabular}{p{0.7cm} | p{3.6cm} | p{0.5cm} | p{8.1cm} | p{0.9cm}}
\hline Row & $\Gamma$ & $\Gamma^\ast$ & Condition & Proof \\
\hline 1 & $X({\rm PG}(d-1,r))$ & $K_2$ & $p = 2$, $q = \frac{r^d-1}{r-1}$ & E\ref{ex:inc} \\
\hline 2 & $X'({\rm PG}(d-1,r))$ & $K_2$ & $p = 2$, $q = \frac{r^d-1}{r-1}$ & E\ref{ex:inc} \\
\hline 3 & $X(H(11)) \cong G(22, 5)$ & $K_2$ & $p=2$, $q=11$ & E\ref{ex:inc} \\
\hline 4 & $X'(H(11))$ & $K_2$ & $p=2$, $q=11$ & E\ref{ex:inc} \\
\hline 5 & $\Gamma(2,3,\emptyset,\{1,2\})$ & $K_5$ & $p=3$, $q=5$ & T\ref{thm:pq ms cores} \\
\hline 6 & $\Gamma(a,3,\emptyset,\{1,2\})$ & $\Gamma$ & $p = 3$, $q = 2^a + 1 > 5$ with $a = 2^s$ & T\ref{thm:pq ms cores} \\
\hline 7 & $\Gamma(a,3,\emptyset,\{0\})$ & $\Gamma$ & $p = 3$, $q = 2^a + 1 \ge 5$ with $a = 2^s$ & T\ref{thm:pq ms cores} \\
\hline 8 & $\Gamma(a,p,\emptyset,\{0\})$ & $K_p$ & $p=2^{2^{s-1}}+1 \ge 5$, $q = 2^a + 1 > 5$ with $a = 2^s$ & T\ref{thm:pq ms cores} \\
\hline 9 & $\Gamma(a,p,\emptyset,\{0\})$ & $\Gamma$ & $5 \le p < 2^{2^{s-1}}+1$, $q = 2^a + 1 > 5$ with $a = 2^s$ & T\ref{thm:pq ms cores} \\
\hline 10 & $\Gamma(a,p,\emptyset,U_{e, i})$ & $\Gamma$ & $p \ge 5$, $q = 2^a + 1 > 5$ with $a = 2^s$, $U_{e, i} = \{i2^{ej}: 0 \leq j < d/e\}$ for some $i \in \mathbb{Z}_p^*$ and divisor $e \ge 1$ of $\gcd(d, a)$ with $1 < d/e < p-1$, where $d$ is the order of $2$ in $\mathbb{Z}_p^*$ & T\ref{thm:pq ms cores} \\
\hline \end{tabular} \caption{\small Symmetric incidence and Maru\v{s}i\v{c}-Scapellato graphs of order $pq$ ($2 \le p < q$) and their cores. Acronym: E = Example, T = Theorem.} \label{tab:2} \end{table} \end{center}
\section{Preliminaries} \label{sec:pre} This section consists of definitions and known results that will be used in subsequent sections. Let $G$ be a group acting on a set $V$. That is, to every pair $(g, v) \in G \times V$ there corresponds $g(v) \in V$ such that $1_{G}(v) = v$ and $g(h(v)) = (gh)(v)$ for $g, h \in G$ and $v \in V$, where $1_G$ is the identity element of $G$. The \textit{$G$-orbit} containing $v \in V$ is defined as $G(v) := \{g(v): g \in G\}$, and the \textit{stabilizer} of $v$ under $G$ is the subgroup $G_{v} := \{g \in G: g(v) = v\}$ of $G$. $G$ is \textit{transitive} on $V$ if $G(v) = V$ for some (and hence all) $v \in V$, \textit{semiregular} on $V$ if $G_{v} = 1$ for every $v \in V$, and \textit{regular} on $V$ if it is both transitive and semiregular on $V$. A partition ${\cal B}$ of $V$ is called \textit{$G$-invariant} if for any $g \in G$ and each \textit{block} $B \in {\cal B}$, $g(B) := \{g(v): v \in B\} \in {\cal B}$, and is \textit{nontrivial} if $1 < |B| < |V|$ for some $B \in {\cal B}$. If $V$ admits a nontrivial $G$-invariant partition, then $G$ is \textit{imprimitive} on $V$ (and each block of this partition is a \textit{block of imprimitivity} for $G$ in its action on $V$); otherwise, $G$ is \textit{primitive} on $V$.
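For example (a simple illustration of these notions, with the choice of group made only for concreteness), let $p$ and $q$ be distinct primes and let the cyclic group $\mathbb{Z}_{pq}$ act on itself by translation. This action is regular, and the $q$ cosets of the unique subgroup of order $p$ form a nontrivial $\mathbb{Z}_{pq}$-invariant partition into blocks of size $p$; hence the action is imprimitive.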
A group $G$ acting on $V$ is a \textit{Frobenius group} if it is transitive, non-regular, and only the identity element of $G$ can fix two points of $V$. A graph $\Gamma$ is called \textit{$G$-vertex-transitive} if $\Gamma$ admits $G$ as a group of automorphisms acting transitively on $V(\Gamma)$. If in addition $G$ is transitive on the set of arcs of $\Gamma$, then $\Gamma$ is called a \textit{$G$-symmetric graph}, where an {\em arc} is an ordered pair of adjacent vertices. A graph $\Gamma$ is \textit{vertex-transitive} (\textit{symmetric}, respectively) if it is $G$-vertex-transitive ($G$-symmetric, respectively) for some $G \le {\rm Aut}(\Gamma)$, where ${\rm Aut}(\Gamma)$ is the automorphism group of $\Gamma$. A $G$-vertex-transitive graph is \textit{imprimitive} or \textit{primitive} according to whether $G$ is imprimitive or primitive on $V(\Gamma)$. In a vertex-transitive graph $\Gamma$ all vertices have the same valency, which is called the \textit{valency} of $\Gamma$ and is denoted by ${\rm val}(\Gamma)$. Given a group $G$ and a subset $S$ of $G \setminus \{1_G\}$ such that $S = S^{-1} := \{s^{-1}: s \in S\}$, the \textit{Cayley graph} of $G$ relative to $S$ is the graph with vertex set $G$ such that $x, y$ are adjacent if and only if $x^{-1}y \in S$. A circulant graph is a Cayley graph on a cyclic group. More specifically, for the cyclic group $\mathbb{Z}_n$ of integers modulo $n$ and a subset $S \subseteq \mathbb{Z}_n \setminus \{0\}$ such that $S = -S := \{-s: s \in S\}$, the \textit{circulant graph} of order $n$ relative to $S$ is the graph with vertex set $\mathbb{Z}_n$ such that $x, y \in \mathbb{Z}_n$ are adjacent if and only if $y-x \in S$. It is well known that Cayley graphs are vertex-transitive, and a graph is isomorphic to a circulant graph if and only if its automorphism group contains a cyclic subgroup regular on the vertex set. \begin{lemma}[{\cite[Lemma 6.2.3]{MR1829620}}] \label{lem:hom eq} Two graphs are homomorphically equivalent if and only if their cores are isomorphic. In particular, any graph is homomorphically equivalent to its core. \end{lemma} \begin{theorem}[{\cite[Theorem 3.7]{MR1468789}}] \label{thm:cores vt} The core of any vertex-transitive graph is vertex-transitive. \end{theorem} As mentioned in \cite[p.121]{MR1468789}, the proof of this result can be easily adapted to other kinds of transitivity. In particular, using essentially the same proof, we obtain the following result. \begin{theorem} \label{thm:sym cores} The core of any symmetric graph is symmetric. \end{theorem} \begin{theorem}[\cite{GR}] \label{thm:val cores} Let $\Gamma$ be a symmetric graph. Then ${\rm val}(\Gamma^\ast)$ is a divisor of ${\rm val}(\Gamma)$. \end{theorem} \begin{theorem}[{\cite[Theorem 3.9]{MR1468789}}] \label{thm:order vt} Let $\Gamma$ be a vertex-transitive graph, and $\phi:\Gamma\rightarrow\Gamma^\ast$ a retraction. Then $|V(\Gamma^\ast)|$ divides $|V(\Gamma)|$, and all fibres of $\phi$ have the same cardinality, namely $|\phi^{-1}(u)| = |V(\Gamma)|/|V(\Gamma^\ast)|$ for $u \in V(\Gamma^\ast)$.
\end{theorem} As an immediate consequence, we have: \begin{corollary} \label{coro:prime order} Any vertex-transitive graph of prime order is a core. \end{corollary} Denote by $\alpha(\Gamma)$, $\omega(\Gamma)$ and $\chi(\Gamma)$ the \textit{independence}, \textit{clique} and \textit{chromatic numbers} of $\Gamma$, respectively. \begin{lemma}[{\cite[pp.110]{MR1468789}}] \label{prop:inva} Let $\Gamma$ and $\Psi$ be non-bipartite graphs and $\phi:\Gamma\rightarrow\Psi$ a homomorphism. Then $$ \omega(\Gamma)\geq \omega(\Psi),\;\, \chi(\Gamma)\leq\chi(\Psi). $$ In particular, for any non-bipartite graph $\Gamma$, \begin{equation} \label{eq:inva} \omega(\Gamma) = \omega(\Gamma^*),\;\, \chi(\Gamma) = \chi(\Gamma^*). \end{equation} \end{lemma} \begin{theorem}[\cite{GR}] \label{thm:ao} Let $\Gamma$ be a vertex-transitive graph. Then $$ \alpha(\Gamma)\omega(\Gamma)\leq |V(\Gamma)|. $$ \end{theorem} \begin{lemma}[{No-Homomorphism Lemma \cite{MR791653}}] \label{lem:nohom} Let $\Gamma$ and $\Psi$ be graphs such that $\Gamma \rightarrow \Psi$ and $\Psi$ is vertex-transitive. Then \begin{equation} \label{eq:nohom} \frac{\alpha(\Gamma)}{|V(\Gamma)|} \geq \frac{\alpha(\Psi)}{|V(\Psi)|}. \end{equation} \end{lemma} In particular, if $\Gamma$ is vertex-transitive, then $\Gamma^*$ is vertex-transitive (Theorem \ref{thm:cores vt}), and so by $\Gamma \leftrightarrow \Gamma^*$ (Lemma \ref{lem:hom eq}) and (\ref{eq:nohom}) we obtain \begin{equation} \label{eq:nohom1} \frac{\alpha(\Gamma)}{|V(\Gamma)|} = \frac{\alpha(\Gamma^*)}{|V(\Gamma^*)|}. \end{equation} \begin{definition} \label{defn:prod} \rm Let $\Gamma$ and $\Psi$ be graphs. The \textit{categorical product} $\Gamma \times \Psi$ of $\Gamma$ and $\Psi$ and the \textit{lexicographic product} $\Gamma[\Psi]$ of $\Gamma$ by $\Psi$ are both defined to have vertex set $V(\Gamma) \times V(\Psi)$. Their edge sets are defined as follows: $$ E(\Gamma \times \Psi) := \{\{(u,x),(v,y)\}: \mbox{$\{u,v\} \in E(\Gamma)$ and $\{x,y\} \in E(\Psi)$}\} $$ $$ E(\Gamma[\Psi]) := \{\{(u,x), (v,y)\}: \mbox{either $\{u,v\} \in E(\Gamma)$, or $u=v$ and $\{x,y\} \in E(\Psi)$}\}. $$ The \textit{deleted lexicographic product} of $\Gamma$ by $\Psi$, denoted by $\Gamma[\Psi]-d\Gamma$ where $d$ is the order of $\Psi$, is obtained from $\Gamma[\Psi]$ by deleting all edges of the form $\{(u,x),(v,x)\}$ with $\{u,v\} \in E(\Gamma)$ and $x \in V(\Psi)$. (The categorical product of graphs is also known as the Kronecker product, direct product and tensor product in the literature.) \end{definition} It was proved in \cite[Proposition 4.18]{MR1788124} that the cartesian product of two vertex-transitive graphs is vertex-transitive. Similarly, one can prove the following lemma of which the second statement follows from the fact that ${\rm Aut}(\Gamma) \times {\rm Aut}(\Psi) \le {\rm Aut}(\Gamma \times \Psi)$.
\begin{lemma}[{\cite[Proposition 4.18]{MR1788124}}] \label{lem:trans} The categorical product of any two connected vertex-transitive graphs is vertex-transitive; the categorical product of any two symmetric graphs is symmetric. \end{lemma} \begin{lemma}[{\cite[Proposition 2.1]{MR2089014}}] \label{lem:prop2.1} Let $\Psi$, $\Gamma$ and $\Lambda$ be graphs. Then the following hold: \begin{itemize} \item[\rm (a)] $\Psi\times\Gamma\rightarrow\Psi$ and $\Psi\times\Gamma\rightarrow\Gamma$, and the corresponding homomorphisms are given by projections $(u,v) \mapsto u$ and $(u,v) \mapsto v$, $(u,v)\in V(\Psi)\times V(\Gamma)$, respectively; \item[\rm (b)] if $\Lambda\rightarrow\Psi$ and $\Lambda\rightarrow\Gamma$, then $\Lambda\rightarrow\Psi\times\Gamma$; \item[\rm (c)] in particular, $\Gamma\rightarrow\Psi$ if and only if $\Psi\times\Gamma\leftrightarrow\Gamma$. \end{itemize} \end{lemma} Given a graph $\Gamma$ and an integer $t \ge 2$, a homomorphism from $\Gamma^t := \overbrace{\Gamma \times \cdots \times \Gamma}^{t}$ to $\Gamma$ is called a \textit{polymorphism} \cite{MR2089014}. A polymorphism $\phi: \Gamma^t \rightarrow \Gamma$ is called \textit{idempotent} \cite{MR2089014} if $\phi(u, \ldots, u)=u$ for all $u\in V(\Gamma)$. Obviously, for each $i = 1, \ldots, t$, the {\em projection} $\pi_i: \Gamma^t \rightarrow \Gamma$ defined by $\pi_i(u_1, \ldots, u_t) = u_i$ is idempotent. A graph $\Gamma$ is called \textit{projective} \cite{MR2089014} if for all integers $t \ge 2$ the only idempotent polymorphisms $\Gamma^t \rightarrow \Gamma$ are the projections $\pi_1, \ldots, \pi_t$. \begin{theorem}[{\cite[Theorem 1.5]{MR1804825}}] \label{thm:prim cores proj} Let $\Gamma$ be a vertex-transitive and ${\rm Aut}(\Gamma)$-primitive core. Then $\Gamma$ is projective. \end{theorem} \begin{theorem}[{\cite[Theorem 1.4]{MR1804825}}] \label{thm:retr fac} Let $\Gamma$ be a vertex-transitive core. If $\Gamma$ is projective, then whenever $\Gamma$ is a retract of a categorical product of connected graphs, it is a retract of a factor. \end{theorem} As mentioned earlier, our proof of Theorem \ref{thm:main} relies on the classification of imprimitive symmetric graphs of order a product of two distinct primes, obtained in \cite[Theorem 2.4]{MR884254}, \cite[Theorems 3-4]{MR1223693} and \cite[Theorem]{MR1223702}. We state this classification below but defer the definition of related graphs for technical reasons. \begin{theorem} \label{thm:circu} Let $p$ and $q$ be primes with $2 \le p < q$, and let $\Gamma$ be an imprimitive symmetric graph of order $pq$. Then $\Gamma$ is isomorphic to one of the graphs in Example \ref{ex:inc}, Definitions \ref{defn:2qr}, \ref{defn:3qr} and \ref{defn:pqrsu}, Example \ref{ex:lexprod} and Theorem \ref{thm:sym ms}. \end{theorem} These graphs come in three classes, namely incidence and non-incidence graphs of two specific block designs, circulant graphs, and Maru\v{s}i\v{c}-Scapellato graphs. We will give their definitions and determine their cores in \S\ref{sec:incidence}, \S\ref{sec:circulants} and \S\ref{sec:cores ms}, respectively.
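We also record a small example of how Lemma \ref{lem:prop2.1} and Corollary \ref{coro:prime order} combine; the example is only an illustration and is not used later. Since $\chi(C_5)=3$, there is a homomorphism from the $5$-cycle $C_5$ to $K_3$, so $K_3 \times C_5 \leftrightarrow C_5$ by Lemma \ref{lem:prop2.1}(c); since $C_5$ is vertex-transitive of prime order, it is a core by Corollary \ref{coro:prime order}, and hence the core of $K_3 \times C_5$ is $C_5$ by Lemma \ref{lem:hom eq}. Arguments of this shape will be used repeatedly in \S\ref{subsec:dellexprod} and \S\ref{subsec:Cat prod}.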
\section{Symmetric incidence and non-incidence graphs of order $2q$} \label{sec:incidence} Let ${\cal D}$ be a $2$-design with point set ${\bf P}$ and block set ${\bf B}$. The \textit{incidence graph} of ${\cal D}$, denoted by $X({\cal D})$, is defined to be the bipartite graph with bipartition $\{{\bf P}, {\bf B}\}$ such that $v \in {\bf P}$ and $B \in {\bf B}$ are adjacent if and only if $v$ is incident to $B$ in ${\cal D}$. The \textit{nonincidence graph} of ${\cal D}$, denoted by $X'({\cal D})$, is the bipartite graph with the same bipartition such that $v \in {\bf P}$ and $B \in {\bf B}$ are adjacent if and only if $v$ is not incident to $B$ in ${\cal D}$. Since $X({\cal D})$ and $X'({\cal D})$ are bipartite with at least one edge, their cores are isomorphic to $K_2$. \begin{example} \label{ex:inc} {\em Given a prime power $r$ and an integer $d \ge 2$, the symmetric design ${\rm PG}(d-1,r)$ has its points and blocks the points and hyperplanes respectively of the $(d-1)$-dimensional projective space over ${\rm GF}(r)$. It is noted in \cite{MR884254} that $X({\rm PG}(d-1,r))$ and $X'({\rm PG}(d-1,r))$ are symmetric graphs each with $2(r^d-1)/(r-1)$ vertices. Thus, when $(r^d-1)/(r-1)$ is a prime, these two graphs are symmetric graphs of order twice a prime. The unique $2$-$(11,5,2)$ design $H(11)$ has as its points the elements of $\mathbb{Z}_{11}$ and its blocks the 11 sets $R+i=\{x+i: x\in R\}$, where $i\in\mathbb{Z}_{11}$ and addition is undertaken in $\mathbb{Z}_{11}$, and $R=\{1,3,4,5,9\}$ is the set of non-zero quadratic residues modulo $11$. It was noted in \cite{MR884254} that both $X(H(11))$ and $X'(H(11))$ are symmetric with order $22$, and $X(H(11))$ is isomorphic to the graph $G(2 \cdot 11,5)$ to be defined in Definition \ref{defn:2qr}. Since $X({\rm PG}(d-1,r))$, $X'({\rm PG}(d-1,r))$, $X(H(11))$ and $X'(H(11))$ are bipartite, their cores are all isomorphic to $K_2$, justifying lines 2-5 in Table \ref{tab:2}. } \end{example} \section{Cores of imprimitive symmetric circulant graphs of order $pq$} \label{sec:circulants} \textit{Throughout this section $p$ and $q$ are primes with $2 \le p < q$.} The purpose of this section is to determine the cores of imprimitive symmetric circulants of order $pq$. To be self-contained we first give the definitions \cite{MR884254, MR1223693, MR1223702} of such circulants. We then determine their cores in subsequent subsections in this section. \subsection{Symmetric circulant graphs of order $pq$} \label{subsec:circu} Let $p$ be a prime and $r$ a positive divisor of $p-1$. Denote by $H(p,r)$ the unique subgroup of ${\rm Aut}(\mathbb{Z}_p) \cong \mathbb{Z}_p^\ast$ with order $r$, where $\mathbb{Z}_p^\ast$ is the multiplicative group of units of $\mathbb{Z}_p$. \begin{definition} \label{defn:pr} {\em Define $G(p,r)$ to be the circulant graph of order $p$ relative to $H(p, r)$. That is, $G(p,r)$ has vertex set $\mathbb{Z}_{p}$ such that $x, y \in \mathbb{Z}_{p}$ are adjacent if and only if $y - x \in H(p, r)$. } \end{definition} It was proved in \cite[Theorem 3]{MR0279000} that, for an odd prime $p$, a graph $\Gamma$ is a connected symmetric graph of order $p$ if and only if $\Gamma \cong G(p,r)$ for some even divisor $r$ of $p-1$.
Moreover, $G(p,r)$ has valency $r$, and if $r<p-1$ then ${\rm Aut}(G(p,r)) \cong \mathbb{Z}_{p}\rtimes H(p,r)$ ($\le {\rm AGL}(1, p)$) is a Frobenius group in its action on the vertex set $\mathbb{Z}_p$ of $G(p,r)$, while $G(p,p-1)=K_p$. (The fact that ${\rm Aut}(G(p,r))$ is a Frobenius group on $\mathbb{Z}_p$ was also observed in \cite[Corollary 2.11]{2013arXiv1302.6652T} in a different setting.) \begin{definition} \label{defn:2qr} \rm Let $A$ and $A'$ be two disjoint copies of $\mathbb{Z}_q$, and for each $i\in \mathbb{Z}_q$, denote the corresponding elements of $A$ and $A'$ by $i$ and $i'$, respectively. For each positive divisor $r$ of $q-1$, define $G(2q,r)$ \cite{MR884254} to be the graph with vertex set $A \cup A'$ and edge set $\{\{x,y'\}: x,y \in\mathbb{Z}_q \text{ and } y-x \in H(q,r)\}$. For each positive even divisor $r$ of $q-1$, define $G(2,q,r)$ \cite{MR884254} to be the graph with vertex set $A \cup A'$ and edge set $\{\{x,y\}, \{x',y\}, \{x,y'\}, \{x',y'\}: x,y\in\mathbb{Z}_q \text{ and } y-x\in H(q,r)\}$. \end{definition} It was proved in \cite[Lemmas 2.1 and 2.2]{MR884254} that both $G(2q,r)$ and $G(2,q,r)$ are symmetric. We now show that they are both circulant graphs. In fact, in \cite[Section 2]{MR884254} it was shown that both $G(2q,r)$ and $G(2,q,r)$ have automorphisms $\tau$ and $\rho$ defined by $\tau(i)=i+1$, $\tau(i')=(i+1)'$, $\rho(i)=(-i)'$ and $\rho(i')=-i$. It can be verified that they also have automorphisms $\tau_a$ where $a\in H(q,r)$, defined by $\tau_a(i)=ai+1$ and $\tau_a(i')=(ai+1)'$. Thus they both have automorphism $\tau_{-1}\rho$, given by $\tau_{-1}\rho(i)=(i+1)'$ and $\tau_{-1}\rho(i')=i+1$. It can be verified that $\tau_{-1}\rho$ has order $2q$. On the other hand, $\left\langle \tau_{-1}\rho\right\rangle$ is transitive on $A \cup A'$, because for any $i,j\in A$, $(\tau_{-1}\rho)^{n_1}(i)=i+n_1=j$ for some even integer $n_1$, and for any $i\in A$ and $j'\in A'$, $(\tau_{-1}\rho)^{n_2}(i)=(i+n_2)'=j'$ for some odd integer $n_2$. (Note that $2$ generates $\mathbb{Z}_q$ as $q$ is an odd prime.) Now that $|A \cup A'| = |\left\langle \tau_{-1}\rho\right\rangle| = 2q$, it follows that $\left\langle \tau_{-1}\rho\right\rangle$ is regular on $A \cup A'$. Since $\left\langle \tau_{-1}\rho\right\rangle \le {\rm Aut}(G(2q,r))$ and $\left\langle \tau_{-1}\rho\right\rangle \le {\rm Aut}(G(2,q,r))$, we see that $G(2q,r)$ and $G(2,q,r)$ are both circulants. \begin{definition} \label{defn:3qr} \rm For each positive divisor $r$ of $q-1$, define $G(3q,r)$ \cite{MR1223693} to be the graph with vertex set $\mathbb{Z}_3 \times \mathbb{Z}_q$ and edge set $\{\{(i,x),(i+1,y)\}: i \in \mathbb{Z}_3, x, y \in \mathbb{Z}_q \text{ and } y-x\in H(q,r)\}$. \end{definition} It was proved in \cite[Example 3.4]{MR1223693} that $G(3q,r)$ is a connected symmetric graph with order $3q$ and valency $2r$. Moreover, $G(3q,r)$ is a circulant graph by \cite[Lemma 3.6, Theorem 3]{MR1223693}. \begin{definition} \label{defn:pqrsu} \rm Let $s$ be an even divisor of $p-1$ and $r$ a divisor of $q-1$. Let $H(p,s)=\langle a\rangle\leq \mathbb{Z}_p^\ast$. Let $t\in \mathbb{Z}_q^\ast$ be such that $t^{s/2}\in -H(q,r)$, and let $u = {\rm lcm}(r, o(t))$ (least common multiple), where $o(t)$ is the order of $t$ in $\mathbb{Z}_q^\ast$.
Define $G(pq;r,s,u)$ \cite{MR1223702} to be the graph with vertex set $\mathbb{Z}_p \times \mathbb{Z}_q$ such that $(i,x)$ and $(j,y)$ are adjacent if and only if there exists an integer $l$ such that $j-i=a^l$ and $y-x\in t^lH(q,r)$. \end{definition} Up to isomorphism $G(pq;r,s,u)$ is independent \cite{MR1223702} of the choice of $a$ and $t$ with ${\rm lcm}(r, o(t))=u$. It was proved in \cite[Theorem 3.5]{MR1223702} that $G(pq;r,s,u)$ is a connected symmetric graph of order $pq$ and valency $sr$, and moreover $G(pq;r,s,u)\cong G(pq;r',s',u')$ if and only if $r=r'$, $s=s'$ and $u=u'$. Furthermore, in the proof of \cite[Theorem 3.5]{MR1223702} it was shown that $G(pq;r,s,u)$ is a Cayley graph on $\mathbb{Z}_p \times \mathbb{Z}_q$. Since $p\neq q$ are primes, $\mathbb{Z}_p\times\mathbb{Z}_q\cong\mathbb{Z}_{pq}$ and hence $G(pq;r,s,u)$ is a circulant graph. Denote by $K_{q,q}$ (respectively, $K_{q,q,q}$) the complete bipartite (respectively, tripartite) graph with $q$ vertices in each part of the bipartition (respectively, tripartition). \begin{example} \label{ex:lexprod} \rm The following graphs are symmetric circulants \cite{MR884254,MR1223693,MR1223702}: \begin{itemize} \item[\rm (a)] $K_2[\overline{K}_{q}] \cong K_{q,q}$, where $q \ge 3$; \item[\rm (b)] $K_3[\overline{K}_{q}] \cong K_{q,q,q}$, where $q \ge 5$; \item[\rm (c)] $G(p,s)[\overline{K}_{q}]$ and $G(q,r)[\overline{K}_{p}]$, where $3 \leq p<q$, $s$ is an even divisor of $p-1$, and $r$ is an even divisor of $q-1$; \item[\rm (d)] $G(p,s)[\overline{K}_{q}]-qG(p,s)$ and $G(q,r)[\overline{K}_{p}]-pG(q,r)$, where $5\leq p<q$, $s$ is an even divisor of $p-1$, and $r$ is an even divisor of $q-1$. \end{itemize} As mentioned in \cite[Section 3]{MR1223702}, the graphs in Example \ref{ex:lexprod} are all circulants since each of them admits a cyclic group of order $pq$ acting regularly on the vertex set (where $p=2, 3$ in (a), (b) respectively). \end{example} Definitions \ref{defn:2qr}, \ref{defn:3qr} and \ref{defn:pqrsu} and Example \ref{ex:lexprod} give all imprimitive symmetric circulant graphs of order a product of two distinct primes, listed in the second column in Table \ref{tab:cir}. We determine their cores in the remainder of this section. \subsection{Lexicographic products} \label{subsec:lex prod} Since $G(2q,r)$ is a bipartite graph by Definition \ref{defn:2qr}, we have: \begin{lemma} \label{lem:2qr} The core of $G(2q,r)$ is $K_2$. \end{lemma} \begin{theorem} \label{thm:lexprod} The core of $G(p,s)[\overline{K}_{q}]$ is $G(p,s)$, and the core of $G(q,r)[\overline{K}_{p}]$ is $G(q,r)$. \end{theorem} \begin{proof} Denote $\Gamma := G(p,s)[\overline{K}_{q}]$. Since for a fixed $i \in V(\overline{K}_{q})$ the subset $\{(x, i): x \in V(G(p,s))\}$ of $V(\Gamma)$ induces a subgraph of $\Gamma$ isomorphic to $G(p,s)$, we have $G(p,s) \rightarrow\Gamma$. On the other hand, we have $\Gamma \rightarrow G(p,s)$ via the projection $V(\Gamma)\rightarrow V(G(p,s)), (x, i) \mapsto x$. Therefore, $\Gamma\leftrightarrow G(p,s)$ and so $\Gamma^\ast\cong G(p,s)^\ast$ by Lemma \ref{lem:hom eq}. Since $G(p,s)$ is a vertex-transitive graph with prime order, we have $G(p,s)^\ast=G(p,s)$ by Corollary \ref{coro:prime order}.
Hence the core of $G(p,s)[\overline{K}_{q}]$ is $G(p,s)$. Similarly, one can show that the core of $G(q,r)[\overline{K}_{p}]$ is $G(q,r)$. $\Box$ \end{proof} \begin{theorem} \label{lem:2qr1} $G(2,q,r)\cong G(q,r)[\overline{K}_2]$, and the core of $G(2,q,r)$ is $G(q,r)$. \end{theorem} \begin{proof} The circulant $G(q,r)$ has vertex set $\mathbb{Z}_q$, with $x, y \in \mathbb{Z}_q$ adjacent if and only if $y-x \in H(q,r)$. The lexicographic product $G(q,r)[\overline{K}_2]$ can be thought of as defined on the vertex set $\mathbb{Z}_q \times \mathbb{Z}_2$, with $(x, i), (y, j) \in \mathbb{Z}_q \times \mathbb{Z}_2$ adjacent if and only if $x$ and $y$ are adjacent in $G(q,r)$. Thus, using the notation in Definition \ref{defn:2qr}, one can see that the map $$ A \cup A' \rightarrow \mathbb{Z}_q \times \mathbb{Z}_2: x \mapsto (x,0),\;\, x' \mapsto (x,1),\;\, x \in \mathbb{Z}_q $$ defines an isomorphism from $G(2,q,r)$ to $G(q,r)[\overline{K}_2]$. Therefore, $G(2,q,r)\cong G(q,r)[\overline{K}_2]$ and so the core of $G(2,q,r)$ is $G(q,r)$ by Theorem \ref{thm:lexprod}. $\Box$ \end{proof} \subsection{Deleted lexicographic products} \label{subsec:dellexprod} \begin{lemma} \label{lem:iso del lex} $G(p,s)[\overline{K}_{q}]-qG(p,s) = G(p,s)\times K_{q}$ and $G(q,r)[\overline{K}_{p}]-pG(q,r) = G(q,r)\times K_{p}$. \end{lemma} \begin{proof} We may think of $G(p,s)[\overline{K}_{q}]$ as defined on $\mathbb{Z}_p \times \mathbb{Z}_q$. Then $(x, i), (y, j) \in \mathbb{Z}_p \times \mathbb{Z}_q$ are adjacent in $G(p,s)[\overline{K}_{q}]-qG(p,s)$ $\Leftrightarrow$ $x, y \in \mathbb{Z}_p$ are adjacent in $G(p,s)$ and $i \ne j$ $\Leftrightarrow$ $x, y \in \mathbb{Z}_p$ are adjacent in $G(p,s)$ and $i, j \in \mathbb{Z}_q$ are adjacent in $K_{q}$ $\Leftrightarrow$ $(x, i), (y, j)$ are adjacent in $G(p,s)\times K_{q}$. Hence $G(p,s)[\overline{K}_{q}]-qG(p,s) = G(p,s)\times K_{q}$. Similarly, $G(q,r)[\overline{K}_{p}]-pG(q,r) = G(q,r)\times K_{p}$. $\Box$ \end{proof} \begin{lemma} \label{lem:pq circ hom} Suppose that $s$ is an even divisor of $p-1$ and $r$ is an even divisor of $q-1$. Let $\Gamma = G(p,s)\times G(q,r)$. Then $\Gamma$ is not a core if and only if one of the following occurs: \begin{itemize} \item[\rm (a)] $G(p,s)\rightarrow G(q,r)$, in which case $\Gamma^\ast\cong G(p,s)$; \item[\rm (b)] $G(q,r)\rightarrow G(p,s)$, in which case $\Gamma^\ast\cong G(q,r)$. \end{itemize} \end{lemma} \begin{proof} Since $s$ is an even divisor of $p-1$ and $r$ is an even divisor of $q-1$, both $G(p,s)$ and $G(q,r)$ are symmetric. Thus $\Gamma$ is symmetric by Lemma \ref{lem:trans}. Moreover, both $G(p,s)$ and $G(q,r)$ are cores by Corollary \ref{coro:prime order}. If $G(p,s)\rightarrow G(q,r)$, then $\Gamma\leftrightarrow G(p,s)$ by Lemma \ref{lem:prop2.1}, and so $\Gamma^\ast\cong G(p,s)^\ast$ by Lemma \ref{lem:hom eq}. Since $G(p,s)$ is a core, it follows that $\Gamma^\ast\cong G(p,s)$ and so $\Gamma$ is not a core. Similarly, if $G(q,r)\rightarrow G(p,s)$, then $\Gamma^\ast\cong G(q,r)$ and $\Gamma$ is not a core. In the rest of this proof we assume that $\Gamma$ is not a core.
By Theorems \ref{thm:sym cores} and \ref{thm:order vt}, $\Gamma^\ast$ is a symmetric graph of prime order (and hence is isomorphic to a circulant graph), and so ${\rm Aut}(\Gamma^\ast)$ is primitive on $V(\Gamma^\ast)$. Thus, by Theorem \ref{thm:prim cores proj}, $\Gamma^\ast$ is projective. Since $\Gamma^\ast$ is vertex-transitive and is a retract of $\Gamma$, it follows from Theorem \ref{thm:retr fac} that $\Gamma^\ast$ is a retract of either $G(p,s)$ or $G(q,r)$. Since both $G(p,s)$ and $G(q,r)$ are cores, we have either $\Gamma^\ast\cong G(p,s)$ or $\Gamma^\ast\cong G(q,r)$. Since $\Gamma^\ast \leftrightarrow \Gamma$, $\Gamma\rightarrow G(p,s)$ and $\Gamma\rightarrow G(q,r)$, we have $\Gamma^\ast\rightarrow G(p,s)$ and $\Gamma^\ast\rightarrow G(q,r)$. Therefore, if $\Gamma^\ast\cong G(p,s)$ then $G(p,s)\rightarrow G(q,r)$, and if $\Gamma^\ast\cong G(q,r)$ then $G(q,r)\rightarrow G(p,s)$. $\Box$ \end{proof} \begin{theorem} \label{thm:del lex prod} Suppose that $s$ is an even divisor of $p-1$ and $r$ is an even divisor of $q-1$. \begin{itemize} \item[\rm (a)] If $\Gamma = G(p,s)[\overline{K}_{q}]-qG(p,s)$, then $\Gamma^\ast\cong G(p,s)$. \item[\rm (b)] If $\Gamma = G(q,r)[\overline{K}_{p}]-pG(q,r)$, then exactly one of the following occurs: \begin{enumerate}[\rm (1)] \item $\chi(G(q,r))\leq p$, in which case $\Gamma^\ast\cong G(q,r)$; \item $\omega(G(q,r))\geq p$, in which case $\Gamma^\ast\cong K_p$; \item $\chi(G(q,r))> p>\omega(G(q,r))$, in which case $\Gamma$ is a core. \end{enumerate} \end{itemize} \end{theorem} \begin{proof} (a) Let $\Gamma:=G(p,s)[\overline{K}_{q}]-qG(p,s)$. Since $G(q,q-1) \cong K_q$, by Lemma \ref{lem:iso del lex}, $\Gamma \cong G(p,s)\times G(q,q-1)$. Since $p<q$, we have $\chi(G(p,s))< q$ and so $G(p,s)\rightarrow G(q,q-1)$. Therefore, by Lemma \ref{lem:pq circ hom}, $\Gamma^\ast\cong G(p,s)$. (b) Now consider $\Gamma:=G(q,r)[\overline{K}_{p}]-pG(q,r)$. Since $G(p,p-1) \cong K_p$, by Lemma \ref{lem:iso del lex}, $\Gamma \cong G(q,r) \times G(p,p-1)$. Thus, by Lemma \ref{lem:pq circ hom}, if $G(q,r)\rightarrow G(p,p-1)$ (that is, if $\chi(G(q,r))\leq p$), then $\Gamma^\ast\cong G(q,r)$; if $G(p,p-1)\rightarrow G(q,r)$ (that is, if $\omega(G(q,r))\geq p$), then $\Gamma^\ast\cong G(p,p-1)\cong K_p$; and if none of these cases occurs (that is, if $\chi(G(q,r))> p>\omega(G(q,r))$), then $\Gamma$ is a core. Therefore, at least one of cases (1)-(3) occurs. If, say, $\omega(G(q,r))=\chi(G(q,r))=k$, then $G(q,r)^\ast\cong K_k$. Since $G(q,r)$ is a core by Corollary \ref{coro:prime order}, this happens precisely when $r = q-1 = k-1$. Since $p < q$, it follows that exactly one of (1)-(3) occurs. $\Box$ \end{proof} \subsection{Categorical products} \label{subsec:Cat prod} \begin{lemma} \label{lem:3qr} $G(3q,r) = G(3q;r,2,u)$, where $u=r$ if $r$ is even, and $u=2r$ if $r$ is odd. \end{lemma} \begin{proof} We use the notation in Definitions \ref{defn:3qr} and \ref{defn:pqrsu}.
For $G(3q;r,2,u)$, we have $p = 3$, $s=2$ and $H(p, s) = \langle -1 \rangle = \mathbb{Z}_3^*$. Let $t \in \mathbb{Z}_q^*$ be such that $t^{s/2} = t \in -H(q,r)$. Then $t^2\in H(q,r)$. If $r$ is even, then $-H(q,r)=H(q,r)$ and so $\langle t \rangle\leq H(q,r)$. Hence $o(t)$ divides $r$ and $u={\rm lcm}(o(t),r)=r$. If $r$ is odd, then $-H(q,r) \neq H(q,r)$. Thus $t^k\in H(q,r)$ for even $k$ and $t^k \in -H(q,r)$ for odd $k$. Hence $o(t)$ is even and $u={\rm lcm}(o(t),r)=2r$. Note that both $G(3q,r)$ and $G(3q;r,2,u)$ are defined on the vertex set $\mathbb{Z}_3 \times \mathbb{Z}_q$. Let $(i,x), (j,y) \in \mathbb{Z}_3 \times \mathbb{Z}_q$. If these two vertices are adjacent in $G(3q,r)$, then by Definition \ref{defn:3qr}, $j -i \equiv 1 \equiv (-1)^2\mod{3}$ and $y-x\in H(q,r)=t^2H(q,r)$. Since $H(3, 2) = \langle -1\rangle$, by Definition \ref{defn:pqrsu}, $(i,x)$ and $(j,y)$ are also adjacent in $G(3q;r,2,u)$. Now assume that $(i,x)$ and $(j,y)$ are adjacent in $G(3q;r,2,u)$. Then by Definition \ref{defn:pqrsu}, there exists an integer $l$ such that $j -i \equiv (-1)^l \mod{3}$ and $y-x\in t^l H(q,r)$. If $l$ is even, then $t^l\in H(q,r)$, $j-i \equiv 1 \mod{3}$ and $y-x\in H(q,r)$, implying that $(i,x)$ and $(j,y)$ are adjacent in $G(3q,r)$. If $l$ is odd, then $t^l\in -H(q,r)$, $i-j\equiv 1\mod{3}$ and $x-y\in H(q,r)$, again implying that $(i,x)$ and $(j,y)$ are adjacent in $G(3q,r)$. Therefore, $G(3q,r) = G(3q;r,2,u)$. $\Box$ \end{proof} Note that in $G(pq;r,s,u)$ the integer $u$ must be a divisor of $q-1$, because in Definition \ref{defn:pqrsu} $q-1$ is a common multiple of $r$ and $o(t)$ and $u = {\rm lcm}(o(t),r)$. Thus, for a given $G(pq;r,s,u)$, the graph $G(q,u)$ is well-defined. The next lemma connects $G(pq;r,s,u)$ and $G(p,s)\times G(q,u)$. \begin{lemma} \label{lem:pqrsu cat prod} $G(p,s)\times G(q,u)$ is symmetric, and $G(pq;r,s,u)$ is isomorphic to a spanning subgraph of $G(p,s)\times G(q,u)$. Moreover, $G(pq;r,s,u) \cong G(p,s)\times G(q,u)$ if and only if the element $t \in \mathbb{Z}_q^*$ in the definition of $G(pq;r,s,u)$ belongs to $H(q,r)$. \end{lemma} \begin{proof} Denote $\Gamma := G(pq;r,s,u)$ and $\Psi := G(p,s)\times G(q,u)$. The definition of $\Gamma$ as given in Definition \ref{defn:pqrsu} requires $s$ to be even so that $G(p,s)$ is symmetric. We now prove that $u$ is even and so $G(q,u)$ is symmetric as well. From this and Lemma \ref{lem:trans} we then obtain that $\Psi$ is symmetric. As in Definition \ref{defn:pqrsu} let $H(p,s)=\langle a \rangle$ and $H(q,r)=\langle c\rangle$, where $a \in\mathbb{Z}_{p}^\ast$ and $c \in\mathbb{Z}_{q}^\ast$. Let $\omega$ be a primitive element of $\mathbb{Z}^*_q$ so that we can take $c = \omega^{(q-1)/r}$. The definition of $\Gamma$ involves an element $t \in \mathbb{Z}_q^*$ such that $w := -t^{s/2} \in H(q,r)$. Let $k := o(t)$ be the order of $t$ in $\mathbb{Z}_q^*$ and let $u := {\rm lcm}(k,r)$. Then $\langle t, c\rangle$ is the unique subgroup of $\mathbb{Z}_q^\ast$ with order $u$. Hence $H(q,r) \le \langle t, c\rangle=H(q,u) = \langle \omega^{(q-1)/u} \rangle$. Thus $t^l H(q,r)\subseteq H(q,u)$ for any integer $l$, or equivalently $H(q, u) = \cup_{l=1}^{k} t^l H(q,r)$. Let $v$ be the inverse element of $w$ in $H(q,r)$. Then $vw = 1$ in $\mathbb{Z}_q^*$ and $H(q,u)$ contains the element $t^{s/2}v = -1$.
Since $-1$ is an involution in $\mathbb{Z}_q^\ast$, it follows that the order $u$ of $H(q,u)$ must be even. Therefore, $\Psi$ is symmetric by our discussion in the previous paragraph. We may take both $\Gamma$ and $\Psi$ as defined on the same vertex set $\mathbb{Z}_p \times \mathbb{Z}_q$. Suppose that $(i,x), (j,y) \in \mathbb{Z}_p \times \mathbb{Z}_q$ are adjacent in $\Gamma$. Then $j-i=a^l$ and $y-x\in t^lH(q,r)$ for some integer $l$. Since $H(p,s)=\langle a \rangle$ and $t^lH(q,r)\subseteq H(q,u)$ as mentioned above, this means that $(i,x)$ and $(j,y)$ are adjacent in $\Psi$. Thus $\Gamma$ is a spanning subgraph of $\Psi$. It remains to show that $t\in H(q,r)$ if and only if every edge of $\Psi$ is an edge of $\Gamma$. Suppose first that $t\in H(q,r)$. Then $u=r$ and $H(q,u) = H(q,r)$. If $(i,x), (j,y) \in \mathbb{Z}_p \times \mathbb{Z}_q$ are adjacent in $\Psi$, then $j-i\in H(p,s)$ and $y-x \in H(q,u)$. That is, $j-i=a^l\textnormal{ and }y-x\in t^lH(q,r)$ for some integer $l$, and so $(i,x)$ and $(j,y)$ are adjacent in $\Gamma$. Suppose conversely that every edge of $\Psi$ is an edge of $\Gamma$. Then for any $i, j \in \mathbb{Z}_p$ and $x, y \in \mathbb{Z}_q$ such that $j-i \in H(p, s)$ and $y-x \in H(q, u)$, we have $j-i = a^l$ and $y-x \in t^l H(q, r)$ for some integer $l$. In particular, for $(i, x) = (0, 0)$, this means that for any integers $l$ and $d$, there exists an integer $e$ such that $\omega^{d(q-1)/u} = t^l \omega^{e(q-1)/r}$. Taking $l=0$, this implies $u=r$, $H(q,r)=H(q,u)$, and so $t\in H(q,r)$. Therefore, $\Gamma = \Psi$ if and only if $t\in H(q,r)$. $\Box$ \end{proof} Combining Lemma \ref{lem:pq circ hom} and the second part of Lemma \ref{lem:pqrsu cat prod}, we obtain: \begin{theorem} \label{thm:pqrsu cat} Let $\Gamma = G(pq;r,s,u)$ and suppose that $t\in H(q,r)$. If $G(p,s) \rightarrow G(q,u)$, then $\Gamma^* \cong G(p,s)$; if $G(q,u) \rightarrow G(p,s)$, then $\Gamma^* \cong G(q,u)$; in the remaining case $\Gamma$ is a core. \end{theorem} In the next subsection we determine the core of $G(pq;r,s,u)$ when $t\notin H(q,r)$. As it turns out, this is a more challenging task. \subsection{The core of $G(pq;r,s,u)$ when $t\notin H(q,r)$} \label{subsec:cand} \textit{ Throughout this subsection we let $\Gamma:=G(pq;r,s,u)$ and assume $t\notin H(q,r)$. As in Definition \ref{defn:pqrsu}, let $a \in\mathbb{Z}_{p}^\ast$ and $c\in\mathbb{Z}_{q}^\ast$ be such that $H(p,s)=\langle a\rangle$ and $H(q,r)=\langle c\rangle$. Let $l$ and $m$ be fixed positive integers. Define the permutation $\gamma$ on the vertex set $\mathbb{Z}_p \times \mathbb{Z}_q$ of $\Gamma$ by $$ \gamma: (i, x) \mapsto (i+a^l, x + t^lc^m),\;\; (i,x) \in \mathbb{Z}_p \times \mathbb{Z}_q. $$} It is clear that $\gamma \in {\rm Aut}(\Gamma)$ and $(i, x)$ and $\gamma((i, x))$ are adjacent in $\Gamma$. The purpose of this subsection is to prove the following two results. The first asserts that $\Gamma^*$ is isomorphic to $\Gamma$, $G(p,s)$ or $G(q,u)$, and the second tells us exactly when each of these cases occurs. \begin{theorem} \label{thm:pqrsu core} Suppose that $\Gamma$ is not a core.
Then exactly one of the following occurs: \begin{itemize} \item[\rm (a)] $\Gamma^\ast\cong G(p,s)$ and the fibres of any retraction $\phi: \Gamma \rightarrow \Gamma^*$ are the sets $\{(i, x): x \in \mathbb{Z}_q\}$, $i \in \mathbb{Z}_p$; \item[\rm (b)] $\Gamma^\ast\cong G(q,u)$ and the fibres of any retraction $\phi: \Gamma \rightarrow \Gamma^*$ are the sets $\{(i, x): i \in \mathbb{Z}_p\}$, $x \in \mathbb{Z}_q$. \end{itemize} \end{theorem} \begin{theorem} \label{thm:pqrsu} \begin{itemize} \item[\rm (a)] $\Gamma^\ast \cong G(p,s)$ if and only if there exists a homomorphism $\eta: G(p,s)\rightarrow G(q,u)$ such that for every arc $(i,j)$ of $G(p,s)$, say, $j-i=a^l$ for some integer $l$, we have $\eta(j)-\eta(i)\in t^lH(q,r)$; \item[\rm (b)] $\Gamma^\ast\cong G(q,u)$ if and only if there exists a homomorphism $\zeta:G(q,u)\rightarrow G(p,s)$ such that for every arc $(x,y)$ of $G(q,u)$, say, $y-x\in t^lH(q,r)$ for some integer $l$, we have $\zeta(y)-\zeta(x)=a^l$. \end{itemize} \end{theorem} To establish these results we need to prove three lemmas first. \begin{lemma} \label{lem:pqrsu semi} Let $\phi:\Gamma\rightarrow\Gamma^\ast$ be a retraction. Then $(\phi \gamma^{j})\mid_{\Gamma^\ast} \in{\rm Aut}(\Gamma^\ast)$ for any integer $j \ge 1$, and $(\phi \gamma)\mid_{\Gamma^\ast}$ does not fix any vertex of $\Gamma^\ast$. \end{lemma} \begin{proof} Since $\gamma^j \in {\rm Aut}(\Gamma)$ and $\phi: \Gamma\rightarrow\Gamma^\ast$ is a homomorphism, $(\phi \gamma^j)\mid_{\Gamma^\ast}$ is an endomorphism of $\Gamma^*$. Since $\Gamma^*$ is a core, it follows that $(\phi \gamma^j)\mid_{\Gamma^\ast} \in {\rm Aut}(\Gamma^\ast)$. It is clear that $\gamma$ fixes no vertex of $V(\Gamma)$. Since $(i,x)$ and $\gamma(i,x) = (i+a^l,x+t^lc^m)$ are adjacent in $\Gamma$, and since fibres of $\phi$ are independent sets of $\Gamma$, we have $\phi(i,x) \ne \phi(\gamma(i,x)) = (\phi \gamma)(i,x)$. Since $\phi\mid_{\Gamma^\ast}$ is the identity map from $V(\Gamma^*)$ to itself, for $(i, x) \in V(\Gamma^*)$ we have $\phi(i,x) = (i,x)$ and therefore $(\phi \gamma)(i,x) \ne (i, x)$. In other words, $(\phi \gamma)\mid_{\Gamma^\ast}$ fixes no vertex of $\Gamma^\ast$. $\Box$ \end{proof} \begin{lemma} \label{lem:pqrsu semireg} Suppose that $\Gamma^* \ne \Gamma$ and $\Gamma^\ast$ is not a complete graph. Then the following hold: \begin{itemize} \item[\rm (a)] $\Gamma^\ast \cong G(n, d)$, where $n = p$ or $q$ and $d$ is an even divisor of $n-1$. Moreover, we may identify $\Gamma^\ast$ with $G(n, d)$ by labelling bijectively the vertices of $\Gamma^*$ by the elements of $\mathbb{Z}_n$ in such a way that $h^*, k^* \in V(\Gamma^*)$ are adjacent in $\Gamma^*$ if and only if $k - h \in H(n, d)$, where $k^*$ denotes the unique vertex of $\Gamma^*$ labelled by $k \in \mathbb{Z}_n$.
\item[\rm (b)] Under this identification, for any retraction $\phi:\Gamma\rightarrow\Gamma^\ast$ that fixes every vertex of $\Gamma^*$, there exists $b = b(\Gamma^*, \phi) \in \mathbb{Z}_n$ such that $(\phi \gamma)\mid_{\Gamma^\ast}(k^*) = (k + b)^*$ (with addition in $\mathbb{Z}_n$) for $k^* \in V(\Gamma^\ast)$.
\end{itemize}
\end{lemma}

\begin{proof}
Since $\Gamma$ is symmetric, by Theorem \ref{thm:sym cores}, $\Gamma^*$ is symmetric. Denote $n := |V(\Gamma^*)|$. Since $\Gamma$ has order $pq$ and is not a core, by Theorem \ref{thm:order vt}, we have $n = p$ or $q$. Since $\Gamma^*$ is symmetric of prime order, $\Gamma^\ast\cong G(n, d)$ for some even divisor $d$ of $n-1$. Thus we may identify $\Gamma^*$ with $G(n, d)$ in the way described in (a). Since $\Gamma^\ast$ is not a complete graph, under this identification ${\rm Aut}(\Gamma^\ast) \cong \mathbb{Z}_{n}\rtimes H(n, d)$ consists of all affine transformations $\phi_{m, b}$ ($m \in H(n, d), b \in \mathbb{Z}_n$) defined by $\phi_{m, b}(k^*) = (mk+b)^*$ for $k \in \mathbb{Z}_n$ (with addition undertaken in $\mathbb{Z}_n$). If $m \ne 1$, then $b = k (1-m)$ in $\mathbb{Z}_n$ for some $k \in \mathbb{Z}_n$ and so $\phi_{m, b}(k^*) = (mk+k (1-m))^* = k^*$. That is, if $m \ne 1$, then $\phi_{m, b} \in {\rm Aut}(\Gamma^\ast)$ fixes at least one vertex of $\Gamma^*$. On the other hand, for any retraction $\phi:\Gamma\rightarrow\Gamma^\ast$, by Lemma \ref{lem:pqrsu semi} we have $(\phi \gamma)\mid_{\Gamma^\ast} \in {\rm Aut}(\Gamma^\ast)$ and it does not fix any vertex of $\Gamma^\ast$. Therefore, there exists $b \in \mathbb{Z}_n$ such that $(\phi \gamma)\mid_{\Gamma^\ast} = \phi_{1, b}$, that is, $(\phi \gamma)\mid_{\Gamma^\ast}(k^*) = (k + b)^*$ for $k \in \mathbb{Z}_n$.
$\Box$ \end{proof}

Technically, $\Gamma^*$ is an induced subgraph of $\Gamma$ (and so its vertices $k^*$ are elements of $\mathbb{Z}_p \times \mathbb{Z}_q$), but we identify it with $G(n, d)$ as in Lemma \ref{lem:pqrsu semireg}(a). With this convention the fibres of $\phi$ are
\begin{equation}
\label{eq:Pk}
P_{k, \phi} := \{(i, x) \in \mathbb{Z}_p \times \mathbb{Z}_q: \phi(i, x)=k^*\},\;\, k \in \mathbb{Z}_n.
\end{equation}
Since $\phi$ fixes every vertex of $\Gamma^*$, we have $k^* \in P_{k, \phi}$ for every $k \in \mathbb{Z}_n$. Since the number of fibres $P_{k, \phi}$ and the number of vertices of $\Gamma^*$ are both equal to $n$, it follows that
\begin{equation}
\label{eq:fibre}
P_{k, \phi} \cap V(\Gamma^*) = \{k^*\}.
\end{equation}

\begin{lemma}
\label{lem:pqrsu ind}
Suppose that $\Gamma^* \ne \Gamma$ and $\Gamma^\ast$ is not a complete graph. Then for any retraction $\phi:\Gamma\rightarrow\Gamma^\ast$, there exists $b = b(\Gamma^*, \phi) \in \mathbb{Z}_n$ such that $(\phi \gamma^{j})\mid_{\Gamma^\ast}(k^*) = (k+jb)^*$ for any integer $j \ge 1$ and $k^* \in V(\Gamma^*)$.
\end{lemma}

\begin{proof}
Since $\Gamma$ is symmetric and any automorphism of $\Gamma$ maps a core to a core, without loss of generality we may assume that the core $\Gamma^*$ under consideration contains the arc of $\Gamma$ from $(0, 0) \in \mathbb{Z}_p \times \mathbb{Z}_q$ to $(a^l, t^lc^m) \in \mathbb{Z}_p \times \mathbb{Z}_q$ (in particular, $(0, 0), (a^l, t^lc^m) \in V(\Gamma^*)$). Without loss of generality we may also assume that $0^* = (0, 0)$. By Lemma \ref{lem:pqrsu semireg}, there exists $b = b(\Gamma^*, \phi) \in \mathbb{Z}_n$ such that $(\phi \gamma)\mid_{\Gamma^\ast}(k^*) = (k + b)^*$ for every $k^* \in V(\Gamma^*)$, with the addition $k+b$ undertaken in $\mathbb{Z}_n$. More explicitly, if $k^* = (i, x)$, then $\phi(k^* + (a^l, t^lc^m)) = \phi(i+a^l, x+t^lc^m) = (\phi \gamma)(i, x) = (\phi \gamma)\mid_{\Gamma^\ast}(k^*) = (k + b)^*$, or equivalently, $k^* + (a^l, t^lc^m) \in P_{k+b, \phi}$. In particular, $\phi((a^l, t^lc^m)) = \phi(0^* + (a^l, t^lc^m)) = (\phi \gamma)\mid_{\Gamma^\ast}(0^*) = b^*$ and $(a^l, t^lc^m) \in P_{b, \phi}$. Since $b^*$ is the unique vertex of $\Gamma^*$ contained in $P_{b, \phi}$, from (\ref{eq:fibre}) it follows that $b^* = (a^l, t^lc^m)$.
We prove $(\phi \gamma^j)\mid_{\Gamma^\ast}(k^*) = (k + jb)^*$ for any $\phi$ and $k^* \in V(\Gamma^*)$ by induction on $j$. This is true when $j=1$ as noted above. Assume that, for any retraction $\phi:\Gamma\rightarrow\Gamma^\ast$ that fixes every vertex of $\Gamma^*$, the result is true for some $j \ge 1$. In what follows we prove $(\phi \gamma^{j+1})\mid_{\Gamma^\ast}(k^*) = (k + (j+1)b)^*$ for $k^* \in V(\Gamma^*)$ to complete the proof of the lemma.
Consider the image $\Gamma^{\#} := \gamma^j(\Gamma^*)$ of $\Gamma^*$ under $\gamma^j$. Since $\gamma^j \in {\rm Aut}(\Gamma)$, $\Gamma^{\#} \cong \Gamma^* \cong G(n, d)$ and $\Gamma^{\#}$ is a core of $\Gamma$. The vertices of $\Gamma^{\#}$ are $k^{\#} := \gamma^j(k^*)$ (where $k^* \in V(\Gamma^*)$), which are labelled by $k \in \mathbb{Z}_n$ respectively. Since $\phi(k^{\#}) = (\phi\gamma^j)(k^*) = (k + jb)^*$ by the induction hypothesis, we have $k^{\#} \in P_{k + jb, \phi}$. Moreover, since $\gamma^j \in {\rm Aut}(\Gamma)$ and $\phi$ is a retraction, $\gamma^j \phi: \Gamma \rightarrow \Gamma^{\#}$ is a retraction. Define $\tau: V(\Gamma^{\#}) \rightarrow V(\Gamma^{\#})$ by $\tau(k^{\#}) := (k-jb)^{\#}$ and then let $\psi := \tau\gamma^j\phi: \Gamma \rightarrow \Gamma^{\#}$. We have: $h^{\#}, k^{\#}$ are adjacent in $\Gamma^{\#}$ $\Leftrightarrow$ $h^{*}, k^{*}$ are adjacent in $\Gamma^{*}$ $\Leftrightarrow$ $k-h \in H(n, d)$. Thus $\tau \in {\rm Aut}(\Gamma^{\#})$ by the definition of $\tau$. Hence $\psi: \Gamma \rightarrow \Gamma^{\#}$ is a retraction and the set of fibres of $\psi$ is the same as that of $\gamma^j \phi$. However, the fibres of $\gamma^j\phi$ are the subsets $\{(i, x) \in \mathbb{Z}_p \times \mathbb{Z}_q: (\gamma^j \phi)(i, x)=\gamma^j(k^*)\} = \{(i, x) \in \mathbb{Z}_p \times \mathbb{Z}_q: \phi(i, x)=k^*\} = P_{k, \phi}$, $k \in \mathbb{Z}_n$.
Therefore, the set of fibres of $\psi$ is identical to the set of fibres $P_{k, \phi}$ of $\phi$. Moreover, $\psi(k^{\#})=(\tau\gamma^j\phi\gamma^j)(k^\ast)=(\tau\gamma^j)((k+jb)^\ast)=\tau((k+jb)^{\#})=k^{\#}$, that is, $\psi$ fixes every vertex of $\Gamma^{\#}$. Since $k^{\#} \in P_{k + jb, \phi}$ as shown above and the set of fibres of $\psi$ is $\{P_{k, \phi}: k \in \mathbb{Z}_n\}$, it follows that the unique fibre of $\psi$ containing $k^{\#}$, denoted by $P_{k, \psi}$, is given by $P_{k, \psi} = P_{k + jb, \phi}$. In particular, $0^{\#} = (ja^l, j t^lc^m) \in P_{jb, \phi}$ and $b^{\#} = ((j+1)a^l, (j+1)t^lc^m) \in P_{(j+1)b, \phi}$, and so $\gamma(0^{\#}) = b^{\#}$. Since $\psi$ fixes every vertex of $\Gamma^{\#}$, we then have $(\psi\gamma)|_{\Gamma^{\#}}(0^{\#}) = \psi(b^{\#}) = b^{\#}$. Thus, when applying Lemma \ref{lem:pqrsu semireg} to $(\Gamma^{\#}, \psi)$, the element $b(\Gamma^{\#}, \psi)$ of $\mathbb{Z}_n$ involved is equal to $b$, and so $\psi(\gamma(k^{\#})) = (\psi\gamma)|_{\Gamma^{\#}}(k^{\#}) = (k+b)^{\#}$ by this lemma. Therefore, $\gamma^{j+1}(k^{*}) = \gamma(k^{\#}) \in P_{{k+b}, \psi} = P_{k + (j+1)b, \phi}$, that is, $(\phi \gamma^{j+1})|_{\Gamma^*} (k^{*}) = (k + (j+1)b)^*$ as required.
$\Box$ \end{proof}

\begin{proof}\textbf{of Theorem \ref{thm:pqrsu core}}~~
Since $\Gamma$ has order $pq$ and is not a core, by Theorem \ref{thm:order vt}, $|V(\Gamma^\ast)|=p$ or $q$. Denote $\Psi := G(p,s)\times G(q,u)$.
\textsf{Case 1: $\Gamma^\ast$ is a complete graph.}~ Then $\Gamma^\ast\cong G(n,n-1)$, where $n=p\textnormal{ or }q$, and so $\Gamma$ contains a subgraph isomorphic to the complete graph $G(n,n-1)$. Since, by Lemma \ref{lem:pqrsu cat prod}, $\Psi$ contains $\Gamma$ as a spanning subgraph, it contains a subgraph isomorphic to $G(n,n-1)$. Hence $G(n,n-1) \rightarrow \Psi$. On the other hand, by Lemma \ref{lem:prop2.1}, there are projection homomorphisms $\Psi \rightarrow G(p,s)$ and $\Psi \rightarrow G(q,u)$. Also, we have $G(p,s) \rightarrow G(n,n-1)$ or $G(q,u) \rightarrow G(n,n-1)$ since $G(p,s)$ or $G(q,u)$ is a subgraph of $G(n, n-1)$, depending on whether $n=p$ or $q$. In either case we have $\Psi \rightarrow G(n, n-1)$. Therefore, $\Psi \leftrightarrow G(n, n-1)$ and so $\Psi^\ast\cong G(n,n-1)$. In particular, $\Psi$ is not a core. Thus, by Lemma \ref{lem:pq circ hom}, either $\Psi^\ast\cong G(p,s)$ or $\Psi^\ast\cong G(q,u)$. Therefore, if $n=p$ then $s=p-1$, and if $n=q$ then $u=q-1$. Since $\Gamma$ is a spanning subgraph of $\Psi$, the identity map from $\mathbb{Z}_p \times \mathbb{Z}_q$ to itself is a homomorphism $\Gamma \rightarrow \Psi$. This together with the projections $\Psi \rightarrow G(p,s)$ and $\Psi \rightarrow G(q,u)$ implies that the projections
\begin{equation*}
\pi:\Gamma\rightarrow G(p,s),\;\, (i, x) \mapsto i; \quad \rho:\Gamma\rightarrow G(q,u),\;\, (i, x) \mapsto x
\end{equation*}
are homomorphisms. Therefore, if $n=p$, then $s=p-1$ and $\pi:\Gamma\rightarrow G(p,p-1)$ is a retraction; if $n=q$, then $u=q-1$ and $\rho:\Gamma\rightarrow G(q,q-1)$ is a retraction.
\textsf{Case 2: $\Gamma^\ast$ is not a complete graph.}~ We consider only the case where $|V(\Gamma^\ast)|=p$, since the case $|V(\Gamma^\ast)|=q$ can be dealt with similarly. So let us assume $|V(\Gamma^\ast)|=p$ and $\Gamma^\ast\ncong K_p$. Then $n=p$ by Lemma \ref{lem:pqrsu semireg}. We aim to prove $\Gamma^* \cong G(p, s)$. Let $\phi: \Gamma \rightarrow \Gamma^*$ be any retraction that fixes each vertex of $\Gamma^*$. By Lemma \ref{lem:pqrsu ind}, there exists an element $b \in \mathbb{Z}_p$ such that $(\phi \gamma^{jp})\mid_{\Gamma^\ast}(k^*) = (k+jpb)^*$ for any integer $j \ge 1$ and $k\in\mathbb{Z}_{p}$. Since $k + jpb \equiv k \mod{p}$, we then have $(\phi \gamma^{jp})\mid_{\Gamma^\ast}(k^*) = k^*$ for each $k\in\mathbb{Z}_{p}$. In other words, $\gamma^{jp}(k^*) \in P_{k, \phi}$ for $k\in\mathbb{Z}_{p}$. More explicitly, writing $k^* = (i, x)$, we have $\gamma^{jp}(k^*) = \gamma^{jp}(i, x) = (i + jpa^l, x + jpt^lc^m) = (i, x + jpt^lc^m) \in P_{k, \phi}$. Since $p$ and $q$ are distinct primes and $t, c\in\mathbb{Z}_q^\ast$, we have $pt^lc^m\in\mathbb{Z}^*_q$ and so $\langle pt^lc^m \rangle= \mathbb{Z}_q$. In other words, $x + jpt^lc^m$ runs over all elements of $\mathbb{Z}_q$ as $j$ runs over all positive integers. Therefore, from $(i, x + jpt^lc^m) \in P_{k, \phi}$ we obtain $\{(i, y): y \in \mathbb{Z}_q\}\subseteq P_{k, \phi}$. On the other hand, by Theorem \ref{thm:order vt}, each fibre $P_{k, \phi}$ of $\phi$ has order $q$. Therefore,
$$
P_{k, \phi} = \{(i, y): y \in \mathbb{Z}_q\}
$$
for all $k^* = (i, x) \in V(\Gamma^*)$. Since $k^*$ is the unique vertex of $\Gamma^*$ in $P_{k, \phi}$, $k^*$ and $i$ determine each other uniquely. Thus $i$ runs over all elements of $\mathbb{Z}_p$ as $k^*$ runs over all vertices of $\Gamma^*$.
By Lemma \ref{lem:pqrsu cat prod}, $\Gamma$ is a spanning subgraph of $\Psi$, and moreover $\Gamma \ne \Psi$ since $t \not \in H(q, r)$. Hence $\Gamma^*$ is a subgraph of $\Psi$ and so we may consider the inclusion homomorphism $\delta: \Gamma^* \rightarrow \Psi$. Let $\pi: \Psi \rightarrow G(p,s), (i, x) \mapsto i$, be the projection from $\Psi$ to $G(p, s)$. Since the first coordinates of the vertices of $\Gamma^*$ run over all of $\mathbb{Z}_p$, as shown above, $\pi \delta: \Gamma^* \rightarrow G(p, s)$ is a surjective homomorphism. This together with the fact that $\Gamma^*$ and $G(p, s)$ have the same order implies that $\pi \delta$ is bijective. Hence $(\pi \delta)(\Gamma^*) \cong \Gamma^*$ and $(\pi \delta)(\Gamma^*)$ is a spanning subgraph of $G(p,s)$. We claim that $(\pi \delta)(\Gamma^*) = G(p, s)$. In fact, let $i, j \in \mathbb{Z}_p$ be any two adjacent vertices of $G(p, s)$, so that $j - i = a^{l_0} \in H(p,s)$ for some integer $l_0$. Then $i$ and $j$ each determines a unique vertex of $\Gamma^*$, say $h^* = (i, x)$ and $k^* = (j, y)$, respectively. The fibres of $\phi$ containing $h^*$ and $k^*$ are $P_{h, \phi}$ and $P_{k, \phi}$, respectively. Since $j - i = a^{l_0} \in H(p,s)$, each vertex $(i, z) \in P_{h, \phi}$ is adjacent in $\Gamma$ to at least one vertex of $P_{k, \phi}$, say $(j, z+t^{l_0} c)$. Thus $h^*$ and $k^*$ are adjacent in $\Gamma^*$. In other words, if two vertices of $G(p, s)$ are adjacent, then the corresponding vertices of $\Gamma^*$ are adjacent in $\Gamma^*$.
Hence $|E(\Gamma^*)| \ge |E(G(p, s))|$. On the other hand, $|E(\Gamma^*)| = |E((\pi \delta)(\Gamma^*))| \le |E(G(p, s))|$ as $(\pi \delta)(\Gamma^*)$ is a spanning subgraph of $G(p,s)$. Therefore, $|E(\Gamma^*)| = |E((\pi \delta)(\Gamma^*))| = |E(G(p, s))|$ and so $(\pi \delta)(\Gamma^*) = G(p, s)$. Consequently, $\Gamma^* \cong G(p, s)$. Moreover, from the discussion above we see that the fibres of $\phi$ are the sets $\{(i, x): x \in \mathbb{Z}_q\}$, $i \in \mathbb{Z}_p$, as claimed in (a). Similarly, one can prove that, if $|V(\Gamma^\ast)|=q$, then the statements in (b) hold.
$\Box$ \end{proof}

Recall that in the proof of Lemma \ref{lem:pqrsu cat prod} we proved that $H(q, u) = \cup_{l=1}^{k} t^l H(q,r)$. This implies that for any arc $(x,y)$ of $G(q,u)$ there exists an integer $l$ such that $y-x\in t^l H(q,r)$.
\bigskip

\begin{proof}\textbf{of Theorem \ref{thm:pqrsu}}~~
We prove (a) only since the proof of (b) is similar. Denote $\Psi := G(p, s) \times G(q, u)$.
\textsf{Sufficiency:} Suppose that there exists a homomorphism $\eta:G(p,s)\rightarrow G(q,u)$ such that $\eta(j)-\eta(i)\in t^lH(q,r)$ for every arc $(i,j)$ of $G(p,s)$ with $j-i=a^l$. Let $\Delta$ be the subgraph of $\Gamma$ induced by $\{(i,\eta(i)): i\in\mathbb{Z}_{p}\} \subset \mathbb{Z}_p \times \mathbb{Z}_q$. The definition of $\eta$ ensures that the map $(i,\eta(i)) \mapsto i$ from $V(\Delta)$ to $V(G(p,s)) = \mathbb{Z}_p$ is an isomorphism from $\Delta$ to $G(p,s)$. Since $\Gamma \rightarrow \Psi$ by inclusion (Lemma \ref{lem:pqrsu cat prod}) and $\Psi \rightarrow G(p, s)$ by projection (Lemma \ref{lem:prop2.1}), we have $\Gamma \rightarrow G(p,s) \cong \Delta$. This together with the inclusion homomorphism $\Delta \rightarrow \Gamma$ implies that $\Gamma \leftrightarrow \Delta$. Therefore, $\Gamma^* \cong \Delta^* \cong G(p,s)$ by Lemma \ref{lem:hom eq} and Corollary \ref{coro:prime order}.
\textsf{Necessity:} Suppose that $\Gamma^* \cong G(p,s)$. Then by Theorem \ref{thm:pqrsu core} there is a retraction $\phi:\Gamma\rightarrow\Gamma^\ast$ whose fibres are the sets $\{(i,x): x \in \mathbb{Z}_q\}$, $i \in \mathbb{Z}_p$. On the other hand, we have $\delta: \Gamma \rightarrow \Psi$ by inclusion (Lemma \ref{lem:pqrsu cat prod}) and $\pi: \Psi \rightarrow G(p,s), (i, x) \mapsto i$, by projection. Thus $\pi\delta:\Gamma\rightarrow G(p,s), (i, x) \mapsto i$, is a homomorphism whose set of fibres is identical to the set of fibres of $\phi$. As seen in (\ref{eq:fibre}), each fibre of $\phi$ contains exactly one vertex of $\Gamma^*$. Thus each fibre of $\pi\delta$ contains exactly one vertex of $\Gamma^*$. In other words, for each $i\in\mathbb{Z}_p$, $\Gamma^\ast$ contains exactly one vertex of the form $(i,x)$. Thus $\theta := (\pi\delta)\mid_{\Gamma^\ast}: V(\Gamma^\ast) \rightarrow V(G(p,s)), (i,x)\mapsto i$, is a bijection. Since $\pi\delta$ is a homomorphism, $\theta: \Gamma^\ast \rightarrow G(p,s)$ is a homomorphism. Since $|E(\Gamma^\ast)|=|E(G(p,s))|$ (as $\Gamma^\ast\cong G(p,s)$), it follows that $\theta$ is an isomorphism from $\Gamma^*$ to $G(p, s)$. Let $\rho: \Psi \rightarrow G(q,u), (i, x) \mapsto x$, be the projection from $\Psi$ to $G(q,u)$.
Then the projection $\rho\delta:\Gamma\rightarrow G(q,u), (i, x) \mapsto x$, is a homomorphism from $\Gamma$ to $G(q,u)$. Hence $\psi := (\rho\delta)\mid_{\Gamma^\ast}:\Gamma^\ast\rightarrow G(q,u), (i, x) \mapsto x$, is a homomorphism. Consequently, $\eta := \psi\theta^{-1}: G(p,s)\rightarrow G(q,u)$ is a homomorphism, and it maps each $i \in \mathbb{Z}_p$ to the unique $x(i) \in \mathbb{Z}_q$ such that $(i, x(i))$ is the unique vertex of $\Gamma^*$ contained in the fibre $\{(i, x): x \in \mathbb{Z}_q\}$ of $\phi$. If $(i, j)$ is an arc of $G(p,s)$, then $j = i + a^l$ for some integer $l$, and $(\theta^{-1}(i), \theta^{-1}(j))$ is an arc of $\Gamma^*$ (as $\theta^{-1}$ is an isomorphism from $G(p, s)$ to $\Gamma^*$) and hence an arc of $\Gamma$. Since $\theta^{-1}(i) = (i, x(i))$, it follows from the definition of $\Gamma$ that $\theta^{-1}(j) = (i + a^l, x(i) + t^l c^m)$ for some integer $m$. Therefore, $\eta(j) - \eta(i) = (x(i) + t^l c^m) - x(i) = t^l c^m \in t^l H(q, r)$ as required.
$\Box$ \end{proof}

\section{Cores of symmetric Maru\v{s}i\v{c}-Scapellato graphs}
\label{sec:cores ms}

In this section we determine the cores of symmetric Maru\v{s}i\v{c}-Scapellato graphs; the result is given in Theorem \ref{thm:pq ms cores}. Such graphs have order $pq$ for a Fermat prime $q$ and a prime factor $p$ of $q-2$. In \S\ref{subsec:ms} we give the definition of (general) Maru\v{s}i\v{c}-Scapellato graphs. In \S\ref{subsec:clq} and \S\ref{subsec:ind} we derive bounds on the clique number and the independence number, respectively, of a general (not necessarily symmetric) Maru\v{s}i\v{c}-Scapellato graph of order $pq$. These bounds turn out to be crucial to the proof of Theorem \ref{thm:pq ms cores}. In \S\ref{subsec:order core} we give necessary conditions for the core of a symmetric Maru\v{s}i\v{c}-Scapellato graph to have order $q$; these will be used to show that this occurs only in a certain very special case. After a brief discussion of the cores of two specific rank-three graphs in \S\ref{subsec:rank3}, we finally prove Theorem \ref{thm:pq ms cores} in \S\ref{subsec:proof cores ms}.

\subsection{Maru\v{s}i\v{c}-Scapellato graphs}
\label{subsec:ms}

Maru\v{s}i\v{c}-Scapellato graphs (\textit{MS graphs} for short) were introduced in \cite{MR1174460}. We adopt their definition and notation\footnote{$\Gamma(a,m,S,U)$ in Definition \ref{defn:ms} is the graph $X(a,m,S,U)$ in \cite[Definition 3.6]{MR1223702} and \cite[Definition 1.3]{MR1174460}, and is $F(2^a +1, m, S, U)$ in \cite[p.188]{MR1289072}, where this graph is called a Fermat graph.} from \cite[Definition 3.6]{MR1223702}.

\begin{definition}
\label{defn:ms}
\rm Let $a > 1$ be an integer, $m>1$ a divisor of $2^a-1$, $S=-S$ a (possibly empty) symmetric subset of $\mathbb{Z}_m^\ast$, $U$ a subset of $\mathbb{Z}_m$, and $w$ a primitive element of ${\rm GF}(2^a)$.
The \textit{Maru\v{s}i\v{c}-Scapellato graph} $\Gamma = \Gamma(a,m,S,U)$ is the graph with vertex set
\begin{equation*}
V(\Gamma) := {\rm PG}(1,2^a)\times \mathbb{Z}_m\quad \mbox{(with ${\rm PG}(1,2^a)$ identified with ${\rm GF}(2^a)\cup \{\infty\}$)}
\end{equation*}
such that $(\infty,r)\in V(\Gamma)$ has neighbourhood
$$
\Gamma((\infty,r)) := \{(\infty,r+s): s\in S\}\cup\{(x,r+u): x\in {\rm GF}(2^a), u\in U\}
$$
and $(x,r)\in V(\Gamma)$ (where $x\in {\rm GF}(2^a)$) has neighbourhood
$$
\Gamma((x,r)) := \{(x,r+s): s\in S\}\cup\{(\infty,r-u): u\in U\}\cup \{(x+w^i,-r+u+2i): i\in \mathbb{Z}_{2^a-1}, u\in U\}.
$$
\end{definition}

Obviously, $\Gamma = \Gamma(a,m,S,U)$ has valency $|S| + 2^a |U|$. Maru\v{s}i\v{c} and Scapellato \cite{MR1214891} proved that $\Gamma$ admits ${\rm SL}(2,2^{a})$ as a vertex-transitive group of automorphisms. Moreover, they showed that
\begin{equation}
\label{eq:Sig}
{\cal B} := \{B_x: x\in {\rm PG}(1,2^a)\},\,\; \mbox{where $B_x:=\{(x,r): r\in \mathbb{Z}_m\}$,}
\end{equation}
is an ${\rm SL}(2,2^{a})$-invariant partition of $V(\Gamma)$ such that the quotient graph $\Gamma_{{\cal B}}$ is the complete graph of order $2^{a} + 1$; that is, there is at least one edge of $\Gamma$ between any two blocks of ${\cal B}$. They proved further that $\Gamma$ is ${\rm SL}(2,2^{a})$-symmetric if $\Gamma=\Gamma(a,m,\emptyset,\left\lbrace u\right\rbrace)$ for some $u\in\mathbb{Z}_m$. An integer of the form $F_s := 2^{2^s}+1$, where $s\geq 0$ is an integer, is called a \textit{Fermat number}; if $F_s$ is a prime, then it is called a \textit{Fermat prime}.
The following result of Praeger, Wang and Xu \cite{MR1223702} determines all symmetric MS graphs whose order is a product of two distinct primes. (Note that the symmetric graphs $F(s), F'(s)$ of order $3q$ defined in \cite{MR1223693} (where $q = 2^{2^s} + 1$ is a Fermat prime with $s \ge 1$) are isomorphic to the MS graphs $\Gamma(2^s,3,\emptyset,\{0\})$ and $\Gamma(2^s,3,\emptyset,\{1,2\})$, respectively.)

\begin{theorem}[{\cite[3.7(b), 3.8 and 4.9(a)]{MR1223702}}]
\label{thm:sym ms}
Let $q=2^a+1$ be a Fermat prime, where $a=2^s$ with $s \ge 1$, and let $p$ be a prime divisor of $2^{a}-1$. Then an MS graph of order $pq$ is symmetric if and only if it is of the form $\Gamma = \Gamma(a,p,\emptyset,U)$, where either
\begin{equation*}
U=\{u\}
\end{equation*}
for some $u\in\mathbb{Z}_p$, or
\begin{equation}
\label{eq:uab}
U = U_{e, i} := \{i2^{ej}: 0 \leq j < d/e\}
\end{equation}
for some $i \in \mathbb{Z}_p^*$ and some divisor $e \ge 1$ of $\gcd(d, a)$ with $1 < d/e < p-1$, where $d$ is the order of $2$ in $\mathbb{Z}_p^*$. In the former case, $\Gamma\cong \Gamma(a,p,\emptyset,\{0\})$ and ${\rm val}(\Gamma)=2^{a}$; in the latter case, ${\rm val}(\Gamma)=2^{a}d/e$.
\end{theorem}

Note that
\begin{equation}
\label{eq:U}
1 < |U_{e, i}| = d/e < p-1
\end{equation}
and that $\Gamma(a,p,\emptyset,U_{e, i})$ is ${\rm SL}(2,2^{a})\rtimes \mathbb{Z}_{a/e}$-symmetric (see {\cite[Theorem 3.7(b)]{MR1223702}}).

\subsection{Cores of symmetric Maru\v{s}i\v{c}-Scapellato graphs}
\label{subsec:cores ms}

The following is the main result of this section. Its proof will be given in \S\ref{subsec:proof cores ms}.

\begin{theorem}
\label{thm:pq ms cores}
Let $a=2^s$, where $s \ge 1$ is an integer, let $p$ be a prime divisor of $2^{a}-1$, and let $q=2^a+1$ be a Fermat prime. Let $\Gamma = \Gamma(a, p, \emptyset, U)$ be a symmetric MS graph as described in Theorem \ref{thm:sym ms}, where $U = \{u\}$ for some $u \in \mathbb{Z}_p$ or $U = U_{e, i}$ as given in (\ref{eq:uab}). Then the following hold:
\begin{itemize}
\item[\rm (a)] if $pq=15$, then either $\Gamma=\Gamma(2,3,\emptyset,\{u\})$ and $\Gamma$ is a core, or $\Gamma=\Gamma(2,3,\emptyset,\{1,2\})$ and $\Gamma^\ast\cong K_5$;
\item[\rm (b)] if $pq > 15$ and $p$ is not a Fermat prime, then $\Gamma$ is a core;
\item[\rm (c)] if $pq > 15$ and $p=2^{2^{l}}+1$ is a Fermat prime with $0 \leq l < s-1$, then $\Gamma$ is a core;
\item[\rm (d)] if $pq > 15$, $p=2^{2^{s-1}}+1$ is a Fermat prime and $\Gamma = \Gamma(a, p, \emptyset, \{u\})$, then $\Gamma^\ast\cong K_p$;
\item[\rm (e)] if $pq > 15$, $p=2^{2^{s-1}}+1$ is a Fermat prime and $\Gamma = \Gamma(a, p, \emptyset, U_{e, i})$, then $\Gamma$ is a core.
\end{itemize}
\end{theorem}

\subsection{Assumption and notation}
\label{subsec:notation}

\textit{In the remainder of this section we assume that $s, a, p, q$ are as in Theorem \ref{thm:pq ms cores} and that $w$ is a primitive element of ${\rm GF}(2^{a})$, as used in Definition \ref{defn:ms}.}

For brevity, we set
\begin{equation}
\label{eq:Psi}
\Gamma_{S, U} := \Gamma(a, p, S, U),
\end{equation}
where $S = -S \subseteq \mathbb{Z}_p^*$ and $U \subseteq \mathbb{Z}_p$. Note that this MS graph has order $pq$ and vertex set ${\rm PG}(1, 2^a) \times \mathbb{Z}_p$, but it is not necessarily symmetric. As in \cite[Eq. (14)]{MR1214891}, for each $b \in {\rm GF}(2^{a})$, define
\begin{equation}
\label{eq:lambda}
\lambda_b((x,r))=\begin{cases}
(\infty, r), & x=\infty,\ r \in \mathbb{Z}_p\\
(x+b,r), & x\in {\rm GF}(2^{a}),\ r \in \mathbb{Z}_p.
\end{cases}
\end{equation}
Define \cite[Eq. (10) and (12)]{MR1214891}
\begin{equation}
\label{eq:rho}
\rho((x,r))=\begin{cases}
(x,r+1), & x\in\{\infty,0\},\ r \in \mathbb{Z}_p\\
(xw,r+1), & x\in {\rm GF}(2^{a})^\ast,\ r \in \mathbb{Z}_p.
\end{cases}
\end{equation}
Then $\lambda_b, \rho \in {\rm Aut}(\Gamma_{S, U})$ \cite{MR1214891}, and so
\begin{equation}
\label{eq:J}
J := \langle \rho^p\rangle \le {\rm Aut}(\Gamma_{S, U})
\end{equation}
and
\begin{equation}
\label{eq:H}
H := \{\lambda_b: b \in {\rm GF}(2^{a})\} \le {\rm Aut}(\Gamma_{S, U}).
\end{equation}
Note that
\begin{equation}
\label{eq:JH}
J \cong \mathbb{Z}_{(2^{a}-1)/p},\;\, H \cong \mathbb{Z}_2^{a},\;\, H\rtimes J = {\rm SL}(2, 2^a)_{\infty} \leq {\rm SL}(2, 2^a) \le {\rm Aut}(\Gamma_{S, U}).
\end{equation}
Recall that $\Gamma_{S, U}$ admits the ${\rm SL}(2, 2^a)$-invariant partition ${\cal B}$ defined in (\ref{eq:Sig}). Clearly, $\lambda_b$ fixes $B_\infty$ pointwise for each $b \in {\rm GF}(2^{a})$, and $\rho^{ip}$ fixes $B_\infty\cup B_0$ pointwise for each integer $i$. Therefore, $H\rtimes J$ fixes $B_\infty$ pointwise; that is, for every $r \in \mathbb{Z}_p$,
\begin{equation}
\label{eq:JH1}
H\rtimes J\leq {\rm Aut}(\Gamma_{S, U})_{(\infty,r)}.
\end{equation}
For each $d\in\mathbb{Z}_p$, denote by $\Gamma_{S, U}^{d}$ the subgraph of $\Gamma_{S, U}$ induced by
\begin{equation}
\label{eq:Vd}
V_d := \{(x,d): x\in {\rm GF}(2^{a})\}.
\end{equation}
In view of (\ref{eq:lambda}) and (\ref{eq:rho}), each element of $H\rtimes J$ fixes the second coordinate of every vertex of $\Gamma_{S, U}$. Therefore, $H\rtimes J$ fixes $V_d$ setwise.
\delete{
(More explicitly, a typical element $\lambda_b \rho^{ip}$ of $H\rtimes J$ maps $(x,d)\in V_d$ to $(xw^{ip}+b,d) \in V_d$, where $b \in {\rm GF}(2^a)$ and $i$ is an integer.)
}
Since $H\rtimes J \le {\rm Aut}(\Gamma_{S, U})$, it follows that $\Gamma_{S, U}^d$ admits $H \rtimes J$ as a group of automorphisms in its induced action on $V_d$. Moreover, $H$ is regular on $V_d$ (in particular, $\Gamma_{S, U}^d$ is vertex-transitive), and each element of $J$ fixes $(0,d) \in V_d$.
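As an illustration of the orders involved, consider $a = 4$ and $p = 5$ (so that $q = 17$ and $pq = 85$): then $|J| = (2^{4}-1)/5 = 3$, $|H| = 2^{4} = 16$ and $|H\rtimes J| = 48$, which is consistent with (\ref{eq:JH}) and (\ref{eq:JH1}), since by the orbit-stabiliser theorem the stabiliser of a vertex in the vertex-transitive action of ${\rm SL}(2,16)$ on the $85$ vertices of $\Gamma_{S, U}$ has order $|{\rm SL}(2,16)|/85 = 4080/85 = 48$.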
\subsection{What happens if the core of a symmetric MS graph has order $q$}
\label{subsec:order core}

\begin{lemma}
\label{thm:one vertex per block}
Let $\Gamma = \Gamma_{\emptyset, \{u\}}$ or $\Gamma_{\emptyset, U_{e, i}}$, where $u \in \mathbb{Z}_p$ and $U_{e, i}$ is given in (\ref{eq:uab}). If $|V(\Gamma^\ast)| = q$, then $\Gamma^\ast$ contains exactly one vertex from each block of ${\cal B}$.
\end{lemma}

\begin{proof}
Suppose to the contrary that $\Gamma^\ast$ contains more than one vertex from some block of ${\cal B}$. Since $\Gamma$ is vertex-transitive, without loss of generality we may assume that $(\infty,u),(\infty,v)\in V(\Gamma^\ast)\cap B_\infty$, where $u, v \in \mathbb{Z}_p$ with $u\neq v$. By the definition of $\Gamma$ (Definition \ref{defn:ms}), each block of ${\cal B}$ is an independent set of $\Gamma$. In particular, $(\infty,u)$ and $(\infty,v)$ are not adjacent in $\Gamma$ and so $\Gamma^\ast$ is not a complete graph. Since $\Gamma$ is symmetric by Theorem \ref{thm:sym ms}, so is $\Gamma^*$ by Theorem \ref{thm:sym cores}. Since $\Gamma^*$ has prime order $q$, it follows that $\Gamma^\ast\cong G(q,r)$ for some proper even divisor $r$ of $q-1$ and ${\rm Aut}(\Gamma^\ast) \cong \mathbb{Z}_{q}\rtimes H(q, r)$ is a Frobenius group in its action on $V(\Gamma^\ast)$ (see the discussion below Definition \ref{defn:pr}).
Let $\phi:\Gamma\rightarrow \Gamma^\ast$ be a retraction. Since each $\lambda_b \in H$ fixes $B_\infty$ pointwise, $(\phi\lambda_b)\mid_{\Gamma^\ast}\in {\rm Aut}(\Gamma^\ast)$ fixes both $(\infty,u)$ and $(\infty,v)$. Since ${\rm Aut}(\Gamma^\ast)$ is a Frobenius group on $V(\Gamma^\ast)$, it follows that $(\phi\lambda_b)\mid_{\Gamma^\ast}=1_{{\rm Aut}(\Gamma^\ast)}$ is the identity element of ${\rm Aut}(\Gamma^\ast)$. In other words, $\lambda_b$ must map each vertex of $\Gamma^\ast$ to a vertex of $\Gamma$ in the same fibre of $\phi$; that is, the $H$-orbit $H((y, z))$ containing $(y, z) \in V(\Gamma^\ast)$ is a subset of $\phi^{-1}((y, z))$. Since $|B_\infty|=p<q=|V(\Gamma^\ast)|$, there exists at least one vertex $(y,z)\in V(\Gamma^\ast)$ with $y\neq\infty$. By (\ref{eq:lambda}), it can be verified that $H$ is semiregular on $V(\Gamma) \setminus B_{\infty}$. Since $(y, z) \in V(\Gamma) \setminus B_{\infty}$, it follows that $p<2^{a}=|H|=|H((y,z))|\leq |\phi^{-1}((y,z))|$. However, since $|V(\Gamma^\ast)|=q$ by our assumption, we have $|\phi^{-1}((y,z))|=p$ by Theorem \ref{thm:order vt}. This contradiction shows that $\Gamma^\ast$ contains at most one vertex from each block of ${\cal B}$. This together with $|V(\Gamma^\ast)|=|{\cal B}|=q$ implies that $\Gamma^*$ contains exactly one vertex from each block of ${\cal B}$.
$\Box$ \end{proof}

It is well known that the Fermat numbers $F_t = 2^{2^t} + 1$ are pairwise coprime \cite{MR1866957} and satisfy the relation \cite{MR1866957}
\begin{equation}
\label{eq:Fermat recurrence}
F_t = F_0 F_1\cdots F_{t-1}+2,\;\, t \geq 1.
\end{equation}
In particular, since $q = F_s$ by our assumption, $q-2 = F_0 F_1\cdots F_{s-1}$.
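For example, $F_0 = 3$, $F_1 = 5$, $F_2 = 17$ and $F_3 = 257$, and indeed $F_2 = F_0F_1 + 2 = 17$ and $F_3 = F_0F_1F_2 + 2 = 257$. Thus, if $q = F_2 = 17$ (so that $s = 2$ and $a = 4$), then $q - 2 = 15 = F_0F_1$ and the possible values of the prime $p$ are $3$ and $5$.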
\begin{theorem}
\label{thm:ms core complete}
Let $\Gamma = \Gamma_{\emptyset, \{u\}}$ or $\Gamma_{\emptyset, U_{e, i}}$, where $u \in \mathbb{Z}_p$ and $U_{e, i}$ is given in (\ref{eq:uab}). If $pq > 15$ and $|V(\Gamma^\ast)| = q$, then $\Gamma^\ast\cong K_{q}$.
\end{theorem}

\begin{proof}
Suppose that $pq > 15$ and $|V(\Gamma^\ast)| = q$ but $\Gamma^\ast \not \cong K_{q}$. Since $pq > 15$ and $p < q = 2^{a} + 1 = F_s$, we have $s \ge 2$. Since $\Gamma$ is symmetric, by Theorem \ref{thm:sym cores} and the discussion below Definition \ref{defn:pr}, $\Gamma^\ast\cong G(q,r)$ for some proper even divisor $r$ of $q-1$; moreover, ${\rm Aut}(\Gamma^\ast)$ is a Frobenius group on $V(\Gamma^\ast)$, as seen in the proof of Lemma \ref{thm:one vertex per block}. Let $\phi:\Gamma\rightarrow \Gamma^\ast$ be a retraction.
Since $p$ is a prime divisor of $2^{a}-1 = F_{s} - 2$, by (\ref{eq:Fermat recurrence}) and the fact that distinct Fermat numbers are coprime, we know that $p$ is a prime factor of exactly one $F_l$, $0 \le l \leq s-1$. Thus $p\leq F_{s-1}$, and so by (\ref{eq:Fermat recurrence}) and the fact that $s \ge 2$, we have $|B_\infty \cup B_0|=2p\leq 2F_{s-1}<F_0F_1\cdots F_{s-1}+2= F_s = q$. Since $\phi$ has $q$ fibres, it follows that at least one fibre of $\phi$ is disjoint from $B_\infty \cup B_0$. In other words, there exists a vertex $(x,t)\in V(\Gamma^\ast)$ with $x\in {\rm GF}(2^{a})^\ast$ such that $\phi^{-1}((x,t))\cap(B_\infty \cup B_0)=\emptyset$. Since $\Gamma$ is vertex-transitive, every vertex of $\Gamma$ is contained in some copy of $\Gamma^*$. In particular, for every vertex $(y,z)\in \phi^{-1}((x,t))$, there is a core $\Gamma^\#\cong\Gamma^\ast$ of $\Gamma$ such that $(y,z)\in V(\Gamma^\#)$. (Note that $\Gamma^\#$ depends on $(y, z)$, though all of these cores are isomorphic to each other.)
We claim that $\phi\mid_{\Gamma^\#}$ is an isomorphism from $\Gamma^\#$ to $\Gamma^\ast$. In fact, since $\phi$ is a homomorphism, $\phi\mid_{\Gamma^\#}: \Gamma^\#\rightarrow\Gamma^\ast$ is a homomorphism. Moreover, $\phi\mid_{\Gamma^\#}$ is surjective, for otherwise the composition of a retraction from $\Gamma$ to $\Gamma^\#$ with $\phi\mid_{\Gamma^\#}$ would be a homomorphism from $\Gamma$ to a proper subgraph of $\Gamma^*$, contradicting the assumption that $\Gamma^*$ is a core of $\Gamma$. Since $\Gamma^\#\cong\Gamma^\ast$, there is an isomorphism $\delta:\Gamma^\ast\rightarrow\Gamma^\#$. Then $(\delta\phi)\mid_{\Gamma^\#}: \Gamma^\#\rightarrow\Gamma^\#$ is an endomorphism and so an automorphism of $\Gamma^\#$, as $\Gamma^\#$ is a core. In particular, $(\delta\phi)\mid_{\Gamma^\#}$ is a bijection from $V(\Gamma^\#)$ to itself. Therefore, $\phi\mid_{\Gamma^\#}$ is a bijection from $V(\Gamma^\#)$ to $V(\Gamma^\ast)$ and hence an isomorphism from $\Gamma^\#$ to $\Gamma^\ast$. In particular, each fibre of $\phi$ contains exactly one vertex of $\Gamma^\#$.
Define $\eta: V(\Gamma^\ast)\rightarrow V(\Gamma^\#)$ to be the inverse of the isomorphism $\phi\mid_{\Gamma^\#}: V(\Gamma^\#) \rightarrow V(\Gamma^\ast)$. Then $(y,z) = \eta((x,t))$ and for each $(j,k)\in V(\Gamma^\ast)$, $\eta((j,k))\in V(\Gamma^\#)\cap \phi^{-1}((j,k))$ is the unique vertex of $\Gamma^{\#}$ contained in the fibre $\phi^{-1}((j,k))$. Define $\psi := \eta\phi:\Gamma\rightarrow\Gamma^\#$. Then $\psi$ is a retraction whose set of fibres is identical to the set of fibres of $\phi$. More specifically, for $(j, k) \in V(\Gamma^*)$, the fibre $\psi^{-1}(\eta(j,k))$ of $\psi$ is equal to the fibre $\phi^{-1}((j,k))$ of $\phi$. In particular, $\psi^{-1}((y,z))=\phi^{-1}((x,t))$.
Applying Lemma \ref{thm:one vertex per block} to $\Gamma^\#$, we know that $\Gamma^\#$ contains exactly one vertex from each block of ${\cal B}$. Clearly, $(y,z)$ is the unique vertex of $\Gamma^\#$ in $B_y$. Let $(\infty,c),(0,d)$ be the vertices of $\Gamma^\#$ contained in $B_{\infty}, B_0$, respectively, where $c,d\in\mathbb{Z}_p$. Since by (\ref{eq:rho}) $J$ fixes $B_\infty\cup B_0$ pointwise, for any $\gamma\in J$, $(\psi\gamma)\mid_{\Gamma^\#}\in {\rm Aut}(\Gamma^\#)$ fixes both $(\infty,c)$ and $(0,d)$. Since ${\rm Aut}(\Gamma^\#)$ is a Frobenius group on $V(\Gamma^\#)$ (as $\Gamma^\# \cong \Gamma^* \cong G(q, r)$), it follows that $(\psi\gamma)\mid_{\Gamma^\#}=1_{{\rm Aut}(\Gamma^\#)}$. In other words, $\gamma$ maps each vertex of $V(\Gamma^\#)$ to a vertex of $\Gamma$ in the same fibre of $\psi$. Since this holds for every $\gamma\in J$, the $J$-orbit $J((y,z))$ containing $(y, z)$ satisfies $J((y,z))\subseteq \psi^{-1}((y,z))=\phi^{-1}((x,t))$. Since this holds for every $(y,z)\in \phi^{-1}((x,t))$, $\phi^{-1}((x,t))$ is a (disjoint) union of $J$-orbits and so $|J|$ divides $|\phi^{-1}((x,t))|$. Note that $|\phi^{-1}((x,t))|=p$ by Theorem \ref{thm:order vt} and our assumption $|V(\Gamma^*)|=q$. On the other hand, since $\phi^{-1}((x, t))\cap(B_\infty \cup B_0)=\emptyset$, for each $(y,z)\in \phi^{-1}((x,t))$ we have $y\neq\infty,0$ and thus by (\ref{eq:rho}), $|J((y,z))|=|J| = (2^{a}-1)/p$. Therefore, $(2^{a}-1)/p$ is a divisor of $p$, implying that either $(2^{a}-1)/p=p$ or $(2^{a}-1)/p=1$.
If $(2^{a}-1)/p=1$, then by (\ref{eq:Fermat recurrence}), $p = 2^{a}-1=F_s-2=F_0F_1\cdots F_{s-1}$, which forces $s=1$, $a = 2$, $p=F_0=3$ and $q=5$. However, this contradicts the fact that $s \ge 2$. If $(2^{a}-1)/p=p$ (and $s \ge 2$), then by (\ref{eq:Fermat recurrence}), $p^2 = 2^{a}-1 = F_0F_1\cdots F_{s-1}$. However, this cannot happen since $F_0F_1\cdots F_{s-1}$ contains two distinct prime factors, namely $F_0=3$ and $F_1=5$, but $p$ has only one prime factor.
$\Box$ \end{proof}

\subsection{Bounding the clique number}
\label{subsec:clq}

\begin{theorem}
\label{thm:clq>p}
The clique number of $\Gamma_{S, U}$ satisfies $\omega(\Gamma_{S, U})\geq p$, with equality only when $p=2^{2^l}+1$ for some $0\leq l\leq s-1$.
\end{theorem}

\begin{proof}
By Definition \ref{defn:ms}, $\Gamma_{S, U}$ contains $\Gamma_{\emptyset, \{u\}}$ as a spanning subgraph, where $u \in U$, and by Theorem \ref{thm:sym ms} this spanning subgraph is isomorphic to $\Gamma := \Gamma_{\emptyset, \{0\}}$. Thus $\omega(\Gamma_{S, U}) \geq \omega(\Gamma)$. We first prove that the result is true for $\Gamma$.
Since $p$ divides $2^{a}-1 = F_s - 2 = F_0F_1\cdots F_{s-1}$ (by (\ref{eq:Fermat recurrence})) and all Fermat numbers are pairwise coprime, $p$ divides exactly one $F_l$, $0\leq l\leq s-1$. Since ${\rm GF}(2^{a})^\ast = \langle w\rangle$ has order $2^{a}-1=F_0F_1\cdots F_{s-1}$, $w^{F_lF_{l+1}\cdots F_{s-1}}$ has order $F_0F_1\cdots F_{l-1}$ in ${\rm GF}(2^{a})^\ast$. Hence the multiplicative group of the subfield ${\rm GF}(2^{2^l})$ of ${\rm GF}(2^{a})$ is given by ${\rm GF}(2^{2^l})^\ast=\langle w^{F_lF_{l+1}\cdots F_{s-1}}\rangle$. Set $C := \{(x,0): x \in {\rm GF}(2^{2^l})\}$. Let $x,y \in{\rm GF}(2^{2^l})$ be distinct elements. Then $y=x+w^{iF_lF_{l+1}\cdots F_{s-1}}$ for some $i$, and so $(y,0)=(x+w^{iF_lF_{l+1}\cdots F_{s-1}},2iF_lF_{l+1}\cdots F_{s-1})$ as $F_l \equiv 0 \mod{p}$. Since $w^{iF_lF_{l+1}\cdots F_{s-1}}\in {\rm GF}(2^{2^l})^\ast$, by Definition \ref{defn:ms}, $(x,0)$ and $(y,0)$ are adjacent in $\Gamma$. Thus $C$ is a clique of $\Gamma$ of size $2^{2^l}$. In addition, by Definition \ref{defn:ms}, $(\infty,0)$ is adjacent to every vertex of $C$ in $\Gamma$. Therefore, $\{(\infty,0)\}\cup C$ is a clique of $\Gamma$ of size $2^{2^l}+1$, and consequently $\omega(\Gamma) \geq 2^{2^l}+1$. Since $p$ is a factor of $F_l = 2^{2^l}+1$, we then have $\omega(\Gamma)\geq p$, with equality only when $p=2^{2^l}+1$. Therefore, $\omega(\Gamma_{S, U}) \geq \omega(\Gamma) \ge p$. Moreover, if $\omega(\Gamma_{S, U}) = p$, then $\omega(\Gamma) = p$ and so $p=2^{2^l}+1$.
$\Box$ \end{proof}

\begin{theorem}
\label{thm:clq<q}
If $S=\emptyset$ then $\omega(\Gamma_{S, U}) \leq \frac{2^{a}}{p-1}|U|+1$, and if $S\neq\emptyset$ then $\omega(\Gamma_{S, U})\leq \frac{2^{a}}{p-1}|U|+p-1$.
\end{theorem}

\begin{proof}
Denote $\Psi := \Gamma_{S, U}$ and $\Psi^d := \Gamma_{S, U}^d$ for each $d\in\mathbb{Z}_p$. Fix $r\in \mathbb{Z}_p \setminus U$. Denote $\Gamma := \Gamma_{\emptyset, \{r\}}$ and $\Gamma^d := \Gamma_{\emptyset, \{r\}}^d$. Then $\Psi$ is edge-disjoint from $\Gamma$, and $\Psi^d$ is edge-disjoint from $\Gamma^d$. Hence any clique of $\Gamma^d$ is an independent set of $\Psi^d$, and consequently $\omega(\Gamma^d)\leq \alpha(\Psi^d)$. Since $\Gamma$ is vertex-transitive, every vertex of it is contained in a maximum clique. In particular, for each $d\in\mathbb{Z}_p$, $(\infty,d-r)$ is contained in a maximum clique of $\Gamma$. Since the neighbourhood of $(\infty,d-r)$ in $\Gamma$ is $V_d$, such a maximum clique must be a subset of $\{(\infty,d-r)\}\cup V_{d}$, and therefore $\omega(\Gamma^d) = \omega(\Gamma) - 1$. Since $\omega(\Gamma)\geq p$ by Theorem \ref{thm:clq>p}, it follows that $p-1\leq \omega(\Gamma^d)\leq \alpha(\Psi^d)$.
On the other hand, since $\Psi^d$ is vertex-transitive, by Theorem \ref{thm:ao} we have $\alpha(\Psi^d) \omega(\Psi^d) \le |V(\Psi^d)|$. Therefore, $\omega(\Psi^d)\leq |V_d|/\alpha(\Psi^d) \leq 2^{a}/(p-1)$. Now let $C$ be a fixed maximum clique of $\Psi$ containing $(\infty,0)$ (such a maximum clique exists since $\Psi$ is vertex-transitive), and let $N$ be the set of elements $d \in \mathbb{Z}_p$ such that $C\cap V_d \neq \emptyset$. Since $(\infty,0)$ is not adjacent in $\Psi$ to any vertex of $V_d$ whenever $d \not \in U$, we have
$$
N = \{u \in U: C\cap V_u \neq \emptyset\}.
$$
Since $|C\cap V_u| \le \omega(\Psi^u) \leq 2^{a}/(p-1)$ as proved above, we obtain
\begin{equation}
\label{eq:C1}
|C|=|C\cap B_\infty|+\sum_{u \in N} |C\cap V_u| \leq |C\cap B_\infty|+\frac{2^{a}}{p-1} |N|.
\end{equation}
Set
$$
T_1:= \{(\infty,z)\in C\cap B_\infty: z+r\in U\},\quad T_2:= \{(\infty,z)\in C\cap B_\infty: z+r\notin U\}.
$$
Then $|C\cap B_\infty|=|T_1|+|T_2|$. Since $r \not \in U$, by Definition \ref{defn:ms} no vertex $(\infty, z)\in C\cap B_\infty$ is adjacent to any vertex in $V_{z+r}$. Thus, for each $(\infty, z)\in T_1$, we have $C\cap V_{z+r} =\emptyset$ and so $z+r \not \in N$. Consequently $|N| \le |U|-|T_1|$. Plugging this into (\ref{eq:C1}) and noting $2^{a}/(p-1) > 1$, we obtain
$$
\omega(\Psi) = |C| \leq |T_1| + |T_2| + \frac{2^{a}}{p-1}(|U| - |T_1|) \leq \frac{2^{a}}{p-1}|U| + |T_2|.
$$
If $S = \emptyset$, then $C\cap B_\infty = \{(\infty, 0)\}$. Since $0+r \notin U$, we then have $|T_2|=1$ and so $\omega(\Psi)\leq \frac{2^{a}}{p-1}|U|+1$, as required. Assume now that $S \neq \emptyset$. If $|C\cap B_\infty| \le p-1$, then $|T_2| \leq p-1$. If $|C\cap B_\infty|=p$, then $C\cap B_\infty = B_{\infty}$ and so $T_1 \ne \emptyset$, implying $|T_2| = |C\cap B_\infty|-|T_1| \leq p-1$. In either case we obtain $\omega(\Psi) \leq \frac{2^{a}}{p-1}|U|+p-1$.
$\Box$ \end{proof}

\subsection{Bounding the independence number}
\label{subsec:ind}

The purpose of this subsection is to give an upper bound on $\alpha(\Gamma_{\emptyset,\{0\}})$ under the additional assumption that
\begin{equation}
\label{eq:pF}
\mbox{$p = F_l = 2^{2^l}+1$ is a Fermat prime for some $0 \leq l \leq s-1$}.
\end{equation}
Denote
\begin{equation}
\label{eq:n}
n := F_{l+1} \cdots F_{s-1},
\end{equation}
\begin{equation}
\label{eq:C}
C := \{(x,0): x\in {\rm GF}(2^{2^l})\}
\end{equation}
and
$$
V_0 := \{(x, 0): x\in {\rm GF}(2^{a})\}.
$$
As seen at the end of \S\ref{subsec:notation}, $H$ fixes $V_0$ setwise and $\Gamma_{\emptyset,\{0\}}^0$ admits $H$ as a group of automorphisms in its induced action on $V_0$. Moreover, one can see that $\Gamma_{\emptyset,\{0\}}^0$ is $H$-vertex-transitive. The main result of this subsection is as follows.

\begin{theorem}
\label{thm:pq ms ind<q}
Suppose that $p$ is as in (\ref{eq:pF}) and let $n$ and $C$ be as defined in (\ref{eq:n}) and (\ref{eq:C}), respectively. Then $\alpha(\Gamma_{\emptyset,\{0\}})\leq q$, with equality only if $l = s-1$ (that is, $p=2^{2^{s-1}}+1$).
\end{theorem}

Denote by
$$
\lambda_b(C) = \{(x+b,0): x\in {\rm GF}(2^{2^l})\}
$$
the image of $C$ under $\lambda_b \in H$. We need the following lemma in the proof of Theorem \ref{thm:pq ms ind<q}.
\begin{lemma}
\label{lem:blocks}
Suppose that $p$ is as in (\ref{eq:pF}) and let $n$ and $C$ be as defined in (\ref{eq:n}) and (\ref{eq:C}), respectively. Let $h$ be an integer with $1\leq h\leq F_{0}F_1 \cdots F_{l-1}$. Then the following hold:
\begin{itemize}
\item[\rm (a)] $C$ is a block of imprimitivity for $H$ in its action on $V_0$ and is a $(p-1)$-clique of $\Gamma_{\emptyset,\{0\}}^0$ (hence so is $\lambda_{b}(C)$ for each $\lambda_b \in H$). Moreover, $C \cap \lambda_{w^{hn}}(C) = \emptyset$.
\item[\rm (b)] For $1\leq j \leq q-2$ such that $(w^j,0)\in\lambda_{w^{hn}}(C)$ but $w^j \neq w^{hn}$, we have $j \not \equiv hn \mod{p}$.
\item[\rm (c)] For $1\leq i, j \leq q-2$ such that $(w^i,0),(w^j,0)\in\lambda_{w^{hn}}(C)$ but $w^i\neq w^j$, we have $i \not \equiv j \mod{p}$.
\end{itemize}
\end{lemma}

\begin{proof}
(a) Denote $\Gamma^0 := \Gamma_{\emptyset,\{0\}}^0$ and define
$$
L := \{\lambda_b: b \in{\rm GF}(2^{2^l})\}.
$$
Then $L \leq H \leq {\rm Aut}(\Gamma_{\emptyset,\{0\}})_{(\infty,0)}$ by (\ref{eq:JH1}). It can be seen that the $L$-orbit $L((0,0))$ containing $(0,0)$ is exactly $C$. By Theorem \ref{thm:clq>p} and its proof, $C$ is a $(p-1)$-clique of $\Gamma^0$. Since each $\lambda_b \in H$ induces an automorphism of $\Gamma^0$, $\lambda_{b}(C)$ is also a $(p-1)$-clique of $\Gamma^0$. Since $H$ is abelian, $L$ is a normal subgroup of $H$. Since $H$ is transitive on $V(\Gamma^0)$, it follows that the $L$-orbit $C$ is a block of imprimitivity for $H$ in its action on $V_0$, and hence so is $\lambda_{b}(C)$ for each $\lambda_b \in H$. Since $p = F_l$ is a Fermat prime and distinct Fermat numbers are coprime, $hn$ is not a multiple of $F_{l} F_{l+1} \cdots F_{s-1}$. Hence $w^{hn} \not \in {\rm GF}(2^{2^l})$ and so $(w^{hn},0)\notin C$. Therefore, $\lambda_{w^{hn}}(C) \neq C$, which implies $C\cap\lambda_{w^{hn}}(C)=\emptyset$.
(b) Since $(w^j,0)\in\lambda_{w^{hn}}(C)$, we have $(w^j,0)=(x+w^{hn},0)$ for some $x\in {\rm GF}(2^{2^l})^*$. Since $p=F_l$ and $x\in {\rm GF}(2^{2^l})^*$, we have $x=w^{tpn}$ for some integer $t$ with $1 \leq t \leq F_{0}F_1 \cdots F_{l-1}$. Thus $w^j=w^{hn}+w^{tpn}$ and so $w^{j-hn} = 1+w^{(tp-h)n}$ (where $1$ is the multiplicative identity of ${\rm GF}(2^a)$). Since $w^{(tp-h)n} \in {\rm GF}(2^{2^{l+1}})$, $1+w^{(tp-h)n}\in {\rm GF}(2^{2^{l+1}})$. Hence $w^{j-hn}=1+w^{(tp-h)n}=w^{kn}$ for some integer $k$ with $1 \leq k \leq F_{0}F_1 \cdots F_{l}$, and so $j-hn \equiv kn \mod{(q-2)}$. Since $p = F_l$ is a divisor of $q-2$, we then have $j-hn \equiv kn \mod{p}$. Note that $w^{kn} \ne 1$ as $1 < kn \le q-2$. Suppose by way of contradiction that $j\equiv hn\mod{p}$. Then $kn \equiv 0 \mod{p}$. However, $n$ is coprime to $p$, as distinct Fermat numbers are coprime. Hence $k \equiv 0 \mod{p}$ and so $w^{kn} \in {\rm GF}(2^{2^{l}})^*$. Consequently, $w^{(tp-h)n} = w^{kn} - 1 \in {\rm GF}(2^{2^{l}})^*$ and therefore $(tp-h)n\equiv 0\mod{pn}$. It follows that $p$ divides $h$, but this cannot happen as $1 \leq h \leq F_{0}F_1 \cdots F_{l-1} = F_l - 2 = p-2$. This contradiction shows that $j \not \equiv hn \mod{p}$.
(c) Since $(w^i,0),(w^j,0)\in\lambda_{w^{hn}}(C)$, we have $w^i=w^{hn}+x$ and $w^j=w^{hn}+y$ for some $x,y\in {\rm GF}(2^{2^{l}})$.
Since $hn$ is a multiple of $F_{l+1} \cdots F_{s-1}$ ($=n$) but not a multiple of $F_{l} F_{l+1} \cdots F_{s-1}$, $w^{hn}$ is an element of ${\rm GF}(2^{2^{l+1}})$ but not of ${\rm GF}(2^{2^{l}})$. Thus $w^i$ and $w^j$ are elements of ${\rm GF}(2^{2^{l+1}})$ but not of ${\rm GF}(2^{2^{l}})$. Therefore, $w^i=w^{h'n}$ for some $1\leq h' \leq F_{0}F_1 \cdots F_{l-1}$, yielding $i = h' n$. Since $(w^i,0)\in\lambda_{w^{hn}}(C) \cap \lambda_{w^{i}}(C)$, by part (a), $\lambda_{w^{hn}}(C)=\lambda_{w^{i}}(C)=\lambda_{w^{h' n}}(C)$ and hence $(w^j,0)\in \lambda_{w^{h' n}}(C)$. Thus, by part (b), $j\not\equiv h' n \mod{p}$, that is, $j \not\equiv i \mod{p}$.
$\Box$ \end{proof}

We also need the following known result in the proof of Theorem \ref{thm:pq ms ind<q}.

\begin{lemma}[{\cite[Corollary 3]{MR1322111}}]
\label{lem:ind num}
Let $\Psi$ be a graph with minimum valency $\delta(\Psi)$, and let $r$ be such that $r \geq \alpha_{\Psi}(v)$ for every $v \in V(\Psi)$, where $\alpha_{\Psi}(v)$ is the independence number of the subgraph of $\Psi$ induced by the neighbourhood of $v$. Then
$$
\alpha(\Psi) \leq \frac{r |V(\Psi)|}{r+\delta(\Psi)}.
$$
\end{lemma}
\bigskip

\begin{proof}\textbf{of Theorem \ref{thm:pq ms ind<q}}~~
Denote $\Gamma := \Gamma_{\emptyset,\{0\}}$ and $\Gamma^0 := \Gamma_{\emptyset,\{0\}}^0$. By Lemma \ref{lem:blocks}(a), both $C$ and $\lambda_{w^{n}}(C)$ are $(p-1)$-cliques of $\Gamma^0$, and $C\cap \lambda_{w^{n}}(C)=\emptyset$. Thus, for any $(w^j,0)\in\lambda_{w^{n}}(C)$, we have $(w^j,0) \not \in C$ and so $w^j\not\in {\rm GF}(2^{2^{l}})$. Since $(w^j,0)\in\lambda_{w^{n}}(C)$, $w^j=w^n+x$ for some $x\in {\rm GF}(2^{2^l})$. Since $n = F_{l+1} \cdots F_{s-1}$, $w^n\in {\rm GF}(2^{2^{l+1}})$ and so $w^j\in{\rm GF}(2^{2^{l+1}})$. Hence $j = in$ for some $1\leq i\leq F_0 F_1 \cdots F_l$. Since $p=F_l$ and $n$ are coprime, if $j\equiv 0\mod{p}$, then $p$ divides $i$, say $i = i' p$, and hence $w^j=w^{in}=w^{i'F_{l}F_{l+1}\cdots F_{s-1}} \in {\rm GF}(2^{2^{l}})$, a contradiction. Therefore, $j\not\equiv 0\mod{p}$. On the other hand, $|\lambda_{w^{n}}(C)| = |C| = p - 1$ and by Lemma \ref{lem:blocks}(c), for distinct $(w^i,0),(w^j,0) \in \lambda_{w^{n}}(C)$ we have $i\not\equiv j\mod{p}$. Therefore, for each integer $d$ with $1\leq d \leq p-1$, $\lambda_{w^{n}}(C)$ contains exactly one vertex $(w^j,0)$ such that $j\equiv d\mod{p}$.
By the definition of $J$ (see (\ref{eq:rho}) and (\ref{eq:J})), $J$ fixes $V_0$ setwise and $J\leq {\rm Aut}(\Gamma^0)_{(0,0)}$ (see the discussion around (\ref{eq:JH1})). Moreover, for each $(w^j,0)\in \lambda_{w^{n}}(C)$ and $\rho^{tp} \in J$ with $1\leq t\leq (2^{a}-1)/p$, $\rho^{tp}((w^j,0))=(w^{j+tp},0)$. Thus the $J$-orbit containing $(w^j,0)$ is
$$
J((w^j,0)) = \{(w^{j+tp}, 0): 1\leq t\leq (2^{a}-1)/p\}.
$$
It can be verified that, for $1\leq t_1, t_2\leq (2^{a}-1)/p$ with $t_1\neq t_2$, we have $w^{j+t_1 p} \ne w^{j+t_2p}$. Hence $|J((w^j,0))| = (2^{a}-1)/p$.
Since $\lambda_{w^{n}}(C)$ is a $(p-1)$-clique of $\Gamma^0$ and $J \leq {\rm Aut}(\Gamma^0)$, both $\rho^{t_1p}(\lambda_{w^{n}}(C)) = \{(w^{j+t_1 p},0): (w^j,0)\in \lambda_{w^{n}}(C)\}$ and $\rho^{t_2p}(\lambda_{w^{n}}(C)) = \{(w^{j+t_2 p},0): (w^j,0)\in \lambda_{w^{n}}(C)\}$ are $(p-1)$-cliques of $\Gamma^0$. We claim that, for $t_1 \ne t_2$,
\begin{equation}
\label{eq:disj}
\rho^{t_1p}(\lambda_{w^{n}}(C))\cap\rho^{t_2p}(\lambda_{w^{n}}(C))=\emptyset.
\end{equation}
Suppose otherwise. Then there are $(w^i,0),(w^j,0)\in \lambda_{w^{n}}(C)$ such that $w^{i+t_1p} = w^{j+t_2p}$. Since $w^{j+t_1p}\neq w^{j+t_2p}$ as seen in the previous paragraph, we have $i\neq j$ and so $(w^i,0)$ and $(w^j,0)$ are distinct elements of $\lambda_{w^{n}}(C)$. Thus $i\not\equiv j \mod{p}$ by what we proved in the first paragraph. Since $p$ divides $2^a - 1$, it follows that $i+t_1p\not\equiv j+t_2p \mod{(2^a - 1)}$ and hence $(w^{i+t_1p},0)\neq (w^{j+t_2p},0)$, which is a contradiction. This completes the proof of (\ref{eq:disj}).
Since the neighbourhood of $(0, 0)$ in $\Gamma^0$ is $\Gamma^{0}((0,0)) = \{(w^{tp}, 0): 1\leq t\leq (2^{a}-1)/p\}$, by (\ref{eq:disj}), $\{\rho^{tp}(\lambda_{w^{n}}(C)): 1\leq t\leq (2^{a}-1)/p\}$ is a partition of $V_0 \setminus (\Gamma^{0}((0,0)) \cup \{(0,0)\})$. Since $\Gamma^0$ is vertex-transitive, each of its vertices is contained in a maximum independent set. Choose $I$ to be a maximum independent set of $\Gamma^0$ containing $(0,0)$. Then $I$ and $\Gamma^{0}((0,0))$ are disjoint. Since $\rho^{tp}(\lambda_{w^{n}}(C))$ is a clique for each $\rho^{tp}\in J$, it contains at most one vertex of $I$. Since these $(2^{a}-1)/p$ cliques form a partition of $V_0 \setminus (\Gamma^{0}((0,0)) \cup \{(0,0)\})$ as shown above, it follows that $\alpha(\Gamma^0) = |I| \leq 1+((2^{a}-1)/p)$. Since $p = F_l$, by (\ref{eq:Fermat recurrence}), $2^a - 1 = F_s - 2 = F_0 \cdots F_{l-1} p F_{l+1} \cdots F_{s-1} = (p-2)pF_{l+1} \cdots F_{s-1}$, and hence $p-1 \leq 1+((2^{a}-1)/p)$ with equality if and only if $l = s-1$.
Denote by $\alpha_{\Gamma}(x, r)$ the independence number of the subgraph of $\Gamma$ induced by the neighbourhood of $(x, r) \in {\rm PG}(1, 2^a) \times \mathbb{Z}_p$ in $\Gamma$. Since $\Gamma$ is vertex-transitive and the neighbourhood of $(\infty, 0)$ in $\Gamma$ is equal to $V_0$, we have $\alpha_{\Gamma}(x, r)=\alpha_{\Gamma}(\infty, 0) = \alpha(\Gamma^0)$. Since ${\rm val}(\Gamma) = 2^a$, by Lemma \ref{lem:ind num},
$$
\alpha(\Gamma) \leq \frac{\alpha(\Gamma^0)|V(\Gamma)|}{\alpha(\Gamma^0)+{\rm val}(\Gamma)} \leq \frac{(1+\frac{2^{a}-1}{p})(2^{a}+1)p}{(1+\frac{2^{a}-1}{p})+2^{a}} = (2^{a}+1) \cdot \frac{2^{a}+(p-1)}{2^{a}+(1+\frac{2^{a}-1}{p})} \leq 2^{a}+1 = q
$$
and equality holds only if $l = s-1$ (that is, $p=2^{2^{s-1}}+1$).
$\Box$ \end{proof}

\subsection{Two rank-three graphs}
\label{subsec:rank3}

In addition to Theorems \ref{thm:ms core complete}, \ref{thm:clq>p}, \ref{thm:clq<q} and \ref{thm:pq ms ind<q}, to prove Theorem \ref{thm:pq ms cores} we also need a result from \cite{MR2470534} on the cores of two specific rank-three graphs. First, a few definitions \cite{Praeger97} are in order.
Let $G$ be a transitive group on a set $V$. The action of $G$ on $V$ induces an action on $V \times V$, defined by $g(u, v) := (g(u), g(v))$ for $g \in G$ and $(u, v) \in V \times V$. The $G$-orbits on $V \times V$ are called the \textit{$G$-orbitals} on $V$. For a fixed $v \in V$, there is a one-to-one correspondence between the $G$-orbitals on $V$ and the $G_{v}$-orbits on $V$; the latter are called the \textit{$G$-suborbits}, and their lengths the \textit{subdegrees}. The number of $G$-orbitals is called the \textit{rank} of $G$. A $G$-orbital $\Delta$ on $V$ gives rise to a \textit{$G$-orbital graph} with vertex set $V$ and arc set $\Delta$. If $\Delta$ is \textit{nontrivial} (that is, $\Delta \ne \{(v, v): v \in V\}$) and \textit{self-paired} (that is, $(u, v) \in \Delta$ implies $(v, u) \in \Delta$), then the $G$-orbital graph associated with $\Delta$ is a nontrivial undirected $G$-symmetric graph. Conversely, any $G$-symmetric graph is a $G$-orbital graph. A rank-three graph (defined in \S\ref{sec:intro}) can be defined equivalently as a nontrivial orbital graph of a rank-three permutation group of even order. Let $t \ge 1$ be an integer and $V(4, 2^{2^t})$ a $4$-dimensional vector space over ${\rm GF}(2^{2^t})$ equipped with a non-degenerate alternating bilinear form. Let $V$ be the set of $1$-dimensional subspaces of $V(4, 2^{2^t})$. Then any group $G$ with ${\rm PSp}(4,2^{2^{t}}) \leq G \leq {\rm P\Gamma Sp}(4,2^{2^{t}})$ acts on $V$ in the usual way. The following were proved in \cite[Lemma 3.5]{MR1244933}: (i) $|V| = (2^{2^{t}}+1)(2^{2^{t+1}}+1)$, $G$ has rank $3$, the subdegrees of $G$ on $V$ are $1, 2^{2^{t}}+2^{2^{t+1}}$ and $2^{2^{t+2}}$, and all suborbits are self-paired (that is, the corresponding $G$-orbitals are self-paired); (ii) the corresponding nontrivial $G$-orbital graphs $\Psi, \overline{\Psi}$ (which are complements of each other) are the only (incomplete, nonempty) vertex-primitive graphs on $V$ admitting $G$ as a group of automorphisms, and each of them has automorphism group ${\rm P\Gamma Sp}(4,2^{2^{t}})$; (iii) if $|V|=pq$, with $p < q$ and $p,q$ primes, then $p=2^{2^{t}}+1$ and $q=2^{2^{t+1}}+1$ are Fermat primes. The graphs $\Psi, \overline{\Psi}$ above with $|V|=pq$ arose in the classification \cite{MR1289072, MR1244933} of vertex-transitive graphs of order a product of two distinct primes. In the proof of Theorem 2.1 in \cite[p.193]{MR1289072}, it was proved that in this case both $\Psi$ and $\overline{\Psi}$ are isomorphic to MS graphs. Moreover, by (i) above, they are rank-three graphs. Furthermore, $\Psi$ is the rank-three graph $W_3(2^{2^{t}})$ in \cite[Section 3.5]{MR2470534}. In fact, since $V(4, 2^{2^t})$ carries a non-degenerate alternating bilinear form, we have the classical polar space $W_3(2^{2^{t}})$ whose points are the 1-dimensional subspaces of $V(4, 2^{2^t})$ that are totally isotropic with respect to the form (in other words, the point set of $W_3(2^{2^{t}})$ is exactly $V$). As in \cite[Section 3.5]{MR2470534}, by abusing notation we also denote by $W_3(2^{2^{t}})$ the graph whose vertices are the points of this polar space such that two vertices are adjacent if and only if they are orthogonal with respect to the form. This graph is a rank-three graph admitting $G$ in the previous paragraph as a group of automorphisms (see \cite[Section 3.5]{MR2470534}). The discussion in the previous paragraph implies that $\Psi$ or $\overline{\Psi}$ is the rank-three graph $W_3(2^{2^{t}})$.
(In fact, $\Psi = W_3(2^{2^{t}})$ by the proof of \cite[Lemma 3.5]{MR1244933}.) Since by \cite[Section 3.5]{MR2470534} both $W_3(2^{2^{t}})$ and its complement have complete cores, so do $\Psi$ and $\overline{\Psi}$. The reader is referred to \cite{MR2470534, Thas} for further discussion on polar spaces. \subsection{Proof of Theorem \ref{thm:pq ms cores}} \label{subsec:proof cores ms} We are now ready to prove Theorem \ref{thm:pq ms cores}. Since $\Gamma$ is vertex-transitive, by Theorem \ref{thm:order vt}, if $\Gamma$ is not a core then either $|V(\Gamma^\ast)|=p$ or $|V(\Gamma^\ast)|=q$. By (\ref{eq:inva}) and Theorem \ref{thm:clq>p}, $\omega(\Gamma^\ast)=\omega(\Gamma)\geq p$. Therefore, if $|V(\Gamma^\ast)|=p$, then $\Gamma^\ast\cong K_{p}$, whilst if $pq > 15$ and $|V(\Gamma^\ast)| = q$, then $\Gamma^\ast \cong K_{q}$ by Theorem \ref{thm:ms core complete}. (a) Suppose that $pq=15$. Then $s = 1, p = 3, q = 5$ and $\Gamma \cong \Gamma(2,3,\emptyset, \{0\})$ or $\Gamma \cong \Gamma(2,3,\emptyset, \{1,2\})$ by Theorem \ref{thm:sym ms}. If $\Gamma \cong \Gamma(2,3,\emptyset, \{1,2\})$, then computations using Mathematica show that $\omega(\Gamma)=\chi(\Gamma)=5$ and so $\Gamma^\ast\cong K_5$. Assume now $\Gamma \cong \Gamma(2,3,\emptyset, \{0\})$. Then computations show that $\chi(\Gamma)=4$ and $\omega(\Gamma)=3$, and so $|V(\Gamma^\ast)| \ne 3$ in view of (\ref{eq:inva}). If $|V(\Gamma^\ast)|=5$, then $\Gamma^\ast$ is a symmetric circulant of order $5$ and so $\Gamma^\ast\cong K_5$ or $C_5$. However, this cannot happen by (\ref{eq:inva}) since $\chi(K_5)\neq 4$ and $\chi(C_5)\neq 4$. Therefore, if $\Gamma \cong \Gamma(2,3,\emptyset, \{0\})$, then $\Gamma$ is a core. In what follows we assume $pq > 15$ without mentioning it explicitly. (b) Suppose that $p$ is not a Fermat prime. Then $|V(\Gamma^\ast)| \ge \omega(\Gamma^\ast) = \omega(\Gamma) > p$ by (\ref{eq:inva}) and Theorem \ref{thm:clq>p}. To prove that $\Gamma$ is a core it suffices to show $|V(\Gamma^\ast)| \ne q$. In fact, since by (\ref{eq:U}) $U$ is a proper subset of $\mathbb{Z}_p$, there exists an element $u' \in \mathbb{Z}_p \setminus U$. The MS graph $\Gamma' := \Gamma(a, p, \emptyset, \{u'\})$ has the same vertex set as $\Gamma$ but is edge-disjoint from $\Gamma$. Hence $\omega(\Gamma')\leq\alpha(\Gamma)$. Further, we have $p < \omega(\Gamma')$ by applying Theorem \ref{thm:clq>p} to $\Gamma'$, and hence $p < \alpha(\Gamma)$. This last inequality implies $|V(\Gamma^\ast)| \ne q$, for otherwise we would have $\Gamma^\ast\cong K_q$ by Theorem \ref{thm:ms core complete} and $\alpha(\Gamma) = p \alpha(\Gamma^\ast) = p$ by (\ref{eq:nohom1}), a contradiction. (c) Suppose that $p=2^{2^{l}}+1$ is a Fermat prime with $0 \leq l < s-1$. Similar to Case (b), consider an MS graph $\Gamma' := \Gamma(a, p, \emptyset, \{u'\})$, where $u' \in \mathbb{Z}_p \setminus U$. Since $V(\Gamma) = V(\Gamma')$ but $E(\Gamma) \cap E(\Gamma') = \emptyset$, we have $\omega(\Gamma) \leq \alpha(\Gamma')$.
Since $l < s-1$ and $\Gamma' \cong \Gamma(a, p, \emptyset, \{0\})$ by Theorem \ref{thm:sym ms}, we have $\alpha(\Gamma') = \alpha(\Gamma(a, p, \emptyset, \{0\})) < q$ by Theorem \ref{thm:pq ms ind<q}. Therefore, $\omega(\Gamma) < q$, and so $\omega(\Gamma^\ast) < q$ by (\ref{eq:inva}). This together with Theorem \ref{thm:ms core complete} implies that $|V(\Gamma^\ast)| \neq q$. On the other hand, since by Theorem \ref{thm:sym ms} $\Gamma$ contains a spanning subgraph isomorphic to $\Gamma'$, we have $\alpha(\Gamma) \leq \alpha(\Gamma') < q$. Therefore, $|V(\Gamma^\ast)| \neq p$, for otherwise we would have $\Gamma^\ast\cong K_p$ (as seen in the beginning of this proof) and $\alpha(\Gamma) = q \alpha(\Gamma^\ast) = q$ by (\ref{eq:nohom1}), a contradiction. Since $|V(\Gamma^\ast)|$ is neither $p$ nor $q$, $\Gamma$ must be a core. (d) Suppose that $p = 2^{2^{s-1}}+1$ is a Fermat prime (so that $s \ge 2$) and $U = \{u\}$. Let $\Psi$ be the rank-three graph with order $pq$ and valency ${\rm val}(\Psi) = 2^{2^{s-1}} + 2^{2^{s}}$ mentioned in \S\ref{subsec:rank3} (by setting $t = s-1$), so that $\Psi^*$ is a complete graph. Since $\Psi$ is an MS graph, by Definition \ref{defn:ms} and Theorem \ref{thm:sym ms}, it contains a spanning subgraph isomorphic to $\Gamma$. Since $\Psi^*$ is complete, either $\Psi^\ast \cong K_p$ or $\Psi^\ast \cong K_{q}$. However, since $q-1$ is not a divisor of ${\rm val}(\Psi)$ ($= 2^{2^{s-1}} + 2^{2^{s}}$), $\Psi^\ast \not \cong K_{q}$ by Theorem \ref{thm:val cores}. Thus $\Psi^\ast\cong K_p$. This together with the fact that $\Gamma$ is isomorphic to a spanning subgraph of $\Psi$ implies that $\Gamma \rightarrow K_p$. On the other hand, by Theorem \ref{thm:clq>p}, $\omega(\Gamma)\geq p$ and so $\Gamma$ contains a copy of $K_p$ as an induced subgraph. Therefore, $\Gamma\leftrightarrow K_p$ and so $\Gamma^\ast \cong K_p^* = K_p$. (e) Suppose that $p=2^{2^{s-1}}+1$ is a Fermat prime (so that $s \ge 2$) and $U = U_{e, i}$. Then $2 \leq |U| \le p-2$ by (\ref{eq:U}). Note that $\Gamma(a, p, \mathbb{Z}_p^\ast, \mathbb{Z}_p\setminus U)$ is the complement of $\Gamma$ and hence $\alpha(\Gamma) = \omega(\Gamma(a, p, \mathbb{Z}_p^\ast, \mathbb{Z}_p\setminus U))$. Thus, by applying Theorem \ref{thm:clq<q} to $\Gamma(a, p, \mathbb{Z}_p^\ast, \mathbb{Z}_p\setminus U)$ and noting $|U| \ge 2$, we obtain $$ \alpha(\Gamma) \leq \frac{2^{a}}{p-1}(p - |U|)+p-1 \leq \frac{2^{a}}{p-1}(p-2) + p-1 =2^{a} < q. $$ If $|V(\Gamma^\ast)| = p$, then $\Gamma^\ast\cong K_p$ and so $\alpha(\Gamma) = q \alpha(\Gamma^\ast) = q$ by (\ref{eq:nohom1}), a contradiction. Thus $|V(\Gamma^\ast)| \neq p$. On the other hand, by Theorem \ref{thm:clq<q}, $\omega(\Gamma)\leq \frac{2^{a}}{p-1}|U| + 1< q$ since $|U| \le p-2$. Thus $\omega(\Gamma^\ast) = \omega(\Gamma) < q$ (by (\ref{eq:inva})) and so $|V(\Gamma^\ast)| \neq q$. Therefore, $\Gamma$ is a core. This completes the proof of Theorem \ref{thm:pq ms cores}. \section{Proof of Theorem \ref{thm:main}} \label{sec:main proof} Let $p$ and $q$ be primes with $2 \le p < q$.
By Theorem \ref{thm:circu} (\cite{MR884254, MR1223693, MR1223702}), an imprimitive symmetric graph of order $pq$ is in one of the following families: (a) The four graphs in Example \ref{ex:inc}, namely $X({\rm PG}(d-1, r))$, $X'({\rm PG}(d-1, r))$, $X(H(11)) \cong G(22, 5)$ and $X'(H(11))$. These graphs are bipartite and hence their cores are $K_2$. (b) Imprimitive symmetric circulant graphs of order $pq$ listed in the second column of Table \ref{tab:cir} (see Definitions \ref{defn:2qr}, \ref{defn:3qr} and \ref{defn:pqrsu} and Example \ref{ex:lexprod}). Their cores are given in Lemma \ref{lem:2qr} and Theorems \ref{thm:lexprod}, \ref{lem:2qr1}, \ref{thm:del lex prod}, \ref{thm:pqrsu cat}, \ref{thm:pqrsu core} and \ref{thm:pqrsu}, respectively, completing the third column of Table \ref{tab:cir}. Note that, by Lemma \ref{lem:3qr}, the core of $G(3q, r)$ ($q \ge 5$) is reduced to that of $G(3q; r, 2, r)$ when $r$ is even or that of $G(3q; r, 2, 2r)$ when $r$ is odd, which can be obtained by using Theorems \ref{thm:pqrsu cat}, \ref{thm:pqrsu core} and \ref{thm:pqrsu}. (c) Symmetric MS graphs described in Theorem \ref{thm:sym ms}. Their cores are given in Theorem \ref{thm:pq ms cores}, completing the third column of Table \ref{tab:2}. This completes the proof of Theorem \ref{thm:main}. \bigskip {\bf Acknowledgements}~ We would like to thank the anonymous referees for their helpful comments. Rotheram would like to thank Gordon Royle for co-supervising his thesis, and Zhou is grateful to Chris Godsil and Gordon Royle for introducing him to cores of graphs. Rotheram was supported by an Australian Postgraduate Award and Zhou was supported by a Future Fellowship of the Australian Research Council. \small \begin{thebibliography}{99} \bibitem{MR791653} M. O. Albertson and K. L. Collins, Homomorphisms of {$3$}-chromatic graphs, {\em Discrete Math.} 54 (1985), no. 2, 127--132. \bibitem{MR2470534} P. J. Cameron and P. Kazanidis, Cores of symmetric graphs, {\em J. Aust. Math. Soc.} 85 (2008), no. 2, 145--154. \bibitem{MR0279000} C. Chao, On the classification of symmetric graphs with a prime number of vertices, {\em Trans. Amer. Math. Soc.} 158 (1971), 247--256. \bibitem{MR884254} Y. Cheng and J. Oxley, On weakly symmetric graphs of order twice a prime, {\em J. Combin. Theory Ser. B} 42 (1987), no. 2, 196--211. \bibitem{CGV} B. Codenotti, I. Gerace and S. Vigna, Hardness results and spectral techniques for combinatorial problems on circulant graphs, {\em Linear Algebra Appl.} 285 (1998), 123--142. \bibitem{D} E. Dobson, Automorphism groups of metacirculant graphs of order a product of two distinct primes, {\em Combin. Probab. Comput.} 15 (2006), 105--130. \bibitem{MR1829620} C. Godsil and G. Royle, {\em Algebraic Graph Theory}, Springer-Verlag, New York, 2001. \bibitem{GR} C. Godsil and G. Royle, {\em Interesting Graphs and Their Colourings}, preprint, 2008. \bibitem{MR2813515} C. Godsil and G. Royle, Cores of geometric graphs, {\em Ann. Comb.} 15 (2011), no. 2, 267--276. \bibitem{MR1468789} G. Hahn and C. Tardif, Graph homomorphisms: structure and symmetry, in: G.~Hahn and G. Sabidussi eds., {\em Graph Symmetry} (Montreal, 1996, NATO Adv. Sci. Inst. Ser. C, Math. Phys. Sci. 497), Kluwer Academic Publishing, Dordrecht, 1997, pp.107--166. \bibitem{MR2089014} P. Hell and J. Ne{\v{s}}et{\v{r}}il, {\em Graphs and Homomorphisms}, Oxford University Press, Oxford, 2004. \bibitem{MR1788124} W.
Imrich and S. Klav{\v{z}}ar, {\em Product Graphs}, Wiley-Interscience, New York, 2000. \bibitem{MR1866957} M. K{\v{r}}{\'{\i}}{\v{z}}ek, F. Luca and L. Somer, {\em 17 Lectures on Fermat Numbers}, Springer-Verlag, New York, 2001. \bibitem{MR1804825} B. Larose and C. Tardif, Hedetniemi's conjecture and the retracts of a product of graphs, {\em Combinatorica} 20 (2000), no. 4, 531--544. \bibitem{MR1174460} D. Maru\v{s}i\v{c} and R. Scapellato, Characterizing vertex-transitive {$pq$}-graphs with an imprimitive automorphism subgroup, {\em J. Graph Theory} 16 (1992), no. 4, 375--387. \bibitem{MR1214891} D. Maru\v{s}i\v{c} and R. Scapellato, Imprimitive representations of ${\rm SL}(2,2^k)$, {\em J. Combin. Theory Ser. B} 58 (1993), no. 1, 46--57. \bibitem{MR1289072} D. Maru\v{s}i\v{c} and R. Scapellato, Classifying vertex-transitive graphs whose order is a product of two primes, {\em Combinatorica} 14 (1994), no. 2, 187--201. \bibitem{MR1223702} C. E. Praeger, R. J. Wang and M. Y. Xu, Symmetric graphs of order a product of two distinct primes, {\em J. Combin. Theory Ser. B} 58 (1993), no. 2, 299--318. \bibitem{MR1244933} C. E. Praeger and M. Y. Xu, Vertex-primitive graphs of order a product of two distinct primes, {\em J. Combin. Theory Ser. B} 59 (1993), no. 2, 245--266. \bibitem{Praeger97} C. E. Praeger, Finite transitive permutation groups and finite vertex transitive graphs, in: G.~Hahn and G. Sabidussi eds., {\em Graph Symmetry} (Montreal, 1996, NATO Adv. Sci. Inst. Ser. C, Math. Phys. Sci. 497), Kluwer Academic Publishing, Dordrecht, 1997, pp.277--318. \bibitem{R} D. E. Roberson, Cores of vertex transitive graphs, {\em Electron. J. Combin.} 20 (2013), no. 2, Paper 45, 7 pp. \bibitem{MR1322111} Z. Ryj{\'a}{\v{c}}ek and I. Schiermeyer, On the independence number in {$K_{1,r+1}$}-free graphs, {\em Discrete Math.} 138 (1995), no. 1-3, 365--374. \bibitem{Thas} J. A. Thas, Projective geometry over a finite field, in: Handbook of Incidence Geometry (ed. F. Buekenhout), Elsevier, Amsterdam, 1995, pp. 295--347. \bibitem{2013arXiv1302.6652T} A. Thomson and S. Zhou, Rotational circulant graphs, {\em Discrete Appl. Math.} 162 (2014), 296--305. \bibitem{MR1223693} R. J. Wang and M. Y. Xu, A classification of symmetric graphs of order {$3p$}, {\em J. Combin. Theory Ser. B} 58 (1993), no. 2, 197--216. \end{thebibliography} \end{document}
\begin{document} \twocolumn[ \aistatstitle{Global Multi-armed Bandits with Hölder Continuity} \aistatsauthor{ Anonymous Author 1 \And Anonymous Author 2 \And Anonymous Author 3 } \aistatsaddress{ Unknown Institution 1 \And Unknown Institution 2 \And Unknown Institution 3 } ] \begin{abstract} \comment{ In Multi-armed bandit (MAB) problems, a learner maximizes its gains by sequentially choosing among a set of arms with unknown rewards. Choosing an arm reveals information about the arm's reward at the expense of missing the opportunity of collecting the (possibly higher) reward of another arm. In the classical MAB problem the arms are assumed to be independent. Hence, choosing an arm reveals no information about the rewards of other arms. Thus, identifying the best arm requires all arms to be selected at least logarithmically many times, thereby resulting in logarithmic regret. } Standard Multi-Armed Bandit (MAB) problems assume that the arms are independent. However, in many application scenarios, the reward observed by playing an arm provides information about the remaining arms. Hence, in such applications, this informativeness can and should be exploited to enable faster convergence to the optimal solution. In this paper, we introduce and formalize the Global MAB (GMAB), in which arms are \textit{globally} informative through a \emph{global parameter}, i.e., choosing an arm reveals information about \emph{all} the arms. We propose a greedy policy for the GMAB which always selects the arm with the highest estimated expected reward, and prove that it achieves \textit{bounded parameter-dependent regret}. Hence, this policy selects suboptimal arms only finitely many times, and after a finite number of initial time steps, the \textit{optimal arm} is selected in \textit{all} of the remaining time steps with probability one. In addition, we also study how the \emph{informativeness} of the arms about each other's rewards affects the speed of learning. Specifically, we prove that the parameter-free (worst-case) regret is sublinear in time, and decreases with the informativeness of the arms. We also prove a sublinear in time {\em Bayesian risk} bound for the GMAB which reduces to the well-known Bayesian risk bound for linearly parameterized bandits when the arms are \emph{fully informative}. GMABs have applications ranging from drug and treatment discovery to dynamic pricing. \end{abstract} \section{Introduction} In this paper we study a new class of MAB problems which we name the Global MAB (GMAB). In the GMAB problem, a learner sequentially selects one of the available $K$ arms with the goal of maximizing its total expected reward. We assume that the expected reward of arm $k$ is $\mu_{k}(\theta_{*})$, where $\theta_{*}\in\Theta$ is an unknown global parameter. For the given global parameter $\theta_{*}$, the reward of each arm follows an i.i.d. process. The learner knows the expected reward function $\mu_{k}(\cdot)$ of every arm $k$. In this setting an arm $k$ is informative about another arm $k'$ because the learner can estimate the expected reward of arm $k'$ by using the estimated reward of arm $k$ and the expected reward functions $\mu_{k}(\cdot)$ and $\mu_{k'}(\cdot)$. Under mild assumptions on the expected reward functions, we prove that a greedy policy which always selects the arm with the highest estimated expected reward achieves \textit{bounded regret}, which is independent of time.
In other words, suboptimal arms are selected only finitely many times before converging to the optimal arm. This is a surprising result, since as shown in \cite{lairobbinsl}, it is not possible to achieve bounded regret in standard MAB problems because playing arm $k$ is the only option to learn about its expected reward in these problems. \comment{ This is the case because the expected arm rewards in standard MAB problems\cite{lairobbinsl} are independent of each other, in contrast to the GMAB problem in which the expected rewards of the arms are related through $\theta_{\textrm{true}}$. } While most of the literature on MAB problems assumes independent arms \cite{lairobbinsl,A2002,Kauffman} and focuses on achieving regret that is logarithmic in time, structured MAB problems exist in which bounded regret has been proven. One prominent example is provided in \cite{Tsiklis_structured}, in which the expected rewards of the arms are known linear functions of a global parameter. Under this assumption, \cite{Tsiklis_structured} proves that the greedy policy achieves bounded regret. Proving finite regret bounds under this linearity assumption becomes possible since all the arms are fully informative about each other, i.e., rewards obtained from an arm can be used to estimate the expected reward of the other arms using a linear transformation on the obtained rewards. In this paper we consider a more general model in which the expected reward functions are {\em Hölder continuous}, which requires using a non-linear estimator to exploit the {\em weak informativeness}. Thus, our model includes the case of linear expected reward functions as a special case. However, while our regret results are a generalization of the results in \cite{Tsiklis_structured}, our analysis of the regret is more complicated since the arms are not fully informative as in the linear case. Thus, deriving regret bounds in our setting requires us to develop new proof techniques. However, we also show that our learning algorithm and the regret bounds reduce to the ones in \cite{Tsiklis_structured} when the arms have linear reward functions. In addition to the bounded regret bound (which depends on the value of the parameter $\theta_{*}$), we also provide a parameter-free regret bound and a bound on the Bayesian risk given a distribution $f(\cdot)$ over the parameter space $\Theta$, which matches the known lower bound of $\Omega(\log T)$ for linear reward functions \cite{Tsiklis_structured}. Both of these bounds are sublinear in time and depend on the informativeness of the arms with respect to the other arms, subsequently referred to simply as informativeness. Many applications can be formalized as a GMAB, where the reward functions are {\em Hölder continuous} in the global parameter. Examples include clinical trials involving similar drugs (e.g., drugs with a similar chemical composition) or treatments which may have similar effects on the patients; hence, the outcome of administering one drug/treatment to a patient yields information about the outcome of administering a similar drug/treatment to that patient. Another example is dynamic pricing \cite{dynamic}. In dynamic pricing, an agent sequentially selects a price from a set of prices ${\cal P}$ with the objective of maximizing its revenue over a finite time horizon.
At time $t$, the agent first selects a price $p\in{\cal P}$, and then observes the amount of sales, which is given as $S_{p,t}(\Lambda)=\bar{F}_{p}(\Lambda)+\epsilon_{t}$, where $\bar{F}_{p}(\cdot)$ is a modulating function and $\epsilon_{t}$ is the noise term with zero mean. The modulating function is the purchase probability of an item of price $p$ given the market size $\Lambda$. Here, the market size is the global parameter, which is unknown and can be learned by setting any price and observing the corresponding sales. Commonly used modulating functions include the exponential and logistic functions. In summary, the main contributions of our paper are: \begin{itemize} \item We formalize a new class of structured MAB problems, which we refer to as Global MABs. This class of problems represents a generalization of the linearly parametrized bandits in \cite{Tsiklis_structured}. \item For GMABs, we propose a greedy policy that always selects the arm with the highest estimated expected reward. We prove that the greedy policy achieves bounded regret (independent of time horizon $T$, depending on $\theta_{*}$). \item In addition to proving that the regret is bounded (which is related to the asymptotic behavior), we also show how the regret increases over time by identifying and characterizing three regimes of growth: first, the regret increases at most sublinearly over time until a first threshold (that depends on the informativeness), after which it increases at most logarithmically over time until a second threshold, before converging to a finite number asymptotically. These thresholds have the property that they are decreasing in the informativeness. \item We prove a sublinear in time worst-case (parameter-free) regret bound. The rate of increase in time decreases with the informativeness of the arms, meaning that the regret increases more slowly when the informativeness is high. \item Given a distribution over the set of global parameter values, we prove a Bayesian risk bound that depends on the informativeness. When the arms are \emph{fully informative}, such as in the case of linearly parametrized bandits \cite{Tsiklis_structured}, our Bayesian risk bound and our proposed greedy policy reduce to the well-known Bayesian risk bound and the greedy policy in \cite{Tsiklis_structured}, respectively. \end{itemize} \subsection{Related Work} Numerous types of MAB problems have been defined and investigated in the past decade; these include stochastic bandits \cite{lairobbinsl,A2002,Auer_2002a,KL_divergence,Agawal_89}, Bayesian bandits \cite{Kauffman,Thompson,Goyal_Thompson,KauffmanThompson,thompson_priorfree}, contextual bandits \cite{epoch_greedy,Slivkins,thompson_contextual}, combinatorial bandits \cite{combinatorial}, and many other variants. Instead of comparing our method against all these MAB variants, we group the existing literature based on the main theme of this paper: exploiting the informativeness of an arm to learn about the rewards of other arms. We call a MAB problem {\em non-informative} if the reward observations of any arm do not reveal any information about the expected rewards of any other arms. Examples of non-informative MABs are the stochastic bandits \cite{lairobbinsl,A2002} and the bandits with local parameters \cite{Goyal_Thompson,Kauffman}. In these problems the regret grows at least logarithmically in time, since each arm should be selected at least logarithmically many times to identify the optimal arm.
We call a MAB problem {\em group-informative} if the reward observations from an arm provide information about the rewards of a known group of other arms but not all the arms. Examples of group-informative MAB problems are combinatorial bandits \cite{combinatorial}, contextual bandits \cite{epoch_greedy,Slivkins,thompson_contextual} and structured bandits \cite{Tsiklis,paramtric_GLM}. In these problems the regret grows at least logarithmically over time since at least one suboptimal arm should be selected at least logarithmically many times to identify groups of arms that are suboptimal. We call a MAB problem {\em globally-informative} if the reward observations from an arm provide information about the rewards of \textit{all} the arms. The proposed GMABs include the linearly-parametrized MABs in \cite{Tsiklis_structured} as a subclass. Therefore, we prove a bounded regret for a larger class of problems. Another related work is \cite{gittins}, in which the optimal arm selection strategy is derived for the infinite time horizon learning problem, when the arm rewards are parametrized with known priors, and the future rewards are discounted. However, in Gittins' formulation of the MAB problem, the parameters of the arms are different from each other, and the discounting allows the learner to efficiently solve the optimization problem related to arm selection by decoupling the joint optimization problem into $K$ individual optimization problems, one for each arm. In contrast, we do not assume known priors, and the learner in our case does not solve an optimization problem but rather learns the global parameter through its reward observations. Another seemingly related learning scenario is the experts setting \cite{Experts}, where after an arm is chosen, the rewards of all arms are observed and their estimated rewards are updated. Hence, there is no tradeoff between exploration and exploitation, and finite regret bounds can be achieved in an expert system with a finite number of arms and stochastic arm rewards. However, unlike in the expert setting, the GMABs achieve finite regret bounds while observing \textit{only} the reward of the selected arm. Hence, the arm reward estimation procedure in GMABs requires forming reward estimates by collectively considering the observed rewards from all the arms, which is completely different from expert systems, in which the expected reward of an arm is estimated only by using the past reward observations from that arm. \section{Global Multi-Armed Bandits} \subsection{Problem Formulation} The set of all arms is denoted by ${\cal K}$ and the number of arms is $K=|{\cal K}|$, where $|\cdot|$ is the cardinality operator. The reward obtained by playing an arm $k\in{\cal K}$ at time $t$ is given by a random variable $X_{k,t}$. We assume that for $t\geq1$ and $k\in{\cal K}$, $X_{k,t}$ is drawn independently from an unknown distribution $\nu_{k}(\theta_{*})$ with support $[0,1]$.\footnote{The set $[0,1]$ is just a convenient normalization. In general, we only need the distribution to have bounded support.} The learner knows that the expected reward of an arm $k\in{\cal K}$ is a (Hölder continuous, invertible) function of the global parameter $\theta_{*}$, which is given by $E_{X_{k,t}\sim\nu_{k}(\theta_{*})}(X_{k,t})=\mu_{k}(\theta_{*})$, where $\mu_{k}: \Theta \rightarrow [0,1]$ and $E[\cdot]$ denotes the expectation. Hence, the true expected reward of arm $k$ is equal to $\mu_{k}(\theta_{*})$.
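As a concrete illustration of this reward model (and not part of the formal development), the following minimal Python sketch shows how observations from a single arm identify the global parameter and hence the expected rewards of \emph{all} arms. The two reward functions, the value of $\theta_{*}$ and the Bernoulli reward noise are hypothetical choices made only for this example.
\begin{verbatim}
# Minimal sketch (Python): global informativeness in a two-armed GMAB.
# The reward functions and the Bernoulli noise below are illustrative
# choices, not prescribed by the model.
import numpy as np

rng = np.random.default_rng(0)
mu1 = lambda th: th ** 2            # arm 1: invertible on Theta = [0, 1]
mu2 = lambda th: 1.0 - 0.5 * th     # arm 2: invertible on Theta = [0, 1]
theta_star = 0.6                    # unknown global parameter

# Play only arm 1: i.i.d. rewards in [0, 1] with mean mu1(theta_star).
x1 = rng.binomial(1, mu1(theta_star), size=5000).astype(float)
theta_hat = np.sqrt(x1.mean())      # mu1^{-1} applied to the sample mean

# Arm 2's expected reward is estimated without ever playing arm 2.
print(mu2(theta_hat), "vs true", mu2(theta_star))
\end{verbatim}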
\begin{assumption} \label{ass:holder} (i) For each $k\in{\cal K}$, the reward function $\mu_{k}$ is invertible on $[0,1]$. \\ (ii) For each $k\in{\cal K}$ and $y,y'\in[0,1]$, there exist $D_{1}>0$ and $0<\gamma_{1}\leq1$ such that $|\mu_{k}^{-1}(y)-\mu_{k}^{-1}(y')|\leq D_{1}|y-y'|^{\gamma_{1}}$, where $\mu_{k}^{-1}$ is the inverse reward function for arm $k$. \\ (iii) For each $k\in{\cal K}$ and $\theta,\theta'\in \Theta$ there exist $D_{2}>0$ and $0<\gamma_{2}\leq1$, such that $|\mu_{k}(\theta)-\mu_{k}(\theta')|\leq D_{2}|\theta-\theta'|^{\gamma_{2}}$. \end{assumption} Assumption \ref{ass:holder} ensures that the reward obtained from an arm can be used to update the estimated expected rewards of the other arms. The last two conditions are Hölder conditions on the reward and inverse reward functions, which enable us to define the informativeness. It turns out that the invertibility of the reward functions is a crucial assumption that is required to achieve bounded regret. We illustrate this by a counterexample when we discuss parameter-dependent regret bounds. \comment{These assumptions are not necessary for our algorithm to run, but are required to derive the bounded regret.} There are many reward functions that satisfy Assumption \ref{ass:holder}. Examples include: ($i$) exponential functions such as $\mu_{k}(\theta)=a\exp(b\theta)$ for some $a>0$, ($ii$) linear and piecewise linear functions, and ($iii$) sub-linear and super-linear functions in $\theta$ which are invertible in $\Theta$ such as $\mu_{k}(\theta)=a\theta^{\gamma}$ with $\gamma>0$. The goal of the learner is to choose a sequence of arms (one at each time) $\boldsymbol{I}:=(I_{1},\ldots,I_{T})$ up to time $T$ to maximize its expected total reward. This corresponds to minimizing the regret, which is the expected total loss due to not always selecting the optimal arm, i.e., the arm with the highest expected reward. Let $k^{*}(\theta_{*}):=\argmax_{k\in{\cal K}}\mu_{k}(\theta_{*})$ be the set of optimal arms and $\mu^{*}(\theta_{*}):=\max_{k\in{\cal K}}\mu_{k}(\theta_{*})$ be the expected reward of the optimal arm for the true value of the global parameter $\theta_{*}$. The cumulative regret of a learning algorithm which selects arm $I_{t}$ at time $t$ until the time horizon $T$ is defined as \begin{align} \text{Reg}(\theta_{*},T):=\sum_{t=1}^{T} r_{t}(\theta_{*}), \end{align} where $r_{t}(\theta_{*})$ is the one step regret given by $r_{t}(\theta_{*}):=\mu^{*}(\theta_{*})-\mu_{I_{t}}(\theta_{*})$ for global parameter $\theta_{*}$. In the following sections we will derive regret bounds both as a function of $\theta_{*}$ (parameter-dependent regret) and independently of $\theta_{*}$ (worst-case or parameter-free regret). \subsection{Greedy Policy} \begin{figure} \caption{Pseudocode of the greedy policy.} \label{fig:GP} \end{figure} In this section, we propose a greedy policy for the GMAB problem, which selects the arm with the highest estimated expected reward at each time $t$. Different from previous works in MABs \cite{A2002,lairobbinsl}, in which the expected reward estimate of an arm only depends on the reward observations from that arm, the proposed greedy policy uses a {\em global parameter estimate} $\hat{\theta}_{t}$, which is given by $\hat{\theta}_{t}:=\sum_{k=1}^{K}w_{k}(t)\hat{\theta}_{k,t}$, where $w_{k}(t)$ is the weight of arm $k$ at time $t$ and $\hat{\theta}_{k,t}$ is the estimate of the global parameter based only on the reward observations from arm $k$ until time $t$.
Let ${\cal X}_{k,t}$ denote the set of rewards obtained from the selections of arm $k$ by time $t$, i.e., ${\cal X}_{k,t}=(X_{k,t})_{\tau<t\;|I_{\tau}=k}$, and $\hat{X}_{k,t}$ be the sample mean estimate of the rewards obtained from arm $k$ by time $t$, i.e., ${\hat{X}_{k,t}}:=(\sum_{x\in{\cal X}_{k,t}}x)/|{\cal X}_{k,t}|$. The proposed greedy policy operates as follows for any time $t \geq 2$: ($i$) the arm with the highest expected reward according to the estimated parameter $\hat{\theta}_{t-1}$ is selected, i.e., $I_{t}\in\argmax_{k\in{\cal K}}\mu_{k}(\hat{\theta}_{t-1})$, ($ii$) reward $X_{I_{t},t}$ is obtained and individual reward estimates $\hat{X}_{k,t}$ are updated for $k\in{\cal K}$, ($iii$) the individual estimates of each arm $k$ for the global parameter are updated as $\hat{\theta}_{k,t}=\mu_{k}^{-1}({\hat{X}_{k,t}})$, ($iv$) the weights of each arm $k$ are updated as $w_{k}(t)=N_{k}(t)/t$, where $N_{k}(t)$ is the number of times arm $k$ has been played until time $t$, i.e., $N_{k}(t)=|{\cal X}_{k,t}|$. For $t=1$, since there is no global parameter estimate, the greedy policy selects randomly among the set of arms. The pseudocode of the greedy policy is given in Fig. \ref{fig:GP}. \section{Regret Analysis for the Greedy Policy} \subsection{Preliminaries} In this subsection we define the tools that will be used in deriving the regret bounds. Consider any arm $k\in{\cal K}$. Its \emph{optimality region} is defined as $\Theta_{k}:=\{\theta\in\Theta\;|k\in k^{*}(\theta)\}$. \comment{Since the reward functions are known a priori, the optimality regions of the arms can be calculated a priori.} Clearly, we have $\bigcup_{k\in{\cal K}}\Theta_{k}=\Theta$. If $\Theta_{k}=\emptyset$ for an arm $k$, this implies that there is no global parameter value for which arm $k$ is optimal. Since for an arm $k$ with $\Theta_k = \emptyset$ there exists, for any $\theta \in \Theta$, an arm $k'$ such that $\mu_{k'}(\theta) >\mu_k(\theta)$, the greedy policy will discard arm $k$ after $t=1$. Therefore, without loss of generality we assume that $\Theta_{k}\neq\emptyset$ for all $k\in{\cal K}.$ For global parameter $\theta_{*} \in \Theta$, we define the \emph{suboptimality gap} of an arm $k\in{\cal K} \setminus k^{*}(\theta_{*})$ as $\delta_{k}(\theta_{*}):=\mu^{*}(\theta_{*})-\mu_{k}(\theta_{*})$. For parameter $\theta_{*}$, the minimum suboptimality gap is defined as $\delta_{\text{min}}(\theta_{*}):=\min_{k\in{\cal K}\setminus k^{*}(\theta_{*})}\delta_{k}(\theta_{*})$. \comment{ In our analysis, we will show that when the expected reward estimate for an arm $k$ is within $\delta_{\min}(\theta_{\textrm{true}})/2$ of its true expected reward for all arms $k\in{\cal K}$, then the greedy policy will select the optimal arm, even when its global parameter estimate is not exactly equal to $\theta_{\textrm{true}}$. } \begin{figure} \caption{Illustration of minimum suboptimality gap and suboptimality distance} \label{fig: illustration} \end{figure} Recall that the expected reward estimate for arm $k$ is equal to its expected reward corresponding to the global parameter estimate. We will show that as more arms are selected, the global parameter estimate will converge to the true value of the global parameter. However, if $\theta_{*}$ lies close to the boundary of the optimality region of $k^{*}(\theta_{*})$, the global parameter estimate may fall outside of the optimality region of $k^{*}(\theta_{*})$ for a large number of time steps, thereby resulting in a large regret.
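Before proceeding, we give a minimal runnable sketch (Python) of the greedy policy of Fig. \ref{fig:GP}, implementing steps ($i$)--($iv$) above. The three reward functions (those of the example illustrated in Fig. \ref{fig: illustration}), the Bernoulli reward noise, the value of $\theta_{*}$ and the horizon $T$ are illustrative choices only.
\begin{verbatim}
# Sketch (Python) of the greedy policy: steps (i)-(iv) above.
# Reward functions, noise model, theta_star and T are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
mus     = [lambda th: 1.0 - np.sqrt(th), lambda th: 0.8 * th, lambda th: th ** 2]
inv_mus = [lambda x: (1.0 - x) ** 2,     lambda x: x / 0.8,   lambda x: np.sqrt(x)]
theta_star, K, T = 0.3, 3, 2000
sums, counts = np.zeros(K), np.zeros(K)          # running reward sums and N_k(t)

theta_hat = None
for t in range(1, T + 1):
    if t == 1:                                   # no estimate yet: pick at random
        I_t = int(rng.integers(K))
    else:                                        # (i) arm with highest mu_k(theta_hat)
        I_t = int(np.argmax([mu(theta_hat) for mu in mus]))
    X = float(rng.random() < mus[I_t](theta_star))   # (ii) observe a Bernoulli reward
    sums[I_t] += X
    counts[I_t] += 1
    # (iii) per-arm estimates theta_k = mu_k^{-1}(sample mean), clipped to Theta = [0,1]
    theta_k = np.array([np.clip(inv_mus[k](sums[k] / counts[k]), 0.0, 1.0)
                        if counts[k] > 0 else 0.0 for k in range(K)])
    w = counts / t                               # (iv) weights w_k(t) = N_k(t) / t
    theta_hat = float(w @ theta_k)

print("estimated theta:", theta_hat, " true theta:", theta_star)
\end{verbatim}
Note that in this sketch the weighted estimator $\hat{\theta}_{t}=\sum_{k}w_{k}(t)\hat{\theta}_{k,t}$ automatically down-weights arms with few observations, since $w_{k}(t)=N_{k}(t)/t$; arms that have never been played contribute nothing because their weight is zero.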
Let $\Theta^{\text{sub}}(\theta_{*})$ be the suboptimality region for a given global parameter $\theta_{*}$, which is defined as the subset of the parameter space in which an arm in the set ${\cal K} \setminus k^{*}(\theta_{*})$ is optimal, i.e., $\Theta^{\text{sub}}(\theta_{*}) = \cup_{k' \in {\cal K} \setminus k^{*}(\theta_{*})} \Theta_{k'}$. In order to bound the expected number of such deviations from the optimality region, we define a metric called the \emph{suboptimality distance}, which is equal to the smallest distance between the value of the global parameter and the suboptimality region. \begin{definition} \label{def:dist} For a given global parameter $\theta_{*}$, the \textit{suboptimality distance} is defined as \[ \Delta_{\text{min}}(\theta_{*}):=\left\{ \begin{array}{lr} \inf_{\theta'\in\Theta^{\text{sub}}(\theta_{*})} |\theta_{*}-\theta'| & \text{if } \Theta^{\text{sub}}(\theta_{*}) \neq \emptyset\\ 1 & \text{if }\Theta^{\text{sub}}(\theta_{*}) = \emptyset \end{array}\right. \] \end{definition} From the definition of the suboptimality distance it is evident that the greedy policy always selects an optimal arm in $k^{*}(\theta_{*})$ when $\hat{\theta}_{t}$ is within $\Delta_{\text{min}}(\theta_{*})$ of the global parameter $\theta_{*}$. An illustration of the suboptimality gap and the suboptimality distance is given in Fig. \ref{fig: illustration} for a GMAB problem instance with $3$ arms and reward functions $\mu_{1}(\theta)=1-\sqrt{\theta}$, $\mu_{2}(\theta)=0.8\theta$ and $\mu_{3}(\theta)=\theta^{2}$. \comment{ In the following lemma we characterize the minimum amount of change in the value of the global parameter that will make a suboptimal action an optimal action. This change is given in terms of $\delta_{min}(\theta)$, and will be used to bound the regret by bounding the probability of the deviation of $\hat{\theta}_{t}$ from $\theta$ by an amount greater than $\Delta_{min}(\theta)$. } In the following lemma, we show that the minimum suboptimality distance is nonzero for any global parameter $\theta_{*}$. This result ensures that we can identify the optimal arm within a finite amount of time. \begin{lemma} \label{prop:non_zero} Given any $\theta_{*}\in\Theta$, there exists a constant $\epsilon_{\theta_{*}}=\delta_{\min}(\theta_{*})^{1/\gamma_{2}}/(2D_{2})^{1/\gamma_{2}}$, where $D_{2}$ and $\gamma_{2}$ are the constants given in Assumption 1, such that $\Delta_{\min}(\theta_{*})\geq\epsilon_{\theta_{*}}.$ In other words, the minimum suboptimality distance is always positive. \end{lemma} \comment{ \begin{proof} For any suboptimal arm $k\in{\cal K}-k^{*}(\theta)$, we have $\mu_{k^{*}(\theta)}(\theta)-\mu_{k}(\theta)\geq\delta_{\min}(\theta)>0.$ We also know that $\mu_{k}(\theta')\geq\mu_{k^{*}(\theta)}(\theta')$ for all $\theta'\in\Theta_{k}$. Hence for any $\theta'\in\Theta_{k}$ at least one of the following should hold: (i) $\mu_{k}(\theta')\geq\mu_{k}(\theta)-\delta_{\min}(\theta)/2$, (ii) $\mu_{k^{*}(\theta)}(\theta')\leq\mu_{k^{*}(\theta)}(\theta)+\delta_{\min}(\theta)/2$. If both of the below does not hold, then we must have $\mu_{k}(\theta')<\mu_{k^{*}(\theta)}(\theta')$, which is false. This implies that we either have $\mu_{k}(\theta)-\mu_{k}(\theta')\leq\delta_{\min}(\theta)/2$ or $\mu_{k^{*}(\theta)}(\theta)-\mu_{k^{*}(\theta)}(\theta')\geq - \delta_{\min}(\theta)/2$, or both. Recall that from Assumption 1 we have $|\theta-\theta'|\geq|\mu_{k}(\theta)-\mu_{k}(\theta')|^{1/\gamma_{2}}/D_{2}^{1/\gamma_{2}}$.
This implies that $|\theta-\theta'|\geq\epsilon_{\theta}$ for all $\theta'\in\Theta_{k}$. \end{proof}} For notational brevity, we denote in the remainder of the paper $\Delta_{\text{min}}(\theta_{*})$ and $\delta_{\text{min}}(\theta_{*})$ as $\Delta_{*}$ and $\delta_{*}$, respectively. \begin{lemma} \label{lemma:gap} Consider a run of the greedy policy until time $t$. Then, the following relation between $\hat{\theta}_{t}$ and $\theta_{*}$ holds with probability one: $|\hat{\theta}_{t}-\theta_{*}|\leq\sum_{k=1}^{K}w_{k}(t)D_{1}|\hat{X}_{k,t}-\mu_{k}(\theta_{*})|^{\gamma_{1}}$. \end{lemma} Lemma \ref{lemma:gap} shows that the gap between the global parameter estimate and the true value of the global parameter is bounded by a weighted sum of the gaps between the estimated expected rewards and the true expected rewards of the arms. \begin{lemma} \label{lemma:onesteploss} For a given global parameter $\theta_*$, the one step regret of the greedy policy is bounded by $r_{t}(\theta_{*})=\mu^{*}(\theta_{*})-\mu_{I_{t}}(\theta_{*}) \leq 2D_{2}|\theta_{*}-\hat{\theta}_{t}|^{\gamma_{2}}$ with probability one, where $I_{t}$ is the arm selected by the greedy policy at time $t\geq2$. \end{lemma} Lemma \ref{lemma:onesteploss} ensures that the one step loss decreases as $\hat{\theta}_{t}$ approaches $\theta_{*}$. Since the regret at time $T$ is the sum of the one step losses up to time $T$, we will bound the regret by bounding the expected distance between $\hat{\theta}_{t}$ and $\theta_{*}$. Given a parameter value $\theta_{*}$, let ${\cal G}_{\theta_{*},\hat{\theta}_{t}}^{x}:=\{|\theta_{*}-\hat{\theta}_{t}|>x\}$ be the event that the distance between the global parameter estimate and its true value exceeds $x$. Similarly, let ${\cal F}_{\theta_{*},\hat{\theta}_{t}}^{k}(x):=\{|\hat{X}_{k,t}-\mu_{k}(\theta_{*})|>x\}$ be the event that the distance between the sample mean reward estimate of arm $k$ and the true expected reward of arm $k$ exceeds $x$. The following lemma relates these events. \begin{lemma} \label{lemma:eventbound} For any $t \geq 2$ and given global parameter $\theta_{*}$, we have ${\cal G}_{\theta_{*},\hat{\theta}_{t}}^{x}\subseteq \cup_{k=1}^{K}{\cal F}_{\theta_{*},\hat{\theta}_{t}}^{k}((\frac{x}{D_{1}})^{\frac{1}{\gamma_{1}}})$ with probability one. \end{lemma} This lemma follows from the decomposition given in Lemma \ref{lemma:gap}. It will be used to bound the probability of the event ${\cal G}_{\theta_*,\hat{\theta}_{t}}^{x}$ in terms of the probabilities of the events ${\cal F}_{\theta_*,\hat{\theta}_{t}}^{k}((\frac{x}{D_{1}})^{\frac{1}{\gamma_{1}}})$. \subsection{Parameter-Free Regret Analysis} The following theorem bounds the expected regret of the greedy policy in one step. \begin{theorem} \label{thm:onestepregret} Under Assumption \ref{ass:holder}, for a given global parameter $\theta_{*}$, the expected one-step regret of the greedy policy is bounded by $E[r_{t}(\theta_{*})]=O(t^{-\frac{\gamma_{1}\gamma_{2}}{2}})$. \end{theorem} Theorem \ref{thm:onestepregret} not only proves that the expected loss incurred in one step by the greedy policy goes to zero with time, but also bounds the expected loss that will be incurred at any time step $t$.\footnote{The asymptotic notation is only used for a succinct representation, to hide the constants and highlight the time dependence. This bound holds not just asymptotically but for any finite $t$.} This is a \emph{worst-case} bound in the sense that it does not depend on $\theta_{*}$.
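To make Definition \ref{def:dist} concrete before turning to the regret bounds, the following short numerical sketch (Python; the grid resolution is an arbitrary choice) computes $\delta_{\min}(\theta_{*})$ and $\Delta_{\min}(\theta_{*})$ for the three-arm example of Fig. \ref{fig: illustration}.
\begin{verbatim}
# Numerical sketch: delta_min and Delta_min for the three-arm example.
import numpy as np

mus = [lambda th: 1.0 - np.sqrt(th), lambda th: 0.8 * th, lambda th: th ** 2]
grid = np.linspace(0.0, 1.0, 100001)          # discretized Theta = [0, 1]
vals = np.stack([mu(grid) for mu in mus])     # K x |grid| expected rewards
best = vals.argmax(axis=0)                    # optimal arm at each grid point

def gaps(theta_star):
    i = int(round(theta_star * (len(grid) - 1)))
    reward = vals[:, i]
    g = reward.max() - reward
    delta_min = g[g > 0].min()                           # minimum suboptimality gap
    sub_region = grid[best != best[i]]                   # suboptimality region
    Delta_min = np.abs(sub_region - theta_star).min() if sub_region.size else 1.0
    return delta_min, Delta_min

print(gaps(0.3))   # theta_* = 0.3: the arm with mu = 1 - sqrt(theta) is optimal
\end{verbatim}
For $\theta_{*}=0.3$ this returns $\delta_{\min}(\theta_{*})\approx 0.21$ and $\Delta_{\min}(\theta_{*})\approx 0.13$, the latter being the distance from $\theta_{*}$ to the boundary of the optimality region of the first arm.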
Using Theorem \ref{thm:onestepregret}, we derive the parameter-free regret bound in the next theorem. \begin{theorem} \label{thm:par_indep} Under Assumption \ref{ass:holder}, for any global parameter $\theta_{*}$, the parameter-free regret of the greedy policy is bounded by $E[\text{Reg}(\theta_{*},T)] = O(K^{\frac{\gamma_{1}\gamma_{2}}{2}}T^{1-\frac{\gamma_{1}\gamma_{2}}{2}})$. \end{theorem} Note that the parameter-free regret bound is sublinear both in the time horizon $T$ and in the number of arms $K$. Moreover, it depends on the form of the reward functions given in Assumption 1. The Hölder exponent $\gamma_{1}$ on the inverse reward functions characterizes the informativeness of an arm about the other arms. The informativeness of an arm $k$ can be viewed as the information obtained about the expected rewards of the other arms from the rewards observed from arm $k$. The informativeness is maximized for the case when the inverse reward functions are linear or piecewise linear, i.e., $\gamma_{1}=1$. It is increasing in $\gamma_{1}$, which results in the regret decreasing with the informativeness. On the other hand, the Hölder exponent $\gamma_{2}$ is related to the loss due to suboptimal arm selections, which decreases with $\gamma_{2}$. Both of these observations follow from Lemmas \ref{lemma:gap} and \ref{lemma:onesteploss}. As a consequence, the parameter-free regret is decreasing in both $\gamma_{1}$ and $\gamma_{2}$. When the reward functions are linear or piecewise linear, we have $\gamma_{1}=\gamma_{2}=1$; hence, the parameter-free regret is $O(\sqrt{T})$, which matches the worst-case regret bound of standard MAB algorithms in which a linear estimator is used \cite{BubeckBianchi} and the bounds given for linearly parametrized bandits \cite{Tsiklis_structured}. \subsection{Parameter-Dependent Regret Analysis} \label{subsec:par_dep} Although the regret bound derived in the previous section holds for any global parameter value, it is easy to see that the performance of the greedy policy depends on the true value of the global parameter. For example, it is easier to identify the optimal arm in GMAB problems with large suboptimality distance than in GMAB problems with small suboptimality distance. In this section, we prove a regret bound that depends on the suboptimality distance. Moreover, our regret bound is characterized by three regimes of growth: sublinear growth followed by logarithmic growth followed by a constant bound. The boundaries of these regimes are defined by parameter-dependent (problem-specific) constants. \begin{definition} Let $C_{1}(\Delta_{*})$ be the least integer $\tau$ such that $\tau\geq\frac{D_{1}^{\frac{2}{\gamma_{1}}}K}{2{\Delta_{*}}^{\frac{2}{\gamma_{1}}}}\log(\tau)$ and let $C_{2}(\Delta_{*})$ be the least integer $\tau$ such that $\tau\geq\frac{D_{1}^{\frac{2}{\gamma_{1}}}K}{{\Delta_{*}}^{\frac{2}{\gamma_{1}}}}\log(\tau)$. \end{definition} The constants $C_{1}(\Delta_{*})$ and $C_{2}(\Delta_{*})$ depend on the informativeness (Hölder exponent $\gamma_1$) and on the global parameter $\theta_{*}$. We define the expected regret between time $T_{1}$ and $T_{2}$ for global parameter $\theta_{*}$ as \begin{align} R_{\theta_{*}}(T_{1},T_{2}):=E[\text{Reg}(\theta_{*},T_{2})-\text{Reg}(\theta_{*},T_{1})]. \end{align} The following theorem gives a three-regime parameter-dependent regret bound.
\begin{theorem} \label{thm:par_dep} Under Assumption \ref{ass:holder}, the regret of the greedy policy is bounded as follows: If \\ (i) $1\leq T\leq C_{1}(\Delta_{*})$, the regret is sublinear in time, i.e., \begin{align} R_{\theta_{*}}(0,T)=O(T^{1-\frac{\gamma_{1}\gamma_{2}}{2}}), \end{align} (ii) $C_{1}(\Delta_{*})\leq T\leq C_{2}(\Delta_{*})$, the regret is logarithmic in time, i.e., \begin{align} R_{\theta_{*}}(C_{1}(\Delta_{*}),T)\leq1+2K\log(\frac{T}{C_{1}(\Delta_{*})}), \end{align} (iii) $T\geq C_{2}(\Delta_{*})$, the regret is bounded, i.e., \begin{align} R_{\theta_{*}}(C_{2}(\Delta_{*}),T)\leq K\frac{\pi^{2}}{3} \end{align} \end{theorem} \begin{corollary} The regret of the greedy policy is bounded, i.e., $\lim_{T\rightarrow\infty}\text{Reg}(\theta_{*},T)<\infty$. \end{corollary} These results are obtained when Assumption \ref{ass:holder} holds, which implies that the reward functions are invertible. We provide a counterexample with a non-invertible reward function to show that bounded regret is not possible for general non-invertible reward functions. \textbf{Counterexample}: All expected arm rewards come from a set with $K$ distinct elements. There are $K!$ permutations of these distinct elements, and the global parameter space $\Theta$ is divided into $K!$ intervals such that the expected reward distribution of each arm in each interval is constant and equal to the value of the element it corresponds to in one of the permutations. In order to identify the arm rewards correctly, we have to know the permutation and hence the parameter value $\theta_{*}$. However, we cannot identify all the arms correctly without playing all of them separately because an arm can have the same expected reward in different permutations (for different parameter intervals), but at least one of the other arms will have a different expected reward in these permutations. At each time $t\leq T$ in each regime of Theorem \ref{thm:par_dep}, the probability of selecting a suboptimal arm is bounded by a different function of $t$, which leads to different growth rates of the regret bound depending on the value of $T$. For instance, when $C_{1}(\Delta_{*})\leq t\leq C_{2}(\Delta_{*})$, the probability of selecting a suboptimal arm is of the order of $t^{-1}$; hence, the greedy policy incurs logarithmic regret in this regime. When $t\geq C_{2}(\Delta_{*})$, the probability of selecting a suboptimal arm is of the order of $t^{-2}$, which makes the probability of selecting a suboptimal arm infinitely often zero. In conclusion, the greedy policy achieves bounded regret. Note that bounded regret is the striking difference between the standard MAB algorithms \cite{lairobbinsl,A2002} and the proposed policy. \begin{theorem} \label{thm:convergence} The sequence of arms selected by the greedy policy converges to the optimal arm almost surely, i.e., $\lim_{t\rightarrow\infty}I_{t}=k^{*}(\theta_{*})$ with probability 1. \end{theorem} Theorem \ref{thm:convergence} implies that a suboptimal arm is selected by the greedy policy only finitely many times. In other words, with probability $1$ there exists a finite time after which the greedy policy always selects the optimal arm. This is the biggest difference between standard MAB algorithms \cite{lairobbinsl,A2002}, in which suboptimal arms are selected infinitely many times, and the proposed greedy policy.
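The regime boundaries $C_{1}(\Delta_{*})$ and $C_{2}(\Delta_{*})$ can be computed numerically, as in the Python sketch below. In the sketch the search starts at $\lceil c\rceil$, where $c$ is the constant multiplying $\log(\tau)$; this is our reading of the definition, since the inequality $\tau\geq c\log(\tau)$ holds trivially for very small $\tau$ and the regime boundary corresponds to the larger solution of $\tau=c\log(\tau)$. The values of $D_{1}$, $\gamma_{1}$, $K$ and $\Delta_{*}$ used below are illustrative.
\begin{verbatim}
# Sketch: numerically computing the regime boundaries C_1 and C_2.
# Starting the search at ceil(c) is our interpretation of the definition
# (it skips the trivial small-tau solutions of tau >= c*log(tau)).
import math

def boundary(c):
    tau = max(int(math.ceil(c)), 2)
    while tau < c * math.log(tau):
        tau += 1
    return tau

def C1(Delta, K, D1=1.0, gamma1=1.0):
    return boundary(D1 ** (2 / gamma1) * K / (2 * Delta ** (2 / gamma1)))

def C2(Delta, K, D1=1.0, gamma1=1.0):
    return boundary(D1 ** (2 / gamma1) * K / (Delta ** (2 / gamma1)))

for Delta in (0.05, 0.1, 0.2):    # boundaries grow as Delta_* shrinks
    print(Delta, C1(Delta, K=3), C2(Delta, K=3))
\end{verbatim}
Consistently with Theorem \ref{thm:par_dep}, both boundaries increase as $\Delta_{*}$ decreases, and $C_{2}(\Delta_{*})\geq C_{1}(\Delta_{*})$.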
Although the parameter-dependent regret bound is finite, since $\lim_{\Delta_{*}\rightarrow0}C_{1}(\Delta_{*})=\infty$, in the worst case this bound reduces to the parameter-free regret bound given in Theorem \ref{thm:par_indep}. \section{Bayesian Risk Analysis of the Greedy Policy} In this section, assuming that the global parameter is drawn from a distribution $f(\theta_{*})$ on $\Theta$ that is unknown to the learner, we provide an analysis of the Bayesian risk, which is defined as follows: \begin{align} \hspace{-0.1in}\text{Risk}(T)=E_{\theta_{*}\sim f(\theta_{*})}\left[E_{\boldsymbol{X}_{t}\sim\boldsymbol{\nu}}\left[\sum_{t=1}^{T}r_{t}(\theta)|\theta_{*}=\theta\right]\right], \end{align} where $\boldsymbol{\nu}=\times_{k=1}^{K}\nu_{k}(\theta_{*})$ is the joint distribution of the rewards given that the parameter value is $\theta_{*}$. The Bayesian risk is equal to the expected regret with respect to the distribution of the global parameter $f(\theta_{*})$. Since the suboptimality distance is a function of the global parameter $\theta_{*}$, the prior on $\theta_{*}$ induces a distribution on the minimum suboptimality distance, which we denote by $g(\Delta_{*})$. A simple upper bound on the Bayesian risk can be obtained by taking the expectation of the regret bound given in Theorem \ref{thm:par_indep} with respect to $\theta_{*}$, which gives the bound $\text{Risk}(T)=O(T^{1-\frac{\gamma_{1}\gamma_{2}}{2}})$. Next, we will show that a tighter bound on the Bayesian risk can be derived if the following assumption holds. \begin{assumption} \label{ass:bayes_assumptions}The prior distribution on the global parameter is such that the minimum suboptimality distance $\Delta_{*}$ has a bounded density function, i.e., $g(\Delta_{*})\leq B$. One example of this is the case when $f(\theta_{*})$ is bounded. \end{assumption} Assumption \ref{ass:bayes_assumptions} is satisfied for many instances of the GMAB problem. An example is a GMAB problem with two arms, $f(\theta_{*})\sim\textrm{Uniform}([0,1])$, $\mu_{1}(\theta_{*})=\theta_{*}$ and $\mu_{2}(\theta_{*})=1-\theta_{*}$. For this example we have $g(\Delta_{*})\leq2$ for $\Delta_{*}\in[0,0.5]$. \begin{theorem} \label{thm:risk} Under Assumptions \ref{ass:holder} and \ref{ass:bayes_assumptions}, the Bayesian risk of the greedy policy is bounded by \\ (i) $\text{Risk}(T)=O(\log T)$, for $\gamma_{1}\gamma_{2}=1$. \\ (ii) $\text{Risk}(T)=O(T^{1-\gamma_{1}\gamma_{2}})$, for $\gamma_{1}\gamma_{2}<1$. \end{theorem} Our Bayesian risk bound for the greedy policy coincides with the Bayesian risk bound for the linearly-parametrized MAB problem given in \cite{Tsiklis_structured} when the arms are fully informative, i.e., $\gamma_{1}\gamma_{2}=1$. For this case, the optimality of the Bayesian risk bound is established in \cite{Tsiklis_structured}, in which a lower bound of $\Omega(\log T)$ is proven. Similar to the parameter-free regret bound given in Theorem \ref{thm:par_indep}, the Bayesian risk is also decreasing with the informativeness, and is minimized for the case when the arms are fully informative. \section{Extension to Bandits with Group Informativeness} Our global informativeness assumption can be relaxed to {\em group informativeness}. When the arms are group informative, reward observations from an arm provide information only about the rewards of the arms that are within the same group as the original arm. Let ${\cal C}=(C_{1},\ldots,C_{D})$ be the set of groups, and assume that they are known by the learner.
Then, a standard MAB algorithm such as UCB1 \cite{A2002} can be used to select the group, while the greedy policy can be used to select among the arms within a group. In this way, we can exploit the informativeness among the arms within a group, achieve bounded regret within each group, and find the group to which the best arm belongs by means of the standard MAB algorithm. However, in order to identify the group to which the optimal arm belongs, each group should be selected at least logarithmically many times by the standard MAB algorithm. As a result, the combination of the two algorithms yields a regret bound of $O(D\log T)$, which depends on the number of groups instead of the number of arms. The formal derivation of this result is left as future work. \section{Conclusion} In this paper we introduce a new class of MAB problems called global multi-armed bandits. This general class of GMAB problems encompasses the previously introduced linearly-parametrized bandits as a special case. We proved that the regret of the greedy policy for GMABs exhibits three regimes, which we characterized, and showed that the parameter-dependent regret is bounded, i.e., it is asymptotically finite. In addition to this, we also proved a parameter-free regret bound and a Bayesian risk bound, both of which grow sublinearly over time, where the rate of growth depends on the informativeness of the arms. Future work includes the extension of global informativeness to group informativeness, and a foresighted MAB problem, in which arm selection is based on a foresighted policy that explores the arms according to their level of informativeness rather than on the greedy policy. \section{Proofs} In this section, we provide the proofs of the theorems. The proofs of the lemmas are given in the supplementary material. Let $\boldsymbol{w}(t) := (w_1(t), \ldots, w_K(t))$ be the vector of weights and $\boldsymbol{N}(t) := (N_1(t), \ldots, N_K(t))$ be the vector of counters at time $t$. We have $\boldsymbol{w}(t) = \frac{1}{t} \boldsymbol{N}(t)$. Since $\boldsymbol{N}(t)$ depends on the history, both are random variables depending on the obtained rewards. \subsection{Proof of Theorem 1} By Lemma \ref{lemma:onesteploss} and Jensen's inequality, we have \begin{align} E[r_{t}(\theta_{*})]\leq2D_{2}E[|\theta_{*}-\hat{\theta}_{t}|]^{\gamma_{2}}. \label{eq:r_t} \end{align} By using Lemma \ref{lemma:gap} and Jensen's inequality, we have \begin{align} &E[|\theta_{*}-\hat{\theta}_{t}|] \leq \notag \\ &D_1 E[\sum_{k=1}^{K} w_k(t) E[|\hat{X}_{k,t} -\mu_k(\theta_{*})|\;| \boldsymbol{w}(t)]^{\gamma_1}], \label{eq:gap} \end{align} where $E[\cdot | \cdot]$ denotes the conditional expectation. Note that $\hat{X}_{k,t}=\frac{\sum_{x\in{\cal X}_{k,t}} x}{N_{k}(t)}$ and $E_{x\sim\nu_{k}(\theta_{*})}[x]=\mu_{k}(\theta_{*})$. Therefore, we can bound $E[|\hat{X}_{k,t}-\mu_{k}(\theta_{*})|\;| \boldsymbol{w}(t)]$ for each $k\in{\cal K}$ using the Chernoff-Hoeffding inequality. For each $k \in {\cal K}$, we have \begin{align} & E[|\hat{X}_{k,t}-\mu_{k}(\theta_{*})|\;| \boldsymbol{w}(t)] \notag \\ &=\int_{x=0}^{1}\!\text{Pr}(|\hat{X}_{k,t}-\mu_{k}(\theta_{*})|>x | \boldsymbol{w}(t))\,\mathrm{d}x \notag \\ &\leq \int_{x=0}^{\infty}\!2\exp(-2x^{2}N_{k}(t))\,\mathrm{d}x \leq\sqrt{\frac{\pi}{2N_{k}(t)}}, \label{eq:chernoff1} \end{align} where $N_k(t) = t w_k(t)$ is a random variable. The first inequality is a result of the Chernoff-Hoeffding bound.
Combining (\ref{eq:gap}) and (\ref{eq:chernoff1}), we get \begin{align} E[|\theta_{*} - \hat{\theta}_t|] \leq 2D_{1}\left(\frac{\pi}{2}\right)^{\frac{\gamma_1}{2}}\frac{1}{t^{\frac{\gamma_1}{2}}} E[\sum_{k=1}^K {w_k(t)}^{1- \frac{\gamma_1}{2}}]. \label{eq:gap2} \end{align} Since $w_k(t) \leq 1$ for all $k \in {\cal K}$, and $\sum_{k=1}^K w_k(t) =1$ for any possible $\boldsymbol{w}(t)$, we have $E[\sum_{k=1}^{K} w_{k}(t)^{1-\frac{\gamma_{1}}{2}}]\leq K^{\frac{\gamma_{1}}{2}}$. Then, combining (\ref{eq:r_t}) and (\ref{eq:gap2}), we have \begin{align} E[r_{t}(\theta_{*})]\leq 2 D_{1}^{\gamma_{2}}D_{2}\left(\frac{\pi}{2}\right)^{\frac{\gamma_{1}\gamma_{2}}{2}} K^{\frac{\gamma_{1}\gamma_{2}}{2}}\frac{1}{t^{\frac{\gamma_{1}\gamma_{2}}{2}}} . \end{align} \subsection{Proof of Theorem 2} The bound is a consequence of Theorem \ref{thm:onestepregret} and the inequality given in \cite{bound}, i.e., \begin{align} E[\text{Reg}(\theta_{*},T)] \leq 1 + \frac{2D_{1}^{\gamma_{2}}D_{2}\left(\frac{\pi}{2}\right)^{\frac{\gamma_{1}\gamma_{2}}{2}}K^{\frac{\gamma_{1}\gamma_{2}}{2}}}{1-\frac{\gamma_{1}\gamma_{2}}{2}}(1+ T^{1-\frac{\gamma_{1}\gamma_{2}}{2}}). \notag \end{align} \subsection{Proof of Theorem 3} We need to bound the probability of the event $\{I_{t}\neq k^{*}(\theta_*)\}$. Since at time $t$ the arm with the highest $\mu_{k}(\hat{\theta}_{t})$ is selected by the greedy policy, $\hat{\theta}_{t}$ should lie in $\Theta\setminus\Theta_{k^{*}(\theta_{*})}$ for the greedy policy to select a suboptimal arm. Therefore, we can write \begin{align} \{{I_{t}\neq k^{*}(\theta_{*})}\}=\{{\hat{\theta}_{t}\in\Theta\setminus\Theta_{k^{*}(\theta_{*})}}\}\subseteq{{\cal G}_{\theta_{*},\hat{\theta}_{t}}^{\Delta_{*}}} . \label{eq:evbound} \end{align} By Lemma \ref{lemma:eventbound} and (\ref{eq:evbound}), we have \begin{align} & \Pr(I_{t}\neq k^{*}(\theta_*))\leq\sum_{k=1}^{K}E[E[I({\cal F}_{\theta_*,\hat{\theta}_{t}}^{k}((\frac{\Delta_{*}}{D_{1}})^{\frac{1}{\gamma_{1}}}))| \boldsymbol{N}(t)]] \notag\\ & =\sum_{k=1}^{K}E[\Pr({\cal F}_{\theta_*,\hat{\theta}_{t}}^{k}((\frac{\Delta_{*}}{D_{1}})^{\frac{1}{\gamma_{1}}})| \boldsymbol{N}(t))] \notag\\ & \leq\sum_{k=1}^{K}2E[\exp(-2(\frac{\Delta_{*}}{D_{1}})^{\frac{2}{\gamma_{1}}}N_{k}(t))]\notag\\ & \leq 2K\exp(-2(\frac{\Delta_{*}}{D_{1}})^{\frac{2}{\gamma_{1}}}\frac{t}{K}), \label{eqn:probimportant} \end{align} where the first inequality follows from the union bound and the second inequality is obtained by using the Chernoff-Hoeffding bound. The last inequality is obtained by using the worst-case selection process $N_{k}(t)=\frac{t}{K}$ for all $k$. We have $\Pr(I_{t}\neq k^{*}(\theta_{*}))\leq\frac{1}{t}$ for $t>C_{1}(\Delta_{*})$ and $\Pr(I_{t}\neq k^{*}(\theta_{*}))\leq\frac{1}{t^{2}}$ for $t>C_{2}(\Delta_{*})$. The bound in the first regime is the result of Theorem \ref{thm:par_indep}. The bound in the second and third regimes is obtained by summing the probability given in (\ref{eqn:probimportant}) from $C_{1}(\Delta_{*})$ to $T$ and from $C_{2}(\Delta_{*})$ to $T$, respectively. \subsection{Proof of Theorem 4} Let $(\Omega,{\cal F},P)$ denote the probability space, where $\Omega$ is the sample space and ${\cal F}$ is the $\sigma$-algebra on which the probability measure $P$ is defined. Let $\omega \in \Omega$ denote a sample path. We will prove that there exists an event $N\in{\cal F}$ such that $P(N)=0$ and, if $\omega \in N^{c}$, then $\lim_{t\rightarrow\infty}I_{t}(\omega)=k^{*}(\theta_{*})$. Define the event ${\cal E}_{t} :=\{I_{t}\neq k^{*}(\theta_{*})\}$. We show in the proof of Theorem \ref{thm:par_dep} that $\sum_{t=1}^{\infty}P({\cal E}_{t})<\infty$.
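Indeed, by (\ref{eqn:probimportant}), $P({\cal E}_{t})\leq 2K\exp(-2(\frac{\Delta_{*}}{D_{1}})^{\frac{2}{\gamma_{1}}}\frac{t}{K})$ for every $t$, and this exponentially decaying bound is summable over $t$.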
By the Borel-Cantelli lemma, we have \begin{align} P({\cal E}_{t}\text{ infinitely often})=P(\limsup_{t\rightarrow\infty}{\cal E}_{t})=0. \end{align} Define $N :=\limsup_{t\rightarrow\infty}{\cal E}_{t}$, so that $P(N)=0$. We have \begin{align} N^{\text{c}}=\liminf_{t\rightarrow\infty}{\cal E}_{t}^{\text{c}}, \end{align} where $P(N^{\text{c}})=1-P(N)=1$, which means that, with probability one, $I_{t}=k^{*}(\theta_*)$ for all $t$ except for a finite number. \subsection{Proof of Theorem 5} \begin{proof} The one-step loss due to suboptimal arm selection with global parameter estimate $\hat{\theta}_{t}$ is given in Lemma \ref{lemma:onesteploss}. Recall that we have \begin{align} \{I_{t}\neq k^{*}(\theta_{*})\}\subseteq\{|\theta_{*}-\hat{\theta}_{t}|>\Delta_{*}\} . \notag \end{align} Let $Y_{\theta_{*},\hat{\theta}_{t}} :=|\theta_{*}-\hat{\theta}_{t}|$. Then, we have \begin{align} & \text{Risk}(T) \notag \\ &\leq2D_{2}\sum_{t=1}^{T}E_{\theta_{*}\sim f(\theta)}[E_{\boldsymbol{X}\sim\boldsymbol{\nu}}[Y_{\theta_{*},\hat{\theta}_{t}}^{\gamma_{2}}I(Y_{\theta_{*},\hat{\theta}_{t}}>\Delta_{*})]]\notag\\ & \leq2D_{2}\sum_{t=1}^{T}E_{\theta_{*} \sim f(\theta)}[E_{\boldsymbol{X}\sim\boldsymbol{\nu}}[Y_{\theta_{*},\hat{\theta}_{t}}I(Y_{\theta_{*},\hat{\theta}_{t}}>\Delta_{*})]]^{\gamma_{2}} \notag , \end{align} where $I(\cdot)$ is the indicator function, which is $1$ if its argument is true and zero otherwise. The first inequality follows from Lemma \ref{lemma:onesteploss} and the inclusion above. The second inequality is by Jensen's inequality and the fact that $I(\cdot)=I^{\gamma}(\cdot)$ for any $\gamma>0$. We now focus on the expectation expression for some arbitrary $t$. Let $f(\theta)$ denote the density function of the global parameter. \begin{align} & E_{\theta_{*}\sim f(\theta)}[E_{\boldsymbol{X}\sim\boldsymbol{\nu}}[Y_{\theta_{*},\hat{\theta}_{t}}I(Y_{\theta_{*},\hat{\theta}_{t}}>\Delta_{*})]]\notag\\ & =\int_{\theta_{*}=0}^{1}\! f(\theta_{*})\int_{x=0}^{\infty}\!\Pr(Y_{\theta_{*},\hat{\theta}_{t}}I(Y_{\theta_{*},\hat{\theta}_{t}}>\Delta_{*})\geq x)\,\mathrm{d}x\,\mathrm{d}{\theta}\notag\\ & =\int_{\theta_{*}=0}^{1}\! f(\theta_{*})\int_{x=\Delta_{*}}^{\infty}\!\Pr(Y_{\theta_{*},\hat{\theta}_{t}}\geq x)\,\mathrm{d}x\,\mathrm{d}{\theta}\notag\\ & =\int_{\Delta=0}^{1}\! g(\Delta)\int_{x=\Delta}^{\infty}\!\Pr(Y_{\theta_{*},\hat{\theta}_{t}}\geq x)\,\mathrm{d}x\,\mathrm{d}{\Delta}, \notag \end{align} where the last equality follows from a change of variables in the integral. Note that we have by Theorem \ref{thm:par_dep} \begin{align} \Pr(Y_{\theta_{*},\hat{\theta}_{t}}\geq x)\leq2K\exp(-2x^{\frac{2}{\gamma_{1}}}D_{1}^{-\frac{2}{\gamma_{1}}}\frac{t}{K}). \notag \end{align} Then, we have \begin{align} & E_{\theta_{*}\sim f(\theta)}[E_{\boldsymbol{X}\sim\boldsymbol{\nu}}[Y_{\theta_{*},\hat{\theta}_{t}}I(Y_{\theta_{*},\hat{\theta}_{t}}>\Delta_{*})]]\notag\\ & \leq2KB\int_{\Delta=0}^{1}\!\exp(-2{\Delta}^{\frac{2}{\gamma_{1}}}D_{1}^{-\frac{2}{\gamma_{1}}}\frac{t}{K})\,\mathrm{d}{\Delta}\notag\\ & \quad\times\int_{y=0}^{\infty}\!\exp(-2y^{\frac{2}{\gamma_{1}}}D_{1}^{-\frac{2}{\gamma_{1}}}\frac{t}{K})\,\mathrm{d}y\notag\\ & \leq2KB(\frac{\gamma_{1}}{2}2^{-\frac{\gamma_{1}}{2}}D_{1}K^{\frac{\gamma_{1}}{2}} \Gamma(\frac{\gamma_1}{2}))^{2}t^{-\gamma_{1}} , \notag \end{align} where the first inequality follows from $g(\Delta)\leq B$ (Assumption \ref{ass:bayes_assumptions}), the change of variable $y=x-\Delta$, and the fact that $(y+\Delta)^{\frac{2}{\gamma_{1}}}\geq y^{\frac{2}{\gamma_{1}}}+\Delta^{\frac{2}{\gamma_{1}}}$ since $\frac{2}{\gamma_{1}}\geq1$.
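The last inequality uses the closed form of the remaining integral: substituting $u=2D_{1}^{-\frac{2}{\gamma_{1}}}\frac{t}{K}\,y^{\frac{2}{\gamma_{1}}}$ gives \begin{align} \int_{y=0}^{\infty}\!\exp(-2y^{\frac{2}{\gamma_{1}}}D_{1}^{-\frac{2}{\gamma_{1}}}\frac{t}{K})\,\mathrm{d}y=\frac{\gamma_{1}}{2}\,2^{-\frac{\gamma_{1}}{2}}D_{1}\left(\frac{K}{t}\right)^{\frac{\gamma_{1}}{2}}\Gamma\Big(\frac{\gamma_{1}}{2}\Big), \notag \end{align} and the integral in $\Delta$ is bounded by the same quantity after extending its upper limit to $\infty$.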
By summing these from $1$ to $T$, we get \[ \text{Risk}(T)\leq\left\{ \begin{array}{lr} 1 + A(1+2\log T) & \text{if }\gamma_{1}\gamma_{2}=1\\ 1+ A(1+ \frac{1}{1-\gamma_{1} \gamma_{2}} T^{1-\gamma_{1}\gamma_{2}}) & \text{if }\gamma_{1}\gamma_{2}<1 \end{array}\right. \] where $A=2D_{2}(\frac{B\gamma_{1}^{2}D_{1}^{2}K^{1+\gamma_{1}}}{2^{1+\gamma_{1}}}\Gamma^{2}(\frac{\gamma_{1}}{2}))$. \end{proof} \section{Appendix} \begin{lemma} \label{prop:non_zero} Given any $\theta_{*}\in\Theta$, there exists a constant $\epsilon_{\theta_{*}}=\delta_{\min}(\theta_{*})^{1/\gamma_{2}}/(2D_{2})^{1/\gamma_{2}}$, where $D_{2}$ and $\gamma_{2}$ are the constants given in Assumption 1, such that $\Delta_{\min}(\theta_{*})\geq\epsilon_{\theta_{*}}.$ In other words, the minimum suboptimality distance is always positive. \end{lemma} \begin{proof} For any suboptimal arm $k\in{\cal K}-k^{*}(\theta)$, we have $\mu_{k^{*}(\theta)}(\theta)-\mu_{k}(\theta)\geq\delta_{\min}(\theta)>0.$ We also know that $\mu_{k}(\theta')\geq\mu_{k^{*}(\theta)}(\theta')$ for all $\theta'\in\Theta_{k}$. Hence for any $\theta'\in\Theta_{k}$ at least one of the following should hold: (i) $\mu_{k}(\theta')\geq\mu_{k}(\theta)+\delta_{\min}(\theta)/2$, (ii) $\mu_{k^{*}(\theta)}(\theta')\leq\mu_{k^{*}(\theta)}(\theta)-\delta_{\min}(\theta)/2$. If neither of these holds, then we must have $\mu_{k}(\theta')<\mu_{k^{*}(\theta)}(\theta')$, which is false. This implies that we either have $\mu_{k}(\theta')-\mu_{k}(\theta)\geq\delta_{\min}(\theta)/2$ or $\mu_{k^{*}(\theta)}(\theta)-\mu_{k^{*}(\theta)}(\theta')\geq\delta_{\min}(\theta)/2$, or both. Recall that from Assumption 1 we have $|\theta-\theta'|\geq|\mu_{j}(\theta)-\mu_{j}(\theta')|^{1/\gamma_{2}}/D_{2}^{1/\gamma_{2}}$ for every arm $j$. In either case this implies that $|\theta-\theta'|\geq\epsilon_{\theta}$ for all $\theta'\in\Theta_{k}$. \end{proof} \begin{lemma} \label{lemma:gap} Consider a run of the greedy policy until time $t$. Then, the following relation between $\hat{\theta}_{t}$ and $\theta_{*}$ holds with probability one: $|\hat{\theta}_{t}-\theta_{*}|\leq\sum_{k=1}^{K}w_{k}(t)D_{1}|\hat{X}_{k,t}-\mu_{k}(\theta_{*})|^{\gamma_{1}}$. \end{lemma} \begin{proof} \begin{align} &|\theta_{*} - \hat{\theta}_t| = |\sum_{k=1}^{K} w_k(t)\hat{\theta}_{k,t} -\theta_{*}| \notag \\ & \leq \sum_{k=1}^{K} w_k(t)|\theta_{*} - \hat{\theta}_{k,t}| \notag \\ & = \sum_{k=1}^{K} w_k(t)|\mu^{-1}_k(\hat{X}_{k,t}) - \mu^{-1}_k(\mu_k(\theta_{*}))| \notag \\ & \leq \sum_{k=1}^{K} w_k(t)D_1|\hat{X}_{k,t} - \mu_k(\theta_{*})|^{\gamma_1} \end{align} where the second step follows from the triangle inequality (using $\sum_{k=1}^{K}w_{k}(t)=1$) and the last inequality follows from Assumption 1. \end{proof} \begin{lemma} \label{lemma:onesteploss} For a given global parameter $\theta_*$, the one-step regret of the greedy policy is bounded by $r_{t}(\theta_{*})=\mu^{*}(\theta_{*})-\mu_{I_{t}}(\theta_{*}) \leq 2D_{2}|\theta_{*}-\hat{\theta}_{t}|^{\gamma_{2}}$ with probability one, where $I_{t}$ is the arm selected by the greedy policy at time $t\geq2$. \end{lemma} \begin{proof} Note that $I_t \in \argmax_{k \in {\cal K}} \mu_k(\hat{\theta}_t)$. Therefore, we have \begin{align} \mu_{I_t}(\hat{\theta}_t) - \mu_{k^{*}(\theta_{*})}(\hat{\theta}_t) \geq 0 \label{eq:add} . \end{align} We have $\mu^{*}(\theta_{*}) = \mu_{k^{*}(\theta_{*})}(\theta_{*})$.
Then, we can bound \begin{align} &\mu^{*}(\theta_{*})-\mu_{I_{t}}(\theta_{*}) \notag \\ &= \mu_{k^{*}(\theta_{*})}(\theta_{*}) - \mu_{I_{t}}(\theta_{*}) \notag \\ & \leq \mu_{k^{*}(\theta_{*})}(\theta_{*}) - \mu_{I_{t}}(\theta_{*}) + \mu_{I_t}(\hat{\theta}_t) - \mu_{k^{*}(\theta_{*})}(\hat{\theta}_t) \notag \\ & = \mu_{k^{*}(\theta_{*})}(\theta_{*}) - \mu_{k^{*}(\theta_{*})}(\hat{\theta}_t) + \mu_{I_t}(\hat{\theta}_t)- \mu_{I_{t}}(\theta_{*}) \notag \\ & \leq 2D_2|\theta_{*} -\hat{\theta}_t|^{\gamma_2} \end{align} where the first inequality follows from (\ref{eq:add}) and the second inequality from Assumption 1. \end{proof} \begin{lemma} \label{lemma:eventbound} For any $t \geq 2$ and given global parameter $\theta_{*}$, we have ${\cal G}_{\theta_{*},\hat{\theta}_{t}}^{x}\subseteq \cup_{k=1}^{K}{\cal F}_{\theta_{*},\hat{\theta}_{t}}^{k}((\frac{x}{D_{1}})^{\frac{1}{\gamma_{1}}})$ with probability one. \end{lemma} \begin{proof} \begin{align} &\{ |\theta_{*} - \hat{\theta}_t| \geq x \} \notag \\ &\subseteq \{ \sum_{k=1}^{K} w_k(t) D_1 |\hat{X}_{k,t} - \mu_k(\theta_{*})|^{\gamma_1} \geq x\} \notag \\ & \subseteq \cup_{k=1}^{K} \{ w_k(t) D_1 |\hat{X}_{k,t} - \mu_k(\theta_{*})|^{\gamma_1} \geq w_k(t) x\} \notag \\ & = \cup_{k=1}^{K} \{ |\hat{X}_{k,t} - \mu_k(\theta_{*})| \geq (\frac{x}{D_1})^{\frac{1}{\gamma_1}} \} \end{align} where the first inclusion follows from Lemma \ref{lemma:gap} and the second from the fact that $\sum_{k=1}^K w_k(t) =1$. \end{proof} \end{document}
math
\begin{document} \title{Generic stability, regularity, and quasi-minimality} \begin{abstract} We study the notions generic stability, regularity, homogeneous pregeometries, quasiminimality, and their mutual relations, in arbitrary first order theories. We prove that ``infinite-dimensional homogeneous pregeometries" coincide with generically stable strongly regular types $(p(x), x=x)$. We prove that in a theory without the strict order property, regular types are generically stable, and prove analogous results for quasiminimal structures. We prove that the ``generic type" of a quasiminimal structure is ``locally strongly regular". \end{abstract} \section{Introduction} The first author was motivated partly by hearing Wilkie's talks on his program for proving Zilber's conjecture that the complex exponential field is quasiminimal (definable subsets are countable or co-countable), and wondering about the first order (rather than infinitary) consequences of the approach. The second author was partly motivated by his interest in adapting his study of minimal structures (definable subsets are finite or cofinite) and his dichotomy theorems (\cite{Tan}), to the quasiminimal context. Zilber's conjecture (and the approach to it outlined by Wilkie) is closely related to the existence of a canonical pregeometry on the complex exponential field. In fact in section 5 of this paper we discuss to what extent a pregeometry can be recovered just from quasi-minimality, sometimes assuming the presence of a definable group structure. This continues in a sense an earlier study of the general model theory of quasi-minimality by Itai, Tsuboi, and Wakai \cite{ITW}. A pregeometry is a closure relation on subsets of a not necessarily saturated structure $M$ satisfying usual properties (including exchange). We will also assume ``homogeneity" ($\tp(b/A)$ is unique for $b\notin \cl(A)$) and ``infinite-dimensionality" ($\dim(M)$ is infinite). One of the points of this paper then is that the canonical ``generic type" $p$ of the pregeometry is ``generically stable" and regular. This includes the statement that on realizations of $p$, the closure operation is precisely forking in the sense of Shelah. See Theorem 3 of section 4. Generic stability, the stable-like behavior of a given complete type vis-a-vis forking, was studied in several papers including \cite{Shelah783} and \cite{NIPII}, but mainly in the context of theories with $NIP$ (i.e. without the independence property). Here we take the opportunity, in section 2, to give appropriate definitions in an arbitrary ambient theory $T$, as well as discussing generically stable (strongly) regular types. The notion of a regular type is central in stability theory and classification theory, where the counting of models of superstable theories is related to dimensions of regular types. Here (section 3) we give appropriate generalizations of (strong) regularity for an arbitrary theory $T$ (although it does not agree with the established definitions for simple theories). In section 6 a local version of regularity is given and applied to the analysis of quasiminimal structures (see Corollaries 2 and 3). \noindent The current paper is a revised and expanded version of a preprint ``Remarks on generic stability, pregeometries, and quasiminimality" by the first author, which was written and circulated in June 2009. \noindent The first author would like to thank Clifton Ealy, Krzysztof Krupinski, and Alex Usvyatsov for various helpful conversations and comments. 
After seeing the first author's preliminary results (and talk at a meeting in Lyon in July 2009), Ealy pointed out that the commutativity of invariant-regular groups is problematic (see our Question at the end of section 3) and suggested possible directions towards a counterexample. Krupinski independently came up with examples such as Example 3 of the current paper. And after a talk by the first author on the same subject in Bedlewo in August 2009, Usvyatsov pointed out examples such as Examples 1 and 2 of the current paper. \noindent We now give our conventions and give a few basic definitions relevant to the paper. $T$ denotes an arbitrary complete $1$-sorted theory in a language $L$ and $\bar{M}$ denotes a saturated (monster) model of $T$. As a rule $a,b,c,...$ denote elements of $\bar M$, and $\bar{a},\bar{b},\bar{c}$ denote finite tuples of elements. (But in some situations $a,b,...$ may denote elements of ${\bar M}^{eq}$.) $A,B,C$ denote small subsets, and $M,M_0, ...$ denote small elementary submodels of $\bar{M}$. A global type $p(\bar{x})\in S(\bar{M})$ is said to be $A$-invariant if $p$ is $\Aut(\bar{M}/A)$-invariant; $p$ is invariant if it is $A$-invariant for some small $A$. For $A$-invariant $p$ we can form a Morley sequence $I=(\bar{a}_i\,:\,i<\omega)$ of $p$ over $A$, by letting $\bar{a}_n\models p|(A,\bar{a}_0,...,\bar{a}_{n-1})$, and similarly for any ordinal $\lambda$ in place of $\omega$. The $A$-invariance of $p(\bar{x})$ implies that Morley sequences are indiscernible and that $\tp(I/A)$ depends only on $p(\bar{x})$ and $A$; in particular, $p^{(n)}(\bar{x}_1,...,\bar{x}_n)$ is well defined as the type of a Morley sequence in $p$ of length $n$. Note that the $p^{(n)}$'s are all invariant. We will say that $p$ is \emph{symmetric} if $p^{(2)}(\bar{x}_1,\bar{x}_2)=p^{(2)}(\bar{x}_2,\bar{x}_1)$, otherwise $p$ is \emph{asymmetric}. The symmetry of $p$ is equivalent to: every Morley sequence in $p$ over (any small) $A$ is totally indiscernible, in which case all the $p^{(n)}$'s are invariant under permutations of (tuples of) variables. Recall that $(P,\cl)$, where $\cl$ is an operation on subsets of $P$, is a \emph{pregeometry} (or $\cl$ is a pregeometry on $P$) if for all $A,B\subseteq P$ and $a,b\in P$ the following hold: Monotonicity: \ \ $A\subseteq B$ \ implies \ $A\subseteq\cl(A)\subseteq\cl(B)$; \ \ Finite character: \ \ $\cl(A)=\bigcup \{\cl(A_0)\,|\,A_0\subseteq A$ finite$\}$; \ \ Transitivity: \ \ $\cl(A)=\cl(\cl(A))$; and \ \ Exchange (symmetry): \ \ $a\in\cl(A\cup\{b\})\setminus\cl(A)$ \ \ implies \ \ $b\in\cl(A\cup\{a\})$. \ \noindent $\cl$ is called a \emph{closure operator} if it satisfies the first three conditions. \section{Generic stability} \begin{defn}\label{D1} A non-algebraic global type $p(\bar{x})\in S(\bar{M})$ is generically stable if, for some small $A$, it is $A$-invariant and: \begin{center} if $(\bar{a}_i\,:\,i<\omega)$ is a Morley sequence in $p$ over $A$ then for any formula $\phi(\bar{x})$ (with parameters from $\bar{M}$) \ $\{i\,:\ \models\phi(\bar{a}_i)\}$ is either finite or co-finite.\end{center} \end{defn} \begin{rmk} If $p$ is generically stable then as a witness-set $A$ in the definition we can take any small $A$ such that $p$ is $A$-invariant. \end{rmk} \begin{prop}\label{GS} Let $p(\bar{x})\in S(\bar{M})$ be generically stable and $A$-invariant.
Then: (i) \ For any formula $\phi(\bar{x},\bar{y})\in L$ there is $n_{\phi}$ such that for any Morley sequence $(\bar{a}_i\,:\,i<\omega)$ of $p$ over $A$, and any $\bar{b}$: \begin{center} $\phi(\bar{x},\bar{b})\in p $ \ \ \ iff \ \ \ $\models\bigvee_{w\subset\{0,1,...2n_{\phi}\},\,|w|=n_{\phi}+1}\bigwedge_{i\in w}\phi(\bar{a}_i,\bar{b}).$\end{center} (ii) \ $p$ is definable over $A$ and almost finitely satisfiable in $A$. (iii) \ Any Morley sequence of $p$ over $A$ is totally indiscernible. (iv) \ $p$ is the unique global nonforking extension of $p|A$. \end{prop} \begin{proof} This is a slight elaboration of the proof of Proposition 3.2 from \cite{NIPII}. (i) \ By compactness, for any $\phi(\bar{x},\bar{y})\in L$ there is $n_{\phi}$ such that for any Morley sequence $(\bar{a}_i\,:\,i<\omega)$ in $p$ over $A$, and any $\bar{b}$ either at most $n_{\phi}$ many $\bar{a}_i$'s satisfy $\phi(\bar{x},\bar{b})$ or at most $n_{\phi}$ many $\bar{a}_i$'s satisfy $\neg\phi(\bar{x},\bar{b})$. (ii) \ By (i) $p$ is definable (over a Morley sequence), so by $A$-invariance $p$ is definable over $A$. Let $M$ be a model containing $A$, $\phi(\bar{x},\bar{c})\in p$ and let $I=(\bar{a}_i\,:\,i<\omega)$ be a Morley sequence in $p$ over $A$ such that $\tp(I/M\bar{c})$ is finitely satisfiable in $M$. Then, by (i), $\phi(\bar{x},\bar{c})$ is satisfied by some $\bar{a}_i$ hence also by an element of $M$. (iii) \ This follows from (i) exactly as in the proof of the Proposition 3.2 of \cite{NIPII} (where NIP was not used). (iv) \ It is clearly enough to prove it for $B=A$. Let $q(\bar{x})$ be a global nonforking extension of $p|A$. We will prove that $q=p$. \noindent \emph{Claim.} Suppose $(\bar{a}_0,....,\bar{a}_n,\bar{b})$ are such that $\bar{a}_0\models q|A$,\, $\bar{a}_{i+1}\models q|(A,\bar{a}_0,....,\bar{a}_i)$ and $b\models q|(A,\bar{a}_0,....,\bar{a}_n)$. Then $(\bar{a}_0,....,\bar{a}_n,\bar{b}) $ is a Morley sequence in $p$ over $A$. \noindent\emph{Proof.} \ We prove it by induction. Suppose we have chosen $\bar{a}_0,...,\bar{a}_n$ as in the claim and we know (induction hypothesis) that $(\bar{a}_0,...,\bar{a}_n)$ begins a Morley sequence $I=(\bar{a}_i\,|\,i<\omega)$ in $p$ over $A$. Suppose that $\phi(\bar{a}_0,...,\bar{a}_n,\bar{x})\in q$. Then we claim that $\phi(\bar{a}_0,...,\bar{a}_{n-1},\bar{a}_i,x)\in q$ for all $i>n$. For otherwise, without loss of generality \ $\neg\phi(\bar{a}_0,...,\bar{a}_{n-1},\bar{a}_{n+1},\bar{x})\in q$. But then, by indiscernibility of $\{\bar{a}_i\bar{a}_{i+1}\,:\,i=n,n+2,n+4,...\}$ over $(A,\bar{a}_0,...,\bar{a}_{n-1})$, and the nondividing of $q$ over $A$, we have that \begin{center} $\{\phi(\bar{a}_0,...,\bar{a}_{n-1},\bar{a}_i,x)\wedge \neg\phi(\bar{a}_0,...,\bar{a}_{n-1},\bar{a}_{i+1},\bar{x})\,:\,i=n,n+2,n+4,...\}$ \end{center} is consistent, which contradicts Definition \ref{D1}. Hence \,$\models \phi(\bar{a}_0,...,\bar{a}_{n-1},\bar{a}_i,\bar{b})$ for all $i>n$. So by part (i), $\phi(\bar{a}_0,...,\bar{a}_{n-1},\bar{x},\bar{b})\in p(\bar{x})$. The inductive assumption gives that $\tp(\bar{a}_0,...,\bar{a}_{n-1},\bar{a}_n/A)= \tp(\bar{a}_0,...,\bar{a}_{n-1},\bar{b}/A)$, so by $A$-invariance of $p$, $\phi(\bar{a}_0,...,\bar{a}_{n-1},\bar{x},\bar{a}_n)\in p(\bar{x})$. Thus $\models\phi(\bar{a}_0,...,\bar{a}_{n-1},\bar{a}_{n+1},\bar{a}_n)$ and, by total indiscernibility of $I$ over $A$ (part (iii)), $\models\phi(\bar{a}_0,...,\bar{a}_{n-1},\bar{a}_n,\bar{a}_{n+1})$. 
We have shown that $q|(A,\bar{a}_0,...,\bar{a}_n)=p|(A,\bar{a}_0,...,\bar{a}_n)$, which allows the induction process to continue. The claim is proved. Now suppose that $\phi(\bar{x},\bar{c})\in q(\bar{x})$. Let $\bar{a}_i$ for $i<\omega$ be such that $\bar{a}_i$ realizes $q|(A,\bar{c},\bar{a}_0,...,\bar{a}_{i-1})$ for all $i$. By the claim $(\bar{a}_i\,;\,i<\omega)$ is a Morley sequence of $p$ over $A$. But $\phi(\bar{a}_i,\bar{c})$ for all $i$, hence by the proof of (i), $\phi(\bar{x},\bar{c})\in p$. So $q=p$. \end{proof} Let us restate the notion of generic stability for groups from \cite{NIPII} in the context of connected groups. Recall that a definable (or even type-definable) group $G$ is connected if it has no relatively definable subgroup of finite index. A type $p(x)\in S_G(\bar{M})$ is left $G$-invariant iff for all $g\in G$: \begin{center} \ $(g\cdot p)(x)=^{def}\{\phi(g^{-1}\cdot x)\,: \phi(x)\in p(x)\}=p(x)$;\end{center} likewise for right $G$-invariant. \begin{defn} Let $G$ be a definable (or even type-definable) connected group in $\bar{M}$. $G$ is \emph{generically stable} if there is a global complete 1-type $p(x)$ extending '$x\in G$' such that $p$ is generically stable and left $G$-invariant. \end{defn} As in the NIP context, we show that a generically stable left invariant type is also right invariant and is unique such (and we will call it the generic type): \begin{lem} Suppose that $G$ is generically stable, witnessed by $p(x)$. Then $p(x)$ is the unique left-invariant and also the unique right-invariant type. \end{lem} \begin{proof} First we prove that $p$ is right invariant. Let $(a,b)$ be a Morley sequence in $p$ over (any small) $A$. By left invariance \,$g=a^{-1}\cdot b\models p|A$. By total indiscernibility we have $\tp(a,b)=\tp(b,a)$, so \,$g^{-1}=b^{-1}\cdot a\models p|A$. This proves that $p=p^{-1}$. Now, for any $g\in G$ we have \ \begin{center} $p=g^{-1}\cdot p=p^{-1}=(g^{-1}\cdot p)^{-1}=p^{-1}\cdot g=p\cdot g$ , \end{center} so $p$ is also right-invariant. For the uniqueness, suppose that $q$ is a left invariant global type and we prove that $p=q$. Let $\phi(x)\in q$ be over $A$, let $I=(a_i\,:\,i\in\omega)$ be a Morley sequence in $p$ over $A$, and let $b\models q|(A,I)$. Then, by left invariance of $q$, for all $i\in\omega$ we have $\models\phi(a_i\cdot b)$. By Proposition \ref{GS}(i) we get $\phi(x\cdot b)\in p(x)$ and, by right invariance, $\phi(x)\in p(x)$. Thus $p=q$. \end{proof} \section{Global regularity} For stable $T$, a stationary type $p(x)\in S(A)$ is said to be regular if for any $B\supseteq A$, and $b$ realizing a forking extension of $p$ over $B$, $p|B$ (the unique nonforking extension of $p$ over $B$) has a unique complete extension over $B,b$. Appropriate versions of regularity (where we do not have stationarity) have been given for simple theories for example. Here we give somewhat different versions for global ``invariant" types in arbitrary theories. \begin{defn}Let $p(\bar{x})$ be a global non-algebraic type. (i) \ $p(\bar{x})$ is said to be \emph{invariant-regular} if, for some small $A$, it is $A$-invariant and for any $B\supseteq A$ and $\bar{a}\models p|A$: \ either $\bar{a}\models p|B$ \ or \ $p|B\vdash p|B\bar{a}$. (ii) \ Suppose $\phi(\bar{x})\in p$. We say that $(p(\bar{x}),\phi(\bar{x}))$ is \emph{invariant-strongly regular} if, for some small $A$, it is $A$-invariant and for all $B\supseteq A$ and $\bar{a}$ satisfying $\phi(\bar{x})$, either \ $\bar{a}\models p|B$ \ or \ $p|B\vdash p|B\bar{a}$. 
(iii) \ Likewise we have the notions \emph{definable -(strongly) regular} and \emph{generically stable - (strongly) regular}. \end{defn} \begin{rmk}\label{Rreg} If $p$ is invariant-regular then as a witness-set $A$ in the definition we can take any small $A$ such that $p$ is $A$-invariant. Likewise for strongly regular types. \end{rmk} \begin{defn} Let $N$ be a submodel of $\bar{M}$ (possibly $N=\bar{M}$), and let $p(x)\in S_1(N)$. The operator $\cl_p$ is defined on (all) subsets of $N$ by: \begin{center} $\cl_p(A)=\{a\in N\,|\, a\nmodels p|A\}$.\end{center} \end{defn} \begin{rmk}\label{R22}(i) \ Let $p\in S_1(\bar{M})$ be $\emptyset$-invariant and $\phi(x)\in p_0(x)=p(x)|\emptyset$. Then regularity of $p$ can be expressed in terms of $\cl_p$: - \ $(p(x),x=x)$ is invariant-strongly regular iff $p|A\vdash p|\cl_p(A)$ for any small $A$. - \ $(p(x),\phi(x))$ is invariant-strongly regular iff $p|A\vdash p|A\cup(\cl_p(A)\cap\phi(\bar{M}))$ for any small $A$ . - \ $p$ is invariant-regular iff $p|A\vdash p|A\cup(\cl_p(A)\cap p_0(\bar{M}))$ for any small $A$. \ If we consider $\cl_p$ only as an operation on subsets of $p_0(\bar{M})$ we have that $p$ is invariant-regular just if $p|A\vdash p|\cl_p(A)$ for all $A\subset p_0(\bar{M})$. (ii) \ $\cl_p$ satisfies the monotonicity and has finite character, but in general it is not a closure operator. $\cl_p(\cl_p(\emptyset))=\bar{M}$ can easily happen with $RM(p)=2$ (and $T$ is $\omega$-stable). \end{rmk} \begin{lem}\label{Lsym} Suppose $p\in S_1(\bar{M})$ is $\emptyset$-invariant and let $p_0=p|\emptyset$. (i) \ If $p$ is invariant-regular then: - \ (the restriction of) $\cl_p$ is a closure operator on $p_0(\bar{M})$. - \ $\cl_p$ is a pregeometry operator on $p_0(\bar{M})$ \ iff \ every Morley sequence in $p$ over $\emptyset$ is totally indiscernible. (ii) \ $(p(x),x=x)$ is strongly regular \ iff \ $\cl_p$ is a closure operator on $\bar{M}$. (iii) \ Suppose that $(p(x),x=x)$ is strongly regular. Then $\cl_p$ is a pregeometry operator on $\bar{M}$ iff every Morley sequence in $p$ over $\emptyset$ is totally indiscernible. \end{lem} \begin{proof} (i) \ Suppose that $p$ is invariant-regular and consider $\cl_p$ only as an operation on subsets of $p_0(\bar{M})$. Let $A\subset p_0(\bar{M})$ and let $a\models p_0$. Then: \begin{center} $a\notin\cl_p(A)$ \ iff \ $a\models p|A$ \ iff \ $a\models p|\cl_p(A)$ \ iff \ $a\notin\cl_p(\cl_p(A))$;\end{center} The first and the last equivalence follow from the definition of $\cl_p$, and the middle one is by Remark \ref{R22}(i). Thus $\cl_p(A)=\cl_p(\cl_p(A))$. The other clause is proved as in (iii) below. (ii) \ As in (i), for any $A\subset \bar{M}$ we have: \begin{center} $a\notin\cl_p(A)$ \ iff \ $a\models p|A$ \ iff \ $a\models p|\cl_p(A)$ \ iff \ $a\notin\cl_p(\cl_p(A))$,\end{center} and $\cl_p$ is a closure operator on $\bar{M}$. (iii) \ Suppose that every Morley sequence in $p$ over $\emptyset$ is symmetric. To show that $\cl_p$ is a pregeometry operator, by part (ii), it suffices to verify the exchange property over finite $A\subset M$. Since, by (ii), $\cl_p$ is a closure operator on $M$ there is a Morley sequence in $p$ (over $\emptyset$) $(a_1,...,a_n)\in A^n$ such that $\cl_p(A)= \cl_p(a_1,..,a_n)$. Now, let $a\models p|A$ and let $b\in \bar{M}$. Note that $(a_1,...,a_n,a)$ is a Morley sequence and that $\cl_p(aA)=\cl_p(a_1,...,a_n,a)$. 
We have: \begin{center} $b\notin\cl_p(aA)$ \ iff \ $b\notin \cl_p(a_1,...,a_n,a)$ \ iff \ $(a_1,...,a_n,a,b)$ is a Morley sequence \ iff \ $(a_1,...,a_n,b,a)$ is a Morley sequence \ iff \ $a\notin\cl_p(Ab)$.\end{center} The exchange follows. The other direction is similar. \end{proof} \begin{thm}\label{Pr1} Suppose that $p(x)\in S_1(\bar{M})$ is $\emptyset$-invariant and regular. Then exactly one of the following cases holds: (1) \ $p$ is symmetric, in which case $\cl_p$ is a pregeometry operator on $(p|\emptyset)(\bar{M})$, and if $(p(x),\phi(x))$ is strongly regular then $\cl_p$ is a pregeometry operator on $\phi(\bar{M})$. (2) \ $p$ is asymmetric, in which case there exist a finite $A$ and an $A$-definable partial order $\leq$ such that every Morley sequence in $p$ over $A$ is strictly increasing. \end{thm} \begin{proof} If $p$ is symmetric then every Morley sequence in $p$ over $\emptyset$ is totally indiscernible and (1) follows from Lemma \ref{Lsym}(i). So suppose that $p$ is asymmetric. Then there exists an asymmetric Morley sequence in $p$ over some finite $A'$; let $(c_1,c_2,...,c_n,a,b)$ be a shortest such sequence and let $A=A'\bar{c}$. By $\emptyset$-invariance we have $\tp(a,b/A)\neq \tp(b,a/A)$. Then $(a,b)$ is an asymmetric Morley sequence over $A$ so let $\phi(x,y)\in\tp(a,b/A)$ be asymmetric: \ $\models\phi(x,y)\rightarrow\neg\phi(y,x)$. \ Then from $a\in\cl_p(bA)$ and $b\notin\cl_p(aA)$ it follows that $\phi(a,x)$ is large while $\phi(x,b)$ is small. By invariance, $\phi(x,a)$ is small, too. We claim that $$(p|A)(t)\cup\{\phi(t,a)\}\cup\{\neg\phi(t,b)\}$$ is inconsistent. Otherwise, there is $d$ realizing $p|A$ such that \ $\models\phi(d,a)\wedge \neg\phi(d,b)$. Then $d$ does not realize $p|(A,a)$ (witnessed by $\phi(x,a)$) so, by regularity, $p|(A,a)\vdash p|(A,a,d)$ and thus $b\models p|(A,a,d)$. In particular $b\models p|(A,d)$ and since, by invariance, $\phi(d,x)$ is large we conclude $\models\phi(d,b)$. A contradiction. From the claim, by compactness, we find $\theta(t)\in p|A $ such that \begin{center} $\models(\forall t)(\phi(t,a)\wedge\theta(t)\rightarrow \phi(t,b)).$\end{center} Let \ $x\lq y$ \ be \ $(\forall t)(\phi(t,x)\wedge\theta(t)\rightarrow \phi(t,y)).$ \ Clearly, $\lq$ defines a quasi order and $a\lq b$. Also: \begin{center} $\models \phi(a,b)\wedge \theta(a)\wedge\neg\phi(a,a)$;\end{center} The first conjunct follows by our choice of $\phi$, the second from $a\models p|A$, and the third from the asymmetry of $\phi$. Altogether they imply $b\nlq a$. Thus if $x<y$ is $x\lq y\wedge y\nlq x$ we have $a<b$. \end{proof} The next examples concern issues of whether symmetric regular types are definable or even generically stable. But we first give a case where this is true (although it depends formally on Theorem 3 of the next section). \begin{cor} Suppose that $(p(x),x=x)$ is invariant-strongly regular and symmetric. Then $p(x)$ is generically stable. \end{cor} \begin{proof} If $(p(x),x=x)$ is invariant-strongly regular and symmetric then, by Theorem \ref{Pr1}(1), $\cl_p$ is a pregeometry operator on $\bar{M}$, and then $p(x)$ is generically stable by Theorem \ref{Tpr}(ii). \end{proof} \begin{exm} A symmetric, definable-strongly regular type which is not generically stable. \ \noindent Let $L=\{U,V,E\}$ where $U,V$ are unary and $E$ is a binary predicate. Consider the bipartite graph $(M,U^M,V^M,E^M)$ where $U^M=\omega$, $V^M$ is the set of all finite subsets of $\omega$, $M=U^M\cup V^M$, and $E^M=\{(u,v)\,:\,u\in U^M, \ v\in V^M, \ \textmd{and} \ u\in v\}$.
Let $A\subset M$ be finite. Then: \begin{center} If \ $(c_1,...,c_n)\,,(d_1,...,d_n)\in (U^M)^n$ \ have the same quantifier-free type over $A$ then $\tp(c_1,...,c_n/A)=\tp(d_1,...,d_n/A)$,\end{center}since the involution of $\omega$ mapping $c_i$'s to $d_i$'s respectively, and fixing all the other elements of $\omega$ is an $A$-automorphism of $M$. Note that this is expressible by a set of first-order sentences, so is true in the monster. Further, if $e_1,...,e_n\in U^M$ are distinct and have the same type over $A$ then, since every permutation of $\omega$ which permutes $\{e_1,...,e_n\}$ and fixes all the other elements of $\omega$ is an $A$-automorphism of $M$, $(e_1,...,e_n)$ is totally indiscernible over $A$. This is also expressible by a set of first-order sentences. Let $p(x)\in S_1(M)$ be the type of a ``new" element of $U$ which does not belong to any element of $V^M$. Then, by the above, $p$ is definable, its global heir $\bar{p}$ is symmetric, and $(\bar{p}(x),U(x))$ is strongly regular. \end{exm} \begin{exm} A symmetric invariant - strongly regular type which is not definable. \noindent Consider the bipartite graph $(M,U^M,V^M,E^M)$ where $U^M=\omega$, $V^M$ consists of all finite and co-finite subsets of $\omega$, $M=U^M\cup V^M$, and $E^M$ is $\in$. \ Let $\bar{M}$ be the monster and let $p(x)\in S_1(\bar{M})$ be the type of a new element of $U^{\bar{M}}$, which belongs to all co-finite members of $V^{\bar{M}}$ (and no others). Arguing as in the previous example $(p(x),U(x))$ is strongly regular and symmetric. Since 'being a co-finite subsets of $U^M$\,' is not definable, $p$ is not definable. \end{exm} \begin{defn} Let $G$ be a definable (or even type-definable) group in $\bar{M}$. \ $G$ is called \emph{invariant-regular group} if for some global type $p(x)\in S_G(\bar{M})$, $(p(x),``x\in G")$ is invariant-strongly regular. \end{defn} We will see in Example \ref{field} that asymmetric invariant-regular groups, and even fields, exist. \begin{thm}\label{Tg} Suppose that $G$ is a definable, invariant-regular group, witnessed by $p(x)\in S_G(\bar{M})$. Then: (i) \ $p(x)$ is both left and right translation invariant (and in fact invariant under definable bijections). (ii) \ A formula $\phi(x)$ is in $p(x)$ \ iff \ two left (right) translates of $\phi(x)$ cover $G$ \ iff \ finitely many left (right) translates of $\phi(x)$ cover $G$. (Hence $p(x)$ is the unique generic type of $G$.) (iii) \ $p(x)$ is definable over $\emptyset$; in particular, $G$ is a definable-regular group. (iv) \ $G=G^0$ \ (i.e. $G$ is connected). \end{thm} \begin{proof} (i) \ Suppose that $f:G\longrightarrow G$ is a $B$-definable bijection and $a\models p|B$. Since $p|B\vdash p|(B,f(a))$ is not possible, by strong regularity, we get $f(a)\models p|B$. Thus $p$ is invariant under $f$. (ii) \ Suppose that $D\subseteq G$ is defined by $\phi(x)\in p(x)$ which is over $A$. Let $g\models p|A$ and we show \ $G=D\cup g\cdot D$\,. If $b\in G\setminus D$ then $b$ does not realize $p|A$ so, by strong regularity, $g\models p|(A,b)$. By (i) $g^{-1}\models p|(A,b)$, thus $g^{-1}\in D\cdot b^{-1}$ and $b\in g\cdot D$. This proves \ $G=D\cup g\cdot D$\,. For the other direction, if finitely many translates of $\psi(x)$ cover $G$ then at least one of them belongs to $p(x)$ and, by (i), $\psi(x)\in p(x)$. (iii) and (iv) follow immediately from (ii). \end{proof} \noindent\textbf{Question.} \ Is every regular-invariant group commutative? 
\section{Homogeneous pregeometries} If $(M,\cl)$ is a pregeometry then, as usual, we obtain notions of independence and dimension: for $A,B\subset M$ we say that $A$ is independent over $B$ if $a\notin\cl(A\setminus \{a\}\cup B)$ for all $a\in A$. Given $A$ and $B$, all subsets of $A$ which are independent over $B$ and maximal such, have the same cardinality, called $\dim(A/B)$. $(M,\cl)$ is infinite-dimensional if $\dim(M/\emptyset)\geq\aleph_0$. \begin{rmk}\label{R32} (i) \ If $\bar{c}$ is a tuple of length $n$ then $\dim(\bar{c}/B)\leq n$ for any $B$. (ii) \ If $l(\bar{c})=n$, $|A|\geq n+1$ and $A$ is independent over $B$ then there is $a\in A$ such that $a\notin\cl(B\cup\bar{c})$. \end{rmk} \begin{defn}\label{Dhom} We call an infinite-dimensional pregeometry $(M,\cl)$ {\em homogeneous} if for any finite $B\subset M$, the set of all $a\in M$ such that $a\notin \cl(B)$ is the set of realizations in $M$ of a complete type $p_B(x)$ over $B$. \end{defn} Note that Definition \ref{Dhom} relates in some way the closure operation to the first-order structure. But note that it does not say anything about automorphisms, and nothing is being claimed about the homogeneity of $M$ as a first-order structure. \begin{lem}\label{L34} Suppose $(M,\cl)$ is a homogeneous pregeometry. (i) \ \ $p_{\cl}(x)=\bigcup\{p_B(x)\,:$ $B$ finite subset of $M\}$ \ is a complete 1-type over $M$, which we call the generic type of the pregeometry $(M,\cl)$. (ii) \ \ $a\notin\cl(B)$ \ iff \ $a\models p_{\cl}(x)|B$. \ In particular \ $\cl=\cl_{p_{\cl}}$. (iii) \ $I=(a_i\,:\,i<\omega)$ is independent over $B$ \ iff \ $a_i\models p_{\cl}|(B,a_0,...,a_{i-1})$ for all $i$. \ In particular, if $M=\bar{M}$ and $p$ is $B$-invariant then $I$ is independent over $B$ iff it is a Morley sequence in $p$ over $B$. \end{lem} \begin{proof} (i) \ Consistency is by compactness: given $A_1,...,A_n$ finite subsets of $M$ and $B=A_1\cup...\cup A_n$, clearly $p_{A_1}(x)\cup...\cup p_{A_n}(x)\subseteq p_B(x)$ and the later is consistent. Completeness is clear. (ii) and (iii) are easy. \end{proof} \begin{lem}\label{L35} Suppose $(M,\cl)$ is a homogeneous pregeometry. Let $(a_i\,:\,i\in\omega)$ be an $\emptyset$-independent subset of $M$. Then for any $L$-formula $\phi(x,\bar{y})$ with $l(\bar{y})=n$, and $n$-tuple $\bar{b}$ from $M$: \begin{center} $\phi(x,\bar{b})\in p_{\cl}(x)$ \ \ \ iff \ \ \ $\models \wedge_{i\in w}\phi(a_i,b)$ \ for some \ $w\subset\{1,...2n\},\,|w|=n+1$. \end{center} In particular $p_{\cl}(x)$ is definable. \end{lem} \begin{proof} If $\phi(x,\bar{b})$ is large (namely in $p_{cl}$), then its negation is small, and thus if $M\models\neg\phi(a,\bar{b})$ then $a\in\cl(\bar{b})$. By Remark \ref{R32}(ii), at most $n$ many $a_i$'s can satisfy $\neg\phi(x,\bar{b})$, hence at least $n+1$ among the first $2n+1$ $a_i$'s satisfy $\phi(x,\bar{b})$. Conversely, if at least $n+1$ $a_i$'s satisfy $\phi(x,\bar{b})$ then, again by Remark \ref{R32}(ii), $\phi(x,\bar{b})$ can not be small, so it is large. \end{proof} \begin{prop}\label{Ppr} Suppose $(M,\cl)$ is a homogeneous pregeometry. Let $p(x)$ be the generic type and let $\bar{p}(x)$ be its (unique by definability) global heir. (i) \ $(\bar{M},\cl_{\bar{p}})$ \ is a homogeneous pregeometry and $\cl$ is the restriction of $\cl_{\bar{p}}$ to $M$. 
(ii) \ If $(a_1,...a_n)$ (from $\bar{M}$) is independent over $A$ then: \begin{center} $\tp(b_1,...,b_n/A)=\tp(a_1,...,a_n/A)$ \ \ iff \ \ $(b_1,...,b_n)$ is independent over $A$.\end{center} (iii) \ $\bar{p}(x)$ is $\emptyset$-invariant and generically stable. (iv) \ $(\bar{p}(x),x=x)$ is strongly regular \end{prop} \begin{proof} (i) is an easy exercise, using the fact that $\bar{p}$ is defined by the same schema which defines $p$. (ii) \ We prove it by induction on $n$. For $n=1$, by definition, we have $a_1,b_1\models \bar{p}|A$. Now assume true for $n$ and prove for $n+1$. Without loss of generality $A=\emptyset$. Suppose first that $\tp(b_1,...,b_{n+1})=\tp(a_1,...,a_{n+1})$. Let $a'$ realize $p|(a_1,...,a_{n+1},b_1,...,b_{n+1})$. \ So \ $\tp(a_1,...,a_n,a_{n+1})=\tp(a_1,...,a_n,a')$. On the other hand, by the induction assumption (over $\emptyset$), $(b_1,...,b_n)$ is independent, so independent over $a'$ (by symmetry). By induction assumption applied over $a'$, $\tp(b_1,...,b_n,a')=\tp(a_1,...,a_n,a')$. Hence $\tp(b_1,...,b_n,a')=\tp(b_1,...,b_n,b_{n+1})$. As $a'\notin\cl_{\bar{p}}(b_1,...,b_n)$, also $b_{n+1}\notin \cl_{\bar{p}}(b_1,...,b_n)$. Thus $(b_1,...b_n,b_{n+1})$ is independent. The converse (if $(b_1,...,b_{n+1})$ is independent then it realizes $\tp(a_1,...,a_{n+1})$) is proved in a similar fashion and left to the reader. (iii) \ By part (i), Lemma \ref{L35} also applies to the pregeometry $\cl_{\bar{p}}$. Let $(a_i\,:\,i\in\omega)$ be $\cl_{\bar{p}}$-independent. Then $\bar{p}(x)$ is defined over $(a_i\,:\,i\in\omega)$ as in Lemma \ref{L35}. But if $(b_i\,:\,i\in\omega)$ has the same type as $(a_i\,:\,i\in\omega)$ then, by (ii), it is also $\cl_{\bar{p}}$\,-independent, hence $\bar{p}(x)$ is defined over $(b_i\,:\,i\in\omega)$ in the same way. this implies that $\bar{p}$ is $\emptyset$-invariant. Thus, by Lemma \ref{L34}(iii) a Morley sequence in $\bar{p}$ is the same thing as an infinite $\cl_{\bar{p}}$\,-independent set. By Lemma \ref{L35} and Definition \ref{D1}, $\bar{p}$ is generically stable. (iv) \ By part (i) $(\bar{M},\cl_{\bar{p}})$ is a pregeometry so, by Lemma \ref{Lsym}(ii), $(\bar{p}(x),x=x)$ is strongly regular. \end{proof} We now drop (for a moment) all earlier assumptions and summarize the situation: \begin{thm}\label{Tpr} Let $T$ be an arbitrary theory. (i) \ Let $p(x)$ be a global $\emptyset$-invariant type such that $(p(x),x=x)$ is strongly regular. Then $(\bar{M},\cl_p)$ is a homogeneous pregeometry. (ii) \ On the other hand, suppose $M\models T$ and $(M,\cl)$ is a homogeneous pregeometry. Then there is a unique global $\emptyset$-invariant generically stable type $p(x)$ such that $(p(x),x=x)$ is strongly regular, and such that the restriction of $\cl_p$ to $M$ is precisely $\cl$. \end{thm} \section{Quasiminimal structures} Recall that a 1-sorted structure $M$ in a countable language is called quasi-minimal if $M$ is uncountable and every definable (with parameters) subset of $M$ is countable or co-countable; the definition was given by Zilber in \cite{Z1}. Here we investigate the general model theory of quasiminimality, continuing an earlier work by Itai, Tsuboi and Wakai \cite{ITW}. Throughout this section fix a quasiminimal structure $M$ and its monster model $\bar{M}$. The set of all formulas (with parameters) defining a co-countable subset of $M$ forms a complete 1-type $p(x)\in S_1(M)$; we will call it the generic type of $M$. If $p(x)$ happens to be definable we will denote its (unique) global heir by $\bar{p}(x)$. 
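For instance, any uncountable minimal structure (for example, an uncountable pure set, or more generally any uncountable strongly minimal structure) is quasiminimal, and in that case the generic type is just the unique non-algebraic type over $M$; Zilber's conjecture mentioned in the introduction asserts that the complex exponential field is also quasiminimal.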
\begin{defn} (i) \ $p(x)$ (or $M$) is {\em countably based} if there is a countable $A\subset M$ such that $p$ does not split over $A$; i.e. whenever $\bar{b}_1\equiv \bar{b}_2 (A)$ we have \ $(\phi(x,\bar{b}_1)\leftrightarrow\phi(x,\bar{b}_2))\in p(x)$ for all $\phi(x,\bar{y})$ over $A$. (ii) \ If $p$ does not split over $A$ then we say that $(a_1,...,a_n)$ is a weak Morley sequence in $p$ over $A$ if $a_{k}$ realizes $p|(A,a_1,...,a_{k-1})$ for all relevant $k$. \end{defn} As in the case of global invariant types weak Morley sequences are indiscernible over $A$. \begin{rmk} (i) \ In quasiminimal structures Zilber's countable closure operator $\ccl$ is defined via $\cl_p$\,: \begin{center} $\cl_p^0(A)=\cl_p(A)$, \ $\cl_p^{n+1}(A)=\cl_p(\cl_p^{n}(A))$ \ and \ $\ccl(A)=\bigcup_{n\in\omega} \cl_p^n(A) $.\end{center} (ii) \ If $A$ is countable then $\ccl(A)$ is countable, too. \ $\ccl$ is a closure operator on $M$. (iii) \ $\cl_p$ is a closure operator \ iff \ $\cl_p=\ccl$ (which is in general not the case, see Example \ref{field}). \end{rmk} \begin{lem}\label{Le1} \ \ Suppose that $p(x)$ is countably based, witnessed by $A$. Then (i) \ \ $\cl_p$ is a closure operator on $M$. (ii) \ $(M,\cl_p)$ is a pregeometry iff every weak Morley sequence in $p$ over $A$ is totally indiscernible. \end{lem} \begin{proof}(i) \ Without loss of generality suppose that $A = \emptyset$. Assuming that $\cl_p\neq\ccl$ we will find a non-indiscernible weak Morley sequence over some countable $C\subset M$, which is in contradiction with $\emptyset$-invariance. So suppose that $\cl_p\neq\ccl$. Then there are a (countable) $C\subset M$ and $a\in M$ such that $a\in\cl_p^2(C)\setminus\cl_p(C)$. Since $a\notin \cl_p(C)$ we have $a\models p|C$ so let $a_1,a_2\notin\ccl(aC)$ be such that $(a,a_1,a_2)$ is a weak Morley sequence over $C$. We will prove that it is not indiscernible. Witness $a\in\cl_p^2(C)$ by a small formula $\varphi(x,\bar{b})\in\tp(a/C\bar{b})$ such that $\varphi(x,y_1,...,y_n)$ is over $C$ and $(b_1,...,b_n)=\bar{b}\in \cl_p(C)^n$. Choose $\theta_i(y_i)\in\tp(b_i/C)$ witnessing $b_i\in\cl_p(C)$ and let \ \ $x_1\equiv_{\varphi}x_2$ \ denote the formula \ \ \begin{center} $(\forall \bar{y})\left(\bigwedge_{1\leq i\leq n}\theta_i(y_i)\rightarrow (\varphi(x_1,\bar{y})\leftrightarrow \varphi(x_2,\bar{y}))\right)$.\end{center} It is, clearly, over $C$ and we show $a\makebox[1em]{$\not$\makebox[.6em]{$\equiv$}}_{\varphi}a_2$: \ from $\models \varphi(a,\bar{b})$ (witnessing $a\in\cl_p^2(C)$) and $a_2\notin\ccl(C)$ we derive $\models\neg\varphi(a_2,\bar{b})$ and thus $a\makebox[1em]{$\not$\makebox[.6em]{$\equiv$}}_{\varphi}a_2$. \ On the other hand, since all realizations of $\theta_i$'s are in $\cl_p(C)$, and since $\tp(a_1/\cl_p(C))=\tp(a_2/\cl_p(C))$, we have $a_1\equiv_{\varphi}a_2$. Therefore $(a,a_1,a_2)$ is not indiscernible. (ii) \ Having proved (i), the proof of Lemma \ref{Lsym}(iii) goes through. \end{proof} Let us note that if $(M,\cl_{p})$ {\em is} a pregeometry, then as infinite-dimensionality and homogeneity are automatic (for quasiminimal $M$), we can apply Theorem 3(ii). \begin{thm} If $p(x)$ does not split over $\emptyset$ then exactly one of the following two holds: (1) \ Every weak Morley sequence in $p$ over $\emptyset$ is totally indiscernible; in this case $\cl_p$ is a pregeometry operator on $M$, $p$ is definable, $\bar{p}$ is generically stable and $(\bar{p}(x),x=x)$ is strongly regular. 
(2) \ There exists an asymmetric weak Morley sequence in $p$ (over some domain); in this case, for some finite $A$ there is an $A$-definable partial order $\leq$ such that every weak Morley sequence in $p$ is strictly increasing. \end{thm} \begin{proof} First suppose that every weak Morley sequence in $p$ over $\emptyset$ is symmetric. Then, by Lemma \ref{Le1}(i), $\cl_p$ is a closure operator and, by Lemma \ref{Lsym}(ii), it is a pregeometry operator. Thus $(M,\cl_p)$ is a homogeneous pregeometry and the conclusion follows from Proposition \ref{Ppr}. Now suppose that there is an asymmetric weak Morley sequence in $p$ over some finite $A$. By invariance it is indiscernible, so after adding an initial part to $A$ we get a weak Morley sequence $(a,b)$ over $A$ which is not symmetric. Let $\phi(x,y)\in\tp(ab/A)$ be asymmetric ($\models\phi(x,y)\rightarrow \neg\phi(y,x)$). Then $\phi(a,x)$ is large and $\phi(x,b)$ is small; by invariance $\phi(x,a)$ is small, too. Every weak Morley sequence of length 2 satisfies these conditions, so $(a,b)$ can be found such that each of $a,b$ realizes $p|\ccl(A)$, and there is a countable, $\ccl$-closed $M_0\prec M$ containing $A$ such that $a\in M_0$ and $b\notin M_0$ (i.e.\ $b$ realizes $p|M_0$). We claim: \begin{center} $\models(\forall t)(\phi(t,a)\rightarrow \phi(t,b)).$\end{center} Let $d\in M$ be such that $\models\phi(d,a)$. Then $d\in\ccl(Aa)$, because $\phi(x,a)$ is small, and so $d\in M_0$ ($M_0$ is $\ccl$-closed). Now, if $d\in\ccl(A)$ then $a\equiv b(\ccl(A))$ implies $\models\phi(d,b)$. Otherwise $d\models p|\ccl(A)$ so, since $p|M_0$ does not split over $\emptyset$, we have $(a,b)\equiv (d,b)(A)$. In particular $\models\phi(d,b)$. In both cases we have $\models\phi(d,b)$, proving the claim. Let \ $x\lq y$ \ be \ $(\forall t)(\phi(t,x)\rightarrow \phi(t,y)).$ \ Clearly, $\lq$ defines a quasi order and, by the claim, $a\lq b$. Moreover, $\models\phi(a,b)\wedge \neg\phi(a,a)$, so $b\nlq a$ and, as in the proof of Theorem \ref{Pr1}, $a<b$. \end{proof} \begin{thm} Suppose that $M$ is a quasiminimal group. Then $p(x)$ is definable over $\emptyset$, both left and right translation invariant, and $\bar{M}$ is a connected, definable-regular group witnessed by $\bar{p}(x)$. \end{thm} \begin{proof} Let $X\subseteq M$ be definable. First we claim that $X$ is uncountable \ iff \ $X\cdot X=M$. \ If $X$ is uncountable, then $X$ is co-countable, as is $X^{-1}$. So for any $a\in M$, $a\cdot X^{-1}$ is co-countable, so has nonempty intersection with $X$. If $d\in X\cap a\cdot X^{-1}$ then $a\in X\cdot X$; conversely, if $X$ is countable then so is $X\cdot X$, and hence $X\cdot X\neq M$. This proves the claim. It follows that $p(x)$ is definable over $\emptyset$. In particular, it is countably based and, by Lemma \ref{Le1}, $\cl_p$ is a closure operator on $M$. Then $\cl_{\bar{p}}$ is also a closure operator and $(\bar{p}(x),x=x)$ is strongly regular by Lemma \ref{Lsym}(ii). The rest follows from Theorem \ref{Tg}. \end{proof} \begin{exm}\label{field} An asymmetric quasiminimal field. \ \noindent In fact, every strongly minimal structure of size $\aleph_1$ can be expanded to become asymmetric quasiminimal. \ Let $I=\omega_1\times Q$ and let $\vartriangleleft$ be a (strict) lexicographic order on $I$. Further, let $B=\{b_i\,|\,i\in I\}$ be a maximal $\acl$-independent subset of $M$. For each $a\in M$ let $i\in I$ be $\vartriangleleft$-maximal for which there are $i_1,...,i_n\in I$ such that $a\in\acl(b_{i_1},...,b_{i_n},b_i)\setminus \acl(b_{i_1},...,b_{i_n})$. Clearly, $i=i(a)$ is uniquely determined.
\ Now, expand $(M,...)$ to $(M,<,...)$ where $b< c$ iff $i(b) \vartriangleleft i(c)$. \ We will prove that $(M,<,...)$ is quasiminimal. Suppose that $M_0\prec M$ is a countable, $<$-initial segment of $M$ and that $B\setminus M_0$ does not have $\leq$-minimal elements, and let $a,a'\in M\setminus M_0$. Then there is an automorphism of $(B,<)$ fixing $B\cap M_0$ pointwise and moving $b_{i(a)}$ to $b_{i(a')}$. It easily extends to an $M_0$-automorphism of $(M,<,...)$, so \ $b_{i(a)}\equiv b_{i(a')}(M_0)$ (in the expanded structure).\, Note that replacing $b_{i(a)}$ by $a$ in $B$ (in the definition of $<$) does not affect $<$, so \ $a\equiv a'(M_0)$ and there is a single 1-type over $M_0$ realized in $M\setminus M_0$. Since every countable set is contained in an $M_0$ as above, $(M,<,...)$ is quasiminimal. \end{exm} \noindent\textbf{Question.} \ Is every quasiminimal field algebraically closed? The following is an example of a quasiminimal structure whose quasi-minimality does not look like regularity at all: \ $\cl_p(A)\neq \cl_p(\cl_p(A))$ for arbitrarily large countable $A$'s. \begin{exm}\label{E4}(A quasiminimal structure where $\cl_p\neq\ccl$) \\ Peretyatkin in \cite{Per} constructed an $\aleph_0$-categorical theory of a 2-branching tree. Our quasiminimal structure will be a model of this theory. The language consists of a single binary function symbol $L=\{\inf\}$. Let $\Sigma$ be the class of all finite $L$-structures $(A,\inf)$ satisfying: (i) \ $(A,\inf)$ is a semilattice; (ii) \ $(A,<)$ is a tree \, (where \, $x< y$ \, iff \, $\inf(x,y)=x\neq y$); (iii) \ (2-branching) \ \ No three distinct, pairwise $<$-incompatible elements satisfy: \ \ $\inf(x,y)=\inf(x,z)=\inf(y,z)$. \noindent Then the Fraiss\'{e} limit of $\Sigma$ exists and its theory, call it $T_2$, is $\aleph_0$-categorical and has a unique 1-type. If we extend the language to $\{\inf,<,\perp\}$, where $x\perp y$ stands for $x\nleq y \wedge y\nleq x$, then $T_2$ has elimination of quantifiers. Let $(\bar{M},<)$ be the monster model of $T_2$, let $\triangleleft$ be a lexicographic order on $I=\omega_1\times Q$, and let $C=\{c_i\,|\,i\in \omega_1\times Q\}$ be $<$-increasing. Then we can find a sequence of countable models $\{M_i\,|\,i\in \omega_1\times Q\}$ satisfying: (1) \ $M_i\prec M_j$ \ for all $i\vartriangleleft j$ ; (2) \ $M_i\cap C=\{c_j\in C\,|\,j\trianglelefteq i\}$ for all $i$; (3) \ $M_i\cap C<M_j\setminus M_i$ \ for all $i\vartriangleleft j$. \noindent Finally, let $M=\bigcup\{M_i\,|\,i\in I\}$. \ Clearly, $C$ is an uncountable branch in $M$. Moreover, by (3), any other branch is completely contained in some $M_i$, and is thus countable. This suffices to conclude that $M$ is quasiminimal and that the generic type is determined by $C<x$. Fix $c_i\in C$ and $a\in M\setminus C$ with $c_i<a$. Note that $c_i\nleq x$ is small, so $M_j\subset\cl_p(c_i)$ for all $j\vartriangleleft i$. Also, $x\perp a$ is large so $\cl_p(a)$ is the union of branches containing $a$. Since $c_i\in \cl_p(a)$, we get $M_j\subseteq \cl_p(c_i)\subset \cl_p^2(a)$. Since $M_j \nsubseteq \cl_p(a)$ we conclude that $\cl_p(a)\neq\cl_p^2(a)$ and $\cl_p$ is not a closure operator. Similarly, for any countable $A\subset M$ we can find $a,c_i$ as above lying sufficiently far above $A$, and thus both realizing $p|A$. Then $x\perp a\wedge\neg (x\perp c_i)\in p(x)$ witnesses that $p(x)$ is not countably based.
\end{exm} \section{Local regularity} In this section we study conditions under which a type $p(x)\in S_1(M)$ (where $M$ is not necessarily saturated) has a global, strongly regular, $M$-invariant extension. From the definition of strong regularity and Remark \ref{Rreg}, $p(x)$ has to satisfy the following: \begin{defn}A non-isolated type $p(x)\in S_1(M)$ is \emph{locally strongly regular via} $\phi(x)\in p(x)$ if $p(x)$ has a unique extension over $M\bar{b}$ whenever $\bar{b}$ is a finite tuple of realizations of $\phi(x)$ no element of which realizes $p$. \end{defn} For simplicity, we will be working with types which are locally strongly regular via $x=x$; it is not hard to reformulate the results below with $x=x$ replaced by $\phi(x)$. \begin{prop}\label{p03} Suppose that $p(x)\in S_1(M)$ is definable and locally strongly regular via $x=x$, and let $\bar{p}(x)$ be its global heir. Then $(\bar{p}(x),x=x)$ is definable-strongly regular. \end{prop} \begin{proof} Suppose that $(\bar{p}(x),x=x)$ is not strongly regular and let $B\supset M$ be such that $\bar{p}|B\nvdash \bar{p}|\cl_{\bar{p}}(B)$. Then, without loss of generality, $B=M\bar{b}$ and there are $a\in\cl_{\bar{p}}(B)$ and $c\models \bar{p}|B$ such that $c$ does not realize $\bar{p}|Ba$. Witness $a\in\cl_{\bar{p}}(B)$ by $\theta(y,\bar{z})$ which is over $M$ and \ $\models\theta(a,\bar{b})\wedge\neg(d_p\theta)(\bar{b})$ \ (where $d_p$ is the defining schema of $p$). Similarly, find $\phi(x,y,\bar{z})$ over $M$ such that $\models\phi(c,a,\bar{b})\wedge\neg(d_p\phi)(a,\bar{b})$. Thus \begin{center} $\models(\exists y)(\theta(y,\bar{b})\wedge\neg(d_p\theta)(\bar{b})\wedge \phi(c,y,\bar{b})\wedge\neg(d_p\phi)(y,\bar{b}))$. \end{center} Since $\tp(c/M\bar{b})$ is an heir of $p(x)$ there are $\bar{m}\in M$ and $a'$ such that \begin{center} $\models \theta(a',\bar{m})\wedge\neg(d_p\theta)(\bar{m}) \wedge \phi(c,a',\bar{m})\wedge\neg(d_p\phi)(a',\bar{m})$. \end{center} The first two conjuncts witness $a'\notin p(\bar{M})$ while the last two witness that $c$ is not a realization of $\bar{p}|Ma'$. A contradiction. \end{proof} For the next proposition, recall that if $p(x)\in S_{1}(M)$ then by a coheir sequence in $p$ we mean a Morley sequence in $p'$ over $M$ for some global coheir $p'$ of $p$. \begin{prop}\label{Psymm} Suppose that $p(x)\in S_1(M)$ is locally strongly regular via $x=x$ and that there exists an infinite, totally indiscernible (over $M$) coheir-sequence in $p$. Then $p$ is definable, its global heir $\bar{p}$ is generically stable and $(\bar{p}(x),x=x)$ is strongly regular. \end{prop} \begin{proof}Let $I=\{a_i\,:\, i\in\omega\}$ be a symmetric coheir sequence in $p$, let $p_n(x)=\tp(a_{n+1}/Ma_1...a_n)$ and $p_I=\cup_{n\in\omega}p_n(x)\in S_1(MI)$. We will prove that $p_I$ is locally strongly regular via $x=x$; then, by standard arguments, it follows that $p$ has a global coheir $\bar{p}$ which extends $p_I$ and that $(\bar{p}(x),x=x)$ is invariant-strongly regular and symmetric; the conclusion follows by Theorem \ref{Pr1}. Suppose, on the contrary, that $p_I$ is not locally strongly regular. Then some $p_n$ is not locally strongly regular, so there is a tuple $\bar{b}=b_1 ... b_k$, no element of which realizes $p_n(x)$, such that $p_n$ has at least two extensions in $S_1(M\bar{a}\bar{b})$ (here $\bar{a}=a_1...a_n$). Let $\varphi(x,\bar{z},\bar{y})$ be over $M$ and such that both $\varphi(x,\bar{a},\bar{b})$ and $\neg\varphi(x,\bar{a},\bar{b})$ are consistent with $p_n(x)$.
Choose $\theta_i(y_i,\bar{a})\in\tp(b_i/M\bar{a})$ witnessing that $b_i$ does not realize $p_n$ and let \ $\phi(x_1,x_2,\bar{a})$ \ be \ \ \begin{center} $(\exists \bar{y})\left(\bigwedge_{1\leq i\leq k}(\theta_i(y_i,\bar{a}) \wedge\neg\theta_i(x_2,\bar{a})) \wedge\neg(\varphi(x_1,\bar{a},\bar{y})\leftrightarrow \varphi(x_2,\bar{a},\bar{y}))\right)$.\end{center} It is, clearly, over $M$ and we show \ $\models\phi(a_{n+2},a_{n+1},\bar{a})$. \ By our assumptions on $\varphi$ and $\bar{b}$, there is $\bar{b}'\equiv\bar{b}(M\bar{a})$ such that $\models \neg(\varphi(a_{n+1},\bar{a},\bar{b})\leftrightarrow \varphi(a_{n+1},\bar{a},\bar{b}'))$. Also $\models \bigwedge_{1\leq i\leq k}(\theta_i(b_i,\bar{a})\wedge\theta_i(b_i',\bar{a}) \wedge\neg\theta_i(a_{n+1},\bar{a}))$. \ Thus for any $c\in M$ either $\bar{b}$ or $\bar{b}'$ in place of $\bar{y}$ witnesses $\models\phi(c,a_{n+1},\bar{a})$ and, since $\tp(a_{n+2}/M\bar{a}a_{n+1})$ is a coheir of $p$, we conclude $\models\phi(a_{n+2},a_{n+1},\bar{a})$. By total indiscernibility, $\tp(\bar{a}/Ma_{n+1}a_{n+2})$ is finitely satisfiable in $M$, so there are $\bar{m}\in M$ and $\bar{d}$ such that \begin{center} $\models\bigwedge_{1\leq i\leq k}(\theta_i(d_i,\bar{m}) \wedge\neg\theta_i(a_{n+1},\bar{m})) \wedge\neg(\varphi(a_{n+2},\bar{m},\bar{d})\leftrightarrow \varphi(a_{n+1},\bar{m},\bar{d}))$.\end{center} The first conjunct witnesses that each $d_i$ does not realize $p$, and the second witnesses that $p$ does not have a unique extension over $M\bar{d}$. A contradiction. \end{proof} Our next goal is to prove that the generic type of a quasiminimal structure is locally strongly regular via $x=x$. This we will do in a more general situation, for any ($M$ and) $p\in S_1(M)$ for which the closure operator induced by $\cl_p$ (we will call it $\Cl_p$) 'does not finitely generate $M$'. So, fix for now $M$ and $p\in S_1(M)$. \begin{center} $\Cl_p(A)=\bigcup\{\cl_p^n(A)\,|\,n\in\omega\}$ \ where \ $\cl_p^0(A)= \cl_p(A)$, \ $\cl_p^{n+1}(A)=\cl_p(\cl_p^{n}(A))$.\end{center} Call $A\subseteq M$ \emph{finitely $\Cl_p$-generated} if there is a finite $A_0\subset A$ such that $A\subset\Cl_p(A_0)$. \ $\{a_i\,|\,i\in\alpha\}$ is a \emph{$\Cl_p$-free sequence over $B\subset M$} if $a_i\notin\Cl_p(A_iB)$ for all $i<\alpha$, where $A_i=\{a_j\,|\,j<i\}$. \ $\Cl_p$-free means $\Cl_p$-free over $\emptyset$. The following properties are easy to verify: (1) \ \ $a\notin \cl_p(B)$ \ implies \ $a\models p\upharpoonright B$; \ \ $a\notin \Cl_p(B)$ \ implies \ $a\models p\upharpoonright \Cl_p(B)$. (2) \ \ If $A=\{a_i\,|\,i\in\alpha\}$ is $\Cl_p$-free then $p\upharpoonright \Cl_p(A)= \bigcup_{i\in\alpha} p\upharpoonright \Cl_p(A_i)$. (3) \ \ If $A=\{a_i\,|\,i\in\alpha\}$ is $\Cl_p$-free and $\alpha$ is a limit ordinal then $p\upharpoonright \Cl_p(A)$ is non-algebraic and finitely satisfiable in $A$. (4) \ \ Maximal $\Cl_p$-free sequences always exist. If $M_0\subset M$ is not finitely $\Cl_p$-generated and $\{a_i\,|\,i\in\alpha\}\subset M_0$ is a maximal $\Cl_p$-free sequence such that $\alpha$ is minimal possible, then $\alpha$ is a limit ordinal (otherwise, take the last $a_i$ and put it in the first place ...). \begin{prop} Suppose $p\in S_1(M)$ and $M$ is not finitely $\Cl_p$-generated. Then $p$ is locally strongly regular via $x=x$.
\end{prop} \begin{proof} Suppose, on the contrary, that there are $d_1,d_2\in p(\mathcal{M})$, a formula $\phi(x,\bar{y})$, and a tuple $\bar{b}=b_1b_2...b_n\in \mathcal{M}^n$ such that none of the $b_i$'s realizes $p$ and: \begin{center} $\models \neg\phi(d_1,\bar{b})\wedge \phi(d_2,\bar{b})$.\end{center} Choose $\theta_i(y_i)\in\tp(b_i/M)$ such that $\theta_i(x)\notin p(x)$. Without loss of generality, after absorbing parameters into the language, we may assume that each $\theta_i(x)$ is over $\emptyset$. Let $I=\{a_i\,|\,i<\alpha\}$ be a maximal $\Cl_p$-free sequence of minimal possible length. Then $\alpha$ is a limit ordinal and at least one of \begin{center} $\{i<\alpha\,|\,\models \phi(a_i,\bar{b})\}$ \ and \ $\{i<\alpha\,|\,\models \neg\phi(a_i,\bar{b})\}$\end{center} is cofinal in $\alpha$. Assume the first one is cofinal and let \ $I_0=\{a_i\,|\,\models \phi(a_i,\bar{b})\}$. Then $p$ is an $I_0$-type (that is, finitely satisfiable in $I_{0}$) and there is an $I_0$-type in $S_1(Md_1\bar{b})$ extending $p$; it necessarily contains $\phi(x,\bar{b})$ and, without loss of generality, we may assume that $d_2$ realizes it. Thus, both $\tp(d_1/M)$ and $\tp(d_2/Md_1)$ are $I_0$-types. We have: \begin{center} $\models (\exists\bar{y})(\bigwedge_{1\leq l\leq n}\theta_l(y_l)\wedge\neg\phi(d_1,\bar{y})\wedge\phi(d_2,\bar{y}))$. \end{center} Since $\tp(d_2/Md_1)$ is an $I_0$-type, there is $a_i\in I_0$ such that: \begin{center} $\models (\exists\bar{y})(\bigwedge_{1\leq l\leq n}\theta_l(y_l)\wedge\neg\phi(d_1,\bar{y})\wedge\phi(a_i,\bar{y}))$. \end{center} Since $\tp(d_1/M)$ is an $I_0$-type, there is $a_j\in I_0$ such that: \begin{center} $\models (\exists\bar{y})(\bigwedge_{1\leq l\leq n}\theta_l(y_l)\wedge\neg\phi(a_j,\bar{y})\wedge\phi(a_i,\bar{y}))$. \end{center} Finally, since $a_i,a_j\in M$ there is $\bar{b}'=b_1'b_2'...b_n'\in M^n$ satisfying: \begin{center} $\models \bigwedge_{1\leq l\leq n}\theta_l(b_l')\wedge\neg\phi(a_j,\bar{b}')\wedge\phi(a_i,\bar{b}')$. \end{center} But $\bigwedge_{1\leq l\leq n}\theta_l(b_l')$ \ implies \ $\bar{b}'\subset\cl_p(\emptyset)$ and thus \ $\tp(a_i/\cl_p(\emptyset))\neq \tp(a_j/\cl_p(\emptyset))$. \ A contradiction. \end{proof} \begin{cor} If $M$ is a quasiminimal structure then its generic type $p$ is locally strongly regular via $x=x$. Moreover, whenever $M_0\prec M_1\prec... \prec M$ are $\ccl$-closed, then $p|\cup_{i\in\omega}M_i$ is locally strongly regular via $x=x$. \end{cor} \begin{cor} Suppose that $M$ is quasiminimal, $p$ is its generic type, and that there exists an infinite $\ccl$-free (or an uncountable $\cl_p$-free), totally indiscernible sequence $\subset M$. Then $p$ is definable, its global heir $\bar{p}$ is generically stable and $(\bar{p}(x),x=x)$ is strongly regular. \end{cor} \begin{proof} Let $I=\{a_i\,:\,i\in\omega\}$ be a symmetric $\ccl$-free sequence. Let $M_0=\ccl(a_0)$ and $M_{i+1}=\ccl(M_ia_{i+1})$. Then $M_0\prec M_1\prec... \prec M$ are $\ccl$-closed and $p|M_{\omega}$ is locally strongly regular via $x=x$ (where $M_{\omega}=\cup_{i\in\omega}M_i$). Let $J$ be an infinite indiscernible sequence over $M_{\omega}$ extending $I$. Then $J$ is symmetric and the conclusion follows by Proposition \ref{Psymm} applied to $p|M_{\omega}$. \end{proof} \begin{thm}\label{Tg} Suppose that $G\subseteq M$ is a definable group and $p(x)\in S_G(M)$ is locally strongly regular via $``x\in G"$. Then: (i) \ $p(x)$ is both left and right translation invariant (and in fact invariant under definable bijections).
(ii) \ A formula $\phi(x)$ is in $p(x)$ \ iff \ two left (right) translates of $\phi(x)$ cover $G$ \ iff \ finitely many left (right) translates of $\phi(x)$ cover $G$. (Hence $p(x)$ is the unique generic type of $G$.) (iii) \ $p(x)$ is definable over $\emptyset$ and $G$ is connected. (iv) \ If, in addition, $p(x)$ is locally strongly regular via $x=x$ then $(\bar{p}(x),x=x)$ is strongly regular and $\bar{G}$ is a definable-regular group. (Here $\bar{p}$ is the unique heir of $p(x)$ and $\bar{G}\subseteq \bar{M}$ is defined by $``x\in G"$). \end{thm} \begin{proof} (i) \ Suppose that $f:\bar{G}\longrightarrow \bar{G}$ is an $M$-definable bijection and $a\models p$. Since $p\vdash p|Mf(a)$ is not possible (note that $a$ is the unique realization of $\tp(a/Mf(a))$, so this would force $p$ to be algebraic), by local strong regularity we get $f(a)\models p$. Thus $p$ is invariant under $f$. (ii) \ The local strong regularity of $p(x)$ implies that whenever $g,g'\in \bar{G}$ do not realize $p$ then $g\cdot g'$ does not realize $p$ either. It follows that $a\cdot g\models p$ whenever $a\models p$ and $g\in \bar{G}$ does not realize $p$. Thus: \begin{center} $\phi(x)\in p(x)$ \ \ iff \ \ $(\forall y\in \bar{G})(\neg\phi(y)\rightarrow\phi(y\cdot x))\in p(x)$, \end{center} and, for any $a\models p$, \ $\phi(x)\in p(x)$ \ iff \ $\phi(\bar{G})\cup a^{-1}\cdot\phi(\bar{G})= \bar{G}$. (iii) follows immediately from (ii), and then (iv) follows from Proposition \ref{p03}. \end{proof} \end{document}
math
\begin{document} \title{Enumerating Teams in First-Order Team Logics} \begin{abstract} We start the study of the enumeration complexity of different satisfiability problems in first-order team logics. Since many of our problems go beyond DelP, we use a framework for hard enumeration analogous to the polynomial hierarchy, which was recently introduced by Creignou et al. (Discret. Appl. Math. 2019). We show that the problem to enumerate all satisfying teams of a fixed formula in a given first-order structure is DelNP-complete for certain formulas of dependence logic and independence logic. For inclusion logic formulas, this problem is even in DelP. Furthermore, we study the variants of this problems where only maximal, minimal, maximum and minimum solutions, respectively, are considered. For the most part these share the same complexity as the original problem. An exception is the minimum-variant for inclusion logic, which is DelNP-complete. \end{abstract} \section{Introduction} Decision problems in general ask for the existence of a solution to some problem instance. In contrast, for \emph{enumeration problems} we aim at generating all solutions. For many---or maybe most---real-world tasks, enumeration is therefore more natural or practical to study; we only have to think of the domain of databases where the user is interested in all answer tuples to a database query. Other application areas include web search engines, data mining, web mining, bioinformatics and computational linguistics. From a theoretical point of view, maybe the most important problem is that of enumerating all satisfying assignments of a given propositional formula. Clearly, even simple enumeration problems may produce a big output. The number of satisfying assignments of a formula can be exponential in the length of the formula. In \cite{JPY1988}, different notions of efficiency for enumeration problems were first proposed, the most important probably being $\textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace$ (``polynomial delay''), consisting of those enumeration problems where, for a given instance $x$, the time between outputting any two consecutive solutions as well as pre- and postcomputation times (see \cite{DBLP:conf/foiks/MeierR18}) are polynomially bounded in $|x|$. Another notion of tractability is captured by the class $\textnormal{I}nc\protect\ensuremath{\classFont{P}}\xspace$ where the delay and post-computation time can also depend on the number of solutions that were already output. The separation $\textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace \subsetneq \textnormal{I}nc\protect\ensuremath{\classFont{P}}\xspace$ was mentioned in \cite{Strozecki2010}, although one should note that slighlty different definitions were used there. Several examples of membership results for tractable classes can be found in \cite{DBLP:journals/jcss/LucchesiO78,DBLP:journals/tods/KimelfeldK14,DBLP:journals/fuin/CreignouV15,DBLP:conf/pods/CarmeliKK17,DBLP:conf/csl/BaganDG07,DBLP:conf/pods/DurandSS14}. As a notion of higher complexity, recently an analogue of the polynomial hierarchy for enumeration problems has been introduced \cite{DBLP:journals/dam/CreignouKPSV19}. Lower bounds for enumeration problems are obtained by proving hardness (under a suitable reducibility notion) in a level $\Sigma_k^p$ of that hierarchy for some $k \geq 1$ and are regarded as evidence for intractability. 
Here, we consider enumeration tasks for so-called \emph{team-based logics}, where first-order formulas with free variables are evaluated in a given structure not for a single assignment to these variables but for sets of such assignments; these sets are called \emph{teams}. The logical language is extended by so-called generalised dependency atoms (sometimes referred to as \emph{team atoms}) that allow to specify properties of teams, e.g., that the value of a variable functionally depends on some other variable(s) (the dependence atom $\dep{\dots}$~\cite{DBLP:books/daglib/0030191}), that a variable is independent of some other variable(s) (the independence atom $\perp$~\cite{DBLP:journals/sLogica/GradelV13}), or that the values of a variable occur as values of some other variable(s) (the inclusion atom $\subseteq$~\cite{DBLP:journals/apal/Galliani12}). Team-based logics were introduced by Jouko Väänänen \cite{DBLP:books/daglib/0030191} and have been used for the study of various dependence and independence concepts important in many areas such as database theory and Bayesian networks (see, e.g., the articles in the textbook by Abramsky et~al.~\cite{DBLP:books/daglib/0037838}). For a fixed first-order formula and a given input structure, the complexity of the problem of counting all satisfying teams has been studied by Haak et~al.~\cite{DBLP:conf/mfcs/HaakKMVY19}, where completeness for classes such as $\#\cdot\protect\ensuremath{\classFont{P}}\xspace$ and $\#\cdot\protect\ensuremath{\mathbb{N}}\xspaceP$ was obtained. In the enumeration context, and in analogy to the case of classical propositional logic as above, it is now natural to ask for algorithms to enumerate all satisfying teams of a fixed formula in a given input structure. Enumerating teams for formulas with the above mentioned dependency atom thus means enumerating all sets of tuples in a relational database that fulfil the given Boolean combination of \protect\ensuremath{\classFont{FO}}\xspace-statements and functional dependencies. In this paper, we consider this problem and initiate the study of enumeration complexity for team based logics. Notice that, the task of enumerating teams has been considered before in the propositional setting by Meier and Reinbold~\cite{DBLP:conf/foiks/MeierR18}. We consider team-based logics with the inclusion, the dependence and the independence atom, and study the problems of enumerating all satisfying teams or certain optimal satisfying teams, where optimal can mean maximal or minimal with respect to inclusion or cardinality. Our results are summarised in Table~\ref{tab:summary} on p.~\pageref{tab:summary}. It is known that in terms of expressive power dependence logic corresponds to the class $\protect\ensuremath{\mathbb{N}}\xspaceP$. Hence one cannot expect efficient algorithms for enumerating teams, and in fact, we prove that the problem is $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-complete (i.e.\@\xspace, $\textnormal{D}el\Sigma_1^p$-complete) in all but one variants (enumerating all or optimal satisfying teams). For the remaining variant---enumerating inclusion maximal satisfying teams---we show $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-hardness and sketch $\textnormal{D}el\Sigma^p_2$ membership in the conclusion, the precise complexity remains open. Analogous results hold for independence logic. 
Inclusion logic, however, in a model-theoretic sense is equal to the class $\protect\ensuremath{\classFont{P}}\xspace$ (at least in so-called \emph{lax semantics} \cite{DBLP:conf/csl/GallianiH13}). Consequently, inclusion logic is less expressive than dependence logic (under the assumption $\protect\ensuremath{\classFont{P}}\xspace\neq\protect\ensuremath{\mathbb{N}}\xspaceP$), and the picture in the enumeration context reflects this: We prove that for each inclusion logic formula, there is a polynomial-delay algorithm for enumerating all satisfying teams in a given structure. This is also true when we want to enumerate all maximal, minimal, or maximum satisfying teams. Interestingly, enumerating minimum satisfying teams is $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-complete, as for the other logics we consider. In the next section, we introduce team semantics and the relevant logics. There, we also introduce algorithmic enumeration and the needed complexity classes, and we formally define the enumeration problems we want to classify in this paper. In Sect.~\ref{effenum}, we present an efficient enumeration algorithms for inclusion logic, while Sect.~\ref{hardenum} is devoted to the presentation of our completeness proofs for the class $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$. Finally, we summarise our results and conclude with some open questions. Due to space restrictions, most proofs are only sketched in the paper, but all full details can be found in the appendix. \section{Definitions and Preliminaries} We assume familiarity with basic notations from complexity theory \cite{DBLP:books/daglib/0092426}. We will make use of the complexity classes $\protect\ensuremath{\classFont{P}}\xspace$ and $\protect\ensuremath{\mathbb{N}}\xspaceP$. \subsection{Team logic} A \emph{vocabulary} $\sigma=\{\, R_1^{j_1},\dots, R_k^{j_k}\, \}$ is a finite set of relations with corresponding arities $j_1, \dots j_k \in \mathbb N_+$. A \emph{$\sigma$-structure} $\mathcal{A}=(A,(R_i^\mathcal{A})_{R_i\in\sigma})$ consists of a \emph{universe} $A$ that is a set, and an interpretation of the relations of $\sigma$ in $A$, i.e.\@\xspace, $R_i^\mathcal{A}\subseteq A^{j_i}$ for each $R_i\in\sigma$. Let $D$ be a finite set of first-order variables and $A$ be some set. An \emph{assignment} $s\colon D\to A$ is a function over \emph{domain} $D$ and \emph{codomain} $A$. The algorithms that we construct later assume an arbitrary order on assignments and thereby on singleton teams. For our purposes a lexicographical order suffices. Moreover, if $s\leq t$ and there exists a $1\leq j\leq n$ such that $s(x_j)<t(x_j)$ then we write $s<t$. Given an assignment $s$, a variable $x$ and an element $a$ from $A$, the assignment $s(a/x)\colon D\cup\{x\}\to A$ is defined by $s(a/x)(x)\mathrel{=_{\mathrm{def}}} a$ and $s(a/x)(y)\mathrel{=_{\mathrm{def}}} s(y)$ for $x\neq y$. We call $s(a/x)$ a \emph{supplementing function}. A \emph{team} is a finite set of assignments with common domain and codomain. For a team $X$, let $\max(X)$ be the largest assignment contained in $X$ with respect to the lexicographical order on assignments defined before. Considering a team $X$, a finite set $A$, and a function $F\colon X\to\powerset{A}\setminus\{\emptyset\}$, we then define $X[A/x]$ as the modified team $\{\,s(a/x)\mid s\in X,a\in A\,\}$. Furthermore, we denote by $X[F/x]$ the team $\{\,s(a/x)\mid s\in X,a\in F(s)\,\}$. 
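To make the team operations above concrete, the following is a small illustrative sketch in Python (not part of the formal development; all identifiers are ad-hoc choices for this illustration) of one possible representation of assignments and teams, together with the duplication $X[A/x]$ and supplementation $X[F/x]$ just defined.
\begin{verbatim}
# Illustrative sketch: an assignment is a hashable tuple of (variable, value)
# pairs, and a team is a plain set of such assignments.

def assignment(**values):
    return tuple(sorted(values.items()))

def extend(s, x, a):
    """The supplemented assignment s(a/x)."""
    d = dict(s)
    d[x] = a
    return tuple(sorted(d.items()))

def duplicate(X, A, x):
    """X[A/x]: every assignment of X extended with every value a in A."""
    return {extend(s, x, a) for s in X for a in A}

def supplement(X, F, x):
    """X[F/x] for F mapping each assignment to a non-empty subset of A."""
    return {extend(s, x, a) for s in X for a in F[s]}

# Example: a two-element team over {x, y} with values from {0, 1}.
s1, s2 = assignment(x=0, y=1), assignment(x=1, y=1)
X = {s1, s2}
print(duplicate(X, {0, 1}, 'z'))                    # 4 assignments
print(supplement(X, {s1: {0}, s2: {0, 1}}, 'z'))    # 3 assignments
\end{verbatim}
Representing assignments as sorted tuples of variable--value pairs makes them hashable, so teams can simply be Python sets; under the assumption that variables are ordered alphabetically, a lexicographical order on assignments is then just the built-in tuple order.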
If $X$ is a team whose codomain is the universe of a $\sigma$-structure $\mathcal{A}$, we say $X$ is a team of $\mathcal{A}$. Now, we proceed with the definition of syntax and semantics of first-order team logic. Let $\sigma$ be a vocabulary. Then, the syntax of \emph{first-order team logic}, $\protect\ensuremath{\classFont{FO}}\xspace[\sigma]$, is defined by the following grammar: \begin{equation} \label{eq:fosynt} \varphi ::= x=y\mid x\neq y\mid R(\tu x)\mid \lnot R(\tu x)\mid (\varphi \land \varphi)\mid (\varphi \lor \varphi)\mid \exists x.\varphi\mid \forall x.\varphi, \tag{$\star$} \end{equation} where $\tu x$ is a tuple of first-order variables, $x,y$ are first-order variables, and $R\in\sigma$. Notice that we restricted the syntax to atomic negation. The reason for that restriction is the high complexity of problems on formulas with arbitrary negation symbols both in first-order as well as propositional logic \cite{DBLP:books/daglib/0030191,DBLP:journals/tocl/HannulaKVV18}. \begin{definition}[Team semantics] Let $\sigma$ be a vocabulary, $\mathcal{A}$ be a $\sigma$-structure, $X$ be a team of $\mathcal{A}$, $x,y$ be first-order variables, $\tu x$ be a tuple of first-order variables, $R$ be a relation symbol, and $\varphi,\psi\in\protect\ensuremath{\classFont{FO}}\xspace(\sigma)$. The satisfaction relation $\models_X$ for $\protect\ensuremath{\classFont{FO}}\xspace[\sigma]$-formulas is defined as: \[ \begin{array}{llcl} \mathcal{A} \models_X& x=y &\Leftrightarrow& \forall s\in X\text{ we have that }s(x)=s(y),\\ \mathcal{A} \models_X& x\neq y &\Leftrightarrow& \forall s\in X\text{ we have that }s(x)\neq s(y),\\ \mathcal{A} \models_X& R(\tu x) &\Leftrightarrow& \forall s\in X\text{ we have that }s(\tu x)\in R^\mathcal{A},\\ \mathcal{A} \models_X& \lnot R(\tu x) &\Leftrightarrow& \forall s\in X\text{ we have that }s(\tu x)\notin R^\mathcal{A},\\ \mathcal{A} \models_X& (\varphi\land\psi) &\Leftrightarrow& \mathcal{A}\models_X\varphi\text{ and }\mathcal{A}\models_X\psi,\\ \mathcal{A} \models_X& (\varphi\lor\psi) &\Leftrightarrow& \exists Y,Z\subseteq X\text{ with } Y\cup Z=X\text{ and }\mathcal{A}\models_Y\varphi\text{ and }\mathcal{A}\models_Z\psi,\\ \mathcal{A}\models_X&\forall x.\varphi &\Leftrightarrow& \mathcal{A}\models_{X[A/x]}\varphi,\\ \mathcal{A}\models_X&\exists x.\varphi &\Leftrightarrow& \mathcal{A}\models_{X[F/x]}\varphi\text{ for some }F\colon X\to\powerset{A}\setminus\{\,\emptyset\,\}. \end{array} \] \end{definition} If the underlying vocabulary is clear from the context or not relevant, we usually omit $\sigma$ in the expression $\protect\ensuremath{\classFont{FO}}\xspace[\sigma]$ and write $\protect\ensuremath{\classFont{FO}}\xspace$ instead. Let $\varphi\in\protect\ensuremath{\classFont{FO}}\xspace$ be a first-order team logic formula. We denote by $\free{\varphi}$ the set of \emph{free variables} in $\varphi$. Observe that on singletons, the semantics of $φ_1 \lor φ_2$ resembles that of the classical disjunction. On teams, however, this generalises to the so-called \emph{split junction} operator which literally splits the team into (not necessarily disjoint) parts where each of the formulas $φ_1$ and $φ_2$ has to be satisfied by one of the parts. Notice that the previously defined semantics are called \emph{lax semantics}. Furthermore, observe that the empty team satisfies any formula, and that formulas of the grammar above have the desirable \emph{flatness} property (a team satisfies a formula if and only if\@\xspace every assignment/singleton from the team satisfies the formula).
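The split-junction clause is the only part of the semantics above that involves a genuine choice of subteams. The following Python sketch (purely illustrative; the exhaustive search over pairs of subteams mirrors the definition and is of course exponential) checks the lax clause for $\varphi\lor\psi$ on a finite team, given two callbacks deciding satisfaction of the two disjuncts on subteams.
\begin{verbatim}
from itertools import chain, combinations

def subteams(X):
    """All subteams (subsets) of a finite team X, as frozensets."""
    elems = list(X)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1))]

def sat_split(X, sat_phi, sat_psi):
    """Lax split junction: A |=_X (phi or psi) iff there are subteams
    Y, Z with Y ∪ Z = X such that A |=_Y phi and A |=_Z psi."""
    X = frozenset(X)
    return any(Y | Z == X and sat_phi(Y) and sat_psi(Z)
               for Y in subteams(X) for Z in subteams(X))
\end{verbatim}
The example below, where a team is split into two singletons, can be checked with callbacks that test the literals $R(x,y)$ and $\lnot R(x,y)$ pointwise.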
Note that for a fixed formula $\varphi$ and a given structure $\mathcal{A}$ there are $|\text{dom}(\mathcal{A})|^{|\free\varphi|}$ different assignments, i.e.\@\xspace, a polynomial number of assignments. Since each team is a set of assignments, the size of a team is polynomially bounded as well. Formulae of $\protect\ensuremath{\classFont{FO}}\xspace(\dep{\dots})$ are {\em closed downwards}, i.e., $\mathcal{A}\models_X\varphi$ and $Y\subseteq X$ implies $\mathcal{A}\models_Y\varphi$, and formulae of $\protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$ are {\em closed under unions}, i.e., $\mathcal{A}\models_X\varphi$ and $\mathcal{A}\models_Y\varphi$ implies $\mathcal{A}\models_{X\cup Y}\varphi$ \cite{DBLP:books/daglib/0030191,DBLP:journals/apal/Galliani12}. \begin{example} Consider the formula $\varphi\mathrel{=_{\mathrm{def}}} R(x,y)\lor\lnot R(x,y)$, the structure $\mathcal{A}$ with $R^{\mathcal{A}}=\{\,(0,1),(1,0)\,\}$ and the team $X=\{\,s_1,s_2\,\}$ defined with $s_1(x)=0, s_1(y)=1$, and $s_2(x)=1=s_2(y)$. Then $\mathcal{A}\models_X \varphi$ as we can split $X$ into $X_1=\{\,s_1\,\}$ and $X_2=\{\,s_2\,\}$ such that $\mathcal{A}\models_{X_1} R(x,y)$ and $\mathcal{A}\models_{X_2}\lnot R(x,y)$. \end{example} In addition to the connectives defined in the $\protect\ensuremath{\classFont{FO}}\xspace$-syntax above, we will make use of so-called \emph{generalised dependency atoms}. We will use the \emph{dependence atom} $\dep{\tu x, \tu y}$, the \emph{inclusion atom} $\tu x \subseteq \tu y$ and the \emph{independence atom} $\tu x \perp_{\tu z} \tu y$, where $\tu x, \tu y, \tu z$ are tuples of first-order variables. Now for any subset $A \subseteq \{\,\dep{\dots}, \perp, \subseteq\,\}$, we define $\protect\ensuremath{\classFont{FO}}\xspace(A)$ as first-order logic extended by the respective atoms. More precisely, we extend the grammar (\ref{eq:fosynt}) by adding a rule for each atom in $A$. For example, for $\protect\ensuremath{\classFont{FO}}\xspace(\{\,\subseteq\,\})$ we add the rule $φ ::= \tu x \subseteq \tu y$ for any tuples $\tu x, \tu y$ of \protect\ensuremath{\classFont{FO}}\xspace-variables. For convenience, we often omit the curly brackets and write for example $\protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$ instead of $\protect\ensuremath{\classFont{FO}}\xspace(\{\,\subseteq\,\})$. The logics $\protect\ensuremath{\classFont{FO}}\xspace(\dep{\dots})$, $\protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$ and $\protect\ensuremath{\classFont{FO}}\xspace(\perp)$ are called \emph{dependence logic}, \emph{inclusion logic} and \emph{independence logic}, respectively\xspace. Intuitively, an independence atom expresses that two tuples are independent with respect to a third tuple. A tuple $\tu y$ depends on a tuple $\tu x$, written $\dep{\tu x,\tu y}$, if any two assignments from the team that agree on $\tu x$ also agree on $\tu y$. This is the idea of functional dependency in the database setting. A tuple $\tu x$ is included in a tuple $\tu y$, that is $\tu x\subseteq\tu y$, if for every assignment $t_1$ in the team there exists an assignment $t_2$ such that the value of $\tu x$ under $t_1$ coincides with the value of $\tu y$ under $t_2$. Before we formally define the semantics for these three atoms, we need to introduce a little bit of notation. If $\tu x=(x_1,\dots,x_n)$ is a tuple of first-order variables for $n\in\mathbb N$, and $s$ is an assignment, then $s(\tu x)\mathrel{=_{\mathrm{def}}} (s(x_1),\dots,s(x_n))$.
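The following Python sketch (again purely illustrative; the authoritative semantics are the ones given in the definition below) spells out these three conditions as direct checks on a finite team, with assignments represented as tuples of variable--value pairs as in the sketch above.
\begin{verbatim}
def val(s, xs):
    """s(x1,...,xn): the tuple of values of the variables xs under s."""
    d = dict(s)
    return tuple(d[x] for x in xs)

def sat_dep(X, xs, ys):
    """dep(xs, ys): assignments agreeing on xs also agree on ys."""
    return all(val(s, ys) == val(t, ys)
               for s in X for t in X if val(s, xs) == val(t, xs))

def sat_inc(X, xs, ys):
    """xs ⊆ ys: every value of xs also occurs as a value of ys."""
    ys_values = {val(t, ys) for t in X}
    return all(val(s, xs) in ys_values for s in X)

def sat_ind(X, xs, ys, zs):
    """xs ⊥_zs ys: for all s, t agreeing on zs there is u in X taking
    the xs- and zs-values of s and the ys-values of t."""
    return all(any(val(u, xs) == val(s, xs) and
                   val(u, ys) == val(t, ys) and
                   val(u, zs) == val(s, zs) for u in X)
               for s in X for t in X if val(s, zs) == val(t, zs))
\end{verbatim}
All three checks are clearly polynomial in the size of the team, which is the observation used in Lemma~\ref{vernp} below.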
\begin{definition}[Generalised dependency atoms semantics] Let $\sigma$ be a vocabulary, $\mathcal{A}$ be a $\sigma$-structure, $X$ be a team of $\mathcal{A}$, and $\tu x, \tu y, \tu z$ be tuples of first-order variables. The satisfaction relation $\models_X$ for $\protect\ensuremath{\classFont{FO}}\xspace(\sigma)$-formulas then is extended as follows: \[ \begin{array}{llcl} \mathcal{A} \models_X& \tu x\perp_{\tu z}\tu y &\Leftrightarrow& \forall s,t\in X\text{ with }s(\tu z)=t(\tu z)\;\; \exists u\in X\text{ such that }\\&&& u(\tu x)=s(\tu x),\; u(\tu y) =t(\tu y),\; u(\tu z)=s(\tu z).\\ \mathcal{A} \models_X& \tu x\subseteq\tu y &\Leftrightarrow& \forall s\in X\;\exists t\in X\text{ such that }s(\tu x)=t(\tu y).\\ \mathcal{A} \models_X& \dep{\tu x,\tu y} &\Leftrightarrow& \forall s,t\in X\text{ we have that }s(\tu x)=t(\tu x)\text{ implies }s(\tu y)=t(\tu y). \end{array} \] \end{definition} In the following, we define the model checking problem on the level of first-order team logic formulas in the setting of data complexity (fixed formula). \problemdefdec{$\protect\ensuremath{\prob{VerifyTeam}}\xspace_\varphi$}{Structure $\mathcal{A}, \text{ team }X$}{$\mathcal{A} \models_X \varphi \text{ and } X \neq \emptyset?$} \begin{lemma}\label{vernp} Let $A\subseteq\{\,\perp,\subseteq,\dep{\dots}\,\}$, $\varphi\in\protect\ensuremath{\classFont{FO}}\xspace(A)$. Then $\protect\ensuremath{\prob{VerifyTeam}}\xspace_\varphi\in\protect\ensuremath{\mathbb{N}}\xspaceP$. \end{lemma} \begin{proof} Every fixed formula is of bounded width (width is the maximal number of free variables in subformulas of a given formula). As all of the generalised dependency atoms in $A$ can be evaluated in polynomial time, a result from Grädel~\cite[Theorem 5.1]{DBLP:journals/tcs/Gradel13} applies, yielding $\protect\ensuremath{\prob{VerifyTeam}}\xspace_\varphi\in\protect\ensuremath{\mathbb{N}}\xspaceP$. \end{proof} Our algorithms often start with either $\emptyset$ or $\text{dom}(\mathcal{A})^{|\free\varphi|}$ (the full team) as one of their inputs, for a fixed formula $\varphi$ and a structure $\mathcal{A}$. Instead of $\text{dom}(\mathcal{A})^{|\free\varphi|}$ we will write $\protect\ensuremath{\mathbb{X}}\xspace.$ The following proposition summarises important results from literature that are referenced later in proofs. It mainly states key connection between team logics and predicate logic, also mentioning descriptive complexity results that are consequences of these connections. \begin{proposition}[\cite{DBLP:journals/apal/Galliani12,DBLP:journals/jolli/KontinenV09,DBLP:conf/csl/GallianiH13}]\label{ind2sigma11}\label{expr} \ \\ \begin{enumerate} \item\label{propit:dep=ind=np} Over sentences both $\protect\ensuremath{\classFont{FO}}\xspace(\perp)$ and $\protect\ensuremath{\classFont{FO}}\xspace(\dep{\dots})$ are expressively equivalent to $\Sigma^1_1$: Every $\sigma$-sentence of $\protect\ensuremath{\classFont{FO}}\xspace(\perp)$ (or $\protect\ensuremath{\classFont{FO}}\xspace(\dep{\dots})$) is equivalent to a $\sigma$-sentence $\psi$ of $\Sigma^1_1$, i.e., for any $\sigma$-structure $\mathcal{A}$, $\mathcal{A}\models\varphi\iff\mathcal{A}\models\psi,$ and vice versa. As a consequence of Fagin's Theorem \cite{fagingeneralized}, over finite structures both $\protect\ensuremath{\classFont{FO}}\xspace(\perp)$ and $\protect\ensuremath{\classFont{FO}}\xspace(\dep{\dots})$ capture \protect\ensuremath{\mathbb{N}}\xspaceP. 
\item\label{myopic} Let $\varphi(R)$ be a \emph{myopic} $\sigma$-formula, that is, $\varphi(R)=\forall \tu{x} (R(\tu{x})\rightarrow \psi(R,\tu{x}))$, where $\psi$ is a first-order $\sigma$-formula with only positive occurrences of $R$. Then there exists a $\sigma$-formula $\chi \in \protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$ such that for all $\sigma$-structures $\mathcal{A}$ and all teams $X$ we have $\mathcal{A} \models_X \chi(\tu{x}) \Leftrightarrow \mathcal{A}, \textnormal{rel}(X) \models \varphi(R)$. \end{enumerate} \end{proposition} \subsection{Enumeration} For the basics of enumeration complexity theory, we follow Creignou et al. \cite{DBLP:journals/dam/CreignouKPSV19}. In contrast to decision problems, where one gets an input and often has to answer whether there is a ``solution'' to the input, for enumeration problems one has to compute the set of all solutions to the input. As an example, see the difference between the decision problem $\protect\ensuremath{\prob{Sat}}\xspace^{\mathrm{team}}_\varphi$ and the enumeration problem $\protect\ensuremath{\prob{E-Sat}}\xspace^{\mathrm{team}}_\varphi$. \problemdefdec{$\protect\ensuremath{\prob{Sat}}\xspace^{\mathrm{team}}_\varphi$}{Structure $\mathcal{A}$}{$\{\,X \mid \mathcal{A} \models_{X} \varphi \text{ and } X \neq \emptyset\,\} \neq \emptyset?$} \problemdef{$\protect\ensuremath{\prob{E-Sat}}\xspace^{\mathrm{team}}_\varphi$}{Structure $\mathcal{A}$}{$\{\,X \mid \mathcal{A} \models_X \varphi \text{ and } X \neq \emptyset\,\}$} Note that for all our problem definitions, if not otherwise stated, φ is a formula from $\protect\ensuremath{\classFont{FO}}\xspace(A)$ for some $A \subseteq \{\,\dep{\dots}, \subseteq, \perp\,\}$. As these sets can get exponentially large compared to the input, classical measures (like the runtime of the machine/algorithm) will not suffice. To be able to talk about tractability and intractability of problems in the enumeration setting we need to define new classes. The idea is that we will not bound the time of the whole computation, but the time of the computations between the outputs of two consecutive solutions, which we will call \emph{delay}. Instead of Turing machines we will use random access machines (RAMs), to be able to access the (potentially) exponential ``memory'' in polynomial time. \begin{definition}[\cite{DBLP:journals/dam/CreignouKPSV19}] Let $C$ be a decision complexity class. The enumeration class $\textnormal{D}el C$ consists of all enumeration problems $E$ for which there exist a polynomial $p$ and a RAM $M$ with oracle $L\in C$ such that for all inputs $x$, $M$ enumerates the output set of $E$ with $p(|x|)$ delay and all oracle queries are bounded by $p(|x|).$ \end{definition} \begin{example} We show $\protect\ensuremath{\prob{E-Sat}}\xspace \in \textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$. \problemdef{$\protect\ensuremath{\prob{E-Sat}}\xspace$}{Propositional formula $\varphi$}{$\{\,\beta \mid \beta \models \varphi\,\}$} Let $\varphi$ be our input formula over the variables $x_1, \dots, x_n$. We start by assigning the value $0$ to variable $x_1$ and ask the oracle $\protect\ensuremath{\prob{Sat}}\xspace$ (satisfiability of propositional formulas) if the resulting formula is satisfiable. If the answer is ``no'', we know that there is no satisfying assignment for $\varphi$ which assigns the value $0$ to variable $x_1$, and we therefore ask the oracle again, this time assigning the value $1$ to variable $x_1$.
If the answer is ``yes'' we continue by assigning the value $0$ to variable $x_2$ and ask our oracle again. That means for each ``yes'' we go one step down in the tree of assignments and assign the value $0$ to the next variable. If the answer is ``no'' and we did not assign the value $1$ to the current variable before, then we assign the value $1$ to it this time; if the answer is ``no'' and we already assigned the value $1$ to the current variable before, we go one step up in the tree of assignments. If at some point we assigned all variables and get the answer ``yes'', we output the current (satisfying) assignment. If we have gone through all assignments this way we output that there is no further satisfying assignment and halt. We now have to argue that this method has polynomial delay, that the oracle questions are polynomially bounded, and that the oracle is in $\protect\ensuremath{\mathbb{N}}\xspaceP$. The last point is the easiest, since $\protect\ensuremath{\prob{Sat}}\xspace \in \protect\ensuremath{\mathbb{N}}\xspaceP.$ The oracle questions have the same length as the input formula and are therefore polynomially bounded. To get from one satisfying assignment to another we have to go up and down the whole tree of assignments once in the worst case. Since the depth is $n$ this takes $p(n)$ time. The method is called flashlight or torchlight search, and our algorithms in Sections~\ref{effenum} and~\ref{hardenum} showing membership for $\textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace$ and $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$ will be based on it. \end{example} To be able to show hardness for our new classes we need a suitable definition of reducibility. The reduction we use is quite similar to a Turing reduction in the decision case. For this we give a machine access to an enumeration oracle to solve another enumeration problem. The kind of machine we use here is called an \emph{enumeration oracle machine} (EOM), which is a RAM with some new special registers: an infinite number of registers for the oracle questions and one register for the answer. The machine can write an oracle question into the respective registers (one bit per register) and in one step the answer appears in the register for the answer. If there are further solutions to the question that were not given before, the answer is such a solution. Otherwise, the answer is a special symbol, meaning that all solutions have been given. The machines that we use are also \emph{oracle-bounded}, that is, all oracle questions are polynomial in the size of the input. \begin{definition}[\cite{DBLP:journals/dam/CreignouKPSV19}] Let $E_1,E_2$ be enumeration problems. We say that $E_1$ reduces to $E_2$ via $D$-reductions, $E_1 \le_{\textnormal{D}} E_2$, if there is an oracle-bounded EOM $M$ that enumerates $E_1$ using oracle $E_2$ with polynomial delay and independently of the order in which the $E_2$-oracle enumerates its answers. \end{definition} \begin{proposition}[\cite{DBLP:journals/dam/CreignouKPSV19}] The class $\textnormal{D}el \Sigma_k^p$ is closed under $D$-reductions for any $k \in \mathbb{N}$. \end{proposition} Let $\prob{E}$ be the enumeration problem, given input $x$, to output the set of solutions $S(x)$. We denote by $\prob{Exist-E}$ the problem to decide, given $x$, whether $|S(x)| \geq 1$. \begin{proposition}[\cite{DBLP:journals/dam/CreignouKPSV19}]\label{existhard} Let $E$ be an enumeration problem and $k \ge 1$ such that $\prob{Exist-E}$ is $\Sigma_k^p$-hard.
Then we have that $E$ is $\textnormal{D}el\Sigma_k^p$-hard under $D$-reductions. \end{proposition} We slightly generalise this result: \begin{theorem}\label{arbhard} Let $A$ be a $\Sigma_k^p$-hard decision problem and let $E$ be an enumeration problem such that $A$ can be decided in polynomial time by an algorithm that has access to oracle $E$. Then it holds that $E$ is $\textnormal{D}el\Sigma_k^p$-hard under $D$-reductions. \end{theorem} \begin{proof} The proof is essentially the same as the one for Prop.~\ref{existhard}. Let $B \in \textnormal{D}el\Sigma_k^p$ and let $L\in\Sigma_k^p$ be a witness for $B \in \textnormal{D}el\Sigma_k^p$, that is, there is an algorithm with access to oracle $L$ that enumerates $B$ with polynomial delay. Since $A$ is $\Sigma_k^p$-hard and by the precondition of the theorem ($A$ can be decided in polynomial time by an algorithm with an $E$-oracle), we can answer the oracle questions to $L$ by asking $E$ instead. It follows that $B$ can be enumerated by an algorithm with an $E$-oracle with polynomial delay. \end{proof} We will close this subsection by defining four more enumeration problems. In the following two sections we analyse the complexity of the defined problems for our different logics. \problemdef{$\protect\ensuremath{\prob{E-MaxSat}}\xspace^{\mathrm{team}}_\varphi$}{Structure $\mathcal{A}$}{$\{\,X \mid \mathcal{A} \models_X \varphi, X \neq \emptyset \text{ and } \forall X' \ X \subsetneq X' \Rightarrow \mathcal{A} \not\models_{X'} \varphi\,\}$} \problemdef{$\protect\ensuremath{\prob{E-CMaxSat}}\xspace^{\mathrm{team}}_\varphi$}{Structure $\mathcal{A}$}{$\{\,X \mid \mathcal{A} \models_X \varphi, X \neq \emptyset \text{ and } \forall X' \ |X'|> |X| \Rightarrow \mathcal{A} \not\models_{X'}\varphi \,\}$} \noindent The dual problems $\protect\ensuremath{\prob{E-MinSat}}\xspace^{\mathrm{team}}_φ$ and $\protect\ensuremath{\prob{E-CMinSat}}\xspace^{\mathrm{team}}_φ$ require the conditions $\forall X'\neq \emptyset \ X' \subsetneq X \Rightarrow \mathcal{A} \not\models_{X'} \varphi$ and $\forall X'\neq \emptyset \ |X'|< |X| \Rightarrow \mathcal{A} \not\models_{X'}\varphi$ instead, respectively. \section{Efficient Enumeration}\label{effenum} In this section, we study the class $\textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace$. All the results are for inclusion logic and rely on the fact that $\protect\ensuremath{\prob{MaxSubTeam}}\xspace$---the problem to compute the maximal subteam of a given team satisfying a given inclusion logic formula in a given structure---is computable in polynomial time. This was shown for modal propositional inclusion logic \cite{DBLP:journals/logcom/HellaKMV19}. Our case can be proven similarly by induction. Usually this result is not usable for satisfiability since one has to give $\protect\ensuremath{\prob{MaxSubTeam}}\xspace$ the full team $\mathbb{X}$, which is exponentially large compared to a given formula, but since we fix the formula this is not a problem. Note that for inclusion logic the maximal satisfying team is unique: if $X$ and $X'$ are two distinct satisfying teams, then $X \cup X'$ is also satisfying due to union closure, so $X$ and $X'$ cannot both be maximal with respect to cardinality or inclusion.
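As a purely illustrative sketch of the fixpoint idea behind the tractability of $\protect\ensuremath{\prob{MaxSubTeam}}\xspace$ (under the strong simplifying assumption that $\varphi$ is merely a conjunction of inclusion atoms, which is far more restrictive than the general case treated in \cite{DBLP:journals/logcom/HellaKMV19}), the maximal satisfying subteam can be obtained by repeatedly discarding assignments that violate some atom: every satisfying subteam survives each round, so the resulting fixpoint is the unique maximal satisfying subteam. The formal problem definition follows below.
\begin{verbatim}
def val(s, xs):
    d = dict(s)
    return tuple(d[x] for x in xs)

def max_subteam_inclusion(X, atoms):
    """Maximal subteam of X satisfying a conjunction of inclusion
    atoms; atoms is a list of pairs (xs, ys) encoding xs ⊆ ys."""
    team = set(X)
    changed = True
    while changed:
        changed = False
        for xs, ys in atoms:
            ys_values = {val(t, ys) for t in team}
            bad = {s for s in team if val(s, xs) not in ys_values}
            if bad:
                team -= bad
                changed = True
    return team
\end{verbatim}
Each iteration of the while-loop either removes at least one assignment or terminates, so there are at most $|X|+1$ rounds, each polynomial in the size of the team; this is the kind of reasoning that makes the computation polynomial once the formula is fixed.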
\problemdef{$\protect\ensuremath{\prob{MaxSubTeam}}\xspace_\varphi$}{Structure $\mathcal{A}, \text{ team }X$}{$X'$ with $\mathcal{A} \models_{X'} \varphi, X' \subseteq X \text{ and } \forall X''\subseteq X \colon |X''|>|X'| \Rightarrow \mathcal{A}\not\models_{X''}\varphi $} In our algorithms we use $\protect\ensuremath{\prob{MaxSubTeam}}\xspace_\varphi$ as an oracle, but one could also call it as a subroutine, since $\textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace^\protect\ensuremath{\classFont{P}}\xspace=\textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace.$ \begin{theorem}\label{inclP} For any formula $\varphi \in \protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$ it holds that $\protect\ensuremath{\prob{E-Sat}}\xspace^{\mathrm{team}}_\varphi \in \textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace.$ \end{theorem} \begin{proof} We construct a recursive algorithm with access to a $\protect\ensuremath{\prob{MaxSubTeam}}\xspace$ oracle that on input $(\mathcal{A},X,Y)$ enumerates all satisfying subteams $X' \neq \emptyset$ of $X$ with $Y\subseteq X'$. To compute for a given $\mathcal{A}$ all satisfying subteams, we then need to run this algorithm on input $(\mathcal{A}, \protect\ensuremath{\mathbb{X}}\xspace, \emptyset)$. \begin{algorithm}[H] \textnormal{D}ontPrintSemicolon \label{alg:esatinc} \caption{Algorithm used to show $\protect\ensuremath{\prob{E-Sat}}\xspace^{\mathrm{team}}_φ \in \textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace$ for $φ \in \protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$} \SetKwProg{Fn}{Function}{}{end} \Fn{\textnormal{EnumerateSubteams}(structure $\mathcal{A}$, teams $X, Y$) with oracle $\protect\ensuremath{\prob{MaxSubTeam}}\xspace$} { $X\gets\protect\ensuremath{\prob{MaxSubTeam}}\xspace_\varphi(\mathcal{A},X)$\; \textnormal{I}f{$X\neq \emptyset \wedge Y \subseteq X$}{ output $X$\; \For{$s\in X$}{ $Y = \{\,s' \mid s' < s \wedge s'\in X \,\} $ \; EnumerateSubteams$(\mathcal{A},X\setminus\{\,s\,\},Y)$}} } \end{algorithm} The algorithm does not output any solution more than once. In the recursive calls, it only outputs solutions where at least one assignment is omitted from the maximal solution, which is the only solution output before. Also, when the assignment $s$ is chosen in the for-loop, the next recursive call only outputs solutions that omit $s$, but contain all assignments $s' < s$ that were present in $X$. In contrast, in every solution found in previous recursive calls, at least one of the assignments $s' < s$ from $X$ was omitted. On the other hand, the algorithm outputs every solution at least once. Every solution is a subset of the maximal satisfying subteam of $X$ and the algorithm starts with that maximal solution and then recursively looks for all strict subsets of it. This can be seen by noticing that when choosing the assignment $s$ in the for-loop, the next recursive call outputs all satisfying subteams of $X$ that exclude $s$, except for those that also exclude some $s' < s$ from $X$ and were hence output before. \end{proof} \begin{theorem}\label{inclmaxP} Let $\varphi \in \protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$. 
Then $\protect\ensuremath{\prob{E-MinSat}}\xspace^{\mathrm{team}}_\varphi \in \textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace.$ \end{theorem} \begin{proof} This can be proven similar to Theorem~\ref{inclP} by slightly modifying Algorithm~\ref{alg:esatinc} such that\@\xspace it takes input $(\mathcal{A},X,Y)$ and computes all inclusion minimal satisfying subteams $X'\neq\emptyset$ of $X$ with $Y\subseteq X'$. The only change needed for this is that it only outputs a team $X$, if \protect\ensuremath{\prob{MaxSubTeam}}\xspace answers $\emptyset$ for all $X\setminus\{\,s\,\}$, where $ s\in X$. \begin{algorithm} \caption{Algorithm used to show $\protect\ensuremath{\prob{E-MinSat}}\xspace^{\mathrm{team}}_φ \in \textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace$} \label{alg:minsatincl} \SetKwProg{Fn}{Function}{}{end} \textnormal{D}ontPrintSemicolon \Fn{\textnormal{EnumerateMinSubteams}($\text{structure } \mathcal{A}, \text{ teams } X, Y$) with oracle $\protect\ensuremath{\prob{MaxSubTeam}}\xspace$} { $X\gets\text{\protect\ensuremath{\prob{MaxSubTeam}}\xspace}(\mathcal{A},X)$\; \textnormal{I}f{$X\neq \emptyset \wedge Y \subseteq X$}{ \lIf{$\forall s \in X \ \textnormal{\protect\ensuremath{\prob{MaxSubTeam}}\xspace}(\mathcal{A}, X \setminus \{\,s\,\})=\emptyset$}{output $X$}\Else{ \For{$s\in X$}{ $Y = \{\,s' \mid s' < s \wedge s'\in X \,\} $ \; EnumerateMinSubteams$(\mathcal{A},X\setminus\{\,s\,\},Y)$}}} } \end{algorithm} \end{proof} The next result follows from the fact, that \protect\ensuremath{\prob{MaxSubTeam}}\xspace can be computed in polynomial time, since the solution set only consists of the maximal satisfying team for both problems. \begin{theorem} For $\varphi \in \protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$ the problems $\protect\ensuremath{\prob{E-MaxSat}}\xspace_\varphi^{\mathrm{team}}, \protect\ensuremath{\prob{E-CMaxSat}}\xspace_\varphi^{\mathrm{team}}$ are included in $\textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace$. \end{theorem} Note that there is an enumeration problem we did not mention in this section, which is $\protect\ensuremath{\prob{E-CMinSat}}\xspace_\varphi^{\mathrm{team}}$. This is due to the fact, that this problem is actually $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-complete as we will see in the next section. \section{A Characterisation of DelNP}\label{hardenum} We show that for certain formulas the problem $\protect\ensuremath{\prob{E-Sat}}\xspace_\varphi^{\mathrm{team}}$ captures the class $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$. Moreover, we will extend this result to all remaining cases, that is, all combinations of logics and problems we did not classify already in Section~\ref{effenum}. \begin{theorem}\label{sathard} Let $A\subseteq \{\,\dep{\dots},\perp\,\}, A \neq \emptyset$. There exists a formula $\varphi \in \protect\ensuremath{\classFont{FO}}\xspace(A)$ such that the problem $\protect\ensuremath{\prob{Sat}}\xspace^{\mathrm{team}}_\varphi$ is $\protect\ensuremath{\mathbb{N}}\xspaceP$-hard. \end{theorem} \begin{proof} We show the result for $A = \{\,\perp\,\}$. The proof for $A = \{\,\dep{\dots}\,\}$ works analogously by reducing from the $\protect\ensuremath{\mathbb{N}}\xspaceP$-complete problem $\protect\ensuremath{\Sigma_1\prob{-CNF}}\xspace^-$, that is, given a propositional formula $φ \in \protect\ensuremath{\Sigma_1\prob{-CNF}}\xspace^-$, decide whether φ is satisfiable. 
Here, $\protect\ensuremath{\Sigma_1\prob{-CNF}}\xspace$ is the class of propositional formulas with existential quantifiers in prenex normal form and where the quantifier-free part is in conjunctive normal form. The negative fragment $\protect\ensuremath{\Sigma_1\prob{-CNF}}\xspace^-$ further restricts formulas by allowing free variables to only occur negatively. We reduce from the $\protect\ensuremath{\mathbb{N}}\xspaceP$-complete problem $\protect\ensuremath{\prob{CNF}}\xspace\prob{-}\protect\ensuremath{\prob{Sat}}\xspace$ to the problems $\protect\ensuremath{\prob{Sat}}\xspace_φ^\mathrm{rel}$ and $\protect\ensuremath{\prob{Sat}}\xspace_φ^{\mathrm{rel}*}$ for some $φ \in \Sigma_1^1$; see below for formal definitions. By Proposition~\hyperref[propit:dep=ind=np]{\ref*{ind2sigma11} item \ref*{propit:dep=ind=np}} we get that $\protect\ensuremath{\prob{Sat}}\xspace_{\varphi'}^{\mathrm{team}}$ is $\protect\ensuremath{\mathbb{N}}\xspaceP$-hard, for a formula $\varphi' \in \protect\ensuremath{\classFont{FO}}\xspace(\perp)$. Let $\varphi$ be a $\Sigma_1^1$-formula. \problemdefdec{$\protect\ensuremath{\prob{Sat}}\xspace_\varphi^\mathrm{rel}$}{Structure $\mathcal{A}$}{$\{\,R \mid \mathcal{A}, R \models \varphi \,\} \neq \emptyset?$} \problemdefdec{$\protect\ensuremath{\prob{Sat}}\xspace_\varphi^{\mathrm{rel}*}$}{Structure $\mathcal{A}$}{$\{\, R \mid \mathcal{A}, R \models \varphi \text{ and } R\neq \emptyset\,\} \neq \emptyset?$} Let $\psi(x_1,\dots,x_n)=\bigwedge_{i=1}^m C_i$ be a propositional formula in conjunctive normal form, with $C_i=\bigvee_{j} l_{i,j}.$ We encode $\psi$ via the structure $\mathcal{A}(ψ)=(\{\,x_1,\dots,x_n,C_1,\dots,C_m\,\}, P^2,N^2)$, where $(C,x) \in P$ ($(C,x) \in N$) if and only if\@\xspace variable $x$ occurs positively (negatively) in clause $C$. We define the following $\Sigma_1^1$-formula $χ(R)$ over vocabulary $(P^2, N^2)$: \[\chi(R)= \forall C \ \exists x \ ((P(C,x) \wedge R(x)) \vee (N(C,x) \wedge \neg R(x))).\] Now, we have that $\exists R\colon \mathcal{A}(ψ), R \models χ(R) \iff ψ$ is satisfiable, showing $\protect\ensuremath{\prob{CNF}}\xspace\prob{-}\protect\ensuremath{\prob{Sat}}\xspace \le_m^p \protect\ensuremath{\prob{Sat}}\xspace_\chi^\mathrm{rel}$. Next, we will show $\protect\ensuremath{\mathbb{N}}\xspaceP$-hardness for $\protect\ensuremath{\prob{Sat}}\xspace^{\mathrm{rel}*}_{\chi'}$, where $\chi'(R)=\chi(R)\vee\chi(\emptyset)$. This follows from an easy reduction from $\protect\ensuremath{\prob{Sat}}\xspace^{\mathrm{rel}}_{\varphi}$ to $\protect\ensuremath{\prob{Sat}}\xspace^{\mathrm{rel}*}_{\varphi'}$ which holds for all $\varphi \in \Sigma_1^1$. Let $\varphi'(R)=\varphi(R)\vee \varphi(\emptyset)$. Now, for all structures $\mathcal{A}$ we claim that $\exists R \colon \mathcal{A}, R\models \varphi(R) \iff \exists R'\neq \emptyset \colon \mathcal{A}, R'\models \varphi'(R')$. ``$\Rightarrow$'': If $\mathcal{A}, R \models \varphi(R)$ only holds for $R=\emptyset$, then $\mathcal{A}, R' \models \varphi'(R')$ holds for any $R'$, in particular for any $R'\neq \emptyset.$ If $\mathcal{A}, R \models \varphi(R)$ for some $R\neq\emptyset$, then $\mathcal{A}, R \models \varphi'(R)$ also holds. ``$\Leftarrow$'' (by contraposition): Suppose $\mathcal{A}, R \not\models \varphi(R)$ for all $R$; in particular we have $\mathcal{A}, \emptyset \not\models \varphi(\emptyset)$. This immediately shows $\mathcal{A}, R' \not\models \varphi'(R')$ for all $R'$.
\end{proof} \begin{corollary}\label{delnphard} For $A\subseteq \{\,\dep{\dots},\perp\,\}, A \neq \emptyset $ there exists a formula $\varphi\in \protect\ensuremath{\classFont{FO}}\xspace(A)$ such that the problems $\protect\ensuremath{\prob{E-Sat}}\xspace_\varphi^{\mathrm{team}}, \protect\ensuremath{\prob{E-MaxSat}}\xspace_\varphi^{\mathrm{team}}, \protect\ensuremath{\prob{E-CMaxSat}}\xspace_\varphi^{\mathrm{team}}, \protect\ensuremath{\prob{E-MinSat}}\xspace_\varphi^{\mathrm{team}}, \protect\ensuremath{\prob{E-CMinSat}}\xspace_\varphi^{\mathrm{team}}$ are $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-hard. \end{corollary} \begin{proof} By Theorem~\ref{sathard}, there is a formula $\varphi \in \protect\ensuremath{\classFont{FO}}\xspace(A)$ (with $A\subseteq \{\,\dep{\dots},\perp\,\}$) such that $\protect\ensuremath{\prob{Sat}}\xspace_\varphi^{\mathrm{team}}$ is $\protect\ensuremath{\mathbb{N}}\xspaceP$-hard. Since $\protect\ensuremath{\prob{Sat}}\xspace_\varphi^{\mathrm{team}}$ can be decided in polynomial time by an algorithm with oracle access to any of the problems mentioned in this corollary (simply ask the oracle and return ``no'' if and only if\@\xspace the output is $\bot$), by Theorem~\ref{arbhard}, it follows that all of these problems are $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-hard. \end{proof} \begin{theorem}\label{mem} For $A=\{\,\perp,\dep{\dots},\subseteq\,\}$ and $\varphi \in \protect\ensuremath{\classFont{FO}}\xspace(A)$, we have that $\protect\ensuremath{\prob{E-Sat}}\xspace^{\mathrm{team}}_\varphi \in \textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP.$ \end{theorem} \begin{proof} We give a recursive algorithm enumerating $\protect\ensuremath{\prob{E-Sat}}\xspace^{\mathrm{team}}_\varphi$ with polynomial delay, when given oracle access to $\protect\ensuremath{\prob{ExtendTeam}}\xspace_{\varphi}$ (for definition see below) and $\protect\ensuremath{\prob{VerifyTeam}}\xspace_\varphi$. \problemdefdec{$\protect\ensuremath{\prob{ExtendTeam}}\xspace_\varphi$}{Structure $\mathcal{A}, \text{ team }X, \text{ set of assignments }Y$}{$\{\,X' \mid \mathcal{A} \models_{X'} \varphi, X \subsetneq X' \text{ and } X' \cap Y = \emptyset \,\} \neq \emptyset?$} $\protect\ensuremath{\prob{ExtendTeam}}\xspace_φ \in \protect\ensuremath{\mathbb{N}}\xspaceP$ for all φ: A team $X'$ is guessed and $X \subsetneq X' ∧ X' \cap Y = \emptyset$ can be checked in polynomial time. Finally, $\mathcal{A} \models_{X'} \varphi$ can be decided in $\protect\ensuremath{\mathbb{N}}\xspaceP$ by Lemma~\ref{vernp}. We now construct an algorithm that gets a structure $\mathcal{A}$ and a team $X$ as inputs and outputs all satisfying teams $X'$ with $X\subseteq X'$ and $X'\setminus X \subseteq \{\,s \in \text{dom}(\mathcal{A})^k \mid s > \textnormal{max}(X)\,\}$, that is, $X'$ only contains new assignments that are larger than the largest assignment in $X$. The algorithm searches these teams $X'$ by using recursive calls where exactly one assignment $s > \max(X)$ is added to $X$. By design, the recursive call where $s'$ is added only outputs teams that contain $s'$ and no assignment between $\max(X)$ and $s'$, ensuring that no team is output twice. We run the algorithm with input $(\mathcal{A}, \emptyset)$ to get all satisfying teams. 
\begin{algorithm}[H]\label{alg:esat} \textnormal{D}ontPrintSemicolon \caption{Algorithm used to show $\protect\ensuremath{\prob{E-Sat}}\xspace^{\mathrm{team}}_φ \in \textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$ for $φ \in \protect\ensuremath{\classFont{FO}}\xspace(A)$} \SetKwProg{Fn}{Function}{}{end} \Fn{\textnormal{EnumerateSuperteams}(structure $\mathcal{A}$, team $X$) with oracles $\protect\ensuremath{\prob{ExtendTeam}}\xspace_{\varphi}$ and $\protect\ensuremath{\prob{VerifyTeam}}\xspace_{\varphi}$}{ $Y=\{\,s \mid s < \text{max}(X) \wedge s \not\in X\,\}$\; \lIf{$\protect\ensuremath{\prob{VerifyTeam}}\xspace_\varphi(\mathcal{A},X)$}{output $X$} \textnormal{I}f{$\protect\ensuremath{\prob{ExtendTeam}}\xspace_\varphi(\mathcal{A},X,Y)$}{ \ForAll{$s > \textnormal{max}(X)$}{ $\text{EnumerateSuperteams}(\mathcal{A},X \cup \{\,s\,\})$}} } \end{algorithm} \end{proof} \begin{theorem} \label{memcardmax} For $A=\{\,\perp,\dep{\dots},\subseteq\,\}$, $\varphi \in \protect\ensuremath{\classFont{FO}}\xspace(A)$, we have that $\protect\ensuremath{\prob{E-CMaxSat}}\xspace^{\mathrm{team}}_\varphi \in \textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$. \end{theorem} \begin{proof} There is a recursive algorithm that on input $(\mathcal{A},X,k)$ enumerates all satisfying superteams of $X$ having cardinality $k$ with polynomial delay. The algorithm is very similar to the one used for Theorem~\ref{mem}. The only differences are that $|X|=k$ is checked before a team $X$ is output and that $\protect\ensuremath{\prob{ExtendCMaxTeam}}\xspace_{\varphi}$ is used as the oracle instead of $\protect\ensuremath{\prob{ExtendTeam}}\xspace_{\varphi}$. \problemdef{$\protect\ensuremath{\prob{ExtendCMaxTeam}}\xspace_\varphi$}{Structure $\mathcal{A}, \text{ team }X, \text{ set of assignments } Y, \text{ natural number } k $}{$\{\,X' \mid \mathcal{A} \models_{X'} \varphi, X \subsetneq X' , X' \cap Y = \emptyset \text{ and } |X'|=k\,\} \neq \emptyset $} \begin{algorithm}\label{enumcardmaxteams} \caption{Algorithm used to show $\protect\ensuremath{\prob{E-CMaxSat}}\xspace^{\mathrm{team}}_\varphi \in \textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$} \label{alg:cardmax} \SetKwProg{Fn}{Function}{}{end} \textnormal{D}ontPrintSemicolon \Fn{\textnormal{EnumerateCMaxTeams}($\text{structure }\mathcal{A},\text{ team }X,\text{ natural number }k$) with oracles $\protect\ensuremath{\prob{ExtendCMaxTeam}}\xspace_{\varphi}$ and $\protect\ensuremath{\prob{VerifyTeam}}\xspace_{\varphi}$}{ $Y=\{\,s \mid s < \text{max}(X) \wedge s \not\in X\,\}$\; \lIf{$\protect\ensuremath{\prob{VerifyTeam}}\xspace_\varphi(\mathcal{A},X)\wedge |X|=k$}{output $X$} \ElseIf{$\protect\ensuremath{\prob{ExtendCMaxTeam}}\xspace_\varphi(\mathcal{A},X,Y,k)$}{ \For{$s > \textnormal{max}(X)$}{ $\textnormal{EnumerateCMaxTeams}(\mathcal{A},X \cup \{\,s\,\},k)$}}} \end{algorithm} The maximum cardinality $k$ can be computed by asking the $\protect\ensuremath{\prob{ExtendCMaxTeam}}\xspace_\varphi$ oracle on input $(\mathcal{A},\emptyset,\emptyset,i)$ for $i=|\text{dom}(\mathcal{A})|^{|\free\varphi|}, \dots, 1$, stopping at the first ``yes''.
\end{proof} \begin{theorem}\label{memmin} For $A\subseteq \{\,\perp,\dep{\dots},\subseteq\,\}$ and $\varphi \in \protect\ensuremath{\classFont{FO}}\xspace(A)$ the problems $\protect\ensuremath{\prob{E-MinSat}}\xspace_\varphi^{\mathrm{team}}$, $\protect\ensuremath{\prob{E-CMinSat}}\xspace_\varphi^{\mathrm{team}}$ are included in $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP.$ \end{theorem} \begin{proof} For $\protect\ensuremath{\prob{E-MinSat}}\xspace_\varphi^{\mathrm{team}}$ we can run a slightly modified version of Algorithm~\ref{alg:esat} on input $(\mathcal{A}, \emptyset)$, which was originally used for $\protect\ensuremath{\prob{E-Sat}}\xspace_\varphi^{\mathrm{team}}$. The only modification needed is that the new algorithm terminates after outputting a solution. We can solve $\protect\ensuremath{\prob{E-CMinSat}}\xspace_\varphi^{\mathrm{team}}$ similarly, but this time we adjust the algorithm described in Theorem~\ref{memcardmax}. We compute the minimal $k$ (instead of the maximal) for which $\protect\ensuremath{\prob{ExtendCMaxTeam}}\xspace_{\varphi}$ answers ``yes'' before starting the algorithm with that $k$. Also, the new algorithm again terminates after outputting a solution. \end{proof} In the next result, we show \protect\ensuremath{\mathbb{N}}\xspaceP-hardness for the decision problem $\protect\ensuremath{\prob{CMinSat}}\xspace_\varphi^\mathrm{team}$, for an inclusion logic formula $\varphi$. \problemdefdec{$\protect\ensuremath{\prob{CMinSat}}\xspace_\varphi^{\mathrm{team}}$}{Structure $\mathcal{A}, k\in\mathbb{N}$}{$\{\, X \mid \mathcal{A} \models_{X} \varphi, X \neq \emptyset \text{ and }|X| \le k \,\} \neq \emptyset?$} By this and Theorem~\ref{arbhard}, we can conclude \textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP-hardness for $\protect\ensuremath{\prob{E-CMinSat}}\xspace_\varphi^\mathrm{team}$. We reduce from the \protect\ensuremath{\mathbb{N}}\xspaceP-complete problem $\protect\ensuremath{\prob{IS}}\xspace^*$ (\textsc{IndependentSet}) to $\protect\ensuremath{\prob{CMinSat}}\xspace_\varphi^\mathrm{team}$ with two intermediate steps. \problemdefdec{$\protect\ensuremath{\prob{IS}}\xspace^*$}{Graph $G=(V,E), k\in\mathbb{N}$}{$\{\, V' \mid \forall u,v \in V'\colon \{\,u,v\,\} \not\in E,\ |V'| \ge k \text{ and } V' \subsetneq V\,\} \neq \emptyset?$} Note that $\protect\ensuremath{\prob{IS}}\xspace^*$ is \protect\ensuremath{\mathbb{N}}\xspaceP-complete: We can reduce from the standard version $\protect\ensuremath{\prob{IS}}\xspace$, where $V'=V$ is allowed, by just adding one new vertex which is connected to all old vertices. The remaining problems we need for this reduction are defined as follows. \problemdefdec{$\protect\ensuremath{\prob{CMinSat}}\xspace_\varphi^{\mathrm{rel}}$ for $φ \in \Sigma_1^1$}{Structure $\mathcal{A}, k\in\mathbb{N}$}{$\{\, R \mid\mathcal{A}, R \models \varphi, R \neq \emptyset \text{ and } |R| \le k \,\} \neq \emptyset?$} \problemdefdec{$\protect\ensuremath{\prob{MaxZerosDualHorn}}\xspace^*$}{Propositional dual-Horn formula $\varphi, k\in\mathbb{N}$}{$\{\,\beta \mid \beta \models \varphi, \beta \neq \emptyset\text{ and } |\beta| \le k \,\} \neq \emptyset?$} For this, we represent propositional assignments β by the set (relation) of variables they map to $1$. Also, we call $|β|$ the \emph{weight} of β.
\begin{theorem}\label{cardhard} There is a formula $\varphi \in \protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$ such that $\protect\ensuremath{\prob{CMinSat}}\xspace_\varphi^{\mathrm{team}}$ is $\protect\ensuremath{\mathbb{N}}\xspaceP$-hard. \end{theorem} \begin{proof} We reduce from the \protect\ensuremath{\mathbb{N}}\xspaceP-complete problem $\protect\ensuremath{\prob{IS}}\xspace^*$, showing that there are a myopic formula $φ' \in \Sigma_1^1$ and a formula $φ \in \protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$ such that\@\xspace \[\protect\ensuremath{\prob{IS}}\xspace^* \underset{(1)}{\leq^P_m} \protect\ensuremath{\prob{MaxZerosDualHorn}}\xspace^* \underset{(2)}{\leq^P_m} \protect\ensuremath{\prob{CMinSat}}\xspace_{\varphi'}^{\mathrm{rel}} \underset{(3)}{\leq^P_m} \protect\ensuremath{\prob{CMinSat}}\xspace_\varphi^{\mathrm{team}}.\] For (1), an arbitrary instance $(G=(V,E),k)$ is mapped to $(\varphi=\bigwedge_{\{\,i,j\,\} \in E} (x_i \vee x_j), |V|-k)$. Intuitively, assigning a variable $x_i$ to $0$ in φ corresponds to picking the vertex $i$ in $G$ for an independent set. The formula φ expresses that at most one of the variables in any clause may be set to $0$, corresponding to the condition that at most one of the endpoints of an edge can be in an independent set. From this it can easily be seen that there is a $1$-$1$-correspondence between independent sets $V'$ of $G$ of size at least $k$ and satisfying assignments of φ of weight at most $|V|-k$. Note that $\varphi$ is obviously a \protect\ensuremath{\prob{DualHorn}}\xspace formula. Let $σ = (P^2, N^2)$ be a vocabulary. A propositional CNF-formula χ can be encoded as a $σ$-structure $\mathcal{A}_χ$ as follows: The universe contains the variables and clauses of χ. The relation $P^{\mathcal{A}_χ}$ ($N^{\mathcal{A}_χ}$) contains a pair $(C,x)$ if $C$ is a clause in χ, $x$ is a variable and $x$ occurs positively (negatively) in $C$ in the formula χ. For (2), define the following myopic formula $φ'(R)$ over σ: \begin{align*} φ'(R) = \forall x \ \big(R(x) \rightarrow \forall C\ \big(&((\neg \exists z \ N(C,z))\rightarrow \exists y \ (P(C,y)\wedge R(y))) \\ &\wedge (N(C,x)\rightarrow \exists y \ (P(C,y)\wedge R(y)))\big)\big) \end{align*} Now suppose $R$ satisfies the formula $\phi'$. Let $x \in R$. It follows that all clauses that contain $x$ or contain only positive literals are satisfied by $R$: If $x$ is positively contained in a clause $C$, then it is already satisfied since $x \in R$. If $x$ is negatively contained in $C$, then there must be another variable $y$ that occurs positively in $C$ (since each clause contains at most one negative literal) with $y \in R$. If $C$ only contains positive literals, then it must contain some variable $y \in R.$ This only works if there is at least one variable included in $R$. If $R$ is empty in the first place, the premise of the outermost implication is always false and therefore the formula holds trivially. It follows that $\phi'(\emptyset)$ is always true, which is no surprise since it is a myopic formula. But since we are only looking for non-empty relations (non-zero assignments $\beta$, respectively), this is not a problem. Now for all assignments $\beta\neq \emptyset$ it holds that $\beta \models χ \iff \mathcal{A}_χ, \beta \models φ'(\beta)$. Finally, (3) follows from Proposition~\hyperref[myopic]{\ref*{ind2sigma11} item \ref*{myopic}}, since $φ'$ is a myopic formula.
\end{proof} The second and third reductions are essentially the same as those used to show $\#\prob{DualHorn} \subseteq \#\protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$ \cite{DBLP:conf/mfcs/HaakKMVY19}. The difference is that in the counting case, the number of solutions to the \classFont{DualHorn}-formula must be equal to the number of solutions to the $\protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$-formula, whereas in our case the size of maximal and minimal solutions must be preserved. Fortunately, the formula given in the second reduction delivers both, as the solutions are exactly the same for both formulas. Note that this reduction also works if we use positive \classFont{2CNF}-formulas (propositional formulas in conjunctive normal form where each clause consists of two positive literals) instead of \classFont{DualHorn}-formulas, since the given formula $\varphi=\bigwedge_{\{\,i,j\,\} \in E}(x_i \vee x_j)$ is a positive \classFont{2CNF}-formula. \begin{corollary}\label{cor:chardelnp} Let $\mathcal{E}=\{\,\protect\ensuremath{\prob{E-Sat}}\xspace,\protect\ensuremath{\prob{E-CMaxSat}}\xspace,\protect\ensuremath{\prob{E-MinSat}}\xspace,\protect\ensuremath{\prob{E-CMinSat}}\xspace\,\}$. \begin{enumerate} \item For all $\prob{E}\in\mathcal{E}$ and $\varphi \in \protect\ensuremath{\classFont{FO}}\xspace(A)$ with $A\subseteq \{\,\perp,\dep{\dots},\subseteq\,\}$, the problem $\prob{E}_\varphi^{\mathrm{team}}$ is in $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$. \item There are formulas $\varphi_1 \in \protect\ensuremath{\classFont{FO}}\xspace(\dep\dots), \varphi_2 \in \protect\ensuremath{\classFont{FO}}\xspace(\perp), \varphi_3 \in\protect\ensuremath{\classFont{FO}}\xspace(\subseteq)$ such that for all $\prob{E}\in\mathcal{E}$ the problems $\prob{E}_{\varphi_1}^{\mathrm{team}}$, $\prob{E}_{\varphi_2}^{\mathrm{team}}$ and $\protect\ensuremath{\prob{E-CMinSat}}\xspace_{\varphi_3}^{\mathrm{team}}$ are \textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP-complete. \end{enumerate} \end{corollary} \begin{proof} Statement 1.\ follows directly from Theorems~\ref{mem}, \ref{memcardmax} and \ref{memmin}. For statement 2., the hardness for the case of inclusion logic follows from Theorem~\ref{arbhard} together with Theorem~\ref{cardhard}, as $\protect\ensuremath{\prob{CMinSat}}\xspace_\varphi^\mathrm{team}$ can trivially be decided in polynomial time with oracle access to $\protect\ensuremath{\prob{E-CMinSat}}\xspace_\varphi^\mathrm{team}$: simply get a solution from the oracle, compute its cardinality and compare it to $k$. The other cases follow from Corollary~\ref{delnphard}. \end{proof} By Corollary~\ref{cor:chardelnp} we get a characterization of the class $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$ as the closure of the mentioned problems under the notion of enumeration reducibility. \section{Conclusion} In Table~\ref{tab:summary}, we summarise the complexity results obtained in this paper. We completely classified all but one of the considered enumeration problems and obtained either polynomial-delay algorithms or completeness for $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$. We have no final result regarding $\protect\ensuremath{\prob{E-MaxSat}}\xspace_\varphi^\mathrm{team}$ for dependence logic and independence logic formulas.
By Corollary~\ref{delnphard}, this problem is $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-hard, but we do not know whether it is included in $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP.$ On the other hand, the problem is included in $\textnormal{D}el\Sigma_2^p$, as one can construct an algorithm similar to Algorithm~\ref{alg:esat} that uses $\protect\ensuremath{\prob{VerifyTeam}}\xspace_{\varphi}$ and $\protect\ensuremath{\prob{ExtendMaxTeam}}\xspace_\varphi$ as oracles (it is easy to see that $\protect\ensuremath{\prob{ExtendMaxTeam}}\xspace_\varphi \in \Sigma_2^p$). We conjecture that this problem is in fact $\textnormal{D}el\Sigma_2^p$-complete, but we are missing the hardness proof. \problemdef{$\protect\ensuremath{\prob{ExtendMaxTeam}}\xspace_\varphi$}{Structure $\mathcal{A}, \text{ team } X, \text{ set of assignments } Y$}{$\{\,X' \mid \mathcal{A} \models_{X'} \varphi,\ X \subsetneq X',\ X'\cap Y = \emptyset \text{ and } \forall X''\colon X' \subsetneq X'' \Rightarrow \mathcal{A} \not\models_{X''} \varphi\,\} \neq \emptyset $} \begin{table}\centering \begin{tabular}{ccc}\toprule & $\subseteq$ & $\dep{\dots},\, \perp$ \\\midrule $\protect\ensuremath{\prob{E-Sat}}\xspace$ & $\in \textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace$ & $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-complete \\ $\protect\ensuremath{\prob{E-MaxSat}}\xspace$ & $\in \textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace$ & $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-hard, $\in \textnormal{D}el\Sigma_2^p$ \\ $\protect\ensuremath{\prob{E-MinSat}}\xspace$ & $\in \textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace$ & $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-complete \\ $\protect\ensuremath{\prob{E-CMaxSat}}\xspace$ & $\in \textnormal{D}el\protect\ensuremath{\classFont{P}}\xspace$ & $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-complete\\ $\protect\ensuremath{\prob{E-CMinSat}}\xspace$ & $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-complete & $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$-complete \\\bottomrule \end{tabular} \captionsetup{justification=centering,margin=2cm} \caption{Summary of obtained complexity results}\label{tab:summary} \end{table} There are some more open issues that immediately lead to questions for further research. All our results are obtained for a certain fixed set of generalised dependency relations. Our selection was motivated by those logics most frequently found in the literature. It will be interesting to see whether other atoms or combinations of atoms lead to different (higher?) complexity. There is a notion of \emph{strict semantics} (see, e.g.\@\xspace, the work of Galliani~\cite{DBLP:journals/apal/Galliani12}). Our results do not immediately transfer to strict semantics, since, for example, Lemma~\ref{vernp} is not true for independence logic with strict semantics. It would be interesting to study the enumeration complexity of team logics in strict semantics. Maybe even more interesting is the extension of the logical language by the so-called strong (or classical) negation. Observe that our logics only allow atomic negation. It is known that with full classical negation, many generalised dependency atoms can be simulated (in modal logic, negation is even complete in the sense that it can simulate any FO-expressible dependency).
We consider it likely that enumeration problems for logics with classical negation will lead us out of the class $\textnormal{D}el\protect\ensuremath{\mathbb{N}}\xspaceP$ and potentially even to arbitrary levels of the hierarchy. \appendix \end{document}
\begin{document} \maketitle \begin{abstract} We completely calculate the $RO(\mathbb{Z}/p)$-graded coefficients $H\underline{\mathbb{Z}/p}_\star H\underline{\mathbb{Z}/p}$ for the constant Mackey functor $\underline{\mathbb{Z}/p}$. \end{abstract} \section{Introduction}\label{intro} In \cite{hk}, Hu and Kriz computed the Hopf algebroid \beg{ez2steen}{(H\underline{\mathbb{Z}/2}_\star, H\underline{\mathbb{Z}/2}_\star H\underline{\mathbb{Z}/2})} where $\underline{\mathbb{Z}/2}$ denotes the constant $\mathbb{Z}/2$-Mackey functor and the subscript $?_\star$ denotes $RO(G)$-graded coefficients. (We direct the reader to \cite{dress,lewis,li,sk} for general preliminaries on Mackey functors, and to \cite{lms,lms1} for their relationship with ordinary equivariant generalized (co)homology theories.) There are several remarkable features of the Hopf algebroid \rref{ez2steen}, which played an important role in the calculation of coefficients of Real-oriented spectra in \cite{hk}. For example, the morphism ring of \rref{ez2steen} is a free module over the object ring (in particular, the Hopf algebroid is ``flat,'' see \cite{ravenel}). Another remarkable fact is that the Hopf algebroid \rref{ez2steen} is closely related to the motivic dual Steenrod algebra Hopf algebroid \cite{voev,hko}. In particular, if we denote by $\alpha$ the real sign representation, there are generator classes $\xi_i$ in degrees $(2^i-1)(1+\alpha)$ and $\tau_i$ in degrees $(2^i-1)(1+\alpha)+1$. A key point in proving this is the existence of the infinite complex projective space with $\mathbb{Z}/2$-action by complex conjugation. When trying to generalize this calculation to a prime $p>2$, i.e. to compute \beg{ezpsteen}{(H\underline{\mathbb{Z}/p}_\star, H\underline{\mathbb{Z}/p}_\star H\underline{\mathbb{Z}/p}) \;\text{for $p>2$ prime},} the first difficulty one encounters is that no $p>2$ generalization of complex conjugation on $\mathbb{C} P^\infty$ presents itself. Additionally, applications of this calculation were not yet known. Likely for those reasons, the question remained open for more than 20 years. However, recently $\mathbb{Z}/p$-equivariant cohomological operations at $p>2$ have become of interest, for example due to questions about $p>2$ analogues of certain computations by Hill, Hopkins, and Ravenel in their solution to the Kervaire invariant $1$ problem \cite{hhr, hhrodd}. In fact, Sankar and Wilson \cite{sw} made progress on the calculation of \rref{ezpsteen}, in particular providing a complete decomposition of the spectrum $H\underline{\mathbb{Z}/p}\wedge H\underline{\mathbb{Z}/p}$ and thereby proving that the morphism ring \rref{ezpsteen} is {\em not} a flat module over the object ring. The goal of the present paper is to complete the calculation of the Hopf algebroid \rref{ezpsteen}. The authors note that while the present paper is largely self-contained, comparing with the results of \cite{sw} proved to be a step in the present calculation.
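For the reader's orientation, the degree formulas quoted above at $p=2$ evaluate in low dimensions to $|\xi_1|=1+\alpha$, $|\tau_1|=2+\alpha$, $|\xi_2|=3+3\alpha$, $|\tau_2|=4+3\alpha$ and $|\xi_3|=7+7\alpha$ (a routine substitution, included only for concreteness).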
We also point out the fact that while we do obtain complete characterizations of all the structure formulas of the Hopf algebroid \rref{ezpsteen}, not all of them are presented in a nice explicit fashion, but only by recursion through (explicit) maps into other rings. This, in fact, also occurs in \rref{ez2steen}, where, while the product relations and the coproduct are very elegant, the right unit is only recursively characterized by comparison with Borel cohomology. In the case of \rref{ezpsteen}, this occurs for more of the structure formulas. The formulas are, in fact, too complicated to present in the introduction, but we outline here the basic steps and results, with references to precise statements in the text. We first need to establish certain general preliminary facts about $\mathbb{Z}/p$-equivariant homology which are given in Section \ref{prelim} below. The first important realization is that $\mathbb{Z}/p$-equivariant cohomology with constant coefficients is periodic under differences of irreducible real $\mathbb{Z}/p$-representations of complex type. Therefore, since we are only using the additive structure of the real representation ring, we can reduce the indexing from $RO(\mathbb{Z}/p)$ to the free abelian group $R$ on $1$ (the real $1$-representation) and $\beta$, which is a chosen irreducible real representation of complex type. Considering the full $R$-grading, on the other hand, is essential, since the pattern is simpler than if we only considered the $\mathbb{Z}$-grading (not to mention the fact that it contains more information). As already remarked, even in the $p=2$ case, not all the free generators are in integral dimensions. As noted in \cite{sw}, the dual $\mathbb{Z}/p$-equivariant Steenrod algebra for $p>2$ is not a free module over the coefficients, but still, the $R$-graded pattern is a lot easier to describe than the $\mathbb{Z}$-graded pattern. Additively speaking, for $p>2$, the $R$-graded dual $\mathbb{Z}/p$-equivariant Steenrod algebra $A_\star=H\underline{\mathbb{Z}/p}_\star H\underline{\mathbb{Z}/p}$ is a sum of copies of the coefficients of $H\underline{\mathbb{Z}/p}$ and of the coefficients of another $H\underline{\mathbb{Z}/p}$-module $HM$, which, up to shift by $\beta-2$, is the equivariant cohomology corresponding to the Mackey functor $Q$ which is $0$ on the fixed orbit and $\mathbb{Z}/p$ (fixed) on the free orbit. In Propositions \ref{p1}, \ref{p2}, we describe the ring $H\underline{\mathbb{Z}/p}_\star$ and the module $HM_\star$. It is quite interesting that for $p=2$, this $H\underline{\mathbb{Z}/p}$-module $HM$ is, in fact, an $R$-shift of $H\underline{\mathbb{Z}/2}$. For $p>2$, however, this fails, and in fact infinitely many higher $\underline{\mathbb{Z}/p}$-$Tor$-groups of $Q$ with itself are non-trivial. How is it possible for $A_\star$ to be manageable, then? As noted in \cite{sw}, the spectrum $H\underline{\mathbb{Z}/p}\wedge H\underline{\mathbb{Z}/p}$ is, in fact, not a wedge sum of $R$-graded suspensions of $H\underline{\mathbb{Z}/p}$ and $HM$. It turns out that instead, the $HM$-summands of the coefficients form ``duplexes,'' i.e.
pairs each of which makes up an $H\underline{\mathbb{Z}/p}$-module which we denote by $HT$, whose smashes over $H\underline{\mathbb{Z}/p}$ are again $R$-shifted copies of $HT$ (Proposition \ref{p44}). We establish this, and further explain the phenomenon, in Section \ref{prelim} below. Now our description of the multiplicative properties of $A_\star$ for $p>2$ must, of course, take into account the smashing rules of the building blocks $HT$ over $H\underline{\mathbb{Z}/p}$. However, we still need to identify elements which play the role of ``multiplicative generators.'' In the present situation, this is facilitated in Section \ref{slens} by computing the $H\underline{\mathbb{Z}/p}$-cohomology of the equivariant complex projective space $\mathbb{C} P_{\mathbb{Z}/p}^\infty$ (Proposition \ref{p33}) and the equivariant infinite lens space $B_{\mathbb{Z}/p}(\mathbb{Z}/p)$ (Proposition \ref{pzpzp}). This can be accomplished using the spectral sequence coming from a filtration of the complete complex universe by a regular flag, which was also used in \cite{kluf} to give a new, more explicit proof of Hausmann's theorem \cite{haus} on the universality of the equivariant formal group on stable complex cobordism. The spectral sequences can be completely solved and the multiplicative structure is completely determined by comparison with Borel cohomology. One can then use an analog of Milnor's method \cite{milnor} to construct elements of $A_\star$. It is convenient to do both the cases of the projective space and the lens space, since in the projective case, the spectral sequence actually collapses, thus yielding elements $\underline{\xi}_n\in A_\star$ of dimension $$2p^{n-1}+(p^n-p^{n-1}-1)\beta$$ and $\underline{\theta}_n$ of dimension $$2(p^n-1)+(p-1)(p^n-1)\beta.$$ One also obtains analogues of Milnor coproduct relations between these elements by the method of \cite{milnor}. One notes that the elements $\underline{\theta}_n$ are of the right ``slope'' $2k+\ell\beta$ where $(k:\ell)=(1:p-1)$ and in fact, they turn out to generate $H\underline{\mathbb{Z}/p}_\star$-summands of $A_\star$. It is also interesting to note that for $p>2$, it is impossible to shift the element $\underline{\xi}_n$ to the ``right slope,'' since the values of $2k$ and $\ell$ would not be integers. To completely understand the role of the elements $\underline{\xi}_n$, and also to construct the remaining ``generators'' of $A_\star$, one solves the flag spectral sequence for $B_{\mathbb{Z}/p}(\mathbb{Z}/p)$. This time, there are some differentials (although not many), which is the heuristic reason why $A_\star$ is not a free $H\underline{\mathbb{Z}/p}_\star$-module. In fact, following the method of \cite{milnor}, one constructs $\underline{\tau}_n$, $\widehat{\tau}_n$, of degree $|\underline{\xi}_n|+1$, $|\underline{\xi}_n|$, respectively, and also an element $\widehat{\xi}_n$ of degree $|\underline{\xi}_n|-1$. The ``multiplet'' of elements \beg{emultiplet100}{\underline{\tau}_n, \widehat{\tau}_n, \underline{\xi}_n, \widehat{\xi}_n,} in fact, ``generates'' a copy of $HT_\star$. Again, Milnor-style coproduct relations on these elements follow using the same method.
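To fix ideas, at $p=3$ the dimension formulas above give $|\underline{\xi}_1|=2+\beta$, $|\underline{\xi}_2|=6+5\beta$, $|\underline{\theta}_1|=4+4\beta$ and $|\underline{\theta}_2|=16+16\beta$; for instance $|\underline{\theta}_1|=2\cdot 2+4\beta$, with $(k:\ell)=(2:4)=(1:p-1)$ as stated. (This numerical illustration is included only for concreteness.)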
The cohomology of the equivariant lens space, in fact, gives rise to one additional set of elements $\underline{\mu}_n$ of degree $|\underline{\theta}_n|+1$, each of which also generates an $H\underline{\mathbb{Z}/p}_\star$-summand of $A_\star$. Coproduct relations on these elements also follow from the method of \cite{milnor}, but this time, there are more terms, so they are hard to write down in an elegant form. To construct a complete decomposition of $A_\star$ into a sum of copies of $H\underline{\mathbb{Z}/p}_\star$, $HT_\star$, one notes that along with the ``multiplet'' \rref{emultiplet100}, we also have a ``multiplet'' \beg{emultiplet101}{ \underline{\tau}_n\underline{\xi}_n^{i-1}, \widehat{\tau}_n\underline{\xi}_n^{i-1}, \underline{\xi}_n^{i}, \widehat{\xi}_n\underline{\xi}_n^{i-1} } for $i=1,\dots,p-1$, which ``generates'' another copy of $HT_\star$, in a degree predicted by Proposition \ref{p44}. (This, in fact, means that some linear combinations of those elements will be divisible by the class $b$ of degree $-\beta$ which comes from the inclusion $S^0\rightarrow S^\beta$. We will return to this point below.) For now, however, let us remark that for $i=p$, the element $\underline{\xi}_n^p$ is dependent on $\underline{\theta}_n$, which is precisely accounted for by the presence of the additional element $\underline{\mu}_n$. Tensoring these structures for different $n$ then completely accounts for all elements of $A_\star$, coinciding with the result of \cite{sw}. To determine the structure formulas of $A_\star$ completely, we need to determine the divisibility of the linear combinations of the elements \rref{emultiplet101} by $b$, and to completely describe all the multiplicative relations. This can, in fact, be done by comparing with the Borel cohomology version $A^{cc}_\star$ of the $\mathbb{Z}/p$-equivariant Steenrod algebra, similarly as in \cite{hk}. The divisibility is described in Proposition \ref{ppdiv}. The final answer is recorded in Theorem \ref{tfinal}. We write down explicitly those formulas which are simple enough to state explicitly. The remaining formulas are determined recursively by the specified map into $A^{cc}_\star$. \noindent {\bf Acknowledgement:} The authors are very indebted to Peter May for comments. The authors are also thankful to Hood Chatham for creating the spectralsequences \LaTeX{} package and to Eva Belmont for tutorials on it. \section{Preliminaries}\label{prelim} Let $p$ be an odd prime. In this paper, we will work $p$-locally. In other words, when we write $\mathbb{Z}$, we mean $\mathbb{Z}_{(p)}$. For our present purposes, the ring structure on $RO(\mathbb{Z}/p)$ does not matter. Thus, it can be treated as the free abelian group on the irreducible real representations, which are $1$ (the trivial irreducible real representation) and $\beta^i$, $i=1,\dots, (p-1)/2$, where $\beta$ is the $1$-dimensional complex representation on which the generator $\gamma$ acts by $\zeta_p$ (since $\beta^i$ is the dual of $\beta^{p-i}$). However, $p$-local $\mathbb{Z}/p$-equivariant generalized cohomology is periodic with period $\beta^i-\beta^j$, $1\leq i,j<p$. (This was also observed by Sankar-Wilson \cite{sw}.)
This follows from the fact that $S(\beta^i)$ is the homotopy cofiber of $$1-\gamma^i:\mathbb{Z}/p_+\rightarrow \mathbb{Z}/p_+.$$ Now we have a homotopy commutative diagram of $\mathbb{Z}/p$-equivariant spectra $$\diagram \mathbb{Z}/p_+\rto^{1-\gamma^i}&\mathbb{Z}/p_+\\ \mathbb{Z}/p_+\uto^{Id}\rto^{1-\gamma}&\mathbb{Z}/p_+.\uto_{1+\gamma+\dots+\gamma^{i-1}} \enddiagram$$ On the other hand, $p$-locally $N_i=1+\gamma+\dots+\gamma^{i-1}$ has a homotopy inverse equal to a unit times $$N_{j}^i=1+\gamma^i+\gamma^{2i}+\dots+\gamma^{(j-1)i}$$ where $j$ is a positive integer such that $ij=kp^2+1$. Indeed, one has \beg{eunitt}{N_{j}^i\circ N_i=1+\gamma+\dots+\gamma^{p^2k}=pkN_p+1.} Since we are working $p$-locally, the right hand side induces an isomorphism both in $\mathbb{Z}/p$-equivariant and non-equivariant homotopy groups, and thus is a unit in the $\mathbb{Z}/p$-equivariant stable homotopy category. Therefore, the reduced cell chain complexes of $S^{\beta^i}$, which are the unreduced suspensions of $S(\beta^i)$, are also isomorphic, which implies the periodicity by the results of \cite{sk, lewis} (see also \cite{hk1,klu}). Thus, when discussing ordinary $\mathbb{Z}/p$-homology, the grading can be by elements of $R=\mathbb{Z}\{1,\beta\}$, without losing information. Now the Borel cohomology $H\underline{\mathbb{Z}/p}_\star^c=F(E\mathbb{Z}/p_+,H\underline{\mathbb{Z}/p})_\star$ is complex-oriented. Its $R$-graded coefficients (given by group cohomology) are given by $$H\underline{\mathbb{Z}/p}^c_\star=\mathbb{Z}/p[\sigma^{2},\sigma^{-2}][b]\otimes \Lambda[u],$$ where the (homological) dimensions of the generators are given by $$|b|=-\beta,\;|\sigma^{2}|=\beta-2, |u|=-1.$$ The Tate cohomology $H_\star^t(\underline{\mathbb{Z}/p})=(\widetilde{E\mathbb{Z}/p}\wedge F(E\mathbb{Z}/p_+,H\underline{\mathbb{Z}/p}))_\star$ (where $\widetilde{X}$ denotes the unreduced suspension of $X$) is given by $$H\underline{\mathbb{Z}/p}^t_\star=\mathbb{Z}/p[\sigma^{2},\sigma^{-2}][b, b^{-1}] \otimes \Lambda[u].$$ (The notation $\sigma^2$ is to connect with the notation of \cite{hk}, where the present question was treated for $p=2$.) The Borel homology $H^b_\star H\underline{\mathbb{Z}/p} = (E\mathbb{Z}/p_+\wedge H\underline{\mathbb{Z}/p})_\star$ is then given by $$ H\underline{\mathbb{Z}/p}^b_\star=\Sigma^{-1}H^t_\star H\underline{\mathbb{Z}/p}/H^c_\star H\underline{\mathbb{Z}/p}.$$ Now similarly as for $p=2$, the $\mathbb{Z}$-graded coefficients imply $$H\underline{\mathbb{Z}/p}_\star^\Phi=\Phi^{\mathbb{Z}/p}H\underline{\mathbb{Z}/p}_\star=(\widetilde{E\mathbb{Z}/p}\wedge H\underline{\mathbb{Z}/p})_\star=\mathbb{Z}/p[\sigma^{-2}][b,b^{-1}]\otimes \Lambda[\sigma^{-2}u];$$ the map into $H\underline{\mathbb{Z}/p}^t_\star$ is given by inclusion.
Composing the inclusion with the quotient map into $\Sigma H\underline{\mathbb{Z}/p}^b_\star$ then gives the connecting map of the long exact sequence associated with the fibration $$ H\underline{\mathbb{Z}/p}^b\rightarrow H\underline{\mathbb{Z}/p}\rightarrow H\underline{\mathbb{Z}/p}^\Phi$$ where $E^\Phi=E\wedge \widetilde{E\mathbb{Z}/p}$ for a $\mathbb{Z}/p$-equivariant spectrum $E$. Regarding the commutation rule of a (not necessarily coherently) commutative $\mathbb{Z}/p$-equivariant ring spectrum, we note that the commutation rule between a class of degree $1$ and a class of degree $\beta$ is a matter of convention. The commutation rule between two classes of degree $\beta$ is given by $x\mapsto -x$ on $S^\beta$, which is equivariantly homotopic to the identity. Thus, if the dimensions of classes $x,y$ are $k+\ell\beta$, $m+n\beta$, then we can write $$xy=(-1)^{km}yx.$$ This implies the following \begin{proposition}\label{p1} Consider the ring $$\Gamma=\mathbb{Z}/p[\sigma^{-2}][b]\otimes\Lambda[\sigma^{-2}u]$$ graded as above. Let $\Gamma^\prime$ be the second local cohomology of $\Gamma$ with respect to the ideal $(\sigma^{-2},b)$, desuspended by $1$. Then $$H\underline{\mathbb{Z}/p}_\star=\Gamma\oplus \Gamma^\prime$$ with the abelian ring structure over $\Gamma$. Additively, therefore, $H\underline{\mathbb{Z}/p}_{k+\ell\beta}$ is $\mathbb{Z}/p$ when $0\leq k\leq -2\ell$ or $-2\ell\leq k\leq -2$, and $0$ otherwise. \end{proposition} $\square$\medskip We shall sometimes call the $\Gamma$ part of the coefficients {\em the good tail} and the $\Gamma^\prime$ part {\em the derived tail}. Generally, the $R$-graded coefficients of the ring spectra $X$ we will consider will split into a ``good tail'' $X_{\star}^g$ and a ``derived tail'' $X_\star^d$. Generally, the coefficient ring will be abelian over the good tail, and the derived tail will be isomorphic to the second local cohomology of $\Gamma$ with coefficients in the good tail, desuspended by $1$. This is convenient, since it describes the multiplication completely. From that point of view, then, we can write $$\Gamma=H\underline{\mathbb{Z}/p}_\star^g,$$ $$\Gamma^\prime=H\underline{\mathbb{Z}/p}_\star^d=H^2_{(\sigma^{-2},b)}(\Gamma,\Gamma)[-1].$$ There is another $H\underline{\mathbb{Z}/p}$-module spectrum which will play a role in our calculations. Consider the short exact sequence of $\mathbb{Z}/p$-Mackey functors \beg{emq}{0\rightarrow Q\rightarrow \underline{\mathbb{Z}/p}\rightarrow \Phi\rightarrow 0} where $Q$ resp. $\Phi$ are $\mathbb{Z}/p$ on the free resp. fixed orbit, and $0$ on the other orbit. This leads to a cofibration sequence of (coherent) $H\underline{\mathbb{Z}/p}$-module spectra.
We have $H\Phi^\Phi=H\Phi$, which implies $$H\Phi_\star=\mathbb{Z}/p[b,b^{-1}].$$ We also note that \beg{ehqqf}{ HQ^\Phi_n=\left\{\begin{array}{ll}\mathbb{Z}/p &\text{for $n\in\{1,2,\dots\}$}\\ 0 &\text{else.} \end{array}\right. } The following follows from the long exact sequence on $RO(\mathbb{Z}/p)$-graded coefficients associated with \rref{emq}. \begin{proposition}\label{p2} The group $HQ_{k+\ell\beta}$ is $\mathbb{Z}/p$ when $1\leq k\leq -2\ell$ or $-2\ell\leq k\leq -1$, and $0$ otherwise. From an $H\underline{\mathbb{Z}/p}_\star$-module point of view, the $\ell<0$ part of the coefficients (the {\em good tail}) behaves as an ideal in $\Gamma$ and the $\ell>0$ part (the {\em derived tail}) behaves as a quotient of $\Gamma^\prime$. Additionally, we have, again, $$HQ_\star^d=H^2_{(b,\sigma^{-2})}(H\underline{\mathbb{Z}/p}_\star^g,HQ_\star^g)[-1].$$ \end{proposition} $\square$\medskip We also put $$HM=\Sigma^{\beta-2}HQ$$ for indexing purposes, since the element in degree $0$ is closest to playing the role of the ``generator'' of $HM_\star$. (The element in degree $-1$ is its multiple in Borel cohomology.) Now there is more to the multiplicative structure, however. We are interested in calculating \beg{ecococo1}{HQ\wedge_{H\underline{\mathbb{Z}/p}}HQ. } Note that if we denote by $\overline{\mathbb{Z}/p}$ the co-constant $\underline{\mathbb{Z}/p}$-module (i.e. the Mackey functor which is $\mathbb{Z}/p$ on both the fixed and free orbit, and whose corestriction is $1$ while the restriction is $0$), then we have \beg{ecoconst}{\Sigma^{2-\beta}H\underline{\mathbb{Z}/p}=H\overline{\mathbb{Z}/p}. } Now note that at $p=2$, if we denote the real sign representation by $\alpha$, we have $$HQ=\Sigma^{1-\alpha}H\underline{\mathbb{Z}/2},$$ so $$HQ\wedge_{H\underline{\mathbb{Z}/2}}HQ=\Sigma^{2-\beta}H\underline{\mathbb{Z}/2}=H\overline{\mathbb{Z}/2}.$$ For $p>2$, however, the situation is more complicated. Recall that for $1\leq i\leq p$, we have $\mathbb{F}_p[\mathbb{Z}/p]$-modules $L_i$ on which the generator $\gamma$ of $\mathbb{Z}/p$ acts by a Jordan block of size $i$. For any $\mathbb{F}_p[\mathbb{Z}/p]$-module $V$, we have a corresponding $\underline{\mathbb{Z}/p}$-module $\underline{V}$ which on the fixed orbit is $V^{\mathbb{Z}/p}$, with the restriction being the inclusion (see \cite{sk, lewis}). Recall from \cite{sk} that the $\underline{\mathbb{Z}/p}$-modules $\underline{L_p}$, $\underline{L_1}=\underline{\mathbb{Z}/p}$ are projective, so to calculate \rref{ecococo1}, we can use a projective resolution of $Q$ by these $\underline{\mathbb{Z}/p}$-modules.
We have a short exact sequence \beg{ecocos1}{0\rightarrow\underline{L_{p-1}}\rightarrow \underline{L_p}\rightarrow Q\rightarrow 0 } and we also have short exact sequences \beg{ecocos2}{0\rightarrow \underline{L_{p+1-j}}\rightarrow\underline{L_1}\oplus\underline{L_p}\rightarrow\underline{L_j}\rightarrow 0.} Splicing together the short exact sequence \rref{ecocos1} with the short exact sequences \rref{ecocos2} for $j=p-1$, $j=2$, alternately, we therefore obtain a projective resolution of $Q$ of the form \beg{ecocos3}{ \dots \rightarrow\underline{L_1}\oplus \underline{L_p}\rightarrow\dots\rightarrow \underline{L_1}\oplus \underline{L_p}\rightarrow \underline{L_p}. } We also have $$\underline{L_p}\otimes_{\underline{\mathbb{Z}/p}} Q=\underline{L_p},$$ so tensoring \rref{ecocos3} with $Q$ over $\underline{\mathbb{Z}/p}$ gives a chain complex of $\underline{\mathbb{Z}/p}$-modules of the form \beg{ecocos4}{\dots \rightarrow Q\oplus \underline{L_p}\rightarrow\dots\rightarrow Q\oplus \underline{L_p}\rightarrow \underline{L_p},} which gives the following \begin{proposition}\label{pcocos} For all primes $p$, we have: $$Q\otimes_{\underline{\mathbb{Z}/p}}Q=\overline{\mathbb{Z}/p}.$$ For $p=2$, $$Tor_i^{\underline{\mathbb{Z}/2}}(Q,Q)=0\;\text{for $i>0$},$$ $$(HQ\wedge_{H\underline{\mathbb{Z}/2}}HQ)=H\overline{\mathbb{Z}/2}.$$ For $p>2$, $$Tor_i^{\underline{\mathbb{Z}/p}}(Q,Q)=\Phi\;\text{for $i>0$},$$ and we have \beg{eqqqq}{(HQ\wedge_{H\underline{\mathbb{Z}/p}}HQ)_\star=\Sigma^{2-\beta}HQ_\star\oplus \Sigma^2H\underline{\mathbb{Z}/p}_\star^\Phi } as $H\underline{\mathbb{Z}/p}_\star$-modules. \end{proposition} \noindent {\bf Remark:} It is important to note that the smash product studied in Proposition \ref{pcocos} is over $H\underline{\mathbb{Z}/p}$. Over $H\underline{\mathbb{Z}}$, the answer is different, and in fact, the homotopy of the smash product is finite-dimensional (see the remark after Proposition \ref{p44} below). \begin{proof} All the statements except \rref{eqqqq} follow directly from evaluating the homology of \rref{ecocos4}. The reason the case of $p=2$ is special is that then we have $2=p$, $p-1=1$, so there will be additional maps alternatingly canceling the fixed points for higher $Tor$. To prove \rref{eqqqq}, first note that its part concerning the $\mathbb{Z}$-graded line follows from the other statements. The $R$-graded case, however, needs more attention, since we are no longer dealing with objects in the heart, so a priori we merely have a spectral sequence whose $E^2$-term is given by the sum of the $R$-graded coefficients of the $H\underline{\mathbb{Z}/p}$-modules with the Postnikov decomposition given by the Mackey $Tor$-groups: \beg{emackeysss}{ E_{rs}^2=H(Tor^{\underline{\mathbb{Z}/p}}_s(Q,Q))_{r+*\beta}\Rightarrow (HQ\wedge_{H\underline{\mathbb{Z}/p}}HQ)_{r+s+*\beta} } (The indexing may seem reversed from what one would expect. Note, however, that it is induced by a cell filtration on representation sphere spectra.) We need to investigate this spectral sequence. To this end, it is worthwhile pointing out an alternative approach.
Since all the $H\underline{\mathbb{Z}/p}$-modules considered above have very easy Borel homology, essentially the same information can be recovered by working on geometric fixed points. Now for $H\underline{\mathbb{Z}/p}$-modules $A,B$, one has \beg{ecocos5}{ A^\Phi\wedge_{H\underline{\mathbb{Z}/p}^\Phi}B^\Phi=(A\wedge_{H\underline{\mathbb{Z}/p}}B)^\Phi. } Also, we have a coherent equivalence of categories between $H\underline{\mathbb{Z}/p}^\Phi$-modules and $(H\underline{\mathbb{Z}/p}^\Phi)^{\mathbb{Z}/p}$-modules. Now we have $$B:=H\underline{\mathbb{Z}/p}^\Phi_*=\mathbb{Z}/p[t]\otimes\Lambda[u]$$ with $|t|=2$, $|u|=1$, while $$J:=HQ^\Phi_*=(u,t)$$ (by which we denote that ideal in $B$). By \cite{ekmm}, we therefore have a spectral sequence of the form \beg{ecocos10}{Tor^{B}_{r}(J,J)_s\Rightarrow (HQ\wedge_{H\underline{\mathbb{Z}/p}}HQ)^\Phi_{r+s}. } To calculate the left hand side of \rref{ecocos10}, we have a $B$-resolution $C$ of $J$ of the form \beg{ecocos11}{\diagram \dots\rto & B[4]\rto^{-u}\drto^t & B[3]\rto^{-u}\drto^t&B[2]\\ \dots\rto& B[3]\rto_u & B[2]\rto_u& B[1]. \enddiagram } Tensoring over $B$ with $J$ and taking homology, we get $$J\otimes_B J=\mathbb{Z}/p\{u\otimes u,u\otimes t, t\otimes u\}\oplus B\{t\otimes t\}$$ (where the braces indicate a sum of copies indexed by the given elements) with the $B$-module structure the notation suggests, while $$Tor_i^B(J,J)=\mathbb{Z}/p\{u,t\}[i+1]\;\text{for $i>0$}.$$ Thus, the spectral sequence \rref{ecocos10} is given by $$ E_{rs}^1=\left\{\begin{array}{ll} \mathbb{Z}/p & \text{if $r=0$ and $s=2,4,5,6,\dots$}\\ \mathbb{Z}/p\oplus \mathbb{Z}/p & \text{if $r=0$ and $s=3$}\\ \mathbb{Z}/p & \text{if $r>0$ and $s=r+2,r+3$}\\ 0 & \text{else.} \end{array} \right. $$ Moreover, one can show that the spectral sequence \rref{ecocos10} collapses, since the only possible target of differentials is the $B\{t\otimes t\}$ in filtration degree $0$; but by comparison with $H\underline{\mathbb{Z}/p}$, we see that those elements cannot be $0$, since they inject into Borel homology by the connecting map. Thus, we see that \beg{emackeysss1}{ (HQ\wedge_{H\underline{\mathbb{Z}/p}}HQ)^\Phi_n=\left\{ \begin{array}{ll} \mathbb{Z}/p &\text{for $n=2$}\\ \mathbb{Z}/p\oplus \mathbb{Z}/p & \text{for $n=3,4,\dots$}\\ 0 & \text{else.} \end{array} \right.
} Now the $E^2$-term \rref{emackeysss} on the $(?)^\Phi$-level (obtained by inverting $b$) is ``off by one'' in the sense that we obtain $$ \begin{array}{ll} \mathbb{Z}/p &\text{for $n=1$}\\ \mathbb{Z}/p\oplus \mathbb{Z}/p & \text{for $n=2,3,4,\dots$}\\ 0 & \text{else.} \end{array} $$ This, in fact, detects a single $d^2$-differential in the spectral sequence \rref{emackeysss} originating in the $H\overline{\mathbb{Z}/p}_\star$-part in degrees $$2-n\beta,\; n=1,2,3,\dots$$ and also proves there cannot be any other differentials, thus yielding \rref{eqqqq} (see Figures 5 and 6). \end{proof} Now let $T$ denote the $\mathbb{Z}/p$-equivariant suspension spectrum of the cofiber of the second desuspension of the $\mathbb{Z}/p$-equivariant based degree $p$ map \beg{eswsw}{S^\beta\rightarrow S^2.} (This spectrum was denoted by $T(\theta)$ in \cite{sw}.) We also denote $HT=H\underline{\mathbb{Z}/p}\wedge T.$ We shall see in Section \ref{slens} that the spectrum $T$ (up to suspension) maps into the based suspension spectrum of the $\mathbb{Z}/p$-equivariant lens space $B_{\mathbb{Z}/p}(\mathbb{Z}/p)$, and therefore, the connecting map of the $H\underline{\mathbb{Z}/p}_\star$-homology long exact sequence of \rref{eswsw} is an isomorphism on the $\mathbb{Z}/p$ in degree $\beta$ (which is the only degree in which it can be non-trivial for dimensional reasons). We also see that the coefficients of $HT$ suggest the possibility of a filtration whose associated graded pieces are wedges of suspensions of $HM$. Indeed, from the universal property, we readily construct a morphism $$HT\rightarrow HM,$$ which leads to a cofibration sequence of the form \beg{eswsw1}{ \Sigma HM\rightarrow HT\rightarrow HM, } which splits additively on $R$-graded coefficients. The $\mathbb{Z}$-graded coefficients of $HT$, which are $\mathbb{Z}/p$ in degrees $-1$ and $1$, and $\mathbb{Z}/p\oplus\mathbb{Z}/p$ in degree $0$, generate the two $HM$-copies in \rref{eswsw1} in the above sense. The cofibration \rref{eswsw1} can also be seen on the level of Mackey functors. Desuspending \rref{eswsw} by $\beta$ and smashing with $H\underline{\mathbb{Z}/p}$, we get, by \rref{ecoconst}, a map of the form $$H\underline{\mathbb{Z}/p}\rightarrow H\overline{\mathbb{Z}/p}.$$ Its cofiber can then be realized by a Mackey chain complex \beg{ecocos20}{ \underline{\mathbb{Z}/p}\rightarrow \overline{\mathbb{Z}/p} } set in homological degrees $0,1$, where the differential is $1$ on the fixed orbit and $0$ on the free orbit (this is, essentially, the only non-trivial possibility). In the derived category of $\underline{\mathbb{Z}/p}$-modules, then, \rref{ecocos20} obviously maps to $Q$, with the kernel quasi-isomorphic to $Q[1]$. The cofibration \rref{eswsw1} does not split. To see that, we observe that \begin{proposition}\label{p44} In the derived category of $p$-local $\mathbb{Z}/p$-equivariant spectra, we have an equivalence \beg{eswsw2}{ T\wedge T\sim T\vee \Sigma^{\beta-1}T. } \end{proposition} \begin{proof} The strategy is to smash two copies of the cofibration \rref{eswsw} together.
This gives a cofibration sequence \beg{ettsplit}{T\rightarrow T\wedge T\rightarrow \Sigma^{\beta-1}T. } We need to show that the first map of \rref{ettsplit} has a left inverse in the derived category. To this end, smashing the two cofibration sequences introduces two increasing filtrations on $T\wedge T$ where the associated graded pieces in degrees $0,1,2$ are \beg{ettsplit1}{S^0, \;S^{\beta-1}\vee S^{\beta-1},\; S^{2\beta-2},} respectively. We obtain an obvious splitting $$F_1(T\wedge T)\rightarrow T$$ whose composition with the projection $$T\rightarrow S^{\beta-1}$$ further also extends to $T\wedge T$. Therefore, the obstruction to constructing the splitting lies in the $RO(\mathbb{Z}/p)$-graded $\mathbb{Z}/p$-equivariant homotopy group \beg{ettsplit2}{\pi_{2\beta-3}^{\mathbb{Z}/p}S^0. } To study the group \rref{ettsplit2}, we consider the fibration $$S^0\rightarrow S^{2\beta}\rightarrow\Sigma (S(2\beta)_+).$$ Since $\pi_0^{\mathbb{Z}/p}S^3=0$, we can represent a class in \rref{ettsplit2} by a $\mathbb{Z}/p$-equivariant stable map \beg{ettsplit3}{\alpha:S(2\beta)_+\rightarrow S^2.} The source of \rref{ettsplit3} is a free spectrum, and $\alpha$ is $0$ when restricted to the $1$-skeleton. On $S(2\beta)_2/S(2\beta)_1$, the group of homotopy classes of possible maps to $S^2$ is $\mathbb{Z}$. However, the attaching map of the free $2$-cell to the free $1$-cell is $N_p=1+\gamma+\dots+\gamma^{p-1}$, and thus $p$ times the generator is homotopic to $0$. Thus, the group is actually $\mathbb{Z}/p$. All of these maps extend to $S(2\beta)$ by the homotopy addition theorem, while the group of possible choices of the extension lies in $$\pi_1^{\{e\}}(S^0),$$ which has no $p$-primary component for $p>2$. Thus, away from $2$, the obstruction group is $\mathbb{Z}/p$ and is the same as on the level of $H\underline{\mathbb{Z}}$-modules. In the category of $H\underline{\mathbb{Z}}$-modules, we observe that $\Sigma^{2-\beta}T$ is the homotopy cofiber of a stable map \beg{ettmack1}{S^0\rightarrow S^{2-\beta}.} Smashing \rref{ettmack1} with $H\underline{\mathbb{Z}}$ can be realized as the map $\Phi$ from the constant Mackey functor $\underline{\mathbb{Z}}$ to the co-constant Mackey functor $\overline{\mathbb{Z}}$ which is $p$ on the free orbit and $1$ on the fixed orbit: \beg{ettmack2}{\Phi:\underline{\mathbb{Z}}\rightarrow\overline{\mathbb{Z}}.} We see that this map $\Phi$ is injective and its cokernel is $Q$. Thus, we have: \beg{ettmack3}{\Sigma^{2-\beta}H\underline{\mathbb{Z}}\wedge T=HQ.} Now denoting by $\underline{\mathcal{L}_p}$ the principal projective $\underline{\mathbb{Z}}$-module on the free orbit (i.e. the unique $\underline{\mathbb{Z}}$-module which is the integral regular representation $\mathcal{L}_p$ on the free orbit and $\mathbb{Z}$ on the fixed orbit), there is a $\underline{\mathbb{Z}}$-projective resolution of $\overline{\mathbb{Z}}$ of the form \beg{ettmack4}{ \diagram \underline{\mathbb{Z}}\rto^\subset&\underline{\mathcal{L}_p}\rto^{1-\gamma}&\underline{\mathcal{L}_p}.
\enddiagram } Thus, given \rref{ettmack2}, we have a $\underline{\mathbb{Z}}$-resolution of $Q$ of the form \beg{ettmack5}{ \diagram \underline{\mathbb{Z}}\rto^\subset&\underline{\mathbb{Z}}\oplus\underline{\mathcal{L}_p}\rto^{\subset\oplus(1-\gamma)}&\underline{\mathcal{L}_p}. \enddiagram } Thus, $H\underline{\mathbb{Z}}\wedge\Sigma^{4-2\beta}T\wedge T$ can be realized by tensoring \rref{ettmack5} with $Q$ over $\underline{\mathbb{Z}}$, which gives a chain complex of $\underline{\mathbb{Z}}$-modules of the form \beg{ettmack6a}{ \diagram Q\rto^{0\oplus\subset}&Q\oplus\underline{L}_p\rto^{\subset\oplus(1-\gamma)}&\underline{L}_p \enddiagram } with the last term in degree $0$, which can also be rewritten as the two-stage chain complex of $\underline{\mathbb{Z}}$-modules \beg{ettmack6}{\diagram Q\oplus\overline{L}_{p-1}\rto^{\subset\oplus(1-\gamma)}&\underline{L}_p \enddiagram } in degrees $1,0$. Now we have a cofibration sequence \beg{ettmack7}{\Sigma^{4-2\beta}T\rightarrow \Sigma^{4-2\beta}T\wedge T\rightarrow \Sigma^{3-\beta}T. } We see that the last term can be realized by the chain complex \beg{ettmack8}{Q\rightarrow 0 } in degrees $0,1$. There exists a chain map $\Psi$ from \rref{ettmack6} to \rref{ettmack8} which is the identity on $Q$ and $0$ on the other components. This map, in fact, has a right inverse given by $$(1,-(1-\gamma)^{p-2}).$$ Moreover, $Ker(\Psi)$ is the chain complex in degrees $1,0$ of the form \beg{ettmack9}{\diagram \overline{L}_{p-1}\rto^{1-\gamma}&\underline{L}_p \enddiagram } which can be also written as \beg{ettmack10}{\diagram Q\rto^\subset&\underline{L}_{p}\rto^{1-\gamma}&\underline{L}_p \enddiagram } which is \rref{ettmack4} tensored over $\underline{\mathbb{Z}}$ with $Q$, and thus represents $$H\underline{\mathbb{Z}}\wedge S^{2-\beta}\wedge \Sigma^{2-\beta}T.$$ Thus, we have proved our splitting after smashing with $H\underline{\mathbb{Z}}$ in the category of $H\underline{\mathbb{Z}}$-modules. To prove the statement spectrally, however, we need to be even more precise, since the surjection from \rref{ettmack6} to \rref{ettmack8} is not unique: It could also be non-zero on the second summand (and those surjections do not split). We need to prove specifically that the surjection from \rref{ettmack6} to \rref{ettmack8} induced by the second map of \rref{ettmack7} is $0$ on the second component of the source of \rref{ettmack6}. However, to this end, it suffices to prove that the map $\Psi$ vanishes when composed with the canonical chain map from \rref{ettmack4} to \rref{ettmack6a}. We see however that this map is obtained by smashing with $H\underline{\mathbb{Z}}$ the composition $$S^{2-\beta}\rightarrow \Sigma^{2-\beta}T\rightarrow\Sigma^{4-2\beta}T\wedge T\rightarrow \Sigma^{3-\beta}T,$$ which is $0$ since it involves composing two consecutive maps in a cofibration sequence. Thus, our statement is proved.
\end{proof} \noindent {\bf Remark:} From the above proof, one can also read off the homotopy groups of $HQ\wedge_{H\underline{\mathbb{Z}}}HQ$, which are concentrated in finitely many degrees (compare with Proposition \ref{pcocos}). Smashing \rref{eswsw2} with $H\underline{\mathbb{Z}/p}$, we see that $HT\wedge_{H\underline{\mathbb{Z}/p}}HT$ additively splits as a direct sum of copies of $HM_\star$ suspended by $0,1,\beta,\beta-1$. This is not what would happen if the cofibration of $H\underline{\mathbb{Z}/p}$-modules \rref{eswsw1} split: the higher derived terms would appear. This will play a role in our description of the $\mathbb{Z}/p$-equivariant Steenrod algebra in the subsequent sections. One may, in fact, ask how the ``tidy'' behavior \rref{eswsw2} is even possible on $H\underline{\mathbb{Z}/p}$-homology, given the infinitely many higher $Tor$'s of $Q$ with itself, computed in Proposition \ref{pcocos}. We present an explanation in terms of geometric fixed points: The resolution \rref{ecocos11} in fact gives a short exact sequence of $B$-modules of the form $$0\rightarrow J[1]\rightarrow B[1]\oplus B[2]\rightarrow J\rightarrow 0.$$ On the level of geometric fixed points, the cofibration \rref{eswsw1} in fact realizes this extension. \noindent {\bf Comment:} It is worth noting that each of the traceless indecomposable modular representations $L_i$, $i=1,\dots,p-1$, gives rise to a Mackey functor (which we also denote by $\widetilde{L_i}$) equal to $L_i$ on the free orbit and $0$ on the fixed orbit. Thus, $\widetilde{L_1}=Q$. The $H\underline{\mathbb{Z}/p}$-modules $H\widetilde{L_i}$, $i=1,\dots,p-1$, all have additively isomorphic $RO(\mathbb{Z}/p)$-graded coefficients, even though no morphism of spectra induces this isomorphism (passing to Borel homology, this says there is no map between $L_i$, $L_j$ for $i\neq j$ which would induce an isomorphism in group homology). Of course, the non-equivariant coefficients of $HL_i$ are $L_i$ in degree $0$, so we see they all are of different dimensions. The reason we only encounter $H\widetilde{L_1}=HQ$ in our calculations is that the Borel homology spectrum of $H\underline{\mathbb{Z}/p}\wedge H\underline{\mathbb{Z}/p}$ is a wedge sum of suspended copies of $H\underline{\mathbb{Z}/p}^b$. This raises the question as to whether in general a morphism of spectra $f:X\rightarrow Y$ which induces an isomorphism in $RO(\mathbb{Z}/p)$-graded coefficients is a weak equivalence. This is obviously true for $p=2$ by the cofibration sequence $$\mathbb{Z}/2_+\rightarrow S^0\rightarrow S^\alpha,$$ but it is false for $p>2$: Consider the Mackey functor which is equal to the traceless complex representation $\beta$ on the free orbit and $0$ on the fixed orbit. (We will also denote it by $\beta$.) Then the $\mathbb{Z}/p$-Borel homology spectrum $H\beta^b$ has trivial $RO(\mathbb{Z}/p)$-graded coefficients, since the cohomology theory is $(\beta-2)$-periodic, and the group homology of $\mathbb{Z}/p$ with coefficients in $\beta$ is $0$. (Also note that for $p=2$, this Borel homology will be non-zero in degree $\alpha-1$, where $\alpha$ is the $1$-dimensional real sign representation.) On the other hand, call an equivariant spectrum $X$ {\em $p$-complete} when its canonical map into the homotopy inverse limit of $X\wedge M\mathbb{Z}/p^r$ is an equivalence.
Then a morphism $f:X\rightarrow Y$ of $p$-complete bounded below $\mathbb{Z}/p$-equivariant spectra which induces an isomorphism of $RO(\mathbb{Z}/p)$-graded coefficients is a weak equivalence. To see this, since we already know $f^\Phi$ is an equivalence, it suffices to consider the case when $X,Y$ are free. Equivalently, we must show that a bounded below free $\mathbb{Z}/p$-spectrum whose fixed point coefficients are $0$ is $0$. So assume it is not $0$, and consider the bottom dimensional degree non-equivariant $\mathbb{Z}[\mathbb{Z}/p]$-module $V$ of its coefficients. But then $V/(1-\gamma)\neq 0$, since on a modular representation of $\mathbb{Z}/p$, $1-\gamma$ is never onto. Thus, the coefficients of the fixed point spectrum are non-zero in the same degree. We do not know whether the bounded below assumptions can be removed. \section{Cohomology of the equivariant projective spaces and lens spaces}\label{slens} The $\mathbb{Z}/p$-equivariant complex projective space $\mathbb{C} P^\infty_{\mathbb{Z}/p}$ can be identified with the space of complex lines in the complete complex $\mathbb{Z}/p$-universe $\mathcal{U}$. An explicit decomposition \beg{eflag}{\mathcal{U}=\bigoplus_{i\in \mathbb{N}_0} \alpha_i} is called a {\em flag}. It leads to a filtration $$F_n(\mathbb{C} P^\infty)=P(\alpha_0\oplus\dots\oplus \alpha_n).$$ We have $$F_n(\mathbb{C} P^\infty)/F_{n-1}(\mathbb{C} P^\infty)\cong S^{\alpha_n^{-1}(\alpha_0\oplus\dots\oplus \alpha_{n-1})}.$$ This leads to a spectral sequence \beg{ess1}{E_1=\bigoplus_{n\in\mathbb{N}_0}H\underline{\mathbb{Z}/p}_{\star-\alpha_n^{-1}(\alpha_0\oplus\dots\oplus \alpha_{n-1})} \Rightarrow H\underline{\mathbb{Z}/p}^\star\mathbb{C} P^\infty_{\mathbb{Z}/p}.} Whether or not this collapses depends on the flag. For the {\em regular flag} $$\alpha_i=\beta^i$$ (remembering our convention on indexing), the free generators of the copies of $H\underline{\mathbb{Z}/p}_\star$ are in dimensions \beg{ess2}{\begin{array}{l}0,-\beta,-2\beta,\dots,-(p-1)\beta,\\ -(p-1)\beta-2,-p\beta-2,\dots,-2(p-1)\beta-2,\\ -2(p-1)\beta-4,\dots,-3(p-1)\beta-4,\\ \dots \end{array} } We see from Proposition \ref{p1} that there is no element in \rref{ess1} in total dimension $-\beta-1$ or $-3-(p-1)\beta$, and therefore the generator $x$ in dimension $-\beta$ and the generator $y$ in dimension $-2-(p-1)\beta$ are permanent cycles; thus, the spectral sequence collapses, and $H\underline{\mathbb{Z}/p}^\star\mathbb{C} P^\infty_{\mathbb{Z}/p}$ is a free $H\underline{\mathbb{Z}/p}_\star$-module. \begin{proposition}\label{p33} One has $$H\underline{\mathbb{Z}/p}^\star\mathbb{C} P^\infty_{\mathbb{Z}/p} =H\underline{\mathbb{Z}/p}_\star[x,y]/(\sigma^{-2}y-x^p+b^{p-1}x). $$ \end{proposition} \begin{proof} The collapse of the spectral sequence was already proved. Thus, what remains to prove is the multiplicative relation. To this end, we work in Borel cohomology. (This could, in principle, generate counterterms in the $\Gamma^\prime$ tail, but since it is $\sigma^{-2}$-divisible, that could be corrected by a different choice of the generator $y$.)
Now in Borel cohomology, we are essentially working in the cohomology of the space $\mathbb{C} P^\infty\times B\mathbb{Z}/p$. From this point of view, it is convenient to treat the periodicity $\sigma^{2}$ as the identity, so we have $$H^*(\mathbb{C} P^\infty\times B\mathbb{Z}/p;\mathbb{Z}/p)=\mathbb{Z}/p[x,b]\otimes \Lambda[u]$$ where $x,b$ have cohomological dimension $2$ and $u$ has cohomological dimension $1$. The computation of the element $y$ from this point of view then amounts to computing the Euler class of the regular complex representation of $\mathbb{Z}/p$. This is $$x(x+b)(x+2b)\dots(x+(p-1)b)=x^p-b^{p-1}x.$$ \end{proof} Now similarly as in Milnor \cite{milnor}, we have, for a $\mathbb{Z}/p$-space $X$, a multiplicative map \beg{emilnor1}{\lambda:H^\star(X)\rightarrow H^\star(X)\widehat{\otimes} A_\star} where $\widehat{\otimes}$ denotes the tensor product completed at $(b)$. For $X=\mathbb{C} P^\infty_{\mathbb{Z}/p}$, we get \beg{emilnor2}{\lambda(x)=x\otimes 1+\sum_{n\geq 1}y^{p^{n-1}}\otimes\underline{\xi}_n } and \beg{emilnor3}{\lambda(y)=y\otimes 1+\sum_{n\geq 1}y^{p^n}\otimes\underline{\theta}_n } where the dimensions are given by $$|\underline{\xi}_n|=2p^{n-1}+(p^n-p^{n-1}-1)\beta,$$ $$|\underline{\theta}_n|=2(p^{n}-1)+(p-1)(p^n-1)\beta.$$ From co-associativity, we can further conclude that, writing $$\widetilde{\psi}(t)=\psi(t)-t\otimes1-1\otimes t, $$ we have \beg{ecoprod1}{\widetilde{\psi}(\underline{\xi}_n)=\sum\underline{\theta}_i^{p^{n-i-1}}\otimes\underline{\xi}_{n-i}, } \beg{ecoprod2}{\widetilde{\psi}(\underline{\theta}_n)=\sum\underline{\theta}_i^{p^{n-i}} \otimes \underline{\theta}_{n-i} .} The picture becomes a little less tidy when we calculate $H\underline{\mathbb{Z}/p}^\star B_{\mathbb{Z}/p}(\mathbb{Z}/p)$. For a model of $B_{\mathbb{Z}/p}(\mathbb{Z}/p)$, we use the quotient of the unit sphere in $\mathcal{U}$ by the action of $\mathbb{Z}/p\subset S^1$.
Now, given a flag \rref{eflag}, we have a filtration with $$F_{2n+1}B_{\mathbb{Z}/p}(\mathbb{Z}/p)=S(\alpha_0\oplus\dots\oplus\alpha_n)/(\mathbb{Z}/p),$$ $$F_{2n}B_{\mathbb{Z}/p}(\mathbb{Z}/p)=\{(x_0,\dots,x_n)\in S(\alpha_0\oplus\dots\oplus\alpha_n)\mid \mathrm{Arg}(x_n)=2k\pi/p\}/ (\mathbb{Z}/p).$$ We have $$F_{2n}/F_{2n-1}\cong S^{(\alpha_0\oplus\dots \oplus\alpha_{n-1})\alpha_n^{-1}},$$ $$F_{2n+1}/F_{2n}\cong S^{(\alpha_0\oplus\dots \oplus\alpha_{n-1})\alpha_n^{-1}\oplus 1_\mathbb{R}}.$$ Thus, we have a spectral sequence \beg{esss1a}{E_1=\bigoplus_{n\in \mathbb{N}_0}H\underline{\mathbb{Z}/p}_\star F_n/F_{n-1}\Rightarrow H\underline{\mathbb{Z}/p}^\star B_{\mathbb{Z}/p}(\mathbb{Z}/p). } If we use the regular flag, the generators of the summands will be in (homological) degrees: $$\begin{array}{l} 0,-1,-\beta, -\beta-1,\dots, -(p-1)\beta, -(p-1)\beta-1,\\ -(p-1)\beta-2, -(p-1)\beta-3,\dots,-2(p-1)\beta-2, -2(p-1)\beta-3,\\ -2(p-1)\beta-4,\dots \end{array} $$ The difference, however, is that now in the spectral sequence \rref{esss1a}, the generator $z$ in degree $-1$ supports a $d_1$ differential. To see this, note that otherwise it would be a permanent cycle, and hence so would its Bockstein. Now the image of $z$ in Borel cohomology is a non-trivial element of $$H^1(\mathbb{Z}/p\times\mathbb{Z}/p;\mathbb{Z}/p),$$ so the image of $\beta(z)$ in Borel cohomology would be non-zero. However, we see that in the $E_1$-term of the spectral sequence \rref{esss1a}, all the elements of dimension $-2$ have image $0$ in Borel cohomology. There is a unique target of this differential, and all other differentials originate in $$z\cdot H\underline{\mathbb{Z}/p}^\star \mathbb{C} P^\infty_{\mathbb{Z}/p}.$$ One also notes that \beg{eeee1}{z\cdot x^{p-1}} is a permanent cycle. Bookkeeping leads to the following: \begin{proposition}\label{p10} We have \beg{ep101}{\begin{array}{l}H\underline{\mathbb{Z}/p}^\star B_{\mathbb{Z}/p}(\mathbb{Z}/p)= \\ H\underline{\mathbb{Z}/p}_\star[y]\otimes \Lambda[z\cdot x^{p-1}]\oplus (HM_\star[x,y]/(\sigma^{-2}y-x^p+b^{p-1}x)) \otimes \mathbb{Z}/p\{x,\nu\}. \end{array}} (See Figure 10 for the element $\nu$.) \end{proposition} $\square$\medskip In Proposition \ref{p10}, the ``polynomial generators'' at $HM_\star$ just mean suspension by the dimensional degree of the given monomial. In particular, we have canonical elements $q\in H^{1+\beta}B_{\mathbb{Z}/p}(\mathbb{Z}/p)$, $s\in H^{\beta} B_{\mathbb{Z}/p}(\mathbb{Z}/p)$, $\nu\in H^{\beta-1} B_{\mathbb{Z}/p}(\mathbb{Z}/p)$ represented by \beg{ep102}{bz-xu\in x\cdot HM_\star, \;z\sigma^{-2}u\in \nu\cdot HM_\star,\; z\sigma^{-2}\in \nu\cdot HM_\star.} (See Figure 10; whiskers point to the names of the elements concerned.) In \rref{ep102}, the names of the elements come from Borel cohomology, and we write them as elements of the appropriate terms of \rref{ep101}.
We also have an element $\omega\in H^{(p-1)\beta+1}(B_{\mathbb{Z}/p}(\mathbb{Z}/p))$ which represents \ref{eeee1}. One notes, however, that this is only correct in the associated graded object of our filtration. To determine the exact image in Borel cohomology, we note that we must have $$\beta(\omega)=y,$$ since $y$ is the additive generator of $H\underline{\mathbb{Z}/p}^{(p-1)\beta+2}B_{\mathbb{Z}/p}(\mathbb{Z}/p)$. This gives $$\omega\mapsto z(\sigma^{2-2p}t^{p-1}-b^{p-1}).$$ Now we can write $$\lambda(q)=q\otimes 1 +\sum_{n\geq 1} y^{p^{n-1}}\otimes \widehat{\xi}_{n},$$ \beg{essss}{\lambda(s) = s \otimes 1 + q \otimes (\tau_0 + b^{p-2} \widehat{\xi}_1) + \sum_{n \geq 1} y^{p^{n-1}} \otimes \widehat{\tau}_n,} \beg{enununu}{\lambda(\nu) = \nu \otimes 1 - q \otimes b^{p-2} \underline{\xi}_1 + x \otimes (\tau_0 + b^{p-2} \widehat{\xi}_1) + \sum_{n\geq 1} y^{p^{n-1}} \otimes \underline{\tau}_n.} The somewhat complicated form of the right hand sides of formulas \ref{essss}, \ref{enununu} is forced by considering which elements exist in $H\underline{\mathbb{Z}/p}^\star B_{\mathbb{Z}/p}(\mathbb{Z}/p)$. We can also write $$\lambda(\omega)=\omega\otimes 1 +\sum_{n\geq1} y^{p^n}\otimes\underline{\mu}_n +\dots,$$ however, the $\dots$ indicate that there will be other summands. The dimensions are then given by $$|\widehat{\xi}_n|=|\underline{\xi}_n|-1,$$ $$|\underline{\tau}_n|=|\underline{\xi}_n|+1,$$ $$|\widehat{\tau}_n|=|\underline{\tau}_n|-1=|\underline{\xi}_n|,$$ $$|\underline{\mu}_n|=|\underline{\theta}_n|+1.$$ Co-associativity then implies \beg{ecoprod1a}{\widetilde{\psi}(\widehat{\xi}_n)=\sum\underline{\theta}_i^{p^{n-i-1}}\otimes\widehat{\xi}_{n-i}, } \beg{ecoprod2a}{\widetilde{\psi}(\widehat{\tau}_n)=\sum\underline{\theta}_i^{p^{n-i-1}} \otimes\widehat{\tau}_{n-i}+\widehat{\xi}_n\otimes (\tau_0+\widehat{\xi}_1b^{p-2}) ,} \beg{ecoprod3a}{\widetilde{\psi}(\underline{\tau}_n)=\sum\underline{\theta}_i^{p^{n-i-1}} \otimes\underline{\tau}_{n-i}+\widehat{\xi}_n\otimes\underline{\xi}_1b^{p-1}+\underline{\xi}_n\otimes (\tau_0+\widehat{\xi}_1b^{p-2}) .} We can now re-state Proposition \ref{p10} more precisely, in fact determining the multiplicative structure of $H\underline{\mathbb{Z}/p}^\star B_{\mathbb{Z}/p}(\mathbb{Z}/p)$ completely:
\begin{proposition}\label{pzpzp} The module $H\underline{\mathbb{Z}/p}^\star B_{\mathbb{Z}/p}(\mathbb{Z}/p)$ is additively isomorphic to a direct sum of $$H\underline{\mathbb{Z}/p}_\star[y]\otimes\Lambda[\omega]$$ and (suspended) copies of $HM_\star$ generated on monomials of the form $$y^nx^{i-1}\pi$$ where $n\in \mathbb{N}_0$, $i=1,\dots, p-1$, and $\pi$ stands for one of the symbols $x,\nu$.
The multiplicative structure is entirely determined by the canonical inclusion of the good tail into $$H\underline{\mathbb{Z}/p}^\star\mathbb{C} P^\infty_{\mathbb{Z}/p}\otimes \Lambda[z,u].$$ In particular, the good tail is the quotient of $$H\underline{\mathbb{Z}/p}_\star[x,y]\otimes\Lambda[s,q,\nu,\omega]$$ modulo the relations $$x^p-b^{p-1}x=y\sigma^{-2},$$ $$\sigma^{-2}q=b\nu-(\sigma^{-2}u)x,$$ $$\sigma^{-2}s=(\sigma^{-2}u)\nu,$$ $$(\sigma^{-2} u)s=0,$$ $$(\sigma^{-2}u)q=bs,$$ $$qs=0,$$ $$q\nu=xs,$$ $$\nu s=0,$$ $$\omega s=\omega\nu=0,$$ $$\omega q=-sy,$$ $$\omega x=y\nu.$$ Additionally, we have $$H\underline{\mathbb{Z}/p}^\star B_{\mathbb{Z}/p}(\mathbb{Z}/p)^d=H^2_{(b,\sigma^{-2})}(H\underline{\mathbb{Z}/p}_\star^g, H\underline{\mathbb{Z}/p}^\star B_{\mathbb{Z}/p}(\mathbb{Z}/p)^g)[-1].$$ \end{proposition}
$\square$\medskip
\section{Images in Borel cohomology}\label{sborel}
Similarly as in the $p=2$ case, it is convenient to consider the {\em Borel cohomology dual Steenrod algebra} $$A^{cc}_\star=F(E\mathbb{Z}/p_+,H\underline{\mathbb{Z}/p}\wedge H\underline{\mathbb{Z}/p})_\star.$$ We have $$A^{cc}_\star=(A_*[\sigma^{2},\sigma^{-2}][b]\otimes \Lambda[u])^\wedge_{(b)}.$$ In $A^{cc}_\star$, which forms a formal Hopf algebroid, the non-equivariant coproduct relations hold, and we have \beg{eru}{\rho^2:=\eta_R(\sigma^2)=\sigma^2+\sigma^{2p} b^{p-1} \xi_1+\dots+\sigma^{2p^n} b^{p^n-1}\xi_n+\dots } \beg{erou}{\overline{u}:=\eta_R(u)= u+\sigma^2 b\tau_0+\sigma^{2p}b^p\tau_1+\dots+\sigma^{2p^n}b^{p^n}\tau_n+\dots }
\begin{proposition}\label{pppelements} In $A_\star^{cc}$, we have \beg{ero10}{ \underline{\xi}_n=\rho^{-2}\sigma^{2p^n-2p^{n-1}}\xi_n+\dots+\rho^{-2}\sigma^{2p^N-2p^{n-1}}b^{p^N -p^n}\xi_N+\dots, } \beg{erou101}{\begin{array}{l} \widehat{\xi}_n=\sigma^{2p^n-2p^{n-1}}(b\tau_n-\overline{u}\rho^{-2}\xi_n)+\dots\\ +\sigma^{2p^N-2p^{n-1}}(b\tau_N-\overline{u}\rho^{-2}\xi_N)b^{p^N-p^n}+\dots, \end{array} } \beg{erou102}{ \underline{\tau}_n= \rho^{-2}\sigma^{2p^n-2p^{n-1}}\tau_n+\dots+\rho^{-2}\sigma^{2p^N-2p^{n-1}}b^{p^N -p^n}\tau_N+\dots \;, } \beg{erou103}{ \widehat{\tau}_n=\underline{\tau}_n\overline{u}=\rho^{-2}\sigma^{2p^n-2p^{n-1}}\tau_n\overline{u}+ \dots +\rho^{-2}\sigma^{2p^N-2p^{n-1}}\tau_N\overline{u}b^{p^N-p^n}+\dots\; . } \end{proposition}
\begin{proof} Multiplying \ref{eru} by $\sigma^{-2}\rho^{-2}$, we get \beg{ero1}{\sigma^{-2}=\rho^{-2}+\rho^{-2}\sigma^{2p-2}b^{p-1}\xi_1+\dots +\rho^{-2}\sigma^{2p^n-2}b^{p^n-1} \xi_n+\dots} Now $\mathbb{C} P^\infty_{\mathbb{Z}/p}$ has the same $\mathbb{Z}/p$-Borel cohomology as $\mathbb{C} P^\infty$, which is \beg{eborelcp}{H\underline{\mathbb{Z}/p}^\star_c\mathbb{C} P^\infty=\mathbb{Z}/p[t][\sigma^{2},\sigma^{-2}][b]\otimes\Lambda[u].
} Further, in Borel cohomology, we can write $$x=\sigma^{-2}t$$ and thus $$y=\sigma^{2-2p}t^p-tb^{p-1}.$$ Combining this with \ref{emilnor2}, we have \begin{align*} \lambda(t) & = \lambda(\sigma^2 x) = x \otimes \rho^2 + \sum_{n \geq 1} y^{p^{n-1}} \otimes \underline{\xi}_n \rho^2\\ & = \sigma^{-2}t \otimes \rho^2 + \sum_{n \geq 1} (\sigma^{2p^{n-1}-2p^n}t^{p^n} - b^{p^{n}-p^{n-1}}t^{p^{n-1}}) \otimes \underline{\xi}_n \rho^2\\ & = t \otimes (\sigma^{-2}-b^{p-1}\underline{\xi}_1) \rho^2 + \sum_{n \geq 1} t^{p^n} \otimes (\sigma^{2p^{n-1}-2p^n} \underline{\xi}_n - b^{p^{n+1}-p^{n}} \underline{\xi}_{n+1})\rho^2 \end{align*} Comparing with the non-equivariant result \begin{equation*} \lambda(t) = t \otimes 1 + \sum_{n \geq 1} t^{p^n} \otimes \xi_n, \end{equation*} we have \begin{equation*} \begin{cases} 1 & =(\sigma^{-2}-b^{p-1}\underline{\xi}_1) \rho^2 \\ \xi_n & = (\sigma^{2p^{n-1}-2p^n} \underline{\xi}_n - b^{p^{n+1}-p^{n}} \underline{\xi}_{n+1})\rho^2, \end{cases} \end{equation*} and so \begin{equation*} \begin{cases} \underline{\xi}_1 & =(\sigma^{-2}- \rho^{-2}) b^{1-p} \\ \underline{\xi}_{n+1} & = (\sigma^{2p^{n-1}-2p^n} \underline{\xi}_n - \rho^{-2}\xi_{n})b^{p^n-p^{n+1}}. \\ \end{cases} \end{equation*} (We work in Tate cohomology, into which the Borel cohomology embeds.) Using \ref{ero1} and induction, we can prove \ref{ero10}. Now, applying $\lambda$ to $\sigma^{-2}y = x^p - b^{p-1}x$, we have \begin{align*} & y \otimes \rho^{-2} + \sum_{n\geq 1} y^{p^n} \otimes \underline{\theta}_n \rho^{-2} \\ & = (x^p \otimes 1 + \sum_{n \geq 1} y^{p^n} \otimes \underline{\xi}_n^p)- (b^{p-1}x \otimes 1 + \sum_{n \geq 1} y^{p^{n-1}} \otimes b^{p-1}\underline{\xi}_n) \\ & = (\sigma^{-2}y \otimes 1 - y \otimes b^{p-1}\underline{\xi}_1) + \sum_{n \geq 1} y^{p^n} \otimes (\underline{\xi}_n^p - b^{p-1}\underline{\xi}_{n+1}) \end{align*} Comparing coefficients, we get \beg{ero11}{\underline{\theta}_n\rho^{-2}=\underline{\xi}_n^p-\underline{\xi}_{n+1}b^{p-1}. } Note, in fact, that the relation \ref{ero11} is true on the nose (meaning not just on the image in Borel cohomology, i.e.\ modulo the derived tail) by Proposition \ref{p33} and the Hopf algebroid relation between the product and the coproduct. One should also point out that, in particular, \beg{ero12}{\rho^{-2}=\sigma^{-2}-\underline{\xi}_1b^{p-1}. } Now multiplying \ref{ero1} by $\overline{u}$, we get $$\overline{u} \sigma^{-2}=\overline{u}\rho^{-2}+\overline{u}\rho^{-2}\sigma^{2p-2}b^{p-1}\xi_1+\dots +\overline{u}\rho^{-2}\sigma^{2p^n-2}b^{p^n-1} \xi_n+\dots$$ Plugging in \ref{erou}, we get \beg{erou100}{\begin{array}{l} u\sigma^{-2}-\overline{u}\rho^{-2}+b\tau_0=\\ b^{p-1}\sigma^{2p-2}(\overline{u}\rho^{-2}\xi_1-b\tau_1)+ \dots + b^{p^n-1}\sigma^{2p^n-2}(\overline{u}\rho^{-2}\xi_n-b\tau_n)+\dots \end{array} } In Borel cohomology, applying $\lambda$ to $q = bz - xu = bz - \sigma^{-2}tu$, we get \begin{equation*} \lambda(q) = \lambda(z)(1 \otimes b) - \lambda(t) (1 \otimes \rho^{-2} \bar{u}).
\end{equation*} Plugging in the formulas for $\lambda(q)$, $\lambda(z)$, $\lambda(t)$, we get \begin{align*} &(bz - \sigma^{-2}tu) \otimes 1 + \sum_{n \geq 0} (\sigma^{2p^{n}-2p^{n+1}} t^{p^{n+1}} - b^{p^{n+1}-p^{n}} t^{p^n}) \otimes \widehat{\xi}_{n+1} \\ & = (z \otimes b + \sum_{n \geq 0} t^{p^n} \otimes \tau_n b) - (t \otimes \rho^{-2} \bar{u} + \sum_{n \geq 1} t^{p^n} \otimes \xi_n \rho^{-2} \bar{u}). \end{align*} The coefficients of $z$ match on both sides. Comparing coefficients of $t^{p^n}$, we get \begin{equation*} \begin{cases} -\sigma^{-2}u -b^{p-1} \widehat{\xi}_1 + \rho^{-2} \overline{u} - b \tau_0 & = 0\\ \sigma^{2p^{n-1}-2p^{n}} \widehat{\xi}_{n} - b^{p^{n+1}-p^{n}} \widehat{\xi}_{n+1} - \tau_n b + \xi_n \rho^{-2} \overline{u} &= 0, \end{cases} \end{equation*} and so \begin{equation*} \begin{cases} \widehat{\xi}_1 & =(-\sigma^{-2}u + \rho^{-2} \overline{u} - b \tau_0) b^{1-p} \\ \widehat{\xi}_{n+1} & = (\sigma^{2p^{n-1}-2p^{n}} \widehat{\xi}_{n} - \tau_n b + \xi_n \rho^{-2} \bar{u}) b^{p^{n}-p^{n+1}}. \end{cases} \end{equation*} Using \ref{erou100} and induction, we can prove \ref{erou101}. Recall that we have in Borel cohomology \begin{align*} q & = bz - u\sigma^{-2}t \\ s & = \sigma^{-2}zu \\ \nu & = \sigma^{-2}z \end{align*} From above, we have \begin{align} \label{eq:xi1ul} b^{p-1}\underline{\xi}_1 & = \sigma^{-2} - \rho^{-2}; \\ \label{eq:xi1hat} b^{p-1} \widehat{\xi}_1 & = -\sigma^{-2}u + \rho^{-2}\overline{u} - b\tau_0. \end{align} Multiplying \ref{eq:xi1ul} by $u$ (resp. $\overline{u}$) and adding to \ref{eq:xi1hat}, we get \begin{align} \label{eq:useful1} ub^{p-1}\underline{\xi}_1 + b^{p-1} \widehat{\xi}_1 + b\tau_0 & = \rho^{-2}(\overline{u}-u), \\ \label{eq:useful2} \overline{u}b^{p-1}\underline{\xi}_1 + b^{p-1} \widehat{\xi}_1 + b\tau_0 & = \sigma^{-2}(\overline{u} - u). \end{align} Plugging the Borel cohomology expressions into formula \ref{enununu} and comparing coefficients with \begin{equation*} \lambda(\nu) = \lambda(z)(1 \otimes \rho^{-2}) = z \otimes \rho^{-2} + \sum_{n \geq 0} t^{p^n}\otimes \tau_n\rho^{-2}, \end{equation*} we must have that \begin{align} \label{eq:nv1} \rho^{-2} & = \sigma^{-2} - b^{p-1} \underline{\xi}_1 & \text{ coefficient of $z$ }\\ \label{eq:nv2} \rho^{-2} \tau_0& = \sigma^{-2}(ub^{p-2} \underline{\xi}_1 + b^{p-2} \widehat{\xi}_1 + \tau_0) - b^{p-1} \underline{\tau}_1 &\text{ coefficient of $t$ }\\ \label{eq:nv3} \rho^{-2} \tau_{n+1} & = \sigma^{2p^{n-1}-2p^{n}} \underline{\tau}_n - b^{p^{n+1}-p^n} \underline{\tau}_{n+1}& \text{ coefficient of $t^n$} \end{align} Now, \ref{eq:nv1} is just \ref{eq:xi1ul}. Plugging \ref{eq:useful1} into \ref{eq:nv2}, we have \begin{align*} b^{p-1}\underline{\tau}_{1} = \rho^{-2}(\sigma^{-2} b^{-1}(\overline{u} - u) - \tau_0). \end{align*} Plugging in \ref{erou} and inducting based on \ref{eq:nv3}, we get \ref{erou102}. Now we shall prove that $\widehat{\tau}_n = \underline{\tau}_n\overline{u}$.
From \begin{equation*} \lambda(s) = s \otimes 1 + q \otimes (\tau_0 + b^{p-2} \widehat{\xi}_1) + \sum_{n \geq 1} y^{p^{n-1}} \otimes \widehat{\tau}_n \end{equation*} and $\lambda(s) = \lambda(\nu u) = \lambda(\nu)(1 \otimes \overline{u})$, it remains to verify \begin{equation*} s \otimes 1 + q \otimes (\tau_0 + b^{p-2} \widehat{\xi}_1) = \nu \otimes \overline{u} - q \otimes b^{p-2} \underline{\xi}_1 \overline{u} + x \otimes (\tau_0 + b^{p-2} \widehat{\xi}_1) \overline{u}. \end{equation*} Using \ref{eq:xi1hat} and \ref{eq:useful2} ($\overline{u}$ is exterior), this is equivalent to \begin{equation*} \nu \otimes (u-\overline{u}) + q \otimes \sigma^{-2}b^{-1}(\overline{u}-u) + x \otimes b^{-1}\sigma^{-2}u\overline{u} = 0. \end{equation*} Plugging in the Borel cohomology expressions, the left hand side is the sum of \begin{equation*} z \otimes [\sigma^{-2}(u-\overline{u})+ \sigma^{-2}(\overline{u}-u)] = 0 \end{equation*} and ($u$ is exterior) \begin{equation*} - \sigma^{-2}t \otimes u \sigma^{-2}b^{-1}(\overline{u}-u) + \sigma^{-2}t \otimes b^{-1}\sigma^{-2}u\overline{u} = 0. \end{equation*} Thus, we have \ref{erou103}. \end{proof}
From this, we can deduce further multiplicative relations \beg{erou104}{ \widehat{\xi}_n\rho^{-2}=-\underline{\xi}_n\overline{u}\rho^{-2} +b\underline{\tau}_n, } \beg{erou105}{ \widehat{\tau}_n\rho^{-2}=\underline{\tau}_n\overline{u}\rho^{-2}, } \beg{erou106}{ b\widehat{\tau}_n=\widehat{\xi}_n\overline{u}\rho^{-2}, } and \beg{erou107}{ \widehat{\tau}_n\rho^{-2}\overline{u}=0. } Recall, of course, \ref{ero12}, and also \beg{erou12aa}{ \overline{u}\rho^{-2}=u\sigma^{-2}+b\tau_0+b^{p-1}\widehat{\xi}_1. } From this, we deduce additional multiplicative relations \beg{eroo1}{\widehat{\tau}_n^2=\widehat{\tau}_n\underline{\tau}_n=\widehat{\tau}_n\widehat{\xi}_n= \widehat{\xi}_n^2= 0 } \beg{eroo3}{ \widehat{\xi}_n\underline{\tau}_n=\underline{\xi}_n\widehat{\tau}_n. } Again, these relations are true on the nose, and not just in Borel cohomology, by Proposition \ref{pzpzp} and the compatibility of the product and the coproduct. Now using the fact that $H\underline{\mathbb{Z}/p}\wedge H\underline{\mathbb{Z}/p}$ is a wedge of $R$-suspensions of $H\underline{\mathbb{Z}/p}$ and $HT$, monomials in $A_\star$ which are $b$-divisible in $A_\star^{cc}$ must also be $b$-divisible in $A_\star$. Thus, we also get elements of the form \beg{edivdiv}{\frac{\widehat{\xi}_m\widehat{\xi}_n}{b},\; \frac{\widehat{\xi}_m\underline{\xi}_n-\widehat{\xi}_n\underline{\xi}_m}{b},\; \frac{\widehat{\xi}_m\widehat{\tau}_n}{b}= \frac{\widehat{\tau}_m\widehat{\xi}_n}{b}, \frac{\widehat{\xi}_m\underline{\tau}_n-\underline{\xi}_m\widehat{\tau}_n}{b}, } (note that the last two are related by switching $m$ and $n$), and elements obtained by iterating this procedure. In fact, this is related to the Example in Section \ref{prelim}. In the language of \cite{sw}, the ``quadruplet'' $\widehat{\xi}_n,\underline{\xi}_n,\widehat{\tau}_n, \underline{\tau}_n$ generate an $HT_\star$ summand of $A_\star$.
Thus, the elements \ref{edivdiv} are precisely those divisions by $b$ which are allowed in $HT\wedge_{H\underline{\mathbb{Z}/p}}HT_\star$ according to the Example. We obtain the following
\begin{proposition}\label{ppdiv} Put $d\widehat{\xi}_i=\underline{\xi}_i$, $d\widehat{\tau}_i=\underline{\tau}_i$. For given $i_1<\dots<i_k$, let $K(i_1,\dots,i_k)$ denote the sub-$\mathbb{Z}/p$-module of $A_\star$ spanned by monomials of the form \beg{ekkkk}{\kappa_{i_1}\dots\kappa_{i_k}\underline{\xi}_{i_1}^{s_1}\dots \underline{\xi}_{i_k}^{s_k}} where $0\leq s_\ell\leq p-2$ and $\kappa_\ell$ can stand for any of the elements $\widehat{\xi}_\ell$, $\underline{\xi}_\ell$, $\widehat{\tau}_\ell$, $\underline{\tau}_\ell$. Let $K_j(i_1,\dots,i_k)$ denote the submodule of $K(i_1,\dots,i_k)$ of elements divisible by $b^j$. Then $K_j(i_1,\dots,i_k)$ is spanned by elements $$y,dy$$ where $y$ is of the form \ref{ekkkk} such that at least $j+1$ of the elements $\kappa_{i_s}$ are of the form $\widehat{\xi}_{i_s}$ or $\widehat{\tau}_{i_s}$, with at most one of them being $\widehat{\tau}_{i_s}$. Such $y$ will be called {\em admissible} (otherwise, $y=0$). \end{proposition}
$\square$\medskip
Using our computation of $H\underline{\mathbb{Z}/p}^\star B_{\mathbb{Z}/p}(\mathbb{Z}/p)$ in the last section, we similarly conclude that \beg{emun1}{ \underline{\mu}_n=\frac{\widehat{\xi}_n\underline{\xi}_n^{p-1}-\underline{\theta}_n\rho^{-2}\overline{u}}{ b}. } By \cite{sw}, this element generates an $H\underline{\mathbb{Z}/p}_\star$-summand.
\section{The $\mathbb{Z}/p$-equivariant Steenrod algebra}\label{ssteen}
To express the additive structure of $A_\star$, we introduce a ``Cartan-Serre basis'' of elements of the form \beg{eueu0}{ \tau_0^e\underline{\Theta}_S\underline{M}_Q, } and \beg{eueu}{ \underline{\Xi}_R\underline{\Theta}_S\underline{\mathrm{T}}_E\tau_0^e } where \beg{eueu1}{\begin{array}{ll} R=(r_1,r_2,\dots), & r_n\in \mathbb{Z}, 0\leq r_n<p,\\ Q=(q_1,q_2,\dots), & q_n\in \{0,1\},\\ S=(s_1,s_2,\dots), & s_n\in \mathbb{Z}, 0\leq s_n,\\ E=(e_1,e_2,\dots), & e_n\in \{0,1\},\\ e\in \{0,1\}& \end{array} } As usual, only finitely many non-zero entries are allowed in each sequence. Here we understand $$\underline{\Theta}_S=\underline{\theta}_1^{s_1}\underline{\theta}_2^{s_2}\dots,$$ $$\underline{M}_Q=\underline{\mu}_1^{q_1}\underline{\mu}_2^{q_2}\dots,$$ $$\underline{\Xi}_R=\underline{\xi}_1^{r_1}\underline{\xi}_2^{r_2}\dots,$$ $$\underline{\mathrm{T}}_E=\underline{\tau}_1^{e_1}\underline{\tau}_2^{e_2}\dots\;.$$ The elements $\underline{\Theta}_n$ have the dimensions introduced above. Additionally, we recall $$|\underline{\Xi}_n|=2p^{n-1}+(p^n-p^{n-1}-1)\beta,$$ $$|\underline{\mathrm{T}}_n|=|\underline{\Xi}_n|+1.$$ Additionally, for any sequence of natural numbers $P$ with finitely many non-zero elements, we denote by $|P|$ the number of non-zero elements in $P$. We will assume $|R+E|\neq 0$ in \ref{eueu}.
\begin{theorem}\label{tfinal} The dual $\mathbb{Z}/p$-equivariant Steenrod algebra $A_\star$ is, additively, a sum of copies of $$H\underline{\mathbb{Z}/p}_\star$$ shifted by the total dimension of elements of the form \ref{eueu0} with $R=E=0$, and copies of $$HM_\star$$ with admissible generators of the form $y, dy$ as in Proposition \ref{ppdiv} times $b^{-\ell}$, where $\ell$ is the maximum number such that $y$ is divisible by $b^\ell$ according to Proposition \ref{ppdiv}. The dimensions of the admissible monomials in $\underline{\xi}_m$, $\underline{\tau}_n$ are given by the values $|\underline{\Xi}_R|$, $|\underline{\mathrm{T}}_E|$ given above. The good tail $A_\star^g$ is multiplicatively generated, as an algebra over $$\mathbb{Z}/p[\sigma^{-2},b]\otimes \Lambda[\sigma^{-2}u]$$ by the elements $\underline{\xi}_n$, $\widehat{\xi}_n$, $\underline{\tau}_n$, $\widehat{\tau}_n$, $\underline{\theta}_n, \underline{\mu}_n$, $n\geq 1$, $\tau_0$, subject to all relations valid in $A^{cc}_\star$, including \ref{erou104}-\ref{eroo3}. The derived tail is given by $$A_\star^d=H^2_{(b,\sigma^{-2})}(H\underline{\mathbb{Z}/p}_\star^g, A_\star^g)[-1]$$ and $A_\star$ is an abelian ring with respect to $A_\star^g$. \end{theorem}
\begin{proof} We already checked this on Borel (co)homology. On geometric fixed points (which are determined by the ``ideal'' part), the elements $\underline{\xi}_R$ match non-negative powers of $\rho^{-2}$, and $\tau_0$ matches $\overline{u}\rho^{-2}$. The elements $\widehat{\xi}_n, \widehat{\tau}_n,\underline{\tau}_n$ and their $\mathbb{Z}/p[\sigma^{-2}]\otimes\Lambda[u\sigma^{-2}]$-multiples represent the $\mathbb{Z}/p[\sigma^{-2}]\otimes\Lambda[u\sigma^{-2}]$-multiples of $\tau_{n-1}$. (The relations guarantee that no element is represented twice.) \end{proof} \end{document}
\begin{document} \title{Properties of distance functions on convex surfaces and applications} \author{Jan Rataj} \author{Lud\v ek Zaj\'\i\v cek} \address{Charles University\\ Faculty of Mathematics and Physics\\ Sokolovsk\'a 83\\ 186 75 Praha 8\\ Czech Republic} \email{[email protected]} \email{[email protected]} \subjclass{53C45, 52A20} \keywords{distance function, convex surface, Alexandrov space, DC\ manifold, ambiguous locus, skeleton, $r$-boundary} \thanks{The research was supported by the grant MSM 0021620839 from the Czech Ministry of Education. The second author was also supported by the grants GA\v CR 201/06/0198 and 201/09/0067. }
\begin{abstract} If $X$ is a convex surface in a Euclidean space, then the squared intrinsic distance function $\mathrm{dist}^2(x,y)$ is DC (d.c., delta-convex) on $X\times X$ in the only natural extrinsic sense. An analogous result holds for the squared distance function $\mathrm{dist}^2(x,F)$ from a closed set $F \subset X$. Applications concerning $r$-boundaries (distance spheres) and ambiguous loci (exoskeletons) of closed subsets of a convex surface are given. \end{abstract}
\maketitle \markboth{J. Rataj and L.~Zaj\'{\i}\v{c}ek}{Properties of distance functions}
\section{Introduction}
The geometry of $2$-dimensional convex surfaces in $\mathbb{R}^3$ was thoroughly studied by A.D. Alexandrov \cite{Acon}. Important generalizations for $n$-dimensional convex surfaces in $\mathbb{R}^{n+1}$ are due to A.D. Milka (see, e.g., \cite{Mi}). Many (but not all) results on the geometry of convex surfaces are special cases of results of the theory of Alexandrov spaces with curvature bounded from below. Let $X \subset \mathbb{R}^{n+1}$ be an $n$-dimensional (closed bounded) convex surface and $\emptyset \neq F \subset X$ a closed set. We will prove (Theorem \ref{distset}) that
(A)\ \ {\it the intrinsic distance $d_F(x):= \mathrm{dist}(x,F)$ is locally DC on $X \setminus F$ in the natural extrinsic sense (with respect to natural local charts).}
It is well-known that, in a Euclidean space, $d_F$ is not only locally DC but even locally semiconcave on the complement of $F$. This was generalized to smooth Riemannian manifolds in \cite{MM}. The result (A) can be applied to some problems from the geometry of convex surfaces that are formulated in the language of intrinsic distance functions. The reason for this is that DC functions (i.e., functions which are differences of two convex functions) have many nice properties which are close to those of $C^2$ functions. We present two applications. The first one (Theorem \ref{paralel}) concerns $r$-boundaries (distance spheres) of a closed set $F\subset X$ in the cases $\dim X =2,3$. It implies that, for almost all $r$, the $r$-boundary is a Lipschitz manifold, and so provides an analogue of well-known results proved (in Euclidean spaces) by Ferry \cite{Fe} and Fu \cite{Fu}. The second application (Theorem \ref{zam}) concerns the ambiguous locus (exoskeleton) of a closed subset of an $n$-dimensional ($n \in \mathbb{N}$) convex surface. This result is essentially stronger than the corresponding result of T. Zamfirescu in Alexandrov spaces of curvature bounded from below. It is not clear whether the results of these applications can be obtained as consequences of results in Alexandrov spaces (possibly with some additional properties). In any case, there are serious obstacles for obtaining such generalizations by our methods (see Remark \ref{obst}).
To explain briefly what the ``natural extrinsic sense'' from (A) is, consider for a while an unbounded convex surface $ X \subset \mathbb{R}^{n+1}$ which is the graph of a convex function $f: \mathbb{R}^{n} \to \mathbb{R}$, and denote $x^*:= (x,f(x))$ for $x \in \mathbb{R}^n$. Then (A) also holds (see Remark \ref{unbounded}) and is equivalent to the statement
(B)\ \ {\it the function $h(x):= \mathrm{dist}(x^*,F)$ is locally DC on $\{x \in \mathbb{R}^n:\ x^* \notin F\}$.}
Moreover, it is true that
(C)\ \ {\it $h^2(x):= \mathrm{dist}^2(x^*,F)$ is DC on all of $\mathbb{R}^n$, and}
(D)\ \ {\it the function $g(x,y):= \mathrm{dist}^2(x^*,y^*)$ is DC on $\mathbb{R}^{2n}= \mathbb{R}^n \times \mathbb{R}^n$.}
For a natural formulation of the corresponding results (Theorems \ref{distset} and \ref{main}) for a closed bounded convex surface $X$, we will define in a canonical way the structure of a DC manifold on $X$ and $X \times X$. A weaker version of the result (C) (in the case $n=2$) was known for a long time to the second author, who used a method similar to that of Alexandrov's proof (for two-dimensional convex surfaces) of the Alexandrov-Toponogov theorem, namely an approximation of a general convex surface by polyhedral convex surfaces and considering a developing of those polyhedral convex surfaces ``along geodesics''. However, it is not easy to formalize this geometrically transparent method (even for $n=2$). In the present article we use another method suggested by the first author. Namely, we use well-known semiconcavity properties of distance functions on $X$ and $X\times X$ in an intrinsic sense (i.e., in the sense of the theory of length spaces). Using this method, we got rid of using developings. However, our proof still needs approximation by polyhedral surfaces. Note that, in the case $n=1$, the above statements (A)-(D) have straightforward proofs, and an example (in which $F$ is a singleton) can be easily constructed where the DC function $h^2$ from (C) is neither semiconcave nor semiconvex (one such example is sketched below). The organization of the paper is as follows. In Section~2 (Preliminaries) we recall some facts concerning length spaces, semiconcave functions, DC functions, DC manifolds, and DC surfaces. Further, we prove (by standard methods) two needed technical lemmas on approximation of convex surfaces by polyhedral surfaces. In Section 3 we prove our main results on distance functions on closed bounded convex surfaces. Section 4 is devoted to the applications which we already briefly described above. In the last short Section 5 we present several remarks and questions concerning DC structures on length spaces.
\section{Preliminaries}
In a metric space, $B(c,r)$ denotes the open ball with center $c$ and radius $r$. The symbol $\mathcal{H}^k$ stands for the $k$-dimensional Hausdorff measure. If $a, b \in \mathbb{R}^n$, then $[a,b]$ denotes the segment joining $a$ and $b$. If $F$ is a Lipschitz mapping, then $\mathrm{Lip}\, F$ stands for the least Lipschitz constant of $F$. If $W$ is a unitary space and $V$ is a subspace of $W$, then we denote by $V_W^{\perp}$ the orthogonal complement of $V$ in $W$. If $f$ is a mapping from a normed space $X$ to a normed space $Y$, then the symbol $df(a)$ stands for the (Fr\' echet) differential of $f$ at $a\in X$. If $df(a)$ exists and $$ \lim_{x,y \to a, x\neq y} \ \frac{f(y)-f(x)- df(a)(y-x)}{\|y-x\|} =0,$$ then we say that $f$ is {\em strictly differentiable} at $a$ (cf. \cite[p.~19]{Mor}).
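To illustrate the remark above on the case $n=1$ (the following example is our addition and is not taken from the original text), one can take the convex function $f(x)=\max(0,x,2x-2)$ on $\mathbb{R}$ and $F=\{(1,1)\}=\{1^*\}$. The intrinsic distance on the graph is measured by arc length, so $h^2(x)=(s(x)-s(1))^2$, where
$$s(x)=\begin{cases} x, & x\leq 0,\\ \sqrt2\, x, & 0\leq x\leq 2,\\ 2\sqrt2+\sqrt5\,(x-2), & x\geq 2.\end{cases}$$
A direct computation gives one-sided derivatives $-2\sqrt2$ and $-4$ of $h^2$ at $x=0$ (a concave corner) and $4$ and $2\sqrt{10}$ at $x=2$ (a convex corner); hence $h^2$ is DC on $\mathbb{R}$ but is neither semiconcave nor semiconvex there.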
For the sake of brevity, we introduce the following notation (we use the symbol $\Delta^{2} $, though $\Delta^2f(x,y)$ is one half of a second difference).
\begin{definition}\label{drudif} If $f$ is a real function defined on a subset $U$ of a vector space and $x,y, \frac{x+y}2 \in U$, we denote \begin{equation}\label{drd} \Delta^2f(x,y):=\frac{f(x)+f(y)}2 - f\left(\frac{x+y}2\right). \end{equation} \end{definition}
Note that, if $f(y) = \|y\|^2,\ y \in \mathbb{R}^n$, then \begin{equation}\label{drddrm} \Delta^2f(x+h,x-h)= \frac{\|x+h\|^2 + \|x-h\|^2}{2} - \|x\|^2 = \|h\|^2. \end{equation} We shall need the following easy lemma. Its first part is an obvious consequence of \cite[Lemma 1.16]{VeZa} (which works with convex functions). The second part clearly follows from the first one.
\begin{lemma}\label{VZ} \begin{enumerate} \item Let $f: (a,b) \to \mathbb{R}$ be a continuous function. Suppose that for every $t \in (a,b)$ and $\delta > 0$ there exists $0<d < \delta$ such that $\Delta^2f(t+d,t-d)\leq 0$. Then $f$ is concave on $(a,b)$. \item Let $f$ be a continuous function on an open convex subset $C \subset \mathbb{R}^n$. Suppose that for every $x\in C$ there exists $\delta >0$ such that $\Delta^2f(x+h,x-h)\leq 0$ whenever $\|h\| < \delta$. Then $f$ is concave on $C$. \end{enumerate} \end{lemma}
\subsection{Length spaces and semiconcave functions}
A metric space $(X,d)$ is called a {\it length (or inner or intrinsic) space} if, for each $x, y \in X$, $d(x,y)$ equals the infimum of lengths of curves joining $x$ and $y$ (see \cite[p. 38]{BBI} or \cite[p. 824]{Pl}). If $X$ is a length space, then a curve $\varphi: [a,b] \to X$ is called {\it minimal} if it is a shortest curve joining its endpoints $x=\varphi(a)$ and $y=\varphi(b)$ parametrized by the arc-length. A length space $X$ is called a {\it geodesic (or strictly intrinsic) space} if each pair of points in $X$ can be joined by a minimal curve. Note that any complete, locally compact length space is geodesic (see \cite[Theorem 8]{Pl}). Alexandrov spaces with curvature bounded from below are defined as length spaces which have a lower curvature bound in the sense of Alexandrov. The precise definition of these spaces can be found in \cite{BBI} or \cite{Pl}. (Frequently, Alexandrov spaces are supposed to be complete and/or finite dimensional.) If $X$ is a length space and $\varphi: [a,b] \to X$ a minimal curve, then the point $s=\varphi((a+b)/2)$ is called {\it the midpoint of the minimal curve $\varphi$}. A point $t$ is called {\it a midpoint of $x,y$} if it is the midpoint of a minimal curve $\varphi$ joining $x$ and $y$. If $\varphi$ as above can be chosen to lie in a set $G\subset X$, we will say that $t$ is {\it a $G$-midpoint of $x,y$}. One of several natural equivalent definitions (see \cite[Definition 1.1.1 and Proposition 1.1.3]{CaSi}) of semiconcavity in $\mathbb{R}^n$ reads as follows.
\begin{definition}\label{semieuk} A function $u$ on an open set $A\subset \mathbb{R}^n$ is called {\it semiconcave} with a {\it semiconcavity constant} $c\geq 0$ if $u$ is continuous on $A$ and \begin{equation}\label{seeu} \Delta^2u(x+h,x-h)\leq (c/2) \|h\|^2, \end{equation} whenever $x, h \in \mathbb{R}^n$ and $[x-h,x+h] \subset A$. \end{definition}
\begin{remark}\label{semieuk2} It is well-known and easy to see (cf.
\cite[Proposition 1.1.3]{CaSi}) that $u$ is semiconcave on $A$ with semiconcavity constant $c$ if and only if the function $g(x) = u(x) - (c/2) \|x\|^2$ is locally concave on $A$. \end{remark}
The notion of semiconcavity extends naturally to length spaces $X$. The authors working in the theory of length spaces use mostly the following terminology (cf. \cite[p.\ 5]{Pet} or \cite[p.\ 862]{Pl}).
\begin{definition}\label{semilen} Let $X$ be a geodesic space. Let $G \subset X$ be open, $c\geq 0$, and $f: G \to \mathbb{R}$ be a locally Lipschitz function. \begin{enumerate} \item We say that $f$ is $c$-{\it concave} if, for each minimal curve $\gamma: [a,b] \to G$, the function $g(t) = f\circ \gamma(t) - (c/2)t^2$ is concave on $[a,b]$. \item We say that $f$ is {\it semiconcave} on $G$ if for each $x \in G$ there exists $c \geq 0$ such that $f$ is $c$-concave on an open neighbourhood of $x$. \end{enumerate} \end{definition}
\begin{remark}\label{semlen} If $X=\mathbb{R}^n$, then $c$-concavity coincides with semiconcavity with constant $c$. \end{remark}
We will need the following simple well-known characterization of $c$-concavity. Because of the lack of a reference, we give the proof.
\begin{lemma}\label{loclen} Let $Y$ be a geodesic space. Let $M \subset Y$ be open, $c\geq 0$, and $f: M\to \mathbb{R}$ be a locally Lipschitz function. Then the following are equivalent. \begin{enumerate} \item $f$ is $c$-concave on $M$. \item If $x,y \in M$, and $s$ is an $M$-midpoint of $x,y$, then \begin{equation}\label{semmid} \frac{f(x)+f(y)}{2}- f(s) \leq (c/2) d^2, \end{equation} where $d:= (1/2)\ \mathrm{dist}(x,y)$. \end{enumerate} \end{lemma}
\begin{proof} Suppose that (i) holds. To prove (ii), let $x, y, s, d$ be as in (ii). Choose a minimal curve $\gamma: [a,b] \to M$ with $\gamma(a) = x, \gamma(b)=y$ and $\gamma((1/2)(a+b)) = s$. By (i), the function $g(t) = f\circ \gamma(t) - (c/2)t^2$ is concave on $[a,b]$. So $\widetilde f := f \circ \gamma$ is semiconcave with semiconcavity constant $c$ on $(a,b)$ by Remark \ref{semieuk2}. Consequently, $ \Delta^2\widetilde f(b-h,a+h) \leq (c/2) |(1/2)(b-a)-h|^2$ for each $0 < h < (1/2)(b-a)$. By continuity of $\widetilde f$ we clearly obtain \eqref{semmid}, since $d= (1/2)(b-a)$. To prove (ii)$\Rightarrow$(i), consider a minimal curve $\gamma: [a,b] \to M$ and suppose that $f$ satisfies (ii). It is easy to see that then $\widetilde f := f \circ \gamma$ is semiconcave with semiconcavity constant $c$ on $(a,b)$. By Remark \ref{semieuk2}, $g(t) = f\circ \gamma(t) - (c/2)t^2$ is concave on $(a,b)$, and therefore (by continuity of $g$), also on $[a,b]$. \end{proof}
\subsection{DC manifolds and DC surfaces}
\begin{definition}\label{dcfce} Let $C$ be a nonempty convex set in a real normed linear space $X$. A function $f\colon C\to\mathbb{R}$ is called {\em DC}\ (or d.c., or delta-convex) if it can be represented as a difference of two continuous convex functions on $C$. If $Y$ is a finite-dimensional normed linear space, then a mapping $F\colon C\to Y$ is called {\em DC} if $y^*\circ F$ is a DC function on $C$ for each linear functional $y^* \in Y^*$. \end{definition}
\begin{remark}\label{podc} \begin{enumerate} \item To prove that $F$ is DC, it is clearly sufficient to show that $y^*\circ F$ is DC for each $y^*$ from a basis of $Y^*$. \item Each DC mapping is clearly locally Lipschitz.
\item There are many works on optimization that deal with DC functions. A theory of DC (delta-convex) mappings in the case when $Y$ is a general normed linear space was built in \cite{VeZa}. \end{enumerate} \end{remark}
Some basic properties of DC functions and mappings are contained in the following lemma.
\begin{lemma}\label{zakldc} Let $X,Y,Z$ be finite-dimensional normed linear spaces, let $C\subset X$ be a nonempty convex set, and $U \subset X$ and $V \subset Y$ open sets. \begin{enumerate} \item[(a)]\ {\rm (\cite{A1})} If the derivative of a function $f$ on $C$ is Lipschitz, then $f$ is DC. In particular, each affine mapping is DC. \item[(b)]\ {\rm (\cite{Ha})} If a mapping $F: C \to Y$ is locally DC on $C$, then it is DC on $C$. \item[(c)]\ {\rm (\cite{Ha})} Let a mapping $F: U \to Y$ be locally DC, $F(U) \subset V$, and let $G:V\to Z$ be locally DC. Then $G \circ F$ is locally DC on $U$. \item[(d)]\ {\rm (\cite{VeZa})} Let $F: U \to V$ be a bilipschitz bijection which is locally DC on $U$. Then $F^{-1}$ is locally DC on $V$. \end{enumerate} \end{lemma}
Since locally DC mappings are stable with respect to compositions \linebreak (Lemma~\ref{zakldc}(c)), the notion of an $n$-dimensional DC manifold can be defined in an obvious way, see \cite[\S\S2.6, 2.7]{Kuwae}. The importance of this notion was shown in Perelman's preprint \cite{Per}, cf.\ Section~\ref{Sec-Rem}.
\begin{definition}\label{DCman} Let $X$ be a paracompact Hausdorff topological space and $n \in \mathbb{N}$. \begin{enumerate} \item We say that $(U, \varphi)$ is an {\it $n$-dimensional chart on} $X$ if $U$ is a nonempty open subset of $X$ and $\varphi: U \to \mathbb{R}^n$ a homeomorphism of $U$ onto an open set $\varphi(U)\subset\mathbb{R}^n$. \item We say that two $n$-dimensional charts $(U_1, \varphi_1)$ and $(U_2, \varphi_2)$ on $X$ are {\it DC-compatible} if $U_1 \cap U_2 = \emptyset$ or $U_1 \cap U_2 \neq \emptyset$ and the {\it transition maps} $\varphi_2 \circ (\varphi_1)^{-1}$ and $\varphi_1 \circ (\varphi_2)^{-1}$ are locally DC (on their domains $\varphi_1(U_1 \cap U_2)$ and $\varphi_2(U_1 \cap U_2)$, respectively). \item We say that a system $\mathcal{A}$ of $n$-dimensional charts on $X$ is an {\it $n$-dimen\-si\-onal DC atlas on} $X$ if the domains of the charts from $\mathcal{A}$ cover $X$ and any two charts from $\mathcal{A}$ are DC-compatible. \end{enumerate} \end{definition}
Obviously, each $n$-dimensional DC atlas $\mathcal{A}$ on $X$ can be extended to a uniquely determined maximal $n$-dimensional DC atlas (which consists of all $n$-dimensional charts on $X$ that are DC-compatible with all charts from $\mathcal{A}$). We will say that $X$ {\it is equipped with an ($n$-dimensional) DC structure} (or with a structure of an $n$-dimensional DC manifold) if a maximal $n$-dimensional DC atlas on $X$ is determined (e.g., by a choice of an $n$-dimensional DC atlas). Let $X$ be equipped with a DC structure and let $f$ be a function defined on an open set $G \subset X$. Then we say that $f$ is {\it DC} if $f\circ \varphi^{-1}$ is locally DC on $\varphi(U \cap G)$ for each chart $(U,\varphi)$ from the maximal DC atlas on $X$ such that $U\cap G\neq\emptyset$. Clearly, it is sufficient to check this condition for each chart from an arbitrary fixed DC atlas.
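As a simple one-dimensional illustration of Definition \ref{dcfce} (this example is ours and does not appear in the original text), take $F=\{0,1\}\subset\mathbb{R}$ and $d_F(x)=\mathrm{dist}(x,F)=\min(|x|,|x-1|)$. Since $\min(u,v)=u+v-\max(u,v)$, we obtain the explicit decomposition
$$d_F(x)=\bigl(|x|+|x-1|\bigr)-\max\bigl(|x|,|x-1|\bigr),$$
a difference of two convex functions, so $d_F$ is DC on all of $\mathbb{R}$; it fails to be semiconcave exactly at the points of $F$ (the convex corners at $0$ and $1$), while $d_F^2=\min(x^2,(x-1)^2)$ is again DC on all of $\mathbb{R}$.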
\begin{remark}\label{hilbert} \begin{enumerate} \item If we consider, in the definition of the chart $(U,\varphi)$, a mapping $\varphi$ from $U$ to an $n$-dimensional unitary space $H_\varphi$, the whole Definition~\ref{DCman} does not change sense. (Indeed, we can identify $H_\varphi$ with $\mathbb{R}^n$ by an isometry because of Lemma~\ref{zakldc}~(a), (c).) In the following, it will be convenient for us to use such (formally more general) charts with range in an $n$-dimensional linear subspace of a Euclidean space. \item If $X, Y$ are nonempty spaces equipped with $m$- and $n$-dimensional DC structures, respectively, then the Cartesian product $X\times Y$ is canonically equipped with an $(m+n)$-dimensional DC structure. Indeed, let $\mathcal{A}_X,\mathcal{A}_Y$ be $m$- and $n$-dimensional DC atlases on $X,Y$, respectively. Then, $$\mathcal{A}=\{ (U_X\times U_Y,\varphi_X\otimes\varphi_Y):\, (U_X,\varphi_X)\in\mathcal{A}_X, (U_Y,\varphi_Y)\in\mathcal{A}_Y\}$$ is an $(m+n)$-dimensional DC atlas on $X\times Y$, if we define $(\varphi_X\otimes\varphi_Y)(x,y)=(\varphi_X(x),\varphi_Y(y))$. \item If $X, Y$ are equipped with $m$- and $n$-dimensional DC structures, respectively, and $f:X\times Y\to\mathbb{R}$ is DC, then the section $x\mapsto f(x,y)$ is DC on $X$ for any $y\in Y$, and the section $y\mapsto f(x,y)$ is DC on $Y$ for any $x\in X$. \end{enumerate} \end{remark}
\begin{definition}\label{surfldc} Let $H$ be an $(n+k)$-dimensional unitary space ($n,k \in \mathbb{N}$). We say that a set $M \subset H$ is a {\it $k$-dimensional Lipschitz} (resp.\ {\it DC}) {\it surface} if it is nonempty and for each $x \in M$ there exist a $k$-dimensional linear space $Q \subset H$, an open neighbourhood $W$ of $x$, a set $G \subset Q$ open in $Q$ and a Lipschitz (resp.\ locally DC) mapping $h: G \to Q^{\perp}$ such that $$ M \cap W = \{u + h(u): u \in G\}.$$ \end{definition}
\begin{remark}\label{surf} \begin{enumerate} \item Lipschitz surfaces were considered e.g.\ by Whitehead \cite[p. 165]{Wh} or Walter \cite{Walter}, who called them strong Lipschitz submanifolds. Obviously, each DC surface is a Lipschitz surface. For some properties of DC surfaces see \cite{Zajploch}. \item If we suppose, in the above definition of a DC surface, that $G$ is convex and $h$ is DC and Lipschitz, we obtain clearly the same notion. \item Each Lipschitz (resp.\ DC) surface admits a natural structure of a Lipschitz (resp.\ DC) manifold that is given by the charts of the form $( W \cap M, \psi^{-1})$, where $\psi(u)= u + h(u),\ u \in G$ (cf. Remark \ref{hilbert}(i)). \end{enumerate} \end{remark}
\begin{lemma}\label{aznapl} Let $H$ be an $n$-dimensional unitary space, $V \subset H$ an open convex set, and $f:V \to \mathbb{R}^m$ a DC mapping. Then there exists a sequence $(T_i)$ of $(n-1)$-dimensional DC surfaces in $H$ such that $f$ is strictly differentiable at each point of $V \setminus \bigcup_{i=1}^{\infty} T_i$. \end{lemma}
\begin{proof} Let $f=(f_1,\dots,f_m)$. By the definition of a DC mapping, $f_j = \alpha_j - \beta_j$, where $\alpha_j$ and $\beta_j$ are convex functions. By \cite{Zaj}, for each $j$ we can find a sequence $T^j_k$, $k \in \mathbb{N}$, of $(n-1)$-dimensional DC surfaces in $H$ such that both $\alpha_j$ and $\beta_j$ are differentiable at each point of $D_j:= H \setminus \bigcup_{k=1}^{\infty} T^j_k$.
Since each convex function is strictly differentiable at each point at which it is (Fr\' echet) differentiable (see, e.g., \cite[Proposition 3.8]{VeZa} for a proof of this well-known fact), we conclude that each $f_j$ is strictly differentiable at each point of $D_j$. Since strict differentiability of $f$ clearly follows from strict differentiability of all $f_j$'s, the proof is finished after ordering all sets $T^j_k$, $k \in \mathbb{N}$, $j=1,\dots,m$, into a sequence $(T_i)$. \end{proof}
\subsection{Convex surfaces}
\begin{definition} \label{convex_surface} A {\it convex body} in $\mathbb{R}^n$ is a compact convex subset with non\-empty interior. Under a {\it convex surface} in $\mathbb{R}^n$ we understand the boundary $X=\partial C$ of a convex body $C$. A convex surface $X$ is said to be {\it polyhedral} if it can be covered by finitely many hyperplanes. \end{definition}
It is well-known that a convex surface in $\mathbb{R}^n$ with its intrinsic metric is a complete geodesic space with nonnegative curvature (see \cite{Buyalo} or \cite[\S10.2]{BBI}). Obviously, each convex surface $X$ is a DC surface (cf. Remark \ref{Rcomp}(iii)), and so has a canonical DC structure. In the following, we will work mainly with ``standard'' DC charts on $X$ (which are considered in the generalized sense of Remark \ref{hilbert}(i)).
\begin{definition}\label{standard} Let $X \subset \mathbb{R}^{n+1}$ be a convex surface and $U$ a nonempty, relatively open subset of $X$. We say that $(U, \varphi)$ is a {\it standard $n$-dimensional chart} on $X$ if there exist a unit vector $e\in\mathbb{R}^{n+1}$, a convex, relatively open subset $V$ of the hyperplane $e^\perp$, and a Lipschitz convex function $f: V \to \mathbb{R}$ such that, setting $F (x) := x + f(x) e,\ x \in V$, we have $U = F(V)$ and $\varphi = F^{-1}$. In this case we will say that $(U,\varphi)$ is an {\it $(e,V)$-standard chart} on $X$ and $f$ will be called {\it the convex function associated with the standard chart}. \end{definition}
\begin{remark}\label{Rcomp} \begin{enumerate} \item Clearly, if $(U,\varphi)$ is an $(e,V)$-standard chart on $X$ and $\pi$ denotes the orthogonal projection onto $e^\perp$, then $\varphi = \pi\restriction_U$. \item Let $(U_1, \varphi_1)$ and $(U_2, \varphi_2)$ be standard charts as in the above definition. Then these charts are DC-compatible. Indeed, $\varphi_1^{-1}$ is a DC mapping from $V_1$ to $\mathbb{R}^{n+1}$ and $\varphi_2$ is a restriction of a linear mapping $\pi$ (see (i)). So $\varphi_2 \circ (\varphi_1)^{-1}=\pi\circ (\varphi_1)^{-1}$ is locally DC by Lemma \ref{zakldc}(a),(c). \item Let $X \subset \mathbb{R}^{n+1}$ be a convex surface, $z \in X$, and let $C$ be the convex body for which $X = \partial C$. Choose $a \in {\rm int}\, C$, set $e:= \frac{a-z}{\|a-z\|}$ and $V:= \pi (B(a,\delta))$, where $\delta>0$ is sufficiently small and $\pi$ is the orthogonal projection of $\mathbb{R}^{n+1}$ onto $e^{\perp}$. Then it is easy to see that there exists an $(e,V)$-standard chart $(U,\varphi)$ on $X$ with $z \in U$. \end{enumerate} \end{remark}
By (ii) and (iii) above, the following definition is correct.
\begin{definition}\label{ststr} Let $X \subset \mathbb{R}^{n+1}$ be a convex surface. Then the {\it standard DC structure} on $X$ is determined by the atlas of all standard $n$-dimensional charts on $X$.
\end{definition}
\begin{lemma}\label{obrdc} Let $X \subset \mathbb{R}^{n+1}$ ($n\geq 2$) be a convex surface and let $(U,\varphi)$ be an $(e,V)$-standard chart on $X$. Let $T \subset e^{\perp}$ be an $(n-1)$-dimensional DC surface in $e^{\perp}$ with $T\cap V\neq\emptyset$. Then $\varphi^{-1}(T \cap V)$ is an $(n-1)$-dimensional DC surface in $\mathbb{R}^{n+1}$. \end{lemma}
\begin{proof} Let $f$ be the convex function associated with $(U,\varphi)$. Let $z$ be an arbitrary point of $\varphi^{-1}(T \cap V)$. Denote $x:= \varphi(z)$. By Definition \ref{surfldc} there exist an $(n-1)$-dimensional linear space $Q \subset e^{\perp}$, a set $G \subset Q$ open in $Q$, an open neighbourhood $W$ of $x$ in $e^{\perp}$ and a locally DC mapping $h: G \to Q_{e^{\perp}}^{\perp}$ such that $T \cap W = \{u + h(u):\ u \in G\}$. We can and will suppose that $W \subset V$. Observing that $z \in \varphi^{-1}(T \cap W)$ and $\varphi^{-1}(T \cap W)$ is an open set in $\varphi^{-1}(T \cap V)$, $$\varphi^{-1}(T \cap W)= \{u + h(u)+ f(u+h(u)) e:\ u \in G\} $$ and $u \mapsto h(u)+ f(u+h(u)) e$ is a locally DC mapping $G \to Q^{\perp}_{\mathbb{R}^{n+1}}$, we finish the proof. \end{proof}
\begin{lemma}\label{aprox} \begin{enumerate} \item[{\rm (i)}] Let $X$ be a convex surface in $\mathbb{R}^m$. Then there exists a sequence $(X_k)$ of polyhedral convex surfaces in $\mathbb{R}^m$ converging to $X$ in the Hausdorff distance. \item[{\rm (ii)}] Let convex surfaces $X_k$ converge in the Hausdorff distance to a convex surface $X$ in $\mathbb{R}^m$ and let $\mathrm{dist}_X$, $\mathrm{dist}_{X_k}$ denote the intrinsic distances on $X$, $X_k$, respectively. Assume that $a, b \in X$, $a_k, b_k \in X_k$, $a_k \to a$ and $b_k \to b$. Then $\mathrm{dist}_{X_k}(a_k,b_k) \to \mathrm{dist}_X(a,b)$. \item[{\rm (iii)}] If $X_k,X$ are as in (ii), then ${\rm diam}\, X_k\to{\rm diam}\, X$, where ${\rm diam}\, X_k,{\rm diam}\, X$ is the intrinsic diameter of $X_k,X$, respectively. \end{enumerate} \end{lemma}
\begin{proof} (i) is well-known, see e.g.\ \cite[\S1.8.15]{Schneider}. (ii) can be proved as in \cite[Lemma~10.2.7]{BBI}, where a slightly different assertion is shown. We present the proof here for completeness. Let $C,C_k$ be convex bodies in $\mathbb{R}^m$ such that $X=\partial C$, $X_k=\partial C_k$, $k\in\mathbb{N}$, and assume, without loss of generality, that the origin lies in the interior of $C$. It is easy to show that, since the Hausdorff distance of $X$ and $X_k$ tends to zero, there exist $k_0 \in \mathbb{N}$ and a sequence $\varepsilon_k \searrow 0$ such that $$(1-\varepsilon_k)C\subset C_k\subset (1+\varepsilon_k)C,\quad k \geq k_0.$$ For a convex body $D$ in $\mathbb{R}^m$ and the corresponding convex surface $Y=\partial D$, we shall denote by $\Pi_Y$ the metric projection of $\mathbb{R}^m$ onto $Y$, defined outside of the interior of $D$. The symbol $\mathrm{dist}_Y$ denotes the intrinsic distance on the convex surface $Y$. Let $a,b,a_k,b_k$ from the assumption be given, and (for $k \geq k_0$) denote $\widetilde{a}_k=\Pi_{X_k}((1+\varepsilon_k)a)$, $\widetilde{b}_k=\Pi_{X_k}((1+\varepsilon_k)b)$. Since $\Pi_{X_k}$ is a contraction (see e.g.
\cite[Theorem~1.2.2]{Schneider}), we have \begin{eqnarray*} \mathrm{dist}_{X_k}(\widetilde{a}_k,\widetilde{b}_k)&\leq& \mathrm{dist}_{(1+\varepsilon_k)X}((1+\varepsilon_k)a,(1+\varepsilon_k)b)\\ &=&(1+\varepsilon_k)\mathrm{dist}_X(a,b). \end{eqnarray*} Further, clearly $\widetilde{a}_k\to a$ and $\widetilde{b}_k\to b$, which implies that $\mathrm{dist}_{X_k}(\widetilde{a}_k,a_k)\to 0$ and $\mathrm{dist}_{X_k}(\widetilde{b}_k,b_k)\to 0$. Consequently, $$\limsup_{k\to\infty}\mathrm{dist}_{X_k}(a_k,b_k)\leq\mathrm{dist}_X(a,b).$$ The inequality \ $\liminf_{k\to\infty}\mathrm{dist}_{X_k}(a_k,b_k)\geq\mathrm{dist}_X(a,b)$ \ is obtained in a similar way, considering the metric projections of $a_k$ and $b_k$ onto $(1-\varepsilon_k)X$. (iii) is a straightforward consequence of (ii) and the compactness of $X$. \end{proof}
\begin{lemma}\label{aprox2} Let $X \subset \mathbb{R}^{n+1}$ be a convex surface, $(U,\varphi)$ an $(e,V)$-standard chart on $X$, and let $f$ be the associated convex function. Let $(X_k)$ be a sequence of convex surfaces which tends in the Hausdorff metric to $X$, and $W \subset V$ be an open convex set such that $\overline{W} \subset V$. Then there exists $k_0 \in \mathbb{N}$ such that, for each $k \geq k_0$, the surface $X_k$ has an $(e,W)$-standard chart $(U_k,\varphi_k)$, and the associated convex functions $f_k$ satisfy \begin{equation}\label{vlfk} f_k(x) \to f(x),\ x \in W\ \ \ \ \text{and}\ \ \ \ \limsup_{k \to \infty}\, \operatorname{Lip} f_k \leq \operatorname{Lip} f. \end{equation} \end{lemma}
\begin{proof} Denote by $C$ ($C_k$) the convex body for which $X = \partial C$ ($X_k = \partial C_k$, respectively). Clearly, the convex function $f$ has the form $$f(v)=\inf\{ t\in\mathbb{R}:\, v+te\in C\},\quad v\in V.$$ Let $\pi$ be the orthogonal projection onto $e^\perp$ and denote $$W_r:=\{v \in e^{\perp}:\ \mathrm{dist}(v, W) < r\},\quad r>0.$$ Let $\varepsilon,\delta>0$ be such that $W_{\varepsilon+\delta}\subset V$, and let $k_0=k_0(\delta)\in\mathbb{N}$ be such that the Hausdorff distance of $X$ and $X_k$ (and, hence, also of $C$ and $C_k$) is less than $\delta$ for all $k>k_0$. Fix a $k>k_0$. It is easy to show that $$f_k^*(v)= \inf\{ t\in\mathbb{R}:\, v+te\in C_k\},\quad v\in W_\varepsilon$$ is a finite convex function. We shall show that \begin{equation} \label{bbb} |f_k^*(v)-f(v)|\leq (1+\mathrm{Lip}\, f)\delta,\quad v\in W_\varepsilon. \end{equation} Take a point $v\in W_\varepsilon$ and denote $x=v+f(v)e\in X$ and $y=v+f_k^*(v)e\in X_k$. From the definition of the Hausdorff distance, there must be a point $c\in C$ with $\|c-y\|<\delta$. This implies that for $w:=\pi(c)$ we have $f(w)\leq c\cdot e$ and $$f_k^*(v)=y\cdot e\geq c\cdot e-\delta \geq f(w)-\delta\geq f(v)-\delta \operatorname{Lip} f-\delta.$$ For the other inequality, note that, since $f_k^*$ is convex, there exists a unit vector $u\in\mathbb{R}^{n+1}$ with $u\cdot e=:-\eta<0$ such that $(z-y)\cdot u\leq 0$ for all $z\in C_k$ (i.e., $u$ is a unit outer normal vector to $C_k$ at $y$). It is easy to see that $(z-y)\cdot u\leq\delta$ for all $z\in C$, since the Hausdorff distance of $C$ and $C_k$ is less than $\delta$. Consider the point $z=w+f(w)e\in C$ with $w=v+\delta u^*$, where $u^*=\pi(u)/\|\pi(u)\|$ if $\pi(u)\neq 0$ and $u^*$ is any unit vector in $e^\perp$ if $\pi(u)=0$.
Then \begin{eqnarray*} \delta&\geq&(z-y)\cdot u=(w+f(w)e-v-f_k^*(v)e)\cdot u\\ &=&(w-v)\cdot u+(f(w)-f_k^*(v))(e\cdot u)\\ &=&\delta\sqrt{1-\eta^2}+(f(w)-f_k^*(v))(-\eta)\\ &\geq&\delta(1-\eta)+(f_k^*(v)-f(w))\eta, \end{eqnarray*} which implies that $$f_k^*(v)\leq f(w)+\delta\leq f(v)+\delta \operatorname{Lip} f+\delta$$ by the Lipschitz property of $f$, and \eqref{bbb} is verified. We shall show now that for $k>k_0$, $X_k$ has an $(e,W)$-standard chart with associated convex function $f_k:=f_k^*\restriction W$ (i.e., that $f_k$ is Lipschitz) and that \eqref{vlfk} holds. Given two different points $u,v\in W$, we define points $u^*, v^* \in W_\varepsilon$ as follows: we set $u^*=u-\varepsilon\frac{v-u}{\| v-u\|}$, $v^*=v$ if $f_k(u)\geq f_k(v)$, and $u^*=u$, $v^*=v+\varepsilon\frac{v-u}{\| v-u\|}$ if $f_k(u)\leq f_k(v)$. Then, using \eqref{bbb} and convexity of $f_k^*$, we obtain $$\frac{|f_k(u) - f_k(v)|}{\|u-v\|} \leq \frac{|f_k^*(u^*) - f_k^*(v^*)|}{\|u^*-v^*\|}\leq \operatorname{Lip} f + \frac{(2+2\operatorname{Lip} f)\delta}{\varepsilon}$$ whenever $k>k_0(\delta)$. Therefore, $\operatorname{Lip} f_k \leq \operatorname{Lip} f + \frac{(2+2\operatorname{Lip} f)\delta}{\varepsilon}$. Using this inequality, \eqref{bbb}, and the fact that $\delta>0$ can be arbitrarily small, we obtain \eqref{vlfk}. \end{proof}
\section{Extrinsic properties of distance functions on convex surfaces}
We will prove our results via the following result concerning intrinsic properties of distance functions on convex surfaces, which is an easy consequence of well-known results.
\begin{proposition} \label{P1} Let $X$ be a complete geodesic (Alexandrov) space with nonnegative curvature. Then the Cartesian product $X^2$ with the product metric $$\mathrm{dist}_{X\times X} ((x_1,x_2),(y_1,y_2))=\sqrt{\mathrm{dist}^2(x_1,y_1)+\mathrm{dist}^2(x_2,y_2)}$$ is a complete geodesic space with nonnegative curvature as well, and the squared distance $g(x_1,x_2):=\mathrm{dist}^2(x_1,x_2)$ is $4$-concave on $X^2$. \end{proposition}
\begin{proof} The assertion on the properties of $X^2$ is well-known, see e.g. \cite[\S3.6.1, \S10.2.1]{BBI}. In order to show the $4$-concavity of $g$, we shall use the fact that \begin{equation} \label{diag} g(x_1,x_2)=2\,\mathrm{dist}_{X\times X}^2((x_1,x_2),D),\quad x_1,x_2\in X, \end{equation} where $D$ is the diagonal in $X\times X$. To see that \eqref{diag} holds, note that \begin{eqnarray*} \mathrm{dist}_{X\times X}^2((x_1,x_2),D) &=&\inf_{y\in X}\mathrm{dist}_{X\times X}^2((x_1,x_2),(y,y))\\ &=&\inf_{y\in X}(\mathrm{dist}^2(x_1,y)+\mathrm{dist}^2(x_2,y)). \end{eqnarray*} Choosing a midpoint of $x_1$ and $x_2$ for $y$ in the last expression, we see that $\mathrm{dist}_{X\times X}^2((x_1,x_2),D)\leq \frac 12\mathrm{dist}^2(x_1,x_2)$. On the other hand, if $y$ is an arbitrary point of $X$, we get by the triangle inequality $$\mathrm{dist}^2(x_1,x_2)\leq 2(\mathrm{dist}^2(x_1,y)+\mathrm{dist}^2(x_2,y))= 2\mathrm{dist}_{X\times X}^2((x_1,x_2),(y,y)),$$ and thus we get the other inequality proving \eqref{diag}.
To finish the proof, we use the following fact: {\em If $Y$ is a length space of nonnegative curvature and $\emptyset\neq F\subset Y$ a closed subset, then the squared distance function $d_F^2(\cdot)=\mathrm{dist}_Y^2(\cdot,F)$ is $2$-concave on $Y$.} This is well-known if $F$ is a singleton (see e.g.\ \cite[Proposition~116]{Pl}) and follows easily for a general nonempty closed set $F$ by the facts that $d^2_F(y)=\inf_{x\in F}d^2_{\{ x\}}(y)$ and that the infimum of concave functions is concave. If we apply this for $Y=X\times X$ and $F=D$, \eqref{diag} completes the proof. \end{proof}
\begin{lemma} \label{L-f} Let $X$ be a polyhedral convex surface in $\mathbb{R}^{n+1}$, $T \in X$, and $(U, \varphi)$ be an $(e,V)$-standard chart on $X$ such that $T \in U$. Let $f$ be the associated convex function and $t:= \varphi(T)$. Then there exists a $\delta>0$ such that for all $x,y\in V$ with $t=(x+y)/2$ and $\|x-t\|=\|y-t\|<\delta$ we have $$\mathrm{dist}(S,T)\leq 2\Delta^2f(x,y),$$ whenever $S$ is a midpoint of $\varphi^{-1}(x),\varphi^{-1}(y)$. \end{lemma}
\begin{proof} Denoting $F:= \varphi^{-1}$, we have $F(u)= u + f(u)e$. Let $L$ be the Lipschitz constant of $f$. It is easy to see that we can choose $\delta_0>0$ such that for any $x\in V$ with $\|x-t\|<\delta_0$, the function $f$ is affine on the segment $[x,t]$. Then we take $\delta\leq\delta_0/L$, such that for any two points $x,y\in B(t,\delta)$, any minimal curve connecting $F(x)$ and $F(y)$ (and, hence, also any midpoint of $F(x),F(y)$) lies in $U$. Let two points $x,y\in B(t,\delta)$ with $t = \frac{x+y}{2}$ be given and denote $\Delta=\Delta^2f(x,y)$. Let $S$ be a midpoint of $F(x),F(y)$ (lying necessarily in $U$) and set $s=\varphi(S)$. Note that $\Delta\leq L\delta$. From the parallelogram law, we obtain $$2\|F(x)-T\|^2+2\|F(y)-T\|^2=\|F(y)-F(x)\|^2+4\Delta^2,$$ since \begin{equation} \label{Eq-L-1} \Delta=\left\|\frac{F(x)+F(y)}2-T\right\|. \end{equation} Taking the square root, and using the inequality $a+b\leq\sqrt{2a^2+2b^2}$, we obtain $$\|F(x)-T\|+\|F(y)-T\|\leq\sqrt{\|F(y)-F(x)\|^2+4\Delta^2}.$$ It is clear that the geodesic distance of $F(x)$ and $F(y)$ is at most $\|F(x)-T\|+\|F(y)-T\|$ (which is the length of a curve in $X$ connecting $F(x)$ and $F(y)$). Thus, $$\|S-F(x)\|\leq\mathrm{dist}(S,F(x))=\frac 12\mathrm{dist} (F(x),F(y))\leq \sqrt{\left(\frac{\|F(y)-F(x)\|}2\right)^2+\Delta^2}$$ and the same upper bound applies to $\|S-F(y)\|$. Summing the squares of both distances, we obtain $$\|S-F(x)\|^2+\|S-F(y)\|^2\leq\frac 12 \|F(y)-F(x)\|^2+2\Delta^2$$ and, since the left hand side equals, again by the parallelogram law, $$\frac 12 \left(\|F(y)-F(x)\|^2+\|2S-(F(x)+F(y))\|^2\right),$$ we arrive at \begin{equation} \label{Eq-L-2} \left\|S-\frac{F(x)+F(y)}2\right\|\leq\Delta. \end{equation} Considering the orthogonal projections of $S$ and $\frac{F(x)+F(y)}2$ onto $e^{\perp}$, we obtain $$\|s-t\|\leq\Delta\leq L\delta\leq\delta_0$$ and, hence, we have $$\mathrm{dist} (S,T)=\|S-T\|,$$ since $f$ is affine on $[s,t]$. On the other hand, equations \eqref{Eq-L-1} and \eqref{Eq-L-2} imply $\|S-T\|\leq 2\Delta$, which completes the proof. \end{proof}
\begin{proposition}\label{hlavni} Let $X \subset \mathbb{R}^{n+1}$ be a convex surface and let $(U_i,\varphi_i)$ be $(e_i,V_i)$-standard charts, $i=1,2$. Let $f_1$, $f_2$ be the corresponding convex functions.
Set $$ g(x_1,x_2)=\mathrm{dist}^2(\varphi_1^{-1}(x_1), \varphi_2^{-1}(x_2)),\ \ \ \ x_1 \in V_1,\ x_2 \in V_2,$$ where $\mathrm{dist}$ is the intrinsic distance on $X$. Then the function $g-c-d$ is concave on $V_1\times V_2$, where \begin{eqnarray*} c(x_1,x_2)&=&4(1+L^2)(\|x_1\|^2+\|x_2\|^2),\\ d(x_1,x_2)&=&4M(f_1(x_1)+f_2(x_2)), \end{eqnarray*} $L=\max\{\mathrm{Lip}\, f_1,\mathrm{Lip}\, f_2\}$ and $M$ is the intrinsic diameter of $X$. \end{proposition} \begin{proof} Assume first that the convex surface $X$ is polyhedral. We shall show that for any $t\in V_1\times V_2$ there exists $\delta>0$ such that \begin{equation} \label{Eq-T-1} \Delta^2g(x,y)\leq\Delta^2c(x,y)+\Delta^2d(x,y) \end{equation} for all $x,y\in B(t,\delta)\subset V_1\times V_2$ with $t=(x+y)/2$, which implies the assertion, see Lemma~\ref{VZ}. We have \begin{eqnarray*} \Delta^2g(x,y)&=&\frac{g(x)+g(y)}2-g(t)\\ &=&\left(\frac{g(x)+g(y)}2-g(s)\right)+\left(g(s)-g(t)\right), \end{eqnarray*} whenever $s=(s_1,s_2)\in V_1\times V_2$ is such that $(\varphi_1^{-1}(s_1),\varphi_2^{-1}(s_2))$ is a midpoint of $(\varphi_1^{-1}(x_1),\varphi_2^{-1}(x_2))$ and $(\varphi_1^{-1}(y_1),\varphi_2^{-1}(y_2))$ in $X^2$, where $x=(x_1,x_2)$ and $y=(y_1,y_2)$. By Proposition~\ref{P1} and Lemma~\ref{loclen}(ii), the first summand is bounded from above by $$2\,\frac{\mathrm{dist}^2(\varphi_1^{-1}(x_1),\varphi_1^{-1}(y_1))+\mathrm{dist}^2(\varphi_2^{-1}(x_2),\varphi_2^{-1}(y_2))}4.$$ Since clearly $$\mathrm{dist} (\varphi_i^{-1}(x_i),\varphi_i^{-1}(y_i))\leq\sqrt{1+(\operatorname{Lip} f_i)^2}\|x_i-y_i\|, \quad i=1,2, $$ we get \begin{eqnarray*} \frac{g(x)+g(y)}2-g(s)&\leq&(2+(\operatorname{Lip} f_1)^2+(\operatorname{Lip} f_2)^2) \frac{\|x_1-y_1\|^2+\|x_2-y_2\|^2}2\\ &\leq& \Delta^2c(x,y) \end{eqnarray*} (we use the fact that $\Delta^2c(x,y)=4(1+L^2)(\|x-y\|/2)^2$, see \eqref{drddrm}). In order to verify \eqref{Eq-T-1}, it remains thus to show that \begin{equation} \label{Eq-T-2} |g(s)-g(t)|\leq \Delta^2d(x,y) . \end{equation} Denote $t=(t_1,t_2)$, $s=(s_1,s_2)$, $T_i=\varphi_i^{-1}(t_i)$ and $S_i=\varphi_i^{-1}(s_i)$, $i=1,2$. We have \begin{eqnarray*} |g(s)-g(t)|&=&|\mathrm{dist}^2(S_1,S_2)-\mathrm{dist}^2(T_1,T_2)|\\ &\leq&2M|\mathrm{dist}(S_1,S_2)-\mathrm{dist}(T_1,T_2)|\\ &\leq&2M(\mathrm{dist}(S_1,T_1)+\mathrm{dist}(S_2,T_2)), \end{eqnarray*} where the last inequality follows from the (iterated) triangle inequality. Applying Lemma~\ref{L-f} and the fact that $S_i$ is a midpoint of $\varphi_i^{-1}(x_i),\varphi_i^{-1}(y_i)$ (see \cite[\S4.3]{Pl}), we get $\mathrm{dist}(S_i,T_i)\leq 2\Delta^2f_i(x_i,y_i)$, $i=1,2$, for $\delta$ sufficiently small. Since clearly $$\Delta^2d(x,y)=4M(\Delta^2f_1(x_1,y_1)+\Delta^2f_2(x_2,y_2)),$$ \eqref{Eq-T-2} follows. Let now $X$ be an arbitrary convex surface. Let $(X_k)$ be a sequence of polyhedral convex surfaces which tends in the Hausdorff metric to $X$. Consider arbitrary open convex sets $W_i \subset V_i$ with $\overline{W_i} \subset V_i$, $i=1,2$.
Applying Lemma~\ref{aprox2} (and considering a subsequence of $X_k$ if necessary), we find $(e_i,W_i)$-standard charts $(U_{i,k},\varphi_{i,k})$ of $X_k$ such that the associated convex functions $f_{i,k}$ converge to $f_i\restriction_{W_i}$, $L^*_i:= \lim_{k\to \infty} \operatorname{Lip} f_{i,k}$ exists and $L^*_i \leq \operatorname{Lip} f_i$, $i=1,2$. By the first part of the proof we know that the function $$ \psi_k(x_1,x_2) := g_k(x_1,x_2) - 4(1 + L_k^2)(\|x_1\|^2+\|x_2\|^2)- 4M_k(f_{1,k}(x_1)+f_{2,k}(x_2)),$$ where $M_k$ is the intrinsic diameter of $X_k$ and $L_k = \max(\operatorname{Lip} f_{1,k}, \operatorname{Lip} f_{2,k})$, is concave on $W_1 \times W_2$. Obviously, $L_k \to L^*:= \max(L^*_1, L^*_2) \leq L$ and Lemma~\ref{aprox} implies that $g_k\to g$ and $M_k\to M$. Consequently, $$\lim_{k \to \infty} \psi_k(x_1,x_2) = g(x_1,x_2) - 4(1 + {L^*}^2)(\|x_1\|^2+\|x_2\|^2)- 4M(f_{1}(x_1)+f_{2}(x_2))$$ is concave on $W_1 \times W_2$. Since $L^* \leq L$, we obtain that $g-c-d$ is concave on $W_1 \times W_2$. Thus $g-c-d$ is locally concave, and so concave, on $V_1 \times V_2$. \end{proof} Proposition~\ref{hlavni} has the following immediate corollary (recall the definition of a DC function on a DC manifold, Definition~\ref{DCman}, and the definition of the DC structure on $X^2$, Remark~\ref{hilbert}~(ii)). \begin{theorem} \label{main} Let $X$ be a convex surface in $\mathbb{R}^{n+1}$. Then the squared distance function $(x,y)\mapsto\mathrm{dist}^2(x,y)$ is DC on $X^2$. \end{theorem} Using Remark~\ref{hilbert}~(iii), we obtain \begin{corollary} \label{pevny_bod} Let $X$ be a convex surface in $\mathbb{R}^{n+1}$ and let $x_0\in X$ be fixed. Then the squared distance from $x_0$, $x\mapsto\mathrm{dist}^2(x,x_0)$, is DC on $X$. \end{corollary} Since the function $g(z) = \sqrt z$ is DC on $(0,\infty)$, Lemma \ref{zakldc}(c) easily implies \begin{corollary} \label{pevny_bod1} Let $X$ be a convex surface in $\mathbb{R}^{n+1}$ and let $x_0\in X$ be fixed. Then the distance from $x_0$, $x\mapsto\mathrm{dist}(x,x_0)$, is DC on $X\setminus \{x_0\}$. \end{corollary} \begin{remark}\label{nacel} If $n=1$, it is not difficult to show that the function $x\mapsto\mathrm{dist}(x,x_0)$ is DC on the whole $X$. On the other hand, we conjecture that this statement is not true in general for $n \geq 2$. \end{remark} \begin{theorem}\label{distset} Let $X \subset \mathbb{R}^{n+1}$ be a convex surface and $\emptyset \neq F \subset X$ a closed set. Denoting $d_F := \mathrm{dist}(\cdot,F)$, \begin{enumerate} \item the function $(d_F)^2$ is DC on $X$ and \item the function $d_F$ is DC on $X\setminus F$. \end{enumerate} \end{theorem} \begin{proof} Since $X$ is compact, we can choose a finite system $(U_i,\varphi_i)$, $i \in I$, of $(e_i,V_i)$-standard charts which forms a DC atlas on $X$. Let $f_i$, $i \in I$, be the corresponding convex functions. Choose $L>0$ such that $\mathrm{Lip}\, f_i\leq L$ for all $i \in I$ and let $M$ be the intrinsic diameter of $X$. To prove (i), it is sufficient to show that, for all $i \in I$, $(d_F)^2\circ (\varphi_i)^{-1}$ is DC on $V_i$. So fix $i \in I$ and consider an arbitrary $y \in F$. Choose $j \in I$ with $y \in U_j$.
Set $$ \omega(x) : = 4(1+L^2)\|x\|^2 + 4Mf_i(x),\ \ \ \ \ x \in V_i.$$ Proposition \ref{hlavni} (used for $\varphi_1 = \varphi_i$ and $\varphi_2 = \varphi_j$) easily implies that the function $h_y(x) = \mathrm{dist}^2(\varphi_i^{-1}(x), y) - \omega(x)$ is concave on $V_i$. Consequently, the function $$ \psi(x) := (d_F)^2\circ (\varphi_i)^{-1}(x) - \omega(x) = \inf_{y \in F} h_y(x)$$ is concave on $V_i$. So $(d_F)^2\circ (\varphi_i)^{-1} = \psi + \omega = \omega - (-\psi)$ is DC on $V_i$. Thus (i) is proved. Since the function $g(z) = \sqrt z$ is DC on $(0,\infty)$, Lemma~\ref{zakldc}(c) easily implies (ii). \end{proof} \begin{remark}\label{unbounded} It is not difficult to show that Theorems \ref{distset} and \ref{main} imply corresponding results on $n$-dimensional closed unbounded convex surfaces $X \subset \mathbb{R}^{n+1}$; in particular, that the statements (B), (C) and (D) from the Introduction hold. To this end, it is sufficient to consider a bounded closed convex surface $\widetilde X$ which contains a sufficiently large part of $X$. \end{remark} \section{Applications} Our results on distance functions can be applied to a number of problems from the geometry of convex surfaces that are formulated in the language of distance functions. We present below applications concerning $r$-boundaries (distance spheres), the multijoined locus, and the ambiguous locus (exoskeleton) of a closed subset of a convex surface. Recall that $r$-boundaries and ambiguous loci were studied (in Euclidean, Riemannian and Alexandrov spaces) in a number of articles (see, e.g., \cite{Fe}, \cite{ST}, \cite{Zamf}, \cite{HLW}). The first application (Theorem \ref{paralel} below) concerning $r$-boundaries provides an analogue of well-known results proved (in Euclidean spaces) by Ferry \cite{Fe} and Fu \cite{Fu}. It is an easy consequence of Theorem \ref{distset} and the following general result on level sets of DC functions, which immediately follows from \cite[Theorem 3.4]{RaZa}. \begin{theorem}\label{abst} Let $n \in \{2,3\}$, let $E$ be an $n$-dimensional unitary space, and let $d$ be a locally DC function on an open set $G \subset E$. Suppose that $d$ has no stationary point. Then there exists a set $N \subset \mathbb{R}$ with $\mathcal{H}^{(n-1)/2}(N) =0$ such that, for every $r \in d(G) \setminus N$, the set $d^{-1}(r)$ is an $(n-1)$-dimensional DC surface in $E$. Moreover, $N$ can be chosen such that $N= d(C)$, where $C$ is a closed set in $G$. \end{theorem} (Let us note that $C$ can be chosen to be the set of all critical points of $d$, but we will not need this fact.) \begin{theorem}\label{paralel} Let $n \in \{2,3\}$, let $X \subset \mathbb{R}^{n+1}$ be a convex surface and $\emptyset \neq K \subset X$ a closed set. For $r>0$, consider the $r$-boundary (distance sphere) $K_r := \{x \in X:\ \mathrm{dist}(x,K) =r\}$. There exists a compact set $N \subset [0,\infty)$ with $\mathcal{H}^{(n-1)/2}(N) =0$ such that, for every $r \in (0,\infty) \setminus N$, the $r$-boundary $K_r$ is either empty, or an $(n-1)$-dimensional DC surface in $\mathbb{R}^{n+1}$. \end{theorem} \begin{proof} Choose a system $(U_i,\varphi_i)$, $i \in \mathbb{N}$, of $(e_i,V_i)$-standard charts on $X$ such that $G:=X\setminus K = \bigcup_{i=1}^{\infty} U_i$. By Theorem \ref{distset}, we know that $d_i :=d_K \circ \varphi_i^{-1}$ is locally DC on $V_i$, where $d_K := \mathrm{dist}(\cdot,K)$.
Moreover, no $t \in \varphi_i(U_i)$ is a stationary point of $d_i$ (i.e., the differential of $d_i$ at $t$ is nonzero). Indeed, otherwise there exists $\delta>0$ such that $|d_i(\tau) - d_i(t)| < \|\tau -t\|$ whenever $\|\tau -t\|<\delta$. Denote $x:= \varphi_i^{-1}(t)$ and choose a minimal curve $\gamma$ with endpoints $x$ and $u\in K$ and length $s = \mathrm{dist} (x,K)$. Choosing a point $x^*$ on the image of $\gamma$ which is sufficiently close to $x$ and putting $\tau := \varphi_i(x^*)$, we clearly have $\|\tau -t\|<\delta$ and $|d_i(\tau) - d_i(t)| = \mathrm{dist}(x,x^*) \geq \|\tau -t\|$, which is a contradiction. Consequently, by Theorem \ref{abst} we can find for each $i$ a set $S_i \subset V_i$ closed in $V_i$ such that, for $N_i:= d_i(S_i)$, we know that $\mathcal{H}^{(n-1)/2}(N_i) =0$ and, for each $r \in (0,\infty)\setminus N_i$, the set $d_i^{-1}(r)$ is either empty, or an $(n-1)$-dimensional DC surface in $e_i^{\perp}$. Define $S$ as the set of all points $x \in G$ such that $\varphi_i(x) \in S_i$ whenever $x \in U_i$. Obviously, $S$ is closed in $G$. Set $N:= d_K(S) \cup \{0\}$. Since clearly $N \subset \bigcup_{i=1}^{\infty} N_i \cup \{0\}$, we have $\mathcal{H}^{(n-1)/2}(N) =0$. Since $K \cup S$ is compact, $N = d_K(K \cup S)$ and $d_K$ is continuous, we obtain that $N$ is compact. Let now $r \in (0,\infty) \setminus N$ and $x \in K_r$. Let $x \in U_i$. Then clearly $K_r \cap U_i = \varphi_i^{-1}(d_i^{-1}(r))$. Since $d_i^{-1}(r)$ is an $(n-1)$-dimensional DC surface in $e_i^{\perp}$, Lemma \ref{obrdc} implies that $K_r \cap U_i$ is an $(n-1)$-dimensional DC surface in $\mathbb{R}^{n+1}$. Since $x \in K_r$ was arbitrary, we obtain that $K_r$ is an $(n-1)$-dimensional DC surface in $\mathbb{R}^{n+1}$. \end{proof} \begin{remark}\label{obst} Let $n=2$. Then the weaker version of Theorem \ref{paralel} in which $\mathcal{H}^{1}(N) =0$ (instead of $\mathcal{H}^{1/2}(N) =0$) and the $K_r$ are $(n-1)$-dimensional Lipschitz manifolds follows from \cite[Theorem~B]{ST}, proved there in $2$-dimensional Alexandrov spaces without boundary. In such Alexandrov spaces even the version in which $\mathcal{H}^{1/2}(N) =0$ and the $K_r$ are $(n-1)$-dimensional Lipschitz manifolds holds; it is proved in \cite{RaZa} using Theorem \ref{abst} and Perelman's DC structure (cf.\ Section~\ref{Sec-Rem}). However, it seems to be impossible to deduce by this method Theorem \ref{paralel} in its full strength; any proof that the $K_r$ are DC surfaces probably needs results of the present article. If $X$ is a $3$-dimensional Alexandrov space without boundary, it is still possible that the version of Theorem \ref{paralel} in which the $K_r$ are Lipschitz manifolds holds. But it cannot be proved using only Theorem \ref{abst} and Perelman's DC structure even if $X$ is a convex surface. The obstacle is that the set $X \setminus X^*$ of ``Perelman's singular'' points (cf.\ Section~\ref{Sec-Rem}) can have positive $1$-dimensional Hausdorff measure even if $X$ is a convex surface in $\mathbb{R}^4$ (see \cite[Example 6.5]{RaZa}). \end{remark} \begin{remark}\label{obecdim} Examples due to Ferry \cite{Fe} show that Theorem \ref{paralel} cannot be generalized to $n \geq 4$.
For an arbitrary $n$-dimensional convex surface $X$ we can, however, obtain (quite similarly as in \cite{RaZa} for Riemannian manifolds or Alexandrov spaces without Perelman singular points) that for all $r>0$ except a countable set, each $K_r$ contains an $(n-1)$-dimensional DC surface $A_r$ such that $A_r$ is dense and open in $K_r$, and $\mathcal{H}^{n-1}(K_r \setminus A_r) =0$. \end{remark} If $K$ is a closed subset of a length space $X$, the {\it multijoined locus} $M(K)$ of $K$ is the set of all points $x\in X$ such that the distance from $x$ to $K$ is realized by at least two different minimal curves in $X$. If two such minimal curves exist that connect $x$ with two different points of $K$, $x$ is said to belong to the {\it ambiguous locus} $A(K)$ of $K$. The ambiguous locus of $K$ is also called the skeleton of $X\setminus K$ (or the exoskeleton of $K$, \cite{HLW}). Zamfirescu \cite{Zamf} studies the multijoined locus in a complete geodesic (Alexandrov) space of curvature bounded from below and shows that it is $\sigma$-porous. An application of Theorem~\ref{distset} yields a stronger result for convex surfaces: \begin{theorem}\label{zam} Let $K$ be a closed subset of a convex surface $X\subset \mathbb{R}^{n+1}$ ($n\geq 2$). Then $M(K)$ (and, hence, also $A(K)$) can be covered by countably many $(n-1)$-dimensional DC surfaces lying in $X$. \end{theorem} \begin{proof} Let $(U,\varphi)$ be an $(e,V)$-standard chart on $X$. It is clearly sufficient to prove that $M(K)\cap U$ can be covered by countably many $(n-1)$-dimensional DC surfaces. Set $F := \varphi^{-1}$ and denote by $d_K(z)$ the intrinsic distance of $z \in X$ from $K$. Since both the mapping $F$ and the function $d_K \circ F$ are DC on $V$ (see Theorem~\ref{distset} and Lemma~\ref{zakldc}), they are by Lemma~\ref{aznapl} strictly differentiable at all points of $V\setminus N$, where $N$ is a countable union of $(n-1)$-dimensional DC surfaces in $e^{\perp}$. By Lemma \ref{obrdc}, $F(N\cap V)$ is a countable union of $(n-1)$-dimensional DC surfaces in $\mathbb{R}^{n+1}$. So it is sufficient to prove that $M(K)\cap U\subset F(N)$. To prove this inclusion, suppose to the contrary that there exists a point $x \in M(K)\cap U$ such that both $F$ and $d_K \circ F$ are strictly differentiable at $x$. We can assume without loss of generality that $x=0$. Let $T:= (dF(0))(e^{\perp})$ be the vector tangent space to $X$ at $0$. Let $P$ be the projection of $\mathbb{R}^{n+1}$ onto $T$ in the direction of $e$ and define $Q:= (P\restriction_U)^{-1}$. It is easy to see that $Q=F\circ (d F(0))^{-1}$ and therefore $dQ(0) = (d F(0)) \circ (d F(0))^{-1} = {\rm id}_T$. Since $0 \in M(K)$, there exist two different minimal curves $\beta,\gamma: [0,r]\to X$ such that $r= d_K(0)$, $\beta(0)=\gamma(0)=0$, $\beta(r) \in K$, and $\gamma(r) \in K$. Like any minimal curves on a convex surface, $\beta$ and $\gamma$ have right semitangents at $0$ (see \cite[Corollary~2]{Buyalo}); let $u,v\in\mathbb{R}^{n+1}$ be unit vectors from these semitangents. Further, \cite[Theorem 2]{Mi} easily implies that $u\neq v$. Clearly $d_K \circ \beta(t) = r-t,\ t \in [0,r]$, and $(P \circ \beta)'_+(0) = P(\beta'_+(0))=u$. Further observe that $d_K \circ Q$ is differentiable at $0$, since $d_K \circ F$ is differentiable at $0= (dF(0))^{-1}(0)$. Using the above facts, we obtain \begin{eqnarray*} (d(d_K \circ Q)(0))(u)&=&(d(d_K \circ Q)(0))((P \circ \beta)'_+(0))= (d_K \circ Q \circ P \circ \beta)'_+(0)\\ &=& (d_K \circ \beta)'_+(0) = -1.
\end{eqnarray*} In the same way we obtain $(d(d_K \circ Q)(0))(v) =-1$. Thus, $u+v\neq 0$ and, by the linearity of the differential, $$(d(d_K \circ Q)(0))\left(\frac{u+v}{\| u+v\|}\right)=\frac{-2}{\| u+v\|}<-1.$$ Thus there exists $\varepsilon >0$ such that \begin{equation}\label{novel} \|d(d_K \circ Q)(0)\| > 1+ \varepsilon. \end{equation} Since $dQ(0) ={\rm id}_T$ and $Q = F\circ (d F(0))^{-1}$ is clearly strictly differentiable at 0, there exists $\delta >0$ such that $$ \|Q(p)-Q(q) - (p-q)\| \leq \varepsilon \|p-q\|,\ \ \ \ \ p, q \in B(0,\delta) \cap T,$$ and consequently $Q$ is Lipschitz on $B(0,\delta) \cap T$ with constant $1+\varepsilon$. Let $p, q \in B(0,\delta) \cap T$ and consider the curve $\omega: [0,1] \to X$, $\omega(t) = Q (tp + (1-t)q)$. Then clearly $$ \mathrm{dist}(Q(p),Q(q)) \leq \mathrm{length}\ \omega \leq (1+\varepsilon) \|p-q\|.$$ Consequently $$\|d_K \circ Q(p) - d_K \circ Q (q)\| \leq \mathrm{dist}(Q(p),Q(q)) \leq (1+\varepsilon) \|p-q\|.$$ Thus the function $d_K \circ Q$ is Lipschitz on $B(0,\delta) \cap T$ with constant $1+\varepsilon$, which contradicts \eqref{novel}. \end{proof} \begin{remark} An analogous result on ambiguous loci in a Hilbert space was proved in \cite{Zadi}. \end{remark} \section{Remarks and questions} \label{Sec-Rem} The results of \cite{Per} and Corollary \ref{pevny_bod1} suggest that the following definition is natural. \begin{definition}\label{compat} Let $X$ be a length space and let an open set $G \subset X$ be equipped with an $n$-dimensional DC structure. We will say that this DC structure is {\it compatible} with the intrinsic metric on $X$ if the following hold. \begin{enumerate} \item For each DC chart $(U, \varphi)$, the map $\varphi: U \to \mathbb{R}^n$ is locally bilipschitz. \item For each $x_0 \in X$, the distance function $\mathrm{dist}(x_0,\cdot)$ is DC (with respect to the DC structure) on $G \setminus \{x_0\}$. \end{enumerate} \end{definition} If $M$ is an $n$-dimensional Alexandrov space with curvature bounded from below and without boundary, the results of \cite{Per} (cf.\ \cite[\S2.7]{Kuwae}) give that there exists an open dense set $M^* \subset M$ with $\dim_H(M \setminus M^*) \leq n-2$ and an $n$-dimensional DC structure on $M^*$ compatible with the intrinsic metric on $M$ (cf.\ \cite[p.~6, line 9 from below]{Per}). Since the components of each chart of this DC structure are formed by distance functions, Lemma \ref{zakldc}(d) easily implies that {\it no other DC structure on $M^*$ compatible with the intrinsic metric exists}. Let $X \subset \mathbb{R}^{n+1}$ be a convex surface. Then Corollary \ref{pevny_bod1} gives that the standard DC structure on $X$ is compatible with the intrinsic metric on $X$. By the above observations, there is no other compatible DC structure on the (open dense) ``Perelman's set'' $X^*$. {\it We conjecture that this uniqueness holds also on the whole of $X$.} Further note that the standard DC structure on $X$ has an atlas such that all corresponding transition maps are $C^{\infty}$. Indeed, let $C$ be the convex body for which $X = \partial C$. We can suppose $0 \in {\rm int}\, C$ and find $r>0$ such that $B(0,r) \subset {\rm int}\, C$. Now ``identify'' $X$ with the $C^{\infty}$ manifold $\partial B(0,r)$ via the radial projection of $X$ onto $\partial B(0,r)$. Then this bijection transfers the $C^{\infty}$ structure of $\partial B(0,r)$ onto $X$.
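For instance, writing $\pi$ for the radial projection just used and $\mu_C(x):=\inf\{t>0:\ x\in tC\}$ for the Minkowski functional of $C$ (see e.g.\ \cite{Schneider}), one has $$\pi(x)=r\,\frac{x}{\|x\|},\quad x\in X,\qquad \pi^{-1}(u)=\frac{u}{\mu_C(u)},\quad u\in \partial B(0,r);$$ since $B(0,r)\subset {\rm int}\, C$, the convex function $\mu_C$ is Lipschitz (with constant $1/r$) and bounded away from zero on $\partial B(0,r)$, so this identification is in fact bilipschitz.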
We conclude with the following problem. {\bf Problem.}\ Let $f: \mathbb{R}^n \to \mathbb{R}$ be a semiconcave (resp.\ DC) function. Consider the ``semiconcave surface'' (resp.\ DC surface) $X := {\rm graph}\, f$ equipped with the intrinsic metric. Let $x_0 \in X$. Is it true that the distance function $\mathrm{dist}(x_0,\cdot)$ is DC on $X \setminus \{x_0\}$ with respect to the natural DC structure (given by the projection onto $\mathbb{R}^n$)? In other words, is the natural DC structure on $X$ compatible with the intrinsic metric on $X$? If $f$ is convex, then the answer is positive, see Remark \ref{unbounded}. If $f$ is semiconcave, then each minimal curve $\varphi$ on $X$ has bounded turn in $\mathbb{R}^{n+1}$ by \cite{Re}. Thus some interesting results on intrinsic properties extend from convex surfaces to the case of semiconcave surfaces. So there is a chance that the above problem has an affirmative answer in this case. However, we were not able to extend our proof to this case. \begin{thebibliography}{99} \bibitem{Acon} A.D. Alexandrov, {\em Intrinsic Geometry of Convex Surfaces} (Russian), OGIZ, Moscow-Leningrad, 1948. \bibitem{A1} A.D. Alexandrov, {\em On surfaces represented as the difference of convex functions}, Izv. Akad. Nauk. Kaz. SSR 60, Ser. Math. Mekh. 3 (1949), 3--20 (in Russian). \bibitem{BBI} D. Burago, Y. Burago, S. Ivanov, {\em A course in metric geometry}, Graduate Studies in Mathematics, Volume 33, Amer.\ Math.\ Soc., Providence, 2001. \bibitem{Buyalo} S.V. Buyalo, {\em Shortest paths on convex hypersurfaces of Riemannian spaces}, Zap.\ Nau\v{c}n.\ Sem.\ Leningrad.\ Otdel.\ Mat.\ Inst.\ Steklov.\ (LOMI) 66 (1976), 114--132. \bibitem{CaSi} P. Cannarsa, C. Sinestrari, {\em Semiconcave functions, Hamilton-Jacobi equations, and optimal control}, Progress in Nonlinear Differential Equations and their Applications, 58, Birkh\"auser Boston, Inc., Boston, MA, 2004. \bibitem{Fe} S. Ferry, {\em When $\varepsilon$-boundaries are manifolds}, Fund. Math. 90 (1976), 199--210. \bibitem{Fu} J.H.G. Fu, {\em Tubular neighborhoods in Euclidean spaces}, Duke Math.\ J. 52 (1985), 1025--1046. \bibitem{Ha} P. Hartman, {\em On functions representable as a difference of convex functions}, Pacific J. Math. 9 (1959), 707--713. \bibitem{HLW} D. Hug, G. Last, W. Weil, {\em A local Steiner-type formula for general closed sets and applications}, Math. Z. 246 (2004), 237--272. \bibitem{Kuwae} K. Kuwae, Y. Machigashira, T. Shioya, {\em Sobolev spaces, Laplacian, and heat kernel on Alexandrov spaces}, Math.\ Z. 238 (2001), 269--316. \bibitem{Mi} A.D. Milka, {\em Shortest lines on convex surfaces} (Russian), Dokl. Akad. Nauk SSSR 248 (1979), 34--36. \bibitem{Mor} B.S. Mordukhovich, {\em Variational analysis and generalized differentiation I. Basic theory}, Grundlehren der Mathematischen Wissenschaften 330, Springer-Verlag, Berlin, 2006. \bibitem{MM} C. Mantegazza, A.C. Mennucci, {\em Hamilton-Jacobi equations and distance functions on Riemannian manifolds}, Appl. Math. Optim. 47 (2003), 1--25. \bibitem{OS} Y. Otsu, T. Shioya, {\em The Riemannian structure of Alexandrov spaces}, J. Differential Geom. 39 (1994), 629--658. \bibitem{Per} G. Perelman, {\em DC structure on Alexandrov space}, an unpublished preprint (1995), available at http://www.math.psu.edu/petrunin/papers/papers.html. \bibitem{Pet} A.
Petrunin, {\em Semiconcave functions in Alexandrov's geometry}, in: Surveys in Differential Geometry, Vol.~XI, J. Cheeger and K. Grove Eds., Int.\ Press, Somerville, 2007, pp. 137--202. \bibitem{Pl} C. Plaut, {\em Metric spaces of curvature $\geq k$}, Handbook of geometric topology, 819--898, North-Holland, Amsterdam, 2002. \bibitem{RaZa} J. Rataj, L. Zaj\'\i\v cek, {\em Critical values and level sets of distance functions in Riemannian, Alexandrov and Minkowski spaces}, arXiv:0911.4020. \bibitem{Re} Yu. G. Reshetnyak, {\em On a generalization of convex surfaces} (Russian), Mat. Sbornik 40 (82) (1956), 381--398. \bibitem{Schneider} R. Schneider, {\em Convex bodies: the Brunn-Minkowski theory}, Cambridge University Press, Cambridge, 1993. \bibitem{ST} K. Shiohama, M. Tanaka, {\em Cut loci and distance spheres on Alexandrov surfaces}, Actes de la Table Ronde de G\'eom\'etrie Diff\'erentielle (Luminy, 1992), 531--559, S\'emin. Congr., 1, Soc. Math. France, Paris, 1996. \bibitem{VeZa} L. Vesel\'y, L. Zaj\'\i\v cek, {\em Delta-convex mappings between Banach spaces and applications}, Dissertationes Math. (Rozprawy Mat.) 289 (1989), 52 pp. \bibitem{Walter} R. Walter, {\em Some analytical properties of geodesically convex sets}, Abh.\ Math.\ Sem.\ Univ.\ Hamburg 45 (1976), 263--282. \bibitem{Wh} J.H.C. Whitehead, {\em Manifolds with transverse fields in Euclidean space}, Ann.\ Math.\ 73 (1961), 154--212. \bibitem{Zaj} L. Zaj\'\i\v{c}ek, {\em On the differentiation of convex functions in finite and infinite dimensional spaces}, Czechoslovak Math. J. 29 (1979), 292--308. \bibitem{Zadi} L. Zaj\'\i\v{c}ek, {\em Differentiability of the distance function and points of multi-valuedness of the metric projection in Banach space}, Czechoslovak Math. J. 33 (1983), 340--348. \bibitem{Zajploch} L. Zaj\'\i\v{c}ek, {\em On Lipschitz and d.c.\ surfaces of finite codimension in a Banach space}, Czechoslovak Math. J. 58 (2008), 849--864. \bibitem{Zamf} T. Zamfirescu, {\em On the cut locus in Alexandrov spaces and applications to convex surfaces}, Pacific J. Math.\ 217 (2004), 375--386. \end{thebibliography} \end{document}
\begin{document} \title{The expressional limits of formal languages in the notion of observation} \author{Stathis Livadas\\ Department of Mathematics,\\ University of Patras, Patras, 26500\\ Greece\\ e-mail: [email protected]\\ tel: +30 6947 302876, fax: +30 2610 997425\\} \maketitle \begin{abstract} In this article I deal with the notion of observation in the most fundamental sense and its formal representation by means of languages serving as expressional tools of formal-axiomatical theories. In doing so, I have taken this notion in two diverse contexts. In a first context as an epistemic notion linked to an elaboration of objects of a mathematical theory taken as registered facts of objective apprehension and then as a notion linked to a process of measurement on the quantum-mechanical level. The second context in terms of which I deal with the notion of observation is that of phenomenological constitution basically as it is described in E. Husserl's texts on phenomenology of temporal consciousness. Granting that mathematical objects as formal-ontological objects in phenomenological constitution are based on perceptual objects by means of categorial intuition, the question is whether and under what theoretical assumptions we can, in principle, include quantum objects in the class of formal-ontological objects and thus inquire into the limits of their description in the language of a formal-axiomatical theory. On one hand, I derive an irreducibility on the level of observables as indecomposable atoms without any further syntactical content in formal representation and on the other a transcendence of a continuous substratum self-constituted as a kind of impredicative objective unity upon which is partly grounded the definition of an observational frame and the generation of a predicative universe of discourse. \end{abstract} \noindent {\bf Keywords}. Continuum, flux of consciousness, individual, intentionality, observation, quantum measurement, quantum non-separability, transcendental. \vspace*{0.4cm} \tableofcontents \vspace*{0.2cm} \section{Introduction} I generally think that, in setting the limits of formal languages with respect to a notion of observation, much of the task consists in establishing an underlying all-encompassing theoretical background on which to be able to talk about observation either as an epistemic notion in the various contexts in which it is encountered or as a phenomenological notion linked to {\it a priori} defined acts of an `observing' subject. My underlying interpretative scheme will be that of a phenomenological constitution of the intentional objects of experience by a `participating' knowing subject in an object-like organization of the surrounding World-for us which from a certain viewpoint comes close to the version of Active Scientific Realism in Quantum Mechanics described, e.g., in \cite{Karak} and \cite{Karak1}. In my view this interpretation should lead to a `representation' of well defined objects in the flux of consciousness by means of noetical-noematical constitution which involves a definition of objects as reidentifying bearers of predicates across phenomenological time. This has to do with a view of objects\footnote{If we regard as phenomenal objects those given `primordially in perception' and represented in consciousness then interpretative objects like atoms, electrons, etc.
can be also regarded as given by experience and thus considered as real to the extent that their `reality' is based upon the interpretation of sensible signs in an experimental situation (see: \cite{Heel}, I, p. 5).} in quantum mechanical context as general anticipative frameworks in a Boolean frame and then in a unifying mathematical - probabilistic tool corresponding to each experimental preparation. Let me be a little bit more specific here about the meaning of the phenomenological terms of noetical and noematical described originally in E. Husserl's {\it Ideen I} (\cite{Hu1}). A noematical object manifests itself as a `giveness' in the flux of a subject's consciousness and it is constituted by certain modes of being as such (e.g. with a proper to it predicative nest) i.e., as a well defined object immanent to the flux which can then be `transformed' to a formal-ontological object and consequently a symbolic object of an analytical theory naturally including any formal mathematical theory. It can then be said to be given apodictically in experience: (1) it can be recognized by a perceiver directly as a manifested essence in any perceptual judgement (2) it can be predicated as existing according to the descriptive norms of a language and (3) it can be verified as such (a reidentifying object) in multiple acts more or less at will (\cite{Heel}). In contrast, an intentional object by hyletical-noetical apprehension ({\it Wahrnehmung}) can be only thought of as an aprioric orientation of intentionality by its sole virtue of being given as such `in person' in front of the consciousness inside the open horizon of the World-for us.\footnote{The World-for us or Life-World in Husserlian terminology can be roughly described to a non-phenomenologist as the physical world in its ever receding horizon including in intersubjective sense all knowing subjects in a special kind of presence in the World. More on this in E. Husserl's {\it The Crisis of European Sciences and Transcendental Phenomenology} (\cite{Hu4}).} It is given within a horizon whose outer `layer' is the boundary between the intentional object and the World-for us; the latter, is meant as not being this object or any of its parts and moreover it is the `field' of all next possible noetical apprehensions. These intentional objects as most fundamental objects of intentionality cannot be reduced to a lower level of apprehension and this is seemingly the reason for which in their subsequent temporal constitution as individual noematical objects corresponding to `state-of-things' they bear no `inner' content at least not one describable by any analytical means. My interpretational scheme will be also that of a transcendental reduction of a phenomenological type when it comes to the notion of constitution in-itself as an objectivated, homogeneous, continuous unity `external' to its immanent objects on which to be able to constitute well defined objects of `observation'.\footnote{I note here that there exist certain approaches claiming a transcendental deduction of Quantum Mechanics ({\bf QM}) such as M. Bitbol's (\cite{Bitb}) in the sense of a Kantian-type reduction of certain particularities of quantum description (e.g. superposition of states, continuity of the state vector) to their corresponding correlates in internal modes of functioning of human consciousness or J. 
Hintikka's ideas on a shift from a passive reception of objects to their effective research and instrumental shaping on the part of the knowing subject.} This continuity in the sense of an underlying, impredicative and continuous substratum that makes possible to reinstate objects as bearers of predicates in kinematical interdependence in each experimental context is reduced to a transcendental subjectivity in the self-constitution of each one's flux of consciousness (\cite{Hu2}, pp. 295-96). I also draw attention onto how the unity of a fulfilled time consciousness is reflected in the language of the mathematical theory of Quantum Mechanics in the form of classical continuity assumptions in the description of certain quantum phenomena or in the form of the state vector representing a quantum object immediately after measurement in a quantum experiment. What can be derived from the mathematical description of quantum phenomena is the possibility to refer to entities as observational and then syntactical individuals in disentangled states and further the possibility to describe them by means of continuous transformations across time (e.g. state vectors or Dirac transformations) which implies, in turn, a continuity of the parameter time. I link these two fundamental possibilities in a phenomenologically motivated orientation to: (i) The existence of objects of intentionality within an outer and inner horizon of noetical apprehension - conditioned on the existence of a relation of intentional character\footnote{This is not to be understood as a relation of some kind of psychological content. By intentional relation, which is a phenomenological term, it is meant something fundamentally deeper and aprioric. To a non-expert in Phenomenology it can be described in one phrase as grounding an aprioric necessity of the orientation of a subject's consciousness to the object of its orientation.} between a knowing subject and an object - which are then in noematical constitution bearers of a fundamental predicative formation and (ii) the impredicativity of the self-constituting unity of the time flux of consciousness leading finally to the transcendence of the pure ego in Husserlian terminology. I view these two fundamental irreducibilities in subsections \ref{subsec2} and \ref{subsec3} as playing an underlying role in the mathematical description of certain quantum phenomena such as the Bohm-Aharonov effect and quantum non-separability. An approach to quantum non-separability motivated to some extent by phenomenological concerns has been provided in \cite{Karak} and \cite{Karak1}. Finally, regarding mathematical objects as not perceptual objects in literal sense yet founded on perception\footnote{In E. Husserl's view, perception by virtue of perceptual acts provides the concrete, immediate and non-reflective basis for all our experiences and thus provides also the basis for any intuition of abstract objects. For details, see R. Tieszen's \cite{Tiesz1}, pp. 412-15.} I look, mainly in section \ref{Sec2}, into how the aforementioned irreducibilities are formalized in the language of axiomatical mathematical theories, notably in {\it Zermelo-Fraenkel} with {\it Axiom of Choice} Theory ({\bf ZFC}), and consequently in the mathematical metatheory of {\bf QM}. They set, in effect, as I shall try to show, the expressional limits of a formal language in the notion of observation because they stand as irreducibilities of a rather phenomenological and certainly of a non-analytical character. 
By this measure, they can be considered as a unifying substratum of both quantum-mechanical observation (in the sense of a fundamental character `observation') and the logical-axiomatical structure of the corresponding formal theory. From this aspect, if we adhere to the view that mathematical Continuum is primarily based on the intuitive Continuum of our real experience and it can be modelized after the self-constituted Continuum of the time flux of consciousness (which was roughly both L.E.J. Brouwer's and H. Weyl's view of the matter, see: \cite{Van Att}), we plausibly expect the transcendental root of phenomenological Continuum to be somehow reflected in the axiomatization of mathematical theories pertaining to a notion of mathematical Continuum. This is what seems to happen with the independence from the rest of the axioms of {\bf ZF-C} of statements making claims about fundamental properties or the cardinality of the Continuum (e.g. {\it Continuum Hypothesis}, the {\it Axiom of Choice}, {\it Suslin's Hypothesis}, etc). I simply add as indirectly relative to this, G\"{o}del's view that the mathematical essences we intuit could not be linguistic conventions in the sense that: ``{\it instead of clarifying the meaning of abstract and non-finitary mathematical concepts by explaining them in terms of syntactical rules, abstract and non-finitary concepts are used to formulate the syntactical rules}" (\cite{Tiesz2}, p. 193). \section{Noematical objects as individuals of mathematical theories}\label{Sec2} It seems to me purposeful to draw attention to what is most fundamental, in fact what is irreducible in the build-up of analytical statements of any degree of complexity. If we assume that any analytical statement incorporates noematical objects\footnote{This kind of noematical objects, e.g. syntactical individuals of categorial formulas, numerical symbols, functions of Pure Analysis, Euclidean or non-Euclidean domains of such functions etc. were characterized by Husserl as referring to `state-of-things' ({\it Sachverhalte}) which are intentionalities towards an `empty something' ({\it Leeretwas}) (\cite{Hu1}, p. 33)} in the sense of signification objects ({\it Sinnesobjekte}) acting as predicate bearers ({\it Seinssobjekte}) in noematical constitution together with any doxical modalities reflecting consciousness-based states e.g. doubt, certainty, negation, etc., then any attempt at a radical `deconstruction' of the analytical structure of any sentence would inevitably reduce to statements about individuals and to properties correlated with their very nature as individuals. As E. Husserl claimed, those sentences are no more of an analytical nature but of a rather phenomenological one leading to a kind of `observation' of intentional character. This radical reduction which in addition to predicate bearers as such reaches their predicative environment as well, should fundamentally attach at least the $\in$ predicate as a noematical correlate to each such predicate bearer in case we talk about syntactical atoms of analytical - mathematical formulas. This type of reduction is meant as the result of the elimination of all possible doxical modalities in the construction of analytical sentences of any level of complexity as, for instance, in general statements expressing doubt ($S$ should be $p$), corroboration ($S$ is in fact $p$) or negation ($S$ is not $p$) and so on, including also forms uniquely defined by their syntactical structure e.g. 
where one quantifies over elements satisfying a particular analytical property $S$ ($\forall p\; S$ or $\exists p\; S$). What is left ultimately is a multiplicity of individuals (or a collection of such individuals by phenomenological association) as intentionally perceived and constituted as reidentifying noematical objects in a varying predicate environment with $\in$ predicate intentionally attached to them as an irreducible, non-logical notion of order. The predicative nests intentionally attached to individuals (in the sense of syntactical atoms) were termed by Husserl as {\it Kernformen} in \cite{Hu2} and were supposed to invariably define the essential nature of individuals - substrates in the subsequent construction of analytical statements of a higher order. From my point of view the $\in$ predicate is even more fundamental than the equality predicate at least in analytical representation for any notion of equality presupposes a notion of mutual inclusion. This is reflected in a fundamental way by the adoption of {\it Extensionality Axiom}: $$\forall X\;\forall Y(\forall u (u\in X\longleftrightarrow u\in Y)\longrightarrow X=Y)$$ in the foundations of standard set theory as well as in almost any formal theory treating sets as definite collections of objects. My claim is that this radical reduction to individuals - predicate bearers as ultimate phenomenological substrates of analytical statements provides a satisfactory interpretational framework for the role of urelements of transitive classes under $\in$ predication. As it will become more clear in next section, a common foundation to both formal-ontological objects of categorial structures and to objects of quantum `observation' in their noematical representation in consciousness lies in the notion of intentional objects constituted in an all-inclusive objectivity of consciousness as reidentifying across time predicate bearers. Any intentional object in the possibly lowest level of `observation' can, in Husserl's view, be only an individual - substrate deprived of any inner structure (even lacking a temporal form), at least not one expressible by any analytical means together with an {\it a priori} predicative formation by virtue of which it can be perceived as a unique noematical object appropriating {{\it eo ipso} a relational property with respect to any other such object. It is by all accounts this essential characteristic together with the retentional\footnote{The terms retention and protention are purely phenomenological terms and can be roughly communicated to a non-expert in Phenomenology respectively as a kind of immediate conservation in memory (retention) and immediate expectation of original impression (protention); they are described to be of an aprioric and not of psychological character in the constitution of the sequence of original impressions in the flux of consciousness. For more details, see \cite{Hu3}.} } character of the constitution process that it is possible to re-identify any intentional object of primordial experience as the one and same noematical object $x$ under varying predicative situations. Any attempt to pass from those individuals perceived as a self-donated presence in front of the intentionality of consciousness to a constituted objectivity of a higher order can only entail circularities in description or {\it a priori} terms. 
For instance, in {\it Ideen I} Husserl referred to the multi-ray intentionality of the synthetic consciousness which turns by an essentially {\it a priori} mode the apprehension of a collection of objectivities into an apprehension of a single objective whole by what he termed a monothetical act whose {\it wesensm\"{a}ssig} (by essence) mode evidently points to a creeping transcendence (\cite{Hu1}, p. 276). I'll comment on phenomenological transcendence, which was described {\it in extenso} in terms of the self-constitution of time consciousness by Husserl in \cite{Hu3}, later in Sec. 4; now I call attention to irreducible individuals as formal - ontological objects with their inherent predicative formation and to their implicit role in determining properties of transitive classes concerning, in particular, the proof of absoluteness of certain categorial formulas.\footnote{In rough terms an absolute formula $\varphi$ inside a model $X$ can be described as keeping the `mirror' image of itself in any other model $Y$ ordered by set inclusion ($X\subset Y$) with respect to the original model $X$.} It is very important, for instance, to assure absoluteness of certain bounded quantifier set-theoretical formulas in the build-up of hierarchies of transitive classes $L_{\alpha}$ in G\"{o}del's Constructible Universe L to prove that it serves as a model of {\bf ZFC} plus {\bf CH} [{\bf ZF} theory + {\bf AC} ({\it Axiom of Choice}) + {\bf CH} ({\it Continuum Hypothesis})]. For such formulas the property of absoluteness is basically related with the transitivity property of the corresponding class; in intuitive terms it has much to do with the invariability of the $\in$ - predicative character of the zero-level elements of the original transitive model in the recursive definition of classes of any order inside it.\footnote{A class $M$ is transitive if whenever $x\in y$ and $y\in M$, then $x\in M$. This is equivalent to the statement that whenever $x\in M$ and $x$ is not a zero-level element (i.e., an urelement under $\in$ predicate) then $x\subseteq M$.} For instance, by the transitivity property any bounded quantifier formula $\varphi$ of the form $$(\exists u\in x)\; \psi\;\;\mbox{or}\;\;(\forall u\in x)\;\psi$$ is absolute between any transitive models $M$ and $N$ whenever the formula $\psi$ is. The simple proof is based on two assumptions. First, that in the inductive definition of absolute formulas any atomic formula $\psi$ of the form $i\in j$ and $i=j$ is absolute and second, that any bounded variable $u$ of the formula $\varphi$ is the `reflection' of a certain invariably the same urelement $u_{i}$ under $\in$ - predication in $M$.\footnote{It is an immediate consequence of the transitivity property of the model $M$ that it satisfies the Axiom of Extensionality, by absoluteness of the bounded quantifier formula $$\forall X\;\forall Y\;\bigl[\bigl((\forall u\in X)\,(u\in Y)\wedge (\forall u\in Y)\,(u\in X)\bigr)\longrightarrow X=Y\bigr]$$ (\cite{Jech}, pp. 82-83).} Both assumptions reduce to admitting the possibility of existence of irreducible individuals retaining invariably their double nature as individuals-as such and as members of the (transitive) class to which they belong.
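To fix ideas with a standard example: the relation $x\subseteq y$ is expressed by the bounded formula $(\forall u\in x)\,(u\in y)$ and is therefore absolute between any transitive models $M\subseteq N$, whereas the statement `$x$ is countable', whose natural formalization $\exists f\,(f\ \mbox{is an injection of}\ x\ \mbox{into}\ \omega)$ involves an unbounded quantifier, is upward but in general not downward absolute, since the witnessing injection may exist in the larger model without belonging to the smaller one (cf.\ \cite{Jech}).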
Whether they should be objects of stratified categorial formulas in a logical - mathematical statement or well defined objects of quantum observation expressible as formal - ontological objects in the syntactical norms of a formal - analytical discourse changes nothing as to the essence of their individuality and their aprioric predication, thus leading to a view of them as `transformations' of intentional objects of noetical apprehension. The latter case (i.e. quantum objects) will be discussed in more detail in the next subsection. Whether we may introduce individuals as urelements dropping the Axiom of Extensionality as Fraenkel and Mostowski did in constructing appropriate models in which the {\it Axiom of Choice} fails\footnote{P. Cohen dismissed those urelements as fictitious objects $x_{i}$ such that $\forall y\;(\neg y\in x_{i})$ yet $x_{i}\neq x_{j}$ for $i\neq j$ (\cite{KrauCol}, p. 202). Yet in the sense of noematical individuals inside the unity of consciousness that I have proposed it makes sense to talk about such objects.} or dismiss them altogether reserving this denomination only for the null-set ($\emptyset$) yet retaining a notion of individuals as first-level elements in a cumulative type structure, the underlying idea of `indecomposable' individuals preserving invariably their syntactical and (in appropriate interpretation) semantical content remains fundamentally the same. Even viewing urelements of an extended Zermelo-Fraenkel universe (ZFU, $\in$) as not identical yet indistinguishable elements by the definition of $\mathcal{A}$ - indistinguishability inside a relational structure $\mathcal{A}=\langle D, \{R_{i}\}_{i\in I} \rangle$ (as proposed by Krause and Coelho in \cite{KrauCol}), they can be easily made distinguishable by associating to any collection of them an ordinal number, thus making it possible to talk about a collection $\sigma_{0},\;\sigma_{1},\;\sigma_{2},\ldots,\;\sigma_{n-1}$ of such objects. This is a result of the simple proof that any ordinal as a well-ordered structure $\langle A,\;<\;\rangle $ is a rigid structure, i.e. the only automorphism in this structure is the identity function (\cite{KrauCol}, p. 201). In other words, in a rigid structure $\mathcal{A}$ the notion of not identical elements and that of $\mathcal{A}$ - distinguishable elements coincide. Let us note that the question of the individuality of entities in the context of quantum mechanics has provided for much theoretical discussion on the nature of quantum objects as they are regarded by some physicists (notably by Schr\"{o}dinger) as non-individuals upon which a notion of identity does not make sense or by others as bearing a kind of intrinsic individuality by which though they might be ``{\small qualitatively the same in all aspects representable in quantum mechanical models yet numerically distinct}'' (\cite{Van Fr}, p. 376).\footnote{In \cite{KrauCol} a model-theoretical characterization is proposed of the two opposing views on the question of the individuality of quantum particles in terms of a trivial and a non-trivial rigid expansion of a relational structure $\mathcal{A}=\langle D, \{R_{i}\}_{i\in I} \rangle$. It is evident by the arguments employed in this article that its authors have the view that the mathematical structure of Quantum Mechanics has a non-trivial rigid expansion (i.e.
not one by trivially adjoining the ordinal structure) whose physical intuition is that quantum entities are somehow `intrinsically' distinguishable one from another.} Individuals purely {\it qua} individuals, whether we refer to an axiomatical-mathematical model or to a mathematical modeling of quantum mechanics, are to be thought on a formal level as reflections of irreducible intentionalities of consciousness of an {\it a priori} character and they should not necessarily be identified on a formal-ontological level with certain elements of standard or non-standard theories, at least not in the absence of the {\it Foundation Axiom} of {\bf ZFC} theory. They can represent any entity in the structure of an appropriate formal language as long as it is taken as elementary and not further reducible in the sense we have described.\footnote{In \cite{BriRew}, C. Brink and I. Rewitzky derive by a proper mathematical modelisation involving Priestley duality (something close to Stone duality) that it is not essentially different whether we talk either of individuals (things), properties, or facts in the world, establishing, in effect, an intertranslatability between an ontology of individuals (nominalism), an ontology of properties (realism) and an ontology of facts (factualism).} Of course many other relations or properties other than the $\in$ predicate can be predicated of any individual as an object of a formal-ontological discourse but what we are most interested in here is to reach the most fundamental, the not further reducible level of predication. Evidently, this kind of irreducibility connects with a notion of ordinals as a transitive and well-ordered structure within a mathematical - axiomatical system. Taking into account that by definition an ordinal number is a transitive set well-ordered by $\in$ inclusion, it seems natural to conjecture that transitive models of {\bf ZFC} should be determined by their sets of ordinals. In fact, this was proved by P. Vop\v{e}nka and B. Balcar in \cite{Vop}, where any transitive models $M$, $N$ of {\bf ZFC} are proved to be equal ($M=N$) whenever $M$ and $N$ have the same sets of ordinals, with the restriction that {\it AC} is satisfied in $M$ ($M\models AC$). It is not without importance here that {\bf AC} should be satisfied at least in the model $M$, and this gives the motivation for a brief review of this independent infinity axiom within the scope of the present work. The intuition of the {\it Axiom of Choice} is that we can, in principle, apply a criterion of choice at any infinity level which would provide us with the possibility to select an irreducible individual (think of it as an urelement of a formal theory) together with an inherent $\in$ relation attached to it, among any other potential choices or in phenomenological terms among any other possible intentional `observations'. In a phenomenological approach, I argue that a notion of ordering may be automatically induced by any object of intentionality at the level of hyletical - noetical apprehension by the sole virtue of the intentional `property' of the object in question to bear an outer horizon, i.e. that part of the Life-World that is not the object or parts of the object. Evidently, this `property' provides for a complementary domain of `observation' for a next potential noetical apprehension which by its very enactment provides for a new complementary domain and so on.
This way a notion of well-ordering may be grounded on the noetical level of intentionality with regard to a transformation thereafter of an hyletical-noetical object to a noematical one possibly belonging to an aggregation of other such objects in the continuous unity of the absolute flux of consciousness. Now, what is left after discarding all other details of constitution is the possibility to `observe' (and retain) intentionally individuals-as such as protentions of intentionality in the domain of `observation'. By this deconstruction process, a fundamental reduction of the {\it Axiom of Choice} as a series of intentional acts of an {\it a priori} character inducing in posterior sense a well-ordering among any aggregation of formal-ontological objects seems to me plausible. \section{Can quantum mechanical interpretation be related to a phenomenology of constitution?}\label{Sec3} \subsection{Some remarks on the mathematical language of Quantum Mechanics} In this section I shall argue for the possibility of an interpretation of Quantum Mechanics along phenomenological lines especially in connection with the notion of the intentional relation subject-object and the constitution of noematical objects as well defined objects in the self-constituted unity of consciousness. This is an approach that to my knowledge is pretty much new though there have already been various interpretations beyond the mainstream options of realism and instrumentalism, such as M. Bitbol's views in \cite{Bitb} motivated by an attempt at a transcendental deduction of Quantum Mechanics or the {\it Many-Worlds Interpretation} of Quantum Mechanics ({\bf MWI}) taken as B. De Witt's interpretation of H. Everett's `relative state' formulation of {\bf QM}.\footnote{In connection with this consciousness-related orientation, I specifically refer to its `psychological' version where the quantum measurement process is roughly reduced to a `splitting' of a single consciousness before interaction to several afterwards yet retaining by some psychological mode its unity through time. See, \cite{Ev1} and \cite{Ev2}.} In M. Bitbol's approach I retain, first, his view of an original type transcendental deduction summed up as providing an internal correlation between a unifying mode of appearances of phenomena and certain laws of understanding considered as preconditions of experience. This seems, in a quite general form, to shift the view towards a field where the mental faculties of a subject might actively take part in grounding experience as such and also in shaping up the objects of experience. And, second, I think we should keep in mind the meaning of his constraint of contextualization which corresponds a Boolean subframe to each experimental preparation linked to a unified mathematical tool of probabilistic prediction irrespective of the context associated to the measurement that follows the preparation (\cite{Bitb}, p. 11). In such a case, based on the notion of a reidentifying object across time (which in my view presupposes the existence of an otherwise irreducible intentional relation subject/object) we can ascribe to each experimental preparation a unified, predictive (non-Kolmogorovian) tool whose valuations are associated with a Boolean framework irrespective of the context associated to the measurement that follows the preparation. 
This seems interesting to the extent that: a) it presupposes an object-like organization of phenomena to be described in the language of a Boolean observer `attached' to each experimental situation and b) it implies a unifying mathematical tool bridging, in effect, the contextual frames of the preparation of an experiment and its measurement. The former condition can be understood as leading to the following assumptions. The indirect introduction of a participating `observer' who has a particular mode of `observation', then a particular mode of constituting his observations and reproducing them in a predicable object-like universe and a particular language to describe them as well defined objects. In the case of an experimental preparation and measurement this is inherently linked with an observer's capacity to transform his intentionalities in terms of noetical-hyletical apprehensions of the real world to noematical objects in terms of reidentifying objects across time and thus well defined bearers of predicates in an ordered context. Then he would be able to talk, in principle, about the ontological nature of these constituted objects irrespectively of whether they are considered as objects under predication of a formal or of a common natural language. For they have become objects of a formal ontology which in reverse order reduce by phenomenological intentionality to individuals- as such bearers of an outer horizon in hyletical - noetical apprehension. In view of the aforesaid I propose an interpretation (in terms of a noetical - noematical constitution) of the presence of a knowing subject that performs quantum measurements via a measuring apparatus in the following fashion: The measured property produces a macroscopic effect on the instrument (e.g. a pointer reading or a track in a bubble chamber) which is a material sign. This can be considered as having a double reality; its material one as a pointer sign or a bubble track and an intentional reality proper to it as a sign susceptible to be constituted at a next stage as a formal-ontological object. A sign regardless of its particular material content has the mode of being a sign-as such and in being so it can be thought of as an intentional object of noetical apprehension by virtue of being merely a so-called `state-of-things' ({\it Sachverhalt}), in other words an `empty something'\footnote{By the transformation of originally given intentional objects as `state-of-things' to formal-ontological objects it is possible for a physical interrelation to be formalized mathematically with numerical (or mathematical) symbols corresponding to observable signs furnished by the intermediary of a measuring instrument.}; it should then be apriorically directed to a knowing subject performing the experiment by means of a measuring device. In any case, it may be assumed that the signs of a measuring apparatus are symbols of certain physical properties (natural symbols) insofar as they are uniquely determined by the interaction with a quantum entity in terms of which they `translate' the hidden state of the quantum entity into uniquely determined sensible signs (\cite{Heel}, p. 174-175). Evidently, these signs which are part of the `physicalistic language' of the measuring apparatus can be considered as intentional objects for the performing subject who can then turn them to linguistic symbols; the latter is conditioned on his capacity to constitute them as well defined noematical objects in his flux of consciousness. 
As symbols of a linguistic statement they are not just material reality signs but parts of a predicative environment, by being symbols-as such corresponding to `state-of-things' which are moreover bearers of two important properties: 1) They do not determine a unique linguistic statement. By being symbols-as such they can be arguments of equivalent logical-mathematical formulas inasmuch as they are abstractions of unique material signs and 2) They are devoid of any inner analytical content as they are linguistic symbols abstracting in each case a unique and irreducible intentional object, e.g. the sign-as such of the bubble track of a particle. At the stage at which a knowing subject will be able to represent noematical objects as such and such well defined objects of discourse, in one or another internal noematical mode, he should by necessity have already constituted them in a kind of synthetic unity to be able to talk about them together at once; and this unity should undeniably be a temporal unity. But arguably this reduces to a constitution of internal time in the form of a continuous unity of time consciousness which has almost nothing to do with external (or scientific) time, e.g. the time frame of classical or relativistic theory. In fact, both quantization conditions and the continuous wave-like propagation of phenomena, stemming from the formalism of Quantum Mechanics with appropriate boundary conditions, are due to the intrinsic property of quantum objects to be `embeddable', as outcomes of sufficiently reproducible experiments, in a unified meta-contextual frame of probabilistic description. This possibility involves, on a phenomenological level, at once both a relationship subject-object of an intentional character and the noematical constitution of intentional objects as well defined objects in the unity of consciousness; in a quantum experiment this is translatable to an embedding of reproducible `observations' in a meta-contextual Boolean subframe.\footnote{My intentional/constitutional approach is not directly connected with the general stance one might have on the question of the nature of quantum particles, as there is no unconditioned description of them. I note, however, that in the light of the EPR critique and the antinomies of entanglement states Schr{\"o}dinger attempted a reinterpretation of the epistemological questions of quantum theory and a reexamination of the question of the individuality of particles. In fact, it was his attempt to interpret the formal results of the Bose-Einstein statistics, which implicated an indiscernibility of monoatomic gas molecules, that led him to abandon the particle interpretation and adopt the undulatory view, in terms of which the gas as a physical system should be considered as a system of stationary waves in which molecules are just states of excitation energy, deprived in this way of individuality. Later he gave a physical interpretation in an electromagnetic sense to the wave function $\psi$ as solution of his general equation, but that was again of a wave configuration consisting in the superposition of all kinematically possible point-configurations of the system, each one intervening by its special ``weight" in the physically interpreted formula $\int \psi \psi^{\ast}$ (current density).
Moreover his wave image proved more satisfactory in representing atomic transitions by an energy exchange between different vibrations rather than by a quantum leap between states, in which case one cannot possibly describe the transition in time and space.} In addition, and independently of the context that follows the preparation of a quantum experiment, there should be some intrinsic way by which quantum objects as intentional ones become reidentifying objects of a constituting consciousness invariably over (internal) temporal unity. Eliminating then all time-related modes of noematical constitution (e.g., simultaneity, succession, causal relationship) there should be a temporal substratum of the predicable universe of Quantum Mechanics whose temporality should be something radically different from the ordinary objective time of a classical macroscopic system. This may lead one to argue that the unifying, meta-contextual time of the predictive tool should be the objectivated `reflection' of the absolute flux of time consciousness whose objectivation cannot be described but as a sort of `mirror' reflection of its ever {\it in-act} self. I close this subsection by referring to a well-known quantum effect where the derivation of quantum conditions by classical continuity assumptions as constraints provides a clue to the necessity of assuming an impredicative temporal substratum to which I just referred above. In the case of the potential well, for instance, the discrete eigenvalues of the energy operator are the formal result of continuity assumptions about the wave function $\psi$ at the boundaries of the potential well and of constraints put on the wave function outside the classically permitted region, in the `observational limit' to infinity.\footnote{Based on the continuity of the wave function of a free particle at the boundaries $x= \pm \frac{a}{2}$ of a potential well $V$ we get the equations $\psi(\frac{a}{2})= \psi(-\frac{a}{2})$ and $\psi'(\frac{a}{2})= \psi'(-\frac{a}{2})$. For $x< -\frac{a}{2}\;$ or $x> \frac{a}{2}$, in the limit at infinity it must hold that ${\mathop{\mathrm{lim}}\limits_{x\longrightarrow \pm \infty}}\;\Psi(x)=0$ for the wave function $\Psi$ of a free particle of energy $E$ (with $E< V$), which in this region takes the form of a descending exponential function $\psi(x)=C \exp(k_{1}x) + D \exp(-k_{1}x)$ (see \cite{Mess} or any other basic {\bf QM} textbook).} Both constraints underlie an observational capacity linked at least indirectly to a subjective notion of temporal continuum. In that sense, the condition of continuity at $x= \pm \frac{a}{2}$ of the wave function and its derivative implies the underlying existence of a continuous substratum of internal time providing a continuous domain for the particle state function, whereas the normalization condition ${\mathop{\mathrm{lim}}\limits_{x\longrightarrow \pm \infty}}\;\Psi(x)=0$, which is in accord with the classical intuition that the formal representation of the physical state of a free particle should approach zero in the infinite limit, is a classical limit equation presupposing a continuous space-time substructure. It seems that a continuous substratum of time-consciousness, which is also asserted (the primordial intuition of mathematics) in a non-standard notion of mathematical Continuum,\footnote{For a brief survey of L.E.J. Brouwer's and H. Weyl's modelization of mathematical Continuum after the phenomenological Continuum I refer to M.
Van Atten {\it et al.}'s work in \cite{Van Att}.} should be implicitly assumed in setting the classical limit equations ${\mathop{\mathrm{lim}}\limits_{x\longrightarrow \pm \infty}}\;\Psi(x)=0$ and the equations of continuity of the wave function $\Psi$ at the boundaries $x= \pm \frac{a}{2}$. To sum up, the discrete eigenvalues of bound states of a quantum system are, in mathematical formulation, partly an indirect consequence of classical limit and continuity assumptions against an underlying substratum of impredicative continuum of propagation. An objectivated, time-fulfilled Continuum ({\it erf\"{u}llte Kontinuum}) must also be presupposed in the phase of second (or field) quantization in the relativistic version of Quantum Mechanics, where single-particle wave functions of the classical version are transformed into quantum field operators on quantum states defined at any space-time point. This is basically implemented by an extension of the Lagrangian formalism to field equations. To demonstrate the underlying presence of this kind of impredicative continuous spatio-temporal substratum in the mathematical theory of quantum mechanics I refer, in the following subsection, to the well-known Bohm-Aharonov effect. \subsection{The case of the Bohm-Aharonov effect}\label{subsec2} The main purpose of my reference to the Bohm-Aharonov effect is to show that certain irregularities on the observational-physical level are reduced, in the mathematical formalism, to special topological properties of the relevant configuration space. In the specific effect the irregularity has to do with the presence of a solenoid causing a shift in the interference pattern of a double slit in the notable absence of an external magnetic field. Moreover, the physical effect observed, which is the change in phase difference in the electron interference pattern $\Delta\delta=\frac{e}{\hbar}\int \mbox{curl}A\;dS$, depends only on $\mbox{curl}A$\footnote{The magnitude $A$ is the vector potential which in classical physics, as is well known, is linked to the magnetic induction $B$ by the formula $B=\mbox{curl}A$.} so that it could be deduced that an electron is influenced by fields which are non-zero only in regions inaccessible to it. In formal terms, this amounts to a non-locality of the integral $\oint A\,dr$. In short we could say that the Bohm-Aharonov effect is due to the non-trivial topology of the vacuum (in this particular case the space outside the solenoid) and the fact that electrodynamics is a gauge theory (\cite{Ryd}, p. 101). Being a bit more specific, without intending to enter into the details of the experimental context, the existence of the Bohm-Aharonov effect is essentially translatable to a topological situation where the configuration space of the null field is a plane with a hole in it, that is, a space with the homotopy type of the non-simply connected circle $S^{1}$. In further mathematical elaboration, this generates a many-valued gauge function $x$ mapping the group space $S^{1}$ onto the configuration space of the experiment $S^{1}\times R$, such that not all such $x$ are deformable to a constant gauge function ($x=\mbox{const}$); were they all so deformable, we would have $A_{\mu}=0$ and no Bohm-Aharonov effect (\cite{Ryd}, p. 105). In mathematical formalism the function $x$ such that $A=\nabla x$ turns out to be a many-valued function and this becomes possible since the space in which it is defined is not simply connected.
That is, the group space of the gauge group of electromagnetism $U(1)$ is the non-simply connected circle $S^{1}$, where, roughly speaking, a non-simply connected space is one in which not all curves may be continuously shrunk to a point. If $x$ were single-valued, then $B=\mbox{curl}A=\mbox{curl}\nabla x\equiv 0$ everywhere, so there would be no magnetic flux at all and consequently no physical effect, taking into account that $\Delta\delta=\frac{e}{\hbar}\Phi$. In view of our previous discussion, we note in this specific quantum mechanical experiment a `transformation' of the irregular observational characteristics of the quantum phenomenon into peculiarities in the topological texture of a spatiotemporal continuous substratum; in the particular case the peculiarity lies in the non-simply connected character of the configuration space of the experimental context. But, in generating topological properties leading to certain discontinuities in configuration space one must assume, prior to the assumption of discontinuity gaps in topological structure, the constancy of an underlying spatiotemporal continuum across time, which can in turn reduce to the constancy of a fulfilled time-consciousness self-constituted as a continuous unity `bridging', in effect, the context of an experimental preparation with that of measurement. \subsection{Interpreting quantum non-separability}\label{subsec3} In quantum mechanical theory quantum non-separability arises first as a result of the principle of superposition of states and second from the impossibility of providing, given a compound system $S$ and its corresponding Hilbert space $H$, a decomposition of it into a tensor product $H=H_{1}\otimes H_{2}\otimes ...\otimes H_{N}$ of the subsystem spaces $H_{i}$ such that an observable $A$ of $S$ can be expressed in the canonical form $A=A_{1}\otimes A_{2}\otimes ... \otimes A_{N}$ of suitable observables of the subsystems $S_{i}$. Formally this is a result of the particular feature of the tensor product that it is not a restriction of the topological product $H=H_{1}\times H_{2}\times ...\times H_{N}$ but includes it as a proper subset. Given that in quantum mechanical theory there are no reasonable criteria that would guarantee the existence (and uniqueness) of such a tensor product decomposition of the whole system, the question is how we could possibly derive it and on what terms on an operational level.\footnote{A prototype of an EPR-correlated system experimentally confirmed is the compound system $S$ of spin-singlet pairs. It consists of a pair $(S_{1}, S_{2})$ of spin $\frac{1}{2}$ particles in the singlet state $$ W= \frac{1}{\sqrt{2}}\;\{\mid \psi_{+}> \otimes \mid\phi_{-}> - \mid \psi_{-}> \otimes \mid \phi_{+}>\},$$ where $\{\mid \psi_{\pm}>\}$ and $\{\mid \phi_{\pm}>\}$ are orthonormal bases of the two-dimensional Hilbert spaces $H_{1}$ and $H_{2}$ associated with $S_{1}$ and $S_{2}$ respectively. In such a situation, it is theoretically predicted and experimentally confirmed that the spin components of $S_{1}$ and $S_{2}$ always have opposite spin orientations.} V. Karakostas, for instance, discusses in \cite{Karak} and \cite{Karak1} the question of non-separability from the point of view of Active Scientific Realism as presupposing the feasibility of the kinematical independence between a component subsystem of interest and an appropriate measuring system including its environment; it presupposes, in general, the separation between the observer and the observed.
Taking the physical world as an unbroken whole, we have to separate it, to perform a breakdown of the entanglement of subsystems. In what is called a Heisenberg cut, we have to decompose the compound entangled system into interacting but disentangled components, that is, into measured objects on the one hand and measuring systems (uncorrelated observers in a broad sense) on the other, with no (or insignificantly so) holistic correlations among them. By means of the Heisenberg cut, well-defined separate objects can be generated in their contextual environments, described in terms of a process of projecting the holistic non-Boolean domain of entangled quantum correlations into a meta-contextual Boolean frame that breaks the wholeness of nature by means of an effective participancy in the physical world of a knowing/intentional subject (\cite{Karak}, pp. 300, 303). In fact, the notion of an effective participancy of a knowing/intentional subject in the physical world seems to imply the Aristotelian idea of {\it potentia} since, on a quantum level, for any effective observer inside the Life-World there should be two categories of entities: those posterior to his knowing/intentional acts, which, as already pointed out, he has some inherent mode of recognizing as well-defined objects, and those prior to his purely intentional acts, which should by necessity be for him mere potentialities; in that sense, ``{\small a quantum object exists, independently of any operational procedures, only in the sense of `potentiality', namely, as being characterized by a set of potentially possible values for its various physical quantities that are actualized when the object is interacting with its environment or a pertinent experimental context}" (\cite{Karak1}, p. 290). In view of the description of the relation between a knowing subject and an object of his intentionality (in terms of noetical-noematical constitution), offered mainly in subsection 3.1, we may argue that there exists a certain convergence of the interpretational content of phenomenological analysis with the positions of Active Scientific Realism\footnote{In a certain sense this approach is related to H. Everett's `Relative State Interpretation' of Quantum Theory, e.g. by means of a decoupling of world components $\psi^{(R)}$, $\psi^{(L)}$ of a certain superposition state $\psi(t)= e^{iHt} \phi (\varphi_{R}\pm \varphi_{L})$ corresponding to a localization of consciousness not only in space and time but also in certain Hilbert space components (see, for example, \cite{Zeh}, pp. 73-74).} inasmuch as: The implementation of the Heisenberg cut can be taken in a fundamental sense as presupposing a notion of co-existence and also an idea of separation, in a domain of intentional `observation', between a consciousness intentionally directed to its object and the object in-itself as a direct and unambiguous presence in front of the intentionality of consciousness. By applying his intentionality a knowing subject creates a particular context to inquire, e.g., into the `hidden status' of an entangled quantum state in the following two stages: 1) on the noetical level, by apprehending a sensible sign (of the measuring apparatus) as such, in the sense that he cannot but apprehend it as a sign distinguishable from any other possible sign in the protention of his intentionality (cf. \cite{Hu5}, p.
8); at this stage he has already lost his claim to access to the inner reality of the entangled state, for he noetically apprehends what he apprehends in the `physicalistic language' of the apparatus, and 2) thereafter he constitutes it as a noematical object immanent to his consciousness, in the modes already described. Moreover, I feel that the introduction of the effective presence of a knowing/intentional subject in the Life-World puts these two interpretations on a close footing with respect to the Aristotelian notion of potentiality, as they seem to somehow weaken the vaguely metaphysical character of this principle exactly by the introduction of an intentional/constituting subject as part of the Life-world. So, from a phenomenological point of view, a World in which pre-predicative structures (i.e. intentionality structures) linked to the presence of an intentional/constituting subject determine by `anticipation' actual predicated instances may be defined as a domain of real possibility anterior to actuality. This seems partly to eliminate the vague ontological status - not to say purely metaphysical - of Aristotelian potentialities, for it substitutes for the notion of a first {\it entelechy}, reached necessarily by regression ad infinitum of all classes of potentialities, the notion of at least one constituting subject in a pre-phenomenological World. Connected to the view above is the assertion that the inherently probabilistic nature of Quantum Mechanics may be interpreted as due to the irretrievable loss of information caused by the cut of a quantum non-separable whole in the measuring process. This means that, in view of the reduction to the intentionalities of a subject performing a quantum experiment, certain potentialities of a quantum whole are realized whereas others are not on the level of noetical apprehension, and this is what can, in principle, be asserted for any particular contextual experimental frame. From this aspect, the principle of actualization put forward by R. Omn\`{e}s as an additional external rule, not emerging from the internal structure of {\bf QM}, to postulate the passing from phenomena to facts, and used ``{\small merely as a licence to use consistent logic to reason from present brute experience}" (\cite{Far}, p. 1335), leads at least indirectly to an effective participancy of a knowing/intentional subject. In the phenomenological approach, as the knowing/intentional subject, acting as a constituting factor, transposes the pre-phenomenological `unity' of fundamental experience to the a-thematic, impredicative unity of its self-constituted flux of consciousness (\cite{Hu2}, pp. 283, 295), it looks as if this underlying impredicative spatio-temporality should bear its `imprint' on the interpretation of the wholeness of a quantum non-separable state, standing as an undissectable whole and a limit to a complete scientific cognizance of physical reality.
A prime reason for this limitation may lie in the fact that we lack any possible way to `go deeper' than intentionality and consequently the analytical means to fully describe the inner time of an entangled state before or exactly at the stage of noetical apprehension; more generally it seems that we lack the means to unconditionally approach the temporality of the pre-phenomenological World (the World before the phenomenological reduction of the constituting {\it Ich}) which in Husserl's writings is presupposed as the constant synthetic unity of every possible experience and also the common denominator in terms of substance of all beings in the World. This phenomenological `incompleteness' might somehow account for the inherent impossibility of providing a complete description of the World as a whole by means of a formal and logically consistent theory that would also include its universe (including the knowing/intentional subject) as its own object, just as any language of an axiomatical system of mathematics capable of expressing at least elementary arithmetic cannot but eventually engender antinomies or paradoxes (cf. G\"{o}del's First Incompleteness Theorem), especially in relation to self-referential descriptions. In the quantum world we could claim that this runs parallel to the example of von Neumann's account of quantum measurement that leads to an infinite regressional sequence of observing observers (\cite{Karak}, p. 306). In this connection, I refer to M.L. dalla Chiara's view of the measurement problem of quantum mechanics as a characteristic question of the semantical closure of a theory, in other words as to ``{\small what extent a consistent theory (in this case {\bf QM}) can be closed with respect to the objects and the concepts which are described and expressed in its metatheory}". According to dalla Chiara, quantum mechanical theory, as a consistent theory satisfying some standard formal requirements, turns out to be subject to certain limitations, due to purely logical reasons, concerning its capacity to completely describe and express certain physical objects and concepts. Nevertheless, even if a contradiction produced in the metatheory of {\bf QM} can be overcome on purely logical grounds (linked to similar limitative results on the consistency of axiomatical systems in set theory), namely that ``{\small any apparatus which realizes the reduction of the wave function is necessarily only a metatheoretical object}" (\cite{Chiar}, p. 338), the question remains open, in my view, of providing a consistent and complete metatheoretical description as to what `happens' in physical state terms in-between the experimental preparation of a compound system such as $s\otimes \mathcal{Q}$ and the time of measurement corresponding to the collapse of the wave function (where $s$ is a physical state at time $t$ and $\mathcal{Q}$ a measuring apparatus identified with a Boolean-minded observer assigning truth values to non-Boolean quantum substructures). The jump of truth values in the process of measurement, which is formally the result of the absence of an isomorphism between Boolean and non-Boolean structures - assuming that a quantum object, considered as an objective existence, is the non-distributive lattice of its properties - forces upon a Boolean observer the need for the existence of an objective time in which he must `move' (\cite{Grib}, p. 2396). This question is also linked with J.
von Neumann's Projection Postulate (or `The Reduction of the Wave Function' postulate) as it implicitly establishes the necessity of a self-constituting time flux by assigning to the mathematical translation $\tau(s(t))$ of a physical state $s(t)$ at time $t$ the same eigenvector $\psi_{\kappa}$ as for the measured quantity $Q_{i}$ of the state $s(t)$ at time $t_{1}$ soon after the measurement.\footnote{By this postulate we get as a result of measurement the interval $r_{\kappa}\pm \epsilon_{Q_{i}}$, where $r_{\kappa}$ is an eigenvalue of the mathematical form of $Q_{i}$ and $\psi_{\kappa}$ its corresponding eigenvector (\cite{Chiar}, p. 344).} As a matter of fact, even if we assume von Neumann's Projection Postulate or van Fraassen's modal interpretation of Quantum Mechanics as `external' metatheoretical conditions in a purely logical way, we cannot be led by any linguistic means to a complete description of the `physical change' that takes place during the measurement process in the compound system `system + apparatus'. This raises again the question of a self-constituting time flux of consciousness and the constitution of objects in it as noematical correlates of hyletical-noetical moments of an outward directed intentionality. Closing the section, I turn again to the fundamental irreducibilities which in my view shape in an essential way our observational frame in an intersubjective world of an unbounded horizon of events: On the one hand, intentionalities of an {\it a priori} character directed on the lowest level of primordial experience to individuals-as such, transposed then with their noematical correlates as immanences of the flux of each subject's consciousness. On the other, the intuition of continuous unity as a substratum divested of any quality on which to constitute and deliver a meaning to well-defined noematical objects, described deeply enough as an impredicative self-constituting unity of the flux of consciousness leading ultimately to a transcendental ego of consciousness (see \cite{Hu3}, pp. 97-99). Putting it in more intuitive terms, just as it is impossible to reduce the mental process, by which we may abstract from an original impression in immediate awareness evidently distinct from any other in the temporal flow, to anything more fundamental in noetical apprehension, it is equally impossible to capture what is constituted as the unity of a whole in consciousness by means of the former activity. \section{Observation in the language of formal systems. Where is the irreducibility and where the transcendence?} As the main purpose of this article was to discuss the limits of formal languages with respect to the notion of observation, I naturally sought to reach the most fundamental level of observation beyond the limits of the common intuition of this notion. In doing so, I also took into account the claim, which was E. Husserl's belief, that mathematical objects are special cases of perceptual objects, leaving aside any counter-arguments which are nevertheless of a rather artificial nature, e.g. whether the mathematical object $\{\emptyset\}$ should also be considered a perceptual object.
My theoretical standpoint, linking fundamental observation to a phenomenology of constitution, put under the same perspective the mental process of formation of mathematical objects as syntactical atoms corresponding to `state-of-things' in a formal-ontological environment and the process of constitution of quantum entities as well defined noematical objects in consciousness based on their former intentional apprehension in the physical world.\footnote{My approach in this paper is not meant to be a transcendental deduction of Quantum Mechanics by means of Phenomenology for, notwithstanding my claim to providing some clues on a possible phenomenological interpretation of {\bf QM} on the level of `observation' and in metatheory, there are certain constants (the Planck constant, for instance) or symmetry principles of {\bf QM} about which it is still doubtful whether they are of a purely empirical objective character or possibly susceptible of a phenomenological interpretation. Nevertheless, for the Planck constant, for example, there is a view of it as a not purely extrinsic datum but as arising from the generic situation of mankind, which in my opinion leads indirectly to a notion of intersubjective constitution in the Life-World.} It is obvious that in such an approach we should regard mathematics as divested of any platonist content and in a certain sense devoid of the conveniences of Cantorian-type actual infinity. In this connection, mathematical theories of an alternative nonstandard character, especially those which incorporate, e.g., a notion of natural infinity as an open-ended shift of classes of hereditarily finite `observations' (for instance, Alternative Set Theory and Hyperfinite Set Theory), seem more adapted to my view of mathematical activity as a special kind of abstraction in an intersubjective and interactive field of events of a `local' but ever receding horizon. In this way, when we talk of an object as an individual-as such, it is in a fundamental and essential way the same whether it is a syntactical atom of a stratified mathematical formula corresponding to a unique `state-of-things' or whether it is a quantum entity of a certain non-separable state apprehended by a disentangled interaction and transformed as a reidentifying-noematical object by a knowing/intentional subject using a measuring apparatus as an extension of his consciousness. As long as they can be apprehended as distinct from any other possible intentional apprehensions in the process of constitution they can both be classified as individuals, and if in addition they can be constituted as bearers of an otherwise undefined sense of `order' to any other such apprehension they can be classified as individual-substrates, bearers by essence of a predicative environment appropriate to them ({\it Kernform}). Is there a way to penetrate more deeply, to open and `read' the inner content of those individuals, in a word, to reach a deeper level of apprehension? The answer seems to be negative and, in addition, not in the sense of a contingent state of affairs but of a generic state of affairs. The main indication is our own intuition in the direct givenness of the intentional objects of our experience, and this is why the intentionality of primordial experience is described exactly as intentionality to intentional objects, i.e. to individuals-as such. I would call it a most fundamental irreducibility in relation to human perception, though it is at the same time rather `friendly' and co-operates easily with our other mental faculties.
It is thanks to this fundamental intuition that we can comprehend and handle almost anything, from sequences of natural numbers to the capacity to shape images of various distinct particle trajectories in a bubble chamber. By contrast, there is another irreducibility which, though it is our most common intuition, proves most difficult to comprehend, let alone describe, by formal first-order linguistic means. This is the intuition of Continuum, which includes everything from the common intuition of our existence as a continuous state of events, to the intuition of a curve on a piece of paper as a continuous set of black points, to the intuition of subatomic events as taking place against a time-fulfilled continuous background. What is it that makes possible this coherent unity in constitution, which is reflected as a formalized continuous substratum of the mathematical metatheory of `observations' irrespective of whether they refer to a quantum mechanical context or to a context of common physical intuition? Husserl made a clear distinction between phenomenological time, the homogeneous form of all living experiences in the flux of consciousness, and objective or scientific time. By necessity every real experience is a durating experience, a fact extracted by pure intuition of its enactment, and it is constituted by a certain {\it a priori} mode as an infinitely fulfilled continuum of durations. Going deeper into the ontology of the phenomenological Continuum, Husserl encountered grave difficulties in comprehending it, as he himself professed in {\it Ding und Raum}, and he left it even in his later writings as a rather obscure notion leading him to a transcendental pure ego of consciousness. This ego as an absolute and impredicative subjectivity is only accessible by its objectivation as a `mirror image' of itself, and it is only in this way possible to reflect on the Continuum as an objective whole and also on the notion of an unbounded infinity in a Kantian sense (\cite{Hu1}, p. 331). Essentially Husserl eradicated the transcendence of the world of the purest idealist doctrine only to introduce it through the back door by means of a personified pure ego. The matter, in the last count, is not so much whether one should in principle accept a transcendence of the intuitive and consequently of the mathematical Continuum by means of a phenomenological reduction or by some other interpretational scheme, as the hard fact that any attempt to describe the Continuum by the first-order linguistic means of a formal axiomatical system inevitably leads to circularities in definitions or entails some form of {\it ad hoc} axiomatization. This is presumably reflected in the independence of actual infinity principles such as the {\it Continuum Hypothesis} and the {\it Axiom of Choice} from the other axioms of Zermelo-Fraenkel Set Theory, and to some extent in the {\it ad hoc} extension or prolongation principles axiomatizing the embedding of standard structures into saturated nonstandard domains (see \cite{Lev}). There is an ongoing theoretical discussion on the possibility of a non-analytical character of the {\it Continuum Hypothesis} question in the foundation of mathematics and on this account we refer to S. Feferman's thesis in \cite{Fef} that ``{\small the {\it Continuum Hypothesis} is an inherently vague problem that no new axiom will settle in a convincingly definite way}".
It seems worthwhile to close by mentioning a recent neuroscientific approach to the constitution of unitary and coherent experiences out of independent bits of data, in which the processes of predication and identity between different occurrences of variables (viewed as individuals) are reflections of the same underlying conceptual process in the brain. In \cite{Piet} this process is described as diagrammatic and iconic rather than symbolic, in a certain sense as a neuronal continuous connection between differently localized predicates, which also involves {\it Gestaltpsychological} notions substituting for traditional symbolic operations. But in invoking some sort of continuous or topological representation in the context of neuronal processes described by a first-order formalism, one might get trapped once more in the inherent impredicativity of the notion of Continuum. It is like swinging to the other end of the pendulum, marking the following irreducibilities as its extreme points: the intentionality to individuals-as such on the one hand and the intuition of the impredicative Continuum on the other. The question of the nature of Continuum, as well as the epistemic content of the aprioric principle of intentionality, might be the object of yet deeper research linking such diverse fields as mathematics, logic, quantum theory (including its offspring, quantum gravity), cognition theory and the neurophysiology of the brain. Or they might in principle be elusive and beyond any reach on the grounds of the circular question: On what terms can mind capture the mind? \end{document}
math
\begin{document} \title{\LARGE \bf Learning Quasi-Kronecker Product Graphical Models} \thispagestyle{empty} \pagestyle{empty} \begin{abstract} We consider the problem of learning graphical models where the support of the concentration matrix can be decomposed as a Kronecker product. We propose a method based on the Bayesian hierarchical modeling approach. Thanks to the particular structure of the graph, we use a number of hyperparameters which is small compared to the number of nodes in the graphical model. In this way, we avoid overfitting in the estimation of the hyperparameters. Finally, we test the effectiveness of the proposed method by a numerical example. \end{abstract} \section{INTRODUCTION}\label{sec:intro} Many modern applications are characterized by high-dimensional data sets from which it is important to discover the meaningful interactions among the variables rather than to find an accurate model. A powerful tool to analyze these interrelations is given by graphical models (i.e. Markov networks), \cite{LAURITZEN_1996,speed1986gaussian,8378239}. The simplest version of the latter is constituted by a zero mean Gaussian random vector to which we attach an undirected graph: each node corresponds to a component of the random vector and there is an edge between two nodes if and only if the corresponding components are conditionally dependent given the others. In these applications, there is a large interest in learning sparse graphical models (i.e. graphs with few edges) from data; indeed, these models are characterized by few conditional interdependence relations among the components. Interestingly, a sparse graph corresponds to a covariance matrix whose inverse, the so-called concentration matrix, is sparse. The problem of learning sparse graphical models, sometimes called the covariance selection problem, can be faced by using regularization techniques, \cite{huang2006covariance,banerjee2008model,friedman2008sparse,d2008first}. For instance, \cite{banerjee2008model} proposed a regularized maximum-likelihood (ML) estimator for the covariance matrix where an $\ell_1$ penalty on the concentration matrix is considered. Since the $\ell_1$ norm penalty induces sparsity, the estimated covariance matrix will have a sparse inverse. It is worth noting that these approaches can be extended to dynamic graphical models, \cite{SONGSIRI_TOP_SEL_2010,LATENTG}, as well as factor models \cite{valeCDC,ciccone2017factor,7331087,8264253}. These regularized estimators are known to be sensitive to the choice of the regularization parameter, i.e. the weight on the $\ell_1$ penalty, which is typically selected by cross-validation or theoretical derivation. To overcome this issue, a Bayesian hierarchical modeling approach has been considered, \cite{asadi2009map}. Here, the concentration matrix is modeled as a random matrix whose prior is characterized by a regularization parameter (called a hyperparameter). Then, the hyperparameter as well as the covariance matrix are jointly estimated. Since the $\ell_1$ norm shrinks all the entries toward zero, and thus introduces a bias, a further improvement is to consider a weighted $\ell_1$ norm, see \cite{scheinberg2010sparse}, where the hyperparameter is a matrix whose dimension (in principle) coincides with the number of nodes in the graph. On the other hand, the introduction of a hyperparameter with many variables could lead to overfitting in the estimation of the hyperparameter matrix.
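As a point of reference for the methods recalled above, the following is a minimal sketch (our own illustration, not code from the cited works) of the $\ell_1$-penalized ML estimator, written with the cvxpy modeling library; the function name and the fixed weight \texttt{gamma} are placeholders, the latter playing the role of the regularization parameter discussed in the text.
\begin{verbatim}
# Minimal sketch (assuming cvxpy and numpy are available) of the l1-penalized
# ML estimator: minimize -N/2 log|S| + N/2 tr(S Sigma_hat) + gamma * sum |s_jk|.
import numpy as np
import cvxpy as cp

def l1_penalized_ml(Sigma_hat, gamma, N):
    m = Sigma_hat.shape[0]
    S = cp.Variable((m, m), symmetric=True)
    objective = cp.Minimize(-0.5 * N * cp.log_det(S)
                            + 0.5 * N * cp.trace(S @ Sigma_hat)
                            + gamma * cp.sum(cp.abs(S)))
    cp.Problem(objective, [S >> 1e-8 * np.eye(m)]).solve()
    return S.value  # estimated concentration matrix

# illustrative use on synthetic data
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 6))
S_hat = l1_penalized_ml(X.T @ X / 500, gamma=20.0, N=500)
\end{verbatim}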
An important class of graphical models is represented by the so-called Kronecker Product (KP) graphical models, \cite{tsiligkaridis2013covariance,4392825,dutilleul1999mle,tsiligkaridis2013convergence,SINQUIN201714131}, wherein it is required that the concentration matrix can be decomposed as a Kronecker product. KP graphical models find application in many fields: spatiotemporal MEG/EEG modeling \cite{bijma2005spatiotemporal}; recommendation systems like Netflix and gene expression analysis, \cite{allen2010transposable}; face recognition analysis \cite{zhang2010learning}. In these applications the most important feature is the graphical structure, i.e. the fact that the support of the concentration matrix can be decomposed as a Kronecker product. The contribution of the present paper is to address the problem of learning graphical models where the support of the concentration matrix can be decomposed as a Kronecker product. We call such models Quasi-Kronecker Product (QKP) graphical models. Note that the assumption that the support can be decomposed as a Kronecker product does not imply that the concentration matrix does. Therefore, QKP graphical models can be understood as a weaker version of KP graphical models, making the former class less restrictive than the latter. Adopting the Bayesian hierarchical modeling approach, in the spirit of \cite{scheinberg2010sparse}, we introduce two hyperparameter matrices whose total number of variables is small compared to the number of nodes in the graph. In this way, we avoid overfitting in the estimation of the hyperparameters. The paper is outlined as follows. In Section \ref{sec:graph_model} we introduce graphical models and the problem of graphical model selection. In Section \ref{sec:QK_graph} we introduce QKP graphical models. In Section \ref{sec:learning} we propose a Bayesian procedure to learn QKP graphical models from data, while Section \ref{sec:init} is devoted to the initialization of the procedure. In Section \ref{sec:sim} we present a numerical example to show the effectiveness of the proposed method. Finally, Section \ref{sec:concl} draws the conclusions. We warn the reader that the present paper only reports some preliminary results regarding the Bayesian estimation of QKP graphical models. In particular, all the proofs and most of the technical assumptions needed therein are omitted and will be published in a subsequent work. {\em Notation}: Given a symmetric matrix $S$, we write $S\succ 0$ ($S\succeq 0$) if $S$ is positive (semi-)definite. $x\sim \mathcal N(\mu,\Sigma)$ means $x$ is a Gaussian random vector with mean $\mu$ and covariance matrix $\Sigma$. $\mathbb{E}[\cdot]$ denotes the expectation operator. Given two functions $f(x)$ and $g(x)$, $f\propto g$ means that the argmins of $f$ and $g$ with respect to $x$ coincide. Given a matrix $S$ of dimension $m\times m$, $s_{jk}$ denotes its entry in position $(j,k)$. Given a matrix $S$ of dimension $m_1m_2\times m_1m_2$, $s_{jk,il}$ denotes its entry in position $((j-1)m_2+i,(k-1)m_2+l)$. Given a matrix $S$, $\mathrm{vec}(S)$ denotes the vectorization of $S$. Given a matrix $S$ with positive entries, $\log(S)$ denotes the matrix with entry $\log(s_{jk})$ in position $(j,k)$. Given a matrix $S$, $\exp(S)$ is the matrix with entry $\exp(s_{jk})$ in position $(j,k)$ and $\mathrm{abs}(S)$ denotes the matrix with entry $|s_{jk}|$ in position $(j,k)$. $\mathbf{1}_m$ denotes the $m$-dimensional vector of ones.
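To make the blocked index convention above concrete, the following small numpy check (our own illustration, not part of the paper) verifies that the entry $s_{jk,il}$ of an $m_1m_2\times m_1m_2$ matrix sits in row $(j-1)m_2+i$ and column $(k-1)m_2+l$, and that for a support of the form $E_1\otimes E_2$ this entry is simply the product of the $(j,k)$ entry of $E_1$ and the $(i,l)$ entry of $E_2$.
\begin{verbatim}
# Illustration of the index convention s_{jk,il} (1-based indices in the text).
import numpy as np

m1, m2 = 3, 4

def entry(S, j, k, i, l, m2):
    # 0-based numpy indexing of the (jk,il) entry
    return S[(j - 1) * m2 + (i - 1), (k - 1) * m2 + (l - 1)]

rng = np.random.default_rng(0)
E1 = rng.integers(0, 2, size=(m1, m1))   # hypothetical binary supports
E2 = rng.integers(0, 2, size=(m2, m2))
E = np.kron(E1, E2)                      # Kronecker-product support
assert entry(E, 2, 3, 1, 4, m2) == E1[1, 2] * E2[0, 3]
\end{verbatim}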
\section{GRAPHICAL MODEL SELECTION} \label{sec:graph_model} Let $x=[\, x_1\ldots x_m\,]^T$ be a zero mean Gaussian random vector taking values in $\mathbb{R}^m$ and with covariance matrix $\Sigma\succ 0$. Thus, this random vector is completely characterized by $\Sigma$. We can attach to $x$ an undirected graph $\mathcal{G}(V,\Omega)$ where $V$ denotes the set of its nodes, and $\Omega$ denotes the set of its edges. More precisely, each node corresponds to a component $x_j$, $j=1\ldots m$, of $x$ and there is an edge between nodes $j,k$ if and only if $x_j$ and $x_k$ are conditionally dependent given the other components, or equivalently, for any $j\neq k$: \al{x_j\, \bot \, x_k\,|\,x_l, \, l\neq j,k\, \, \iff \, (j,k)\notin \Omega.} Thus, $\Omega$ describes the conditionally dependent pairs of $x$. The graph $\mathcal{G}$ is referred to as the graphical model of $x$; an example is provided in Figure \ref{fig_ex_grahp}. \begin{figure} \caption{Example of graphical model of $x=[\,x_1\,x_2\,x_3\,x_4\,]^T$ where $(x_1,x_3)$, $(x_2,x_3)$, $(x_2,x_4)$ and $(x_3,x_4)$ are the pairs of components of $x$ which are conditionally dependent given the others.} \label{fig_ex_grahp} \end{figure} Dempster proved that the conditional independence relations are given by the concentration matrix of $x$, i.e. $S:=\Sigma^{-1}$, \cite{DEMPSTER_1972}: \al{x_j\, \bot \, x_k\,|\,x_l, \, l\neq j,k\, \, \iff \, s_{jk}=0.} Accordingly, sparsity of $S$, i.e. $S$ with many entries equal to zero, reflects the fact that the graphical model $\mathcal{G}$ of $x$ is sparse, i.e. $\mathcal{G}$ has few edges. In many applications, it is required to learn a sparse graphical model $\mathcal{G}$ from data. More precisely, given a sequence of data $\mathrm x^N:=\{\,\mathrm x_1^T\ldots \mathrm x_N^T\,\}$ generated by $x$, find a graphical model $\mathcal{G}(V,\Omega)$ for $x$ where $\mathcal{G}$ is sparse. The simplest idea is to compute the sample covariance from the data \al{\hat \Sigma=\frac{1}{N}\sum_{k=1}^N \mathrm x_k \mathrm x_k^T;} then the graphical model is given by the support of $\hat \Sigma^{-1}$. However, the resulting graph is full even when the underlying system is well described by a sparse graphical model. In \cite{asadi2009map}, a procedure based on a Bayesian hierarchical model has been proposed. More precisely, the entries of $S$ are assumed to be i.i.d. and Laplace distributed with hyperparameter $\gamma\geq 0$, i.e. the probability density function (pdf) of $s_{jk}$ is $p(s_{jk})=\gamma /2\exp\left(-\gamma|s_{jk}|\right)$. The resulting procedure is described in Algorithm \ref{algoS}. The main drawback of this approach is that it assigns a priori the same level of sparsity to each entry of $S$. This method has been extended to the case wherein only the entries in the same column of $S$ have the same distribution, \cite{scheinberg2010sparse}, allowing different levels of sparsity in the prior. A further extension is to assume that all the entries of $S$ may be distributed in a different way, but respecting the symmetry, that is $p(s_{jk})=p(s_{kj})=\gamma_{jk} /2\exp\left(-\gamma_{jk}|s_{jk}|\right)$ with $\gamma_{jk}\geq 0$. Using arguments similar to the ones in \cite{scheinberg2010sparse}, it is not difficult to derive the procedure described in Algorithm \ref{algoS2}.
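The sketch below (again our own illustration, not the code of \cite{asadi2009map} or \cite{scheinberg2010sparse}) shows the kind of alternating scheme that Algorithms \ref{algoS} and \ref{algoS2} implement: a weighted $\ell_1$-penalized ML step followed by a closed-form update of the entrywise hyperparameters; solver, tolerances and names are placeholders.
\begin{verbatim}
# Self-contained sketch of the reweighted scheme behind Algorithm 2:
# alternate a weighted l1-penalized ML step with the hyperparameter update
# gamma_{jk} = 1 / (|s_{jk}| + eps).
import numpy as np
import cvxpy as cp

def weighted_ml_step(Sigma_hat, Gamma, N):
    m = Sigma_hat.shape[0]
    S = cp.Variable((m, m), symmetric=True)
    obj = cp.Minimize(-0.5 * N * cp.log_det(S)
                      + 0.5 * N * cp.trace(S @ Sigma_hat)
                      + cp.sum(cp.multiply(Gamma, cp.abs(S))))
    cp.Problem(obj, [S >> 1e-8 * np.eye(m)]).solve()
    return S.value

def sparse_graphical_model(Sigma_hat, N, eps=1e-3, tol=1e-4, max_iter=50):
    m = Sigma_hat.shape[0]
    Gamma = np.ones((m, m))              # hyperparameter matrix, positive entrywise
    S_old = np.eye(m)
    for _ in range(max_iter):
        S = weighted_ml_step(Sigma_hat, Gamma, N)
        Gamma = 1.0 / (np.abs(S) + eps)  # hyperparameter update of Algorithm 2
        if np.linalg.norm(S - S_old) <= tol:
            break
        S_old = S
    return S
\end{verbatim}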
\begin{algorithm} \caption{Algorithm for sparse graphical models in \cite{asadi2009map}}\label{algoS} \begin{algorithmic}[1] \small \STATE Set $\varepsilon>0$, $\hat \gamma^{(0)}>0$, $\epsilon_{STOP}>0$ \STATE $h\leftarrow 1$ \REPEAT \STATE Solve \al{\footnotesize\hat S^{(h)}&=\underset{S\succ 0}{\, \mathrm{argmin}\,} -\frac{N}{2}\log |S| +\frac{N}{2} \tr(S\hat \Sigma)\nonumber\\ &\hspace{1.3cm}+\hat \gamma^{(h-1)}\sum_{j,k=1}^{m}| s_{jk}|\nonumber} \STATE Update hyperparameter \al{\hat \gamma^{(h)}&=\frac{m^2}{\sum_{j,k=1}^{m} |\hat s_{jk}^{(h)}| +\varepsilon}\nonumber} \STATE $h\leftarrow h+1$ \UNTIL{ $\|\hat S^{(h)}-\hat S^{(h-1)}\|\leq \epsilon_{STOP}$} \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Algorithm for sparse graphical models}\label{algoS2} \begin{algorithmic}[1] \small \STATE Set $\varepsilon>0$, $\hat \Gamma^{(0)}$ an $m\times m$ matrix with positive entries, $\epsilon_{STOP}>0$ \STATE $h\leftarrow 1$ \REPEAT \STATE Solve \al{\footnotesize\hat S^{(h)}&=\underset{S\succ 0}{\, \mathrm{argmin}\,} -\frac{N}{2}\log |S| +\frac{N}{2} \tr(S\hat \Sigma)\nonumber\\ &\hspace{1.3cm}+\sum_{j,k=1}^{m}\hat \gamma^{(h-1)}_{jk}| s_{jk}|\nonumber} \STATE Update hyperparameter \al{\hat \gamma_{jk}^{(h)}&=\frac{1}{|\hat s_{jk}^{(h)}| +\varepsilon}, \; j,k=1\ldots m\nonumber} \STATE $h\leftarrow h+1$ \UNTIL{ $\|\hat S^{(h)}-\hat S^{(h-1)}\|\leq \epsilon_{STOP}$} \end{algorithmic} \end{algorithm} \section{QKP GRAPHICAL MODELS}\label{sec:QK_graph} Consider the undirected graph $\mathcal{G}(V,\Omega)$ and let $m$ denote the number of its nodes. Let $E_\Omega$ be the $m\times m$ binary matrix defined as follows: \al{(E_\Omega)_{jk}= \left\{\begin{array}{cc}1, & \hbox{ if $(j,k)\in\Omega$ } \\ 0, &\hbox{ otherwise} \end{array}\right. .} We say that $\mathcal{G}(V,\Omega)$ is a Kronecker Product graph if there exist two graphs $\mathcal{G}_1(V_1,\Omega_1)$ and $\mathcal{G}_2(V_2,\Omega_2)$ with $m_1$ and $m_2$ nodes, respectively, such that: \al{E_\Omega= E_{\Omega_1}\otimes E_{\Omega_2}} where $m=m_1m_2$. In shorthand notation we will write $\mathcal{G}=\mathcal{G}_1\otimes \mathcal{G}_2$. In practice, in this graph $\mathcal{G}$ we can recognize modules containing $m_2$ nodes sharing the same graphical structure, described by $\Omega_2$; the interaction among those $m_1$ modules is described by $\Omega_1$. An illustrative example is given in Figure \ref{Fig:ex_graph_kron}. \begin{figure*} \caption{Illustrative example of a Kronecker Product graph.} \label{Fig:ex_graph_kron} \end{figure*} Let $x=[\, x_1\ldots x_m\,]^T$ be a zero mean Gaussian random vector taking values in $\mathbb{R}^m$ and with inverse covariance matrix (i.e. concentration matrix) $S\succ 0$ whose support is $E_\Omega=E_{\Omega_1}\otimes E_{\Omega_2}$. We can attach to $x$ a Kronecker Product graph $\mathcal{G}(V,\Omega)=\mathcal{G}_1(V_1,\Omega_1)\otimes \mathcal{G}_2(V_2,\Omega_2)$. Accordingly, $\Omega_1$ characterizes the conditional dependence relations among the modules, while $\Omega_2$ characterizes the recurrent conditional dependence relations among the nodes in each module. This graphical model is referred to as a Quasi-Kronecker Product (QKP) graphical model to distinguish it from the Kronecker Product (KP) graphical model proposed in \cite{tsiligkaridis2013convergence}. Indeed, in the latter both the concentration matrix and its support admit a Kronecker decomposition, i.e. $E_\Omega=E_{\Omega_1}\otimes E_{\Omega_2}$ and $S=S_1\otimes S_2$.
In our graphical model, even if the support of the concentration matrix admits a Kronecker product decomposition, the concentration matrix does not. \section{LEARNING QKP GRAPHICAL MODELS} \label{sec:learning} We address the problem of learning a QKP graphical model $\mathcal{G}=\mathcal{G}_1\otimes \mathcal{G}_2$ from data. In many real applications the observed data are explained by a sparse graphical model because the latter allows a straightforward interpretation of the interaction among the variables involved in the application. Thus, in our case we require that $\mathcal{G}_1$ and $\mathcal{G}_2$ are sparse. The problem can be formalized as follows. \begin{problem}\label{probl_kron}Let $x$ be a zero mean random vector of dimension $m=m_1m_2$, $m_1,m_2\in\mathbb{N}$, with inverse covariance matrix $S$. Given a sequence of data $\mathrm x^N:=\{\,\mathrm x_1^T\ldots \mathrm x_N^T\,\}$ generated by $x$, find a QKP graphical model $\mathcal{G}(V,E)=\mathcal{G}_1(V_1,E_1)\otimes \mathcal{G}_2(V_2,E_2)$ for $x$ where $\mathcal{G}_1$ and $\mathcal{G}_2$ are sparse and have $m_1,m_2$ nodes, respectively.\end{problem} To solve Problem \ref{probl_kron}, we adopt the Bayesian hierarchical modeling approach. $S$ is modeled as a random matrix whose prior depends on the hyperparameters $\Lambda$ and $\Gamma$. $\Lambda$ is an $m_1\times m_1$ symmetric random matrix with nonnegative entries and $\Gamma$ is an $m_2\times m_2$ symmetric random matrix with nonnegative entries. The hyperprior for $\Lambda$ and $\Gamma$ depends on $\varepsilon_1$ and $\varepsilon_2$, respectively. The latter are deterministic positive quantities. We proceed to characterize the Bayesian model in detail. The conditional pdf of $\mathrm x^N$ under the model $x\sim \mathcal N(0,S^{-1})$ is: \al{\label{pdf_Fish}p(\mathrm x^N|S)&=\prod_{k=1}^N\frac{1}{\sqrt{(2\pi)^m |S^{-1}|}} \exp\left(-\frac{1}{2}\mathrm x_k^T S\mathrm x_k \right) \nonumber\\ &\propto |S|^{N/2}\exp\left(-\frac{1}{2}\tr(S\hat \Sigma)\right) } where the neglected terms do not depend on $S$. In what follows we assume that $\hat \Sigma\succ 0$. We model the entries of $S$ as independent random variables, so that the prior for $S$ is: \al{p(S|\Lambda,\Gamma)=\prod_{j,k=1}^{m_1}\prod_{i,l=1}^{m_2} p(s_{jk,il}|\lambda_{jk},\gamma_{il})} where $s_{jk,il}$ is Laplace distributed \al{\label{prima_oss}p(s_{jk,il}|\lambda_{jk},\gamma_{il})=\frac{\lambda_{jk}\gamma_{il}}{2} \exp\left(-\lambda_{jk}\gamma_{il}|s_{jk,il}|\right).} $\Lambda$ and $\Gamma$ are independent random matrices, i.e. $p(\Lambda,\Gamma)=p(\Lambda)p(\Gamma)$. The entries of $\Lambda$ and $\Gamma$ are assumed to be independent \al{p(\Lambda)=\prod_{j,k=1}^{m_1} p(\lambda_{jk}), \; \; p(\Gamma)=\prod_{i,l=1}^{m_2} p(\gamma_{il})} and with exponential distribution \al{\label{seconda_oss} p(\lambda_{jk})=\varepsilon_1 \exp(-\varepsilon_1 \lambda_{jk}),\;\; p(\gamma_{il})=\varepsilon_2 \exp(-\varepsilon_2 \gamma_{il})} with $\varepsilon_1,\varepsilon_2$ deterministic and positive quantities. At this point, some comments regarding the choice of the prior on $S$ and the hyperprior on $\Lambda,\Gamma$ are in order. From (\ref{prima_oss}) it is clear that $s_{jk,il}$ takes values close to zero with high probability if the product $\lambda_{jk}\gamma_{il}$ is large.
Moreover, if $\gamma_{il}$ is very large for some $(i,l)$ and $\lambda_{jk}\geq \epsilon>0$ for all $j,k=1\ldots m_1$, with $\gamma_{il}\epsilon$ large, then $s_{jk,il}$, $j,k=1\ldots m_1$, take values close to zero with high probability. Accordingly, the different modules in the graph will have a similar sparsity pattern with high probability. Hence, prior (\ref{prima_oss}) assigns high probability to QKP graphical models. The hyperprior in (\ref{seconda_oss}) guarantees that $\lambda_{jk}$ and $\gamma_{il}$ diverge with probability zero. As we will see, this assumption guarantees that the optimization procedure that we propose is well-posed. Next, we characterize the maximum a posteriori (MAP) estimator of $S$ (and thus also the MAP estimator of the covariance matrix by the invariance principle). The latter minimizes the negative log-likelihood \al{\ell (\mathrm x^N; S,\Lambda,\Gamma)= -\log p(\mathrm x^N,S, \Lambda,\Gamma)} where $p(\mathrm x^N,S, \Lambda,\Gamma)$ is the joint pdf of $\mathrm x^N$, $S$, $\Lambda$ and $\Gamma$. Note that \al{\label{joint_pdf}p(\mathrm x^N,S, \Lambda,\Gamma)=p(\mathrm x^N|S)p(S|\Lambda,\Gamma)p(\Lambda)p(\Gamma);} in particular, the negative log-likelihood contains the prior (\ref{prima_oss}) inducing sparsity on the intra-group/module structure. Moreover, we have \al{\label{log_joint} \ell( &\mathrm x^N; S, \Lambda,\Gamma):=-\log p(\mathrm x^N|S)-\log p(S|\Lambda,\Gamma)\nonumber\\ & \hspace{0.5cm}-\log p(\Lambda)-\log p(\Gamma) \nonumber\\ &\propto -\frac{N}{2}\log |S| +\frac{N}{2} \tr(S\hat \Sigma)+\sum_{j,k=1}^{m_1}\sum_{i,l=1}^{m_2} \lambda_{jk}\gamma_{il}|s_{jk,il}|\nonumber\\ &\hspace{0.5cm} +\sum_{j,k=1}^{m_1}( \varepsilon_1 \lambda_{jk} -m_2^2\log\lambda_{jk})+\sum_{i,l=1}^{m_2}(\varepsilon_2\gamma_{il}- m_1^2\log\gamma_{il}) } where the neglected terms do not depend on $S$, $\Lambda$ and $\Gamma$. It is clear that the MAP estimator of $S$ depends on $\Lambda$, $\Gamma$, $\varepsilon_1$ and $\varepsilon_2$. In what follows we assume $\varepsilon_1$ and $\varepsilon_2$ fixed. Then, a way to estimate $\Lambda$ and $\Gamma$ from the data is provided by the empirical Bayes approach: $\Lambda$ and $\Gamma$ are given by maximizing the marginal likelihood of $\mathrm x^{N}$, which is obtained by integrating out $S$ in (\ref{joint_pdf}), \cite{rasmussen2004gaussian}. However, it is not easy to find an analytical expression for the marginal likelihood in this case. An alternative simplified approach for optimizing $\Lambda$, $\Gamma$ is the generalized maximum likelihood (GML) method, \cite{zhou1997approximate}. According to this method, $S$, $\Lambda$ and $\Gamma$ jointly minimize (\ref{log_joint}): \al{(\hat S,\hat \Lambda,\hat \Gamma)=&\underset{S,\Lambda,\Gamma}{\, \mathrm{argmin}\,} \ell( \mathrm x^N; S, \Lambda,\Gamma)\nonumber\\ & \hbox{ s.t. } S\succ 0,\; \lambda_{jk}\geq 0,\; \gamma_{il}\geq 0.} Since the joint optimization of the three variables is still a hard problem, we propose an iterative three-step procedure. At the $h$-th iteration we solve the following three optimization problems: \al{\label{P1}\hat S^{(h)}&=\underset{S\succ 0}{\, \mathrm{argmin}\,} \ell( \mathrm x^N; S,\hat \Lambda^{(h-1)},\hat \Gamma^{(h-1)})\\ \label{P2} \hat \Lambda^{(h)}&=\underset{\substack{\Lambda\\ \hbox{\footnotesize s.t.
} \lambda_{jk}\geq 0}}{\, \mathrm{argmin}\,} \ell( \mathrm x^N; \hat S^{(h)}, \Lambda,\hat \Gamma^{(h-1)})\\ \label{P3}\hat \Gamma^{(h)}&=\underset{\substack{\Gamma\\ \hbox{\footnotesize s.t. } \gamma_{il}\geq 0}}{\, \mathrm{argmin}\,} \ell( \mathrm x^N; \hat S^{(h)}, \hat \Lambda^{(h)},\Gamma).} It is possible to prove that Problems (\ref{P1}), (\ref{P2}) and (\ref{P3}) admit a unique solution. The resulting procedure is illustrated in Algorithm \ref{algo}. \begin{algorithm} \caption{Proposed algorithm}\label{algo} \label{algo:RWS} \begin{algorithmic}[1] \small \STATE Set $\varepsilon_1>0$, $\varepsilon_2>0$, $\hat \Lambda^{(0)}$ and $\hat \Gamma^{(0)}$ positive entrywise, $\epsilon_{STOP}>0$ \STATE $h\leftarrow 1$ \REPEAT \STATE Solve \al{\footnotesize\hat S^{(h)}&=\underset{S\succ 0}{\, \mathrm{argmin}\,} -\frac{N}{2}\log |S| +\frac{N}{2} \tr(S\hat \Sigma)\nonumber\\ &\hspace{1.3cm}+\sum_{j,k=1}^{m_1}\sum_{i,l=1}^{m_2} \hat \lambda_{jk}^{(h-1)}\hat \gamma_{il}^{(h-1)}|s_{jk,il}|\nonumber} \STATE Update hyperparameters \al{\hat \lambda_{jk}^{(h)}&=\frac{m_2^2}{\sum_{i,l=1}^{m_2} \hat\gamma_{il}^{(h-1)}|\hat s_{jk,il}^{(h)}| +\varepsilon_1}, \; \; j,k=1\ldots m_1\nonumber\\ \hat \gamma_{il}^{(h)}&=\frac{m_1^2}{\sum_{j,k=1}^{m_1} \hat \lambda_{jk}^{(h)}|\hat s_{jk,il}^{(h)}| +\varepsilon_2},\; \; i,l=1\ldots m_2\nonumber} \STATE $h\leftarrow h+1$ \UNTIL{ $\|\hat S^{(h)}-\hat S^{(h-1)}\|\leq \epsilon_{STOP}$} \end{algorithmic} \end{algorithm} In the proposed algorithm the hyperparameter selection is performed iteratively through Step 5. It is worth noting that Algorithm \ref{algo} is similar to Algorithm \ref{algoS} and Algorithm \ref{algoS2}: the main difference is that in the proposed algorithm we have two types of hyperparameters. As a consequence, in the proposed algorithm we have three optimization steps instead of two. \section{INITIAL CONDITIONS} \label{sec:init} In Algorithm \ref{algo} we have to fix the initial conditions for the hyperparameters, that is $\hat \Lambda^{(0)}$ and $\hat \Gamma^{(0)}$. The idea is to approximate $\hat \Sigma^{-1}$ through a Kronecker product; the two matrices of this product are then used to initialize $\Lambda$ and $\Gamma$. Given $\hat \Sigma$, we want to find $\bar W$ and $\bar Y$ of dimension $m_1\times m_1$ and $m_2\times m_2$, respectively, with positive entries such that $\bar W\otimes \bar Y \approx \mathrm{abs}(\hat \Sigma^{-1})+\epsilon \mathbf{1}_{m_1m_2} \mathbf{1}_{m_1m_2}^T $ where $\epsilon >0$ is chosen sufficiently small. The presence of the term $\epsilon \mathbf{1}_{m_1m_2} \mathbf{1}_{m_1m_2}^T$ allows us to take the entrywise logarithm on both sides, obtaining \al{\label{approx1} W &\otimes \mathbf{1}_{m_2}\mathbf{1}_{m_2}^T +\mathbf{1}_{m_1}\mathbf{1}_{m_1}^T\otimes Y\nonumber\\ &\approx \log (\mathrm{abs}(\hat \Sigma^{-1})+\epsilon \mathbf{1}_{m_1m_2} \mathbf{1}_{m_1m_2}^T)} where $ W=\log \bar W$ and $ Y=\log \bar Y$.
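Anticipating the least-squares solution, the symmetrization, and the choice of $\hat \Lambda^{(0)}$, $\hat \Gamma^{(0)}$ spelled out just below, the following numpy sketch (our own illustration, not the authors' code; names and tolerances are placeholders) carries out this initialization end to end.
\begin{verbatim}
# Sketch of the initialization: approximate abs(Sigma_hat^{-1}) + eps*11^T by a
# Kronecker product Wbar (x) Ybar, solving a linear least-squares problem in the
# entrywise log domain, then set Lambda^(0) = 1/Wbar and Gamma^(0) = 1/Ybar.
import numpy as np

def init_hyperparameters(Sigma_hat, m1, m2, eps=1e-3):
    B = np.log(np.abs(np.linalg.inv(Sigma_hat)) + eps)
    # build the linear map z = [vec(W); vec(Y)] -> vec of the log-domain model
    cols = []
    for idx in range(m1 * m1):                    # basis matrices for W
        E = np.zeros((m1, m1)); E.flat[idx] = 1.0
        cols.append(np.kron(E, np.ones((m2, m2))).ravel())
    for idx in range(m2 * m2):                    # basis matrices for Y
        E = np.zeros((m2, m2)); E.flat[idx] = 1.0
        cols.append(np.kron(np.ones((m1, m1)), E).ravel())
    A = np.stack(cols, axis=1)
    z, *_ = np.linalg.lstsq(A, B.ravel(), rcond=None)
    W = z[:m1 * m1].reshape(m1, m1)
    Y = z[m1 * m1:].reshape(m2, m2)
    Wbar = (np.exp(W) + np.exp(W).T) / 2          # exponentiate back and symmetrize
    Ybar = (np.exp(Y) + np.exp(Y).T) / 2
    return 1.0 / Wbar, 1.0 / Ybar                 # Lambda^(0), Gamma^(0)
\end{verbatim}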
It is not difficult to see that the following relations hold: \al{\mathrm{vec} ( W \otimes I_{m_2})&=(I_{m_1}\otimes \mathbf{1}_{m_2}\otimes I_{m_1} \otimes \mathbf{1}_{m2})\mathrm{vec}( W)\nonumber\\ \mathrm{vec} (I_{m_1} \otimes Y)&=( \mathbf{1}_{m_1}\otimes I_{m_2} \otimes \mathbf{1}_{m1}\otimes I_{m_2}) \mathrm{vec}( Y).} Then, we can write (\ref{approx1}) as $Az\approx b$ where \al{A&=[\, I_{m_1}\otimes \mathbf{1}_{m_2}\otimes I_{m_1} \otimes \mathbf{1}_{m2}\; \; \mathbf{1}_{m_1}\otimes I_{m_2} \otimes \mathbf{1}_{m1}\otimes I_{m_2}\,] \nonumber\\ z&=[\,\mathrm{vec}( W)^T \;\; \mathrm{vec}( Y)^T\,]^T\nonumber\\ b&=\mathrm{vec}(\log (\mathrm{abs}(\hat \Sigma^{-1})+\epsilon \mathbf{1}_{m_1m_2} \mathbf{1}_{m_1m_2}^T)).} Accordingly, $z$ can be found by solving the least squares problem $\hat z=\operatornamewithlimits{argmin}_z \|Az-b\|$, therefore $\hat z=(A^TA)^{-1}A^Tb$. From $z$ we recover $W$ and $Y$. Note that, $W$ and $Y$ computed from $z$ are not symmetric matrices. Thus, to compute $\bar W$ and $\bar Y$ from $W$ and $Y$ we force the symmetric structure: $\bar W=(\exp (W)+\exp (W)^T)/2$ and $\bar Y =(\exp (Y)+\exp (Y)^T)/2$. At this point, it is worth noting that $w_{jk}y_{il}$ provides roughly the order of magnitude of $s_{jk,il}$. On the other hand, the hyperparameters $\lambda_{jk}$ and $\gamma_{il}$ provides the prior about the order the order of magnitude of $s_{jk,il}$: the larger $\lambda_{jk}\gamma_{il}$ is, the more $s_{jk,il}$ is close to zero. Accordingly, we choose $\hat \Lambda^{(0)}$ and $\hat \mathcal{G}amma^{(0)}$ such that: \al{\hat \lambda_{jk}^{(0)}&=\frac{1}{w_{jk}},\; j,k=1\ldots m_1\nonumber\\ \hat \gamma_{il}^{(0)}&=\frac{1}{y_{il}},\; i,l=1\ldots m_2 .} \section{SIMULATION RESULTS} \label{sec:sim} \begin{figure*} \caption{Set of edges estimated in a realization of the Monte Carlo experiment. {\em First picture} \label{fig:realization1000} \end{figure*} We consider a Monte Carlo experiment structured as follows: \begin{itemize} \item We generate $60$ QKP graphical models with $m_1=6$ modules and each module contains $m_2=10$ nodes. For each model, $\Omega_1$ and $\Omega_2$ are generated randomly. The fraction of edges is set equal to $20\%$ for both $\Omega_1$ and $\Omega_2$; \item For each model we generate a finite-length realization $\mathrm x^N:=\{\,\mathrm x_1\ldots \mathrm x_N\,\}$, with $N=1000$, and we compute the sample covariance $\hat \Sigma$. \item For each realization we consider the following estimators: \begin{itemize} \item S1 estimator: it computes a sparse graphical model by using Algorithm \ref{algoS}, in this case we have one scalar hyperparameter; \item S2 estimator: it computes a sparse graphical model by using Algorithm \ref{algoS2}, in this case we have $m_1m_2(m_1m_2+1)/2=1830$ variables in the hyperparameter matrix; \item QKP estimator: it computes a QKP graphical model with $m_1=6$ modules and each module has $m_2=10$ nodes, in this case the total number of variables in the hyperparameters matrices is $m_1(m_1+1)/2+m_1(m_2+1)/2=76$. \end{itemize} \item For each realization, we compute the relative error in reconstructing the concentration matrix and the relative error in reconstructing the sparsity pattern using S1, S2 and QKP. 
For instance, the relative error in reconstructing concentration matrix using QKP estimator is \al{e_{QKP}=\frac{\| S_{true}- \hat S_{QKP}\|}{\|S_{true}\|}} where $S_{true}$ denotes the concentration matrix of the true model, while $\hat S_{QKP}$ is the estimated concentration matrix; here, $\|\cdot \|$ denotes the Frobenius norm. The relative error in reconstructing the sparsity pattern using QKP estimator is \al{e_{SP,QK}=\frac{\| E_{\Omega_{true}}- E_{\hat \Omega_{QKP}}\|}{(m_1m_2(m_1m_2+1))/2}} where $\Omega_{set}$ and $\hat \Omega_{QKP}$ denote the set of edges of the true graphical model and the one estimated, respectively. \end{itemize} Figure \ref{fig:realization1000} depicts the set of edges estimated using the three estimators in a realization of the Monte Carlo experiment. As we see, only QK provides a structure similar to the true model. Figure \ref{fig:boxplot1000} \begin{figure} \caption{Monte Carlo experiment with realizations of length $N=1000$. {\em Left panel} \label{fig:boxplot1000} \end{figure} shows the boxplot of the relative error in reconstructing the sparsity pattern (left panel) and the concentration matrix (right panel). As we can see, the worst performance is given by S1, while the best performance is achieved by QKP. In particular, the relative error of the estimated sparsity pattern for QKP is very small compared to the other two methods. The poor performance of S1 is due by the fact that only a scalar hyperparameter is not sufficient to capture the correct structure of the graph. On the contrary, the poor performance of S2 is due by the fact that there is overfitting in the estimation of the hyperparameter matrix. \section{CONCLUSIONS} \label{sec:concl} We have introduced Quasi-Kronecker Product graphical models wherein the nodes are regrouped in modules having the same number of nodes. The interactions among the nodes of the same module as well as the interaction among the nodes of two modules follow a common structure. Then, we have addressed the problem of learning QKP graph models from data using a Bayesian hierarchical model. Finally, we have compared the proposed procedure with Bayesian learning techniques for estimating sparse graphical models: simulation evidence showed the effectiveness of the proposed method. \end{document}
math
\begin{document} \title{Regularization of ultraviolet divergence for a particle interacting with a scalar quantum field} \author{O.\ D.\ Skoromnik} \email[Corresponding author: ]{[email protected]} \affiliation{Max Planck Institute for Nuclear Physics, Saupfercheckweg 1, 69117 Heidelberg, Germany} \author{I.\ D.\ Feranchuk} \affiliation{Belarusian State University, 4 Nezavisimosty Ave., 220030, Minsk, Belarus} \author{D. V. Lu} \affiliation{Belarusian State University, 4 Nezavisimosty Ave., 220030, Minsk, Belarus} \author{C. H. Keitel} \affiliation{Max Planck Institute for Nuclear Physics, Saupfercheckweg 1, 69117 Heidelberg, Germany} \begin{abstract} When a non-relativistic particle interacts with a scalar quantum field, the standard perturbation theory leads to a dependence of the energy of its ground state on an undefined parameter---``momentum cut-off''---due to the ultraviolet divergence. We show that the use of non-asymptotic states of the system results in a calculation scheme in which all observable quantities remain finite and continuously depend on the coupling constant without any additional parameters. It is furthermore demonstrated that the divergence of traditional perturbation series is caused by the energy being a function with a logarithmic singularity for small values of the coupling constant. \end{abstract} \pacs{11.10.-z, 11.10.Gh, 11.15.Bt, 11.15.Tk, 63.20.kd} \keywords{quantum field theory; non-perturbative theory; renormalization; divergences} \maketitle \section{Introduction} \langlebel{sec:introduction} A characteristic property of the majority of quantum field theories (QFT) is the divergence of integrals appearing in the perturbation theory for the calculation of physical quantities such as mass and charge of the interacting particles. The divergences appear in the integrations over momenta in intermediate states both on the lower limit (infrared divergence) and on the upper limit (ultraviolet divergence). In order to circumvent this difficulty, a renormalization procedure is used, which allows the redefinition of the initial parameters of the system through their observable values. The renormalization scheme was firstly developed for quantum electrodynamics (QED) \cite{PhysRev.75.1736,*PhysRev.85.631,*PhysRev.95.1300,*Stueckelberg1953} and later generalized to other QFT models \cite{'tHooft1971173,*Hooft1971167,*'tHooft1972189}. These schemes can be used for so-called renormalizable theories, for which the reconstructed perturbation theory can be built in a way that the infinite values are included in the definition of ``physical'' charge and mass and, therefore, do not appear in other observables of the system. However, even the founders of QED anticipated ``that the renormalization theory is simply a way to sweep the difficulties of the divergences in electrodynamics under the rug.'' (R. P. Feynman \cite{Feynman12081966}). In many papers P. A. M. Dirac wrote that this approach was in contradiction with logical principles of quantum mechanics \cite{Dirac1981,Dirac1981book}. Accordingly, the question arises whether these divergences are an intrinsic property of quantum field models or they are caused by the application of perturbation theory for the calculation of physical quantities, which are non-analytical functions of the coupling constant such as in the theory of superconductivity \cite{PhysRev.108.1175}. A large number of works is devoted to this problem, nevertheless a solution still has not been found up to now. 
However, this question is of great importance for a correct mathematical formulation of fundamental physical theories and is essential for examining the applicability of non-renormalizable theories \cite{PhysRevD.88.125014,*PhysRevD.87.065024} for the description of real physical systems. Let us recall that in standard perturbation theory the Hamiltonian of non-interacting fields is used as a zeroth-order approximation, while Fock states of the free fields are employed for the calculation of the transition matrix elements in the subsequent corrections to observable characteristics of the system. This approach is based on the assumption of an asymptotic switch off of the interaction between the fields \cite{LandauQED}. However, in a series of works it has been shown \cite{PhysRev.140.B1110,Faddeev1970,PhysRevD.11.3481,PhysRev.173.1527,*PhysRev.174.1882,*PhysRev.175.1624} that the infrared divergence arises just because of the use of asymptotically free field states. As follows from reference \cite{PhysRev.140.B1110} and the subsequent publication \cite{Faddeev1970}, the infrared divergence disappears in all orders of perturbation theory in QED if, in the zeroth-order approximation, the coherent states of the electromagnetic field bound to the particle are used and the parameters of these states are appropriately chosen. At first glance, this may contradict representation theory in quantum mechanics, in accordance to which the result of the calculation of the observable characteristics of the system should not depend on the choice of basis states, provided that those form a full basis in a Hilbert space as for free field states. However, this statement is correct only for the exact solution of the problem, whereas individual terms of the perturbation series can change with a different choice of basis in zeroth-order approximation. As was demonstrated in reference \cite{Feranchuk2015,Feranchuk1995370} the transition from one basis to another corresponds to the partial summation of a divergent series within standard perturbation theory and allows for the non-perturbative calculation of subsequent corrections in the form of a convergent sequence. A good example of how the basis choice influences the approximate calculation of the characteristics of the quantum system with a continuous spectrum is given by the scattering at a Coulomb potential \cite{PhysRevA.82.052703,*PhysRevA.70.052701}. In this well known case, the wave function of the system has no singularities. In contrast, Born's scattering amplitude, approximately calculated via an asymptotically free basis, displays a singularity for scattering at small angles. As was demonstrated in references \cite{PhysRevA.82.052703,*PhysRevA.70.052701}, this singularity does not appear with the use of non-asymptotic wave functions. The main goal of our work is to investigate whether a proper choice of basis in zeroth-order approximation allows to construct a calculation scheme free of ultraviolet divergences. In order not to overload the proposed approach with details related to the internal degrees of freedom and to render all calculations as transparent as possible, we investigate as a representative example a model system, which consists of a non-relativistic particle without spin interacting with a scalar quantum field. A standard-perturbation-theory series, in this case, does not exhibit infrared divergence, contains however ultraviolet divergence. 
This results in a dependence of the energy of the ground state on the undefined momentum cut-off, which is required for the calculation of high-order corrections. Consequently, our task is to prove that the energy of the ground state of the considered model system can be calculated without any additional parameters such as a momentum cut-off. At the same time, it is important to show that the energy of the system is a non-analytical function of the coupling constant and consequently can not be represented as a series in the framework of conventional perturbation theory. With the inclusion of a field polarization our employed model coincides with non-relativistic QED \cite{Healy1982} or if the field is scalar it has a physical realization in solids \cite{Toyozawa01071961}, where however, due to the discrete structure of a crystal, a natural momentum cut-off intrinsically appears, defined via the Brillouin-zone boundary. In free space this regularization is not present and has to be artificially included, e.g., via lattice models \cite{Smit2002}, where the boundary momentum is defined through an artificial lattice period. In contrast, in our formulation we will consider a system in free space without neither natural nor artificial cut-off. In addition, in a series of works \cite{Spohn1998,*Spohn1989,Amann1991414,*Arai1997455,jmp.5.1190.1964,Bach1998299,*Bach2007426,*Chen20082555} an analogous model of a particle interacting with a scalar quantum field with the momentum cut-off was used for the investigation of the fundamental mathematical problem of the existence of the solutions of the Schr\"{o}dinger equation. The article is organized in the following way. In section \ref{sec:model_description} the model of a non-relativistic particle without spin interacting with a scalar quantum field is described and its parameters are calculated in the framework of conventional perturbation theory. In section \ref{sec:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system} the basis of non-asymptotic states is investigated and the iteration scheme of the calculations is presented. The zeroth-order approximation, which is found to be free of ultraviolet divergence for the energy and effective mass is then worked out using this basis. In section \ref{sec:second_order_iteration_for_the_energy_convergence} the proposed iteration scheme is employed for computing the correction to the zeroth-order approximation of the energy. The convergence of all integrals is demonstrated and the character of the singularity of the energy as a non-analytic function of the coupling constant is determined in the weak coupling limit. In addition, the details of all calculations are presented in appendices. \section{Model description} \langlebel{sec:model_description} Let us examine the Hamiltonian of the system consisting of a non-relativistic particle interacting with a scalar quantum field \begin{align} \opa H &= \opa H_0+\opa H_{\text{int}}, \langlebel{eq:model_description1} \\ \opa H_0 &= \frac{\opA p^2}{2}+\sum_{\boldsymbol k}\omega_{\boldsymbol k} \opa a^\dag_{\boldsymbol k}\opa a_{\boldsymbol k}, \langlebel{eq:model_description2} \\ \opa H_{\text{int}} &=\frac{g}{\sqrt{2\Omega}}\sum_{\boldsymbol k}A_{\boldsymbol k} \left(e^{\ri\thp{k}{r}}\opa a_{\boldsymbol k}+e^{-\ri\thp{k}{r}}\opa a^\dag_{\boldsymbol k}\right). 
\langlebel{eq:model_description3} \end{align} Here, we select the system of units in which $m = 1$, $\hbar = c = 1$, the momentum operator $\opA p=-\ri \nabla$, normalization volume $\Omega$, vertex function $A_{\boldsymbol k} = 1/{\sqrt{\omega_{\boldsymbol k}}}$, creation (annihilation) operators $\opa a_{\boldsymbol k}^\dag$ ($\opa a_{\boldsymbol k}$) of the field mode with the frequency $\omega_{\boldsymbol k}=k=\vert{\boldsymbol k}\vert$, and the coupling constant $g$. The real physical system, which is described via Hamiltonian (\ref{eq:model_description1}) corresponds to an electron interacting with acoustic phonons in a continuous model of a crystal \cite{Toyozawa01071961}. If we choose $\omega_{\boldsymbol k} = 1$, $A_{\boldsymbol k} = 1/k$, and $g = \sqrt{8\pi \alpha}$, operator (\ref{eq:model_description1}) corresponds to the Fr\"ohlich Hamiltonian \cite{Froehlich1954}, which describes the interaction of an electron with optical phonons in a crystal, i.e., the so-called ``polaron'' problem \cite{RevModPhys.63.63,*Mitra198791,PhysRev.97.660,Spohn1987278}. The total momentum operator \begin{align} \opA P = -\ri \nabla + \sum_{\boldsymbol k}\boldsymbol k\opa a^\dag_{\boldsymbol k}\opa a_{\boldsymbol k}\langlebel{eq:model_description4} \end{align} commutes with the Hamiltonian of the system (\ref{eq:model_description1}) and consequently the eigenvalues $E(\boldsymbol P)$ and eigenfunctions $|\Psi_{\boldsymbol P}\rangle$ are defined as solutions of the following system of equations \begin{align} \opa H|\Psi_{\boldsymbol P}\rangle &= E(\boldsymbol P)|\Psi_{\boldsymbol P}\rangle, \langlebel{eq:model_description5} \\ \opA P|\Psi_{\boldsymbol P}\rangle &= \boldsymbol P |\Psi_{\boldsymbol P}\rangle. \langlebel{eq:model_description6} \end{align} In the conventional perturbation expansion over the coupling constant in the zeroth-order approximation the solution of the stationary Schr\"odinger equation with Hamiltonian (\ref{eq:model_description2}) is simply determined and corresponds to the free particle with momentum $\boldsymbol p$ and Fock states of the phonon field with the set of occupation numbers $\{n_{\boldsymbol k_1},n_{\boldsymbol k_2},\ldots\}\equiv \{n_{\boldsymbol k}\}$: \begin{align} |\Psi^{(0)}_{\boldsymbol p,\{n_{\boldsymbol k}\}}\rangle &= \frac{e^{\ri \thp{p}{r}}}{\sqrt{\Omega}}|\{n_{\boldsymbol k}\}\rangle,\quad \sum_{\boldsymbol k} \opa a^\dag_{\boldsymbol k}\opa a_{\boldsymbol k}|\{ n_{\boldsymbol k}\}\rangle =\sum_{\boldsymbol k}n_{\boldsymbol k} |\{ n_{\boldsymbol k}\}\rangle, \langlebel{eq:model_description7} \\ E^{(0)}(\boldsymbol P, \{ n_{\boldsymbol k}\}) &= \frac{1}{2}\left(\boldsymbol P - \sum_{\boldsymbol k}\boldsymbol k n_{\boldsymbol k}\right)^2 + \sum_{\boldsymbol k}\omega_{\boldsymbol k} n_{\boldsymbol k},\quad \boldsymbol P = \boldsymbol p + \sum_{\boldsymbol k}\boldsymbol k n_{\boldsymbol k}. \langlebel{eq:model_description8} \end{align} Let us suppose that the system is in the ground state of the phonon field $\{n_{\boldsymbol k}\} = 0$, which leads to the following eigenfunction and eigenvalue \begin{align} |\Psi^{(0)}_{\boldsymbol P, 0}\rangle &= \frac{e^{\ri \thp{P}{r}}}{\sqrt{\Omega}} |0\rangle \langlebel{eq:model_description9} \\ E^{(0)}(\boldsymbol P,0) &= \frac{P^2}{2}, \quad \boldsymbol P = \boldsymbol p. 
\langlebel{eq:model_description10} \end{align} The first non-vanishing correction to the system energy arises in the second order of perturbation theory (single-phonon intermediate transitions) and corresponds to the self-energy diagram, which defines the mass operator $\Sigma(\boldsymbol P)$ and is determined as \begin{align} \langlebel{eq:model_description11} \Sigma (\boldsymbol P) &= \Delta E^{(2)}(\boldsymbol P,0) = - \frac{g^2}{2 \Omega} \sum_{\boldsymbol k} \frac{1}{\omega_{\boldsymbol k}} \frac{1}{k^2/2 - \thp{P}{k} + \omega_k} = - \frac{g^2}{16 \pi^3} \int \frac{d \boldsymbol k}{k [k^2/2 - \thp{P}{k} + k]}. \end{align} In order to select bound state energy $E_b = E^{(2)}(0,0)$ and effective mass $m^*$ of a particle we expand the energy in a series over $\boldsymbol P$ up to second order \begin{align} E^{(0)}(\boldsymbol P,0) + \Delta E^{(2)}(\boldsymbol P,0) \approx E_b + \frac{P^2}{2 m^*} \equiv - \frac{g^2}{16 \pi^3} \int \frac{d \boldsymbol k}{k [k^2/2 + k]} + \frac{P^2}{2} - \frac{g^2}{16 \pi^3} \int \frac{d \boldsymbol k}{k [k^2/2 + k]^3}(\thp{P}{k})^2.\langlebel{eq:model_description12} \end{align} The first integral in equation (\ref{eq:model_description12}) logarithmically diverges, that is the bound state energy depends on the momentum cut-off $K$ \begin{align} E_b = - \frac{g^2}{2 \pi^2}\ln \left(\frac{K}{2} +1\right),\langlebel{eq:model_description13} \end{align} and becomes infinite when $K\rightarrow\infty$, such that the correction to the energy is undefined in the framework of the perturbation theory for our model \cite{Messiah1981}. At the same time, the corrected mass is well defined and equal to \begin{align} \frac{1}{m^*} \simeq 1 - \frac{g^2}{6 \pi^2}; \quad m^* \simeq 1 + \frac{g^2}{6 \pi^2 }. \langlebel{eq:model_description14} \end{align} In contrast to this in the polaron problem all integrals are convergent because they contain in the denominator the additional power of $k$. The corresponding quantities for the polaron problem read as \cite{RevModPhys.63.63,*Mitra198791,PhysRev.97.660,Spohn1987278} \begin{align} E_b \simeq - \alpha; \quad m^* \simeq 1 + \frac{\alpha}{6 }.\langlebel{eq:model_description15} \end{align} It is important to stress here that in our model, the interaction energy between particle and field is observable and consequently, the infinite energy (\ref{eq:model_description12}) can not be included in the mass renormalization. Thus, we can conclude that the use of perturbation theory for two physically close quantum-field models leads to qualitatively different results. Therefore, a modification of the calculation method of subsequent corrections to the energy for our model is required and appears achievable. \section{Iteration scheme, basis choice and zeroth-order approximation of the system's energy} \langlebel{sec:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system} In order to build an iteration scheme not in the framework of perturbation theory we will employ the operator method (OM) for the solution of the Schr\"odinger equation, which was introduced in reference \cite{Feranchuk1982211} and its detailed explanation is given in the monograph \cite{Feranchuk2015,Feranchuk1995370}. Let us quickly revise here the basics of this method. 
Suppose, the eigenvalues $E_\mu$ and eigenvectors $|\Psi_{\mu}\rangle$ with a set of quantum numbers $\mu$ of the stationary Schr\"odinger equation need to be found: \begin{align} \opa H |\Psi_{\mu}\rangle = E_{\mu}|\Psi_{\mu}\rangle.\langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_1} \end{align} In contrast to perturbation theory, where the Hamiltonian $\opa H$ of the system is split into the zeroth-order approximation and perturbation parts, according to OM the total Hamiltonian is taken into account as is, while however, the state vector is probed via an approximate state: \begin{align*} |\Psi_{\mu}\rangle \approx |\psi_{\mu}(\omega_{\mu})\rangle, \end{align*} which depends on a set of variational parameters $\omega_{\mu}$. Then, the exact solution can be represented as a series \begin{align} |\Psi_{\mu}\rangle = |\psi_{\mu}(\omega_{\mu})\rangle + \sum_{\nu \neq \mu} C_{\mu \nu}|\psi_{\nu} (\omega_{\mu} )\rangle. \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_2} \end{align} Here we want to pay attention to the fact that for a given set of quantum numbers $\mu$, the set $\omega_{\mu}$ is fixed. By plugging the expansion (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_2}) into Schr\"odinger's equation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_1}) and projecting on different states $|\psi_{\mu}(\omega_{\mu})\rangle$ and $|\psi_{\nu}(\omega_{\mu})\rangle$ one obtains the equations for the energies $E_{\mu}$ and coefficients $C_{\mu \nu}$: \begin{align} E_{\mu} &= \left[1 + \sum_{\nu \neq \mu} C_{\mu \nu}I_{\mu \nu}\right]^{-1} \left[H_{\mu \mu} + \sum_{\nu \neq \mu} C_{\mu \nu}H_{\mu \nu}\right]; \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_3} \\ C_{\mu \gamma} &= \left[E_{\mu} - H_{\gamma \gamma}\right]^{-1}\left[ H_{\gamma \mu} - E_{\mu}I_{\gamma \mu} + \sum_{\nu \neq \mu \neq \gamma}C_{\mu \nu} (H_{\gamma \nu} - E_{\mu}I_{\gamma \nu})\right]; \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_4} \\ H_{\mu \nu} &\equiv \langle\psi_{\mu} (\omega_{\mu} )| \opa H | \psi_{\nu} (\omega_{\mu})\rangle; \quad I_{\mu \nu} \equiv \langle\psi_{\mu} (\omega_{\mu} )| \psi_{\nu} (\omega_{\mu})\rangle. \nonumber \end{align} It is important to stress here that all matrix elements are calculated with the \emph{full} Hamiltonian of the system and the set of vectors $|\psi_{\mu}(\omega_{\mu})\rangle$ can be normalized, while not necessarily being mutually orthogonal. The system of equations (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_3}), (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_4}) is the exact representation of the Schr\"odinger equation. For the approximate solution of this system, in accordance with OM the following concept is used: the closer the zeroth-order approximation of the state vector is to the exact solution, the closer the matrix $H_{\mu \nu}$ becomes to the diagonal one. 
Therefore, an iteration scheme for the solution of the system (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_3}), (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_4}) can be built, for which convergence is determined with the ratios of non-diagonal elements $H_{\mu \nu}$ to the diagonal ones $H_{\mu \mu}$ in the representation of the state vectors $|\psi_{\mu}(\omega_{\mu})\rangle$. A sufficiently detailed discussion of the convergence of the iteration scheme for different physical systems is given in the monograph \cite{Feranchuk2015}. Consequently, we find the system of recurrent equations \begin{align} E^{(s)}_{\mu} &= \left[1 + \sum_{\nu \neq \mu} C^{(s-1)}_{\mu \nu}I_{\mu \nu}\right]^{-1} \left[H_{\mu \mu} + \sum_{\nu \neq \mu} C^{(s-1)}_{\mu \nu}H_{\mu \nu}\right]; \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_5} \\ C^{(s)}_{\mu \gamma} &= \left[E^{(s-1)}_{\mu} - H_{\gamma \gamma}\right]^{-1}\left[ H_{\gamma \mu} - E^{(s-1)}_{\mu}I_{\gamma \mu} + \sum_{\nu \neq \mu \neq \gamma}C^{(s-1)}_{\mu \nu} (H_{\gamma \nu} - E^{(s-1)}_{\mu}I_{\gamma \nu})\right]; \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_6} \\ C^{(-1)}_{\mu \nu} &= C^{(0)}_{\mu \nu} = 0; \quad E^{(0)}_{\mu} = H_{\mu \mu}. \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_7} \end{align} As opposed to conventional perturbation theory, where the exact solution is defined as a sum of corrections of all orders, in OM the exact value of the energy of the system is given as a limit of a sequence \begin{align} E_{\mu} = \lim_{s \rightarrow \infty} E^{(s)}_{\mu}; \quad s = 0,1,\ldots.\langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_8} \end{align} In particular, for the first two iterations one can find \begin{align} E^{(1)}_{\mu} &= E^{(0)}_{\mu} = H_{\mu \mu} ; \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_9} \\ E^{(2)}_{\mu} &= \left[1 + \sum_{\nu \neq \mu} \frac{(H_{\nu \mu} - E^{(0)}_{\mu}I_{\nu \mu})I_{\mu \nu}}{E^{(0)}_{\mu} - H_{\nu \nu} }\right]^{-1} \left[H_{\mu \mu} + \sum_{\nu \neq \mu} \frac{(H_{\nu \mu} - E^{(0)}_{\mu}I_{\nu \mu})H_{\mu \nu}}{E^{(0)}_{\mu} - H_{\nu \nu}}\right]. \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_10} \end{align} The last equation looks analogously to the second-order correction of perturbation theory, while the main difference is related to the denominators of equation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_10}), where the matrix element $H_{\nu \nu}$ is calculated with the full Hamiltonian of the system, whereas the perturbation theory relations merely involve the diagonal element of the unperturbed Hamiltonian. As will be shown below, this is exactly the reason, which determines the convergence of integrals over intermediate states. Before proceeding with the application of the iteration scheme, let us discuss the choice of the parameters $\{\omega_{\mu}\} = \{\omega_{\mu}^1,\ldots,\omega_{\mu}^n,\ldots\}$ in more detail. 
For this we note that the representation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_3}-\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_4}) is exactly equivalent to the Scr\"{o}dinger equation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_1}), provided that the set of states $\{|\psi_{\mu}(\{\omega_{\mu}\})\rangle\}$ is a complete one. It is evident that if the state vectors $\{|\psi_{\mu}(\{\omega_{\mu}\})\rangle\}$ coincide with the exact eigenstates $|\Psi_{\mu}\rangle$ of the full Hamiltonian, the matrix $H_{\mu\nu}$ is a diagonal one, i.e. $H_{\mu\nu} = E_{\mu}\delta_{\mu\nu}$ and the coefficients $C_{\mu\nu} = 0$. The eigenvalues $E_{\mu}$ are determined exactly and are independent of the set of parameters $\{\omega_{\mu}\}$. Therefore, the relation \begin{align} \frac{\partial E_{\mu}}{\partial\omega_{\mu}^n} \equiv 0, \quad n=\{1,2,\ldots\} \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_rev1} \end{align} holds identically. According to our initial assumption we choose the trial parameters $\{\omega_{\mu}\}$ in the basis states $|\psi_{\mu}(\{\omega_{\mu}\})\rangle$ such that they determine the best possible approximation for the exact solution $|\Psi_{\mu}\rangle$ in the chosen class of functions. This is equivalent to the supposition that the off-diagonal elements of the matrix $H_{\mu \nu}$ are small numbers such that the ratios $H_{\mu \nu}/H_{\mu \mu}$ are proportional to some effective small parameter $\epsilon$. Therefore, the zeroth-order approximation of the operator method is chosen as \begin{align} E_{\mu}^{(0)}(\{\omega_{\mu}\}) = H_{\mu \mu}(\{\omega_{\mu}\}), \quad C_{\mu \nu}^{(0)} = 0, \quad |\Psi_{\mu}^{(0)}\rangle = |\psi_{\mu}(\{\omega_{\mu}\})\rangle.\langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_rev2} \end{align} However, the matrix $H_{\mu \nu}$ contains small off-diagonal elements, which need to be taken into account. Hence, the subsequent approximations read \begin{align} E_{\mu} &= H_{\mu \mu}(\{\omega_{\mu}\}) + \sum_{s = 1}^{\infty}\epsilon^s E_{\mu}^{(s)}(\{\omega_{\mu}\}), \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_rev3} \\ C_{\mu \nu}(\{\omega_{\mu}\}) &= \sum_{s=1}^{\infty}\epsilon^s C_{\mu \nu}^{(s)}(\{\omega_{\mu}\}),\quad \mu\neq \nu.\langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_rev4} \end{align} As the left-hand side of equation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_rev3}) does not depend on the parameters $\{\omega_{\mu}\}$ it is natural to require that in each order in $\epsilon$ the right-hand side also does not depend on $\{\omega_{\mu}\}$: \begin{align} \frac{\partial E_{\mu}}{\partial\omega_{\mu}^n} = 0, \quad s=\{0,1,\ldots\}\langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_rev5} \end{align} for each $\omega_{\mu}^n$. In the monograph \cite{Feranchuk2015} it was demonstrated that the recalculation of the parameters $\{\omega_{\mu}\}$ in every order in $\epsilon$ speeds up the convergence of the iteration scheme, however does not change the qualitative behaviour of the energy levels of the system. 
For this reason, in all calculations below we will fix the parameters $\{\omega_{\mu}\}$ via the zeroth-order approximation: \begin{align} \frac{\partial E_{\mu}^{(0)}}{\partial\omega_{\mu}^n} = 0.\langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_rev6} \end{align} In what follows we apply the above approach to the description of our model. In accordance with OM we choose a variational state vector, which incorporates the qualitative peculiarities of the system. From a physical point of view a field can be considered as a system of an infinite number of harmonic oscillators. Due to the interaction with a particle the equilibrium positions of these harmonic oscillators are modified. In a representation of creation and annihilation operators the shift of equilibrium positions corresponds to the displacement of a classical component $u_{\boldsymbol k}$ on these operators \cite{Scully1997}, i.e. $\opa a_{\boldsymbol k}^{\dag} \rightarrow \opa a_{\boldsymbol k}^{\dag}+u_{\boldsymbol k}^{*}$ and $\opa a_{\boldsymbol k} \rightarrow \opa a_{\boldsymbol k}+u_{\boldsymbol k}$, such that we choose a basis of field oscillators consisting of coherent states. As a result a so-called localized state of a particle in the field of these classical components arises. This means that during its existence the particle becomes ``dressed'', i.e. somewhat smeared out while still localized. Moreover, this ``dressed'' state should be an eigenstate of the total momentum operator $\opA P$, since $\opA P$ commutes with $\opa H$. Concluding, we formulate the following conditions for the state vectors: i) representation in the basis of coherent states; ii) imposing the localization of the particle state; iii) use of variational state vectors as eigenstates of $\opA P$ (\ref{eq:model_description4}). In order to incorporate the first two conditions in the state vector, we choose it as the product of the square integrable wave function of a particle, localized near an arbitrary point $\boldsymbol R$ in space, and a coherent state of the field, analogous to the polaron problem \cite{RevModPhys.63.63,*Mitra198791,0022-3719-17-24-012}: \begin{equation} \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_11} \vert\Psi({\boldsymbol r},{\boldsymbol R})\ranglengle= \phi({\boldsymbol r}-{\boldsymbol R})\exp\left(\sum_{\boldsymbol k}\Bigl(u^*_{\boldsymbol k}e^{-\ri \thp{k}{R}} \opa a^\dag_{\boldsymbol k}-u_{\boldsymbol k}e^{\ri \thp{k}{R}} \opa a_{\boldsymbol k}\Bigr)\right)\vert 0\ranglengle. \end{equation} In the state (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_11}) the classical component of the field $u_{\boldsymbol k}$ and the wave function $\phi(\boldsymbol r - \boldsymbol R)$ can be considered as the variational parameters $\{\omega_{\mu}\}$ of OM. In accordance with the above described procedure of the choice of the parameters $\{\omega_{\mu}\}$, the functional derivative over these parameters from the functional $\langlengle \Psi({\boldsymbol r},{\boldsymbol R})\vert \opa H \vert\Psi({\boldsymbol r},{\boldsymbol R})\ranglengle$ should be equal to zero. 
This yields an equation for the classical components of the field $u_{\boldsymbol k}$ and the wave function $\phi(\boldsymbol r - \boldsymbol R)$: \begin{align} \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_12} \frac{\delta}{\delta u_{\boldsymbol k}} &\left[ \langlengle \Psi({\boldsymbol r},{\boldsymbol R})\vert \opa H \vert\Psi({\boldsymbol r},{\boldsymbol R})\ranglengle\right] = \frac{\delta}{\delta \phi({\boldsymbol r}-{\boldsymbol R})} \left[ \langlengle \Psi({\boldsymbol r},{\boldsymbol R})\vert \opa H \vert\Psi({\boldsymbol r},{\boldsymbol R})\ranglengle\right] = 0. \end{align} By calculating the functional with the Hamiltonian (\ref{eq:model_description1}) and corresponding derivatives one obtains the connection between the classical components of the field and the wave function of the particle: \begin{align} \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_13} u_{\boldsymbol k} &=-\frac{g}{\sqrt{2\Omega\omega^3_{\boldsymbol k}}}\int d \boldsymbol r |\phi ({\boldsymbol r})|^2 e^{-\ri\thp{k}{r}}. \end{align} In the general case, the second equation in (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_12}) leads to the integral equation for the function $\phi_{\boldsymbol P}(\boldsymbol r)$. However, according to reference \cite{Feranchuk2015}, the convergence of the iteration scheme of OM does not depend on the particular choice of variational parameters, under the condition that the approximate state vector takes into account qualitative characteristics of the system. Therefore, for the analytical investigation of the energy $E_L^{(0)}(\boldsymbol P,g)$ we replace the exact numerical solution with a trial wave function, which depends on the single parameter $\langlembda$ and is equal to \begin{align} \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_27} \phi(\boldsymbol r) = \frac{\langlembda^{\frac{3}{2}}}{\pi^{\frac{3}{4}}}e^{-\frac{\langlembda^2 r^2}{2}}. \end{align} We notice that in the polaron problem the application of OM with the trial wave function (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_27}) yields an accuracy of the order of $1\%$ in the calculation of the bound state energy and the effective mass \cite{0022-3719-17-24-012}. With this choice of wave function, we proceed to calculate the classical component of the field $u_{\boldsymbol k}$ and the Fourier transform of the wave function (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_27}), which will be required below: \begin{align} u_{\boldsymbol k} &= -\frac{g}{\sqrt{2 \Omega}}\frac{1}{\sqrt{k^3}}\int d\boldsymbol r |\phi(\boldsymbol r)|^2 e^{-\ri\thp{k}{r}} = -\frac{g}{\sqrt{2 \Omega}}\frac{e^{-\frac{k^2}{4 \langlembda^2}}}{\sqrt{k^3}}; \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_28} \\ \phi_{\boldsymbol k} &= \int d\boldsymbol r \phi(\boldsymbol r)e^{-\ri\thp{k}{r}} = 2\sqrt{2}\frac{\pi^{\frac{3}{4}}}{\langlembda^{\frac{3}{2}}} e^{-\frac{k^2}{2 \langlembda^2}} = \phi_0 e^{-\frac{k^2}{2 \langlembda^2}}. \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_29} \end{align} Furthermore, the states (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_11}) are not the eigenstates of the total momentum operator $\opA P$ of the system, i.e. 
they are not translationary invariant. Moreover, these states are degenerate, as they do not depend on the localization point $\boldsymbol R$ of the particle in space. The choice of the correct linear combination of these states allows one to build a set of states which are not degenerate and are eigenstates of the total momentum operator $\opA P$: \begin{align} |\Psi^{(0)}_{\boldsymbol P_1, n_{\boldsymbol k}}\rangle &= \frac{1}{N_{\boldsymbol P_1,n_{\boldsymbol k}}\sqrt{\Omega}}\int d \boldsymbol R \phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R)\exp\left\{\ri(\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R \right\}\exp\left\{\sum_{\boldsymbol q}(u_{\boldsymbol q} e^{- \ri\boldsymbol q \!\cdot\!ot \boldsymbol R}\opa a_{\boldsymbol q}^\dag - u_{\boldsymbol q}^* e^{ \ri\boldsymbol q \!\cdot\!ot \boldsymbol R}\opa a_{\boldsymbol q})\right\}|n_{\boldsymbol k}\rangle,\langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_14} \\ \opA P |\Psi^{(0)}_{\boldsymbol P_1,n_{\boldsymbol k}}\rangle &= \boldsymbol P_1 |\Psi^{(0)}_{\boldsymbol P_1,n_{\boldsymbol k}}\rangle. \nonumber \end{align} Here $\Omega$ is the normalization volume and $\boldsymbol P_1$ the total momentum of the system, $|n_{\boldsymbol k}\rangle$ are Fock field states with occupation number $n_{\boldsymbol k}$, $\phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R)$ is the wave function of the particle localized at point $\boldsymbol r = \boldsymbol R$ and the classical component of the field $u_{\boldsymbol k}$ is defined via equation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_13}). The normalization constant for the state $|\Psi^{(0)}_{\boldsymbol P_1, 1_{\boldsymbol k}}\rangle$ is defined as \begin{align} \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_15} |N_{\boldsymbol P_1,1_{\boldsymbol k}}|^2 = \int d \boldsymbol R_1 d \boldsymbol \rho \phi_{\boldsymbol P_1}^*(\boldsymbol \rho)\phi_{\boldsymbol P_1}(\boldsymbol \rho - \boldsymbol R_1) e^{\ri(\boldsymbol P_1 - \boldsymbol k)\!\cdot\!ot \boldsymbol R_1 + \sum_k |u_k|^2(e^{-\ri\thp{k}{R_1}}-1)}\left(2|u_k|^2(\cos\thp{k}{R_1}-1)+1\right). \end{align} In addition the set of states (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_14}) forms a complete and an orthonormal basis in Hilbert space. The completeness of these states follows from the fact that they are eigenstates of a Hermitian operator $\opA{P}$, with an explicit proof given in Appendix~A. Concluding, the set of states (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_14}) takes into account physical peculiarities of the system, forms the complete set of states in Hilbert space for arbitrary functions $\phi_{\boldsymbol P}(\boldsymbol r)$ and $u_{\boldsymbol k}$, which are an analog to the parameters $\{\omega_{\mu}\}$, and, therefore, can be usable in the iteration scheme (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_5}-\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_7}). 
The zeroth-order approximation for the ground state vector following above procedure then reads as \begin{align} \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_16} |\Psi^{(L)}_{\boldsymbol P}\rangle &= \frac{1}{N_{\boldsymbol P}\sqrt{\Omega}} \int d{\boldsymbol R}\,\phi_{\boldsymbol P}({\boldsymbol r}-{\boldsymbol R})\exp \left(\ri\thp{P}{R}+ \sum_{\boldsymbol k}\Bigl(u^*_{\boldsymbol k}e^{-\ri \thp{k}{R}} \opa a^\dag_{\boldsymbol k}-u_{\boldsymbol k}e^{\ri \thp{k}{R}} \opa a_{\boldsymbol k}\Bigr)\right)| 0 \rangle, \end{align} whereas equations (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_9}), (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_10}) look like \begin{align} E^{(2)} &= \frac{E_L^{(0)} + \sum_{\boldsymbol P_1, \{n_k \neq 0\}}C^{(1)}_{\boldsymbol P_1, \{n_{\boldsymbol k}\}}\langle\Psi^{(L)}_{\boldsymbol P}| \opa H |\Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}\rangle}{1 + \sum_{\boldsymbol P_1, \{n_k\neq 0\}}C^{(1)}_{\boldsymbol P_1, \{n_{\boldsymbol k}\}}\langle\Psi^{(L)}_{\boldsymbol P}| \Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}\rangle};\langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_17} \\ C^{(1)}_{\boldsymbol P_1, \{n_{\boldsymbol k}\}} &= \frac{E_L^{(0)} \langle \Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}} |\Psi^{(L)}_{\boldsymbol P}\rangle - \langle \Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}| \opa H |\Psi^{(L)}_{\boldsymbol P}\rangle }{H_{\boldsymbol P_1, \{n_{\boldsymbol k}\}; \boldsymbol P_1, \{n_{\boldsymbol k}\}} - E_L^{(0)}}; \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_18} \\ H_{\boldsymbol P_1, \{n_{\boldsymbol k}\}; \boldsymbol P_2, \{n_{1\boldsymbol k}\}} &= \langle \Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}| \opa H |\Psi_{\boldsymbol P_2, \{ n_{1\boldsymbol k}\}}\rangle, \quad E_L^{(0)} = \langle \Psi^{(L)}_{\boldsymbol P}| \opa H |\Psi^{(L)}_{\boldsymbol P}\rangle. \nonumber \end{align} We want to emphasize once more that all matrix elements are calculated with the full Hamiltonian of the system \begin{align} \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_19} \opa H = \frac{1}{2}\left( \boldsymbol{P}^2 - 2 \sum_{\boldsymbol k} \opa a_{\boldsymbol k}^\dag \opa a_{\boldsymbol k} \boldsymbol k \!\cdot\!ot \boldsymbol{P} +\left(\sum_{\boldsymbol k} \opa a_{\boldsymbol k}^\dag \opa a_{\boldsymbol k} \boldsymbol k\right)^2\right)+\sum_{\boldsymbol k}\omega_{\boldsymbol k} \opa a^\dag_{\boldsymbol k}\opa a_{\boldsymbol k}+ \frac{g}{\sqrt{\Omega}}\sum_{\boldsymbol k}\frac{1}{\sqrt{2\omega_{\boldsymbol k}}} \left(e^{\ri\boldsymbol k\boldsymbol r}\opa a_{\boldsymbol k}+e^{-\ri\boldsymbol k\boldsymbol r}\opa a^\dag_{\boldsymbol k}\right). \end{align} Let us calculate the ground state energy $E^{(0)}_L$ of the system in this basis. The details of the calculations can be found in appendix C. 
The ground state energy reads accordingly \begin{align} &E_{L}^{(0)}(\boldsymbol P,g)= \frac{P^2}{2} - \thp{P}{Q} + G + E_{\text{f}}(\boldsymbol P) + E_{\text{int}}(\boldsymbol P), \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_20} \end{align} with \begin{align*} \boldsymbol Q &= \frac{1}{|N_{\boldsymbol P}|^2} \sum_{\boldsymbol k}\boldsymbol k |u_{\boldsymbol k}|^2\int d{\boldsymbol R} d{\boldsymbol r}\,\phi_{\boldsymbol P}^*({\boldsymbol r})\phi_{\boldsymbol P}({\boldsymbol r - \boldsymbol R}) e^{\Phi (\boldsymbol R) + \ri (\boldsymbol P - \boldsymbol k)\!\cdot\!ot \boldsymbol R}; \\ G &=\frac{1}{2} \frac{1}{|N_{\boldsymbol P}|^2}\sum_{\boldsymbol m,\boldsymbol l}\thp{m}{l}|u_{\boldsymbol m}|^2 |u_{\boldsymbol l}|^2\int d\boldsymbol r d\boldsymbol R \phi^*(\boldsymbol r)\phi(\boldsymbol r - \boldsymbol R)e^{\ri\thp{P}{R}+\Phi(\boldsymbol R)-\ri(\boldsymbol m + \boldsymbol l)\!\cdot\!ot R}; \\ E_{\text{f}}(\boldsymbol P) &= \frac{1}{|N_{\boldsymbol P}|^2} \sum_{\boldsymbol k}\left(k + \frac{k^2}{2}\right)|u_{\boldsymbol k}|^2\int d{\boldsymbol R} d{\boldsymbol r}\,\phi_{\boldsymbol P}^*({\boldsymbol r})\phi_{\boldsymbol P}({\boldsymbol r-\boldsymbol R}) e^{\Phi (\boldsymbol R) + \ri (\boldsymbol P - \boldsymbol k)\!\cdot\!ot \boldsymbol R}; \\ E_{\text{int}}(\boldsymbol P) &= \frac{g}{|N_{\boldsymbol P}|^2 } \sum_{\boldsymbol k}\frac{u_{\boldsymbol k}}{\sqrt{2 k \Omega}} \int d{\boldsymbol R} d{\boldsymbol r}\left(\phi_{\boldsymbol P}^*({\boldsymbol r}+\boldsymbol R)\phi_{\boldsymbol P}({\boldsymbol r})+\phi_{\boldsymbol P}^*({\boldsymbol r})\phi_{\boldsymbol P}({\boldsymbol r}-{\boldsymbol R})\right)e^{\Phi (\boldsymbol R) + \ri (\thp{P}{R} + \thp{k}{r})}; \\ \Phi (\boldsymbol R)&= \sum_{\boldsymbol k}|u_{\boldsymbol k}|^2(e^{-\ri\thp{k}{R}}-1); \\ |N_{\boldsymbol P}|^2 &= \int d{\boldsymbol R} d{\boldsymbol r}\,\phi_{\boldsymbol P}^*({\boldsymbol r})\phi_{\boldsymbol P}({\boldsymbol r}-{\boldsymbol R}) e^{\Phi(\boldsymbol R) + \ri \thp{P}{R}}. \end{align*} Actually, the iteration scheme (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_5}), (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_6}), (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_7}) can be used for arbitrary coupling constants \cite{Feranchuk2015}. However, as was described above, in the framework of our model we are interested in the behavior of the ground state energy $E_L^{(0)}$ in the weak coupling limit. In this limit we can neglect the function \begin{align*} \Phi(\boldsymbol R) = \sum_{\boldsymbol m}|u_{\boldsymbol m}|^2 \left(e^{-\ri\thp{m}{R}}-1\right)\sim g^2, \end{align*} in the exponent of all integrals in equation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_20}) as $g\ll1$. First of all, we investigate the situation of a particle at rest, i.e. $\boldsymbol P = 0$. 
In this case for the weak coupling limit the integrals in equation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_20}) can be expressed through the Fourier transforms of the wave function $\phi(\boldsymbol r)$: \begin{align} \int d\boldsymbol R_1 d \boldsymbol \rho \phi^*(\boldsymbol \rho)\phi(\boldsymbol \rho - \boldsymbol R_1) e^{-\ri\thp{k}{R_1}} =\int d \boldsymbol \rho \phi^*(\boldsymbol \rho)e^{-\ri\thp{k}{\rho}}\int d \boldsymbol R \phi(\boldsymbol R)e^{\ri\thp{k}{R}} = \phi^*_{\boldsymbol k}\phi_{-\boldsymbol k} &= \phi_{\boldsymbol k}^2, \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_30} \\ \int d\boldsymbol R_1 d \boldsymbol \rho \phi^*(\boldsymbol \rho)\phi(\boldsymbol \rho - \boldsymbol R_1) = |\phi_0|^2 &= \phi_0^2, \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_31} \\ \int d \boldsymbol R_1 d \boldsymbol \rho \phi^*(\boldsymbol \rho)\phi(\boldsymbol \rho - \boldsymbol R_1) e^{-\ri\thp{k}{\rho}} = \phi^*_{\boldsymbol k}\phi_0 &= \phi_{\boldsymbol k}\phi_0. \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_32} \end{align} With the use of equations (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_20}), (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_30}-\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_32}) we can rewrite the energy of the ground state in a form \begin{align} E^{(0)}_L(0, g) = \frac{1}{2}\sum_{\boldsymbol m,\boldsymbol l}\thp{m}{l}|u_{\boldsymbol m}|^2 |u_{\boldsymbol l}|^2\,\frac{\phi_{\boldsymbol l+ \boldsymbol m}^2}{\phi_0^2}+\sum_{\boldsymbol k}\left(k + \frac{k^2}{2}\right)|u_{\boldsymbol k}|^2 \frac{\phi_{\boldsymbol k}^2}{\phi_0^2} + \frac{2g}{\sqrt{2 \Omega}}\sum_{\boldsymbol k}\frac{u_{\boldsymbol k}}{\sqrt{k}}\frac{\phi_{\boldsymbol k}}{\phi_0}, \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_33} \end{align} which up to fourth order in $g$ yields \begin{align} E^{(0)}_L(0, g) = \frac{g^2}{24\pi^2}\left(\langlembda(-4+\sqrt{2})\sqrt{3\pi}+\langlembda^2\right) + O (g^4).\langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_34} \end{align} By minimizing the energy with respect to $\langlembda$ one finds \begin{align} E^{(0)}_L(0, g) = - g^2\frac{(-4+\sqrt{2})^2}{32\pi}; \quad \langlembda = \frac{\sqrt{3\pi}}{2}(4 - \sqrt{2}).\langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_35} \end{align} In the weak-coupling limit it is also possible to obtain a renormalization for the mass of the particle. This is accomplished by expanding the energy (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_20}) in a series over $\boldsymbol P$ up to second order. The details of the calculation can be found in appendix D. The result reads \begin{align} E^{(0)}_L(P, g)\approx E^{(0)}_L(0, g) + \frac{P^2}{2}\left[1-\frac{g^2}{9\pi^2}\frac{17-\sqrt{2}}{21}\right]; \quad m^{(0)*} = 1+\frac{g^2}{9\pi^2}\frac{17-\sqrt{2}}{21}. \langlebel{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_36} \end{align} From this equation we can conclude that the factor, which determines the corrected mass is half the one via the leading second-order term from perturbation theory, see e.g. equation (\ref{eq:model_description14}). 
\section{Second order iteration for the energy and convergence} \langlebel{sec:second_order_iteration_for_the_energy_convergence} In the previous section we have found the energy of the ground state and the renormalized mass, which are proportional to the square of the coupling constant in zeroth-order approximation. However, the correction to the energy coming from single-phonon intermediate transitions is of the same order with respect to the coupling constant. Consequently, its contribution should also be taken into account, thus requiring the calculation of the energy of the system in the second iteration (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_17}). In order to calculate the second order iteration for the energy we notice (appendix E) that the matrix elements $\langle \Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}} |\Psi^{(L)}_{\boldsymbol P}\rangle$ and $\langle \Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}| \opa H |\Psi^{(L)}_{\boldsymbol P}\rangle$, which are found in equations (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_17}), (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_18}) are proportional to the delta function of the total momentum of the system $\delta(\boldsymbol P_1 - \boldsymbol P)$. Therefore, during the evaluation of the sum over $\boldsymbol P_1$ in equation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_17}) for the energy we have used the usual procedure \cite{LandauQED}: one of the delta functions in its square was replaced through the normalization volume $\Omega$, and the integration over the remaining one yields $\boldsymbol P = \boldsymbol P_1$, thus expressing the conservation of momentum. Firstly, we consider the case, when a particle is at rest, i. e. $\boldsymbol P = 0$. 
The results, which are expressed through the Fourier components of the particle wave function in the weak coupling limit read: \begin{align} E^{(2)}(0,g) = \frac{A}{B},\langlebel{eq:second_order_iteration_for_the_energy_convergence_1} \end{align} where \begin{align} A = E_L^{(0)}&+\sum_{\boldsymbol k}\frac{1}{\phi_{\boldsymbol k}^2 \phi_0^2}\left[-u_{\boldsymbol k} \phi_{\boldsymbol k}^2\left(\frac{k^2}{2}+k\right)-\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}}-u_{\boldsymbol k} \phi_{\boldsymbol k}^2\bigg(g^2 I_{\boldsymbol k}+g^4 J_{\boldsymbol k}-E_L^{(0)}\bigg) \right] \nonumber \\ &\mspace{40mu}\times\left[u_{\boldsymbol k} \phi_{\boldsymbol k}^2\left(\frac{k^2}{2}+k\right)+\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}}+u_{\boldsymbol k} \phi_{\boldsymbol k}^2\bigg(g^2 I_{\boldsymbol k}+g^4 J_{\boldsymbol k}\bigg) - E_L^{(0)} u_{\boldsymbol k} \phi_0^2\right] \nonumber \\ &\mspace{40mu}\times \left[\left(\frac{k^2}{2}+k\right)+g^2 I_{\boldsymbol k}+g^4 J_{\boldsymbol k}-E_L^{(0)}\right]^{(-1)}, \langlebel{eq:second_order_iteration_for_the_energy_convergence_2} \end{align} and \begin{align} B = 1 + \sum_{\boldsymbol k}\frac{1}{\phi_{\boldsymbol k}^2 \phi_0^2}\frac{\left[-u_{\boldsymbol k} \phi_{\boldsymbol k}^2\left(\frac{k^2}{2}+k\right)-\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}}-u_{\boldsymbol k} \phi_{\boldsymbol k}^2\bigg(g^2 I_{\boldsymbol k}+g^4 J_{\boldsymbol k}-E_L^{(0)}\bigg) \right]u_{\boldsymbol k}(\phi_{\boldsymbol k}^2-\phi_{0}^2)}{\left(\frac{k^2}{2}+k\right)+g^2 I_{\boldsymbol k}+g^4 J_{\boldsymbol k}-E_L^{(0)}}.\langlebel{eq:second_order_iteration_for_the_energy_convergence_3} \end{align} In equations (\ref{eq:second_order_iteration_for_the_energy_convergence_2}- \ref{eq:second_order_iteration_for_the_energy_convergence_3}) we have introduced the following notations \begin{align} \sum_{\boldsymbol m}\boldsymbol m |u_{\boldsymbol m}|^2 \phi_{\boldsymbol m+\boldsymbol k}^2 &\equiv g^2 \phi_{\boldsymbol k}^2 \boldsymbol I^{(1)}_{\boldsymbol k}; \quad \boldsymbol I^{(1)}_{\boldsymbol k} = \frac{\boldsymbol k}{k^2}\frac{\langlembda^2}{32\pi^2} \frac{4k-e^{\frac{2}{3}\frac{k^2}{\langlembda^2}}\sqrt{6\pi}\langlembda \text{Erf}\frac{\sqrt{\frac{2}{3}}k}{\langlembda}}{k}; \langlebel{eq:second_order_iteration_for_the_energy_convergence_4} \\ \sum_{\boldsymbol m}\left(\frac{m^2}{2}+m\right)|u_{\boldsymbol m}|^2 \phi_{\boldsymbol m+\boldsymbol k}^2 &\equiv g^2 \phi_{\boldsymbol k}^2 I^{(2)}_{\boldsymbol k}; \quad I^{(2)}_{\boldsymbol k} = \frac{ \langlembda^2}{96 \pi^2} \frac{\sqrt{6\pi}\langlembda e^{\frac{2}{3}\frac{k^2}{\langlembda^2}}\text{Erf}\frac{\sqrt{\frac{2}{3}}k}{\langlembda} + 6 \pi \text{Erfi}\frac{\sqrt{\frac{2}{3}}k}{\langlembda}}{k}; \langlebel{eq:second_order_iteration_for_the_energy_convergence_5} \\ \frac{g \phi_{\boldsymbol k}}{\sqrt{2 \Omega}}\sum_{\boldsymbol m}\frac{u_{\boldsymbol m}}{\sqrt{m}}(\phi_{\boldsymbol m+\boldsymbol k}+\phi_{\boldsymbol m - \boldsymbol k}) &\equiv g^2 \phi_{\boldsymbol k}^2 I^{(3)}_{\boldsymbol k}; \quad I^{(3)}_{\boldsymbol k} = - \frac{ \langlembda^2}{4\pi}\frac{\text{Erfi}\frac{k}{\sqrt{3}\langlembda}}{k}; \langlebel{eq:second_order_iteration_for_the_energy_convergence_6} \\ I_{\boldsymbol k} &= \boldsymbol k \!\cdot\!ot \boldsymbol I_{\boldsymbol k}^{(1)}+I_{\boldsymbol k}^{(2)}+I_{\boldsymbol k}^{(3)}; \langlebel{eq:second_order_iteration_for_the_energy_convergence_7} \\ \frac{1}{2}\sum_{\boldsymbol l,\boldsymbol m}\thp{l}{m}|u_{\boldsymbol 
l}|^2|u_{\boldsymbol m}|^2 \phi_{\boldsymbol l+\boldsymbol m+\boldsymbol k}^2 &\equiv g^4 \phi_{\boldsymbol k}^2 J_{\boldsymbol k}; \quad J_{\boldsymbol k} \approx \frac{5^{1/2}\lambda^2}{4(2\pi)^3 3^5}e^{\frac{4}{5}\frac{k^2}{\lambda^2}}\frac{\frac{2}{15}\frac{k^2}{\lambda^2} - 1}{(1+\frac{4}{45}\frac{k^2}{\lambda^2})^3}, \label{eq:second_order_iteration_for_the_energy_convergence_8} \end{align} where $\text{Erf}(x) = 2/\sqrt{\pi}\int_0^x e^{-z^2}dz$ and $\text{Erfi}(x) = -\ri\text{Erf}(\ri x)$ are the error function and the imaginary error function, respectively. When we calculated the energy of the ground state, we dropped all terms with powers of $g$ higher than $g^2$. Consequently, we can neglect the term $g^4 J_{\boldsymbol k}$ in comparison with $g^2 I_{\boldsymbol k}$, which can be confirmed by a direct numerical calculation of the integral. Prior to the numerical evaluation of the integrals (\ref{eq:second_order_iteration_for_the_energy_convergence_2}) and (\ref{eq:second_order_iteration_for_the_energy_convergence_3}), let us understand their structure through an approximate analytical calculation. We investigate the behavior of the numerator and denominator of the quantities $A$ and $B$. We start by breaking the integration region into two parts, namely $[0,k_0]$ and $[k_0,\infty)$. The value $k_0$ will be fixed below. Let us work out the behavior of the quantity $I_{\boldsymbol k}$ for small and large values of $k$. First of all, we notice that $g^2 I_{0}$ gives exactly the ground state energy $E_L^{(0)}$. For small values of $k$, $g^2 I_{\boldsymbol k}\sim -g^2 k^2/(18\pi^2) +E_L^{(0)}$, i.e., with increasing $k$ it grows quadratically in absolute value, while remaining negative. Therefore, due to the presence of $g^2$, this term is small in comparison with $k^2/2+k$ for small values of $k$, so that, in the denominator of the quantity $A$, the leading term is $k^2/2+k$. For large values of $k$, the quantity $g^2 I_{\boldsymbol k}$ grows exponentially as $I_{\boldsymbol k}\sim e^{\frac{2}{3}\frac{k^2}{\lambda^2}}/k$ and becomes the leading contribution in comparison with $k^2/2+k$, despite the higher power of $g$. The numerator of the quantity $A$ can be analyzed analogously. For small values of $k$ we can neglect, in every square bracket in equation (\ref{eq:second_order_iteration_for_the_energy_convergence_2}), the higher powers of $g$, i.e. terms with exponents larger than $1$. Consequently, for small values of $k$, the integrand of $A$ takes the form \begin{align} \label{eq:second_order_iteration_for_the_energy_convergence_9} -\frac{\left[u_{\boldsymbol k} \phi_{\boldsymbol k}^2\left(\frac{k^2}{2}+k\right)+\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}}\right]^2}{\phi_{\boldsymbol k}^2 \phi_0^2 \left(\frac{k^2}{2}+k\right)}. \end{align} For large values of $k$, the numerator is exponentially decreasing, with the leading term being $(-\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}})(-E_L^{(0)} u_{\boldsymbol k})$. This follows from the fact that $u_{\boldsymbol k}\sim e^{-\frac{k^2}{4 \lambda^2}}$ and $\phi_{\boldsymbol k}\sim e^{-\frac{k^2}{2 \lambda^2}}$. Consequently, the integrand for large values of $k$ can be written as \begin{align} \label{eq:second_order_iteration_for_the_energy_convergence_10} \frac{(-\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}})(-E_L^{(0)} u_{\boldsymbol k})}{g^2\phi_{\boldsymbol k}^2 I_{\boldsymbol k}}.
\end{align} Combining all together, we find that the quantity $A$ can be approximately calculated as \begin{align} A \approx E_L^{(0)} + \sum_{\boldsymbol k<\boldsymbol k_0}\frac{-\left(u_{\boldsymbol k} \frac{\phi_{\boldsymbol k}}{\phi_0}\left(\frac{k^2}{2}+k\right)+\frac{g}{\sqrt{2 \Omega}}\frac{1}{\sqrt{k}}\right)^2}{(\frac{k^2}{2}+k)}+\sum_{\boldsymbol k>\boldsymbol k_0}\frac{(-\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}})(-E_L^{(0)} u_{\boldsymbol k})}{\phi_{\boldsymbol k}^2 g^2 I_{\boldsymbol k}}. \langlebel{eq:second_order_iteration_for_the_energy_convergence_11} \end{align} In this expression, both sums are well defined and remain finite. The sum over the region $k>k_0$ is finite and convergent, while the ratio of numerator and denominator in the integrand is exponentially decreasing as $e^{-\frac{5k^2}{12 \langlembda^2}}$. In equation (\ref{eq:second_order_iteration_for_the_energy_convergence_11}) the point $k_0$ is determined as a solution of the equation \begin{align} \frac{k^2}{2} + k + g^2 I_{\boldsymbol k} - E_L^{(0)} = 0, \langlebel{eq:second_order_iteration_for_the_energy_convergence_12} \end{align} or employing the asymptotic behavior for the function $I_{\boldsymbol k}$ (appendix F) \begin{align} \frac{k_0^2}{2} + k_0 = g^2\frac{\langlembda^3 \sqrt{6\pi}}{48\pi^2}\frac{e^{\frac{2}{3}\frac{k_0^2}{\langlembda^2}}}{k_0}, \langlebel{eq:second_order_iteration_for_the_energy_convergence_13} \end{align} and by finding the following logarithm \begin{align} \ln\frac{(\frac{k_0^2}{2}+k_0)k_0}{a} &= -2|\ln g|+\frac{2}{3}\frac{k_0^2}{\langlembda^2},\langlebel{eq:second_order_iteration_for_the_energy_convergence_14} \end{align} with \begin{align*} a &= \frac{\langlembda^3 \sqrt{6\pi}}{48\pi^2}. \end{align*} In the limit of extremely small $g$, we can build the solution of equation (\ref{eq:second_order_iteration_for_the_energy_convergence_14}) via iterations, thus yielding \begin{align} k_0 \sim \langlembda\sqrt{3|\ln g|}.\langlebel{eq:second_order_iteration_for_the_energy_convergence_15} \end{align} The estimation of quantity $B$ can be performed in a similar fashion and one finds \begin{align} B \approx 1+ \sum_{\boldsymbol k<\boldsymbol k_0}\frac{-\left(u_{\boldsymbol k} \frac{\phi_{\boldsymbol k}^2}{\phi_0^2}\left(\frac{k^2}{2}+k\right)+\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k}}{\phi_0\sqrt{k}}\right)u_{\boldsymbol k}(\phi_{\boldsymbol k}^2-\phi_0^2)}{\phi_{\boldsymbol k}^2(\frac{k^2}{2}+k)}+ \sum_{\boldsymbol k>\boldsymbol k_0}\frac{(-\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}})u_{\boldsymbol k}(\frac{\phi_{\boldsymbol k}^2}{\phi_0^2}-1)}{\phi_{\boldsymbol k}^2 g^2 I_{\boldsymbol k}}.\langlebel{eq:second_order_iteration_for_the_energy_convergence_16} \end{align} At first sight, it may appear that the quantity $B$ features an infrared divergence, because the term $\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} u_{\boldsymbol k}}{\phi_0\sqrt{k}}/(\phi_{\boldsymbol k}^2(\frac{k^2}{2}+k)) \sim 1/k^3$ as $k\rightarrow 0$. However, this additional power of $k$ in the denominator is cancelled through the difference $\phi_{\boldsymbol k}^2 - \phi_0^2 \sim k^2/\langlembda^2$. The convergence at infinity is manifested with the exponential decrease of the integrand $\sim e^{-\frac{5k^2}{12 \langlembda^2}}$. 
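The self-consistent cut-off can also be made concrete numerically. A minimal sketch (assuming Python with NumPy; the value of $\lambda$ is the leading-order result of appendix C) iterates the logarithmic form of the equation for $k_0$ and compares the converged root with the asymptotic estimate $k_0\sim\lambda\sqrt{3|\ln g|}$:
\begin{verbatim}
import numpy as np

def k0_selfconsistent(g, lam, n_iter=100):
    # Solve  k^2/2 + k = g^2 * a * exp(2k^2/(3 lam^2)) / k  by fixed-point
    # iteration of its logarithmic form; a = lam^3 sqrt(6 pi)/(48 pi^2).
    a = lam**3 * np.sqrt(6 * np.pi) / (48 * np.pi**2)
    k = lam * np.sqrt(3 * abs(np.log(g)))          # asymptotic seed
    for _ in range(n_iter):
        rhs = 2 * abs(np.log(g)) + np.log((k**2 / 2 + k) * k / a)
        k = np.sqrt(1.5 * lam**2 * rhs)
    return k

lam = np.sqrt(3 * np.pi) / 2 * (4 - np.sqrt(2))    # leading-order lambda (appendix C)
for g in (1e-2, 1e-4, 1e-8):
    print(g, k0_selfconsistent(g, lam), lam * np.sqrt(3 * abs(np.log(g))))
\end{verbatim}
For decreasing $g$ the ratio of the two printed values slowly approaches unity (logarithmically in $|\ln g|$), consistent with the asymptotic estimate being the leading term only.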
By plugging into equations (\ref{eq:second_order_iteration_for_the_energy_convergence_11}) and (\ref{eq:second_order_iteration_for_the_energy_convergence_16}) the values of $\phi_{\boldsymbol k}$ and $u_{\boldsymbol k}$, which are defined in equations (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_28}-\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_29}), and calculating the integrals (appendix F), we find the approximate analytical formula for the second iteration for the ground state energy \begin{align} A &\approx E_L^{(0)}-\left[\frac{g^2\lambda}{24 \pi ^2}\left(\sqrt{6 \pi } \text{Erf}\left(\frac{\sqrt{\frac{3}{2}} k_0}{\lambda }\right)+\lambda -\lambda e^{-\frac{3k_0^2}{2 \lambda ^2}}\right)-\frac{g^2\lambda}{2 \sqrt{3}\pi^{3/2}}\text{Erf}\left(\frac{\sqrt{3}k_0}{2\lambda }\right)\right]-\frac{g^2}{2\pi^2}\ln\left(\frac{k_0}{2}+1\right)+E_L^{(0)}\frac{12\sqrt{6\pi}}{5 \lambda \pi}e^{-\frac{5k_0^2}{12 \lambda^2}}, \label{eq:second_order_iteration_for_the_energy_convergence_17} \\ B &\approx 1+\frac{g^2}{12\pi^2}(1-e^{-\frac{3}{2}\frac{k_0^2}{\lambda^2}})-g^2f\left(\frac{k_0}{\lambda}\right) - \frac{144\sqrt{6\pi}}{25 \lambda \pi}\left(1+\frac{5}{12}\frac{k_0^2}{\lambda^2}\right)e^{-\frac{5k_0^2}{12 \lambda^2}}, \label{eq:second_order_iteration_for_the_energy_convergence_18} \\ f(x) &= \frac{1}{4\pi^2}\int_0^x \frac{tdt}{1+t/2}e^{-\frac{3}{4}t^2}.\nonumber \end{align} Within the accuracy of the approximate formulas, we can set $B\approx1$. Therefore, one finally obtains \begin{align} E^{(2)}(0,g) \approx A. \label{eq:second_order_iteration_for_the_energy_convergence_19} \end{align} The use of our simple analytical expressions allows us to establish the behavior of the energy as a function of the coupling constant and consequently to determine the character of the singularity. In order to isolate the singularity, we investigate the limit \begin{align*} \lim_{g\rightarrow0}E^{(2)}(0,g). \end{align*} In this limit, the value of $k_0$ grows logarithmically. Consequently, we can approximately set $k_0 \rightarrow \infty$ both in the expression in square brackets and in the last term of equation (\ref{eq:second_order_iteration_for_the_energy_convergence_17}). This way, the square bracket becomes equal to the energy of the ground state (appendix F) and cancels $E_L^{(0)}$. The last term does not contribute to the energy either, as it is exponentially small. Consequently, only one term remains, which exactly determines the character of the singularity and is equal to \begin{align} E^{(2)}(0,g) &\underset{g\rightarrow0}{\longrightarrow}-\frac{g^2}{2\pi^2}\ln\left(\frac{k_0}{2}+1\right); \label{eq:second_order_iteration_for_the_energy_convergence_20} \\ k_0 &\approx \lambda\sqrt{3|\ln g|}. \label{eq:second_order_iteration_for_the_energy_convergence_21} \end{align} We observe that this term exactly coincides with the result of perturbation theory, i.e. equation (\ref{eq:model_description13}), however, here with a well-specified ``cut-off''. Moreover, most of the contributions to the integral in the energy arise from the region $k<k_0$, and this is exactly the origin of the natural ``cut-off'', which is determined self-consistently and is directly related to the only parameter of the Hamiltonian, namely the coupling constant.
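The non-analytic character of this limiting form is easy to exhibit numerically: the effective coefficient of $g^2$ does not approach a constant but keeps growing slowly as $g\rightarrow0$. A minimal sketch (assuming Python with NumPy and the leading-order value of $\lambda$ from appendix C):
\begin{verbatim}
import numpy as np

lam = np.sqrt(3 * np.pi) / 2 * (4 - np.sqrt(2))        # leading-order lambda
for g in (1e-1, 1e-3, 1e-6, 1e-12, 1e-24):
    k0 = lam * np.sqrt(3 * abs(np.log(g)))             # asymptotic cut-off
    E2 = -g**2 / (2 * np.pi**2) * np.log(k0 / 2 + 1)   # leading singular term
    print(f"g = {g:.0e}   E2/g^2 = {E2 / g**2:+.4f}")
\end{verbatim}
The printed ratio $E^{(2)}(0,g)/g^2$ grows in magnitude like $\ln\sqrt{|\ln g|}$, so no truncated power series in $g$ can reproduce the weak-coupling behavior, in agreement with the discussion above.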
Let us mention here that the corrections to the energy of the system (\ref{eq:second_order_iteration_for_the_energy_convergence_20}) arise in the subsequent iteration and are related to the transitions into intermediate states with two phonons. These contributions are proportional to $g^4$. In addition, we note here that the absence of the ultraviolet divergence in the ground state energy in equations (\ref{eq:second_order_iteration_for_the_energy_convergence_1}, \ref{eq:second_order_iteration_for_the_energy_convergence_2}, \ref{eq:second_order_iteration_for_the_energy_convergence_3}) is due to the fact that, both in the zeroth-order approximation and in the second-order iteration, the dressed wave functions (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_14}) were used in the resolvent $[E^{(0)}_{\mu} - H_{\nu \nu}]^{-1}$ of equation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_10}). This leads to the effective momentum cut-off $k_0(g)$, which is determined as the solution of equation (\ref{eq:second_order_iteration_for_the_energy_convergence_12}). This cut-off is a function of the coupling constant and not a phenomenological parameter that needs to be introduced for the removal of the ultraviolet divergence. Moreover, as follows from equation (\ref{eq:second_order_iteration_for_the_energy_convergence_20}), the energy of the ground state has a logarithmic singularity as $g\rightarrow 0$. It is clearly seen that this dependence cannot be recovered in the framework of perturbation theory, which yields a power series in the coupling constant $g$. \begin{figure} \caption{(Color online) (a) The dependence on the coupling constant of the ratio of the exact numerical evaluation to the approximate analytical formula of the second iteration for the energy. The value of $k_0$ in the analytical approximation is equal to $k_0 = \lambda\sqrt{3|\ln g|}$.} \label{fig1} \end{figure} In order to ensure that our interpretation is correct, we have evaluated the integrals numerically and have found, in the limit of extremely small $g$, the ratio of the exact numerical result to the analytical one. This ratio is almost constant and is approximately equal to one, as presented in Figure~\ref{fig1}. Therefore, we can conclude that the main reason why conventional perturbation theory fails is related to the fact that the energy of the system is a non-analytical function of the coupling constant and consequently cannot be expanded in a series in $g$ near a singular point. The second interesting consequence of the numerical evaluation of the integral is related to the fact that the energy of the system contains a small imaginary part, which means that the state has a finite lifetime and is quasi-stationary. To prove this, we have calculated the transition probability to the state $|\Psi_{\boldsymbol P_1,1_{\boldsymbol k}}\rangle$ for the case when the particle is at rest, i.e.
\begin{align} \frac{w_{0\rightarrow1}}{2} &= \pi \int |\langle \Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}| \opa H |\Psi^{(L)}_{\boldsymbol P}\rangle|^2 \delta\left(H_{\boldsymbol P_1, 1_{\boldsymbol k}; \boldsymbol P_1, 1_{\boldsymbol k}}-E_L^{(0)}\right)\frac{\Omega d\boldsymbol k}{(2\pi)^3} \nonumber \\ &=\frac{\Omega}{2\pi}\frac{k^2}{|k+1+g^2 I^\prime_{\boldsymbol k}|}\frac{\left[u_{\boldsymbol k} \phi_{\boldsymbol k}^2\left(\frac{k^2}{2}+k\right)+\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}}+u_{\boldsymbol k} \phi_{\boldsymbol k}^2 g^2 I_{\boldsymbol k} - E_L^{(0)} u_{\boldsymbol k} \phi_0^2\right]^2}{\phi_0^2 \phi_{\boldsymbol k}^2}\Bigg|_{\frac{k^2}{2}+k+g^2 I_{\boldsymbol k} - E_L^{(0)}=0}. \label{eq:second_order_iteration_for_the_energy_convergence_22} \end{align} The result of the evaluation is presented in Figure~\ref{fig1}. As can be seen from the figure, the two curves coincide exactly. This can be interpreted via the diagram technique \cite{LandauQED}. The second order iteration for the energy of the particle can be represented by the diagram depicted in Figure~\ref{fig2}. If the diagram is split by the dashed line, the imaginary part will correspond to the transition probability to the state $|\Psi_{\boldsymbol P_1,1_{\boldsymbol k}}\rangle$. \begin{figure} \caption{(Color online) Feynman diagram of the process. If the diagram is split by the dashed line, the imaginary part will correspond to the transition probability to the state $|\Psi_{\boldsymbol P_1,1_{\boldsymbol k}}\rangle$.} \label{fig2} \end{figure} Here, we need to stress that, contrary to standard perturbation theory, in our formulation the conservation of energy is governed not by the free Hamiltonian $\opa H_0$, but by the expectation value of the total Hamiltonian $H_{\boldsymbol P_1 1_{\boldsymbol k};\boldsymbol P_1 1_{\boldsymbol k}}$. Therefore, for certain values of $k$ and for certain coupling constants $g$ this energy level might appear to be below the energy $E_L^{(0)}$, featuring a so-called quasi-intersection of energy levels. If the transition probability to the state $|\Psi_{\boldsymbol P_1,1_{\boldsymbol k}}\rangle$ were large, the description of the system with the state vectors (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_14}) would not be applicable and a reconstruction of the states, which takes into account the degeneracy between the energies $E_L^{(0)}$ and $H_{\boldsymbol P_1 1_{\boldsymbol k};\boldsymbol P_1 1_{\boldsymbol k}}$, would need to be performed. In our case, however, the transition probability is small and consequently the description with a complex energy with a small imaginary part is valid, in analogy to the theory of the natural line width of atomic states or of the anharmonic oscillator $p^2/2+x^2/2-\mu x^4$, with $\mu>0$. In order to complete our formulation, we have calculated the renormalized mass in the second iteration.
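For completeness, we recall the standard relation behind this interpretation (a schematic reminder in our notation, not a new result): writing the complex energy of the quasi-stationary state as
\begin{align*}
E = E_R - \frac{\ri}{2}\, w_{0\rightarrow1}, \qquad \left|e^{-\ri E t}\right|^2 = e^{- w_{0\rightarrow1} t},
\end{align*}
a small imaginary part corresponds to a long but finite lifetime $\tau = 1/w_{0\rightarrow1}$, which is consistent with the coincidence of the two curves in Figure~\ref{fig1}.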
In terms of the introduced abbreviations the second order iteration for the particle energy can be written as \begin{align} E^{(2)}(\boldsymbol P,g) = \frac{\frac{P^2}{2} + \tilde E_{L}^{(0)}(\boldsymbol P,g) + A_{\boldsymbol P}}{B_{\boldsymbol P}}, \langlebel{eq:second_order_iteration_for_the_energy_convergence_24} \end{align} with \begin{align} \tilde E_L^{(0)}(\boldsymbol P,g) &= - \boldsymbol P\!\cdot\!ot\sum_{\boldsymbol m}\boldsymbol m |u_{\boldsymbol m}|^2 \frac{\phi_{\boldsymbol P - \boldsymbol m}^2}{\phi_{\boldsymbol P}^2} + \sum_{\boldsymbol m}\left(\frac{m^2}{2} + m\right)|u_{\boldsymbol m}|^2 \frac{\phi_{\boldsymbol P - \boldsymbol m}^2}{\phi_{\boldsymbol P}^2} + \frac{2g}{\sqrt{2 \Omega}}\sum_{\boldsymbol m}\frac{u_{\boldsymbol m}}{\sqrt{m}}\frac{\phi_{\boldsymbol P - \boldsymbol m}}{\phi_{\boldsymbol P}}.\langlebel{eq:second_order_iteration_for_the_energy_convergence_25} \end{align} The quantity $A_{\boldsymbol P}$ reads as \begin{align} A_{\boldsymbol P} &= \sum_{\boldsymbol k}\Bigg[\frac{P^2}{2}u_{\boldsymbol k}\left(\phi_{\boldsymbol P - \boldsymbol k}^2 - \phi_{\boldsymbol P}^2\right)+\left(\frac{k^2}{2}+k-\thp{P}{k}\right)u_{\boldsymbol k}\phi_{\boldsymbol P - \boldsymbol k}^2 + \frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol P - \boldsymbol k}\phi_{\boldsymbol P}}{\sqrt{k}} + g^2 u_{\boldsymbol k} \phi_{\boldsymbol P - \boldsymbol k}^2 I_{\boldsymbol P - \boldsymbol k}-\tilde E_{L}^{(0)}(\boldsymbol P,g)u_{\boldsymbol k}\phi_{\boldsymbol P}^2\Bigg] \nonumber \\ &\mspace{29mu}\times\left[\tilde E_{L}^{(0)}(\boldsymbol P,g)u_{\boldsymbol k}\phi_{\boldsymbol P-\boldsymbol k}^2-\left(\left(\frac{k^2}{2}+k-\thp{P}{k}\right)u_{\boldsymbol k}\phi_{\boldsymbol P - \boldsymbol k}^2 + \frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol P - \boldsymbol k}\phi_{\boldsymbol P}}{\sqrt{k}} + g^2 u_{\boldsymbol k} \phi_{\boldsymbol P - \boldsymbol k}^2 I_{\boldsymbol P - \boldsymbol k}\right)\right]\frac{1}{\phi_{\boldsymbol P - \boldsymbol k}^2 \phi_{\boldsymbol P}^2} \nonumber \\ &\mspace{29mu}\times \left[\left(\frac{k^2}{2}+k-\thp{P}{k}\right) + g^2 I_{\boldsymbol P-\boldsymbol k} - \tilde E_L^{(0)}(\boldsymbol P,g)\right]^{-1}, \langlebel{eq:second_order_iteration_for_the_energy_convergence_26} \end{align} and the quantity $B_{\boldsymbol P}$ is equal to \begin{align} B_{\boldsymbol P} &=1+ \sum_{\boldsymbol k}\left[\tilde E_{L}^{(0)}(\boldsymbol P,g)u_{\boldsymbol k}\phi_{\boldsymbol P-\boldsymbol k}^2-\left(\left(\frac{k^2}{2}+k-\thp{P}{k}\right)u_{\boldsymbol k}\phi_{\boldsymbol P - \boldsymbol k}^2 + \frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol P - \boldsymbol k}\phi_{\boldsymbol P}}{\sqrt{k}} + g^2 u_{\boldsymbol k} \phi_{\boldsymbol P - \boldsymbol k}^2 I_{\boldsymbol P - \boldsymbol k}\right)\right]\frac{u_{\boldsymbol k}\left(\phi_{\boldsymbol P - \boldsymbol k}^2 - \phi_{\boldsymbol P}^2\right)}{\phi_{\boldsymbol P - \boldsymbol k}^2 \phi_{\boldsymbol P}^2} \nonumber \\ &\mspace{29mu}\times \left[\left(\frac{k^2}{2}+k-\thp{P}{k}\right) + g^2 I_{\boldsymbol P-\boldsymbol k} - \tilde E_L^{(0)}(\boldsymbol P,g)\right]^{-1}. \langlebel{eq:second_order_iteration_for_the_energy_convergence_27} \end{align} To proceed, we again break the limit of the integration into two parts, however, now we know that the main contribution to the energy of the system comes from the region $[0,k_0]$. In this region, we again drop all terms, with power of $g$ larger than one. 
We recall here that the classical component of the field $u_{\boldsymbol k}$ is proportional to $g$ and the energy $E_{L}^{(0)}(\boldsymbol P,g)\sim g^2$. In addition, the limit $\boldsymbol P\ll 1$ is considered. Moreover, as the quantity $B_{\boldsymbol P}$, after expansion over momentum $\boldsymbol P$, will have a form $B_{\boldsymbol P} = 1-g^2 F(P^2)$, we can set $\boldsymbol P=0$ in $B_{\boldsymbol P}$, in order to preserve the same accuracy. In this approximation, the quantity $A_{\boldsymbol P}$ takes the form \begin{align} A_{\boldsymbol P} = &-\frac{P^2}{2}\sum_{\boldsymbol k<\boldsymbol k_0}\left\{u_{\boldsymbol k}\frac{\left(\phi_{\boldsymbol P - \boldsymbol k}^2 - \phi_{\boldsymbol P}^2\right)}{\phi_{\boldsymbol P - \boldsymbol k}\phi_{\boldsymbol P}}\left[u_{\boldsymbol k}\frac{\phi_{\boldsymbol P - \boldsymbol k}}{\phi_{\boldsymbol P}} + \frac{g}{\sqrt{2 \Omega}}\frac{1}{\sqrt{k}} \left(\frac{k^2}{2}+k-\thp{P}{k}\right) ^{-1}\right]\right\} \nonumber \\ &-\sum_{\boldsymbol k<\boldsymbol k_0}\left[\left(\frac{k^2}{2}+k-\thp{P}{k}\right)u_{\boldsymbol k}\frac{\phi_{\boldsymbol P - \boldsymbol k}}{\phi_{\boldsymbol P}} + \frac{g}{\sqrt{2 \Omega}}\frac{1}{\sqrt{k}} \right]^2\left(\frac{k^2}{2}+k-\thp{P}{k}\right)^{-1} \langlebel{eq:second_order_iteration_for_the_energy_convergence_28} \end{align} and \begin{align} B_{\boldsymbol P} = B_0 = 1 -\sum_{\boldsymbol k < \boldsymbol k_0}\frac{\left(u_{\boldsymbol k} \frac{\phi_{\boldsymbol k}^2}{\phi_0^2}\left(\frac{k^2}{2}+k\right)+\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k}}{\phi_0\sqrt{k}}\right)u_{\boldsymbol k}(\phi_{\boldsymbol k}^2-\phi_0^2)}{\phi_{\boldsymbol k}^2(\frac{k^2}{2}+k)}. \langlebel{eq:second_order_iteration_for_the_energy_convergence_29} \end{align} If the definition of $E_{L}^{(0)}(\boldsymbol P,g)$ (\ref{eq:second_order_iteration_for_the_energy_convergence_25}), together with equations (\ref{eq:second_order_iteration_for_the_energy_convergence_28}) and (\ref{eq:second_order_iteration_for_the_energy_convergence_29}), is used in equation (\ref{eq:second_order_iteration_for_the_energy_convergence_24}), the second iteration for the energy of a moving particle is obtained \begin{align} E^{(2)}(\boldsymbol P,g) &= E^{(2)}(0,g) -\frac{g^2}{2 \Omega}\sum_{\boldsymbol k<\boldsymbol k_0}\frac{(\thp{P}{k})^2}{k(k^2/2+k)^3} \nonumber \\ &\mspace{50mu}+\frac{P^2}{2}\left\{1-\sum_{\boldsymbol k < \boldsymbol k_0} \left[u_{\boldsymbol k}^2 \left(\frac{\phi_{\boldsymbol k}^2}{\phi_{0}^2}-1\right)+\frac{g}{\sqrt{2 \Omega}}\frac{u_{\boldsymbol k}}{\sqrt{k}(k^2/2+k)}\frac{\left(\phi_{\boldsymbol k}^2 - \phi_{0}^2\right)}{\phi_{\boldsymbol k}\phi_{0}}\right]\right\}\nonumber \\ &\mspace{50mu}\times\left(1-\sum_{\boldsymbol k < \boldsymbol k_0} \left[u_{\boldsymbol k}^2 \left(\frac{\phi_{\boldsymbol k}^2}{\phi_{0}^2}-1\right)+\frac{g}{\sqrt{2 \Omega}}\frac{u_{\boldsymbol k}}{\sqrt{k}(k^2/2+k)}\frac{\left(\phi_{\boldsymbol k}^2 - \phi_{0}^2\right)}{\phi_{\boldsymbol k}\phi_{0}}\right]\right)^{-1},\langlebel{eq:second_order_iteration_for_the_energy_convergence_30} \end{align} or after simplification, taking into account the fact that the sum in the denominator is proportional to $g^2$, one finally obtains \begin{align} E^{(2)}(\boldsymbol P,g) &= E^{(2)}(0,g) +\frac{P^2}{2}-\frac{g^2}{2 \Omega}\sum_{\boldsymbol k<\boldsymbol k_0}\frac{(\thp{P}{k})^2}{k(k^2/2+k)^3}.\langlebel{eq:second_order_iteration_for_the_energy_convergence_31} \end{align} From here, we see that the second iteration for the renormalized mass \begin{align} 
m^{(2)*} \approx 1+\frac{g^2}{6\pi^2}\label{eq:second_order_iteration_for_the_energy_convergence_32} \end{align} coincides with the one obtained via perturbation theory. \section{Conclusion} \label{sec:conclusion} In current methods of renormalization in QFT, an important role is played by the momentum cut-off \cite{Collins1984}, which in fact is an additional and undefined parameter of the theory. Usually, the inclusion of such a parameter for a concrete model is justified by the argument that the theory becomes incorrect on a small scale, where a more general theory must be used instead. For example, in the case of QED it is widely accepted that on a small scale the Standard Model, with its own characteristic length, should rather be used. However, in the Standard Model, as well as in its possible generalizations, a cut-off is required for the renormalization of perturbation theory. Consequently, we arrive at the requirement of including some ``fundamental length'' or unobservable parameter in any QFT. However, the Fr\"ohlich Hamiltonian demonstrates the absence of a cut-off in the polaron theory. In this QFT all corrections are determined through convergent integrals and, consequently, the cut-off is not required. Here we considered a more general QFT than the one associated with the polaron problem, for which standard perturbation theory gives rise to divergences. The main result of the present work consists in the construction of a calculation scheme for this more general QFT that leads only to convergent integrals. In addition to that, the regularization of all integrals is related to the effective cut-off momentum, which is defined through the parameters of the system itself. Moreover, the divergences of standard perturbation theory are explained by the energy being a non-analytical function of the coupling constant around zero, of the form $\ln(\sqrt{|\ln g|}/2+1)$, which therefore cannot be represented as a power series around this singular point. It is also important that the character of the singularity, defined in equation (\ref{eq:second_order_iteration_for_the_energy_convergence_20}) in the weak coupling limit, does not depend on the particular choice of the wave functions $\phi_{\boldsymbol P}(\boldsymbol r)$ of the zeroth-order approximation. From a formal point of view, the convergence of all integrals is explained as follows: i) the use of the decomposition (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_2}), i.e. the special state vectors, which are the product of the wave function of a localized particle and a coherent state of the field, and ii) the calculation of the energy of the system with the iteration scheme (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_10}), in which the resolvent of the operator $[H_{kk}-E^{(0)}]^{-1}$ contains the matrix elements of the full Hamiltonian of the system. In standard perturbation theory, the Hamiltonian of non-interacting fields is used in the analogous expressions. From a physical point of view, argument i) corresponds to avoiding an adiabatic switch-off of the interaction. This means that, during its entire existence, the particle is considered as ``dressed'', i.e. to be in a localized state which is created due to the interaction between the particle and the field.
The argument ii) leads to the ``cutting'' of all integrals for a large momentum due to the reconstruction of a localized state in intermediate states, caused by the quasi-intersection of the ground and the single-phonon states. Our approach should not be considered and does not pretend to be the full solution of the renormalization problem in QFT, specifically, because of the use of a simple, non-relativistically covariant model. Nevertheless, it demonstrates an alternative, succeeding without introducing any phenomenological momentum cut-off. \begin{acknowledgments} The authors are grateful to S. Cavaletto for useful discussions. \end{acknowledgments} \section*{Appendix A: Proof that the states (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_14}) are eigenstates of the total momentum operator} \langlebel{sec:appendix} In this appendix we present an explicit proof that the total momentum operator \begin{align*} \opA P = -\ri \nabla_{\boldsymbol r} + \sum_{\boldsymbol k}\boldsymbol k \opa a_{\boldsymbol k}^\dag \opa a_{\boldsymbol k} \end{align*} commutes with the Hamiltonian \begin{align*} \opa H = -\frac{1}{2}\Delta + \sum_{\boldsymbol k}k \opa a_{\boldsymbol k}^\dag \opa a_{\boldsymbol k} + \frac{g}{\sqrt{2\Omega}}\sum_{\boldsymbol k}A_{\boldsymbol k} \left(e^{\ri\thp{k}{r}}\opa a_{\boldsymbol k}+e^{-\ri\thp{k}{r}}\opa a^\dag_{\boldsymbol k}\right) \end{align*} of the system and that the states (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_14}) are eigenstates of $\opA P$ \begin{align} \opA P |\Psi^{(0)}_{\boldsymbol P_1,n_{\boldsymbol k}}\rangle &= \boldsymbol P_1 |\Psi^{(0)}_{\boldsymbol P_1,n_{\boldsymbol k}}\rangle,\langlebel{A1} \end{align} consequently forming a complete set in the Hilbert space. Let us begin with the commutator: \begin{align} [\opa H, \opA P] &= \left[\frac{g}{\sqrt{2\Omega}}\sum_{\boldsymbol k}A_{\boldsymbol k} \left(e^{\ri\thp{k}{r}}\opa a_{\boldsymbol k}+e^{-\ri\thp{k}{r}}\opa a^\dag_{\boldsymbol k}\right), -\ri \nabla\right] \nonumber \\ &\mspace{90mu}+\left[\frac{g}{\sqrt{2\Omega}}\sum_{\boldsymbol k}A_{\boldsymbol k} \left(e^{\ri\thp{k}{r}}\opa a_{\boldsymbol k}+e^{-\ri\thp{k}{r}}\opa a^\dag_{\boldsymbol k}\right),\sum_{\boldsymbol k}\boldsymbol k \opa a_{\boldsymbol k}^\dag \opa a_{\boldsymbol k}\right] \nonumber \\ &=\frac{g}{\sqrt{2 \Omega}}\sum_{\boldsymbol k}A_{\boldsymbol k}\left(\opa a_{\boldsymbol k}e^{\ri \thp{k}{r}}(-\boldsymbol k)+\opa a_{\boldsymbol k}^\dag e^{-\ri \thp{k}{r}}\boldsymbol k\right) \nonumber \\ &\mspace{90mu}+\frac{g}{\sqrt{2 \Omega}}\sum_{\boldsymbol k}A_{\boldsymbol k}\left(\opa a_{\boldsymbol k}e^{\ri \thp{k}{r}}\boldsymbol k+\opa a_{\boldsymbol k}^\dag e^{-\ri \thp{k}{r}}(-\boldsymbol k)\right) = 0,\langlebel{A2} \end{align} which was to be proven. Now we will demonstrate that the relation (\ref{A1}) holds. 
Also, we will introduce the notations \begin{align} \opa D(\boldsymbol R) &= \exp\left\{\sum_{\boldsymbol q}(u_{\boldsymbol q} e^{- \ri\boldsymbol q \!\cdot\!ot \boldsymbol R}\opa a_{\boldsymbol q}^\dag - u_{\boldsymbol q}^* e^{ \ri\boldsymbol q \!\cdot\!ot \boldsymbol R}\opa a_{\boldsymbol q})\right\}, \text{ with} \langlebel{A3} \\ &\opa D^\dag(\boldsymbol R)\opa D(\boldsymbol R) = \opa D(\boldsymbol R)\opa D^\dag(\boldsymbol R) = 1,\langlebel{A4} \\ &\opa D^\dag(\boldsymbol R)\opa a_{\boldsymbol k}\opa D(\boldsymbol R) = \opa a_{\boldsymbol k} +u_{\boldsymbol k}e^{-\ri\thp{k}{R}},\langlebel{A5} \\ &\opa D^\dag(\boldsymbol R)\opa a^\dag_{\boldsymbol k}\opa D(\boldsymbol R) = \opa a^\dag_{\boldsymbol k} + u^*_{\boldsymbol k}e^{\ri\thp{k}{R}},\langlebel{A6} \\ &\ri\frac{\partial \opa D(\boldsymbol R)}{\partial \boldsymbol R} = \opa D(\boldsymbol R)\sum_{\boldsymbol q}\boldsymbol q\left(u_{\boldsymbol q}e^{-\ri\thp{q}{R}}\opa a_{\boldsymbol q}^\dag+u^*_{\boldsymbol q}e^{\ri\thp{q}{R}}\opa a_{\boldsymbol q}\right)\langlebel{A7} \end{align} Consequently, with the help of equations (\ref{A3}-\ref{A7}) we may write \begin{align} \opA P |\Psi^{(0)}_{\boldsymbol P_1,n_{\boldsymbol k}}\rangle &= \left(-\ri \nabla_{\boldsymbol r} + \sum_{\boldsymbol q}\boldsymbol q \opa a_{\boldsymbol q}^\dag \opa a_{\boldsymbol q}\right)\frac{1}{N_{\boldsymbol P_1,n_{\boldsymbol k}}\sqrt{\Omega}}\int d \boldsymbol R \phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R)\exp\left\{\ri(\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R \right\}\opa D(\boldsymbol R)|n_{\boldsymbol k}\rangle \nonumber \\ &=\frac{1}{N_{\boldsymbol P_1,n_{\boldsymbol k}}\sqrt{\Omega}}\int d \boldsymbol R (-\ri \nabla_{\boldsymbol r})(\phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R))\exp\left\{\ri(\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R \right\}\opa D(\boldsymbol R)|n_{\boldsymbol k}\rangle \nonumber \\ &+\frac{1}{N_{\boldsymbol P_1,n_{\boldsymbol k}}\sqrt{\Omega}}\int d \boldsymbol R \phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R)\exp\left\{\ri(\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R \right\}\sum_{\boldsymbol q}\boldsymbol q \opa a_{\boldsymbol q}^\dag \opa a_{\boldsymbol q}\opa D(\boldsymbol R)|n_{\boldsymbol k}\rangle. 
\langlebel{A8} \end{align} By noticing that $-\ri \nabla_{\boldsymbol r}(\phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R)) = \ri \nabla_{\boldsymbol R}(\phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R))$ and transforming $\sum_{\boldsymbol q}\boldsymbol q \opa a_{\boldsymbol q}^\dag \opa a_{\boldsymbol q}\opa D(\boldsymbol R) = \opa D(\boldsymbol R)\opa D^\dag(\boldsymbol R)\sum_{\boldsymbol q}\boldsymbol q \opa a_{\boldsymbol q}^\dag \opa a_{\boldsymbol q}\opa D(\boldsymbol R)$ one obtains \begin{align} \opA P |\Psi^{(0)}_{\boldsymbol P_1,n_{\boldsymbol k}}\rangle &= \frac{1}{N_{\boldsymbol P_1,n_{\boldsymbol k}}\sqrt{\Omega}}\int d \boldsymbol R (\ri \nabla_{\boldsymbol R})\left[\phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R)\exp\left\{\ri(\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R \right\}\opa D(\boldsymbol R)\right]|n_{\boldsymbol k}\rangle \nonumber \\ &-\frac{1}{N_{\boldsymbol P_1,n_{\boldsymbol k}}\sqrt{\Omega}}\int d \boldsymbol R \phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R)(\ri \nabla_{\boldsymbol R})\left[\exp\left\{\ri(\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R \right\}\opa D(\boldsymbol R)\right]|n_{\boldsymbol k}\rangle \nonumber \\ &+\frac{1}{N_{\boldsymbol P_1,n_{\boldsymbol k}}\sqrt{\Omega}}\int d \boldsymbol R \phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R)\exp\left\{\ri(\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R \right\}\opa D(\boldsymbol R) \nonumber \\ &\mspace{350mu}\times\sum_{\boldsymbol q}\boldsymbol q \left(\opa a_{\boldsymbol q}^\dag+u_{\boldsymbol q}^* e^{\ri\thp{q}{R}}\right) \left(\opa a_{\boldsymbol q}+u_{\boldsymbol q} e^{-\ri\thp{q}{R}}\right)|n_{\boldsymbol k}\rangle. \langlebel{A9} \end{align} The first term in equation (\ref{A9}) vanishes due to the square-integrability of the function $\phi_{\boldsymbol P_1}(\boldsymbol r-\boldsymbol R)$. The derivative in the second term is equal to \begin{align} (\ri \nabla_{\boldsymbol R})\left[\exp\left\{\ri(\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R \right\}\opa D(\boldsymbol R)\right] &= -(\boldsymbol P_1 - \boldsymbol k n_k)\exp\left\{\ri(\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R \right\}\opa D(\boldsymbol R) \nonumber \\ &+\exp\left\{\ri(\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R \right\}\opa D(\boldsymbol R)\sum_{\boldsymbol q}\boldsymbol q\left(u_{\boldsymbol q}e^{-\ri\thp{q}{R}}\opa a_{\boldsymbol q}^\dag+u^*_{\boldsymbol q}e^{\ri\thp{q}{R}}\opa a_{\boldsymbol q}\right) \langlebel{A10} \end{align} and, therefore, the terms which are not proportional to $\boldsymbol P_1$ in equation (\ref{A10}) cancel the last term in equation (\ref{A9}). As a result, equation (\ref{A9}) transforms into \begin{align} \opA P |\Psi^{(0)}_{\boldsymbol P_1,n_{\boldsymbol k}}\rangle = \boldsymbol P_1 |\Psi^{(0)}_{\boldsymbol P_1,n_{\boldsymbol k}}\rangle, \langlebel{A11} \end{align} which was to be proven. According to reference \cite{landau1965quantum}, the eigenstates of a Hermitian operator form a complete and orthogonal set of functions in the Hilbert space. As the functions (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_14}) are eigenstates of the Hermitian operator $\opA P$, they form a complete orthogonal set for arbitrary generalized parameters $\phi_{\boldsymbol P_1}(\boldsymbol r - \boldsymbol R)$ and $u_{\boldsymbol k}$. 
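As a side remark, the displacement-operator identities used above can be verified directly in a truncated single-mode Fock space. The following minimal numerical sketch (assuming Python with NumPy and SciPy; the truncation size and the value of $u$ are arbitrary choices of this illustration) checks unitarity and the shift property $\opa D^\dag \opa a \opa D = \opa a + u$ away from the truncation edge:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = 40                                     # Fock-space truncation (illustration only)
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator in the number basis
u = 0.3 - 0.2j                             # plays the role of u_k exp(-i k.R)

D = expm(u * a.conj().T - np.conj(u) * a)  # single-mode displacement operator

# Unitarity and the shift property, checked on the low-lying block where
# truncation effects are negligible.
print(np.allclose((D.conj().T @ D)[:30, :30], np.eye(N)[:30, :30]))
print(np.allclose((D.conj().T @ a @ D)[:30, :30], (a + u * np.eye(N))[:30, :30]))
\end{verbatim}
Both checks should print True, mirroring the exact operator relations used in the proof.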
\section*{Appendix B: Matrix elements calculation} \langlebel{sec:matrix_elements_calculation} In all subsequent calculations, the matrix elements of a type \begin{align} \langlebel{1} \langle n_{\boldsymbol j}|&\exp\left(-\sum_{\boldsymbol m}\opa a_{\boldsymbol m}^\dag u_{\boldsymbol m}e^{-\ri\thp{m}{R^\prime}}-\opa a_{\boldsymbol m}u^*_{\boldsymbol m}e^{\ri\thp{m}{R^\prime}}\right)\sum_{\boldsymbol l}f(\opa a_{\boldsymbol l},\opa a^\dag_{\boldsymbol l})\nonumber \\ &\times\exp\left(\sum_{\boldsymbol m}\opa a_{\boldsymbol m}^\dag u_{\boldsymbol m}e^{-\ri\thp{m}{R}}-\opa a_{\boldsymbol m} u^*_{\boldsymbol m}e^{\ri\thp{m}{R}}\right)|n_{\boldsymbol k}\rangle \end{align} need to be evaluated. By using the identities \begin{align} \opa D &= e^{\beta \opa a^\dag - \beta^* \opa a} = e^{-|\beta|^2/2}e^{\beta \opa a^\dag}e^{-\beta^* \opa a} = e^{|\beta|^2 /2}e^{-\beta^* \opa a}e^{\beta \opa a^\dag}, \langlebel{2} \\ \opa D^{-1}\opa a \opa D &= \opa a +\beta, \quad \opa D^{-1}\opa a^\dag \opa D= \opa a^\dag + \beta^*,\langlebel{3} \end{align} equation (\ref{1}) can be transformed into the form \begin{align} &\frac{\exp(\sum_{\boldsymbol m}|u_{\boldsymbol m}|^2 (e^{-\ri\boldsymbol m \!\cdot\!ot (\boldsymbol R - \boldsymbol R^\prime)}-1))}{\sqrt{n_{\boldsymbol k}!n_{\boldsymbol j}!}}\nonumber \\ &\times\langle0|(\opa a_{\boldsymbol j} - u_{\boldsymbol j}e^{-\ri\thp{j}{R^\prime}}+u_{\boldsymbol j}e^{-\ri\thp{j}{R}})^{n_{\boldsymbol j}}\sum_{\boldsymbol l}f\left(\opa a_{\boldsymbol l}+u_{\boldsymbol l} e^{-\ri\thp{l}{R}},\opa a^\dag_{\boldsymbol l}+u_{\boldsymbol l}^* e^{\ri\thp{l}{R^\prime}}\right)\nonumber \\ &\times(\opa a_{\boldsymbol k}^\dag + u_{\boldsymbol k}^* e^{\ri\thp{k}{R^\prime}}-u_{\boldsymbol k}^* e^{\ri\thp{k}{R}})^{n_{\boldsymbol k}}|0\rangle. \langlebel{4} \end{align} The evaluation of equation (\ref{4}) is performed in the usual manner, i.e, by noticing that $\opa a|0\rangle = \langle0|\opa a^\dag = 0$ and the vacuum average is not equal to zero only if the number of creation operators is equal to the one of annihilation operators and is an even number. \section*{Appendix C: Ground state energy} \langlebel{sec:ground_state_energy} According to equation (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_18}) of the manuscript, the ground state energy is defined as \begin{align} E_L^{(0)} = \langle \Psi^{(L)}_{\boldsymbol P}|\opa H|\Psi^{(L)}_{\boldsymbol P}\rangle \langlebel{5} \end{align} with the wave function \begin{align} \langlebel{6} |\Psi^{(L)}_{\boldsymbol P}\rangle &= \frac{1}{N_{\boldsymbol P}\sqrt{\Omega}} \int d{\boldsymbol R}\,\phi_{\boldsymbol P}({\boldsymbol r}-{\boldsymbol R})\exp \left(\ri\thp{P}{R}+ \sum_{\boldsymbol k}\left(u_{\boldsymbol k}\opa a^\dag_{\boldsymbol k}e^{-\ri\thp{k}{R}}-\frac{1}{2}u^2_{\boldsymbol k} \right)\right)| 0 \rangle \end{align} and Hamiltonian \begin{align} \opa H &= \frac{1}{2}\left(\opA{P}^2 -2 \sum_k \opa a_k^\dag \opa a_k \boldsymbol k \!\cdot\!ot \opA{P}+\left(\sum_k \opa a_k^\dag \opa a_k \boldsymbol k\right)^2\right)+\sum_{\boldsymbol k}\omega_k \opa a^+_{\boldsymbol k}\opa a_{\boldsymbol k}\nonumber \\ &+ \frac{g}{\sqrt{\Omega}}\sum_{\boldsymbol k}\frac{1}{\sqrt{2\omega_k}} \left(e^{\ri\boldsymbol k\boldsymbol r}\opa a_{\boldsymbol k}+e^{-\ri\boldsymbol k\boldsymbol r}\opa a^+_{\boldsymbol k}\right). \langlebel{7} \end{align} The normalization constant $N_{\boldsymbol P}$ is found from the condition \begin{align} \langlebel{8} \langle\Psi^{(L)}_{\boldsymbol P}|\Psi^{(L)}_{\boldsymbol P}\rangle = 1. 
\end{align} In order to evaluate equation (\ref{8}), we use equation (\ref{4}), in which $n_{\boldsymbol k} = n_{\boldsymbol j} = 0$ and $f(\opa a_{\boldsymbol l},\opa a_{\boldsymbol l}^\dag) = \delta_{\boldsymbol l,0}$. This gives immediately the result \begin{align} |N_{\boldsymbol P}|^2 = \int d{\boldsymbol R}\int d{\boldsymbol r}\,\phi_{\boldsymbol P}^*({\boldsymbol r})\phi_{\boldsymbol P}({\boldsymbol r}-{\boldsymbol R}) \exp\left(\sum_{\boldsymbol k}|u_{\boldsymbol k}|^2(e^{-\ri\thp{k}{R}}-1) + \ri \thp{P}{R}\right).\langlebel{9} \end{align} The expectation value of the energy is performed in exactly the same way. First of all, the matrix elements of the field states are calculated with the help of equation (\ref{4}). For example, if the function $f$ is chosen as $f = \boldsymbol l \opa a_{\boldsymbol l}^\dag \opa a_{\boldsymbol l}$, $n_{\boldsymbol k} = n_{\boldsymbol j} = 0$, we immediately find \begin{align} \boldsymbol Q &= \langle \sum_{\boldsymbol l}\boldsymbol l \opa a_{\boldsymbol l}^\dag \opa a_{\boldsymbol l}\rangle \nonumber \\ &=\int d\boldsymbol R d\boldsymbol R^\prime d\boldsymbol r \phi^*(\boldsymbol r- \boldsymbol R^\prime)\phi(\boldsymbol r - \boldsymbol R) \sum_{\boldsymbol l}\boldsymbol l |u_{\boldsymbol l}|^2 \nonumber \\ &\times\exp\left(\sum_{\boldsymbol m}|u_{\boldsymbol m}|^2 (e^{-\ri\boldsymbol m \!\cdot\!ot (\boldsymbol R - \boldsymbol R^\prime)}-1)+\ri(\boldsymbol P - \boldsymbol l)\!\cdot\!ot(\boldsymbol R - \boldsymbol R^\prime)\right).\langlebel{10} \end{align} Then by carrying out the change of variables $\boldsymbol R - \boldsymbol R^\prime = \boldsymbol R_1$ and $\boldsymbol r - \boldsymbol R^\prime = \boldsymbol \rho$, we obtain \begin{align} \boldsymbol Q &= \frac{1}{|N_{\boldsymbol P}|^2} \sum_{\boldsymbol k}\boldsymbol k |u_{\boldsymbol k}|^2\int d{\boldsymbol R} d{\boldsymbol r}\,\phi_{\boldsymbol P}^*({\boldsymbol r})\phi_{\boldsymbol P}({\boldsymbol r - \boldsymbol R}) e^{\Phi (\boldsymbol R) + \ri (\boldsymbol P - \boldsymbol k)\!\cdot\!ot \boldsymbol R},\langlebel{11} \\ \Phi (\boldsymbol R)&= \sum_{\boldsymbol k}|u_{\boldsymbol k}|^2(e^{-\ri\thp{k}{R}}-1). \nonumber \end{align} All other matrix elements are evaluated in exactly the same fashion. 
Consequently, we obtain expression (\ref{eq:iteration_scheme_basis_zero_order_approximation_to_the_energy_of_the_system_20}) of the manuscript: \begin{align} &E_{L}^{(0)}(\boldsymbol P,g)= \frac{P^2}{2} - \thp{P}{Q} + G + E_{\text{f}}(\boldsymbol P) + E_{\text{int}}(\boldsymbol P); \label{12} \\ &\boldsymbol Q = \frac{1}{|N_{\boldsymbol P}|^2} \sum_{\boldsymbol k}\boldsymbol k |u_{\boldsymbol k}|^2\int d{\boldsymbol R} d{\boldsymbol r}\,\phi_{\boldsymbol P}^*({\boldsymbol r})\phi_{\boldsymbol P}({\boldsymbol r - \boldsymbol R}) e^{\Phi (\boldsymbol R) + \ri (\boldsymbol P - \boldsymbol k)\cdot \boldsymbol R}; \nonumber \\ & G =\frac{1}{2} \frac{1}{|N_{\boldsymbol P}|^2}\sum_{\boldsymbol m,\boldsymbol l}\thp{m}{l}|u_{\boldsymbol m}|^2 |u_{\boldsymbol l}|^2\int d\boldsymbol r d\boldsymbol R \phi^*(\boldsymbol r)\phi(\boldsymbol r - \boldsymbol R)e^{\ri\thp{P}{R}+\Phi(\boldsymbol R)-\ri(\boldsymbol m + \boldsymbol l)\cdot \boldsymbol R};\nonumber \\ &E_{\text{f}}(\boldsymbol P) = \frac{1}{|N_{\boldsymbol P}|^2} \sum_{\boldsymbol k}\left(k + \frac{k^2}{2}\right)|u_{\boldsymbol k}|^2\int d{\boldsymbol R} d{\boldsymbol r}\,\phi_{\boldsymbol P}^*({\boldsymbol r})\phi_{\boldsymbol P}({\boldsymbol r-\boldsymbol R}) e^{\Phi (\boldsymbol R) + \ri (\boldsymbol P - \boldsymbol k)\cdot \boldsymbol R};\nonumber \\ &E_{\text{int}}(\boldsymbol P) = \frac{g}{|N_{\boldsymbol P}|^2 } \sum_{\boldsymbol k}\frac{u_{\boldsymbol k}}{\sqrt{2 k \Omega}} \int d{\boldsymbol R} d{\boldsymbol r}\left(\phi_{\boldsymbol P}^*({\boldsymbol r}+\boldsymbol R)\phi_{\boldsymbol P}({\boldsymbol r})+\phi_{\boldsymbol P}^*({\boldsymbol r})\phi_{\boldsymbol P}({\boldsymbol r}-{\boldsymbol R})\right)e^{\Phi (\boldsymbol R) + \ri (\thp{P}{R} + \thp{k}{r})};\nonumber \\ &\Phi (\boldsymbol R)= \sum_{\boldsymbol k}|u_{\boldsymbol k}|^2(e^{-\ri\thp{k}{R}}-1);\nonumber \\ &|N_{\boldsymbol P}|^2 = \int d{\boldsymbol R} d{\boldsymbol r}\,\phi_{\boldsymbol P}^*({\boldsymbol r})\phi_{\boldsymbol P}({\boldsymbol r}-{\boldsymbol R}) e^{\Phi(\boldsymbol R) + \ri \thp{P}{R}}.\nonumber \end{align} We recall here once more that the trial wave function reads \begin{align} \phi(\boldsymbol r) = \frac{\lambda^{\frac{3}{2}}}{\pi^{\frac{3}{4}}}e^{-\frac{\lambda^2 r^2}{2}}, \label{13} \end{align} while the classical component of the field and the Fourier transform of $\phi(\boldsymbol r)$ are given by \begin{align} u_{\boldsymbol k} &= -\frac{g}{\sqrt{2 \Omega}}\frac{1}{\sqrt{k^3}}\int d\boldsymbol r |\phi(\boldsymbol r)|^2 e^{-\ri\thp{k}{r}} = -\frac{g}{\sqrt{2 \Omega}}\frac{e^{-\frac{k^2}{4 \lambda^2}}}{\sqrt{k^3}}; \label{14} \\ \phi_{\boldsymbol k} &= \int d\boldsymbol r \phi(\boldsymbol r)e^{-\ri\thp{k}{r}} = 2\sqrt{2}\frac{\pi^{\frac{3}{4}}}{\lambda^{\frac{3}{2}}} e^{-\frac{k^2}{2 \lambda^2}} = \phi_0 e^{-\frac{k^2}{2 \lambda^2}}. \label{15} \end{align} In order to calculate the energy, we first neglect the function \begin{align} \Phi(R) = \sum_{\boldsymbol k}|u_{\boldsymbol k}|^2 (e^{-\ri\thp{k}{R}}-1)=g^2 \frac{1}{4\pi^2}\int_0^\infty dt \frac{e^{-\frac{t^2}{2}}}{t}\left(\frac{\sin \lambda R t}{\lambda R t}-1\right)\sim g^2 \label{16} \end{align} in equation (\ref{12}). The remaining quantities can be rewritten employing the Fourier transform of the function $\phi(\boldsymbol r)$.
As for example \begin{align} \int d\boldsymbol R_1 d \boldsymbol \rho \phi^*(\boldsymbol \rho)\phi(\boldsymbol \rho - \boldsymbol R_1) e^{-\ri\thp{k}{R_1}} =\int d \boldsymbol \rho \phi^*(\boldsymbol \rho)e^{-\ri\thp{k}{\rho}}\int d \boldsymbol R \phi(\boldsymbol R)e^{\ri\thp{k}{R}} = \phi^*_{\boldsymbol k}\phi_{-\boldsymbol k} &= \phi_{\boldsymbol k}^2, \langlebel{17} \\ \int d\boldsymbol R_1 d \boldsymbol \rho \phi^*(\boldsymbol \rho)\phi(\boldsymbol \rho - \boldsymbol R_1) = |\phi_0|^2 &= \phi_0^2, \langlebel{18} \\ \int d \boldsymbol R_1 d \boldsymbol \rho \phi^*(\boldsymbol \rho)\phi(\boldsymbol \rho - \boldsymbol R_1) e^{-\ri\thp{k}{\rho}} = \phi^*_{\boldsymbol k}\phi_0 &= \phi_{\boldsymbol k}\phi_0, \langlebel{19} \end{align} and by plugging equations (\ref{17}-\ref{19}) into equation (\ref{12}), one finds \begin{align} E^{(0)}_L(0, g) = \frac{1}{2}\sum_{\boldsymbol m,\boldsymbol l}\thp{m}{l}|u_{\boldsymbol m}|^2 |u_{\boldsymbol l}|^2\,\frac{\phi_{\boldsymbol l+ \boldsymbol m}^2}{\phi_0^2}+\sum_{\boldsymbol k}\left(k + \frac{k^2}{2}\right)|u_{\boldsymbol k}|^2 \frac{\phi_{\boldsymbol k}^2}{\phi_0^2} + \frac{2g}{\sqrt{2 \Omega}}\sum_{\boldsymbol k}\frac{u_{\boldsymbol k}}{\sqrt{k}}\frac{\phi_{\boldsymbol k}}{\phi_0}. \langlebel{20} \end{align} By insertion of the definitions of the classical component of the field $u_{\boldsymbol k}$ and the Fourier transform of the function $\phi_{\boldsymbol k}$, defined in equations (\ref{14}) and (\ref{15}), we find \begin{align} E^{(0)}_L&(0, g) = \frac{g^4}{8(2\pi)^6}\int d\boldsymbol l d\boldsymbol m \frac{\thp{m}{l}}{m^3 l^3}e^{-\frac{3}{2}\frac{m^2}{\langlembda^2}-\frac{3}{2}\frac{l^2}{\langlembda^2}-\frac{2\thp{m}{l}}{\langlembda^2}}+\frac{g^2}{2(2\pi)^3}\int \frac{d\boldsymbol k}{k^2}\left(1+\frac{k}{2}\right)e^{-\frac{3}{2}\frac{k^2}{\langlembda^2}} \nonumber \\ &- \frac{g^2}{(2\pi^3)}\int \frac{d\boldsymbol k}{k^2}e^{-\frac{3}{4}\frac{k^2}{\langlembda^2}} =\frac{g^4}{8(2\pi)^5}\int \frac{d\boldsymbol l}{l^2}e^{-\frac{3}{2}\frac{l^2}{\langlembda^2}}\int dm e^{-\frac{3}{2}\frac{m^2}{\langlembda^2}}\frac{-2lm \langlembda^2 \cosh\frac{2lm}{\langlembda^2}+\langlembda^4\sinh\frac{2lm}{\langlembda^2}}{2l^2m^2} \nonumber \\ &+\frac{g^2}{24\pi^2}\left(\langlembda(-4+\sqrt{2})\sqrt{3\pi}+\langlembda^2\right) =\frac{g^4 \langlembda^2}{16(2\pi)^4}\int_0^\infty \frac{du}{u^2}\left(4u-e^{\frac{2}{3}u^2}\sqrt{6\pi}\text{Erf}\left(\sqrt{\frac{2}{3}}u\right)\right)e^{-\frac{3}{2}u^2}\nonumber \\ &+\frac{g^2}{24\pi^2}\left(\langlembda(-4+\sqrt{2})\sqrt{3\pi}+\langlembda^2\right) \nonumber \\ &=-\frac{g^4 \langlembda^2}{2^8 \pi^4}\alpha + \frac{g^2}{24\pi^2}\left(\langlembda(-4+\sqrt{2})\sqrt{3\pi}+\langlembda^2\right),\langlebel{21} \end{align} where $\alpha = 0.736559$. To find $\langlembda$, we minimize the energy, which results in the equation \begin{align} \frac{\partial E^{(0)}_L(0, g)}{\partial \langlembda} = -\frac{2g^4 \langlembda }{2^8 \pi^4}\alpha + \frac{g^2}{24\pi^2}\left((-4+\sqrt{2})\sqrt{3\pi}+2\langlembda\right),\langlebel{22} \end{align} from here we find \begin{align} \langlembda = -\frac{\sqrt{3\pi}}{2}(-4+\sqrt{2})\frac{1}{1-\frac{3 \alpha g^2}{32\pi^2}}\approx -\frac{\sqrt{3\pi}}{2}(-4+\sqrt{2})\left(1+\frac{3 \alpha g^2}{32\pi^2}\right)\langlebel{23} \end{align} and by plugging $\langlembda$ in equation (\ref{21}) \begin{align} E^{(0)}_L(0, g) = -g^2\frac{(-4+\sqrt{2})^2}{32\pi} - \frac{3 \alpha g^4(-4+\sqrt{2})^2}{2^{10}\pi^3} + O(g^6). 
\langlebel{24} \end{align} \section*{Appendix D: Mass renormalization in zeroth-order approximation} \langlebel{sec:mass_renormalization_in_zeroth_order_approximation} In the weak coupling limit we find the renormalized mass in zeroth-order approximation. For this purpose, we rewrite the energy through Fourier components for the case $\boldsymbol P\neq0$. This yields \begin{align} E^{(0)}_L(\boldsymbol P, g)&= \frac{P^2}{2} - \thp{P}{Q} + G + E_{\text{f}}(\boldsymbol P) + E_{\text{int}}(\boldsymbol P); \langlebel{25} \\ \boldsymbol Q &= \frac{1}{|N_{\boldsymbol P}|^2} \sum_{\boldsymbol k}\boldsymbol k |u_{\boldsymbol k}|^2 \phi_{\boldsymbol P - \boldsymbol k}^2; \langlebel{26} \\ G &= \frac{1}{2} \frac{1}{|N_{\boldsymbol P}|^2} \sum_{\boldsymbol m,\boldsymbol l}\thp{m}{l}|u_{\boldsymbol m}|^2 |u_{\boldsymbol l}|^2 \phi_{\boldsymbol P - \boldsymbol l - \boldsymbol m}^2;\langlebel{27} \\ E_{\text{f}}(\boldsymbol P) &= \frac{1}{|N_{\boldsymbol P}|^2} \sum_{\boldsymbol k}\left(k + \frac{k^2}{2}\right)|u_{\boldsymbol k}|^2 \phi_{\boldsymbol P - \boldsymbol k}^2; \langlebel{28} \\ E_{\text{int}}(\boldsymbol P) &= \frac{g}{|N_{\boldsymbol P}|^2 } \sum_{\boldsymbol k}\frac{u_{\boldsymbol k}}{\sqrt{2 k \Omega}} (\phi_{\boldsymbol P}\phi_{\boldsymbol P -\boldsymbol k}+\phi_{\boldsymbol P} \phi_{\boldsymbol P + \boldsymbol k}); \langlebel{29} \\ |N_{\boldsymbol P}|^2 &= \phi_{\boldsymbol P}^2.\langlebel{30} \end{align} From here we can immediately conclude that the quantity $G$ yields only a correction of the order of $g^4$ and can be neglected. Let us expand the Fourier transform of the function $\phi(\boldsymbol r)$ into Taylor series over $\boldsymbol P$ up to second-order \begin{align} \phi^2_{\boldsymbol P} &=\phi_0^2 e^{-\frac{P^2}{\langlembda^2}} \approx \phi_0^2 \left(1-\frac{P^2}{\langlembda^2}\right), \langlebel{31} \\ \phi_{\boldsymbol P - \boldsymbol k}^2 &= \phi_0^2 e^{-\frac{(\boldsymbol P - \boldsymbol k)^2}{\langlembda^2}} = \phi_0^2\left(e^{-\frac{k^2}{\langlembda^2}}+e^{-\frac{k^2}{\langlembda^2}}\frac{2\thp{P}{k}}{\langlembda^2}+e^{-\frac{k^2}{\langlembda^2}}\frac{2(\thp{P}{k})^2}{\langlembda^4}-e^{-\frac{k^2}{\langlembda^2}} \frac{P^2}{\langlembda^2}\right),\langlebel{32} \\ \phi_{\boldsymbol P - \boldsymbol k} &= \phi_0 e^{-\frac{(\boldsymbol P - \boldsymbol k)^2}{2\langlembda^2}} = \phi_0\left(e^{-\frac{k^2}{2\langlembda^2}}+e^{-\frac{k^2}{2\langlembda^2}}\frac{\thp{P}{k}}{\langlembda^2}+e^{-\frac{k^2}{2\langlembda^2}}\frac{(\thp{P}{k})^2}{2\langlembda^4}-e^{-\frac{k^2}{2\langlembda^2}} \frac{P^2}{2\langlembda^2}\right).\langlebel{33} \end{align} By plugging equations (\ref{31}-\ref{33}) into equations (\ref{26}-\ref{30}) and taking into account only the terms of the order of $P^2$, we find for the vector $\boldsymbol Q$ \begin{align} \boldsymbol Q &= \frac{1}{1-P^2/\langlembda^2}\frac{2}{\langlembda^2}\sum_{\boldsymbol k}\boldsymbol k |u_{\boldsymbol k}|^2 e^{-\frac{k^2}{\langlembda^2}}(\thp{P}{k}) = \frac{1}{1-P^2/\langlembda^2}\frac{g^2}{\langlembda^2}\frac{\boldsymbol P}{4\pi^2}\int_0^\infty dk k e^{-\frac{3}{2}\frac{k^2}{\langlembda^2}}\int_{-1}^1 t^2 dt \nonumber \\ &= \frac{g^2}{9\pi^2}\frac{\boldsymbol P}{2} \langlebel{34} \end{align} and for the field energy \begin{align} E_{\text{f}} = \frac{1}{1-P^2/\langlembda^2} \sum_{\boldsymbol k}\left(k + \frac{k^2}{2}\right)|u_{\boldsymbol k}|^2 
\left(e^{-\frac{k^2}{\langlembda^2}}+e^{-\frac{k^2}{\langlembda^2}}\frac{2\thp{P}{k}}{\langlembda^2}+e^{-\frac{k^2}{\langlembda^2}}\frac{2(\thp{P}{k})^2}{\langlembda^4}-e^{-\frac{k^2}{\langlembda^2}} \frac{P^2}{\langlembda^2}\right). \langlebel{35} \end{align} In this expression, in round brackets the first and the last terms cancel each other after decomposition of the normalization constant in the Taylor series in $\boldsymbol P$. The result reads as \begin{align} E_{\text{f}} &= E_{\text{f}}(0)+\frac{2}{\langlembda^2}\sum_{\boldsymbol k}\left(k + \frac{k^2}{2}\right)|u_{\boldsymbol k}|^2e^{-\frac{k^2}{\langlembda^2}}(\thp{P}{k})^2 \nonumber \\ &=E_{\text{f}}(0)+\frac{g^2}{\langlembda^4}\frac{P^2}{4\pi^2}\int_0^\infty k \left(k + \frac{k^2}{2}\right)e^{-\frac{3}{2}\frac{k^2}{\langlembda^2}}dk\int_{-1}^1 t^2 dt \nonumber \\ &=E_{\text{f}}(0) + \frac{P^2}{2}\frac{g^2}{9\pi^2}\frac{1}{6\langlembda}(\sqrt{6\pi}+2\langlembda). \langlebel{36} \end{align} The remaining energy is calculated in exactly the same way \begin{align} E_{\text{int}} = \frac{g}{\sqrt{2 \Omega}}\frac{1}{1-P^2/(2\langlembda^2)}\sum_{\boldsymbol k}\frac{u_{\boldsymbol k}}{\sqrt{k}}\Bigg[e^{-\frac{k^2}{2\langlembda^2}}+e^{-\frac{k^2}{2\langlembda^2}}\frac{\thp{P}{k}}{\langlembda^2}+e^{-\frac{k^2}{2\langlembda^2}}\frac{(\thp{P}{k})^2}{2\langlembda^4}-e^{-\frac{k^2}{2\langlembda^2}} \frac{P^2}{2\langlembda^2} \nonumber \\ +e^{-\frac{k^2}{2\langlembda^2}}-e^{-\frac{k^2}{2\langlembda^2}}\frac{\thp{P}{k}}{\langlembda^2}+e^{-\frac{k^2}{2\langlembda^2}}\frac{(\thp{P}{k})^2}{2\langlembda^4}-e^{-\frac{k^2}{2\langlembda^2}} \frac{P^2}{2\langlembda^2}\Bigg]. \langlebel{37} \end{align} In a full analogy to the field energy $E_{\text{f}}$, the first and the last terms are cancelled. The remaining terms are \begin{align} E_{\text{int}} &= E_{\text{int}}(0) + \frac{g}{\sqrt{2 \Omega}}\frac{1}{\langlembda^4} \sum_{\boldsymbol k}\frac{u_{\boldsymbol k}}{\sqrt{k}}e^{-\frac{k^2}{2\langlembda^2}}(\thp{P}{k})^2 \nonumber \\ &=E_{\text{int}}(0) -\frac{g^2}{\langlembda^4}\frac{P^2}{8\pi^2}\int_0^\infty k^2 e^{-\frac{3}{4}\frac{k^2}{\langlembda^2}}dk\int_{-1}^1 t^2 dt \nonumber \\ &=E_{\text{int}}(0) - \frac{g^2}{9\pi^2}\frac{1}{\langlembda}\sqrt{\frac{\pi}{3}}\frac{P^2}{2}. \langlebel{38} \end{align} By combining all results together, we find the equation for the total energy of the system with a renormalized mass, in the zeroth-order approximation: \begin{align} E^{(0)}_L(\boldsymbol P, g) &= E^{(0)}_L(0, g) + \frac{P^2}{2}\left[1-\frac{g^2}{9\pi^2}\left(1+\frac{1}{\langlembda}\sqrt{\frac{\pi}{3}} - \frac{(\sqrt{6\pi}+2\langlembda)}{6\langlembda}\right)\right] \nonumber \\ &=E^{(0)}_L(0, g) + \frac{P^2}{2}\left[1-\frac{g^2}{9\pi^2}\left(\frac{2}{3}+\frac{1}{\langlembda}\frac{\sqrt{6\pi}(\sqrt{2}-1)}{6}\right)\right], \langlebel{39} \end{align} or by plugging in for $\langlembda$ according to equation (\ref{23}) we finally obtain \begin{align} E^{(0)}_L(\boldsymbol P, g) = E^{(0)}_L(0, g) + \frac{P^2}{2}\left[1-\frac{g^2}{9\pi^2}\frac{17-\sqrt{2}}{21}\right]. \langlebel{40} \end{align} Concluding, the renormalized mass is equal to \begin{align} m^{*(0)} = 1+\frac{g^2}{9\pi^2}\frac{17-\sqrt{2}}{21}. 
\langlebel{41} \end{align} \section*{Appendix E: Calculation of matrix elements in the second iteration for the energy of the system} \langlebel{sec:calculation_of_matrix_elements_in_the_second_iteration_for_the_energy_of_the_system} The calculation of the second iteration of the energy of the system requires the evaluation of the transition matrix elements $\langle\Psi^{(L)}_{\boldsymbol P}| \opa H |\Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}\rangle$, $\langle\Psi^{(L)}_{\boldsymbol P}| \Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}\rangle$ and $\langle \Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}| \opa H |\Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}\rangle$ from the full Hamiltonian of the system, equation (\ref{7}), with the function \begin{align} |\Psi^{(0)}_{\boldsymbol P_1, n_{\boldsymbol k}}\rangle &= \frac{1}{N_{\boldsymbol P_1,1_{\boldsymbol k}}\sqrt{\Omega}}\int d \boldsymbol R \phi_{\boldsymbol P_1}(\boldsymbol r- \boldsymbol R) \exp\left\{\ri (\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R\right\}\nonumber \\ &\times\exp\left[\sum_{\boldsymbol k}(u_{\boldsymbol k} e^{-\ri\thp{k}{R}}\opa a_{\boldsymbol k}^\dag - u_{\boldsymbol k}^* e^{\ri\thp{k}{R}}\opa a_{\boldsymbol k})\right]|n_{\boldsymbol k}\rangle \nonumber \\ &=\frac{1}{N_{\boldsymbol P_1,1_{\boldsymbol k}}\sqrt{\Omega}}\int d \boldsymbol R \phi_{\boldsymbol P_1}(\boldsymbol r- \boldsymbol R) \exp\left\{\ri (\boldsymbol P_1 - \boldsymbol k n_{\boldsymbol k})\!\cdot\!ot \boldsymbol R - \frac{1}{2}\sum_{{\boldsymbol k}}|u_{\boldsymbol k}|^2 + \sum_{\boldsymbol k} u_{\boldsymbol k} e^{-\ri\thp{k}{R}}\opa a_{\boldsymbol k}^\dag \right\}\nonumber \\ &\times\frac{(\opa a_{\boldsymbol k}^\dag - u_{\boldsymbol k}^* e^{\ri\thp{k}{R}})^{n_{\boldsymbol k}}}{\sqrt{n_{\boldsymbol k}!}}|0\rangle. \langlebel{42} \end{align} The normalization constant in equation (\ref{42}) is calculated with the help of equation (\ref{4}) and has the form \begin{align} |N_{\boldsymbol P_1,1_{\boldsymbol k}}|^2 &= \int d \boldsymbol R_1 d \boldsymbol \rho \phi_{\boldsymbol P_1}^*(\boldsymbol \rho)\phi_{\boldsymbol P_1}(\boldsymbol \rho - \boldsymbol R_1) e^{\ri(\boldsymbol P_1 - \boldsymbol k)\!\cdot\!ot \boldsymbol R_1 + \sum_{\boldsymbol k} |u_{\boldsymbol k}|^2(e^{-\ri\thp{k}{R_1}}-1)}\nonumber \\ &\times\left(2|u_{\boldsymbol k}|^2(\cos\thp{k}{R_1}-1)+1\right). 
\end{align} The calculation of the transition matrix element is performed with the help of equation (\ref{4}): \begin{align} \langle \Psi_{\boldsymbol P_1,n_{\boldsymbol k}}&|\opa H|\Psi_{\boldsymbol P}^{(L)}\rangle = \frac{(2\pi)^3 \delta(\boldsymbol P - \boldsymbol P_1)}{N_{\boldsymbol P_1,1_{\boldsymbol k}}^* N_{\boldsymbol P} \Omega} \int d \boldsymbol R_1 d \boldsymbol \rho \phi_{\boldsymbol P_1}^*(\boldsymbol \rho)\phi_{\boldsymbol P}(\boldsymbol \rho - \boldsymbol R_1) \nonumber \\ &\times e^{\ri\thp{P}{R_1} + \sum_{\boldsymbol k} |u_{\boldsymbol k}|^2(e^{-\ri\thp{k}{R_1}}-1)}\frac{u_{\boldsymbol k}^{n_{\boldsymbol k}} (e^{-\ri\thp{k}{R_1}}-1)^{n_{\boldsymbol k}}}{\sqrt{n_{\boldsymbol k}!}} \nonumber \\ &\times\Bigg[\frac{P_1^2}{2} + \left(\frac{1}{2}k^2+k- \thp{P_1}{k}\right) n_{\boldsymbol k} e^{-\ri\thp{k}{R_1}}(e^{-\ri\thp{k}{R_1}}-1)^{-1} \nonumber \\ &\mspace{35mu}+\frac{g}{\sqrt{2 \Omega}}\frac{e^{-\ri\thp{k}{\rho}} }{\sqrt{\omega_k}}n_{\boldsymbol k} u_{\boldsymbol k}^{-1}(e^{-\ri\thp{k}{R_1}}-1)^{-1} \nonumber \\ &\mspace{35mu}+\sum_{\boldsymbol m} \boldsymbol m |u_{\boldsymbol m}|^2 e^{-\ri\thp{m}{R_1}}\!\cdot\!ot \left(-\boldsymbol P_1+\boldsymbol k n_{\boldsymbol k} e^{-\ri\thp{k}{R_1}}(e^{-\ri\thp{k}{R_1}}-1)^{-1}\right) \nonumber \\ &\mspace{35mu}+ \sum_{\boldsymbol m} \left(\frac{1}{2}m^2+m\right) |u_{\boldsymbol m}|^2 e^{-\ri\thp{m}{R_1}}+ \frac{1}{2}k^2 n_{\boldsymbol k}(n_{\boldsymbol k}-1)e^{-2\ri\thp{k}{R_1}}(e^{-\ri\thp{k}{R_1}}-1)^{-2} \nonumber \\ &\mspace{35mu}+\frac{1}{2}\left(\sum_{\boldsymbol m} \boldsymbol m |u_{\boldsymbol m}|^2 e^{-\ri\thp{m}{R_1}}\right)^2+\frac{g}{\sqrt{2 \Omega}}\sum_{\boldsymbol m} \frac{u_{\boldsymbol m} e^{\ri\thp{m}{\rho}}}{\sqrt{\omega_m}}(e^{-\ri\thp{m}{R_1}}+1)\Bigg]. \langlebel{44} \end{align} We further obtain the cover integral \begin{align} \langle \Psi_{\boldsymbol P_1,n_{\boldsymbol k}}|\Psi_{\boldsymbol P}^{(L)}\rangle &= \frac{(2\pi)^3 \delta(\boldsymbol P - \boldsymbol P_1)}{N_{\boldsymbol P_1,1_{\boldsymbol k}}^* N_{\boldsymbol P} \Omega} \int d \boldsymbol R_1 d \boldsymbol \rho \phi_{\boldsymbol P_1}^*(\boldsymbol \rho)\phi_{\boldsymbol P}(\boldsymbol \rho - \boldsymbol R_1) e^{\ri\thp{P}{R_1} + \sum_{\boldsymbol k} |u_{\boldsymbol k}|^2(e^{-\ri\thp{k}{R_1}}-1)}\nonumber \\ &\times\frac{u_{\boldsymbol k}^{n_{\boldsymbol k}} (e^{-\ri\thp{k}{R_1}}-1)^{n_{\boldsymbol k}}}{\sqrt{n_{\boldsymbol k}!}} \langlebel{45} \end{align} and the expectation value of the Hamiltonian \begin{align} \langle \Psi_{\boldsymbol P_1,1_{\boldsymbol k}}|\opa H|\Psi_{\boldsymbol P_1,1_{\boldsymbol k}}\rangle &= \frac{P_1^2}{2} + \frac{1}{|N_{\boldsymbol P_1,1_{\boldsymbol k}}|^2 } \int d \boldsymbol R_1 d \boldsymbol \rho \phi_{\boldsymbol P_1}^*(\boldsymbol \rho)\phi_{\boldsymbol P_1}(\boldsymbol \rho - \boldsymbol R_1) e^{\ri(\boldsymbol P_1 - \boldsymbol k)\!\cdot\!ot \boldsymbol R_1 + \sum_{\boldsymbol k} |u_{\boldsymbol k}|^2(e^{-\ri\thp{k}{R_1}}-1)} \nonumber \\ &\mspace{-105mu}\times\Bigg\{k^2 |u_{\boldsymbol k}|^2e^{-\ri\thp{k}{R_1}}\nonumber \\ &\mspace{-75mu}+\left(2|u_{\boldsymbol k}|^2(\cos\thp{k}{R_1}-1)+1\right)\Bigg[ - \sum_{\boldsymbol m} \thp{P_1}{m} |u_{\boldsymbol m}|^2 e^{-\ri\thp{m}{R_1}} + \sum_{\boldsymbol m} \left(\frac{m^2}{2}+m\right)|u_{\boldsymbol m}|^2 e^{-\ri\thp{m}{R_1}}\nonumber \\ &\mspace{90mu}+\frac{1}{2}\left(\sum_{\boldsymbol m} \boldsymbol m |u_{\boldsymbol m}|^2 e^{-\ri\thp{m}{R_1}}\right)^2 + \frac{g}{\sqrt{2 \Omega}}\sum_{\boldsymbol m} \frac{u_{\boldsymbol m} 
e^{\ri\thp{m}{\rho}}}{\sqrt{\omega_m}}(e^{-\ri\thp{m}{R_1}}+1)\Bigg]\nonumber \\ &\mspace{-75mu}+\left(2|u_{\boldsymbol k}|^2(e^{-\ri\thp{k}{R_1}}-1)+1\right)\Bigg[-\thp{k}{P_1} + \sum_{\boldsymbol m} \thp{k}{m}|u_{\boldsymbol m}|^2 e^{-\ri\thp{m}{R_1}}+\frac{k^2}{2}+k\Bigg]\nonumber \\ &\mspace{-75mu}+\frac{g}{\sqrt{2 \Omega}}\Bigg[\frac{u_{\boldsymbol k} e^{\ri\thp{k}{\rho}}}{\sqrt{\omega_{\boldsymbol k}}}(e^{-\ri\thp{k}{R_1}}-1) + \frac{u_{\boldsymbol k}^* e^{-\ri\thp{k}{\rho}}}{\sqrt{\omega_{\boldsymbol k}}}(1-e^{\ri\thp{k}{R_1}})\Bigg]\Bigg\}. \label{46} \end{align} Equations (\ref{44}-\ref{46}) are valid for arbitrary coupling constants. However, in the weak coupling limit they are significantly simplified, as they can be expressed via Fourier transforms. Another significant simplification comes from the fact that the action of a single field mode on the system is inversely proportional to the square root of the normalization volume $\Omega$, that is $u_{\boldsymbol k}\sim 1/\sqrt{\Omega}$. Consequently, such terms have to be kept only when they appear under a sum over wave vectors. Within this approximation, equations (\ref{44}-\ref{46}) can be rewritten as \begin{align} \langle \Psi_{\boldsymbol P_1,n_k}|\Psi_{\boldsymbol P}^{(L)}\rangle = \frac{(2\pi)^3 \delta(\boldsymbol P - \boldsymbol P_1)}{\Omega}\frac{u_{\boldsymbol k}(\phi_{\boldsymbol P - \boldsymbol k}^2- \phi_{\boldsymbol P}^2)}{\phi_{\boldsymbol P} \phi_{\boldsymbol P - \boldsymbol k}},\label{47} \end{align} and \begin{align} \langle \Psi_{\boldsymbol P_1,n_k}|\opa H|\Psi_{\boldsymbol P}^{(L)}\rangle &= \frac{(2\pi)^3 \delta(\boldsymbol P - \boldsymbol P_1)}{\Omega}\frac{1}{\phi_{\boldsymbol P}\phi_{\boldsymbol P - \boldsymbol k}} \nonumber \\ &\times \Bigg[\frac{P_1^2}{2} u_{\boldsymbol k}(\phi_{\boldsymbol P - \boldsymbol k}^2- \phi_{\boldsymbol P}^2) + \left(\frac{k^2}{2}+k- \thp{P_1}{k}\right)u_{\boldsymbol k} \phi_{\boldsymbol P - \boldsymbol k}^2+\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol P - \boldsymbol k} \phi_{\boldsymbol P}}{\sqrt{k}}\nonumber \\ &-u_{\boldsymbol k}(\boldsymbol P_1 - \boldsymbol k)\sum_{\boldsymbol m}\boldsymbol m |u_{\boldsymbol m}|^2 \phi_{\boldsymbol P - \boldsymbol m - \boldsymbol k}^2 \nonumber \\ &+ u_{\boldsymbol k}\sum_{\boldsymbol m}\left(\frac{m^2}{2}+m\right)|u_{\boldsymbol m}|^2 \phi_{\boldsymbol P - \boldsymbol m - \boldsymbol k}^2 +\frac{u_{\boldsymbol k}}{2}\sum_{\boldsymbol l,\boldsymbol m}\thp{l}{m}|u_{\boldsymbol l}|^2|u_{\boldsymbol m}|^2 \phi_{\boldsymbol P - \boldsymbol l - \boldsymbol m - \boldsymbol k}^2 \nonumber \\ &+\frac{g}{\sqrt{2 \Omega}}u_{\boldsymbol k}\sum_{\boldsymbol m}\frac{u_{\boldsymbol m}}{\sqrt{m}}\phi_{\boldsymbol P - \boldsymbol k}(\phi_{\boldsymbol P - \boldsymbol k - \boldsymbol m}+\phi_{\boldsymbol P - \boldsymbol k + \boldsymbol m}) - u_{\boldsymbol k}\phi_{\boldsymbol P}^2 \tilde E_L^{(0)}\Bigg], \label{48} \end{align} as well as \begin{align} \langle \Psi_{\boldsymbol P_1,n_k}|\opa H|\Psi_{\boldsymbol P_1,n_k}\rangle &= \frac{P_1^2}{2}+\frac{1}{\phi_{\boldsymbol P - \boldsymbol k}^2}\Bigg[\phi_{\boldsymbol P -\boldsymbol k}^2\left(-\thp{k}{P_1}+\frac{k^2}{2}+k\right) - (\boldsymbol P_1-\boldsymbol k)\cdot\sum_{\boldsymbol m}\boldsymbol m |u_{\boldsymbol m}|^2\phi_{\boldsymbol P - \boldsymbol m - \boldsymbol k}^2\nonumber \\ &+\sum_{\boldsymbol m}\left(\frac{m^2}{2}+m\right)|u_{\boldsymbol m}|^2\phi_{\boldsymbol P - \boldsymbol m - \boldsymbol k}^2 +\frac{g}{\sqrt{2 \Omega}}\sum_{\boldsymbol m}\frac{u_{\boldsymbol m}}{\sqrt{m}}\phi_{\boldsymbol P - \boldsymbol 
k}\left(\phi_{\boldsymbol P - \boldsymbol m-\boldsymbol k}+\phi_{\boldsymbol P + \boldsymbol m-\boldsymbol k}\right)\nonumber \\ &+\frac{1}{2}\sum_{\boldsymbol l,\boldsymbol m}\thp{l}{m}|u_{\boldsymbol l}|^2|u_{\boldsymbol m}|^2 \phi_{\boldsymbol P - \boldsymbol m - \boldsymbol l - \boldsymbol k}^2 \Bigg]. \label{49} \end{align} In equations (\ref{47}-\ref{49}) we have used the expressions for the normalization constants \begin{align} N_{\boldsymbol P} = \phi_{\boldsymbol P}, \quad N_{\boldsymbol P_1,1_{\boldsymbol k}} = \phi_{\boldsymbol P_1 - \boldsymbol k} \label{50} \end{align} and separated out the energy of the zeroth-order approximation \begin{align} \tilde E_L^{(0)} &= - \boldsymbol P\sum_{\boldsymbol m}\boldsymbol m |u_{\boldsymbol m}|^2 \frac{\phi_{\boldsymbol P - \boldsymbol m}^2}{\phi_{\boldsymbol P}^2} + \frac{1}{2}\sum_{\boldsymbol l,\boldsymbol m}\thp{m}{l}|u_{\boldsymbol m}|^2 |u_{\boldsymbol l}|^2 \frac{\phi_{\boldsymbol P - \boldsymbol m - \boldsymbol l}^2}{\phi_{\boldsymbol P}^2} \nonumber \\ &+\sum_{\boldsymbol m}\left(\frac{m^2}{2} + m\right)|u_{\boldsymbol m}|^2 \frac{\phi_{\boldsymbol P - \boldsymbol m}^2}{\phi_{\boldsymbol P}^2} + \frac{g}{\sqrt{2 \Omega}}\sum_{\boldsymbol m}\frac{u_{\boldsymbol m}}{\sqrt{m}}\phi_{\boldsymbol P}\frac{(\phi_{\boldsymbol P - \boldsymbol m}+\phi_{\boldsymbol P + \boldsymbol m})}{\phi_{\boldsymbol P}}. \label{51} \end{align} \section*{Appendix F: Second iteration for the energy of the system with particle at rest} \label{sec:second_iteration_for_the_energy_of_the_system_with_particle_at_rest} For the evaluation of the second iteration for the particle energy, the behavior of the different terms in expressions (\ref{47}-\ref{49}) must be known. In order to determine them, we carry out the summations over $\boldsymbol m$. 
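For reference, in each of these sums the angular part of the $\boldsymbol m$ integration is carried out first: writing $\thp{m}{k}=mk\cos\theta$ and integrating over the solid angle gives the elementary identity
\begin{align*}
\int d\Omega_{\boldsymbol m}\, e^{-a\cos\theta} = 2\pi\int_0^{\pi} \sin\theta\, e^{-a\cos\theta}\, d\theta = \frac{4\pi\sinh a}{a},
\end{align*}
with $a = 2mk/\lambda^2$ (or $a = mk/\lambda^2$ in the third sum), which produces the $\sinh$ factors in the one-dimensional integrals of equations (\ref{52}-\ref{54}). The vector-valued sum in equation (\ref{52}) additionally involves the derivative of this expression with respect to $a$, which yields the combination of $\cosh$ and $\sinh$ terms appearing there.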
For the first sum we can write \begin{align} \boldsymbol I^{(1)}_{\boldsymbol k} &= \frac{1}{g^2\phi_{\boldsymbol k}^2}\sum_{\boldsymbol m}\boldsymbol m |u_{\boldsymbol m}|^2 \phi_{\boldsymbol m+\boldsymbol k}^2 =\frac{\phi_0^2}{g^2\phi_{\boldsymbol k}^2} \frac{g^2}{2(2\pi)^3}\int d\boldsymbol m \left(\begin{aligned} &m\sin \theta \cos \phi \\ &m\sin \theta \sin \phi \\ &m \cos \theta \end{aligned}\right)\frac{e^{-\frac{m^2}{2 \lambda^2}}}{m^3}e^{-\frac{m^2 + k^2 + 2\thp{m}{k}}{\lambda^2}} \nonumber \\ &=\frac{1}{8\pi^2}\frac{\boldsymbol k}{k}\int dm e^{-\frac{3}{2}\frac{m^2}{\lambda^2}}\frac{-2km \lambda^2 \cosh \frac{2km}{\lambda^2} + \lambda^4 \sinh \frac{2km}{\lambda^2}}{2k^2 m^2}\nonumber \\ &=\frac{\lambda^2}{32\pi^2}\frac{\boldsymbol k}{k}\frac{4k-e^{\frac{2}{3}\frac{k^2}{\lambda^2}}\sqrt{6\pi}\lambda \text{Erf}\frac{\sqrt{\frac{2}{3}}k}{\lambda}}{k^2} \underset{k\rightarrow\infty}{\longrightarrow}-\frac{\lambda^2}{32\pi^2}\frac{\boldsymbol k}{k^3}\left(-4k+e^{\frac{2}{3}\frac{k^2}{\lambda^2}}\sqrt{6\pi}\lambda\right) \nonumber \\ &\sim - \frac{\phi_0^2}{\phi_{\boldsymbol k}^2} \frac{\lambda^3 \sqrt{6\pi}}{32\pi^2}\frac{e^{-\frac{1}{3}\frac{k^2}{\lambda^2}}}{k^3}\boldsymbol k \label{52} \end{align} while for the second \begin{align} I^{(2)}_{\boldsymbol k} &= \frac{1}{g^2 \phi_{\boldsymbol k}^2}\sum_{\boldsymbol m}\left(\frac{m^2}{2}+m\right)|u_{\boldsymbol m}|^2 \phi_{\boldsymbol m+\boldsymbol k}^2 = \frac{1}{2(2\pi)^3}\int \frac{d \boldsymbol m}{m^2}\left(1+\frac{m}{2}\right)e^{-\frac{3}{2}\frac{m^2}{\lambda^2}-\frac{2\thp{m}{k}}{\lambda^2}} \nonumber \\ &=\frac{\lambda^2}{8\pi^2}\int dm\left(1+\frac{m}{2}\right)e^{-\frac{3}{2}\frac{m^2}{\lambda^2}} \frac{\sinh \frac{2km}{\lambda^2}}{km}\nonumber \\ &=\frac{\lambda^2}{96k}\left(\frac{\sqrt{6\pi}\lambda e^{\frac{2}{3}\frac{k^2}{\lambda^2}}\text{Erf}\frac{\sqrt{\frac{2}{3}}k}{\lambda}}{\pi^2}+\frac{6\text{Erfi}\frac{\sqrt{\frac{2}{3}}k}{\lambda}}{\pi}\right) \nonumber \\ &\underset{k\rightarrow\infty}{\longrightarrow} \frac{\lambda^3}{96k^2 \pi^{\frac{3}{2}}}\left(\sqrt{6}e^{\frac{2}{3}\frac{k^2}{\lambda^2}}(3+k)-\frac{6\ri k\sqrt{\pi}}{\lambda}\right) \sim \frac{\phi_0^2}{\phi_{\boldsymbol k}^2} \frac{\lambda^3\sqrt{6\pi}}{96\pi^2}\frac{e^{-\frac{1}{3}\frac{k^2}{\lambda^2}}}{k} \label{53} \end{align} and for the third \begin{align} I^{(3)}_{\boldsymbol k} &= \frac{1}{g^2 \phi_{\boldsymbol k}^2}\phi_{\boldsymbol k}\frac{g}{\sqrt{2 \Omega}}\sum_{\boldsymbol m}\frac{u_{\boldsymbol m}}{\sqrt{m}}(\phi_{\boldsymbol m+\boldsymbol k}+\phi_{\boldsymbol m - \boldsymbol k}) = - \frac{1}{2(2\pi)^3}\int \frac{d\boldsymbol m}{m^2}e^{-\frac{3}{4}\frac{m^2}{\lambda^2}}\left(e^{-\frac{\thp{m}{k}}{\lambda^2}}+e^{\frac{\thp{m}{k}}{\lambda^2}}\right) \nonumber \\ &= -\frac{\lambda^2}{2\pi^2}\int dm e^{-\frac{3}{4}\frac{m^2}{\lambda^2}}\frac{\sinh\frac{km}{\lambda^2}}{km} = -\frac{\lambda^2}{4\pi}\frac{\text{Erfi}\frac{k}{\sqrt{3}\lambda}}{k} \nonumber \\ &\underset{k\rightarrow\infty}{\longrightarrow} -\frac{\lambda^2}{4k^2 \pi^{\frac{3}{2}}}(-\ri k \sqrt{\pi}+\sqrt{3}\lambda e^{\frac{k^2}{3 \lambda^2}}) \sim -\frac{\phi_0^2}{\phi_{\boldsymbol k}^2}\frac{\lambda^3 \sqrt{3\pi}}{4\pi^2}\frac{e^{-\frac{2}{3}\frac{k^2}{\lambda^2}}}{k^2}. 
\label{54} \end{align} In the above expressions $\text{Erf}(x) = 2/\sqrt{\pi}\int_0^x e^{-z^2}dz$ and $\text{Erfi}(x) = -\ri\text{Erf}(\ri x)$ are the error function and the imaginary error function, respectively. Consequently, we can introduce the abbreviations which were used in equations (\ref{eq:second_order_iteration_for_the_energy_convergence_4}-\ref{eq:second_order_iteration_for_the_energy_convergence_8}) of the manuscript, namely \begin{align} I_{\boldsymbol k} &= \boldsymbol k \cdot \boldsymbol I_{\boldsymbol k}^{(1)}+I_{\boldsymbol k}^{(2)}+I_{\boldsymbol k}^{(3)}\underset{k\rightarrow\infty}\sim-\frac{\lambda^3\sqrt{6\pi}}{48\pi^2}\frac{e^{\frac{2k^2}{3 \lambda^2}}}{k}. \label{55} \end{align} After the determination of the asymptotic behavior of the different terms, we can find the second iteration for the energy of the system \begin{align} E^{(2)} = \frac{A}{B}. \label{56} \end{align} For the numerator we have \begin{align} A = E_L^{(0)} &+ \sum_{\boldsymbol P_1, \{n_k\}}C^{(1)}_{\boldsymbol P_1, \{n_{\boldsymbol k}\}}\langle\Psi^{(L)}_{\boldsymbol P}| \opa H |\Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}\rangle \nonumber \\ &=E_L^{(0)} + \sum_{\boldsymbol k<\boldsymbol k_0}\frac{-\left(u_{\boldsymbol k} \frac{\phi_{\boldsymbol k}}{\phi_0}\left(\frac{k^2}{2}+k\right)+\frac{g}{\sqrt{2 \Omega}}\frac{1}{\sqrt{k}}\right)^2}{(\frac{k^2}{2}+k)}+\sum_{\boldsymbol k>\boldsymbol k_0}\frac{(-\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}})(-E_0 u_{\boldsymbol k})}{\phi_{\boldsymbol k}^2 I_{\boldsymbol k}g^2} \label{57} \end{align} and for the denominator we obtain \begin{align} B = 1 &+ \sum_{\boldsymbol P_1, \{n_k\}}C^{(1)}_{\boldsymbol P_1, \{n_{\boldsymbol k}\}}\langle\Psi^{(L)}_{\boldsymbol P}| \Psi_{\boldsymbol P_1, \{ n_{\boldsymbol k}\}}\rangle \nonumber \\ &=1+ \sum_{\boldsymbol k<\boldsymbol k_0}\frac{-\left(u_{\boldsymbol k} \frac{\phi_{\boldsymbol k}^2}{\phi_0^2}\left(\frac{k^2}{2}+k\right)+\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k}}{\phi_0\sqrt{k}}\right)u_{\boldsymbol k}(1-\frac{\phi_0^2}{\phi_{\boldsymbol k}^2})}{(\frac{k^2}{2}+k)}+ \sum_{\boldsymbol k>\boldsymbol k_0}\frac{(-\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}})u_{\boldsymbol k}(\frac{\phi_{\boldsymbol k}^2}{\phi_0^2}-1)}{\phi_{\boldsymbol k}^2 I_{\boldsymbol k}g^2 }. 
\label{58} \end{align} Further explicit calculations yield \begin{align} A = E_L^{(0)} &+ \sum_{\boldsymbol k<\boldsymbol k_0}\frac{-\left(u_{\boldsymbol k} \frac{\phi_{\boldsymbol k}}{\phi_0}\left(\frac{k^2}{2}+k\right)+\frac{g}{\sqrt{2 \Omega}}\frac{1}{\sqrt{k}}\right)^2}{(\frac{k^2}{2}+k)} +\sum_{\boldsymbol k>\boldsymbol k_0}\frac{(-\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}})(-E_0 u_{\boldsymbol k})}{\phi_{\boldsymbol k}^2 I_{\boldsymbol k}g^2 } \nonumber \\ &=E_L^{(0)}-\sum_{\boldsymbol k<\boldsymbol k_0}\left(u_{\boldsymbol k}^2\left(\frac{\phi_{\boldsymbol k}}{\phi_0}\right)^2\left(\frac{k^2}{2}+k\right) +\frac{2g}{\sqrt{2 \Omega}}\frac{u_{\boldsymbol k}}{\sqrt{k}}\frac{\phi_{\boldsymbol k}}{\phi_0}+\frac{g^2}{2 \Omega}\frac{1}{k^2(k/2+1)}\right)\nonumber \\ &+\frac{g}{\sqrt{2 \Omega}}\frac{E_L^{(0)}}{g^2} \sum_{\boldsymbol k>\boldsymbol k_0}\frac{\phi_{\boldsymbol k}\phi_0 u_{\boldsymbol k}}{\sqrt{k}\phi_{\boldsymbol k}^2 I_{\boldsymbol k}} \nonumber \\ &=E_L^{(0)}-\sum_{\boldsymbol k<\boldsymbol k_0}\left(u_{\boldsymbol k}^2\left(\frac{\phi_{\boldsymbol k}}{\phi_0}\right)^2\left(\frac{k^2}{2}+k\right) +\frac{2g}{\sqrt{2 \Omega}}\frac{u_{\boldsymbol k}}{\sqrt{k}}\frac{\phi_{\boldsymbol k}}{\phi_0}\right)-\frac{g^2}{2\pi^2}\ln\left(\frac{k_0}{2}+1\right)\nonumber \\ &+\frac{E_L^{(0)}}{2(2\pi)^3} \int_{\boldsymbol k_0}^\infty d\boldsymbol k \frac{e^{-\frac{k^2}{4 \lambda^2}}}{k^2}\frac{e^{-\frac{k^2}{2 \lambda^2}}}{\frac{\lambda^3 \sqrt{6\pi}}{48\pi^2}\frac{e^{-\frac{1}{3}\frac{k^2}{\lambda^2}}}{k}} \nonumber \\ &=E_L^{(0)}-\sum_{\boldsymbol k<\boldsymbol k_0}\left(u_{\boldsymbol k}^2\left(\frac{\phi_{\boldsymbol k}}{\phi_0}\right)^2\left(\frac{k^2}{2}+k\right) +\frac{2g}{\sqrt{2 \Omega}}\frac{u_{\boldsymbol k}}{\sqrt{k}}\frac{\phi_{\boldsymbol k}}{\phi_0}\right)-\frac{g^2}{2\pi^2}\ln\left(\frac{k_0}{2}+1\right)\nonumber \\ &+E_{L}^{(0)}\frac{12}{\lambda^3 \sqrt{6\pi}} \int_{k_0}^\infty dk k e^{-\frac{5}{12}\frac{k^2}{\lambda^2}} \nonumber \\ &=E_L^{(0)}-\left[\frac{g^2\lambda}{24 \pi ^2}\left(\sqrt{6 \pi } \text{Erf}\left(\frac{\sqrt{\frac{3}{2}} k_0}{\lambda }\right)+\lambda -\lambda e^{-\frac{3k_0^2}{2 \lambda ^2}}\right)-\frac{g^2\lambda}{2 \sqrt{3}\pi^{3/2}}\text{Erf}\left(\frac{\sqrt{3}k_0}{2\lambda }\right)\right]\nonumber \\ &-\frac{g^2}{2\pi^2}\ln\left(\frac{k_0}{2}+1\right)+E_L^{(0)}\frac{12\sqrt{6\pi}}{5 \lambda \pi}e^{-\frac{5k_0^2}{12 \lambda^2}} \label{59} \end{align} and \begin{align} B &= 1+ \sum_{\boldsymbol k<\boldsymbol k_0}\frac{-\left(u_{\boldsymbol k} \frac{\phi_{\boldsymbol k}^2}{\phi_0^2}\left(\frac{k^2}{2}+k\right)+\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k}}{\phi_0\sqrt{k}}\right)u_{\boldsymbol k}(1-\frac{\phi_0^2}{\phi_{\boldsymbol k}^2})}{(\frac{k^2}{2}+k)} + \sum_{\boldsymbol k>\boldsymbol k_0}\frac{(-\frac{g}{\sqrt{2 \Omega}}\frac{\phi_{\boldsymbol k} \phi_0}{\sqrt{k}})u_{\boldsymbol k}(\frac{\phi_{\boldsymbol k}^2}{\phi_0^2}-1)}{\phi_{\boldsymbol k}^2 I_{\boldsymbol k}g^2}\nonumber \\ &=1+\frac{g^2}{2(2\pi)^3}\int_0^{\boldsymbol k_0} d\boldsymbol k \frac{k^2}{\lambda^2}\left[\frac{e^{-\frac{3k^2}{2 \lambda^2}}}{k^3}-\frac{e^{-\frac{3k^2}{4 \lambda^2}}}{k^2(k^2/2+k)}\right]-\frac{12}{\lambda^5 \sqrt{6\pi}}\int_{k_0}^\infty dk k^3 e^{-\frac{5}{12}\frac{k^2}{\lambda^2}} \nonumber \\ &=1+\frac{g^2}{12\pi^2}(1-e^{-\frac{3}{2}\frac{k_0^2}{\lambda^2}})-g^2f\left(\frac{k_0}{\lambda}\right)- \frac{144\sqrt{6\pi}}{25 \lambda 
\pi}\left(1+\frac{5}{12}\frac{k_0^2}{\lambda^2}\right)e^{-\frac{5k_0^2}{12 \lambda^2}} \label{60} \end{align} with \begin{align*} f(x) = \frac{1}{4\pi^2}\int_0^x\frac{tdt}{1+t/2}e^{-\frac{3}{4}t^2}. \end{align*} \end{document}
math
\begin{document} \title[Specification and Verification of Timing Properties]{Specification and Verification of Timing Properties in Interoperable Medical Systems } \author[M.~Zarneshan]{Mahsa Zarneshan\rsuper{a}} \author[F.~Ghassemi]{Fatemeh Ghassemi\rsuper{a,b}\texorpdfstring{\textsuperscript{\textdagger}}{}} \thanks{\textsuperscript{\textdagger}{} Corresponding Author} \author[E.~Khamespanah]{Ehsan Khamespanah\rsuper{a}} \author[M.~Sirjani]{Marjan Sirjani\rsuper{c}} \author[J.~Hatcliff]{John Hatcliff\rsuper{d}} \address{School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran} \email{[email protected],[email protected],[email protected]} \address{School of Computer Science, Institute for Research in Fundamental Sciences, P.O. Box 19395-5746, Tehran, Iran} \address{School of Innovation, Design and Engineering, M\"{a}lardalen University, V\"{a}ster{\aa}s, Sweden} \email{[email protected]} \address{Computer Science Department, Kansas State University, USA} \email{[email protected]} \begin{abstract} To support the dynamic composition of various devices/apps into a medical system at point-of-care, a set of communication patterns to describe the communication needs of devices has been proposed. To address timing requirements, each pattern breaks common timing properties into finer ones that can be enforced locally by the components. Common timing requirements for the underlying communication substrate are derived from these local properties. The local properties of devices are assured by the vendors at development time. Although organizations procure devices that are compatible in terms of their local properties and middleware, they may not operate as desired. The latency of the organization's network interacts with the local properties of devices. To validate the interaction among the timing properties of components and the network, we formally specify such systems in Timed Rebeca. We use model checking to verify the derived timing requirements of the communication substrate in terms of the network and device models. We provide a set of templates as a guideline to specify medical systems in terms of the formal model of patterns. A composite medical system using several devices is subject to state-space explosion. We extend the reduction technique of Timed Rebeca based on the static properties of patterns. We prove that our reduction is sound and show the applicability of our approach in reducing the state space by modeling two clinical scenarios made of several instances of patterns. \end{abstract} \maketitle \section{Introduction}\label{S:intro} Medical Application Platforms (MAPs) \cite{hatcliff2012rationale} support the deployment of medical systems that are composed of medical devices and apps. The devices, apps, and the platform itself may be developed independently by different vendors. The ASTM F2761 standard \cite{ASTM} specifies a particular MAP architecture called an Integrated Clinical Environment (ICE). The AAMI-UL 2800 standard complements F2761 by defining safety/security requirements of interoperable medical systems, including those built using the ICE architecture. Other medical domain standards such as the IEC 80001 series address safety, security, and risk management of medical information technology (IT) networks. 
All of these standards, as well as emerging regulatory guidance documents for interoperable medical devices, emphasize the importance of accurately specifying the device and app interfaces, understanding the interactions between devices and apps, as well as the implications of those interactions (and associated failures) for safety and security. Timeliness is an important safety aspect of these interactions -- sensed information and actuation control commands need to be communicated to and from medical functions within certain latency bounds. A number of approaches to medical device interfacing have been proposed. Given the needs described above, interfacing approaches and interoperability platforms would clearly benefit from specification and verification frameworks that can define the interface capabilities of devices/apps and provide an automated means for verifying properties of interactions of devices/apps as they are composed into systems. Using such a framework that supports a compositional approach, we can inspect the substitutability and compatibility of devices to have a flexible and correct composite medical system. Goals for a device interfacing framework, together with its communication needs have been described in \cite{requirements}. Working from these goals, a collection of communication patterns for MAPs was proposed in \cite{7318707} that can be implemented on widely available middleware frameworks. These communication patterns address timing properties for medical systems built on a MAP. The patterns break down the timing properties into finer properties that can be locally monitored by each component. The timing properties impose constraints on timing behavior of components like the minimum and maximum amount of time between consequent sent or handled messages. These constraints balance the message passing speed among components, and assure freshness of data. The timing requirements of the communication substrate can be derived from these local timing properties. The timing requirements of the communication substrate impose upper bounds on communication latency. With certain assumptions on the local timing properties, and configuring the network properly based on the derived requirements, we can guarantee the timing properties of the composite system. Assuming that only devices compatible in terms of timing properties are composed together, they may fail to operate as desired due to the interaction of network latency and local timing properties. Communication failures or unpredicted and undesired delays in medical systems may result in loss of life. For example, considering some of the MAP applications outlined in \cite{ASTM} and \cite{hatcliff2012rationale}, in a scenario where a Patient-Controlled Analgesic (PCA) Pump is being controlled by a monitoring app, once the app receives data from patient monitoring device indicating that a patient's health is deteriorating, the app needs to send a halt command to the pump within a certain time bound to stop the flow of an opioid into the patient. In a scenario, where an app is pausing a ventilator to achieve a higher-quality x-ray, the ventilator needs to be restarted within a certain time bound. We can verify the satisfaction of timing communication requirements with regard to the network behavior and configuration before deployment. The verification results are helpful for dynamic network configuration or capacity planning. 
We assume that components in systems (both apps and devices) satisfy their timing constraints, checked using conventional timing analysis techniques. We focus on timing issues in the communication substrate. Components have no direct control over the communication substrate performance, and ensuring that the system performs correctly under varying network performance is a key concern of the system integrator. We exploit model checking to verify that the configured devices together with assumptions about latencies in the deployed network ensure timing requirements of medical systems before deployment. Each timing requirement expresses a requirement on the communication substrate for each involved pattern in the system. Each timing requirement of communication substrate imposes an upper bound on (logical) end-to-end communication latency between two components. When a component takes part in more than one pattern simultaneously, it will receive an interleaving of messages. The upper bound of the end-to-end communication latency depends on such interleaving. Model checking technique is a suitable approach that considers all possible interleaving of messages to verify the properties. We use the actor-based modeling language Rebeca \cite{sirjani2004modeling,Sirjani06} to verify the configuration of medical systems. We exploit the timed extension of Rebeca to address local timing properties defined in terms of the timing behavior of components. Timed Rebeca \cite{TimedRebeca,SirjaniK16} is supported by the Afra tool suite which efficiently verifies timed properties by model checking. We model communication patterns such that their components communicate over a shared communication substrate. We provide a template for the shared communication substrate in Timed Rebeca, and it can be reused in modeling different medical systems irrespective of the number of components involved. In this paper we model and analyze communication patterns in Timed Rebeca using the architecture proposed for the communication patterns in the extended version of \cite{7318707}. For each pattern, this architecture considers an interface component on either side of the communication to abstract the lower-level details of the communication substrate. These interface components monitor the local timing properties of patterns. So for modeling devices/apps, we only focus on their logic for communicating messages through the interfaces of patterns reusing the proposed models of patterns. The interface components of patterns together with the communication substrate are modeled by distinct actors. Since the timing behavior of network affects the timing properties, we also consider the behavior of the underlying network on scheduling messages while modeling the communication substrate. When the number of devices increases, we may encounter state space explosion problem during model checking. To tackle the problem, we propose a reduction technique while preserving the timing properties of the communication patterns, and we prove the correctness of our technique. We implement our reduction technique in Java and build a tool that automatically reduces the state space generated by Afra. We illustrate the applicability of our reduction technique through two case studies on two clinical scenarios made of several instances of patterns. Our experimental result shows that our reduction technique can minimize the number of states significantly and make analysis of larger systems possible. 
The contributions of the paper can be summarized as: \begin{itemize} \item Modeling the communication patterns using Timed Rebeca and providing templates for building Timed Rebeca models of composite medical systems that are connected based on the communication patterns; \item Proposing a novel technique for state space reduction in model checking of Timed Rebeca models; \item Modeling and analyzing two real-world case-studies. \end{itemize} This paper extends an earlier conference publication \cite{COORD20} by adding more explanation on the theory and foundation of our reduction technique. We provide a visualization of the state-space to show the reduction in a clearer way. We also provide guidelines and templates for modeling composite systems. The experiments on considering different communication substrate models due to different networks and the first case-study are also new material. In our communication substrate models, we model the effect of networks by introducing different timing delays or priorities on transmitting the messages of patterns. The novelty of our modeling approach is that only the behavior of devices/apps needs to be modeled. Thanks to the interface components, the behavior of devices/apps is separated from the components monitoring the local timing properties. The model of the patterns used in the system is reused with no modification, and the proposed template for the communication substrate only needs to be adjusted to handle the messages of the involved patterns. Our reduction technique takes advantage of static properties of patterns to merge those states satisfying the same local timing properties of communication patterns. Although the approach in this paper is motivated by the needs of interoperable medical systems, the communication patterns and architectural assumptions that underlie the approach are application-independent. Thus, the approach can also be used in other application domains in which systems are built from middleware-integrated components, as long as the communication patterns described in this paper are used for specifying inter-component communication. \section{Communication Patterns}\label{SS:patterns} In this paper, we model communication patterns using Rebeca; here we provide an outline of the patterns (based on the content of \cite{7318707}). Devices and apps involved in a communication pattern are known as components that communicate with each other via a \emph{communication substrate}, e.g., networking system calls or a middleware. Each pattern is composed of a set of roles accomplished by components. A component may participate in several patterns with different roles simultaneously. Patterns are parameterized by a set of local timing properties whose violation can lead to a failure. In addition, each pattern has a point-to-point timing requirement that should be guaranteed by the communication substrate. There are four communication patterns: \begin{itemize} \item\textbf{Publisher-Subscriber:} a publisher role broadcasts data about a topic and every device/app that needs it can subscribe to the data. The publisher does not wait for any acknowledgement or response from subscribers, \JH{so communication is asynchronous and one-way}. \item\textbf{Requester-Responder:} a requester role requests data from a specific responder and waits for data from the responder. \item\textbf{Initiator-Executor:} an initiator role requests a specific executor to perform an action and waits for action completion or its failure. 
\item \textbf{Sender-Receiver:} a sender role sends data to a specific receiver and waits until either data is accepted or rejected. \end{itemize} \subsection{Publisher-Subscriber\label{subsec::ppparam}} In this pattern, the component with the publisher role sends a $\it publish$ message to those components that have subscribed previously. \JH{Even when there is only a single subscriber component, choosing the pattern may be appropriate in situations where one wishes to have one-way asynchronous communication between the sender and receiver. In the interoperable medical device domain, this pattern would commonly be used in situations where a bedside monitor such as a pulse oximeter is sending data such as pulse rate information ($PR$) or blood oxygenation information ($SpO_2$) to some type of remote display (like a monitor that aggregates many types of health-related information for the patient) and/or applications that watch for trends in data to generate alarms for care-givers or to trigger some type of automated change in the patient's treatment. In these situations, there is a one-way flow of information from the monitoring device to one or more consumers.} This pattern is parameterized with the following local timing properties: \begin{itemize} \item MinimumSeparation ($N_{pub}$): if the interval between two consecutive $\it publish$ messages from the publisher is less than $N_{pub}$, then the second one is dropped by announcing a \textit{fast Publication} failure. \item MaximumLatency ($L_{pub}$): if the communication substrate fails to accept $\it publish$ message within $L_{pub}$ time units, it informs the publisher of \textit{timeout}. \item MinimumRemainingLifeTime ($R_{pub}$): if the data arrives at the subscriber late, i.e., after $R_{pub}$ time units since publication, the subscriber is notified by a \textit{stale data} failure. \item MinimumSeparation ($N_{sub}$): if the interval between arrival of two consecutive messages at the subscriber is less than $N_{sub}$, then the second one is dropped. \item MaximumSeparation ($X_{sub}$): if the interval between arrival of two consecutive messages at the subscriber is greater than $X_{sub}$ then the subscriber is notified by a \textit{slow publication} failure. \item MaximumLatency ($L_{sub}$): if the subscriber fails to consume a message within $L_{sub}$ time units, then it is notified by a \textit{slow consumption} failure. \item MinimumRemainingLifeTime ($R_{sub}$): if the remaining life time of the $\it publish$ message is less than $R_{sub}$, then the subscriber is notified by a \textit{stale data} failure. \end{itemize} \JH{The timing properties are chosen to (a) enable both the producer and consumer to characterize their local timing behavior or requirements and (b) enable reasoning about the producer/consumer time behavior compatibility and important ``end-to-end" timing properties when a producer and consumer are composed. For example, the local property $N_{pub}$ allows the publisher to specify the minimum separation time between the messages that it will publish. From this value, one can derive the maximum rate at which messages will be sent. This provides a basis for potential consumer components to determine if their processing capabilities are sufficient to handle messages coming at that rate. Publisher compliance to $N_{pub}$ can be checked at run-time within the communication infrastructure of the producer, e.g. 
before outgoing messages are handed off to the communication substrate, by storing the time that the previous message was sent. Similarly, $N_{sub}$ and $X_{sub}$ are local properties for the subscriber. These allow the subscriber to state its assumptions/needs about the timing of incoming data. Figure~\ref{Fig:PubSub} gives further intuition about the purpose of and relationship between the parameters.} \JH{While the above parameters can be seen as part of component \emph{interface specifications} on both the Publisher and Subscriber components, when reasoning about end-to-end properties, the following attribute reflects a property of the networking resource upon which inter-component communication is deployed. \begin{itemize} \item MaximumLatencyOfCommSubstrate ($L_{m}$): the maximum latency of communication of messages between the Publisher and Subscriber across the communication substrate. \end{itemize} } Each communication pattern owns a non-local point-to-point \textit{timing requirement} that considers aggregate latencies across the path of the communication -- including delays introduced by application components, interfaces, and the communication substrate ($L_{m}$). In this pattern the requirement is ``for the data to be delivered with a lifetime of at least $R_{sub}$, the communication substrate should ensure that the maximum message delivery latency [across the substrate] $L_m$ does not exceed $R_{pub}-R_{sub}-L_{pub}$'' (inequality \ref{eq:pub-sub}). \begin{equation} \label{eq:pub-sub} R_{pub}-R_{sub}-L_{pub} \geq L_{m} \end{equation} \JH{Regarding the intuition of this inequality, consider that the publisher will send a piece of data with a parameter $R_{pub}$ indicating how long that data will be fresh/valid. As the message is communicated, latencies will accumulate in the PublisherInterface (maximum value is $L_{pub}$) and communication substrate (maximum value is $L_{m}$). When the message arrives at the SubscriberInterface, its remaining freshness would be $R_{pub} - L_{pub} - L_{m}$. That remaining freshness should be at least as large as $R_{sub}$, the time needed by the subscriber to do interesting application work with the value; i.e., to achieve the goals of the communication, the following inequality should hold: $R_{pub} - L_{pub} - L_{m} \geq R_{sub}$. The intuition is that $R_{pub}$ is an application property of the publisher, in essence a ``guarantee'' of freshness to consumers based on the nature of the data, and $R_{sub}$ is a ``requirement'' of the consumer (it needs data at least that fresh to do its application work). Given that $L_{pub}$ is a fixed latency in the software that interacts with the network, the network needs to guarantee that $L_{m}$ is low enough to make the inequality above hold. Using algebra to reorient the constraint so that it can be more clearly represented as a latency constraint on the communication substrate ($L_m$) yields Inequality~\ref{eq:pub-sub}.} \JH{For an example of how these parameters might be used in a medical application, assume a pulse oximeter device that publishes pulse rate data of the patient. A monitoring application might subscribe to the physiological readings from the pulse oximeter and other devices to support a ``dashboard'' that provides a composite view of device readings and generates alerts for care-givers based on a collection of physiological parameters. In such a system, the Publisher-Subscriber pattern can be used to communicate information from the pulse oximeter (publisher) to the monitoring application (subscriber). 
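To make the requirement concrete with purely illustrative numbers (these values are hypothetical and not taken from any particular device or from the case studies below), suppose the pulse oximeter guarantees a lifetime of $R_{pub}=10$ time units and an interface latency of at most $L_{pub}=1$, while the monitoring application requires $R_{sub}=4$ time units of remaining freshness to do its work. Inequality~\ref{eq:pub-sub} then bounds the acceptable substrate latency by
\begin{equation*}
L_{m} \leq R_{pub}-R_{sub}-L_{pub} = 10-4-1 = 5,
\end{equation*}
so a network that cannot guarantee delivery within $5$ time units would violate the point-to-point timing requirement of this pattern.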
In this description, there is only one subscriber (the monitoring application), but using the Publisher-Subscriber pattern is still appropriate because it allows other subscribers (e.g., a separate alarm application, or a data logging application) to be easily added. Even when there is a single subscriber, the pattern selection emphasizes that the communication is one-way. For publisher local properties, the pulse oximeter can use $N_{pub}$ to indicate the maximum rate at which it will publish blood oxygenation information (Sp$O_2$) and/or pulse rate information. In medical devices in general, this rate would typically be associated with the interval at which meaningful changes can be reflected in the reported physiological parameters. The device designer would use the $L_{pub}$ parameter to specify the maximum length of the delay associated with putting a published value out on the communication substrate that would be acceptable for safe and correct use of the device. On the subscriber side, $N_{sub}$ allows the monitoring application to specify an upper bound on the rate of incoming messages. The value chosen may be derived in part from the execution time needed to compute new information and format resulting data for the display. Intuitively, the $X_{sub}$ allows the monitoring application to indicate how frequently it needs pulse oximetry data to maintain an ``up to date'' display. The other properties can be used to characterize end-to-end (non-local) timing concerns. To ensure that care-givers receive timely dashboard information and alerts, safety requirements should specify that information is (a) communicated from the pulse oximeter device to the monitoring application with a medically appropriate bound on the latency, and (b) the received physiological parameter is currently an accurate reflection of the patient's physiological state (i.e., the parameter is “fresh” enough to support the medical intended use). Such requirements would build on the type of non-local timing requirement specified above. } \subsection{Requester-Responder} In this pattern, the component with the role requester, sends a $\it request$ message to the component with the role responder. The responder should reply within a time limit as specified by its local timing properties. \JH{In the interoperable medical device domain, this pattern would commonly be used in situations where an application needs to ``pull'' information from a medical device (e.g., retrieving the current blood pressure reading from a blood pressure device, retrieving the infusion settings from an infusion pump) or fetching patient data from medical record database.} This pattern is parameterized with the following local timing properties: \begin{itemize} \item MinimumSeparation ($N_{req}$): if interval between two consecutive $\it request$ messages is less than $N_{req}$, then the second one is dropped with a \textit{fast Request} failure. \item MaximumLatency ($L_{req}$): if the $\it response$ message does not arrive within $L_{req}$ time units, then the request is ended by a \textit{timeout} failure. \item MinimumRemainingLifeTime ($R_{req}$): if the $\it response$ message arrives at the requester with a remaining lifetime less than $R_{req}$, then the requester is notified by a \textit{stale data} failure. \item MinimumSeparation ($N_{res}$): if the duration between the arrival of two consecutive $\it request$ messages is less than $N_{res}$, then the request is dropped while announcing an \textit{excess load} failure. 
\item MaximumLatency ($L_{res}$): if the $\it response$ message is not provided within the $L_{res}$ time units, the request is ended by a \textit{timeout} failure. \item MinimumRemainingLifeTime ($R_{res}$): if the $\it request$ message with the promised minimum remaining lifetime cannot be responded to by the responder, then the request is ended by a \textit{data unavailable} failure. \end{itemize} \JH{Compared to the Publisher-Subscriber, several of the timing specification parameters are similar, while others are reoriented to focus on the completion of the end-to-end two-phase ``send the request out, get a response back'' as opposed to the one-phase goal of the Publisher-Subscriber ``send the message out''. For example, the minimum separation parameters for both the Requester $N_{req}$ and Responder $N_{res}$ are analogous to the $N_{pub}$ and $N_{sub}$ parameters of the Publisher-Subscriber pattern. The MinimumRemainingLifetime concept is extended to include a check not only on the arrival of the request at the responder ($R_{req}$) at the first phase of the communication, but also a check on the communication from the Responder back to the Requester (at the end of the second phase of the communication). } Reasoning about the end-to-end two-phase objective of this pattern now needs to consider communication substrate latencies for both the request message $L_{m}$ and the response message $L'_{m}$. The point-to-point \textit{timing requirement} defined for this pattern concerns the delivery of the response with a lifetime of at least $R_{req}$. So the communication substrate should ensure that ``the sum of [its] maximum latencies to deliver the request to the responder ($L_m$) and the resulting response to the requester ($L'_m$) does not exceed $L_{req}+R_{req}-L_{res}-R_{res}$'' (inequality \ref{eq:req-res}). \\ \begin{equation} \label{eq:req-res} L_{req}+R_{req}-L_{res}-R_{res} \geq L_{m}+L'_{m} \end{equation} \JH{ For an example of how this pattern might be used in a medical application, consider a medical application that requires a blood pressure reading. The application would send a request message to a blood pressure device (with maximum communication substrate latency $L_{m}$), the blood pressure device would either return the most recent reading or acquire a new reading (with latency of $L_{res}$ to obtain the value within the device), and then the device would send a response message to the requester (with maximum communication substrate latency $L'_{m}$). $L_{req}$ expresses the application's requirement on the overall latency of the interaction. The lifetime parameters can be used in a manner similar to that of the Publisher-Subscriber pattern.} \subsection{Initiator-Executor} In this pattern, the component with the initiator role requests a specific component with the executor role to execute an action. The executor should provide an appropriate acknowledgment message (action succeeded, action failed or action unavailable) within a time limit as specified by its local timing properties. \JH{In interoperable medical applications, this pattern would typically be used by an application to instruct an actuation device to perform some action. For example, an infusion control application might use the pattern to start or stop the infusion process. 
A computer-assisted surgery application might use the pattern to instruct the movement of computer-controlled surgical instruments.} This pattern is parameterized with the following local timing properties: \begin{itemize} \item MinimumSeparation ($N_{ini}$): if the interval between two consecutive $\it initiate$ messages is less than $N_{ini}$, then the second one is dropped with a \textit{fast init} failure. \item MaximumLatency ($L_{ini}$): if the $\it acknowledgment$ message does not arrive within $L_{ini}$ time units, then the request is ended by a \textit{timeout} failure. \item MinimumSeparation ($N_{exe}$): if the duration between the arrival of two consecutive $\it initiate$ messages is less than $N_{exe}$, then the request is dropped while announcing an \textit{excess load} failure. \item MaximumLatency ($L_{exe}$): once the initiating message arrives at the Executor, if the $\it acknowledgment$ message is not provided within the $L_{exe}$ time units, the request is ended by a \textit{timeout} failure. \end{itemize} \JH{$N_{ini}$ can be seen as a guarantee in the interface specification of the Initiator not to send messages faster than a certain rate. $L_{ini}$ is a requirement that the Initiator has on the overall latency of the action. Failure of the system to satisfy this property might lead the initiating component to raise an alarm or take some other corrective action necessary for safety. $N_{exe}$ can be understood as a requirement that the Executor has related to its ability to handle action requests. $L_{exe}$ can be seen as a guarantee that the Executor provides to either perform the action or generate a timeout message within a certain time bound.} The point-to-point \textit{timing requirement} defined for this pattern concerns the delivery of data within maximum latencies. \JH{The overall latency of the actions of sending the execution command $L_m$, the executor carrying out the action $L_{exe}$, and sending the acknowledgment $L'_m$ should not exceed the requirement on the overall latency specified by the Initiator $L_{ini}$. Some algebra on this relationship to focus on the requirements of the communication substrate yields the following inequality (inequality \ref{eq:ini-exe}).} \\ \begin{equation} \label{eq:ini-exe} L_{ini}-L_{exe} \geq L_{m}+L'_{m} \end{equation} For example, in the X-Ray/Ventilator synchronization in Section~\ref{sec:x-ray-app}, a coordinating application needs to send commands to both the X-Ray and Ventilator. The Initiator-Executor pattern can be used to control both of these devices with the minimum separation constraints as used in previous patterns. The $L_{ini}$ parameter would be used to specify the requirement on the maximum latency of each interaction. \subsection{Sender-Receiver} In this pattern, the component with the sender role sends data to a specific component with the receiver role. The receiver should reply with an appropriate acknowledgment message (data accepted or data rejected) within a time limit as specified by its local timing properties. \JH{This pattern is structurally and semantically very similar to the Initiator-Executor pattern. 
It is only presented as a separate pattern to distinguish the fact that the receiving component only accepts data and, e.g., stores it rather than performing an action that may impact the external environment.} This pattern is parameterized with the following local timing properties: \begin{itemize} \item MinimumSeparation ($N_{sen}$): if the interval between two consecutive $\it send$ messages is less than $N_{sen}$, then the second one is dropped with a \textit{fast send} failure. \item MaximumLatency ($L_{sen}$): if the $\it acknowledgment$ message does not arrive within $L_{sen}$ time units, then the data transfer is ended by a \textit{timeout} failure. \item MinimumSeparation ($N_{rec}$): if the duration between the arrival of two consecutive $\it send$ messages is less than $N_{rec}$, then the data is dropped while announcing an \textit{excess load} failure. \item MaximumLatency ($L_{rec}$): if the $\it acknowledgment$ message is not provided within the $L_{rec}$ time units, the data transfer is ended by a \textit{timeout} failure. \end{itemize} The point-to-point \textit{timing requirement} defined for this pattern concerns the delivery of data within maximum latencies. So the communication substrate should ensure that ``the sum of maximum latencies to deliver the sent data to the receiver ($L_m$) and the resulting acknowledgment to the sender ($L'_m$) does not exceed $L_{sen}-L_{rec}$'' (inequality \ref{eq:sen-rec}). \\ \begin{equation} \label{eq:sen-rec} L_{sen}-L_{rec} \geq L_{m}+L'_{m} \end{equation} \JH{In interoperable medical applications, this pattern would typically be used to change the settings on a device or to update a record in some electronic medical record. For example, assume a BP monitor that measures blood pressure every 3 minutes. The monitoring application could use the pattern to change the settings on the device to an interval of 1 minute.} \section{Timed Rebeca and Actor Model} The actor model \cite{agha1985actors,hewitt1977viewing} is a concurrent model based on computational objects, called actors, that communicate asynchronously with each other. Actors are encapsulated modules with no shared variables. Each actor has a unique address and mailbox. Messages sent to an actor are stored in its mailbox. Each actor is defined through a set of message handlers that specify the actor's behavior upon processing each message. Rebeca \cite{sirjani2004modeling,Sirjani06} is an actor-based modeling language with a Java-like syntax which aims to bridge the gap between formal verification techniques and the real-world software engineering of concurrent and distributed applications. Rebeca is supported by a robust model checking tool, named Afra\footnote{\url{http://www.rebeca-lang.org/alltools/Afra}}. Timed Rebeca is an extension of Rebeca for modeling and verification of concurrent and distributed systems with timing constraints. As all timing properties in communication patterns are based on time, we use Timed Rebeca for modeling and formal analysis of patterns by Afra. Hereafter, we use Rebeca as short for Timed Rebeca in the paper. \begin{figure} \caption{Abstract syntax of Timed Rebeca. Angle brackets $\langle~\rangle$ denote meta parentheses, and the superscripts $+$ and $*$ are used for repetition of one or more times and zero or more times, respectively. Combination of $\langle~\rangle$ with repetition is used for a comma-separated list. Brackets $[~]$ are used for optional syntax. 
Identifiers $C$, $T$, $m$, $v$, $c$, $e$, and $r$ denote class, type, method name, variable, constant, expression, and rebec name, respectively.} \label{Fig::TimedRebecaGrammar} \end{figure} The syntax of Timed Rebeca \cite{TimedRebeca,SirjaniK16} is given in Figure \ref{Fig::TimedRebecaGrammar}. Each Rebeca model contains the definition of \textit{reactive classes} and a \textit{main} part. The main part contains instances of reactive classes. These instances are actors that are called rebecs. Reactive classes have three parts: \textit{known rebecs}, \textit{state variables} and \textit{message servers}. Each rebec can communicate with its known rebecs or itself. The local state of a rebec is indicated by its state variables and the received messages which are in the rebec's mailbox. Rebecs are reactive; there is no explicit receive or mailbox manipulation. \EK{Messages trigger the execution of the statements of message servers when they are taken from the mailbox. An actor can change its state variables through assignment statements, make decisions through conditional statements, communicate with other actors by sending messages, and perform periodic behavior by sending messages to itself. A message server may have a \emph{nondeterministic assignment} statement which is used to model the nondeterminism in the behavior of a message server. The timing features are \textit{computation time}, \textit{message delivery time} and \textit{message expiration}. Computation time is shown by the \textit{delay} statement. Message delivery and expiration times are expressed by associating \textit{after} and \textit{deadline} values with message sending statements}. \begin{example} A simple request-response system specified in Timed Rebeca is given in Figure \ref{Fig:ُTimedRebecaModelingExample}. This model has two rebecs: $\it req$ is an instance of class $\it Requester$ while $\it res$ is an instance of $\it Responder$. The size of the rebec mailboxes is specified by $(5)$ after the name of the classes. These two rebecs are passed as the known rebecs of each other when they are instantiated in lines $27-28$. Each class has a message server with the same name as the class name, which acts similarly to a class constructor in object-oriented languages. Rebec $\it req$ initially sends a message $\it request$ to itself upon executing its constructor. The global time is initially $0$. Rebec $\it req$ takes the message $\it request$ from its mailbox to handle it. By executing the statement ${\it delay}(3)$ it is blocked until time $3$. The rebec $\it req$ resumes its execution at time $3$ by sending a ``$\it request$" message to the rebec $\it res$. This message is delivered to the rebec $\it res$ after a delay of $8$, i.e., at time $11$. At time $11$, rebec $\it res$ takes the message ``$\it request$" from its mailbox. Upon executing its message server, it sends a message ``$\it response$" to $\it req$ which will be delivered at time $16$. Rebec $\it req$ takes the message ``$\it response$" from its mailbox at time $16$ and sends a message ``$\it request$" to itself. \end{example} \begin{figure} \caption{A simple request-response system in Timed Rebeca} \label{Fig:ُTimedRebecaModelingExample} \end{figure} \subsection{Standard Semantics of Timed Rebeca Models} \EK{The formal semantics of Timed Rebeca models is presented as a labeled transition system (LTS), a basic model for defining the semantics of reactive systems \cite{baier}. 
A LTS is defined by the quadruple $\langle S,\rightarrow,L,s_{0} \rangle$ where $S$ is a set of states, $L$ a set of labels, $\rightarrow\,\subseteq S\times L\times S$ a set of transitions, and $s_{0}$ the initial state. Let $s\overto{\alpha}t$ denote $(s,\alpha,t)\in\rightarrow$. Timed Transition System (TTS), a basic computational model of realtime systems, generalizes the basic computation model of transition systems by associating an interval with each transition to indicate how long a transition takes \cite{DBLP:conf/rex/HenzingerMP91}. In a TTS, transitions are partitioned into two classes: instantaneous transitions (in which time does not progress), and time ticks when the global clock is incremented. These time ticks happen when all participants ``agree'' for time elapse. The standard semantics of Timed Rebeca is defined in terms of TTS as described in \cite{DBLP:journals/scp/KhamespanahKS18}. In the following, the brief description of this semantics is presented based on \cite{DBLP:journals/scp/KhamespanahKS18}. Assume that $\mathit{AID}$ is the set of the identifiers of all of the rebecs, $\mathit{MName}$ is the set of the names of all of the message servers, $\mathit{Var}$ is the set of the all of the identifiers of variables, and $\mathit{Val}$ is all of the possible values of variables. Also, $\mathit{Msg} = \mathit{AID} \times \mathit{MName} \times (\mathit{Var} \rightarrow \mathit{Val}) \times \mathbb{N} \times \mathbb{N}$ is the type of messages which are passed among actors. In a message $(i,m,r,a,d) \in \mathit{Msg}$, $i$ is the identifier of the sender of this message, $m$ is the name of its corresponding method, $r$ is a function mapping argument names to their values, $a$ is its arrival time, and $d$ is its deadline. Also assume that the set $\powerset{\mathit{A}}$ is the power set and $\powermultiset{\mathit{A}}$ is the power multiset of its given set $A$. \begin{defi} For a given Timed Rebeca model $\mathcal{M}$, $\mathit{TTS_{\mathcal{M}}}=(S, \rightarrow, Act, s_0)$ is its standard semantics such that: \begin{itemize} \item The global state of a Timed Rebeca model is represented by a function $s : \mathit{AID} \rightarrow (\mathit{Var} \rightarrow \mathit{Val}) \times \powermultiset{\mathit{Msg}} \times \mathbb{N} \times \mathbb{N} \times \mathbb{N} \cup \{\epsilon\}$, which maps an actor's identifier to the local state of the actor. The local state of an actor is defined by a tuple like $(v, q, \sigma, t, r)$, where $v : \mathit{Var} \rightarrow \mathit{Val}$ gives the values of the state variables of the actor, $q : \powermultiset{\mathit{Msg}}$ is the message bag of the actor, $\sigma : \mathbb{N}$ is the program counter, $t$ is the actor local time, and $r$ is the time when the actor resumes executing remaining statements. The value of $\epsilon$ for the resuming time shows that this actor is not executing a message server. \item In the initial state of the model, for all of the actors, the values of state variables and content of the actor's message bag is set based on the statements of its constructor method, and the program counter is set to zero. The local times of the actors are set to zero and their resuming times are set to $\epsilon$. 
\item The set of actions is defined as $Act = \mathit{MName} \cup \mathbb{N} \cup \{\tau\}.$ \item The transition relation $\rightarrow \,\subseteq S \times Act \times S$ defines the transitions between states that occur as the result of actors' activities, including: taking a message from the mailbox, executing a statement, and progress in time. The latter is only enabled when the others are disabled for all of the actors. This rule performs the minimum required progress of time to enable one of the other rules. \end{itemize} \end{defi} More details and SOS rules which define these transitions are presented in \cite{DBLP:journals/scp/KhamespanahKS18}. } \begin{example} We explain the state space shown in Figure \ref{fig::TTS}, derived partially for the Rebeca model given in Figure \ref{Fig:ُTimedRebecaModelingExample}. The global state is defined by the local states of the rebecs and the global time. \EK{Note that this presentation has some minor differences in comparison with the structure of the global state in the presented semantics. As the local states of all the rebecs in TTS have the same time, in Figure \ref{fig::TTS} one value for \emph{now} is shown as the global time of the system. In addition, the values of the state variables and resuming times are omitted to make the figure simpler.} In the initial state, called $s_1$, only the rebec $\it req$ has a message ``$\it request$" in its bag. By taking this message, we have a transition of type event to the state $s_2$ while the $\it pc$ of the rebec is set to $1$, indicating that the first statement of the message server ``$\it request$" should be executed. Upon executing the delay statement, the rebec is suspended for 3 units of time. As no rebec can make progress, the global time advances to $3$ and there is a time transition to the state $s_3$. Now, rebec $\it req$ resumes its execution by executing the send statement. This execution makes a state transition to the state $s_4$ by inserting a message ``$\it request$" into the mailbox of $\it res$, setting its arrival time to $11$. \end{example} \begin{figure} \caption{TTS\label{fig::TTS}, FTTS\label{fig::FTTS}, and BFTTS\label{fig::BFTTS} for the Timed Rebeca model in Figure \ref{Fig:ُTimedRebecaModelingExample}} \label{fig::FTTSandTTS} \end{figure} \subsection{Coarse-grained Semantics of Timed Rebeca Models}\label{subsec::FTTS} \EK{The Floating Time Transition System (FTTS), introduced in \cite{RTS}, gives a natural event-based semantics for timed actors, providing a significant amount of reduction in the size of transition systems. FTTS is a coarse-grained semantics and contains only the event transitions of the standard semantics; each state transition shows the effect of handling a message by a rebec. The semantics of Timed Rebeca in FTTS is defined in terms of a transition system. The structure of states in FTTS is the same as that in TTS; however, the local times of actors in a state can be different. FTTS can be used for the analysis of Timed Rebeca models as there are no shared variables, no blocking send or receive, actors are single-threaded, and message servers are executed atomically (non-preemptively), which gives an isolated message server execution. As a result, the execution of a message server of an actor will not interfere with the execution of message servers of other actors. Therefore, all the statements of a given message server can be executed (including delay statements) during a single transition. 
\begin{defi} For a given Timed Rebeca model $\mathcal{M}$, $\mathit{FTTS_{\mathcal{M}}}=(S, \hookrightarrow, Act', s_0)$ is its floating-time semantics, where $S$ is the set of states, $s_0$ is the initial state, $Act'$ is the set of actions, and $\hookrightarrow \,\subseteq S \times Act' \times S$ is the transition relation, described as follows. \begin{itemize} \item The global state of a Timed Rebeca model $s \in S$ in FTTS is the same as in the standard semantics. In comparison with the standard semantics, the values of the \textit{program counter} and the \textit{resuming time} are set to zero and $\epsilon$, respectively, for all actors in FTTS. As a result, the states of actors in FTTS are of the form $(v, q, 0, t, \epsilon)$. In addition, there is no guarantee that the local times of actors are the same, i.e., time \emph{floats} across the actors in the transition system. \item The initial state of a model in FTTS is the same as that in TTS. \item The set of actions is defined as $Act' = \mathit{MName}.$ \item The transition relation $\hookrightarrow \,\subseteq S \times Act' \times S$ defines the transitions between states that occur as the result of actors' activities, including taking a message from the message bag and executing all of the statements of its corresponding message server. To give the formal definition of $\hookrightarrow$, we have to define the notion of \textit{idle} actors. An actor whose local state is of the form $(v, q, \sigma, t, \epsilon)$ is idle, i.e., it is not busy executing a message server. Consequently, a given state $s$ is idle if $s(x)$ is idle for every actor $x$. We use the notation $\mathit{idle}(s,x)$ to denote that the actor identified by $x$ is idle in state $s$, and $\mathit{idle}(s)$ to denote that $s$ is idle. Using these definitions, two states $s, s' \in S$ are in relation $s \longhooktrans{mg} s'$ if and only if the following conditions hold. \begin{itemize} \item $\mathit{idle}(s) \wedge \mathit{idle}(s')$, and \item $\exists\, s_1, s_2, \cdots, s_n \in S, x \in \mathit{AID} \cdot s \longtrans{mg} s_1 \rightarrow \cdots \rightarrow s_n \rightarrow s' \wedge \forall y \in \mathit{AID}\setminus\{x\},~ 1 \leq i \leq n \cdot \neg \mathit{idle}(s_i, x) \wedge \mathit{idle}(s_i, y)$ \end{itemize} \end{itemize} \end{defi} More details and SOS rules which define these transitions in FTTS are presented in \cite{RTS} and \cite{Ehsan:Thesis:2018}. } \begin{example} The FTTS of the Rebeca model given in Figure \ref{Fig:ُTimedRebecaModelingExample} is given in Figure \ref{fig::FTTS}. \EK{As mentioned before, the values of resuming times and program counters are set to $\epsilon$ and zero in FTTS, so they are not shown in the figure.} In the initial state, called $t_1$, only the rebec $\it req$ has a message ``$\it request$'' in its bag, and the local time of all rebecs is $0$. Upon handling the message ``$\it request$'', the local time of rebec $\it req$ advances to $3$ and a message ``$\it request$'' is inserted into the bag of $\it res$, as shown in the state $t_2$. Upon handling the message ``$\it request$'' by rebec $\it res$, as its arrival time is $11$, the local time of the rebec advances to $11$ in the state $t_3$. \end{example} \EK{As proved in \cite{RTS}, the FTTS and the TTS of a given Timed Rebeca model are in a weak bisimulation relation. Hence, the FTTS preserves the timing properties of its corresponding TTS, specified in the weak modal $\mu$-calculus where the actions are taking messages from the bags of actors.
There is no explicit time reset operator in Timed Rebeca, so the progress of time results in an infinite number of states in the transition systems of Timed Rebeca models, in both TTS and FTTS. However, Timed Rebeca models generally show periodic or recurrent behaviors, i.e., they perform periodic behaviors over infinite time. Based on this fact, a new equivalence relation between two states, called the \textit{shift equivalence relation}, is proposed in \cite{FTTS} to make transition systems finite. Intuitively, when building the state space, a new state $s$ may be generated in which the local states of the rebecs are the same as in an already existing state $s'$, and the new state $s$ only differs from $s'$ by a fixed shift in the time-related values (the same shift value for all of them, i.e., now, arrival times of messages, and deadlines of messages). Such new states can be merged with the older ones to make the transition systems bounded. \begin{defi}[shift-equivalence relation]\label{Def::def1} Two states $s$ and $s'$ are called \emph{shift equivalent}, denoted by $s\simeq_\delta s'$, if there exists $\delta$ such that for all rebecs with identifier $x\in\mathit{ID}$: \begin{enumerate} \item $\mathit{statevars}(s(x))=\mathit{statevars}(s'(x))$, \item $\mathit{now}(s(x)) = \mathit{now}(s'(x))+\delta$, \item $\forall m \cdot\, m\in \mathit{bag}(s(x))\Leftrightarrow (\mathit{msgsig}(m),\mathit{arrival}(m)+\delta,\mathit{deadline}(m)+\delta)\in\mathit{bag}(s'(x)).$ \end{enumerate} \end{defi} The \emph{bounded floating-time transition system} (BFTTS) $\langle S_f,\hookrightarrow, Act' ,s_{0_f}\rangle$ of a Timed Rebeca model is obtained by merging the states of its corresponding FTTS $\langle S,\hookrightarrow, Act',s_{0} \rangle$ that are in the shift equivalence relation. Shift equivalent states are merged into the one whose rebecs have the smallest local times. In \cite{RTS} it is proved that an FTTS and its corresponding BFTTS are strongly bisimilar; so, the BFTTS of a Timed Rebeca model preserves the timing properties of its corresponding FTTS. } \begin{example} The FTTS of Figure \ref{fig::FTTS} modulo shift-equivalence is partially shown in Figure \ref{fig::BFTTS}. Assume the state $t_6$ in the FTTS with the same configuration as $t_6'$ in the BFTTS. In the state $t_6$, rebec $\it req$ handles its ``$\it response$'' message and, as a consequence, it sends a ``$\it request$'' message to itself and its local clock advances to $32$. We call the resulting state $t_7$. The local clocks of the rebecs in states $t_7$ and $t_4$ differ by $16$ time units, and the values of their state variables and bag contents are equal. So, these two states are shift equivalent and are merged, resulting in the loop in the corresponding BFTTS. \end{example} \section{Modeling Patterns in Rebeca}\label{sec::model} We use the architecture proposed in \cite{7318707} for implementing communication patterns. We will explain the main components of the Publisher-Subscriber pattern, as the others are almost the same. As illustrated in Figure \ref{Fig:PubSub}, the pattern provides communication between two application components -- a \emph{client} and a \emph{service}, each of which could be either a software app or a medical device. For example, the client could be a pulse oximeter publishing SPO2 values to a monitoring application service.
In our modeling approach, each of the patterns has a component acting as an interface on either side of the communication that abstracts the lower-level details of a communication substrate. In this case, there is a \textit{PublisherRequester} component that the publisher calls to send a message through the communication substrate and a \textit{SubscriberInvoker} that receives the message from the communication substrate and interacts with the service. This structure is common in most communication middleware (e.g., the Java Message Service, or OMG's Data Distribution Service), in which APIs are provided to the application-level components and then, behind the scenes, a communication substrate handles marshalling/unmarshalling and moving the message across the network using a particular transport mechanism. In our approach to reasoning about timing properties, the interface components \textit{PublisherRequester} and \textit{SubscriberInvoker} check the local timing properties related to the client or service side, respectively. \begin{figure} \caption{Publisher-Subscriber pattern sequence diagram: \FG{see Section \ref{subsec::ppparam}}}\label{Fig:PubSub} \end{figure} We model each component of this architecture as a distinct actor or rebec in Rebeca. We explain the model of the Publisher-Subscriber pattern in detail. Other patterns are modeled using a similar approach. Figure \ref{fig:Publisher} illustrates the \textit{PublisherRequester} reactive class, which is the interface between the client (device/app) and the communication substrate. As we see in lines $3$ and $4$, it has two known rebecs: the communication substrate \textit{cs} to which messages are forwarded, and the client \textit{c} to which it will return messages indicating the success/failure status of the communication. We define the state variable $\mathit{lastPub}$ in line $5$ for saving the time of the last publication message. We use this time for computing the interval between two consecutive messages. This rebec has a message server named \textit{publish}. We pass the \emph{Lm} and \emph{life} parameters through all message servers in the model to compute the delivery time and the remaining lifetime of each message. As the communication substrate is impacted by network traffic, the communication delay between the interface and the communication substrate is non-deterministic. To model this communication delay, we define the variable \emph{clientDelay} (in line $11$) with non-deterministic values. The parameters \emph{Lm} and \emph{life} are updated in lines $12$ and $13$ based on \emph{clientDelay}. This interface is responsible for checking the $N_{pub}$ and $L_{pub}$ properties, as specified in lines $15$-$23$. To check $N_{pub}$, the interval between two consecutive \emph{publish} messages should be computed by subtracting $\mathit{lastPub}$ from the current local time of the rebec. The reserved word \emph{now} represents the local time of the rebec. As this reserved word cannot be used directly in expressions, we first assign it to the local variable \emph{time} in line $14$. If both properties are satisfied, it sends a \emph{transmitPublish} message to the communication substrate and an \emph{accepted} message to the client. \MZ{The \emph{accepted} message notifies the \emph{publisher} that the \emph{publish} message was sent to the subscriber through the communication substrate}.
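To make the flow of these checks concrete, the following Python-like sketch mirrors the publisher-side behavior described above. It is only an illustration, not the Timed Rebeca code of Figure \ref{fig:Publisher}: the names \texttt{n\_pub} and \texttt{l\_pub} stand for the configured $N_{pub}$ and $L_{pub}$ bounds, the \texttt{send} calls are placeholders for Rebeca message sends carrying an $\mathit{after}$ delay, and the exact failure conditions are assumptions based on the description in this section.
\begin{verbatim}
import random

class PublisherRequester:
    """Illustrative sketch of the publisher-side interface checks."""

    def __init__(self, cs, client, n_pub, l_pub):
        self.cs, self.client = cs, client        # known rebecs (duck-typed stubs)
        self.n_pub, self.l_pub = n_pub, l_pub    # local timing constants
        self.last_pub = 0                        # time of the last publication

    def publish(self, now, lm, life):
        client_delay = random.choice([1, 2])     # non-deterministic clientDelay
        lm, life = lm + client_delay, life - client_delay
        time = now                               # copy of the local time
        if time - self.last_pub < self.n_pub:    # publications arriving too fast
            self.client.send("fastPublicationFailure")
        elif lm > self.l_pub:                    # assumed check for the L_pub bound
            self.client.send("timeOutFailure")
        else:
            self.last_pub = time
            self.cs.send("transmitPublish", lm, life, after=client_delay)
            self.client.send("accepted", after=client_delay)
\end{verbatim}
Here \texttt{cs} and \texttt{client} are any objects exposing a \texttt{send} method; in the Rebeca model the corresponding sends carry the delay shown above as the \texttt{after} keyword argument.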
These messages are delivered to their respective receivers with a non-deterministic delay, modeled by \emph{clientDelay}, using the statement $\mathit{after}$; this means that a message is delivered to its receiver after this amount of time has passed. If the $N_{pub}$ property is violated, it sends a \emph{fastPublicationFailure} message to the client. If the $L_{pub}$ property is violated, it sends a \emph{timeOutFailure} message. \begin{figure} \caption{Modeling publisher interface in Timed Rebeca} \label{fig:Publisher} \end{figure} The \textit{communication substrate} abstracts the message-passing middleware by specifying the outcomes of message passing. To this aim, it may consider priorities among the received messages to transmit, or assign specific or non-deterministic latencies for sending messages. A specification of the \textit{communication substrate} reactive class is shown in Figure \ref{fig:CS}. It handles \emph{transmitPublish} messages by sending a $\mathit{RcvPublish}$ message to its known rebec, an instance of the \textit{SubscriberInvoker} class, in line $11$. It considers a non-deterministic communication delay for each message, modeled by the local variable \emph{netDelay} in line $8$. This rebec updates the parameters \emph{Lm} and \emph{life} based on \emph{netDelay} before sending $\mathit{RcvPublish}$ in lines $9$ and $10$. \begin{figure} \caption{Modeling communication substrate in Timed Rebeca} \label{fig:CS} \end{figure} The \textit{SubscriberInvoker} reactive class, given in Figure \ref{fig:Subsciber}, is an interface between the communication substrate and the service (device/app). It has only one known rebec, which is the destination for the messages of its instances. We define a state variable $\mathit{lastPub}$ in line $5$ to save the time of the last publication message that arrived in this rebec. This reactive class is responsible for checking the $N_{sub}$, $X_{sub}$, $R_{pub}$, $R_{sub}$, and $L_{sub}$ properties (see Subsection \ref{SS:patterns}). Message servers in this rebec are $\mathit{RcvPublish}$ and $\mathit{consume}$. The $\mathit{RcvPublish}$ server begins by modeling the communication delay between the interface and the service, assigning a non-deterministic value to the variable \emph{serviceDelay} (in line $8$). \FG{As we explained in Section \ref{SS:patterns}, a subscriber states its needs about the timing of incoming data by the parameters $N_{sub}$ and $X_{sub}$: the rate at which it consumes data.} It computes the interval between two consecutive $\mathit{RcvPublish}$ messages in lines $9-10$ and then uses this value to check that it is greater than or equal to the minimum separation constraint ($N_{sub}$) at line $11$ and less than or equal to the maximum separation constraint ($X_{sub}$) at line $14$. \FG{Otherwise, the subscriber concludes that the publication is too fast or too slow, respectively. A subscriber also states its needs about the freshness of data by the timing properties $R_{pub}$ and $R_{sub}$; if data arrives at the subscriber too late after its publication (checked by comparing \emph{Lm} and $R_{pub}$) or its remaining lifetime is less than $R_{sub}$, the subscriber concludes that the data is stale (line $17$) and sends a failure message to the service. } If the properties are satisfied, it saves the local time of the actor in $\mathit{lastPub}$ and sends a $\mathit{consume}$ message to the service $\mathit{after}$ a delay of \emph{serviceDelay}.
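The subscriber-side checks can be sketched in the same style. Again, this is an illustration rather than the Rebeca code of Figure \ref{fig:Subsciber}: the failure message names and the exact comparison operators are assumptions based on the prose above, with \texttt{n\_sub}, \texttt{x\_sub}, \texttt{r\_pub}, and \texttt{r\_sub} standing for $N_{sub}$, $X_{sub}$, $R_{pub}$, and $R_{sub}$.
\begin{verbatim}
import random

class SubscriberInvoker:
    """Illustrative sketch of the subscriber-side interface checks."""

    def __init__(self, service, n_sub, x_sub, r_pub, r_sub):
        self.service = service                   # known rebec (duck-typed stub)
        self.n_sub, self.x_sub = n_sub, x_sub    # min/max separation of publications
        self.r_pub, self.r_sub = r_pub, r_sub    # freshness constraints
        self.last_pub = 0                        # arrival time of the last publication

    def rcv_publish(self, now, lm, life):
        service_delay = random.choice([1, 2])    # non-deterministic serviceDelay
        interval = now - self.last_pub           # time between consecutive publications
        if interval < self.n_sub:                # publications too fast
            self.service.send("publicationFailure")
        elif interval > self.x_sub:              # publications too slow
            self.service.send("publicationFailure")
        elif lm > self.r_pub or life < self.r_sub:   # data is stale
            self.service.send("staleDataFailure")
        else:
            self.last_pub = now
            self.service.send("consume", lm, after=service_delay)
\end{verbatim}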
Handling the message reception notification from the service, the message server $\mathit{consumed}$ checks the $L_{sub}$ property in line $29$ and sends a failure to the service if the consumption time exceeds the specified maximum consumption latency $L_{sub}$. \begin{figure} \caption{Modeling subscriber interface in Timed Rebeca} \label{fig:Subsciber} \end{figure} The \textit{Service} reactive class, given in Figure \ref{fig:Service}, consumes the publish message with a delay and then assigns $L_m$ to $\it transmissionTime$, indicating the point-to-point message delivery latency from the publication of data to its receipt. \begin{figure} \caption{Modeling service in Timed Rebeca} \label{fig:Service} \end{figure} \subsection{Analysis of Patterns} The checking of local timing properties is encoded in the component models, and failure to satisfy such properties is indicated by notification messages sent to the relevant components. Non-local timing properties are specified using assertions in the property language of the model checker and are checked during the model checking process. An example of such an assertion (corresponding to the non-local timing property of inequality \ref{eq:pub-sub}) is shown in Figure~\ref{Fig:PubSubProperty}. $\mathit{LatencyOverLoad}$ is the name of the property, and $\mathit{transmissionTime}$ is the maximum message delivery time ($L_M$ in inequality \ref{eq:pub-sub}), which is a state variable of the $\mathit{Service}$ class. By using the $\mathit{Assertion}$ keyword, the property is required to be satisfied in all states of the model. Table \ref{Table:modelCheck} shows the results of the analysis of the patterns using Rebeca. We assign two groups of values to the parameters to show the state-space size when the timing requirement is satisfied and when it is not. As we see, the state-space size is smaller when the timing requirement inequality is not satisfied, because in this situation a path in which the property fails is found and the state-space generation stops. \begin{figure} \caption{The timing requirement for the Publisher-Subscriber pattern} \label{Fig:PubSubProperty} \end{figure} \begin{table}[h] \begin{center} \caption{Analysis of patterns in Rebeca} \begin{tabular}{|c||c|c|c|c|c|c|} \hline \multirow{2}{*}{\begin{tabular}{c} Communication\\ pattern \end{tabular}} & \multirow{2}{*}{Parameters} & \multicolumn{3}{c|}{\multirow{2}{*}{No.
states}} & \multicolumn{2}{c|}{\multirow{2}{*}{Timing requirement}} \\ & & \multicolumn{3}{c|}{} & \multicolumn{2}{c|}{} \\ \hline \multirow{2}{*}{\begin{tabular}{c} Publisher\\ Subscriber \end{tabular}} & \begin{tabular}[c]{@{}l@{}}$N_p=4,L_p=6$\\ $R_p=40, N_s=7$\\$L_s=5, X_s=20$\\$R_s=5$\end{tabular} & \multicolumn{3}{c|}{235} & \multicolumn{2}{c|}{$R_{pub}-R_{sub}-L_{pub} \geq L_{m}$} \\ \cline{2-7} & \begin{tabular}[c]{@{}l@{}}$N_p=5,L_p=3$\\ $R_p=20, N_s=4$\\$L_s=7,X_s=12$\\$R_s=10$\end{tabular} & \multicolumn{3}{c|}{56} & \multicolumn{2}{c|}{$R_{pub}-R_{sub}-L_{pub} \ngeq L_{m}$} \\ \hline \multirow{2}{*}{\begin{tabular}{c} Requester\\Responder \end{tabular}} & \begin{tabular}[c]{@{}l@{}}$N_r=2,L_r=30$\\ $R_r=15,N_s=7$\\$L_s=5, R_s=10$\end{tabular} & \multicolumn{3}{c|}{205} & \multicolumn{2}{c|}{$L_{req}+R_{req}-L_{res}-R_{res} \geq L_{m}+L'_{m}$} \\ \cline{2-7} & \begin{tabular}[c]{@{}l@{}}$N_r=4,L_r=24$\\ $R_r=18, N_s=5$\\$L_s=10, R_s=20$\end{tabular} & \multicolumn{3}{c|}{113} & \multicolumn{2}{c|}{$L_{req}+R_{req}-L_{res}-R_{res} \ngeq L_{m}+L'_{m}$} \\ \hline \multirow{2}{*}{\begin{tabular}{c} Sender\\ Receiver \end{tabular}} & \begin{tabular}[c]{@{}l@{}}$N_s=5,L_s=30$\\ $N_r=7,L_r=5$\end{tabular} & \multicolumn{3}{c|}{179} & \multicolumn{2}{c|}{$L_{sen}-L_{rec} \geq L_{m}+L'_{m}$} \\ \cline{2-7} & \begin{tabular}[c]{@{}l@{}}$N_s=4,L_s=6$\\ $N_r=7,L_r=5$\end{tabular} & \multicolumn{3}{c|}{82} & \multicolumn{2}{c|}{$L_{sen}-L_{rec} \ngeq L_{m}+L'_{m}$} \\ \hline \multirow{2}{*}{\begin{tabular}{c} Initiator\\ Executor \end{tabular}} & \begin{tabular}[c]{@{}l@{}}$N_i=3,L_i=25$\\ $N_e=5,L_e=4$\end{tabular} & \multicolumn{3}{c|}{169} & \multicolumn{2}{c|}{$L_{ini}-L_{exe} \geq L_{m}+L'_{m}$} \\ \cline{2-7} & \begin{tabular}[c]{@{}l@{}}$N_i=5,L_i=10$\\ $N_e=5,L_e=4$\end{tabular} & \multicolumn{3}{c|}{49} & \multicolumn{2}{c|}{$L_{ini}-L_{exe} \ngeq L_{m}+L'_{m}$} \\ \hline \end{tabular} \label{Table:modelCheck} \end{center} \end{table} As we assumed that apps/devices satisfy their timing constraints, the violation of each inequality shows a run-time failure in which the communication substrate fails to communicate the data quickly enough to meet the interface components' timing requirements, and so the network is not configured properly. Alternatively, it can be seen as a design error where, e.g., the interface components impose overly stringent local timing requirements (e.g., on the freshness of data). \subsection{Guidelines on Modeling Composite Medical Systems}\label{subsec::guide} Depending on the configuration of a composite medical system, devices and applications connect to each other through a specific pattern. For each connection of two devices/apps, two interface components are needed, as summarized in Table \ref{Tab::guideline}. \begin{table}[htbp] \centering \caption{Interface components and communicated messages for each pattern } \label{Tab::guideline} \begin{tabular}{|c|c|c|} \hline Pattern & Interface & Comm.
\\ & Components & Message \\ \hline \multirow{2}{*}{Publisher-Subscriber} & \textit{PublisherRequester} & \textit{transmitPublish} \\ & \textit{SubscriberInvoker} & \\ \hline \multirow{2}{*}{Requester-Responder} & \textit{RequestRequester} & \textit{transmitRequest}\\ & \textit{ResponderInvoker} & \textit{transmitResponse}\\ \hline \multirow{2}{*}{Initiator-Executor} & \textit{InitiatorRequester} & \textit{transmitInitiate}\\ & \textit{ExecutorInvoker} & \textit{transmitAck} \\ \hline \multirow{2}{*}{Sender-Receiver} & \textit{SenderRequester} & \textit{transmitSend} \\ & \textit{ReceiverInvoker} & \\ \hline \end{tabular} \end{table} In a composite medical system, there may be devices/apps that communicate over a shared message-passing middleware. In such cases, we should also share the \textit{communication substrate} among the corresponding patterns of the devices/apps. Sharing it is a design decision that keeps the model faithful to the patterns (and the system). \FG{As a shared \textit{communication substrate} communicates with all interface components of the involved patterns, we would have to pass these components via its constructor in our models. Instead, to make the specification of a shared \textit{communication substrate} independent of its interface components, we use the inheritance concept in Rebeca. We implement a base reactive class for the shared \textit{communication substrate} and all interface components, as shown in Figure \ref{fig:Base}, inspired by the approach of \cite{yousefi2019verivanca}. We define the state variable $\it id$ in line $2$ to uniquely identify rebecs. This class has a lookup method named \emph{find} to get the rebec whose identifier is given as its parameter. Thanks to the special statement \emph{getAllActors} (in line $4$), we can get access to all actors extending \textit{Base}. In the method \emph{find}, we define an array of reactive classes and initialize it by calling \emph{getAllActors}. We iterate over the actors of this array to find the actor with the given identifier (in lines $5-7$).} \begin{figure} \caption{Base Reactive Class} \label{fig:Base} \end{figure} The \textit{communication substrate} reactive class \textit{extends} the \textit{Base} class. As illustrated in Figure \ref{fig:csb}, this class has a parameter $\it id$ in its constructor for assigning the $\it id$ variable of the parent class (in line $3$). This class has no known rebec, as opposed to the one specified in Figure \ref{fig:CS}. Instead, rebecs append the identifier of the intended receiver to their messages during their communication with the substrate. The communication substrate class uses the \textit{find} method to find the rebec to which the data should be sent, based on this id (lines $6$ and $11$). \FG{The \textit{communication substrate} class includes a message server for each communicated message of all patterns, as shown in Table \ref{Tab::guideline} (in addition to those for error messages). We can remove the message servers of unused patterns for the sake of readability. However, there is no cost in keeping the additional ones: when an event is never triggered, its message server is never executed.
This class specification can be used as a template even when we have no sharing.} As the \textit{communication substrate} class in Figure \ref{fig:csb} is commonly used by the components of the Publisher-Subscriber and Requester-Responder patterns, it has the message server \textit{transmitPublish} for Publisher-Subscriber, and \textit{transmitRequest} and \textit{transmitResponse} for Requester-Responder. \begin{figure} \caption{Modeling Communication substrate using inheritance} \label{fig:csb} \end{figure} All interface components that communicate through a shared \textit{communication substrate} should also extend the \textit{Base} class. \FG{For each usage of a pattern, one instance of the interface component on each side is needed. When an interface component is instantiated, the identifier of its counterpart interface component is set via the constructor. When an interface component sends a message to its counterpart interface component via our proposed communication substrate, it includes the identifier of the counterpart entity.} Each device/app may use several patterns to communicate with other devices/apps. Depending on its role in each pattern, we consider a known rebec of the appropriate interface component. To model devices/apps, we only focus on the logic for sending messages through the interface components. \FG{We consider three types of delays in our specifications: 1) between the sender interface component and the communication substrate (\textit{after} in the sender interface in the Rebeca model), 2) between the communication substrate and the receiver interface component (\textit{after} in the communication substrate in the Rebeca model), and 3) between the receiver interface component and the receiver application (\textit{after} in the receiver interface in the Rebeca model). The sender interface component models the behavior of the communication driver, and the first type of delay, defined in this component, models the delay caused by network traffic: depending on the traffic, the driver retries until it successfully sends its message. The value of this delay is defined by \emph{clientDelay} in the sender interface component. The second type of delay, defined by \emph{netDelay} in the communication substrate, models the delay of the message-passing middleware in transferring messages (for example, caused by the routing and dispatching algorithms). The third type of delay, defined by \emph{serviceDelay} in the receiver interface components, models the delay caused by the system load: when a receiver interface receives messages, it should send them to the application components, and depending on the system load, the operating system allows the interface components to deliver their messages to the application components. } \section{State-space Reduction}\label{sec::reduction} A medical system is composed of several devices/apps that communicate with each other by using any of the communication patterns. With the aim of verifying the timing requirements of medical systems before deployment, we use the Rebeca model checking tool, Afra. As we explained in Section \ref{sec::model}, each communication pattern is modeled by at least five rebecs. It is well known that as the number of rebecs in a model increases, the state space grows significantly. For a simple medical system composed of two devices that communicate with an app, there are nine rebecs in the model (as the communication substrate is shared). In a more complex system, adding more devices may result in state-space explosion, and model checking cannot be applied.
We propose a partial order reduction technique for FTTS which merges those states satisfying the same local timing properties of communication patterns. The reduced model preserves the class of timed properties of the original model specified by the weak modal $\mu$-calculus, where the actions are taking messages from the message bag \cite{RTS}. If we use the value of $\mathit{now}$ in our Rebeca code, then it is very likely that we encounter an unbounded state space, because the first condition of the shift-equivalence relation given in Definition \ref{Def::def1} may not be satisfied. By this condition, two states are shift equivalent only if all state variables of all rebecs have the same values. Here, we suggest a restricted form of using the value of $\mathit{now}$ by specifying a set of variables called \emph{interval variables}, and we relax the first condition for such variables. The model checking tool of Rebeca, Afra, is adjusted to treat these variables differently, and hence we prevent generating an unbounded state space. Let $\mathit{Var}_\Delta$ be the set of interval variables, which are defined to only hold time values, i.e., we can only assign the value of $\mathit{now}$ to these variables at different points of the program. We use these variables to measure the time interval between two execution points of a rebec by comparing the value of $\mathit{now}$ at these two points. At the first point of execution, we assign $\mathit{now}$ to the interval variable $x\in\mathit{Var}_\Delta$, and at the second point the expression ``$\mathit{now}-x$'' measures the time interval between the first and the second point. For instance, the state variable $\mathit{lastPub}$ of the Rebeca class \emph{PublisherRequester} in the Publisher-Subscriber pattern is used to measure the interval between two consecutive $\it publish$ messages. This interval value is used to check local timing properties like $N_{sub}$, $N_{pub}$, and $X_{sub}$. As variables of $\mathit{Var}_\Delta$ can only get the value of $\mathit{now}$, we relax the first condition of Definition \ref{Def::def1} on these variables, and we treat such variables similarly to the local time (see Section \ref{subsec::FTTS}). We also relax the third condition of Definition \ref{Def::def1}, which compares the messages in the rebecs' bags. \rev{The local timing properties may impose restrictions on the transmission time or the remaining lifetime of data. In our implementations, we model the transmission time and the remaining lifetime of data by the parameters of messages, namely $L_m$ and \emph{life}, respectively (see the message server $\mathit{RcvPublish}$ in the Rebeca class \emph{SubscriberInvoker}). To check the timing property on the freshness of data, the parameter \emph{life} is compared with the constant $R_{sub}$, configured as a system parameter, in a conditional statement. This is the only place where this parameter is used within its message server. So, we can abstract the concrete value of this parameter and only consider the result of the comparison. Instead of passing $\mathit{life}$ as the parameter, we can pass the Boolean result of $\mathit{life}<R_{sub}$ upon sending the message $\mathit{RcvPublish}$. We use this interpretation to compare the messages with such parameters in bags. Instead of comparing the values of parameters one by one, for parameters similar to \emph{life} we only consider the result of the comparison.
If the values of \emph{life} for two $\mathit{RcvPublish}$ messages either both satisfy or both violate the condition $\mathit{life}<R_{sub}$ while their other parameters are equal, these messages can be considered equivalent, as both result in the same set of statements being executed (irrespective of the value of this parameter). This idea can also be applied to the message $\it RcvResponse$ in the Requester-Responder pattern, which is parameterized by \emph{life} and compared to the local timing property $R_{res}$ in its message server. Formally speaking,} we identify those messages, denoted by $\mathit{Msg}_{ex}$, whose parameters are only used in conditional statements (if-statements). We use the conditions in the if-statements for data abstraction: we can abstract the concrete values of the parameters and only consider the results of evaluating the conditions. In Definition \ref{Def::def2} (relaxed-shift equivalence relation), for simplicity, we assume that each message $m\in\mathit{Msg}_{ex}$ has only one parameter whose value we abstract, denoted by ${\it msgpar}_{ex}(m)$, based on the result of one condition, denoted by ${\it cond}(m)$. We also assume that the message has another parameter whose value is denoted by ${\it msgpar}_{\overline{ex}}(m)$. By ${\it cond}(m)({\it msgpar}_{\it ex}(m))$, we mean the result of evaluating the condition checked by the message server of $m$ over this specific parameter of $m$. The concrete values of a message parameter can be abstracted if it is only used in the conditions of if-statements. The concrete value of a parameter is needed if it is used in other statements (e.g., an assignment or send statement in Timed Rebeca). \rev{To ensure the soundness of our abstraction, we limit ${\it cond}(m)$ to propositional logic over comparisons of ${\it msgpar}_{\it ex}$ with constants. Considering more complicated conditions is among our future work.} In practice, we can find such parameters and their corresponding conditions through a static analysis of the message server or ask the programmer to identify them. \begin{defi}[relaxed-shift equivalence relation]\label{Def::def2} Two semantic states $s$ and $s'$ are relaxed-shift equivalent, denoted by $s\sim_{\delta}s'$, if there exists $\delta$ such that for all rebecs with identifier $x\in\mathit{ID}$: \begin{enumerate} \item {\small\[\begin{array}{l}\forall v\in\mathit{Var}\setminus{\mathit{Var}_\Delta} \cdot\mathit{statevars}(s(x))(v) =\mathit{statevars}(s'(x))(v),\\ \forall v\in \mathit{Var}_\Delta\cap{\it Dom}(s(x))\cdot\mathit{statevars}(s(x))(v) = \mathit{statevars}(s'(x))(v)+\delta. \end{array}\]} \item {\small$\mathit{now}(s(x)) = \mathit{now}(s'(x))+\delta$}. \item {\small \[\begin{array}{l}\forall m\not\in \mathit{Msg}_{ex} \cdot\, m\in \mathit{bag}(s(x)) \Leftrightarrow (\mathit{msgsig}(m),\mathit{arrival}(m)+\delta,\mathit{deadline}(m)+\delta)\in\mathit{bag}(s'(x)),\vspace*{1mm}\\ \forall m\in \mathit{Msg}_{ex} \cdot\, m\in \mathit{bag}(s(x)) \Leftrightarrow \exists m'\in\mathit{bag}(s'(x)) \cdot {\it Type}(\mathit{msgsig}(m))={\it Type}(\mathit{msgsig}(m')) \,\wedge \\\hspace*{1cm} \mathit{arrival}(m') =\mathit{arrival}(m)+\delta \,\wedge \, \mathit{deadline}(m')= \mathit{deadline}(m)+\delta \,\wedge \\\hspace*{1.2cm} {\it msgpar}_{\overline{ex}}(m)={\it msgpar}_{\overline{ex}}(m') \, \wedge \, ({\it cond}(m)({\it msgpar}_{\it ex}(m)) \Leftrightarrow {\it cond}(m')({\it msgpar}_{ex}(m'))).
\end{array}\]} \end{enumerate} \end{defi} We consider Timed Rebeca models in which, within message servers, $\mathit{now}$ can only be accessed for updating variables of $\mathit{Var}_\Delta$ or used in expressions like ``$\mathit{now}-x$'' (where $x\in\mathit{Var}_\Delta$) for computing an interval value. We reduce the FTTSs of such models by merging states that are relaxed-shift equivalent. The following theorem shows that an FTTS modulo relaxed-shift equivalence preserves the properties of the original one. \begin{thm}\label{Th::preserve} For the given FTTS $\langle S,s_0,\rightarrow\rangle$, assume the states $s,s'\in S$ are such that $s\sim_{\delta}s'$. If $(s,m,s^\ast)\in\rightarrow$, then there exists $s^{\ast\ast}$ such that $(s',m,s^{\ast\ast})\in\rightarrow$ and $s^\ast\sim_{\delta}s^{\ast\ast}$. \end{thm} \begin{proof} Assume an arbitrary transition $(s,m,s^\ast)\in\rightarrow$ that denotes handling the message $m$ by the rebec $i$. Based on the third condition of Definition \ref{Def::def2}, two cases can be distinguished: \begin{itemize} \item $m\not\in \mathit{Msg}_{ex}$: there exists a message $m'\in\mathit{bag}(s'(i))$ such that $m'= (\mathit{msgsig}(m),\mathit{arrival}(m)+\delta,\mathit{deadline}(m)+\delta)$. The local times of rebec $i$ in the states $s$ and $s'$ are advanced by executing the delay statements of the message servers of $m$ and $m'$, which are the same, and hence their local times still have a $\delta$-difference in the resulting states $s^\ast$ and $s^{\ast\ast}$. So, the second condition is satisfied (result $\dagger$). Due to the execution of the same message server, all variables are updated by the same expressions. Based on the constraint on the models and the assumption $s\sim_{\delta}s'$, we conclude that all the variables except those in $\mathit{Var}_\Delta$ have the same values (defined by the same expressions over untimed variables). The variables in $\mathit{Var}_\Delta$ are updated to $\mathit{now}$ and, by result $\dagger$, they still have a $\delta$-difference. So, $s^{\ast}$ and $s^{\ast\ast}$ satisfy the first condition. The messages sent to other rebecs during handling $m$ and $m'$ are sent at the same points with the same parameters (defined by expressions over untimed variables). As the local times have a $\delta$-difference, the arrival times and deadlines of the sent messages have a $\delta$-difference. So, the third condition is also satisfied. \item $m\in \mathit{Msg}_{ex}$: there exists a message $m'\in\mathit{bag}(s'(i))$ with the same message name as $m$, i.e., ${\it Type}(\mathit{msgsig}(m))$, arrival time $\mathit{arrival}(m)+\delta$, and deadline $\mathit{deadline}(m)+\delta$. The first condition of Definition \ref{Def::def2} together with the assumptions ${\it msgpar}_{\overline{ex}}(m)={\it msgpar}_{\overline{ex}}(m')$ and $({\it cond}(m)({\it msgpar}_{\it ex}(m)) \Leftrightarrow {\it cond}(m')({\it msgpar}_{ex}(m')))$ implies that the same statements are executed by the rebec $i$ during handling $m$ and $m'$. The variables $x\in\mathit{Var}_\Delta$ are only used in expressions of the form $\mathit{now}-x$, and so $\mathit{now}(s(i))-\mathit{statevars}(s(i))(x) = \mathit{now}(s'(i))-\mathit{statevars}(s'(i))(x)$. By an argument similar to the previous case, the local times of rebec $i$ in the states $s$ and $s'$ are advanced by executing the same delay statements and hence still have a $\delta$-difference in the resulting states $s^\ast$ and $s^{\ast\ast}$. So, the second condition is satisfied (result $\ddagger$).
Due to the execution of the same statements, all variables are updated by the same expressions. Based on the constraint on the models and the assumption $s\sim_{\delta}s'$, we conclude that all the variables except those in $\mathit{Var}_\Delta$ have the same values (defined by the same expressions over untimed variables). The variables in $\mathit{Var}_\Delta$ are updated to $\mathit{now}$ and, by result $\ddagger$, they still have a $\delta$-difference. So, $s^{\ast}$ and $s^{\ast\ast}$ satisfy the first condition. The messages sent to other rebecs during handling $m$ and $m'$ are sent at the same points with the same parameters (defined by expressions over untimed variables). As the local times have a $\delta$-difference, the arrival times and deadlines of the sent messages have a $\delta$-difference. So, the third condition is also satisfied. \qedhere \end{itemize} \end{proof} \begin{cor}\label{Co::preserve} For the given FTTS $\langle S,s_0,\rightarrow\rangle$, assume the states $s,s'\in S$ are such that $s\sim_{\delta}s'$. Then, $s$ and $s'$ are strongly bisimilar. \end{cor} \begin{proof} To show that $s$ and $s'$ are strongly bisimilar, we construct a strong bisimulation relation $R$ such that $(s,s')\in R$. Construct $R=\{(t,t') \mid t\sim_{\delta}t'\}$. We show that $R$ satisfies the transfer conditions of strong bisimilarity. For an arbitrary pair $(t,t')\in R$, we must show that \begin{itemize} \item if $(t,m,t^\ast)\in \rightarrow$, then there exists $t^{\ast\ast}$ such that $(t',m,t^{\ast\ast})\in \rightarrow$ and $(t^\ast,t^{\ast\ast})\in R$; \item if $(t',m,t^{\ast\ast})\in \rightarrow$, then there exists $t^{\ast}$ such that $(t,m,t^\ast)\in \rightarrow$ and $(t^\ast,t^{\ast\ast})\in R$. \end{itemize} If $(t,m,t^\ast)\in \rightarrow$, by Theorem \ref{Th::preserve}, there exists $t^{\ast\ast}$ such that $(t',m,t^{\ast\ast})\in \rightarrow$ and $t^\ast\sim_{\delta}t^{\ast\ast}$. By the construction of $R$, $t^\ast\sim_{\delta}t^{\ast\ast}$ implies that $(t^\ast,t^{\ast\ast})\in R$. A symmetric argument applies when $(t',m,t^{\ast\ast})\in \rightarrow$. We conclude that $R$ is a strong bisimulation, and trivially $(s,s')\in R$. \end{proof} The relaxed-shift equivalence preserves the conditions of shift equivalence on all variables except the time-related variables, i.e., those in $\mathit{Var}_\Delta$. Furthermore, it preserves the conditions of shift equivalence on all messages in the bag except for the messages in $\mathit{Msg}_{ex}$. But the condition on the ${\it msgpar}_{\it ex}$ parameters, i.e., $({\it cond}(m)({\it msgpar}_{\it ex}(m)) \Leftrightarrow {\it cond}(m')({\it msgpar}_{ex}(m')))$, ensures that the same statements of the corresponding message server will be executed. Therefore, by Corollary \ref{Co::preserve}, any FTTS modulo relaxed-shift equivalence is strongly bisimilar to the original FTTS, and it not only preserves the local timing properties of the original one (those properties checked on variables of $\mathit{Var}_\Delta$ in the model specification, like $\mathit{now}-\mathit{lastPub}<N_{\it pub}$) but also preserves the timing properties defined on events (taking messages from the bag). \begin{example} Figure \ref{Fig:bftts} shows part of the FTTS for the Publisher-Subscriber pattern. As we see, all local times and the values of $\mathit{lastPub}$ in states $s_{17}$ and $s_{20}$ are shifted by one unit, so the first and the second conditions of Definition \ref{Def::def2} are satisfied. The message contents of the bags are equal for all rebecs except for the rebec $si$.
This rebec has a $\it RcvPublish$ message in its bag in both states; their $\mathit{life}$ values are different, but both of them are greater than the $R_{sub}$ value. So the third condition of Definition \ref{Def::def2} is satisfied too, the states are relaxed-shift equivalent, and we can merge them. Similarly, we can merge the states $s_{22}$ with $s_{25}$ and $s_{27}$ with $s_{30}$. We remark that these states are not merged in the corresponding BFTTS. \end{example} \begin{figure} \caption{Part of the FTTS for the Publisher-Subscriber pattern: $c$ is an instance of $\it Client$, $\it pr$ of $\it PublisherRequester$, $\it cs$ of $\it CommunicationSubstrate$, $\it si$ of $\it SubscriberInvoker$, and $\it s$ of $\it Service$.} \label{Fig:bftts} \end{figure}
\tikzstyle{block} = [rectangle, draw, text width=20em, text centered, rounded corners, minimum height=2em, node distance = 12em] \tikzstyle{first} = [rectangle, draw, text width=8em, text centered, rounded corners, minimum height=2em, node distance = 15em] \tikzstyle{dotted} = [draw, densely dotted, -stealth]
\begin{tikzpicture}[scale=0.45, transform shape]
\node[block](s16) { \begin{tabularx}{\textwidth}{l|l|X} \multicolumn{3}{c}{$s_{16}$} \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{c}} & bag & [] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{pr}} & bag & [] \\ \cline{2-3} & state variables & lastPub=6 \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{cs}} & bag & [(publish(8,28),8,$\infty$)] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{si}} & bag & [] \\ \cline{2-3} & state variables & lastPub=0 \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{s}} & bag & [] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 8 \\ \end{tabularx} };
\node[block, xshift=25em](s17) { \begin{tabularx}{\textwidth}{l|l|X} \multicolumn{3}{c}{$s_{17}$} \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{c}} & bag & [(accepted(),8,$\infty$)] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{pr}} & bag & [] \\ \cline{2-3} & state variables & lastPub=6 \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{cs}} & bag & [] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{si}} & bag & [(RcvPublish(9,27),8,$\infty$)] \\ \cline{2-3} & state variables & lastPub=0 \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{s}} & bag & [] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 8 \\ \end{tabularx} };
\node[block, xshift=50em](s6) { \begin{tabularx}{\textwidth}{l|l|X} \multicolumn{3}{c}{$s_{6}$} \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{c}} & bag & [(accepted(),9,$\infty$)] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 9 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{pr}} & bag & [] \\ \cline{2-3} & state variables & lastPub=7 \\ \cline{2-3} & now & 9 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{cs}} & bag & [(publish(9,29),9,$\infty$)] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 9 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{si}} & bag & [] \\ \cline{2-3} & state variables & lastPub=0 \\ \cline{2-3} & now & 9 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{s}} & bag & [] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 9 \\ \end{tabularx} };
\node[block, below of=s16, yshift=-15em, xshift=25em](s22) { \begin{tabularx}{\textwidth}{l|l|X} \multicolumn{3}{c}{$s_{22}$} \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{c}} & bag & [] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{pr}} & bag & [] \\ \cline{2-3} & state variables & lastPub=6 \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{cs}} & bag & [] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{si}} & bag & [(RcvPublish(8,27),8,$\infty$)] \\ \cline{2-3} & state variables & lastPub=0 \\ \cline{2-3} & now & 8 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{s}} & bag & [] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 8 \\ \end{tabularx} }; \node[block, below of=s22, yshift=-15em](s27) { \begin{tabularx}{\textwidth}{l|l|X} \multicolumn{3}{c}{$s_{27}$} \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{c}} & bag & [] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 9 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{pr}} & bag & [] \\ \cline{2-3} & state variables & lastPub=6 \\ \cline{2-3} & now & 9 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{cs}} & bag & [] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 9 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{si}} & bag & [] \\ \cline{2-3} & state variables & lastPub=9 \\ \cline{2-3} & now & 9 \\ \hline \multirow{2}{*}{\rotatebox[origin=c]{90}{s}} & bag & [(publish(10),10,$\infty$)] \\ \cline{2-3} & state variables & \\ \cline{2-3} & now & 9 \\ \end{tabularx} }; \node[first, above of=s16, xshift=25em](s1){$s_1$}; \path [line] (s16) -- node[xshift=-6em] {(publish(8,28),8,$\infty$)} (s22); \path [line] (s17) -- node[xshift=-4em] {(accepted(),8,$\infty$)} (s22); \path [line] (s6) -- node[xshift=-6em] {(publish(8,28),8,$\infty$)} (s22); \path [line] (s22) -- node[xshift=-6em] {(RcvPublish(9,27),9,$\infty$)} (s27); \path [dotted] (s1) -| (s16); \path [dotted] (s1) -- (s17); \path [dotted] (s1) -| (s6); \path [dotted] (s27) -- (9.6,-27); \end{tikzpicture} \section{Case Studies}\label{sec::caseStudy} Our reduction techniques are more applicable when using several patterns and devices, as one might find in an interoperable medical system. We applied our techniques on the following two interoperability scenarios, modeled based on the guidelines given in Section \ref{subsec::guide}. The first case scenario relies on the Initiator-Executor pattern while the second one uses the Publisher-Subscriber and Request-Responder patterns. \subsection{X-Ray and Ventilator Synchronization Application} \label{sec:x-ray-app} As summarized \linebreak in~\cite{hatcliff2012rationale}, a simple example of automating clinician workflows via cooperating devices addresses problems in acquiring accurate chest X-ray images for patients on ventilators during surgery~\cite{Langevin-al:AJRCCM99}. To keep the lungs' movements from blurring the image, doctors must manually turn off the ventilator for a few seconds while they acquire the X-ray image, but there are risks in inadvertently leaving the ventilator off for too long. For example, Lofsky documents a case where a patient death resulted when an anesthesiologist forgot to turn the ventilator back on due to a distraction in the operating room associated with dropped X-ray film and a jammed operating table~\cite{Lofsky:APSFNewsletter05}. 
These risks can be minimized by automatically coordinating the actions of the X-ray imaging device and the ventilator. Specifically, a centralized automated coordinator running a pre-programmed coordination script can use device data from the ventilator over the period of a few respiratory cycles to identify a target image acquisition point where the lungs will be at full inhalation or exhalation (and thus experiencing minimal motion). At the image acquisition point, the controller can pause the ventilator, activate the X-ray machine to acquire the image, and then signal the ventilator to ``unpause'' and continue the respiration. An interoperable medical system realizing this concept was first implemented in \cite{xRay}. \begin{figure} \caption{Communication between entities in the X-Ray and Ventilator Synchronization Application.}\label{Fig::x-rayApp} \end{figure} We model the system assuming that the image acquisition point has been identified. The controller initiates starting and stopping actions on the two devices through the Initiator-Executor pattern. We define two instances of Initiator-Executor in the model for the communication of the controller with the ventilator and the X-ray machine, as shown in Figure \ref{Fig:x-rayMain}. Each instance of the pattern needs one instance of the \textit{InitiatorRequester} and \textit{ExecutorInvoker} classes as the interface between the device/app and the communication substrate, as explained in Table \ref{Tab::guideline}. The model of the controller is given in Figure \ref{Fig::Controller}. The controller communicates with the communication substrate via {\small{$\it IR\_VENTILATOR$}} and {\small$\it IR\_X\_RAY$}, instances of \textit{InitiatorRequester}. First, the controller initiates a stop command to the ventilator in its initialization in line $13$. Upon receiving a successful acknowledgement through an $\it ack$ message, it initiates a start command to the X-ray machine in line $18$. Upon successful completion of the start command, reported via $\it ack$, it initiates a stop command to the X-ray machine in line $21$ and then a start command to the ventilator in line $23$. Upon receiving an unsuccessful acknowledgement, or upon successful completion of the last command, the controller is terminated by sending a $\it terminate$ message to itself. \begin{figure} \caption{Main part of X-ray and ventilator application} \label{Fig:x-rayMain} \end{figure} \begin{figure} \caption{The controller model in Timed Rebeca} \label{Fig::Controller} \end{figure} As illustrated in Figure \ref{Fig:x-rayCS}, the \textit{communication substrate} extends the \textit{Base} class for transmitting the two messages of these interface components. \begin{figure} \caption{X-Ray and ventilator communication substrate} \label{Fig:x-rayCS} \end{figure} \subsection{PCA Safety Interlock Application} \label{sec:pca-app} A Patient-Controlled Analgesia (PCA) pump is a medical device often used in clinical settings to intravenously infuse pain killers (e.g., opioids) at a programmed rate into a patient's bloodstream. A PCA pump also includes a button that can be pushed by the patient to receive additional bolus doses of the drug -- thus allowing patients to manage their own pain relief. PCA infusion is often used for pain relief when patients are recovering from an operation. Despite settings on the pump that limit the total amount of drug infused per hour and that impose lock-out intervals between each bolus dose, there is still a risk of overdose when using PCA pumps.
Symptoms of opioid overdose include respiratory depression, in which a patient's blood oxygenation (SPO2, as measured by pulse oximetry) drops and expelled carbon dioxide (End-Tidal CO2, as measured by capnography) increases. A PCA pump alone has no way of telling if a patient is suffering from respiratory depression. However, using emerging interoperable medical system approaches that leverage MAP infrastructure, a pump can be integrated with a pulse oximeter (to measure blood oxygenation), a capnometer (to measure ETCO2), and additional control logic in a monitoring application, as shown in Figure~\ref{Fig::monitoringApp}. The monitoring application looks for drops in SPO2 and increases in ETCO2, and if the monitored values indicate that respiratory depression may be occurring, the application sends a command to the PCA pump to halt infusion. Other signals (not shown in Figure~\ref{Fig::monitoringApp}) may be used to alert caregivers of a problem. This scenario has been considered in a number of demonstrations in medical device interoperability research (see {\em e.g.} \cite{Arney-al:ICCPS10,king:sehc10}), in interoperability risk management \cite{Hatcliff-al:ISPCE2018}, and is a subject of current standardization activities. The specifics of the model considered here are inspired by the prototype of Ranganath\footnote{\url{https://bitbucket.org/rvprasad/clinical-scenarios}}, which uses OMG's DDS message-passing middleware as the communication substrate. \begin{figure} \caption{Communication between entities in the monitoring application.}\label{Fig::monitoringApp} \end{figure} The capnometer and oximeter devices publish data through the Publisher-Subscriber pattern, and the monitoring application detects if data strays outside of the valid range and sends the appropriate command to disable pump infusion. The model of the monitoring application is given in Figure \ref{Fig::Monitor}. The monitor, playing the role of the service in Publisher-Subscriber, consumes the data published by the capnometer and the oximeter in its $\it consume$ message server. It communicates with these devices via {\small$\it SI\_c$} and {\small$\it SI\_o$}, which are instances of \textit{SubscriberInvoker}. Upon receiving a $\it consume$ message and detecting invalid values of SPO2 or ETCO2, it sends an inactive command to the pump in lines $22$ and $29$. As the monitoring app communicates with the pump via the Requester-Responder pattern, it sends its commands to the pump via {\small $\it RR\_p$}, an instance of \textit{RequestRequester}. We abstractly model the invalid/valid values of SPO2 or ETCO2 by $\it false$/$\it true$ values for the parameter $\it data$ of the $\it consume$ message server. This parameter, together with $\it topic$, models the published data of the devices. \begin{figure} \caption{The Timed Rebeca model of the monitoring application.}\label{Fig::Monitor} \end{figure} As the two devices (the capnometer and the oximeter) send data to the monitoring app by using the Publisher-Subscriber pattern, there are two instances of the Publisher-Subscriber pattern in the final model. The pump and the monitoring app communicate via the Requester-Responder pattern. In the resulting Timed Rebeca model of the application, we define two instances of the \textit{PublisherRequester} and \textit{SubscriberInvoker} interfaces in \textit{main} and one instance of \textit{RequestRequester} and \textit{ResponderInvoker}, as shown in Figure \ref{fig:main}.
The instance of the \textit{CommunicationSubstrate} class shown in Figure \ref{fig:csb}, called \textit{cs}, is used by all the components to send their messages, and it includes four message servers for transmitting the messages of these two patterns. \begin{figure} \caption{Main part of PCA safety interlock application} \label{fig:main} \end{figure} \subsection{Communication Substrate Models}\label{SS:network models} Different network settings, such as wireless, CAN, and Ethernet, have different behaviors when transmitting messages. The latency of the underlying network interacts with the local properties of the devices, namely $R_{\it pub}$, $R_{\it sub}$, and $L_{\it pub}$. To ensure, before deployment, that the network latency will not exceed the upper bounds derived from the local timing properties of the devices, we model the middleware behavior of transmitting messages within the communication substrate. Using the verification results, we can adjust the network by dynamic network configuration or capacity planning in organizations. \FG{We specify a communication substrate class for each shared network setting based on our proposed template in Section \ref{subsec::guide} and adapt the delay values for transmitting messages and the priorities for handling messages. We make an instance of each class for those devices/apps that communicate over its corresponding shared network.} We consider three different network settings and hence three communication substrate models for the PCA safety interlock application: \begin{itemize} \item The first model, shown in Figure \ref{fig:csb}, imposes a non-deterministic delay on transmitting the messages of both patterns. The number of possible delay values and the values themselves are the same for all messages of both patterns. \item The second model considers the same number of delay values (equal to that of the first model) but different values for the delays. \item In the third model, shown in Figure \ref{Fig:CaseStudyCS2}, the network handles messages of the Publisher-Subscriber pattern with a higher priority than the messages of Requester-Responder. This is implemented by using the \textit{@priority} statement before their message servers. The communication substrate handles messages based on their arrival times; messages that arrive at the same time are handled based on their priorities, where a lower value indicates a higher priority. \end{itemize} \begin{figure} \caption{Communication substrate model applying priorities to patterns} \label{Fig:CaseStudyCS2} \end{figure} \subsection{Experimental Results} \FG{We extended Afra so that it applies our reduction technique during the state-space derivation of a given model. The tool adds a new state to the set of previously generated states only if it is not relaxed-shift equivalent to any of them (a simplified sketch of this check is given below). This on-the-fly application of reduction during state-space generation results in efficient memory consumption. } \rev{Our tool currently does not support the third condition of Definition~\ref{Def::def2} in general, and we have hard-coded the comparison on messages in the state-space generator for the case study: if the message is of type $\mathit{RcvPublish}$, we compare the result of $\mathit{life}<R_{sub}$ instead of comparing the value of $\mathit{life}$. } We applied our reduction technique to the models of the individual patterns and the case studies\footnote{The Rebeca models are available at \url{fghassemi.adhoc.ir/shared/MedicalCodes.zip}}.
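For concreteness, the following Python sketch shows how such an on-the-fly check could be organized. It is a simplification of Definition \ref{Def::def2}, not the actual implementation inside Afra: the state and message representations are hypothetical, and the abstraction of the $\mathit{Msg}_{ex}$ parameters (replacing $\mathit{life}$ by the truth value of $\mathit{life}<R_{sub}$) is assumed to have been applied already when messages are stored.
\begin{verbatim}
from collections import Counter

def relaxed_shift_equivalent(s, t, interval_vars):
    """Simplified check of Definition 2 (relaxed-shift equivalence).

    A state maps each rebec id to (statevars, bag, now); statevars is a dict,
    and bag is a list of tuples (name, params, arrival, deadline) in which the
    abstracted parameters of Msg_ex messages are already Boolean values.
    """
    any_id = next(iter(s))
    delta = s[any_id][2] - t[any_id][2]                  # candidate shift
    for x in s:
        sv_s, bag_s, now_s = s[x]
        sv_t, bag_t, now_t = t[x]
        if now_s != now_t + delta:                       # condition 2
            return False
        for v, val in sv_s.items():                      # condition 1
            shift = delta if v in interval_vars else 0
            if val != sv_t[v] + shift:
                return False
        shifted = Counter((n, p, a + delta, d + delta) for (n, p, a, d) in bag_s)
        if shifted != Counter(bag_t):                    # condition 3 (abstracted bags)
            return False
    return True

def add_state(state, visited, interval_vars):
    """On-the-fly reduction: keep a new state only if no stored state matches."""
    for old in visited:
        if relaxed_shift_equivalent(state, old, interval_vars):
            return old                                   # merge with the older state
    visited.append(state)
    return state
\end{verbatim}
In the tool, this check is interleaved with state-space generation, so the set of stored states never contains two relaxed-shift-equivalent states.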
We obtained a $23\%$ reduction for Requester-Responder, $32\%$ for Publisher-Subscriber, $7\%$ for Initiator-Executor, \MZ{and $8\%$ for Sender-Receiver}. The Initiator-Executor \MZ{and Sender-Receiver} patterns only have variables measuring the interval between two consecutive messages, while the Requester-Responder and Publisher-Subscriber patterns also carry a remaining-lifetime parameter (\emph{life}) in their messages, for which our reduction technique relaxes the merge condition. So, the first two patterns show more reduction, as their states may be merged using the first and third conditions of Definition \ref{Def::def2}, whereas the states of Initiator-Executor \MZ{and Sender-Receiver} are only merged using the first condition. In the PCA Monitoring Application, a medical system using several patterns as explained in Section \ref{sec::caseStudy}, we obtain a $29\%$ reduction in the state space, and for the X-ray and ventilator application we obtain a $27\%$ reduction. \begin{table}[h] \begin{center} \caption{Reduction in patterns and their composition} \begin{tabular}{|c||c|c|c|} \hline \textbf{Model} & \textbf{No. states} & \textbf{No. states} & \textbf{Reduction}\\ & \textbf{before reduction} & \textbf{after reduction} & \\ \hline Requester-Responder & $205$ & $157$ & $23\%$\\ Publisher-Subscriber & $235$ & $159$ & $32\%$ \\ Initiator-Executor & $113$ & $103$ & $7\%$\\ \MZ{Sender-Receiver} & $179$ & $164$ & $8\%$\\ Monitoring App. & $1058492$ & $753456$ & $29\%$ \\ X-Ray-Ventilator & $27755$ & $19309$ & $27\%$ \\ \hline \end{tabular} \label{tab:reduction} \end{center} \end{table} Table \ref{Table:caseStudyReduction} shows the reduction for the three network models described in Section \ref{SS:network models}. As can be seen, the reduction in the first model is $28.82\%$. \FG{As the possible delay values are increased for one pattern in the second model, the state-space size grows and the reduction increases to $28.92\%$.} In the prioritized model, the resulting state space is smaller than in the others. After applying the reduction approach, the state-space size is reduced by $23.9\%$. For larger state spaces we obtain more reduction; hence, in more complicated systems with more components, we will have a significant amount of reduction in the state space, allowing the system to be analyzed more easily. \begin{table}[h] \begin{center} \caption{Reduction of PCA Monitoring Application with different communication substrates} \begin{tabular}{|c||c|c|c|} \hline \textbf{\begin{tabular}{c} Communication \\ substrate model \end{tabular}} & \textbf{\begin{tabular}{c} No. states \\ before reduction \end{tabular}} & \textbf{\begin{tabular}{c} No. states \\ after reduction \end{tabular}} & \textbf{Reduction} \\ \hline Equal delays & 1058492 & 753456 & 28.82 \% \\ Different delays & 1074689 & 763811 & 28.92 \% \\ Prioritizing patterns & 576961 & 439049 & 23.9 \% \\ \hline \end{tabular} \label{Table:caseStudyReduction} \end{center} \end{table} \section{Related Work} Our discussion of related work focuses on formal modeling, specification, and verification of interoperable medical systems. \cite{Arney-al:ICCPS10} and \cite{6341078} were some of the first works to consider formal modeling and verification of systems based on the ICE architecture. \cite{6341078} addresses variations of the PCA Monitoring Application in Section~\ref{sec:pca-app}.
It focuses on using both UPPAAL's timed automata formalism \cite{bengtsson1995uppaal} and Simulink to capture the component and system functionality and timing in greater detail, as well as the continuous dynamics of the patient's physiology and its response to the presence of opioids in the bloodstream. Using UPPAAL model checking, various system safety properties are verified, including proper halting of PCA infusion when the patient's health is deteriorating. The UPPAAL modeling includes a model of the communication infrastructure that captures non-deterministic error behaviors such as dropped messages. Our work provides more detailed modeling of communication patterns and component-oriented timing specifications, whereas \cite{6341078} provides much greater detail of the medical functionality and patient physiology with continuous dynamics. The PhD dissertation of Arney \cite{ArneyphDthesis} builds on \cite{Arney-al:ICCPS10} to address expanded versions of the applications in Sections \ref{sec:x-ray-app} and \ref{sec:pca-app}. The approach uses a domain-specific modeling language based on Extended Finite State Machines. A transformation from the modeling language to Java provides simulation capabilities, and a translation to UPPAAL provides model-checking capabilities. Similar to \cite{Arney-al:ICCPS10}, the focus is on exposing the abstract functional behavior of devices and applications rather than the details of the middleware communication and the associated communication timing. The PhD dissertation of King \cite{KingphDthesis} comes closest to capturing the component-related timing properties of the communication patterns \cite{7318707} and to our abstract modeling of the patterns. \cite{KingphDthesis} defines a domain-specific language for distributed interoperable medical systems with a formal semantics that takes into account the details of tasking and communication. Rather than focusing on verification, the emphasis of the formalism is to provide a foundation for establishing the soundness of the sophisticated real-time scheduling and component interactions of a novel time-partitioning middleware developed by King using OpenFlow software-defined network switches. King constructs a dedicated refinement-checking framework that addresses communication time as well as task and network scheduling, using symbolic representations of timing constraints based on UPPAAL's ``zone'' representation. A number of experiments are performed to assess the scalability and practical effectiveness of the framework. Larson et al. \cite{Larson-al:NFM2013} specify a more detailed version of the PCA Monitoring Application of Section~\ref{sec:pca-app} using the Architecture Analysis and Design Language (AADL). Simple functional properties of components are specified on AADL component interfaces using the BLESS interface specification language \cite{Larson-al:NFM2013}. Component behaviors are specified using the BLESS variant of AADL's Behavior Annex -- a language of concurrent state machines with communication operations based on AADL's real-time port-based communication. The BLESS theorem prover was used to prove in a compositional manner that component state machine behaviors conform to their BLESS interface specifications and that the composition of components satisfies important system-level behavior specifications.
Compared to the approach of this paper, \cite{Larson-al:NFM2013} focuses on compositional checking of richer functional properties using theorem proving techniques and does not expose the time-related details of the communication patterns considered in the model-checking-based verification of this paper. Each of the works above has different strengths that contribute important practical utility. The long-term vision for specification and verification of interoperable medical systems would almost certainly include a \emph{suite} of techniques that work on a modeling framework supporting realistic and detailed architecture descriptions and embedded system implementations. Interface specifications would be used to specify functional, timing, and fault-related component behavior. It is likely that both deductive methods and model checking techniques would be needed to support both compositional contract-based reasoning and system state-space exploration (with domain-specific partial-order reductions that account for the scheduling and atomicity properties of the framework). The work presented in this paper complements the works above by focusing on one part of this larger vision, i.e., it illustrates how an existing framework for timed actor-based communication can be leveraged to specify and verify timing-related abstractions of middleware communication between components. Among work that does not focus on MAP-based architectures, \cite{sobrinho2019formal} models and verifies biomedical signal acquisition systems using colored Petri nets. The model checkers UPPAAL and PRISM \cite{kwiatkowska2011prism} are used to verify autonomous robotic systems, as the physical environment of robots involves timing constraints and probabilities \cite{luckcuck2019formal}. To tackle state-space explosion, reduction techniques such as symmetry reduction and counter abstraction \cite{clarke2011model} are used to verify models of swarm robotic systems. \section{Conclusion and Future Work}\label{sec::conclude} In this paper, we formally modeled composite medical devices interconnected by communication patterns in the Timed Rebeca modeling language. We analyzed the configuration of their parameters to assure their timing requirements with the Afra tool, using model checking. Since modeling many devices using several patterns results in state-space explosion, we proposed a reduction technique by extending the FTTS merging technique with regard to the local timing properties. We illustrated the applicability of our approach on two scenarios inspired by real-world medical systems and applied our reduction technique to their models. Our results show a more significant reduction in systems with a higher number of components. We proposed guidelines and templates for modeling composite systems. Our templates take advantage of the inheritance concept in Timed Rebeca in order to have a common communication substrate among instances of patterns. Enriching our models by adding modal behaviors to the devices/apps is among our future work. For example, we can consider different operational modes for the monitoring application in the PCA safety interlock application in order to verify its operational logic. The modes can be normal, degraded, and failed operation. In the normal operation mode, the monitoring app makes decisions based on inputs from both the pulse oximeter and the capnometer.
In degraded mode, one of the two devices has failed or gone offline; in that case, the logic in the monitoring app only uses information from the non-failing device. In the failed operation mode, both monitoring devices have gone offline and the clinician should be notified via an alarm. By modeling the scheduling algorithm of the communication network, we can measure communication latency more precisely. We aim to generalize our reduction approach by automatically deriving constraints on state variables, like the one for $\mathit{lastPub}$, or on message contents, to relax the shift-equivalence relation in other domains. To this end, we can use static analysis techniques. \FG{Defining a specific language to model the composition and coordination of medical devices, leveraging the proposed communication patterns, is among our future work.} \section*{Acknowledgment} The research of the fourth author is partially supported by the KKS Synergy project, SACSys, the SSF project Serendipity, and the KKS Profile project DPAC. \begin{comment} \appendix \newcommand{\encoding}[1]{\llbracket #1 \rrbracket} \newcommand{\axiomrule}[1]{\raisebox{.0ex}{\scriptsize{$#1$}}} \newcommand{\sosrule}[2]{\frac{\raisebox{.7ex}{\scriptsize{{$#1$}}}} {\raisebox{-1.0ex}{\scriptsize{$#2$}}}} \newcommand{\sosruleNormal}[3]{\frac{{\parbox{#1}{\center$#2$ }}} {\parbox{#1}{ \center$#3$}}} \newcommand{\trans}[1]{\,{\stackrel{{#1}}{\rightarrow}}\,} \newcommand{\longtrans}[1]{\xrightarrow{#1}} \newcommand{\ntrans}[1]{\,{\stackrel{{#1}}{\nrightarrow}}\,} \newcommand{\gtrans}[2]{\,{\stackrel{{#2}}{\rightarrow_{#1}}}\,} \newcommand{\mbox{$~\underline{\leftrightarrow}~$}}{\mbox{$~\underline{\leftrightarrow}~$}} \newcommand{\lesssim}{\lesssim} \newcommand{\Longtrans}[1]{\,{\stackrel{{#1}}{\Longrightarrow}}\,} \newcommand{\longhooktrans}[1]{\,{\lhook\joinrel\xrightarrow{#1}}\,} \newcommand{\DedRule}[1]{\mbox{{\bf\small (#1)}}} \section{SOS Rules of TTS}\label{apx::TTS-SOS} Formal semantics of Rebeca models are presented as labeled transition systems. Labeled transition systems (LTSs) are used as the basic model to define the semantics of reactive systems \cite{baier}. A labeled transition system (LTS) is defined by the quadruple $\langle S,\rightarrow,L,s_{0} \rangle$ where $S$ is a set of states, $\rightarrow\,\subseteq S\times L\times S$ a set of transitions, $L$ a set of labels, and $s_{0}$ the initial state. Let $s\overto{\alpha}t$ denote $(s,\alpha,t)\in\rightarrow$. In the standard semantics of Rebeca, global states are defined based on the local states of the actors, and a global clock called $\it now$. The local states of rebecs are defined by the triple $\langle v,q,\sigma\rangle$, where $v$ denotes the value of state variables, $q$ is the rebec mailbox, and $\sigma$ is the program counter pointing at the current statement of the message server to be executed. Let $\mathit{ID}$ denote the set of rebec identifiers, and $S$ the set of global states. Each global state $s\in S$ includes a mapping from each rebec identifier to its local state. Let $\mathit{Var}$, $\mathit{Value}$, and $\mathit{Msg}$ be the sets of variables, values, and messages, respectively. We model the mailbox of rebecs by a bag of messages, denoted by $\mathit{bag}(\mathit{Msg})$, and use $\mathbb{N}$ to model the program counter of message servers and the global time.
The set of global states is defined by pairs of a mapping from each rebec identifier to its local state and the global time, $S=(\mathit{ID}\rightarrow (\mathit{Var}\rightarrow \mathit{Value})\times \mathit{bag}(\mathit{Msg}) \times \mathbb{N})\times \mathbb{N}$. Each message $m\in\mathit{Msg}$ constitutes of three parts, namely $m=(\mathit{msgsig}, \mathit{arrival}, \mathit{deadline})$, where $\mathit{msgsig}$ consists of the message name and message parameters, $\mathit{arrival}$ is the arrival time of the message, and $\mathit{deadline}$ is the deadline of the message. The global states change due to the handling of messages by rebecs. Each rebec takes a message from its mailbox, modeled as a message bag, and executes its message server, and hence, the value of state variables may be updated. The global clock only advances when no other transition can be taken. The semantics of timed Rebeca models are defined as timed transition systems (TTSs), i.e., LTSs with three types of transitions: events, passage of time, and $\tau$ for execution of message server's statements. \begin{longtable}{cr} $\sosruleNormal{10cm}{s(x)=(v, \langle (ac, mg, pr, ar, dl)|T \rangle, \epsilon, t, \epsilon) \wedge ar \leq t \wedge dl \geq t} {s \longtrans{mg} s[x\mapsto(v \cup pr \cup \{\mathit{self} \mapsto x \wedge \mathit{sender} \mapsto ac\}, T, body(x, mg),t , t)]}$ & \textbf{(taking-message)}\\ $\sosruleNormal{10cm}{s(x)=(v, q, \langle \mathit{var} := \mathit{expr}|\sigma\rangle, t, r) \wedge r = t} {s \longtrans{\tau_{assign}} s[x\mapsto (v[\mathit{var} \mapsto \mathit{eval}_v(\mathit{expr})], q, \sigma, t, r)]}$ & \textbf{(assignment)}\\ $\sosruleNormal{11cm}{s(x)=(v, q, \langle \mathbf{if}~\mathit{expr}~\mathbf{then}~\sigma~\mathbf{else}~\sigma'|\sigma'' \rangle, t, r) \wedge r = t \wedge \mathit{eval}_v(\mathrm{expt})=\mathbf{True}} {s \longtrans{\tau_{cond}} s[x\mapsto (v, q, \sigma \oplus \sigma'', t, r)]}$ & \textbf{(Conditional$_T$)}\\ $\sosruleNormal{11cm}{s(x)=(v, q, \langle \mathbf{if}~\mathit{expr}~\mathbf{then}~\sigma~\mathbf{else}~\sigma'|\sigma'' \rangle, t, r) \wedge r = t \wedge \mathit{eval}_v(\mathrm{expt})=\mathbf{False}} {s \longtrans{\tau_{cond}} s[x\mapsto (v, q, \sigma' \oplus \sigma'', t, r)]}$ & \textbf{(Conditional$_F$)}\\ $\sosruleNormal{8cm}{s(x)=(v, q, \langle \mathit{var} := ?( \mathit{expr}_1, \mathit{expr}_2, \cdots, \mathit{expr}_n)|\sigma\rangle, t, r) \wedge r = t} {s \longtrans{\tau_{nondet}} s[x\mapsto (v[\mathit{var} \mapsto \mathit{eval}_v(\mathit{expr}_i)], q, \sigma, t, r)]} 1 \leq i \leq n$ & \textbf{(nondet-assign)}\\ $\sosruleNormal{11cm}{s(x)=(v, q, \langle y.m(e_1)~\mathit{after}(e_2)~\mathit{deadline}(e_3)|\sigma\rangle, t, r) \wedge r = t \wedge s(y)=(v', q', \sigma', t', r') \wedge p=\mathit{params(y,m)}} {s \longtrans{\tau_{send}} s[x\mapsto (v, q, \sigma, t, r) ] [y \mapsto (v', q' \cup \{(m, (\mathit{map}(p, \mathit{eval}_v(e_1))), t + e_2, t + e_3)\}, \sigma', t', r']}$ & \textbf{(send)}\\ $\sosruleNormal{8cm}{s(x)=(v, q, \langle \mathbf{delay}(e)|\sigma\rangle, t, r) \wedge r = t} {s \longtrans{\tau_{delay}} s[x\mapsto (v, q, \sigma, t, r+\mathit{eval}_v(e))]}$ & \textbf{(delay)}\\ $\sosruleNormal{7cm}{s(x)=(v, q, \langle \mathbf{skip}|\sigma\rangle, t, r) \wedge r = t} {s \longtrans{\tau_{skip}} s[x\mapsto (v, q, \sigma, t, r)]} $ & \textbf{(skip)}\\ $\sosruleNormal{6cm}{s(x)=(v, q, \langle \mathbf{endm} \rangle, t, r)} {s \longtrans{\tau_{end}} s[x\mapsto (v|_{\mathit{svars}(x)}, q, \epsilon, t, r)]}$ & \textbf{(end-method)}\\ 
$\sosruleNormal{11cm}{s\ntrans{mg} \wedge s\ntrans{\tau} \wedge n_1=\min_{x\in\mathit{AID}}\{ar| s(x)=(v, q, \sigma, t, r) \cdot \sigma = \epsilon \wedge q = \langle (ac, mg, pr, ar, dl)|T \rangle\} \wedge n_2=\min_{x\in\mathit{AID}}\{r|s(x)=(v, q, \sigma, t, r) \cdot \sigma \neq \epsilon\} \wedge tp=\min\{n_1, n_2\}} {s \longtrans{t} \{(x, (v, q, \sigma, tp, r))~|~(x, (v, q, \sigma, t, r)) \in s\}}$ & \textbf{(time-progress)}\\ \end{longtable} \item $AP$ contains the name of all of atomic propositions. \item Function $L: S \rightarrow 2^{AP}$ associates a set of atomic propositions with each state, shown by $L(s)$ for a given state $s$. \end{comment} \end{document}
math
\begin{document} \title{There are infinitely many non-Galois cubic fields whose Dedekind zeta functions have negative central value} \abstract{The Dedekind zeta functions of infinitely many non-Galois cubic fields have negative central values.} \tableofcontents \section{Introduction} Let $K$ be a number field of degree $n$, and denote its Dedekind zeta function by $\zeta_K$. It was known to Riemann that $\zeta_{\mathbb Q}(\frac12) = -1.46... < 0$. Hecke proved that $\zeta_K(s)$ has a meromorphic continuation with a simple pole at $s=1$. The generalized Riemann Hypothesis says that all the nontrivial zeros lie on the line $\Re(s)=1/2$, which would imply that $\zeta_K(s)$ takes only negative real values in the open interval $s\in (1/2,1)$ by the intermediate value theorem. This leads to the question of the \emph{possible vanishing} of $\zeta_K(s)$ at the central point $s=1/2$. The answer was given by Armitage~\cite{Armitage} who showed that a certain number field $K$ of degree $48$ constructed by Serre~\cite[\S9]{Serre} satisfies $\zeta_K(\frac 12)=0$, and also by Fr\"ohlich~\cite{Fr} who constructed infinitely many quaternion fields $K$ of degree $8$ such that $\zeta_K(\frac 12)=0$. In each of these examples, $\zeta_K(s)$ factors into Artin $L$-functions some of which have root number $-1$. Such an $L$-function is forced to vanish at $s=1/2$, which in turn forces $\zeta_K(\tfrac 12)=0$. Conversely, which conditions can warrant that $\zeta_K(\tfrac12)$ is non-vanishing? An {\it $S_n$-number field} $K$ is a degree-$n$ extension of ${\mathbb Q}$ such that the normal closure of $K$ has Galois group $S_n$ over ${\mathbb Q}$. For such a field $K$, $\zeta_K(s)$ factors as the product of $\zeta_{\mathbb Q}(s)$ and an irreducible Artin $L$-function whose root number is necessarily $+1$. It is then widely believed that $\zeta_K(\tfrac 12)<0$ for every $S_n$-number field $K$. In the case $n=2$, a classical result of Jutila~\cite{Jutila} establishes that $\zeta_K(\tfrac12)$ is non-vanishing for infinitely many quadratic number fields $K$. This was later improved in a landmark result of Soundararajan~\cite{Sound} to a positive proportion of such fields when ordered by discriminant, with this proportion rising to at least $87.5\%$ in some families. In this article, we study the case $n=3$. Our main result is as follows. \begin{thm}\label{thm1} The Dedekind zeta functions of infinitely many $S_3$-fields have negative central values. \end{thm} We will in fact prove a stronger version of Theorem \ref{thm1}, in which we restrict ourselves to cubic fields satisfying any finite set of local specifications. To state this result precisely, we introduce the following notation. Let $\Sigma=(\Sigma_v)$ be a {\it finite set of cubic local specifications}. That is, for each place $v$ of ${\mathbb Q}$, $\Sigma_v$ is a non-empty set of \'etale cubic extensions of ${\mathbb Q}_v$, such that for large enough primes $p$, $\Sigma_p$ contains all \'etale cubic extensions of ${\mathbb Q}_p$. We let ${\mathcal F}_\Sigma$ denote the set of cubic fields $K$ such that $K\otimes{\mathbb Q}_v\in\Sigma_v$ for each $v$. Then we have the following result. \index{$\Sigma=(\Sigma_v)$, finite set of local specifications} \index{${\mathcal F}_\Sigma$, family of cubic fields prescribed by $\Sigma$} \begin{thm}\label{thmnv1} Let $\Sigma$ be a finite set of local specifications. Then there are infinitely many $S_3$-fields in ${\mathcal F}_\Sigma$ with negative central value.
\end{thm} Define ${\mathcal F}_\Sigma(X)$ to be the set of fields $K\in{\mathcal F}_\Sigma$ with $|\Delta(K)|<X$. The foundational work of Davenport--Heilbronn \cite{DH} determined asymptotics $|{\mathcal F}_\Sigma(X)|\sim \alpha_\Sigma \cdot X$ with an explicit constant $\alpha_\Sigma >0$. We prove quantitative versions of our main theorems, where we give lower bounds for the \emph{logarithmic density} $\delta_\Sigma(X)$ of the set of fields arising in Theorem \ref{thmnv1} with bounded discriminant: \begin{equation} \delta_\Sigma(X):=\log\bigl| \{ K\in {\mathcal F}_\Sigma(X),\ \zeta_K(\tfrac12)<0 \} \bigr|/\log X. \end{equation} Our \index{$\delta_\Sigma(X)$, logarithmic density of fields $K\in {\mathcal F}_\Sigma(X)$ with $\zeta_K(\tfrac12)<0$ } next result implies that the number of cubic $S_3$-fields whose Dedekind zeta function is negative at the central point has logarithmic density $\geq 0.67$: \begin{thm}\label{thmnv2} For any finite set $\Sigma$ of local specifications, \begin{equation*} \liminf_{X\to\infty} \delta_\Sigma(X)\geq \frac{64}{95} = 0.67368\ldots ;\quad\quad \limsup_{X\to\infty} \delta_\Sigma(X)\geq \frac{97}{128} = 0.75781\ldots \end{equation*} \end{thm} Note that Theorem \ref{thmnv1} is an immediate consequence of Theorem \ref{thmnv2} since we may add a specification $\Sigma_p$ at an additional prime $p$ that forces all cubic fields $K\in {\mathcal F}_\Sigma$ to be non-Galois. Alternatively, we may observe that the number of Galois cubic fields $K$, with discriminant less than $X$, is known to be asymptotic to $cX^{\frac 12}$ by work of Cohn \cite{Cohn}, where $c$ is an explicit constant. Hence, Theorem \ref{thmnv2} implies that most cubic fields $K\in {\mathcal F}_\Sigma(X)$ with $\zeta_K(\frac12)<0$ must be non-Galois. The above numerical values are established from \begin{equation*} \liminf_{X\to\infty} \delta_\Sigma(X)\geq \frac{2}{3-4\delta};\quad\quad\quad \limsup_{X\to\infty} \delta_\Sigma(X)\geq \frac34 + \delta, \end{equation*} where $\delta=\frac{1}{128}$ is the current record subconvexity exponent due to Blomer--Khan~\cite{Blomer-Khan}, which implies \[ |\zeta_K(\tfrac 12)|\ll_\epsilon |\Delta(K)|^{\frac14 -\delta +\epsilon}. \] Indeed, substituting $\delta=\tfrac{1}{128}$ gives $\frac{2}{3-4\delta}=\frac{64}{95}$ and $\frac34+\delta=\frac{97}{128}$. The \emph{convexity bound} $\delta=0$ still yields the same kind of asymptotic results for $\delta_\Sigma(X)$, only with the weaker lower bound of $\frac23$. The same applies to all other results in this paper, so that a reader who would not want to rely on the above recent subconvexity estimate could stay with $\delta=0$. Other numerical values for $\delta>0$ have been obtained by Duke--Friedlander--Iwaniec~\cite{Duke-Friedlander-Iwaniec}, Blomer--Harcos--Michel \cite[Corollary~2]{Blomer-Harcos-Michel}, and Wu \cite{Wu}. Conditional on the Lindel\"of Hypothesis for all $\zeta_K(\frac 12)$, $K\in {\mathcal F}_\Sigma$, we would have $ \lim\limits_{X\to\infty}\delta_\Sigma(X)=1$. Even this conditional result would not imply non-vanishing for a positive proportion of the fields in ${\mathcal F}_\Sigma(X)$; it only guarantees the existence of $\gg_\epsilon X^{1-\epsilon}$ cubic fields $K\in{\mathcal F}_\Sigma(X)$ with $\zeta_K(\frac 12)<0$ for every $\epsilon>0$. A cubic number field is an $S_3$-field if and only if it is not Galois; hence we refer to non-Galois cubic fields as $S_3$-fields. Galois cubic fields are cyclic and (as is already noted above) the number of cyclic cubic fields $K$ of discriminant less than $X$ is about $X^{\frac 12}$.
The zeta function of a cyclic cubic field $K$ factors as a product of Dirichlet $L$-functions of conjugate cubic characters of conductor $|\Delta(K)|^{\frac12}$ (see \S\ref{sHecke}). It follows from a result of Baier--Young~\cite[Corollary 1.2]{Baier-Young} that for $\gg X^{\frac 37}$ cyclic cubic fields of discriminant less than $X$ the Dedekind zeta function is negative at the central point. Recently, David--Florea--Lalin \cite{DFL} have studied the analogous problem of cyclic cubic field extensions of the rational function field ${\mathbb F}_q(T)$, where they obtain a positive proportion of non-vanishing. \subsubsection*{The first moment of the central values of Artin $L$-functions of cubic fields} There is an extensive literature on the non-vanishing at special points of $L$-functions varying in families. The present situation of cubic fields is an important geometric family. Its central values are of ${\rm GL}_2$-type and well-studied from an analytic perspective. At the same time, the geometry of the count of cubic number fields with bounded discriminant has a rich history. Let $K$ be a cubic field. The Dedekind zeta function of $K$ factors as $\zeta_K(s)=\zeta_{\mathbb Q}(s)L(s,\rho_K)$, where $L(s,\rho_K)$ denotes the Artin $L$-function associated with the $2$-dimensional Galois representation \begin{equation*} \rho_K:{\rm Gal}(M/{\mathbb Q})\hookrightarrow S_3\hookrightarrow{\rm GL}_2({\mathbb C}), \end{equation*} where $M$ is the Galois closure of $K$. It is known from work of Hecke that $L(s,\rho_K)$ is an entire function. It will be more convenient for us to work with the central $L$-value $L(\tfrac12,\rho_K)$ rather than $\zeta_K(\tfrac12)$, which is equivalent since they differ by the non-zero constant $\zeta_{\mathbb Q}(\tfrac12)$. In order to prove Theorem~\ref{thmnv2}, the standard approach is to estimate the \emph{first moment} of $L(\tfrac 12,\rho_K)$ for $K\in {\mathcal F}_\Sigma$. Thus we ask the question: can one obtain an asymptotic for \[ \sum_{K\in {\mathcal F}_\Sigma(X)} L(\tfrac 12,\rho_K),\quad \text{as $X\to \infty$}? \] This question is still open. Fortunately, we observe that we may weaken the question in the following three ways: First, we shall study the smooth version, which is technically much more convenient. Second, we shall impose a local inert specification $\Sigma_p$ at an additional prime $p$. Third, and this is our most important point, we observe that it suffices that the remainder term can be expressed in terms of central values of cubic fields with lower discriminant. Indeed, we then have a dichotomy of either an asymptotic for the first moment or an unusually large remainder term, either of which implies the non-vanishing of many central values. \begin{thm}\label{thmMoment} There exists an absolute constant $\mu>0$ such that the following holds. Suppose that for some prime $p$, the specification $\Sigma_p$ consists only of the unramified cubic extension of ${\mathbb Q}_{p}$ (i.e., the cubic fields in ${\mathcal F}_\Sigma$ are prescribed to be inert at $p$). Let $\Psi:{\mathbb R}_{>0}\to {\mathbb C}$ be a smooth compactly supported function and suppose that $\widetilde \Psi(1) = \int_0^\infty\Psi=1$.
Then, for every $0<\nu \le \mu$, $\epsilon>0$, and $X\ge 1$, \begin{equation*} \begin{array}{rcl} \displaystyle\sum_{K\in{\mathcal F}_\Sigma}L\bigl(\tfrac12,\rho_K\bigr) \Psi\Bigl(\frac{|\Delta(K)|}{X}\Bigr) &=& \displaystyle C_\Sigma \cdot X \cdot \bigl(\log X + \widetilde \Psi'(1) \bigr)+ C'_\Sigma \cdot X \\[.2in]&& \displaystyle + \,O_{\epsilon,\nu,\Sigma,\Psi}\Bigl( X^{1+\epsilon-\nu}+ X^{\frac12+\epsilon}\cdot \sum_{K\in {\mathcal F}_\Sigma\bigl(X^{\frac{3}{4}+\nu}\bigr)} \frac{ \bigl|L\bigl(\tfrac12,\rho_K\bigr)\bigr|} {\bigl| \Delta(K) \bigr|^{\frac 12}} \Bigr), \end{array} \end{equation*} where $C_\Sigma>0$ and $C'_\Sigma\in {\mathbb R}$ depend only on $\Sigma$. \end{thm} It is easy to see that Theorem~\ref{thmMoment} implies that infinitely many fields $K\in{\mathcal F}_\Sigma$ have nonzero central values, using an argument by contradiction. If there were finitely many non-vanishing $L$-values, then the left-hand side would be bounded, and the second term inside $O_{\epsilon,\nu,\Sigma,\Psi}(\cdot)$ of the right-hand side would be bounded by $X^{\frac12+\epsilon}$. This is a contradiction because the term $C_\Sigma X \log X$ would be larger than all the other terms. The fact that Theorem~\ref{thmMoment} also implies Theorem~\ref{thmnv2} is established in Section~\ref{sec:proof}. The main term of Theorem~\ref{thmMoment} is familiar in the study of moments of $L$-functions. In particular, the nature of the constants $C_\Sigma$ and $C'_\Sigma$ is transparent, with $C_\Sigma$ proportional to the Euler product~\eqref{CSigmaProduct}. We denote the $n$th Dirichlet coefficient of $L(s,\rho_K)$ by $\lambda_K(n)$, which is a multiplicative function of $n$. For a prime power $p^k$, the coefficient $\lambda_{K}(p^k)$ depends only on the cubic \'etale algebra $K\otimes{\mathbb Q}_p$ over ${\mathbb Q}_p$, and is in fact determined by ${\mathcal O}_K\otimes{\mathbb F}_p$, where ${\mathcal O}_K$ denotes the ring of integers of $K$. Therefore, for a fixed positive integer $n$, the asymptotic average value of $\lambda_K(n)$ over $K\in{\mathcal F}_\Sigma$ is in fact an average over a finite set (see~\cite[\S2.11]{SST} and \cite[\S2]{SST1} for a general discussion of this phenomenon in the context of Sato--Tate equidistribution for geometric families). We denote this average by $t_\Sigma(n)$ and note that this is a multiplicative function of $n$. We have $t_\Sigma(p)=O_\Sigma(\frac1p)$ as the prime $p\to \infty$, which is also a general feature~\cite[\S2]{SST1} that implies that the number field family ${\mathcal F}_\Sigma$ is expected~\cite[Eq.(11)]{SST} to have average rank $0$. Moreover, $t_\Sigma(p^2)= 1 + O_\Sigma(\frac1{p^2})$ for the present family ${\mathcal F}_\Sigma$, which implies that the following normalized Euler product converges: \begin{equation}\label{CSigmaProduct} \prod_{p} \Bigl[(1-p^{-1}) \sum^\infty_{k=0} \frac{t_\Sigma(p^k)}{p^{k/2}}\Bigr]. \end{equation} This product is shown to be positive and to be proportional to $C_\Sigma$ (see Section \ref{sec:average}). We shall discuss the remainder terms and our proof of Theorem~\ref{thmMoment} in \S\ref{s_overview}. An explicit value of $\mu$ is a third of a thousandth, i.e., $\mu=\frac{1}{3000}$.
This small numerical value arises from the complications in bounding the remainder terms in all of the different ranges in our proof, coupled with the fact that the exponent of the secondary term $X^{\frac 56}$ of the asymptotic count of cubic fields is already by itself close to $1$. \subsubsection*{Low-lying zeros of the Dedekind zeta functions of cubic fields} Our equidistribution results in Section~\ref{sec:switch} on the asymptotic average value of $\lambda_K(n)$ over $K\in {\mathcal F}_\Sigma(X)$, with robust remainder terms as $n,X\to \infty$, have applications towards the statistics of low-lying zeros of the Dedekind zeta functions of cubic fields (the Katz--Sarnak heuristics). A conjecture in~\cite{SST} predicts that for a homogeneous orthogonal family of $L$-functions, the low-lying zeros of the family should have \emph{symplectic symmetry type}. Given a test function $\Phi:{\mathbb R}\to{\mathbb C}$, let ${\mathcal D}({\mathcal F}_\Sigma(X),\Phi)$ denote the $1$-level density (defined precisely in Section \ref{sec:low-lying}) of the family of Dedekind zeta functions of the fields in ${\mathcal F}_\Sigma$ with respect to $\Phi$. Then the Katz--Sarnak heuristics predict the equality \begin{equation}\label{onelevel} \lim_{X\to \infty} {\mathcal D}({\mathcal F}_\Sigma(X),\Phi)=\widehat{\Phi}(0)- \frac12\int_{-1}^1\widehat{\Phi}(t)\,dt, \end{equation} for all even functions $\Phi$ whose Fourier transform $\widehat{\Phi}$ has support contained in $(-a,a)$ for a constant $a$ to be determined. Yang \cite{Yang} verifies~\eqref{onelevel} for even functions $\Phi$ whose Fourier transform has support contained in $(-\frac{1}{50},\frac{1}{50})$. The constant $\frac{1}{50}$ has been subsequently improved to $\frac{4}{41}$ by work of Cho--Kim \cite{ChoKim1} and independently~\cite{SST1}. Here, we prove the following result: \begin{thm}\label{thmllz} Let $\Sigma$ be as above, with the same assumption that for at least one prime $p$, the specification $\Sigma_{p}$ consists only of the unramified cubic extension of ${\mathbb Q}_p$. Then~\eqref{onelevel} holds for even functions $\Phi$ whose Fourier transform has support contained in $(-\frac{2}{5},\frac{2}{5})$. \end{thm} \subsection{Overview of the proof of the main theorems}\label{s_overview} These proofs are carried out in several steps. First, we control the central value $L(\tfrac 12,\rho_K)$ using the approximate functional equation. This allows us to approximate $L(\tfrac 12,\rho_K)$ in terms of a smooth sum of the Dirichlet coefficients $\lambda_K(n)$, where the sum has length $O_\epsilon(|\Delta(K)|^{1/2+\epsilon})$. More precisely, we have \begin{equation}\label{eqintroAFE} L(\tfrac 12,\rho_K)=\sum^\infty_{n=1}\frac{\lambda_K(n)}{n^{1/2}} V^\pm\Bigl(\frac{n}{\sqrt{|\Delta(K)|}}\Bigr), \end{equation} where $V^\pm$ is a rapidly decaying smooth function depending only on the sign $\pm$ of $\Delta(K)$.
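For orientation, we recall the elementary description of the coefficients appearing in \eqref{eqintroAFE} at primes (standard facts, consistent with the Euler factors \eqref{eqEPR} displayed below): since $\zeta_K(s)=\zeta(s)L(s,\rho_K)$, the coefficient $\lambda_K(p)$ equals the number of primes of $K$ above $p$ with residue degree one, minus one. Thus
\begin{equation*}
\lambda_K(p)=\begin{cases} \phantom{-}2 & \text{if $p$ is totally split in $K$},\\ \phantom{-}0 & \text{if $p$ splits as a degree-one prime times a degree-two prime},\\ -1 & \text{if $p$ is inert in $K$},\end{cases}
\end{equation*}
while $\lambda_K(p)\in\{0,1\}$ at the ramified primes.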
Therefore, studying the average value of $L(\tfrac 12,\rho_K)$ as $K$ varies over the family ${\mathcal F}_\Sigma(X)$ of cubic fields with discriminant bounded by $X$ necessitates the study of smoothed sums of Dirichlet coefficients $\lambda_K(n)$: \begin{equation}\label{eqintrofirstsum} \sum_{n\le X^{1/2+\epsilon}}\frac{1}{n^{1/2}} \sum_{K\in{\mathcal F}_\Sigma}\lambda_K(n)\Psi\Bigl(\frac{|\Delta(K)|}{X}\Bigr), \end{equation} where $\Psi:{\mathbb R}_{>0}\to{\mathbb C}$ is a smooth function with compact support. In particular, a basic input for the proof is the determination of the average value $t_\Sigma(n)$ of $\lambda_K(n)$ over $K\in{\mathcal F}_\Sigma(X)$. Moreover, it is necessary to obtain good error terms for this average with an explicit dependence on $n$. \subsubsection*{Expanding the definition of $\lambda_K(n)$ to cubic rings $R$} In order to compute the average value of $\lambda_K(n)$ over $K\in{\mathcal F}_\Sigma$ with good error terms, it is necessary for us to expand this average to one over cubic orders $R$. This is because cubic rings can be parametrized by group orbits on a lattice, and Poisson summation, applied through the theory of Shintani zeta functions following Taniguchi--Thorne~\cite{TT} and \cite{TaTh1}, becomes available as an important tool.\footnote{This is in direct analogy to the quadratic case, in which P\'olya--Vinogradov type estimates are used to estimate the sum of Legendre symbols $\bigl(\frac{n}{D}\bigr)$, as $D$ varies over all discriminants and not merely the squarefree ones.} It is therefore necessary for us to define a quantity $\lambda_R(n)$, for positive integers $n$ and cubic rings $R$. There are different natural choices for the value of $\lambda_R(n)$. For example, it is possible to set the Dirichlet coefficients of $R$ to be equal to the corresponding coefficients of $R\otimes{\mathbb Q}$. Another possible choice arises from work of Yun \cite{Yun}, in which Yun defines a natural zeta function $\zeta_R(s)$ associated to orders $R$ in global fields. It is then possible to set the Dirichlet coefficients of $R$ to equal the corresponding coefficients of $\zeta_R(s)/\zeta(s)$. However, we require $\lambda_R(n)$ to satisfy the following three conditions: \begin{itemize} \item[{\rm (a)}] We require $\lambda_R(n)=\lambda_K(n)$ when $R$ is the ring of integers of $K$. \item[{\rm (b)}] We require $\lambda_R(n)$ to be multiplicative in $n$. \item[{\rm (c)}] When $p$ is prime, we require the value of $\lambda_R(p^k)$ to be defined modulo $p$, i.e., $\lambda_R(p^k)$ should be determined by $R\otimes{\mathbb F}_p$. \end{itemize} The above two candidate choices for $\lambda_R(n)$ satisfy the first two properties, but not the third. In fact, the above three conditions uniquely determine the value of $\lambda_R(p^k)$, at least for rings $R$ such that $R\otimes{\mathbb Z}_p$ is Gorenstein.
More precisely, $\lambda_R(n)$ should be defined to be the $n$th Dirichlet coefficient of $D(s,R)$, where $D(s,R)$ is defined by an Euler product whose $p$th factor $D_p(s,R)$ is given by \begin{equation}\label{eqEPR} D_p(s,R):=\left\{ \begin{array}{lll} (1-p^{-s})^{-2}&\;{\rm if}\;&R\otimes{\mathbb F}_p={\mathbb F}_p^3;\\[.15in] (1-p^{-2s})^{-1}&\;{\rm if}\;&R\otimes{\mathbb F}_p={\mathbb F}_p\oplus{\mathbb F}_{p^2};\\[.15in] (1+p^{-s}+p^{-2s})^{-1}&\;{\rm if}\;&R\otimes{\mathbb F}_p={\mathbb F}_{p^3};\\[.15in] (1-p^{-s})^{-1}&\;{\rm if}\;&R\otimes{\mathbb F}_p={\mathbb F}_p\oplus{\mathbb F}_p[t]/(t^2);\\[.15in] 1&\;{\rm else.}\;& \end{array}\right. \end{equation} It is clear from the definition that $\lambda_R(n)$ satisfies the three required properties. \subsubsection*{Summing $\lambda_R(n)$ over cubic rings $R$ with bounded discriminant} Next, we need to evaluate a smoothed sum of $\lambda_R(n)$, for $R$ varying over cubic rings having bounded discriminant. Such a result follows immediately from the following three ingredients. First, the Delone--Faddeev parametrization of cubic rings in terms of ${\rm GL}_2({\mathbb Z})$-orbits on $V({\mathbb Z})$, the space of integral binary cubic forms. Second, results of Shintani \cite{Shintani} on the analytic properties of the Shintani zeta functions associated to $V({\mathbb Z})$. Third, local Fourier transform computations of Mori \cite{Mori} on $V({\mathbb F}_p)$. Let $n$ be a positive integer, and write $n=mk$, where $m$ is squarefree, $k$ is powerful, and $(m,k)=1$. Then we have the following result, stated for primes and prime powers as Theorem \ref{t_Polya}, which is a smoothed cubic analogue of the P\'olya--Vinogradov inequality: There exist explicit constants $\alpha(n)$ and $\gamma(n)$ such that \begin{equation}\label{thmpv} \sum_{[R:{\mathbb Z}]=3}\lambda_R(n)\Psi\Bigl(\frac{|\Delta(R)|}{X}\Bigr)= \alpha(n) X+ \gamma(n) X^{5/6}+O_\epsilon\big(n^\epsilon \cdot m\cdot {\mathrm{rad}}(k)^2\big), \end{equation} where ${\mathrm{rad}}(k)$ denotes the radical of $k$, and the sum over rings is weighted by the inverse of the size of the stabilizer, $|\operatorname{Stab}(R)|^{-1}$. \subsubsection*{Sieving to maximal orders} We define the quantity \begin{equation*} S(R)=\sum_{n}\frac{\lambda_R(n)}{n^{1/2}} V^\pm\Bigl(\frac{n}{\sqrt{|\Delta(R)|}}\Bigr). \end{equation*} We note that $S(R)=L(\tfrac 12,\rho_K)$ when $R$ is the ring of integers of $K$. However, when $R$ is not maximal, it is {\em not} necessarily true that $S(R)$ is equal to $D(\tfrac 12,R)$. In order to evaluate \eqref{eqintrofirstsum}, we need to perform an inclusion-exclusion sieve. Thus, for all squarefree integers $q$, we need estimates on the sums \begin{equation}\label{eqintroFS} \sum_{R\in{\mathcal M}_q}S(R)\Psi\Bigl(\frac{|\Delta(R)|}{X}\Bigr), \end{equation} where ${\mathcal M}_q$ denotes the space of cubic orders $R$ that have index divisible by $q$ in the ring of integers of $R\otimes{\mathbb Q}$. Estimating sums over ${\mathcal M}_q$ is tricky since the condition of nonmaximality at $q$ is defined modulo $q^2$ and not modulo $q$. That is, maximality of $R$ at a prime $p$ cannot be detected from the local algebra $R\otimes{\mathbb F}_p$.
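For instance, for any prime $p$, the orders ${\mathbb Z}[x]/(x^3-p)$ and ${\mathbb Z}[x]/(x^3-p^2)$ have the same reduction ${\mathbb F}_p[x]/(x^3)$ modulo $p$, yet the first is maximal at $p$ while the second has index $p$ in ${\mathbb Z}[p^{1/3}]$; deciding maximality thus genuinely requires information modulo $p^2$.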
To reduce our mod $q^2$ sum to a mod $q$ sum, we use an idea originating in the work of Davenport--Heilbronn \cite{DH} and further developed as a precise switching trick in \cite{BST}. Namely, we replace the sum over ${\mathcal M}_q$ with a sum over the index-$q$ overorders of the orders in ${\mathcal M}_q$. For $q$ in what we call the ``small range'', i.e., $q\le X^{1/8-\epsilon}$, the switching trick in conjunction with \eqref{thmpv} allows us to estimate each summand in \eqref{eqintroFS} with a sufficiently small error term. Ideally, we would use a tail estimate for large $q$. This tail estimate requires bounding the value of $S(R)$ for nonmaximal rings $R$. The convexity bound yields the following estimate for rings $R\in{\mathcal M}_q$ with $\Delta(R)\asymp X$: \begin{equation}\label{eqintroconvexity} |S(R)|\ll_\epsilon \frac{X^{1/4+\epsilon}}{q^{1/2}}. \end{equation} Neither the convexity bound nor the best known subconvexity bounds give sufficiently good estimates to cover all squarefree integers $q> X^{1/8-\epsilon}$. However, assuming the generalized Lindel\"of Hypothesis (or indeed, a sufficiently strong subconvexity bound) is enough to determine the first moment for $L(\tfrac 12,\rho_K)$. Moreover, this method yields unconditional upper bounds on the average value of $L(\tfrac 12,\rho_K)$, a slightly stronger version of which is proven in Theorem~\ref{thuncondupbd}: \begin{thm}\label{introupper} Let $\Sigma$ be a finite set of local specifications and assume that for some prime $p$, we have $\Sigma_p=\{{\mathbb Q}_{p^3}\}$. Then for $X\geq 1$, we have \begin{equation}\label{equncondbound} \sum_{K\in{\mathcal F}_\Sigma}L\bigl(\tfrac12,\rho_K\bigr) \Psi\Bigl(\frac{|\Delta(K)|}{X}\Bigr)\ll_{\Sigma,\Psi} X^{29/28}. \end{equation} \end{thm} We note that this average bound is significantly stronger than the bound obtained by simply summing the best known pointwise upper bounds for $L(\tfrac12,\rho_K)$. \subsubsection*{The approximate functional equation for cubic rings} The first ingredient required for estimating $S(R)$, when $R$ is a nonmaximal cubic order with index $> X^{1/8-\epsilon}$, is a generalization of the approximate functional equation \eqref{eqintroAFE} to the setting of cubic orders. This modification is proved in Proposition~\ref{thm_AFE2}, and expresses $S(R)-D(\tfrac 12,R)$ as a sum of arithmetic quantities associated to $R$. The advantage of expressing $S(R)$ in this way is that this latter sum is much shorter than the original sum defining $S(R)$: of length $\ll_\epsilon X^{1/2+\epsilon}/q$ rather than $\ll_\epsilon X^{1/2+\epsilon}$. However, this shortening comes at a cost. The summands of this new sum involve Dirichlet coefficients from both $D(s,R)$ and $L(s,\rho_{R\otimes{\mathbb Q}})$. In order to control the coefficients of $L(s,\rho_{R\otimes{\mathbb Q}})$, it is necessary to isolate the exact index of $R$ in the ring of integers of $R\otimes{\mathbb Q}$. Merely knowing that $q$ divides the index is not enough. To precisely control the index, a secondary sieve is necessary. Carrying out this secondary sieve yields the following estimate for $q > X^{1/8-\epsilon}$: \begin{equation}\label{eqintroSD} \sum_{R\in {\mathcal M}_q}S(R)\Psi\Bigl(\frac{|\Delta(R)|}{X}\Bigr) \approx \sum_{R\in {\mathcal M}_q}D(\tfrac 12,R)\Psi\Bigl(\frac{|\Delta(R)|}{X}\Bigr).
\end{equation} This estimate is proved in Section~\ref{sConditional}, and is the crucial technical ingredient in the proof of Theorem~\ref{thmMoment}. Equation \eqref{eqintroSD} allows us to exploit the respective advantages of using $S(R)$ and $D(\tfrac 12,R)$ in the original inclusion-exclusion sieve. Namely, for small $q$, the sum of $S(R)$ over $R\in{\mathcal M}_q$ can be well estimated with Equation~\eqref{thmpv}, since $S(R)$ is simply a sum of the coefficients $\lambda_R(n)$. However, for large $q$, it is advantageous to instead sum $D(\tfrac 12,R)$ over $R\in{\mathcal M}_q$. This is because the value of $D(\tfrac 12,R)$ behaves predictably as $R$ varies over suborders of a fixed cubic field. \subsubsection*{Summing $D(\tfrac 12,R)$ over $R\in{\mathcal M}_q$ and over large $q$} We are left to estimate the sum \begin{equation}\label{eqintroD12sum} \sum_{q> X^{1/8-\epsilon}}\mu(q)\sum_{R\in{\mathcal M}_q}D(\tfrac 12,R) \Psi\Bigl(\frac{|\Delta(R)|}{X}\Bigr). \end{equation} Expressing $D(\tfrac 12,R)$ in terms of $L(\tfrac 12,\rho_{R\otimes {\mathbb Q}})$ allows us to repackage \eqref{eqintroD12sum} into sums of the following form: \begin{equation}\label{eqintrorepack} \sum_{\substack{K\in{\mathcal F}_\Sigma\\|\Delta(K)|\asymp Y}} \sum_{\substack{R\subset{\mathcal O}_K\\{\mathrm{ind}}(R)\asymp\sqrt{X/Y}}} D(\tfrac 12,R) \ll_{\epsilon,\Sigma} X^\epsilon \sum_{\substack{K\in{\mathcal F}_\Sigma\\|\Delta(K)|\asymp Y}} \#\bigl\{R\subset{\mathcal O}_K:{\mathrm{ind}}(R)\asymp\sqrt{X/Y}\bigr\}\cdot |L(\tfrac 12,\rho_K)|. \end{equation} Let $K$ be a fixed cubic field. A result of Datskovsky--Wright \cite{DW} gives asymptotics for the number of suborders of $K$ having bounded index. This yields Theorem~\ref{thmMoment}. Our next idea is to assume the nonnegativity of $L(\tfrac12,\rho_K)$. Since the result of Datskovsky--Wright is very precise, it turns out that we can input the unconditional upper bound on the sums of $L(\tfrac12,\rho_K)$ in \eqref{equncondbound} to obtain an improved upper bound on the right-hand side of \eqref{eqintrorepack}. This improved upper bound is enough to obtain asymptotics for the first moment of $L(\tfrac12,\rho_K)$, conditional on its nonnegativity. Finally, we obtain Theorem \ref{thmnv2} by making a version of the following simple idea precise: If $L(\tfrac 12,\rho_K)$ does indeed vanish for most fields $K$, then the right-hand side of \eqref{eqintrorepack} is forced to be small, which in turn implies an upper bound on the left-hand side of \eqref{eqintrorepack}, which in turn allows for the computation of the first moment of $L(\tfrac12,\rho_K)$, which in turn implies non-vanishing for many fields $K$. This leads to a contradiction, and it follows that $L(\tfrac 12,\rho_K)$ does not vanish for many fields $K$. Finally, we observe that the same method of proof applies to the values $L(\tfrac12+it,\rho_K)$ for a fixed $t\in {\mathbb R}$ and yields variants of Theorems~\ref{thm1}, \ref{thmnv1}, \ref{thmnv2}, \ref{thmMoment}, and \ref{introupper} with suitable modifications. \subsection{Organization of the paper} This paper is organized as follows. In Section~\ref{sec:pre}, we collect preliminary results on the space of cubic rings and fields.
In particular, we recall the Delone--Faddeev parametrization of cubic rings in terms of ${\rm GL}_2({\mathbb Z})$-orbits on integral binary cubic forms. We also discuss Fourier analysis on the space of binary cubic forms over ${\mathbb F}_p$ and ${\mathbb Z}/n{\mathbb Z}$. In Section~\ref{sec:Artin}, we introduce the Artin character on cubic fields $K$ that arise as Dirichlet coefficients of $L(s,\rho_K)=\zeta_K(s)/\zeta(s)$. We then define an extension to the space of cubic rings (and thus also the space of binary cubic forms). Next, in Section~\ref{sec:dedekind}, we recall the analytic properties of $L(s,\rho_K)$, for a cubic field $K$. In particular, we recall the approximate functional equation. We then discuss an unbalanced form of the approximate functional equation for orders within cubic fields. In Section~\ref{secszf}, we recall Shintani's theory of the zeta functions associated to the space of binary cubic forms. As a well-known consequence of this theory, we derive estimates for the sums of congruence functions (i.e., functions $\phi$ on the space of cubic rings $R$ such that $\phi$ is determined by $R\otimes{\mathbb Z}/n{\mathbb Z}$ for some integer $n$) over the space of cubic rings with bounded discriminant. Then in Section~\ref{sec:switch}, we apply a squarefree sieve to determine the sum of these congruence functions over the space of cubic fields. In Section~\ref{sec:low-lying}, we use the results from Section~\ref{sec:switch} to prove Theorem \ref{thmllz} on the statistics of the low-lying zeros of the zeta functions of cubic fields. Next, in Section~\ref{sec:average}, we start our analysis of the average central values of $L(s,\rho_K)$, where $K$ ranges over cubic fields. In particular we prove the upper bound Theorem~\ref{thuncondupbd}, obtaining an improved estimate on the average size of $L(\tfrac 12,\rho_K)$ compared to the pointwise bound. In Section~\ref{sConditional}, we complete the most difficult part of the proof, in which we show that for each somewhat large $q$, the values of $S(R)$ and $D(\tfrac 12,R)$ are close to each other, on average over $R\in{\mathcal M}_q$. We use this result in Section~\ref{sec:proof} to first prove Theorem~\ref{thmMoment}, and using this in addition, to prove our main result Theorem~\ref{thmnv2}. \subsection{Notations and conventions} \begin{itemize} \item A positive integer $k$ is said to be \emph{powerful} if $v_p(k)\ge 2$ for every prime $p|k$. \index{$v_p(k)\ge 2$ for every $p\mid k$, powerful integer} \item The \emph{radical}, also called the square-free kernel, of a positive integer $k$ is the product of its prime factors, ${\mathrm{rad}}(k):=\prod_{p|k} p$. \index{${\mathrm{rad}}(k)$, radical of the positive integer $k$} \item We shall always use $\Sigma$ to refer to the finite set of local conditions imposed on the family of cubic fields. \item We shall always use $\Psi$ to denote a compactly supported Schwartz function that will control the discriminants of binary cubic forms, cubic rings, or cubic fields. \end{itemize} \subsection*{Acknowledgements} ASh is supported by an NSERC discovery grant and a Sloan fellowship. ASo was supported by the grant 2016-03759 from the Swedish Research Council. NT acknowledges support by NSF grants DMS-1454893 and DMS-2001071.
\section{Preliminaries on cubic rings and fields}\label{sec:pre} Let $V={\rm Sym}^3(2)$ denote the space of binary cubic forms. The group ${\rm GL}_2$ acts on $V$ via the following twisted action: \begin{equation*} \gamma \cdot f(x,y) := \det(\gamma)^{-1} f((x,y)\cdot \gamma). \end{equation*} \index{$V$, space of binary cubic forms with twisted action by ${\rm GL}_2$} It is well-known that the representation $({\rm GL}_2,V)$ is {\it prehomogeneous} and that the ring of relative invariants for the action of ${\rm GL}_2$ on $V$ is freely generated by the {\it discriminant}, which we denote by $\Delta$. We have that $\Delta$ is homogeneous of degree $4$ and $\Delta(\gamma \cdot f)=(\det \gamma)^2 \Delta(f)$. In this section, we describe the parametrization of cubic rings and fields in terms of ${\rm GL}_2({\mathbb Z})$-orbits on $V({\mathbb Z})$. We also discuss Fourier analysis on the space $V({\mathbb Z}/n{\mathbb Z})$, and in particular describe the Fourier transforms of all ${\rm GL}_2({\mathbb F}_p)$-invariant functions on $V({\mathbb F}_p)$. \subsection{Binary cubic forms and the parametrization of cubic rings}\label{s_binary_cubic} Levi \cite{Levi} and Delone--Faddeev \cite{DF}, further refined by Gan--Gross--Savin \cite{GGS}, prove that there is a bijection between the set of ${\rm GL}_2({\mathbb Z})$-equivalence classes of integral binary cubic forms and isomorphism classes of cubic rings over~${\mathbb Z}$: \begin{proposition}\label{df} There is a bijection between the set of isomorphism classes of cubic rings and the set of ${\rm GL}_2({\mathbb Z})$-orbits on $V({\mathbb Z})$, given as follows. A cubic ring $R$ is associated to the ${\rm GL}_2({\mathbb Z})$-equivalence class of the integral binary cubic form corresponding to the map \begin{equation*} \begin{array}{rcl} R/{\mathbb Z}&\to& \wedge^2(R/{\mathbb Z})\\[.07in] \theta&\mapsto& \theta\wedge\theta^2. \end{array} \end{equation*} \end{proposition} Throughout this paper, for an integral binary cubic form $f\in V({\mathbb Z})$, we denote the cubic ring corresponding to $f$ by $R_f$, the cubic algebra $R_f\otimes{\mathbb Q}$ by $K_f$, and the ring of integers of $K_f$ by ${\mathcal O}_{K_f}$. \index{$R_f$, cubic ring corresponding to a form $f\in V({\mathbb Z})$} \index{$K_f = R_f\otimes {\mathbb Q}$, cubic field corresponding to the form $f\in V({\mathbb Z})^{{\rm irr}}$} We have \index{$\Delta(f)$, discriminant of the binary cubic form $f$} \index{$\Delta(R)$, discriminant of the cubic ring $R$} \index{$\Delta(K)$, discriminant of the cubic field $K$} \[ \Delta(R_f) = \Delta(f) = b^2c^2-4ac^3-4b^3d-27a^2d^2+18abcd, \] for $f(x,y)=ax^3+bx^2y+cxy^2+dy^3$, and where we denote by the same letter $\Delta$ the discriminants of rings and algebras. Since $\Delta(K_f)=\Delta({\mathcal O}_{K_f})$ by definition, we have the equality \begin{equation}\label{disc_form-field} \Delta(f) = \Delta(K_f) [\mathcal{O}_{K_f}:R_f]^2=\Delta(K_f){\mathrm{ind}}(f)^2, \end{equation} where we define the {\it index} of $f$, or ${\mathrm{ind}}(f)$, to be $[{\mathcal O}_{K_f}:R_f]$. \index{${\mathrm{ind}}(f)$, index of $R_f$ in ${\mathcal O}_{K_f}$} In particular, we see that $|\Delta(K_f)|\le |\Delta(f)|$, and that the signs of $\Delta(f)$ and $\Delta(K_f)$ coincide. If $\Delta(f)\neq 0$, then the algebra $K_f$ is \'etale.
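For instance, one checks that a monic form, i.e., one with $a=1$, corresponds under this bijection to the monogenic cubic ring ${\mathbb Z}[x]/(f(x,1))$, and the displayed formula then recovers the classical discriminant of the cubic polynomial $f(x,1)$; e.g., $f(x,y)=x^3+cxy^2+dy^3$ has $\Delta(f)=-4c^3-27d^2$.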
If $f\in V({\mathbb Z})^{{\rm irr}}$ is irreducible, then $K_f$ is a field. Furthermore, $\Delta(f)>0$ when $K_f$ is totally real, and $\Delta(f)<0$ when $K_f$ is complex. \index{$V({\mathbb Z})^{{\rm irr}}$, subset of irreducible binary cubic forms} We say that a ring $R$ has \emph{rank $n$} if it is free of rank $n$ as a ${\mathbb Z}$-module. We say that a rank $n$ ring $R$ is {\it maximal} if it is not a proper subring of any other ring of rank $n$. For a prime $p$, we say that a rank $n$ ring $R$ is {\it maximal at $p$} if $R\otimes{\mathbb Z}_p$ is maximal in the sense that it is not a proper subring of any other ring that is free of rank $n$ as a ${\mathbb Z}_p$-module. We have that $R$ is maximal if and only if it is maximal at $p$ for every prime $p$. \index{$V({\mathbb Z})^{\rm max}$, subset of maximal binary cubic forms} We say that an integral binary cubic form $f$ is {\it maximal} (resp.\ {\it maximal at $p$}) if the corresponding cubic ring $R_f$ is maximal (resp.\ maximal at $p$). We have the following result \cite[\S 3]{BST} characterizing binary cubic forms that are maximal at $p$. \begin{proposition}\label{p:nonmaximal} An integral binary cubic form $f\in V({\mathbb Z})$ is maximal at a prime $p$ if and only if both of the following two properties hold: \begin{itemize} \item[(i)] $f$ is not a multiple of $p$, and \item[(ii)] $f$ is not ${\rm GL}_2({\mathbb Z})$-equivalent to a form $ax^3+bx^2y+cxy^2+dy^3$, with $p^2\mid a$ and $p\mid b$. \end{itemize} \end{proposition} We will also need the following result, proved in \cite[Propositions 15 and 16]{BST}, that determines the number of index-$p$ subrings and index-$p$ overrings of a cubic ring. \begin{proposition}\label{subsupring} For an integral binary cubic form $f\in V({\mathbb Z})$, the number of cubic rings in $K_f$ containing $R_f$ with index $p$ is equal to the number of double zeros $\alpha\in {\mathbb P}^1({\mathbb F}_p)$ of $f$ modulo $p$ such that $p^2 \mid f(\alpha')$ for all $\alpha'\in {\mathbb P}^1({\mathbb Z})$ with $\alpha'\equiv \alpha$ ${\rm mod}$~$p$. For an integral binary cubic form $g\in V({\mathbb Z})$, there is a bijection between index-$p$ subrings of $R_g$ and zeros in ${\mathbb P}^1({\mathbb F}_p)$ of $g$ modulo $p$, whose number we denote by $\omega_p(g)$. \end{proposition} \index{$\omega_p(g)$, number of zeros in ${\mathbb P}^1({\mathbb F}_p)$ of $g$ modulo $p$} \begin{example} Consider a form $f(x,y)=ax^3+bx^2y+cxy^2+dy^3 \in V({\mathbb Z})$, with $p^2\mid a$ and $p\mid b$, which is nonmaximal by Proposition~\ref{p:nonmaximal}.(ii). Then $\alpha = [1:0]\in {\mathbb P}^1({\mathbb F}_p)$ is a double root of $f$ modulo $p$. The form $\bigl(\begin{smallmatrix}\frac1{p}&{}\\{}&1\end{smallmatrix}\bigr)\cdot f(x,y)=(a/p^2)x^3+(b/p)x^2y+cxy^2+pdy^3$ corresponds to an index-$p$ overring of $R_f$. This is consistent with Proposition~\ref{subsupring}, which says that the number of cubic rings in $K_f$ containing $R_f$ with index $p$ is at least one. \end{example} \subsection{Binary cubic forms over ${\mathbb F}_p$ and ${\mathbb Z}/n{\mathbb Z}$}\label{s_binary_Fp} \index{$V^*$, dual of $V$ with compatible action by ${\rm GL}_2$} Let $V^*={\rm Sym}_3(2)$ denote the \emph{dual} of $V$, and denote by $[,]$ the duality pairing.
The ${{\rm r}m GL}_2$-action on $V^*$ is defined by the rule that $[,]$ is relatively invariant: {\beta}gin{equation}\label{relatively} [\gamma \cdot f, \gamma \cdot f_* ] = \det(\gamma) [f,f_*],\quad \forall \gamma \in {{\rm r}m GL}_2,\ f\in V,\ f_*\in V^*. \end{equation} The scalar matrices in $Z({{\rm r}m GL}_2)$ act by scalar multiplication on both $V$ and $V^*$. Let $a_*:=[y^3,f_*]$, $b_*:=-[xy^2,f_*]$, $c_*:=[x^2y,f_*]$, $d_*:=-[x^3,f_*]$, and \[ \Delta_*(f_*):= 3 b_*^2c_*^2 + 6 a_*b_*c_*d_* - 4 a_*c_*^3 - 4 b_*^3d_* - a_*^2 d_*^2. \] Both $\Delta$ and $\Delta_*$ are homogeneous of degree $4$ and satisfy $\Delta(\gamma \cdot f)=(\det \gamma)^2 \Delta(f)$ and $\Delta_*(\gamma \cdot f_*)=(\det \gamma)^2 \Delta_*(f_*)$. Following \cite[\S3]{Shintani} and \cite[Table 1]{HCL4}, the lattice $V^*({\mathbb Z})$ is isomorphic to the sub-lattice {\beta}gin{equation*} V^*({\mathbb Z}) {\mathcal S}imeq \{a_*x^3+3b_*x^2y+3c_*xy^2+d_*y^3:\;a_*,b_*,c_*,d_*\in{\mathbb Z}\} {\mathcal S}ubset V({\mathbb Z}), \end{equation*} with compatible ${{\rm r}m GL}_2({\mathbb Z})$-action. The restriction of $\Delta$ to $V^*({\mathbb Z})$ coincides with $27 \Delta_*$ as a direct calculation shows. We also see that the pairing $[,]:V({\mathbb Z})\times V^*({\mathbb Z})\to {\mathbb Z}$ coincides with the restriction of the antisymmetric bilinear form {\beta}gin{equation*} {\beta}gin{array}{rcl} V({\mathbb Z})\times V({\mathbb Z})&\to& \frac 13{\mathbb Z} \\[.1in] (f_1,f_2) &\mapsto &d_1a_2 - \frac{c_1b_2}{3}+ \frac{b_1c_2}{3} - a_1d_2. \end{array} \end{equation*} For an integer $n\ge 1$, the ${\mathbb Z}/n{\mathbb Z}$ points of $V$, which we denote by $V({\mathbb Z}/n{\mathbb Z})$, form a finite abelian group which can be identified with the quotient $V({\mathbb Z})/nV({\mathbb Z})$. The same holds for $V^*({\mathbb Z}/n{\mathbb Z}) {\mathcal S}imeq V^*({\mathbb Z})/nV^*({\mathbb Z})$. We obtain a perfect pairing $[,]:V({\mathbb Z}/n{\mathbb Z}) \times V^*({\mathbb Z}/n{\mathbb Z})\to {\mathbb Z}/n{\mathbb Z}$. The finite abelian group $V^*({\mathbb Z}/n{\mathbb Z})$ is in natural bijection with the \emph{group of characters} $V({\mathbb Z}/n{\mathbb Z})\to S^1$, where $S^1$ denotes the unit circle in ${\mathbb C}^\times$. Indeed, given $f_*\in V^*({\mathbb Z}/n{\mathbb Z})$, we associate the character {\beta}gin{equation*} {\beta}gin{array}{rcc} \chi_{f_*}:V({\mathbb Z}/n{\mathbb Z})&\to& S^1\\[.1in] f&\mapsto& e\Bigl(\frac{[f,f_*]}{n}\Bigr), \end{array} \end{equation*} where $e({\alpha}pha):= e^{2\pi i{\alpha}pha}$. {\mathrm {ind}}ex{$\widehat \phi:V({\mathbb Z}/n{\mathbb Z})\to {\mathbb C}$, Fourier transform of function $\phi$ on $V({\mathbb Z}/n{\mathbb Z})$} Given a function $\phi:V({\mathbb Z}/n{\mathbb Z})\to {\mathbb C}$, we have the notion of its \emph{Fourier transform} $\widehat{\phi}$ given by {\beta}gin{equation*} {\beta}gin{array}{rcl} \widehat{\phi}: V^*({\mathbb Z}/n{\mathbb Z})&\to& {\mathbb C}\\[.15in] \widehat{\phi}(f_*)&:=& \displaystyle\frac{1}{n^4}{\mathcal S}um_{f\in V({\mathbb Z}/n{\mathbb Z})} e\Bigl(\frac{[f,f_*]}{n}\Bigr) \phi(f). \end{array} \end{equation*} In this paper, we will be concerned with the Fourier transforms of ${{\rm r}m GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant functions. Regarding this, we have the following result which is probably known although we couldn't find the statement in the literature. 
{\beta}gin{lemma} The Fourier transform $\widehat{\phi}$ of a ${{\rm r}m GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function $\phi$ is ${{\rm r}m GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant. \end{lemma} {\beta}gin{proof} Let $\gamma\in {{\rm r}m GL}_2({\mathbb Z}/n{\mathbb Z})$, $f_*\in V^*({\mathbb Z}/n{\mathbb Z})$ and the function $\phi$ be given. We have {\beta}gin{equation}\label{SL2} {\beta}gin{array}{rcl} \displaystyle\widehat{\phi}(\gamma \cdot f_*)&=& \displaystyle\frac{1}{n^4}{\mathcal S}um_{f\in V({\mathbb Z}/n{\mathbb Z})} e\Bigl(\frac{[f,\gamma \cdot f_*]}{n}\Bigr) \phi(f) \\[.2in]&=& \displaystyle\frac{1}{n^4}{\mathcal S}um_{f\in V({\mathbb Z}/n{\mathbb Z})} e\Bigl(\frac{\det(\gamma)[\gamma^{-1}\cdot f,f_*]}{n}\Bigr) \phi(f) \\[.2in]&=& \displaystyle\frac{1}{n^4}{\mathcal S}um_{f\in V({\mathbb Z}/n{\mathbb Z})} e\Bigl(\frac{\det(\gamma)[f,f_*]}{n}\Bigr) \phi(f), \end{array} \end{equation} where the first equality is by definition, the second equality follows from~\eqref{relatively}, and the third equality follows from the ${{\rm r}m GL}_2({\mathbb Z}/n{\mathbb Z})$-invariance of $\phi$ and the bijective change of variable $f$ by $\gamma \cdot f$. To finish the proof of the lemma, we absorb the $\det(\gamma)$ factor into the sum over $f$ since $\phi(uf)=\phi(f)$ for every $u\in ({\mathbb Z}/n{\mathbb Z})^\times$ and $f\in V({\mathbb Z}/n{\mathbb Z})$ because $Z({{\rm r}m GL}_2)$ acts by scalar multiplication on $V$. \end{proof} {\mathcal S}ubsection{Fourier transforms of ${{\rm r}m GL}_2$-orbits} We now fix a prime number $p$. The \emph{orbits} for the action of ${{\rm r}m GL}_2({\mathbb F}_p)$ on $V({\mathbb F}_p)$ and $V^*({\mathbb F}_p)$ are characterized as follows~\cite[\S5]{TT}. There are six orbits depending on how a binary cubic form factors over ${\mathbb F}_p$. We denote the orbits on $V({\mathbb F}_p)$ by {\mathrm {ind}}ex{${\mathcal O}_{\mathcal S}igma$, orbits for the action of ${{\rm r}m GL}_2({\mathbb F}_p)$ on $V({\mathbb F}_p)$} {\mathrm {ind}}ex{${\mathcal O}^*_{\mathcal S}igma$, orbits for the action of ${{\rm r}m GL}_2({\mathbb F}_p)$ on $V^*({\mathbb F}_p)$} {\beta}gin{equation}\label{eqbcforbit} {\mathcal O}_{(111)},{\mathcal O}_{(12)},{\mathcal O}_{(3)},{\mathcal O}_{(1^21)},{\mathcal O}_{(1^3)},{\mathcal O}_{(0)}, \end{equation} and the orbits on $V^*({\mathbb F}_p)$ by {\beta}gin{equation}\label{eqbcforbitd} {\mathcal O}_{(111)}^*,{\mathcal O}^*_{(12)},{\mathcal O}_{(3)}^*,{\mathcal O}_{(1^21)}^*,{\mathcal O}_{(1^3)}^*,{\mathcal O}_{(0)}^*, \end{equation} respectively, where ${\mathcal O}_{(111)},{\mathcal O}_{(111)}^*$ denote the sets of forms having three distinct rational roots in ${\mathbb P}^1({\mathbb F}_p)$, the sets ${\mathcal O}_{(12)},{\mathcal O}_{(12)}^*$ consist of forms having one root in ${\mathbb P}^1({\mathbb F}_p)$ and one pair of conjugate roots defined over the quadratic extension of ${\mathbb F}_p$, the sets ${\mathcal O}_{(3)},{\mathcal O}_{(3)}^*$ consist of forms irreducible over ${\mathbb F}_p$, the sets ${\mathcal O}_{(1^21)},{\mathcal O}_{(1^21)}^*$ (resp.\ ${\mathcal O}_{(1^3)},{\mathcal O}_{(1^3)}^*$) consist of forms having a root in ${\mathbb P}^1({\mathbb F}_p)$ of multiplicity $2$ (resp.\ $3$), and ${\mathcal O}_{(0)},{\mathcal O}_{(0)}^*$ is the singleton set containing the zero form. Given a subset $S$ of $V({\mathbb F}_p)$ or $V^*({\mathbb F}_p)$, let $C_S$ denote its characteristic function. 
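The six orbits can be exhibited concretely: the splitting type of a nonzero form is determined by the number of its zeros in ${\mathbb P}^1({\mathbb F}_p)$ together with the vanishing of its discriminant modulo $p$. The following Python sketch is an illustration only; it classifies all of $V({\mathbb F}_p)$ for a small prime and tallies the six classes, and the resulting counts can be compared with the orbit sizes recorded in the first row of the matrix $M$ in the proposition below.
\begin{verbatim}
# Tally the splitting types of all binary cubic forms over F_p; the counts
# are the sizes of the six GL_2(F_p)-orbits.
from collections import Counter
from itertools import product

def disc(a, b, c, d):
    return b*b*c*c - 4*a*c**3 - 4*b**3*d - 27*a*a*d*d + 18*a*b*c*d

def splitting_type(f, p):
    a, b, c, d = f
    if (a, b, c, d) == (0, 0, 0, 0):
        return '(0)'
    # zeros [t:1] for t in F_p, plus the point [1:0] when a = 0
    roots = sum(1 for t in range(p) if (a*t**3 + b*t**2 + c*t + d) % p == 0)
    roots += (a == 0)
    if disc(a, b, c, d) % p != 0:
        return {3: '(111)', 1: '(12)', 0: '(3)'}[roots]
    return '(1^2 1)' if roots == 2 else '(1^3)'

p = 5
print(Counter(splitting_type(f, p) for f in product(range(p), repeat=4)))
# expected sizes: 1, p^2-1, p(p^2-1), and p(p+1)(p-1)^2 times 1/6, 1/2, 1/3
\end{verbatim}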
Every ${{\rm r}m GL}_2({\mathbb F}_p)$-invariant function on $V({\mathbb F}_p)$ (resp.\ $V^*({\mathbb F}_p)$) is a linear combination of the six functions \[ C_{{\mathcal O}_{(0)}},\;C_{{\mathcal O}_{(1^3)}},\;C_{{\mathcal O}_{(1^21)}}, C_{{\mathcal O}_{(111)}},\;C_{{\mathcal O}_{(12)}},\;C_{{\mathcal O}_{(3)}},\mbox{ (resp. } C_{{\mathcal O}_{(0)}^*},\;C_{{\mathcal O}_{(1^3)}^*},\;C_{{\mathcal O}_{(1^21)}^*}, C_{{\mathcal O}_{(111)}^*},\;C_{{\mathcal O}_{(12)}^*},\;C_{{\mathcal O}_{(3)}^*}). \] Therefore, the Fourier transforms of the first six of the above functions determine the Fourier transforms of every ${{\rm r}m GL}_2({\mathbb F}_p)$-invariant function on $V({\mathbb F}_p)$. {\mathrm {ind}}ex{$M$, matrix of the Fourier transform of ${{\rm r}m GL}_2({\mathbb F}_p)$-orbits on $V({\mathbb F}_p)$} {\beta}gin{proposition}[Mori~\cite{Mori}] \label{thFT} Let $p\neq 3$ be a prime number, and $M=(m_{ij})$ be the following $6\times 6$ matrix {\beta}gin{equation*} M:=\frac{1}{p^4} \left[ {\beta}gin{array}{cccccc} 1 &(p+1)(p-1) & p(p+1)(p-1) &p(p+1)(p-1)^2/6 & p(p+1)(p-1)^2/2 & p(p+1)(p-1)^2/3 \\[.05in] 1 &-1 & p(p-1) &p(p-1)(2p-1)/6 & -p(p-1)/2 & -p(p+1)(p-1)/3 \\[.05in] 1 &p-1 &p(p-2) & -p(p-1)/2 & -p(p-1)/2 &0 \\[.05in] 1 &2p-1 &-3p &p(\pm p + 5)/6 &-p(\pm p - 1)/2 &p(\pm p -1 )/3 \\[.05in] 1 &-1 &-p &-p(\pm p - 1)/6 &p(\pm p + 1)/2 &-p(\pm p -1 )/3 \\[.05in] 1 &-p-1 &0 &p(\pm p - 1)/6 &-p(\pm p - 1)/2 &p(\pm p + 2)/3 \end{array} {\rm r}ight], \end{equation*} where the signs $\pm$ appearing in the bottom-right $3\times 3$ corner are according as $p \equiv \pm1 \pmod{3}$. Then \[\widehat{C_j}={\mathcal S}um_{i=1}^6 m_{ij}C_{i}^*,\quad 1\le j \le 6, \] where we have set {\beta}gin{equation*} {\beta}gin{array}{rcl} (C_1,C_2,C_3,C_4,C_5,C_6)&:=& (C_{{\mathcal O}_{(0)}},C_{{\mathcal O}_{(1^3)}},C_{{\mathcal O}_{(1^21)}},C_{{\mathcal O}_{(111)}},C_{{\mathcal O}_{(12)}},C_{{\mathcal O}_{(3)}}); \\[.07in] (C_1^*,C_2^*,C_3^*,C_4^*,C_5^*,C_6^*)&:=& (C_{{\mathcal O}_{(0)}^*},C_{{\mathcal O}_{(1^3)}^*},C_{{\mathcal O}_{(1^21)}^*},C_{{\mathcal O}_{(111)}^*}, C_{{\mathcal O}_{(12)}^*},C_{{\mathcal O}_{(3)}^*}). \end{array} \end{equation*} \end{proposition} {\beta}gin{proof} The result was announced in \cite{Mori}, and a proof appears in the work of Taniguchi--Thorne~\cite[Thm.11]{TaTh2} and~\cite[Rem.6.8]{TT}. \end{proof} {\beta}gin{remarks*} (i) For $j=1$, that is for the first column of $M$, Proposition~{\rm r}ef{thFT} says that the Fourier transform of $C_{{\mathcal O}_{(0)}}$, which is the Dirac function of the origin, is equal to the constant function $1/p^4$, as it should be. (ii) For $i=1$, the first row of $M$ in Proposition~{\rm r}ef{thFT} provides the respective sizes of each of the $6$ orbits, because \[ {\mathcal S}um_{f\in V({\mathbb F}_p)} C_j(f) = p^4\widehat {C_j}(0) = p^4 m_{1j}. \] These sizes add up to $p^4$; equivalently, $m_{11}+ m_{12} + \cdots + m_{16} = 1$, as it should be. (iii) For every $j,k$, we have ${\mathcal S}um_{f\in V({\mathbb F}_p)} C_j(f)C_k(f) = p^4 \delta_{jk} m_{1j}$, because the characteristic functions are pairwise orthogonal since the orbits are pairwise disjoint. This implies, by the Plancherel formula, ${\mathcal S}um_{f_*\in V^*({\mathbb F}_p)} \widehat{C_j}(f_*)\widehat{C_k}(f_*) = \delta_{jk} m_{1j}$. Hence, Proposition~{\rm r}ef{thFT} implies {\beta}gin{equation}\label{verif-orth} p^4 {\mathcal S}um^6_{i=1} m_{ij} m_{ik}m_{1i} = \delta_{jk} m_{1j}, \quad 1\le j,k\le 6, \end{equation} which indeed holds true as a direct verification shows.
Because of the symmetry between $j,k$, verifying~\eqref{verif-orth} amounts to verifying $21$ equalities. \end{remarks*} Proposition {\rm r}ef{thFT} has the following important consequence. {\beta}gin{corollary}\label{corphihatbound} Let $p$ be a prime number, and let $\phi:V({\mathbb F}_p)\to{\mathbb C}$ be a ${{\rm r}m GL}_2({\mathbb F}_p)$-invariant function such that $|\phi(f)|\le 1$ for every $f\in V({\mathbb F}_p)$. Then we have {\beta}gin{equation*} \widehat{\phi}(f_*)\ll\left\{ {\beta}gin{array}{ccl} \displaystyle p^{-2}&{{\rm r}m if}& f_*\in {\mathcal O}^*_{(111)},{\mathcal O}^*_{(12)}, {\mathcal O}^*_{(3)},{\mathcal O}^*_{(1^21)}; \\[.15in] \displaystyle p^{-1} &{{\rm r}m if}& f_*\in {\mathcal O}^*_{(1^3)}; \\[.15in] \displaystyle 1 &{{\rm r}m if}& f_*\in{\mathcal O}^*_{(0)}. \end{array} {\rm r}ight. \end{equation*} The absolute constant in $\ll$ can be taken to be $4$. \end{corollary} {\beta}gin{proof} The rows of $M$ are bounded by $m_{1\bullet} = O(1)$, $m_{2\bullet} = O(p^{-1})$ and $m_{i\bullet} = O(p^{-2})$ for $3\le i \le 6$, or equivalently $M=\bigl[O(1),O(p^{-1}),O(p^{-2}),O(p^{-2}),O(p^{-2}),O(p^{-2})\bigr]^T$. For example, we can make the absolute constant explicit as follows: ${\mathcal S}um^6_{j=1} m_{1j} = 1$, ${\mathcal S}um^6_{j=1} |m_{2j}| \le 1/p$, ${\mathcal S}um^6_{j=1} |m_{3j}| \le 2/p^2$, ${\mathcal S}um^6_{j=1} |m_{4j}|\le 4/p^2$, ${\mathcal S}um^6_{j=1} |m_{5j}|\le 2/p^2$, ${\mathcal S}um^6_{j=1} |m_{6j}|\le 2/p^2$. By assumption, $\phi = {\mathcal S}um\limits^6_{j=1} a_j C_j$ with $|a_j|\le 1$. Proposition~{\rm r}ef{thFT} implies that \[ |\widehat \phi(f_*)| \le {\mathcal S}um^6_{i=1} C^*_i(f_*) {\mathcal S}um^6_{j=1} |m_{ij}|. \] We deduce \[ |\widehat \phi(f_*)| \ll C_1^*(f_*) + p^{-1} C_2^*(f_*) + p^{-2} \left(C_3^*(f_*) + C_4^*(f_*) + C_5^*(f_*) + C_6^*(f_*) {\rm r}ight), \] from which the corollary follows. \end{proof} {\mathcal S}ection{The Artin character of cubic fields and rings}\label{sec:Artin} Let $K$ be a cubic field extension of ${\mathbb Q}$, with normal closure $M$. The Dedekind zeta function $\zeta_K(s)$ of $K$ factors as {\beta}gin{equation*} \zeta_K(s)=\zeta_{\mathbb Q}(s)L(s,{\rm r}ho_K), \end{equation*} where $\zeta_{\mathbb Q}(s)$ denotes the Riemann zeta function and $L(s,{\rm r}ho_K)$ is an Artin $L$-function associated to the two-dimensional representation ${\rm r}ho_K$ of ${{\rm r}m Gal}(M/{\mathbb Q})$, \[ {\rm r}ho_K: {{\rm r}m Gal}(M/{\mathbb Q}) \hookrightarrow S_3 \to {{\rm r}m GL}_2({\mathbb C}). \] In this section, we begin by collecting some well-known properties of $L(s,{\rm r}ho_K)$. We denote the Dirichlet coefficients of $L(s,{\rm r}ho_K)$ by $\lambda_K(n)$. Then we extend the definition of $\lambda_K(n)$ to the set of all cubic rings $R$. We do this by defining $\lambda_n(f)$ for all binary cubic forms $f$. Finally, for primes $p\neq 3$, we compute the Fourier transform of the function $\lambda_p$. {\mathrm {ind}}ex{${\rm r}ho_K$, two-dimensional Galois representation} {\mathcal S}ubsection{Standard properties of $L(s,{\rm r}ho_K)$}\label{sHecke} {\mathrm {ind}}ex{$\lambda_K(n)$, $n$th Dirichlet coefficient of $L(s,{\rm r}ho_K)$} We denote the Euler factors of $L(s,{\rm r}ho_K)$ at primes $p$ by $L_p(s,{\rm r}ho_K)$, and the $n$th Dirichlet coefficient of $L(s,{\rm r}ho_K)$ by $\lambda_K(n)$. We have that $\lambda_K$ is multiplicative. We write the $p^k$th Dirichlet coefficient of the logarithmic derivative of $L(s,{\rm r}ho_K)$ as $\theta_K(p^k)\log p$.
That is, we have for ${\mathbb R}e(s)>1$, {\beta}gin{equation}\label{def:Dirichlet} {\beta}gin{array}{rcccl} \displaystyle L(s,{\rm r}ho_K)&=& \displaystyle\prod_{p\;{{\rm r}m prime}} L_p(s,{\rm r}ho_K)&=& \displaystyle{\mathcal S}um^\infty_{n= 1}\frac{\lambda_K(n)}{n^s},\\[.2in] \displaystyle-\frac{L'(s,{\rm r}ho_K)}{L(s,{\rm r}ho_K)}&=& \displaystyle-{\mathcal S}um_{p\;{{\rm r}m prime}}\frac{L_p'(s,{\rm r}ho_K)}{L_p(s,{\rm r}ho_K)}&=& \displaystyle{\mathcal S}um_{n=1}^{\infty}\frac{\theta_K(n)\mathcal{L}ambda(n)}{n^s}. \end{array} \end{equation} Note that $\theta_K$ is supported on prime powers. {\mathrm {ind}}ex{$\theta_K(n)$, coefficient of the logarithmic derivative of $L(s,{\rm r}ho_K)$} Next, we recall some classical facts about $L(s,{\rm r}ho_K)$. Let ${\mathbb G}amma_{\mathbb R}(s):=\pi^{-s/2} {\mathbb G}amma(\frac s2)$ and ${\mathbb G}amma_{\mathbb C}(s):=2(2\pi)^{-s}{\mathbb G}amma(s)$. Hecke proved that the \emph{completed Dedekind zeta function} \[ \xi_K(s) := |\Delta(K)|^{s/2} \zeta_K(s) \cdot {\beta}gin{cases} {\mathbb G}amma_{\mathbb R}(s)^3, & \text{if $\Delta(K)>0$},\\ {\mathbb G}amma_{\mathbb R}(s){\mathbb G}amma_{\mathbb C}(s), & \text{if $\Delta(K)<0$}, \end{cases} \] has a meromorphic continuation to $s\in {\mathbb C}$ with simple poles at $s=0,1$ and satisfies the functional equation $\xi_K(s)=\xi_K(1-s)$. We introduce the following notation: {\beta}gin{equation*} {\beta}gin{array}{rcl} \gamma^+(s)&:=&{\mathbb G}amma_{\mathbb R}(s)^2 = \pi^{-s}{\mathbb G}amma(\frac s2)^2;\\[.1in] \gamma^-(s)&:=&{\mathbb G}amma_{\mathbb C}(s) = 2(2\pi)^{-s}{\mathbb G}amma(s). \end{array} \end{equation*} {\mathrm {ind}}ex{$\gamma^{\pm}(s)$, Gamma factor in the functional equation of $L(s,{\rm r}ho_K)$} {\beta}gin{proposition}[Hecke]\label{p:Hecke} $L(s,{\rm r}ho_K)$ is entire and satisfies the functional equation $\mathcal{L}ambda(s,{\rm r}ho_K) = \mathcal{L}ambda(1-s,{\rm r}ho_K)$, where $\mathcal{L}ambda(s,{\rm r}ho_K) := |\Delta(K)|^{s/2} L_\infty(s,{\rm r}ho_K) L(s,{\rm r}ho_K)$ is the completed $L$-function, and \[ L_\infty(s,{\rm r}ho_K) := \gamma^{{\mathcal S}gn(\Delta(K))}(s) = {\beta}gin{cases} {\mathbb G}amma_{\mathbb R}(s)^2, & \text{if $\Delta(K)>0$},\\ {\mathbb G}amma_{\mathbb C}(s), & \text{if $\Delta(K)<0$}. \end{cases} \] \end{proposition} {\beta}gin{proof} The functional equation of $L(s,{\rm r}ho_K)$ follows from the functional equations of $\zeta_K(s)$ and $\zeta_{\mathbb Q}(s)$. It remains to show that $L(s,{\rm r}ho_K)$ is entire and there are two cases to distinguish: If $K$ is non-Galois, then $M/{\mathbb Q}$ is Galois with Galois group isomorphic to $S_3$, whereas if $K$ is Galois, then $M=K$ with Galois group isomorphic to ${\mathbb Z}/3{\mathbb Z}$. (i) If $K=M$ is Galois, then the Artin representation {\beta}gin{equation*} {\rm r}ho_K:{{\rm r}m Gal}(M/{\mathbb Q})\cong {\mathbb Z}/3{\mathbb Z} \hookrightarrow S_3 \to{{\rm r}m GL}_2({\mathbb C}) \end{equation*} is the direct sum of the two nontrivial characters of ${\mathbb Z}/3{\mathbb Z}$. Hence $L(s,{\rm r}ho_K)= L(s,\chi_K)L(s,\overline {\chi_K})$ for two conjugate Dirichlet characters $\chi_K$ and $\overline{\chi_K}$ of order $3$ and conductor $|\Delta(K)|^{\frac12}$. Dirichlet proved that $L(s,\chi_K)$ and $L(s,\overline{\chi_K})$ are entire. (ii) If $K$ is non-Galois, then the Artin representation {\beta}gin{equation*} {\rm r}ho_K:{{\rm r}m Gal}(M/{\mathbb Q})\cong S_3\to{{\rm r}m GL}_2({\mathbb C}) \end{equation*} obtained from the standard representation of $S_3$ is irreducible. 
In this case, the sextic field $M$ has a unique quadratic subfield denoted $L$. We have an exact sequence \[ {{\rm r}m Gal}(M/L) \hookrightarrow {{\rm r}m Gal}(M/{\mathbb Q}) \twoheadrightarrow {{\rm r}m Gal}(L/{\mathbb Q}), \] and the representation ${\rm r}ho_K$ of ${{\rm r}m Gal}(M/{\mathbb Q}){\mathcal S}imeq S_3$ is induced from a character $\chi_K$ of ${{\rm r}m Gal}(M/L){\mathcal S}imeq A_3={\mathbb Z}/3{\mathbb Z}$: \[ {\rm r}ho_K {\mathcal S}imeq \operatorname{Ind}^{{{\rm r}m Gal}(M/{\mathbb Q})}_{{{\rm r}m Gal}(M/L)} (\chi_K). \] Thus we have $L(s,{\rm r}ho_K) = L(s,\chi_K)$. Via class field theory, $\chi_K$ corresponds to a ring-class character of $L$ of order $3$. We have that $L(s,\chi_K)$ is entire by work of Hecke on the $L$-functions attached to Gr\"ossencharacters. \end{proof} {\beta}gin{proposition}[Hecke, Maass]\label{HeckeMaass} The representation ${\rm r}ho_K$ is modular. That is, there exists a unique automorphic representation $\pi_K$ of ${{\rm r}m GL}_2$ such that $L(s,{\rm r}ho_K)$ is equal to the principal $L$-function $L(s,\pi_K)$. {\beta}gin{itemize} \item If $K/{\mathbb Q}$ is cyclic, then $\pi_K$ is an Eisenstein series with trivial central character. \item If $K$ is an $S_3$-field, then $\pi_K$ is cuspidal and its central character is the quadratic Dirichlet character associated to the quadratic resolvent of $K$. Moreover, {\beta}gin{itemize} \item if $\Delta(K)<0$ then $\pi_{K,\infty}$ is holomorphic of weight $1$, \item if $\Delta(K)>0$ then $\pi_{K,\infty}$ is spherical of weight $0$. \end{itemize} \end{itemize} \end{proposition} {\beta}gin{proof} With notation as in the previous proof, we have, if $K$ is Galois, $L(s,{\rm r}ho_K)=L(s,\chi_K)L(s,\overline{\chi_K})$ for a Dirichlet character $\chi_K$, and if $K$ is non-Galois, $L(s,{\rm r}ho_K)=L(s,\chi_K)$ for a Gr\"ossencharacter $\chi_K$ of finite order of the quadratic extension $L/{\mathbb Q}$. In the latter case, by a result of Hecke and Maass, there is an automorphic representation $\pi_K$ of ${{\rm r}m GL}_2$, constructed from theta-series, such that $L(s,\chi_K)=L(s,\pi_K)$. If $K$ is Galois, then the analogous result $L(s,\chi_K)L(s,\overline{\chi_K})=L(s,\pi_K)$ holds with $\pi_K$ constructed from Eisenstein series. The uniqueness of $\pi_K$ follows from the strong multiplicity-one theorem for ${{\rm r}m GL}_2$. The central character of $\pi_K$ corresponds under class field theory to the determinant character $\det {\rm r}ho_K$, given by \[ \det {\rm r}ho_K: {{\rm r}m Gal}(M/{\mathbb Q}) \twoheadrightarrow {{\rm r}m Gal}(M/{\mathbb Q})^{\text ab} \to {\mathbb C}^\times. \] If $K$ is non-Galois with quadratic resolvent $L$, then ${{\rm r}m Gal}(M/{\mathbb Q})^{\text ab} = {{\rm r}m Gal}(L/{\mathbb Q})$. This concludes the proof of the first claim of the second part of the proposition. The remaining two claims are consequences of results of Hecke and Maass on dihedral forms corresponding to Gr\"ossencharacters of quadratic fields. \end{proof} {\mathcal S}ubsection{Definition and properties of $\lambda_n(f)$} \label{sec:lambdan} Let $K$ be a cubic field with ring of integers ${\mathcal O}_K$. We define the \emph{splitting type} ${\mathcal S}igma_p(K)$ of $K$ at $p$ to be $(111)$, $(12)$, $(3)$, $(1^21)$, or $(1^3)$ according as $p$ factors in ${\mathcal O}_K$ as $\mathfrak{p}_1\mathfrak{p}_2\mathfrak{p}_3$, $\mathfrak{p}_1\mathfrak{p}_2$, $p$, $\mathfrak{p}_1^2\mathfrak{p}_2$, or $\mathfrak{p}^3$, respectively.
Recall that $L(s,{\rm r}ho_K)$ has an Euler factor decomposition, where it may be checked that the $p$th Euler factor $L_p(s,{\rm r}ho_K)$ only depends on the splitting type of $K$ at $p$, and is as follows: {\beta}gin{equation}\label{eqEP} L_p(s,{\rm r}ho_K)=\left\{ {\beta}gin{array}{lclll} (1-p^{-s})^{-2}&=&\displaystyle{\mathcal S}um_{m=0}^\infty (m+1)p^{-ms}&\;{{\rm r}m if}\;&{\mathcal S}igma_p(K)=(111);\\[.15in] (1-p^{-2s})^{-1}&=&\displaystyle{\mathcal S}um_{m=0}^\infty p^{-2ms}&\;{{\rm r}m if}\;&{\mathcal S}igma_p(K)=(12); \\[.15in] (1+p^{-s}+p^{-2s})^{-1}&=&\displaystyle{\mathcal S}um_{m=0}^\infty (p^{-3ms}-p^{-(3m+1)s})&\;{{\rm r}m if}\;&{\mathcal S}igma_p(K)=(3);\\[.15in] (1-p^{-s})^{-1}&=&\displaystyle{\mathcal S}um_{m=0}^\infty p^{-ms}&\;{{\rm r}m if}\;&{\mathcal S}igma_p(K)=(1^21);\\[.15in] 1&&&\;{{\rm r}m if}\;&{\mathcal S}igma_p(K)=(1^3). \end{array}{\rm r}ight. \end{equation} For a prime $p$, recall the six ${{\rm r}m GL}_2({\mathbb F}_p)$-orbits ${\mathcal O}_{\mathcal S}igma$ on $V({\mathbb F}_p)$ defined in \eqref{eqbcforbit}. {\beta}gin{definition}\label{def:lambda} Given an element $f\in V({\mathbb F}_p)$, we define the {\it splitting type} ${\mathcal S}igma_p(f)$ of $f$ to be ${\mathcal S}igma$ if $f\in {\mathcal O}_{\mathcal S}igma$. For $m\geq 1$, we define the function $\lambda_{p^m}:V({\mathbb F}_p)\to {\mathbb Z}$ as follows: Let $f\in V({\mathbb F}_p)$ have splitting type ${\mathcal S}igma$. Let $K$ be any field also having splitting type ${\mathcal S}igma$ at $p$. Then we define $\lambda_{p^m}(f):=\lambda_K(p^m)$. This serves as a definition for all nonzero $f$. For the zero form, we simply define $\lambda_{p^m}(0):=0$. \end{definition} {\mathrm {ind}}ex{${\mathcal S}igma_p(f)$, splitting type of $f$ at $p$} {\mathrm {ind}}ex{$\lambda_n(f)$, Artin character on the space of cubic forms} Explicitly, we compute {\beta}gin{equation}\label{def_lambda} \lambda_{p^m}(f):=\left\{ {\beta}gin{array}{rcl} (m+1)&{{\rm r}m if}& {\mathcal S}igma_p(f)=(111);\\[.05in] 1&{{\rm r}m if}& {\mathcal S}igma_p(f)=(12)\;\; {{\rm r}m and}\;\; m\equiv{0}\pmod{2};\\[.05in] 0&{{\rm r}m if}& {\mathcal S}igma_p(f)=(12)\;\; {{\rm r}m and}\;\; m\equiv{1}\pmod{2};\\[.05in] 1&{{\rm r}m if}& {\mathcal S}igma_p(f)=(3)\;\; {{\rm r}m and}\;\; m\equiv{0}\pmod{3};\\[.05in] -1&{{\rm r}m if}& {\mathcal S}igma_p(f)=(3)\;\; {{\rm r}m and}\;\; m\equiv{1}\pmod{3};\\[.05in] 0&{{\rm r}m if}& {\mathcal S}igma_p(f)=(3)\;\; {{\rm r}m and}\;\; m\equiv{2}\pmod{3};\\[.05in] 1&{{\rm r}m if}& {\mathcal S}igma_p(f)=(1^21);\\[.05in] 0&{{\rm r}m if}& {\mathcal S}igma_p(f)=(1^3);\\[.05in] 0&{{\rm r}m if}& {\mathcal S}igma_p(f)=(0). \end{array} {\rm r}ight. \end{equation} Extending notation, we set $\lambda_{p^m}:V({\mathbb Z})\to{\mathbb Z}$ by defining $\lambda_{p^m}(f):=\lambda_{p^m}(f \pmod{p} )$, where on the right-hand side we have the reduction of $f$ modulo $p$. We also write ${\mathcal S}igma_p(f) = {\mathcal S}igma_p(f\pmod{p})$ for the splitting type of $f$ at $p$. For a positive integer $n\ge 1$, we define $\lambda_n:V({\mathbb Z})\to{\mathbb Z}$ multiplicatively in $n$, i.e., we set {\beta}gin{equation*} \lambda_n(f):=\prod_{p^m\parallel n}\lambda_{p^m}(f). \end{equation*} The function $\lambda_n(f)$ is ${{\rm r}m GL}_2({\mathbb Z})$-invariant and only depends on the reduction of $f$ modulo $\operatorname{rad}(n)$, where $\operatorname{rad}(n)$ is the radical of $n$, that is the largest square-free divisor of $n$. 
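For explicit computations it can be convenient to evaluate $\lambda_n(f)$ directly from the table above together with the multiplicative definition. The following Python sketch is illustrative only: it classifies the reduction of $f$ modulo $p$ by counting zeros in ${\mathbb P}^1({\mathbb F}_p)$ and testing the discriminant, and it uses sympy solely to factor $n$.
\begin{verbatim}
# Illustrative implementation of lambda_{p^m}(f) and lambda_n(f) for an
# integral binary cubic form f = (a, b, c, d).
from sympy import factorint

def disc(a, b, c, d):
    return b*b*c*c - 4*a*c**3 - 4*b**3*d - 27*a*a*d*d + 18*a*b*c*d

def lambda_pm(f, p, m):
    a, b, c, d = (t % p for t in f)
    if (a, b, c, d) == (0, 0, 0, 0):
        return 0                                  # splitting type (0)
    roots = sum(1 for t in range(p) if (a*t**3 + b*t**2 + c*t + d) % p == 0)
    roots += (a == 0)                             # the point [1:0]
    if disc(a, b, c, d) % p != 0:
        if roots == 3:                            # (111)
            return m + 1
        if roots == 1:                            # (12)
            return 1 - m % 2
        return (1, -1, 0)[m % 3]                  # (3)
    return 1 if roots == 2 else 0                 # (1^2 1) vs (1^3)

def lambda_n(f, n):
    result = 1
    for p, m in factorint(n).items():
        result *= lambda_pm(f, p, m)
    return result

# f = x^3 - x y^2 has splitting type (111) at 5, so lambda_{25}(f) = 3
assert lambda_n((1, 0, -1, 0), 25) == 3
\end{verbatim}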
Next, given a binary cubic form $f\in V({\mathbb Z})$, we define the following Dirichlet series: {\mathrm {ind}}ex{$D(s,f)$, Dirichlet series of $\lambda_n(f)$} {\beta}gin{equation}\label{eqLsf} D(s,f):={\mathcal S}um_{n\geq 1}\frac{\lambda_n(f)}{n^s}=\prod_pD_p(s,f), \end{equation} where the function $D_p(s,f)$ depends only on the splitting type of $f$ at $p$. In fact, if a cubic field $K$ has the same splitting type as $f$ at $p$, then $D_p(s,f)=L_p(s,{\rm r}ho_K)$, where $L_p(s,{\rm r}ho_K)$ is given explicitly in \eqref{eqEP}. When $f$ is a multiple of $p$, we have $D_p(s,f)=1$. For an irreducible integral binary cubic form $f$, with associated number field $K_f$ as in Proposition~{\rm r}ef{df}, the relationship between $D(s,f)$ and $L(s,{\rm r}ho_{K_f})$ is given by the following. {\beta}gin{lemma}\label{l_Euler} Let $f\in V({\mathbb Z})^{{\rm r}m irr}$ be irreducible. Assume that $f$ is maximal at the prime $p$. Then ${\mathcal S}igma_p(f)={\mathcal S}igma_p(K_f)$, and therefore {\beta}gin{equation}\label{eqEP1} D_p(s,f)= L_p(s,{\rm r}ho_{K_f}). \end{equation} \end{lemma} {\beta}gin{proof} Since $f$ is maximal at $p$, we have $R_f\otimes{\mathbb Z}_p\cong {\mathcal O}_{K_f}\otimes{\mathbb Z}_p$, where $R_f$ denotes the cubic ring corresponding to $f$ and ${\mathcal O}_{K_f}$ denotes the ring of integers of $K_f$. Further tensoring with ${\mathbb F}_p$, we obtain $R_f\otimes{\mathbb F}_p\cong {\mathcal O}_{K_f}\otimes{\mathbb F}_p$. The former determines ${\mathcal S}igma_p(f)$ while the latter determines the splitting of $K_f$ at $p$. Thus, the claim follows. \end{proof} {\beta}gin{corollary}\label{c_Euler} If $f\in V({\mathbb Z})^{{\rm r}m irr,max}$ is irreducible and maximal, that is if $R_f$ is the ring of integers of the number field $K_f$, then $ L(s,{\rm r}ho_{K_f})=D(s,f)$, and $\lambda_{K_f}(n)=\lambda_n(f)$ for all $n\geq 1$. \end{corollary} {\beta}gin{proof} This is immediate from Definition {\rm r}ef{def:lambda} and the previous Lemma {\rm r}ef{l_Euler}. \end{proof} {\beta}gin{corollary}\label{lem_Dpsf} Let $f\in V({\mathbb Z})^{{\rm r}m irr}$ be irreducible. Then the function $D(s,f)$ converges absolutely for ${\mathbb R}e(s)>1$. \end{corollary} {\beta}gin{proof} This is immediate since $D(s,f)$ and $L(s,{\rm r}ho_{K_f})$ can differ only at the finitely many Euler factors at $p$, where $f$ is nonmaximal at $p$. \end{proof} For every $f\in V({\mathbb Z})$, and prime power $n=p^m$, define $\theta_{p^m}(f)$ from the $p^m$th-coefficient of the logarithmic derivative, {\beta}gin{equation*} {\beta}gin{array}{rcccl} \displaystyle \displaystyle-\frac{D'(s,f)}{D(s,f)}&=& \displaystyle-{\mathcal S}um_{p} \frac{D_p'(s,f)}{D_p(s,f)}&=& \displaystyle{\mathcal S}um_{n=1}^{\infty}\frac{\theta_n(f)\mathcal{L}ambda(n)}{n^s}, \quad {\mathbb R}e(s)>1. \end{array} \end{equation*} {\mathrm {ind}}ex{$\theta_n(f)$, coefficients of the logarithmic derivative of $D(s,f)$} {\beta}gin{lemma}\label{l_thetap2} For every prime $p$ and $f\in V({\mathbb Z})$, we have $\theta_p(f)=\lambda_p(f)$ and $\theta_{p^2}(f)=2\lambda_{p^2}(f)-\lambda_p(f)^2$. Furthermore, we have the bound $|\theta_{p^m}(f)|\leq 2$ for every prime $p$, integer $m\ge 1$ and $f\in V({\mathbb Z})$. \end{lemma} {\beta}gin{proof} The first two claims follow from Schur polynomial identities. The third claim follows from \cite[Lemma 2.2]{SST1}, along with the observation that for every prime $p$ and every nonzero element $f\in V({\mathbb F}_p)$, the factor $D_p(s,f)$ occurs as the $p$'th Euler factor of some cubic field $K$. 
\end{proof} We conclude this section with certain Fourier transform computations. First, we have the following result, which will be useful in the sequel when we sum $\lambda_p$ and $\theta_{p^2}$ over ${{\rm r}m GL}_2({\mathbb Z})$-orbits on integral binary cubic forms having bounded discriminant. {\beta}gin{proposition}\label{c_thetahat} Let $p\neq 3$ be a prime. Then {\beta}gin{equation*} \widehat{\lambda_p}(f_*)\;\;=\;\;\left\{ {\beta}gin{array}{ccl} \displaystyle\frac{-1}{p^3}&{{\rm r}m if}& f_* \in \mathcal O^*_{(111)}, \mathcal O^*_{(12)}, \mathcal O^*_{(3)}, \mathcal O^*_{(1^21)} ;\\[.15in] \displaystyle\frac{p^2-1}{p^3} &{{\rm r}m if}& f_* \in\mathcal O^*_{(1^3)}, \mathcal O^*_{(0)}. \end{array} {\rm r}ight. \end{equation*} Moreover, $ \widehat{\theta_{p^2}}(0)\;\;=\;\;1-\displaystyle\frac{1}{p^2}$. \end{proposition} {\beta}gin{proof} This follows from Proposition~{\rm r}ef{thFT}. For example, when $f_* \in \mathcal O^*_{(111)}$, we compute {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle\widehat{\lambda_p}(f_*)&=&\displaystyle\frac{1}{p^4}\Bigl( \lambda_p(0)+\lambda_p(1^3)(2p-1) +\lambda_p(1^21)(-3p)+\lambda_p(111)(p(5\pm p)/6) \\[.2in]&&\displaystyle +\lambda_p(12)(-p(-1\pm p)/2)+\lambda_p(3)(p(-1\pm p)/3)\Bigr) \\[.2in]&=&\displaystyle \frac{1}{p^4}\Bigl(0+0-3p+p(5\pm p)/3-0-p(-1\pm p)/3\Bigr) \\[.2in]&=&\displaystyle \frac{-1}{p^3}, \end{array} \end{equation*} as claimed. The computation when $f_*$ is in the other orbits is similar. Finally, note that $\theta_{p^2}(f)$ is equal to $2$ when ${\mathcal S}igma_p(f)\in\{(111),(12)\}$, equal to $-1$ when ${\mathcal S}igma_p(f)=(3)$, equal to $1$ when ${\mathcal S}igma_p(f)=(1^21)$, and equal to $0$ otherwise. Therefore, from the first row of the table in Proposition {\rm r}ef{thFT}, we have {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle\widehat{\theta_{p^2}}(0)&=& \displaystyle\bigl(\frac{2}{6}+1-\frac{1}{3}\bigr) \frac{p(p+1)(p-1)^2}{p^4}+\frac{p(p+1)(p-1)}{p^4} \\[.2in]&=&\displaystyle (p-1+1)\Bigl(\frac{(p+1)(p-1)}{p^3}\Bigr) \\[.2in]&=&\displaystyle 1-\frac{1}{p^2}, \end{array} \end{equation*} as necessary. \end{proof} {\beta}gin{remark*} Requiring that equality~\eqref{eqEP1} of Lemma~{\rm r}ef{l_Euler} holds is enough to force the value of $\lambda_{p^m}(f)$ for every nonzero element $f\in V({\mathbb F}_p)$ to be as in~\eqref{def_lambda}. We have then chosen $\lambda_{p^m}(0):=0$ specifically so that the identities of Proposition~{\rm r}ef{c_thetahat} hold. \end{remark*} Let $u_p:V({\mathbb Z}/p^2{\mathbb Z})\to \{0,1\}$ denote the characteristic function of the set of elements that lift to binary cubic forms in $V({\mathbb Z}_p)$ that are maximal at $p$. We then have the following result. {\beta}gin{proposition}\label{propmaxden} We have {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle\widehat{u_p\cdot \lambda_p}(0)&=& \displaystyle\frac{(p-1)(p^2-1)}{p^4};\\[.15in] \displaystyle\widehat{u_p\cdot \lambda_{p^2}}(0)&=& \displaystyle\frac{(p^2-1)^2}{p^4};\\[.15in] \displaystyle\widehat{u_p\cdot \theta_{p^2}}(0)&=& \displaystyle\frac{(p^2-1)^2}{p^4}. \end{array} \end{equation*} \end{proposition} {\beta}gin{proof} The Fourier transform at $0$ can be evaluated by a density computation. That is to say, for any function $\phi:V({\mathbb Z}/p^2{\mathbb Z})\to{\mathbb R}$, we have {\beta}gin{equation*}\widehat{\phi}(0)=\frac{1}{p^8}{\mathcal S}um_{f\in V({\mathbb Z}/p^2{\mathbb Z})}\phi(f).
\end{equation*} In \cite[Lemma 18]{BST}, the densities of $u_p$ are listed for each splitting type, as $\mu({\mathcal U}_p(111))$, $\mu({\mathcal U}_p(12))$, and so on, which we will abbreviate simply as $\mu(111)$, $\mu(12)$, and so on. And so we may calculate: {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle\widehat{u_p\cdot \lambda_p}(0)&=& \displaystyle\mu(111)\lambda_p(111)+\mu(12)\lambda_p(12)+\mu(3)\lambda_p(3)+\mu(1^21)\lambda_p(1^21)+\mu(1^3)\lambda_p(1^3) \\[.2in]&=& \displaystyle \frac{1}{p^4}\Bigl(\frac16 (p-1)^2p(p+1)\cdot 2+\mu(12) \cdot 0 +\frac13 (p-1)^2p(p+1) \cdot(-1)+(p-1)^2(p+1)\cdot 1\Bigr) \\[.2in]&=& \displaystyle \frac{(p-1)(p^2-1)}{p^4}, \end{array} \end{equation*} as necessary. Similarly, we have {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle\widehat{u_p\cdot \lambda_{p^2}}(0)&=& \displaystyle\mu(111)\lambda_{p^2}(111)+\mu(12)\lambda_{p^2}(12)+ \mu(3)\lambda_{p^2}(3)+\mu(1^21)\lambda_{p^2}(1^21)+\mu(1^3)\lambda_{p^2}(1^3) \\[.2in]&=& \displaystyle \frac{1}{p^4}\Bigl(\frac16 (p-1)^2p(p+1)\cdot 3+\frac12 (p-1)^2p(p+1)+ \mu(3)\cdot 0 +(p-1)^2(p+1)\cdot 1\Bigr) \\[.2in]&=& \displaystyle \frac{(p^2-1)(p-1)(p+1)}{p^4}, \end{array} \end{equation*} as necessary. Finally, we have {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle\widehat{u_p\cdot \theta_{p^2}}(0)&=& \displaystyle\mu(111)\theta_{p^2}(111)+\mu(12)\theta_{p^2}(12)+\mu(3)\theta_{p^2}(3)+\mu(1^21)\theta_{p^2}(1^21)+\mu(1^3)\theta_{p^2}(1^3) \\[.2in]&=& \displaystyle \frac{1}{p^4}\Bigl(\Bigl(\frac16+\frac12\Bigr) (p-1)^2p(p+1)\cdot 2 +\frac13 (p-1)^2p(p+1) \cdot(-1)+(p-1)^2(p+1)\cdot 1\Bigr) \\[.2in]&=& \displaystyle \frac{1}{p^4}\bigl((p-1)^2p(p+1)+(p-1)^2(p+1)\bigr) \\[.2in]&=& \displaystyle \frac{(p^2-1)^2}{p^4}, \end{array} \end{equation*} as necessary. \end{proof} {\mathcal S}ection{Estimates on partial sums of Dirichlet coefficients of cubic fields and rings}\label{sec:dedekind} In this section, we compute smoothed partial sums of the coefficients $\lambda_K(n)$ as well as of $\lambda_n(f)$. This section is organized as follows. First we collect some preliminary facts about Mellin inversion. Then, we recall the convexity bounds as well as current records towards the Lindel\"of Hypothesis for principal ${{\rm r}m GL}(2)$ $L$-functions. We use these estimates to obtain bounds on smooth sums of the Dirichlet coefficients $\lambda_K(n)$ in terms of $|\Delta(K)|$, where $K$ is a cubic field. Finally in \S{\rm r}ef{s_lambdaf}, we prove analogous bounds on smooth sums of $\lambda_n(f)$ in terms of $|\Delta(f)|$, where $f\in V({\mathbb Z})^{{\rm r}m irr}$ is an irreducible integral binary cubic form. {\mathcal S}ubsection{Upper bounds on smooth sums of $\lambda_K(n)$} \label{s_lambdaK} We begin with a discussion of Mellin inversion, which will be used throughout this paper. Let ${\mathbb P}hi:{\mathbb R}_{\geq 0}\to{\mathbb C}$ be a smooth function that is rapidly decaying at infinity. We recall the definition of the \emph{Mellin transform} {\mathrm {ind}}ex{$\widetilde {\mathbb P}hi$, $\widetilde {\mathbb P}si$, Mellin transforms of ${\mathbb P}hi$, ${\mathbb P}si$} $$ \widetilde{{\mathbb P}hi}(s):=\int_0^\infty x^s{\mathbb P}hi(x)\frac{dx}{x}. $$ The integral converges absolutely for ${\mathbb R}e(s)>0$. Integrating by parts yields the functional equation $\widetilde{{\mathbb P}hi}(s)=-\widetilde{{\mathbb P}hi'}(s+1)/s$. Hence, it follows that $\widetilde{{\mathbb P}hi}$ has a meromorphic continuation to ${\mathbb C}$, with possible simple poles at non-positive integers. 
Furthermore, $\widetilde{{\mathbb P}hi}(s)$ has superpolynomial decay on vertical strips. \emph{Mellin inversion} states that we have, for every $x\in {\mathbb R}_{>0}$, {\beta}gin{equation*} {\mathbb P}hi(x)=\int_{{\mathbb R}e(s)=2}x^{-s}\widetilde{{\mathbb P}hi}(s)\frac{ds}{2\pi i}. \end{equation*} Consider a general Dirichlet series $ D(s)={\mathcal S}um_{n=1}^\infty \frac{a_n}{n^s} $ which converges absolutely for ${\mathbb R}e(s) > 1$. We can then express the smoothed sums of the Dirichlet coefficients $a_n$ as line integrals. For every positive real number $X\in {\mathbb R}_{>0}$, we have {\beta}gin{equation*} {\mathcal S}um_{n\geq 1} a_n{\mathbb P}hi\Bigl(\frac{n}{X}\Bigr) =\int_{{\mathbb R}e(s)=2} D(s)X^s\widetilde{{\mathbb P}hi}(s)\frac{ds}{2\pi i}. \end{equation*} Consider the function $L(s,{\rm r}ho_K)$ for a cubic field $K$. The \emph{convexity bound} obtained from the Phragm\'en--Lindel\"of principle, \[ L\bigl(\tfrac12+it,{\rm r}ho_K\bigr) \ll_\epsilon (1+|t|)^{\frac12+\epsilon}|\Delta(K)|^{\frac14+\epsilon}, \] will suffice for our purpose of establishing the main Theorem~{\rm r}ef{thmMoment}. We shall also use the current best bound for $L(\frac12+it,{\rm r}ho_K)$ due to Blomer--Khan~\cite{Blomer-Khan} to achieve an improved numerical quality of the exponents in Theorem~{\rm r}ef{thmnv2} and in the other results. {\beta}gin{theorem} [Bound for ${{\rm r}m GL}(2)$ $L$-functions in the level aspect] \label{p_Wu} For every $\epsilon >0$, $t\in {\mathbb R}$ and cubic number field $K$, \[ L\bigl(\tfrac12+it,{\rm r}ho_K\bigr) \ll_\epsilon (1+|t|)^{O(1)} |\Delta(K)|^{\theta + \epsilon}, \] where $\theta:=\frac14 - \delta$ and $\delta:=\frac{1}{128}$. \end{theorem} {\mathrm {ind}}ex{$\delta>0$, subconvexity exponent for $\zeta_K(\tfrac 12)$} {\beta}gin{proof} Proposition~{\rm r}ef{HeckeMaass} says that if $K$ is cyclic, then $L(s,{\rm r}ho_K)=L(s,\chi_K)L(s,\overline \chi_K)$. We then apply Burgess estimate for Dirichlet characters, which yields the upper bound \[ L\bigl(\tfrac12+it,{\rm r}ho_K\bigr)\ll_\epsilon (1+|t|)^{O(1)} |\Delta(K)|^{\frac14 - \frac{1}{16} + \epsilon}. \] If $K$ is an $S_3$-field, then $L(s,{\rm r}ho_K) = L(s,\pi_K)$ is the $L$-function of a ${{\rm r}m GL}(2)$ form of level $|\Delta(K)|$, unitary central character and weight $0$ or $1$. We then apply the estimate of Blomer--Khan~\cite[Thm.~1]{Blomer-Khan}, which yields the desired bound. \end{proof} The above result allows us to bound smoothed weighted partial sums of the Dirichlet coefficients of $L(s,{\rm r}ho_K)$. {\beta}gin{corollary}\label{propsubcv} For every smooth function with compact support ${\mathbb P}hi:{\mathbb R}_{\geq 0}\to {\mathbb C}$, $\epsilon>0$, $T\ge 1$ and cubic number field $K$, {\beta}gin{equation*} {\mathcal S}um_{n\ge 1}\frac{\lambda_K(n)}{n^{1/2}} {\mathbb P}hi\left(\frac nT{\rm r}ight)\ll_{\epsilon,{\mathbb P}hi} T^\epsilon |\Delta(K)|^{\theta+\epsilon}. 
\end{equation*} \end{corollary} {\beta}gin{proof} Applying Mellin inversion, we obtain {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle{\mathcal S}um_{n\ge 1}\frac{\lambda_K(n)}{n^{1/2}} {\mathbb P}hi\left(\frac nT{\rm r}ight)&=& \displaystyle\frac{1}{2\pi i} \int_{{\mathbb R}e(s)=2}L\bigl(\tfrac{1}{2}+s,{\rm r}ho_K\bigr) \widetilde{{\mathbb P}hi}(s)T^sds\\[.2in] &\ll_{\epsilon,N} & \displaystyle T^\epsilon \underset{|t|\le T^\epsilon}{\rm max} \bigl|L\bigl(\tfrac{1}{2}+\epsilon+it,{\rm r}ho_K\bigr)\bigr|+ \underset{|t|> T^\epsilon} {\rm max} |t|^{-N} \bigl|L\bigl(\tfrac{1}{2}+\epsilon+it,{\rm r}ho_K\bigr)\bigr|, \end{array} \end{equation*} where the bound follows by shifting the integral contour to the line ${\mathbb R}e(s)=\epsilon$, and using the rapid decay of the Mellin transform $\widetilde {\mathbb P}hi(\epsilon+it)\ll_N |t|^{-N}$ for $|t|\ge 1$. The corollary now follows from Theorem~{\rm r}ef{p_Wu} and the Phragm\'en--Lindel\"of principle, the upper-bound on the vertical line $\frac12+it$ being transported to the vertical line $\frac12+\epsilon+it$. \end{proof} We continue with the \emph{approximate functional equation} which gives the value of $L(\frac12,{\rm r}ho_K)$ as a sum of its Dirichlet coefficients $\lambda_K$. Let $G(u)$ be an even holomorphic function, bounded in the strip $|{\mathbb R}e(u)|<4$, and normalized by $G(0)=1$. Define for $y\in {\mathbb R}_{>0}$ {\mathrm {ind}}ex{$G(s)$, choice of an even holomorphic function} {\mathrm {ind}}ex{$V^{\pm}$, test function in the approximate functional equation} {\beta}gin{equation}\label{defV} V^\pm(y):=\frac{1}{2\pi i}\int_{{\mathbb R}e(u)=3}y^{-u}G(u)\frac{\gamma^\pm(1/2+u)}{\gamma^\pm(1/2)}\frac{du}{u}. \end{equation} We have that $V^\pm(y)$ is a rapidly decaying function as $y\to \infty$ that extends continuously at the origin with $V^\pm(0)=1$. {\beta}gin{proposition}\label{propAFE} For every cubic number field $K$ with $\pm\Delta(K)\in{\mathbb R}_{>0}$, we have {\beta}gin{equation} L\bigl(\tfrac{1}{2},{\rm r}ho_K\bigr)= 2{\mathcal S}um_{n=1}^\infty \frac{\lambda_K(n)}{n^{1/2}}V^\pm\Bigl(\frac{n}{|\Delta(K)|^{1/2}}\Bigr). \end{equation} \end{proposition} {\beta}gin{proof} In view of the functional equation of Proposition~{\rm r}ef{p:Hecke}, this is~\cite[Theorem 5.3]{IK}. \end{proof} {\mathcal S}ubsection{Upper bounds on smooth sums of $\lambda_n(f)$} \label{s_lambdaf} Let $f\in V({\mathbb Z})^{{\rm r}m irr}$ be an irreducible binary cubic form and recall the Dirichlet series $D(s,f)$ with Dirichlet coefficients $\lambda_n(f)$ defined in \S{\rm r}ef{sec:Artin}. {\beta}gin{definition}\label{d:Ep} For $f\in V({\mathbb Z})^{{\rm r}m irr}$ and a prime $p$, define $E_p(s,f)$ by {\beta}gin{equation*} D_p(s,f)=L_p(s,{\rm r}ho_{K_f})E_p(s,f). \end{equation*} Let $E(s,f)=\prod_{p} E_p(s,f)$, hence we have $D(s,f)=L(s,{\rm r}ho_{K_f})E(s,f)$. \end{definition} {\mathrm {ind}}ex{$E_p(s,f)$, Euler factor of the form $f$ nonmaximal at $p$} It follows from Lemma~{\rm r}ef{l_Euler} that $E_p(s,f)=1$ if $f$ is maximal at $p$, thus $E(s,f)=\prod\limits_{p\mid {\mathrm {ind}}(f)} E_p(s,f)$. We next list the different possible values taken by $E_p(s,f)$. {\beta}gin{lemma}\label{lemEpsf} Let $f\in V({\mathbb Z})^{{\rm r}m irr}$ be an irreducible binary cubic form. For every prime $p$, we have that $E_p(s,f)$ is a polynomial in $p^{-s}$ of degree at most two. In fact, it is one of {\beta}gin{equation*} 1,\quad 1-p^{-s},\quad 1+p^{-s},\quad (1-p^{-s})^2,\quad 1-p^{-2s},\quad 1+p^{-s}+p^{-2s}. 
\end{equation*} Moreover, if $p\parallel{\mathrm {ind}}(f)$, or if the splitting type of $f$ at $p$ is $(1^21)$, then $E_p(s,f)$ is of degree at most one, hence it is one of {\beta}gin{equation*} 1,\quad 1-p^{-s},\quad 1+p^{-s}. \end{equation*} \end{lemma} {\beta}gin{proof} We consider each possible splitting type of $f$ separately. If ${\mathcal S}igma_p(f)=(0)$, then $D_p(s,f)=1$ and $p^2|{\mathrm {ind}}(f)$, hence the lemma follows from~\eqref{eqEP}. If ${\mathcal S}igma_p(f)=(111), (12)$, or $(3)$, then $f$ is maximal at $p$, thus $E_p(s,f)=1$ by Lemma~{\rm r}ef{l_Euler}, and the lemma follows. Suppose next that ${\mathcal S}igma_p(f)=(1^21)$. Then we claim that the splitting type of ${\mathcal O}_{K_f}$ at $p$ is either $(111)$, $(12)$, or $(1^21)$, which implies the lemma by \eqref{eqEP} because then either $E_p(s,f)=1-p^{-s}$, $E_p(s,f)=1+p^{-s}$, or $E_p(s,f)=1$, respectively. Indeed, when $f$ is nonmaximal at $p$, Proposition {\rm r}ef{p:nonmaximal} implies that by replacing $f$ with a ${{\rm r}m GL}_2({\mathbb Z})$-translate, we may assume that $f(x,y)=ax^3+bx^2y+pcxy^2+p^2dy^3$, where $p{\mathrm {nm}}id b$. The overorder $S$ of $R_f$ having index $[S:R_f]=p$ corresponds to the form $g(x,y)=pax^3+bx^2y+cxy^2+dy^3$. Now the splitting type ${\mathcal S}igma_p(g)$ is either $(111)$, $(12)$, or $(1^21)$. In the former two cases, $S$ is maximal at $p$ and the claim is proved. In the last case, the claim follows by induction on the index, by repeating the argument with $g$ instead of $f$. Suppose finally that ${\mathcal S}igma_p(f)=(1^3)$. Then $D_p(s,f)=1$, hence $E_p(s,f)=L_p(s,{\rm r}ho_{K_f})^{-1}$ is a polynomial in $p^{-s}$ of degree at most two given by~\eqref{eqEP}. Suppose moreover that $p\parallel{\mathrm {ind}}(f)$. We need to show that $E_p(s,f)$ is of degree at most one. From Proposition {\rm r}ef{p:nonmaximal}, we may assume that $f(x,y)$ is of the form $ax^3+pbx^2y+pcxy^2+p^2dy^3$. The index-$p$ overorder $S$ of $R_f$ must be maximal at $p$, which implies that the binary cubic form corresponding to ${\mathcal O}_{K_f}\otimes{\mathbb Z}_p$ is $pax^3+pbx^2y+cxy^2+dy^3$. Clearly, the splitting type of ${\mathcal O}_{K_f}$ at $p$ is $(1^21)$ or $(1^3)$. Thus $E_p(s,f)=1-p^{-s}$ or $E_p(s,f)=1$, respectively. \end{proof} We obtain the following result analogous to Corollary {\rm r}ef{propsubcv} for the coefficients $\lambda_n(f)$ where $f$ is an irreducible (not necessarily maximal) binary cubic form. {\beta}gin{proposition}\label{p_sum_lambdan} Let ${\mathbb P}hi:{\mathbb R}_{\geq 0}\to {\mathbb C}$ be a smooth function rapidly decaying at infinity. For every $f\in V({\mathbb Z})^{{\rm r}m irr}$, $\epsilon>0$ and $T\ge 1$, {\beta}gin{equation}\label{eqpartialLfsum} {\mathcal S}um_{n\ge 1} \frac{\lambda_n(f)}{n^{1/2}} {\mathbb P}hi\left(\frac nT {\rm r}ight) \ll_{\epsilon,{\mathbb P}hi} {\mathrm {ind}}(f)^{-2\theta} |\Delta(f)|^{\theta+\epsilon} T^\epsilon, \end{equation} where $\theta=\frac14 - \delta$ is as in Theorem~{\rm r}ef{p_Wu}. \end{proposition} {\beta}gin{proof} The proof is similar to that of Corollary~{\rm r}ef{propsubcv}. We have that the left-hand side is equal to \[ \frac{1}{2\pi i}\int_{{\mathbb R}e(s)=2} T^s{\mathcal S}um_{n\ge 1} \frac{\lambda_n(f)}{n^{\frac12+s}} \widetilde {\mathbb P}hi(s)ds =\frac{1}{2\pi i}\int_{{\mathbb R}e(s)=2}T^sL\bigl(\tfrac12+s,{\rm r}ho_{K_f}\bigr) \prod_{p\mid{\mathrm {ind}}(f)}E_p\bigl(\tfrac12+s,f\bigr) \widetilde {\mathbb P}hi(s) ds.
\] For ${\mathbb R}e(s)\geq 0$, these local factors $E_p(\tfrac12+s,f)$ are absolutely bounded, (indeed by the number $4$). We have the elementary estimate {\beta}gin{equation*} \prod_{p\mid{\mathrm {ind}}(f)}E_p\bigl(\tfrac12+s,f\bigr)\leq \prod_{p\mid{\mathrm {ind}}(f)}4\ll_\epsilon |{\mathrm {ind}}(f)|^\epsilon. \end{equation*} As before, pulling the line of integration to ${\mathbb R}e(s)=\epsilon$, we deduce that \[ {\mathcal S}um_{n\ge 1} \frac{\lambda_n(f)}{n^{1/2}} {\mathbb P}hi\left(\frac nT {\rm r}ight) \ll_{\epsilon,{\mathbb P}hi} T^{\epsilon} |\Delta(K_f)|^{\theta} |\Delta(f)|^{\epsilon}, \] from which the assertion follows since $\Delta(f) = {\mathrm {ind}}(f)^2 \Delta(K_f)$. \end{proof} In our next result below (Theorem~{\rm r}ef{thm_AFE2}), we give a more precise estimate of the smoothed partial sums of $\lambda_n(f)$ when we use ${\mathbb P}hi=V^\pm$ as a smoothing function. We start by defining, for an irreducible binary cubic form $f\in V({\mathbb Z})^{{\rm r}m irr}$, such that $\pm\Delta(f)\in{\mathbb R}_{>0}$, the quantity $S(f)$: {\mathrm {ind}}ex{$S(f)$, Dirichlet sum of length $|\Delta(f)|^{\frac12}$ of $\lambda_n(f)$} {\beta}gin{equation}\label{defSf} S(f):={\mathcal S}um_{n\ge 1} \frac{\lambda_n(f)}{n^{1/2}} V^{\pm}\Bigl(\frac{n}{|\Delta(f)|^{1/2}}\Bigr). \end{equation} If $f\in V({\mathbb Z})^{{{\rm r}m irr},{\rm max}}$ is irreducible and maximal, then $2S(f)=L\bigl(\tfrac12,{\rm r}ho_{K_f}\bigr)$ by Corollary~{\rm r}ef{c_Euler} and Proposition~{\rm r}ef{propAFE}. For general irreducible $f\in V({\mathbb Z})^{{{\rm r}m irr}}$, Proposition {\rm r}ef{p_sum_lambdan} yields the bound {\beta}gin{equation}\label{Lf_bound} S(f)\ll_\epsilon {\mathrm {ind}}(f)^{-2\theta}|\Delta(f)|^{\theta+\epsilon}. \end{equation} Moreover, we have $D(\tfrac12,f) = L(\tfrac12,{\rm r}ho_{K_f}) E(\tfrac12,f)$ and {\beta}gin{equation}\label{boundEf} E\bigl(\tfrac12,f\bigr) =\prod_{p|{\mathrm {ind}}(f)} \Bigl(1+O\bigl(p^{-\frac12}\bigr)\Bigr) = |{\mathrm {ind}}(f)|^{o(1)}, \end{equation} which implies that the same upper bound as~\eqref{Lf_bound} holds for $D(\tfrac12,f)\ll_\epsilon {\mathrm {ind}}(f)^{-2\theta}|\Delta(f)|^{\theta+\epsilon}$. {\beta}gin{definition}\label{defepm} For $f\in V({\mathbb Z})^{{\rm r}m irr}$, a prime $p\mid {\mathrm {ind}}(f)$, and an integer $m\ge 0$, define $e_{p,m}(f)$ from the following power series expansion: \[ \frac{E_p\bigl(\tfrac12-s,f\bigr)} {E_p\bigl(\tfrac12+s,f\bigr)} = p^{2s-1} {\mathcal S}um^\infty_{m=0} e_{p,m}(f) p^{m(1/2-s)}. \] Recall from Definition~{\rm r}ef{d:Ep} that $E_p(s,f)$ is a polynomial in $p^{-s}$ of degree at most two. If $p{\mathrm {nm}}id {\mathrm {ind}}(f)$, let $e_{p,m}(f)=0$ for every $m\ge 0$. 
\end{definition} {\mathrm {ind}}ex{$e_{p,m}(f)$, coefficients of Euler factor of $f$ nonmaximal at $p$} {\beta}gin{examples*} {\bf (a)} $E_p(s,f)=1- p^{-s}$: In this case, we have {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle\frac{p}{p^{2s}} \frac{E_p\bigl(\tfrac12-s,f\bigr)}{E_p\bigl(\tfrac12+s,f\bigr)}&=& \displaystyle\frac{p}{p^{2s}}\Bigl(1-\frac{p^s}{p^{1/2}}\Bigr) \Bigl(1-\frac{1}{p^{1/2+s}}\Bigr)^{-1} \\[.2in]&=&\displaystyle \Bigl(\frac{p}{p^{2s}}-\frac{p^{1/2}}{p^s}\Bigr) \Bigl({\mathcal S}um_{n\geq 0}\frac{1}{p^{n/2+ns}}\Bigr) \\[.2in]&=&\displaystyle 0-\frac{p^{1/2}}{p^s}+\frac{p- 1}{p^{2s}}+\frac{p^{1/2}- p^{-1/2}}{p^{3s}} +\cdots+\frac{p^{-(m-4)/2}- p^{-(m-2)/2}}{p^{ms}}+\cdots \end{array} \end{equation*} It therefore follows that we have {\beta}gin{equation*} e_{p,0}(f)=0,\quad e_{p,1}(f)=-1,\quad e_{p,2}(f)=1-\frac{1}{p},\quad e_{p,m}(f)=(p^{-m+2}- p^{-m+1}), \end{equation*} for all $m\geq 3$. If $E_p(s,f)=1+p^{-s}$, we obtain similar formulas. \noindent{\bf (b)} $E_p(s,f)=(1- p^{-s})^2$: In this case, we have {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle\frac{p}{p^{2s}} \frac{E_p\bigl(\tfrac12-s,f\bigr)}{E_p\bigl(\tfrac12+s,f\bigr)}&=& \displaystyle\frac{p}{p^{2s}}\Bigl(1-\frac{p^s}{p^{1/2}}\Bigr)^2 \Bigl(1-\frac{1}{p^{1/2+s}}\Bigr)^{-2} \\[.2in]&=&\displaystyle \Bigl(\frac{{\mathcal S}qrt{p}}{p^{s}}-1\Bigr)^2 \Bigl({\mathcal S}um_{n\geq 0}\frac{1}{p^{n/2+ns}}\Bigr)^2 \\[.2in]&=&\displaystyle \Bigl(1-2\frac{p^{1/2}}{p^s}+\frac{p}{p^{2s}}\Bigr) \Bigl(1+\frac{2}{p^{1/2+s}}+\frac{3}{p^{1+2s}}+\frac{4}{p^{3/2+3s}}+\cdots\Bigr) \\[.2in]&=&\displaystyle 1+\Bigl(\frac{2}{p^{1/2}}-2p^{1/2}\Bigr)\frac{1}{p^s} +\Bigl(p+\frac{3}{p}-4\Bigr)\frac{1}{p^{2s}} +\Bigl(2p^{1/2}-\frac{6}{p^{1/2}}+\frac{4}{p^{3/2}}\Bigr)\frac{1}{p^{3s}}+\cdots, \end{array} \end{equation*} where the coefficient of $1/p^{ms}$ is $\ll m/p^{(m-4)/2}$. It therefore follows that we have {\beta}gin{equation*} e_{p,0}(f)=1,\quad e_{p,1}(f)=-2+\frac{2}{p},\quad e_{p,2}(f)=1-\frac4{p}+\frac3{p^2}, \quad e_{p,m}(f)\ll \frac{m}{p^{m-2}}, \end{equation*} for all $m\geq 3$. \noindent{\bf (c)} $E_p(s,f)=1+p^{-s}+p^{-2s}$: In this case, we have {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle\frac{p}{p^{2s}} \frac{E_p\bigl(\tfrac12-s,f\bigr)}{E_p\bigl(\tfrac12+s,f\bigr)}&=& \displaystyle\Bigl(1+\frac{p^{1/2}}{p^{s}}+\frac{p}{p^{2s}}\Bigr) \Bigl(1+\frac{1}{p^{1/2+s}}+\frac{1}{p^{1+2s}}\Bigr)^{-1} \\[.2in]&=&\displaystyle \displaystyle\Bigl(1+\frac{p^{1/2}}{p^{s}}+\frac{p}{p^{2s}}\Bigr) \Bigl(1-\frac{1}{p^{1/2+s}}+\frac{1}{p^{3/2+3s}}+\cdots\Bigr) \\[.2in]&=&\displaystyle 1+\Bigl(p^{1/2}-\frac{1}{p^{1/2}}\Bigr)\frac{1}{p^s} +(p-1)\frac{1}{p^{2s}} +\Bigl(\frac{1}{p^{3/2}}-p^{1/2}\Bigr)\frac{1}{p^{3s}}+\cdots, \end{array} \end{equation*} where the coefficient of $1/p^{ms}$ is $\ll_\epsilon p^{\epsilon m}/p^{(m-4)/2}$. It therefore follows that once again we have {\beta}gin{equation*} e_{p,0}(f)=1,\quad e_{p,1}(f)=1-\frac{1}{p},\quad e_{p,2}(f)=1-\frac1{p}, \quad e_{p,m}(f)\ll_\epsilon \frac{p^{\epsilon m}}{p^{m-2}}, \end{equation*} for all $m\geq 3$. \end{examples*} For every integer $k \ge 1$, define $e_k(f)$ multiplicatively as \[ e_k(f) := \prod_{p|k} e_{p,{v_p(k)}}(f). \] If there exists a prime $p|k$ at which $f$ is maximal, then $e_k(f)=0$ because $p{\mathrm {nm}}id {\mathrm {ind}}(f)$ which implies $e_{p,v_p(k)}(f)=0$. In other words, $e_k(f)$ is supported on the integers $k$ all of whose prime factors divide ${\mathrm {ind}}(f)$. 
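The coefficients $e_{p,m}(f)$ computed in the examples above can also be generated mechanically by expanding the ratio $E_p(\tfrac12-s,f)/E_p(\tfrac12+s,f)$ as a power series. The following Python sketch is illustrative only (it assumes sympy is available and writes $x$ for $p^{-s}$); it reproduces the values of Example {\bf (a)}.
\begin{verbatim}
# Expand p^{1-2s} E_p(1/2-s,f)/E_p(1/2+s,f) in x = p^{-s} and read off the
# coefficients e_{p,m}(f) p^{m/2}, here for E_p(s,f) = 1 - p^{-s}.
from sympy import symbols, series, sqrt, simplify, expand, Rational

x, p = symbols('x p', positive=True)
E = lambda u: 1 - u                 # E_p as a polynomial in p^{-s}

ratio = p * x**2 * E(1/(sqrt(p)*x)) / E(x/sqrt(p))
poly = expand(series(ratio, x, 0, 5).removeO())

for m in range(5):
    e_pm = simplify(poly.coeff(x, m) / p**Rational(m, 2))
    print(m, e_pm)
# compare with Example (a): 0, -1, 1 - 1/p, and p^{-m+2} - p^{-m+1} for m >= 3
\end{verbatim}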
{\beta}gin{proposition}\label{Efs} For every $f\in V({\mathbb Z})^{{\rm r}m irr}$, and ${\mathbb R}e(s)>-\tfrac12$, \[ \frac{E\bigl(\tfrac12-s,f\bigr)} {E\bigl(\tfrac12+s,f\bigr)} = {\mathrm {rad}}({\mathrm {ind}}(f))^{2s-1} {\mathcal S}um^\infty_{k=1} e_k(f) k^{1/2-s}. \] \end{proposition} {\beta}gin{proof} Since $E(s,f)=\prod\limits_{p|{\mathrm {ind}}(f)} E_p(s,f)$, the proposition follows from Definition~{\rm r}ef{defepm}, and from Lemma~{\rm r}ef{lemEpsf} which implies that $E_p(\tfrac12+s,f)$ has no zero for ${\mathbb R}e(s)>-\tfrac12$. \end{proof} We will need the following result, bounding the values of $|e_k(f)|$. {\beta}gin{proposition}\label{ekbound} For every $f\in V({\mathbb Z})^{{\rm r}m irr}$, $\epsilon >0$, and $k\ge 1$, \[ e_k(f) \ll_\epsilon k^\epsilon, \] where the implied constant depends only on $\epsilon$. If $k$ is powerful, then we have the improved bound \[ e_k(f) \ll_\epsilon \frac{{\mathrm {rad}}(k)^2}{k} k^\epsilon. \] \end{proposition} {\beta}gin{proof} The first claim of the proposition follows from the bound $e_{p,m}(f)\ll m+p^{\epsilon m}$. The second claim follows from the bounds $e_{p,1}(f),e_{p,2}(f)\ll 1$ and $e_{p,m}(f)\ll_\epsilon \frac{m+p^{\epsilon m}}{p^{m-2}}$ for $m\geq 3$. These bounds were verified in Examples {\bf (a)}, {\bf (b)}, and {\bf (c)} above. (Note that Example {\bf (a)} implies the result for $E_p(s,f)=(1-p^{-2s})$ and also that the case of $E_p(s,f)=(1+p^{-s})$ is identical to that of Example {\bf (a)}.) This concludes the proof of the proposition. \end{proof} Next, we fix a single form $f$, and analyze the coefficients $e_k(f)$. {\beta}gin{proposition}\label{propklarge} Let $f\in V({\mathbb Z})^{{\rm r}m irr}$, and write ${\mathrm {ind}}(f)=q_1q_2$, where $q_1$ is squarefree, $(q_1,q_2)=1$, and $q_2$ is powerful. Then $e_{(\cdot)}(f):{\mathbb Z}_{\geq 1}\to{\mathbb R}$ is supported on multiples of $q_1$. Namely, $q_1{\mathrm {nm}}id k$ implies $e_k(f) =0$. \end{proposition} {\beta}gin{proof} Since $q_1$ is squarefree, it follows from Lemma {\rm r}ef{lemEpsf} that for every prime $p\mid q_1$, the factor $E_p(s,f)$ is one of $1$ or $1\pm p^{-s}$. Observe from Example {\bf (a)} above that $e_{p,0}(f)=0$. The proposition follows immediately. \end{proof} The following is an \emph{unbalanced} approximate functional equation for $D(s,f)$ analogous to Proposition~{\rm r}ef{propAFE} for $L(s,{\rm r}ho_K)$. {\beta}gin{theorem}\label{thm_AFE2} For every $f\in V({\mathbb Z})^{{\rm r}m irr}$, {\beta}gin{equation*} S(f)=D\bigl(\tfrac 12,f\bigr)- {\mathcal S}um^\infty_{k=1} \frac{e_k(f)k^{1/2}} {{\mathrm {rad}}({\mathrm {ind}}(f))} {\mathcal S}um^\infty_{n=1}\frac{\lambda_n(f)}{n^{1/2}}V^{{\mathcal S}gn(\Delta(f))}\Bigl( \frac{{\mathrm {ind}}(f)^2 k n}{{\mathrm {rad}}({\mathrm {ind}}(f))^2|\Delta(f)|^{\frac 12}}\Bigr). \end{equation*} \end{theorem} {\beta}gin{proof} To ease notation for the proof, we let $\pm := {{\mathcal S}gn(\Delta(f))}$ and $K:=K_f$. We have {\beta}gin{equation*} {\beta}gin{array}{rcl} S(f)&=&\displaystyle \int_{{\mathbb R}e(s)=2}D\bigl(\tfrac12+s,f\bigr)|\Delta(f)|^{s/2} \widetilde{V^\pm}(s)\frac{ds}{2\pi i} \\[.15in]&=&\displaystyle D\bigl(\tfrac12,f\bigr)+ \int_{{\mathbb R}e(s)=-1/4}D\bigl(\tfrac12+s,f\bigr)|\Delta(f)|^{s/2} \widetilde{V^\pm}(s)\frac{ds}{2\pi i}.
\end{array} \end{equation*} The functional equation for $L(s+\tfrac12,{\rm r}ho_K)$ is {\beta}gin{equation*} L\bigl(\tfrac12+s,{\rm r}ho_K)\gamma^\pm\bigl(\tfrac12+s)|\Delta(K)|^{\frac{s}{2}} =L\bigl(\tfrac12-s,{\rm r}ho_K)\gamma^\pm\bigl(\tfrac12-s)|\Delta(K)|^{-\frac{s}{2}}. \end{equation*} Therefore, we have {\beta}gin{equation*} {\beta}gin{array}{rcl} S(f)-D(\tfrac12,f)&=&\displaystyle \int_{{\mathbb R}e(s)=-1/4}L\bigl(\tfrac12+s,{\rm r}ho_K\bigr)E(\tfrac12+s,f) |\Delta(f)|^{s/2}\widetilde{V^\pm}(s)\frac{ds}{2\pi i} \\[.15in]&=&\displaystyle \int_{{\mathbb R}e(s)=-1/4}L\bigl(\tfrac12-s,{\rm r}ho_K\bigr) \frac{\gamma^\pm(\tfrac12-s)}{\gamma^\pm(\tfrac12+s)} E(\tfrac12+s,f)|\Delta(K)|^{-s} |\Delta(f)|^{s/2}\widetilde{V^\pm}(s)\frac{ds}{2\pi i} \\[.15in]&=&\displaystyle \int_{{\mathbb R}e(s)=1/4}D\bigl(\tfrac12+s,f\bigr) \frac{E(\tfrac12-s,f)}{E(\tfrac12+s,f)}|\Delta(f)/q^4|^{s/2} \frac{\gamma^\pm(\tfrac12+s)}{\gamma^\pm(\tfrac12-s)}\widetilde{V^\pm}(-s)\frac{ds}{2\pi i}, \end{array} \end{equation*} where the final equality follows since $\Delta(f)=q^2\Delta(K)$, where we have set $q:={\mathrm {ind}}(f)$. Mellin inversion yields {\beta}gin{equation}\label{eqMelV} \widetilde{V^\pm}(s)=\frac{G(s)}{s}\frac{\gamma^\pm(\tfrac12+s)}{\gamma^\pm(\tfrac12)}. \end{equation} As a consequence, we have {\beta}gin{equation*} \frac{\gamma^\pm(\tfrac12+s)}{\gamma^\pm(\tfrac12-s)}\widetilde{V^\pm}(-s)= -\frac{G(s)}{s}\frac{\gamma^\pm(\tfrac12+s)}{\gamma^\pm(\tfrac12)}=-\widetilde{V^\pm}(s), \end{equation*} which we inject in the previous equality: {\beta}gin{equation}\label{eqapeftemp} {\beta}gin{array}{rcl} D(\tfrac12,f)-S(f)&=&\displaystyle \int_{{\mathbb R}e(s)=1/4}D\bigl(\tfrac12+s,f\bigr) \frac{E(\tfrac12-s,f)}{E(\tfrac12+s,f)}|\Delta(f)/q^4|^{s/2} \widetilde{V^\pm}(s)\frac{ds}{2\pi i}\\[.2in] &=&\displaystyle \int_{{\mathbb R}e(s)=1/4}D\bigl(\tfrac12+s,f\bigr) \Bigl( {\mathrm {rad}}(q)^{2s-1} {\mathcal S}um^\infty_{k=1} e_k(f) k^{1/2-s} \Bigr)|\Delta(f)/q^4|^{s/2} \widetilde{V^\pm}(s)\frac{ds}{2\pi i}, \end{array} \end{equation} where the final equality follows from Proposition {\rm r}ef{Efs}. The summand corresponding to $k$ in the second line of \eqref{eqapeftemp} yields ${\mathrm {rad}}(q)^{-1}e_k(f)k^{1/2}$ times the integral {\beta}gin{equation*} \int_{{\mathbb R}e(s)=1/4} D\bigl(\tfrac12+s,f\bigr) \Bigl(\frac{|\Delta(f)|^{\frac12}{\mathrm {rad}}(q)^2}{kq^2} \Bigr)^s \widetilde{V^\pm}(s)\frac{ds}{2\pi i} = {\mathcal S}um_{n\geq 1}\frac{\lambda_n(f)}{n^{1/2}}V^\pm\Bigl( \frac{nkq^2}{{\mathrm {rad}}(q)^2|\Delta(f)|^{\frac 12}}\Bigr). \end{equation*} Theorem~{\rm r}ef{thm_AFE2} follows by summing over $k\ge 1$. \end{proof} We end this section with the following remark. {\beta}gin{remark}{{\rm r}m When we consider sums weighted by the function $V^{\pm}(\cdot /X)$, which is rapidly decaying, we say that the {\it length of the sum} is at most $X^{1+\epsilon}$ (since we have that $V^{\pm}(y)$ is negligible for $y>X^\epsilon$). Suppose $f\in V^({\mathbb Z})^{{\rm r}m irr}$ has large index $q={\mathrm {ind}}(f)$, then all of the inner sums arising in Theorem {\rm r}ef{thm_AFE2} to express $S(f)-D(\tfrac12,f)$ are always significantly shorter than the sum defining $S(f)$. Indeed, the sum defining $S(f)$ has length $|\Delta(f)|^{1/2+\epsilon}$. The length of any inner sum arising in Theorem {\rm r}ef{thm_AFE2} is easily computed. Let $q=q_1q_2$, where $q_1$ is squarefree, $(q_1,q_2)=1$, and $q_2$ is powerful. 
Then note that we have \begin{equation*} \frac{q^2}{{\mathrm{rad}}(q)^2}=\frac{q_2^2}{{\mathrm{rad}}(q_2)^2}\geq q_2, \end{equation*} with equality if and only if the exponent of every prime dividing $q_2$ is $2$. Also note that we have $q_1|k$ from Proposition \ref{propklarge}. Therefore, the length of the inner sum is at most $|\Delta(f)|^{1/2+\epsilon}/{\mathrm{ind}}(f)$.} \end{remark} \section{Counting binary cubic forms using Shintani zeta functions}\label{secszf} In this section we recall the asymptotics for the number of ${\rm GL}_2({\mathbb Z})$-orbits of integral binary cubic forms ordered by discriminant. We will impose congruence conditions modulo positive integers $n$ and study how the resulting error terms depend on $n$. This section is organized as follows: first, in \S\ref{sub:basic-shintani}, we collect results from the theory of Shintani zeta functions corresponding to the representation of ${\rm GL}_2$ on $V$. Next, we use standard counting methods to determine the required asymptotics in~\S\ref{sub:bound-shintani}, and moreover give an explicit bound on the error terms. Finally, in \S\ref{s_polya}, we prove a smoothed analogue of the P\'olya--Vinogradov inequality in the setting of cubic rings. \subsection{Functional equations, poles, and residues of Shintani zeta functions}\label{sub:basic-shintani} Let $n$ be a positive integer and let $\phi:V({\mathbb Z}/n{\mathbb Z})\to {\mathbb C}$ be a ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function. Let $\xi^\pm(\phi,s)$ denote the \emph{Shintani zeta functions} defined by \index{$\xi^\pm(\phi,s)$, Shintani zeta function with congruence function $\phi$} \begin{equation}\label{defxi} \xi^\pm(\phi,s):=\sum_{f\in\frac{V({\mathbb Z})^\pm}{{\rm GL}_2({\mathbb Z})}}\phi(f) \frac{|\Delta(f)|^{-s}}{|{\rm Stab}(f)|}, \end{equation} where we abuse notation and also denote the composition of $\phi$ with the reduction modulo $n$ map $V({\mathbb Z})\twoheadrightarrow V({\mathbb Z}/n{\mathbb Z})$ by $\phi$. For a function $\psi:V^*({\mathbb Z}/n{\mathbb Z}) \to {\mathbb C}$, let $\xi^{*\pm}(\psi,s)$ denote the \emph{dual} Shintani zeta function defined in~\cite[Def.4.2]{TT}. \index{$\xi^{* \pm}(\psi,s)$, dual Shintani zeta function with congruence function $\psi$} \begin{theorem}[Sato--Shintani] \label{thshinfs} The functions $\xi^\pm$ and $\xi^{*\pm}$ have a meromorphic continuation to the whole complex plane, and satisfy the functional equations \begin{equation*}\Bigl(\begin{array}{c} \xi^+(\phi,1-s)\\[.03in]\xi^-(\phi,1-s) \end{array} \Bigr) =n^{4s} \frac{(3^6\pi^{-4})^s}{18} \Gamma\Bigl(s-\frac16\Bigr) \Gamma(s)^2 \Gamma\Bigl(s+\frac16\Bigr) \Bigl(\begin{array}{cc} \sin 2\pi s & \sin \pi s\\ 3\sin \pi s & \sin 2\pi s \end{array} \Bigr) \Bigl(\begin{array}{c} \xi^{*+}(\hat\phi,s)\\[.03in]\xi^{*-}(\hat\phi,s) \end{array} \Bigr), \end{equation*} where $\hat{\phi}:V^*({\mathbb Z}/n{\mathbb Z}) \to {\mathbb C}$ is the Fourier transform of $\phi$ as in \S\ref{s_binary_Fp}. \end{theorem} \begin{proof} This is due to Shintani~\cite{Shintani} for $n=1$ and Sato~\cite{Sato} for general $n$. See also \cite[Thm.4.3]{TT} for a modern exposition. In fact, the above theorem is a special case of those results, since the congruence function $\phi$ in \cite{Sato,TT} is not required to be ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant.
In the more general case of an arbitrary congruence function $\phi:V({\mathbb Z}/n{\mathbb Z})\to {\mathbb C}$, the Shintani zeta function, respectively its dual, is defined using the principal congruence subgroup $\Gamma(n)$ and summing $f$ over the quotient $V({\mathbb Z})^\pm/\Gamma(n)$, respectively $V^*({\mathbb Z})^\pm/\Gamma(n)$. Assuming that $\phi$ is ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant, the general definition reduces to~\eqref{defxi}. \end{proof} The possible poles of $\xi^{\pm}(\phi,s)$ occur at $1$ and $5/6$, and the residues will be given in Proposition~\ref{t:residues} below. First we define \index{$\alpha^\pm,\beta^\pm,\gamma^\pm$, residues of Shintani zeta function} \begin{equation*} \begin{array}{ccccccc} \displaystyle\alpha^+&:=&\displaystyle\frac{\pi^2}{36};\;\;\;\;\; \beta^+&:=&\displaystyle\frac{\pi^2}{12};\;\;\;\;\;\gamma^+&:=&\displaystyle\zeta(1/3)\frac{2\pi^2}{9\Gamma(2/3)^3};\\[.2in] \displaystyle\alpha^-&:=&\displaystyle\frac{\pi^2}{12};\;\;\;\;\; \beta^-&:=&\displaystyle\frac{\pi^2}{12};\;\;\;\;\;\gamma^-&:=&\displaystyle\zeta(1/3)\frac{2\sqrt{3}\pi^2}{9\Gamma(2/3)^3}. \end{array} \end{equation*} Then the functions $\xi^\pm(s)=\xi^\pm(1,s)$, corresponding to the constant function $\phi=1$, have residues $\alpha^\pm+\beta^\pm$ at $s=1$ and $\gamma^\pm$ at $s=5/6$. Moreover, the pole at $1$ has the following interpretation: the term $\alpha^\pm$ comes from the contribution of irreducible cubic forms and the term $\beta^\pm$ comes from the contribution of reducible cubic forms. As before, let $n$ be a positive integer. Let $\phi:V({\mathbb Z}/n{\mathbb Z})\to{\mathbb C}$ be a function of the form $\phi=\prod_{p^\beta \parallel n}\phi_{p^\beta}$, where $\phi_{p^\beta}:V({\mathbb Z}/p^\beta {\mathbb Z})\to{\mathbb C}$ and $\beta:=v_p(n)$. We define the linear functionals ${\mathcal A}_{p^\beta}$, ${\mathcal B}_{p^\beta}$, and ${\mathcal C}_{p^\beta}$ to be \begin{equation}\label{eqcabc} {\mathcal A}_{p^\beta}(\phi_{p^\beta}):=\widehat{\phi_{p^\beta}}(0),\;\;\;\;\;\;\;\; {\mathcal B}_{p^\beta}(\phi_{p^\beta}):=\widehat{\phi_{p^\beta} \cdot b_p}(0),\;\;\;\;\;\;\;\; {\mathcal C}_{p^\beta}(\phi_{p^\beta}):=\widehat{\phi_{p^\beta}\cdot c_p}(0), \end{equation} where $\phi_{p^\beta}\mapsto \widehat{\phi_{p^\beta}}$ is the Fourier transform of functions on $V({\mathbb Z}/p^\beta {\mathbb Z})$ from \S\ref{sec:pre} and where the functions \[ b_p,c_p:V({\mathbb Z}/p^\beta{\mathbb Z}) \twoheadrightarrow V({\mathbb Z}/p{\mathbb Z})\to{\mathbb R}_{\ge 0} \] are ${\rm GL}_2({\mathbb Z}/p^\beta{\mathbb Z})$-invariant and defined in Table \ref{tabbc}. \index{$b_p(f),c_p(f)$, densities of splitting types} \index{${\mathcal A}_n(\phi),{\mathcal B}_n(\phi),{\mathcal C}_n(\phi)$, linear functionals for the residues of $\xi^\pm(\phi,s)$} We define ${\mathcal A}_n(\phi)$, ${\mathcal B}_n(\phi)$, and ${\mathcal C}_n(\phi)$ multiplicatively as the product over $p^\beta\parallel n$ of ${\mathcal A}_{p^\beta}(\phi_{p^\beta})$, ${\mathcal B}_{p^\beta}(\phi_{p^\beta})$, and ${\mathcal C}_{p^\beta}(\phi_{p^\beta})$, respectively.
By multilinearity, the domain of definition of the functionals ${\mathcal A}_n$, ${\mathcal B}_n$, and ${\mathcal C}_n$ extends to all functions $\phi:V({\mathbb Z}/n{\mathbb Z})\to {\mathbb C}$. Abusing notation, we denote the lift of $\phi$ (resp.\ $\phi_{p^\beta}$) to $V(\widehat{{\mathbb Z}})$ (resp.\ $V({\mathbb Z}_p)$) also by $\phi$ (resp.\ $\phi_{p^\beta}$). Note that ${\mathcal A}_n(\phi)$ can be interpreted as the integral \begin{equation*} {\mathcal A}_n(\phi)=\int_{V(\widehat{{\mathbb Z}})}\phi(f)df=\prod_{p}\int_{V({\mathbb Z}_p)}\phi_{p^\beta}(f)df, \end{equation*} where $\phi_{p^\beta}$ is simply defined to be the function $1$ when $p\nmid n$. This is true because, under our normalizations, ${\rm Vol}(V({\mathbb Z}_p))=1$. \begin{table}[ht] \centering \begin{tabular}{|c | c| c| } \hline &&\\[-8pt] Splitting type of $f$ at $p$ & $b_p(f)$ & $(1-p^{-2})c_p(f)$ \\[4pt] \hline\hline &&\\[-8pt] $(111)$ & 3&$(1-p^{-2/3})(1+p^{-1/3})^2$\\[4pt] $(12)$&1 &$(1-p^{-4/3})$\\[4pt] $(3)$&0 &$(1-p^{-1/3})(1+p^{-1})$\\[4pt] $(1^21)$&$\frac{p+2}{p+1}$ &$(1+p^{-1/3})(1-p^{-1})$\\[4pt] $(1^3)$&$\frac{1}{p+1}$ &$(1-p^{-4/3})$\\[4pt] $(0)$& 1 &$(1-p^{-2})p^{2/3}$\\ \hline \end{tabular} \caption{Densities of splitting types}\label{tabbc} \end{table} We then have the following expressions for the residues of Shintani zeta functions; see~\cite{Sato,DW,TT}. \begin{proposition}\label{t:residues} The functions $\xi^\pm(\phi,s)$ are holomorphic on ${\mathbb C} \setminus \{1,5/6\}$ with at worst simple poles at $s=1,5/6$ and the residues are given by \begin{equation*} \begin{array}{lcl} \operatorname*{Res}\limits_{s=1}\xi^\pm(\phi,s)&=&\alpha^\pm\cdot{\mathcal A}_n(\phi)+\beta^\pm\cdot{\mathcal B}_n(\phi),\\[.1in] \operatorname*{Res}\limits_{s=5/6}\xi^\pm(\phi,s)&=&\gamma^\pm\cdot{\mathcal C}_n(\phi). \end{array} \end{equation*} \end{proposition} The interpretation of these residues is that the term $\alpha^\pm\cdot{\mathcal A}_n(\phi)$ is the main term contribution from counting irreducible binary cubic forms, the term $\beta^\pm\cdot{\mathcal B}_n(\phi)$ is the main term contribution from counting reducible binary cubic forms, and the term $\gamma^\pm\cdot{\mathcal C}_n(\phi)$ is the secondary term contribution from counting irreducible binary cubic forms, particularly arising from cubic rings that are close to being \emph{monogenic}, i.e., that have an element which generates a subring of small index. \subsection{Uniform bound for Shintani zeta functions near the abscissa of convergence} We recall the following \emph{tail estimate} due to Davenport--Heilbronn \cite{DH}. See also \cite{BBP} for a streamlined proof. \begin{proposition}[Davenport--Heilbronn] \label{lemunif13} Let $n$ and $m$ be positive squarefree integers. The number of ${\rm GL}_2({\mathbb Z})$-orbits on the set of binary cubic forms having discriminant bounded by $X$ and splitting type $(1^3)$ at every prime dividing $n$ and splitting type $(0)$ at every prime dividing $m$ is bounded by $O_\epsilon(X/(m^4n^{2-\epsilon}))$, where the implied constant is independent of $X$, $m$, and $n$. \end{proposition} Let $p$ be a prime. Recall that the set of ${\rm GL}_2({\mathbb Z}/p{\mathbb Z})$-orbits on $V^*({\mathbb Z}/p{\mathbb Z})$ (resp.\ $V({\mathbb Z}/p{\mathbb Z})$) is classified by the possible splitting types, namely, $(111)$, $(12)$, $(3)$, $(1^21)$, $(1^3)$, and $(0)$.
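Here, as usual, the splitting type of $f$ at $p$ records the factorization type of the reduction of $f$ modulo $p$ into irreducible factors over ${\mathbb F}_p$, with $(0)$ reserved for forms whose reduction vanishes identically. For instance, the form $f(x,y)=x^3-xy^2=x(x-y)(x+y)$ has splitting type $(111)$ at every prime $p>2$ and splitting type $(1^21)$ at $p=2$, while any form all of whose coefficients are divisible by $p$ has splitting type $(0)$ at $p$.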
\begin{definition} For a prime $p$ and a ${\rm GL}_2({\mathbb Z}/p{\mathbb Z})$-invariant function $\psi_p$ on $V^*({\mathbb Z}/p{\mathbb Z})$ (resp.\ $\phi_p$ on $V({\mathbb Z}/p{\mathbb Z})$), we define \begin{equation*} \begin{array}{rcl} E_p(\psi_p)&:=&\displaystyle|\psi_p(111)|+ |\psi_p(12)|+|\psi_p(3)|+|\psi_p(1^21)| +|\psi_p(1^3)|p^{-2}+|\psi_p(0)|p^{-4}, \end{array} \end{equation*} and similarly for $E_p(\phi_p)$. \end{definition} Let $n$ be a positive integer, and let $\psi:V^*({\mathbb Z}/n{\mathbb Z})\to{\mathbb C}$ (resp.\ $\phi:V({\mathbb Z}/n{\mathbb Z})\to{\mathbb C}$) be a ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function. If $\psi$ factors as $\psi=\prod_{p^\beta\parallel n}\psi_{p^\beta}$, where $\psi_{p^\beta}:V^*({\mathbb Z}/p^\beta {\mathbb Z})\to{\mathbb C}$ are ${\rm GL}_2({\mathbb Z}/p^\beta {\mathbb Z})$-invariant functions, then we define \begin{equation*} \begin{array}{rcl} E_n(\psi)&:=&\displaystyle\prod_{\substack{p\parallel n}}E_p(\psi_p)\cdot \prod_{\substack{p^\beta \parallel n\\ \beta \geq 2}}\|\psi_{p^\beta }\|_\infty, \end{array} \end{equation*} where $\|\cdot\|_\infty$ denotes the $L^\infty$-norm. We extend $E_n$ to all ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant functions as the projective cross norm. We have a similar definition for $E_n(\phi)$. \index{$E_n(\psi)$, norm of $\psi$ weighted by splitting types} \begin{proposition}\label{properfirst} Let $n$ be a positive integer. Let $\psi$ be a ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function on $V^*({\mathbb Z}/n{\mathbb Z})$. For every $\epsilon>0$ and $t\in {\mathbb R}$, we have \begin{equation} \xi^{*\pm}(\psi,1+\epsilon+it) \ll_{\epsilon} n^\epsilon E_n(\psi). \end{equation} The same bound holds for $\xi^{\pm}(\phi,1+\epsilon+it)$ for a ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function $\phi$ on $V({\mathbb Z}/n{\mathbb Z})$. \end{proposition} \begin{proof} Let $q$ be a positive squarefree integer. We say that $\tau$ is a {\it splitting type modulo $q$} if $\tau=(\tau_p)_{p\mid q}$ is a collection of splitting types $\tau_p$ for each prime $p$ dividing $q$. Let $q(\tau,1^3)$ $($resp.\ $q(\tau,0))$ denote the product of primes $p$ dividing $q$ such that $\tau_p=(1^3)$ $($resp.\ $\tau_p=(0))$. That is, \[ q(\tau,1^3) := \prod_{\substack{ p|q \\ \tau_p = (1^3) }} p,\quad q(\tau,0) := \prod_{\substack{ p|q \\ \tau_p = (0) }} p. \] We write $n=q\ell$, where $q$ is squarefree, $\ell$ is powerful, and $(q,\ell)=1$. Given an integral binary cubic form $f$, we have the factorization $\psi(f)=\psi_q(f)\psi_\ell(f)$, where $\psi_q:V^*({\mathbb Z}/q{\mathbb Z})\to{\mathbb C}$ and $\psi_\ell:V^*({\mathbb Z}/\ell{\mathbb Z})\to{\mathbb C}$ are ${\rm GL}_2({\mathbb Z}/q{\mathbb Z})$-invariant and ${\rm GL}_2({\mathbb Z}/\ell{\mathbb Z})$-invariant functions, respectively, and, as usual, we denote the lifts of $\psi_q$ and $\psi_\ell$ to $V^*({\mathbb Z})$ also by $\psi_q$ and $\psi_\ell$, respectively. Let $S(q)$ denote the set of splitting types modulo $q$. For $f\in V^*({\mathbb Z})$, the value of $\psi_q(f)$ is determined by the splitting type $\tau$ modulo $q$ of $f$. For such a splitting type $\tau\in S(q)$, we accordingly define $\psi_q(\tau):=\psi_q(f)$, where $f\in V^*({\mathbb Z})$ is any element with splitting type $\tau$ modulo $q$. Let $s=1+\epsilon+it$.
We have \begin{equation*} |\xi^{*\pm}(\psi,s)| \le \|\psi_\ell\|_\infty \cdot \sum_{\tau\in S(q)}|\psi_q(\tau)|\sum_{m=1}^\infty \frac{c_\tau(m)}{m^{1+\epsilon}}, \end{equation*} where $c_\tau(m)$ denotes the number of ${\rm GL}_2({\mathbb Z})$-orbits on the set of elements in $V^*({\mathbb Z})$ having discriminant $m$ and splitting type $\tau$ modulo $q$. From partial summation, we obtain \begin{equation*} \begin{array}{rcl} \displaystyle\sum_{m=1}^\infty \frac{c_\tau(m)}{m^{1+\epsilon}}&=& \displaystyle\sum_{k=1}^{\infty}\Bigl(\frac{1}{k^{1+\epsilon}}-\frac{1}{(k+1)^{1+\epsilon}}\Bigr)\sum_{m=1}^kc_\tau(m) \\[.2in]&\ll_\epsilon & \displaystyle\sum_{k=1}^{\infty} \frac{1}{k^{2+\epsilon}} \sum_{m=1}^kc_\tau(m). \end{array} \end{equation*} From Proposition \ref{lemunif13}, it follows that we have \begin{equation*} \sum_{m=1}^kc_\tau(m)\ll_\epsilon k\cdot q(\tau,1^3)^{-2+\epsilon}\cdot q(\tau,0)^{-4}, \end{equation*} where the multiplicative constant is independent of $n$, $\tau$, and $k$. Therefore, we have \begin{equation*} \begin{array}{rcl} \xi^{*\pm}(\psi,s)&\ll_\epsilon&\displaystyle \|\psi_\ell\|_\infty \cdot \sum_{\tau\in S(q)} |\psi_q(\tau)|q(\tau,1^3)^{-2+\epsilon}\cdot q(\tau,0)^{-4} \Bigl(\sum_{k=1}^{\infty}\frac{1}{k^{1+\epsilon}}\Bigr) \\[.2in]&\ll_\epsilon&\displaystyle n^\epsilon E_n(\psi). \end{array} \end{equation*} In the last equation, we used that \[E_n(\psi)=\|\psi_\ell\|_\infty \cdot \prod\limits_{p|q}E_p(\psi_p) = \|\psi_\ell\|_\infty \cdot \sum\limits_{\tau\in S(q)} |\psi_q(\tau)| q(\tau,1^3)^{-2} q(\tau,0)^{-4}. \qedhere \] \end{proof} \subsection{Smooth counts of binary cubic forms satisfying congruence conditions}\label{sub:bound-shintani} As in the previous subsection, let $n$ be a positive integer, and let $\phi:V({\mathbb Z}/n{\mathbb Z})\to{\mathbb C}$ be a ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function. Let $\Psi:{\mathbb R}_{> 0}\to {\mathbb C}$ be a smooth function of compact support. For a real number $X\ge 1$, define the \emph{counting function} $N^\pm_\Psi(\phi;X)$ to be \begin{equation*} N^\pm_\Psi(\phi;X):=\sum_{f\in\frac{V({\mathbb Z})^\pm}{{\rm GL}_2({\mathbb Z})}} \frac{\phi(f)}{|{\rm Stab}(f)|}\Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr). \end{equation*} Applying the Mellin transform results from Section \ref{sec:dedekind}, and shifting the line of integration from $\Re(s)=2$ to $\Re(s)=-\epsilon$, with $0<\epsilon<1$, we obtain \begin{equation}\label{eqnpx52} \begin{array}{rcl} N^\pm_\Psi(\phi;X) &=&\displaystyle\frac{1}{2\pi i}\int_{\Re(s)=2}X^s\xi^\pm(\phi,s)\widetilde{\Psi}(s)ds\\[.2in] &=&\displaystyle{\rm Res}_{s=1}\xi^\pm(\phi,s)\cdot \widetilde \Psi(1)\cdot X+{\rm Res}_{s=5/6}\xi^\pm(\phi,s) \cdot \widetilde \Psi(\frac 56)\cdot X^{5/6}+{\mathcal E}_\epsilon(\phi,\Psi) \\[.2in] &=&\displaystyle(\alpha^\pm{\mathcal A}_n(\phi)+\beta^\pm{\mathcal B}_n(\phi)) \cdot \widetilde \Psi(1)\cdot X +\gamma^\pm{\mathcal C}_n(\phi) \cdot \widetilde \Psi(\frac 56)\cdot X^{5/6}+{\mathcal E}_\epsilon(\phi,\Psi).
\end{array} \end{equation} The \emph{error term} ${\mathcal E}_\epsilon(\phi,\Psi)$ is defined below, and bounded using the functional equation in Theorem \ref{thshinfs} and Stirling's asymptotic formula in the form $\Gamma(\sigma+it)\ll_{\sigma} (1+|t|)^{\sigma-\frac12} e^{\frac{-\pi |t|}{2}} $ for every $\sigma\not\in {\mathbb Z}_{\le 0}$ and $t\in {\mathbb R}$: \begin{equation}\label{eqnpx52er} {\mathcal E}_\epsilon(\phi,\Psi):=\int_{\Re(s)=-\epsilon} X^s\xi^\pm(\phi,s)\widetilde{\Psi}(s)\frac{ds}{2\pi i} \ll_{\epsilon} n^{4+\epsilon} \operatorname{max}\limits_{t\in {\mathbb R}} |\xi^{*\pm} (\widehat\phi,1+\epsilon+it)| E_\infty(\widetilde{\Psi};\epsilon), \end{equation} where we define $E_\infty(\widetilde{\Psi};\epsilon):=\int_{-\infty}^{\infty} \left| \widetilde \Psi(-\epsilon+it) \right| (1+|t|)^{2+4\epsilon} dt.$ \index{$E_\infty(\widetilde \Psi;\epsilon)$, archimedean norm of $\widetilde \Psi$} \begin{theorem}\label{countphi} Let $\Psi:{\mathbb R}_{>0}\to{\mathbb C}$ be a smooth function with compact support such that $\int^\infty_{0} \Psi(x)dx = 1$, and let $\epsilon>0$. Let $n$ be a positive integer, and write $n=qm$, where $q$ is squarefree, $(q,m)=1$, and $m$ is powerful. For every real $X\ge 1$, and ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function $\phi:V({\mathbb Z}/n{\mathbb Z})\to{\mathbb C}$, we have \begin{equation*} \begin{array}{rcl} \displaystyle N^\pm_\Psi(\phi;X)&=&\displaystyle \bigl(\alpha^\pm{\mathcal A}_n(\phi)+\beta^\pm{\mathcal B}_n(\phi)\bigr)X+ \gamma^\pm{\mathcal C}_n(\phi)\cdot \widetilde \Psi(\frac 56)\cdot X^{5/6}+ O_{\epsilon}\Bigl( n^{4+\epsilon} E_n(\widehat \phi) E_\infty(\widetilde{\Psi};\epsilon)\Bigr). \end{array} \end{equation*} \end{theorem} \begin{proof} This follows from \eqref{eqnpx52}, \eqref{eqnpx52er}, and Proposition \ref{properfirst}. \end{proof} The following lemmas bound $E_n(\widehat \phi)$ for various functions $\phi$. \begin{lemma}\label{lemEbound} Let $n$ be a positive integer and let $\phi$ be a ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function on $V({\mathbb Z}/n{\mathbb Z})$. Then we have, for every $\epsilon>0$, \begin{equation*} E_n(\widehat{\phi})\ll_\epsilon n^\epsilon\bigl(\prod_{p\parallel n}p\bigr)^{- 2}\|\phi\|_\infty. \end{equation*} \end{lemma} \begin{proof} This follows from the definitions of $E_n$ and $E_p$ along with Corollary \ref{corphihatbound}. \end{proof} Recall from~\S\ref{sec:lambdan} the function $\lambda_n$, which is a ${\rm GL}_2({\mathbb Z}/{\mathrm{rad}}(n){\mathbb Z})$-invariant function on $V({\mathbb Z}/{\mathrm{rad}}(n){\mathbb Z})$. \begin{lemma}\label{lemE-lambda} For every positive integer $n$ and every $\epsilon>0$, \begin{equation*} E_n(\widehat{\lambda_n})\ll_\epsilon n^\epsilon \bigl(\prod_{p\parallel n}p\bigr)^{- 3} \bigl(\prod_{p^2\mid n}p\bigr)^{- 2}. \end{equation*} \end{lemma} \begin{proof} This follows in the same way as the previous lemma, along with the additional input of Proposition~\ref{c_thetahat}. \end{proof} \begin{lemma}\label{lemAC-lambda} For every prime $p\neq 3$, \[ {\mathcal A}_p(\lambda_p) = \widehat{\lambda_p}(0) = \frac{p^2-1}{p^3}, \quad {\mathcal C}_p(\lambda_p) = \widehat{\lambda_pc_p}(0) \ll \frac{1}{p^{1/3}}.
\] \end{lemma} \begin{proof} The first equation is derived in Proposition \ref{c_thetahat}. To prove the second inequality, we write \begin{equation*} \begin{array}{rcl} c_p(111)&=& \displaystyle(1-p^{-1/3})(1+p^{-1/3})^3\Bigl(1-\frac{1}{p^2}\Bigr)^{-1};\\[.15in] c_p(3)&=& \displaystyle(1-p^{-1/3})(1+p^{-1})\Bigl(1-\frac{1}{p^2}\Bigr)^{-1};\\[.15in] c_p(1^21)&=& \displaystyle(1+p^{-1/3})(1-p^{-1})\Bigl(1-\frac{1}{p^2}\Bigr)^{-1}. \end{array} \end{equation*} We compute $\widehat{\lambda_pc_p}(0)$ using Proposition \ref{thFT} and obtain \begin{equation*} {\mathcal C}_p(\lambda_p)= \frac13\Bigl(1-\frac{1}{p}\Bigr)(1-p^{-1/3}) \bigl((1+p^{-1/3})^3-(1+p^{-1})\bigr)+\frac{1}{p} \Bigl(1-\frac{1}{p}\Bigr)(1+p^{-1/3}). \end{equation*} Since $(1+p^{-1/3})^3-(1+p^{-1})=3p^{-1/3}+3p^{-2/3}\ll p^{-1/3}$, the first summand above is $\ll p^{-1/3}$, while the second is $\ll p^{-1}$. This concludes the proof of the lemma. \end{proof} \subsection{Application to cubic analogues of P\'olya--Vinogradov}\label{s_polya} We sum the Artin character over isomorphism classes of cubic rings. This is a cubic analogue of the P\'olya--Vinogradov inequality~\cite[Thm.~12.5]{IK}, which sums Artin characters over quadratic rings. There are some substantial differences between the quadratic and cubic cases: first, in the cubic case we see the presence of second order terms, which do not occur in the quadratic case. Second, since the parameter space of cubic rings is four dimensional (as opposed to one dimensional), the trivial range for summing the Artin character $\lambda_n$ over cubic rings with discriminant bounded by $X$ is $X\gg n^4$ (as opposed to $X\gg n$ in the quadratic case). \begin{theorem}[Cubic analogue of P\'olya--Vinogradov]\label{t_Polya} Let $p$ be a prime and let $k\geq 2$ be an integer. Let $\Psi:{\mathbb R}_{>0}\to{\mathbb C}$ be a smooth function with compact support such that $\int_0^\infty \Psi(x)dx=1$. Then we have \begin{equation*} \begin{array}{rcl} \displaystyle\sum_{f\in\frac{V({\mathbb Z})^\pm}{{\rm GL}_2({\mathbb Z})}}\frac{\lambda_p(f)}{|{\rm Stab}(f)|}\Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr)&=& \displaystyle\Bigl(\alpha^\pm\frac{p^2-1}{p^3}+\beta^\pm\frac{p^3-1}{p^3}\Bigr)X+ \gamma^\pm \widehat{\lambda_pc_p}(0)\widetilde \Psi(\frac 56)\cdot X^{5/6}+O_{\epsilon,\Psi}(p^{1+\epsilon}); \\[.2in] \displaystyle\sum_{f\in\frac{V({\mathbb Z})^\pm}{{\rm GL}_2({\mathbb Z})}}\frac{\lambda_{p^k}(f)}{|{\rm Stab}(f)|}\Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr)&=& \displaystyle\Bigl(\alpha^\pm \widehat{\lambda_{p^k}}(0)+\beta^\pm \widehat{\lambda_{p^k}b_p}(0) \Bigr)X +\gamma^\pm \widehat{\lambda_{p^k}c_p}(0)\widetilde \Psi(\frac 56)\cdot X^{5/6}+O_\Psi(kp^2). \end{array} \end{equation*} \end{theorem} \begin{proof} This is a consequence of Theorem \ref{countphi} in conjunction with Propositions~\ref{lemunif13} and \ref{properfirst} and Lemma~\ref{lemE-lambda}. \end{proof} \section{Sieving to the space of maximal binary cubic forms}\label{sec:switch} In this section, we employ an inclusion-exclusion sieve to sum over \emph{maximal} binary cubic forms. To set up this sieve, we need the following notation. Denote the set of maximal integral binary cubic forms by $V({\mathbb Z})^{\rm max}$.
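To orient the reader, we record the elementary mechanism behind the sieve. If a form $f$ is nonmaximal at exactly the primes belonging to a finite set $S$, then, for squarefree $q$, $f$ lies in the set ${\mathcal W}_q$ introduced below precisely when every prime dividing $q$ belongs to $S$, so that \[ \sum_{q\ge 1}\mu(q)\,{\mathbbm 1}_{f\in{\mathcal W}_q}=\prod_{p\in S}(1-1), \] which equals $1$ if $f$ is maximal (i.e., $S=\emptyset$) and $0$ otherwise. This is what makes the inclusion-exclusion identity \eqref{eqinex} below hold.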
For a squarefree positive integer $q$, we let \index{$q$, square-free integer entering into the sieve} ${\mathcal W}_q$ denote the set of elements in $V({\mathbb Z})$ that are \index{${\mathcal W}_q$, elements in $V({\mathbb Z})$ nonmaximal at every prime dividing $q$} \emph{nonmaximal at every prime dividing $q$}. Given a ${\rm GL}_2({\mathbb Z})$-invariant set $S\subset V({\mathbb Z})$, we let $\overline{S}:=\frac{S}{{\rm GL}_2({\mathbb Z})}$ denote the set of ${\rm GL}_2({\mathbb Z})$-orbits on~$S$. \index{$\overline{S}$, set of ${\rm GL}_2({\mathbb Z})$-orbits on $S$} Let $\Psi:{\mathbb R}_{>0}\to{\mathbb C}$ be a smooth function with compact support, and let $\phi:V({\mathbb Z})\to{\mathbb C}$ be a ${\rm GL}_2({\mathbb Z})$-invariant function. Then we have \begin{equation}\label{eqinex} \sum_{f\in\overline{V({\mathbb Z})^{\pm,{\rm max}}}}\frac{\phi(f)}{{|{\rm Stab}(f)|}}\Psi(|\Delta(f)|) =\sum_{q\geq 1}\mu(q)\sum_{f\in\overline{{\mathcal W}^{\pm}_q}}\frac{\phi(f)}{{|{\rm Stab}(f)|}}\Psi\bigl(|\Delta(f)|\bigr). \end{equation} The difficulty in obtaining good estimates for the right-hand side of \eqref{eqinex} is that the set ${\mathcal W}_q$ is defined via congruence conditions modulo $q^2$, and a direct application of the results of Section~\ref{secszf} yields error terms that are not sufficiently precise for sums over such sets. We overcome this difficulty in \S\ref{s_overrings} by using a ``switching trick'', developed in \cite{BST}, which transforms the sum over ${\mathcal W}_p$ into a weighted sum over $V({\mathbb Z})$, where the weights are defined modulo $p$. We then combine the results of Section~\ref{secszf} and \S\ref{s_overrings} to carry out the sieve and obtain improved bounds for the error term. Finally, in \S\ref{s_switch_applications}, we derive several applications; notably, we obtain a smoothed version of Roberts' conjecture, and sum the Artin character $\lambda_K(n)$ over cubic fields $K$. For a positive squarefree integer $m$ and an integral binary cubic form $f\in V({\mathbb Z})$, denote the number of roots (resp.\ simple roots) in ${\mathbb P}^1({\mathbb Z}/m{\mathbb Z})$ of the reduction of $f$ modulo $m$ by $\omega_m(f)$ (resp.\ $\omega^{(1)}_m(f)$). By the Chinese remainder theorem, $\omega_m(f)$ and $\omega^{(1)}_m(f)$ are multiplicative in $m$. \index{$\omega^{(1)}_m(f)$, number of simple roots of $f$ modulo $m$} \begin{proposition}[{\cite[Equation (70)]{BST}}] \label{prop:sqitchclassic} For every positive squarefree integer $q$ and every function $\Psi:{\mathbb R}_{>0}\to {\mathbb C}$ of compact support, \begin{equation*} \sum_{f\in \overline{{\mathcal W}^{\pm}_q}} \frac{\Psi\bigl(|\Delta(f)|\bigr)}{{|{\rm Stab}(f)|}} = \sum_{k\ell\mid q}\mu(\ell) \sum_{f\in\overline{V({\mathbb Z})^{\pm}}}\frac{\omega_{k\ell}(f)}{{|{\rm Stab}(f)|}} \Psi\Bigl(\frac{q^4|\Delta(f)|}{k^2}\Bigr). \end{equation*} \end{proposition} The above identity was proved using the following procedure in \cite[\S9]{BST}. Every element $f\in {\mathcal W}_q$ corresponds to a ring $R_f$ that is nonmaximal at every prime dividing $q$, hence $R_f$ is contained in a certain ring $R$, such that the \emph{index} ${\mathrm{ind}}(f):=[R:R_f]$ satisfies $q\mid{\mathrm{ind}}(f)$ and ${\mathrm{ind}}(f)\mid q^2$.
In particular, the discriminant of $R_f$ is smaller than that of $R$. Then elements in $\overline{{\mathcal W}_q}$ can be counted by counting the rings $R$ instead of $R_f$. In what follows, we formalize this procedure, and adapt it so that we may sum congruence functions over $\overline{{\mathcal W}_q}$ (Theorem~\ref{thswitch}, which is a strengthening of Proposition~\ref{prop:sqitchclassic}). \subsection{Switching to overrings}\label{s_overrings} We begin with a bijection which allows us to replace sums over $\overline{{\mathcal W}_q}$ with sums over $\overline{{\mathcal W}_{q_1}}$, for various integers $q_1\mid q$ with $q_1<q$. Given a set $S\subset V({\mathbb Z})$ and an element $\alpha\in{\mathbb P}^1({\mathbb F}_p)$, let $S^{(\alpha)}$ denote the set of elements $f\in S$ such that $f(\alpha)\equiv 0\pmod p$. Then we have the following result. \begin{lemma}\label{lembij} Let $q$ be a positive squarefree number, and let $p$ be a prime such that $p\mid q$. Then there is a natural bijection between the following two sets: \begin{equation}\label{eqbij} \Bigl(\overline{{\mathcal W}_q}\backslash \overline{p{\mathcal W}_{q/p}}\Bigr) \bigcup \overline{\Bigl\{(pg,\alpha):g\in {\mathcal W}_{q/p}^{(\alpha)},\,\alpha\in{\mathbb P}^1({\mathbb F}_p)\Bigr\}} \longleftrightarrow \overline{\Bigl\{ (g,\alpha): g\in {\mathcal W}_{q/p}^{(\alpha)},\,\alpha\in{\mathbb P}^1({\mathbb F}_p) \Bigr\}}. \end{equation} Moreover, this bijection can be explicitly described as follows: both sets are in natural bijection with the set of isomorphism classes of pairs $(R,R')$ with $R\subset R'$, where $R$ is an index-$p$ subring of the cubic ring $R'$. The two bijections are given via $(R_f=R,R')\mapsto f$ and $(R,R'=R_g)\mapsto g$. Furthermore, we have $|{\rm Stab}(f)|=|{\rm Stab}(g)|$. \end{lemma} \index{$f\leftrightarrow (g,\alpha)$ switch, $R_f$ is an index-$p$ subring of $R_g$} \begin{proof} The set $\overline{{\mathcal W}_q}$ is in bijection with the set of cubic rings that are nonmaximal at every prime $p$ dividing $q$. As in \cite[\S7]{BST}, we consider the set of pairs of cubic rings $R\subset R'$, such that $R$ is nonmaximal at every prime dividing $q$, and the index of $R$ in $R'$ is $p$. Let $f$ and $g$ be representatives for the ${\rm GL}_2({\mathbb Z})$-orbits on $V({\mathbb Z})$ corresponding to $R$ and $R'$, respectively. If $f\in {\mathcal W}_q$ is not a multiple of $p$, then there exists a unique index-$p$ overring $R'$ of $R$ by Proposition \ref{subsupring}. On the other hand, if $f=pg$ is a multiple of $p$, then the set of index-$p$ overrings $R'$ of $R$ is in natural bijection with the set of roots of $g$ in ${\mathbb P}^1({\mathbb F}_p)$ (also by Proposition \ref{subsupring}). Therefore, the set of pairs $(R,R')$ is in natural bijection with ${\rm GL}_2({\mathbb Z})$-orbits on the following set: \begin{equation}\label{eqs1} ({\mathcal W}_q\backslash p {\mathcal W}_{q/p})\bigcup \Bigl\{(pg,\alpha):\alpha\in{\mathbb P}^1({\mathbb F}_p),\; g\in {\mathcal W}_{q/p}^{(\alpha)}\Bigr\}, \end{equation} and every form in the above set corresponds to the ring $R$. On the other hand, the set of index-$p$ subrings of the ring $R_g$ is in natural bijection with the set of roots of $g$ in ${\mathbb P}^1({\mathbb F}_p)$ by Proposition \ref{subsupring}.
Therefore, the set of pairs $(R,R')$ is also in natural bijection with ${\rm GL}_2({\mathbb Z})$-orbits on the set \begin{equation}\label{eqs2} \Bigl\{(g,\alpha):\alpha\in{\mathbb P}^1({\mathbb F}_p),\;g\in {\mathcal W}_{q/p}^{(\alpha)}\Bigr\}, \end{equation} and every form in the above set corresponds to the ring $R'$. It follows that ${\rm GL}_2({\mathbb Z})$-orbits on the sets \eqref{eqs1} and \eqref{eqs2} are in natural bijection. \end{proof} We will also need the following lemma determining how the above bijection changes the splitting types of the binary cubic forms. \begin{lemma}\label{l_sigmap_f} Let $g\in {\mathcal W}_{q/p}$ and let $\alpha\in{\mathbb P}^1({\mathbb F}_p)$ be a root of $g$ modulo $p$. Let $f\in \overline{{\mathcal W}_q}$ correspond to the ${\rm GL}_2({\mathbb Z})$-orbit of $(g,\alpha)$ under the bijection of Lemma \ref{lembij}. Then \begin{equation*} \sigma_p(f)=\left\{\begin{array}{ccl} (1^21) && \mbox{if $\alpha$ is a simple root of $g$;}\\[.1in] (1^3)\mbox{ or } (0) && \mbox{otherwise.} \end{array}\right. \end{equation*} Moreover, for every prime $\ell\neq p$, we have $\sigma_\ell(f)=\sigma_\ell(g)$. And more generally, for every integer $n$ coprime with $p$, the reduction of $f$ modulo $n$ and the reduction of $g$ modulo $n$ are ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-conjugates. \end{lemma} \begin{proof} By translating $g$ with an element of ${\rm GL}_2({\mathbb Z})$ if necessary, we can assume that $\alpha=[0:1]$. In that case, we have $g(x,y)=ax^3+bx^2y+cxy^2+dy^3$, where $p\mid d$. Furthermore, we have $p\nmid c$ if and only if $\alpha$ is a simple root. Then, the element $f(x,y)$ is given by $f(x,y)=p^2ax^3+pbx^2y+cxy^2+(d/p)y^3$, and has splitting type $(1^21)$ if and only if $p\nmid c$. The first part of the lemma follows. The second part of the lemma follows since $R_f$ has index $p$ in $R_g$, which implies that $R_f\otimes{\mathbb F}_\ell\cong R_g\otimes{\mathbb F}_\ell$ and $R_f \otimes {\mathbb Z}/n{\mathbb Z} \cong R_g \otimes {\mathbb Z}/n{\mathbb Z}$. \end{proof} Let $n$ be a positive integer, and let $\phi:V({\mathbb Z}/n{\mathbb Z})\to{\mathbb C}$ be a ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function such that $\phi$ is given by \begin{equation*} \phi=\prod_{p^\beta\| n}\phi_{p^\beta}, \end{equation*} where $f\mapsto \phi_{p^\beta}(f)$ is ${\rm GL}_2({\mathbb Z}/p^\beta{\mathbb Z})$-invariant. When $\beta=1$, we have that $\phi_p(f)$ is determined by the splitting type of $f$ at $p$. For any positive integer $k$ dividing $n$, such that $(k,n/k)=1$, we denote $\prod_{p^\beta\|k}\phi_{p^\beta}$ by $\phi_k$. Let $d\geq 1$ be a squarefree integer dividing $n$ such that $(d,n/d)=1$. \begin{definition} We say that such a function $\phi$ is {\it simple} at $d$ if, for all $p\mid d$, we have $\phi_p(f)= \phi_p(0)$ whenever the splitting type of $f$ at $p$ is $(1^3)$. \end{definition} \index{$\phi_p(1^3)=\phi_p(0)$, simple congruence function at $p$} We are now ready to prove the main result of this subsection. \begin{theorem}\label{thswitch} Let $\Psi:{\mathbb R}_{> 0}\to {\mathbb C}$ be a compactly supported function, let $n$ be a positive integer, and let $q$ be a positive squarefree integer. Let $\phi$ be a ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function on $V({\mathbb Z}/n{\mathbb Z})$.
Denote $(q,n)$ by $de$, where $d$ is the product of primes dividing $(q,n)$ at which $\phi$ is simple, and assume that $\phi_p(0)=0$ for every prime $p|d$. Write $n=dm$ and $\phi=\phi_d\phi_m$. Then \begin{equation*} \displaystyle\sum_{f\in\overline{{\mathcal W}_q^\pm}} \frac{\phi(f)}{{|{\rm Stab}(f)|}}\Psi\bigl(|\Delta(f)|\bigr)= \phi_d(1^21)\displaystyle\sum_{k\ell \mid \frac{q}{de}}\mu(\ell)\sum_{g\in\overline{{\mathcal W}_e^\pm}} \omega_d^{(1)}(g)\omega_{k\ell}(g) \frac{\phi_m(g)}{|{\rm Stab}(g)|} \Psi\Bigl(\frac{q^4|\Delta(g)|}{d^2k^2}\Bigr). \end{equation*} \end{theorem} \begin{proof} We prove Theorem \ref{thswitch} by induction on the number of prime factors of $d$. First we consider the case $d=1$, which we establish by induction on the number of prime factors of $q/e$. Let $p$ be a prime dividing $q/e$. We use the bijection of Lemma~\ref{lembij} to relate the sum over $f\in\overline{{\mathcal W}_q}$ to a sum over $g\in\overline{{\mathcal W}_{q/p}}$. If $f\in \overline{p{\mathcal W}_{q/p}}$, then $\phi(f/p)=\phi(f)$, because $\phi$ is ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant and $(p,n)=1$ implies that $1/p\in Z({\rm GL}_2({\mathbb Z}/n{\mathbb Z}))$, which acts by scalar multiplication on $V({\mathbb Z}/n{\mathbb Z})$. Suppose $f\in\overline{{\mathcal W}_q}\backslash \overline{p{\mathcal W}_{q/p}}$ corresponds to the ${\rm GL}_2({\mathbb Z})$-orbit of $g\in {\mathcal W}_{q/p}$ and a root $\alpha\in {\mathbb P}^1({\mathbb F}_p)$ of $g$ modulo $p$ under the surjection of Lemma \ref{lembij}. Then since $(n,q)=1$, we have $\phi(f)=\phi(g)$ from Lemma \ref{l_sigmap_f}. Thus, \begin{align*} \sum_{f\in\overline{{\mathcal W}_q^\pm}} \frac{\phi(f)}{|{\rm Stab}(f)|} \Psi\bigl(|\Delta(f)|\bigr) &= \sum_{k_1\ell_1 \mid p}\mu(\ell_1)\sum_{f\in\overline{{\mathcal W}_{q/p}^\pm}} \omega_{k_1\ell_1}(f) \frac{\phi(f)}{|{\rm Stab}(f)|} \Psi\Bigl(\frac{q^4|\Delta(f)|}{k_1^2}\Bigr)\\ &= \sum_{k_1\ell_1 \mid p}\mu(\ell_1) \sum_{k_2\ell_2 \mid \frac{q}{p}}\mu(\ell_2) \sum_{f\in\overline{{\mathcal W}_{e}^\pm}} \omega_{k_1\ell_1}(f) \omega_{k_2\ell_2}(f) \frac{\phi(f)}{|{\rm Stab}(f)|} \Psi\Bigl(\frac{q^4|\Delta(f)|}{k_1^2k_2^2}\Bigr)\\ &= \sum_{k\ell \mid \frac{q}{e}}\mu(\ell) \sum_{f\in\overline{{\mathcal W}_{e}^\pm}} \omega_{k\ell}(f) \frac{\phi(f)}{|{\rm Stab}(f)|} \Psi\Bigl(\frac{q^4|\Delta(f)|}{k^2}\Bigr), \end{align*} where the second equality follows by induction applied to the sum over $\overline{{\mathcal W}_{q/p}^\pm}$ of the ${\rm GL}_2({\mathbb Z}/pn{\mathbb Z})$-invariant function $\omega_{k_1\ell_1}\cdot \phi$. It remains to prove the inductive step on the number of prime factors of $d$. Let $p$ be a prime dividing $d$. We use the bijection of Lemma~\ref{lembij} to relate the sum over $f\in\overline{{\mathcal W}_q}$ to sums over $f\in\overline{{\mathcal W}_{q/p}}$. Suppose $f\in\overline{{\mathcal W}_q}$ corresponds to $g\in\overline{{\mathcal W}_{q/p}^{(\alpha)}}$ under the bijection of Lemma \ref{lembij}. Then by Lemma \ref{l_sigmap_f}, we have $\phi_p(f)=\phi_p(1^21)$ if $\alpha$ is a simple root and $\phi_p(f)=\phi_p(1^3)=0$ otherwise. Also, we have $\phi_{n/p}(g)=\phi_{n/p}(f)$.
Therefore, we have \begin{equation*} \begin{array}{rcl} \displaystyle\sum_{f\in\overline{{\mathcal W}_q^\pm}} \frac{\phi(f)}{|{\rm Stab}(f)|} \Psi\bigl(|\Delta(f)|\bigr)&=& \displaystyle\sum_{g\in\overline{{\mathcal W}^\pm_{q/p}}} \omega^{(1)}_p(g) \phi_p(1^21) \frac{\phi_{n/p}(g)}{|{\rm Stab}(g)|} \Psi\Bigl(\frac{|\Delta(g)|}{p^2}\Bigr) \\[.2in]&=&\displaystyle \phi_d(1^21) \sum_{g\in\overline{{\mathcal W}^\pm_{q/d}}} \omega^{(1)}_p(g)\omega^{(1)}_{d/p}(g) \frac{\phi_{n/d}(g)}{|{\rm Stab}(g)|} \Psi\Bigl(\frac{|\Delta(g)|}{d^2}\Bigr), \end{array} \end{equation*} where the second equation follows by induction applied to the sum over $\overline{{\mathcal W}^\pm_{q/p}}$ of the (simple at $d/p$) function $\phi_{n/p}\cdot\omega^{(1)}_p$. The result now follows since $\omega^{(1)}_k$ is multiplicative in $k$. \end{proof} \subsection{Summing congruence functions over $\overline{{\mathcal W}^\pm_q}$}\label{s_sieve_max} Let $n$ be a positive integer and let $\phi:V({\mathbb Z}/n{\mathbb Z})\to {\mathbb C}$ be of the form $\phi=\prod_{p^\beta\parallel n}\phi_{p^\beta}$, where $\phi_{p^\beta}:V({\mathbb Z}/p^\beta {\mathbb Z})\to {\mathbb C}$ and $\beta:=v_p(n)$. \index{${\mathcal A}^{(q)}_n,{\mathcal C}^{(q)}_n$, residue functionals with nonmaximality condition at $q$} Let $V({\mathbb Z}_p)^{\mathrm{nm}}$ be the subset of $V({\mathbb Z}_p)$ of nonmaximal cubic forms. It is the closure of ${\mathcal W}_p^\pm$ inside $V({\mathbb Z}_p)$. \index{$V({\mathbb Z}_p)^{\mathrm{nm}}$, subset of $V({\mathbb Z}_p)$ of nonmaximal cubic forms} Given a positive squarefree integer $q$, we define \[ \begin{array}{rcl} \displaystyle{\mathcal A}^{(q)}_{n}(\phi) &:=&\displaystyle \prod_{p|q} \int_{ V({\mathbb Z}_p)^{\mathrm{nm}}}\phi_{p^\beta}(f)df \cdot \prod_{\substack{p\mid n\\ p\nmid q}} {\mathcal A}_{p^\beta}(\phi_{p^\beta}), \\[.2in] {\mathcal C}^{(q)}_{n}(\phi) &:=&\displaystyle \prod_{p|q} \int_{ V({\mathbb Z}_p)^{\mathrm{nm}}}\phi_{p^\beta}(f)c_p(f)df \cdot \prod_{\substack{p\mid n\\ p\nmid q}} {\mathcal C}_{p^\beta}(\phi_{p^\beta}), \end{array} \] where $df$ denotes the probability Haar measure on $V({\mathbb Z}_p)$, and the values of $c_p(f)$ are given in Table \ref{tabbc}. When $p|q$ but $p\nmid n$, we assume by convention that $\phi_{p^\beta}=1$ in the integral above. Note that if $q=1$, then ${\mathcal A}_n^{(q)}={\mathcal A}_n$ and ${\mathcal C}_n^{(q)}={\mathcal C}_n$. \begin{theorem}\label{propWqerror} Let $\Psi:{\mathbb R}_{> 0}\to {\mathbb C}$ be a smooth function with compact support such that $\int \Psi=1$, let $n$ be a positive integer, let $q$ be a positive squarefree integer, and let $d:=(q,n)$. Let $\phi = \prod_{p^\beta \parallel n} \phi_{p^\beta}$ be a ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function on $V({\mathbb Z}/n{\mathbb Z})$ such that $\phi$ is simple at $d$ and $\phi_p(0)=0$ for every prime $p\mid d$. Finally, assume that there exists a prime $p$ dividing $n/d$ such that $\phi_{p^\beta}$ is supported on elements $f\in V({\mathbb Z}/p^\beta{\mathbb Z})$ with splitting type $\sigma_p(f)=(3)$.
Then for every $X\ge 1$, \begin{equation*} \begin{array}{rcl} \displaystyle\sum_{f\in \overline{{\mathcal W}_q^\pm}} \frac{\phi(f)}{|{\rm Stab}(f)|} \Psi \Bigl(\frac{|\Delta(f)|}{X}\Bigr) &=& \displaystyle\alpha^\pm \cdot {\mathcal A}^{(q)}_n(\phi)X +\gamma^\pm \cdot {\mathcal C}^{(q)}_n(\phi) \cdot \widetilde \Psi(\frac 56) \cdot X^{5/6} \\[.2in]&&\displaystyle + O_{\epsilon}\Bigl( d^{1+\epsilon}q^{1+\epsilon} \cdot \bigl(\frac{n}{d}\bigr)^{4+\epsilon} E_{n/d} \bigl(\widehat{\phi_{n/d}}\bigr) \cdot E_\infty(\widetilde{\Psi};\epsilon) \Bigr). \end{array} \end{equation*} \end{theorem} \begin{proof} The values of the constants in front of the primary and secondary main terms follow from Theorem \ref{countphi}. The term ${\mathcal B}_n(\phi)$ vanishes because there exists a prime $p$ dividing $n$ such that $\phi_{p^\beta}$ is supported on elements in $V({\mathbb Z}/p^\beta{\mathbb Z})$ with splitting type $(3)$, which implies ${\mathcal B}_{p^\beta}(\phi_{p^\beta})=0$ because $\phi_{p^\beta} \cdot b_p$ vanishes on $V({\mathbb Z}/p^\beta{\mathbb Z})$ in view of Table~\ref{tabbc}. It remains to justify the size of the error term. For this, we first use Theorem \ref{thswitch} to write \begin{equation*} \sum_{f\in \overline{{\mathcal W}_q^\pm}} \frac{\phi_n(f)}{|{\rm Stab}(f)|} \Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr) = \phi_d(1^21)\sum_{k\ell\mid\frac{q}{d}}\mu(\ell)\sum_{g\in\overline{V({\mathbb Z})^\pm}} \omega^{(1)}_{d}(g)\omega_{k\ell}(g) \frac{\phi_{n/d}(g)}{|{\rm Stab}(g)|} \Psi\Bigl(\frac{q^4|\Delta(g)|}{Xd^2k^2}\Bigr). \end{equation*} For each $k$ and $\ell$, we apply Theorem \ref{countphi} to the inner sum, obtaining \begin{equation*} \begin{array}{rcl} \displaystyle\sum_{g\in\overline{V({\mathbb Z})^\pm}} \omega^{(1)}_{d}(g)\omega_{k\ell}(g) \frac{\phi_{n/d}(g)}{|{\rm Stab}(g)|} \Psi\Bigl(\frac{q^4|\Delta(g)|}{Xd^2k^2}\Bigr) &=&\displaystyle c^{(1)}_{k,\ell}X+c^{(2)}_{k,\ell}X^{5/6} \\[.2in]&&\displaystyle +O_\epsilon\Bigl( (nk\ell)^{4+\epsilon}\cdot E_{d}(\widehat{\omega^{(1)}_{d}})E_{k\ell}(\widehat{\omega_{k\ell}}) E_{n/d}(\widehat{\phi_{n/d}})E_\infty(\widetilde{\Psi};\epsilon) \Bigr)\\[.2in]&=&\displaystyle c^{(1)}_{k,\ell}X+c^{(2)}_{k,\ell}X^{5/6} \\[.2in]&&\displaystyle +O_\epsilon\Bigl( d^{2+\epsilon}(k\ell)^{1+\epsilon}\cdot \Bigl(\frac{n}{d}\Bigr)^{4+\epsilon} E_{n/d}(\widehat{\phi_{n/d}})E_\infty(\widetilde{\Psi};\epsilon) \Bigr). \end{array} \end{equation*} The second estimate above follows since we have the bounds \begin{equation*} E_{d}(\widehat{\omega^{(1)}_d})\ll\frac{1}{d^{2-\epsilon}},\qquad\qquad E_{k\ell}(\widehat{\omega_{k\ell}})\ll\frac{1}{k^{3-\epsilon}\ell^{3-\epsilon}}, \end{equation*} which follow from Lemmas \ref{lemEbound} and \ref{lemE-lambda}, since $\omega_{k\ell}=\lambda_{k\ell}+1$. Summing over $k\ell$ dividing $q/d$ yields the result. \end{proof} Recall that for a finite collection $\Sigma$ of local specifications, we defined a \emph{family of fields} $\FF_\Sigma$. The finite collection $\Sigma$ can also be used to cut out subsets of binary cubic forms. Namely, for a set $S$ of integral binary cubic forms, let $S(\Sigma)$ denote the subset of elements $f\in S$ such that $R_f\otimes{\mathbb Q}_v\in\Sigma_v$ for each place $v$ associated to $\Sigma$. Here, as usual, $R_f$ denotes the cubic ring associated to $f$.
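To fix ideas, suppose for instance that $\Sigma$ consists of the specifications $\Sigma_\infty=\{{\mathbb R}\oplus{\mathbb R}\oplus{\mathbb R}\}$ and $\Sigma_7=\{{\mathbb Q}_{7^3}\}$, with ${\mathbb Q}_{7^3}$ the unramified cubic extension of ${\mathbb Q}_7$. Then $\FF_\Sigma$ is the family of totally real cubic fields in which $7$ is inert, and $V({\mathbb Z})(\Sigma)$ consists of the integral binary cubic forms $f$ of positive discriminant with $R_f\otimes{\mathbb Q}_7$ equal to that unramified cubic extension; for forms that are maximal at $7$, the latter condition simply says that the reduction of $f$ modulo $7$ has splitting type $(3)$.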
Henceforth, we will always assume that $\Sigma_\infty$ is a singleton set. That is, it is either ${\mathbb R}\oplus{\mathbb R}\oplus{\mathbb R}$, corresponding to cubic fields and forms with positive discriminant, or it is ${\mathbb R}\oplus{\mathbb C}$, corresponding to cubic fields and forms with negative discriminant. Accordingly, the sign $\pm$ in $\alpha^\pm$, $\gamma^\pm$, $V({\mathbb Z})^\pm$, ${\mathcal W}_q^\pm$, and so on, will be $+$ in the former case and $-$ in the latter case. \index{$\pm$, $+$ is for totally real fields and $-$ is for complex fields} Let $\chi_\Sigma$ be the characteristic function of the set of elements $(f_p) \in \prod_{p}V({\mathbb Z}_p)$ such that $R_{f_p}\otimes{\mathbb Q}_{p}\in \Sigma_{p}$ for every prime $p$. \index{$\chi_\Sigma$, characteristic function of forms with specification $\Sigma$} We have that $\chi_\Sigma$ factors through the quotient $ \prod_{p}V({\mathbb Z}_p) \twoheadrightarrow V({\mathbb Z}/r_\Sigma {\mathbb Z})$ to a ${\rm GL}_2({\mathbb Z}/r_\Sigma {\mathbb Z})$-invariant function, which we also denote by the same letter $\chi_\Sigma$. Here $r_\Sigma$ is the positive integer given by the product of $p$ over all primes $p\neq 2,3$ such that $\Sigma_p$ is specified at $p$, and of $16$ (resp.\ $27$) for the prime $2$ (resp.\ $3$). \index{$r_\Sigma$, product of primes $p$ such that $\Sigma_p$ is specified at $p$} \begin{corollary}\label{lemlamer} Let $\Sigma$ be a finite collection of local specifications and assume that $\Sigma_p=\{{\mathbb Q}_{p^3}\}$ for at least one prime $p$. For every positive integer $n$, positive squarefree integer $q$, and $X\ge 1$, we have \begin{equation}\label{eqlamer} \displaystyle\sum_{f\in\overline{{\mathcal W}_q(\Sigma)}} \frac{\lambda_n(f)}{|{\rm Stab}(f)|} \Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr) =\displaystyle \alpha^\pm {\mathcal A}^{(q)}_{[n,r_\Sigma]} (\lambda_n\chi_\Sigma)X + \gamma^\pm {\mathcal C}^{(q)}_{[n,r_\Sigma]} (\lambda_n\chi_\Sigma) \cdot \widetilde \Psi(\frac 56) \cdot X^{5/6} +O_{\epsilon,\Sigma}\bigl((nq)^{1+\epsilon} \cdot E_\infty(\widetilde{\Psi};\epsilon) \bigr). \end{equation} \end{corollary} \begin{proof} The two main terms of the asymptotic follow from Theorem \ref{propWqerror}, and it is only necessary to analyze the size of the error term. We write $n=n_1n_2$, where $n_1$ is squarefree, $n_2$ is powerful, and $(n_1,n_2)=1$. Let $m$ denote the radical of $n_2$. Since $\lambda_n$ is defined modulo $n_1m$, Theorem \ref{propWqerror} yields an error term of \begin{equation*} O_{\epsilon,\Sigma}\Bigl(\frac{(n_1m)^{4+\epsilon}q^{1+\epsilon}}{(n,q)^3}\cdot E_{\frac{n}{(n,q)}}\bigl(\widehat {\lambda_{\frac{n}{(n,q)}}}\bigr) E_\infty(\widetilde \Psi;\epsilon) \Bigr). \end{equation*} For a prime $p$ and an integer $k\geq 2$, it follows from Lemmas \ref{lemEbound} and \ref{lemE-lambda} that we have \begin{equation*} E_p(\widehat{\lambda_p})\ll\frac{1}{p^3};\hspace{.3in} E_{p^k}(\widehat{\lambda_{p^k}})\ll\frac{k}{p^2}. \end{equation*} Therefore, we obtain \begin{equation*} E_{\frac{n}{(n,q)}}\bigl(\widehat{\lambda_\frac{n}{(n,q)}}\bigr)\ll_\epsilon \frac{(n,q)^3n^\epsilon}{n_1^3m^{2}}. \end{equation*} The corollary now follows since $n_1m^2\leq n$. \end{proof} We end with two results. The first is a uniform estimate, proved in \cite{DH}, on the number of elements in $\overline{{\mathcal W}_q}$ with bounded discriminant.
This estimate will be used to bound the tail of the sum in the right-hand side of \eqref{eqinex}. \begin{proposition}[Davenport~\cite{DH}]\label{propunif} For every $\epsilon>0$, $X\ge 1$, and squarefree integer $q$, \begin{equation*} \#\bigl\{f\in\overline{{\mathcal W}^\pm_q}:|\Delta(f)|<X\bigr\} \ll_\epsilon\frac{X}{q^{2-\epsilon}}. \end{equation*} The multiplicative constant is independent of $q$ and $X$ (it depends only on $\epsilon$). \end{proposition} \begin{proof} With the notation we have set up, Davenport's proof can be expressed as follows. We use Proposition~\ref{prop:sqitchclassic} with $\Psi$ the indicator function of the interval $[\frac12,X]$. Then, instead of applying Theorem~\ref{countphi} as above, we apply the more direct upper bound $\omega_{k\ell}(f)\ll q^{\epsilon}$ and bound the sum over $f\in \overline{V({\mathbb Z})^{\pm}}$ from above by $\frac{X k^2}{q^4}$. \end{proof} Second, we add up the functionals of Theorem \ref{propWqerror} over squarefree numbers $q$. Let $\phi:V({\mathbb Z}/n{\mathbb Z})\to{\mathbb C}$ be a function of the form $\phi=\prod_{p^\beta \parallel n} \phi_{p^\beta}$, where $\phi_{p^\beta}:V({\mathbb Z}/p^\beta {\mathbb Z})\to {\mathbb C}$ and $\beta=v_p(n)$. For every prime $p\nmid n$, we define $\phi_{p^\beta}:V({\mathbb Z}_p)\to{\mathbb C}$ to simply be the constant function $1$. We now define the functionals \begin{equation*} \begin{array}{rcl} \displaystyle{\mathcal A}^{\rm max}(\phi)&:=& \displaystyle\prod_{p}\int_{f\in V({\mathbb Z}_p)^{\rm max}}\phi_{p^\beta}(f)df;\\[.1in] \displaystyle{\mathcal C}^{\rm max}(\phi)&:=& \displaystyle\prod_{p}\int_{f\in V({\mathbb Z}_p)^{\rm max}}c_p(f)\phi_{p^\beta}(f)df, \end{array} \end{equation*} where the values of $c_p(f)$ are given in Table \ref{tabbc}. By multilinearity, the domain of definition of the functionals ${\mathcal A}^{\rm max}$ and ${\mathcal C}^{\rm max}$ extends to all functions $\phi:V({\mathbb Z}/n{\mathbb Z})\to {\mathbb C}$. \begin{lemma}\label{lemmtcompmax} For every integer $n$, the following identity between functionals defined on functions from $V({\mathbb Z}/n{\mathbb Z})$ holds: \begin{equation*} \begin{array}{rcl} \displaystyle\sum_{q\ge 1}\mu(q){\mathcal A}^{(q)}_n&=&\displaystyle{\mathcal A}^{\rm max};\\[.1in] \displaystyle\sum_{q\ge 1}\mu(q){\mathcal C}^{(q)}_n&=&\displaystyle{\mathcal C}^{\rm max}. \end{array} \end{equation*} \end{lemma} \index{${\mathcal A}^{\rm max},{\mathcal C}^{\rm max}$, residue functionals with maximality condition} \begin{proof} This follows from the partition \[ V({\mathbb Z}_p) = V({\mathbb Z}_p)^{\rm max} \sqcup V({\mathbb Z}_p)^{\mathrm{nm}} \] for every prime $p$, and the inclusion-exclusion principle. \end{proof} \subsection{Application to smooth counts of cubic fields with prescribed local specifications}\label{s_switch_applications} In this subsection, we use \eqref{eqinex}, Theorem \ref{propWqerror}, Proposition~\ref{propunif}, and Lemma \ref{lemmtcompmax} to sum congruence functions over the space of cubic fields. We denote the set of all cubic fields $K$ with $\pm\Delta(K)>0$ by $\FF^\pm$.
We say that $\theta:\FF^\pm\to{\mathbb C}$ is a {\it simple function defined modulo $n$} if there exists a simple ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant function $\phi: V({\mathbb Z}/n{\mathbb Z})\to {\mathbb C}$ such that for every cubic field $K$, whose ring of integers corresponds to a maximal binary cubic form $f$, we have $\theta(K)=\phi(\bar{f})$, where $\bar{f}$ denotes the reduction of $f$ modulo $n$. For example, $\lambda_K(n)$ is a simple function defined modulo $n$, corresponding to the function $\lambda_n(f)$. \begin{theorem}\label{thfieldscount} Let $\Psi:{\mathbb R}_{> 0}\to {\mathbb C}$ be a smooth function with compact support such that $\int \Psi=1$. Let $\Sigma$ be a finite set of local specifications, such that $\Sigma_p=\{{\mathbb Q}_{p^3}\}$ for at least one prime $p$. For every real $X\ge 1$ and integer $n\ge 1$, \begin{equation*} \sum_{K\in\FF_\Sigma}\lambda_K(n)\Psi\Bigl(\frac{|\Delta(K)|}{X}\Bigr) = \alpha^\pm {\mathcal A}^{\rm max}(\lambda_n \chi_\Sigma) X+ \gamma^\pm {\mathcal C}^{\rm max}(\lambda_n \chi_\Sigma) \widetilde\Psi\bigl(\tfrac56\bigr) X^{5/6}+ O_{\epsilon,\Sigma,\Psi}\bigl(X^{2/3+\epsilon}n^{1/3}\bigr). \end{equation*} \end{theorem} \noindent Before we turn to the proof of Theorem \ref{thfieldscount}, we make the following two observations. First, the quadratic analogue of the above result is the question of summing the Legendre symbol $\bigl(\frac{\cdot}{n}\bigr)$ over the set of squarefree integers. Second, the case $n=1$ of the above result (with the simplifying assumption that $\Sigma_p=\{{\mathbb Q}_{p^3}\}$ for at least one prime $p$) is a smoothed version (instead of a sharp version counting $K\in \FF_\Sigma(X)$ without the $\Psi$-smoothing) of the refined Roberts' conjecture, proved independently in \cite{BST} and \cite{TaTh1}. Those works obtain the error terms $O_\epsilon(X^{5/6-1/48+\epsilon})$ and $O_\epsilon(X^{7/9+\epsilon})$ for the sharp version of the refined Roberts' conjecture. Independently from the present article, the recent work \cite{BTT} obtains an improved error term of $O_\epsilon(X^{2/3+\epsilon})$. This seems to indicate that $X^{2/3+\epsilon}$ is the natural exponent both for our present purposes of smoothly summing the Artin character of cubic fields and for the problem of sharp counting of cubic fields. \begin{proof}[Proof of Theorem \ref{thfieldscount}] We start with the sieve \eqref{eqinex} to write \begin{equation*} \sum_{K\in\FF_\Sigma}\lambda_K(n)\Psi\Bigl(\frac{|\Delta(K)|}{X}\Bigr)= \sum_{q\geq 1}\mu(q)\sum_{f\in\overline{{\mathcal W}_q^\pm}} \frac{\lambda_n(f)}{|{\rm Stab}(f)|} \chi_\Sigma(f)\Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr) + O_\epsilon\bigl(X^{\frac12}n^\epsilon\bigr). \end{equation*} Note that the sum over $K$ is \emph{not} weighted by the size of the automorphism group. On the right-hand side, the difference is accounted for by the number of cyclic cubic fields, which is $O(X^{\frac12})$. Pick a real number $Q$ to be optimized later.
Using Corollary \ref{lemlamer} for $q\leq Q$, Proposition \ref{propunif} for $q>Q$, and Lemma \ref{lemmtcompmax} to evaluate the main terms, we obtain \begin{equation*} \sum_{K\in\FF_\Sigma}\lambda_K(n)\Psi\Bigl(\frac{|\Delta(K)|}{X}\Bigr)= \alpha^\pm{\mathcal A}^{\rm max}(\lambda_n\chi_\Sigma)X+ \gamma^\pm{\mathcal C}^{\rm max}(\lambda_n\chi_\Sigma) \widetilde\Psi\bigl(\frac56\bigr)X^{5/6}+ O_{\epsilon,\Sigma,\Psi}\bigl((nQ^2)^{1+\epsilon}\bigr)+O_{\epsilon,\Psi}\Bigl(\frac{X}{Q^{1-\epsilon}}\Bigr). \end{equation*} Optimizing by balancing the two error terms, we pick $Q=(X/n)^{1/3}$, which yields Theorem \ref{thfieldscount}. \end{proof} Finally, we have a result estimating smoothed sums over cubic fields, where we sum over arbitrary congruence functions defined modulo a squarefree integer. (We could allow more general specifications, but this situation seems to be the most common in applications.) \begin{theorem}\label{thKcountsimple} Let $\Psi:{\mathbb R}_{> 0}\to {\mathbb C}$ be a smooth function with compact support such that $\int \Psi=1$. Let $n$ be a positive squarefree integer, and let $\theta$ be a simple function on the family $\FF^+$ (resp.\ $\FF^-$) of totally real cubic fields (resp.\ complex cubic fields) corresponding to a ${\rm GL}_2({\mathbb Z}/n{\mathbb Z})$-invariant congruence function $\phi:V({\mathbb Z}/n{\mathbb Z})\to{\mathbb C}$ which is simple at $n$ and such that $\phi_p(0)=0$ for every prime $p|n$. Namely, $\theta(K_f)=\phi(\overline f)$ for every $f\in V({\mathbb Z})^{\pm,{\rm irr},{\rm max}}$. Assume that for at least one prime $p|n$, $\theta(K)\neq 0$ forces $K$ to be inert at $p$. Then we have \begin{equation*} \sum_{K\in\FF^\pm}\theta(K)\Psi\Bigl(\frac{|\Delta(K)|}{X}\Bigr) =\alpha^\pm{\mathcal A}^{\rm max}(\phi)X+ \gamma^\pm{\mathcal C}^{\rm max}(\phi)\widetilde\Psi\bigl(\frac56\bigr) X^{5/6}+O_{\epsilon,\Psi}\bigl(X^{2/3+\epsilon}n^{2/3+\epsilon}||\theta||_\infty\bigr). \end{equation*} \end{theorem} \begin{proof} As before, we begin with the inclusion-exclusion sieve. Pick $Q>1$ to be optimized later and write \begin{equation*} \sum_{K\in\FF^\pm}\theta(K)\Psi\Bigl(\frac{|\Delta(K)|}{X}\Bigr)= \sum_{q\leq Q}\mu(q) \sum_{f\in\overline{{\mathcal W}_q^\pm}}\frac{\phi(f)}{|{\rm Stab}(f)|}\Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr) +O_{\epsilon,\Psi}\Bigl(\frac{X}{Q^{1-\epsilon}}\Bigr)+O\bigl(X^{1/2}||\theta||_\infty\bigr). \end{equation*} For $q\leq Q$, we use Theorem \ref{propWqerror} to write \begin{equation*} \sum_{f\in\overline{{\mathcal W}_q^\pm}}\frac{\phi(f)}{|{\rm Stab}(f)|}\Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr)= \alpha^\pm{\mathcal A}^{(q)}_n(\phi)X+\gamma^\pm{\mathcal C}^{(q)}_n(\phi) \widetilde\Psi\bigl(\frac56\bigr)X^{5/6} +O_{\epsilon,\Psi}\Bigl(\frac{n^{4+\epsilon}q^{1+\epsilon}}{(n,q)^3}\cdot E_{\frac{n}{(n,q)}}\bigl(\widehat{\phi_{\frac{n}{(n,q)}}}\bigr)\Bigr). \end{equation*} From the definition of the norm $E_{\frac{n}{(n,q)}}$ and Corollary \ref{corphihatbound}, we obtain the bound \begin{equation*} E_{\frac{n}{(n,q)}}\bigl(\widehat{\phi_{\frac{n}{(n,q)}}}\bigr)\ll_\epsilon \frac{(n,q)^2}{n^{2-\epsilon}} ||\theta||_\infty.
\end{equation*} Using Lemma {\rm r}ef{lemmtcompmax} to evaluate the main term, we therefore obtain {\beta}gin{equation*} {\mathcal S}um_{K\in{\mathbb F}F^\pm}\theta(K){\mathbb P}si\Bigl(\frac{|\Delta(K)|}{X}\Bigr) ={\alpha}pha^\pm{\mathcal A}^{\rm max}(\phi)X+ \gamma^\pm{\mathcal C}^{\rm max}(\phi) \widetilde{\mathbb P}si\bigl(\frac56\bigr)X^{5/6}+O_{\epsilon,{\mathbb P}si}\Bigl(\frac{X}{Q^{1-\epsilon}}\Bigr) +O_{\epsilon,{\mathbb P}si}\bigl(n^{2+\epsilon}Q^{2+\epsilon}||\theta||_\infty\bigr). \end{equation*} Optimizing, we pick $Q=X^{1/3}/n^{2/3}$, which yields the result. \end{proof} {\mathcal S}ection{Low-lying zeros of Dedekind zeta functions of cubic fields}\label{sec:low-lying} We follow the setup of \cite[\S2.4]{SST1} and of the previous Section~{\rm r}ef{sec:switch}. For every function $\mathrm{et}a:{\mathbb F}F_\Sigma\to{\mathbb C}$, we define {\beta}gin{equation*} {\mathcal S}_\Sigma(\mathrm{et}a,X):={\mathcal S}um_{K\in {\mathbb F}F_\Sigma}\mathrm{et}a(K) {\mathbb P}si\Bigl(\frac{|\Delta(K)|}{X}\Bigr) \end{equation*} to be the smoothed average of $\mathrm{et}a(K)$ over fields $K$ in ${\mathbb F}F_\Sigma$ with discriminant close to $X$. Note in particular that ${\mathcal S}_\Sigma(1,X)$ denotes a smooth count of elements in ${\mathbb F}F_\Sigma$. For a cubic field $K$, recall from Proposition~{\rm r}ef{p:Hecke} that the function $L(s,{\rm r}ho_K)$ is known to be entire and that the Artin conductor of $L(s,{\rm r}ho_K)$ is equal to $|\Delta(K)|$. We define the quantity $\mathcal{L}_X$ to be the average value of $\log |\Delta(K)|$ over $K\in{\mathbb F}F_\Sigma(X)$, i.e., we define {\beta}gin{equation*} \mathcal{L}_X:=\frac{{\mathcal S}_\Sigma(\log |\Delta(K)|,X)}{{\mathcal S}_\Sigma(1,X)}. \end{equation*} The Davenport--Heilbronn theorem implies that we have {\beta}gin{equation*} \mathcal{L}_X=\log X+O(1). \end{equation*} We write the \emph{nontrivial zeros} of $L(s,{\rm r}ho_K)$ as $1/2+i\gamma_K^{(j)}$, where the imaginary part of $\gamma_K^{(j)}$ is bounded in absolute value by $1/2$. We pick ${\mathbb P}hi:{\mathbb R}\to{\mathbb C}$ to be a smooth and even function such that the Fourier transform $\widehat{{\mathbb P}hi}:{\mathbb R}\to{\mathbb C}$ has compact support contained in the open interval $(-a,a)$. It is then known that ${\mathbb P}hi$ can be extended to an entire function of exponential type $a$. Define $Z_K(X)$ by {\beta}gin{equation*} Z_K(X):={\mathcal S}um_j {\mathbb P}hi\Bigl(\frac{\gamma^{(j)}_K\mathcal{L}_X}{2\pi}\Bigr). \end{equation*} We work with the following variant of the \emph{$1$-level density} ${\mathcal D}({\mathbb F}F_\Sigma(X),{\mathbb P}hi)$ of the family of Artin $L$-functions $L(s,{\rm r}ho_K)$ (equivalently, of the family of Dedekind zeta functions $\zeta_K(s)$) of $K\in{\mathbb F}F_\Sigma$: {\beta}gin{equation*} {\mathcal D}({\mathbb F}F_\Sigma(X),{\mathbb P}hi):= \frac{{\mathcal S}_\Sigma\bigl(Z_K(X),X\bigr)}{{\mathcal S}_\Sigma(1,X)}. \end{equation*} Recall that $\theta_K(n)$ was defined in \eqref{def:Dirichlet} so that the $n$th Dirichlet coefficient of the logarithmic derivative of $L(s,{\rm r}ho_K)$ is $\theta_K(n)\mathcal{L}ambda(n)$. We use the explicit formula \cite[Proposition 2.1]{SST1} to evaluate $Z_K(X)$: {\beta}gin{equation*}\label{eqexplicit} {\mathcal S}um_j {\mathbb P}hi\big(\gamma^{(j)}_K\big) =\frac1{2\pi}\int_{-\infty}^\infty {\mathbb P}hi(t)\bigl(\log |\Delta(K)|+O(1)\bigr)dt -\frac{1}{\pi}{\mathcal S}um_{n=1}^\infty \frac{\theta_K(n)\mathcal{L}ambda(n)}{n^{1/2}} \widehat{{\mathbb P}hi}\Bigl(\frac{\log n}{2\pi}\Bigr). 
\end{equation*} It yields $Z_K(X)=Z_K^{(1)}(X)+Z_K^{(2)}(X)$, where \begin{equation*} \begin{array}{rcl} \displaystyle Z_K^{(1)}(X)&=& \displaystyle\frac{1}{2\pi}\int_{-\infty}^\infty \Phi\Bigl(\frac{t\mathcal{L}_X}{2\pi} \Bigr)\bigl(\log |\Delta(K)|+O(1)\bigr)dt; \\[.2in] \displaystyle Z_K^{(2)}(X)&=& \displaystyle-\frac{2}{\mathcal{L}_X}\sum_{n=1}^\infty \frac{\theta_K(n)\Lambda(n)}{\sqrt{n}} \widehat{\Phi}\Bigl(\frac{\log n}{\mathcal{L}_X}\Bigr). \end{array} \end{equation*} A computation identical to \cite[(17)]{SST1} gives \begin{equation}\label{eqDZ1} \lim_{X\to\infty} \frac{{\mathcal S}_\Sigma\bigl(Z_K^{(1)}(X),X\bigr)}{{\mathcal S}_\Sigma(1,X)} =\widehat{\Phi}(0). \end{equation} To compute the $1$-level density, we need to compute the asymptotics of ${\mathcal S}_\Sigma(Z_K^{(2)}(X),X)$. We write \begin{equation}\label{eqLLZZ2} \begin{array}{rcl} \displaystyle{\mathcal S}_\Sigma(Z_K^{(2)}(X),X)&=& \displaystyle-\frac{2}{\mathcal{L}_X} {\mathcal S}_\Sigma\Bigl(\sum_{n=1}^\infty\frac{\theta_K(n)\Lambda(n)}{\sqrt{n}} \widehat{\Phi}\Bigl(\frac{\log n}{\mathcal{L}_X}\Bigr),X\Bigr) \\[.2in]&=& \displaystyle-\frac{2}{\mathcal{L}_X} \sum_{p,m}\frac{\log p}{p^{m/2}} \widehat{\Phi}\Bigl(\frac{m\log p}{\mathcal{L}_X}\Bigr) {\mathcal S}_\Sigma\bigl(\theta_K(p^m),X\bigr). \end{array} \end{equation} We now have the following result estimating the ratios ${\mathcal S}_\Sigma\bigl(\theta_K(p^m),X\bigr)/{\mathcal S}_\Sigma(1,X)$. \begin{proposition}\label{propthetacount} Let $p$ be a prime number, let $m\geq 3$ be an integer, and let $X\ge 1$ be a real number. Then we have \begin{equation}\label{eqllzb} \begin{array}{rcl} \displaystyle \frac{{\mathcal S}_\Sigma(\theta_K(p),X)}{{\mathcal S}_\Sigma(1,X)}&\ll_\epsilon& \displaystyle \frac{1}{p}+\frac{1}{p^{1/3}X^{1/6}}+\frac{p^{1/3}}{X^{1/3-\epsilon}}; \\[.2in] \displaystyle \frac{{\mathcal S}_\Sigma(\theta_K(p^2),X)}{{\mathcal S}_\Sigma(1,X)}-1&\ll_\epsilon& \displaystyle \frac{1}{p^2}+\frac{1}{X^{1/6}}+ \frac{p^{2/3}}{X^{1/3-\epsilon}}; \\[.2in] \displaystyle \frac{{\mathcal S}_\Sigma(\theta_K(p^m),X)}{{\mathcal S}_\Sigma(1,X)}&\ll& \displaystyle 1. \end{array} \end{equation} \end{proposition} \begin{proof} From Lemma \ref{l_thetap2} we have that $\theta_K(p)=\lambda_K(p)$. The left-hand side of the first line of \eqref{eqllzb} can be computed from Theorem \ref{thfieldscount}, yielding \begin{equation*} \frac{{\mathcal S}_\Sigma(\theta_K(p),X)}{{\mathcal S}_\Sigma(1,X)}\ll_\epsilon {\mathcal A}^{\rm max}(\lambda_p \chi_\Sigma)+X^{-1/6}{\mathcal C}^{\rm max}(\lambda_p \chi_\Sigma) +X^{-1/3+\epsilon}p^{1/3}. \end{equation*} The required bound then follows from the first part of Proposition \ref{propmaxden}, together with the definitions of ${\mathcal A}^{\rm max}$ and ${\mathcal C}^{\rm max}$. The proof of the second inequality is identical: we merely use Theorem \ref{thKcountsimple} instead of Theorem \ref{thfieldscount} and the third part of Proposition \ref{propmaxden} instead of the first. Finally, Lemma \ref{l_thetap2} states that $|\theta_K(p^m)|\leq 2$, from which the third inequality follows immediately. \end{proof} We are now ready to prove the main result of this section.
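\noindent Before giving the proof, we record a small numerical illustration (it is not used in the argument) of the prime number theorem step that appears at the end of the proof, namely the evaluation of $\frac{2}{\mathcal{L}_X}\sum_{p}\frac{\log p}{p}\widehat{\Phi}\bigl(\frac{2\log p}{\mathcal{L}_X}\bigr)$, whose limit is $\frac12\int_{-1}^{1}\widehat{\Phi}(t)\,dt$. The test function $\widehat{\Phi}(t)=\max(0,1-|t|/a)$ and the numerical parameters below are illustrative choices only and are not fixed anywhere else in the paper.
\begin{verbatim}
# Sketch: (2/L) * sum_p (log p / p) * Phi_hat(2 log p / L)  ->  (1/2) * int Phi_hat,
# for the toy test function Phi_hat(t) = max(0, 1 - |t|/a) supported in (-a, a).
from sympy import primerange
import math

a = 0.4                              # support parameter; the argument below assumes a < 2/5
def phi_hat(t):
    return max(0.0, 1.0 - abs(t) / a)

target = 0.5 * a                     # (1/2) * integral of phi_hat over the real line

for L in (20.0, 40.0, 60.0):         # L plays the role of L_X, roughly log X
    cutoff = math.exp(a * L / 2)     # primes with 2 log p / L >= a contribute 0
    s = sum(math.log(p) / p * phi_hat(2 * math.log(p) / L)
            for p in primerange(2, int(cutoff) + 1))
    print(f"L = {L:5.1f}: prime sum = {2 * s / L:.4f}, limit = {target:.4f}")
\end{verbatim}
The convergence is slow, in accordance with the $O(1/\log X)$ error terms appearing in the proof below.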
{\beta}gin{proof}[Proof of Theorem {\rm r}ef{thmllz}] From \eqref{eqLLZZ2} and Proposition {\rm r}ef{propthetacount}, we obtain {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle-\frac{{\mathcal S}_\Sigma(Z_K^{(2)}(X),X)}{{\mathcal S}_\Sigma(1,X)}&=& \displaystyle\frac{2}{\mathcal{L}_X} {\mathcal S}um_{p}\frac{\log p}{p} \widehat{{\mathbb P}hi}\Bigl(\frac{2\log p}{\mathcal{L}_X}\Bigr) \frac{{\mathcal S}_\Sigma(\theta_K(p^2),X)}{{\mathcal S}_\Sigma(1,X)}+O\biggl(\frac{1}{\log X} {\mathcal S}um_{{\mathcal S}ubstack{p^m\ll X^a\\m\neq 2}}\frac{\log p}{p^{m/2}} \frac{{\mathcal S}_\Sigma(\theta_K(p^m),X)}{{\mathcal S}_\Sigma(1,X)}\biggr) \\[.25in]&=& \displaystyle\frac{2}{\mathcal{L}_X} {\mathcal S}um_{p}\frac{\log p}{p}\widehat{{\mathbb P}hi}\Bigl(\frac{2\log p}{\mathcal{L}_X}\Bigr) +O_\epsilon\Bigl(\frac{1}{\log X}+X^{\frac{a-1}{6}} +X^{\frac{5a-2}{6}+\epsilon}\Bigr) \\[.25in]&&+ \displaystyle O_\epsilon\Bigl(\frac{1}{\log X}+X^{-\frac{1}{6}+\epsilon} +X^{\frac{a-1}{3}+\epsilon}\Bigr) +O\Bigl(\frac{1}{\log X}\Bigr), \end{array} \end{equation*} where the three error terms respectively arise from the three estimates of Proposition {\rm r}ef{propthetacount}. Assuming that $a<\frac{2}{5}$, and using the above computation in conjunction with \eqref{eqDZ1}, gives {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle \lim_{X\to \infty} {\mathcal D}({\mathbb F}F_\Sigma(X),{\mathbb P}hi)&=& \displaystyle\widehat{{\mathbb P}hi}(0)-\lim_{X\to \infty} \displaystyle\frac{2}{\mathcal{L}_X} {\mathcal S}um_{p}\frac{\log p}{p}\widehat{{\mathbb P}hi}\Bigl(\frac{2\log p}{\mathcal{L}_X}\Bigr) \\[.2in]&=& \displaystyle\widehat{{\mathbb P}hi}(0)-\frac12\int_{-1}^{1}\widehat{{\mathbb P}hi}(t)dt, \end{array} \end{equation*} where the final equality follows from the prime number theorem. This concludes the proof of Theorem~{\rm r}ef{thmllz}. \end{proof} {\mathcal S}ection{Main term for the average central values}\label{sec:average} Let $\Sigma=(\Sigma_v)$ be a finite set of local specifications. Without loss of generality we assume that $\Sigma_\infty$ is a singleton set, which is to say that either the cubic fields prescribed by $\Sigma_\infty=\{{\mathbb R}\times {\mathbb R} \times {\mathbb R}\}$ are all totally real, or the cubic fields prescribed by $\Sigma_\infty = \{{\mathbb R}\times {\mathbb C}\}$ are all complex. We also assume (by adding a prime if necessary) that there exists a prime $p$ such that $\Sigma_{p}=\{{\mathbb Q}_{p^3}\}$. Let ${\mathbb F}F_\Sigma$ denote the family of cubic fields $K$ prescribed by the set $\Sigma$ of specifications, namely such that for each place $v$ we have $K \otimes_{\mathbb Q} {\mathbb Q}_v\in\Sigma_v$. We let $V({\mathbb Z})(\Sigma)$ denote the set of elements $f\in V({\mathbb Z})$ such that $\chi_\Sigma(f)=1$ and such that $\Delta(f)>0$ if $\Sigma_\infty = \{{\mathbb R}\times {\mathbb R}\times {\mathbb R}\}$ (resp. $\Delta(f)<0$ if $\Sigma_\infty=\{{\mathbb R}\times {\mathbb C}\}$). For each prime $p$, let ${\mathcal W}_p(\Sigma)$ denote the set of elements in $V({\mathbb Z})(\Sigma)$ that are nonmaximal at $p$. If $q$ is a squarefree positive integer, we set ${\mathcal W}_q(\Sigma)=\cap_{p\mid q}{\mathcal W}_p(\Sigma)$. In particular ${\mathcal W}_1(\Sigma)=V({\mathbb Z})(\Sigma)$. Thanks to the condition $\Sigma_{p}=\{{\mathbb Q}_{p^3}\}$, we have that every form $f\in V({\mathbb Z})(\Sigma)$ is irreducible. 
This implies that the set $\overline{V({\mathbb Z})(\Sigma)^{{\rm max}}}$ of ${{\rm r}m GL}_2({\mathbb Z})$-orbits parametrizes under the Delone--Faddeev correspondence the family ${\mathbb F}F_\Sigma$ of cubic fields prescribed by the finite set $\Sigma$ of specifications. Let ${\mathbb P}si:{\mathbb R}_{>0} \to {\mathbb C}$ be a smooth function of compact support with $\int {\mathbb P}si=1$. Note that since ${{\rm r}m Stab}(f)$ is nontrivial in our families only when $f$ corresponds to a $C_3$-order, there are only $O(X^{1/2})$ such forms with discriminant bounded by $X$, so the contribution from elements with nontrivial stabilizer is negligible. For the rest of this paper, we automatically assume that every sum of binary cubic forms $f$ is weighted by $1/|{{\rm r}m Stab}(f)|$. For a real number $X\ge 1$, the inclusion-exclusion principle in conjunction with Proposition~{\rm r}ef{propAFE} yields: {\beta}gin{equation}\label{eqnvzsieve} A_\Sigma(X):= {\mathcal S}um_{K\in {\mathbb F}F_\Sigma}L(\tfrac 12,{\rm r}ho_K){\mathbb P}si\Bigl(\frac{|\Delta(K)|}{X}\Bigr)= 2{\mathcal S}um_{q\geq 1}\mu(q){\mathcal S}um_{f\in \overline{{\mathcal W}_q(\Sigma)}} S(f) {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr) + O_{\epsilon}\bigl(X^{\frac34 -\delta + \epsilon}\bigr), \end{equation} {\mathrm {ind}}ex{$A_\Sigma(X)$, smoothed first moment of $L(\tfrac12,{\rm r}ho_K)$} where $S(f)$ was defined in~\eqref{defSf} to be {\beta}gin{equation}\label{eqL12f} S(f)={\mathcal S}um_{n=1}^\infty\frac{\lambda_n(f)}{n^{1/2}} V^\pm\Bigl(\frac{n}{{\mathcal S}qrt{|\Delta(f)|}}\Bigr), \end{equation} with $V^\pm$ as in Proposition {\rm r}ef{propAFE} and where the sign is $+$ if $\Sigma_\infty=\{{\mathbb R}\times {\mathbb R}\times {\mathbb R}\}$ and $-$ if $\Sigma_\infty = \{{\mathbb R}\times {\mathbb C}\}$. The identity holds because for a maximal irreducible binary cubic form $f\in V({\mathbb Z})^{{{\rm r}m irr},{\rm max}}$ corresponding to the ring of integers of a cubic field $K_f$, we have $2S(f)=L(\tfrac12,{\rm r}ho_{K_f})$ by Corollary~{\rm r}ef{c_Euler} and Proposition~{\rm r}ef{propAFE}. In this section, we will prove two results. First, we will prove an upper bound on $A_\Sigma(X)$, which improves on the pointwise bound coming from summing the best known upper bounds on $|L(\tfrac12,{\rm r}ho_K)|$ over the associated fields $K$. Second, assuming a sufficiently strong upper bound on $|L(\tfrac12,{\rm r}ho_K)|$, we obtain asymptotics for $A_\Sigma(X)$. {\mathcal S}ubsection{Asymptotics for the terms with $q<Q$} For $Q\in {\mathbb R}_{\ge 1}$ to be chosen later, we split the right-hand side of \eqref{eqnvzsieve} into two parts, {\beta}gin{equation*} {\mathcal S}um_{q<Q} \quad \text{and} \quad {\mathcal S}um_{q\ge Q}. \end{equation*} This section is concerned with the first part: {\beta}gin{equation}\label{eqlongest} \displaystyle 2 {\mathcal S}um_{q<Q}\mu(q) {\mathcal S}um_{f\in \overline{{\mathcal W}_q(\Sigma)}} {\mathcal S}um_{n=1}^\infty \frac{\lambda_n(f)}{n^{1/2}} {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr) V^\pm \Bigl(\frac{n}{{\mathcal S}qrt{|\Delta(f)|}}\Bigr). \end{equation} It will be convenient for us to set some notation surrounding the smooth functions above and their Mellin transforms. For any positive real number $y\in {\mathbb R}_{> 0}$, let ${\mathcal H}_y:{\mathbb R}_{>0}\to {\mathbb C}$ denote the compactly supported function {\beta}gin{equation}\label{eqimpfunc} {\mathcal H}_y(t):= {\mathbb P}si(t) \cdot V^\pm\Bigl(\frac{y}{{\mathcal S}qrt{t}}\Bigr). 
\end{equation} The relevance of ${\mathcal H}_y(t)$ is that we have the equality {\beta}gin{equation*} {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr) V^\pm\Bigl(\frac{n}{{\mathcal S}qrt{|\Delta(f)|}}\Bigr) ={\mathcal H}_{\frac{n}{{\mathcal S}qrt{X}}}\Bigl(\frac{|\Delta(f)|}{X}\Bigr). \end{equation*} {\mathrm {ind}}ex{${\mathcal H}_y$, compactly supported function on ${\mathbb R}_{>0}$} {\beta}gin{lemma}\label{bound-H} $($i$)$ There exists a constant $C>0$ depending only on ${\mathbb P}si$ such that for every $\epsilon \in [-1,1]$ and $y\in {\mathbb R}_{>0}$, {\beta}gin{equation*} E_\infty(\widetilde{{\mathcal H}_y};\epsilon)= \int^\infty_{-\infty} |\widetilde{\mathcal H_y}(-\epsilon + ir)| (1+|r|)^{2+4\epsilon} dr \le C. \end{equation*} $($ii$)$ There exists a constant $C>0$ depending only on ${\mathbb P}si$ such that for every $y\in {\mathbb R}_{>0}$, $|\widetilde{\mathcal H_y}(\frac 56)| \le C$. \end{lemma} {\beta}gin{proof} We have by definition~\eqref{defV}, \[ \widetilde{V^\pm}(s) = G(s) \frac{\gamma^\pm(\tfrac12+s)}{\gamma^\pm(\tfrac12)\cdot s}. \] We deduce that the Mellin transform of $t\mapsto V^\pm(\frac{y}{{\mathcal S}qrt{t}})$ is equal to \[ 2 y^{2s} \widetilde{V^\pm}(-2s) = -y^{2s} G(-2s) \frac{\gamma^\pm(\tfrac12-2s)}{\gamma^\pm(\tfrac12)\cdot s}. \] Since $\mathcal H_y$ is the product of the two functions ${\mathbb P}si$ and $t\mapsto V^\pm(\frac{y}{{\mathcal S}qrt{t}})$, its Mellin transform is the convolution of the Mellin transforms of the respective functions: \[ \widetilde{\mathcal H_y}({\mathcal S}igma + ir) = 2 \int_{{\mathbb R}e(u)=\mathrm{et}a} \widetilde {\mathbb P}si({\mathcal S}igma+ir + u) y^{-2u} \widetilde{V^\pm}(2u) \frac{du}{2\pi i}, \] where $0 <\mathrm{et}a < \tfrac 12$ is fixed. We deduce the following inequality: \[ |\widetilde{\mathcal H_y}({\mathcal S}igma + ir)| \le \frac{y^{-2\mathrm{et}a}}{\pi} \int^\infty_{-\infty} |\widetilde {\mathbb P}si({\mathcal S}igma +ir +\mathrm{et}a + i\tau )| \cdot |\widetilde{V^\pm}(2\mathrm{et}a + 2i\tau )| d\tau. \] We shall use this inequality for $y\in [1,+\infty)$, in which case $y^{-2\mathrm{et}a}\le 1$. We also shift the contour further to ${\mathbb R}e(u)=-\mathrm{et}a$, picking up a simple pole at $u=0$, to obtain the inequality: \[ |\widetilde{\mathcal H_y}({\mathcal S}igma + ir)| \le \frac{y^{2\mathrm{et}a}}{\pi} \int^\infty_{-\infty} |\widetilde {\mathbb P}si({\mathcal S}igma +ir - \mathrm{et}a + i\tau )| \cdot |\widetilde{V^\pm}(-2\mathrm{et}a + 2i\tau )| d\tau + |\widetilde {\mathbb P}si({\mathcal S}igma + ir)|\cdot |G(0)|. \] We shall use this inequality for the other interval $y\in (0,1]$, in which case $y^{2\mathrm{et}a}\le 1$. Assertion (ii) follows immediately by inserting ${\mathcal S}igma=\tfrac 56$ and $r=0$. Assertion (i) follows by inserting ${\mathcal S}igma=-\epsilon$ and integrating over $r$ because $E_\infty(\widetilde{{\mathcal H}_y};\epsilon)$ for $y\ge 1$ is bounded by \[ \frac{1}{\pi} \int^\infty_{-\infty} |\widetilde {\mathbb P}si(-\epsilon+\mathrm{et}a +ir)| (1+|r|)^{2+4\epsilon} dr \cdot \int^\infty_{-\infty} |\widetilde{V^\pm}(2\mathrm{et}a + 2i\tau )| (1+|\tau|)^{2+4\epsilon} d\tau \le C, \] where $C$ depends only on ${\mathbb P}si$. The estimate for $y\le 1$ is similar. \end{proof} We are now ready to prove the main result of this subsection. 
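\noindent Before doing so, we record a quick numerical illustration (not used in any proof) of the uniformity in $y$ asserted in Lemma~\ref{bound-H}~$($ii$)$. The smooth bump below standing in for $\Psi$ (not normalized to have integral one) and the rapidly decaying function standing in for $V^\pm$ are toy choices made only for this sketch.
\begin{verbatim}
# Sketch: the Mellin transform H_y~(5/6) = int_0^infty Psi(t) V(y/sqrt(t)) t^{5/6 - 1} dt
# stays bounded as y ranges over several orders of magnitude (toy Psi and V).
import numpy as np
from scipy.integrate import quad

def Psi(t):                      # toy smooth bump supported in [1, 2]
    return float(np.exp(-1.0 / ((t - 1.0) * (2.0 - t)))) if 1.0 < t < 2.0 else 0.0

def V(x):                        # toy rapidly decaying function standing in for V^pm
    return float(np.exp(-x))

def H_tilde(y, s):               # Mellin transform of H_y(t) = Psi(t) * V(y / sqrt(t))
    val, _ = quad(lambda t: Psi(t) * V(y / np.sqrt(t)) * t ** (s - 1.0), 1.0, 2.0)
    return val

for y in (0.1, 1.0, 10.0, 100.0):
    print(f"y = {y:7.1f}:  H_y~(5/6) = {H_tilde(y, 5.0 / 6.0):.6e}")
\end{verbatim}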
{\beta}gin{proposition}\label{p_main-term} For every $\epsilon>0$ and $Q,X\ge 1$, the sum \eqref{eqlongest} is asymptotic to \[ C_\Sigma \cdot X \cdot \bigl(\log X + \widetilde {\mathbb P}si'(1)\bigr) + C'_{\Sigma} \cdot X + O_{\epsilon,\Sigma,{\mathbb P}si}\Bigl( \frac{X^{1+\epsilon}}{Q} + X^{11/12+\epsilon} + Q^{2+\epsilon}X^{3/4+\epsilon}\Bigr), \] where $C_\Sigma>0$ and $C'_\Sigma\in {\mathbb R}$ depend only on the finite set $\Sigma$ of local specifications. {\mathrm {ind}}ex{$C_\Sigma,C'_\Sigma$, main terms for the first moment} \end{proposition} {\beta}gin{proof} Since $V^\pm$ is a function rapidly decaying at infinity, we may truncate the $n$-sum in the definition of $S(f)$ to $n < X^{1/2+\epsilon}$ with negligible error term. To estimate \eqref{eqlongest} we switch order of summation and consider \[ \displaystyle 2{\mathcal S}um_{n< X^{1/2+\epsilon}} {\mathcal S}um_{q<Q}\mu(q) {\mathcal S}um_{f\in \overline{{\mathcal W}_q(\Sigma)}} \frac{\lambda_n(f)}{n^{1/2}} {\mathcal H}_{\frac{n}{{\mathcal S}qrt{X}}}\Bigl(\frac{|\Delta(f)|}{X}\Bigr). \] We use Corollary {\rm r}ef{lemlamer}, to estimate the inner sum over $f$: {\beta}gin{multline}\label{qlessQ} 2{\mathcal S}um_{n<X^{1/2+\epsilon}}\frac{1}{{\mathcal S}qrt{n}} {\mathcal S}um_{q<Q}\mu(q) \left ({\alpha}pha^\pm {\mathcal A}^{(q)}_{[n,r_\Sigma]}(\lambda_n \chi_\Sigma) \cdot \widetilde{H_{\frac{n}{{\mathcal S}qrt{X}}}} (1) X+ \gamma^\pm {\mathcal C}^{(q)}_{[n,r_\Sigma]}(\lambda_n \chi_\Sigma) \cdot \widetilde{H_{\frac{n}{{\mathcal S}qrt{X}}}} (\frac 56) \cdot X^{5/6} {\rm r}ight ) \\ +\displaystyle O_{\epsilon,\Sigma,{\mathbb P}si}\Bigl({\mathcal S}um_{n< X^{1/2+\epsilon}}\frac{|g(\frac{n}{{\mathcal S}qrt{X}})|}{{\mathcal S}qrt{n}} {\mathcal S}um_{q<Q} (nq)^{1+\epsilon} \cdot E_\infty(\widetilde{{\mathcal H}_{\frac{n}{{\mathcal S}qrt{X}}}},\epsilon) \Bigr), \end{multline} where {\beta}gin{equation}\label{eqgXn} g(y):=\widetilde{{\mathcal H}_y}(1) = \int_0^\infty {\mathcal H}_y(t)dt. \end{equation} {\mathrm {ind}}ex{$g(y)$, equal to $\widetilde{H_y}(1)$} The error term above is seen to be bounded by $O_{\epsilon,\Sigma,{\mathbb P}si}(Q^{2+\epsilon}X^{3/4+\epsilon})$ thanks to Lemma~{\rm r}ef{bound-H}. Next, we bound the secondary term in~\eqref{qlessQ}. Since $r_\Sigma$ is fixed, the contribution to ${\mathcal C}^{(q)}_{[n,r_\Sigma]}(\lambda_n \chi_\Sigma)$ from primes $p\mid r_\Sigma$ is bounded. Therefore, we consider without further mention in the remainder of this paragraph only the primes $p{\mathrm {nm}}id r_\Sigma$. The contribution to ${\mathcal C}^{(q)}_{[n,r_\Sigma]}(\lambda_n \chi_\Sigma)$ from primes $p\mid q$ and $p{\mathrm {nm}}id n$ is given in \cite[Thm.2.2]{TaTh1} and \cite[Cor.8.15]{TT} to be $O(p^{-5/3})$. (Note that our quantity ${\mathcal C}^{(p)}_1(1)$ defined in \S{\rm r}ef{s_sieve_max} corresponds to the quantity denoted ${\mathcal C}_{p^2}({\mathbb P}hi_p,1)$ in~\cite{TT}.) The contribution to ${\mathcal C}^{(q)}_{[n,r_\Sigma]}(\lambda_n \chi_\Sigma)$ from primes $p\mid (q,n)$ is estimated from~\cite[Prop.8.16]{TT} to also be $O(p^{-5/3})$. (If $a=(1^21_*)$, then $\mathcal C_{p^2}(a,1)\asymp p^{1/3}$, and the cardinality of the orbit ${{\rm r}m GL}_2({\mathbb Z}/p^2{\mathbb Z})\cdot a$ inside $V({\mathbb Z}/p^2{\mathbb Z})$ is equal to $p^4(p^2-1)$ by \cite[Lem.5.6]{TT}, which yields $p^{1/3} p^6 / p^8 = p^{-5/3}$, whereas the other nonmaximal types $a=(1^3_*)$, $(1^3_{**})$, $(0)$ have a smaller contribution.) 
The contribution to ${\mathcal C}^{(q)}_{[n,r_\Sigma]}(\lambda_n \chi_\Sigma)$ from primes $p\parallel n$ and $p{\mathrm {nm}}id q$ is computed from~\eqref{def_lambda} and Table {\rm r}ef{tabbc} to be $O(p^{-1/3})$ (see also Lemma~{\rm r}ef{lemAC-lambda}). The contribution to ${\mathcal C}^{(q)}_{[n,r_\Sigma]}(\lambda_n \chi_\Sigma)$ from primes $p^2\mid n$ and $p{\mathrm {nm}}id q$ is bounded by $O(1)$. The contribution to ${\mathcal C}^{(q)}_{[n,r_\Sigma]}(\lambda_n \chi_\Sigma)$ from primes $p{\mathrm {nm}}id nq$ is a convergent infinite product that is uniformly bounded. Therefore, letting $n_1=\prod_{p||n} p$, the secondary term in~\eqref{qlessQ} is {\beta}gin{equation*} \ll_{\epsilon,\Sigma,{\mathbb P}si} X^{5/6} \cdot {\mathcal S}um_{n< X^{1/2+\epsilon}}\frac{1}{{\mathcal S}qrt{n}}{\mathcal S}um_{q< Q}\frac{(n,q)^{1/3}}{q^{5/3-\epsilon}n_1^{1/3-\epsilon}}\ll_{\epsilon,\Sigma,{\mathbb P}si} X^{11/12+\epsilon}, \end{equation*} which is sufficiently small. Finally, from Lemma {\rm r}ef{lemmtcompmax}, we see that the first term in~\eqref{qlessQ} is equal to {\beta}gin{equation*} 2{\alpha}pha^\pm \cdot X \cdot {\mathcal S}um_{n<X^{1/2+\epsilon}}\frac{g(\frac{n}{{\mathcal S}qrt X})}{{\mathcal S}qrt{n}}{\mathcal A}^{\rm max}(\lambda_n \chi_\Sigma) +O_{\epsilon,\Sigma,{\mathbb P}si}\Bigl({\mathcal S}um_{n< X^{1/2+\epsilon}}{\mathcal S}um_{q=Q}^\infty \frac{X}{{\mathrm {rad}}(n)^{3/2}q^{2-\epsilon}}\Bigr). \end{equation*} The result now follows with the values of the constants being {\beta}gin{equation}\label{defCSigma} C_\Sigma:={\alpha}pha^\pm{\rm Res}_{s=\frac12} T_\Sigma(s),\quad C'_\Sigma:= 2{\alpha}pha^\pm C', \end{equation} as follows from Proposition~{\rm r}ef{propconstant} below. \end{proof} {\mathcal S}ubsection{Computing the leading constants}\label{s_Burgess} We compute the constants $C_\Sigma,C'_\Sigma$ arising in Proposition {\rm r}ef{p_main-term}. We begin with the following lemma. {\beta}gin{lemma}\label{lemMelgX} The Mellin transform of the function $g$ in~\eqref{eqgXn} is {\beta}gin{equation*} \widetilde{g}(s)=\widetilde{{\mathbb P}si}(1+s/2)\frac{G(s)}{s} \frac{\gamma^\pm(1/2+s)}{\gamma^\pm(1/2)}, \end{equation*} where $G$ is as in~\eqref{defV}. In particular, $\widetilde{g}(s)$ is meromorphic on the half-plane ${\mathbb R}e(s)>-1/2$ with only a simple pole at $s=0$. \end{lemma} {\beta}gin{proof} Unwinding definitions~\eqref{eqimpfunc} and~\eqref{eqgXn}, we see that {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle\widetilde{g}(s)&=& \displaystyle\int_{0}^\infty {\mathbb P}si(t)\int_{0}^\infty V^\pm\Bigl(\frac{y}{{\mathcal S}qrt{t}}\Bigr)y^s\frac{dy}{y}dt \\[.2in]&=& \displaystyle \int_{0}^\infty t^{s/2+1} {\mathbb P}si(t)\frac{dt}{t}\int_{0}^\infty V^\pm(u)u^s \frac{du}{u} \\[.2in]&=& \displaystyle \widetilde{{\mathbb P}si}(1+s/2)\widetilde{V^\pm}(s). \end{array} \end{equation*} The lemma follows from the expression~\eqref{eqMelV} for $\widetilde{V^\pm}(s)$. \end{proof} {\mathrm {ind}}ex{$T_\Sigma(s)$, Dirichlet series of $t_\Sigma(n)$} Define the Dirichlet series {\beta}gin{equation}\label{defTSigma} T_\Sigma(s):={\mathcal S}um_{n=1}^\infty\frac{t_\Sigma(n) }{n^s},\end{equation} where {\mathrm {ind}}ex{$t_\Sigma(n)$, average of $\lambda_K(n)$ over $K$ in ${\mathbb F}F_\Sigma$} $t_\Sigma(n)={\mathcal A}^{{\rm max}}(\lambda_n\chi_\Sigma)$ is the average of $\lambda_K(n)$ over $K$ in ${\mathbb F}F_\Sigma$ (note that this is actually a finite average, since the value of $\lambda_K(n)$ is determined by the splitting type of $K$ at the primes dividing $n$.) 
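\noindent To illustrate how $\lambda_K$ is determined by splitting types, the following sketch expands the local factors of $\zeta_K(s)/\zeta_{\mathbb Q}(s)$ as power series in $x=p^{-s}$ for each splitting type; assuming the standard factorization $\zeta_K(s)=\zeta_{\mathbb Q}(s)L(s,\rho_K)$, the coefficients produced are the values $\lambda_K(p^k)$. In particular, the sketch recovers the sign pattern for the splitting type $(3)$ that enters the positivity argument in Proposition~\ref{propanalcont} below.
\begin{verbatim}
# lambda_K(p^k) for each splitting type, from (local factor of zeta_K) * (1 - x), x = p^{-s}.
from sympy import symbols, series

x = symbols('x')
local_zeta_K = {
    '(111)' : 1 / (1 - x)**3,
    '(12)'  : 1 / ((1 - x) * (1 - x**2)),
    '(3)'   : 1 / (1 - x**3),
    '(1^21)': 1 / (1 - x)**2,
    '(1^3)' : 1 / (1 - x),
}
for typ, factor in local_zeta_K.items():
    poly = series(factor * (1 - x), x, 0, 7).removeO()
    print(typ, [poly.coeff(x, k) for k in range(7)])
# Type (3) gives [1, -1, 0, 1, -1, 0, 1]: negative exactly when k = 1 (mod 3).
\end{verbatim}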
\begin{proposition}\label{propanalcont} The Dirichlet series $T_\Sigma(s)$ has a meromorphic continuation to the half-plane $\Re(s)>1/3$ with a simple pole at $s=\frac 12$. Moreover, this simple pole has a positive residue. \end{proposition} \begin{proof} For every integer $n\ge 1$, we have \begin{equation*} t_\Sigma(n)=\prod_{p^k\parallel n}\sum_{\sigma} \frac{\lambda_{p^k}(\sigma)}{\# \mathcal O_\sigma}, \end{equation*} where $\mathcal O_\sigma \subset V({\mathbb F}_p)$ is the ${\rm GL}_2({\mathbb F}_p)$-orbit attached to $\sigma$, and $\sigma$ ranges over all splitting types that are compatible with $\Sigma_p$. The quantity $t_\Sigma(n)$ is clearly multiplicative, and so $T_\Sigma(s)$ has an Euler product decomposition \begin{equation*} T_\Sigma(s)=\prod_p\sum^\infty_{k=0}\frac{t_\Sigma(p^k)}{p^{ks}}. \end{equation*} If $p\neq 3$ and there is no specification $\Sigma_p$ at $p$, then Proposition~\ref{propmaxden} asserts that $t_\Sigma(p)=\frac{(p-1)(p^2-1)}{p^4}$ and that $t_\Sigma(p^2)=\frac{(p^2-1)^2}{p^4}$. Therefore, the Dirichlet series $T_\Sigma(s)\zeta(2s)^{-1}$ converges absolutely for $\Re(s)>1/3$. It follows that the residue at $s=\tfrac 12$ is given by the following convergent product \[ {\rm Res}_{s=\frac12} T_\Sigma(s) = \frac{1}{2} \prod_p (1-p^{-1}) \sum^\infty_{k=0} \frac{t_\Sigma(p^k)}{p^{k/2}}. \] We claim that each factor in the product is positive: \[ \sum^\infty_{k=0} \frac{t_\Sigma(p^k)}{p^{k/2}} >0 \quad \text{for every prime $p$.} \] Indeed, $\lambda_{p^m}(f)$ is only negative if $\sigma_p(f)=(3)$ and $m\equiv 1\pmod{3}$, in which case $\lambda_{p^m}(f)=-1$. Therefore, the minimum possible value of $ \sum^\infty_{k=0}\frac{t_\Sigma(p^k)}{p^{k/2}}$ occurs when $\Sigma_p=\{(3)\}$. In this case \begin{equation}\label{eqpostest} \begin{array}{rcl} \displaystyle \sum^\infty_{k=0} \frac{t_\Sigma(p^k)}{p^{k/2}}&=& \displaystyle \sum_{k\equiv 0\pmod 3}\frac{1}{p^{k/2}}- \sum_{k\equiv 1\pmod{3}}\frac{1}{p^{k/2}}, \end{array} \end{equation} which is clearly positive, since the $n$th term of the sum on the left is greater than the $n$th term of the sum on the right; explicitly, with $x=p^{-1/2}$, the right-hand side of \eqref{eqpostest} equals $(1-x)/(1-x^3)>0$. \end{proof} \begin{proposition}\label{propconstant} As $X\to \infty$, we have the asymptotic \[ \displaystyle\sum_{n=1}^\infty \frac{t_\Sigma(n)}{\sqrt{n}} g(\frac{n}{\sqrt X}) =\frac{1}{2}{\rm Res}_{s=\frac12}T_\Sigma(s)\cdot \bigl(\log X + \widetilde \Psi'(1)\bigr) + C' +O_{\epsilon,\Sigma,\Psi}\bigl(X^{-\frac1{12} + \frac \epsilon 2} \bigr), \] where \[ C':= {\frac{d}{ds}}\Bigl|_{s=0} sT_\Sigma(\tfrac 12+s) \frac{\gamma^\pm\bigl(\frac12+s\bigr)}{\gamma^\pm(1/2)}.
\] \end{proposition} \begin{proof} From Lemma \ref{lemMelgX}, we obtain \begin{equation}\label{eqconstantint} \begin{array}{rcl} \displaystyle\sum_{n=1}^\infty \frac{t_\Sigma(n)}{\sqrt{n}} g(\frac{n}{\sqrt X})&=&\displaystyle \frac{1}{2\pi i}\int_{\Re(s)=2} T_\Sigma(\tfrac 12+s)\widetilde{g}(s) X^{s/2}ds\\[.2in] &=& \displaystyle \frac{1}{2\pi i}\int_{\Re(s)=2}s T_\Sigma(\tfrac 12+s)\widetilde{\Psi}(1+s/2) G(s)\frac{\gamma^\pm\bigl(\frac12+s\bigr)}{\gamma^\pm(1/2)}X^{s/2}\frac{ds}{s^2} \\[.2in] &=& \displaystyle \frac{1}{2\pi i}\int_{\Re(s)=2} J(s)X^{s/2}\frac{ds}{s^2}, \end{array} \end{equation} where the above equation serves as a definition of $J(s)$. Since $\widetilde{\Psi}(1)=G(0)=1$, it follows that $J(s)$ is holomorphic in $\Re(s)>-\frac16$, and $J(0)={\rm Res}_{s=\frac12}T_\Sigma(s)$. Expanding in a Taylor series, we write \begin{equation*} J(s)X^{s/2}=J(0)+\Bigl(\frac{J(0)\log X}{2}+J'(0)\Bigr)s+\cdots \end{equation*} Shifting the integral to $\Re(s)=-\tfrac16 +\epsilon$ for some $0<\epsilon<\frac16$, we therefore obtain \begin{equation*} \displaystyle\sum_{n=1}^\infty \frac{t_\Sigma(n)}{\sqrt{n}} g(\frac{n}{\sqrt X}) =\frac{1}{2}{\rm Res}_{s=\frac12}T_\Sigma(s)\cdot \log X+J'(0)+O_{\epsilon,\Sigma,\Psi}\bigl(X^{-\frac {1}{12}+\frac \epsilon 2} \bigr). \end{equation*} Calculating $J'(0)$ using the fact that $G(s)$ is even, we obtain \[ J'(0) =\frac{1}{2}{\rm Res}_{s=\frac 12}T_\Sigma(s)\cdot \widetilde \Psi'(1) + C'. \] This concludes the proof of the proposition. \end{proof} \subsection{Upper bound for the first moment} In this subsection we investigate pointwise bounds for the tail of the sieve when $q\ge Q$. \begin{proposition}\label{lem_Eq} For every $Q,X\ge 1$ and $\epsilon >0$, \[ \sum\limits_{q\ge Q} \sum_{\substack{f\in \overline{{\mathcal W}_q^{\rm irr}}\\ |\Delta(f)| < X}} |S(f)| = O_{\epsilon} \Bigl(\frac{X^{5/4-\delta+\epsilon}}{Q^{3/2-2\delta}}\Bigr), \] for $\delta=\frac{1}{128}$ as in Theorem \ref{p_Wu}. \end{proposition} \begin{proof} Let $f\in V({\mathbb Z})^{\rm irr}$ be an irreducible binary cubic form, and denote the field of fractions of the ring associated to $f$ by $K_f$. Note that for $f\in \mathcal W_q$ with $|\Delta(f)| < X$, we have $|\Delta(K_f)|< X/q^2$, and recall from Proposition \ref{propunif} that \begin{equation*} \#\bigl\{f\in \overline{{\mathcal W}_q}:\ |\Delta(f)|<X\bigr\}\ll_\epsilon\frac{X}{q^{2-\epsilon}}. \end{equation*} Therefore, we deduce from~\eqref{Lf_bound} the estimate \[ \sum\limits_{q\ge Q} \sum_{\substack{f\in \overline{{\mathcal W}_q^{\rm irr}}\\ |\Delta(f)| < X}} |S(f)| \ll_\epsilon \sum_{q\geq Q} (X/q^2)^{\theta+\epsilon}\cdot X/q^{2-\epsilon}, \] where we recall that $\theta=1/4-\delta$. The result follows. \end{proof} Optimizing, we pick $Q=X^{\frac{1-2\delta}{7-4\delta}}$ in \eqref{eqlongest}, which balances the error term $Q^{2+\epsilon}X^{3/4+\epsilon}$ of Proposition~\ref{p_main-term} against the bound of Proposition~\ref{lem_Eq}. We have now established the following by combining Propositions~\ref{p_main-term} and~\ref{lem_Eq}. \begin{theorem}\label{thuncondupbd} For every $X\ge 1$ and $\epsilon >0$, \begin{equation}\label{eqLfavgupbound} A_\Sigma(X) \ll_{\epsilon,\Sigma,\Psi} X^{\frac{29-28\delta}{28-16\delta}+\epsilon}.
\end{equation} Numerically, \[ \frac{29-28\delta}{28-16\delta} = \frac{921}{892} = 1.0325\ldots \] for the best known value of $\delta=\frac{1}{128}$ of Theorem {\rm r}ef{p_Wu}. \end{theorem} The exponent is smaller than $5/4-\delta = \tfrac{159}{128} = 1.2421875$, thus~\eqref{eqLfavgupbound} is an improvement on the exponent arising from summing the pointwise bound on $|L(\tfrac 12,{\rm r}ho_K)|$ over cubic fields $K$ with discriminant bounded by $X$. {\mathcal S}ection{Conditional computation of the first moment of $L(\tfrac 12,{\rm r}ho_K)$}\label{sConditional} In this section, we shall compute the first moment of $L(\tfrac 12,{\rm r}ho_K)$ assuming one of two hypotheses. More precisely, we prove the following result. {\beta}gin{theorem}\label{propSN} Assume {{\rm r}m one} of the following two hypotheses: {\beta}gin{itemize} \item[{{\rm r}m (S)}] {\bf Strong Subconvexity:} For every $K\in{\mathbb F}F_\Sigma$, we have $|L(\frac12,{\rm r}ho_K)|\ll |\Delta(K)|^{\frac{1}{6}-\vartheta}$ for some $\vartheta>0$. \item[{{\rm r}m (N)}] {\bf Nonnegativity:} For every $K\in{\mathbb F}F_\Sigma$, we have $L(\frac12,{\rm r}ho_K)\geq 0$. \end{itemize} Then we have for small enough $\epsilon>0$, {\beta}gin{equation*} {\mathcal S}um_{K\in{\mathbb F}F_\Sigma}L\bigl(\tfrac12,{\rm r}ho_K\bigr) {\mathbb P}si\Bigl (\frac{|\Delta(K)|}{X}\Bigr) = C_\Sigma \cdot X \cdot \bigl(\log X + \widetilde {\mathbb P}si'(1) \bigr)+ C'_\Sigma \cdot X + O_{\epsilon,\Sigma,{\mathbb P}si}(X^{1-\epsilon}), \end{equation*} where $C_\Sigma$ and $C_\Sigma'$ are the constants arising in Proposition {\rm r}ef{propconstant}. \end{theorem} Compared to Section~{\rm r}ef{sec:average}, the proof is significantly more difficult, and will require several new inputs. Indeed, recall that we have {\beta}gin{equation}\label{eqincexcrep} A_\Sigma(X)= 2 {\mathcal S}um_{q\ge 1}\mu(q) {\mathcal S}um_{f\in \overline{{\mathcal W}_q(\Sigma)}} S(f) {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr). \end{equation} Pick a small $\kappa_\downarrow>0$. Proposition {\rm r}ef{p_main-term} provided an estimate for the above sum with $q$ in the range $[1,X^{1/8-\kappa_\downarrow}]$. {\mathrm {ind}}ex{$q\in [X^{1/8-\kappa_\downarrow},X^{1/8+\kappa_\uparrow}]$, border range of the sieve} {\mathrm {ind}}ex{$q\geq X^{1/8+\kappa_\uparrow}$, large range of the sieve} For $q\geq X^{1/8-\kappa_\downarrow}$, our approach is to approximate the smoothed sum of $S(f)$ with a smoothed sum of $D(\tfrac 12,f)$. We do this by breaking up these $q$ into two ranges: the ``large range'' and the ``border range''. Namely, we pick a small $\kappa_\uparrow>0$. Then the range $q\geq X^{1/8+\kappa_\uparrow}$ is the large range while the range $[X^{1/8-\kappa_\downarrow},X^{1/8+\kappa_\uparrow}]$ is the border range. For $q$ in both of these ranges we want to prove {\beta}gin{equation}\label{eqSDCloseIntro9} {\mathcal S}um_{f\in\overline{{\mathcal W}_q(\Sigma)}}S(f){\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr) \approx {\mathcal S}um_{f\in\overline{{\mathcal W}_q(\Sigma)}}D(\tfrac12,f){\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr). \end{equation} On average over $f\in\overline{{\mathcal W}_q(\Sigma)}$, this is an unbalanced approximation of the central value $D(\tfrac12,f)$ by the Dirichlet sum $S(f)$ of the coefficients $\lambda_n(f)$. In \S{\rm r}ef{sLargerange}, we establish \eqref{eqSDCloseIntro9} with $q$ in the large range, which is straightforward. The bulk of the section is devoted to proving \eqref{eqSDCloseIntro9} in the border range. 
This is proved in \S{\rm r}ef{sPreparations} and \S{\rm r}ef{sBorder} using the unbalanced approximate functional equation of Proposition~{\rm r}ef{thm_AFE2}. The crux of the proof is to estimate the average of the coefficients $e_k(f)$ of the unbalanced Euler factors $E_p(s,f)$ over the forms $f\in \overline{{\mathcal W}_q(\Sigma)}$. Finally, in \S{\rm r}ef{sSuborders}, we compute the average of $D(\tfrac 12,f)$ (assuming either nonnegativity or strong subconvexity of $L(\tfrac 12,{\rm r}ho_K)$), thereby obtaining the average of $S(f)$ and finishing the proof of Theorem {\rm r}ef{propSN}. {\mathcal S}ubsection{Estimates for the large range}\label{sLargerange} We begin by estimating $S(f)$ for integral binary cubic forms with large index. {\beta}gin{lemma}\label{lemeasybound} For every integral binary cubic form $f\in V({\mathbb Z})^{{\rm r}m irr}$ and every $\epsilon>0$, we have {\beta}gin{equation*} S(f)=D(\tfrac 12,f)+O_\epsilon\Bigl(\frac{|\Delta(f)|^{1/4+\epsilon}}{{\mathrm {ind}}(f)}\Bigr). \end{equation*} \end{lemma} {\beta}gin{proof} By definition, we have {\beta}gin{equation*} S(f)=\frac{1}{2\pi i}\int_{{\mathbb R}e(s)=2}D(\tfrac12+s,f)\widetilde{V^\pm}(s) |\Delta(f)|^{s/2}ds. \end{equation*} Shifting to the line $s=-1/2+\epsilon$, we pick up the pole of $\widetilde{V^\pm}(s)$ at $0$ (with residue $1$), to obtain {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle S(f)&=&\displaystyle D(\tfrac 12,f)+ \frac{1}{2\pi i}\int_{{\mathbb R}e(s)=-1/2+\epsilon}D(\tfrac12+s,f)\widetilde{V^\pm}(s) |\Delta(f)|^{s/2}ds \\[.2in]&=&\displaystyle D(\tfrac12,f)+O_\epsilon\bigl(|\Delta(f)|^{-1/4+\epsilon}|\Delta(K)|^{1/2+\epsilon}\bigr), \end{array} \end{equation*} where the final estimate follows since $D(s,f)$ is within $|\Delta(f)|^\epsilon$ of $L(s,{\rm r}ho_K)$ for ${\mathbb R}e(s)$ close to $0$. The lemma now follows since $\Delta(f)={\mathrm {ind}}(f)^2\Delta(K)$. \end{proof} Adding up the above estimate for $f\in \overline{{\mathcal W}_q(\Sigma)}$, we immediately obtain the following result. {\beta}gin{proposition}\label{proplargerange} For every square-free $q$, and $X\ge 1$, we have {\beta}gin{equation*} {\mathcal S}um_{f\in \overline{{\mathcal W}_q(\Sigma)}}S(f){\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr) ={\mathcal S}um_{f\in \overline{{\mathcal W}_q(\Sigma)}}D(\tfrac 12,f){\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr) +O_{\epsilon,\Sigma,{\mathbb P}si}\Bigl(\frac{X^{5/4+\epsilon}}{q^3}\Bigr). \end{equation*} \end{proposition} {\beta}gin{proof} The proposition follows from Lemma {\rm r}ef{lemeasybound} and the tail estimate in Proposition {\rm r}ef{propunif}. \end{proof} An immediate consequence of the previous result is the following estimate for $q$ in the large range. {\beta}gin{corollary}\label{corlargerange} For every small $\kappa_\uparrow>0$, square-free $q>X^{1/8+\kappa_\uparrow}$ and $X\ge 1$, we have {\beta}gin{equation*} {\mathcal S}um_{f\in \overline{{\mathcal W}_q(\Sigma)}}\bigl(S(f)-D(\tfrac 12,f)\bigr) {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr)\ll_{\epsilon,\kappa_\uparrow,\Sigma,{\mathbb P}si} \frac{X^{1-2\kappa_\uparrow+\epsilon}}{q}. \end{equation*} \end{corollary} {\mathcal S}ubsection{Preparations and strategy for the border range}\label{sPreparations} In this subsection, we shall introduce spaces, notation, and some preliminary results that will be useful subsequently in handling the border range. One of the key tools in comparing $S(f)$ and $D(\tfrac 12,f)$ is the unbalanced approximate functional equation of Proposition~{\rm r}ef{thm_AFE2}. 
To apply this result, it is not possible to only work with the information that forms $f \in {\mathcal W}_q(\Sigma)$ are nonmaximal at primes dividing $q$. It is necessary to work with the additional information of the index of $f$, including at primes not dividing $q$. To this end, for a positive (not necessarily squarefree) integer $b$, let ${\mathcal U}_{b}(\Sigma)$ denote the set of binary cubic forms $f\in V({\mathbb Z})(\Sigma)$ such that ${\mathrm {ind}}(f)=b$. {\mathrm {ind}}ex{${\mathcal U}_b$, set of cubic forms $f$ with ${\mathrm {ind}}(f)=b$} Note the inclusion ${\mathcal U}_b(\Sigma) {\mathcal S}ubset {\mathcal W}_{{\mathrm {rad}}(b)}(\Sigma)$, and in fact we have {\beta}gin{equation*} {\mathcal W}_q(\Sigma) =\bigsqcup_{m\geq 1} {\mathcal U}_{mq}(\Sigma), \end{equation*} where the union is disjoint and $q$ is square-free. Let $\overline{{\mathcal U}_{b}(\Sigma)}$ denote the set of ${{\rm r}m GL}_2({\mathbb Z})$-orbits on ${\mathcal U}_{b}(\Sigma)$. {\mathrm {ind}}ex{${\mathcal Y}_{b,r}$, subset of cubic forms $f\in {\mathcal W}_r$ with $b\parallel {\mathrm {ind}}(f)$} Let $b$ be a positive integer, and let $r$ be a positive squarefree integer such that $(b,r)=1$. Finally, we define the set ${\mathcal Y}_{b,r}(\Sigma)$ to be the subset of elements in ${\mathcal W}_{r}(\Sigma)$ whose index at primes $p$ dividing $b$ is exactly $p^{v_p(b)}$. As usual we let $\overline{{\mathcal Y}_{b,r}(\Sigma)}$ denote the set of ${{\rm r}m GL}_2({\mathbb Z})$-orbits on ${\mathcal Y}_{b,r}(\Sigma)$. The significance of these subsets ${\mathcal Y}_{b,r}(\Sigma)$ is the following disjoint union \[ {\mathcal Y}_{b,r}(\Sigma) = \bigsqcup_{(b,s)=1} {\mathcal U}_{brs}(\Sigma), \] hence for any function $\phi:\overline{{\mathcal U}_b(\Sigma)}\to{\mathbb C}$, we have {\beta}gin{equation*} {\mathcal S}um_{f\in \overline{{\mathcal U}_b(\Sigma)}} \phi(f){\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr) ={\mathcal S}um_{(b,r)=1}\mu(r){\mathcal S}um_{f\in \overline{{\mathcal Y}_{b,r}(\Sigma)}} \phi(f){\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr). \end{equation*} Recall that the border range is what we are calling $q\in[X^{1/8-\kappa_\downarrow},X^{1/8+\kappa_\uparrow}]$, where $\kappa_\downarrow,\kappa_{\uparrow}$ are positive constants that can eventually be taken to be arbitrarily small. We next estimate the sum of $S(f)-D(\tfrac 12,f)$ over $f$ in $\overline{{\mathcal U}_{mq}(\Sigma)}$, where $m$ is somewhat large. We begin by bounding the number of elements in $\overline{{\mathcal U}_{mq}(\Sigma)}{\mathcal S}ubset \overline{{\mathcal U}^{{\rm r}m irr}_{mq}}$ that have discriminant less than $X$. {\beta}gin{lemma}\label{lemUunif} For every positive integer $m$ and square-free $q$, write $mq=m_1q_1$, where $m_1$ is powerful, $(m_1,q_1)=1$, and $q_1$ is squarefree. Then for every $X\ge 1$, {\beta}gin{equation}\label{eqUunif} |\{f\in \overline{{\mathcal U}^{{\rm r}m irr}_{mq}}:|\Delta(f)|<X\}|\ll_\epsilon \frac{X^{1+\epsilon}}{m_1^{5/3}q_1^2}. \end{equation} The multiplicative constant depends only on $\epsilon$ (it is independent of $m,q,X$). \end{lemma} {\beta}gin{proof} Elements $f$ in the left-hand side of \eqref{eqUunif} are in bijection with rings $R_f$ that have index $mq=m_1q_1$ in the maximal orders ${\mathcal O}_{K_f}$ of their fields of fractions $K_f$. It follows that the discriminants of these fields $K_f$ are less than $X/(m_1^2q_1^2)$. It follows that the total number of such fields that can arise is bounded by $O(X/m_1^2q_1^2)$. 
To estimate the total number of rings $R_f$ that can arise, it suffices to estimate the number of such rings $R_f$ within a single $K_f$. This can be done prime by prime, for each prime dividing the index $m_1q_1$. Let $p$ be a prime dividing $q_1$. Since $q_1$ is squarefree, it follows that the index of $R_f$ at the prime $p$ is $p$. Given the index-$p$ overorder $R$ of $R_f$, it follows from Proposition \ref{subsupring} that the number of index-$p$ suborders of $R$ is bounded by $3$. For primes dividing $m_1$, this procedure is more complicated since there can be many more subrings with prime power index. However, this question is completely answered by work of Shintani \cite{Shintani} and Datskovsky--Wright~\cite{DW} (see \cite[\S1.2]{KNT}), who give an explicit formula for the counting function of suborders $R$ of a fixed cubic field $K$, which we state as Proposition \ref{propSDW}. They show that the number of suborders of index $p^k$, for $k\geq 1$, is the $k$th Dirichlet coefficient of the $p$th Euler factor of \begin{equation*} \frac{\zeta_K(s)}{\zeta_K(2s)}\zeta_{\mathbb Q}(3s)\zeta_{\mathbb Q}(3s-1). \end{equation*} We verify the lemma for a prime $p$ having splitting type $(12)$ in $K$. For such a prime, the $p$th Euler factor of the above Dirichlet series is: \begin{equation*} (1-p^{-s})^{-1}(1-p^{-4s})(1-p^{-3s})^{-1}(1-p^{-3s+1})^{-1}= (1+p^{-s})(1+p^{-2s})\Bigl(\sum_{k=0}^\infty p^{-3ks}\Bigr) \Bigl(\sum_{k=0}^\infty p^{k-3ks}\Bigr). \end{equation*} It is thus clear that the $k$th Dirichlet coefficient is bounded by $O(p^{k/3})$. Therefore, the number of possible suborders of index $p^k$ is bounded by $O(p^{k/3})$. Other splitting types can be handled in an identical fashion, giving the same bound. Putting this together, it follows that the number of suborders of $K$ having index $q_1m_1$ is bounded by $O(q_1^\epsilon m_1^{1/3})$. Multiplying this quantity by $X/(m_1q_1)^2$ yields the result. \end{proof} \begin{lemma}\label{lemborderrange} For $X\ge 1$, square-free $q$, and small enough $\eta>0$, we have \begin{equation*} \sum_{m> X^\eta}\sum_{f\in \overline{{\mathcal U}_{mq}(\Sigma)}} \bigl( S(f) - D(\tfrac 12,f) \bigr) \Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr)= O_{\epsilon,\Sigma,\Psi}\Bigl(\frac{X^{5/4-\eta+\epsilon}}{q^3}\Bigr). \end{equation*} \end{lemma} \begin{proof} From Lemma \ref{lemeasybound}, it follows that for $f\in{\mathcal U}_{mq}(\Sigma)$ with $|\Delta(f)|\asymp X$, we have $S(f)-D(\tfrac 12,f)=O(\frac{X^{1/4+\epsilon}}{mq})$. We write $mq$ as $m_1q_1$, where $q_1$ is squarefree with $(q_1,m_1)=1$, and $m_1$ is powerful. We now have \begin{equation*} \begin{array}{rcl} \displaystyle\sum_{m> X^\eta} \sum_{f\in \overline{{\mathcal U}_{mq}(\Sigma)}} \bigl(S(f)-D(\tfrac 12,f)\bigr) \Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr) \ll_{\epsilon,\Sigma,\Psi} \displaystyle\sum_{m>X^\eta}\frac{X^{1/4+\epsilon}}{mq}\cdot \frac{X^{1+\epsilon}}{m_1^{5/3}q_1^2}, \end{array} \end{equation*} where the final estimate follows from Lemma \ref{lemUunif}. \end{proof} We then have the following corollary. \begin{corollary}\label{corborderrange} Let $X\ge 1$, squarefree $q>X^{1/8-\kappa_\downarrow}$, and $\eta>0$ be such that $\eta-2\kappa_\downarrow>0$.
Then we have {\beta}gin{equation*} {\mathcal S}um_{m> X^\mathrm{et}a}{\mathcal S}um_{f\in \overline{{\mathcal U}_{mq}(\Sigma)}} \bigl( S(f) - D(\tfrac 12,f) \bigr) {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr)= O_{\epsilon,\kappa_\downarrow,\Sigma,{\mathbb P}si}\Bigl(\frac{X^{1+2\kappa_\downarrow-\mathrm{et}a+\epsilon}}{q}\Bigr). \end{equation*} \end{corollary} \noindent Furthermore, $\kappa_\downarrow$ and hence $\mathrm{et}a$ can be taken to be arbitrarily small. Therefore, a consequence of the above lemma is that when $q$ is in the border range, sums over $\overline{{\mathcal U}_{mq}(\Sigma)}$ only have to be considered for $m$ less than arbitrarily small powers of $X$. Let $q\in [X^{1/8-\kappa_\downarrow},X^{1/8+\kappa_\uparrow}]$ be fixed for the rest of this subsection. For a positive integer $m$, we write $mq=m_1q_1$, where $m_1$ is powerful, $(m_1,q_1)=1$, and $q_1$ is squarefree. Note that since $m$ will be taken to be very small ($\ll X^\mathrm{et}a$), $q_1$ will be quite close in size to $q$. We restate Proposition~{\rm r}ef{thm_AFE2} for convenience: for $f \in \overline{{\mathcal U}_{m_1q_1}(\Sigma)}$, we have {\beta}gin{equation}\label{eqinex2} S(f)=D(\tfrac 12,f)- {\mathcal S}um_{k=1}^\infty \frac{e_k(f)k^{1/2}}{q_1{\mathrm {rad}}(m_1)} {\mathcal S}um_{n=1}^\infty \frac{\lambda_n(f)}{n^{1/2}} V^\pm \left( \frac{m_1^2 kn}{{\mathrm {rad}}(m_1)^2 |\Delta(f)|^{1/2}} {\rm r}ight), \end{equation} where $e_k(f)k^{1/2}$ is the $k$th Dirichlet coefficient of the series {\beta}gin{equation*} {\mathcal S}um_{k=1}^\infty \frac{e_k(f)k^{1/2}}{k^s} =q_1^{1-2s}{\mathrm {rad}}(m_1)^{1-2s}\frac{E(\frac12-s,f)}{E(\frac12+s,f)}. \end{equation*} Our next and final goal of this subsection is to perform a switching trick, analogous to Theorem {\rm r}ef{thswitch}, in which our sums over $\overline{{\mathcal U}_{m_1q_1}(\Sigma)}$ are replaced with sums over $\overline{{\mathcal U}_{m_1}(\Sigma)}$. We thus need to understand how the quantity $e_k(f)$ behaves under such a switch. The next lemma does just that: more precisely, if $f$ is nonmaximal and switches to the pair $(g,{\alpha}pha)$ with prime index $p$, then the next lemma determines $e_k(f)$ in terms of $(g,{\alpha}pha)$. As recalled in Proposition~{\rm r}ef{subsupring}, the proof of \cite[Proposition 16]{BST} implies that there is a bijection between the zeros in ${\mathbb P}^1({\mathbb F}_p)$ of the reduction modulo $p$ of $g(x,y)$ and the set of cubic rings that are index-$p$ subrings of $R_g$. Thus, $f$ corresponds uniquely to a pair $(g,{\alpha}pha)$, where ${\alpha}pha\in {\mathbb P}^1({\mathbb F}_p)$ is a root of $g(x,y)$ modulo $p$. Then the following lemma determines $E_p(s,f)$ given this pair $(g,{\alpha}pha)$. {\beta}gin{lemma}\label{gDetermineE} Let $g\in V({\mathbb Z})$ be a binary cubic form that is maximal at $p$. Let ${\alpha}pha\in{\mathbb P}^1({\mathbb F}_p)$ be a root of the reduction of $g$ modulo $p$. Let $f\in V({\mathbb Z})$ be a binary cubic form corresponding to the index-$p$ subring of $R_g$ associated to the pair $(g,{\alpha}pha)$. Then $E_p(s,f)$, and hence $e_k(f)$ for every $k$, is determined by the pair $(g,{\alpha}pha)$. 
More precisely, we have {\beta}gin{itemize} \item[{{\rm r}m (a)}] If ${\mathcal S}igma_p(g)=(111)$, then ${\mathcal S}igma_p(f)=(1^21)$ and $E_p(s,f)=1-p^{-s}$; \item[{{\rm r}m (b)}] If ${\mathcal S}igma_p(g)=(12)$, then ${\mathcal S}igma_p(f)=(1^21)$ and $E_p(s,f)=1+p^{-s}$; \item[{{\rm r}m (c)}] If ${\mathcal S}igma_p(g)=(1^21)$ and ${\alpha}pha$ is the single root, then ${\mathcal S}igma_p(f)=(1^21)$ and $E_p(s,f)=1$; \item[{{\rm r}m (d)}] If ${\mathcal S}igma_p(g)=(1^21)$ and ${\alpha}pha$ is the double root, then ${\mathcal S}igma_p(f)=(1^3)$ and $E_p(s,f)=1-p^{-s}$; \item[{{\rm r}m (e)}] If ${\mathcal S}igma_p(g)=(1^3)$, then ${\mathcal S}igma_p(f)=(1^3)$ and $E_p(s,f)=1$.\qedhere \end{itemize} \end{lemma} {\beta}gin{proof} The procedure to compute $f(x,y)$ given the pair $(g,{\alpha}pha)$ is as follows: use the action of ${{\rm r}m GL}_2({\mathbb Z})$ to move ${\alpha}pha$ to the point $[1:0]\in{\mathbb P}^1({\mathbb F}_p)$. This yields the binary cubic form $ax^3+bx^2y+cxy^2+dy^3$, where $p\mid a$. Moreover, since $g$ is maximal at $p$, we see that $p\mid b$ implies that $p^2{\mathrm {nm}}id a$. Then $f(x,y)$ can be taken to be $(a/p)x^3+bx^2y+pcxy^2+p^2dy^3$. Running this procedure for the different splitting types of $g$ immediately shows that the corresponding $f$ has the splitting type listed in the lemma. For example, if $g$ has splitting type $(111)$ or $(12)$, then we may bring one of the single roots (using a ${{\rm r}m GL}_2({\mathbb Z})$-transformation) to infinity. Then we may write $g(x,y)=pax^3+bx^2y+cxy^2+dy^3$, where $p{\mathrm {nm}}id b$ since $g$ is unramified. Then the procedure gives $f(x,y)=ax^3+bx^2y+pcxy^2+p^2dy^3$. Since $p{\mathrm {nm}}id b$, the splitting type of $f(x,y)$ is $(1^21)$ as claimed. The other cases are similar, and we omit them. Finally, $e_{\ell,{\beta}ta}(f)$ is determined for $p\neq \ell|{\mathrm {ind}}(f)$ and all ${\beta}ta\ge 0$ as follows from Lemma~{\rm r}ef{l_sigmap_f}. \end{proof} The final result of this subsection is to determine what happens to the quantity $e_k(f)\lambda_n(f)$ after the switch. {\beta}gin{lemma}\label{switchek} Let $m_1$ and $q_1$ be positive integers, where $m_1$ is powerful, $(m_1,q_1)=1$, and $q_1$ is squarefree. Let $k$ be a positive integer divisible only by primes dividing $m_1q_1$. Let $n$ be a positive integer and write $n=n_1\ell$ where $(\ell,m_1q_1)=1$ and $n_1$ is divisible only by primes dividing $m_1q_1$. Then we have {\beta}gin{equation*} {\mathcal S}um_{f\in \overline{{\mathcal U}_{m_1q_1}(\Sigma)}}e_k(f)\lambda_n(f){\mathbb P}si(|\Delta(f)|)= {\mathcal S}um_{g\in \overline{{\mathcal U}_{m_1}(\Sigma)}}c_{q_1}(g)d_{m_1}(g) \lambda_\ell(g){\mathbb P}si(q_1^2|\Delta(g)|), \end{equation*} where $c_{q_1}$ and $d_{m_1}$ are congruence functions on $V({\mathbb Z})$ defined modulo $q_1$ and $m_1^3$, respectively. Furthermore, we have $c_{q_1}(g)\ll_\epsilon q_1^\epsilon$ and $d_{m_1}(g)\ll_\epsilon m_1^\epsilon$ uniformly for every $g\in V({\mathbb Z})$. \end{lemma} {\beta}gin{proof} As in Section~{\rm r}ef{sec:switch}, we will write sums over $\overline{{\mathcal U}_{m_1q_1}(\Sigma)}$ in terms of sums over $\overline{{\mathcal U}_{m_1}(\Sigma)}$. In this case, we have the simple bijection {\beta}gin{equation*} {\mathcal U}_{m_1q_1}(\Sigma) \leftrightarrow \bigl\{(g,{\alpha}pha):g\in{\mathcal U}_{m_1}(\Sigma), {\alpha}pha\in{\mathbb Z}/q_1{\mathbb Z},g({\alpha}pha)\equiv 0\pmod{q_1}\bigr\}, \end{equation*} which follows by an argument similar to that of Lemma~{\rm r}ef{lembij}. 
Since the functions $e_k$ and $\lambda_n$ are multiplicative, we may write {\beta}gin{equation*} e_k(f)=e_{k_1}(f)e_{k_2}(f);\quad \lambda_n(f)=\lambda_{n_1}(f)\lambda_{\ell}(f), \end{equation*} where $k_1$ is only divisible by primes dividing $q_1$, and $k_2$ is only divisible by primes dividing $m_1$. Let $f\in \overline{{\mathcal U}_{m_1q_1}(\Sigma)}$ correspond to the pair $(g,{\alpha}pha)$ with $g\in \overline{{\mathcal U}_{m_1}(\Sigma)}$. It follows that $\lambda_{n_1}(f)=0$ if ${\alpha}pha$ corresponds to a double root of $g$ modulo some prime $p\mid (q_1,n_1)$. Otherwise, $\lambda_{n_1}(f)=1$. Clearly, we have $\lambda_n(f)=\lambda_\ell(g)$, and $e_{k_2}(f)=e_{k_2}(g)$. We have seen in Lemma~{\rm r}ef{gDetermineE} that the value $e_{k_1}(f)$ depends only on the splitting type of $g$ modulo all the primes dividing $q_1$, and on whether ${\alpha}pha$ is a single or double root modulo all the primes dividing $q_1$. It is clear that $e_{k_2}(g)$ is defined via congruence conditions modulo $m_1^3$. The first claim of the lemma now follows from the bijection. The bounds in the second claim of the lemma are immediate since $\lambda_{n_1}$, $e_{k_1}$ and $e_{k_2}$, each are bounded by $\ll_\epsilon n_1^\epsilon$, $\ll_\epsilon k_1^\epsilon$ and $\ll_\epsilon k_2^\epsilon$, respectively (see Proposition~{\rm r}ef{ekbound} and the examples just before Proposition {\rm r}ef{Efs} for the claims regarding $e_{k_i}$). \end{proof} {\mathcal S}ubsection{Estimates for the border range}\label{sBorder} In this subsection, we assume that our integers $q$ lie in the border range $[X^{1/8-\kappa_\downarrow},X^{1/8+\kappa_\uparrow}]$ with small enough $\kappa_\downarrow,\kappa_\uparrow>0$. Our goal is to bound {\beta}gin{equation*} {\mathcal S}um_{f\in \overline{{\mathcal W}_{q}(\Sigma)}} \bigl( S(f) - D(\tfrac 12,f) \bigr) {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr), \end{equation*} for $q$ in this range. Recall that we have a disjoint union {\beta}gin{equation*} \overline{{\mathcal W}_q(\Sigma)}=\bigsqcup_{m\geq 1}\overline{{\mathcal U}_{mq}(\Sigma)}, \end{equation*} and that we will be summing $S(f)-D(\tfrac 12,f)$ over $\overline{{\mathcal U}_{mq}(\Sigma)}$ (and then summing over $m$) rather than simply summing over $\overline{{\mathcal W}_q(\Sigma)}$. From Lemma {\rm r}ef{lemborderrange}, it follows that we may restrict the sum to $m\leq X^\mathrm{et}a$, where $\mathrm{et}a$ may be taken to be arbitrarily small. All multiplicative constants are understood to depend on the initial choices of $\kappa_\downarrow,\kappa_\uparrow,\mathrm{et}a>0$. We write $mq=m_1q_1$, where $m_1$ is powerful, $(m_1,q_1)=1$ and $q_1$ is squarefree. Note then that $m_1\leq m^2\leq X^{2\mathrm{et}a}$, and thus $q_1\geq q/m\geq X^{1/8-\mathrm{et}a-\kappa_\downarrow}$. We begin by fixing $k$ and $n$ in \eqref{eqinex2}, and bounding the sum over $f\in \overline{{\mathcal U}_{m_1q_1}(\Sigma)}$. {\beta}gin{proposition}\label{propafeeb} For every small enough $\kappa_1>0$, the following estimate holds. Let $m_1$, $q_1$, $k$, and $n$ be positive integers and $X\ge 1$. Assume that $m_1$ is powerful, $(m_1,q_1)=1$, and $q_1$ is squarefree. Write $n=n_1\ell_1$ where $(\ell_1,m_1q_1)=1$ and $n_1$ is divisible only by primes dividing $m_1q_1$. Denote the radical of $\ell_1$ by $\ell$. 
Then {\beta}gin{equation*} {\mathcal S}um_{f\in \overline{{\mathcal U}_{m_1q_1}(\Sigma)}}e_k(f) \lambda_n(f) V^\pm\Bigl(\frac{nkm_1^2}{{\mathrm {rad}}(m_1)^2|\Delta(f)|^{1/2}} \Bigr) {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr)\ll_{\epsilon,\Sigma,{\mathbb P}si} X^\epsilon\cdot H(n,m_1,q_1;X), \end{equation*} where {\beta}gin{equation*} H(n,m_1,q_1;X)=\frac{X}{q_1^2m_1^{5/3}\ell} + \frac{X^{5/6+\kappa_1/3}}{q_1^{5/3}\ell^{1/3}}+ \ell q_1^2 m_1^{12} X^{9\kappa_1} + \frac{X^{1-\kappa_1}}{q_1^2 m_1^{5/3}}. \end{equation*} \end{proposition} {\beta}gin{proof}Applying the preceding Lemma~{\rm r}ef{switchek}, we obtain {\beta}gin{equation*} {\mathcal S}um_{f\in \overline{{\mathcal U}_{m_1q_1}(\Sigma)}}e_k(f) \lambda_n(f)V\Bigl(\frac{nkm_1^2}{{\mathrm {rad}}(m_1)^2{\mathcal S}qrt{|\Delta(f)|}} \Bigr){\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr)= {\mathcal S}um_{f\in\overline{{\mathcal U}_{m_1}(\Sigma)}}c_{q_1}(f)d_{m_1}(f)\lambda_{\ell_1}(f) {\mathbb P}si_1\Bigl(\frac{q_1^2|\Delta(f)|}{X}\Bigr), \end{equation*} where $c_{q_1}$ is defined modulo $q_1$, $d_{m_1}$ is defined modulo $m_1^3$, and ${\mathbb P}si_1={\mathcal H}_{\frac{nkm_1^2}{{\mathcal S}qrt{X}{\mathrm {rad}}(m_1)^2}}$. Recall that in Corollary~{\rm r}ef{bound-H}, we bound $E_\infty(\widetilde{{\mathbb P}si_1};-\epsilon)$ by an absolute constant. For brevity in this proof, we will write $\ll$ as a shorthand for $\ll_{\epsilon,\Sigma,{\mathbb P}si}$. We perform an inclusion-exclusion principle to write the sum over $\overline{{\mathcal U}_{m_1}(\Sigma)}$ in terms of sums over $\overline{{\mathcal Y}_{m_1, r}(\Sigma)}$. This yields {\beta}gin{equation*} {\mathcal S}um_{f\in\overline{{\mathcal U}_{m_1}(\Sigma)}}c_{q_1}(f)d_{m_1}(f)\lambda_{\ell_1}(f) {\mathbb P}si_1\Bigl(\frac{q_1^2|\Delta(f)|}{X}\Bigr)= {\mathcal S}um_{(m_1,r)=1}\mu( r ){\mathcal S}um_{f\in\overline{{\mathcal Y}_{m_1, r }(\Sigma)}} c_{q_1}(f)d_{m_1}(f)\lambda_{\ell_1}(f) {\mathbb P}si_1\Bigl(\frac{q_1^2|\Delta(f)|}{X}\Bigr). \end{equation*} We split up the above sum into two sums, corresponding to the ranges $ r < B$ and $ r \geq B$, for some $B>1$. We estimate each summand in the range $ r <B$ using Theorem {\rm r}ef{countphi}, and each summand in the range $ r \geq B$ using Lemma {\rm r}ef{lemUunif}, to respectively obtain {\beta}gin{equation*} {\beta}gin{array}{rcl} \displaystyle{\mathcal S}um_{f\in\overline{{\mathcal Y}_{m_1, r }(\Sigma)}} c_{q_1}(f)d_{m_1}(f)\lambda_{\ell_1}(f) {\mathbb P}si_1\Bigl(\frac{q_1^2|\Delta(f)|}{X}\Bigr) &\ll& \displaystyle\frac{X^{1+\epsilon}(\ell, r )}{q_1^{2}m_1^{5/3} r ^2\ell}+ \frac{X^{5/6+\epsilon}(\ell,r)}{q_1^{5/3}r^{5/3}\ell^{1/3}}+\ell q_1^2 m_1^{12} r ^{8}X^\epsilon \\[.2in]&\ll& \displaystyle\frac{X^{1+\epsilon}}{q_1^{2}m_1^{5/3} r \ell}+ \frac{X^{5/6+\epsilon}}{q_1^{5/3}r^{2/3}\ell^{1/3}}+\ell q_1^2 m_1^{12} r ^{8}X^\epsilon; \\[.2in] \displaystyle{\mathcal S}um_{f\in\overline{{\mathcal Y}_{m_1, r }(\Sigma)}} c_{q_1}(f)d_{m_1}(f)\lambda_{\ell_1}(f) {\mathbb P}si_1\Bigl(\frac{q_1^2|\Delta(f)|}{X}\Bigr) &\ll& \displaystyle\frac{X^{1+\epsilon}}{q_1^2m_1^{5/3} r ^2}. \end{array} \end{equation*} The second bound is simply an application of the tail estimate of Lemma~{\rm r}ef{lemUunif}. The first bound is more complicated, and we explain how it is derived. 
Summing over $\overline{{\mathcal Y}_{m_1, r }(\Sigma)}$ can be replaced by summing a function $\phi\chi_\Sigma$ over $\overline{V({\mathbb Z})}$, where $\phi$ is defined modulo $m_1^2r^2$ and $\chi_\Sigma$ is the indicator function defined in \S\ref{s_sieve_max} before Corollary~\ref{lemlamer}. In the above equation, we are therefore summing a function defined modulo $r^2m_1^3q_1\ell r_\Sigma$ (here, we also use Lemma \ref{switchek}). Moreover $q_1$ is squarefree, and the function defined modulo $\ell$ is $\lambda_{\ell_1}$. Therefore, the error term arising from applying Theorem \ref{countphi} is bounded by $\ll \ell q_1^2m_1^{12}r^8X^\epsilon$. We now estimate the first and second main terms. The density of the first main term follows from the uniformity estimates and the bound ${\mathcal A}_{\ell_1}(\lambda_{\ell_1}) \ll\frac{1}{\ell}$ from Lemma~\ref{lemAC-lambda}. The second main term computation follows similarly using the bound ${\mathcal C}_{\ell_1}(\lambda_{\ell_1})\ll \frac{1}{\ell^{1/3}}$ from Lemma~\ref{lemAC-lambda}. Adding the above bounds over the appropriate ranges of $ r $ yields \[ \sum_{f\in \overline{{\mathcal U}_{m_1q_1}(\Sigma)}}e_k(f) \lambda_n(f)V\Bigl(\frac{nkm_1^2}{{\mathrm {rad}}(m_1)^2\sqrt{|\Delta(f)|}} \Bigr)\Psi\Bigl(\frac{|\Delta(f)|}{X}\Bigr) \ll \frac{X^{1+\epsilon} \log B}{q_1^2m_1^{5/3}\ell} + \frac{X^{5/6+\epsilon} B^{1/3}}{q_1^{5/3}\ell^{1/3}}+ \ell q_1^2 m_1^{12} B^9X^{\epsilon} + \frac{X^{1+\epsilon}}{q_1^2 m_1^{5/3} B}. \] Choosing $B=X^{\kappa_1}$ concludes the proof of the proposition. \end{proof} Let notation be as in the beginning of this section. We have \begin{equation*} \begin{array}{rcl} &&\displaystyle\sum_{f\in\overline{{\mathcal W}_q(\Sigma)}}\bigl(D(\tfrac 12,f)-S(f)\bigr) \Psi\Bigl(\frac{\Delta(f)}{X}\Bigr)\\[.2in]&=& \displaystyle\sum_{f\in\overline{{\mathcal W}_q(\Sigma)}}\Psi\Bigl(\frac{\Delta(f)}{X}\Bigr) \sum_{k=1}^\infty\frac{e_k(f)k^{1/2}}{{\mathrm {rad}}({\mathrm {ind}}(f))}\sum_{n=1}^\infty \frac{\lambda_n(f)}{n^{1/2}}V^{{\rm sgn}(\Delta(f))}\Bigl( \frac{{\mathrm {ind}}(f)^2kn}{{\mathrm {rad}}({\mathrm {ind}}(f))^2|\Delta(f)|^{1/2}}\Bigr) \\[.2in]&=& \displaystyle\sum_{m=1}^{\infty}\sum_{f\in\overline{{\mathcal U}_{mq}(\Sigma)}} \Psi\Bigl(\frac{\Delta(f)}{X}\Bigr) \sideset{}{^\flat}\sum_{k\geq 1} \frac{e_k(f)k^{1/2}}{q_1{\mathrm {rad}}(m_1)}\sum_{n=1}^\infty \frac{\lambda_n(f)}{n^{1/2}}V^{{\rm sgn}(\Delta(f))}\Bigl( \frac{m_1^2kn}{{\mathrm {rad}}(m_1)^2|\Delta(f)|^{1/2}}\Bigr) \\[.2in]&=& \displaystyle\sum_{m=1}^{X^\eta}\sum_{f\in\overline{{\mathcal U}_{mq}(\Sigma)}} \Psi\Bigl(\frac{\Delta(f)}{X}\Bigr) \sideset{}{^\flat}\sum_{k\geq 1}\frac{e_k(f)k^{1/2}}{q_1{\mathrm {rad}}(m_1)} \sum_{n\le \frac{X^{1/2+\epsilon}}{k}} \frac{\lambda_n(f)}{n^{1/2}}V^{{\rm sgn}(\Delta(f))}\Bigl( \frac{m_1^2kn}{{\mathrm {rad}}(m_1)^2|\Delta(f)|^{1/2}}\Bigr) \\[.2in] &&\displaystyle+O_{\epsilon,\kappa_\downarrow,\Sigma,\Psi}\Bigl(\frac{X^{1-\eta+2\kappa_\downarrow+\epsilon}}{q}\Bigr), \end{array} \end{equation*} where the final estimate follows from Corollary \ref{corborderrange} and from the rapid decay of $V^{\pm}$, which is used to truncate the $n$-sum, and where the $\flat$ above indicates that the sum over $k$ is supported on multiples of $q_1$ and ranges only over integers whose prime factors
are all divisors of $mq$. Next, we truncate the sum over $k$ as follows. {\beta}gin{lemma} For every small enough $\kappa_2>0$, $X\ge 1$, and $q_1,m_1$ as above (i.e., satisfying $m_1\leq X^{2\mathrm{et}a}$ and $q_1 \ge X^{1/8-\mathrm{et}a-\kappa_{\downarrow}}$), we have {\beta}gin{equation}\label{eqkupbd} \displaystyle{\mathcal S}um_{m=1}^{X^\mathrm{et}a}{\mathcal S}um_{{\mathcal S}ubstack{f\in\overline{{\mathcal U}^{{\rm r}m irr}_{mq}}\\ |\Delta(f)|<X}} {\mathcal S}ideset{}{^\flat}{\mathcal S}um_{k > q_1^2X^{2\mathrm{et}a+3\kappa_2}}\frac{|e_k(f)|k^{1/2}}{q_1{\mathrm {rad}}(m_1)} {\mathcal S}um_{n\le \frac{X^{1/2+\epsilon}}{k}} \frac{|\lambda_n(f)|}{n^{1/2}} \ll_{\epsilon,\kappa_2} \displaystyle\frac{X^{1-\kappa_2+4\mathrm{et}a+2\kappa_\downarrow+\epsilon}}{q}. \end{equation} \end{lemma} {\beta}gin{proof} The integers $k$ that arise range over products of powers of primes dividing $mq$. Write $k=k_1k_2$ where $(k_1,k_2)=1$, $v_p(k_1)\le 2$ and $v_p(k_2)\ge 3$ for every prime $p|k$. We have $k_1 \le q_1^2 {\mathrm {rad}}(m_1)^2$ since $q_1$ is square-free. Hence $k>q_1^2 X^{2\mathrm{et}a+3\kappa_2}$ implies $k_2 > X^{3\kappa_2}$. It follows from Proposition~{\rm r}ef{ekbound} that \[ e_{k}(f) \ll_\epsilon \frac{{\mathrm {rad}}(k_2)^2}{k_2} X^\epsilon \le k_2^{-1/3} X^{\epsilon} < X^{-\kappa_2 + \epsilon}. \] Hence the sum over $k> q_1^2 X^{2\mathrm{et}a+3\kappa_2}$ is bounded by \[ \frac{X^{-\kappa_2+\epsilon}}{q_1{\mathrm {rad}}(m_1)} X^{1/4+\epsilon} = \frac{X^{1/4-\kappa_2+2\epsilon}}{q_1{\mathrm {rad}}(m_1)}. \] We already know from Lemma~{\rm r}ef{lemUunif} that {\beta}gin{equation*} {\mathcal S}um_{{\mathcal S}ubstack{f\in\overline{{\mathcal U}^{{\rm r}m irr}_{mq}}\\ |\Delta(f)|<X}} 1 \ll_\epsilon \frac{X^{1+\epsilon}}{m_1^{5/3}q_1^2}. \end{equation*} Therefore, the left-hand side of \eqref{eqkupbd} is bounded by {\beta}gin{equation*} \ll_{\epsilon,\kappa_2} X^{5/4 -\kappa_2 + 3\epsilon} \cdot {\mathcal S}um^{X^\mathrm{et}a}_{m=1} \frac{1}{m_1^{5/3}{\mathrm {rad}}(m_1) q_1^3} \ll_{\epsilon,\kappa_2} \frac{X^{5/4-\kappa_2+\mathrm{et}a+3\epsilon}}{q_1^3}\le \frac{X^{1-\kappa_2+3\mathrm{et}a+2\kappa_\downarrow+3\epsilon}}{q_1}, \end{equation*} which is sufficient because $q_1\ge q/m$ and $m\le X^\mathrm{et}a$. \end{proof} We input Proposition {\rm r}ef{propafeeb}, which bounds the sum over $f$, and obtain with Corollary~{\rm r}ef{corborderrange} and \eqref{eqkupbd}: {\beta}gin{equation}\label{eqafterfsum} {\beta}gin{array}{rcl} \displaystyle{\mathcal S}um_{f\in\overline{{\mathcal W}_q(\Sigma)}}\bigl(D(\tfrac 12,f)-S(f)\bigr) {\mathbb P}si\Bigl(\frac{\Delta(f)}{X}\Bigr)&\ll_{\epsilon,\kappa_2,\Sigma,{\mathbb P}si}& \displaystyle{\mathcal S}um_{m_1=1}^{X^{2\mathrm{et}a}} {\mathcal S}ideset{}{^\flat}{\mathcal S}um_{k\leq q_1^2X^{2\mathrm{et}a+3\kappa_2}} {\mathcal S}um_{n\le \frac{X^{1/2+\epsilon}}{k}} \frac{k^{1/2}}{n^{1/2}q_1{\mathrm {rad}}(m_1)} X^{\epsilon} H(n,m_1,q_1;X) \\[.2in] &&\displaystyle +\frac{X^{1-\mathrm{et}a+2\kappa_\downarrow+\epsilon}}{q} +\frac{X^{1-\kappa_2+4\mathrm{et}a+2\kappa_\downarrow+\epsilon}}{q}. 
\end{array} \end{equation} In our next result, we estimate the top line of \eqref{eqafterfsum}: {\beta}gin{proposition}\label{propimpest} For every square-free $q\in [X^{1/8-\kappa_\downarrow},X^{1/8+\kappa_\uparrow}]$ and $X\ge 1$, we have {\beta}gin{equation*} {\mathcal S}um_{m_1=1}^{X^{2\mathrm{et}a}} {\mathcal S}ideset{}{^\flat}{\mathcal S}um_{k\leq q_1^2X^{2\mathrm{et}a+3\kappa_2}} {\mathcal S}um_{n\le \frac{X^{1/2+\epsilon}}{k}} \frac{k^{1/2}}{n^{1/2}q_1{\mathrm {rad}}(m_1)} H(n,m_1,q_1;X)\ll_{\epsilon,\kappa_1,\kappa_2} H(q;X), \end{equation*} where $H(q;X)$ is the sum of the final terms in Equations \eqref{eqest1}, \eqref{eqest2}, \eqref{eqest3}, and \eqref{eqest4}. \end{proposition} {\beta}gin{proof} We take each term in $H(n,m_1,q_1;X)$ by turn, and sum it over $n$, then $k$, and then $m_1$. In this proof we shall write $\ll$ as a shorthand for $\ll_{\epsilon,\kappa_1,\kappa_2}$. Recall that we had written $n=n_1\ell_1$, where $n_1$ is only divisible by primes dividing $mq$, while $(\ell_1, mq)=1$, and that we denoted the radical of $\ell_1$ by $\ell$. We begin with the first term: in this case, the sum over $\ell$ (and also the sum over $m_1$) converges and so we write {\beta}gin{equation}\label{eqest1} \frac{X}{q_1^2}{\mathcal S}um_{m_1=1}^{X^{2\mathrm{et}a}} {\mathcal S}ideset{}{^\flat}{\mathcal S}um_{k\leq q_1^2X^{2\mathrm{et}a+3\kappa_2}} {\mathcal S}um_{n_1\le \frac{X^{1/2+\epsilon}}{k}} {\mathcal S}um_{\ell\le \frac{X^{1/2+\epsilon}}{kn_1}} \frac{k^{1/2}}{n_1^{1/2}\ell^{1/2}q_1{\mathrm {rad}}(m_1)} \frac{1}{m_1^{5/3}\ell}\ll \frac{X^{7/8+3\mathrm{et}a+(3/2)\kappa_2+\kappa_\downarrow+\epsilon}}{q}. \end{equation} For the second term, we write {\beta}gin{equation}\label{eqest2} \frac{X^{5/6+\kappa_1/3}}{q_1^{5/3}} {\mathcal S}um_{m_1=1}^{X^{2\mathrm{et}a}} {\mathcal S}ideset{}{^\flat}{\mathcal S}um_{k\leq q_1^2X^{2\mathrm{et}a+3\kappa_2}} {\mathcal S}um_{n_1\le \frac{X^{1/2+\epsilon}}{k}} {\mathcal S}um_{\ell\le \frac{X^{1/2+\epsilon}}{kn_1}} \frac{k^{1/2}}{n_1^{1/2}\ell^{1/2}q_1{\mathrm {rad}}(m_1)} \frac{1}{\ell^{1/3}}\ll \frac{X^{11/12+(8/3)\mathrm{et}a+\kappa_1/3+\kappa_2+\epsilon}}{q^2}. \end{equation} To estimate the third term, we write {\beta}gin{equation}\label{eqest3} {\beta}gin{array}{rcl} \displaystyle q_1^2X^{9\kappa_1} {\mathcal S}um_{m_1=1}^{X^{2\mathrm{et}a}} {\mathcal S}ideset{}{^\flat}{\mathcal S}um_{k\leq q_1^2X^{2\mathrm{et}a+3\kappa_2}} {\mathcal S}um_{n\le \frac{X^{1/2+\epsilon}}{k}} \frac{k^{1/2}}{n^{1/2}q_1{\mathrm {rad}}(m_1)} \ell m_1^{12}&\ll& \displaystyle q_1X^{3/4+9\kappa_1+26\mathrm{et}a+\epsilon} {\mathcal S}ideset{}{^\flat}{\mathcal S}um_{k\leq q_1^2X^{2\mathrm{et}a+3\kappa_2}}\frac{1}{k} \\[.2in]&\ll&\displaystyle \frac{X^{7/8+9\kappa_1+26\mathrm{et}a+\kappa_\uparrow+\epsilon}}{q}, \end{array} \end{equation} where the final estimate follows because non-zero values of $k$ are all multiples of the squarefree $q_1$: see Proposition {\rm r}ef{propklarge}. Finally, we have {\beta}gin{equation}\label{eqest4} \frac{X^{1-\kappa_1}}{q_1^2} {\mathcal S}um_{m_1=1}^{X^{2\mathrm{et}a}} {\mathcal S}ideset{}{^\flat}{\mathcal S}um_{k\leq q_1^2X^{2\mathrm{et}a+3\kappa_2}} {\mathcal S}um_{n\le \frac{X^{1/2+\epsilon}}{k}} \frac{k^{1/2}}{n^{1/2}q_1{\mathrm {rad}}(m_1)} \frac{1}{m_1^{5/3}}\ll \frac{X^{1-\kappa_1+3\mathrm{et}a+2\kappa_\downarrow+\epsilon}}{q}. \end{equation} This concludes the proof of Proposition {\rm r}ef{propimpest}. \end{proof} We are now ready to prove the main result of this subsection. 
{\beta}gin{proposition}\label{propmediumrange} There exist positive constants $\kappa_\uparrow,\kappa_\downarrow,\kappa_3$ such that the following holds. For every $X \ge 1$ and every squarefree $q\in [X^{1/8-\kappa_\downarrow},X^{1/8+\kappa_\uparrow}]$, we have {\beta}gin{equation*} {\mathcal S}um_{f\in \overline{{\mathcal W}_q(\Sigma)}} \Bigl(S(f)-D(\tfrac 12,f)\Bigr) {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr)= O_{\Sigma,{\mathbb P}si}\Bigl(\frac{X^{1-\kappa_3}}{q}\Bigr). \end{equation*} \end{proposition} {\beta}gin{proof} We apply \eqref{eqafterfsum} and then apply Proposition {\rm r}ef{propimpest}. It is only necessary to ensure that the exponent of $X$ is less than $1$ for each of the 6 different error terms. This is easily done. First, we temporarily pick any positive $\kappa_\uparrow$ and $\kappa_\downarrow$. Next we pick $\mathrm{et}a>2\kappa_\downarrow$. Then we pick $\kappa_1>3\mathrm{et}a+2\kappa_\downarrow$ and $\kappa_2>4\mathrm{et}a+2\kappa_\downarrow$. This takes care of~\eqref{eqest4} and of the last two terms of \eqref{eqafterfsum}. Finally, to ensure that the exponents of $X$ in the final terms of \eqref{eqest1}, \eqref{eqest2}, and \eqref{eqest3} are less than $1$, we simply divide our constants $\kappa_\uparrow,\kappa_\downarrow,\mathrm{et}a,\kappa_1,\kappa_2$ by the same sufficiently large number. \end{proof} We now put together our results for the border range and the large range. {\beta}gin{theorem}\label{thmStoD} There exists an absolute constant $\varkappa>0$ such that for every $X\ge 1$ and every squarefree $q\ge X^{1/8-\varkappa}$, we have {\beta}gin{equation*} {\mathcal S}um_{f\in\overline{{\mathcal W}_q(\Sigma)}} \Bigl( S(f) - D(\tfrac 12,f) \Bigr) {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr) = O_{\Sigma,{\mathbb P}si} \Bigl(\frac{X^{1-\varkappa}}{q}\Bigr). \end{equation*} \end{theorem} {\beta}gin{proof} We combine Corollary {\rm r}ef{corlargerange} and Proposition {\rm r}ef{propmediumrange}, where we choose $\varkappa = \min(\kappa_{\uparrow},\kappa_3)$. \end{proof} {\beta}gin{corollary}\label{corStoD} There exists an absolute constant $\mu>0$ such that for every $X\ge 1$, we have {\beta}gin{equation}\label{eqStoD} {\mathcal S}um_{{\mathcal S}ubstack{q \;{{\rm r}m squarefree}\\ q\geq X^{1/8-\mu}}} \left| {\mathcal S}um_{f\in\overline{{\mathcal W}_q(\Sigma)}} \Bigl( S(f) - D(\tfrac 12,f) \Bigr) {\mathbb P}si\Bigl(\frac{|\Delta(f)|}{X}\Bigr) {\rm r}ight| = O_{\Sigma,{\mathbb P}si} \bigl(X^{1-\mu}\bigr). \end{equation} \end{corollary} {\beta}gin{proof} Adding up the above result for $q\geq X^{1/8-\varkappa}$, we note that $\{f\in \overline{{\mathcal W}_q(\Sigma)}:|\Delta(f)|<X\}$ is empty for $q\ge X^{1/2}$ because $\Delta(f) = {\mathrm {ind}}(f)^2 \Delta(K_f) \ge q^2 \Delta(K_f) \ge q^2$ for $f\in {\mathcal W}_q$. \end{proof} {\beta}gin{remark} Choosing $X^{\kappa_1}=\frac{X^{1/10}}{\ell^{1/10}q_1^{2/5}m_1^{7/5}}$ in the proof of Proposition~{\rm r}ef{propafeeb} yields the equality of the last two terms and the following value {\beta}gin{equation*} H(n,m_1,q_1;X)=\frac{X}{q_1^2m_1^{5/3}\ell} + \frac{\ell^{1/10} X^{9/10}}{q_1^{8/5}m_1^{3/5}} + \frac{X^{5/6}}{q_1^{5/3}\ell^{1/3}}. \end{equation*} Then the remainders \eqref{eqest3} and \eqref{eqest4} are equal, and we may further choose $\kappa_\uparrow$ so that they are also equal to the remainder in Corollary~{\rm r}ef{corlargerange} which yields a specific choice of the constant $\kappa_\uparrow$. 
In the end, an admissible set of values of the constants is as follows: $\kappa_\downarrow = \tfrac{1}{3000}$, $\kappa_{\uparrow}=\tfrac{1}{300}$, $\eta=\tfrac{1}{1000}$, $\kappa_1=\tfrac{1}{100}$, $\kappa_2=\tfrac{1}{200}$, $\varkappa = \tfrac{1}{3000}$. To verify the admissibility of these numerical values, it suffices to insert them in each of the remainder terms of Proposition~\ref{p_main-term}, Corollary~\ref{corlargerange}, Corollary~\ref{corborderrange}, \eqref{eqkupbd}, \eqref{eqest1}, \eqref{eqest2}, \eqref{eqest3}, and \eqref{eqest4}. \end{remark} \subsection{Counting suborders}\label{sSuborders} In this subsection we prove Theorem \ref{propSN} by conditionally bounding \begin{equation*} \sum_{q>X^{1/8-\varkappa}}\sum_{f\in\overline{{\mathcal W}_q(\Sigma)}}S(f). \end{equation*} Note that by Corollary \ref{corStoD}, we may replace $S(f)$ in the above sum by $D(\tfrac 12,f)$. The advantage of using $D(\tfrac 12,f)$ over $S(f)$ is that the values of $D(\tfrac 12,f)$ for binary cubic forms $f$ corresponding to suborders of a fixed cubic field $K$ can be simultaneously controlled in terms of $L(\tfrac 12,\rho_K)$. To this end, we start by recalling the following result, due to the works of Shintani \cite{Shintani} and Datskovsky--Wright~\cite{DW} (see \cite[\S1.2]{KNT}), giving an explicit formula for the counting function of suborders $R$ of a fixed cubic field $K$. \begin{proposition}\label{propSDW} Let $K$ be a cubic field with ring of integers ${\mathcal O}_K$. For an order $R\subset{\mathcal O}_K$, let ${\mathrm {ind}}(R)$ denote the index of $R$ in ${\mathcal O}_K$. Then \begin{equation*} \sum_{R\subset {\mathcal O}_K}\frac{1}{{\mathrm {ind}}(R)^s}=\frac{\zeta_K(s)}{\zeta_K(2s)}\zeta_{\mathbb Q}(3s)\zeta_{\mathbb Q}(3s-1). \end{equation*} \end{proposition} We thus obtain the following corollary regarding the number $N_K(Z)$ of orders of ${\mathcal O}_K$ with index less than $Z$ for a cubic field $K$. \begin{corollary}\label{suborders} For every $\epsilon>0$, $Z\ge 1$ and cubic field $K$, we have \begin{equation*} N_K(Z)\ll_\epsilon Z^{1+\epsilon}|\Delta(K)|^\epsilon. \end{equation*} The implied constant is independent of $K$ and $Z$. \end{corollary} \begin{proof} This follows from Perron's formula integrating along the vertical line $\Re(s)=1+\epsilon$. \end{proof} The above result can be used to give a very useful bound on the sum of $D(\tfrac 12,f)$ over $f\in\overline{{\mathcal W}_q(\Sigma)}$ for $q$ greater than some positive $Q$. \begin{lemma} For every $Q,X\ge 1$ and $\epsilon>0$, \begin{equation}\label{eqsumK} \sum_{q\ge Q} \sum_{ \substack{f\in \overline{{\mathcal W}_q(\Sigma)}\\ |\Delta(f)|< X}} |D(\tfrac12,f)| \ll_{\epsilon,\Sigma} X^{\frac12 + \epsilon} \sum_{2^{\mathbb N}\ni Y\le X/Q^2} {Y^{-\frac12}} \sum_{\substack{K\in {\mathbb F}F_\Sigma \\ Y\leq|\Delta(K)|<2Y}}|L(\tfrac 12,\rho_K)|. \end{equation} \end{lemma} \begin{proof} Consider a real number $Y$ with $Y\ll X/Q^2$ and a cubic field $K$ such that $Y\leq |\Delta(K)|<2Y$. Then the number of binary cubic forms $f\in \cup_{q\ge Q} \overline{W_q(\Sigma)}$ such that $|\Delta(f)|<X$ and $K_f = K$ is bounded by \[ N_K\Bigl(\frac{X^{\frac12}}{Y^{\frac12}}\Bigr) = O_\epsilon\bigl(X^{1/2+\epsilon}/Y^{1/2}\bigr), \] using Corollary~\ref{suborders}.
Summing over all $K$ in the discriminant range $Y\leq|\Delta(K)|< 2Y$, and then summing over $Y\in 2^{\mathbb N}$ such that the dyadic ranges $[Y,2Y)$ cover (more than) the interval $[1,X/Q^2]$, we capture the sum over $f\in \overline{{\mathcal W}_q(\Sigma)}$, for all $q>Q$, such that $|\Delta(f)|< X$. Recall from~\eqref{boundEf} that we have $D(\tfrac12,f) = L(\tfrac12,\rho_{K_f}) E(\tfrac12,f)$ and $E(\tfrac12,f) =\prod_{p|\Delta(f)} (1+O(p^{-\frac12})) = |\Delta(f)|^{o(1)}$, which concludes the proof of the lemma. \end{proof} The above lemma yields the following consequence, which clarifies how we use nonnegativity. \begin{corollary}\label{cornnest} Assume that $L(\tfrac12,\rho_K)\geq 0$ for every cubic field $K\in{\mathbb F}F_\Sigma$. Then for $Q,X\geq 1$, we have \begin{equation}\label{eqcornonnegbound} \sum_{q\geq Q} \sum_{\substack{f\in\overline{{\mathcal W}_q(\Sigma)}\\|\Delta(f)|<X}} D(\tfrac12,f) \ll_{\epsilon,\Sigma} X^{29/28+\epsilon}Q^{-15/14}. \end{equation} \end{corollary} \begin{proof} First note that the assumption $L(\tfrac12,\rho_K)\geq 0$ for all cubic fields $K$ implies that $D(\tfrac12,f)\geq 0$ for all irreducible integral binary cubic forms. Thus, we may apply the previous lemma to estimate the left-hand side of \eqref{eqcornonnegbound}. From Theorem \ref{thuncondupbd} (using a smooth function which dominates the characteristic function of $[1,2]$), we obtain \begin{equation*} \sum_{\substack{K\in {\mathbb F}F_\Sigma \\ Y\leq|\Delta(K)|<2Y}}|L(\tfrac 12,\rho_K)|\ll_{\epsilon,\Sigma} Y^{\frac{29-28\delta}{28-16\delta}+\epsilon}, \end{equation*} for $\delta = 1/128$. Even the bound with $\delta=0$, in conjunction with \eqref{eqsumK}, yields the result. \end{proof} We are now ready to prove Theorem \ref{propSN}. \begin{proof}[Proof of Theorem \ref{propSN}] Proof assuming strong subconvexity: The hypothesis (S) would imply that the central value in the right-hand side of \eqref{eqsumK} is bounded by $Y^{\frac 16 -\vartheta}$. Hence the bound in \eqref{eqsumK} becomes $X^{\frac 12 + \epsilon} (X/Q^2)^{\frac{2}{3}-\vartheta}$. We pick $Q=X^{\frac{1}{8}-\kappa_{\downarrow}}$ with $\epsilon,\kappa_\downarrow >0$ sufficiently small such that $\tfrac12 + \epsilon + (\tfrac34 + 2 \kappa_\downarrow)(\tfrac 23 - \vartheta)<1$. Proposition \ref{p_main-term}, together with Corollary~\ref{corlargerange} and Corollary~\ref{corStoD}, now yields the result. \noindent Proof assuming nonnegativity: We pick $Q=X^{1/8-\varkappa}$, with $\varkappa$ as in Theorem \ref{thmStoD}. It follows that we have \begin{equation*} \sum_{q\geq Q}\sum_{\substack{f\in \overline{{\mathcal W}_q(\Sigma)}}}S(f) \Psi\Bigl (\frac{|\Delta(f)|}{X}\Bigr)= \sum_{q\geq Q}\sum_{\substack{f\in \overline{{\mathcal W}_q(\Sigma)}}}D(\tfrac12,f) \Psi\Bigl (\frac{|\Delta(f)|}{X}\Bigr) +O_{\epsilon,\Sigma,\Psi}(X^{1-\varkappa+\epsilon}). \end{equation*} Since we are assuming hypothesis (N), Corollary \ref{cornnest} asserts that we have \begin{equation*} \sum_{q\geq Q} \sum_{\substack{f\in \overline{{\mathcal W}_q(\Sigma)}\\ |\Delta(f)|<X}} D(\tfrac12,f)\ll_{\epsilon,\Sigma} X^{101/112+\epsilon}, \end{equation*} which is sufficiently small. The result now follows from Proposition \ref{p_main-term}.
\end{proof} {\mathcal S}ection{Proofs of Theorems {\rm r}ef{thmnv2} and {\rm r}ef{thmMoment}}\label{sec:proof} In addition to the quantity $A_\Sigma(X)$, that we defined in \eqref{eqnvzsieve}, we also define {\mathrm {ind}}ex{$MA_\Sigma(X)$, sum of {$\lvert L(\tfrac12,{\rm r}ho_K) {\rm r}vert $} for $K\in {\mathbb F}F_\Sigma(X)$} {\beta}gin{equation*} {\beta}gin{array}{rcl} MA_\Sigma(X)&:=&\displaystyle {\mathcal S}um_{{\mathcal S}ubstack{K\in {\mathbb F}F_\Sigma\\ X\le |\Delta(K)|<2X}} |L(\tfrac 12,{\rm r}ho_K)|; \\[.2in]\displaystyle PA_\Sigma(X)&:=&\displaystyle {\mathcal S}um_{{\mathcal S}ubstack{K\in {\mathbb F}F_\Sigma\\ X/2 \le |\Delta(K)|< 3X \\mathcal{L}(\tfrac 12,{\rm r}ho_K)\ge 0}} L(\tfrac 12,{\rm r}ho_K). \end{array} \end{equation*} The letter M stands for \emph{maximal} and the letter P for \emph{positive}. {\mathrm {ind}}ex{$PA_\Sigma(X)$, sum of $L(\tfrac12,{\rm r}ho_K)\ge 0$ for $K\in {\mathbb F}F_\Sigma(X)$} {\beta}gin{proposition}\label{MAlePA} For every $\epsilon>0$ and $X\ge 1$, we have the asymptotic inequality \[ MA_\Sigma(X) \le 2 PA_\Sigma(X) + O_{\epsilon,\Sigma}\bigl(X^{\frac{29-28\delta}{28-16\delta} +\epsilon}\bigr). \] \end{proposition} {\beta}gin{proof} We let ${\mathbb P}si_1:{\mathbb R}_{>0}\to [0,1]$ be a smooth function compactly supported on the interval $[\frac12,3]$ such that ${\mathbb P}si_1(t)=1$ for $t\in [1,2]$. We have an inequality followed by a basic identity {\beta}gin{equation}\label{eqbasicidentity} {\beta}gin{aligned} MA_\Sigma(X) &\le {\mathcal S}um_{{\mathcal S}ubstack{K\in {\mathbb F}F_\Sigma}} |L(\tfrac 12,{\rm r}ho_K)| {\mathbb P}si_1\Bigl(\frac{|\Delta(K)|}{X}\Bigr) \\ &= 2 {\mathcal S}um_{{\mathcal S}ubstack{K\in {\mathbb F}F_\Sigma\\ L(\tfrac 12,{\rm r}ho_K)\ge 0}} L(\tfrac 12,{\rm r}ho_K) {\mathbb P}si_1\Bigl(\frac{|\Delta(K)|}{X}\Bigr) - {\mathcal S}um_{{\mathcal S}ubstack{K\in {\mathbb F}F_\Sigma}} L(\tfrac 12,{\rm r}ho_K) {\mathbb P}si_1\Bigl(\frac{|\Delta(K)|}{X}\Bigr), \end{aligned} \end{equation} which follows from $|x| = 2{\rm max}(x,0) - x$ for every $x\in {\mathbb R}$. The first sum is $\le 2 PA_\Sigma(X)$. (Note that in the respective definitions of $MA_\Sigma(X)$ and $PA_\Sigma(X)$, the discriminant range has increased from $X\le |\Delta(K)|<2X$ to $X/2 \le |\Delta(K)|< 3X$ for this purpose). The second sum is equal to $A_\Sigma(X)$ for which we have established the estimate~\eqref{eqLfavgupbound}. This concludes the proof. \end{proof} We finally arrive at the proof of our main result of this paper. In Section~{\rm r}ef{sec:average}, we have estimated the terms $q<Q$ of the first moment $A_\Sigma(X)$. In Section~{\rm r}ef{sConditional}, we have estimated for the other terms $q\ge Q$ the difference $S(f) - D(\tfrac12,f)$. The conclusion of all these results is summarized in the following which was stated in the introduction as Theorem~{\rm r}ef{thmMoment}: {\beta}gin{theorem} There is an absolute constant $\mu>0$ such that the following holds. For every $0<\nu\le \mu$, $\epsilon>0$, and $X\ge 1$, {\beta}gin{equation}\label{eqASignewerror} A_\Sigma(X)-C_\Sigma\cdot X \bigl(\log X + \widetilde {\mathbb P}si'(1) \bigr) - C'_\Sigma \cdot X \ll_{\epsilon,\nu,\Sigma,{\mathbb P}si} X^{1+\epsilon-\nu} + X^{1/2+\epsilon} \cdot {\mathcal S}um_{ 2^{\mathbb N} \ni Y\le X^{3/4+\nu}}\frac{MA_\Sigma(Y)}{Y^{1/2}}, \end{equation} where the sum over $Y$ is dyadic, namely $Y\in 2^{\mathbb N}$ is constrained to be a power of $2$. 
\end{theorem} {\beta}gin{proof} The result will follow from Proposition {\rm r}ef{p_main-term}, Corollary {\rm r}ef{corStoD} and \eqref{eqsumK}. It follows from Proposition {\rm r}ef{p_main-term} that \[ A_{\Sigma}(X) - C_\Sigma\cdot X \bigl(\log X + \widetilde {\mathbb P}si'(1) \bigr) - C'_\Sigma \cdot X \ll_{\epsilon, \Sigma,{\mathbb P}si} \frac{X^{1+\epsilon}}{Q}+X^{\frac{11}{12}+\epsilon} + Q^{2+\epsilon} X^{\frac34+\epsilon} + {\mathcal S}um_{q\ge Q} \left| {\mathcal S}um_{f\in \overline{W_q(\Sigma)}} S(f) {\mathbb P}si\left( \frac{|\Delta(f)|}{X} {\rm r}ight) {\rm r}ight|. \] Let $a>0$ be sufficiently small such that ${\mathbb P}si(t)=0$ whenever $a^2t\ge 1$. Choose $Q= a^{-1} X^{1/8- \nu/ 2}$. Using that $Q\gg_{\mathbb P}si X^{1/8-\mu}$, we can apply Corollary~{\rm r}ef{corStoD} to obtain the bound \[ {\mathcal S}um_{q\ge Q} \left| {\mathcal S}um_{f\in \overline{W_q(\Sigma)}} \Bigl( S(f) - D(\tfrac 12,f) \Bigr) {\mathbb P}si\left( \frac{|\Delta(f)|}{X} {\rm r}ight) {\rm r}ight| \ll_{\Sigma,{\mathbb P}si} X^{1-\mu} \le X^{1-\nu}. \] The estimate \eqref{eqsumK} yields \[ {\beta}gin{aligned} {\mathcal S}um_{q\ge Q} {\mathcal S}um_{f\in \overline{W_q(\Sigma)}} \left| D(\tfrac 12,f) {\rm r}ight| {\mathbb P}si\left( \frac{|\Delta(f)|}{X} {\rm r}ight) &\ll_{{\mathbb P}si} {\mathcal S}um_{q\ge Q} {\mathcal S}um_{{\mathcal S}ubstack{f\in \overline{W_q(\Sigma)}\\ |\Delta(f)|<X/a^2}} \left| D(\tfrac 12,f) {\rm r}ight| \\ &\ll_{\epsilon,\Sigma,{\mathbb P}si} X^{\frac12 + \epsilon} \cdot {\mathcal S}um_{ 2^{\mathbb N} \ni Y\le (X/a^2)/Q^2} \frac{MA_\Sigma(Y)}{Y^{1/2}}. \end{aligned} \] It remains to observe that $(X/a^2)/Q^2 = X^{3/4 + \nu}$ to conclude the proof. \end{proof} We are now ready to prove our main Theorem {\rm r}ef{thmnv2}. Recall that the qualitative version in Theorem {\rm r}ef{thmnv1} follows from Theorem {\rm r}ef{thmnv2}. {\beta}gin{proof}[Proof of Theorem {\rm r}ef{thmnv2}] Recall that $C_\Sigma>0$ in Proposition~{\rm r}ef{p_main-term}. We distinguish two cases depending on the size of the sum of $MA_\Sigma(Y)$ in the right-hand side of \eqref{eqASignewerror}. In the first case, if the right-hand side of \eqref{eqASignewerror} is $< X$, then we have $A_\Sigma(X){\mathcal S}im C_\Sigma \cdot X\cdot \log X$. In combination with Theorem~{\rm r}ef{p_Wu}, we obtain that $\gg_{\epsilon,\Sigma} X^{\frac{3}{4}+\delta-\epsilon}$ cubic fields $K\in{\mathbb F}F_\Sigma$ with $|\Delta(K)|<X$ satisfy $L(\tfrac 12,{\rm r}ho_K)>0$. Hence {\beta}gin{equation}\label{firstcase} \delta_\Sigma(X) \ge \frac{3}{4} + \delta - \epsilon - O_\epsilon\bigl(\frac{1}{\log X}\bigr), \end{equation} which is sufficient to imply Theorem {\rm r}ef{thmnv2} in that case. Assume in the second case that the right-hand side of \eqref{eqASignewerror} is $\ge X$, namely {\beta}gin{equation*} {\mathcal S}um_{2^{\mathbb N} \ni Y\le X^{3/4+\nu}}\frac{MA_\Sigma(Y)}{Y^{1/2}}\ge X^{1/2-\epsilon}. \end{equation*} This implies that there exists $Y\in 2^{\mathbb N}$ with $Y\leq X^{3/4+\nu}$ such that $MA_\Sigma(Y)\ge X^{1/2-\epsilon}Y^{1/2}$. It follows from Proposition~{\rm r}ef{MAlePA} that {\beta}gin{equation*} 2 PA_\Sigma(Y)\ge X^{1/2-\epsilon}Y^{1/2} +O_{\epsilon,\Sigma}\bigl(Y^{\frac{29-28\delta}{28-16\delta} +\epsilon}\bigr). \end{equation*} Since $Y \leq X^{3/4+\nu}$, the error term is negligible. (The convexity bound $\delta=0$ suffices for this). We deduce in the second case: {\beta}gin{equation}\label{eqYineq} 2PA_\Sigma(Y)\ge X^{1/2-\epsilon}Y^{1/2}. 
\end{equation} Theorem~{\rm r}ef{p_Wu} and \eqref{eqYineq} imply that $\gg_{\epsilon} X^{1/2-\epsilon}Y^{1/4+\delta-\epsilon}$ cubic fields $K\in{\mathbb F}F_\Sigma$ with $|\Delta(K)|<Y$ satisfy the inequality $L(\tfrac 12,{\rm r}ho_K)>0$. Hence {\beta}gin{equation}\label{secondcase} \delta_\Sigma(Y) \ge \frac{\log X}{2 \log Y} + \bigl(\frac 14+\delta -\epsilon\bigr) -O_\epsilon\bigl(\frac{1}{\log Y}\bigr). \end{equation} Since~\eqref{eqYineq} implies that $Y\to \infty$ we deduce {\beta}gin{equation*} \limsup_{X\to \infty} \delta_\Sigma(X) \ge \frac 34 + \delta, \end{equation*} because the inequality is satisfied either in the first case by $\delta_\Sigma(X)$ in~\eqref{firstcase} or in the second case by $\delta_\Sigma(Y)$ in \eqref{secondcase}, since $\frac{\log X}{\log Y} \ge \frac{4}{3+4\nu}\ge 1$. To conclude a lower bound on the $\liminf$, we need a lower bound on $Y$ in the second case. Theorem~{\rm r}ef{p_Wu} implies $PA_\Sigma(Y) = O_\epsilon(Y^{\frac 54 -\delta+\epsilon})$. Together with \eqref{eqYineq}, this yields the following lower bound: {\beta}gin{equation}\label{lowerY} Y\gg_\epsilon X^{\frac{2}{3-4\delta}-\epsilon}. \end{equation} This implies {\beta}gin{equation}\label{deltaSigmage} \delta_\Sigma(X) \ge \frac 12 + \bigl(\frac 14 +\delta\bigr)\cdot \frac{2}{3-4\delta} - \epsilon - O_\epsilon\bigl(\frac{1}{\log X}\bigr). \end{equation} The first two terms of~\eqref{deltaSigmage} simplify to $\frac{2}{3-4\delta}$, hence \[ \liminf_{X\to \infty} \delta_\Sigma(X) \ge \frac{2}{3-4\delta}. \] This concludes the proof of Theorem {\rm r}ef{thmnv2}. \end{proof} The same argument implies an \emph{Omega} result $MA_\Sigma(X) ={\mathcal O}mega_\Sigma(X^{1-o(1)})$ as $X\to \infty$. Namely, there is a sequence $X_k\to \infty$ such that for every $\epsilon >0$, $MA_\Sigma(X _k)/X_k^{1-\epsilon}\to \infty$. For completeness, we also record the following lower bound for the first moment: {\beta}gin{proposition}\label{corboundmoment} For every $\epsilon>0$ and $X\ge 1$, \[ X^{\frac{5-4\delta}{6-8\delta}-\epsilon} \ll_{\epsilon,\Sigma} {\mathcal S}um_{K\in {\mathbb F}F_\Sigma(X)} \bigl|L\bigl(\tfrac12,{\rm r}ho_K\bigr)\bigr|. \] \end{proposition} {\beta}gin{proof} The lower bound~\eqref{lowerY} for $Y$ implies the lower bound in Proposition~{\rm r}ef{corboundmoment} as follows: \[ {\mathcal S}um_{K\in {\mathbb F}F_\Sigma(X)} |L(\tfrac 12,{\rm r}ho_K)| \ge {\mathcal S}um_{K\in {\mathbb F}F_\Sigma(Y)} |L(\tfrac 12,{\rm r}ho_K)| \gg_{\epsilon,\Sigma} X^{\frac12 -\epsilon} Y^{\frac12}, \] and $\tfrac12 + \tfrac{1}{3-4\delta} = \tfrac{5-4\delta}{6-8\delta}$. \end{proof} {\beta}gin{thebibliography}{10} \bibitem{Armitage} J.\ V.\ Armitage, Zeta functions with a zero at $s=\frac12$, {\it Invent.\ Math.}\ \textbf{15} (1972), 199--205. \bibitem{Baier-Young} S.\ Baier, M.\ P.\ Young, Mean values with cubic characters, {\it J.\ Number Theory}\ \textbf{130} (2010), no.\ 4, 879--903. \bibitem{BBP} K.\ Belabas, M.\ Bhargava, C.\ Pomerance, Error estimates for the Davenport-Heilbronn theorems, {\it Duke Math.\ J.}\ \textbf{153} (2010), no.\ 1, 173--210. \bibitem{HCL4} M.\ Bhargava, Higher composition laws IV: The parametrization of quintic rings, {\it Ann.\ of Math.}\ (2) \textbf{167} (2008), no.\ 1, 53--94. \bibitem{BST} M.\ Bhargava, A.\ Shankar, J.\ Tsimerman, On the Davenport-Heilbronn theorems and second order terms, {\it Invent.\ Math.}\ \textbf{193} (2013), no.\ 2, 439--499. 
\bibitem{BTT} M.\ Bhargava, T.\ Taniguchi, F.\ Thorne, Improved error estimates for the Davenport--Heilbronn theorems, preprint 2021, arXiv:2107.12819. \bibitem{Blomer-Harcos-Michel} V.\ Blomer, G.\ Harcos, P.\ Michel, Bounds for modular $L$-functions in the level aspect, {\it Ann.\ Sci.\ Ecole Norm.\ Sup.}\ (4) \textbf{40} (2007), no.\ 5, 697--740. \bibitem{Blomer-Khan} V.\ Blomer, R.\ Khan, Uniform subconvexity and symmetry breaking reciprocity, {\it J.\ Funct.\ Anal.}\ \textbf{276} (2019), no.\ 7, 2315--2358. \bibitem{ChoKim1} P.\ J.\ Cho, H.\ H.\ Kim, Low lying zeros of Artin $L$-functions, {\it Math.\ Z.}\ \textbf{279} (2015), no.\ 3-4, 669--688. \bibitem{Cohn} H.\ Cohn, The density of abelian cubic fields, {\it Proc.\ Amer.\ Math.\ Soc.} {\bf 5} (1954), 476--477. \bibitem{DW} B.\ Datskovsky, D.\ J.\ Wright, The adelic zeta function associated to the space of binary cubic forms. II. Local theory, {\it J.\ Reine Angew.\ Math.\ } {\bf 367} (1986), 27--75. \bibitem{DH} H.\ Davenport, H.\ Heilbronn, On the density of discriminants of cubic fields II, {\it Proc.\ Roy.\ Soc.\ London Ser.\ A} \textbf{322} (1971), no.\ 1551, 405--420. \bibitem{DFL} C.\ David, A.\ Florea, M.\ Lalin, Non-vanishing for cubic $L$-functions, preprint 2020, arXiv:2006.15661. \bibitem{DF} B.\ N.\ Delone, D.\ K.\ Faddeev, {\it The theory of irrationalities of the third degree}, Translations of Mathematical Monographs \textbf{10}, American Mathematical Society, Providence, R.I., 1964. \bibitem{Duke-Friedlander-Iwaniec} W.\ Duke, J.\ B.\ Friedlander, H.\ Iwaniec, The subconvexity problem for Artin $L$-functions, {\it Invent.\ Math.}\ \textbf{149} (2002), no.\ 3, 489--577. \bibitem{Fr} A.\ Fr\"ohlich, Artin root numbers and normal integral bases for quaternion fields, {\it Invent.\ Math.}\ \textbf{17} (1972), 143--166. \bibitem{GGS} W.\ T.\ Gan, B.\ Gross, G.\ Savin, Fourier coefficients of modular forms on $G_2$, {\it Duke Math.\ J.}\ \textbf{115} (2002), no.\ 1, 105--169. \bibitem{IK} H.\ Iwaniec, E.\ Kowalski, {\it Analytic number theory}, American Mathematical Society Colloquium Publications \textbf{53}, American Mathematical Society, Providence, RI, 2004. \bibitem{Jutila} M.\ Jutila, On the mean value of $L(\tfrac12,\chi)$ for real characters, {\it Analysis} {\bf 1} (1981), no.\ 2, 149--161. \bibitem{KNT} N.\ Kaplan, J.\ Marcinek, R.\ Takloo-Bighash, Distribution of orders in number fields, {\it Res.\ Math.\ Sci.\ } {\bf 2} (2015), Art.\ 6, 57 pp. \bibitem{Levi} F.\ Levi, Kubische Zahlk{\"o}rper und bin{\"a}re kubische Formenklassen, {\it Ber. S{\"a}chs. Akad. Wiss. Leipzig, Math.-Naturwiss}, Kl {\bf 66} (1914), 26--37. \bibitem{Mori} S.\ Mori, Orbital Gauss sums associated with the space of binary cubic forms over a finite field, preprint 2010. \bibitem{SST} P.\ Sarnak, S.\ W.\ Shin, N.\ Templier, Families of $L$-functions and their symmetry, in {\it Proceedings of Simons Symposia, Families of automorphic forms and the trace formula}, Springer Verlag, 2016, 531--578. \bibitem{Sato} F.\ Sato, On functional equations of zeta distributions, {\it Adv. Stud.\ Pure Math.} \textbf{15}, Academic Press, 1989, 465--508. \bibitem{Serre} J.-P.\ Serre, Conducteurs d'Artin des caract\`eres r\'eels, {\it Invent.\ Math.}\ \textbf{14} (1971), 173--183. \bibitem{SST1} A.\ Shankar, A.\ S\"odergren, N.\ Templier, Sato-Tate equidistribution of certain families of Artin $L$-functions, {\it Forum Math.\ Sigma} \textbf{7} (2019), e23. 
\bibitem{Shintani} T.\ Shintani, On Dirichlet series whose coefficients are class numbers of integral binary cubic forms, {\it Proc.\ Japan Acad.\ } {\bf 46} (1970), 909--911. \bibitem{Sound} K.\ Soundararajan, Nonvanishing of quadratic Dirichlet $L$-functions at $s=\frac12$, {\it Ann.\ of Math.}\ (2) \textbf{152} (2000), no.\ 2, 447--488. \bibitem{TT} T.\ Taniguchi, F.\ Thorne, Orbital $L$-functions for the space of binary cubic forms, {\it Canad.\ J.\ Math.\ } {\bf 65} (2013), no.\ 6, 1320--1383. \bibitem{TaTh1} T.\ Taniguchi, F.\ Thorne, Secondary terms in counting functions for cubic fields, {\it Duke Math.\ J.}\ \textbf{162} (2013), no.\ 13, 2451--2508. \bibitem{TaTh2} T.\ Taniguchi, F.\ Thorne, Orbital exponential sums for prehomogeneous vector spaces, {\it Amer.\ J.\ Math.}\ \textbf{142} (2020), no.\ 1, 177--213. \bibitem{Wu} H.\ Wu, Explicit subconvexity for $\mathrm{GL}_2$, preprint 2018, arXiv:1812.04391. \bibitem{Yang} A.\ Yang, Distribution problems associated to zeta functions and invariant theory, Ph.D.\ Thesis, Princeton University, 2009. \bibitem{Yun} Z.\ Yun, Orbital integrals and Dedekind zeta functions, {\it The legacy of Srinivasa Ramanujan}, 399--420, Ramanujan Math.\ Soc.\ Lect.\ Notes Ser.\ \textbf{20}, Ramanujan Math.\ Soc., Mysore, 2013. \end{thebibliography} \newgeometry{left=1cm,right=1cm,tmargin=2.5cm, marginpar=1cm} {{\mathcal S}mall \printindex} {\mathbb A}tEndDocument{ {\footnotesize \textit{E-mail address}, \texttt{[email protected]}\par \textsc{Department of Mathematics, University of Toronto, Toronto, ON, M5S 2E4, Canada}\par \addvspace{ amount} \textit{E-mail address}, \texttt{[email protected]}\par \textsc{Department of Mathematical Sciences, Chalmers University of Technology and the University of\\ {\rm r}ule[0ex]{0ex}{0ex}\hspace{14pt}Gothenburg, SE-412 96 Gothenburg, Sweden}\par \addvspace{ amount} \textit{E-mail address}, \texttt{[email protected]}\par \textsc{Department of Mathematics, Cornell University, Ithaca, NY 14853, USA} }} \end{document}
\begin{document} \title{On the spreading of quantum walks starting from local and delocalized states} \begin{abstract} We investigate the ballistic spreading behavior of the one-dimensional discrete time quantum walks whose time evolution is driven by any balanced quantum coin. We obtain closed-form expressions for the long-time variance of position of quantum walks starting from any initial qubit (spin-$1/2$ particle) and position states following a delta-like (local), Gaussian and uniform probability distributions. By averaging over all spin states, we find out that the average variance of a quantum walk starting from a local state is independent of the quantum coin, while from Gaussian and uniform states it depends on the sum of relative phases between spin states given by the quantum coin, being non-dispersive for a Fourier walk and large initial dispersion. We also perform numerical simulations of the average probability distribution and variance along the time to compare them with our analytical results. \end{abstract} \noindent{\it Keywords}: Quantum walks, Spreading, Gaussian states. \section{Introduction} The quantum counterparts of the classical random walks are known as quantum random walks \cite{aharonov1993quantum,kempe2003quantum} or quantum walks. The quantum walker is a qubit, a particle with an internal degree of freedom (spin $1/2$-like) placed on a regular lattice where each site is an external degree of freedom (position). Instead of a coin tossing game, to determine whether the particle goes to left or right, the dynamics is given by a unitary operator applied successive times to an initial state time-evolving the quantum state. This operator is formed by a quantum coin which rotates the qubit followed by a conditional displacement that displaces the qubit according to its spin state. The main difference between classical and quantum walks is due to the superposition principle, which leads the quantum walks to have unique features: a double peak distribution with a quadratic gain in their variance of the position along the time and the entanglement between the internal and external degrees of freedom created by their particular dynamics \cite{venegas2012quantum}. Quantum walks have been attracting a lot of attention due to their diversity of the implications in basic science and their potential applications. For instance, they are useful for the understanding of some biological processes such as the photosynthesis \cite{engel2007evidence} or human decision-making \cite{buseymer2006quantum}, to perform computational tasks as quantum search engine \cite{shenvi2003quantum,tulsi2008faster}, make universal quantum computation \cite{childs2009universal,lovett2010universal}, for generating maximal entanglement \cite{vieira2013dynamically} and quantum localization \cite{vieira2014entangling}. Moreover, they are versatile enough to be implemented in some experimental platforms \cite{wang2013physical}. The main purpose here is to understand how the delocalization of the initial state affects the spreading behavior of the quantum walks. Few earlier works discuss some aspects of this issue \cite{tregenna2003controlling,brun2003quantum,valcarcel2010tailoring,zhang2016creating}, however, a general answer for all possibilities of initial spin states (qubits) or for any type of quantum coin is still missing. 
To achieve this aim, we use a mathematical framework to obtain, via analytical \cite{brun2003quantum} and numerical approaches, the variance of the position for local, Gaussian and uniform states, for any initial qubit and balanced coin. In particular, we also calculate the average quantities to analyze the general features and differences among these states. The article is structured as follows. In Section \ref{sec:2}, we review the mathematical formalism of quantum walks, their initial states and dynamical evolution. In Section \ref{sec:3}, we obtain a general expression for the long-time variance of the position in the momentum space and calculate analytical expressions for the variance of quantum walks starting from local, Gaussian and uniform states. We also discuss the dispersion velocity as a function of the initial spin state for local and uniform states. In Section \ref{sec:4}, we compare the average variance calculated from our models with the numerical simulations. Finally, a brief conclusion with our main results is presented in Section \ref{sec:5}. \section{One-dimensional quantum walks} \label{sec:2} Formally, a quantum walk state belongs to the Hilbert space $\mathcal{H}=\mathcal{H}_C\otimes\mathcal{H}_P$, where $\mathcal{H}_C$ is the coin space and $\mathcal{H}_P$ is the position space. The coin space is a complex two-dimensional vector space spanned by two spin states $\{\ket{\uparrow}, \ket{\downarrow}\}$, and the position space is a countable infinite-dimensional vector space spanned by a set of orthonormal vectors $\{\cdots,\ket{j-1},\ket{j},\ket{j+1},\cdots\}$ with $j\in\mathbb{Z}$ being the discrete positions on a lattice. The one-dimensional quantum walker is a qubit, a particle with an internal degree of freedom given by a two-level state (spin $1/2$-like) and with its position and momentum as the external degrees of freedom. Then, let us first consider an arbitrary initial qubit state, \begin{equation} \ket{\Psi_s(0)}=\cos\left(\frac{\alpha}{2}\right)\ket{\uparrow}+e^{i\beta}\sin\left(\frac{\alpha}{2}\right)\ket{\downarrow}, \label{Psi_s} \end{equation} in the Bloch sphere representation \cite{nielsen2010quantum}, where $\alpha \in[0,\pi]$ and $\beta \in[-\pi,\pi]$. For instance, an up spin state has $\alpha=0$ for any value of $\beta$, while an equal superposition of spin states without phase difference between them has $\alpha=\pi/2$ and $\beta=0$. Therefore, a general quantum walk state is \begin{equation} \ket{\Psi(0)}=\sum_{j=-\infty}^{+\infty}\ket{\Psi_s(0)}\otimes f(j)\ket{j}, \label{Psi_0} \end{equation} where $a(j,0)=f(j)\cos(\alpha/2)$ and $b(j,0)=f(j)e^{i\beta}\sin(\alpha/2)$ are the initial spin up and down amplitudes, respectively, and $|f(j)|^2$ gives an initial probability distribution function. The condition of normalization is $\sum_j[|a(j,0)|^2+|b(j,0)|^2]=1$, with the sum over all integers. Here we employ local, Gaussian and uniform initial states. Since the local state has a Dirac delta function $\delta(j)$ as the initial probability distribution, we obtain a qubit at the origin position, \begin{equation} \ket{\Psi_L(0)}=\ket{\Psi_s(0)}\otimes\ket{0}.
\label{Psi_0_Local} \end{equation} Taking a Gaussian probability distribution with initial dispersion $\sigma_0$, a general initial Gaussian state is \begin{equation} \ket{\Psi_G(0)}=\sum_{j=-\infty}^{+\infty}\ket{\Psi_s(0)}\otimes\frac{\text{exp}\left(-j^2/4\sigma_0^2\right)}{(2\pi\sigma_0^2)^{\frac{1}{4}}}\ket{j}, \label{Psi_0_Gauss} \end{equation} while a uniform state can be written as \begin{equation} \ket{\Psi_U(0)}=\sum_{j=-\infty}^{+\infty}\ket{\Psi_s(0)}\otimes u\ket{j}, \label{Psi_0_Unif} \end{equation} where the sum over all integers satisfies $\sum_j|u|^2=1$, so that $u\rightarrow 0$. The unitary dynamical evolution of the quantum walk starting from an initial state $\ket{\Psi(0)}$ is given by, \begin{equation} \ket{\Psi(t)}=U(q,\theta,\phi)^t\ket{\Psi(0)}, \label{time_evolution} \end{equation} in discrete time steps $t$ with \begin{equation} U(q,\theta,\phi)=S\cdot[C(q,\theta,\phi)\otimes\mathbbm{1}_P], \label{U_operator} \end{equation} being the time evolution operator, where $\mathbbm{1}_P$ is the identity operator in $\mathcal{H}_P$, $C(q,\theta,\phi)$ is the quantum coin and $S$ is the conditional displacement operator. The quantum coin $C(q,\theta,\phi)$ acts on the spin states and generates a superposition of them. Since a general quantum coin $C(q,\theta,\phi)$ belongs to $SU(2)$, up to an irrelevant global phase it can be written in the following way, \begin{equation} \displaystyle C(q,\theta,\phi) = \begin{pmatrix} \sqrt{q} & \sqrt{1-q}e^{i\theta} \\ \sqrt{1-q}e^{i\phi} & -\sqrt{q}e^{i(\theta+\phi)} \end{pmatrix}, \label{Quantum_coin} \end{equation} with three independent parameters $q$, $\theta$ and $\phi$. The first parameter ranges from $0$ to $1$, and it determines whether the coin is unbiased ($q=1/2$) or biased ($q\neq1/2$). The last two parameters range from $0$ to $2\pi$, and they control the relative phases between spin states. The conditional displacement operator $S$, \begin{equation} S=\sum_j(\ket{\uparrow}\bra{\uparrow}\otimes\ket{j+1}\bra{j}+\ket{\downarrow}\bra{\downarrow}\otimes\ket{j-1}\bra{j}), \label{S_operator} \end{equation} shifts the qubit from the site $j$ to the site $j+1$ ($j-1$) conditioned on its up (down) spin state, which generates entanglement between the spin and position states \cite{vieira2013dynamically,vieira2014entangling}. The one-step time evolution of a quantum walk state gives \begin{equation} \ket{\Psi(t)}=U(q,\theta,\phi)\ket{\Psi(t-1)} \label{onestep_Psi} \end{equation} and replacing \eqref{Quantum_coin} and \eqref{S_operator} into $U(q,\theta,\phi)$ in \eqref{onestep_Psi} allows us to write the following equations for the spin amplitudes $a(j,t)$ and $b(j,t)$, \begin{align} a(j,t)&=\sqrt{q}a(j-1,t-1)+\sqrt{1-q}e^{i\theta}b(j-1,t-1),\\ b(j,t)&=\sqrt{1-q}e^{i\phi}a(j+1,t-1)-\sqrt{q}e^{i(\theta+\phi)}b(j+1,t-1). \end{align} These recurrence relations are used to obtain the quantum walk probability distribution profile and the variance of position through an iterative calculation starting from the initial amplitudes $a(j,0)$ and $b(j,0)$. This approach is used to perform the numerical calculations. The probability distribution for each position $j$ at a time step $t$ can be obtained straightforwardly as \begin{equation} |\Psi(j,t)|^2=|a(j,t)|^2+|b(j,t)|^2, \label{Prob_Dist} \end{equation} such that $|a(j,t)|^2$ and $|b(j,t)|^2$ are the probability distributions for up and down spin components.
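As an illustration of this iterative approach, the following minimal Python/NumPy sketch (provided only for illustration; it is not the code used for the simulations reported in Section \ref{sec:4}, and the function name \texttt{quantum\_walk} is an arbitrary choice) evolves the amplitudes $a(j,t)$ and $b(j,t)$ for a balanced coin and a local initial state, and returns the probability distribution \eqref{Prob_Dist}:
\begin{verbatim}
import numpy as np

def quantum_walk(t_max, alpha, beta, theta=0.0, phi=0.0, q=0.5):
    """Iterate the recurrences for a(j,t), b(j,t) on j = -t_max..t_max,
    starting from the local state with qubit angles (alpha, beta)."""
    n = 2 * t_max + 1
    a = np.zeros(n, dtype=complex)          # spin-up amplitudes a(j,t)
    b = np.zeros(n, dtype=complex)          # spin-down amplitudes b(j,t)
    a[t_max] = np.cos(alpha / 2)            # qubit placed at the origin j = 0
    b[t_max] = np.exp(1j * beta) * np.sin(alpha / 2)
    for _ in range(t_max):
        ca = np.sqrt(q) * a + np.sqrt(1 - q) * np.exp(1j * theta) * b
        cb = (np.sqrt(1 - q) * np.exp(1j * phi) * a
              - np.sqrt(q) * np.exp(1j * (theta + phi)) * b)
        a = np.roll(ca, 1)                  # spin-up component moves to j+1
        b = np.roll(cb, -1)                 # spin-down component moves to j-1
    return np.abs(a) ** 2 + np.abs(b) ** 2  # probability distribution |Psi(j,t)|^2

# Example: Hadamard walk (theta = phi = 0) from the qubit alpha = pi/2, beta = pi/2.
t = 200
p = quantum_walk(t, np.pi / 2, np.pi / 2)
j = np.arange(-t, t + 1)
print(p.sum())                              # ~1 (normalization is preserved)
print((j**2 @ p) - (j @ p) ** 2)            # variance of position; grows quadratically in t
\end{verbatim}
The lattice is taken just large enough that no amplitude reaches its borders within \texttt{t\_max} steps, so the periodic wrap-around of \texttt{np.roll} never comes into play.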
The variance of position at a particular time step $t$ is \begin{equation} \sigma^2(t)=\braket{\hat{\mathbf{j}}^2}_t-\braket{\hat{\mathbf{j}}}_t^2, \label{variance} \end{equation} where \begin{equation} \braket{\hat{\mathbf{j}}^m}_t=\sum_j j^m|\Psi(j,t)|^2. \label{variance_sup} \end{equation} In order to deal with \eqref{variance_sup}, on the next section we will use a mathematical framework to develop closed-form expressions for the long-time variance of a quantum walk starting from any qubit on one position (local state) or spread over many positions following a Gaussian and uniform probability distributions. \section{Long-time variance} \label{sec:3} Since the expression \eqref{variance_sup} is difficult to handle analytically, we should made a change of basis to the dual $k$-space $\mathcal{\tilde{H}}_k$ spanned by the Fourier transformed vectors $\ket{k}=\sum_{j}e^{ikj}\ket{j}$ with $k\in[-\pi,\pi]$ \cite{ambainis2001one}. Then, the initial state \eqref{Psi_0} can be rewritten as, \begin{equation} \ket{\tilde{\Psi}(0)}=\int_{-\pi}^{\pi} \dfrac{\mathrm{d}k}{2\pi}\ket{\Phi_k(0)}\otimes\ket{k}, \label{Psitil_0} \end{equation} where $\ket{\Phi_k(0)}=\tilde{a}_k(0)\ket{\uparrow}+\tilde{b}_k(0)\ket{\downarrow}$. In the Fourier representation, the conditional displacement operator $S$ is diagonal, \begin{equation} S_k=\ket{\uparrow}\bra{\uparrow}\otimes e^{-ik}\ket{k}\bra{k}+ \ket{\downarrow}\bra{\downarrow}\otimes e^{ik}\ket{k}\bra{k}, \label{S_diag} \end{equation} thus the time evolution operator \eqref{U_operator} in the $k$-space gives \begin{equation} U_k=\dfrac{1}{\sqrt{2}}\begin{pmatrix} e^{-i(\delta+k)} & e^{i(\eta-k)} \\ e^{-i(\eta+k)} & -e^{i(\delta+k)} \end{pmatrix}, \label{Uk} \end{equation} where $\delta=(\theta+\phi)/2$, $\eta=(\theta-\phi)/2$ and the coin is balanced ($q=1/2$). After the diagonalization of \eqref{Uk} we obtain the following eigenvalues and eigenvectors \cite{tregenna2003controlling}, \begin{equation} \lambda_k^{\pm}=\pm\dfrac{e^{i\delta}}{\sqrt{2}}\left[\sqrt{1+\cos^2(k-\delta)}\mp i\sin(k-\delta)\right],\\ \label{eigenvalues} \end{equation} \begin{align} \ket{\Phi_k^{\pm}}&=\dfrac{1}{N_k^{\pm}} \begin{pmatrix} e^{ik} \\ e^{-i(\delta+\eta)}\left(\sqrt{2}\lambda_k^{\pm}-e^{ik}\right) \end{pmatrix}, \label{eigenvectors} \end{align} with \begin{equation} (N_k^{\pm})^2=4\mp 2\left[\cos (k-\delta)\sqrt{1+\cos^2 (k-\delta)}\pm \sin^2 (k-\delta)\right]. \label{N_k} \end{equation} Now we are able to elaborate some details of the formalism introduced by Brun \textit{et al.} \cite{brun2003quantum} in order to achieve an analytical expression for the variance of the position without disorder, nor decoherence. The density operator can be written from \eqref{Psitil_0} as, \begin{equation}\label{densidade} \rho(0)=\ket{\tilde{\Psi}(0)}\bra{\tilde{\Psi}(0)}=\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k}{2\pi}\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k'}{2\pi}\ket{\Phi_k(0)}\bra{\Phi_k(0)}\otimes\ket{k}\bra{k'}. \end{equation} Let us rewrite the operator $U_k$ from \eqref{Uk} in the following way, \begin{equation}\label{U_k_decomposto} U_k=\left(e^{-ik}\ket{\uparrow}\bra{\uparrow}+e^{ik}\ket{\downarrow}\bra{\downarrow}\right)C(q,\theta,\phi). 
\end{equation} Since an arbitrary operator $\Lambda$ is transformed as \begin{equation} \mathcal{L}_{kk'}\Lambda= U_k\Lambda U_{k'}^\dagger, \end{equation} thus, by using this notation $\rho_k(0)=\ket{\Phi_k(0)}\bra{\Phi_k(0)}$, we can write \begin{equation} \rho(t)=\ket{\tilde{\Psi}(t)}\bra{\tilde{\Psi}(t)}=\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k}{2\pi}\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k'}{2\pi}\mathcal{L}_{kk'}^t\rho_k(0)\otimes\ket{k}\bra{k'}. \end{equation} The reduced density operator relative to the position is given by \begin{equation} \rho_P(t)=\int_{-\pi}^{\pi}\dfrac{\mathrm{d} k}{2\pi}\int_{-\pi}^{\pi}\dfrac{\mathrm{d} k'}{2\pi}\ket{k}\bra{k'}\mathrm{Tr}\left\{\mathcal{L}_{kk'}^t\rho_k(0)\right\}, \end{equation} therefore, the probability to find the quantum walker on the position $j$ at time $t$ is \begin{align} |\Psi(j,t)|^2 &= \mathrm{Tr}\left\{ \left(\mathbbm{1}_C \otimes \ket{j}\bra{j}\right) \rho(t)\right\}, \\ &= \mathrm{Tr}\left\{\ket{j}\bra{j} \rho_P(t)\right\}, \\ &= \braket{j|\rho_P(t)|j}. \end{align} Since the inverse Fourier transform is \begin{equation} \ket{j}=\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k}{2\pi}e^{-ikj}\ket{k}, \end{equation} we have, \begin{align} |\Psi(j,t)|^2&=\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k}{2\pi}\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k'}{2\pi}\braket{j|k}\braket{k'|j}\text{Tr}\{\mathcal{L}_{kk'}^t\rho_k(0)\}\\ &=\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k}{2\pi}\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k'}{2\pi}e^{-ij(k'-k)}\text{Tr}\{\mathcal{L}_{kk'}^t\rho_k(0)\}. \end{align} We calculate the expressions for the moments of the distribution by, \begin{align} \braket{\hat{\mathbf{j}}^m}_t &= \sum_j j^m |\Psi(j,t)|^2\nonumber \\ \label{jm} &= \sum_j j^m \int_{-\pi}^{\pi}\dfrac{\mathrm{d}k}{2\pi} \int_{-\pi}^{\pi}\dfrac{\mathrm{d}k'}{2\pi} e^{-ij(k'-k)} \text{Tr}\{\mathcal{L}_{kk'}^t\rho_k(0)\}, \end{align} where the position operator $\hat{\mathbf{j}}$ acts like $\hat{\mathbf{j}}\ket{j}=j\ket{j}$. We can change the order of sum and integration in \eqref{jm} to use the identity, \begin{equation} 2\pi(-i)^m\delta^{(m)}(k'-k)=\sum_{j=-\infty}^{+\infty}j^me^{-ij(k'-k)}, \end{equation} where $\delta^{(m)}(k'-k)$ is the $m$-th derivative of Dirac delta function. Thus, \begin{equation} \braket{\hat{\mathbf{j}}^m}_t =\dfrac{(-i)^m}{2\pi}\int_{-\pi}^{\pi}\mathrm{d}k \int_{-\pi}^{\pi} \mathrm{d}k' \delta^{(m)}(k'-k) \text{Tr}\{\mathcal{L}_{kk'}^t\rho_k(0)\}.\label{jm_t} \end{equation} The integration above will be made by parts using the derivative of $\mathcal{L}_{kk'}$ in terms of \eqref{U_k_decomposto}, \begin{align} \dfrac{\partial}{\partial k} U_k\Lambda U_{k'}^\dagger &= \left(-ie^{-ik}\ket{\uparrow}\bra{\uparrow}+ie^{ik}\ket{\downarrow}\bra{\downarrow}\right) C(q,\theta,\phi)\Lambda U_{k'}^\dagger\nonumber\\ &=-i\left(e^{-ik}\ket{\uparrow}\bra{\uparrow}-e^{ik}\ket{\downarrow}\bra{\downarrow}\right) C(q,\theta,\phi)\Lambda U_{k'}^\dagger\nonumber\\ &=-i\left(\ket{\uparrow}\bra{\uparrow}-\ket{\downarrow}\bra{\downarrow}\right) U_k\Lambda U_{k'}^\dagger\nonumber\\ &=-i\hat{Z} U_k\Lambda U_{k'}^\dagger, \end{align} where $\hat{Z}=\ket{\uparrow}\bra{\uparrow}-\ket{\downarrow}\bra{\downarrow}$. 
Trace properties give us, \begin{align} \dfrac{\partial}{\partial k}\mathrm{Tr} \{\mathcal{L}_{kk'}\rho_k(0)\} &=\mathrm{Tr} \left\{\dfrac{\partial}{\partial k}\mathcal{L}_{kk'}\rho_k(0)\right\}\nonumber\\ &=-i\mathrm{Tr} \{\hat{Z}\mathcal{L}_{kk'}\rho_k(0)\}\nonumber\\ &=-i\mathrm{Tr} \{(\mathcal{L}_{kk'}\rho_k(0))\hat{Z}\}\nonumber\\ &=-\dfrac{\partial}{\partial k'}\mathrm{Tr} \{\mathcal{L}_{kk'}\rho_k(0)\},\label{dk'} \end{align} Using \eqref{dk'} on the integration by parts of \eqref{jm_t} with $m=1$, we obtain \begin{equation} \braket{\hat{\mathbf{j}}}_t=-\int\dfrac{\mathrm{d}k}{2\pi}\sum_{n=1}^t \mathrm{Tr} \{\hat{Z}\mathcal{L}^n_{kk}\rho_k(0)\}.\label{j1_t} \end{equation} In the same way for \eqref{jm_t} with $m=2$, \begin{equation} \begin{split} \braket{\hat{\mathbf{j}}^2}_t = -\int\dfrac{\mathrm{d}k}{2\pi}\left\{\sum_{n=1}^t\sum_{n'=1}^n \mathrm{Tr} \left[\hat{Z}\mathcal{L}^{n-n'}_{kk} \left(\hat{Z}\mathcal{L}_{kk}^{n'}\rho_k(0)\right)\right]\right.\qquad\\ +\left.\sum_{n=1}^t\sum_{n'=1}^{n-1} \mathrm{Tr} \left[\hat{Z}\mathcal{L}^{n-n'}_{kk} \left((\mathcal{L}_{kk}^{n'}\rho_k(0))\hat{Z}\right)\right]\right\}.\label{j2_t} \end{split} \end{equation} It is possible to expand the states $\ket{\Phi_k(0)}$ in terms of the eigenstates of $ U_k$, \begin{equation} \ket{\Phi_k(0)}=c_k^+\ket{\Phi_k^+}+c_k^-\ket{\Phi_k^-},\label{phi_k0+-} \end{equation} in such a way that $c_k^{\pm}=\braket{\Phi_k^{\pm}|\Phi_k(0)}$. After inserting the \eqref{phi_k0+-} in \eqref{j1_t}, we obtain the expected value of position, \begin{align} \braket{\hat{\mathbf{j}}}_t &=-\sum_{n=1}^t\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k}{2\pi}\bra{\Phi_k(0)}( U_k)^n\hat{Z}( U_k^\dagger)^n\ket{\Phi_k(0)}\nonumber\\ &=-t\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k}{2\pi}\left\{ |c_k^+|^2\bra{\Phi_k^+}\hat{Z}\ket{\Phi_k^+}+ |c_k^-|^2\bra{\Phi_k^-}\hat{Z}\ket{\Phi_k^-} \right\}+\text{oscillatory terms},\label{j1} \end{align} and by means of \eqref{j2_t}, the expected value of the square of the position, \begin{equation} \begin{split} \braket{\hat{\mathbf{j}}^2}_t =t^2\int_{-\pi}^{\pi}\dfrac{\mathrm{d}k}{2\pi}\left\{ |c_k^+|^2\bra{\Phi_k^+}\hat{Z}\ket{\Phi_k^+}^2+ |c_k^-|^2\bra{\Phi_k^-}\hat{Z}\ket{\Phi_k^-}^2 \right\}\\ +\mathcal{O}(t)+\text{oscillatory terms},\label{j2} \end{split} \end{equation} where oscillatory terms vanish in the limit of $t\rightarrow\infty$. Therefore the expected values can be calculated as, \begin{equation} \braket{\Phi_k^{\pm}|\hat{Z}|\Phi_k^{\pm}}=\dfrac{\pm \cos(k-\delta)\left[ \sqrt{1+\cos^2(k-\delta)}\mp\cos(k-\delta) \right]}{1+\cos(k-\delta)\left[ \sqrt{1+\cos^2 (k-\delta)}\mp\cos(k-\delta)\right]}, \label{expect_value} \end{equation} and also the coefficients, \begin{align} c_k^{\pm}=\ &\dfrac{e^{-ik}}{N_k^{\pm}}\left\{\tilde{a}_k(0)-\tilde{b}_k(0)e^{i\eta}\left[ e^{i\delta}-\sqrt{2}\lambda_k^{\pm}e^{i(k-\delta)}\right]\right\}, \label{c_k} \end{align} where $\tilde{a}_k(0)=\tilde{f}(k)\cos(\alpha/2)$ and $\tilde{b}_k(0)=\tilde{f}(k)e^{i\beta}\sin(\alpha/2)$ are the spin up and down initial amplitudes in the $k$-space respectively. Therefore, to obtain the variance, we should insert these amplitudes in \eqref{c_k}, replacing them in \eqref{j1} together with \eqref{expect_value}. 
After these replacements, we have to solve the remaining integral, \begin{equation} I(\delta)=\int \dfrac{\mathrm{d}k}{2\pi}|\tilde{f}(k)|^2\left\{\dfrac{\cos^2(k-\delta)}{1+\cos^2(k-\delta)}\right\} \label{Integral} \end{equation} to find \begin{equation} \braket{\hat{\mathbf{j}}}_t=I(\delta)\left[\cos\alpha+\sin\alpha\cos(\beta+\theta)\right]t, \label{j_general} \end{equation} for $t\gg 1$, since oscillatory terms are disregarded \cite{brun2003quantum}. This equation shows a dependence on the initial spin state ($\alpha$ and $\beta$) and on the coin parameter $\theta$. Nevertheless, the same does not occur with the square of the position: replacing \eqref{c_k} in \eqref{j2} together with \eqref{expect_value}, we reach \begin{equation} \braket{\hat{\mathbf{j}}^2}_t=I(\delta)t^2. \label{j2_general} \end{equation} Now we can insert both \eqref{j_general} and \eqref{j2_general} in \eqref{variance} to obtain the variance, \begin{equation} \sigma^2(t)=I(\delta)\left\{1-I(\delta)\left[\cos\alpha+\sin\alpha\cos(\beta+\theta)\right]^2\right\}t^2. \label{var_general} \end{equation} In order to calculate the long-time variance for each initial state, we should first write each particular $\tilde{f}(k)$, then replace it in \eqref{Integral} and insert the result in \eqref{var_general}, as shown in the next subsections. \subsection{Local state} The local amplitudes in $k$-space are $\tilde{f}_L(k)=1$, and inserting this in \eqref{Integral} gives $I_L=1-\sqrt{2}/2$. Then, replacing $I_L$ in \eqref{var_general}, we find the long-time variance, \begin{equation} \sigma_L^2(t)=\left\{\left(1-\dfrac{\sqrt{2}}{2}\right)-\left(\dfrac{3}{2}-\sqrt{2}\right)\left[\cos\alpha+\sin\alpha\cos(\beta+\theta)\right]^2\right\}t^2, \label{variance_Local} \end{equation} for an arbitrary initial spin state and coin. At this point, we are able to calculate the average variance by integrating over all spin states, \begin{equation} \braket{\sigma_L^2}(t)=\int_{0}^{\pi}\dfrac{\mathrm{d}\alpha}{\pi}\int_{-\pi}^{\pi} \dfrac{\mathrm{d}\beta}{2\pi}\sigma_L^2(t)=\dfrac{2\sqrt{2}-1}{8}t^2, \label{variance_mean_Local} \end{equation} where the dependence on the initial spin state ($\alpha$ and $\beta$) and on the coin parameter $\theta$ vanishes. \subsection{Gaussian state} In order to obtain the Gaussian amplitudes in $k$-space, we should change from the discrete variable $j$ to the continuous variable $x$ and integrate, \begin{equation} \tilde{f}_G(k)=\int\limits_{-\infty}^{+\infty}\frac{\text{exp}\left(-x^2/(4\sigma_0^2)-ikx \right)}{\left( 2\pi\sigma_0^2 \right)^{\frac{1}{4}}} \mathrm{d}x=\left(8\pi\sigma_0^2\right)^{\frac{1}{4}} e^{-k^2\sigma_0^2}, \label{f_Gauss} \end{equation} since the imaginary part vanishes \cite{orthey2017asymptotic}. After replacing it in \eqref{Integral}, the remaining integral does not have an exact solution; however, an approximate numerical solution gives \begin{equation} I_G(\delta,\sigma_0)=\dfrac{2\sigma_0}{\sqrt{2\pi}}\dfrac{\left(\mu\cos^4\delta+\nu\cos^2\delta+\xi\right)}{1+\cos^2\delta}, \label{I_Gauss} \end{equation} where $\mu,\nu$ and $\xi$ can be fitted by $\sum_{n=0}^4 a_n/\sigma_0^n$, whose parameters $a_n$ are given in Table~\ref{tab:1} of \ref{appendix_fit}.
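As a quick numerical check of these expressions (a sketch of ours; the helper name \texttt{I\_of\_delta} and the quadrature grid are arbitrary choices, not part of the formalism above), the integral \eqref{Integral} can be evaluated directly for the local and Gaussian weights $|\tilde{f}(k)|^2$:
\begin{verbatim}
import numpy as np

def I_of_delta(weight, delta, n=200001):
    """I(delta) = (1/2pi) int_{-pi}^{pi} |f(k)|^2 cos^2(k-delta)/(1+cos^2(k-delta)) dk."""
    k = np.linspace(-np.pi, np.pi, n)
    c2 = np.cos(k - delta) ** 2
    return np.trapz(weight(k) * c2 / (1.0 + c2), k) / (2.0 * np.pi)

local = lambda k: np.ones_like(k)                      # |f_L(k)|^2 = 1
gauss = lambda k, s0=2.0: np.sqrt(8 * np.pi) * s0 * np.exp(-2.0 * (k * s0) ** 2)

print(I_of_delta(local, 0.0))        # 0.2928... = 1 - sqrt(2)/2, independent of delta
print(I_of_delta(gauss, 0.0))        # I_G(0, 2): Hadamard coin, sigma_0 = 2
print(I_of_delta(gauss, np.pi / 2))  # I_G(pi/2, 2): Fourier coin; decreases as sigma_0 grows
\end{verbatim}
Such a direct evaluation reproduces $I_L=1-\sqrt{2}/2$ and can be used to cross-check the fitted form \eqref{I_Gauss}.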
Inserting \eqref{I_Gauss} in \eqref{var_general}, we obtain a variance similar to the local case, \begin{equation} \sigma_G^2(t)=\{1-I_G(\delta,\sigma_0)\left[\cos\alpha+\sin\alpha\cos(\beta+\theta)\right]^2\}I_G(\delta,\sigma_0)t^2, \label{variance_Gauss} \end{equation} except for the dependence on $\delta$ and $\sigma_0$; averaging over all spin states then gives, \begin{equation} \braket{\sigma_G^2}(t)=\left[1-\dfrac{3}{4}I_G(\delta,\sigma_0)\right]I_G(\delta,\sigma_0)t^2. \label{variance_mean_Gauss} \end{equation} However, unlike the local case, we find a dependence on $\delta=(\theta+\phi)/2$ and $\sigma_0$ in \eqref{variance_mean_Gauss}. This result shows that for an initially delocalized state, in particular a Gaussian one, the average variance still depends on the quantum coin used along the walk and on the initial dispersion of the state. It is worth mentioning that the variance $\sigma_G^2(t)$ does not converge to $\sigma_L^2(t)$ as $\sigma_0\rightarrow 0$, which would connect the Gaussian state to the local (delta-like) state. Making this connection correctly would require a renormalization of the state by means of an error function in order to maintain normalization \cite{orthey2017asymptotic}. However, the normalization of Gaussian states with $\sigma_0\geq1$ is preserved within this model. \subsection{Uniform state} Finally, for a uniform state it is easy to conclude that the $k$-space amplitudes have $\tilde{f}_U(k)=\sqrt{2\pi\delta(k)}$, and replacing it in \eqref{Integral} results in, \begin{equation} I_U(\delta)=\dfrac{\cos^2\delta}{1+\cos^2\delta}. \label{I_Uniform} \end{equation} Inserting \eqref{I_Uniform} in \eqref{var_general} gives us the variance, \begin{equation} \sigma_U^2(t)=\dfrac{\cos^2\delta+\cos^4\delta\{1-\left[\cos\alpha+\sin\alpha\cos(\beta+\theta)\right]^2\}}{(1+\cos^2\delta)^2}t^2, \label{variance_Uniform} \end{equation} so a uniform state which evolves through a Fourier coin ($\theta=\phi=\pi/2$, hence $\cos\delta=0$) has a non-spreading behavior. The average variance is \begin{equation} \braket{\sigma_U^2}(t)=\dfrac{4\cos^2\delta+\cos^4\delta}{4(1+\cos^2\delta)^2}t^2, \label{variance_mean_Uniform} \end{equation} which depends on $\delta=(\theta+\phi)/2$, as in the Gaussian case. \subsection{Dispersion velocity} The dispersion velocities of quantum walks starting from a local state driven by Hadamard and Fourier coins are distinguished only by a translation of $\pi/2$ in $\beta$, as shown in Fig. \ref{fig:1} (a) and (b). \begin{figure} \caption{Dispersion velocities $v_\sigma=\mathrm{d}\sigma/\mathrm{d}t$ as functions of the initial spin state for quantum walks starting from local states, (a) and (b), and uniform states, (c) and (d), driven by Hadamard and Fourier coins, respectively.} \label{fig:1} \end{figure} This particular result corroborates the fact that there is no loss of generality in choosing any particular quantum coin when the quantum walks start from a local state \cite{tregenna2003controlling,ambainis2001one}. However, when they start from a uniform state driven by a Hadamard coin, they have a strong dependence on the initial spin state (qubit), while under a Fourier coin they are nondispersive regardless of the initial qubit, as shown in Fig. \ref{fig:1} (c) and (d) respectively. \section{Average spreading} \label{sec:4} Quantum walks are very sensitive to their initial spin states. Therefore, in order to make a fair comparison between distinct position states and to check the average quantities calculated via the analytical approach, we carry out numerical calculations by averaging over a large set of initial spin states.
All averages are taken over $N=2,016$ spin states, varying $(\alpha,\beta)$ from $(0,0)$ to $(\pi,2\pi)$ in independent increments of $0.1$, for quantum walks starting from a local state and a few Gaussian states. \begin{figure} \caption{Total average probability distributions (black) and for each spin component (red and blue) after $1000$ time steps of a Hadamard walk starting from (a) local and Gaussian states with initial dispersion (b) $\sigma_0=1$ and (c) $10$. For the sake of clarity, there is a break region between $j=-600$ and $600$. (d) A detail of the total average probability distributions for local (black) and Gaussian states with $\sigma_0=1$ (blue), $2$ (red) and $3$ (green). For the local case in (a) and (d), only the probabilities at the even points are plotted, since the odd points have null probability.} \label{fig:2} \end{figure} Therefore, the total average probability distribution at each position $j$ at an arbitrary time $t$ is \begin{equation} \braket{|\Psi(j,t)|^2}=\sum_{i=1}^N\frac{|a_i(j,t)|^2}{N}+\sum_{i=1}^N\frac{|b_i(j,t)|^2}{N}, \label{Prob_Mean} \end{equation} where the terms on the right are the average probability distributions of spin up and down, respectively, and the index $i$ labels the distinct initial spin states. In the same way, the average variance can be calculated by, \begin{equation} \braket{\sigma^2}(t)=\sum_{i=1}^N\frac{\sigma^2_i(t)}{N}. \label{sigma} \end{equation} \begin{figure} \caption{Total average probability distributions starting from Gaussian states with initial dispersion (a) $\sigma_0=1$ and (b) $10$ with their initial states (black) at $t=0$ and after $1000$ time steps. These states time-evolve driven by quantum coins with $\theta+\phi=0$ (Hadamard in blue), $\theta+\phi=\pi/2$ (red) and $\theta+\phi=\pi$ (Fourier in green). Average variance of local state (black) and Gaussian states with (c) $\sigma_0=1$ and (d) $10$ obtained from numerical calculations (symbols) and from the expressions~\eqref{variance_mean_Local} and~\eqref{variance_mean_Gauss}.} \label{fig:3} \end{figure} We start our calculations using a Hadamard coin with $q=1/2$ and $\theta=\phi=0$, which creates an unbiased superposition between spin up and down without phase difference between them. Figure \ref{fig:2} shows the average probability distributions over the positions $j$ after $1000$ time steps of quantum walks starting from (a) local and (b)-(c) Gaussian states with initial dispersion $\sigma_0=1$ and $10$ respectively. For all cases, we obtain symmetrical total average distributions with the two spin components on opposite sides. However, the spin-up (spin-down) component has a greater probability on positive (negative) positions. The average probability ratio between spin up and down for positive or negative positions remains approximately steady along the time evolution, and it decays asymptotically with the initial dispersion. For instance, for $j<0$ this ratio is about 33\% for the local state, 21\% and 18\% for Gaussian states with $\sigma_0=1$ and $2$ respectively, and around 17\% for $\sigma_0\geq3$. Figure \ref{fig:2} (d) shows a detail of the total average probability where we can see that, as the Gaussian states delocalize ($\sigma_0$ increases), the probabilities decrease to zero far from the origin, while the local state probability tends to a uniform distribution around $j=0$.
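To make the averaging procedure of \eqref{Prob_Mean} and \eqref{sigma} concrete, a minimal numerical sketch is given below. It uses the textbook Hadamard coin and a local initial state, a much coarser grid of spin states than the $N=2,016$ states used here, and function names of our own choosing, so it only indicates the structure of the calculation rather than reproducing our exact coin parametrization.
\begin{verbatim}
import numpy as np

def walk_variance(alpha, beta, steps=200):
    # Variance of the position distribution after `steps` steps of a Hadamard
    # walk started at j = 0 with spin cos(a/2)|up> + e^{i b} sin(a/2)|down>.
    size = 2 * steps + 1                       # positions j = -steps, ..., steps
    psi = np.zeros((size, 2), dtype=complex)
    psi[steps, 0] = np.cos(alpha / 2)
    psi[steps, 1] = np.exp(1j * beta) * np.sin(alpha / 2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                        # coin acts on the spin index
        psi = np.stack([np.roll(psi[:, 0], 1),     # spin up shifts to j + 1
                        np.roll(psi[:, 1], -1)],   # spin down shifts to j - 1
                       axis=1)
    prob = np.sum(np.abs(psi) ** 2, axis=1)
    j = np.arange(-steps, steps + 1)
    return float(np.sum(prob * j ** 2) - np.sum(prob * j) ** 2)

# Average variance over a coarse grid of initial spin states.
steps = 200
alphas = np.linspace(0.0, np.pi, 15)
betas = np.linspace(-np.pi, np.pi, 16, endpoint=False)
avg = np.mean([walk_variance(a, b, steps) for a in alphas for b in betas])
print(avg / steps ** 2)
\end{verbatim}
For a large number of steps and a fine grid, the printed ratio should approach $(2\sqrt{2}-1)/8\approx 0.2286$, in agreement with \eqref{variance_mean_Local}.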
\begin{figure} \caption{Coefficient $C$ of the average variance $\braket{\sigma^2}(t)=Ct^2$ as a function of $\theta+\phi$ for quantum walks starting from local and Gaussian states, comparing the polynomial fits of the simulation results with the expressions \eqref{variance_mean_Local} and \eqref{variance_mean_Gauss}.} \label{fig:4} \end{figure} The average spreading behavior of quantum walks starting from a local state remains the same for all balanced coins ($q=1/2$), as shown in Figure \ref{fig:2} (a). On the other hand, when the quantum walks start from Gaussian states, they have a strong dependence on the parameters $\theta$ and $\phi$, particularly on the sum $\theta+\phi$, and also on the initial dispersion. In order to check the dependence on the initial dispersion and on the quantum coin, we also carried out numerical simulations of quantum walks starting from Gaussian states driven by balanced coins ranging from the Hadamard coin with $\theta+\phi=0$ up to the Fourier (or Kempe) coin with $\theta+\phi=\pi$ \cite{kempe2003quantum}. Figure \ref{fig:3} shows the total average probability of quantum walks starting from Gaussian states with initial dispersion (a) $\sigma_0=1$ and (b) $10$ and their average variances in (c) and (d) respectively. All cases display a quadratic behavior in time, i.e., $\braket{\sigma^2}(t)=Ct^2$. However, for a large initial dispersion, in particular for $\sigma_0=10$, the coefficient $C$ is very close to zero for $\theta+\phi=\pi$, which suggests a non-spreading behavior for $\sigma_0\gg 1$, in agreement with \eqref{variance_mean_Uniform}. Figure \ref{fig:4} shows how the coefficient $C$ varies for distinct values of $\theta+\phi$ and displays a comparison between the polynomial curve fits obtained from the simulations and the respective models for local and a few Gaussian states given by \eqref{variance_mean_Local} and \eqref{variance_mean_Gauss}. \section{Conclusions} \label{sec:5} In this paper, we studied the spreading behavior of quantum walks through the Brun-type formalism \cite{brun2003quantum} and numerical calculations. We obtained closed-form expressions for the long-time variance of the position of quantum walks starting from any spin state (qubit) and from local, Gaussian and uniform position states. We calculated the average variance analytically, and we carried out extensive numerical calculations of the average variance and of the probability distribution profiles by averaging over a large ensemble of initial spin states. From both perspectives, we found that the average variance of a quantum walk starting from a local state is always the same regardless of the quantum coin, while walks starting from Gaussian and uniform states have a strong dependence on the quantum coin parameters, being non-dispersive for $\theta+\phi=\pi$ (Fourier walk) and $\sigma_0\gg 1$. We hope our findings can be tested on different experimental platforms \cite{wang2013physical}. In particular, it is important to notice that the external degree of freedom could be the $z$ component of the orbital angular momentum instead of the position $j$. In this context, the experiments based on the manipulation of the orbital angular momentum of photons from a single light beam without refraction or reflections \cite{zhang2010implementation,goyal2013implementing,cardano2015quantum} seem to be promising for implementing delocalized states. Finally, it is worth mentioning that the resemblance between the asymptotic entanglement in quantum walks \cite{orthey2017asymptotic} and their long-time spreading behavior as a function of their initial spin states suggests a relation between them, which might be a subject for further study.
\section{Fitted parameters for the variance of Gaussian states}\label{appendix_fit} The values of $\mu$, $\nu$ and $\xi$ in the function $I_G(\delta,\sigma_0)$ of \eqref{I_Gauss} follow the model $\sum_{n=0}^{4}a_n/\sigma_0^n$, whose fitted parameters $a_n$ are listed in Table~\ref{tab:1}. \begin{table}[ht] \centering \begin{tabular}{lccc} \hline & $\mu$ & $\nu$ & $\xi$ \\ \hline $a_0$ & 0.0022$\pm$0.0004 & -0.0020$\pm$0.0005 & 0.0002$\pm$0.0001 \\ $a_1$ & -0.0492$\pm$0.0077 & 1.2995$\pm$0.0085 & -0.0053$\pm$0.0020 \\ $a_2$ & 0.2938$\pm$0.0361 & -0.2668$\pm$0.0400 & 0.0296$\pm$0.0095 \\ $a_3$ & 0.5030$\pm$0.0596 & -1.0016$\pm$0.0661 & 0.2548$\pm$0.0157 \\ $a_4$ & -0.4612$\pm$0.0312 & 0.5991$\pm$0.0346 & -0.1049$\pm$0.0082 \\ \hline \end{tabular} \caption{Fitted parameters and standard errors of $\mu$, $\nu$ and $\xi$.}\label{tab:1} \end{table} \end{document}
\begin{document} \title{The asymptotics of group Russian roulette} \author{Tim van de Brug} \author{Wouter Kager} \author{Ronald Meester} \date{\today} \address{Vrije Universiteit\\ Faculty of Sciences\\ Department of Mathematics\\ De Boelelaan 1081a\\ 1081\,HV Amsterdam\\ The Netherlands} \email{\char`{t.vande.brug,w.kager,r.w.j.meester\char`}\,@\,vu.nl} \begin{abstract} We study the group Russian roulette problem, also known as the shooting problem, defined as follows. We have $n$~armed people in a room. At each chime of a clock, everyone shoots a random other person. The persons shot fall dead and the survivors shoot again at the next chime. Eventually, either everyone is dead or there is a single survivor. We prove that the probability~$p_n$ of having no survivors does not converge as $n\to \infty$, and becomes asymptotically periodic and continuous on the $\log{n}$~scale, with period~1. \end{abstract} \keywords{Group Russian roulette, shooting problem, non-convergence, coupling, asymptotic periodicity and continuity} \subjclass[2010]{Primary 60J10; secondary 60F99} \maketitle \section{Introduction and main result} \label{sec:introduction} In~\cite{Winkler}, Peter Winkler describes the following probability puzzle, called group Russian roulette, and also known as the shooting problem. We start at time $t=0$ with $n$~people in a room, all carrying a gun. At time $t=1$, all people in the room shoot a randomly chosen person in the room; it is possible that two people shoot each other, but no one can shoot him- or herself. We assume that every shot instantly kills the person shot at. After this first shooting round, a random number of people have survived, and at time $t=2$ we repeat the procedure with all survivors. Continuing like this, eventually we will reach a state with either no survivors, or exactly one survivor. Denote by~$p_n$ the probability that eventually there are no survivors. We are interested in the behavior of~$p_n$ as $n\to \infty$. Observe that the probability that a given person survives the first shooting round is $( 1-(n-1)^{-1} )^{n-1} \approx 1/e$, so that the expected number of survivors of the first round is approximately~$n/e$. This fact motivates us to plot $p_n$ against~$\log{n}$, see Figure~\ref{figure1} below. Figure~\ref{figure1} suggests that $p_n$ does not converge as $n\to \infty$, and becomes asymptotically periodic on the~$\log{n}$ scale, with period~1. This turns out to be correct, and is perhaps surprising. One may have anticipated that, as $n$ gets very large, the fluctuations at every round will somehow make the process forget its starting point, but this is not the case. Indeed, here we prove the following: \begin{theorem} \label{thm:main} There exists a continuous, periodic function $f\colon \mathds{R}\to [0,1]$ of period~1, satisfying $\sup f\geq 0.515428$ and $\inf f\leq 0.477449$, such that \[ \sup_{x\geq x_0} \, \abs[\big]{ p_{\floor{\exp x}} - f(x) } \to 0 \qquad \text{as $x_0\to\infty$}. \] \end{theorem} \begin{figure} \caption{$p_n$ as a function of~$\log{n} \label{figure1} \end{figure} The solution to the group Russian roulette problem as it is stated in Theorem~\ref{thm:main} was already stated in~\cite{Winkler}, without the explicit bounds on the limit function. However, \cite{Winkler} does not provide a proof, and as far as we know, there is no proof in the literature. A number of papers \cites{LiPom, AthFid, RadGriLos, EisSteStr, Pro, BruOci, Kin, BraSteWil} study the following related problem and generalizations thereof. 
Suppose we have $n$~coins, each of which lands heads up with probability~$p$. Flip all the coins independently and throw out the coins that show heads. Repeat the procedure with the remaining coins until 0 or~1 coins are left. The probability of ending with 0~coins does not converge as $n\to \infty$ and becomes asymptotically periodic and continuous on the $\log n$ scale \cites{EisSteStr,RadGriLos}. For $p=1- 1/e$, the limit function takes values between 0.365879 and 0.369880, see~\cite{EisSteStr}*{Corollary~2}. The coin tossing problem for $p=1-1/e$ has some similarities with group Russian roulette. In view of Theorem~\ref{thm:main} and the results in \cites{EisSteStr,RadGriLos}, the asymptotic behavior of these two models is qualitatively similar but their limit functions have different average values and amplitudes. In the above-mentioned papers, explicit expressions for the probability of ending with no coins could be obtained because of the independence between coin tosses. Analytic methods were subsequently employed to evaluate the limit. This strategy does not seem applicable to the group Russian roulette problem for the simple reason that no closed-form expressions can be obtained for the relevant probabilities. Our approach is, therefore, very different, and we end this introduction with an overview of our strategy. We recursively compute rigorous upper and lower bounds on~$p_n$ for $n=1,\dots,6000$, using \textit{Mathematica}. Based on these computations, we identify values of~$n$ where $p_n$ is high (the ``hills'') and values of~$n$ where $p_n$ is low (the ``valleys''). To prove the non-convergence of the~$p_n$, we explicitly construct intervals $H_k$ and~$V_k$ ($k=0,1,\dotsc$) in such a way that, if $n\in H_k$ for some~$k$, then with high probability uniformly in~$k$, the number of survivors in the shooting process starting with $n$~people will, during the first~$k$ shooting rounds, visit each of the intervals $H_{k-1}, H_{k-2}, \dots, H_0$ (in that order), and similarly for the~$V_k$. By our rigorous bounds on~$p_n$ we know that $H_0$ is a hill and~$V_0$ a valley. This implies that the values of~$p_n$ on the respective intervals $H_k$ and~$V_k$ are separated from each other, uniformly in~$k$. We stress that, although we make use of \textit{Mathematica}, our proof of Theorem~\ref{thm:main} is completely rigorous. There are no computer simulation methods involved, and we use only integer calculations to avoid rounding errors. To make this point clear, we isolated the part of the proof where we use \textit{Mathematica}\ as a separate lemma, Lemma~\ref{lem:mathematica}. In the proof of this lemma we explain how we compute the rigorous bounds we need. Our \textit{Mathematica}\ notebook and bounds on the~$p_n$ are available online at \url{http://arxiv.org/format/1507.03805}. A generic bound on the probability that the number of survivors after each round successively visits the intervals in a carefully constructed sequence, appears in Section~\ref{sec:intervals} below. To obtain a good bound on this probability, we make crucial use of a coupling, introduced in Section~\ref{sec:coupling}, which allows us to compare the random number of survivors of a single shooting round with the number of empty boxes remaining after randomly throwing balls into boxes. For this latter random variable reasonably good tail bounds are readily available, and we provide such a bound in Section~\ref{sec:tails}. 
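Although no simulation enters any of the arguments below, a quick Monte Carlo sketch of the shooting process may help to visualize the phenomenon behind Theorem~\ref{thm:main}. The code below (ours, purely illustrative; the function names, the number of trials and the choice of starting values are arbitrary) estimates $p_n$ at a few points spread over roughly one period on the $\log n$ scale; up to Monte Carlo error, the estimates should vary by a few percent rather than settle down to a single value.
\begin{verbatim}
import random

def shooting_round(n):
    # Every one of the n people shoots a uniformly random other person;
    # return the number of survivors.
    shot = set()
    for i in range(n):
        j = random.randrange(n - 1)
        shot.add(j if j < i else j + 1)   # person i cannot shoot him- or herself
    return n - len(shot)

def estimate_p(n, trials=4000):
    # Monte Carlo estimate of p_n, the probability of ending with no survivors.
    no_survivors = 0
    for _ in range(trials):
        m = n
        while m > 1:
            m = shooting_round(m)
        no_survivors += (m == 0)
    return no_survivors / trials

# Points spaced over roughly one period on the log n scale.
for n in (2000, 2443, 2984, 3644, 4451, 5437):
    print(n, estimate_p(n))
\end{verbatim}
None of this is used in what follows; all probabilities entering the proofs are bounded by exact integer computations.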
The coupling is also crucial in proving the asymptotic continuity and periodicity of the~$p_n$ on the $\log n$ scale. To prove continuity, we consider what happens if we start the shooting process from two different points in the same interval, using for every round an independent copy of the coupled numbers of survivors for each point. By carefully analyzing the properties of our coupling, we will show that we can make the two coupled processes collide with arbitrarily high probability before reaching 0 or~1, by making the intervals sufficiently narrow on the $\log n$ scale, and taking the interval we start from far enough to the right. This shows that for our two starting points, the probabilities of eventually having no survivors must be very close to each other. Periodicity follows because our argument also applies when we start from two points that lie in different intervals, and the distance between the intervals in our construction is~1 on the $\log n$ scale. The proof of non-convergence of the~$p_n$, based on the coupling and tail bounds from Section~\ref{sec:prelims}, is in Section~\ref{sec:nonconvergence}. The proof of asymptotic periodicity and continuity follows in Section~\ref{sec:periodicity}. Together, these results give Theorem~\ref{thm:main}. \section{Coupling and tail bounds} \label{sec:prelims} \subsection{Coupling and comparison} \label{sec:coupling} Let $S_n$ be the number of survivors after one round of the shooting process starting with $n$~people. Using inclusion-exclusion, the distribution of~$S_n$ can be written down explicitly: \begin{multline} \label{Sn-incl-excl} \P(S_n=k) = \binom{n}{k} (n-1)^{-n} \times \\ \sum_{r=0}^{n-k-2}\binom{n-k}{r} (-1)^r(n-k-r)^{k+r} (n-k-r-1)^{n-k-r}. \end{multline} We use this formula in Section~\ref{sec:nonconvergence}, but not in the rest of our analysis. Instead, let $Y_n$ be a random variable that counts the number of boxes that remain empty after randomly throwing $n-1$ balls into $n-1$ (initially empty) boxes. Similarly, let~$Z_n$ be the result of adding~1 to the number of boxes that remain empty after randomly throwing $n$~balls into $n-1$ boxes. It turns out that these random variables $Y_n$ and~$Z_n$ are very close in distribution to~$S_n$, and are more convenient to work with. In this section we describe a coupling between the $S_n$, $Y_n$ and~$Z_n$, for all $n\geq2$ simultaneously, in which (almost surely) $S_n$, $Y_n$ and~$Z_n$ are within distance~1 from each other for all~$n$, and the $Y_n$ and~$Z_n$ are ordered in~$n$ (see Lemma~\ref{lem:coupling} below). This last fact has the useful implication that in the shooting problem, if the number~$n$ of people alive in the room is known to be in an interval $[a, b]$, then the probability that the number of survivors of the next shooting round will lie in some other interval $[\alpha, \beta]$ can be estimated by considering only the two extreme cases $n=a$ and~$n=b$ (see Corollary~\ref{cor:coupling} below). At the end of the section, we extend our coupling to a coupling we can use to study shooting processes with multiple shooting rounds. To describe our coupling, we construct a Markov chain as follows. Number the people $1, 2, \dots, n$ and define $A^n_i \subset \{1,\dots,n\}$ as the set of people who are not shot by any of the persons~1 up to~$i$ (inclusive). In this formulation, $S^n_i := \card{A^n_i}$ represents the number of survivors if only persons~1 up to~$i$ shoot, and we can write \[ S_n = S_n^n = \card{A^n_n}. 
\] The sets~$A^n_i$ ($i=1,\dots,n$) form a Markov chain inducing the process~$(S^n_i)_i$ with transition probabilities given by \begin{equation} \begin{split} \P(S^n_{i+1} = S^n_i - 1 \mid A^n_i) &= 1- \P(S^n_{i+1} = S^n_i \mid A^n_i) \\ &= \frac{S^n_i - \I(i+1\in A^n_i)}{n-1}. \label{Mprocess1} \end{split} \end{equation} Indeed, when person~$i+1$ selects his target, the number of persons who will survive the shooting round decreases by~1 precisely when person~$i+1$ aims at someone who has not already been targeted by any of the persons~1 up to~$i$, where we must take into account that person~$i+1$ cannot shoot himself (hence the subtraction of $\I(i+1\in A^n_i)$ in the numerator). An explicit construction of the process described above can be given as follows. Suppose that on some probability space, we have random variables $U_1, U_2, \dotsc$ uniformly distributed on $(0,1]$, and, for all finite subsets~$A$ of~$\mathds{N}$ and all $i\in\mathds{N}$, random variables~$V_{A,i}$ uniformly distributed on the set $A\setminus\{i\}$, all independent of each other. Now fix $n\geq2$. Set $S^n_0 := n$ and $A^n_0 := \{1,\dots,n\}$, and for $i=0,1,\dots,n-1$, recursively define \[ A^n_{i+1} := \begin{cases} A^n_i \setminus \{V_{A^n_i,i+1}\} & \text{if } {\displaystyle U_{n-i} \leq \frac{\card{A^n_i} - \I(i+1\in A^n_i)}{n-1}}; \\ A^n_i & \text{otherwise}; \end{cases} \] and set $S^n_{i+1} := \card{A^n_{i+1}}$. In this construction, the variable $U_{n-i}$ is used first to decide whether person~$i+1$ aims at someone who will not be shot by any of the persons~1 up to~$i$, and then we use~$V_{A^n_i,i+1}$ to determine his victim. Clearly, this yields a process with the desired distribution, and provides a coupling of the processes~$(S^n_i)_i$ for all $n\geq2$ simultaneously. We now extend this coupling to include new processes $(Y^n_i)_i$ and~$(Z^n_i)_i$, as follows. For fixed $n\geq2$, we first set $Y^n_0 := n$ and $Z^n_0 := n$, and then for $i=0,1,\dotsc,n-1$ we recursively define \begin{align} \label{Yprocess} Y^n_{i+1} &:= \begin{cases} Y^n_i-1 & \text{if }{\displaystyle U_{n-i}\leq\frac{Y^n_i}{n-1}};\\ Y^n_i & \text{otherwise}; \end{cases} \\ \intertext{and} \label{Zprocess} Z^n_{i+1} &:= \begin{cases} Z^n_i-1 & \text{if }{\displaystyle U_{n-i}\leq\frac{Z^n_i-1}{n-1}};\\ Z^n_i & \text{otherwise}. \end{cases} \end{align} Then, by construction, $(Y^n_i)_i$ and $(Z^n_i)_i$ are Markov chains with the respective transition probabilities \begin{align} \P(Y^n_{i+1} = Y^n_i-1 \mid Y^n_i) &= 1-\P(Y^n_{i+1} = Y^n_i \mid Y^n_i) = \frac{Y^n_i}{n-1}; \label{Mprocess2} \\ \P(Z^n_{i+1} = Z^n_i-1 \mid Z^n_i) &= 1-\P(Z^n_{i+1} = Z^n_i \mid Z^n_i) = \frac{Z^n_i-1}{n-1}. \label{Mprocess3} \end{align} The similarity with~\eqref{Mprocess1} is clear, and we see that we can interpret $Y^n_i$ as the number of empty boxes after throwing $i$~balls into $n$~boxes, where the first ball is thrown into the $n$th box and the remaining balls are thrown randomly into the first $n-1$ boxes only. Likewise, $Z^n_i$ is the number of empty boxes after throwing $i$~balls into the first $n-1$ of a total of $n$~boxes (so that the $n$th box remains empty throughout the process). If we now set \[ S_n := S^n_n, \quad Y_n := Y^n_n \text{ and } Z_n := Z^n_n, \] then $S_n$, $Y_n$ and~$Z_n$ have the interpretations described at the beginning of this section. 
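The construction above is easy to implement directly. The following sketch (ours, for illustration only) draws the shared uniforms $U_1,U_2,\dotsc$ once and builds $S_n$, $Y_n$ and $Z_n$ from them for a range of~$n$, so that the ordering properties stated in the next lemma can be checked empirically; the victim choices $V_{A,i}$ are drawn freshly, since they do not influence those properties.
\begin{verbatim}
import random

def coupled_round(n, U, rng):
    # One coupled shooting round started from n people, using the shared
    # uniforms U[1..]; returns (S_n, Y_n, Z_n) as in the construction above.
    A = set(range(1, n + 1))          # people not yet targeted (S-process)
    Y = Z = n
    for i in range(n):                # person i + 1 shoots, driven by U[n - i]
        u = U[n - i]
        if u <= (len(A) - ((i + 1) in A)) / (n - 1):
            A.remove(rng.choice(sorted(A - {i + 1})))   # victim V_{A, i+1}
        if u <= Y / (n - 1):
            Y -= 1
        if u <= (Z - 1) / (n - 1):
            Z -= 1
    return len(A), Y, Z

rng = random.Random(1)
N = 60
U = [None] + [1.0 - rng.random() for _ in range(N)]   # uniform on (0, 1]
prev = None
for n in range(3, N + 1):
    S, Y, Z = coupled_round(n, U, rng)
    assert Y <= S <= Z <= Y + 1                        # property (2)
    if prev is not None:
        assert prev[0] <= Y <= prev[0] + 1             # property (1) for Y_n
        assert prev[1] <= Z <= prev[1] + 1             # property (1) for Z_n
    prev = (Y, Z)
print("ordering properties hold for n = 3, ...,", N)
\end{verbatim}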
The next lemma shows they have the properties we mentioned: \begin{lemma} \label{lem:coupling} The coupling of the $S_n$, $Y_n$ and~$Z_n$ described above satisfies \begin{enumerate} \item $Y_n\leq Y_{n+1}\leq Y_n+1$ and $Z_n\leq Z_{n+1} \leq Z_n+1$ for all $n\geq2$; \item $Y_n\leq S_n\leq Z_n\leq Y_n+1$ for all $n\geq2$. \end{enumerate} \end{lemma} \begin{proof} As for~(1), we claim that the~$Y^n_i$ satisfy the stronger statement that \begin{equation} Y^n_i \leq Y^{n+1}_{i+1} \leq Y^n_i+1 \qquad \text{for all $n\geq2$ and $i=0,1,\dots,n$}. \label{ordering1} \end{equation} To see this, first note that necessarily, $Y^{n+1}_1 = n = Y^n_0$. Now suppose that $Y^n_i = Y^{n+1}_{i+1}$ for some index~$i$. Then \eqref{Yprocess} implies that if $Y^{n+1}_{i+2} = Y^{n+1}_{i+1} - 1$, we also have $Y^n_{i+1} = Y^n_i - 1$. Hence the ordering is preserved, proving that $Y^n_i \leq Y^{n+1}_{i+1}$ for all $i\leq n$. Likewise, if $Y^{n+1}_{i+1} = Y^n_i+1$ and $Y^n_{i+1} = Y^n_i - 1$ for some~$i$, then \eqref{Yprocess} implies that $Y^{n+1}_{i+2} = Y^{n+1}_{i+1} - 1$. This proves~\eqref{ordering1} and hence~(1) for the~$Y_n$. The proof for the random variables~$Z_n$ is similar. As for property~(2), observe that if $Y^n_i = Z^n_i$ for some index~$i$ and $Z^n_{i+1} = Z^n_i - 1$, then also $Y^n_{i+1} = Y^n_i - 1$. On the other hand, if $Y^n_i = Z^n_i - 1$ for some index~$i$, then it follows from the construction that $Y^n_j = Z^n_j - 1$ for all $j=i,i+1,\dots,n$. Since $Y^n_0 = Z^n_0 = n$, we conclude that \begin{equation} Y^n_i \leq Z^n_i \leq Y^n_i+1 \qquad \text{for all $n\geq2$ and $i=0,1,\dots,n$}. \label{ordering2} \end{equation} Furthermore, if $Y^n_i = S^n_i$ and $S^n_{i+1} = S^n_i - 1$, then our construction implies that $Y^n_{i+1} = Y^n_i - 1$. Similarly, if $S^n_i = Z^n_i$ and $Z^n_{i+1} = Z^n_i - 1$, then in our coupling we also have that $S^n_{i+1} = S^n_i - 1$. It follows that \[ Y^n_i \leq S^n_i \leq Z^n_i \qquad \text{for all $n\geq2$ and $i=0,1,\dots,n$}, \] and this together with~\eqref{ordering2} establish property~(2). \end{proof} \begin{corollary} \label{cor:coupling} Suppose we have coupled the~$S_n$ as described above. Then, for any intervals $[a,b]$ and $[\alpha,\beta]$, with $a,b,\alpha,\beta$ integers, \[ \P(\exists n\in [a,b]\colon S_n\notin [\alpha,\beta]) \leq \P(Y_a \leq \alpha-1) + \P(Y_b \geq \beta). \] \end{corollary} \begin{proof} Let the $S_n$ and~$Y_n$ be coupled as described above. By Lemma~\ref{lem:coupling}, \[ \begin{split} \P(\forall n \in [a,b]\colon S_n\in [\alpha,\beta]) &\geq \P(\forall n \in [a,b]\colon Y_n \in [\alpha,\beta -1 ]) \\ &= \P(Y_a \geq \alpha, Y_b \leq \beta -1). \end{split} \] By taking complements the desired result follows. \end{proof} \begin{remark} The distribution of $Y_i^n$ is related to Stirling numbers of the second kind, as follows. Recall that $Y_{i+1}^{n+1}$ can be interpreted as the number of empty boxes after throwing $i$~balls randomly into $n$~boxes. We claim that \[\begin{split} \P(Y_{i+1}^{n+1} = n-k) &= \P(\mbox{$n-k$ boxes empty, $k$ boxes non-empty}) \\ &= \frac{n!}{(n-k)!} \frac1{n^i} S(i,k), \end{split}\] with $S(i,k)$ a Stirling number of the second kind. Indeed, $S(i,k)$ is by definition the number of ways of partitioning the set of $i$~balls into $k$ non-empty subsets. Balls in the same subset are thrown into the same box. The number of ways to assign these subsets to $k$ distinct boxes equals $n!/(n-k)!$. Finally, $n^i$ is the number of ways of distributing $i$~balls over $n$~boxes. 
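For instance, for $i=n=2$ the formula gives $\P(Y_3^3 = 1) = \frac{2!}{1!}\,\frac{S(2,1)}{2^2} = \frac12$, in agreement with the direct observation that one of two boxes stays empty precisely when both balls land in the same box.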
\end{remark} We now extend our coupling to a coupling we can use for an arbitrary number of shooting rounds, and for shooting processes starting from different values of~$n$. Since the shooting rounds must be independent, we take an infinite number of independent copies of the coupling described above, one for each element of~$\mathds{Z}$ (so including the negative integers). The idea is to use a different copy for each round of a shooting process. For reasons that will become clear, we want to allow the copy that is used for the first round to vary with the starting point~$n$. To be precise, let $X^n_i$ represent the number of survivors after round~$i$ of a shooting process started with~$n$ people in the room. Let~$k_n$ be the number of the copy of our coupling that is to be used for the first round of this process, and denote the $i$-th copy of~$S_n$ by~$S_n^{(i)}$. We recursively define \begin{equation} \label{Xni} X^n_0 := n, \qquad X^n_{i+1} := S^{(k_n-i)}_{X^n_i} \text{ for $i\geq0$}. \end{equation} In this way, the $(k_n-i)$-th copy of the~$S_n$ is used to determine what happens in round~$i+1$ of the process. Note that the index $k_n-i$ becomes negative when $i>k_n$. Our setup is such that if $k_m = k_n$, then the shooting processes started from $m$ and~$n$ are coupled from the first shooting round onward, but if $k_m = k_n + l$ with $l>0$, then the shooting process started from~$m$ will first undergo $l$~independent shooting rounds before it becomes coupled with the shooting process started from~$n$. Thus, by varying the~$k_n$, we can choose after how many rounds shooting processes with different starting points become coupled. \subsection{Tail bounds} \label{sec:tails} In Corollary~\ref{cor:coupling} we have given a bound on the probability that the shooting process, starting at any point in some interval, visits another interval after one shooting round. This bound is in terms of the tails of the distribution of the random variables~$Y_n$. In this section, we show that $Y_n$ in fact has the same distribution as a sum of independent Bernoulli random variables, and we use this result to obtain tail bounds for~$Y_n$. \begin{lemma} \label{lem:sumofind.} For every $n\geq 3$, there exist $n-2$ independent Bernoulli random variables $W_1,\dots,W_{n-2}$ such that $Y_n$ has the same distribution as $W_1 + W_2 + \dots + W_{n-2}$. \end{lemma} \begin{proof} The proof is based on the following beautiful idea due to Vatutin and Mikha{\u\i}lov~\cite{VatutinMikhailov}: we will show that the generating function of~$Y_n$ has only real roots, and then show that this implies the statement of the lemma. For the first step we observe that $\tbinom{Y_n}{k}$ is just the number of subsets of size~$k$ of the boxes that remain empty after throwing $n-1$~balls into $n-1$~boxes. This implies that \[\begin{split} \E\binom{Y_n}{k} &= \E \sum_{1\leq i_1 < \cdots < i_k \leq n-1} \I(\mbox{boxes $i_1,\dots,i_k$ empty}) \\ &= \binom{n-1}{k} \P(\mbox{boxes $1,\dots, k$ empty}) = \binom{n-1}{k}\left( \frac{n-k-1}{n-1} \right)^{n-1}. \end{split}\] Hence, if we define \[ R(z) = \sum_{k=0}^{n-1} \binom{n-1}{k} (n-k-1)^{n-1} z^k, \] then we see that \[ \E\left( z^{Y_n} \right) = \E\biggl( \, \sum_{k=0}^{n-1} \binom{Y_n}{k} (z-1)^k \, \biggr) =(n-1)^{-(n-1)} R(z-1). \] We want to show that $R$ has only real roots, for which it is enough to show that $z^{n-1}R(1/z)$ has only real roots. 
To show this, we now write \[\begin{split} z^{n-1}R(1/z) &= \sum_{k=0}^{n-1} \binom{n-1}{k} (n-k-1)^{n-1} z^{n-k-1} \\ &= \sum_{k=0}^{n-1} \binom{n-1}{k} \left(z \frac{d}{dz}\right)^{\!n-1} z^{n-k-1}, \end{split}\] from which it follows that \[ z^{n-1}R(1/z) = \left(z \frac{d}{dz}\right)^{\!n-1} \: \sum_{k=0}^{n-1} \binom{n-1}{k} z^{n-k-1} = \left(z \frac{d}{dz}\right)^{\!n-1} (z+1)^{n-1}. \] Now observe that if a polynomial $f(z)$ has only real roots, then so do the polynomials $zf(z)$ and~$f'(z)$ (one way to see the latter is to observe that between any two consecutive zeroes of~$f$, there must be a local maximum or minimum). Therefore, our last expression for $z^{n-1} R(1/z)$ above has only real roots and hence so does~$R$. It follows that the generating function $\E(z^{Y_n})$ of~$Y_n$ has only real roots. Now note that $\E(z^{Y_n})$ is a polynomial of degree $n-2$ which cannot have positive roots. Let its roots be $-d_1, -d_2, \dots, -d_{n-2}$, with all the $d_i\geq 0$, and let $W_1,\dots,W_{n-2}$ be independent Bernoulli random variables such that \[ \P(W_i=1) = 1-\P(W_i=0) = \frac{1}{1+d_i}, \qquad i=1,\dots,n-2. \] Note that these are properly defined random variables because of the fact that $d_i\geq 0$ for all~$i$. Writing $W = W_1+\dots+W_{n-2}$, we now have \[ \E\left( z^W \right) = \prod_{i=1}^{n-2} \E\left( z^{W_i} \right) = \prod_{i=1}^{n-2} \frac{z+d_i}{1+d_i} = \E\left( z^{Y_n} \right), \] so $Y_n$ and~$W$ have the same distribution. \end{proof} Tail bounds for sums of independent Bernoulli random variables are generally derived from a fundamental bound due to Chernoff~\cite{Chernoff} by means of calculus, see e.g.~\cite{AlonSpencer}*{Appendix~A}. Here we use the following result: \begin{theorem} \label{thm:tails} Let $W$ be the sum of~$n$ independent Bernoulli random variables, and let $p = \E W/n$. Then for all $u \geq 0$ we have \begin{align} \P(W\leq \E W - u) &\leq \exp\left( -\frac12 \frac{u^2}{np(1-p)-u(1-2p)/3} \right), \label{jansonineq1} \\ \intertext{and} \P(W\geq \E W + u) &\leq \exp\left( -\frac12 \frac{u^2}{np(1-p)+u(1-2p)/3} \right). \label{jansonineq2} \end{align} \end{theorem} \begin{proof} This theorem has been proved by Janson, see \cite{Janson}*{Theorems 1 and~2}. For the convenience of the reader we outline the main steps of the proof of inequality~\eqref{jansonineq2} here. Inequality~\eqref{jansonineq1} follows by symmetry. By \cite{AlonSpencer}*{Theorem~A.1.9} we have for all $\lambda>0$, \begin{equation} \label{A19} \P(W \geq \E W+u) < e^{-\lambda pn} (pe^{\lambda} + (1-p))^n e^{-\lambda u}. \end{equation} By the remark following \cite{AlonSpencer}*{Theorem~A.1.9}, for given $p,n,u$, the value of~$\lambda$ that minimizes the right hand side of inequality~\eqref{A19} is \begin{equation} \label{optimallambda} \lambda = \log \biggl[ \left( \frac{1-p}{p} \right) \left( \frac{u +np}{n-(u+np)} \right) \biggr]. \end{equation} In \cite{AlonSpencer} suboptimal values of~$\lambda$ are substituted into~\eqref{A19} to obtain bounds. Substituting the optimal value~\eqref{optimallambda} into~\eqref{A19}, and letting $q=1-p$ and $x = u/n \in [0,q]$, yields \cite{Janson}*{Inequality~(2.1)}: \[ \P(W \geq \E W +u) \leq \exp\biggl( -n (p+x) \log \frac{p+x}{p} - n (q-x) \log \frac{q-x}{q} \biggr). \] Following~\cite{Janson} we define, for $0\leq x\leq q$, \[ f(x) = (p+x) \log \frac{p+x}{p} + (q-x) \log \frac{q-x}{q} - \frac{x^2}{2(pq+x(q-p)/3)}. 
\] Then $f(0)=f'(0) =0$ and \[ f''(x) = \frac{\frac{1}{3} pq (q-p)^2 x^2 + \frac{1}{27} (q-p)^3 x^3 + p^2 q^2 x^2}{(x+p)(q-x)(pq+x(q-p)/3)^3} \geq 0, \] for $0\leq x\leq q$. Hence $f(x)\geq 0$ for $0\leq x\leq q$, which proves~\eqref{jansonineq2}. \end{proof} \begin{corollary} \label{cor:tailY} Let $n\geq 4$. Then for all $u\geq 0$, \begin{align*} \P\left( Y_n \leq \frac{n-5/3}{e} -u \right) &\leq \exp\left( -\frac{1}{2} \frac{e^2 u^2}{(n-1)(e-1)} \right) \\ \intertext{and} \P\left( Y_n \geq \frac{n-3/2}{e} +u \right) &\leq \exp\left( -\frac{1}{2} \frac{e^2 u^2}{(n-1)(e-1) + ue(e-2)/3} \right). \end{align*} \end{corollary} \begin{proof} Recall that $Y_n$ can be interpreted as the number of empty boxes after randomly throwing $n-1$ balls into $n-1$ boxes. Thus we have \begin{equation} \label{expectationY} \E Y_n = (n-1) \left( 1-\frac{1}{n-1} \right)^{n-1}. \end{equation} We will bound this expectation using the following two inequalities, which hold for all $u\in(0,1)$: \begin{align} (1-u)^{1/u} &\leq (1-\textstyle\frac12 u) / e, \label{inequalityu1}\\ (1-u)^{1/u} &\geq (1-\textstyle\frac12 u - \frac12 u^2)/e. \label{inequalityu2} \end{align} To prove these inequalities, we define \begin{align*} h_1(u) &= u + \log (1-u) - u \log (1-\tfrac12 u), \\ h_2(u) &= u + \log(1-u) - u \log(1-\tfrac12 u -\tfrac12 u^2). \end{align*} Then $h_1(0)=h_2(0) =h'_1(0) = h'_2(0) =0$ and moreover \[ h_1''(u) = - \frac{u(5-5u+u^2)}{(1-u)^2(2-u)^2},\qquad h_2''(u) = \frac{u(7+2u)}{(1-u)(2+u)^2}. \] Hence $h_1''(u) < 0$ and $h_2''(u)>0$ for $u\in (0,1)$. Therefore, $h_1(u)<0$ and $h_2(u)>0$ for all $u\in (0,1)$, which implies \eqref{inequalityu1} and~\eqref{inequalityu2}. By \eqref{expectationY} and~\eqref{inequalityu1}, we have that \begin{equation} \label{EYupper} \E Y_n \leq \frac{n-3/2}{e}. \end{equation} Similarly, using \eqref{expectationY} and~\eqref{inequalityu2}, we obtain that \begin{equation} \label{EYlower} \E Y_n \geq \frac{n-3/2}{e} - \frac{1}{2e(n-1)} \geq \frac{n-5/3}{e} \qquad \text{for $n\geq4$}. \end{equation} Since, by Lemma~\ref{lem:sumofind.}, $Y_n$ has the same distribution as a sum of $n-2$ independent Bernoulli random variables, Theorem~\ref{thm:tails} applies to the~$Y_n$. It follows from \eqref{EYupper} and~\eqref{EYlower} that in applying this theorem to~$Y_n$ for $n\geq4$, we can use that \[ (n-2)p \leq \frac{n-1}{e},\qquad 1-p \leq \frac{e-1}{e},\qquad 0\leq 1-2p \leq \frac{e-2}{e}, \] where $p = \E Y_n /(n-2)$. This yields the desired result. \end{proof} \subsection{Visiting consecutive intervals} \label{sec:intervals} Corollaries \ref{cor:coupling} and~\ref{cor:tailY} together give an explicit upper bound on the probability that the shooting process, starting anywhere in some interval, visits a given other interval after the next shooting round. In this section, we extend this result to more than one round. We give an explicit construction of a sequence of intervals $I_0, I_1, \dotsc$ and, using Corollaries \ref{cor:coupling} and~\ref{cor:tailY}, we estimate the probability that the shooting process successively visits each interval in this specific sequence. To start our construction, suppose that the (real) numbers $I^-_0,I^+_0 \geq 2$, with $I^+_0 < eI^-_0$, and a parameter $\gamma\in (0,1]$ are given. 
Set \begin{equation} \label{s_0} s_0 := \sum_{i=1}^\infty \sqrt{i}\,e^{-i/2} = 2.312449444\cdots, \end{equation} and define the number~$c_0$ in terms of $I^+_0$, $I^-_0$ and~$\gamma$ by \begin{equation} \label{c_0} c_0 := \left( \sqrt{\smash[b]{I^+_0}}-\sqrt{\smash[b]{I^-_0}} \, \right) \frac{\gamma}{s_0\sqrt{e}}. \end{equation} For all $k\geq1$, we now define the real numbers $I^-_k$ and~$I^+_k$ by \begin{align} \label{left} I^-_k &:= I^-_0 e^k \biggl( 1 + c_0 \sqrt{\frac{e}{\smash[b]{I^-_0}}} \sum_{i=1}^k \sqrt{i} \, e^{-i/2} \biggr), \\ \label{right} I^+_k &:= I^+_0 e^k \biggl( 1 - c_0 \sqrt{\frac{e}{\smash[b]{I^+_0}}} \sum_{i=1}^k \sqrt{i} \, e^{-i/2} \biggr), \end{align} and we set $I_k := [\floor{I^-_k}, \ceil{I^+_k}]$ for all $k\geq0$. These specific choices for $I^+_k$ and~$I^-_k$ may look peculiar, but the reader will see in our calculations below why they are convenient. At this point, let us just note that our intervals are disjoint (since $I^-_{k+1} > I^-_0 e^{k+1} > I^+_0 e^k > I^+_k$) and their lengths are (roughly) given by the relatively simple expression \[ I^+_k - I^-_k = (I^+_0 - I^-_0) e^k \biggl( 1 - \frac{\gamma}{s_0} \sum_{i=1}^k \sqrt{i} \, e^{-i/2} \biggr). \] For $\gamma=1$, this reduces to \[ I^+_k - I^-_k = (I^+_0 - I^-_0) e^k \frac{1}{s_0} \sum_{j\geq 1} \sqrt{j+k} \, e^{-(j+k)/2} \geq (I^+_0 - I^-_0) e^{k/2}, \] which shows that the lengths of our intervals~$I_k$ grow to infinity with~$k$. We want to consider shooting processes starting from any $n\in \bigcup_{k=1}^\infty I_k$, and we couple these processes as in~\eqref{Xni}, where we take $k_n$ equal to the index of the interval containing~$n$. In this way, all shooting processes starting from the same interval are coupled from the first round onward, while a shooting process starting from a point in~$I_{k+l}$ first undergoes $l$ independent shooting rounds (and, with high probability, reaches~$I_k$), before it becomes coupled to a shooting process starting from a point in~$I_k$. The following lemma gives an estimate of the probability that a shooting process starting from any $n\in \bigcup_{k=1}^\infty I_k$ visits each of the intervals $I_{k_n-1}, I_{k_n-2}, \dots, I_0$, in that order. \begin{lemma} \label{lem:intervals} Let the numbers $I^+_0,I^-_0$ and the parameter $\gamma\in (0,1]$ be given, and define the intervals $I_k$ ($k\geq0$) by \eqref{left} and~\eqref{right}, as explained above. For each $n\in \bigcup_{k=1}^\infty I_k$, let $k_n$ be the index of the interval containing~$n$, and define $X^n_i$ ($i\geq0$) by~\eqref{Xni}. Then \begin{multline} \label{boundintervals} \P\bigl( \text{for some $\textstyle n\in \bigcup_{k=1}^\infty I_k$ and $i\leq k_n$, $X_i^n \not\in I_{k_n-i}$} \bigr) \\ \leq \frac{1}{e^{c_1}-1} + \frac{1}{e^{c_2}-1} \end{multline} where \[ c_1 = \frac{ec_0^2/2} {(e-1)\bigl( 1+c_0 s_0 \, \sqrt{e\smash[b]{\null/I^-_0}} \bigr)} \quad\text{and}\quad c_2 = \frac{ec_0^2/2} {e-1 - c_0(2e-1) \bigm/ 3\sqrt{\smash[b]{I^+_0}} }, \] with $s_0$ and~$c_0$ defined as in \eqref{s_0} and~\eqref{c_0}. Note that the right hand side of~\eqref{boundintervals} depends, via $c_0,c_1,c_2$, on the choice of $I^+_0$, $I^-_0$ and~$\gamma$. \end{lemma} \begin{proof} Let the $S_n^{(k)}$, for $n\geq 2$ and $k\geq 1$, be coupled as in Section~\ref{sec:coupling}. Suppose we are on the event that for all $k\geq 1$ and $n\in I_k$ it holds that $S_n^{(k)} \in I_{k-1}$. Then, by our coupling, it follows that for all $k\geq 1$, $n\in I_k$ and $i\leq k$, $X_i^n \in I_{k-i}$. 
The latter statement is equivalent to saying that for all $n\in \bigcup_{k=1}^{\infty} I_k$ and $i\leq k_n$, $X_i^n \in I_{k_n-i}$. Therefore, the left hand side of~\eqref{boundintervals} is bounded above by \begin{equation} \label{proofintervals000} \P(\exists k\geq 1, \exists n\in I_k\colon S_n^{(k)} \not\in I_{k-1}) \leq \sum_{k=1}^{\infty} \P(\exists n\in I_k\colon S_n \not\in I_{k-1}). \end{equation} By Corollary~\ref{cor:coupling} we have for $k\geq 1$, \begin{multline} \label{proofintervals1} \P(\exists n\in I_k\colon S_n\notin I_{k-1}) \\ \leq \P\bigl( Y_{\floor{I_k^-}} \leq \floor{I_{k-1}^-} - 1 \bigr) + \P\bigl( Y_{\ceil{I_k^+}} \geq \ceil{I_{k-1}^+} \bigr). \end{multline} To bound the right hand side of~\eqref{proofintervals1}, we use Corollary~\ref{cor:tailY}, which applies for all $k\geq 1$ since $I^-_1 \geq eI^-_0 > 4$. We first note that since $\tfrac{1+5/3}{e}-1<0$, \begin{equation} \label{proofintervals1a} \floor{I^-_{k-1}} - 1 - \frac{\floor{I^-_k} - 5/3}{e} \leq I^-_{k-1} - \frac{I^-_k}{e} = -c_0 \sqrt{\smash[b]{I^-_0}} \, e^{(k-1)/2} \sqrt{k}. \end{equation} By Corollary~\ref{cor:tailY} with $n = \floor{I_k^-}$ and $-u$ equal to the right hand side of~\eqref{proofintervals1a}, using $\floor{I^-_k} - 1 \leq I^-_0 e^k \bigl( 1+c_0 s_0\sqrt{e\smash[b]{\null/I^-_0}} \, \bigr)$, we see that \begin{equation} \label{proofintervals2} \P( Y_{\floor{I_k^-}} \leq \floor{I_{k-1}^-} - 1 ) \leq \exp\left( -\frac{1}{2} \frac{ec_0^2 k} {(e-1)\bigl( 1+c_0 s_0 \, \sqrt{e\smash[b]{\null/I^-_0}} \bigr)} \right). \end{equation} Likewise, \[ \ceil{I^+_{k-1}} - \frac{\ceil{I^+_k} - 3/2}{e} \geq I^+_{k-1} - \frac{I^+_k}{e} = c_0 \sqrt{\smash[b]{I^+_0}} \, e^{(k-1)/2} \sqrt{k}. \] By Corollary~\ref{cor:tailY}, using $\ceil{I^+_k}-1 \leq I^+_0 e^k - c_0 e^k \sqrt{\smash[b]{I^+_0}}$ and $e^{(k-1)/2} \sqrt{k} \leq e^{k-1}$, \begin{equation} \label{proofintervals3} \P( Y_{\ceil{I_k^+}} \geq \ceil{I_{k-1}^+} ) \leq \exp\left( -\frac{1}{2} \frac{ec_0^2 k} {e-1 - c_0 (2e-1) \bigm/ 3\sqrt{\smash[b]{I^+_0}} } \right). \end{equation} By \eqref{proofintervals1}, \eqref{proofintervals2} and~\eqref{proofintervals3}, the right hand side of~\eqref{proofintervals000} is bounded above by the sums over all $k\geq1$ of the right hand sides of \eqref{proofintervals2} and~\eqref{proofintervals3}, added together. This proves~\eqref{boundintervals}. \end{proof} \section{Non-convergence} \label{sec:nonconvergence} In this section we prove non-convergence of the~$p_n$: \begin{theorem}[Non-convergence] \label{thm:nonconvergence} It is the case that \[ \limsup_{n \to \infty} p_n \geq 0.515428 \qquad\text{and}\qquad \liminf_{n \to \infty} p_n \leq 0.477449. \] \end{theorem} The idea of the proof of Theorem \ref{thm:nonconvergence} is as follows. We will take intervals $H_0$ and~$V_0$ around $n=2795$ and~$n=4608$, the last peak and valley in Figure~\ref{figure1}, respectively, so that $p_n$ is high on~$H_0$ and low on~$V_0$. Then we will construct sequences of intervals $H_1, H_2, \dotsc$ and $V_1, V_2, \dotsc$ in such a way that, if $n\in H_k$ for some~$k$, then with high probability uniformly in~$k$, the number of survivors in the shooting process starting with $n$~persons will, during the first~$k$ shooting rounds, visit each of the intervals $H_{k-1}, H_{k-2}, \dots, H_0$ (in that order), and similar for~$V_k$. As a consequence, $p_n$ must be high on all intervals~$H_k$, and low on all intervals~$V_k$. 
To make this work, the intervals $H_0$ and~$V_0$ should be big enough to make the probability high that the number of survivors after $k$~rounds will lie in them when we start from $H_k$ or~$V_k$, but small enough so that the values taken by the~$p_n$ on the respective intervals $H_0$ and~$V_0$ are sufficiently separated from each other. It turns out that $H_0 = [2479, 3151]$ and $V_0 = [4129, 5143]$ work, and these intervals form our starting point. The next three intervals $H_1$, $H_2$ and~$H_3$ are constructed as follows. We choose the right boundary~$H^+_1$ of~$H_1$ such that $\E S_{H_1^{\smash+}}$ lies roughly $3.56$~standard deviations away from the right boundary of~$H_0$, and we choose the left boundary~$H_1^-$ of~$H_1$ similarly. In this way, we expect that after one shooting round we will end up in~$H_0$ with high probability, when we start in~$H_1$. The intervals $H_2$ and~$H_3$ are constructed similarly, and so are the intervals $V_1$ and~$V_2$. We need this special treatment only for two (instead of three) intervals $V_1$ and~$V_2$, because $V_0$ lies to the right of~$H_0$. We end up with \begin{align*} H_0 &= [2479, 3151], & V_0 &= [4129, 5143],\\ H_1 &= [6991, 8290], & V_1 &= [11553, 13623], \\ H_2 &= [19425, 22086], & V_2 &= [31952, 36447], \\ H_3 &= [53501, 59301]. \end{align*} The remaining intervals are now constructed as explained in Section~\ref{sec:intervals}, taking $H_3$ and~$V_2$ as the respective starting intervals. To be more precise, we first set $I_0 := H_3$, take $\gamma=1$, and then for $k\geq4$ define the intervals $H_k = [H^-_k, H^+_k] := [\floor{I^-_{k-3}}, \ceil{I^+_{k-3}}]$ using equations \eqref{left} and~\eqref{right} for the endpoints. In the same way we define the intervals $V_k$ for $k\geq3$, taking $I_0 := V_2$ as the initial interval in the construction from Section~\ref{sec:intervals}. The following lemma tells us that the values of the~$p_n$ on the intervals $H_0$ and~$V_0$ are sufficiently separated from each other and that, when we start in~$H_3$, the number of survivors in the shooting process will visit each of the intervals $H_2, H_1, H_0$ with high probability, and similarly for $V_2, V_1, V_0$. We obtain the desired bounds using computations in \textit{Mathematica}. We explain how we can perform the computations in such a way that we avoid introducing rounding errors, and thus obtain rigorous results. \begin{lemma} \label{lem:mathematica} We have \begin{align*} \min\{ p_n\colon n \in H_0\} &\geq 0.5163652651, \\ \max\{ p_n\colon n \in V_0\} &\leq 0.4767018688, \end{align*} and moreover \begin{align*} \sum_{k=1}^3 \left[ \P(Y_{H_k^-} \leq H_{k-1}^- - 1) + \P(Y_{H_k^+} \geq H_{k-1}^+) \right] &\leq 0.0010954222, \\ \sum_{k=1}^2 \left[ \P(Y_{V_k^-} \leq V_{k-1}^- - 1) + \P(Y_{V_k^+} \geq V_{k-1}^+) \right] &\leq 0.0006060062. \end{align*} \end{lemma} \begin{proof} The explicit bounds in the first part of Lemma~\ref{lem:mathematica} are based on exact calculations in \textit{Mathematica}\ of bounds on the numbers~$p_n$ up to $n=6000$ using the recursion \begin{equation} \label{recursion} p_n = \sum_{k=0}^{n-2} \P(S_n=k)\,p_k \qquad (n\geq2), \end{equation} with $p_0=1$ and $p_1=0$. To obtain lower bounds on~$p_n$ from this recursion, we need lower bounds on the $\P(S_n=k)$. To this end, write \[ t^n_{k,r} = \binom{n}{k} \binom{n-k}{r} (n-k-r)^{k+r} (n-k-r-1)^{n-k-r} \] for the terms that appear in the inclusion-exclusion formula~\eqref{Sn-incl-excl}. Observe that these are integer numbers. 
Now, for fixed $n$ and~$k$, define $r_{\max}$ by \[ r_{\max} := \min\{ r\geq0 \colon 10^{10} t^n_{k,2r} < (n-1)^n \}. \] Since truncating the sum in the inclusion-exclusion formula after an even number of terms yields a lower bound on~$\P(S_n=k)$, we have that \[ \P(S_n=k) \geq (n-1)^{-n} \sum_{r=0}^{2r_{\max}-1} (-1)^r t^n_{k,r}. \] By our choice of~$r_{\max}$, we know that the difference between the left and right hand sides of this inequality is smaller than~$10^{-10}$. However, this rational lower bound on~$\P(S_n=k)$ is numerically awkward to work with, because the numerator and denominator become huge for large~$n$. We therefore bound $\P(S_n=k)$ further by the largest smaller rational number of the form $m/10^{10}$ with $m\in\mathds{N}$. Stated in a different way, we bound the quantity $10^{10} \P(S_n=k)$ from below by the integer \[ P_{n,k} := 0 \vee \floor[\bigg]{ 10^{10} \sum_{r=0}^{2r_{\max}-1} (-1)^r t^n_{k,r} \biggm/ (n-1)^n }, \] where we remark that for integers $a$ and~$b$, $\floor{a/b}$ is just the quotient of the integer division~$a/b$. We now return to~\eqref{recursion}. Suppose that we are given nonnegative integers $\hat{p}_0, \hat{p}_1, \dots, \hat{p}_{n-1}$ that satisfy $10^{10}p_k \geq \hat{p}_k$ for $k = 0,1,\dots,n-1$. Let \[ \hat{p}_n := \floor[\bigg]{\, \sum_{k = k_1}^{k_2} P_{n,k} \hat{p}_k \biggm/ 10^{10} }, \] where \[ k_1 = 0 \vee \ceil[\big]{n/e - \sqrt{5n}\,}, \quad k_2 = (n-2) \wedge \floor[\big]{n/e + \sqrt{5n}\,}. \] Then it follows from~\eqref{recursion} and the fact that $10^{10} \P(S_n=k) \geq P_{n,k}$, that $10^{10}p_n \geq \hat{p}_n$. In this way, starting from the values $\hat{p}_0 = 10^{10}$ and $\hat{p}_1 = 0$, we recursively compute integer lower bounds on the numbers $10^{10} p_n$, or equivalently, rational lower bounds on~$p_n$, up to $n=6000$. We emphasize that this procedure involves only integer calculations, that could in principle be done by hand. For practical reasons, we invoke the aid of \textit{Mathematica}\ to perform these calculations for us, using exact integer arithmetic. In the same way (now starting the recursion from $\hat{p}_0 = 0$ and $\hat{p}_1 = 10^{10}$), we compute exact bounds on the probabilities $1-p_n$ of ending up with a single survivor. Taking complements, this gives us rational upper bounds on the~$p_n$ up to $n = 6000$. The first part of Lemma~\ref{lem:mathematica} follows from these exact bounds, and Figure~\ref{figure1} shows the lower bounds as a function of~$\log n$. As it turns out, the largest difference between our upper and lower bounds on the~$p_n$ is~$527 \times 10^{-10}$. The second part of Lemma~\ref{lem:mathematica} again follows from exact integer calculations with the aid of \textit{Mathematica}. Inclusion-exclusion tells us that \begin{equation} \label{Yincl-excl} \P(Y_{n+1} = k+i) = \frac1{n^n} \sum_{r=i}^{n-k} (-1)^{r-i} \binom{n}{k+i} \binom{n-k-i}{r-i} (n-k-r)^n. \end{equation} Summing over $i = 0,\dots,n-k$, interchanging the order of summation, and reorganising the binomial coefficients yields \[ \P(Y_{n+1} \geq k) = \frac1{n^n} \sum_{r=0}^{n-k} (-1)^r \sum_{i=0}^r (-1)^i \binom{k+r}{k+i} \binom{n}{k+r} (n-k-r)^n. \] Using the binomial identity \[ \sum_{i=0}^r (-1)^i \binom{k+r}{k+i} = \binom{k+r-1}{r} \qquad (k\geq1, r\geq0), \] which is easily proved by induction in~$r$, we conclude that for $k \geq 1$, \[ \P(Y_{n+1} \geq k) = \frac1{n^n} \sum_{r=0}^{n-k} (-1)^r \frac{k}{k+r} \binom{n}{k} \binom{n-k}{r} (n-k-r)^n. 
\] Since $\P(Y_{n+1} \leq k) = 1 - \P(Y_{n+1} \geq k) + \P(Y_{n+1} = k)$, the previous equation together with~\eqref{Yincl-excl} for $i=0$ gives \[ \P(Y_{n+1} \leq k) = 1 + \frac1{n^n} \sum_{r=0}^{n-k} (-1)^r \frac{r}{k+r} \binom{n}{k} \binom{n-k}{r} (n-k-r)^n. \] We note that from our derivation it follows that, as before, the terms that appear in the sums above are integers. This allows us to compute the rational numbers $\P(Y_{n+1} \leq k)$ and $\P(Y_{n+1} \geq k)$, and hence the sums in the second part of Lemma~\ref{lem:mathematica}, using only exact integer arithmetic. Bounding these sums above by rational numbers of the form $m/10^{10}$ (which again involves only integer arithmetic) yields the second part of Lemma~\ref{lem:mathematica}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:nonconvergence}.] Let the intervals~$H_k$, for $k\geq 0$, be constructed as explained below the statement of Theorem~\ref{thm:nonconvergence}. Similarly as in Section~\ref{sec:intervals}, for $n\in \bigcup_{k=1}^\infty H_k$ we now define $X^n_i$ by~\eqref{Xni}, with $k_n$ equal to the value of~$k$ such that $n\in H_k$. Recall that $X^n_i$ represents the number of survivors after round~$i$ of the shooting process started from~$n$. Fix a $k\geq1$ and $n\in H_k$. We are interested in the event \[ G_n = \{ \text{$X_i^n\in H_{k-i}$ for all $i=1,\dots,k$} \}. \] It follows from \[\begin{split} \P(G_n^c) & = \P(\exists i\leq k\colon X_i^n \not\in H_{k-i} ) \\ & \leq \P(\exists i\leq k-3\colon X_i^n \not\in H_{k-i} ) + \sum_{k=1}^3 \P( \exists m\in H_k\colon S_m\not\in H_{k-1}) \end{split}\] and Corollary~\ref{cor:coupling}, that $\P(G_n^c)$ is bounded from above by \begin{multline} \label{nonconv10} \P(\exists i\leq k-3\colon X_i^n \not\in H_{k-i} ) \\ + \sum_{k=1}^3 \left[ \P\bigl( Y_{H_k^-} \leq H_{k-1}^- - 1 \bigr) + \P\bigl( Y_{H_k^+} \geq H_{k-1}^+ \bigr) \right]. \end{multline} We use Lemma~\ref{lem:intervals} to compute an upper bound on the first term in~\eqref{nonconv10}, and Lemma~\ref{lem:mathematica} to bound the sum in the second term. This gives \[ \P(G_n^c) \leq 0.0007188677 + 0.0010954222 = 0.0018142899, \] uniformly for all $k\geq1$ and $n\in H_k$. Using the first part of Lemma~\ref{lem:mathematica}, this gives \[ p_n \geq \P(G_n) \min_{m\in H_0} p_m \geq 0.9981857101 \times 0.5163652651 \geq 0.515428, \] for all $n\in \bigcup_{k=1}^\infty H_k$. In a similar way, we bound the values $1-p_n$ from below, and hence the~$p_n$ from above, on the intervals~$V_k$. \end{proof} \section{Periodicity and continuity} \label{sec:periodicity} \subsection{Main theorem} \label{sec:maintheorem} In this section we prove the convergence of the~$p_n$ on the $\log n$~scale to a periodic and continuous function~$f$. Together with Theorem~\ref{thm:nonconvergence} (non-convergence), this gives Theorem~\ref{thm:main}. \begin{theorem}[Asymptotic periodicity and continuity] \label{thm:asymptotics} There exists a periodic and continuous function $f\colon \mathds{R}\to[0,1]$ of period~1 such that \[ \sup_{x\geq x_0} \, \abs[\big]{ p_{\floor{\exp x}} - f(x) } \to 0 \qquad \text{as }x_0\to\infty. 
\] \end{theorem} To prove Theorem~\ref{thm:asymptotics}, we consider coupled shooting processes started from different points that lie in one of the intervals \begin{align} \label{J0} J_0 &= \bigl[ e^{k_0+w-3\delta}, e^{k_0+w-3\delta} + (e^{k_0+w-3\delta})^{2/3} \bigr], \\ \label{Jk} J_k &= \bigl[ e^{k_0+w+k-\delta}, e^{k_0+w+k+\delta} \bigr], \qquad k\geq 1, \end{align} for some $k_0$, $w$ and~$\delta$ specified in Proposition~\ref{prop:ingr3} below. Observe that the intervals~$J_k$ for $k\geq1$ have length~$2\delta$ on the $\log n$ scale. We will show in three steps that with high probability, the distance between the numbers of survivors in these shooting processes decreases, and the coupled processes collide before the number of survivors has reached 0 or~1. The three steps are respectively described by Propositions \ref{prop:ingr3}, \ref{prop:ingr2} and \ref{prop:ingr1} below. \begin{proposition} \label{prop:ingr3} For all $\varepsilon>0$ and~$a_2$ there exist $\delta\in(0,\tfrac13)$ and $k_0 > 1+\log a_2$ such that, with the intervals~$J_k$ as in \eqref{J0} and~\eqref{Jk}, \[ \inf_{w\in[0,1]} \P\bigl( \text{for all $\textstyle n \in \bigcup_{k=1}^{\infty} J_k$, $X^n_{k_n} \in J_0$} \bigr) \geq 1-\varepsilon, \] where for all $n\in \bigcup_{k=1}^\infty J_k$ and $i\geq0$, $X^n_i$ is defined by~\eqref{Xni}, with $k_n$ equal to the value of~$k$ such that $n\in J_k$. \end{proposition} Note that on the event considered in Proposition~\ref{prop:ingr3}, the number of survivors~$X^n_{k_n}$ after $k_n$ shooting rounds for different starting points $n\in \bigcup_{k=1}^\infty J_k$ are all in the same interval~$J_0$. By~\eqref{Xni}, from this moment onward the processes~$X^n_{k_n+i}$ ($i\geq0$) for different~$n$ will be coupled together. Our next two propositions explore what will happen when we are in a situation like this. \begin{proposition} \label{prop:ingr2} For all $n\geq2$ and $i\geq0$, let the~$X^n_i$ be coupled as in~\eqref{Xni}, with $k_n=0$ for all~$n$. Then for all $\varepsilon>0$ there exist $a_0$ and~$d$ such that, for all $a,b$ with $a_0\leq a<b\leq a+a^{2/3}$, \[ \P\bigl( \text{$-1 \leq X^b_i - X^a_i \leq d$ and $X_i^a \geq a^{0.01}$ for some~$i$} \bigr) \geq 1-\varepsilon. \] \end{proposition} \begin{proposition} \label{prop:ingr1} For all $n\geq2$ and $i\geq0$, let the~$X^n_i$ be coupled as in~\eqref{Xni}, with $k_n=0$ for all~$n$. Then for all $\varepsilon>0$ and~$d$ there exists~$a_1$ such that, for all $a,b$ with $a_1 \leq a < b \leq a+d$, \[ \P\bigl( \text{$X^a_i = X^b_i$ for some~$i$} \bigr) \geq 1-\varepsilon. \] \end{proposition} We will now prove Theorem~\ref{thm:asymptotics} using these three propositions, and defer the proofs of Propositions \ref{prop:ingr3}, \ref{prop:ingr2} and~\ref{prop:ingr1} to Sections \ref{sec:oneshootinground} and~\ref{sec:proofpropositions}. \begin{proof}[Proof of Theorem~\ref{thm:asymptotics}.] We define, for all $x\geq 0$ and integer~$k$, \[ f_k(x) := p_{\floor{\exp(k+x)}}. \] First we will use Propositions \ref{prop:ingr3}, \ref{prop:ingr2} and~\ref{prop:ingr1} to prove that for all $\varepsilon>0$, there exist $\delta>0$ and~$k_0$ such that, for all $u,v\in [0,1]$ with $\abs{u-v}\leq \delta$, \begin{equation} \label{asymptoticsstar} \abs{f_k(u) - f_l(v)} \leq \varepsilon \quad\text{for all $k,l\geq k_0$}. \end{equation} Let $\varepsilon>0$. 
Choose $a_0$ and~$d$ according to Proposition~\ref{prop:ingr2} such that, for all~$a,b$ with $a_0\leq a<b\leq a+a^{2/3}$, \begin{equation} \label{mainingr2} \P\bigl( \text{$\abs{X_i^b - X_i^a} \leq d$ and $X_i^a, X_i^b \geq a^{0.01} - 1$ for some~$i$} \bigr) \geq 1-\frac{\varepsilon}{3}. \end{equation} Next, choose~$a_1$ according to Proposition~\ref{prop:ingr1} such that, for all~$a,b$ with $a_1\leq a < b\leq a+d$, \begin{equation} \label{mainingr1} \P\bigl( \text{$X_i^a = X_i^b$ for some~$i$} \bigr) \geq 1 - \frac{\varepsilon}{3}. \end{equation} Recall that in both \eqref{mainingr2} and~\eqref{mainingr1}, the shooting processes~$X^n_i$ are coupled from the first shooting round onward. Finally, we define $a_2 := \max\{ a_0, (a_1+1)^{100} \}$, and choose $\delta \in (0,\tfrac13)$ and $k_0 > 1 + \log a_2$ according to Proposition~\ref{prop:ingr3} such that \begin{equation} \label{mainingr3} \inf_{w\in[0,1]} \P\bigl( \text{for all $\textstyle n \in \bigcup_{k=k_0}^\infty J_k$, $X^n_{k_n} \in J_0$} \bigr) \geq 1-\frac{\varepsilon}{3}, \end{equation} where the $X^n_i$ are coupled as in~\eqref{Xni}, with $k_n$ equal to the index of the interval~$J_k$ containing~$n$. We claim that~\eqref{asymptoticsstar} holds for these $\delta$ and~$k_0$. In order to prove this, let $u,v \in[0,1]$ be such that $\abs{u-v}\leq \delta$ and let $k,l\geq k_0$. Write $w = (u+v)/2$ and set \[ \alpha := \floor{\exp(k+u)},\qquad \beta := \floor{\exp(l+v)}. \] Note from \eqref{J0} and~\eqref{Jk} that $\alpha\in J_{k-k_0}$ and $\beta\in J_{l-k_0}$, so in particular, $X^\alpha_i$ and~$X^\beta_i$ are defined and coupled as described above. We need to show that $\abs{ p_\alpha - p_\beta } \leq \varepsilon$, but we will actually prove the stronger statement that \begin{equation} \label{xsequal} \P \bigl( \text{$X^\alpha_{k_\alpha+i} = X^\beta_{k_\beta+i}$ for some~$i$} \bigr) \geq 1-\varepsilon. \end{equation} To prove~\eqref{xsequal}, first note that by \eqref{mainingr3} and~\eqref{J0}, \begin{equation} \label{mainingr3c} \P\bigl( X^\alpha_{k_\alpha}, X^\beta_{k_\beta} \in \bigl[ e^{k_0+w-3\delta}, e^{k_0+w-3\delta} + (e^{k_0+w-3\delta})^{2/3} \bigr] \bigr) \geq 1-\frac{\varepsilon}{3}. \end{equation} Since $k_0 > 1+\log a_2$ and $\delta < 1/3$, we have that $e^{k_0+w-3\delta} \geq a_2\geq a_0$. Using the fact that $X^\alpha_{k_\alpha+i}$ and~$X^\beta_{k_\beta+i}$ are coupled together for all~$i\geq0$, and since $a_2^{0.01} \geq a_1+1$, it now follows from \eqref{mainingr3c} and~\eqref{mainingr2} that \begin{multline} \label{mainingr2c} \P\bigl( \text{$\abs[\big]{X^\alpha_{k_\alpha+i} - X^\beta_{k_\beta+i}} \leq d$ and $X^\alpha_{k_\alpha+i}, X^\beta_{k_\beta+i} \geq a_1$ for some~$i$} \bigr) \\ \geq 1 - \frac{2\varepsilon}{3}. \end{multline} By \eqref{mainingr2c} and~\eqref{mainingr1}, we have that~\eqref{xsequal} holds. This proves~\eqref{asymptoticsstar}. Next we prove that \eqref{asymptoticsstar} implies the theorem. Let $\varepsilon>0$, and let $\delta>0$ and~$k_0$ be such that \eqref{asymptoticsstar} holds for this~$\varepsilon$. Fix $x\geq 0$. Taking $u = v = x-\floor{x}$ in~\eqref{asymptoticsstar} and using $f_k(x) = f_k(\floor{x} + u) = f_{k+\floor{x}}(u)$, we get \begin{equation} \label{asymptoticsone} \abs{f_k(x) - f_l(x)} = \abs[\big]{f_{k+\floor{x}}(u) - f_{l+\floor{x}}(u)} \leq \varepsilon,\quad \text{for all $k,l\geq k_0$}. \end{equation} In particular, $\sup_{k\geq k_0} f_k(x) \leq \varepsilon + \inf_{k\geq k_0} f_k(x)$ and hence $\lim_{k\to\infty} f_k(x)$ exists. We define \[ f(x) := \lim_{k\to\infty} f_k(x), \qquad x\geq0.
\] Since $f_k(l+x) = f_{k+l}(x)$ for integer~$k,l$, the limit function~$f$ is periodic with period~1. Furthermore, since \eqref{asymptoticsone} holds uniformly for all $x\geq 0$, by taking $k=k_0$ and letting $l\to\infty$ we obtain \[ \varepsilon \geq \sup_{x\geq 0} \abs{f_{k_0}(x) - f(x)} = \sup_{x\geq 0} \abs[\big]{p_{\floor{\exp(k_0+x)}} - f(k_0+x)}, \] which proves the desired uniform convergence to the limit function~$f$. Finally, by~\eqref{asymptoticsstar} we obtain that, for all $u,v\in [0,1]$ with $\abs{u-v}\leq \delta$, \[ \abs{f(u) - f(v)} = \lim_{k\to\infty} \abs{f_k(u) - f_k(v)} \leq \varepsilon, \] which shows that $f$ is continuous. This completes the proof. \end{proof} \subsection{One shooting round} \label{sec:oneshootinground} In this section we give two key ingredients for the proof of Propositions \ref{prop:ingr3}, \ref{prop:ingr2} and~\ref{prop:ingr1}. These two ingredients give information about one shooting round of coupled shooting processes starting at two different points $a$ and~$b$. The first ingredient is the following lemma: \begin{lemma} \label{lem:ingr1} Let $S_n$, $n\geq 2$, be coupled as in Section~\ref{sec:coupling}. For all $a,b$ such that $a< b \leq \tfrac{5}{4} a$, \[ \P(S_a=S_b) \geq e^{-7(b-a)}. \] \end{lemma} \begin{proof} Let $Y_i^n$, $S_i^n$ and~$Z_i^n$ be coupled as in Section~\ref{sec:coupling}. By Lemma~\ref{lem:coupling}, we have that $Y_a^a \leq S_a \leq Z^b_b$ and $Y_a^a \leq S_b \leq Z^b_b$. Hence it suffices to show that \begin{equation} \label{ingr1.1} \P(Y^a_a = Z^b_b) \geq e^{-7(b-a)}. \end{equation} Note that the Markov chain~$(Z^b_i)_i$ first takes $b-a$ steps independently, before its steps are coupled to the Markov chain~$(Y^a_i)_i$. To prove~\eqref{ingr1.1}, we will first estimate the probability that the~$Z^b$ process decreases to the height~$a$ in these first $b-a$ steps, and then estimate the probability that in the remaining $a$~steps, the distance between $Z^b_{b-a+i}$ and~$Y^a_i$ never increases. For the first part, note that by Robbins' version of Stirling's formula~\cite{Robbins}, \[\begin{split} \P(Z^b_{b-a} = a) &= \frac{b-1}{b-1} \frac{b-2}{b-1} \dotsm \frac{a}{b-1} = \frac{(b-1)!}{(a-1)!} (b-1)^{-(b-a)} \\ &\geq \frac{\sqrt{2\pi} (b-1)^{b-1+\frac{1}{2}} e^{-(b-1)} e^{1/(12b-11)} } {\sqrt{2\pi} (a-1)^{a-1+\frac{1}{2}} e^{-(a-1)} e^{1/(12a-12)}} (b-1)^{-(b-a)}, \end{split}\] which implies \begin{equation} \label{ingr1.2} \P(Z^b_{b-a} = a) \geq e^{-(b-a)}. \end{equation} Next we consider the probability that in the remaining steps, the processes $Y^a_i$ and~$Z^b_{b-a+i}$ stay at the same height. By the coupled transition probabilities \eqref{Yprocess} and~\eqref{Zprocess}, for all $i = 0,1,\dots,a-1$ and $k < a$ we have \begin{multline*} \P(Y_{i+1}^a = Z_{b-a+i+1}^b \mid Y^a_i = Z^b_{b-a+i} = k) = 1- \frac{k}{a-1} + \frac{k-1}{b-1} \\ \geq 1 - \frac{a}{a-1} +\frac{a-1}{b-1} = 1-\frac{1}{a-1} - \frac{b-a}{b-1} \geq 1 - \frac{2(b-a)}{a} - \frac{b-a}{a}. \end{multline*} By our assumption that $b \leq \tfrac54 a$ and the inequality $1-u \geq e^{-2u}$, which holds for $0\leq u\leq \tfrac34$, this gives \[ \P(Y_{i+1}^a = Z_{b-a+i+1}^b \mid Y^a_i = Z^b_{b-a+i} = k) \geq e^{-6(b-a)/a}. \] A separate computation shows that this bound also holds for $k=a$. Since this bound holds for each of the remaining $a$~steps, we conclude that \begin{equation} \label{ingr1.3} \P(Y_a^a = Z_b^b \mid Z^b_{b-a} = a) \geq e^{-6(b-a)}. \end{equation} Together with~\eqref{ingr1.2}, this gives~\eqref{ingr1.1}, which completes the proof. 
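As an aside, and not needed for the argument, the left-hand side of~\eqref{ingr1.2} can be evaluated exactly, which gives a quick numerical check of the bound. The following short script (an illustration only, assuming a standard Python~3 installation) compares $\log \P(Z^b_{b-a}=a)$ with $-(b-a)$ for a few admissible pairs $(a,b)$:
\begin{verbatim}
import math

def log_prob_descend(a, b):
    # log of P(Z^b_{b-a} = a) = (b-1)!/((a-1)! (b-1)^{b-a}),
    # i.e. of the product (b-1)/(b-1) * (b-2)/(b-1) * ... * a/(b-1).
    return sum(math.log(j) for j in range(a, b)) - (b - a) * math.log(b - 1)

for a, b in [(40, 50), (400, 500), (4000, 5000)]:
    assert a < b <= 5 * a // 4
    # The bound (ingr1.2) asserts that the first printed value is >= the second.
    print(a, b, log_prob_descend(a, b), -(b - a))
\end{verbatim}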
\end{proof} The second key ingredient is Lemma~\ref{lem:ingr2} below. For the proof, we need the following preliminary result: \begin{lemma} \label{lem:binomial} Let $\lambda_1,\lambda_2 > 0$ be such that $\lambda_1/\lambda_2$ is an integer. Let $T_x$, $x=0,1,\dots,\lambda_1/\lambda_2$, be independent random variables such that $T_x$ has the exponential distribution with parameter $\lambda_1 - x \lambda_2$ (where $T_{\lambda_1/\lambda_2} = \infty$ with probability~1). Define \[ X_t = \min\{ x\colon T_0 + T_1 + \dots + T_x > t \}. \] Then $X_t$ has the binomial distribution with parameters $n= \lambda_1/\lambda_2$ and $p = 1-e^{-\lambda_2 t}$. \end{lemma} \begin{proof} Let $n = \lambda_1/\lambda_2$. Consider $n$ independent Poisson processes, each with rate~$\lambda_2$. Let $X_t'$ be the number of Poisson processes that have at least~1 jump before time~$t$. Clearly, $X_t'$ has the binomial distribution with parameters $n = \lambda_1/\lambda_2$ and $p = 1-e^{-\lambda_2 t}$. We will prove that $X_t'$ has the same law as~$X_t$, which implies the statement of the lemma. Let $T_0'$ be the waiting time until one of the $n$ Poisson processes has a jump. Then $T_0'$ has the exponential distribution with parameter $n \lambda_2 = \lambda_1$, hence $T_0'$ has the same law as~$T_0$. Without loss of generality, suppose this jump occurs in Poisson process~1. Let $T_1'$ be the waiting time from time~$T_0'$ until one of the Poisson processes 2 through~$n$ has a jump. Then $T_1'$ has the exponential distribution with parameter $(n-1) \lambda_2 = \lambda_1 - \lambda_2$, hence $T_1'$ has the same law as~$T_1$. Moreover, $T_0'$ and~$T_1'$ are independent. Continuing in this way, we construct independent random variables $T_0', T_1', \dots, T_{n-1}'$ that have the same laws as $T_0, T_1, \dots, T_{n-1}$. Finally, we define $T_n' = \infty$. We then have that $X_t' = \min\{ x\colon T_0' + T_1' + \dots + T_x' > t \}$. It follows that $X_t'$ has the same law as~$X_t$. \end{proof} \begin{lemma} \label{lem:ingr2} Let $S_n$, $n\geq 2$, be coupled as in Section~\ref{sec:coupling}. There exist $a_0,c_1,c_2>0$ such that, for all~$a,b$ with $a_0 \leq a < b \leq a + a^{2/3}$, \[ \P\bigl( S_b - S_a \leq \tfrac{1}{2} (b-a) \bigr) \geq 1 - c_1 e^{-c_2(b-a)}. \] \end{lemma} \begin{proof} Let $Y_i^n$, $S_i^n$ and~$Z_i^n$ be coupled as in Section~\ref{sec:coupling}. First we will show that there exists $c_3>0$ such that, for all $a,b$ sufficiently large and satisfying $a < b \leq a + a^{2/3}$, \begin{equation} \label{ingr2.1} \P\bigl( Z^b_{b-a} - a > 0.01 (b-a) \bigr) \leq e^{-c_3(b-a)}. \end{equation} Note that $Z^b_{b-a} - a$ is the number of times the $Z^b$~process does not decrease in the first $b-a$ steps. By~\eqref{Mprocess3}, for $0\leq i < b-a$, the conditional probability that $Z^b$ does not decrease in the $(i+1)$-th step satisfies \[ \P\bigl( Z^b_{i+1} = Z^b_i \bigm| Z^b_i \bigr) \leq 1 - \frac{a+1-1}{b-1} \leq 1 - \frac{a}{b}. \] Therefore, $Z^b_{b-a} - a$ is stochastically smaller than a random variable~$W$ having the binomial distribution with parameters $n= b-a$ and $p = 1-\tfrac{a}{b}$. If $a$ and~$b$ are such that $1-\tfrac{a}{b} < 0.01$, then we can use Hoeffding's inequality to bound the left hand side of~\eqref{ingr2.1} by \[ \P(W > 0.01 (b-a)) \leq \exp \bigl[ -2(b-a) (0.01-p)^2 \bigr]. \] It follows that~\eqref{ingr2.1} holds. 
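Purely as an illustration of the step above (and not part of the proof), one can compare the exact upper tail of $W$ with the Hoeffding bound for sample values of $a$ and $b$. The following snippet assumes only a standard Python~3 installation:
\begin{verbatim}
import math

def binom_tail_gt(n, p, m):
    # P(W > m) for W ~ Binomial(n, p), via the complementary sum.
    cdf = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m + 1))
    return 1.0 - cdf

# Illustration of (ingr2.1): W ~ Binomial(b-a, 1-a/b) stochastically
# dominates Z^b_{b-a} - a, and Hoeffding's inequality controls its upper
# tail as soon as 1 - a/b < 0.01.
for a, d in [(10**6, 1000), (10**6, 5000)]:
    b = a + d                     # note d <= a^{2/3} = 10^4
    p = 1 - a / b
    assert p < 0.01
    exact = binom_tail_gt(d, p, math.floor(0.01 * d))
    hoeffding = math.exp(-2 * d * (0.01 - p) ** 2)
    print(d, exact, hoeffding)    # the exact tail is far below the bound
\end{verbatim}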
Next we will show that there exist $c_4,c_5>0$ such that, for all $a,b$ sufficiently large and satisfying $a < b \leq a + a^{2/3}$, \begin{equation} \label{ingr2.2} \P\bigl( Z^b_b - Y^a_a > \tfrac{1}{2} (b-a) \bigm| Z^b_{b-a} - a \leq 0.01 (b-a) \bigr) \leq c_4 e^{-c_5(b-a)}. \end{equation} To prove~\eqref{ingr2.2}, we consider the process of differences $Z^b_{b-a+i} - Y^a_i$ at the steps~$i$ at which the $Y^a$~process decreases, and bound the probability that at such steps the $Z^b$~process does not decrease. Using the coupled transition probabilities \eqref{Yprocess} and~\eqref{Zprocess}, for $k < a$ and $l > 0$ we have \begin{align} & \P\bigl( Z^b_{b-a+i+1} = k+l \bigm| Y^a_{i+1} = k-1, Y^a_i = k, Z^b_{b-a+i} = k+l \bigr) \nonumber \\ &\qquad\null = \biggl[ \left( \frac{k}{a-1} - \frac{k+l-1}{b-1} \right) \frac{a-1}{k} \biggr]^+ \leq \biggl[ 1 - \frac{a-1+l-1}{b-1} \biggr]^+ \nonumber\\ &\qquad\null \leq \biggl[ \frac{b-a - (l-2)}{b} \biggr]^+. \label{ingr2.2a} \end{align} The upper bound~\eqref{ingr2.2a} also holds for $k=a$. Next we define a pure birth process~$X_t$, $t=0,1,\dotsc$ with the properties that (i) $X_0 \geq \floor{0.01(b-a)}$, and (ii) when their heights are the same, the birth process~$X_t$ increases with a higher probability than the process of differences $Z^b_{b-a+i}-Y^a_i$. To define the process~$X_t$, let \[ X_0 = x_0 := \max\{ 2, \floor{0.01 (b-a)} \}, \] and let the dynamics of~$X_t$ be given by \[\begin{split} \P(X_{t+1} = X_t + 1 \mid X_t = x_0 + x) &= 1 - \P(X_{t+1} = X_t \mid X_t = x_0 + x) \\ &= \frac{b-a-x}{b} \end{split}\] for $x = 0,1,\dots,b-a$. Since the two processes can be coupled in such a way that on the event $\{Z^b_{b-a} - a \leq 0.01(b-a)\}$, the birth process~$X_t$ dominates the process of differences, we have that \begin{multline} \label{ingr2.3} \P\bigl( Z^b_b - Y^a_a > \tfrac{1}{2} (b-a) \bigm| Z^b_{b-a} - a \leq 0.01 (b-a) \bigr) \\ \leq \P\bigl( X_{t_0} > \tfrac{1}{2} (b-a) \bigr) + \P\bigl( Y^a_a < \tfrac{a}{e} - (\tfrac{a}{e})^{5/6} \bigr), \end{multline} where \[ t_0 = \floor[\big]{a - \bigl( \tfrac{a}{e} - ( \tfrac{a}{e} )^{5/6} \bigr)}. \] To bound the first term on the right in~\eqref{ingr2.3}, we use a continuous-time version of the process~$X_t$. Let $T_x$, $x = 0, 1, \dots, b-a$, be independent such that $T_x$ has the geometric distribution with parameter $p = (b-a-x)/b$ (where $T_{b-a}=\infty$ with probability~1). We can then write \[ X_t = x_0 + \min \{ x\colon T_0 + \dots + T_x > t \}, \] Now let $T_x'$, $x=0,1,\dots,b-a$, be independent such that $T_x'$ has the exponential distribution with parameter \[ \lambda = \log \frac{b}{a} - x \frac{\log b - \log a}{b-a}. \] We define the continuous-time process~$X_t'$, $t\geq 0$, by \[ X_t' = x_0 + \min\{ x\colon T_0'+\dots+T_x' > t \}. \] Since \[\begin{split} \P(T_x > t) &= \left( 1 - \frac{b-a-x}{b} \right)^{\floor{t}} \geq e^{-t \bigl[ -\log\bigl( 1 - \frac{b-a-x}{b} \bigr) \bigr] } \\ &\geq e^{-t \bigl[ \frac{b-a-x}{b} \sum_{k=1}^{\infty} \frac{1}{k} (\frac{b-a}{b})^{k-1} \bigr] } = e^{-t \bigl[ \frac{b-a-x}{b-a} \log \frac{b}{a} \bigr] } = \P(T_x' > t), \end{split}\] we have that $T_x'$ is stochastically less than~$T_x$. It follows that $X_t$ is stochastically dominated by~$X_t'$, hence \begin{equation} \label{ingr2.4} \P\bigl( X_{t_0} > \tfrac{1}{2} (b-a) \bigr) \leq \P\bigl( X_{t_0}' > \tfrac{1}{2} (b-a) \bigr). 
\end{equation} By Lemma~\ref{lem:binomial}, $X'_{t_0} - x_0$ has the same law as a random variable~$W'$ having the binomial distribution with parameters \[ n' = b-a,\qquad p' = 1 - \exp\left( - \frac{\log b-\log a}{b-a} t_0 \right). \] We have, as $a\to\infty$, \[ p' \leq 1 - \exp\bigl( -\tfrac{1}{a} \floor[\big]{a - \tfrac{a}{e} + (\tfrac{a}{e})^{5/6}} \bigr) \to 1 - e^{-( 1 - e^{-1} )} \approx 0.4685. \] Therefore, if $0.01(b-a)\geq2$ and $a$ is sufficiently large, then using Hoeffding's inequality we can bound $\P\bigl( X_{t_0}' > \tfrac{1}{2} (b-a) \bigr)$ above by \[ \P\bigl( W' > 0.49 (b-a) \bigr) \leq \exp \bigl[ -2(b-a) (0.49 - p')^2 \bigr] \leq e^{-c_6 (b-a)} \] for some constant $c_6 > 0$ that does not depend on~$a,b$. Hence \begin{equation} \label{ingr2.5} \P\bigl( X_{t_0}' > \tfrac{1}{2} (b-a) \bigr) \leq e^{200 c_6} e^{-c_6 (b-a)} \end{equation} for all values of~$b-a$ and sufficiently large~$a$. By Corollary~\ref{cor:tailY} and the assumption that $b -a \leq a^{2/3}$, the second term on the right in~\eqref{ingr2.3} satisfies \begin{equation} \label{ingr2.6} \P\bigl( Y^a_a < \tfrac{a}{e} - (\tfrac{a}{e})^{5/6} \bigr) \leq e^{-c_7 a^{2/3}} \leq e^{-c_7 (b-a)}, \end{equation} for some constant $c_7>0$ that does not depend on $a,b$. Combining \eqref{ingr2.3}, \eqref{ingr2.4}, \eqref{ingr2.5} and~\eqref{ingr2.6} gives~\eqref{ingr2.2}. Since $Y_a^a \leq S_a$ and $S_b \leq Z^b_b$, \eqref{ingr2.1} and~\eqref{ingr2.2} imply the statement of the lemma. \end{proof} \subsection{Proof of Propositions \ref{prop:ingr3}, \ref{prop:ingr2} and~\ref{prop:ingr1}} \label{sec:proofpropositions} \begin{proof}[Proof of Proposition~\ref{prop:ingr3}.] The proposition is a corollary of Lemma~\ref{lem:intervals} applied for a specific sequence of intervals, as we explain below. We define, for every $x\geq 0$, a sequence of intervals $I_k(x)$, $k\geq 0$, as follows. Let \[ \delta_x = \tfrac{1}{12} e^{-\frac{1}{3}x}, \] and let $I_0(x) = [ \floor{I_0^-(x)}, \ceil{I_0^+(x)} ]$ with \[ I_0^-(x) = e^{x-2\delta_x},\qquad I_0^+(x) = e^{x+2\delta_x}. \] Let $s_0$ be defined by~\eqref{s_0}, let $\gamma=1/4$ and let $c_0 = c_0(x)$ be given by~\eqref{c_0}, i.e., \[ c_0(x) = \frac{1}{4s_0} e^{\frac{1}{2}x-\frac{1}{2}} (e^{\delta_x} - e^{-\delta_x}). \] For $k\geq 1$, we define the interval~$I_k(x)$ by \eqref{left} and~\eqref{right}, with $I_0(x)$ and $c_0=c_0(x)$ as above, i.e., $I_k(x) = [ \floor{I_k^-(x)}, \ceil{I_k^+(x)} ]$ with \begin{align*} I_k^-(x) &= e^{x+k-2\delta_x} \Bigl( 1 + c_0(x) e^{\frac{1}{2} - \frac{1}{2}x + \delta_x} \sum\nolimits_{i=1}^k \sqrt{i} \, e^{-i/2} \Bigr), \\ I_k^+(x) &= e^{x+k+2\delta_x} \Bigl( 1 - c_0(x) e^{\frac{1}{2} - \frac{1}{2}x - \delta_x} \sum\nolimits_{i=1}^k \sqrt{i} \, e^{-i/2} \Bigr). \end{align*} We will apply Lemma~\ref{lem:intervals} for the sequence of intervals~$I_k(x)$ defined above. Since $e^{u} - e^{-u} = 2u + O(u^3)$ as $u\downarrow 0$, for our choice of intervals we have \[ c_0(x) = \frac{1}{24 s_0} e^{\frac16 x - \frac12} + O\bigl( e^{-\frac12 x} \bigr) \to \infty \text{ as $x\to\infty$}, \] hence the right hand side of~\eqref{boundintervals} tends to~0 as $x\to \infty$. Now let $\varepsilon>0$ and~$a_2$ be given. By Lemma~\ref{lem:intervals}, there exists $k_0 > 1+\log a_2$ such that \begin{equation} \label{ingr3.1a} \inf_{x\in[k_0,k_0+1]} \P\bigl( \text{for all $\textstyle n \in \bigcup_{k=1}^{\infty} I_k(x)$}, X^n_{k_n} \in I_0(x) \bigr) \geq 1-\varepsilon. \end{equation} Choose \[ \delta := \delta_{k_0+1} = \tfrac{1}{12} e^{-\frac{1}{3}(k_0+1)}. 
\] We will prove that for all $x\in [k_0,k_0+1]$, \begin{align} \label{ingr3.1b} I_k(x) &\supset [e^{x+k-\delta}, e^{x+k+\delta} ] \text{ for all $k\geq 1$}, \\ \label{ingr3.1c} I_0(x) &\subset [e^{x-3\delta}, e^{x-3\delta} + (e^{x-3\delta})^{2/3} ]. \end{align} Together, \eqref{ingr3.1a}, \eqref{ingr3.1b} and~\eqref{ingr3.1c} imply the statement of the proposition, where $x$ plays the role of $k_0+w$ in the proposition. To prove \eqref{ingr3.1b} and~\eqref{ingr3.1c}, let $x\in [k_0,k_0+1]$. The inclusion~\eqref{ingr3.1b} follows from the observations that for all $k\geq 1$, \begin{align*} \floor{I_k^-(x)} &\leq e^{x+k} \bigl( e^{-2\delta_x} + c_0(x) s_0 e^{-\frac12 x - \delta_x + \frac12} \bigr) \leq e^{x+k} \bigl( \tfrac34 e^{-2\delta} + \tfrac14 \bigr) \leq e^{x+k-\delta}, \\ \intertext{where we have used in the last two steps that $\delta \leq \delta_x \leq 1/12$, and likewise} \ceil{I_k^+(x)} &\geq e^{x+k} \bigl( e^{2\delta_x\phantom+} - c_0(x) s_0 e^{-\frac12 x + \delta_x +\frac12} \bigr) \geq e^{x+k} \bigl( \tfrac34 e^{2\delta\phantom+} + \tfrac14 \bigr) \geq e^{x+k+\delta}. \end{align*} Next we prove the inclusion~\eqref{ingr3.1c}. Since $\tfrac14 e^{-1/3} > \tfrac16$, for $k_0$ sufficiently large we have that \[ \floor{I_0^-(x)} = \floor[\big]{\exp\bigl( x - \tfrac16 e^{-\frac{1}{3} x} \bigr)} \geq \exp \bigl( x - \tfrac14 e^{-\frac13(k_0+1)} \bigr) = e^{x-3\delta}, \] and similarly $\ceil{I_0^+(x)} \leq e^{x+3\delta}$. Moreover, using the inequalities $e^{-6\delta} \geq 1-6\delta \geq 1-\tfrac12 e^{-x/3}$ we obtain \[ e^{x-6\delta} + e^{\frac23 x-5\delta} \geq \bigl( e^x + e^{\frac23 x} \bigr) (1-6\delta) \geq e^x + \tfrac12 e^{\frac23 x} - \tfrac12 e^{\frac13 x} \geq e^x, \] from which it follows that \[ e^{x-3\delta} + \bigl( e^{x-3\delta} \bigr)^{2/3} \geq e^{x+3\delta}. \] This proves~\eqref{ingr3.1c}, and completes the proof of the proposition. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:ingr2}.] The idea is to repeatedly apply Lemma~\ref{lem:ingr2} to the coupled processes $X^a_i$ and~$X^b_i$, with $a < b\leq a+a^{2/3}$, until the distance $X^b_i-X^a_i$ has decreased to a constant. To this end, however, $X^a_i$ and~$X^b_i$ should satisfy the conditions of Lemma~\ref{lem:ingr2} at each round, and this requires that we first strengthen the statement of the lemma somewhat. Let $S_n$, $n\geq 2$, be coupled as in Section~\ref{sec:coupling}. By Lemma~\ref{lem:coupling} and Corollary~\ref{cor:tailY}, and since $\tfrac4{11} < e^{-1}$, there exists $c_0>0$ such that for all~$a$, \begin{equation} \label{cor2.0} \P\bigl( S_a \geq \tfrac{4}{11} a \bigr) \geq 1 - e^{-c_0 a}. \end{equation} Next, note that it is a deterministic fact that if $S_b-S_a \leq \tfrac12(b-a)$, $b-a \leq a^{2/3}$ and $S_a \geq \tfrac{4}{11} a$, then \begin{equation} \label{cor2.01} S_b-S_a \leq \tfrac12 a^{2/3} \leq \tfrac12 \bigl( \tfrac{11}{4} S_a \bigr)^{2/3} \leq S_a^{2/3}. \end{equation} By \eqref{cor2.0}, \eqref{cor2.01} and Lemma~\ref{lem:ingr2}, there exist $a_0^*,c_1,c_2 >0$ such that, for all~$a,b$ with $a_0^*\leq a < b\leq a+a^{2/3}$, \begin{equation} \label{cor2.1} \P\bigl( S_b - S_a \leq \min\bigl\{ \tfrac12 (b-a), S_a^{2/3} \bigr\}, S_a \geq \tfrac{4}{11}a \bigr) \geq 1 - c_1 e^{-c_2 (b-a)}. \end{equation} The additional statements that $S_b-S_a\leq S_a^{2/3}$ and $S_a \geq \tfrac{4}{11}a$ in~\eqref{cor2.1} make this version of the statement of Lemma~\ref{lem:ingr2} suitable for repeated application to the coupled processes $X^a_i$ and~$X^b_i$. Let $\varepsilon>0$.
Define $a_0 := (a_0^*)^{100}$ and~$d\geq 2$ such that $\sum_{k=d}^{\infty} c_1 \exp(-c_2 k) \leq \varepsilon$, let $a,b$ be such that $a_0\leq a<b\leq a+a^{2/3}$, and let $T_0 = \inf\{ i\colon X_i^b - X_i^a \leq d \}$. We claim that \begin{multline} \label{cor2.2} \P\bigl( \text{$-1 \leq X^b_i - X^a_i \leq d$ and $X_i^a \geq a^{0.01}$ for some~$i$} \bigr) \\ \geq \P\bigl( \text{$X_{i+1}^b - X_{i+1}^a \leq \tfrac{1}{2} (X_i^b - X_i^a)$, $X_{i+1}^a \geq \tfrac{4}{11} X_i^a$ for all $i < T_0$} \bigr). \end{multline} Indeed, the following three facts together imply~\eqref{cor2.2}: \begin{enumerate} \item If $X^b_{i+1} - X^a_{i+1} \leq \tfrac12 (X^b_i - X^a_i)$ for every $i<0.97\log a$, then $X^b_i - X^a_i \leq d$ for some $i\leq 0.97\log a$ (and hence $T_0 \leq 0.97\log a$), since \[ (b-a) \bigl( \tfrac{1}{2} \bigr)^{0.97\log a-1} \leq 2a^{2/3} \bigl( \tfrac{1}{2} \bigr)^{0.97\log a} \leq 2 \leq d. \] \item If $X_{i+1}^a \geq \tfrac{4}{11} X_i^a$ in every round $i<0.97\log a$, then $X^a_i \geq a^{0.01}$ for all $i\leq 0.97\log a$, since \[ a \bigl( \tfrac{4}{11} \bigr)^{0.97 \log a} \geq a^{0.01}. \] \item If $X_i^b - X_i^a > 0$, then $X^b_{i+1} - X^a_{i+1} \geq -1$ a.s.\ by \eqref{Xni} and Lemma~\ref{lem:coupling}. \end{enumerate} The right hand side of~\eqref{cor2.2} is at least \begin{multline} \P\bigl( X_{i+1}^b - X_{i+1}^a \leq \min\bigl\{\tfrac{1}{2} (X_i^b - X_i^a),(X_{i+1}^a)^{2/3}\bigr\} \\ \text{and $X_{i+1}^a \geq \tfrac{4}{11} X_i^a$ for all $i < T_0$} \bigr) \geq 1-\sum_{k=d}^{\infty} c_1 e^{-c_2 k} \geq 1-\varepsilon, \label{cor2.3} \end{multline} where the first bound on the probability follows from repeated application of~\eqref{cor2.1}. Note that on the event considered in~\eqref{cor2.3} we have that $X_i^a \geq a^{0.01} \geq a_0^{0.01} \geq a_0^*$ for all $i<T_0$, so that we can indeed apply~\eqref{cor2.1}. The sum in~\eqref{cor2.3} is over all possible values that the distance $X^b_i-X^a_i$ can assume, and all larger values. The second inequality in~\eqref{cor2.3} follows from the definition of~$d$. Combining \eqref{cor2.2} and~\eqref{cor2.3} yields the desired result. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:ingr1}.] Let the $S_n$, $n\geq 2$, be coupled as in Section~\ref{sec:coupling}. By Lemma~\ref{lem:coupling} and Corollary~\ref{cor:tailY}, and since $\tfrac4{11} < e^{-1}$, there exists $c_0>0$ such that for all~$a$, \begin{equation} \label{cor1.0} \P\bigl( S_a \geq \tfrac{4}{11} a \bigr) \geq 1 - e^{-c_0 a}. \end{equation} By Lemma~\ref{lem:ingr2}, there exist $a_0,c_1,c_2 >0$ such that for all~$a,b$ satisfying $a_0\leq a < b\leq a+a^{2/3}$, \begin{equation} \label{cor1.1} \P\bigl( S_b - S_a \leq \tfrac12 (b-a) \bigr) \geq 1 - c_1 e^{-c_2 (b-a)}. \end{equation} Let $\varepsilon>0$ and~$d$ be given. Choose $d_0\geq d$ such that \begin{equation} \label{defd0} e^{-d_0} \leq \frac{\varepsilon}{3} \quad\text{and}\quad d_0 c_1^{d_0} e^{14 d_0-c_2 d_0^2} \leq \frac{\varepsilon}{3}, \end{equation} and define $T := d_0 \ceil{\exp(14d_0)}$. Now choose $a_1^*$ such that \begin{equation} \label{defa1} 2T e^{-c_0a_1^*} \leq \frac{\varepsilon}{3} \quad\text{and}\quad a_1^* \geq \max\bigl\{ a_0,(2d_0)^{3/2},8d_0 \bigr\}, \end{equation} and set $a_1 := a_1^*(11/4)^T$. Let $a,b$ be such that $a_1\leq a < b \leq a+d$, and consider the coupled processes $X^a_i$ and~$X^b_i$.
Define the events \begin{align*} A_k &:= \bigl\{ \text{$X^a_i, X^b_i \geq a_1^*$ for all $i\leq k$} \bigr\}, \\ B_k &:= \bigl\{ \text{$\abs{X^b_i - X^a_i}\leq 2d_0$ for all $i\leq k$} \bigr\}, \\ C_k &:= \bigl\{ \text{$\abs{X^b_i - X^a_i}\neq 0$ for all $i\leq k$} \bigr\}. \end{align*} Our goal is to show that~$C_T$ has small probability, which implies that with high probability, $\abs{X^b_i-X^a_i} = 0$ for some~$i$. Since \begin{equation} \label{cor1.2} \P(C_T) \leq \P(A_T^c) + \P(A_T\cap B_T^c) + P(A_T\cap B_T\cap C_T), \end{equation} it suffices to prove that $\P(A_T^c)$, $\P(A_T\cap B_T^c)$ and $\P(A_T\cap B_T\cap C_T)$ are small. We start with~$\P(A_T^c)$. Observe that it is a deterministic fact that if $a\geq a_1$ and $X^a_i \geq \tfrac4{11} X^a_{i-1}$ for all $i\leq T$, then by definition of~$a_1$, $X^a_i \geq a_1^*$ for all $i\leq T$. Hence, by \eqref{cor1.0} and~\eqref{defa1}, \begin{equation} \label{cor1.3} \P(A_T^c) \leq 2T e^{-c_0a_1^*} \leq \frac{\varepsilon}{3}. \end{equation} Next, we turn to $\P(A_T\cap B_T^c)$. Observe that we can consider the absolute differences $\abs{X^b_i - X^a_i}$, $i = 0,1,\dotsc$, as a random walk starting at $b-a \leq d_0$. By the definition~\eqref{Xni} of the~$X^n_i$ and Lemma~\ref{lem:coupling}, \[ \abs{X^b_{i+1} - X^a_{i+1}} \leq \abs{X^b_i - X^a_i} + 1 \text{ a.s.\ for all $i \geq 0$}. \] This implies that it can only be the case that $\abs{X^b_i-X^a_i} > 2d_0$ for some $i\leq T$, if there exists a $k<T-d_0$ such that $d_0\leq \abs{X^b_{k+j}-X^a_{k+j}} \leq d_0+j$ holds for $j = 0,1,2,\dotsc,d_0$. Hence, if we introduce the notation \[ D_{k,i} := A_{k+i} \cap \bigl\{ \text{$d_0\leq \abs{X^b_{k+j}-X^a_{k+j}}\leq d_0+j$ for $j=0,1,\dots,i$} \bigr\}, \] then we have that $\P(A_T\cap B_T^c) \leq \sum_{k<T-d_0} \P(D_{k,d_0})$, which is the same as \begin{equation} \label{cor1.4} \P(A_T\cap B_T^c) \leq \sum_{k=0}^{T-d_0-1} \prod_{i=1}^{d_0} \P(D_{k,i} \mid D_{k,i-1}) \cdot \P(D_{k,0}). \end{equation} Now, by~\eqref{defa1}, if $X_i^a,X_i^b \geq a_1^*$ and $\abs{X_i^b - X_i^a} \leq 2d_0$, then it also holds that ${\abs{X^b_i - X^a_i}} \leq \min\{ X^a_i,X^b_i \}^{2/3}$. Moreover, if the absolute difference $\abs{X^b_i-X^a_i}$ is strictly less than~$2d_0$, it will drop below~$d_0$ if it decreases by at least $\tfrac12\abs{X^b_i-X^a_i}$ at the next step. Therefore, it follows from~\eqref{cor1.1} that \begin{equation} \label{cor1.5} \P(D_{k,i} \mid D_{k,i-1}) \leq \P\bigl( \text{$\abs{X^b_{k+i}-X^a_{k+i}}\geq d_0$} \bigm| D_{k,i-1} \bigr) \leq c_1 e^{-c_2 d_0} \end{equation} for $i\leq d_0$. Together, \eqref{cor1.4}, \eqref{cor1.5} and~\eqref{defd0} give \begin{equation} \label{cor1.6} \P\bigl( A_T\cap B_T^c \bigr) \leq (T-d_0) \bigl( c_1e^{-c_2 d_0} \bigr)^{d_0} \leq d_0 c_1^{d_0} e^{14 d_0-c_2 d_0^2} \leq \frac{\varepsilon}{3}. \end{equation} Finally, we consider $\P(A_T\cap B_T\cap C_T)$. To simplify the notation, write $E_i = A_i\cap B_i\cap C_i$ for $i\geq0$. Then we have \[ \P(E_T) = \prod_{i=1}^T \P(E_i \mid E_{i-1}) \P(E_0) \leq \prod_{i=1}^T \P(C_i\mid E_{i-1}). \] Since on the event~$E_i$, $\abs{X^b_i-X^a_i} \leq 2d_0$ and $2d_0 \leq \tfrac14 a_1^*\leq \tfrac14\min\{X^b_i,X^a_i\}$, by Lemma~\ref{lem:ingr1} each factor in this product is bounded above by $1-e^{-14d_0}$. Hence, using the inequality $1-u \leq e^{-u}$ and~\eqref{defd0}, \begin{equation} \label{cor1.7} \P(A_T\cap B_T\cap C_T) = \P(E_T) \leq \bigl(1-e^{-14d_0}\bigr)^T \leq e^{-d_0} \leq \frac{\varepsilon}{3}. 
\end{equation} Combining \eqref{cor1.2}, \eqref{cor1.3}, \eqref{cor1.6}, and~\eqref{cor1.7} gives \[ \P\bigl( \text{$\abs{X^b_i-X^a_i}=0$ for some $i\leq T$} \bigr) = 1-\P(C_T) \geq 1-\varepsilon.\qedhere \] \end{proof} \section*{} \subsection*{Acknowledgment:} We thank Henk Tijms for drawing our attention to the group Russian roulette problem. \end{document}
\begin{document} \begin{abstract} The $C^{\ast}$-algebra $\mathcal{U}_{nc}(n)$ is the universal $C^{\ast}$-algebra generated by $n^2$ generators $u_{ij}$ that make up a unitary matrix. We prove that Kirchberg's formulation of Connes' embedding problem has a positive answer if and only if $\mathcal{U}_{nc}(2) \otimes_{\min} \mathcal{U}_{nc}(2)=\mathcal{U}_{nc}(2) \otimes_{\max} \mathcal{U}_{nc}(2)$. Our results follow from properties of the finite-dimensional operator system $\mathcal{V}_n$ spanned by $1$ and the generators of $\mathcal{U}_{nc}(n)$. We show that $\mathcal{V}_n$ is an operator system quotient of $M_{2n}$ and has the OSLLP. We obtain necessary and sufficient conditions on $\mathcal{V}_n$ for there to be a positive answer to Kirchberg's problem. Finally, in analogy with recent results of Ozawa, we show that a form of Tsirelson's problem related to $\mathcal{V}_n$ is equivalent to Connes' embedding problem. \end{abstract} \keywords{Connes' Embedding Problem; Kirchberg's Conjecture; Operator Systems; Unitary Correlation Sets} \maketitle \tableofcontents \section*{Introduction} One of the most significant open problems in the field of operator algebras is Connes' embedding problem \cite{connes}, which asks whether every finite von Neumann algebra with separable predual can be embedded into the ultrapower of the hyperfinite $II_1$ factor in a trace-preserving way. One of the simpler formulations of the problem is known as Kirchberg's problem \cite[Proposition 8]{kirchberg93}, which asks whether or not $C^*(F_n) \otimes_{\min} C^*(F_n)=C^*(F_n) \otimes_{\max} C^*(F_n)$ for some (equivalently all) $n \geq 2$, where $F_n$ is the free group on $n$ generators. Due to recent work in \cite{fritz}, \cite{junge}, and \cite{ozawa}, another equivalent statement of the embedding problem is in terms of certain sets of quantum bipartite correlations (see \cite{fritz}, \cite{junge}, \cite{ozawa} and \cite{tsirelson} for more information on these correlations). One of our main results is that Kirchberg's problem, stated in terms of $C^*(F_n)$, is equivalent to the same problem when $C^*(F_n)$ is replaced by Brown's $C^{\ast}$-algebra $\mathcal{U}_{nc}(n)$, defined in \cite{brown} as the universal $C^{\ast}$-algebra whose generators make up a unitary $n \times n$ matrix. In other words, Connes' embedding problem has a positive answer if and only if $\mathcal{U}_{nc}(2) \otimes_{\min} \mathcal{U}_{nc}(2)=\mathcal{U}_{nc}(2) \otimes_{\max} \mathcal{U}_{nc}(2)$ (see Theorem \ref{uncequivalenttokirchberg}). On the way to proving this equivalence, we obtain a new proof of Kirchberg's theorem \cite[Corollary 1.2]{kirchberg} that $C^*(F_n) \otimes_{\min} \cB(\cH)=C^*(F_n) \otimes_{\max} \cB(\cH)$ for every Hilbert space $\mathcal{H}$. Both of these results follow from properties of the finite-dimensional operator system $\mathcal{V}_n$ spanned by the generators of $\mathcal{U}_{nc}(n)$. The significance of stating Kirchberg's conjecture in terms of $\mathcal{U}_{nc}(n)$ lies in recent advances in quantum information theory. Indeed, the phenomenon of embezzling entanglement in a bipartite scenario can be modelled using states on tensor products of $\mathcal{U}_{nc}(n)$ (see \cite{CLP} for more information on embezzlement of entanglement).
Our study of the $C^{\ast}$-algebra $\mathcal{U}_{nc}(n)$ and states on the tensor products $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{U}_{nc}(n)$ and $\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n)$ allows for a theory of so-called ``unitary correlation sets'', which is motivated by their connection to quantum information theory. A problem involving unitary correlation sets that is analogous to Tsirelson's problem is shown to also be equivalent to Kirchberg's problem. The methods in this paper draw on many results in the recent theory of operator system tensor products and operator system quotients. In Section $\S 1$ we review some basic results in the theory of operator system tensor products, and in Section $\S 2$ we recall some nuclearity-related properties of operator systems that arise in equivalent formulations of Kirchberg's problem. Section $\S 3$ gives a short introduction to the theory of operator system quotients. In Section $\S 4$, we explore properties of the operator system $\mathcal{V}_n$ and the $C^{\ast}$-algebra $\mathcal{U}_{nc}(n)$ while giving alternate characterizations of the WEP and DCEP in terms of tensor products with $\mathcal{V}_n$. We link both $\mathcal{V}_n$ and $\mathcal{U}_{nc}(n)$ to Kirchberg's problem in Section $\S 5$. Finally, Section $\S 6$ draws on some results in quantum bipartite correlations in the field of quantum information theory, and an analogous theory of such correlations is developed in terms of the $C^{\ast}$-algebra $\mathcal{U}_{nc}(n)$. \section{Operator Systems and their Tensor Products} In this section, we include a short introduction to operator systems and their tensor theory. The interested reader can see \cite{KPTT} for a thorough introduction to the subject. First, we give the abstract definition of an operator system, bearing in mind that concrete operator systems are always self-adjoint vector subspaces of $\cB(\cH)$, for some Hilbert space $\mathcal{H}$, which contain the identity element. Assume that $\mathcal{S}$ is a complex vector space. An \textbf{involution} on $\mathcal{S}$ is a map $*:\mathcal{S} \to \mathcal{S}$ such that for all $x,y \in \mathcal{S}$ and $\alpha \in \mathbb{C}$, \begin{itemize} \item $(\alpha x+y)^*=\overline{\alpha} x^*+y^*$, and \item $(x^*)^*=x$. \end{itemize} We denote by $\mathcal{S}_h$ the set of all $x \in \mathcal{S}$ with $x=x^*$. We call $\mathcal{S}$ a \textbf{$*$-vector space} if it is a complex vector space equipped with an involution. Whenever $\mathcal{S}$ is a $*$-vector space, there is a natural way to make $M_n(\mathcal{S})$ into a $*$-vector space, where $M_n(\mathcal{S})$ is the space of all $n \times n$ matrices with entries in $\mathcal{S}$; indeed, one may let $(x_{ij})^*=(x_{ji}^*)$ for each $(x_{ij}) \in M_n(\mathcal{S})$. Given a $*$-vector space $\mathcal{S}$, a \textbf{matrix ordering} on $\mathcal{S}$ is a set of cones $\{C_n\}_{n=1}^{\infty}$, where $C_n \subseteq (M_n(\mathcal{S}))_h$, satisfying the following conditions: \begin{itemize} \item $C_n+C_n \subseteq C_n$ and $tC_n \subseteq C_n$ for all $t \geq 0$ and $n \in \mathbb{N}$, \item $A^*C_nA \subseteq C_m$ for all $n,m \in \mathbb{N}$ and $A \in M_{n,m}(\mathbb{C})$, and \item $C_n \cap (-C_n)=\{0\}$ for all $n \in \mathbb{N}$. \end{itemize} A $*$-vector space $\mathcal{S}$ is said to be a \textbf{matrix-ordered $*$-vector space} if there is a matrix order $\{C_n\}_{n=1}^{\infty}$ on $\mathcal{S}$.
We say that an element $e \in \mathcal{S}_h$ is an \textbf{order unit} if for every $x \in \mathcal{S}_h$, there is $r>0$ such that $x+re \in C_1$. We call $e$ an \textbf{Archimedean order unit} if whenever $x \in \mathcal{S}_h$ is such that $x+re \in C_1$ for all $r>0$, we have $x \in C_1$. We call $e$ a \textbf{matrix order unit} if for each $n \in \mathbb{N}$, $I_n=\begin{pmatrix} e \\ & \ddots \\ & & e \end{pmatrix} \in M_n(\mathcal{S})$ is an order unit for $M_n(\mathcal{S})$. Note that $e$ is a matrix order unit if and only if it is an order unit for $\mathcal{S}$ with respect to the cone $C_1$. We say that $e$ is an \textbf{Archimedean matrix order unit} provided that each $I_n$ is an Archimedean order unit. With this terminology in hand, we can define the abstract version of operator systems. An \textbf{(abstract) operator system} is a triple $(\mathcal{S},\{C_n\}_{n=1}^{\infty},e)$ where $\mathcal{S}$ is a $*$-vector space, $\{C_n\}_{n=1}^{\infty}$ is a matrix ordering on $\mathcal{S}$, and $e$ is an Archimedean matrix order unit for $\mathcal{S}$. We will usually refer to $\mathcal{S}$ as an operator system, allowing the context to dictate which matrix ordering is being used. If $(\mathcal{S},\{C_n\}_{n=1}^{\infty},e)$ and $(\mathcal{T},\{D_n\}_{n=1}^{\infty},e)$ are operator systems with $\mathcal{T} \subseteq \mathcal{S}$, we will say that $\mathcal{T}$ is an \textbf{operator subsystem} of $\mathcal{S}$ provided that $\mathcal{T}$ has the same $*$-vector space structure as $\mathcal{S}$ and $D_n=C_n \cap M_n(\mathcal{T})$ for all $n \in \mathbb{N}$. Given operator systems $\mathcal{S}$ and $\mathcal{T}$ and a number $k \in \mathbb{N}$, we say that a linear map $\varphi:\mathcal{S} \to \mathcal{T}$ is \textbf{$k$-positive} provided that for all $n \leq k$, $(\varphi(x_{ij})) \in M_n(\mathcal{T})_+$ whenever $(x_{ij}) \in M_n(\mathcal{S})_+$. We say that $\varphi$ is \textbf{completely positive} if it is $k$-positive for every $k \in \mathbb{N}$. We use the abbreviation ``ucp'' for ``unital and completely positive''. A ucp map $\varphi:\mathcal{S} \to \mathcal{T}$ is a \textbf{complete order isomorphism} provided that $\varphi$ is a bijection and $\varphi^{-1}:\mathcal{T} \to \mathcal{S}$ is also ucp. We will also need the notion of an \textbf{order isomorphism}, which is a linear map $\varphi:\mathcal{S} \to \mathcal{T}$ between operator systems that is a bijection, such that $\varphi$ and $\varphi^{-1}$ are $1$-positive. Finally, given a linear map $\varphi:\mathcal{S} \to \mathcal{T}$ between operator systems, we say that $\varphi$ is a \textbf{complete order embedding} (or \textbf{complete order injection}) if there is an operator subsystem $\mathcal{T}_1 \subseteq \mathcal{T}$ such that $\varphi(\mathcal{S})=\mathcal{T}_1$ and $\varphi:\mathcal{S} \to \mathcal{T}_1$ is a complete order isomorphism. The following celebrated result of Choi and Effros ensures that there is no difference between considering abstract operator systems and concrete operator systems. \begin{mythe} \emph{(Choi-Effros, \cite{choieffros})} Let $\mathcal{S}$ be an abstract operator system equipped with Archimedean matrix order unit $e$. Then there is a Hilbert space $\mathcal{H}$ and a complete order embedding $\varphi:\mathcal{S} \to \cB(\cH)$ such that $\varphi(e)=I_{\mathcal{H}}$. Conversely, any operator system contained in $\cB(\cH)$ is an abstract operator system with Archimedean matrix order unit $e=I_{\mathcal{H}}$.
\end{mythe} It will be useful to consider matricial states on an operator system $\mathcal{S}$. By way of notation, for each $n \in \mathbb{N}$, we let $S_n(\mathcal{S})$ be the set of all ucp maps from $\mathcal{S}$ into $M_n$, and we let $S_{\infty}(\mathcal{S})=\bigcup_{n \in \mathbb{N}} S_n(\mathcal{S})$. We will often use the term \textbf{state} to refer to elements of $S_1(\mathcal{S})$. Any operator system $\mathcal{S}$ has a sequence of matrix norms which, when completed, give $\mathcal{S}$ an operator space structure. Indeed, if $(\mathcal{S},\{C_n\}_{n=1}^{\infty},e)$ is the operator system structure on $\mathcal{S}$ and $X \in M_n(\mathcal{S})$, the norms $$\|X\|_n=\inf \left\{r>0: \begin{pmatrix} rI_n & X \\ X^* & rI_n\end{pmatrix} \in C_{2n} \right\}$$ make $\mathcal{S}$ into a matricially normed space (see, for example, \cite[Chapter 13]{paulsen02}). Before moving on to tensor products, we require some information about duals of operator systems. The Banach space dual of an operator system $\mathcal{S}$ can be given the structure of a matrix-ordered $*$-vector space \cite[Lemma 4.2, Lemma 4.3]{choieffros}. This structure is given as follows: let $\mathcal{S}^d$ denote the Banach space dual of $\mathcal{S}$. Given $f \in \mathcal{S}^d$, we define $f^* \in \mathcal{S}^d$ by $f^*(x)=\overline{f(x^*)}$ for $x \in \mathcal{S}$, which makes $\mathcal{S}^d$ into a $*$-vector space. We say an element $(f_{ij}) \in M_n(\mathcal{S}^d)$ is positive provided that the map $F:\mathcal{S} \to M_n$ given by $F(x)=(f_{ij}(x))$ is completely positive. If we also assume that $\mathcal{S}$ is finite-dimensional, then $\mathcal{S}^d$ becomes an operator system with order unit given by a faithful state on $\mathcal{S}$ \cite{choieffros}. In fact, if $\mathcal{S}$ is finite-dimensional, then $\mathcal{S}^{dd}$ and $\mathcal{S}$ are completely order isomorphic via the canonical map $i:\mathcal{S} \to \mathcal{S}^{dd}$. \begin{mydef} \emph{(Kavruk-Paulsen-Todorov-Tomforde, \cite{KPTT})} Let $\mathcal{S}$ and $\mathcal{T}$ be operator systems. A collection of matricial cones $\tau=\{C_n\}_{n=1}^{\infty}$ with $C_n \subseteq M_n(\mathcal{S} \otimes \mathcal{T})_h$ is said to be an \textbf{operator system structure} on $\mathcal{S} \otimes \mathcal{T}$ if \begin{enumerate} \item $(\mathcal{S} \otimes \mathcal{T},\{C_n\}_{n=1}^{\infty},1_{\mathcal{S}} \otimes 1_{\mathcal{T}})$ is an operator system, \item $(s_{ij} \otimes t_{k\ell}) \in C_{nm}$ whenever $(s_{ij}) \in M_n(\mathcal{S})_+$, $(t_{k\ell}) \in M_m(\mathcal{T})_+$ and $n,m \in \mathbb{N}$, and \item whenever $n,k \in \mathbb{N}$ and $\varphi \in S_n(\mathcal{S})$ and $\psi \in S_k(\mathcal{T})$, then $\varphi \otimes \psi \in S_{nk}(\mathcal{S} \otimes \mathcal{T})$ with respect to the collection of matricial cones $\tau=\{C_n\}_{n=1}^{\infty}$. \end{enumerate} We denote by $\mathcal{S} \otimes_{\tau} \mathcal{T}$ the resulting operator system. \end{mydef} Let $\mathcal{O}$ be the category of operator systems with ucp maps as the morphisms. Following the definitions in \cite{KPTT}, we say that a mapping $\tau:\mathcal{O} \times \mathcal{O} \to \mathcal{O}$ given by $(\mathcal{S},\mathcal{T}) \mapsto \mathcal{S} \otimes_{\tau} \mathcal{T}$ is an \textbf{operator system tensor product} if for all operator systems $\mathcal{S}$ and $\mathcal{T}$, the matrix ordering on $\tau(\mathcal{S},\mathcal{T})$ is an operator system structure on $\mathcal{S} \otimes \mathcal{T}$.
We say that an operator system tensor product $\tau$ is \textbf{functorial} if it satisfies the following property: \begin{itemize} \item If $\mathcal{S}_1$ and $\mathcal{T}_1$ are operator systems and $\varphi:\mathcal{S} \to \mathcal{S}_1$ and $\psi:\mathcal{T} \to \mathcal{T}_1$ are ucp maps, then $\varphi \otimes \psi: \mathcal{S} \otimes_{\tau} \mathcal{T} \to \mathcal{S}_1 \otimes_{\tau} \mathcal{T}_1$ is ucp. \end{itemize} \begin{mydef} \emph{(Kavruk-Paulsen-Todorov-Tomforde, \cite{KPTT})} Let $\mathcal{S}$ and $\mathcal{T}$ be operator systems. The \textbf{minimal tensor product} of $\mathcal{S}$ and $\mathcal{T}$ is the vector space $\mathcal{S} \otimes \mathcal{T}$, with order unit $1 \otimes 1$, equipped with positive cones in $M_n(\mathcal{S} \otimes \mathcal{T})$ given by the set $C_n^{\min}(\mathcal{S},\mathcal{T})$ of all $X \in M_n(\mathcal{S} \otimes \mathcal{T})$ for which $X=X^*$ and $\varphi \otimes \psi(X) \geq 0$ whenever $\varphi \in S_{\infty}(\mathcal{S})$ and $\psi \in S_{\infty}(\mathcal{T})$. \end{mydef} \begin{mydef} \emph{(Kavruk-Paulsen-Todorov-Tomforde, \cite{KPTT})} Given operator systems $\mathcal{S},\mathcal{T}$, the \textbf{commuting tensor product} of $\mathcal{S}$ and $\mathcal{T}$ is the vector space $\mathcal{S} \otimes \mathcal{T}$ with order unit $1 \otimes 1$, with positive cones $C_n^{\text{comm}}(\mathcal{S},\mathcal{T})$ given by the following property: $X \in M_n(\mathcal{S} \otimes \mathcal{T})_h$ is in $C_n^{\text{comm}}(\mathcal{S},\mathcal{T})$ if and only if whenever $\mathcal{H}$ is a Hilbert space and $\varphi:\mathcal{S} \to \cB(\cH)$ and $\psi:\mathcal{T} \to \cB(\cH)$ are ucp maps with commuting ranges, then $\varphi \cdot \psi(X) \geq 0$, where $\varphi \cdot \psi(s \otimes t):=\varphi(s)\psi(t)$ for all $s \in \mathcal{S}$ and $t \in \mathcal{T}$. \end{mydef} \begin{mydef} \emph{(Kavruk-Paulsen-Todorov-Tomforde, \cite{KPTT})} Let $\mathcal{S},\mathcal{T}$ be operator systems. For each $n \in \mathbb{N}$, define the set $D_n^{\max}(\mathcal{S},\mathcal{T})$ to be the set of all $X \in M_n(\mathcal{S} \otimes \mathcal{T})_h$ for which there exist $S \in M_k(\mathcal{S})_+$, $T \in M_m(\mathcal{T})_+$ and a linear map $A:\mathbb{C}^k \otimes \mathbb{C}^m \to \mathbb{C}^n$ such that $$X=A(S \otimes T)A^*.$$ Then the \textbf{maximal tensor product} of $\mathcal{S}$ and $\mathcal{T}$ is defined to be the operator system $(\mathcal{S} \otimes \mathcal{T},1 \otimes 1,C_n^{\max}(\mathcal{S},\mathcal{T}))$, where $$C_n^{\max}(\mathcal{S},\mathcal{T})=\{ X \in M_n(\mathcal{S} \otimes \mathcal{T})_h: \forall \varepsilon>0, X+\varepsilon 1 \in D_n^{\max}(\mathcal{S},\mathcal{T})\}.$$ \end{mydef} Each of $\min$, $c$ and $\max$ is a functorial operator system tensor product \cite{KPTT}. For finite-dimensional operator systems, the min and max tensor products are dual to each other. \begin{pro} \emph{(Farenick-Paulsen, \cite{FP})} \label{dualofminmax} If $\mathcal{S}$ and $\mathcal{T}$ are finite-dimensional operator systems, then $(\mathcal{S} \otimes_{\min} \mathcal{T})^d$ is completely order isomorphic to $\mathcal{S}^d \otimes_{\max} \mathcal{T}^d$, and $(\mathcal{S} \otimes_{\max} \mathcal{T})^d$ is completely order isomorphic to $\mathcal{S}^d \otimes_{\min} \mathcal{T}^d$. \end{pro} Two more tensor products are of interest: the \textbf{essential left} and \textbf{essential right} tensor products. \begin{mydef} \emph{(Kavruk-Paulsen-Todorov-Tomforde, \cite{KPTT})} Let $\mathcal{S},\mathcal{T}$ be operator systems.
Define the operator system $\mathcal{S} \otimes_{\text{el}} \mathcal{T}$ to be the operator system structure arising from the inclusion $\mathcal{S} \otimes \mathcal{T} \subseteq \mathcal{I}(\mathcal{S}) \otimes_{\max} \mathcal{T}$, where $\mathcal{I}(\mathcal{S})$ is the injective envelope of $\mathcal{S}$. (See \cite{hamana} or \cite[Chapter 15]{paulsen02} for more on injective envelopes.) Similarly, define $\mathcal{S} \otimes_{\text{er}} \mathcal{T}$ to be the operator system structure arising from the inclusion $\mathcal{S} \otimes \mathcal{T} \subseteq \mathcal{S} \otimes_{\max} \mathcal{I}(\mathcal{T})$. \end{mydef} The tensor products $\text{er}$ and $\text{el}$ are examples of asymmetric tensor products; in fact, the map $s \otimes t \mapsto t \otimes s$ induces a complete order isomorphism $\mathcal{S} \otimes_{\text{er}} \mathcal{T} \simeq \mathcal{T} \otimes_{\text{el}} \mathcal{S}$ and $\mathcal{S} \otimes_{\text{el}} \mathcal{T} \simeq \mathcal{T} \otimes_{\text{er}} \mathcal{S}$. Given two operator system tensor products $\alpha,\beta$, we will write $\alpha \leq \beta$ to mean that whenever $\mathcal{S},\mathcal{T}$ are operator systems, the identity map $\text{id}:\mathcal{S} \otimes_{\beta} \mathcal{T} \to \mathcal{S} \otimes_{\alpha} \mathcal{T}$ is ucp. For example, we have the following (see \cite{KPTT}): $$\min \leq \text{er}, \, \text{el} \leq c \leq \max.$$ For operator system tensor products $\alpha,\beta$, we say that an operator system $\mathcal{S}$ is $(\alpha,\beta)$-nuclear if $\mathcal{S} \otimes_{\alpha} \mathcal{T}=\mathcal{S} \otimes_{\beta} \mathcal{T}$ for all operator systems $\mathcal{T}$. Equivalently, $\mathcal{S}$ is $(\alpha,\beta)$-nuclear if the identity map $\text{id}:\mathcal{S} \otimes_{\alpha} \mathcal{T} \to \mathcal{S} \otimes_{\beta} \mathcal{T}$ is a complete order isomorphism for every operator system $\mathcal{T}$. Frequently, the theory of operator system tensor products has been motivated by the theory of operator space tensor products. In particular, suppose that $X$ is an operator space contained in $\cB(\cH)$. Then there is a canonical operator system that contains a completely isometric copy of $X$; namely, one may define $$\mathcal{S}_X=\left\{ \begin{pmatrix} \lambda I_{\mathcal{H}} & x \\ y^* & \mu I_{\mathcal{H}} \end{pmatrix} \in M_2(\cB(\cH)): x,y \in X, \, \lambda,\mu \in \mathbb{C} \right\},$$ which is equipped with the complete isometry $X \hookrightarrow \mathcal{S}_X$ given by sending $x \in X$ to the matrix $\begin{pmatrix} 0 & x \\ 0 & 0 \end{pmatrix}$. See \cite[Chapter 8]{paulsen02} for more information on $\mathcal{S}_X$. It is left to the reader to check that $\mathcal{S}_X$ does not depend on the embedding $X \subseteq \cB(\cH)$, up to unital complete order isomorphism. For operator spaces $X$ and $Y$, considering the copy of $X \otimes Y$ inside of $\mathcal{S}_X \otimes \mathcal{S}_Y$ gives rise to operator space tensor products. For any operator spaces $X,Y$ and operator system tensor product $\tau$, there is a natural operator space tensor product, denoted by $X \otimes^{\tau} Y$, given by the inclusion of $X \otimes Y$ in $\mathcal{S}_X \otimes_{\tau} \mathcal{S}_Y$. We will refer to the tensor product $X \otimes^{\tau} Y$ as the \textbf{induced operator space tensor product} of $X$ and $Y$ (see \cite{KPTT}).
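The corner embedding $X \hookrightarrow \mathcal{S}_X$ above is completely isometric because of the familiar fact that, for an operator $x$ and $r>0$, one has $\|x\| \leq r$ if and only if $\begin{pmatrix} rI & x \\ x^* & rI \end{pmatrix} \geq 0$; this is the same fact that underlies the formula for the matrix norms $\|\cdot\|_n$ recalled earlier in this section. Purely as an illustration (and assuming only a standard Python installation with NumPy, which is no part of the development in this paper), the following snippet checks this equivalence numerically for a random matrix:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
norm_x = np.linalg.norm(x, 2)            # operator (spectral) norm of x

def block_is_psd(r):
    # Smallest eigenvalue of [[r I, x], [x*, r I]]; nonnegative iff PSD.
    top = np.hstack([r * np.eye(3), x])
    bot = np.hstack([x.conj().T, r * np.eye(3)])
    return np.linalg.eigvalsh(np.vstack([top, bot])).min() >= -1e-10

# The block matrix is positive exactly when r >= ||x||.
print(norm_x)
print(block_is_psd(norm_x * 1.001), block_is_psd(norm_x * 0.999))  # True, False
\end{verbatim}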
Two important operator space tensor products are the injective tensor product and the projective tensor product, which we will denote by $X \check{\otimes} Y$ and $X \widehat{\otimes} Y$, respectively (see \cite{blecherpaulsen}). The relation between tensor products of operator systems of the form $\mathcal{S}_X$ and operator spaces is outlined in the following theorem. \begin{mythe} \emph{(Kavruk-Paulsen-Todorov-Tomforde, \cite{KPTT})} \label{canonicalopsys} Let $X,Y$ be operator spaces. The following are true: \begin{enumerate} \item $X \otimes^{\min} Y=X \check{\otimes} Y$ completely isometrically. \item $X \otimes^{\max} Y=X \widehat{\otimes} Y$ completely isometrically. \end{enumerate} \end{mythe} Similar to the operator system $\mathcal{S}_X$, one may define another ``canonical'' operator system of an operator space $X$ by letting $$\mathcal{S}_X^0=\left\{ \begin{pmatrix} \lambda I_{\mathcal{H}} & x \\ y^* & \lambda I_{\mathcal{H}} \end{pmatrix} \in M_2(\cB(\cH)): x,y \in X, \, \lambda \in \mathbb{C} \right\}.$$ If $\tau$ is an operator system tensor product and $X,Y$ are operator spaces, then we shall denote by $X \otimes_0^{\tau} Y$ the operator space structure on $X \otimes Y$ induced by the inclusion $X \otimes Y \subseteq \mathcal{S}_X^0 \otimes_{\tau} \mathcal{S}_Y^0$. The analogous result to Theorem \ref{canonicalopsys} holds for $\mathcal{S}_X^0$ as well. For completeness, we include the proofs. \begin{lem} Let $X,Y$ be operator spaces. Then the inclusion $X \otimes Y \subseteq \mathcal{S}_X^0 \otimes \mathcal{S}_Y^0$ gives rise to an operator space tensor product of $X$ and $Y$; that is, the following conditions hold (in the sense of \cite{blecherpaulsen}): \begin{enumerate} \item If $x \in M_n(X)$ and $y \in M_m(Y)$, then $$\|x \otimes y\|_{M_{nm}(X \otimes_0^{\tau} Y)} \leq \|x\|_{M_n(X)} \|y\|_{M_m(Y)}.$$ \item If $\phi:X \to M_n$ and $\psi:Y \to M_m$ are completely bounded maps, then $\phi \otimes \psi: X \otimes_0^{\tau} Y \to M_{mn}$ is completely bounded, with $\| \phi \otimes \psi \|_{cb} \leq \|\phi \|_{cb} \|\psi \|_{cb}$. \end{enumerate} \end{lem} \begin{proof} By \cite[Proposition 3.4]{KPTT}, treating $\mathcal{S}_X^0 \otimes_{\tau} \mathcal{S}_Y^0$ as an operator space yields an operator space tensor product of the operator spaces $\mathcal{S}_X^0$ and $\mathcal{S}_Y^0$. Since the inclusions $X \subseteq \mathcal{S}_X^0$ and $Y \subseteq \mathcal{S}_Y^0$ are complete isometries, condition (1) follows since it holds for the operator space tensor product $\mathcal{S}_X^0 \otimes_{\tau} \mathcal{S}_Y^0$. To show that condition (2) holds, we may assume without loss of generality that $\phi$ and $\psi$ are completely contractive. Let $\Phi:\mathcal{S}_X^0 \to M_2(M_n)$ be defined by $$\Phi \left( \begin{pmatrix} \lambda & x_1 \\ x_2^* & \lambda \end{pmatrix} \right)=\begin{pmatrix} \lambda I_n & \phi(x_1) \\ \phi(x_2)^* & \lambda I_n \end{pmatrix}.$$ The proof that $\Phi$ is ucp is standard and analogous to \cite[Lemma 8.1]{paulsen02}. Similarly, the map $\Psi:\mathcal{S}_Y^0 \to M_2(M_m)$ given by $$\Psi \left( \begin{pmatrix} \lambda & y_1 \\ y_2^* & \lambda \end{pmatrix} \right)=\begin{pmatrix} \lambda I_m & \psi(y_1) \\ \psi(y_2)^* & \lambda I_m \end{pmatrix}$$ is unital and completely positive. By Property (3) of operator system tensor products, $\Phi \otimes \Psi: \mathcal{S}_X^0 \otimes_{\tau} \mathcal{S}_Y^0 \to M_{4mn}$ is ucp.
Compressing to a corner block yields $\phi \otimes \psi$, so that $\|\phi \otimes \psi \|_{cb} \leq 1$. \end{proof} Since the minimal operator system tensor product and the spatial operator space tensor product are injective (and equal for operator systems), the following result is immediate. \begin{mythe} \label{canonicalmin} Let $X$ and $Y$ be operator spaces. Then $X \otimes_0^{\min} Y$ is completely isometric to $X \check{\otimes} Y$. \end{mythe} The analogous result also holds for the maximal operator system tensor product. \begin{mythe} \label{canonicalmax} Let $X$ and $Y$ be operator spaces. Then $X \otimes_0^{\max} Y$ is completely isometric to $X \widehat{\otimes} Y$. \end{mythe} \begin{proof} Let $U \in M_p(X \otimes Y)$. Define $\|U\|^{\max}$ to be the norm of $U$ in $\mathcal{S}_X^0 \otimes_{\max} \mathcal{S}_Y^0$, and define $\|U\|$ to be the norm of $U$ in $X \widehat{\otimes} Y$. First, suppose that $\|U\|<1$. Every operator system tensor product is an operator space tensor product \cite[Proposition 3.4]{KPTT}, so that $\| \cdot \|^{\max}$ is smaller than the projective tensor product norm \cite{blecherpaulsen}. Hence, $\|U\|^{\max}<1$. Conversely, suppose that $\|U\|^{\max}<1$. As in the proof of \cite[Theorem 5.9]{KPTT}, let $e=\begin{pmatrix} e_1 & 0 \\ 0 & e_2 \end{pmatrix}$ and $f=\begin{pmatrix} f_1 & 0 \\ 0 & f_2 \end{pmatrix}$ be the identities of $\mathcal{S}^0_X$ and $\mathcal{S}^0_Y$, respectively; then $e \otimes f$ is the identity of $\mathcal{S}^0_X \otimes_{\max} \mathcal{S}^0_Y$. Write $U=(u_{r,s}) \in M_p(X \otimes Y)$. Then $$\begin{pmatrix} \|U\|^{\max} (e \otimes f)_p & U \\ U^* & \|U\|^{\max}(e \otimes f)_p \end{pmatrix} \in C_{2p}^{\max}(\mathcal{S}_X^0,\mathcal{S}_Y^0).$$ By adding $(1-\|U\|^{\max}) \begin{pmatrix} (e \otimes f)_p & 0 \\ 0 & (e \otimes f)_p \end{pmatrix}$, it follows that $$\begin{pmatrix} (e \otimes f)_p & U \\ U^* & (e \otimes f)_p \end{pmatrix} \in D_{2p}^{\max}(\mathcal{S}_X^0,\mathcal{S}_Y^0).$$ This implies that there are matrices $P=(P_{ij}) \in M_n(\mathcal{S}_X^0)_+$, $Q=(Q_{k \ell}) \in M_m(\mathcal{S}_Y^0)_+$ and $T=\begin{pmatrix} A \\ B \end{pmatrix}$ where $A=(a_{r,(i,k)})$ and $B=(b_{r,(i,k)})$ are $p \times mn$ matrices of scalars, such that $$\begin{pmatrix} (e \otimes f)_p & U \\ U^* & (e \otimes f)_p \end{pmatrix}=T(P \otimes Q)T^*.$$ Comparing blocks yields the equations $(e \otimes f)_p=A(P \otimes Q)A^*$, $U=A(P \otimes Q)B^*$, $U^*=B(P \otimes Q)A^*$ and $(e \otimes f)_p=B(P \otimes Q)B^*$. We may write $P_{ij}=\begin{pmatrix} \alpha_{ij}e_1 & x_{ij} \\ w_{ij}^* & \alpha_{ij}e_2 \end{pmatrix}$ and $Q_{k\ell}=\begin{pmatrix} \gamma_{k\ell}f_1 & y_{k\ell} \\ z_{k\ell}^* & \gamma_{k\ell}f_2 \end{pmatrix}$ where $\alpha_{ij},\gamma_{k\ell} \in \mathbb{C}$, $x_{ij},w_{ij} \in X$, and $y_{k\ell},z_{k\ell} \in Y$. Now set $R=(\alpha_{ij})$, $S=(\gamma_{k\ell})$, $\mathcal{X}=(x_{ij})$ and $\mathcal{Y}=(y_{k\ell})$. The fact that $P,Q$ are positive implies that $R$ and $S$ are positive, $(w_{ij}^*)=\mathcal{X}^*$, $(z_{k\ell}^*)=\mathcal{Y}^*$, and that for every $r>0$, we have $\|(R+rI_n)^{-1/2} \mathcal{X} (R+rI_n)^{-1/2}\| \leq 1$ in $M_n(X)$ and $\|(S+rI_m)^{-1/2}\mathcal{Y}(S+rI_m)^{-1/2}\| \leq 1$ in $M_m(Y)$ \cite[p.~99]{paulsen02}. Let $Re_1:=(\alpha_{ij}e_1)$, and similarly define $Re_2$, $Sf_1$ and $Sf_2$.
Looking at the blocks of the $4 \times 4$ block matrix equation $(e \otimes f)_p=A(P \otimes Q)A^*$, we obtain the equations $(e_i \otimes f_j)_p=A(Re_i \otimes Sf_j)A^*$ for $i,j=1,2$. Therefore, $I_p=A(R \otimes S)A^*=B(R \otimes S)B^*$. The element $U$ is only present in the $(1,4)$-block of the $4 \times 4$ block matrix, with the other blocks being equal to zero. Hence, the equation $U=A(P \otimes Q)B^*$ in $\mathcal{S}_X^0 \otimes \mathcal{S}_Y^0$ gives $U=A(\mathcal{X} \otimes \mathcal{Y})B^*$ in $X \otimes Y$. If $R,S$ are invertible, then let $A_0=A(R \otimes S)^{\frac{1}{2}}$ and let $B_0=B(R \otimes S)^{\frac{1}{2}}$. Then $$U=A_0(R \otimes S)^{-1/2} (\mathcal{X} \otimes \mathcal{Y})(R \otimes S)^{-1/2} B_0^*=A_0[(R^{-1/2}\mathcal{X} R^{-1/2}) \otimes (S^{-1/2} \mathcal{Y} S^{-\frac{1}{2}})]B_0^*.$$ We know that $A_0A_0^*=B_0B_0^*=I_p$. Thus, letting $\mathcal{X}_0=R^{-1/2}\mathcal{X} R^{-1/2}$ and $\mathcal{Y}_0=S^{-1/2}\mathcal{Y} S^{-1/2}$, we have $U=A_0(\mathcal{X}_0 \otimes \mathcal{Y}_0)B_0^*$ with all the matrices appearing in this factorization having norm at most one. Therefore, $\|U\| \leq 1$. If either of $R$ or $S$ is not invertible, then add $rI_n$ and $rI_m$ to $R$ and $S$, respectively, for $r>0$, and define new matrices $A_0=A[(R+rI_n) \otimes (S+rI_m)]^{1/2}$ and $B_0=B[(R+rI_n) \otimes (S+rI_m)]^{1/2}$. The corresponding factorization yields $\|U\| \leq 1+Cr$ for some $C$ that is independent of $r$. Since this is possible for all $r>0$, we obtain $\|U\| \leq 1$. \end{proof} \section{The OSLLP, WEP and DCEP for Operator Systems} There are two $C^{\ast}$-algebraic properties that play a crucial role in Kirchberg's conjecture: the local lifting property (LLP) and the weak expectation property (WEP) (see \cite{kirchberg93}). Here we outline these properties for operator systems, as well as the double commutant expectation property (DCEP). For the convenience of the reader, we give some known characterizations of these properties in terms of tensor products with $\cB(\cH)$ and tensor products with $C^*(F_{\infty})$. See \cite{quotients} for more information and proofs of the theorems in this section. Let $\mathcal{S}$ be an operator system. We say that $\mathcal{S}$ has the \textbf{operator system local lifting property} (OSLLP) if whenever $\mathcal{A}$ is a unital $C^{\ast}$-algebra, $\mathcal{I} \subseteq \mathcal{A}$ is a two-sided ideal with $\pi:\mathcal{A} \to \mathcal{A}/\mathcal{I}$ the canonical quotient map, and $\varphi:\mathcal{S} \to \mathcal{A}/\mathcal{I}$ is a ucp map, then for every finite-dimensional operator system $\mathcal{T} \subseteq \mathcal{S}$, there is a ucp map $\psi_{\mathcal{T}}:\mathcal{T} \to \mathcal{A}$ such that $\pi \circ \psi_{\mathcal{T}}=\varphi_{|\mathcal{T}}$. This property for operator systems was initially defined in \cite{quotients}. The OSLLP can be characterized in a few ways. \begin{mythe} \emph{(Kavruk-Paulsen-Todorov-Tomforde, \cite{quotients})} \label{osllp} Let $\mathcal{S}$ be an operator system. The following are equivalent. \begin{enumerate} \item $\mathcal{S}$ has the OSLLP. \item $\mathcal{S}$ is $(\min,\text{er})$-nuclear. \item $\mathcal{S} \otimes_{\min} \cB(\cH)=\mathcal{S} \otimes_{\max} \cB(\cH)$ for all Hilbert spaces $\mathcal{H}$.
\end{enumerate} \end{mythe} Also defined in \cite{quotients}, we say that an operator system $\mathcal{S}$ has the \textbf{weak expectation property} (WEP) if the canonical inclusion $i:\mathcal{S} \hookrightarrow \mathcal{S}^{dd}$ extends to a ucp map $\phi:\mathcal{I}(\mathcal{S}) \to \mathcal{S}^{dd}$, where $\mathcal{I}(\mathcal{S})$ is the injective envelope of $\mathcal{S}$. The WEP is in fact a form of nuclearity. \begin{mythe} \emph{(Kavruk-Paulsen-Todorov-Tomforde, \cite{quotients})} \langlebel{wepnuclearity} An operator system $\mathcal{S}$ has the WEP if and only if $\mathcal{S}$ is $(\text{el},\max)$-nuclear. \end{mythe} Finally, we say that an operator system $\mathcal{S}$ has the \textbf{double commutant expectation property} \cite{quotients} (DCEP), if for any unital complete order embedding $\iota:\mathcal{S} \to \cB(\cH)$, there is a ucp extension $\varphi:\mathcal{I}(\mathcal{S}) \to \iota(\mathcal{S})''$ of $\iota$, where $\mathcal{I}(\mathcal{S})$ is the injective envelope of $\mathcal{S}$ and $\iota(\mathcal{S})''$ is the double commutant of $\iota(\mathcal{S})$ in $\cB(\cH)$. The DCEP is a weaker form of nuclearity than the WEP. \begin{mythe} \emph{(Kavruk-Paulsen-Todorov-Tomforde, \cite{quotients})} \langlebel{dcep} Let $\mathcal{S}$ be an operator system. The following are equivalent. \begin{enumerate} \item $\mathcal{S}$ has the DCEP. \item $\mathcal{S}$ is $(\text{el},c)$-nuclear. \item $\mathcal{S} \otimes_{\min} C^*(F_{\infty})=\mathcal{S} \otimes_{\max} C^*(F_{\infty})$. \end{enumerate} \end{mythe} All unital $C^{\ast}$-algebras are $(c,\max)$-nuclear \cite[Theorem 6.7]{KPTT}, so that the WEP and the DCEP are the same for unital $C^{\ast}$-algebras. However, the DCEP is a weaker property in general. Indeed, the operator system $\mathcal{T}_n$ of tridiagonal matrices in $M_n$ is $(\min,c)$-nuclear but $\mathcal{T}_n \otimes_{\min} \mathcal{T}_n^d \neq \mathcal{T}_n \otimes_{\max} \mathcal{T}_n^d$ for $n \geq 3$ (see \cite{KPTT} for more information on $\mathcal{T}_n$). Thus, $\mathcal{T}_n$ has the DCEP but not the WEP. \section{Complete Quotient Maps} The theory of operator system quotients is not as straightforward as in other categories. Here, we give a very brief introduction to this quotient theory. Much more information on operator system quotients can be found in \cite{quotients}. Suppose that $\mathcal{S},\mathcal{T}$ are operator systems and $\varphi:\mathcal{S} \to \mathcal{T}$ is a ucp map with kernel $\mathcal{J}$. We endow the quotient vector space $\mathcal{S}/\mathcal{J}$ with an operator system structure as follows. If $q:\mathcal{S} \to \mathcal{S}/\mathcal{J}$ denotes the canonical (vector space) quotient map and we denote by $\dot{x}$ the image $q(x)$ of a vector $x \in \mathcal{S}$, then the order unit of $\mathcal{S}/\mathcal{J}$ is $\dot{1}$, while the adjoint of $\dot{x}$ is simply $\dot{x^*}$. We define the sets $$D_n(\mathcal{S},\mathcal{J})=\{ \dot{X} \in M_n(\mathcal{S}/\mathcal{J})_h: \exists Y \in M_n(\mathcal{S})_+ \text{ such that } q^{(n)}(Y)=\dot{X}\},$$ where $q^{(n)}$ is the $n$-fold amplification of $q$. 
To ensure that the positive cones on $\mathcal{S}/\mathcal{J}$ satisfy the Archimedean property, we define the cones to be $$C_n(\mathcal{S},\mathcal{J})=\{\dot{X} \in M_n(\mathcal{S}/\mathcal{J})_h: \forall \varepsilon>0, \, \dot{X}+\varepsilon 1 \in D_n(\mathcal{S},\mathcal{J})\}.$$ In general, the norm induced by the above operator system structure on $\mathcal{S}/\mathcal{J}$ can differ greatly from the usual quotient norm on $\mathcal{S}/\mathcal{J}$ (see \cite[Example 4.4]{quotients} or \cite[Lemma 2.1]{FP} for examples). Given operator systems $\mathcal{S},\mathcal{T}$ and a u.c.p. map $\varphi:\mathcal{S} \to \mathcal{T}$ with kernel $\mathcal{J}$, we say that $\mathcal{J}$ is \textbf{completely order proximinal} if $D_n(\mathcal{S},\mathcal{J})=C_n(\mathcal{S},\mathcal{J})$ for all $n$. A surjective ucp map $\varphi:\mathcal{S} \to \mathcal{T}$ with kernel $\mathcal{J}$ is said to be a \textbf{complete quotient map} if the induced map $\dot{\varphi}:\mathcal{S}/\mathcal{J} \to \mathcal{T}$ given by $\dot{\varphi}(\dot{x})=\varphi(x)$ is a complete order isomorphism. There is a relation between complete quotient maps and complete order injections via adjoint maps. For a linear map $\varphi:\mathcal{S} \to \mathcal{T}$, we define the adjoint map $\varphi^d:\mathcal{T}^d \to \mathcal{S}^d$ by $[\varphi^d(\psi)](s)=\psi(\varphi(s))$ for all $\psi \in \mathcal{T}^d$ and $s \in \mathcal{S}$. \begin{pro} \emph{(Farenick-Paulsen, \cite{FP})} \langlebel{completequotientdual} Let $\mathcal{S}$ and $\mathcal{T}$ be finite-dimensional operator systems. Then a linear map $\varphi:\mathcal{S} \to \mathcal{T}$ is a complete quotient map if and only if $\varphi^d:\mathcal{T}^d \to \mathcal{S}^d$ is a complete order injection. \end{pro} \section{The $C^{\ast}$-algebra $\mathcal{U}_{nc}(n)$} For any $n \in \mathbb{N}$, denote by $\mathcal{U}_{nc}(n)$ the universal $C^{\ast}$-algebra of $n^2$ generators $\{u_{ij}\}_{i,j=1}^n$ with the restriction that the matrix $U=(u_{ij})$ is unitary. This algebra was first defined by L. Brown in \cite{brown}. We may define the operator subsystem $$\mathcal{V}_n=\text{span } (\{1\} \cup \{u_{ij}\}_{i,j=1}^n \cup \{u_{ij}^*\}_{i,j=1}^n).$$ The operator system possesses an important universal property. \begin{pro} \langlebel{univprop} Let $(T_{ij}) \in M_n(\cB(\cH))$ be a contraction. Then there is a unique ucp map $\psi:\mathcal{V}_n \to \cB(\cH)$ such that $\psi(u_{ij})=T_{ij}$ for all $1 \leq i,j \leq n$. The ucp map $\psi$ dilates to a unital $*$-homomorphism $\pi:\mathcal{U}_{nc}(n) \to M_2(\cB(\cH))$ such that $\pi(u_{ij})=\begin{pmatrix} T_{ij} & (\sqrt{I-TT^*})_{ij} \\ (\sqrt{I-T^*T})_{ij} & -T_{ji}^* \end{pmatrix}$ for all $1 \leq i,j \leq n$. \end{pro} \begin{proof} Let $T=(T_{ij})$; then $\|T\| \leq 1$. The operator $V=\begin{pmatrix} T & \sqrt{I-TT^*} \\ \sqrt{I-T^*T} & -T^* \end{pmatrix}$ is unitary in $M_2(M_n(\cB(\cH)))$. Performing a canonical shuffle (see \cite[p.~97]{paulsen02}) yields a unitary $W=(W_{ij}) \in M_n(M_2(\cB(\cH)))$ such that $$W_{ij}=\begin{pmatrix} T_{ij} & (\sqrt{I-TT^*})_{ij} \\ (\sqrt{I-T^*T})_{ij} & -T_{ji}^* \end{pmatrix}.$$ By the universal property of $\mathcal{U}_{nc}(n)$, there is a unital $*$-homomorphism $\pi:\mathcal{U}_{nc}(n) \to M_2(\cB(\cH))$ with $\pi(u_{ij})=W_{ij}$ for all $i,j$. Compressing to the $(1,1)$-corner in $M_2(\cB(\cH))$ yields the ucp map $\psi$, as desired. \end{proof} We remark that $\mathcal{V}_n$ is the image of a unital, completely positive map on $M_{2n}$. 
Indeed, define $\varphi:M_{2n} \to V_n$ by $$\varphi(E_{ij})=\begin{cases} \frac{1}{2n} 1 \text{ if } i=j \\ \frac{1}{2n} u_{i,j-n} \text{ if } i \leq n \text{ and } j \geq n+1 \\ \frac{1}{2n}u_{j,i-n}^* \text{ if } i \geq n+1 \text{ and } j \leq n \\ 0 \text{ otherwise}. \end{cases}$$ Then the Choi matrix of $\varphi$ is $(\varphi(E_{ij}))=\begin{pmatrix} \frac{1}{2n} I & \frac{1}{2n} U \\ \frac{1}{2n} U^* & \frac{1}{2n} I \end{pmatrix}$. Since $U^*U=I$, $(\varphi(E_{ij})) \in M_2(M_n(\mathcal{V}_n))_+$, so that $\varphi$ is unital and completely positive by a theorem of Choi (see \cite[Theorem 3.14]{paulsen02}). We will denote by $\mathcal{J}_{2n}$ the kernel of $\varphi$. Then $\mathcal{J}_{2n}=\left\{ \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \in M_2(M_n): \text{tr}(A \oplus B)=0 \right\}$. It is readily checked that $\mathcal{J}_{2n}$ contains no positive element except $0$. It follows by \cite[Proposition 2.4]{kavruknuclearity} that $\mathcal{J}_{2n}$ is completely order proximinal. To simplify notation, we define for each $n \geq 2$ the index sets $\Lambda_n^+=\{(i,j) \in \{1,...,2n\}^2: i \leq n, \, j \geq n+1\}$ and $\Lambda_n^-=\{(i,j) \in \{1,...,2n\}^2: i \geq n+1, \, j \leq n\}$. We define $\Lambda_n=\Lambda_n^+ \cup \Lambda_n^-$. Using the notation of \cite{FP}, whenever $i,j \in \{1,...,2n\}$ are such that $\varphi(E_{ij}) \neq 0$, we define $e_{ij}=\dot{E}_{ij} \in M_{2n}/\mathcal{J}_{2n}$. We let $q:M_{2n} \to M_{2n}/\mathcal{J}_{2n}$ be the canonical quotient map. To show that $\mathcal{V}_n$ is a complete quotient of $M_{2n}$, we need some equivalent characterizations of positivity in the quotient operator system $M_{2n}/\mathcal{J}_{2n}$. This result and its proof are analogous to \cite[Proposition 2.3]{FP}. The key difference is the use of the universal property of $\mathcal{V}_n$ in the proof that (6) implies (5) below. \begin{lem} \langlebel{positivity} Let $A_{11},A_{ij} \in M_p$ for every $(i,j) \in \Lambda_n$. The following are equivalent. \begin{enumerate} \item $\dot{1} \otimes A_{11}+\sum_{(i,j) \in \Lambda_n} e_{ij} \otimes A_{ij}$ is positive in $(M_{2n}/\mathcal{J}_{2n}) \otimes M_p$. \item $\dot{1} \otimes A_{11}+\sum_{(i,j) \in \Lambda_n} \dot{\psi}(e_{ij}) \otimes A_{ij}$ is positive in $M_r \otimes M_p$ whenever $r \in \mathbb{N}$ and $\dot{\psi}:M_{2n}/\mathcal{J}_{2n} \to M_r$ is ucp. \item $\dot{1} \otimes A_{11} + \sum_{(i,j) \in \Lambda_n} \psi(E_{ij}) \otimes A_{ij}$ is positive in $M_r \otimes M_p$ whenever $r \in \mathbb{N}$ and $\psi:M_{2n} \to M_r$ is ucp with $\psi(\mathcal{J}_{2n})=\{0\}$. 
\item Whenever $B_{ij} \in M_r$ for $(i,j) \in \Lambda_n^+$ and the matrix $B=(B_{i,j+n}) \in M_n(M_r)$ are such that $$\begin{pmatrix} \frac{1}{2n} I_r & B \\ B^* & \frac{1}{2n} I_r \end{pmatrix}$$ is positive in $M_2(M_n(M_r))$, then $$I_r \otimes A_{11}+\sum_{(i,j) \in \Lambda_n^+} B_{ij} \otimes A_{ij} + \sum_{(i,j) \in \Lambda_n^-} B_{ji}^* \otimes A_{ij} \in (M_r \otimes M_p)_+.$$ \item Whenever $C_{ij} \in M_r$ for $(i,j) \in \Lambda_n^+$ and the matrix $C=(C_{i,j+n}) \in M_n(M_r)$ is such that $$\begin{pmatrix} I_r & C \\ C^* & I_r \end{pmatrix}$$ is positive in $M_2(M_n(M_r))$, then $$I_r \otimes A_{11}+\sum_{(i,j) \in \Lambda_n^+} \frac{1}{2n} C_{ij} \otimes A_{ij}+\sum_{(i,j) \in \Lambda_n^-} \frac{1}{2n} C_{ji}^* \otimes A_{ij} \in (M_r \otimes M_p)_+.$$ \item $I_r \otimes A_{11}+\sum_{(i,j) \in \Lambda_n^+} \frac{1}{2n} u_{i,j-n} \otimes A_{ij} + \sum_{(i,j) \in \Lambda_n^-} \frac{1}{2n} u_{j,i-n}^* \otimes A_{ij}$ is positive in $\mathcal{V}_n \otimes M_p$. \item $I_r \otimes (2nA_{11})+\sum_{i=1}^{2n} E_{ii} \otimes B_i+\sum_{(i,j) \in \Lambda_n} E_{ij} \otimes A_{ij}$ is positive in $M_{2n} \otimes M_p$ for some matrices $B_1,...,B_{2n} \in M_p$ such that $\sum_{i=1}^{2n} B_i=(2n-4n^2)A_{11}$. \item $\sum_{i=1}^{2n} E_{ii} \otimes R_{ii} +\sum_{(i,j) \in \Lambda_n} E_{ij} \otimes R_{ij}$ is positive in $M_{2n} \otimes M_p$ for some matrix $R=(R_{ij}) \in (M_{2n} \otimes M_p)_+$ such that $R_{ij}=A_{ij}$ for $(i,j) \in \Lambda_n$ and $\sum_{i=1}^{2n} R_{ii}=2nA_{11}$. \end{enumerate} \end{lem} \begin{proof} Suppose that (1) holds. Since $\mathcal{J}_{2n}$ is completely order proximinal, there exists a matrix $R=\sum_{i,j=1}^{2n} E_{ij} \otimes R_{ij} \in (M_{2n} \otimes M_p)_+$ such that $q \otimes \text{id}_p(R)=\dot{1} \otimes A_{11}+\sum_{(i,j) \in \Lambda_n} e_{ij} \otimes A_{ij}$. It follows that $$\dot{1} \otimes A_{11}+\sum_{(i,j) \in \Lambda_n} e_{ij} \otimes A_{ij}=q \otimes \text{id}_p \left( \sum_{i,j=1}^{2n} E_{ij} \otimes R_{ij} \right)=\sum_{i=1}^{2n} \frac{1}{2n}e_{ii} \otimes R_{ii}+\sum_{(i,j) \in \Lambda_n} e_{ij} \otimes R_{ij}.$$ Therefore, $R_{ij}=A_{ij}$ whenever $(i,j) \in \Lambda_n$, and $\sum_{i=1}^{2n} R_{ii}=2nA_{11}$, which shows that (8) is true. \\ Assume that (8) is true, and let $R=(R_{ij}) \in (M_{2n} \otimes M_p)_+$ be as given. Write $R_{ii}=2nA_{11}+B_i$ where $B_i=-\sum_{j \neq i} R_{jj}$. Then $$2nA_{11}=\sum_{i=1}^{2n} R_{ii}=4n^2 A_{11}+\sum_{i=1}^{2n} B_i.$$ Hence, $\sum_{i=1}^{2n} B_i=(2n-4n^2)A_{11}$ and (7) follows. \\ If (7) is true, then applying $\varphi \otimes \text{id}_p$ shows that $$1 \otimes (2n A_{11})+\sum_{i=1}^{2n} \frac{1}{2n} 1 \otimes B_i+\sum_{(i,j) \in \Lambda_n^+} u_{i,j-n} \otimes A_{ij}+\sum_{(i,j) \in \Lambda_n^-} u_{j,i-n}^* \otimes A_{ij} \in (\mathcal{V}_n \otimes M_p)_+.$$ Using the fact that $\sum_{i=1}^{2n} \frac{1}{2n} 1 \otimes B_i=\frac{1}{2n} 1 \otimes \left(\sum_{i=1}^{2n} B_i \right)=(1-2n) 1 \otimes A_{11}$ shows that $$1 \otimes A_{11}+\sum_{(i,j) \in \Lambda_n^+} u_{i,j-n} \otimes A_{ij}+\sum_{(i,j) \in \Lambda_n^-} u_{j,i-n}^* \otimes A_{ij} \in (\mathcal{V}_n \otimes M_p)_+.$$ Thus, (7) implies (6). Suppose that (6) is true, and let $C_{ij} \in M_r$ for $(i,j) \in \Lambda_n^+$ be such that $\begin{pmatrix} I_r & C \\ C^* & I_r \end{pmatrix} \in (M_2(M_n(M_r)))_+$, where $C=(C_{i,j+n}) \in M_n(M_r)$. Then $C$ is a contraction in $M_n(M_r)$. By Proposition \ref{univprop}, there is a ucp map $\psi:\mathcal{V}_n \to M_r$ such that $\psi(u_{ij})=C_{i,j+n}$ for all $1 \leq i,j \leq n$. 
Applying $\psi \otimes \text{id}_p$ to the positive element in (6) shows that $$I_r \otimes A_{11}+\sum_{(i,j) \in \Lambda_n^+} \frac{1}{2n} C_{ij} \otimes A_{ij}+\sum_{(i,j) \in \Lambda_n^-} \frac{1}{2n} C_{ji}^* \otimes A_{ij} \in (M_r \otimes M_p)_+.$$ Therefore, (5) is true. Assume (5). Let $B_{ij} \in M_r$ for $(i,j) \in \Lambda_n^+$ and $B=(B_{i,j+n}) \in M_n(M_r)$ be such that $\begin{pmatrix} \frac{1}{2n} I_r & B \\ B^* & \frac{1}{2n} I_r \end{pmatrix}$ is in $(M_2(M_n(M_r)))_+$. Let $C_{ij}=2nB_{ij}$, so that $B=\frac{1}{2n}C$, where $C=(C_{i,j+n})$. Then (5) immediately implies (4). Suppose that (4) holds, and let $\psi:M_{2n} \to M_r$ be a ucp map such that $\psi(\mathcal{J}_{2n})=\{0\}g$. Since $E_{ii}-E_{jj} \in \mathcal{J}_{2n}$ for all $i \neq j$, we have $\psi(E_{ii})=\frac{1}{2n} I_r$ for all $1 \leq i \leq 2n$. If $B_{ij}=\psi(E_{ij})$ for $(i,j) \in \Lambda_n^+$, then the Choi matrix of $\psi$ is $\begin{pmatrix} \frac{1}{2n} I_r & B \\ B^* & \frac{1}{2n} I_r \end{pmatrix}$, and hence must be positive. Then (3) follows from (4). If (3) is true and $\dot{\psi}:M_{2n}/\mathcal{J}_{2n} \to M_r$ is ucp, then $\psi:=\dot{\psi} \circ q:M_{2n} \to M_r$ is ucp and annihilates $\mathcal{J}_{2n}$. This shows that (2) holds. Finally, suppose that (2) is true. Let $h=\dot{1} \otimes A_{11}+\sum_{(i,j) \in \Lambda_n} e_{ij} \otimes A_{ij}$. Note that an element $x$ of $(M_{2n}/\mathcal{J}_{2n}) \otimes M_p$ is positive if and only if whenever $r \in \mathbb{N}$ and $\gamma:(M_{2n}/\mathcal{J}_{2n}) \otimes M_p \to M_r$ is ucp, then $\gamma(x) \in (M_r)_+$ (see \cite{choieffros}). Now, if $\gamma:(M_{2n}/\mathcal{J}_{2n}) \otimes M_p \to M_r$ is ucp, then we may find linear maps $V_1,...,V_m:\mathbb{C}^p \to \mathbb{C}^r \otimes \mathbb{C}^p$ and ucp maps $\dot{\psi}_1,...,\dot{\psi}_m:M_{2n}/\mathcal{J}_{2n} \to M_r$ such that $$\gamma=\sum_{i=1}^m V_i^* (\dot{\psi}_i \otimes \text{id}_p(\cdot)) V_i.$$ Applying (2), we see that $\gamma(h) \in (M_r)_+$ for each ucp map $\gamma:(M_{2n}/\mathcal{J}_{2n}) \otimes M_p \to M_r$ and for each $r \in \mathbb{N}$. Thus, $h$ is positive in $(M_{2n}/\mathcal{J}_{2n}) \otimes M_p$. Therefore, (1) follows from (2), as desired. \end{proof} \begin{mythe} For $\varphi:M_{2n} \to \mathcal{V}_n$ and for $\mathcal{J}_{2n}$ as above, the following are true: \begin{enumerate} \item The map $\varphi:M_{2n} \to \mathcal{V}_n$ is a complete quotient map; i.e., $M_{2n}/\mathcal{J}_{2n}$ is completely order isomorphic to $\mathcal{V}_n$. \item The $C^{\ast}$-envelope of $\mathcal{V}_n$ is $\mathcal{U}_{nc}(n)$. \end{enumerate} \end{mythe} \begin{proof} The proof is similar to the proof of \cite[Theorem 2.4]{FP}. Since $\varphi$ is a surjection, the map $\dot{\varphi}:M_{2n}/\mathcal{J}_{2n} \to \mathcal{V}_n$ given by $\dot{\varphi}(\dot{x})=\varphi(x)$ is ucp and a linear bijection. Using the fact that statements (1) and (6) are equivalent in Lemma \ref{positivity}, we see that $\dot{\varphi}$ is a complete order isomorphism, which proves the first statement. For the second statement, we will show that $\mathcal{U}_{nc}(n)$ satisfies the universal property of $C_e^{\ast}(\mathcal{V}_n)$ (see, for example, \cite{GL}). Let $\mathcal{A}$ be any unital $C^*$-algebra equipped with a unital complete order embedding $\iota:\mathcal{V}_n \to \mathcal{A}$ such that $C^*(\iota(\mathcal{V}_n))=\mathcal{A}$. We assume that $\mathcal{U}_{nc}(n)$ is represented faithfully on some Hilbert space $\mathcal{H}$. 
The identity map $\text{id}:\mathcal{V}_n \to \mathcal{V}_n \subseteq \mathcal{U}_{nc}(n)$ can be written as $\text{id}=\kappa \circ \iota$, where $\kappa:\iota(\mathcal{V}_n) \to \mathcal{V}_n$ is the ucp inverse of $\iota$. We extend $\kappa$ to a ucp map $\rho:\mathcal{A} \to \cB(\cH)$ by Arveson's extension theorem \cite{arveson}. Let $\rho=V^* \pi(\cdot)V$ be a minimal Stinespring representation of $\rho$ on some Hilbert space $\mathcal{H}_{\pi}=\text{ran}(V) \oplus \text{ran}(V)^{\perp}$. With respect to this decomposition, for all $1 \leq i,j \leq n$, we have $$\pi(\iota(u_{ij}))=\begin{pmatrix} u_{ij} & * \\ * & * \end{pmatrix}.$$ The matrix $\pi^{(n)} \circ \iota^{(n)}(U)=(\pi \circ \iota(u_{ij}))_{i,j=1}^n$, after applying the canonical shuffle, looks like $$\begin{pmatrix} U & * \\ * & * \end{pmatrix}.$$ Since $U$ is unitary and $\pi \circ \iota$ is completely contractive, the $(1,2)$ and $(2,1)$ blocks must be $0$. By applying the inverse shuffle, it follows that for all $i,j$, we have $$\pi(\iota(u_{ij}))=\begin{pmatrix} u_{ij} & 0 \\ 0 & * \end{pmatrix}.$$ Thus, $\rho$ is multiplicative on the generators $\{ \iota(u_{ij})\}_{i,j=1}^n$ of $\mathcal{A}$, so that $\rho$ is a $*$-homomorphism with $\rho(\iota(u_{ij}))=u_{ij}$ for all $i,j$. This shows that $\rho$ is surjective from $\mathcal{A}$ onto $\mathcal{U}_{nc}(n)$. By the universal property of $C^{\ast}$-envelopes, we conclude that $C_e^{\ast}(\mathcal{V}_n)=\mathcal{U}_{nc}(n)$. \end{proof} Using the fact that $M_{2n}/\mathcal{J}_{2n} \simeq \mathcal{V}_n$ allows for a description of the dual of $\mathcal{V}_n$. \begin{cor} \langlebel{dualofvn} The operator system dual $\mathcal{V}_n^d$ of $\mathcal{V}_n$ is completely order isomorphic to $\mathcal{S}_{M_n}^0$. \end{cor} \begin{proof} We use the same argument as in \cite[Proposition 2.7]{FP}. Since $\varphi:M_{2n} \to \mathcal{V}_n$ is a complete quotient map, $\varphi^d:\mathcal{V}_n^d \to M_{2n}^d$ is a complete order embedding by Proposition \ref{completequotientdual}. If $\{\delta_{ij}\}_{i,j=1}^{2n}$ is the dual basis for $M_{2n}^d$ of the canonical basis $\{E_{ij}\}_{i,j=1}^{2n}$ for $M_{2n}$, then $M_{2n}$ is completely order isomorphic to $M_{2n}^d$ via the mapping $E_{ij} \mapsto \delta_{ij}$ \cite[Theorem 6.2]{PTT}. Taking the unit of $M_{2n}^d$ to be the canonical normalized trace, this mapping is a unital complete order isomorphism. It follows that the vector space dual of $M_{2n}/\mathcal{J}_{2n}$, equipped with the operator system structure inherited from $M_{2n}$, is the operator system dual of $\mathcal{V}_n$. It is not hard to see that the vector space dual of $M_{2n}/\mathcal{J}_{2n}$ is the annihilator of $\mathcal{J}_{2n}$ in $M_{2n}^d$. Therefore, $\mathcal{V}_n^d \simeq \mathcal{S}_{M_n}^0$. \end{proof} We will now move towards an analogue of Kirchberg's Theorem for $\mathcal{U}_{nc}(n)$. Kirchberg's famous result on the full group $C^{\ast}$-algebra of the free group $F_n$, for $n \geq 2$, is that $C^*(F_n) \otimes_{\min} \cB(\cH)=C^*(F_n) \otimes_{\max} \cB(\cH)$ for every Hilbert space $\mathcal{H}$. We will show that a similar result is true when replacing $C^*(F_n)$ by $\mathcal{U}_{nc}(n)$. First, we adopt some terminology using Lemma \ref{positivity}. 
We say that an operator system $\mathcal{S}$ has \textbf{property} $\mathfrak{V}_n$ if whenever $p \in \mathbb{N}$ and $S_{11},S_{ij} \in M_p(\mathcal{S})$ for $(i,j) \in \Lambda_n$ are such that $$1 \otimes S_{11}+\sum_{(i,j) \in \Lambda_n^+} \frac{1}{2n} u_{i,j-n} \otimes S_{ij}+\sum_{(i,j) \in \Lambda_n^-} \frac{1}{2n} u_{j,i-n}^* \otimes S_{ij} \in (\mathcal{U}_{nc}(n) \otimes_{\min} M_p(\mathcal{S}))_+,$$ then for each $\varepsilon>0$ there exist $R_{ij}^{\varepsilon} \in M_p(\mathcal{S})$ for $1 \leq i,j \leq 2n$ such that \begin{itemize} \item The matrix $R_{\varepsilon}=(R_{ij}^{\varepsilon})$ is positive in $M_{2n}(M_p(\mathcal{S}))$. \item $R_{ij}^{\varepsilon}=S_{ij}$ for all $(i,j) \in \Lambda_n$. \item $\sum_{i=1}^{2n} R_{ii}^{\varepsilon} =2n(S_{11}+\varepsilon 1_{M_p(\mathcal{S})})$. \end{itemize} Equivalently, $\mathcal{S}$ has property $\mathfrak{V}_n$ if and only if the above holds when replacing the above positive element of $\mathcal{U}_{nc}(n) \otimes_{\min} M_p(\mathcal{S})$ with $$\dot{1} \otimes S_{11}+\sum_{(i,j) \in \Lambda_n} e_{ij} \otimes S_{ij} \in (M_{2n}/\mathcal{J}_{2n} \otimes_{\min} M_p(\mathcal{S}))_+.$$ We will say that $\mathcal{S}$ has property $\mathfrak{V}$ if it has property $\mathfrak{V}_n$ for every $n \in \mathbb{N}$. These properties were inspired by the similar notion of operator systems having property $\mathfrak{W}_{n+1}$ with regards to the operator system $\mathcal{W}_{n+1} \subseteq C^*(F_n)$ given by $\mathcal{W}_{n+1}=\text{span } \{w_iw_j^*: 1 \leq i,j \leq n+1 \}$, where $w_2,...,w_{n+1}$ are the generators of $F_n$ and $w_1=1$ (see \cite{FP}). Lemma \ref{positivity} shows that $M_p$ has property $\mathfrak{V}$ for every $p \in \mathbb{N}$. The operator systems satisfying property $\mathfrak{V}_n$ are characterized in the following proposition. \begin{pro} \langlebel{propertyv} Let $\mathcal{S}$ be an operator system. Then $\mathcal{S}$ has property $\mathfrak{V}_n$ if and only if $\mathcal{V}_n \otimes_{\min} \mathcal{S}=\mathcal{V}_n \otimes_{\max} \mathcal{S}$. In particular, if $\mathcal{S}$ is $(\min,\max)$-nuclear, then $\mathcal{S}$ has property $\mathfrak{V}$. \end{pro} \begin{proof} We proceed as in the proof of \cite[Proposition 3.3]{FP}. Let $X=(x_{k\text{el}l}) \in M_r(\mathcal{V}_n \otimes \mathcal{S})$ for $r>1$; then $$x_{k\text{el}l}=1 \otimes s_{11}^{(k\text{el}l)}+\sum_{(i,j) \in \Lambda_n^+} \frac{1}{2n} u_{i,j-n} \otimes s_{ij}^{(k\text{el}l)}+\sum_{(i,j) \in \Lambda_n^-} \frac{1}{2n} u_{j,i-n}^* \otimes s_{ij}^{(k\text{el}l)},$$ so if $S_{11}=(s_{11}^{(k\text{el}l)})$ and $S_{ij}=(s_{ij}^{(k\text{el}l)})$, then we obtain $$X=1 \otimes S_{11}+\sum_{(i,j) \in \Lambda_n^+} \frac{1}{2n} u_{i,j-n} \otimes S_{ij}+\sum_{(i,j) \in \Lambda_n^-} \frac{1}{2n} u_{j,i-n}^* \otimes S_{ij},$$ where $S_{11},S_{ij} \in M_r$. Now, $\mathcal{S}$ satisfies the definition of property $\mathfrak{V}_n$ when $p=r$ if and only if $M_r(\mathcal{S})$ satisfies property $\mathfrak{V}_n$ when $p=1$. Moreover, we have $M_r(\mathcal{V}_n \otimes_{\min} \mathcal{S})=\mathcal{V}_n \otimes_{\min} M_r(\mathcal{S})$ and $M_r(\mathcal{V}_n \otimes_{\max} \mathcal{S})=\mathcal{V}_n \otimes_{\max} M_r(\mathcal{S})$. By replacing $\mathcal{S}$ with $M_r(\mathcal{S})$ if necessary, in order to check that $\mathcal{S}$ has property $\mathfrak{V}_n$, it suffices to show that $\mathcal{S}$ satisfies the definition of property $\mathfrak{V}_n$ when $p=1$. 
Suppose that $\mathcal{V}_n \otimes_{\min} \mathcal{S}=\mathcal{V}_n \otimes_{\max} \mathcal{S}$, and suppose that $$x:=1 \otimes S_{11}+\sum_{(i,j) \in \Lambda_n^+} \frac{1}{2n} u_{i,j-n} \otimes S_{ij}+\sum_{(i,j) \in \Lambda_n^-} \frac{1}{2n} u_{j,i-n}^* \otimes S_{ij} \in (\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S})_+.$$ Then $x \in (\mathcal{V}_n \otimes_{\max} \mathcal{S})_+$. Hence, for every $\varepsilon>0$, $x+\varepsilon(1 \otimes 1_{\mathcal{S}}) \in D_1^{\max}(\mathcal{V}_n,\mathcal{S})$. This means that there is $V \in M_k(\mathcal{V}_n)_+$, $S \in M_m(\mathcal{S})_+$ and a linear map $A:\mathbb{C}^k \otimes \mathbb{C}^m \to \mathbb{C}$ such that $$x+\varepsilon (1 \otimes 1_{\mathcal{S}})=A(V \otimes S)A^*.$$ Since $\varphi:M_{2n} \to \mathcal{V}_n$ is a complete quotient map, there is $R \in M_k(M_{2n})_+$ such that $$x+\varepsilon (1 \otimes 1_{\mathcal{S}})=A (\varphi(R) \otimes S)A^*.$$ So, with $R_{\varepsilon}=A(R \otimes S)A^* \in M_{2n}(\mathcal{S})_+$, we have $\varphi \otimes \text{id}_{\mathcal{S}}(R_{\varepsilon})=x+\varepsilon(1 \otimes 1_{\mathcal{S}})$. That is to say, for each $\varepsilon>0$, there is $R_{\varepsilon} \in M_{2n}(\mathcal{S})_+$ such that $$x+\varepsilon(1 \otimes 1_{\mathcal{S}})=\sum_{i=1}^{2n} 1 \otimes R_{ii}^{\varepsilon}+\sum_{(i,j) \in \Lambda_n^+} \frac{1}{2n} u_{i,j-n} \otimes R_{ij}^{\varepsilon}+\sum_{(i,j) \in \Lambda_n^-} \frac{1}{2n} u_{j,i-n}^* \otimes R_{ij}^{\varepsilon}.$$ Comparing coefficients with the coefficients of $x+\varepsilon(1 \otimes 1_{\mathcal{S}})$ shows that $R_{ij}^{\varepsilon}=S_{ij}$ for $(i,j) \in \Lambda_n$ and $\frac{1}{2n} \sum_{i=1}^{2n} R_{ii}^{\varepsilon}=S_{11}+\varepsilon 1$. This shows that $\mathcal{S}$ has property $\mathfrak{V}_n$. Conversely, suppose that $\mathcal{S}$ has property $\mathfrak{V}_n$ and let $p \in \mathbb{N}$; we must show that $\mathcal{C}_p^{\min}(\mathcal{V}_n,\mathcal{S}) \subseteq \mathcal{C}_p^{\max}(\mathcal{V}_n,\mathcal{S})$. As before, by replacing $\mathcal{S}$ with $M_r(\mathcal{S})$ if necessary, we may assume that $p=1$. Let $x \in (\mathcal{V}_n \otimes_{\min} \mathcal{S})_+$. Then there are $s_{11},s_{ij} \in \mathcal{S}$ for $(i,j) \in \Lambda_n$ such that $$x=1 \otimes s_{11}+\sum_{(i,j) \in \Lambda_n^+} \frac{1}{2n} u_{i,j-n} \otimes s_{ij}+\sum_{(i,j) \in \Lambda_n^-} \frac{1}{2n} u_{j,i-n}^* \otimes s_{ij}.$$ Since $\mathcal{S}$ has property $\mathfrak{V}_n$, given $\varepsilon>0$, there are $R_{ij}^{\varepsilon} \in \mathcal{S}$ for $1 \leq i,j \leq 2n$ such that $R_{\varepsilon}=(R_{ij}^{\varepsilon}) \in M_n(\mathcal{S})_+$, $R_{ij}^{\varepsilon}=s_{ij}$ for all $(i,j) \in \Lambda_n$ and $\sum_{i=1}^{2n} R_{ii}^{\varepsilon}=2n(s_{11}+\varepsilon 1_{\mathcal{S}})$. If $\varphi:M_{2n} \to \mathcal{V}_n$ is the complete quotient map given as before, then the map $\varphi \otimes \text{id}_{\mathcal{S}}:M_{2n} \otimes_{\max} \mathcal{S} \to \mathcal{V}_n \otimes_{\max} \mathcal{S}$ is ucp, and $$\varphi \otimes \text{id}_{\mathcal{S}}(R_{\varepsilon})=x+\varepsilon(1 \otimes 1_{\mathcal{S}}) \in (\mathcal{V}_n \otimes_{\max} \mathcal{S})_+.$$ Thus, $x+\varepsilon(1 \otimes 1_{\mathcal{S}}) \in \mathcal{D}_1^{\max}(\mathcal{V}_n,\mathcal{S})$ for all $\varepsilon>0$. Therefore, $x \in (\mathcal{V}_n \otimes_{\max} \mathcal{S})_+$, which completes the proof. \end{proof} The next fact about tensor products of $\mathcal{V}_n$ is very useful. \begin{pro} \langlebel{vncommutingorderembedding} Let $\mathcal{S}$ be any operator system. 
For all $n \geq 2$, the inclusion $\mathcal{V}_n \otimes_c \mathcal{S} \subseteq \mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{S}$ is a complete order embedding. \end{pro} \begin{proof} By \cite[Theorem 6.7]{KPTT}, $\mathcal{A} \otimes_c \mathcal{S}=\mathcal{A} \otimes_{\max} \mathcal{S}$ for every unital $C^{\ast}$-algebra $\mathcal{A}$. We must show that $C_p^{\text{comm}}(\mathcal{V}_n,\mathcal{S})=C_p^{\text{comm}}(\mathcal{U}_{nc}(n),\mathcal{S}) \cap M_p(\mathcal{V}_n \otimes \mathcal{S})$ for each $p \in \mathbb{N}$. The inclusion map $\iota_n:\mathcal{V}_n \to \mathcal{U}_{nc}(n)$ is ucp, so by functoriality of the commuting tensor product, $\iota_n \otimes \text{id}_{\mathcal{S}}:\mathcal{V}_n \otimes_c \mathcal{S} \to \mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{S}$ is ucp. Therefore, $C_p^{\text{comm}}(\mathcal{V}_n,\mathcal{S}) \subseteq C_p^{\text{comm}}(\mathcal{U}_{nc}(n),\mathcal{S}) \cap M_p(\mathcal{V}_n \otimes \mathcal{S})$ for each $p$. Conversely, suppose that $X \in C_p^{\text{comm}}(\mathcal{U}_{nc}(n),\mathcal{S}) \cap M_p(\mathcal{V}_n \otimes \mathcal{S})$. Let $\psi:\mathcal{V}_n \to \cB(\cH)$ and $\gamma:\mathcal{S} \to \cB(\cH)$ be ucp maps with commuting ranges. Let $T=(T_{ij})$ where $T_{ij}=\psi(u_{ij})$. By Proposition \ref{univprop}, the map $\psi$ dilates to a unital $*$-homomorphism $\pi:\mathcal{U}_{nc}(n) \to M_2(\cB(\cH))$ such that, for all $1 \leq i,j \leq n$, $$\pi(u_{ij})=\begin{pmatrix} T_{ij} & (\sqrt{I-TT^*})_{ij} \\ (\sqrt{I-T^*T})_{ij} & -T_{ji}^* \end{pmatrix}.$$ We extend $\psi$ to a ucp map on all of $\mathcal{U}_{nc}(n)$ by letting $\psi(x)$ be given by the $(1,1)$ corner of $\pi(x)$, for each $x \in \mathcal{U}_{nc}(n)$. Define $\widetilde{\gamma}:\mathcal{S} \to M_2(\cB(\cH))$ by setting $$\widetilde{\gamma}(s)=\begin{pmatrix} \gamma(s) & 0 \\ 0 & \gamma(s) \end{pmatrix}.$$ Since $\gamma(s)$ commutes with each $T_{ij}$ and $T_{ij}^*$, we see that $\gamma(s) \otimes I_{\mathcal{H}}$ commutes with $T$ and $T^*$. Hence, $\gamma(s) \otimes I_{\mathcal{H}}$ commutes with $C^*(I_{\mathcal{H}},T,T^*)$, which contains $\sqrt{I-T^*T}$ and $\sqrt{I-TT^*}$. Thus, $\gamma(s) \otimes I_{\mathcal{H}}$ commutes with each block of $\pi(u_{ij})$. It follows that the range of $\widetilde{\gamma}$ commutes with each $\pi(u_{ij})$. Since $\pi$ is a $*$-homomorphism, the ucp maps $\pi$ and $\widetilde{\gamma}$ must have commuting ranges. Therefore, the map $\pi \cdot \widetilde{\gamma}:\mathcal{U}_{nc}(n) \otimes_c \mathcal{S} \to M_2(\cB(\cH))$ is ucp. Compressing to the $(1,1)$ corner in $M_2(\cB(\cH))$ yields the map $\psi \cdot \gamma$. It follows that $(\psi \cdot \gamma)_{|\mathcal{V}_n \otimes \mathcal{S}}$ is ucp on the inclusion of $\mathcal{V}_n \otimes \mathcal{S}$ into $\mathcal{U}_{nc}(n) \otimes_c \mathcal{S}$. Hence, $(\psi \cdot \gamma)^{(p)}(X) \in M_p(\cB(\cH))_+$. As $\psi$ and $\gamma$ were arbitrary, we conclude that $X \in C_p^{\text{comm}}(\mathcal{V}_n,\mathcal{S})$. \end{proof} \begin{lem} \langlebel{vnmincunc} Let $\mathcal{S}$ be any operator system. Then $\mathcal{V}_n \otimes_{\min} \mathcal{S}=\mathcal{V}_n \otimes_c \mathcal{S}$ if and only if $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S}=\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{S}$. In particular, if $\mathcal{S}$ has property $\mathfrak{V}_n$, then $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S}=\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{S}$. 
\end{lem} \begin{proof} Suppose that $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S}=\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{S}$. Since the $\min$ tensor product is injective, $\mathcal{V}_n \otimes_{\min} \mathcal{S}$ is completely order isomorphic to the image of $\mathcal{V}_n \otimes \mathcal{S}$ in $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S}$. By Proposition \ref{vncommutingorderembedding}, $\mathcal{V}_n \otimes_c \mathcal{S}$ is completely order isomorphic to its image in $\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{S}$. It follows that $\mathcal{V}_n \otimes_{\min} \mathcal{S}=\mathcal{V}_n \otimes_c \mathcal{S}$. Conversely, suppose that $\mathcal{V}_n \otimes_{\min} \mathcal{S}=\mathcal{V}_n \otimes_c \mathcal{S}$. We employ an argument analogous to the proof of \cite[Proposition 3.6]{FP}. Let $X \in M_p(\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S})_+$, and let $\psi:\mathcal{U}_{nc}(n) \to \cB(\cH)$ and $\gamma:\mathcal{S} \to \cB(\cH)$ be ucp maps with commuting ranges. Let $\psi=V^*\pi(\cdot)V$ be a minimal Stinespring representation for $\psi$ on some Hilbert space $\mathcal{H}_{\pi}$. By Arveson's commutant lifting theorem \cite[Theorem 1.3.1]{arveson}, there is a unital $*$-homomorphism $\rho:(\psi(\mathcal{U}_{nc}(n)))' \to (\pi(\mathcal{U}_{nc}(n)))'$ such that $\rho(T)V=VT$ for all $T \in (\psi(\mathcal{U}_{nc}(n)))'$. Since $\psi$ and $\gamma$ have commuting ranges, we see that $\widetilde{\gamma}=\rho \circ \gamma:\mathcal{S} \to (\pi(\mathcal{U}_{nc}(n)))' \subseteq \mathcal{B}(\mathcal{K})$ is ucp and its range commutes with the range of $\pi$. Since $\mathcal{V}_n \otimes_{\min} \mathcal{S}=\mathcal{V}_n \otimes_c \mathcal{S}$, the map $(\pi \cdot \widetilde{\gamma})_{|\mathcal{V}_n \otimes_{\min} \mathcal{S}}$ is ucp. Since the min tensor product is injective, $\mathcal{V}_n \otimes_{\min} \mathcal{S}$ is completely order isomorphic to the image of $\mathcal{V}_n \otimes \mathcal{S}$ in $\mathcal{U}_{nc}(n) \otimes_{\min} C_e^*(\mathcal{S})$. Arveson's extension theorem \cite{arveson} guarantees existence of a ucp extension $\eta: \mathcal{U}_{nc}(n) \otimes_{\min} C_e^*(\mathcal{S}) \to \mathcal{B}(\mathcal{K})$ of $(\pi \cdot \widetilde{\gamma})_{|\mathcal{V}_n \otimes_{\min} \mathcal{S}}$. For any $1 \leq i,j \leq n$, we see that $$\eta(u_{ij} \otimes 1)=\pi\cdot \widetilde{\gamma}(u_{ij} \otimes 1)=\pi(u_{ij}).$$ Thus, $\{ u_{ij}: 1 \leq i,j \leq n \}$ is in the multiplicative domain $\mathcal{M}_{\eta}$ of $\eta$. It follows that $\mathcal{U}_{nc}(n) \otimes 1 \subseteq \mathcal{M}_{\eta}$. Hence whenever $a \in \mathcal{U}_{nc}(n)$ and $s \in \mathcal{S}$, we obtain $$\eta(a \otimes s)=\eta(\underbrace{(a \otimes 1)}_{\in \mathcal{M}_{\eta}}(1 \otimes s))=\eta(a \otimes 1)\eta(1 \otimes s)=\pi(a)\widetilde{\gamma}(s).$$ Now, the upper-left corner of $\pi(a)\widetilde{\gamma}(s)$ is $$V^*\pi(a)\widetilde{\gamma}(s)V=V^*\pi(a)\rho(\gamma(s))V=V^*\pi(a)V\gamma(s)=\psi(a)\gamma(s).$$ Using this fact, we have $$\psi \cdot \gamma(a \otimes s)=\psi(a)\gamma(s)=V^*\pi(a)\widetilde{\gamma}(s)V=V^*\eta(a \otimes s)V.$$ So, for all $z \in \mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S}$ we have $\psi \cdot \gamma(z)=V^*\eta(z)V$, so that $(\psi \cdot \gamma)_{|\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S}}$ is ucp. Therefore, $(\psi \cdot \gamma)^{(n)}(X) \in M_p(M_m)_+$ so that $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S}=\mathcal{U}_{nc}(n) \otimes_c \mathcal{S}$. 
\\ \end{proof} \begin{lem} \langlebel{bofhpropertyv} If $\mathcal{H}$ is any Hilbert space, then $\cB(\cH)$ has property $\mathfrak{V}$. Equivalently, $\mathcal{V}_n$ has the OSLLP for every $n \geq 2$. \end{lem} \begin{proof} The proof proceeds in a similar manner to the proof of \cite[Proposition 3.5]{FP}. Let $n \in \mathbb{N}$ and let $S_{11},S_{ij} \in \cB(\cH)$ for $(i,j) \in \Lambda_n$. Suppose that $$1 \otimes S_{11}+\sum_{(i,j) \in \Lambda_n^+} \frac{1}{2n} u_{i,j-n} \otimes S_{ij}+\sum_{(i,j) \in \Lambda_n^-} \frac{1}{2n} u_{j,i-n}^* \otimes S_{ij} \in (\mathcal{U}_{nc}(n) \otimes_{\min} \cB(\cH))_+.$$ As in the proof of Proposition \ref{propertyv}, we may assume that $p=1$. The matrix $0 \in M_n$ is a contraction, so by Proposition \ref{univprop}, the map $\alpha:\mathcal{V}_n \to \mathbb{C}$ given by $\alpha(u_{ij})=0$ and $\alpha(1)=1$ extends to a ucp map on $\mathcal{U}_{nc}(n)$. Hence, $\alpha \otimes \text{id}_{\cB(\cH)}$ is ucp on $\mathcal{U}_{nc}(n) \otimes_{\min} \cB(\cH)$, which forces $S_{11} \geq 0$. Fix $\varepsilon>0$. For any finite-dimensional subspace $\mathcal{M}$ of $\mathcal{H}$, we know that $\mathcal{B}(\mathcal{M})$ has property $\mathfrak{V}$. Let $P_{\mathcal{M}}$ denote the orthogonal projection onto $\mathcal{M}$. Replacing $S_{11}$ with $P_{\mathcal{M}} S_{11} P_{\mathcal{M}}$ and $S_{ij}$ with $P_{\mathcal{M}} S_{ij} P_{\mathcal{M}}$, we may find $R_{ij}^{\varepsilon,\mathcal{M}} \in \mathcal{B}(\mathcal{M})$ such that \begin{itemize} \item $R_{\varepsilon,\mathcal{M}}:=(R_{ij}^{\varepsilon,\mathcal{M}})$ is in $(M_{2n}(\mathcal{B}(\mathcal{M})))_+$, \item $R_{ij}^{\varepsilon,\mathcal{M}}=P_{\mathcal{M}}S_{ij}P_{\mathcal{M}}$ for $(i,j) \in \Lambda_n$, and \item $\sum_{i=1}^{2n} R_{ii}^{\varepsilon,\mathcal{M}}=2n(P_{\mathcal{M}} S_{11} P_{\mathcal{M}}+\varepsilon I_{\mathcal{M}})=2n(P_{\mathcal{M}}S_{11}P_{\mathcal{M}}+\varepsilon P_{\mathcal{M}})$. \end{itemize} Clearly $\sum_{i=1}^{2n} R_{ii}^{\varepsilon,\mathcal{M}} \leq 2n(S_{11}+\varepsilon I_{\mathcal{H}})$, so since each $R_{ii}^{\varepsilon,\mathcal{M}}$ is positive, the diagonal blocks of $R_{\varepsilon,\mathcal{M}}$ are bounded. Since $\mathcal{M}$ is finite-dimensional and $R_{\varepsilon,\mathcal{M}} \geq 0$, the norm of $R_{\varepsilon,\mathcal{M}}$ is given by the largest eigenvalue. Therefore, indexing finite-dimensional subspaces of $\mathcal{H}$ by inclusion, the net $(R_{\varepsilon,\mathcal{M}})_{\mathcal{M} \leq \mathcal{H}, \, \dim(\mathcal{M})<\infty}$ is uniformly bounded. Let $R_{\varepsilon}$ be a $w^*$-limit point of the net $(R_{\varepsilon,\mathcal{M}})_{\mathcal{M}}$. Then the corresponding subnet of $(P_{\mathcal{M}})_{\mathcal{M}}$ converges strongly to $I_{\mathcal{H}}$. It follows that if $R_{\varepsilon}=(R_{ij}^{\varepsilon}) \in M_n(\cB(\cH))$, then $R_{\varepsilon} \geq 0$, while $R_{ij}^{\varepsilon}=S_{ij}$ for all $(i,j) \in \Lambda_n$ and $\sum_{i=1}^{2n} R_{ii}^{\varepsilon}=2n(S_{11}+\varepsilon I_{\mathcal{H}})$. Therefore, $\cB(\cH)$ has property $\mathcal{V}_n$ for every $n \in \mathbb{N}$. \end{proof} \begin{mythe} \langlebel{unckirchberg} $\mathcal{U}_{nc}(n)$ has the LLP; i.e., $\mathcal{U}_{nc}(n) \otimes_{\min} \cB(\cH)=\mathcal{U}_{nc}(n) \otimes_{\max} \cB(\cH)$. \end{mythe} \begin{proof} By Lemma \ref{bofhpropertyv} and Proposition \ref{propertyv}, $\mathcal{V}_n \otimes_{\min} \cB(\cH)=\mathcal{V}_n \otimes_{\max} \cB(\cH)$. Applying Lemma \ref{vnmincunc} gives the desired result. 
\end{proof} It should be noted that Kirchberg's Theorem for $C^*(F_n)$ follows from Theorem \ref{unckirchberg}. To show this fact, we will need the notion of a retract of operator systems. We will say that an operator system $\mathcal{S}$ is a \textbf{retract} of an operator system $\mathcal{T}$ if there are ucp maps $\psi:\mathcal{S} \to \mathcal{T}$ and $\chi:\mathcal{T} \to \mathcal{S}$ such that $\chi \circ \psi=\text{id}_{\mathcal{S}}$. \begin{lem} \langlebel{retract} Let $\mathcal{S}_1,\mathcal{S}_2,\mathcal{T}_1,\mathcal{T}_2$ be operator systems, and let $\tau_1,\tau_2 \in \{ \min,c,\max\}$. For $i=1,2$, suppose that $\mathcal{S}_i$ is a retract of $\mathcal{T}_i$. If $\mathcal{T}_1 \otimes_{\tau_1} \mathcal{T}_2=\mathcal{T}_1 \otimes_{\tau_2} \mathcal{T}_2$ completely order isomorphically (respectively, order isomorphically), then $\mathcal{S}_1 \otimes_{\tau_1} \mathcal{S}_2=\mathcal{S}_1 \otimes_{\tau_2} \mathcal{S}_2$ completely order isomorphically (respectively, order isomorphically). \end{lem} \begin{proof} Since $\min \leq c \leq \max$ as operator system tensor products, we may assume that $\tau_1 \leq \tau_2$. Since $\mathcal{S}_i$ is a retract of $\mathcal{T}_i$, there are ucp maps $\varphi_i:\mathcal{S}_i \to \mathcal{T}_i$ and $\psi_i:\mathcal{T}_i \to \mathcal{S}_i$ such that $\psi_i \circ \varphi_i=\text{id}_{\mathcal{S}_i}$. For each $j=1,2$, by functoriality of $\tau_j$, the maps $\varphi_1 \otimes \varphi_2:\mathcal{S}_1 \otimes_{\tau_j} \mathcal{S}_2 \to \mathcal{T}_1 \otimes_{\tau_j} \mathcal{T}_2$ and $\psi_1 \otimes \psi_2:\mathcal{T}_1 \otimes_{\tau_j} \mathcal{T}_2 \to \mathcal{S}_1 \otimes_{\tau_j} \mathcal{S}_2$ are ucp. Moreover, the following diagram commutes: \begin{tikzpicture}[every node/.style={midway}] \matrix[column sep={20em,between origins}, row sep={7em}] at (0,0) { \node(uncmin) {$\mathcal{T}_1 \otimes_{\tau_1} \mathcal{T}_2$} ; & \node(uncmax) {$\mathcal{T}_1 \otimes_{\tau_2} \mathcal{T}_2$}; \\ \node(wnmin) {$\mathcal{S}_1 \otimes_{\tau_1} \mathcal{S}_2$}; & \node (wnmax) {$\mathcal{S}_1 \otimes_{\tau_2} \mathcal{S}_2$};\\ }; \draw[->] (wnmin) -- (uncmin) node[anchor=east] {$\varphi_1 \otimes \varphi_2$}; \draw[->] (uncmin) -- (uncmax) node[anchor=south] {$\text{id}_{\mathcal{T}_1} \otimes \text{id}_{\mathcal{T}_2}$}; \draw[->] (uncmax) -- (wnmax) node[anchor=west] {$\psi_1 \otimes \psi_2$}; \draw[->] (wnmin) -- (wnmax) node[anchor=south] {$\text{id}_{\mathcal{S}_1} \otimes \text{id}_{\mathcal{S}_2}$}; \end{tikzpicture} By assumption, the map $\text{id}:\mathcal{T}_1 \otimes_{\tau_1} \mathcal{T}_2 \to \mathcal{T}_1 \otimes_{\tau_2} \mathcal{T}_2$ is completely positive (respectively, positive). Thus, $\text{id}:\mathcal{S}_1 \otimes_{\tau_1} \mathcal{S}_2 \to \mathcal{S}_1 \otimes_{\tau_2} \mathcal{S}_2$ is completely positive (respectively, positive). The result follows. \end{proof} For the next lemma, we define the operator system $\mathcal{S}_n \subseteq C^*(F_n)$ to be $\mathcal{S}_n=\text{span } \{1,w_1,...,w_n,w_1^*,...,w_n^*\}$, where $w_1,...,w_n$ are the generators of $F_n$. \begin{lem} \langlebel{uncretract} Let $n \geq 2$. \begin{enumerate} \item $C^*(F_n)$ is a retract of $\mathcal{U}_{nc}(n)$. \item $\mathcal{S}_n$ is a retract of $\mathcal{V}_n$. \end{enumerate} \end{lem} \begin{proof} To prove (1), we note that $\begin{pmatrix} w_1 \\ & \ddots \\ & & w_n \end{pmatrix} \in M_n(C^*(F_n))$ is unitary. 
Hence, there is a unital $*$-homomorphism $\pi:\mathcal{U}_{nc}(n) \to C^*(F_n)$ such that $\pi(u_{ij})=0$ for $i \neq j$ and $\pi(u_{ii})=w_i$. Then $\pi \otimes \text{id}_{\mathcal{S}}:\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{S} \to C^*(F_n) \otimes_{\max} \mathcal{S}$ is ucp, while $\text{id}:\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S} \to \mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{S}$ is ucp by Proposition \ref{propertyv} and Lemma \ref{vnmincunc}. We let $U_1=U:=(u_{ij}) \in M_n(\mathcal{U}_{nc}(n))$. For $2 \leq i \leq n$, let $U_i$ be the conjugation of $U$ by a permutation matrix such that the $(1,1)$-entry of $U_i$ is $u_{ii}$. Then each $U_i \in M_n(\mathcal{U}_{nc}(n))$ is unitary, so by the universal property for $C^*(F_n)$, there is a unital $*$-homomorphism $\rho:C^*(F_n) \to M_n(\mathcal{U}_{nc}(n))$ such that $\rho(w_i)=U_i$ for all $1 \leq i \leq n$. Compressing to the $(1,1)$-entry in $M_n(\mathcal{U}_{nc}(n))$ gives rise to a ucp map $\psi:C^*(F_n) \to \mathcal{U}_{nc}(n)$ such that $\psi(w_i)=u_{ii}$ for all $1 \leq i \leq n$. Since $\pi \circ \psi(w_i)=w_i$ and since $w_i$ is unitary, it follows that $w_i$ lies in the multiplicative domain of $\pi \circ \psi$. Since $C^*(F_n)$ is generated by $\{w_1,...,w_n\}$, it follows that $\pi \circ \psi$ is multiplicative on $C^*(F_n)$. The fact that $\pi \circ \psi(w_i)=w_i$ for all $i$ forces $\pi \circ \psi=\text{id}_{C^*(F_n)}$. Thus, (1) holds. For (2), since $\psi(w_i)=u_{ii} \in \mathcal{V}_n$ for all $1 \leq i \leq n$, we have $\psi(\mathcal{S}_n) \subseteq \mathcal{V}_n$. Clearly $\pi(\mathcal{V}_n)=\mathcal{S}_n$. Since $\pi \circ \psi=\text{id}_{C^*(F_n)}$, it follows that $\mathcal{S}_n$ is a retract of $\mathcal{V}_n$ via the maps $\psi_{|\mathcal{S}_n}:\mathcal{S}_n \to \mathcal{V}_n$ and $\pi_{|\mathcal{V}_n}:\mathcal{V}_n \to \mathcal{S}_n$. \end{proof} \begin{mythe} \langlebel{propertyvandw} Whenever $\mathcal{S}$ is an operator system with property $\mathfrak{V}_n$, we have $C^*(F_n) \otimes_{\min} \mathcal{S}=C^*(F_n) \otimes_{\max} \mathcal{S}$. \end{mythe} \begin{proof} By Lemma \ref{uncretract}, $C^*(F_n)$ is a retract of $\mathcal{U}_{nc}(n)$. Applying Lemma \ref{retract}, since $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{S}=\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{S}$, it follows that $C^*(F_n) \otimes_{\min} \mathcal{S}=C^*(F_n) \otimes_{\max} \mathcal{S}$, which completes the proof. \end{proof} \begin{cor} \emph{(Kirchberg's Theorem, \cite{kirchberg})} \langlebel{kirchberg} Let $n \geq 2$. Then $C^*(F_n)$ has the LLP. In other words, $C^*(F_n) \otimes_{\min} \cB(\cH)=C^*(F_n) \otimes_{\max} \cB(\cH)$. \end{cor} Using Theorem \ref{propertyvandw}, it is possible to characterize unital $C^{\ast}$-algebras having the WEP and operator systems having the DCEP in terms of tensor products with $\mathcal{V}_2$. \begin{mythe} \langlebel{weppropertyv2} Let $\mathcal{A}$ be a unital $C^{\ast}$-algebra. The following are equivalent. \begin{enumerate} \item $\mathcal{A}$ has the WEP. \item $\mathcal{A}$ has property $\mathfrak{V}$. \item $\mathcal{A}$ has property $\mathfrak{V}_2$. \item $\mathcal{A} \otimes_{\min} \mathcal{V}_2=\mathcal{A} \otimes_{\max} \mathcal{V}_2$. \end{enumerate} \end{mythe} \begin{proof} Clearly (2) implies (3), while (3) implies (4) by Proposition \ref{propertyv}. Suppose that $\mathcal{A}$ has the WEP. By Theorem \ref{wepnuclearity}, $\mathcal{A}$ is $(\text{el},\max)$-nuclear. 
By Theorem \ref{osllp}, each $\mathcal{V}_n$ having the OSLLP implies that each $\mathcal{V}_n$ is $(\min,\text{er})$-nuclear. Hence, $$\mathcal{V}_n \otimes_{\min} \mathcal{A}=\mathcal{V}_n \otimes_{\text{er}} \mathcal{A}=\mathcal{A} \otimes_{\text{el}} \mathcal{V}_n=\mathcal{A} \otimes_{\max} \mathcal{V}_n=\mathcal{V}_n \otimes_{\max} \mathcal{A}.$$ By Proposition \ref{propertyv} and the fact that $n \geq 2$ was arbitrary, we conclude that $\mathcal{A}$ has property $\mathfrak{V}$. This shows that (1) implies (2). Finally, we prove that (4) implies (1). Suppose that $\mathcal{A} \otimes_{\min} \mathcal{V}_2=\mathcal{A} \otimes_{\max} \mathcal{V}_2$. Then by Lemma \ref{vnmincunc}, we have $\mathcal{U}_{nc}(2) \otimes_{\min} \mathcal{A}=\mathcal{U}_{nc}(2) \otimes_{\max} \mathcal{A}$. Using Theorem \ref{propertyvandw}, we have $C^*(F_2) \otimes_{\min} \mathcal{A}=C^*(F_2) \otimes_{\max} \mathcal{A}$. As $F_{\infty}$ embeds as a subgroup into $F_2$, by \cite[Proposition 8.8]{pisierbook} it follows that there are ucp maps $\Phi:C^*(F_{\infty}) \to C^*(F_2)$ and $\Psi:C^*(F_2) \to C^*(F_{\infty})$ with $\Psi \circ \Phi=\text{id}$. By Lemma \ref{retract}, we have $C^*(F_{\infty}) \otimes_{\min} \mathcal{A}=C^*(F_{\infty}) \otimes_{\max} \mathcal{A}$. By \cite[Proposition 1.1(iii)]{kirchberg93}, $\mathcal{A}$ has the WEP. \end{proof} There is a similar characterization for operator systems with the DCEP. \begin{mythe} \langlebel{dcepv2} Let $\mathcal{S}$ be an operator system. The following are equivalent. \begin{enumerate} \item $\mathcal{S}$ has the DCEP. \item $\mathcal{S} \otimes_{\min} \mathcal{V}_n=\mathcal{S} \otimes_c \mathcal{V}_n$ for all $n \geq 2$. \item $\mathcal{S} \otimes_{\min} \mathcal{V}_2=\mathcal{S} \otimes_c \mathcal{V}_2$. \end{enumerate} \end{mythe} \begin{proof} Assume that $\mathcal{S}$ has the DCEP. By Theorem \ref{dcep}, $\mathcal{S}$ is $(\text{el},c)$-nuclear, while $\mathcal{V}_n$ is $(\min,\text{er})$-nuclear. It follows that $\mathcal{S} \otimes_{\min} \mathcal{V}_n=\mathcal{S} \otimes_c \mathcal{V}_n$ for all $n \geq 2$. Hence, (1) implies (2). Clearly (2) implies (3). If (3) is true, then by Lemma \ref{vnmincunc} and by Theorem \ref{propertyvandw}, we must have $\mathcal{S} \otimes_{\min} C^*(F_2)=\mathcal{S} \otimes_{\max} C^*(F_2)$. Since $C^*(F_{\infty})$ is a retract of $C^*(F_2)$, using Lemma \ref{retract} gives $\mathcal{S} \otimes_{\min} C^*(F_{\infty})=\mathcal{S} \otimes_{\max} C^*(F_{\infty})$. Applying Theorem \ref{dcep} shows that $\mathcal{S}$ has the DCEP, so that (1) is true. \end{proof} \section{Relating $\mathcal{V}_n$ to Kirchberg's conjecture} The proof of Theorem \ref{propertyvandw} shows that $C^*(F_n)$ is a retract of $\mathcal{U}_{nc}(n)$ via ucp maps. Using this trick allows for a connection between $\mathcal{U}_{nc}(n)$ and Kirchberg's conjecture. \begin{mythe} \langlebel{conditionforkirchberg} If $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{U}_{nc}(n) =\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n)$ for some $n \geq 2$, then Kirchberg's conjecture is valid. \end{mythe} \begin{proof} It is well known that Kirchberg's conjecture is true if and only if it holds for some $n \in \mathbb{N}$ with $n \geq 2$. Now, if $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{U}_{nc}(n)=\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n)$, then combining Lemmas \ref{retract} and \ref{uncretract} yields the complete order isomorphism $C^*(F_n) \otimes_{\min} C^*(F_n)=C^*(F_n) \otimes_{\max} C^*(F_n)$. 
\end{proof} The link between Kirchberg's conjecture and the WEP allows us to prove the converse of Theorem \ref{conditionforkirchberg}. In other words, while the assumption that $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{U}_{nc}(n)=\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n)$ for some $n \geq 2$ appears to be slightly stronger than Kirchberg's conjecture, it is in fact equivalent to Kirchberg's conjecture. \begin{mythe} \langlebel{uncequivalenttokirchberg} The following statements are equivalent. \begin{enumerate} \item $\mathcal{V}_2 \otimes_{\min} \mathcal{V}_2=\mathcal{V}_2 \otimes_c \mathcal{V}_2$. \item $\mathcal{U}_{nc}(2) \otimes_{\min} \mathcal{U}_{nc}(2)=\mathcal{U}_{nc}(2) \otimes_{\max} \mathcal{U}_{nc}(2)$. \item $C^*(F_2) \otimes_{\min} C^*(F_2)=C^*(F_2) \otimes_{\max} C^*(F_2)$. \item $C^*(F_{\infty}) \otimes_{\min} C^*(F_{\infty})=C^*(F_{\infty}) \otimes_{\max} C^*(F_{\infty})$. \item Connes' embedding problem has a positive answer. \end{enumerate} \end{mythe} \begin{proof} Note that if (3) holds, then since $C^*(F_{\infty})$ is a retract of $C^*(F_2)$, (4) also holds by the same argument as in the proof of Theorem \ref{weppropertyv2}. Clearly $F_2$ embeds into $F_{\infty}$ so that, by \cite[Proposition 8.8]{pisierbook}, $C^*(F_2)$ is a retract of $C^*(F_{\infty})$. Hence, (4) implies (3). Using Lemma \ref{vnmincunc} shows that (1) implies (2), while Theorem \ref{conditionforkirchberg} shows that (2) implies (3). Assuming (4) is true, it follows that $C^*(F_{\infty})$ has the WEP \cite{kirchberg93}. Then \cite[Theorem~9.1]{quotients} shows that any operator system $\mathcal{S}$ that is $(\min,\text{er})$-nuclear satisfies $\mathcal{S} \otimes_{\min} \mathcal{S}=\mathcal{S} \otimes_c \mathcal{S}$. By Lemma \ref{bofhpropertyv} and Theorem \ref{osllp}, $\mathcal{V}_2$ is $(\min,\text{er})$-nuclear. Therefore, $\mathcal{V}_2 \otimes_{\min} \mathcal{V}_2=\mathcal{V}_2 \otimes_c \mathcal{V}_2$, as required. \end{proof} Because $\mathcal{S}_n$ is a retract of $\mathcal{V}_n$, we can prove the following. \begin{pro} \langlebel{vncnotmax} For all $n,m \geq 2$, $\mathcal{V}_n \otimes_c \mathcal{V}_m \neq \mathcal{V}_n \otimes_{\max} \mathcal{V}_m$. \end{pro} \begin{proof} By \cite[Theorem 3.8]{FKPT}, we have $\mathcal{S}_n \otimes_c \mathcal{S}_m \neq \mathcal{S}_n \otimes_{\max} \mathcal{S}_m$ for all $n,m \geq 2$. Hence, if $\mathcal{V}_n \otimes_c \mathcal{V}_m=\mathcal{V}_n \otimes_{\max} \mathcal{V}_m$, then by Lemmas \ref{retract} and \ref{uncretract}, we have $\mathcal{S}_n \otimes_c \mathcal{S}_m=\mathcal{S}_n \otimes_{\max} \mathcal{S}_m$, which is a contradiction. \end{proof} \begin{cor} For all $n,m \geq 2$, $C_e^*(\mathcal{V}_n \otimes_{\max} \mathcal{V}_m) \neq C_e^*(\mathcal{V}_n) \otimes_{\max} C_e^*(\mathcal{V}_m)$. \end{cor} \begin{proof} Suppose that $C_e^*(\mathcal{V}_n \otimes_{\max} \mathcal{V}_m)=C_e^*(\mathcal{V}_n) \otimes_{\max} C_e^*(\mathcal{V}_m)$. The latter $C^{\ast}$-algebra is $\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(m)$. Applying Proposition \ref{vncommutingorderembedding}, $\mathcal{V}_n \otimes_c \mathcal{V}_m$ is completely order isomorphic to its inclusion in $\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(m)$. The operator system $\mathcal{V}_n \otimes_{\max} \mathcal{V}_m$ is completely order isomorphic to its inclusion in $C_e^*(\mathcal{V}_n \otimes_{\max} \mathcal{V}_m)$ \cite{hamana}. 
Thus, $\mathcal{V}_n \otimes_c \mathcal{V}_m=\mathcal{V}_n \otimes_{\max} \mathcal{V}_m$, contradicting Proposition \ref{vncnotmax}. \end{proof} \begin{cor} Let $U=(u_{ij}) \in M_n(\mathcal{U}_{nc}(n))$ and $V=(v_{k\text{el}l}) \in M_m(\mathcal{U}_{nc}(m))$ be the matrices of generators of $\mathcal{U}_{nc}(n)$ and $\mathcal{U}_{nc}(m)$, respectively, where $n,m \geq 2$. Then $U_0=(u_{ij} \otimes 1) \in M_n(C_e^*(\mathcal{V}_n \otimes_{\max} \mathcal{V}_m))$ and $V_0=(1 \otimes v_{k\text{el}l}) \in M_m(C_e^*(\mathcal{V}_n \otimes_{\max} \mathcal{V}_m))$ fail to be unitary. \end{cor} \begin{proof} Suppose that $U_0$ and $V_0$ were unitary. The entries of $U_0$ $*$-commute with the entries of $V_0$, so there are unital $*$-homomorphisms $\pi:\mathcal{U}_{nc}(n) \to C_e^*(\mathcal{V}_n \otimes_{\max} \mathcal{V}_m)$ and $\rho:\mathcal{U}_{nc}(m) \to C_e^*(\mathcal{V}_n \otimes_{\max} \mathcal{V}_m)$ with $\pi(u_{ij})=u_{ij} \otimes 1$ and $\rho(v_{k\text{el}l})=1 \otimes v_{k\text{el}l}$. The ranges of $\pi$ and $\rho$ commute, so that $\pi \cdot \rho:\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(m) \to C_e^*(\mathcal{V}_n \otimes_{\max} \mathcal{V}_m)$ is a $*$-homomorphism. In particular, $\pi \cdot \rho$ is ucp and $\pi \cdot \rho$ is the identity map when restricted to $\mathcal{V}_n \otimes \mathcal{V}_m$. By Proposition \ref{vncommutingorderembedding}, the inclusion of $\mathcal{V}_n \otimes \mathcal{V}_m$ into $\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(m)$ is $\mathcal{V}_n \otimes_c \mathcal{V}_m$. Therefore, $\text{id}:\mathcal{V}_n \otimes_c \mathcal{V}_m \to \mathcal{V}_n \otimes_{\max} \mathcal{V}_m$ is ucp, contradicting Proposition \ref{vncnotmax}. \end{proof} \section{Unitary Correlation Sets} There is a way of stating Connes' embedding Problem in terms of what are known as quantum correlation matrices. We outline the definition of these sets, as they will motivate the definition of a new collection of correlation sets. The background for Tsirelson's problem in quantum correlations is motivated by a question in bipartite quantum information theory. Essentially, there are two models often used for nonlocal quantum correlations: one is a tensor product model, and one is a commuting model. Tsirelson's problem asks whether these models are the same, up to approximation. We will formulate the sets of nonlocal quantum correlations in each model below. If $\mathcal{H}$ is a Hilbert space, we say that a set of operators $(P_i)_{i=1}^m$ is a \textbf{positive operator-valued measure with $m$ outputs} (POVM) if each $P_i \geq 0$ and $\sum_{i=1}^m P_i=I$. If the $P_i$'s are also orthogonal projections, then we say that $(P_i)_{i=1}^m$ is a \textbf{projection-valued measure with $m$ outputs} (PVM). Note that if $(P_i)_{i=1}^m$ is a PVM on $\mathcal{H}$, then it necessarily follows that $P_i \perp P_j$ for $i \neq j$. For each choice of $n,m \in \mathbb{N}$, the set of quantum commuting correlation probabilities of two separated systems of $n$ POVM's with $m$ outputs is given by $$C_{qc}(n,m)=\{ (\langle P_{a,x} Q_{b,y} \xi,\xi \rangle)_{a,b,x,y} \},$$ where for each $a,b \in \{1,...,n\}$, $(P_{a,x})_{x=1}^m$ and $(Q_{b,y})_{y=1}^m$ are POVM's with $m$ outputs, $\mathcal{H}$ is some Hilbert space, $\xi \in \mathcal{H}$ is a unit vector, and $P_{a,x}Q_{b,y}=Q_{b,y}P_{a,x}$ for all choices of $a,b,x,y$. 
Similarly, we define $$C_q(n,m)=\{ (\langle (P_{a,x} \otimes Q_{b,y})\xi,\xi \rangle)_{a,b,x,y} \},$$ where each $(P_{a,x})_{x=1}^m$ is a POVM with $m$ outputs on a Hilbert space $\mathcal{H}_A$, each $(Q_{b,y})_{y=1}^m$ is a POVM with $m$-outputs on a Hilbert space $\mathcal{H}_B$, $\xi \in \mathcal{H}_A \otimes \mathcal{H}_B$ is a unit vector, and $\dim(\mathcal{H}_A),\dim(\mathcal{H}_B)<\infty$. We also define the possibly larger set $C_{qs}(n,m)$ to be the set of all correlations with the same form as for $C_q(n,m)$, except that we allow the Hilbert spaces to be infinite-dimensional. For convenience, we denote by $C_{qa}(n,m)$ the closure of $C_q(n,m)$. It is known that $$C_q(n,m) \subseteq C_{qs}(n,m) \subseteq C_{qa}(n,m) \subseteq C_{qc}(n,m), \, \forall n,m,$$ and $C_{qc}(n,m)$ is closed. Moreover, each of these sets is convex. One form of Tsirelson's problem, then, is determining whether $C_{qa}(n,m)=C_{qc}(n,m)$ for all $n,m \geq 2$. More information on these correlation sets can be found in \cite{tsirelson}, \cite{fritz} and \cite{junge}. It is shown in \cite{fritz} and \cite{junge} that, in the definitions of $C_q(n,m)$ and $C_{qc}(n,m)$, one may take the POVM's to simply be PVM's. There is a natural link between the sets $C_{qa}(n,m)$, $C_{qc}(n,m)$ and states on tensor products of the $C^{\ast}$-algebra $C^{\ast}( \ast_n \mathbb{Z}_m)$, where $\ast_n \mathbb{Z}_m$ denotes the free product of $n$ copies of the finite cyclic group $\mathbb{Z}_m$ (see \cite{fritz, junge,ozawa}). One key fact is that $C^*(\ast_n \mathbb{Z}_m)$ is isomorphic to $\ast_n \text{el}l_m^{\infty}$, the free product of $n$ copies of $\text{el}l_m^{\infty}$ \cite{fritz,junge,ozawa}. If $g_x$ denotes the generator of the $x$-th copy of $\mathbb{Z}_m$ in $C^*(\ast_n \mathbb{Z}_m)$ and $e_{a,x}$ denotes the generator of the $a$-th coordinate in the $x$-th copy of $\text{el}l_m^{\infty}$, then the isomorphism is implemented via $$g_x \mapsto \sum_{a=1}^m \exp \left( \frac{2\pi a i}{m} \right) e_{a,x}.$$ It follows (see \cite{fritz,junge,ozawa}) that $C_{qa}(n,m)$ is the set of coordinates in $\mathbb{R}^{n^2m^2}$ given by the images of states $s \in \mathcal{S}(C^*(*_n \mathbb{Z}_m) \otimes_{\min} C^*(*_n \mathbb{Z}_m))$ on the generating set $\{e_{a,x} \otimes e_{b,y}: 1 \leq a,b \leq m, \, 1 \leq x,y \leq n\}$, while $C_{qc}(n,m)$ corresponds to the images of states on $C^*(*_n \mathbb{Z}_m) \otimes_{\max} C^*(*_n \mathbb{Z}_m)$. For our purposes, we may consider the special case of $m=2$, which involves the $C^{\ast}$-algebra $C^*( \ast_n \mathbb{Z}_2)$. Following the notation in \cite{FKPT}, we let $h_i$ be the generator of the $i$-th copy of $\mathbb{Z}_2$ inside of $C^*(\ast_n \mathbb{Z}_2)$. Each $h_i$ is a self-adjoint unitary. We let $NC(n)$ be the operator system generated by $\{h_1,...,h_n\}$ inside of $C^*(*_n \mathbb{Z}_2)$. \begin{pro} \emph{(Farenick-Kavruk-Paulsen-Todorov, \cite{FKPT})} \langlebel{ncboxunivprop} If $X_1,...,X_n \in \cB(\cH)$ are hermitian contractions, then there is a unique ucp map $\gamma:NC(n) \to \cB(\cH)$ given by $\gamma(h_i)=X_i$ for all $1 \leq i \leq n$. \end{pro} The isomorphism $C^*(\ast_n \mathbb{Z}_2) \simeq \ast_n \text{el}l^{\infty}_2$ is implemented by the mapping $$h_i \mapsto p_i-q_i,$$ where $p_i$ is the element $(1,0)$ in the $i$-th copy of $\text{el}l_2^{\infty}$, and $q_i$ is the element $(0,1)$ in the $i$-th copy of $\text{el}l_2^{\infty}$. 
By \cite[Lemma 6.2]{FKPT}, the operator system $NC(n) \otimes_c NC(n)$ is completely order isomorphic to its image inside of $C^*(*_n \mathbb{Z}_2) \otimes_{\max} C^*(*_n \mathbb{Z}_2)$. With this information in hand, we easily obtain the following: \begin{pro} \label{qaqcorderiso} For $n \geq 2$, $C_{qa}(n,2)=C_{qc}(n,2)$ if and only if the identity map $\text{id}:NC(n) \otimes_{\min} NC(n) \to NC(n) \otimes_c NC(n)$ is an order isomorphism. \end{pro} \begin{pro} \label{ncboxfactorsthroughvn} For any $n \geq 2$, $NC(n)$ is a retract of $\mathcal{V}_n$. \end{pro} \begin{proof} By \cite[Proposition 5.7]{FKPT}, there are ucp maps $\eta:NC(n) \to \mathcal{S}_n$ and $\theta:\mathcal{S}_n \to NC(n)$ such that $\theta \circ \eta=\text{id}_{NC(n)}$. By Lemma \ref{uncretract}, there are ucp maps $\psi:\mathcal{S}_n \to \mathcal{V}_n$ and $\pi:\mathcal{V}_n \to \mathcal{S}_n$ with $\pi \circ \psi=\text{id}_{\mathcal{S}_n}$. Then $\psi \circ \eta:NC(n) \to \mathcal{V}_n$ and $\theta \circ \pi:\mathcal{V}_n \to NC(n)$ are ucp maps satisfying $(\theta \circ \pi) \circ (\psi \circ \eta)=\text{id}_{NC(n)}$. We conclude that $NC(n)$ is a retract of $\mathcal{V}_n$. \end{proof} We wish to define correlation matrices with respect to $\mathcal{U}_{nc}(n)$ that are similar in nature to Tsirelson's correlation sets. We recall that a $C^{\ast}$-algebra $\mathcal{A}$ is said to be \textbf{residually finite-dimensional} (RFD) if there is a family $(\pi_i)_{i \in I}$ of $*$-homomorphisms $\pi_i:\mathcal{A} \to \mathcal{B}(\mathcal{H}_i)$ with each $\dim(\mathcal{H}_i)<\infty$ such that $\pi:=\bigoplus_{i \in I} \pi_i$ is faithful. A key component in linking the usual quantum correlation matrices with Kirchberg's conjecture is the fact that $C^*(F_n)$ is RFD for every $n$. Here, we show that $\mathcal{U}_{nc}(n)$ also enjoys this property. \begin{mythe} \label{uncrfd} For any $n \geq 2$, $\mathcal{U}_{nc}(n)$ is RFD. \end{mythe} \begin{proof} The proof mimics the proof that $C^*(F_n)$ is RFD (see \cite[Theorem 7]{choi}). It is not hard to see that $\mathcal{U}_{nc}(n)$ is a separable $C^{\ast}$-algebra, so we may assume that $\mathcal{U}_{nc}(n) \subseteq \cB(\cH)$ is faithfully represented on a separable infinite-dimensional Hilbert space $\mathcal{H}$. Hence, there are operators $U_{ij} \in \cB(\cH)$ for $1 \leq i,j \leq n$ such that $\mathcal{U}_{nc}(n) \simeq C^*(\{U_{ij}\}_{i,j})$ via the mapping $u_{ij} \mapsto U_{ij}$. Let $(P_m)_{m=1}^{\infty}$ be an increasing sequence of projections with $\text{rank}(P_m)=m$ and $SOT$-$\lim_{m \to \infty} P_m=I$. Define $V_{m,ij}=P_mU_{ij}P_m$ and let $V_m=(V_{m,ij})$. Since $\text{rank}(P_m)=m$, we may identify each $V_{m,ij}$ with an element of $M_m$ and hence $V_m$ with an element of $M_n(M_m)$. Observe that $$V_m=\begin{pmatrix} P_m \\ & \ddots \\ & & P_m \end{pmatrix} U \begin{pmatrix} P_m \\ & \ddots \\ & & P_m \end{pmatrix},$$ where $U=(U_{ij})$. Therefore, each $V_m$ is a contraction. By Proposition \ref{univprop}, there exist unital $*$-homomorphisms $\pi_m:\mathcal{U}_{nc}(n) \to M_2(M_m)$ for each $m \in \mathbb{N}$ such that $$X_{m,ij}:=\pi_m(u_{ij})=\begin{pmatrix} V_{m,ij} & (\sqrt{I-V_mV_m^*})_{ij} \\ (\sqrt{I-V_m^*V_m})_{ij} & -V_{m,ji}^* \end{pmatrix}$$ for all $i,j$. Since $V_{m,ij}^*=P_mU_{ij}^* P_m$, $SOT$-$\lim_{m \to \infty} V_m=U$ and $SOT$-$\lim_{m \to \infty} V_m^*=U^*$.
Hence, every entry of $V_m$ converges in SOT, so that $$SOT\text{-}\lim_{m \to \infty} X_{m,ij}=\begin{pmatrix} U_{ij} & 0 \\ 0 & -U_{ji}^* \end{pmatrix}.$$ Let $F$ be any word in the generators of $\mathcal{U}_{nc}(n)$. We similarly obtain $$SOT\text{-}\lim_{m \to \infty} \pi_m(F)=\begin{pmatrix} F & 0 \\ 0 & F(\{-U_{ji}^*,-U_{ji}\}) \end{pmatrix},$$ where $F(\{-U_{ji}^*,-U_{ji}\})$ is the word obtained by replacing every occurrence of $U_{ij}$ with $-U_{ji}^*$, and every occurrence of $U_{ij}^*$ with $-U_{ji}$. Assume that $F$ has norm $1$. Then given $\varepsilon>0$, there is $m_0 \in \mathbb{N}$ such that for all $m \geq m_0$, we have $\|F(\{X_{m,ij},X_{m,ij}^*\})\| \geq 1-\varepsilon$. Hence, $\pi:=\bigoplus_{m \in \mathbb{N}} \pi_m$ is isometric on the dense subspace of linear combinations of words in the generators of $\mathcal{U}_{nc}(n)$. Since $\pi$ must be continuous, $\pi$ is isometric on $\mathcal{U}_{nc}(n)$. This shows that $\pi$ is faithful and $\mathcal{U}_{nc}(n)$ is RFD. \end{proof} \begin{rem} \label{minrfd} It is not hard to see that whenever $\mathcal{A}$ and $\mathcal{B}$ are RFD $C^{\ast}$-algebras, then $\mathcal{A} \otimes_{\min} \mathcal{B}$ is also RFD. Hence, $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{U}_{nc}(k)$ is RFD for every $n,k \geq 2$. \end{rem} As with $C^*(F_n)$, we can reformulate Kirchberg's conjecture in terms of whether or not $\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n)$ is RFD. The proof is identical to the $C^*(F_n)$ case \cite[Proposition 7.4.4]{brownozawa} and is omitted. \begin{mythe} \label{uncmaxrfdconjecture} The following statements are equivalent. \begin{enumerate} \item (Kirchberg's Conjecture) $C^*(F_n) \otimes_{\min} C^*(F_n)=C^*(F_n) \otimes_{\max} C^*(F_n)$ for all/some $n \geq 2$. \item $\mathcal{U}_{nc}(n) \otimes_{\min} \mathcal{U}_{nc}(n)=\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n)$ for all/some $n \geq 2$. \item $\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n)$ is RFD for all/some $n \geq 2$. \end{enumerate} \end{mythe} We will show below that (3) holds if we weaken the assumption of residual finite-dimensionality to being quasidiagonal. Recall that a $C^{\ast}$-algebra $\mathcal{A}$ is \textbf{quasidiagonal}, or \textbf{QD}, if there is a net of ucp maps $\varphi_{\lambda}:\mathcal{A} \to M_{k(\lambda)}$ such that $\lim_{\lambda} \|\varphi_{\lambda}(a)\|=\|a\|$ and $\lim_{\lambda} \|\varphi_{\lambda}(ab)-\varphi_{\lambda}(a)\varphi_{\lambda}(b)\|=0$ for all $a,b \in \mathcal{A}$. It is easy to see that every RFD $C^{\ast}$-algebra is QD. \begin{mythe} \label{uncmaxqd} For every $n \geq 2$, $\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n)$ is QD. \end{mythe} \begin{proof} The proof is similar to the proof for $C^*(F_n) \otimes_{\max} C^*(F_n)$ (see \cite[Proposition 7.4.5]{brownozawa}). Let $\pi:\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n) \to \cB(\cH)$ be a faithful representation on a Hilbert space $\mathcal{H}$. Let $U=(U_{ij})$ be the matrix of generators of $\mathcal{U}_{nc}(n) \otimes 1$, and let $V=(V_{ij})$ be the matrix of generators of $1 \otimes \mathcal{U}_{nc}(n)$, so that each $U_{ij},V_{ij} \in \cB(\cH)$. The nature of the max tensor product forces the $U_{ij}$'s and $V_{k\ell}$'s to $*$-commute. The unitary group of $\mathcal{B}(\mathcal{H}^{(n)})$ is path connected by the Borel functional calculus.
Hence, there are norm-continuous functions $u,v:[0,1] \to \mathcal{B}(\mathcal{H}^{(n)})$ such that $u(0)=I_{\mathcal{H}^{(n)}}=v(0)$, $u(1)=U$ and $v(1)=V$. Since $UV=VU$ and $UV^*=V^*U$, the von Neumann algebras $W^*(U)$ and $W^*(V)$ must commute with each other. Using the Borel functional calculus, we can arrange to have $u(t) \in W^*(U)$ and $v(t) \in W^*(V)$ for all $t \in [0,1]$. The entries of $U$ and $V$ $*$-commute, so this must also hold for the entries of $p(U,U^*)$ and $q(V,V^*)$ for any $*$-polynomials $p,q$. Taking limits, one sees that the entries of $u(t) \in \mathcal{B}(\mathcal{H}^{(n)})$ must $*$-commute with the entries of $v(t) \in \mathcal{B}(\mathcal{H}^{(n)})$ for all $t \in [0,1]$. Since $u(t)$ and $v(t)$ are unitary with $*$-commuting entries, there is a unique $*$-homomorphism $\pi_t:\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n) \to \cB(\cH)$ with $\pi_t(u_{ij} \otimes 1)=(u(t))_{ij}$ and $\pi_t(1 \otimes v_{ij})=(v(t))_{ij}$. As $\pi_0$ is the trivial representation onto $\mathbb{C} I_{\mathcal{H}}$ and $\pi_1=\pi$, we see that $\pi$ is homotopic to the trivial representation. Since $\pi$ is injective and $\mathbb{C}$ is obviously QD, by \cite[Proposition 7.3.5]{brownozawa}, $\mathcal{U}_{nc}(n) \otimes_{\max} \mathcal{U}_{nc}(n)$ is QD. \end{proof} To obtain a unitary version of Tsirelson's problem that is equivalent to Kirchberg's conjecture, it is helpful to have a characterization of RFD $C^{\ast}$-algebras in terms of their state spaces. By way of notation, we denote by $\mathcal{S}(\mathcal{A})$ the set of all states on a unital $C^{\ast}$-algebra $\mathcal{A}$. We define $\text{Fin}(\mathcal{A})$ to be the set of all states on $\mathcal{A}$ whose GNS representations act on finite-dimensional Hilbert spaces. While a number of characterizations for residual finite-dimensionality are given in \cite{EL}, we only require the following one. \begin{mythe} \emph{(Exel-Loring \cite{EL})} \label{EL} A unital $C^{\ast}$-algebra $\mathcal{A}$ is RFD if and only if $\text{Fin}(\mathcal{A})$ is $w^*$-dense in $\mathcal{S}(\mathcal{A})$. \end{mythe} We are now in a position to define our unitary correlation sets. As in the usual setting, we will consider a \textit{tensor product} model as well as a \textit{commuting} model. For $n \geq 2$ and a unitary $U=(U_{ij}) \in M_n(\cB(\cH))$ for some Hilbert space $\mathcal{H}$, we let $\mathfrak{B}_n(U)=\{I_{\mathcal{H}}\} \cup \{U_{ij},U_{ij}^*\}_{i,j=1}^n$. We define $UC_q(n_1,n_2)$ to be the set of all $(2n_1^2+1)(2n_2^2+1)$-tuples of the form $$(\langle (X \otimes Y)\xi,\xi \rangle)_{X \in \mathfrak{B}_{n_1}(U), \, Y \in \mathfrak{B}_{n_2}(V)},$$ where $U \in M_{n_1}(\mathcal{B}(\mathcal{H}_A))$ and $V \in M_{n_2}(\mathcal{B}(\mathcal{H}_B))$ are unitaries, $\mathcal{H}_A$ and $\mathcal{H}_B$ are finite-dimensional Hilbert spaces, and $\xi \in \mathcal{H}_A \otimes \mathcal{H}_B$ is a unit vector. We define the possibly larger set $UC_{qs}(n_1,n_2)$ to be the set of all correlations of the same form as for $UC_q(n_1,n_2)$, except that we allow the Hilbert spaces to be infinite-dimensional. For convenience, we will also define $UC_{qa}(n_1,n_2)$ to be the closure of $UC_q(n_1,n_2)$.
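Note that the index sets $\mathfrak{B}_{n_1}(U)$ and $\mathfrak{B}_{n_2}(V)$ consist of $2n_1^2+1$ and $2n_2^2+1$ operators, respectively; for instance, when $n_1=n_2=2$, an element of $UC_q(2,2)$ is a tuple of $9 \cdot 9=81$ complex numbers.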
For the commuting unitary correlation sets, we define $UC_{qc}(n_1,n_2)$ to be the set of all $(2n_1^2+1)(2n_2^2+1)$-tuples of the form $$(\langle XY\xi,\xi \rangle)_{X \in \mathfrak{B}_{n_1}(U), \, Y \in \mathfrak{B}_{n_2}(V)},$$ where $U \in M_{n_1}(\cB(\cH))$ and $V \in M_{n_2}(\cB(\cH))$ are unitaries, $\mathcal{H}$ is a Hilbert space, $\xi \in \mathcal{H}$ is a unit vector, and $XY=YX$ for all $X \in \mathfrak{B}_{n_1}(U)$ and $Y \in \mathfrak{B}_{n_2}(V)$. Since $U$ and $V$ are commuting unitaries, it follows that the $U_{ij}$'s and $V_{k\ell}$'s $*$-commute. For convenience, we denote by $\mathcal{G}_{n_1,n_2}$ the set of generators of $\mathcal{V}_{n_1} \otimes \mathcal{V}_{n_2}$ of the form $x \otimes y$, where $x \in \{1\} \cup \{u_{ij},u_{ij}^*\}_{i,j=1}^{n_1}$ and $y \in \{1\} \cup \{v_{k\ell},v_{k\ell}^*\}_{k,\ell=1}^{n_2}$. By the correspondence between GNS representations and states, $$UC_{qc}(n_1,n_2)=\{ (s(x))_{x \in \mathcal{G}_{n_1,n_2}}: s \in \mathcal{S}(\mathcal{U}_{nc}(n_1) \otimes_{\max} \mathcal{U}_{nc}(n_2)) \}.$$ By Proposition \ref{vncommutingorderembedding}, the inclusion $\mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2} \subseteq \mathcal{U}_{nc}(n_1) \otimes_{\max} \mathcal{U}_{nc}(n_2)$ is a complete order embedding. Therefore, we may also write $$UC_{qc}(n_1,n_2)=\{(s(x))_{x \in \mathcal{G}_{n_1,n_2}}: s \in \mathcal{S}(\mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2})\}.$$ It is not hard to see that $$UC_q(n_1,n_2)=\{(s(x))_{x \in \mathcal{G}_{n_1,n_2}}: s \in \text{Fin}(\mathcal{U}_{nc}(n_1) \otimes_{\min} \mathcal{U}_{nc}(n_2))\}.$$ These unitary correlation sets have similar properties to the quantum correlation sets. \begin{pro} \label{uccontainments} For every $n_1,n_2 \geq 2$, $$UC_q(n_1,n_2) \subseteq UC_{qs}(n_1,n_2) \subseteq UC_{qa}(n_1,n_2) \subseteq UC_{qc}(n_1,n_2),$$ and each of these sets is convex. Moreover, $UC_{qc}(n_1,n_2)$ is closed. \end{pro} \begin{proof} Since the state space of any operator system is convex, it is easy to see that each set above is convex. Clearly $UC_q(n_1,n_2) \subseteq UC_{qs}(n_1,n_2)$. Every element of $UC_{qs}(n_1,n_2)$ corresponds to a state on $\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2}$, which extends to a state on $\mathcal{U}_{nc}(n_1) \otimes_{\min} \mathcal{U}_{nc}(n_2)$ by the Hahn-Banach theorem. By Theorem \ref{EL}, the set $\text{Fin}(\mathcal{U}_{nc}(n_1) \otimes_{\min} \mathcal{U}_{nc}(n_2))$ is $w^*$-dense in $\mathcal{S}(\mathcal{U}_{nc}(n_1) \otimes_{\min} \mathcal{U}_{nc}(n_2))$, so that each element of $UC_{qs}(n_1,n_2)$ is also in $UC_{qa}(n_1,n_2)$. To show that $UC_{qa}(n_1,n_2) \subseteq UC_{qc}(n_1,n_2)$, it suffices to show that $UC_{qc}(n_1,n_2)$ is closed. To that end, let $((s_p(x))_{x \in \mathcal{G}_{n_1,n_2}})_{p=1}^{\infty}$ be a convergent sequence in $UC_{qc}(n_1,n_2)$, where $(s_p)_{p=1}^{\infty} \subseteq \mathcal{S}(\mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2})$. The mapping $s:\mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2} \to \mathbb{C}$ given by $s(x)=\lim_{p \to \infty} s_p(x)$ for all $x \in \mathcal{G}_{n_1,n_2}$ extends to a linear functional. It follows that $s=w^*$-$\lim_{p \to \infty} s_p$. Since the state space of an operator system is $w^*$-closed, we see that $s \in \mathcal{S}(\mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2})$, so that $UC_{qc}(n_1,n_2)$ is closed. \end{proof} Before we link these unitary correlation sets to Connes' embedding problem, it will be helpful to have a better description of $UC_{qa}(n_1,n_2)$.
\begin{lem} \label{ucqa} For each $n_1,n_2 \geq 2$, $$UC_{qa}(n_1,n_2)=\{(s(x))_{x \in \mathcal{G}_{n_1,n_2}}: s \in \mathcal{S}(\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2})\}.$$ \end{lem} \begin{proof} Note that $\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2}$ is completely order isomorphic to its inclusion in $\mathcal{U}_{nc}(n_1) \otimes_{\min} \mathcal{U}_{nc}(n_2)$. Since $\mathcal{S}(\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2})$ is $w^*$-closed, the proof of Proposition \ref{uccontainments} shows that $$UC_{qa}(n_1,n_2) \subseteq \{(s(x))_{x \in \mathcal{G}_{n_1,n_2}}: s \in \mathcal{S}(\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2})\}.$$ Conversely, let $s \in \mathcal{S}(\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2})$. By the Hahn-Banach theorem we may extend $s$ to a state on $\mathcal{U}_{nc}(n_1) \otimes_{\min} \mathcal{U}_{nc}(n_2)$. Since $\mathcal{U}_{nc}(n_1) \otimes_{\min} \mathcal{U}_{nc}(n_2)$ is RFD, by Theorem \ref{EL}, $s$ can be approximated pointwise by elements of $\text{Fin}(\mathcal{U}_{nc}(n_1) \otimes_{\min} \mathcal{U}_{nc}(n_2))$. This yields a net of states whose restrictions to the set $\mathcal{G}_{n_1,n_2}$ give elements of $UC_q(n_1,n_2)$. Since this net of states converges pointwise to $s$, we see that $(s(x))_{x \in \mathcal{G}_{n_1,n_2}} \in UC_{qa}(n_1,n_2)$, as required. \end{proof} Lemma \ref{ucqa} allows us to formulate the problem of deciding whether $UC_{qa}(n_1,n_2)=UC_{qc}(n_1,n_2)$ in terms of $\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2}$ and $\mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2}$. \begin{lem} \label{utsirelsonorderiso} Let $n_1,n_2 \geq 2$. Then $UC_{qa}(n_1,n_2)=UC_{qc}(n_1,n_2)$ if and only if $\text{id}:\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2} \to \mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2}$ is an order isomorphism. \end{lem} \begin{proof} If $UC_{qa}(n_1,n_2)=UC_{qc}(n_1,n_2)$, then by linearity the states on $\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2}$ and $\mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2}$ are the same. An element in an operator system is positive if and only if its image under each state is positive (see, for example, \cite[Chapter 13]{paulsen02}), so we conclude that $C_1^{\min}(\mathcal{V}_{n_1},\mathcal{V}_{n_2})=C_1^{\text{comm}}(\mathcal{V}_{n_1},\mathcal{V}_{n_2})$. Therefore, $\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2}$ and $\mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2}$ must be order isomorphic. Conversely, if $\text{id}:\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2} \to \mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2}$ is an order isomorphism, then the positive elements are the same in the two operator systems, so the state spaces are identical. Restricting to the set $\mathcal{G}_{n_1,n_2}$, we obtain the equality $UC_{qa}(n_1,n_2)=UC_{qc}(n_1,n_2)$. \end{proof} We are now ready for the main result of this section. \begin{mythe} \label{unitarycorrelations} The following are equivalent. \begin{enumerate} \item Connes' embedding problem has a positive answer. \item $UC_{qa}(n_1,n_2)=UC_{qc}(n_1,n_2)$ for all $n_1,n_2 \geq 2$. \item $UC_{qa}(n,n)=UC_{qc}(n,n)$ for all $n \geq 2$. \item $C_{qa}(n,m)=C_{qc}(n,m)$ for all $n,m \geq 2$. \item $C_{qa}(n,2)=C_{qc}(n,2)$ for all $n \geq 2$. \end{enumerate} \end{mythe} \begin{proof} Suppose (1) holds.
By \cite[Theorem 9.1]{quotients}, Kirchberg's conjecture is equivalent to every $(\min,\text{er})$-nuclear operator system being $(\text{el},c)$-nuclear. As each $\mathcal{V}_n$ is $(\min,\text{er})$-nuclear, it follows that $\mathcal{V}_{n_1} \otimes_{\min} \mathcal{V}_{n_2}=\mathcal{V}_{n_1} \otimes_c \mathcal{V}_{n_2}$ for all $n_1,n_2 \geq 2$. Hence, these operator systems are order isomorphic, so that $UC_{qa}(n_1,n_2)=UC_{qc}(n_1,n_2)$ for all $n_1,n_2 \geq 2$. Clearly (2) implies (3) and (4) implies (5). The implication $(5) \implies (1)$ was obtained by Ozawa \cite[Theorem 36]{ozawa}. Hence, we need only show that (3) implies (5). By Lemma \ref{utsirelsonorderiso}, condition (3) implies that $\mathcal{V}_n \otimes_{\min} \mathcal{V}_n$ and $\mathcal{V}_n \otimes_c \mathcal{V}_n$ are order isomorphic. By Proposition \ref{ncboxfactorsthroughvn}, $NC(n)$ is a retract of $\mathcal{V}_n$. Using Lemma \ref{retract}, the identity map $\text{id}:NC(n) \otimes_{\min} NC(n) \to NC(n) \otimes_c NC(n)$ is $1$-positive. Since $\min \leq c$, we see that $NC(n) \otimes_{\min} NC(n)$ and $NC(n) \otimes_c NC(n)$ are order isomorphic for all $n \geq 2$. Applying Proposition \ref{qaqcorderiso}, we obtain the equality $C_{qa}(n,2)=C_{qc}(n,2)$, as desired. \end{proof} Some striking differences arise between the quantum correlation sets and the unitary correlation sets. It is known that $C_{qa}(2,2)=C_{qc}(2,2)$ (see, for example, \cite{ozawa}). The question of whether $C_q(n,m)=C_{qc}(n,m)$ for all $n,m \geq 2$ was open until Slofstra \cite{slofstra} recently proved that there are large $n,m$ for which $C_{qs}(n,m) \neq C_{qc}(n,m)$. Similarly, it was unknown whether $C_{qs}(n,m)$ is closed for all $n,m \geq 2$, until Slofstra recently provided a counterexample \cite{slofstra17} for large $n,m$. In contrast, it is now known that $UC_{qs}(2,2) \subsetneq UC_{qc}(2,2)$. Indeed, in \cite{CLP} it is shown that there is a state $s:\mathcal{U}_{nc}(2) \otimes_{\min} \mathcal{U}_{nc}(2) \to \mathbb{C}$ that cannot arise from a finite-dimensional representation of $\mathcal{U}_{nc}(2) \otimes_{\min} \mathcal{U}_{nc}(2)$. In fact, it is shown that this state cannot arise from a spatial representation of $\mathcal{U}_{nc}(2) \otimes_{\min} \mathcal{U}_{nc}(2)$ on a tensor product of Hilbert spaces, even if the Hilbert spaces are infinite-dimensional. Since $C_n^{\text{comm}}(\mathcal{U}_{nc}(2),\mathcal{U}_{nc}(2)) \subseteq C_n^{\min}(\mathcal{U}_{nc}(2),\mathcal{U}_{nc}(2))$, $s$ is also a state on $\mathcal{U}_{nc}(2) \otimes_{\max} \mathcal{U}_{nc}(2)$. Hence we obtain an element of $UC_{qc}(2,2)$ that cannot be in $UC_{qs}(2,2)$. This shows that $UC_{qs}(2,2) \subsetneq UC_{qc}(2,2)$. Moreover, it is shown in \cite{CLP} that $s$ can be approximated in the $w^*$-topology by states on $\mathcal{U}_{nc}(2) \otimes_{\min} \mathcal{U}_{nc}(2)$ corresponding to elements of $UC_q(2,2)$. Therefore, $UC_q(2,2)$ and $UC_{qs}(2,2)$ are not even closed. The methods in \cite{CLP} can be adapted in a natural way to show that $UC_{qs}(n,n) \subsetneq UC_{qc}(n,n)$ for all $n \geq 2$, and that $UC_q(n,n)$ and $UC_{qs}(n,n)$ are not closed. \end{document}
math
\begin{document} \title{Conflict-free (vertex)-connection numbers of graphs with small diameters\footnote{Supported by NSFC No.11871034, 11531011 and NSFQH No.2017-ZJ-790.}} \author{ \small Xueliang Li$^{1,2}$, Xiaoyu Zhu$^1$\\ \small $^1$Center for Combinatorics and LPMC\\ \small Nankai University, Tianjin 300071, China\\ \small Email: [email protected]; [email protected]\\ \small $^2$School of Mathematics and Statistics\\ \small Qinghai Normal University, Xining, Qinghai 810008, China } \date{} \maketitle \begin{abstract} A path in an(a) edge(vertex)-colored graph is called a conflict-free path if there exists a color used on only one of its edges(vertices). An(A) edge(vertex)-colored graph is called conflict-free (vertex-)connected if for each pair of distinct vertices, there is a conflict-free path connecting them. For a connected graph $G$, the conflict-free (vertex-)connection number of $G$, denoted by $cfc(G)$ (or $vcfc(G)$), is defined as the smallest number of colors that are required to make $G$ conflict-free (vertex-)connected. In this paper, we first give the exact value $cfc(T)$ for any tree $T$ with diameter $2,3$ or $4$. Based on this result, the conflict-free connection number is determined for any graph $G$ with $diam(G)\leq 4$ except for those graphs $G$ with diameter $4$ and $h(G)=2$. In this case, we give some graphs with conflict-free connection numbers $2$ and $3$, respectively. For the conflict-free vertex-connection number, the exact value $vcfc(G)$ is determined for any graph $G$ with $diam(G)\leq 4$. {\flushleft\bf Keywords}: conflict-free (vertex-)connection coloring; conflict-free (vertex-)connection number; diameter {\flushleft\bf AMS subject classification 2010}: 05C15, 05C40, 05C05. \end{abstract} \section{Introduction} In this paper, all graphs considered are simple, finite and undirected. We refer to the book \cite{BM} for notation and terminology in graph theory not defined here. Among all subjects of graph theory, chromatic theory is no doubt the most arresting. In this paper, we mainly deal with the conflict-free (vertex-)connection coloring of graphs. In \cite{ELRS}, Even et al. first introduced the hypergraph version of conflict-free (vertex-)coloring. This coloring emerged from a practical requirement: it was motivated by the problem of assigning frequencies to different base stations in cellular networks. Since then, this coloring has received wide attention due to its practical application value. Afterwards, Czap et al. introduced the concept of conflict-free connection coloring in \cite{CJV}. In an edge-colored graph, a path is called \emph{conflict-free} if there is at least one color used on exactly one of its edges. This edge-colored graph is said to be \emph{conflict-free connected} if any pair of distinct vertices of the graph is connected by a conflict-free path, and the coloring is called a {\it conflict-free connection coloring}. The \emph{conflict-free connection number} of a connected graph $G$, denoted by $cfc(G)$, is defined as the smallest number of colors required to make $G$ conflict-free connected. There are many results on this topic; for more details, please refer to \cite{CDHJLS,CHL,CJLZ,CJV,DLLMZ}. It is easy to see that $1\leq cfc(G)\leq n-1$ for a connected graph $G$. Motivated by the above concept, Li et al. \cite{LZZMZJ} introduced the concept of \emph{conflict-free vertex-connection}.
A path in a vertex-colored graph is called \emph{conflict-free} if there is a color used on exactly one of its vertices. This vertex-colored graph is said to be \emph{conflict-free vertex-connected} if any two distinct vertices of the graph are connected by a conflict-free path, and the coloring is called a {\it conflict-free vertex-connection coloring}. The \emph{conflict-free vertex-connection number} of a connected graph $G$, denoted by $vcfc(G)$, is defined as the smallest number of colors required to make $G$ conflict-free vertex-connected. In \cite{DS,LZZMZJ,LW}, various results were given concerning this concept. It is already known that $2\leq vcfc(G)\leq \lceil \log_2(n+1)\rceil$. We use $S_n$ to denote the \emph{star graph} on $n$ vertices and denote by $T(n_1,n_2)$ the \emph{double star} in which the degrees of its two (adjacent) center vertices are $n_1+1$ and $n_2+1$, respectively. For a connected graph $G$, the \emph{distance} between two vertices $u$ and $v$ is the minimum length of all paths between them, and we write it as $d_G(u,v)$. The \emph{eccentricity} of a vertex $v$ of $G$ is defined by $ecc_G(v)=\max_{u\in V(G)}~d_G(u,v)$. The {\it diameter} of $G$ is defined by $diam(G)=\max_{v\in V(G)}~ecc_G(v)$ while the {\it radius} of $G$ is defined by $rad(G)=\min_{v\in V(G)}~ecc_G(v)$. These parameters are closely related to the structure of a graph and are very significant in the study of graphs. This stimulates our interest in studying the conflict-free (vertex-)connection of graphs with small diameters. In this paper, we first give the exact value $cfc(T)$ for any tree $T$ with diameter $2,3$ or $4$. Based on this result, the conflict-free connection number is determined for any graph $G$ with $diam(G)\leq 4$ except for those graphs $G$ with diameter $4$ and $h(G)=2$. In this case, we give some graphs with conflict-free connection numbers $2$ and $3$, respectively. For the conflict-free vertex-connection number, the exact value $vcfc(G)$ is determined for any graph $G$ with $diam(G)\leq 4$. \section{$cfc$-values for trees with diameters $2,3$ and $4$}\label{tree} For a connected graph $G$, let $X$ denote the set of cut-edges of $G$, and let $C(G)$ denote the subgraph induced by the set $X$. It is easy to see that every component of $C(G)$ is a tree and $C(G)$ is a forest. Let $h(G)=\max~\{cfc(T):T ~\text{is a component of}~ C(G)\}$. In \cite{CJV} and \cite{CHL}, the authors showed the following result. \begin{lem}\cite{CJV}\label{cfcbound} For a connected graph $G$, we have $h(G)\leq cfc(G)\leq h(G) + 1.$ Moreover, the bounds are sharp. \end{lem} So, $h(G)$ is a crucial parameter to determine the conflict-free connection number of a connected graph $G$. Moreover, by the definition of $h(G)$, determining the value of $h(G)$ depends on determining the conflict-free connection numbers of trees. Therefore, in this section we first give the exact values of the conflict-free connection numbers of trees with diameters $2,3$ and $4$. \begin{thm} For a tree $T$ with diameter $2$ or $3$, we have $cfc(T)=\Delta(T)$. \end{thm} \noindent {\it Proof.} It is easy to see that $T$ is a star $S_n$ if and only if it has diameter $2$, and a double star $T(n_1,n_2)(n_1\geq n_2)$ if and only if it has diameter $3$. For the former case, any two edges of $T$ must be colored differently in any conflict-free connection coloring, and thus $cfc(T)=\Delta(T)$.
In the latter case, we obtain $cfc(T)=n_1+1=\Delta(T)$ by a similar analysis.\qed For a tree $T$ of diameter $4$, we denote by $u$ the unique vertex with eccentricity two. The neighbors of $u$ are pendant vertices $w_1,w_2,\cdots,w_\ell$ and $v_1,v_2,\cdots,v_k$ with degrees $p_1\geq p_2\geq\cdots\geq p_k$. Certainly, $k+\ell=d(u)$. In every conflict-free connection coloring $c$ of $T$, the incident edges of every vertex must receive different colors \ding{172}. Without loss of generality, set $c(uv_i)=i(1\leq i\leq k)$ and $c(uw_j)=k+j(1\leq j\leq \ell)$. Observe that if one incident edge of $v_i$ is assigned color $j$, then color $i$ cannot appear on any edge incident with $v_j(1\leq i,j\leq k)$ \ding{173}. Actually, we are seeking the minimum number of colors satisfying \ding{172} and \ding{173}. Next we define a vector class $S_r(r\in N^{+})$. We say that an $r$-tuple $(s_1,s_2,s_3,\cdots,s_r)(s_i(1\leq i\leq r)\in N)$ belongs to $S_r$ if we can find a sequence of distinct $2$-tuples $(1,i_{1,1}),(1,i_{1,2}),\cdots,(1,i_{1,s_1}),(2,i_{2,1}),\cdots,(2,i_{2,s_2}),\cdots,(r,i_{r,s_r})$ whose components are all from $[r]$, satisfying: (1) the two components of every $2$-tuple are different; (2) $(i,j)$ and $(j,i)(1\leq i,j\leq r)$ cannot both appear. Note that if $(s_1,s_2,s_3,\cdots,s_r)\in S_r$ then any permutation of its components also belongs to $S_r$. Thus we may suppose $s_1\geq s_2\geq s_3\geq\cdots\geq s_r$. \begin{lem}\label{tuple} An $r$-tuple $(s_1,s_2,s_3,\cdots,s_r)(s_i(1\leq i\leq r)\in N)$ belongs to $S_r$ if and only if $\sum\limits_{i=1}^{j}s_i\leq \frac{(2r-1-j)j}{2}(1\leq j\leq r)$. \end{lem} \noindent {\it Proof.} First we show the necessity. If $(s_1,s_2,s_3,\cdots,s_r)\in S_r$, then there is a corresponding sequence of $2$-tuples according to the definition. Suppose that neither $(i,j)$ nor $(j,i)$ appears for some $1\leq i,j\leq r$ with $i\neq j$. Then add one of them to the sequence. Repeat this operation until nothing can be added. Finally there are $\frac{(r-1)r}{2}$ $2$-tuples and the corresponding $r$-tuple is $(s'_1,s'_2,s'_3,\cdots,s'_r)$. Assume, to the contrary, that there exists a $j$ $(1\leq j\leq r)$ such that $\sum_{i=1}^{j}s_i> \frac{(2r-1-j)j}{2}$. Then $\sum_{i=1}^{j}s'_i\geq \sum_{i=1}^{j}s_i> \frac{(2r-1-j)j}{2}$. Obviously, $j\neq r$. Besides, we have $\sum_{i=j+1}^{r}s'_i\geq \frac{(r-j)(r-j-1)}{2}$ simply by checking the sequence. However, this implies that $\frac{(r-1)r}{2}=\sum_{i=1}^{r}s'_i=\sum_{i=1}^{j}s'_i+\sum_{i=j+1}^{r}s'_i>\frac{(2r-1-j)j}{2}+\frac{(r-j)(r-j-1)}{2}=\frac{(r-1)r}{2}$, a contradiction. Thus the necessity holds. We prove the sufficiency by induction on $r$. When $r=0,1,2$, it is easy to check that if $\sum_{i=1}^{j}s_i\leq \frac{(2r-1-j)j}{2}(1\leq j\leq r)$ for $(s_1,s_2,s_3,\cdots, s_r)$, then this $r$-tuple belongs to $S_r$. Assume that the sufficiency holds for $r=p$. Consider the case $r=p+1$. For $(s_1,s_2,s_3,\cdots,s_{p+1})$, suppose $s_1=p-q$. We distinguish two cases. \textbf{Case 1.} $s_{q+1}>s_{q+2}$. In this case, we prove that $(s_2-1,s_3-1,\cdots,s_{q+1}-1, s_{q+2},\cdots,s_{p+1})\in S_p$. When $2\leq j\leq q+1$, we have $\sum_{i=2}^{j}(s_i-1)\leq (j-1)(p-q-1)< \frac{(2p-j)(j-1)}{2}$. When $q+2\leq j\leq p+1$, $\sum_{i=2}^{q+1}(s_i-1)+\sum_{i=q+2}^{j}s_i=\sum_{i=1}^{j}s_i-p\leq \frac{(2p-j+1)j}{2}-p=\frac{(2p-j)(j-1)}{2}$. Therefore, $(s_2-1,s_3-1,\cdots,s_{q+1}-1, s_{q+2},\cdots,s_{p+1})\in S_p$, and so there exists a sequence for it.
By adding $(1,p+1),(2,p+1),\cdots,(q,p+1),(p+1,q+1),(p+1,q+2),\cdots,(p+1,p)$ to this sequence, we get a sequence satisfying (1), (2) for $(s_2,\cdots,s_{p+1},s_1)$, implying that $(s_1,s_2,\cdots,s_{p+1})$ belongs to $S_{p+1}$. \textbf{Case 2.} $s_{q+1}=s_{q+2}$. Suppose $s_r>s_{r+1}=s_{r+2}=\cdots=s_{q+1}=s_{q+2}=\cdots=s_t>s_{t+1}$. Again, we prove that $s'=(s'_1,s'_2,\cdots,s'_p)=(s_2-1,s_3-1,\cdots,s_r-1,s_{r+1},\cdots,s_{t-q+r-1},s_{t-q+r}-1,\cdots,s_t-1,s_{t+1},\cdots,s_{p+1})\in S_p$. Similar to the discussion in \textbf{Case 1}, for $1\leq j\leq r-1$ or $t-1\leq j\leq p$, $\sum_{i=1}^{j}s'_i\leq \frac{(2p-1-j)j}{2}$. Thus if $s'\notin S_p$, the first $j$ such that $\sum_{i=1}^{j-1}s'_i\leq \frac{(2p-j)(j-1)}{2}$ and $\sum_{i=1}^{j}s'_i> \frac{(2p-j-1)j}{2}$ must appear between $r$ and $t-2$. Then we also deduce that $s'_j>p-j$, so $s'_i\geq p-j(j+1\leq i\leq t-1)$. However, this leads to $\frac{(2(p+1)-1-t)t}{2}\geq \sum_{i=1}^{t}s_i=\sum_{i=1}^{t-1}s'_i+p=\sum_{i=1}^{j}s'_i+\sum_{i=j+1}^{t-1}s'_i+p>\frac{(2p-j-1)j}{2}+(t-1-j)(p-j)+p>\frac{(2(p+1)-1-t)t}{2}$, a contradiction. Thus $s'\in S_p$. By a similar analysis as in \textbf{Case 1}, we can check that $(s_1,s_2,\cdots,s_{p+1})\in S_{p+1}$. The proof is thus complete.\qed We call the colors from $[k]$ the old colors. In any conflict-free connection coloring of $T$, we denote by $h_i(1\leq i\leq k)$ the number of old colors used on the edges incident with $v_i$ except $uv_i$. Obviously $(h_1,h_2,\cdots,h_k)\in S_k$. In order to add as few new colors as possible, we are actually seeking the number $a=\max\{\min\{\max\{p_i-1-h_i:1\leq i\leq k\}:(h_1,h_2,\cdots,h_k)\in S_k\},0\}$. Let $c_i=p_i-k+i-1(1\leq i\leq k)$, $b=\max\{\lceil \max\{\sum_{i=1}^{j}\frac{c_i}{j}:1\leq j\leq k\}\rceil,0\}$. Suppose that $\max\{\sum_{i=1}^{j}\frac{c_i}{j}:1\leq j\leq k\}$ is attained at $j=t$. Assume $a<b$. Then $a<\sum_{i=1}^{t}\frac{c_i}{t}$. Thus there exists a $k$-tuple $(h_1,h_2,\cdots,h_k)\in S_k$ such that $h_i\geq p_i-1-a>p_i-1-\sum_{i=1}^{t}\frac{c_i}{t}$. However, this implies that $\sum_{i=1}^{t}h_i>\sum_{i=1}^{t}(p_i-1)-\sum_{i=1}^{t}c_i=\sum_{i=1}^{t}(k-i)=\frac{(2k-1-t)t}{2}$, a contradiction to $(h_1,h_2,\cdots,h_k)\in S_k$ by Lemma \ref{tuple}. Thus $a\geq b$. Next, we only need to construct $(h_1,h_2,\cdots,h_k)\in S_k$ with $b=\max\{p_i-1-h_i:1\leq i\leq k\}$. Let $h_i=\max\{p_i-1-b,0\}$; it can easily be verified that $(h_1,h_2,\cdots,h_k)$ satisfies our demand. As a result, $a=b$. Combining Lemma \ref{tuple} with the above analysis, we get the following result. \begin{thm} Let $T$ be a tree with diameter $4$, and denote by $u$ its unique vertex with eccentricity two. The neighbors of $u$ are pendant vertices $w_1,w_2,\cdots,w_\ell$ and $v_1,v_2,\cdots,v_k$ with degrees $p_1\geq p_2\geq\cdots\geq p_k$. Then $cfc(T)=\max\{k+b,d(u)\}$, where $b=\max\{\lceil \max\{\sum_{i=1}^{j}\frac{c_i}{j}:1\leq j\leq k\}\rceil,0\}$ and $c_i=p_i-k+i-1(1\leq i\leq k)$. \end{thm} \section{Results for graphs with diameters $2,3$ and $4$} Based on the results in the above section for trees with diameters $2,3$ and $4$, we are now ready to determine $cfc(G)$ and $vcfc(G)$ for graphs with diameters $2,3$ and $4$. First, we present some auxiliary lemmas that will be used in the sequel. \begin{lem}\cite{LZZMZJ}\label{vcfc=2} For a connected graph $G$ of order at least $3$, we have that $vcfc(G)=2$ if and only if $G$ is $2$-connected or $G$ has only one cut-vertex. \end{lem} \begin{lem}\cite{LZZMZJ} For a connected graph $G$, we have $vcfc(G)\leq rad(G)+1$.
\end{lem} For the conflict-free connection of graphs, the following results have already been obtained. \begin{lem}\cite{CJV} For a noncomplete $2$-connected graph $G$, we have $cfc(G)=2$. \end{lem} \begin{lem}\cite{CHL}\label{2-edge-connected} For a noncomplete $2$-edge-connected graph $G$, we have $cfc(G)=2$. \end{lem} \begin{lem}\cite{CJV}\label{order2} If $G$ is a connected graph and $C(G)$ is a linear forest each component of which has order $2$, then $cfc(G)=2$. \end{lem} \begin{lem}\cite{CHL}\label{unique} Let $G$ be a connected graph with $h(G)\geq 2$. If there exists a unique component $T$ of $C(G)$ such that $cfc(T)=h(G)$, then $cfc(G)=h(G)$. \end{lem} \begin{rem} We have calculated the exact value $cfc(T)$ for any tree $T$ with $diam(T)\leq 4$ in \textbf{Section \ref{tree}}. If $G$ is a connected graph with $diam(G)\leq 4$, then any component of $C(G)$ must be a tree with diameter no more than four. Thus we can calculate $h(G)$ according to the theorems in \textbf{Section \ref{tree}}. \end{rem} For graphs with diameter $2$, we have the following result. \begin{thm}\label{diameter2} For a connected graph $G$ with diameter $2$, we have $vcfc(G)=2$ and $cfc(G)=\max~\{2,h(G)\}$. \end{thm} \noindent {\it Proof.} Since $G$ has diameter $2$, it is easy to see that $G$ has at most one cut-vertex. According to Lemma \ref{vcfc=2}, $vcfc(G)=2$. If $G$ is $2$-edge-connected, then $cfc(G)=2$ by Lemma \ref{2-edge-connected}. Otherwise, $C(G)$ must be a star, and thus $cfc(G)=\max~\{2,h(G)\}$ by Lemmas \ref{order2} and \ref{unique}.\qed For graphs with diameter $3$, we have the following result. Recall that a vertex in a block of a graph $G$ is called an \emph{internal vertex} if it is not a cut-vertex of $G$. \begin{thm}\label{diameter3} For a connected graph $G$ with diameter $3$, we have that $vcfc(G)\leq3$ and $cfc(G)=\max\{2,h(G)\}$ except for the graph depicted in \textbf{Figure 1}, which has conflict-free connection number $h(G)+1=3$. \end{thm} \noindent {\it Proof.} After removing all internal vertices of the end blocks of $G$, it is easy to check that at most one block is left. Otherwise, if there are two blocks $B_1,B_2$, we can always find two other blocks $C_1,C_2$ such that $V(B_i)\cap V(C_i)\neq \emptyset$ and $V(B_i)\cap V(C_{3-i})= \emptyset$ $(i=1,2)$, and for any two internal vertices $u\in V(C_1),v\in V(C_2)$, every $u$-$v$ path is a $u$-$C_1$-$B_1$-$B_2$-$C_2$-$v$ path. However, this implies that the distance between $u$ and $v$ is at least four, contradicting the fact that $diam(G)=3$. Suppose that $G$ contains no more than one cut-vertex. Then $vcfc(G)=2$ according to Lemma \ref{vcfc=2}. Otherwise, the remaining block $B_1$ must contain all cut-vertices of $G$, and we assign the color $3$ to one of them and the color $2$ to all remaining vertices of $V(B_1)$. All other vertices of $G$ share the color $1$. It is easy to check that $G$ is conflict-free vertex-connected under this coloring. As a result, $vcfc(G)\leq 3$. The conflict-free connection number of $G$ has been determined by Lemmas \ref{2-edge-connected}, \ref{order2} and \ref{unique} when $h(G)\leq 1$, or when $h(G)\geq 2$ and there exists a unique component $T$ of $C(G)$ such that $cfc(T)=h(G)$. Thus we only need to consider the remaining cases. This implies that $B_1$ exists and is nontrivial. Besides, every component of $C(G)$ is a star with its center attached to $B_1$. Let $h(G)=k$.
If $k\geq3$, since $cfc(G)\geq k$, to prove $cfc(G)=k$, we only need to provide a conflict-free connection $k$-coloring of $G$. For each component of $C(G)$, give it a conflict-free connection coloring from $[k]$. As for each nontrivial block, give two of its edges the colors $2$ and $3$, respectively, and all others the color $1$. It can be verified that $G$ is conflict-free connected in this way. When $k=2$, we denote by $n_1$ the number of vertices of $B_1$ and by $\ell$ the number of components of $C(G)$. If $\ell<n_1$, then there exists a vertex $v$ of $B_1$ not attached by any component of $C(G)$. Note that since $diam(G)=3$, the subgraph induced by the vertices each of which is attached by some component of $C(G)$ is complete. We only need to give a conflict-free connection $2$-coloring of $G$: The edges of each component of $C(G)$ receive different colors from $[2]$. Choose an edge $e$ of $B_1$ incident with $v$ and one edge from each of the other nontrivial blocks, then assign to them the color $2$. The remaining edges are given the color $1$. The checking process is omitted. For the case $\ell=n_1$, certainly $B_1$ is complete with vertices $v_1,v_2,\cdots,v_{n_1}$. Since $diam(G)=3$, for any end block of $G$, all its internal vertices are adjacent to the cut-vertex it contains. If $n_1\geq4$, we offer a conflict-free connection $2$-coloring of $G$: Assign different colors to the edges of each component of $C(G)$ from $[2]$; give color $2$ to all edges of the path $v_1v_2\cdots v_{n_1}$ and color $1$ to the remaining edges of $B_1$. Observe that each edge of $B_1$ with color $i(i\in[2])$ is contained in a triangle the other two edges of which receive distinct colors. Then pick one edge for each end block and give it color $2$. Other edges are given color $1$. The verification is similar. Suppose $n_1=3$ with at least one component of $C(G)$ being $P_2$. Choose one such component and give its edge the color $1$. Without loss of generality, assume that this edge is incident with $v_1\in V(B_1)$. Pick one edge of $B_1$ incident with $v_1$ and give it the color $2$; again, the other edges of $B_1$ share the same color $1$. We color the edges of the other components of $C(G)$ and the nontrivial blocks in the same way as in the last paragraph. Obviously, this is a conflict-free connection $2$-coloring for $G$. If $n_1=3$ and every component of $C(G)$ is a $P_3$, we show that two colors are not enough. Note that there are always two adjacent edges of $B_1$ sharing the same color if only two colors are used. Without loss of generality, suppose that the edges $v_3v_1,v_3v_2$ both have color $1$. Let $v_1u_1,v_2u_2$ have color $1$ and $v_1w_1,v_2w_2$ have color $2$, where these edges are all cut-edges. It is easy to check that there is no conflict-free path between $u_1$ and $u_2$ or between $w_1$ and $w_2$, no matter what color the edge $v_1v_2$ is assigned, a contradiction. Thus according to Lemma \ref{cfcbound}, $cfc(G)=h(G)+1=3$.\qed Finally, we study the conflict-free (vertex-)connection number of graphs with diameter $4$ in the next two results. \begin{thm}\label{diameter4} For a connected graph $G$ with diameter $4$, we have that $vcfc(G)\leq 3$, and $cfc(G)=2$ if $h(G)\leq 1$; $cfc(G)=h(G)$ if $h(G)\geq3$. \end{thm} \noindent {\it Proof.} Since $G$ has diameter $4$, after removing all internal vertices of the end blocks, the resulting graph has at most one cut-vertex. If there is none, we can give colors as we did in the proof of Theorem \ref{diameter3}.
Otherwise, give color $3$ to this cut-vertex $v_1$ and color $2$ to all vertices of blocks incident with $v_1$ except for $v_1$. Finally, assign color $1$ to all remaining vertices. Clearly, $G$ is conflict-free vertex-connected under this coloring. Let $h(G)=k$. If $k\leq 1$, the result follows from Lemmas \ref{2-edge-connected} and \ref{order2}. If $k\geq 3$, we assign to $E(G)$ $k$ colors as we did in the third paragraph of the proof of Theorem \ref{diameter3}. For every pair of distinct vertices $u,v\in V(G)$, any path between them contains the same set of cut-edges. If they belong to the same component of $C(G)$, the conflict-free path is clear. Otherwise, since $diam(G)=4$, there are at most three cut-edges on the path. Thus at least one of the colors $2$ and $3$ (say $2$) appears at most once. If it does not appear, then we can choose a $u$-$v$ path passing through the $2$-colored edge of a nontrivial block and avoiding all other such edges of the nontrivial blocks it goes through. Otherwise, the desired path is one avoiding all $2$-colored edges of the nontrivial blocks it passes through. Thus, $k$ colors are enough in this case.\qed \begin{cor} For a connected graph $G$ with $diam(G)\leq4$, we have that $vcfc(G)=3$ if and only if $G$ has more than one cut-vertex. \end{cor} \noindent {\it Proof.} The result is an immediate corollary of Lemma \ref{vcfc=2}, Theorems \ref{diameter2}, \ref{diameter3} and \ref{diameter4}.\qed \begin{rem} If $k=2$, according to Lemma \ref{cfcbound}, we have $2\leq cfc(G)\leq3$. The situation in this case is complicated. Suppose there are exactly $\ell$ components of $C(G)$ with conflict-free connection number 2. Then for each $\ell\geq2$, we give some graphs of diameter $4$ with conflict-free connection numbers 2 and 3, respectively. \begin{figure} \caption{The graph $G_\ell$ with $cfc(G_\ell)=2$ $(\ell\geq2)$.} \end{figure} See \textbf{Figure 2} for the graph $G_\ell$ with $cfc(G_\ell)=2$ $(\ell\geq2)$. Each $v_i(2\leq i\leq \ell+1)$ of $G_\ell$ is attached to a $P_3$. We color the two edges of each such $P_3$ with the colors $1$ and $2$, respectively. Besides, give color $1$ to $u_1v_i$ and $2$ to $u_2v_i(3\leq i\leq \ell+1)$. The colors of the other edges are labelled in \textbf{Figure 2}. It is easy to check that this is a conflict-free connection $2$-coloring for $G_\ell$. \begin{figure} \caption{The graph $H_\ell$ with $cfc(H_\ell)=3$ $(\ell\geq2)$.} \end{figure} The graph $H_\ell$ with $cfc(H_\ell)=3$ $(\ell\geq2)$ is depicted in \textbf{Figure 3}. Suppose, to the contrary, there exists a conflict-free connection $2$-coloring $c$ for $H_\ell$. When $\ell=2$, without loss of generality, let $c(x_1x_2)=c(x_3x_4)=c(x_6x_7)=1, c(x_2x_3)=c(x_7x_8)=2$. Then if $c(x_3x_7)=1$, to ensure a conflict-free path between $x_4$ and $x_6$, we must have $c(x_3x_5)\neq c(x_5x_7)$. However, there is no conflict-free path between $x_1$ and $x_8$, a contradiction. The case when $c(x_3x_7)=2$ can be dealt with similarly. Thus $cfc(H_2)=3$. With the same method, we can deduce that $cfc(H_3)=3$. For $H_\ell(\ell\geq4)$, without loss of generality, set $c(v_1w_1)=c(v_2w_3)=1, c(v_1w_2)=c(v_2w_4)=2$. Suppose there exist two monochromatic paths (say $u_1v_1u_2$ and $u_1v_2u_2$) with the same color between $u_1$ and $u_2$. Then there is no conflict-free path between $w_1$ and $w_3$ or between $w_2$ and $w_4$, contradicting our assumption. If these two monochromatic paths receive different colors, then there is no $w_1$-$w_4$ conflict-free path, a contradiction. Assume that $c(u_1v_1)=c(u_1v_2)\neq c(u_2v_1)=c(u_2v_2)$.
To ensure the existence of conflict-free paths between $w_1$ and $w_3$ and between $w_2$ and $w_4$, there must be two monochromatic $u_1$-$u_2$ paths with different colors, a contradiction to our analysis above. Therefore, $u_1$ and $u_2$ are connected by at most three distinct paths, contradicting $\ell\geq4$. As a result, $cfc(H_\ell)=3$ $(\ell\geq4)$. \end{rem} \end{document}
math
\begin{document} \title{Fuzzy general linear methods} \author{Javad Farzi, Afsaneh Moradi, \\ Department of Mathematics, Sahand University Of Technology, P.O. Box 51335-1996, Tabriz, IRAN.} \email[Corresponding author]{[email protected]} \maketitle \begin{abstract} This paper concerns the development of the most general schemes, the so-called fuzzy general linear methods (FGLM), for solving fuzzy differential equations. The general linear methods (GLM) for ordinary differential equations lie between the two extreme extensions (linear multistep and Runge-Kutta methods) of the one-step Euler method. In this paper we develop the FGLM framework of the Adams schemes for solving fuzzy differential equations under strongly generalized differentiability. The stability, consistency and convergence results will be addressed. Numerical results and the order of accuracy are presented to show the efficiency and accuracy of the novel scheme. \end{abstract} \keywords{ General linear methods, Adams methods, Strongly generalized differentiability, Generalized fuzzy derivative, Fuzzy differential equations.} \subjclass{34A07} \section{Introduction} Many problems in science and engineering have some uncertainty in their nature, and fuzzy differential equations are appropriate tools for modeling such problems \cite{F1}. The interpretation of a fuzzy differential equation in the sense of generalized differentiability allows one to fuzzify appropriate numerical methods for ordinary differential equations and apply them to fuzzy differential equations. Approaches based on the Hukuhara derivative, the extension principle or differential inclusions have some disadvantages. The main drawback is that the solutions obtained in this setting have increasing length of their supports \cite{38,Bede3}. Many authors have generalized traditional methods such as Euler's method, Adams-Bashforth methods, predictor-corrector methods and Runge-Kutta methods \cite{Bede5,21,20,26,28,29,23,19} to fuzzy differential and fuzzy initial value problems. However, they use Hukuhara differentiability and fuzzify the numerical method using the extension principle or other methods \cite{F1}. Under the concept of strongly generalized differentiability there exists a fuzzy derivative for a large class of fuzzy-number-valued functions \cite{Bede2,Bede3}. Another advantage is that there exist two local solutions, the so-called (i)-differentiable and (ii)-differentiable solutions. According to the nature of the initial value problem we can choose the most meaningful practical solution. In this paper we develop the GLM schemes based on the strongly generalized differentiability concept. The notion of a fuzzy derivative was first introduced by Chang and Zadeh \cite{3}, and Dubois and Prade \cite{4} introduced its extension. Stefanini \cite{30, 31} introduced the fuzzy gH-difference, and Bede and Stefanini \cite{Bede7} defined and studied a new generalization of differentiability for fuzzy-number-valued functions. The aim of this paper is to develop the GLMs for fuzzy differential equations and study their consistency, stability and convergence. In this paper, under strongly generalized differentiability we develop the well-known Adams-Bashforth methods in the framework of a general linear method. This starting step will motivate us to develop arbitrary classes of GLMs with desired properties in forthcoming research. Let us denote by $\mathbb{R}_{\mathcal{F}}$ the class of fuzzy numbers, i.e.
normal, convex, upper semicontinuous and compactly supported fuzzy subsets of the real numbers. The fuzzy initial value problem is defined as follows: \begin{eqnarray}\label{eq-f} y'(t) &=& f(t,y(t)),\quad t\in [t_0,T],\\ y(t_0) &=& y_0,\nonumber \end{eqnarray} where $f:[t_0,T]\times \mathbb{R}_{\mathcal{F}}\to \mathbb{R}_{\mathcal{F}}$ and $y_0\in \mathbb{R}_{\mathcal{F}}$. Here, we explain the GLM for the ordinary IVP (\ref{eq-f}), and in the next sections we will discuss the development of the GLM for the FIVP. Burrage and Butcher \cite{2} presented a standard representation of a GLM in terms of four matrices. These methods were formulated as follows: \begin{equation}\label{1} \begin{array}{c} Y=hAf(Y)+Uy^{[n-1]},\\ y^{[n]}=hBf(Y)+Vy^{[n-1]}. \end{array} \end{equation} where $y^{[n-1]}$ and $y^{[n]}$ are input and output approximations, respectively, and \[A\in \mathbb{R}^{s\times s},\quad U\in \mathbb{R}^{s\times r},\quad B\in \mathbb{R}^{r\times s},\quad V\in \mathbb{R}^{r\times r}.\] In this paper we use fuzzy interpolation for constructing Adams-Bashforth schemes in the general linear methods framework. The organization of this paper is as follows: In Section \ref{sec2} we present the preliminaries from GLM and fuzzy calculus. In Section \ref{sec3} we apply the GLM form of linear multistep methods to solve fuzzy differential equations, and in Section \ref{sec5} numerical results are given. \section{Preliminaries}\label{sec2} In this section we present the required concepts from general linear methods and also briefly review the required definitions from fuzzy calculus, as given in \cite{Bede1}. We will give the main idea of the paper for an important subclass of LMMs, the so-called Adams methods, in the GLM framework. \begin{defn} Let $u,v\in \mathbb{R}_\mathcal{F}$. The Hukuhara difference (H-difference, $\circleddash_{H}$) of $u$ and $v$ is defined by \[u\circleddash v=w \Longleftrightarrow u=v+w.\] \end{defn} Here $w\in\mathbb{R}_\mathcal{F}$ is called the H-difference of $u$ and $v$. If the H-difference $u\circleddash v$ exists, then $[u\circleddash v]_r=[u_r^--v_r^-,u_r^+-v_r^+]$. The Hukuhara derivative for a fuzzy function was introduced by Puri and Ralescu \cite{36}. From Kaleva \cite{37} and Diamond \cite{38}, it follows that a Hukuhara differentiable function has increasing length of its support interval. Moreover, the Hukuhara difference rarely exists, and to overcome these drawbacks the strongly generalized differentiability of fuzzy-number-valued functions was introduced and studied by Bede and Gal \cite{Bede3}. Under this notion, a differentiable function may have the property that its support has increasing or decreasing length. \begin{defn}\label{def2.5} Let $f:(a,b)\rightarrow\mathbb{R}_\mathcal{F}$ and $x_0\in(a,b)$.
We say that $f$ is strongly generalized differentiable at $x_0$ if there exists an element $f'(x_0)\in \mathbb{R}_\mathcal{F}$ such that \begin{itemize} \item[(i)] for each $h>0$ sufficiently close to 0, the H-differences $f(x_0+h)\circleddash f(x_0)$ and $f(x_0)\circleddash f(x_0-h)$ exist and \[\lim_{h\rightarrow0}\frac{f(x_0+h)\circleddash f(x_0)}{h}=\lim_{h\rightarrow0}\frac{f(x_0)\circleddash f(x_{0}-h)}{h}=f'(x_0),\] or \item[(ii)] for each $h>0$ sufficiently close to 0, the H-differences $f(x_0)\circleddash f(x_0+h) $ and $f(x_0-h)\circleddash f(x_0)$ exist and \[\lim_{h\rightarrow0}\frac{f(x_0)\circleddash f(x_0+h)}{(-h)}=\lim_{h\rightarrow0}\frac{f(x_0-h) \circleddash f(x_0)}{(-h)}=f'(x_0).\] \end{itemize} \end{defn} Let $f:(a,b)\rightarrow\mathbb{R}_\mathcal{F}$. We say that $f$ is (i)-differentiable or (ii)-differentiable on $(a,b)$ if $f$ is differentiable in the sense (i) or (ii) of Definition \ref{def2.5}, respectively. There are also two other differentiability cases, (iii)- and (iv)-differentiability, but there are no existence theorems for these cases and we do not discuss them here. Bede in \cite{Bede5} proved that under certain conditions the fuzzy initial value problem \eqref{eq-f} has a unique solution and is equivalent to the system of ODEs \begin{equation*} \left\{\begin{array}{c} (y_{r}^-)'=f_{r}^-(t,y_{r}^-,y_r^+)\\ (y_{r}^+)'=f_{r}^+(t,y_{r}^-,y_r^+) \end{array},r\in[0,1]\right. \end{equation*} with respect to H-differentiability. In this interpretation, solutions of a fuzzy differential equation always have an increasing length of their support intervals. Hence a fuzzy dynamical system will have increasingly uncertain behavior in time, and it does not allow periodic solutions. Thus, to solve FDEs, different ideas and methods have been investigated. The second interpretation was based on Zadeh's extension principle defined in \cite{44}. Consider the classical ODE $x'=f(t,x,a)$, $x(t_0)=x_0\in\mathbb{R}$, where $a\in\mathbb{R}$ is a parameter. By using Zadeh's extension principle on the classical solution, we obtain a solution of the FIVP. The third interpretation has been developed based on the generalized fuzzy derivative. In this work we use the interpretation based on strongly generalized differentiability. Fuzzy differential equations based on generalized H-differentiability were investigated by Bede and Gal in \cite{Bede3}, and more general results were proposed by Bede and Gal in \cite{Bede4}. According to the assumptions of Theorem 9.11 in \cite{Bede1}, the fuzzy initial value problem \eqref{eq-f} is equivalent to the union of the systems of ODEs: \begin{eqnarray}\label{i-diff} \left\{ \begin{array}{ll} (y_{\alpha}^{-})'(t) = f_{\alpha}^{-}(t,y_{\alpha}^{-}(t),y_{\alpha}^{+}(t)) \\ (y_{\alpha}^{+})'(t) = f_{\alpha}^{+}(t,y_{\alpha}^{-}(t),y_{\alpha}^{+}(t)), &\alpha\in [0,1] \\ (y_{\alpha}^{-})(t_0) = (y_0)_{\alpha}^{-},\quad (y_{\alpha}^{+})(t_0) = (y_0)_{\alpha}^{+}. \end{array} \right. \end{eqnarray} and \begin{eqnarray}\label{ii-diff} \left\{ \begin{array}{ll} (y_{\alpha}^{-})'(t) = f_{\alpha}^{+}(t,y_{\alpha}^{-}(t),y_{\alpha}^{+}(t)) \\ (y_{\alpha}^{+})'(t) = f_{\alpha}^{-}(t,y_{\alpha}^{-}(t),y_{\alpha}^{+}(t)), & \alpha\in [0,1] \\ (y_{\alpha}^{-})(t_0) = (y_0)_{\alpha}^{-},\quad (y_{\alpha}^{+})(t_0) = (y_0)_{\alpha}^{+}. \end{array} \right.
\end{eqnarray} For triangular input data we have the same systems \eqref{i-diff} and \eqref{ii-diff} with an extra equation $(y_{\alpha}^{1})'(t) = f_{\alpha}^{1}(t,y_{\alpha}^{-}(t),y_{\alpha}^{1}(t),y_{\alpha}^{+}(t))$, where $f=(f^{-},f^{1},f^{+})$ (see Theorem 9.12 in \cite{Bede1}). A linear multistep method is defined by the first characteristic polynomial $\rho(r) = \sum_{j=0}^k \alpha_j r^j$ and the second characteristic polynomial $\sigma(r) = \sum_{j=0}^k \beta_j r^j$ as follows: \begin{equation} \sum_{j=0}^k \alpha_j y_{n+j} = h\sum_{j=0}^k \beta_j f_{n+j}, \end{equation} where $a=t_{0}\leq t_{1}\leq \cdots \leq t_{N}=b$, $h=\frac{b-a}{N}=t_{n+k}-t_{n+k-1}$, $f_{n+j} = f(t_{n+j},y_{n+j})$ and $\alpha_j$, $\beta_{j}$, $j=0,1,\cdots,k$, are constants. In this scheme we can evaluate an approximate solution $y_{n+k}$ for the exact value $y(t_{n+k})$ using the values $y_{n},y_{n+1},\dots, y_{n+k-1}$. The Adams schemes are characterized by the first characteristic polynomial $\rho(r)=r^{k}-r^{k-1}$. Therefore, we have \begin{equation}\label{eq2.1} y_{n+k} = y_{n+k-1}+h\sum_{j=0}^{k}\beta_{j}f_{n+j}. \end{equation} In (\ref{eq2.1}), the case $\beta_{k}=0$ means that the method is explicit; otherwise the method is implicit. The stability of LMMs is characterized by the root condition for the first characteristic polynomial $\rho(r)$, which means that the roots $r_s$, $s=1,2,\dots,k$, of $\rho(r)$ satisfy $|r_s|\le 1$ and the roots with $|r_s|=1$ are simple \cite{15}. The zero-stability of an LMM, and correspondingly of its GLM form, depends on whether the first characteristic polynomial $\rho(r)$, or the minimal polynomial of the matrix $V$ in the GLM setting, satisfies the root condition. \section{A GLM scheme with strongly generalized differentiability}\label{sec3} In this section we present the derivation of a GLM based on linear $k$-step Adams schemes for solving the fuzzy initial value problem under strongly generalized differentiability. Assume that for equally spaced points $0=t_0<t_1<\cdots<t_N=T$, the exact solutions at $t_n$ are denoted by ${\textbf{Y}}_{1}(t_{n};r)=[\textbf{Y}_1^-(t_n;r),\textbf{Y}_1^+(t_n;r)]$ and $\textbf{Y}_{2}(t_{n};r)=[\textbf{Y}_2^-(t_n;r),\textbf{Y}_2^+(t_n;r)]$ under (i) and (ii)-differentiability, respectively. Also assume that $y_{1}(t_{n};r)=[y_1^-(t_n;r),y_1^+(t_n;r)]$ and $y_{2}(t_{n};r)=[y_2^-(t_n;r),y_2^+(t_n;r)]$ are approximate solutions at $t_n$ under (i) and (ii)-differentiability, respectively. The $k$-step Adams methods under Hukuhara or (i)-differentiability can be written as: \begin{equation}\label{equ3.1} \begin{array}{ccc} y_{1r}^-(t_{n+k};r) &=& y_{1r}^-(t_{n+k-1};r) +h\sum_{j=0}^{k}\beta_j f^-(t_{n+j},y_{1r}(t_{n+j};r)),\\ y_{1r}^+(t_{n+k};r) &=& y_{1r}^+(t_{n+k-1};r) +h\sum_{j=0}^{k}\beta_j f^+(t_{n+j},y_{1r}(t_{n+j};r)), \end{array} \end{equation} and under (ii)-differentiability can be written as: \begin{equation}\label{equ3.2} \begin{array}{ccc} y_{2r}^-(t_{n+k};r) &=& y_{2r}^-(t_{n+k-1};r) +h\sum_{j=0}^{k}\beta_j f^+(t_{n+j},y_{2r}(t_{n+j};r)),\\ y_{2r}^+(t_{n+k};r) &=& y_{2r}^+(t_{n+k-1};r) +h\sum_{j=0}^{k}\beta_j f^-(t_{n+j},y_{2r}(t_{n+j};r)), \end{array} \end{equation} The Adams schemes are $k$-step methods \eqref{eq2.1} with $\rho(r)=r^{k}-r^{k-1}$. In this setting we can find their corresponding general linear method framework. In the GLM representation we should first determine the input and output vectors and then find the corresponding matrices.
To determine these vectors and matrices, we consider the input and output approximations of general linear methods as follows: \begin{equation*} y^{[n-1]}=\left( \begin{array}{c} y_{n+k-1}\\ hf_{n+k-1}\\ hf_{n+k-2}\\ \vdots\\ hf_{n+1}\\ hf_{n} \end{array}\right),\qquad y^{[n]}=\left( \begin{array}{c} y_{n+k}\\ hf_{n+k}\\ hf_{n+k-1}\\ \vdots\\ hf_{n+2}\\ hf_{n+1} \end{array}\right). \end{equation*} Similarly, the linear $k$-step methods \eqref{equ3.1} and \eqref{equ3.2} under strongly generalized differentiability can be represented in the form of general linear methods. For this representation the input vectors for the GLM forms of \eqref{equ3.1} and \eqref{equ3.2} are denoted by $y_{1r}^{[n-1]}=\big[y_{1r}^{-[n-1]},y_{1r}^{+[n-1]}\big]$ and $y_{2r}^{[n-1]}=\big[y_{2r}^{-[n-1]},y_{2r}^{+[n-1]}\big]$ under (i) and (ii)-differentiability, respectively. Corresponding to the input vectors, the output vectors are denoted by $y_{1r}^{[n]}=\big[y_{1r}^{-[n]},y_{1r}^{+[n]}\big]$ and $y_{2r}^{[n]}=\big[y_{2r}^{-[n]},y_{2r}^{+[n]}\big]$ under (i) and (ii)-differentiability, respectively. Now, we consider the input approximation of general linear methods in terms of (i)-differentiability as \begin{equation}\label{input_i} y_{1r}^{-[n-1]}=\left(\begin{array}{c} y^-_{{n+k-1}_{1r}} \\ hf^-_{{n+k-1}_{1r}} \\ hf^-_{{n+k-2}_{1r}} \\ \vdots\\ hf^-_{{n+1}_{1r}}\\ hf^-_{{n}_{1r}} \end{array}\right),\qquad y_{1r}^{+[n-1]}=\left(\begin{array}{c} y^+_{{n+k-1}_{1r}} \\ hf^+_{{n+k-1}_{1r}} \\ hf^+_{{n+k-2}_{1r}} \\ \vdots\\ hf^+_{{n+1}_{1r}}\\ hf^+_{{n}_{1r}} \end{array}\right), \end{equation} and under (ii)-differentiability we obtain the following input vectors: \begin{equation}\label{input_ii} y_{2r}^{-[n-1]}=\left(\begin{array}{c} y^-_{{n+k-1}_{2r}} \\ hf^+_{{n+k-1}_{2r}} \\ hf^+_{{n+k-2}_{2r}} \\ \vdots\\ hf^+_{{n+1}_{2r}}\\ hf^+_{{n}_{2r}} \end{array}\right),\qquad y_{2r}^{+[n-1]}=\left(\begin{array}{c} y^+_{{n+k-1}_{2r}} \\ hf^-_{{n+k-1}_{2r}} \\ hf^-_{{n+k-2}_{2r}} \\ \vdots\\ hf^-_{{n+1}_{2r}}\\ hf^-_{{n}_{2r}} \end{array}\right). \end{equation} By considering the above input vectors, the fuzzy general linear method forms of \eqref{equ3.1} and \eqref{equ3.2} can be formulated in the case of (i)-differentiability as \begin{equation}\label{GLM1} \left(\begin{array}{c} Y_{1r} \\ \hline y_{1r}^{[n]} \end{array}\right) =\left(\begin{tabular}{c|c} A & U \\ \hline B & V \end{tabular}\right)\left(\begin{array}{c} hf_{1r}(Y_{1r}) \\ \hline y_{1r}^{[n-1]} \end{array}\right), \end{equation} and in the case of (ii)-differentiability as \begin{equation}\label{GLM2} \left(\begin{array}{c} Y_{2r} \\ \hline y_{2r}^{[n]} \end{array}\right) =\left(\begin{tabular}{c|c} A & U \\ \hline B & V \end{tabular}\right)\left(\begin{array}{c} hf_{2r}(Y_{2r}) \\ \hline y_{2r}^{[n-1]} \end{array}\right), \end{equation} where $Y_{1r}=[Y_{1r}^-,Y_{1r}^+]$ and $Y_{2r}=[Y_{2r}^-,Y_{2r}^+]$ are the internal stages under (i) and (ii)-differentiability, respectively. Also, \begin{equation*} \left(\begin{tabular}{c|c} A & U \\ \hline B & V \end{tabular}\right)= \left(\begin{tabular}{c|ccccc} 0 & 1 & $\beta_{k-1}$ & $\cdots$ & $\beta_{1}$ & $\beta_{0}$ \\ \hline 0 & 1 & $\beta_{k-1}$ & $\cdots$ & $\beta_{1}$ & $\beta_{0}$ \\ 1 & 0 & 0 & $\cdots$ & 0 & 0 \\ 0 & 0 & 1 & $\cdots$ & 0 & 0 \\ $\vdots$ & $\vdots$ & $\vdots$ & $\quad$ & $\vdots$ & $\vdots$ \\ 0 & 0 & 0 & $\cdots$ & 1 & 0 \end{tabular}\right). \end{equation*} Now we consider two examples of the fuzzy GLM form of $k$-step methods under strongly generalized differentiability, for $k=4$ and $k=5$. First, consider $k=4$.
The input vectors for $k=4$ under (i) and (ii)-differentiability are as follows, respectively: \begin{equation*} {y}^{\mp[n-1]}_{1r}=\left(\begin{array}{c} {y}^{\mp}_{1r}(t_{n+3})\\ h{f}^{\mp}_{1r}(t_{n+3},y_{1r}(t_{n+3}))\\ h{f}^{\mp}_{1r}(t_{n+2},y_{1r}(t_{n+2}))\\ h{f}^{\mp}_{1r}(t_{n+1},y_{1r}(t_{n+1}))\\ h{f}^{\mp}_{1r}(t_{n},y_{1r}(t_{n})) \end{array}\right),\quad {y}^{\mp[n-1]}_{2r}=\left(\begin{array}{c} {y}^{\mp}_{2r}(t_{n+3})\\ h{f}^{\pm}_{2r}(t_{n+3},y_{2r}(t_{n+3}))\\ h{f}^{\pm}_{2r}(t_{n+2},y_{2r}(t_{n+2}))\\ h{f}^{\pm}_{2r}(t_{n+1},y_{2r}(t_{n+1}))\\ h{f}^{\pm}_{2r}(t_{n},y_{2r}(t_{n})) \end{array}\right), \end{equation*} and \begin{equation*} \left( \begin{tabular}{l|lllll} 0&1&$\frac{55}{24}$&$\frac{-59}{24}$&$\frac{37}{24}$&$\frac{-9}{24}$\\ \cline{1-6} 0&1&$\frac{55}{24}$&$\frac{-59}{24}$&$\frac{37}{24}$&$\frac{-9}{24}$\\ 1&0&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0 \end{tabular}\right). \end{equation*} Similarly, for $k=5$ we obtain \begin{equation*} {y}^{\mp[n-1]}_{1r}=\left(\begin{array}{c} {y}^{\mp}_{1r}(t_{n+4})\\ h{f}^{\mp}_{1r}(t_{n+4},y_{1r}(t_{n+4}))\\ h{f}^{\mp}_{1r}(t_{n+3},y_{1r}(t_{n+3}))\\ h{f}^{\mp}_{1r}(t_{n+2},y_{1r}(t_{n+2}))\\ h{f}^{\mp}_{1r}(t_{n+1},y_{1r}(t_{n+1}))\\ h{f}^{\mp}_{1r}(t_{n},y_{1r}(t_{n})) \end{array}\right),\quad {y}^{\mp[n-1]}_{2r}=\left(\begin{array}{c} {y}^{\mp}_{2r}(t_{n+4})\\ h{f}^{\pm}_{2r}(t_{n+4},y_{2r}(t_{n+4}))\\ h{f}^{\pm}_{2r}(t_{n+3},y_{2r}(t_{n+3}))\\ h{f}^{\pm}_{2r}(t_{n+2},y_{2r}(t_{n+2}))\\ h{f}^{\pm}_{2r}(t_{n+1},y_{2r}(t_{n+1}))\\ h{f}^{\pm}_{2r}(t_{n},y_{2r}(t_{n})) \end{array}\right), \end{equation*} and \begin{equation*} \left( \begin{tabular}{l|llllll} 0&1&$\frac{1901}{720}$&$\frac{-2774}{720}$&$\frac{2616}{720}$&$\frac{-1274}{720}$&$\frac{251}{720}$\\ \cline{1-7} 0&1&$\frac{1901}{720}$&$\frac{-2774}{720}$&$\frac{2616}{720}$&$\frac{-1274}{720}$&$\frac{251}{720}$\\ 1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0 \end{tabular}\right). \end{equation*} \section{Convergence, consistency and stability} To address the convergence of the presented FGLMs we consider the numerical solutions ${y}_{1}(t_{n+j};r)=[{y}^-_{1}(t_{n+j};r),{y}^+_{1}(t_{n+j};r)]$ and ${y}_{2}(t_{n+j};r)=[{y}^-_{2}(t_{n+j};r),{y}^+_{2}(t_{n+j};r)]$ and the corresponding exact solutions $\mathbf{Y}_{1}(t_{n+j};r)=[\mathbf{Y}^-_{1}(t_{n+j};r),\mathbf{Y}^+_{1}(t_{n+j};r)]$ and $\mathbf{Y}_{2}(t_{n+j};r)=[\mathbf{Y}^-_{2}(t_{n+j};r),\mathbf{Y}^+_{2}(t_{n+j};r)]$ under (i) and (ii)-differentiability, respectively. The local truncation errors (LTEs) of the FGLMs under strongly generalized differentiability are defined by \begin{equation}\label{eq3.23} \begin{array}{c} {\Psi}_{1}(t_{n+k};r)=\sum_{j=0}^{k}r_{j}y_{1}(t_{n+j};r)-h\psi_{f_{1}}\big(y_1(t_{n+k};r),\cdots,y_1(t_n;r)\big),\\ {\Psi}_{2}(t_{n+k};r)=\sum_{j=0}^{k}r_{j}y_{2}(t_{n+j};r)-h\psi_{f_{2}}\big(y_2(t_{n+k};r),\cdots,y_2(t_n;r)\big),\\ \end{array} \end{equation} where $r_{k}=-r_{k-1}=1$ and $r_{j}=0$ for $j=0,1,\ldots,k-2$, and \begin{eqnarray*} \psi_{f_{1}}\big(y_1(t_{n+k};r),\cdots,y_1(t_n;r)\big)&=&\sum_{j=0}^{k-1}\beta_jf_1(t_{n+j},y_1(t_{n+j};r)),\\ \psi_{f_{2}}\big(y_2(t_{n+k};r),\cdots,y_2(t_n;r)\big)&=&\sum_{j=0}^{k-1}\beta_jf_2(t_{n+j},y_2(t_{n+j};r)). \end{eqnarray*} Consistency and stability are two essential conditions for convergence.
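Before formalizing these notions, we note that the root condition for the $k=4$ tableau above can also be checked numerically. The short sketch below is ours (it assumes NumPy is available); it assembles the coefficient matrix $V$ and inspects its spectrum, which should lie in the closed unit disc with the eigenvalue $1$ simple.
\begin{verbatim}
# Sketch of a numerical root-condition check for the k = 4 tableau above.
import numpy as np

beta = [55/24, -59/24, 37/24, -9/24]     # beta_{k-1}, ..., beta_0
V = np.zeros((5, 5))
V[0, 0] = 1.0                            # y_{n+k} := y_{n+k-1} + ...
V[0, 1:] = beta                          # ... + h*(beta_3 f_{n+3} + ... + beta_0 f_n)
V[2, 1] = V[3, 2] = V[4, 3] = 1.0        # shift the stored slope values
# (row 1 stays zero: h f_{n+k} enters the output through B, not V)

w = np.linalg.eigvals(V)
assert np.max(np.abs(w)) <= 1 + 1e-10    # all |r_s| <= 1 ...
assert np.sum(np.abs(w - 1) < 1e-8) == 1 # ... and the zero of modulus 1 is simple
\end{verbatim}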
\begin{defn} A Fuzzy GLM form of a $k$-step method under generalized differentiability is said to be consistent if, for all fuzzy initial value problems, the residuals ${\Psi}_{1}(t_{n+k};r)$ and ${\Psi}_{2}(t_{n+k};r)$ defined by (\ref{eq3.23}) satisfy \begin{eqnarray*} \lim_{h\rightarrow0}\frac{1}{h}{\Psi}_{1}(t_{n+k};r)=0,\\ \lim_{h\rightarrow0}\frac{1}{h}{\Psi}_{2}(t_{n+k};r)=0. \end{eqnarray*} \end{defn} \begin{defn} A Fuzzy GLM is stable if the minimal polynomial of the coefficient matrix $V$ has no zeros of modulus greater than one and all its zeros of modulus one are simple; in other words, if it satisfies the root condition. \end{defn} To verify the stability of the Fuzzy GLMs under generalized differentiability given in Section \ref{sec3}, we find the minimal polynomial $p_{k}(w)$ of the coefficient matrix $V$ for $k=4,5$: \begin{eqnarray*} p_{k}(w)=w^{k}(w-1),\quad k=4,5, \end{eqnarray*} which satisfies the root condition, and hence the corresponding Fuzzy GLMs are stable. \section{Numerical results}\label{sec5} In this section we report one example, out of many test problems, to show the numerical results of FGLMs for solving fuzzy differential equations under strongly generalized differentiability. We utilize the FGLMs ($k=4,5$) presented in Section \ref{sec3}. Numerical results for the absolute errors and the corresponding order of convergence are provided. We estimate the order of convergence $p$ from the ratio $\frac{E(h/2)}{E(h)}= O(\frac{1}{2^p})$, that is, $p\approx \log_2\big(E(h)/E(h/2)\big)$. \begin{test}\label{test6.1} (Bede \cite{Bede1}) Consider the following fuzzy initial value problem \begin{equation}\label{FIVP1} y'=-y+e^{-t}(-1,0,1),\qquad y_0=(-1,0,1). \end{equation} The system of ODEs corresponding to (i)-differentiability is given by \begin{equation*} \left\{\begin{array}{l} (y^-)'= -y^+-e^{-t},\\ (y^1)'= -y^1,\\ (y^+)' = -y^-+e^{-t}, \\ y_0=(-1,0,1). \end{array}\right. \end{equation*} The analytical solution under (i)-differentiability is \begin{eqnarray} Y_1^-(t;r) &=& (1-r)(\frac{1}{2}e^{-t}-\frac{3}{2}e^t), \nonumber\\ Y_1^+(t;r) &=& (1-r)(\frac{3}{2}e^t-\frac{1}{2}e^{-t}). \nonumber \end{eqnarray} Similarly, the system of ODEs corresponding to (ii)-differentiability is given by \begin{equation*} \left\{\begin{array}{l} (y^-)'= -y^-+e^{-t},\\ (y^1)'= -y^1,\\ (y^+)' = -y^+-e^{-t}, \\ y_0=(-1,0,1), \end{array}\right. \end{equation*} and the analytical solution under (ii)-differentiability is \begin{eqnarray} Y_2^-(t;r) &=& (-1+r)(1-t)\exp(-t), \nonumber\\ Y_2^+(t;r) &=& (1-r)(1-t)\exp(-t). \nonumber \end{eqnarray} We demonstrate the numerical solution of the FIVP \eqref{FIVP1} on the interval $[0,2]$. The exact and approximate solutions under (i)- and (ii)-differentiability, obtained by the FGLMs for $k=4$ and $k=5$, are presented in Tables \ref{Tab6.1.1} and \ref{Tab6.1.2} at $t=2$ with $N=20$ and $h=\frac{T-t_0}{N}$. Moreover, the results on their convergence are provided in Tables \ref{Tab6.1.3} and \ref{Tab6.1.4}.
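Applying this estimate to the tabulated errors is straightforward; the following small sketch (ours) reproduces the observed orders from the $r=0.2$ rows of Table \ref{Tab6.1.3}.
\begin{verbatim}
# Sketch (ours): observed order p from errors on successively halved steps,
# using E(h/2)/E(h) = O(2^{-p}), i.e. p ~ log2(E(h)/E(h/2)).
import math

def observed_orders(errors):
    return [math.log2(a / b) for a, b in zip(errors, errors[1:])]

# r = 0.2 rows of the k = 4, (ii)-differentiability convergence table:
E = [3.759533862475462e-5, 2.360816361970941e-6,
     1.475860954835984e-7, 9.220812974275461e-9]
print(observed_orders(E))   # each value is close to 4
\end{verbatim}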
\end{test} \begin{table}[!htp] \centering \tiny \begin{tabular}{cccc} \hline $r$ & $y_{1r}$ & $Y_{1r}$ & $E_{1r}$ \\[0.5mm] \hline\\[-1.5mm] 0 & [-1.101531E1, 1.101531E1] & [-1.101592E1, 1.101592E1] & 6.024101E-4\\[0.5mm] 0.1 & [-9.913783E0, 9.913783E0] & [-9.914325E0, 9.914325E0] & 5.421691E-4\\[0.5mm] 0.2 & [-8.812251E0, 8.812251E0] & [-8.812733E0, 8.812733E0] & 4.819281E-4\\[0.5mm] 0.3 & [-7.710720E0, 7.710720E0] & [-7.711142E0, 7.711142E0] & 4.216871E-4\\[0.5mm] 0.4 & [-6.609188E0, 6.609188E0] & [-6.609550E0, 6.609550E0] & 3.614461E-4\\[0.5mm] 0.5 & [-5.507657E0, 5.507657E0] & [-5.507958E0, 5.507958E0] & 3.012050E-4\\[0.5mm] 0.6 & [-4.406126E0, 4.406126E0] & [-4.406367E0, 4.406367E0] & 2.409640E-4\\[0.5mm] 0.7 & [-3.304594E0, 3.304594E0] & [-3.304775E0, 3.304775E0] & 1.807230E-4\\[0.5mm] 0.8 & [-2.203063E0, 2.203063E0] & [-2.203183E0, 2.203183E0] & 1.204820E-4\\[0.5mm] 0.9 & [-1.101531E0, 1.101531E0] & [-1.101592E0, 1.101592E0] & 6.024101E-5\\[0.5mm] 1.0 & [0,0] & [0,0] & 0 \\[0.5mm] \hline \\ (a) \end{tabular} \begin{tabular}{cccc} \hline $r$ & $y_{2r}$ & $Y_{2r}$ & $E_{2r}$\\[0.5mm] \hline\\[-1.5mm] 0 & [1.352883E-1, -1.352883E-1] & [1.353353E-1, -1.353353E-1] & 4.699417E-5\\[0.5mm] 0.1 & [1.217595E-1, -1.217595E-1] & [1.218018E-1, -1.218018E-1] & 4.229476E-5\\[0.5mm] 0.2 & [1.082306E-1, -1.082306E-1] & [1.082682E-1, -1.082682E-1] & 3.759534E-5\\[0.5mm] 0.3 & [9.470180E-2, -9.470180E-2] & [9.473470E-2, -9.473470E-2] & 3.289592E-5\\[0.5mm] 0.4 & [8.117297E-2, -8.117297E-2] & [8.120117E-2, -8.120117E-2] & 2.819650E-5\\[0.5mm] 0.5 & [6.764414E-2, -6.764414E-2] & [6.766764E-2, -6.766764E-2] & 2.349709E-5\\[0.5mm] 0.6 & [5.411532E-2, -5.411532E-2] & [5.413411E-2, -5.413411E-2] & 1.879767E-5\\[0.5mm] 0.7 & [4.058649E-2, -4.058649E-2] & [4.060058E-2, -4.060058E-2] & 1.409825E-5\\[0.5mm] 0.8 & [2.705766E-2, -2.705766E-2] & [2.706706E-2, -2.706706E-2] & 9.398835E-6\\[0.5mm] 0.9 & [1.352883E-2, -1.352883E-2] & [1.353353E-2, -1.353353E-2] & 4.699417E-6\\[0.5mm] 1.0 & [0,0] & [0,0] & 0 \\[0.5mm] \hline \\ (b) \end{tabular} \caption{\scriptsize(a)Approximate solution of the FGLM ($k=4$) $y_{1r}=[y_{1r}^-,y_{1r}^+]$, exact solution $Y_{1r}=[Y_{1r}^-,Y_{1r}^+]$ and absolute error $E_{1r}$ under (i)-differentiability, (b)Approximate solution of the FGLM ($k=4$) $y_{2r}=[y_{2r}^-,y_{2r}^+]$, exact solution $Y_{2r}=[Y_{2r}^-,Y_{2r}^+]$ and absolute error $E_{2r}$ under (ii)-differentiability, Test \ref{test6.1}.}\label{Tab6.1.1} \end{table} \begin{table}[!htp] \centering \tiny \begin{tabular}{ccccccc} \hline $r$ & $y_{1r}$ & $Y_{1r}$ & $E_{1r}$ \\[0.5mm] \hline\\[-1.5mm] 0 & [-1.101587E1, 1.101587E1] & [-1.101592E1, 1.101592E1] & 4.451187E-5\\[0.5mm] 0.1 & [-9.914285E0, 9.914285E0] & [-9.914325E0, 9.914325E0] & 4.006069E-5\\[0.5mm] 0.2 & [-8.812698E0, 8.812698E0] & [-8.812733E0, 8.812733E0] & 3.560950E-5\\[0.5mm] 0.3 & [-7.711110E0, 7.711110E0] & [-7.711142E0, 7.711142E0] & 3.115831E-5\\[0.5mm] 0.4 & [-6.609523E0, 6.609523E0] & [-6.609550E0, 6.609550E0] & 2.670712E-5\\[0.5mm] 0.5 & [-5.507936E0, 5.507936E0] & [-5.507958E0, 5.507958E0] & 2.225594E-5\\[0.5mm] 0.6 & [-4.406349E0, 4.406349E0] & [-4.406367E0, 4.406367E0] & 1.780475E-5\\[0.5mm] 0.7 & [-3.304762E0, 3.304762E0] & [-3.304775E0, 3.304775E0] & 1.335356E-5\\[0.5mm] 0.8 & [-2.203174E0, 2.203174E0] & [-2.203183E0, 2.203183E0] & 8.902375E-6\\[0.5mm] 0.9 & [-1.101587E0, 1.101587E0] & [-1.101592E0, 1.101592E0] & 4.451187E-6\\[0.5mm] 1.0 & [0,0] & [0,0] & 0 \\[0.5mm] \hline \\ (a) \end{tabular} \begin{tabular}{cccc} \hline $r$ & $y_{2r}$ & $Y_{2r}$ & 
$E_{2r}$ \\[0.5mm] \hline\\[-1.5mm] 0 & [1.353406E-1, -1.353406E-1] & [1.353353E-1, -1.353353E-1] & 5.270043E-6 \\[0.5mm] 0.1 & [1.218065E-1, -1.218065E-1] & [1.218018E-1, -1.218018E-1] & 4.743039E-6 \\[0.5mm] 0.2 & [1.082724E-1, -1.082724E-1] & [1.082682E-1, -1.082682E-1] & 4.216035E-6 \\[0.5mm] 0.3 & [9.473839E-2, -9.473839E-2] & [9.473470E-2, -9.473470E-2] & 3.689030E-6 \\[0.5mm] 0.4 & [8.120433E-2, -8.120433E-2] & [8.120117E-2, -8.120117E-2] & 3.162026E-6 \\[0.5mm] 0.5 & [6.767028E-2, -6.767028E-2] & [6.766764E-2, -6.766764E-2] & 2.635022E-6 \\[0.5mm] 0.6 & [5.413622E-2, -5.413622E-2] & [5.413411E-2, -5.413411E-2] & 2.108017E-6 \\[0.5mm] 0.7 & [4.060217E-2, -4.060217E-2] & [4.060058E-2, -4.060058E-2] & 1.581013E-6 \\[0.5mm] 0.8 & [2.706811E-2, -2.706811E-2] & [2.706706E-2, -2.706706E-2] & 1.054009E-6 \\[0.5mm] 0.9 & [1.353406E-2, -1.353406E-2] & [1.353353E-2, -1.353353E-2] & 5.270043E-7 \\[0.5mm] 1.0 & [0,0] & [0,0] & 0 \\[0.5mm] \hline \\ (b) \end{tabular} \caption{\scriptsize(a)Approximate solution of the FGLM ($k=5$), $y_{1r}=[y_{1r}^-,y_{1r}^+]$, exact solution $Y_{1r}=[Y_{1r}^-,Y_{1r}^+]$ and absolute error $E_{1r}$ under (i)-differentiability, (b)Approximate solution of the FGLM ($k=5$), $y_{2r}=[y_{2r}^-,y_{2r}^+]$, exact solution $Y_{2r}=[Y_{2r}^-,Y_{2r}^+]$ and absolute error $E_{2r}$ under (ii)-differentiability, Test \ref{test6.1}.}\label{Tab6.1.2} \end{table} \begin{table}[!htp] \centering \tiny \begin{tabular}{cccccccccccc} \hline $r$&$h_{i}$&\quad&\quad&\quad&\quad& $E_{2r}(h_{i})$&\quad&\quad&\quad&\quad& $p$\\[1mm] \hline 0.2& $\frac{1}{10}$ & \quad&\quad&\quad&\quad& 3.759533862475462E-5& \quad&\quad&\quad&\quad& \quad\\[1mm] \quad& $\frac{1}{20}$ & \quad&\quad&\quad&\quad& 2.360816361970941E-6& \quad&\quad&\quad&\quad& 3.993196066118324E0\\[1mm] \quad& $\frac{1}{40}$ & \quad&\quad&\quad&\quad& 1.475860954835984E-7& \quad&\quad&\quad&\quad& 3.999657112312050E0\\[1mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 9.220812974275461E-9& \quad&\quad&\quad&\quad& 4.000519042265663E0\\[1mm] \hline 0.4 &$\frac{1}{10}$ & \quad&\quad&\quad&\quad& 2.819650396852780E-5& \quad&\quad&\quad&\quad& \quad\\[1mm] \quad&$\frac{1}{20}$ & \quad&\quad&\quad&\quad& 1.770612271481675E-6& \quad&\quad&\quad&\quad& 3.993196066113545E0\\[1mm] \quad&$\frac{1}{40}$ & \quad&\quad&\quad&\quad& 1.106895716057599E-7& \quad&\quad&\quad&\quad& 3.999657112405316E0\\[1mm] \quad&$\frac{1}{80}$ & \quad&\quad&\quad&\quad& 6.915609765401065E-9& \quad&\quad&\quad&\quad& 4.000519034937461E0\\[1mm] \hline 0.6 &$\frac{1}{10}$ & \quad&\quad&\quad&\quad& 1.879766931239119E-5& \quad&\quad&\quad&\quad& \quad\\[1mm] \quad&$\frac{1}{20}$ & \quad&\quad&\quad&\quad& 1.180408180992409E-6& \quad&\quad&\quad&\quad& 3.993196066110909E0\\[1mm] \quad&$\frac{1}{40}$ & \quad&\quad&\quad&\quad& 7.379304775567697E-8& \quad&\quad&\quad&\quad& 3.999657112049211E0\\[1mm] \quad&$\frac{1}{80}$ & \quad&\quad&\quad&\quad& 4.610406514893306E-9& \quad&\quad&\quad&\quad& 4.000519033851667E0\\[1mm] \hline 0.8 &$\frac{1}{10}$ & \quad&\quad&\quad&\quad& 9.398834656195593E-6& \quad&\quad&\quad&\quad& \quad\\[1mm] \quad&$\frac{1}{20}$ & \quad&\quad&\quad&\quad& 5.902040904962047E-7& \quad&\quad&\quad&\quad& 3.993196066110909E0\\[1mm] \quad&$\frac{1}{40}$ & \quad&\quad&\quad&\quad& 3.689652387783848E-8& \quad&\quad&\quad&\quad& 3.999657112049211E0\\[1mm] \quad&$\frac{1}{80}$ & \quad&\quad&\quad&\quad& 2.305203257446653E-9& \quad&\quad&\quad&\quad& 4.000519033851667E0\\[1mm] \hline \end{tabular} \caption{\scriptsize Convergence of the FGLM 
($k=4$)under (ii)-differentiability}\label{Tab6.1.3} \end{table} \begin{table}[!htp] \centering \tiny \begin{tabular}{cccccccccccc} \hline $r$&$h_{i}$&\quad&\quad&\quad&\quad& $E_{2r}(h_{i})$&\quad&\quad&\quad&\quad& $p$\\ [.5mm]\hline\\[-1.5mm] 0.2 & $\frac{1}{10}$ & \quad&\quad&\quad&\quad& 4.216034534335056E-6 & \quad&\quad&\quad&\quad& \quad\\[1mm] \quad& $\frac{1}{20}$ & \quad&\quad&\quad&\quad& 1.336319765954386E-7 & \quad&\quad&\quad&\quad& 4.979549509841708E0\\[1mm] \quad& $\frac{1}{40}$ & \quad&\quad&\quad&\quad& 4.185707641601866E-9 & \quad&\quad&\quad&\quad& 4.996649911789974E0\\[1mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 1.308334413030465E-10 & \quad&\quad&\quad&\quad& 4.999668298467584E0\\[1mm] \quad& $\frac{1}{160}$ & \quad&\quad&\quad&\quad& 4.088021587911328E-12 & \quad&\quad&\quad&\quad& 5.000184718796247E0\\[1mm] \hline 0.4 & $\frac{1}{10}$ & \quad&\quad&\quad&\quad& 3.162025900754761E-6 & \quad&\quad&\quad&\quad& \quad\\[1mm] \quad& $\frac{1}{20}$ & \quad&\quad&\quad&\quad& 1.002239824188234E-7 & \quad&\quad&\quad&\quad& 4.979549510242824E0\\[1mm] \quad& $\frac{1}{40}$ & \quad&\quad&\quad&\quad& 3.139280738140293E-9 & \quad&\quad&\quad&\quad& 4.996649908201587E0\\[1mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 9.812499424111110E-11 & \quad&\quad&\quad&\quad& 4.999669576905354E0\\[1mm] \quad& $\frac{1}{160}$ & \quad&\quad&\quad&\quad& 3.065672715685253E-12 & \quad&\quad&\quad&\quad& 5.000345072764134E0\\[1mm] \hline 0.6 & $\frac{1}{10}$ & \quad&\quad&\quad&\quad& 2.108017267167528E-6 & \quad&\quad&\quad&\quad& \quad\\[.5mm] \quad& $\frac{1}{20}$ & \quad&\quad&\quad&\quad& 6.681598831159707E-8 & \quad&\quad&\quad&\quad& 4.979549509542057E0\\[.5mm] \quad& $\frac{1}{40}$ & \quad&\quad&\quad&\quad& 2.092853827739827E-9 & \quad&\quad&\quad&\quad& 4.996649907306344E0\\[.5mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 6.541663044590251E-11 & \quad&\quad&\quad&\quad& 4.999670292639665E0\\[.5mm] \quad& $\frac{1}{160}$ & \quad&\quad&\quad&\quad& 2.043996916167856E-12 & \quad&\quad&\quad&\quad& 5.000192524602108E0\\[.5mm] \hline 0.8 & $\frac{1}{10}$ & \quad&\quad&\quad&\quad& 1.054008633583764E-6 & \quad&\quad&\quad&\quad& \quad\\[1mm] \quad& $\frac{1}{20}$ & \quad&\quad&\quad&\quad& 3.340799415579854E-8 & \quad&\quad&\quad&\quad& 4.979549509542057E0\\[1mm] \quad& $\frac{1}{40}$ & \quad&\quad&\quad&\quad& 1.046426913869913E-9 & \quad&\quad&\quad&\quad& 4.996649907306344E0\\[1mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 3.270831522295126E-11 & \quad&\quad&\quad&\quad& 4.999670292639665E0\\[1mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 1.021998458083928E-12 & \quad&\quad&\quad&\quad& 5.000192524602108E0\\[1mm] \hline \end{tabular} \caption{\scriptsize Convergence of the FGLM ($k=5$)under (ii)-differentiability}\label{Tab6.1.4} \end{table} From Tables \ref{Tab6.1.3} and \ref{Tab6.1.4}, it follows that the Fuzzy GLMs of $4$-step methods under strongly generalized differentiability have convergence order 4 and the Fuzzy GLMs form of 5-step methods have convergence order 5. \section{Conclusion} In this paper we have developed the linear multistep methods (Adams-Bashforth methods) in the framework of general linear methods for solving fuzzy differential equations under strongly generalized differentiability. We have shown the consistency, stability, and convergence of the new FGLM formulation. The general framework of FGLMs will be studied in the forthcoming paper. \end{document}
\begin{document} \let\le=\leqslant \let\ge=\geqslant \let\leq=\leqslant \let\geq=\geqslant \title{Finite groups and Lie rings with a metacyclic Frobenius group of automorphisms} \markright{} \author{{E.\,I.~Khukhro}\\ \small Sobolev Institute of Mathematics, Novosibirsk, 630\,090, Russia\\[-1ex] \small [email protected] \\ {N.\,Yu.~Makarenko\footnote{The second author was supported by the Russian Foundation for Basic Research, project no.~13-01-00505.}}\\ \small Universit\'{e} de Haute Alsace, Mulhouse, 68093, France, and \\ \small Sobolev Institute of Mathematics, Novosibirsk, 630\,090, Russia \\[-1ex] \small [email protected]} \date{} \maketitle \subjclass{Primary 17B40, 20D45; Secondary 17B70, 20D15, 20F40} \begin{center}{\it to Victor Danilovich Mazurov on the occasion of his 70th birthday} \end{center} \begin{abstract} Suppose that a finite group $G$ admits a Frobenius group of automorphisms $FH$ of coprime order with cyclic kernel $F$ and complement $H$ such that the fixed point subgroup $C_G(H)$ of the complement is nilpotent of class $c$. It is proved that $G$ has a nilpotent characteristic subgroup of index bounded in terms of $c$, $|C_G(F)|$, and $|F|$ whose nilpotency class is bounded in terms of $c$ and $|H|$ only. This generalizes the previous theorem of the authors and P.~Shumyatsky, where for the case of $C_G(F)=1$ the whole group was proved to be nilpotent of $(c,|H|)$-bounded class. Examples show that the condition of $F$ being cyclic is essential. Results based on the classification provide reduction to soluble groups. Then representation theory arguments are used to bound the index of the Fitting subgroup. Lie ring methods are used for nilpotent groups. A similar theorem on Lie rings with a metacyclic Frobenius group of automorphisms $FH$ is also proved. \end{abstract} \keywords{finite group, Frobenius groups, automorphism, soluble, nilpotent, Clifford's theorem, Lie ring} \section{Introduction} Suppose that a finite group $G$ admits a Frobenius group of automorphisms $FH$ of coprime order with cyclic kernel $F$ and complement $H$. In a number of recent papers the structure of $G$ was studied under the assumption that the kernel acts without nontrivial fixed points: $C_G(F)=1$. A fixed-point-free action of $F$ alone was already known to imply many nice properties of $G$ (see more on this below).
But the `additional' action of the Frobenius complement $H$ suggested another approach to the study of $G$. Namely, in the case $C_G(F)=1$, by Clifford's theorem all $FH$-inva\-ri\-ant elementary abelian sections of $G$ are free ${\Bbb F}_pH$-modules (for various $p$), and therefore it is natural to expect that many properties or parameters of $G$ should be close to the corresponding properties or parameters of $C_G(H)$, possibly also depending on $H$. Prompted by Mazurov's problem 17.72 in Kourovka Notebook \cite{kour}, several results of this nature were obtained recently \cite{khu08, mak-shu10,khu10al, khu-ma-shu, khu-ma-shu-DAN, shu-a4, shu-law, khu12ja,khu12al}, the properties and parameters in question being the order, rank, Fitting height, nilpotency class, and exponent. In particular, it was proved in \cite{khu-ma-shu} that if $C_G(H)$ is nilpotent of class $c$, then $G$ is nilpotent of $(c,|H|)$-bounded class (a special case of this result solving part (a) of Mazurov's problem was proved earlier by the second author and Shumyatsky \cite{mak-shu10}). Henceforth we write for brevity, say, ``$(a,b,\dots )$-bounded'' for ``bounded above by some function depending only on $a, b,\dots $''. An important next step is considering finite groups $G$ with a Frobenius group of automorphisms $FH$ in which the kernel $F$ no longer acts fixed-point-freely but has a relatively small number of fixed points. Then it is natural to strive for similar restrictions, in terms of the complement $H$ and its fixed points $C_G(H)$, for a subgroup of index bounded in terms of $|C_G(F)|$ and other parameters: ``almost fixed-point-free'' action of $F$ implying that $G$ is ``almost'' as good as when $F$ acts fixed-point-freely. Such restrictions for the order and rank of $G$ were recently obtained in \cite{khu13}. In the present paper we deal with the nilpotency class assuming that $FH$ is a metacyclic Frobenius group. Examples in \cite{khu-ma-shu} show that such results cannot be obtained for non-metacyclic $FH$, even in the case $C_G(F)=1$. \begin{equation}gin{theorem} \label{t-g} Suppose that a finite group $G$ admits a Frobenius group of automorphisms $FH$ of coprime order with cyclic kernel $F$ and complement $H$ such that the fixed-point subgroup $C_G(H)$ of the complement is nilpotent of class $c$. Then $G$ has a nilpotent characteristic subgroup of index bounded in terms of $c$, $|C_G(F)|$, and $|F|$ whose nilpotency class is bounded in terms of $c$ and $|H|$ only. \varepsilon nd{theorem} In the proof, reduction to soluble groups is given by results based on the classification (\cite{hart} or \cite{wang-chen}). Then representation theory arguments are used to bound the index of the Fitting subgroup, thus reducing the proof to the case of a nilpotent group $G$. We state separately the corresponding Theorem~\ref{t-n}, since it gives a better bound for the index of the Fitting subgroup and does not require the Frobenius group $FH$ to be metacyclic. For nilpotent groups, a Lie ring method is used. A similar theorem on Lie rings is also proved, although its application to the case of nilpotent group in Theorem~\ref{t-g} is not straightforward and requires additional efforts. \begin{equation}gin{theorem} \label{t-l} Suppose that a finite Frobenius group $FH$ with cyclic kernel $F$ and complement $H$ acts by automorphisms on a Lie ring $L$ in whose ground ring $|F|$ is invertible. 
If the fixed-point subring $C_L(H)$ of the complement is nilpotent of class $c$ and the fixed-point subring of the kernel $C_L(F)$ is finite of order $m$, then $L$ has a nilpotent Lie subring whose index in the additive group of $L$ is bounded in terms of $c$, $m$, and $|F|$ and whose nilpotency class is bounded in terms of $c$ and $|H|$ only. \varepsilon nd{theorem} The functions bounding the index and nilpotency class in the theorems can be estimated from above explicitly, although we do not write out these estimates here. For Lie algebras we do not need the condition that $|F|$ be invertible. \begin{equation}gin{corollary} \label{c-l} Suppose that a finite Frobenius group $FH$ with cyclic kernel $F$ and complement $H$ acts by automorphisms on a Lie algebra $L$ in such a way that the fixed-point subalgebra $C_L(H)$ is nilpotent of class $c$ and the fixed-point subalgebra $C_L(F)$ has finite dimension $m$. Then $L$ has a nilpotent Lie subalgebra of finite codimension bounded in terms of $c$, $m$, and $|F|$ and whose nilpotency class is bounded in terms of $c$ and $|H|$ only.\varepsilon nd{corollary} Earlier in \cite{mak-khu13} we proved this under the additional condition that the characteristic of $L$ be coprime to $|H|$. We now discuss in more detail the context of the two parts of the proof of Theorem~\ref{t-g}, which are quite different, the first about bounding the index of the Fitting subgroup by methods of representation theory, and the second about bounding the nilpotency class of a subgroup of bounded index by Lie ring methods. Let $G$ be a finite (soluble) group $G$ admitting a soluble group of automorphisms $A$ of coprime order. Connections between the Fitting heights of $G$ and $C_G(A)$, depending also on the number $\alpha (A)$ of prime factors in $|A|$, were first established by Thompson \cite{th2} and later improved by various authors, including linear bounds by Kurzweil \cite{kurz} and Turull \cite{tu}. The Hartley--Isaacs theorem \cite{ha-is} (using Turull's result \cite{tu}) says that $|G:F_{2\alpha (A)+1}|$ is bounded in terms of $\alpha (A)$ and $|C_G(A)|$. These results can of course be applied both to the action of $F$ and of $H$ in our Theorem~\ref{t-g}. But our conclusion is in a sense much stronger, bounding the index of a nilpotent subgroup, rather than of a subgroup of Fitting height depending on $\alpha (F)$ or $\alpha (H)$. Of course, this is due to the stronger hypotheses of combined actions of the kernel and the complement, neither of which alone is sufficient. Now suppose that the group $G$ is already nilpotent and admits an (almost) fixed-point-free group of automorphisms $A$. Then further questions arise about bounding the nilpotency class or the derived length of $G$ (or of a subgroup of bounded index). Examples show that such bounds can only be achieved if $A$ is cyclic. By Higman's theorem \cite{hi} a (locally) nilpotent group with a fixed-point-free automorphism of prime order $p$ is nilpotent of $p$-bounded class. This immediately follows from the Higman--Kreknin--Kostrikin theorem \cite{hi,kr,kr-ko} saying that a Lie ring with a fixed-point-free automorphism of prime order $p$ is nilpotent of $p$-bounded class. The first author \cite{kh1, kh2} proved that if a periodic (locally) nilpotent group $G$ admits an automorphism $\varphi$ of prime order $p$ with $m=|C_G(\varphi )|$ fixed points, then $G$ has a nilpotent subgroup of $(m,p)$-bounded index and of $p$-bounded class. 
(The result was later extended by Medvedev \cite{me} to not necessarily periodic locally nilpotent groups.) This group result was also based on a similar theorem on Lie rings in \cite{kh2}, albeit also on additional arguments, as in general there is no good correspondence between subrings of the associated Lie ring and subgroups of a group. The proofs in \cite{kh2} were based on a method of graded centralizers; this method was later developed by the authors \cite{khmk4, mk05, khmk1, khmk2, khmk3, khmk5} in further studies of almost fixed-point-free automorphisms of Lie rings and nilpotent groups. It is this method that we also use in the proofs of both the Lie ring Theorem~\ref{t-l} and the nilpotent case of the group Theorem~\ref{t-g}. There is apparent similarity between the relation of the above-mentioned theorem on ``almost fixed-point-free'' automorphism of prime order to the ``fixed-point-free'' Higman--Kreknin--Kostrikin theorem and the relation of the results of the present paper on a Frobenius group of automorphisms with ``almost fixed-point-free'' kernel to the Khukhro--Makarenko--Shumyatsky theorem \cite{khu-ma-shu} on a Frobenius group of automorphisms with fixed-point-free kernel: in both cases a bound for the nilpotency class of the whole group is replaced by a bound for the nilpotency class of a subgroup of bounded index. In fact, this similarity goes deeper than just the form of the results: the method of proof of the Lie ring Theorem~\ref{t-l} and of the nilpotent case of the group Theorem~\ref{t-g} is a modification of the aforementioned method of graded centralizers used in \cite{kh2}. In both cases, the previous nilpotency results are used as certain combinatorial facts about Lie rings with finite cyclic grading, which give rise to certain transformations of commutators. The HKK-transformation in \cite{kh2} was based on the Higman--Kreknin--Kostrikin theorem, and in the present paper we use the KMS-transformation based on the Khukhro--Makarenko--Shumyatsky theorem \cite{khu-ma-shu}, combined with the machinery of the method of graded centralizers, with certain modifications. In the present paper the cyclic group of automorphisms $F$ is of arbitrary (composite) order. Recall that it is still an open problem to bound the derived length of a finite group with a fixed-point-free automorphism. So far this is known only in the above-mentioned case of automorphism of prime order (and of order 4 due to Kov\'acs). The problem is already reduced to nilpotent groups, and there is Kreknin's theorem \cite{kr} giving bounded solubility of a Lie ring with a fixed-point-free automorphism, but the existing Lie ring methods cannot be used for bounding the derived length in general. The authors \cite{khmk5} also proved almost solubility of Lie rings and algebras admitting an almost regular automorphism of finite order, with bounds for the derived length and codimension of a soluble subalgebra, but for groups even the fixed-point-free case remains open. The latter result can be applied to the Lie ring in Theorem~\ref{t-l}, but we need $(c,|H|)$-bounded nilpotency of a subring, rather than $|F|$-bounded solubility. It is the combined actions of the kernel and the complement that have to be used here, neither of which alone is sufficient. There remain several open problems about groups $G$ (and Lie rings) with a Frobenius group of automorphisms $FH$ (with kernel $F$ and complement $H$). 
For example, even in the case of a 2-Frobenius group $GFH$ (when $GF$ is also a Frobenius group), Mazurov's question 17.72(b) remains open: is the exponent of $G$ bounded in terms of $|H|$ and the exponent of $C_G(H)$? Other open questions in the case $C_G(F)=1$ include bounding the derived length of $G$ in terms of that of $C_G(H)$ and $|H|$. Such questions are already reduced to nilpotent groups, since it was proved in \cite{khu12ja} that then the Fitting height of $G$ is equal to the Fitting height of $C_G(H)$. One notable difference of the results of the present paper from the previous results in the case $C_G(F)=1$ is that we impose the additional condition that the order of $G$ and $FH$ be coprime. Although Hartley's theorem \cite{hart} would still provide reduction to soluble groups without the coprimeness condition, there are further difficulties that for now remain unresolved. Note, for example, that it is still unknown if the Fitting height of a finite soluble group admitting an automorphism of order $n$ with $m$ fixed points is bounded in terms of $m$ and $n$ (in the coprime case even a better result is a special case of the Hartley--Isaacs theorem \cite{ha-is}). \sigma ection{Almost nilpotency}\label{s-n} In this section we prove the ``almost nilpotency'' part of Theorem~\ref{t-g}. It makes sense to state a separate theorem, as the bound for the index of the Fitting subgroup depends only on $|C_G(F)|$ and $|F|$. (Dependence on the nilpotency class $c$ of $C_G(H)$ appears in addition in Theorem~\ref{t-g}, where a nilpotent subgroup of $(c,|H|)$-bounded class is required.) Moreover, in the following theorem, $FH$ is an arbitrary, not necessarily metacyclic, Frobenius group. \begin{equation}gin{theorem} \label{t-n} Suppose that a finite group $G$ admits a Frobenius group of automorphisms $FH$ of coprime order with kernel $F$ and complement $H$ such that the fixed-point subgroup $C_G(H)$ of the complement is nilpotent. Then the index of the Fitting subgroup $F(G)$ is bounded in terms of $|C_G(F)|$ and $|F|$. \varepsilon nd{theorem} By the result of Wang and Chen \cite{wang-chen} based on the classification (applied to the coprime action of $H$ on $G$), the group $G$ is soluble. Further proof in some parts resembles the proof of \cite[Theorem~2.7(c)]{khu-ma-shu} and \cite[Theorem~2.1]{khu12ja}, where the case of $C_G(F)=1$ was considered. But some arguments in \cite{khu-ma-shu} do not work because a certain section $Q$ here cannot be assumed to be abelian, and some arguments in \cite{khu12ja} cannot be applied as $F$ is no longer fixed-point-free everywhere. Instead, an argument in \cite{khu10al} is adapted to our situation (although the result of \cite{khu10al} was superseded by \cite{khu12ja}). We begin with some preliminaries. Suppose that a group $A$ acts by automorphisms on a finite group $G$ of coprime order: $(|A|,|G|)=1$. For every prime $p$, the group $G$ has an $A$-invariant Sylow $p$-subgroup. The fixed points of the induced action of $A$ on the quotient $G/N$ by an $A$-invariant normal subgroup are covered by fixed points of $A$ in $G$, that is, $ C_{G/N}(A)=C_G(A)N/N$. A similar property also holds for a finite group $A$ of linear transformations acting on a vector space over a field of characteristic coprime to $|A|$. These well-known properties of coprime action will be used without special references. The following lemma is a consequence of Clifford's theorem. 
\begin{equation}gin{lemma}[{\cite[Lemma~2.5]{khu-ma-shu}}]\label{l-free} If a Frobenius group $FH$ with kernel $F$ and complement $H$ acts by linear transformations on a vector space $V$ over a field $k$ in such a way that $C_V(F)=0$, then $V$ is a free $kH$-module. \varepsilon nd{lemma} It is also convenient to use the following theorem of Hartley and Isaacs~\cite{ha-is}. \begin{equation}gin{theorem}[{\cite[Theorem~B]{ha-is}}] For an arbitrary finite group $A$ there exists a number $ \delta (A)$ depending only on $A$ with the following property. Let $A$ act on~$G$, where $G$ is a finite soluble group such that $(|G|,|A|)=1$, and let $k$ be any field of characteristic not dividing~$|A|$. Let $V$ be any irreducible $kAG$-module and let $S$ be any $kA$-module that appears as a component of the restriction~$V_A$. Then $\dim _kV\leqslant \delta (A) m_S$, where $ m_S$ is the multiplicity of $S$ in~$V_A$. \varepsilon nd{theorem} The following is a key proposition in the proof of Theorem~\ref{t-n}. Note that here $C_G(H)$ is not assumed to be nilpotent, as a stronger assertion is needed for induction on $|H|$ to work. \begin{equation}gin{proposition}\label{p1} Let $G$ be a finite group admitting a Frobenius group of automorphisms $FH$ of coprime order with kernel $F$ and complement $H$. Suppose that $V=F(G)=O_p(G)$ is an elementary abelian $p$-group such that $C_V(F)=1$ and $G/V$ is a $q$-group. Then $F(C_G(H))\leq V$. \varepsilon nd{proposition} \begin{proof} Let $Q$ be an $FH$-invariant Sylow $q$-subgroup of $G$. Suppose the opposite and choose a nontrivial element $c\in Q\cap Z(C_G(H)))$. Note that $c$ centralizes $C_V(H)$ but acts nontrivially on $V$. Our aim is a contradiction arising from these assumptions. Consider $\langle c^{HF}\rangle=\langle c^F\rangle$, the minimal $FH$-inva\-ri\-ant subgroup containing~$c$. We can assume that \begin{equation}gin{equation}\label{zam-c} Q=\langle c^F\rangle. \varepsilon nd{equation} We regard $V$ as an ${\Bbb F}_pQFH$-module. At the same time we reserve the right to regard $V$ as a normal subgroup of the semidirect product $VQFH$. For example, we may use the commutator notation: the subgroup $[V,Q]=\langle [v,g]\mid v\in V,\;g\in Q\rangle$ coincides with the subspace spanned by $\{-v+\nobreak vg\mid v\in V,\;g\in Q\}$. We also keep using the centralizer notation for fixed points, like $C_V(H)=\{v\in V\mid vh=v\text{ for all } h\in H\}$, and for kernels, like $C_Q(Y)=\{x\in Q\mid yx=y\text{ for all }y\in Y\}$ for a subset $Y\sigma ubseteq V$. We now extend the ground field to a finite field $k$ that is a splitting field for $QFH$ and obtain a $kQFH$-module $\omega idetilde V=V\otimes _{{\Bbb F}_p}k$. Many of the above-mentioned properties of $V$ are inherited by ${\omega idetilde{V}}$: \vskip1ex \noindent (V1) \ $\omega idetilde{V}$ is a faithful $kQ$-module; \noindent (V2) \ $c$ acts trivially on $C_{\omega idetilde{V}}(H)$; \noindent (V3) \ $C_{\omega idetilde{V}}(F)=0$. \vskip1ex Our aim is to show that $c$ centralizes $\omega idetilde V$, which will contradict (V1). Consider an unrefinable series of $kQFH$-sub\-modules \begin{equation}gin{equation}\label{riad2} \omega idetilde{V}=V_1 >V_2>\dots >V_n> V_{n+1}=0. \varepsilon nd{equation} Let $W$ be one of the factors of this series; it is a nontrivial irreducible $kQFH$-module. If $c$ acts trivially on every such $W$, then $c$ acts trivially on $\omega idetilde V$, as the order of $c$ is coprime to the characteristic $p$ of the field $k$ --- this contradicts (V1). 
Therefore in what follows we assume that $c$ acts nontrivially on $W$. The following properties hold for $W$: \vskip1ex \noindent (W1) \ $c$ acts nontrivially on $W$; \noindent(W2) \ $c$ acts trivially on $C_{W}(H)$; \noindent(W3) \ $C_{W}(F)=0$; \noindent(W4) \ $W$ is a free $kH$-module. \vskip1ex Indeed, property (W1) has already been mentioned. Property (W2) follows from (V2) since $C_{\omega idetilde{V}}(H)$ covers $C_{W}(H)$. Property (W3) follows from (V3) since $C_{\omega idetilde{V}}(F)$ covers $C_{W}(F)$. Property (W4) follows from (W3) by Lemma~\ref{l-free}. We shall need the following elementary remark. \begin{equation}gin{lemma} \label{l-faith} Let $FH$ be a Frobenius group with kernel $F$ and complement~$H$. In any action of $FH$ with nontrivial action of $F$ the complement $H$ acts faithfully.\qed \varepsilon nd{lemma} The following lemma will be used repeatedly in the proof. \begin{equation}gin{lemma} \label{c-triv-free} Suppose that $M=\bigoplus _{h\in H}M_h$ is a free $kH$-sub\-module of $W$, that is, the subspaces $M_h$ form a regular $H$-orbit: $M_{h_1}h_2=M_{h_1h_2}$ for $h_1,h_2\in H$. If the element $c$ leaves invariant each of the $M_h$, then $c$ acts trivially on $M$. \varepsilon nd{lemma} \begin{equation}gin{proof} The fixed points of $H$ in $M$ are the diagonal elements $\sigma um _{h\in H}mh$ for any $m$ in $M_1$, where $mh\in M_h$. Since $c$ acts trivially on every such sum by property~(W2) and leaves invariant every direct summand $M_h$, it must act trivially on each $mh$. Clearly, the elements $mh$ run over all elements in all the summands $M_h=M_1h$, $h\in H$. \varepsilon nd{proof} We now apply Clifford's theorem and consider the decomposition $ W=W_1\oplus \dots \oplus W_t $ of $W$ into the direct sum of the Wedderburn components $W_i$ with respect to~$Q$. We consider the transitive action of $FH$ on the set $\Omega=\{W_1,\dots ,W_t\}$. \begin{equation}gin{lemma} \label{c-triv-reg} The element $c$ acts trivially on the sum of components in any regular $H$-orbit in $\Omega$. \varepsilon nd{lemma} \begin{equation}gin{proof} This follows from Lemma~\ref{c-triv-free}, because the sum of components in a regular $H$-orbit in $\Omega$ is obviously a free $kH$-sub\-module. \varepsilon nd{proof} Note that $H$ transitively permutes the $F$-orbits in $\Omega$. Let $\Omega _1=W_1^F$ be one of these $F$-orbits and let $H_1$ be the stabilizer of $\Omega _1$ in $H$ in the action of $H$ on $F$-orbits. If $H_1=1$, then all the $H$-orbits in $\Omega$ are regular, and then $c$ acts trivially on $W$ by Lemma~\ref{c-triv-reg}. This contradicts our assumption (W1) that $c$ acts nontrivially on $W$. Thus, we assume that $H_1\ne 1$. \begin{equation}gin{lemma} \label{orb-h1} The subgroup $H_1$ has exactly one non-regular orbit in~$\Omega _1$ and this orbit is a fixed point. \varepsilon nd{lemma} \begin{equation}gin{proof} Let $\overline F$ be the image of $F$ in its action on $\Omega _1$. If $\overline F=1$, then $\Omega _1=\{W_1\}$ consists of a single Wedderburn component, and the lemma holds. Thus, we can assume that $\overline F\ne 1$, and $\overline{F}H_1$ is a Frobenius group with complement~$H_1$. By Lemma~\ref{l-faith} the subgroup $H_1$ acts faithfully on $\Omega_1$ and we use the same symbol for it in regard of its action on $\Omega_1$. Let $S$ be the stabilizer of the point $W_1 \in \Omega _1$ in $\overline{F}H_1$. 
Since $|\Omega _1|=|\overline F:\overline F\cap S|=|\overline F H_1:S|$ and the orders $|\overline F|$ and $|H_1|$ are coprime, $S$ contains a conjugate of $H_1$; without loss of generality (changing $W_1$ and therefore $S$ if necessary) we assume that $H_1\leq S$. We already have a fixed point $W_1$ for $H_1$. It follows that $H_1$ acts on $\Omega _1$ in the same way as $H_1$ acts by conjugation on the cosets of the stabilizer of $W_1$ in $\overline F$. But in a Frobenius group no non-trivial element of a complement can fix a non-trivial coset of a subgroup of the kernel. Otherwise there would exist such an element of prime order and, since this element is fixed-point-free on the kernel, its order would divide the order of that coset and therefore the order of the kernel, a contradiction. \varepsilon nd{proof} We now consider the $H$-orbits in $\Omega$. Clearly, the $H$-orbits of elements of regular $H_1$-orbits in $\Omega_1$ are regular $H$-orbits. Thus, by Lemma~\ref{orb-h1} there is exactly one non-regular $H$-orbit in $\Omega$ --- the $H$-orbit of the fixed point $W_1$ of $H_1$ in $\Omega _1$. Therefore by Lemma~\ref{c-triv-reg} we obtain the following. \begin{equation}gin{lemma} \label{c-triv-out} The element $c$ acts trivially on all the Wedderburn components $W_i$ that are not contained in the $H$-orbit of~$W_1$.\qed \varepsilon nd{lemma} Therefore, since $c$ acts nontrivially on $W$, it must be nontrivial on the sum over the $H$-orbit of $W_1$. Moreover, since $c$ commutes with $H$, the element $c$ acts in the same way --- and therefore nontrivially --- on all the components in the $H$-orbit of $W_1$. In particular, $c$ is nontrivial on $W_1$. We employ induction on $|H|$. In the basis of this induction, $|H|$ is a prime, and either $H_1=1$, which gives a contradiction as described above, or $H_1=H$, which is the case dealt with below. First suppose that $H_1\ne H$. Then we consider $U=\bigoplus _{f\in F}W_1f$, the sum of components in $\Omega _1$, which is a $kQFH_1$-module. \begin{equation}gin{lemma} \label{c-triv-cu} The element $c$ acts trivially on $C_U(H_1)$. \varepsilon nd{lemma} \begin{equation}gin{proof} Indeed, $c$ acts trivially on the sum over any regular $H_1$-orbit, because such an $H_1$-orbit is a part of a regular $H$-orbit, on the sum over which $c$ acts trivially by Lemma~\ref{c-triv-reg}. By Lemma~\ref{orb-h1} it remains to show that $c$ is trivial on $C_{W_1}(H_1)$. For $x\in C_{W_1}(H_1)$ and some right transversal $\{t_i\mid 1\leq i\leq |H:H_1|\}$ of $H_1$ in $H$ we have $\sigma um _{i} xt_i\in C(H)$. Indeed, for any $h\in H$ we have $\sigma um _{i} xt_ih=\sigma um _{i} xh_{1i}t_{j(i)}$ for some $h_{1i}\in H_1$ and some permutation of the same transversal $\{t_{j(i)}\mid 1\leq i\leq |H:H_1|\}$, and the latter sum is equal to $\sigma um _{i} xt_{j(i)}=\sigma um _{i} xt_{i}$ as $xh_{1i}=x$ for all $i$. Since $c$ acts trivially on $\sigma um _{i} xt_{i}\in C(H)$ by property~(W2), it must also act trivially on each summand, as they are in different $c$-inva\-ri\-ant components; in particular, $xc=x$. \varepsilon nd{proof} In order to use induction on $|H|$, we consider the additive group $U$ on which the group $ Q F H_1$ acts as a group of automorphisms. The action of $Q$ and $F$ may not be faithful, but the action of $H_1$ is faithful by Lemma~\ref{l-faith}, because $FH_1$ is a Frobenius group and the action of $F$ is nontrivial since $C_U(F)=0$ by property~(W3). 
Switching to multiplicative notation also for the additive group of $U$, we now have the semidirect product $G_1=UQ$ admitting the Frobenius group of automorphisms $(F/C_F(G_1))H_1$ such that $C_{U}(F/C_F(G_1))=1$. Consider $\bar G_1=G_1/O_q(G_1)$, keeping the same notation for $U$, $F$, $H_1$. Note that $\bar c\ne 1$, that is, $c\not\in O_q(G_1)$, as $c$ is nontrivial on $U$. Then the hypotheses of the proposition hold for $\bar G_1$ and $(F/C_F(G_1))H_1$. We claim that $\bar c\in F(C_{\bar G_1}(H_1))$; indeed, $C_{\bar G_1}(H_1)=C_{U}(H_1)C_{Q}(H_1)$ is a $\{p,q\}$-group, in which $C_{U}(H_1)$ is a normal $p$-sub\-group centralized by the $q$-ele\-ment $\bar c$ by Lemma~\ref{c-triv-cu}. If $H_1\ne H$, then by induction on $|H|$ we must have $\bar c\in U$, a contradiction. Thus, it remains to consider the case where $H_1=H$, that is, $W_1$ is $H$-inva\-ri\-ant, which is assumed in what follows. We now focus on the action on $W_1$, using bars to denote the images of $Q$ and its elements in their action on~$W_1$. We can regard $H$ as acting by automorphisms on $\overline Q$. Let $\zeta _2(\overline Q)$ be the second centre of $\overline Q$. We obviously have $ [[H,\, \overline c],\, \zeta _2(\overline Q)]=[1,\, \zeta _2(\overline Q)]=1. $ We also have $ [[\overline c,\, \zeta _2(\overline Q)],\, H]\leq [Z(\overline Q),\, H]=1, $ since $Z(\overline Q)$ is represented on the homogeneous $k\overline Q$-module $W_1$ by scalar linear transformations. Therefore by the Three Subgroup Lemma, \begin{equation}gin{equation}\label{z2h} [[\zeta _2(\overline Q),\, H],\, \overline c]=1. \varepsilon nd{equation} By the choice of $c\in Z(C_G(H))$ we also have \begin{equation}gin{equation}\label{cz2} [C_{\zeta _2(\overline Q)}(H),\, \overline c]=1, \varepsilon nd{equation} because $C_Q(H)$ covers $C_{\overline Q}(H)$. Since the action of $H$ on $Q$ is coprime by hypothesis, we have $ \zeta _2(\overline Q)=[\zeta _2(\overline Q), H]C_{\zeta _2(\overline Q)}(H). $ Therefore equalities \varepsilon qref{z2h} and \varepsilon qref{cz2} together imply the equality \begin{equation}gin{equation}\label{z2} [\overline c ,\,\zeta _2(\overline Q)]=1. \varepsilon nd{equation} Now let $F_1$ denote the stabilizer of $W_1$ in $F$, so that the stabilizer of $W_1$ in $FH$ is equal to $F_1H$. Then for any element $f\in F\sigma etminus F_1$ the component $W_1f$ is outside the $H$-orbit of $W_1$, which is equal to $\{ W_1\}$ in the case under consideration. As mentioned above, $c$ acts trivially on all the Wedderburn components outside the $H$-orbit of $W_1$. Thus, $c$ acts trivially on $W_1f$, which is equivalent to $c^{f^{-1}}$ acting trivially on $W_1$. In other words, $\overline{c^{x}}=1$ in the action on $W_1$ for any $x\in F\sigma etminus F_1$. (Note that it does not matter that $W_1$ is not $F$-invariant: for any $g\in F$ the element $c^g$ belongs to $Q$, which acts on $W_1$.) Since $Q=\langle c^F\rangle$ by \varepsilon qref{zam-c}, we obtain that $\overline Q=\langle \overline c^{F_1}\rangle$. In view of the $F_1$-invariance of the section $\overline Q$ we can apply conjugation by any $g\in F_1$ to equation \varepsilon qref{z2} to obtain that $$ [\overline c^{g},\, \zeta _2(\overline Q)^{g}]=[\overline c^{g},\, \zeta _2(\overline Q)]=1. $$ As a result, $$ [\overline Q,\, \zeta _2(\overline Q)]=[\langle \overline c^{F_1}\rangle,\, \zeta _2(\overline Q)]=1. $$ This means that $\overline Q$ is abelian. 
In the case of $\overline Q$ abelian we arrive at a contradiction similarly to how this was done in \cite[Theorem~2.7(c)]{khu-ma-shu}. The sum of the $W_i$ over all regular $H$-orbits is obviously a free $kH$-module. Since the whole $W$ is also a free $kH$-module by property~(W4), the component $W_1$ must also be a free $kH$-module, as a complement of the sum over all regular $H$-orbits. Since $\overline Q$ is abelian, $c$ acts on $W_1$ by a scalar linear transformation. By Lemma~\ref{c-triv-free} (or simply because $c$ has fixed points in $W_1$, as $C_{W_1}(H)\ne 0$) it follows that, in fact, $c$ must act trivially on $W_1$, a contradiction with property~(W1). Proposition~\ref{p1} is proved. \varepsilon p \begin{equation}gin{proposition}\label{p2} Suppose that a soluble finite group $G$ admits a Frobenius group of automorphisms $FH$ of coprime order with kernel $F$ and complement $H$ such that $V=F(G)=O_p(G)$ is an elementary abelian $p$-group and $C_G(H)$ is nilpotent. If $C_V(F)=1$, then $G=VC_G(F)$. \varepsilon nd{proposition} \begin{proof} The quotient $\bar G =G/V$ acts faithfully on $V$. We claim that $F$ acts trivially on $\bar G$. Suppose not; then $F$ acts non-trivially on the Fitting subgroup $F(\bar G)$. Indeed, otherwise $[\bar G ,F]$ acts trivially on $F(\bar G)$, which contains its centralizer, so then $[\bar G ,F]\leq F(\bar G)$ and $F$ acts trivially on $\bar G$ since the action is coprime. Thus, $F$ acts non-trivially on some Sylow $q$-subgroup $Q$ of $F(\bar G)$. But then there is a nontrivial fixed point of $H$ in $Q$ by Lemma~\ref{l-free}. This would contradict Proposition~\ref{p1}, since here $C_G(H)$ is nilpotent. \varepsilon p \begin{equation}gin{proof}[Proof of Theorem \ref{t-n}] Recall that $G$ is a soluble finite group admitting a Frobenius group of automorphisms $FH$ of coprime order with kernel $F$ and complement $H$ such that $|C_G(F)|=m$ and $C_G(H)$ is nilpotent. We claim that $|G/F(G)|$ is $(m,|F|)$-bounded. We can assume that $G=[G,F]$. For every $p\nmid |C_G(F)|$ we have $G=O_{p',p}(G)$ by Proposition~\ref{p2} applied to the quotient of $G$ by the pre-image of the Frattini subgroup of $O_{p',p}(G)/O_{p'}(G)$ with $V$ equal to the Frattini quotient of $O_{p',p}(G)/O_{p'}(G)$. It remains to prove that $|G/O_{p',p}(G)|$ is $(m,|F|)$-bounded for every $p$, since then the index of $F(G)=\bigcap O_{p',p}(G)$ will be at most the product of these bounded indices over the bounded set of primes $p$ dividing $ |C_G(F)|$. The quotient $\bar G =G/O_{p',p}(G)$ acts faithfully on the Frattini quotient $X$ of $O_{p',p}(G)/O_{p'}(G)$. It is sufficient to bound the order of the Fitting subgroup $F(\bar G)$. Thus, we can assume that $\bar G=F(\bar G)$. Therefore $\bar G$ is a $p'$-group, so that the order of $\bar G FH$ is coprime to $p$, the characteristic of the ground field of $X$ regarded as a vector space over ${\Bbb F}_p$. Let $X=Y_1\oplus \cdots \oplus Y_s\oplus Z_1\oplus\dots\oplus Z_t$ be the decomposition of $X$ into the direct sum of irreducible ${\Bbb F}_p\bar G F$-submodules, where $C_{Y_i}(F)\ne 0$ for all $i$ and $C_{Z_j}(F)= 0$ for all $j$. (Here either of $s$ or $t$ can be zero.) By the Hartley--Isaacs theorem \cite[Theorem~B]{ha-is} there is an $|F|$-bounded number $\alpha (F)$ such that $\dim Y_i\leq \alpha (F)\dim C_{Y_i}(F)$ for every $i$; here $\dim C_{Y_i}(F)$ is the multiplicity of the trivial ${\Bbb F}_pF$-submodule, which does appear in $Y_i$ by definition. Therefore the order of $Y_1\oplus \cdots \oplus Y_s$ is $(m,|F|)$-bounded. 
Let $Y= Y_1\oplus \cdots \oplus Y_s$. We claim that $Y$ is $H$-invariant. Indeed, $C_{Y_i}(F)\ne 0$ for every $i$. Hence, $C_{Y_ih}(F)=C_{Y_ih}(F^h)=C_{Y_i}(F)h\ne 0$ for every $h\in H$. Since $Y_ih$ is also an irreducible ${\Bbb F}_p\bar G F$-submodule, we must have either $Y_ih\sigma ubseteq Y$ or $Y_ih\cap Y=0$. But the second possibility is impossible, since $C_{Y_ih}(F)\ne 0$ and $C_X(F)\sigma ubseteq Y$ by construction. The subgroup $C_{G}(Y)$ is normal and $FH$-invariant. Since $|Y|$ is also $(m,|F|)$-bounded, $|G/C_G(Y)|$ is also $(m,|F|)$-bounded. By Maschke's theorem, $X=Y\oplus Z$, where $Z$ is normal and $FH$-invariant. Obviously, $C_Z(F)=0$. We now revert to the multiplicative notation and regard $X,Y,Z$ as sections of the group $G$. The group $C_G(Y)/X$ acts on $Z$. Since $C_G(Y)/X$ is faithful on $X$ and the action is coprime, it is also faithful on $Z$, so that $Z=O_p(G_1)$, where $G_1$ is the semidirect product $G_1=Z\rtimes C_G(Y)/X$. The group $G_1$ with the induced action of $FH$ satisfies the hypotheses of Proposition~\ref{p2} (with $V=Z$). Indeed, $F$ acts fixed-point-freely on $Z=O_p(G_1)$, and $C_{G_1}(H)$ is covered by the images of subgroups of $C_G(H)$ and therefore is also nilpotent. By Proposition~\ref{p2} we obtain that $F$ acts trivially on $C_G(Y)/X$, so that $|C_G(Y)/X| \leq |C_G(F)|=m$. As a result, $|G/O_{p',p}(G)|=|G/C_G(Y)|\cdot |C_G(Y)/X|$ is $(m,|F|)$-bounded, as required. \varepsilon p \sigma ection{Lie ring theorem}\label{s-l} In this section a finite Frobenius group $FH$ with cyclic kernel $F$ of order $n$ and complement $H$ of order $q$ acts by automorphisms on a Lie ring $L$ in whose ground ring $n$ is invertible. If the fixed-point subring $C_L(H)$ of the complement is nilpotent of class $c$ and $C_L(F)=0$, that is, the kernel $F$ acts without non-trivial fixed points on $L$, then by the Makarenko--Khukhro--Shumyatsky theorem~\cite{khu-ma-shu} the Lie ring $L$ is nilpotent of $(c,q)$-bounded class. In Theorem~\ref{t-l} we have $|C_L(F)|=m$ and need to prove that $L$ contains a nilpotent subring of $(m,n,c)$-bounded index (in the additive group) and of $(c,q)$-bounded nilpotency class. The proof of Theorem~\ref{t-l} uses the method of graded centralizers, which was developed in the authors' papers on groups and Lie rings with almost regular automorphisms \cite{kh2,khmk1,khmk3, khmk4,khmk5}; see also Ch.~4 in \cite{kh4}. This method consists in the following. In the proof of Theorem~\ref{t-l} we can assume that the ground ring contains a primitive $n$th root of unity~$\omega$. Let $F=\langle \varphi\rangle$. Since $n$ is invertible in the ground ring, then $L$ decomposes into the direct sum of the ``eigenspaces'' $L_j=\{ a\in L\mid a^{\varphi}=\omega ^ja\}$, which are also components of a $({\Bbb Z} /n{\Bbb Z})$-grading: $[L_s,\, L_t]\sigma ubseteq L_{s+t},$ where $s+t$ is calculated modulo $n$. In each of the $L_i$, $i\ne 0$, certain additive subgroups $L_i(k)$ of bounded index --- ``graded centralizers'' --- of increasing levels $k$ are successively constructed, and simultaneously certain elements (representatives) $x_i(k)$ are fixed, all this up to a certain $(c,q)$-bounded level $T$. Elements of $L_j(k)$ have a centralizer property with respect to the fixed elements of lower levels: if a commutator (of bounded weight) that involves exactly one element $y_j(k)\in L_j(k)$ of level $k$ and some fixed elements $x_i(s)\in L_i(s)$ of lower levels $s<k$ belongs to $L_0$, then this commutator is equal to~$0$. 
The sought-for subring $Z$ is generated by all the $L_i(T)$, $i\ne 0$, of the highest level $T$. The proof of the fact that the subring $Z$ is nilpotent of bounded class is based on a combinatorial fact following from the Makarenko--Khukhro--Shumyatsky theorem~\cite{khu-ma-shu, khu-ma-shu-DAN} for the case $C_L(F)=0$ (similarly to how combinatorial forms of the Higman--Kreknin--Kostrikin theorems were used in our papers \cite{kh2,mk05} on almost fixed-point-free automorphisms). The question of nilpotency is reduced to consideration of commutators of a special form, to which the aforementioned centralizer property is applied.

First we recall some definitions and notions. Products in a Lie ring are called ``commutators''. The Lie subring generated by a subset~$S$ is denoted by $\langle S\rangle $, and the ideal by ${}_{{\rm id}}\!\left< S \right>$. Terms of the lower central series of a Lie ring $L$ are defined by induction: $\gamma_1(L)=L$; $\gamma_{i+1}(L)=[\gamma_i(L),L]$. By definition a Lie ring $L$ is nilpotent of class~$h$ if $\gamma_{h+1}(L)=0$. A simple commutator $[a_1,a_2,\dots ,a_s]$ of weight (length) $s$ is by definition the commutator $[\dots [[a_1,a_2],a_3],\dots ,a_s]$. By the Jacobi identity $[a,[b,c]]=[a,b,c]-[a,c,b]$ any (complex, repeated) commutator in some elements in any Lie ring can be expressed as a linear combination of simple commutators of the same weight in the same elements. Using also the anticommutativity $[a,b]=-[b,a]$, one can make sure that in this linear combination all simple commutators begin with some pre-assigned element occurring in the original commutator. In particular, if $ L=\langle S\rangle $, then the additive group $L$ is generated by simple commutators in elements of~$S$.

Let $A$ be an additively written abelian group. A Lie ring $L$ is \textit{$A$-graded} if
$$L=\bigoplus_{a\in A}L_a\qquad \text{ and }\qquad[L_a,L_b]\subseteq L_{a+b},\quad a,b\in A,$$
where the grading components $L_a$ are additive subgroups of $L$. Elements of the $L_a$ are called \textit{homogeneous} (with respect to this grading), and commutators in homogeneous elements \textit{homogeneous commutators}. An additive subgroup $H$ of $L$ is said to be \textit{homogeneous} if $H=\bigoplus_a (H\cap L_a)$; then we set $H_a=H\cap L_a$. Obviously, any subring or ideal generated by homogeneous additive subgroups is homogeneous. A homogeneous subring and the quotient ring by a homogeneous ideal can be regarded as $A$-graded rings with induced grading.

\begin{proof}[Proof of Theorem~\ref{t-l}]
Recall that $L$ is a Lie ring admitting a Frobenius group of automorphisms $FH$ with cyclic kernel $F=\langle \varphi\rangle$ of order $n$ invertible in the ground ring of $L$ and with complement $H$ of order $q$ such that the subring $C_L(H)$ of fixed points of the complement is nilpotent of class $c$ and the subring of fixed points $C_L(F)$ of the kernel has order $m=|C_L(F)|$. Let $\omega$ be a primitive $n$th root of unity. We extend the ground ring by $\omega$ and denote by $\widetilde L$ the ring $L\otimes _{{\Bbb Z} }{\Bbb Z} [\omega ]$. The group $FH$ naturally acts on $\widetilde L$; then $C_{\widetilde L}(H)$ is nilpotent of class~$c$, and $|C_{\widetilde L}(F)|\leq |C_L(\varphi )|^n=m^n$. If $\widetilde L$ has a nilpotent subring of $(m,n,c)$-bounded index in the additive group and of $(c,q)$-bounded nilpotency class, then the same holds for $L$.
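It may be worth recording the routine justification of this reduction; the only thing used is the identification of $L$ with $L\otimes 1\subseteq \widetilde L$. If $\widetilde N\leq \widetilde L$ is a nilpotent subring of class at most $e$ whose index in the additive group of $\widetilde L$ is at most $d$, then $\widetilde N\cap (L\otimes 1)$ is a subring of $L$ of class at most $e$, and its index in the additive group of $L$ is at most $d$, because $L/(\widetilde N\cap L)$ embeds into $\widetilde L/\widetilde N$ as additive groups.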
Therefore we can replace $L$ by $\widetilde L$, so that henceforth we assume that the ground ring contains $\omega $.

\begin{definition}
We define $\varphi$-\textit{components} $L_k$ for $k=0,\,1,\,\ldots ,n-1$ as the ``eigensubspaces''
$$L_k=\left\{ a\in L\mid a^{\varphi}=\omega ^{k}a\right\} .$$
\end{definition}

Since $n$ is invertible in the ground ring, we have $L= L_0 \oplus L_1\oplus \dots \oplus L_{n-1}$ (see, for example,~\cite[Ch.~10]{hpbl}). This decomposition is a $({\Bbb Z}/n{\Bbb Z})$-grading due to the obvious inclusions $[L_s,\, L_t]\subseteq L_{s+t\,({\rm mod}\,n)}$, so that the $\varphi$-components are the grading components.

\begin{definition}
We refer to elements, commutators, additive subgroups that are homogeneous with respect to this grading into $\varphi$-components as being \textit{$\varphi$-homogeneous}.
\end{definition}

\begin{Index Convention}
Henceforth a small letter with index $i$ denotes an element of the $\varphi$-component $L_i$, so that the index only indicates the $\varphi$-component to which this element belongs: $x_i\in L_i$. To lighten the notation we will not use numbering indices for elements in $L_j$, so that different elements can be denoted by the same symbol when it only matters to which $\varphi$-component these elements belong. For example, $x_1$ and $x_1$ can be different elements of $L_1$, so that $[x_1,\, x_1]$ can be a nonzero element of $L_2$. These indices will be considered modulo~$n$; for example, $a_{-i}\in L_{-i}=L_{n-i}$.
\end{Index Convention}

Note that under the Index Convention a $\varphi$-homogeneous commutator belongs to the $\varphi$-component $L_s$, where $s$ is the sum modulo $n$ of the indices of all the elements occurring in this commutator.

Since the kernel $F$ of the Frobenius group $FH$ is cyclic, the complement $H$ is also cyclic. Let $H= \langle h \rangle$ and $\varphi^{h^{-1}} = \varphi^{r}$ for some $1\leq r \leq n-1$. Then $r$ is a primitive $q$th root of unity in the ring ${\Bbb Z}/n {\Bbb Z}$. The group $H$ permutes the $\varphi$-components $L_i$ as follows: $L_i^h = L_{ri}$ for all $i\in \Bbb Z/n\Bbb Z$. Indeed, if $x_i\in L_i$, then $(x_i^{h})^{\varphi} = x_i^{h\varphi h^{-1}h} = (x_i^{\varphi^{r}})^h =\omega^{ir}x_i^h$.

\begin{notation}
In what follows, for a given $u_k\in L_k$ we denote the element $u_k^{h^i}$ by $u_{r^ik}$ under the Index Convention, since $L_k^{h^i} = L_{r^ik}$. We denote the $H$-orbit of an element $x_i$ by $O(x_{i})=\{x_{i},\,\, x_{ri},\dots, x_{r^{q-1}i}\}$.
\end{notation}

\paragraph{Combinatorial theorem.} We now prove a combinatorial consequence of the Makarenko--Khukhro--Shumyatsky theorem in~\cite{khu-ma-shu}. Recall that we use the centralizer notation $C_L(A)$ for the fixed-point subring of a Lie ring $L$ admitting a group of automorphisms~$A$.

\begin{theorem}[{\cite[Theorem~5.6(iii)]{khu-ma-shu}}]\label{kh-ma-shu10-1}
Let $FH$ be a Frobenius group with cyclic kernel $F$ of order $n$ and complement $H$ of order $q$. Suppose that $FH$ acts by automorphisms on a Lie ring $M$ in the ground ring of which $n$ is invertible. If $C_M(F)=0$ and $C_M(H)$ is nilpotent of class $c$, then for some $(c,q)$-bounded number $f=f(c,q)$ the Lie ring $M$ is nilpotent of class at most $f$.
\end{theorem}

We need the following consequence for our Lie ring $L$ under the hypotheses of Theorem~\ref{t-l}, the ground ring of which we already assume to contain the $n$th primitive root of unity, so that $L=L_0\oplus L_1\oplus\dots\oplus L_{n-1}$.

\begin{proposition}\label{combinatorial}
Let $f(c,q)$ be the function in Theorem~\ref{kh-ma-shu10-1} and let $T=f(c,q)+1$. Every simple $\varphi$-homogeneous commutator $[x_{i_1}, x_{i_2},\ldots, x_{i_{T}}]$ of weight $T$ with non-zero indices can be represented as a linear combination of $\varphi$-homogeneous simple commutators of the same weight $T$ in elements of the union of $H$-orbits $\bigcup_{s=1}^T O(x_{i_s})$ each of which contains an initial segment with zero sum of indices modulo $n$ and includes exactly the same number of elements of each $H$-orbit $O(x_{i_s})$ as the original commutator.
\end{proposition}

\begin{proof}
The idea of the proof is application of Theorem~\ref{kh-ma-shu10-1} to a free Lie ring with operators $FH$. Recall that we can assume that the ground ring $R$ of our Lie ring $L$ contains a primitive root of unity $\omega $ and a multiplicative inverse of $n$. Given arbitrary (not necessarily distinct) non-zero elements $i_1, i_2,\ldots, i_T\in {\Bbb Z} /n{\Bbb Z}$, we consider a free Lie ring $K$ over $R$ with $qT$ free generators in the set
$$Y=\{\underbrace{y_{i_1}, y_{ri_1}, \ldots, y_{r^{q-1}i_1}}_{O(y_{i_1})},\,\,\, \underbrace{y_{i_2}, y_{ri_2}, \ldots, y_{r^{q-1}i_2}}_{O(y_{i_2})},\ldots, \underbrace{y_{i_{T}},y_{ri_T}, \ldots, y_{r^{q-1}i_T}}_{O(y_{i_T})}\},$$
where indices are formally assigned and regarded modulo $n$ and the subsets $O(y_{i_s})=\{y_{i_s}, y_{ri_s}, \ldots, y_{r^{q-1}i_s}\}$ are disjoint. Here, as in the Index Convention, we do not use numbering indices, that is, all elements $y_{r^ki_j}$ are by definition different free generators, even if indices coincide. (The Index Convention will come into force in a moment.) For every $i=0,\,1,\,\dots ,n-1$ we define the additive subgroup $K_i$ generated by all commutators in the generators $y_{j_s}$ in which the sum of indices of all entries is equal to $i$ modulo $n$. Then $K=K_0\oplus K_1\oplus \cdots \oplus K_{n-1}$. It is also obvious that $[K_i,K_j]\subseteq K_{i+j\,({\rm mod}\, n)}$; therefore this is a $({\Bbb Z} /n{\Bbb Z})$-grading. The Lie ring $K$ also has the natural ${\Bbb N}$-grading $K=G_1(Y)\oplus G_2(Y)\oplus \cdots $ with respect to the generating set $Y$, where $G_i(Y)$ is the additive subgroup generated by all commutators of weight $i$ in elements of $Y$. We define an action of the Frobenius group $FH$ on $K$ by setting $k_i^{\varphi}=\omega^i k_i$ for $k_i\in K_i$ and extending this action to $K$ by linearity. Since $K$ is the direct sum of the additive subgroups $K_i$ and $n$ is invertible in the ground ring, we have $K_i=\{k\in K \mid k^{\varphi}=\omega^i k \}$. An action of $H$ is defined on the generating set $Y$ as a cyclic permutation of elements in each subset $O(y_{i_s})$ by the rule $(y_{r^ki_s})^h=y_{r^{k+1}i_s}$ for $k=0,\ldots, q-2$ and $(y_{r^{q-1}i_s})^h=y_{i_s}$. Then $O(y_{i_s})$ becomes the $H$-orbit of an element $y_{i_s}$. Clearly, $H$ permutes the components $K_i$ by the rule $K_i^h = K_{ri}$ for all $i\in \Bbb Z/n\Bbb Z$. Let $J={}_{{\rm id}}\!\left< K_0 \right>$ be the ideal generated by the $\varphi$-component $K_0$.
Clearly, the ideal $J$ consists of linear combinations of commutators in elements of $Y$ each of which contains a subcommutator with zero sum of indices modulo $n$. The ideal $J$ is generated by homogeneous elements with respect to the gradings $K=\bigoplus_i G_i(Y)$ and $K=\bigoplus_{i=0}^{n-1} K_i$ and therefore is homogeneous with respect to both gradings. Note also that the ideal $J$ is obviously $FH$-invariant. Let $I={}_{{\rm id}}\!\left< \gamma_{c+1}(C_K(H)) \right>^F$ be the smallest $F$-invariant ideal containing the subring $\gamma_{c+1}(C_K(H))$. The ideal $I$ is obviously homogeneous with respect to the grading $K=\bigoplus_i G_i(Y)$ and is $FH$-invariant. Being $F$-invariant, the ideal $I$ is also homogeneous with respect to the grading $K=\bigoplus_{i=0}^{n-1} K_i$. Indeed, for $z\in I$, let $z=k_0+k_1+\cdots+k_{n-1}$, where $k_i\in K_i$. On the other hand, for every $i=0,\ldots, n-1$ we have $z_i:= \sum_{s=0}^{n-1} \omega^{-is}z^{\varphi^s}\in K_i$ and $nz=\sum_{i=0}^{n-1}z_i$. Since $n$ is invertible in the ground ring, $z=(1/n)\sum_{i=0}^{n-1}z_i$. Hence $k_i=(1/n) z_i=(1/n)\sum_{s=0}^{n-1} \omega^{-is}z^{\varphi^s}$. Since $I$ is $\varphi$-invariant, $z^{\varphi^{j}}\in I$ for all $j$; therefore, $k_i\in I$ for all $i$, as required.

Consider the quotient Lie ring $M=K/(J+I)$. Since the ideals $J$ and $I$ are homogeneous with respect to the gradings $K=\bigoplus_i G_i(Y)$ and $K=\bigoplus_{i=0}^{n-1} K_i$, the quotient ring $M$ has the corresponding induced gradings. Furthermore, $M_0=0$ by construction of $J$. Therefore in the induced action of the group $FH$ on $M$ we have $C_M(F)=0$, since $n$ is invertible in the ground ring. The group $H$ permutes the grading components of $M=M_1\oplus\dots \oplus M_{n-1}$ with regular orbits of length $q$. Therefore elements of $C_M(H)$ have the form $m+m^h+\dots +m^{h^{q-1}}$. Hence $C_M(H)$ is the image of $C_K(H)$ in $M=K/(I+J)$ and $\gamma_{c+1}(C_M(H)) =0$ by construction of $I$. By Theorem~\ref{kh-ma-shu10-1} $M$ is nilpotent of $(c,q)$-bounded class $f=f(c,q)$. Consequently,
$$ [y_{i_1}, y_{i_2},\ldots, y_{i_{T}} ]\in J+I={}_{{\rm id}}\!\left< K_0 \right>+{}_{{\rm id}}\!\left<\gamma_{c+1}(C_K(H)) \right>^F. $$
Since both ideals are homogeneous with respect to the grading $K=\bigoplus_i G_i(Y)$, this means that the commutator $[y_{i_1}, y_{i_2},\ldots, y_{i_{T}}]$ is equal modulo the ideal $I$ to a linear combination of commutators of the same weight $T$ in elements of $Y$ each of which contains a subcommutator with zero sum of indices modulo $n$.

We claim that in addition every commutator in this linear combination can be assumed to include exactly one element of the $H$-orbit $O(y_{i_s})$ for every $s=1,\ldots, T$. For every $s=1,\ldots,T$ we consider the homomorphism $\theta_s$ extending the mapping
$$ y\rightarrow 0 \quad\text{for}\quad y\in O(y_{i_s}); \qquad y\rightarrow y \quad \text{for}\quad y\in O(y_{i_k}),\ k\neq s. $$
We say for brevity that a commutator \textit{depends on $ O(y_{i_s})$} if it involves at least one element of $ O(y_{i_s})$. The homomorphism $\theta_s$ sends to 0 every commutator in elements of $Y$ that depends on $ O(y_{i_s})$, and acts as the identity on commutators that are independent of $ O(y_{i_s})$. Hence $\theta_s$ clearly commutes with the action of $H$. The automorphism $\varphi$ acts on any homogeneous commutator by multiplication by some power of $\omega $. Therefore $\theta_s$ also commutes with the action of $F$. It follows that the ideal $I$ is invariant under $\theta_s$.
Indeed, $C_K(H)$ is $\theta_s$-invariant, and then so is the ideal ${}_{{\rm id}}\!\left< \gamma_{c+1}(C_K(H)) \right>$. Since $\theta_s$ commutes with the action of $F$, the $F$-closure of the latter ideal, which is $I$, is also $\theta_s$-invariant.

We apply the homomorphism $\theta_s$ to the commutator $[y_{i_1}, y_{i_2},\ldots, y_{i_{T}}]$ and its representation modulo $I$ as a linear combination of commutators in elements of $Y$ of weight $T$ containing subcommutators with zero sum of indices modulo $n$. The image $\theta_s([y_{i_1}, y_{i_2},\ldots, y_{i_{T}}])$ is equal to $0$, as well as the image of any commutator depending on $O(y_{i_s})$. Hence in the representation of $[y_{i_1}, y_{i_2},\ldots, y_{i_{T}}]$ the part of the linear combination in which commutators are independent of $O(y_{i_s})$ is equal to zero and can be excluded from the expression. By applying consecutively $\theta_s$, $s=1,\ldots, T$, and excluding commutators independent of $O(y_{i_s})$, $s=1,\ldots, T$, in the end we obtain modulo $I$ a linear combination of commutators each of which contains at least one element from every orbit $O(y_{i_s})$, $s=1,\ldots, T$. Since under these transformations the weight of commutators remains the same and is equal to $T$, no other elements can appear, and every commutator will contain exactly one element in every orbit $O(y_{i_s})$, $s=1,\ldots, T$.

Now suppose that $L$ is an arbitrary Lie ring over $R$ satisfying the hypotheses of Proposition~\ref{combinatorial}. Let $x_{i_1}, x_{i_2},\ldots, x_{i_{T}} $ be arbitrary $\varphi$-homogeneous elements of $L$. We define the homomorphism $\delta$ from the free Lie ring $K$ into $L$ extending the mapping
$$ y_{r^ki_s}\to x_{i_s}^{h^k}\quad \text{for} \quad s=1,\ldots,T \quad \text{and}\quad k=0,1,\ldots,q-1. $$
It is easy to see that $\delta$ commutes with the action of $FH$ on $K$ and $L$. Therefore $\delta (O(y_{i_s}))=O(x_{i_s})$ and $\delta (I)=0$, since $\gamma_{c+1}(C_L(H)) =0$ and $\delta(C_K(H))\subseteq C_L(H)$. We now apply $\delta$ to the representation of the commutator $[y_{i_1}, y_{i_2},\ldots, y_{i_{T}}]$ constructed above. Since $\delta(I)=0$, as the image we obtain a representation of the commutator $[x_{i_1},x_{i_2},\ldots, x_{i_{T}} ]$ as a linear combination of commutators of weight $T$ in elements of the set $\delta (Y)=\bigcup_{s=1}^T O(x_{i_s})$ each of which contains exactly the same number of elements from every $H$-orbit $O(x_{i_s})$, $s=1,\ldots, T$, as the original commutator, and has a subcommutator with zero sum of indices modulo $n$. (Unlike the disjoint orbits $O(y_{i_s})$ in $K$, their images may coincide in $L$, which is why we can only claim ``the same number of elements'' from each of them, rather than exactly one from each.) Finally, by the anticommutativity and Jacobi identities we can transform this linear combination into another one of simple commutators in the same elements each having an initial segment with zero sum of indices. The proposition is proved.
\end{proof}

\begin{definition}
We define a \textit{KMS-transformation} of a commutator $[x_{i_1}, x_{i_2},\ldots, x_{i_l}]$ for $l\geq T$ to be its representation according to Proposition~\ref{combinatorial} as a linear combination of simple commutators of the same weight in elements of $\bigcup_{s=1}^{l} O(x_{i_s})$ each of which has an initial segment from $L_0$ of weight $\leq T$, that is, commutators of the form
$$ [c_0,y_{j_{w+1}},\ldots, y_{j_{l}}], $$
where $c_0=[y_{j_1},\ldots, y_{j_w}]\in L_0$, $w\leq T$, $y_{j_k}\in O(x_{i_1})\cup \cdots \cup O(x_{i_l})$, with subsequent re-denoting
\begin{equation}\label{re-den}
z_{j_{w+1}}=-[c_0, y_{j_{w+1}}],\qquad z_{j_{w+s}}=y_{j_{w+s}} \,\,\,\text{for}\,\,\, s>1.
\end{equation}
\end{definition}

The following assertion is obtained by repeated application of KMS-transformations.

\begin{proposition}\label{kh-ma-shu-transformation}
For any positive integers $t_1$ and $t_2$ there exists a $(t_1,t_2, c,q)$-bounded positive integer $V=V(t_1,t_2, c,q)$ such that any simple $\varphi$-homogeneous commutator $[x_{i_1}, x_{i_2},\ldots, x_{i_{V}}]$ of weight $V$ with non-zero indices can be represented as a linear combination of $\varphi$-homogeneous commutators of the same weight in elements of the set $X=\bigcup_{s=1}^V O(x_{i_s})$ each having either a subcommutator of the form
\begin{equation}\label{f1}
[u_{k_1},\ldots, u_{k_s}]
\end{equation}
with $t_1$ different initial segments with zero sum of indices modulo~$n$, that is, with $k_1+k_2+\cdots+k_{r_i}\equiv 0\; ({\rm mod}\, n)$ for $ 1<r_1<r_2<\cdots<r_{t_1}=s$, or a subcommutator of the form
\begin{equation}\label{f2}
[u_{k_0}, c^1,\ldots, c^{t_2}],
\end{equation}
where $u_{k_0}\in X$, every $c^i$ belongs to $L_0$ (with numbering upper indices $i=1,\ldots,t_2$) and has the form $[a_{k_1},\ldots, a_{k_i}]$ for $a_{k_j}\in X$ with $k_1+\cdots+k_i\equiv 0\; ({\rm mod}\, n)$. Here we can set $V(t_1,t_2,c,q)= \sum_{i=1}^{t_1}((f(c,q)+1)^2t_2)^i+1$.
\end{proposition}

\begin{proof}
The proof practically word-for-word repeats the proof of the Proposition in~\cite{kh2} (see also Proposition~4.4.2 in \cite{kh4}), with the HKK-transformations replaced by the KMS-transformations. Namely, the KMS-transformation is applied to the initial segment of length $T$ of the commutator. Each of the resulting commutators contains a subcommutator in $L_0$, which becomes a part of a new element of type $z_1$ in \eqref{re-den}. Then the KMS-transformation is applied again, and so on. Note that images under $H$ of commutators in elements of the $H$-orbits of the $x_{i_j}$ are again commutators in elements of the $H$-orbits of the $x_{i_j}$. Subcommutators in $L_0$ are thus accumulated, either nested (for example, if, say, at the second step the element $z_1$ containing $c_0\in L_0$ is included in the new subcommutator in $L_0$), or disjoint (if, say, at the second step $z_1$ is not included in a new subcommutator in $L_0$). Nested accumulation leads to \eqref{f1}, and disjoint to \eqref{f2}. The total number of subcommutators in $L_0$ increases at every step, and the required linear combination is achieved after sufficiently many steps. The precise details of the proof by induction can be found in~\cite{kh2} or \cite[\S\,4.4]{kh4}.
The only difference is that HKK-transformations always produce commutators in the same elements as the original commutator $[x_{i_1}, x_{i_2},\ldots, x_{i_{V}}]$, while after the KMS-transformation the resulting commutators may also involve images of the elements $x_{i_j}$ under the action of $H$. This is why elements of the $H$-orbits have to appear in the conclusion of the proposition.
\end{proof}

\paragraph{Representatives and graded centralizers.} We begin construction of graded centralizers by induction on the level taking integer values from $0$ to $T$, where the number $T=T(c,q)= f(c,q)+1$ is defined in Proposition~\ref{combinatorial}. A graded centralizer $L_j(s)$ of level $s$ is a certain additive subgroup of the $\varphi$-component~$L_j$. Simultaneously with construction of graded centralizers we fix certain elements of them --- representatives of various levels --- the total number of which is $(m,n,c)$-bounded.

\begin{definition}
The \textit{pattern} of a commutator in $\varphi$-homogeneous elements (of various~$L_i$) is defined as its bracket structure together with the arrangement of indices under the Index Convention. The \textit{weight} of a pattern is the weight of the commutator. The commutator itself is called the value of its pattern on given elements.
\end{definition}

\begin{definition}
Let $\vec x=(x_{i_1},\dots ,x_{i_k})$ be an ordered tuple of elements $x_{i_s}\in L_{i_s}$, $i_s\ne 0$, such that $i_1+\dots + i_k\not\equiv 0\, ({\rm mod}\, n)$. We set $j=-i_1-\dots - i_k\,({\rm mod}\, n)$ and define the mapping
\begin{equation}\label{vartheta}
\vartheta _{\vec x}: y_j\rightarrow [y_j, x_{i_1}, \dots , x_{i_k}].
\end{equation}
\end{definition}

By linearity this is a homomorphism of the additive subgroup $L_j$ into $L_0$. Since $|L_0|= m$, we have $|L_j : {\rm Ker}\, \vartheta_{\vec x}|\leq m$.

\begin{notation}
Let $U=U(c,q)=V(T, T-1, c,q )$, where $V$ is the function in Proposition~\ref{kh-ma-shu-transformation}.
\end{notation}

\begin{definition0}
Here we only fix representatives of level $0$. First, for every pair $({\bf p},c)$ consisting of a pattern ${\bf p}$ of a simple commutator of weight $\leq U$ with non-zero indices of entries and zero sum of indices and a commutator $c\in L_0$ equal to the value of ${\bf p}$ on $\varphi$-homogeneous elements of various $L_{i}$, $i\ne 0$, we fix one such representation. The elements of $L_{j}$,\, $j\ne 0$, occurring in this fixed representation of $c$ are called \textit{representatives of level~$0$}. Representatives of level $0$ are denoted by $x_j (0)$ (with letter ``$x$'') under the Index Convention (recall that the same symbol can denote different elements). Furthermore, together with every representative $x_j(0)\in L_j$, $j\ne 0$, we fix all elements of its $H$-orbit
$$O(x_j(0))=\{x_j(0), x_j(0)^h,\,\ldots, x_j(0)^{h^{q-1}} \}, $$
and also call them \textit{representatives of level~$0$}. Elements of these orbits are denoted by $x_{r^{s}j}(0):=x_j(0)^{h^{s}}$ under the Index Convention (since $L_i^h\leq L_{ri}$).
\end{definition0}

The total number of patterns ${\bf p}$ of weight $\leq U$ is $(n,c)$-bounded, $|L_0|=m$, and $|O(x_j(0))|=q$; hence the number of representatives of level $0$ is $(m,n,c)$-bounded.

\begin{definition3}
Suppose that we have already fixed $(m,n,c)$-boundedly many representatives of levels $<t$.
We define \textit{graded centralizers of level $t$} by setting, for every $j\ne 0$,
$$ L_j(t)=\bigcap_{\vec x} {\rm Ker}\, \vartheta_{\vec x}, $$
where $\vec x=\left( x_{i_1}(\varepsilon_1), \dots , x_{i_k}(\varepsilon_k)\right) $ runs over all ordered tuples of all lengths $k\leq U$ consisting of representatives of (possibly different) levels $<t$ such that
$$j+ i_1+\cdots + i_k\equiv 0\; ({\rm mod}\, n).$$
For brevity we also call elements of $L_j(t)$ \textit{centralizers of level $t$} and fix for them the notation $y_j(t)$ with letter ``$y$'' (under the Index Convention). The number of representatives of all levels $<t$ is $(m,n,c)$-bounded and $|L_j: {\rm Ker}\,\vartheta_{\vec x}| \leq m$ for all $\vec x$. Hence the intersection here is taken over $(m,n,c)$-boundedly many additive subgroups of index $\leq m$ in $L_j$, and therefore $L_j(t)$ also has $(m,n,c)$-bounded index in the additive subgroup $L_{j}$.

By definition a centralizer $y_j(t)$ of level $t$ has the following centralizer property with respect to representatives of lower levels:
\begin{equation}\label{centralizer-property}
\left[ y_j(t), x_{i_1}(\varepsilon_1),\, \ldots ,\, x_{i_k}(\varepsilon_k) \right]=0,
\end{equation}
as soon as $j+ i_1+\cdots +i_k\equiv 0\, (\mbox{mod}\, n)$, \ $k\leq U$, and the elements $x_{i_s}(\varepsilon_s)$ are representatives of any (possibly different) levels $\varepsilon_s<t$.

We now fix representatives of level $t$. For every pair $({\bf p},c)$ consisting of a pattern ${\bf p}$ of a simple commutator of weight $\leq U$ with non-zero indices of entries and zero sum of indices and a commutator $c\in L_0$ equal to the value of ${\bf p}$ on $\varphi$-homogeneous elements \textit{of graded centralizers $L_{i}(t)$, $i\ne 0$, of level $t$}, we fix one such representation. The elements occurring in this fixed representation of $c$ are called \textit{representatives of level $t$} and are denoted by $ x_{j}(t)$ (under the Index Convention). Next, for every (already fixed) representative $x_j(t)$ of level $t$, we fix the elements of the $H$-orbit
$$O(x_j(t))=\{x_j(t), x_j(t)^h,\,\ldots, x_j(t)^{h^{q-1}} \}, $$
and call them also \textit{representatives of level~$t$}. These elements are denoted by $x_{r^{s}j}(t):=x_j(t)^{h^{s}}$ under the Index Convention (since $L_j^{h^s}\leq L_{r^sj}$). The number of patterns of weight $\leq U$ is $(n,c)$-bounded, $|L_0|=m$, and $|O(x_j(t))|=q$; hence the total number of representatives of level~$t$ is $(m,n, c)$-bounded.
\end{definition3}

The construction of centralizers and representatives of levels $\leq T$ is complete. We now consider their properties. It is clear from the construction of graded centralizers that
\begin{equation}\label{vkljuchenie}
L_j(k+1)\leq L_j(k)
\end{equation}
for all $j\ne 0$ and all $k=1,\ldots, T$. The following lemma follows immediately from the definition of representatives and from the inclusions~\eqref{vkljuchenie}; we shall refer to this lemma as the ``freezing'' procedure.

\begin{lemma}[freezing procedure]\label{zamorazhivanie}
Every simple commutator $ [y_{j_1}(k_1),y_{j_2}(k_2),\dots,y_{j_w}(k_w) ] $ of weight $w\leq U$ in centralizers of levels $k_1,k_2,\dots, k_w$ with zero modulo $n$ sum of indices can be represented $($frozen\/$)$ as a commutator $[x_{j_1}(s),x_{j_2}(s),\dots,x_{j_w}(s) ]$ of the same pattern in representatives of any level $s$ satisfying $ 0\leq s\leq \min \{ k_1,k_2,\dots,k_w \}$.
\end{lemma}

\begin{definition}
We define a \textit{quasirepresentative of weight $w\geq 1$ and level $k$} to be any commutator of weight $w$ which involves exactly one representative $x_i(k)$ of level $k$ and $w-1$ representatives $x_s(\varepsilon_s)$ of any lower levels $\varepsilon_s<k$. Quasirepresentatives of level $k$ are denoted by $\hat{x}_{j}(k)\in L_j$ under the Index Convention. Quasirepresentatives of weight $1$ are precisely representatives.
\end{definition}

\begin{lemma}\label{invariance}
If $y_j(t)\in L_j(t)$ is a centralizer of level $t$, then $(y_j(t))^h$ is a centralizer of level $t$. If $\hat{x}_j(t)$ is a quasirepresentative of level $t$, then $(\hat{x}_j(t))^h$ is a quasirepresentative of level $t$ and of the same weight as $\hat{x}_j(t)$.
\end{lemma}

\begin{proof}
By hypothesis,
$$ [y_j(t), x_{i_1}(\varepsilon_1), \dots , x_{i_k}(\varepsilon_k)]=0, $$
whenever $\varepsilon _i<t$ for all $i$, $j+i_1+\cdots+ i_k \equiv 0 \; ({\rm mod}\, n)$, and $k\leq U$. By applying the automorphism $h$ we obtain that
$$[y_{j}(t)^h, x_{ri_1}(\varepsilon_1), \dots , x_{ri_k}(\varepsilon_k)]=0 $$
with $y_{j}(t)^h\in L_{rj}$. Since the set of representatives is $H$-invariant by construction, the tuples $x_{ri_1}(\varepsilon_1), \dots , x_{ri_k}(\varepsilon_k)$ run over all tuples of representatives of levels $<t$ with index tuples such that $rj+ ri_1 +\dots + ri_k\equiv 0 \; ({\rm mod}\, n)$. By definition this means that $y_{j}(t)^h\in L_{rj}(t)$.

Now let $\hat{x}_j(t)$ be a quasirepresentative of weight $k$ of level $t$. By definition this is a commutator involving exactly one representative $x_{i_1}(t)$ of level $t$ and some representatives $x_{i_2}(\varepsilon_2), \dots , x_{i_k}(\varepsilon_k)$ of smaller levels $\varepsilon_s<t$. By construction, the elements $(x_{i_s}(\varepsilon_s))^{h}=x_{ri_s}(\varepsilon_s)$ are also representatives of the same levels~$\varepsilon_s$. Then the image $(\hat{x}_j(t))^h$ is also a commutator involving exactly one representative $x_{ri_1}(t)$ of level $t$ and representatives $x_{ri_2}(\varepsilon_2), \dots , x_{ri_k}(\varepsilon_k)$ of smaller levels $\varepsilon_s<t$. Therefore $(\hat{x}_j(t))^h$ is also a quasirepresentative of level $t$ of the same weight.
\end{proof}

When using Lemma~\ref{invariance} we denote the elements $y_{j}(t)^{h^s}$ by $y_{r^sj}(t)$, and the elements $\hat{x}_j(t)^{h^s}$ by $\hat{x}_{r^sj}(t)$. Lemma~\ref{invariance} also implies that all representatives of level $t$, elements $x_j(t), x_j(t)^h,\ldots, x_j(t)^{h^{q-1}}$, are centralizers of level $t$.

\begin{lemma}\label{quasirepresentatives}
Any commutator involving exactly one centralizer ${y}_{i}(t)$ (or quasirepresentative ${\hat x}_{i}(t)$) of level $t$ and quasirepresentatives of levels $< t$ is equal to $0$ if the sum of indices of its entries is equal to $0$ and the sum of their weights is at most~$U+1$.
\end{lemma}

\begin{proof}
Based on the definitions, by the Jacobi and anticommutativity identities we can represent this commutator as a linear combination of simple commutators of weight $\leq U+1$ beginning with the centralizer ${y}_{i}(t)$ (or a centralizer ${y}_{j}(t)$ involved in ${\hat x}_{i}(t)$) of level $t$ and involving in addition only some representatives of levels $<t$.
Since the sum of indices of all these elements is also equal to $0$, all these commutators are equal to $0$ by~\eqref{centralizer-property}.
\end{proof}

\paragraph{Completion of the proof of the Lie ring theorem.}\label{section-main-teorem} Recall that $T$ is the fixed notation for the highest level, which is a $(c,q)$-bounded number. We constructed above the graded centralizers $L_j(T)$. We now set
$$ Z=\left< L_1(T),\,L_2(T),\ldots ,L_{n-1}(T)\right> .$$
Since $|L_j:L_j(T)|$ is $(m,n,c)$-bounded for $j\ne 0$ and $|L_0|=m$, it follows that $|L:Z|$ is $(m,n,c)$-bounded. We claim that the subring $Z$ is nilpotent of $(c,q)$-bounded class and therefore is a required subring. Since $Z$ is generated by the $L_{j}(T)$, $j\ne 0$, it is sufficient to prove that every simple commutator of weight $U$ of the form
\begin{equation}\label{9}
[y_{i_1}(T),\ldots, y_{i_U}(T)],
\end{equation}
where $y_{i_j}(T)\in L_{i_j}(T)$, is equal to zero. Recall that the $H$-orbits $O(y_{i_j}(T))=\{y_{i_j}(T)^{h^s}=y_{r^si_j}(T)\mid s=0,1,\ldots, q-1\}$ consist of centralizers of level $T$ by Lemma~\ref{invariance}. By Proposition~\ref{kh-ma-shu-transformation}, the commutator~\eqref{9} can be represented as a linear combination of commutators in elements of $Y=\bigcup_{j=1}^U O(y_{i_j}(T))$ each of which either has a subcommutator of the form~\eqref{f1} in which there are $T$ distinct initial segments in $L_0$, or has a subcommutator of the form~\eqref{f2} in which there are $T-1$ occurrences of elements from $L_0$. It is sufficient to prove that such subcommutators of both types are equal to zero.

We first consider a commutator of the form~\eqref{f2}
\begin{equation}\label{main-f-2}
[y_{k_0}(T), c^1,\ldots, c^{T-1}],
\end{equation}
where $y_{k_0}(T)\in Y$ and every $c^i\in L_0$ (with numbering upper indices $i=1,\ldots,T-1$) has the form $[y_{k_1}(T),\ldots, y_{k_i}(T)]$ with $y_{k_j}(T)\in Y$ and $k_1+\cdots+k_i\equiv 0 \; ({\rm mod}\, n)$. Using Lemma~\ref{zamorazhivanie} we ``freeze'' every $c^k$ as a commutator of the same pattern in representatives of level $k$. Then by expanding the inner brackets by the Jacobi identity $[a,[b,c]]=[a,b,c]-[a,c,b]$ we represent the commutator~\eqref{main-f-2} as a linear combination of commutators of the form
\begin{equation}\label{main-f-22}
[y_{k_0}(T),\,\,x_{j_1}(1),\ldots, x_{j_k}(1),\,x_{j_{k+1}}(2),\ldots, x_{j_s}(2), \ldots, \, \, \, \,x_{j_{l+1}}(T-1),\ldots, x_{j_{u}}(T-1)\,].
\end{equation}
We subject the commutator~\eqref{main-f-22} to a certain collecting process, aiming at a linear combination of commutators with initial segments consisting of (quasi)representatives of different levels $1,2,\ldots,T-1 $ and the element $y_{k_0}(T)$. For that, by the formula $[a,b,c]=[a,c,b]+[a,[b,c]]$, we begin moving the element $x_{j_{k+1}}(2)$ in~\eqref{main-f-22} (the leftmost element of level 2) to the left, in order to place it right after the element $x_{j_{1}}(1)$. These transformations give rise to additional summands: say, at the first step we obtain
$$[y_{k_0}(T),\,\,\ldots,x_{j_{k+1}}(2), x_{j_k}(1),\,\ldots,\,]+[y_{k_0}(T),\,\,\ldots, [x_{j_k}(1),\,x_{j_{k+1}}(2)],\ldots]. $$
In the first summand we continue transferring $x_{j_{k+1}}(2)$ to the left, over all representatives of level 1.
In the second summand the subcommutator $[x_{j_k}(1),\,x_{j_{k+1}}(2)]$ is a quasirepresentative, which we denote by $\hat{x}_{j_k+j_{k+1}}(2)$, and we start moving this quasirepresentative to the left over all representatives of level 1. Since we are transferring a (quasi)representative of level $2$ over representatives of level $1$, in additional summands every time there appear subcommutators that are quasirepresentatives of level $2$, which assume the role of the element being transferred.

\begin{remark}\label{rem}
Here and in subsequent similar situations, it may happen that the sum of indices of a new subcommutator is zero, so it cannot be regarded as a quasirepresentative --- but then such a subcommutator is equal to 0 by Lemma~\ref{quasirepresentatives}.
\end{remark}

As a result we obtain a linear combination of commutators of the form
$$[[y_{k_0}(T),x(1),\hat{x}(2)], x(1),\ldots,x(1),\,\,x(2),\ldots, x(2),\ldots, x(T-1),\ldots,x(T-1)] $$
with collected initial segment $[y_{k_0}(T),x(1),\hat{x}(2)]$. (For simplicity we omitted indices in the formula.) Next we begin moving to the left the leftmost representative of level 3 in order to place it in the fourth place. This element is also transferred only over representatives of lower levels, and the new subcommutators in additional summands are quasirepresentatives of level 3. These quasirepresentatives of level 3 assume the role of the element being transferred, and so on. In the end we obtain a linear combination of commutators with initial segments of the form
\begin{equation}\label{main-f-23}
[y_{k_0}(T),\, \hat{x}_{k_1}(1), \hat{x}_{k_2}(2),\ldots,\hat{x}_{k_{T-1}}(T-1) ].
\end{equation}
By Proposition~\ref{combinatorial} the commutator~\eqref{main-f-23} of weight $T$ is equal to a linear combination of $\varphi$-homogeneous commutators of the same weight $T$ in elements of the $H$-orbits of the elements $y_{k_0}(T),\, \hat{x}_{k_1}(1), \hat{x}_{k_2}(2),\ldots,\hat{x}_{k_{T-1}}(T-1)$ each involving exactly the same number of elements of each $H$-orbit of these elements as \eqref{main-f-23} and having a subcommutator with zero sum of indices modulo $n$. By Lemma~\ref{invariance} every element $(\hat{x}_ {k_i}(i))^{h^{l}}=\hat{x}_{r^lk_i}(i)$ is a quasirepresentative of level $i$ and any $(y_{k_0}(T))^{h^l}$ is a centralizer of the form $y_{r^lk_0}(T)$ of level $T$. Since every level appears only once in \eqref{main-f-23}, those subcommutators with zero sum of indices are equal to 0 by Lemma~\ref{quasirepresentatives}. Hence every commutator of the linear combination is equal to 0.

We now consider a commutator of the form~\eqref{f1}
\begin{equation}\label{main-f-1}
[y_{k_1}(T),\ldots, y_{k_s}(T)],
\end{equation}
where $y_{k_j}(T)\in Y$ and there are $T$ distinct initial segments with zero sum of indices: $k_1+\cdots+k_{r_i}\equiv 0\,({\rm mod}\, n)$ for $1<r_1<r_2<\cdots<r_{T}=s$. The commutator~\eqref{main-f-1} in elements of the centralizers $L_i(T)$ of level $T$ belongs to $L_0$; therefore by Lemma~\ref{zamorazhivanie} it can be ``frozen'' in level $T$, that is, represented as the value of the same pattern on representatives of level $T$:
\begin{equation}\label{main-f-11}
[x_{k_1}(T),\ldots, x_{k_s}(T)].
\end{equation}
Next, the initial segment of \eqref{main-f-11} of length $r_{T-1}$ also belongs to $L_0$ and is a commutator in centralizers of level $T-1$, since $L_i(T)\leq L_i(T-1)$. Therefore by Lemma~\ref{zamorazhivanie} it can be ``frozen'' in level $T-1$, and so on. As a result the commutator \eqref{main-f-1} is equal to a commutator of the form
\begin{equation}\label{main-f-12}
[x(1),\ldots, x(1),x(2),\ldots, x(2),\ldots, \ldots, x(T),\ldots, x(T)].
\end{equation}
(We omitted here indices for simplicity.) We subject the commutator~\eqref{main-f-12} to exactly the same transformations as the commutator~\eqref{main-f-22}. First we transfer the leftmost element of level $2$ to the left to the second place, then the leftmost element of level 3 to the third place, and so on. In additional summands the emerging quasirepresentatives $\hat{x}(i)$ (see Remark~\ref{rem}) assume the role of the element being transferred and are also transferred to the left to the $i$th place. In the end we obtain a linear combination of commutators with initial segments of the form
\begin{equation}\label{main-f-13}
[\hat{x}_{k_1}(1), \hat{x}_{k_2}(2),\ldots,\hat{x}_{k_{T}}(T) ].
\end{equation}
By Proposition~\ref{combinatorial} the commutator~\eqref{main-f-13} of weight $T$ is equal to a linear combination of $\varphi$-homogeneous commutators of the same weight $T$ in elements of the $H$-orbits of the elements $\hat{x}_{k_1}(1), \hat{x}_{k_2}(2),\ldots,\hat{x}_{k_{T}}(T)$ each involving exactly the same number of elements of each $H$-orbit of these elements as \eqref{main-f-13} and having a subcommutator with zero sum of indices modulo $n$. By Lemma~\ref{invariance} every element $(\hat{x}_ {k_i}(i))^{h^{l}}$ is a quasirepresentative $\hat{x}_{r^lk_i}(i)$ of level $i$. Since every level appears only once in \eqref{main-f-13} and there is an initial segment with zero sum of indices, those subcommutators with zero sum of indices are equal to 0 by Lemma~\ref{quasirepresentatives}. Hence every commutator of the linear combination is equal to 0.
\end{proof}

\begin{proof}[Proof of Corollary~\ref{c-l}]
Let $L$ be a Lie algebra over a field of characteristic $p$. Let $F=\langle \psi\rangle \times \langle \chi\rangle$, where $ \langle \psi \rangle$ is the Sylow $p$-subgroup and $\langle \chi \rangle$ the Hall $p'$-subgroup. Consider the fixed-point subalgebra $A=C_L(\chi )$. It is $\psi$-invariant and $ C_A(\psi )=C_L(\varphi )$, so that ${\rm dim\,}C_A(\psi )\leq m$. Since $\psi$ has order $p^k$ and the characteristic is~$p$, this implies that ${\rm dim\,}A\leq mp^k$ by a well-known lemma following from the Jordan normal form of $\psi$; see, for example, \cite[1.7.4]{kh4}. Thus, $L$ admits the Frobenius group of automorphisms $\langle\chi\rangle H$ with $(m,n)$-bounded ${\rm dim\,}C_L(\chi )$, so we can assume that $p$ does not divide $n$. After that we can repeat the arguments of the proof of Theorem~\ref{t-l} with obvious modifications: codimensions instead of indices, etc. The most significant modification is in the definition of representatives, which now have to be fixed bases of the subspaces spanned by values of patterns, rather than all values of these patterns. Actually almost the whole proof in \cite{mak-khu13} can be repeated with the improvement that we made in the present paper in the proof of Proposition~\ref{combinatorial}.
\end{proof}

\section{Proof of Theorem~\ref{t-g}}

Recall that $G$ is a finite group admitting a Frobenius group of automorphisms $FH$ of coprime order with cyclic kernel $F$ of order $n$ and complement $H$ of order $q$ such that the fixed-point subgroup $C_G(H)$ of the complement is nilpotent of class $c$. Let $m=|C_G(F)|$; we need to prove that $G$ has a nilpotent characteristic subgroup of $(m, n,c)$-bounded index and of $(c, q)$-bounded nilpotency class. By a result of B.~Bruno and F.~Napolitani \mbox{\cite[Lemma~3]{bru-nap04}}, if a group has a subgroup of finite index $k$ that is nilpotent of class $l$, then it also has a characteristic subgroup of finite $(k,l)$-bounded index that is nilpotent of class $\leq l$. Therefore in the proof of Theorem~\ref{t-g} we only need a subgroup of $(m, n,c)$-bounded index and of $(c, q)$-bounded nilpotency class. By Theorem~\ref{t-n} the group $G$ has a nilpotent subgroup of $(m, n)$-bounded index. Therefore henceforth we assume that $G$ is nilpotent.

The proof of Theorem \ref{t-g} for nilpotent groups is based on the ideas developed in \cite{kh2} for almost fixed-point-free automorphisms of prime order. A modification of the method of graded centralizers is employed, which was used in \S\,\ref{s-l} for Lie rings. But now the construction of fixed elements (representatives) and generalized centralizers $A(s)$ of levels $s\leq 2T-2$ is conducted in the group $G$:
$$G=A(0)\geq A(1)\geq \dots \geq A(2T-2),$$
where $T=T(c,q)=f(c,q)+1$ and $f$ is the function in Proposition~\ref{kh-ma-shu10-1}. The subgroups $A(s)$ will have $(m,n,c)$-bounded indices in $G$, and the images of elements of $A(s)$ in the associated Lie ring $L(G)$ extended by a primitive $n$th root of unity will have centralizer properties in this Lie ring with respect to representatives of lower levels.

Direct application of Theorem~\ref{t-l} to $L(G)$ does not give the required result for the group $G$, since there is no good correspondence between subgroups of $G$ and subrings of $L(G)$. As in \cite{kh2} we overcome this difficulty by proving that, in a certain critical situation, the group $G$ itself is nilpotent of $(c, q)$-bounded class, the advantage being that the nilpotency class of $G$ is equal to that of $L(G)$. This is not true in general, but can be achieved by using induction on a certain complex parameter. This parameter controls the possibility of replacing commutators in elements of the Lie ring by commutators in representatives of higher levels. If the parameter becomes smaller for some of the subgroups $A(i)$ than for the group $G$, then by the induction hypothesis the subgroup $A(i)$, and therefore also the group $G$ itself, contains a required subgroup of $(m,n,c)$-bounded index and of $(c,q)$-bounded nilpotency class. If, however, this parameter does not diminish up to level $2T-2$, then we prove that the whole group $G$ is nilpotent of class $< V(T, 2(T-1), c,q )$, where $V$ is the function in Proposition~\ref{kh-ma-shu-transformation}.

\paragraph{Generalized centralizers and representatives in the group.} Let $L(G)=\bigoplus_i \gamma _i(G)/\gamma _{i+1}(G)$ be the associated Lie ring of the group $G$, where the $\gamma _i(G)$ are the terms of the lower central series. The summand $\gamma _i(G)/\gamma _{i+1}(G)$ is the homogeneous component of weight $i$ of the Lie ring $L(G)$ with respect to the generating set $G/\gamma _{2}(G)$.
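For completeness we recall the (standard) definition of the Lie product on $L(G)$, stated here only for the reader's convenience: for cosets $x\gamma_{i+1}(G)\in\gamma_i(G)/\gamma_{i+1}(G)$ and $y\gamma_{j+1}(G)\in\gamma_j(G)/\gamma_{j+1}(G)$ one sets
$$[\,x\gamma_{i+1}(G),\; y\gamma_{j+1}(G)\,]=[x,y]\,\gamma_{i+j+1}(G)\in \gamma_{i+j}(G)/\gamma_{i+j+1}(G),$$
where $[x,y]=x^{-1}y^{-1}xy$ is the group commutator, and this product is extended to the whole of $L(G)$ by additivity.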
Since $(|G|, |FH|)=1$, we have $|C_{L(G)}(F)|=|C_G(F)|=m$ and $C_{L(G)}(H)=\bigoplus_i C_{\gamma_i(G)}(H)\gamma_{i+1}(G)/\gamma_{i+1}(G)$ for the induced group $FH$ of automorphisms of $L(G)$. Therefore it is easy to see that $C_{L(G)}(H)$ is also nilpotent of class at most $c$.

Recall that $F=\langle\varphi\rangle$ is cyclic of order $n$. Let $L=L(G)\otimes_{\Bbb Z}{\Bbb Z}[\omega]$, where $\omega$ is a primitive $n$th root of unity. Recall that as a ${\Bbb Z}$-module, ${\Bbb Z} [\omega ]=\bigoplus _{i=0}^{E(n)-1} \omega ^i {\Bbb Z}$, where $E(n)$ is the Euler function. Hence,
\begin{equation} \label{euler0}
L=\bigoplus _{i=0}^{E(n)-1} L(G)\otimes \omega ^i {\Bbb Z}.
\end{equation}
In particular, $C_L(\varphi )= \bigoplus _{i=0}^{E(n)-1} C_{L(G)}(\varphi )\otimes \omega ^i {\Bbb Z}$, so that $|C_L(\varphi )|=|C_{L(G)}(\varphi )|^{E(n)}=m^{E(n)}$ is an $(m,n)$-bounded number. Since $(|G|, n)=1$, we have $L=L_0\oplus L_1\oplus \cdots \oplus L_{n-1}$, where $L_i=\{x\in L\mid x^{\varphi}=\omega ^ix\}$ are the $\varphi$-components of $L$, as in \S\,\ref{s-l}, and $C_L(\varphi )=L_0$. We consider $L(G)$ to be naturally embedded in $L$ as $L(G)\otimes 1$. Since $(|G|,n)=1$, we can assume that the ground rings of $L(G)$ and $L$ contain $1/n$.

\begin{definition}
Let $x \in G$, and let $\bar{x}$ be the image of $x$ in $G/\gamma_2(G)$. We define the $\varphi$-\textit{terms} of the element $x$ in $L$ by the formula $x_k=\frac{1}{n}\sum_{s=0}^{n-1}\omega^{-ks}\bar x^{\varphi ^s}$, $k=0,1,\dots , n-1$. Then $x_k\in L_k$ and $\bar x=x_0+x_1+\cdots+x_{n-1}$. We re-define \textit{$\varphi$-homogeneous commutators} as commutators in $\varphi$-terms of elements.
\end{definition}

Note that the $\varphi$-terms of elements are calculated in the homogeneous component of weight 1 of the ring $L$ (in particular, for elements of $\gamma_2(G)$ they are all equal to $0$). Note also that now $\varphi$-homogeneous commutators have a more narrow meaning than in \S\,\ref{s-l} (where they were commutators in any elements of the grading $\varphi$-components $L_i$). As in \S\,\ref{s-l}, the group $H=\langle h\rangle$ of order $q$ permutes the $\varphi$-components $L_i$ by the rule $L_i^h = L_{ri}$ for $i\in \Bbb Z/n\Bbb Z$. Elementary calculations show that the action of $H$ preserves the $\varphi$-terms of elements.

\begin{lemma}\label{invariance-group}
Let $x_i$, $i=0,\ldots , n-1$, be the $\varphi$-terms of an element $x\in G$. Then $(x_{j})^h=(x^h)_{jr}$ and $(x^h)_{i}=(x_{ir^{-1}})^h$, where $(x^h)_{k}\in L_{k}$ are the $\varphi$-terms of $x^h\in G$.\qed
\end{lemma}

Here the construction of generalized centralizers $A(s)$ and fixed representatives is somewhat different from how this was done in \S\,\ref{s-l}. Complications arise from the fact that the centralizer property is defined in the Lie ring $L$, while the $A(s)$ are subgroups of $G$. The following lemma interprets this property in the group $G$.

\begin{lemma}\label{svjaz}
Suppose that $j+i_1+i_2+\cdots+i_k\equiv 0\, ({\rm mod}\, n)$ for some $j,\, i_1,\, \ldots,\, i_k \in {\Bbb Z}/n{\Bbb Z}$.
Then for the $\varphi$-terms $u_j, x_{i_1},\, y_{i_2},\, \ldots,\, z_{i_k}$ of $k+1$ elements $u,\, x,\,y, \ldots, z \in G$ to satisfy the equation $[u_j,\, x_{i_1},\, y_{i_2},\, \ldots,\, z_{i_k}]=0$ in the Lie ring $L$, it is sufficient that the congruences $\prod_{t=0}^{n-1}\left[ u,\, x^{\varphi ^{a_1}},\, y^{\varphi ^{a_2}},\, \ldots,\, z^{\varphi ^{a_k}}\right] ^{\varphi ^t}\equiv 1\, ({\rm mod}\, \gamma_{k+2}(G))$ hold in the group $G$ for all ordered tuples $a_1,\, a_2,\, \ldots,\, a_k$ of elements of ${\Bbb Z}/n{\Bbb Z}$.
\end{lemma}

\begin{proof}
We substitute the expressions of the $\varphi$-terms:
\begin{align*}
[u_j,\, x_{i_1},\, y_{i_2},\, \ldots,\, z_{i_k}] &=\Big[ \frac{1}{n}\sum_{s=0}^{n-1}\omega^{-js}\bar u^{\varphi^s},\,\, \frac{1}{n}\sum_{s=0}^{n-1}\omega^{-i_1s} \bar{x}^{\varphi^s},\, \ldots,\, \frac{1}{n}\sum_{s=0}^{n-1}\omega^{-i_ks} \bar{z}^{\varphi^s}\Big]\\
&=\frac{1}{n^{k+1}}\sum_{l=0}^{n-1} \omega^l \sum_{\substack{-js_0-i_1s_1-\cdots-i_ks_k\equiv l\, ({\rm mod}\,n)\\ 0\leq s_i\leq n-1}}[\bar u^{\varphi^{s_0}},\, \bar{x}^{\varphi^{s_1}},\, \bar{y}^{\varphi^{s_2}},\ldots, \bar{z}^{\varphi^{s_k}}],
\end{align*}
where $\bar u$, $\bar{x}, \ldots, \bar{z}$ are the images of the elements $u$, $x, y,\ldots, z$ in $G/\gamma_2(G)$ regarded as elements of the Lie ring $L$. Since $j+i_1+\cdots+i_k\equiv 0 \, ({\rm mod}\, n) $, the summation condition $-js_0-i_1s_1-\cdots-i_ks_k\equiv l \,({\rm mod}\,n) $ can be rewritten as $i_1(s_0-s_1)+i_2(s_0-s_2)+\cdots+ i_k(s_0-s_k)\equiv l \,({\rm mod}\,n)$. Therefore the inner sum splits into several sums of the form $\sum_{t=0}^{n-1}\left[ \bar u,\, \bar x^{\varphi ^{a_1}},\, \bar y^{\varphi ^{a_2}},\, \ldots,\, \bar z^{\varphi ^{a_k}}\right] ^{\varphi ^t}$. The congruences in the statement of the lemma are equivalent to these sums being equal to 0.
\end{proof}

We shall need homomorphisms similar to \eqref{vartheta} used in \S\,\ref{s-l} but defined on the group $G$. For every ordered tuple $\vec v=({x,\,y, \ldots,\, z})$ of length $k$ of elements of $G$ and every tuple $\vec a=(a_1,\, a_2,\, \ldots,\, a_k)$ of elements of ${\Bbb Z}/n{\Bbb Z}$ we define the homomorphism
$$ \vartheta_{\vec v,\, \vec a} :\; u\rightarrow \Big( \prod_{t=0}^{n-1}\left[ u,\, x^{\varphi ^{a_1}},\, y^{\varphi ^{a_2}},\, \ldots,\, z^{\varphi ^{a_k}}\right] ^{\varphi^t} \Big) \gamma _{k+2}(G) $$
of the group $G$ into $\gamma_{k+1}(G)/\gamma_{k+2}(G)$. The image of an element $u$ under $\vartheta_{\vec v,\, \vec a}$ is equal to the product of commuting elements over an orbit of the automorphism $\varphi$ in the abelian group $\gamma_{k+1}(G)/\gamma_{k+2}(G)$ and therefore belongs to $C_{L(G)}(\varphi )$. Hence, $|G:\mbox{Ker}\, \vartheta_{\vec v,\, \vec a}|\leq |C_{L(G)}(\varphi )| =m$. We further set $K(\vec v)=\bigcap_{\vec a}\mbox{Ker}\, \vartheta_{\vec v,\, \vec a}$, where $\vec a$ runs over all tuples of length $k$ of elements of ${\Bbb Z}/n{\Bbb Z}$. The index of the subgroup $K(\vec v)$ is $(m, n, k)$-bounded. A straightforward calculation shows that $K(\vec v)^h=K(\vec v^h)$, where $\vec v^h=({x^h,y^h, \ldots, z^h})$. We claim that the subgroup $K(\vec v)$ is $F$-invariant. Let $u\in K(\vec v)$.
We need to show that $u^{\varphi}\in K(\vec v)$, that is,
$$ \prod_{t=0}^{n-1}\left[ u^{\varphi},\, x^{\varphi ^{a_1}},\, y^{\varphi ^{a_2}},\, \ldots,\, z^{\varphi ^{a_k}}\right] ^{\varphi^t}\in \gamma_{k+2}(G) $$
for any tuple $\vec a=(a_1,\, a_2,\, \ldots,\, a_k)$ of elements of ${\Bbb Z}/n{\Bbb Z}$. Indeed,
$$ \prod_{t=0}^{n-1}\left[ u^{\varphi},\, x^{\varphi ^{a_1}},\, y^{\varphi ^{a_2}},\, \ldots,\, z^{\varphi ^{a_k}}\right] ^{\varphi^t} \;\equiv\; \prod_{s=0}^{n-1}\left[ u,\, x^{\varphi ^{a_1-1}},\, y^{\varphi ^{a_2-1}},\, \ldots,\, z^{\varphi ^{a_k-1}}\right] ^{\varphi^{s}} \;({\rm mod}\,\gamma _{k+2}(G)) $$
after the substitution $s=t+1$, since the commutators commute modulo $\gamma_{k+2}(G)$. The right-hand side is trivial modulo $\gamma_{k+2}(G)$, since $u\in K(\vec v)$.

By Lemma~\ref{svjaz}, for a tuple $\vec v=({x,y,\ldots,z})$ of length $k$ the corresponding subgroup $K(\vec v)$ has the following centralizer property: for any $u\in K(\vec v)$ and for the $\varphi$-terms of the elements in $\vec v$,
\begin{equation}\label{c-prop}
[u_j,\, x_{i_1},\, y_{i_2},\, \ldots,\, z_{i_k}]=0
\end{equation}
in the Lie ring $L$ as soon as $j+i_1+i_2+\cdots+i_k\equiv 0\, (\mbox{mod}\, n)$.

\begin{notation}
For what follows we fix the notation $N=N(c,q)=V(T, 2(T-1), c,q)$, where $V$ is the function in Proposition~\ref{kh-ma-shu-transformation}.
\end{notation}

We now begin the construction of generalized centralizers $A(i)$ of levels $i\leq 2T-2$ with simultaneous fixation of representatives both in the group $G$ and in the homogeneous component of weight 1 of the Lie ring $L$. The level is indicated in parentheses. We set $A(0)=G$. Recall that the \textit{pattern} of a commutator in elements of the $L_i$ is its bracket structure together with the arrangement of the indices.

\begin{definition0}
At level 0 we only fix representatives of level $0$. For every pair $({\bf p},c)$ consisting of a pattern ${\bf p}$ of a simple $\varphi$-homogeneous commutator of weight $\leq N$ with nonzero indices and zero sum of indices and a commutator $c\in L_0$ equal to the value of this pattern on the $\varphi$-terms of elements, we fix one such representation. The $\varphi$-terms $a_j$ of elements $a\in G$ (which belong to the homogeneous component of weight 1 of $L$) occurring in this fixed representation of the commutator $c$, as well as these elements $a\in G$ themselves, are called \textit{ring and group representatives of level~$0$} and are denoted by $x_j (0)\in L_j$ (under the Index Convention) and $x(0)\in G$, respectively. Together with every ring representative $x_j(0)\in L_j,$ $j\ne 0$, we fix all elements of its $H$-orbit $O\left(x_j(0)\right)=\{x_j(0), x_j(0)^h,\,\ldots, x_j(0)^{h^{q-1}} \}$, as well as all elements of $G$ in the $H$-orbit $O\left(x(0)\right)=\{x(0), x(0)^h,\,\ldots, x(0)^{h^{q-1}} \}$ of the corresponding group representative, which we also call (ring and group) \textit{representatives of level~$0$}. Elements of the orbit $O(x_j(0))$ are denoted by $x_{r^{s}j}(0):=x_j(0)^{h^{s}}$ under the Index Convention (since $L_i^h\leq L_{ri}$). By Lemma~\ref{invariance-group} the elements $x_{r^{s}j}(0)=x_j(0)^{h^{s}}$ are the $\varphi$-terms of the element $x(0)^{h^s}$; therefore all ring representatives are $\varphi$-terms of some group representatives.
Since the total number of patterns ${\bf p}$ of weight $\leq N$ is $(n,c)$-bounded, $|L_0|\leq m^{E(n)}$, and every $H$-orbit has size~$q$, it follows that the number of representatives of level $0$ is $(m,n,c)$-bounded.
\end{definition0}

\begin{definition3}
Suppose that we have already fixed $(m,n,c)$-boundedly many representatives of levels $s=0,\dots ,t-1$, both elements of the group $x(s)\in A(s)$ and their $\varphi$-terms $x_j(s)$. Suppose also that the set of representatives is $H$-invariant. We now define \textit{generalized centralizers of level $t$} (or, in brief, \textit{centralizers of level $t$}) by setting $A(t)=\bigcap_{\vec x}K(\vec x)$, where $\vec x=\left(x^1(\varepsilon_1),\, \ldots,\, x^k(\varepsilon_k)\right) $ runs over all ordered tuples of all lengths $k\leq N$ composed of group representatives $x^s(\varepsilon_s)\in A(\varepsilon_s)$ of levels $\varepsilon_s<t$. Here we use numbering upper indices, since lower indices always indicate membership in $\varphi$-components of the Lie ring. We call elements $a\in A(t)$, as well as their $\varphi$-terms $a_j$, \textit{group and ring centralizers of level $t$} and fix for them the notation $y(t)$ and $y_j(t)$, respectively (under the Index Convention), indicating the level in parentheses. Clearly, $A(t)\leq A(t-1)$.

Note that the subgroup $A(t)$ is $F$-invariant, since all the subgroups $K(\vec x)$ are $F$-invariant. We claim that $A(t)$ is also $H$-invariant. If $y(t)\in A(t)$, then $y(t)\in K(\vec v)$ for any tuple $\vec v=\left(x^1(\varepsilon_1),\, \ldots,\, x^k(\varepsilon_k)\right) $ of length $k\leq N$ composed of representatives of levels $\varepsilon_j<t$. Then $y(t)^h\in K(\vec v)^h=K(\vec v ^h)$. Since the set of representatives of levels $<t$ is $H$-invariant, the tuples $\vec v ^h$ also run over all tuples of lengths $k\leq N$ composed of representatives of levels $<t$. Hence, $y(t)^h\in A(t)$.

Since the number of representatives of levels $<t$ is $(m,n,c)$-bounded, the intersection $A(t)=\bigcap_{\vec x}K(\vec x)$ is taken over $(m,n,c)$-boundedly many subgroups of $(m,n,c)$-bounded index and therefore also has $(m,n,c)$-bounded index in $G$. Note that by \eqref{c-prop} ring centralizers of level $t$ have the following centralizer property with respect to representatives of lower levels $\varepsilon_i<t$:
\begin{equation}\label{group-centralizer-property}
\left[ y_j(t), x_{i_1}(\varepsilon_1),\, \ldots, x_{i_k}(\varepsilon_k) \right]=0,
\end{equation}
as soon as $k\leq N$ and $j+ i_1+\cdots+ i_k\equiv 0\, ({\rm mod}\, n)$. (There are no numbering indices here under the Index Convention, so the $x_{i_k}(\varepsilon_k)$ with the same index may be the $\varphi$-terms of different elements $x(\varepsilon_k)$.)

We now fix representatives of level $t$. For every pair $({\bf p},c)$ consisting of a pattern ${\bf p}$ of a simple $\varphi$-homogeneous commutator of weight $\leq N$ with nonzero indices and zero sum of indices and a commutator $c\in L_0$ equal to the value of this pattern on ring centralizers of level $t$ (which are $\varphi$-terms $y_j(t)$ of elements $y(t)\in A(t)$), we fix one such representation.
The $\varphi$-terms $a_j$ of elements $a\in G$ (which belong to the homogeneous component of weight 1 of $L$) occurring in this fixed representation of the commutator $c$, as well as these elements $a\in G$ themselves, are called \textit{ring and group representatives of level~$t$} and are denoted by $x_j (t)\in L_j$ (under the Index Convention) and $x(t)\in G$, respectively. Together with every ring representative $x_j(t)\in L_j$, $j\ne 0$, we fix all elements of its $H$-orbit $O(x_j(t))=\{x_j(t), x_j(t)^h,\,\ldots, x_j(t)^{h^{q-1}} \}$, as well as all elements of $G$ in the $H$-orbit $O(x(t))=\{x(t), x(t)^h,\,\ldots, x(t)^{h^{q-1}} \}$ of the corresponding group representative, which we also call (ring and group) \textit{representatives of level~$t$}. Elements of the orbit $O(x_j(t))$ are denoted by $x_{r^{s}j}(t):=x_j(t)^{h^{s}}$ under the Index Convention (since $L_i^h\leq L_{ri}$). By Lemma~\ref{invariance-group} the elements $x_{r^{s}j}(t)=x_j(t)^{h^{s}}$ are the $\varphi$-terms of the element $x(t)^{h^s}$; therefore all ring representatives are $\varphi$-terms of some group representatives. Since the total number of patterns ${\bf p}$ of weight $\leq N$ is $(n,c)$-bounded, $|L_0|\leq m^{E(n)}$, and every $H$-orbit has size~$q$, it follows that the number of representatives of level $t$ is $(m,n,c)$-bounded. \end{definition3} The construction of generalized centralizers and representatives of levels $\leq 2T-2$ is complete. It is important that ring centralizers and representatives ``in the new sense'' enjoy properties similar to those of the graded centralizers and representatives defined in \S\,\ref{s-l}. In particular, we can ``freeze'' commutators in ring centralizers as commutators in ring representatives of the same or any lower level; that is, an analogue of Lemma~\ref{zamorazhivanie} holds. Quasirepresentatives (in the Lie ring) are defined in the same fashion as in \S\,\ref{s-l}. The following lemma is an analogue of Lemma~\ref{invariance}. \begin{lemma}\label{invariance-group-2} If $y_j(t)$ is a ring centralizer of level $t$, then $y_j(t)^h$ is a centralizer of level $t$. If $\hat{x}_j(t)$ is a quasirepresentative of level $t$, then $(\hat{x}_j(t))^h$ is a quasirepresentative of level~$t$. \end{lemma} \begin{proof} The assertions of the lemma are proved by repeating word-for-word the proof of Lemma~\ref{invariance}. \end{proof} Since the centralizer property \eqref{group-centralizer-property} holds for commutators of weight $\leq N+1$, rather than $\leq U+1$ as in \eqref{centralizer-property}, in the analogue of Lemma \ref{quasirepresentatives} the weight parameter $U+1$ must be changed to $N+1$. \begin{lemma}\label{group-quasirepresentatives} Any commutator involving exactly one ring centralizer ${y}_{i}(t)$ (or quasirepresentative $\hat {x}_{i}(t)$) of level $t$ and quasirepresentatives of lower levels $< t$ is equal to $0$ if the sum of indices of its entries is equal to $0$ and the sum of their weights is at most~$N+1$. \end{lemma} \begin{proof} The proof repeats word-for-word the proof of Lemma~\ref{quasirepresentatives} with the centralizer property \eqref{centralizer-property} replaced by \eqref{group-centralizer-property}.
\end{proof} \paragraph{Induction parameter.} In contrast to Theorem~\ref{t-l}, where we proved the nilpotency of the Lie subring generated by centralizers of maximal level, in Theorem~\ref{t-g} we prove that the Lie ring $L$ itself is nilpotent of bounded class (in a certain critical situation). Therefore here we must consider commutators in arbitrary $\varphi$-terms of elements. The following parameter enables us to control the possibility of replacing $\varphi$-homogeneous subcommutators in $L_0$ by the values of the same patterns on representatives of higher levels. \begin{definition} The \textit{induction parameter} is defined to be the triple $(m,\, \bar m,\, t)$, where $m=|C_G(\varphi )|$; $\bar m=(m_1, m_2, \ldots, m_N)$ for $m_j=|C_{\gamma_j(G)/\gamma_{j+1}(G)}(\varphi )|$; and $t=|{\mathscr P}(G)|$, where ${\mathscr P}(G)$ is the set of all pairs $({\bf p}, c)$ consisting of a pattern ${\bf p}$ of weight $\leq N$ with nonzero indices of entries and zero sum of indices and a commutator $c\in L_0$ equal to the value of ${\bf p}$ on $\varphi$-terms $a_j$ of elements $a\in G$.\end{definition} We denote by $(m(B), \bar m(B), t(B))$ the triple constructed in the same fashion for an $F$-invariant subgroup $B$. In particular, if $M=L(B)\otimes_{\Bbb Z}{\Bbb Z}[\omega]$ and $M=M_0\oplus M_1\oplus \cdots \oplus M_{n-1}$ is the decomposition into the direct sum of $\varphi$-components, then ${\mathscr P}(B)$ is the set of pairs $({\bf p}, c)$ consisting of a pattern ${\bf p}$ of weight $\leq N$ with nonzero indices of entries and zero sum of indices and a commutator $c\in M_0$ equal to the value of ${\bf p}$ on $\varphi$-terms $b_j$ in $M$ of elements $b\in B$ (defined in the same fashion as $\varphi$-terms for $G$ and $L$). We introduce the inverse lexicographical order on the vectors $\bar m = (m_1, m_2, \ldots, m_N)$: $$(m_{11}, m_{12}, \ldots, m_{1N}) < (m_{21}, m_{22}, \ldots, m_{2N}) \Leftrightarrow$$ $$\Leftrightarrow \text{ for some } k \geq 1 \; \; m_{1i} = m_{2i} \text{ for all } i < k \text{ and } m_{1k} > m_{2k}.$$ We introduce the lexicographical order on the triples $(m,\, \bar m,\, t)$: \begin{align*} (m_1, \bar m_1, t_1) < (m_2, \bar m_2, t_2) \Leftrightarrow & \text{ either }m_1 < m_2, \\ & \text{ or } m_1 = m_2\text{ and } \bar m_1 < \bar m_2, \\ & \text{ or } m_1 = m_2,\, \, \, \bar m_1 = \bar m_2, \text{ and } t_1 < t_2. \end{align*} \begin{lemma}\label{nerav} For any $\varphi$-invariant subgroup $B\leq G$ we have $(m(B), \bar m(B), t(B))\leq (m(G), \bar m(G), t(G))$ with respect to the order introduced above. \end{lemma} \begin{proof} Clearly, $m(B)\leq m(G)$. Now suppose that $m(B)=m(G)$, that is, $C_B(\varphi)= C_G(\varphi)$; we claim that then $\bar m(B)\leq \bar m(G)$. Suppose that for some $1\leq k\leq N$ we have $m_i(B)=m_i(G)$ for all $i<k$ (this is vacuous for $k=1$); we need to show that $m_k(B)\geq m_k(G)$. Since $|C_G(\varphi)|=|C_B(\varphi)|$ and $m_i(B)=m_i(G)$ for all $i<k$, we have $|C_{\gamma_k(B)}(\varphi)|=|C_{\gamma_k(G)}(\varphi)|$. Since $C_{\gamma_k(B)}(\varphi) \leq C_{\gamma_k(G)}(\varphi)$, these subgroups coincide. Let $D:=C_{\gamma_k(B)}(\varphi)= C_{\gamma_k(G)}(\varphi)$.
Since $(|G|, n)=1$, we have $C_{\gamma_{k}(B)/\gamma_{k+1}(B)}(\varphi)= D \gamma_{k+1}(B)/\gamma_{k+1}(B)\cong D/D\cap \gamma_{k+1}(B)$, as well as $C_{\gamma_k(G)/\gamma_{k+1}(G)}(\varphi)= D \gamma_{k+1}(G)/\gamma_{k+1}(G)\cong D/D\cap \gamma_{k+1}(G)$. Clearly, $|D/D\cap \gamma_{k+1}(B)|\geq |D/D\cap \gamma_{k+1}(G)|$. Hence, $m_k(B)\geq m_k(G)$. Finally, suppose that $\bar m(B)=\bar m(G)$; we claim that then $t(B)\leq t(G)$. We saw above that then $C_G(\varphi)\cap \gamma_k(B)=C_G(\varphi)\cap \gamma_k(G)$ for all $k\leq N$. Hence for every $k\leq N$, $$ C_{\gamma_k(B)/\gamma_{k+1}(B)}(\varphi)\cong \left(C_G(\varphi)\cap \gamma_k(B)\right)/\left(C_G(\varphi)\cap \gamma_{k+1}(B)\right)=$$ $$=\left(C_G(\varphi)\cap \gamma_k(G)\right)/\left(C_G(\varphi)\cap \gamma_{k+1}(G)\right)\cong C_{\gamma_k(G)/\gamma_{k+1}(G)}(\varphi). $$ For $k\leq N$, let $\bar c$ and $\tilde c$ denote the images of an element $c\in C_{\gamma _k(G)}(\varphi)=C_{\gamma _k(B)}(\varphi)$ in $\gamma_{k}(G)/\gamma_{k+1}(G)$ and $\gamma_{k}(B)/\gamma_{k+1}(B)$, respectively. Clearly, the isomorphism $C_{\gamma_k(B)/\gamma_{k+1}(B)}(\varphi)\cong C_{\gamma_k(G)/\gamma_{k+1}(G)}(\varphi)$ is induced by the mapping $\tilde c\to\bar c$. It is also clear that the same mapping induces the natural isomorphism $$ \sigma _k: C_{\gamma_k(B)/\gamma _{k+1}(B)}(\varphi )\otimes _{\Bbb Z} \, {\Bbb Z} [\omega]\;\to \; C_{\gamma_k(G)/\gamma_{k+1}(G)}(\varphi )\otimes _{\Bbb Z} \, {\Bbb Z} [\omega]. $$ In view of the decompositions \begin{equation} \label{euler} L=\bigoplus_{i=0}^{E(n)-1} (L(G)\otimes \omega ^i)\quad \text{and}\quad M=\bigoplus_{i=0}^{E(n)-1} (L(B)\otimes \omega ^i), \end{equation} where $E(n)$ is the Euler function (see \eqref{euler0}), the isomorphism $\sigma _k$ is given by the mapping $$ \sigma _k:\; \sum _{i=0}^{E(n)-1}\tilde c ^i\omega ^i \;\to\; \sum _{i=0}^{E(n)-1}\bar c ^i\omega ^i, $$ where $c^0,c^1,\dots ,c^{E(n)-1}$ with numbering upper indices are elements of $C_{\gamma _k(G)}(\varphi)=C_{\gamma _k(B)}(\varphi)$. Now suppose that an element $\sum _{i=0}^{E(n)-1}\tilde c ^i\omega ^i \in C_{\gamma_k(B)/\gamma _{k+1}(B)}(\varphi )\otimes _{\Bbb Z} \, {\Bbb Z} [\omega]$ is equal to the value of a pattern ${\bf p}$ with nonzero indices and zero sum of indices of weight $k\leq N$ in $\varphi$-terms of some elements of $B/\gamma_2(B)$ regarded as elements of the Lie ring $M$. This means that \begin{equation}\label{group-f-1} \sum _{i=0}^{E(n)-1}\tilde c ^i\omega ^i = \varkappa (\tilde b^1_{i_1},\ldots, \tilde b^k_{i_k}), \end{equation} where $\varkappa $ is a $\varphi$-homogeneous commutator of weight $k$ with nonzero indices and zero sum of indices in the $\varphi$-terms $\tilde b^1_{i_1},\ldots, \tilde b^k_{i_k}$ in $M$ of elements $b^1,\ldots, b^k\in B$ with numbering upper indices. (We use tildes to denote the $\varphi$-terms in $M$ to distinguish them from the $\varphi$-terms of the same elements $b^1,\ldots, b^k$ with respect to $G$ and $L$.) In view of \eqref{euler} the equality of two elements of the Lie ring $M$ is equivalent, after collecting terms, to the equalities of the coefficients of $1, \omega, \ldots, \omega^{E(n)-1}$ --- coefficients which are elements of the Lie ring $L(B)$.
Since the $\varphi$-terms $\tilde b^s_{i_s}\in L_{i_s}$ in \eqref{group-f-1} are canonically expressed with coefficients in ${\Bbb Z} [\omega ]$ in terms of the images in $B/\gamma_2(B)$ of the elements $(b^s)^{\varphi ^j}\in B$, equation \eqref{group-f-1} in $M$ is equivalent to a certain system of congruences of group commutators $\varkappa ^{\alpha}$ (with numbering upper indices) of weight $k$ in the elements $(b^s)^{\varphi^j}\in B$: \begin{equation}\label{systema} \left\{ \begin{array}{ccl} c^0&\equiv &\prod_{\alpha} \varkappa ^{\alpha} \;({\rm mod}\, \gamma_{k+1}(B))\\ c^1&\equiv &\prod_{\alpha} \varkappa ^{\alpha}\; ({\rm mod}\, \gamma_{k+1}(B))\\ \vdots && \\ c^{E(n)-1}&\equiv &\prod_{\alpha} \varkappa ^{\alpha}\; ({\rm mod}\, \gamma_{k+1}(B)) \end{array} \right. \end{equation} We now define a mapping \begin{equation}\label{nu} \nu _k: ({\bf p}, \tilde C)\rightarrow ({\bf p},\bar C), \end{equation} where $\tilde C = \sum _{i=0}^{E(n)-1}\tilde c ^i\omega ^i$ for $c^{i}\in C_{\gamma _k(G)}(\varphi)=C_{\gamma _k(B)}(\varphi)$, and $\bar C = \sigma _k(\tilde C)=\sum _{i=0}^{E(n)-1}\bar c ^i\omega ^i$ for the same elements $c^{i}$. \begin{lemma} \label{same} If $\bar m(B)=\bar m(G)$, then $\nu _k(({\bf p},\tilde C))\in {\mathscr P}(G)$ and, moreover, $\bar C$ is the value of the same pattern ${\bf p}$ on the $\varphi$-terms $b^1_{i_1},\ldots, b^k_{i_k}$ in the Lie ring $L$ of the same elements $b^1,\ldots, b^k$ as in \eqref{group-f-1}, that is, $\bar C=\varkappa (b^1_{i_1},\ldots, b^k_{i_k})$ for the same commutator $\varkappa$ as in \eqref{group-f-1}. \end{lemma} \begin{proof} Indeed, the system of congruences \eqref{systema} remains valid if we replace $\gamma_{k+1}(B)$ by the larger subgroup $\gamma_{k+1}(G)$. But modulo $\gamma_{k+1}(G)$ this system is equivalent to the required equation $\bar C= \varkappa (b^1_{i_1},\ldots, b^k_{i_k})$ in $L$. Thus, the pair $({\bf p},\bar C)=\nu _k(({\bf p},\tilde C))$ belongs to ${\mathscr P}(G)$. \end{proof} We now complete the proof of Lemma~\ref{nerav}. As we saw above, if $\bar m(B)=\bar m(G)$, then for all $k\leq N$ the mapping $\sigma _k:\tilde C\to \bar C$ is an isomorphism. Hence the union $\nu =\bigcup _{k=1}^N\nu _k$ of the mappings \eqref{nu} is an injective mapping of ${\mathscr P}(B)$ into ${\mathscr P}(G)$. Thus, $t(B)\leq t(G)$ if $\bar m(B)=\bar m(G)$. \end{proof} It follows from the above that if $(m(B), \bar m(B), t(B))=(m(G), \bar m(G), t(G))$, then $\nu$ is a one-to-one correspondence between the sets ${\mathscr P}(B)$ and ${\mathscr P}(G)$. Applying this to the centralizers $A(i)$ we obtain the following. \begin{corollary} \label{group-cor} Suppose that $(m(A(i)), \bar m(A(i)), t(A(i)))=(m(G), \bar m(G), t(G))$ for all levels $i=1,\ldots, 2T-2$. Then every simple $\varphi$-homogeneous commutator in $C_L(\varphi)=L_0$ equal to the value of a pattern ${\bf p}$ with nonzero indices of weight $k\leq N$ on $\varphi$-terms of some elements of $G$ can be represented as the value of the same pattern ${\bf p}$ on representatives of level $s$ for every $s=0, 1,\ldots, 2T-2$.
\end{corollary} \begin{proof} Suppose that $\bar C\in C_{\gamma_k(G)/\gamma _{k+1}(G)}(\varphi )\otimes _{\Bbb Z} \, {\Bbb Z} [\omega]$ is equal to the value of the pattern ${\bf p}=[*_{i_1},\ldots,*_{i_k}]$ with nonzero indices and zero sum of indices of weight $k\leq N$ on $\varphi$-terms of some elements. Since $(m(A(s)), \bar m(A(s)), t(A(s)))=(m(G), \bar m(G), t(G))$, the mapping $\nu$ is a one-to-one correspondence between ${\mathscr P}(A(s))$ and ${\mathscr P}(G)$. Hence the pair $({\bf p}, \bar C)$ is the image under $\nu _k$ of a pair $({\bf p}, \tilde C)$ for the same pattern ${\bf p}$ and for $\tilde C=[\tilde g^1_{i_1},\ldots, \tilde g^k_{i_k}]$ in $M$, where the $\tilde g^j_{i_t}$ are $\varphi$-terms in $M=L(A(s))\otimes _{\Bbb Z} \, {\Bbb Z} [\omega]$ of elements $g^j\in A(s)$. Moreover, by Lemma~\ref{same} then $\bar C=[g^1_{i_1},\ldots, g^k_{i_k}]$ in $L$, where the $g^j_{i_t}$ are the corresponding $\varphi$-terms in $L$ of the same elements $g^j\in A(s)$. Since the elements $g^j$ belong to the generalized centralizer $A(s)$ (and therefore should be denoted $y^j(s)=g^j$), by the construction of representatives of level $s$ the commutator $\bar C=[y^1_{i_1}(s),\ldots, y^k_{i_k}(s)]$ can be ``frozen'' in level $s$, that is, represented as the value of the same pattern on fixed ring representatives: $\bar C =[x^1_{i_1}(s),\ldots, x^k_{i_k}(s)]$. \end{proof} \paragraph{Completion of the proof of Theorem \ref{t-g}.} Note that for a given value of $|C_G(\varphi)|=m(G)=m$ the number of possible triples $(m(G),\, \bar m(G),\, t(G))$ is obviously $(m,n,c)$-bounded. Therefore we can use induction on the parameter $(m(G), \bar m(G), t(G))$ in order to show that the group $G$ contains a subgroup of $(m,n,c)$-bounded index that is nilpotent of $(c,q)$-bounded class $< N$. The basis of induction is the case $m(G)=1$, which means that $C_G(F)=1$; then the group $G$ is nilpotent of class $\leq f(c,q)<N$ by the Makarenko--Khukhro--Shumyatsky theorem~\cite{khu-ma-shu}. If for some $i=1,\ldots ,2T-2$ the induction parameter for the subgroup $A(i)$ becomes smaller, that is, $(m(A(i)), \bar m(A(i)), t(A(i)))<(m(G), \bar m(G), t(G))$, then by the induction hypothesis applied to the $FH$-invariant subgroup $A(i)$ it contains a subgroup of $(m,n,c)$-bounded index in $A(i)$ that is nilpotent of class $<N$, which is a required subgroup, since the index of $A(i)$ in $G$ is also $(m,n,c)$-bounded. Therefore it is sufficient to consider the case where $(m(A(i)), \bar m(A(i)), t(A(i)))=(m(G), \bar m(G), t(G))$ for all $i=1,\ldots, 2T-2$, and we assume this in what follows. We claim that in this critical situation the whole group is nilpotent of class $<N$. For that it is sufficient to show that the Lie ring $L$ is nilpotent of class $<N$. Note that we can also assume that $C_G(F)\leq \gamma_2(G)$, since in the opposite case $C_{[G,F]}(F)<C_G(F)$, and then the result follows by the induction hypothesis applied to the $FH$-invariant subgroup $[G,F]$, whose index is $\leq m$, since $G=[G,F] C_G(F)$. Therefore also $C_L(\varphi)\leq \gamma_2(L)$ and the Lie ring $L$ is generated by the $\varphi$-terms $a_i\in L_i$, $i\neq 0$, of elements of $G$. Since the nilpotency identity can be verified on the generators of the Lie ring, it is sufficient to show that \begin{equation}\label{group-f-4} [a_{i_1}, \ldots, a_{i_N}]=0 \end{equation} for any $\varphi$-terms $a_{i_s}\in L_{i_s}$, $i_s\neq 0$, of elements of $G$.
For that, in turn, it is sufficient to show the triviality of all commutators of the form~\eqref{f1} and~\eqref{f2} in Proposition~\ref{kh-ma-shu-transformation} applied to the commutator~\eqref{group-f-4} with $t_1=T$ and $t_2=2(T-1)$, where, recall, $N=V(T, 2(T-1), c,q)$ and $T=f(c,q)+1$ for the function $f(c,q)$ in Proposition~\ref{kh-ma-shu10-1}. We now repeat, almost word-for-word, the final arguments in the proof of Theorem~\ref{t-l}, with obvious replacement of the centralizer property relative to representatives in the old sense by the similar property in the new sense. This is possible, since in the critical situation under consideration Corollary~\ref{group-cor} guarantees the possibility of representing elements in $L_0$ that are commutators in $\varphi$-terms of elements in the form of the values of the same patterns in representatives of any level $\leq 2T-2$. A small modification of the arguments, which required replacing $T-1$ by $2(T-1)$ as the value of the parameter $t_2$ in Proposition~\ref{kh-ma-shu-transformation}, is only needed for a commutator of the form~\eqref{f2}. First we consider a commutator \begin{equation}\label{group-f-5} [u_{k_1},\ldots, u_{k_s}], \end{equation} of the form~\eqref{f1} having $T$ distinct initial segments with zero sum of indices modulo~$n$, that is, with $k_1+k_2+\cdots+k_{r_i}\equiv 0\; ({\rm mod}\, n)$ for $1<r_1<r_2<\cdots<r_{T}=s$. Since $(m(A(i)), \bar m(A(i)), t(A(i)))=(m(G), \bar m(G), t(G))$ for all $i=1,\ldots, 2T-2$, by Corollary~\ref{group-cor} we can represent the commutator~\eqref{group-f-5} as the value of the same pattern on representatives of level $T$. Then, using the inclusions $A(i)\geq A(i+1)$, we successively represent the initial segments of lengths $r_{T-1}, r_{T-2},\ldots$ of the resulting commutator as the values of the same patterns on representatives of levels $T-1, T-2, \ldots$ (since all these initial segments, as well as the commutator~\eqref{group-f-5} itself, belong to $L_0$). As a result we obtain a commutator equal to~\eqref{group-f-5} and having the form $$ [x(1),\ldots, x(1),x(2),\ldots, x(2),\ldots, \ldots, x(T),\ldots, x(T)], $$ where we omitted indices to lighten the notation. We apply to this commutator the same, word-for-word, arguments that prove the equality to 0 of the commutator~\eqref{main-f-12}. We only need to replace the centralizer property~\eqref{centralizer-property} and Lemma~\ref{quasirepresentatives} used in \S\,\ref{s-l} by the centralizer property~\eqref{group-centralizer-property} and Lemma~\ref{group-quasirepresentatives}. We now consider a commutator \begin{equation}\label{group-f-6} [a_j, c^1_0,\ldots ,c^{2T-2}_0] \end{equation} of the form~\eqref{f2}, where the $c^i_0\in L_0$ with numbering upper indices are simple $\varphi$-homogeneous commutators in $\varphi$-terms in $L_j$, $j\neq 0$, of elements of $G$. For each $s=1,\ldots,2T-2$ we substitute into~\eqref{group-f-6} an expression of the commutator $c^s_0$ as the value of the same pattern of weight $<N$ on representatives of level $s$, which is possible by Corollary~\ref{group-cor}. After expanding the inner brackets we obtain a linear combination of commutators of the form \begin{equation}\label{bad} [a_j, x(1),\ldots, x(1),\,\ldots, x(2T-2),\ldots, x(2T-2)].
\end{equation} In contrast to the commutator~\eqref{main-f-22} in \S\,\ref{s-l}, here $a_j$ does not necessarily belong to a centralizer of high level, so we need a different argument. The same arguments as those applied above to the commutator~\eqref{main-f-22} make it possible to represent \eqref{bad} as a linear combination of commutators with initial segments of the form \begin{equation}\label{dvoinoi} [a_j, \hat x(1), \hat x(2),\ldots, \hat x(2T-2)], \end{equation} where $a_j$ is followed by quasirepresentatives of levels $1,\ldots, 2T-2$, one in each level. By Proposition~\ref{combinatorial} the initial segment $[a_j, \hat x(1),\hat x(2),\,\ldots, \hat x(T-1)]$ of weight $T$ of the commutator \eqref{dvoinoi} is equal to a linear combination of simple $\varphi$-homogeneous commutators in elements of the $H$-orbits of the elements $a_j, \hat x(1),\hat x(2),\,\ldots, \hat x(T-1)$, each involving exactly the same number of elements of each $H$-orbit of these elements as \eqref{dvoinoi} and having an initial segment in $L_0$. By Lemma~\ref{invariance-group-2} each element $\hat{x}_ {k_i}(i)^{h^{l}}$ is a quasirepresentative of the form $\hat{x}_{r^lk_i}(i)$ of level $i$. If such an initial segment does not contain an element of the $H$-orbit of $a_j$, then this initial segment is equal to 0 by Lemma~\ref{group-quasirepresentatives}, since the levels are all different. If, however, this initial segment in $L_0$ does contain an element of the $H$-orbit of $a_j$, then we can freeze it in level 0, that is, replace it by the value of the same pattern on representatives of level $0$. Therefore it remains to prove the equality to zero of a commutator of the form \begin{equation}\label{group-f-7} [x(0),\ldots, x(0), \hat x(\varepsilon_{r+1}),\ldots,\hat x(\varepsilon_{s}), x(T),\ldots, x(T),\ldots, x(2T-2),\ldots, x(2T-2)], \end{equation} which now involves only (quasi)representatives, and the levels $\varepsilon_{r+1},\ldots,\varepsilon_{s}$ are all less than~$T$. We now apply almost the same collecting process as the one applied above to~\eqref{main-f-22}. The difference is that we move to the left only (first from the left) quasirepresentatives of levels $\geq T$, and the collected parts are initial segments of the form $[x(0), \hat x(T),\ldots, \hat x(T+s)]$. As a result the commutator~\eqref{group-f-7} becomes equal to a linear combination of commutators in quasirepresentatives with initial segments of length $T$ of the form $[x(0), \hat x(T),\ldots, \hat x(2T-2)]$. By applying Proposition~\ref{combinatorial} to such an initial segment we obtain a linear combination of $\varphi$-homogeneous commutators in elements of the $H$-orbits of the elements $x(0), \hat x(T),\ldots, \hat x(2T-2)$, each involving exactly the same number of elements of each $H$-orbit of these elements as that initial segment and having an initial segment in $L_0$. By Lemma~\ref{invariance-group-2} elements of the $H$-orbits of the elements $x(0), \hat x(T),\ldots, \hat x(2T-2)$ are also quasirepresentatives of the same levels. These initial segments in $L_0$ are equal to $0$ by Lemma~\ref{group-quasirepresentatives}, since the levels are all different. \qed \begin{thebibliography}{88} \bibitem{bru-nap04} B.~Bruno and F.~Napolitani, A note on nilpotent-by-\v{C}ernikov groups, \emph{Glasgow Math.~J.} {\bf 46} (2004), 211--215.
\bibitem{hart} B. Hartley, A general Brauer--Fowler theorem and centralizers in locally finite groups, \emph{Pacific J. Math.} {\bf 152}, no.~1 (1992), 101--117. \bibitem{ha-is} B. Hartley and I. M. Isaacs, On characters and fixed points of coprime operator groups, \emph{J. Algebra} {\bf 131} (1990), 342--358. \bibitem{hi} G. Higman, Groups and rings which have automorphisms without non-trivial fixed elements, \emph{J.~London Math. Soc.} {\bf 32} (1957), 321--334. \bibitem{hpbl} B. Huppert and N. Blackburn, \emph{Finite groups}~II, Springer, Berlin, 1982. \bibitem{kr} V. A. Kreknin, The solubility of Lie algebras with regular automorphisms of finite period, \emph{Math. USSR Doklady} {\bf 4} (1963), 683--685. \bibitem{kr-ko} V. A. Kreknin and A. I. Kostrikin, Lie algebras with regular automorphisms, \emph{Math. USSR Doklady} {\bf 4} (1963), 355--358. \bibitem{kh1} E. I. Khukhro, Finite $p$-groups admitting an automorphism of order $p$ with a small number of fixed points, \emph{Math. Notes} {\bf 38} (1986), 867--870. \bibitem{kh2} E. I. Khukhro, Groups and Lie rings admitting an almost regular automorphism of prime order, \emph{Math. USSR Sbornik} {\bf 71} (1992), 51--63. \bibitem{kh4} E. I. Khukhro, \emph{Nilpotent groups and their automorphisms}, De Gruyter, Berlin, 1993. \bibitem{khu08} E. I.~Khukhro, Graded Lie rings with many commuting components and an application to $2$-Frobenius groups, \emph{Bull. London Math. Soc.} {\bf 40} (2008), 907--912. \bibitem{khu10al} E. I. Khukhro, Nilpotent length of a finite group admitting a Frobenius group of automorphisms with fixed-point-free kernel, \emph{Algebra Logic} {\bf 49} (2010), 551--560. \bibitem{khu12ja} E. I. Khukhro, Fitting height of a finite group with a Frobenius group of automorphisms, \emph{J.~Algebra} {\bf 366} (2012), 1--11. \bibitem{khu12al} E. I. Khukhro, Automorphisms of finite groups admitting a partition, \emph{Algebra Logic} {\bf 51} (2012), 392--411. \bibitem{khu13} E. I. Khukhro, Rank and order of a finite group admitting a Frobenius group of automorphisms, \emph{Algebra Logic} {\bf 52} (2013), to appear. \bibitem{khmk4} E. I. Khukhro and N. Yu. Makarenko, Lie rings with almost regular automorphisms, \emph{J.~Algebra} {\bf 264} (2003), 641--664. \bibitem{khu-ma-shu} E. I.~Khukhro, N. Yu. Makarenko, and P. Shumyatsky, Frobenius groups of automorphisms and their fixed points, \emph{Forum Math.}, 2011; DOI: \verb#10.1515/FORM.2011.152#; \verb#arxiv.org/abs/1010.0343#. \bibitem{kurz} H. Kurzweil, $p$-Automorphismen von aufl\"{o}sbaren $p'$-Gruppen, \emph{Math.~Z.} {\bf 120} (1971), 326--354. \bibitem{mk05} N. Yu. Makarenko, A nilpotent ideal in the Lie rings with automorphism of prime order, \emph{Siberian Math. J.} {\bf 46} (2005), 1097--1107. \bibitem{khmk1} N. Yu. Makarenko and E. I. Khukhro, On Lie rings admitting an automorphism of order~4 with few fixed points, \emph{Algebra Logic} {\bf 35} (1996), 21--43. \bibitem{khmk2} N. Yu. Makarenko and E. I. Khukhro, Nilpotent groups admitting an almost regular automorphism of order four, \emph{Algebra Logic} {\bf 35} (1996), 176--187. \bibitem{khmk3} N. Yu. Makarenko and E. I. Khukhro, Lie rings admitting automorphisms of order 4 with few fixed points. II, \emph{Algebra Logic} {\bf 37} (1998), 78--91. \bibitem{khmk5} N. Yu. Makarenko and E. I. Khukhro, Almost solubility of Lie algebras with almost regular automorphisms, \emph{J.~Algebra} {\bf 277} (2004), 370--407.
\bibitem{mak-khu13} N. Yu. Makarenko and E. I. Khukhro, Lie algebras admitting a metacyclic Frobenius group of automorphisms, \emph{Siberian Math.~J.} {\bf 54} (2013), 100--114. \bibitem{khu-ma-shu-DAN} N. Yu. Makarenko, E. I. Khukhro, and P. Shumyatsky, Fixed points of Frobenius groups of automorphisms, \emph{Doklady Math.} {\bf 83}, no.~2 (2011), 152--154. \bibitem{mak-shu10} N. Yu. Makarenko and P. Shumyatsky, Frobenius groups as groups of automorphisms, \emph{Proc. Amer. Math. Soc.} {\bf 138} (2010), 3425--3436. \bibitem{me} Y. A. Medvedev, Groups and Lie algebras with almost regular automorphisms, \emph{J.~Algebra} {\bf 164} (1994), 877--885. \bibitem{shu-a4} P. Shumyatsky, On the exponent of a finite group with an automorphism group of order twelve, \emph{J. Algebra} {\bf 331} (2011), 482--489. \bibitem{shu-law} P. Shumyatsky, Positive laws in fixed points of automorphisms of finite groups, \emph{J.~Pure Appl. Algebra} {\bf 215} (2011), 2550--2566. \bibitem{th2} J. Thompson, Automorphisms of solvable groups, \emph{J.~Algebra} {\bf 1} (1964), 259--267. \bibitem{tu} A. Turull, Fitting height of groups and of fixed points, \emph{J.~Algebra} {\bf 86} (1984), 555--566. \bibitem{kour} \emph{Unsolved Problems in Group Theory. The Kourovka Notebook}, no.~17, Institute of Mathematics, Novosibirsk, 2010. \bibitem{wang-chen} Y. M. Wang and Z.~M.~Chen, Solubility of finite groups admitting a coprime order operator group, \emph{Boll. Un. Mat. Ital. A (7)} {\bf 7}, no.~3 (1993), 325--331. \end{thebibliography} \end{document}
\begin{document} \title{Extrapolated Sequential Constraint Method for Variational Inequality over the Intersection of Fixed-Point Sets } \titlerunning{Extrapolated Sequential Constraint Method} \author{Mootta Prangprakhon \and Nimit Nimana } \authorrunning{M. Prangprakhon and N. Nimana} \institute{M. Prangprakhon \at Department of Mathematics, Faculty of Science, Khon Kaen University, Khon Kaen, 40002 Thailand, \email{mootta\[email protected]} \and N. Nimana\at Department of Mathematics, Faculty of Science, Khon Kaen University, Khon Kaen, 40002 Thailand, \email{[email protected]} } \date{Received: date / Accepted: date} \maketitle \begin{abstract} This paper deals with solving a variational inequality problem where the constrained set is given as the intersection of a number of fixed-point sets. To this end, we present an extrapolated sequential constraint method. At each iteration, the proposed method is updated based on the ideas of a hybrid conjugate gradient method, which is used to accelerate the well-known hybrid steepest descent method, and of an extrapolated cyclic cutter method for solving a common fixed point problem. We prove strong convergence of the method under suitable assumptions on the step-size sequences. We finally show the numerical efficiency of the proposed method compared to some existing methods. \keywords{Conjugate gradient direction \and Cutter \and Fixed point \and Hybrid steepest descent method \and Variational inequality} \end{abstract} \section{Introduction} \label{intro} In this paper, we consider the following variational inequality problem: \begin{problem}\label{Problem-VIP} Let $T_i:\mathcal{H}\to\mathcal{H}$, $i=1,2,\ldots,m$, be cutters with $\bigcap\limits_{i = 1}^m \fix{{T_i}}\ne \emptyset$, and let $F:\mathcal{H}\to\mathcal{H}$ be $\eta$-strongly monotone and $\kappa$-Lipschitz continuous. Then, our objective is to find a point $\bar u\in \bigcap\limits_{i = 1}^m \fix{{T_i}}$ such that \[\langle F({\bar u}),z- {\bar u}\rangle \ge 0 \indent\text{ for all $z\in \bigcap\limits_{i = 1}^m \fix{{T_i}}$. }\] \end{problem} Note that Problem \ref{Problem-VIP} has a bilevel structure: the outer level is the variational inequality governed by the operator $F$, while the constrained set is given by the inner-level problem, namely the common fixed point problem for cutter operators. We emphasize that the importance of Problem \ref{Problem-VIP} lies not only in the generality of the constrained set, but also in its various applications to modelling real-world problems such as network location problems \cite{I13,I15-2,IH14,I19-2} and machine learning \cite{I19-3}, to name but a few. For simplicity, we denote by VIP($F,C$) a variational inequality problem corresponding to an operator $F$ and a nonempty closed convex set $C$. In the literature, the simplest iterative algorithm for solving VIP($F,C$) is the well-known {\it projected gradient method} (PGM) \cite{G64}. The method essentially has the form: \begin{equation}\label{PGM} \left \{ \begin{aligned} &x^1\in C \text{ is arbitrarily chosen, } \\ &x^{n+1}=\mathrm{proj}_{C}(x^n-\mu F(x^n)), \end{aligned} \right. \end{equation} for every $n\in\mathbb{N}$, where $\mathrm{proj}_{C}:\mathcal{H}\to C$ is the metric projection onto $C$, $F:\mathcal{H}\to\mathcal{H}$ is $\eta$-strongly monotone and $\kappa$-Lipschitz continuous over $C$, and $\mu\in(0,2\eta/\kappa^2)$. It was proved in \cite{G64} that the sequence $\{x^n\}_{n=1}^\infty$ generated by (\ref{PGM}) converges strongly to the unique solution of VIP($F,C$).
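To make the update (\ref{PGM}) concrete, the following short Python sketch (an illustration added here, not taken from \cite{G64}) runs PGM on a hypothetical instance in which $F(x)=Ax-b$ with $A$ symmetric positive definite and $C$ is a box, so that $\mathrm{proj}_C$ is a componentwise clipping; all data and parameter choices below are assumptions made only for this example.
\begin{verbatim}
# Minimal sketch of the projected gradient method (PGM) for VIP(F, C).
import numpy as np

rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)           # symmetric positive definite
b = rng.standard_normal(d)
F = lambda x: A @ x - b               # eta-strongly monotone, kappa-Lipschitz

eigs = np.linalg.eigvalsh(A)
eta, kappa = eigs.min(), eigs.max()
mu = eta / kappa**2                   # any mu in (0, 2*eta/kappa**2)

lo, hi = -1.0, 1.0
proj_C = lambda x: np.clip(x, lo, hi) # metric projection onto the box C = [lo, hi]^d

x = np.zeros(d)
for n in range(500):
    x = proj_C(x - mu * F(x))         # PGM update x^{n+1} = proj_C(x^n - mu F(x^n))

# at the solution x = proj_C(x - mu*F(x)); report the fixed-point residual
print(np.linalg.norm(x - proj_C(x - mu * F(x))))
\end{verbatim}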
As PGM requires the use of the metric projection $\mathrm{proj}_{C}$, it is perfectly suitable for the case when $C$ is simple enough in the sense that $\mathrm{proj}_{C}$ has a closed-form expression. However, in many practical situations, the structure of $C$ can be highly intricate and, in consequence, $\mathrm{proj}_{C}$ is difficult to evaluate. To overcome the above limitation, Yamada \cite{Y01} proposed the celebrated {\it hybrid steepest descent method} (HSDM), which essentially replaces the use of $\mathrm{proj}_{C}$ in (\ref{PGM}) with an appropriate nonexpansive operator $T$. By interpreting $C$ as the fixed point set of $T$, the method is defined by the following: \begin{equation}\label{HSDM} \left \{ \begin{aligned} &x^1\in \mathcal{H} \text{ is arbitrarily chosen, } \\ &x^{n+1}=T(x^n-\mu\beta_nF(x^n)), \end{aligned} \right. \end{equation} for every $n\in\mathbb{N}$, where $F:\mathcal{H}\to\mathcal{H}$ is $\eta$-strongly monotone and $\kappa$-Lipschitz continuous over $\mathcal{H}$, and $\mu\in(0, 2\eta/\kappa^2)$. It is well known that, under certain conditions on $\{\beta_n\}_{n=1}^\infty\subset(0,1]$, the sequence $\{x^n\}_{n=1}^\infty$ generated by (\ref{HSDM}) converges strongly to the unique solution of VIP($F,\fix T$), where $\fix T := \{ x \in \mathcal{H}: Tx = x\}$. Note that, in the context of (\ref{HSDM}), if $F:=\nabla f$, where $f:\mathcal{H}\to\mathbb{R}$ is a convex, continuously Fr\'echet differentiable functional, then HSDM solves VIP($\nabla f,\fix T$), which is nothing else than the convex minimization problem over the fixed point set of a nonexpansive operator. On the other hand, it is well known that the {\it conjugate gradient method} (CGM) \cite{NW99, DY99, FR64,GN92} and the {\it three-term conjugate gradient method} (TCGM) \cite{ZZLI06-1,ZZLI06-2,ZZLI06-3} are highly effective in decreasing the value of the function $f$ rapidly. Motivated by these underlying ideas, several modifications combining HSDM, CGM and TCGM have been proposed in order to accelerate HSDM, namely, the {\it hybrid conjugate gradient method} (HCGM) \cite{IY09}, the {\it hybrid three-term conjugate gradient method} (HTCGM) \cite{I11} and the {\it accelerated hybrid conjugate gradient method} (AHCGM) \cite{I15}. As a matter of fact, HCGM and HTCGM are relatively similar in their basic structure and in the additional conditions needed to ensure their convergence. Their common form is as follows: \begin{equation}\label{HCGM} \left \{ \begin{aligned} &x^1\in\mathcal{H} \text{ is arbitrarily chosen, } \\ &d^1=-\nabla f(x^1),\\ & x^{n+1}=T(x^n+\mu\beta_nd^n),\\ \end{aligned} \right. \end{equation} for every $n\in\mathbb{N}$, where $\mu\in(0, 2\eta/\kappa^2)$, $\{\beta_n\}_{n=1}^\infty\subset(0,1]$ is a step-size sequence and $\{d^n\}_{n=1}^\infty\subset\mathcal{H}$ is a sequence of search directions. However, it is worth mentioning that the search directions of these methods are slightly different, that is, the search direction of HCGM is defined by \begin{equation}\label{CGM} d^{n}=-\nabla f(x^{n})+\varphi_{n}^{(1)}d^{n-1}, \end{equation} whereas the search direction of HTCGM is defined by \begin{equation}\label{TCGM} d^{n}=-\nabla f(x^{n})+\varphi_{n}^{(1)}d^{n-1}-\varphi_{n}^{(2)}w^n, \end{equation} for every $n\in\mathbb{N}$, where $\{\varphi_n^{(i)}\}_{n=1}^\infty\subset[0,\infty)$ $(i=1,2)$ and $\{w^n\}_{n=1}^\infty\subset\mathcal{H}$ is arbitrarily chosen.
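As a rough illustration of the common scheme (\ref{HCGM}) with the HCGM direction (\ref{CGM}), the following Python sketch (our own illustration, not code from \cite{IY09} or \cite{I11}) takes $T$ to be the metric projection onto the closed unit ball, a nonexpansive operator whose fixed point set is the ball, and $f$ a convex quadratic; the concrete step-size and parameter choices are assumptions made only for this example.
\begin{verbatim}
# Minimal sketch of the HCGM-type update: x^{n+1} = T(x^n + mu*beta_n*d^n),
# d^{n+1} = -grad f(x^{n+1}) + varphi_{n+1} d^n, with varphi_n -> 0.
import numpy as np

rng = np.random.default_rng(1)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)
b = rng.standard_normal(d)
grad_f = lambda x: A @ x - b          # gradient of the convex quadratic f

def T(x):
    # metric projection onto the closed unit ball; nonexpansive, Fix T = the ball
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

eigs = np.linalg.eigvalsh(A)
eta, kappa = eigs.min(), eigs.max()
mu = eta / kappa**2

x = np.zeros(d)
dvec = -grad_f(x)
for n in range(1, 2001):
    beta = 1.0 / n                    # beta_n -> 0 with sum beta_n = infinity
    phi = 1.0 / (n + 1) ** 2          # varphi_n -> 0
    x = T(x + mu * beta * dvec)       # x^{n+1} = T(x^n + mu*beta_n*d^n)
    dvec = -grad_f(x) + phi * dvec    # HCGM search direction

print(np.linalg.norm(x - T(x - grad_f(x))))  # optimality residual for VIP(grad f, Fix T)
\end{verbatim}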
Then, it was proved in \cite{IY09} and \cite{I11} that, under certain assumptions on $\{\beta_n\}_{n=1}^\infty\subset(0,1]$, each sequence generated by HCGM and HTCGM converges strongly to the unique solution of VIP($\nabla f,\fix T$) whenever $\mathop {\lim }\limits_{n \to \infty }{\varphi_n^{(i)}} = 0$ $(i=1,2)$ and the sequences $\{\nabla f(x^n)\}_{n=1}^\infty$ and $\{w^n\}_{n=1}^\infty$ are bounded. Next, let us review some sequential methods used for solving the common fixed point problem (in short, CFPP). Namely, given nonlinear operators $T_i:\mathcal{H}\to\mathcal{H}$, $i=1,2,\ldots,m$, the problem is to find $$x^*\in\bigcap\limits_{i = 1}^m \fix{T_i},$$ provided that the intersection is nonempty. A classical sequential method for solving CFPP was developed from an iterative method introduced by Kaczmarz \cite{K37}, who originally aimed to solve a linear system in $\mathbb{R}^n$. The method is referred to as the {\it cyclic projection method} (CPM) or the {\it Kaczmarz method} (KM) and has the form: \begin{equation}\label{CPM} \left \{ \begin{aligned} &x^1\in\mathcal{H} \text{ is arbitrarily chosen, } \\ & x^{n+1}=\mathrm{proj}_{C_m}\cdots \mathrm{proj}_{C_1}x^n,\\ \end{aligned} \right. \end{equation} where $\mathrm{proj}_{C_i}$ are the metric projections onto the solution sets $C_i\subset\mathcal{H}$ of the linear equations, $i=1,2,\ldots,m$. After that, the general case when $C_i\subset\mathcal{H}$, $i=1,2,\ldots,m$, are nonempty closed and convex subsets was considered by Bregman \cite{B65}. It was proved that the sequence generated by (\ref{CPM}) converges weakly to a solution of CFPP. As interest in the aforementioned results has continuously increased, it is now well known that, under some additional hypotheses, the convergence of CPM remains true for wider classes of operators such as nonexpansive operators or cutter operators \cite{O67,C12,CC11,L95,C10}. In particular, the latter class is a key tool of a method called the {\it cyclic cutter method} (CCM), whose weak convergence was proved by Bauschke and Combettes \cite{BC01}. In order to accelerate the convergence of CCM, Cegielski and Censor \cite{CC12} proposed the so-called {\it extrapolated cyclic cutter method} (ECCM), which essentially requires the use of an appropriate step-size function $\sigma:\mathcal{H}\to(0,\infty)$ to speed up the convergence behaviour numerically. Indeed, let $T_i:\mathcal{H}\to\mathcal{H}$, $i=1,2,\ldots,m$, be cutters with $\bigcap\limits_{i = 1}^m {\fix{T_i}\ne \emptyset}$, and define $T: = {T_m}{T_{m - 1}}\cdots{T_1}$, $S_0:=Id$ and $S_i:=T_iT_{i-1}\cdots T_1$; then they defined the step-size function $\sigma$ as \begin{equation}\label{sigma} \sigma (x):=\left\{ \begin{array}{ll} \displaystyle\frac{\sum_{i=1}^{m}\langle Tx-S_{i-1}x,S_{i}x-S_{i-1}x\rangle }{\Vert Tx-x\Vert ^{2}}, & \text{for \ }x\notin \bigcap\limits_{i = 1}^m \fix{T_i}, \\ 1, & \text{otherwise.} \end{array} \right. \end{equation} Moreover, it was shown that ECCM converges weakly whenever the cutter operators $T_i$, $i=1,2,\ldots,m$, satisfy the demi-closedness principle. Along the lines of \cite{CC12}, Cegielski and Nimana \cite{CN19} indicated that there are some practical situations in which the value of the extrapolation function $\sigma$ can be enormously large, which consequently may produce some instabilities in numerical experiments. In order to avoid these situations, they proposed an algorithm called the {\it modified extrapolated cyclic subgradient projection method} (MECSPM).
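Before describing that modification, we give a small Python sketch of one extrapolated cyclic cutter step $x\mapsto x+\lambda\sigma(x)(Tx-x)$ with the step size (\ref{sigma}); this sketch is only an illustration (it is not an implementation from \cite{CC12} or \cite{CN19}) and assumes that the cutters $T_i$ are metric projections onto half-spaces, with synthetic data chosen so that the intersection is nonempty.
\begin{verbatim}
# Minimal sketch of one extrapolated cyclic cutter (ECCM) step with
# half-space projections as cutters.
import numpy as np

rng = np.random.default_rng(2)
d, m = 4, 6
A = rng.standard_normal((m, d))       # half-space normals a_i (rows of A)
x_feas = rng.standard_normal(d)
c = A @ x_feas + rng.random(m)        # right-hand sides: x_feas is feasible

def T_i(i, x):
    # metric projection onto {x : <a_i, x> <= c_i} (a cutter)
    viol = A[i] @ x - c[i]
    return x if viol <= 0 else x - viol * A[i] / (A[i] @ A[i])

def eccm_step(x, lam=1.5):            # lam in (0, 2)
    S = [x]                           # S_0 x, S_1 x, ..., S_m x = Tx
    for i in range(m):
        S.append(T_i(i, S[-1]))
    Tx = S[-1]
    if np.allclose(Tx, x):
        return x
    num = sum(np.dot(Tx - S[i - 1], S[i] - S[i - 1]) for i in range(1, m + 1))
    sigma = num / np.dot(Tx - x, Tx - x)   # step-size function sigma from the text
    return x + lam * sigma * (Tx - x)      # extrapolated update

x = 10.0 * rng.standard_normal(d)
for _ in range(50):
    x = eccm_step(x)
print(np.maximum(A @ x - c, 0.0))     # constraint violations (near zero)
\end{verbatim}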
The main idea of MECSPM is to map each iterate obtained from ECCM via the last subgradient projection. If the constrained sets are nonempty closed convex sets, this modification is nothing else than projecting the sequence generated by ECCM onto the last constraint set. To conclude, the aforementioned methods used for solving the variational inequality problem and the common fixed point problem are concisely summarized in Table \ref{tab:1}. \begin{table}[h] \caption{Summary of iterative methods related to Problem \ref{Problem-VIP}.} \label{tab:1} \begin{tabular}{l c c c } \hline\noalign{\smallskip} \textbf{Reference}& \textbf{Problem} & \textbf{Method} & \textbf{Constrained Operator} \\ \noalign{\smallskip}\hline\noalign{\smallskip} Goldstein \cite{G64} & VIP$(F, C)$ & PGM & metric projection \\ Yamada \cite{Y01} & VIP$(F, \fix T)$ & HSDM & nonexpansive \\ Iiduka $\&$ Yamada \cite{IY09}& VIP$(\nabla f, \fix T)$ & HCGM & nonexpansive \\ Iiduka \cite{I11} & VIP$(\nabla f, \fix T)$ & HTCGM & nonexpansive \\ Bregman \cite{B65} & CFPP & CPM & metric projection \\ Bauschke $\&$ Combettes \cite{BC01} & CFPP & CCM & cutter \\ Cegielski $\&$ Censor \cite{CC12} & CFPP & ECCM & cutter\\ Cegielski $\&$ Nimana \cite{CN19} & CFPP & MECSPM & subgradient projection \\ {\bf This work} & VIP$\left(F, \bigcap\limits_{i = 1}^m \fix{{T_i}}\right)$& ESCoM-CGD & cutter \\ \noalign{\smallskip}\hline \end{tabular} \end{table} The main contribution of this paper is an iterative algorithm called the {\it extrapolated sequential constraint method with conjugate gradient direction} (ESCoM-CGD) used for solving the variational inequality problem over the intersection of fixed-point sets. To construct the algorithm, we utilize some ideas of the aforementioned methods, namely, HCGM \cite{IY09} and MECSPM \cite{CN19}. In the context of cutter operators and under certain conditions, we establish the strong convergence of the proposed algorithm. In order to demonstrate the effectiveness and the performance of the algorithm, we present numerical results and numerical comparisons of the algorithm with some existing methods such as HCGM and HTCGM. The remainder of this paper is organized as follows. In Section 2, we collect some useful definitions and results needed in the paper. In Section 3, we introduce ESCoM-CGD for solving Problem \ref{Problem-VIP} and subsequently analyse its convergence. In Section 4, we derive an important special case of the considered problem by means of subgradient projections. In Section 5, the efficacy of ESCoM-CGD is illustrated by some numerical results. Finally, we give some concluding remarks in Section 6. \section{Preliminaries} Throughout the paper, $\mathcal{H}$ is always a real Hilbert space with an inner product $\langle \cdot , \cdot \rangle$ and with the norm $\parallel \cdot \parallel$. For a sequence $\{ {x^n}\} _{n = 1}^\infty$, the expressions ${x^n}\rightharpoonup x$ and ${x^n} \to x$ denote that $\{ {x^n}\} _{n = 1}^\infty$ converges to $x$ weakly and converges to $x$ in norm, respectively. $Id$ represents the identity operator on $\mathcal{H}.$ An operator $F:\mathcal{H}\to\mathcal{H}$ is said to be $\eta$-{\it strongly monotone} if there exists a constant $\eta>0$ such that $\langle Fx-Fy,x-y \rangle \ge \eta \|x-y\|^2,$ for all $x,y\in\mathcal{H}$, and is said to be $\kappa$-{\it Lipschitz continuous} if there exists a constant $\kappa>0$ such that $\|Fx-Fy\|\le \kappa\|x-y\|,$ for all $x,y\in\mathcal{H}$.
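To illustrate the two definitions just given, the following small Python check (an added example, not part of the paper) verifies $\eta$-strong monotonicity and $\kappa$-Lipschitz continuity for the affine operator $F(x)=Ax+b$; here one may take $\eta=\lambda_{\min}\big((A+A^{\top})/2\big)$ and $\kappa=\|A\|_2$, and the particular matrix below is an assumption made only for the example.
\begin{verbatim}
# Numerical check of eta-strong monotonicity and kappa-Lipschitz continuity
# for the affine operator F(x) = A x + b with positive definite symmetric part.
import numpy as np

rng = np.random.default_rng(3)
d = 6
M, K = rng.standard_normal((d, d)), rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d) + (K - K.T)   # PD symmetric part plus a skew part
b = rng.standard_normal(d)
F = lambda x: A @ x + b

eta = np.linalg.eigvalsh((A + A.T) / 2).min()   # strong monotonicity constant
kappa = np.linalg.norm(A, 2)                    # Lipschitz constant (spectral norm)

for _ in range(1000):
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    assert (F(x) - F(y)) @ (x - y) >= eta * (x - y) @ (x - y) - 1e-9
    assert np.linalg.norm(F(x) - F(y)) <= kappa * np.linalg.norm(x - y) + 1e-9
print("eta =", eta, "kappa =", kappa)
\end{verbatim}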
The following lemma, found in \cite[Lemma 3.1(b)]{Y01}, will be useful in the sequel. \begin{lemma}\label{yamada} Suppose that $F:\mathcal{H}\to\mathcal{H}$ is $\eta$-strongly monotone and $\kappa$-Lipschitz continuous. For any $\mu \in (0,2\eta /{\kappa ^2})$ and $\beta \in (0,1]$, define the operator $T^\beta:\mathcal{H}\to\mathcal{H}$ by ${T^\beta }: = Id - \mu \beta F$. Then \[\|{T^\beta }x - {T^\beta }y\| \le (1 - \beta \tau )\|x - y\|,\] for all $x,y\in\mathcal{H}$, where $\tau : = 1 - \sqrt {1 + {\mu ^2}{\kappa ^2} - 2\mu \eta } \in (0,1].$ \end{lemma} \begin{remark} It is worth noticing that the well-definedness of the parameter $\tau\in(0,1]$ is guaranteed by the assumptions on $F$. Indeed, the strong monotonicity of $F$ and the Cauchy--Schwarz inequality yield that $ \eta \|x - y{\|^2}\le\langle F(x) - F(y),x - y\rangle\le\|F(x)-F(y)\|\|x-y\|,$ and hence $\eta\|x-y\|\le\|F(x)-F(y)\|.$ Due to the Lipschitz continuity of $F$, we obtain $\|F(x)-F(y)\|\le\kappa\|x-y\|,$ which implies that $$0<\eta\le\kappa.$$ Thus, we have $0<\frac{2\eta}{\kappa^2}$. Setting $\mu\in(0,\frac{2\eta}{\kappa^2})$, we obtain $$0 \le {(1 - \mu \kappa )^2} \le 1 + {\mu ^2}{\kappa ^2} - 2\mu \eta < 1.$$ Therefore $$0 < 1 - \sqrt {1 + {\mu ^2}{\kappa ^2} - 2\mu \eta } \le 1,$$ which means that $\tau\in(0,1]$. \end{remark} Below, some concepts of quasi-nonexpansivity of operators are presented for the sake of further use. More details can be found in \cite[Section 2.1.3]{C12}. An operator $T:\mathcal{H}\to\mathcal{H}$ with $\fix T\ne\emptyset$ is said to be {\it quasi-nonexpansive} if $\|Tx-z\|\le\|x-z\|,$ for all $x\in\mathcal{H}$ and for all $z\in\fix T$; is said to be {\it $\rho$-strongly quasi-nonexpansive}, where $\rho\ge0$, if $\|Tx - z{\|^2} \le \|x - z{\|^2} - \rho \|Tx - x{\|^2},$ for all $x\in\mathcal{H}$ and for all $z\in\fix T$; and is said to be a {\it cutter} if $ \langle x - Tx,z - Tx\rangle \le 0,$ for all $x\in\mathcal{H}$ and for all $z\in\fix T$. \begin{fact}\label{fact1} If $T:\mathcal{H}\to\mathcal{H}$ is quasi-nonexpansive, then $\fix T$ is closed and convex. \end{fact} \begin{fact}\label{fact2} Let $T:\mathcal{H}\to\mathcal{H}$ be a cutter. Then the following properties hold: (i) $\langle Tx - x,z - x\rangle \ge \|Tx - x{\|^2}$ for every $x\in\mathcal{H}$ and $z\in\fix T$. (ii) $T$ is 1-strongly quasi-nonexpansive. \end{fact} We recall the notion of the demi-closedness principle in the following definition. \begin{definition} An operator $T:\mathcal{H} \to \mathcal{H}$ is said to satisfy the {\it demi-closedness} (DC) {\it principle} if $T-Id$ is demi-closed at $0$, that is, for any sequence $\{x^n\}_{n=1}^\infty\subset\mathcal{H}$, if $x^n\rightharpoonup y\in\mathcal{H}$ and $\|(T-Id)x^n\| \to 0$, then $Ty=y$. \end{definition} Further, we recall that an operator $T:\mathcal{H}\to\mathcal{H}$ is said to be {\it nonexpansive} if $\|Tx-Ty\|\le\|x-y\|,$ for all $x,y\in\mathcal{H}$. It is worth mentioning that if $T:\mathcal{H}\to\mathcal{H}$ is a nonexpansive operator with $\fix T\neq\emptyset$, then the operator $T$ satisfies the DC principle (see \cite[Lemma 2]{Z71}). For an operator $T:\mathcal{H}\to\mathcal{H}$ and a real number $\lambda\in[0,2]$, the operator $T_\lambda:=(1-\lambda)Id+\lambda T$ is called a {\it relaxation} of $T$ and $\lambda$ is called a relaxation parameter. Actually, in many situations, an effective relaxation parameter greater than $2$ may yield superior convergence behaviour of an algorithm. So, we are now in a position to recall the generalized relaxation of an operator.
The generalized relaxation of an operator $T:\mathcal{H}\to\mathcal{H}$ is defined by ${T_{\sigma ,\lambda }}x: = x + \lambda \sigma (x)(Tx - x),$ where $\sigma:\mathcal{H}\to(0,\infty)$ is a step-size function. If $\sigma (x) \ge 1$ for all $x\in\mathcal{H}$, then the operator ${T_{\sigma ,\lambda }}$ is called an {\it extrapolation} of $T$. In the case that $\sigma(x)=1$ for all $x\in\mathcal{H}$, the generalized relaxation of $T$ reduces to the relaxation of $T$, that is, $T_{\sigma,\lambda}=T_\lambda$. We denote here $T_{\sigma}:=T_{\sigma,1}$. For any $x\in\mathcal{H}$, it can be noted that $$T_{\sigma,\lambda}x-x=\lambda\sigma(x)(Tx-x)=\lambda(T_\sigma x-x),$$ i.e., $T_{\sigma,\lambda}x=x+\lambda(T_\sigma x-x),$ and $$\fix T_{\sigma,\lambda}=\fix T_\sigma=\fix T,$$ for any $\lambda\ne0$. The following lemma plays an important role in proving our convergence result. The proof can be found in \cite[Section 4.10]{C12}. \begin{lemma}\label{lemma-CC12} Let ${T_i}:\mathcal{H} \to \mathcal{H}, i = 1,2,\ldots,m,$ be cutters with $\bigcap\limits_{i = 1}^m {\fix{T_i}} \ne \emptyset$, and denote $T:=T_mT_{m-1}\cdots T_1$. Let $\sigma:\mathcal{H}\to(0,\infty)$ be defined by (\ref{sigma}); then the following properties hold: \begin{itemize} \item[(i)] For any $x\notin\fix T$, we have $${\sigma }(x) \ge \frac{{\frac{1}{2}\sum\limits_{i = 1}^m {\|{S_i}x - {S_{i - 1}}x{\|^2}} }}{{\|Tx - x{\|^2}}} \ge \frac{1}{{2m}},$$ where $S_0=Id$ and $S_i=T_iT_{i-1}\cdots T_1$. \item[(ii)] The operator $T_\sigma$ is a cutter. \end{itemize} \end{lemma} \section{Algorithms and Convergence Results} In this section, we start by introducing a new iterative algorithm for solving Problem \ref{Problem-VIP} and subsequently study its convergence. For the sake of convenience, we fix the following notation: the compositions $T := {T_m}{T_{m - 1}}\cdots {T_1},$ $S_0:=Id,$ and $S_i:=T_iT_{i-1}\cdots T_1$, $i=1,2,\dots,m,$ where $T_i:\mathcal{H}\to\mathcal{H}$, $i=1,2,\ldots,m$, are cutters with $\bigcap\limits_{i = 1}^m \fix{{T_i}}\ne \emptyset$. The iterative method for solving Problem \ref{Problem-VIP} is presented as follows. \begin{algorithm}[H] \SetAlgoLined \vskip2mm \textbf{Initialization}: Given $\mu \in (0,2\eta /{\kappa ^2})$, $\{ \beta _{n}\}_{n = 1}^\infty\subset(0,1]$, $\{ \varphi _{n}\}_{n = 1}^\infty\subset[0,\infty)$ and a positive sequence $\{\lambda_n\}_{n=1}^\infty$. Choose $x^1\in \mathcal{H}$ arbitrarily and set ${d^1} = - F({x^1})$. \\ \textbf{Iterative Steps}: For a current iterate $x^n\in \mathcal{H}$ ($n\in\mathbb{N}$), calculate as follows: \textbf{Step 1}. Compute $y^n$ and the step size as $$y^n:=x^n + \mu\beta _{n}d^n$$ and \begin{equation*} \sigma (y ^n):=\left\{ \begin{array}{ll} \displaystyle\frac{\sum_{i=1}^{m}\langle Ty^n-S_{i-1}y^n,S_{i}y^n-S_{i-1}y^n\rangle }{\Vert Ty^n-y^n\Vert ^{2}}, & \text{for \ }y^n\notin \bigcap\limits_{i = 1}^m \fix{{T_i}}, \\ 1, & \text{otherwise.} \end{array} \right. \end{equation*} \textbf{Step 2}. Compute the next iterate and the search direction as \begin{eqnarray}\label{ECCM} {x^{n + 1}} &:=& {T_m}({y ^n} + {\lambda _n}\sigma ({y ^n})(Ty ^n - y^n)), \\ {d^{n + 1}}&:=& - F({x^{n + 1}}) + {\varphi _{n + 1}}{d^n}.\nonumber \end{eqnarray} Update $n:=n+1$ and return to \textbf{Step 1}. \caption{ESCoM-CGD} \label{algorithm} \vskip2mm \end{algorithm} \begin{remark}\label{rem8} \begin{itemize} \item[(i)] In the case of $m=1$, $\lambda_n\equiv1$, and $\sigma(y^n)\equiv1$, Algorithm \ref{algorithm} becomes HCGM considered in \cite{IY09}.
Furthermore, if $\varphi_n\equiv0$, Algorithm \ref{algorithm} is the same as HSDM investigated by Yamada \cite{Y01}. \item[(ii)] If $F\equiv0$, Algorithm \ref{algorithm} forms a generalization of MECSPM \cite{CN19} in the sense that in MECSPM the operators $T_i, i=1,\ldots,m$, are assumed to be subgradient projections. Moreover, if the operator $T_m$ in (\ref{ECCM}) is omitted from the method, Algorithm \ref{algorithm} coincides with ECCM \cite{CC12}. \item[(iii)] Note that Algorithm \ref{algorithm} is not feasible in the sense that the generated sequence $\{ {x^n}\} _{n = 1}^\infty$ need not belong to the constrained set. Moreover, the step size $\sigma(y^n)$ may have large values for some $n\in\mathbb{N}$. These situations may yield instabilities of the method. To avoid this situation, let us observe that if the operator $T_m$ is the metric projection onto a nonempty closed convex and bounded set $C_m$, and the initial point $x^1$ is chosen from $C_m$, then the iterates ${x^n}\in C_m$ $(n\in\mathbb{N})$, which subsequently yields the boundedness of $\{ {x^n}\} _{n = 1}^\infty$. In this case, even if we cannot guarantee the feasibility of the method, it is worth noting that the presence of $T_m$ in (\ref{ECCM}) ensures that the generated sequence $\{ {x^n}\} _{n = 1}^\infty\subset C_m$, which may improve the numerical stability of the method; see \cite[Section 4]{CN19} for further discussion and some numerical illustrations. \end{itemize} \end{remark} It is worth noting that the existence and uniqueness of the solution to Problem \ref{Problem-VIP} is guaranteed by the above conditions according to \cite[Theorem 2.3.3]{FP03}. In order to analyze the main convergence theorem, we present a series of preliminary convergence results which indicate some important properties of the sequences generated by Algorithm \ref{algorithm}. To begin with, the boundedness of the sequences is investigated in the following lemma. \begin{lemma}\label{lemma1} Let the sequences $\{ {x^n}\} _{n = 1}^\infty$, $\{ {y^n}\} _{n = 1}^\infty$ and $\{ {d^n}\} _{n = 1}^\infty$ be given by Algorithm \ref{algorithm}. Suppose that $\mathop {\lim }\limits_{n \to \infty }{\beta _n} = 0$, $\mathop {\lim }\limits_{n \to \infty } {\varphi _n} = 0$, and $\{ \lambda _{n}\}_{n = 1}^\infty \subset [\varepsilon ,2 - \varepsilon]$ for some constant $\varepsilon \in (0,1)$. If $\{ F({x^n})\} _{n = 1}^\infty$ is bounded, then the sequences $\{ {x^n}\} _{n = 1}^\infty$, $\{ {y^n}\} _{n = 1}^\infty$ and $\{ {d^n}\} _{n = 1}^\infty$ are bounded. \end{lemma} \begin{proof} Assume that $\{ F({x^n})\} _{n = 1}^\infty$ is bounded. We first show that $\{d^n\}_{n=1}^\infty$ is bounded. Accordingly, the assumption $\mathop{\lim}\limits_{n\to\infty}\varphi_n=0$ yields that there exists $n_0\in\mathbb{N}$ such that $\varphi_n\le\frac{1}{2}$ for all $n\ge n_0$. Due to the boundedness of $\{F(x^n)\}_{n=1}^\infty$, we set $M_1:=\mathop{\sup}\limits_{n\ge1}\|F(x^n)\|<\infty$ and $M_2:=\max\{M_1,\|d^{n_0}\|\}$. It is obvious that $\|d^{n_0}\|\le 2M_2$. By the definition of $\{d^n\}_{n=1}^\infty$, for all $n\ge n_0$, we have \begin{eqnarray} \|d^{n+1}\| \le \|-F(x^{n+1})\|+\varphi_{n+1}\|d^n\| \le \|F(x^{n+1})\|+\frac{1}{2}\|d^n\| \le M_2+\frac{1}{2}\|d^n\|.\label{4.1} \end{eqnarray} Now, we claim that $\|d^n\|\le2M_2$ for all $n\ge n_0$. For $n=n_0$, we immediately get $\|d^n\|=\|d^{n_0}\|\le2M_2$. Let $n\ge n_0$ and $\|d^n\|\le2M_2$.
We shall prove that $\|d^{n+1}\|\le2M_2.$ By (\ref{4.1}), we have $$\|d^{n+1}\|\le M_2+\frac{1}{2}\|d^n\|\le 2M_2.$$ Thus $\|d^n\|\le2M_2$ for all $n\ge n_0$. Putting $M^*:=\max\{\|d^1\|,\|d^2\|,\ldots,\|d^{n_0-1}\|,2M_2\}$, we obtain that $\|d^n\|\le M^*,$ for all $n\in\mathbb{N}.$ Therefore $\{d^n\}_{n=1}^\infty$ is bounded. Next, we will show that $\{ {x^n}\} _{n = 1}^\infty$ is bounded. Let $\bar u\in \bigcap\limits_{i = 1}^m \fix{{T_i}}$ be given. According to Lemma \ref{lemma-CC12}(ii), it is worth noting here that $T_\sigma$ is a cutter. By utilizing the quasi-nonexpansivity of $T_m$ and the properties of $T_\sigma$ in Fact \ref{fact2}, for all $n\in\mathbb{N}$, we have \begin{eqnarray} \|{x^{n + 1}} - \bar u{\|^2} &=& \|{T_m}({y^n} + {\lambda _n}\sigma ({y^n})(T{y^n} - {y^n})) - \bar u{\|^2}\nonumber\\ &\le& \|{y^n} + {\lambda _n}\sigma ({y^n})(T{y^n} - {y^n}) - \bar u{\|^2}\nonumber\\ &=& \|{y^n} - \bar u{\|^2} + \lambda _n^2\|\sigma ({y^n})(T{y^n} - {y^n}){\|^2} \nonumber\\ &&+ 2{\lambda _n}\langle {y^n} - \bar u,\sigma ({y^n})(T{y^n} - {y^n})\rangle\nonumber \\ &=& \|{y^n} - \bar u{\|^2} + \lambda _n^2\|{T_\sigma }{y^n} - {y^n}{\|^2} + 2{\lambda _n}\langle {y^n} - \bar u,{T_\sigma }{y^n} - {y^n}\rangle\nonumber \\ &\le& \|{y^n} - \bar u{\|^2} + \lambda _n^2\|{T_\sigma }{y^n} - {y^n}{\|^2} - 2{\lambda _n}\|{T_\sigma }{y^n} - {y^n}{\|^2}\nonumber\\ &=& \|{y^n} - \bar u{\|^2} - {\lambda _n}(2 - {\lambda _n})\|{T_\sigma }{y^n} - {y^n}{\|^2}.\label{lem1-3} \end{eqnarray} Since $\{ \lambda _{n}\}_{n = 1}^\infty \subset [\varepsilon ,2 - \varepsilon]$ for some constant $\varepsilon \in (0,1)$, we obtain that \begin{equation}\label{lem1-4} \|{x^{n + 1}} - \bar u\| \le \|{y^n} - \bar u\|. \end{equation} Then, for all $n\ge2$, we have \begin{eqnarray} \|{y^n} - \bar u\| &=& \|{x^n} + \mu {\beta _{n}}{d^n} - \bar u\|\nonumber\\ &=& \|{x^n} + \mu {\beta _{n}}( - F({x^n}) + {\varphi _n}{d^{n - 1}}) - \bar u\|\nonumber\\ &=& \|({x^n} - \mu {\beta _{n}}F({x^n})) - (\bar u - \mu {\beta _{n }}F(\bar u)) + \mu {\beta _{n }}({\varphi _n}{d^{n - 1}} - F(\bar u))\|\label{lem1-5}\nonumber\\ &\le& \|({x^n} - \mu {\beta _{n}}F({x^n})) - (\bar u - \mu {\beta _{n }}F(\bar u))\| + \mu {\beta _{n }}\|{\varphi _n}{d^{n - 1}} - F(\bar u)\|.\label{lem1-4*} \end{eqnarray} By using the inequalities (\ref{lem1-4}), (\ref{lem1-4*}) and Lemma \ref{yamada}, for all $n\ge2$, we obtain \begin{equation} \|{x^{n+1}} - \bar u\| \le (1 - {\beta _{n}}\tau )\|{x^n} - \bar u\| + \mu {\beta _{n}}\|{\varphi _n}{d^{n - 1}} - F(\bar u)\|,\nonumber \end{equation} where $\tau = 1 - \sqrt {1 + {\mu ^2}{\kappa ^2} - 2\mu \eta } \in (0,1].$ According to the boundedness of $\{d^{n}\}_{n = 1}^\infty$, we set ${M_3}: = \mathop {\sup }\limits_{n \ge 1} \|{\varphi _n}{d^{n - 1}} - F(\bar u)\| < \infty$ and $M: = \max \{ {M_3},\|F(\bar u)\|\}$. The inequality above becomes \begin{equation}\label{1v} \|{x^{n + 1}} - \bar u\| \le (1 - {\beta _{n}}\tau )\|{x^n} - \bar u\| + {\beta _{n}}\tau \left( {\frac{{\mu M}}{\tau }} \right) \text{ for all $n\ge 2$. } \end{equation} However, one can easily check that the inequality (\ref{1v}) also holds true for $n=1$. By induction, we then obtain that \[\|{x^n} - \bar u\| \le \max \{ \|{x^1} - \bar u\|,\frac{{\mu M}}{\tau }\} \text{ for all $n\in\mathbb{N}$. }\] Thus $\{x^{n}\}_{n = 1}^\infty$ is bounded as desired. Consequently, $\{y^{n}\}_{n = 1}^\infty$ is also bounded.
\qed \end{proof} Before continuing the analysis, for $n\in\mathbb{N}$ and $\bar u\in \bigcap\limits_{i = 1}^m \fix{{T_i}}$, let us denote the following terms: \[\xi_n:={\mu ^2}\beta _{n}^2\|{d^n}{\|^2} + 2\mu {\beta _{n}}\|{x^n} - \bar u\|\|{d^n}\|\quad \text{ and }\quad {\alpha _n}: = {\beta _{n}}\tau.\] In particular, for $n\ge2$, we denote \[{\delta _n}: = \frac{{2\mu }}{\tau }\left( {{\varphi _n}\langle {y^n} - \bar u,{d^{n - 1}}\rangle + \langle {y^n} - \bar u, - F(\bar u)\rangle } \right).\] These notations give rise to the following lemmas, which demonstrate some crucial inequalities needed for proving our main convergence result. \begin{lemma}\label{lemma2}Let the sequences $\{ {x^n}\} _{n = 1}^\infty$, $\{ {y^n}\} _{n = 1}^\infty$ and $\{ {d^n}\} _{n = 1}^\infty$ be given by Algorithm \ref{algorithm}. Suppose that $\{ \lambda _{n}\}_{n = 1}^\infty \subset [\varepsilon ,2 - \varepsilon]$ for some constant $\varepsilon \in (0,1)$. Then, for all $n\in\mathbb{N}$ and $\bar u\in \bigcap\limits_{i = 1}^m \fix{{T_i}}$, there holds: \begin{eqnarray*} \|{x^{n + 1}} - \bar u{\|^2} \le \|{x^n} - \bar u{\|^2} - \frac{{{\lambda _n}(2 - {\lambda _n})}}{{4m}}\sum\limits_{i = 1}^m {\parallel {S_i}{y^n} - {S_{i - 1}}{y^n}{\parallel ^2}} + {\xi _n}. \end{eqnarray*} \end{lemma} \begin{proof} By invoking the inequality (\ref{lem1-3}) and the definition of $T_\sigma$, we have \begin{eqnarray} \|{x^{n + 1}} - \bar u{\|^2} &\le& \|{y^n} - \bar u{\|^2} - {\lambda _n}(2 - {\lambda _n})\|{T_\sigma }{y^n} - {y^n}{\|^2}\nonumber\\ &\le& \|{x^n} + \mu {\beta _{n}}{d^n} - \bar u{\|^2} - {\lambda _n}(2 - {\lambda _n})\|{T_\sigma }{y^n} - {y^n}{\|^2}\nonumber\\ &=& \|{x^n} - \bar u{\|^2} + {\mu ^2}\beta _{n }^2\|{d^n}{\|^2} + 2\mu {\beta _{n }}\langle {x^n} - \bar u,{d^n}\rangle\nonumber\\ && - {\lambda _n}(2 - {\lambda _n})\|{T_\sigma }{y^n} - {y^n}{\|^2}\nonumber\\ &\le& \|{x^n} - \bar u{\|^2} + {\mu ^2}\beta _{n }^2\|{d^n}{\|^2} + 2\mu\beta_n\|{x^n} - \bar u\|\|{d^n}\|\nonumber\\ && - {\lambda _n}(2 - {\lambda _n})\|{T_\sigma }{y^n} - {y^n}{\|^2}\nonumber\\ &=& \|{x^n} - \bar u{\|^2} - {\lambda _n}(2 - {\lambda _n})\|{T_\sigma }{y^n} - {y^n}{\|^2} + {\xi _n}\nonumber\\ &=& \|{x^n} - \bar u{\|^2} - {\lambda _n}(2 - {\lambda _n})\sigma^2 ({y^n})\|T{y^n} - {y^n}{\|^2} + {\xi _n}.\nonumber \end{eqnarray} Thanks to Lemma \ref{lemma-CC12}(i), we finally have {\small \begin{eqnarray} \|{x^{n + 1}} - \bar u{\|^2} &\le& \|{x^n} - \bar u{\|^2} - {\lambda _n}(2 - {\lambda _n})\frac{{\frac{1}{4}{{\left( {\sum\limits_{i = 1}^m {\|{S_i}{y^n} - {S_{i - 1}}{y^n}{\|^2}} } \right)}^2}}}{{\|T{y^n} - {y^n}{\|^4}}}\|T{y^n} - {y^n}{\|^2} + {\xi _n}\nonumber\\ &=& \|{x^n} - \bar u{\|^2} - {\lambda _n}(2 - {\lambda _n})\frac{{\frac{1}{4}{{\left( {\sum\limits_{i = 1}^m {\|{S_i}{y^n} - {S_{i - 1}}{y^n}{\|^2}} } \right)}^2}}}{{\|T{y^n} - {y^n}{\|^2}}} + {\xi _n}\nonumber\\ &\le& \|{x^n} - \bar u{\|^2} - \frac{{{\lambda _n}(2 - {\lambda _n})}}{{4m}}\sum\limits_{i = 1}^m {\|{S_i}{y^n} - {S_{i - 1}}{y^n}{\|^2}} + {\xi _n},\nonumber \end{eqnarray} } which completes the proof. \qed \end{proof} \begin{lemma}\label{lemma4} Let the sequences $\{ {x^n}\} _{n = 1}^\infty$, $\{ {y^n}\} _{n = 1}^\infty$ and $\{ {d^n}\} _{n = 1}^\infty$ be given by Algorithm \ref{algorithm}. Suppose that $\{ \lambda _{n}\}_{n = 1}^\infty \subset [\varepsilon ,2 - \varepsilon]$ for some constant $\varepsilon \in (0,1)$.
Then, for all $n\ge2$ and $\bar u\in \bigcap\limits_{i = 1}^m \fix{{T_i}}$, there holds: \begin{eqnarray*} \|{x^{n + 1}} - \bar u{\|^2} \le (1 - {\alpha _n})\|{x^n} - \bar u{\|^2} + {\alpha _n}{\delta _n}. \end{eqnarray*} \end{lemma} \begin{proof} By utilizing the inequalities (\ref{lem1-4}), (\ref{lem1-5}), the fact that $\|x+y\|^2\le\|x\|^2+2\langle y, x+y\rangle$ for all $x,y\in\mathcal{H}$, and Lemma \ref{yamada}, for all $n\ge2$, we have \begin{eqnarray} \|{x^{n + 1}} - \bar u{\|^2} &\le& \|{y^n} - \bar u{\|^2}\nonumber\\ &\le& \|({x^n} - \mu {\beta _{n}}F({x^n})) - (\bar u - \mu {\beta _{n }}F(\bar u)) + \mu {\beta _{n}}({\varphi _n}{d^{n - 1}} - F(\bar u)){\|^2}\nonumber\\ &\le& \|({x^n} - \mu {\beta _{n }}F({x^n})) - (\bar u - \mu {\beta _{n }}F(\bar u)){\|^2} \nonumber \\ &&+ 2\langle {x^n} - \mu {\beta _{n }}F({x^n}) - \bar u+ \mu {\beta _{n }}{\varphi _n}{d^{n - 1}},\mu {\beta _{n }}({\varphi _n}{d^{n - 1}} - F(\bar u))\rangle\nonumber \\ &\le& (1 - {\beta _{n }}\tau )\|{x^n} - \bar u{\|^2} \nonumber\\ &&+ 2\mu {\beta _{n }}\langle {x^n} + \mu {\beta _{n }}( - F({x^n}) + {\varphi _n}{d^{n - 1}}) - \bar u,{\varphi _n}{d^{n - 1}} - F(\bar u)\rangle\nonumber \\ &=& (1 - {\beta _{n}}\tau )\|{x^n} - \bar u{\|^2} + 2\mu {\beta _{n}}\langle {y^n} - \bar u,{\varphi _n}{d^{n - 1}} - F(\bar u)\rangle \nonumber\\ &=& (1 - {\beta _{n}}\tau )\|{x^n} - \bar u{\|^2} + 2\mu {\beta _{n}}{\varphi _n}\langle {y^n} - \bar u,{d^{n - 1}}\rangle + 2\mu {\beta _{n}}\langle {y^n} - \bar u, - F(\bar u)\rangle\nonumber \\ &=& (1 - {\beta _{n}}\tau )\|{x^n} - \bar u{\|^2} + {\beta _{n}}\tau \left[ {\frac{{2\mu }}{\tau }\left( {{\varphi _n}\langle {y^n} - \bar u,{d^{n - 1}}\rangle + \langle {y^n} - \bar u, - F(\bar u)\rangle } \right)} \right]\nonumber\\ &=& (1 - {\alpha _n})\|{x^n} - \bar u{\|^2} + {\alpha _n}{\delta _n},\nonumber \end{eqnarray} which completes the proof. \qed \end{proof} We present the following lemma, which is an important tool for proving our main result. A proof of the lemma can be found in \cite[Lemma 2.5]{X02}. \begin{lemma}\label{xu} Let $\{ {a_n}\} _{n = 1}^\infty$ be a sequence of nonnegative real numbers such that ${a_{n + 1}} \le (1 - {\alpha _n}){a_n} + {\alpha _n}{\delta _n},$ where the sequences $\{ {\alpha _n}\} _{n = 1}^\infty\subset [0,1]$ and $\{ {\delta _n}\} _{n = 1}^\infty\subset\mathbb{R}$ satisfy $\sum\limits_{n = 1}^\infty {{\alpha _n}} = \infty$ and $\limsup\limits_{n\rightarrow\infty}\delta_n\le 0$. Then $\mathop {\lim }\limits_{n \to \infty } {a_n} = 0$. \end{lemma} The following theorem is our main convergence result. \begin{theorem}\label{main-thm} Let the sequence $\{ {x^n}\} _{n = 1}^\infty$ be given by Algorithm \ref{algorithm}. Suppose that $\mathop {\lim }\limits_{n \to \infty }{\beta _n} = 0$, $\sum\limits_{n = 1}^\infty {{\beta _n}}= \infty$, $\mathop {\lim }\limits_{n \to \infty } {\varphi _n} = 0$, and $\{ \lambda _{n}\}_{n = 1}^\infty \subset [\varepsilon ,2 - \varepsilon]$ for some constant $\varepsilon \in (0,1)$. If $\{ F({x^n})\} _{n = 1}^\infty$ is bounded and $\{ {T_i}\} _{i = 1}^m$ satisfies the DC principle, then the sequence $\{ {x^n}\} _{n = 1}^\infty$ converges strongly to $\bar u$, the unique solution of Problem \ref{Problem-VIP}. \end{theorem} \begin{proof} Assume that $\{ F({x^n})\} _{n = 1}^\infty$ is bounded and $\{ {T_i}\} _{i = 1}^m$ satisfies the DC principle. For simplicity, we denote ${a_n}: = \|{x^n} - \bar u{\|^2}$.
Due to Lemma \ref{lemma1} and the assumption $\mathop {\lim }\limits_{n \to \infty }\beta_n=0$, we obtain $$\mathop {\lim }\limits_{n \to \infty }\xi_n=0.$$ To prove the strong convergence asserted in the theorem, we consider the following two cases for the sequence $\{a_n\} _{n = 1}^\infty$ according to its behavior. \indent\textbf{Case 1.} Suppose that there exists ${n_0}\in\mathbb{N}$ such that ${a_{n + 1}} < {a_n}$ for all $n \ge {n_0}$. It is clear that $\{a_n\} _{n = 1}^\infty$ is convergent. By utilizing Lemma \ref{lemma2} and the fact that $\mathop {\lim }\limits_{n \to \infty }\xi_n=0$, we obtain \begin{eqnarray*} 0 &\le& \limsup_{n \to \infty } \frac{{{\lambda _n}(2 - {\lambda _n})}}{{4m}}\sum\limits_{i = 1}^m {\|{S_i}{y^n} - {S_{i - 1}}{y^n}{\|^2}}\\ &\le&\limsup_{n \to \infty } \left( {{a_n} - {a_{n + 1}}} + \xi_n\right) = \lim_{n \to \infty } a_n - \lim_{n \to \infty }a_{n+1}+\lim_{n \to \infty }\xi_n= 0.\nonumber \end{eqnarray*} Thus, we have \begin{equation*}\label{18} \mathop {\lim }\limits_{n \to \infty } \frac{{{\lambda _n}(2 - {\lambda _n})}}{{4m}}\sum\limits_{i = 1}^m {\|{S_i}{y^n} - {S_{i - 1}}{y^n}{\|^2}} = 0. \end{equation*} Recalling that ${\lambda _n} \in [\varepsilon ,2 - \varepsilon ]$ for some constant $\varepsilon \in (0,1),$ we then have \[{\lambda _n} \le 2 - \varepsilon \Rightarrow \varepsilon \le 2 - {\lambda _n} \Rightarrow {\lambda _n}\varepsilon \le {\lambda _n}(2 - {\lambda _n}) \Rightarrow {\varepsilon ^2} \le {\lambda _n}(2 - {\lambda _n}),\] and hence \[\mathop {\lim }\limits_{n \to \infty } \sum\limits_{i = 1}^m {\|{S_i}{y^n} - {S_{i - 1}}{y^n}{\|^2}} = 0,\] which implies that, for all $i=1,2,\ldots,m,$ \begin{equation}\label{20} \mathop {\lim }\limits_{n \to \infty } \|{S_i}{y^n} - {S_{i - 1}}{y^n}\| = 0. \end{equation} On the other hand, since $\{ {y^n}\} _{n = 1}^\infty$ is a bounded sequence, so is the sequence $\{ \langle {y^n} - \bar u, - F(\bar u)\rangle \} _{n = 1}^\infty$. Now, let $\{ {y^{n_k}}\} _{k = 1}^\infty$ be a subsequence of $\{ {y^n}\} _{n = 1}^\infty$ such that \[\limsup\limits_{n\rightarrow\infty}\langle {y^n} - \bar u, - F(\bar u)\rangle = \mathop {\lim }\limits_{k \to \infty } \langle {y^{n_k}} - \bar u, - F(\bar u)\rangle. \] Due to the boundedness of the sequence $\{ {y^{n_k}}\} _{k = 1}^\infty$, there exist a weak cluster point $z\in\mathcal{H}$ and a subsequence $\{ {y^{{n_{{k_j}}}}}\} _{j = 1}^\infty$ of $\{ {y^{n_k}}\} _{k = 1}^\infty$ such that ${y^{{n_{{k_j}}}}}\rightharpoonup z\in\mathcal{H}$. According to (\ref{20}), let us note that \begin{equation*}\label{wc-ii-2} \mathop {\lim }\limits_{j \to \infty }\| ({T_1} - Id){y^{{n_{{k_j}}}}}\|= \mathop {\lim }\limits_{j \to \infty }\| {S_1}{y^{{n_{{k_j}}}}} - {S_0}{y^{{n_{{k_j}}}}}\| = 0. \end{equation*} Then the DC principle of $T_1$ yields that $z\in\fix T_1.$ Further, we note that the relation ${y^{{n_{{k_j}}}}}\rightharpoonup z$ and the fact that $\mathop {\lim }\limits_{j \to \infty }\| {T_1}{y^{{n_{{k_j}}}}} - {y^{{n_{{k_j}}}}} \| =0$ lead to ${T_1}{y^{{n_{{k_j}}}}} \rightharpoonup z.$ Furthermore, we observe that \begin{equation*}\label{wc-ii-6} \mathop {\lim }\limits_{j \to \infty }\| ({T_2} - Id){T_1}{y^{{n_{{k_j}}}}}\| = \mathop {\lim }\limits_{j \to \infty } \| {S_2}{y^{{n_{{k_j}}}}} - {S_1}{y^{{n_{{k_j}}}}}\| = 0. \end{equation*} By invoking the DC principle of $T_2$, we then obtain $z\in\fix T_2.$ By repeating the same argument, we obtain that $z\in\fix T_i$ for all $i=1,2,\ldots,m,$ that is, $z \in\bigcap_{i=1}^m\fix T_i$. As $\bar u$ is the unique solution to Problem \ref{Problem-VIP}, we have \begin{eqnarray}\label{limsupvar}\limsup_{n\rightarrow\infty} \langle {y^n} - \bar u, - F(\bar u)\rangle &=& \lim_{k \to \infty } \langle {y^{n_k}} - \bar u, - F(\bar u)\rangle\nonumber\\ &=& \lim_{j \to \infty } \langle {y^{{n_{{k_j}}}}} - \bar u, - F(\bar u)\rangle = \langle z - \bar u, - F(\bar u)\rangle \le 0. \end{eqnarray} In view of the definition of $\delta_n$, we note that \begin{eqnarray} {\delta _n} \le \frac{{2\mu K}}{\tau }{\varphi _n} + \frac{{2\mu }}{\tau }\langle {y^n} - \bar u, - F(\bar u)\rangle, \nonumber \end{eqnarray} where $K:=\mathop {\sup }\limits_{n \ge 2} \langle {y^n} - \bar u,{d^{n - 1}}\rangle < + \infty$. Therefore, $\mathop {\lim }\limits_{n \to \infty } {\varphi _n} = 0$ and the inequality (\ref{limsupvar}) lead to \begin{eqnarray}\label{limdelta} \limsup\limits_{n\rightarrow\infty}{\delta _n} &\le& \frac{{2\mu K}}{\tau }\mathop {\lim }\limits_{n \to \infty } {\varphi _n} + \frac{{2\mu }}{\tau }\limsup\limits_{n\rightarrow\infty}\langle {y^n} - \bar u, - F(\bar u)\rangle\le 0.\label{21-1} \end{eqnarray} According to Lemma \ref{lemma4}, we have, for all $n\ge 2$, that $$a_{n+1} \le (1 - {\alpha _n})a_n + {\alpha _n}{\delta _n}.$$ To reach the conclusion in this case, we observe that $\{\alpha_n\}_{n=1}^\infty\subset[0,1]$ and $\sum\limits_{n = 1}^\infty {{\alpha _n}}=\infty$, which follow from the assumptions on $\beta_n$ and the property of $\tau$. Therefore, by applying this and the relation (\ref{limdelta}), Lemma \ref{xu} yields that $\mathop {\lim }\limits_{n \to \infty }\|{x^n}-\bar u\| = 0$. \textbf{Case 2.} Suppose that for any $n_0$, there exists an integer $k\ge n_0$ such that ${a_{{k}}} \le {a_{{k} + 1}}$. For $n$ large enough, we define a set of indices by \begin{equation*}\label{jn} {J_n}: = \left\{ {k \in [{n_0},n]:{a_k} \le {a_{k + 1}}} \right\}. \end{equation*} Also, for each $n\ge n_0$, we denote \begin{equation*}\label{nun} \nu(n):=\max J_n. \end{equation*} From the above definitions, we observe that $J_n$ is nonempty for all sufficiently large $n$. Due to ${J_n} \subset {J_{n + 1}}$, we get that $\{\nu(n)\}_{n\ge n_0}$ is nondecreasing and $\nu(n)\to\infty$ as $n\to\infty$. Furthermore, it is clear that, for all $n\ge n_0$, \begin{equation}\label{22} a_{\nu(n)}\le a_{\nu(n)+1}. \end{equation} Now, let us notice from the definition of $J_n$ that, for all $n\ge n_0$, we have $\nu(n)\le n$, which can be considered in the following cases. If $\nu(n)= n$, we have $a_{n}=a_{\nu(n)}\le a_{\nu(n)+1}$. If $\nu(n)= n-1$, we have $a_n=a_{\nu(n) +1}$. If $\nu(n)<n-1$, we have ${a_{\nu (n) + 1}} > {a_{\nu (n) + 2}} > \cdots > {a_{n - 1}} > {a_n}$; indeed, if ${a_{\nu (n) + 1}} \le {a_{\nu (n) + 2}}$ held, then the definition of $J_n$ would yield that $\nu (n) + 1\in J_n$, which contradicts $\nu(n)=\max{J_n}$, and the same argument applies to the remaining indices. Therefore, in all of the above cases, we obtain, for all $n\ge n_0$, that \begin{equation}\label{23} a_{n}\le a_{\nu(n)+1}.
\end{equation} Next, utilizing Lemma \ref{lemma2} and the inequality (\ref{22}) leads to \[0 \le {a_{\nu (n) + 1}} - {a_{\nu(n)}} \le - \frac{{{\lambda _{\nu(n)}}\left(2 - {\lambda _{\nu(n)}}\right)}}{{4m}}\sum\limits_{i = 1}^m {\| {S_i}{y^{\nu(n)}} - {S_{i - 1}}{{y^{\nu(n)}}}{\| ^2}} + {\xi _{\nu(n)}},\] and hence \[\frac{{{\lambda _{\nu(n)}}\left(2 - {\lambda _{\nu(n)}}\right)}}{{4m}}\sum\limits_{i = 1}^m {\| {S_i}{y^{\nu(n)}} - {S_{i - 1}}{{y^{\nu(n)}}}{\| ^2}}\leq{\xi _{\nu(n)}},\] for all $n\ge n_0$. According to the fact that $\mathop{\lim }\limits_{n \to \infty }\xi_{\nu(n)}=0$ and the fact that $\varepsilon ^2\le{\lambda _{\nu(n)}}(2 - {\lambda _{\nu(n)}})$, we obtain, for all $i=1,\ldots,m$, \begin{equation}\label{24} \mathop {\lim }\limits_{n \to \infty }\|{S_i}{y^{\nu (n)}} - {S_{i - 1}}{y^{\nu (n)}}\| = 0. \end{equation} Now, let $\{ {y^{{\nu(n_k)}}}\} _{k = 1}^\infty \subset \{ {y^{\nu(n)}}\} _{n = 1}^\infty$ be a subsequence such that \[\limsup\limits_{n\rightarrow\infty}\langle {y^{\nu(n)}} - \bar u, - F(\bar u)\rangle = \mathop {\lim }\limits_{k \to \infty } \langle {y^{{\nu(n_k)}}} - \bar u, - F(\bar u)\rangle.\] By proceeding with arguments similar to those used in {\bf Case 1}, the relation (\ref{24}) and the DC principle of each $T_i$ yield that there exist $z\in\bigcap_{i=1}^m\fix T_i$ and a subsequence $\{ {y^{{\nu(n_{{k_j}})}}}\} _{j = 1}^\infty$ of $\{ {y^{{\nu{(n_k)}}}}\} _{k = 1}^\infty$ such that $ {y^{{\nu(n_{{k_j}})}}}\rightharpoonup z$. Furthermore, we have \begin{eqnarray*}\limsup\limits_{n\rightarrow\infty}\langle {y^{\nu (n)}} - \bar u, - F(\bar u)\rangle &=& \mathop {\lim }\limits_{k \to \infty } \langle {y^{\nu ({n_k})}} - \bar u, - F(\bar u)\rangle\\ &=& \mathop {\lim }\limits_{j \to \infty } \langle {y^{\nu ({n_{{k_j}}})}} - \bar u, - F(\bar u)\rangle = \langle z - \bar u, - F(\bar u)\rangle \le 0. \end{eqnarray*} Consequently, we obtain \begin{equation}\label{25} \limsup\limits_{n\rightarrow\infty}\delta_{\nu(n)}\le 0. \end{equation} In the light of Lemma \ref{lemma4}, we have \[ {a_{\nu (n) + 1}} \le \left( {1 - {\alpha _{\nu (n)}}} \right){a_{\nu (n)}} + {\alpha _{\nu (n)}}{\delta _{\nu (n)}},\] and hence \begin{eqnarray} {a_{\nu (n) + 1}} - {a_{\nu (n)}} &\le& {\alpha _{\nu (n)}}\left( {{\delta _{\nu (n)}} - {a_{\nu (n)}}} \right). \nonumber \end{eqnarray} Combining this with the inequality (\ref{22}) and the fact that $\alpha _{\nu (n)}>0$, we obtain \[{a_{\nu (n)}} \le {\delta _{\nu (n)}}.\] Thanks to (\ref{25}), we have \[\limsup\limits_{n\rightarrow\infty}{a_{\nu (n)}} \le \limsup\limits_{n\rightarrow\infty}{\delta _{\nu (n)}} \le 0,\] which leads to \[\mathop {\lim }\limits_{n \to \infty } {a_{\nu (n)}} = 0.\] By utilizing the inequality (\ref{23}) together with this, we have \[0 \le \limsup\limits_{n\rightarrow\infty}{a_n} \le \limsup\limits_{n\rightarrow\infty}{a_{\nu (n) + 1}} = 0.\] Hence, we finally obtain that $\mathop {\lim }\limits_{n \to \infty } {a_n} = 0$ as desired. \qed \end{proof} \begin{remark} \begin{itemize} \item[(i)] The step-size sequences $\{\varphi_n\}_{n=1}^\infty$ and $\{\beta_n\}_{n=1}^\infty$ in Theorem \ref{main-thm} can be chosen, for instance, as $\varphi_n=\frac{1}{(n+1)^a}$ with $a>0$ and $\beta_n=\frac{1}{(n+1)^b}$ with $0<b\leq1$ for all $n\in\mathbb{N}$. \item[(ii)] It can be noted that the DC principle assumed in Theorem \ref{main-thm} is satisfied in many cases, for instance, when the operators $T_i$, $i=1,\ldots,m$, are nonexpansive or, in particular, metric projections onto closed convex sets.
Moreover, this still holds true when the operators $T_i$, $i=1,\ldots,m$, are subgradient projections of continuous convex functions which are Lipschitz continuous on bounded subsets; this case is further discussed in the next section. \end{itemize} \end{remark} \section{Variational Inequality Problem with Functional Constraints} In this section, by applying the results obtained in the previous section, we consider solving the variational inequality problem over a finite family of continuous convex functional constraints together with a simple closed convex and bounded constraint. Let $C_i:=\{x\in\mathcal{H}:c_i(x)\leq0\}$ be a sublevel set of a continuous and convex function $c_i:\mathcal{H}\to\mathbb{R}$, $i=1,\ldots,m-1$, and let $C_m\subset\mathcal{H}$ be a simple closed convex and bounded set. Let $F:\mathcal{H}\to\mathcal{H}$ be $\eta$-strongly monotone and $\kappa$-Lipschitz continuous. We consider the variational inequality problem of finding a point $\bar u\in \bigcap\limits_{i = 1}^m C_i$ such that \begin{eqnarray}\label{vip-2}\langle F({\bar u}),z- {\bar u}\rangle \ge 0 \quad\text{ for all } z\in \bigcap\limits_{i = 1}^m C_i. \end{eqnarray} Assume that $\bigcap\limits_{i = 1}^m C_i\neq\emptyset$. For each $i=1,\ldots,m-1$, since $C_i$ is the sublevel set of the function $c_i$, we define the operator $T_i:\mathcal{H}\to\mathcal{H}$ to be the subgradient projection relative to $c_i$, denoted by $P_{c_i}:\mathcal{H}\to\mathcal{H}$, namely, for every $x\in\mathcal{H}$, \begin{equation*} P_{c_i}(x):=\left \{ \begin{aligned} &x-\frac{c_i(x)}{\|g_i(x)\|^2}g_i(x) \quad \textrm{if } c_i(x)>0,\\ &x\hspace{3.05cm} \textrm{ otherwise,} \end{aligned} \right. \end{equation*} where $g_i(x)\in\partial c_i(x):=\{g\in\mathcal{H}:\left\langle g,y-x\right\rangle\leq c_i(y)-c_i(x),\ \forall y\in\mathcal{H}\}$ is a subgradient of the function $c_i$ at the point $x$. Since $c_i$, $i=1,\ldots,m-1$, are continuous and convex, we ensure that the subdifferential sets $\partial c_i(x)$, $i=1,\ldots,m-1$, are nonempty for every $x\in\mathcal{H}$, see \cite[Proposition 16.17]{BC17}. Note that the subgradient projection $P_{c_i}$ is a cutter and $\fix P_{c_i}=C_i$, for all $i=1,\ldots,m-1$, see \cite[Lemma 4.2.5 and Corollary 2.4.6]{C12}. Moreover, since $C_m$ is nonempty, closed, convex and bounded, we define the operator $T_m:\mathcal{H}\to\mathcal{H}$ to be the metric projection onto $C_{m}$, denoted by $\mathrm{proj}_{C_m}:\mathcal{H}\to\mathcal{H}$, i.e., for every $x\in\mathcal{H}$, we have $$\|x-\mathrm{proj}_{C_m}x\|= \inf_{y\in C_m} \|x-y\|.$$ Note that the metric projection $\mathrm{proj}_{C_m}$ is also a cutter and $\fix \mathrm{proj}_{C_m}=C_m$, see \cite[Theorem 2.2.21]{C12}. This means that the operators $T_i$, $i=1,\ldots,m$, are cutters and $\bigcap\limits_{i = 1}^m \fix T_i\neq\emptyset$. Now, in order to construct an iterative method for solving the problem (\ref{vip-2}), we recall the notations $T := {T_m}{T_{m - 1}}\dots {T_1},$ $S_0:=Id$, and $S_i:=T_iT_{i-1}\dots T_1$, $i=1,2,\dots,m$. Furthermore, for every $x\in\mathcal{H}$, we denote $u_i:=S_ix$ and $v_i:=u_i-u_{i-1}$. Thus, we have $u_0=x$ and $u_i=T_iu_{i-1}$, $i=1,2,\dots,m$.
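For the reader's convenience, the following minimal sketch illustrates the subgradient projection $P_{c_i}$ and the points $u_i=S_ix$ just introduced. It is not part of the analysis; it is written in Python with NumPy only for illustration, and the function names \texttt{subgradient\_projection} and \texttt{sequential\_points} are ours rather than taken from the cited references.

\begin{verbatim}
import numpy as np

def subgradient_projection(x, c, g):
    # Subgradient projection relative to the sublevel set {c <= 0}:
    # returns x - c(x)/||g(x)||^2 * g(x) if c(x) > 0, and x otherwise.
    cx = c(x)
    if cx <= 0:
        return x
    gx = g(x)
    return x - (cx / np.dot(gx, gx)) * gx

def sequential_points(x, operators):
    # u_0 = x and u_i = T_i u_{i-1}; returns the list [u_0, u_1, ..., u_m].
    u = [x]
    for T in operators:
        u.append(T(u[-1]))
    return u

# Toy usage in R^2 with one affine constraint c(x) = x_1 + x_2 - 1 <= 0.
c = lambda x: x[0] + x[1] - 1.0
g = lambda x: np.array([1.0, 1.0])   # a subgradient (here, the gradient) of c
u_list = sequential_points(np.array([2.0, 2.0]),
                           [lambda y: subgradient_projection(y, c, g)])
\end{verbatim}

In this toy example the single constraint is affine, so the subgradient projection coincides with the metric projection onto the half-space $\{x\in\mathbb{R}^2: x_1+x_2\leq 1\}$.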
Firstly, let us note from \cite[Remark 10]{CC12} that $$\sum_{i=1}^m\left\langle v_i+v_{i+1}+\cdots+v_m,v_i\right\rangle=\sum_{i=1}^m\left\langle v_1+v_{2}+\cdots+v_i,v_i\right\rangle.$$ It follows that, for every $x\in\mathcal{H}$, \begin{eqnarray*} \sum_{i=1}^m\left\langle Tx-S_{i-1}x,S_ix-S_{i-1}x\right\rangle &=&\sum_{i=1}^m\left\langle u_m-u_{i-1},u_i-u_{i-1}\right\rangle\\ &=&\sum_{i=1}^m\left\langle v_i+v_{i+1}+\cdots+v_m,v_i\right\rangle\\ &=&\sum_{i=1}^m\left\langle v_1+v_{2}+\cdots+v_i,v_i\right\rangle\\ &=&\sum_{i=1}^m\left\langle u_i-u_0,u_i-u_{i-1}\right\rangle\\ &=&\sum_{i=1}^m\left\langle T_iu_{i-1}-x,T_iu_{i-1}-u_{i-1}\right\rangle. \end{eqnarray*} Now, for every $i=1,\ldots,m-1$ and $x\notin\bigcap\limits_{i = 1}^m \fix T_i$, we note that \begin{equation*} u_i=T_iu_{i-1}=u_{i-1}-\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|^2}g_i(u_{i-1}), \end{equation*} where $g_i(u_{i-1})$ is a subgradient of the function $c_i$ at the point $u_{i-1}$. For simplicity, we use throughout the convention that $\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|}=0$ whenever $\max\{c_i(u_{i-1}),0\}=0$. Subsequently, we have {\small\begin{eqnarray*} \left\langle T_iu_{i-1}-x,T_iu_{i-1}-u_{i-1}\right\rangle&=&\left\langle T_iu_{i-1}-x,u_{i-1}-\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|^2}g_i(u_{i-1})-u_{i-1}\right\rangle\\ &=&-\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|^2}\left\langle T_iu_{i-1}-x,g_i(u_{i-1})\right\rangle\\ &=&-\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|^2}\left\langle u_{i-1}-\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|^2}g_i(u_{i-1})-x,g_i(u_{i-1})\right\rangle\\ &=&-\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|^2}\left\langle u_{i-1}-x,g_i(u_{i-1})\right\rangle\\ &&+\left(\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|^2}\right)^2\left\langle g_i(u_{i-1}),g_i(u_{i-1})\right\rangle\\ &=&-\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|^2}\left\langle u_{i-1}-x,g_i(u_{i-1})\right\rangle+\left(\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|}\right)^2. \end{eqnarray*} } On the other hand, for the index $i=m$, since $T_m=\mathrm{proj}_{C_m}$ and $u_m=\mathrm{proj}_{C_m}u_{m-1}$, the corresponding term in the above sum is $$\left\langle\mathrm{proj}_{C_m}u_{m-1}-x,\mathrm{proj}_{C_m}u_{m-1}-u_{m-1}\right\rangle.$$ Therefore, the step-size function $\sigma:\mathcal{H}\to[0,+\infty)$ which is defined in (\ref{sigma}) can be written as \begin{equation*} \sigma (x):=\left\{ \begin{array}{ll} \left\langle\mathrm{proj}_{C_m}u_{m-1}-x,\mathrm{proj}_{C_m}u_{m-1}-u_{m-1}\right\rangle\\ -\sum_{i=1}^{m-1}\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|^2}\left\langle u_{i-1}-x,g_i(u_{i-1})\right\rangle\\ +\sum_{i=1}^{m-1}\left(\frac{\max\{c_i(u_{i-1}),0\}}{\|g_i(u_{i-1})\|}\right)^2, & \text{for \ }x\notin C, \\ 1, & \text{otherwise.} \end{array} \right. \end{equation*} According to the above convention and Lemma \ref{lemma-CC12}(i), we can ensure that the step-size function $\sigma (x)$ is well defined, nonnegative and bounded from below by $\frac{1}{2m}$, for every $x\in\mathcal{H}$. Now, we are in a position to propose the method for solving the problem (\ref{vip-2}) as the following algorithm.\\ \begin{algorithm}[H] \SetAlgoLined \vskip2mm \textbf{Initialization}: Given $\mu \in (0,2\eta /{\kappa ^2})$, $\{ \beta _{n}\}_{n = 1}^\infty\subset(0,1]$, $\{ \varphi _{n}\}_{n = 1}^\infty\subset[0,\infty)$ and a positive sequence $\{\lambda_n\}_{n=1}^\infty$. Choose $x^1\in C_m$ arbitrarily and set ${d^1} = - F({x^1})$. \\ \textbf{Iterative Steps}: For a given current iterate $x^n\in C_m$ ($n\in\mathbb{N}$), calculate as follows: \textbf{Step 1}.
Compute $y^n$ as $$y^n:=x^n + \mu\beta _{n}d^n.$$ \textbf{Step 2}. Set $u^n_0:=y^n$ and compute the estimates $$u^n_{i}=u^n_{i-1}-\frac{\max\{c_i(u^n_{i-1}),0\}}{\|g_i(u^n_{i-1})\|^2}g_i(u^n_{i-1}),\hspace{0.5cm} i=1,\ldots,m-1,$$ where $g_i(u^n_{i-1})$ is a subgradient of the function $c_i$ at the point $u^n_{i-1}$, and subsequently compute $$u^n_{m}:=\mathrm{proj}_{C_m}u^n_{m-1}.$$ \textbf{Step 3}. Compute a step size as \begin{equation*} \sigma (y^n):=\left\{ \begin{array}{ll} \left\langle\mathrm{proj}_{C_m}u^n_{m-1}-y^n,\mathrm{proj}_{C_m}u^n_{m-1}-u^n_{m-1}\right\rangle\\ -\sum_{i=1}^{m-1}\frac{\max\{c_i(u^n_{i-1}),0\}}{\|g_i(u^n_{i-1})\|^2}\left\langle u^n_{i-1}-y^n,g_i(u^n_{i-1})\right\rangle\\ +\sum_{i=1}^{m-1}\left(\frac{\max\{c_i(u^n_{i-1}),0\}}{\|g_i(u^n_{i-1})\|}\right)^2, & \text{for \ }y^n\notin C, \\ 1, & \text{otherwise.} \end{array} \right. \end{equation*} \textbf{Step 4}. Compute the next iterate and the search direction as \begin{equation*} \left \{ \begin{aligned} &{x^{n + 1}} := {\mathrm{proj}_{C_m}}({y ^n} + {\lambda _n}\sigma ({y ^n})(u^n_{m} - y^n)), \\ &{d^{n + 1}}: = - F({x^{n + 1}}) + {\varphi _{n + 1}}{d^n}.\\ \end{aligned} \right. \end{equation*} Update $n:=n+1$ and return to \textbf{Step 1}. \caption{ESCoM-CGD for VIP with functional constraints} \label{algorithm-2} \vskip2mm \end{algorithm} \begin{remark} Observe that Algorithm \ref{algorithm-2} is nothing other than a particular case of ESCoM-CGD (Algorithm \ref{algorithm}). Moreover, as mentioned in Remark \ref{rem8} (iii), we underline here again that the initial point $x^1$ in Algorithm \ref{algorithm-2} is chosen in the nonempty closed convex and bounded subset $C_m$ rather than in the whole space $\mathcal{H}$, and the generated iterates $x^n$, $n\geq2$, are projected onto the subset $C_m$. This is done in order to ensure the boundedness of the generated sequence $\{ {x^n}\} _{n = 1}^\infty$. \end{remark} The following corollary is a consequence of Theorem \ref{main-thm}. \begin{corollary}\label{cor} Let the sequence $\{ {x^n}\} _{n = 1}^\infty$ be given by Algorithm \ref{algorithm-2}. Suppose that $\mathop {\lim }\limits_{n \to \infty }{\beta _n} = 0$, $\sum\limits_{n = 1}^\infty {{\beta _n}}= \infty$, $\mathop {\lim }\limits_{n \to \infty } {\varphi _n} = 0$, and $\{ \lambda _{n}\}_{n = 1}^\infty \subset [\varepsilon ,2 - \varepsilon]$ for some constant $\varepsilon \in (0,1)$. If one of the following conditions holds: \begin{itemize} \item[(i)] The functions $c_i$, $i=1,\ldots,m-1$, are Lipschitz continuous relative to every bounded subset of $\mathcal{H}$; \item[(ii)] The functions $c_i$, $i=1,\ldots,m-1$, are bounded on every bounded subset of $\mathcal{H}$; \item[(iii)] The subdifferentials $\partial c_i$, $i=1,\ldots,m-1$, map every bounded subset of $\mathcal{H}$ to a bounded set, \end{itemize} then the sequence $\{ {x^n}\} _{n = 1}^\infty$ converges strongly to $\bar u$, the unique solution of the problem (\ref{vip-2}). \end{corollary} \begin{proof} Observe that Theorem \ref{main-thm} relies on the assumptions that the sequence $\{ F({x^n})\} _{n = 1}^\infty$ is bounded and that the operators $T_i$, $i=1,\ldots,m$, satisfy the DC principle. Once these two assumptions are verified, the convergence is a consequence of Theorem \ref{main-thm}. Now, since the operator $F$ is Lipschitz continuous and the generated sequence $\{ {x^n}\} _{n = 1}^\infty$ is bounded, it follows that the sequence $\{ F({x^n})\} _{n = 1}^\infty$ is also bounded.
On the other hand, it is noted from \cite[Theorem 4.2.7]{C12} that if a continuous convex function satisfies (i), then its corresponding subgradient projection satisfies the DC principle. Consequently, the operators $T_i$, $i=1,\ldots,m-1$, satisfy the DC principle. Moreover, we know from \cite[Proposition 16.20]{BC17} that, for a continuous convex function, the assumptions (i)--(iii) are equivalent, so that each of these three assumptions is sufficient to guarantee that the operators $T_i$, $i=1,\ldots,m-1$, satisfy the DC principle. Furthermore, since the metric projection $\mathrm{proj}_{C_m}$ is a nonexpansive operator (see \cite[Theorem 2.2.21]{C12}), it follows that $T_m$ also satisfies the DC principle. Hence, the assumptions of Theorem \ref{main-thm} are satisfied, and we therefore conclude that the sequence $\{ {x^n}\} _{n = 1}^\infty$ converges strongly to the unique solution of the problem (\ref{vip-2}) as desired. \qed \end{proof} \begin{remark}\begin{itemize} \item[(i)] It is worth noting that the continuity of $c_i$, $i=1,\ldots,m-1$, and the assumptions (i)--(iii) used in Corollary \ref{cor} can be dropped whenever the Hilbert space $\mathcal{H}$ is finite dimensional, see \cite[Corollary 8.40 and Proposition 16.20]{BC17} for further details. \item[(ii)] An example of a simple closed convex and bounded set $C_m$ in a general Hilbert space is a closed ball $C_m:=\{x\in\mathcal{H}:\|x-z\|\leq r\}$, where $z\in\mathcal{H}$ is the center and $r>0$ is the radius. In particular, if $\mathcal{H}=\mathbb{R}^n$, the finite-dimensional Euclidean space, an additional example is a box constraint $C_m:=[a_1,b_1]\times[a_2,b_2]\times\cdots\times[a_n,b_n]$, where $a_i,b_i\in\mathbb{R}$ with $a_i\leq b_i$, $i=1,\ldots,n$. For the closed-form formulae of the metric projections onto these simple sets, the reader may consult \cite[Subsections 4.1.6 and 4.1.7]{C12}. \end{itemize} \end{remark} \section{Numerical Results} In this section we illustrate the convergence of ESCoM-CGD on the minimum-norm problem associated with a system of homogeneous linear inequalities and a box constraint. Suppose that we are given a matrix $\mathbf{A}=[\mathbf{a}_1|\cdots|\mathbf{a}_m]^\top\in \mathbb{R}^{m\times k}$ of predictors $\mathbf{a}_i=(a_{1i},\ldots,a_{ki})\in\mathbb{R}^k$, for all $i=1,\ldots,m$. The considered problem with a box constraint is to find the vector $x\in\mathbb{R}^k$ that solves the problem \begin{eqnarray*} \begin{array}{ll} \textrm{minimize }\indent \frac{1}{2}\|x\|^2\\ \textrm{subject to}\indent \mathbf{A}x\leq_{\mathbb{R}^m}\mathbf{0}_{\mathbb{R}^m},\\ \indent\indent\indent\indent x\in [u,v]^k, \end{array} \end{eqnarray*} or equivalently, in the explicit form, \begin{eqnarray*} \begin{array}{ll} \textrm{minimize }\indent \frac{1}{2}\|x\|^2\\ \textrm{subject to}\indent \left\langle\mathbf{a}_i, x\right\rangle\leq 0,\ i=1,\ldots,m,\ x\in [u,v]^k, \end{array} \end{eqnarray*} where $u,v\in\mathbb{R}$ with $u\leq v$. Of course, this minimum-norm problem can be written in the form of Problem \ref{Problem-VIP} as: finding $x^*\in \bigcap\limits_{i = 1}^{m+1} \fix{(\mathrm{proj}_{C_i})}$ such that \[\langle {x^*},x - {x^*}\rangle \ge 0 \quad\text{ for all $x\in \bigcap\limits_{i = 1}^{m+1} \fix{(\mathrm{proj}_{C_i})}$, }\] where the constraint sets $C_i:=\{x\in\mathbb{R}^k:\left\langle\mathbf{a}_i,x\right\rangle\leq 0\}$, $i=1,\ldots,m,$ are half-spaces and $C_{m+1}:=\{x\in\mathbb{R}^k:x\in [u,v]^k\}$ is a box constraint.
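Although the experiments reported below were implemented in MATLAB, the following minimal Python/NumPy sketch indicates how the data and the projections used in this example can be realized. The function names \texttt{proj\_halfspace}, \texttt{proj\_box} and \texttt{T} are ours and are introduced only for illustration; the problem sizes, the range of the random entries, the box bounds and the stopping rule are taken from the experimental setting described below.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)            # seed fixed only for reproducibility
m, k = 1000, 200                          # number of constraints and dimension
A = rng.uniform(-5.0, 5.0, size=(m, k))   # rows are the predictors a_i
u_box, v_box = -1.0, 1.0                  # box constraint [u, v]^k

def proj_halfspace(x, a):
    # Metric projection onto the half-space {x : <a, x> <= 0}.
    t = max(np.dot(a, x), 0.0)
    return x - (t / np.dot(a, a)) * a

def proj_box(x):
    # Metric projection onto the box [u, v]^k.
    return np.clip(x, u_box, v_box)

def T(x):
    # One sequential pass T = proj_{C_{m+1}} proj_{C_m} ... proj_{C_1}.
    y = x.copy()
    for a in A:
        y = proj_halfspace(y, a)
    return proj_box(y)

x = proj_box(rng.random(k))               # random initial point kept in the box
stop = np.linalg.norm(x) <= 1e-6          # stopping rule used in the experiments
\end{verbatim}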
It is clear that this variational inequality problem satisfies all assumptions of Problem \ref{Problem-VIP} by setting $F:=Id$, the identity operator, which is $1$-strongly monotone and $1$-Lipschitz continuous, and $T_i:=\mathrm{proj}_{C_i}$, $i=1,\ldots,m+1,$ the metric projections onto $C_i$, which are cutters with $\fix{(T_i)}=C_i$ and satisfy the demi-closed principle. Moreover, since the box constraint set $C_{m+1}$ is bounded, the generated sequence $\{ {x^n}\} _{n = 1}^\infty$ is bounded, which subsequently yields the boundedness of $\{ {F(x^n)}\} _{n = 1}^\infty$. This means that the assumptions of Theorem \ref{main-thm} are satisfied. All the experiments were performed under MATLAB 9.6 (R2019a) running on a MacBook Air 13-inch, Early 2015 with a 1.6GHz Intel Core i5 processor and 4GB 1600MHz DDR3 memory. All CPU times are given in seconds. We generate the matrix $\mathbf{A}\in\mathbb{R}^{m\times k}$, where $m=1000$ and $k=200$, with entries randomly generated from the uniform distribution on $(-5,5)$, and choose the box constraint with boundaries $u=-1$ and $v=1$. The initial point is a vector whose coordinates are randomly chosen in $(0,1)$. In order to justify the advantages of the proposed Algorithm \ref{algorithm}, we choose the hybrid conjugate gradient method (HCGM) \cite{IY09} and the hybrid three-term conjugate gradient method (HTCGM) \cite[Algorithm 6]{I11} as the benchmarks for the numerical comparisons. In this situation, we set the operator considered in \cite{IY09,I11} to be $T:=\mathrm{proj}_{C_{m+1}}\mathrm{proj}_{C_{m}}\cdots \mathrm{proj}_{C_1}$, which is a nonexpansive operator. Since the minimum-norm problem has a unique solution, in all of the following numerical experiments we terminate the tested methods when the norm becomes small, i.e., $\|x^n\|\leq 10^{-6}$. We use 10 samples of randomly chosen matrices $\mathbf{A}$ and initial points for each setting, and the presented results are averaged. We manually select the parameters involved in each compared algorithm and report results for which each algorithm achieves nearly its best performance. Firstly, we demonstrate the influence of the step-size sequence $\varphi_n:=\frac{1}{(n+1)^a}$, where $a>0$, when ESCoM-CGD, HCGM and HTCGM are applied for solving the above minimum-norm problem. We choose different values $a=0.005, 0.01, 0.05$, and $0.1$, fix the corresponding parameter $\mu=1$ and the step-size sequence $\beta_n=\frac{1}{(n+1)^{0.5}}$, and additionally set $\lambda_n=0.7$ for ESCoM-CGD. We plot the number of iterations and the computational time in seconds with respect to the different choices of $a$ in Figure \ref{varphi}. \begin{figure} \caption{Influences of the step sizes $\varphi_n=1/(n+1)^{a}$ when performing ESCoM-CGD, HCGM \cite{IY09} and HTCGM \cite{I11}.} \label{varphi} \end{figure} According to the plots in Figure \ref{varphi}, we see that larger values of $a$ yield faster convergence, in the sense that both a smaller number of iterations and less computational time are needed. We also see that the proposed ESCoM-CGD is considerably faster than the other methods, where the best result is observed for $a=0.1$. Notice that HCGM and HTCGM are very sensitive to the value of $a$, while ESCoM-CGD seems not to be. In fact, for $a=0.005$, HCGM and HTCGM require more than 550 iterations, whereas for $a=0.1$ they require approximately 50 iterations. Next, we examine the influence of the step-size sequence $\beta_n:=\frac{1}{(n+1)^b}$, where $0<b\leq1$, on the tested methods.
We fix $\mu=1$, $\lambda_n=0.7$, and $\varphi_n=\frac{1}{(n+1)^{0.1}}$. We choose different values of $b$ in the interval $(0,1]$, namely, $b=0.01, 0.05, 0.1$, and $0.5$. The number of iterations and the computational time in seconds for each choice of $b$ are plotted in Figure \ref{beta}. \begin{figure} \caption{Influences of the step sizes $\beta_n=1/(n+1)^{b}$ when performing ESCoM-CGD, HCGM \cite{IY09} and HTCGM \cite{I11}.} \label{beta} \end{figure} It can be seen from Figure \ref{beta} that ESCoM-CGD gives the best results for all values of $b$. Moreover, its number of iterations and computational time seem insensitive to the different choices of $b$. For HCGM and HTCGM, we observe that the number of iterations as well as the computational time decrease as the value of $b$ grows. More precisely, the value $b=0.01$ is the best choice for ESCoM-CGD, whereas the value $b=0.5$ is the best choice for both HCGM and HTCGM, which is consistent with the assertions in \cite{IY09,I11}. In Figure \ref{mu}, we illustrate the behaviour of the methods with respect to the parameter $\mu\in(0,2)$. We fix $\varphi_n=\frac{1}{(n+1)^{0.1}}$ and $\lambda_n=0.7$. Moreover, we fix the best choices $\beta_n=\frac{1}{(n+1)^{0.01}}$ for ESCoM-CGD and $\beta_n=\frac{1}{(n+1)^{0.5}}$ for both HCGM and HTCGM. We choose different values $\mu=10^{-4}, 10^{-3}, 0.01, 0.1, 0.5, 1.0, 1.5$, and $1.9$. According to the plots, we observe that the very small value $\mu=10^{-4}$ yields the best results for all methods. As a matter of fact, even though the best result for ESCoM-CGD is obtained for a very small value of $\mu$, the method also performs well for large values $\mu>1$. The overall best result is observed for HTCGM, which confirms again the assertion in \cite{I11}. \begin{figure} \caption{Influences of the parameter $\mu$ when performing ESCoM-CGD, HCGM \cite{IY09} and HTCGM \cite{I11}.} \label{mu} \end{figure} As is well known, the presence of an appropriate relaxation parameter $\lambda_n\in(0,2)$, in MECSPM or even in state-of-the-art relaxation methods, can make the methods converge faster. Now, we demonstrate the influence of the relaxation parameter $\lambda_n$ when ESCoM-CGD is performed for solving the considered problem. We fix $\mu=10^{-4}$, $\varphi_n=\frac{1}{(n+1)^{0.1}}$, and $\beta_n=\frac{1}{(n+1)^{0.01}}$. We test a set of parameters $\lambda_n\in\{0.1,0.2,\ldots,1.9\}$, and plot the number of iterations and the computational time with respect to the different choices of $\lambda_n$ in Figure \ref{lambda}. \begin{figure} \caption{Influences of the relaxation parameter $\lambda_n$ when performing ESCoM-CGD.} \label{lambda} \end{figure} According to the curves in Figure \ref{lambda}, we see that the relaxation parameter $\lambda_n$ yields good convergence behaviour for a wide range of choices. In fact, we observe that faster convergence is obtained for intermediate choices $\lambda_n\in[0.8,1.5]$, and the best result is observed for $\lambda_n=1.2$. This observation is broadly consistent with the numerical experiments in \cite{CN19}. Finally, to showcase the superiority of our ESCoM-CGD, we compare the methods for various sizes $(m,k)$ of the randomly generated matrix $\mathbf{A}$. We fix the corresponding parameters as in Table \ref{para}. To show the performance of the methods, the numbers of iterations with respect to the size of $\mathbf{A}$ are plotted in Figure \ref{various}. Moreover, we also present the computational time in seconds with respect to the sizes $(m,k)$ in Table \ref{compare--tb}.
\begin{table}[h] \caption{Best choices of the parameters used for performing ESCoM-CGD, HCGM \cite{IY09} and HTCGM \cite{I11}.} \label{para} \begin{tabular}{l c c c c} \hline\noalign{ } Parameter& $\varphi_n$ & $\beta_n$ & $\mu$ & $\lambda_n$ \\ \noalign{ }\hline\noalign{ } ESCoM-CGD &$\frac{1}{(n+1)^{0.1}}$ & $\frac{1}{(n+1)^{0.01}}$ & $10^{-4}$ & 1.2\\ HCGM \cite{IY09} &$\frac{1}{(n+1)^{0.1}}$ & $\frac{1}{(n+1)^{0.5}}$ & $10^{-4}$ & -\\ HTCGM \cite{I11} &$\frac{1}{(n+1)^{0.1}}$ & $\frac{1}{(n+1)^{0.5}}$ & $10^{-4}$ & -\\ \noalign{ }\hline \end{tabular} \end{table} \begin{figure} \caption{Numbers of iterations when performing ESCoM-CGD, HCGM \cite{IY09} and HTCGM \cite{I11}.} \label{various} \end{figure} \begin{table}[h] \caption{\label{compare--tb}Computational time in seconds when performing ESCoM-CGD, HCGM \cite{IY09} and HTCGM \cite{I11} for different choices of the number of constraints ($m$) and dimensions ($k$).} \begin{tabular}{l r r r} \hline\noalign{ } $(m,k)$& ESCoM-CGD & HCGM \cite{IY09} & HTCGM \cite{I11} \\ \noalign{ }\hline\noalign{ } (100,25)& 0.0316& 0.0375& 0.0391\\ (300,75)& 0.0621& 0.0757& 0.0699\\ (500,125)& 0.1029& 0.1370& 0.1302\\ (700,175)& 0.1499& 0.1874& 0.1758\\ (1000,250)& 0.2045& 0.2717& 0.2557\\ (3000,750)& 0.9660& 1.2123& 1.1694\\ (5000,1250)& 2.2990& 3.0666& 3.0382\\ (7000,1750)& 6.9761& 9.2777& 9.2878\\ (10000,2500)& 25.3301& 35.7751& 35.4769\\ (20000,5000)& 105.7223& 143.8273& 144.0225 \\ \noalign{ }\hline \end{tabular} \end{table} The plots in Figure \ref{various} show that ESCoM-CGD gives the best convergence results for all choices of $(m,k)$. Moreover, we see that HCGM and HTCGM reach the prescribed tolerance in almost the same number of iterations. Likewise, the results given in Table \ref{compare--tb} reveal that ESCoM-CGD reaches the prescribed tolerance faster than both HCGM and HTCGM. It is worth noting that for the size $(20000,5000)$, ESCoM-CGD requires approximately 40 seconds less computational time than the other two methods. This underlines the superiority of the proposed ESCoM-CGD. \section{Conclusion} The aim of this work was to solve a variational inequality problem governed by a strongly monotone and Lipschitz continuous operator over the intersection of fixed-point sets of cutter operators. To this end, we proposed the so-called extrapolated sequential constraint method with conjugate gradient direction. We proved strong convergence of the generated sequence of iterates to the unique solution of the considered problem. Our numerical experiments show that the proposed method has a better convergence behaviour compared to the other two tested methods. For future work, one may consider and analyze a variant of the proposed method that uses other selections of the constraint operators, e.g. the so-called dynamic string averaging procedure. \end{document}
\begin{document} \title[Separation of branches]{Separation of branches of $O(N-1)$-invariant solutions for a semilinear elliptic equation} \author[F.~Gladiali]{Francesca Gladiali} \thanks{The author is supported by Gruppo Nazionale per l'Analisi Matematica, la Probabilit\'a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM) and by PRIN-2012-grant ``Variational and perturbative aspects of nonlinear differential problems''.} \address{Matematica e Fisica, Polcoming, Universit\`a di Sassari, via Piandanna 4, 07100 Sassari, Italy. \texttt{[email protected]}} \begin{abstract} \noindent We consider the problem \begin{equation}\nonumber\label{0} \left\{\begin{array}{ll} -\Delta u=u^p +\lambda u&\hbox{ in }A\\ u>0 &\hbox{ in }A\\ u=0 &\hbox{ on }\partial A \end{array}\right. \end{equation} where $A$ is an annulus in $\mathbb{R}^N$, $N\geq 2$, $p\in(1,+\infty)$ and $\lambda\in (-\infty,0]$. Recent results, \cite{GGPS}, ensure that there exists a sequence $\{p_k\}$ of exponents ($p_k\to +\infty$) at which a nonradial bifurcation from the radial solution occurs. Exploiting the properties of $O(N-1)$-invariant spherical harmonics, we introduce two suitable cones $\mathcal{K}^1$ and $\mathcal{K}^2$ of $O(N-1)$-invariant functions that allow us to separate the branches of bifurcating solutions from the others, obtaining the unboundedness of these branches. \end{abstract} \maketitle \section{Introduction} In this paper we consider the problem \begin{equation}\label{1} \left\{\begin{array}{ll} -\Delta u=u^p +\lambda u&\hbox{ in }A\\ u>0 &\hbox{ in }A\\ u=0 &\hbox{ on }\partial A \end{array}\right. \end{equation} where $A$ is an annulus of $\mathbb{R}^N$, i.e. $A:=\{x\in \mathbb{R}^N\, :\, a<|x|<b\}$, $b>a>0$, $N\geq 2$, $p\in(1,+\infty)$ and $\lambda\in (-\infty,0]$. For simplicity one can think of \eqref{1} with $\lambda=0$.\\ It is well known that problem (\ref{1}) has a radial solution for any $p\in (1,+\infty)$ (see \cite{KW}), and that this solution is unique if $\lambda\in(-\infty,0]$ (see \cite{T} and \cite{F}). We will denote by $u_p$ this radial solution and by ${\mathcal S}$ the curve of radial solutions of (\ref{1}) in the product space $ (1,+\infty)\times C^{1,\alpha}_0(\overline A)$, where $C^{1,\alpha}_0(\overline A)$ is the set of continuously differentiable functions on $\overline A$ which vanish on $\partial A$ and whose first order derivatives are H\"older continuous with exponent $\alpha$. In other words: \begin{equation}\label{1.2} {\mathcal S}:=\{(p,u_p)\in (1,+\infty)\times C^{1,\alpha}_0(\overline A)\, \hbox{ such that }u_p \hbox{ is the radial solution of (\ref{1})}\}. \end{equation} In this paper we study the nonradial solutions that bifurcate from the curve ${\mathcal S}$ as the exponent $p$ varies.
Let us recall that a point $(p_k,u_{p_k})\in {\mathcal S}$ is a {\em nonradial bifurcation point} if in every neighborhood of $(p_k,u_{p_k})$ in $(1,+\infty)\times C^{1,\alpha}_0(\overline A)$ there exists a nonradial solution $(p,v_p)$ of (\ref{1}).\\ In the paper \cite{GGPS} the authors show that there exists a sequence of values of the exponent $p_k$ such that $p_k\to +\infty$ and $(p_k,u_{p_k})$ is a nonradial bifurcation point for ${\mathcal S}$.\\ These values $p_k$ are found by considering the linearized equation at the radial solution $u_p$. In \cite{GGPS} it is shown that $u_p$ is degenerate if and only if, for some $k\geq 1$, \begin{equation}\label{1.3a} \alpha_1(p)+k(N-2+k)=0 \end{equation} where $\alpha_1(p)$ is the first eigenvalue of the one-dimensional operator \begin{equation}\label{1.3} \widehat{L}_p(v)=r^2\left(-v'' -\frac{N-1}r v'-pu_p ^{p-1}v-\lambda v\right) \end{equation} in the space of functions of $H^1_0(a,b)$. From the analyticity of $\alpha_1(p)$ with respect to $p$ and from the asymptotic behavior of $\alpha_1(p)$ as $p\to 1$ and as $p\to +\infty$ it is proved in \cite{GGPS} that for any $k\geq 1$ the quantity $\alpha_1(p)+k(N-2+k)$ changes sign as $p$ varies in $(1,+\infty)$. Each time $\alpha_1(p)+k(N-2+k)$ changes sign, an eigenvalue of the linearized operator changes sign, so that the Morse index of the radial solution changes. Let us call {\em Morse index changing points} the pairs $(p_k,u_{p_k})\in {\mathcal S}$ such that the Morse index of the radial solution $u_p$ changes at $p_k$. These points are characterized by \begin{equation}\label{1.5} \left(\alpha_1(p_k+\delta)+ k(N-2+k) \right)\left(\alpha_1(p_k-\delta)+ k(N-2+k)\right)<0\quad \hbox{ for }\delta\in (0,\delta_0) \end{equation} for some $\delta_0>0$ and for some $k\geq 1$. In \cite{G} it is shown that if $( p_k,u_{p_k})$ is a Morse index changing point there exists a continuum, ${\mathcal C}(p_k)\subset (1,+\infty)\times C^{1,\alpha}_0(\overline A)$, of nonradial solutions bifurcating from that point. This continuum ${\mathcal C}(p_k)$ obeys the so-called Rabinowitz alternative (see Theorem 3.3 in \cite{G}), i.e. either ${\mathcal C}(p_k)$ is unbounded in $(1,+\infty)\times C^{1,\alpha}_0(\overline A)$ or it must meet the curve of radial solutions ${\mathcal S}$ in another Morse index changing point. Here we are able to prove that the second alternative is not possible when $k=1$ or $2$ and $N\geq 3$. Our main result is the following: \begin{theorem}\label{t1} If $N\geq 3$, there exist at least two exponents $p_1,p_2\in(1,+\infty)$ such that $(p_k,u_{p_k})$ is a nonradial bifurcation point for the curve ${\mathcal S}$, related by (\ref{1.3}) to $k=1$ and $k=2$, at which the continuum of bifurcating solutions ${\mathcal C}(p_k)$ is unbounded in $(1,+\infty)\times C^{1,\alpha}_0(\overline A) $. \end{theorem} \noindent The unboundedness of the bifurcating branch in Theorem \ref{t1} implies either that the branch exists for every $p>p_k$ (giving a multiplicity result for problem \eqref{1}) or that the solutions along the branch blow up in the $C^{1,\alpha}$-norm at some exponent $\bar p\geq \frac{N+2}{N-2}$.
Since problem \eqref{1} can be supercritical we cannot exclude that the branch exists only for a fixed value of the exponent $p$, even if we do not think this is the case. So we think that the behavior of the solutions along these unbounded branches deserves to be investigated further.\\ \noindent This result is a first attempt to separate all the branches generated by spherical harmonics ($O(N-1)$-invariant), related by \eqref{1.5} to a different eigenvalue $\mu_k=k(N-2+k)$ of the Laplace-Beltrami operator on the $(N-1)$-dimensional sphere. Theorem \ref{t1} says that the branches generated by the spherical harmonics corresponding to $\mu_1=N-1$ and $\mu_2=2N$ ($k=1,2$) are separated from the others. The proof relies on the fact that functions which are $O(N-1)$-invariant can be written as functions which depend only on $r$ and $\theta$ in radial coordinates, see the Appendix for details. Then, to separate the continua ${\mathcal C}(p_1)$ and ${\mathcal C}(p_2)$ from the others, we introduce two different cones in $C^{1,\alpha}_0(\overline A)$ and we set our problem on these cones.\\ The cones $\mathcal{K}^1$ and $\mathcal{K}^2$ are defined in Section \ref{s3}, see \eqref{3.1} and \eqref{3.1-bis}, and their definition is completely new. To define $\mathcal{K}^1$ we look for solutions which are nonincreasing with respect to the angle $\theta$ on the interval $[0,\pi]$. This property is preserved along the branch ${\mathcal C}(p_1)$, as proved in Section \ref{s3}, and allows us to distinguish the branch generated by $k=1$ from the others. Functions with this type of symmetry are said to be foliated Schwarz symmetric and arise when looking for solutions with low Morse index (see \cite{GPW} as an example). Indeed, in our case they arise when the Morse index of the radial solution $u_p$ goes from $1$ to $N+1$, see \eqref{morse-index}.\\ \noindent To define $\mathcal{K}^2$ we consider $O(N-1)$-invariant functions which are nonincreasing with respect to the angle $\theta$ on the interval $[0,\frac{\pi}2]$ and which are even in $z=\cos\theta$. Again this property is preserved along the continuum ${\mathcal C}(p_2) $ and it is enough to exclude the branches bifurcating from exponents related by \eqref{1.3a} to $\mu_k$ with $k\neq 2$.\\ As said before, this is a first attempt to separate branches of nonradial solutions when $N\geq 3$. The definition of the cones $\mathcal{K}^1$ and $\mathcal{K}^2$ is suggested by the shape of the $O(N-1)$-invariant spherical harmonics, given explicitly in the Appendix. We believe that it should be possible to separate all the branches generated by different spherical harmonics by investigating their properties in a deeper way.\\ \noindent The cones $\mathcal{K}^1$ and $\mathcal{K}^2$ introduced here separate the first spherical harmonics and can be used to distinguish solutions in a radially symmetric domain. The same method can be used to separate branches of nonradial, $O(N-1)$-invariant functions in other settings, for example in the case of the exterior of the ball, see \cite{GP}, or in the case of the critical H\'enon problem in $\mathbb{R}^N$, see \cite{GGN}. We believe that also in these cases the cones $\mathcal{K}^1$ and $\mathcal{K}^2$ can give the unboundedness of the bifurcating branch of nonradial solutions.
Another application of these cones can be, for instance, to reduce the dimension of the kernel of the linearized equation in some problems, see \cite{GGT} as an example.\\[.5cm] The problem of separating branches of solutions generated by different values of $k$ was solved in dimension $N=2$ by Dancer in the paper \cite{DA1}. Using the fact that the spherical harmonic functions associated to the eigenvalues $\mu_k$ are periodic with period $\frac{2\pi}k$ when $N=2$, Dancer introduced some suitable cones $\mathcal{K}^k$ of periodic functions in which only the $k$-th spherical harmonic lies. This allows one to separate branches of solutions related to different values of $\mu_k$ and can also give multiplicity results, see \cite{GGN2} as an example. \\ Using exactly the same cones ${\mathcal K}^n$ of Dancer we have the following result: \begin{corollary}\label{c1} If $N=2$, for any $n\in \mathbb{N}$ there exists an exponent $p_n\in(1,+\infty)$ such that the continuum ${\mathcal C}(p_n)$ is unbounded in $C^{1,\alpha}_0(\overline A) $. Moreover ${\mathcal C}(p_n)\cap {\mathcal C}(p_m)=\emptyset$ if $n\neq m$. Finally, for any $p>p_n$ there exist at least $n$ nonradial positive solutions of (\ref{1}). \end{corollary} The multiplicity result for $N=2$ follows since problem \eqref{1} is subcritical in $\mathbb{R}^2$, and cannot be obtained for $N\geq 3$ in an easy way.\\ The paper is organized as follows: in Section \ref{s2} we introduce all the notations. In Section \ref{s3} we prove Theorem \ref{t1} and Corollary \ref{c1}. Finally, in the Appendix we derive some properties of functions which are $O(N-1)$-invariant and we write explicitly the $O(N-1)$-invariant spherical harmonics. \section{Notations and preliminary results}\label{s2} The starting point in the study of bifurcation is the analysis of the degeneracy points of (\ref{1}). To this end we consider the linearized equation at the radial solution $u_p$, i.e. \begin{equation}\label{l} \left\{\begin{array}{ll} -\Delta v-pu_p^{p-1}v-\lambda v=0 & \hbox{ in }A\\ v=0 & \hbox{ on }\partial A. \end{array}\right. \end{equation} It is proved in \cite{GGPS} (see Lemma 2.3) that equation \eqref{l} admits a nontrivial solution if and only if \begin{equation}\label{2.1} \alpha_1(p)+\mu_k=0, \quad \hbox{ for some } k\geq 1, \end{equation} where $\alpha_1(p)$ is the first eigenvalue of the one-dimensional operator $\widehat L_{p}$ defined in \eqref{1.3} and $\mu_k=k(N-2+k)$, $k=0,1,\dots$ are the eigenvalues of the Laplace-Beltrami operator $-\Delta_{S^{N-1}}$ on the sphere $S^{N-1}$.\\ Moreover, the solutions $v$ of the linearized equation (\ref{l}) corresponding to a degeneracy point $p_i$ can be written as \begin{equation}\label{au} v(x)=w_{1,p_i}(|x|)\phi_k\left(\frac x{|x|}\right) \end{equation} where $w_{1,p_i}(r)$ is the first positive eigenfunction of $\widehat L_{p_i}$ and $\phi_k$ is an eigenfunction of the Laplace-Beltrami operator on $S^{N-1}$ relative to the eigenvalue $\mu_k$.
\\ As explained in \cite{GGPS}, the Morse index of the radial solution $u_p$ that we denote by $m(p)$ depends only on the sign of the sum $\alpha_1(p)+\mu_k$ for $k\geq 1$ and precisely is given by (recall that $\alpha_1(p)<0$ for any $p$) \begin{equation}\label{morse-index} m(p) = \sum_{0 \leq j<\frac{2-N}2+\frac1 2 \sqrt{(N-2)^2-4\alpha_1(p)} \atop_{j\ integer}} \frac{(N+2j-2)(N+j-3)!}{(N-2)!\,j!}. \end{equation} \noindent In \cite{GGPS} we proved the following result: \begin{theorem}\label{t02} The Morse index changing points are nonradial bifurcation points for (\ref{1}). Moreover the exponents $p_i$ of these points can be arranged in a sequence that diverges to $+\infty$. \end{theorem} \noindent To introduce the global bifurcation result obtained in \cite{G} we let $X$ be the subspace of $C^{1,\alpha}_0(\bar A)$ given by the functions which are $O(N-1)$-invariant, i.e. \begin{equation}\label{2.3} X:=\{v\in C^{1,\alpha}_0(\overline A) \, \, \hbox{s.t. }v(x_1,\dots,x_N)=v(g(x_1,\dots,x_{N-1}),x_N)\,\atop \hbox{ for any } g\in O(N-1)\} \end{equation} where $O(N-1)$ is the orthogonal group in $\mathbb{R}^{N-1}$, and \begin{equation}\label{tpv} \begin{array}{lrlc}T(p,v):&\hbox{ } (1,+\infty)\times X&\rightarrow&\hbox{ }X\\ &(p,v)\,\,\,\,\,&\mapsto & \left(-\Delta -\lambda\right)^{-1}\left( |v|^{p-1}v\right). \end{array} \end{equation} $T$ is a compact operator for fixed $p$ and is continuous with respect to $p$. We let $S(p,v):=v-T(p,v)$. Then any solution of (\ref{1}) can be found as a solution of $S(p,v)=0$ such that $v\geq 0$ in the annulus $A$. Let us denote by $\Sigma$ the closure in $(1,+\infty)\times X$ of the set of solutions of $S(p,v)=0$ different from $u_p$, i.e. \begin{equation}\label{2.5} \Sigma:=\overline{\{(p,v)\in (1,+\infty)\times X\,,\, S(p,v)=0\, ,\, v\neq u_p\}}. \end{equation} If $(p_k,u_{p_k})\in \mathcal{S}$ is a nonradial bifurcation point, then $(p_k,u_{p_k})\in \Sigma$. For $(p_k,u_{p_k})\in \Sigma $ we will call ${\mathcal C}(p_k)\subset \Sigma$ the closed connected component of $\Sigma$ which contains $(p_k,u_{p_k})$ and is maximal with respect to the inclusion.\\ In \cite{G} we proved the following result: \begin{theorem}\label{t12} Let $(p_k,u_{p_k})$ be a Morse index changing point and let ${\mathcal C}(p_k)$ be as defined before. Then either \begin{itemize} \item[a)] ${\mathcal C}(p_k)$ is unbounded in $(1,+\infty)\times X $, or \item[b)] for some $h\neq k$, $(p_h,u_{p_h})$ is a Morse index changing point and $(p_h,u_{p_h})\in {\mathcal C}(p_k)$. \end{itemize} \end{theorem} \noindent Now we are in position to prove our first new result. \begin{proposition}\label{p1} Let $(p_k,u_{p_k})$ be a Morse index changing point and let ${\mathcal C}(p_k)$ be as defined before. If ${\mathcal C}(p_k)$ is bounded, the number of the Morse index changing points in ${\mathcal C}(p_k)$ including $ (p_k,u_{p_k})$ is even. \end{proposition} \noindent This result is based on an improved version of the Rabinowitz alternative due to Ize (see \cite{N}).
We report the proof for completeness, see also \cite{AG}.\\ \begin{proof} If ${\mathcal C}(p_k)$ is bounded then $b)$ of Theorem \ref{t12} holds and ${\mathcal C}(p_k)$ must meet the curve ${\mathcal S}$ in at least one point $ (p_h, u_{{p_h}})$ such that $p_h$ is a degeneracy point, i.e. satisfies \eqref{2.1}. But it can meet the curve ${\mathcal S}$ also in other bifurcation points. Since ${\mathcal C}(p_k)$ is bounded and the exponents $p_i$ of the bifurcation points must satisfy \eqref{2.1}, then ${\mathcal C}(p_k)$ can meet ${\mathcal S}$ in at most finitely many bifurcation points $(p_i,u_i)$, $i=1,\dots,n$ with $p_1<p_2<\dots<p_n$. Arguing as in the proof of Theorem 3.3 in \cite{G}, we can find a bounded open set ${\mathcal O}\subset (1,+\infty)\times X$ such that ${\mathcal C}(p_k)\subset {\mathcal O}$ and $\partial {\mathcal O}\cap \Sigma=\emptyset$, where $\Sigma$ is as defined in \eqref{2.5}. Moreover we can assume that ${\mathcal O}$ does not contain points $(p,u_p)$ if $|p-p_i|\geq \epsilon_0$ for $i=1,\dots, n$, where $\epsilon_0>0$ is such that there are no degeneracy points in $\cup_{i=1}^n (p_i-2\epsilon_0,p_i+2\epsilon_0)$. For ${\mathcal O}$ as above and $r>0$, consider the map $$\begin{array}{llll}S_r(p,v):&\hbox{ }\bar {\mathcal O}&\rightarrow&\hbox{ } X\times \mathbb{R}\\ &(p,v)&\mapsto &\left(S(p,v),\Arrowvert v-u_p\Arrowvert_X^2 -r^2\right) \end{array}$$ where $\Arrowvert \cdot\Arrowvert_X$ stands for the usual norm in the space $C^{1,\alpha}_0(A)$. Now, $\mathit{deg}\left( S_r(p,v),{\mathcal O},(0,0)\right)$ is defined since on $\partial {\mathcal O}$ there are no solutions of $S(p,v)=0$ different from the radial solution $u_p$, and hence $0=\Arrowvert v-u_p \Arrowvert_X<r$ for any such solution. Furthermore the degree is independent of $r>0$. For large $r$, $S_r(p,v)=(0,0)$ has no solutions in ${\mathcal O}$, and hence the degree is zero. On the other hand, for small $r$, if $(p,v)$ is a solution of $S_r(p,v)=(0,0)$, then $\Arrowvert v-u_p\Arrowvert_X =r$, and hence $p$ is close to one of the $p_i$, $i=1,\dots,n$. But then the sum of local degrees of $S_r$ in the neighborhoods of each of the $p_i$ is equal to zero, so that \begin{equation}\label{2.6} 0=\sum_{i=1}^n \mathit{deg}\left( S_r(p,v),{\mathcal O}\cap B_{r}(p_i,u_{p_i}),(0,0)\right). \end{equation} In particular we choose $r<\epsilon_0$ for $\epsilon_0$ defined as before. In order to compute the degree of $S_r(p,v)$ in ${\mathcal O}\cap B_{r}(p_i,u_{p_i})$ we use again the homotopy invariance of the degree. Let us define $$S_r^t(p,v)=\left(S(p,v), t(\Arrowvert v-u_p\Arrowvert_X^2 -r^2)+(1-t)(2p_ip-p^2-p_i^2+r^2)\right)$$ for $t\in [0,1]$. As before $\mathit{deg}\left( S_r^t(p,v), {\mathcal O}\cap B_{r}(p_i,u_{p_i}), (0,0)\right) $ is well defined since there are no solutions on the boundary if $r$ is small (recall that $u_{p_i\pm r}$ are isolated if $r<\epsilon_0$). Moreover the degree is independent of $t$.
For $t=1$ we have $S_r^1(p,v)=S_r(p,v)$, while for $t=0$, $S_r^0(p,v)=\left(S(p,v),2p_ip-p^2-p_i^2+r^2\right)$ and \begin{eqnarray} && \mathit{deg}\left( S_r^0(p,v),{\mathcal O}\cap B_{r}(p_i,u_{p_i}), (0,0)\right)\nonumber\\ &&= \mathit{deg} \left( S(p,v),{\mathcal O}\cap B_{r}(p_i,u_{p_i}) ,0\right)\cdot \mathit{deg}\left( 2p_ip-p^2-p_i^2+r^2, \{|p-p_i|<r\},0\right).\nonumber \end{eqnarray} Now $$\mathit{deg}\left( 2p_ip-p^2-p_i^2+r^2, \{|p-p_i|<r\},0\right)=1$$ for $p=p_i-r$ while $$\mathit{deg}\left( 2p_ip-p^2-p_i^2+r^2, \{|p-p_i|<r\},0\right)=-1$$ for $p=p_i+r$. This implies that \begin{eqnarray} &&\mathit{deg}\left( S_r(p,v),{\mathcal O}\cap B_{r}(p_i,u_{p_i}),(0,0)\right)=\nonumber\\ &&\mathit{deg}\left( S(p_i-r,\cdot),{\mathcal O}_{p_i-r},0\right) - \mathit{deg}\left( S(p_i+r,\cdot),{\mathcal O}_{p_i+r},0\right)\nonumber\\ &&=(-1)^{m(p_i-r)}-(-1)^{m(p_i+r)}\nonumber \end{eqnarray} where, as in \cite{G}, we denote by ${\mathcal O}_p$ the set $\{v\in X\, :\, (p,v)\in {\mathcal O}\}$.\\ We conclude that if $(p_i, u_{p_i})$ is a Morse index changing point then $$\mathit{deg}\left( S_r(p,v),{\mathcal O}\cap B_{r}(p_i,u_{p_i}),(0,0)\right)=\pm 2$$ while if $(p_i, u_{p_i})$ is not a Morse index changing point then $$\mathit{deg}\left( S_r(p,v),{\mathcal O}\cap B_{r}(p_i,u_{p_i}),(0,0)\right)=0.$$ Since the nonzero terms in (\ref{2.6}) correspond only to the Morse index changing points, and since these terms add up to zero, there must be an even number of Morse index changing points. \end{proof} We let $(p_k,u_{p_k})$ be a bifurcation point, corresponding, via (\ref{1.3a}), to the eigenvalue $\mu_k$. Then the linearized equation $L_{p_k}(v)=0$ at $u_{p_k}$ has, up to a constant multiple, a unique solution in $X$, see \cite{SW1}, which has the form given by (\ref{au}). We let $w_k$ be this unique normalized (in the $L^{\infty}$-norm) eigenfunction of $L_{p_k}(v)=0$ in $X$. Now we want to prove the following result: \begin{proposition}\label{p2} There exists $\rho_0>0$ such that if $(p,v)\in \left({\mathcal C}(p_k)\setminus\{(p_k,u_{p_k})\}\right)\cap B_{\rho}(p_k,u_{p_k})$, then $v-u_p=\alpha_p w_k+r_p$, where $w_k$ is as before and $\alpha_p\to 0$ as $p\to p_k$ and $r_p=o(\alpha_p)$ as $p\to p_k$. \end{proposition} \begin{proof} With the previous notations we let $l_k$ be an element of the dual space $X'$ of $X$ such that $<l_k,w_k>=1$, where $<\cdot,\cdot>$ denotes the duality between $X$ and $X'$. This element $l_k$ exists thanks to the Hahn-Banach Theorem. Let $X_k=\{z\in X\,\hbox{ such that } <l_k,z>=0\}$. Since $\mathit{dim}\left(\mathit{Ker}\, L_{p_k}\right)$ is finite, $\mathit{Ker}\, L_{p_k}$ is complemented in $X$, see for example \cite{M}, page 300. Hence we can decompose $X=\mathbb{R}\oplus X_k$, and every $z\in X$ can be written as $z=\alpha w_k+r$, with $\alpha=<l_k,z>$ and $r\in X_k$. \\ For $\eta\in(0,1)$ and $\gamma>0$, we let $$K_{\gamma,\eta}=\{(p,v)\in X\hbox{ such that }|p-p_k|<\gamma \hbox{ and }|<l_k,v-u_p>|>\eta \Arrowvert v-u_p\Arrowvert_{\infty}\}$$ where $\Arrowvert\cdot \Arrowvert_{\infty}$ denotes the usual $L^{\infty}$-norm.
We want to prove, first, that for any $\eta$ and any $\gamma$ there exists $\rho_0>0$ such that for all $\rho<\rho_0$ we have $\left({\mathcal C}(p_k)\setminus\{(p_k,u_{p_k})\}\right)\cap B_{\rho}(p_k,u_{p_k})\subset K_{\gamma,\eta}$. Here $B_{\rho}(p_k,u_{p_k})$ denotes the ball of radius $\rho$ in the product space $(1,+\infty)\times X$. If there is no such $\rho_0$, there exist sequences $\rho_n\to 0$ and $(p_n,v_n)\in \left({\mathcal C}(p_k) \setminus\{(p_k,u_{p_k})\}\right)\cap B_{\rho_n}(p_k,u_{p_k})$ such that $|p_n-p_k|\leq \rho_n<\gamma$, $v_n-u_{p_n}\to 0$ in $X$ and $|\alpha_n|:=|<l_k, v_n-u_{p_n}>|\leq \eta \Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}$. Letting $z_n=\frac{v_n-u_{p_n}}{\Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}}$ we have that $z_n$ satisfies \begin{equation}\label{**} \left\{ \begin{array}{ll} -\Delta z_n=h_n(x)z_n +\lambda z_n& \hbox{ in }A\\ z_n=0 & \hbox{ on }\partial A \end{array}\right. \end{equation} where \begin{equation}\label{2.10} h_n(x)=p_n\int_0^1 \left( u_{p_n}+t(v_n-u_{p_n})\right)^{p_n-1}dt \end{equation} and $h_n(x)\to p_k u_{p_k}^{p_k-1}$ in $X$ as $n\to +\infty$. Then $\Arrowvert z_n\Arrowvert_{\infty}=1$ and $z_n\to z$ uniformly in $\bar A$, where $z$ is a solution of \begin{equation}\label{2.11} \left\{\begin{array}{ll} -\Delta z=p_k u_{p_k}^{p_k-1}z+\lambda z &\hbox{ in }A\\ z=0 & \hbox{ on }\partial A \end{array}\right. \end{equation} such that $\Arrowvert z\Arrowvert_{\infty} =1$. This implies that either $z=w_k$ or $z=-w_k$. Moreover $\frac{|\alpha_n|}{\Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}}:=\frac {|<l_k,v_n-u_{p_n}>|}{\Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}}\to |<l_k,\pm w_k>|=1>\eta$ and we get a contradiction. \\ Thus there exists a $\rho_0>0$ as above. Using the previous decomposition $X=\mathbb{R}\oplus X_k$, we have that $v-u_p=\alpha_p w_k+r_p$ where $\alpha_p:=<l_k,v-u_p>$ and $r_p:=v-u_p-\alpha_pw_k\in X_k$. \\ Now, if $p_n\to p_k$ then $|\alpha_n|:=|\alpha_{p_n}|=|<l_k,v_n-u_{p_n}>|=\Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}|<l_k,z_n>|=\Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}(1+o(1))$. This implies that $\alpha_n\to 0$ as $p_n\to p_k$. Finally $r_n:=r_{p_n}=v_n-u_{p_n}-\alpha_n w_k$ so that \begin{eqnarray} &&\Arrowvert r_n\Arrowvert_{\infty} \leq \Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}+|\alpha_n|\,\Arrowvert w_k\Arrowvert_{\infty}<\frac 1{\eta}\,\, |<l_k,v_n-u_{p_n}>|+|\alpha_n|\nonumber\\ && =|\alpha_n|+\frac 1{\eta}\,|\alpha_n|\,\, |<l_k,w_k>|+\frac 1{\eta}\,\, |<l_k,r_n>|=\frac 1{\eta}\, |\alpha_n|+|\alpha_n|.\nonumber \end{eqnarray} This shows that $r_n\to 0$ as $p_n\to p_k$. Finally $$\Big|<l_k,\frac {v_n-u_{p_n}}{\Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}}>\Big|=\frac {|\alpha_n|}{\Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}}\to 1$$ so that $$\frac {r_n}{\Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}}=\frac{v_n-u_{p_n}-\alpha_n w_k}{\Arrowvert v_n-u_{p_n}\Arrowvert_{\infty}}\to w_k-w_k=0.$$ This shows that $r_n=o(|\alpha_n|)$ as $p_n\to p_k$ and finishes the proof.
\end{proof} \noindent This proposition gives us the behavior of the branch of solutions ${\mathcal C}(p_k)$ near a bifurcation point $(p_k,u_{p_k})$. \section{Proof of the main result}\label{s3} In this section we want to prove Theorem \ref{t1}. As before we consider functions which are $O(N-1)$-invariant, i.e. the space $X$ defined in (\ref{2.3}). It is easy to see (see the Appendix for the details) that if $(\rho,\phi_1,\dots,\phi_{N-2},\theta)$, with $a\leq \rho\leq b$, $\phi_i\in[0,2\pi]$ for $i=1,\dots,N-2$ and $\theta\in[0,\pi]$, are the radial coordinates in $\mathbb{R}^N$, then an $O(N-1)$-invariant function in $\mathbb{R}^N$ can be written as a function which depends only on $\rho$ and $\theta$. \\ We introduce the following cones: \begin{equation}\label{3.1} \mathcal{K}^1=\left\{\begin{array}{l}v\in C^{1,\alpha}(\bar A)\,\hbox{ s.t. }v \hbox{ is } O(N-1)-\hbox{invariant}, \, v\geq 0 \hbox{ in }A,\\ v=v(\rho, \theta) \hbox{ in radial coordinates for } (\rho, \theta)\in A,\\ v(\rho,\theta) \hbox{ is even in } \theta \hbox{ and }v(\rho,\pi+\theta)=v(\rho,\pi-\theta),\\ v(\rho,\theta) \hbox{ is non increasing in }\theta \hbox{ for } \theta\in [0,\pi], \rho\in[a,b] \end{array}\right\} \end{equation} and \begin{equation}\label{3.1-bis} \mathcal{K}^2=\left\{\begin{array}{l}v\in C^{1,\alpha}(\bar A)\,\hbox{ s.t. }v \hbox{ is } O(N-1)-\hbox{invariant}, \, v\geq 0 \hbox{ in }A,\\ v=v(\rho, \theta) \hbox{ in radial coordinates for } (\rho, \theta)\in A,\\ v(\rho,z) \hbox{ is even in } z, \hbox{ where } z=\cos\theta, \hbox{ for } z\in[-1,1],\\ v(\rho,\theta) \hbox{ is non increasing in }\theta \hbox{ for } \theta\in [0,\frac \pi 2], \rho\in[a,b] \end{array}\right\}. \end{equation} As said in the Introduction, functions that belong to $\mathcal{K}^1$ are Foliated Schwarz symmetric, see \cite{GPW} for some comments on this type of symmetry.\\ Since the terminology is not uniform in the literature we recall that $W$ is a cone in $X$ if $W$ is a closed convex set in $X$ such that $\gamma W\subseteq W$ for any $\gamma\geq 0$ and $W\cap (-W)=\{0\}$.\\ First we can prove the following result: \begin{lemma}\label{l31} For any $p\in(1,+\infty)$ the map $T(p,-):X\rightarrow X$, defined in (\ref{tpv}), maps the cone ${\mathcal K}^i$ into itself. \end{lemma} \begin{proof} Suppose $g\in {\mathcal K}^i$; then the function $g^p\in {\mathcal K}^i$ for $i=1,2$. We have that $T(p,g)=w$ if $w$ is a solution to \begin{equation}\label{3.2} \left\{\begin{array}{ll} -\Delta w-\lambda w =g^p & \hbox{ in }A\\ w=0&\hbox{ on }\partial A. \end{array}\right. \end{equation} First, since $g\geq 0$ in $A$ and $\lambda\le 0$, the Maximum principle implies $w\geq 0$ in $A$. We already know that the map $T(p,-)$ is invariant with respect to the action of the group $O(N-1)$, so that $T(p,-)$ maps the space $X$ into itself. Thus $w$ is $O(N-1)$-invariant and, as previously said, we can write $w=w(\rho, \theta)$ with the properties $w(\rho,\theta)=w(\rho,-\theta)$ and $w(\rho,\pi+\theta)=w(\rho,\pi-\theta)$ for every $(\rho,\theta)\in A$, see the Appendix for these details.
Moreover, if $g$ is even in $z$ then $g^p$ is even in $z$ and so also $w=T(p,g)$ is even in $z$. Exploiting this fact we can rewrite (\ref{3.2}) in radial coordinates, getting that $w$ satisfies \begin{equation}\label{3.3} \left\{\begin{array}{ll} -\frac{\partial ^2 w}{\partial \rho^2}-\frac{N-1}{\rho} \frac{\partial w}{\partial \rho}-\frac 1{\rho^2} \frac{\partial ^2 w}{\partial \theta^2}-\frac{N-2}{\rho^2}\cot \theta\frac{\partial w}{\partial \theta}-\lambda w=g^p & \hbox{ in }A\\ w=0&\hbox{ on }\partial A. \end{array}\right. \end{equation} Differentiating with respect to $\theta$ we get that $w_{\theta}:=\frac{\partial w}{\partial \theta}$ satisfies $$-\frac{\partial ^2 w_\theta}{\partial \rho^2}-\frac 1{\rho^2}\frac{\partial ^2 w_\theta}{\partial \theta^2}-\frac{N-1}{\rho} \frac{\partial w_\theta}{\partial \rho}-\frac{N-2}{\rho^2}\cot \theta \frac{\partial w_\theta}{\partial \theta} +\frac{N-2}{\rho^2\sin^2\theta}w_\theta-\lambda w_\theta=pg^{p-1}\frac{\partial g}{\partial \theta}$$ for $\rho\in [a,b]$ and $\theta\in [0,\pi]$. This is a second order operator, uniformly elliptic in $[a,b]\times [0,\pi]$. The coefficient of the linear term is $c(\rho,\theta)=\frac{N-2}{\rho^2\sin^2\theta}-\lambda> 0$ in $(a,b)\times (0,\pi)$ and it is bounded in every closed ball in $(a,b)\times (0,\pi)$. Also the coefficients of the first order terms, i.e. $\frac{N-1}{\rho}$ and $\frac{N-2}{\rho^2}\cot \theta$, are bounded on every closed ball in $(a,b)\times (0,\pi)$. Consider first the case $g\in{\mathcal K}^1$: the maximum principle applies, since $\frac{\partial g}{\partial \theta}\leq 0$ for $(\rho,\theta)\in (a,b)\times (0,\pi)$, and implies that $w_\theta$ reaches its maximum on the boundary of $(a,b)\times (0,\pi)$, see \cite{PW}, page 64. Then the boundary conditions $w(a,\theta)=w(b,\theta)=0$ for every $\theta$ imply that $w_\theta(a,\theta)=w_\theta(b,\theta)=0$ for every $\theta\in [0,\pi]$. Finally the symmetry properties of $w$ imply, in turn, that $w_\theta(\rho,0)=w_\theta(\rho,\pi)=0$, see the Appendix for details, so that $w_\theta\leq 0$ in $[a,b]\times [0,\pi]$. This implies that $w\in {\mathcal K}^1$ and concludes the proof in the case $g\in {\mathcal K}^1$.\\ Now assume $g\in {\mathcal K}^2$. As said before, $w=T(p,g)$ is even in $z$. By assumption we have that $\frac{\partial g}{\partial \theta}\leq 0$ for $(\rho,\theta)\in (a,b)\times (0,\frac \pi 2)$. Again we can apply the maximum principle, getting that $w_{\theta}$ reaches its maximum on the boundary of $(a,b)\times (0,\frac \pi 2)$. As before $w_\theta(a,\theta)=w_\theta(b,\theta)=0$ for every $\theta\in [0,\frac \pi 2]$ and $w_\theta(\rho,0)=0$ for every $\rho\in(a,b)$. Finally, since $w$ is even in $z$ we get that $w(\rho,\cos\theta)=w(\rho,-\cos\theta)$ and this implies $w_\theta(\rho,\frac \pi 2)=0$ for any $\rho\in(a,b)$. Then we have $w_\theta\leq 0$ in $[a,b]\times [0,\frac \pi 2]$, showing that $w\in {\mathcal K}^2$. This concludes the proof of the Lemma. \end{proof} Before proving the main result we need some notations, following \cite{DA0}.
Given a cone ${\mathcal K}$ and a point $u\in{\mathcal K}$ we let ${\mathcal K}_u:=\{ v\in X\,:\, u+tv\in{\mathcal K}\hbox{ for some }t>0\}$ and $S_u:=\{ v\in \overline{{\mathcal K}}_u\,:\, -v\in \overline{{\mathcal K}}_u\}$. Then, if $u_p$ is a radial solution of (\ref{1}), we have $\frac {\partial u_p}{\partial \theta}\equiv 0$ in $A$ and hence $\overline{{\mathcal K}}^1_{u_p}=\{v\in X\,:\,v=v(\rho, \theta) \hbox{ in radial coordinates for } (\rho, \theta)\in A,\ v(\rho,\theta) \hbox{ is even in } \theta \ , \ \frac {\partial v}{\partial \theta}\geq 0 \hbox{ in }A\}$ while $\overline{{\mathcal K}}^2_{u_p}=\{v\in X\,:\,v=v(\rho, \theta) \hbox{ in radial coordinates for } (\rho, \theta)\in A, v(\rho,z) \hbox{ is even in } z=\cos\theta \ , \ \frac {\partial v}{\partial \theta}\geq 0 \hbox{ for any } \theta\in [0,\frac \pi 2], \rho\in[a,b] \}$. Using the fact that any $v\in {\mathcal K}^2$ is even in $z$, this implies that $S^1_{u_p}=S^2_{u_p} =\{ v\in X\hbox{ such that }v \hbox{ is radially symmetric}\}$. See \cite{DA1} for details.\\ \begin{proposition}\label{p31} Let $u_p$ be a radial solution of (\ref{1}) which is nondegenerate. Then for $i=1,2$ \begin{equation}\label{3.5} \mathit{index}_{{\mathcal K}^i}\left( I-T(p,-),u_p\right)=\left\{\begin{array}{ll} \pm 1 & \hbox{ if }\alpha_1(p)+\mu_i>0\\ \\ 0& \hbox{ if }\alpha_1(p)+\mu_i<0 \end{array}\right. \end{equation} where $\alpha_1(p)$ is the first eigenvalue of the {\em radial} operator defined in (\ref{1.3}) and $\mu_1=N-1$ and $\mu_2=2N$. \end{proposition} \begin{proof} To calculate the index of $I-T(p,-)$ in the cone ${\mathcal K}^i$ at the radial solution $u_p$ we use Theorem 1 in \cite{DA0}. First we observe that, with the previous notations, ${\mathcal K}^i-{\mathcal K}^i$ is dense in $X$. Moreover, by assumption $u_p$ is a fixed point of $T(p,-)$ in ${\mathcal K}^i$ and $T(p,-)$ is differentiable at $u_p$ with $T'(p,u_p)$ invertible, since $u_p$ is nondegenerate. We are then in a position to apply Theorem 1 in \cite{DA0}, getting that $$\mathit{index}_{{\mathcal K}^i}\left( I-T(p,-),u_p\right)=\left\{\begin{array}{l} 0 \ \hbox{ if }\alpha_1(p)+\mu_i<0\\ \\ \mathit{index}_X\left( I-T(p,-),u_p\right)=\pm 1 \,\,\hbox{ otherwise. } \end{array}\right.$$ This claim follows from \cite[Theorem 1]{DA0} if we check that $T'(p,u_p)$ has an eigenvalue in $(1,+\infty)$ with corresponding eigenvector in $\overline{{\mathcal K}}^i_{u_p}\setminus S_{u_p}$ if and only if $\alpha_1(p)+\mu_i<0$, see also Lemma 2 in \cite{DA0} and the Remark after it.\\ This is equivalent to showing that the linearized operator has a negative eigenvalue with eigenfunction in $\overline{{\mathcal K}}^i_{u_p}\setminus S_{u_p}$. In \cite{GGPS} it is shown that the solutions of the linearized equation have the form given in \eqref{au}. This result holds also for the eigenfunctions of the eigenvalue problem with weight associated to the linearized equation, see \cite{GGN} for a proof of this assertion.
Then, if we restrict to the space $X$, we have that an eigenvalue of the linearized problem becomes negative each time $\alpha_1(p)+\mu_k$ becomes negative and the corresponding eigenfunctions have the form given in \eqref{au}, i.e. they are the product of a positive radial function and an $O(N-1)$-invariant spherical harmonic. Using the characterization of the $O(N-1)$-invariant spherical harmonics we have that the linearized operator has a negative eigenvalue in ${\mathcal K}^1$ if and only if $\alpha_1(p)+\mu_1<0$, while the linearized operator has a negative eigenvalue in ${\mathcal K}^2$ if and only if $\alpha_1(p)+\mu_2<0$, since the eigenfunction corresponding to the negative eigenvalue $\alpha_1(p)+\mu_1$ does not belong to ${\mathcal K}^2$ (it is not even in $z$). This finishes the proof. \end{proof} \begin{proof} [Proof of Theorem \ref{t1}] We prove the result in the case of the exponent $p_1$ related by \eqref{1.3a} to $\mu_1$. The case of the exponent $p_2$ related to $\mu_2$ follows in the same way, substituting the cone ${\mathcal K}^1$ with ${\mathcal K}^2$.\\ Let $p_1,\dots,p_M$ be the Morse index changing points related by (\ref{1.3a}) to the first eigenvalue $\mu_1$. We can repeat the proof of Theorem 3.3 in \cite{G} using the cone ${\mathcal K}^1$ instead of the space $X$. Hence we get, for any $p_j$, $j=1,\dots,M$, the existence of a continuum ${\mathcal C}(p_j)$ of solutions of (\ref{1}) which lies in the cone ${\mathcal K}^1$. Further, this continuum either is unbounded in ${\mathcal K}^1$ or it must intersect the curve of radial solutions ${\mathcal S}$ in another Morse index changing point. Moreover the points at which ${\mathcal C}(p_j)$ can intersect the curve of radial solutions ${\mathcal S}$ are related to the first eigenvalue $\mu_1$. This follows since otherwise the continuum $ {\mathcal C}(p_j)$ would not be contained in ${\mathcal K}^1$, see also Proposition \ref{p2}. Repeating the proof of Proposition \ref{p1} in the cone ${\mathcal K}^1$, we have that the number of Morse index changing points which belong to a bounded continuum ${\mathcal C}(p_j)$ has to be even. On the other hand the number of Morse index changing points corresponding to the eigenvalue $\mu_1$ is odd, since $\alpha_1(p)+\mu_1>0$ if $p$ is near $1$ while $\alpha_1(p)+\mu_1<0$ if $p$ is large enough.\\ This implies the existence of a value $p_1$ such that $\alpha_1(p_1)+\mu_1=0$ and ${\mathcal C}(p_1)$ is unbounded in $X$. \end{proof} \begin{remark} We suspect that the equation $\alpha_1(p)+\mu_k=0$ has only one solution, but we are not able to prove it. In that case any degeneracy point would be a Morse index changing point and the branch of bifurcating solutions would be unbounded. \end{remark} Now we sketch the proof of Corollary \ref{c1}. Let $(\rho,\theta)$ be the radial coordinates in $\mathbb{R}^2$. As said before, the proof of this result follows using the cones \begin{eqnarray} {\mathcal K}^n&:=&\left\{ v\in C(A)\,:\, v\geq 0\hbox{ in }A, v(\rho ,\theta)=v(\rho,\theta+\frac{2\pi}{n}) \hbox{ for }(\rho,\theta)\in A\,, v\hbox{ is even in }\theta\,,\right.\nonumber\\ && \left. v(\rho,\theta)\hbox{ is decreasing in }\theta \hbox{ for }0<\theta<\frac{\pi}{n}\, ,\, a\leq \rho\leq b\hbox{ and } v=0\hbox{ on }\partial A\right\}\nonumber \end{eqnarray} introduced by Dancer in \cite{DA1}.
For these cones ${\mathcal K}^n$ the analogues of Lemma 1 and Theorem 1 in \cite{DA1} hold. These cones allow one to separate continua of solutions of (\ref{1}) in $\mathbb{R}^2$ related by (\ref{1.3a}) to different eigenvalues $\mu_k$. Using Proposition \ref{p1} as in the proof of Theorem \ref{t1}, we then have that, corresponding to any $k$, there exists at least one continuum ${\mathcal C}(p_k)$ which is unbounded in $X$. Finally, since we are in dimension 2, the solutions of (\ref{1}) cannot blow up at a finite value $p^*$, so that the unbounded continuum $ {\mathcal C}(p_k)$ has to be defined for every $p>p_k$. \section{Appendix} \noindent{\bf The $O(N-1)$-invariant functions in $\mathbb{R}^N$.}\\[.1cm] Let us consider the spherical coordinates in $\mathbb{R}^N$, $(\rho, \phi_1,\dots,\phi_{N-2},\theta)$ where $\phi_i\in[0,2\pi]$, $i=1,\dots,N-2$ and $\theta\in [0,\pi]$. As usual $$\left\{\begin{array}{ll} x_i=\rho \sin \theta H_i( \phi_1,\dots,\phi_{N-2}) & i=1,\dots,N-1\\ x_N=\rho \cos \theta \end{array} \right.$$ where the $H_i$ are suitable functions. We are interested in the $O(N-1)$-invariant functions, i.e. functions $v$ such that $$v(x_1,\dots,x_N)=v(g(x_1,\dots,x_{N-1}),x_N)$$ for any $g\in O(N-1)$. By definition, a function which is $O(N-1)$-invariant depends only on $\rho'=\sqrt{x_1^2+\dots+x_{N-1}^2}$ and $x_N$. Then, in radial coordinates, since $ \rho'=\sqrt{\rho^2-x_N^2}=\sqrt{\rho^2(1-\cos^2\theta)}=\rho|\sin \theta|$ and $x_N=\rho\cos\theta$, $v$ can be written as a function which depends only on $\rho$ and $\theta$.\\ Moreover, $v$ must satisfy $v(\rho, \theta)=v(\rho,-\theta)$ and $v(\rho, \pi+\theta)= v(\rho, \pi-\theta)$. This assertion follows since $v$ must depend only on $\rho'$ and $x_N$ and, as functions of $\rho, \theta$, they satisfy $\rho'(\rho,-\theta)=\rho'(\rho,\theta)$, $x_N(\rho,-\theta)= x_N(\rho,\theta)$ and $\rho'(\rho,\pi+\theta)=\rho|\sin(\pi+\theta)|=\rho|\sin\theta|=\rho|\sin(\pi-\theta)|=\rho'(\rho,\pi-\theta)$, $x_N(\rho,\pi+\theta)=\rho\cos(\pi+\theta)=-\rho\cos\theta=\rho \cos(\pi-\theta)= x_N(\rho,\pi-\theta)$.\\ Then an $O(N-1)$-invariant function $v$ satisfies $v(\rho, \theta)=v(\rho,-\theta)$ and $v(\rho, \pi+\theta)= v(\rho, \pi-\theta)$ and, if $v\in C^1(A)$, then it is $C^1((a,b)\times [0,\pi])$ and it verifies $\frac{\partial v}{\partial \theta}(\rho,0)=\frac{\partial v}{\partial \theta}(\rho,\pi)=0$ for any $\rho>0$. \\[1cm] \noindent{\bf Some remarks on the $O(N-1)$-invariant spherical harmonic functions.}\\[.1cm] From what we said before, the $O(N-1)$-invariant spherical harmonics can be written as functions which depend only on the variable $\theta$. Then the $k$-th $O(N-1)$-invariant spherical harmonic satisfies \begin{equation}\label{A1} -\sin^2 \theta \frac {\partial^2 \Phi_k}{\partial \theta^2} -(N-2)\sin\theta\cos\theta \,\,\frac {\partial \Phi_k}{\partial \theta} =\lambda_k \sin ^2\theta \,\, \Phi_k \end{equation} for $\theta\in (0,\pi)$.
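As a quick check of \eqref{A1} (elementary, and consistent with the value $\mu_1=N-1$ used in Section \ref{s3}), take $\Phi_1(\theta)=\cos\theta$: then $$-\sin^2\theta\,(-\cos\theta)-(N-2)\sin\theta\cos\theta\,(-\sin\theta)=(N-1)\sin^2\theta\,\cos\theta,$$ so that $\Phi_1$ solves \eqref{A1} with $\lambda_1=N-1$.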
Letting $z=\cos\theta$ we get that $\Phi_k(z)$ satisfies \begin{equation}\label{A2} (1-z^2)\frac {\partial ^2 \Phi_k}{\partial z^2}-(N-1)z\frac {\partial \Phi_k}{\partial z}+\lambda_k \Phi_k=0 \end{equation} for $z\in(-1,1)$. This is a {\em Sturm-Liouville problem}. Then we can say that the $k$-th eigenfunction has $k$ distinct zeros in $[-1,1]$, and by the Sturm Theorem between two consecutive zeros of $\Phi_k$ there is a zero of $\Phi_{k+1}$.\\ Equation \eqref{A2} is the {\em Jacobi equation} with, using the usual notations for Jacobi, $\alpha=\beta=\frac{N-3}2$ and $n=k$. Then the bounded solutions of \eqref{A2} are given, up to a constant multiple, by the Jacobi polynomials, which can be written, using Rodrigues' formula, as \begin{equation}\label{A3} P_k^{(\frac{N-3}2,\frac{N-3}2)}(z)=\frac{(-1)^k}{2^k k!}(1-z^2)^{-\frac{N-3}2}\frac{\partial ^k}{\partial z^k}\left( (1-z^2)^{k+\frac{N-3}2}\right) \end{equation} for $z\in (-1,1)$ and any $k\geq 0$.\\ If $\alpha=\beta=0$, i.e. for $N=3$, the Jacobi polynomials reduce to the Legendre polynomials \begin{equation}\nonumber P_k(z)=\frac 1{2^k k!}\frac{\partial ^k}{\partial z ^k}\left( z^2-1\right)^k \end{equation} and, indeed, for $N=3$, (\ref{A2}) is the classical Legendre equation.\\ Then the $O(N-1)$-invariant spherical harmonics are, up to a constant multiple, the functions $$\Phi_k(\theta)=P_k^{(\frac{N-3}2,\frac{N-3}2)}(\cos \theta)$$ for $\theta\in (0,\pi)$, where $P_k^{(\frac{N-3}2,\frac{N-3}2)}$ are the Jacobi polynomials.\\ To give some examples we have $$\begin{array}{l} \Phi_1(\theta)=\frac {N-1}2 \cos\theta,\\ \Phi_2(\theta)= \frac{N-1}8\left(N\cos^2\theta -1\right),\\ \Phi_3(\theta)=\frac 1{48}(N+3)(N+1)\left((N+2)\cos^3\theta -3\cos\theta\right). \end{array}$$ This implies, in turn, that the unique $O(N-1)$-invariant spherical harmonic $\Phi_1$ related to the first eigenvalue $\lambda_1$ is, up to a constant multiple, $$\Phi_1(\theta)=\cos\theta.$$ It then follows that $\frac{\partial \Phi_1}{\partial \theta}=-\sin \theta\leq0$ in $[0,\pi]$, while for all the other spherical harmonics the derivative $\frac{\partial \Phi_i}{\partial \theta}$ must change sign in $[0,\pi]$. This can be seen since, in the formulation (\ref{A2}), $\Phi_1(z)=z$ changes sign once in $(-1,1)$, so that any other solution $\Phi_i(z)$ changes sign at least twice in $(-1,1)$; this implies that the derivative $\frac{\partial \Phi_i}{\partial z}$ has to change sign in $(-1,1)$, and hence also $ \frac{\partial \Phi_i}{\partial \theta}$ has to change sign in $(0,\pi)$. \begin{thebibliography}{99} \bibitem[AG]{AG}{\sc A.L. Amadori, F. Gladiali}, Bifurcation and symmetry breaking for the H\'enon equation, {\em Adv. Differential Equations} {\bf 19} (2014), 755-782. \bibitem[DA]{DA0}{\sc E.N. Dancer}, On the indices of fixed points of mappings in cones and applications, {\em J. Math. Anal. Appl.} {\bf 91} (1983), 131-151. \bibitem[DA1]{DA1}{\sc E.N. Dancer}, Global breaking of symmetry of positive solutions on two-dimensional annuli, {\em Differential Integral Equations} {\bf 5} (1992), no. 4, 903-913. \bibitem[FMT]{F} {\sc P. Felmer, S. Martinez, K.
Tanaka}, Uniqueness of radially symmetric positive solutions for $-\Delta u+u=u^p$ in an annulus, {\em J. Differential Equations} {\bf 245} (2008), 1198-1209. \bibitem[G]{G}{\sc F. Gladiali}, A global bifurcation result for a semilinear elliptic equation, {\em J. Math. Anal. Appl.} {\bf 369} (2010), 306-311. \bibitem[GGN1]{GGN} {\sc F. Gladiali, M. Grossi, S. Neves}, Nonradial solutions for the H\'enon equation in $\mathbb{R}^n$, {\em Adv. Math.} {\bf 249} (2013), 1-36. \bibitem[GGN2]{GGN2} {\sc F. Gladiali, M. Grossi, S. Neves}, Symmetry breaking and Morse index of solutions of nonlinear elliptic problems in the plane, arXiv:1308.0519. \bibitem[GGPS]{GGPS} {\sc F. Gladiali, M. Grossi, F. Pacella, P.N. Srikanth}, Bifurcation and symmetry breaking for a class of semilinear elliptic equations in an annulus, {\em Calc. Var. Partial Differential Equations} {\bf 40} (2011), 295-317. \bibitem[GGT]{GGT}{\sc F. Gladiali, M. Grossi, C. Troestler}, A non-variational system involving the critical Sobolev exponent, preprint. \bibitem[GP]{GP} {\sc F. Gladiali, F. Pacella}, Bifurcation analysis for a class of supercritical elliptic problems in an exterior domain, {\em Nonlinearity} {\bf 24} (2011), 1575-1594. \bibitem[GPW]{GPW} {\sc F. Gladiali, F. Pacella, T. Weth}, Symmetry and nonexistence of low Morse index solutions in unbounded domains, {\em Journal de Math\'ematiques Pures et Appliqu\'ees} {\bf 93} (2010), 536-558. \bibitem[KW]{KW} {\sc J.L. Kazdan, F.W. Warner}, Remarks on some quasilinear elliptic equations, {\em Comm. Pure Appl. Math.} {\bf 28} (1975), no. 5, 567-597. \bibitem[M]{M} {\sc R.E. Megginson}, An introduction to Banach space theory, Graduate Texts in Mathematics, 183, Springer-Verlag, New York, 1998. \bibitem[N]{N} {\sc L. Nirenberg}, Topics in nonlinear functional analysis, Revised reprint of the 1974 original, Courant Lecture Notes in Mathematics, 6, New York University, Courant Institute of Mathematical Sciences, New York; American Mathematical Society, Providence, RI, 2001. \bibitem[PW]{PW}{\sc M.H. Protter, H.F. Weinberger}, Maximum principles in differential equations, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1967. \bibitem[R]{R} {\sc P.H. Rabinowitz}, Some global results for nonlinear eigenvalue problems, {\em J. Functional Analysis} {\bf 7} (1971), 487-513. \bibitem[SW1]{SW1} {\sc J. Smoller, A. Wasserman}, Symmetry-breaking for solutions of semilinear elliptic equations with general boundary conditions, {\em Comm. Math. Phys.} {\bf 105} (1986), 415-441. \bibitem[T]{T} {\sc M. Tang}, Uniqueness of positive radial solutions for $\Delta u-u+u^p=0$ on an annulus, {\em J. Differential Equations} {\bf 189} (2003), 148-160. \end{thebibliography} \end{document}
\begin{document} \title{A Prosumer-Centric Framework for Concurrent Generation and Transmission Planning — Part I} \author{Ni~Wang, Remco~Verzijlbergh, Petra~Heijnen, and~Paulien~Herder \thanks{N. Wang, R. Verzijlbergh and P. Heijnen is with the Faculty of Technology, Policy and Management, Delft University of Technology, Delft, the Netherlands (e-mails: \{n.wang, r.a.verzijlbergh, p.w.heijnen\}@tudelft.nl). } \thanks{P. Herder is with the Faculty of Applied Sciences, Delft University of Technology, Delft, the Netherlands (e-mail: [email protected]). } \thanks{This research received funding from the Netherlands Organisation for Scientific Research (NWO) [project number: 647.002.007].} } \maketitle \begin{abstract} The growing share of proactive actors in the electricity markets calls for more attention on prosumers and more support for their decision-making under decentralized electricity markets. In view of the changing paradigm, it is crucial to study the long-term planning under the decentralized and prosumer-centric markets to unravel the effects of such markets on the planning decisions. In the first part of the two-part paper, we propose a prosumer-centric framework for concurrent generation and transmission planning. Here, three planning models are presented where a peer-to-peer market with product differentiation, a pool market and a mixed bilateral/pool market and their associated trading costs are explicitly modeled, respectively. To fully reveal the individual costs and benefits, we start by formulating the optimization problems of various actors, i.e. prosumers, transmission system operator, energy market operator and carbon market operator. Moreover, to enable decentralized planning where the privacy of the prosumers is preserved, distributed optimization algorithms are presented based on the corresponding centralized optimization problems. \end{abstract} \begin{IEEEkeywords} Generation and transmission planning, Peer-to-peer electricity markets, Pool electricity markets, Mixed bilateral/pool electricity markets, Distributed optimization. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} To mitigate climate change and reduce carbon emissions, renewable energy sources (RES) play a vital role in modern power systems. Countries around the world set ambitious RES targets for the next decades and the planning of RES generation and transmission is prominent on the agenda. Integrated planning of generation and transmission assumes there is a centralized planning entity, who determines the optimal expansion of the system to fulfill the carbon target. This method serves as a benchmark, which, however, does not provide information on how it can be used in a market environment. In liberalized electricity markets, the generation investments are driven by the price signals from the markets. It is thus essential to investigate the influences of the electricity markets on the planning decisions. In fact, the costs and benefits that are incurred from the electricity markets (later all referred to as trading-related costs, which can be negative in case of revenues), i.e. those associated with buying or selling energy in the markets, constitute a major part of the costs of the generation companies \cite{Botterud2005}, or in general the investors. They play a crucial role in their decision-making on whether or to what extent they will invest. However, in energy system models for integrated planning of generation and transmission, the markets are not yet modeled. 
The other critical challenge in the prevailing centralized planning models is the lack of attention on the coordination problem when multiple areas need to plan together \cite{Wang2017}. Due to political, technical and privacy issues, it is difficult for the involving agents, e.g., the areas (or otherwise referred to as prosumers\footnote{In this study, the notation prosumer represents an agent with an aggregated supply and/or a demand, which is interchangeable with peer. It can refer to an agent on either the distribution level or the transmission level, which includes but is not limited to households.}) and the grid operators, to share data. With the large-scale integration of RES, the inter-dependencies that are enforced by interconnections between the areas are growing and thus proper coordination between the agents is called for to achieve overall economic efficiency. Therefore, a decentralized planning approach where the agents make decisions locally needs to be developed. This problem becomes more urgent, due to the emergence of more decentralized and prosumer-centric electricity markets, such as the peer-to-peer (P2P) electricity markets \cite{Parag2016}. Therefore, in this study, we propose a prosumer-centric framework for concurrent generation and transmission planning. Instead of the conventional integrated planning approach, we focus on the wills of the relevant agents, where they either do generation or transmission planning at the same time. This way of modeling is called a concurrent approach in this paper. To that end, various electricity markets are modeled explicitly in the planning models of the agents, including P2P markets, pool markets and mixed bilateral/pool markets. The proposed approach will help the prosumers and the grid operators to make informed planning decisions considering the electricity markets while accounting for the carbon target imposed by the government. It can also help policy-makers evaluate how various market designs would influence the decentralized planning decisions. \subsection{Planning models integrating peer-to-peer electricity markets} Traditional electricity markets are designed mainly to accommodate centralized generation sources. In recent years, with the increasing penetration of distributed energy sources, the market paradigms are changing and prosumer markets emerge as next-generation market designs. In these markets, prosumers actively participate in the market and are expected to gain substantial profits from the market. \cite{Parag2016} summarized three typical prosumer market models, i.e. P2P, prosumer-to-grid and organized prosumer groups. In particular, P2P markets have been studied extensively over the past few years. A major focus of the existing studies of P2P electricity markets is on the operational perspective. The design of the market is recognized as one of the essential layers alongside the physical layer and information layer \cite{VanLeeuwen2020}. Here, bilateral trading is considered as one of the most promising P2P market mechanisms \cite{Wang2020c}. When prosumers trade bilaterally in P2P markets, they could express preferences for particular trades. There is not yet a naming convention for that in the field, terms such as heterogeneous preferences (\cite{Yang2015, Hahnel2020}), product differentiation \cite{Sorin2019}, and energy classes \cite{Morstyn2019a} have been used. 
Among those, product differentiation is a generic mathematical formulation \cite{Baroche2019b} that can be used for various purposes; e.g., \cite{Baroche2019a} used it to account for exogenous network charging in P2P markets. Bilateral contracts exist not only in P2P markets; in fact, they constitute a major part of energy trading in the prevailing electricity markets. They are well suited to hedging against the uncertainties associated with price and quantity in day-ahead markets. Bilateral contracts can be modeled directly by P2P market models. Besides, they have been studied mainly using simulation methods (\cite{Bower2000, Knezevic2011, Lopes2012, Imran2020}) and complex network approaches \cite{Bompard2008}. Yet, the investment decisions under the P2P markets are not discussed \cite{Wang2020c}. Like the traditional pool electricity markets, the P2P markets have a significant influence on the investment decisions of the prosumers through the price signals they give. While the operational aspects of the P2P markets have been extensively studied, it is important to study which investment decisions these markets would lead to. Therefore, a largely ignored aspect of the P2P markets, and thus a significant research gap, is the integration of those markets in planning models. Only with such models can prosumers make informed planning decisions in a P2P market environment. \subsection{Planning models integrating pool electricity markets} Since the liberalization that started in the 1990s, the electricity sector has undergone drastic changes, and the pool-based electricity market has become the norm. However, with the emergence of the decentralized electricity markets, the predominant paradigm has come to a turning point. Decentralized electricity markets are playing an increasingly crucial role and might supersede the traditional market designs in the long term. While most studies investigate techno-economic uncertainties such as changes in demand, weather forecasts and cost parameters, the institutional uncertainty, i.e. the uncertainty in how the future electricity markets will be shaped, is generally overlooked. Planning models should highlight this uncertainty, unraveling the associated trading costs and benefits along the planning horizon. Consequently, it is of high importance to consider pool markets in the planning models as well, in addition to the P2P markets. The inclusion of both markets in one framework will show, quantitatively and comparatively, the costs incurred when different markets are considered. As an example, \cite{Doorman2008} analyzed the effects of four market designs on generation investment. In the literature, the decentralized planning of generation and transmission in the market environment is often addressed by multi-level optimization problems. From the game-theoretical perspective, these problems are also known as leader-follower (Stackelberg) games. In the upper level, the transmission system operator (TSO), as a leader, makes transmission investments while anticipating the investments of generation companies. In the lower level, generation companies, as followers, maximize their profits, while their optimization problems are constrained by market-clearing outcomes. Several examples are \cite{Jenabi2013, Taheri2017, Grimm2020}. This type of study takes the perspective of the TSO and investigates the anticipatory sequential decision-making process.
Similarly, some studies assume a reverse sequence in decision-making, i.e., the generation companies make decisions first taking into account the transmission charges, and then the TSO follows, e.g., see the work of \cite{Tohidi2017}. Instead of this sequential decision-making approach, the prosumers and the TSO may also make decisions at the same time, i.e. concurrently. In P2P markets, the market-clearing is modeled as an equilibrium problem (\cite{Baroche2019b, Moret2020}), where all agents, i.e. prosumers, market operators and TSO, negotiate together while minimizing their costs. This approach is also used to study pool markets in \cite{Hobbs03complementarity-basedequilibrium, StevenA.Gabriel2013, Ruiz2014}. Moreover, it has also been applied to the planning problem under the pool market in \cite{He2012}. \subsection{Distributed planning algorithms} Unlike centralized optimization where information has to be gathered by the centralized entity, distributed optimization indicates that agents exchange information with each other and optimize their local problems in order to solve the centralized problem often without the presence of a centralized coordinator. \cite{Wang2017, Molzahn2017, Kargarian2018} give reviews on the distributed optimization approaches that are applied for power systems. Among those, Lagrangian relaxation and Benders decomposition together with their variants are the most commonly seen in the literature \cite{Sagastizabal2012}. In planning models, Benders decomposition and its variants are often used (such as \cite{Ruiz2015, Munoz2016}) to increase solution speed. In P2P market models, Lagrange relaxation, in particular, alternating direction method of multipliers (ADMM) has been used extensively. ADMM allows the agents to solve their local optimization problems while preserving private information. Distributed optimization is particularly needed in the P2P markets where preferences of the prosumers are considered, see e.g. \cite{Morstyn2019a, Sorin2019, Moret2019, Baroche2019b, LeCadre2020}. Given different application domains of the methods, the choice of a method depends on the purpose of using distributed optimization. In this study, since we aim to let the prosumers solve their local problems, ADMM will be further used and discussed. \subsection{Contributions} Based on the background information and the literature review, we found that the existing energy system planning models have a focus on the system level. However, for modern power systems, especially with the increasing interests in P2P markets, the existing integrated planning models are not sufficient to capture the prosumer-level results and are thus not able to aid the investment decision-making of the prosumers. Therefore, we propose a prosumer-centric planning approach that incorporates the following features, which are also presented in Figure \ref{fig:features}. \begin{figure} \caption{Prosumer-centric features and the proposed planning framework, compared to existing energy system planning models.} \label{fig:features} \end{figure} \begin{enumerate} \item all costs and benefits, in particular, those associated with buying/selling energy on the electricity market are included in the objective functions of the prosumers. This requires the explicit formulation of the optimization problem for each agent. The problems are interconnected, and thus cannot be solved separately. To solve them together, an equilibrium problem has to be formulated. \item preferences of the prosumers are included. 
Compared to an objective where only costs are included, this results in an improved utility function that considers the willingness to pay of the prosumers. This is done by utilizing the product differentiation term from a P2P market. Consequently, the P2P market has to be considered endogenously in a planning model. \item privacy of the prosumers is preserved. To fully account for the privacy of the prosumers, distributed optimization algorithms must be provided where each agent solves its problem locally with limited information exchange. \end{enumerate} In our proposed framework, besides the prosumers, the energy market operator and the TSO, the government is also included. The role of the government is to set carbon goals for the future, which are facilitated through policies such as a cap-and-trade system and a carbon tax \cite{He2012}. In this study, a cap-and-trade system with a carbon market is considered since it is a market-based policy and a carbon target can be imposed directly by the government. This planning framework includes three planning models which consider a P2P market, a pool market and a mixed bilateral/pool market, respectively. Part I of the paper focuses on model formulations, and numerical results will be provided in Part II. The contributions of this paper are: \begin{itemize} \item the formulation of a planning model integrating the P2P market design with product differentiation, where previously only operational problems were studied. This planning model is the first of its kind to study the P2P market beyond the operational aspects. Such a model allows investigating the influence of the P2P market design on the planning decisions. \item the formulation of planning models integrating the pool electricity market and the mixed bilateral/pool market, respectively. \item the formulation of distributed optimization algorithms to solve the planning problems in a decentralized manner. \end{itemize} The most common usage of energy system planning models is to assess the investment decisions and the associated costs. Naturally, the fundamental usage of our framework is to investigate the optimal investment decisions and the associated costs and benefits given the various market environments. Our framework, however, is able to go beyond this and deal with other problems. Here, we will briefly introduce three potential applications; they will be further explained and illustrated by the archetypal case studies in Part II of the paper. As a first example, in the modern and evolving power sector, the planning decisions have to be made ahead of time, while there will be uncertainties in future market designs. Hence, it is important to make no-regret decisions such that the total cost is minimal. Thanks to the inclusion of various market designs, the framework is able to help the agents to best deal with this uncertainty. Moreover, the framework can be used as a negotiation tool in the joint generation planning of the prosumers. In the endogenous P2P market with product differentiation, the traded energy could be influenced by the willingness to pay of the prosumers, which in turn affects the planning decisions. Here, the willingness to pay will play a crucial role in the investment negotiations. Furthermore, the framework makes it possible to model bilateral contracts in the mixed bilateral/pool markets.
It opens up research directions such as modeling the effects of bilateral contracts and investigating how product differentiation therein could influence the planning decisions. The remainder of the study is structured as follows. Section \ref{sec:pre} gives the preliminaries of the framework. Then, Sections \ref{sec:p2p} - \ref{sec:mix} discuss the planning models integrating the P2P market, the pool market and the mixed bilateral/pool market, respectively. Lastly, conclusions are drawn in Section \ref{sec:conclusion}. \section{Preliminaries} \label{sec:pre} This section describes the preliminaries needed to understand the model and outlines the objective functions for the prosumers and for the TSO in integrated energy system planning models. \begin{figure} \caption{Information flows between the agents for the peer-to-peer market, the pool market and the mixed bilateral/pool market. EMO: energy market operator, CMO: carbon market operator, TSO: transmission system operator. The dashed box indicates that the EMO and CMO may not exist physically.} \label{fig:structure} \end{figure} Let us start by briefly introducing the three market designs. Figure \ref{fig:structure} shows their structure in terms of the information flows. The P2P electricity market design is based on \cite{Baroche2019b, Moret2020}. In this P2P market, the prosumers are represented by the nodes in the communication graph and the edges connect the pairs of prosumers who might trade energy with each other. The prosumers can trade freely with any other prosumers on the same communication graph. The bilateral prices are based on negotiation and are derived using the proposed distributed algorithm, which performs as if market operators were physically present. Here, differently from the multi-level optimization that is commonly seen in the literature, the TSO is on the same level (and thus on the same communication graph) as the prosumers. In the conventional pool market, the market operators are physically present, and the information flows are only between them and the market participants. In the mixed bilateral/pool market, the market operators are in charge of the pool market while prosumers can also trade bilaterally under the negotiated prices. For each of the three markets, we start by formulating the optimization problems assuming perfect competition for all the agents, i.e. the prosumers, the TSO, the energy market operator and the carbon market operator. This step is crucial because the individual costs can only be derived if the objective functions of the agents are formulated explicitly. Note that since these problems are all interconnected, i.e. the parameters in one problem may be the decision variables in the other, and vice versa, they should be solved together. This will be done by finding the Nash equilibrium, based on which the equivalent centralized optimization problems will be given. Lastly, distributed optimization algorithms are presented using ADMM. Now the terminology used in the paper will be introduced. Lower case symbols are used for variables and upper case symbols for parameters. Dual variables are expressed using Greek letters. $n$ is the index for prosumers $\mathcal{N}$. $i$ is the index for generation technologies $\mathcal{G}$ and storage technologies $\mathcal{S}$. $l$ is the index for transmission lines in the existing line set $\mathcal{L}$. $t$ represents a time step in $\mathcal{T}$.
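Since ADMM underlies all the distributed algorithms referred to above, we briefly recall its generic (scaled) form here; this is a textbook sketch, not the problem-specific updates, which are derived later in the paper. For a problem of the form $\min_{x,z} F_1(x)+F_2(z)$ subject to $Ax+Bz=c$, ADMM iterates \begin{align} x^{k+1} &:= \arg\min_{x}\; F_1(x)+\tfrac{\varrho}{2}\lVert Ax+Bz^{k}-c+u^{k}\rVert_2^2, \notag \\ z^{k+1} &:= \arg\min_{z}\; F_2(z)+\tfrac{\varrho}{2}\lVert Ax^{k+1}+Bz-c+u^{k}\rVert_2^2, \notag \\ u^{k+1} &:= u^{k}+Ax^{k+1}+Bz^{k+1}-c, \notag \end{align} where $u$ is the scaled dual variable and $\varrho>0$ the penalty parameter. In the planning models below, coupling constraints such as the reciprocity of the bilateral trades play the role of $Ax+Bz=c$, so that the prosumers and the TSO solve only local subproblems and exchange the coupled variables and the associated prices.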
In the family of energy system planning models, the objective is usually to minimize total annualized cost consisting of Capital Expenditure (CapEx) cost of generation and storage technologies, Fixed Operation \& Maintenance (FOM) costs, Variable Operation \& Maintenance (VOM) costs and CapEx cost of networks \cite{Wang2020b}. These planning models use a direct split of the total costs that are incurred to the prosumers and the TSO. For the prosumer $n$, its cost $f_n$ is the summation of CapEx, FOM and VOM of the corresponding technologies. Here, $A_i$ is the annual factor for technology $i$, $C_i$ and $CS_i$ are the CapEx for generation $i$ and storage $i$, respectively, $k_{i,n}$ and $k^{\text{storage}}_{i,n}$ are the capacities for generation and storage technologies $i$ for prosumer $n$, respectively, $B_i$ is the VOM cost for technology $i$ and $p_{i,n,t}$ is the production of technology $i$ for prosumer $n$ at time step $t$. The TSO, however, has to bear the cost $g$, which is the total CapEx for the transmission network, where $\Delta_l$ is the length of line $l$. \begin{subequations} \begin{flalign} f_n & = \sum_{i\in (\mathcal{G}+\mathcal{S})} \frac{C_{i} k_{i,n}}{A_{i}} + \sum_{i\in \mathcal{S}} \frac{CS_{i} k^{\text{storage}}_{i,n}}{A_{i}} \\ & + \sum_{t \in \mathcal{T}} \sum_{i\in \mathcal{G}} B_i p_{i,n,t} \notag \\ g & = \sum_{l \in \mathcal{L}} \frac{\Delta_l C_l k_l}{A_l} \end{flalign} \end{subequations} \section{Peer-to-peer market} \label{sec:p2p} Under a P2P market, we start by formulating the interconnected optimization problems for all the agents. Next, the equivalent centralized optimization problem will be given. Then, the ADMM algorithm for distributed optimization will be described. \subsection{Prosumer} {\allowdisplaybreaks In this study, a prosumer is considered as an archetypal agent in the energy system with some generation capacities and/or an energy demand who participates in a P2P market on the transmission level. Examples can be groups of aggregators/large consumers/generation companies that are aggregated to a power injection point on the transmission network. Note that, in principle, different prosumers can co-exist at a transmission power injection point, i.e. there might be a discrepancy between the set of prosumers and the set of transmission nodes. For simplicity but without loss of generality, we consider the two sets are identical. Moreover, a prosumer can be on a distribution level, e.g., a household and the gist of the framework still applies as long as TSO is changed to DSO, the market is changed to a local market and the power flow calculation is changed accordingly. The prosumer $n \in \mathcal{N}$ can trade energy $p^{\text{p2p}}_{n,m,t} $ at time step $t$ bilaterally with its neighbors $m \in \omega_n$ that are on the communication graph. Trading prices $\lambda^{\text{p2p}}_{n,m,t}$ and grid prices $\lambda^{\text{grid}}_{n,m,t}$ are associated with the energy trades $p^{\text{p2p}}_{n,m,t} $. In addition, $I_{n,m}$ is the product differentiation parameter from $n$ to $m$. More details of the product differentiation term can be found in \cite{Sorin2019}. $\Gamma^{\text{ps}}_n$ is the set of decision variables for prosumer $n$. 
It includes the investment capacities $k_{i,n}$ of generation and storage conversion $i$, the investment capacities $k^{\text{storage}}_{i,n}$ of storage $i $, the energy production $p_{i,n,t}$ from technology $i$ at time step $t$, the bilateral trades $p^{\text{p2p}}_{n,m,t} $ from $n$ to $m$ at time step $t$, state-of-charge $soc_{i,n,t}$ of storage $i$ at time step $t$, storage discharging $p^{\text{out}}_{i,n,t}$ of storage $i$ at time step $t$, storage charging $p^{\text{in}}_{i,n,t}$ of storage $i$ at time step $t$ and the number of carbon permits $e^{CO_2}_n$ to buy from the carbon market. The objective function of prosumer $n$ is given in (\ref{equ:pro_obj}), which minimizes the total cost of the prosumer. The total cost consists of three parts. The first part $f_n$ is the cost that is commonly used in energy system planning models, i.e. CapEx, FOM and VOM. The second part is the trading-related costs in the energy market, including trading costs, grid costs and product differentiation costs. The third part is the trading-related costs in the carbon market. Since a cap-and-trade system is discussed here, the prosumer $n$ has to buy a certain amount of emissions permits $e^{CO_2}_n$ that are equivalent to their emissions. Therefore, the objective function (\ref{equ:pro_obj}) not only considers the planning costs but also integrates the trading-related costs (which can be negative provided that the prosumer gains more from selling energy) in the energy market and the carbon market. \begin{subequations} \begin{flalign} \min_{\Gamma^{\text{ps}}_n} f_n + \sum_{t \in \mathcal{T}} \sum_{m \in \omega_n} (\lambda^{\text{p2p}}_{n,m,t} + \lambda^{\text{grid}}_{n,m,t}) p^{\text{p2p}}_{n,m,t} \label{equ:pro_obj} \\ + \sum_{t \in \mathcal{T}} \sum_{m \in \omega_n} I_{n,m} |p^{\text{p2p}}_{n,m,t} | + \lambda^{CO_2} e^{CO_2}_n \notag \end{flalign} \begin{flalign} \text{s.t.} & \sum_{i\in \mathcal{G}} p_{i,n,t} - D_{n, t} + \sum_{i\in \mathcal{S}}(p^{\text{out}}_{i,n,t} - p^{\text{in}}_{i,n,t}) \notag \\ & = \sum_{m \in \omega_n} p^{\text{p2p}}_{n,m,t} , \forall t \in \mathcal{T} \label{equ:pro_balance} \\ & 0 \le p_{i,n,t} \le E_{i,n,t} (k_{i,n} + K_{i,n}) , \forall i \in \mathcal{G}, \forall t \in \mathcal{T} \label{equ:pro_gen_limit} \\ & soc_{i,n,t} = soc_{i,n,t-1} + H^{in}_{i} p^{\text{in}}_{i,n,t} - \frac{1}{H^{out}_{i}} p^{\text{out}}_{i,n,t}, \notag\\ & \forall i \in \mathcal{S}, \forall t \in \mathcal{T} \label{equ:pro_storage_begin} \\ & 0 \le soc_{i,n,t} \le k^{\text{storage}}_{i,n} + K^{\text{storage}}_{i,n} , \forall i \in \mathcal{S}, \forall t \in \mathcal{T} \label{equ:cm4} \\ & 0 \le p^{\text{out}}_{i,n,t} \le k_{i,n} + K_{i,n}, \forall i \in \mathcal{S}, \forall t \in \mathcal{T} \\ & 0 \le p^{\text{in}}_{i,n,t} \le k_{i,n} + K_{i,n}, \forall i \in \mathcal{S}, \forall t \in \mathcal{T} \label{equ:pro_storage_end} \\ & e^{CO_2}_n = W_i \sum_{t \in \mathcal{T}} \sum_{i \in \text{$\mathcal{R}$}} p_{i,n,t} \label{equ:pro_CO2} \end{flalign} \end{subequations} } (\ref{equ:pro_balance}) is the nodal energy balance constraint. On the left-hand side of the equation is the net energy that can be traded, and on the right-hand is the sum of the bilateral trades. (\ref{equ:pro_gen_limit}) indicates that the energy production is constrained by the efficiency $E_{i,n,t}$ (capacity factor in case of variable renewable energy) and the capacity of the generation technologies. 
Here, $K_{i,n}$ is the existing capacity, $k_{i,n}$ is the capacity to be expanded, essentially making the model a generation expansion model. (\ref{equ:pro_storage_begin}) - (\ref{equ:pro_storage_end}) are the storage constraints, as described in \cite{Wang2020b}. (\ref{equ:pro_CO2}) shows that the amount of emissions is equal to the required carbon permits. \subsection{TSO} The role of the TSO is two-fold. On the one hand, it ensures the feasibility of the energy flows and, accordingly, invests in transmission network capacity cost-optimally. On the other hand, it harvests congestion rents by trading $z^{\text{p2p}}_{n,m,t}$ on the electricity market as a spatial arbitrager. As a result, the objective function (\ref{equ:tso_obj}) minimizes the total cost, which incorporates the costs pertaining to these two roles. The decision variables of the TSO are represented by the set $\Gamma^{\text{TSO}}$, which includes the investment capacity $k_l$ in line $l$, the trades $z^{\text{p2p}}_{n,m,t}$ from $n$ to $m$ at time step $t$ and the energy flow $f_{l,t}$ in line $l$ at time step $t$. \begin{subequations} \begin{flalign} \min_{\Gamma^{\text{TSO}}} g - \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \lambda^{\text{grid}}_{n,m,t} z^{\text{p2p}}_{n,m,t} \label{equ:tso_obj} \end{flalign} \begin{flalign} \text{s.t.} & f_{l,t} = \sum_{n \in \mathcal{N}} PTDF_{l,n} \sum_{m \in \omega_n} z^{\text{p2p}}_{n,m,t}, \forall l \in \mathcal{L}, \forall t \in \mathcal{T} \label{equ:tso_ptdf} \\ & - (k_l + K_l) \le f_{l,t} \le k_l + K_l, \forall l \in \mathcal{L}, \forall t \in \mathcal{T} \label{equ:tso_limit} \end{flalign} \end{subequations} The energy flow is modeled using DC power flow equations. In (\ref{equ:tso_ptdf}), the flow $f_{l,t}$ is calculated based on the Power Transfer Distribution Factors (PTDF) matrix and the total net injection at $n$. Besides this formulation, other formulations of the DC power flow can also be used. Moreover, AC power flows might be considered as well, e.g., \cite{Moret2020} studied the network losses in joint transmission and distribution P2P markets, which can be used to complement our framework to consider network operators on both the transmission and distribution level. (\ref{equ:tso_limit}) indicates the thermal limits of the energy flows, where $k_l$ refers to the transmission capacity expansion. \subsection{P2P market operator} The P2P energy market operator clears the market at each time step $t$ by minimizing the energy imbalances, and thus determines the corresponding prices. The set of decision variables $\Gamma^{\text{market}}$ includes the trading price $\lambda^{\text{p2p}}_{n,m,t}$ from $n$ to $m$ at time step $t$ and the grid price $\lambda^{\text{grid}}_{n,m,t}$ from $n$ to $m$ at time step $t$. It ensures that the bilateral trades are equal in quantity and that the trading energy from the prosumer $p^{\text{p2p}}_{n,m,t}$ is equal to the arbitraging energy from the TSO $z^{\text{p2p}}_{n,m,t}$ at each time step $t$. \begin{flalign} \min_{\Gamma^{\text{market}}} \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \lambda^{\text{p2p}}_{n,m,t} (p^{\text{p2p}}_{n,m,t} + p^{\text{p2p}}_{m,n,t}) \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \lambda^{\text{grid}}_{n,m,t} (p^{\text{p2p}}_{n,m,t} -z^{\text{p2p}}_{n,m,t}) \end{flalign} \subsection{Carbon market operator} We consider a cap-and-trade system as the carbon policy.
The government determines the maximum amount of emissions that is allowed to be emitted, which is regarded as the cap on the carbon market. All prosumers need to buy carbon permits from the carbon market that are equivalent to their emissions. The price $\lambda^{CO_2}$ will be determined by the carbon market, whose operator solves the following problem. \begin{flalign} \min_{\lambda^{CO_2}} \lambda^{CO_2} (\sum_{n \in \mathcal{N}} e^{CO_2}_n - \text{CAP}^{CO_2}) \end{flalign} \subsection{Equivalent planning optimization problem} After laying down the optimization problems of the prosumers, the TSO, the energy market operator and the carbon market operator, one can see that, with this formulation, the decision variables of one problem only appear in the objective functions of the others and not in their constraints, which avoids a generalized Nash equilibrium. We are then able to find the Nash equilibrium by deriving the Karush–Kuhn–Tucker (KKT) conditions of these problems. The equivalent optimization problem can be written as follows: {\allowdisplaybreaks \begin{subequations} \begin{flalign} \min_{\Gamma} \sum_{n \in \mathcal{N}} f_n + g + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} I_{n,m} |p^{\text{p2p}}_{n,m,t} | \label{equ:p2p_cen_obj} \end{flalign} \begin{flalign} \text{s.t.} & (\ref{equ:pro_balance}) - (\ref{equ:pro_CO2}), \forall n \in \mathcal{N}\\ & (\ref{equ:tso_ptdf}) - (\ref{equ:tso_limit}) \\ & p^{\text{p2p}}_{n,m,t} = - p^{\text{p2p}}_{m,n,t}, \forall n \in \mathcal{N}, \forall m \in \omega_n, \forall t\in \mathcal{T} \text{: } \lambda^{\text{p2p}}_{n,m,t} \label{equ:mo_reciprocity} \\ & p^{\text{p2p}}_{n,m,t} = z^{\text{p2p}}_{n,m,t}, \forall n \in \mathcal{N}, \forall m \in \omega_n,\forall t\in \mathcal{T} \text{: } \lambda^{\text{grid}}_{n,m,t} \label{equ:mo_grid} \\ & \sum_{n \in \mathcal{N}} e^{CO_2}_n = \text{CAP}^{CO_2} \text{: } \lambda^{CO_2} \label{equ:mo_co2} \end{flalign} \end{subequations} } The decision variables belong to the set $\Gamma$, which includes all the decision variables of the prosumers and the TSO. The objective function (\ref{equ:p2p_cen_obj}) is the summation of the objective functions of all the agents. The constraints gather all the constraints of the agents' problems. Note that (\ref{equ:mo_reciprocity}) and (\ref{equ:mo_grid}) are KKT conditions of the optimization problem of the energy market operator. (\ref{equ:mo_reciprocity}) is the reciprocity constraint, stating that the bilateral trades must be equal in quantity, where the dual variable $\lambda^{\text{p2p}}_{n,m,t}$ is the trading price. (\ref{equ:mo_grid}) is the energy balance constraint at $n$, equalizing the bilateral trade $p^{\text{p2p}}_{n,m,t}$ of the prosumer and the arbitraging energy $z^{\text{p2p}}_{n,m,t}$ of the TSO, where the dual variable is the grid price for this trade. (\ref{equ:mo_co2}) gives the cap on all carbon emissions, with the dual variable $\lambda^{CO_2}$ being the carbon price. \subsection{Distributed solving formulation using ADMM} ADMM is a distributed optimization algorithm that has gained popularity in recent years for its simple formulation and guaranteed convergence. It is widely used as a solution technique in P2P markets. Although different algorithms can be used to solve the planning optimization problem (\ref{equ:p2p_cen_obj}), we choose to employ ADMM here because it fits better in the proposed prosumer-centric framework, as it allows decentralized decision-making.
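Before deriving the Lagrangian and the formal updates, the following minimal Python sketch illustrates the iteration pattern that the remainder of this subsection formalizes: local subproblem solves by the prosumers and the TSO, followed by price updates by the energy and carbon market operators. All names, dimensions, penalty values and the stand-in local solvers are ours and purely illustrative; they are not the LPs formulated above.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, T = 3, 4                              # toy numbers of prosumers and time steps
CAP_CO2 = 10.0                           # toy emission cap
Q_P2P, Q_GRID, Q_CO2 = 1.0, 1.0, 0.1     # penalty parameters of the relaxed constraints

lam_p2p  = np.zeros((N, N, T))           # trading prices
lam_grid = np.zeros((N, N, T))           # grid prices
lam_co2  = 0.0                           # carbon price

def solve_prosumer(n, lam_p2p, lam_grid, lam_co2):
    # Stand-in for the prosumer subproblem; returns toy bilateral trades and permits.
    p_p2p = rng.normal(size=(N, T)) * 0.1
    e_co2 = max(0.0, 3.0 - 0.5 * lam_co2)
    return p_p2p, e_co2

def solve_tso(lam_grid):
    # Stand-in for the TSO subproblem; returns toy arbitrage trades.
    return rng.normal(size=(N, N, T)) * 0.1

for k in range(50):                      # ADMM iterations
    p, e = np.zeros((N, N, T)), np.zeros(N)
    for n in range(N):                   # local prosumer updates (parallelizable)
        p[n], e[n] = solve_prosumer(n, lam_p2p, lam_grid, lam_co2)
    z = solve_tso(lam_grid)              # local TSO update
    # market operators update the prices from the residuals of the coupling constraints
    lam_p2p  += Q_P2P  * (p + p.transpose(1, 0, 2))  # reciprocity: p_nmt + p_mnt -> 0
    lam_grid += Q_GRID * (p - z)                     # grid:        p_nmt - z_nmt -> 0
    lam_co2  += Q_CO2  * (e.sum() - CAP_CO2)         # carbon:      sum_n e_n -> CAP
\end{verbatim}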
The Lagrangian of (\ref{equ:p2p_cen_obj}) is formulated in (\ref{equ:p2p_admm_L}). Compared to (\ref{equ:p2p_cen_obj}), the coupling constraints (\ref{equ:mo_reciprocity})-(\ref{equ:mo_co2}), i.e. the constraints that contain decision variables of more than one agent, are relaxed. Three quadratic terms with the penalty parameters $Q^{\text{p2p}}_{n,m,t}$, $Q^{\text{grid}}_{n,m,t}$ and $Q^{CO_2}$ are added to ensure convergence. {\allowdisplaybreaks \begin{subequations} \begin{flalign} \mathcal{L}(\Gamma^{\text{ps}}_n, \Gamma^{\text{TSO}}) = \sum_{n \in \mathcal{N}} f_n + g \notag + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} I_{n,m} |p^{\text{p2p}}_{n,m,t} | \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \lambda^{\text{p2p}}_{n,m,t} (p^{\text{p2p}}_{n,m,t} + p^{\text{p2p}}_{m,n,t}) \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \frac{Q^{\text{p2p}}_{n,m,t}}{2} (p^{\text{p2p}}_{n,m,t} + p^{\text{p2p}}_{m,n,t})^2 \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \lambda^{\text{grid}}_{n,m,t} (p^{\text{p2p}}_{n,m,t} - z^{\text{p2p}}_{n,m,t}) \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \frac{Q^{\text{grid}}_{n,m,t}}{2} (p^{\text{p2p}}_{n,m,t} - z^{\text{p2p}}_{n,m,t})^2 \notag \\ + \lambda^{CO_2} (\sum_{n \in \mathcal{N}} e^{CO_2}_n - \text{CAP}^{CO_2}) \notag \\ + \frac{Q^{CO_2}}{2} (\sum_{n \in \mathcal{N}} e^{CO_2}_n - \text{CAP}^{CO_2})^2 \label{equ:p2p_admm_L} \end{flalign} The ADMM algorithm is then formulated such that the decision-making of the four types of agents, i.e. the prosumers, the TSO, the energy market operator and the carbon market operator, is done locally. More specifically, in (\ref{equ:p2p_admm_pro}) and (\ref{equ:p2p_admm_tso}), the prosumers and the TSO solve their local optimization problems, respectively. Then the energy market operator updates the trade price and the grid price in (\ref{equ:p2p_admm_trade}) and (\ref{equ:p2p_admm_grid}), and the carbon market operator updates the carbon price in (\ref{equ:p2p_admm_co2}). The problems are solved in an iterative manner, where $k$ is the iteration index. \begin{flalign} \Gamma^{\text{ps}(k+1)}_n = \text{argmin}_{\Gamma^{\text{ps}}_n} \mathcal{L}(\Gamma^{\text{ps}}_n, \Gamma^{\text{TSO}(k)}), \forall n \in \mathcal{N} \label{equ:p2p_admm_pro} \\ \text{s.t. } (\ref{equ:pro_balance}) - (\ref{equ:pro_CO2}) \notag \\ \Gamma^{\text{TSO}(k+1)} = \text{argmin}_{\Gamma^{\text{TSO}}} \mathcal{L}(\Gamma^{\text{ps}(k)}_n, \Gamma^{\text{TSO}}) \label{equ:p2p_admm_tso}\\ \text{s.t. 
} (\ref{equ:tso_ptdf}) - (\ref{equ:tso_limit}) \notag \\ \lambda^{\text{p2p}(k+1)}_{n,m,t} = \lambda^{\text{p2p}(k)}_{n,m,t} + Q^{\text{p2p}}_{n,m,t} (p^{\text{p2p}(k+1)}_{n,m,t} + p^{\text{p2p}(k+1)}_{m,n,t}), \notag \label{equ:p2p_admm_trade}\\ \forall t \in \mathcal{T}, \forall n \in \mathcal{N}, \forall m \in \omega_n \\ \lambda^{\text{grid}(k+1)}_{n,m,t} = \lambda^{\text{grid}(k)}_{n,m,t} + Q^{\text{grid}}_{n,m,t} (p^{\text{p2p}(k+1)}_{n,m,t} - z^{\text{p2p}(k+1)}_{n,m,t}), \notag \label{equ:p2p_admm_grid} \\ \forall t \in \mathcal{T}, \forall n \in \mathcal{N}, \forall m \in \omega_n \\ \lambda^{{CO_2}(k+1)} = \lambda^{{CO_2}(k)} + Q^{CO_2} (\sum_{n \in \mathcal{N}} e^{{CO_2}(k+1)}_n - \text{CAP}^{CO_2}) \label{equ:p2p_admm_co2} \end{flalign} \end{subequations} } \section{Pool market} \label{sec:pool} Since the key contribution of this work is to integrate different electricity market designs into the planning problems, in this section, we turn to another market design, i.e. the pool market. As we did for the P2P market, we first formulate the optimization problems of the four types of agents, then give the equivalent centralized optimization problem and finally describe the ADMM algorithm for distributed optimization. \subsection{Prosumer} {\allowdisplaybreaks In the pool market, the main difference with the P2P market is that there are no bilateral trades; instead, energy is traded through a pool. The decision variables of prosumer $n$ in the pool market are similar to those in the P2P market, only without $p^{\text{p2p}}_{n,m,t}$. The objective function of the prosumer $n$ is to minimize its total cost consisting of $f_n$ and the trading-related costs. The net power injection of prosumer $n$ is traded in the pool, under a nodal electricity price $\lambda^{\text{pool}}_{n,t}$ at time step $t$. \begin{subequations} \begin{flalign} \min_{\Gamma^{\text{ps}}_n} f_n & + \sum_{t \in \mathcal{T}} \lambda^{\text{pool}}_{n,t} (\sum_{i\in \mathcal{G}} p_{i,n,t} - D_{n, t}+ \sum_{i\in \mathcal{S}}p^{\text{out}}_{i,n,t} - \sum_{i\in \mathcal{S}}p^{\text{in}}_{i,n,t}) \notag \\ & + \lambda^{CO_2} e^{CO_2}_n \label{equ:nodal_pro_obj} \end{flalign} \begin{flalign} \text{s.t.} (\ref{equ:pro_gen_limit}) - (\ref{equ:pro_CO2}) \end{flalign} \end{subequations} This optimization problem has the same constraints as the prosumer problem in the P2P market regarding generation limits, storage and carbon permits. However, the energy balance constraint is removed, since the net power injection is used directly in the objective function. \subsection{TSO} \begin{subequations} The TSO has the same role as in the P2P market. It acts as a spatial arbitrager, ensures the feasibility of the energy flows and makes network investments accordingly. However, instead of $z^{\text{p2p}}_{n, m, t}$, the TSO now only trades $z^{\text{pool}}_{n, t}$ at $n$ at time step $t$ at the price $\lambda^{\text{pool}}_{n,t}$. Its set of decision variables $\Gamma^{\text{TSO}}$ now includes the investment capacity $k_l$, the traded energy $z^{\text{pool}}_{n, t}$ and the energy flow $f_{l,t}$.
\begin{flalign} \min_{\Gamma^{\text{TSO}}} g - \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \lambda^{\text{pool}}_{n,t} z^{\text{pool}}_{n, t} \label{equ:nodal_tso_obj} \end{flalign} \begin{flalign} \text{s.t.} & \sum_{n \in \mathcal{N}} z^{\text{pool}}_{n, t} = 0, \forall t \in \mathcal{T} \label{equ:nodal_tso_balance} \\ & f_{l,t} = \sum_{n \in \mathcal{N}} \text{PTDF}_{l,n} z^{\text{pool}}_{n, t} , \forall l \in \mathcal{L}, \forall t \in \mathcal{T} \label{equ:nodal_tso_flow}\\ & - (k_l + K_l) \le f_{l,t} \le k_l + K_l, \forall l \in \mathcal{L}, \forall t \in \mathcal{T} \label{equ:nodal_tso_limit} \end{flalign} \end{subequations} Since the TSO is only a spatial arbitrager, (\ref{equ:nodal_tso_balance}) indicates that its total energy trades at $t$ should be equal to zero. (\ref{equ:nodal_tso_flow}) and (\ref{equ:nodal_tso_limit}) show the DC power flow calculation using the PTDF matrix and the thermal limits of the lines, respectively. \subsection{Pool market operator} The role of the energy market operator is to guarantee the energy balance at each node $n$ and to derive the nodal price. Its set of decision variables $\Gamma^{\text{pool}}$ now only contains $\lambda^{\text{pool}}_{n,t}$, which is the price for prosumer $n$ at time step $t$. It solves the following optimization problem. \begin{flalign} \min_{\Gamma^{\text{pool}}} \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \lambda^{\text{pool}}_{n,t} & (\sum_{i\in \mathcal{G}} p_{i,n,t} - D_{n, t} + \sum_{i\in \mathcal{S}}p^{\text{out}}_{i,n,t} - \sum_{i\in \mathcal{S}}p^{\text{in}}_{i,n,t} \notag \\ &- z^{\text{pool}}_{n, t}) \label{equ:nodal_market_obj} \end{flalign} \subsection{Equivalent planning optimization problems} The carbon market operator's problem is identical to the problem under the P2P market design. Similarly to the derivation of the equivalent planning optimization problem for the P2P market, the centralized optimization problem for the pool market can be derived as well. The solution to this problem provides the generation and transmission investment equilibrium under the pool electricity market. The objective function is a summation of (\ref{equ:nodal_pro_obj}), (\ref{equ:nodal_tso_obj}) and (\ref{equ:nodal_market_obj}). Note that this objective function is the same as the commonly-used objective functions in energy system planning models. In the literature, as we explained in Section \ref{sec:pre}, a direct split of the costs is often used. However, neither this objective function alone nor this way of splitting costs reveals the real costs for each agent, i.e. those including the trading-related costs. Only after the objective functions of each agent have been formulated does it become clear that they sum up to this objective (with offsetting trading costs). Due to our prosumer-centric focus, we derive this relationship so that the solutions provide more insight from the agents' perspectives rather than only from the system perspective.
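To make the offsetting of the trading-related terms concrete before stating the equivalent problem, the following toy numerical check (all numbers, names and the weight of each term are illustrative and chosen by us) verifies that, once the pool trades clear, the prosumers' pool payments and the TSO's congestion rent cancel, leaving only $\sum_{n \in \mathcal{N}} f_n + g$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, T = 4, 6
f_n = rng.uniform(1.0, 5.0, size=N)              # toy planning costs of the prosumers
g = 2.0                                          # toy TSO network cost
lam_pool = rng.uniform(10.0, 50.0, size=(N, T))  # arbitrary nodal prices
net_inj = rng.normal(size=(N, T))
net_inj -= net_inj.mean(axis=0)                  # system-wide balance at every time step
z_pool = net_inj                                 # clearing: TSO arbitrage = net injection

prosumer_obj = f_n.sum() + (lam_pool * net_inj).sum()  # planning costs + pool payments
tso_obj = g - (lam_pool * z_pool).sum()                # network cost - congestion rent
print(np.isclose(prosumer_obj + tso_obj, f_n.sum() + g))  # True: trading terms cancel
\end{verbatim}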
\begin{subequations} \begin{flalign} \min_{\Gamma} \sum_{n \in \mathcal{N}} f_n + g \label{equ:pool_cen_obj} \end{flalign} \begin{flalign} \text{s.t.} & (\ref{equ:pro_gen_limit}) - (\ref{equ:pro_CO2}), \forall n \in \mathcal{N} \label{equ:nodal_central_pro} \\ & (\ref{equ:nodal_tso_balance}) - (\ref{equ:nodal_tso_limit}) \label{equ:nodal_central_tso}\\ & \sum_{i\in \mathcal{G}} p_{i,n,t} - D_{n, t} + \sum_{i\in \mathcal{S}}(p^{\text{out}}_{i,n,t} - p^{\text{in}}_{i,n,t}) = z^{\text{pool}}_{n, t}, \notag \\ & \forall n \in \mathcal{N}, \forall t \in \mathcal{T} \text{: }\lambda^{\text{pool}}_{n,t} \label{equ:nodal_central_balance} \\ & \sum_{n \in \mathcal{N}} e^{CO_2}_n = \text{CAP}^{CO_2} \text{: } \lambda^{CO_2} \label{equ:pool_cen_co2} \end{flalign} \end{subequations} The set of decision variables of this problem $\Gamma$ includes the decision variables of all the prosumers and the TSO. (\ref{equ:nodal_central_pro}) and (\ref{equ:nodal_central_tso}) are the constraints from the prosumers and the TSO, respectively. (\ref{equ:nodal_central_balance}) ensures that the energy is balanced for each node $n$ at time step $t$, i.e. the energy traded by the prosumer equals the energy arbitraged by the TSO. The dual variable of this constraint is the nodal energy price. Note that here, differently from the P2P market, there is no separate grid price to pay, as it is part of the nodal price. \subsection{Distributed solving formulation using ADMM} The Lagrangian of the centralized planning problem (\ref{equ:pool_cen_obj}) is given in (\ref{equ:pool_admm_L}). The two coupling constraints (\ref{equ:nodal_central_balance}) and (\ref{equ:pool_cen_co2}) are relaxed, in which the penalty parameters $Q^{\text{pool}}_{n,t}, Q^{CO_2}$ are associated with the quadratic terms. {\allowdisplaybreaks \begin{subequations} \begin{flalign} \mathcal{L}(\Gamma^{\text{ps}}_n, \Gamma^{\text{TSO}}) & = \sum_{n \in \mathcal{N}} f_n + g \notag \\ & + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \lambda^{\text{pool}}_{n,t} (\sum_{i\in \mathcal{G}} p_{i,n,t} - D_{n, t} \notag \\ & + \sum_{i\in \mathcal{S}}p^{\text{out}}_{i,n,t} - \sum_{i\in \mathcal{S}}p^{\text{in}}_{i,n,t} - z^{\text{pool}}_{n, t}) \notag \\ & + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \frac{Q^{\text{pool}}_{n,t}}{2} (\sum_{i\in \mathcal{G}} p_{i,n,t} - D_{n, t} \notag \\ & + \sum_{i\in \mathcal{S}}p^{\text{out}}_{i,n,t} - \sum_{i\in \mathcal{S}}p^{\text{in}}_{i,n,t} - z^{\text{pool}}_{n, t})^2 \notag \\ & + \lambda^{CO_2} (\sum_{n \in \mathcal{N}} e^{CO_2}_n - \text{CAP}^{CO_2}) \notag \\ & + \frac{Q^{CO_2}}{2} (\sum_{n \in \mathcal{N}} e^{CO_2}_n - \text{CAP}^{CO_2})^2 \label{equ:pool_admm_L} \end{flalign} The following ADMM algorithm features the local decision-making of the prosumers and the TSO: their solutions are sent to the market operators to update the prices, which are then returned to the prosumers and the TSO for the next iteration $k$, until all problems converge. \begin{flalign} \Gamma^{\text{ps}(k+1)}_n = \text{argmin}_{\Gamma^{\text{ps}}_n} \mathcal{L}(\Gamma^{\text{ps}}_n, \Gamma^{\text{TSO}(k)}), \forall n \in \mathcal{N} \\ \text{s.t. } (\ref{equ:pro_gen_limit}) - (\ref{equ:pro_CO2}) \notag \\ \Gamma^{\text{TSO}(k+1)} = \text{argmin}_{\Gamma^{\text{TSO}}} \mathcal{L}(\Gamma^{\text{ps}(k)}_n, \Gamma^{\text{TSO}})\\ \text{s.t. 
} (\ref{equ:nodal_tso_balance}) - (\ref{equ:nodal_tso_limit}) \notag \\ \lambda^{\text{pool}(k+1)}_{n,t} = \lambda^{\text{pool}(k)}_{n,t} + Q^{\text{pool}}_{n,t} (\sum_{i\in \mathcal{G}} p_{i,n,t}^{(k+1)} - D_{n, t} + \sum_{i\in \mathcal{S}} p^{\text{out}(k+1)}_{i,n,t} \notag \\ - \sum_{i\in \mathcal{S}}p^{\text{in}(k+1)}_{i,n,t} - z^{\text{pool}(k+1)}_{n, t}), \forall n \in \mathcal{N}, \forall t \in \mathcal{T} \\ \lambda^{{CO_2}(k+1)} = \lambda^{{CO_2}(k)} + Q^{CO_2} (\sum_{n \in \mathcal{N}} e^{{CO_2}(k+1)}_n - \text{CAP}^{CO_2}) \end{flalign} \end{subequations} } \section{Mixed bilateral/pool market} \label{sec:mix} In this section, we integrate a mixed bilateral/pool electricity market into the planning model. First, the optimization problems of all the agents are formulated. Then, an equivalent centralized optimization model is given. Finally, an ADMM algorithm is presented. \subsection{Prosumer} The prosumer $n$ aims to minimize the total costs that consist of $f_n$ and trading-related costs. Here, the trading-related costs in the energy markets include those in both the P2P market and the pool market. Consequently, the decision variables now include all those in the prosumer problems of both markets. In particular, prosumer $n$ can trade bilaterally, represented by $p^{\text{bi}}_{n,m,t}$, but also in the pool, represented by $p^{\text{pool}}_{n,t}$, the energy traded in the pool market by prosumer $n$ at time step $t$. \begin{subequations} \begin{flalign} \min_{\Gamma^{\text{ps}}_n} & f_n + \sum_{t \in \mathcal{T}} \sum_{m \in \omega_n}(\lambda^{\text{bi}}_{n,m,t} + \lambda^{\text{grid}}_{n,m,t}) p^{\text{bi}}_{n,m,t} \notag \\ & + \sum_{t \in \mathcal{T}} \sum_{m \in \omega_n} I_{n,m} |p^{\text{bi}}_{n,m,t}| + \sum_{t \in \mathcal{T}} \lambda^{\text{pool}}_{n,t} p^{\text{pool}}_{n,t} + \lambda^{CO_2} e^{CO_2}_n \end{flalign} \begin{flalign} \text{s.t.} & \Phi_n (\sum_{i\in \mathcal{G}} p_{i,n,t} - D_{n, t} + \sum_{i\in \mathcal{S}}p^{\text{out}}_{i,n,t} - \sum_{i\in \mathcal{S}}p^{\text{in}}_{i,n,t}) \notag \\ & = \sum_{m \in \omega_n} p^{\text{bi}}_{n,m,t}, \forall t \in \mathcal{T} \label{equ:mix_pro_balance1} \\ & (1 - \Phi_n) (\sum_{i\in \mathcal{G}} p_{i,n,t} - D_{n, t} + \sum_{i\in \mathcal{S}}p^{\text{out}}_{i,n,t} - \sum_{i\in \mathcal{S}}p^{\text{in}}_{i,n,t}) \notag \\ & = p^{\text{pool}}_{n,t}, \forall t \in \mathcal{T} \label{equ:mix_pro_balance2} \\ & (\ref{equ:pro_gen_limit}) - (\ref{equ:pro_CO2}) \label{equ:mix_pro_others} \end{flalign} \end{subequations} (\ref{equ:mix_pro_balance1}) and (\ref{equ:mix_pro_balance2}) are both energy balance constraints. The net power injection $\sum_{i\in \mathcal{G}} p_{i,n,t} - D_{n, t} + \sum_{i\in \mathcal{S}}(p^{\text{out}}_{i,n,t} - p^{\text{in}}_{i,n,t})$ is divided into two parts: one part for the bilateral trading and the other part for the pool-based trading. $\Phi_n$ is a parameter between 0 and 1 that is determined by prosumer $n$ itself, indicating the share of its net energy that $n$ would like to trade bilaterally; the rest is traded in the pool. In other words, the prosumer has to decide ex-ante how much to trade in the P2P market and in the pool market. (\ref{equ:mix_pro_others}) gives the technical constraints on generation, storage and carbon permits. \subsection{TSO} In addition to the investment cost $g$, the TSO receives the congestion rent from both electricity markets. The objective function is given in (\ref{equ:mix_tso_obj}).
(\ref{equ:mix_tso_ptdf}) and (\ref{equ:mix_tso_limit}) ensure the feasibility of the flows. \begin{subequations} \begin{flalign} \min_{\Gamma^{\text{TSO}}} g - \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}}( \sum_{m \in \omega_n} \lambda^{\text{grid}}_{n,m,t} z^{\text{bi}}_{n,m,t} + \lambda^{\text{pool}}_{n,t} z^{\text{pool}}_{n, t} ) \label{equ:mix_tso_obj} \end{flalign} \begin{flalign} \text{s.t.} & f_{l,t} = \sum_{n \in \mathcal{N}} PTDF_{l,n} (\sum_{m \in \omega_n} z^{\text{bi}}_{n,m,t} + z^{\text{pool}}_{n, t}), \forall l \in \mathcal{L}, \forall t \in \mathcal{T} \label{equ:mix_tso_ptdf} \\ & - (k_l + K_l) \le f_{l,t} \le k_l + K_l, \forall l \in \mathcal{L}, \forall t \in \mathcal{T} \label{equ:mix_tso_limit} \end{flalign} \end{subequations} \subsection{Bilateral market operator} The bilateral market operator ensures the energy balance for the bilateral trades and derives the associated energy and grid prices by solving the following optimization problem. Its set of decision variables $\Gamma^{\text{bi}}$ consists of the trading price $\lambda^{\text{bi}}_{n,m,t}$ and the grid price $\lambda^{\text{grid}}_{n,m,t}$ with regard to the bilateral trade from prosumer $n$ to prosumer $m$ at time step $t$. \begin{flalign} \min_{\Gamma^{\text{bi}}} & \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \lambda^{\text{bi}}_{n,m,t} (p^{\text{bi}}_{n,m,t} + p^{\text{bi}}_{m,n,t}) \notag \\ &+ \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \lambda^{\text{grid}}_{n,m,t} (p^{\text{bi}}_{n,m,t} -z^{\text{bi}}_{n,m,t}) \label{equ:mix_market_obj1} \end{flalign} \subsection{Pool market operator} The pool market operator has the same role as the bilateral market operator, but only for the energy that is traded in the pool. The following optimization problem is solved. Its set of decision variables $\Gamma^{\text{pool}}$ is the same as in the pool market. \begin{flalign} \min_{\Gamma^{\text{pool}}} \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \lambda^{\text{pool}}_{n,t} ( p^{\text{pool}}_{n,t}- z^{\text{pool}}_{n, t}) \label{equ:mix_market_obj2} \end{flalign} \subsection{Equivalent optimization problem} The carbon market operator's problem is still the same as the problem under the P2P and pool market designs. Up to now, we have been using the same methodology for planning models that integrate the P2P market and the pool market. Due to this consistency, we are able to further investigate the mixed market by formulating an equivalent optimization problem. The objective function (\ref{equ:mix_central_obj}) aggregates the objectives of all the agents and is found to be the same as (\ref{equ:p2p_cen_obj}), since all the trading within the pool adds up to zero.
{\allowdisplaybreaks \begin{subequations} \begin{flalign} \min_{\Gamma} \sum_{n \in \mathcal{N}} f_n + g + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} I_{n,m} |p^{\text{bi}}_{n,m,t}| \label{equ:mix_central_obj} \end{flalign} \begin{flalign} \text{s.t.} & (\ref{equ:mix_pro_balance1}) - (\ref{equ:mix_pro_others}), \forall n \in \mathcal{N} \label{equ:mix_central_constraint1} \\ & (\ref{equ:mix_tso_ptdf}) - (\ref{equ:mix_tso_limit}) \label{equ:mix_central_constraint2}\\ & p^{\text{bi}}_{n,m,t} = - p^{\text{bi}}_{m,n,t}, \forall n \in \mathcal{N}, \forall m \in \omega_n, \forall t\in T \text{: } \lambda^{\text{bi}}_{n,m,t} \label{equ:mix_central_constraint3} \\ & p^{\text{bi}}_{n,m,t} = z^{\text{bi}}_{n,m,t}, \forall n \in \mathcal{N}, \forall m \in \omega_n,\forall t\in T \text{: } \lambda^{\text{grid}}_{n,m,t} \label{equ:mix_central_constraint4}\\ & p^{\text{pool}}_{n,t} - z^{\text{pool}}_{n,t} = 0, \qquad \forall n \in \mathcal{N}, \forall t\in T \text{: } \lambda^{\text{pool}}_{n,t} \label{equ:mix_central_constraint5} \\ & \sum_{n \in \mathcal{N}} e^{CO_2}_n = \text{CAP}^{CO_2} \text{: } \lambda^{CO_2} \label{equ:mix_cen_co2} \end{flalign} \end{subequations} } Constraints (\ref{equ:mix_central_constraint1}) and (\ref{equ:mix_central_constraint2}) are the same as those of the prosumers and the TSO. (\ref{equ:mix_central_constraint3}) - (\ref{equ:mix_central_constraint5}) are the KKT conditions from the optimization problems of the bilateral market operator and the pool market operator, where the dual variables of the constraints are the respective prices. \subsection{Distributed solving formulation using ADMM} (\ref{equ:mix_admm_L}) is the Lagrangian of (\ref{equ:mix_central_obj}). Compared to the Lagrangians of problems for the P2P market and the pool market alone, four coupling constraints are relaxed due to the co-existence of the two electricity markets. {\allowdisplaybreaks \begin{subequations} \begin{flalign} \mathcal{L}(\Gamma^{\text{ps}}_n, \Gamma^{\text{TSO}}) = \sum_{n \in \mathcal{N}} f_n + g \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} I_{n,m} |p^{\text{bi}}_{n,m,t}| \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \lambda^{\text{bi}}_{n,m,t} (p^{\text{bi}}_{n,m,t} + p^{\text{bi}}_{m,n,t}) \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \frac{Q^{\text{bi}}_{n,m,t}}{2} (p^{\text{bi}}_{n,m,t} + p^{\text{bi}}_{m,n,t})^2 \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \lambda^{\text{grid}}_{n,m,t} (p^{\text{bi}}_{n,m,t} - z^{\text{bi}}_{n,m,t}) \notag \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \sum_{m \in \omega_n} \frac{Q^{\text{grid}}_{n,m,t}}{2} (p^{\text{bi}}_{n,m,t} - z^{\text{bi}}_{n,m,t})^2 \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \lambda^{\text{pool}}_{n,t} (p^{\text{pool}}_{n,t} - z^{\text{pool}}_{n, t}) \notag \\ + \sum_{t \in \mathcal{T}} \sum_{n \in \mathcal{N}} \frac{Q^{\text{pool}}_{n,t}}{2} (p^{\text{pool}}_{n,t} - z^{\text{pool}}_{n, t})^2 \notag \\ + \lambda^{CO_2} (\sum_{n \in \mathcal{N}} e^{CO_2}_n - \text{CAP}^{CO_2}) \notag \\ + \frac{Q^{CO_2}}{2} (\sum_{n \in \mathcal{N}} e^{CO_2}_n - \text{CAP}^{CO_2})^2 \label{equ:mix_admm_L} \end{flalign} The ADMM algorithm is given as follows. The prosumers and the TSO solve their local optimization problems, respectively. The prices are updated by the two energy market operators and the carbon market operator. 
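As an implementation note for the updates given next, convergence of the iteration can be monitored through the primal residuals of the four relaxed coupling constraints. The following Python sketch of such a stopping rule is illustrative only; the array names and tolerance are ours, not part of the model.

\begin{verbatim}
import numpy as np

def converged(p_bi, z_bi, p_pool, z_pool, e_co2, cap_co2, tol=1e-4):
    # Primal residuals of the four relaxed coupling constraints of the mixed market.
    r_recip = np.abs(p_bi + p_bi.transpose(1, 0, 2)).max()  # p_bi_nmt + p_bi_mnt = 0
    r_grid  = np.abs(p_bi - z_bi).max()                     # p_bi_nmt = z_bi_nmt
    r_pool  = np.abs(p_pool - z_pool).max()                 # p_pool_nt = z_pool_nt
    r_co2   = abs(float(np.sum(e_co2)) - cap_co2)           # sum_n e_n = CAP
    return max(r_recip, r_grid, r_pool, r_co2) < tol

N, T = 3, 4
zeros3, zeros2 = np.zeros((N, N, T)), np.zeros((N, T))
print(converged(zeros3, zeros3, zeros2, zeros2, np.ones(N), 3.0))  # True at a feasible point
\end{verbatim}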
\begin{flalign} \Gamma^{\text{ps}(k+1)}_n = \text{argmin}_{\Gamma^{\text{ps}}_n} \mathcal{L}(\Gamma^{\text{ps}}_n, \Gamma^{\text{TSO}(k)}), \forall n \in \mathcal{N} \\ \text{s.t. } (\ref{equ:mix_pro_balance1}) - (\ref{equ:mix_pro_others}) \notag \\ \Gamma^{\text{TSO}(k+1)} = \text{argmin}_{\Gamma^{\text{TSO}}} \mathcal{L}(\Gamma^{\text{ps}(k)}_n, \Gamma^{\text{TSO}})\\ \text{s.t. } (\ref{equ:mix_tso_ptdf}) - (\ref{equ:mix_tso_limit}) \notag \\ \lambda^{\text{bi}(k+1)}_{n,m,t} = \lambda^{\text{bi}(k)}_{n,m,t} + Q^{\text{bi}}_{n,m,t} (p^{\text{bi}(k+1)}_{n,m,t} + p^{\text{bi}(k+1)}_{m,n,t}), \notag\\ \forall t \in \mathcal{T}, \forall n \in \mathcal{N}, \forall m \in \omega_n \notag \\ \lambda^{\text{grid}(k+1)}_{n,m,t} = \lambda^{\text{grid}(k)}_{n,m,t} + Q^{\text{grid}}_{n,m,t} (p^{\text{bi}(k+1)}_{n,m,t} - z^{\text{bi}(k+1)}_{n,m,t}), \notag \\ \forall t \in \mathcal{T}, \forall n \in \mathcal{N}, \forall m \in \omega_n \notag \\ \lambda^{\text{pool}(k+1)}_{n,t} = \lambda^{\text{pool}(k)}_{n,t} + Q^{\text{pool}}_{n,t} (p^{\text{pool}(k+1)}_{n,t} - z^{\text{pool}(k+1)}_{n, t}), \notag \\ \forall n \in \mathcal{N}, \forall t \in \mathcal{T} \notag \\ \lambda^{{CO_2}(k+1)} = \lambda^{{CO_2}(k)} + Q^{CO_2} (\sum_{n \in \mathcal{N}} e^{{CO_2}(k+1)}_n - \text{CAP}^{CO_2}) \end{flalign} \end{subequations} } \section{Conclusion} \label{sec:conclusion} In this paper, a prosumer-centric framework for concurrent generation and transmission planning is presented. Here, we consider prosumers as a general notion for aggregated production and demand. In current power system planning models, the market environment is often overlooked. Therefore, we consider the planning within three different electricity market environments, i.e. the P2P market with product differentiation, the pool market and the mixed bilateral/pool market. We integrate the market models into the planning models, which combines and contributes to the two mainstream types of power system models, i.e. planning models and market models. In particular, one of our contributions is to consider the P2P market in a planning model, whereas previous studies focus only on the operational aspects. For each of the three markets, the following modelling flow is used. We first formulate the planning optimization problems for the different agents, i.e. the prosumers, the TSO, the electricity market operator and the carbon market operator, separately. This way of formulating the problem exposes the true costs incurred by the agents over the planning horizon, which include both the planning costs and the trading-related costs in the electricity market. After formulating the optimization problems of all the agents, an equivalent centralized optimization problem is derived and a distributed optimization algorithm is presented. Besides the P2P market, the pool market and the mixed bilateral/pool market are also included in the framework. These inclusions are particularly critical for two reasons. On the one hand, they allow investigating the influences of these markets on the planning decisions. On the other hand, as P2P markets are preparing to roll out, the electricity market design might change in the coming decades, while investment decisions are made now under current market designs. The framework helps to assess the uncertainties related to the changing market designs. In addition, a carbon market with a cap-and-trade system is integrated into this study.
It involves the government as an extra agent to ease the modeling of how carbon targets, which are high on the political agenda all over the world, affect the planning decisions. Part II of the paper continues with the numerical results from three archetypal case studies. \end{document}
\begin{document} \title[Maximum degree in WRT with bounded random weights]{Fine asymptotics for the maximum degree in weighted recursive trees with bounded random weights} \date{\today} \keywords{Weighted recursive graph, Random recursive graph, Uniform DAG, Maximum degree, Degree distribution, Random environment} \author[Eslava]{Laura Eslava$^\dagger$} \address{$^\dagger$Universidad Nacional Autonoma Mexico, Instituto de investigaciones en matematicas y en sistemas, CDMX, 04510, Mexico} \email{[email protected]} \author[Lodewijks]{Bas Lodewijks$^{\dagger\dagger}$} \address{$^{\dagger\dagger}$ Universit\'e Jean Monnet, Saint-Etienne, France and Institut Camille Jordan, Lyon and Saint-Etienne, France} \email{[email protected]} \author[Ortgiese]{Marcel Ortgiese$^\star$} \address{$^\star$Department of Mathematical Sciences, University of Bath, Claverton Down, Bath, BA2 7AY, United Kingdom.} \email{[email protected]} \begin{abstract} A weighted recursive tree is an evolving tree in which vertices are assigned random vertex-weights and new vertices connect to a predecessor with a probability proportional to its weight. Here, we study the maximum degree and near-maximum degrees in weighted recursive trees when the vertex-weights are almost surely bounded and their distribution function satisfies a mild regularity condition near zero. We are able to specify higher-order corrections to the first order growth of the maximum degree established in prior work. The accuracy of the results depends on the behaviour of the weight distribution near the largest possible value and in certain cases we manage to find the corrections up to random order. Additionally, we describe the tail distribution of the maximum degree, the distribution of the number of vertices attaining the maximum degree, and establish asymptotic normality of the number of vertices with near-maximum degree. Our analysis extends the results proved for random recursive trees (where the weights are constant) to the case of random weights. The main technical result shows that the degrees of several uniformly chosen vertices are asymptotically independent with explicit error corrections. \end{abstract} \maketitle \section{Introduction} The Weighted Recursive Tree model (WRT), first introduced by Borovkov and Vatutin~\cite{BorVat06}, is a recursive tree process $(T_n,n\in\mathbb{N})$ and a generalisation of the random recursive tree model. Here we consider a variation, first studied by Hiesmayr and I\c slak~\cite{HieIsl17}, where the first vertex does not necessarily have weight one. Let $(W_i)_{i\in\mathbb{N}}$ be a sequence of positive vertex-weights. Initialise the process with the tree $T_1$, which consists of the vertex $1$ (which denotes the root) and assign vertex-weight $W_1$ to it. Recursively, at every step $n\geq 2$, we obtain $T_n$ by adding to $T_{n-1}$ the vertex $n$, assigning vertex-weight $W_n$ to it and connecting $n$ to a vertex $i\in[n-1]$, which, conditionally on the vertex-weights $W_1,\ldots, W_{n-1}$, is selected with a probability proportional to $W_i$. In this paper, we consider edges to be directed towards the vertex with the smaller label. We note that allowing every vertex to connect to $m\in\mathbb{N}$ many predecessors, each one selected independently, yields the more general Weighted Recursive Graph model (WRG) introduced in~\cite{LodOrt21}.
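For readers who wish to experiment with this construction, the following short Python sketch (ours, purely illustrative; the toy weight distribution, the sample size and all names are our own choices) simulates the recursive attachment rule just described and records the in-degrees of $T_n$; the final line compares the observed maximum in-degree with the first-order growth discussed below.

\begin{verbatim}
import numpy as np

def simulate_wrt(n, weights, rng=np.random.default_rng(1)):
    # weights[1..n] are the vertex-weights W_1,...,W_n; index 0 is unused.
    parent = np.zeros(n + 1, dtype=int)      # parent[v] = predecessor of vertex v
    indeg = np.zeros(n + 1, dtype=int)       # in-degrees of T_n
    s = weights[1]                           # running partial sum S_{v-1}
    for v in range(2, n + 1):
        probs = weights[1:v] / s             # attachment probabilities W_i / S_{v-1}
        parent[v] = rng.choice(np.arange(1, v), p=probs)
        indeg[parent[v]] += 1
        s += weights[v]
    return parent, indeg

n = 2000
rng = np.random.default_rng(0)
W = np.concatenate(([0.0], rng.uniform(0.2, 1.0, n)))  # toy bounded weights with sup = 1
_, indeg = simulate_wrt(n, W)
theta = 1.0 + W[1:].mean()                   # sample estimate of theta = 1 + E[W]
print(indeg.max(), np.log(n) / np.log(theta))  # max in-degree vs. first-order prediction
\end{verbatim}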
The focus of this paper is the WRT model in the case when the vertex-weights are \mathrm{e}mph{almost surely bounded random variables}. Lodewijks and Ortgiese~\cite{LodOrt21} established that, in the case of positive, bounded random vertex-weights, the maximum degree $\Delta_n$ of the WRG model grows logarithmically and that $\Delta_n/\log n\toas 1/\log \theta_m$, where $\theta_m:=1+\E W/m$ with $\E W$ the mean of the vertex-weight distribution and $m\in\mathbb{N}$ the out-degree of each vertex. Note that setting $m=1$ yields the result for the WRT model. In this paper, we improve this result by describing the higher-order asymptotic behaviour of the maximum degree when the vertex-weights are almost surely bounded. In this case we are able to distinguish several classes of vertex-weight distributions for which different higher-order behaviour can be observed. Beyond the initial work of Borovkov and Vatutin and also Hiesmayr and I\c slak studying the height, depth and size of branches of the WRT model, other properties such as the degree distribution, large and maximum degrees, and weighted profile and height of the tree have been studied. Mailler and Uribe Bravo~\cite{MaiBra19}, as well as S\'enizergues~\cite{Sen19} and S\'enizergues and Pain~\cite{SenPain21} study the weighted profile and height of the WRT model. Mailler and Uribe Bravo consider random vertex-weights with particular distributions, whereas S\'enizergues and Pain allow for a more general model with both sequences of deterministic as well as random weights. Iyer~\cite{Iyer20} and the more general work by Fountoulakis and Iyer~\cite{FouIyer21} study the degree distribution of a large class of evolving weighted random trees, and Lodewijks and Ortgiese~\cite{LodOrt21} study the degree distribution of the WRG model. In both cases, the WRT model is a particular example of the models studied and all results prove the existence of an almost sure limiting degree distribution for the empirical degree distribution. Finally, Lodewijks and Ortgiese~\cite{LodOrt21} and Lodewijks~\cite{Lod21} study the maximum degree and the labels of the maximum degree vertices of the WRG model for a large range of vertex-weight distributions. In particular, a distinction between distributions with unbounded support and bounded support is observed. In the former case the behaviour and size of the label of the maximum degree is mainly controlled by a balance of vertices being old (i.e.\ having a small label) and having a large vertex-weight. In the latter case, due to the fact that the vertex-weights are bounded, the behaviour is instead controlled by a balance of vertices being old and having a degree which significantly exceeds their mean degree. A particular case of the WRT model is the Random Recursive Tree (RRT) model, which is obtained when each vertex-weight equals one almost surely. As a result, techniques used to study the maximum degree in the RRT model can be adapted to analyse the maximum degree in the WRT model. Lodewijks and Ortgiese~\cite{LodOrt21} demonstrate this by adapting the approach of Devroye and Lu~\cite{DevLu95} for proving the almost sure convergence of the rescaled maximum degree in the Directed Acyclic Graphs model (DAG) (the multigraph case of the RRT model) and using it for the analysis of the maximum degree in the WRG model, as discussed above. Hence, we survey the development of the properties of the maximum degree of the RRT model. 
Szyma\'nski was the first to study the maximum degree of the RRT model and proved convergence of its mean: $\E{\Delta_n/\log n}\to 1/\log 2$. Later, Devroye and Lu~\cite{DevLu95} extended this to almost sure convergence and to the DAG model as well. Goh and Schmutz~\cite{GohSch02} showed that $\Delta_n-\lfloor \log_2 n\rfloor$ converges in distribution along suitable subsequences and identified possible distributions for the limit. Addario-Berry and Eslava~\cite{AddEsl18} provide a precise characterisation of the subsequential limiting distribution of rescaled large degrees in terms of a Poisson point process as well as a central limit theorem result for near-maximum degrees (of order $\log_2 n- i_n$ where $i_n\to\infty,i_n=o(\log n)$). Eslava~\cite{Esl16} extends this to the joint convergence of the degree and depth of high degree vertices. In this paper we adapt part of the techniques developed by Addario-Berry and Eslava in~\cite{AddEsl18}. They consist of two main components: First, they establish an equivalence between the RRT model and a variation of the Kingman $n$-coalescent and use this to provide a detailed asymptotic description of the tail distribution of the degrees of $k$ vertices selected uniformly at random, for any $k\in\mathbb{N}$. This variation of the Kingman $n$-coalescent is a process which starts with $n$ trees, each consisting of only a single root. Then, at every step $1$ through $n-1$, a pair of roots is selected uniformly at random and, independently of this selection and with probability $1/2$ for each possibility, one of the two roots is connected to the other with a directed edge. This reduces the number of trees by one and, after $n-1$ steps, yields a directed tree. It turns out that this directed tree is equal in law to the random recursive tree. In the $n$-coalescent all $n$ roots in the initialisation are equal in law and the degrees of the vertices are exchangeable. This allows Addario-Berry and Eslava to obtain the degree tail distribution with a precise error rate. Second, this precise tail distribution is used to obtain joint factorial moments of the quantities \be \ba\label{eq:xnirrt} X^{(n)}_i&:=|\{j\in[n]:\Zm_n(j)= \lfloor \log_2 n\rfloor +i\}|,\quad i\in\mathbb{Z},\\ X^{(n)}_{\geq i}&:=|\{j\in[n]:\Zm_n(j)\geq \lfloor \log_2 n\rfloor +i\}|,\quad i\in\mathbb{Z}, \ea\ee where $\Zm_n(j)$ denotes the in-degree of vertex $j$ in the tree of size $n$. The joint factorial moments of these $X^{(n)}_i,X^{(n)}_{\geq i}$ are used to identify the limiting distribution of high degrees in the tree. The sub-sequential convergence, as mentioned above, is due to the floor function applied to $\log_2 n$ and the integer-valued in-degrees $\Zm_n(j)$. For the WRT model, however, it provides no advantage to construct a `weighted' Kingman $n$-coalescent to obtain precise asymptotic expressions for the tail distribution of vertex degrees. As pairs of roots in the Kingman $n$-coalescent are selected uniformly at random and hence the roots are equal in law, it is not necessary to keep track of which roots are selected at what step. In a weighted version of the Kingman $n$-coalescent, pairs of roots would have to be selected with probabilities proportional to their weights, so that it is necessary to record which roots are selected at which step. As a result, a weighted Kingman $n$-coalescent is not (more) useful in analysing the tail distribution of vertex degrees.
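As a small aside for readers who want to experiment with the (unweighted) coalescent variant described above, the following Python sketch is ours and purely illustrative (the function name and all parameters are our own choices); it produces the directed tree which, as stated above, is equal in law to a random recursive tree.

\begin{verbatim}
import random

def kingman_rrt(n, rng=random.Random(0)):
    roots = list(range(1, n + 1))       # each vertex starts as the root of its own tree
    parent = {}                         # parent[v] = head of the directed edge leaving v
    for _ in range(n - 1):
        a, b = rng.sample(roots, 2)     # pair of roots chosen uniformly at random
        if rng.random() < 0.5:          # fair coin, independent of the chosen pair
            a, b = b, a
        parent[a] = b                   # direct the edge a -> b; a stops being a root
        roots.remove(a)
    return parent

print(kingman_rrt(6))
\end{verbatim}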
Instead, we improve results on the convergence of the empirical degree distribution of the WRT model obtained by Iyer~\cite{Iyer20} and Lodewijks and Ortgiese~\cite{LodOrt21}. We obtain a convergence rate to the limiting degree distribution, the asymptotic empirical degree distribution for degrees $k=k(n)$ which diverge with $n$, as well as asymptotic independence of degrees of vertices selected uniformly at random. We combine this with the joint factorial moments of quantities similar to~\eqref{eq:xnirrt} and use the techniques developed by Addario-Berry and Eslava~\cite{AddEsl18} to derive fine asymptotics of the maximum degree in the WRT model. \textbf{Notation.} Throughout the paper we use the following notation: we let $\mathbb{N}:=\{1,2,\ldots\}$ be the natural numbers, set $\mathbb{N}_0:=\{0,1,\ldots\}$ to include zero and let $[t]:=\{i\in\mathbb{N}: i\leq t\}$ for any $t\geq 1$. For $x\in\mathbb{R}$, we let $\lceil x\rceil:=\inf\{n\in\mathbb{Z}: n\geq x\}$ and $\lfloor x\rfloor:=\sup\{n\in\mathbb{Z}: n\leq x\}$, and for $x\in\mathbb{R},k\in\mathbb{N}$, let $(x)_k:=x(x-1)(x-2)\cdots(x-(k-1))$ and $(x)^{(k)}:=x(x+1)(x+2)\cdots(x+(k-1))$, and $(x)_0=(x)^{(0)}:=1$. For $x,y \in \mathbb{R}$, we write $x\wedge y := \min \{ x,y\}$ and $x \vee y := \max\{ x, y\}$. Moreover, for sequences $(a_n,b_n)_{n\in\mathbb{N}}$ such that $b_n$ is positive for all $n$ we say that $a_n=o(b_n), a_n\sim b_n, a_n=\mathcal{O}(b_n)$ if $\lim_{n\to\infty}a_n/b_n=0, \lim_{n\to\infty} a_n/b_n=1$ and if there exists a constant $C>0$ such that $|a_n|\leq Cb_n$ for all $n\in\mathbb{N}$, respectively. For random variables $X,(X_n)_{n\in\mathbb{N}},Y$ we write $X_n\toindis X, X_n\toinp X$ and $X_n\toas X$ for convergence in distribution, convergence in probability and almost sure convergence of $X_n$ to $X$, respectively. Also, we write $X_n=o_\mathbb{P}(1)$ if $X_n \toinp 0$ and $X\preceq Y$ if $Y$ stochastically dominates $X$. Finally, we use the conditional probability measure $\Pf{\cdot}:=\mathbb{P}(\,\cdot\, |(W_i)_{i\in\mathbb{N}})$ and conditional expectation $\Ef{}{\cdot}:=\E{\,\cdot\,|(W_i)_{i\in\mathbb{N}}}$, where the $(W_i)_{i\in\mathbb{N}}$ are the i.i.d.\ vertex-weights of the WRT model. \section{Definitions and main results}\label{sec:results} The weighted recursive tree (WRT) model is a growing random tree model that generalises the random recursive tree (RRT), in which vertices are assigned (random) weights and new vertices connect with existing vertices with a probability proportional to the vertex-weights. The definition of the WRT model follows the one in~\cite{HieIsl17}: \begin{definition}[Weighted Recursive Tree]\label{def:WRT} Let $(W_i)_{i\geq 1}$ be a sequence of i.i.d.\ copies of a positive random variable $W$ such that $\mathbb{P}{W>0}=1$ and set \be S_n:=\sum_{i=1}^nW_i. \ee We construct the \emph{weighted recursive tree} as follows: \begin{enumerate} \item[1)] Initialise the tree with a single vertex $1$, denoted as the root, and assign to the root a vertex-weight $W_1$. Denote this tree by $T_1$. \item[2)] For $n\geq 1$, introduce a new vertex $n+1$ and assign to it the vertex-weight $W_{n+1}$. Conditionally on $T_n$, connect it to some $i\in[n]$ with probability $W_i/S_n$. Denote the resulting tree by $T_{n+1}$. \end{enumerate} We treat $T_n$ as a directed tree, where edges are directed from new vertices towards old vertices.
\end{definition} \begin{remark}\label{remark:def} $(i)$ Note that the edge connection probabilities are invariant under a rescaling of the vertex-weights. In particular, we may without loss of generality assume for vertex-weight distributions with bounded support that $x_0:=\sup\{x\in\mathbb{R}\,|\, \mathbb{P}{W\leq x}<1\}=1$. $(ii)$ The model can be adapted to allow for a \emph{random out-degree}: at every step the newly introduced vertex $n+1$ connects with \emph{every} vertex $i\in[n]$ \emph{independently} with a probability equal to $W_i/S_n$. The results presented below still hold for this model definition as well. \end{remark} Lodewijks and Ortgiese studied certain properties of the Weighted Recursive Graph (WRG) model in~\cite{LodOrt21}. This is a more general version of the WRT model that allows every vertex to connect to $m\in\mathbb{N}$ vertices when introduced, yielding a multigraph when $m>1$. This paper aims to recover and extend some of these results in the tree case ($m=1$) when the vertex-weights are almost surely bounded, i.e.\ $x_0<\infty$. As stated in Remark~\ref{remark:def}$(i)$, we can set $x_0=1$ without loss of generality. To formulate the results we need to assume that the distribution of the weights is sufficiently regular, allowing us to control their extreme value behaviour. In certain cases it is more convenient to formulate the assumptions in terms of the distribution of the random variable $(1-W)^{-1}$: \begin{assumption}[Vertex-weight distribution]\label{ass:weights} The vertex-weights $W,(W_i)_{i\in\mathbb{N}}$ are i.i.d.\ strictly positive random variables, which satisfy: \begin{itemize} \item[\namedlabel{ass:weightsup}{$($C$1)$}] The random variable $W$ takes values in $(0,1]$ and $x_0:=\sup\{x\in \mathbb{R}|\mathbb{P}{W\leq x}<1\}=1$. \item[\namedlabel{ass:weightzero}{$($C$2)$}] There exist $C,\rho>0,x_0\in(0,1)$ such that for all $x\in[0,x_0]$ it holds that $\mathbb{P}{W \leq x} \leq C x^\rho .$ \end{itemize} Furthermore, the vertex-weights satisfy one of the following conditions: \begin{enumerate}[labelindent = 1cm, leftmargin = 2.2cm] \item[\namedlabel{ass:weightatom}{$($\textbf{Atom}$)$}] The vertex weights follow a distribution that has an atom at one, i.e. there exists a $q_0\in(0,1]$ such that $\mathbb{P}{W=1}=q_0$. (Note that $q_0=1$ recovers the RRT model) \item[\namedlabel{ass:weightweibull}{$($\textbf{Weibull}$)$}] The vertex-weights follow a distribution that belongs to the Weibull maximum domain of attraction (MDA). This implies that there exist some $\alpha>1$ and a positive function $\ell$ which is slowly varying at infinity, such that \be \hspace{2cm}\mathbb{P}{W\geq 1-1/x}=\mathbb{P}{(1-W)^{-1}\geq x}=\ell(x)x^{-(\alpha-1)},\qquad x>0. \ee \item[\namedlabel{ass:weightgumbel}{$($\textbf{Gumbel}$)$}] The distribution belongs to the Gumbel maximum domain of attraction (MDA) (and $x_0=1$). This implies that there exist sequences $(a_n,b_n)_{n\in\mathbb{N}}$, such that \be \frac{\max_{i\in[n]}W_i-b_n}{a_n}\toindis \Lambda, \ee where $\Lambda$ is a Gumbel random variable.
\noindent Within this class, we further distinguish the following two sub-classes: \begin{enumerate}[labelindent = 1cm, leftmargin = 1.3cm] \item[\namedlabel{ass:weighttaufin}{$($\textbf{RV}$)$}] There exist $a,c_1,\tau>0$, and $b\in\mathbb{R}$ such that \[ \hspace{1.5cm}\mathbb{P}{W>1-1/x}=\mathbb{P}{ (1-W)^{-1} > x} \sim a x^b \mathrm{e}^{-( x/c_1)^\tau} \quad \mbox{as } x \rightarrow \infty .\] \item[\namedlabel{ass:weighttau0}{$($\textbf{RaV}$)$}] There exist $a,c_1>0, b\in\mathbb{R},$ and $\tau>1$ such that \[ \hspace{1.5cm}\mathbb{P}{W>1-1/x}=\mathbb{P}{ (1-W)^{-1} > x} \sim a (\log x)^b \mathrm{e}^{- (\log (x)/c_1)^\tau} \quad \mbox{as } x \rightarrow \infty .\] \end{enumerate} \end{enumerate} \end{assumption} \begin{remark} Condition~\ref{ass:weightzero} is required only for a very specific part of the proof of Proposition~\ref{lemma:degprobasymp}. Though we were unable to omit this assumption, it covers a wide range of vertex-weight distributions. Still, we believe it is a mere technicality that can be overcome. \end{remark} Throughout, we will write \[ \Zm_n(i) := \mbox{ in-degree of vertex } i \mbox{ in } T_n . \] Working with the in-degree allows us to (in principle) generalise our methods to graphs with random out-degree, as mentioned in Remark~\ref{remark:def}. Obviously, if the out-degree is fixed, we can recover the results for the degree from our results on the $\Zm_n(i)$. In~\cite{LodOrt21}, the following results are obtained for the WRG model: if we let $\theta_m:=1+\E{W}/m$, \be\label{eq:pk} p_k(m):=\E{\frac{\theta_m-1}{\theta_m-1+W}\Big(\frac{W}{\theta_m-1+W}\Big)^k},\ p_{\geq k}(m):=\sum_{j=k}^\infty p_j(m) =\E{\Big(\frac{W}{\theta_m-1+W}\Big)^k}, \ee then almost surely for any $k\in\mathbb{N}$ fixed, \be \label{eq:pkconv} \lim_{n\to\infty}\frac 1n \sum_{i=1}^n \ind_{\{\Zm_n(i)=k\}}=p_k(m)\qquad \text{and} \qquad \lim_{n\to\infty}\frac 1n\sum_{i=1}^n \ind_{\{\Zm_n(i)\geq k\}}=p_{\geq k}(m), \ee whenever $W$ follows a distribution with a finite mean. In particular, the above is satisfied for all cases in Assumption~\ref{ass:weights}. Moreover, if the vertex-weights are bounded almost surely (without loss of generality $x_0=1$), \be \label{eq:bddmaxconv} \max_{i\in[n]}\frac{\Zm_n(i)}{\log_{\theta_m}n}\toas 1. \ee In this paper we improve these results when considering the WRT model with almost surely bounded weights. That is, we consider the case $m=1$. For ease of writing, we let \be \theta:=\theta_1=1+\E W\text{ and }p_k:=p_k(1),p_{\geq k}:=p_{\geq k}(1). \ee First, we are able to extend the result in~\eqref{eq:pkconv} to the case when $k=k(n)$ diverges with $n$ in the sense that the difference between both sides converges to zero in mean, under certain constraints on $k(n)$, and we obtain a convergence rate as well. Combining this result with techniques developed by Addario-Berry and Eslava in~\cite{AddEsl18} for random recursive trees we are then able to identify the higher-order asymptotic behaviour of the maximum degree depending on the cases in Assumption~\ref{ass:weights}. Additionally, in certain cases we are able to derive an asymptotic tail distribution for the maximum degree and obtain an asymptotic normality result for the number of vertices with `near-maximal' degrees. These results can be extended to the model with a \emph{random out-degree} as mentioned in Remark~\ref{remark:def} as well.
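As a quick numerical illustration of the limiting probabilities in~\eqref{eq:pk} for $m=1$, the following Python sketch (the Beta weight distribution, the sample size and all names are our own choices, purely illustrative) estimates $p_k$ and $p_{\geq k}$ by Monte Carlo and verifies the identity $p_{\geq k}-p_{\geq k+1}=p_k$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
W = rng.beta(2.0, 1.0, size=200_000)    # toy bounded weights with essential supremum 1
theta = 1.0 + W.mean()                  # theta = 1 + E[W] (sample estimate)

def p_geq(k):                           # p_{>= k} = E[(W / (theta - 1 + W))^k]
    return np.mean((W / (theta - 1.0 + W)) ** k)

def p_eq(k):                            # p_k = E[(theta-1)/(theta-1+W) (W/(theta-1+W))^k]
    return np.mean((theta - 1.0) / (theta - 1.0 + W) * (W / (theta - 1.0 + W)) ** k)

for k in (0, 1, 5, 10):
    print(k, p_geq(k) - p_geq(k + 1), p_eq(k))   # the two columns agree for every k
\end{verbatim}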
Recall that $\theta:=1+\E W$ and define \be\ba \label{eq:xni} X^{(n)}_i&:=|\{j\in[n]:\Zm_n(j)= \lfloor \log_\theta n\rfloor +i\}|,\\ X^{(n)}_{\geq i}&:=|\{j\in[n]:\Zm_n(j)\geq \lfloor \log_\theta n\rfloor +i\}|. \ea\ee For certain classes of vertex-weight distributions, we can prove the distributional convergence of these quantities along subsequences, as is the case for the RRT model in~\cite{AddEsl18}. This result can be formulated in terms of convergence of point processes. Let $\mathbb{Z}^*:=\mathbb{Z}\cup \{\infty\}$ and endow $\mathbb{Z}^*$ with the metric $d(i,j)=|2^{-i}-2^{-j}|$ and $d(i,\infty)=2^{-i}$ for any $i,j\in\mathbb{Z}$, so that $[i,\infty]$ is a compact set. Following the notation in~\cite{DaleyVere-JonesII}, let $\mathcal M^\#_{\mathbb{Z}^*}$ be the space of boundedly finite measures on $\mathbb{Z}^*$ (which in this case corresponds to locally finite measures) equipped with the vague topology. If we let $\mathcal P$ be a Poisson point process on $\mathbb{R}$ with intensity measure $\lambda(\mathrm{d} x):=q_0\theta^{-x}\log \theta\, \mathrm{d} x,q_0\in(0,1]$, and define \be \label{eq:peps} \mathcal P^\varepsilon:=\sum_{x\in \mathcal P}\delta_{\lfloor x+\varepsilon\rfloor},\qquad \mathcal P^{(n)}:=\sum_{i\in[n]}\delta_{\Zm_n(i)-\lfloor \log_\theta n\rfloor},\quad \text{ and } \quad \varepsilon_n:=\log_\theta n-\lfloor \log_\theta n\rfloor, \ee then $\mathcal P^\varepsilon$ and $\mathcal P^{(n)}$ are random elements in $\mathcal M^\#_{\mathbb{Z}^*}$ and we can provide conditions such that $\mathcal P^{(n_\ell)}$ converges weakly to $\mathcal P^\varepsilon$, for subsequences $(n_\ell)_{\ell\in\mathbb{N}}$ such that $\varepsilon_{n_\ell}\to\varepsilon$ as $\ell\to\infty$. We abuse notation to write $\mathcal P^\varepsilon(i)=\mathcal P^\varepsilon(\{i\})=|\{x\in\mathcal P: \lfloor x+\varepsilon\rfloor =i\}|=|\{x\in\mathcal P: x\in [i-\varepsilon,i+1-\varepsilon)\}|$. The choice of the metric on $\mathbb{Z}^*$ is convenient since, by Theorem 11.1.VII of~\cite{DaleyVere-JonesII}, weak convergence in $\mathcal M^\#_{\mathbb{Z}^*}$ is equivalent to convergence of finite dimensional distributions, see Definition 11.1.IV of~\cite{DaleyVere-JonesII}. In particular, the weak convergence of $\cP^{(n_\ell)}$ implies the convergence in distribution of $X_{\ge i}^{(n_\ell)} = \cP^{(n_\ell)}[i,\infty)$. We now state our main results, which we split into several theorems based on the cases in Assumption~\ref{ass:weights}. \begin{theorem}[High degrees in WRTs, \ref{ass:weightatom} case]\label{thrm:mainatom} Consider the WRT model in Definition~\ref{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ that satisfy the \ref{ass:weightatom} case in Assumption~\ref{ass:weights} for some $q_0\in(0,1]$. Fix $\varepsilon\in[0,1]$. Let $(n_\ell)_{\ell\in\mathbb{N}}$ be a positive integer sequence such that $\varepsilon_{n_\ell}\to \varepsilon$ as $\ell\to\infty$. Then $\mathcal P^{(n_\ell)}$ converges weakly in $\mathcal M^\#_{\mathbb{Z}^*}$ to $\mathcal P^\varepsilon$ as $\ell\to \infty$.
Equivalently, for any $i<i'\in\mathbb{Z}$, jointly as $\mathrm{e}ll\to \infty$, \be (X^{(n_\mathrm{e}ll)}_i,X^{(n_\mathrm{e}ll)}_{i+1},\ldots,X^{(n_\mathrm{e}ll)}_{i'-1},X^{(n_\mathrm{e}ll)}_{\geq i'})\toindis (\mathcal P^{\rm var}\ epsilon(i),\mathcal P^{\rm var}\ epsilon(i+1),\ldots,\mathcal P^{\rm var}\ epsilon (i'-1),\mathcal P^{\rm var}\ epsilon([i',\infty)). \mathrm{e}e \mathrm{e}nd{theorem} We note that this result recovers and extends~\cite[Theorem $1.2$]{AddEsl18}, in which an equivalent result is presented which only holds for the random recursive tree, i.e.\ the particular case of the weighted recursive tree in which $q_0:=\mathbb{P}{W=1}=1$. In Theorem~{\rm Re}f{thrm:mainatom} we allow $q_0\in(0,1)$ as well, under the additional condition~{\rm Re}f{ass:weightzero}. When the vertex-weight distribution belongs to the Weibull MDA, we can prove convergence in probability under a deterministic second-order scaling, but are unable to obtain what we conjecture to be a random third-order term similar to the result in Theorem~{\rm Re}f{thrm:mainatom}: \begin{theorem}[High degrees in WRTs, {\rm Re}f{ass:weightweibull} case]\label{thrm:mainweibull} Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ that satisfy the {\rm Re}f{ass:weightweibull} case in Assumption~{\rm Re}f{ass:weights} for some $\alpha>1$. Then, \be \max_{i\in[n]}\frac{\Zm_n(i)-\log_\theta n}{\log_\theta\log_\theta n}\toinp -(\alpha-1). \mathrm{e}e \mathrm{e}nd{theorem} Finally, when the vertex-weight distribution belongs to the Gumbel MDA, we have similar results compared to the Weibull MDA case in the above theorem. Here we are also able to obtain a deterministic second-order scaling for both the~{\rm Re}f{ass:weighttaufin} and~{\rm Re}f{ass:weighttau0} sub-cases, as well as a third- and fourth-order scaling for the~{\rm Re}f{ass:weighttau0} sub-case: \begin{theorem}[High degrees in WRTs, {\rm Re}f{ass:weightgumbel} case]\label{thrm:maingumbel} Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ that satisfy the {\rm Re}f{ass:weightgumbel} case in Assumption~{\rm Re}f{ass:weights}.\\ In the~{\rm Re}f{ass:weighttaufin} sub-case, let $\gamma:=1/(1+\tau)$. Then, \be \label{eq:gumbrv2nd} \max_{i\in[n]}\frac{\Zm_n(i)-\log_\theta n}{(\log_\theta n)^{1-\gamma}}\toinp -\frac{\tau^\gamma}{(1-\gamma)\log \theta}\Big(\frac{1-\theta^{-1}}{c_1}\Big)^{1-\gamma}=:-C_{\theta,\tau,c_1}. \mathrm{e}e In the~{\rm Re}f{ass:weighttau0} sub-case, \be \label{eq:gumbrav2nd} \max_{i\in[n]}\frac{\Zm_n(i)-\log_\theta n+C_1(\log_\theta\log_\theta n)^\tau-C_2(\log_\theta \log_\theta n)^{\tau-1}\log_\theta \log_\theta\log_\theta n}{(\log_\theta\log_\theta n)^{\tau-1}}\toinp C_3, \mathrm{e}e where \be\ba \label{eq:c123} C_1&:=(\log \theta )^{\tau-1}c_1^{-\tau},\qquad C_2:=(\log \theta )^{\tau-1}\tau(\tau-1)c_1^{-\tau}, \\ C_3&:=\big(\log_\theta(\log\theta) (\tau-1)\log\theta-\log(\mathrm{e} c_1^\tau(1-\theta^{-1})/\tau)\big)(\log \theta )^{\tau-2}\tau c_1^{-\tau}. \mathrm{e}a \mathrm{e}e \mathrm{e}nd{theorem} We see that only in the~{\rm Re}f{ass:weightatom} case we are able to obtain the higher-order asymptotics up to random order. This is due to the fact that, in this particular case, the vertices with high degree all have vertex-weight one. In the other classes covered in Theorems~{\rm Re}f{thrm:mainweibull} and~{\rm Re}f{thrm:maingumbel} vertices with high degrees have a vertex-weight close to one, which causes their degrees to grow slightly slower. 
This results in the higher-order asymptotics as observed in these theorems. We are able to obtain more precise results related to the maximum and near-maximum degree vertices in the~{\rm Re}f{ass:weightatom} case as well, which again recover and extend the results in~\cite{AddEsl18}. \begin{theorem}[Asymptotic tail distribution for maximum degree in~{\rm Re}f{ass:weightatom} case]\label{thrm:maxtail} $\,$ \\ Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ that satisfy the {\rm Re}f{ass:weightatom} case in Assumption~{\rm Re}f{ass:weights} for some $q_0\in(0,1]$ and recall ${\rm var}\ epsilon_n$ from~\mathrm{e}qref{eq:peps}. Then, for any integer-valued $i_n=i(n)$ with $i_n+\log_\theta n <(\theta/(\theta-1)) \log n$ and $\liminf_{n\to\infty}i_n>-\infty$, \be \mathbb{P}{\max_{j\in[n]}\mathbb{Z}m_n(j)\geq \lfloor \log_\theta n\rfloor +i_n}=\big(1-\mathrm{e}xp(-q_0\theta^{-i_n+{\rm var}\ epsilon_n})\big)(1+o(1)). \mathrm{e}e Moreover, let $\mathcal{M}_n\subseteq [n]$ denote the $($random$)$ set of vertices that attain the maximum degree in $T_n$, fix ${\rm var}\ epsilon\in[0,1]$, and let $(n_\mathrm{e}ll)_{\mathrm{e}ll\in\mathbb{N}}$ be a positive integer sequence such that ${\rm var}\ epsilon_{n_\mathrm{e}ll}\to{\rm var}\ epsilon$ as $\mathrm{e}ll\to\infty$. Then, $|\mathcal{M}_{n_\mathrm{e}ll}|\toindis M_{\rm var}\ epsilon$, where $M_{\rm var}\ epsilon$ has distribution \be\label{eq:meps} \mathbb{P}{M_{\rm var}\ epsilon=k}=\sum_{j\in\mathbb{Z}}\frac{1}{k!}\big(q_0(1-\theta^{-1})\theta^{-j+{\rm var}\ epsilon}\big)^k \mathrm{e}^{-q_0\theta^{-j+{\rm var}\ epsilon}},\qquad k\in\mathbb{N}. \mathrm{e}e \mathrm{e}nd{theorem} Finally, we establish an asymptotic normality result for the number of vertices which have `near-maximum' degrees. For a precise definition of `near-maximum', we define sequences $(s_k,r_k)_{k\in\mathbb{N}}$ as \be\ba \label{eq:ek} s_k&:=\inf\big\{x\in(0,1): \mathbb{P}{W\in(x,1)}\leq \mathrm{e}xp(-(1-\theta^{-1})(1-x)k)\big\},\\ r_k&:=\mathrm{e}xp(-(1-\theta^{-1})(1-s_k)k). \mathrm{e}a\mathrm{e}e As a result, $r_k$ can be used as the error term in the asymptotic expression of $p_{\geq k}$ (as in~\mathrm{e}qref{eq:pk}) when the weight distribution satisfies the~{\rm Re}f{ass:weightatom} case (see Theorem~{\rm Re}f{thrm:pkasymp}) and is essential in quantifying how much smaller `near-maximum' degrees are relative to the maximum degree of the graph in this case. We note that $r_k$ is decreasing and converges to zero with $k$ (see Lemma~{\rm Re}f{lemma:rk}), and that in the definition of $s_k$ and $r_k$ we can allow the index to be continuous rather than just an integer (the proof of Lemma~{\rm Re}f{lemma:rk} can be adapted to still hold in this case). We can then formulate the following theorem: \begin{theorem}[Asymptotic normality of near-maximum degree vertices, {\rm Re}f{ass:weightatom} case]\label{thrm:asympnormal} $\,$ \\ Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ that satisfy the {\rm Re}f{ass:weightatom} case in Assumption~{\rm Re}f{ass:weights} for some $q_0\in(0,1]$. Then, for an integer-valued $i_n=i(n)\to-\infty$ such that $i_n=o(\log n \wedge |\log r_{\log_\theta n}|)$, \be \frac{X^{(n)}_{i_n}-q_0(1-\theta^{-1})\theta^{-i_n+{\rm var}\ epsilon_n}}{\sqrt{q_0(1-\theta^{-1})\theta^{-i_n+{\rm var}\ epsilon_n}}}\toindis N(0,1). 
\mathrm{e}e \mathrm{e}nd{theorem} \begin{remark}\label{rem:asympnorm} The constraint $i_n=o(\log n\wedge \log r_{\log_\theta n})$ can be simplified by providing more information on the tail of the weight distribution. Only when $W$ has an atom at one and support bounded away from one do we have that $o(\log n\wedge \log r_{\log_\theta n})=o(\log n)$. That is, when there exists an $s\in(0,1)$ such that $\mathbb{P}{W\in(s,1)}=0$. In that case, we can set $s_k= s$ and $r_k= \mathrm{e}xp(-(1-\theta^{-1})(1-s)k)$ for all $k$ large, so that \be \log r_{ \log_\theta n}= -(1-\theta^{-1})(1-s)\log_\theta n, \mathrm{e}e so that indeed $o(\log n\wedge \log r_{\log_\theta n})=o(\log n)$. In all other cases it follows that $s_k\uparrow 1$, so that $\log r_{\log_\theta n}=o(\log n)$ and the constraint simplifies to $i_n=o(\log r_{\log_\theta n})$. \mathrm{e}nd{remark} \textbf{Outline of the paper}\\ In Section~{\rm Re}f{sec:overview} we provide a short overview and intuitive idea of the proofs of Theorems~{\rm Re}f{thrm:mainatom},~{\rm Re}f{thrm:mainweibull}, {\rm Re}f{thrm:maingumbel},~{\rm Re}f{thrm:maxtail} and~{\rm Re}f{thrm:asympnormal}. In Section~{\rm Re}f{sec:exres} we discuss two examples of vertex-weight distributions which satisfy the~{\rm Re}f{ass:weightweibull} and~{\rm Re}f{ass:weightgumbel} cases, respectively, for which more precise results can be obtained. We then provide the key concepts and results that are used in the proofs of the main theorems in Section~{\rm Re}f{sec:taildeg}. We use these results to prove the main theorems in Section~{\rm Re}f{sec:mainproof}. Finally, in Section~{\rm Re}f{sec:ex} we provide the necessary techniques and results, comparable to what is presented in Section {\rm Re}f{sec:taildeg}, to prove the statements regarding the examples of Section {\rm Re}f{sec:exres}. \section{Intuitive idea of (the proof of) the main theorems}\label{sec:overview} We provide a short intuitive idea as to why the results stated in Section~{\rm Re}f{sec:results} hold. The main elements in obtaining a more precise understanding of the behaviour of the maximum degree of the WRT are the following: \begin{enumerate} \item[\namedlabel{item:i}{$(i)$}] A precise expression of the tail distribution of the in-degree of uniformly at random selected vertices $(v_\mathrm{e}ll)_{\mathrm{e}ll\in[k]}$, for any $k\in\mathbb{N}$. That is, \be \mathbb{P}{\mathbb{Z}m_n(v_\mathrm{e}ll)\geq m_\mathrm{e}ll\text{ for all } \mathrm{e}ll\in[k]}=\prod_{\mathrm{e}ll=1}^k p_{\geq m_\mathrm{e}ll}(1+o(n^{-\beta})), \mathrm{e}e for some $\beta>0$ and where the $m_\mathrm{e}ll\in\mathbb{N}$ are such that $m_\mathrm{e}ll\leq c \log n$ for some $c\in(0,\theta/(\theta-1))$. This extends~\mathrm{e}qref{eq:pkconv} in the sense of convergence in mean to $k\in\mathbb{N}$ many uniformly at random selected vertices rather than just one, and allows the $m_\mathrm{e}ll$ to grow with $n$ rather than being fixed. Moreover, the error term $1+o(n^{-\beta})$ extends previously known results as well, for which no convergence rate was known. \item[\namedlabel{item:ii}{$(ii)$}] The asymptotic behaviour of $p_{\geq k}$, as defined in~\mathrm{e}qref{eq:pk}, as $k\to\infty$ for each case in Assumption~{\rm Re}f{ass:weights}. \mathrm{e}nd{enumerate} Element {\rm Re}f{item:i}, which is proved in Proposition~{\rm Re}f{lemma:degprobasymp}, allows us to obtain bounds on the probability of the event $\{\max_{j\in[n]}\Zm_n(j)\geq k_n\}$ for any sequence $k_n\to\infty $ as $n\to\infty$. 
These probabilities can be expressed in terms of $n p_{\geq k_n}$ (using union bounds and the second moment method in the form of the Chung-Erd\H os inequality, as is shown in Lemma~{\rm Re}f{lemma:maxdegwhp}). By~{\rm Re}f{item:ii} we can then precisely quantify $k_n$ such that these bounds either tend to zero or one, which implies whether $\{\max_{j\in[n]}\Zm_n(i)\geq k_n\}$ does or does not hold with high probability. This is the main approach for Theorems~{\rm Re}f{thrm:mainweibull} and~{\rm Re}f{thrm:maingumbel}. To obtain the random limits described in terms of the Poisson process $\mathcal P^{\rm var}\ epsilon$, as in Theorem~{\rm Re}f{thrm:mainatom}, we use a similar approach as in~\cite{AddEsl18}. Both~{\rm Re}f{item:i} and~{\rm Re}f{item:ii} are still essential, but are now used to obtain factorial moments of the quantities $X^{(n)}_i$ and $X^{(n)}_{\geq i}$, defined in~\mathrm{e}qref{eq:xni}, as shown in Proposition~{\rm Re}f{prop:factmean}. More specifically, for any $i<i'\in\mathbb{Z}$ and $a_i,\ldots,a_{i'}\in\mathbb{N}_0$, and recalling that $(x)_k:=x(x-1)\ldots (x-(k-1))$, \be\label{eq:factmean} \E{\Big(X^{(n)}_{\geq i'}\Big)_{a_{i'}}\prod_{k=i}^{i'-1}\Big(X^{(n)}_i\Big)_{a_i}}=\Big(q_0\theta^{-i'+{\rm var}\ epsilon_n}\Big)^{a_{i'}}\prod_{k=i}^{i'-1}\Big(q_0(1-\theta^{-1})\theta^{-k+{\rm var}\ epsilon_n}\Big)^{a_k}(1+o(1)). \mathrm{e}e We stress that the specific form of the right-hand side is due to the underlying assumption in Theorem~{\rm Re}f{thrm:mainatom} that the vertex-weight distribution has an atom at one, as in the~{\rm Re}f{ass:weightatom} case of Assumption~{\rm Re}f{ass:weights}. The error term can be specified in more detail, but we omit this as it serves no further purpose here. The result essentially follows directly from these estimates by observing that the right-hand side of~\mathrm{e}qref{eq:factmean} can be understood as an approximation of the factorial moment of the Poisson random variables $\mathcal P^{\rm var}\ epsilon([i-{\rm var}\ epsilon,i+1-{\rm var}\ epsilon)),\ldots, \mathcal P^{\rm var}\ epsilon([i'-{\rm var}\ epsilon,\infty))$, when ${\rm var}\ epsilon_n$ converges to some ${\rm var}\ epsilon$. The equality in~\mathrm{e}qref{eq:factmean} follows from the fact that $X^{(n)}_i$ and $X^{(n)}_{\geq i}$ can be expressed as sums of indicator random variables of disjoint events, so that their factorial means can be understood via the probabilities in~{\rm Re}f{item:i}. Then, again using the asymptotic behaviour of $p_{\geq k}$ (as in~{\rm Re}f{item:ii}), allows us to obtain the right-hand side of~\mathrm{e}qref{eq:factmean}. Finally, Theorems~{\rm Re}f{thrm:maxtail} and~{\rm Re}f{thrm:asympnormal} are also a result of~\mathrm{e}qref{eq:factmean}. This is due to the fact that the events $\{\max_{j\in[n]}\Zm_n(j) \geq \lfloor\log_\theta n\rfloor +i\}$ can be understood via the events $\{X^{(n)}_{\geq i}>0\}$. Again using ideas similar to ones developed in~\cite{AddEsl18} then allow us to obtain the results. \section{Examples}\label{sec:exres} In this section we discuss some particular choices of distributions for the vertex-weights for which more precise statements can be made compared to those stated in Section~{\rm Re}f{sec:results}. The reason we can improve on these more general results is due to a better understanding of the asymptotic behaviour of $p_k$ and $p_{\geq k}$ (see~\mathrm{e}qref{eq:pk}) as $k\to\infty$. 
As discussed in Section~{\rm Re}f{sec:overview}, to understand the asymptotic behaviour of the (near-)maximum degree(s) up to random order a very precise asymptotic expression for $p_{\geq k}$ is required. Though not possible in general in the~{\rm Re}f{ass:weightweibull} and~{\rm Re}f{ass:weightgumbel} cases of Assumption~{\rm Re}f{ass:weights}, certain choices of vertex-weight distributions do allow for a more explicit formulation of $p_{\geq k}$, yielding improved asymptotics. The proofs of the results presented here are deferred to Section {\rm Re}f{sec:ex}. \begin{example}[Beta distribution]\label{ex:beta} We consider a random variable $W$ with a tail distribution \be \label{eq:betacdf} \mathbb{P}{W\geq x}=\int_x^1 \frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(\beta)}s^{\alpha-1}(1-s)^{\beta-1}\,\mathrm{d} s, \qquad x\in[0,1), \mathrm{e}e for some $\alpha,\beta>0$. Clearly, $W$ belongs to the~{\rm Re}f{ass:weightweibull} case of Assumption~{\rm Re}f{ass:weights}. Moreover, $W$ satisfies condition~{\rm Re}f{ass:weightzero} of Assumption~{\rm Re}f{ass:weights}. Since for any $x_0\in(0,1)$ we can bound $(1-s)^{\beta-1}\leq 1\vee (1-x_0)^{\beta-1}$ for all $s\in(0,x_0)$, it follows that for any $x\in[0,x_0]$, \be \mathbb{P}{W\leq x}=\int_0^x \frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(\beta)}s^{\alpha-1}(1-s)^{\beta-1}\,\mathrm{d} s\leq (1\vee (1-x_0)^{\beta-1})\frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha+1)\mathcal{G}amma(\beta)}x^\alpha, \mathrm{e}e so that condition~{\rm Re}f{ass:weightzero} is satisfied with $C=(1\vee (1-x_0)^{\beta-1})\mathcal{G}amma(\alpha+\beta)/(\mathcal{G}amma(\alpha+1)\mathcal{G}amma(\beta))$, the constant $x_0\in(0,1)$ arbitrary, and $\rho=\alpha$. We set, for $\theta:=1+\E W\in(1,2),$ \be\ba\label{eq:epsnbeta} X^{(n)}_i&:=|\{j\in[n]: \Zm_n(j)=\lfloor \log_\theta n-\beta\log_\theta\log_\theta n \rfloor +i\}|,\\ X^{(n)}_{\geq i}&:=|\{j\in[n]: \Zm_n(j)\geq \lfloor \log_\theta n-\beta \log_\theta\log_\theta n\rfloor +i\}|,\\ {\rm var}\ epsilon_n&:=(\log_\theta n-\beta \log_\theta\log_\theta n)-\lfloor\log_\theta n-\beta\log_\theta\log_\theta n \rfloor,\\ c_{\alpha,\beta,\theta}&:=\frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)}(1-\theta^{-1})^{-\beta}. \mathrm{e}a\mathrm{e}e Then, we can formulate the following results. \begin{theorem}\label{thrm:betappp} Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ whose distribution satisfies~\mathrm{e}qref{eq:betacdf} for some $\alpha,\beta>0$, and recall ${\rm var}\ epsilon_n$ and $c_{\alpha,\beta,\theta}$ from~\mathrm{e}qref{eq:epsnbeta}. Fix ${\rm var}\ epsilon\in[0,1]$, let $(n_\mathrm{e}ll)_{\mathrm{e}ll\in\mathbb{N}}$ be an increasing integer sequence such that ${\rm var}\ epsilon_{n_\mathrm{e}ll}\to {\rm var}\ epsilon$ as $\mathrm{e}ll \to \infty$ and let $\mathcal P$ be a Poisson point process on $\mathbb{R}$ with intensity measure $\lambda(x)=c_{\alpha,\beta,\theta}\theta^{-x}\log \theta\,\mathrm{d} x$. Define \be \mathcal P^{\rm var}\ epsilon:=\sum_{x\in \mathcal P}\mathrm{d}elta_{\lfloor x+{\rm var}\ epsilon\rfloor} \quad \text{ and } \quad \mathcal P^{(n)}:=\sum_{i\in[n]}\mathrm{d}elta_{\Zm_n(i)-\lfloor \log_\theta n-\beta \log_\theta\log_\theta n \rfloor}. 
\mathrm{e}e Then in $\mathcal M^\#_{\mathbb{Z}^*}$ $($the space of boundedly finite measures on $\mathbb{Z}^*=\mathbb{Z}\cup\{\infty\}$ with the metric defined in Section~{\rm Re}f{sec:results}$)$, $\mathcal P^{(n_\mathrm{e}ll)}$ converges weakly to $\mathcal P^{\rm var}\ epsilon$ as $\mathrm{e}ll\to \infty$. Equivalently, for any $i<i'\in\mathbb{Z}$, jointly as $\mathrm{e}ll\to \infty$, \be (X^{(n_\mathrm{e}ll)}_i,X^{(n_\mathrm{e}ll)}_{i+1},\ldots,X^{(n_\mathrm{e}ll)}_{i'-1},X^{(n_\mathrm{e}ll)}_{\geq i'})\toindis (\mathcal P^{\rm var}\ epsilon(i),\mathcal P^{\rm var}\ epsilon(i+1),\ldots,\mathcal P^{\rm var}\ epsilon (i'-1),\mathcal P^{\rm var}\ epsilon([i',\infty))). \mathrm{e}e \mathrm{e}nd{theorem} We remark that the second-order term $\beta \log_\theta\log_\theta n$ is established in Theorem~{\rm Re}f{thrm:mainweibull} as well and that the above theorem recovers this result and extends it to the random third-order term, which is similar to the result in Theorem~{\rm Re}f{thrm:mainatom}. \begin{theorem}\label{thrm:betamaxtail} Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ whose distribution satisfies~\mathrm{e}qref{eq:betacdf} for some $\alpha,\beta>0$, and recall ${\rm var}\ epsilon_n$ and $c_{\alpha,\beta,\theta}$ from~\mathrm{e}qref{eq:epsnbeta}. Then, for any integer-valued $i_n$ with $i_n= \mathrm{d}elta\log_\theta n+o(\log n)$ for some $\mathrm{d}elta\in[0,\theta\log\theta/(\theta-1)-1)$ and $\liminf_{n\to\infty}i_n>-\infty$, \be \ba \mathbb P{}&\Big(\max_{j\in[n]}\Zm_n(j)\geq \lfloor \log_\theta n-\beta \log_\theta\log_\theta n\rfloor +i_n\Big)=\big(1-\mathrm{e}xp\big(-c_{\alpha,\beta,\theta}(1+\mathrm{d}elta)^{-\beta}\theta^{-i_n+{\rm var}\ epsilon_n}\big)\big)(1+o(1)). \mathrm{e}a \mathrm{e}e Moreover, let $\mathcal{M}_n\subseteq [n]$ denote the (random) set of vertices that attain the maximum degree in $T_n$, fix ${\rm var}\ epsilon\in[0,1]$ and let $(n_\mathrm{e}ll)_{\mathrm{e}ll\in\mathbb{N}}$ be a positive integer sequence such that ${\rm var}\ epsilon_{n_\mathrm{e}ll}\to{\rm var}\ epsilon$ as $\mathrm{e}ll\to\infty$. Then, $|\mathcal{M}_{n_\mathrm{e}ll}|\toindis M_{\rm var}\ epsilon$, where $M_{\rm var}\ epsilon$ has distribution \be \mathbb{P}{M_{\rm var}\ epsilon=k}=\sum_{j\in\mathbb{Z}}\frac{1}{k!}\big(c_{\alpha,\beta,\theta}(1-\theta^{-1})\theta^{-j+{\rm var}\ epsilon}\Big)^k\mathrm{e}xp\big(-c_{\alpha,\beta,\theta}\theta^{-j+{\rm var}\ epsilon}\big),\qquad k\in\mathbb{N}. \mathrm{e}e \mathrm{e}nd{theorem} \begin{theorem}\label{thrm:betaasympnorm} Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ whose distribution satisfies~\mathrm{e}qref{eq:betacdf} for some $\alpha,\beta>0$ and recall ${\rm var}\ epsilon_n$ and $c_{\alpha,\beta,\theta}$ from~\mathrm{e}qref{eq:epsnbeta}. Then, for an integer-valued $i_n\to-\infty$ such that $i_n=o(\log\log n)$, \be \frac{X^{(n)}_{i_n}-c_{\alpha,\beta,\theta}(1-\theta^{-1})\theta^{-i_n+{\rm var}\ epsilon_n}}{\sqrt{c_{\alpha,\beta,\theta}(1-\theta^{-1})\theta^{-i_n+{\rm var}\ epsilon_n}}}\toindis N(0,1). \mathrm{e}e \mathrm{e}nd{theorem} The three theorems are the analogue of Theorems~{\rm Re}f{thrm:mainatom},~{\rm Re}f{thrm:maxtail} and~{\rm Re}f{thrm:asympnormal}, respectively, where we now consider vertex-weights distributed according to a beta distribution rather than a distribution with an atom at one. 
\mathrm{e}nd{example} \begin{example}[Fraction of `gamma' random variables]\label{ex:gumb} We consider a random variable $W$ with a tail distribution \be \label{eq:gumbex} \mathbb{P}{W\geq x}=(1-x)^{-b}\mathrm{e}^{-x/(c_1(1-x))},\qquad x\in[0,1), \mathrm{e}e for some $b\in\mathbb{R},c_1>0$ with $bc_1\leq 1$. The random variable $(1-W)^{-1}$ belongs to the Gumbel maximum domain of attraction, as \be \mathbb{P}{(1-W)^{-1}\geq x}=\mathbb{P}{W\geq 1-1/x}=\mathrm{e}^{1/c_1}x^b\mathrm{e}^{-x/c_1},\qquad x\geq 1, \mathrm{e}e so that $W$ belongs to the Gumbel MDA as well by~\cite[Lemma $2.6$]{LodOrt21}, and satisfies the {\rm Re}f{ass:weightgumbel}-{\rm Re}f{ass:weighttaufin} sub-case with $a=\mathrm{e}^{1/c_1},b\in\mathbb{R},c_1>0$, and $\tau=1$. As a result, $X:=(1-W)^{-1}$ is a `gamma' random variable in the sense that its tail distribution is asymptotically equal to that of a gamma random variable, up to constants. We can then write $W=(X-1)/X$, a fraction of these `gamma' random variables. Let $f_W$ denote the probability density function of $W$. It follows from~\mathrm{e}qref{eq:gumbex} that \be f_W(x)=(1/c_1 – b(1-x)) (1-x)^{-(b+2)} \mathrm{e}^{ - x/ (c_1 (1-x))}. \mathrm{e}e Observe that the requirement $bc_1\leq 1$ ensures that $f_W(x)\geq 0$ for all $x\in[0,1)$. As $\lim_{x\uparrow 1}f_W(x)=0$ and the density is bounded from above in a neighbourhood of the origin, it follows that $f_W(x)\leq C$ for some $C>0$ and all $x\in[0,1)$. As a result, this distribution satisfies condition~{\rm Re}f{ass:weightzero} of Assumption~{\rm Re}f{ass:weights} with $\rho=1$, $x_0\in(0,1)$ arbitrary, and some sufficiently large $C>0$. Recall $C_{\theta,\tau,c_1}$ from~\mathrm{e}qref{eq:gumbrv2nd}. We set, for $\theta:=1+\E W\in(1,2),$ \be \label{eq:c} C:=\mathrm{e}^{c_1^{-1}(1-\theta^{-1})/2}\sqrt{\pi}c_1^{-1/4+b/2}(1-\theta^{-1})^{1/4+b/2},\qquad \text{and}\quad c_{c_1,b,\theta}:=C\theta^{C_{\theta,1,c_1}^2/2}, \mathrm{e}e and \be\ba\label{eq:epsngamma} X^{(n)}_i:={}&\big|\big\{j\in[n]: \Zm_n(j)=\big\lfloor \log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n \big\rfloor +i\big\}\big|,\\ X^{(n)}_{\geq i}:={}&\big|\big\{j\in[n]: \Zm_n(j)\geq \big\lfloor \log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n \big\rfloor +i\big\}\big|,\\ {\rm var}\ epsilon_n:={}&\big(\log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n\big )\\ &-\big\lfloor\log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n \big \rfloor. \mathrm{e}a\mathrm{e}e Then, we can formulate the following results. \begin{theorem}\label{thrm:gumbppp} Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ whose distribution satisfies~\mathrm{e}qref{eq:gumbex} for some $b\in\mathbb{R},c_1>0$ such that $bc_1\leq 1$, and recall $C_{\theta,\tau,c_1}$ from~\mathrm{e}qref{eq:gumbrav2nd}. Then, \be \label{eq:gumb3rd} \max_{i\in[n]}\frac{\Zm_n(i)-\log_\theta n+C_{\theta,1,c_1}\sqrt{\log_\theta n}}{\log_\theta \log_\theta n}\toinp \frac{b}{2}+\frac 14. \mathrm{e}e Furthermore, recall ${\rm var}\ epsilon_n$ from~\mathrm{e}qref{eq:epsngamma}, fix ${\rm var}\ epsilon\in[0,1]$, and let $(n_\mathrm{e}ll)_{\mathrm{e}ll\in\mathbb{N}}$ be an increasing integer sequence such that ${\rm var}\ epsilon_{n_\mathrm{e}ll}\to {\rm var}\ epsilon$ as $\mathrm{e}ll \to \infty$. 
Let $\mathcal P$ be a Poisson point process on $\mathbb{R}$ with intensity measure $\lambda(x)=c_{c_1,b,\theta}\theta^{-x}\log \theta\, \mathrm{d} x$, where we recall $c_{c_1,b,\theta}$ from~\mathrm{e}qref{eq:c}. Define \be \mathcal P^{\rm var}\ epsilon:=\sum_{x\in \mathcal P}\mathrm{d}elta_{\lfloor x+{\rm var}\ epsilon\rfloor} \quad \text{ and }\quad \mathcal P^{(n)}:=\sum_{i\in[n]}\mathrm{d}elta_{\Zm_n(i)-\lfloor \log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n \rfloor}. \mathrm{e}e Then in $\mathcal M^\#_{\mathbb{Z}^*}$ $($the space of boundedly finite measures on $\mathbb{Z}^*=\mathbb{Z}\cup\{\infty\}$ with the metric defined in Section~{\rm Re}f{sec:results}$)$, $\mathcal P^{(n_\mathrm{e}ll)}$ converges weakly to $\mathcal P^{\rm var}\ epsilon$ as $\mathrm{e}ll\to \infty$. Equivalently, for any $i<i'\in\mathbb{Z}$, jointly as $\mathrm{e}ll\to \infty$, \be (X^{(n_\mathrm{e}ll)}_i,X^{(n_\mathrm{e}ll)}_{i+1},\ldots,X^{(n_\mathrm{e}ll)}_{i'-1},X^{(n_\mathrm{e}ll)}_{\geq i'})\toindis (\mathcal P^{\rm var}\ epsilon(i),\mathcal P^{\rm var}\ epsilon(i+1),\ldots,\mathcal P^{\rm var}\ epsilon (i'-1),\mathcal P^{\rm var}\ epsilon([i',\infty))). \mathrm{e}e \mathrm{e}nd{theorem} We remark that the second-order term in~\mathrm{e}qref{eq:gumb3rd} is established in Theorem~{\rm Re}f{thrm:maingumbel},~\mathrm{e}qref{eq:gumbrv2nd}, as well. The above theorem recovers this former result and extends it to the third-order rescaling and to the random fourth-order term, which is similar to the result in Theorem~{\rm Re}f{thrm:mainatom}. \begin{theorem}\label{thrm:gumbmaxtail} Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ whose distribution satisfies~\mathrm{e}qref{eq:gumbex} for some $b\in \mathbb{R},c_1>0$ such that $bc_1\leq 1$, and recall $C_{\theta,\tau,c_1}$, $c_{c_1,b,\theta}$, and ${\rm var}\ epsilon_n$ from~\mathrm{e}qref{eq:gumbrav2nd}, \mathrm{e}qref{eq:c}, and~\mathrm{e}qref{eq:epsngamma}, respectively. Then, for any integer-valued $i_n$ with $i_n= \mathrm{d}elta\sqrt{\log_\theta n}+o(\sqrt{\log n})$ for some $\mathrm{d}elta\geq 0$ and $\liminf_{n\to\infty}i_n>-\infty$, \be \ba \mathbb P{}&\Big(\max_{j\in[n]}\Zm_n(j)\geq \lfloor \log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n \rfloor+i_n\Big)\\ &=\big(1-\mathrm{e}xp\big(-c_{c_1,b,\theta}\theta^{-i_n+{\rm var}\ epsilon_n-\mathrm{d}elta C_{\theta,1,c_1}/2}\big)\big)(1+o(1)). \mathrm{e}a \mathrm{e}e Moreover, let $\mathcal{M}_n\subseteq [n]$ denote the $($random$)$ set of vertices that attain the maximum degree in $T_n$, fix ${\rm var}\ epsilon\in[0,1]$ and let $(n_\mathrm{e}ll)_{\mathrm{e}ll\in\mathbb{N}}$ be a positive integer sequence such that ${\rm var}\ epsilon_{n_\mathrm{e}ll}\to{\rm var}\ epsilon$ as $\mathrm{e}ll\to\infty$. Then, $|\mathcal{M}_{n_\mathrm{e}ll}|\toindis M_{\rm var}\ epsilon$, where $M_{\rm var}\ epsilon$ has distribution \be \mathbb{P}{M_{\rm var}\ epsilon=k}=\sum_{j\in\mathbb{Z}}\frac{1}{k!}\big(c_{c_1,b,\theta}(1-\theta^{-1})\theta^{-j+{\rm var}\ epsilon}\big)^k\mathrm{e}xp\big(-c_{c_1,b,\theta}\theta^{-j+{\rm var}\ epsilon}\big),\qquad k\in\mathbb{N}. 
\mathrm{e}e \mathrm{e}nd{theorem} \begin{theorem}\label{thrm:gumbasympnorm} Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ whose distribution satisfies~\mathrm{e}qref{eq:gumbex} for some $b\in \mathbb{R},c_1>0$ such that $bc_1\leq 1$, and recall $c_{c_1,b,\theta}$ and ${\rm var}\ epsilon_n$ from~\mathrm{e}qref{eq:c} and~\mathrm{e}qref{eq:epsngamma}, respectively. Then, for an integer-valued $i_n\to-\infty$ such that $i_n=o(\log\log n)$, \be \frac{X^{(n)}_{i_n}-c_{c_1,b,\theta}(1-\theta^{-1})\theta^{-i_n+{\rm var}\ epsilon_n}}{\sqrt{c_{c_1,b,\theta}(1-\theta^{-1})\theta^{-i_n+{\rm var}\ epsilon_n}}}\toindis N(0,1). \mathrm{e}e \mathrm{e}nd{theorem} The three theorems are the analogue of Theorems~{\rm Re}f{thrm:mainatom},~{\rm Re}f{thrm:maxtail} and~{\rm Re}f{thrm:asympnormal}, respectively, where we now consider vertex-weights distributed according to a distribution as in~\mathrm{e}qref{eq:gumbex} rather than a distribution with an atom at one. \mathrm{e}nd{example} In both examples we see that a better understanding of the asymptotic behaviour of the tail of the degree distribution, $(p_{\geq k})_{k\in\mathbb{N}_0}$, allows us to identify the higher-order asymptotic behaviour of the (near-)maximum degree(s). It also shows that a higher-order random limit as in the sense of Theorems~{\rm Re}f{thrm:betappp} and~{\rm Re}f{thrm:gumbppp} is not expressed just by vertex-weights whose distribution has an atom at one, and we conjecture that this result is in fact universal for \mathrm{e}mph{all} vertex-weights distributions with bounded support. \section{Degree tail distributions and factorial moments}\label{sec:taildeg} In this section we state and prove the key elements required to prove the main results as stated in Section~{\rm Re}f{sec:results}. We stress that the results presented and proved in this section cover all the classes introduced in Assumption~{\rm Re}f{ass:weights} (in fact, they cover any vertex-weight $W$ that satisfies conditions~{\rm Re}f{ass:weightsup} and~{\rm Re}f{ass:weightzero}) and that the distinction between the classes of Assumption~{\rm Re}f{ass:weights} follows in Section~{\rm Re}f{sec:mainproof}. The intermediate results presented here form the main technical contribution, and their proofs are the main body of the paper. \subsection{Statement of intermediate results and main ideas} As discussed in Section~{\rm Re}f{sec:overview}, to understand the asymptotic behaviour of the maximum degree and near-maximum degrees we require a more precise understanding of the convergence in mean of the empirical degree distribution. To that end, we present the following result: \begin{proposition}[Distribution of typical vertex degrees]\label{lemma:degprobasymp} Let $W$ be a positive random variable that satisfies conditions~{\rm Re}f{ass:weightsup} and~{\rm Re}f{ass:weightzero} of Assumption~{\rm Re}f{ass:weights}. Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ which are i.i.d.\ copies of $W$, fix $k\in\mathbb{N}$, and let $(v_\mathrm{e}ll)_{\mathrm{e}ll\in[k]}$ be $k$ vertices selected uniformly at random without replacement from $[n]$. 
For a fixed $c\in (0,\theta/(\theta-1))$, there exist $\beta\geq \beta'>0$ such that uniformly over non-negative integers $m_\mathrm{e}ll< c\log n$ with $\mathrm{e}ll\in[k]$, \begin{align} \mathbb{P}{\mathbb{Z}m_n(v_\mathrm{e}ll)=m_\mathrm{e}ll,\text{ for all }\mathrm{e}ll\in[k]}&= \prod_{\mathrm{e}ll=1}^k \E{\frac{\E{W}}{\E{W}+W}\Big(\frac{W}{\E{W}+W}\Big)^{m_\mathrm{e}ll}}\big(1+o\big(n^{-\beta}\big)\big),\label{eq:degdist} \intertext{and} \mathbb{P}{\mathbb{Z}m_n(v_\mathrm{e}ll)\geq m_\mathrm{e}ll,\text{ for all }\mathrm{e}ll\in[k]}&= \prod_{\mathrm{e}ll=1}^k \E{\Big(\frac{W}{\E{W}+W}\Big)^{m_\mathrm{e}ll}}\big(1+o\big(n^{-\beta'}\big)\big). \label{eq:degtail} \mathrm{e}nd{align} \mathrm{e}nd{proposition} \begin{remark}\label{rem:degtail} $(i)$ In~\cite[Lemma $1$]{DevLu95}, it is proved that the degrees $(\Zm_n(j))_{j\in[n]}$ are negative quadrant dependent when considering the random recursive tree model (the WRT with deterministic weights, all equal to $1$). That is, for any $k\in\mathbb{N}$ and any $j_1\neq \ldots \neq j_k\in[n]$ and $m_1,\ldots,m_k\in\mathbb{N}$, \be \mathbb{P}{\bigcap_{\mathrm{e}ll=1}^k\{\mathbb{Z}m_n(j_\mathrm{e}ll)\geq m_\mathrm{e}ll\}}\leq\prod_{j=1}^k \mathbb{P}{\mathbb{Z}m_n(j_\mathrm{e}ll)\geq m_\mathrm{e}ll}. \mathrm{e}e This property only holds for the conditional probability measure $\mathbb P_W$ when considering the WRT (or, more generally, the WRG) model, as follows from~\cite[Lemma $7.1$]{LodOrt21}, and can be obtained `asymptotically' for the probability measure $\mathbb P$, as in the proof of~\cite[Theorem $2.8$, Bounded case]{LodOrt21}. Proposition~{\rm Re}f{lemma:degprobasymp} improves on this by establishing \mathrm{e}mph{asymptotic independence} under the non-conditional probability measure $\mathbb P$ of the degrees of typical vertices, which allows us to extend the results in~\cite{LodOrt21} to more precise asymptotics. $(ii)$ We note that the result only requires the two main conditions in Assumption~{\rm Re}f{ass:weights} on the behaviour of the vertex-weight distribution close to zero and one. Hence, results for other vertex-weight distributions that do not satisfy any of the particular cases outlined in this assumption can be obtained as well using the methods presented in this paper. $(iii)$ The result in Proposition~{\rm Re}f{lemma:degprobasymp} improves on known results, especially those in~\cite{FouIyerMaiSulz19,Iyer20}. In these papers similar techniques are used to prove a weaker result, in which the $m_\mathrm{e}ll$ are not allowed to diverge with $n$ and where no convergence rate is provided. $(iv)$ The constraint $c\in(0,\theta/(\theta-1))$ arises from the following: when $m=c\log n+o(\log n)$, as we shall encounter later in the proof of Lemma~{\rm Re}f{lemma:inepsterm}, see Equation~\mathrm{e}qref{eq:remarkref}, for ${\rm var}\ epsilon>0$ small, \be \frac1n \sum_{i<n^{\rm var}\ epsilon}\mathbb{P}{\mathbb{Z}m_n(v_1)\geq m}\leq \mathrm{e}xp\Big(\log n\Big(-(1-{\rm var}\ epsilon)+(c+o(1))\Big(1-\frac{1}{c(\theta-1)}+\log\Big(\frac{1}{c(\theta-1)}\Big)\Big)\Big)\Big). \mathrm{e}e We want the contribution of left-hand side, compared to the left-hand side of~\mathrm{e}qref{eq:degtail} to be negligible. Hence, it needs to be $o(p_{\geq m})=o(\theta^{-m})$ (recall $p_{\geq m}$ from~\mathrm{e}qref{eq:pk}). We thus have the requirement that \be \label{eq:cineq} -(1-{\rm var}\ epsilon)+c\Big(1-\frac{1}{c(\theta-1)}+\log\Big(\frac{1}{c(\theta-1)}\Big)\Big)>-c\log \theta \mathrm{e}e is satisfied. 
The inequality in~\mathrm{e}qref{eq:cineq} becomes an equality exactly when $c=\theta/(\theta-1)$ and ${\rm var}\ epsilon=0$. Hence the need for $c$ to be strictly smaller than $\theta/(\theta-1)$. Also, a more technical reason is that certain error terms no longer converge to zero with $n$ when $c\geq \theta/(\theta-1)$. Moreover, we observe that $\frac{1}{\log\theta}< \frac{\theta}{\theta-1}$ when $\theta\in(1,2]$, so that Proposition~{\rm Re}f{lemma:degprobasymp} covers degrees that extend \mathrm{e}mph{beyond} the value of the maximum degree (see~\mathrm{e}qref{eq:bddmaxconv}). This is a crucial element of Proposition~{\rm Re}f{lemma:degprobasymp} and the reason that strong claims regarding the largest degrees in the tree can be proved. \mathrm{e}nd{remark} To use this (tail) distribution of $k$ typical vertices $v_1,\ldots,v_k$, a precise expression for the expected values on the right-hand side in Proposition~{\rm Re}f{lemma:degprobasymp} is required. Recall $p_k$ from~\mathrm{e}qref{eq:pk}. The following theorem comes from~\cite[Theorem $2.7$]{LodOrt21}, in which the maximum degree of weighted recursive graphs is studied for a large class of vertex-weight distribution and in which asymptotic expressions of $p_k$ are presented. \begin{theorem}[\cite{LodOrt21}, Asymptotic behaviour of $p_k$]\label{thrm:pkasymp} Recall that $\theta:=1+\E W$. We consider the different cases with respect to the vertex-weights as in Assumption~{\rm Re}f{ass:weights}, but do not require condition~{\rm Re}f{ass:weightzero} to hold. \begin{enumerate}[labelindent = 1cm, leftmargin = 2.2cm] \item[\namedlabel{thrm:pkatom}{$(\mathrm{\mathbf{Atom}})$}] Recall that $q_0=\mathbb{P}{W=1}>0$ and recall $r_k$ from~\mathrm{e}qref{eq:ek}. Then, \be \label{eq:pkatom} \hspace{2cm} p_k=q_0(1-\theta^{-1})\theta^{-k}\big(1+\mathcal O(r_k)\big). \mathrm{e}e \item[\namedlabel{thrm:pkweibull}{$(\mathrm{\mathbf{Weibull}})$}] Recall that $\alpha>1$ is the power-law exponent. Then, for all $k>1/\E W$, \be\label{eq:pkbddweibull} \underline L(k)k^{-(\alpha-1)} \theta^{-k}\leq p_k\leq \overline L(k)k^{-(\alpha-1)}\theta^{-k}, \mathrm{e}e where $\underline L$ and $\overline L$ are slowly varying at infinity.\\ \item[\namedlabel{thrm:pkgumbel}{$(\mathrm{\mathbf{Gumbel}})$}] \begin{enumerate} \item[$(i)$] If $W$ satisfies the~{\rm Re}f{ass:weighttaufin} sub-case with parameter $\tau>0$, set $\gamma:=1/(\tau+1)$. Then, \be\ba\label{eq:pkbddgumbelrv} p_k=\mathrm{e}xp\Big(-\frac{\tau^\gamma}{1-\gamma}\Big(\frac{(1-\theta^{-1})k}{c_1}\Big)^{1-\gamma}(1+o(1))\Big)\theta^{-k}. \mathrm{e}a\mathrm{e}e \item[$(ii)$] If $W$ satisfies the~{\rm Re}f{ass:weighttau0} sub-case with parameter $\tau>1$, \be\ba\label{eq:pkbddgumbelrav} \hspace{1.8cm}p_k=\mathrm{e}xp\Big(-\Big(\frac{\log k}{c_1}\Big)^\tau\Big(1-\tau(\tau-1)\frac{\log\log k}{\log k}+\frac{K_{\tau,c_1,\theta}}{\log k}(1+o(1))\Big)\Big)\theta^{-k}. \mathrm{e}a\mathrm{e}e where $K_{\tau,c_1,\theta}:=\tau\log(\mathrm{e} c_1^\tau(1-\theta^{-1})/\tau)$. \mathrm{e}nd{enumerate} \mathrm{e}nd{enumerate} \mathrm{e}nd{theorem} \begin{remark}\label{rem:pgeqk} Equivalent upper and lower bounds can be obtained for $p_{\geq k}$ as in~\mathrm{e}qref{eq:pk} (with $m=1$), by adjusting the multiplicative constants in front of the asymptotic expressions by a factor $(1-\theta^{-1})^{-1}$ only. This is due to the fact that \be (1-\theta^{-1})^{-1}p_k=\E{\frac{1}{1-\theta^{-1}}\frac{\theta-1}{\theta-1+W}\Big(\frac{W}{\theta-1+W}\Big)^k}\geq \E{\Big(\frac{W}{\theta-1+W}\Big)^k}=p_{\geq k}\geq p_k. 
\mathrm{e}e Equivalently, the proof of~\cite[Theorem $2.7$]{LodOrt21} can be adapted to work for $p_{\geq k}$ in which case the same asymptotic behaviour is established for $p_{\geq k}$, but with different constants in the front. \mathrm{e}nd{remark} We also provide less precise but more general bounds on the degree distribution. \begin{lemma}\label{lemma:pkbound} Let $W$ be a positive random variable with $x_0:=\sup\{x>0:\mathbb{P}{W\leq x}<1\}=1$. Then, for any $\xi>0$ and $k$ sufficiently large, \be (\theta+\xi)^{-k}\leq p_k\leq p_{\geq k}\leq \theta^{-k}. \mathrm{e}e \mathrm{e}nd{lemma} \begin{proof} The upper bound on $p_{\geq k}$ directly follows from the fact that $x\mapsto (x/(\theta-1+ x))^k$ is increasing in $x$, so that \be p_{\geq k}=\E{\Big(\frac{W}{\theta-1+W}\Big)^k}\leq \Big(\frac{1}{\theta-1+1}\Big)^k=\theta^{-k}. \mathrm{e}e For the lower bound, let us take some $\mathrm{d}elta\in(0,\xi/(\theta-1+\xi))$ and define \be \label{eq:f} f_k(\theta,x):=\frac{\theta-1}{\theta-1+x}\Big(\frac{x}{\theta-1+x}\Big)^k. \mathrm{e}e Note that $p_k=\E{f_k(\theta,W)}$. Then, since $f_k(\theta,x)$ is increasing in $x$ on $(0,1]$ for $k$ sufficiently large, \be \E{f_k(\theta,W)}\geq \E{f_k(\theta,W)\ind_{\{W>1-\mathrm{d}elta\}}}\geq \mathbb{P}{W>1-\mathrm{d}elta}\frac{\theta-1}{\theta-\mathrm{d}elta}\Big(\frac{1-\mathrm{d}elta}{\theta-\mathrm{d}elta}\Big)^k. \mathrm{e}e We note that, since $x_0=1$, it holds that $\mathbb{P}{W>1-\mathrm{d}elta}>0$ for any $\mathrm{d}elta>0$. Now, by the choice of $\mathrm{d}elta$, we have $(\theta+\xi)(1-\mathrm{d}elta)/(\theta-\mathrm{d}elta)>1$, so we can find some $\zeta>0$ sufficiently small so that \be \E{f_k(\theta,W)}(\theta+\xi)^k \geq \mathbb{P}{W>1-\mathrm{d}elta}\frac{\theta-1}{\theta-\mathrm{d}elta}\Big(\frac{(\theta+\xi)(1-\mathrm{d}elta)}{\theta-\mathrm{d}elta}\Big)^k\geq (1+\zeta)^k\geq 1, \mathrm{e}e as required. \mathrm{e}nd{proof} Recall the definition of $X_i^{(n)},X_{\geq i}^{(n)}$ and ${\rm var}\ epsilon_n$ from~\mathrm{e}qref{eq:xni} and~\mathrm{e}qref{eq:peps}, respectively. Proposition~{\rm Re}f{lemma:degprobasymp} combined with Theorem~{\rm Re}f{thrm:pkasymp} then allows us to obtain the following result. \begin{proposition}[Factorial moments when vertex-weights satisfy the~{\rm Re}f{ass:weightatom} case]\label{prop:factmean} $\,$\\ Consider the WRT model as in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ that satisfy the~{\rm Re}f{ass:weightatom} case in Assumption~{\rm Re}f{ass:weights} for some $q_0\in(0,1]$. Recall $r_k$ from~\mathrm{e}qref{eq:ek}, recall that $\theta:=1+\E W$, and that $(x)_k:=x(x-1)\cdots (x-(k-1))$ for $x\in\mathbb{R},k\in\mathbb{N}$, and $(x)_0:=1$. For fixed $K\in\mathbb{N}$ and $c\in(0,\theta/(\theta-1))$, there exists $\beta>0$ such that the following holds. For any integer-valued $i_n$ and $i_n',$ such that $0<i_n+\log_\theta n<i_n'+\log_\theta n< c\log n$ and any $(a_j)_{i_n\le j\le i_n'}$ non-negative integer sequence such that $\sum_{j=i_n}^{i_n'}a_j=K$, \be \ba \E{\Big(X^{(n)}_{\geq i_n'}\Big)_{a_{i_n'}}\prod_{k=i_n}^{i_n'-1}\Big(X_k^{(n)}\Big)_{a_k}}={}&\Big(q_0\theta^{-i_n'+{\rm var}\ epsilon_n}\Big)^{a_{i_n'}}\prod_{k=i_n}^{i_n'-1}\Big(q_0(1-\theta^{-1})\theta^{-k+{\rm var}\ epsilon_n}\Big)^{a_k}\\ &\times\big(1+\mathcal O\big(r_{\lfloor \log_\theta n\rfloor +i_n}\vee n^{-\beta}\big)\big). 
\mathrm{e}a\mathrm{e}e \mathrm{e}nd{proposition} \begin{remark} $(i)$ Intuitively, the result of this proposition tells us that the random variables $(X^{(n)}_k)_{k=i_n}^{i_n'-1}$ and $X(n)_{\geq i_n'}$ are, jointly, approximately Poisson distributed, since their factorial moments approximate those of Poisson random variables. This result hence plays an essential role in proving the weak convergence to a Poisson point process in Theorem~{\rm Re}f{thrm:mainatom}. $(ii)$ Related to Remark~{\rm Re}f{rem:asympnorm}, the error term decays polynomially only if $W$ has an atom at one and support bounded away from one and $\log_\theta n+i_n>\mathrm{e}ta \log n$ for some $\mathrm{e}ta>0$. That is, when there exists an $s\in(0,1)$ such that $\mathbb{P}{W\in(s,1)}=0$. In that case, $s_k\leq s$ and $r_k\leq \mathrm{e}xp(-(1-\theta^{-1})(1-s)k)$ for all $k$ large, so that \be r_{\lfloor \log_\theta n\rfloor +i_n}\vee n^{-\beta}\leq \mathrm{e}xp(-(1-\theta^{-1})(1-s)\mathrm{e}ta \log n)\vee n^{-\beta}=n^{-\min\{\mathrm{e}ta(1-\theta^{-1})(1-s),\beta\}}. \mathrm{e}e In all other cases, the error term decays slower than polynomially. \mathrm{e}nd{remark} \begin{proof}[Proof of Proposition~{\rm Re}f{prop:factmean} subject to Proposition~{\rm Re}f{lemma:degprobasymp}] We closely follow the approach in~\cite[Proposition $2.1$]{AddEsl18}, where an analogue result is presented for the case $q_0=1$, i.e.\ for the random recursive tree. Set $K':=K-a_{i_n'}$ and for each $i_n\leq k\leq i_n'$ and each $u\in\mathbb{N}$ such that $\sum_{\mathrm{e}ll=i_n}^{k-1}a_\mathrm{e}ll<u\leq \sum_{\mathrm{e}ll=i_n}^k a_\mathrm{e}ll$, let $m_u=\lfloor \log_\theta n \rfloor +k$. We note that $m_u<\log_\theta n+i_n'<c\log n$, so that the results in Proposition~{\rm Re}f{lemma:degprobasymp} can be used. Also, let $(v_u)_{u\in[K]}$ be $K$ vertices selected uniformly at random without replacement from $[n]$, and define $I:=[K]\backslash[K']$. Then, as the $X^{(n)}_{\geq k}$ and $X^{(n)}_{k}$ can be expressed as sums of indicators, \be \ba \label{eq:meanex} \hspace{-0.2cm}\mathbb E\bigg[\!\Big(X^{(n)}_{\geq i_n'}\Big)_{a_{i_n'}}\!\prod_{k=i_n}^{i_n'-1}\!\Big(X_k^{(n)}\Big)_{a_k}\!\bigg]\!&=(n)_K\mathbb{P}{\mathbb{Z}m_n(v_u)=m_u, \mathbb{Z}m_n(v_w)\geq m_w\text{ for all } u\in[K'],w\in I}\\ &=(n)_K\sum_{\mathrm{e}ll=0}^{K'}\sum_{\substack{S\subseteq [K']\\ |S|=\mathrm{e}ll}}\!\!(-1)^\mathrm{e}ll \mathbb{P}{\mathbb{Z}m_n(v_u)\geq m_u+\ind_{\{u\in S\}}\text{ for all } u\in [K]}, \mathrm{e}a \mathrm{e}e where the second step follows from~\cite[Lemma $5.1$]{AddEsl18} and is based on an inclusion-exclusion argument. We can now use Proposition~{\rm Re}f{lemma:degprobasymp}. First, we note that there exists a $\beta>0$ such that for non-negative integers $m_1,\ldots, m_K< c\log n$, \be \label{eq:tailprob} \mathbb{P}{\mathbb{Z}m_n(v_u)\geq m_u+\ind_{\{u\in S\}}\text{ for all } u\in [K]}=\prod_{u=1}^K \mathbb E\bigg[\Big(\frac{W}{\E{W}+W}\Big)^{m_u+\ind_{\{u\in S\}}}\bigg]\big(1+o\big(n^{-\beta}\big)\big). \mathrm{e}e Now, by Theorem~{\rm Re}f{thrm:pkasymp} and the definition of $r_k$ in~\mathrm{e}qref{eq:ek} and as $r_k$ is decreasing by Lemma~{\rm Re}f{lemma:rk} in the \hyperref[sec:appendix]{Appendix}, when $|S|=\mathrm{e}ll$, \be\label{eq:pkprod} \prod_{u=1}^K \E{\Big(\frac{W}{\E{W}+W}\Big)^{m_u+\ind_{\{u\in S\}}}}=q_0^K\theta^{-\mathrm{e}ll-\sum_{u=1}^K m_u}\big(1+\mathcal O\big(r_{\lfloor \log_\theta n\rfloor+i_n}\vee n^{-\beta'}\big)\big), \mathrm{e}e as the smallest $m_u$ equals $\lfloor \log_\theta n\rfloor+i_n$. 
We have \be \ba \label{eq:finstep} (n)_K \sum_{\mathrm{e}ll=0}^{K'}\sum_{\substack{S\subseteq [K']\\ |S|=\mathrm{e}ll}}(-1)^\mathrm{e}ll q_0^K \theta^{-\mathrm{e}ll-\sum_{u=1}^K m_u}&=(n)_Kq_0^K \theta^{-K'-\sum_{u=1}^K m_u}\sum_{\mathrm{e}ll=0}^{K'}\binom{K'}{\mathrm{e}ll}(-1)^\mathrm{e}ll \theta^{K'-\mathrm{e}ll}\\ &=(n)_Kq_0^K(1-\theta^{-1})^{K'}\theta^{-\sum_{u=1}^K m_u}. \mathrm{e}a \mathrm{e}e We then observe that $(n)_K=\theta^{K\log_\theta n}(1+\mathcal O(1/n))$. Moreover, we recall that $K=\sum_{k=i_n}^{i_n'}a_k$ and $K'=\sum_{k=i_n}^{i_n'-1}a_k$, while $m_u=\lfloor \log_\theta n\rfloor +k$ if $\sum_{\mathrm{e}ll=i_n}^{k-1}a_\mathrm{e}ll\leq u<\sum_{\mathrm{e}ll=i_n}^k a_\mathrm{e}ll$ for $i_n\leq \mathrm{e}ll\leq i_n'$; in addition, recall ${\rm var}\ epsilon_n$ from~\mathrm{e}qref{eq:peps}. Using~\mathrm{e}qref{eq:finstep} combined with~\mathrm{e}qref{eq:tailprob} and~\mathrm{e}qref{eq:pkprod} in~\mathrm{e}qref{eq:meanex}, we obtain \be \ba \mathbb E\bigg[\!\Big(\!X^{(n)}_{\geq i_n'}\!\Big)_{a_{i_n'}}\!\prod_{k=i_n}^{i_n'-1}\!\Big(\!X_k^{(n)}\!\Big)_{a_k}\!\bigg]\!={}& q_0^K(1-\theta^{-1})^{K'}\theta^{K\log_\theta n-\sum_{u=1}^K m_u}\big(1+\mathcal O\big(r_{\lfloor \log_\theta n\rfloor +i_n}\vee n^{-\beta}\big)\big)\\ ={}&\Big(q_0\theta^{-i_n'+{\rm var}\ epsilon_n}\Big)^{a_{i_n'}}\prod_{k=i_n}^{i_n'-1}\Big(q_0(1-\theta^{-1})\theta^{-k+{\rm var}\ epsilon_n}\Big)^{a_k}\\ &\times\big(1+\mathcal O\big(r_{\lfloor \log_\theta n\rfloor +i_n}\vee n^{-\beta}\big)\big), \mathrm{e}a\mathrm{e}e as desired. \mathrm{e}nd{proof} The next lemma builds on~\cite[Lemma $7.1$]{LodOrt21} and~\cite[Lemma $1$]{DevLu95} and provides bounds on the maximum degree that hold with high probability. \begin{lemma}\label{lemma:maxdegwhp} Let $W$ be a positive random variable that satisfies conditions~{\rm Re}f{ass:weightsup} and~{\rm Re}f{ass:weightzero} of Assumption~{\rm Re}f{ass:weights}. Consider the WRT model in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ which are i.i.d.\ copies of $W$. Fix $c\in(0,\theta/(\theta-1))$ and let $(k_n)_{n\in\mathbb{N}}$ be a non-negative, diverging integer sequence such that $k_n< c\log n$ and let $v_1$ be a vertex selected uniformly at random from $[n]$. If $\lim_{n\to\infty}n\mathbb{P}{\mathbb{Z}m_n(v_1)\geq k_n}=0$, then \be \lim_{n\to\infty}\mathbb{P}{\max_{i\in[n]}\Zm_n(i)\geq k_n}=0. \mathrm{e}e Similarly, when instead $\lim_{n\to\infty}n\mathbb{P}{\mathbb{Z}m_n(v_1)\geq k_n}=\infty$, \be \lim_{n\to\infty}\mathbb{P}{\max_{i\in[n]}\Zm_n(i) \geq k_n}=1. \mathrm{e}e \mathrm{e}nd{lemma} \begin{remark} Similar to what is discussed in Remark {\rm Re}f{rem:degtail}$(i)$, the result in this lemma is stronger than the results presented in~\cite[Lemma $7.1$]{LodOrt21} and~\cite[Lemma $1$]{DevLu95}. It extends the latter to the WRT model rather than just the RRT model, and improves the former as the result holds for the non-conditional probability measure $\mathbb P$ rather than $\mathbb P_W$, which is what is used in~\cite{LodOrt21}. Due to the difficulties of working with the conditional probability measure, only a first order asymptotic result can be proved there. With the improved understanding of the degree distribution, as in Proposition~{\rm Re}f{lemma:degprobasymp}, the above result can be obtained, which allows for finer asymptotics to be proved. 
\mathrm{e}nd{remark} \begin{proof}[Proof of Lemma~{\rm Re}f{lemma:maxdegwhp} subject to Proposition~{\rm Re}f{lemma:degprobasymp}] The first result immediately follows from a union bound and the fact that \be \label{eq:uarprob} n\mathbb{P}{\mathbb{Z}m_n(v_1)\geq k_n}=\sum_{i=1}^n \mathbb{P}{\Zm_n(i)\geq k_n}. \mathrm{e}e For the second result, let $A_{n,i}:=\{\Zm_n(i)\geq k_n\},i\in[n]$. Then, by the Chung-Erd{\H o}s inequality (which is one way of formalizing the second moment method), \be \mathbb{P}{\max_{i\in[n]}\Zm_n(i) \geq k_n}=\mathbb{P}{\cup_{i=1}^n A_{n,i}}\geq \frac{\Big(\sum_{i=1}^n \mathbb{P}{A_{n,i}}\Big)^2}{\sum_{i\neq j}\mathbb{P}{A_{n,i}\cap A_{n,j}}+\sum_{i=1}^n \mathbb{P}{A_{n,i}}}. \mathrm{e}e By~\mathrm{e}qref{eq:uarprob} it follows that $\sum_{i=1}^n\mathbb{P}{A_{n,i}}=n\mathbb{P}{\mathbb{Z}m_n(v_1)\geq k_n}$. Furthermore, by Proposition~{\rm Re}f{lemma:degprobasymp}, \be \sum_{i\neq j}\mathbb{P}{A_{n,i}\cap A_{n,j}}=n(n-1)\mathbb{P}{A_{n,v_1}\cap A_{n,v_2}}=(n\mathbb{P}{A_{n,v_1}})^2(1+o(1)), \mathrm{e}e where $v_2$ is another vertex selected uniformly at random, unequal to $v_1$. Note that the condition that $k_n<c\log n$ is required for this to hold. Together with the above lower bound, these two observations yield \be \mathbb{P}{\cup_{i=1}^n A_{n,i}}\geq \frac{(n\mathbb{P}{A_{n,v_1}})^2}{(n\mathbb{P}{A_{n,v_1}})^2(1+o(1))+n\mathbb{P}{A_{n,v_1}}}=\frac{n\mathbb{P}{A_{n,v_1}}}{n\mathbb{P}{A_{n,v_1}}(1+o(1))+1}. \mathrm{e}e Hence, when $n\mathbb{P}{A_{n,v_1}}=n\mathbb{P}{\mathbb{Z}m_n(v_1)\geq k_n}$ diverges with $n$, we obtain the desired result. \mathrm{e}nd{proof} \subsection{Non-rigorous proof of Proposition~{\rm Re}f{lemma:degprobasymp} when $k=1$}\label{sec:nonrigour} What remains for this section is to prove Proposition~{\rm Re}f{lemma:degprobasymp}. As the proof is rather long and involved, we first provide a non-rigorous step-by-step proof of the case $k=1$. That is, we provide an asymptotic estimate for $\mathbb{P}{\mathbb{Z}m_n(v)=m}$, where $v$ is a vertex selected uniformly at random. Though this part is not essential for the proof of Proposition~{\rm Re}f{lemma:degprobasymp}, it helps the reader in understanding the complete proof of the proposition later in this section. In the next sub-section, we discuss the strategy of the full proof and take care of certain parts of the proof in separate lemmas. Whilst proving these lemmas, we refer back to the simpler non-rigorous proof provided here for guidance. The main part of the proof is dedicated to proving~\mathrm{e}qref{eq:degdist}. Once this is established,~\mathrm{e}qref{eq:degtail} follows without much effort. We thus focus on discussing the proof of~\mathrm{e}qref{eq:degdist} here first. To provide a non-rigorous proof of the asymptotic estimate of $\mathbb{P}{\mathbb{Z}m_n(v)=m}$, we first make the following simplification: we assume that $S_\mathrm{e}ll=(\mathrm{e}ll+1)\E W$ for all $\mathrm{e}ll\in\mathbb{N}$. Naturally, the law of large numbers implies that $S_\mathrm{e}ll/\mathrm{e}ll\toas \E W$, so that, up to lower-order error terms, this equality holds. Using this simplification, however, allows us to omit many details and focus on the most important steps of the proof. Let us fix an ${\rm var}\ epsilon\in(0,1)$, which we shall choose sufficiently small later. We then first condition on the value of the uniform vertex $v$, and distinguish between its values being smaller or larger than~$n^{\rm var}\ epsilon$. 
That is, \be \ba\label{eq:vprobsplit} \mathbb{P}{\mathbb{Z}m_n(v)=m}&=\frac1n \sum_{j=1}^n \E{\mathbb{P}f{\mathbb{Z}m_n(j)=m}}\\ &=\frac1n \sum_{1\leq j< n^{\rm var}\ epsilon}\E{\mathbb{P}f{\mathbb{Z}m_n(j)=m}}+\frac1n \sum_{n^{\rm var}\ epsilon \leq j\leq n}\E{\mathbb{P}f{\mathbb{Z}m_n(j)=m}}. \mathrm{e}a \mathrm{e}e We first take care of the second sum, and start by determining $\mathbb{P}f{\mathbb{Z}m_n(j)=m}$ when $j\geq n^{\rm var}\ epsilon$, under the simplification that $S_\mathrm{e}ll=(\mathrm{e}ll+1)\E W$. Since $\mathbb{Z}m_n(j)$ is a sum of independent indicator random variables, we will see that $\mathbb{Z}m_n(j)$ is well-approximated by a Poisson random variable, whose rate equals the sum of the success probabilities of the indicator random variables. By the simplification $\cS_\mathrm{e}ll=(\mathrm{e}ll+1)\E W$, we can thus determine that \be \mathbb{Z}m_n(j)=\sum_{i=j+1}^n \ind_{\{i\to j\}}=\sum_{i=j+1}^n \text{Ber}\Big(\frac{W_j}{S_{i-1}}\Big)\approx \text{Poi}\Big(\sum_{j=i+1}^n \frac{W_j}{i\E W}\Big)\approx \text{Poi}\Big(\frac{W_j}{\E W}\log(n/j)\Big)=:P_j. \mathrm{e}e It thus follows that $\mathbb{P}f{\mathbb{Z}m_n(j)=m}\approx \mathbb{P}f{P_j=m}$. We now provide the necessary steps to obtain this intuitive result. The degree of vertex $j$ in $T_n$ equals exactly $m$ when there exist vertices $i_1,\ldots ,i_m$ that connect to $j$, and all other vertices do not. Since these connections are independent, \be \ba\label{eq:probexplicit} \mathbb{P}f{\mathbb{Z}m_n(j)=m}&=\sum_{j<i_1<\ldots<i_m\leq n}\prod_{s=1}^m \frac{W_j}{S_{i_s-1}}\prod_{\substack{s=j+1\\ s\neq i_\mathrm{e}ll, \mathrm{e}ll \in[m]}}^n \Big(1-\frac{W_j}{S_{s-1}}\Big)\\ &=\sum_{j<i_1<\ldots<i_m\leq n}\prod_{s=1}^m \frac{W_j}{i_s\E W}\prod_{\substack{s=j+1\\ s\neq i_\mathrm{e}ll, \mathrm{e}ll \in[m]}}^n \Big(1-\frac{W_j}{s\E W}\Big) \mathrm{e}a \mathrm{e}e Here, we sum over all possible choices of vertices $i_1<\ldots <i_m$ that connect to $j$, and the final step follows from the simplifying assumption. We now include the terms $s \in \{i_1, \ldots, i_m\}$ in the second product, by changing the denominator in the fraction in the first product to $i_s\E W- W_j$. This yields \be \label{eq:connectsplit} \mathbb{P}f{\mathbb{Z}m_n(j)=m}=\sum_{j<i_1<\ldots<i_m\leq n}\prod_{s=1}^m \frac{W_j}{i_s\E W-W_j}\prod_{s=j+1}^n \Big(1-\frac{W_j}{s\E W}\Big). \mathrm{e}e Since $W_j\in(0,1]$ almost surely, $i_s>j\geq n^{\rm var}\ epsilon$, and $m<c\log n$, it follows that for any $\xi\in(0,1)$, \be\label{eq:prodasymp} \prod_{s=1}^m \frac{1}{i_s\E W-W_j}=\prod_{s=1}^m \frac{1}{i_s\E W(1+\cO(n^{-{\rm var}\ epsilon}))}=(1+\cO(n^{-(1-\xi){\rm var}\ epsilon}))\prod_{s=1}^m \frac{1}{i_s\E W}. \mathrm{e}e Similarly, \be \prod_{s=j+1}^n \Big(1-\frac{W_j}{s\E W}\Big)=\mathrm{e}xp\Big(\sum_{s=j+1}^n \log\Big(1-\frac{W_j}{s\E W}\Big)\Big)=\mathrm{e}xp\Big(-(1+\cO(n^{-{\rm var}\ epsilon}))\sum_{s=j+1}^n \frac{W_j}{s\E W}\Big). \mathrm{e}e As \be \sum_{s=j+1}^n \frac1s\approx \log(n/j), \mathrm{e}e we arrive at \be \label{eq:nonconnectasymp} \prod_{s=j+1}^n \Big(1-\frac{W_j}{s\E W}\Big)\approx \Big(\frac jn\Big)^{(1+\zeta_n)W_j/\E W}, \mathrm{e}e where $\zeta_n=\cO(n^{-{\rm var}\ epsilon})$. Combined with~\mathrm{e}qref{eq:prodasymp} in~\mathrm{e}qref{eq:connectsplit}, this yields \be \mathbb{P}f{\mathbb{Z}m_n(j)=m}=(1+\cO(n^{-(1-\xi){\rm var}\ epsilon}))\Big(\frac jn\Big)^{(1+\zeta_n)W_j/\E W}\Big( \frac{W_j}{\E W}\Big)^m\sum_{j<i_1<\ldots<i_m\leq n}\prod_{s=1}^m \frac{1}{i_s}. 
\mathrm{e}e We can approximate the sum by the multiple integrals \be \int_j^n \int_{x_1}^n \cdots \int_{x_{m-1}}^n \prod_{s=1}^m x_s^{-1}\,\mathrm{d} x_m\ldots \mathrm{d} x_1. \mathrm{e}e By Lemma~{\rm Re}f{lemma:logints} we obtain that this equals $(\log(n/j))^m/m!$. So, we finally have \be \ba \mathbb{P}f{\mathbb{Z}m_n(j)=m}&\approx \Big(\frac jn\Big)^{W_j/\E W}\Big(\frac{W_j}{\E W}\log(n/j)\Big)^m\frac{1}{m!}\\ &=\mathrm{e}xp\Big(-\frac{W_j}{\E W}\log(n/j)\Big)\Big(\frac{W_j}{\E W}\log(n/j)\Big)^m\frac{1}{m!}\\ &=\mathbb{P}f{P_j=m}, \mathrm{e}a \mathrm{e}e as desired. Since all vertex-weights are i.i.d., when we take an expectation over the weights, we can in fact omit the index~$j$ from $W_j$ and use a general random variable $W$ instead, so that $P_j$ has mean $\log(n/j)W/\E W$. Using this expression in the second sum in~\mathrm{e}qref{eq:vprobsplit} approximately yields \be \label{eq:poiprob} \frac{1}{n} \sum_{n^{\rm var}\ epsilon \leq j\leq n}\E{ \mathbb{P}f{P_j=m}}. \mathrm{e}e To avoid confusion between different parametrisations, we say that a random variable $X$ has a Gamma$(\alpha,\beta)$ distribution for some $\alpha,\beta>0$ when it has a probability density function $f:\mathbb{R}_+\to \mathbb{R}_+$ with \be \label{eq:gammapdf} f(x)=\frac{\beta^\alpha}{\mathcal{G}amma(\alpha)}x^{\alpha-1}\mathrm{e}^{-\beta x},\qquad x>0. \mathrm{e}e We now use the following duality between Poisson and gamma random variables. Let \\$G\sim \text{Gamma}(m,1)$ be a gamma random variable for some integer $m$. Note that we can also interpret $G$ as a sum of $m$ independent rate one exponential random variables. Then, conditionally on $W$, the event $\{P_j=m\}$ can be thought of as the event that in a rate one Poisson process exactly $m$ particles have arrived before time $\log(n/j)W/\E W$. This is equivalent to the sum of the first $m$ inter-arrival times (which are rate one exponentially distributed) being at most $\log(n/j)W/\E W$, and the sum of the first $m+1$ inter-arrival times exceeding this quantity. As we mentioned, this sum of $m$ rate one exponential random variables is, in law, identical to $G$. So, if we let $V$ be the $m+1^{\text{st}}$ inter-arrival time, independent of $G$, then \be\ba \mathbb{P}f{P_j=m}&=\mathbb{P}f{G\leq \log(n/j)W/\E W, G+V>\log(n/j)W/\E W}\\ &=\mathbb{P}f{Y\leq \log(n/j), Y+\wt V>\log(n/j)}. \mathrm{e}a\mathrm{e}e where, conditionally on $W$, we have $Y\sim \text{Gamma}(m,W/\E W)$ and $ \wt V\sim \text{Exp}(W/\E W)$. In~\mathrm{e}qref{eq:poiprob}, this yields \be \frac1n \E{\sum_{n^{\rm var}\ epsilon\leq j\leq n}\mathbb{P}f{Y\leq \log(n/j),Y+\wt V>\log(n/j)}}\approx \E{\mathbb{P}f{Y\leq \log(n/v),Y+\wt V>\log (n/v)}}, \mathrm{e}e where we recall that $v$ is a uniform element of $[n]$. Observe that $\log(n/v)\approx T$, where $T\sim \text{Exp}(1)$. As a result, the conditional probability can be expressed, using that $Y$ can be viewed as a sum of $m$ i.i.d.\ copies of $\wt V$ and the memoryless property of exponential random variables, as \be \E{\mathbb{P}f{Y\leq T, Y+\wt V\geq T} }=\E{\frac{\E W}{W+\E W}\Big(\frac{W}{W+\E W}\Big)^m}, \mathrm{e}e as desired. We provide some more details for this derivation. First, we write the sum as \be \label{eq:gammadif} \frac{1}{n}\E{ \sum_{n^{\rm var}\ epsilon \leq j\leq n}\mathbb{P}f{Y\leq \log(n/j)}-\mathbb{P}f{Y+\wt V\leq \log(n/j)} }. \mathrm{e}e Since $Y+\wt V\sim \text{Gamma}(m+1,W/\E W)$, it suffices to deal with the first conditional probability only. 
We can approximate the sum by \be \ba \sum_{n^{\varepsilon} \leq j\leq n}\mathbb{P}_W(Y\leq \log(n/j))&\approx \int_{n^{\varepsilon}}^n \mathbb{P}_W(Y\leq \log(n/x))\,\mathrm{d} x=n\int_0^{(1-\varepsilon)\log n}\mathrm{e}^{-y}\mathbb{P}_W(Y\leq y)\,\mathrm{d} y. \ea\ee The second step follows from the variable transformation $y=\log(n/x)$. We write $\mathbb{P}_W(Y\leq y)=\int_0^y f_{Y|W}(x)\,\mathrm{d} x$, where $f_{Y|W}(x)$ is as in~\eqref{eq:gammapdf} with $\alpha=m, \beta=W/\E W$, the conditional probability density function of $Y$, conditionally on $W$. Changing the order of integration yields \be\ba n{}&\int_0^{(1-\varepsilon)\log n}\mathrm{e}^{-x} f_{Y|W}(x)\,\mathrm{d} x-n^{\varepsilon} \mathbb{P}_W(Y\leq (1-\varepsilon)\log n)\\ &=n\Big(\frac{W}{\E W+W}\Big)^m \mathbb{P}_W(Y'\leq (1-\varepsilon)\log n)-n^{\varepsilon} \mathbb{P}_W(Y\leq (1-\varepsilon)\log n), \ea \ee where $Y'\sim \text{Gamma}(m,1+W/\E W)$, conditionally on $W$. A similar result with $m+1$ and $Y''\sim\text{Gamma}(m+1,1+W/\E W)$ instead of $m$ and $Y'$ follows for the probability of the event $\{Y+\wt V\leq \log(n/j)\}$. We remark that, via the construction of the random variables $Y'$ and $Y''$ using $Y$ and $Y+\wt V$, respectively, they are defined on the same probability space and that in fact $Y''$ stochastically dominates $Y'$. As such, combining both in~\eqref{eq:gammadif} approximately provides \be \ba \mathbb E\bigg[\Big(\frac{W}{\E W+W}{}&\Big)^m \Big(\mathbb{P}_W(Y'\leq (1-\varepsilon)\log n)-\frac{W}{\E W+W}\mathbb{P}_W(Y''\leq (1-\varepsilon)\log n)\Big)\bigg]\\ ={}&(1+\cO(n^{-(1-\xi)\varepsilon}))\E{\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^m\mathbb{P}_W(Y'\leq (1-\varepsilon)\log n)}\\ &+\E{\Big(\frac{W}{\E W+W}\Big)^m\frac{W}{\E W+W}\mathbb{P}_W(Y'\leq (1-\varepsilon)\log n, Y''>(1-\varepsilon)\log n)}. \ea \ee Here, the second step follows from the fact that $Y'\preceq Y''$. We finally show that this quantity equals \be \E{\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^m}(1+o(n^{-\beta})), \ee for some small $\beta>0$, when $\varepsilon$ is sufficiently small, by proving that $\mathbb{P}_W(Y'\leq (1-\varepsilon)\log n)$ and $\mathbb{P}_W(Y''>(1-\varepsilon)\log n)$ are sufficiently close to one and zero, respectively. We thus conclude that \be \ba \label{eq:1sttermasymp} \frac 1n \sum_{n^{\varepsilon} \leq j\leq n}\E{\mathbb{P}_W(\mathcal{Z}_n(j)=m)}&=\E{\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^m}(1+o(n^{-\beta}))\\ &=p_m(1+o(n^{-\beta})), \ea \ee where $\beta$ and $\varepsilon$ are sufficiently small and where we recall $p_m$ from~\eqref{eq:pk}. It remains to show that the first term on the right-hand side of~\eqref{eq:vprobsplit} can be included in the $o(n^{-\beta})$ term. The main difficulty here for the actual proof is controlling the partial sums $S_\ell$, which, for `too small' $\ell$, do not concentrate around their expected value with probability sufficiently close to one. In this simplified non-rigorous proof, however, the assumption that $S_\ell=(\ell+1)\E W$ allows us to ignore this technical difficulty for now and focus on the main probabilistic arguments.
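Before turning to the first sum, let us, for completeness, expand the memoryless-property computation used above; this is only a more explicit version of the heuristic step already described, not an additional ingredient of the proof. Conditionally on $W$, write $Y=\wt V_1+\ldots+\wt V_m$ and $\wt V=\wt V_{m+1}$, where $\wt V_1,\wt V_2,\ldots$ are i.i.d.\ $\text{Exp}(W/\E W)$ random variables, independent of $T\sim\text{Exp}(1)$. The event $\{Y\leq T<Y+\wt V\}$ states that the first $m$ arrivals of the renewal process with spacings $(\wt V_i)_{i\geq1}$ occur before $T$ and the $(m+1)$-st occurs after $T$. By the memoryless property of $T$, each successive spacing falls before the residual lifetime of $T$ with probability $\mathbb{P}(\text{Exp}(W/\E W)<\text{Exp}(1))=\frac{W/\E W}{1+W/\E W}=\frac{W}{\E W+W}$, independently of the preceding comparisons. Hence, \be \mathbb{P}_W(Y\leq T<Y+\wt V)=\Big(\frac{W}{\E W+W}\Big)^m\Big(1-\frac{W}{\E W+W}\Big)=\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^m, \ee and taking the expectation over $W$ yields the formula stated above.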
The aim is to show that \be \label{eq:2ndtermbound} \frac 1n \sum_{1\leq j<n^{\varepsilon}}\E{\mathbb{P}_W(\mathcal{Z}_n(j)=m)}=o(p_mn^{-\beta}), \ee for some small $\beta$ when $\varepsilon$ is sufficiently small. We first focus on the case that $m=\wt c\log n+o(\log n)$ with $\wt c<1/\log \theta$. In this case, we can simply bound the conditional probability from above by one to obtain the upper bound $n^{-(1-\varepsilon)}$. Then, by Lemma~\ref{lemma:pkbound}, we have that $p_m\geq (\theta+\xi)^{-m}=n^{-(\wt c+o(1))\log(\theta+\xi)}$ for any $\xi>0$. Since $\wt c<1/\log \theta$, it follows that we can choose $\xi, \beta$, and $\varepsilon$ sufficiently small such that $n^{-(1-\varepsilon)}=o(n^{-\beta-\wt c\log(\theta+\xi)})=o(p_mn^{-\beta})$, from which the claim in~\eqref{eq:2ndtermbound} follows. When $\wt c\in[1/\log \theta,c)$ instead (where we recall that $c<\theta/(\theta-1)$), a more careful approach is required. In this case, we bound $\mathbb{P}_W(\mathcal{Z}_n(j)=m)\leq \mathbb{P}_W(\mathcal{Z}_n(j)\geq m)$ and use a Chernoff bound on the right-hand side of the inequality. This bound follows from the proof of~\cite[Proposition $7.2$]{LodOrt21} and yields \be \label{eq:chernbound} \frac1n \sum_{1\leq j<n^{\varepsilon}}\E{\mathbb{P}_W(\mathcal{Z}_n(j)=m)}\leq \frac1n \sum_{1\leq j<n^{\varepsilon}}\E{\exp(m(1-u_j+\log u_j))}, \ee where \be u_j:=\frac{1}{m}\sum_{\ell=j}^{n-1}\frac{W_j}{S_\ell}=\frac1m\sum_{\ell=j+1}^n\frac{W_j}{\ell\E W}\leq \frac{\log(n/j)}{m\E W}\leq \frac{\log n}{m\E W}=\frac{1+o(1)}{\wt c \E W}. \ee Note that the second step follows from our simplifying assumption on $S_\ell$, and that we bound $W_j$ from above by one in the first inequality. Using that $x\mapsto 1-x+\log x$ is increasing on $(0,1)$, that $1/(\wt c\E W)\leq \log \theta/(\theta-1)<1$, and that we have a bound for $u_j$ uniformly in $1\leq j< n^{\varepsilon}$, \eqref{eq:chernbound} yields \be \ba \label{eq:uiluse} \frac1n \sum_{1\leq j<n^{\varepsilon}}\!\!\!\!\E{\mathbb{P}_W(\mathcal{Z}_n(j)=m)}&\leq \frac{1}{n^{1-\varepsilon}}\exp\Big((1+o(1))\wt c\log n \Big(1-\frac{1}{\wt c(\theta-1)}+\log\Big(\frac{1}{\wt c (\theta-1)}\Big)\Big)\Big)\\ &= n^{-(1-\varepsilon)+(1+o(1))\wt c(1-1/(\wt c(\theta-1))+\log(1/(\wt c (\theta-1))))}. \ea \ee The aim is to show that this last expression is $o(p_mn^{-\beta})$ for some sufficiently small $\beta>0$. Again using that $p_mn^{-\beta}\geq n^{-(\wt c+o(1))\log(\theta+\xi)-\beta}$, it suffices to show that \be -(1-\varepsilon)+\wt c\Big(1-\frac{1}{\wt c(\theta-1)}+\log\Big(\frac{1}{\wt c (\theta-1)}\Big)\Big)<-\wt c\log(\theta+\xi)-\beta \ee holds when we choose $\xi,\varepsilon,$ and $\beta$ sufficiently small. Since the right-hand side is continuous in $\xi$ and $\beta$, and the left-hand side is continuous in $\varepsilon$, it suffices to prove that \be -1+\wt c\Big(1-\frac{1}{\wt c(\theta-1)}+\log\Big(\frac{1}{\wt c (\theta-1)}\Big)\Big)<-\wt c\log\theta \ee holds for any $\theta\in(1,2]$ and any $\wt c<\theta/(\theta-1)$. As the second derivative of the mapping $x\mapsto -1+x-1/(\theta-1)+x\log(1/(x(\theta-1)))$ is negative and the mapping $x\mapsto -x\log \theta$ is tangent to the former mapping at $x=\theta/(\theta-1)$, the inequality follows for all $\wt c<\theta/(\theta-1)$.
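For the reader's convenience, the tangency claim can be checked directly; this is an elementary verification and not an additional assumption. Writing $h(x):=-1+x-1/(\theta-1)+x\log(1/(x(\theta-1)))$ and $g(x):=-x\log\theta$, we have \be h\Big(\frac{\theta}{\theta-1}\Big)=-1+\frac{\theta-1}{\theta-1}+\frac{\theta}{\theta-1}\log\frac1\theta=-\frac{\theta}{\theta-1}\log\theta=g\Big(\frac{\theta}{\theta-1}\Big), \ee and \be h'(x)=\log\Big(\frac{1}{x(\theta-1)}\Big),\qquad h'\Big(\frac{\theta}{\theta-1}\Big)=-\log\theta=g'\Big(\frac{\theta}{\theta-1}\Big),\qquad h''(x)=-\frac1x<0. \ee Hence $g$ is the tangent line of the strictly concave function $h$ at $x=\theta/(\theta-1)$, so that $h(x)<g(x)$ for all $x\neq \theta/(\theta-1)$, and in particular for all $\wt c<\theta/(\theta-1)$.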
This completes the proof of~\eqref{eq:2ndtermbound} for some sufficiently small $\varepsilon$ and $\beta$. Combined with~\eqref{eq:1sttermasymp}, this yields~\eqref{eq:degdist} for $k=1$.

\subsection{Complete proof of Proposition~\ref{lemma:degprobasymp}}

Before we provide the complete proof, we first discuss the proof strategy. Though this is similar to the approach of the non-rigorous proof provided in the previous sub-section, dealing with $k>1$ uniformly selected vertices presents technical challenges worth addressing. Let us start by introducing the following notation: for two sequences $f(n)$ and $g(n)$, we let $f(n)\leq j_1\neq \ldots \neq j_k\leq g(n)$ denote the set \be\label{eq:setnot} \{(j_1,\ldots, j_k)\in\mathbb{N}^k: j_\ell\in[f(n),g(n)]\text{ for all }\ell\in[k], \text{ and }j_{\ell_1}\neq j_{\ell_2}\text{ for any }1\leq \ell_1<\ell_2\leq k\}. \ee In words, summing over all indices $f(n)\leq j_1\neq \ldots \neq j_k\leq g(n)$ denotes summing over all distinct indices $j_1,\ldots, j_k$ which are at least $f(n)$ and at most $g(n)$. The left-hand side of~\eqref{eq:degdist} can be expressed by conditioning on the values of the typical vertices, and splitting between the cases of young and old vertices. That is, \be\ba\label{eq:splitsum2} \mathbb{P}(\mathcal{Z}_n(v_\ell)=m_\ell\text{ for all }\ell\in[k])={}&\frac{1}{(n)_k}\sum_{1\leq j_1\neq \ldots \neq j_k\leq n}\!\!\!\mathbb{P}(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all }\ell\in[k])\\ ={}&\frac{1}{(n)_k}\sum_{n^{\varepsilon}\leq j_1\neq \ldots \neq j_k\leq n}\!\!\!\!\!\!\!\!\!\!\!\!\!\mathbb{P}(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all }\ell\in[k])\\ &+\frac{1}{(n)_k}\sum_{\textbf{j}\in I_n(\varepsilon)}\!\!\mathbb{P}(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all }\ell\in[k]), \ea \ee where \be\label{eq:ineps} I_n(\varepsilon):=\{\textbf{j}=(j_1,\ldots,j_k):1\leq j_1\neq\ldots\neq j_k\leq n,\ \exists\, i\in[k]:\ j_i<n^{\varepsilon}\} \ee for any $\varepsilon\in(0,1)$. Splitting the sum on the first line into the two sums on the second and third lines allows us to deal with them in different ways, and is similar to the distinction made in~\eqref{eq:vprobsplit}. In the sum on the second line, in which all indices are at least $n^{\varepsilon}$, we can apply the law of large numbers to sums of vertex-weights to gain more control over the conditional probability of the event $\{\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all }\ell\in[k]\}$. The aim is to show that this first sum has the desired form, as on the right-hand side of~\eqref{eq:degdist}. This uses the same ideas as in the non-rigorous proof in the previous sub-section, but increases in difficulty due to multiple summations, integrals, and careful `book-keeping' of a large number of distinct indices. The sum on the third line, in which at least one of the indices takes values strictly smaller than $n^{\varepsilon}$, can be shown to be negligible compared to the first sum. Especially when $m_\ell$ is large, this is non-trivial.
To do this, we consider the tail events $\{\mathcal{Z}_n(j_\ell)\geq m_\ell\text{ for all }\ell\in[k]\}$ and use the negative quadrant dependence of the degrees (see Remark~\ref{rem:degtail} and~\cite[Lemma $7.1$]{LodOrt21}), so that we can deal with the more tractable probabilities $\mathbb{P}_W(\mathcal{Z}_n(j_\ell)\geq m_\ell)$ for all $\ell\in[k]$, rather than with the probability of the intersection of all tail degree events. We observe that this step (the use of the negative quadrant dependence) is not required when $k=1$, and that it significantly increases the technical difficulty of this part of the proof compared to the non-rigorous proof provided in the previous sub-section. Depending on whether the indices in $I_n(\varepsilon)$ are at most or at least $n^{\varepsilon}$, we then use bounds similar to those developed in the proof of~\cite[Lemma $7.1$]{LodOrt21} or use an approach similar to the one we use to bound the sum on the second line of~\eqref{eq:splitsum2}, respectively. In the following lemma, we deal with the sum on the second line of~\eqref{eq:splitsum2}.

\begin{lemma}\label{lemma:in0eps} Let $W$ be a positive random variable that satisfies condition~\ref{ass:weightsup} of Assumption~\ref{ass:weights}. Consider the WRT model in Definition~\ref{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ which are i.i.d.\ copies of $W$, and fix $k\in\mathbb{N}$ and $c\in (0,\theta/(\theta-1))$. Then, there exist $\beta>0$ and an $\varepsilon\in(0,1)$ that can be made arbitrarily small, such that uniformly over non-negative integers $m_\ell< c\log n, \ell\in[k]$, \be \frac{1}{(n)_k}\sum_{n^{\varepsilon}\leq j_1\neq \ldots \neq j_k\leq n}\!\!\!\!\!\!\!\!\!\!\!\!\mathbb{P}(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all } \ell\in[k])=\prod_{\ell=1}^k \mathbb E\bigg[\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^{m_\ell}\bigg]\big(1+o\big(n^{-\beta}\big)\big). \ee \end{lemma}

We note that condition~\ref{ass:weightzero} of Assumption~\ref{ass:weights} is not required for this result to hold. To prove this lemma, we sum over all possible $m_\ell$ vertices that connect to $j_\ell$ for each $\ell\in[k]$ and use the fact that the $j_1,\ldots, j_k$ are at least $n^{\varepsilon}$ to precisely control the connection probabilities and to evaluate the sums over all the possible $m_\ell$ vertices for all $\ell\in[k]$, as well as the sum over the indices $j_1,\ldots, j_k$. In the following lemma, we show that the sum on the third line of~\eqref{eq:splitsum2} is negligible compared to the sum on the second line.

\begin{lemma}\label{lemma:inepsterm} Let $W$ be a positive random variable that satisfies conditions~\ref{ass:weightsup} and~\ref{ass:weightzero} of Assumption~\ref{ass:weights}. Consider the WRT model in Definition~\ref{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ which are i.i.d.\ copies of $W$. Fix $k\in\mathbb{N},c\in (0,\theta/(\theta-1))$ and recall $I_n(\varepsilon)$ from~\eqref{eq:ineps}.
There exist an $\varepsilon\in(0,1)$ and a $\beta>0$ such that uniformly over non-negative integers $m_\ell< c\log n, \ell\in[k]$, \be \frac{1}{(n)_k}\sum_{\textbf{j}\in I_n(\varepsilon)}\!\!\mathbb{P}(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all }\ell\in[k])=o\bigg(\prod_{\ell=1}^k \E{\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^{m_\ell}} n^{-\beta}\bigg). \ee \end{lemma}

Note that condition~\ref{ass:weightzero} of Assumption~\ref{ass:weights} is required only in this lemma, whereas it was not necessary in Lemma~\ref{lemma:in0eps}. In fact, it is required for only one inequality in the proof, which suggests that it could possibly be avoided. It is clear that~\eqref{eq:degdist} in Proposition~\ref{lemma:degprobasymp} immediately follows from using the results of Lemmas~\ref{lemma:in0eps} and~\ref{lemma:inepsterm} in~\eqref{eq:splitsum2}. Namely, in Lemma~\ref{lemma:in0eps} we can take $\varepsilon$ arbitrarily small, and the left-hand side of the equality in Lemma~\ref{lemma:inepsterm} is monotone decreasing as $\varepsilon$ decreases, so that we can take $\varepsilon$ as small as required. In what follows we first prove Lemma~\ref{lemma:in0eps} in Section~\ref{sec:proof5.10}, then prove Lemma~\ref{lemma:inepsterm} in Section~\ref{sec:proof5.11}, and finally complete the proof of Proposition~\ref{lemma:degprobasymp} in Section~\ref{sec:proof5.1}. After each of the proofs we discuss the adaptations required to prove the results of Lemmas~\ref{lemma:in0eps} and~\ref{lemma:inepsterm} for the model with \emph{random out-degree}, as introduced in Remark~\ref{remark:def}$(ii)$.

\subsection{Proof of Lemma~\ref{lemma:in0eps}\label{sec:proof5.10}}

\begin{proof}[Proof of Lemma~\ref{lemma:in0eps}] We provide a matching upper bound and lower bound for \be \frac{1}{(n)_k}\sum_{n^{\varepsilon}\leq j_1\neq \ldots \neq j_k\leq n}\!\!\!\!\!\!\!\!\!\!\!\!\mathbb{P}(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all } \ell\in[k]). \ee

\paragraph{\textbf{Upper bound.}} Let $\zeta_n=n^{-\delta\varepsilon}/\E{W}$ for some $\delta\in(0,1/2)$. We define the event \be \label{eq:en} E^{(1)}_n:=\bigg\{ \sum_{\ell=1}^j W_\ell \in ((1-\zeta_n)\E{W}j,(1+\zeta_{n})\E{W}j)\text{ for all } n^{\varepsilon}\leq j\leq n\bigg\}. \ee We use $E^{(1)}_n$ to mimic the simplifying assumption $S_j=(j+1)\E W$ made in the non-rigorous proof in Section~\ref{sec:nonrigour}. We know that $\mathbb P((E^{(1)}_n)^c)=o(n^{-\gamma})$ for any $\gamma>0$ (and thus of smaller order than $\prod_{\ell=1}^k p_{m_\ell}n^{-\beta}$ for any $\beta>0$, uniformly in $m_1,\ldots, m_k<(\theta/(\theta-1))\log n$) from Lemma~\ref{lemma:weightsumbounds} in the~\hyperref[sec:appendix]{Appendix}.
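Although we refer to the Appendix for the precise statement, let us briefly indicate one standard route to such an estimate (the actual proof of Lemma~\ref{lemma:weightsumbounds} may differ); this sketch is included for intuition only. Since the $W_\ell$ are i.i.d.\ and take values in $(0,1]$, Hoeffding's inequality yields, for every $n^{\varepsilon}\leq j\leq n$, \be \mathbb{P}\bigg(\Big|\sum_{\ell=1}^j W_\ell-j\E W\Big|\geq \zeta_n\E{W}j\bigg)\leq 2\exp\big(-2(\zeta_n\E W)^2 j\big)=2\exp\big(-2n^{-2\delta\varepsilon}j\big)\leq 2\exp\big(-2n^{(1-2\delta)\varepsilon}\big), \ee and a union bound over the at most $n$ values of $j$ shows that $\mathbb{P}((E^{(1)}_n)^c)$ decays faster than any polynomial in $n$, since $\delta<1/2$.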
By also conditioning on the vertex-weights and using this bound on $\mathbb P((E^{(1)}_n)^c)$, we obtain for any $\gamma>0$ the upper bound \be \ba \label{eq:enbound} \frac{1}{(n)_k}{}&\sum_{n^{\varepsilon}\leq j_1\neq \ldots \neq j_k\leq n}\E{\mathbb{P}_W(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all }\ell\in[k])}\\ \leq {}&\frac{1}{(n)_k}\sum_{n^{\varepsilon}\leq j_1\neq \ldots \neq j_k\leq n}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\mathbb E[\mathbb{P}_W(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all }\ell\in[k])\ind_{E_n^{(1)}}]+o(n^{-\gamma}). \ea \ee Now, to express the first term on the right-hand side of~\eqref{eq:enbound}, we consider ordered indices rather than unordered ones. We provide details for the case $n^{\varepsilon}\leq j_1<j_2<\ldots<j_k\leq n$ and discuss later on how the other permutations of $j_1,\ldots,j_k$ can be dealt with. Moreover, for every $\ell\in[k]$, we introduce the ordered indices $j_{\ell}<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\ell\in[k]$, which denote the steps at which vertex $j_\ell$ increases its degree by one. Note that for every $\ell\in[k]$ these indices are distinct by definition, but we also require that $i_{s,\ell}\neq i_{t,j}$ for any $\ell,j\in[k],s\in[m_\ell]$ and $t\in[m_j]$ (equality is allowed only when $\ell=j$ and $s=t$). Indeed, a new vertex can only connect to one already present vertex. We denote this constraint by adding a $*$ on the summation symbol. Finally, we define $j_{k+1}:=n$. Combining these additional steps, we arrive at \be\ba \label{eq:ubfirst} \frac{1}{(n)_k}{}&\sum_{n^{\varepsilon}\leq j_1< \ldots< j_k\leq n}\E{\mathbb{P}_W(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all } \ell\in[k])\ind_{E^{(1)}_n}}\\ ={}&\frac{1}{(n)_k}\sum_{n^{\varepsilon}\leq j_1<\ldots<j_k\leq n}\ \, \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\mathbb E\Bigg[\prod_{t=1}^k\prod_{s=1}^{m_t}\frac{W_{j_t}}{\sum_{\ell=1}^{i_{s,t}-1}W_\ell}\\ &\times \prod_{u=1}^k\!\!\prod_{\substack{s=j_u+1\\ s\neq i_{\ell,t},\ell\in[m_t],t\in[k]}}^{j_{u+1}}\!\!\!\!\bigg(1-\frac{\sum_{\ell=1}^u W_{j_\ell}}{\sum_{\ell=1}^{s-1}W_{\ell}}\bigg)\ind_{E^{(1)}_n}\Bigg]. \ea \ee This step is equivalent to~\eqref{eq:probexplicit} in the non-rigorous proof. The terms in the first double product denote the probabilities that the vertices $i_{s,t}$ connect to $j_t$, whereas the terms in the second double product denote the probabilities that the remaining vertices $s$ do \emph{not} connect to the vertices $j_\ell$ such that $j_\ell<s$. Similarly to~\eqref{eq:connectsplit} in the non-rigorous proof, we then include the terms where $s=i_{\ell,t}$ for all $\ell\in[m_t]$ and $t\in[k]$ in the second double product. To do this, we change the first double product to \be \label{eq:fracbound} \prod_{t=1}^k \prod_{s=1}^{m_t}\frac{W_{j_t}}{\sum_{\ell=1}^{i_{s,t}-1}W_\ell-\sum_{\ell=1}^k W_{j_\ell}\ind_{\{i_{s,t}>j_\ell\}}}\leq\prod_{t=1}^k \prod_{s=1}^{m_t} \frac{W_{j_t}}{\sum_{\ell=1}^{i_{s,t}-1}W_\ell-k}, \ee that is, we subtract the vertex-weight $W_{j_\ell}$ in the denominator when the vertex $j_\ell$ has already been introduced by step $i_{s,t}$.
In the upper bound we use that the weights are bounded from above by one. We thus arrive at the upper bound \be\label{eq:tailprobmiddle} \frac{1}{(n)_k}\sum_{n^{\varepsilon}\leq j_1<\ldots<j_k\leq n}\ \, \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\!\!\!\mathbb E\Bigg[\prod_{t=1}^k\prod_{s=1}^{m_t}\frac{W_{j_t}}{\sum_{\ell=1}^{i_{s,t}-1}W_\ell-k}\prod_{u=1}^k\prod_{s=j_u+1}^{j_{u+1}}\bigg(1-\frac{\sum_{\ell=1}^u W_{j_\ell}}{\sum_{\ell=1}^{s-1}W_\ell}\bigg)\ind_{E^{(1)}_n}\Bigg]. \ee For ease of writing, we omit the first sum until we actually intend to sum over the indices $j_1,\ldots,j_k$. We use the bounds from the event $E^{(1)}_n$ to bound \be \sum_{\ell=1}^{i_{s,t}-1}W_\ell\geq (i_{s,t}-1)\E W(1-\zeta_n),\qquad \sum_{\ell=1}^{s-1}W_\ell \leq s\E W (1+\zeta_n). \ee For $n$ sufficiently large, we observe that $(i_{s,t}-1)\E W(1-\zeta_n)-k\geq i_{s,t}\E W(1-2\zeta_n)$, which yields \be \frac{1}{(n)_k}\ \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\!\!\!\!\mathbb E\Bigg[\prod_{t=1}^k\prod_{s=1}^{m_t}\frac{W_{j_t}}{i_{s,t}\E W (1-2\zeta_n)} \prod_{u=1}^k\prod_{s=j_u+1}^{j_{u+1}}\!\!\bigg(1-\frac{\sum_{\ell=1}^u W_{j_\ell}}{s\E W(1+\zeta_n)}\bigg)\ind_{E^{(1)}_n}\Bigg]. \ee Moreover, relabelling the vertex-weights $W_{j_t}$ to $W_t$ for $t\in[k]$ does not change the distribution of the terms within the expected value, so that the expected value remains unchanged. We can also bound the indicator from above by one, to arrive at the upper bound \be \frac{1}{(n)_k}\ \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\mathbb E\Bigg[\prod_{t=1}^k\prod_{s=1}^{m_t}\frac{W_t}{i_{s,t}\E W (1-2\zeta_n)} \prod_{u=1}^k\prod_{s=j_u+1}^{j_{u+1}}\bigg(1-\frac{\sum_{\ell=1}^u W_\ell}{s\E W(1+\zeta_n)}\bigg)\Bigg]. \ee We bound the final product from above by \be \ba \label{eq:expbound} \prod_{s=j_u+1}^{j_{u+1}}\bigg(1-\frac{\sum_{\ell=1}^u W_\ell}{s\E W(1+\zeta_n)}\bigg)&\leq \exp\bigg(-\frac{1}{\E W(1+\zeta_n)} \sum_{s=j_u+1}^{j_{u+1}}\frac{\sum_{\ell=1}^u W_\ell}{s}\bigg)\\ &\leq \exp\bigg(-\frac{1}{\E W(1+\zeta_n)} \sum_{\ell=1}^u W_\ell \log\Big(\frac{j_{u+1}}{j_u+1}\Big)\bigg)\\ &= \Big(\frac{j_{u+1}}{j_u+1}\Big)^{-\sum_{\ell=1}^u W_\ell/(\E W(1+\zeta_n))}. \ea\ee As the weights are almost surely bounded by one, we thus find \be \prod_{s=j_u+1}^{j_{u+1}}\bigg(1-\frac{\sum_{\ell=1}^u W_\ell}{s\E W(1+\zeta_n)}\bigg)\leq \Big(\frac{j_{u+1}}{j_u}\Big)^{-\sum_{\ell=1}^u W_\ell/(\E W(1+\zeta_n))}\Big(1+\mathcal O\big(n^{-\varepsilon}\big)\Big), \ee which is equivalent to~\eqref{eq:nonconnectasymp} in the non-rigorous proof. As a result, we obtain the upper bound \be\ba \frac{1}{(n)_k}\, \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\!\!\!\!\!{}&\mathbb E\Bigg[\prod_{t=1}^k\!
\bigg(\Big(\frac{W_t}{\E{W}}\Big)^{m_t}\prod_{s=1}^{m_t}\frac{1}{i_{s,t}(1-2\zeta_n)}\bigg)\!\prod_{u=1}^k\!\Big(\frac{j_{u+1}}{j_u}\Big)^{-\sum_{\ell=1}^u W_\ell/(\E{W}(1+\zeta_n))}\Bigg]\\ &\times \Big(1+\mathcal O\big(n^{-\varepsilon}\big)\Big). \ea\ee As $j_{k+1}=n$, we can simplify the final product to obtain \be\ba \frac{1}{(n)_k}\, \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\!\!\!\!\!\!\!(1-2\zeta_n)^{-\sum_{h=1}^k m_h}\mathbb E\Bigg[{}&\prod_{t=1}^k \Big(\frac{W_t}{\E{W}}\Big)^{m_t}\prod_{t=1}^k \Big(j_t^{W_t/(\E{W}(1+\zeta_n))}\prod_{s=1}^{m_t}i_{s,t}^{-1}\Big)\\ &\times n^{-\sum_{\ell=1}^k W_\ell/(\E{W}(1+\zeta_n))}\Bigg]\Big(1+\mathcal O\big(n^{-\varepsilon}\big)\Big). \ea \ee We bound this from above even further by no longer constraining the indices $i_{s,t}$ to be distinct. That is, for distinct $t_1$ and $t_2\in[k]$, we allow $i_{s_1,t_1}=i_{s_2,t_2}$ to hold for any $s_1\in[m_{t_1}]$ and $s_2\in[m_{t_2}]$. This yields \be\ba\label{eq:ubexp} \frac{1}{(n)_k}\, \sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\!\!\!\!\!\!\!(1-2\zeta_n)^{-\sum_{h=1}^k m_h}\mathbb E\Bigg[{}&\prod_{t=1}^k \Big(\frac{W_t}{\E{W}}\Big)^{m_t}\prod_{t=1}^k \Big(j_t^{W_t/(\E{W}(1+\zeta_n))}\prod_{s=1}^{m_t}i_{s,t}^{-1}\Big)\\ &\times n^{-\sum_{\ell=1}^k W_\ell/(\E{W}(1+\zeta_n))}\Bigg]\Big(1+\mathcal O\big(n^{-\varepsilon}\big)\Big). \ea\ee We set \be \label{eq:at} a_t:=W_t/(\E{W}(1+\zeta_n)), \ee and look at the terms \be \label{eq:simplify} \frac{n^{-\sum_{t=1}^k a_t}}{(n)_k}\ \sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\prod_{t=1}^k \bigg( (a_t(1+\zeta_n))^{m_t}j_t^{a_t} \prod_{s=1}^{m_t}i_{s,t}^{-1}\bigg) . \ee We bound the sums from above by multiple integrals, almost surely, which yields \be \ba\label{eq:intbound} \frac{n^{-\sum_{t=1}^k a_t}}{(n)_k}\prod_{t=1}^k (a_t(1+\zeta_n))^{m_t}j_t^{a_t} \int_{j_t}^n \int _{x_{1,t}}^n\cdots \int_{x_{m_t-1,t}}^n \prod_{s=1}^{m_t}x_{s,t}^{-1}\,\mathrm{d} x_{m_t,t}\ldots\mathrm{d} x_{1,t}. \ea \ee Using~\eqref{eq:logint} in Lemma~\ref{lemma:logints}, we obtain that this equals \be \frac{n^{-\sum_{t=1}^k a_t}}{(n)_k}\prod_{t=1}^k (a_t(1+\zeta_n))^{m_t}j_t^{a_t} \frac{(\log(n/j_t))^{m_t}}{m_t!}. \ee Substituting this in~\eqref{eq:simplify} and reintroducing the sum over the indices $j_1,\ldots, j_k$, we arrive at \be \label{eq:intstep1} \frac{(1+\zeta_n)^{\sum_{h=1}^k m_h}}{(n)_k}\sum_{n^{\varepsilon}\leq j_1<\ldots<j_k\leq n}\prod_{t=1}^k\Big(\frac{j_t}{n}\Big)^{a_t}\frac{(a_t\log(n/j_t))^{m_t}}{m_t!}. \ee We observe that switching the order of the indices $j_1,\ldots,j_k$ achieves the same result as permuting the $m_1,\ldots,m_k$ and $a_1,\ldots, a_k$. Hence, if we let $\pi:[k]\to[k]$ be a permutation, then considering the indices $n^{\varepsilon}\leq j_{\pi(1)}<j_{\pi(2)}<\ldots<j_{\pi(k)}\leq n$ yields a similar result as in~\eqref{eq:intstep1}, but with a term $j_{\pi(t)}^{a_{\pi(t)}}(\log(n/j_{\pi(t)}))^{m_{\pi(t)}}/m_{\pi(t)}!$ in the final product. Since this product is invariant to such permutations of the $m_t$ and $a_t$, the only thing that would change is the summation order of the indices $j_1,\ldots,j_k$.
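Before continuing, we record, for completeness, why the iterated integral above has the stated closed form; this elementary induction is presumably the content of (part of) Lemma~\ref{lemma:logints}, whose precise statement we do not repeat here. Writing $I_r(y):=\int_y^n x^{-1}I_{r-1}(x)\,\mathrm{d} x$ with $I_0\equiv 1$, and assuming inductively that $I_{r-1}(x)=(\log(n/x))^{r-1}/(r-1)!$, the substitution $u=\log(n/x)$ gives \be I_r(y)=\int_y^n \frac1x\,\frac{(\log(n/x))^{r-1}}{(r-1)!}\,\mathrm{d} x=\int_0^{\log(n/y)}\frac{u^{r-1}}{(r-1)!}\,\mathrm{d} u=\frac{(\log(n/y))^r}{r!}. \ee Applying this with $r=m_t$ and $y=j_t$ yields the factor $(\log(n/j_t))^{m_t}/m_t!$ used above.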
By using~\eqref{eq:intstep1} in~\eqref{eq:ubexp} and incorporating all permutations of the indices, i.e.\ $n^{\varepsilon} \leq j_{\pi(1)}<j_{\pi(2)}<\ldots<j_{\pi(k)}\leq n$ for all $\pi\in P_k$ (where $P_k$ denotes the set of all permutations of $[k]$), we arrive at \be \ba \label{eq:permutations} \frac{1}{(n)_k}{}&\sum_{n^{\varepsilon}\leq j_1\neq \ldots \neq j_k\leq n}\!\!\!\!\!\!\!\!\!\!\!\mathbb E[\mathbb{P}_W(\mathcal{Z}_n(j_\ell)=m_\ell,\ell\in[k])\ind_{E^{(1)}_n}]\\ \leq{}& \frac{1}{(n)_k}\Big(\frac{1+\zeta_n}{1-2\zeta_n}\Big)^{\sum_{h=1}^k m_h}\E{\sum_{\pi\in P_k}\sum_{n^{\varepsilon}\leq j_{\pi(1)}<\ldots<j_{\pi(k)}\leq n}\prod_{t=1}^k\Big(\frac{j_{\pi(t)}}{n}\Big)^{a_t}\frac{(a_t\log(n/j_{\pi(t)}))^{m_t}}{m_t!}}\\ ={}&\frac{1}{(n)_k}\Big(\frac{1+\zeta_n}{1-2\zeta_n}\Big)^{\sum_{h=1}^k m_h}\E{\sum_{n^{\varepsilon}\leq j_1\neq\ldots\neq j_k\leq n}\prod_{t=1}^k\Big(\frac{j_t}{n}\Big)^{a_t}\frac{(a_t\log(n/j_t))^{m_t}}{m_t!}}. \ea \ee We bound the last expression from above even further by allowing the indices $j_1,\ldots, j_k$ to take \emph{any} integer value in $[n^{\varepsilon},n]$. That is, $j_{\ell_1}=j_{\ell_2}$ is allowed for $\ell_1\neq \ell_2$. As this introduces more non-negative terms, it yields the upper bound \be \ba \frac{1}{(n)_k}\Big(\frac{1+\zeta_n}{1-2\zeta_n}\Big)^{\sum_{h=1}^k m_h}\mathbb E\Bigg[\sum_{n^{\varepsilon}\leq j_1\leq n}\cdots{}& \sum_{n^{\varepsilon} \leq j_k\leq n}\prod_{t=1}^k\Big(\frac{j_t}{n}\Big)^{a_t}\frac{(a_t\log(n/j_t))^{m_t}}{m_t!}\Bigg]\\ =\frac{1}{(n)_k}\Big(\frac{1+\zeta_n}{1-2\zeta_n}\Big)^{\sum_{h=1}^k m_h}\prod_{t=1}^k {}&\E{\sum_{n^{\varepsilon}\leq j\leq n}\Big(\frac{j}{n}\Big)^{a_t}\frac{(a_t\log(n/j))^{m_t}}{m_t!}}, \ea \ee where we note that the product can be taken out of the expectation in the second line, since the weights are independent. We now observe two things. First, we can redefine $a_t=a:=W/(\E W(1+\zeta_n))$; after all, the index of the vertex-weights $W_1,\ldots, W_k$ is irrelevant, since they are independent and identically distributed. Second, the summand is equal to $\mathbb{P}_W(P(a)=m_t)$, where $P(a)\sim \text{Poi}(a\log(n/j))$, conditionally on $W$. So we obtain, similarly to~\eqref{eq:poiprob} in the non-rigorous proof, \be \frac{1}{(n)_k}\Big(\frac{1+\zeta_n}{1-2\zeta_n}\Big)^{\sum_{h=1}^k m_h}\prod_{t=1}^k\E{\sum_{n^{\varepsilon}\leq j\leq n}\mathbb{P}_W(P(a)=m_t)}. \ee We now use the duality between Poisson and gamma random variables, as also explained in the intuitive proof after~\eqref{eq:poiprob}. That is, let $G_t\sim \text{Gamma}(m_t,1)$ and $V\sim \text{Exp}(1)$ be independent random variables. Then, $\{P(a)=m_t\}=\{G_t\leq a\log(n/j), G_t+V>a\log(n/j)\}$, so that \be\ba\label{eq:poitogamma} \mathbb{P}_W(P(a)=m_t)&=\mathbb{P}_W(G_t\leq a\log(n/j), G_t+V>a\log(n/j))\\ &=\mathbb{P}_W(Y_t\leq \log(n/j), Y_t+\wt V>\log(n/j))\\ &=\mathbb{P}_W(Y_t\leq \log(n/j))-\mathbb P_W(Y_t+\wt V\leq \log(n/j)), \ea \ee where $Y_t\sim \text{Gamma}(m_t,a)$ and $\wt V\sim \text{Exp}(a)$, conditionally on $W$ (and observe that $Y_t+\wt V\sim \text{Gamma}(m_t+1,a)$).
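For later use, we also record the elementary Laplace-transform identities behind the explicit expressions that appear below; they follow directly from the density in~\eqref{eq:gammapdf}. Conditionally on $W$, \be \E{\mathrm{e}^{-Y_t}}=\int_0^\infty \mathrm{e}^{-x}\,\frac{a^{m_t}}{\Gamma(m_t)}x^{m_t-1}\mathrm{e}^{-ax}\,\mathrm{d} x=\Big(\frac{a}{1+a}\Big)^{m_t},\qquad \E{\mathrm{e}^{-(Y_t+\wt V)}}=\Big(\frac{a}{1+a}\Big)^{m_t+1}, \ee where the second identity uses that $Y_t+\wt V\sim\text{Gamma}(m_t+1,a)$, conditionally on $W$.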
Using~\eqref{eq:poitogamma}, we thus obtain \be \label{eq:gammadiffprob} \frac{1}{(n)_k}\Big(\frac{1+\zeta_n}{1-2\zeta_n}\Big)^{\sum_{h=1}^k m_h}\prod_{t=1}^k\E{\sum_{n^{\varepsilon}\leq j\leq n}\big(\mathbb{P}_W(Y_t\leq \log(n/j))-\mathbb P_W(Y_t+\wt V\leq \log(n/j))\big)}, \ee similarly to~\eqref{eq:gammadif} in the non-rigorous proof. We now compare the sum of the gamma probabilities with an integral. That is, since both probabilities are decreasing in $j$, we can bound the sum from above by \be \ba \label{eq:sumtoint} \int_{\lfloor n^{\varepsilon} \rfloor}^n{}& \mathbb{P}_W(Y_t\leq \log(n/x))\,\mathrm{d} x-\int_{\lceil n^{\varepsilon}\rceil}^n \mathbb P_W(Y_t+\wt V\leq \log(n/x))\,\mathrm{d} x\\ &=n\int_0^{\log(n/\lfloor n^{\varepsilon} \rfloor)} \mathrm{e}^{-y}\mathbb{P}_W(Y_t\leq y)\,\mathrm{d} y-n\int_0^{\log(n/\lceil n^{\varepsilon}\rceil)}\mathrm{e}^{-y}\mathbb P_W(Y_t+\wt V\leq y)\,\mathrm{d} y, \ea \ee where we use the variable substitution $y=\log(n/x)$ to obtain the second line. Writing the cumulative distribution function as an integral from zero to $y$ and switching the order of integration, we find \be\ba n{}&\Ef{}{\mathrm{e}^{-Y_t}\ind_{\{Y_t\leq \log(n/\lfloor n^{\varepsilon}\rfloor)\}}}-n\Ef{}{\mathrm{e}^{-(Y_t+\wt V)}\ind_{\{Y_t+\wt V\leq \log( n/\lceil n^{\varepsilon}\rceil)\}}}\\ &+\mathbb P_W(Y_t+\wt V\leq \log(n/\lceil n^{\varepsilon}\rceil))+\lfloor n^{\varepsilon}\rfloor \big(\mathbb P_W(Y_t+\wt V\leq \log(n/\lceil n^{\varepsilon}\rceil))-\mathbb{P}_W(Y_t\leq \log(n/\lfloor n^{\varepsilon}\rfloor))\big). \ea\ee As the last term is negative, we can omit it to obtain an upper bound. The first term on the second line can be bounded from above by \be \mathbb P_W(Y_t+\wt V\leq \log(n/\lceil n^{\varepsilon}\rceil))\leq \mathbb P_W(\mathrm{e}^{-(Y_t+\wt V)}\geq n^{-(1-\varepsilon)})\leq n^{1-\varepsilon}\E{\mathrm{e}^{-(Y_t+\wt V)}}=n^{1-\varepsilon}\Big(\frac{a}{1+a}\Big)^{m_t+1}. \ee Combined, this yields the upper bound \be \label{eq:gammaexp} n\Ef{}{\mathrm{e}^{-Y_t}\ind_{\{Y_t\leq \log(n/\lfloor n^{\varepsilon}\rfloor)\}}}-n\Ef{}{\mathrm{e}^{-(Y_t+\wt V)}\ind_{\{Y_t+\wt V\leq \log( n/\lceil n^{\varepsilon}\rceil)\}}}+n^{1-\varepsilon}\Big(\frac{a}{1+a}\Big)^{m_t+1}. \ee We now distinguish two cases: \begin{enumerate} \item[\namedlabel{item:1}{$(1)$}] $m_t=c_t\log n+o(\log n)$ with $c_t\in[0,1/(\theta-1)]$, for all $t\in[k]$. \item[\namedlabel{item:2}{$(2)$}] $m_t=c_t\log n+o(\log n)$ with $c_t\in(1/(\theta-1),c)$, for some $t\in[k]$. \end{enumerate} In case~\ref{item:1}, we bound the difference of the expected values from above by \be\ba \label{eq:expdiff} n{}&\Ef{}{\mathrm{e}^{-Y_t}}-n\Ef{}{\mathrm{e}^{-(Y_t+\wt V)}}+n\Ef{}{\mathrm{e}^{-(Y_t+\wt V)}\ind_{\{Y_t+\wt V>\log(n/\lceil n^{\varepsilon}\rceil)\}}}+n^{1-\varepsilon}\Big(\frac{a}{1+a}\Big)^{m_t+1}\\ &\leq n\frac{1}{1+a}\Big(\frac{a}{1+a}\Big)^{m_t}(1+\cO(n^{-\varepsilon}))+\cO(n^{\varepsilon}). \ea\ee Substituting this for the argument of the expected value for each $t\in[k]$ in~\eqref{eq:gammadiffprob} yields \be \label{eq:expprod2} \Big(\frac{1+\zeta_n}{1-2\zeta_n}\Big)^{\sum_{h=1}^k m_h}(1+\cO(n^{-\varepsilon}))\prod_{t=1}^k\bigg(\E{\frac{1}{1+a}\Big(\frac{a}{1+a}\Big)^{m_t}}+\cO(n^{-(1-\varepsilon)})\bigg).
\ee We recall that $a:=W/(\E W(1+\zeta_n))$ and that $\zeta_n:=n^{-\delta\varepsilon}/\E W$. Since $W\in[0,1]$ almost surely and $m_t=\cO(\log n)$, \be \label{eq:aapprox} \E{\frac{1}{1+a}\Big(\frac{a}{1+a}\Big)^{m_t}}=\E{\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^{m_t}}(1+o(n^{-\beta}))=p_{m_t}(1+o(n^{-\beta})), \ee for some small $\beta>0$. We have by Lemma~\ref{lemma:pkbound} that $p_{m_t}\geq (\theta+\xi)^{-m_t}$ for any $\xi>0$. Now we use that in case~\ref{item:1} $m_t=c_t\log n+o(\log n)$ with $c_t\leq 1/(\theta-1)$. As a result, for $\xi$, $\varepsilon,$ and some $\beta>0$ sufficiently small, and since $\log\theta<\theta-1$ for $\theta>1$, we then have $n^{-(1-\varepsilon)}=o( n^{-(c_t+o(1))\log(\theta+\xi)-\beta})=o(p_{m_t}n^{-\beta})$. Together with~\eqref{eq:aapprox}, this implies that we can write the argument of the product in~\eqref{eq:expprod2} as $p_{m_t}(1+o(n^{-\beta}))$ whenever $m_t$ satisfies case~\ref{item:1}. In case~\ref{item:2}, these bounds do not suffice. First of all, we note that because of the product structure in~\eqref{eq:gammadiffprob}, we can deal with any terms that satisfy $c_t \in [0,1/(\theta -1)]$ in the same way as in~\ref{item:1}. Without loss of generality we thus assume that $c_t \in (1/(\theta-1), c)$ for all $t \in[k]$. In this case, we use that for any $N>0$, \be \ba\label{eq:expre} \Ef{}{\mathrm{e}^{-Y_t}\ind_{\{Y_t\leq N\}}}&=\int_0^N \frac{a^{m_t}}{\Gamma(m_t)}x^{m_t-1}\mathrm{e}^{-(1+a)x}\,\mathrm{d} x\\ &=\Big(\frac{a}{1+a}\Big)^{m_t}\int_0^N\frac{(1+a)^{m_t}}{\Gamma(m_t)}x^{m_t-1}\mathrm{e}^{-(1+a)x}\,\mathrm{d} x\\ &=\Big(\frac{a}{1+a}\Big)^{m_t}\mathbb{P}_W(Y_t'\leq N), \ea \ee where $Y_t'\sim \text{Gamma}(m_t,1+a)$, conditionally on $W$. As $Y_t+\wt V\sim \text{Gamma}(m_t+1,a)$, we also obtain a similar result for the second expected value in~\eqref{eq:gammaexp}, with a random variable $Y''_t\sim \text{Gamma}(m_t+1,1+a)$. As in the non-rigorous proof, we can assume that $Y_t'$ and $Y''_t$ are defined on the same probability space and we have that $Y_t' \preceq Y_t''$. Using this in~\eqref{eq:gammaexp}, we thus obtain \be \ba\label{eq:mainterm} n{}&\Big(\frac{a}{1+a}\Big)^{m_t}\bigg[\mathbb{P}_W(Y_t'\leq \log(n/\lfloor n^{\varepsilon}\rfloor))-\frac{a}{1+a}\mathbb{P}_W(Y_t''\leq \log(n/\lceil n^{\varepsilon}\rceil))\bigg]+n^{1-\varepsilon}\Big(\frac{a}{1+a}\Big)^{m_t+1}\\ &\leq n\Big(\frac{a}{1+a}\Big)^{m_t}\bigg[\frac{1}{1+a}\mathbb{P}_W(Y'_t\leq \log(n/\lfloor n^{\varepsilon}\rfloor))+\mathbb{P}_W(Y'_t\leq \log(n/\lceil n^{\varepsilon}\rceil), Y''_t>\log(n/\lceil n^{\varepsilon}\rceil))\\ &\hphantom{\leq n\Big(\frac{a}{1+a}\Big)^{m_t}\bigg[}\ +\mathbb{P}_W(\log(n/\lceil n^{\varepsilon}\rceil)\leq Y'_t\leq \log(n/\lfloor n^{\varepsilon}\rfloor))\bigg]+n^{1-\varepsilon}\Big(\frac{a}{1+a}\Big)^{m_t+1}. \ea \ee By omitting the probability from the first term and bounding the other two probabilities from above by $\mathbb{P}_W(Y''_t>\log(n/\lceil n^{\varepsilon}\rceil))$, we obtain the upper bound \be n\Big(\frac{a}{1+a}\Big)^{m_t}\Big(\frac{1}{1+a}(1+\cO(n^{-\varepsilon}))+2\mathbb{P}_W(Y''_t>\log(n/\lceil n^{\varepsilon}\rceil))\Big).
\ee Again, we substitute this for the argument of the expected value in~\eqref{eq:gammadiffprob} to obtain, for some large constant $C>0$, \be \ba\label{eq:expsum} \Big(\frac{1+\zeta_n}{1-2\zeta_n}\Big)^{\sum_{h=1}^k m_h}(1+\cO(n^{-\varepsilon}))\prod_{t=1}^k\bigg({}&\E{\frac{1}{1+a}\Big(\frac{a}{1+a}\Big)^{m_t}}\\ +{}&C\E{\Big(\frac{a}{1+a}\Big)^{m_t}\big(\mathbb{P}_W(Y''_t>\log(n/\lceil n^{\varepsilon}\rceil))+n^{-\varepsilon}\big)}\bigg). \ea\ee It thus remains to show that the second expected value in~\eqref{eq:expsum} is of sufficiently smaller order compared to the first. We obtain this by using the indicators $\ind_{\{1+a<c_t/(1-\mu)\}}$ and $\ind_{\{1+a\geq c_t/(1-\mu)\}}$ to split the expected value into two parts, for some $\mu>0$ small enough such that $c_t/(1-\mu)<\theta/(\theta-1)$ (which is possible since $c_t<c<\theta/(\theta-1)$). For the first indicator, we can bound \be\ba \E{\Big(\frac{a}{1+a}\Big)^{m_t}\big(\mathbb{P}_W(Y''_t>\log(n/\lceil n^{\varepsilon}\rceil))+n^{-\varepsilon}\big)\ind_{\{1+a<c_t/(1-\mu)\}}}&\leq \Big(\frac{c_t/(1-\mu)-1}{c_t/(1-\mu)}\Big)^{m_t}\\ &=\Big(1-\frac{1-\mu}{c_t}\Big)^{m_t}<(\theta+\xi)^{-m_t}, \ea\ee for some small $\xi>0$ and $n$ sufficiently large. Since the first expected value in~\eqref{eq:expsum} equals $p_{m_t}\geq (\theta+\xi/2)^{-m_t}$, where the inequality follows from Lemma~\ref{lemma:pkbound}, we thus obtain that \be \label{eq:ind} \mathbb E\bigg[\Big(\frac{a}{1+a}\Big)^{m_t}\mathbb{P}_W(Y''_t>\log(n/\lceil n^{\varepsilon}\rceil))\ind_{\{1+a<c_t/(1-\mu)\}}\bigg]=o\bigg(\E{\frac{1}{1+a}\Big(\frac{a}{1+a}\Big)^{m_t}}n^{-\beta}\bigg), \ee for some small $\beta>0$, since $m_t=c_t\log n+o(\log n)$ with $c_t\in(1/(\theta-1),c)$. For the second indicator, we observe that $\{Y''_t>\log(n/\lceil n^{\varepsilon}\rceil)\}=\{Z_t>(1+a)(1-\varepsilon)\log n(1+o(1))\}$, where $Z_t\sim \text{Gamma}(m_t+1, 1)$. We then have \be \ba \mathbb E{}&\bigg[\Big(\frac{a}{1+a}\Big)^{m_t}\mathbb{P}_W(Z_t>(1+a)(1-\varepsilon)\log n(1+o(1)))\ind_{\{1+a\geq c_t/(1-\mu)\}}\bigg]\\ &\leq \mathbb E\bigg[\Big(\frac{a}{1+a}\Big)^{m_t}\bigg]\mathbb{P}\Big(Z_t>\frac{c_t(1-\varepsilon)}{1-\mu}\log n(1+o(1))\Big). \ea \ee It thus suffices to show that the probability on the right-hand side is $o(n^{-\beta})$ for some small $\beta>0$. By choosing $\varepsilon\in(0,\mu)$ we can apply a standard large deviation bound. Let $(V_i)_{i\in\mathbb{N}}$ be i.i.d.\ rate one exponential random variables and let $I(a):=a-1-\log(a)$ be their rate function. Then, as we can think of $Z_t$ as the sum of $V_1,\ldots,V_{m_t+1}$, \be \ba \mathbb P\Big(Z_t\geq \frac{c_t (1-\varepsilon)}{1-\mu}\log n(1+o(1))\Big)&=\mathbb{P}\bigg(\sum_{i=1}^{m_t+1}V_i\geq (m_t+1) \frac{c_t (1-\varepsilon)\log n(1+o(1))}{(1-\mu)(m_t+1)}\bigg)\\ &\leq\exp\Big(-(m_t+1)I\Big(\frac{c_t (1-\varepsilon)\log n(1+o(1))}{(1-\mu)(m_t+1)}\Big)\Big). \ea \ee In the first step, we rewrite the threshold inside the probability in terms of the mean of the sum of random variables, which equals $m_t+1$. We then use the large deviations bound in the second step, which we can do as the argument of $I$ is strictly greater than $1$ when $n$ is sufficiently large (as $m_t+1\sim c_t\log n$) and $\varepsilon\in(0,\mu)$ is sufficiently small.
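For completeness, we recall how this exponential bound with the rate function $I$ is obtained; it is the standard Chernoff (Cram\'er) argument for i.i.d.\ rate one exponentials. For $x>1$ and any $\lambda\in(0,1)$, Markov's inequality applied to $\exp(\lambda\sum_{i=1}^{m_t+1}V_i)$ gives \be \mathbb{P}\bigg(\sum_{i=1}^{m_t+1}V_i\geq (m_t+1)x\bigg)\leq \mathrm{e}^{-\lambda(m_t+1)x}\big(\E{\mathrm{e}^{\lambda V_1}}\big)^{m_t+1}=\exp\big(-(m_t+1)(\lambda x+\log(1-\lambda))\big), \ee since $\E{\mathrm{e}^{\lambda V_1}}=1/(1-\lambda)$ for $\lambda<1$. Optimising over $\lambda$ yields $\lambda=1-1/x\in(0,1)$ and the exponent $\lambda x+\log(1-\lambda)=x-1-\log x=I(x)$, which is the bound used above.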
Since $I((c_t+o(1)) (1-\varepsilon) \log n/((1-\mu)(m_t+1)))= (1-\varepsilon)/(1-\mu)-1-\log( (1-\varepsilon)/(1-\mu))+o(1)$, we thus arrive at \be \ba \label{eq:ztbound} \mathbb P\Big(Z_t\geq \frac{c_t (1-\varepsilon)}{1-\mu}\log n(1+o(1))\Big)&\leq \mathrm{e}^{-c_t\log n( (1-\varepsilon)/(1-\mu)-1-\log( (1-\varepsilon)/(1-\mu))+o(1))}\\ &=n^{-c_{t,\mu,\varepsilon}+o(1)}, \ea\ee where $c_{t,\mu,\varepsilon}:=c_t( (1-\varepsilon)/(1-\mu)-1-\log( (1-\varepsilon)/(1-\mu)))>0$, as $\varepsilon<\mu$ and $x-1-\log x>0$ holds for $x>1$. This yields~\eqref{eq:ind} with $\ind_{\{1+a\geq c_t/(1-\mu)\}}$ instead of $\ind_{\{1+a< c_t/(1-\mu)\}}$ as well, for $\mu$ and $\varepsilon$ sufficiently small. Together with~\eqref{eq:aapprox}, this yields the desired result. Combined with the analysis in case~\ref{item:1}, and the fact that \be \Big(\frac{1+\zeta_n}{1-2\zeta_n}\Big)^{\sum_{h=1}^k m_h}(1+\cO(n^{-\varepsilon}))=1+o(n^{-\beta}), \ee for some small $\beta>0$, by the choice of $\zeta_n$ and the fact that $m_t=\cO(\log n)$ for all $t\in[k]$, it follows that~\eqref{eq:expprod2} is at most $\prod_{t=1}^k p_{m_t}(1+o(n^{-\beta}))$ for some small $\beta>0$ in both cases~\ref{item:1} and~\ref{item:2}. Finally, using this in~\eqref{eq:enbound} yields \be \frac{1}{(n)_k}{}\sum_{n^{\varepsilon}\leq j_1\neq \ldots \neq j_k\leq n}\mathbb{P}(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all } \ell\in[k])\leq \prod_{t=1}^k \big(p_{m_t}(1+o(n^{-\beta}))\big)+o(n^{-\gamma}). \ee As we can choose $\gamma$ arbitrarily large (as discussed prior to~\eqref{eq:enbound}), this term can be incorporated in the $o(n^{-\beta})$ term regardless of the choice of the $m_t$, which completes the proof of the upper bound.

\paragraph{\textbf{Lower bound.}} We now focus on proving a similar lower bound. We define the event \be \label{eq:wten} E^{(2)}_n:=\Big\{\sum_{\ell=k+1 }^j W_\ell\in(\E{W}(1-\zeta_n)j,\E{W}(1+\zeta_n)j)\text{ for all } n^{\varepsilon}\leq j\leq n\Big\}. \ee We again have from Lemma~\ref{lemma:weightsumbounds} in the~\hyperref[sec:appendix]{Appendix} that $\mathbb P((E^{(2)}_n)^c)=o(n^{-\gamma})$ for any $\gamma>0$. We obtain a lower bound for the probability of the event $\{\mathcal{Z}_n(v_\ell)=m_\ell,\ell\in[k]\}$ by omitting the second term in~\eqref{eq:enbound}. This yields, together with the rearrangements in~\eqref{eq:ubfirst}, \be \ba \frac{1}{(n)_k}{}&\sum_{n^{\varepsilon}\leq j_1\neq \ldots \neq j_k\leq n}\E{\mathbb{P}_W(\mathcal{Z}_n(j_\ell)=m_\ell\text{ for all } \ell\in[k])}\\ \geq{}&\frac{1}{(n)_k}\sum_{n^{\varepsilon}<j_1\neq \ldots\neq j_k\leq n}\ \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\!\!\!\!\!\mathbb E\Bigg[\!\prod_{t=1}^k\prod_{s=1}^{m_t}\frac{W_{j_t}}{\sum_{\ell=1}^{i_{s,t}-1}W_\ell}\prod_{u=1}^k\!\!\prod_{\substack{s=j_u+1\\s\neq i_{\ell,t},\ell\in[m_t],t\in[k]}}^{j_{u+1}}\!\!\!\!\bigg(1-\frac{\sum_{\ell=1}^u W_{j_\ell}}{\sum_{\ell=1}^{s-1}W_\ell}\bigg)\Bigg]. \ea \ee We again start by only considering the ordered indices $n^{\varepsilon}<j_1<\ldots <j_k$, and also omit this sum for now for ease of writing.
We also omit the constraint $s\neq i_{\ell,t},\ell\in[m_t],t\in[k]$ in the final product. As this introduces more terms smaller than one, we obtain a lower bound. Then, in the two denominators, we bound the vertex-weights $W_{j_1},\ldots, W_{j_k}$ from above and below by one and zero, respectively, to obtain the lower bound \be \frac{1}{(n)_k}\ \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\!\!\!\!\!\mathbb E\Bigg[\prod_{t=1}^k\prod_{s=1}^{m_t}\frac{W_{j_t}}{\sum_{\ell=1}^{i_{s,t}-1}W_\ell\ind_{\{\ell\neq j_t,t\in[k]\}}+k} \prod_{u=1}^k\prod_{s=j_u+1}^{j_{u+1}}\!\!\!\bigg(1-\frac{\sum_{\ell=1}^u W_{j_\ell}}{\sum_{\ell=1}^{s-1}W_\ell\ind_{\{\ell\neq j_t,t\in[k]\}}}\bigg)\Bigg]. \ee As a result, we can now swap the labels of $W_{j_t}$ and $W_t$ for each $t\in[k]$, which again does not change the expected value, but it changes the value of the two denominators to $\sum_{\ell=k+1}^{i_{s,t}}W_\ell+k$ and $\sum_{\ell=k+1}^{i_{s,t}}W_\ell$, respectively. After this, we introduce the indicator $\ind_{E^{(2)}_n}$ and use the bounds in $E^{(2)}_n$ on these sums in the expected value to obtain a lower bound. Finally, we note that the (relabelled) weights $W_t,t\in[k],$ are independent of $E^{(2)}_n$, so that we can take the indicator out of the expected value. Combining all of the above steps, we arrive at the lower bound \be \ba \label{eq:lbas} \frac{1}{(n)_k}{}&\, \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\mathbb E\Bigg[\prod_{t=1}^k \Big(\frac{W_t}{\E{W}}\Big)^{m_t} \prod_{s=1}^{m_t}\frac{1}{i_{s,t}(1+2\zeta_n)}\\ &\times\prod_{u=1}^k\prod_{s=j_u+1}^{j_{u+1}}\bigg(1-\frac{\sum_{\ell=1}^u W_\ell}{(s-1)\E W(1-\zeta_n)}\bigg)\Bigg]\mathbb P(E^{(2)}_n). \ea \ee The $1+2\zeta_n$ in the fraction on the first line arises from the fact that, for $n$ sufficiently large, $(i_{s,t}-1)(1+\zeta_n)+k\leq i_{s,t}(1+2\zeta_n)$. As stated above, $\mathbb P(E^{(2)}_n)=1-o(n^{-\gamma})$ for any $\gamma>0$. Similarly to the calculations in~\eqref{eq:expbound}, and using $\log(1-x)\geq -x-x^2$ for $x$ small, we obtain an almost sure lower bound for the final product, for $n$ sufficiently large, of the form \be\ba \prod_{s=j_u+1}^{j_{u+1}}\bigg(1-\frac{\sum_{\ell=1}^u W_\ell}{(s-1)\E W(1-\zeta_n)}\bigg)&\geq \exp\bigg(-\frac{1}{\E W(1-\zeta_n)}\sum_{\ell=1}^u W_\ell\sum_{s=j_u+1}^{j_{u+1}} \frac{1}{s-1}\\ &\qquad\quad\ \ \, -\Big(\frac{1}{\E W(1-\zeta_n)}\sum_{\ell=1}^u W_\ell\Big)^2\!\sum_{s=j_u+1}^{j_{u+1}}\!\frac{1}{(s-1)^2}\bigg)\\ &\geq \Big(\frac{j_{u+1}}{j_u}\Big)^{-\sum_{\ell=1}^u W_\ell/(\E W(1-\zeta_n))}\Big(1-\mathcal O\big(n^{-\varepsilon}\big)\Big). \ea \ee Using this in~\eqref{eq:lbas} yields the lower bound \be\label{eq:lbstep} \frac{1}{(n)_k}\ \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\!\!\!\!\!\!\!\!\!\!(1+2\zeta_n)^{-\sum_{h=1}^k m_h} \mathbb E\Bigg[\prod_{t=1}^k \Big(\frac{W_t}{\E W}\Big)^{m_t} \Big(\frac{j_t}{n}\Big)^{\wt a_t}\prod_{s=1}^{m_t}i_{s,t}^{-1} \Bigg]\Big(1-\mathcal O\big(n^{-\varepsilon}\big)\Big), \ee where $\wt a_t:=W_t/(\E W(1-\zeta_n))$.
As in the upper bound, we can take the product over $t\in[k]$ out of the expected value by the independence of the vertex-weights. This yields \be \frac{1}{(n)_k}\ \sideset{}{^*}\sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\Big(\frac{1-\zeta_n}{1+2\zeta_n}\Big)^{\sum_{h=1}^k m_h}\prod_{t=1}^k \mathbb E\Bigg[ \wt a_t^{m_t} \Big(\frac{j_t}{n}\Big)^{\wt a_t}\prod_{s=1}^{m_t}i_{s,t}^{-1}\Bigg]\Big(1-\mathcal O\big(n^{-\varepsilon}\big)\Big). \ee As $\zeta_n=n^{-\delta\varepsilon}/\E W$ and $m_h=\cO(\log n)$ for all $h\in[k]$, the fraction and the $1-\cO(n^{-\varepsilon})$ can be combined to yield a factor $1-o(n^{-\beta})$, for some small $\beta>0$. We now reintroduce the sum over the indices $n^{\varepsilon}\leq j_1<\ldots<j_k\leq n$ and bound the sum over the indices $i_{s,\ell}$ from below. We note that the expression in the expected value is decreasing in the $i_{s,\ell}$, and we restrict the range of the indices to $j_\ell+\sum_{h=1}^k m_h<i_{1,\ell}<\ldots< i_{m_\ell,\ell}\leq n,\ell \in[k]$, but no longer constrain the indices to be distinct (so that we can drop the $*$ in the sum). In the distinct sums and the suggested lower bound, the numbers of values that the $i_{s,\ell}$ can take equal \be \prod_{\ell=1}^k \binom{n-(j_\ell-1)-\sum_{h=1}^{\ell-1}m_h}{m_\ell} \quad \text{and} \quad \prod_{\ell=1}^k \binom{n-(j_\ell-1)-\sum_{h=1}^k m_h}{m_\ell}, \ee respectively. It is straightforward to see that the former allows for more possibilities than the latter, as $\binom{b}{c}> \binom{a}{c}$ when $b> a\geq c$. As we omit the largest values of the expected value (since it decreases in the $i_{s,\ell}$ and we omit the smallest values of the $i_{s,\ell}$), we thus arrive at the lower bound \be\ba \label{eq:lbstep2} ((n)_k)^{-1}\!\!\!\!\!\!\!\!\!\!\!\!\sum_{n^{\varepsilon}<j_1<\ldots<j_k\leq n-\sum_{h=1}^k m_h} \sum_{\substack{j_\ell+\sum_{h=1}^k m_h<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\! \prod_{t=1}^k\mathbb E\bigg[\wt a_t^{m_t} \Big(\frac{j_t}{n}\Big)^{\wt a_t}\!\prod_{s=1}^{m_t}\frac{1}{i_{s,t}}\bigg]\big(1-o(n^{-\beta})\big), \ea \ee where we also restrict the upper limit of the outer sum, as the omitted values of $j_1,\ldots,j_k$ would otherwise contribute zero. We now use techniques similar to those used for the upper bound to switch from summation to integration. First, we observe that we can interchange the inner summation and the product over $t$. Second, we restrict the upper limit of the outer sum to $n-1-2\sum_{h=1}^k m_h$. This will prove useful later. Combined, this yields \be \ba\label{eq:lbsum} ((n)_k)^{-1}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\sum_{n^{\varepsilon}<j_1<\ldots<j_k\leq n-1-2\sum_{h=1}^k m_h}\prod_{t=1}^k \sum_{j_t+\sum_{h=1}^k m_h<i_{1,t}<\ldots<i_{m_t,t}\leq n}\!\!\!\!\!\!\mathbb E\bigg[\wt a_t^{m_t} \Big(\frac{j_t}{n}\Big)^{\wt a_t}\!\prod_{s=1}^{m_t}\frac{1}{i_{s,t}}\bigg](1-o(n^{-\beta})).
\ea\ee Then, we take the inner summation into the expected value and, for now, focus on \be \sum_{j_t+\sum_{h=1}^k m_h<i_{1,t}<\ldots <i_{m_t,t}\leq n} \prod_{s=1}^{m_t}i_{s,t}^{-1}\geq \int_{j_t+\sum_{h=1}^k m_h+1}^{n+1}\int_{x_{1,t}+1}^{n+1}\cdots \int_{x_{m_t-1,t}+1}^{n+1}\prod_{s=1}^{m_t}x_{s,t}^{-1}\,\mathrm{d} x_{m_t,t}\ldots \mathrm{d} x_{1,t}. \ee Since $j_\ell\leq n-1-2\sum_{h=1}^k m_h$ for each $\ell\in[k]$, it follows in particular that $j_\ell+1+\sum_{h=1}^k m_h\leq n$. It thus follows that we can use~\eqref{eq:loglb} in Lemma~\ref{lemma:logints} with $a=j_\ell+1+\sum_{h=1}^k m_h, b=n+1$, and $k=m_{\ell}$ to obtain the lower bound \be \label{eq:lbist} \prod_{t=1}^k \frac{1}{m_t!}\Big(\log\Big(\frac{n+1}{j_t +\sum_{h=1}^k m_h+m_t}\Big)\Big)^{m_t}\geq \prod_{t=1}^k \frac{1}{m_t!}\Big(\log\Big(\frac{n}{j_t +2\sum_{h=1}^k m_h}\Big)\Big)^{m_t}. \ee Substituting this in~\eqref{eq:lbsum} yields the lower bound \be \frac{1}{(n)_k}\sum_{n^{\varepsilon}<j_1<\ldots<j_k\leq n-1-2\sum_{h=1}^k m_h} \prod_{t=1}^k \mathbb E\Bigg[ \Big(\frac{j_t}{n}\Big)^{\wt a_t}\frac{\wt a_t^{m_t}}{m_t!}\Big(\log\Big(\frac{n}{j_t +2\sum_{h=1}^k m_h}\Big)\Big)^{m_t} \Bigg]\big(1-o(n^{-\beta})\big). \ee Note that, due to the independence of the vertex-weights $W_1,\ldots, W_k$, and hence of $\wt a_1, \ldots, \wt a_k$, we can interchange the product and the sum with the expected value. To simplify the summation over $j_1,\ldots,j_k$, we write the summand as \be \prod_{t=1}^k\Big(j_t+2\sum_{h=1}^k m_h\Big)^{\wt a_t}\frac{n^{-\wt a_t}\wt a_t^{m_t}}{m_t!}\Big(\log\Big(\frac{n}{j_t +2\sum_{h=1}^k m_h}\Big)\Big)^{m_t}\bigg(1-\frac{2\sum_{h=1}^k m_h}{j_t +2\sum_{h=1}^k m_h}\bigg)^{\wt a_t}. \ee Using that $m_t<c \log n$, $j_t\geq n^{\varepsilon}$, and that, almost surely, $x^{\wt a_t}\geq x^{1/(\E W(1-\zeta_n))}$ for $x\in[0,1]$, we obtain the lower bound \be \prod_{t=1}^k \Big(j_t+2\sum_{h=1}^k m_h\Big)^{\wt a_t}\frac{n^{-\wt a_t}\wt a_t^{m_t}}{m_t!}\Big(\log\Big(\frac{n}{j_t +2\sum_{h=1}^k m_h}\Big)\Big)^{m_t}\big(1-o(n^{-\beta})\big). \ee We can then shift the bounds on the range of the sum to $n^{\varepsilon}+2\sum_{h=1}^k m_h$ and $n-1$. As $j_k=n$ yields a contribution of zero to the sum, we can increase the upper limit of the summation from $n-1$ to $n$. We thus obtain the lower bound \be \frac{1}{(n)_k}\E{\sum_{n^{\varepsilon}+2\sum_{h=1}^k m_h<j_1<\ldots<j_k\leq n}\prod_{t=1}^k \Big(\frac{j_t}{n}\Big)^{\wt a_t}\frac{1}{m_t!}(\wt a_t\log(n/j_t))^{m_t} }\big(1-o(n^{-\beta})\big). \ee We can now use an approach similar to that in~\eqref{eq:permutations} through~\eqref{eq:gammaexp}. By considering all permutations of the indices $j_1, \ldots, j_k$, we obtain \be\ba \frac{1}{(n)_k}{}&\sum_{n^{\varepsilon}\leq j_1\neq \ldots \neq j_k\leq n}\mathbb E[\mathbb{P}_W(\mathcal{Z}_n(j_t)=m_t\text{ for all } t\in[k])]\\ &\geq \frac{1-o(n^{-\beta})}{(n)_k}\mathbb E\Bigg[\sum_{n^{\varepsilon}+2\sum_{h=1}^k m_h<j_1\neq \ldots\neq j_k\leq n}\prod_{t=1}^k (j_t/n)^{\wt a_t}\frac{1}{m_t!}(\wt a_t\log(n/j_t))^{m_t}\Bigg]. \ea\ee We now aim to again interchange the order of summation and the product in the expected value, at the expense of some additional terms. To this end, we allow the indices $j_1, \ldots, j_k$ to take on the same values, but also subtract the largest terms from the sum to still obtain a lower bound.
We are thus required to determine which terms are the largest. The following holds for any $t\in[k]$: $j_t\mapsto (j_t/n)^{\wt a_t}(\wt a_t\log(n/j_t))^{m_t}/m_t!$ is increasing on $[1,n\exp(-m_t/\wt a_t))$ and decreasing on $(n\exp(-m_t/\wt a_t), n]$, with maximum value $\mathrm{e}^{-m_t}m_t^{m_t}/m_t!$. As such, we again distinguish the two cases~\ref{item:1} and~\ref{item:2} as in the upper bound (i.e.\ $m_t=c_t\log n+o(\log n)$ with either $c_t\leq 1/(\theta-1)$ for all $t \in [k]$ or $c_t>1/(\theta-1)$ for some $t \in [k]$). In case~\ref{item:2}, we know that $n\exp(-m_t/\wt a_t)\leq n^{1-c_t(\theta-1)(1+o(1))}=o(1)$, so that $(j_t/n)^{\wt a_t}(\wt a_t\log(n/j_t))^{m_t}/m_t!$ is maximised at the smallest value of $j_t$. Omitting these terms thus yields a lower bound. In case~\ref{item:1}, this is not the case, and hence we omit the largest terms, at $n\exp(-m_t/\wt a_t)$. Let us deal with case~\ref{item:1} first. Since the sums contain \be \Big(n-\Big\lceil n^{\varepsilon} +\sum_{h=1}^k m_h\Big \rceil +1\Big)_k \ee many terms, we bound these sums from below by \be \ba \sum_{n^{\varepsilon}+\sum_{h=1}^k m_h\leq j_1\leq n}{}&\!\!\cdots \!\!\sum_{n^{\varepsilon}+\sum_{h=1}^k m_h\leq j_k\leq n}\prod_{t=1}^k \frac{(j_t/n)^{\wt a_t}}{m_t!}(\wt a_t\log(n/j_t))^{m_t}\\ & - \cO\Big(\Big(n-\Big\lceil n^{\varepsilon} +\sum_{h=1}^k m_h\Big \rceil +1\Big)^{k-1}\Big) \prod_{t=1}^k\mathrm{e}^{-m_t}\frac{m_t^{m_t}}{m_t!}\\ ={}&\prod_{t=1}^k \bigg(\sum_{n^{\varepsilon}+\sum_{h=1}^k m_h\leq j\leq n}\frac{(j/n)^{\wt a_t}}{m_t!}(\wt a_t \log(n/j))^{m_t}\bigg)-\cO(n^{k-1}), \ea\ee where the last line follows from the fact that $\mathrm{e}^{-x} x^x/\Gamma(x+1)\leq 1$ for all $x>0$. We now identify the argument of the sum as $\mathbb{P}_W(P(\wt a_t,j)=m_t)$, where $P(\wt a_t,j)\sim \text{Poi}(\wt a_t \log(n/j))$. Again using the Poisson--gamma duality, we can rewrite this sum as \be \ba \sum_{n^{\varepsilon}+\sum_{h=1}^k m_h\leq j\leq n}{}&\mathbb{P}_W(Y_t\leq \log (n/j), Y_t+V_t>\log(n/j))\\ &=\sum_{n^{\varepsilon}+\sum_{h=1}^k m_h\leq j\leq n}\big(\mathbb{P}_W(Y_t\leq \log (n/j))-\mathbb{P}_W(Y_t+V_t\leq \log(n/j))\big), \ea \ee where $Y_t\sim \text{Gamma}(m_t, \wt a_t)$ and $V_t\sim \text{Exp}(\wt a_t)$, and both random variables are independent, conditionally on $W_t$. With an approach similar to~\eqref{eq:sumtoint} and~\eqref{eq:gammaexp}, we can bound this sum from below by \be n\Ef{}{\mathrm{e}^{-Y_t}\ind_{\{Y_t\leq \log(n/f(n))\}}}-n\Ef{}{\mathrm{e}^{-(Y_t+V_t)}\ind_{\{Y_t+V_t\leq \log(n/\wt f(n))\}}}+\cO(n^{\varepsilon}), \ee with $f(n):=\lceil n^{\varepsilon} +\sum_{h=1}^k m_h\rceil$ and $\wt f(n):=f(n)-1$. As $f(n)\sim \wt f(n)\sim n^{\varepsilon}$, the desired lower bound follows in a manner analogous to the approach used in the upper bound, in~\eqref{eq:expdiff} through the paragraph following~\eqref{eq:aapprox}. In case~\ref{item:2}, we can, as in the upper bound, assume without loss of generality that the condition on $c_t$ holds for all $t \in [k]$. Then, the largest values of $\mathbb{P}_W(P(\wt a_t,j)=m_t)$ are attained for $j$ as small as possible.
So, by omitting these small $j_t$, we obtain the lower bound \be\ba \sum_{n^{\rm var}\ epsilon+\sum_{h=1}^k m_h\leq j_1\leq n}{}&\sum_{n^{\rm var}\ epsilon+1+\sum_{h=1}^k m_h\leq j_1\leq n}\!\!\cdots \!\!\sum_{n^{\rm var}\ epsilon+(k-1)+\sum_{h=1}^k m_h\leq j_k\leq n}\prod_{t=1}^k \mathbb{P}f{P(\wt a_t,j_t)=m_t}\\ & =\prod_{t=1}^k \bigg(\sum_{n^{\rm var}\ epsilon+(t-1)+\sum_{\mathrm{e}ll=1}^k m_\mathrm{e}ll\leq j\leq n}\mathbb{P}f{P(\wt a_t,j)=m_t}\bigg). \mathrm{e}a \mathrm{e}e As in case~{\rm Re}f{item:1}, we can write each sum as a difference of expected values. Then, following the same steps as in~\mathrm{e}qref{eq:expre} through~\mathrm{e}qref{eq:ztbound}, we obtain the desired matching lower bound in case~{\rm Re}f{item:2} for ${\rm var}\ epsilon,\beta>0$ sufficiently small. The only difference is that we cannot bound the conditional probability in the second line~\mathrm{e}qref{eq:mainterm} from above by one. Instead, the lower bound we obtain is \be\ba n\Big(\frac{a}{1+a}\Big)^{m_t}\bigg[{}&\frac{1}{1+a}\mathbb{P}f{Y'_t\leq \log\Big(n\Big/\Big\lfloor n^{\rm var}\ epsilon+(t-1)+\sum_{\mathrm{e}ll=1}^k m_\mathrm{e}ll\Big\rfloor\Big)}\\ &-\mathbb{P}f{Y''_t\geq \log\Big(n\Big/\Big\lceil n^{\rm var}\ epsilon+(t-1)+\sum_{\mathrm{e}ll=1}^k m_\mathrm{e}ll\Big\rceil\Big)}-n^{-{\rm var}\ epsilon}\frac{a}{1+a}\bigg]\\ \geq n\Bigg[{}&\frac{1}{1+a}\Big(\frac{a}{1+a}\Big)^{m_t}\Bigg(\mathbb{P}f{Y'_t\leq \log\Big(n\Big/\Big\lfloor n^{\rm var}\ epsilon+(t-1)+\sum_{\mathrm{e}ll=1}^k m_\mathrm{e}ll\Big\rfloor\Big)}-\cO(n^{-{\rm var}\ epsilon})\Bigg)\\ &-\cO\bigg(\Big(\frac{a}{1+a}\Big)^{m_t}\bigg)\Bigg]. \mathrm{e}a \mathrm{e}e From here on out, the same approach used to bound~\mathrm{e}qref{eq:expsum} from above can be used to yield the lower bound \be \frac{1}{(n)_k}\sum_{n^{{\rm var}\ epsilon}\leq j_1\neq \ldots \neq j_k\leq n}\E{\mathbb{P}f{\mathbb{Z}m_n(j_\mathrm{e}ll)=m_\mathrm{e}ll\text{ for all } \mathrm{e}ll\in[k]}}\geq \prod_{t=1}^k p_{m_t}(1-o(n^{-\beta})). \mathrm{e}e Combined with the upper bound, this yields the desired result and concludes the proof. \mathrm{e}nd{proof} We now discuss the necessary changes to the proof for it to hold for the model with \mathrm{e}mph{random out-degree}, as introduced in Remark~{\rm Re}f{remark:def}$(ii)$, as well. The main difference between these two models is the fact that the events $\{n\to i\}$ and $\{n\to j\}$ are no longer disjoint, where $u\to v$ denotes that vertex $u$ connects to vertex $v$ with an edge directed towards $v$. As a result, the probability, conditional on the vertex-weights, that vertex $n$ does \mathrm{e}mph{not} connect to vertices $j_1,\ldots, j_k\in[n-1]$ now equals \be \label{eq:notconnectprob} \prod_{\mathrm{e}ll=1}^k \Big(1-\frac{W_{j_\mathrm{e}ll}}{S_{n-1}}\Big)\qquad \text{instead of}\qquad \Big(1-\frac{\sum_{\mathrm{e}ll=1}^k W_{j_\mathrm{e}ll}}{S_{n-1}}\Big). \mathrm{e}e Moreover, as a new vertex can connect to multiple vertices at once, it is no longer necessary to restrict ourselves to indices $j_\mathrm{e}ll<i_{1,\mathrm{e}ll}<\ldots< i_{m_\mathrm{e}ll,\mathrm{e}ll}\leq n,\mathrm{e}ll\in[k]$ with $i_{s,\mathrm{e}ll}\neq i_{t,j}$ for any $\mathrm{e}ll,j\in[k],s\in[m_\mathrm{e}ll],t\in[m_j]$. Instead, when $\mathrm{e}ll\neq j$, $i_{s,\mathrm{e}ll}= i_{t,j}$ is allowed for any $s\in[m_\mathrm{e}ll],t\in[m_j]$. That is, the steps at which two vertices increase their degree by one no longer need to be disjoint. 
In the proof of the upper bound, this changes the right-hand side of~\eqref{eq:ubfirst} into \be \ba \frac{1}{(n)_k}\sum_{n^{\varepsilon}\leq j_1<\ldots<j_k\leq n}\ \sum_{\substack{j_\ell<i_{1,\ell}<\ldots<i_{m_\ell,\ell}\leq n,\\\ell\in[k]}}\mathbb E\Bigg[&\prod_{t=1}^k\prod_{s=1}^{m_t}\frac{W_{j_t}}{\sum_{\ell=1}^{i_{s,t}-1}W_\ell}\\ &\times \prod_{u=1}^k\!\!\prod_{\substack{s=j_u+1\\s\neq i_{\ell,t},\ell\in[m_t],t\in[k]}}^{j_{u+1}}\prod_{\ell=1}^u\bigg(1-\frac{W_{j_\ell}}{\sum_{r=1}^{s-1}W_r}\bigg)\ind_{E_n}\Bigg], \ea \ee where we change the last term in the expected value due to~\eqref{eq:notconnectprob} and omit the $*$ in the inner sum due to the degree increments no longer being disjoint. We can then again use the event $E_n^{(1)}$ to bound the denominators with deterministic quantities. Then, the exact same steps to obtain the upper bound as in~\eqref{eq:expbound} can be performed. Most importantly, this yields that the difference in the upper bound due to~\eqref{eq:notconnectprob} is no longer present. Furthermore, as we omit the constraint of disjoint indices $i_{s,t}$ in~\eqref{eq:ubexp}, we obtain the exact same upper bound as we would here. As a result, the remaining steps of the proof of the upper bound carry through as well. The same is true for the lower bound. In fact, in the proof of the lower bound of Lemma~\ref{lemma:in0eps} we argue why we can omit the constraint of disjoint indices $i_{s,t}$ in a certain way and still obtain a lower bound in~\eqref{eq:lbstep2}, which is no longer necessary here. Hence, we obtain~\eqref{eq:lbist} without the factor two in front of $\sum_{h=1}^k m_h$. This change carries through all the steps, so that we arrive at the same desired lower bound. Together with the upper bound, this establishes the desired result. \subsection{Proof of Lemma~\ref{lemma:inepsterm}}\label{sec:proof5.11} \begin{proof}[Proof of Lemma~\ref{lemma:inepsterm}] We aim to bound \be \label{eq:errorterm} \frac{1}{(n)_k}\sum_{\textbf{j}\in I_n(\varepsilon)}\P{\Zm_n(j_\ell)=m_\ell\text{ for all } \ell\in[k]}, \ee where we recall $I_n(\varepsilon)$ from~\eqref{eq:ineps}. This is the equivalent of the steps following~\eqref{eq:2ndtermbound} in the non-rigorous proof in Section~\ref{sec:nonrigour}. We first assume that $m_\ell=c_\ell \log n+o(\log n)$ with $c_\ell\in[0,1/\log \theta)$ for all $\ell\in[k]$. We define \be I_n(\varepsilon,i):=\{\textbf{j}\in I_n(\varepsilon): |\{\ell\in[k]: j_\ell<n^{\varepsilon}\}|=i\},\qquad i\in[k], \ee that is, $I_n(\varepsilon,i)$ denotes the set of indices $\textbf{j}=(j_1,\ldots, j_k)$ such that exactly $i$ of the indices are smaller than $n^{\varepsilon}$, and note that $I_n(\varepsilon)=I_n(\varepsilon,1)\cup\cdots\cup I_n(\varepsilon,k)$.
We then write \be \frac{1}{(n)_k}\!\sum_{\textbf{j}\in I_n(\varepsilon)}\!\!\!\P{\Zm_n(j_\ell)=m_\ell\text{ for all }\ell\in[k]}=\sum_{i=1}^k \frac{1}{(n)_k}\!\sum_{\textbf{j}\in I_n(\varepsilon,i)}\!\!\!\!\P{\Zm_n(j_\ell)=m_\ell\text{ for all }\ell\in[k]}, \ee and bound the probability on the right-hand side from above by omitting all events $\{\Zm_n(j_\ell)=m_\ell\}$ whenever $j_\ell<n^{\varepsilon}$. This leaves us with \be \label{eq:Ssum} \sum_{i=1}^{k-1} \frac{1}{(n)_k}n^{i\varepsilon}\sum_{\substack{S\subseteq [k]\\ |S|=k-i}}\ \ \sideset{}{^*}\sum_{\substack{n^{\varepsilon}\leq j_\ell \leq n\\ \ell\in S}}\P{\Zm_n(j_\ell)=m_\ell\text{ for all }\ell \in S}+\frac{n^{k\varepsilon}}{(n)_k}, \ee where we recall that the $*$ on the inner summation denotes that we only consider distinct values of $j_\ell,\ell\in S$. We isolate the case $i=k$ here, since then no indices are larger than $n^{\varepsilon}$ and we simply bound the probability from above by one; including $i=k$ in the triple sum would yield a contribution of zero. The inner sum can then be dealt with in the same manner as in the derivation of the upper bound in the proof of Lemma~\ref{lemma:in0eps}, to yield an upper bound \be \sum_{i=1}^{k-1} n^{-i (1-\varepsilon)}\sum_{\substack{S\subseteq [k]\\ |S|=k-i}}\prod_{\ell\in S}\E{\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^{m_\ell}}\big(1+o\big(n^{-\beta}\big)\big)+2n^{-k (1-\varepsilon)}, \ee for some $\beta>0$. It thus remains to show that for any $m_\ell=c_\ell \log n+o(\log n)$ with $c_\ell\in[0,1/\log \theta)$ we can take $\varepsilon$ and $\beta>0$ sufficiently small, such that \be n^{- (1-\varepsilon)}=o\Big(\E{\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^{m_\ell}}n^{-\beta}\Big). \ee By Lemma~\ref{lemma:pkbound}, we have for any $\xi>0$ and $n$ sufficiently large, that \be\label{eq:pkbound} \E{\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^{m_\ell}}\geq (\theta+\xi)^{- m_\ell}=n^{- c_\ell \log(\theta+\xi)+o(1)}, \ee and $n^{- (1-\varepsilon)}=o(n^{-\eta-c_\ell \log(\theta+\xi)+o(1)})$ when we choose $\eta,\xi,$ and $\varepsilon$ sufficiently small, since $c_\ell \log \theta <1$ for any $\ell\in[k]$. As a result, \be \frac{1}{(n)_k}\sum_{\textbf{j}\in I_n(\varepsilon)}\P{\Zm_n(j_\ell)=m_\ell\text{ for all }\ell \in [k]}=o\Big(\prod_{\ell=1}^k\E{\frac{\E W}{\E W+W}\Big(\frac{W}{\E W+W}\Big)^{m_\ell}}n^{-\eta}\Big). \ee We now assume (as before in the proof of Lemma~\ref{lemma:in0eps}, without loss of generality) that $m_\ell=c_\ell \log n+o(\log n)$ with $c_\ell\in[1/\log \theta,\theta/(\theta-1))$ for all $\ell\in[k]$. In this case, the crude bound used above no longer suffices. Now, the aim is to combine the above with a similar approach as at the start of the proof of \cite[Theorem $2.9$, Bounded case]{LodOrt21} and also use condition~\ref{ass:weightzero} in Assumption~\ref{ass:weights}. First, we consider the set of indices $I_n(\varepsilon,k)$.
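Before doing so, let us record for the reader's convenience the standard Chernoff bound for sums of independent indicator random variables, which is, in essence, the estimate appearing in the display below (cf.\ the proof of \cite[Theorem $2.9$, Bounded case]{LodOrt21}): if $X$ is such a sum with mean $\mu$, then for any integer $m\geq \mu$, \be \P{X\geq m}\leq \mathrm{e}^{-\mu}\Big(\frac{\mathrm{e}\mu}{m}\Big)^{m}=\exp\big(m(1-u+\log u)\big),\qquad u:=\mu/m\leq 1. \ee Conditionally on the vertex-weights, $\Zm_n(i)$ is precisely such a sum, with conditional mean $\sum_{j=i}^{n-1}W_i/S_j$ (that is, $m_\ell u_{i,\ell}$ in the notation introduced below).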
To make use of the negative quadrant dependence of the degrees $(\Zm_n(i))_{i\in[n]}$ (see Remark~{\rm Re}f{rem:degtail} and~\cite[Lemma $7.1$]{LodOrt21}), we create an upper bound by considering the event $\{\mathbb{Z}m_n(j_\mathrm{e}ll)\geq m_\mathrm{e}ll\text{ for all }\mathrm{e}ll\in[k]\}$. Then, using the tail distribution and the negative quadrant dependency of the degrees under the conditional probability measure $\mathbb P_W$ yields \be \frac{1}{(n)_k}\sum_{\textbf{j}\in I_n({\rm var}\ epsilon,k)}\mathbb{P}{\mathbb{Z}m_n(j_\mathrm{e}ll)=m_\mathrm{e}ll\text{ for all }\mathrm{e}ll \in [k]}\leq \frac{1}{(n)_k}\sum_{1\leq j_1\neq \ldots \neq j_k<n^{{\rm var}\ epsilon}}\E{\prod_{\mathrm{e}ll=1}^k\mathbb{P}f{\mathbb{Z}m_n(j_\mathrm{e}ll)\geq m_\mathrm{e}ll}}. \mathrm{e}e We then allow the indices $j_1,\ldots, j_k$ to take any value between $1$ and $n^{{\rm var}\ epsilon}$, to obtain the upper bound \be \frac{1}{(n)_k}\E{\prod_{\mathrm{e}ll=1}^k \bigg( \sum_{i<n^{{\rm var}\ epsilon}} \mathbb{P}f{\Zm_n(i)\geq m_\mathrm{e}ll}\bigg)}. \mathrm{e}e As in the proof of \cite[Theorem $2.9$, Bounded case]{LodOrt21}, we apply a Chernoff bound to the conditional probability measure $\mathbb P_W$ to obtain \be \frac{1}{(n)_k}\mathbb E\Bigg[\prod_{\mathrm{e}ll=1}^k \bigg( \sum_{i<n^{{\rm var}\ epsilon}}\!\! \mathbb{P}f{\Zm_n(i)\geq m_\mathrm{e}ll}\bigg)\Bigg]\leq \frac{1}{(n)_k}\mathbb E\Bigg[\prod_{\mathrm{e}ll=1}^k \bigg( \sum_{i<n^{{\rm var}\ epsilon}}\!\!\mathrm{e}xp(m_\mathrm{e}ll(1-u_{i,\mathrm{e}ll}+\log u_{i,\mathrm{e}ll}))\bigg)\Bigg], \mathrm{e}e where \be \label{eq:uil} u_{i,\mathrm{e}ll}:=\frac{1}{m_\mathrm{e}ll}\sum_{j=i}^{n-1}\frac{W_i}{S_j}. \mathrm{e}e This is the equivalent of~\mathrm{e}qref{eq:chernbound} in the non-rigorous proof in Section~{\rm Re}f{sec:nonrigour}. We then introduce the constants $\mathrm{d}elta\in(0,1/2),C>kc_\theta^{-1} \theta \log(\theta)/(\theta-1)$, and $\alpha>0$ (with $c_\theta:=1/(2\theta^2)$). Also, we define the sequences $\zeta'_n:=(C\log n )^{-\mathrm{d}elta/(1-2\mathrm{d}elta)}/ \E W$, $f(n):=\lceil \log n/\log\log\log n\rceil$, and $g(n):=\lceil \log\log n\rceil$, $n\in\mathbb{N}$. We then define the events \be \ba E^{(3)}_n&:=\bigg\{\sum_{\mathrm{e}ll=1}^j W_\mathrm{e}ll \geq j\E W(1-\zeta'_n),\text{ for all } (C\log n)^{1/(1-2\mathrm{d}elta)}\leq j\leq n\bigg\},\\ E^{(4)}_n&:=\{S_{\lceil \alpha \log n\rceil}\geq c\log n\}, \\ E^{(5)}_n&:=\{S_{f(n)}\geq g(n)\}. \mathrm{e}a \mathrm{e}e The event $E^{(3)}_n$ is similar to the event $E^{(1)}_n$ introduced in~\mathrm{e}qref{eq:en}, but considers a larger range of indices~$j$. The particular choice of the lower bound on the indices $j$ follows from the fact that we want as much control over the partial sums of the vertex-weights as possible, but need to ensure that $\mathbb P((E^{(3)}_n)^c)$ decays sufficiently fast. It follows that for all three events, the probability of their complement is $o(n^{-\gamma})$ for any $\gamma>0$ (in the case of $E^{(3)}_n$ and $E^{(4)}_n$ when we choose $C$ and $\alpha $ sufficiently large, respectively) by Lemma~{\rm Re}f{lemma:weightsumbounds} in the~\hyperref[sec:appendix]{Appendix}. We can use these events in the expected value to arrive at the upper bound \be \label{eq:e'n} \frac{1}{(n)_k}\E{\prod_{\mathrm{e}ll=1}^k \bigg( \sum_{i<n^{{\rm var}\ epsilon}}\mathrm{e}xp(m_\mathrm{e}ll(1-u_{i,\mathrm{e}ll}+\log u_{i,\mathrm{e}ll}))\bigg)\ind_{E^{(3)}_n\cap E^{(4)}_n\cap E^{(5)}_n}}+\mathbb P((E^{(3)}_n\cap E^{(4)}_n\cap E^{(5)}_n)^c). 
\mathrm{e}e By the proof of Corollary~{\rm Re}f{cor:uil} in the~\hyperref[sec:appendix]{Appendix} and since $m_\mathrm{e}ll=c_\mathrm{e}ll \log n+o(\log n)$ with $c_\mathrm{e}ll>0$, we can bound $u_{i,\mathrm{e}ll}$ from above uniformly in $i\in[n^{\rm var}\ epsilon]$ and for any $\mathrm{e}ll\in[k]$ by $(1+o(1))/(c_\mathrm{e}ll\E W)$. Recalling that $\E W=\theta-1$ and that $c_\mathrm{e}ll\geq 1/\log \theta>1/(\theta-1)$ (as $\theta>1$), we find that $u_{i,\mathrm{e}ll}<1$ for all $i\in[n]$ and $\mathrm{e}ll\in[k]$ as well. Since $x\mapsto 1-x+\log x$ is increasing on $(0,1)$ we can thus use this upper bound in the first term of~\mathrm{e}qref{eq:e'n} to bound it from above by \be \ba \label{eq:remarkref} \frac{1}{(n)_k}{}&\prod_{\mathrm{e}ll=1}^k \bigg( \sum_{i<n^{{\rm var}\ epsilon}}\mathrm{e}xp\Big(c_\mathrm{e}ll\log n\Big(1-\frac{1}{c_\mathrm{e}ll \E W}+\log \Big(\frac{1}{c_\mathrm{e}ll \E W}\Big)\Big )(1+o(1))\Big )\bigg)\\ &\leq \mathrm{e}xp\Big(\log n(1+o(1))\sum_{\mathrm{e}ll=1}^k \Big(- (1-{\rm var}\ epsilon)+c_\mathrm{e}ll\Big(1-\frac{1}{c_\mathrm{e}ll \E W}+\log\Big(\frac{1}{c_\mathrm{e}ll \E W}\Big)\Big)\Big)\Big), \mathrm{e}a \mathrm{e}e which is the equivalent of~\mathrm{e}qref{eq:uiluse} in the non-rigorous proof in Section~{\rm Re}f{sec:nonrigour}. We then require that \be \label{eq:littleo} \mathrm{e}xp\Big(\log n(1+o(1))\sum_{\mathrm{e}ll=1}^k \Big(\!- (1-{\rm var}\ epsilon)+c_\mathrm{e}ll\Big(1-\frac{1}{c_\mathrm{e}ll \E W}+\log\Big(\frac{1}{c_\mathrm{e}ll \E W}\Big)\Big)\Big)\Big)=o\Big(\frac{1}{n^{\beta}}\prod_{\mathrm{e}ll=1}^k p_{m_\mathrm{e}ll}\Big), \mathrm{e}e for some $\beta>0$. As, by Lemma~{\rm Re}f{lemma:pkbound}, $p_{m_\mathrm{e}ll}\geq (\theta+\xi)^{-m_\mathrm{e}ll}=\mathrm{e}xp(-\log n(1+o(1))c_\mathrm{e}ll\log(\theta+\xi))$ for any $\xi>0$ and $n$ sufficiently large, it suffices to show that \be \label{eq:csineq} \sum_{\mathrm{e}ll=1}^k \Big(- (1-{\rm var}\ epsilon)+c_\mathrm{e}ll\Big(1-\frac{1}{c_\mathrm{e}ll \E W}+\log\Big(\frac{1}{c_\mathrm{e}ll \E W}\Big)\Big)\Big)<-\sum_{\mathrm{e}ll=1}^k c_\mathrm{e}ll\log(\theta+\xi)-\beta, \mathrm{e}e when $\xi,\beta,$ and ${\rm var}\ epsilon$ are sufficiently small. We show that this strict inequality can be achieved for each term individually, by arguing that we can choose ${\rm var}\ epsilon\in(0,1)$ small enough such that \be (1-{\rm var}\ epsilon)>c_\mathrm{e}ll\Big(1-\frac{1}{c_\mathrm{e}ll(\theta-1)}+\log\Big(\frac{\theta+\xi}{c_\mathrm{e}ll(\theta-1)}\Big)\Big),\qquad \mathrm{e}ll\in[k], \mathrm{e}e where we note that we have written $\E W$ as $\theta-1$. The right-hand side is increasing in $c_\mathrm{e}ll$ when $c_\mathrm{e}ll\in[1/\log\theta,\theta/(\theta-1))$, so that all $k$ inequalities are satisfied when we solve \be (1-{\rm var}\ epsilon)>\wt c\Big(1-\frac{1}{\wt c(\theta-1)}+\log\Big(\frac{\theta+\xi}{\wt c(\theta-1)}\Big)\Big), \mathrm{e}e with $\wt c:=\max_{\mathrm{e}ll\in[k]}c_\mathrm{e}ll$. We now show that the right-hand side is strictly smaller than one when $\xi$ is sufficiently small. We write \be \ba \wt c\Big (1-\frac{1}{\wt c(\theta-1)}+\log\Big(\frac{\theta+\xi}{\wt c(\theta-1)}\Big)\Big)&=\wt c\Big(1-\frac{1}{\wt c(\theta-1)}+\log\Big(\frac{\theta}{\wt c(\theta-1)}\Big)\Big)+\wt c\log\Big(1+\frac\xi\theta\Big)\\ &\leq \wt c\Big (1-\frac{1}{\wt c(\theta-1)}+\log\Big(\frac{\theta}{\wt c(\theta-1)}\Big)\Big)+\frac{\xi}{\theta-1}, \mathrm{e}a \mathrm{e}e where the final upper bound follows from the fact that $\log(1+x)\leq x$ for $x>-1$ and $\wt c<\theta/(\theta-1)$. 
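For completeness, the monotonicity claim used in the next step follows from the elementary computation \be \frac{\mathrm{d}}{\mathrm{d} \wt c}\bigg[\wt c\Big(1-\frac{1}{\wt c(\theta-1)}+\log\Big(\frac{\theta}{\wt c(\theta-1)}\Big)\Big)\bigg]=\log\Big(\frac{\theta}{\wt c(\theta-1)}\Big)>0\qquad \text{for } 0<\wt c<\frac{\theta}{\theta-1}, \ee together with the observation that the expression in square brackets equals one at $\wt c=\theta/(\theta-1)$.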
We denote the first term on the right-hand side by $\kappa=\kappa(\wt c,\theta)$. As $\kappa$ is increasing in $\wt c$ when $\wt c< \theta/(\theta-1)$ (which is the case when $c_\ell<\theta/(\theta-1)$ for all $\ell\in[k]$), we have $\kappa<1$, as $\wt c<\theta/(\theta-1)$. Thus, setting $\xi<(1-\kappa)(\theta-1)/2$ we achieve the desired result. Now, taking $\varepsilon\in(0,1-(\kappa+\xi/(\theta-1)))$, we arrive at~\eqref{eq:littleo} for some small $\beta>0$. Recalling that we can bound the second term in~\eqref{eq:e'n} by $o(n^{-\gamma})$ for any $\gamma>0$ and that $p_{m_\ell}\geq n^{-\nu}$ for some $\nu>0$ uniformly in $m_\ell<c\log n$, we obtain \be\label{eq:inepsk} \frac{1}{(n)_k}\sum_{\textbf{j}\in I_n(\varepsilon,k)}\P{\Zm_n(j_\ell)=m_\ell\text{ for all }\ell\in[k]}=o\Big(n^{-\eta}\prod_{\ell=1}^k p_{m_\ell}\Big), \ee for some small $\eta>0$. We now consider the remaining sets $I_n(\varepsilon,1),\ldots, I_n(\varepsilon,k-1)$ and aim to bound \be \frac{1}{(n)_k}\sum_{i=1}^{k-1}\sum_{\textbf{j} \in I_n(\varepsilon,i)}\P{\Zm_n(j_\ell)\geq m_\ell\text{ for all }\ell \in[k]}. \ee Again, using the negative quadrant dependence and introducing the events $E^{(1)}_n,E^{(3)}_n,E^{(4)}_n$, and $E^{(5)}_n$ yields the upper bound \be \frac{1}{(n)_k}\sum_{i=1}^{k-1}\sum_{\textbf{j} \in I_n(\varepsilon,i)}\E{\ind_{E^{(1)}_n\cap E^{(3)}_n\cap E^{(4)}_n\cap E^{(5)}_n}\prod_{\ell=1}^k \Pf{\Zm_n(j_\ell)\geq m_\ell}}+\P{(E^{(1)}_n\cap E^{(3)}_n\cap E^{(4)}_n\cap E^{(5)}_n)^c}. \ee The aim is to treat the probabilities of indices which are at most $n^{\varepsilon}$ in the same way as when dealing with the indices in $I_n(\varepsilon,k)$ to reach a bound as in~\eqref{eq:littleo}, for which we use the events $E^{(i)}_n, i=3,4,5$. For the indices which are larger than $n^{\varepsilon}$ such an upper bound will not suffice. Instead, we aim to bound $\Pf{\Zm_n(j_\ell)\geq m_\ell}$ when $n^{\varepsilon}\leq j_\ell\leq n$ in a similar way as we bounded $\Pf{\Zm_n(j_\ell)=m_\ell}$ from above in the proof of Lemma~\ref{lemma:in0eps}, for which we use the event $E^{(1)}_n$. First, we split the summation over $I_n(\varepsilon,i)$ according to which of the indices are smaller than $n^{\varepsilon}$ and which are at least $n^{\varepsilon}$, similar to~\eqref{eq:Ssum}. That is, \be\ba \frac{1}{(n)_k}{}&\sum_{i=1}^{k-1}\sum_{\textbf{j} \in I_n(\varepsilon,i)}\E{\ind_{E^{(1)}_n\cap E^{(3)}_n\cap E^{(4)}_n\cap E^{(5)}_n}\prod_{\ell=1}^k \Pf{\Zm_n(j_\ell)\geq m_\ell}}\\ ={}&\frac{1}{(n)_k}\sum_{i=1}^{k-1}\sum_{\substack{S\subseteq [k]\\ |S|=i}}\ \sideset{}{^*}\sum_{\substack{1\leq j_\ell <n^{\varepsilon}\\ \ell\in S}}\ \sideset{}{^*}\sum_{\substack{n^{\varepsilon}\leq j_\ell \leq n\\ \ell \in[k]\backslash S}}\!\!\!\!\!\mathbb E\Bigg[\!\ind_{E^{(1)}_n\cap E^{(3)}_n\cap E^{(4)}_n\cap E^{(5)}_n}\!\!\prod_{\ell\in S} \Pf{\Zm_n(j_\ell)\geq m_\ell}\!\!\!\!\prod_{\ell\in [k]\backslash S}\!\!\!\!\Pf{\Zm_n(j_\ell)\geq m_\ell}\Bigg].
\ea \ee Using the events $E^{(i)}_n, i=3,4,5$, we can follow similar steps as above to bound the sum over the indices $j_\ell$ and the product of probabilities $\Pf{\Zm_n(j_\ell)\geq m_\ell}$ for $\ell\in S$ from above by the deterministic upper bound \be \exp\Big(\log n(1+o(1))\sum_{\ell\in S} \Big(- (1-\varepsilon)+c_\ell\Big(1-\frac{1}{c_\ell \E W}+\log\Big(\frac{1}{c_\ell \E W}\Big)\Big)\Big)\Big)=:n^{C(S)(1+o(1))}, \ee which yields \be \label{eq:detbound} \sum_{i=1}^{k-1}\frac{n^i}{(n)_k}\sum_{\substack{S\subseteq [k]\\ |S|=i}}n^{C(S)(1+o(1))}\ \sideset{}{^*}\sum_{\substack{n^{\varepsilon}\leq j_\ell \leq n\\ \ell \in[k]\backslash S}}\!\!\!\!\E{\ind_{E^{(1)}_n}\prod_{\ell\in [k]\backslash S}\!\!\Pf{\Zm_n(j_\ell)\geq m_\ell}}. \ee We now proceed to bound each individual probability $\Pf{\Zm_n(j_\ell)\geq m_\ell}$ when $\ell \in [k]\backslash S$. This follows a similar approach to the upper bound of $\Pf{\Zm_n(j_\ell)=m_\ell}$ in the proof of Lemma~\ref{lemma:in0eps} (as well as the non-rigorous proof provided in Section~\ref{sec:nonrigour}), with a couple of modifications. Introducing indices $j_\ell<i_1<\ldots <i_{m_\ell}\leq n$, which denote the steps at which vertex $j_\ell$ increases its degree, we can write \be \Pf{\Zm_n(j_\ell)\geq m_\ell}=\sum_{j_\ell<i_1<\ldots <i_{m_\ell}\leq n}\prod_{t=1}^{m_\ell} \frac{W_{j_\ell}}{\sum_{r=1}^{i_t-1}W_r}\prod_{\substack{s=j_\ell+1\\ s\neq i_t,t\in [m_\ell]}}^{i_{m_\ell}-1}\Big(1-\frac{W_{j_\ell}}{\sum_{r=1}^{s-1} W_r}\Big). \ee The second product, in comparison to dealing with the event $\{\Zm_n(j_\ell)=m_\ell\}$, goes up to $i_{m_\ell}-1$ instead of $n$. This is due to the fact that we now only need to control the connections vertex $j_\ell$ does and does not make up to its $m_\ell^{\text{th}}$ connection. Using the same idea as in~\eqref{eq:fracbound} and using the event $E^{(1)}_n$, we obtain the upper bound \be \ba \sum_{j_\ell<i_1<\ldots <i_{m_\ell}\leq n}{}&\prod_{t=1}^{m_\ell} \frac{W_{j_\ell}}{(i_t-1)\E W(1-\zeta_n)-1}\prod_{s=j_\ell+1}^{i_{m_\ell}-1}\Big(1-\frac{W_{j_\ell}}{s\E W(1+\zeta_n)}\Big)\\ \leq{}& \sum_{j_\ell<i_1<\ldots <i_{m_\ell}\leq n}\prod_{t=1}^{m_\ell} \frac{W_{j_\ell}}{i_t\E W(1-2\zeta_n)}\prod_{s=j_\ell+1}^{i_{m_\ell}-1}\Big(1-\frac{W_{j_\ell}}{s\E W(1+\zeta_n)}\Big). \ea\ee The last step follows from the fact that $(i_t-1)(1-\zeta_n)\E W-1\geq i_t(1-2\zeta_n)\E W$ for $n$ sufficiently large. Using this in the expected value of~\eqref{eq:detbound} yields \be \ \sideset{}{^*}\sum_{\substack{n^{\varepsilon}\leq j_\ell \leq n\\ \ell \in[k]\backslash S}}\E{\prod_{\ell\in [k]\backslash S}\bigg(\sum_{j_\ell<i_1<\ldots <i_{m_\ell}\leq n}\prod_{t=1}^{m_\ell} \frac{W_{j_\ell}}{i_t\E W(1-2\zeta_n)}\prod_{s=j_\ell+1}^{i_{m_\ell}-1}\Big(1-\frac{W_{j_\ell}}{s\E W(1+\zeta_n)}\Big)\bigg)}.
\ee We can now relabel the vertex-weights $W_{j_\ell}$ as $W_\ell$, $\ell\in [k]\backslash S$. This does not change the expected value and is possible since the indices $j_\ell$ are distinct for all $\ell\in [k]\backslash S$. Directly after this, we omit the requirement that the indices $j_\ell$ are distinct, which is now of no consequence as the weights have been relabelled already. We hence arrive at the upper bound \be \label{eq:expprod} \prod_{\ell\in [k]\backslash S}\!\!\!\mathbb E\Bigg[\sum_{n^{\varepsilon}\leq j_\ell \leq n}\ \sum_{j_\ell<i_1<\ldots <i_{m_\ell}\leq n}\prod_{t=1}^{m_\ell} \frac{W_\ell}{i_t\E W(1-2\zeta_n)}\!\prod_{s=j_\ell+1}^{i_{m_\ell}-1}\!\!\Big(1-\frac{W_\ell}{s\E W(1+\zeta_n)}\Big)\!\Bigg], \ee where the product can be taken out of the expected value due to the independence of the vertex-weights $W_\ell$, $\ell\in[k]\backslash S$. As a result, we can deal with each of the expected values individually. Following the same approach as in~\eqref{eq:expbound} and setting $a_\ell:=W_\ell/(\E W(1+\zeta_n))$, we obtain the upper bound \be \ba\label{eq:exp} \mathbb E\Bigg[{}&\sum_{n^{\varepsilon}\leq j_\ell \leq n}\ \sum_{j_\ell<i_1<\ldots <i_{m_\ell}\leq n}\prod_{t=1}^{m_\ell} \frac{W_\ell}{i_t\E W(1-2\zeta_n)}\Big(\frac{i_{m_\ell}}{j_\ell}\Big)^{-a_\ell}\Bigg]\Big(1+\mathcal O\big(n^{-\varepsilon}\big)\Big)\\ ={}&\mathbb E\Bigg[a_\ell^{m_\ell}\!\!\!\!\!\!\sum_{n^{\varepsilon}\leq j_\ell \leq n}\ \sum_{j_\ell<i_1<\ldots <i_{m_\ell}\leq n}\prod_{t=1}^{m_\ell} i_t^{-1}\Big(\frac{i_{m_\ell}}{j_\ell}\Big)^{-a_\ell}\Bigg]\Big(\frac{1+\zeta_n}{1-2\zeta_n}\Big)^{m_\ell}\Big(1+\mathcal O\big(n^{-\varepsilon}\big)\Big). \ea\ee Let us first take out the factor $j_\ell^{a_\ell}$ from the inner sum. We then observe that the summand of the inner sum over the indices $i_1,\ldots, i_{m_\ell}$ is decreasing, so that we can bound it from above almost surely by the multiple integrals \be \int_{j_\ell}^n \int_{x_1}^n \cdots \int_{x_{m_\ell-1}}^n \prod_{t=1}^{m_\ell-1}x_t^{-1} x_{m_\ell}^{-(1+a_\ell)}\,\mathrm{d} x_{m_\ell}\ldots \mathrm{d} x_1=a_\ell^{-m_\ell}j_\ell^{-a_\ell}\bigg(1-\Big(\frac{n}{j_\ell}\Big)^{-a_\ell}\!\sum_{s=0}^{m_\ell-1}a_\ell^{s}\frac{(\log(n/j_\ell))^s}{s!}\bigg). \ee The equality follows from~\eqref{eq:logint2} in Lemma~\ref{lemma:logints} with $a=j_\ell, b=n, c=a_\ell$, and $k=m_{\ell}$. Again reintroducing the term $a_\ell^{m_\ell}j_\ell^{a_\ell}$ in the expected value in~\eqref{eq:exp}, we arrive at \be 1-\sum_{s=0}^{m_\ell-1}(n/j_\ell)^{-a_\ell}\frac{(a_\ell\log(n/j_\ell))^s}{s!}=\Pf{P(a_\ell)\geq m_\ell}, \ee where $P(a_\ell)\sim \text{Poi}(a_\ell \log(n/j_\ell))$, conditionally on $W_\ell$.
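For the reader's convenience, we recall the two elementary facts used in the next step. First, the Poisson--gamma duality (cf.~\eqref{eq:poitogamma}): if $N\sim\text{Poi}(at)$ and $Y\sim\text{Gamma}(m,a)$, i.e.\ $Y$ is a sum of $m$ independent $\text{Exp}(a)$ random variables, then \be \P{N\geq m}=\P{Y\leq t}, \ee since a rate-$a$ Poisson process has at least $m$ points in $[0,t]$ precisely when its $m$-th arrival time is at most $t$. Second, the Laplace transform of the gamma distribution gives $\E{\mathrm{e}^{-Y}}=(a/(1+a))^{m}$, which is the bound used for the integral below.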
With a similar duality between Poisson and gamma random variables, we obtain, as in~\mathrm{e}qref{eq:poitogamma}, \be \mathbb{P}f{P(a_\mathrm{e}ll)\geq m_\mathrm{e}ll}=\mathbb{P}f{Y_\mathrm{e}ll\leq \log(n/j_\mathrm{e}ll)}, \mathrm{e}e where $Y_\mathrm{e}ll\sim \text{Gamma}(m_\mathrm{e}ll,a_\mathrm{e}ll)$, conditionally on $W_\mathrm{e}ll$. Then, by the choice of $\zeta_n$, it follows that $((1+\zeta_n)/(1-2\zeta_n))^{m_\mathrm{e}ll}=1+\mathcal O\big(n^{-\mathrm{d}elta{\rm var}\ epsilon}\log n \big)$. Using both these results in~\mathrm{e}qref{eq:exp}, we arrive at \be \label{eq:exppw} \mathbb E\Bigg[\sum_{n^{{\rm var}\ epsilon}\leq j_\mathrm{e}ll \leq n}\mathbb{P}f{Y_\mathrm{e}ll\leq \log(n/j_\mathrm{e}ll)}\Bigg]\Big(1+\mathcal O\big( n^{-\mathrm{d}elta{\rm var}\ epsilon}\log n \big)\Big). \mathrm{e}e As the conditional probability is decreasing in $j_\mathrm{e}ll$, we can bound the sum from above by an integral almost surely to obtain, as in~\mathrm{e}qref{eq:sumtoint} and~\mathrm{e}qref{eq:gammaexp}, \be \int_{\lfloor n^{{\rm var}\ epsilon}\rfloor}^n \mathbb{P}f{Y_\mathrm{e}ll\leq \log(n/x)}\,\mathrm{d} x \leq n\Ef{}{\mathrm{e}^{-Y_\mathrm{e}ll}\ind_{\{Y_\mathrm{e}ll\leq \log(n/\lfloor n^{\rm var}\ epsilon \rfloor)\}}}\leq n\Big(\frac{a_\mathrm{e}ll}{1+a_\mathrm{e}ll}\Big)^{m_\mathrm{e}ll}. \mathrm{e}e Combining this almost sure upper bound with~\mathrm{e}qref{eq:exppw} in~\mathrm{e}qref{eq:expprod}, we arrive at \be \label{eq:tailfin} n^{k-|S|}\prod_{\mathrm{e}ll\in[k]\backslash S} \E{\Big(\frac{a_\mathrm{e}ll}{1+a_\mathrm{e}ll}\Big)^{m_\mathrm{e}ll}}\Big(1+\mathcal O \big(n^{-\mathrm{d}elta{\rm var}\ epsilon}\log n\big)\Big). \mathrm{e}e Finally, with the same steps as in~\mathrm{e}qref{eq:aapprox}, we obtain \be n^{k-|S|}\prod_{\mathrm{e}ll\in[k]\backslash S} \E{\Big(\frac{W}{\E W+W}\Big)^{m_\mathrm{e}ll}}\big(1+o\big(n^{-\beta}\big)\big), \mathrm{e}e for some small $\beta>0$. Using this bound in~\mathrm{e}qref{eq:detbound} yields, for some constant $K>0$, the upper bound \be K\sum_{i=1}^{k-1}\sum_{\substack{S\subseteq [k]\\ |S|=i}}n^{C(S)(1+o(1))}\prod_{\mathrm{e}ll\in [k]\backslash S}\E{\Big(\frac{W}{\E W+W}\Big)^{m_\mathrm{e}ll}}=K\sum_{i=1}^{k-1}\sum_{\substack{S\subseteq [k]\\ |S|=i}}n^{C(S)(1+o(1))}\prod_{\mathrm{e}ll\in [k]\backslash S}p_{\geq m_\mathrm{e}ll}. \mathrm{e}e By Remark~{\rm Re}f{rem:pgeqk}, the tail probability $p_{\geq m_\mathrm{e}ll}=\mathcal O(p_{m_\mathrm{e}ll})$ and by~\mathrm{e}qref{eq:csineq} we have\\ $n^{C(S)(1+o(1))}=o\big(n^{-\mathrm{e}ta(S)}\prod_{\mathrm{e}ll\in S}p_{m_\mathrm{e}ll}\big)$ for some $\mathrm{e}ta(S)>0$. Combined, this yields \be \ba \frac{1}{(n)_k}\sum_{i=1}^{k-1} \sum_{\textbf{j}\in I_n({\rm var}\ epsilon,i)}\E{\ind_{E^{(3)}_n}\ind_{E_n^{(1)}}\prod_{\mathrm{e}ll=1}^k \mathbb{P}f{\mathbb{Z}m_n(j_\mathrm{e}ll)\geq m_\mathrm{e}ll}}&\leq K\sum_{i=1}^{k-1}\sum_{\substack{S\subseteq [k]\\ |S|=i}}n^{C(S)(1+o(1))}\prod_{\mathrm{e}ll\in [k]\backslash S}p_{\geq m_\mathrm{e}ll}\\ &=o\big(n^{-\wt \mathrm{e}ta}\prod_{\mathrm{e}ll=1}^k p_{m_\mathrm{e}ll}\big), \mathrm{e}a \mathrm{e}e with \be \wt \mathrm{e}ta:=\min_{\substack{ S\subseteq [k]\\ 1\leq |S|\leq k-1}}\Big(C(S)-\sum_{\mathrm{e}ll\in S}\log(\theta+\xi)c_\mathrm{e}ll\Big), \mathrm{e}e which is strictly positive when ${\rm var}\ epsilon$ and $\xi$ are sufficiently small, similar to what is discussed above. 
Combining this with the fact that $\P{\big(E^{(1)}_n\cap E^{(3)}_n\cap E^{(4)}_n\cap E^{(5)}_n\big)^c}=o\big(n^{-\beta}\prod_{\ell=1}^k p_{m_\ell}\big)$ uniformly in $m_1,\ldots, m_k< c\log n$ for some $\beta>0$ by Lemma~\ref{lemma:weightsumbounds}, and the result in~\eqref{eq:inepsk}, we finally conclude \be\ba\label{eq:finboundineps} \frac{1}{(n)_k}\sum_{\textbf{j}\in I_n(\varepsilon)}\!\!\!\P{\Zm_n(j_\ell)=m_\ell\text{ for all }\ell\in[k]}={}&\frac{1}{(n)_k}\sum_{\textbf{j}\in I_n(\varepsilon,k)}\P{\Zm_n(j_\ell)=m_\ell\text{ for all }\ell\in[k]}\\ &+\frac{1}{(n)_k}\sum_{i=1}^{k-1}\sum_{\textbf{j}\in I_n(\varepsilon,i)}\!\!\!\P{\Zm_n(j_\ell)=m_\ell\text{ for all }\ell\in[k]}\\ ={}&o\Big(n^{-\beta}\prod_{\ell=1}^k p_{m_\ell}\Big), \ea\ee for some $\beta>0$, in the case that $m_\ell=c_\ell \log n+o(\log n)$ with $c_\ell\in[1/\log \theta, \theta/(\theta-1))$ for all $\ell\in[k]$ as well. When the $m_\ell$ do not all behave the same, that is, when $c_\ell\in[0,1/\log \theta)$ for some $\ell\in[k]$ and $c_\ell\in[1/\log \theta, \theta/(\theta-1))$ for the remaining $\ell$, we can use a combination of the approaches outlined for either of the cases, which concludes the proof. \end{proof} We now discuss the required adaptations so that the proof holds for the model with \emph{random out-degree} as well. This follows from the fact that Lemma~\ref{lemma:in0eps} holds for this model, together with the fact that the degrees are still negatively quadrant dependent when the out-degree is random. As a result, all probabilities related to the degrees of multiple vertices can either be dealt with using Lemma~\ref{lemma:in0eps} or can be split as a product of probabilities of individual vertices. From the perspective of the in-degree of an individual vertex $i\in[n]$, the model with out-degree one and the model with random out-degree are equivalent, as in every step the in-degree of vertex $i$ increases by one with the same probability. Hence, the proof follows through in exactly the same way and Lemma~\ref{lemma:inepsterm} holds for the model with random out-degree as well. \subsection{Proof of Proposition~\ref{lemma:degprobasymp}}\label{sec:proof5.1} We finally prove Proposition~\ref{lemma:degprobasymp}, using Lemmas~\ref{lemma:in0eps} and~\ref{lemma:inepsterm}. We remark that the proof does not use that the out-degree is deterministic but uses~\eqref{eq:degdist} only, so that the proof and hence~\eqref{eq:degtail} hold for both the model with fixed and \emph{random out-degree}. \begin{proof}[Proof of Proposition~\ref{lemma:degprobasymp}] As discussed before,~\eqref{eq:degdist} directly follows from~\eqref{eq:splitsum2} combined with Lemmas~\ref{lemma:in0eps} and~\ref{lemma:inepsterm}. Using~\eqref{eq:degdist}, we then prove~\eqref{eq:degtail}. For ease of writing, we recall that \be p_k:=\E{\frac{\theta-1}{\theta-1+W}\Big(\frac{W}{\theta-1+W}\Big)^k},\quad\text{and} \quad \; p_{\geq k}:=\E{\Big(\frac{W}{\theta-1+W}\Big)^k}. \ee We start by assuming that $m_\ell=c_\ell \log n+o(\log n)$ with $c_\ell\in(0,c)$ for each $\ell\in[k]$.
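For later use we also note the elementary relation between these two quantities: since $\sum_{j=m}^{M}(1-x)x^{j}=x^{m}-x^{M+1}$ for any $x\in[0,1)$ and integers $0\leq m\leq M$, taking $x=W/(\theta-1+W)$ and then expectations yields \be \sum_{j=m}^{M}p_j=p_{\geq m}-p_{\geq M+1}, \ee which is the telescoping identity used repeatedly below (and, letting $M\to\infty$, $\sum_{j\geq m}p_j=p_{\geq m}$).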
We discuss how to adjust the proof when $m_\mathrm{e}ll=o(\log n)$ for some or all $\mathrm{e}ll\in[k]$ at the end. For each $\mathrm{e}ll\in [k]$ take an $\mathrm{e}ta_\mathrm{e}ll\in (0,c-c_\mathrm{e}ll)$ so that $\lceil (1+\mathrm{e}ta_\mathrm{e}ll)m_\mathrm{e}ll\rceil <c\log n$. Then, we use the upper bound \be \ba\label{eq:tailub} \mathbb P({}&\mathbb{Z}m_n(v_\mathrm{e}ll)\geq m_\mathrm{e}ll\text{ for all }\mathrm{e}ll\in[k])\\ \leq{}& \sum_{j_1=m_1}^{\lfloor(1+\mathrm{e}ta_1)m_1\rfloor}\cdots \sum_{j_k=m_k}^{\lfloor (1+\mathrm{e}ta_k)m_k\rfloor}\mathbb{P}{\mathbb{Z}m_n(v_\mathrm{e}ll)=j_\mathrm{e}ll\text{ for all }\mathrm{e}ll\in[k]}\\ &+\sum_{i=1}^k \mathbb{P}{\mathbb{Z}m_n(v_i)\geq \lceil (1+\mathrm{e}ta_i)m_i\rceil, \mathbb{Z}m_n(v_\mathrm{e}ll)\geq m_\mathrm{e}ll\text{ for all }\mathrm{e}ll \neq i}. \mathrm{e}a \mathrm{e}e We first discuss the first term on the right-hand side. As~\mathrm{e}qref{eq:degdist} holds uniformly in $m_1,\ldots, m_k$ $<c\log n$, we find \be \ba\label{eq:degtailub} \sum_{j_1=m_1}^{\lfloor(1+\mathrm{e}ta_1)m_1\rfloor}\!\!\!\!\cdots\!\!\!\! \sum_{j_k=m_k}^{\lfloor (1+\mathrm{e}ta_k)m_k\rfloor}\!\!\!\!\!\!\!\mathbb{P}{\mathbb{Z}m_n(v_\mathrm{e}ll)=j_\mathrm{e}ll\text{ for all }\mathrm{e}ll\in[k]}&=\!\!\!\sum_{j_1=m_1}^{\lfloor(1+\mathrm{e}ta_1)m_1\rfloor}\!\!\!\!\cdots\!\!\!\! \sum_{j_k=m_k}^{\lfloor (1+\mathrm{e}ta_k)m_k\rfloor}\!\prod_{\mathrm{e}ll=1}^kp_{j_\mathrm{e}ll}\big(1+o\big(n^{-\beta}\big)\big)\\ &=\prod_{\mathrm{e}ll=1}^k \big(p_{\geq m_\mathrm{e}ll}-p_{\geq \lceil (1+\mathrm{e}ta_\mathrm{e}ll)m_\mathrm{e}ll \rceil}\big)\big(1+o\big(n^{-\beta}\big)\big)\\ &\leq \prod_{\mathrm{e}ll=1}^k p_{\geq m_\mathrm{e}ll}\big(1+o\big(n^{-\beta}\big)\big). \mathrm{e}a\mathrm{e}e To finish the upper bound, it remains to show that the term on the second line of~\mathrm{e}qref{eq:tailub} can be incorporated in the $o\big(n^{-\beta}\big)$ term, and it suffices to show this can be done for each term in the sum, independent of the value of $i$. Using the negative quadrant dependence of the degrees under the conditional probability measure $\mathbb P_W$ (see Remark~{\rm Re}f{rem:degtail} and~\cite[Lemma $7.1$]{LodOrt21}), we find \be \ba \label{eq:degtailend} \mathbb P(\mathbb{Z}m_n(v_i){}&\geq \lceil (1+\mathrm{e}ta_i)m_i\rceil, \mathbb{Z}m_n(v_\mathrm{e}ll)\geq m_\mathrm{e}ll\text{ for all } \mathrm{e}ll \in [k]\backslash \{i\})\\ ={}&\frac{1}{(n)_k}\sum_{1\leq j_1\neq \ldots \neq j_k\leq n}\mathbb E\Big[\mathbb{P}f{\mathbb{Z}m_n(j_i)\geq \lceil (1+\mathrm{e}ta_i)m_i\rceil}\prod_{\mathrm{e}ll\in [k]\backslash \{i\}}\mathbb{P}f{\mathbb{Z}m_n(j_\mathrm{e}ll)\geq m_\mathrm{e}ll}\Big]\\ ={}&\frac{1}{(n)_k}\sum_{\textbf{j}\in I_n({\rm var}\ epsilon)}\mathbb E\Big[\mathbb{P}f{\mathbb{Z}m_n(j_i)\geq \lceil (1+\mathrm{e}ta_i)m_i\rceil}\prod_{\mathrm{e}ll\in [k]\backslash \{i\}}\mathbb{P}f{\mathbb{Z}m_n(j_\mathrm{e}ll)\geq m_\mathrm{e}ll}\Big]\\ &+\frac{1}{(n)_k}\sum_{n^{{\rm var}\ epsilon}\leq j_1\neq \ldots \neq j_k\leq n}\!\!\!\!\!\!\!\!\mathbb E\Big[\mathbb{P}f{\mathbb{Z}m_n(j_i)\geq \lceil (1+\mathrm{e}ta_i)m_i\rceil}\prod_{\mathrm{e}ll\in [k]\backslash \{i\}}\mathbb{P}f{\mathbb{Z}m_n(j_\mathrm{e}ll)\geq m_\mathrm{e}ll}\Big]. 
\mathrm{e}a \mathrm{e}e The first term in the last step can be included in the little $o$ term in~\mathrm{e}qref{eq:degtailub} (even when considering $m_i$ rather than $\lceil (1+\mathrm{e}ta_i)m_i\rceil$ in the probability), as follows from computations similar to the ones in~\mathrm{e}qref{eq:errorterm} through~\mathrm{e}qref{eq:finboundineps}, combined with Remark~{\rm Re}f{rem:pgeqk} (which states that $p_{\geq k}=\mathcal O(p_k)$). It remains to show that the same holds for the second term in the last step. Again, we use an argument similar to the steps performed in~\mathrm{e}qref{eq:detbound} through~\mathrm{e}qref{eq:tailfin} to arrive at \be \ba \label{eq:sumbound} \frac{1}{(n)_k}{}&\sum_{n^{{\rm var}\ epsilon}\leq j_1\neq \ldots \neq j_k\leq n}\mathbb E\Big[\mathbb{P}f{\mathbb{Z}m_n(j_i)\geq \lceil (1+\mathrm{e}ta_i)m_i\rceil}\prod_{\mathrm{e}ll\in [k]\backslash \{i\}}\mathbb{P}f{\mathbb{Z}m_n(j_\mathrm{e}ll)\geq m_\mathrm{e}ll}\Big]\\ &\leq Kp_{\geq \lceil (1+\mathrm{e}ta_i)m_i\rceil}\prod_{\mathrm{e}ll\in [k]\backslash \{i\}}p_{\geq m_\mathrm{e}ll}, \mathrm{e}a \mathrm{e}e for some positive constant $K$. By Lemma~{\rm Re}f{lemma:pkbound} we have the inequalities \be p_{\geq \lceil (1+\mathrm{e}ta_i)m_i\rceil}\leq \theta^{-\lceil (1+\mathrm{e}ta_i)m_i\rceil}\leq \theta^{-m_i}\theta^{-\mathrm{e}ta_im_i},\quad\text{and}\quad p_{\geq m_i}\geq (\theta+\xi)^{-m_i}, \mathrm{e}e for any $\xi>0$. As a result, taking $\xi\in(0,\theta(\theta^{\mathrm{e}ta_i}-1))$ and setting $\phi_i:=1-(1+\xi/\theta)\theta^{-\mathrm{e}ta_i}>0$, we obtain \be \label{eq:expcomp} p_{\geq \lceil (1+\mathrm{e}ta_i)m_i\rceil}\leq (\theta+\xi)^{-m_i}\big((1+\xi/\theta)\theta^{-\mathrm{e}ta_i}\big)^{m_i}\leq p_{\geq m_i}(1-\phi_i)^{m_i}. \mathrm{e}e As $m_i=c_i\log n(1+o(1))$, it follows that $(1-\phi_i)^{m_i}=n^{-c_i\log (1/(1-\phi_i))(1+o(1))}$, so that \be \frac{1}{(n)_k}\sum_{n^{{\rm var}\ epsilon}\leq j_1\neq \ldots \neq j_k\leq n}\mathbb E\Big[\mathbb{P}f{\mathbb{Z}m_n(j_i)\geq \lceil (1+\mathrm{e}ta_i)m_i\rceil}\prod_{\mathrm{e}ll\in [k]\backslash \{i\}}\mathbb{P}f{\mathbb{Z}m_n(j_\mathrm{e}ll)\geq m_\mathrm{e}ll}\Big] \mathrm{e}e can be incorporated in the little $o$ term in~\mathrm{e}qref{eq:degtailub} for each $i\in[k]$ when we take a \\$\beta'<\beta\wedge \min_{i\in[k]}c_i\log(1/(1-\phi_i))$. This yields \be \label{eq:degtailfinub} \mathbb{P}{\mathbb{Z}m_n(v_\mathrm{e}ll)\geq m_\mathrm{e}ll\text{ for all } \mathrm{e}ll \in [k]}\leq \prod_{\mathrm{e}ll=1}^k \E{\Big(\frac{W}{\theta-1+W}\Big)^{m_\mathrm{e}ll}}\big(1+o\big(n^{-\beta'}\big)\big). \mathrm{e}e For a lower bound, we can omit the second line of~\mathrm{e}qref{eq:tailub} and use~\mathrm{e}qref{eq:degtailub} to immediately obtain \be \ba \label{eq:geqpklb} \mathbb{P}{\mathbb{Z}m_n(v_\mathrm{e}ll)\geq m_\mathrm{e}ll\text{ for all }\mathrm{e}ll\in[k]}\geq{}& \sum_{j_1=m_1}^{\lfloor(1+\mathrm{e}ta_1)m_1\rfloor}\cdots \sum_{j_k=m_k}^{\lfloor (1+\mathrm{e}ta_k)m_k\rfloor}\mathbb{P}{\mathbb{Z}m_n(v_\mathrm{e}ll)=j_\mathrm{e}ll\text{ for all }\mathrm{e}ll\in[k]}\\ ={}& \prod_{\mathrm{e}ll=1}^k \big(p_{\geq m_\mathrm{e}ll}-p_{\geq \lceil (1+\mathrm{e}ta_\mathrm{e}ll)m_\mathrm{e}ll\rceil}\big)\big(1+o\big(n^{-\beta}\big)\big). \mathrm{e}a \mathrm{e}e Again using~\mathrm{e}qref{eq:expcomp} yields $p_{\geq m_\mathrm{e}ll}-p_{\geq \lceil (1+\mathrm{e}ta_\mathrm{e}ll)m_\mathrm{e}ll\rceil}=p_{\geq m_\mathrm{e}ll}\big(1+o\big(n^{-\beta'}\big)\big)$ when we set \\$\beta'<\beta\wedge \min_{i\in[k]} c_i\log(1/(1-\phi_i))$. 
Combined with~\eqref{eq:degtailfinub} this yields~\eqref{eq:degtail} for the case $m_\ell=c_\ell\log n+o(\log n)$ with $c_\ell\in(0,c)$ for all $\ell\in[k]$. When $c_\ell=0$ and thus $m_\ell=o(\log n)$ for some (or all) $\ell\in[k]$, a few minor alterations are required. Most importantly, we substitute the quantity $\lfloor(c/2)\log n\rfloor $ for $\lfloor (1+\eta_\ell)m_\ell\rfloor$ in all the steps. With this, the upper bound in~\eqref{eq:degtailub} follows directly. Similarly, the first term on the right-hand side of~\eqref{eq:degtailend} can be included in the little $o$ term in~\eqref{eq:degtailub}, as the computations required (which are similar to those in~\eqref{eq:errorterm} through~\eqref{eq:inepsk}) include the case $m_\ell=o(\log n)$. We again use the upper bound in~\eqref{eq:sumbound} to obtain \be\ba \frac{1}{(n)_k}{}&\sum_{n^{\varepsilon}\leq j_1\neq \ldots \neq j_k\leq n}\mathbb E\Big[\Pf{\Zm_n(j_i)\geq \lceil(c/2)\log n\rceil}\prod_{\ell\in [k]\backslash \{i\}}\Pf{\Zm_n(j_\ell)\geq m_\ell}\Big]\\ &\leq Kp_{\geq \lceil (c/2)\log n\rceil}\prod_{\ell\in [k]\backslash \{i\}}p_{\geq m_\ell}\\ &=K\frac{p_{\geq \lceil (c/2)\log n\rceil}}{p_{\geq m_i}}\prod_{\ell\in [k]}p_{\geq m_{\ell}}. \ea\ee By Lemma~\ref{lemma:pkbound}, we obtain that the fraction is at most $\theta^{-(c/2)\log n}(\theta+\xi)^{m_i}\leq \theta^{-(c/4)\log n}=n^{-c\log(\theta)/4}$ for all $n$ sufficiently large, when $m_i=o(\log n)$. As a result, this term can be included in the $o(n^{-\beta})$ term in~\eqref{eq:degtailub} for any $i\in[k]$ such that $m_i=o(\log n)$, which proves the upper bound. For the lower bound, we again substitute $\lfloor(c/2)\log n\rfloor$ for $\lfloor (1+\eta_i)m_i\rfloor$ when $m_i=o(\log n)$. In~\eqref{eq:geqpklb}, this yields a term \be (p_{\geq m_i}-p_{\geq\lceil (c/2)\log n\rceil})(1+o(n^{-\beta}))=p_{\geq m_i}\Big(1-\frac{p_{\geq\lceil (c/2)\log n\rceil}}{p_{\geq m_i}}\Big)(1+o(n^{-\beta})). \ee Again, the fraction is at most $n^{-\eta}$ for some $\eta>0$, which proves the lower bound and concludes the proof. \end{proof} \section{Proofs of the main theorems}\label{sec:mainproof} With the tools developed in Section~\ref{sec:taildeg}, in particular Propositions~\ref{lemma:degprobasymp} and~\ref{prop:factmean} and Lemma~\ref{lemma:maxdegwhp}, we now prove the main results formulated in Section~\ref{sec:results}. First, we prove the main result for high degree vertices when the vertex-weight distribution has an atom at one, as in the~\ref{ass:weightatom} case. \begin{proof}[Proof of Theorem~\ref{thrm:mainatom}] The proof follows the same approach as~\cite[Theorem $1.2$]{AddEsl18}. For an integer subsequence $(n_\ell)_{\ell\in\mathbb{N}}$ such that $\varepsilon_{n_\ell}\to\varepsilon$ as $\ell\to\infty$, it suffices to prove that for any $i<i'\in \mathbb{Z}$, \be (X^{(n_\ell)}_i,X^{(n_\ell)}_{i+1},\ldots,X^{(n_\ell)}_{i'-1},X^{(n_\ell)}_{\geq i'})\toindis (\mathcal P^\varepsilon(i),\mathcal P^\varepsilon(i+1),\ldots,\mathcal P^\varepsilon (i'-1),\mathcal P^\varepsilon([i',\infty)))\qquad\text{as }\ell\to\infty \ee holds.
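For the reader's convenience, we recall the elementary facts about Poisson random variables underlying the factorial-moment argument that follows: if $X\sim\text{Poi}(\lambda)$, then for every integer $a\geq 0$, \be \E{(X)_a}=\E{X(X-1)\cdots(X-a+1)}=\lambda^a, \ee and the point counts of a Poisson process over disjoint sets are independent, so that the joint factorial moments of the limiting vector factorise into products of such terms.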
We obtain this via the convergence of the factorial moments of $X^{(n_\mathrm{e}ll)}_i,\ldots,X^{(n_\mathrm{e}ll)}_{i'-1},X^{(n_\mathrm{e}ll)}_{\geq i'}$. Recall $r_k$ from~\mathrm{e}qref{eq:ek}. By Proposition~{\rm Re}f{prop:factmean}, for any non-negative integers $a_i,\ldots,a_{i'}$, \be \ba \E{\Big(X^{(n_\mathrm{e}ll)}_{\geq i'}\Big)_{a_{i'}}\prod_{k=i}^{i'-1}\Big(X_k^{(n_\mathrm{e}ll)}\Big)_{a_k}}={}&\Big(q_0\theta^{-i'+{\rm var}\ epsilon_{n_\mathrm{e}ll}}\Big)^{a_{i'}}\prod_{k=i}^{i'-1}\Big(q_0(1-\theta^{-1})\theta^{-k+{\rm var}\ epsilon_{n_\mathrm{e}ll}}\Big)^{a_k}\\ &\times\big(1+\mathcal O\big(r_{\lfloor \log_\theta n_\mathrm{e}ll\rfloor +i}\vee n_\mathrm{e}ll^{-\beta}\big)\big)\\ \to{}& \Big(q_0\theta^{-i'+{\rm var}\ epsilon}\Big)^{a_{i'}}\prod_{k=i}^{i'-1}\Big(q_0(1-\theta^{-1})\theta^{-k+{\rm var}\ epsilon}\Big)^{a_k}, \mathrm{e}a\mathrm{e}e as $\mathrm{e}ll\to\infty$. By using the properties of Poisson processes, it follows that the limit equals \be \E{\big(\mathcal P^{\rm var}\ epsilon[i',\infty)\big)_{a_{i'}}\prod_{k=i}^{i'-1}\big(\mathcal P^{\rm var}\ epsilon(k)\big)_{a_k}}, \mathrm{e}e due to the particular form of the intensity measure of the Poisson process $\mathcal P$ (which is used in the definition of the Poisson process $\mathcal P^{\rm var}\ epsilon$). The result then follows from~\cite[Theorem $6.10$]{JanLucRuc11}. \mathrm{e}nd{proof} For the results for the~{\rm Re}f{ass:weightweibull} and~{\rm Re}f{ass:weightgumbel} cases, as outlined in Theorems~{\rm Re}f{thrm:mainweibull} and~{\rm Re}f{thrm:maingumbel}, respectively, we combine the asymptotic behaviour of $p_{\geq k}$ in Theorem~{\rm Re}f{thrm:pkasymp} with Proposition~{\rm Re}f{lemma:degprobasymp} and Lemma~{\rm Re}f{lemma:maxdegwhp}. \begin{proof}[Proof of Theorem~{\rm Re}f{thrm:mainweibull}] To establish the convergence in probability, it follows from Lemma~{\rm Re}f{lemma:maxdegwhp} that we need only consider $n\mathbb{P}{\mathbb{Z}m_n(v_1)\geq k_n}$ for some adequate integer-valued $k_n$ such that $k_n<c\log n$ for some $c\in(0,\theta/(\theta-1))$ and where $v_1$ is a vertex selected from $[n]$ uniformly at random. By Proposition~{\rm Re}f{lemma:degprobasymp}, this quantity equals $np_{\geq k_n}(1+o(1))$. Then, we use Theorem~{\rm Re}f{thrm:pkasymp} and Remark~{\rm Re}f{rem:pgeqk} to obtain that, when $W$ satisfies the~{\rm Re}f{ass:weightweibull} case in Assumption~{\rm Re}f{ass:weights}, this quantity is at most \be n\overline C\, \overline L(k_n)k_n^{-(\alpha-1)}\theta^{-k_n}, \mathrm{e}e where $\overline C>1$ is a constant. Now fix an arbitrary $\mathrm{e}ta>0$ and set $k_n:=\lfloor \log_\theta n-(\alpha-1)(1-\mathrm{e}ta) \log_\theta\log_\theta n\rfloor$. This yields \be \ba\label{eq:weibullub} n\overline C{}&\,\overline L(\log_\theta n(1+o(1)))(\log_\theta n)^{-(\alpha-1)}\theta^{-\lfloor \log_\theta n-(\alpha-1)(1-\mathrm{e}ta) \log_\theta\log_\theta n\rfloor}(1+o(1)) \\ &\leq \overline C_2 \overline L(\log_\theta n)(\log_\theta n)^{-(\alpha-1)}(\log_\theta n)^{(\alpha-1)(1-\mathrm{e}ta)}\\ &=\overline C_2\overline L(\log_\theta n)(\log _\theta n)^{-(\alpha-1)\mathrm{e}ta}. \mathrm{e}a \mathrm{e}e Here, $\overline C_2>0$ is a suitable constant and we use that $k_n=\log_\theta n(1+o(1))$ in the first step. Furthermore, we use~\cite[Theorem $1.5.2$]{BinGolTeu87}, which states that for a slowly-varying function $\overline L$, $\overline L(\lambda x)/\overline L(x)$ converges to one uniformly in $\lambda$ on any interval $[a,b]$ such that $0<a\leq b<\infty$. 
As a result, $\overline L(\log_\theta n(1+o(1)))\leq c\overline L(\log_\theta n)$ for some constant $c>1$ and $n$ sufficiently large. Finally, we use~\cite[Proposition $1.3.6$ $(v)$]{BinGolTeu87} to obtain that for any $\mathrm{e}ta>0$, the final line of~\mathrm{e}qref{eq:weibullub} tends to zero with $n$. This shows that for any $\mathrm{e}ta>0$, with high probability, \be \max_{j\in[n]}\frac{\Zm_n(j)-\log_\theta n}{\log_\theta\log_\theta n}\leq -(\alpha-1)(1-\mathrm{e}ta) \mathrm{e}e holds, due to the first result in Lemma~{\rm Re}f{lemma:maxdegwhp}. A similar approach, when setting $k_n:=\lfloor \log_\theta n-(\alpha-1)(1+\mathrm{e}ta)\log_\theta \log_\theta n\rfloor$, yields \be n\mathbb{P}{\mathbb{Z}m_n(v_1)\geq k_n}\to\infty, \mathrm{e}e so that for any $\mathrm{e}ta>0$, with high probability, \be \max_{j\in[n]}\frac{\Zm_n(j)-\log_\theta n}{\log_\theta\log_\theta n}\geq -(\alpha-1)(1+\mathrm{e}ta) \mathrm{e}e holds. Together, these two bounds prove the desired result. \mathrm{e}nd{proof} \begin{proof}[Proof of Theorem~{\rm Re}f{thrm:maingumbel}] The proof of this theorem follows a similar approach to the proof of Theorem~{\rm Re}f{thrm:mainweibull}. That is, we again apply the results from Theorem~{\rm Re}f{thrm:pkasymp} together with the fact that \be n\mathbb{P}{\mathbb{Z}m_n(v_1)\geq k_n}=np_{\geq k_n}(1+o(1)), \mathrm{e}e for some adequate integer-valued $k_n$ such that $k_n<c\log n$ for some $c\in(0,\theta/(\theta-1))$, as follows from Proposition~{\rm Re}f{lemma:degprobasymp} and Lemma~{\rm Re}f{lemma:maxdegwhp}. In the~{\rm Re}f{ass:weightgumbel}-{\rm Re}f{ass:weighttaufin} sub-case, we know that \be \ba\label{eq:pkgumblb} p_{\geq k_n}=\mathrm{e}xp\Big(-\frac{\tau^\gamma}{1-\gamma}\Big(\frac{(1-\theta^{-1})k_n}{c_1}\Big)^{1-\gamma}(1+o(1))\Big)\theta^{-k_n}, \mathrm{e}a\mathrm{e}e where we recall that $\gamma=1/(\tau+1)$. To prove the desired results, we first set $k_n=\lfloor \log_\theta n-(1+\mathrm{e}ta)C_{\theta,\tau,c_1}(\log_\theta n)^{1-\gamma}\rfloor $ for any $\mathrm{e}ta>0$, where we recall $C_{\theta,\tau,c_1}$ from~\mathrm{e}qref{eq:gumbrv2nd}. Using this in~\mathrm{e}qref{eq:pkgumblb} then yields \be \ba np_{\geq k_n}&= \frac{n}{\theta^{k_n}}\mathrm{e}^{-\log (\theta) C_{\theta,\tau,c_1}k_n^{1-\gamma}(1+o(1))}\geq \mathrm{e}^{\mathrm{e}ta\log (\theta) C_{\theta,\tau,c_1}(\log_\theta n)^{1-\gamma}(1+o(1))}, \mathrm{e}a\mathrm{e}e where we use that $k_n^{1-\gamma}=(\log_\theta n)^{1-\gamma}(1+o(1))$ in the last step. Hence, $n\mathbb{P}{\mathbb{Z}m_n(v_1)\geq k_n}$ diverges. We thus conclude from Lemma~{\rm Re}f{lemma:maxdegwhp} that \be \max_{j\in[n]}\frac{\Zm_n(j)-\log_\theta n}{(\log_\theta n)^{1-\gamma}}\geq -(1+\mathrm{e}ta)C_{\theta,\tau,c_1} \mathrm{e}e holds with high probability. A similar approach, setting \\ $k_n:=\lceil \log_\theta n-(1-\mathrm{e}ta)C_{\theta,\tau,c_1}(\log_\theta n)^{1-\gamma}\rceil$ and combining this with the first result of Lemma~{\rm Re}f{lemma:maxdegwhp} yields \be \max_{j\in[n]}\frac{\Zm_n(j)-\log_\theta n}{(\log_\theta n)^{1-\gamma}}\leq -(1-\mathrm{e}ta)C_{\theta,\tau,c_1} \mathrm{e}e holds with high probability. Together, these two bounds prove~\mathrm{e}qref{eq:gumbrv2nd}. To prove~\mathrm{e}qref{eq:gumbrav2nd} we apply the same methodology but use the asymptotic expression of $p_k$ (and $p_{\geq k}$ by adjusting constants), as in~\mathrm{e}qref{eq:pkbddgumbelrav}. 
We recall the constants $C_1,C_2,C_3$ from~\eqref{eq:c123} and set $k_n:=\lceil \log_\theta n-C_1(\log_\theta \log_\theta n)^\tau+C_2(\log_\theta \log_\theta n)^{\tau-1}\log_\theta\log_\theta \log_\theta n+(C_3+\eta)(\log_\theta\log_\theta n)^{\tau-1}\rceil$, for any $\eta>0$. Then,~\eqref{eq:pkbddgumbelrav} yields \be np_{\geq k_n}= \frac{n}{\theta^{k_n}} \exp\Big(-\Big(\frac{\log k_n}{c_1}\Big)^\tau\Big(1+\frac{\tau(\tau-1)\log\log k_n}{\log k_n}-\frac{K_{\tau,c_1,\theta}}{\log k_n}(1+o(1))\Big)\Big). \ee Using Taylor expansions, we obtain \be \ba -\Big(\frac{\log k_n}{c_1}\Big)^\tau={}&-\Big(\frac{\log \log_\theta n}{c_1}\Big)^\tau +o(1)=-\log (\theta) C_1(\log_\theta \log_\theta n)^\tau+o(1),\\ \frac{\tau(\tau-1)}{c_1^\tau}(\log k_n)^{\tau-1}\log\log k_n={}&\frac{\tau(\tau-1)}{c_1^\tau}(\log\log_\theta n)^{\tau-1}\log\log\log_\theta n+o(1)\\ ={}&\log(\theta )C_2(\log_\theta\log_\theta n)^{\tau-1}\log_\theta\log_\theta\log_\theta n\\ &+ (\log_\theta(\log\theta))(\log\theta)^\tau\frac{\tau(\tau-1)}{c_1^\tau}(\log_\theta\log_\theta n)^{\tau-1}+o(1),\\ -\frac{K_{\tau,c_1,\theta}}{c_1^\tau}(\log k_n)^{\tau-1}={}&-\frac{K_{\tau,c_1,\theta}}{c_1^\tau}(\log\log_\theta n)^{\tau-1}+o(1)\\ ={}&-(\log \theta)^{\tau-1}\frac{K_{\tau,c_1,\theta}}{c_1^\tau}( \log_\theta\log_\theta n)^{\tau-1}+o(1), \ea \ee where we recall $K_{\tau,c_1,\theta}$ from~\eqref{eq:pkbddgumbelrav} in Theorem~\ref{thrm:pkasymp}. As a result, \be \ba n{}&\exp\Big(-\Big(\frac{\log k_n}{c_1}\Big)^\tau\!\!+\frac{\tau(\tau-1)}{c_1^\tau}(\log k_n)^{\tau-1}\log\log k_n-\frac{K_{\tau,c_1,\theta}}{c_1^{\tau}}(\log k_n)^{\tau-1}(1+o(1))\Big)\theta^{-k_n}\\ &= n\exp\big(-\log (\theta) C_1(\log_\theta\log_\theta n)^\tau+\log(\theta )C_2 (\log_\theta \log_\theta n)^{\tau-1}\log_\theta\log_\theta\log_\theta n\\ &\hspace{1.8cm}+\log( \theta) C_3 (\log_\theta\log_\theta n)^{\tau-1}(1+o(1))\big)\theta^{-k_n}\\ &\leq \exp\big(-(\eta-o(1)) (\log_\theta\log_\theta n)^{\tau-1}\big), \ea\ee where in the last step we use that $k_n\geq \log_\theta n-C_1(\log_\theta \log_\theta n)^\tau+C_2(\log_\theta \log_\theta n)^{\tau-1}\log_\theta\log_\theta \log_\theta n+(C_3+\eta)(\log_\theta\log_\theta n)^{\tau-1}$. As the right-hand side tends to zero with $n$, Lemma~\ref{lemma:maxdegwhp} yields for any fixed $\eta>0$, with high probability, \be \max_{i\in[n]}\frac{\Zm_n(i)-\big(\log_\theta n-C_1(\log_\theta\log_\theta n)^\tau+C_2(\log_\theta \log_\theta n)^{\tau-1}\log_\theta \log_\theta\log_\theta n\big)}{(\log_\theta\log_\theta n)^{\tau-1}}\leq C_3+\eta. \ee With a similar approach, setting \be k_n=\lfloor \log_\theta n-C_1(\log_\theta \log_\theta n)^\tau+C_2(\log_\theta \log_\theta n)^{\tau-1}\log_\theta\log_\theta \log_\theta n+(C_3-\eta)(\log_\theta\log_\theta n)^{\tau-1}\rfloor, \ee we can obtain that for any fixed $\eta>0$, with high probability, \be \max_{i\in[n]}\frac{\Zm_n(i)-\big(\log_\theta n-C_1(\log_\theta\log_\theta n)^\tau+C_2(\log_\theta \log_\theta n)^{\tau-1}\log_\theta \log_\theta\log_\theta n\big)}{(\log_\theta\log_\theta n)^{\tau-1}}\geq C_3-\eta. \ee Together these two bounds yield~\eqref{eq:gumbrav2nd}, which concludes the proof.
\mathrm{e}nd{proof} \begin{proof}[Proof of Theorem~{\rm Re}f{thrm:maxtail}] We first prove the asymptotic distribution of the maximum degree, whose proof follows the same approach as the proof of~\cite[Theorem $1.3$]{AddEsl18}. We need to consider two cases: $i_n=\mathcal O(1)$ and $i_n\to\infty$ such that $i_n+\log_\theta n <(\theta/(\theta-1))\log n$ and $\liminf_{n\to\infty}i_n >-\infty$. For the former case, as $\mathrm{e}xp(-q_0\theta^{-i_n+{\rm var}\ epsilon_n})=\mathcal O(1)$, it suffices to prove \be \mathbb{P}{\max_{j\in[n]}\Zm_n(j)\geq \lfloor \log_\theta n\rfloor +i_n}-(1-\mathrm{e}xp(-q_0\theta^{-i_n+{\rm var}\ epsilon_n}))\to 0\qquad \text{as }n\to\infty. \mathrm{e}e By the definition of $X^{(n)}_{\geq i}$ in~\mathrm{e}qref{eq:xni}, this is equivalent to \be \label{eq:limconv} \mathbb{P}{X^{(n)}_{\geq i_n}=0}-\mathrm{e}xp(-q_0\theta^{-i_n+{\rm var}\ epsilon_n})\to0\qquad \text{as }n\to\infty. \mathrm{e}e This follows from Theorem~{\rm Re}f{thrm:mainatom} and the subsubsequence principle. That is, if we assume the convergence in~\mathrm{e}qref{eq:limconv} does not hold, then there exists a subsequence $(n_\mathrm{e}ll)_{\mathrm{e}ll\in\mathbb{N}}$ and a $\mathrm{d}elta>0$ such that \be \label{eq:subseq} \mathbb{P}{X^{(n_\mathrm{e}ll)}_{\geq i_{n_\mathrm{e}ll}}=0}-\mathrm{e}xp(-q_0\theta^{-i_{n_\mathrm{e}ll}+{\rm var}\ epsilon_{n_\mathrm{e}ll}})>\mathrm{d}elta\ \ \forall\,\mathrm{e}ll\in\mathbb{N}. \mathrm{e}e However, as ${\rm var}\ epsilon_{n_\mathrm{e}ll}$ is bounded, there exists a subsubsequence ${\rm var}\ epsilon_{n_{\mathrm{e}ll_k}}$ such that ${\rm var}\ epsilon_{n_{\mathrm{e}ll_k}}\to{\rm var}\ epsilon$ for some ${\rm var}\ epsilon\in[0,1]$. Then, by Theorem~{\rm Re}f{thrm:mainatom}, the statement in~\mathrm{e}qref{eq:subseq} is false, from which the result follows. In the latter case, we need only consider $\E{X^{(n)}_{\geq i_n}}$ and $\E{\big(X^{(n)}_{\geq i_n}\big)_2}$, as by~\cite[Corollary $1.11$]{Bol01}, \be \label{eq:probbounds} \E{X^{(n)}_{\geq i_n}}-\frac{1}{2}\E{\big(X^{(n)}_{\geq i_n}\big)_2}\leq \mathbb{P}{X^{(n)}_{\geq i_n}>0}\leq \E{X^{(n)}_{\geq i_n}}. \mathrm{e}e By Proposition~{\rm Re}f{prop:factmean}, we have that \be \E{X^{(n)}_{\geq i_n}}=q_0\theta^{-i_n+{\rm var}\ epsilon_n}(1+o(1)),\qquad \E{\big(X^{(n)}_{\geq i_n}\big)_2}=\big(q_0\theta^{-i_n+{\rm var}\ epsilon_n}\big)^2(1+o(1)). \mathrm{e}e Hence, as $i_n\to\infty$ and ${\rm var}\ epsilon_n$ is bounded, \be \ba \E{X^{(n)}_{\geq i_n}}&=\big(1-\mathrm{e}xp\big(-q_0\theta^{-i_n+{\rm var}\ epsilon_n}\big)\big)(1+o(1)),\\ \E{X^{(n)}_{\geq i_n}}-\frac12 \E{\big(X^{(n)}_{\geq i_n}\big)_2}&=\big(1-\mathrm{e}xp\big(-q_0\theta^{-i_n+{\rm var}\ epsilon_n}\big)\big)(1+o(1)). \mathrm{e}a\mathrm{e}e Combining this with~\mathrm{e}qref{eq:probbounds} yields the desired result. Recall ${\rm var}\ epsilon_n:=\log_\theta n-\lfloor \log_\theta n\rfloor$. We now prove the limiting distribution of $|\mathcal{M}_{n_\mathrm{e}ll}|$ has the desired distribution, as in~\mathrm{e}qref{eq:meps}, when the subsequence $(n_\mathrm{e}ll)_{\mathrm{e}ll\in\mathbb{N}}$ is such that ${\rm var}\ epsilon_{n_\mathrm{e}ll}\to {\rm var}\ epsilon\in[0,1]$, which follows the same approach as~\cite[Theorem $1.1$]{Esl16}. Consider the event $\mathcal E_{j,k}:=\{X^{(n_\mathrm{e}ll)}_j=k, X^{(n_\mathrm{e}ll)}_{\geq j+1}=0\}$ for $j\in\mathbb{Z}$ and $k\in\mathbb{N}$. $\mathcal E_{j,k}$ implies that there are exactly $k$ vertices which attain the maximum degree $\lfloor \log_\theta n\rfloor+j$ in the tree $T_{n_\mathrm{e}ll}$. 
We observe that the events $\mathcal E_{j,k}$ are pairwise disjoint for different values of $j$. As a result, we obtain the following inequalities: For any $M\in\mathbb{N}$, \be\ba \mathbb{P}{|\mathcal{M}_{n_\mathrm{e}ll}|=k}=\mathbb P\bigg(\bigcup_{j\in\mathbb{Z}}\mathcal E_{j,k}\bigg)&\leq \mathbb P\bigg(\bigcup_{j\leq -(M+1)} \mathcal E_{j,k}\bigg)+\sum_{j=-M}^{M-1}\mathbb{P}{\mathcal E_{j,k}}+\mathbb P\bigg(\bigcup_{j\geq M}\mathcal E_{j,k}\bigg)\\ &\leq \mathbb{P}{X^{(n_\mathrm{e}ll)}_{\geq -M}=0}+\sum_{j=-M}^{M-1}\mathbb{P}{\mathcal E_{j,k}}+\mathbb{P}{X^{(n_\mathrm{e}ll)}_{\geq M}>0}, \mathrm{e}a\mathrm{e}e and \be \mathbb{P}{|\mathcal{M}_{n_\mathrm{e}ll}|=k}=\mathbb P\bigg(\bigcup_{j\in\mathbb{Z}}\mathcal E_{j,k}\bigg)\geq \sum_{j=-M}^{M-1}\mathbb{P}{\mathcal E_{j,k}}. \mathrm{e}e By Theorem~{\rm Re}f{thrm:mainatom} it thus follows that \be \ba \limsup_{n_\mathrm{e}ll\to\infty}\mathbb{P}{|\mathcal{M}_{n_\mathrm{e}ll}|=k}&\leq \liminf_{M\in\mathbb{N}}\bigg(\mathbb{P}{X_{\geq -M}^{\rm var}\ epsilon=0}+\sum_{j=-M}^{M-1}\mathbb{P}{X_j^{\rm var}\ epsilon=k,X^{\rm var}\ epsilon_{\geq j}=0}+\mathbb{P}{X^{\rm var}\ epsilon_{\geq M}>0}\bigg),\\ \liminf_{n_\mathrm{e}ll\to\infty}\mathbb{P}{|\mathcal{M}_{n_\mathrm{e}ll}|=k}&\geq \limsup_{M\in\mathbb{N}}\sum_{j=-M}^M \mathbb{P}{X^{\rm var}\ epsilon_j=k,X^{\rm var}\ epsilon_{\geq j+1}=0}, \mathrm{e}a\mathrm{e}e where $X_j^{\rm var}\ epsilon\sim \text{Poi}\big(q_0(1-\theta^{-1})\theta^{-j+{\rm var}\ epsilon}\big)$ and $X^{\rm var}\ epsilon_{\geq j}\sim\text{Poi}\big(q_0\theta^{-j+{\rm var}\ epsilon}\big)$ are independent Poisson random variables, for $j\in\mathbb{Z}$. As a result, \be \ba \mathbb{P}{X_{\geq -M}^{\rm var}\ epsilon=0}&=\mathrm{e}^{-q_0\theta^{M+{\rm var}\ epsilon}}, \qquad \mathbb{P}{X^{\rm var}\ epsilon_{\geq M}>0}=1-\mathrm{e}^{-q_0\theta^{-M+{\rm var}\ epsilon}}, \\ \mathbb{P}{X^{\rm var}\ epsilon_j=k, X^{\rm var}\ epsilon_{\geq j+1}=0}&=\mathbb{P}{X^{\rm var}\ epsilon_j=k}\mathbb{P}{X^{\rm var}\ epsilon_{\geq j+1}=0}=\frac{1}{k!}\big(q_0(1-\theta^{-1})\theta^{-j+{\rm var}\ epsilon}\big)^k\mathrm{e}^{-q_0\theta^{-j+{\rm var}\ epsilon}}. \mathrm{e}a\mathrm{e}e Hence, we obtain \be \ba \limsup_{n_\mathrm{e}ll\to\infty}\mathbb{P}{|\mathcal{M}_{n_\mathrm{e}ll}|=k}&\leq \liminf_{M\in\mathbb{N}}\bigg(\mathrm{e}^{-q_0\theta^{M+{\rm var}\ epsilon}}\!\!+\!\sum_{j=-M}^{M-1}\!\frac{1}{k!}\big(q_0(1-\theta^{-1})\theta^{-j+{\rm var}\ epsilon}\big)^k\mathrm{e}^{-q_0\theta^{-j+{\rm var}\ epsilon}}\!+1-\mathrm{e}^{-q_0\theta^{-M+{\rm var}\ epsilon}}\bigg)\\ &\leq\sum_{j\in\mathbb{Z}}\frac{1}{k!}\big(q_0(1-\theta^{-1})\theta^{-j+{\rm var}\ epsilon}\big)^k\mathrm{e}^{-q_0\theta^{-j+{\rm var}\ epsilon}},\\ \liminf_{n_\mathrm{e}ll\to\infty}\mathbb{P}{|\mathcal{M}_{n_\mathrm{e}ll}|=k}&\geq\limsup_{M\in\mathbb{N}}\sum_{j=-M}^M\frac{1}{k!}\big(q_0(1-\theta^{-1})\theta^{-j+{\rm var}\ epsilon}\big)^k\mathrm{e}^{-q_0\theta^{-j+{\rm var}\ epsilon}}\\ &=\sum_{j\in\mathbb{Z}}\frac{1}{k!}\big(q_0(1-\theta^{-1})\theta^{-j+{\rm var}\ epsilon}\big)^k\mathrm{e}^{-q_0\theta^{-j+{\rm var}\ epsilon}}. \mathrm{e}a \mathrm{e}e It is then readily checked that the limit is indeed finite and that summing over all $k\in\mathbb{N}$ yields one, which concludes the proof. \mathrm{e}nd{proof} \begin{proof}[Proof of Theorem~{\rm Re}f{thrm:asympnormal}] The proof follows the same argument as the proof of~\cite[Theorem $1.4$]{AddEsl18}, which is based on~\cite[Theorem $1.24$]{Bol01}. Let $1\leq a\leq b$ be integers. 
Then, by Proposition~{\rm Re}f{prop:factmean} and since $i_n=o(\log n)$, \be \E{\Big(X^{(n)}_{i_n}\Big)_a}-\Big(q_0(1-\theta^{-1})\theta^{-i_n+{\rm var}\ epsilon_n}\Big)^a=\mathcal O\big(\theta^{-i_na}\big(r_{\lfloor \log_\theta n+i_n\rfloor}\vee n^{-\beta}\big)\big). \mathrm{e}e It then remains to show that the right-hand side is in fact $o(\theta^{i_nb})$. We note that $i_n=o(\log r_{\log_\theta n}\wedge \log n)$, so that we can write the right-hand side as \be \mathcal O\big((r_{\log_\theta n})^{1-i_na\log \theta/\log r_{\log_\theta n}}\vee n^{-\beta -i_na\log\theta/\log n}\big)=\mathcal O\big((r_{\log_\theta n})^{1-o(1)}\vee n^{-\beta -o(1)}\big) =o(\theta^{i_nb}), \mathrm{e}e by the constraints on $i_n$, from which the result follows. \mathrm{e}nd{proof} \section{Technical details of examples}\label{sec:ex} In this section we discuss some technical details of the examples discussed in Section {\rm Re}f{sec:exres}. In particular, for each example we provide a precise asymptotic expression of $p_k$ and $p_{\geq k}$ as well as a key element that leads to the results in Section {\rm Re}f{sec:exres}. That is, for each of the examples we state and prove the analogue of Proposition~{\rm Re}f{prop:factmean}. The three theorems presented in each of the examples in Section {\rm Re}f{sec:exres} mimic three of the theorems presented in Section~{\rm Re}f{sec:results}. That is, Theorems~{\rm Re}f{thrm:betappp} and~{\rm Re}f{thrm:gumbppp} are the analogue of Theorems~{\rm Re}f{thrm:mainatom}, Theorems~{\rm Re}f{thrm:betamaxtail} and~{\rm Re}f{thrm:gumbmaxtail} are the analogue of Theorem~{\rm Re}f{thrm:maxtail}, and Theorems~{\rm Re}f{thrm:betaasympnorm} and~{\rm Re}f{thrm:gumbasympnorm} are the analogue of Theorem~{\rm Re}f{thrm:asympnormal}. As a result, their proofs are very similar to the proofs of Theorems~{\rm Re}f{thrm:mainatom},~{\rm Re}f{thrm:maxtail}, and~{\rm Re}f{thrm:asympnormal}, so we omit proving the theorems stated in Section~{\rm Re}f{sec:exres}. \subsection{Example {\rm Re}f{ex:beta}, beta distribution} Let $W$ be beta distributed, i.e.\ its distribution function satisfies~\mathrm{e}qref{eq:betacdf} for some $\alpha,\beta>0$. We prove a result on (the tail of) the limiting degree distribution and provide additional building blocks required to prove the results in Example {\rm Re}f{ex:beta}. \begin{lemma}\label{lemma:betapkasymp} Let the distribution of $W$ satisfy~\mathrm{e}qref{eq:betacdf} for some $\alpha,\beta>0$ and recall $p_k,p_{\geq k}$ from~\mathrm{e}qref{eq:pk}. Then, \be\ba \label{eq:betapk} p_k&= \frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)}(1-\theta^{-1})^{1-\beta}k^{-\beta}\theta^{-k}\big(1+\mathcal O(1/k)\big),\\ p_{\geq k}&=\frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)} (1-\theta^{-1})^{-\beta} k^{-\beta}\theta ^{-k}\big(1+\mathcal O(1/k)\big). \mathrm{e}a \mathrm{e}e \mathrm{e}nd{lemma} Note that this lemma improves on the bounds in~\mathrm{e}qref{eq:pkbddweibull} by providing a precise multiplicative constant, rather than two slowly-varying functions that are (possibly) of different order. 
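As a concrete illustration of the constants (this special case is not needed in what follows), taking $\alpha=\beta=1$ in~\eqref{eq:betapk}, so that $W$ is uniform on $[0,1]$, the prefactors simplify to $\Gamma(\alpha+\beta)/\Gamma(\alpha)=1$, $(1-\theta^{-1})^{1-\beta}=1$ and $(1-\theta^{-1})^{-\beta}=(1-\theta^{-1})^{-1}$, and the lemma reads \be p_k=k^{-1}\theta^{-k}\big(1+\mathcal O(1/k)\big),\qquad p_{\geq k}=(1-\theta^{-1})^{-1}k^{-1}\theta^{-k}\big(1+\mathcal O(1/k)\big). \ee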
\begin{proof} By the distribution of $W$, we immediately obtain that \be\ba p_k&=\int_0^1 (\theta-1)x^k (\theta-1+x)^{-(k+1)}\frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(\beta)}x^{\alpha-1}(1-x)^{\beta-1}\,\mathrm{d} x\\ &=(\theta-1)^{-k}\frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(\beta)}\int_0^1 x^{k+\alpha-1}(1-x)^{\beta-1}(1+x/(\theta-1))^{-(k+1)}\,\mathrm{d} x, \mathrm{e}a\mathrm{e}e We now use Euler's integral representation of the hypergeometric function. That is, for $a,b,c,z\in\mathbb C$ such that $\mathrm{Re}(c)>\mathrm{Re}(b)>0$ and $z$ is not a real number greater than one, \be \int_0^1 x^{b-1}(1-x)^{c-b-1}(1-zx)^{-a}\,\mathrm{d} x=\frac{\mathcal{G}amma(c-b)\mathcal{G}amma(b)}{\mathcal{G}amma(c)} {}_2F_1(a,b,c,z), \mathrm{e}e where $_2F_1$ is the hypergeometric function. We thus obtain \be (\theta-1)^{-k}\frac{\mathcal{G}amma(\alpha+\beta)\mathcal{G}amma(k+\alpha)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(k+\alpha+\beta)} {}_2F_1(k+1,k+\alpha,k+\alpha+\beta,-1/(\theta-1)). \mathrm{e}e We then use one of the Euler transformations of the hypergeometric function, \be _2F_1(a,b,c,z)=(1-z)^{c-a-b} {}_2F_1(c-a,c-b,c,z), \mathrm{e}e to arrive at \be \label{eq:betapkexpl} \theta^{-k}\frac{\mathcal{G}amma(\alpha+\beta)\mathcal{G}amma(k+\alpha)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(k+\alpha+\beta)} \Big(\frac{\theta}{\theta-1}\Big)^{\beta-1} {}_2F_1(\alpha+\beta-1,\beta,k+\alpha+\beta,-1/(\theta-1)). \mathrm{e}e We directly find a particular case in which we can find the value of the hypergeometric function explicitly, namely when $\alpha+\beta=1$. When $\alpha+\beta=1$, we find that the hypergeometric function on the right-hand side of~\mathrm{e}qref{eq:betapkexpl} equals one as the first argument equals zero, independent of the other arguments, so that we arrive at \be \frac{(1-\theta^{-1})^{1-\beta}\mathcal{G}amma(k+\alpha)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(k+1)}\theta^{-k}=\frac{(1-\theta^{-1})^{1-\beta}}{\mathcal{G}amma(\alpha)}k^{-\beta}\theta^{-k}\big(1+\mathcal O(1/k)\big), \mathrm{e}e since $\mathcal{G}amma(x+a)/\mathcal{G}amma(x)=x^a\big(1+\mathcal O(1/x)\big)$ as $x\to\infty$ and $\alpha=1-\beta$ in this particular case. When $\alpha+\beta\neq 1$, we can obtain a similar expression. First, we use one of Pfaff's transformations for the hypergeometric function, \be _2F_1(a,b,c,z)=(1-z)^{-b}{}_2F_1(b,c-a,c,z/(z-1)). \mathrm{e}e Then, applying this to the right-hand side of~\mathrm{e}qref{eq:betapkexpl}, so that $z/(z-1)=1/\theta\in(-1,1)$, we can express the hypergeometric function as a power series. Namely, for $z$ such that $|z|<1$, \be _2F_1(a,b,c,z)=\sum_{j=0}^\infty \frac{a^{(j)}b^{(j)}}{c^{(j)}}\frac{z^j}{\mathcal{G}amma(j)}, \mathrm{e}e where $a^{(j)}:=\prod_{\mathrm{e}ll=1}^j (a+(\mathrm{e}ll-1))$ (and similarly for $b^{(j)},c^{(j)}$). Thus, combining the Pfaff transformation and the power series representation yields \be \label{eq:alphabetaseries} {}_2F_1(\alpha+\beta-1,\beta,k+\alpha+\beta,-1/(\theta-1))=\Big(\frac{\theta}{\theta-1}\Big)^{-\beta}\sum_{j=0}^\infty \frac{\beta^{(j)}(k+1)^{(j)}}{(k+\alpha+\beta)^{(j)}}\frac{\theta^{-j}}{j!}. \mathrm{e}e From the $\alpha+\beta=1$ case, we immediately distil that \be \label{eq:betathetasum} \sum_{j=0}^\infty \frac{\beta^{(j)}}{j!}\theta^{-j}=\Big(\frac{\theta}{\theta-1}\Big)^\beta. 
\mathrm{e}e The aim is to show that for $k$ large, the series in~\mathrm{e}qref{eq:alphabetaseries} is close to $(\theta/(\theta-1))^\beta$ for any choice of $\alpha,\beta>0$, so that the entire term in~\mathrm{e}qref{eq:alphabetaseries} is close to one. We consider two cases, namely $\alpha+\beta<1$ and $\alpha+\beta>1$. Let us start with the latter. We immediately obtain the upper bound $(k+\alpha+\beta)^{(j)}>(k+1)^{(j)}$, so that using~\mathrm{e}qref{eq:betathetasum} yields that the right-hand side of~\mathrm{e}qref{eq:alphabetaseries} is at most one. For a lower bound, note that \be \frac{(k+1)^{(j)}}{(k+\alpha+\beta)^{(j)}}=\prod_{\mathrm{e}ll=1}^j \Big(1-\frac{\alpha+\beta-1}{k+\alpha+\beta+(\mathrm{e}ll-1)}\Big)\geq \Big(1-\frac{\alpha+\beta-1}{k+\alpha+\beta}\Big)^j, \mathrm{e}e as the fraction in the second step in decreasing in $\mathrm{e}ll$, since $\alpha+\beta-1>0$. We thus have \be \Big(\frac{\theta}{\theta-1}\Big)^{-\beta}\sum_{j=0}^\infty \frac{\beta^{(j)}(k+1)^{(j)}}{(k+\alpha+\beta)^{(j)}}\frac{\theta^{-j}}{j!}\geq \Big(\frac{\theta}{\theta-1}\Big)^{-\beta}\sum_{j=0}^\infty \frac{\beta^{(j)}}{j!}\Big( \Big(1-\frac{\alpha+\beta-1}{k+\alpha+\beta}\Big)\frac{1}{\theta}\Big)^j, \mathrm{e}e which, using~\mathrm{e}qref{eq:betathetasum}, equals \be \Big(\frac{\theta-1}{\theta-1+\frac{\alpha+\beta-1}{k+\alpha+\beta}}\Big)^\beta=\Big(1-\frac{\alpha+\beta-1}{(\theta-1)(k+\alpha+\beta)+(\alpha+\beta-1)}\Big)^\beta =1-\mathcal O(1/k). \mathrm{e}e A similar approach can be used for $\alpha+\beta<1$, where one would have an elementary lower bound and an upper bound that is $1+\mathcal O(1/k)$. In total, combining the above in~\mathrm{e}qref{eq:alphabetaseries} and in~\mathrm{e}qref{eq:betapkexpl} yields \be p_k=\frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)}(1-\theta^{-1})^{1-\beta}k^{-\beta}\theta^{-k}\big(1+\mathcal O(1/k)\big). \mathrm{e}e An equivalent computation can be performed for \be \label{eq:int2} \int_0^1 x^k (\theta-1+x)^{-k}\frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(\beta)}x^{\alpha-1}(1-x)^{\beta-1}\,\mathrm{d} x, \mathrm{e}e to obtain that it equals \be \ba \theta ^{-k}{}&\frac{\mathcal{G}amma(\alpha+\beta)\mathcal{G}amma(k+\alpha)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(k+\alpha+\beta)} \Big(\frac{\theta}{\theta-1}\Big)^\beta {}_2F_1(\alpha+\beta,\beta,k+\alpha+\beta,-1/(\theta-1))\\ &=\theta ^{-k}\frac{\mathcal{G}amma(\alpha+\beta)\mathcal{G}amma(k+\alpha)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(k+\alpha+\beta)} {}_2F_1(\beta,k,k+\alpha+\beta,1/\theta)\\ &=\theta ^{-k}\frac{\mathcal{G}amma(\alpha+\beta)\mathcal{G}amma(k+\alpha)}{\mathcal{G}amma(\alpha)\mathcal{G}amma(k+\alpha+\beta)} \sum_{j=0}^\infty \frac{\beta^{(j)}k^{(j)}}{(k+\alpha+\beta)^{(j)}}\frac{\theta^{-j}}{j!}. \mathrm{e}a \mathrm{e}e In this case an equivalent approach for bounding the sum on the right-hand side works for all $\alpha,\beta>0$. Hence, for~\mathrm{e}qref{eq:int2} we obtain the asymptotic expression \be p_{\geq k}=\frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)} (1-\theta^{-1})^{-\beta} k^{-\beta}\theta ^{-k}\big(1+\mathcal O(1/k)\big), \mathrm{e}e which concludes the proof. 
\mathrm{e}nd{proof} Recall that in this example we set \be\ba X^{(n)}_i&:=|\{j\in[n]: \Zm_n(j)=\lfloor \log_\theta n-\beta\log_\theta\log_\theta n \rfloor +i\}|,\\ X^{(n)}_{\geq i}&:=|\{j\in[n]: \Zm_n(j)\geq \lfloor \log_\theta n-\beta \log_\theta\log_\theta n\rfloor +i\}|,\\ {\rm var}\ epsilon_n&:=(\log_\theta n-\beta \log_\theta\log_\theta n)-\lfloor\log_\theta n-\beta\log_\theta\log_\theta n \rfloor. \mathrm{e}a\mathrm{e}e We then state the analogue of Proposition~{\rm Re}f{prop:factmean}. \begin{proposition}\label{prop:betafactmean} Consider the WRT model as in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ whose distribution satisfies~\mathrm{e}qref{eq:betacdf} for some $\alpha,\beta>0$, and recall $c_{\alpha,\beta,\theta}$ from~\mathrm{e}qref{eq:epsnbeta}. For a fixed $K\in\mathbb{N},c\in(0,\theta/(\theta-1))$, the following holds. For any integer-valued $i_n,i_n',$ such that $0<\log_\theta n+i_n<\log_\theta n+i_n'< c\log n$ and $i_n,i_n'= \mathrm{d}elta\log_\theta n+o(\log n)$, for some $\mathrm{d}elta\in(-1,c\log\theta-1)\cup\{0\}$ and for $a_{i_n},\ldots,a_{i_n'}\in\mathbb{N}_0$ satisfying $a_{i_n}+\cdots+a_{i_n'}=K$, \be\ba \mathbb E\bigg[\Big(X^{(n_\mathrm{e}ll)}_{\geq i_n'}\Big)_{a_{i_n'}}\prod_{k=i_n}^{i_n'-1}\Big(X_k^{(n_\mathrm{e}ll)}\Big)_{a_k}\bigg]={}&\Big(\frac{c_{\alpha,\beta,\theta}}{(1+\mathrm{d}elta)^{\beta}}\theta^{-i_n'+{\rm var}\ epsilon_n}\Big)^{a_{i_n'}}\prod_{k=i_n}^{i_n'-1}\Big(\frac{c_{\alpha,\beta,\theta}(1-\theta^{-1})}{(1+\mathrm{d}elta)^{\beta}}\theta^{-k+{\rm var}\ epsilon_n}\Big)^{a_k}\\ &\times\Big(1+\mathcal O\Big(\frac{\log\log n}{\log n}\vee \frac{|i_n-\mathrm{d}elta\log_\theta n|\vee |i_n'-\mathrm{d}elta\log_\theta n|}{\log n}\Big)\Big). \mathrm{e}a\mathrm{e}e \mathrm{e}nd{proposition} \begin{proof} Set $K':=K-a_{i_n'}$ and for each $i_n\leq k\leq i_n'$ and each $u$ such that $\sum_{\mathrm{e}ll=i_n}^{k-1}a_\mathrm{e}ll<u\leq \sum_{\mathrm{e}ll=i_n}^k a_\mathrm{e}ll$, let $m_u=\lfloor \log_\theta n-\log_\theta\log_\theta n \rfloor +k$. Also, let $(v_u)_{u\in[K]}$ be $K$ vertices selected uniformly at random without replacement from $[n]$. Then, as the $X^{(n)}_{\geq k}$ and $X^{(n)}_{k}$ can be expressed as sums of indicators, following the same steps as in the proof of Proposition~{\rm Re}f{prop:factmean}, \be\label{eq:meanexunif} \E{\Big(X^{(n)}_{\geq i_n'}\Big)_{a_{i_n'}}\prod_{k=i_n}^{i_n'-1}\Big(X_k^{(n)}\Big)_{a_k}}=(n)_K\sum_{\mathrm{e}ll=0}^{K'}\sum_{\substack{S\subseteq [K']\\ |S|=\mathrm{e}ll}}(-1)^\mathrm{e}ll \mathbb{P}{\mathbb{Z}m_n(v_u)\geq m_u+\ind_{\{u\in S\}}\text{ for all } u\in [K]}. \mathrm{e}e By Proposition~{\rm Re}f{lemma:degprobasymp}, \be \label{eq:tailprobunif} \mathbb{P}{\mathbb{Z}m_n(v_\mathrm{e}ll)\geq m_u+\ind_{\{u\in S\}}\text{ for all } u\in [K]}=\prod_{u=1}^K \E{\Big(\frac{W}{\E{W}+W}\Big)^{m_u+\ind_{\{u\in S\}}}}(1+o(n^{-\beta})), \mathrm{e}e for some $\beta>0$. By Lemma~{\rm Re}f{lemma:betapkasymp}, when $|S|=\mathrm{e}ll$, \be\ba \prod_{u=1}^K {}&\E{\Big(\frac{W}{\E{W}+W}\Big)^{m_u+\ind_{\{u\in S\}}}}\\ &=\Big(\frac{\mathcal{G}amma(\alpha+\beta)}{\mathcal{G}amma(\alpha)}(1-\theta^{-1})^{-\beta}\Big)^K \theta^{-\sum_{u=1}^K m_u-\mathrm{e}ll}\prod_{u=1}^K (m_u+\ind_{\{u\in S\}})^{-\beta}(1+\mathcal O (1/\log n)). \mathrm{e}a\mathrm{e}e Here, we are able to obtain the error term $1-\mathcal O(1/\log n)$ due to the fact that $\log_\theta n+i_n>\mathrm{e}ta \log n$ for some $\mathrm{e}ta\in(0,1+\mathrm{d}elta)$ when $n$ is large. 
Moreover, as $i_n,i_n'= \mathrm{d}elta\log_\theta n+o(\log n)$ and thus $m_u\sim (1+\mathrm{d}elta)\log_\theta n$ for each $u\in[K]$, \be \prod_{u=1}^K(m_u+\ind_{\{u\in S\}})^{-\beta }=( (1+\mathrm{d}elta)\log_\theta n)^{-\beta K}\Big(1+\mathcal O\Big(\frac{\log\log n}{\log n}\vee \frac{|i_n-\mathrm{d}elta\log_\theta n|\vee |i_n'-\mathrm{d}elta\log_\theta n|}{\log n}\Big)\Big), \mathrm{e}e uniformly in $S$ (and $\mathrm{e}ll$). Recalling that $c_{\alpha,\beta,\theta}=(\mathcal{G}amma(\alpha+\beta)/\mathcal{G}amma(\alpha))(1-\theta^{-1})^{-\beta}$, we thus arrive at \be \ba (n)_K{}&\sum_{\mathrm{e}ll=0}^{K'}\sum_{\substack{S\subseteq [K']\\ |S|=\mathrm{e}ll}}(-1)^\mathrm{e}ll\big(c_{\alpha,\beta,\theta}(1+\mathrm{d}elta)^{-\beta}(\log_\theta n)^{-\beta}\big)^K \theta^{-\sum_{u=1}^K m_u-\mathrm{e}ll}\\ &\times \Big(1+\mathcal O\Big(\frac{\log\log n}{\log n}\vee \frac{|i_n-\mathrm{d}elta\log_\theta n|\vee |i_n'-\mathrm{d}elta\log_\theta n|}{\log n}\Big)\Big)\\ ={}&\big(c_{\alpha,\beta,\theta}(1+\mathrm{d}elta)^{-\beta}\theta^{-i_n'+{\rm var}\ epsilon_n}\big)^{a_{i_n'}}\prod_{k=i_n}^{i_n'-1}\!\big(c_{\alpha,\beta,\theta}(1+\mathrm{d}elta)^{-\beta}(1-\theta^{-1})\theta^{-k+{\rm var}\ epsilon_n}\big)^{a_k}\\ &\times\Big(1+\mathcal O\Big(\frac{\log\log n}{\log n}\vee \frac{|i_n-\mathrm{d}elta\log_\theta n|\vee |i_n'-\mathrm{d}elta\log_\theta n|}{\log n}\Big)\Big), \mathrm{e}a\mathrm{e}e where the last step follows from a similar argument as in the proof of Proposition~{\rm Re}f{prop:factmean}. \mathrm{e}nd{proof} With Proposition~{\rm Re}f{prop:betafactmean} at hand, the proofs of Theorems~{\rm Re}f{thrm:betappp},~{\rm Re}f{thrm:betamaxtail}, and~{\rm Re}f{thrm:betaasympnorm} follow the same approach as the proofs of Theorems~{\rm Re}f{thrm:mainatom},~{\rm Re}f{thrm:maxtail}, and~{\rm Re}f{thrm:asympnormal}, respectively. \subsection{Example {\rm Re}f{ex:gumb}, fraction of `gamma' random variables} Let the distribution of $W$ satisfy~\mathrm{e}qref{eq:gumbex} for some $b\in\mathbb{R},c_1>0$ with $bc_1\leq 1$. We prove a result on (the tail of) the limiting degree distribution and provide additional building blocks required to prove the results in Example {\rm Re}f{ex:gumb}. \begin{lemma}\label{lemma:gumbpkasymp} Let the distribution of $W$ satisfy~\mathrm{e}qref{eq:gumbex} for some $b\in\mathbb{R},c_1>0$ such that $bc_1\leq 1$, and recall $p_k,p_{\geq k},$ and $C$ from~\mathrm{e}qref{eq:pk} and~\mathrm{e}qref{eq:c}, respectively. Then, \be\ba \label{eq:gumbpk} p_k&=(1-\theta^{-1})Ck^{b/2+1/4}\mathrm{e}^{-2\sqrt{c_1^{-1}(1-\theta^{-1})k}}\theta^{-k}\big(1+\mathcal O\big(1/\sqrt k\big)\big),\\ p_{\geq k}&=Ck^{b/2+1/4}\mathrm{e}^{-2\sqrt{c_1^{-1}(1-\theta^{-1})k}}\theta^{-k}\big(1+\mathcal O\big(1/\sqrt k\big)\big). \mathrm{e}a \mathrm{e}e \mathrm{e}nd{lemma} Note that this lemma improves on the bounds in~\mathrm{e}qref{eq:pkbddgumbelrv} by providing a polynomial correction term and a precise multiplicative constant. \begin{proof} We start by proving the equality for $p_{\geq k}$ and then show the similar result for $p_k$. By~\mathrm{e}qref{eq:gumbex}, we obtain the following expression for $p_{\geq k}$. \be \ba\label{eq:intrep} p_{\geq k}={}&\int_0^1 x^k (\theta-1+x)^{-k}c_1^{-1}(1-x)^{-(2+b)}\mathrm{e}^{-c_1^{-1}x/(1-x)}\,\mathrm{d} x\\ &-\int_0^1 x^k (\theta-1+x)^{-k}b(1-x)^{-(1+b)}\mathrm{e}^{-c_1^{-1}x/(1-x)}\,\mathrm{d} x. \mathrm{e}a\mathrm{e}e The second integral is of a similar form as the first, with a different constant in front and a different polynomial exponent. 
We hence only provide an explicit analysis of the first integral. Using a variable transform $u=x/(1-x)$, we find the first integral equals \be \theta^{-k}c_1^{-1}\int_0^\infty u^k (1+u)^{b-k}\Big(1-\frac{1}{\theta(1+u)}\Big)^{-k}\mathrm{e}^{-c_1^{-1}u}\,\mathrm{d} u. \mathrm{e}e We now define $X_u$ to be a negative binomial random variable with parameters $k$ and $p_u:=(\theta(1+u))^{-1}$, for any $u>0$. As the sum over the probability mass function of $X_u$ is one irrespectively of the value of $u$, we obtain that the above equals \be \ba \theta^{-k}c_1^{-1}{}&\int_0^\infty \sum_{j=0}^\infty \binom{j+k-1}{j} p_u^j(1-p_u)^{k} u^k (1+u)^{b-k}\Big(1-\frac{1}{\theta(1+u)}\Big)^{-k}\mathrm{e}^{-c_1^{-1}u}\,\mathrm{d} u\\ &=\theta^{-k}c_1^{-1}\int_0^\infty \sum_{j=0}^\infty \binom{j+k-1}{j}\theta^{-j}u^k (1+u)^{b-(j+k)}\mathrm{e}^{-c_1^{-1}u}\,\mathrm{d} u\\ &=\theta^{-k}c_1^{-1}\sum_{j=0}^\infty \binom{j+k-1}{j}\theta^{-j}\mathcal{G}amma(k+1)\, U(k+1,2+b-j,c_1^{-1}), \mathrm{e}a \mathrm{e}e where $U(a,b,z)$ is the confluent hypergeometric function of the second kind, defined as \be \label{eq:confhyp} U(a,b,z):=\frac{1}{\mathcal{G}amma(a)}\int_0^\infty x^{a-1}(1+x)^{b-a-1}\mathrm{e}^{-zx}\,\mathrm{d} x, \mathrm{e}e whenever $\text{Re}(a)>0$. Using the Kummer transform $U(a,b,z)=z^{1-b}U(1+a-b,2-b,z)$ yields \be\label{eq:finexpr} \theta^{-k}{}c_1^{-1}\sum_{j=0}^\infty \binom{j+k-1}{j}\theta^{-j}\mathcal{G}amma(k+1)\,c_1^{b-(j-1)}U(j+k-b,j-b,c_1^{-1}). \mathrm{e}e Again using the definition of the confluent hypergeometric function of the second kind, we obtain \be\ba c_1^b\theta^{-k}{}&\sum_{j=0}^\infty \frac{\mathcal{G}amma(j+k)\mathcal{G}amma(k+1)}{\mathcal{G}amma(k)\mathcal{G}amma(j+1)\mathcal{G}amma(j+k-b)}(c_1\theta)^{-j}\int_0^\infty u^{j+k-b-1}(1+u)^{-(k+1)}\mathrm{e}^{-c_1^{-1}u}\, \mathrm{d} u\\ ={}& c_1^b k\theta^{-k}\frac{\mathcal{G}amma(k)}{\mathcal{G}amma(k-b)}\sum_{j=0}^\infty \frac{\mathcal{G}amma(j+k)\mathcal{G}amma(k-b)}{\mathcal{G}amma(j+k-b)\mathcal{G}amma(k)}\frac{1}{j!}(c_1\theta)^{-j}\int_0^\infty u^{j+k-b-1}(1+u)^{-(k+1)}\mathrm{e}^{-c_1^{-1}u}\, \mathrm{d} u\\ ={}& c_1^b k\theta^{-k}\frac{\mathcal{G}amma(k)}{\mathcal{G}amma(k-b)}\sum_{j=0}^\infty \frac{(k)^{(j)}}{(k-b)^{(j)}}\frac{1}{j!}(c_1\theta)^{-j}\int_0^\infty u^{j+k-b-1}(1+u)^{-(k+1)}\mathrm{e}^{-c_1^{-1}u}\, \mathrm{d} u, \mathrm{e}a\mathrm{e}e where $(x)^{(j)}:=x(x+1)\cdots (x+(j-1))$, for any $x\in\mathbb{R}$ and $j\in\mathbb{N}_0$. We can then bound \be \label{eq:bbounds} \frac{(k)^{(j)}}{(k-b)^{(j)}}\geq \begin{cases} 1 & \mbox{if } k>b\geq 0,\\ \Big(\frac{k}{k-b}\Big)^j & \mbox{if } b<0. \mathrm{e}nd{cases} \ \text{ and }\ \frac{(k)^{(j)}}{(k-b)^{(j)}}\leq \begin{cases} 1 & \mbox{if } b<0,\\ \Big(\frac{k}{k-b}\Big)^j & \mbox{if } k>b\geq 0. \mathrm{e}nd{cases} \mathrm{e}e As the bounds are symmetric, we can assume that $b\geq 0$ without loss of generality; the other case follows similarly. We deal with the lower bound first. This yields \be \ba \label{eq:uexp} c_1^b{}&k \theta^{-k}\frac{\mathcal{G}amma(k)}{\mathcal{G}amma(k-b)}\sum_{j=0}^\infty \frac{1}{j!}(c_1\theta)^{-j}\int_0^\infty u^{j+k-b-1}(1+u)^{-(k+1)}\mathrm{e}^{-c_1^{-1}u}\, \mathrm{d} u\\ &=c_1^bk \theta^{-k}\frac{\mathcal{G}amma(k)}{\mathcal{G}amma(k-b)}\int_0^\infty u^{k-b-1}(1+u)^{-(k+1)}\mathrm{e}^{-c_1^{-1}(1-\theta^{-1})u}\, \mathrm{d} u\\ &=c_1^bk \theta^{-k}\frac{\mathcal{G}amma(k)}{\mathcal{G}amma(k-b)}\mathcal{G}amma(k-b)U(k-b,-b,c_1^{-1}(1-\theta^{-1})). 
\mathrm{e}a \mathrm{e}e It follows from~\cite[$(3.12)$ and $(3.15)$]{Tem13} that, when $a>d/2$ is large and $d,z$ are bounded, \be \mathcal{G}amma(a)U(a,d,z^2)=2\mathrm{e}^{z^2/2}\Big(\frac{2z}{u}\Big)^{1-d}K_{1-d}(uz)\big(1+\mathcal O(1/u)\big), \mathrm{e}e where $u=2\sqrt{a-d/2}$ and $K_{1-d}(uz)$ is a modified Bessel function. Combined with the asymptotic expression for the modified Bessel function as in~\cite[$(10.40.2)$]{Loz03}, we obtain \be \label{eq:uasymp} \mathcal{G}amma(a)U(a,d,z^2)=2\sqrt{\frac{\pi}{2uz}}\mathrm{e}^{z^2/2-uz}\Big(\frac{2z}{u}\Big)^{1-d}\big(1+\mathcal O(1/u)\big). \mathrm{e}e In this particular case, it yields \be \ba \mathcal{G}amma(k-b)&U(k-b,-b,c_1^{-1}(1-\theta^{-1}))\\ &=\mathrm{e}^{c_1^{-1}(1-\theta^{-1})/2}\sqrt{\pi}(c_1^{-1}(1-\theta^{-1}))^{1/4+b/2}k^{-b/2-3/4}\mathrm{e}^{-2\sqrt{c_1^{-1}(1-\theta^{-1})k}}\big(1+\mathcal O\big(1/\sqrt k\big)\big). \mathrm{e}a \mathrm{e}e Using this, as well as $\mathcal{G}amma(k)/\mathcal{G}amma(k-b)=k^b(1+\mathcal O(1/k))$, in~\mathrm{e}qref{eq:uexp}, we arrive at \be \label{eq:finuasymp} \mathrm{e}^{c_1^{-1}(1-\theta^{-1})/2}\sqrt{\pi}c_1^{-1/4+b/2}((1-\theta^{-1})k)^{1/4+b/2}\mathrm{e}^{-2\sqrt{c_1^{-1}(1-\theta^{-1})k}}\theta^{-k}\big(1+\mathcal O\big(1/\sqrt k\big)\big). \mathrm{e}e We then tend to the upper bound in~\mathrm{e}qref{eq:bbounds} for $b\geq 0$, which yields \be \ba c_1^b{}&k \theta^{-k}\frac{\mathcal{G}amma(k)}{\mathcal{G}amma(k-b)}\sum_{j=0}^\infty \frac{1}{j!}\Big(\frac{(c_1\theta)^{-1} k}{k-b}\Big)^{j}\int_0^\infty u^{j+k-b-1}(1+u)^{-(k+1)}\mathrm{e}^{-c_1^{-1}u}\, \mathrm{d} u\\ &=c_1^bk \theta^{-k}\frac{\mathcal{G}amma(k)}{\mathcal{G}amma(k-b)}\int_0^\infty u^{k-b-1}(1+u)^{-(k+1)}\mathrm{e}^{-(c_1^{-1}(1-\theta^{-1})-(c_1\theta)^{-1}b/(k-b))u}\, \mathrm{d} u\\ &=c_1^bk \theta^{-k}\frac{\mathcal{G}amma(k)}{\mathcal{G}amma(k-b)}\mathcal{G}amma(k-b)U(k-b,-b,c_1^{-1}(1-\theta^{-1})-(c_1\theta)^{-1}b/(k-b)). \mathrm{e}a \mathrm{e}e From the asymptotic result in~\mathrm{e}qref{eq:uasymp} we find that \be U\Big(k-b,-b,\frac{1}{c_1}\big(1-\theta^{-1}\big)-\frac{1}{c_1\theta}\frac{b}{k-b}\Big)=U\Big(k-b,-b,\frac{1}{c_1}\big(1-\theta^{-1}\big)\Big)\big(1+\mathcal O(1/\sqrt{k})\big), \mathrm{e}e so that the lower and upper bound match up to error terms. By~\mathrm{e}qref{eq:uasymp}, we thus arrive at \be\ba \label{eq:finuasymp2} \int_0^1{}& x^k (\theta-1+x)^{-k}c_1^{-1}(1-x)^{-(2+b)}\mathrm{e}^{-c_1^{-1}x/(1-x)}\,\mathrm{d} x\\ &=\mathrm{e}^{c_1^{-1}(1-\theta^{-1})/2}\sqrt{\pi}c_1^{-1/4+b/2}((1-\theta^{-1})k)^{1/4+b/2}\mathrm{e}^{-2\sqrt{c_1^{-1}(1-\theta^{-1})k}}\theta^{-k}\big(1+\mathcal O\big(1/\sqrt k\big)\big)\\ &=C k^{1/4+b/2}\mathrm{e}^{-2\sqrt{c_1^{-1}(1-\theta^{-1})k}}\theta^{-k}\big(1+\mathcal O\big(1/\sqrt k\big)\big). \mathrm{e}a \mathrm{e}e Finally, when considering the second integral in~\mathrm{e}qref{eq:intrep}, we observe its integrand is similar to that of the first integral but with a different constant in front and with a constant $\wt b=b-1$ in the polynomial exponent. We can thus follow the exact same steps as for the first integral in~\mathrm{e}qref{eq:intrep} to conclude that it can be included in the $\mathcal O(1/\sqrt k)$ term in~\mathrm{e}qref{eq:finuasymp2}. We thus obtain that \be p_{\geq k}=Ck^{1/4+b/2}\mathrm{e}^{-2\sqrt{c_1^{-1}(1-\theta^{-1})k}}\theta^{-k}\big(1+\mathcal O\big(1/\sqrt k\big)\big), \mathrm{e}e as required. We now show the result for $p_k$, which uses the above steps with several minor adjustments. 
First, \be \ba \label{eq:pkintrep} p_k={}&(\theta -1)\int_0^1 x^k (\theta-1+x)^{-(k+1)}c_1^{-1}(1-x)^{-(2+b)}\mathrm{e}^{-c_1^{-1}x/(1-x)}\,\mathrm{d} x\\ &-(\theta-1)\int_0^1 x^k (\theta-1+x)^{-(k+1)}b(1-x)^{-(b+1)}\mathrm{e}^{-c_1^{-1}x/(1-x)}\,\mathrm{d} x. \mathrm{e}a \mathrm{e}e As for the proof of the asymptotic expression of $p_{\geq k}$, we consider the first integral only as the second one is of lower order. This yields \be\ba (\theta -1){}&\int_0^1 x^k (\theta-1+x)^{-(k+1)}c_1^{-1}(1-x)^{-(2+b)}\mathrm{e}^{c_1^{-1}x/(1-x)}\,\mathrm{d} x\\ ={}&(1-\theta^{-1})c_1^{-1}\theta^{-k}\int_0^\infty u^k (1+u)^{b-k}\Big(1-\frac{1}{\theta(1+u)}\Big)^{-(k+1)}\mathrm{e}^{-c_1^{-1}u}\,\mathrm{d} u\\ ={}&(1-\theta^{-1})c_1^{-1}\theta^{-k}\sum_{j=0}^\infty \binom{j+k}{j}\theta^{-j}\int_0^\infty u^k (1+u)^{b-(j+k)}\mathrm{e}^{-c_1^{-1}u}\,\mathrm{d} u\\ ={}&(1-\theta^{-1})c_1^{-1}\theta^{-k}\sum_{j=0}^\infty \binom{j+k}{j}\theta^{-j}\mathcal{G}amma(k+1)U(k+1,2+b-j,c_1^{-1})\\ ={}&(1-\theta^{-1})c_1^b\theta^{-k}\sum_{j=0}^\infty \binom{j+k}{j}(c_1\theta)^{-j}\mathcal{G}amma(k+1)U(k+j-b,j-b,c_1^{-1})\\ ={}&(1-\theta^{-1})c_1^b\theta^{-k}\frac{\mathcal{G}amma(k+1)}{\mathcal{G}amma(k-b)}\sum_{j=0}^\infty\frac{(k+1)^{(j)} }{(k-b)^{(j)}}\frac{(c_1\theta)^{-j}}{j!}\int_0^\infty \!\!u^{k+j-b-1}(1+u)^{-(k+1)}\mathrm{e}^{-c_1^{-1}u}\,\mathrm{d} u. \mathrm{e}a\mathrm{e}e Similar to~\mathrm{e}qref{eq:bbounds}, we bound \be \label{eq:bbounds2} \frac{(k-1)^{(j)}}{(k-b)^{(j)}}\geq \begin{cases} 1 & \mbox{if } k>b\geq -1,\\ \Big(\frac{k+1}{k-b}\Big)^j & \mbox{if } b<-1. \mathrm{e}nd{cases} \quad \text{ and }\quad \frac{(k)^{(j)}}{(k-b)^{(j)}}\leq \begin{cases} 1 & \mbox{if } b<-1,\\ \Big(\frac{k+1}{k-b}\Big)^j & \mbox{if } k>b\geq -1. \mathrm{e}nd{cases} \mathrm{e}e Again, we assume without loss of generality that $b\geq -1$. Moreover, we only concern ourselves with the lower bound on $(k+1)^{(j)}/(k-b)^{(j)}$ when $b\geq -1$, since we obtain a matching upper bound with the required error term when using the upper bound on $(k+1)^{(j)}/(k-b)^{(j)}$ when $b\geq -1$, as in the proof for $p_{\geq k}$. Thus, we obtain the lower bound \be \ba (1-\theta^{-1}){}&c_1^b\theta^{-k}\frac{\mathcal{G}amma(k+1)}{\mathcal{G}amma(k-b)}\sum_{j=0}^\infty\frac{(c_1\theta)^{-j}}{j!}\int_0^\infty u^{k+j-b-1}(1+u)^{-(k+1)}\mathrm{e}^{-c_1^{-1}u}\,\mathrm{d} u\\ &=(1-\theta^{-1})c_1^bk\theta^{-k}\frac{\mathcal{G}amma(k)}{\mathcal{G}amma(k-b)}\int_0^\infty u^{k+j-b-1}(1+u)^{-(k+1)}\mathrm{e}^{-c_1^{-1}(1-\theta^{-1})u}\,\mathrm{d} u, \mathrm{e}a \mathrm{e}e which, up to the constant $(1-\theta^{-1})$, is the exact same expression as in~\mathrm{e}qref{eq:uexp}. As discussed above, using the upper bound on $(k+1)^{(j)}/(k-b)^{(j)}$ yields a matching upper bound (up to error terms), from which the result follows. \mathrm{e}nd{proof} Recall that in this example we set \be\ba X^{(n)}_i:={}&\big|\big\{j\in[n]: \Zm_n(j)=\big\lfloor \log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n \big\rfloor +i\big\}\big|,\\ X^{(n)}_{\geq i}:={}&\big|\big\{j\in[n]: \Zm_n(j)\geq \big\lfloor \log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n \big\rfloor +i\big\}\big|,\\ {\rm var}\ epsilon_n:={}&\big(\log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n\big )\\ &-\big\lfloor\log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n \big \rfloor. 
\mathrm{e}a\mathrm{e}e We then state the analogue of Proposition~{\rm Re}f{prop:factmean}. \begin{proposition}\label{prop:gumbfactmean} Consider the WRT model as in Definition~{\rm Re}f{def:WRT} with vertex-weights $(W_i)_{i\in[n]}$ whose distribution satisfies~\mathrm{e}qref{eq:gumbex} for some $b\in\mathbb{R},c_1>0$ such that $bc_1\leq 1$, and recall $c_{c_1,b,\theta}$ from~\mathrm{e}qref{eq:c}. For a fixed $K\in\mathbb{N},c\in(1,\theta/(\theta-1))$ the following holds. For any integer-valued $i_n,i_n'$ such that $0<\log_\theta n+i_n<\log_\theta n+i_n'<c\log_\theta n$ and $i_n,i_n'=\mathrm{d}elta\sqrt{\log_\theta n}+o(\sqrt{\log n})$ for some $\mathrm{d}elta\in\mathbb{R}$ and for $a_{i_n},\ldots,a_{i_n'}\in\mathbb{N}_0$ satisfying $a_{i_n}+\ldots+a_{i_n'}=K$, \be\ba \mathbb E\bigg[\!\Big(X^{(n_\mathrm{e}ll)}_{\geq i_n'}\Big)_{a_{i_n'}}\!\prod_{k=i_n}^{i_n'-1}\!\!\Big(X_k^{(n_\mathrm{e}ll)}\Big)_{a_k}\bigg] ={}&\big(c_{c_1,b,\theta}\theta^{-i_n'+{\rm var}\ epsilon_n-C_{\theta,1,c_1}\mathrm{d}elta/2}\big)^{a_{i_n'}}\!\!\prod_{k=i_n}^{i_n'-1}\!\!\big(c_{c_1,b,\theta}(1-\theta^{-1})\theta^{-k+{\rm var}\ epsilon_n-C_{\theta,1,c_1}\mathrm{d}elta/2}\big)^{a_k}\\ &\times \Big(1+\mathcal O\Big(\frac{\log_\theta\log_\theta n}{\sqrt{\log_\theta n}}\vee \frac{|i_n-\sqrt{\log_\theta n}|\vee |i_n'-\sqrt{\log_\theta n}|}{\sqrt{\log_\theta n}}\Big)\Big). \mathrm{e}a\mathrm{e}e \mathrm{e}nd{proposition} \begin{proof} Set $K':=K-a_{i_n'}$ and for each $i_n\leq k\leq i_n'$ and each $u$ such that $\sum_{\mathrm{e}ll=i_n}^{k-1}a_\mathrm{e}ll<u\leq \sum_{\mathrm{e}ll=i_n}^k a_\mathrm{e}ll$, let $m_u=\big\lfloor\log_\theta n-C_{\theta,1,c_1}\sqrt{\log_\theta n}+(b/2+1/4) \log_\theta \log_\theta n \big \rfloor +k$. Also, let $(v_u)_{u\in[K]}$ be $K$ vertices selected uniformly at random without replacement from $[n]$. Then, as the $X^{(n)}_{\geq k}$ and $X^{(n)}_{k}$ can be expressed as sums of indicators, using the same steps as in the proof of Proposition~{\rm Re}f{prop:factmean}, \be \label{eq:gumbmeanexunif} \E{\Big(X^{(n)}_{\geq i_n'}\Big)_{a_{i_n'}}\prod_{k=i}^{i_n'-1}\Big(X_k^{(n)}\Big)_{a_k}}=(n)_K\sum_{\mathrm{e}ll=0}^{K'}\sum_{\substack{S\subseteq [K']\\ |S|=\mathrm{e}ll}}(-1)^\mathrm{e}ll \mathbb{P}{\mathbb{Z}m_n(v_u)\geq m_u+\ind_{\{u\in S\}}\text{ for all } u\in [K]}. \mathrm{e}e By Proposition~{\rm Re}f{lemma:degprobasymp}, \be \label{eq:gumbtailprobunif} \mathbb{P}{\mathbb{Z}m_n(v_\mathrm{e}ll)\geq m_u+\ind_{\{u\in S\}}\text{ for all } u\in [K]}=\prod_{u=1}^K \E{\Big(\frac{W}{\E{W}+W}\Big)^{m_u+\ind_{\{u\in S\}}}}(1+o(n^{-\beta})), \mathrm{e}e for some $\beta>0$. By Lemma~{\rm Re}f{lemma:gumbpkasymp} (and recalling the constant $C$ in~\mathrm{e}qref{eq:c}), when $|S|=\mathrm{e}ll$, \be\ba \prod_{u=1}^K {}&\E{\Big(\frac{W}{\E{W}+W}\Big)^{m_u+\ind_{\{u\in S\}}}}\\ ={}&(C)^K \theta^{-\sum_{u=1}^K m_u-\mathrm{e}ll}\mathrm{e}xp\Bigg(\!-2\sum_{u=1}^K\sqrt{\frac{1-\theta^{-1}}{c_1}\big(m_u+\ind_{\{u\in S\}}\big)}\Bigg)\prod_{u=1}^K (m_u+\ind_{\{u\in S\}})^{b/2+1/4}\\ &\times (1+\mathcal O (1/\sqrt{\log n})). \mathrm{e}a\mathrm{e}e Here, we are able to obtain the error term $1+\mathcal O(1/\sqrt{\log n})$ due to the fact that $\log_\theta n+i_n>\mathrm{e}ta \log n$ for some $\mathrm{e}ta>0$ when $n$ is large. We note that $C_{\theta,1,c_1}\log\theta=2\sqrt{c_1^{-1}(1-\theta^{-1})}$. 
As $i_n,i_n'\sim \mathrm{d}elta\sqrt{\log_\theta n}$, \be \prod_{u=1}^K(m_u+\ind_{\{u\in S\}})^{b/2+1/4 }=(\log_\theta n)^{ K(b/2+1/4)}\big(1+\mathcal O\big(1/\sqrt{\log_\theta n}\big)\big), \mathrm{e}e uniformly in $S$ (and $\mathrm{e}ll$). Moreover, again uniform in $S$ and $\mathrm{e}ll$, \be \ba \mathrm{e}xp{}&\bigg(-C_{\theta,1,c_1}\log\theta\sum_{u=1}^K\sqrt{m_u+\ind_{\{u\in S\}}}\bigg)\\ ={}&\mathrm{e}xp\Big(-\Big( C_{\theta,1,c_1}\log\theta\sqrt{\log_\theta n}-\frac{C_{\theta,1,c_1}-\mathrm{d}elta}{2}\Big)\Big)^K\\ &\times \Big(1+\mathcal O\Big(\frac{\log_\theta\log_\theta n}{\sqrt{\log_\theta n}}\vee \frac{|i_n-\sqrt{\log_\theta n}|\vee |i_n'-\sqrt{\log_\theta n}|}{\sqrt{\log_\theta n}}\Big)\Big). \mathrm{e}a \mathrm{e}e This last step follows from the fact that the first-order term of $m_u$ is $\log_\theta n$ and its second-order term is $-(C_{\theta,1,c_1}-\mathrm{d}elta)\sqrt{\log_\theta n}$. Finally, its lower-order terms are $\log_\theta\log_\theta n+(|i-\sqrt{\log_\theta n}|\vee |i_n'-\sqrt{\log_\theta n}|)$. Then, using a Taylor expansion for the square root yields the result. Combining all of the above and recalling that $c_{c_1,b,\theta}=C\theta^{C_{\theta,1,c_1}^2/2}$, we thus arrive at \be \ba (n)_K{}&\sum_{\mathrm{e}ll=0}^{K'}\sum_{\substack{S\subseteq [K']\\ |S|=\mathrm{e}ll}}(-1)^\mathrm{e}ll\Big(C (\log_\theta n)^{b/2+1/4}\mathrm{e}xp\Big(-\Big(C_{\theta,1,c_1}\log\theta\Big(\sqrt{\log_\theta n}-\frac{C_{\theta,1,c_1}-\mathrm{d}elta}{2}\Big)\Big)\Big)\Big)^K \\ &\times \theta^{-\sum_{u=1}^K m_u-\mathrm{e}ll}\Big(1+\mathcal O\Big(\frac{\log_\theta\log_\theta n}{\sqrt{\log_\theta n}}\vee \frac{|i_n-\sqrt{\log_\theta n}|\vee |i_n'-\sqrt{\log_\theta n}|}{\sqrt{\log_\theta n}}\Big)\Big)\\ ={}&\big(c_{c_1,b,\theta}\theta^{-i_n'+{\rm var}\ epsilon_n-C_{\theta,1,c_1}\mathrm{d}elta/2}\big)^{a_{i_n'}}\prod_{k=i_n}^{i_n'-1}\big(c_{c_1,b,\theta}(1-\theta^{-1})\theta^{-k+{\rm var}\ epsilon_n-C_{\theta,1,c_1}\mathrm{d}elta/2}\big)^{a_k}\\ &\times \Big(1+\mathcal O\Big(\frac{\log_\theta\log_\theta n}{\sqrt{\log_\theta n}}\vee \frac{|i_n-\sqrt{\log_\theta n}|\vee |i_n'-\sqrt{\log_\theta n}|}{\sqrt{\log_\theta n}}\Big)\Big), \mathrm{e}a\mathrm{e}e where the last step follows from a similar argument as in the proof of Proposition~{\rm Re}f{prop:factmean}. \mathrm{e}nd{proof} With Proposition~{\rm Re}f{prop:gumbfactmean} at hand, the proofs of Theorems~{\rm Re}f{thrm:gumbppp},~{\rm Re}f{thrm:gumbmaxtail}, and~{\rm Re}f{thrm:gumbasympnorm} follow the same approach as the proofs of Theorems~{\rm Re}f{thrm:mainatom},~{\rm Re}f{thrm:maxtail}, and~{\rm Re}f{thrm:asympnormal}. \textbf{Acknowledgements}\\ The authors would like to sincerely thank the referees for their feedback, which helped to significantly improve the presentation of the article. BL has been funded by an URSA whilst at the University of Bath and is currently funded by the grant GrHyDy ANR-20-CE40-0002. LE was partially supported by the grant PAPIIT IN102822. \appendix \section{} \label{sec:appendix} In this appendix we collect various estimates on the sum of i.i.d.\ random variables, including a quantitative version of the law of large numbers, see Lemmas~{\rm Re}f{lemma:sumbound} and~{\rm Re}f{lemma:weightsumbounds} and Corolary~{\rm Re}f{cor:uil}. Finally, we also include the details of the calculations of certain iterated integrals in Lemma~{\rm Re}f{lemma:logints} and the proof of some of the properties of the sequences defined in~\mathrm{e}qref{eq:ek} in Lemma~{\rm Re}f{lemma:rk}. 
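As a point of reference for the first of these estimates (this observation is not used elsewhere), note that when $W$ is uniform on $[0,1]$, for which conditions~\ref{ass:weightsup} and~\ref{ass:weightzero} of Assumption~\ref{ass:weights} hold with $C=\rho=x_0=1$, one has the exact identity \be \mathbb{P}{S_N\leq x}=\frac{x^N}{N!},\qquad 0\leq x\leq 1, \ee namely the volume of the simplex $\{u\in[0,\infty)^N:u_1+\cdots+u_N\leq x\}$; with the choice $C'=C\rho=1$ made in the proof below, Lemma~\ref{lemma:sumbound} recovers exactly this bound.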
\begin{lemma}\label{lemma:sumbound} Let $(W_i)_{i\in\mathbb{N}}$ be i.i.d.\ copies of a random variable $W$ that satisfies conditions~{\rm Re}f{ass:weightsup} and~{\rm Re}f{ass:weightzero} of Assumption~{\rm Re}f{ass:weights}. Then, there exists $C' >0$ such that for any $n\in\mathbb{N}$ and any $x>0$, \be \mathbb{P}{S_N\leq x}\leq \big(C'\mathcal{G}amma(\rho)x^\rho\big)^N \mathcal{G}amma(\rho N+1)^{-1}. \mathrm{e}e \mathrm{e}nd{lemma} \begin{proof} We prove the bound by induction. We first couple the random variable $W$ and $(W_i)_{i\in\mathbb{N}}$ to i.i.d.\ random variables $X$ and $(X_i)_{i\in\mathbb{N}}$, respectively. $W$ satisfies condition~{\rm Re}f{ass:weightzero} for some $C,x_0,\rho>0$, and we can without loss of generality assume that $Cx_0^\rho=1$. Indeed, if $Cx_0^\rho<1$ holds, then we can increase $C$, and if $Cx_0^\rho>1$ then we can decrease $x_0$. Hence, we let $X$ be a random variable with probability density function $f_X:[0,1]\to [0,1]$ \be f_X(x):=\begin{cases} C\rho x^{\rho-1} &\mbox{if } x\in[0,x_0],\\ 0 &\mbox{if } x\in(x_0,1]. \mathrm{e}nd{cases} \mathrm{e}e It is clear, by the assumption $Cx_0^\rho=1$, that $f_X$ indeed is a probability density function, and that $\mathbb{P}{W\leq x}\leq \mathbb{P}{X\leq x}$ for all $x\in [0,1]$ by Condition~{\rm Re}f{ass:weightzero}. If we thus let $(X_i)_{i\in\mathbb{N}}$ be i.i.d.\ copies of $X$, then $\mathbb{P}{S_N\leq x}\leq \mathbb{P}{S'_N\leq x} $ for any $N\in\mathbb{N}$ and $x\in\mathbb{R}$, where $S'_N:=\sum_{i=1}^N X_i$. It thus suffices to bound $\mathbb{P}{S_N'\leq x}$ from above. For $N=1$ it directly follows that $\mathbb{P}{X_1\leq x}\leq Cx^\rho$ for any $x>0$, so that the case $N=1$ is satisfied by setting $C':=C\rho=C\mathcal{G}amma(\rho+1)/\mathcal{G}amma(\rho)$. Let us then assume that the bound \be \mathbb{P}{S_N'\leq x}\leq (C'\mathcal{G}amma(\rho)x^\rho)^N\mathcal{G}amma(\rho N+1)^{-1} \mathrm{e}e holds for any $x>0$ and some $N\in\mathbb{N}$. Then, by the induction hypothesis and the definition of $f_X$, \be \mathbb{P}{S_{N+1}'\leq x}=\int_0^{x\wedge 1}\mathbb{P}{S_N'\leq x-t}f_X(t)\,\mathrm{d} t\leq \int_0^{x\wedge 1}(C'\mathcal{G}amma(\rho)(x-t)^\rho)^N\mathcal{G}amma(\rho N+1)^{-1}C\rho t^{\rho-1}\,\mathrm{d} t. \mathrm{e}e By extending the upper range of the integral to $x$, using the substitution $s=t/x$, and recalling that $C\rho=C'$, we arrive at the upper bound \be \frac{C'^{N+1}\mathcal{G}amma(\rho)^Nx^{(N+1)\rho}}{\mathcal{G}amma(\rho N+1)}\int_0^1 (1-s)^{\rho N}s^{\rho-1}\,\mathrm{d} s=\frac{C'^{ N+1} \mathcal{G}amma(\rho)^Nx^{(N+1)\rho}}{\mathcal{G}amma(\rho N+1)}B(\rho,\rho N+1)=\frac{(C'\mathcal{G}amma(\rho)x^\rho)^{N+1}}{\mathcal{G}amma(\rho(N+1)+1)}, \mathrm{e}e where we use in the final step that, for $x,y>0$, $B(x,y):=\mathcal{G}amma(x)\mathcal{G}amma(y)/\mathcal{G}amma(x+y)$ is the Beta function. This yields the desired result and concludes the proof. \mathrm{e}nd{proof} \begin{lemma}[Bounds on partial sums of vertex-weights]\label{lemma:weightsumbounds} Let $(W_i)_{i\in\mathbb{N}}$ be i.i.d.\ copies of a random variable $W$ with finite mean. Let ${\rm var}\ epsilon\in(0,1), \mathrm{d}elta\in(0,1/2)$, $k\in\mathbb{N}$, $C>k\log(\theta)/(2\theta(\theta-1))$, and $c,\alpha>0$, and set $\zeta_n:=n^{-\mathrm{d}elta{\rm var}\ epsilon}/\E{W}$ and $\zeta_n':=(C\log n)^{-\mathrm{d}elta/(1-2\mathrm{d}elta)}/\E W$. 
Consider the events \be\ba \label{eq:events} E_n^{(1)}&:=\bigg\{ \sum_{\mathrm{e}ll=1}^j W_\mathrm{e}ll \in ((1-\zeta_n)\E{W}j,(1+\zeta_{n})\E{W}j),\text{ for all } n^{{\rm var}\ epsilon}\leq j\leq n\bigg\},\\ E_n^{(2)}&:=\Big\{\sum_{\mathrm{e}ll=k+1 }^j W_\mathrm{e}ll\in((1-\zeta_n)\E{W}j,(1+\zeta_n)\E{W}j),\text{ for all } n^{{\rm var}\ epsilon}\leq j\leq n\Big\},\\ E_n^{(3)}&:=\bigg\{\sum_{\mathrm{e}ll=1}^j W_\mathrm{e}ll \geq (1-\zeta'_n)\E{W} j,\text{ for all } (C\log n)^{1/(1-2\mathrm{d}elta)}\leq j\leq n\bigg\},\\ E_n^{(4)}&:=\{S_{\lceil \alpha \log n\rceil}\geq c\log n\}. \mathrm{e}a \mathrm{e}e Then, for any $\gamma>0$ and any $i\in[4]$ (where, for $i=3,4$, we choose $C$ and $\alpha$ sufficiently large depending on $\gamma$, respectively), for all $n$ large, \be \mathbb{P}{(E_n^{(i)})^c}\leq n^{-\gamma}. \mathrm{e}e Finally, additionally assume that $W$ satisfies conditions~{\rm Re}f{ass:weightsup} and~{\rm Re}f{ass:weightzero} of Assumption~{\rm Re}f{ass:weights}, set $f(n):=\lceil \log n/\log\log\log n\rceil, g(n):=\lceil \log\log n\rceil$, and define $E_n^{(5)}:=\{S_{f(n)}\geq g(n)\}$. Then, for any $\gamma>0$ and all $n$ large, \be \mathbb{P}{(E_n^{(5)})^c}\leq n^{-\gamma}. \mathrm{e}e \mathrm{e}nd{lemma} \begin{proof} We prove the bounds for the complement of the events one by one. By noting that $\wt S_j:=\sum_{\mathrm{e}ll=1}^j W_\mathrm{e}ll-j\E{W}$ is a martingale, that $|\wt S_j-\wt S_{j-1}|\leq 1+\E{W}=\theta$ and that $\zeta_n\geq j^{-\mathrm{d}elta}/\E W$ for $j\geq n^{{\rm var}\ epsilon}$, we can use the Azuma-Hoeffding inequality to obtain \be \label{eq:proben} \mathbb{P}{(E_n^{(1)})^c}\leq \sum_{j\geq n^{{\rm var}\ epsilon}}\mathbb{P}{\big|\wt S_j\big|\geq \zeta_n j\E W}\leq 2\sum_{j\geq n^{{\rm var}\ epsilon}}\mathrm{e}xp\Big(-\frac{j^{1-2\mathrm{d}elta}}{2\theta^2}\Big). \mathrm{e}e Writing $c_\theta:=1/(2\theta^2)$, we further bound the sum from above by \be \label{eq:ecnbound} 2\int_{\lfloor n^{{\rm var}\ epsilon}\rfloor}^\infty \mathrm{e}xp\big(-c_\theta x^{1-2\mathrm{d}elta}\big)\,\mathrm{d} x=2\frac{c_{\theta}^{-1/(1-2\mathrm{d}elta)}}{1-2\mathrm{d}elta}\mathcal{G}amma\Big(\frac{1}{1-2\mathrm{d}elta},c_\theta \lfloor n^{{\rm var}\ epsilon}\rfloor ^{1-2\mathrm{d}elta}\Big), \mathrm{e}e where $\mathcal{G}amma(a,x)$ is the incomplete Gamma function. As, for $a>0$ fixed, $\mathcal{G}amma(a,x)= (1+o(1))x^{a-1}\mathrm{e}^{-x}$, this yields the desired bound. The proof for the event $E^{(2)}_n$ follows in an analogous way. For the event $E^{(3)}_n$, we again use the Azuma-Hoeffding inequality to obtain that there exist constants $C_{\theta,\mathrm{d}elta}, \tilde C_{\theta,\mathrm{d}elta} > 0$ such that \be \ba \mathbb{P}{(E^{(3)}_n)^c}&\leq C_{\theta,\mathrm{d}elta}\Big(c_\theta C\log n\Big)^{2\mathrm{d}elta/(1-2\mathrm{d}elta)}\mathrm{e}xp\Big(-c_\theta \lfloor (C\log n)^{1/(1-2\mathrm{d}elta)}\rfloor^{1-2\mathrm{d}elta}\Big)(1+o(1))\\ &=\wt C_{\theta,\mathrm{d}elta}(\log n)^{2\mathrm{d}elta/(1-2\mathrm{d}elta)}\mathrm{e}xp(-c_\theta C\log n)(1+o(1))\\ &=n^{-c_\theta C(1+o(1))}. \mathrm{e}a \mathrm{e}e Choosing $C$ sufficiently large then yields the desired result. A similar application of the same inequality for $E^{(4)}_n$ yields \be \mathbb{P}{(E^{(4)}_n)^c}\leq \mathbb{P}{|\wt S_{\lceil \alpha \log n\rceil}|\geq |c-\alpha \E W|\log n}\leq 2\mathrm{e}xp\Big(-\frac{(\alpha \E W-c)^2 \log n}{2\alpha \theta^2}\Big). \mathrm{e}e Again, choosing $\alpha$ sufficiently large yields the desired result. 
Finally, we use Lemma~{\rm Re}f{lemma:sumbound} to obtain \be \ba \mathbb{P}{(E_n^{(5)})^c}&\leq \big(C'\mathcal{G}amma(\rho)g(n)^\rho\big)^{f(n)}\mathcal{G}amma(\rho f(n)+1)^{-1}\\ &=\frac{1+o(1)}{\sqrt{2\pi \rho f(n)}}\mathrm{e}xp(\rho f(n)(1+\log g(n))+f(n)\log(C\mathcal{G}amma(\rho))-\rho f(n)\log(\rho f(n)))\\ &\leq \mathrm{e}xp\big(-\tfrac{\rho}{2}f(n)\log f(n)\big), \mathrm{e}a \mathrm{e}e where the final inequality holds for all $n$ sufficiently large. By the choice of $f(n)$, this yields the desired result and concludes the proof. \mathrm{e}nd{proof} \begin{corollary}\label{cor:uil} Let $(W_i)_{i\in\mathbb{N}}$ be i.i.d.\ copies of a random variable $W$ that satisfies conditions~{\rm Re}f{ass:weightsup} and~{\rm Re}f{ass:weightzero} of Assumption~{\rm Re}f{ass:weights} and define, for $i\in\mathbb{N}$, \be U_{i,n}:=\frac{1}{\log n}\sum_{j=i}^{n-1} \frac{W_i}{S_j}. \mathrm{e}e Then, for any $i\in[n]$ and any $\gamma>0$, there exists a sequence $\mathrm{e}ta_n \mathrm{d}ownarrow 0$ such that \be \mathbb{P}{U_{i,n}\leq \frac{1+\mathrm{e}ta_n}{\E W}}=1-\mathcal O(n^{-\gamma}). \mathrm{e}e \mathrm{e}nd{corollary} \begin{proof} For some constants $\alpha>0$ as in Lemma~{\rm Re}f{lemma:weightsumbounds} and $\mathrm{d}elta\in(0,1/4)$ we bound $U_{i,n}$ from above by \be \label{eq:splitsum} U_{i,n}\leq \frac{1}{\log n}\bigg[\sum_{j=1}^{\lfloor\alpha \log n\rfloor }1\wedge \frac{1}{S_j}+\sum_{j=\lceil \alpha \log n\rceil}^{\lfloor (C\log n)^{1/(1-2\mathrm{d}elta)}\rfloor}\frac{1}{S_j}+\sum_{j=\lceil (C\log n)^{1/(1-2\mathrm{d}elta)}\rceil}^{n-1}\frac{1}{S_j}\bigg], \mathrm{e}e where in the first sum on the right-hand side we use that $W_i\leq 1\wedge S_j$ for any $j\geq i$. We now consider each of the sums one at a time. For the final sum, we use the event $E_n^{(3)}$ from Lemma~{\rm Re}f{lemma:weightsumbounds} to obtain that on this event, \be \label{eq:uil1} \frac{1}{\log n}\sum_{j=\lceil (C\log n)^{1/(1-2\mathrm{d}elta)}\rceil}^{n-1}\frac{1}{S_j} \leq \frac{1}{\E W} \frac{1}{1-\zeta_n'} =\frac{1+o(1)}{\E W}. \mathrm{e}e For the middle sum in~\mathrm{e}qref{eq:splitsum}, we can bound $S_j\geq S_{\lceil \alpha \log n\rceil}$ for each index $j$ in the sum. Together with the event $E^{(4)}_n$ from Lemma~{\rm Re}f{lemma:weightsumbounds}, we obtain the upper bound \be \label{eq:uil2} \frac{(C\log n)^{1/(1-2\mathrm{d}elta)}}{\log n}\frac{1}{S_{\lceil \alpha \log n\rceil}}\leq \frac{C^{1/(1-2\mathrm{d}elta)}}{c} (\log n)^{(4\mathrm{d}elta-1)/(1-2\mathrm{d}elta)}=o(1), \mathrm{e}e by the choice of $\mathrm{d}elta$. Finally, we bound the first sum in~\mathrm{e}qref{eq:splitsum}. We set $f(n):=\lceil \log n/\log\log\log n\rceil$ and $g(n):=\lceil \log\log n\rceil$. On the event $E_n^{(5)}$ from Lemma~{\rm Re}f{lemma:weightsumbounds}, we then have \be \label{eq:uil3} \frac{1}{\log n}\sum_{j=1}^{\lfloor \alpha \log n\rfloor} 1\wedge \frac{1}{S_j}\leq \frac{f(n)}{\log n}+\frac{1}{\log n}\sum_{j=f(n)+1}^{\lfloor \alpha \log n\rfloor} \frac{1}{S_j}\leq \frac{f(n)}{\log n}+\frac{\alpha }{ S_{f(n)}}\leq \frac{f(n)}{\log n}+\frac{\alpha}{g(n)}=o(1). \mathrm{e}e Combining~\mathrm{e}qref{eq:uil1}, \mathrm{e}qref{eq:uil2} and~\mathrm{e}qref{eq:uil3} yields a sequence $(\mathrm{e}ta_n)_{n \geq 1}$ with $\mathrm{e}ta_n \mathrm{d}ownarrow 0$ such that \be \mathbb{P}{U_{i,n}\leq \frac{1+\mathrm{e}ta_n}{\E W}}\leq \mathbb{P}{\big(E_n^{(3)}\cap E_n^{(4)}\cap E_n^{(5)}\big)^c}=O(n^{-\gamma}), \mathrm{e}e for any $\gamma>0$, by Lemma~{\rm Re}f{lemma:weightsumbounds}, which concludes the proof. 
\mathrm{e}nd{proof} \begin{lemma}\label{lemma:logints} For any $k\in\mathbb{N}$ and any $0< a\leq b<\infty$, \be \label{eq:logint} \int_a^b \int_{x_1}^b\cdots \int_{x_{k-1}}^b \prod_{j=1}^kx_j^{-1}\,\mathrm{d} x_k\ldots\mathrm{d} x_1= \frac{(\log(b/a))^k}{k!}. \mathrm{e}e Similarly, for any $k\in\mathbb{N}$ and any $0< a\leq b-k<\infty$, \be \label{eq:loglb} \int_{a+1}^b\int_{x_1+1}^b\cdots \int_{x_{k-1}+1}^b\prod_{j=1}^{k}x_{j}^{-1}\,\mathrm{d} x_k\ldots \mathrm{d} x_1\geq \frac{(\log(b/(a+k)))^k}{k!}. \mathrm{e}e Moreover, for any $k\in\mathbb{N}$ and any $0<a\leq b<\infty$ and $c>0$, \be \label{eq:logint2} \int_{a}^b \int_{x_1}^b \cdots \int_{x_{k-1}}^b x_k^{-(1+c)}\prod_{j=1}^{k-1}x_j^{-1} \,\mathrm{d} x_k\ldots \mathrm{d} x_1=c^{-k}a^{-c}\bigg(1-(b/a)^{-c}\sum_{j=0}^{k-1}c^j\frac{(\log(b/a))^j}{j!}\bigg). \mathrm{e}e \mathrm{e}nd{lemma} \begin{proof} We prove~\mathrm{e}qref{eq:logint} by recursively computing the integrals. For the inner integral, we directly obtain \be \label{eq:log} \int_{x_{k-1}}^b x_k^{-1}\,\mathrm{d} x_k=\log(b/x_{k-1}). \mathrm{e}e Now, for the $j^{\text{th}}$ innermost integral we use the substitution $y_{k-(j-1)}=\log(b/ x_{k-(j-1)})$, with $2\leq j\leq k$. This yields, for the second innermost integral (i.e.\ $j=2$), by~\mathrm{e}qref{eq:log}, \be \int_{x_{k-2}}^b x_{k-1}^{-1}\log (b/x_{k-1})\,\mathrm{d} x_{k-1}=\int_0^{\log(b/x_{k-2})} y_{k-1}\,\mathrm{d} y_{k-1}=\frac12 (\log(b/x_{k-2}))^2. \mathrm{e}e Reiterating this approach for $j=3,\ldots, k$ yields the desired result. We use the same approach for the proofs of~\mathrm{e}qref{eq:loglb} and~\mathrm{e}qref{eq:logint2}. For the former, the inner integral equals $\log(b/(x_{k-1}+1))$. The integrand we then obtain can be bounded from below by using $x_{k-1}^{-1}\geq (x_{k-1}+1)^{-1}$. Moreover, we restrict the upper limit of the integral over $x_{k-1}$ from $b$ to $b-1$ (note that this is possible since $a\leq b-1$). By using the substitution $y_{k-1}=\log(b/(x_{k-1}+1))$, this yields the lower bound, for any $x_{k-2} \leq b-2$, \be \int_{x_{k-2}+1}^{b-1} (x_{k-1}+1)^{-1}\log(b/(x_{k-1}+1))\,\mathrm{d} x_{k-1}=\int_0^{\log(b/(x_{k-2}+2))} y_{k-1}\,\mathrm{d} y_{k-1}=\frac12 (\log(b/(x_{k-2}+2)))^2. \mathrm{e}e By repeating this procedure for the remaining integrals, we obtain the desired result. To finally prove~\mathrm{e}qref{eq:logint2}, let us define the multiple integrals on the left-hand side as $I_k$. Then, by computing the value of the innermost integral, \be I_k=\frac{I_{k-1}}{c}-\frac{1}{b^cc} \int_a^b\int_{x_1}^b \cdots \int_{x_{k-2}}^b \prod_{j=1}^{k-1}x_j^{-1}\,\ d x_{k-1}\ldots \mathrm{d} x_1=\frac{I_{k-1}}{c}-\frac{(\log(b/a))^{k-1}}{b^cc(k-1)!}, \mathrm{e}e where the last step follows from~\mathrm{e}qref{eq:logint}. Continuing this recursion yields \be I_k=\frac{I_1}{c^{k-1}}-\frac{1}{b^c}\sum_{j=1}^{k-1}c^{-j}\frac{(\log(b/a))^{k-j}}{(k-j)!} =c^{-k}a^{-c}\bigg(1-(b/a)^{-c}\sum_{j=0}^{k-1}c^j\frac{(\log(b/a))^j}{j!}\bigg), \mathrm{e}e which concludes the proof. \mathrm{e}nd{proof} \begin{lemma}\label{lemma:rk} Consider the sequences $(s_k,r_k)_{k\in\mathbb{N}}$ in~\mathrm{e}qref{eq:ek}. These sequences have the following properties: \begin{enumerate} \item[$(i)$] $s_k$ is increasing, \item[$(ii)$] $r_k$ is decreasing and $\lim_{k\to\infty}r_k=0$. \mathrm{e}nd{enumerate} \mathrm{e}nd{lemma} \begin{proof} $(i)$ Assume that $s_{k+1}<s_k$ for some $k\in\mathbb{N}$ and take $x\in(s_{k+1},s_k)$. 
By the definition of $s_{k+1},s_k$ and the choice of $x$, \be \mathbb{P}{W\in(x,1)}\leq \mathrm{e}^{-(1-\theta^{-1})(1-x)(k+1)}< \mathrm{e}^{-(1-\theta^{-1})(1-x)k}<\mathbb{P}{W\in(x,1)}, \ee which leads to a contradiction. $(ii)$ Assume that $s_k < s_{k+1}$ (otherwise the claim is immediately clear). Note that since the function $\mathbb{P}{W \in (x,1)}$ is c\`adl\`ag, we have for any $x < s_k$, \be\label{eq:rk} \mathbb{P}{ W \in (s_k,1)} \leq r_k \leq \lim_{y \uparrow s_k} \mathbb{P}{W \in (y,1)} \leq \mathbb{P}{W \in (x,1)}. \ee Hence, for any $x \in (s_k, s_{k+1})$, \[ r_k \geq \mathbb{P}{ W \in (s_k,1)} \geq \mathbb{P}{W \in (x,1)} \geq r_{k+1} .\] For the second part, since $s_k$ is increasing by $(i)$, we have that $s_k \rightarrow s^* \in (0,1]$. Suppose that $s^* \in (0,1)$. Then, for $k$ sufficiently large, we have $s_k\leq (1+s^*)/2$, so that $r_k \leq \mathrm{e}^{-(1-\theta^{-1})(1-s^*) k/2}$, and hence $r_k$ converges to $0$. Therefore, we can assume that $s_k \uparrow 1$. Let $k_0$ be the smallest $k$ such that $s_k < s_{k+1}$. Such a $k_0$ exists, as otherwise $s^* < 1$, since each $s_k <1$. Then, for $k \geq k_0$, let $\ell_k$ be the largest integer such that $s_{\ell_k} < s_k$. The assumption that $s_k \uparrow 1$ also excludes the possibility that $s_{\ell_k}$ is eventually constant, and so $\ell_k \rightarrow \infty$. In particular, we can argue as in~\eqref{eq:rk} to see that \[ r_{k} \leq \mathbb{P}{ W \in (s_{\ell_k},1)}. \] Moreover, as $s_{\ell_k} \rightarrow 1$ as $k \rightarrow \infty$, we deduce that $r_k \rightarrow 0$. \end{proof} \end{document}
\begin{equation}gin{document} \def\begin{equation}{\begin{equation}gin{equation}} \def\begin{eqnarray}{\begin{equation}gin{eqnarray}} \def\end{equation}{\end{equation}} \def\end{eqnarray}{\end{eqnarray}} \def\langle{\langlengle} \def\rangle{\ranglengle} \def\Theta{\Thetaeta} \def\langlem{\langlembda} \def\partial{\partialrtial} \def{\bf R}^2{{\bf R}^2} \def\overline{Q}_0^i{\overline{Q}_0^i} \def\varepsilonepsilon{\varepsilonepsilon} \def\varepsilonphi{\varepsilonphi} \def\int\! \! \! \int{\int\! \! \! \int} \def\overline{w}{\overline{w}} \def\tilde{w}{\tilde{w}} \def\overline{\xi}{\overline{\xi}} \def\tilde{\xi}{\tilde{\xi}} \def\tilde{\Lambda}_i{\tilde{\Lambda}_i} \def\nonumber{\nonumberumber} \def\displaystyle{\displaystyleplaystyle} \def\varepsilon{\varepsilonepsilon} \begin{equation}gin{center} {\langlerge \bf Lower Bound of the Lifespan of the Solution \\ to Systems of Quasi-linear Wave Equations\\ with Multiple Propagation Speeds}\\[1cm] \end{center} \begin{equation}gin{center} {\sc Akira HOSHIGA}\\ Shizuoka University, Japan\\ \end{center} \begin{equation}gin{abstract} We consider the Cauchy problem of systems of quasilinear wave equations in 2-dimensional space. We assume that the propagation speeds are distinct and that the nonlinearities contain quadratic and cubic terms of the first and second order derivatives of the solution. We know that if the all quadratic and cubic terms of nonlinearities satisfy $Strong$ $Null$-$condition$, then there exists a global solution for sufficiently small initial data. In this paper, we study about the lifespan of the smooth solution, when the cubic terms in the quasi-linear nonlinearities do not satisfy the Strong null-condition. In the proof of our claim, we use the $ghost$ $weight$ energy method and the $L^{\infty}$-$L^{\infty}$ estimates of the solution, which is slightly improved. \end{abstract} \noindent ------------------------------------------------------------\\ {\footnotesize 2010 ASC: 35L05, 35L15, 35L51, 35L70.\\ Keywords: Lifespan, Wave Eqation, Null-condition.} \section{Intrduction} In this paper, we study the Cauchy problem; \begin{eqnarray} &\Box_i u^i =\partial_0^2 u^i-c_i^2\triangle u^i=F^i(\partial u, \partial^2 u)\qquad (x, t)\in \mathbb{R}^2\times (0, \infty), \langlebel{1.1}\\ &u^i(x, 0)=\varepsilon f^i(x),\quad \partial_t u^i (x, 0)=\varepsilon g^i(x)\qquad\qquad x\in\mathbb{R}^2,\langlebel{1.2} \end{eqnarray} where $i=1,2,\cdots, m$ and $u(x, t)=(u^1(x, t), u^2(x, t),\cdots, u^m(x, t))$. We denote $\partial u=(\partial_\alpha u)_{\alpha=0,1,2}$ with $\partial_0=\partial_t=\partial/\partial t$, $\partial_j=\partial/\partial x_j\ (j=1, 2)$ and $\partial^2u=(\partial_\alpha\partial_{\begin{equation}ta}u)_{\alpha,\begin{equation}ta=0,1,2}$. Let $\varepsilon >0$ is a small parameter and assume $f^i(x), g^i(x)\in C^{\infty}_0(\mathbb{R}^2)$ and $\mbox{supp}\{f^i\}, \mbox{supp}\{g^i\}\subset \{x\in\mathbb{R}^2\ :\ |x|\leq M\}$ for some positive constant $M$. 
We also assume that the propagation speeds of (\ref{1.1}) are distinct constants, namely we assume \begin{eqnarray} 0<c_1<c_2<\cdots <c_m.\label{1.3} \end{eqnarray} Each nonlinearity $F^i(v, w)$ is smooth near the origin and is expressed as \begin{eqnarray} F^i(\partial u,\partial^2 u)=\sum_{\ell=1}^m\sum_{\alpha, \beta=0}^2 A_{\ell}^{i, \alpha\beta}(\partial u)\partial_{\alpha}\partial_{\beta}u^{\ell}+B^i(\partial u),\label{1.4} \end{eqnarray} where \begin{eqnarray} A_{\ell}^{i, \alpha\beta}(\partial u)=\sum_{j=1}^m\sum_{\gamma=0}^2a_{\ell j}^{i, \alpha\beta\gamma}\partial_{\gamma}u^j+\sum_{j,k=1}^m\sum_{\gamma, \delta=0}^2c_{\ell jk}^{i, \alpha\beta\gamma\delta}\partial_{\gamma}u^j\partial_{\delta}u^k+O(|\partial u|^3)\label{1.5} \end{eqnarray} and \begin{eqnarray} B^i(\partial u)=\sum_{j,k=1}^m\sum_{\alpha,\beta=0}^2b_{jk}^{i, \alpha\beta}\partial_{\alpha}u^j\partial_{\beta}u^k+\sum_{j,k,\ell =1}^m\sum_{\alpha,\beta, \gamma=0}^2d_{jk\ell}^{i, \alpha\beta\gamma}\partial_{\alpha}u^j\partial_{\beta}u^k\partial_{\gamma}u^{\ell}+O(|\partial u|^4).\label{1.6} \end{eqnarray} Here $a_{\ell j}^{i, \alpha\beta\gamma}, b_{jk}^{i, \alpha\beta}, c_{\ell jk}^{i, \alpha\beta\gamma\delta}, d_{jk\ell}^{i, \alpha\beta\gamma}$ are constants. \\ In order to derive the energy estimate, we need to assume that for each $i, \ell=1,2,\cdots, m$ and $\alpha,\beta=0,1,2$, \begin{eqnarray} A_{\ell}^{i,\alpha\beta}(v)=A_i^{\ell,\alpha\beta}(v)=A_i^{\ell,\beta\alpha}(v)\label{1.7} \end{eqnarray} and \begin{eqnarray} |A_{\ell}^{i,\alpha\beta}(v)|<\frac{(\min\{1, c_1\})^2}{2m}.\label{1.8} \end{eqnarray} The assumption (\ref{1.8}) constitutes no additional restriction, since we will only deal with small solutions. Note that by (\ref{1.3}) and (\ref{1.4}), we have for any $i=1,2,\cdots,m$, \begin{eqnarray} u^i(x, t)=0\qquad \mbox{for}\quad |x|\geq c_mt+M.\label{1.85} \end{eqnarray} For the proof of (\ref{1.85}), see Theorem 4a in F. John \cite{j1}. Furthermore, in order to apply the $ghost$ $weight$ energy method, we need to assume that \begin{eqnarray} \begin{aligned} a_{\ell j}^{i, \alpha\beta\gamma}=0&\qquad\mbox{when}\qquad (j, \ell) \ne (i, i),\\ b_{jk}^{i, \alpha\beta}=0&\qquad \mbox{when}\qquad j\ne k, \end{aligned} \label{1.0} \end{eqnarray} for each $i=1,2,\cdots, m$ and $\alpha,\beta,\gamma=0,1,2$. This assumption means that only the terms $\partial u^i\partial^2 u^i$ and $\partial u^j\partial u^j$ $(j=1,\cdots, m)$ appear in the quadratic terms of $F^i$. \\ \indent The purpose of this paper is to show a precise estimate for the $lifespan$ $T_{\varepsilon}$. Here, we define $T_{\varepsilon}$ as the supremum of all $T>0$ for which there exists a solution $u$ to the Cauchy problem (\ref{1.1}) and (\ref{1.2}) in $\big(C^{\infty}(\mathbb{R}^2\times [0, T))\big)^m$. To state the known results and our result, we introduce some notations.
Firstly, for $X=(X_0, X_1, X_2)\in \mathbb{R}^3$, we define $\Phi (X)=(\Phi_{\ell}^i(X))_{i, \ell=1,2,\cdots, m}$, $\Psi (X)=(\Psi_{\ell}^i(X))_{i, \ell=1,2,\cdots, m}$, $\Theta (X)=(\Theta_{\ell}^i(X))_{i, \ell=1,2,\cdots, m}$ and $\Xi (X)=(\Xi_{\ell}^i(X))_{i, \ell=1,2,\cdots, m}$ by
\begin{eqnarray}
&&\Phi_{\ell}^i(X)=\sum_{\alpha,\beta,\gamma=0}^2a_{\ell\ell}^{i,\alpha\beta\gamma}X_{\alpha}X_{\beta}X_{\gamma}, \label{1.9} \\
&&\Psi_{\ell}^i(X)=\sum_{\alpha,\beta=0}^2b_{\ell\ell}^{i,\alpha\beta}X_{\alpha}X_{\beta}, \label{1.10} \\
&&\Theta_{\ell}^i(X)=\sum_{\alpha,\beta,\gamma,\delta=0}^2c_{\ell\ell\ell}^{i,\alpha\beta\gamma\delta}X_{\alpha}X_{\beta}X_{\gamma}X_{\delta}, \label{1.11} \\
&&\Xi_{\ell}^i(X)=\sum_{\alpha,\beta,\gamma=0}^2d_{\ell\ell\ell}^{i,\alpha\beta\gamma}X_{\alpha}X_{\beta}X_{\gamma}. \label{1.12}
\end{eqnarray}
Moreover, let $\phi (X)=(\phi_{\ell}^i(X))_{i, \ell=1,2,\cdots, m}$ be a function of $X=(X_0, X_1, X_2)$. If
\begin{eqnarray}
\phi^i_{\ell}(X)\equiv 0 \quad\mbox{for}\quad X_0^2=c_{\ell}^2(X_1^2+X_2^2)\label{1.13}
\end{eqnarray}
holds for each $i, \ell=1,2,\cdots,m$, then we write $\phi\approx 0$ and we say that $\phi$ satisfies the $Strong$ $Null$-$condition$. On the other hand, if (\ref{1.13}) holds when $\ell=i$ $(i=1,2,\cdots, m)$, then we write $\phi\sim 0$ and we say that $\phi$ satisfies the $Standard$ $Null$-$condition$. In \cite{2005}, the author showed that $\displaystyle{\liminf_{\varepsilon\to 0} \varepsilon \sqrt{T_{\varepsilon}}\geq C}$ holds for a certain positive constant $C$, provided $B^i(\partial u)\equiv 0$. On the other hand, the author also showed in \cite{2006} that $T_{\varepsilon}=\infty$ for sufficiently small $\varepsilon>0$, provided $\Phi$, $\Psi$, $\Theta$ and $\Xi$ satisfy the Strong Null-condition and $A_{\ell}^{i,\alpha\beta}(\partial u)\equiv 0$ holds for $\ell\ne i$. In this paper, we consider the case where $\Phi$ and $\Psi$ satisfy the Strong Null-condition and $\Xi$ satisfies the Standard Null-condition. Namely, we assume $\Phi\approx 0$, $\Psi\approx 0$ and $\Xi\sim 0$. \par
\indent Secondly, we introduce the Friedlander radiation field $\mathcal{F}^i(\rho,\omega)$. Let $u^i_0(x, t)$ be the solution to the Cauchy problem for the homogeneous linear wave equation;
\begin{eqnarray}
&\Box_i u_0^i =0\qquad\qquad (x, t)\in \mathbb{R}^2\times (0, \infty), \label{1.14}\\
&u_0^i(x, 0)=f^i(x),\quad \partial_0 u_0^i (x, 0)=g^i(x)\qquad x\in\mathbb{R}^2.\label{1.15}
\end{eqnarray}
Then we define $\mathcal{F}^i$ by
\begin{eqnarray}
\mathcal{F}^i(\rho, \omega)=\lim_{r\to\infty}r^{\frac12}u_0^i(x, t)\label{1.16}
\end{eqnarray}
with $x=r\omega \ (\omega\in S^1)$ and $\rho=r-c_it$. We know that $\mathcal{F}^i(\rho,\omega)$ is expressed as
\begin{eqnarray}
\nonumber \mathcal{F}^i(\rho, \omega)=\frac1{2\sqrt{2}}\int_{\rho}^{\infty}\frac1{\sqrt{s-\rho}}(R_{g^i}(s, \omega)-\partial_s R_{f^i}(s, \omega))\,ds,
\end{eqnarray}
where $R_h(s, \omega)$ is the Radon transform of $h\in C_0^{\infty}(\mathbb{R}^2)$, $i.e.$,
\begin{eqnarray}
\nonumber R_h(s, \omega)=\int_{\omega\cdot y=s}h(y)\ dS_y
\end{eqnarray}
for $s\in \mathbb{R},\ \omega\in S^1$.
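Before proceeding, we record a standard example of the Strong Null-condition introduced above; it is added here for the reader's convenience and is not used in the sequel. If the only non-vanishing quadratic coefficients in (\ref{1.6}) are $b_{\ell\ell}^{i,00}=1$ and $b_{\ell\ell}^{i,11}=b_{\ell\ell}^{i,22}=-c_{\ell}^2$, then
\begin{eqnarray}
\nonumber \Psi_{\ell}^i(X)=X_0^2-c_{\ell}^2(X_1^2+X_2^2),
\end{eqnarray}
which vanishes identically on $X_0^2=c_{\ell}^2(X_1^2+X_2^2)$, so that $\Psi\approx 0$. The corresponding quadratic terms are the classical null forms $(\partial_t u^{\ell})^2-c_{\ell}^2|\nabla u^{\ell}|^2$, and this choice is also compatible with (\ref{1.0}).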
Note that $\mathcal{F}^i(\rho, \omega)$ satisfies
\begin{eqnarray}
\mathcal{F}^i(\rho, \omega)=0\qquad \mbox{for}\qquad \rho\geq M,\label{1.17}
\end{eqnarray}
\begin{eqnarray}
\big|\partial_{\rho}^{\ell}\mathcal{F}^i(\rho,\omega)\big|\leq \frac{C}{(1+|\rho|)^{\frac12+\ell}}\label{1.18}
\end{eqnarray}
and
\begin{eqnarray}
\big|r^{\frac12}\partial_0^{\ell}u_0^i(r\omega, t)-(-c_i)^{\ell}\partial_{\rho}^{\ell}\mathcal{F}^i(r-c_it, \omega)\big|\leqq\frac{C(1+|r-c_it|)^{\frac12}}t \qquad \label{1.19}
\end{eqnarray}
for $t\geq r/(2c_i)\geq 1$ and $\ell=1, 2$. For the details of (\ref{1.17}), (\ref{1.18}) and (\ref{1.19}), see L. H\"ormander \cite{hor1}.\\
Then we define a constant
\begin{eqnarray}
H_i=\max_{\begin{subarray}{c}\rho\in\mathbb{R}\\ \omega\in S^1\end{subarray}}\left\{-\frac1{c_i^2}\Theta_i^i(-c_i, \omega ) \partial_{\rho}\mathcal{F}^i(\rho,\omega)\partial_{\rho}^2\mathcal{F}^i(\rho,\omega)\right\}\label{1.20}
\end{eqnarray}
and set
\begin{eqnarray}
H=\max\{H_1, H_2,\cdots,H_m\}.
\end{eqnarray}
Note that by (\ref{1.17}) and (\ref{1.18}), each $H_i$ is well-defined and nonnegative. \\
\indent Now, we state our main result.\\
\begin{thm}
Assume that (\ref{1.7}), (\ref{1.8}) and (\ref{1.0}) hold for the Cauchy problem (\ref{1.1}) and (\ref{1.2}). Also assume $\Phi\approx 0$, $\Psi\approx 0$ and $\Xi\sim 0$. Then, if $H>0$, we have
\begin{eqnarray}
\liminf_{\varepsilon\to 0}\varepsilon^2\log (1+T_{\varepsilon})\geqq \frac1H.\label{1.21}
\end{eqnarray}
\end{thm}
Note that the author showed the same estimate in \cite{2000} in the case $a_{\ell j}^{i, \alpha\beta\gamma}=b_{jk}^{i,\alpha\beta}=0$ for all $\alpha, \beta, \gamma=0,1,2$ and $i, j, k=1,2,\cdots, m$. Hence, our result (\ref{1.21}) is a generalization of the result in \cite{2000}. Also note that the estimate (\ref{1.21}) cannot be improved in general, since a counterexample has been given for $m=1$ and $B^1(\partial u)\equiv 0$ in \cite{1995}. \par
\indent In the following sections, we aim at showing (\ref{1.21}). In Section 2, we prepare some notation and state a lemma which implies (\ref{1.21}). We also discuss the estimates of the null-forms. In Section 3, we show the $L^{\infty}$-$L^{\infty}$ estimates of solutions to the wave equation; this improves the estimate shown in \cite{hk1}. In Section 4, we concentrate on showing the $a$ $priori$ estimates of the solution, using the $ghost$ $weight$ energy inequality and the method of ordinary differential equations along the characteristic curves. \\
\section{Preliminaries for the proof of Theorem 1.1}
\indent Our main theorem is immediately derived from the following lemma.
\begin{lem}
Under the same assumptions as in Theorem 1.1, choose a positive constant $B$ with $B<1/H$. Then there exists a constant $\varepsilon_0(B)>0$ such that
\begin{eqnarray}
\varepsilon^2\log (1+T_{\varepsilon})\geq B\label{2.0}
\end{eqnarray}
holds for $0<\varepsilon<\varepsilon_0(B)$.
\end{lem}
In order to state another lemma, which implies Lemma 2.1, we introduce some notation.
First, we introduce the differential operators
\begin{eqnarray}
\nonumber \Omega=x_1\partial_2-x_2\partial_1,\qquad S=t\partial_0+x_1\partial_1+x_2\partial_2
\end{eqnarray}
and denote
$$
\Gamma =(\Gamma_0,\ \Gamma_1,\ \Gamma_2,\ \Gamma_3,\ \Gamma_4)=(\partial_0,\ \partial_1,\ \partial_2,\ \Omega,\ S)
$$
and
\begin{eqnarray}
\nonumber \Gamma^a=\Gamma_0^{a_0}\Gamma_1^{a_1}\Gamma_2^{a_2}\Gamma_3^{a_3}\Gamma_4^{a_4}
\end{eqnarray}
for a multi-index $a=(a_0, a_1,a_2,a_3,a_4)$. We can verify the following commutation relations:
\begin{eqnarray}
\nonumber [\Gamma_{\alpha}, \Box_i]=-2\delta_{\alpha 4}\Box_i \qquad (\alpha=0,1,2,3,4, \ i=1, 2, \cdots, m)
\end{eqnarray}
and
\begin{eqnarray}
\nonumber &[\partial_{\alpha}, \partial_{\beta}]=0\quad (\alpha,\beta=0,1,2),\qquad [S, \partial_{\alpha}]=-\partial_{\alpha}\quad (\alpha=0,1,2)&\\
\nonumber &[\Omega, \partial_1]=-\partial_2,\quad [\Omega, \partial_2]=\partial_1,\quad [\Omega, \partial_0]=0,\quad [S, \Omega]=0.&
\end{eqnarray}
Here, $[A, B]=AB-BA$ and $\delta_{\alpha\beta}$ is the Kronecker delta. \\
\indent Secondly, we define norms. Let $v(x, t)=(v^1(x, t), v^2(x, t),\cdots,v^m(x, t))$ be a vector valued function defined on $\mathbb{R}^2\times [0, T)$; then we set
\begin{eqnarray}
\nonumber |v(x, t)|_k&=&\sum_{i=1}^m|v^i(x, t)|_k=\sum_{i=1}^m\sum_{|a|\leq k}|\Gamma^a v^i(x, t)|,\\
\nonumber |v(t)|_k&=&\sup_{x\in \mathbb{R}^2}|v(x, t)|_k,\qquad |v|_{k,T}=\sup_{0\leqq t<T}|v(t)|_k,\\
\nonumber [v(x, t)]_k&=&\sum_{i=1}^{m}[v^i(x, t)]_k=\sum_{i=1}^m\sum_{|a|\leq k}(1+|x|)^{\frac12}(1+||x|-c_it|)^{\frac{15}{16}}|\Gamma^a v^i(x, t)|,\\
\nonumber [v(t)]_k&=&\sup_{x\in\mathbb{R}^2}[v(x, t)]_k,\qquad [v]_{k, T}=\sup_{0\leqq t<T}[v(t)]_k,\\
\nonumber [[v(x, t)]]_k&=&\sum_{i=1}^{m}[[v^i(x, t)]]_k=\sum_{i=1}^m\sum_{|a|\leq k}(1+|x|)^{\frac12}(1+||x|-c_it|)|\Gamma^a v^i(x, t)|,\\
\nonumber [[v(t)]]_k&=&\sup_{x\in\mathbb{R}^2}[[v(x, t)]]_k,\qquad [[v]]_{k, T}=\sup_{0\leqq t<T}[[v(t)]]_k,\\
\nonumber \langle v(x, t)\rangle_k&=&\sum_{i=1}^m\langle v^i(x, t)\rangle_k=\sum_{i=1}^m\sum_{|a|\leq k}(1+|x|+t)^{\frac{7}{16}}|\Gamma^a v^i(x, t)|,\\
\nonumber \langle v(t)\rangle_k&=& \sup_{x\in \mathbb{R}^2}\langle v(x, t)\rangle_k, \qquad \langle v\rangle_{k,T}=\sup_{0\leq t<T}\langle v(t)\rangle_k,\\
\nonumber \langle\langle v(x, t)\rangle\rangle_k&=&\sum_{i=1}^m\langle\langle v^i(x, t)\rangle\rangle_k=\sum_{i=1}^m\sum_{|a|\leq k}(1+|x|+t)^{\frac{1}{2}}|\Gamma^a v^i(x, t)|,\\
\nonumber \langle\langle v(t)\rangle\rangle_k&=& \sup_{x\in \mathbb{R}^2}\langle\langle v(x, t)\rangle\rangle_k, \qquad \langle\langle v\rangle\rangle_{k,T}=\sup_{0\leq t<T}\langle\langle v(t)\rangle\rangle_k,\\
\nonumber ||v(t)||_k&=& \sum_{i=1}^{m}\sum_{|a|\leq k}\bigg(\int_{\mathbb{R}^2}|\Gamma^a v^i(x, t)|^2\ dx\bigg)^{\frac12},\qquad ||v||_{k, T}=\sup_{0\leqq t<T}||v(t)||_k,
\end{eqnarray}
where $k$ is a nonnegative integer and $|a|=a_0+a_1+\cdots+a_4$ for a multi-index $a=(a_0,a_1,a_2,a_3,a_4)$. \\
Then, we find that the following lemma implies Lemma 2.1.\\
\begin{lem}
Let $u(x, t)=(u^1(x, t), u^2(x, t),\cdots, u^m(x, t))\in (C^{\infty}(\mathbb{R}^2\times [0, T)))^m$ be a solution to (\ref{1.1}) and (\ref{1.2}). Choose an integer $k$ so that $k\geq 21$. Let $B>0$ be a constant with $B<1/H$ and let $J>0$ be a constant.
Then, there exist constants $K=K(B)>0$ and $\varepsilon_0=\varepsilon_0(J, B)>0$ such that, if
\begin{eqnarray}
[\partial u]_{k, T}+\langle u\rangle_{k+1, T}\leq J\varepsilon \label{2.1}
\end{eqnarray}
holds for $0<\varepsilon<\varepsilon_0$, then
\begin{eqnarray}
[\partial u]_{k, T_B}+\langle u\rangle_{k+1, T_B}\leq K\varepsilon \label{2.2}
\end{eqnarray}
holds for the same $\varepsilon$. Here, we have set $T_B=\min \{T, t_B\}$ and $t_B=\exp(B/\varepsilon^2)-1$.
\end{lem}
\noindent {\bf Proof of Lemma 2.1 assuming Lemma 2.2:} We show by contradiction that Lemma 2.2 implies Lemma 2.1. If the statement of Lemma 2.1 were false, there would exist a positive constant $B_0(<1/H)$ such that for any $\varepsilon>0$, there exists $\delta=\delta(\varepsilon)>0$ satisfying
\begin{eqnarray}
\delta^2\log (1+T_{\delta})\leq B_0 \quad\mbox{and}\quad 0<\delta<\varepsilon.\label{2.00}
\end{eqnarray}
On the other hand, by the local existence theorem shown in A. Majda \cite{ma}, we find that there are positive constants $\varepsilon_1$ and $t_{\varepsilon}$ such that there exists a smooth solution $u(x, t)\in C^{\infty}(\mathbb{R}^2\times [0, t_{\varepsilon}))$ to (\ref{1.1}) and (\ref{1.2}) for $0<\varepsilon<\varepsilon_1$. Let $L>0$ be a constant satisfying $[\partial u(0)]_k+\langle u(0)\rangle_{k+1}\leq L\varepsilon$ and set $J_0=2\max\{K(B_0), L\}$, where $K(B_0)$ is the constant determined in Lemma 2.2 with $B=B_0$. Then we can define a positive constant $\tau_{\varepsilon}$ by
\begin{eqnarray}
\nonumber \tau_{\varepsilon}=\sup\{t>0\ :\ t<T_{\varepsilon}\ \ \mbox{and}\ \ [\partial u(t)]_k+\langle u(t)\rangle_{k+1}\leq J_0\varepsilon\}\ (\leq T_{\varepsilon})
\end{eqnarray}
for each $\varepsilon\in (0, \varepsilon_*)$. Here we have set $\varepsilon_*=\min\{\varepsilon_0(J_0, B_0) , \varepsilon_1\}$. Note that (\ref{2.1}) holds for $\varepsilon\in (0, \varepsilon_*)$ with $J=J_0$ and $T=\tau_{\varepsilon}$. Moreover, by using (\ref{1.7}) and (\ref{1.8}), we can show $\tau_{\varepsilon}<T_{\varepsilon}$ for each $\varepsilon\in (0, \varepsilon_*)$. (For the details, see the proof of Lemma 2.1 in \cite{2005}.) This means that
\begin{eqnarray}
[\partial u]_{k, \tau_{\varepsilon}}+\langle u\rangle_{k+1,\tau_{\varepsilon}}= J_0\varepsilon\label{2.p}
\end{eqnarray}
holds for $\varepsilon\in (0, \varepsilon_*)$. However, as mentioned above, there exists a constant $\delta=\delta (\varepsilon_*)$ such that (\ref{2.00}) holds. In that case, we find that $T_{B_0}=\min\{\tau_{\delta}, t_{B_0}\}=\tau_{\delta}$ and hence that Lemma 2.2 implies
\begin{eqnarray}
[\partial u]_{k,\tau_{\delta}}+\langle u\rangle_{k+1,\tau_{\delta}}\leq K(B_0)\delta \leq \frac{J_0}2\delta.
\end{eqnarray}
This contradicts (\ref{2.p}), and therefore the claim of Lemma 2.1 is correct.\\[2mm]
\indent In the rest of this paper, we aim at showing Lemma 2.2. For this purpose, we prepare a proposition concerning the $null$-$form$. Set $\displaystyle{c_*=\min_{1\leq i\leq m}\{c_i-c_{i-1}\}/3}$ with $c_0=0$. We see $c_*>0$ from (\ref{1.3}).
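We also record why the factor $1/3$ in the definition of $c_*$ is convenient; this short verification is added for the reader's convenience and is not part of the original argument. Since the speeds are ordered, $|c_i-c_j|\geq \min_{1\leq k\leq m}(c_k-c_{k-1})=3c_*$ for $i\ne j$. Hence, if a point $(x, t)$ with $t>0$ satisfied both $||x|-c_it|\leq c_*t$ and $||x|-c_jt|\leq c_*t$ with $i\ne j$, the triangle inequality would give
\begin{eqnarray}
\nonumber |c_i-c_j|\,t\leq ||x|-c_it|+||x|-c_jt|\leq 2c_*t\leq \frac23|c_i-c_j|\,t,
\end{eqnarray}
which is impossible. Thus no point with $t>0$ can belong to two of the sets $\Lambda_i(T)$ introduced just below.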
Also we set
\begin{eqnarray}
\Lambda_i(T)=\{ (x, t)\in \mathbb{R}^2\times [0, T)\ :\ ||x|-c_it|\leq c_*t \} \label{2.3}
\end{eqnarray}
and
\begin{eqnarray}
\Lambda_0(T)=\mathbb{R}^2\times[0, T)\setminus \displaystyle{\bigcup_{i=1}^m\Lambda_i(T)}.\label{2.4}
\end{eqnarray}
We find that $\Lambda_i(T)\cap\Lambda_j(T)=\emptyset$ holds for any $T>0$, if $i\ne j$, and that there exists a constant $C_1>0$ such that
\begin{eqnarray}
\frac1{C_1}(1+|x|+t)\leq 1+||x|-c_jt|\leq C_1(1+|x|+t)\qquad (x, t)\in\Lambda_i(T) \label{2.5}
\end{eqnarray}
holds for any $T>0$, if $i\ne j$. \\
\indent In order to derive a good decay property from the null-form in $\Lambda_i(T)$, we introduce the following operators:
\begin{eqnarray}
Z^i=(Z_1^i, Z_2^i),\qquad\quad Z_{\alpha}^i=c_i\partial_{\alpha}+\frac{x_\alpha}{|x|}\partial_0\qquad (i=1,2,\cdots,m,\ \alpha=1,2).\label{2.6}
\end{eqnarray}
Then we find that
\begin{eqnarray}
Z_1^i=\frac{c_it-|x|}{t}\partial_1+\frac{x_1S-x_2\Omega}{|x|t},\qquad Z_2^i=\frac{c_it-|x|}{t}\partial_2+\frac{x_2S+x_1\Omega}{|x|t}\label{2.7}
\end{eqnarray}
and hence that
\begin{eqnarray}
|Z^iv(x, t)|\leq \frac{||x|-c_it|}{t}|\partial v(x, t)|_0+\frac2t|v(x, t)|_1.\label{2.8}
\end{eqnarray}
Now we have the following.\\
\begin{prop}
Let $T>1$ be a constant and let $k$ be a positive integer. Let $v(x, t)=(v^1(x, t),v^2(x, t),\cdots, v^m(x, t))$ and $w(x, t)=(w^1(x, t), w^2(x, t),\cdots, w^m(x, t))$ be functions belonging to $(C^{\infty}(\mathbb{R}^2\times [0, T)))^{m}$. Assume that $\Phi\approx 0$, $\Psi\approx 0$, $\Xi\sim 0$ and (\ref{1.0}) hold. Then, there exists a positive constant $C_2$ independent of $T$ such that
\begin{eqnarray}
\begin{aligned}
&\left|\sum_{\alpha,\beta,\gamma=0}^2a_{ii}^{i,\alpha\beta\gamma}\partial_{\gamma}v^i(x, t)\partial_{\alpha}\partial_{\beta}w^i(x, t)\right|_k\\
\leq \ &C_2\sum_{|b+c|\leq k+1}(|Z^i\Gamma^b v^i(x, t)||\Gamma^c\partial^2 w^i (x, t)|+|\Gamma^b \partial v^i(x, t)||Z^i\Gamma^c\partial w^i (x, t)|),\label{2.9}
\end{aligned}
\end{eqnarray}
\begin{eqnarray}
\left|\sum_{\alpha,\beta=0}^2b_{jj}^{i, \alpha\beta}\partial_{\alpha}v^j(x, t)\partial_{\beta}v^j(x, t)\right|_k \leq C_2\sum_{|b+c|\leq k}|Z^j \Gamma^b v^j(x, t)||\Gamma^c\partial v^j(x, t)|,\label{2.10}
\end{eqnarray}
\begin{eqnarray}
\begin{aligned}
&\left|\sum_{\alpha,\beta,\gamma=0}^2d_{iii}^{i, \alpha\beta\gamma}\partial_{\alpha}v^i(x, t)\partial_{\beta}v^i(x, t)\partial_{\gamma}v^i(x, t)\right|_k\qquad\qquad\qquad\qquad\qquad\qquad\qquad \\
\leq \ &C_2\sum_{|b+c+d|\leq k}|Z^i \Gamma^b v^i(x, t)||\Gamma^c \partial v^i(x, t)||\Gamma^d \partial v^i(x, t)| \label{2.11}
\end{aligned}
\end{eqnarray}
and especially
\begin{eqnarray}
\begin{aligned}
&\sum_{|a|\leq k}\left|\sum_{\alpha,\beta,\gamma=0}^2\big\{\Gamma^a(a_{ii}^{i,\alpha\beta\gamma}\partial_{\gamma}v^i(x, t)\partial_{\alpha}\partial_{\beta}v^i(x, t))-a_{ii}^{i,\alpha\beta\gamma}\partial_{\gamma}v^i(x, t)\partial_{\alpha}\partial_{\beta}\Gamma^a v^i(x, t)\big\}\right|\qquad\\
\leq \ &C_2\sum_{|b+c|\leq k+1\atop |b|,|c|\leq k}|Z^i\Gamma^bv^i(x, t)||\Gamma^c\partial v^i(x, t)|\qquad\qquad \label{2.12}
\end{aligned}
\end{eqnarray}
hold for $i, j=1,2,\cdots,m$.
Moreover, we find that \begin{eqnarray} \begin{equation}gin{aligned} &\left|\sum_{\alpha,\begin{equation}ta=0}^2b_{jj}^{i, \alpha\begin{equation}ta}\partial_{\alpha}v^j(x, t)\partial_{\begin{equation}ta}v^j(x, t)\right|_k\qquad\qquad\qquad \\ \leq\ &C_2\bigg(\frac{||x|-c_jt||\partial v^j(x, t)|_{[\frac{k}2]}|\partial v^j(x, t)|_{k}}{1+|x|+t}+\frac{P_{k}(v^j)(x, t)}{1+|x|+t}\bigg)\langlebel{2.13} \end{aligned} \end{eqnarray} holds for $(x, t)\in \Lambda_j(T)\cap\{(y, s) : s\geq 1\}$, $i,j=1,2,\cdots,m$, and that \begin{eqnarray} \begin{equation}gin{aligned} &\left|\sum_{\alpha,\begin{equation}ta,\gamma=0}^2a_{ii}^{i, \alpha\begin{equation}ta\gamma}\partial_{\gamma}v^i(x, t)\partial_{\alpha}\partial_{\begin{equation}ta}w^i(x, t)\right|_k\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\\ \leq\ &C_2\bigg(\frac{||x|-c_it|(|\partial v^i(x, t)|_{[\frac{k}2]}|\partial w^i(x, t)|_{k+1}+|\partial w^i(x, t)|_{[\frac{k+1}2]}|\partial v^i(x, t)|_{k})}{1+|x|+t}+\langlebel{2.14}\\ \ &\qquad +\frac{P_{k}(v^i, w^i)(x, t)}{1+|x|+t}\bigg), \end{aligned} \end{eqnarray} \begin{eqnarray} \begin{equation}gin{aligned} &\left|\sum_{\alpha,\begin{equation}ta,\gamma=0}^2d_{iii}^{i, \alpha\begin{equation}ta\gamma}\partial_{\alpha}v^i(x, t)\partial_{\begin{equation}ta}v^i(x, t)\partial_{\gamma}v^i(x, t)\right|_k\qquad\qquad\\ \leq \ &C_2\bigg(\frac{||x|-c_it||\partial v^i(x, t)|_{[\frac{k}2]}^2|\partial v^i(x, t)|_{k}}{1+|x|+t}+\frac{|\partial v^i(x, t)|_{[\frac{k}2]}P_{k}(v^i)(x, t)}{1+|x|+t}\bigg),\langlebel{2.15} \end{aligned} \end{eqnarray} and \begin{eqnarray} \begin{equation}gin{aligned} &\sum_{|a|\leq k}\left|\sum_{\alpha,\begin{equation}ta,\gamma=0}^2\big\{\Gamma^a(a_{ii}^{i,\alpha\begin{equation}ta\gamma}\partial_{\gamma}v^i(x, t)\partial_{\alpha}\partial_{\begin{equation}ta}v^i(x, t))-a_{ii}^{i,\alpha\begin{equation}ta\gamma}\partial_{\gamma}v^i(x, t)\partial_{\alpha}\partial_{\begin{equation}ta}\Gamma^a v^i(x, t)\big\}\right|\ \ \ \\ \leq\ &C_2\bigg(\frac{||x|-c_it||\partial v^i(x, t)|_{[\frac{k+1}2]}|\partial v^i(x, t)|_{k}}{1+|x|+t}+\frac{Q_{k}(v^i)(x, t)}{1+|x|+t}\bigg) \qquad \langlebel{2.16} \end{aligned} \end{eqnarray} hold for $(x, t)\in \Lambda_i(T)\cap\{(y, s) : s\geq 1\}$, $i=1,2,\cdots,m$. Here we have set \begin{eqnarray} \nonumber &&P_k(v)(x, t)=|v(x, t)|_{[\frac{k}2]+1}|\partial v(x, t)|_{k}+|\partial v(x, t)|_{[\frac{k}2]}|v(x, t)|_{k+1},\\ \nonumber &&P_k(v, w)(x, t)=|v(x, t)|_{[\frac{k}2]+1}|\partial w(x, t)|_{k+1}+|\partial v(x, t)|_{[\frac{k}2]}|\partial w(x, t)|_{k+1}+\\ &&\nonumber\qquad\qquad\qquad\qquad+|\partial w(x, t)|_{[\frac{k}2]+1}|\partial v(x, t)|_{k}+|\partial w(x, t)|_{[\frac{k}2]+1}|v(x, t)|_{k+1},\\ \nonumber &&Q_k(v)(x, t)=|v(x, t)|_{[\frac{k}2]+1}|\partial v(x, t)|_{k}+|\partial v(x, t)|_{[\frac{k}2]}|\partial v(x, t)|_{k}+\\ \nonumber &&\qquad\qquad\qquad\quad +|\partial v(x, t)|_{[\frac{k}2]+1}|v(x, t)|_{k+1}. \end{eqnarray} \end{prop} For the proof of Proposition 2.1, see Proposition 2.1 in \cite{2006}. \section{$L^{\infty}$-$L^{\infty}$ estimate} In this section, we will show a weighted $L^{\infty}$-$L^{\infty}$ estimate of solutions to inhomogeneous wave equations. It is an improvement of the estimate in Proposition 4.2 in \cite{hk1}. Let $c$ and $T$ be positive constants and $F$ be a function in $C^{\infty}(\mathbb{R}^2\times [0, T))$. 
Then we introduce an operator $L_{c}(F)$; \begin{eqnarray} L_c(F)(x, t)=\frac1{2\pi c}\int_0^t\bigg(\int_{|x-y|<c(t-s)}\frac{F(y, s)}{\sqrt{c^2(t-s)^2-|x-y|^2}} dy\bigg)ds \langlebel{3.1} \end{eqnarray} for $(x, t)\in \mathbb{R}^2\times [0, T)$. We know that $L_{c}(F)$ is the solution to the Cauchy problem; \begin{eqnarray} \nonumber &(\partial_0^2-c^2\triangle)L_{c}(F)=F(x, t),&\qquad\quad (x, t)\in \mathbb{R}^2\times [0, T),\\ \nonumber &L_{c}(x, 0)=\partial_0 L_{c}(x, 0)=0,& \qquad\qquad x\in \mathbb{R}^2. \end{eqnarray} Then we have the following. \begin{equation}gin{prop} Let $c_i$ $(i=1, 2, \cdots, m)$ be the propagation speeds defined in (\ref{1.3}). Let $T>0$ and $F, G ,H \in C^{\infty}(\mathbb{R}^2\times [0, T))$. Choose $\mu>0$, $\nu >0$ and $\rho>0$ arbitrarily. Then, there exist positive constants $\tilde{C}_{\mu}$, $\hat{C}_{\nu}$ and $\dot{C}_{\rho}$ independent of $T$ such that \begin{eqnarray} |L_{c_i}(F)(x, t)|(1+|x|)^{\frac12}\leq \tilde{C}_{\mu}M^{(i)}_{\mu, 0}(F)(x, t) \langlebel{l0} \end{eqnarray} and \begin{eqnarray} \begin{equation}gin{aligned} &|\nabla L_{c_i}(G+H)(x, t)|(1+|x|)^{\frac12}(1+||x|-c_it|)\\ \leq\ \ &\hat{C}_{\nu}M^{(i)}_{\nu, 0}(G)(x, t)+\dot{C}_{\rho}\{1+\log (1+t+|x|)\}M^{(i)}_{0, \rho}(H)(x, t) \langlebel{l1} \end{aligned} \end{eqnarray} hold for $(x, t)\in {\bf R}^2\times [0, T)$. Here $\nabla=(\partial_1, \partial_2)$ and we have set \begin{eqnarray} \nonumber M^{(i)}_{\mu,\nu}(F)(x, t)=\sum_{j=0}^m\sup_{(y, s)\in \atop \Lambda_j(T)\cap D^i(x, t)}\{|y|^{\frac12}z_{\mu, \nu}^{(j)}(|y|, s)|F(y, s)|_1\}, \langlebel{l2} \end{eqnarray} \begin{eqnarray} \nonumber z_{\mu, \nu}^{(j)}(\langlem, s)=(1+s+\langlem)^{1+\mu}(1+|\langlem -c_j s|)^{1+\nu}, \langlebel{l3} \end{eqnarray} \begin{eqnarray} \nonumber D^i(x, t)=\{(y, s)\ :\ |x-y|\leq c_i(t-s)\ \}. \end{eqnarray} \end{prop} \noindent {\bf Proof of Proposition 3.1:}\ \ \ \ By the same argument with the proof of Propositions 4.1 and 4.2 in \cite{hk1}, we obtain (\ref{l0}) and (\ref{l1}) when $H(x, t)\equiv 0$. Therefore, we have only to show (\ref{l1}) when $G(x, t)\equiv 0$. Without loss of generality, we may assume $c_i=1$ and for the sake of simplicity, we denote the constant depending on $\rho$ by $C$ which may change line by line, during this section. Set \begin{eqnarray} \nonumber E_1&=&\{(y, s)\in {\bf R}^2\times[0, t)\ :\ |y|+s>t-r,\ |x-y|<t-s\}\\ \nonumber E_2&=&\{(y, s)\in {\bf R}^2\times[0, t)\ :\ (t-r-1/2)_+<|y|+s<t-r \}\\ \nonumber E_3&=&\{(y, s)\in {\bf R}^2\times[0, t)\ :\ |y|+s<(t-r-1/2)_+ \} \end{eqnarray} with $r=|x|$ and define \begin{eqnarray} \nonumber P_j(H)(x, t)=\frac1{2\pi}\int\! \! \! \int_{E_j}\frac{H(y, s)}{\sqrt{(t-s)^2-|x-y|^2}}\ dyds\qquad \qquad (j=1,2,3), \end{eqnarray} then we have $$ \partial_{\ell} L_{c_i}(H)(x, t)=\sum_{j=1}^3P_j(\partial_{\ell} H)(x, t)\qquad \quad (\ell=1, 2.) $$\\ \indent Firstly, we deal with $P_1(\partial_{\ell} H)(x, t)$. Following the computation made in Section 4 of \cite{hk1}, we find \begin{eqnarray} \nonumber |P_1(\partial_{\ell} H)(x, t)|\leq CM^{(i)}_{0,\rho}(H)\sum_{k=1}^5I_k, \langlebel{l4} \end{eqnarray} where we have set \begin{eqnarray} \nonumber I_1&=&\sum_{j=0}^m \int\! \! \! 
\int_{D_1}\frac{\langlem^{\frac12}}{z_{0, \rho}^{(j)}(\langlem, s)}\left(\int_{-\varepsilonphi}^{\varepsilonphi}K_1(\langlem, \psi; r, t-s)\ d\psi\right)d\langlem ds, \\ \nonumber I_2&=&\sum_{j=0}^m \int_{D'_2}\frac{\langlem^{\frac12}}{z_{0, \rho}^{(j)}(\langlem, s)}\left(\int_{0}^{1}K_2(\langlem, \tau; r, t-s)\ d\tau\right)d\sigma, \\ \nonumber I_3&=&\sum_{j=0}^m \int\! \! \! \int_{D_2}\frac1{\langlem^{\frac12}z_{0, \rho}^{(j)}(\langlem, s)}\left(\int_{0}^{1}K_2(\langlem, \tau; r, t-s)\ d\tau\right)d\langlem ds, \\ \nonumber I_4&=&\sum_{j=0}^m \int\! \! \! \int_{D_2}\frac{\langlem^{\frac12}}{z_{0, \rho}^{(j)}(\langlem, s)}\left(\int_{0}^{1}|\partial_{\langlem}K_2(\langlem, \tau; r, t-s)|\ d\tau\right)d\langlem ds, \\ \nonumber I_5&=&\sum_{j=0}^m \int\! \! \! \int_{D_2}\frac{\langlem^{\frac12}}{z_{0, \rho}^{(j)}(\langlem, s)}\left(\int_{0}^{1}|(\partial_{\langlem}\Psi\cdot K_2)(\langlem, \tau; r, t-s)|\ d\tau\right)d\langlem ds. \end{eqnarray} Here we have used the following notation: \begin{eqnarray} \nonumber K_1(\langlem, \varepsilonphi; r,t)&=&\frac1{2\pi\sqrt{t^2-r^2-\langlem^2+2r\langlem\cos \psi}},\\ \nonumber K_2(\langlem, \tau; r,t)&=&\frac1{2\pi\sqrt{2r\langlem\tau (1-\tau)\{2-(1-\cos \varepsilonphi)\tau\}}},\\ \nonumber \varepsilonphi(\langlem; r,t)&=&\arccos \left(\frac{r^2+\langlem^2-t^2}{2r\langlem}\right),\\ \nonumber \Psi(\langlem, \tau; r,t)&=&\arccos \{1-(1-\cos\varepsilonphi)\tau\},\\ \nonumber D_1&=&\{(\langlem, s)\in (0, \infty)\times (0, t)\ :\ \langlem_-<\langlem<\langlem_-+\delta \ \ \ \mbox{or}\ \ \ \langlem_+-\delta<\langlem<\langlem_+\},\\ \nonumber D_2&=&\{(\langlem, s)\in (0, \infty)\times (0, t)\ :\ \langlem_-+\delta<\langlem<\langlem_+-\delta \},\\ \nonumber D'_2&=&\{(\langlem, s)\in (0, \infty)\times (0, t)\ :\ \langlem=\langlem_-+\delta\ \ \ \mbox{or}\ \ \ \langlem=\langlem_+-\delta \}\\ \nonumber \end{eqnarray} with $\langlem_-=|t-s-r|$, $\langlem_+=t-s+r$ and $\delta=\min\{r,1/2\}$. Thus we aim to show \begin{eqnarray} I_k\leq\frac{C\{1+\log(1+t+r)\}}{(1+r)^{\frac12}(1+|t-r|)}\qquad\quad (k=1,2,3,4,5).\langlebel{l5} \end{eqnarray} In oder to show (\ref{l5}), we use the following estimates which are proved in Lemma 4.1 in \cite{hk2}.\\ \begin{equation}gin{lem} \ \ \ Let $(\langlem, s)\in D_1\cup D_2$. then we have \begin{eqnarray} \int_{-\varepsilonphi}^{\varepsilonphi}K_1\ d\psi=2\int_0^1K_2\ d\tau &\leq&\frac{C}{(r\langlem)^{\frac12}}\log\left(2+\frac{r\langlem h(t-s-r)}{(\langlem-\langlem_-)(\langlem_+-\langlem)}\right),\langlebel{l6}\\ \int_0^1|\partial_{\langlem}K_2|\ d\tau &\leq&\frac{C}{(r\langlem)^{\frac12}(\langlem+s+r-t)},\langlebel{l7}\\ \int_0^1|\partial_{\langlem}\Psi\cdot K_2|\ d\tau &\leq&\frac{C}{(r\langlem)^{\frac12}}\left(\frac1{(\langlem_+-\langlem)^{\frac12 }(\langlem-\langlem_-)^{\frac12}}+\frac1{(\langlem^2-\langlem_-^2)^{\frac12}}\right),\langlebel{l8} \end{eqnarray} where, $h(p)=1$ for $p>0$ and $h(p)=0$ for $p\leq 0$. \end{lem} First we evaluate $I_1$. When $t-r-s>0$ and $\langlem>\langlem_+-\delta$, we have \begin{eqnarray} \nonumber \log\left(2+\frac{r\langlem}{(\langlem-\langlem_-)(\langlem_++\langlem)}\right)\leq \log 3, \end{eqnarray} since $\langlem-\langlem_->r$. Moreover, we find that \begin{eqnarray} \nonumber &z_{0, \rho}^{(j)}(\langlem, s)\geq z_{0, \rho}^{(j)}(\langlem_+,s)&\qquad \quad \mbox{for}\qquad \langlem_+-\delta<\langlem<\langlem_+\\ \nonumber &z_{0, \rho}^{(j)}(\langlem, s)\geq C z_{0, \rho}^{(j)}(\langlem_-,s)&\qquad \quad\mbox{for}\qquad \langlem_-<\langlem<\langlem_-+\delta. 
\end{eqnarray} Hence, by (\ref{l6}), we get \begin{eqnarray} I_1\leq \frac{C}{r^{\frac12}}\sum_{j=0}^m(A_{1,j}+B_{1, j}+C_{1, j}),\langlebel{l9} \end{eqnarray} where we have set \begin{eqnarray} \nonumber A_{1,j}&=&\int_0^t\left(\int_{\langlem_+-\delta}^{\langlem_+}\frac1{z_{0, \rho}^{(j)}(\langlem_+, s)}\ d\langlem\right)ds,\\ \nonumber B_{1,j}&=&\int_0^{(t-r)_+}\left(\int^{\langlem_-+\delta}_{\langlem_-}\frac1{z_{0, \rho}^{(j)}(\langlem_-, s)}\log\left(2+\frac{\langlem}{\langlem-\langlem_-}\right)d\langlem\right)ds,\\ \nonumber C_{1,j}&=&\int_{(t-r)_+}^t\left(\int^{\langlem_-+\delta}_{\langlem_-}\frac1{z_{0, \rho}^{(j)}(\langlem_-, s)}\ d\langlem\right)ds. \end{eqnarray} It follows that \begin{eqnarray} \nonumber A_{1, j}&=&\int_0^t\left(\int_{\langlem_+-\delta}^{\langlem_+}\frac1{(1+s+\langlem_+)(1+|\langlem_+-c_js|)^{1+\rho}}\ d\langlem\right)ds\\ &\leq& \frac{C\delta}{1+t+r}\int_{-\infty}^{\infty}\frac1{(1+|(1+c_j)s-t-r|)^{1+\rho}}\ ds\langlebel{l10}\\ \nonumber &\leq& \frac{C\delta}{1+|t-r|}. \end{eqnarray} When we deal with $B_{1,j}$, we may assume $t>r$, since $B_{1, j}=0$ if $t\leq r$. Integrating by parts, we find \begin{eqnarray} \nonumber \int^{\langlem_-+\delta}_{\langlem_-}\log\left(2+\frac{\langlem}{\langlem-\langlem_-}\right)d\langlem &=& \nonumber \int^{\langlem_-+\delta}_{\langlem_-}\left\{\log(3\langlem-2\langlem_-)-\log(\langlem-\langlem_-)\right\}d\langlem\\ \nonumber &=&\left[\frac{3\langlem-2\langlem_-}3\log (3\langlem-2\langlem_-)-(\langlem-\langlem_-)\log (\langlem-\langlem_-)\right]_{\langlem_-}^{\langlem_-+\delta}\\ \nonumber &=&\frac{\langlem_-+3\delta}{3}\log(\langlem_-+3\delta)-\delta\log\delta-\frac{\langlem_-}{3}\log\langlem_-\\ \nonumber &=&\frac{\langlem_-}3\log \left(1+\frac{3\delta}{\langlem_-}\right)+\delta\log (\langlem_-+3\delta)-\delta\log{\delta}\\ \nonumber &\leq& \delta+\delta\log (2+|t-r|)+\delta^{\frac12}, \end{eqnarray} where we have used $0<\delta<1/2$ and the facts \begin{eqnarray} \nonumber 0\leq &\displaystyle{\frac{\log (1+x)}{x}}&< 1\qquad\quad \mbox{for}\qquad x> 0,\\ \nonumber 0\leq &-\delta^{\frac12}\log \delta&<1\qquad\quad \mbox{for}\qquad 0<\delta<\frac12. \end{eqnarray} Hence we have \begin{eqnarray} \begin{equation}gin{aligned} B_{1, j}\ \ \leq& \ \ \frac{C\delta^{\frac12}\log (2+|t-r|)}{1+|t-r|} \int_{-\infty}^{\infty}\frac1{(1+|(1+c_j)s-t-r|)^{1+\rho}}\ ds\\ \leq&\ \ \frac{C\delta^{\frac12}\{1+\log(1+t+r)\}}{1+|t-r|}.\langlebel{l11} \end{aligned} \end{eqnarray} When $s>(t-r)_+$, we have \begin{eqnarray} \nonumber s+\langlem_-=2s-t+r\geq |t-r|,\qquad s+\langlem_-\geq C|\langlem-c_j s|=C|(1-c_j)s-t+r|, \end{eqnarray} which imply \begin{eqnarray} \nonumber z_{0, \rho}^{(j)}(\langlem_-, s)&\geq& C(1+|t-r|)(1+|(1-c_j)s-t+r|)^{1+\rho}\qquad \mbox{if}\quad j\ne i\\ \nonumber z_{0, \rho}^{(i)}(\langlem_-, s)&\geq& C(1+|t-r|)^{1+\rho}(1+2s-t+r). \end{eqnarray} Therefore, we get \begin{eqnarray} \begin{equation}gin{aligned} C_{1,j}\ \ \leq &\ \ \frac{C\delta}{1+|t-r|}\int_{-\infty}^{\infty} \frac{1}{(1+|(1-c_j)s-t+r|)^{1+\rho}}\ ds\\ \leq\ \ &\frac{C\delta}{1+|t-r|}\qquad \qquad \qquad \mbox{if}\qquad j\ne i \langlebel{l12} \end{aligned} \end{eqnarray} \begin{eqnarray} \begin{equation}gin{aligned} C_{1,i}\ \ \leq&\ \ \frac{C\delta}{(1+|t-r|)^{1+\rho}}\int_{(t-r)_+}^{t}\frac1{1+2s+t-r}\ ds\\ \leq&\ \ \frac{C\delta\{1+\log(1+t+r)\}}{1+|t-r|}.\langlebel{l13} \end{aligned} \end{eqnarray} Summing up (\ref{l9}), (\ref{l10}), (\ref{l11}), (\ref{l12}) and (\ref{l13}), we obtain (\ref{l5}) for $k=1$, since $\delta /r\leq 2/(1+r)$. 
\\ \indent In the remainder of the proof of (\ref{l4}), we assume $r\geq 1/2$ so that $\delta=1/2$, because $D_2$ is the empty set, if $0<r<1/2$.\\ \indent Since $\langlem=\langlem_-+1/2$ or $\langlem=\langlem_+-1/2$ for $(\langlem, s)\in D_2'$, we obtain (\ref{l5}) for $k=2$ analogously to the previous argument.\\ \indent Next we evaluate $I_3$. Note that $\langlem >\langlem_-+1/2$ for $(\langlem, s)\in D_2$ and that \begin{eqnarray} \nonumber \log\left(2+\frac{r\langlem}{(\langlem-\langlem_-)(\langlem_++\langlem)}\right)\leq \log (2+2\langlem) \end{eqnarray} for $\langlem>\langlem_-+1/2$. Therefore we get from (\ref{l6}) \begin{eqnarray} \nonumber r^{\frac12}I_3&\leq& C\sum_{j=0}^m\int\! \! \! \int_{D_2}\frac{\log(2+2\langlem)}{(1+\langlem)z_{0, \rho}^{(j)}(\langlem, s)}d\langlem ds\\ \nonumber &\leq& C\{1+\log (1+t+r)\}\sum_{j=0}^m A_{3, j}, \end{eqnarray} where we have set \begin{eqnarray} \nonumber A_{3, j}&=&\int\! \! \! \int_{D_2}\frac{1}{(1+s+\langlem)^2(1+|\langlem-c_js|)^{1+\rho}} d\langlem ds\qquad (1\leq j\leq m),\\ \nonumber A_{3, 0}&=&\int\! \! \! \int_{D_2}\frac{1}{(1+s+\langlem)(1+\langlem)^{2+\rho}} d\langlem ds. \end{eqnarray} When $1\leq j\leq m$, changing variables by \begin{eqnarray} \alpha=\langlem+s\qquad \mbox{and}\qquad \begin{equation}ta=\langlem-s,\langlebel{l14} \end{eqnarray} we have \begin{eqnarray} \nonumber A_{3, j}&\leq&\frac12\int_{|t-r|}^{t+r}\frac1{(1+\alpha)^2}\left(\int_{-|t-r|}^{\alpha}\frac1{(1+|\psi_j(\alpha,\begin{equation}ta)|)^{1+\rho}}\ d\begin{equation}ta\right)d\alpha\\ \nonumber &\leq&\frac{C}{1+|t-r|}, \end{eqnarray} where \begin{eqnarray} \nonumber 2\psi_j(\alpha, \begin{equation}ta)=(c_j+1)\begin{equation}ta-(c_j-1)\alpha. \end{eqnarray} On the other hand, when $j=0$, we have \begin{eqnarray} \nonumber A_{3, 0}\leq \frac{1}{1+|t-r|}\int\! \! \! \int_{D_2}\frac1{(1+\langlem)^{2+\rho}}\ d\langlem ds\leq \frac{C}{1+|t-r|}. \end{eqnarray} Therefore we have (\ref{l5}) for $k=3$. \\ \indent Next we evaluate $I_4$. Since $\langlem+s+r-t\geq 1/2$ for $\langlem\geq\langlem_-+1/2$, we get from (\ref{l7}) \begin{eqnarray} \nonumber r^{\frac12}I_4&\leq&C\sum_{j=0}^m\int\! \! \! \int_{D_2}\frac1{z_{0, \rho}^{(j)}(\langlem, s)(\langlem+s+r-t+1)}\ d\langlem ds\\ \nonumber &\leq&\frac{C}{(1+|t-r|)}\int_{|t-r|}^{t+r}\frac1{\alpha+r-t+1}\left(\int_{-|r-t|}^{\alpha}\frac1{(1+|\psi_j(\alpha,\begin{equation}ta)|)^{1+\rho}}\ d\begin{equation}ta\right)d\alpha\\ \nonumber &\leq& \frac{C\{1+\log (1+t+r)\}}{1+|t-r|}, \end{eqnarray} which yields (\ref{l5}) for $k=4$. \\ \indent Next we evaluate $I_5$. It follows from $\langlem_-+1/2\leq \langlem\leq \langlem_+-1/2$ that \begin{eqnarray} \nonumber 3(\langlem_+-\langlem)\geq \langlem_+-\langlem+1,\quad 3(\langlem-\langlem_-)\geq\langlem-\langlem_-+1,\quad 9(\langlem^2-\langlem_-^2)\geq (\langlem+1)^2-\langlem_-^2. \end{eqnarray} Hence we get from (\ref{l8}) \begin{eqnarray} \nonumber r^{\frac12}I_5\leq C\sum_{j=0}^m(A_{5, j}+B_{5, j}+C_{5, j}), \end{eqnarray} where we have set \begin{eqnarray} \nonumber A_{5, j}&=&\int\! \! \! \int_{D_2\cap \{t-r\leq s\}}\frac1{z_{0, \rho}^{(j)}(\langlem,s)(t+r-s-\langlem)^{\frac12}(\langlem-t+s+r)^{\frac12}} \ d\langlem ds,\\ \nonumber B_{5, j}&=&\int\! \! \! \int_{D_2\cap \{t-r\geq s\}}\frac1{z_{0, \rho}^{(j)}(\langlem,s)(t+r-s-\langlem+1)^{\frac12}(\langlem+t-s-r+1)^{\frac12}} \ d\langlem ds,\\ \nonumber C_{5, j}&=&\int\! \! \! \int_{D_2}\frac1{z_{0, \rho}^{(j)}(\langlem,s)(\langlem-t+s+r+1)^{\frac12}(\langlem+t-s-r+1)^{\frac12}} \ d\langlem ds. 
\end{eqnarray} Changing variables by (\ref{l14}), we have \begin{eqnarray} \nonumber A_{5, j}&\leq&\frac{C}{1+|t-r|}\int_{|t-r|}^{t+r}\frac1{(t+r-\alpha)^{\frac12}(\alpha-t+r)^{\frac12}} \left(\int_{-|r-t|}^{\alpha}\frac1{(1+|\psi_j(\alpha,\begin{equation}ta)|)^{1+\rho}} \ d\begin{equation}ta\right)d\alpha\\ \nonumber &\leq&\frac{C}{1+|t-r|}\int_{t-r}^{t+r}\frac1{(t+r-\alpha)^{\frac12}(\alpha-t+r)^{\frac12}}\ d\alpha\\ \nonumber &\leq&\frac{C}{1+|t-r|}. \end{eqnarray} Changing variables by (\ref{l14}) and then by $\sigma=\psi_j(\alpha, \begin{equation}ta)$, we get \begin{eqnarray} \nonumber B_{5, j} &\leq&\frac12\int_{|t-r|}^{t+r}\frac1{(1+\alpha)(t+r-\alpha+1)^{\frac12}}\times\\ \nonumber &&\qquad\qquad\qquad\times\left(\int_{\gamma_j}^{\alpha}\frac1{(1+|\sigma|)^{1+\rho}\{1+\frac2{c_j+1}(\sigma-\gamma_j)\}^{\frac12}}\ d\sigma\right)d \alpha, \end{eqnarray} where \begin{eqnarray} \nonumber 2\gamma_j=(1-c_j)\alpha+(1+c_j)(r-t). \end{eqnarray} It has been shown in Lemma 3.13 in \cite{ky} that \begin{eqnarray} \nonumber \int_{\gamma_j}^{\alpha}\frac1{(1+|\sigma|)^{1+\rho}\{1+\frac2{c_j+1}(\sigma-\gamma_j)\}^{\frac12}}\ d\sigma \leq \frac{C}{(1+|\gamma_j|)^{\frac12}}. \end{eqnarray} Therefore, if $j\ne i$, we have \begin{eqnarray} \nonumber B_{5, j}&\leq&\frac{C}{(1+|t-r|)}\int_{|t-r|}^{t+r}\frac1{(t+r-\alpha+1)^{\frac12}(1+|\gamma_j|)^{\frac12}}\ d\alpha\\ \nonumber &\leq&\frac{C}{(1+|t-r|)}\int_{|t-r|}^{t+r}\left(\frac1{t+r-\alpha+1}+\frac1{1+|\gamma_j|}\right)\ d\alpha\\ \nonumber &\leq&\frac{C\{1+\log (1+t+r)\}}{1+|t-r|}. \end{eqnarray} On the other hand, if $j=i$, since $\gamma_i=r-t$, we have \begin{eqnarray} \nonumber B_{5, i}&\leq&\frac{C}{(1+|t-r|)}\int_{|t-r|}^{t+r}\frac1{(1+\alpha)^{\frac12}(t+r-\alpha+1)^{\frac12}}\ d\alpha\\ \nonumber &\leq&\frac{C}{(1+|t-r|)}\int_{|t-r|}^{t+r}\left(\frac1{1+\alpha}+\frac1{t+r-\alpha+1}\right)\ d\alpha\\ \nonumber &\leq&\frac{C\{1+\log (1+t+r)\}}{1+|t-r|}. \end{eqnarray} Since we can deal with $C_{5, j}$ similarly to $B_{5, j}$, we obtain (\ref{l5}) for $k=5$. \\ \indent Secondly, we deal with $P_2(\partial_{\ell} H)$ We can assume $t>r$, since otherwise $E_2$ is empty. Switching to polar coordinates, \begin{eqnarray} x=(r\cos \theta, r\sin \theta),\qquad y=\langlembda\xi=(\langlem \cos (\theta+\psi), \langlem\sin (\theta+\psi)),\langlebel{l15} \end{eqnarray} we get \begin{eqnarray} \nonumber P_2(\partial_{\ell} H)(x, t)=\int_0^{t-r}\left(\int_{(\langlem-\frac12)_+}^{\langlem_-}\langlem\partial_{\ell} H(\langlem\xi, s)\left(\int_{-\pi}^{\pi}K_1(\langlem,\psi;r, t-s)\ d\psi\right)d\langlem\right)ds \end{eqnarray} By Proposition 5.2 in \cite{ay}, we have \begin{eqnarray} \int_{-\pi}^{\pi}K_1(\langlem,\psi;r, t-s)\ d\psi\leq \frac{C}{(\langlem+\langlem_-)^{\frac12}(\langlem_+-\langlem)^{\frac12}}\log\left(2+\frac{r\langlem}{(\langlem_--\langlem)(\langlem_++\langlem)}\right) \langlebel{l16} \end{eqnarray} for $0<s<t-r$ and $0<\langlem<\langlem_-$. 
It follows from the fact \begin{eqnarray} \frac1{\langlem_+-\langlem}\leq \frac{2}{(r+1)(\langlem_--\langlem)}\qquad\quad\mbox{for}\qquad 0<s<t-r, \ \ \ \langlem_--\frac12\leq\langlem<\langlem_-\langlebel{l17} \end{eqnarray} that \begin{eqnarray} (r+1)^{\frac12}|P_2(\partial_{\ell} H)(x, t)|\leq CM^{(i)}_{0, \rho}(F)\sum_{j=0}^m A_{6, j},\langlebel{l18} \end{eqnarray} where we have set \begin{eqnarray} \nonumber A_{6, j}=\int_0^{t-r}\left(\int_{(t-r-\frac12)_+}^{t-r}\frac{\log\left(2+\displaystyle{\frac{\langlem+s}{\langlem_--\langlem}}\right)}{(1+s+\langlem)(1+|\langlem-c_js|)^{1+\rho}(\langlem_-\langlem)^{\frac12}}\ d\langlem\right)ds. \end{eqnarray} Changing variables by (\ref{l14}), we get \begin{eqnarray} \nonumber A_{6, j}&\leq&C\int_{t-r-\frac12}^{t-r}\frac{\log\left(2+\displaystyle{\frac{\alpha}{t-r+\alpha}}\right)}{(1+\alpha) (t-r-\alpha)^{\frac12}}\left(\int_{-\infty}^{\infty}\frac1{(1+|\psi_j|)^{1+\rho}}\ d\begin{equation}ta\right) d\alpha\\ \nonumber &\leq& \frac{C}{t-r+1/2}\int_{t-r-\frac12}^{t-r}\frac{\log(2(t-r)-\alpha)-\log (t-r-\alpha)}{(t-r-\alpha)^{\frac12}}\ d\alpha. \end{eqnarray} Here \begin{eqnarray} \nonumber &&\int_{t-r-\frac12}^{t-r}\frac{\log(2(t-r)-\alpha)-\log (t-r-\alpha)}{(t-r-\alpha)^{\frac12}}\ d\alpha\\ \nonumber &=&\bigg[-2(t-r-\alpha)(\log(2(t-r)-\alpha)-\log(t-r-\alpha))\bigg]_{t-r-\frac12}^{t-r}\\ \nonumber &&\qquad \qquad -2\int_{t-r-\frac12}^{t-r}\left(\frac{(t-r-\alpha)^{\frac12}}{2(t-r)-\alpha}-\frac{(t-r-\alpha)^{\frac12}}{t-r-\alpha}\right) d\alpha\\ \nonumber &=&\sqrt{2}\log\bigg(\frac12+t-r\bigg)+\int_{t-r-\frac12}^{t-r}\frac{t-r}{(2(t-r)-\alpha)(t-r-\alpha)^{\frac12}}\ d\alpha\\ \nonumber &\leq& C\log\bigg(\frac32+2t\bigg)+\frac{t-r}{t-r+\frac12}\int_{t-r-\frac12}^{t-r}\frac{1}{(t-r-\alpha)^{\frac12}}\ d\alpha\\ \nonumber &\leq& C\bigg(1+\log\bigg(\frac32+2t\bigg)\bigg) \end{eqnarray} implies \begin{eqnarray} A_{6, j}\leq\frac{C\{1+\log (1+t+r)\}}{1+|t-r|}.\langlebel{l19} \end{eqnarray} Thus, combining (\ref{l18}) and (\ref{l19}), we obtain \begin{eqnarray} \nonumber (1+r)^{\frac12}(1+|t-r|)|P_2(\partial_{\ell} H)(x, t)|\leq C\{1+\log (1+t+r)\}M^{(i)}_{0, \rho}(H). \end{eqnarray} \indent Thirdly, we deal with $P_3(\partial_{\ell} H)$. We can assume $t>r+1/2$, since otherwise $E_3$ is empty. Integrating by parts in $y$ and switching to polar coordinates as in (\ref{l16}), we get \begin{eqnarray} \nonumber P_3(\partial_{\ell} H)(x, t)&=&\int_{0}^{t-r-\frac12}\left(\int_0^{t-r-\frac12}\left(\int_{-\pi}^{\pi}\langlem H(\langlem\xi, s)K_3(\langlem,\psi;x, t-s)d\psi\right)d\langlem\right)ds\\ \nonumber &&+\int_0^{t-r-\frac12}\left(\int_{-\pi}^{\pi}\langlem \xi_l H(\langlem\xi, s)K_1(\langlem, \psi; x, t-s)\bigg|_{\langlem=t-s-r-\frac12}d\psi\right)ds\\ \nonumber &\equiv&J_1+J_2, \end{eqnarray} where we have set \begin{eqnarray} \nonumber K_3(\langlem, \psi; x, t)=\frac{-(x_{\ell}-\langlem\xi_{\ell})}{2\pi (t^2-r^2-\langlem^2+2r\langlem \cos\psi)^{\frac32}}. \end{eqnarray} We see from (\ref{l16}) and (\ref{l17}) that \begin{eqnarray} \nonumber J_2&\leq& \frac{CM^{(i)}_{0, \rho}(H)\{1+\log (1+t+r)\}}{(r+1)^{\frac12}(t-r+1/2)}\sum_{j=0}^m\int_0^{t-r-\frac12}\frac{1}{(1+|t-r-(c_j+1)s-1/2|)^{1+\rho}}\ ds\\ \nonumber &\leq&\frac{C\{1+\log (1+t+r)\}M^{(i)}_{0, \rho}(H)}{(1+r)^{\frac12}(1+|t-r|)}. \end{eqnarray} As shown in the proof of Proposition 4.2 in \cite{hk2}, we have \begin{eqnarray} \nonumber \int_{-\pi}^{\pi}|K_3(\langlem, \psi;r, t-s)|d\psi\leq \frac{C}{(\langlem_--\langlem)(\langlem_-+\langlem)^{\frac12}(\langlem_+-\langlem)^{\frac12}}. 
\end{eqnarray} Therefore, since $r+1\leq 2(\langlem_+-\langlem)$ for $\langlem<t-r-s-1/2$, we get from (\ref{l14}) \begin{eqnarray} \nonumber &&J_1\\ \nonumber &\leq&\frac{CM^{(i)}_{0, \rho}(H)}{(1+r)^{\frac12}}\sum_{j=0}^m\int_{0}^{t-r-\frac12}\left(\int_0^{t-r-s-\frac12}\frac1{(1+s+\langlem)(\langlem_--\langlem)(1+|\langlem-c_js|)^{1+\rho}}\ d\langlem\right)ds\\ \nonumber &\leq&\frac{CM^{(i)}_{0, \rho}(H)}{(1+r)^{\frac12}}\sum_{j=0}^m\left\{\int_{\frac{t-r}2}^{t-r-\frac12}\left(\int_{-\alpha}^{\alpha}\frac1{(1+\alpha)(t-r-\alpha)(1+|\psi_j(\alpha, \begin{equation}ta)|)^{1+\rho}}\ d\begin{equation}ta\right)d\alpha+\right.\\ \nonumber &&\qquad\qquad\qquad\qquad+\left.\int^{\frac{t-r}2}_{0}\left(\int_{-\alpha}^{\alpha}\frac1{(1+\alpha)(t-r-\alpha)(1+|\psi_j(\alpha, \begin{equation}ta)|)^{1+\rho}}\ d\begin{equation}ta\right)d\alpha\right\}\\ \nonumber &\leq&\frac{C\{1+\log (1+t+r)\}M^{(i)}_{0, \rho}(H)}{(1+r)^{\frac12}(1+|t-r|)}. \end{eqnarray} This completes the proof of (\ref{l1}). \\ \section{$a$ $priori$ estimates} In this section, we derive the $a$ $priori$ estimate (\ref{2.2}) assuming (\ref{2.1}). For this purpose, we introduce a notation. Let the assumptions of Lemma 2.1 be fulfilled and let $p(x, t)$ and $q(x, t)$ be functions defined on a set $D\subset \mathbb{R}^2\times [0, T)$. Then we denote $$ p(x, t)=O^*(q(x, t))\qquad\quad \mbox{in}\quad D, $$ when there exist constants $K=K(B)>0$ and $\varepsilonepsilon_0=\varepsilonepsilon_0(J, B)>0$ such that, if (\ref{2.1}) holds for $0<\varepsilonepsilon<\varepsilonepsilon_0$, then $$ |p(x, t)|\leq K q(x, t) \qquad\mbox{for}\quad (x, t)\in D $$ for the same $\varepsilonepsilon$. We can easily show that if $p_1(x, t)=O^*(q(x, t))$ and $p_2(x, t)=O^*(q(x, t))$, then $p_1(x, t)+p_2(x, t)=O^*(q(x, t))$. Then our task to prove Lemma 2.1 is showing \begin{eqnarray} [\partial u(x, t)]_{k}=O^*(\varepsilonepsilon)\quad\mbox{and}\quad \langle u(x, t)\rangle_{k+1}=O^*(\varepsilon)\qquad \quad \mbox{in}\quad \mathbb{R}^2\times [0, T_B).\langlebel{4.1} \end{eqnarray} Also we will express constants determined independently of $J$ and $T$ by $K_n \ (n\in \mathbb{N})$ in the following argument. \\ \indent Now we aim to show (\ref{4.1}). By (\ref{3.1}), we can write \begin{eqnarray} u^i(x, t)=u_0^i(x, t)+L_{c_i}(F^i)(x, t),\langlebel{4.2a} \end{eqnarray} where $u^i_0(x, t)$ is the solution to (\ref{1.13}), (\ref{1.14}) and satisfies for any nonnegative integer $p$, \begin{eqnarray} [[\partial u_0^i(t)]]_{p}+\langle\langle u_0^i(t)\rangle\rangle_{p+1}\leq C_0\varepsilonepsilon\qquad \mbox{for}\quad 0\leq t<\infty,\langlebel{4.2} \end{eqnarray} with some constant $C_0=C_0(f^i, g^i, p)>0$. Then, we have for a multi-index $a=(a_0, a_1, \cdots,a_4)$, \begin{eqnarray} \Gamma^a L_{c_i}(F^i)=v_a^i+\sum_{|b|\leq |a|}C_{a, b}L_{c_i}(\Gamma^b F^i),\langlebel{4.00} \end{eqnarray} with some constants $C_{a, b}$. Here, $v^i_a=v_a^i(x, t)$ is the solution to the Cauchy problem; \begin{eqnarray} \nonumber \Box_i v_a^i=0\qquad \qquad\qquad\mbox{in}\quad \mathbb{R}^2\times [0, \infty),\\ \nonumber v_a^i(x, 0)=\varepsilonepsilon^2\phi_a^i(x) ,\quad \partial_0v_a^i(x, 0)=\varepsilonepsilon^2 \psi_a^i(x)\qquad \mbox{in}\quad \mathbb{R}^2, \end{eqnarray} with functions $\phi^i_a,\ \psi_a^i\in C_0^{\infty}(\mathbb{R}^2)$ determined by $(f^j,g^j)_{j=1, 2, \cdots,m}$ suitably. 
Indeed, by the commutation relations of $\Gamma_{\alpha}$ and $\Box_i$ and by the definition of $L_{c_i}(F^i)$, we have \begin{eqnarray} \nonumber \Box_i \Gamma_{\alpha} L_{c_i}(F^i)=\Gamma_{\alpha}\Box_iL_{c_i}(F^i)+2\delta_{\alpha 4}\Box_iL_{c_i}(F^i)=\Gamma_{\alpha}F^i+2\delta_{\alpha 4}F^i \end{eqnarray} and \begin{eqnarray} \Gamma_{\alpha} L_{c_i}(F^i)(x, 0)=0, \qquad\partial_0\Gamma_{\alpha} L_{c_i}(F^i)(x, 0)=\delta_{\alpha 0}F^i(x, 0). \end{eqnarray} Since $F^i$ is quadratic, we can denote $F^i(x, 0)=\varepsilonepsilon^2 \psi^i(x)\in C^{\infty}_0(\mathbb{R}^2)$. Hence we have \begin{eqnarray} \Gamma_{\alpha}L_{c_i}(F^i)=v^i+L_{c_i}(\Gamma_{\alpha}F^i+2\delta_{\alpha 4}F^i)=v^i+L_{c_i}(\Gamma_{\alpha}F^i)+2\delta_{\alpha 4}L_{c_i}(F^i) \end{eqnarray} where $v^i=v^i(x, t)$ is the solution to the Cauchy problem; \begin{eqnarray} \nonumber \Box_i v^i=0\qquad \qquad\qquad\mbox{in}\quad \mathbb{R}^2\times [0, \infty),\\ \nonumber v^i(x, 0)=0 ,\quad \partial_0v^i(x, 0)=\delta_{\alpha 0}\varepsilonepsilon^2 \psi^i(x)\qquad \mbox{in}\quad \mathbb{R}^2. \end{eqnarray} This implies (\ref{4.00}) when $|a|=1$. Repeating the above argument, we can obtain (\ref{4.00}) for any $a$. \\ Note that, as with (\ref{4.2}), we have for any nonnegative integer $p$, \begin{eqnarray} \sum_{|b|\leq p}[[\partial v_b^i(t)]]_{0}+\sum_{|c|\leq p+1}\langle\langle v_c^i(t)\rangle\rangle_{0}\leq C'_0\varepsilonepsilon^2\qquad \mbox{for}\quad 0\leq t<\infty,\langlebel{4.222} \end{eqnarray} with some constant $C'_0=C'_0(f^1,\cdots, f^m, g^1,\cdots, g^m,p)>0$. It follows from (\ref{4.2a}) and (\ref{4.00}) that \begin{eqnarray} \Gamma^a u^i(x, t)=\Gamma^a u^i_0(x, t)+v^i_a(x, t)+\sum_{|b|\leq |a|}C_{a, b}L_{c_i}(\Gamma^b F^i)(x, t)\langlebel{4.01} \end{eqnarray} Therefore, our task for the proof of (\ref{4.1}) is to show \begin{eqnarray} \sum_{|b|\leq k} [\partial L_{c_i}(\Gamma^b F^i)(x, t)]_{0}+\sum_{|c|\leq k+1}\langle L_{c_i}(\Gamma^c F^i)(x, t)\rangle_{0}=O^*(\varepsilon)\qquad \mbox{in}\quad \mathbb{R}^2\times [0, T_B).\langlebel{4.3} \end{eqnarray} We will show (\ref{4.3}) by dividing the area into some parts.\\ \indent Firstly, we assume $\displaystyle{0\leq t\leq 1/{\varepsilonepsilon}}$. In this region, we can show the sharper estimates; \begin{eqnarray} \begin{equation}gin{aligned} &\sum_{|b|\leq k} [[\partial L_{c_i}(\Gamma^b F^i)(x, t)]]_{0}+\sum_{|c|\leq k+1}\langle\langle L_{c_i}(\Gamma^c F^i)(x, t)\rangle\rangle_{0}\\ =&\ O^* (\varepsilon^{\frac54})\quad\qquad \mbox{in}\qquad\quad \mathbb{R}^2\times \big[0, 1/{\varepsilonepsilon}\big].\langlebel{4.3a} \end{aligned} \end{eqnarray} For this purpose, we prepare two propositions with respect to the energy. \\ \begin{equation}gin{prop} Let $u(x, t)\in C^2(\mathbb{R}^2\times [0, T))$ be a function satisfying $||u||_{2, T}<\infty$. Then, there exists a constant $C_3>0$ such that \begin{eqnarray} |x|^{\frac12}|u(x, t)|\leq C_3||u(t)||_{2} \langlebel{4.4} \end{eqnarray} holds for $(x, t)\in \mathbb{R}^2\times [0, T)$. \end{prop} \begin{equation}gin{prop} Let $\displaystyle{u(x, t)=(u^1(x, t),u^2(x, t),\cdots,u^m(x, t))}$ $\displaystyle{\in (C^{\infty}(\mathbb{R}^2\times [0, T)))^m}$ be the solution to (\ref{1.1}) and (\ref{1.2}) and also let $\ell$ be a positive integer. Assume (\ref{1.7}). 
Then there exist constants $\delta>0$ and $C_4=C_4(\ell)>0$ such that if $\displaystyle{|\partial u|_{[\frac{\ell+1}2], T}<\delta}$ holds, then \begin{eqnarray} ||\partial u(t)||_{\ell}\leq C_4||\partial u(0)||_{\ell}\exp\bigg(C_4\int_0^t|\partial u(s)|_{[\frac{\ell +1}2]}ds\bigg) \langlebel{4.5} \end{eqnarray} holds for $0\leq t<T$. \end{prop} We omit the proof of the propositions. For the details of Proposition 4.1, see \cite{kla1}. On the other hand, we get Proposition 4.2 by the usual energy argument for the quasilinear wave equations. \\ By (\ref{2.1}) and $k\geq 21$, we have $|\partial u|_{[\frac{k+5}2], \frac1{\varepsilonepsilon}}\leq |\partial u|_{k, \frac1{\varepsilonepsilon}}\leq J\varepsilonepsilon<\delta$ for $0<\varepsilonepsilon <\varepsilonepsilon_0$, if we take $\varepsilonepsilon_0$ to be $J\varepsilonepsilon_0<\delta$. Hence, by (\ref{2.1}) and (\ref{4.5}), we have \begin{eqnarray} \nonumber ||\partial u(t)||_{k+4}&=& O\bigg(C_4||\partial u(0)||_{k+4}\exp\bigg(C_4\int_0^{\frac1{\varepsilonepsilon}}|\partial u(s)|_{[\frac{k+5}2]}ds\bigg)\bigg)\\ &=&O^*\bigg(\varepsilonepsilon\exp\bigg(\int_0^{\frac1{\varepsilonepsilon}}\frac{C_4J\varepsilonepsilon}{\sqrt{1+s}}ds\bigg)\bigg)\langlebel{4.6}\\ \nonumber &=& O^*\left(\varepsilonepsilon\exp(4C_4J\varepsilonepsilon^{\frac12})\right)\\ \nonumber &=& O^*(\varepsilonepsilon) \qquad\qquad \mbox{in}\quad \big[0,\ 1/{\varepsilonepsilon}\big], \end{eqnarray} if we take $\varepsilonepsilon_0$ to be $\varepsilonepsilon_0\leq 1$ and $J^2\varepsilonepsilon_0\leq 1$. Therefore, by (\ref{1.85}), (\ref{2.1}), (\ref{l0}), (\ref{4.4}) and (\ref{4.6}), we have for $|c|\leq k+1$ \begin{eqnarray} \nonumber &&\sum_{|c|\leq k+1}\langle\langle L_{c_i}(\Gamma^c F^i)(x, t)\rangle\rangle_{0}\\ \nonumber &=& O\left(\sum_{j=0}^m\sup_{(y, s)\in \atop \Lambda_j(\frac1{\varepsilonepsilon})\cap D^i(x, t)}\{|y|^{\frac12} (1+s+|y|)^{1+\mu}(1+||y|-c_js|)|F^i(y, s)|_{k+1}\}\right)\\ &=& O^*( (1+t)^{\frac9{16}+\mu}[\partial u(t)]_{[\frac{k+2}2]}||\partial u(t)||_{k+2})\langlebel{4.7}\\ \nonumber &=&O^*(J\varepsilonepsilon^{\frac{23}{16}-\mu})\\ \nonumber &=&O^*(\varepsilonepsilon^{\frac54})\qquad\qquad \mbox{in}\qquad \mathbb{R}^2\times \big[0, \ 1/{\varepsilonepsilon}\big], \end{eqnarray} if we take $\mu$ and $\varepsilonepsilon_0$ to be $\mu<1/{16}$ and $J^8\varepsilonepsilon_0\leq 1$. As for $\partial L_{c_i}(\Gamma^b F^i)$ with $|b|\leq k$, when $0\leq t\leq 1$, it follows from (\ref{1.85}) that $1+|x|+t$ and $1+||x|-c_it|$ are bounded in the support of the solution $u^i(x, t)$. Hence, by (\ref{4.00}), (\ref{4.222}) and (\ref{4.7}), we find \begin{eqnarray} \nonumber \sum_{|b|\leq k}[[\partial L_{c_i}(\Gamma^b F^i)(x, t)]]_0&=&O\left(\sum_{|c|\leq k+1}([[v_c^i(x, t)]]_{0}+[[ L_{c_i}(\Gamma^{c}F^i(x, t)]]_{0})\right)\\ \nonumber &=&O\left(\sum_{|c|\leq k+1}(\langle\langle v_c^i(x, t)\rangle\rangle_{0}+\langle\langle L_{c_i}(\Gamma^{c}F^i(x, t)\rangle\rangle_{0})\right)\langlebel{4.bb}\\ \nonumber &=&O^*(\varepsilonepsilon^{\frac54})\qquad \qquad \mbox{in}\qquad \mathbb{R}^2\times [0, 1]. 
\end{eqnarray} On the other hand, by (\ref{1.85}), (\ref{2.1}), (\ref{l1}) with $G=0$, (\ref{4.4}) and (\ref{4.6}), we have \begin{eqnarray} \nonumber &&\sum_{|b|\leq k}[[\nabla L_{c_i}(\Gamma^b F^i)(x, t)]]_{0}\\ \nonumber &=& O\left(\sum_{j=0}^m\sup_{(y, s)\in \atop \Lambda_j\cap D^i(x, t)}\{|y|^{\frac12} (1+s+|y|)^{1+\nu}(1+||y|-c_js|)|F^i(y, s)|_{k+1}\}\right)\\ &=& O^*( (1+t)^{\frac9{16}+\nu}[\partial u(t)]_{[\frac{k+2}2]}||\partial u(t)||_{k+2})\langlebel{4.10}\\ \nonumber &=&O^*(J\varepsilonepsilon^{\frac{23}{16}-\nu})\\ \nonumber &=&O^*(\varepsilonepsilon^{\frac54})\qquad\qquad \mbox{in}\qquad \mathbb{R}^2\times \big[0,\ 1/{\varepsilonepsilon}\big], \end{eqnarray} if we take $\nu$ and $\varepsilonepsilon_0$ to be $\nu<1/{16}$ and $J^8\varepsilonepsilon_0<1$. Therefore, when $1\leq t$, by (\ref{1.85}), (\ref{4.00}), (\ref{4.222}) , (\ref{4.7}), (\ref{4.10}) and the identity \begin{eqnarray} \partial_0=-\frac{x_1}t\partial_1-\frac{x_2}t\partial_2+\frac1t S,\langlebel{4.a2} \end{eqnarray} we have \begin{eqnarray} \begin{equation}gin{aligned} &\ \sum_{|b|\leq k}[[\partial_0 L_{c_i}(\Gamma^b F^i)(x, t)]]_0\\ =&\ \ O\left(\sum_{|b|\leq k}\bigg([[\nabla L_{c_i}(\Gamma^b F^i(x, t)]]_0 +\frac1{1+t}[[S L_{c_i}(\Gamma^b F^i)(x, t)]]_{0}\bigg)\right)\langlebel{4.9}\\ =&\ \ O\left(\sum_{|b|\leq k}[[\nabla L_{c_i}(\Gamma^b F^i(x, t)]]_0 +\sum_{|c|\leq k+1}(\langle\langle v_c^i(x, t)\rangle\rangle_{0}+\langle\langle L_{c_i}(\Gamma^{c}F^i(x, t)\rangle\rangle_{0}) \right)\quad\\ =&\ \ O^*(\varepsilonepsilon^{\frac54})\qquad \qquad \mbox{in}\qquad \mathbb{R}^2\times [1, 1/\varepsilonepsilon]. \end{aligned} \end{eqnarray} Therefore we obtain (\ref{4.3a}).\\%, if we take $\kappa_0=K_5+\max\{K_8, K_{12}, K_{15}\}$. \\ \indent Secondly, we assume $\displaystyle{1/{\varepsilonepsilon}\leq t\leq T_B}$. In this region, we need more precise energy estimate: \begin{equation}gin{prop} Let $u(x, t)=(u^1(x, t), u^2(x, t), \cdots, u^m(x, t))\in (C^{\infty}( \mathbb{R}^2\times [0, T))^m$ be the solution to (\ref{1.1}) and (\ref{1.2}) under the same assumption in Theorem 1.1. Also let $\ell$ be a positive integer. Then there exist positive constants $C_5$ and $\delta$ such that if \begin{eqnarray} [[[\partial u]]]_{[\frac{\ell+1}2], T}+\langle u\rangle_{[\frac{\ell+1}2]+1,T}<\delta\langlebel{delta} \end{eqnarray} holds, then \begin{eqnarray} ||\partial u(t)||_{\ell}\leq C_5||\partial u(t_0)||_{\ell}\exp\left(C_5\int_{t_0}^t\frac{[[[\partial u(s)]]]_{[\frac{\ell+1}2]}+\langle u(s)\rangle_{[\frac{\ell+1}2]+1}}{1+s}\ ds\right)\langlebel{4.11} \end{eqnarray} holds for $0\leq t_0 \leq t<T$. Here, we have set \begin{eqnarray} \nonumber &&[[[v]]]_{p ,\tau}=\sup_{0\leq t< \tau}[[[v(t)]]]_{p},\quad [[[v(t)]]]_p=\sup_{x\in \mathbb{R}^2}[[[v(x, t)]]]_p,\\ \nonumber &&[[[v(x, t)]]]_p=\left\{\begin{equation}gin{array}{ll}[[v(x, t)]]_p& \mbox{when}\ \ |x|\leq t^{\frac{7}{8}},\\ [v(x, t)]_p& \mbox{when}\ \ |x|>t^{\frac7{8}}.\end{array} \right. \end{eqnarray} \end{prop} For the proof of (\ref{4.11}), see Proposition 4.1 in \cite{2006}, in there we used the $ghost$ $weight$ energy method.\\[2mm] {\bf Remark:}\ The difference of situations between Proposition 4.1 in \cite{2006} and Proposition 4.3 in this paper is the power of $1+||x|-c_it|$ in $[\partial u^i]_k$ which is the norm we are going to estimate. To derive the $ghost$ $weight$ energy method, we need to suppose that $|\partial u^i(x, t)|_k$ is equivalent to $1+||x|-c_it|$ in the area where $|x|$ is small. 
Thus we need to define another norm $[[[\partial u]]]_k$ and assume (\ref{delta}) in Proposition 4.3. \\[3mm] \indent In order to use (\ref{4.11}) with $\ell=k+9$ and $T=T_B$, we also show that \begin{eqnarray} [[[\partial u(x, t)]]]_{[\frac{k+10}2]}+\langle u(x, t)\rangle_{[\frac{k+10}2]+1}=O^* (\varepsilonepsilon^{\frac12})\qquad \qquad \mbox{in}\qquad \mathbb{R}^2\times [0, \ T_B)\langlebel{4.13} \end{eqnarray} holds. By (\ref{2.1}) and $k\geq 21$, we find that \begin{eqnarray} [\partial u(x, t)]_{[\frac{k+10}2]}+\langle u(x, t)\rangle_{[\frac{k+10}2]+1}=O^*(J\varepsilonepsilon) =O^*(\varepsilonepsilon^{\frac12})\qquad \mbox{in}\qquad \mathbb{R}^2\times [0, T_B) \langlebel{4.13a} \end{eqnarray} if we take $\varepsilonepsilon_0$ to be $J^2\varepsilonepsilon_0\leq 1$. Furthermore, if we obtain \begin{eqnarray} \sum_{|b|\leq [\frac{k+10}2]}[[\nabla L_{c_i}(\Gamma^b F^i)(x, t)]]_{0}=O^*(\varepsilonepsilon^{\frac12})\qquad\qquad \mbox{in}\qquad \mathbb{R}^2\times \big[0, \ T_B\big),\langlebel{4.12} \end{eqnarray} then by (\ref{1.85}), (\ref{2.1}), (\ref{4.00}), (\ref{4.a2}) and (\ref{4.12}), we find that \begin{eqnarray} \begin{equation}gin{aligned} &\ \ \sum_{|b|\leq [\frac{k+10}2]}[[\partial_0 L_{c_i}(\Gamma^b F^i)(x, t)]]_{0}\\ =&\ \ O\left(\sum_{|b|\leq [\frac{k+10}2]}\bigg(\frac{|x|}{t}[[\nabla L_{c_i}(\Gamma^b F^i)(x, t)]]_0+\frac1{t}[[SL_{c_i}(\Gamma^b F^i(x, t))]]_0\bigg)\right)\\ =&\ \ O\left(\sum_{|b|\leq [\frac{k+10}2]}[[\nabla L_{c_i}(\Gamma^b F^i)(x, t)]]_0+\right.\\ &\ \ \left.\qquad \qquad +\sum_{|c|\leq [\frac{k+10}2]+1}\bigg(\langle\langle v^i_c(x, t)\rangle\rangle_0+\frac{(1+|x|)^{\frac12}}{(1+|x|+t)^{\frac7{16}}}\langle L_{c_i}(\Gamma^c F^i(x, t))\rangle_0\bigg)\right)\\ =&\ \ O\left(\sum_{|b|\leq [\frac{k+10}2]}[[\nabla L_{c_i}(\Gamma^b F^i)(x, t)]]_0+\right.\\ &\ \ \left.\qquad\qquad\qquad+\sum_{|c|\leq [\frac{k+10}2]+1}(\langle\langle v^i_c(x, t)\rangle\rangle_0+\langle L_{c_i}(\Gamma^c F^i(x, t))\rangle_0)\right)\\ =&\ \ O^*(\varepsilonepsilon^{\frac12}+J\varepsilonepsilon)\\ =&\ \ O^*(\varepsilonepsilon^{\frac12} )\qquad\qquad\mbox{in}\qquad \{(x, t)\ : \ |x|\leq t^{\frac78},\ 1/{\varepsilonepsilon}\leq t<T_B\},\langlebel{4.12a} \end{aligned} \end{eqnarray} if we take $\varepsilonepsilon_0$ to be $J^2\varepsilonepsilon_0\leq 1$. Hence, by (\ref{4.2}), (\ref{4.222}), (\ref{4.3a}), (\ref{4.12}) and (\ref{4.12a}), we have (\ref{4.13}). \\ \indent In order to prove (\ref{4.3}) and (\ref{4.12}), we show that for any positive integer $\ell\leq k+1$ and for any positive constant $\eta$ \begin{eqnarray} \nonumber &&\sum_{i=1}^m\sum_{|c|\leq \ell+1} \langle\langle L_{c_i}(\Gamma^c F^i)(x, t)\rangle\rangle_0\\ &=&O^*\big(\varepsilonepsilon+J^2\varepsilonepsilon^2(1+t)^{\eta}\sup_{0\leq s\leq t}||\partial u(s)||_{\ell+8}\big)\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big)\langlebel{4.14} \end{eqnarray} and \begin{eqnarray} \nonumber &&\sum_{i=1}^m\sum_{|b|\leq \ell} [[\nabla L_{c_i}(\Gamma^b F^i)(x, t)]]_0\\ &=&O^*\big(\varepsilonepsilon+J^2\sup_{(y,s)\in \mathbb{R}^2\times [0, t]} |y|^{\frac12}|\partial u(y, s)|_{\ell+6}\big)\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big).\langlebel{4.15} \end{eqnarray} hold. We will show (\ref{4.14}) and (\ref{4.15}) step by step. 
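We also record a small index check which will be used when (\ref{4.15}) is applied with $\ell=[\frac{k+10}2]$ below; this remark is added for the reader's convenience. For every integer $k\geq 21$ we have
\begin{eqnarray}
\nonumber \Big[\frac{k+10}2\Big]+6\leq k,
\end{eqnarray}
with equality for $k=21$ and $k=22$; indeed, for odd $k$ the left-hand side equals $\frac{k+9}2+6$, which is at most $k$ precisely when $k\geq 21$, while for even $k$ it equals $\frac{k+10}2+6\leq k$ for $k\geq 22$. Consequently, the weighted supremum appearing on the right-hand side of (\ref{4.15}) with this choice of $\ell$ is controlled by $[\partial u]_{k, t}$ and hence by (\ref{2.1}).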
\\ \indent At first, by (\ref{1.5}), (\ref{1.6}), (\ref{1.85}), (\ref{2.13}), (\ref{2.14}), (\ref{l0}), (\ref{l1}), (\ref{4.4}) and $\varepsilonepsilon^2\log (1+t)\leq B$, we have for any $\mu_1>0$ and $\rho_1>0$, \begin{eqnarray} \begin{equation}gin{aligned} &\ \ \sum_{i=1}^m\sum_{|c|\leq \ell+1} \langle\langle L_{c_i}(\Gamma^c F^i)(x, t)\rangle\rangle_0\\ =&\ \ O\bigg(\sum_{i=1}^{m}\sum_{|c|\leq \ell+1} M_{1+\mu_1, 1}^{(i)}(\Gamma^c F^i)(x, t) \bigg)\\ =&\ \ O\bigg(\sum_{i=1}^{m}\sum_{j=0}^m \sup_{(y, s)\in D^i(x, t)\cap \Lambda_j} |y|^{\frac12}z^{(j)}_{1+\mu_1, 1}(|y|, s)|F^i(y, s)|_{\ell+1}\bigg)\\ =&\ \ O\bigg(\sum_{j=0}^m \sup_{(y, s)\in \Lambda_j\atop 0\leq s\leq t} \bigg\{|y|^{\frac12}z^{(j)}_{\mu_1,2}(|y|, s)\sum_{h=1}^m|\partial u^h(y, s)|_{[\frac{\ell+2}2]}|\partial u^h(y, s)|_{\ell+2}+\\ &\ \ +|y|^{\frac12}z^{(j)}_{\mu_1, 1}(|y|, s)\sum_{h=1}^m(|u^h(y, s)|_{[\frac{\ell+2}2]+1}|\partial u^h(y, s)|_{\ell+2} +|\partial u^h(y, s)|_{[\frac{\ell+2}2]}|u^h(y, s)|_{\ell+3})+\\ &\ \ +|y|^{\frac12}z^{(j)}_{1+\mu_1, 1}(|y|, s)|\partial u(y, s)|_{[\frac{\ell+2}2]}^2|\partial u(y, s)|_{\ell+2}\bigg\}\bigg)\langlebel{4.16}\\ =&\ \ O\bigg(([\partial u]_{[\frac{\ell+2}2],t}+\langle u\rangle_{[\frac{\ell+2}2]+1,t})\times\\ &\ \ \qquad\quad\times\sup_{(y, s)\in\mathbb{R}^2\times [0, t]}\{(1+s+|y|)^{-\frac7{16}+\mu_1}([[\partial u(y, s)]]_{\ell+2}+\langle\langle u(y,s)\rangle\rangle_{\ell+3})\}+\\ &\ \ \qquad +(1+t)^{\mu_1}[\partial u]_{[\frac{\ell+2}2],t}^2\sup_{(y,s)\in \mathbb{R}^2\times [0, t]}|y|^{\frac12}|\partial u(y, s)|_{\ell +2} \bigg)\\ =&\ \ O^*\bigg(J\varepsilonepsilon\sup_{(y, s)\in\mathbb{R}^2\times [0, t]}\{(1+s+|y|)^{-\frac7{16}+\mu_1}([[\partial u(y, s)]]_{\ell+2}+\langle\langle u(y,s)\rangle\rangle_{\ell+3})\}+\\ &\ \ \qquad +J^2\varepsilonepsilon^2(1+t)^{\mu_1}\sup_{0\leq s\leq t}||\partial u(s)||_{\ell +4} \bigg)\qquad\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big) \end{aligned} \end{eqnarray} and \begin{eqnarray} \nonumber &&\sum_{i=1}^m\sum_{|b|\leq \ell} [[\nabla L_{c_i}(\Gamma^b F^i)(x, t)]]_0\\ \nonumber &=&O\bigg(\sum_{i=1}^{m}\sum_{|c|\leq \ell+1} \bigg\{ M_{1+\mu_1, 1}^{(i)}(\Gamma^c N^i_2) +(1+\log(1+t))M_{1, 1+\rho_1}^{(i)}(\Gamma^c (F^i-N_2^i) ) \bigg\} \bigg)\\ \nonumber &=&O\bigg(\sum_{i=1}^{m}\sum_{j=0}^m \sup_{(y, s)\in D^i(x, t)\cap \Lambda_j} \{|y|^{\frac12}z^{(j)}_{1+\mu_1, 1}(|y|, s)|N_2^i(y, s)|_{\ell+1}+\\ \nonumber &&\qquad\qquad\qquad\qquad +(1+\log (1+t))|y|^{\frac12}z^{(j)}_{1, 1+\rho_1}(|y|, s)|(F^i-N_2^i)(y, s)|_{\ell+1}\}\bigg)\\ \nonumber &=&O\bigg(\sum_{j=0}^m \sup_{(y, s)\in \Lambda_j\atop 0\leq s\leq t} \bigg\{|y|^{\frac12}z^{(j)}_{\mu_1,2}(|y|, s)\sum_{h=1}^m|\partial u^h(y, s)|_{[\frac{\ell+2}2]}|\partial u^h(y, s)|_{\ell+2}+ \end{eqnarray} \begin{eqnarray} \nonumber && +|y|^{\frac12}z^{(j)}_{\mu_1, 1}(|y|, s)\sum_{h=1}^m(|u^h(y, s)|_{[\frac{\ell+2}2]+1}|\partial u^h(y, s)|_{\ell+2} +|\partial u^h(y, s)|_{[\frac{\ell+2}2]}|u^h(y, s)|_{\ell+3})\bigg\}+\\ \nonumber && +(1+\log (1+t))\sum_{j=0}^m \sup_{(y, s)\in \Lambda_j\atop 0\leq s\leq t}\{|y|^{\frac12}z^{(j)}_{1, 1+\rho_1}(|y|, s)|\partial u(y, s)|_{[\frac{\ell+2}2]}^2|\partial u(y, s)|_{\ell+2}\}\bigg)\\ &=&O\bigg(([\partial u]_{[\frac{\ell+2}2],t}+\langle u\rangle_{[\frac{\ell+2}2]+1,t})\times\langlebel{4.17}\\ \nonumber &&\qquad\quad \times\sup_{(y, s)\in\mathbb{R}^2\times [0, t]}\{(1+s+|y|)^{-\frac7{16}+\mu_1}([[\partial u(y, s)]]_{\ell+2}+\langle\langle u(y,s)\rangle\rangle_{\ell+3})\}+\\ \nonumber &&\qquad +(1+\log(1+t))[\partial 
u]_{[\frac{\ell+2}2],t}^2\sup_{(y,s)\in \mathbb{R}^2\times [0, t]}|y|^{\frac12}|\partial u(y, s)|_{\ell +2} \bigg)\\ \nonumber &=&O^*\bigg(J\varepsilonepsilon\sup_{(y, s)\in\mathbb{R}^2\times [0, t]}\{(1+s+|y|)^{-\frac7{16}+\mu_1}([[\partial u(y, s)]]_{\ell+2}+\langle\langle u(y,s)\rangle\rangle_{\ell+3})\}+ \\ \nonumber &&\qquad +J^2\varepsilonepsilon^2(1+\log(1+t))\sup_{(y,s)\in \mathbb{R}^2\times [0, t]}|y|^{\frac12}|\partial u(y, s)|_{\ell +2} \bigg)\\ \nonumber &=&O^*\bigg(J\varepsilonepsilon\sup_{(y, s)\in\mathbb{R}^2\times [0, t]}\{(1+s+|y|)^{-\frac7{16}+\mu_1}([[\partial u(y, s)]]_{\ell+2}+\langle\langle u(y,s)\rangle\rangle_{\ell+3})\}+\\ \nonumber &&\qquad +J^2\sup_{(y,s)\in \mathbb{R}^2\times [0, t]}|y|^{\frac12}|\partial u(y, s)|_{\ell +2} \bigg)\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon},\ T_B\big), \end{eqnarray} where we have set \begin{eqnarray} \nonumber N_2^i=\sum_{j, \ell=1}^m\sum_{\alpha,\begin{equation}ta=0}^2 a_{\ell j}^{i, \alpha\begin{equation}ta\gamma}\partial_{\gamma}u^j\partial_{\alpha}\partial_{\begin{equation}ta}u^{\ell} +\sum_{j,k=1}^m\sum_{\alpha,\begin{equation}ta=0}^2b_{jk}^{i, \alpha\begin{equation}ta}\partial_{\alpha}u^j\partial_{\begin{equation}ta}u^k. \end{eqnarray} Next, we estimate $(1+s+|y|)^{-\frac7{16}+\mu_1}([[\partial u(y, s)]]_{\ell+2}+\langle\langle u(y,s)\rangle\rangle_{\ell+3})$ for $(y, s)\in\mathbb{R}^2\times [0, t]$. By the same manner as (\ref{4.16}), for any $\mu_2>0$, we obtain by (\ref{1.5}), (\ref{1.6}), (\ref{1.85}), (\ref{2.13}), (\ref{2.14}), (\ref{l0}), (\ref{l1}), (\ref{4.2}), (\ref{4.222}), (\ref{4.01}) and (\ref{4.9}) \begin{eqnarray} \nonumber && (1+s+|y|)^{-\frac7{16}+\mu_1}([[\partial u(y, s)]]_{\ell+2}+\langle\langle u(y,s)\rangle\rangle_{\ell+3})\\ \nonumber &=&O\bigg(\varepsilonepsilon+\sum_{i=1}^m\sum_{|b|\leq \ell+2\atop |c|\leq \ell+3} (1+s+|y|)^{-\frac7{16}+\mu_1}\times\\ \nonumber &&\qquad\qquad\qquad\qquad\times([[\nabla L_{c_i}(\Gamma^b F^i)(y, s)]]_0+\langle\langle L_{c_i}(\Gamma^c F^i)(y, s)\rangle\rangle_0)\bigg)\\ \nonumber &=&O\bigg(\varepsilonepsilon+\sum_{i=1}^{m}\sum_{|c|\leq \ell+3}\sup_{y\in\mathbb{R}^2}\{(1+s+|y|)^{-\frac7{16}+\mu_1}M_{1+\mu_2, 1}^{(i)}(\Gamma^c F^i)(y, s)\} \bigg)\\ \nonumber &=&O\bigg(\varepsilonepsilon+\sum_{i=1}^{m}\sum_{j=0}^m\sup_{y\in\mathbb{R}^2} \sup_{(\xi, \tau)\in D^i(y, s)\cap \Lambda_j} \{|\xi|^{\frac12}z^{(j)}_{\frac9{16}+\mu_1+\mu_2, 1}(|\xi|, \tau)|F^i(\xi, \tau)|_{\ell+3}\bigg) \end{eqnarray} \begin{eqnarray} &=&O\bigg(\varepsilonepsilon+([\partial u]_{[\frac{\ell+4}2],t}+\langle u\rangle_{[\frac{\ell+4}2]+1,t})\times\langlebel{4.18}\\ \nonumber &&\qquad\qquad\quad\times \sup_{(\xi, \tau)\in\mathbb{R}^2\times [0, s]}\{(1+\tau+|\xi|)^{-\frac78 +\mu_1+\mu_2}([[\partial u(\xi, \tau)]]_{\ell+4}+\langle\langle u(\xi, \tau)\rangle\rangle_{\ell+5})\}+\\ \nonumber &&\qquad +[\partial u]_{[\frac{\ell+4}2],t}^2\sup_{(\xi, \tau)\in \mathbb{R}^2\times [0, s]}|\xi|^{\frac12}|\partial u(\xi, \tau)|_{\ell +4} \bigg)\\ \nonumber &=&O^*\bigg(\varepsilonepsilon+J\varepsilonepsilon\sup_{(\xi, \tau)\in\mathbb{R}^2\times [0, s]}\{(1+\tau+|\xi|)^{-\frac78+\mu_1+\mu_2}([[\partial u(\xi, \tau)]]_{\ell+4}+\langle\langle u(\xi, \tau)\rangle\rangle_{\ell+5})\}+\\ \nonumber &&\qquad +J^2\varepsilonepsilon^2\sup_{(\xi, \tau)\in \mathbb{R}^2\times [0, s]}|\xi|^{\frac12}|\partial u(\xi, \tau)|_{\ell +4} \bigg)\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big). 
\end{eqnarray} Moreover, by the same manner as (\ref{4.18}), for any $\mu_3>0$ we obtain (\ref{1.5}), (\ref{1.6}), (\ref{1.85}), (\ref{l0}), (\ref{l1}), (\ref{4.2}), (\ref{4.222}), (\ref{4.01}) and (\ref{4.9}) \begin{eqnarray} \nonumber && (1+\tau+|\xi|)^{-\frac7{8}+\mu_1+\mu_2}([[\partial u(\xi, \tau)]]_{\ell+4}+\langle\langle u(\xi, \tau)\rangle\rangle_{\ell+5})\\ \nonumber &=&O\bigg(\varepsilonepsilon+\sum_{i=1}^m\sum_{|b|\leq \ell+4\atop |c|\leq \ell+5} (1+\tau+|\xi|)^{-\frac7{8}+\mu_1+\mu_2}\times\\ \nonumber &&\qquad\qquad\qquad\qquad\times([[\nabla L_{c_i}(\Gamma^b F^i)(\xi, \tau)]]_0+\langle\langle L_{c_i}(\Gamma^c F^i)(\xi, \tau)\rangle\rangle_0)\bigg)\\ &=&O\bigg(\varepsilonepsilon+\sum_{i=1}^{m}\sum_{|c|\leq \ell+5}\sup_{y\in\mathbb{R}^2}\{(1+\tau+|\xi|)^{-\frac7{8}+\mu_1+\mu_2}M_{1+\mu_3, 1}^{(i)}(\Gamma^c F^i)(\xi, \tau)\} \bigg)\langlebel{4.19}\\ \nonumber &=&O\bigg(\varepsilonepsilon+\sum_{i=1}^{m}\sum_{j=0}^m\sup_{y\in\mathbb{R}^2} \sup_{(\zeta, \theta)\in D^i(\xi, \tau)\cap \Lambda_j} \{|\zeta|^{\frac12}z^{(j)}_{\frac1{8}+\mu_1+\mu_2+\mu_3, 1}(|\zeta|, \theta)|F^i(\zeta, \theta)|_{\ell+5}\bigg)\\ \nonumber &=&O\bigg(\varepsilonepsilon+[\partial u]_{[\frac{\ell+6}2],t} \sup_{(\zeta, \theta)\in\mathbb{R}^2\times [0, \tau]}\{|\zeta|^{\frac12}(1+\theta+|\zeta|)^{-\frac5{16} +\mu_1+\mu_2+\mu_3}|\partial u(\zeta, \theta)|_{\ell+6}\}\bigg)\\ \nonumber &=&O^*\bigg(\varepsilonepsilon+J\varepsilonepsilon \sup_{(\zeta, \theta)\in\mathbb{R}^2\times [0, \tau]}\{|\zeta|^{\frac12}|\partial u(\zeta, \theta)|_{\ell+6}\} \bigg)\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big), \end{eqnarray} if we take $\mu_1,\mu_2,\mu_3$ to be $\mu_1+\mu_2+\mu_3<5/{16}$. Combining (\ref{4.16}), (\ref{4.17}), (\ref{4.18}) and (\ref{4.19}) and taking $\mu_1=\eta$, $\varepsilonepsilon_0\leq 1$, $J\varepsilonepsilon_0\leq 1$, we have (\ref{4.14}) and (\ref{4.15}).\\ \indent Now we show (\ref{4.12}). It follows from (\ref{2.1}), (\ref{4.15}) and $k\geq 21$ that \begin{eqnarray} \nonumber &&\sum_{i=1}^m\sum_{|b|\leq [\frac{k+10}2]} [[\nabla L_{c_i}(\Gamma^b F^i)(x, t)]]_0\\ \nonumber &=&O^*\bigg(\varepsilonepsilon+J^2B\sup_{(y,s)\in \mathbb{R}^2\times [0, t]} |y|^{\frac12}|\partial u(y, s)|_{[\frac{k+10}2]+6}\bigg)\\ \nonumber &=&O^*(\varepsilonepsilon+J^2[\partial u]_{k,t})\\ \nonumber &=&O^*((1+J^3)\varepsilonepsilon)\\ \nonumber &=&O^*(\varepsilonepsilon^{\frac12})\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big), \end{eqnarray} if we take $\varepsilonepsilon_0$ to be $J^6\varepsilonepsilon_0\leq 1$. This implies (\ref{4.12}) and therefore (\ref{4.13}). Furthermore, (\ref{4.13}) implies that there exists a positive constant $K_1$ such that \begin{eqnarray} [[[\partial u(s)]]]_{[\frac{k+10}2]}+\langle u(s)\rangle_{[\frac{k+10}2]+1}\leq K_1\varepsilonepsilon^{\frac12}\langlebel{4.13aa} \end{eqnarray} holds for $0<\varepsilonepsilon <\varepsilonepsilon_0$. 
Hence, by (\ref{4.11}) and (\ref{4.13aa}), we have \begin{eqnarray} \nonumber ||\partial u(t)||_{k+9}&=&O\bigg(C_5||\partial u(0)||_{k+9}\exp\bigg(C_5\int_0^t\frac{[[[\partial u(s)]]]_{[\frac{k+10}2]}+\langle u(s)\rangle_{[\frac{k+10}2]+1}}{1+s}\ ds \bigg)\bigg)\\ \nonumber &=&O^*\bigg(\varepsilonepsilon\exp\bigg(\varepsilonepsilon^{\frac12}\int_0^t\frac{C_5K_1}{1+s}\ ds\bigg)\bigg)\\ &=&O^*\bigg(\varepsilonepsilon\exp\bigg(C_5K_1\varepsilonepsilon^{\frac12}\log (1+t)\bigg)\bigg)\langlebel{4.20}\\ \nonumber &=&O^*\bigg(\varepsilonepsilon(1+t)^{C_5K_1\varepsilonepsilon^{\frac12}}\bigg)\qquad\qquad \mbox{in}\qquad \big[1/{\varepsilonepsilon},\ T_B\big). \end{eqnarray} Therefore , by (\ref{4.4}), (\ref{4.14}), (\ref{4.15}) and (\ref{4.20}), we obtain \begin{eqnarray} \nonumber &&\sum_{i=1}^m\sum_{|c|\leq k+2}\langle\langle L_{c_i}(\Gamma^{c}F^i)(x, t)\rangle\rangle_{0}\\ \nonumber &=&O^*\bigg(\varepsilonepsilon+J^2\varepsilonepsilon^2(1+t)^{\eta}\sup_{0\leq s\leq t}||\partial u(s)||_{k+9}\bigg)\\ &=&O^*\bigg(\varepsilonepsilon+J^2\varepsilonepsilon^3(1+t)^{\eta+C_5K_1\varepsilonepsilon^{\frac12}}\bigg)\langlebel{4.21}\\ \nonumber &=&O^*\big(\varepsilonepsilon(1+t)^{\frac1{16}}\big)\qquad\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big) \end{eqnarray} and \begin{eqnarray} \nonumber &&\sum_{i=1}^m\sum_{|b|\leq k+1}[[\nabla L_{c_i}(\Gamma^b F^i)(x, t)]]_0\\ \nonumber &=&O^*\bigg(\varepsilonepsilon+J^2\varepsilonepsilon^2(1+\log(1+t))\sup_{0\leq s\leq t}||\partial u(s)||_{k+9}\bigg)\\ &=&O^*\bigg(\varepsilonepsilon +J^2B\varepsilonepsilon(1+t)^{C_5K_1\varepsilonepsilon^{\frac12}}\bigg)\langlebel{4.22}\\ \nonumber &=&O^*\big(\varepsilonepsilon^{\frac{127}{128}}(1+t)^{\frac1{256}}\big)\qquad\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big), \end{eqnarray} if we choose $\eta$ and $\varepsilonepsilon_0$ to be $0<\eta+C_5K_1\varepsilonepsilon_0^{\frac12}<1/{16}$, $0<C_5K_1\varepsilonepsilon^{\frac12}_0<1/256$ and $J^{256}\varepsilonepsilon_0\leq 1$. Hence, by (\ref{4.2}), (\ref{4.222}) and (\ref{4.21}), we have \begin{eqnarray} \langle\langle u(x, t)\rangle\rangle_{k+2}=O^*(\varepsilonepsilon(1+t)^{\frac1{16}})\qquad\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big).\langlebel{4.a10} \end{eqnarray} Hence, by (\ref{4.a10}),we obtain \begin{eqnarray} \begin{equation}gin{aligned} \langle u(x, t)\rangle_{k+2}&=O^*((1+t)^{-\frac1{16}}\langle\langle u(x, t)\rangle\rangle_{k+2})\\ &=O^*(\varepsilonepsilon)\qquad\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big)\langlebel{4.a12} \end{aligned} \end{eqnarray} and therefore by (\ref{4.2}), (\ref{4.222}), (\ref{4.01}), (\ref{4.9}), (\ref{4.22}) and (\ref{4.a12}), we have \begin{eqnarray} [\partial u(x, t)]_{k+1}=O^*\big(\varepsilonepsilon^{\frac{127}{128}}(1+t)^{\frac1{256}}\big) \qquad\qquad\mbox{in}\qquad \mathbb{R}^2\times \big[1/{\varepsilonepsilon}, \ T_B\big).\langlebel{4.a11} \end{eqnarray} Note that (\ref{4.a12}) and (\ref{4.a11}) are stronger than we needed with respect to the order of derivatives. We will make use of the strength of the estimates below. 
\\ On the other hand, in order to estimate $\partial u$, we introduce a subset of $\mathbb{R}^2\times [0, T_B)$ by \begin{eqnarray} \nonumber \tilde{\Lambda}_i(T)=\big\{ (x, t)\ :\ ||x|-c_it|\leq t^{\frac14}, \ 1/{\varepsilon}\leq t<T\big\}\qquad (i=1,2,\cdots, m), \end{eqnarray} and argue by dividing the region $\mathbb{R}^2\times [0, T_B)$ into the outside and the inside of $\tilde{\Lambda}_i(T_B)$. We also introduce the notation \begin{eqnarray} \nonumber \tilde{\Lambda}^c_i(T)=\big\{ (x, t)\ :\ (x, t)\not\in \tilde{\Lambda}_i(T), \ \ 1/{\varepsilon}\leq t<T\big\} \end{eqnarray} and \begin{eqnarray} \nonumber \partial \tilde{\Lambda}_i(T)=\big\{ (x, t)\ :\ ||x|-c_it|=t^{\frac14}\ \ \mbox{when}\ \ 1/{\varepsilon}<t<T\\ \nonumber \qquad\qquad\qquad\qquad\qquad \ \mbox{or}\ \ ||x|-c_it|<t^{\frac14}\ \ \mbox{when}\ \ t=1/{\varepsilon}\ \big\}. \end{eqnarray} Then we find that \begin{eqnarray} (1+t)^{\frac14}\leq C_6(1+||x|-c_it|)\qquad\quad \mbox{for}\qquad (x, t)\in \tilde{\Lambda}^c_i(T_B)\label{4.24} \end{eqnarray} holds for some constant $C_6>0$ and that \begin{eqnarray} \tilde{\Lambda}_i(T_B)\subset \Lambda_i(T_B)\label{4.a24} \end{eqnarray} holds for sufficiently small $\varepsilon>0$. Hence, it follows from (\ref{4.22}) and (\ref{4.24}) that \begin{eqnarray} \nonumber \sum_{|b|\leq k+1}[\nabla L_{c_i}(\Gamma^b F^i)(x, t)]_0 &=&O^*\big(\varepsilon^{\frac{127}{128}}(1+t)^{\frac1{256}}(1+||x|-c_it|)^{-\frac1{16}}\big)\\ &=&O^*\big(\varepsilon^{\frac{127}{128}}(1+t)^{-\frac3{256}}\big)\label{4.25}\\ \nonumber &=&O^*(\varepsilon^{\frac{257}{256}}) \qquad\qquad\mbox{in}\qquad \tilde{\Lambda}_i^c(T_B), \end{eqnarray} for $i=1, 2, \cdots, m$. Therefore, by (\ref{4.2}), (\ref{4.222}), (\ref{4.01}), (\ref{4.a2}), (\ref{4.a12}) and (\ref{4.25}), we obtain \begin{eqnarray} [\partial u^i(x, t)]_{k+1} &=&O^*(\varepsilon)\label{4.b1} \qquad\qquad\mbox{in}\qquad \tilde{\Lambda}_i^c(T_B), \end{eqnarray} for $i=1, 2, \cdots, m$. In particular, by (\ref{4.2a}), (\ref{4.2}), (\ref{4.00}), (\ref{4.222}), (\ref{4.a2}), (\ref{4.a12}) and (\ref{4.25}), we obtain \begin{eqnarray} [\partial_0 u^i(x, t)-\varepsilon\partial_0 u^i_0(x, t)]_0&=&O^*(\varepsilon^{\frac{257}{256}})\qquad \qquad \mbox{on}\qquad \partial \tilde{\Lambda}_i(T_B)\ \label{4.a13} \end{eqnarray} and \begin{eqnarray} [\partial^2_0 u^i(x, t)-\varepsilon\partial^2_0 u^i_0(x, t)]_0&=&O^*(\varepsilon^{\frac{257}{256}})\qquad \qquad \mbox{on}\qquad \partial \tilde{\Lambda}_i(T_B).\label{4.a14} \end{eqnarray} Now, the remaining task is to show \begin{eqnarray} [\partial u^i (x, t)]_k=O^*(\varepsilon) \qquad\qquad\mbox{in}\qquad \tilde{\Lambda}_i(T_B),\label{4.26} \end{eqnarray} for $i=1, 2, \cdots, m$. We use the method of ordinary differential equations along the pseudo characteristic curves. Let $u(x, t)=(u^1(x, t), u^2(x, t),\cdots, u^m(x, t))$ be the solution to (\ref{1.1}) and (\ref{1.2}) and denote $x=r\omega, \ (r=|x|,\ \omega\in S^1)$.
Then, for fixed $\langlembda\in \mathbb{R}$ and $\omega\in S^1$, we define the $i$-th pseudo characteristic curve in $(r, t)$-plane by the solution $r=r^i(t; \langlembda)$ of the Cauchy problem; \begin{eqnarray} \frac{dr}{dt}&=&\kappa_i(r, t)\equiv c_i+\frac1{2c_i^3}\Thetaeta_i(-c_i, \omega)(\partial_0 u^i(r\omega, t))^2 \qquad\quad t_0\leq t<T_B,\langlebel{4.27}\\ \nonumber r(t_0)&=&c_it_0+\langlembda, \end{eqnarray} where $t_0=1/{\varepsilonepsilon}$ when $|\langlembda|<\varepsilonepsilon^{-\frac14}$, and $t_0=\langlembda^4$ when $|\langlembda|\geq \varepsilonepsilon^{-\frac14}$. Namely, the initial point $(r^i(t_0;\langlembda)\omega, t_0)$ is on $\partial \tilde{\Lambda}_i(T_B)$ for each $\langlembda\in\mathbb{R}$ and $\omega\in S^1$. Denote \begin{eqnarray} \nonumber \mathcal{J}^i(\langlembda;\omega)=\{(x, t)\ : \ x=r^i(t;\langlembda)\omega,\ \ t_0\leq t<T_B\ \}, \end{eqnarray} then we find that \begin{eqnarray} \nonumber \tilde{\Lambda}_i(T_B)=\displaystyle{\bigcup_{\langlembda\in\mathbb{R}, \ \omega\in S^1}}\mathcal{J}^i(\langlembda; \omega) \end{eqnarray} holds for each $i=1, 2, \cdots, m$. For the details, see \cite{2005}. Now, we can transform the equation (\ref{1.1}) into an ordinary differential equation along the pseudo characteristic curve. For a vector valued function $v=(v^1, v^2, \cdots, v^m)$, set \begin{eqnarray} \nonumber \mathcal{E}_iv=\Box_iv^i-\sum_{\ell=1}^{m}\sum_{\alpha,\begin{equation}ta=0}^2A_{\ell}^{i,\alpha\begin{equation}ta}(\partial u)\partial_{\alpha}\partial_{\begin{equation}ta}v^{\ell}, \end{eqnarray} then we obtain an identity \begin{eqnarray} \nonumber &&(\partial_0+\kappa_i\partial_r)\big(r^{\frac12}\partial_0v^i\big)\\ \nonumber&=&\frac{r^{\frac12}}2\mathcal{E}_iv+\frac{r^{\frac12}}2(\partial_0+c_i\partial_r)^2v^i+ \frac{r^{\frac12}(\kappa_i-c_i)}{c_i} (\partial_0+c_i\partial_r)\partial_0v^i+\\ &&+\frac{c_i^2}{2r^{\frac32}}\Omega^2v^i+\frac1{2r^{\frac12}}(\kappa_i-c_i)\partial_0v^i+\frac{c_i}{2r^{\frac12}}(\partial_0+c_i\partial_r)v^i\langlebel{4.28}-\\ \nonumber &&-\frac{r^{\frac12}(\kappa_i-c_i)}{c_i}\partial^2_0 v^i+\frac{r^{\frac12}}2\sum_{\ell=1}^{m}\sum_{\alpha, \begin{equation}ta=0}^2A_{\ell}^{i, \alpha\begin{equation}ta}(\partial u) \partial_{\alpha}\partial_{\begin{equation}ta}v^{\ell}. \end{eqnarray} Note that the differential operator $\partial_0+\kappa_i\partial_r$ in the left hand side of (\ref{4.28}) means the derivative along $r=r^i(t;\langlembda)$ in $(r, t)$-plane. Furthermore, by (\ref{2.6}), (\ref{2.7}) and the definition of $\tilde{\Lambda}_i(T)$, we have for any $\alpha,\begin{equation}ta=0,1,2$ and $i=1,2,\cdots, m$ \begin{eqnarray} \nonumber \frac1{r^{\frac12}}-\frac1{(c_it)^{\frac12}}&=&O\bigg(\frac{|r-c_it|}{(1+t)^{\frac32}}\bigg)=O\bigg(\frac1{(1+t)^{\frac54}}\bigg)\\ \nonumber (\partial_0+c_i\partial_r)v&=&O\bigg(\frac{1+|r-c_it|}{1+t}|\partial v|_0+\frac1{1+t}|v|_1\bigg)\\ \partial_{\alpha}v+\frac{\omega_{\alpha}}{c_i}\partial_0 v&=&O\bigg(\frac{1+|r-c_it|}{1+t}|\partial v|_0+\frac1{1+t}|v|_1\bigg)\langlebel{4.29}\\ \nonumber (\partial_0+c_i\partial_r)^2v&=&O\bigg(\frac{(1+|r-c_it|)^2}{(1+t)^2}|\partial v|_1+\frac1{(1+t)^2}|v|_2\bigg)\\ \nonumber \partial_{\alpha}\partial_{\begin{equation}ta}v-\frac{\omega_{\alpha}\omega_{\begin{equation}ta}}{c_i^2}\partial_0^2v&=& O\bigg(\frac{1+|r-c_it|}{1+t}|\partial v|_1+\frac1{(1+t)^2}|v|_2\bigg) \end{eqnarray} in $\tilde{\Lambda}_i(T_B)$. 
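\\ \indent Before continuing with the pointwise estimates, we remark that the pseudo characteristic curves stay close to the straight lines $r=c_it+\lambda$, since $\kappa_i-c_i=O((\partial_0u^i)^2)$. The following Python sketch integrates (\ref{4.27}) numerically for made-up toy data (the constants $c_i$, $\Theta_i$ and the profile of $\partial_0u^i$ are illustrative choices with the $O(\varepsilon(1+t)^{-\frac12})$ decay near the light cone, not the actual solution of (\ref{1.1})) and illustrates the $O(\varepsilon^2\log(1+t))$ drift obtained rigorously in (\ref{4.36}) below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Sketch only: integrate the pseudo characteristic ODE dr/dt = kappa_i(r, t)
# of (4.27) for a made-up profile of d_0 u^i (not the actual solution u).
c_i, Theta_i, eps = 1.0, 1.0, 0.05        # assumed toy constants
t0, lam = 1.0 / eps, 0.5                  # initial time 1/eps and label lambda

def d0_u(r, t):
    # toy profile with O(eps (1+t)^{-1/2}) decay, concentrated near r = c_i t
    return eps * np.exp(-(r - c_i * t) ** 2) / np.sqrt(1.0 + t)

def kappa(t, r):
    return c_i + Theta_i / (2.0 * c_i ** 3) * d0_u(r, t) ** 2

sol = solve_ivp(kappa, (t0, 100 * t0), [c_i * t0 + lam], rtol=1e-8, atol=1e-10)
drift = sol.y[0, -1] - c_i * sol.t[-1]
# kappa - c_i = O(eps^2/(1+t)) here, so the deviation from r = c_i t + lambda
# stays of size O(eps^2 log(1+t)), in line with (4.36).
print(f"r(T) - c_i T = {drift:.6f}   (initial offset lambda = {lam})")
\end{verbatim}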
By (\ref{2.1}), (\ref{2.14}) and (\ref{4.29}), we have \begin{eqnarray} \nonumber && -\frac{\kappa_i-c_i}{c_i}\partial_0^2 v^i+\frac12\sum_{\ell=1}^{m} \sum_{\alpha, \begin{equation}ta=0}^2A_{\ell}^{i, \alpha\begin{equation}ta}(\partial u)\partial_{\alpha}\partial_{\begin{equation}ta}v^{\ell}\\ \nonumber &=&-\frac1{2c_i^4}\Thetaeta_i(-c_i, \omega)(\partial_0 u^i)^2\partial_0^2 v^i+\frac12\sum_{\alpha,\begin{equation}ta,\gamma,\delta=0}^2 c_{iii}^{i,\alpha\begin{equation}ta\gamma\delta}\partial_{\gamma}u^i\partial_{\delta}u^i\partial_{\alpha}\partial_{\begin{equation}ta}v^i+\\ &&+O^*\bigg(\frac{|r-c_it|}{1+t}|\partial u^i|_0|\partial v^i|_1+\frac1{1+t}|u^i|_1|\partial v^i|_1+\frac1{(1+t)^2}|u^i|_1|v^i|_2+\\ \nonumber &&+\sum_{j\ne i}(|\partial u^j|_0|\partial u|_0|\partial v|_1+|\partial u|_0^2|\partial v^j|_1)+|\partial u|^3_0|\partial v|_1\bigg)\langlebel{4.30}\\ \nonumber &=&O^*\bigg(\frac{|r-c_it|}{1+t}|\partial u^i|_0|\partial v^i|_1+\frac1{1+t}|u^i|_1|\partial v^i|_1+\frac1{(1+t)^2}|u^i|_1|v^i|_2+\\ \nonumber &&\ \ \ +\sum_{j\ne i}(|\partial u^j|_0|\partial u|_0|\partial v|_1+|\partial u|_0^2|\partial v^j|_1)+|\partial u|_0^3|\partial v|_1 \bigg)\qquad \mbox{in}\qquad \tilde{\Lambda}_i(T_B), \end{eqnarray} if we take $\varepsilonepsilon_0$ to be $J\varepsilonepsilon_0\leq 1$. Therefore, it follows from (\ref{4.28}), (\ref{4.29}) and (\ref{4.30}) that \begin{eqnarray} \begin{equation}gin{aligned} &\ \ (\partial_0+\kappa_i\partial_r)\big(r^{\frac12}\partial_0 v^i\big)-\frac{r^{\frac12}}2\mathcal{E}_iv\\ =&\ \ O^*\bigg(\frac{1}{1+t}|\partial v^i|_1+\frac1{(1+t)^{\frac32}}|v^i|_2+\frac{1}{(1+t)^{\frac14}}|\partial u^i|_0|\partial v^i|_1+\langlebel{4.31}\\ &\ \ \ \ \ +\frac1{(1+t)^{\frac12}}|u^i|_1|\partial v^i|_1+r^{\frac12}\sum_{j\ne i}(|\partial u^j|_0|\partial u|_0|\partial v|_1+|\partial u|_0^2|\partial v^j|_1)+\\ &\ \ \ \ \ +r^{\frac12}|\partial u|^3_0|\partial v|_1\bigg)\qquad\mbox{in}\qquad \tilde{\Lambda}_i(T_B). \end{aligned} \end{eqnarray} Now we show (\ref{4.26}) by induction with respect to $k$. Choose a point $(x, t)\in \tilde{\Lambda}_i(T_B)$, then there exist $\langlembda\in\mathbb{R}$ and $\omega \in S^1$ such that $x=r\omega$ and $(r\omega, t)\in \mathcal{J}^i(\langlembda;\omega)$. At first, we show \begin{eqnarray} [\partial u^i(x, t)]_0=O^*(\varepsilonepsilon)\qquad\qquad \mbox{in}\qquad \tilde{\Lambda}_i(T_B)\langlebel{4.32} \end{eqnarray} for $i=1, 2, \cdots, m$. Setting $v=u$ in (\ref{4.31}), we have by (\ref{1.1}), (\ref{2.1}), (\ref{2.13}) and (\ref{2.15}), \begin{eqnarray} \nonumber &&\frac{d}{ds}\big\{\big(r^i(s;\langlembda)\big)^{\frac12}\partial_0u^i(r^i(s; \langlembda)\omega, s)\big\}\\ \nonumber &=&O^*\bigg((r^i)^{\frac12}B^i(\partial u)+\frac{1}{1+s}|\partial u^i|_1+\frac1{(1+s)^{\frac32}}|u^i|_2+\frac{1}{(1+s)^{\frac14}}|\partial u^i|_1^2+ \\ &&+\frac1{(1+s)^{\frac12}}|u^i|_1|\partial u^i|_1+(r^i)^{\frac12}\sum_{j\ne i}|\partial u^j|_1|\partial u|^2_1+(r^i)^{\frac12}|\partial u|^4_1\bigg)\langlebel{4.33}\\ \nonumber &=&O^*\bigg(\frac{J\varepsilonepsilon}{(1+s)^{\frac54}}\bigg)\qquad \mbox{in}\qquad [t_0, t], \end{eqnarray} if we take $\varepsilonepsilon_0$ to be $J\varepsilonepsilon_0\leq 1$. 
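\\ \indent For later reference we record the elementary bound that converts right hand sides of the form $J\varepsilon(1+s)^{-\frac54}$, as in (\ref{4.33}) and in the analogous estimates (\ref{4.39}), (\ref{4.48}) and (\ref{4.50}) below, into the terms of size $J\varepsilon(1+t_0)^{-\frac14}$ appearing after integration along the curves: \begin{eqnarray} \nonumber \int_{t_0}^{t}\frac{ds}{(1+s)^{\frac54}}\leq \int_{t_0}^{\infty}\frac{ds}{(1+s)^{\frac54}}=\frac{4}{(1+t_0)^{\frac14}}. \end{eqnarray}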
Integrating (\ref{4.33}) from $t_0$ to $t$, we have \begin{eqnarray} \begin{aligned} &r^{\frac12}\partial_0 u^i(r\omega, t)\\ =&\ \big(r^i(t_0;\lambda)\big)^{\frac12}\partial_0 u^i(r^i(t_0; \lambda)\omega, t_0)+O^*\bigg(\frac{J\varepsilon}{(1+t_0)^{\frac14}}\bigg)\qquad\mbox{in}\qquad \tilde{\Lambda}_i(T_B),\label{4.34}\end{aligned} \end{eqnarray} which implies \begin{eqnarray} r^{\frac12}\partial_0 u^i(r\omega, t)=O^*\big(\varepsilon\big)\qquad\quad \mbox{in}\qquad \tilde{\Lambda}_i(T_B),\label{4.35} \end{eqnarray} if we take $\varepsilon_0$ to be $J^4\varepsilon_0\leq 1$. Moreover, integrating (\ref{4.27}) and using (\ref{4.35}), we have \begin{eqnarray} \nonumber r-c_it&=&r^i(t_0; \lambda)-c_it_0+O^*\big(\varepsilon^2\log (1+t)\big)\\ &=&r^i(t_0; \lambda)-c_it_0+O^*(B)\label{4.36}\\ \nonumber &=&O^*\big(1+|r^i(t_0;\lambda)-c_it_0|\big)\qquad\qquad\mbox{in}\qquad \tilde{\Lambda}_i(T_B). \end{eqnarray} Hence, by (\ref{4.25}), (\ref{4.34}) and (\ref{4.36}), we obtain \begin{eqnarray} \nonumber &&[\partial_0 u^i(x, t)]_0\\ \nonumber &=&O\big((1+|r-c_it|)^{\frac{15}{16}}r^{\frac12}|\partial_0 u^i(r\omega, t)|\big)\\ \nonumber &=&O^*\bigg((1+|r^i(t_0;\lambda)-c_it_0|)^{\frac{15}{16}}\times\\ &&\qquad\qquad\times\bigg\{\big(r^i(t_0;\lambda)\big)^{\frac12}|\partial_0 u^i(r^i(t_0;\lambda)\omega, t_0)|+\frac{J\varepsilon}{(1+t_0)^{\frac14}}\bigg\}\bigg)\label{4.37}\\ \nonumber &=&O^*\bigg(\varepsilon+\frac{J\varepsilon}{(1+t_0)^{\frac{1}{64}}}\bigg)\\ \nonumber &=&O^*(\varepsilon)\qquad\qquad\mbox{in}\qquad\tilde{\Lambda}_i(T_B), \end{eqnarray} if we take $\varepsilon_0$ to be $J^{64}\varepsilon_0\leq 1$. It follows from (\ref{2.1}), (\ref{4.29}) and (\ref{4.37}) that (\ref{4.32}) holds. Note that (\ref{4.36}) implies that there exists a positive constant $C_7$ independent of $J$ such that \begin{eqnarray} \frac1{C_7}(1+|r^i(t_0; \lambda)-c_it_0|)\leq 1+|r-c_it|\leq C_7(1+|r^i(t_0; \lambda)-c_it_0|)\label{4.a15} \end{eqnarray} for $(r\omega, t)\in \mathcal{J}^i(\lambda;\omega)$. Therefore, by (\ref{1.19}), (\ref{4.a13}), (\ref{4.34}) and (\ref{4.a15}), we have \begin{eqnarray} \nonumber &&(r^i(s;\lambda))^{\frac12}\partial_0 u^i(r^i(s; \lambda)\omega, s)\\ &=&-c_i\varepsilon\partial_{\rho}\mathcal{F}^i(\lambda,\omega)+O^*\bigg(\frac{\varepsilon^{\frac{257}{256}}}{(1+|r^i(t_0; \lambda)-c_i t_0|)^{\frac{15}{16}}}\bigg)\quad\qquad\mbox{in}\qquad [t_0, t].\label{4.a16} \end{eqnarray} Secondly, we will show \begin{eqnarray} [\partial u^i(x, t)]_1=O^*(\varepsilon)\qquad\qquad\mbox{in}\qquad \tilde{\Lambda}_i(T_B)\label{4.38} \end{eqnarray} for $i=1, 2, \cdots, m$.
Set $v=\partial_0 u$ in (\ref{4.31}), then we have \begin{eqnarray} (\partial_0+\kappa_i\partial_r)((r^i)^{\frac12}\partial^2_0 u^i)=\frac{(r^i)^{\frac12}}{2}\mathcal{E}_i\partial_0 u+O^*\bigg(\frac{J\varepsilon}{(1+s)^{\frac54}}\bigg) \qquad\qquad\mbox{in}\qquad [t_0, t].\label{4.39} \end{eqnarray} By (\ref{1.1}), (\ref{2.1}), (\ref{2.13}), (\ref{2.14}), (\ref{2.15}), (\ref{4.29}) and (\ref{4.a16}), we have \begin{eqnarray} \nonumber &&\mathcal{E}_i\partial_0 u\\ \nonumber &=&\sum_{\ell=1}^m\sum_{\alpha, \beta=0}^2\partial_0\big(A_{\ell}^{i,\alpha\beta}(\partial u)\big)\partial_{\alpha}\partial_{\beta}u^{\ell}+\partial_0\big(B^i(\partial u)\big)\\ \nonumber &=&\sum_{\alpha,\beta,\gamma,\delta=0}^2c_{iii}^{i, \alpha\beta\gamma\delta}(\partial_0\partial_{\gamma}u^i\partial_{\delta}u^i+ \partial_{\gamma}u^i\partial_0\partial_{\delta}u^i)\partial_{\alpha}\partial_{\beta}u^i+\\ \nonumber &&+O^*\bigg(\frac{1+|r^i-c_is|}{1+s}|\partial u^i|_1^2+\frac1{(1+s)^2}|\partial_0 u^i|_1|u^i|_2+\\ &&\qquad\qquad\qquad+\sum_{j\ne i}(|\partial u^j|_0|\partial u|_1^2+|\partial u|_0|\partial u^j|_1|\partial u|_1)+|\partial u|_0^2|\partial u|_1^2\bigg)\label{4.40} \\ \nonumber &=&\frac{2\Theta_i(-c_i, \omega)}{c_i^4(r^i)^{\frac32}}((r^i)^{\frac12}\partial_0 u^i)((r^i)^{\frac12}\partial_0^2 u^i)^2+O^*\bigg(\frac{J^2\varepsilon^2}{(1+s)^{\frac{39}{16}}}\bigg)\\ \nonumber &=&-\frac{2\varepsilon \Theta_i(-c_i, \omega)\partial_{\rho}\mathcal{F}^i(\lambda, \omega)}{c_i^4(r^i)^{\frac12}(1+s)}((r^i)^{\frac12}\partial_0^2 u^i)^2+\\ \nonumber &&+O^*\bigg(\frac{J^2\varepsilon^2}{(1+s)^{\frac{39}{16}}}+\frac{J^2\varepsilon^{3+\frac1{256}}}{(1+s)^{\frac32}(1+|r^i(t_0; \lambda)-c_it_0|)^{\frac{15}{16}}}\bigg)\qquad\quad\mbox{in}\qquad [t_0, t].
\end{eqnarray} Hence, we have \begin{eqnarray} \nonumber &&(\partial_0+\kappa_i\partial_r)((r^i)^{\frac12}\partial^2_0 u^i)\\ &=&-\frac{\varepsilonepsilon \Thetaeta_i(-c_i, \omega)\partial_{\rho}\mathcal{F}^i(\langlembda, \omega)}{c_i^4(1+s)} ((r^i)^{\frac12}\partial_0^2 u^i)^2+\\ \nonumber &&+O^*\bigg(\frac{J\varepsilonepsilon}{(1+s)^{\frac{5}{4}}}+\frac{J^2\varepsilonepsilon^2}{(1+s)^{\frac{31}{16}}}+\frac{J^2\varepsilonepsilon^{3+\frac1{256}}}{(1+s)(1+|r^i(t_0; \langlembda)-c_it_0|)^{\frac{15}{16}}}\bigg) \qquad\quad\mbox{in}\qquad [t_0, t].\langlebel{4.41} \end{eqnarray} Set \begin{eqnarray} W(s)=(r^i(s;\langlembda))^{\frac12}\partial_0^2 u^i(r^i(s;\langlembda)\omega, s), \end{eqnarray} then (\ref{1.19}), (\ref{4.a14}) and (\ref{4.41}) imply the Cauchy problem of the ordinary differential equation; \begin{eqnarray} &&W'(s)=-\frac{\varepsilonepsilon\Thetaeta_i(-c_i, \omega)\partial_{\rho}\mathcal{F}^i(\langlembda, \omega)}{c_i^4(1+s)}W(s)^2+Q(s),\qquad t_0\leq s\leq t\ (<T_B),\ \ \ \langlebel{4.42}\\ &&W(t_0)=(r^i(t_0, \langlembda))^{\frac12}\partial_0^2 u^i(r^i(t_0, \langlembda)\omega, t_0)=\varepsilonepsilon c_i^2\partial_{\rho}^2\mathcal{F}^i(\langlembda, \omega)+O^*\big(\varepsilonepsilon^{\frac{257}{256}}\big),\ \ \ \ \langlebel{4.43} \end{eqnarray} where \begin{eqnarray} Q(s)=O^*\bigg(\frac{J\varepsilonepsilon}{(1+s)^{\frac{5}{4}}}+\frac{J^2\varepsilonepsilon^2}{(1+s)^{\frac{31}{16}}}+\frac{J^2\varepsilonepsilon^{3+\frac1{256}}}{(1+s)(1+|r^i(t_0; \langlembda)-c_it_0|)^{\frac{15}{16}}}\bigg).\langlebel{4.44} \end{eqnarray} Note that \begin{eqnarray} \nonumber \int_{t_0}^t|Q(s)|\ ds&=&O^*\bigg( \frac{J\varepsilonepsilon}{(1+t_0)^{\frac14}}+\frac{J^2\varepsilonepsilon^2}{(1+t_0)^{\frac{15}{16}}}+\frac{J^3\varepsilonepsilon^{3+\frac1{256}}\log (1+t)}{(1+|r^i(t_0; \langlembda)-c_it_0|)^{\frac{15}{16}}}\bigg)\\ &=&O^*\bigg(\frac{J\varepsilonepsilon^{\frac{17}{16}}}{(1+t_0)^{\frac{15}{64}}}+\frac{J^2\varepsilonepsilon^{\frac{173}{64}}}{(1+t_0)^{\frac{15}{64}}}+\frac{J^3B\varepsilonepsilon^{1+\frac1{256}}}{(1+|r^i(t_0; \langlembda)-c_it_0|)^{\frac{15}{16}}}\bigg)\langlebel{4.45}\\ \nonumber &=&O^*\bigg(\frac{\varepsilonepsilon^{\frac{33}{32}}}{(1+t_0)^{\frac{15}{64}}}+\frac{\varepsilonepsilon^{\frac{513}{512}}}{(1+|r^i(t_0; \langlembda)-c_it_0|)^{\frac{15}{16}}}\bigg) \qquad\qquad\mbox{in}\qquad [t_0,\ T_B), \end{eqnarray} if we choose $\varepsilonepsilon_0$ to be $J^{3}\varepsilonepsilon_0^{\frac1{512}}<1$. Now we can show \begin{eqnarray} [\partial_0^2 u^i(x, t)]_0=O^*(\varepsilonepsilon)\qquad\qquad\mbox{in}\qquad \tilde{\Lambda}_i(T_B),\langlebel{4.46} \end{eqnarray} by using the following proposition. \begin{equation}gin{prop} Let $w(t)$ be the solution of the ordinary differential equation; \begin{eqnarray} \nonumber w'(t)=\frac{\alpha}{1+t}w(t)^2+q(t)\qquad\qquad\mbox{for}\qquad T_0\leq t<T_1, \end{eqnarray} where $\alpha$ is a constant, $T_0$ and $T_1$ are positive constants and $q(t)$ is a continuous function in $[T_0, T_1)$. Assume \begin{eqnarray} \nonumber q_*=\int_{T_0}^{T_1}|q(t)|\ dt <\infty \qquad\mbox{and}\qquad 2\alpha q_*\{\log(1+T_1)-\log(1+T_0)\}<1. \end{eqnarray} Then, \begin{eqnarray} |w(t)|\leq \bigg(1+\frac1{1-\alpha (w(T_0)+q_*)\{\log(1+t)-\log (1+T_0)\} }\bigg)(|w(T_0)|+q_*)\langlebel{4.ap} \end{eqnarray} holds, as long as the right hand side of (\ref{4.ap}) is well-defined. \end{prop} For the proof of Proposition 4.4, see the proof of Proposition 3.4 in \cite{2000}. 
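\\ \indent Although we refer to \cite{2000} for the proof, the bound (\ref{4.ap}) is easy to test numerically. The following Python sketch integrates the Riccati-type equation for made-up data $\alpha$, $q$ and $w(T_0)$ (chosen only so that the hypotheses of Proposition 4.4 hold; they are not the quantities $W$, $Q$ of (\ref{4.42})--(\ref{4.44})) and compares $|w(t)|$ with the right hand side of (\ref{4.ap}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp, quad

# Numerical sanity check of Proposition 4.4 with made-up alpha, q(t), w(T0).
alpha, T0, T1 = 0.1, 1.0, 1.0e4
w0 = 0.5
q = lambda t: 0.01 / (1.0 + t) ** 1.25            # integrable forcing
q_star, _ = quad(lambda t: abs(q(t)), T0, T1)

# hypothesis of the proposition
assert 2 * alpha * q_star * (np.log(1 + T1) - np.log(1 + T0)) < 1

rhs = lambda t, w: alpha / (1.0 + t) * w ** 2 + q(t)
sol = solve_ivp(rhs, (T0, T1), [w0], rtol=1e-9, atol=1e-12, dense_output=True)

t_grid = np.geomspace(T0, T1, 200)
w_vals = sol.sol(t_grid)[0]
bound = (1 + 1 / (1 - alpha * (w0 + q_star)
                  * (np.log(1 + t_grid) - np.log(1 + T0)))) * (abs(w0) + q_star)
print("bound (4.ap) respected:", bool(np.all(np.abs(w_vals) <= bound)))
\end{verbatim}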
\\ By (\ref{4.45}), we have $\displaystyle{q_*=\int_{t_0}^{T_B}|Q(s)|\ ds=O^*(\varepsilonepsilon^{\frac{513}{512}})<\infty}$ and \begin{eqnarray} \nonumber &&2\alpha q_*(\log (1+T_B)-\log (1+t_0))\\ \nonumber &=&-2\frac{\varepsilonepsilon\Thetaeta_i(-c_i, \omega)\partial_{\rho}\mathcal{F}^i(\langlembda, \omega)}{c_i^4}q_*(\log(1+T_B)-\log (1+t_0))\\ \nonumber &\leq& K_{2}B\varepsilonepsilon^{\frac1{512}}<1 \end{eqnarray} for $0<\varepsilonepsilon<\varepsilonepsilon_0$, if we take $\varepsilonepsilon_0$ to be $(K_{2}B)^{512}\varepsilonepsilon_0<1$. Hence, it follows from (\ref{4.42}), (\ref{4.43}), (\ref{4.45}), (\ref{4.ap}) and $HB<1$ that \begin{eqnarray} \nonumber &&|W(t)|\\ \nonumber &\leq& \left(1+\frac{1}{\displaystyle{1-\alpha (W(t_0)+q_*)\{\log (1+t)-\log (1+t_0)\}}}\right)(|W(t_0)|+q_*)\\ &\leq& \left(1+\frac{1}{\displaystyle{1-\bigg(\varepsilonepsilon^2H +\alpha q_*\bigg)\{\log (1+t)-\log (1+t_0)\}}}\right)(|W(t_0)|+q_*)\langlebel{4.47}\\ \nonumber &\leq& \left(1+\frac{1}{\displaystyle{1-HB-K_{2}B\varepsilonepsilon^{\frac1{512}}/2}}\right) (|W(t_0)|+q_*)\\ \nonumber &\leq& \left(1+\frac{2}{1-HB}\right) (|W(t_0)|+q_*)\qquad\qquad\quad t_0\leq t<T_B \end{eqnarray} holds for $0<\varepsilonepsilon<\varepsilonepsilon_0$, if we tale $\varepsilonepsilon_0$ to be $\varepsilonepsilon_0\leq \{(1-HB)/(K_{2}B)\}^{512}$. Therefore, by (\ref{1.18}), (\ref{4.a15}), (\ref{4.43}) and (\ref{4.47}), we obtain \begin{eqnarray} \nonumber &&(1+|r-c_i t|)^{\frac{15}{16}}|W(t)|\\ \nonumber &=& O^*\left(\bigg(1+\frac{2}{\displaystyle{1-HB}}\bigg)C_7(1+|r^i(t_0;\langlembda)-c_it_0|)^{\frac{15}{16}} (|W(t_0)|+q_*)\right)\\ \nonumber &=&O^*(\varepsilonepsilon)\qquad\qquad\qquad \mbox{in}\qquad\tilde{\Lambda}_i(T_B). \end{eqnarray} This implies (\ref{4.46}). Moreover, by (\ref{2.1}), (\ref{4.29}) and (\ref{4.46}), we have \begin{eqnarray} [\partial^2 u^i(x, t)]_0=O^*(\varepsilonepsilon)\qquad\qquad\qquad\mbox{in}\qquad\tilde{\Lambda}_i(T_B).\langlebel{4.kk} \end{eqnarray} \indent Now we show (\ref{4.38}). Set $v=\Gamma_{p} u\ \ (p=3,4)$ in (\ref{4.31}), then we have \begin{eqnarray} (\partial_0+\kappa_i\partial_r)((r^i)^{\frac12}\partial_0 \Gamma_{p} u^i)=\frac{(r^i)^{\frac12}}{2}\mathcal{E}_i\Gamma_{p} u+O^*\bigg(\frac{J\varepsilonepsilon}{(1+s)^{\frac54}}\bigg) \qquad\mbox{in}\qquad\tilde{\Lambda}_i(T_B).\langlebel{4.48} \end{eqnarray} By (\ref{1.1}), (\ref{2.1}), (\ref{2.13}), (\ref{2.14}), (\ref{2.15}), (\ref{4.29}), (\ref{4.32}) and (\ref{4.kk}), we have \begin{eqnarray} \nonumber &&\mathcal{E}_i\Gamma_p u\\ \nonumber &=&\sum_{\ell=1}^m\sum_{\alpha, \begin{equation}ta=0}^2\big\{\Gamma_p \big(A_{\ell}^{i,\alpha\begin{equation}ta}(\partial u)\partial_{\alpha}\partial_{\begin{equation}ta}u^{\ell}\big)-A_{\ell}^{i,\alpha\begin{equation}ta}(\partial u)\partial_{\alpha}\partial_{\begin{equation}ta}\Gamma_p u^{\ell}\big\}-\\ \nonumber &&\ \ \ -[\Gamma_p, \Box_i]u^i+\Gamma_p \big(B^i(\partial u)\big)\\ &=&O^*\bigg(|\partial u|_0|\partial^2 u|_0|\partial_0 \Gamma_p u^i|_0+\frac{1+|r^i-c_is|}{1+s}|\partial u^i|_1^2+\frac1{(1+s)^2}|\partial_0 u^i|_1|u^i|_2+\langlebel{4.49}\\ \nonumber &&\qquad +\sum_{j\ne i}(|\partial u^j|_0|\partial u|_1^2+|\partial u|_0|\partial u^j|_1|\partial u|_1)+|\partial u|_0^2|\partial u|_1^2\bigg) \\ \nonumber &=&O^*\bigg(\frac{\varepsilonepsilon^2}{(r^i)^{\frac12}(1+s)}|r^{\frac12}\partial_0\Gamma_p u^i|_0+\frac{J^2\varepsilonepsilon^2}{(1+s)^{\frac{39}{16}}}\bigg)\qquad\quad\mbox{in}\qquad [t_0, t]. 
\end{eqnarray} Hence, by setting \begin{eqnarray} V_1(s)=(r^i(s;\langlembda))^{\frac12}\partial_0\Gamma_p u^i(r^i(s;\langlembda)\omega, s), \end{eqnarray} we have \begin{eqnarray} V_1'(s)=O^*\bigg(\frac{\varepsilonepsilon^2}{1+s}|V_1(s)|+\frac{J\varepsilonepsilon}{(1+s)^{\frac54}} \bigg) \qquad \quad\mbox{in}\qquad [t_0, t],\langlebel{4.50} \end{eqnarray} if we take $\varepsilonepsilon_0$\ to be $J\varepsilonepsilon_0\leq 1$. Thus, the Gronwall inequality implies \begin{eqnarray} \nonumber &&|V_1(t)|\\ &=&O^*\bigg(\bigg\{ |V_1(t_0)|+\int_{t_0}^t \bigg(\frac{J\varepsilonepsilon}{(1+s)^{\frac54}} \bigg) ds\bigg\} \exp\bigg(\int_{t_0}^t\frac{K_{3}\varepsilonepsilon^2}{1+s}ds\bigg)\bigg)\ \ \ \langlebel{4.50a}\\ \nonumber &=&O^*\bigg(\bigg( |V_1(t_0)|+\frac{J\varepsilonepsilon}{(1+t_0)^{\frac14}} \bigg) e^{K_{3}B}\bigg)\qquad\mbox{in}\qquad\tilde{\Lambda}_i(T_B). \end{eqnarray} Hence, by (\ref{4.b1}), (\ref{4.36}) and (\ref{4.50a}), we have \begin{eqnarray} \nonumber &&(1+|r-c_it|)^{\frac{15}{16}}|V_1(t)|\\ \nonumber &=&O^*\bigg(C_7(1+|r^i(t_0, \langlembda)-c_it_0|)^{\frac{15}{16}}\bigg( |V_1(t_0)|+\frac{J\varepsilonepsilon}{(1+t_0)^{\frac14}}\bigg) e^{K_{3}B}\bigg)\\ &=&O^*\bigg(\varepsilonepsilon +\frac{J\varepsilonepsilon}{(1+t_0)^{\frac1{64}}}\bigg)\langlebel{4.51}\\ \nonumber &=&O^*(\varepsilonepsilon)\qquad\qquad\mbox{in}\qquad\tilde{\Lambda}_i(T_B), \end{eqnarray} if we take $\varepsilonepsilon_0$ to be $J^{64}\varepsilonepsilon_0\leq 1$. Therefore, (\ref{4.a12}), (\ref{4.29}) and (\ref{4.51}) imply (\ref{4.38}). \\ \indent Finally, for any integer $h$ so that $2\leq h\leq k$, we show \begin{eqnarray} [\partial u^i (x, t)]_{h}=O^*(\varepsilonepsilon)\qquad\qquad \qquad \mbox{in}\qquad \tilde{\Lambda}_i(T_B),\langlebel{4.52} \end{eqnarray} for $i=1,\cdots,m$, under the assumption \begin{eqnarray} [\partial u (x, t)]_{h-1}=O^*(\varepsilonepsilon)\qquad\qquad \qquad \mbox{in}\qquad \mathbb{R}^2\times [1/\varepsilonepsilon, T_B). 
\langlebel{4.53} \end{eqnarray} Set $v=\Gamma^a u$ with $|a|\leq h$ in (\ref{4.31}), then (\ref{1.1}), (\ref{2.13}), (\ref{2.15}), (\ref{4.a12}), (\ref{4.a11}), (\ref{4.32}), (\ref{4.kk}) and (\ref{4.53}) imply \begin{eqnarray} \nonumber &&(\partial_0+\kappa_i\partial_r)((r^i)^{\frac12}\partial_0 \Gamma^a u^i)\\ \nonumber &=&\frac{(r^i)^{\frac12}}2\mathcal{E}_i \Gamma^a u+O^*\bigg(\frac{1}{1+s}|\partial u^i|_{h+1}+\frac1{(1+s)^{\frac32}}|u^i|_{h+2}+\\ &&\ \ +\frac{1}{(1+s)^{\frac14}}|\partial u^i|_0|\partial u^i|_{h+1}+\frac1{(1+s)^{\frac12}}|u^i|_1|\partial u^i|_{h+1}+\langlebel{4.54b}\\ \nonumber &&\ \ +(r^i)^{\frac12}\sum_{j\ne i}(|\partial u^j|_0|\partial u|_0|\partial u|_{h+1}+|\partial u|_0^2|\partial u^j|_{h+1})+(r^i)^{\frac12}|\partial u|^3_0|\partial u|_{h+1}\bigg)\\ \nonumber &=&\frac{(r^i)^{\frac12}}2\mathcal{E}_i \Gamma^a u+O^*\bigg(\frac{\varepsilonepsilon}{(1+s)^{1+\frac{63}{256}}}\bigg) \qquad\mbox{in}\qquad [t_0, t] \end{eqnarray} and \begin{eqnarray} \nonumber &&\mathcal{E}_i \Gamma^a u\\ \nonumber &=&\sum_{\ell=1}^m\sum_{\alpha,\begin{equation}ta=0}^2\{\Gamma^a (A_{\ell}^{i, \alpha\begin{equation}ta}(\partial u)\partial_{\alpha}\partial_{\begin{equation}ta}u^{\ell})- A_{\ell}^{i,\alpha\begin{equation}ta}(\partial u)\partial_{\alpha}\partial_{\begin{equation}ta}\Gamma^a u^{\ell}\}+\Gamma^a B^i(\partial u)+[\Box_i, \Gamma^a]u^i\\ \nonumber &=&O^*\bigg(|\partial u^i|_0|\partial^2 u^i|_0\sum_{|a|\leq h}|\partial_0 \Gamma^a u^i|+\frac{|r^i-c_is|}{1+s}(|\partial u^i|_h+|\partial u^i|_{h}^2)|\partial u^i|_{h+1}+\\ &&\ \ \ \ \ +\frac1{1+s}(|\partial u^i|_{h+1}+|\partial u^i|_{h+1}^2)|u^i|_{h+2}+|\partial u^i|_{h-1}^3+\langlebel{4.54a}\\ \nonumber &&\ \ \ \ \ +\sum_{j\ne i}(|\partial u^j|_h|\partial u|_h|\partial u|_{h+1}+|\partial u|_h^2|\partial u^j|_{h+1})+|\partial u|^3_h|\partial u|_{h+1}\bigg)\\ \nonumber &=&O^*\bigg(\frac{\varepsilonepsilon^2}{(r^i)^{\frac12}(1+s)}|(r^i)^{\frac12}\partial_0\Gamma^a u^i|+\frac{\varepsilonepsilon^{1+\frac{127}{128}}}{(r^i)^{\frac12}(1+s)^{\frac{127}{256}}}+\\ \nonumber &&\ \ \ \ \ +\frac{\varepsilonepsilon^3}{(r^i)^{\frac12}(1+s)(1+|r^i-c_is|)^{\frac{15}{16}}}\bigg) \qquad\quad\mbox{in}\qquad [t_0, t]. \end{eqnarray} Thus, by setting \begin{eqnarray} V_{h}(s)=\sum_{|a|\leq h}(r^i(s; \langlembda))^{\frac12}\partial_0\Gamma^{a}u^i(r^i(s;\langlembda)\omega, s) \end{eqnarray} and by (\ref{4.a15}), (\ref{4.54b}) and (\ref{4.54a}), we have \begin{eqnarray} \nonumber &&V_h'(s)\\ \nonumber &=&O^*\bigg(\frac{\varepsilonepsilon^2}{1+s}|V_h(s)|+\frac{\varepsilonepsilon}{(1+s)^{1+\frac{63}{256}}}+\frac{\varepsilonepsilon^3}{(1+s)(1+|r^i(t_0; \langlembda)-c_it_0|)^{\frac{15}{16}}}\bigg) \qquad\mbox{in}\quad [t_0, t]. 
\end{eqnarray} The Gronwall inequality implies \begin{eqnarray} \nonumber &&|V_h(t)|\\ \nonumber &=&O^*\bigg(\bigg\{|V_h(t_0)|+\int_{t_0}^t\bigg(\frac{\varepsilon}{(1+s)^{1+\frac{63}{256}}}+\frac{\varepsilon^3}{(1+s)(1+|r^i(t_0;\lambda)-c_it_0|)^{\frac{15}{16}}}\bigg)ds\bigg\}\times\\ \nonumber &&\qquad\qquad\qquad\quad\times \exp\bigg(\int_{t_0}^t \frac{K_{4}\varepsilon^2}{1+s}ds\bigg)\bigg)\\ \nonumber &=&O^*\bigg(\bigg\{|V_h(t_0)|+\frac{\varepsilon}{(1+t_0)^{\frac{63}{256}}}+\frac{B \varepsilon}{(1+|r^i(t_0; \lambda)-c_it_0|)^{\frac{15}{16}}}\bigg\} e^{ K_{4}B}\bigg). \end{eqnarray} Hence, by (\ref{4.a15}), we have \begin{eqnarray} \nonumber &&(1+|r-c_i t|)^{\frac{15}{16}}|V_h(t)|\\ \nonumber &=&O^*\bigg(C_7(1+|r^i(t_0; \lambda)-c_it_0|)^{\frac{15}{16}}\times\\ && \qquad\times\bigg\{|V_h(t_0)|+\frac{\varepsilon}{(1+t_0)^{\frac{63}{256}}}+\frac{B \varepsilon}{(1+|r^i(t_0; \lambda)-c_it_0|)^{\frac{15}{16}}}\bigg\} e^{ K_{4}B}\bigg)\label{4.55}\\ \nonumber &=&O^*\bigg(\varepsilon+\frac{\varepsilon}{(1+t_0)^{\frac{3}{256}}}\bigg)\\ \nonumber &=&O^*(\varepsilon)\qquad\qquad\qquad\mbox{in}\qquad \tilde{\Lambda}_i(T_B). \end{eqnarray} It follows from (\ref{4.b1}) and (\ref{4.55}) that (\ref{4.52}) holds. \begin{thebibliography}{99} \bibitem{ay} \newblock R.\ {A}gemi and K.\ {Y}okoyama, \newblock {The null condition and global existence of solutions to systems of wave equations with different speeds}, \newblock Advances in Nonlinear Partial Differential Equations and Stochastics, Ser. Adv. Math. Appl. Sci., {\bf 48}, World Scientific, River Edge, NJ, (1998), pp. 43-86. \bibitem{hor1} \newblock L.\ {H}\"{o}rmander, \newblock {The lifespan of classical solutions of nonlinear hyperbolic equations}, \newblock Lecture Notes in Math., {\bf 1256} (1987), pp. 214-280. \bibitem{1995} \newblock A.\ {H}oshiga, \newblock {The asymptotic behaviour of the radially symmetric solutions to quasilinear wave equations in two space dimensions}, \newblock Hokkaido Math. Journal, {\bf 24(3)} (1995), pp. 575-615. \bibitem{2000} \newblock A.\ {H}oshiga, \newblock {The lifespan of solutions to quasilinear hyperbolic systems in two-space dimensions}, \newblock Nonlinear Analysis, {\bf 42} (2000), pp. 543-560. \bibitem{2005} \newblock A.\ {H}oshiga, \newblock {Existence and blowing up of solutions to systems of quasilinear wave equations in two space dimensions}, \newblock Advances in Math. Sci. and Appli., {\bf 15} (2005), pp. 69-110. \bibitem{2006} \newblock A.\ {H}oshiga, \newblock {The existence of global solutions to systems of quasilinear wave equations with quadratic nonlinearities in 2-dimensional space}, \newblock Funkcialaj Ekvacioj, {\bf 49} (2006), pp. 357-384. \bibitem{hk2} \newblock A.\ {H}oshiga and H.\ {K}ubo, \newblock {Global small amplitude solutions of nonlinear hyperbolic systems with a critical exponent under the null condition}, \newblock SIAM J. Math. Anal., {\bf 31(3)} (2000), pp. 486-513. \bibitem{hk1} \newblock A.\ {H}oshiga and H.\ {K}ubo, \newblock {Global solvability for systems of nonlinear wave equations with multiple speeds in two space dimensions}, \newblock Diff. and Int. Eqs., {\bf 17} (2004), pp. 593-622. \bibitem{j1} \newblock F.\ {J}ohn, \newblock {Blow-up for quasi-linear wave equations in three space dimensions}, \newblock Comm. Pure Appl. Math., {\bf 34} (1981), pp. 29-51.
\bibitem{kla1} \newblock S.\ {K}lainerman, \newblock {Remarks on the global Sobolev inequalities in the Minkowski space $\mathbb{R}^{n+1}$}, \newblock Comm. Pure Appl. Math., {\bf 40} (1987), pp. 111-117. \bibitem{ky} \newblock K.\ {K}ubota and K.\ {Y}okoyama, \newblock {Global existence of classical solutions to systems of nonlinear wave equations with different speeds of propagation}, \newblock Japanese J. Math., {\bf 27} (2001), pp. 113-202. \bibitem{ma} \newblock A.\ {M}ajda, \newblock {\it Compressible fluid flow and systems of conservation laws}, \newblock Springer Appl. Math. Sci., {\bf 53} (1984). \end{thebibliography} \end{document}
\begin{document} \title{Levinson's theorem for the Schr\"{o}dinger equation in two dimensions} \author{Shi-Hai Dong \thanks{Electronic address:[email protected]}} \address{Institute of High Energy Physics, P.O.Box 918(4), Beijing 100039, The People's Republic of China} \author{Xi-Wen Hou} \address{Institute of High Energy Physics, P.O. Box 918(4), Beijing 100039\\ and Department of Physics, University of Three Gorges, Yichang 443000, The People's Republic of China} \author{Zhong-Qi Ma} \address{China Center for Advanced Science and Technology (World Laboratory), P.O.Box 8730, Beijing 100080\\ and Institute of High Energy Physics, P.O. Box 918(4), Beijing 100039, The People's Republic of China} \maketitle \begin{abstract} Levinson's theorem for the Schr\"{o}dinger equation with a cylindrically symmetric potential in two dimensions is re-established by the Sturm-Liouville theorem. The critical case, where the Schr\"{o}dinger equation has a finite zero-energy solution, is analyzed in detail. It is shown that, in comparison with Levinson's theorem in non-critical case, the half bound state for $P$ wave, in which the wave function for the zero-energy solution does not decay fast enough at infinity to be square integrable, will cause the phase shift of $P$ wave at zero energy to increase an additional $\pi$. \end{abstract} \section{INTRODUCTION} In 1949, an important theorem in quantum mechanics was established by Levinson [1], who set up a relation between the total number $n_{\ell}$ of bound states with angular momentum $\ell$ and the phase shift $\delta_{\ell}(0)$ of the scattering state at zero momentum for the Schr\"{o}dinger equation with a spherically symmetric potential $V(r)$ in three dimensions: $$\delta_{\ell}(0)-\delta_{\ell}(\infty)=\left\{\begin{array}{ll} \left(n_{\ell}+1/2 \right)\pi~~~~~ &{\rm when}~~\ell=0 ~~{\rm and~ a~ half~ bound~ state~ occurs} \\ n_{\ell}\pi &{\rm the~remaining~cases} , \end{array} \right. \eqno (1) $$ \noindent where the potential $V(r)$ satisfies the asymptotic conditions: $$r^{2}|V(r)| dr \longrightarrow 0,~~~~~{\rm at}~~ r \longrightarrow 0, \eqno (2a) $$ $$r^{3}|V(r)| dr \longrightarrow 0,~~~~~{\rm at}~~ r \longrightarrow \infty. \eqno(2b)$$ \noindent The first condition is necessary for the nice behavior of the wave function at the origin, and the second one is necessary for the analytic property of the Jost function, which was used in his proof. The first line in Eq.(1) was first shown by Newton [2] for the case where a half bound state of $S$ wave occurs. A zero-energy solution to the Schr\"{o}dinger equation is called a half bound state if its wave function is finite, but does not decay fast enough at infinity to be square integrable. As is well known, there is degeneracy of states for the magnetic quantum number due to the spherical symmetry. Usually, this degeneracy is not expressed explicitly in the statement of Levinson's theorem. Due to the wide interest in lower-dimensional field theories recently, it may be worthwhile to study Levinson's theorem in two dimensions. The purpose of the present paper is to re-establish the Levinson theorem for the Schr\"{o}dinger equation in two dimensions in terms of the Sturm-Liouville theorem. A lot of papers [2-10] have been devoted to the different proofs and generalizations of Levinson's theorem, for example, to noncentral potentials [2], to nonlocal interactions [2,3], to the relativistic equations [7,10], and to electron-atom scattering [9]. 
Roughly speaking, there are three main methods for the proof of Levinson's theorem. One [1] is based on elaborative analysis of the Jost function. This method requires good behavior of the potential. For example, as pointed out by Newton [11], when the asymptotic condition (2b) is not satisfied, Levinson's theorem is violated. The second one is the Green function method [5], where the total number of the physical states, which is infinite, is proved to be independent of the potential, and the number of the bound states is the difference between the infinite numbers of the scattering states without and with the potential. Since the number of the states in a continuous spectrum is uncountable, a simple model is usually used to discretize the continuous part of the spectrum by requiring the wave functions to be vanishing at a sufficiently large radius. We recommend the third method to prove Levinson's theorem by the Sturm-Liouville theorem [6-8]. For the Sturm-Liouville problem, the fundamental trick is the definition of a phase angle which is monotonic with respect to the energy [12]. This method is very simple, intuitive and easy to generalize. In this proof, it is demonstrated explicitly that as the potential changes, the phase shift at zero momentum jumps by $\pi$ while a scattering state becomes a bound state, or vice versa. Newton's counter-examples [11], where the condition (2b) is violated, can be proved to satisfy the modified Levinson theorem [6]. Recently, Lin [13] established a two-dimensional analog of Levinson's theorem for the Schr\"{o}dinger equation with a cylindrically symmetric potential by the Green function method, and declared that, unlike the case in the three dimensions, the half bound state did not modify Levinson's theorem in two dimensions: $$\eta_{m}(0)-\eta_{m}(\infty)=n_{m}\pi, ~~~~~m=0, 1, 2, \ldots, \eqno (3a) $$ \noindent where $\eta_{m}(0)$ is the limit of the phase shifts at zero momentum for the $m$th partial wave, and $n_{m}$ is the total number of bound states with the angular momentum $m\hbar$. Both $\eta_{m}$ and $n_{m}$ are independent of the sign of the angular momentum $\pm m\hbar$ so that only non-negative $m$ is needed to be discussed. The experimental study [14] of Levinson's theorem in two dimensions has appeared in the literatures. This form of Levinson's theorem for two dimensions [13] conflicts with an early result by Boll\'{e}, Gesztesy, Danneels and Wilk (BGDW) [15] in 1986, who overcame the difficult about the logarithmic singularity of two-dimensional free Green's function at zero energy, and proved with "a surprise" (see the title of [15]) that the half bound state of $P$ wave causes the phase shift $\eta_{1}(0)$ at zero momentum to increase an additional $\pi$, exactly like the zero-energy bound states: $$\eta_{m}(0)-\eta_{m}(\infty)=\left(n_{m}+1 \right)\pi, ~~~~~ {\rm when}~~m=1 ~~{\rm and~a~ half~ bound~ state~ occurs}. \eqno (3b) $$ The critical case where the Schr\"{o}dinger equation has a finite zero-energy solution, is very sensitive and worthy of some careful analysis, especially when two conflicting versions of Levinson's theorem in two dimensions were presented. The Sturm-Liouville theorem provides a powerful tool for this analysis. In the present paper we re-establish the Levinson theorem for the Schr\"{o}dinger equation in two dimensions by the Sturm-Liouville theorem, which coincides with the version by BGDW [15]. 
It seems to us that the problem in the proof by Lin [13] may be whether or not the set of the physical solutions to the Schr\"{o}dinger equation in two dimensions is complete when a half bound state of $P$ wave occurs because the corresponding wave function for the half bound state tends to zero at infinity, although it does not decay fast enough at infinity to be square integrable. It is different for $S$ wave because the wave function of the half bound state of $S$ wave is finite but does not tend to zero at infinity. This paper is organized as follows. We firstly assume that the potential is vanishing beyond a sufficiently large radius $r_{0}$ for simplicity, and leave the discussion of the general potentials for the last section. In Sec.II we choose the logarithmic derivative of the radial wave function of the Schr\"{o}dinger equation as the "phase angle" [12], and prove by the Sturm-Liouville theorem that it is monotonic with respect to the energy. In terms of this monotonic property, in Sec.III the number of the bound states is proved to be related with the the logarithmic derivative of zero energy at $r_{0}$ as the potential changes. In Sec.IV we further prove that the the logarithmic derivative of zero energy at $r_{0}$ also determines the limit of the phase shifts at zero momentum, so that Levinson's theorem is proved. The critical case, where a zero-energy solution occurs, is analyzed carefully there. The problem that the potential has a tail at infinity will be discussed in Sec. V. \section{NOTATIONS AND THE STURM-LIOUVILLE THEOREM} Consider the Schr\"{o}dinger equation with a potential $V(r)$ that depends only on the distance $r$ from the origin $$H\psi =-\displaystyle {\hbar^{2} \over 2\mu }\left( \displaystyle {1 \over r} \displaystyle {\partial \over \partial r} r \displaystyle {\partial \over \partial r} + \displaystyle {1 \over r^{2}} \displaystyle {\partial^{2} \over \partial \varphi^{2} } \right)\psi +V(r) \psi =E \psi .$$ \noindent where $\mu$ denotes the mass of the particle. For simplicity, we firstly discuss the case with a cutoff potential: $$V(r)=0,~~~~~{\rm when}~~r\geq r_{0}, \eqno (4) $$ \noindent where $r_{0}$ is a sufficiently large radius. The general case where the potential $V(r)$ has a tail at infinity will be discussed in Sec.V. Introduce a parameter $\lambda$ for the potential $V(r)$: $$V(r,\lambda)=\lambda V(r). \eqno (5) $$ \noindent As $\lambda$ increases from zero to one, the potential $V(r,\lambda)$ changes from zero to the given potential $V(r)$. Owing to the symmetry of the potential, we have $$\psi(r,\varphi,\lambda)=r^{-1/2} R_{m}(r,\lambda) e^{ \pm im \varphi}, ~~~~~m=0, 1, 2, \ldots, \eqno (6) $$ \noindent where the radial wave function $R_{m}(r,\lambda)$ satisfies the radial equation: $$\displaystyle {\partial^{2} R_{m}(r,\lambda) \over \partial r^{2} } +\left\{\displaystyle {2\mu \over \hbar^{2}} \left(E-V(r,\lambda)\right) -\displaystyle {m^{2}-1/4 \over r^{2}} \right\} R_{m}(r,\lambda) =0. \eqno (7) $$ Now, we are going to solve Eq.(7) in two regions and match two solutions at $r_{0}$. Since the Schr\"{o}dinger equation is linear, the wave function $\psi$ can be multiplied by a constant factor. 
Removing the effect of the factor, we only need one matching condition at $r_{0}$ for the logarithmic derivative of the radial function: $$A_{m}(E,\lambda)\equiv \left\{ \displaystyle {1 \over R_{m}(r,\lambda) } \displaystyle {\partial R_{m}(r,\lambda) \over \partial r} \right\}_{r=r_{0}-} =\left\{ \displaystyle {1 \over R_{m}(r,\lambda) } \displaystyle {\partial R_{m}(r,\lambda) \over \partial r} \right\}_{r=r_{0}+} . \eqno (8) $$ Due to the condition (2a), only one solution is convergent at the origin. For example, for the free particle ($\lambda=0$), the solution to Eq.(7) at the region $0\leq r \leq r_{0}$ is proportional to the Bessel function $J_{m}(x)$: $$R_{m}(r,0)=\left\{\begin{array}{ll} \sqrt{\displaystyle {\pi kr \over 2 }}J_{m}(kr),~~~~~ &{\rm when}~~E>0~~{\rm and}~~k=\left(2\mu E \right)^{1/2}/ \hbar \\ e^{-im\pi/2}\sqrt{\displaystyle {\pi \kappa r \over 2 }}J_{m}(i\kappa r) ,~~~~~&{\rm when}~~E\leq 0~~{\rm and}~~\kappa=\left(-2\mu E \right)^{1/2} / \hbar , \end{array} \right. \eqno (9) $$ \noindent The solution $R_{m}(r,0)$ given in Eq.(9) is a real function. A constant factor on the radial function $R_{m}(r,0)$ is not important. In the region $r_{0}\leq r < \infty$, we have $V(r)=0$. For $E>0$, there are two oscillatory solutions to Eq.(7). Their combination can always satisfy the matching condition (8), so that there is a continuous spectrum for $E>0$. $$R_{m}(r,\lambda)=\sqrt{\displaystyle {\pi kr \over 2 }} \left\{ \cos \eta_{m}(k,\lambda)J_{m}(kr)-\sin \eta_{m}(k,\lambda) N_{m}(kr) \right\}~~~~~~~~~~~~~~~~~$$ $$~~~~~~~~ \sim \cos \left(kr-\displaystyle{m\pi \over 2}- \displaystyle {\pi \over 4} +\eta_{m}(k,\lambda) \right),~~~~~~~~~~~~~ {\rm when}~~r\longrightarrow \infty . \eqno (10) $$ \noindent where $N_{m}(kr)$ is the Neumann function. From the matching condition (8) we have: $$\tan \eta_{m}(k,\lambda)=\displaystyle {J_{m}(kr_{0}) \over N_{m}(kr_{0})} ~\cdot ~\displaystyle {A_{m}(E,\lambda)-kJ'_{m}(kr_{0})/ J_{m}(kr_{0})-1/(2r_{0}) \over A_{m}(E,\lambda)-kN'_{m}(kr_{0})/ N_{m}(kr_{0})-1/(2r_{0}) } . \eqno (11) $$ $$\eta_{m}(k)\equiv \eta_{m}(k,1). \eqno (12) $$ \noindent where the prime denotes the derivative of the Bessel function, the Neumann function, and later the Hankel function with respect to their argument. The phase shift $\eta_{m}(k,\lambda)$ is determined from Eq.(11) up to a multiple of $\pi$ due to the period of the tangent function. Levinson determined the phase shift $\eta_{m}(k)$ with respect to the phase shift $\eta_{m}(\infty)$ at infinite momentum. For any finite potential, the phase shift $\eta_{m}(\infty)$ will not change and is always equal to the phase shift of zero potential. Therefore, Levinson's definition for the phase shift is equivalent to the convention that the phase shift $\eta_{m}(k)$ is determined with respect to the phase shift $\eta_{m}(k,0)$ for the free particle, where $\eta_{m}(k,0)$ is defined to be zero: $$\eta_{m}(k,0)=0,~~~~~{\rm where}~~V(r,0)=0. \eqno (13) $$ \noindent We prefer to use this convention where the phase shift $\eta_{m}(k,\lambda)$ is determined completely as $\lambda$ increases from zero to one. It is the reason why we introduce the parameter $\lambda$. Since there is only one convergent solution at infinity for $E\leq 0$ the matching condition (8) is not always satisfied. $$R_{m}(r,\lambda)=e^{i(m+1)\pi/2}\sqrt{\displaystyle {\pi \kappa r \over 2 }} H^{(1)}_{m}(i\kappa r) \sim e^{-\kappa r} ,~~~~~ {\rm when}~~r\longrightarrow \infty . 
\eqno (14) $$ \noindent where $H^{(1)}_{m}(x)$ is the Hankel function of the first kind. When the condition (8) is satisfied, a bound state appears at this energy. It means that there is a discrete spectrum for $E \leq 0$. Now, we turn to the Sturm-Liouville theorem. Denote by $\overline{R}_{m}(r,\lambda)$ the solution to Eq.(7) for the energy $\overline{E}$ $$\displaystyle {\partial^{2} \over \partial r^{2}}\overline{R}_{m}(r,\lambda) +\left\{\displaystyle {2\mu \over \hbar^{2}} \left(\overline{E} -V(r,\lambda)\right)-\displaystyle {m^{2}-1/4 \over r^{2}} \right\} \overline{R}_{m}(r,\lambda)=0. \eqno (15) $$ Multiplying Eq.(7) and Eq.(15) by $\overline{R}_{m}(r,\lambda)$ and $R_{m}(r,\lambda)$, respectively, and calculating their difference, we have $$\displaystyle {\partial \over \partial r} \left\{ R_{m}(r,\lambda) \displaystyle {\partial \overline{R}_{m}(r,\lambda) \over \partial r} -\overline{R}_{m}(r,\lambda) \displaystyle {\partial R_{m}(r,\lambda) \over \partial r} \right\} =-\displaystyle {2\mu \over \hbar^{2}}\left(\overline{E}-E\right) \overline{R}_{m}(r,\lambda)R_{m}(r,\lambda). \eqno (16) $$ \noindent According to the boundary condition, both solutions $R_{m}(r,\lambda)$ and $\overline{R}_{m}(r,\lambda)$ should be vanishing at the origin. Integrating (16) in the region from $0$ to $r_{0}$, we have $$\displaystyle {1 \over \overline{E}-E} \left\{ R_{m}(r,\lambda) \displaystyle {\partial \overline{R}_{m}(r,\lambda) \over \partial r} -\overline{R}_{m}(r) \displaystyle {\partial R_{m}(r,\lambda) \over \partial r} \right\}_{r=r_{0}-} =-\displaystyle {2\mu \over \hbar^{2}}\int_{0}^{r_{0}} \overline{R}_{m}(r,\lambda)R_{m}(r,\lambda)dr. $$ \noindent Taking the limit, we obtain $$\displaystyle {\partial A_{m}(E,\lambda) \over \partial E} =\displaystyle {\partial \over \partial E} \left( \displaystyle {1 \over R_{m}(r,\lambda) } \displaystyle {\partial R_{m}(r,\lambda) \over \partial r} \right)_{r=r_{0}-} =-\displaystyle {2\mu \over \hbar^{2}} R_{m}(r_{0},\lambda)^{-2} \int_{0}^{r_{0}}R_{m}(r,\lambda)^{2}dr<0 . \eqno (17) $$ \noindent Similarly, from the boundary condition that when $E\leq 0$ the radial function $R_{m}(r,\lambda)$ tends to zero at infinity, we have $$\displaystyle {\partial \over \partial E} \left( \displaystyle {1 \over R_{m}(r,\lambda) }\displaystyle {\partial R_{m}(r,\lambda) \over \partial r} \right)_{r=r_{0}+} =\displaystyle {2\mu \over \hbar^{2}} R_{m}(r_{0},\lambda)^{-2} \int_{r_{0}}^{\infty}R_{m}(r,\lambda)^{2}dr>0 . \eqno (18) $$ \noindent Therefore, when $E\leq 0$, both sides of Eq.(8) are monotonic with respect to the energy $E$: As energy increases, the logarithmic derivative of the radial function at $r_{0}-$ decreases monotonically, but that at $r_{0}+$ increases monotonically. This is an essence for the Sturm-Liouville theorem. \section{THE NUMBER OF BOUND STATES} In this section we will relate the number of bound states with the logarithmic derivative $A_{m}(0,\lambda)$ of the radial function at $r_{0}-$ for zero energy when the potential changes, in terms of the monotonic property of the logarithmic derivative of the radial function with respect to the energy $E$. From Eq.(14) we have: $$ \left( \displaystyle {1 \over R_{m}(r,\lambda) } \displaystyle {\partial R_{m}(r,\lambda) \over \partial r} \right)_{r=r_{0}+} =\displaystyle {i\kappa H^{(1)}_{m}(i\kappa r_{0})' \over H^{(1)}_{m}(i\kappa r_{0}) }-\displaystyle {1 \over 2r_{0}} =\left\{\begin{array}{ll} (-m+1/2)/r_{0} &{\rm when}~~E\sim 0 \\ -\kappa \sim -\infty &{\rm when}~~E\longrightarrow -\infty. 
\end{array} \right. \eqno (19) $$ \noindent The logarithmic derivative given in Eq.(19) does not depend on $\lambda$. On the other hand, when $\lambda=0$ we obtain from Eq.(10): $$A_{m}(E,0)=\left(\displaystyle {1 \over R_{m}(r,0)} \displaystyle {\partial R_{m}(r,0) \over \partial r } \right)_{r=r_{0}-} =\displaystyle {i\kappa J'_{m}(i\kappa r_{0}) \over J_{m}(i\kappa r_{0}) }-\displaystyle {1 \over 2r_{0}} =\left\{\begin{array}{ll} (m+1/2)/r_{0} &{\rm when}~~E\sim 0 \\ \kappa \sim \infty &{\rm when}~~E\longrightarrow - \infty. \end{array} \right. \eqno (20) $$ \noindent It is evident from Eqs.(19) and (20) that as the energy increases from $-\infty$ to $0$, there is no overlap between two variant ranges of two logarithmic derivatives such that there is no bound state when $\lambda=0$ except for $S$ wave where there is a half bound state at $E=0$. The half bound state for $S$ wave will be discussed in Sec.IV. If $A_{m}(0,\lambda)$ decreases across the value $(-m+1/2)/r_{0}$ as $\lambda$ changes, an overlap between the variant ranges of two logarithmic derivatives of two sides of $r_{0}$ appears. Since the logarithmic derivative of the radial function at $r_{0}-$ decreases monotonically as the energy increases, and that at $r_{0}+$ increases monotonically, the overlap means that there must be one and only one energy where the matching condition (8) is satisfied, namely a bound state appears. From the viewpoint of node theory, when $A_{m}(0,\lambda)$ decreases across the value $(-m+1/2)/r_{0}$, a node for the zero-energy solution to the Schr\"{o}dinger equation comes inwards from the infinity, namely a scattering state changes to a bound state. As $\lambda$ changes, $A_{m}(0,\lambda)$ may decreases to $-\infty$, jumps to $\infty$, and then decreases again across the value $(-m+1/2)/r_{0}$, so that another overlap occurs and another bound state appears. Note that when the zero point in the zero-energy solution $R_{m}(r,\lambda)$ comes to $r_{0}$, $A_{m}(0,\lambda)$ goes to infinity. It is not a singularity. Each time $A_{m}(0,\lambda)$ decreases across the value $(-m+1/2)/r_{0}$, a new overlap between the variant ranges of two logarithmic derivatives appears such that a scattering state changes to a bound state. In the same time, a new node comes inwards from infinity in the zero-energy solution to the Schr\"{o}dinger equation. Conversely, each time $A_{m}(0,\lambda)$ increases across the value $(-m+1/2)/r_{0}$, an overlap between those two variant ranges disappears such that a bound state changes back to a scattering state, and simultaneously, a node goes outwards and disappears in the zero-energy solution. The number of bound states $n_{m}$ is equal to the times that $A_{m}(0,\lambda)$ decreases across the value $(-m+1/2)/r_{0}$ as $\lambda$ increases from zero to one, subtracted by the times that $A_{m}(0,\lambda)$ increases across the value $(-m+1/2)/r_{0}$. It is also equal to the number of nodes in the zero-energy solution. In the next section we will show that this number is nothing but the phase shift $\eta_{m}(0)$ at zero momentum divided by $\pi$. \section{LEVINSON'S THEOREM} In order to determine the phase shift $\eta_{m}(k)$ completely, we have introduced the convention for the phase shift $\eta_{m}(k,\lambda)$, where $k>0$, which is changed continuously as $\lambda$ increases from zero to one and $\eta_{m}(k,0)$ is defined to be vanishing. The phase shift $\eta_{m}(k,\lambda)$ is calculated by Eq.(11). 
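Before carrying out the small-$k$ analysis, it may help to see the non-critical statement (24a) numerically. The following Python sketch uses an assumed cutoff square-well potential $V(r)=-V_{0}$ for $r<r_{0}$ (in units $2\mu/\hbar^{2}=1$; the depth $V_{0}$, the radius $r_{0}$ and the partial wave $m$ are arbitrary illustrative choices). Inside $r_{0}$ the regular solution of Eq.(7) is then $\sqrt{r}\,J_{m}(k'r)$ with $k'^{2}=k^{2}+V_{0}$, so the logarithmic derivative at $r_{0}-$ is available in closed form. Bound states are counted from the matching of this interior solution with the decaying exterior solution (14), and the phase shift is obtained by matching (10) at $r_{0}$, in the same way as Eq.(11).
\begin{verbatim}
import numpy as np
from scipy.special import jv, jvp, yv, yvp, kv, kvp

m, V0, r0 = 1, 20.0, 1.0    # assumed partial wave, well depth, cutoff radius

# bound states (E = -kappa^2 < 0): match sqrt(r)J_m(k''r) to sqrt(r)K_m(kappa r)
kappas = np.linspace(1e-3, 0.999, 4000) * np.sqrt(V0)
kpp = np.sqrt(V0 - kappas ** 2)
g = (kpp * jvp(m, kpp * r0) * kv(m, kappas * r0)
     - kappas * kvp(m, kappas * r0) * jv(m, kpp * r0))
n_m = int(np.sum(np.sign(g[1:]) != np.sign(g[:-1])))   # matching energies

# scattering states (E = k^2 > 0): tan(eta) from the interior/exterior matching
ks = np.geomspace(1e-3, 60.0, 6000)
kp = np.sqrt(ks ** 2 + V0)
num = kp * jvp(m, kp * r0) * jv(m, ks * r0) - ks * jv(m, kp * r0) * jvp(m, ks * r0)
den = kp * jvp(m, kp * r0) * yv(m, ks * r0) - ks * jv(m, kp * r0) * yvp(m, ks * r0)
raw = np.arctan2(num, den)                              # eta modulo pi
eta = np.empty_like(raw)
eta[-1] = raw[-1] - np.round(raw[-1] / np.pi) * np.pi   # eta ~ 0 at the largest k
for j in range(len(ks) - 2, -1, -1):                    # unwrap from large k down to 0
    eta[j] = raw[j] + np.round((eta[j + 1] - raw[j]) / np.pi) * np.pi

print(f"n_{m} = {n_m},   eta_{m}(0)/pi ~ {eta[0] / np.pi:.3f}")
\end{verbatim}
For the assumed values one should find a single $P$-wave bound state and $\eta_{1}(0)$ close to $\pi$, in agreement with (24a); the critical configurations analyzed below are of course not probed by such a generic choice.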
It is easy to see from Eq.(11) that the phase shift $\eta_{m}(k,\lambda)$ increases monotonically as the logarithmic derivative $A_{m}(E,\lambda)$ decreases: $$\left. \displaystyle {\partial \eta_{m}(k,\lambda) \over \partial A_{m}(E,\lambda)}\right|_{k} =\displaystyle {-8r_{0}\cos^{2}\eta_{m}(k,\lambda) \over \pi \left\{2r_{0}A_{m}(E,\lambda)N_{m}(kr_{0})-2kr_{0}N'_{m}(kr_{0})- N_{m}(kr_{0})\right\}^{2} } \leq 0, \eqno (21) $$ \noindent where $k=\left(2\mu E\right)^{1/2}/\hbar$. The phase shift $\eta_{m}(0,\lambda)$ is the limit of the phase shift $\eta_{m}(k,\lambda)$ as $k$ tends to zero. Therefore, what we are interested in is the phase shift $\eta_{m}(k,\lambda)$ at a sufficiently small momentum $k$, $k\ll 1/r_{0}$. For small momentum we obtain from Eq.(11) $$\tan \eta_{m}(k,\lambda)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~$$ $$=\left\{\begin{array}{ll} \displaystyle {-\pi (kr_{0})^{2m} \over 2^{2m}m!(m-1)!} \cdot \displaystyle {A_{m}(0,\lambda)-(m+1/2)/r_{0} \over A_{m}(0,\lambda)-c^{2}k^{2}-\displaystyle {-m+1/2 \over r_{0}}\left(1- \displaystyle {(kr_{0})^{2} \over (m-1)(2m-1) }\right) } &{\rm when}~~m \geq 2 \\[1mm] \displaystyle {-\pi (kr_{0})^{2} \over 4} ~\cdot~\displaystyle {A_{m}(0,\lambda)-3/(2r_{0}) \over A_{m}(0,\lambda)-c^{2}k^{2}+\displaystyle {1 \over 2r_{0}} \left(1+ 2(kr_{0})^{2}\log(kr_{0})\right) } &{\rm when}~~m = 1 \\[1mm] \displaystyle {\pi \over 2\log(kr_{0})} ~\cdot~\displaystyle {A_{m}(0,\lambda)-c^{2}k^{2} -\displaystyle {1 \over 2r_{0}} \left(1-(kr_{0})^{2} \right) \over A_{m}(0,\lambda)-c^{2}k^{2}-\displaystyle {1 \over 2r_{0}}\left(1+ \displaystyle {2 \over \log (kr_{0})} \right) } &{\rm when}~~m =0 ,\end{array}\right. \eqno (22) $$ \noindent where the expansion for $A_{m}(E,\lambda)$, calculated from (17), is used: $$A_{m}(E,\lambda)=A_{m}(0,\lambda)-c^{2}k^{2}+\ldots,~~~~~c^{2}>0,~~~ ~~E=\displaystyle {\hbar^{2} k^{2} \over 2\mu}. \eqno (23) $$ \noindent In addition to the leading terms, we include in Eq.(22) some next-to-leading terms, which are useful only for the critical case where the leading terms cancel each other. First of all, it can be seen from Eq.(22) that $\tan \eta_{m}(k,\lambda)$ tends to zero as $k$ goes to zero, namely, $\eta_{m}(0,\lambda)$ is always equal to a multiple of $\pi$. In other words, if the phase shift $\eta_{m}(k,\lambda)$ for a sufficiently small $k$ is expressed as a positive or negative acute angle plus $n\pi$, its limit $\eta_{m}(0,\lambda)$ is equal to $n\pi$, where $n$ is an integer. This means that $\eta_{m}(0,\lambda)$ changes discontinuously. By the way, in three dimensions the tangent of the phase shift may go to infinity in the critical case of the $S$ wave. Secondly, if $A_{m}(E,\lambda)$ decreases as $\lambda$ increases, $\eta_{m}(k,\lambda)$ increases monotonically. As $A_{m}(E,\lambda)$ decreases, each time $\tan \eta_{m}(k,\lambda)$ for a sufficiently small $k$ changes sign from positive to negative (through a jump from positive infinity to negative infinity), $\eta_{m}(0,\lambda)$ jumps by $\pi$. However, each time $\tan \eta_{m}(k,\lambda)$ changes sign from negative to positive, $\eta_{m}(0,\lambda)$ remains invariant. Conversely, if $A_{m}(E,\lambda)$ increases as $\lambda$ increases, $\eta_{m}(k,\lambda)$ decreases monotonically.
As $A_{m}(E,\lambda)$ increases, each time $\tan \eta_{m}(k,\lambda)$ changes sign from negative to positive, $\eta_{m}(0,\lambda)$ jumps by $-\pi$, and each time $\tan \eta_{m}(k,\lambda)$ changes sign from positive to negative, $\eta_{m}(0,\lambda)$ remains invariant. When $V(r,\lambda)$ changes continuously from zero to the given potential $V(r)$, each time $A_{m}(0,\lambda)$ decreases from just above the value $(-m+1/2)/r_{0}$ to below it, the denominator in Eq.(22) changes sign from positive to negative while the remaining factor remains positive, so that the phase shift at zero momentum $\eta_{m}(0,\lambda)$ jumps by $\pi$. Conversely, each time $A_{m}(0,\lambda)$ increases across that value, the phase shift at zero momentum $\eta_{m}(0,\lambda)$ jumps by $-\pi$. Note that when $A_{m}(0,\lambda)$ decreases from just above the value $(m+1/2)/r_{0}$ to below it, the numerator in Eq.(22) changes sign from positive to negative while the remaining factor remains negative, so that the phase shift at zero momentum $\eta_{m}(0,\lambda)$ does not jump. Conversely, when $A_{m}(0,\lambda)$ increases across the value $(m+1/2)/r_{0}$, the phase shift at zero momentum $\eta_{m}(0,\lambda)$ also remains invariant. This is the reason why we did not include the next-to-leading terms in the numerator of Eq.(22) except for $m=0$. Therefore, the phase shift $\eta_{m}(0)/\pi$ is just equal to the number of times $A_{m}(0,\lambda)$ decreases across the value $(-m+1/2)/r_{0}$ as $\lambda$ increases from zero to one, minus the number of times $A_{m}(0,\lambda)$ increases across that value. In the previous section we proved that this difference is nothing but the number of bound states $n_{m}$; namely, we have proved the Levinson theorem for the Schr\"{o}dinger equation in two dimensions for the non-critical cases: $$\eta_{m}(0)=n_{m}\pi. \eqno (24a) $$ We should pay some attention to the case $m=0$. When $A_{m}(0)$ decreases across the value $1/(2r_{0})$, both the numerator and the denominator in Eq.(22) change sign, but not simultaneously, because the next-to-leading terms in the numerator and denominator of Eq.(22) are different. It is easy to see that the numerator changes sign first and then the denominator changes sign, namely, $\tan \eta_{m}(k)$ at small $k$ changes first from negative to positive and then to negative again, so that $\eta_{m}(0)$ jumps by $\pi$. Similarly, when $A_{m}(0)$ increases across the value $1/(2r_{0})$, $\eta_{m}(0)$ jumps by $-\pi$. For $\lambda=0$ ($V(r,0)=0$) and $m=0$, the numerator in Eq.(22) is equal to zero, the denominator is positive, and the phase shift $\eta_{0}(0)$ is defined to be zero. If $A_{0}(E)$ decreases as $\lambda$ increases from zero, the numerator first becomes negative and then the denominator changes sign from positive to negative, so that the phase shift $\eta_{0}(0,\lambda)$ jumps by $\pi$ and simultaneously a bound state appears. If $A_{0}(E)$ increases as $\lambda$ increases from zero, the numerator becomes positive while the remaining factor remains negative, so that the phase shift $\eta_{0}(0,\lambda)$ remains zero and no bound state appears. Now we turn to the critical case where the logarithmic derivative $A_{m}(0,1)$ ($\lambda=1$) is equal to the value $(-m+1/2)/r_{0}$. In the critical case, the following zero-energy solution in the region $r_{0}\leq r < \infty$ matches this $A_{m}(0,1)$ at $r_{0}$: $$R_{m}(r)=r^{-m+1/2}.
\eqno (25) $$ \noindent It is a bound state when $m\geq 2$, but is called a half bound state when $m=1$ or $0$. A half bound state is not a bound state, because its wave function is finite but not square integrable. We are going to discuss the critical case where $A_{m}(0,\lambda)$ decreases (or increases) and reaches, but does not cross, the value $(-m+1/2)/r_{0}$ as $V(r,\lambda)$ changes from zero to the given potential $V(r)$. For definiteness, we discuss the case where $A_{m}(0,\lambda)$ decreases and reaches the value $(-m+1/2)/r_{0}$. In this case a new bound state with zero energy appears for $m \geq 2$, but not for $m=1$ or $0$. We should check whether or not the phase shift $\eta_{m}(0)$ increases by an additional $\pi$. It is easy to see from the next-to-leading terms in the denominator of Eq.(22) that the denominator for $m \geq 2$ has changed sign from positive to negative as $A_{m}(0,\lambda)$ decreases and reaches the value $(-m+1/2)/r_{0}$, namely, the phase shift $\eta_{m}(0)$ jumps by $\pi$ and simultaneously a new bound state of zero energy appears. For $m=0$ the next-to-leading term with $\log (kr_{0})$ in the denominator of Eq.(22) is positive and larger than the term $-c^{2}k^{2}$, so that the denominator does not change sign, namely, the phase shift $\eta_{m}(0)$ does not jump. This is consistent with the fact that no new bound state appears. For $m=1$ the next-to-leading term in the denominator of Eq.(22) is negative, so that the denominator does change sign as $A_{m}(0,\lambda)$ decreases and reaches the value $-1/(2r_{0})$, namely, the phase shift $\eta_{m}(0)$ jumps by $\pi$. However, in this case no new bound state appears simultaneously. The discussion for the cases where $A_{m}(0,\lambda)$ increases and reaches the value $(-m+1/2)/r_{0}$ is similar. Therefore, Levinson's theorem (24a) holds for the critical cases except for $m=1$. In the latter case, Levinson's theorem for the Schr\"{o}dinger equation in two dimensions becomes: $$\eta_{m}(0)=\left(n_{m}+1 \right)\pi, ~~~~~ {\rm when}~~m=1 ~~{\rm and~a~ half~ bound~ state~ occurs}. \eqno (24b) $$ \noindent Equation (24) is the same as Eq.(3) because in our convention $\eta_{m}(\infty)=0$. \section{DISCUSSION} Now we discuss the general case where the potential $V(r)$ has a tail at $r\geq r_{0}$. Let $r_{0}$ be so large that only the leading term of $V(r)$ matters in the region $r\geq r_{0}$: $$V(r) \sim \displaystyle {\hbar^{2} \over 2\mu} br^{-n},~~~~{\rm when}~~r \longrightarrow \infty, \eqno (26) $$ \noindent where $b$ is a nonvanishing constant and $n$ is a positive constant, not necessarily an integer. From the condition (2b), $n$ should be larger than 3. Substituting Eq.(26) into Eq.(7) and changing the variable $r$ to $\xi$, $$\xi=\left\{\begin{array}{ll} kr=r\sqrt{2\mu E}/\hbar &{\rm when}~~E>0 \\ \kappa r=r\sqrt{-2\mu E}/\hbar &{\rm when}~~E\leq 0, \end{array} \right. \eqno (27) $$ \noindent we get the radial equation in the region $r_{0} \leq r < \infty$ $$\displaystyle {d^{2} R_{m}(\xi,\lambda) \over d\xi^{2}}+ \left\{1 -\displaystyle {b \over \xi^{n}}k^{n-2} -\displaystyle {m^{2}-1/4 \over \xi^{2}} \right\} R_{m}(\xi,\lambda)=0,~~~~~{\rm when}~~E>0,$$ $$\displaystyle {d^{2} R_{m}(\xi,\lambda) \over d\xi^{2}}+ \left\{-1 -\displaystyle {b \over \xi^{n}}\kappa^{n-2} -\displaystyle {m^{2}-1/4 \over \xi^{2}} \right\} R_{m}(\xi,\lambda)=0,~~~~~{\rm when}~~E\leq 0, \eqno (28) $$ \noindent where $R_{m}(\xi,\lambda)$ depends on $\lambda$ through the matching condition (8).
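\noindent For the reader's convenience, we spell out the elementary rescaling behind the factor $k^{n-2}$ in Eq.(28); this is just the substitution (27) applied to the tail (26) and involves no additional assumption. With $\xi=kr$ and $(2\mu/\hbar^{2})V(r)\sim br^{-n}$, $$\displaystyle {2\mu \over \hbar^{2}}\, \displaystyle {V(\xi /k) \over k^{2}} = \displaystyle {b \over \xi^{n}}\, k^{n-2} , $$ \noindent so that, at fixed $\xi$, the tail enters Eq.(28) only through the factor $k^{n-2}$ (or $\kappa^{n-2}$), which vanishes as $k\rightarrow 0$ precisely when $n>2$.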
As far as Levinson's theorem is concerned, we are only interested in the solutions with sufficiently small $k$ and $\kappa$. If $n\geq 3$, the term with the factor $k^{n-2}$ (or $\kappa^{n-2}$) is, in comparison with the centrifugal term, too small to affect the phase shift at a sufficiently small $k$ or the variant range of the logarithmic derivative $(dR_{m}(r)/dr)/R_{m}(r)$ at $r_{0}+$. Therefore, the proof given in the previous sections applies to such potentials with a tail, so that Levinson's theorem (24) holds. When $n=2$, we define $$\nu^{2}=m^{2}+b. \eqno (29) $$ \noindent The radial equation (7) becomes $$\displaystyle {\partial^{2} R_{m}(r,\lambda) \over \partial r^{2}}+ \left\{\displaystyle {2\mu E \over \hbar^{2}} -\displaystyle {\nu^{2}-1/4 \over r^{2}} \right\} R_{m}(r,\lambda)=0,~~~~~r\geq r_{0}. \eqno (30) $$ \noindent If $\nu^{2}<0$, there are an infinite number of bound states. We will not discuss this case, nor the case $\nu=0$, here. When $\nu^{2}> 0$, we take $\nu > 0$. Some formulas given in the previous sections are changed by replacing the angular quantum number $m$ with $\nu$. Equation (19) becomes $$ \left( \displaystyle {1 \over R_{m}(r,\lambda) } \displaystyle {\partial R_{m}(r,\lambda) \over \partial r} \right)_{r=r_{0}+} =\displaystyle {i\kappa H^{(1)}_{\nu}(i\kappa r_{0})' \over H^{(1)}_{\nu}(i\kappa r_{0}) }-\displaystyle {1 \over 2r_{0}} =\left\{\begin{array}{ll} (-\nu+1/2)/r_{0} &{\rm when}~~E\sim 0 \\ -\kappa \sim -\infty &{\rm when}~~E\longrightarrow -\infty. \end{array} \right. \eqno (31) $$ \noindent The scattering solution (10) in the region $r_{0}\leq r < \infty$ becomes $$R_{m}(r,\lambda)=\sqrt{\displaystyle {\pi kr \over 2 }} \left\{ \cos \delta_{\nu}(k,\lambda)J_{\nu}(kr) -\sin \delta_{\nu}(k,\lambda) N_{\nu}(kr) \right\}~~~~~~~~~~~~~~~~~$$ $$~~~~~~~~ \sim \cos \left(kr-\displaystyle{\nu\pi \over 2}- \displaystyle {\pi \over 4} +\delta_{\nu}(k,\lambda) \right) ,~~~~~~~~~~~~~ {\rm when}~~r\longrightarrow \infty . \eqno (32) $$ \noindent Thus, the phase shift $\eta_{m}(k)$ can be calculated from $\delta_{\nu}(k,1)$: $$\eta_{m}(k)=\delta_{\nu}(k,1)+(m-\nu)\pi/2. \eqno (33) $$ \noindent Here $\delta_{\nu}(k,\lambda)$ satisfies $$\tan \delta_{\nu}(k,\lambda)=\displaystyle {J_{\nu}(kr_{0}) \over N_{\nu}(kr_{0})}~\cdot~ \displaystyle {A_{m}(E,\lambda)-kJ'_{\nu}(kr_{0})/ J_{\nu}(kr_{0})-1/(2r_{0}) \over A_{m}(E,\lambda)-kN'_{\nu}(kr_{0})/ N_{\nu}(kr_{0})-1/(2r_{0}) } , \eqno (34) $$ \noindent and it increases monotonically as the logarithmic derivative $A_{m}(E,\lambda)$ decreases: $$\left. \displaystyle {\partial \delta_{\nu}(k,\lambda) \over \partial A_{m}(E,\lambda)}\right|_{k} =\displaystyle {-8r_{0}\cos^{2}\delta_{\nu}(k,\lambda) \over \pi \left\{2r_{0}A_{m}(E,\lambda)N_{\nu}(kr_{0})-2kr_{0}N'_{\nu}(kr_{0})- N_{\nu}(kr_{0})\right\}^{2} } \leq 0.
\eqno (35) $$ \noindent For a sufficiently small $k$ we have $$\tan \delta_{\nu}(k,\lambda)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~$$ $$=\left\{\begin{array}{ll} \displaystyle {-\pi (kr_{0})^{2\nu} \over 2^{2\nu}\nu!(\nu-1)!} ~\cdot~\displaystyle {A_{m}(0,\lambda)-(\nu+1/2)/r_{0} \over A_{m}(0,\lambda)-c^{2}k^{2}-\displaystyle {-\nu+1/2 \over r_{0}}\left(1- \displaystyle {(kr_{0})^{2} \over (\nu-1)(2\nu-1) }\right) } &{\rm when}~~\nu >1 \\ \displaystyle {-\pi \over \nu\Gamma(\nu)^{2}}\left( \displaystyle {kr_{0} \over 2}\right)^{2\nu} \displaystyle {A_{m}(0,\lambda)-(\nu+1/2)/r_{0} \over A_{m}(0,\lambda)-c^{2}k^{2}-\displaystyle {-\nu+1/2 \over r_{0}}+ \displaystyle {2\pi \cot (\nu\pi) \over r_{0}\Gamma(\nu)^{2}} \left(\displaystyle {kr_{0} \over 2}\right)^{2\nu} } &{\rm when}~~0<\nu<1 .\end{array}\right. \eqno (36) $$ \noindent The asymptotic forms for the case $\nu=1$ have already been given in Eq.(22). Now, repeating the proof of Levinson's theorem (24), we obtain the modified Levinson theorem for the non-critical cases: $$\eta_{m}(0)-(m-\nu)\pi/2=\delta_{\nu}(0,1)=n_{m}\pi . \eqno (37) $$ \noindent For the critical case where $A_{m}(0,1)=(-\nu+1/2)/r_{0}$, the modified Levinson theorem (37) holds for $\nu>1$, where a new bound state appears and simultaneously $\eta_{m}(k)$ jumps by $\pi$, but it is violated for $0<\nu \leq 1$, where a half bound state appears and simultaneously $\eta_{m}(k)$ jumps by $\nu \pi$. In other words, the theorem needs to be further modified in these cases. From the above discussion we come to the conclusion that, for a potential with a tail (26) at infinity, Levinson's theorem (24) is violated when $n\leq 2$, while it holds when $n>2$, even if the tail contains a logarithmic factor. Indeed, in the latter case, for any arbitrarily small $\epsilon>0$ one can always find a sufficiently large $r_{0}$ such that $|V(r)|<\epsilon/r^{2}$ in the region $r_{0}<r <\infty$. Since $\nu^{2}=m^{2}+\epsilon\sim m^{2}$, Levinson's theorem (24) holds in this case. {\bf ACKNOWLEDGMENTS}. This work was supported by the National Natural Science Foundation of China and Grant No. LWTZ-1298 of the Chinese Academy of Sciences. \end{document}
\begin{document} \title[ LDP and inviscid shell models] {Large deviation principle and inviscid shell models} \author[H. Bessaih ] {Hakima Bessaih } \address{ H. Bessaih, University of Wyoming, Department of Mathematics, Dept. 3036, 1000 East University Avenue, Laramie WY 82071, United States} \email{[email protected]} \author[A. Millet]{Annie Millet } \address{A. Millet, SAMOS, Centre d'\'Economie de la Sorbonne, Universit\'e Paris 1 Panth\'eon Sorbonne, 90 Rue de Tolbiac, 75634 Paris Cedex France {\it and} Laboratoire de Probabilit\'es et Mod\`eles Al\'eatoires, Universit\'es Paris~6-Paris~7, Bo\^{\i}te Courrier 188, 4 place Jussieu, 75252 Paris Cedex 05, France} \email{ [email protected] {\it and} [email protected]} \subjclass[2000]{Primary 60H15, 60F10; Secondary 76D06, 76M35. } \keywords{Shell models of turbulence, viscosity coefficient and inviscid models, stochastic PDEs, large deviations} \begin{abstract} An LDP is proved for the inviscid shell model of turbulence. As the viscosity coefficient $\nu$ converges to 0 and the noise intensity is multiplied by $\sqrt{\nu}$, we prove that some shell models of turbulence with a multiplicative stochastic perturbation driven by an $H$-valued Brownian motion satisfy an LDP in ${\mathcal C}([0,T], V)$ for the topology of uniform convergence on $[0,T]$, but where $V$ is endowed with a topology weaker than the natural one. The initial condition has to belong to $V$ and the proof is based on the weak convergence of a family of stochastic control equations. The rate function is described in terms of the solution to the inviscid equation. \end{abstract} \maketitle \section{Introduction}\label{s1} Shell models, from E.B. Gledzer, K. Ohkitani, M. Yamada, are simplified Fourier systems with respect to the Navier-Stokes ones, where the interaction between different modes is preserved only between nearest neighbors. These are some of the most interesting examples of artificial models of fluid dynamics that capture some properties of turbulent fluids, like power-law decays of structure functions. There is an extended literature on shell models. We refer to K.~Ohkitani and M.~ Yamada \cite{OY89}, V.~ S.~Lvov, E.~Podivilov, A.~Pomyalov, I.~Procaccia and D.~Vandembroucq \cite{LPPPV98}, L.~Biferale \cite{Biferale} and the references therein. However, these papers are mainly dedicated to the numerical approach and pertain to the finite-dimensional case. In a recent work by P.~Constantin, B.~Levant and E.~S.~Titi \cite{CLT06}, some results on regularity, attractors and inertial manifolds are proved for deterministic infinite-dimensional shell models. In \cite{CLT07-inviscid} these authors have proved some regularity results for the inviscid case. The infinite-dimensional stochastic version of shell models has been studied by D. Barbato, M. Barsanti, H. Bessaih and F. Flandoli in \cite{BBBF06} in the case of an additive random perturbation. Well-posedness and apriori estimates were obtained, as well as the existence of an invariant measure. Some balance laws have been investigated and preliminary results about the structure functions have been presented. The more general formulation involving a multiplicative noise reads as follows: \[ du(t) + [\nu A u(t) + B(u(t),u(t))]\, dt = \sigma(t,u(t))\, dW_t\, , \quad u(0)=\xi,\] driven by a Hilbert space-valued Brownian motion $W$.
It involves some similar bilinear operator $B$ with antisymmetric properties and some linear "second order" (Laplace) operator $A$ which is regularizing and multiplied by some non negative coefficient $\nu$ which stands for the viscosity in the usual hydro-dynamical models. The shell models are adimensional and the bilinear term is better behaved than that in the Navier Stokes equation. Existence, uniqueness and several properties were studied in \cite{BBBF06} in the case on an additive noise and in \cite{CM} for a multiplicative noise in the "regular" case of a non-zero viscosity coefficient which was taken constant. Several recent papers have studied a Large Deviation Principle (LDP) for the distribution of the solution to a hydro-dynamical stochastic evolution equation: S.~Sritharan and P.~Sundar \cite{Sundar} for the 2D Navier Stokes equation, J.~Duan and A.~Millet \cite{DM} for the Boussinesq model, where the Navier Stokes equation is coupled with a similar nonlinear equation describing the temperature evolution, U.~Manna, S.~Sritharan and P.~Sundar \cite{MSS} for shell models of turbulence, I.~Chueshov and A.~Millet \cite{CM} for a wide class of hydro-dynamical equations including the 2D B\'enard magneto-hydro dynamical and 3D $\alphalpha$-Leray Navier Stokes models, A.Du, J. Duan and H. Gao \cite{DDG} for two layer quasi-geostrophic flows modeled by coupled equations with a bi-Laplacian. All the above papers consider an equation with a given (fixed) positive viscosity coefficient and study exponential concentration to a deterministic model when the noise intensity is multiplied by a coefficient $\sigmaqrt{\varepsilonpsilon}$ which converges to 0. All these papers deal with a multiplicative noise and use the weak convergence approach of LDP, based on the Laplace principle, developed by P. Dupuis and R. Ellis in \cite{DupEl97}. This approach has shown to be successful in several other infinite-dimensional cases (see e.g. \cite{BD00}, \cite{BD07}, \cite{WeiLiu}) and differ from that used to get LDP in finer topologies for quasi-linear SPDEs, such as \cite{Sowers}, \cite{ChenalMillet}, \cite{CR}, \cite{Chang}. For hydro-dynamical models, the LDP was proven in the natural space of trajectories, that is ${\mathcal C}([0,T],H) \cap L^2([0,T],V)$, where roughly speaking, $H$ is $L^2$ and $V= Dom(A^{\frac{1}{2}})$ is the Sobolev space $H_1^2$ with proper periodicity or boundary conditions. The initial condition $\xi$ only belongs to $H$. The aim of this paper is different. Indeed, the asymptotics we are interested in have a physical meaning, namely the viscosity coefficient $\nu$ converges to 0. Thus the limit equation, which corresponds to the inviscid case, is much more difficult to deal with, since the regularizing effect of the operator $A$ does not help anymore. Thus, in order to get existence, uniqueness and apriori estimates to the inviscid equation, we need to start from some more regular initial condition $\xi\in V$, to impose that $(B(u,u),Au)=0$ for all $u$ regular enough (this identity would be true in the case on the 2D Navier Stokes equation under proper periodicity properties); note that this equation is satisfied in the GOY and Sabra shell models of turbulence under a suitable relation on the coefficients $a,b$ and $\mu$ stated below. Furthermore, some more conditions on the diffusion coefficient are required as well. The intensity of the noise has to be multiplied by $\sigmaqrt{\nu}$ for the convergence to hold. The technique is again that of the weak convergence. 
One proves that given a family $(h_\nu)$ of random elements of the RKHS of $W$ which converges weakly to $h$, the corresponding family of stochastic control equations, deduced from the original ones by shifting the noise by $\frac{h_\nu}{\sigmaqrt {\nu}}$, converges in distribution to the limit inviscid equation where the Gaussian noise $W$ has been replaced by $h$. Some apriori control of the solution to such equations has to be proven uniformly in $\nu>0$ for "small enough" $\nu$. Existence and uniqueness as well as apriori bounds have to be obtained for the inviscid limit equation. Some upper bounds of time increments have to be proven for the inviscid equation and the stochastic model with a small viscosity coefficient; they are similar to that in \cite{DM} and \cite{CM}. The LDP can be shown in ${\mathcal C}([0,T],V)$ for the topology of uniform convergence on $[0,T]$, but where $V$ is endowed with a weaker topology, namely that induced by the $H$ norm. More generally, under some slight extra assumption on the diffusion coefficient $\sigma$, the LDP is proved in ${\mathcal C}([0,T],V)$ where $V$ is endowed with the norm $\|\cdot\|_\alphalpha:=|A^\alphalpha (\cdot) |_H$ for $0\leq \alphalpha\leq \frac{1}{4}$. The natural case $\alphalpha=\frac{1}{2}$ is out of reach because the inviscid limit equation is much more irregular. Indeed, it is an abstract equivalent of the Euler equation. The case $\alphalpha=0$ corresponds to $H$ and then no more condition on $\sigma$ is required. The case $\alphalpha = \frac{1}{4}$ is that of an interpolation space which plays a crucial role in the 2D Navier Stokes equation. Note that in the different context of a scalar equation, M. Mariani \cite{MaMa} has also proved a LDP for a stochastic PDE when a coefficient $\varepsilon$ in front of a deterministic operator converges to 0 and the intensity of the Gaussian noise is multiplied by $\sigmaqrt{\varepsilon}$. However, the physical model and the technique used in \cite{MaMa} are completely different from ours. \sigmamallskip The paper is organized as follows. Section 2 gives a precise description of the model and proves apriori bounds for the norms in ${\mathcal C}([0,T],H)$ and $ L^2([0,T],V)$ of the stochastic control equations uniformly in the viscosity coefficient $\nu\in ]0,\nu_0]$ for small enough $\nu_0$. Section 3 is mainly devoted to prove existence, uniqueness of the solution to the deterministic inviscid equation with an external multiplicative impulse driven by an element of the RKHS of $W$, as well as apriori bounds of the solution in ${\mathcal C}([0,T], V)$ when the initial condition belong to $V$ and under reinforced assumptions on $\sigma$. Under these extra assumptions, we are able to improve the apriori estimates of the solution and establish them in ${\mathcal C}([0,T],V)$ and $ L^2([0,T],Dom(A))$. Finally the weak convergence and compactness of the level sets of the rate function are proven in section 4; they imply the LDP in ${\mathcal C}([0,T],V)$ where $V$ is endowed with the weaker norm associated with $A^\alphalpha$ for any value of $\alphalpha$ with $0\leq \alphalpha \leq \frac{1}{4}$. The LDP for the 2D Navier Stokes equation as the viscosity coefficient converges to 0 will be studied in a forthcoming paper. We will denote by $C$ a constant which may change from one line to the next, and $C(M)$ a constant depending on $M$. 
\sigmaection{Description of the model} \lambdabel{s2} \sigmaubsection{GOY and Sabra shell models} Let $H$ be the set of all sequences $u=(u_1, u_2,\ldots)$ of complex numbers such that $\sigmaum_n |u_n|^2<\infty$. We consider $H$ as a \varepsilonmph{real} Hilbert space endowed with the inner product $(\cdot,\cdot)$ and the norm $|\cdot|$ of the form \betaegin{equation}\lambdabel{normH} (u,v)={\rm Re}\,\sigmaum_{n\geq 1}u_n v_n^*,\quad |u|^2 =\sigmaum_{n\geq 1} |u_n|^2, \varepsilonnd{equation} where $v_n^*$ denotes the complex conjugate of $v_n$. Let $k_0>0$, $\mu>1$ and for every $n\geq 1$, set $k_n=k_0\, \mu^n$. Let $A:Dom(A)\sigmaubset H \to H $ be the non-bounded linear operator defined by \[ (Au)_n = k_n^2 u_n,\quad n=1,2,\ldots,\qquad Dom(A)=\Big\{ u\in H\, :\; \sigmaum_{n\geq 1} k_n^4 |u_n|^2<\infty\Big\}. \] The operator $A$ is clearly self-adjoint, strictly positive definite since $(Au,u)\geq k_0^2 |u|^2$ for $u\in Dom(A)$. For any $\alphalpha >0$, set \betaegin{equation} \lambdabel{Halpha} {\mathcal H}_\alphalpha = Dom(A^\alphalpha) = \{ u\in H \, :\, \sigmaum_{n\geq 1} k_n^{4\alphalpha} |u_n|^2 <+\infty\},\; \|u\|^2_\alphalpha = \sigmaum_{n\geq 1} k_n^{4\alphalpha} |u_n|^2 \; \mbox{\rm for }\; u\in {\mathcal H}_\alphalpha. \varepsilonnd{equation} Let ${\mathcal H}_0=H$, \[ V:=Dom(A^{\frac{1}{2}}) = \Big\{ u\in H \, :\, \sigmaum_{n\geq 1} k_n^2 |u_n|^2 <+\infty\Big\}\, ;\; \mbox{\rm also set }\; {\mathcal H}= {\mathcal H}_{\frac{1}{4}},\, \|u\|_{\mathcal H} = \|u\|_{\frac{1}{4}}. \] Then $V$ (as each of the spaces ${\mathcal H}_\alphalpha$) is a Hilbert space for the scalar product $(u,v)_V = Re(\sigmaum_n k_n^2\, u_n\, v_n^*)$, $u,v\in V$ and the associated norm is denoted by \betaegin{equation} \lambdabel{normV} \|u\|^2 = \sigmaum_{n\geq 1} k_n^2\, |u_n|^2. \varepsilonnd{equation} The adjoint of $V$ with respect to the $H$ scalar product is $V' = \{ (u_n)\in {\mathbb C}^{\mathbb N} \, :\, \sigmaum_{n\geq 1} k_n^{-2}\, |u_n|^2 <+\infty\}$ and $V\sigmaubset H\sigmaubset V'$ is a Gelfand triple. Let $\lambdangle u\, ,\, v\rangle = Re\left (\sigmaum_{n\geq 1} u_n\, v_n^*\right)$ denote the duality between $u\in V$ and $v\in V'$. Clearly for $0\leq \alphalpha <\betaeta$, $u\in {\mathcal H^\betaeta}$ and $v\in V$ we have \betaegin{equation}\lambdabel{compcalH} \|u\|_\alphalpha^2 \leq k_0^{4(\alphalpha - \betaeta)}\, \|u\|^2_\betaeta \, , \; \mbox{\rm and }\; \|v\|^2_{\mathcal H} \leq |v|\, \|v\|, \varepsilonnd{equation} where the last inequality is proved by the Cauchy-Schwarz inequality. Set $ u_{-1}= u_{0}=0 $, let $a,b$ be real numbers and $B : H\times V \to H$ (or $B : V\times H \to H$) denote the bilinear operator defined by \betaegin{equation} \lambdabel{GOY} \left[B(u,v)\right]_n=-i\left( a k_{n+1} u_{n+1}^* v_{n+2}^* +b k_{n} u_{n-1}^* v_{n+1}^* -a k_{n-1} u_{n-1}^* v_{n-2}^* -b k_{n-1} u_{n-2}^* v_{n-1}^* \right) \varepsilonnd{equation} for $n=1,2,\ldots$ in the GOY shell-model (see, e.g., \cite{OY89}) or \betaegin{equation} \lambdabel{Sabra} \left[B(u,v)\right]_n=-i\left( a k_{n+1} u_{n+1}^*\, v_{n+2} +b k_{n} u_{n-1}^* v_{n+1} +a k_{n-1} u_{n-1} v_{n-2} +b k_{n-1} u_{n-2} v_{n-1} \right), \varepsilonnd{equation} in the Sabra shell model introduced in \cite{LPPPV98}. 
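\smallskip \noindent The following short numerical sketch is ours and purely illustrative, not part of the analysis: it implements a finite truncation of the GOY nonlinearity \eqref{GOY} (all parameter values are arbitrary choices) and checks, for vectors supported away from the truncation boundary, the antisymmetry and energy-type identities stated below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, k0, mu, a = 30, 1.0, 2.0, 1.0
b = -a * (1.0 + mu**2) / mu**2        # enforces a(1+mu^2) + b*mu^2 = 0
k = k0 * mu ** np.arange(1, N + 1)    # k_n = k0*mu^n, n = 1,...,N

def B_goy(u, v):
    c = np.conj
    U = np.r_[0, 0, u, 0, 0]          # ghost entries u_{-1} = u_0 = 0, plus two trailing zeros
    V = np.r_[0, 0, v, 0, 0]
    n = np.arange(1, N + 1)           # physical index; u_j sits at padded position j+1
    return -1j * (a * k0 * mu**(n + 1) * c(U[n + 2]) * c(V[n + 3])
                  + b * k0 * mu**n * c(U[n]) * c(V[n + 2])
                  - a * k0 * mu**(n - 1) * c(U[n]) * c(V[n - 1])
                  - b * k0 * mu**(n - 1) * c(U[n - 1]) * c(V[n]))

def ip(x, y):                         # real inner product (x,y) = Re sum_n x_n y_n^*
    return np.real(np.sum(x * np.conj(y)))

u, v, w = (np.zeros(N, dtype=complex) for _ in range(3))
for x in (u, v, w):
    x[5:N - 5] = rng.standard_normal(N - 10) + 1j * rng.standard_normal(N - 10)

r1 = ip(B_goy(u, v), w) + ip(B_goy(u, w), v)     # antisymmetry <B(u,v),w> = -(B(u,w),v)
r2 = ip(B_goy(u, u), k**2 * u)                   # (B(u,u), Au) = 0 under the relation above
print(abs(r1) / np.sum(np.abs(B_goy(u, v)) * np.abs(w)),
      abs(r2) / np.sum(np.abs(B_goy(u, u)) * k**2 * np.abs(u)))  # both ~ 0 up to round-off
\end{verbatim}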
Note that $B$ can be extended as a bilinear operator from $H\times H$ to $V'$ and that there exists a constant ${C}>0$ such that given $u,v\in H$ and $w\in V$ we have \betaegin{equation}\lambdabel{trili} |\lambdangle B(u,v)\, ,\, w\rangle | + |\betaig( B(u,w)\, ,\, v\betaig) | + |\betaig( B(w,u)\, ,\, v\betaig) | \leq {C}\, |u|\, |v|\, \|w\|. \varepsilonnd{equation} An easy computation proves that for $u,v\in H$ and $w\in V$ (resp. $v,w\in H$ and $u\in V$), \betaegin{equation}\lambdabel{antisym} \lambdangle B(u,v)\, ,\, w\rangle = - \betaig(B(u,w)\, ,\, v\betaig) \; \mbox{\rm (resp. }\, \betaig( B(u,v)\, ,\, w\betaig) = - \betaig(B(u,w)\, ,\, v\betaig) \mbox{\rm \;)}. \varepsilonnd{equation} Furthermore, $B:V\times V\to V$ and $B : {\mathcal H}\times {\mathcal H} \to H$; indeed, for $u,v\in V$ (resp. $u,v\in {\mathcal H}$) we have \betaegin{align}\lambdabel{boundB1} \|B(u,v)\|^2 & = \sigmaum_{n\geq 1} k_n^2\, |B(u,v)_n|^2\, \leq C\, \|u\|^2 \sigmaup_n k_n^2 |v_n|^2 \leq C\, \|u\|^2\, \|v\|^2,\\ | B(u,v)| & \leq C\, \|u\|_{\mathcal H} \, \|v\|_{\mathcal H}. \nonumber \varepsilonnd{align} For $u,v$ in either $H$, ${\mathcal H}$ or $V$, let $B(u):=B(u,u)$. The anti-symmetry property \varepsilonqref{antisym} implies that $ |\lambdangle B(u_1)-B(u_2)\, ,\, u_1-u_2\rangle_V| = |\lambdangle B(u_1-u_2), u_2\rangle_V|$ for $u_1,u_2\in V$ and $ |\lambdangle B(u_1)-B(u_2)\, ,\, u_1-u_2\rangle| = |\lambdangle B(u_1-u_2), u_2\rangle|$ for $u_1\in H$ and $u_2\in V$. Hence there exist positive constants $\betaar{C}_1$ and $\betaar{C}_2$ such that \betaegin{eqnarray}\lambdabel{diffBV} |\lambdangle B(u_1)-B(u_2)\, ,\, u_1-u_2\rangle_V| & \leq & \betaar{C}_1 \, \|u_1-u_2\|^2\, \|u_2\|, \forall u_1,u_2\in V, \\ |\lambdangle B(u_1)-B(u_2)\, ,\, u_1-u_2\rangle| & \leq & \betaar{C}_2 \, |u_1-u_2|^2\, \|u_2\|, \forall u_1\in H, \forall u_2\in V \lambdabel{diffB-HV}. \varepsilonnd{eqnarray} Finally, since $B$ is bilinear, Cauchy-Schwarz's inequality yields for any $\alphalpha \in [ 0,\frac{1}{2}]$, $u,v\in V$: \betaegin{align}\lambdabel{AalphaB} \betaig|\betaig( A^\alphalpha B(u)- A^\alphalpha B(v) \, ,\, A^\alphalpha(u-v)\betaig) \betaig|& \leq \betaig| \betaig( A^\alphalpha B(u-v,u) + A^\alphalpha B(v,u-v)\, ,\, A^\alphalpha (u-v)\betaig)\betaig| \nonumber \\ & \leq C \|u-v\|^2_\alphalpha \, (\|u\|+\|v\|). \varepsilonnd{align} In the GOY shell model, $B$ is defined by \varepsilonqref{GOY}; for any $u\in V$, $Au\in V'$ we have \[ \lambdangle B(u,u), A u\rangle = Re\Big( -i \sigmaum_{n\geq 1} u_n^*\, u_{n+1}^* \, u_{n+2}^* \mu^{3n+1}\Big) \, k_0^3 (a+b\mu^2 -a \mu^4 -b\mu^4).\] Since $\mu \neq 1$, \betaegin{equation}\lambdabel{B(u)AuG} a(1+\mu^2) +b \mu^2 =0\quad \mbox{\rm if and only if } \; \lambdangle B(u,u)\, ,\, Au\rangle =0, \forall u\in V. \varepsilonnd{equation} On the other hand, in the Sabra shell model, $B$ is defined by \varepsilonqref{Sabra} and one has for $u\in V$, \[ \lambdangle B(u,u)\, ,\, Au\rangle = k_0^3 Re \Big( -i \sigmaum_{n\geq 1} \mu^{3n+1}\, \Big[ (a+b\, \mu^2) \, u_n^*\, u_{n+1}^*\, u_{n+2} + (a+b) \mu^4 u_n \, u_{n+1}\, u_{n+2}^*\Big]\Big).\] Thus $(B(u,u),Au)=0$ for every $u\in V$ if and only if $a+b\mu^2=(a+b)\mu^4$ and again $\mu \neq 1$ shows that \varepsilonqref{B(u)AuG} holds true. \sigmaubsection{Stochastic driving force} Let $Q$ be a linear positive operator in the Hilbert space $H$ which is trace class, and hence compact. 
Let $H_0 = Q^{\frac12} H$; then $H_0$ is a Hilbert space with the scalar product $$ (\partialhi, \partialsi)_0 = (Q^{-\frac12}\partialhi, Q^{-\frac12}\partialsi),\; \forall \partialhi, \partialsi \in H_0, $$ together with the induced norm $|\cdot|_0=\sigmaqrt{(\cdot, \cdot)_0}$. The embedding $i: H_0 \to H$ is Hilbert-Schmidt and hence compact, and moreover, $i \; i^* =Q$. Let $L_Q\varepsilonquiv L_Q(H_0,H) $ be the space of linear operators $S:H_0\mapsto H$ such that $SQ^{\frac12}$ is a Hilbert-Schmidt operator from $H$ to $H$. The norm in the space $L_Q$ is defined by $|S|_{L_Q}^2 =tr (SQS^*)$, where $S^*$ is the adjoint operator of $S$. The $L_Q$-norm can be also written in the form \betaegin{equation}\lambdabel{LQ-norm} |S|_{L_Q}^2=tr ([SQ^{1/2}][SQ^{1/2}]^*)=\sigmaum_{k\geq 1} |SQ^{1/2}\partialsi_k|^2= \sigmaum_{k\geq 1} |[SQ^{1/2}]^*\partialsi_k|^2 \varepsilonnd{equation} for any orthonormal basis $\{\partialsi_k\}$ in $H$, for example $(\partialsi_k)_n = \deltalta_n^k$. \partialar Let $W(t)$ be a Wiener process defined on a filtered probability space $(\Omega, \mathcal F, (\mathcal F_t), {\mathbb P})$, taking values in $H$ and with covariance operator $Q$. This means that $W$ is Gaussian, has independent time increments and that for $s,t\geq 0$, $f,g\in H$, \[ {\mathbb E} (W(s),f)=0\quad\mbox{and}\quad {\mathbb E} (W(s),f) (W(t),g) = \betaig(s\wedge t)\, (Qf,g). \] Let $\betaeta_j$ be standard (scalar) mutually independent Wiener processes, $\{ e_j\}$ be an orthonormal basis in $H$ consisting of eigen-elements of $Q$, with $Qe_j=q_je_j$. Then $W$ has the following representation \betaegin{equation}\lambdabel{W-n} W(t)=\lim_{n\to\infty} W_n(t)\;\mbox{ in }\; L^2(\Omega; H)\; \mbox{ with } W_n(t)=\sigmaum_{1\leq j\leq n} q^{1/2}_j \betaeta_j(t) e_j, \varepsilonnd{equation} and $Trace(Q)=\sigmaum_{j\geq 1} q_j$. For details concerning this Wiener process see e.g. \cite{PZ92}. \partialar Given a viscosity coefficient $\nu >0$, consider the following stochastic shell model \betaegin{equation} \lambdabel{Sshell} d_t u(t)+ \betaig[ \nu A u(t) + B(u(t))\betaig]\, dt = \sigmaqrt{\nu}\, \sigmaigma_\nu(t,u(t))\, dW(t), \varepsilonnd{equation} where the noise intensity $\sigma_\nu: [0, T]\times V \to L_Q(H_0, H)$ of the stochastic perturbation is properly normalized by the square root of the viscosity coefficient $\nu$. We assume that $\sigmaigma_\nu$ satisfies the following growth and Lipschitz conditions: \partialar \noindent \textbf{Condition (C1):} {\it $\sigma_\nu \in {\mathcal C}\betaig([0, T] \times V; L_Q(H_0, H)\betaig)$, and there exist non negative constants $K_i$ and $L_i$ such that for every $t\in [0,T]$ and $u,v\in V$:\\ {\betaf (i)} $|\sigma_\nu(t,u)|^2_{L_Q} \leq K_0+ K_1 |u|^2+ K_2 \|u\|^2$, \\ {\betaf (ii)} $|\sigma_\nu(t,u)-\sigma_\nu(t,v)|^2_{L_Q} \leq L_1 |u-v|^2 + L_2 \|u-v\|^2$. } \sigmamallskip For technical reasons, in order to prove a large deviation principle for the distribution of the solution to \varepsilonqref{Sshell} as the viscosity coefficient $\nu$ converges to 0, we will need some precise estimates on the solution of the equation deduced from \varepsilonqref{Sshell} by shifting the Brownian $W$ by some random element of its RKHS. This cannot be deduced from similar ones on $u$ by means of a Girsanov transformation since the Girsanov density is not uniformly bounded when the intensity of the noise tends to zero (see e.g. \cite{DM} or \cite{CM}). 
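\smallskip \noindent Before describing the admissible random shifts, we record a purely illustrative simulation of the truncated noise \eqref{W-n}; the sketch is ours, with an arbitrary trace-class choice $q_j=j^{-2}$ and arbitrary parameters, and it only checks the elementary identity ${\mathbb E}|W_n(T)|^2 = T\sum_{j\leq n} q_j$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_modes, T, n_paths = 50, 1.0, 500
q = 1.0 / np.arange(1, n_modes + 1) ** 2     # eigenvalues of Q; Trace(Q) = sum_j q_j < oo
# coordinates of W_n(T) in the basis (e_j): independent sqrt(q_j)*beta_j(T), beta_j(T) ~ N(0,T)
W_T = np.sqrt(q) * np.sqrt(T) * rng.standard_normal((n_paths, n_modes))
# sanity check of the covariance structure: E |W_n(T)|_H^2 = T * sum_{j <= n_modes} q_j
print(np.mean(np.sum(W_T ** 2, axis=1)), "should be close to", T * np.sum(q))
\end{verbatim}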
\partialar To describe a set of admissible random shifts, we introduce the class $\mathcal{A}$ as the set of $H_0-$valued $(\mathcal F_t)-$predictable stochastic processes $h$ such that $\int_0^T |h(s)|^2_0 ds < \infty, \; $ a.s. For fixed $M>0$, let \[S_M=\Big\{h \in L^2(0, T; H_0): \int_0^T |h(s)|^2_0 ds \leq M\Big\}.\] The set $S_M$, endowed with the following weak topology, is a Polish (complete separable metric) space (see e.g. \cite{BD07}): $ d_1(h, k)=\sigmaum_{k=1}^{\infty} \frac1{2^k} \betaig|\int_0^T \betaig(h(s)-k(s), \tilde{e}_k(s)\betaig)_0 ds \betaig|,$ where $ \{\tilde{e}_k(s)\}_{k=1}^{\infty}$ is an orthonormal basis for $L^2([0, T], H_0)$. For $M>0$ set \betaegin{equation} \lambdabel{AM} \mathcal{A}_M=\{h\in \mathcal{A}: h(\omega) \in S_M, \; a.s.\}. \varepsilonnd{equation} In order to define the stochastic control equation, we introduce for $\nu\geq 0$ a family of intensity coefficients $\tilde{\sigma}_\nu$ of a random element $h\in {\mathcal A}_M$ for some $M>0$. The case $\nu=0$ will be that of an inviscid limit "deterministic" equation with no stochastic integral and which can be dealt with for fixed $\omegaega$. We assume that for any $\nu\geq 0$ the coefficient $\tilde{\sigmaigma}_\nu$ satisfies the following condition: \partialar \noindent \textbf{Condition (C2):} {\it ${}\;{\tilde \sigma}_\nu \in {\mathcal C}\betaig([0, T] \times V; L(H_0, H)\betaig)$ and there exist constants $\tilde{K}_{\mathcal H}$, $\tilde{K}_i$, and $\tilde{L}_j$, for $i=0,1$ and $j=1,2$ such that: \betaegin{align} |\tilde{\sigma}_\nu (t,u)|^2_{L(H_0,H)} \leq \tilde{K}_0 + \tilde{K}_1 |u|^2 + \nu \tilde{K}_{\mathcal H} \|u\|_{\mathcal H}^2, & \quad \forall t\in [0,T], \; \forall u\in V, \lambdabel{tilde-s-b}\\ |\tilde{\sigma}_\nu(t,u) -\tilde{\sigma}_\nu(t,v) |^2_{L(H_0,H)} \leq \tilde{L}_1 |u-v|^2 + \nu \tilde{L}_2 \|u-v\|^2, & \quad \forall t\in [0,T], \; \forall u,v\in V, \lambdabel{tilde-s-lip} \varepsilonnd{align} where ${\mathcal H}={\mathcal H}_{\frac{1}{4}} $ is defined by \varepsilonqref{Halpha} and $|\cdot |_{L(H_0,H)}$ denotes the (operator) norm in the space $L(H_0,H)$ of all bounded linear operators from $H_0$ into $H$. Note that if $\nu=0$ the previous growth and Lipschitz on $\tilde{\sigmaigma}_0(t,.)$ can be stated for $u,v\in H$. } \betaegin{remark}\lambdabel{re:s-tilde-s} {\rm Unlike ({\betaf C1}) the hypotheses concerning the control intensity coefficient $\tilde{\sigma}_\nu $ involve a weaker topology (we deal with the operator norm $|\cdot |_{L(H_0,H)}$ instead of the trace class norm $|\cdot |_{L_Q}$). However we require in \varepsilonqref{tilde-s-b} a stronger bound (in the interpolation space ${\mathcal H}$). One can see that the noise intensity $\sigmaqrt{\nu}\, \sigma_\nu$ satisfies Condition ({\betaf C2}) provided that in Condition ({\betaf C1}), we replace point (i) by $|\sigma_\nu(t,u)|^2_{L_Q} \leq K_0+ K_1 |u|^2+ K_{\mathcal H} \|u\|_{\mathcal H}^2$. Thus the class of intensities satisfying both Conditions ({\betaf C1}) and ({\betaf C2}) when multiplied by $\sigmaqrt{\nu} $ is wider than that those coefficients which satisfy condition {\betaf (C1)} with $K_2=0$. } \varepsilonnd{remark} \partialar Let $M >0$, $h\in {\mathcal A}_M$, $\xi$ an $H$-valued random variable independent of $W$ and $\nu >0$. 
Under Conditions ({\betaf C1}) and ({\betaf C2}) we consider the nonlinear SPDE \betaegin{equation} \lambdabel{uhnu} d u_h^\nu(t) + \betaig[ \nu\, A u_h^\nu(t) + B\betaig(u_h^\nu(t) \betaig) \betaig]\, dt = \sigmaqrt{\nu}\, \sigmaigma_\nu (t,u_h^\nu(t))\, dW(t) + \tilde{\sigma}_\nu(t, u_h^\nu(t)) h(t)\, dt, \varepsilonnd{equation} with initial condition $u_h^\nu(0)=\xi$. Using \cite{CM}, Theorem 3.1, we know that for every $T>0$ and $\nu >0$ there exists $\betaar{K}_2^\nu := \betaar{K}_2(\nu,T,M)>0$ such that if $h_\nu\in {\mathcal A}_M$, ${\mathbb E}|\xi|^4<+\infty$ and $0\leq K_2< \betaar{K}_2^\nu $, equation \varepsilonqref{uhnu} has a unique solution $u^\nu_h \in {\mathcal C}([0,T],H)\cap L^2([0,T],V)$ which satisfies: \betaegin{align*} (u^\nu_h,v)-(\xi,v)& +\int_0^t \betaig[ \nu \lambdangle u_h^\nu(s) , Av\rangle +\lambdangle B(u_h^\nu (s)), v\rangle \betaig]\, ds \\ & = \int_0^t \betaig( \sigmaqrt{\nu}\, \sigma_\nu(s,u_h^\nu(s))\, dW(s)\, ,\, v\betaig) + \int_0^t \betaig( \tilde{\sigma}_\nu(s,u^\nu_h(s))h(s)\, ,\, v\betaig)\, ds \varepsilonnd{align*} a.s. for all $v\in Dom(A)$ and $t\in [0,T]$. Note that $u_h^\nu$ is a weak solution from the analytical point of view, but a strong one from the probabilistic point of view, that is written in terms of the given Brownian motion $W$. Furthermore, if $K_2\in [0, \betaar{K}_2^\nu[$ and $L_2\in [0,2[$, there exists a constant $C_\nu:=C(K_i, L_j, \tilde{K}_i, \tilde{K}_{\mathcal H}, T,M,\nu)$ such that \betaegin{equation} \lambdabel{boundini} {\mathbb E}\Big( \sigmaup_{0\leq t\leq T} |u_h^\nu(t)|^4 + \int_0^T \|u_h^\nu(t)\|^2\, dt + \int_0^T \|u_h^\nu(t)\|_{\mathcal H}^4 \, dt \Big) \leq C_\nu\, (1+{\mathbb E}|\xi|^4). \varepsilonnd{equation} The following proposition proves that $ \betaar{K}_2^\nu$ can be chosen independent of $\nu$ and that a proper formulation of upper estimates of the $H$, ${\mathcal H}$ and $V$ norms of the solution $u^\nu_h$ to \varepsilonqref{uhnu} can be proved uniformly in $h\in {\mathcal A}_M$ and in $\nu \in (0,\nu_0]$ for some constant $\nu_0>0$. \betaegin{prop} \lambdabel{unifnu} Fix $M>0$, $T>0$, $\sigmaigma_\nu$ and $\tilde{\sigma}_\nu$ satisfy Conditions ({\betaf C1})--({\betaf C2}) and let the initial condition $\xi$ be such that ${\mathbb E}|\xi|^4<+\infty$. Then in any shell model where $B$ is defined by \varepsilonqref{GOY} or \varepsilonqref{Sabra}, there exist constants $\nu_0>0$, $\betaar{K}_2$ and $\betaar{C}(M)$ such that if $0<\nu\leq \nu_0$, $0\leq K_2<\betaar{K}_2$, $L_2<2$ and $h\in {\mathcal A}_M$, the solution $u^\nu_h$ to \varepsilonqref{uhnu} satisfies: \betaegin{align} \lambdabel{bound1} {\mathbb E} \,\Big(\, & \sigmaup_{0\leq t\leq T}|u^\nu_{h}(t)|^4 + \nu \int_0^T \|u^\nu_{h}(s)\|^2 \, ds + \nu \int_0^T \|u^\nu_{h}(s)\|_{\mathcal H}^4\, ds \, \Big) \leq \betaar{C}(M)\, \betaig( {\mathbb E}|\xi|^4 +1\betaig). \varepsilonnd{align} \varepsilonnd{prop} \betaegin{proof} For every $N>0$, set $\tau_N = \inf\{t:\; |u^\nu_{h}(t)| \geq N \}\wedge T.$ It\^o's formula and the antisymmetry relation in \varepsilonqref{antisym} yield that for $t \in [0, T]$, \betaegin{align} \lambdabel{Ito1} |u^\nu_{h}(t\wedge& \tau_N)|^2 = |\xi|^2 + 2 \sigmaqrt{\nu}\, \int_0^{t\wedge \tau_N}\!\! \betaig( \sigmaigma_\nu(s,u^\nu_{h}(s)) dW(s) , u^\nu_{h}(s)\betaig) -2\, \nu \int_0^{t\wedge \tau_N}\!\! \|u^\nu_{h}(s)\|^2 ds \nonumber \\ & \, +2\int_0^{t\wedge \tau_N}\!\! \betaig( \tilde{\sigmaigma}_\nu (s,u^\nu_{h}(s)) h(s), u^\nu_{h}(s)\betaig) \, ds + \nu \int_0^{t\wedge \tau_N} \!\! 
|\sigmaigma_\nu(s,u^\nu_{h}(s))|_{L_Q}^2\, ds, \varepsilonnd{align} and using again It\^o's formula we have \betaegin{equation} \lambdabel{estimate1} |u^\nu_{h}(t\wedge \tau_N)|^4 + 4\, \nu \int_0^{t\wedge \tau_N} \!\! |u^\nu_{h}(r)|^2 \, \|u^\nu_{h}(r)\|^2 \, dr \leq \; |\xi|^4 + I(t) + \sigmaum_{1\leq j\leq 3} {T}_j(t), \varepsilonnd{equation} where \betaegin{eqnarray*} I(t) &= & 4\sigmaqrt{\nu} \; \Big| \int_0^{t\wedge \tau_N} \betaig(\sigma_\nu(r,u^\nu_{h}(r))\; dW(r), u^\nu_{h}(r)\; |u^\nu_{h}(r)|^{2}\betaig ) \Big| , \\ {T}_1(t) &= & 4 \, \int_0^{t\wedge \tau_N} |(\tilde{\sigma}_\nu (r,u^\nu_{h}(r))\, h(r)\,,\: u^\nu_{h}(r))| \; |u^\nu_{h}(r)|^{2} dr, \\ T_2(t) &= & 2\nu \, \int_0^{t\wedge \tau_N} |\sigma_\nu(r,u^\nu_{h}(r))|^2_{L_Q} \; |u^\nu_{h}(r) |^2 dr, \\ {T}_3(t) &= & 4\nu \, \int_0^{t\wedge \tau_N} |\sigma_\nu^*(s, u^\nu_{h}(r))\; u^\nu_{h}(r)|^2_{0}\, dr. \varepsilonnd{eqnarray*} Since $h\in \mathcal{A}_M$, the Cauchy-Schwarz and Young inequalities and condition {\betaf (C2)} imply that for any $\varepsilonpsilon >0$, \betaegin{align} \lambdabel{majT1} {T}_1 (t)& \leq \; 4 \; \int_0^{t\wedge \tau_N}\!\! \Big( \sigmaqrt{\tilde{K}_0} + \sigmaqrt{\tilde{K}_1} \, |u^\nu_{h}(r)| + \sigmaqrt{\nu\, \tilde{K}_{\mathcal H}} \, k_0^{-\frac{1}{2}} \|u^\nu_{h}(r)\| \Big) \,|h(r)|_0 \,|u^\nu_{h}(r)|^3 dr \nonumber \\ & \leq \; 4\, \sigmaqrt{\tilde{K}_0 \, M\, T} + 4 \Big(\sigmaqrt{\tilde{K}_0 } + \sigmaqrt{\tilde{K}_1}\Big) \, \int_0^{t\wedge \tau_N} |h(r)|_0 \, |u^\nu_h(r)|^4\, ds \nonumber \\ & \qquad + \varepsilonpsilon \, \nu \int_0^t \|u^\nu_{h}(r)\|^2 \, |u^\nu_{h}(r)|^2 \, dr + \frac{4\, \tilde{K}_{\mathcal H}}{\varepsilonpsilon\, k_0} \, \int_0^{t\wedge \tau_N} \!\! |h(r)|_0^2 \, |u^\nu_{h}(r)|^4 \, dr. \varepsilonnd{align} Using condition {\betaf (C1)} we deduce \betaegin{align} \lambdabel{majT2} & {T}_2 (t)+T_3(t) \leq \; 6\, \nu \, \int_0^{t\wedge \tau_N} \!\! \betaig[K_0+K_1\, |u^\nu_h(r)|^2 + K_2 \|u^\nu_{h}(r)\|^2\betaig] \ |u^\nu_{h}(r)|^2 \, dr \nonumber \\ & \quad \leq 6\, \nu\, K_0\, T + 6\, \nu\, (K_0+K_1) \, \int_0^{t\wedge \tau_N} \!\! |u^\nu_h(r)|^4\, dr + 6 \, \nu \, K_2 \int_0^t \| u^\nu_h(r)\|^2 \, |u^\nu_{h}(r)|^2 dr. \varepsilonnd{align} Let $K_2 \leq \frac{1}{2}$ and $0< \varepsilonpsilon \leq 2-3K_2$; set \[ \varphi(r)=4\Big( \sigmaqrt{\tilde{K}_0} + \sigmaqrt{\tilde{K}_1}\Big)\, |h(r)|_0 + \frac{4\tilde{K}_{\mathcal H}}{\varepsilonpsilon k_0}\, |h(r)|_0^2 + 6\, \nu (K_0+K_1) .\] Then a.s. \betaegin{equation}\lambdabel{intphi} \int_0^T \varphi(r)\, dr\leq 4\Big( \sigmaqrt{\tilde{K}_0} + \sigmaqrt{\tilde{K}_1}\Big)\, \sigmaqrt{M\, T} + \frac{4\tilde{K}_{\mathcal H}}{\varepsilonpsilon k_0}\, M + 6\, \nu (K_0+K_1)\, T:=\Phi \varepsilonnd{equation} and the inequalities \varepsilonqref{estimate1}-\varepsilonqref{majT2} yield that for \[ X(t)=\sigmaup_{r\leq t} |u^\nu_h(r\wedge \tau_N)|^4\; ,\; Y(t)=\nu \int_0^t \|u^\nu_h(r\wedge \tau_N)\|^2\, |u^\nu_h(r\wedge \tau_N)|^2\, ds ,\] \betaegin{equation}\lambdabel{estimate2} X(t)+(4-6K_2-\varepsilonpsilon) Y(t)\leq |\xi|^4 + \Big(4 \sigmaqrt{\tilde{K}_0 M T}+ 6\nu K_0T \Big) + I(t) + \int_0^t \varphi(s)\, X(s)\, ds. 
\varepsilonnd{equation} The Burkholder-Davis-Gundy inequality, ({\betaf C1}), Cauchy-Schwarz and Young's inequalities yield that for $t\in[0, T]$ and $\deltalta,\kappaappa >0$, \betaegin{align} \lambdabel{estimate3} & {\mathbb E} I(t) \leq 12 \,\sigmaqrt{\nu} \, {\mathbb E} \Big( \Big\{ \int_0^{t\wedge \tau_N} \betaig[ K_0+K_1\, |u^\nu_h(s)|^2 + K_2\, \|u^\nu_h(s)\|^2\betaig]\, |u^\nu_{h}(r)|^6 ds \Big\}^\frac12 \Big) \nonumber \\ &\; \leq 12\, \sigmaqrt{\nu} \, {\mathbb E} \Big( \sigmaup_{0\leq s\leq t} |u^\nu_h(s\wedge\tau_N)|^2 \, \Big\{ \int_0^{t\wedge \tau_N} \betaig[ K_0+K_1\, |u^\nu_h(s)|^2 + K_2\, \|u^\nu_h(s)\|^2\betaig]\, |u^\nu_{h}(s)|^2 ds \Big\}^\frac12 \Big) \nonumber \\ &\; \leq \deltalta \, {\mathbb E}(Y(t)) + \Big( \frac{36 K_2 }{\deltalta} + \kappaappa\, \nu\Big) \, {\mathbb E}(X(t)) + \frac{36}{\kappaappa}\, \Big[ K_0\, T + (K_0+K_1) \int_0^t {\mathbb E} (X(s))\, ds\Big]. \varepsilonnd{align} Thus we can apply Lemma 3.2 in \cite{CM} (see also Lemma 3.2 in \cite{DM}), and we deduce that for $0<\nu\leq \nu_0$, $K_2\leq \frac{1}{2}$, $\varepsilonpsilon= \alphalpha = \frac{1}{2}$, $ \betaeta=\frac{36 K_2}{\deltalta} + \kappaappa\, \nu_0 \leq 2^{-1}\, e^{-\Phi}$, $ \deltalta \leq \alphalpha 2^{-1}\, e^{-\Phi}$ and $\gamma =\frac{36}{\kappaappa}(K_0+K_1)$, \betaegin{equation} \lambdabel{Gronwallgene} {\mathbb E} \Big( X(T)+\alphalpha Y(T)\Big) \leq 2 \varepsilonxp\betaig( \Phi +2T\gamma e^\Phi\betaig) \Big[ 4\sigmaqrt{\tilde{K}_0\, M\, T} +6 \nu_0 K_0 T + \frac{36}{\kappaappa} K_0T + {\mathbb E}(|\xi|^4)\Big]. \varepsilonnd{equation} Using the last inequality from \varepsilonqref{compcalH}, we deduce that for $K_2$ small enough, $\betaar{C}(M)$ independent of $N$ and $\nu\in ]0,\nu_0]$, \[ {\mathbb E}\Big( \sigmaup_{0\leq t\leq T} |u^\nu_h(t\wedge\tau_N)|^4 +\nu\, \int_0^{\tau_N}\|u^\nu_h(t)\|_{\mathcal H}^4 \, dt \Big) \leq \betaar{C}(M) (1+{\mathbb E}(|\xi|^4)). \] As $N\to +\infty$, the monotone convergence theorem yields that for $\betaar{K}_2$ small enough and $\nu\in ]0,\nu_0]$ \[ {\mathbb E}\Big( \sigmaup_{0\leq t\leq T} |u^\nu_h(t)|^4 +\nu\, \int_0^T \|u^\nu_h(t)\|_{\mathcal H}^4 \, dt \Big) \leq \betaar{C}(M) (1+{\mathbb E}(|\xi|^4)). \] \partialar This inequality and \varepsilonqref{Gronwallgene} with $t$ instead of $t\wedge\tau_N$ conclude the proof of \varepsilonqref{bound1} by a similar simpler computation based on conditions {\betaf (C1)} and {\betaf (C2)}. \varepsilonnd{proof} \sigmaection{Well posedeness, more a priori bounds and inviscid equation}\lambdabel{s3} The aim of this section is twofold. On one hand, we deal with the inviscid case $\nu=0$ for which the PDE \betaegin{equation}\lambdabel{u0h} du_h^0(t) + B(u_h^0(t))\, dt = \tilde{\sigmaigma}_0(t,u^0_h(t))\, h(t)\, dt\; , \quad u_h^0(0)=\xi \varepsilonnd{equation} can be solved for every $\omegaega$. In order to prove that \varepsilonqref{u0h} has a unique solution in ${\mathcal C}([0,T],V)$ a.s., we will need stronger assumptions on the constants $\mu, a,b$ defining $B$, the initial condition $\xi$ and $\tilde{\sigma}_0$. The initial condition $\xi$ has to belong to $V$ and the coefficients $a,b,\mu$ have to be chosen such that $(B(u,u), Au)=0$ for $u\in V$ (see \varepsilonqref{B(u)AuG}). On the other hand, under these assumptions and under stronger assumptions on $\sigma_\nu$ and $\tilde{\sigma}_\nu$, similar to that imposed on $\tilde{\sigma}_0$, we will prove further properties of $u_h^\nu$ for a strictly positive viscosity coefficient $\nu$. Thus, suppose furthermore that for $\nu >0$ (resp. 
$\nu =0$), the map \[ \tilde{\sigma}_\nu : [0,T]\times Dom(A) \to L(H_0,V) \; (\mbox{\rm resp. } \tilde{\sigma}_0 : [0,T]\times V \to L(H_0,V))\] satisfies the following: \sigmamallskip \noindent {\betaf Condition (C3):} {\it There exist non negative constants $\tilde{K}_i$ and $\tilde{L}_j$, $i=0,1,2$, $j=1,2$ such that for $s\in [0,T]$ and for any $u,v\in Dom(A)$ if $\nu>0$ (resp. for any $u,v\in V$ if $\nu=0$), \betaegin{equation} \lambdabel{growthbis} |A^{\frac{1}{2}} \tilde{\sigma}_\nu(s,u)|_{L(H_0,H)}^2 \leq \tilde{K}_0 + \tilde{K}_1 \, \|u\|^2 + \nu\, \tilde{K}_2\, |Au|^2, \varepsilonnd{equation} and \betaegin{equation} \lambdabel{Lipbis} |A^{\frac{1}{2}} \tilde{\sigma}_\nu(s,u) - A^{\frac{1}{2}} \tilde{\sigma}_\nu(s,v) |_{L(H_0,H)}^2 \leq \tilde{L}_1 \, \|u-v\|^2 + \nu\, \tilde{L}_2\, |Au-Av|^2. \varepsilonnd{equation} } \betaegin{theorem}\lambdabel{exisuniq0} Suppose that $\tilde{\sigma}_0$ satisfies the conditions {\betaf (C2)} and {\betaf (C3)} and that the coefficients $a,b,\mu$ defining $B$ satisfy $a(1+\mu^2)+b\mu^2=0$. Let $\xi\in V$ be deterministic. For any $M>0$ there exists $C(M)$ such that equation \varepsilonqref{u0h} has a unique solution in ${\mathcal C}([0,T],V)$ for any $h\in {\mathcal A}_M$, and a.s. one has: \betaegin{equation}\lambdabel{boundu0} \sigmaup_{h\in {\mathcal A}_M}\, \sigmaup_{0\leq t\leq T} \|u^0_h(t)\| \leq C(M)(1+\|\xi\|) . \varepsilonnd{equation} \varepsilonnd{theorem} Since equation \varepsilonqref{u0h} can be considered for any fixed $\omegaega$, it suffices to check that the deterministic equation \varepsilonqref{u0h} has a unique solution in ${\mathcal C}([0,T],V)$ for any $h\in S_M$ and that \varepsilonqref{boundu0} holds. For any $m\geq 1$, let $ H_m = span(\varphi_1, \cdots, \varphi_m) \sigmaubset Dom(A)$, \betaegin{equation} \lambdabel{HPm} P_m: H \to H_m \quad \mbox{\rm denote the orthogonal projection from $H$ onto $H_m$}, \varepsilonnd{equation} and finally let $\tilde{\sigma}_{0,m}=P_m \tilde{\sigma}_0$. Clearly $P_m$ is a contraction of $H$ and $|\tilde{\sigma}_{0,m}(t,u)|_{L(H_0,H)}^2 \leq | \tilde{\sigma}_0(t,u)|_{L(H_0,H)}^2$. Set $u^0_{m,h}(0)=P_m\, \xi$ and consider the ODE on the $m$-dimensional space $H_m$ defined by \betaegin{equation}\lambdabel{Galerkin0} d\betaig( u^0_{m,h}(t)\, ,\, v\betaig) = \betaig[-\betaig( B(u^0_{m,h}(t) )\, ,\, v\betaig) +\betaig( \tilde{\sigma}_0(t,u^0_{m,h}(t))\, h(t)\, ,v\betaig) \betaig]\, dt \varepsilonnd{equation} for every $v\in H_m$. Note that using \varepsilonqref{boundB1} we deduce that the map $ u \in H_m \mapsto \lambdangle B(u)\, ,\, v\rangle $ is locally Lipschitz. Furthermore, since there exists some constant $C(m)$ such that $\|u\| \vee \|u\|_{\mathcal H}\leq C(m) |u|$ for $u\in H_m$, Condition ({\betaf C2}) implies that the map $u\in H_m \mapsto \betaig( (\tilde{\sigma}_{0,m}(t, u) h(t)\, ,\, \varphi_k) : 1\leq k\leq m \betaig) $, is globally Lipschitz from $H_m$ to ${\mathbb R}R^m$ uniformly in $t$. Hence by a well-known result about existence and uniqueness of solutions to ODEs, there exists a maximal solution $u^0_{m,h}=\sigmaum_{k=1}^m (u^0_{m,h}\, ,\, \varphi_k\betaig)\, \varphi_k$ to \varepsilonqref{Galerkin0}, i.e., a (random) time $\tau^0_{m,h}\leq T$ such that \varepsilonqref{Galerkin0} holds for $t< \tau^0_{m,h}$ and as $t \uparrow \tau^0_{m,h}<T$, $|u^0_{m,h}(t)| \to \infty$. The following lemma provides the (global) existence and uniqueness of approximate solutions as well as their uniform a priori estimates. 
This is the main preliminary step in the proof of Theorem~\ref{exisuniq0}. \betaegin{lemma}\lambdabel{boundGalerkin} Suppose that the assumptions of Theorem \ref{exisuniq0} are satisfied and fix $M>0$. Then for every $h\in {\mathcal A}_M$ equation \varepsilonqref{Galerkin0} has a unique solution in ${\mathcal C}([0,T],H_m)$. There exists some constant $C(M)$ such that for every $h\in {\mathcal A}_M$, \betaegin{equation}\lambdabel{boundu0nh} \sigmaup_m\, \sigmaup_{0\leq t\leq T} \|u^0_{m,h}(t)\|^2 \leq C(M)\, (1+\|\xi\|^2) \; a.s. \varepsilonnd{equation} \varepsilonnd{lemma} \betaegin{proof} The proof is included for the sake of completeness; the arguments are similar to that in the classical viscous framework. Let $h\in {\mathcal A}_M$ and let $u^0_{m,h}(t)$ be the approximate maximal solution to \varepsilonqref{Galerkin0} described above. For every $N>0$, set $ \tau_N = \inf\{t:\; \|u^0_{m,h}(t)\| \geq N \}\wedge T. $ Let $\Pi_m : H_0\to H_0$ denote the projection operator defined by $\Pi_m u =\sigmaum_{k=1}^m \betaig( u \, ,\, e_k\betaig) \, e_k$, where $\{e_k, k\geq 1\}$ is the orthonormal basis of $H$ made by eigen-elements of the covariance operator $Q$ and used in \varepsilonqref{W-n}. \partialar Since $\varphi_k\in Dom(A)$ and $V$ is a Hilbert space, $P_m$ contracts the $V$ norm and commutes with $A$. Thus, using {\betaf (C3)} and \varepsilonqref{B(u)AuG}, we deduce \betaegin{align*} \|u^0_{m,h}(t\wedge \tau_N)\|^2 \leq &\; \|\xi\|^2 - 2\int_0^{t\wedge \tau_N}\!\! \betaig( B(u^0_{m,h}(s))\, ,\, A u^0_{m,h}(s)\betaig)\, ds \\ & + 2 \int_0^{t\wedge \tau_N} \betaig| A^{\frac{1}{2}} P_m \tilde{\sigmaigma}_{0,m} (s,u^0_{m,h}(s)) h(s)\betaig|\, \|u^0_{m,h}(s)\|\, ds \\ \leq & \; |\xi\|^2 + 2\sigmaqrt{\tilde{K}_0 M T} + 2\Big( \sigmaqrt{\tilde{K}_0} + \sigmaqrt{\tilde{K}_1}\Big) \int_0^{t\wedge \tau_N} |h(s)|_0\, \|u^0_{m,h}(s)\|^2\, ds. \varepsilonnd{align*} Since the map $\|u^0_{m,h}(.)\|$ is bounded on $[0,\tau_N]$, Gronwall's lemma implies that for every $N>0$, \betaegin{equation}\lambdabel{Galerkin4} \sigmaup_m \sigmaup_{t\leq \tau_N} \|u^0_{m,h}(t)\|^2 \leq \Big( \|\xi\|^2 + 2\sigmaqrt{\tilde{K}_0 M T}\Big) \, \varepsilonxp\Big( 2\sigmaqrt{M T}\, \Big[ \sigmaqrt{\tilde{K}_0 }+ \sigmaqrt{\tilde{K}_1}\Big]\Big). \varepsilonnd{equation} Let $\tau:=\lim_N \tau_N$ ; as $N\to \infty$ in \varepsilonqref{Galerkin4} we deduce \betaegin{equation}\lambdabel{GalerkinT} \sigmaup_m \sigmaup_{t\leq \tau} \|u^0_{m,h}(t)\|^2 \leq \Big( \|\xi\|^2 + 2\sigmaqrt{\tilde{K}_0 M T}\Big) \, \varepsilonxp\Big( 2 \sigmaqrt{M T}\, \Big[ \sigmaqrt{\tilde{K}_0 }+ \sigmaqrt{\tilde{K}_1}\Big]\Big). \varepsilonnd{equation} On the other hand, $\sigmaup_{t\leq \tau} \|u^0_{m,h}(t)\|^2 = +\infty$ if $\tau<T$, which contradicts the estimate (\ref{GalerkinT}) . Hence $\tau = T$ a.s. and we get \varepsilonqref{boundu0nh} which completes the proof of the Lemma. \varepsilonnd{proof} We now prove the main result of this section. \partialar \noindent \textbf{Proof of Theorem \ref{exisuniq0}: }\\ \noindent \textbf{Step 1: } Using the estimate \varepsilonqref{boundu0nh} and the growth condition \varepsilonqref{tilde-s-b} we conclude that each component of the sequence $\betaig( (u^0_{m,h})_n, n\geq 1\betaig)$ satisfies the following estimate \betaegin{equation*} \sigmaup_m\, \sigmaup_{0\leq t\leq T} |(u^0_{m,h})_{n}(t)|^2 + \betaig|\betaig(\tilde{\sigmaigma}_0(t,u^0_{m,h}(t)) h(t)\betaig)_n\betaig|\leq C\; a.s. \, , \forall n=1, 2, \cdots \varepsilonnd{equation*} for some constant $C>0$ depending only on $M, \|\xi\|, T$. 
Moreover, writing the equation \varepsilonqref{u0h} for the GOY shell model in the componentwise form using \varepsilonqref{GOY} (the proof for the Sabra shell model using \varepsilonqref{Sabra}, which is similar, is omitted), we obtain for $n=1, 2, \cdots$ \betaegin{align}\lambdabel{GOY-u0mnh} (u^0_{m,h})_{n}(t)=&(P_{m}\xi)_n+i\int_{0}^{t}( a k_{n+1} (u^0_{m,h})_{n+1}^* (s)(u^0_{m,h})_{n+2}^*(s) +b k_{n} (u^0_{m,h})_{n-1}^*(s) (u^0_{m,h})_{n+1}^*(s) \nonumber\\ &-a k_{n-1} (u^0_{m,h})_{n-1}^*(s) (u^0_{m,h})_{n-2}^*(s) -b k_{n-1} (u^0_{m,h})_{n-2}^*(s) (u^0_{m,h})_{n-1}^*(s))ds\nonumber\\ &+\int_{0}^{t}\betaig( \tilde{\sigma}_0(s,u^0_{m,h}(s))\, h(s)\betaig)_n ds\, . \varepsilonnd{align} Hence, we deduce that for every $n\geq 1$ there exists a constant $C_{n}$, independent of $m$, such that $$\|(u^0_{m,h})_{n}\|_{C^{1}\left([0,T];\mathbb{C}\right)}\leq C_{n}.$$ Applying the Ascoli-Arzel\`a theorem, we conclude that for every $n$ there exists a subsequence $(m^{n}_{k})_{k\geq 1}$ such that $(u^0_{m^{n}_{k},h})_{n}$ converges uniformly to some $(u^0_{h})_{n}$ as $k\longrightarrow\infty$. By a diagonal procedure, we may choose a sequence $(m^{n}_{k})_{k\geq 1}$ independent of $n$ such that $(u^0_{m,h})_{n}$ converges uniformly to some $(u^0_{h})_{n}\in {\mathcal C}\left([0,T];\mathbb{C}\right)$ for every $n\geq 1$; set $$u^0_{h}(t)=((u^0_{h})_{1}, (u^0_{h})_{2},\dots).$$ From the estimate \varepsilonqref{boundu0nh}, we have the weak star convergence in $L^{\infty}(0,T; V)$ of some further subsequence of $\betaig( u^0_{m^{n}_{k},h}\,: \, {k\geq 1})$. The weak limit belongs to $L^{\infty}(0,T; V)$ and has clearly $(u^0_{h})_{n}$ as components that belong to ${\mathcal C}\left([0,T];\mathbb{C}\right)$ for every integer $n\geq 1$. Using the uniform convergence of each component, it is easy to show, passing to the limit in the expression \varepsilonqref{GOY-u0mnh}, that $u^0_{h}(t)$ satisfies the weak form of the GOY shell model equation \varepsilonqref{u0h}. Finally, since \[ u^0_h(t)=\xi + \int_0^t \betaig[ -B(u^0_h(s)) + \tilde{\sigmaigma}_0(s,u^0_h(s)) h(s)\betaig]\, ds, \] is such that $\sigmaup_{0\leq s\leq T} \|u^0_h(s)\|<\infty$ a.s. and since for every $s\in [0,T]$, by \varepsilonqref{boundB1} and \varepsilonqref{growthbis} we have a.s. \[ \betaig[ \|B(u^0_h(s))\|+ \|\tilde{\sigmaigma}_0(s,u^0_h(s)) h(s)\|\betaig] \leq C \Big( 1+\sigmaup_{0\leq s\leq T} \|u^0_h(s)\|^2 \betaig) \betaig(1+|h(s)|_0\Big)\in L^2([0,T]),\] we deduce that $u^0_h\in {\mathcal C}([0,T],V)$ a.s. \sigmamallskip \noindent \textbf{Step 2: } To complete the proof of Theorem \ref{exisuniq0}, we show that the solution $u_h^0$ to \varepsilonqref{u0h} is unique in ${\mathcal C}([0, T], V) $. Let $v \in \mathcal{C} ([0,T],V)$ be another solution to \varepsilonqref{u0h} and set \[ \tau_N =\inf \{t \geq 0: \|u_h^0(t) \| \geq N \} \wedge \inf \{t \geq 0: \|v(t) \| \geq N \} \wedge T. \] Since $\|u_h^0(.) \|$ and $\|v(.)\|$ are bounded on $[0,T]$, we have $\tau_N \to T$ as $N\to \infty$. Set $U=u^0_h-v$; equation \varepsilonqref{diffBV} implies \betaegin{align*} \betaig| \betaig( A^{\frac{1}{2}} B(u_h^0(s))- A^{\frac{1}{2}} B(v(s)),A^{\frac{1}{2}} U(s)\betaig)\betaig| & = \betaig| \betaig( B(u_h^0(s))- B(v(s)),A U(s)\betaig) \betaig| \\ & \leq \betaar{C}_1 \| U(s) \|^2\, \|v(s)\|. 
\end{align*}
On the other hand, the Lipschitz property \eqref{Lipbis} from condition {\bf (C3)} for $\nu=0$ implies
\[ \big| \big[ A^{\frac{1}{2}} \tilde{\sigma}_0(s,u_h^0(s))- A^{\frac{1}{2}} \tilde{\sigma}_0(s,v(s))\big] h(s) \big| \leq \sqrt{\tilde{L}_1} \|u_h^0(s)-v(s)\| \, |h(s)|_0.
\]
\par \noindent Therefore,
\begin{align*}
\|U(t\wedge \tau_N)\|^2 \;\; & = \int_0^{t\wedge \tau_N}\!\! \Big\{ - 2 \Big( A^{\frac{1}{2}} B(u_h^0(s))- A^{\frac{1}{2}} B(v(s)),A^{\frac{1}{2}} U(s)\Big) \\
&\qquad \qquad + 2 \Big( [ A^{\frac{1}{2}} \tilde{\sigma}_0(s,u_h^0(s))- A^{\frac{1}{2}} \tilde{\sigma}_0(s,v(s))] h(s), A^{\frac{1}{2}} U(s) \Big) \Big\}\, ds\\
&\leq 2 \int_0^t \big( \bar{C}_1\, N + \sqrt{\tilde{L}_1}\, |h(s)|_0\big) \, \|U(s\wedge \tau_N)\|^2\, ds,
\end{align*}
and Gronwall's lemma implies that (for almost every $\omega$) $ \sup_{0\leq t\leq T}\|U(t\wedge \tau_N)\|^2=0$ for every $N$. Letting $N\to \infty$, we deduce that a.s. $U(t)=0$ for every $t$, which concludes the proof. $\Box$
\bigskip

We now suppose that the diffusion coefficient $\sigma_\nu$ satisfies the following condition {\bf (C4)}, which strengthens {\bf (C1)} in the same way as {\bf (C3)} strengthens {\bf (C2)}.

\noindent {\bf Condition (C4)} There exist constants $K_i$, $i=0,1,2$, and $L_j$, $j=1,2$, such that for any $\nu >0$ and $u\in Dom(A)$,
\begin{eqnarray}\label{bnd-sbis}
|A^{\frac{1}{2}}\sigma_\nu(s,u)|^2_{L_Q} &\leq & K_0+K_1 \|u\|^2 + K_2|Au|^2 ,\\
|A^{\frac{1}{2}}\sigma_\nu(s,u)-A^{\frac{1}{2}}\sigma_\nu(s,v) |^2_{L_Q} &\leq & L_1 \|u-v\|^2 + L_2 |Au-Av|^2 . \label{lip-sbis}
\end{eqnarray}
Then for $\nu>0$, the existence result and a priori bounds for the solution to \eqref{uhnu} proved in Proposition \ref{unifnu} can be improved as follows.
\begin{prop} \label{unifnu-bis} Let $\xi\in V$, let the coefficients $a,b,\mu$ defining $B$ be such that $a(1+\mu^2) +b\mu^2 =0$, and let $\sigma_\nu$ and ${\tilde \sigma}_{\nu}$ satisfy the conditions {\bf (C1)}, {\bf (C2)}, {\bf (C3)} and {\bf (C4)}. Then there exist positive constants $\bar{K}_2$ and $\nu_0$ such that for $0<K_2<\bar{K}_2$, $0< \nu\leq \nu_0$ and every $M>0$, there exists a constant $C(M)$ such that for any $h\in {\mathcal A}_M$ the solution $u^\nu_h$ to \eqref{uhnu} belongs to ${\mathcal C}([0,T],V)$ almost surely and
\begin{equation}\label{bound-V}
\sup_{h\in {\mathcal A}_M}\, \sup_{0 < \nu\leq \nu_0} {\mathbb E} \Big( \sup_{t\in [0,T]} \|u_h^\nu(t)\|^2 + \nu \int_0^T |Au^\nu_h(t)|^2 \, dt \Big) \leq C(M).
\end{equation}
\end{prop}
\begin{proof} Fix $m\geq 1$, let $P_m$ be defined by \eqref{HPm} and let $u_{m,h}^\nu(t)$ be the approximate maximal solution to the (finite dimensional) evolution equation $u^\nu_{m,h}(0) = P_m \xi$ and
\begin{eqnarray}\label{Galerkin-nu}
d u^\nu_{m,h}(t) &=& \big[-\nu P_m A u^\nu_{m,h}(t) - P_m B(u^\nu_{m,h}(t)) + P_m \tilde{\sigma}_\nu(t,u^\nu_{m,h}(t))\, h(t) \big] dt \nonumber \\
&& + \sqrt{\nu} \, P_m \, \sigma_\nu (t,u^\nu_{m,h}(t))\, dW_m(t),
\end{eqnarray}
where $W_m$ is defined by \eqref{W-n}. Proposition 3.3 in \cite{CM} proves that \eqref{Galerkin-nu} has a unique solution $u^\nu_{m,h} \in {\mathcal C}([0,T],P_m(H))$. For every $N>0$, set
\begin{equation*}
\tau_N = \inf\{t:\; \|u_{m,h}^\nu(t)\| \geq N \}\wedge T.
\end{equation*}
Since $P_m(H)\subset Dom(A)$, we may apply It\^o's formula to $\|u^\nu_{m,h}(t)\|^2$. Let $\Pi_m : H_0 \to H_0$ be defined by $\Pi_m u = \sum_{k=1}^m \big( u,e_k\big)\, e_k$ for some orthonormal basis $\{ e_k, k\geq 1\}$ of $H$ made of eigenvectors of the covariance operator $Q$; then we have:
\begin{align*}
& \|u_{m,h}^\nu(t\wedge \tau_N)\|^2 = \|P_m \xi\|^2 + 2\sqrt{\nu} \int_0^{t\wedge \tau_N}\!\! \big(A^{\frac{1}{2}} P_m\sigma_\nu(s, u_{m,h}^\nu(s)) dW_m(s) , A^{\frac{1}{2}} u^\nu_{m,h}(s)\big) \\
& \; + \nu \int_0^{t\wedge \tau_N} \!\! |P_m\sigma_\nu(s,u^\nu_{m,h}(s))\, \Pi_m|_{L_Q}^2\, ds -2 \int_0^{t\wedge \tau_N}\!\! \big( A^{\frac{1}{2}} B( u^\nu_{m,h}(s)) \, ,\, A^{\frac{1}{2}} u^\nu_{m,h}(s)\big)\, ds \\
& -2 \nu\! \int_0^{t\wedge \tau_N}\!\!\!\! \big( A^{\frac{1}{2}}P_m A u^\nu_{m,h}(s) , A^{\frac{1}{2}} u^\nu_{m,h}(s)\big) ds + 2\int_0^{t\wedge \tau_N}\!\!\!\! \big( A^{\frac{1}{2}} P_m \tilde{\sigma}_\nu(s,u^\nu_{m,h}(s)) h(s) , A^{\frac{1}{2}} u^\nu_{m,h}(s)\big) ds.
\end{align*}
Since the functions $\varphi_k$ are eigenfunctions of $A$, we have $A^{\frac{1}{2}} P_m = P_m A^{\frac{1}{2}}$ and hence $\big( A^{\frac{1}{2}} P_m Au^\nu_{m,h}(s),A^{\frac{1}{2}} u^\nu_{m,h}(s)\big) = |Au^\nu_{m,h}(s)|^2$. Furthermore, $P_m$ contracts the $H$ and the $V$ norms, and for $u\in Dom(A)$ we have $\big( B(u), Au\big) =0$ by \eqref{B(u)AuG}. Hence for $0<\epsilon =\frac{1}{2} (2-K_2)< 1$, using the Cauchy-Schwarz inequality and the conditions {\bf (C3)} and {\bf (C4)} on the coefficients $\sigma_\nu$ and $\tilde{\sigma}_\nu$, we deduce
\begin{align*}
& \|u_{m,h}^\nu(t\wedge \tau_N)\|^2 + \epsilon \nu \int_0^{t\wedge \tau_N}\!\! | Au^\nu_{m,h}(s)|^2\, ds \leq \| \xi\|^2 + \nu \int_0^{t\wedge \tau_N} \!\! \big[ K_0 + K_1 \|u^\nu_{m,h}(s)\|^2\big] \, ds \\
&\quad + 2 \sqrt{\nu} \int_0^{t\wedge \tau_N}\!\! \big(A^{\frac{1}{2}} P_m \sigma_\nu(s, u_{m,h}^\nu(s)) dW_m(s) , A^{\frac{1}{2}} u^\nu_{m,h}(s)\big) \\
& \quad + 2 \int_0^{t\wedge \tau_N} \!\! \Big\{ \Big[ \sqrt{\tilde{K}_0} + \Big( \sqrt{\tilde{K}_0} + \sqrt{\tilde{K}_1}\Big) \|u^\nu_{m,h}(s)\|^2 \Big] |h(s)|_0 + \frac{ \tilde{K}_2}{\epsilon} |h(s)|_0^2 \| u^\nu_{m,h}(s)\|^2 \Big\} \, ds.
\end{align*}
For any $t\in [0,T]$ set
\begin{eqnarray*}
I(t)&=&\sup_{0\leq s\leq t}\Big|2\sqrt{\nu} \int_0^{s\wedge \tau_N}\!\! \big(A^{\frac{1}{2}} P_m\sigma_\nu(r, u_{m,h}^\nu(r)) dW_m(r) \, ,\, A^{\frac{1}{2}} u^\nu_{m,h}(r)\big)\Big|,\\
X(t)&=&\sup_{0\leq s\leq t} \|u^\nu_{m,h}(s\wedge\tau_N)\|^2, \quad Y(t)= \int_0^{t\wedge\tau_N} |Au^\nu_{m,h}(r)|^2\, dr,\\
\varphi(t)&=& 2 \Big( \sqrt{\tilde{K}_0} + \sqrt{\tilde{K}_1}\Big) \, |h(t)|_0 + \nu K_1 + \frac{\tilde{K}_2}{\epsilon} |h(t)|_0^2.
\end{eqnarray*}
Then almost surely, $\int_0^T \varphi(t)\, dt \leq \nu K_1 T + 2\big( \sqrt{\tilde{K}_0} + \sqrt{\tilde{K}_1}\big) \sqrt{MT} + \frac{ \tilde{K}_2}{\epsilon}M :=C$.
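The estimate of ${\mathbb E} I(t)$ below relies on the Burkholder-Davis-Gundy inequality; we recall it in the form in which we use it (a classical statement, quoted here only to fix the constant, which accounts for the factors $6=2\times 3$ appearing in this section): for every continuous local martingale $(M_s)_{s\geq 0}$ with $M_0=0$ and every $t\geq 0$,
\begin{equation*}
{\mathbb E}\Big( \sup_{0\leq s\leq t} |M_s| \Big) \leq 3\, {\mathbb E}\big( \langle M\rangle_t^{1/2}\big).
\end{equation*}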
The Burkholder-Davis-Gundy inequality, conditions ({\bf C1})--({\bf C4}), and the Cauchy-Schwarz and Young inequalities yield that for $t\in[0, T]$ and $\beta >0$,
\begin{align*}
{\mathbb E} I(t) & \; \leq \; 6 \sqrt{\nu} \, {\mathbb E} \Big\{ \int_0^{t\wedge \tau_N} \big|A^{\frac{1}{2}} \sigma_\nu(s,u_{m,h}^\nu(s))\; \Pi_m |^2_{L_Q}\; \|u^\nu_{m,h}(s)\|^2 ds \Big\}^\frac12 \\
& \leq \; \beta \;{\mathbb E} \Big(\sup_{0\leq s\leq t\wedge\tau_N} \|u^\nu_{m,h}(s)\|^2 \Big) + \frac{9 \nu K_1 }{\beta} \;{\mathbb E} \int_0^{t\wedge \tau_N} \| u^\nu_{m,h}(s)\|^2\, ds\\
&\qquad + \frac{9 \nu K_0}{\beta}\;T + \frac{9\nu K_2}{\beta} {\mathbb E} \int_0^{t\wedge \tau_N} |Au^\nu_{m,h}(s)|^2 ds.
\end{align*}
Set $Z=\|\xi\|^2 + \nu_0 K_0 T + 2\sqrt{\tilde {K}_0 T M} $, $\alpha = \epsilon \nu$, $\beta=2^{-1}e^{-C}$ and $K_2 < 2^{-2} e^{-2C} (9+2^{-3}e^{-2C})^{-1}$; the previous inequality implies that the bounded function $X$ satisfies a.s. the inequality
\[ X(t)+ \alpha Y(t) \leq Z+ I(t) + \int_0^t \varphi(s)\, X(s)\, ds . \]
Furthermore, $I(t)$ is non decreasing and, setting $\delta = \frac{9\nu K_2}{\beta} \leq \alpha\, 2^{-1} e^{-C}$ and $\gamma = \frac{9\nu_0 K_1}{\beta}$ for $0<\nu\leq \nu_0$, one has
\[ {\mathbb E} I(t)\leq \beta {\mathbb E} X(t) +\gamma {\mathbb E} \int_0^t X(s)\, ds + \delta Y(t)+ \frac{9\nu_0}{\beta} K_0 T. \]
Lemma 3.2 from \cite{CM} implies that for ${K}_2$ and $\nu_0$ small enough, there exists a constant $C(M,T)$, which does not depend on $m$ and $N$, such that for $0<\nu\leq \nu_0$, $m \geq 1$ and $h\in {\mathcal A}_M$:
\[ \sup_{N>0} \, \sup_{m\geq 1} {\mathbb E}\Big[ \sup_{0\leq t\leq \tau_N} \|u^\nu_{m,h}(t)\|^2 + \nu \int_0^{\tau_N} |Au^\nu_{m,h}(t)|^2 \, dt\Big] \leq C(M,T). \]
Then, letting $N\to \infty$ and using the monotone convergence theorem, we deduce that
\begin{equation}\label{bound-Galer-bis}
\sup_{m\geq 1}\, \sup_{h\in {\mathcal A}_M} {\mathbb E}\Big[ \sup_{0\leq t\leq T} \|u^\nu_{m,h}(t)\|^2 + \nu \int_0^T |Au^\nu_{m,h}(t)|^2 \, dt \Big] < \infty.
\end{equation}
Using classical arguments we then prove the existence of a subsequence of $(u^\nu_{m,h}, m\geq 1)$ which converges weakly in $L^2([0,T]\times \Omega , V) \cap L^4([0,T]\times \Omega , {\mathcal H})$ and weak-star in $L^4(\Omega, L^\infty([0,T],H))$ to the solution $u^\nu_h$ of equation \eqref{uhnu} (see e.g. \cite{CM}, proof of Theorem 3.1). In order to complete the proof, it suffices to extract a further subsequence of $(u^\nu_{m,h}, m\geq 1)$ which converges weak-star to the same limit $u^\nu_h$ in $L^2(\Omega, L^\infty([0,T],V))$ and weakly in $L^2(\Omega\times [0,T], Dom(A))$; this is a straightforward consequence of \eqref{bound-Galer-bis}. Letting $m\to \infty$ in \eqref{bound-Galer-bis}, we conclude the proof of \eqref{bound-V}.
\end{proof}
\section{Large deviations} \label{s4}
We will prove a large deviation principle using a weak convergence approach \cite{BD00, BD07}, based on variational representations of infinite dimensional Wiener processes.
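The starting point of this approach is the following variational representation of exponential functionals of the Wiener process; we state it in the form in which it is used here and refer to \cite{BD00, BD07} for the precise assumptions and for the class of controls over which the infimum is taken. For every bounded Borel function $f:{\mathcal C}([0,T],H)\to {\mathbb R}$,
\begin{equation*}
-\log {\mathbb E}\Big[ \exp\big( -f(W)\big)\Big] = \inf_{h} \; {\mathbb E}\Big[ \frac{1}{2}\int_0^T |h(s)|_0^2\, ds + f\Big( W + \int_0^{\cdot} h(s)\, ds\Big)\Big],
\end{equation*}
where the infimum is taken over the $H_0$-valued predictable processes $h$ such that $\int_0^T |h(s)|_0^2\, ds <\infty$ a.s. The large deviation principle for $(u^\nu)$ is then deduced from the behaviour, as $\nu\to 0$, of the corresponding stochastic control equations, which is the purpose of the rest of this section.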
Let $\sigma : [0,T]\times V\to L_Q$ and, for every $\nu>0$, let $\bar{\sigma}_\nu : [0,T]\times Dom(A) \to L_Q$ satisfy the following condition:

\noindent {\bf Condition (C5)}: \\
{\it (i) There exist a positive constant $\gamma$ and non negative constants $\bar{C}$, $\bar{K}_0$, $\bar{K}_1$ and $\bar{L}_1$ such that for all $u,v \in V$ and $s, t\in [0,T]$:
\begin{align*}
| \sigma(t,u)|_{L_Q}^2 \leq \bar{K}_0 + \bar{K}_1 \, |u|^2 , &\quad \big| A^{\frac{1}{2}} \sigma(t,u)\big|^2_{L_Q} \leq \bar{K}_0 +\bar{K}_1\, \|u\|^2, \\
|\sigma(t,u)- \sigma(t,v)|^2_{L_Q} \leq \bar{L}_1\, |u-v |^2 , &\quad \big| A^{\frac{1}{2}}\sigma(t,u) - A^{\frac{1}{2}}\sigma(t,v)\big|^2_{L_Q} \leq \bar{L}_1\, \|u-v\|^2 , \\
\big| \sigma(t,u) - \sigma(s,u)\big|_{L_Q} & \leq \bar{C}\, (1+\|u\|)\, |t-s|^\gamma.
\end{align*}
(ii) There exist a positive constant $\gamma$ and non negative constants $\bar{C}$, $\bar{K}_0$, $\bar{K}_{\mathcal H}$, $\bar{K}_2$ and $\bar{L}_2$ such that for $\nu>0$, $s,t\in [0,T]$ and $u,v\in Dom(A)$,
\begin{align*}
| \bar{\sigma}_\nu(t,u)|_{L_Q}^2 \leq \big( \bar{K}_0 + \bar{K}_{\mathcal H} \, \|u\|_{\mathcal H}^2 \big) , &\quad \big| A^{\frac{1}{2}} \bar{\sigma}_\nu(t,u)\big|^2_{L_Q} \leq \big(\bar{K}_0 + \bar{K}_2\, |Au|^2\big) , \\
|\bar{\sigma}_\nu(t,u)- \bar{\sigma}_\nu(t,v)|^2_{L_Q} \leq {\bar L}_2\, \|u-v \|^2 , &\, \big| A^{\frac{1}{2}}\bar{\sigma}_\nu(t,u) - A^{\frac{1}{2}}\bar{\sigma}_\nu(t,v)\big|_{L_Q}^2 \leq \bar{L}_2\, |Au-Av|^2 , \\
\big| \bar{\sigma}_\nu(t,u) - \bar{\sigma}_\nu(s,u)\big|_{L_Q} & \leq \bar{C}\, (1+\|u\|)\, |t-s|^\gamma.
\end{align*}
}
Set
\begin{equation}\label{defs}
\sigma_\nu = \tilde{\sigma}_\nu = \sigma+\sqrt{\nu} \bar{\sigma}_\nu\; \; \mbox{\rm for }\quad \nu > 0, \quad \mbox{\rm and } \quad \tilde{\sigma}_0=\sigma.
\end{equation}
Then for $0\leq \nu \leq \nu_1$, the coefficients $\sigma_\nu$ and $\tilde{\sigma}_\nu$ satisfy the conditions {\bf (C1)}--{\bf (C4)} with
\begin{align}
& K_0=\tilde{K}_0=4\bar{K}_0,\; K_1=\tilde{K}_1=2\bar{K}_1,\; L_1=\tilde{L}_1=2 \bar{L}_1, \; \tilde{K}_2=2\bar{K}_2, \; \tilde{K}_{\mathcal H}=2\bar{K}_{\mathcal H}, \nonumber \\
& K_2=2 \big[ \bar{K}_2 \vee \big( \bar{K}_{\mathcal H} k_0^{4\alpha-2}\big) \big] \nu_1 , \; L_2= 2 \bar{L}_2 {\nu_1}\quad \mbox{\rm and } \tilde{L}_2= 2\bar{L}_2 . \label{constants}
\end{align}
Proposition \ref{unifnu-bis} and Theorem \ref{exisuniq0} prove that for some $\nu_0\in ]0,\nu_1]$, for $\bar{K}_2$ and $\bar{L}_2$ small enough, $0<\nu\leq \nu_0$ (resp. $\nu=0$), $\xi\in V$ and $h_\nu\in {\mathcal A}_M$, the following equation has a unique solution $ u_{h_\nu}^\nu$ (resp. $u^0_h$) in ${\mathcal C}([0,T],V)$: $u^\nu_{h_\nu}(0)=u^0_h(0)=\xi$, and
\begin{align}
du^\nu_{h_\nu}(t) + \big[ \nu Au^\nu_{h_\nu} (t) + B(u^\nu_{h_\nu}(t))\big] dt & = \sqrt{\nu}\, \sigma_\nu (t,u^\nu_{h_\nu}(t))\, dW(t) + \tilde{\sigma}_\nu (t,u^\nu_{h_\nu}(t)) h_\nu(t) dt, \label{equhnu} \\
du^0_h(t) + B(u^0_h(t))\, dt & = \sigma(t, u^0_h(t))\, h(t)\, dt. \label{control}
\end{align}
Recall that for any $\alpha \geq 0$, the space ${\mathcal H}_\alpha$ and the corresponding norm $\|\cdot\|_\alpha$ have been defined in \eqref{Halpha}.
When $0\leq \alpha\leq \frac{1}{4}$, we will establish, as $\nu \to 0$, a Large Deviation Principle (LDP) in ${\mathcal C}([0,T],V)$, for the topology of uniform convergence in time when $V$ is endowed with the norm $\|\,\cdot\,\|_\alpha$, for the family of distributions of the solutions $u^\nu$ to the evolution equation $u^\nu(0)=\xi\in V$,
\begin{equation} \label{evolunu}
du^\nu(t) + \big[ \nu Au^\nu(t) + B(u^\nu(t))\big]\, dt = \sqrt{\nu} \sigma_\nu (t,u^\nu(t))\, dW(t),
\end{equation}
whose existence and uniqueness in ${\mathcal C}([0,T],V)$ follow from Propositions \ref{unifnu} and \ref{unifnu-bis}. Unlike in \cite{Sundar}, \cite{DM}, \cite{MSS} and \cite{CM}, the large deviations principle is not obtained in the natural space, which here is ${\mathcal C}([0,T],V)$ under the assumptions {\bf(C5)}, because the lack of viscosity does not allow one to prove that $u^0_h(t)\in Dom(A)$ for almost every $t$. To obtain the LDP in the best possible space with the weak convergence approach, we need an extra condition, which is part of condition {\bf (C5)} when $\alpha=0$, that is, when ${\mathcal H}_\alpha=H$.
\smallskip

\noindent {\bf Condition (C6):} {\it Let $\alpha \in [0,\frac{1}{4}]$; there exists a constant $L_3$ such that for $u,v \in {\mathcal H}_\alpha$ and $t\in [0,T]$,
\begin{equation}\label{Lipalpha}
\big| A^{\alpha} \sigma(t,u)- A^{\alpha} \sigma(t,v)\big|_{L_Q} \leq L_3\, \|u-v\|_\alpha.
\end{equation}
}
Let $\mathcal{B}$ denote the Borel $\sigma$-field of the Polish space
\begin{equation}\label{defX}
{\mathcal X}= {\mathcal C}([0,T],V)\quad \mbox{\rm endowed with the norm}\quad \|u\|_{\mathcal X}:=\sup_{0\leq t\leq T} \|u(t)\|_\alpha,
\end{equation}
where $\| \, \cdot\, \|_{_\alpha}$ is defined by \eqref{Halpha}. We first recall some classical definitions; by convention, the infimum over an empty set is $ +\infty$.
\begin{defn} The random family $(u^\nu )$ is said to satisfy a large deviation principle on ${\mathcal X}$ with the good rate function $I$ if the following conditions hold:\\
\indent \textbf{$I$ is a good rate function.} The function $I: {\mathcal X} \to [0, \infty]$ is such that for each $M\in [0,\infty[$ the level set $\{\phi \in {\mathcal X}: I(\phi) \leq M \}$ is a compact subset of ${\mathcal X}$. \\
For $A\in \mathcal{B}$, set $I(A)=\inf_{u \in A} I(u)$.\\
\indent \textbf{Large deviation upper bound.} For each closed subset $F$ of ${\mathcal X} $:
\[ \limsup_{\nu\to 0}\; \nu \log {\mathbb P}(u^\nu \in F) \leq -I(F). \]
\indent \textbf{Large deviation lower bound.} For each open subset $G$ of ${\mathcal X} $:
\[ \liminf_{\nu\to 0}\; \nu \log {\mathbb P}(u^\nu \in G) \geq -I(G). \]
\end{defn}
Let ${\mathcal C}_0=\{ \int_0^. h(s)ds \, :\, h\in L^2([0,T], H_0)\} \subset {\mathcal C}([0, T], H_0)$. Given $\xi\in V$, define ${\mathcal G}_\xi^0: {\mathcal C}([0, T], H_0) \to {\mathcal X} $ by $ {\mathcal G}_\xi^0(g)=u_h^0 $ if $ g=\int_0^. h(s)ds \in {\mathcal C}_0$, where $u_h^0$ is the solution to the (inviscid) control equation \eqref{control} with initial condition $\xi$, and by ${\mathcal G}^0_\xi(g)=0$ otherwise. The following theorem is the main result of this section.
\begin{theorem}\label{PGDunu} Let $\alpha \in [0,\frac{1}{4}]$, suppose that the constants $a,b,\mu$ defining $B$ are such that $a(1+\mu^2) +b\mu^2=0$, let $\xi\in V$, and let $\sigma_\nu$ and $\tilde{\sigma}_\nu$ be defined for $\nu>0$ by \eqref{defs} with coefficients $\sigma$ and $\bar{\sigma}_\nu$ satisfying the conditions ({\bf C5}) and {\bf (C6)} for this value of $\alpha$. Then the solution $(u^\nu)_{\nu>0}$ to \eqref{evolunu} with initial condition $\xi$ satisfies a large deviation principle in ${\mathcal X}:={\mathcal C}([0,T],V)$, endowed with the norm $\|u\|_{\mathcal X} := \sup_{0\leq t\leq T} \|u(t)\|_\alpha$, with the good rate function
\begin{eqnarray} \label{ratefc}
I (u)= \inf_{\{h \in L^2(0, T; H_0): \; u ={\mathcal G}_\xi^0(\int_0^. h(s)ds) \}} \Big\{\frac12 \int_0^T |h(s)|_0^2\, ds \Big\}.
\end{eqnarray}
\end{theorem}
We first prove the following technical lemma, which studies time increments of the solution to the stochastic control problem \eqref{equhnu}; this problem extends both \eqref{evolunu} and \eqref{control}. To state this lemma, we need the following notations. For every integer $n$, let $\psi_n : [0,T]\to [0,T]$ denote a measurable map such that $s\leq \psi_n(s) \leq \big(s+c2^{-n}\big)\wedge T$ for some positive constant $c$ and for every $s\in [0,T]$. Given $N>0$ and $h_\nu\in {\mathcal A}_M$, for $t\in [0,T]$ and $\nu\in [0,\nu_0]$, let
\[ G_N^\nu(t)=\Big\{ \omega \, :\, \Big (\sup_{0\leq s\leq t} \|u_h^\nu(s)(\omega)\|^2 \Big)\vee \Big( \int_0^t |A u_h^\nu(s)(\omega)|^2 ds \Big) \leq N\Big\}.\]
\begin{lemma} \label{timeincrement} Let $a,b,\mu$ satisfy the condition $a(1+\mu^2)+b\mu^2=0$. Let $\nu_0, M,N$ be positive constants, let $\sigma$ and $\bar{\sigma}_\nu$ satisfy condition {\bf (C5)}, and let $\sigma_\nu$ and $\tilde{\sigma}_\nu$ be defined by \eqref{defs} for $\nu\in [0,\nu_0]$. For every $\nu \in ]0,\nu_0]$, let $\xi\in L^4(\Omega;H)\cap L^2(\Omega;V)$, $h_\nu \in {\mathcal A}_M$ and let $u_{h_\nu}^\nu(t)$ denote the solution to \eqref{equhnu}. For $\nu=0$, let $\xi\in V$, $h\in {\mathcal A}_M$ and let $u_h^0(t)$ denote the solution to \eqref{control}. Then there exists a positive constant $C$ (depending on $K_i, \tilde{K}_i, L_i, \tilde{L}_i, T, M, N, \nu_0$) such that:
\begin{align} \label{timenu}
I_n(h_\nu ,\nu):&={\mathbb E}\Big[ 1_{G_N^\nu(T)} \int_0^T\!\! \|u_{h_\nu}^\nu(s)- u_{h_\nu}^\nu(\psi_n(s))\|^2 \, ds\Big] \leq C\, 2^{-\frac{n}{2}} \; \mbox{ \rm for } 0<\nu\leq \nu_0, \\
I_n(h,0):&= 1_{G_N^0(T)} \int_0^T \|u_h^0(s)-u_h^0(\psi_n(s))\|^2 \, ds \leq C\, 2^{-n } \; \mbox{ \rm a.s. for } \nu=0. \label{time0}
\end{align}
\end{lemma}
\begin{proof} For $\nu>0$, the proof is close to that of Lemma 4.2 in \cite{DM}. Let $\nu \in ]0,\nu_0]$ and $h_\nu\in {\mathcal A}_M$; for any $s\in [0,T]$, It\^o's formula yields
\[ \|u_{h_\nu}^\nu(\psi_n(s))-u_{h_\nu}^\nu(s)\|^2 =2\int_s^{\psi_n(s)} \!\!\! \big(A \big[ u_{h_\nu}^\nu(r)-u_{h_\nu}^\nu(s)\big] , d u_{h_\nu}^\nu(r)\big) +\nu \int_s^{\psi_n(s)} \!\!\! |A^{\frac{1}{2}}\sigma_\nu (r,u_{h_\nu}^\nu (r))|^2_{L_Q}d r . \]
Therefore $I_n(h_\nu,\nu)= \sum_{1\leq i\leq 5} I_{n,i}(h_\nu,\nu)$, where
\begin{align*}
I_{n,1}(h_\nu,\nu)&=2\, \sqrt{\nu}\; {\mathbb E}\Big( 1_{G_N^\nu(T)} \int_0^T\!\! \!\! ds \int_s^{\psi_n(s)}\!
\big( A^{\frac{1}{2}} \sigma_\nu(r, u_{h_\nu}^\nu(r)) dW(r) \, , \, A^{\frac{1}{2}} \big[u_{h_\nu}^\nu(r)-u_{h_\nu}^\nu(s)\big] \big)\Big) , \\
I_{n,2}(h_\nu,\nu)&={\nu}\; {\mathbb E} \Big( 1_{G_N^\nu(T)} \int_0^T \!\!ds \int_s^{\psi_n(s)} \!\! |A^{\frac{1}{2}} \sigma_\nu(r,u_{h_\nu}^\nu(r))|_{L_Q}^2 \, dr\Big) , \\
I_{n,3}(h_\nu,\nu)&=-2 \, {\mathbb E} \Big( 1_{G_N^\nu(T)} \int_0^T \!\! ds \int_s^{\psi_n(s)} \!\! \big\langle A^{\frac{1}{2}} B( u_{h_\nu}^\nu(r))\, , \, A^{\frac{1}{2}}\big[ u_{h_\nu}^\nu(r)-u_{h_\nu}^\nu(s)\big] \big\rangle \, dr\Big) , \\
I_{n,4}(h_\nu,\nu)&=- 2 \,\nu\, {\mathbb E} \Big( 1_{G_N^\nu(T)} \int_0^T \!\! ds \int_s^{\psi_n(s)} \!\! \big\langle A^{\frac{3}{2}} \, u_{h_\nu}^\nu(r)\, , \, A^{\frac{1}{2}}\big[ u_{h_\nu}^\nu(r)-u_{h_\nu}^\nu(s)\big] \big\rangle \, dr\Big) , \\
I_{n,5}(h_\nu,\nu)&=2 \, {\mathbb E} \Big( 1_{G_N^\nu(T)} \int_0^T \!\!\! ds \int_s^{\psi_n(s)} \!\!\! \big( A^{\frac{1}{2}} \tilde{\sigma}_\nu(r,u_{h_\nu}^\nu(r)) \, h_{\nu}(r)\, , \, A^{\frac{1}{2}}\big[ u_{h_\nu}^\nu(r)-u_{h_\nu}^\nu(s) \big] \big)\, dr\Big) .
\end{align*}
Clearly $G_N^\nu(T)\subset G_N^\nu(r)$ for $r\in [0,T]$. Furthermore, $\|u_h^\nu(r)\|^2 \vee \|u_h^\nu(s)\|^2 \le N$ on $G_N^{\nu}(r)$ for $0\leq s\leq r\leq T$. \par
The Burkholder-Davis-Gundy inequality and ({\bf C5}) yield for $0< \nu \leq \nu_0$
\begin{align*}
|I_{n,1}(h_\nu,\nu)|&\leq 6\sqrt{\nu} \int_0^T ds \; {\mathbb E} \Big( \int_s^{\psi_n(s)} \big|A^{\frac{1}{2}}\sigma_\nu(r,u_{h_\nu}^\nu(r))\big|_{L_Q}^2 1_{G_N^\nu(r)}\, \| u_{h_\nu}^\nu(r)- u_{h_\nu}^\nu(s)\|^2 \; dr \Big)^{\frac{1}{2}} \\
&\leq 6 \sqrt{2 \nu_0 N } \int_0^T ds \; {\mathbb E} \Big( \int_s^{\psi_n(s)} \big[ K_0+K_1\, \|u_{h_\nu}^\nu(r)\|^2 + K_2\, |A u_{h_\nu}^\nu(r)|^2 \big]\; \, dr \Big)^{\frac{1}{2}}.
\end{align*}
The Cauchy-Schwarz inequality and the Fubini theorem, as well as \eqref{bound-V}, which holds uniformly in $\nu \in ]0,\nu_0]$ for a small enough fixed $\nu_0>0$, imply
\begin{align} \label{In1}
|I_{n,1}(h_\nu,\nu)|&\leq 6 \sqrt{2 \nu_0 N T } \Big[ {\mathbb E} \int_0^T\!\! \big[ K_0+K_1\, \|u_{h_\nu}^\nu(r)\|^2 + K_2 |Au_{h_\nu}^\nu(r)|^2\big] \Big( \int_{(r-c2^{-n})\vee 0}^r ds\Big) dr \Big]^{\frac{1}{2}} \nonumber \\
&\leq C_1\, \sqrt{N}\, 2^{-\frac{n}{2}}
\end{align}
for some constant $C_1$ depending only on $K_i$, $i=0,1,2$, $L_j$, $ j=1,2$, $M$, $\nu_0$ and $T$. The property ({\bf C5}) and Fubini's theorem imply that for $0 < \nu\leq \nu_0$,
\begin{align} \label{In2}
|I_{n,2}(h_\nu,\nu)|&\leq \nu \, {\mathbb E} \Big( 1_{G_N^\nu(T)} \int_0^T \!\! ds \int_s^{\psi_n(s)} \!\! \big[ K_0+K_1\|u_{h_\nu}^\nu(r)\|^2 + K_2 |Au_{h_\nu}^\nu(r)|^2 \big] dr\Big) \nonumber \\
&\leq \nu_0 {\mathbb E} \int_0^T \big[ K_0 +K_1\, \|u^\nu_{h_\nu}(r)\|^2 + K_2 |A u_{h_\nu}^\nu(r)|^2 \big] \, c 2^{-n}\, dr \; \leq\; C_1 2^{-n}
\end{align}
for some constant $C_1$ as above. Since $\big\langle B(u), Au\big\rangle=0$ and $\| B(u)\|\leq C \|u\|^2$ for $u\in V$ by \eqref{boundB1}, we deduce that
\begin{align} \label{In3}
|I_{n,3}&(h_\nu,\nu)| \leq 2 {\mathbb E} \Big( 1_{G_N^\nu(T)}\!\! \int_0^T \!\!\! ds \int_s^{\psi_n(s)} \!\!\! dr\, \big| \big( A^{\frac{1}{2}} B(u^\nu_{h_\nu}(r)) \, ,\, A^{\frac{1}{2}} u^\nu_{h_\nu}(s)\big)\big| \Big) \nonumber \\
&\leq 2C {\mathbb E} \Big( 1_{G_N^\nu(T)}\!\! \int_0^T \!\!\!
ds \int_s^{\psi_n(s)} \!\! \|u^\nu_{h_\nu}(r)\|^2 \, \| u^\nu_{h_\nu}(s)\| \, dr \Big) \leq 2C \, N^{\frac{3}{2}}\, T^2 2^{-n} .
\end{align}
Using the Cauchy-Schwarz inequality and \eqref{bound-V} we deduce that
\begin{eqnarray} \label{In4}
I_{n,4}(h_\nu,\nu) &\leq & 2 \, \nu\, {\mathbb E} \Big( 1_{G_N^\nu(T)} \int_0^T \!\! \! ds \int_s^{\psi_n(s)} \!\!\! dr \big[ - |Au_{h_\nu}^\nu(r)|^2 + |A u_{h_\nu}^\nu(r)|\, |A u_{h_\nu}^\nu(s)|\big]\Big) \nonumber \\
&\leq & \frac{\nu}{2}\; {\mathbb E} \Big( \int_0^T ds \; |A u^\nu_{h_\nu}(s)|^2 \, \int_s^{\psi_n(s)} dr \Big) \leq C_1 2^{-n}
\end{eqnarray}
for some constant $C_1$ as above.\\
Finally, the Cauchy-Schwarz inequality, Fubini's theorem, ({\bf C5}) and the definition of ${\mathcal A}_M$ yield
\begin{align} \label{In5}
& |I_{n,5}(h_\nu,\nu)|\leq 2 \; {\mathbb E} \Big( 1_{G_N^\nu(T)} \int_0^T \!\! ds \int_s^{\psi_n(s)} dr \nonumber \\
&\quad \qquad \big[ \tilde{K}_0 +\tilde{K}_1\|u_{h_\nu}^\nu(r)\|^2 +\nu \tilde{K}_2 |A u_{h_\nu}^\nu(r)|^2 \big]^{\frac{1}{2}}\, |h_\nu(r)|_0 \, \|u_{h_\nu}^\nu(r)-u_{h_\nu}^\nu(s)\|\, \Big) \nonumber \\
& \quad \leq 4 \sqrt{N} \; {\mathbb E} \Big( 1_{G_N^\nu(T)}\, \big( \tilde{K}_0 + \tilde{K}_1 N\big)^{\frac{1}{2}} \int_{0}^T |h_\nu(r)|_0 \, \Big( \int_{(r-c2^{-n})\vee 0}^r ds \Big)\, dr \Big) \nonumber \\
& \quad \qquad + 4\sqrt{N} {\mathbb E} \Big( 1_{G_N^\nu(T)}\, \sqrt{\nu_0 \tilde{K}_2} \int_0^T |Au^\nu_{h_\nu}(r)|\, |h_\nu(r)|_0 \Big( \int_{(r-c2^{-n})\vee 0}^r ds \Big)\, dr \Big) \nonumber \\
& \quad \leq 4\sqrt{N} \Big[ \sqrt{M T } \big( \tilde{K}_0 + \tilde{K}_1 N\big)^{\frac{1}{2}} + \big(\nu_0\, \tilde{K}_2 \, NM\big)^{\frac{1}{2}}\Big] \, c\, T 2^{-n} \leq C(\nu_0,N,M,T)\, 2^{-n}.
\end{align}
Collecting the upper estimates \eqref{In1}-\eqref{In5}, we conclude the proof of \eqref{timenu} for $0<\nu\leq \nu_0$. \par
Let $h\in {\mathcal A}_M$; a similar argument for $\nu=0$ yields, for almost every $\omega$,
\[ 1_{G_N^0(T)} \int_0^T \|u^0_h(\psi_n(s)) -u^0_h(s)\|^2 \, ds \leq \sum_{j=1,2} I_{n,j}(h,0),\]
with
\begin{eqnarray*}
I_{n,1}(h,0)&=&-2 \, 1_{G_N^0(T)} \int_0^T \!\! ds \int_s^{\psi_n(s)} \!\! \big\langle A^{\frac{1}{2}} B( u_h^0(r))\, , \, A^{\frac{1}{2}}\big[ u_h^0(r)-u_h^0(s)\big] \big\rangle \, dr , \\
I_{n,2}(h,0)&=&2 \, 1_{G_N^0(T)} \int_0^T \!\! ds \int_s^{\psi_n(s)} \!\! \big( A^{\frac{1}{2}} \tilde{\sigma}_0(r,u_h^0(r)) \, h(r)\, , \, A^{\frac{1}{2}}\big[ u_h^0(r)-u_h^0(s) \big] \big)\, dr .
\end{eqnarray*}
An argument similar to the one which gives \eqref{In3} proves
\begin{equation}\label{In01}
|I_{n,1}(h,0)|\leq C(T,N) \, 2^{-n}.
\end{equation}
The Cauchy-Schwarz inequality and {\bf (C5)} imply
\begin{align}\label{In02}
|I_{n,2}&(h,0)|\leq 2 \; 1_{G_N^0(T)} \int_0^T \!\! ds \int_s^{\psi_n(s)} dr \big(\tilde{K}_0 +\tilde{K}_1\|u_h^0(r)\|^2 \big)^{\frac{1}{2}}\, |h(r)|_0 \, \|u_h^0(r)-u_h^0(s)\| \nonumber \\
& \leq 4 \sqrt{N} \; \big( \tilde{K}_0 + \tilde{K}_1 N\big)^{\frac{1}{2}} \int_{0}^T |h(r)|_0 \, \Big( \int_{(r-c2^{-n})\vee 0}^r ds \Big)\, dr \leq C(N,M,T)\, 2^{-n}.
\end{align}
The inequalities \eqref{In01} and \eqref{In02} conclude the proof of \eqref{time0}.
\end{proof} \par
We now return to the setting of Theorem~\ref{PGDunu}.
Let $\nu_0\in ]0,\nu_1]$ be given by Proposition \ref{unifnu} and Proposition \ref{unifnu-bis}, and let $(h_\nu , 0 < \nu \leq \nu_0)$ be a family of random elements taking values in the set ${\mathcal A}_M$ defined by \eqref{AM}. Let $u^\nu_{h_\nu}$ be the solution of the corresponding stochastic control equation \eqref{equhnu} with initial condition $u^\nu_{h_\nu}(0)=\xi \in V$. Note that $u^\nu_{h_\nu}={\mathcal G}^\nu_\xi\Big(\sqrt{\nu} \big( W_. + \frac{1}{\sqrt \nu} \int_0^. h_\nu(s)ds\big) \Big)$ due to the uniqueness of the solution. The following proposition establishes the weak convergence of the family $(u^\nu_{h_\nu})$ as $\nu\to 0$. Its proof is similar to that of Proposition 4.5 in \cite{CM}; see also Proposition 3.3 in \cite{DM}.
\begin{prop} \label{weakconv} Let $a,b,\mu$ be such that $a(1+\mu^2)+b\mu^2=0$. Let $\alpha\in [0,\frac{1}{4}]$, let $\sigma$ and $\bar{\sigma}_\nu$ satisfy the conditions ({\bf C5}) and {\bf (C6)} for this value of $\alpha$, and let $\sigma_\nu$ and $\tilde{\sigma}_\nu$ be defined by \eqref{defs}. Let $\xi $ be ${\mathcal F}_0$-measurable such that ${\mathbb E} \big(|\xi|_H^4+\|\xi\|^2\big)< \infty$, and let $h_\nu$ converge to $h$ in distribution as random elements taking values in ${\mathcal A}_M$, where this set is defined by \eqref{AM} and endowed with the weak topology of the space $L^2(0,T;H_0)$. Then as $\nu \to 0$, the solution $u^\nu_{h_\nu}$ of \eqref{equhnu} converges in distribution in ${\mathcal X}$ (defined by \eqref{defX}) to the solution $u_h^0$ of \eqref{control}. That is, as $\nu\to 0$, the process ${\mathcal G}^\nu_\xi \Big(\sqrt{\nu} \big( W_. + \frac{1}{\sqrt{\nu}} \int_0^. h_\nu(s)ds\big) \Big)$ converges in distribution to $ {\mathcal G}^0_\xi \big(\int_0^. h(s)ds\big)$ in ${\mathcal C}([0,T],V)$, for the topology of uniform convergence on $[0,T]$ when $V$ is endowed with the norm $\|\,\cdot\,\|_\alpha$.
\end{prop}
\begin{proof} Since ${\mathcal A}_M$ is a Polish space (complete separable metric space), by the Skorokhod representation theorem we can construct processes $(\tilde{h}_\nu, \tilde{h}, \tilde{W})$ such that the joint distribution of $(\tilde{h}_\nu, \tilde{W})$ is the same as that of $(h_\nu, W)$, the distribution of $\tilde{h}$ coincides with that of $h$, and $ \tilde{h}_\nu \to \tilde{h}$, a.s., in the (weak) topology of $S_M$. Hence a.s. for every $t\in [0,T]$, $\int_0^t \tilde{h}_\nu(s) ds - \int_0^t \tilde{h}(s)ds \to 0$ weakly in $H_0$. To ease notation, we will write $(\tilde{h}_\nu, \tilde{h}, \tilde{W})=(h_\nu,h,W)$. Let $U_\nu=u^\nu_{h_\nu}-u_h^0\in {\mathcal C}([0,T],V)$; then $U_\nu(0)=0$ and
\begin{align}\label{difference2}
d U_\nu(t) = - \big[\nu Au^\nu_{h_\nu}(t) & +B(u^\nu_{h_\nu}(t))-B(u_h^0(t))\big]\, dt + \big[\sigma(t,u^\nu_{h_\nu}(t)) h_\nu(t) -\sigma(t,u_h^0(t)) h(t)\big]\, dt \nonumber\\
& +\sqrt{\nu} \; \sigma_\nu(t,u^\nu_{h_\nu}(t))\, dW(t) + \sqrt{\nu}\, \bar{\sigma}_\nu(t, u^\nu_{h_\nu}(t))\, h_\nu(t)\, dt.
\end{align}
On any finite time interval $[0, t]$ with $t\leq T$, It\^o's formula yields for $\nu >0 $ and $\alpha\in [0,\frac{1}{2}]$:
\begin{align*}
& \|U_\nu(t)\|_\alpha^2 = -2\nu \int_0^t \!\!\! \big( A^{1+\alpha} u^\nu_{h_\nu}(s) , A^\alpha U_\nu(s)\big) ds -2\int_0^t \!\! \!
\big\langle A^\alpha \big[ B(u^\nu_{h_\nu}(s))- B(u^0_{h}(s))\big] , A^\alpha U_\nu(s)\big\rangle ds\\
& \quad + 2\sqrt{\nu}\int_0^t \big(A^\alpha \sigma_\nu(s,u^\nu_{h_\nu}(s)) dW(s)\, ,\, A^\alpha U_\nu(s) \big) + \nu \int_0^t|A^\alpha \sigma_\nu(s,u^\nu_{h_\nu}(s))|^2_{L_Q}\, ds \\
&\quad + 2\sqrt{\nu} \int_0^t \big( A^\alpha \bar{\sigma}_\nu(s,u^\nu_{h_\nu}(s))\, h_\nu(s)\,,\, A^\alpha U_\nu(s)\big)\, ds \\
& \quad + 2\int_0^t \big(A^\alpha \big[\sigma(s,u^\nu_{h_\nu}(s))h_{\nu}(s)-\sigma(s,u^0_h(s))\, h(s)\big]\, ,\, A^\alpha U_\nu(s) \big)\, ds.
\end{align*}
Furthermore, $\big( A^\alpha \bar{\sigma}_\nu(s,u^\nu_{h_\nu}(s)) h_\nu(s)\, ,\, A^\alpha U_\nu(s)\big) = \big( \bar{\sigma}_\nu(s,u^\nu_{h_\nu}(s)) h_\nu(s)\, ,\, A^{2 \alpha} U_\nu(s)\big)$. The Cauchy-Schwarz inequality, conditions {\bf (C5)} and {\bf (C6)}, \eqref{AalphaB} and \eqref{compcalH} yield, since $\alpha \in [0,\frac{1}{4}]$,
\begin{align}\label{total-error}
& \|U_\nu(t)\|_\alpha^2 \leq 2\nu \int_0^t \!\! \big| A^{\frac{1}{2}+2\alpha} u^\nu_{h_\nu}(s)\big| \, \big( \|u^\nu_{h_\nu}(s)\| + \|u^0_h(s)\|\big)\, ds \nonumber \\
& \quad + 2 C \int_0^t \!\! \|U_\nu(s)\|_\alpha^2 \, \big( \|u^\nu_{h_\nu}(s)\| + \|u^0_h(s)\|\big)\, ds + 2\sqrt{\nu}\int_0^t \!\! \big( \sigma_\nu(s,u^\nu_{h_\nu}(s)) dW(s)\, ,\, A^{2\alpha} U_\nu(s) \big) \nonumber \\
&\quad + \nu \int_0^t\big[K_0 + K_1 \|u^\nu_{h_\nu}(s)\|^2 + K_2 |Au^\nu_{h_\nu}(s)|^2 \big] \, ds \nonumber \\
&\quad + 2\sqrt{\nu} \int_0^t \Big[ \sqrt{\tilde{K}_0} + k_0^{-\frac{1}{2}} \sqrt{\tilde{K}_{\mathcal H}} \|u^\nu_{h_\nu}(s)\|\Big]\, |h_\nu(s)|_0\, k_0^{4\alpha -1} \big( \|u^\nu_{h_\nu}(s)\|+ \|u^0_h(s)\|\big)\, ds \nonumber \\
&\quad + 2\int_0^t \big(A^\alpha \big[\sigma(s,u^\nu_{h_\nu}(s))-\sigma(s,u^0_h(s))\big] h_\nu(s) \, ,\, A^\alpha U_\nu(s) \big)\, ds \nonumber \\
& \quad + 2\int_0^t \big(A^\alpha \sigma(s,u^0_h(s)) \, \big[ h_\nu(s)-h(s)\big]\, ,\, A^\alpha U_\nu(s)\big)\, ds \nonumber \\
& \leq 2 \int_0^t \!\! \|U_\nu(s)\|^2_\alpha \, \big[C \|u^\nu_{h_\nu}(s)\|^2 +C \|u^0_h(s)\|^2 + L_3 |h_\nu(s)|_0\big] \, ds + \sum_{1\leq j\leq 5} T_j(t,\nu),
\end{align}
where, using again the fact that $\alpha\leq \frac{1}{4}$, we have
\begin{align*}
T_1(t,\nu)&=2\nu \, \sup_{s\leq t} \big[ \|u^\nu_{h_\nu}(s)\| + \|u^0_h(s)\|\big]\, \int_0^t |A u^\nu_{h_\nu}(s)|\, ds , \\
T_2(t,\nu)&= 2\sqrt{\nu}\int_0^t \big( \sigma_\nu(s,u^\nu_{h_\nu}(s)) dW(s)\, ,\, A^{2\alpha} U_\nu(s) \big), \\
T_3(t,\nu)&= \nu \int_0^t \big[K_0 + K_1 \|u^\nu_{h_\nu}(s)\|^2 + K_2 |Au^\nu_{h_\nu}(s)|^2 \big] \, ds,\\
T_4(t,\nu)&= 2\sqrt{\nu} \, k_0^{2\alpha -1} \int_0^t \!\! \Big[ \sqrt{\tilde{K}_0} + k_0^{-\frac{1}{2}} \, \sqrt{\tilde{K}_{\mathcal H}}\, \|u^\nu_{h_\nu}(s)\|\Big] \, |h_\nu(s)|_0 \big( \|u^\nu_{h_\nu}(s)\|+ \|u^0_h(s)\|\big) ds,\\
T_5(t,\nu)&= 2\int_0^t \Big( \sigma(s,u^0_h(s))\, \big( h_{\nu}(s)-h(s)\big), \, A^{2\alpha} U_\nu(s)\Big)\, ds.
\end{align*}
We want to show that, as $\nu \to 0$, $\sup_{t\in [0,T]} \|U_\nu(t)\|_\alpha \to 0$ in probability, which implies that $u^\nu_{h_\nu} \to u^0_h$ in distribution in ${\mathcal X}$.
Fix $N>0$ and for $t\in [0,T]$ let
\begin{eqnarray*}
G_N(t)&=&\Big\{ \sup_{0\leq s\leq t} \|u^0_h(s)\|^2 \leq N\Big\}, \\
G_{N,\nu}(t)&=& G_N(t)\cap \Big\{ \sup_{0\leq s\leq t} \|u^\nu_{h_\nu}(s)\|^2 \leq N\Big\} \cap \Big\{ \nu \int_0^t |A u^\nu_{h_\nu}(s)|^2 ds \leq N \Big\}.
\end{eqnarray*}
The proof consists of two steps.\\
\textbf{Step 1:} For $\nu_0 >0$ given by Proposition \ref{unifnu-bis} and Theorem \ref{exisuniq0}, we have
\[ \sup_{0<\nu\leq \nu_0}\; \sup_{h,h_\nu \in {\mathcal A}_M} {\mathbb P}(G_{N,\nu}(T)^c )\to 0 \quad\mbox{\rm as }\; N\to+\infty.\]
Indeed, for $\nu \in ]0,\nu_0]$ and $h,h_\nu \in {\mathcal A}_M$, the Markov inequality and the a priori estimates \eqref{boundu0} and \eqref{bound-V}, which hold uniformly in $\nu\in ]0,\nu_0]$, imply that for $0<\nu\leq \nu_0$,
\begin{align}\label{G-c}
{\mathbb P} ( G_{N,\nu}(T)^c ) & \leq \frac{1}{N} \sup_{h, h_\nu \in {\mathcal A}_M} {\mathbb E} \Big( \sup_{0\leq s\leq T} \|u^0_h(s)\|^2 + \sup_{0\leq s\leq T} \|u^\nu_{h_\nu}(s)\|^2 + \nu \int_0^T|A u^\nu_{h_\nu}(s)|^2 \, ds \Big) \nonumber \\
&\leq {C \, \big(1+ {\mathbb E}|\xi|^4 + {\mathbb E}\|\xi\|^2 \big)}\, {N}^{-1},
\end{align}
for some constant $C$ depending on $T$ and $M$, but independent of $N$ and $\nu$.

\noindent \textbf{Step 2:} Fix $N>0$ and let $h, h_\nu \in {\mathcal A}_M$ be such that $h_\nu\to h$ a.s. in the weak topology of $L^2(0,T;H_0)$ as $\nu \to 0$. Then one has:
\begin{equation} \label{cv1}
\lim_{\nu\to 0} {\mathbb E}\Big[ 1_{G_{N,\nu}(T)} \sup_{0\leq t\leq T } \|U_\nu(t)\|_\alpha^2 \Big] = 0.
\end{equation}
Indeed, \eqref{total-error} and Gronwall's lemma imply that on $G_{N,\nu}(T)$, one has for $0<\nu\leq \nu_0$:
\begin{equation} \label{*}
\sup_{0\leq t\leq T} \|U_\nu(t)\|_\alpha^2 \leq \exp\Big(4NCT +2L_3 \sqrt{MT}\Big) \sum_{1\leq j\leq 5} \; \sup_{0\leq t\leq T} T_j(t,\nu) \, .
\end{equation}
The Cauchy-Schwarz inequality implies that for some constant $C(N,T)$ independent of $\nu$:
\begin{align}\label{T1n}
{\mathbb E} \Big( 1_{G_{N,\nu}(T)} \sup_{0\leq t\leq T}|T_1(t,\nu)|\Big) &\leq 4 \sqrt{TN}\, \sqrt{\nu}\, {\mathbb E}\Big( 1_{G_{N,\nu}(T)} \Big\{ \int_0^T |Au^\nu_{h_\nu}(s)|^2\, ds \Big\}^{\frac{1}{2}} \Big) \nonumber \\
& \leq C(N,T) \, \sqrt{\nu}.
\end{align}
Since the sets $G_{N,\nu}(\cdot)$ decrease, the Burkholder-Davis-Gundy inequality, $\alpha\leq \frac{1}{4}$, the inequality \eqref{compcalH} and ({\bf C5}) imply that for some constant $C(N,T)$ independent of $\nu$:
\begin{align}
&{\mathbb E} \Big(\! 1_{G_{N,\nu}(T)} \sup_{0\leq t\leq T} |T_2(t,\nu)|\Big) \leq 6\sqrt{\nu} \; {\mathbb E} \Big\{\!\! \int_0^T 1_{G_{N,\nu}(s)} \, k_0^{4(2\alpha - \frac{1}{2})}\, \|U_\nu(s)\|^2 \; |\sigma_\nu(s, u^\nu_{h_\nu}(s))|^2_{L_{Q}} ds\Big\}^\frac12 \nonumber \\
&\leq 6 \sqrt{\nu} k_0^{2(2\alpha - \frac{1}{2})} {\mathbb E} \Big\{ \!\!\int_0^T\!\! \! 1_{G_{N,\nu}(s)} 4N\, (K_0+ K_1\| u^\nu_{h_\nu}(s) \|^2 + K_2 |Au^\nu_{h_\nu}(s)|^2 ) ds\Big\}^\frac{1}{2} \leq C(T,N) \sqrt{\nu}. \label{T2n}
\end{align}
The Cauchy-Schwarz inequality implies
\begin{equation}\label{T3n}
{\mathbb E} \Big( 1_{G_{N,\nu}(T)} \sup_{0\leq t\leq T} |T_4(t,\nu)|\Big) \leq \sqrt{\nu} \, C(N,M,T).
\end{equation}
The definition of $G_{N,\nu}(T)$ implies that
\begin{equation}\label{T4n}
{\mathbb E} \Big( 1_{G_{N,\nu}(T)} \sup_{0\leq t\leq T} |T_3(t,\nu)|\Big) \leq C\, T\, N\,{\nu}.
\end{equation}
The inequalities \eqref{*}--\eqref{T4n} show that the proof of \eqref{cv1} reduces to checking that
\begin{equation} \label{T5n}
\lim_{\nu\to 0}\; {\mathbb E} \Big( 1_{G_{N,\nu}(T)} \sup_{0\leq t\leq T} |T_5(t,\nu)|\Big)= 0\, .
\end{equation}
In the estimates below we use Lemma~\ref{timeincrement} with $\psi_n=\bar{s}_n$, where $\bar{s}_n$ is the step function defined by $\bar{s}_n = kT2^{-n}$ for $(k-1)T2^{-n} \leq s<kT2^{-n}$. For any $n,N \geq 1$, if we set $t_k=kT2^{-n}$ for $0\leq k\leq 2^n$, we obviously have
\begin{equation}\label{t5}
{\mathbb E}\Big( 1_{G_{N,\nu}(T)}\sup_{0\leq t\leq T} |T_5(t,\nu)| \Big) \leq 2\; \sum_{1\leq i\leq 4} \tilde{T}_i(N,n, \nu)+ 2 \; {\mathbb E} \big( \bar{T}_5(N,n,\nu)\big),
\end{equation}
where
\begin{align*}
\tilde{T}_1(N,n,\nu)=& {\mathbb E} \Big[ 1_{G_{N,\nu}(T)} \sup_{0\leq t\leq T} \Big| \int_0^t \Big( \sigma(s,u^0_h(s)) \big( h_\nu(s)-h(s)\big) \, ,\, A^{2\alpha}\big[ U_\nu(s)- U_\nu(\bar{s}_n)\big] \Big) ds\Big| \Big] ,\\
\tilde{T}_2 (N,n,\nu)=& {\mathbb E}\Big[ 1_{G_{N,\nu}(T)} \\
& \quad \times\sup_{0\leq t\leq T} \Big| \int_0^t \Big( [\sigma(s,u^0_h(s)) - \sigma(\bar{s}_n, u^0_h(s))] (h_\nu(s)-h(s))\, ,\, A^{2\alpha} U_\nu(\bar{s}_n)\Big)ds \Big|\Big], \\
\tilde{T}_3 (N,n,\nu)=& {\mathbb E}\Big[ 1_{G_{N,\nu}(T)} \\
& \times \sup_{0\leq t\leq T} \Big| \int_0^t \Big( \big[ \sigma(\bar{s}_n,u^0_h(s)) - \sigma(\bar{s}_n, u^0_h(\bar{s}_n))\big] \big(h_\nu(s) - h(s) \big)\, ,\, A^{2\alpha} U_\nu(\bar{s}_n)\Big) ds\Big| \Big] ,\\
\tilde{T}_4(N,n,\nu)=& {\mathbb E} \Big[ 1_{G_{N,\nu}(T)} \sup_{1\leq k\leq 2^n} \sup_{t_{k-1}\leq t\leq t_k} \Big| \Big( \sigma(t_k, u^0_h(t_k)) \int_{t_{k-1}}^t \!\!\! (h_\nu(s)-h(s)) ds \, ,\,A^{2\alpha} U_\nu(t_k)\Big) \Big| \Big],\\
\bar{T}_5(N,n, \nu)=& 1_{G_{N,\nu}(T)} \sum_{1\leq k\leq 2^n} \Big| \Big( \sigma(t_k, u^0_h(t_k)) \int_{t_{k-1}}^{t_k } \big(h_\nu(s)-h(s)\big)\, ds \, ,\, A^{2\alpha} U_\nu(t_k )\Big) \Big| .
\end{align*}
Using the Cauchy-Schwarz and Young inequalities, ({\bf C5}), \eqref{compcalH}, and \eqref{timenu} and \eqref{time0} in Lemma~\ref{timeincrement} with $\psi_n(s)=\bar{s}_n$, we deduce that for some constant $\bar{C}_1:= C(T,M,N)$ independent of $\nu \in ]0, \nu_0]$,
\begin{align} \label{eqT1}
& \tilde{T}_1(N,n,\nu)\leq k_0^{4\alpha-1} {\mathbb E}\Big[ 1_{G_{N,\nu}(T)} \int_0^T\!\!\! \big( \bar{K}_0+\bar{K}_1|u^0_h(s)|^2\big)^{\frac{1}{2}} |h_\nu(s)-h(s)|_0\, \big\| U_\nu(s)-U_\nu(\bar{s}_n)\big\|\, ds\Big] \nonumber \\
& \quad \leq k_0^{4\alpha-1} \Big( {\mathbb E} \Big[ 1_{G_{N,\nu}(T)} \int_0^T 2 \big\{ \|u^\nu_{h_\nu}(s) - u^\nu_{h_\nu}(\bar{s}_n)\|^2 + \|u^0_{h}(s) - u^0_{h}(\bar{s}_n)\|^2 \big\}\, ds\Big] \Big)^{\frac{1}{2}} \nonumber \\
&\qquad \times \sqrt{\bar{K}_0 + k_0^{-2} \bar{K}_1 N}\; \Big( {\mathbb E} \int_0^T 2\big[ |h_\nu(s)|_0^2 + |h(s)|_0^2\big] \, ds \Big)^{\frac{1}{2}} \leq \bar{C}_1 \; 2^{-\frac{n}{4}}.
\end{align}
A similar computation, based on ({\bf C5}) and on \eqref{time0} from Lemma \ref{timeincrement}, yields for some constant $\bar{C}_3:=C(T,M,N)$ and any $\nu \in ]0, \nu_0]$
\begin{align} \label{eqT2}
\tilde{ T}_3 (N,n,\nu) &\leq \sqrt{2Nk_0^{-2}L_1} \Big( {\mathbb E} \Big[ 1_{G_{N,\nu}(T)} \int_0^T\!\! \|u^0_{h}(s) - u^0_{h}(\bar{s}_n)\|^2 \, ds\Big] \Big)^{\frac{1}{2}} \Big( {\mathbb E} \int_0^T \!\! |h_\nu(s)-h(s)|_0^2 \, ds \Big)^{\frac{1}{2}} \nonumber \\
& \leq \bar{C}_3 \; 2^{- \frac{n}{4}}.
\end{align}
The H\"older regularity in time imposed on $\sigma(\cdot,u)$ by condition {\bf (C5)} and the Cauchy-Schwarz inequality imply that
\begin{equation} \label{eqHolder}
\tilde{T}_2(N,n,\nu)\leq C \sqrt{N} 2^{-n\gamma}\, {\mathbb E}\Big( 1_{G_{N,\nu}(T)} \int_0^T\!\! \left(1+\| u^0_h(s)\|\right) |h_\nu(s)-h(s)|_0 ds \Big) \leq \bar{C}_2 2^{-n\gamma}
\end{equation}
for some constant $\bar{C}_2=C(T,M,N)$. Using the Cauchy-Schwarz inequality and ({\bf C5}) we deduce, for $\bar{C}_4=C(T,N,M)$ and any $\nu \in ]0, \nu_0]$,
\begin{align} \label{eqT3}
\tilde{T}_4 & (N,n,\nu)\leq {\mathbb E} \Big[ 1_{G_{N,\nu}(T)} \sup_{1\leq k\leq 2^n} \big(\bar{K}_0+\bar{K}_1| u^0_h(t_k)|^2 \big)^{\frac{1}{2}} \int_{t_{k-1}}^{t_k}\!\! |h_\nu(s)-h(s)|_0 \, ds \, \|U_\nu(t_k)\| \, k_0^{4\alpha -1}\Big]\nonumber \\
&\leq C(N) \; {\mathbb E}\Big( \sup_{1\leq k\leq 2^n} \int_{t_{k-1}}^{t_k} \big(|h_\nu(s)|_0 + |h(s)|_0\big) \, ds\Big) \leq \bar{C}_4\; 2^{-\frac{n}{2}}.
\end{align}
Finally, note that the weak convergence of $h_\nu$ to $h$ implies that, as $\nu\to 0$, for any $a,b\in [0,T]$ with $a<b$, the integral $\int_a^b h_\nu(s) ds \to \int_a^b h(s) ds$ in the weak topology of $H_0$. Therefore, since the operator $\sigma(t_k, u^0_h(t_k))$ is compact from $H_0$ to $H$, we deduce that for every $k$,
\[ \Big| \sigma(t_k, u^0_h(t_k)) \Big( \int_{t_{k-1}}^{t_k} h_\nu(s) ds - \int_{t_{k-1}}^{t_k} h(s) ds \Big) \Big|_H \to 0~~\mbox{ as }~~\nu \to 0. \]
Hence a.s. for fixed $n$, $\bar{T}_5 (N,n,\nu) \to 0$ as $\nu \to 0$, while $\bar{T}_5(N,n,\nu) \leq C(\bar{K}_0,\bar{K}_1,N,n, M)$. The dominated convergence theorem then proves that ${\mathbb E}(\bar{T}_5(N,n,\nu)) \to 0$ as $\nu \to 0$ for any fixed $n,N$. \par
This convergence and \eqref{t5}--\eqref{eqT3} complete the proof of \eqref{T5n}. Indeed, they imply that for any fixed integers $N\geq 1$ and $n\geq 1$,
\begin{equation*}
\limsup_{\nu\to 0}{\mathbb E} \Big[ 1_{G_{N,\nu}(T)} \sup_{0\leq t\leq T} |T_5(t,\nu)|\Big] \leq C(N,T,M)\; 2^{-n( \frac{1}{4}\wedge \gamma )}
\end{equation*}
for some constant $C(N,T,M)$ independent of $n$. Since $n$ is arbitrary, this yields the convergence property \eqref{T5n} for any integer $N\geq 1$. By the Markov inequality, we have for any $\delta >0$
\[ {\mathbb P}\Big(\sup_{0\leq t\leq T} \|U_\nu(t)\|_\alpha > \delta \Big) \leq {\mathbb P}(G_{N,\nu}(T)^c )+ \frac{1}{\delta^2} {\mathbb E}\Big( 1_{G_{N,\nu}(T)} \sup_{0\leq t\leq T} \|U_\nu(t) \|_\alpha^2\Big). \]
Finally, \eqref{G-c} and \eqref{cv1} yield that for any integer $N\geq 1$,
\[ \limsup_{\nu\to 0} {\mathbb P}\Big(\sup_{0\leq t\leq T} \|U_\nu(t)\|_\alpha > \delta \Big)\le C(T,M,\delta) N^{-1}, \]
for some constant $C(T,M,\delta)$ which does not depend on $N$. Letting $N\to +\infty$ concludes the proof of the proposition.
\end{proof}
The following compactness result is the second ingredient which allows one to transfer the LDP from $\sqrt{\nu} W$ to $u^\nu$. Its proof is similar to that of Proposition \ref{weakconv}, and easier; it will only be sketched (see also \cite{DM}, Proposition 4.4).
\begin{prop} \label{compact} Suppose that the constants $a, b,\mu$ defining $B$ satisfy the condition $a(1+\mu^2)+b\mu^2=0$, that $\sigma$ satisfies the conditions ({\bf C5}) and ({\bf C6}), and let $\alpha\in [0,\frac{1}{4}]$. Fix $M>0$, $\xi\in V$ and let $ K_M=\{u_h^0 : h \in S_M \}$, where $u_h^0$ is the unique solution in ${\mathcal C}([0,T],V)$ of the deterministic control equation \eqref{control}. Then $K_M$ is a compact subset of ${\mathcal X}= {\mathcal C}([0,T], V)$ endowed with the norm $\|u\|_{\mathcal X} = \sup_{0\leq t\leq T} \|u(t)\|_\alpha$.
\end{prop}
\begin{proof} To ease notation, we skip the superscript 0 which refers to the inviscid case. By Theorem \ref{exisuniq0}, $K_M\subset {\mathcal C}([0,T],V) $. Let $\{u_n\}$ be a sequence in $K_M$, corresponding to solutions of \eqref{control} with controls $\{h_n\}$ in $S_M$:
\begin{eqnarray*}
d u_n(t) + B(u_n(t))dt =\sigma(t,u_n(t)) h_n(t) dt, \;\; u_n(0)=\xi.
\end{eqnarray*}
Since $S_M$ is a bounded, closed and convex subset of the Hilbert space $L^2(0, T; H_0)$, it is weakly compact. So there exists a subsequence of $\{h_n\}$, still denoted $\{h_n\}$, which converges weakly to a limit $h \in L^2(0, T; H_0)$; in fact $h \in S_M$, since $S_M$ is weakly closed. We now show that the corresponding subsequence of solutions, still denoted $\{u_n\}$, converges in ${\mathcal X}$ to $u$, the solution of the following ``limit'' equation
\begin{eqnarray*}
d u(t) + B(u(t)) dt =\sigma(t, u(t)) h(t) dt, \;\; u(0)=\xi.
\end{eqnarray*}
Note that we know from Theorem \ref{exisuniq0} that $u\in {\mathcal C}([0,T],V)$, and that one only needs to check that the convergence of $u_n$ to $u$ holds uniformly in time for the weaker norm $\|\,\cdot\,\|_\alpha$ on $V$. To ease notation we will often drop the time parameters $s$, $t$, \dots\ in the equations and integrals. Let $U_n=u_n-u$; using \eqref{AalphaB} and {\bf (C6)}, we deduce that for $t\in [0, T]$,
\begin{align}
& \|U_n(t)\|_\alpha^2 = - 2\int_0^t \!\! \big( A^\alpha B(u_n(s))-A^\alpha B(u(s))\, ,\, A^\alpha U_n(s)\big) \, ds \nonumber \\
&\qquad + 2\int_0^t \Big\{ \Big( A^\alpha \big[ \sigma(s,u_n(s)) -\sigma(s,u(s))\big] h_{n}(s)\,,\, A^\alpha U_n(s)\Big) \nonumber \\
& \qquad + \big(A^\alpha \sigma(s,u(s)) \big(h_n(s)-h(s)\big)\, ,\, A^\alpha U_n(s)\big)\Big\} ds \nonumber \\
& \leq 2 C \int_0^t\!\! \|U_n(s)\|_\alpha^2 \big( \|u_n(s)\|+\|u(s)\|\big) ds + 2 L_3 \int_0^t \|U_n(s)\|_\alpha^2 |h_n(s)|_0\, ds \nonumber \\
&\qquad + 2 \int_0^t \Big( \sigma(s,u(s))\, [h_{n}(s)-h(s)]\; ,\; A^{2\alpha} U_n(s)\Big) \, ds . \label{error1}
\end{align}
The inequality \eqref{boundu0} implies that there exists a finite positive constant $\tilde{C}$ such that
\begin{equation}\label{est-uni}
\sup_n \sup_{0\leq t\leq T} \big(\|u(t)\|^2 + \|u_n(t)\|^2\big) \leq \tilde{C}.
\end{equation}
Thus Gronwall's lemma implies that
\begin{equation} \label{errorbound}
\sup_{0\leq t\leq T} \|U_n(t)\|_\alpha^2 \leq \exp\Big(2 C \tilde{C} +2 L_3 \sqrt{ M T} \Big)\, \sum_{1\leq i\leq 5} I_{n,N}^{i} ,
\end{equation}
where, as in the proof of Proposition~\ref{weakconv}, we have:
\begin{eqnarray*}
I_{n,N}^1&=& \int_0^T \big| \big( \sigma(s,u(s))\, [h_n(s)- h(s)]\, ,\, A^{2\alpha} U_n(s)-A^{2\alpha} U_n(\bar{s}_N)\big)\big|\, ds, \\
I_{n,N}^2&=& \int_0^T\Big| \Big( \big[ \sigma(s,u(s)) - \sigma(\bar{s}_N, u(s)) \big] [h_n(s)-h(s)]\, ,\, A^{2\alpha} U_n(\bar{s}_N)\Big)\Big| \, ds, \\
I_{n,N}^3&=& \int_0^T\Big| \Big( \big[ \sigma(\bar{s}_N,u(s)) - \sigma(\bar{s}_N,u(\bar{s}_N))\big] [h_n(s)-h(s)]\, ,\, A^{2\alpha} U_n(\bar{s}_N)\Big)\Big| \, ds, \\
I_{n,N}^4&=& \sup_{1\leq k\leq 2^N} \sup_{t_{k-1}\leq t\leq t_k} \Big|\Big( \sigma(t_k,u(t_k )) \int_{t_{k-1 }}^t (h_n(s)-h(s)) ds \; ,\; A^{2\alpha} U_n(t_k) \Big)\Big| , \\
I_{n,N}^5&=& \sum_{1\leq k\leq 2^N} \Big| \Big( \sigma(t_k, u(t_k )) \, \int_{t_{k-1}}^{t_k } [ h_n(s)-h(s)]\, ds \; ,\; A^{2\alpha} U_n(t_k ) \Big)\Big| .
\end{eqnarray*}
The Cauchy-Schwarz inequality, \eqref{est-uni}, ({\bf C5}) and \eqref{time0} imply that for some constants $C_i$, $i=0,\cdots,4$, which depend on $k_0$, $ \bar{K}_i$, $\bar{L}_1$, $\tilde{C}$, $M$ and $T$, but do not depend on $n$ and $N$,
\begin{align} \label{estim1}
I_{n,N}^1 &\leq C_0 \Big( \int_0^T \!\! \big( \|u_n(s)-u_n(\bar{s}_N)\|^2 + \|u(s)-u(\bar{s}_N)\|^2\big) ds \Big)^{\frac{1}{2}} \Big( \int_0^T\!\! |h_n(s)-h(s)|_0^2 ds \Big)^{\frac{1}{2}} \nonumber\\
&\leq C_1 \; 2^{- \frac{N }{2}} \, ,\\
I_{n,N}^3 &\leq C_0 \Big(\int_0^T \|u(s)-u(\bar{s}_N)\|^2 ds \Big)^{\frac{1}{2}} 2 \sqrt{M} \leq C_3\; 2^{-\frac{N }{2}}\, , \label{estim2}\\
I_{n,N}^4 &\leq C_0 \, 2^{-\frac{N}{2}} \Big(1+ \sup_{0\leq t\leq T} \|u(t)\|\Big)\, \sup_{0\leq t\leq T} \big( \|u(t)\|+\|u_n(t)\|\big) 2\sqrt{M} \leq C_4\; 2^{-\frac{N}{2}}\, . \label{estim3}
\end{align}
Furthermore, the H\"older regularity of $\sigma(\cdot,u)$ from condition {\bf (C5)} implies that
\begin{align} \label{eqHolderdet}
I_{n,N}^2 \leq& \bar{C} 2^{-N\gamma}\, \sup_{0\leq t\leq T}\big(\|u(t)\|+\|u_n(t)\|\big)\nonumber \\
&\quad \times \int_0^T (1+\|u(s)\|) (|h(s)|_0+ |h_n(s)|_0)\, ds \leq C_2\, 2^{-N \gamma}.
\end{align}
For fixed $N$ and $k=1, \cdots, 2^N$, as $n\to \infty$, the weak convergence of $h_n$ to $h$ implies that of $\int_{t_{k-1}}^{t_k} (h_n(s)-h(s))ds$ to 0 weakly in $H_0$. Since $\sigma(t_k,u(t_k))$ is a compact operator, we deduce that for fixed $k$ the sequence $\sigma(t_k, u(t_k)) \int_{t_{k-1}}^{t_k} (h_n(s)-h(s))ds$ converges to 0 strongly in $H$ as $n\to \infty$. Since $\sup_{n,k} \|U_n(t_k)\|\leq 2 \sqrt{\tilde{C}}$, we have $\lim_n I_{n,N}^5=0$. Thus \eqref{errorbound}--\eqref{eqHolderdet} yield, for every integer $N\geq 1$,
\[ \limsup_{n \to \infty} \sup_{ t\leq T} \|U_n(t)\|_\alpha^2 \le C 2^{-N ( \frac{1}{2}\wedge \gamma)}. \]
Since $N$ is arbitrary, we deduce that $\sup_{0\leq t\leq T} \|U_n(t)\|_\alpha \to 0$ as $n\to \infty$. This shows that every sequence in $K_M$ has a convergent subsequence. Hence $K_M$ is a sequentially relatively compact subset of ${\mathcal X}$.
Finally, let $\{u_n\}$ be a sequence of elements of $K_M$ which converges to $v$ in ${\mathcal X}$. The above argument shows that there exists a subsequence $\{u_{n_k}, k\geq 1\}$ which converges to some element $u_h\in K_M$ for the uniform topology on ${\mathcal C}([0,T], V)$ endowed with the $\|\, \cdot\, \|_\alpha$ norm. Hence $v=u_h$, so that $K_M$ is a closed subset of ${\mathcal X}$, and this completes the proof of the proposition.
\end{proof}
\noindent {\bf Proof of Theorem \ref{PGDunu}:} Propositions \ref{compact} and \ref{weakconv} imply that the family $\{u^\nu\}$ satisfies the Laplace principle, which is equivalent to the large deviation principle, in ${\mathcal X} $ defined by \eqref{defX} with the rate function defined by \eqref{ratefc}; see Theorem 4.4 in \cite{BD00} or Theorem 5 in \cite{BD07}. This concludes the proof of Theorem \ref{PGDunu}. $\Box$

\textbf{Acknowledgment:} This work was partially written while H. Bessaih was an invited professor at the University of Paris 1. The work of this author has also been supported by the NSF grant No. DMS 0608494.
\begin{thebibliography}{99}
\bibitem{BBBF06} D. Barbato, M. Barsanti, H. Bessaih \& F. Flandoli, Some rigorous results on a stochastic GOY model, {\em Journal of Statistical Physics} {\bf 125} (2006), 677--716.
\bibitem{BaDP} V. Barbu \& G. Da Prato, Existence and ergodicity for the two-dimensional stochastic magneto-hydrodynamics equations, \emph{Appl. Math. Optim.} {\bf 56(2)} (2007), 145--168.
\bibitem{Biferale} L. Biferale, Shell models of energy cascade in turbulence, {\em Annu. Rev. Fluid Mech., Annual Reviews, Palo Alto, CA} {\bf 35} (2003), 441--468.
\bibitem{BD00} A. Budhiraja \& P. Dupuis, A variational representation for positive functionals of infinite dimensional Brownian motion, {\em Prob. and Math. Stat.} {\bf 20} (2000), 39--61.
\bibitem{BD07} A. Budhiraja, P. Dupuis \& V. Maroulas, Large deviations for infinite dimensional stochastic dynamical systems, \emph{Ann. Prob.} {\bf 36} (2008), 1390--1420.
\bibitem{CG94} M. Capinsky \& D. Gatarek, Stochastic equations in Hilbert space with application to Navier-Stokes equations in any dimension, {\em J. Funct. Anal.} \textbf{126} (1994), 26--35.
\bibitem{CR} S. Cerrai \& M. R\"ockner, Large deviations for stochastic reaction-diffusion systems with multiplicative noise and non-Lipschitz reaction terms, {\em Ann. Probab.} \textbf{32} (2004), 1100--1139.
\bibitem{Chang} M. H. Chang, Large deviations for the Navier-Stokes equations with small stochastic perturbations, {\em Appl. Math. Comput.} \textbf{76} (1996), 65--93.
\bibitem{ChenalMillet} F. Chenal \& A. Millet, Uniform large deviations for parabolic SPDEs and applications, \emph{Stochastic Process. Appl.} \textbf{72} (1997), no. 2, 161--186.
\bibitem{CM} I. Chueshov \& A. Millet, Stochastic 2D hydrodynamical type systems: Well posedness and large deviations, \emph{Appl. Math. Optim.} (to appear), DOI 10.1007/s00245-009-9091-z, Preprint arXiv:0807.1810v3.
\bibitem{CLT06} P. Constantin, B. Levant \& E. S. Titi, Analytic study of the shell model of turbulence, {\em Physica D} {\bf 219} (2006), 120--141.
\bibitem{CLT07-inviscid} P. Constantin, B. Levant \& E. S. Titi, Regularity of inviscid shell models of turbulence, {\em Phys. Rev. E} {\bf 75} no. 1 (2007), DOI 10.1103/PhysRevE.75.016304, 10 pages.
\bibitem{PZ92} G. Da Prato \& J. Zabczyk, {\em Stochastic Equations in Infinite Dimensions}, Cambridge University Press, 1992.
\bibitem{DZ} A. Dembo \& O. Zeitouni, {\em Large Deviations Techniques and Applications}, Springer-Verlag, New York, 2000.
\bibitem{DDG} A. Du, J. Duan \& H. Gao, Small probability events for two-layer geophysical flows under uncertainty, preprint arXiv:0810.2818, October 2008.
\bibitem{DM} J. Duan \& A. Millet, Large deviations for the Boussinesq equations under random influences, {\em Stochastic Process. Appl.} {\bf 119-6} (2009), 2052--2081.
\bibitem{DupEl97} P. Dupuis \& R.S. Ellis, {\em A Weak Convergence Approach to the Theory of Large Deviations}, Wiley-Interscience, New York, 1997.
\bibitem{KP05} N. H. Katz \& N. Pavlovi\'c, Finite time blow-up for a dyadic model of the Euler equations, {\em Trans. Amer. Math. Soc.} {\bf 357} (2005), 695--708.
\bibitem{Kunita} H. Kunita, {\em Stochastic Flows and Stochastic Differential Equations}, Cambridge University Press, Cambridge-New York, 1990.
\bibitem{WeiLiu} W. Liu, Large deviations for stochastic evolution equations with small multiplicative noise, {\em Appl. Math. Optim.} (to appear), DOI 10.1007/s00245-009-9072-2, preprint arXiv:0801.1443v4.
\bibitem{LPPPV98} V. S. L'vov, E. Podivilov, A. Pomyalov, I. Procaccia \& D. Vandembroucq, Improved shell model of turbulence, {\em Physical Review E} {\bf 58} (1998), 1811--1822.
\bibitem{MSS} U. Manna, S.S. Sritharan \& P. Sundar, Large deviations for the stochastic shell model of turbulence, preprint arXiv:0802.0585v1, February 2008.
\bibitem{MaMa} M. Mariani, Large deviations principles for stochastic scalar conservation laws, {\em Probab. Theory Related Fields} (to appear), preprint arXiv:0804.0997v3.
\bibitem{MS02} J.L. Menaldi \& S.S. Sritharan, Stochastic 2-D Navier-Stokes equation, \emph{Appl. Math. Optim.} \textbf{46} (2002), 31--53.
\bibitem{OY89} K. Ohkitani \& M. Yamada, Temporal intermittency in the energy cascade process and local Lyapunov analysis in fully developed model of turbulence, {\em Prog. Theor. Phys.} {\bf 89} (1989), 329--341.
\bibitem{Sowers} R. Sowers, Large deviations for a reaction-diffusion system with non-Gaussian perturbations, \emph{Ann. Prob.} \textbf{20} (1992), 504--537.
\bibitem{Sundar} S. S. Sritharan \& P. Sundar, Large deviations for the two-dimensional Navier-Stokes equations with multiplicative noise, {\em Stoch. Proc. and Appl.} \textbf{116} (2006), 1636--1659.
\end{thebibliography}
\end{document}
math
\begin{document} \title{{\bf {\Large A classical analog to topological non-local quantum interference effect}}} \author{Yakir Aharonov$^{a,b}$, Sandu Popescu$^{c,d}$, Benni Reznik$^{a}$ and Ady Stern$^e$ {\ }} \address{(a) {\em {\small School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel}}\\ (b) {\em {\small Department of Physics, University of South Carolina, Columbia, SC 29208}}\\ (c) {\em {\small H.H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL, UK}} \\ (d) {\em {\small Hewlett-Packard Laboratories, Stoke Gifford, Bristol BS12 6QZ, UK}}\\ (e) {\em {\small Department of Physics, Weizmann Institute of Science, Rehovot 76100, Israel}}} \maketitle \begin{abstract} {The two main features of the Aharonov-Bohm effect are the topological dependence of the accumulated phase on the winding number around the magnetic fluxon, and non-locality -- local observations at any intermediate point along the trajectories are not affected by the fluxon. The latter property is usually regarded as exclusive to quantum mechanics. Here we show that both the topological and non-local features of the Aharonov-Bohm effect can be manifested in a classical model that incorporates random noise. The model also suggests new types of multi-particle topological non-local effects which have no quantum analog. } \end{abstract} In the Aharonov-Bohm (AB) effect\cite{AB}, a phase \begin{equation} \phi_{AB} =\frac{q}{\hbar c}\oint \vec A \cdot d\vec l, \end{equation} is accumulated by a charge $q$ upon circulating a solenoid enclosing a magnetic flux. This phase (mod $2\pi$) can be observed in an interference experiment in which the wavefunction of the charge is split into two wavepackets which encircle the solenoid and then interfere when they meet. The phase is {\it topological} because it is determined by the number of windings the charge carries out around the solenoid, and is independent of the details of the trajectory. The phase is also {\it non-local}: while the magnetic flux in the solenoid clearly affects the resulting interference pattern, it has no local observable consequences at any point along the trajectory. There is no experiment that one can perform anywhere along the trajectory which can tell whether or not there is a magnetic flux inside the solenoid. (In particular, there is no force acting on the charge due to the solenoid.) Various topological analogs of the AB effect have been suggested utilizing light in an optical medium\cite{milloni}, super-fluids\cite{super} and particles in a gravitational background\cite{gravity}. However, unlike the AB effect, in these analogs one can observe how the global topological effect builds up locally. Hence these models do not reproduce the non-local aspect of the AB effect. (The gravitation analog is an exceptional case. However, it requires a non-trivial space-time structure, which is locally flat but globally not equivalent to a Minkowski space-time.) The above analogs suggest that quantum systems differ fundamentally from classical systems as far as non-locality is concerned. In this letter, however, we show that a classical non-local effect may be constructed without employing a non-trivial space-time structure. The new ingredient which allows for this is the inclusion of a random bath of particles, which ``masks'' local effects, but does not ``screen'' the net topological effect. The classical non-local effect we are constructing now is a classical analog of the Aharonov-Casher effect\cite{AC}.
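As a purely illustrative aside (a minimal numerical sketch in Python/NumPy; it is not part of the argument, and the particular paths below are arbitrary choices), one can check directly that the accumulated change of the polar angle $\theta$ along a closed path, i.e.\ a discrete version of $\oint \vec\nabla\theta\cdot d\vec l$, records only the winding number around the origin and is insensitive to the shape of the loop:
\begin{verbatim}
import numpy as np

def accumulated_angle(path):
    # path: array of shape (N, 2) describing a closed loop avoiding the origin.
    # Returns the total (unwrapped) change of the polar angle theta along it,
    # i.e. a discrete version of the loop integral of grad(theta).
    theta = np.unwrap(np.arctan2(path[:, 1], path[:, 0]))
    return theta[-1] - theta[0]

t = np.linspace(0.0, 2 * np.pi, 4001)

# a strongly deformed loop that winds once around the origin
wobbly = np.column_stack(((2 + np.cos(7 * t)) * np.cos(t),
                          (2 + np.cos(7 * t)) * np.sin(t)))
# an off-centre circle traversed twice (winding number 2)
double = np.column_stack((0.3 + np.cos(2 * t), np.sin(2 * t)))

for name, path in (("wobbly loop", wobbly), ("circle twice", double)):
    print(name, accumulated_angle(path) / (2 * np.pi))
# prints approximately 1.0 and 2.0: only the winding number matters
\end{verbatim}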
To begin with, consider a particle that is described by the canonical coordinates ${\cal \vec R}$ and ${\cal \vec P}$, and is carrying a magnetic moment $\vec \mu$. Let this particle (henceforth referred to as the fluxon) interact with the electric field $ \vec E$ generated by a homogeneously charged straight wire positioned along the $\hat z$-axis. The non-relativistic Hamiltonian describing this system is the one employed in the study of the Aharonov-Casher effect, \begin{equation} H_{AC} = {\frac{(\vec {{\cal P}} + \frac{1}{c}\vec\mu\times \vec E)^2}{2m}}. \end{equation} For simplicity, in the following we confine the motion of the fluxon to a two-dimensional plane orthogonal to the wire and effectively reduce the system to be planar. We denote by $\vec R$, $\vec P$ the two-dimensional position and momentum, respectively. We assume that a time-dependent scalar potential is applied to the fluxon to make it move along a desired trajectory in the plane, and that this potential does not interact with the magnetic moment. For brevity we do not write this potential below, as it does not affect our considerations. Using polar coordinates, the electric field can be written as $\vec E(R) = 2\lambda|\vec \nabla_R\theta| \hat R$, where $\lambda$ is the linear charge density, $\vec\nabla_R$ is the gradient with respect to $\vec R$ and $\hat R$ is the unit vector along $\vec R$. If $ \vec \mu$ is aligned in the $\hat z$ direction, the Hamiltonian becomes two-dimensional\cite{point-Q} \begin{equation} H_{AC} = {\frac{(\vec P + {\frac{2}{c}}\mu \lambda \vec \nabla_R\theta )^2}{ 2M}}. \label{hac} \end{equation} Classically, the forces on the particle vanish. However, the magnetic field experienced by the fluxon in its rest frame, $\vec B = \frac{\vec v \times \vec E}{c}$, is non-vanishing. Therefore one expects a non-zero torque, $\vec \mu \times \vec B$, and consequently an ``internal'' precession of the magnetic moment. This latter effect is present both in the quantum and the classical cases \cite{peshkin-lipkin,aharonov-reznik}. The precession is not described by the above Hamiltonian because $\mu$ was taken as a fixed vector, rather than as a degree of freedom. To incorporate the internal precession we next introduce an internal angular momentum variable $\vec L=\hat z L$, with a conjugate internal angular coordinate $\varphi$. We further assume that $ \mu\propto L$. (Indeed, the magnetic moment of a neutron, for example, $\mu_N = -3.7{\frac{e}{2mc}}{\bf s}$, is proportional to its spin $s$.) Replacing $ \mu$ in eq. (\ref{hac}) by $L$, we have \begin{equation} H = {\frac{(\vec P + \xi L \vec\nabla_R\theta )^2}{2m}} \label{hl} \end{equation} where $\xi$ is the resulting net charge/magnetic-moment coupling constant. While $L$ (and the magnitude of $\mu$) are constants of motion, the internal angle $\varphi$, conjugate to $L$, varies in time and satisfies the equation of motion \begin{equation} {\frac{d\varphi}{dt}} = \xi{\frac{d\vec R}{dt}} \cdot\vec\nabla_{R} \theta = \xi {\frac{d\theta}{dt}}. \label{eom} \end{equation} Consequently $\varphi$ is entirely determined by the polar angle $\theta$ of the fluxon relative to the $x$-axis emanating from the position of the charged line\cite{note1}. Once we fix the initial value $\varphi(\theta_0)$ for a given $\theta_0$, the internal coordinate $\varphi$ at a later time is given by \begin{equation} \varphi(\theta) = \xi(\theta -\theta_0) + \varphi(\theta_0).
\end{equation} Hence, as the fluxon moves along a closed loop enclosing the charge, and $ \theta$ changes by $2\pi$, the internal angle changes by $2\pi\xi$. Below we refer to the case where $\xi$ is an integer as ``trivial'', as it leads to an angle winding of a multiple of $2\pi$, and to the case of $\xi$ non-integer as ``non-trivial''. This situation is similar to the experimentally observed Aharonov-Casher effect in a Josephson junction array \cite{Elion}. Up to now we have constructed a model which exhibits a classical analog of a topological effect. However, since we can locally observe how the internal angle changes at intermediate points of the trajectory, this model does not capture the non-local feature of an AB-like effect. As we now turn to show, an effectively non-local behavior emerges if we add to the above system two new ingredients. Firstly, we employ two fluxons. Secondly, we consider the interaction of these fluxons with a non-trivial charged particle situated at the origin (i.e. a particle with non-integer $ \xi$), and a bath of randomly positioned, moving, charged particles, all leading to trivial angle windings of the fluxon (i.e. particles with integer $\xi$). As each fluxon encircles the origin, the particles of the bath randomize its angle. These particles do not, however, randomize the angle {\em difference} between the two fluxons when they coincide in position. Let us denote the coordinates of the two fluxons by $\vec R_k, \vec P_k$, and their internal coordinates by $\varphi_k$ and $L_k$. The coupling constants with the particles of the bath are taken to be ``trivial'', i.e., $ \xi_{1i}=\xi_{2i}=1$. We denote the coordinates of the bath particles by $ \vec r_i=x_i\hat x + y_i\hat y$ and their momenta by $\vec p_i$ (with $ i=1,\dots,N$). The Hamiltonian of the system becomes \begin{eqnarray} H &=& \sum_{k=1}^2{{\frac{(\vec P_k + L\vec \nabla_{R_k} [\xi \theta_k +\sum_i \theta_{ki}] )^2 }{2M}}} \nonumber \\ &+&\sum_{i=1}^N {\frac{ (\vec p_i + L\sum_{k=1}^2\vec \nabla_{r_i}\theta_{ik} )^2 }{2m_i}}. \end{eqnarray} The first term above represents the two fluxons, which interact with the charged particle at the origin and with the bath. The second term represents this ``bath''. Notice that the kinetic term for the charged particles of the bath includes a vector potential, too\cite{chargehamiltonian}. The presence of the bath gives rise to additional vector potential terms \begin{equation} \vec A_{ki}= L\vec \nabla_{R_k}\theta_{ki} \end{equation} where $\theta_{ki} =\arctan{\frac{y_i-Y_k}{x_i-X_k}}$ is the angle between $ \vec r_i -\vec R_k $ and the $x$ axis. As we have seen above (Eq. (\ref{eom})), the internal angle changes according to the relative angle between the fluxon and the charged particle. Indeed, the equation of motion for the internal angle is \begin{eqnarray} {\frac{d \varphi_k}{dt}} &=& \xi {\frac{d\vec R_k}{dt}}\cdot \vec\nabla _{R_k}\theta_{k}+ \sum_i\biggl({\frac{d\vec R_k}{dt}}\cdot \vec \nabla_{R_k} + {\frac{d \vec r_i}{dt}} \cdot \vec \nabla_{r_i}\biggr) \theta_{ki} \nonumber \\ &=& \xi {\frac{d\theta_{k}}{dt}} + \sum_i {\frac{d\theta_{ki}}{dt}}. \end{eqnarray} Clearly, for a sufficiently large number of randomly distributed particles, the effect of the bath on the internal angle becomes chaotic. The time dependence of $\varphi(t)$ becomes unpredictable. Consider now, however, the following experiment. We start with the two fluxons situated at the same point.
Then one of the fluxons stays fixed while the other moves in a path around the non-trivial charge and returns to its initial point. As noted above, the internal angles of each fluxon change randomly. But consider the {\it relative} internal angle between the fluxons, \begin{eqnarray} \gamma(t) & \equiv &\varphi_2(t)-\varphi_1(t) \\ &=& \xi(\theta_{1}(t)-\theta_{2}(t)) + \sum_{i\in bath} (\theta_{i1}(t) -\theta_{i2}(t)) + constant. \nonumber \end{eqnarray} We first note that when the two fluxons are located at precisely the same point, $\vec R_1= \vec R_2$, we have \begin{equation} \theta_{i1}(t) = \theta_{i2}(t). \end{equation} Therefore the random changes induced by the bath in the internal angles of the fluxons are {\em identical}, and as long as the fluxons coincide \begin{equation} \gamma(t)=constant. \end{equation} Once the fluxons move apart, the random time dependence of $\varphi_1(t)$ differs from that of $\varphi_2(t)$, and $\gamma(t)$, the relative internal angle, becomes random. Finally, however, when the moving fluxon returns to its original point, after $n$ windings around the origin, and the two fluxons coincide again, \begin{equation} \gamma_{final} -\gamma_{initial}= 2\pi n\xi + 2\pi N . \end{equation} The first term is the shift caused by the charge at the origin. The second term is due to the bath, and $N$ is a random integer which counts the number of windings of the bath particles. The particles of the bath can wind around only one of the fluxons while the fluxons are apart. However, when the fluxons coincide, the final relative winding number $N$ is a random integer. More importantly, $(\gamma_{final}- \gamma_{initial}) \bmod 2\pi$ is unaffected by the bath particles, in sharp contrast to the values of $\varphi_1,\varphi_2,\gamma$ along the trajectory. Thus, upon closing a loop, the random effects due to the bath particles cancel and $(\gamma_{final}-\gamma_{initial}) \bmod 2\pi$ depends only on the non-trivial charge. In other words, although during the experiment the internal angles change randomly, upon closing a loop and measuring the change of the internal angle of one fluxon with respect to the other fluxon (which acts as a reference system) we are able to recover information about the non-trivial charge. The effect is {\it topological}, because it depends only on the winding numbers and not on the details of the loop. Furthermore, and most importantly, the effect is {\it non-local} because no useful information can be extracted on a local basis (i.e. by monitoring the changes only on parts of the loop); only the closed loop yields information. More generally, we can allow both fluxons to move, starting from the same point and meeting later at some different point, so that the trajectories of the two fluxons together form a closed loop. The non-trivial charge can move as well. In this case \begin{equation} \gamma_{final} -\gamma_{initial}= 2\pi n\xi + 2\pi N, \end{equation} where $n$ is the winding number of the loop around the non-trivial charge while $N$ represents the winding number of the loop around the bath particles. The result above contains the essence of our effect. We will give a number of generalizations later, but first let us make some comments. The key element in our effect is the addition of the random bath of trivial charges. When there are no trivial charges present, the effect is purely local -- by monitoring the changes of the internal angle we can detect the presence of the non-trivial charge.
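The experiment just described can also be mimicked numerically. The following is a minimal, self-contained sketch (our own illustration; the parameter values, the circular trajectory of the second fluxon and the random-walk bath are arbitrary choices): fluxon 1 stays at the common starting point, fluxon 2 winds $n$ times around the non-trivial charge at the origin, and the trivial bath charges perform a random walk. The individual angles $\varphi_1$ and $\varphi_2$ drift randomly, but $(\gamma_{final}-\gamma_{initial})\bmod 2\pi$ should reproduce $2\pi n\xi \bmod 2\pi$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def ang(v):
    # polar angle(s) of 2D vector(s) of shape (..., 2)
    return np.arctan2(v[..., 1], v[..., 0])

def wrap(d):
    # wrap angle increments into (-pi, pi]
    return (d + np.pi) % (2 * np.pi) - np.pi

xi, windings, steps, n_bath = 0.37, 3, 20000, 25

# fluxon 1 stays at the common starting point; fluxon 2 circles the origin
start = np.array([2.0, 0.0])
t = np.linspace(0.0, 1.0, steps + 1)
R1 = np.tile(start, (steps + 1, 1))
R2 = 2.0 * np.column_stack((np.cos(2 * np.pi * windings * t),
                            np.sin(2 * np.pi * windings * t)))

bath = rng.uniform(-6.0, 6.0, size=(n_bath, 2))   # trivial charges, xi_i = 1
phi1 = phi2 = 0.0
for s in range(steps):
    new_bath = bath + rng.normal(scale=0.05, size=bath.shape)
    # d(phi_k) = xi d(theta_k) + sum_i d(theta_{ki}), tracked step by step
    phi1 += xi * wrap(ang(R1[s + 1]) - ang(R1[s])) \
            + wrap(ang(new_bath - R1[s + 1]) - ang(bath - R1[s])).sum()
    phi2 += xi * wrap(ang(R2[s + 1]) - ang(R2[s])) \
            + wrap(ang(new_bath - R2[s + 1]) - ang(bath - R2[s])).sum()
    bath = new_bath

gamma = phi2 - phi1                  # phi1 and phi2 separately look random
print((gamma / (2 * np.pi)) % 1.0)   # approx (windings * xi) mod 1
print((windings * xi) % 1.0)
\end{verbatim}
Re-running with a different seed changes $\varphi_1$ and $\varphi_2$, but the two printed numbers should still agree up to discretization error, in line with the closed-loop result above.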
The vector-potential generated by the non-trivial charge, $\xi L \vec\nabla_R\theta$, is {\it ``gauge-invariant''} and {\it observable}. As we add more and more trivial charges at random positions and having random motion, the vector-potential generated by the non-trivial charge becomes {\it unobservable}. The only gauge invariant quantity becomes the ``loop integral'', i.e. the change in the relative internal angle over the closed loop. One can see a certain analogy between the observability and non-observability in this case and the gauge independence of the vector potential and its loop integral. The two-particle topological effect considered here is, up to a point, analogous to an interference experiment with a single quantum particle such as the Aharonov-Bohm experiment. The relative phase accumulated along two trajectories in the quantum interference effect is hence analogous to the relative internal angle in our case. There are, however, significant differences. The first, and obvious, difference is that in the AB effect there is a single particle (whose wave-function is split into two wave-packets), while in the classical analog we have two particles, each following a well-defined classical trajectory. A more subtle difference is the following. In quantum interference one is always sensitive to the relative phase of two wave packets, since the measured quantity is the {\it square} of the wave function. In the classical case, in contrast, we are able to generalize our model to a situation where there are three particles, with three internal angles, and where the only observable quantity involves the internal angles of all three particles. To illustrate this, we consider 3 fluxon-like particles which interact with a single non-trivial source with a coupling strength $\xi$ located at the origin, and with 3 {\em different} trivial random background charges denoted by $A$, $B$, and $C$. The first fluxon sees particles of type $A$ as positive charges and particles of type $B$ as negative charges. The second fluxon sees type $B$ as positive charges and type $C$ as negative charges, and the third fluxon sees type $C$ as positive charges and type $A$ as negative charges. The Hamiltonian of the system is then \begin{eqnarray} H_3={\frac{1}{2M_1}}[P_1 +L\nabla_{R_1}(\xi\theta_{1}+\sum_i(\theta_{1i}^A- \theta_{1i}^B))]^2+ \nonumber \\ {\frac{1}{2M_2}}[P_2 +L\nabla_{R_2}(\xi\theta_{2}+ \sum_i(\theta_{2i}^B- \theta_{2i}^C))]^2 + \nonumber \\ {\frac{1}{2M_3}}[P_3 +L\nabla_{R_3}(\xi\theta_{3}+ \sum_i(\theta_{3i}^C- \theta_{3i}^A))]^2 + \nonumber \\ {\frac{1}{2m_A}}\sum_i[p_i^A+L\nabla_{r^A_i}(\theta_{i1}^A-\theta_{i3}^A)]^2+ \nonumber \\ {\frac{1}{2m_B}}\sum_i[p_i^B+L\nabla_{r^B_i}(\theta_{i2}^B-\theta_{i1}^B)]^2+ \nonumber \\ {\frac{1}{2m_C}}\sum_i[p_i^C+L\nabla_{r^C_i}(\theta_{i3}^C-\theta_{i2}^C)]^2. \end{eqnarray} Let $\phi$ be the sum of the three internal angles, $\phi=\varphi_1+ \varphi_2+\varphi_3$. The change in the sum of the internal angles, $ \delta\phi$, which in the present model is ``shared'' by all 3 particles, is given by \begin{eqnarray} \delta\phi=\xi(\delta\theta_{1} +\delta\theta_{2} + \delta\theta_{3}) + \sum_i(\delta\theta^A_{1i}-\delta\theta^A_{3i}) + \nonumber \\ \sum_j(\delta\theta^B_{2j}-\delta\theta^B_{1j})+ \sum_k(\delta\theta^C_{3k}-\delta\theta^C_{2k}). \end{eqnarray} We note that the contribution of the last three sums over $i,j$ and $k$ is random.
However, when the three fluxons start from the same initial point and end at the same final point, the random contributions exactly cancel (modulo $2\pi$) and we are left with \begin{equation} \phi_{final}-\phi_{initial} = 2\pi \xi (n_1+n_2+n_3). \end{equation} Unlike in the previous example, here the change in $\phi$ yields the sum of the winding numbers of each fluxon, $n_1+n_2+n_3$. The effects presented above are classical non-local analogs of quantum vector-potential effects, such as the magnetic A-B effect and the Aharonov-Casher effect. Along the same lines we now present an analog to the scalar A-B effect. This is implemented by the interaction Hamiltonian \begin{equation} H_{int}=LV(x). \end{equation} In regions where the potential $V(x)$ is constant, this interaction does not generate any {\it force}. Indeed, the force due to this interaction term is equal to $F=-L{\frac{{dV}}{{dx}}}$ and it is zero where the potential is constant. On the other hand, the internal angle $\varphi$ is affected: due to the interaction it suffers an additional change of $V\Delta T$, where $ \Delta T$ is the time spent in the potential $V$. Again, in the absence of the randomizing background charges, the change of the internal angle due to the potential is observable. However, the randomizing background makes the change in the internal angle unobservable. An observable effect can be seen only in a ``closed loop'' experiment similar to that in the magnetic case. In conclusion, we have described a classical non-local effect, analogous to the quantum Aharonov-Bohm and Aharonov-Casher effects. Although many classical analogs to the AB and AC effects are known, they exhibit only the topological character of the AB and AC effects but are local -- by local measurements one can see how the topological phase builds up gradually. As far as we know, this is the first classical non-local model which does not involve general relativity and non-trivial space-time structures. In our model, although one can measure at any time the internal angle of a ``fluxon'', the measurement yields no information about a non-trivial charge. Information can be obtained only in experiments in which a loop is closed. The key ingredient which allows us to transform a local topological effect into a non-local one is the addition of random, but topologically trivial, noise. The issue of observability versus unobservability in our model, and its relation to cryptography, is discussed in more detail in \cite{cryptography}. Y. A., B.R. and A.S. acknowledge support from the Israel Science Foundation, established by the Israel Academy of Sciences and Humanities. Y.A. and B.R. are supported by the Israel MOD Research and Technology Unit. Y.A. acknowledges support and hospitality of the Einstein center at the Weizmann Institute of Science. \begin{references} \bibitem{AB} Y. Aharonov and D. Bohm, Phys. Rev. {\bf 115}, 485 (1959). \bibitem{milloni} R. J. Cook, H. Fearn and P. W. Milonni, Am. J. Phys. {\bf 63}, 705 (1995). \bibitem{super} F. D. M. Haldane and Y. Wu, Phys. Rev. Lett. {\bf 55}, 2887--2890 (1985). \bibitem{gravity} Ph. Gerbet and R. Jackiw, Commun. Math. Phys. {\bf 123}, 229 (1989); B. Reznik, Phys. Rev. A {\bf 51}, 1728 (1995). \bibitem{AC} Y. Aharonov and A. Casher, Phys. Rev. Lett. {\bf 53}, 319 (1984).
\bibitem{point-Q} Alternatively, the same effective Hamiltonian can be reached by replacing the line of charge with a point-like charged source $Q$ and the point-like magnetic fluxon with a line of magnetic fluxons (a flux line) that points in the fixed direction $\hat z$. \bibitem{peshkin-lipkin} M. Peshkin and H. J. Lipkin, Phys. Rev. Lett. {\bf 74}, 2847 (1995); quant-ph/9501012. \bibitem{aharonov-reznik} Y. Aharonov and B. Reznik, Phys. Rev. Lett. {\bf 84}, 4790--4793 (2000); P. Hyllus and E. Sjoqvist, Phys. Rev. Lett. {\bf 89}, 198901 (2002); Y. Aharonov and B. Reznik, Phys. Rev. Lett. {\bf 89}, 198902 (2002). \bibitem{note1} In the Hamiltonian (\ref{hl}), $\varphi$ is not an independent variable, being fully determined by $\theta$. Employing a Lagrangian analysis necessitates the addition of a kinetic term $L_z^2/2I$, which removes the constraint of $\varphi$ to $\theta$ and leads to the well-defined Lagrangian \begin{equation} L= {\frac{1}{2}} m v^2 + {\frac{1}{2}} I(\dot\varphi - \vec v\cdot\vec \nabla \theta)^2 \end{equation} for every finite $I$. Our results are reproduced when the limit $ I\rightarrow\infty$ is taken. We note that, unlike the usual ${\vec v}\cdot { \vec A}$ coupling, the Lagrangian we obtain also has a term quadratic in $\vec v\cdot \vec A$. \bibitem{Elion} W. J. Elion et al., Phys. Rev. Lett. {\bf 71}, 2311 (1993). \bibitem{chargehamiltonian} There should be, of course, a kinetic term for the non-trivial charge as well. The term is absent here because, for simplicity, we took this charge to be fixed at the origin, i.e.\ to have infinite mass. \bibitem{cryptography} Y. Aharonov, S. Popescu and B. Reznik, in preparation. \end{references} \end{document}
math
\begin{document} \begin{abstract} Let $(U_n)_{n\in \mathbb{N}}$ be a fixed linear recurrence sequence defined over the integers (with some technical restrictions). We prove that there exist effectively computable constants $B$ and $N_0$ such that for any $b,c\in \mathbb{Z}$ with $b> B$ the equation $U_n - b^m = c$ has at most two distinct solutions $(n,m)\in \mathbb{N}^2$ with $n\geq N_0$ and $m\geq 1$. Moreover, we apply our result to the special case of Tribonacci numbers given by $T_1= T_2=1$, $T_3=2$ and $T_{n}=T_{n-1}+T_{n-2}+T_{n-3}$ for $n\geq 4$. By means of the LLL-algorithm and continued fraction reduction we are able to prove $N_0=1.1\cdot 10^{37}$ and $B=e^{438}$. The corresponding reduction algorithm is implemented in Sage. \end{abstract} \maketitle \section{Introduction} In the last couple of years investigating Pillai-type problems with linear recurrence sequences has been very popular (see Table \ref{table:refs}). \begin{table}[h] \caption{Overview of results on $U_n-V_m=c$}\label{table:refs} \begin{tabular}{p{0.3\textwidth} p{0.3\textwidth} p{0.3\textwidth}} \hline $U_n$ & $V_m$ & authors \\ \hline Fibonacci numbers & powers of 2 & Ddamulira, Luca, Rakotomalala \cite{DdamuliraLucaRakotomalala2017} \\ Fibonacci numbers & Tribonacci numbers & Chim, Pink, Ziegler \cite{ChimPinkZiegler2017} \\ Tribonacci numbers & powers of 2 & Bravo, Luca, Yaz\'{a}n \cite{BravoLucaYazan2017} \\ $k$-Fibonacci number & powers of 2 & Ddamulira, G\'{o}mez, Luca \cite{DdamuliraGomezCarlosLuca2018} \\ Pell numbers & powers of 2 & Hernane, Luca, Rihane, Togb\'{e} \cite{HernaneLucaRihaneTogbe2018} \\ Tribonacci numbers & powers of 3 & Ddamulira \cite{Ddamulira2019Tribos} \\ Fibonacci numbers & Pell numbers & Hern\'{a}ndez, Luca, Rivera \cite{HernandezLucaRivera2019} \\ Padovan numbers & powers of 2 & Lomel\'{\i}, Hern\'{a}ndez \cite{LomeliHernandez2019} \\ Padovan numbers & powers of 3 & Ddamulira \cite{Ddamulira2019} \\ Padovan numbers & Tribonacci numbers & Lomel\'{\i}, Hern\'{a}ndez, Luca \cite{LomeliHernandezLuca2019Indian} \\ Fibonacci numbers & Padovan numbers & Lomel\'{\i}, Hern\'{a}ndez, Luca \cite{LomeliHernandezLuca2019} \\ Fibonacci numbers & powers of 3 & Ddamulira \cite{Ddamulira2020} \\ $k$-Fibonacci numbers & powers of 3 & Ddamulira, Luca \cite{DdamuliraLuca2020} \\ $X$-coordinates of Pell equations & powers of 2 & Erazo, G\'{o}mez, Luca \cite{ErazoGomezLuca2021} \\ $k$-Fibonacci numbers & Pell numbers & Bravo, D\'{\i}az, G\'{o}mez \cite{BravoDiazGomez2021} \\ \hline \end{tabular} \end{table} This trend was started in 2017 by Ddamulira, Luca, Rakotomalala \cite{DdamuliraLucaRakotomalala2017}, who proved that the only integers $c$ having at least two representations of the form $F_n - 2^m$ are $c\in \{0,1,-1,-3,5,-11, -30,85\}$ (here $F_n$ is the $n$-th Fibonacci number). This problem was inspired by a result due to S. S. Pillai. In 1936 Pillai \cite{Pillai1936, Pillai1937} proved that if $a$ and $b$ are coprime integers, then there exists a constant $c_0(a,b)$ depending on $a$ and $b$ such that for any $c>c_0(a,b)$ the equation \begin{equation}\label{eq:Pillai} a^n-b^m=c \end{equation} has at most one solution $(n,m)\in \mathbb{Z}_{>0}^2$. A natural generalisation of this problem is to replace $a^n$ and $b^m$ by other linear recurrence sequences. This is what the authors in \cite{DdamuliraLucaRakotomalala2017} did, and also what all the other authors in Table~\ref{table:refs} have done. All these results use lower bounds for linear forms in logarithms and reduction methods. 
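To see concretely why some threshold $c_0(a,b)$ as in Pillai's result cannot be avoided, recall the classical identities (included here only as an illustration)
\begin{equation*}
3^1-2^1 = 3^2-2^3 = 1 \qquad \text{and} \qquad 2^3-3^1 = 2^5-3^3 = 5,
\end{equation*}
so the small values $c=1$ and $c=5$ each admit two representations, of the form $3^n-2^m$ and $2^n-3^m$ respectively.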
Moreover, there exists a general result: Chim, Pink and Ziegler \cite{ChimPinkZiegler2018} proved that for two fixed linear recurrence sequences $(U_n)_{n\in \mathbb{N}}$, $(V_n)_{n\in \mathbb{N}}$ (with some restrictions) the equation \begin{equation*} U_n - V_m = c \end{equation*} has at most one solution $(n,m)\in \mathbb{Z}_{>0}^2$ for all $c\in \mathbb{Z}$, except if $c$ is in a finite and effectively computable set $\mathcal{C} \subset \mathbb{Z}$ that depends on $(U_n)_{n\in \mathbb{N}}$ and $(V_n)_{n\in \mathbb{N}}$. In this paper, we would like to generalize that result by ``unfixing'' one of the linear recurrence sequences. In the classical setting, it is possible to ``unfix'' $a$ and $b$ completely: Bennett \cite{Bennett2001} proved that for any integers $a,b\geq 2$ and $c\geq 1$ Equation~\eqref{eq:Pillai} has at most two solutions $(n,m)\in \mathbb{Z}_{>0}^2$. Moreover, he conjectured that in fact the equation has at most one solution $(n,m)$ for all but 11 specific exceptional triples $(a,b,c)$. Of course, we cannot simply say that $(U_n)_{n\in \mathbb{N}}$ and $(V_n)_{n\in \mathbb{N}}$ should be completely arbitrary. However, there already exist results where the linear recurrence sequence $(U_n)_{n\in \mathbb{N}}$ is not entirely fixed: In Table~\ref{table:refs} there are some results involving $k$-Fibonacci numbers \cite{DdamuliraLuca2020, BravoDiazGomez2021}, where $k$ is variable. Now what we will do is fix $(U_n)_{n\in \mathbb{N}}$ and let $V_m=b^m$ with variable $b$. Our main result will be that for fixed $(U_n)_{n\in \mathbb{N}}$ (with some restrictions) the equation \[ U_n - b^m = c \] has at most two distinct solutions $(n,m) \in \mathbb{Z}_{>0}^2$ for any $(b,c)\in \mathbb{Z}^2$ with only finitely many exceptions $b \in \mathcal{B}$, where $\mathcal{B}$ is an effectively computable set. Allowing two solutions (instead of one solution) is the price we have to pay for letting $b$ vary. The second solution is needed for technical reasons, but we believe that the result might also be true if we only allow at most one solution. Finally, note that our method does not enable us to solve the problem for a specific sequence $(U_n)_{n\in \mathbb{N}}$ completely. We will show how far we can get by computing the effective bounds for the Tribonacci numbers and reducing the bounds as far as possible. Let us outline the rest of this paper. The next section contains some notations and our results: Theorem~\ref{thm:mainthm} is the main theorem, Theorem~\ref{thm:Tribos} shows what happens if we apply our methods to the Tribonacci numbers. Moreover, we make several remarks on the assumptions in Theorem~\ref{thm:mainthm} and pose some open problems regarding Theorem~\ref{thm:Tribos}. Section~\ref{sec:diophApprox} is a collection of rather well known results from Diophantine approximation. Section~\ref{sec:proofMainThm} is devoted to the proof of Theorem~\ref{thm:mainthm} and Section~\ref{sec:Tribos} is devoted to the proof of Theorem~\ref{thm:Tribos}. Beforehand, in Section~\ref{sec:overviewProof}, we give an overview of the two proofs. In particular, we point out the parallels and differences between the two proofs. \section{Notation and results} A linear recurrence sequence $ (U_n)_{n \in \mathbb{N}} $ is given by finitely many initial values together with a recursive formula of the shape \begin{equation*} U_{n+\ell} = w_{\ell-1} U_{n+\ell-1} + \cdots + w_0 U_n. 
\end{equation*} We say that such a recurrence sequence is defined over the integers if the coefficients $ w_0, \ldots, w_{\ell-1} $ as well as the initial values are all integers. In this situation all elements of the sequence are integers. It is well known that any such linear recurrence sequence can be written in its Binet representation \begin{equation*} U_n = a_1(n) \alpha_1^n + \cdots + a_k(n) \alpha_k^n, \end{equation*} where the characteristic roots $ \alpha_1, \ldots, \alpha_k $ are algebraic integers and the coefficients $ a_1(n), \ldots, a_k(n) $ are polynomials in $ n $ with coefficients in $ \mathbb{Q}(\alpha_1, \ldots, \alpha_k) $. The recurrence sequence is called simple if $ a_1(n), \ldots, a_k(n) $ are all constant, i.e.\ independent of $ n $. Moreover, $ \alpha_1 $ is called the dominant root if $ \abs{\alpha_1} > \abs{\alpha_i} $ for all $ i = 2, \ldots, k $. Our result is now the following theorem: \begin{mythm} \label{thm:mainthm} Let $ (U_n)_{n \in \mathbb{N}} $ be a simple linear recurrence sequence defined over the integers with Binet representation \begin{equation*} U_n = a \alpha^n + a_2 \alpha_2^n + \cdots + a_k \alpha_k^n \end{equation*} and irrational dominant root $ \alpha > 1 $. Assume further that $ a > 0 $, that $ a $ and $ \alpha $ are multiplicatively independent, and that the equation \begin{equation} \label{eq:techcond} \alpha^z - 1 = a^x \alpha^y \end{equation} has no solutions with $ z \in \mathbb{N} $, $ x,y \in \mathbb{Q} $ and $ -1 < x < 0 $. Then there exist effectively computable constants $ B \geq 2 $ and $ N_0 \geq 2 $ such that the equation \begin{equation} \label{eq:centraleq} U_n - b^m = c \end{equation} has for any integer $ b > B $ and any $ c \in \mathbb{Z} $ at most two distinct solutions $ (n,m) \in \mathbb{N}^2 $ with $ n \geq N_0 $ and $ m \geq 1 $. \end{mythm} Let us give some remarks regarding the technical condition involving Equation~\eqref{eq:techcond} in the above theorem: \begin{myrem} The technical condition containing Equation \eqref{eq:techcond} can be effectively checked for any given recurrence sequence $ (U_n)_{n \in \mathbb{N}} $: First note that by construction $ \alpha $ is an algebraic integer. Moreover, note that the ideals $ (\alpha) $ and $ (\alpha^z-1) $ with $ z \in \mathbb{N} $ have no common prime ideals in their factorisations. Let $ \mathfrak{P}_1, \ldots, \mathfrak{P}_n $ be the prime ideals that appear in the prime ideal factorisation of $ (a) $. If $ \mathfrak{P}_i $ is not a prime factor of $ (\alpha) $, then let $ k_i $ be the order of $ \alpha $ modulo $ \mathfrak{P}_i $, i.e.\ $ k_i $ is minimal such that $ \mathfrak{P}_i $ is a prime factor of $ (\alpha^{k_i}-1) $. Note that if $ \mathfrak{P}_i $ lies above $ (p_i) $ and $ f_i $ is the inertia degree, then $ k_i \mid p_i^{f_i}-1 $, so the $k_i$ are bounded. Thus we can compute the maximum of all these orders $ k_0 := \max k_i $. By Schinzel's theorem on primitive divisors \cite{Schinzel1974}, there exists an effectively computable number $ n_0 $ such that $ \alpha^z - 1 $ has a primitive divisor for any $ z > n_0 $. This means that for $ z > \max \set{k_0, n_0} $ the ideal $ (\alpha^z-1) $ has a primitive divisor which is not a divisor of $ (a) $. Since $ (\alpha) $ and $ (\alpha^z-1) $ have no common divisors, it is impossible that $ \alpha^z - 1 = a^x \alpha^y $ for $ z > \max \set{k_0,n_0} $. 
For each $ z = 1, \ldots, \max \set{k_0,n_0} $ one can check whether $ \alpha^z - 1 = a^x \alpha^y $ has a solution with $ x,y \in \mathbb{Q} $ and $ -1 < x < 0 $ by looking at the primes of $ \alpha^z-1 $, $ a $ and $ \alpha $. \end{myrem} \begin{myrem} \label{rem:easierCond} Let $ \mathfrak{P}_1, \ldots, \mathfrak{P}_n $ be all prime ideals that appear in the prime ideal factorisations of $ (a) $ and $ (\alpha) $. Then we can write \begin{align*} (a) &= \mathfrak{P}_1^{a_1} \cdots \mathfrak{P}_n^{a_n}, \\ (\alpha) &= \mathfrak{P}_1^{b_1} \cdots \mathfrak{P}_n^{b_n}, \end{align*} where the $ a_i $ and $ b_i $ are integers. The following two conditions are relatively easy to check and each of them implies the technical condition containing Equation \eqref{eq:techcond}: \begin{enumerate}[I)] \item \label{it:condDet} There are $ i,j \in \set{1,\ldots,n} $ such that \begin{equation*} \det \begin{pmatrix} a_i & b_i \\ a_j & b_j \end{pmatrix} = \pm 1. \end{equation*} \item \label{it:condUnit} $ \alpha $ is a unit and there is an index $ i \in \set{1,\ldots,n} $ with $ a_i = \pm 1 $. \end{enumerate} \end{myrem} \begin{proof} If \eqref{eq:techcond} is satisfied, then the factorisation of $ (\alpha^z-1) $ contains also only the prime ideals $ \mathfrak{P}_1, \ldots, \mathfrak{P}_n $ and we can write \begin{equation*} (\alpha^z-1) = \mathfrak{P}_1^{z_1} \cdots \mathfrak{P}_n^{z_n} = (\mathfrak{P}_1^{a_1} \cdots \mathfrak{P}_n^{a_n})^x (\mathfrak{P}_1^{b_1} \cdots \mathfrak{P}_n^{b_n})^y, \end{equation*} which implies \begin{equation*} z_i = a_i x + b_i y \end{equation*} for $ i = 1, \ldots, n $. Note that all the $ z_i, a_i, b_i $ are integers. Therefore if \ref{it:condDet}) is satisfied, then it follows that $ x $ and $ y $ are integers as well and in particular we do not have $ -1 < x < 0 $. If \ref{it:condUnit}) is satisfied, then $ b_1 = \cdots = b_n = 0 $, so $ a_i = \pm 1 $ implies that $ x $ is an integer and again we do not have $ -1 < x < 0 $. \end{proof} \begin{myrem} Theorem \ref{thm:mainthm} can be applied to the Fibonacci numbers. Here we have $ a = \frac{1}{\sqrt{5}} $ and $ \alpha = \frac{1+\sqrt{5}}{2} $, i.e.\ $ \alpha $ is a unit and $ (a) = (\sqrt{5})^{-1} $. Thus condition \ref{it:condUnit}) in Remark \ref{rem:easierCond} is satisfied. \end{myrem} \begin{myrem} Theorem \ref{thm:mainthm} can be applied to the linear recurrence sequence given by $ U_0 = 0 $, $ U_1 = 1 $ and $ U_{n+2} = U_{n+1} + 3 U_n $ for $ n \geq 0 $. Here we have $ a = \frac{1}{\sqrt{13}} $ and $ \alpha = \frac{1+\sqrt{13}}{2} $, i.e.\ $ (\alpha) = \left( \frac{1+\sqrt{13}}{2} \right)^1 $ is prime (it lies over $ p=3 $) and $ (a) = (\sqrt{13})^{-1} $. Thus condition \ref{it:condDet}) in Remark \ref{rem:easierCond} is satisfied. \end{myrem} \begin{myrem} If we weaken Theorem \ref{thm:mainthm} in the sense that we prove the existence of at most three solutions, then an inspection of the proof shows that the technical condition containing Equation \eqref{eq:techcond} is not needed any more. Furthermore, it is not clear, whether all assumptions in Theorem \ref{thm:mainthm} are really necessary for the statement to be true or only required for our proof to work. \end{myrem} As a special case of Theorem \ref{thm:mainthm} we get the following result for the Tribonacci numbers, where the technical condition is checked directly in the proof (Section \ref{sec:Tribos}). Note that the case of Fibonacci numbers has recently been considered by Batte et al.~\cite{BatteEtAl2022}. 
\begin{mythm}\label{thm:Tribos} Let $(T_n)_{n\in \mathbb{N}}$ be the Tribonacci sequence given by $T_1=1, T_2=1, T_3=2$ and $T_{n}=T_{n-1}+T_{n-2}+T_{n-3}$ for $n\geq 4$. If for some integers $b\geq 2$ and $c$ the equation $T_n-b^m=c$ has at least three solutions in positive integers $n,m$ given by $(n_1,m_1),(n_2,m_2),(n_3,m_3)$ with $n_1>n_2>n_3\geq 2$, then \[ \log b \leq 438 \quad \text{and} \quad 150<n_1\leq 1.1\cdot 10^{37}. \] \end{mythm} \begin{myrem} The assumption $n_1>n_2>n_3\geq 2$ in the above theorem is natural because of $T_1=T_2$. \end{myrem} In view of this result, the following question remains: \begin{myproblem}\label{problem:Tribos-complete} Do there exist any pairs $(b,c)$ such that the equation $T_n-b^m=c$ has at least three solutions? \end{myproblem} Moreover, in the proof of Theorem \ref{thm:Tribos} (Section \ref{sec:Tribos}, \hyperlink{step:smallSols}{``small solutions''}) we will search for $b$'s and $c$'s such that $T_n-b^m=c$ has at least two small solutions. We will only find two solutions for \begin{equation} \label{eq:bc} \begin{split} (b,c) \in \{&(2, -8), (2, -3), (2, -1), (2, 0), (2, 5), (3, -2), (3, 4), (5, -121), \\ &(5, -1), (5, 19), (7, -5), (17, -15), (54, 220), (641, -137)\}. \end{split} \end{equation} Thus the following question remains: \begin{myproblem} Except for the pairs from \eqref{eq:bc}, do there exist any further $(b,c)$ such that $T_n-b^m=c$ has at least two solutions? \end{myproblem} \begin{myrem} In the proof of Theorem \ref{thm:Tribos} (Section \ref{sec:Tribos}, \hyperlink{step:smallSols}{``small solutions''}) we search for all $2\leq n_2 < n_1 \leq 150$ such that the difference of the corresponding Tribonacci numbers can be written in the form $T_{n_1}-T_{n_2}=b^{m_1}-b^{m_2}$ with $b\geq 2$ and $m_1>m_2\geq 1$. The bound 150 is chosen because the computations only take a few minutes and the proof of the upper bound in Theorem \ref{thm:Tribos} is easier if we assume $n_1>150$. In fact, the authors also ran the computations further and checked if there are $2\leq n_2 < n_1 \leq 350$ such that $T_{n_1}-T_{n_2}=b^{m_1}-b^{m_2}$. These computations took about a week on a usual computer using 4 cores. No further solutions than those in \eqref{eq:bc} were found. At this point, the computations start taking pretty long because the factorisation of huge $T_{n_1}-T_{n_2}$ is expensive. \end{myrem} \section{Results from Diophantine Approximation}\label{sec:diophApprox} In this section we state all results from Diophantine approximation, that will be used in the proofs below. In particular, we will use lower bounds for linear forms in logarithms, i.e.\ Baker-type bounds for expressions of the form $|\Lambda|=|b_1 \log \eta_1 + \dots + b_t \log \eta_t|$. These linear forms will be coming from expressions of the form $|\eta_1^{b_1}\cdots \eta_t^{b_t}-1|$ and we will switch between these expressions via the following lemma. \begin{mylemma} \label{lemma:linsmall} Let $ \lambda $ be a real number with $ \abs{\lambda} \leq 1 $. Then we have the inequality \begin{equation*} \frac{1}{4} \abs{\lambda} \leq \abs{e^{\lambda} - 1} \leq 2 \abs{\lambda}. \end{equation*} \end{mylemma} \begin{proof} The proof for these bounds is implied by a straight-forward calculation. 
For the upper bound we have \begin{align*} \abs{e^{\lambda} - 1} &= \abs{\sum_{t=1}^{\infty} \frac{\lambda^t}{t!}} \leq \sum_{t=1}^{\infty} \frac{\abs{\lambda}^t}{t!} = \abs{\lambda} \cdot \sum_{t=0}^{\infty} \frac{\abs{\lambda}^t}{(t+1)!} \\ &\leq \abs{\lambda} \cdot \sum_{t=0}^{\infty} \frac{1}{(t+1)!} = \abs{\lambda} \cdot (e-1) \leq 2 \abs{\lambda} \end{align*} and for the lower bound we have \begin{align*} \abs{e^{\lambda} - 1} &= \abs{\sum_{t=1}^{\infty} \frac{\lambda^t}{t!}} \geq \abs{\lambda} - \abs{\sum_{t=2}^{\infty} \frac{\lambda^t}{t!}} \geq \abs{\lambda} - \sum_{t=2}^{\infty} \frac{\abs{\lambda}^t}{t!} \\ &= \abs{\lambda} - \abs{\lambda} \cdot \sum_{t=1}^{\infty} \frac{\abs{\lambda}^t}{(t+1)!} \geq \abs{\lambda} - \abs{\lambda} \cdot \sum_{t=1}^{\infty} \frac{1}{(t+1)!} \\ &= \abs{\lambda} \cdot (3-e) \geq \frac{1}{4} \abs{\lambda}. \end{align*} \end{proof} Also, we will use the following simple fact. \begin{mylemma} \label{lemma:linbig} Let $ \lambda $ be a real number with $ \abs{\lambda} > 1 $. Then we have the inequality \begin{equation*} \abs{e^{\lambda} - 1} \geq \frac{3}{5}. \end{equation*} \end{mylemma} \begin{proof} Since $ \abs{\lambda} > 1 $ we either have $ e-1 \leq e^{\lambda} - 1 $ or $ e^{\lambda} - 1 \leq \frac{1}{e} - 1 $. Thus we get \begin{equation*} \abs{e^{\lambda} - 1} \geq \min \setb{e-1, 1-\frac{1}{e}} \geq \frac{3}{5}. \end{equation*} \end{proof} Before we state some lower bounds for linear forms in logarithms, let us recall the definition of the logarithmic height. Let $ \eta $ be an algebraic number of degree $ d $ over the rationals, with minimal polynomial \begin{equation*} a_0 (x - \eta_1) \cdots (x - \eta_d) \in \mathbb{Z}[x]. \end{equation*} Then the absolute logarithmic height of $ \eta $ is given by \begin{equation*} h(\eta) := \frac{1}{d} \left( \log a_0 + \sum_{i=1}^{d} \log \max \setb{1, \abs{\eta_i}} \right). \end{equation*} This height function satisfies some basic properties, which are all well-known (see e.g.\ \cite{Zannier2009} for a reference). Namely for all $ \eta, \gamma \in \overline{\mathbb{Q}} $ and all $ z \in \mathbb{Z} $ we have: \begin{align*} h(\eta + \gamma) &\leq h(\eta) + h(\gamma) + \log 2, \\ h(\eta \gamma) &\leq h(\eta) + h(\gamma), \\ h(\eta^z) &= \abs{z} h(\eta), \\ h(z) &= \log |z| \qquad \text{for } z\neq 0. \end{align*} The following lower bound for linear forms in logarithms is well-known and follows from Matveev's bound \cite{Matveev2000}. \begin{myprop}[Matveev]\label{prop:Matveev} Let $\eta_1, \ldots, \eta_t$ be positive real algebraic numbers in a number field $K$ of degree $D$, let $b_1, \ldots b_t$ be rational integers and assume that \[ \Lambda := b_1 \log \eta_1 + \dots + b_t \log \eta_t \neq 0. \] Then \[ \log |\Lambda| \geq -1.4 \cdot 30^{t+3} \cdot t^{4.5} \cdot D^2 (1 + \log D) (1 + \log B) A_1 \cdots A_t, \] where \begin{align*} B &\geq \max \setb{\abs{b_1}, \ldots, \abs{b_t}},\\ A_i &\geq \max \setb{D h(\eta_i), \abs{\log \eta_i}, 0.16} \quad \text{for all }i = 1,\ldots,t. \end{align*} Moreover, the assumption $\Lambda\neq 0$ is equivalent to $e^\Lambda-1 \neq 0$ and the following bound holds as well: \begin{align*} \log |e^\Lambda-1| &= \log |\eta_1^{b_1}\cdots \eta_t^{b_t} -1 |\\ &\geq -1.4 \cdot 30^{t+3} \cdot t^{4.5} \cdot D^2 (1 + \log D) (1 + \log B) A_1 \cdots A_t. \end{align*} \end{myprop} \begin{proof} The bound for $|\Lambda|$ follows immediately from \cite[Corollary 2.3]{Matveev2000}. 
In fact, we have that \begin{equation}\label{eq:Matveev-proof} \log |\Lambda| \geq - \frac{e}{2} \cdot 30^{t+3} \cdot t^{4.5} \cdot D^2 (1 + \log D) (1 + \log B) A_1 \cdots A_t, \end{equation} and we can estimate $e/2$ by $1.4$. The bound for $|e^\Lambda-1|$ now follows with Lemma~\ref{lemma:linsmall} and Lemma \ref{lemma:linbig}: If $|\Lambda|>1$, then $|e^\Lambda-1|\geq 3/5$, so $\log |e^\Lambda-1|\geq -0.52$ and the bound is trivially fulfilled. If $|\Lambda|\leq 1$, then we have $|e^\Lambda-1|\geq \frac{1}{4} |\Lambda|$ and we obtain \begin{align*} \log |e^\Lambda-1| \geq \log |\Lambda| - \log 4. \end{align*} Here the bound for $|e^\Lambda-1|$ follows from \eqref{eq:Matveev-proof} since we can omit the $\log 4$ when estimating $e/2$ by $1.4$. \end{proof} \begin{myrem} The lower bound for linear forms in logarithms due to Baker and Wüstholz \cite{bawu93} played a significant role in the development of linear forms in logarithms. The final structure of the lower bound for linear forms in logarithms without an explicit determination of the constant involved has been established by Wüstholz \cite{wu88}, and the precise determination of that constant is the central aspect of \cite{bawu93}. However, slightly sharper bounds are obtained by using Matveev's result \cite{Matveev2000} instead. Let us note that using the highly technical result due to Mignotte \cite{Mignotte:kit} for linear forms in three logarithms would yield even smaller bounds, but to avoid technical difficulties we refrain from applying this result. \end{myrem} For linear forms in only two logarithms, there exist results with much smaller constants. In Section \ref{sec:Tribos} we will use the following bound, which follows immediately from \cite{Laurent2008}. \begin{myprop}[Laurent]\label{prop:Laurent} Let $\eta_1, \eta_2 > 0$ be two real multiplicatively independent algebraic numbers, let $b_1, b_2 \in \mathbb{Z}$ be nonzero integers and let \[ \Lambda := b_1 \log \eta_1 + b_2 \log \eta_2. \] Then \[ \log |\Lambda| \geq - 20.3 D^2 (\max( \log b' + 0.38, 18/D, 1))^2 \log A_1 \log A_2, \] where \begin{align*} D&=[\mathbb{Q}(\eta_1,\eta_2):\mathbb{Q}],\\ b'&=\frac{|b_1|}{\log A_2} + \frac{|b_2|}{\log A_1},\\ \log A_i &\geq \max ( D h(\eta_i),\abs{\log \eta_i},1) \qquad \text{for }i=1,2. \end{align*} \end{myprop} \begin{proof} The bound follows immediately from the bound \cite[Corollary 2]{Laurent2008} with $m=18$ (we have only hidden two of the $D$'s inside $\log A_1$ and $\log A_2$). As for the assumptions, note that Laurent additionally supposes that $b_1$ is positive and $b_2$ is negative, and that $\log \eta_1$ and $\log \eta_2$ are positive. However, changing the signs of both $b$'s does not change $|\Lambda|$ and changing the sign of only one $b$ makes $\abs{\Lambda}$ larger. Thus it is enough to assume that $b_1,b_2$ are nonzero. If $\log \eta_i$ is negative, one can simply swap $\eta_i$ for $\eta_i^{-1}$ and $b_i$ for $-b_i$ without changing the value of $\Lambda$ and still apply Laurent's result. Thus we can also allow the $\log \eta$'s to be negative. Moreover, we do not need to explicitly exclude $\log \eta_i=0$ as in that case we would have $\eta_i=1$, so the $\eta$'s would be multiplicatively dependent. \end{proof} Finally, at the end of Section \ref{sec:Tribos} we will apply the LLL-algorithm to reduce some of the obtained bounds. The next lemma is an immediate variation of \cite[Lemma~VI.1]{Smart1998}. 
\begin{mylemma}[LLL reduction]\label{lem:LLL} Let $\gamma_1,\gamma_2,\gamma_3$ be positive real numbers and $x_1,x_2,x_3$ integers, and let \[ 0 \neq |\Lambda| := |x_1 \log \gamma_1 + x_2 \log \gamma_2 + x_3 \log \gamma_3|. \] Assume that the $x_i$ are bounded in absolute values by some constant $M$ and choose a constant $C>M^3$. Consider the matrix \[ A= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ [C \log \gamma_1] & [C \log \gamma_2] &[C \log \gamma_3] \end{pmatrix}, \] where $[x]$ denotes the nearest integer to $x$. The columns of $A$ form a basis of a lattice. Let $B$ be the matrix that corresponds to the LLL-reduced basis and let $B^*$ be the matrix corresponding to the Gram-Schmidt basis constructed from $B$. Let $c$ be the Euclidean norm of the smallest column vector of $B^*$ and set $S:=2M^2$ and $T:=(1+3M)/2$. If $c^2 > T^2 + S$, then \[ |\Lambda| > \frac{1}{C}\left( \sqrt{c^2-S} - T \right). \] \end{mylemma} \section{Overview of the proofs}\label{sec:overviewProof} The main idea in the proofs of Theorem~\ref{thm:mainthm} and Theorem~\ref{thm:Tribos} is to consider equations of the form $U_{n_1}-b^{m_1}=U_{n_2}-b^{m_2}$ and note that they are very roughly of the form $a \alpha^{n_1} - b^{m_1}= a \alpha^{n_2} - b^{m_2}$. Then one can do the usual tricks for solving Pillai-type equations via lower bounds for linear forms in logarithms: shifting expressions, estimating and applying e.g.\ Matveev's lower bound. This needs to be done in several steps. The main problem in our situation is that $b$ is not fixed, which is why we need to eliminate it at some point and why we assume the existence of three solutions. We split up the proof of Theorem~\ref{thm:mainthm} into several lemmas and the proof of Theorem~\ref{thm:Tribos} into several steps. Table \ref{table:overview} shows a simplified overview of the main lemmas/steps (the cases which allow us to skip some steps are left out in the table). In each lemma/step a new bound is obtained. Note that the symbol $\ll$ stands for ``$\leq$ up to some effectively computable constant''. Lemmas and steps which use the same linear form in logarithms are displayed side by side. Steps marked with a *star do not involve linear forms in logarithms but simply combine previous bounds. \begin{table}[h] \caption{Overview of the proofs}\label{table:overview} \begin{tabular}{l|l} \hline \textbf{Proof of Theorem \ref{thm:mainthm}} & \textbf{Proof of Theorem \ref{thm:Tribos}} \\ (general result) & (Tribonacci numbers) \\ \hline \textbf{Lemma \ref{lemma:step1}:} & \textbf{Step \ref{step:Step1}:} \\ $\min \setb{n_1-n_2, (m_1-m_2) \log b} \ll \log n_1 \log b$ & $n_1-n_2 \ll \log n_1 \log b$ \\ \textbf{Lemma \ref{lemma:step2}:} & \\ $\max \setb{n_1-n_2, (m_1-m_2) \log b} \ll (\log n_1 \log b)^2$ & \\ \textbf{Lemma \ref{lemma:step3}:} & \textbf{Step \ref{step:Step2}:} \\ $n_1 \ll (\log n_1 \log b)^3$ & $n_1 \ll (\log n_1 \log b)^2$ \\ \cline{1-1} \textbf{Lemma \ref{lemma:step5}:} & \textbf{Step \ref{step:Step3}:} \\ $n_2-n_3 \ll \log n_1$ & $n_2-n_3 \ll (\log n_1)^2$ \\ \textbf{Lemma \ref{lemma:step6}:} & \textbf{Step \ref{step:Step4}:} \\ $\log b \ll (\log n_1)^2$ & $n_1-n_2 \ll (\log n_1)^2$ \\ & \textbf{*Step \ref{step:Step5}:} \\ & $\log b \ll (\log n_1)^2$ \\ \textbf{*End:} & \textbf{*Step \ref{step:Step6}:} \\ $n_1 \ll (\log n_1)^9$ & $n_1 \ll (\log n_1)^8$ \\ & \textbf{Reduction Steps} \\ \hline \end{tabular} \end{table} In the proof of Theorem~\ref{thm:mainthm} in the first three lemmas we only assume the existence of two solutions. 
We do that because we want to show how far we can get with our method if we aim at proving a stronger result. Below the horizontal line in the middle of Table~\ref{table:overview} we assume the existence of three solutions. In the proof of Theorem~\ref{thm:Tribos} we assume the existence of three solutions from the beginning. This saves us one step and thus helps to keep the constants and exponents smaller (see Step~\ref{step:Step2} in the table). The attentive reader will notice that the bound in Step~\ref{step:Step3} has a larger exponent than the bound in Lemma ~\ref{lemma:step5}. This is because the corresponding linear form has only two logarithms, so in the Tribonacci setting we apply Laurent's lower bound instead of Matveev's, trading the exponent for a much better constant. In the end, in both proofs we obtain an absolute upper bound for $n_1$ from an inequality of the form $n_1 \ll (\log n_1)^k$. Since the constant is very large, the bound for $n_1$ is also very large. In the Tribonacci setting we try to reduce it. Unfortunately, since $b$ is not fixed and the bound for $b$ is very large, we cannot use the linear forms which contain $b$ for the reduction. This is why we are not able to solve Problem~\ref{problem:Tribos-complete}. However, we can still use the linear forms in logarithms which do not contain $b$ and reduce the initial bound for $n_1$ significantly. \section{Proof of Theorem \ref{thm:mainthm}}\label{sec:proofMainThm} Since the proof is a little bit lengthy, we will split it into several lemmas. All assumptions listed in Theorem \ref{thm:mainthm} are general assumptions in this section. During the whole section we will denote by $ C_i $ effectively computable positive constants which are independent of $ n,m,b,c $. We may assume that $ b \geq 2 $ without loss of generality. Let us now fix a Galois automorphism $ \sigma $ on the splitting field $\mathbb{Q}(\alpha, \alpha_2 , \ldots , \alpha_k)$ with the property $ \abs{\sigma(\alpha)} < \alpha $. This is always possible because $ \alpha $ is irrational and the dominant root of $ U_n $. The effectively computable constant $ N_0 $ will only depend on the characteristic roots and coefficients of the recurrence $ U_n $ as well as on the chosen automorphism $ \sigma $. Although it possibly will be updated at some points in the proof, we will always denote it by the same label $ N_0 $. As a first step we will choose $ N_0 $ large enough such that $ U_n $ is positive and strictly increasing for all $ n \geq N_0 $. This is possible since $ U_n $ has a dominant root $ \alpha > 1 $ with positive coefficient. In addition we can assume that $ \abs{\alpha_2} \geq \abs{\alpha_i} $ for $ i = 2,\ldots,k $ without loss of generality. Assuming that Equation \eqref{eq:centraleq} has three distinct solutions $ (n_1,m_1) $, $ (n_2,m_2) $ and $ (n_3,m_3) $, we can write \begin{equation*} U_{n_1} - b^{m_1} = U_{n_2} - b^{m_2} = U_{n_3} - b^{m_3} \end{equation*} with $ n_1 > n_2 > n_3 \geq N_0 $ and $ m_1 > m_2 > m_3 \geq 1 $. When working only with two of those solutions we often may write \begin{equation} \label{eq:twosols} U_{n_1} - U_{n_2} = b^{m_1} - b^{m_2}. \end{equation} Let us fix \begin{equation*} \varepsilon := \frac{1}{2} \cdot \frac{\alpha-1}{\alpha+1} > 0. \end{equation*} Recalling that $ \alpha $ is the dominant root of $ U_n $, we get \begin{equation*} \abs{\frac{U_n}{a \alpha^n} - 1} \leq \varepsilon \end{equation*} for $ n \geq N_0 $ (where we might have updated $N_0$). 
Thus for $ n \geq N_0 $ we have \begin{equation*} \abs{U_n - a \alpha^n} \leq \varepsilon a \alpha^n, \end{equation*} which implies the bounds \begin{equation} \label{eq:unrange} C_1 a \alpha^n \leq U_n \leq C_2 a \alpha^n \end{equation} for $ C_1 = 1 - \varepsilon $ and $ C_2 = 1 + \varepsilon $. The choice of $ \varepsilon $ guarantees that $ C_3 := C_1 \alpha - C_2 > 0 $. Therefore by using Equation \eqref{eq:twosols} and Inequality \eqref{eq:unrange} we get \begin{align*} b^{m_1} &\geq b^{m_1} - b^{m_2} = U_{n_1} - U_{n_2} \geq C_1 a \alpha^{n_1} - C_2 a \alpha^{n_2} \\ &= C_1 \alpha a \alpha^{n_1-1} - C_2 a \alpha^{n_2} \geq C_3 a \alpha^{n_1-1} = C_4 \alpha^{n_1} \end{align*} as well as \begin{equation*} C_2 a \alpha^{n_1} \geq U_{n_1} \geq U_{n_1} - U_{n_2} = b^{m_1} - b^{m_2} \geq \left( 1 - \frac{1}{b} \right) b^{m_1} \geq \frac{1}{2} b^{m_1}. \end{equation*} These two inequality chains yield \begin{equation} \label{eq:relleadterm} C_4 \alpha^{n_1} \leq b^{m_1} \leq C_5 \alpha^{n_1}. \end{equation} Note that these bounds are valid for any solution of \eqref{eq:centraleq} provided that there is a further smaller solution. Applying the logarithm to Equation \eqref{eq:relleadterm} gives us \begin{equation*} m_1 \leq \frac{\log C_5}{\log b} + n_1 \cdot \frac{\log \alpha}{\log b} < n_1 \end{equation*} if $ b $ is large enough. Thus for our purpose we may assume that $ n_1 > m_1 $. The next big intermediate result is to prove that, if there exist at least two distinct solutions $ (n_1,m_1) $ and $ (n_2,m_2) $ to Equation \eqref{eq:centraleq}, then for the larger one $ (n_1,m_1) $ the bound \begin{equation} \label{eq:logbound} n_1 \leq C_6 (\log n_1 \log b)^3 \end{equation} holds. This will be done by the following three lemmas. \begin{mylemma} \label{lemma:step1} Assume that Equation \eqref{eq:centraleq} has at least two distinct solutions $ (n_1,m_1) $ and $ (n_2,m_2) $ with $ n_1 > n_2 $ as considered in Theorem \ref{thm:mainthm}. Then we have \begin{equation*} \min \setb{n_1-n_2, (m_1-m_2) \log b} \leq C_7 \log n_1 \log b. \end{equation*} \end{mylemma} \begin{proof} Inserting the Binet representation of the linear recurrence sequence into Equation \eqref{eq:twosols} and regrouping terms yields \begin{align*} \abs{a \alpha^{n_1} - b^{m_1}} &= \abs{a \alpha^{n_2} + \sum_{l=2}^{k} a_l \alpha_l^{n_2} - \sum_{l=2}^{k} a_l \alpha_l^{n_1} - b^{m_2}} \\ &\leq a \alpha^{n_2} + C_8 \abs{\alpha_2}^{n_2} + C_9 \abs{\alpha_2}^{n_1} + b^{m_2}, \end{align*} and, dividing both sides by $ b^{m_1} $, we get \begin{align} \abs{\frac{a \alpha^{n_1}}{b^{m_1}} - 1} &\leq \frac{1}{b^{m_1}} \cdot \left( a \alpha^{n_2} + C_8 \abs{\alpha_2}^{n_2} + C_9 \abs{\alpha_2}^{n_1} + b^{m_2} \right) \nonumber \\ &= \frac{a \alpha^{n_2}}{b^{m_1}} + C_8 \frac{\abs{\alpha_2}^{n_2}}{b^{m_1}} + C_9 \frac{\abs{\alpha_2}^{n_1}}{b^{m_1}} + b^{m_2 - m_1} \nonumber \\ \label{eq:chain_step1} &\leq \frac{a \alpha^{n_2}}{C_4 \alpha^{n_1}} + C_8 \frac{\abs{\alpha_2}^{n_2}}{C_4 \alpha^{n_1}} + C_9 \frac{\abs{\alpha_2}^{n_1}}{C_4 \alpha^{n_1}} + b^{m_2 - m_1} \\ &\leq \frac{a}{C_4} \alpha^{n_2-n_1} + \frac{C_8}{C_4} \alpha^{n_2-n_1} + \frac{C_9}{C_4} \abs{\alpha_2}^{n_1-n_2} \alpha^{n_2-n_1} + b^{m_2 - m_1} \nonumber \\ &\leq C_{10} \max \setb{\alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}, b^{m_2 - m_1}},\nonumber \end{align} where in the third line we have used Inequality \eqref{eq:relleadterm}. 
Note that we can either have $ \abs{\alpha_2} \leq 1 $ or $ \abs{\alpha_2} > 1 $ and that the bound may be simplified if we work with a concrete recurrence sequence where the size of $ \alpha_2 $ is known. We aim for applying Proposition~\ref{prop:Matveev} to get also a lower bound. Therefore let us set $ t=3 $, $ K = \mathbb{Q}(a,\alpha) $, $ D = [K:\mathbb{Q}] $ as well as \begin{align*} \eta_1 &= a, \quad & \eta_2 &= \alpha, \quad & \eta_3 &= b,\\ b_1 &= 1, \quad & b_2 &= n_1, \quad & b_3 &= -m_1. \end{align*} Further we can put \begin{align*} A_1 &= \max \setb{Dh(a), \abs{\log a}, 0.16}, \\ A_2 &= \max \setb{Dh(\alpha), \log \alpha, 0.16}, \\ A_3 &= D \log b, \\ B &= n_1. \end{align*} Finally, we have to show that the expression \begin{equation*} \Lambda := a \alpha^{n_1} b^{-m_1} - 1 \end{equation*} is nonzero. Assuming the contrary, we would have $ a \alpha^{n_1} = b^{m_1} $ and by applying $ \sigma $ we would get the equality $ a \alpha^{n_1} = \sigma(a) \sigma(\alpha)^{n_1} $. Taking absolute values this implies \begin{equation*} \frac{\abs{\sigma(a)}}{a} = \left( \frac{\alpha}{\abs{\sigma(\alpha)}} \right)^{n_1}, \end{equation*} which is a contradiction for $ n_1 \geq N_0 $ (where, as always, we may have updated $N_0$). Hence we have $ \Lambda \neq 0 $. Now Proposition~\ref{prop:Matveev} states that \begin{equation*} \log \abs{\Lambda} \geq -C_{11} (1 + \log n_1) \log b \geq -C_{12} \log n_1 \log b. \end{equation*} Comparing this lower bound with the upper bound coming from the first paragraph of this proof gives us \begin{equation*} -C_{12} \log n_1 \log b \leq \log C_{10} + \log \max \setb{\alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}, b^{m_2 - m_1}} \end{equation*} which immediately implies \begin{equation*} \min \setb{n_1-n_2, (m_1-m_2) \log b} \leq C_7 \log n_1 \log b. \end{equation*} This proves the lemma. \end{proof} \begin{mylemma} \label{lemma:step2} Assume that Equation \eqref{eq:centraleq} has at least two distinct solutions $ (n_1,m_1) $ and $ (n_2,m_2) $ with $ n_1 > n_2 $ as considered in Theorem \ref{thm:mainthm}. Then we have \begin{equation*} \max \setb{n_1-n_2, (m_1-m_2) \log b} \leq C_{13} (\log n_1 \log b)^2 \end{equation*} or \begin{equation*} n_1 \leq C_{14} (\log n_1 \log b)^2. \end{equation*} \end{mylemma} \begin{proof} We distinguish between two cases according to the statement of Lemma \ref{lemma:step1}. Let us first assume that \begin{equation*} \min \setb{n_1-n_2, (m_1-m_2) \log b} = n_1-n_2. \end{equation*} Then we have \begin{equation} \label{eq:s2case1} n_1-n_2 \leq C_7 \log n_1 \log b. \end{equation} Inserting the Binet representation of the linear recurrence sequence into Equation~\eqref{eq:twosols} and regrouping terms yields \begin{align*} \abs{a \alpha^{n_2} (\alpha^{n_1-n_2} - 1) - b^{m_1}} &= \abs{\sum_{l=2}^{k} a_l \alpha_l^{n_2} - \sum_{l=2}^{k} a_l \alpha_l^{n_1} - b^{m_2}} \\ &\leq C_8 \abs{\alpha_2}^{n_2} + C_9 \abs{\alpha_2}^{n_1} + b^{m_2}. \end{align*} Now we divide this inequality by $ b^{m_1} $. 
If $ \abs{\alpha_2} \leq 1 $ we get \begin{equation*} \abs{\frac{a \alpha^{n_2} (\alpha^{n_1-n_2} - 1)}{b^{m_1}} - 1} \leq \frac{C_8 + C_9 + b^{m_2}}{b^{m_1}} \leq C_{15} b^{m_2-m_1}, \end{equation*} and if $ \abs{\alpha_2} > 1 $ we get \begin{align*} \abs{\frac{a \alpha^{n_2} (\alpha^{n_1-n_2} - 1)}{b^{m_1}} - 1} &\leq \frac{C_{16} \abs{\alpha_2}^{n_1} + b^{m_2}}{b^{m_1}} \leq \frac{C_{16} \abs{\alpha_2}^{n_1}}{C_4 \alpha^{n_1}} + b^{m_2-m_1} \\ &\leq C_{17} \max \setb{\left( \frac{\abs{\alpha_2}}{\alpha} \right)^{n_1}, b^{m_2-m_1}}, \end{align*} where for the second inequality we have used \eqref{eq:relleadterm}. So in both subcases we have \begin{equation} \label{eq:s2c1upper} \abs{\frac{a \alpha^{n_2} (\alpha^{n_1-n_2} - 1)}{b^{m_1}} - 1} \leq C_{18} \max \setb{\left( \frac{\abs{\alpha_2}}{\alpha} \right)^{n_1}, b^{m_2-m_1}}. \end{equation} We aim for applying Proposition~\ref{prop:Matveev} to get a lower bound. Therefore let us set $ t=3 $, $ K = \mathbb{Q}(a,\alpha) $, $ D = [K:\mathbb{Q}] $ as well as \begin{align*} \eta_1 &= a(\alpha^{n_1-n_2} - 1), \quad & \eta_2 &= \alpha, \quad & \eta_3 &= b, \\ b_1 &= 1, \quad & b_2 &= n_2, \quad & b_3 &=-m_1. \end{align*} Using Inequality \eqref{eq:s2case1} gives us \begin{align*} h(\eta_1) &\leq h(a) + h(\alpha^{n_1-n_2} - 1) \leq h(a) + h(\alpha^{n_1-n_2}) + h(1) + \log 2 \\ &\leq C_{19} + (n_1-n_2) h(\alpha) \leq C_{20} \log n_1 \log b \end{align*} and an analogous bound for $ \log \eta_1 $. Thus we can put \begin{align*} A_1 &= C_{21} \log n_1 \log b, \\ A_2 &= \max \setb{Dh(\alpha), \log \alpha, 0.16}, \\ A_3 &= D \log b, \\ B &= n_1. \end{align*} Finally, we have to show that the expression \begin{equation*} \Lambda := a(\alpha^{n_1-n_2} - 1) \alpha^{n_2} b^{-m_1} - 1 \end{equation*} is nonzero. Assuming the contrary, we would have $ a (\alpha^{n_1-n_2} - 1) \alpha^{n_2} = b^{m_1} $ and by applying $ \sigma $ the equality $ a (\alpha^{n_1-n_2} - 1) \alpha^{n_2} = \sigma(a) (\sigma(\alpha)^{n_1-n_2} - 1) \sigma(\alpha)^{n_2} $. Taking absolute values this implies \begin{equation*} \abs{\frac{\sigma(a) (\sigma(\alpha)^{n_1-n_2} - 1)}{a (\alpha^{n_1-n_2} - 1)}} = \left( \frac{\alpha}{\abs{\sigma(\alpha)}} \right)^{n_2}, \end{equation*} which is a contradiction for $ n_2 \geq N_0 $ since the left hand side is bounded above by a constant. Hence we have $ \Lambda \neq 0 $. Now Proposition~\ref{prop:Matveev} states that \begin{equation*} \log \abs{\Lambda} \geq -C_{22} (1 + \log n_1) \log n_1 \log b \log b \geq -C_{23} (\log n_1 \log b)^2. \end{equation*} Comparing this lower bound with the upper bound \eqref{eq:s2c1upper} gives us \begin{equation*} -C_{23} (\log n_1 \log b)^2 \leq \log C_{18} + \log \max \setb{\left( \frac{\abs{\alpha_2}}{\alpha} \right)^{n_1}, b^{m_2-m_1}} \end{equation*} which immediately implies \begin{equation*} \min \setb{n_1, (m_1-m_2) \log b} \leq C_{24} (\log n_1 \log b)^2. \end{equation*} Then, depending on which of the two expressions is the minimum, we either have \begin{equation*} n_1 \leq C_{14} (\log n_1 \log b)^2 \end{equation*} or \begin{equation*} \max \setb{n_1-n_2, (m_1-m_2) \log b} = (m_1-m_2) \log b \leq C_{13} (\log n_1 \log b)^2. \end{equation*} This concludes the first case. In the second case we assume that \begin{equation*} \min \setb{n_1-n_2, (m_1-m_2) \log b} = (m_1-m_2) \log b. \end{equation*} Thus we have \begin{equation} \label{eq:s2case2} (m_1-m_2) \log b \leq C_7 \log n_1 \log b. 
\end{equation} Inserting the Binet representation of the linear recurrence sequence into Equation~\eqref{eq:twosols} and regrouping terms yields \begin{align*} \abs{a \alpha^{n_1} - b^{m_2} (b^{m_1-m_2} - 1)} &= \abs{a \alpha^{n_2} + \sum_{l=2}^{k} a_l \alpha_l^{n_2} - \sum_{l=2}^{k} a_l \alpha_l^{n_1}} \\ &\leq a \alpha^{n_2} + C_8 \abs{\alpha_2}^{n_2} + C_9 \abs{\alpha_2}^{n_1} \\ &\leq C_{25} \alpha^{n_2} + C_9 \abs{\alpha_2}^{n_1}, \end{align*} and, dividing both sides by $ b^{m_1} - b^{m_2} $, we get \begin{align} \abs{\frac{a \alpha^{n_1}}{b^{m_2} (b^{m_1-m_2} - 1)} - 1} &\leq \frac{C_{25} \alpha^{n_2} + C_9 \abs{\alpha_2}^{n_1}}{b^{m_1} - b^{m_2}} \nonumber \\ &\leq \frac{C_{25} \alpha^{n_2} + C_9 \abs{\alpha_2}^{n_1}}{\frac{1}{2} b^{m_1}} \nonumber \\ &\leq \frac{C_{25} \alpha^{n_2} + C_9 \abs{\alpha_2}^{n_1}}{\frac{1}{2} C_4 \alpha^{n_1}} \nonumber \\ &\leq C_{26} \alpha^{n_2-n_1} + C_{27} \abs{\alpha_2}^{n_1-n_2} \alpha^{n_2-n_1} \nonumber \\ \label{eq:s2c2upper} &\leq C_{28} \max \setb{\alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}}, \end{align} where we have used the inequalities $ b^{m_1} - b^{m_2} \geq \frac{1}{2} b^{m_1} $ and \eqref{eq:relleadterm}. We aim for applying Proposition~\ref{prop:Matveev} to get a lower bound. Therefore let us set $ t=3 $, $ K = \mathbb{Q}(a,\alpha) $, $ D = [K:\mathbb{Q}] $ as well as \begin{align*} \eta_1 &= (b^{m_1-m_2} - 1)/a, \quad & \eta_2 &= \alpha, \quad & \eta_3 &= b, \\ b_1 &= -1, \quad & b_2 &= n_1, \quad & b_3 &= -m_2. \end{align*} Using Inequality \eqref{eq:s2case2} as well as $ h(b) = \log b $ gives us \begin{align*} h(\eta_1) &\leq h(a) + h(b^{m_1-m_2} - 1) \leq h(a) + h(b^{m_1-m_2}) + h(1) + \log 2 \\ &\leq C_{29} + (m_1-m_2) h(b) \leq C_{30} \log n_1 \log b \end{align*} and an analogous bound for $ \log \eta_1 $. Thus we can put \begin{align*} A_1 &= C_{31} \log n_1 \log b, \\ A_2 &= \max \setb{Dh(\alpha), \log \alpha, 0.16}, \\ A_3 &= D \log b, \\ B &= n_1. \end{align*} Finally, we have to show that the expression \begin{equation*} \Lambda := \alpha^{n_1} b^{-m_2} ((b^{m_1-m_2} - 1)/a)^{-1} - 1 \end{equation*} is nonzero. Assuming the contrary, we would have $ a \alpha^{n_1} = b^{m_1} - b^{m_2} $ and by applying $ \sigma $ we would get the equality $ a \alpha^{n_1} = \sigma(a) \sigma(\alpha)^{n_1} $. Taking absolute values this implies \begin{equation*} \frac{\abs{\sigma(a)}}{a} = \left( \frac{\alpha}{\abs{\sigma(\alpha)}} \right)^{n_1}, \end{equation*} which is a contradiction for $ n_1 \geq N_0 $. Hence we have $ \Lambda \neq 0 $. Now Proposition~\ref{prop:Matveev} states that \begin{equation*} \log \abs{\Lambda} \geq -C_{32} (1 + \log n_1) \log n_1 \log b \log b \geq -C_{33} (\log n_1 \log b)^2. \end{equation*} Comparing this lower bound with the upper bound \eqref{eq:s2c2upper} gives us \begin{equation*} -C_{33} (\log n_1 \log b)^2 \leq \log C_{28} + \log \max \setb{\alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}} \end{equation*} which immediately implies \begin{equation*} \max \setb{n_1-n_2, (m_1-m_2) \log b} = n_1-n_2 \leq C_{13} (\log n_1 \log b)^2. \end{equation*} This concludes the proof of the lemma. \end{proof} \begin{mylemma} \label{lemma:step3} Assume that Equation \eqref{eq:centraleq} has at least two distinct solutions $ (n_1,m_1) $ and $ (n_2,m_2) $ with $ n_1 > n_2 $ as considered in Theorem \ref{thm:mainthm}. Then we have \begin{equation*} n_1 \leq C_6 (\log n_1 \log b)^3. 
\end{equation*} \end{mylemma} \begin{proof} By Lemma \ref{lemma:step2} we can assume that \begin{equation} \label{eq:s2bound} \max \setb{n_1-n_2, (m_1-m_2) \log b} \leq C_{13} (\log n_1 \log b)^2 \end{equation} since the other case is trivial. Inserting the Binet representation of the linear recurrence sequence into Equation \eqref{eq:twosols} and regrouping terms yields \begin{align*} \abs{a \alpha^{n_2} (\alpha^{n_1-n_2} - 1) - b^{m_2} (b^{m_1-m_2} - 1)} &= \abs{\sum_{l=2}^{k} a_l \alpha_l^{n_2} - \sum_{l=2}^{k} a_l \alpha_l^{n_1}} \\ &\leq C_8 \abs{\alpha_2}^{n_2} + C_9 \abs{\alpha_2}^{n_1}. \end{align*} Dividing both sides by $ b^{m_1} - b^{m_2} $ then gives us \begin{align*} \abs{\frac{a \alpha^{n_2} (\alpha^{n_1-n_2} - 1)}{b^{m_2} (b^{m_1-m_2} - 1)} - 1} &\leq \frac{C_8 \abs{\alpha_2}^{n_2} + C_9 \abs{\alpha_2}^{n_1}}{b^{m_1} - b^{m_2}} \\ &\leq \frac{C_8 \abs{\alpha_2}^{n_2} + C_9 \abs{\alpha_2}^{n_1}}{\frac{1}{2} b^{m_1}} \\ &\leq \frac{C_8 \abs{\alpha_2}^{n_2} + C_9 \abs{\alpha_2}^{n_1}}{\frac{1}{2} C_4 \alpha^{n_1}} \\ &\leq C_{34} \max \setb{\alpha^{-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{-n_1}}, \end{align*} where we have used the inequalities $ b^{m_1} - b^{m_2} \geq \frac{1}{2} b^{m_1} $ and \eqref{eq:relleadterm}. Note that the maximum occurs in view of the distinction between $ \abs{\alpha_2} \leq 1 $ and $ \abs{\alpha_2} > 1 $. We aim for applying Proposition~\ref{prop:Matveev} to get a lower bound. Therefore let us set $ t=3 $, $ K = \mathbb{Q}(a,\alpha) $, $ D = [K:\mathbb{Q}] $ as well as \begin{align*} \eta_1 &= \frac{a(\alpha^{n_1-n_2} - 1)}{b^{m_1-m_2} - 1}, \quad & \eta_2 &= \alpha, \quad & \eta_3 &= b,\\ b_1 &= 1, \quad & b_2 &= n_2, \quad & b_3 &= -m_2. \end{align*} Using the bound \eqref{eq:s2bound} as well as $ h(b) = \log b $ gives us \begin{align*} h(\eta_1) &\leq h(a) + h(\alpha^{n_1-n_2} - 1) + h(b^{m_1-m_2} - 1) \\ &\leq h(a) + h(\alpha^{n_1-n_2}) + h(1) + \log 2 + h(b^{m_1-m_2}) + h(1) + \log 2 \\ &\leq C_{35} + (n_1-n_2) h(\alpha) + (m_1-m_2) h(b) \leq C_{36} (\log n_1 \log b)^2 \end{align*} and an analogous bound for $ \abs{\log \eta_1} $. Thus we can put \begin{align*} A_1 &= C_{37} (\log n_1 \log b)^2, \\ A_2 &= \max \setb{Dh(\alpha), \log \alpha, 0.16}, \\ A_3 &= D \log b, \\ B &= n_1. \end{align*} Finally, we have to show that the expression \begin{equation*} \Lambda := \alpha^{n_2} b^{-m_2} a(\alpha^{n_1-n_2} - 1)/(b^{m_1-m_2} - 1) - 1 \end{equation*} is nonzero. Assuming the contrary, we would have $ a (\alpha^{n_1-n_2} - 1) \alpha^{n_2} = b^{m_1} - b^{m_2} $ and by applying $ \sigma $ we get the equality $ a (\alpha^{n_1-n_2} - 1) \alpha^{n_2} = \sigma(a) (\sigma(\alpha)^{n_1-n_2} - 1) \sigma(\alpha)^{n_2} $. Taking absolute values this implies \begin{equation*} \abs{\frac{\sigma(a) (\sigma(\alpha)^{n_1-n_2} - 1)}{a (\alpha^{n_1-n_2} - 1)}} = \left( \frac{\alpha}{\abs{\sigma(\alpha)}} \right)^{n_2}, \end{equation*} which is a contradiction for $ n_2 \geq N_0 $ since the left hand side is bounded above by a constant. Hence we have $ \Lambda \neq 0 $. Now Proposition~\ref{prop:Matveev} states that \begin{equation*} \log \abs{\Lambda} \geq -C_{38} (1 + \log n_1) (\log n_1 \log b)^2 \log b \geq -C_{39} (\log n_1 \log b)^3. 
\end{equation*} Comparing this lower bound with the upper bound coming from the first paragraph of this proof gives us \begin{equation*} -C_{39} (\log n_1 \log b)^3 \leq \log C_{34} + \log \max \setb{\alpha^{-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{-n_1}} \end{equation*} which immediately implies \begin{equation*} n_1 \leq C_6 (\log n_1 \log b)^3. \end{equation*} Thus the lemma is proven. \end{proof} We have now reached our first milestone, the bound \eqref{eq:logbound} is proven. The second milestone is to prove that if there are at least three distinct solutions $ (n_1,m_1) $, $ (n_2,m_2) $ and $ (n_3,m_3) $, then for the largest one we have \begin{equation} \label{eq:bbound} \log b \leq C_{40} (\log n_1)^2. \end{equation} Again we will split this part into some lemmas. \begin{mylemma} \label{lemma:step4} Assume that Equation \eqref{eq:centraleq} has at least two distinct solutions $ (n_1,m_1) $ and $ (n_2,m_2) $ with $ n_1 > n_2 $ as considered in Theorem \ref{thm:mainthm}. Then we have \begin{equation*} n_1 \log \alpha - C_{41} \leq m_1 \log b \leq n_1 \log \alpha + C_{42}. \end{equation*} \end{mylemma} \begin{proof} This follows immediately by applying the logarithm to Inequality \eqref{eq:relleadterm} from above. \end{proof} \begin{mylemma} \label{lemma:step5} Assume that Equation \eqref{eq:centraleq} has at least three distinct solutions $ (n_1,m_1) $, $ (n_2,m_2) $ and $ (n_3,m_3) $ with $ n_1 > n_2 > n_3 $ as considered in Theorem \ref{thm:mainthm}. Then at least one of the following inequalities holds: \begin{enumerate}[(i)] \item $ \log b \leq C_{43} \log n_1 $, \item $ n_2-n_3 \leq C_{44} \log n_1 $. \end{enumerate} \end{mylemma} \begin{proof} Let us recall Inequality \eqref{eq:chain_step1} from the proof of Lemma \ref{lemma:step1}, where we obtained \begin{equation*} \abs{\frac{a \alpha^{n_1}}{b^{m_1}} - 1} \leq C_{10} \max \setb{\alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}, b^{m_2 - m_1}}. \end{equation*} Now we define the linear form \begin{equation*} \Lambda_{12} := n_1 \log \alpha - m_1 \log b + \log a = \log \frac{a \alpha^{n_1}}{b^{m_1}} \end{equation*} and distinguish between two cases. Let us first assume that $ \abs{\Lambda_{12}} > 1 $. Then by Lemma \ref{lemma:linbig} we have \begin{equation*} \frac{3}{5} \leq \abs{e^{\Lambda_{12}} - 1} \leq C_{10} \max \setb{\alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}, b^{m_2 - m_1}}. \end{equation*} If the maximum is either $ \alpha^{n_2-n_1} $ or $ \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1} $, this implies an upper bound $ n_1-n_2 \leq C_{45} $. We will come back to this later. If the maximum is $ b^{m_2 - m_1} $, then the inequality implies \begin{equation*} \log b \leq (m_1-m_2) \log b \leq C_{46} \end{equation*} and we are done. Therefore we will now assume that $ \abs{\Lambda_{12}} \leq 1 $. Since we are now working with three solutions, we analogously get the upper bound \begin{equation*} \abs{\frac{a \alpha^{n_2}}{b^{m_2}} - 1} \leq C_{10} \max \setb{\alpha^{n_3-n_2}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_3-n_2}, b^{m_3 - m_2}} \end{equation*} and define the linear form \begin{equation*} \Lambda_{23} := n_2 \log \alpha - m_2 \log b + \log a = \log \frac{a \alpha^{n_2}}{b^{m_2}}. \end{equation*} Once again we distinguish between two cases and assume first that $ \abs{\Lambda_{23}} > 1 $. 
Then by Lemma \ref{lemma:linbig} we have \begin{equation*} \frac{3}{5} \leq \abs{e^{\Lambda_{23}} - 1} \leq C_{10} \max \setb{\alpha^{n_3-n_2}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_3-n_2}, b^{m_3 - m_2}}. \end{equation*} If the maximum is either $ \alpha^{n_3-n_2} $ or $ \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_3-n_2} $, this implies an upper bound $ n_2-n_3 \leq C_{45} $. If the maximum is $ b^{m_3 - m_2} $, then the inequality implies \begin{equation*} \log b \leq (m_3-m_2) \log b \leq C_{46}. \end{equation*} In both situations we are done. Therefore we will now assume that $ \abs{\Lambda_{23}} \leq 1 $. As we have $ \abs{\Lambda_{12}} \leq 1 $ as well as $ \abs{\Lambda_{23}} \leq 1 $, we can apply Lemma \ref{lemma:linsmall} to both linear forms, which yields \begin{align*} \abs{\Lambda_{12}} &\leq 4 \abs{e^{\Lambda_{12}} - 1} \leq C_{47} \max \setb{\alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}, b^{m_2 - m_1}}, \\ \abs{\Lambda_{23}} &\leq 4 \abs{e^{\Lambda_{23}} - 1} \leq C_{47} \max \setb{\alpha^{n_3-n_2}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_3-n_2}, b^{m_3 - m_2}}. \end{align*} As the next step we define a further linear form by \begin{equation*} \Lambda := m_2 \Lambda_{12} - m_1 \Lambda_{23} = (m_2 n_1 - m_1 n_2) \log \alpha + (m_2 - m_1) \log a. \end{equation*} Using the upper bounds for $ \abs{\Lambda_{12}} $ and $ \abs{\Lambda_{23}} $ from above we get \begin{align} \abs{\Lambda} &\leq m_2 \abs{\Lambda_{12}} + m_1 \abs{\Lambda_{23}} \nonumber \\ \label{eq:s5bound} &\leq C_{48} n_1 \max \left( \alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}, b^{m_2 - m_1}, \right. \\ &\hspace{4cm} \left. \alpha^{n_3-n_2}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_3-n_2}, b^{m_3 - m_2} \right). \nonumber \end{align} We aim for applying Proposition \ref{prop:Matveev} to get a lower bound. Therefore let us set $ t=2 $, $ K = \mathbb{Q}(a,\alpha) $, $ D = [K:\mathbb{Q}] $ as well as \begin{align*} \eta_1 &= \alpha, \quad & \eta_2 &= a,\\ b_1 &= m_2 n_1 - m_1 n_2, \quad & b_2 &= m_2 - m_1. \end{align*} Further we can put \begin{align*} A_1 &= \max \setb{Dh(\alpha), \log \alpha, 0.16}, \\ A_2 &= \max \setb{Dh(a), \abs{\log a}, 0.16}, \\ B &= n_1^2. \end{align*} Finally, we have to show that $ \Lambda $ is nonzero. But this follows immediately from $ m_1 \neq m_2 $ and the assumption that $ \alpha $ and $ a $ are multiplicatively independent. Now Proposition \ref{prop:Matveev} states that \begin{equation*} \log \abs{\Lambda} \geq -C_{49} (1 + \log (n_1^2)) \geq -C_{50} \log (n_1^2) \geq -C_{51} \log n_1. \end{equation*} Comparing this lower bound with the upper bound \eqref{eq:s5bound} gives us \begin{align*} -C_{51} \log n_1 &\leq \log C_{48} + \log n_1 + \log \max \left( \alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}, b^{m_2 - m_1}, \right. \\ &\hspace{5cm} \left. \alpha^{n_3-n_2}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_3-n_2}, b^{m_3 - m_2} \right) \end{align*} which immediately implies \begin{equation} \label{eq:s5minbound} \min \setb{n_1-n_2, n_2-n_3, (m_1-m_2) \log b, (m_2-m_3) \log b} \leq C_{52} \log n_1. \end{equation} We have now to handle three cases. If the minimum in Inequality \eqref{eq:s5minbound} is either $ (m_1-m_2) \log b $ or $ (m_2-m_3) \log b $, then we get \begin{equation*} \log b \leq (m_i-m_{i+1}) \log b \leq C_{52} \log n_1 \end{equation*} for an $ i \in \set{1,2} $ and are done. 
If the minimum in Inequality \eqref{eq:s5minbound} is $ n_2-n_3 $, we have case (ii) of the lemma. So it remains to consider the case when the minimum in Inequality \eqref{eq:s5minbound} is $ n_1-n_2 $. This can be handled together with the still open case $ n_1-n_2 \leq C_{45} $ from above. Hence let us assume that $ n_1-n_2 \leq C_{53} \log n_1 $. By Lemma \ref{lemma:step4} we get \begin{align*} C_{53} \log n_1 &\geq n_1-n_2 \\ &\geq \frac{1}{\log \alpha} (m_1 \log b - C_{42}) - \frac{1}{\log \alpha} (m_2 \log b + C_{41}) \\ &= \frac{1}{\log \alpha} \left( (m_1-m_2) \log b - C_{41} - C_{42} \right) \end{align*} and thus \begin{equation*} \log b \leq (m_1-m_2) \log b \leq C_{54} \log n_1 \end{equation*} which concludes the proof of the lemma. \end{proof} \begin{mylemma} \label{lemma:step6} Assume that Equation \eqref{eq:centraleq} has at least three distinct solutions $ (n_1,m_1) $, $ (n_2,m_2) $ and $ (n_3,m_3) $ with $ n_1 > n_2 > n_3 $ as considered in Theorem \ref{thm:mainthm}. Then we have \begin{equation*} \log b \leq C_{40} (\log n_1)^2. \end{equation*} \end{mylemma} \begin{proof} From Lemma \ref{lemma:step5} we get that either (i) or (ii) holds. Since in the case (i) there is nothing to do, we may assume that we are in the case (ii) and therefore have the bound \begin{equation} \label{eq:s6bound} n_2-n_3 \leq C_{44} \log n_1. \end{equation} Recall the linear form \begin{equation*} \Lambda_{12} = n_1 \log \alpha - m_1 \log b + \log a \end{equation*} from the proof of Lemma \ref{lemma:step5}. Note that it is enough to consider the situation $ \abs{\Lambda_{12}} \leq 1 $, which by Lemma \ref{lemma:linsmall} implied \begin{equation*} \abs{\Lambda_{12}} \leq 4 \abs{e^{\Lambda_{12}} - 1} \leq C_{47} \max \setb{\alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}, b^{m_2 - m_1}}, \end{equation*} since the situation $ \abs{\Lambda_{12}} > 1 $ led to case (i). Taking a look at the proof of Lemma \ref{lemma:step2} and noting that we are working with three solutions, we recall from \eqref{eq:s2c1upper} the upper bound \begin{equation*} \abs{\frac{a \alpha^{n_3} (\alpha^{n_2-n_3} - 1)}{b^{m_2}} - 1} \leq C_{18} \max \setb{\left( \frac{\abs{\alpha_2}}{\alpha} \right)^{n_2}, b^{m_3-m_2}} \end{equation*} and define the linear form \begin{equation*} \widetilde{\Lambda_{23}} := n_3 \log \alpha - m_2 \log b + \log (a(\alpha^{n_2-n_3} - 1)). \end{equation*} Again we distinguish between two cases and assume first that $ \abs{\widetilde{\Lambda_{23}}} > 1 $. Then by Lemma \ref{lemma:linbig} we have \begin{equation*} \frac{3}{5} \leq \abs{e^{\widetilde{\Lambda_{23}}} - 1} \leq C_{18} \max \setb{\left( \frac{\abs{\alpha_2}}{\alpha} \right)^{n_2}, b^{m_3-m_2}}. \end{equation*} If the maximum is $ \left( \frac{\abs{\alpha_2}}{\alpha} \right)^{n_2} $, this implies an upper bound $ n_2 \leq C_{55} $. We will come back to this later. If the maximum is $ b^{m_3 - m_2} $, then the inequality implies \begin{equation*} \log b \leq (m_2-m_3) \log b \leq C_{56} \end{equation*} and we are done. Therefore we will now assume that $ \abs{\widetilde{\Lambda_{23}}} \leq 1 $. In this situation we can apply Lemma \ref{lemma:linsmall} which yields \begin{equation*} \abs{\widetilde{\Lambda_{23}}} \leq 4 \abs{e^{\widetilde{\Lambda_{23}}} - 1} \leq C_{57} \max \setb{\left( \frac{\abs{\alpha_2}}{\alpha} \right)^{n_2}, b^{m_3-m_2}}. 
\end{equation*} In the next step we define a further linear form by \begin{align*} \Lambda :=\ &m_2 \Lambda_{12} - m_1 \widetilde{\Lambda_{23}} \\ =\ &(m_2 n_1 - m_1 n_3) \log \alpha + (m_2 - m_1) \log a - m_1 \log (\alpha^{n_2-n_3}-1). \end{align*} Using the upper bounds for $ \abs{\Lambda_{12}} $ and $ \abs{\widetilde{\Lambda_{23}}} $ from above we get \begin{align} \abs{\Lambda} &\leq m_2 \abs{\Lambda_{12}} + m_1 \abs{\widetilde{\Lambda_{23}}} \nonumber \\ \label{eq:s6linbound} &\leq C_{58} n_1 \max \setb{\alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}, b^{m_2 - m_1}, \left( \frac{\abs{\alpha_2}}{\alpha} \right)^{n_2}, b^{m_3-m_2}}. \end{align} We aim for applying Proposition \ref{prop:Matveev} to get a lower bound. Therefore let us set $ t=3 $, $ K = \mathbb{Q}(a,\alpha) $, $ D = [K:\mathbb{Q}] $ as well as \begin{align*} \eta_1 &= \alpha, \quad & \eta_2 &= a, \quad & \eta_3 &= \alpha^{n_2-n_3}-1,\\ b_1 &= m_2 n_1 - m_1 n_3, \quad & b_2 &= m_2 - m_1, \quad & b_3 &= -m_1. \end{align*} Using the bound \eqref{eq:s6bound} gives us \begin{align*} h(\eta_3) &\leq h(\alpha^{n_2-n_3}) + h(1) + \log 2 = (n_2-n_3) h(\alpha) + h(1) + \log 2 \\ &\leq C_{59} (n_2-n_3) \leq C_{60} \log n_1 \end{align*} and an analogous bound for $ \abs{\log \eta_3} $. Thus we can put \begin{align*} A_1 &= \max \setb{Dh(\alpha), \log \alpha, 0.16}, \\ A_2 &= \max \setb{Dh(a), \abs{\log a}, 0.16}, \\ A_3 &= C_{61} \log n_1, \\ B &= n_1^2. \end{align*} Finally, we have to show that $ \Lambda $ is nonzero. Assume the contrary. Since $ m_1 \neq m_2 $, this means that $ \alpha $, $ a $ and $ \alpha^{n_2-n_3}-1 $ are multiplicatively dependent. Thus there exist rational numbers $ x,y \in \mathbb{Q} $ such that \begin{equation*} \alpha^{n_2-n_3} - 1 = a^x \alpha^y \end{equation*} because $ \alpha $ and $ a $ are multiplicatively independent by assumption. Then the linear form $ \Lambda $ becomes \begin{equation*} \Lambda = (m_2 n_1 - m_1 n_3 - m_1 y) \log \alpha + (m_2 - m_1 - m_1 x) \log a. \end{equation*} Again using that $ \alpha $ and $ a $ are multiplicatively independent by assumption, $ \Lambda = 0 $ in particular implies \begin{equation*} m_2 - m_1 - m_1 x = 0 \end{equation*} which yields \begin{equation*} m_2 = m_1 (1+x) \end{equation*} and hence $ -1 < x < 0 $ since $ m_1 > m_2 \geq 1 $. But this was excluded in the theorem. Therefore we have $ \Lambda \neq 0 $. Now Proposition \ref{prop:Matveev} states that \begin{equation*} \log \abs{\Lambda} \geq -C_{62} (1 + \log (n_1^2)) \log n_1 \geq -C_{63} \log (n_1^2) \log n_1 \geq -C_{64} (\log n_1)^2. \end{equation*} Comparing this lower bound with the upper bound \eqref{eq:s6linbound} gives us \begin{align*} -C_{64} (\log n_1)^2 &\leq \log C_{58} + \log n_1 \\ &\hspace{0.5cm}+ \log \max \setb{\alpha^{n_2-n_1}, \left( \frac{\alpha}{\abs{\alpha_2}} \right)^{n_2-n_1}, b^{m_2 - m_1}, \left( \frac{\abs{\alpha_2}}{\alpha} \right)^{n_2}, b^{m_3-m_2}} \end{align*} which immediately implies \begin{equation} \label{eq:s6minbound} \min \setb{n_1-n_2, n_2, (m_1-m_2) \log b, (m_2-m_3) \log b} \leq C_{65} (\log n_1)^2. \end{equation} We have now to handle three cases. If the minimum in Inequality \eqref{eq:s6minbound} is either $ (m_1-m_2) \log b $ or $ (m_2-m_3) \log b $, then we get \begin{equation*} \log b \leq (m_i-m_{i+1}) \log b \leq C_{65} (\log n_1)^2 \end{equation*} for an $ i \in \set{1,2} $ and are done. 
If the minimum in Inequality \eqref{eq:s6minbound} is $ n_1-n_2 $, then we have \begin{equation*} n_1-n_2 \leq C_{65} (\log n_1)^2 \end{equation*} which yields, by using Lemma \ref{lemma:step4}, analogously to the end of the proof of Lemma \ref{lemma:step5} the bound \begin{equation*} \log b \leq (m_1-m_2) \log b \leq C_{66} (\log n_1)^2 \end{equation*} and we are done as well. So it remains to consider the case when the minimum in Inequality \eqref{eq:s6minbound} is $ n_2 $. This can be handled together with the still open case $ n_2 \leq C_{55} $ from above. Hence let us assume that $ n_2 \leq C_{67} (\log n_1)^2 $. By Lemma \ref{lemma:step4} we get \begin{equation*} C_{67} (\log n_1)^2 \geq n_2 \geq \frac{1}{\log \alpha} (m_2 \log b - C_{42}) \geq \frac{1}{\log \alpha} (\log b - C_{42}) \end{equation*} and thus \begin{equation*} \log b \leq C_{68} (\log n_1)^2 \end{equation*} which concludes the proof of the lemma. \end{proof} We have now reached our second milestone: the bound \eqref{eq:bbound} is proven. So if there are at least three distinct solutions $ (n_1,m_1) $, $ (n_2,m_2) $ and $ (n_3,m_3) $ with $ n_1 > n_2 > n_3 $ as considered in Theorem \ref{thm:mainthm}, then we have the two bounds \eqref{eq:logbound} and \eqref{eq:bbound}. Inserting \eqref{eq:bbound} into \eqref{eq:logbound} gives \begin{equation*} n_1 \leq C_{69} (\log n_1)^9 \end{equation*} and therefore the absolute bound \begin{equation*} n_1 \leq C_{70}. \end{equation*} Now we insert this into Inequality \eqref{eq:bbound} and get \begin{equation*} b \leq C_{71}. \end{equation*} This proves Theorem \ref{thm:mainthm}. \section{Proof of Theorem \ref{thm:Tribos}}\label{sec:Tribos}\label{sec:proofTribos} Recall that the Tribonacci sequence is given by $T_1=1, T_2=1, T_3=2$ and $T_{n}=T_{n-1}+T_{n-2}+T_{n-3}$ for $n\geq 4$. Then one can compute the roots $\alpha,\beta,\gamma$ of $f(X)=X^3-X^2-X-1$ and the coefficients $a,b,c$ such that \[ T_n= a\alpha^n + b\beta^n + c\gamma^n. \] (By a slight abuse of notation, the letters $b$ and $c$ are reused here for the two non-dominant Binet coefficients; they occur only in estimates involving the conjugates $\beta$ and $\gamma$, while everywhere else $b$ denotes the base of the powers $b^m$ and $c$ the common value of the differences $T_{n_i}-b^{m_i}$.) Let $\sigma$ be the Galois automorphism on the splitting field $K$ of $f$ that maps $\alpha \mapsto \beta$. It turns out that \[ a=\frac{1}{-\alpha^2 + 4\alpha -1}, \quad b=\sigma(a),\quad c=\sigma^2(a). \] We fix $\alpha,\beta, \gamma$ such that \begin{align*} \alpha &\approx 1.839, \quad & \beta &\approx -0.42 + 0.61 i, \quad & \gamma &\approx -0.42 - 0.61 i,\\ a &\approx 0.336, \quad & b &\approx -0.17 - 0.20 i, \quad & c &\approx -0.17 + 0.20i. \end{align*} Note that \[ |\beta|=|\gamma|\leq 0.74 \quad \text{and} \quad |b|=|c|\leq 0.26, \] so we can estimate \begin{equation}\label{eq:trib:Tn} T_n= a \alpha^n + L(0.52 \cdot 0.74^n). \end{equation} Here the $L$-notation means the following: For functions $f(n)$, $g(n)$ with $g(n)>0 $ for $n\geq 1$ we write \[ f(n) = L(g(n)) \quad \text{if} \quad |f(n)|\leq g(n). \] We check that all assumptions in Theorem \ref{thm:mainthm} are fulfilled: First, $\alpha$ is indeed an irrational dominant root larger than 1 and $a>0$. Second, we check that $a$ and $\alpha$ are multiplicatively independent. Assume that they are not; then there exist nonzero integers $x,y$ such that $a^x \alpha^y=1$. Since $|a|<1$ and $|\alpha|>1$, the integers $x$ and $y$ must have the same sign. However, if we apply $\sigma$, we get $b^x\beta^y=1$, but since $|b|<1$ and $|\beta|<1$ this is impossible if $x$ and $y$ have the same sign. Therefore, $a$ and $\alpha$ are multiplicatively independent. Finally, we check that there are no unwanted solutions to \eqref{eq:techcond}.
Assume that \[ \alpha^z - 1 = a^x \alpha^y \] with $ z \in \mathbb{N} $, $ x,y \in \mathbb{Q} $ and $ -1 < x < 0 $. Then we have for the norms \[ N_{K/\mathbb{Q}}(\alpha^z-1) = N_{K/\mathbb{Q}}(a^x \alpha^y) = N_{K/\mathbb{Q}}(a)^x N_{K/\mathbb{Q}}(\alpha)^y = (1/44)^x1^y =44^{-x}. \] Now since $\alpha$ and therefore $\alpha^z -1$ are algebraic integers, $N_{K/\mathbb{Q}}(\alpha^z-1)=44^{-x}$ has to be an integer. But this is impossible for a rational $x$ between $-1$ and $0$. Assume that we have three solutions $T_{n_1}-b^{m_1}=T_{n_2}-b^{m_2}=T_{n_3}-b^{m_3}=c$ with $n_1>n_2>n_3\geq 2$. We do some preliminary estimations. From \eqref{eq:trib:Tn} we have \begin{align}\label{eq:trib:eqationL} a \alpha^{n_1} - b^{m_1} - (a \alpha^{n_2} - b^{m_2})\nonumber &= L(0.52 \cdot 0.74^{n_1}) + L(0.52 \cdot 0.74^{n_2})\\ &= L(0.91 \cdot 0.74^{n_2}) \end{align} and analogously \begin{equation}\label{eq:trib:equationL2} a \alpha^{n_2} - b^{m_2} - (a \alpha^{n_3} - b^{m_3}) = L(0.91 \cdot 0.74^{n_3}). \end{equation} Moreover, we have \begin{multline*} b^{m_1} \nonumber \geq b^{m_1} - b^{m_2} = T_{n_1} - T_{n_2} = a \alpha^{n_1} - a \alpha^{n_2} + L(0.91 \cdot 0.74^{n_2}) \\ \geq a(1-\alpha^{-1})\alpha^{n_1} - 0.91 \cdot 0.74^{n_2} \geq 0.15 \alpha^{n_1} - 0.68. \end{multline*} Now note that on the one hand for $n_1 \geq 6$ we have $0.68/\alpha^{n_1} \leq 0.02$ and on the other hand $b^{m_1}\geq 2^2=4\geq 0.13 \alpha^{n_1}$ is trivially fulfilled for $n_1\leq 5$. Thus we have in any case \begin{equation}\label{eq:trib:balpha} b^{m_1}\geq 0.13 \alpha^{n_1} \quad \text{and analogously} \quad b^{m_2}\geq 0.13 \alpha^{n_2}. \end{equation} Estimating from the other side, we have \begin{align*} 0.5 \alpha^{n_1} &\geq a \alpha^{n_1} + L(0.52\cdot 0.74^{n_1}) = T_{n_1} \geq T_{n_1}-T_{n_2}\\ &= b^{m_1}-b^{m_2} \geq 0.5 b^{m_1} \end{align*} and the same argument works for the second and third solution. So we have \begin{equation}\label{eq:trib:alphab} \alpha^{n_1} \geq b^{m_1} \quad \text{and} \quad \alpha^{n_2} \geq b^{m_2}. \end{equation} In particular, \eqref{eq:trib:balpha} and \eqref{eq:trib:alphab} imply the bounds \begin{equation}\label{eq:trib:nlogalphamlogb} \begin{split} &m_1 \log b \leq n_1 \log \alpha \leq m_1 \log b + 2.1,\\ &m_2 \log b \leq n_2 \log \alpha \leq m_2 \log b + 2.1 \end{split} \end{equation} and in particular \[ n_1 \geq m_1. \] \step{step:smallSols}{Small solutions:} First, we check that there are no solutions with $n_1\leq 150$. To that end we simply search for all differences of two Tribonacci numbers that can be written in the form $T_{n_1}-T_{n_2} = b^{m_1}-b^{m_2}$ with $b\geq 2$ and $m_1>m_2\geq 1$. With the help of Sage \cite{sagemath} this is not difficult (see \nameref{sec:appendix} for the code): For each pair $2\leq n_2 < n_1 \leq 150$, we compute the prime factorisation of $T_{n_1}-T_{n_2}=p_1^{k_1}\cdots p_l^{k_l}$. Now we need to check if there exist $b\geq 2$ and $x,y\geq 1$ such that $p_1^{k_1}\cdots p_l^{k_l}=b^x(b^y-1)$. Since $\gcd(b^x,b^y-1)=1$, we can simply try $b=p_{i_1}^{k_{i_1}/d}\cdots p_{i_t}^{k_{i_t}/d}$ for each subset $\{p_{i_1},\ldots p_{i_t}\}\subset\{p_1,\ldots,p_l\}$ and $d=\gcd(k_{i_1},\ldots,k_{i_t})$, and check if $(T_{n_1}-T_{n_2})/b^d$ can be written in the form $(b^y -1)$. 
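To illustrate the check just described, here is a condensed, Sage-style sketch for a single difference (our simplification of the full script in the \nameref{sec:appendix}; the helper name \verb|has_power_representation| is ours):
\begin{lstlisting}[language=Python]
# Sketch (run inside Sage): decide whether diff = b^x * (b^y - 1) for some
# integers b >= 2 and x, y >= 1, following the factorisation idea above.
def has_power_representation(diff):
    for factors in list(Combinations(list(factor(diff))))[1:]:  # skip b = 1
        x = gcd([k for (p, k) in factors])         # exponent of the b^x part
        b = prod(p^(k/x) for (p, k) in factors)    # candidate base b
        y = round(log(1 + diff/(b^x))/log(b))      # candidate exponent y
        if y >= 1 and b^x * (b^y - 1) == diff:
            return (b, x, y)
    return None

# Example: T_12 - T_7 = 504 - 24 = 480 = 2^5 * (2^4 - 1),
# so has_power_representation(480) returns (2, 5, 4).
\end{lstlisting}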
It turns out that this only happens on 14 occasions: \begin{align*} T_{ 4 } - T_{ 3 } = 2 ^{ 1 } ( 2 ^{ 1 }-1), \quad c&= 0 = T_{ 4 } - 2 ^{ 2 } = T_{ 3 } - 2 ^{ 1 }; \\ T_{ 5 } - T_{ 2 } = 2 ^{ 1 } ( 2 ^{ 2 }-1), \quad c&= -1 = T_{ 5 } - 2 ^{ 3 } = T_{ 2 } - 2 ^{ 1 }; \\ T_{ 5 } - T_{ 2 } = 3 ^{ 1 } ( 3 ^{ 1 }-1), \quad c&= -2 = T_{ 5 } - 3 ^{ 2 } = T_{ 2 } - 3 ^{ 1 }; \\ T_{ 6 } - T_{ 2 } = 2 ^{ 2 } ( 2 ^{ 2 }-1), \quad c&= -3 = T_{ 6 } - 2 ^{ 4 } = T_{ 2 } - 2 ^{ 2 }; \\ T_{ 6 } - T_{ 5 } = 2 ^{ 1 } ( 2 ^{ 2 }-1), \quad c&= 5 = T_{ 6 } - 2 ^{ 3 } = T_{ 5 } - 2 ^{ 1 }; \\ T_{ 6 } - T_{ 5 } = 3 ^{ 1 } ( 3 ^{ 1 }-1), \quad c&= 4 = T_{ 6 } - 3 ^{ 2 } = T_{ 5 } - 3 ^{ 1 }; \\ T_{ 7 } - T_{ 4 } = 5 ^{ 1 } ( 5 ^{ 1 }-1), \quad c&= -1 = T_{ 7 } - 5 ^{ 2 } = T_{ 4 } - 5 ^{ 1 }; \\ T_{ 8 } - T_{ 3 } = 7 ^{ 1 } ( 7 ^{ 1 }-1), \quad c&= -5 = T_{ 8 } - 7 ^{ 2 } = T_{ 3 } - 7 ^{ 1 }; \\ T_{ 8 } - T_{ 7 } = 5 ^{ 1 } ( 5 ^{ 1 }-1), \quad c&= 19 = T_{ 8 } - 5 ^{ 2 } = T_{ 7 } - 5 ^{ 1 }; \\ T_{ 11 } - T_{ 3 } = 17 ^{ 1 } ( 17 ^{ 1 }-1), \quad c&= -15 = T_{ 11 } - 17 ^{ 2 } = T_{ 3 } - 17 ^{ 1 }; \\ T_{ 12 } - T_{ 4 } = 5 ^{ 3 } ( 5 ^{ 1 }-1), \quad c&= -121 = T_{ 12 } - 5 ^{ 4 } = T_{ 4 } - 5 ^{ 3 }; \\ T_{ 12 } - T_{ 7 } = 2 ^{ 5 } ( 2 ^{ 4 }-1), \quad c&= -8 = T_{ 12 } - 2 ^{ 9 } = T_{ 7 } - 2 ^{ 5 }; \\ T_{ 15 } - T_{ 11 } = 54 ^{ 1 } ( 54 ^{ 1 }-1), \quad c&= 220 = T_{ 15 } - 54 ^{ 2 } = T_{ 11 } - 54 ^{ 1 }; \\ T_{ 23 } - T_{ 12 } = 641 ^{ 1 } ( 641 ^{ 1 }-1), \quad c&= -137 = T_{ 23 } - 641 ^{ 2 } = T_{ 12 } - 641 ^{ 1 }. \end{align*} The only $c$ that appears twice is $c=-1$, but on the two occasions the $b$'s are distinct ($2$ and $5$). So there is no $c$ with three solutions with $n_1\leq 150$. The computations only took a couple of minutes on a usual laptop. Let us from now on assume that \[ n_1>150. \] Finally, a little remark on notation: The constants in this section will be numbered starting with $C_{100}$ to set them apart from the previous constants. \begin{mystep}\label{step:Step1} From \eqref{eq:trib:eqationL} we obtain \[ |a \alpha^{n_1} - b^{m_1}| = |a \alpha^{n_2} - b^{m_2} + L(0.91 \cdot 0.74^{n_2})| \leq \max( a \alpha^{n_2}, b^{m_2} ) \leq \alpha^{n_2}, \] where for the first inequality we used the fact that $a\alpha^{n_2}$ and $b^{m_2}$ are both positive and larger than the $L$-terms and for the second estimation we used $a<1$ and \eqref{eq:trib:alphab}. Dividing by $b^{m_1} \geq 0.13 \alpha^{n_1}$ (see \eqref{eq:trib:balpha}) we obtain \begin{equation}\label{eq:trib:Lambda1} |\Lambda_1| := \left| \frac{a\alpha^{n_1}}{b^{m_1}}-1 \right| \leq \frac{\alpha^{n_2}}{0.13 \alpha^{n_1}} \leq 7.7 \alpha^{-(n_1-n_2)}. \end{equation} We check that $\Lambda_1 \neq 0$: If $\Lambda_1=0$, then $a \alpha^{n_1} =b^{m_1} \in \mathbb{Z}$, so an application of the Galois automorphism $\sigma$ does not change $a \alpha^{n_1}$, i.e.\ we get $a \alpha^{n_1} = \sigma (a \alpha^{n_1}) =b \beta^{n_1}$. Taking absolute values and estimating we get $0.33 \cdot 1.83^{n_1} \leq |a \alpha^{n_1}| = |b \beta^{n_1}|\leq 0.26 \cdot 0.74^{n_1}$, which is impossible. Now we apply Proposition~\ref{prop:Matveev} with $t=3$, $K=\mathbb{Q}(a,\alpha)=\mathbb{Q}(\alpha)$, $D=3=[K:\mathbb{Q}]$ and \begin{align*} \eta_1 &= \alpha, \quad & \eta_2 &= b, \quad & \eta_3 &=a, \\ b_1 &= n_1, \quad & b_2 &= -m_1, \quad & b_3 &= 1. 
\end{align*} We put \begin{align*} A_1&=0.7\geq \log \alpha = \max(D h(\alpha),\log \alpha,0.16),\\ A_2&=3 \log b= \max(D h(b),\abs{\log b},0.16),\\ A_3&= 3.8 \geq \log 44 =3 h(a)= \max(D h(a),\abs{\log a},0.16),\\ B&=n_1. \end{align*} Thus we get that \[ \log|\Lambda_1| \geq - C_{100} \cdot \log n_1 \cdot \log b, \] where \[ C_{100} = 2.6 \cdot 10^{13} > 1.4 \cdot 30^6 \cdot 3^{4.5} \cdot 3^2 (1+ \log 3) \cdot 1.2 \cdot 0.7 \cdot 3 \cdot 3.8. \] Note that the factor $1.2$ comes from the estimate $1+\log B = 1+\log n_1 \leq 1.2 \log n_1$, since $n_1>150$. Together with \eqref{eq:trib:Lambda1} this implies \[ -C_{100}\cdot \log n_1 \cdot \log b \leq \log 7.7 - (n_1-n_2)\log \alpha, \] which yields \begin{equation}\label{eq:trib:boundn1-n2} (n_1-n_2)\log \alpha \leq C_{100}\cdot \log n_1 \cdot \log b. \end{equation} Here we omitted the term $\log 7.7$. We can do this because we estimated quite generously when computing the constant $C_{100}=2.6\cdot 10^{13}$. \end{mystep} \begin{mystep}{}\label{step:Step2} From \eqref{eq:trib:eqationL} we obtain \[ \left| (a \alpha^{n_1} - a \alpha^{n_2}) - (b^{m_1} - b^{m_2}) \right| = L(0.91 \cdot 0.74^{n_2}) \leq 0.5. \] Dividing by $b^{m_1} - b^{m_2} \geq 0.5 b^{m_1}$ we obtain \begin{equation}\label{eq:trib:Lambda2} |\Lambda_2| := \left| \frac{a \alpha^{n_2}(\alpha^{n_1-n_2}-1)}{b^{m_2} (b^{m_1-m_2}-1)}-1\right| \leq b^{-m_1}. \end{equation} We check that $\Lambda_2 \neq 0$: If $\Lambda_{2}=0$, then $a (\alpha^{n_1}-\alpha^{n_2}) \in \mathbb{Z}$, so we must have $a (\alpha^{n_1}-\alpha^{n_2})= \sigma (a (\alpha^{n_1}-\alpha^{n_2})) = b(\beta^{n_1}-\beta^{n_2})$. Taking absolute values and estimating we get \begin{align*} 0.15 \cdot 1.83^{n_1} &\leq 0.33 \alpha^{n_1} (1-\alpha^{-1}) \leq \left| a (\alpha^{n_1}-\alpha^{n_2}) \right| = \left| b (\beta^{n_1}-\beta^{n_2}) \right|\\ &\leq 0.26 (|\beta|^{n_1} + |\beta|^{n_2}) \leq 0.26 (0.74^{n_1} + 0.74^{n_2}), \end{align*} which is impossible for $n_1>150$. Now we apply Proposition~\ref{prop:Matveev} just like in Step \ref{step:Step1}, except that now $b_1=n_2$, $b_2=-m_2$ and $\eta_3=\frac{a(\alpha^{n_1-n_2}-1)}{b^{m_1-m_2}-1}$. In order to find an $A_3$ we estimate the height of $\eta_3$: \begin{align*} h\left( \frac{a(\alpha^{n_1-n_2}-1)}{b^{m_1-m_2}-1} \right) &\leq h(a) + h(\alpha^{n_1-n_2}-1) + h(b^{m_1-m_2}-1)\\ &\leq \frac{1}{3} \log 44 + (n_1-n_2)h(\alpha) + \log 2 + \log (b^{m_1-m_2}-1)\\ &\leq 1.96 + (n_1-n_2) \frac{\log \alpha}{3} + (m_1-m_2)\log b\\ &\leq 1.96 + (n_1-n_2) \frac{\log \alpha}{3} + (n_1-n_2)\log \alpha + 2.1\\ &\leq 4.9 (n_1-n_2), \end{align*} where we used \eqref{eq:trib:nlogalphamlogb} to estimate $(m_1-m_2)\log b \leq (n_1-n_2)\log \alpha + 2.1$. Thus we can set $A_3= 14.7 (n_1-n_2) \geq \max(D h(\eta_3), \abs{\log \eta_3}, 0.16)$ and we obtain analogously to the application of Proposition~\ref{prop:Matveev} in Step~\ref{step:Step1} \[ \log|\Lambda_2| \geq - C_{101} \cdot \log n_1 \cdot \log b \cdot (n_1-n_2), \] where \[ C_{101} = 1.1 \cdot 10^{14} > 1.4 \cdot 30^6 \cdot 3^{4.5} \cdot 3^2 (1+ \log 3) \cdot 1.2 \cdot 0.7 \cdot 3 \cdot 14.7. \] Together with \eqref{eq:trib:Lambda2} this implies \[ m_1 \log b \leq C_{101} \cdot \log n_1 \cdot \log b \cdot (n_1-n_2). 
\] Using \eqref{eq:trib:nlogalphamlogb} and \eqref{eq:trib:boundn1-n2} from Step~\ref{step:Step1}, we obtain \begin{multline*} n_1 \log \alpha \leq m_1 \log b + 2.1 \leq C_{101} \cdot \log n_1 \cdot \log b \cdot (n_1-n_2)\\ \leq C_{101} \cdot \log n_1 \cdot \log b \cdot (\log \alpha)^{-1} \cdot C_{100}\cdot \log n_1 \cdot \log b, \end{multline*} where we omitted the constant $ 2.1 $ because $C_{101}$ was estimated roughly. Thus, we end up with \begin{equation}\label{eq:trib:boundStep2} n_1 \leq C_{102} \cdot (\log n_1 \cdot \log b)^2, \end{equation} where \[ C_{102} = 7.8 \cdot 10^{27} > C_{100}\cdot C_{101} \cdot (\log \alpha)^{-2 }. \] \end{mystep} \begin{mystep}\label{step:Step3} Recall the bound \eqref{eq:trib:Lambda1}: \[ |\Lambda_1| =\left| \frac{a\alpha^{n_1}}{b^{m_1}}-1 \right| \leq 7.7 \alpha^{-(n_1-n_2)}. \] Assume for a moment that $|\Lambda_1|\geq 0.5$. Then $n_1-n_2 \leq 4$, we get that $\log b \leq m_1 \log b - m_2 \log b \leq (n_1-n_2)\log \alpha + 2.1 \leq 4.6$ and we can immediately skip to Step~\ref{step:Step6}. Therefore, we may assume that $|\Lambda_1|< 0.5$. Since $\abs{\log x} \leq 2 |x-1|$ for $|x-1|<0.5$, we obtain \[ |\Lambda_1'| := \abs{\log a + n_1 \log \alpha - m_1 \log b} \leq 15.4 \alpha^{-(n_1-n_2)}. \] Now we consider the second and the third solution and obtain from \eqref{eq:trib:equationL2} \[ |a \alpha^{n_2}-b^{m_2}| =|a\alpha^{n_3}-b^{m_3}+L(0.91\cdot 0.74^{n_3})| \leq \max(a \alpha^{n_3}, b^{m_3}). \] Dividing by $b^{m_2} \geq 0.13 \alpha^{n_2}$ (by \eqref{eq:trib:balpha}) we obtain \[ |\Lambda_{12}| :=\left| \frac{a\alpha^{n_2}}{b^{m_2}}-1 \right| \leq \max\left( \frac{a \alpha^{n_3}}{0.13 \alpha^{n_2}}, \frac{b^{m_3}}{b^{m_2}}\right) \leq \max(2.6 \alpha^{-(n_2-n_3)}, b^{-(m_2-m_3)}). \] Assume for a moment that $|\Lambda_{12}|\geq 0.5$. Then we either get $n_2-n_3\leq 2$ or $b=2$. In the first case we can immediately skip to the next step. In the second case we can immediately skip to Step~\ref{step:Step6}. Thus we may assume $|\Lambda_{12}|<0.5$ and we get that \[ |\Lambda_{12}'| := \abs{\log a + n_2 \log \alpha - m_2 \log b} \leq \max(5.2 \alpha^{-(n_2-n_3)}, 2b^{-(m_2-m_3)}). \] Thus for the linear form $\Lambda_3':= m_2 \Lambda_1' - m_1 \Lambda_{12}'$ we get the upper bound \begin{align}\label{eq:trib:Lambda3} |\Lambda_{3}'| &=|(n_1m_2 - n_2m_1)\log \alpha + (m_2-m_1)\log a| \nonumber\\ &\leq m_2 \cdot 15.4 \alpha^{-(n_1-n_2)} + m_1 \cdot \max(5.2 \alpha^{-(n_2-n_3)}, 2b^{-(m_2-m_3)}) \nonumber\\ &\leq 20.6 n_1 \max( \alpha^{-(n_1-n_2)}, \alpha^{-(n_2-n_3)},b^{-(m_2-m_3)} ). \end{align} Now we have a linear form in only two logarithms, so we can use Laurent's bound instead of Matveev's. We set \begin{align*} \eta_1 &= \alpha, \quad b_1 = n_1m_2 - n_2 m_1,\\ \eta_2 &= a, \quad b_2 = m_2-m_1,\\ D &= 3, \\ \log A_1 &= 1 = \max(3h(\alpha),\log \alpha,1),\\ \log A_2 &= 3.8 \geq \log 44 = 3h(a) = \max(Dh(a),\abs{\log a}, 1),\\ b' &= \frac{|n_1 m_2 - n_2 m_1|}{3.8} + \frac{|m_2-m_1|}{1} \leq {n_1^2}/{3.7}. \end{align*} We estimate the factor \[ \max( \log b' + 0.38, 18/D, 1) \leq \max (2 \log n_1 - \log 3.7 + 0.38, 6, 1) \leq 2 \log n_1. \] In order to apply Proposition \ref{prop:Laurent}, we need to check if $a$ and $\alpha$ are multiplicatively independent, and if $b_1$ and $b_2$ are nonzero. We have already checked at the beginning of this section that $a$ and $\alpha$ are multiplicatively independent and we are assuming that $n_1>n_2$, which implies $m_1>m_2$, so $b_2=m_2-m_1$ is nonzero. Assume for a moment that $b_1=0$. 
Then we have \begin{align*} 1 <\abs{\log a} \leq |b_2 \log a| =|\Lambda_3'|, \end{align*} so \[ \log |\Lambda_3'| >0, \] which is much better than what we will obtain from the application of Proposition~\ref{prop:Laurent}. So let us now apply Proposition \ref{prop:Laurent}. We obtain \[ \log |\Lambda_{3}'| \geq -C_{103} (\log n_1)^2, \] where \[ C_{103} = 2778 > 20.3 \cdot 3^2 \cdot 2^2 \cdot 1 \cdot 3.8. \] Together with \eqref{eq:trib:Lambda3} this implies \begin{multline*} - 2778 \cdot (\log n_1)^2 \\ \leq \log 20.6 + \log n_1 - \min((n_1-n_2)\log \alpha, (n_2-n_3)\log \alpha, (m_2-m_3)\log b) \end{multline*} and we get \[ \min((n_1-n_2)\log \alpha, (n_2-n_3)\log \alpha, (m_2-m_3)\log b) \leq 2780 \cdot (\log n_1)^2. \] If the minimum is realised by $(n_1-n_2)\log \alpha$, then we can immediately skip to Step~\ref{step:Step5}. If it is realised by $(n_2-n_3)\log \alpha$, then we have \begin{equation}\label{eq:trib:boundn2-n3} n_2-n_3 \leq 4563 \cdot (\log n_1)^2 \end{equation} and go to the next step. If it is realised by $(m_2-m_3)\log b$, we get that $\log b \leq (m_2-m_3)\log b \leq 2780 \cdot (\log n_1)^2$ and we can skip to Step~\ref{step:Step6}. \end{mystep} \begin{mystep}\label{step:Step4} First, recall from the previous step the bound \[ |\Lambda_1'| = \abs{\log a + n_1 \log \alpha - m_1 \log b} \leq 15.4 \alpha^{-(n_1-n_2)}. \] Second, we generate a new linear form in logarithms by starting from \eqref{eq:trib:equationL2}: \[ |a \alpha^{n_3} (\alpha^{n_2-n_3} - 1) - b^{m_2}| = |b^{m_3} + L(0.91\cdot 0.74^{n_3})| \leq b^{m_3} + 0.7. \] Dividing by $b^{m_2}$ we obtain \[ |\Lambda_{41}| :=\abs{\frac{a \alpha^{n_3} (\alpha^{n_2-n_3} - 1)}{b^{m_2}} - 1} \leq b^{-(m_2-m_3)} + 0.7 b^{-m_2} \leq 1.7 b^{-(m_2-m_3)}. \] If $|\Lambda_{41}|\geq 0.5$, then we immediately get $b\leq 3$ and we can skip to Step~\ref{step:Step6}. Let us assume $|\Lambda_{41}|< 0.5$. Then we obtain \[ |\Lambda_{41}'| := \abs{\log a + n_3 \log \alpha + \log (\alpha^{n_2-n_3}-1) - m_2 \log b} \leq 3.4 b^{-(m_2-m_3)}. \] Hence we have for the linear form $\Lambda_4':= m_2 \Lambda_1' -m_1 \Lambda_{41}'$ the upper bound \begin{align}\label{eq:trib:Lambda4} |\Lambda_4'| &= |(m_2-m_1)\log a + (m_2n_1 - m_1n_3) \log \alpha - m_1 \log(\alpha^{n_2-n_3}-1)| \nonumber\\ &\leq m_2 \cdot 15.4 \alpha^{-(n_1-n_2)} + m_1 \cdot 3.4 b^{-(m_2-m_3)} \nonumber\\ &\leq 18.8 \cdot n_1 \max( \alpha^{-(n_1-n_2)}, b^{-(m_2-m_3)}). \end{align} Now $\Lambda_4'\neq 0$ because the technical condition involving Equation \eqref{eq:techcond} is fulfilled, and we can apply Proposition \ref{prop:Matveev}. We set $t=3$, $K=\mathbb{Q}(a,\alpha)=\mathbb{Q}(\alpha)$, $D=3=[K:\mathbb{Q}]$, as well as \begin{align*} \eta_1&=a, \quad & \eta_2&=\alpha, \quad & \eta_3&=\alpha^{n_2-n_3}-1,\\ b_1 &=m_2-m_1, \quad & b_2 &=m_2n_1 - m_1n_3, \quad & b_3 &= - m_1. \end{align*} Further, we can put \begin{align*} A_1&=3.8 \geq \log 44= \max(Dh(a),\abs{\log a}, 0.16),\\ A_2&=0.7 \geq \log \alpha =\max(Dh(\alpha),\log \alpha, 0.16),\\ A_3&= 2.7 (n_2-n_3) \geq 3 ( (n_2-n_3)h(\alpha)+ \log 2) \geq \max(Dh(\eta_3),\abs{\log\eta_3}, 0.16),\\ B&= n_1^2. \end{align*} Thus we get the lower bound \[ \log |\Lambda_4'| \geq - C_{104} (n_2-n_3) \log n_1, \] where \[ C_{104} = 4.3 \cdot 10^{13} > 1.4 \cdot 30^6 \cdot 3^{4.5} \cdot 3^2 \cdot (1+ \log 3) \cdot 2.2 \cdot 3.8 \cdot 0.7 \cdot 2.7. \] Note that we used the estimate $1+ \log B = 1+ \log (n_1^2) \leq 2.2 \log n_1$ for $n_1 > 150$. 
Together with \eqref{eq:trib:Lambda4} this implies \[ - C_{104} (n_2-n_3) \log n_1 \leq \log 18.8 + \log n_1 - \min ((n_1-n_2) \log \alpha, (m_2-m_3)\log b), \] from which we get, omitting small terms because $C_{104}$ was roughly estimated, \[ \min ((n_1-n_2) \log \alpha, (m_2-m_3)\log b) \leq C_{104} (n_2-n_3) \log n_1. \] If the minimum is realised by $(n_1-n_2) \log \alpha$, then using \eqref{eq:trib:boundn2-n3} from the previous step we obtain \begin{align}\label{eq:trib:boundStep5} n_1-n_2 &\leq (\log \alpha)^{-1} \cdot C_{104} (n_2-n_3) \log n_1 \nonumber\\ & \leq (\log \alpha)^{-1} \cdot C_{104} \cdot \log n_1 \cdot 4563 \cdot (\log n_1)^2 \nonumber\\ & \leq C_{105} (\log n_1)^3, \end{align} with \[ C_{105} = 3.3\cdot 10^{17} > (\log \alpha)^{-1} \cdot C_{104} \cdot 4563. \] If the minimum is realised by $(m_2-m_3)\log b$, then in a similar way we obtain \begin{align*} \log b \leq (m_2-m_3)\log b \leq C_{104} \cdot \log n_1 \cdot 4563 \cdot (\log n_1)^2 \leq 2 \cdot 10^{17} \cdot (\log n_1)^3 \end{align*} and we can skip to Step \ref{step:Step6}. \end{mystep} \begin{mystep}\label{step:Step5} We now use \eqref{eq:trib:nlogalphamlogb} and \eqref{eq:trib:boundStep5} to obtain a bound for $\log b$: \begin{multline*} \log b \leq (m_1-m_2)\log b = m_1 \log b- m_2 \log b\\ \leq n_1 \log \alpha - n_2 \log \alpha + 2.1 = (n_1-n_2)\log \alpha + 2.1 \leq C_{105} (\log n_1)^3 \log \alpha, \end{multline*} where we omitted the constant $ 2.1 $ because $C_{105}$ came from a rough estimation. Thus we have \begin{equation}\label{eq:trib:boundStep6} \log b \leq C_{106} (\log n_1)^3, \end{equation} with \[ C_{106} = 2.1\cdot 10^{17} > C_{105} \log \alpha. \] \end{mystep} \begin{mystep}\label{step:Step6} Finally, we combine \eqref{eq:trib:boundStep2} from Step \ref{step:Step2} with \eqref{eq:trib:boundStep6}: \begin{align*} n_1 \leq C_{102}\cdot (\log n_1 \cdot \log b)^2 &\leq C_{102}\cdot \left(\log n_1 \cdot C_{106} (\log n_1)^3\right)^2\\ &\leq C_{107} \cdot (\log n_1)^8, \end{align*} with \[ C_{107} = 3.5 \cdot 10^{62} > C_{102} \cdot C_{106}^2 = 7.8 \cdot 10^{27} \cdot \left(2.1 \cdot 10^{17}\right)^2. \] Solving the inequality $n_1 \leq 3.5 \cdot 10^{62} (\log n_1)^8$ numerically yields \[ n_1 \leq 5 \cdot 10^{80}. \] \end{mystep} We have finally found an explicit upper bound for $n_1$. Next we want to reduce this bound. Since the bound for $b$ is extremely large, we cannot use the linear forms from Steps \ref{step:Step1} and \ref{step:Step2} for the reduction process. Instead, we will do four Reduction Steps A--D corresponding to the Steps 3--6 and reduce the bounds as far as we can. \begin{redstep}[Step \ref{step:Step3}]\label{step:RedstepA} Recall from \eqref{eq:trib:Lambda3} that \begin{multline*} |\Lambda_3'| =|(n_1m_2 - n_2m_1)\log \alpha - (m_1-m_2)\log a|\\ \leq 20.6 n_1 \max( \alpha^{-(n_1-n_2)}, \alpha^{-(n_2-n_3)},b^{-(m_2-m_3)} ). \end{multline*} Note that $m_1-m_2 < m_1 \leq n_1 \leq 5 \cdot 10^{80}$. We compute the continued fraction expansion of $\log a/ \log \alpha$ and find the first convergent $p/q$ such that $q\geq 5 \cdot 10^{80}$. Then by the best approximation property of continued fractions it turns out that \[ 4.4 \cdot 10^{-82} \leq |p \log \alpha - q \log a| \leq |\Lambda_3'|. \] This implies \[ \max( \alpha^{-(n_1-n_2)}, \alpha^{-(n_2-n_3)},b^{-(m_2-m_3)} ) \geq 4.4 \cdot 10^{-82} \cdot \frac{1}{20.6 n_1} \geq 4.2 \cdot 10^{-164}. \] \textit{Case 1:} $\max( \alpha^{-(n_1-n_2)}, \alpha^{-(n_2-n_3)},b^{-(m_2-m_3)} )=\alpha^{-(n_1-n_2)}$. 
Then we get that \[ n_1 - n_2 \leq - \log (4.2 \cdot 10^{-164}) /\log \alpha < 618 \] and we can skip to Reduction Step \ref{step:RedstepC}. \textit{Case 2:} $\max( \alpha^{-(n_1-n_2)}, \alpha^{-(n_2-n_3)},b^{-(m_2-m_3)} )=\alpha^{-(n_2-n_3)}$. Then we get \[ n_2 - n_3 < 618 \] and go to the next step. \textit{Case 3:} $\max( \alpha^{-(n_1-n_2)}, \alpha^{-(n_2-n_3)},b^{-(m_2-m_3)} )=b^{-(m_2-m_3)}$. Then we get \[ \log b \leq (m_2-m_3)\log b \leq -\log(4.2 \cdot 10^{-164}) \leq 377 \] and we can skip to Reduction Step \ref{step:RedstepD}. \end{redstep} \begin{redstep}[Step \ref{step:Step4}]\label{step:RedstepB} Recall from \eqref{eq:trib:Lambda4} that \begin{multline*} |\Lambda_4'| = |(m_2-m_1)\log a + (m_2n_1 - m_1n_3) \log \alpha - m_1 \log(\alpha^{n_2-n_3}-1)|\\ \leq 18.8 n_1 \max( \alpha^{-(n_1-n_2)}, b^{-(m_2-m_3)}) \end{multline*} and all coefficients are bounded by $n_1^2\leq (5 \cdot 10^{80})^2\leq 2.5\cdot 10^{161}=:M$. Now for each $n_2-n_3\in \{1,2,\ldots, 617\}$ we apply the LLL-algorithm to find an absolute lower bound for $|\Lambda_4'|$ as described in Lemma \ref{lem:LLL}. To obtain the matrices $B$ and $B^*$ we use the matrix attributes \verb|LLL()| and \verb|gram_schmidt()| in Sage \cite{sagemath}. In each case, we try $C\approx M^3$ and if the algorithm fails (i.e.\ $c^2\leq T^2+S$), we increase $C$ by a factor of 10. Indeed, in each of the cases the LLL reduction works after at most three tries, so we obtain a lower bound for $|\Lambda_4'|$ in every case. As an overall lower bound we obtain \[ 3.7 \cdot 10^{-326} \leq |\Lambda_4'|. \] This implies \[ \max( \alpha^{-(n_1-n_2)}, b^{-(m_2-m_3)}) \geq 3.7 \cdot 10^{-326} \cdot \frac{1}{18.8 n_1} \geq 3.9 \cdot 10^{-408}. \] \textit{Case 1:} $\max(\alpha^{-(n_1-n_2)}, b^{-(m_2-m_3)}) = \alpha^{-(n_1-n_2)}$: Then \[ n_1-n_2 \leq -\log (3.9 \cdot 10^{-408})/\log \alpha < 1540 \] and we go to the next step. \textit{Case 2:} $\max(\alpha^{-(n_1-n_2)}, b^{-(m_2-m_3)}) = b^{-(m_2-m_3)}$: Then \[ \log b \leq (m_2-m_3)\log b \leq - \log (3.9 \cdot 10^{-408}) \leq 939 \] and we can skip to Reduction Step \ref{step:RedstepD}. \end{redstep} \begin{redstep}[Step \ref{step:Step5}]\label{step:RedstepC} Now we can compute a small bound for $\log b$ like in Step \ref{step:Step5}: \[ \log b \leq (n_1-n_2)\log \alpha + 2.1 \leq 1539 \cdot \log \alpha + 2.1 \leq 940. \] \end{redstep} \begin{redstep}[Step \ref{step:Step6}]\label{step:RedstepD} From \eqref{eq:trib:boundStep2} we get \[ n_1 \leq 7.8 \cdot 10^{27} (\log b)^2 (\log n_1)^2 \leq 7.8 \cdot 10^{27} \cdot 940^2 (\log n_1)^2 \leq 6.9 \cdot 10^{33} \cdot (\log n_1)^2. \] Solving this inequality numerically, we obtain \[ n_1 \leq 5.3\cdot 10^{37}. \] \end{redstep} \step{step:repeating}{Repeating the reduction steps:} With this smaller bound for $n_1$ we can now repeat the Reduction Steps A--D (see Table \ref{table:redSteps}). The Sage code is included in the \nameref{sec:appendix}.
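As an aside, let us indicate how implicit inequalities of the shape $n_1 \leq K (\log n_1)^k$, such as the one in Step \ref{step:Step6} or in Reduction Step \ref{step:RedstepD}, are solved numerically. The following short Sage-style sketch is our illustration and not part of the original computation; it uses the same \verb|find_root| routine as the code in the \nameref{sec:appendix} and treats the inequality from Step \ref{step:Step6}:
\begin{lstlisting}[language=Python]
# Sketch: n_1 <= K*(log n_1)^k forces n_1 to be at most the largest root of
# f(n) = n - K*(log n)^k; here f changes sign between n = K and n = K^2.
K = 3.5*10^62; k = 8                       # the inequality from Step 6
bound = find_root(x - K*log(x)^k, K, K^2)  # largest root of f
print(bound)                               # roughly 5*10^80, as stated above
\end{lstlisting}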
\begin{table}[h] \caption{Repeating the reduction steps}\label{table:redSteps} \begin{tabular}{rcccc} \hline \multicolumn{1}{l}{} & \textbf{1\ts{st} round} & \textbf{2\ts{nd} round} & \textbf{3\ts{rd} round} & \textbf{4\ts{th} round} \\ \hline $n_1\leq \ldots$ & $5 \cdot 10^{80}$ & $5.3\cdot 10^{37}$ & $1.2\cdot 10^{37}$ & $1.1\cdot 10^{37}$ \\ \hline \multicolumn{1}{l}{\textbf{Step A}} & & & & \\ $n_1-n_2\leq \ldots$ & 617 & 292 & 288 & 288 \\ $n_2-n_3\leq \ldots$ & 617 & 292 & 288 & 288 \\ $\log b \leq \ldots$ & 377 & 179 & 176 & 176 \\ \hline \multicolumn{1}{l}{\textbf{Step B}} & & & & \\ $n_1-n_2\leq \ldots$ & 1539 & 729 & 719 & 715 \\ $\log b \leq \ldots$ & 939 & 445 & 439 & 437 \\ \hline \multicolumn{1}{l}{\textbf{Step C}} & & & & \\ $\log b \leq \ldots$ & 940 & 447 & 441 & 438 \\ \hline \multicolumn{1}{l}{\textbf{Step D}} & & & & \\ $n_1\leq \ldots$ & $5.3\cdot 10^{37}$ & $1.2\cdot 10^{37}$ & $1.1\cdot 10^{37}$ & $1.1\cdot 10^{37}$ \\ \hline \end{tabular} \end{table} After that, we are not able to reduce the bound significantly any further. From the bounds in the table one can see that we have proven the bounds from Theorem~\ref{thm:Tribos}, i.e. overall we have proven that under the assumptions of Theorem~\ref{thm:Tribos} we have \[ \log b \leq 438 \quad \text{and} \quad 150<n_1\leq 1.1\cdot 10^{37}. \] \section*{Appendix}\label{sec:appendix} Below we have enclosed the Sage code that was used to determine the \hyperlink{step:smallSols}{small solutions} to $T_{n_1}-T_{n_2}=b^{m_1}-b^{m_2}$ in Section \ref{sec:Tribos}, as well as the code that was used to determine the bounds for the reduction rounds in Table \ref{table:redSteps}. \begin{lstlisting}[language=Python] # small solutions # start computing Tribonacci numbers T_n1 t1_veryold = 0; t1_old = 0; t1 = 1 # starting values n1 = 1 while n1 < 150: # compute next Tribonacci number T_n1 temp = t1 + t1_old + t1_veryold t1_veryold = t1_old t1_old = t1 t1 = temp n1 = n1 + 1 # start computing Tribonacci numbers T_n2 < T_n1 t2_veryold = 0; t2_old = 0; t2 = 1 # starting values n2 = 1 while t2 < t1_old: # because we increase at the beginning # compute next Tribonacci number T_n2 temp = t2 + t2_old + t2_veryold t2_veryold = t2_old t2_old = t2 t2 = temp n2 = n2 + 1 # check representation diff = t1 - t2 primefactorisation = list(factor(diff)) factors_for_b = list(Combinations(primefactorisation)) del factors_for_b[0] # exclude b = 1 for factors in factors_for_b: x = gcd([k for [p,k] in factors]) b = prod(p^(k/x) for [p,k] in factors) y = round(log(1 + diff/(b^x))/log(b)) if b^x * (b^y - 1) == diff: m2 = x m1 = x + y c = t2 - b^x print("T_{",n1,"} - T_{",n2,"} =",b,"^{",m2,"} (",b,"^{",y,"}-1), \\quad c&=", c, "= T_{",n1,"} -",b,"^{",m1,"} = T_{",n2,"} -",b,"^{",m2,"}; \\\\") \end{lstlisting} \begin{lstlisting}[language=Python] # reduction steps alpha = n(solve(x^3 - x^2 - x - 1 == 0, x, solution_dict=True)[2][x], digits=2000) a = 1/(-alpha^2 + 4*alpha - 1) ################################################################# print("Step A:") bound_n1 = 5.3*10^37 # replace for each round c = continued_fraction(log(a)/log(alpha)) i = 1 while c.denominator(i) < bound_n1: i = i + 1 p = c.numerator(i) q = c.denominator(i) lowerbound_linform = abs(p*log(alpha) - q*log(a)) lowerbound = lowerbound_linform/(20.6*bound_n1) n2n3max = floor(-log(lowerbound)/log(alpha)) print("bound n_1 - n_2 and n_2 - n_3:", n2n3max) print("bound log(b):", -log(lowerbound)) ################################################################# print("Step B:") M = bound_n1^2 C0 = 
10^int(3*log(M)/log(10)) # approx. M^3 but with full precision lowerBound = 1 for n2n3 in range(1, n2n3max+1): # loop in order to find a C that works C = C0 Cmax = C0*10^30 done = False while not done: C = C*10 A = Matrix([[1, 0, round(C*log(a))], [0, 1, round(C*log(alpha))], [0, 0, round(C*log(alpha^n2n3 - 1))]]) B = A.LLL() Bstar, mu = B.gram_schmidt() c = min([norm(N(b)) for b in Bstar]) S = 2*M^2 T = (1 + 3*M)/2 if c^2 > T^2 + S: lowerBound = min(lowerBound, 1/C * (sqrt(c^2 - S) - T)) done = True elif C == Cmax: print('did not work for n2n3 =', n2n3) break lowerBound2 = lowerBound/(18.8*bound_n1) n1n2max = floor(-log(lowerBound2)/log(alpha)) print("bound n_1 - n_2:", n1n2max) print("bound log b:", -log(lowerBound2)) ################################################################# print("Step C:") logbmax = n1n2max*log(alpha) + 2.1 print("bound log b:", logbmax.n()) ################################################################# print("Step D:") logbmax = max(logbmax, -log(lowerBound2)) newbound_n1 = find_root(x - 7.8*10^27*logbmax^2 * log(x)^2, 7.8*10^27*logbmax^2, (7.8*10^27*logbmax^2)^2) print("new bound for n_1:", newbound_n1) \end{lstlisting} \end{document}
math
\begin{document} \newcommand{\Addresses}{{ \footnotesize Francisco Franco Munoz, \textsc{Department of Mathematics, University of Washington, Seattle, WA 98195}\par\nopagebreak \textit{E-mail address}: \texttt{[email protected]} }} \title{On the counting function for numerical monoids} \abstract{We provide explicit expressions for the counting function and main terms for numerical monoids.} \section{Introduction} The study of numerical monoids is a huge area of research (see \cite{RS} and references therein). A numerical monoid $M$ is a subset of $\mathbb{N}$ containing $0$ that is closed under addition and has finite complement. In other words, it is a submonoid of $(\mathbb{N},+)$ with finite complement. \noindent The aim of this paper is to calculate in an elementary way the main term of the counting function of a monoid (Theorems \ref{thm-5}, \ref{thm-13}, \ref{thm-14}), which is a natural object. We'll avoid the use of more advanced treatments such as quasi-polynomial functions and integer points of polytopes (\cite{BLDPS}), which provide a more comprehensive view of the subject. The paper is essentially self-contained and requires no prior knowledge of the subject. \subsection{Notation} $\mathbb{N} = \{0, 1, 2, 3, ...\}$, the natural numbers \\ $\mathbb{Z}= -\mathbb{N} \cup \mathbb{N}$, the integers\\ $\mathbb{N}_{+} = \{1, 2, 3, ...\}$, the positive integers \section{Setting} \noindent Let $M = \langle a_1, \dots, a_{n-1}, a_n \rangle$ be a submonoid of $(\mathbb{N}, +)$. By definition $M$ is called a numerical monoid if its complement is finite. Denote by $M_{\widehat{a_n}} = \langle a_1, \dots, a_{n-1} \rangle $ the submonoid obtained by dropping the generator $a_n$. \begin{definition} For $x\in \mathbb{R}$, $a\in \mathbb{N}_{+}$, define $\displaystyle \fra{x}{a} = x-a\lfr{x}{a} $. \end{definition} With this definition, we have $\{x\} = \fra{x}{1}$, the fractional part of $x$. \begin{prop}\label{prop-1} Let $x, k\in \mathbb{Z}$, $a, b\in \mathbb{N}_{+}$. Then: \begin{enumerate} \item $\fra{x}{a}$ is the residue of $x$ modulo $a$ \item $0\leq \fra{x}{a} < a$ \item $\fra{x}{a}=0 \iff a | x$. \item $\fra{x+ka}{a}=\fra{x}{a}$ \item $\fra{x}{b} = \fra{\fra{x}{ab}}{b}$ \end{enumerate} \end{prop} \begin{proof} Immediate. \end{proof} \begin{lem} \label{lem-2} For $y\in \mathbb{N}$, if $y<a_n$, then $y\in M \iff y\in M_{\widehat{a_n}}$. In particular, for $x\in \mathbb{N}$, $\fra{x}{a_n} \in M \iff \fra{x}{a_n} \in M_{\widehat{a_n}}$. \end{lem} \begin{proof} This is clear. \end{proof} \begin{definition} We define the counting function $$T_{M}(x) = \# \{(x_1, ... , x_n) \in \mathbb{N}^n \mid x_1a_1 + ... + x_na_n = x \}$$ \end{definition} The theory of Ehrhart polynomials \cite{BLDPS} provides ways to determine $T_M$. Instead we'll compute it directly in an elementary way. \begin{thm} $$ T_{M}(x)=T_{M}(x-a_n)+T_{M_{\widehat{a_n}}}(x)$$ \end{thm} \begin{proof} This is immediate since the set $ \{(x_1, ... , x_n) \in \mathbb{N}^n \mid x_1a_1 + ... + x_na_n = x \}$ can be partitioned as $$\{(x_1, ... , x_n) \in \mathbb{N}^n \mid x_n > 0 , x_1a_1 + ... + x_na_n = x \}\sqcup \{(x_1, ... , x_n) \in \mathbb{N}^n \mid x_n=0, x_1a_1 + ... + x_na_n = x\},$$ where the first part is in bijection, via $y_n=x_n-1\geq 0$, with $\{(x_1, ... , x_{n-1}, y_n) \in \mathbb{N}^n \mid x_1a_1 + ... + x_{n-1}a_{n-1}+y_na_n = x-a_n \}$ and the second part can be identified with $\{(x_1, ... , x_{n-1}) \in \mathbb{N}^{n-1} \mid x_1a_1 + ... + x_{n-1}a_{n-1} = x\}$; the cardinalities of these two sets are $T_{M}(x-a_n)$ and $T_{M_{\widehat{a_n}}}(x)$, respectively.
\end{proof} \begin{cor}\label{cor-4} $\displaystyle T_{M} (x) = \sum_{k=0}^{\lfr{x}{a_n}} T_{M_{\widehat{a_n}}}(x-ka_n) = \sum_{l=0}^{\lfr{x}{a_n}} T_{M_{\widehat{a_n}}}(\fra{x}{a_n}+la_n) $ \end{cor} \begin{proof} $\displaystyle T_{M} (x) - T_{M}\left(x-a_n\floor*{\frac{x}{a_n}}\right) = \sum_{k=0}^{ \lfr{x}{a_n} -1} \left[T_{M}(x-ka_n)-T_{M}(x-(k+1)a_n)\right] \\=\sum_{k=0}^{\lfr{x}{a_n} - 1} T_{M_{\widehat{a_n}}}(x-ka_n)$. Also $T_{M}\left(x-a_n\floor*{\frac{x}{a_n}}\right)=T_{M}(\fra{x}{a_n})=T_{M_{\widehat{a_n}}}(\fra{x}{a_n}) = T_{M_{\widehat{a_n}}}\left(x-a_n\floor*{\frac{x}{a_n}}\right)$ by Lemma \ref{lem-2} and we conclude that $\displaystyle T_{M} (x) = \sum_{k=0}^{\lfr{x}{a_n}} T_{M_{\widehat{a_n}}}(x-ka_n)$ and making the change of variable $k=\lfr{x}{a_n}-l$ we get the last equality. \end{proof} \section{Dimension $n=2$} $M = \langle a, b \rangle$ and assume that $\gcd(a,b)=1$ in this subsection. \begin{thm} \label{thm-5} We have $$ T_{\langle a, b \rangle} (x) = \lfr{x}{ab} + \epsilon_{\langle a, b\rangle}(\fra{x}{ab})$$ where for $0\leq y < ab$, $\epsilon(y)=\delta(y\in \langle a, b\rangle)$ is the indicator function. \end{thm} \begin{proof} Applying Corollary \ref{cor-4} with $n=2$ and $a_2=b$, we have $\displaystyle T_{\langle a, b \rangle}(x) = \sum_{l=0}^{\lfr{x}{b}} T_{M_{\widehat{b}}}(\fra{x}{b}+lb) = \sum_{l=0}^{\lfr{x}{b}} T_{\langle a\rangle}(\fra{x}{b}+lb)$. Now, it's clear that $T_{\langle a \rangle }(y)$ is $1$ or $0$ according to whether $a|y$ or not and so $T_{\langle a\rangle}(y+ka)=T_{\langle a\rangle}(y)$. Let $y=\lfr{x}{b}$. By Proposition \ref{prop-1}, $\lfr{\lfr{x}{b}}{a}=\lfr{x}{ab}$, so $y=\fra{y}{a} + a\lfr{x}{ab}$. We split the sum $\displaystyle \sum_{l=0}^{\lfr{x}{b}} T_{\langle a\rangle}(\fra{x}{b}+lb) = \sum_{i=0}^{\lfr{x}{ab} - 1} \sum_{j=0}^{a-1} T_{\langle a\rangle}\left(\fra{x}{b}+(ai+j)b\right) + \sum_{j=0}^{\fra{y}{a}} T_{\langle a\rangle}(\fra{x}{b}+(a\lfr{x}{ab}+j)b) = \sum_{i=0}^{\lfr{x}{ab} - 1} \sum_{j=0}^{a-1} T_{\langle a\rangle}(\fra{x}{b}+jb) + \sum_{j=0}^{\fra{y}{a}} T_{\langle a\rangle}(\fra{x}{b}+jb) = \lfr{x}{ab}\sum_{j=0}^{a-1} T_{\langle a\rangle}(\fra{x}{b}+jb) + \sum_{j=0}^{\fra{y}{a}} T_{\langle a\rangle}(\fra{x}{b}+jb)$ and the proof follows from the next two lemmas. \end{proof} \begin{lem} For any $y\in \mathbb{N}$, $\displaystyle \sum_{j=0}^{a-1} T_{\langle a\rangle}(y+jb)=1$ \begin{proof} Since $\gcd(a,b)=1$, there's unique $j, 0\leq j\leq a-1$ such that $a\mid (y+jb)$. \end{proof} \end{lem} \begin{lem} For any $x\in \mathbb{N}$, then $\displaystyle \sum_{j=0}^{\fra{y}{a}} T_{\langle a\rangle}(\fra{x}{b}+jb) = \epsilon_{\langle a, b\rangle}(\fra{x}{ab})$ where $y=\lfr{x}{b}$. \end{lem} \begin{proof} Omitted. \end{proof} \subsection{A different proof} \begin{lem}\label{lem-8} If $x=\alpha a+\beta b\in M=\langle a, b\rangle$, then $T_{M}(x)=\lfr{\alpha}{b}+\lfr{\beta}{a}+1$ \end{lem} \begin{proof} Fix a representation $x=\alpha a+\beta b$. We need to count representations $x=\alpha_1 a+\beta_1 b$. But $\alpha a+\beta b=\alpha_1 a+\beta_1 b \iff (\alpha-\alpha_1)a=(\beta_1-\beta)b$ and since $\gcd(a,b)=1$ this happens $\iff a\mid (\beta_1-\beta), b\mid (\alpha-\alpha_1)$ and $\beta_1-\beta=ja, \alpha-\alpha_1 = jb$. So we need to count the possibilities for $j\in \mathbb{Z}$. We have $\beta_1=ja+\beta$, $\alpha_1=-jb+\alpha$. 
And the condition is exactly $\beta_1\geq 0$ and $\alpha_1\geq 0$, which happens $\iff ja+\beta\geq 0, -jb+\alpha\geq 0 \iff \frac{-\beta}{a}\leq j\leq \frac{\alpha}{b} \iff \cfr{-\beta}{a}\leq j\leq \lfr{\alpha}{b} $. Now we have $ \cfr{-\beta}{a} = - \lfr{\beta}{a}$ hence there are $\lfr{\alpha}{b}+\lfr{\beta}{a}+1$ such $j$. \end{proof} \begin{cor}\label{cor-9} For $x<ab$, $T_{M}(x)=1 \iff x\in M$. \end{cor} \begin{proof} It's enough to show that if $x\in M$ and $x<ab$, then $T_{M}(x)=1$. By Lemma \ref{lem-8}, this holds iff $\lfr{\alpha}{b}=\lfr{\beta}{a}=0$ where $x=\alpha a+\beta b$ is some representation. But since $x< ab$, then $\alpha a\leq x < ab$ so $\alpha<b$ and $\beta b\leq x < ab$ so $\beta<a$ so $\lfr{\alpha}{b}=\lfr{\beta}{a}=0$. \end{proof} \begin{lem} \label{lem-10} If $x\in M$, then for all $k\in \mathbb{N}$, $T_{M}(x+kab)=T_{M}(x)+k$ \end{lem} \begin{proof} By Lemma \ref{lem-8}, take $x=\alpha a+\beta b\in M$ then $x+kab\in M$ and $x+kab=(\alpha+kb) a+\beta b$ and so $T_M(x+kab)=\lfr{\alpha+kb}{b}+\lfr{\beta}{a}+1=k+\lfr{\alpha}{b}+\lfr{\beta}{a}+1= T_{M}(x)+k$. \end{proof} \begin{lem}\label{lem-11} If $x\notin M$, $0\leq x\leq ab-1$, then $T_{M}(x+ab)=1$ \end{lem} \begin{proof} Since $x+ab\geq ab$ and the Frobenius number of $M$ is $ab-a-b$ (\cite{RS}), $x+ab$ belongs to $M$, so $T_{M}(x+ab)\geq 1$. Now, as in Corollary \ref{cor-9}, it's enough to show that in any representation of $x+ab=\alpha a + \beta b$, both $\alpha < b$ and $\beta < a$. If not, say $\alpha\geq b$, then $x=\alpha a + \beta b -ab = (\alpha-b)a + \beta b$ which says that $x\in M$, contradiction. In the same way, $\beta < a$ and we're done. \end{proof} \begin{thm} For all $x\in \mathbb{N}, T_{M}(x+ab)=T_{M}(x)+1$ and Theorem \ref{thm-5} is valid. \end{thm} \begin{proof} By contradiction, take $x$ minimal such that this formula is not valid for $x$. Notice that by Lemma \ref{lem-11}, we have $x\geq ab$, and by Lemma \ref{lem-10}, we have $x\notin M$. Let $y=x-ab < x$ then by minimality, the theorem is valid for $y$ and so $T_{M}(x)=T_{M}(y+ab)=T_{M}(y)+1\geq 1$ in particular $x\in M$, contradiction. \\ Since $\forall x\in \mathbb{N}, T_{M}(x+ab)=T_{M}(x)+1$, by induction $\forall x, k\in \mathbb{N}$, $T_{M}(x+kab)=T_{M}(x)+k$, and so $T_{M}(x)=T_{M}(\fra{x}{ab}+ab\lfr{x}{ab})= \lfr{x}{ab} + T_{M}(\fra{x}{ab})= \lfr{x}{ab} + \epsilon_{\langle a, b\rangle}(\fra{x}{ab})$ the last equality since $\fra{x}{ab} < ab$ and using Corollary \ref{cor-9}. This proves the theorem. \end{proof} \section{Dimension $n=3$} Assume $\gcd(a,b)=\gcd(a,c)=\gcd(b,c)=1$. These conditions are stronger than necessary, but the theory of numerical monoids permits one to reduce to this case in any dimension \cite{RS}. Let $M = \langle a, b, c \rangle$, $M_{\widehat{c}}= \langle a, b \rangle$ dimension $3$ and $2$ numerical monoids.\\ Recall $\displaystyle T_{M} (x) = \sum_{l=0}^{\lfr{x}{c}} T_{\langle a, b\rangle}(\fra{x}{c}+lc)$ Let $y=\lfr{x}{c}$. Then $y= \fra{y}{ab} + ab\lfr{x}{abc}$. 
By Theorem \ref{thm-5}, $T_{\langle a, b\rangle}$ splits as sum, consequently we split the sum $\displaystyle \sum_{l=0}^{\lfr{x}{c}} T_{\langle a, b\rangle}(\fra{x}{c}+lc) = \sum_{l=0}^{\lfr{x}{c}} \lfr{\fra{x}{c}+lc}{ab} + \sum_{l=0}^{\lfr{x}{c}} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+cl}{ab})= S1 + S2$\\ $S1= \displaystyle \sum_{l=0}^{\lfr{x}{c}} \lfr{\fra{x}{c}+lc}{ab} = \sum_{i=0}^{\lfr{x}{abc} - 1} \sum_{j=0}^{ab-1} \lfr{\fra{x}{c}+(abi+j)c}{ab} + \sum_{j=0}^{\fra{y}{ab}} \lfr{\fra{x}{c}+(ab\lfr{x}{abc}+j)c}{ab} \\= \sum_{i=0}^{\lfr{x}{abc} - 1} \sum_{j=0}^{ab-1} \lfr{\fra{x}{c}+jc}{ab} + \sum_{i=0}^{\lfr{x}{abc} - 1} \sum_{j=0}^{ab-1} ic+ \sum_{j=0}^{\fra{y}{ab}} \lfr{\fra{x}{c}+jc}{ab}+ \sum_{j=0}^{\fra{y}{ab}} \lfr{x}{abc}c \\= \lfr{x}{abc}\sum_{j=0}^{ab-1} \lfr{\fra{x}{c}+jc}{ab} + \frac{abc}{2}\left(\lfr{x}{abc}-1\right)\lfr{x}{abc} + \sum_{j=0}^{\fra{y}{ab}} \lfr{\fra{x}{c}+jc}{ab}+ (\fra{y}{ab}+1) \lfr{x}{abc}c $ $S2= \displaystyle \sum_{l=0}^{\lfr{x}{c}} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+cl}{ab})= \sum_{i=0}^{\lfr{x}{abc} - 1} \sum_{j=0}^{ab-1} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+(abi+j)c}{ab}) + \sum_{j=0}^{\fra{y}{ab}} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+(ab\lfr{x}{abc}+j)c}{ab}) = \sum_{i=0}^{\lfr{x}{abc} - 1} \sum_{j=0}^{ab-1} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+jc}{ab}) + \sum_{j=0}^{\fra{y}{ab}} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+jc}{ab}) \\= \lfr{x}{abc} \sum_{j=0}^{ab-1} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+jc}{ab}) + \sum_{j=0}^{\fra{y}{ab}} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+jc}{ab}) $ \\ Using Hermite reciprocity, since $\gcd(ab,c)=1$: \\ $\displaystyle \sum_{j=0}^{ab-1} \lfr{\fra{x}{c}+jc}{ab} = \sum_{j=0}^{c-1} \lfr{\fra{x}{c}+jab}{c} = \sum_{j=0}^{c-1} \lfr{\fra{x}{c}+\fra{jab}{c} + c\lfr{jab}{c} }{c} = \sum_{j=0}^{c-1} \lfr{\fra{x}{c}+\fra{jab}{c} }{c} + \sum_{j=0}^{c-1}\lfr{jab}{c} = \sum_{l=0}^{c-1} \lfr{\fra{x}{c}+l }{c} + \sum_{j=0}^{c-1}\lfr{jab}{c} = \floor*{c\frac{\fra{x}{c}}{c}}+\frac{(ab-1)(c-1)}{2} = \fra{x}{c}+\frac{(ab-1)(c-1)}{2} $ \\ Hence we obtain $S1=$ \\ $\displaystyle \sum_{j=0}^{\fra{y}{ab}} \lfr{\fra{x}{c}+jc}{ab} + \lfr{x}{abc}\sum_{j=0}^{ab-1} \lfr{\fra{x}{c}+jc}{ab} + \frac{abc}{2}\left(\lfr{x}{abc}-1\right)\lfr{x}{abc} + (\fra{y}{ab}+1) \lfr{x}{abc}c \\= \sum_{j=0}^{\fra{y}{ab}} \lfr{\fra{x}{c}+jc}{ab} + \lfr{x}{abc}\left(\fra{x}{c}+\frac{(ab-1)(c-1)}{2}\right)+ \frac{abc}{2}\left(\lfr{x}{abc}-1\right)\lfr{x}{abc} + \\ (\fra{y}{ab}+1) \lfr{x}{abc}c = \sum_{j=0}^{\fra{y}{ab}} \lfr{\fra{x}{c}+jc}{ab} + \lfr{x}{abc}\left( \frac{abc}{2}\left(\lfr{x}{abc}-1\right) + \\ \fra{x}{c}+\frac{(ab-1)(c-1)}{2} + \\ (\fra{y}{ab}+1)c \right) \\= \sum_{j=0}^{\fra{y}{ab}} \lfr{\fra{x}{c}+jc}{ab} + S3$, where $S3$ is the sum \\ $\displaystyle S3= \lfr{x}{abc}\left( \frac{abc}{2}\left(\lfr{x}{abc}-1\right) + \fra{x}{c}+\frac{(ab-1)(c-1)}{2} + (\fra{y}{ab}+1)c \right) \\= \lfr{x}{abc}\left( \frac{abc}{2}\left(\lfr{x}{abc}-1\right) + \fra{x}{c}+\frac{(ab-1)(c-1)}{2} + \\ (\fra{\lfr{x}{c}}{ab}+1)c \right) \\= \lfr{x}{abc}\left( \frac{abc}{2}\left(\lfr{x}{abc}-1\right) + x-c\lfr{x}{c}+\frac{(ab-1)(c-1)}{2} + (\lfr{x}{c}-ab\lfr{x}{abc}+1)c \right) \\= \lfr{x}{abc}\left( \frac{abc}{2}\lfr{x}{abc}-\frac{abc}{2} + x-c\lfr{x}{c}+\frac{abc-ab-c+1}{2} + c\lfr{x}{c}-abc\lfr{x}{abc}+c \right) \\= \lfr{x}{abc}\left(x-\frac{abc}{2}\lfr{x}{abc} +\frac{c-ab+1}{2} \right) = \lfr{x}{abc}\left(\frac{abc}{2}\lfr{x}{abc} + \fra{x}{abc} + \frac{c-ab+1}{2} \right)$ Finally we get 
$\displaystyle S1= \lfr{x}{abc}\left(\frac{abc}{2}\lfr{x}{abc} + \fra{x}{abc} + \frac{c-ab+1}{2} \right) + \sum_{j=0}^{\fra{\lfr{x}{c}}{ab}} \lfr{\fra{x}{c}+jc}{ab} \\= \frac{abc}{2}\lfr{x}{abc}^2 +\lfr{x}{abc}\left( \fra{x}{abc} + \frac{c-ab+1}{2} \right) +\sum_{j=0}^{\fra{\lfr{x}{c}}{ab}} \lfr{\fra{x}{c}+jc}{ab}$\\ Notice that the main term is $\displaystyle \frac{abc}{2}\lfr{x}{abc}^2$ which has order $\displaystyle \sim \frac{x^2}{2abc}$.\\ The entire result is then $\displaystyle S1+S2 \\= \frac{abc}{2}\lfr{x}{abc}^2 +\lfr{x}{abc}\left( \fra{x}{abc} + \frac{c-ab+1}{2} \right) +\sum_{j=0}^{\fra{\lfr{x}{c}}{ab}} \lfr{\fra{x}{c}+jc}{ab} + \\ \lfr{x}{abc} \sum_{j=0}^{ab-1} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+jc}{ab}) + \sum_{j=0}^{\fra{y}{ab}} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+jc}{ab})$. \\ Each of the sums have easy to find bounds: \begin{itemize} \item $\displaystyle \fra{x}{abc} + \frac{c-ab+1}{2} < abc+\frac{c-ab+1}{2}$. \item $\displaystyle \sum_{j=0}^{\fra{\lfr{x}{c}}{ab}} \lfr{\fra{x}{c}+jc}{ab} \leq \displaystyle \sum_{j=0}^{ab-1} \lfr{\fra{x}{c}+jc}{ab} = \fra{x}{c}+\frac{(ab-1)(c-1)}{2} \leq c+\frac{(ab-1)(c-1)}{2}$. \item $\displaystyle \sum_{j=0}^{ab-1} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+jc}{ab}) \leq \sum_{j=0}^{ab-1} 1= ab$ \item $\displaystyle \sum_{j=0}^{\fra{y}{ab}} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+jc}{ab}) \leq \sum_{j=0}^{ab-1} \epsilon_{\langle a, b\rangle}(\fra{\fra{x}{c}+jc}{ab}) \leq ab$ \end{itemize} Hence we obtain: \begin{thm}\label{thm-13} $n=3$, $M = \langle a, b, c \rangle$, $\gcd(a,b)=\gcd(a,c)=\gcd(b,c)=1$. We have $$T_{M}(x) = \frac{abc}{2}\lfr{x}{abc}^2 + r_{M}(x)$$ where $|r_M(x)|\leq Kx+L$ for absolute constants $K, L$ (depending only on $a, b, c$). \end{thm} \section{Dimension $n$ arbitrary} The technique above generalizes, by summing up the leading terms using Corollary \ref{cor-4} obtaining: \begin{thm} \label{thm-14} For $M=\langle a_1, ... , a_d\rangle$, $\gcd(a_i,a_j)=1$ for all $i\neq j$. The counting function $T_{M}(x)$ has main order $$T_M(x) \sim \frac{(a_1....a_d)^{d-2}}{(d-1)!}\lfr{x}{a_1.... a_d}^{d-1}$$ meaning that $T_{M}(x)=\frac{(a_1...a_d)^{d-2}}{(d-1)!}\lfr{x}{a_1.... a_d}^{d-1} + r_{M}(x)$, where $|r_{M}(x)|$ is bounded by a polynomial in $x$ (depending only on $M$) of degree at most $d-2$. \end{thm} \Addresses \end{document}
\begin{document} \title{Inhibiting unwanted transitions in population transfer in two- and three-level quantum systems} \author{A. Kiely} \email{[email protected]} \affiliation{Department of Physics, University College Cork, Cork, Ireland} \author{A. Ruschhaupt} \email{[email protected]} \affiliation{Department of Physics, University College Cork, Cork, Ireland} \begin{abstract} We examine the stability of population transfer in two- and three-level systems against unwanted additional transitions. This population inversion is achieved by using recently proposed schemes called ``shortcuts to adiabaticity''. We quantify and compare the sensitivity of different schemes to these unwanted transitions. Finally, we provide examples of shortcut schemes which lead to a zero transition sensitivity in certain regimes, i.e. which lead to a nearly perfect population inversion even in the presence of unwanted transitions. \end{abstract} \maketitle \section{Introduction} The manipulation of the state of a quantum system with time-dependent interacting fields is a fundamental operation in atomic and molecular physics. Modern applications of this quantum control such as quantum information processing \cite{NC} require fast schemes with a high fidelity (typically with an error lower than $10^{-4}$ \cite{NC}) which must also be very stable with respect to imperfections of the system or fluctuations of the control parameters. Most methods used may be classified into two major groups: fast, resonant, fixed-area pulses, and slow adiabatic methods such as ``Rapid'' Adiabatic Passage (RAP). Fixed area pulses are traditionally considered to be fast but unstable with respect to perturbations. For two-level systems, an example of a fixed area pulse is a $\pi$ pulse. A $\pi$ pulse may be fast but is highly sensitive to variations in the pulse area and to inhomogeneities in the sample \cite{Allen}. An alternative to a single $\pi$ pulse are composite pulses \cite{Levitt,Collin,Torosov}, which still need an accurate control of pulse phase and intensity. On the other hand, the canonical robust option is to perform operations adiabatically \cite{adiab}. Nevertheless, such schemes are slow and therefore likely to be affected by decoherence or noise over the long times required and do not lead to an exact transfer. A compromise is to use ``shortcuts to adiabaticity'' (STA), which may be broadly defined as the processes that lead to the same final populations as the adiabatic approach but in a shorter time, for a review see \cite{sta_review, sta_review_2}. In particular, STA for two- and three-level systems are developed in \cite{Chen2010, Sara12, noise, Andreas2013,Sara13} and \cite{sta_3level} respectively. Nonetheless, in an experimental implementation, the system is never an ideal two- or three-level system. There may be unwanted couplings to other levels. The effect of such unwanted transitions for composite pulses has been examined and optimized in \cite{genov_2013} where it was also assumed that the phase of the unwanted coupling to another level could be controlled in a time-dependent way. The goal of this paper is to examine the effect of unwanted couplings to STA in two- and three-level quantum systems where we will assume that no control of the phase of the coupling to the unwanted level is possible. The remainder of this paper is structured as follows. In the subsequent section, we briefly review STA for two-level systems. 
In Section \ref{sect3}, we examine the sensitivity of STA schemes to unwanted transitions and present schemes to minimize this sensitivity. In Section \ref{sect4}, we review STA for three-level systems and we will examine and optimize their sensitivity to unwanted transitions in Section \ref{sect5}. \section{Invariant-based shortcuts in two-level quantum systems\label{sect2}} Here we will review the derivation of invariant-based STA schemes in two-level quantum systems following the explanation given in \cite{noise}. We assume our two-level system has a Hamiltonian of the form \begin{equation} H_{2L}(t)=\frac{\hbar}{2}\left(\begin{array}{cc} -\delta_{2}(t) & \Omega_{R}(t)-i\Omega_{I}(t)\\ \Omega_{R}(t)+i\Omega_{I}(t) & \delta_{2}(t) \end{array}\right)\label{eq:1} \end{equation} where the ground state is represented by $\left|1\right\rangle =\left(\begin{array}{c} 1\\ 0 \end{array}\right)$ and the excited state by $\left|2\right\rangle =\left(\begin{array}{c} 0\\ 1 \end{array}\right)$ as in Fig. \ref{fig_1_basics}(a). An example of such a quantum system would be a semiclassical coupling of two atomic levels with a laser in a laser-adapted interaction picture. In that setting $\Omega(t) = \Omega_R(t) + i \Omega_I(t)$ would be the complex Rabi frequency (where $\Omega_R$ and $\Omega_I$ are the real and imaginary parts) and $\delta_{2}$ would be the time-dependent detuning between transition and laser frequencies. To simplify the language we will assume this setting for convenience in the following, noting that our reasoning will still pertain to any other two-level system such as a spin-$\frac{1}{2}$ particle or a Bose-Einstein condensate on an accelerated optical lattice \cite{BEC_lattice}. In other settings, $\Omega(t)$ and $\delta_{2}(t)$ will correspond to different physical quantities. The goal is to achieve perfect population inversion in a short time in a two-level quantum system. The system should start at $t=0$ in the ground state and end in the excited state (up to a phase) at final time $T$. In order to design a scheme to achieve this goal i.e. to design a STA, we make use of Lewis-Riesenfeld invariants \cite{LR69}. A Lewis-Riesenfeld invariant of $H_{2L}$ is a Hermitian Operator $I\left(t\right)$ such that \begin{equation} \frac{\partial I}{\partial t}+\frac{i}{\hbar}\left[H_{2L},I\right]=0\,. \end{equation} In this case $I\left(t\right)$ is given by \begin{equation} I\left(t\right)=\frac{\hbar}{2}\mu\left(\begin{array}{cc} \cos\left(\theta\left(t\right)\right) & \sin\left(\theta\left(t\right)\right)e^{-i\alpha\left(t\right)}\\ \sin\left(\theta\left(t\right)\right)e^{i\alpha\left(t\right)} & -\cos\left(\theta\left(t\right)\right) \end{array}\right) \end{equation} where $\mu$ is an arbitrary constant with units of frequency to keep $I\left(t\right)$ with dimensions of energy. 
The functions $\theta(t)$ and $\alpha(t)$ must satisfy the following equations: \begin{eqnarray} \dot{\theta}&=&\Omega_{I}\cos\alpha-\Omega_{R}\sin\alpha,\label{eq:4}\\ \dot{\alpha}&=&-\delta_{2}-\cot\theta\left(\Omega_{R}\cos\alpha+\Omega_{I}\sin\alpha\right)\,.\label{eq:5} \end{eqnarray} The eigenvectors of $I\left(t\right)$ are \begin{eqnarray} \left|\phi_{+}\left(t\right)\right\rangle &=&\left(\begin{array}{c} \cos\left(\theta/2\right)e^{-i\alpha/2}\\ \sin\left(\theta/2\right)e^{i\alpha/2} \end{array}\right)\,,\\ \left|\phi_{-}(t)\right\rangle &=&\left(\begin{array}{c} \sin\left(\theta/2\right)e^{-i\alpha/2}\\ -\cos\left(\theta/2\right)e^{i\alpha/2} \end{array}\right) \end{eqnarray} with eigenvalues $\pm\frac{\hbar}{2}\mu$. One can write a general solution of the Schr\"{o}dinger equation \begin{eqnarray} i\hbar \frac{d}{dt} \left|\Psi\left(t\right)\right\rangle = H_{2L}(t) \left|\Psi\left(t\right)\right\rangle \end{eqnarray} as a linear combination of the eigenvectors of $I\left(t\right)$ i.e. $\left|\Psi\left(t\right)\right\rangle =c_{+}e^{i\kappa_{+}(t)}\left|\phi_{+}\left(t\right)\right\rangle +c_{-}e^{i \kappa_{-}(t)}\left|\phi_{-}(t)\right\rangle $ where $c_{\pm}\in\mathbb{C}$ and $\kappa_{\pm}\left(t\right)$ are the Lewis-Riesenfeld phases \cite{LR69} \begin{equation} \dot{\kappa}_{\pm}\left(t\right)=\frac{1}{\hbar}\left\langle \phi_{\pm}\left(t\right)\right.\left|\left(i\hbar\partial_{t}-H_{2L}\left(t\right)\right)\right|\left.\phi_{\pm}\left(t\right)\right\rangle\,. \label{kappa} \end{equation} Therefore, it is possible to construct a solution \begin{eqnarray} \left|\psi\left(t\right)\right\rangle &=&\left|\phi_{+}\left(t\right)\right\rangle e^{-i\gamma\left(t\right)/2} \end{eqnarray} where $\gamma = \pm 2 \kappa_{\pm}$. From Eq. \eqref{kappa} we get \begin{equation} \dot{\gamma}=\frac{1}{\sin\theta}\left(\Omega_{R}\cos\alpha+\Omega_{I}\sin\alpha\right)\,.\label{eq:11} \end{equation} For population inversion it must be the case that $\theta\left(0\right)=0$ and $\theta\left(T\right)=\pi$. This ensures that $\left|\psi\left(0\right)\right\rangle =\left|1\right\rangle$ and $\left|\psi\left(T\right)\right\rangle =\left|2\right\rangle$ up to a phase. Note, that this method is not limited to going from the ground state to the excited state; the initial and final states can be determined by changing the boundary conditions on $\theta$ and $\alpha$. Using Eqs. \eqref{eq:4}, \eqref{eq:5} and \eqref{eq:11} we can retrieve the physical quantities: \begin{eqnarray} \Omega_{R}&=&\cos\alpha\sin\theta\,\dot{\gamma}-\sin\alpha\,\dot{\theta}\label{eq:12}\,,\\ \Omega_{I}&=&\sin\alpha\sin\theta\,\dot{\gamma}+\cos\alpha\,\dot{\theta}\label{eq:13}\,,\\ \delta_{2}&=&-\cos\theta\,\dot{\gamma}-\dot{\alpha}\label{eq:14}\,. \end{eqnarray} From this we can see that if the functions $\alpha,\gamma$, and $\theta$ are chosen with the appropriate boundary conditions, perfect population inversion would be achieved at a time $T$ assuming no perturbation or unwanted transitions. These functions will henceforth be referred to as ancillary functions. In the following section we assume that there is an additional unwanted coupling to a third level. \begin{figure} \caption{\label{fig_1_basics} \label{fig_1_basics} \end{figure} \section{Two-level quantum system with unwanted transition\label{sect3}} \subsection{Model} We assume there are in fact three levels in the atom as shown in Fig. \ref{fig_1_basics}(b) and the energy of level $\left|j\right\rangle $ is $\hbar\omega_{j}$ where $j=1,2,3$. 
Without loss of generality we set $\omega_{1}=0$. The frequency of the laser coupling levels $\left|1\right\rangle $ and $\left|2\right\rangle $ is denoted by $\omega_{L}$. The detuning with respect to the second level is given by \begin{equation} \delta_{2}=\omega_2 - \omega_{L}\, . \end{equation} We assume that this laser is also unintentionally coupling levels $\left|1\right\rangle$ and $\left|3\right\rangle$. With this in mind, we assume that the Rabi frequency $\Omega_{13}(t)$ differs from $\Omega_{12}(t)$ only by a constant complex factor, i.e. \begin{equation} \Omega_{13}\left(t\right)= \beta e^{i \zeta} \Omega_{12} (t) \end{equation} where $\zeta,\beta$ are unknown real constants with $\beta \ll 1$. $\Omega_{12}(t)$ is the Rabi frequency coupling levels $\left|1\right\rangle$ and $\left|2\right\rangle$. A possible motivation for these assumptions in a quantum-optics setting might be the following: assume that one needs right circularly polarized light ($\sigma^{+}$) in order to couple states $\left|1\right\rangle $ and $\left|2\right\rangle $ and one needs left circularly polarized light ($\sigma^{-}$) to couple states $\left|1\right\rangle $ and $\left|3\right\rangle $. If the laser light is elliptically polarized instead of exactly right circularly polarized, this would cause unwanted transitions to level $\left|3\right\rangle$. Other motivations for these assumptions are possible, especially in other quantum systems (different from the quantum-optics setting of an atom and a classical laser). Note that these assumptions are also used in \cite{genov_2013}, with the only difference that in that paper a controllable, time-dependent $\zeta$ has been assumed. The three levels of our atom have the following state representation: \begin{eqnarray} \left|1\right\rangle =\left(\begin{array}{c} 1\\ 0\\ 0 \end{array}\right)\,,\, \left|2\right\rangle =\left(\begin{array}{c} 0\\ 1\\ 0 \end{array}\right)\,,\, \left|3\right\rangle =\left(\begin{array}{c} 0\\ 0\\ 1 \end{array}\right)\,. \end{eqnarray} Hence our Hamiltonian for the three-level system is \begin{align} H\left(t\right) & =\frac{\hbar}{2}\left(\begin{array}{ccc} -\delta_{2}(t) & \Omega_{12}^{*}\left(t\right) & \beta e^{-i\zeta}\Omega_{12}^{*}\left(t\right)\\ \Omega_{12}\left(t\right) & \delta_{2}(t) & 0\\ \beta e^{i\zeta}\Omega_{12}\left(t\right) & 0 & -2\Delta+\delta_{2}\left(t\right) \end{array}\right)\label{eq:original H} \end{align} where $\Delta = \omega_2-\omega_3$ is the frequency difference between levels $\left|2\right\rangle$ and $\left|3\right\rangle$. The phase $\zeta$ can be absorbed into a redefinition of the basis state for the third level and therefore in the following we will just set $\zeta=0$. Using the formalism presented in Sect. \ref{sect2}, we can construct schemes which result in full population inversion in the case of no unwanted transition. There is a lot of freedom in choosing the ancillary functions. The goal will be to find the schemes which are very robust against unwanted transitions, i.e. schemes which result in a nearly perfect population inversion even in the presence of an unwanted transition. \subsection{Transition sensitivity\label{sect_q}} We can write solutions of the time-dependent Schr\"{o}dinger equation for the Hamiltonian in Eq.
\eqref{eq:original H} if $\beta=0$ as follows \begin{eqnarray} \left|\psi_{0}\left(t\right)\right\rangle &=&\left(\begin{array}{c} \cos\left(\theta/2\right)e^{-i\alpha/2}\\ \sin\left(\theta/2\right)e^{i\alpha/2}\\ 0 \end{array}\right)e^{-i\gamma/2}\,,\\ \left|\psi_{1}\left(t\right)\right\rangle &=&\left(\begin{array}{c} \sin\left(\theta/2\right)e^{-i\alpha/2}\\ -\cos\left(\theta/2\right)e^{i\alpha/2}\\ 0 \end{array}\right)e^{i\gamma/2}\,,\\ \left|\psi_{2}\left(t\right)\right\rangle &=&\left(\begin{array}{c} 0\\ 0\\ e^{-i\Gamma\left(t\right)} \end{array}\right)\label{hamiltonian_3l} \end{eqnarray} where $\dot{\Gamma}=\frac{1}{2}\left(-2\Delta+\delta_{2}\right)$. These solutions form an orthonormal basis at every time $t$. The ancillary functions $\theta, \alpha, \gamma$ must fulfill Eqs. (\ref{eq:4}), (\ref{eq:5}) and (\ref{eq:11}). This unwanted coupling to the third level can be regarded as a perturbation, using the approximation that $\beta$ is small. We can write our Hamiltonian \eqref{eq:original H} as \begin{equation} H\left(t\right)=H_{0}\left(t\right)+\beta V\left(t\right) \end{equation} where $\beta$ is the strength of the perturbation, \begin{equation} H_{0}\left(t\right)=\frac{\hbar}{2}\left(\begin{array}{ccc} -\delta_{2}(t) & \Omega_{12}^{*}\left(t\right) & 0\\ \Omega_{12}\left(t\right) & \delta_{2}(t) & 0\\ 0 & 0 & -2\Delta+\delta_{2}(t) \end{array}\right) \end{equation} and \begin{equation} V\left(t\right)=\frac{\hbar}{2}\left(\begin{array}{ccc} 0 & 0 & \Omega_{12}^{*}\left(t\right)\\ 0 & 0 & 0\\ \Omega_{12}\left(t\right) & 0 & 0 \end{array}\right)\,.\label{pot} \end{equation} Using time-dependent perturbation theory we can calculate the probability of being in state $\left|2\right\rangle$ at time $T$ as \begin{equation} P_{2}=1- \beta^{2} q + \mathcal{O}\left(\beta^{4}\right)\, \end{equation} where \begin{eqnarray} q = \frac{1}{\hbar^2} \sum_{k=0}^2 \fabsq{\int_0^T dt\, \la \psi_0(t)|V(t)|\psi_k(t) \ra}\,. \end{eqnarray} If we substitute in the expression for the perturbation \eqref{pot} then we get \begin{eqnarray} q &=& \frac{1}{4} \left|\int_{0}^{T}dt\, \cos\left(\frac{\theta}{2}\right) \left(\sin\theta \, \dot\gamma - i \dot\theta\right) e^{i F(t)+i\Delta t} \right|^2\nonumber\\ &=& \left|\int_{0}^{T}dt\, \frac{d}{dt} \left[ \sin\left(\frac{\theta(t)}{2}\right) e^{i F(t)} \right] e^{i\Delta t} \right|^2 \label{eq_q} \end{eqnarray} where $F(t)=\frac{1}{2} \int_0^t ds\, (1+\cos\theta(s)) \dot\gamma(s)$. The quantity $q$ quantifies how sensitive a given protocol (determined by the ancillary functions) is to the unwanted transition to level $\left|3\right\rangle$. Therefore we will call $q$ the {\it transition sensitivity} in the following. Our goal will be to determine protocols or schemes which maximize $P_{2}$ or, equivalently, minimize $q$. \subsection{General properties of the transition sensitivity\label{sect_2level_properties}} We will begin by examining some general properties of the transition sensitivity $q$. First, we note that $q$ is always independent of $\alpha$. In the case where $\dot{\gamma}=0$ the transition sensitivity is symmetric under $\Delta \leftrightarrow -\Delta$. In the case of $\Delta = 0$, the integral in Eq. \eqref{eq_q} can be easily evaluated by taking into account that $\theta(T)=\pi$ and $\theta(0)=0$. From this we see that \begin{eqnarray} q=1 \; \mbox{if} \; \Delta = 0\,. \end{eqnarray} This means there is no possibility in the case of $\Delta=0$ to completely reduce the influence of the unwanted transition.
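Indeed, the integrand in Eq. \eqref{eq_q} is a total derivative, so for $\Delta=0$ the boundary conditions $\theta(0)=0$ and $\theta(T)=\pi$ give
\begin{eqnarray}
q &=& \left|\left[\sin\left(\frac{\theta(t)}{2}\right) e^{i F(t)}\right]_{0}^{T}\right|^{2} = \left|\sin\left(\frac{\pi}{2}\right)e^{i F(T)}\right|^{2} = 1\,,
\end{eqnarray}
independently of the particular choice of the ancillary functions.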
In the following, we will show that even for $|\Delta| < 1/T$ the transition probability $q$ cannot be zero. By partial integration, we get \begin{eqnarray} q = \left|1 - i \Delta M\right|^2= 1 + 2\Delta \mbox{Im}(M) + \Delta^2 \fabsq{M} \end{eqnarray} where \begin{eqnarray} M &=&\int_0^T dt\, \sin\left(\frac{\theta(t)}{2}\right) \times \nonumber\\ & & \fexp{i (t-T)\Delta -\frac{i}{2} \int_t^T ds\, (1+\cos\theta(s)) \dot\gamma(s)}\,.\nonumber\\ \end{eqnarray} We have $q \ge (1 + \Delta \mbox{Im}(M))^2$ and \begin{eqnarray} |\mbox{Im}(M)| &\le& \int_0^T dt\, \left|\sin\left(\frac{\theta(t)}{2}\right)\right| \le T\,. \end{eqnarray} Let us assume $|\Delta|T < 1$ then \begin{eqnarray} q &\ge& (1 - |\Delta| |\mbox{Im}(M)|)^2\nonumber\\ &\ge& \left(1 - |\Delta| \int_0^T dt\, \left|\sin\left(\frac{\theta(t)}{2}\right)\right|\right)^2\nonumber\\ &\ge& (1- |\Delta| T)^2\,. \label{bound_q} \end{eqnarray} So we get $q > 0$ if $|\Delta|T < 1$, i.e. this means that a necessary condition for $q = 0$ is $T \ge 1/|\Delta|$. The next question which we will address is whether there could be a scheme (independent of $\Delta$) which results in $q=0$ for all $|\Delta| > 1/T$. For this we would need \begin{eqnarray} H(\Delta) := \int_{0}^{T}dt\, \frac{d}{dt} \left[ G\left(t\right)\right] e^{i\Delta t} \stackrel{!}{=} 0 \label{cond} \end{eqnarray} for all $|\Delta| > 1/T$, where $G(t)= \fsin{\theta(t)/2} e^{i F(t)}$. The left-hand side of this equation, $H(\Delta)$, is simply the Fourier transform of $h(t)=\chi_{[0,T]}(t)\frac{d}{dt} \left[G\left(t\right) \right]$ (where $\chi_{[0,T]}(t) = 1$ for $0 \le t \le T$ and zero otherwise). $h(t)$ has compact support. If Eq. \eqref{cond} would be true then this would mean that the Fourier transform $H(\Delta)$ of the compactly supported function $h(t)$ also has compact support. This is not possible and therefore there can be no ($\Delta$-independent) protocol which results in $q=0$ for all $|\Delta| > 1/T$. Nevertheless, we will show below that for a fixed $\Delta$ there are schemes resulting in $q=0$. It is also important to examine general properties for $|\Delta| \gg 1/T$. From the previous remark (and the property that a Fourier transform of any function vanishes at infinity) it is immediately clear that we get $q \to 0$ for $|\Delta| \to \infty$. Using partial integration we can derive a series expansion of $q$ in $1/\Delta$. We use \begin{eqnarray} \int_0^T dt \dot G (t) e^{i \Delta t} &=& -\frac{i}{\Delta} \left[\dot G (t) e^{i \Delta t}\right]_0^T + \frac{i}{\Delta} \int_0^T dt \ddot G(t) e^{i \Delta t}\nonumber\\ &=& -\frac{i}{\Delta} \left[\dot G (t) e^{i \Delta t}\right]_0^T + o\left(\frac{1}{\Delta}\right). \end{eqnarray} Hence, in the case where $|\Delta| \gg 1/T$ the transition sensitivity is \begin{eqnarray} q = \frac{1}{\Delta^2} \frac{1}{4} \dot \theta (0)^2 + ... \end{eqnarray} where we have taken into account that $\theta(0)=0$ and $\theta(T)=\pi$. By repeating partial integration, we get the higher orders in this $1/\Delta$ series. If we demand \begin{eqnarray} \dot\theta(0)=\dot\theta(T)=\ddot\theta(0)=0 \label{additional} \end{eqnarray} then this first term and the next terms in the $1/\Delta$ series expansion of the transition sensitivity vanish. The first non-vanishing term is now \begin{eqnarray} q=\frac{1}{\Delta^6} \dddot \theta(0)^2 + ... 
\end{eqnarray} \subsection{Reference case: flat $\pi$ pulse} As a reference case we will consider a flat $\pi$ pulse with \begin{eqnarray} \Omega_R = -\frac{\pi}{T} \sin \alpha,\: \Omega_I = \frac{\pi}{T} \cos\alpha \end{eqnarray} with a constant phase $\alpha$. This scheme corresponds to $\theta(t) = \pi \frac{t}{T}$ and $\gamma(t)=0$. The transition sensitivity can be easily calculated \begin{eqnarray} q &=& \left|\int_{0}^{T}dt\, \frac{d}{dt} \left[ \sin\left(\frac{\pi t}{2 T}\right) \right] e^{i\Delta t} \right|^2\nonumber\\ &=& \frac{\pi ^2 \left(4 \Delta^2 T^2-4 \pi \Delta T \sin (\Delta T)+\pi ^2\right)}{\left(\pi ^2-4 \Delta^2 T^2\right)^2}\,. \label{eq_q_pi} \end{eqnarray} This transition sensitivity $q$ is plotted in Fig. \ref{fig_2_q}(a) and (b). It can be seen that $q$ is one for $\Delta = 0$ and it goes to zero for large $|\Delta|$ as is expected. The transition sensitivity for the flat $\pi$ pulse is never exactly zero. \begin{figure} \caption{\label{fig_2_q} \label{fig_2_q} \end{figure} \begin{figure} \caption{\label{fig_3_rabi} \label{fig_3_rabi} \end{figure} \subsection{Other examples of $\pi$ pulses} Let us examine two other examples of protocols. Suppose $\gamma\left(t\right)=0$, $\theta\left(t\right)=2\arcsin\left(\frac{t}{T}\right)$. Then we get \begin{equation} q=\left|\frac{\left(1-e^{i\Delta T}\right)}{\Delta T}\right|^{2}\, . \end{equation} In order to achieve $q=0$ one must have $T=\frac{2n\pi}{\Delta}$. We also set $\alpha$ constant and then the associated physical quantities for this protocol are \begin{equation} \delta_{2}\left(t\right)=0\,,\, \:\Omega_{12}\left(t\right)=\frac{2ie^{i\alpha}}{T\sqrt{1-\frac{t^{2}}{T^{2}}}}\,. \end{equation} This is a type of $\pi$ pulse. Unfortunately the Rabi frequency $\Omega_{12}$ diverges at $t=T$. To stop divergence we set \begin{eqnarray} \theta\left(t\right)=\frac{\pi}{\arcsin\left(1-\epsilon\right)}\arcsin\left(\left(1-\epsilon\right)\frac{t}{T}\right)\, \end{eqnarray} where $0<\epsilon \ll 1$. By setting $\alpha=-\pi/2$ the corresponding Rabi frequency is real (i.e. $\Omega_I(t)=0$) and \begin{eqnarray} \Omega_R (t)= \frac{\pi\left(1-\epsilon\right)}{\arcsin\left(1-\epsilon\right)T\sqrt{1-\frac{t^{2}\left(\epsilon-1\right)^{2}}{T^{2}}}}\,. \label{otherpi} \end{eqnarray} It also follows that $\delta_{2}=0$. The corresponding transition sensitivity with $\epsilon=0.01$ is also plotted in Fig. \ref{fig_2_q}(a) and (b). Note that this scheme converges for $\epsilon \to 1$ to a flat $\pi$ pulse. We also construct a scheme fulfilling Eqs. \eqref{additional} which results in a low $q$ value for large $|\Delta|$. For this scheme we set \begin{eqnarray} \theta(t) =-\frac{3 \pi t^4}{T^4} + \frac{4 \pi t^3}{T^3} \label{large_delta_scheme} \end{eqnarray} and $\gamma=0$. The corresponding transition probability can be seen in Fig. \ref{fig_2_q}(a) and (b). The transition sensitivity for this scheme is lower than that of the flat $\pi$-pulse for $\Delta T > 10$, meaning it is less sensitive to unwanted transitions. If we set $\alpha=-\pi/2$ then we get $\Omega_R(t)=\frac{12\pi t^{2}(T-t)}{T^{4}}$, $\Omega_I=0$ and $\delta_2 = 0$. \subsection{Numerically optimized scheme with $q = 0$} In the following we will present an example of a class of schemes which can be optimized to achieve a zero transition sensitivity for a fixed $\Delta$. 
We use the ansatz \begin{eqnarray} \begin{array}{rcl} \gamma(t) &=& c_0 \theta(t)\,,\\ \theta(t) &=& (\pi-c_1) t/T + c_1 t^3/T^3 \end{array} \label{scheme_num_3l} \end{eqnarray} where the parameters $c_0$ and $c_1$ were numerically calculated in order to minimize $q$ for a given $\Delta$. The result is shown in Fig. \ref{fig_2_q}(a). As it can be seen, we can construct schemes which make $q$ vanish for $|\Delta| T \ge 1.5$. $\alpha(t)$ is chosen so that the Rabi frequency is real. The corresponding Rabi frequency $\Omega_R$ and the detuning $\delta_2$ is shown in Fig. \ref{fig_4} for different values of $\Delta T$. Note that we pick the ansatz \eqref{scheme_num_3l} because it is simple. It is still possible to optimize the ansatz further for example with the goal of minimizing the maximal Rabi frequency. Moreover, the ansatz could be modified so that the Rabi frequency is zero at initial and final times. \begin{figure} \caption{\label{fig_4} \label{fig_4} \end{figure} \subsection{Comparison of the transition probability} In the following we compare the effectiveness of the different schemes. To do this we compare the exact (numerically calculated) transition probability $P_2$ for the different schemes versus $\beta$ for different values of $\Delta$. This can be seen in Fig. \ref{fig_6_p2}. From this we see that the transition sensitivity is a good indicator of a stable scheme. This is however not the only useful quantity to know about a particular scheme. We also consider the area of the pulse $A := \int_0^T dt\, \sqrt{\Omega_R^2 + \Omega_I^2}$ and its energy $E:=\hbar \int_0^T dt\, \left(\Omega_R^2 + \Omega_I^2\right)$. The values for the different schemes are shown in Table \ref{tab1}. It can be seen that the numerically optimized schemes require a higher energy than three different variations of a $\pi$ pulse. For completeness we also include the following sinusoidal adiabatic scheme \cite{Lu,Xiao} in our comparison: \begin{eqnarray} \begin{array}{rcl} \Omega_{12}\left(t\right)&=&\Omega_{0}\sin\left(\frac{\pi t}{T}\right)\,,\\[0.2cm] \delta_{2}\left(t\right)&=&-\delta_{0}\cos\left(\frac{\pi t}{T}\right)\,. \end{array} \end{eqnarray} We have chosen $\Omega_0$ so that the adiabatic scheme requires the same energy as the numerically optimized scheme. In addition, we have also optimized the $\delta_0$ to maximize the value of $P_2$ for the error-free case $\beta=0$. The energy is high enough that the adiabatic scheme results in a nearly perfect population inversion in the error-free case. Nevertheless, the numerically optimized scheme is less sensitive to unwanted transitions, i.e. the numerically optimized scheme results in a higher $P_2$ for non-zero $\beta$. \begin{table} \begin{tabular}{|c|c|c|} \hline & $A [\pi]$ & $E [\pi^{2}\hbar/T]$\\ \hline Flat $\pi$ pulse & $1$ & $1$\\ \hline Critical timing scheme$\left(\epsilon=0.01\right)$,&&\\ Eq. \eqref{otherpi} & $1$ & $1.28$\\ \hline Large $\Delta$ scheme, Eq. \eqref{large_delta_scheme} & $1$ & $\frac{48}{35}$\\ \hline Numerically optimized scheme,& &\\ Eq. 
\eqref{scheme_num_3l}& &\\ $\Delta T=1.0\:(c_{0}=1.376, c_{1}=14.927)$ & $4.79$ & $36.56$\\ $\Delta T=3.0\:(c_{0}=1.266, c_{1}=7.873)$ & $2.49$ & $10.51$\\ \hline Adiabatic Scheme & $2T\Omega_{0}\pi^{-2}$ & $\frac{1}{2}\pi^{-2}T^{2}\Omega_{0}^{2}$\\ $\Delta T=1.0$ & $5.44$ & $36.56$\tabularnewline $\Delta T=3.0$ & $2.92$ & $10.51$\tabularnewline \hline \end{tabular}\caption{\label{tab1}Pulse area $A$ and energy $E$ for different protocols.} \end{table} \begin{figure} \caption{\label{fig_6_p2} \label{fig_6_p2} \end{figure} \section{Invariant-based shortcuts in three-level systems \label{sect4}} In this section, we will review the derivation of invariant-based STA in three-level systems \cite{sta_3level} (for an application see for example \cite{Tseng_2012}). We use a different notation than \cite{sta_3level} to underline the connection between the two and three-level Hamiltonians in Eq. \eqref{eq:1} and Eq. \eqref{3level_H0} respectively (see for example \cite{three-two}). In addition, we will introduce different boundary conditions for the ancillary functions than those used in \cite{sta_3level}. We assume our three-level system has a Hamiltonian of the form \begin{equation} H_{3L}\left(t\right)=\frac{\hbar}{2}\left(\begin{array}{ccc} 0 & \Omega_{12}\left(t\right) & 0\\ \Omega_{12}\left(t\right) & 0 & \Omega_{23}\left(t\right)\\ 0 & \Omega_{23}\left(t\right) & 0 \end{array}\right) \label{3level_H0} \end{equation} where $\Omega_{12}$ and $\Omega_{23}$ are real. This could for example describe a three-level atom with two on resonance lasers (one coupling states $\left|1\right\rangle $ and $\left|2\right\rangle $ and the other coupling states $\left|2\right\rangle $ and $\left|3\right\rangle $). The Lewis-Riesenfeld invariant for this Hamiltonian is \begin{equation} I\left(t\right)=\frac{\hbar}{2}\mu\left(\begin{array}{ccc} 0 & -\sin\theta\sin\alpha & -i\cos\theta\\ -\sin\theta\sin\alpha & 0 & -\sin\theta\cos\alpha\\ i\cos\theta & -\sin\theta\cos\alpha & 0 \end{array}\right) \end{equation} where $\mu$ is a constant in units of frequency to keep $I\left(t\right)$ in units of energy. The ancillary functions $\alpha\left(t\right)$ and $\theta\left(t\right)$ satisfy \begin{eqnarray} \dot{\theta}&=&\frac{1}{2}\left(\Omega_{12}\cos\alpha-\Omega_{23}\sin\alpha\right)\,, \label{3level_diff1}\\ \dot{\alpha}&=&-\frac{1}{2}\cot\theta\left(\Omega_{23}\cos\alpha+\Omega_{12}\sin\alpha\right)\,. \label{3level_diff2} \end{eqnarray} Note the similarity with Eqs. \eqref{eq:4} and \eqref{eq:5}. This is due to the aforementioned connection between the two- and three-level Hamiltonians. The eigenstates of $I\left(t\right)$ are \begin{equation} \left|\phi_{0}\left(t\right)\right\rangle =\left(\begin{array}{c} -\sin\theta\cos\alpha\\ -i\,\cos\theta\\ \sin\theta\sin\alpha \end{array}\right) \end{equation} and \begin{equation} \left|\phi_{\pm}\left(t\right)\right\rangle =\frac{1}{\sqrt{2}}\left(\begin{array}{c} \cos\theta\cos\alpha\pm i\,\sin\alpha\\ -i\,\sin\theta\\ -\cos\theta\sin\alpha\pm i\,\cos\alpha \end{array}\right) \end{equation} with eigenvalues $\lambda_{0}=0$ and $\lambda_{\pm}=\pm1$ i.e. $I\left(t\right)\left|\phi_{n}\left(t\right)\right\rangle =\lambda_{n}\left|\phi_{n}\left(t\right)\right\rangle $ and the label $n=0,\pm$. The Lewis-Riesenfeld phases $\kappa_{n}\left(t\right)$ are $\kappa_{0}=0$ and \begin{equation} \kappa_{\pm}=\mp\int_{0}^{t}dt^{'}\left(\dot{\alpha}\cos\theta-\frac{1}{2}\left(\Omega_{12}\sin\alpha+\Omega_{23}\cos\alpha\right)\sin\theta\right). 
\end{equation} A solution of the time-dependent Schr\"odinger equation with the Hamiltonian \eqref{3level_H0} is now $|\Psi(t)\ra = |\phi_0 (t) \ra$. In order for the solution $|\Psi(t)\ra$ to evolve from the initial state $\left|1\right\rangle $ to the final state $\left|3\right\rangle $ we must impose the following boundary conditions on $\alpha$ and $\theta$: \begin{eqnarray} \theta(0)=-\frac{\pi}{2}\,,\,\theta (T)= \frac{\pi}{2}\,,\, \alpha(0)=0\,,\,\alpha(T)=\frac{\pi}{2}\,. \label{3l_cond1} \end{eqnarray} One could impose the following additional boundary conditions in order to make the Rabi frequencies have a finite limit at the initial and final times \begin{eqnarray} \dot{\alpha}(0)=0\,,\,\dot{\alpha} (T)=0\,,\, \dot{\theta}(0)\neq0\,,\,\dot{\theta} (T)\neq0\,. \label{3l_cond2} \end{eqnarray} Note that the boundary conditions given by Eqs. \eqref{3l_cond1} and \eqref{3l_cond2} are an alternative choice to the ones imposed in \cite{sta_3level}. Using Eqs. \eqref{3level_diff1} and \eqref{3level_diff2} we can calculate the Rabi frequencies \begin{eqnarray} \Omega_{12}\left(t\right)&=& 2\left(-\dot{\alpha}\tan\theta\sin\alpha+\dot{\theta}\cos\alpha\right)\,,\\ \Omega_{23}\left(t\right)&=& -2\left(\dot{\alpha}\tan\theta\cos\alpha+\dot{\theta}\sin\alpha\right)\, . \end{eqnarray} If the functions $\alpha$ and $\theta$ fulfill Eqs. \eqref{3l_cond1} and \eqref{3l_cond2}, then the corresponding Rabi frequencies will lead to full population inversion $\left|1\right\rangle\to \left|3\right\rangle$. \section{Unwanted transitions in three-level systems \label{sect5}} \subsection{Model} Now we assume that there is an unwanted coupling to a fourth level as shown in Fig. \ref{fig_1_basics}(c). Analogous to Section \ref{sect3}, we assume that the laser coupling levels $\left|2\right\rangle$ and $\left|3\right\rangle$ also unintentionally couples levels $\left|2\right\rangle$ and $\left|4\right\rangle$ as well. Hence we assume for the Rabi frequency \begin{equation} \Omega_{24}\left(t\right)=\beta e^{i\nu}\Omega_{23}\left(t\right) \end{equation} where $\beta,\nu\in\mathbb{R}$ are unknown constants and $\beta\ll1$. The Hamiltonian for this four-level system is given by \begin{equation} H\left(t\right)=\frac{\hbar}{2}\left(\begin{array}{cccc} 0 & \Omega_{12} & 0 & 0\\ \Omega_{12} & 0 & \Omega_{23} & \beta e^{-i\nu}\Omega_{23}\\ 0 & \Omega_{23} & 0 & 0\\ 0 & \beta e^{i\nu}\Omega_{23} & 0 & -2\Delta \end{array}\right) \end{equation} where $\Delta=\omega_{3}-\omega_{4}$ and $\hbar\omega_{j}$ is the energy of state $\left|j\right\rangle $. As in the previous case, one can redefine the state $\left|4\right\rangle $ to remove the phase. Hence we set $\nu=0$ in the following. Using the formalism presented in Sect. \ref{sect4}, we can construct schemes which result in full population inversion in the case of no unwanted transitions. Again, there is a lot of freedom in choosing the ancillary functions and the goal will be to find the schemes which are stable concerning these unwanted transitions. \subsection{Transition sensitivity} We once again regard this unwanted transition as a perturbation. 
To treat it as such we write the Hamiltonian as \begin{equation} H\left(t\right)=H_{0}\left(t\right)+\beta V\left(t\right) \end{equation} where \begin{equation} H_{0}\left(t\right)=\frac{\hbar}{2}\left(\begin{array}{cccc} 0 & \Omega_{12} & 0 & 0\\ \Omega_{12} & 0 & \Omega_{23} & 0\\ 0 & \Omega_{23} & 0 & 0\\ 0 & 0 & 0 & -2\Delta \end{array}\right) \end{equation} and \begin{equation} V\left(t\right)=\frac{\hbar}{2}\left(\begin{array}{cccc} 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \Omega_{23}\\ 0 & 0 & 0 & 0\\ 0 & \Omega_{23} & 0 & 0 \end{array}\right)\,. \end{equation} If $\beta=0$ then the time-dependent Schr\"{o}dinger equation for $H\left(t\right)$ has the following set of orthonormal solutions: \begin{eqnarray} \left|\psi_{0}\left(t\right)\right\rangle & =&\left(\begin{array}{c} -\sin\theta \cos\alpha\\ -i\,\cos\theta\\ \sin\theta\sin\alpha\\ 0 \end{array}\right)e^{i\,\kappa_{0}}\,, \end{eqnarray} \begin{eqnarray} \left|\psi_{1}\left(t\right)\right\rangle &=&\frac{1}{\sqrt{2}}\left(\begin{array}{c} \cos\theta\cos\alpha+i\,\sin\alpha\\ -i\,\sin\theta\\ -\cos\theta\sin\alpha+i\,\cos\alpha\\ 0 \end{array}\right)e^{i\,\kappa_{+}}\,, \end{eqnarray} \begin{eqnarray} \left|\psi_{2}\left(t\right)\right\rangle &=&\frac{1}{\sqrt{2}}\left(\begin{array}{c} \cos\theta\cos\alpha-i\,\sin\alpha\\ -i\,\sin\theta\\ -\cos\theta\sin\alpha-i\,\cos\alpha\\ 0 \end{array}\right)e^{i\,\kappa_{-}}\,, \end{eqnarray} \begin{eqnarray} \left|\psi_{3}\left(t\right)\right\rangle &=&\left(\begin{array}{c} 0\\ 0\\ 0\\ e^{i\Delta t} \end{array}\right)\,. \end{eqnarray} Using time-dependent perturbation theory similar to Section \ref{sect_q}, we get for the probability $P_3$ to end in the state $\left|3\right\rangle$ at time $t=T$ that \begin{equation} P_{3}=1-\beta^{2}Q +\mathcal{O}\left(\beta^{4}\right)\,. \end{equation} where \begin{eqnarray} Q &=&\left|\int_0^Tdt\, e^{i\Delta t} \left(\dot{\alpha}\sin\theta\cos\alpha + \dot{\theta}\cos\theta\sin\alpha\right)\right|^{2}\nonumber\\ &=&\left|\int_0^Tdt\, e^{i\Delta t} \frac{d}{dt}\left(\sin\theta\sin\alpha\right)\right|^{2}\,. \label{def_Q} \end{eqnarray} Similar to Sect. \ref{sect3}, the $Q$ quantifies how sensitive a given protocol is concerning the unwanted transition to level $\left|4\right\rangle$. As before we will call $Q$ {\it transition sensitivity} in the following and our goal will be to determine protocols or schemes which would minimize $Q$. \subsection{General properties of the transition sensitivity} We start by examining some general properties of the transition sensitivity $Q$ given by Eq. \eqref{def_Q} by noting that $Q$ is independent of the sign of $\Delta$. By taking into account the boundary conditions for $\theta(t)$ and $\alpha(t)$ we find that \begin{eqnarray} Q=1 \; \mbox{if} \; \Delta = 0\,. \end{eqnarray} Similar to Sect. \ref{sect_2level_properties}, we get by partial integration \begin{equation} Q=\left|1-i\Delta N\right|^{2}=1+2\Delta \mbox{Im}N+\Delta^{2}\left|N\right|^{2} \end{equation} where \begin{equation} N=-\int_{0}^{T}dt\, e^{i\Delta\left(t-T\right)}\sin\theta\sin\alpha\,. \end{equation} Therefore $Q\geq\left(1+\Delta \mbox{Im}\left(N\right)\right)^{2}$ and \begin{align} \left|\mbox{Im}\left(N\right)\right| & \leq\left|\int_{0}^{T}dt\,\sin\left(\Delta\left(t-T\right)\right)\sin\theta\sin\alpha\right|\nonumber\\ & \leq\int_{0}^{T}dt\,\left|\sin\left(\Delta\left(t-T\right)\right)\sin\theta\sin\alpha\right| \leq T \,. 
\end{align} Let's assume $\left|\Delta\right|T<1$ then as before we get \begin{eqnarray} Q &\ge& (1 - |\Delta| |\mbox{Im}(N)|)^2 \ge (1- |\Delta| T)^2\,. \label{bound_Q} \end{eqnarray} So $Q>0$ if $\left|\Delta\right|T<1$, i.e. a necessary condition for $Q=0$ is $T\geq\frac{1}{|\Delta|}$. Using similar arguments to the ones in Sect. \ref{sect_2level_properties}, we see that in this case as well there can be no $\Delta$-independent scheme with $Q=0$ for all $|\Delta| > 1/T$. Moreover, an approximation of $Q$ in the case of $|\Delta| T\gg1$ can be derived in a similar way as in the previously mentioned section. So we get for $|\Delta| T\gg1$ that \begin{eqnarray} Q = \frac{1}{\Delta^2} \dot\alpha (0)^2 + ... \end{eqnarray} taking into account the boundary conditions. \begin{figure} \caption{\label{fig_6_Q} \label{fig_6_Q} \end{figure} \begin{figure} \caption{\label{fig_7_num1} \label{fig_7_num1} \end{figure} \begin{figure} \caption{\label{fig_8_num2} \label{fig_8_num2} \end{figure} \subsection{Example of schemes} As a reference case we consider one of the protocols given in \cite{sta_3level}. In this protocol, the following ancillary functions are used \begin{eqnarray} \theta\left(t\right)=\epsilon-\frac{\pi}{2} \, , \, \alpha\left(t\right)=\frac{\pi t}{2T} \end{eqnarray} where $0<\epsilon\ll1$ and the only difference in boundary conditions being that now $\theta\left(T\right)=-\frac{\pi}{2}$. It should be noted that this protocol does not have perfect population transfer since the boundary conditions are not exactly fulfilled for a non-zero $\epsilon$. In \cite{sta_3level} $\epsilon=0.002$ was deemed sufficient. This protocol has the following Rabi frequencies: \begin{eqnarray} \Omega_{12}\left(t\right)&=&\frac{\pi}{T}\cot\epsilon\sin\left(\frac{\pi t}{2T}\right)\,,\nonumber\\ \Omega_{23}\left(t\right)&=&\frac{\pi}{T}\cot\epsilon\cos\left(\frac{\pi t}{2T}\right)\,. \end{eqnarray} The transition sensitivity for this scheme is shown in Fig. \ref{fig_6_Q}. Here we note that the derivation of the transition sensitivity is based on exact population transfer in the error free case. Hence it is not strictly correct to consider the transition sensitivity for this protocol. However for the purposes of comparison we include it. In the following we provide two examples of numerically optimized schemes leading to zero transition sensitivity for some range of $\Delta$. For the first scheme we use the ansatz \begin{eqnarray} \theta(t) &=& -\frac{\pi}{2} + (\pi-c_0-c_1) \frac{t}{T} + c_0 \left(\frac{t}{T}\right)^{2} + c_1 \left(\frac{t}{T}\right)^{3}\,,\nonumber\\ \alpha(t) &=& \frac{\pi}{4} \fsin{\theta(t)} + \frac{\pi}{4} \label{scheme_4l_num1} \end{eqnarray} where the parameters $c_0$ and $c_1$ were numerically calculated in order to minimize $Q$ for a given $\Delta$. Note that this ansatz automatically avoids any divergences of the corresponding physical potentials for $0 \le t \le T$. The resulting transition sensitivity $Q$ is shown in Fig. \ref{fig_6_Q}. As it can be seen, we can construct schemes which make $Q$ vanish for $|\Delta| T \ge 2.5$. The corresponding Rabi frequencies $\Omega_{12}$ and $\Omega_{23}$ are shown in Fig. \ref{fig_7_num1} for different values of $\Delta T$. 
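As an illustration of how such parameters can be found in practice, the following minimal numerical sketch (our own illustrative code, not taken from the cited references; it assumes NumPy and SciPy, and all variable names are our own) evaluates the transition sensitivity $Q$ of Eq.~\eqref{def_Q} for the ansatz \eqref{scheme_4l_num1} on a time grid and minimizes it over $c_{0}$ and $c_{1}$ for one fixed value of $\Delta$. A local optimizer of this kind may need several starting points to reach parameter values comparable to those quoted in Table~\ref{tab_2}.
\begin{verbatim}
# Minimal sketch (not the authors' code): minimize the transition
# sensitivity Q over the free parameters c0, c1 of the polynomial
# ansatz for theta(t) and alpha(t), for one fixed detuning Delta.
import numpy as np
from scipy.optimize import minimize

T = 1.0              # pulse duration (only the product Delta*T matters)
Delta = 3.0 / T      # fixed detuning for which Q is minimized

def Q(params, n=4000):
    c0, c1 = params
    t = np.linspace(0.0, T, n)
    s = t / T
    theta = -np.pi/2 + (np.pi - c0 - c1)*s + c0*s**2 + c1*s**3
    alpha = np.pi/4*np.sin(theta) + np.pi/4
    g = np.sin(theta)*np.sin(alpha)      # sin(theta(t)) sin(alpha(t))
    dg = np.gradient(g, t)               # its time derivative
    return abs(np.trapz(np.exp(1j*Delta*t)*dg, t))**2

# Nelder-Mead from a crude starting point; restarts may be required.
result = minimize(Q, x0=[0.0, 0.0], method='Nelder-Mead')
print(result.x, Q(result.x))
\end{verbatim}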
Another example of a scheme is the following \begin{eqnarray} \lefteqn{\theta\left(t\right)=-\frac{\pi}{2}-\frac{8(\pi-2d_{0})t^{4}}{T^{4}}+\frac{2t^{3}(-16d_{0}+T+7\pi)}{T^{3}}} & &\nonumber\\ & & - \frac{t^{2}(-16d_{0}+3T+5\pi)}{T^{2}}+t\,, \nonumber\\ \lefteqn{\alpha\left(t\right)=\frac{1}{2}(2\pi d_{1}+3\pi)\frac{t^{2}}{T^{2}}} & &\nonumber\\ & & +\left(\frac{1}{2}(-2\pi d_{1}-3\pi)+\frac{3\pi}{2}\right)\frac{t}{T} + d_{1}\sin\left(\frac{\pi t}{T}\right)-\pi\frac{t^{3}}{T^{3}}\nonumber\\ \label{scheme_4l_num2} \end{eqnarray} where the parameters $d_{0}$ and $d_{1}$were numerically calculated to minimize $Q$ for a given $\Delta$. $d_{0}$ was restricted to the range $0.55\leq d_{0}\leq2.5$ to avoid divergence of the Rabi frequencies for all $0 \le t \le T$. The transition sensitivity $Q$ for this scheme is shown in Fig. \ref{fig_6_Q}. It achieves $Q=0$ at $\Delta T=3$. The corresponding Rabi frequencies are shown in Fig. \ref{fig_8_num2}. \subsection{Comparison of the transition probability} In order to compare the schemes we once again look at the exact (numerically calculated) transition probability $P_{3}$ as a function of $\beta$ as in Fig. \ref{fig_9_P3}. \begin{figure} \caption{\label{fig_9_P3} \label{fig_9_P3} \end{figure} We compare the scheme of the schemes proposed in \cite{sta_3level} as a reference scheme, the numerical scheme 1 given by Eq. \eqref{scheme_4l_num1} and the numerical scheme 2 given by \eqref{scheme_4l_num2}. Once again we see that the transition sensitivity is a good indicator of a stable scheme. We also consider the area of the pulse and its energy which in this case is defined as $A := \int_0^T dt\, \sqrt{\Omega_{12}^2 + \Omega_{23}^2}$ and $E:=\hbar \int_0^T dt\, \left(\Omega_{12}^2 + \Omega_{23}^2\right)$ respectively. These values are shown for each scheme in Table \ref{tab_2}. \begin{table} \begin{tabular}{|c|c|c|} \hline & $A[\pi]$ & $E[\pi^{2}\hbar/T]$\\ \hline Scheme of \cite{sta_3level} $\left(\epsilon=0.002\right)$ & $500.00$ & $249999$\tabularnewline \hline Numerical Scheme 1, Eq. \eqref{scheme_4l_num1} & &\\ $\Delta T=1.0 \:(c_{0}=-76.546, c_{1}=49.040)$ & $6.71$ & $70.29$\tabularnewline $\Delta T=3.0 \:(c_{0}=-76.735, c_{1}=46.054)$ & $6.61$ & $73.61$\tabularnewline \hline Numerical Scheme 2, Eq. \eqref{scheme_4l_num2} & &\\ $\Delta T=1.0 \:(d_{0}=0.794, d_{1}=-15.633)$ & $24.34$ & $1171.7$\tabularnewline $\Delta T=3.0 \:(d_{0}=0.852, d_{1}=-13.204)$ & $18.65$ & $663.17$\tabularnewline \hline Adiabatic Scheme, Eq. \eqref{pot_stirap} & $\Omega_{0}T\pi^{-1}$ & $\Omega_{0}^{2}T^{2}\pi^{-2}$\\ $\Delta T=1.0$ & $8.38$ & $70.29$\tabularnewline $\Delta T=3.0$ & $8.58$ & $73.61$\tabularnewline \hline \end{tabular}\caption{Pulse area $A$ and energy $E$ for different protocols.\label{tab_2}} \end{table} For completeness we also include the following adiabatic STIRAP-like scheme in our comparison \cite{stirap}: \begin{eqnarray} \Omega_{12}&=&\Omega_{0}\sin\left(\frac{\pi t}{2T}\right)\,,\\ \Omega_{23}&=&\Omega_{0}\cos\left(\frac{\pi t}{2T}\right)\,. \label{pot_stirap} \end{eqnarray} $\Omega_{0}$ was chosen so that the adiabatic scheme has the same energy as the numerical scheme 1. Both numerically-optimized schemes result in the largest $P_3$ in Fig. \ref{fig_9_P3}(a) if $\beta \neq 0$ for $\Delta T = 1.0$. If $\Delta T = 3.0$, see Fig. \ref{fig_9_P3}(b), then both numerical-optimized schemes result in nearly full population transfer even in the case of $-0.1 < \beta < 0.1$. 
It can be seen that a full population transfer is not achieved in either case by this adiabatic scheme for $\beta=0$. \section{Discussion} In this paper, we have examined the stability of shortcuts to adiabatic population transfer in two- and three-level quantum systems against unwanted transitions. For the two-level case as well as for the three-level case, we have defined a transition sensitivity which quantifies how sensitive a given scheme is to such unwanted couplings to another level. We have compared the transition sensitivity of different schemes in both settings. We have also provided examples of shortcut schemes leading to a zero transition sensitivity in certain regimes, i.e. almost full population inversion is achieved even in the presence of unwanted transitions. This approach could be generalized even further; one could construct different shortcut schemes fulfilling additional constraints apart from a vanishing transition sensitivity, similar to \cite{Andreas2013}. This work could also be generalized to different level structures of the unwanted transitions or to multiple unwanted transition channels. In the latter case, one might expect to find that the unwanted transition with the lowest detuning would dominate. \section*{Acknowledgments} We are grateful to David Rea for useful discussions and for commenting on the manuscript. \end{document}
\begin{document} \title{Erd\"{o}s-Szekeres Partitioning Problem} \author{ Michael Mitzenmacher\thanks{Harvard University} \and Saeed Seddighin\thanks{Toyota Technological Institute at Chicago} } \maketitle \begin{abstract} In this note, we present a substantial improvement on the computational complexity of the Erd\"{o}s-Szekeres partitioning problem and review recent works on dynamic \textsf{LIS}. \end{abstract} \section{Erd\"{o}s-Szekeres Partitioning Problem}\label{sec:intro} It is well-known that any sequence of size $n$ can be decomposed into $O(\sqrt{n})$ monotone subsequences. The proof follows from a simple fact: any sequence of length $n$ contains either an increasing subsequence of length $\sqrt{n}$ or a non-increasing subsequence of length $\sqrt{n}$. Thus, one can iteratively find the longest increasing and the longest non-increasing subsequences of a sequence and take the larger one as one of the solution partitions. Next, by removing the partition from the original sequence and repeating this procedure with the remainder of the elements we obtain a decomposition into at most $O(\sqrt{n})$ partitions. The computational challenge is to do this in an efficient way. The above algorithm can be implemented in time $O(n^{1.5} \log n)$ if we use patience sorting in every iteration. Bar-Yehuda and Fogel~\cite{yehuda1998partitioning} improve the runtime down to $O(n^{1.5})$ by designing an algorithm that solves \textsf{LIS} in time $O(n + k^2)$ where the solution size is bounded by $k$. Since any comparison-based solution for \textsf{LIS} takes time $\Omega(n \log n)$, the gap for the Erd\"{o}s-Szekeres partitioning problem has been $\Omega(\sqrt{n}/\log n)$ for quite a long time~\cite{pettie2003shortest,gronlund2014threesomes}. We show in the following that using the recent work of Mitzenmacher and Seddighin~\cite{our-stoc-paper}, the Erd\"{o}s-Szekeres partitioning problem can be solved in time $\tilde O_{\epsilon}(n^{1+\epsilon})$ for any constant $\epsilon > 0$. \begin{theorem}\label{theorem:main} For any constant $\epsilon > 0$, one can in time $\tilde O_{\epsilon}(n^{1+\epsilon})$ partition any sequence of length $n$ of distinct integers into $O_{\epsilon}(\sqrt{n})$ monotone (increasing or decreasing) subsequences. \end{theorem} \begin{proof} The proof follows directly from the algorithm of Mitzenmacher and Seddighin~\cite{our-stoc-paper} for dynamic \textsf{LIS}. In their setting, we start with an empty array $a$ and at every point in time we are allowed to (i) add an element, or (ii) remove an element, or (iii) substitute an element for another. The algorithm is able to update the sequence and estimate the size of the \textsf{LIS} in time $\tilde O_{\epsilon}(|a|^{\epsilon})$ where $|a|$ is the size of the array at the time the operation is performed. The approximation factor of their algorithm is constant as long as $\epsilon$ is constant. More precisely, their algorithm estimates the size of the longest increasing subsequence within a multiplicative factor of at most $(1/\epsilon)^{O(1/\epsilon)}$. Although Mitzenmacher and Seddighin~\cite{our-stoc-paper} do not explicitly state this, it implicitly follows from their algorithm that by spending additional time proportional to the reported estimate, their algorithm is able to also find an increasing subsequence whose length equals the reported estimate. We give a more detailed discussion of this in Section~\ref{sec:dynamic}.
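In schematic form, the decomposition procedure described in the rest of this proof reads as follows (an illustrative sketch with our own naming, not code from~\cite{our-stoc-paper}; the class \texttt{DynLIS} below is a deliberately naive, exact stand-in that recomputes a longest increasing subsequence by patience sorting on every query, whereas the actual algorithm replaces it by the approximate dynamic structure discussed above).
\begin{verbatim}
import bisect

class DynLIS:
    """Exact, slow stand-in for the dynamic LIS structure: it stores the
    sequence and recomputes one LIS by patience sorting on every query."""
    def __init__(self):
        self.elems = []

    def insert(self, x):
        self.elems.append(x)

    def delete_all(self, xs):
        s = set(xs)
        self.elems = [x for x in self.elems if x not in s]

    def lis(self):
        a = self.elems
        tails, tails_idx, parent = [], [], [None] * len(a)
        for i, x in enumerate(a):
            j = bisect.bisect_left(tails, x)
            if j == len(tails):
                tails.append(x)
                tails_idx.append(i)
            else:
                tails[j] = x
                tails_idx[j] = i
            parent[i] = tails_idx[j - 1] if j > 0 else None
        if not tails_idx:
            return []
        out, i = [], tails_idx[-1]
        while i is not None:
            out.append(a[i])
            i = parent[i]
        return out[::-1]

def partition_into_monotone(a):
    inc, dec = DynLIS(), DynLIS()
    for x in a:
        inc.insert(x)        # original order: queries return an LIS of a
    for x in reversed(a):
        dec.insert(x)        # reversed order: queries return an LDS of a
    parts = []
    while inc.elems:
        up, down = inc.lis(), dec.lis()
        # take the longer monotone subsequence as the next partition
        part = up if len(up) >= len(down) else down[::-1]
        parts.append(part)
        inc.delete_all(part) # remove its elements from both instances
        dec.delete_all(part)
    return parts

print(partition_into_monotone([3, 1, 4, 2, 6, 5]))  # [[1, 2, 5], [3, 4, 6]]
\end{verbatim}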
Given a sequence of length $n$ with distinct numbers, we use the algorithm of Mitzenmacher and Seddighin~\cite{our-stoc-paper} to decompose it into $O_{\epsilon}(\sqrt{n})$ monotone subsequences in time $\tilde O_{\epsilon}(n^{1+\epsilon})$. To do so, we initialize two instances of their algorithm that keep an approximation to the longest increasing subsequence and the longest decreasing subsequence of the array. More precisely, in the first instance, we insert all elements of the array exactly the same way they appear in our sequence and in the second instance we insert the elements in the reverse order. Thus the dynamic algorithm for the second instance always maintains an approximation to the longest decreasing subsequence of our array. In every iteration, we estimate the size of the longest increasing and longest decreasing subsequences of the array via the algorithm of Mitzenmacher and Seddighin~\cite{our-stoc-paper}. We then choose the larger one and ask the algorithm to give us the sequence corresponding to the solution reported. Finally, we remove the elements from both instances of the dynamic algorithm and repeat the same procedure for the remainder of the elements. The total runtime of our algorithm is $\tilde O_{\epsilon}(n^{1+\epsilon})$ since we insert $n$ elements into each of the instances and then remove $n$ elements, which amounts to $2n$ operations per instance, each performed in time $\tilde O_{\epsilon}(n^{\epsilon})$. Moreover, because at every point in time the larger of the estimates we receive from the two dynamic algorithms is at least a constant fraction of the length of the actual longest monotone subsequence, we repeat this procedure at most $O_{\epsilon}(\sqrt{n})$ times. Therefore, we decompose the sequence into $O_{\epsilon}(\sqrt{n})$ monotone subsequences. \end{proof} \begin{remark} The constant factor hidden in the $O$ notation for the number of partitions is optimal in neither the algorithm of Theorem~\ref{theorem:main} nor the previous algorithm of~\cite{yehuda1998partitioning} nor the simple greedy algorithm that runs patience sorting in every step. \end{remark} \section{Subsequent Work} Since the algorithm of Mitzenmacher and Seddighin~\cite{our-soda-paper} has constant approximation factor, in order to make sure the number of partitions remains $O(\sqrt{n})$, one needs to set $\epsilon$ to a constant and therefore the gap between their runtime of $\tilde O(n^{1+\epsilon})$ and the lower bound of $\Omega(n \log n)$ remains polynomial. Two independent subsequent works further tighten the gap. Kociumaka and Seddighin~\cite{saeednew} improve the gap to subpolynomial by presenting a dynamic algorithm with approximation factor $1-o(1)$ and update time $O(n^{o(1)})$. Gawrychowski and Janczewski~\cite{gawrychowski2020fully} further tighten the gap to polylogarithmic by obtaining a similar algorithm with polylogarithmic update time (with polynomial dependence on $1/\epsilon$). \section{The Dynamic Algorithm of Mitzenmacher and Seddighin~\cite{our-stoc-paper}}\label{sec:dynamic} In this section, we present the high-level ideas of the dynamic algorithm of Mitzenmacher and Seddighin for \textsf{LIS} and explain why, using this algorithm, we can also find the increasing subsequence corresponding to the reported solution size. Their algorithm is based on the grid packing technique explained in Section~\ref{sec:grid}. We then discuss the dynamic algorithm in Section~\ref{sec:results-approach}.
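The decomposition loop used in the proof of Theorem~\ref{theorem:main} has the same shape as the classical greedy baseline of Section~\ref{sec:intro}. The following sketch (in Python; the function names are ours and it is only an illustration, not part of the algorithms above) shows that loop with patience sorting in place of the dynamic data structure, so it runs in the baseline $O(n^{1.5}\log n)$ time.
\begin{verbatim}
import bisect

def lis(seq):
    # Patience sorting with predecessor links: one longest strictly
    # increasing subsequence of seq, in O(n log n) time.
    tails_vals, tails_idx, prev = [], [], [None] * len(seq)
    for i, x in enumerate(seq):
        k = bisect.bisect_left(tails_vals, x)
        if k == len(tails_vals):
            tails_vals.append(x); tails_idx.append(i)
        else:
            tails_vals[k] = x; tails_idx[k] = i
        prev[i] = tails_idx[k - 1] if k > 0 else None
    out, i = [], tails_idx[-1] if tails_idx else None
    while i is not None:
        out.append(seq[i]); i = prev[i]
    return out[::-1]

def erdos_szekeres_partition(seq):
    # Greedy decomposition into O(sqrt(n)) monotone subsequences
    # (the values are assumed to be distinct).
    remaining, parts = list(seq), []
    while remaining:
        inc = lis(remaining)
        dec = [-x for x in lis([-x for x in remaining])]
        part = inc if len(inc) >= len(dec) else dec
        parts.append(part)
        chosen = set(part)
        remaining = [x for x in remaining if x not in chosen]
    return parts
\end{verbatim}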
\subsection{Background: Grid Packing}\label{sec:grid} Grid packing can be thought of as a game between us and an adversary. In this problem, we have a table of size $m \times m$. Our goal is to introduce a number of segments on the table. Each segment either covers a consecutive set of cells in a row or in a column. A segment $A$ \textit{precedes} a segment $B$ if \textbf{every} cell of $A$ is strictly higher than every cell of $B$ and also \textbf{every} cell of $A$ is strictly to the right of every cell of $B$. Two segments are \textit{non-conflicting}, if one of them precedes the other one. Otherwise, we call them \textit{conflicting}. The segments we introduce can overlap and there is no restriction on the number of segments or the length of each segment. However, we would like to minimize the maximum number of segments that cover each cell. \input{figs/crossing} After we choose the segments, an adversary puts a non-negative number on each cell of the table. The score of a subset of cells of the table would be the sum of their values and the overall score of the table is the maximum score of a path of length $2m-1$ from the bottom-left corner to the top-right corner. In such a path, we always either move up or to the right. The score of a segment is the sum of the numbers on the cells it covers. We obtain the maximum sum of the scores of a non-conflicting set of segments. The score of the table is an upper bound on the score of any set of non-conflicting segments. We would like to choose segments so that the ratio of the score of the table and our score is bounded by a constant, no matter how the adversary puts the numbers on the table. More precisely, we call a solution $(\alpha,\beta)$-approximate, if at most $\alpha$ segments cover each cell and it guarantees a $1/\beta$ fraction of the score of the table for us for any assignment of numbers to the table cells. \input{figs/grid} Mitzenmacher and Seddighin~\cite{our-stoc-paper} prove the following theorem: For any $m \times m$ table and any $0 < \kappa < 1$, there exists a grid packing solution with guarantee $(O_{\kappa}(m^\kappa \log m),O(1/\kappa))$. That is, each cell is covered by at most $O_{\kappa}(m^\kappa \log m)$ segments and the ratio of the table's score over our score is bounded by $O(1/\kappa)$ in the worst case. \begin{theorem}\label{theorem:grid packing} [from~\cite{our-stoc-paper}] For any $0 < \kappa < 1$, the grid packing problem on an $m \times m$ table admits an $(O_{\kappa}(m^\kappa \log m),O(1/\kappa))$-approximate solution. \end{theorem} There is a natural connection between grid packing and \textsf{LIS}. Let us consider an array $a$ of length $n$. We assume for the sake of this example that all the numbers of the array are distinct and are in range $[1,n]$. In other words, $a$ is a permutation of numbers in $[n]$. We map the array to a set of points on the 2D plane by putting a point at $(i,a_i)$ for every position $i$ of the array. \input{figs/lis-grid} For a fixed $m < n$, divide the plane into an $m \times m$ grid where each row and column contains $n/m$ points. Also, we fix a longest increasing subsequence as the solution. The number on each cell of the grid would be equal to the contribution of the elements in that grid cell to the fixed longest increasing subsequence. (We emphasize that the number is {\em not} the longest increasing subsequence inside the cell, but the contribution to the fixed longest increasing subsequence only.) 
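To make the connection concrete, the following sketch (in Python, with names of our own choosing; it assumes that $m$ divides $n$ and that $a$ is a permutation of $\{0,\dots,n-1\}$) computes the cell weights induced by a fixed longest increasing subsequence and the resulting score of the table, i.e. the best score of an up/right path from the bottom-left cell to the top-right cell.
\begin{verbatim}
def grid_score(a, m, lis_positions):
    # Each point (i, a[i]) is placed in the grid cell whose column is
    # determined by the position i and whose row by the value a[i], so
    # every grid row and column holds exactly n/m points.
    n = len(a)
    b = n // m
    cell = [[0] * m for _ in range(m)]
    for i in lis_positions:          # contribution of the fixed LIS only
        cell[a[i] // b][i // b] += 1
    # Score of the table: best path of up/right moves from cell (0, 0)
    # to cell (m-1, m-1), summing the cell weights along the way.
    dp = [[0] * m for _ in range(m)]
    for r in range(m):
        for c in range(m):
            best = max(dp[r - 1][c] if r else 0, dp[r][c - 1] if c else 0)
            dp[r][c] = cell[r][c] + best
    return dp[m - 1][m - 1]
\end{verbatim}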
It follows that the score of the grid is exactly equal to the size of the longest increasing subsequence. Let us assume that the score of each segment is available. To approximate the score of the grid (which equals the size of the \textsf{LIS}), we find the largest score we can obtain using non-conflicting segments by dynamic programming. The last observation, which gives us a speedup for \textsf{LIS}, is the following: instead of using the score of each segment (which we do not know), we use the size of the \textsf{LIS} for each segment as an approximate value for its score. The \textsf{LIS} of each segment can be computed or approximated in sublinear time since each segment has a sublinear number of points. This quantity is clearly an upper bound on the score of each segment but can be used to construct a global solution for the entire array. \input{figs/lis-grid2} \subsection{Dynamic Algorithm for \textsf{LIS}}\label{sec:results-approach} We refer the reader to previous work~\cite{our-stoc-paper,our-soda-paper} for discussions on how to use grid packing for approximating \textsf{LIS}. For the dynamic algorithm, we consider the point-based representation of the problem. That is, we represent the input as points on the 2D plane where a point $(x,y)$ means the $x$'th element of the sequence has value $y$. This enables us to construct a grid where the rows and columns of the grid evenly divide the points. Next, we use the grid packing technique and, after making the segments, we construct a partial solution for each segment that keeps an approximation to the \textsf{LIS} of the points covered by that segment. Thus, every time a change is made, our algorithm has to update the solution for all segments that cover the modified point. Theorem~\ref{theorem:grid packing} implies that the number of such segments can be as small as $\tilde O(n^{\kappa})$ for any constant $\kappa > 0$. Moreover, since the number of elements covered by each segment is sublinear, the total update time remains sublinear. In order to update the value of the longest increasing subsequence, Mitzenmacher and Seddighin prove that by doing a DP on the values of the partial solutions for the segments, we can obtain a constant fraction of the solution size for the entire sequence. (This essentially follows from the guarantee of Theorem~\ref{theorem:grid packing}.) They prove that the runtime of the DP depends only on the dimensions of the grid which, by proper construction, results in a sublinear-time algorithm. Moreover, they show that recursing on this idea leads to arbitrarily small update time $(\tilde O_{\epsilon}(n^\epsilon))$ for any constant $\epsilon > 0$. Since this approach loses a constant factor in every recursive call, their approximation factor is $\epsilon^{O(1/\epsilon)}$. It follows from their technique that after reporting the estimated value of the solution, we can also determine the corresponding sequence in time proportional to its size. More precisely, after using DP to construct a global solution based on the partial solutions of the segments, we can find out which segments contribute to such a solution and recursively recover the corresponding increasing subsequences of the relevant segments. To this end, in addition to the DP table which we use for constructing a global solution, we also store which segments contribute to such a solution. This way, the runtime required for determining the corresponding increasing subsequence is proportional to the size of the solution. \end{document}
\begin{document} \graphicspath{{images/}} \title{Column Generation \ for Real-Time Ride-Sharing Operations} \begin{abstract} This paper considers real-time dispatching for large-scale ride-sharing services over a rolling horizon. It presents {\sc RTDARS}{} which relies on a column-generation algorithm to minimize wait times while guaranteeing short travel times and service for each customer. Experiments using historic taxi trips in New York City for instances with up to 30,000 requests per hour indicate that the algorithm scales well and provides a principled and effective way to support large-scale ride-sharing services in dense cities. \end{abstract} \section{Introduction} In the past decade, commercial ride-hailing services such as Didi, Uber, and Lyft have decreased reliance on personal vehicles and provided new mobility options for various population segments. More recently, ride-sharing has been introduced as an option for customers using these services. Ride-sharing has the potential for significant positive impact since it can reduce the number of cars on the roads and thus congestion, decrease greenhouse emissions, and make mobility accessible to new population segments by decreasing trip prices. However, the algorithms used by commercial ride-sharing services rarely use state-of-the-art techniques, which reduces the potential positive impact. Recent research by Alonso-Mora et al. \cite{Alonso-Mora462} has shown the benefits of more sophisticated algorithms. Their algorithm uses shareability graphs and cliques to generate all possible routes and a MIP model to select the routes. They impose significant constraints on waiting times (e.g., 420 seconds), which reduces the potential riders to consider for each route at the cost of rejecting customers. This paper considers large-scale ride-sharing services where {\em customers are always guaranteed a ride}, in contrast to prior work. The Real-Time Dial-A-Ride System ({\sc RTDARS}{}) divides the days into short time periods called epochs, batches requests in a given epoch, and then schedules customers to minimize average waiting times. {\sc RTDARS}{} makes a number of modeling and solving contributions. At the modeling level, {\sc RTDARS}{} has the following innovations: \begin{enumerate}[wide, labelindent=5pt] \item {\sc RTDARS}{} follows a Lagrangian approach, relaxing the constraint that all customers must be served in the static optimization problem of each epoch. Instead, {\sc RTDARS}{} associates a penalty with each rider, representing the cost of not serving the customer. \item To balance the minimization of average waiting times and ensure that the waiting time of every customer is reasonable, {\sc RTDARS}{} increases the penalty of an unserved customer in the next epoch, making it increasingly harder not to serve the waiting rider. \item {\sc RTDARS}{} exploits a key property of the resulting formulation to reduce the search space explored for each epoch. \item To favor ride-sharing, {\sc RTDARS}{} uses the concept of virtual stops used in the RITMO project \cite{RITMO} and being adopted by ride-sourcing services. \end{enumerate} {\sc RTDARS}{} solves the static optimization problem for each epoch with a column-generation algorithm based on the three-index MIP formulation \cite{Cordeau2007}. The main innovation here is the pricing problem which is organized as a series of waves, first considering all the insertions of a single customer, before incrementally adding more customers. 
{\sc RTDARS}{} was evaluated on historic taxi trips from the New York City Taxi and Limousine Commission \cite{nycdata}, which contains large-scale instances with more than 30,000 requests an hour. The results show that {\sc RTDARS}{} can provide service guarantees while improving the state-of-the-art results. For instance, for a fleet of 2,000 vehicles of capacity 4, {\sc RTDARS}{} obtains an average wait of 2.2 minutes and an average deviation from the shortest path of 0.62 minutes. The results also show that large-occupancy vehicles (e.g., 8-passenger vehicles) provide additional benefits in terms of waiting times with negligible increases in in-vehicle time. {\sc RTDARS}{} is also shown to generate a small fraction of the potential columns, explaining its efficiency. The Lagrangian modeling also helps in reducing computation times significantly. The rest of this paper is organized as follows. Section \ref{section-related} presents the related work in more detail. Section \ref{section-online} describes the real-time setting. Section \ref{section-static} specifies the static problem and gives the MIP formulation. Section \ref{section-cg} describes the column generation. Section \ref{section-rt} specifies the real-time operations. Section \ref{section-results} presents the experimental results and Section \ref{section-conclusion} concludes the paper. \section{Related Work} \label{section-related} Dial-a-ride problems have been a popular topic in operations research for a long time. Cordeau and Laporte \cite{Cordeau2007} provided a comprehensive review of many of the popular formulations and the starting point of {\sc RTDARS}{}'s column generation is their three-index formulation. Constraint programming and large neighborhood search were also proposed for dial-a-ride problems (e.g., \cite{Jain2011} \cite{Berbeglia2012}). Progress in communication technologies and the emergence of ride-sourcing and ride-sharing services have stimulated further research in this area. Rolling horizons are often used to batch requests and were used in taxi pooling previously \cite{stars, scalable-taxi}. In addition, stochastic scenarios along with waiting and reallocation strategies have been previously explored in \cite{Bent2007,scenariopvh}. Bertsimas, Jaillet, and Martin \cite{Bertsimas2018OnlineVR} explored the taxi routing problem (without ride-sharing) and introduced a ``backbone'' algorithm which increases the sparsity of the problem by computing a set of candidate paths that are likely to be optimal. Alonso-Mora et al. proposed an anytime algorithm which uses cliques to generate vehicle paths combined with a vehicle rebalancing step to move vehicles towards demand \cite{Alonso-Mora462}. Their ``results show that 2,000 vehicles (15\% of the taxi fleet) of capacity 10 or 3,000 of capacity 4 can serve 98\% of the demand within a mean waiting time of 2.8 min and mean trip delay of 3.5 min.'' \cite{Alonso-Mora462}. Both \cite{Alonso-Mora462} and \cite{Bertsimas2018OnlineVR} use hard time windows to reject riders when they cannot serve them quickly enough (e.g., 420 seconds in the aforementioned results). This decision significantly reduces the search space as only close riders can be served by a vehicle. In contrast, {\sc RTDARS}{} provides service guarantees for all riders, while still reducing the search space through a Lagrangian reformulation. The results show that {\sc RTDARS}{} is capable of providing these guarantees while improving prior results in terms of average waiting times. 
Indeed, for 2,000 vehicles of capacity 4, {\sc RTDARS}{} provides an average waiting time of 2.2 minutes with a standard deviation of 1.24 and a mean trip deviation of 0.62 minutes (standard deviation 1.13). For 3,000 vehicles of capacity 4, the average waiting time is further reduced to 1.81 minutes with a standard deviation of 1.03 and an average trip deviation of 0.23 minutes. \section{Overview of the Approach} \label{section-online} {\sc RTDARS}{} divides time into epochs, e.g., time periods of 30 seconds. During an epoch, {\sc RTDARS}{} performs two tasks: It batches incoming requests and it solves the epoch optimization problem for all unserved customers from prior epochs. The epoch optimization takes, as inputs, these unserved customers and their penalties, as well as the {\em first} stop of each vehicle after the start of the epoch: Vehicle schedules prior to this stop are committed since, for safety reasons, {\sc RTDARS}{} does not allow a vehicle to be re-routed once it has departed for its next customer. These first stops are called {\em departing stops} in this paper. All customers served before and up to the departing stops of the vehicles are considered served. All others, even if they were assigned a vehicle in the prior epoch optimization, are considered unserved. Once the epoch is completed, a new schedule and a new set of requests are available. The schedule commits the vehicle routes for the entire next epoch and determines their next departing stops. The customer penalties are also updated to make it increasingly harder not to serve them. {\sc RTDARS}{} then moves to the next epoch. \section{The Static Problem} \label{section-static} This section defines and presents the static (generalized) dial-a-ride problem solved for each epoch. its objective is to schedule a set of requests on a given set of vehicles while ensuring that no customer deviates too much from their shortest trip time. The inputs consist primarily of the vehicle and request data. The set of vehicles is denoted by $V$ and each vehicle $v \in V$ is associated with a tuple $(u^v_0, w^v_0, I_v, T_v^B, T_v^E, Q_v)$, where $u^v_0$ is the time the vehicle arrives at its {\em departing stop} for the epoch, $w^v_0$ is the number of passengers currently in the vehicle, $I_v$ is the set of dropoff requests for on-board passengers, $T^B_v$ is the vehicle start time, $T^E_v$ is the vehicle end time, and $Q_v$ is the capacity of the vehicle. In other words, a vehicle $v$ can only insert new requests after time $u^v_0$ and it must serve the dropoffs in $I_v$. The request data is given in terms of a complete graph $\mathcal{G} = (\mathcal{N}, \mathcal{A})$, which contains the nodes for each possible pickup and delivery. There are five types of nodes: the pickup nodes $P = \{1, \dots n\}$, their associated dropoff nodes $D = \{n+1, \dots 2n\}$, the dropoff nodes $I = \cup_{v \in V} I_v$ of the passengers inside the vehicles, the source~$0$, and the sink~$s$ (the last node in terms of indices). Each node $i$ is associated with a number of people~$q_i$ to pick up ($q_i > 0$) or drop off ($q_i < 0$) and the time $\Delta_i \geq 0$ it takes to perform them. If $i \in P$, then the corresponding delivery node is $n + i$ and $q_i = - q_{n+i}$. Also, $q_i$ and $\Delta_i$ are zero for the source and the sink. 
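For illustration only (the type and field names below are ours, not part of the formulation), the vehicle and node data just introduced can be summarized as follows.
\begin{verbatim}
from dataclasses import dataclass
from typing import List

@dataclass
class Vehicle:
    # The tuple (u^v_0, w^v_0, I_v, T^B_v, T^E_v, Q_v) of a vehicle v.
    departing_time: float         # u^v_0: arrival time at the departing stop
    onboard: int                  # w^v_0: passengers currently in the vehicle
    onboard_dropoffs: List[int]   # I_v: dropoff nodes still to be served
    start_time: float             # T^B_v: vehicle start time
    end_time: float               # T^E_v: vehicle end time
    capacity: int                 # Q_v

@dataclass
class Node:
    # A node of the complete graph G = (N, A).
    load_change: int              # q_i: > 0 for pickups, < 0 for dropoffs,
                                  #      0 for the source and the sink
    service_time: float           # Delta_i: time needed for the (un)boarding
\end{verbatim}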
Each node $i \in P$ is associated with a request, which is a tuple of the form $(e_i,o_i,d_i,q_i)$ where $e_i$ is the earliest possible pickup time, $o_i$ is the pickup location, $d_i$ is the dropoff location, and $q_i$ is the number of passengers. Every request~$i$ in $I$ is associated with the time~$u^P_i$ on which the request was picked up. Every request~$i \in P \cup I$ is associated with the shortest time~$t_i$ from the request origin to its destination. Finally, the input contains a matrix $(t_{i,j})_{(i,j) \in \mathcal{A}}$ of travel times from any node $i$ to any node $j$ satisfying the triangle inequality, the constants~$\alpha$ and $\beta$ which constrain the deviation from the shortest path, and the penalty~$p_i$ of not serving the request~$i \in P$. \begin{figure} \caption{The Static Formulation of the Dial-A-Ride Problem.} \label{model:static} \label{model:static_obj} \label{model:static_constr:allserved} \label{model:static_constr:flow} \label{model:static_constr:source} \label{model:static_constr:sink} \label{model:static_constr:droppick} \label{model:static_constr:dropoffs} \label{model:static_constr:vstart} \label{model:static_constr:voperation1} \label{model:static_constr:voperation2} \label{model:static_constr:bounds} \label{model:static_constr:traveltime} \label{model:static_constr:traveltime_passengers} \label{model:static_constr:calccapacity} \label{model:static_constr:capcity} \label{model:static_constr:domain} \label{fig:static} \end{figure} A MIP model for the static problem is presented in Figure~\ref{fig:static}. The MIP variables are as follows: $u^v_i$ represents the time at which vehicle $v$ arrives at node $i$, $w^v_i$ the number of people in vehicle $v$ when $v$ leaves node $i$, $x^v_{ij}$ denotes whether edge $(i,j)$ is used by vehicle $v$, and $z_i$ captures whether request $i \in P$ is served. Objective \eqref{model:static_obj} balances the minimization of wait times for every pickups with the penalties incurred by unserved riders. Note that the wait times for riders in $I$ are not included in the objective because these riders are already in vehicles: only the constraints on their deviations must be satisfied. Constraints~\eqref{model:static_constr:allserved} ensure that only one vehicle serves each request and that, if the request is not served, $z_i$ is set to 1 to activate the penalty in the objective. Constraints~\eqref{model:static_constr:flow} are flow balance constraints. Constraints~\eqref{model:static_constr:source} and \eqref{model:static_constr:sink} are flow constraints for the source and the sink. Constraints~\eqref{model:static_constr:droppick} ensure that every request is dropped off by the same vehicle that picks it up. Constraints~\eqref{model:static_constr:dropoffs} ensure that every passenger currently in a vehicle is dropped off. Constraints~\eqref{model:static_constr:vstart} define the arrival times at the nodes. Constraints~\eqref{model:static_constr:voperation1} and \eqref{model:static_constr:voperation2} ensure that the vehicle is operational during its working hours. Constraints~\eqref{model:static_constr:bounds} ensure that each rider is picked up no earlier than its lower bound. Constraints~\eqref{model:static_constr:traveltime} ensure that the travel time of each served passenger does not deviate too much from the shortest path between its origin and destination. 
Passengers are allowed to spend either $\alpha \cdot t_i$ (a multiple of the shortest trip time) or $\beta + t_i$ (a constant deviation from the shortest trip time) traveling in the vehicle, whichever is larger. Constraints~\eqref{model:static_constr:traveltime_passengers} do the same for passengers already in a vehicle. Constraints~\eqref{model:static_constr:calccapacity} define the vehicle capacities. Lastly, constraints~\eqref{model:static_constr:capcity} ensure that the vehicle capacities are not exceeded. Constraints~\eqref{model:static_constr:vstart} and \eqref{model:static_constr:calccapacity} can be linearized using a Big $M$ formulation. The following theorem provides a way to prune the search space significantly. It shows that, in an optimal solution, a rider cannot be picked up by a vehicle $v$ if the smallest possible wait time incurred using $v$ is greater than her penalty. \begin{theorem} A feasible solution where rider $l$ is assigned to vehicle $v$ such that $u^{v}_0 + t_{0,l} - e_l > p_l$ is suboptimal. \label{thm:1} \end{theorem} \begin{proof} Suppose that there exists a feasible solution~{\it (I)} that serves a passenger $l$ such that $u^{v}_0 + t_{0,l} - e_l > p_l$. Let $r$ be the route of vehicle $v$ (i.e., a sequence of edges in $\mathcal{A}$). Removing the pickup and dropoff of rider~$l$ from route~$r$ produces a new feasible route~$\hat{r}$ since the deviation time cannot increase by the triangle inequality and the number of riders in $v$ decreases. Solution~{\it(II)} is derived from solution~{\it (I)} by replacing the route~$r$ by route~$\hat{r}$ and fixing $z_l$ to 1. Using $\hat{u}$ and $\hat{z}$ to denote the variables of solution~{\it (II)}, the cost $C_{{\it (II)}}$ of solution~{\it (II)} is: \begin{subequations} \begin{align} C_{{\it (II)}} &= \sum_{i \in P\setminus\{l\}} \sum_{v \in V} (\hat{u}^v_i - e_i) + \sum_{i \in P \setminus\{l\}} p_i \hat{z}_i + p_l \label{thm1:sol2}\\ &< \sum_{i \in P\setminus\{l\}} \sum_{v \in V} (\hat{u}^v_i - e_i) + \sum_{i \in P \setminus\{l\}} p_i \hat{z}_i + u^{v}_0 + t_{0,l} - e_l \label{thm1:hyp}\\ &\leq \sum_{i \in P\setminus\{l\}} \sum_{v \in V} (u^v_i - e_i) + \sum_{i \in P \setminus\{l\}} p_i \hat{z}_i + u^v_l - e_l \label{thm1:triang}\\ &= \sum_{i \in P} \sum_{v \in V} (u^v_i - e_i) + \sum_{i \in P} p_i z_i = C_{(I)} \label{thm1:sol1} \end{align} \end{subequations} Equality~\eqref{thm1:sol2} is just the definition of the objective of solution~{\it (II)}. Inequality~\eqref{thm1:hyp} is induced by the hypothesis. Inequality~\eqref{thm1:triang} is induced by the triangle inequality on the travel times. Equality~\eqref{thm1:sol1} simply regroups the terms to obtain the objective of solution~{\it (I)}. Solution~{\it (I)} is thus suboptimal. \end{proof} \section{The Column-Generation Algorithm} \label{sec:algorithm} \label{section-cg} This section presents the column-generation algorithm, starting with the master problem before presenting the pricing subproblem, and the specifics of the column-generation process. Upon completion of the column generation, {\sc RTDARS}{} solves a final MIP that imposes integrality constraints on the master problem variables. \paragraph{The Master Problem} The restricted master problem (RMP), presented in Figure~\ref{fig:master}, selects a route for each vehicle. In order for a route to be assigned to a vehicle, the route must contain dropoffs for every current passenger of that vehicle.
The set of routes is denoted by $R$ and its subset of routes that can be assigned to vehicle $v$ is denoted $R_v$. The variables in the master problem are the following: $y_r \in [0,1]$ is set to 1 if potential route $r$ is selected for use and variable $z_{i} \in [0,1]$ is set to 1 if request $i$ is not served by any of the selected routes. The constants are as follows: $c_r$ is the sum of the wait time incurred by customers served by route $r$, $p_{i}$ is the cost of not scheduling request $i$ for this period, and $a_{i}^{r}=1$ if request $i$ is served by route $r$. The objective minimizes the waiting times incurred by all customers on each route and the penalties for the customers not scheduled during the current period. Constraints~\eqref{model:master_constr:allserved} ensure that $z_{i}$ is set to 1 if request $i$ is not served by any of the selected routes and constraints~\eqref{model:master_constr:oneroute} ensure that only one route is selected per vehicle. The dual variables associated with each constraint are specified in between parentheses next to the constraint in the model. \begin{figure} \caption{The Master Problem Formulation.} \label{model:master} \label{model:master_obj} \label{model:master_constr:allserved} \label{model:master_constr:oneroute} \label{model:master_constr:domainz} \label{model:master_constr:domainy} \label{fig:master} \end{figure} \paragraph{The Pricing Problem} The routes for each vehicle~$v$ are generated via a pricing problem depicted in Figure~\ref{fig:pricing}. The pricing problem~\eqref{model:sub} is defined for a given vehicle~$v$. Theorem~\ref{thm:1} makes it possible to remove some passengers from the set~$P$ to obtain the subset~$P_v$ and thus a new graph~$\mathcal{G}_v = (\mathcal{N}_v, \mathcal{A}_v)$. The pricing problem minimizes the reduced cost of the route being generated. Constraints \eqref{model:sub_constr:flow} -- \eqref{model:sub_constr:domain} correspond to constraints \eqref{model:static_constr:flow} -- \eqref{model:static_constr:domain} in the static problem. 
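For illustration, the pruning of Theorem~\ref{thm:1} used to build $P_v$ can be sketched as follows (in Python; the argument names are ours, and the data is assumed to be given as dictionaries indexed by pickup node).
\begin{verbatim}
def restricted_requests(u0, t_from_departing_stop, earliest, penalty):
    # P_v of Theorem 1: keep pickup l only when u^v_0 + t_{0,l} - e_l <= p_l,
    # i.e. when the smallest wait incurred with this vehicle does not
    # already exceed the penalty for leaving the request unserved.
    return [l for l in earliest
            if u0 + t_from_departing_stop[l] - earliest[l] <= penalty[l]]
\end{verbatim}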
\begin{figure} \caption{The Pricing Problem Formulation for Vehicle~$v$.} \label{model:sub} \label{model:sub_obj} \label{model:sub_constr:flow} \label{model:sub_constr:source} \label{model:sub_constr:sink} \label{model:sub_constr:droppick} \label{model:sub_constr:dropoffs} \label{model:sub_constr:vstart} \label{model:sub_constr:voperation1} \label{model:sub_constr:voperation2} \label{model:sub_constr:bounds} \label{model:sub_constr:traveltime} \label{model:sub_constr:traveltime_passengers} \label{model:sub_constr:calccapacity} \label{model:sub_constr:capcity} \label{model:sub_constr:domain} \label{fig:pricing} \end{figure} \paragraph{The Column Generation} \newcommand{\tpmod}[1]{{\@displayfalse\pmod{#1}}} \let\oldnl\nl \newcommand{\nonl}{\renewcommand{\nl}{\let\nl\oldnl}} \def\textsc{Release}{\textsc{Release}} \begin{algorithm}[!t] \caption{\textsc{ColumnGeneration}} \label{alg:cg} \setcounter{AlgoLine}{0} \DontPrintSemicolon \While{true} { $\mathcal{C} \gets ${\sc GenerateColumns}()\; \If{$\mathcal{C} = \emptyset$} { break; \; } Solve RMP after adding $\mathcal{C}$ \; } \SetKwFunction{FMain}{{\sc GenerateColumns}} \SetKwFunction{FSUB}{{\sc GenerateSizedColumns}} \SetKwProg{Fn}{Function}{:}{} \label{a1:fundecl} \nonl \Fn{\FMain{}} { $k \gets 1$ \; \While{$k \leq |P|$} { $\mathcal{C} \gets $ {\sc GenerateSizedColumns}($k$)\; \If{$\mathcal{C} \neq \emptyset$} { \Return $\mathcal{C}$ \; } \Else { $k$++ \; } } } \nonl \Fn{\FSUB{k}} { $Q \gets \{ R \subseteq P \mid |R| = k \}$ \; \Forall{$v \in V$ ordered by decreasing $\sigma_v$} { $R_v \gets \argmin_{R \in Q}$ {\sc pricing}($v,R$)\; \If{$\mbox{{\sc pricing}}(v,R_v) < 0$} { $Q \gets \{ R \in Q \mid R \cap R_v = \emptyset \}$ \; } } \Return $\{ \mbox{\sc route}(v,R_v) | v \in V ~\&~ \mbox{{\sc pricing}}(v,R_v) < 0 \} $ \; } \end{algorithm} In traditional column generation for dial-a-ride problems, the pricing problem is formulated as a resource-constrained shortest-path problem and solved using dynamic programming. However, the minimization of waiting times, i.e., $\sum_{i \in P} (u_i - e_i)$, is particularly challenging, as it cannot be formulated as a classical resource-constrained shortest-path problem. One option is to discretize time and use time-expanded graphs. However, this raises significant computational challenges for large instances. As a result, this paper solves the pricing problem through an anytime algorithm that takes into account the real-time constraints {\sc RTDARS}{} operates under. The column-generation algorithm is specified in Algorithm \ref{alg:cg}: {\em It generates multiple columns with disjoint sets of customers}. In the algorithm, function {\sc Pricing}($v,R$) solves the pricing problem for a vehicle $v$ and a set $R$ of requests, while {\sc Route}($v,R$) returns the optimal route for a vehicle $v$ and a set of requests $R$. Lines 1--5 constitute the high-level column-generation procedure: It alternates the generation of columns and the solving of the master problem with the generated columns until no more columns can be generated. It proceeds in waves, first generating columns with one customer before progressively increasing the number of considered requests. Procedure {\sc GenerateColumns} (lines 6--12) generates columns in order of increasing number of requests. Procedure {\sc GenerateSizedColumns} (lines 13--18) generates columns of size $k$, where $k$ is the number of requests in the column. It first computes $Q$, a set in which each element is a $k$-sized set of possible requests.
It then considers the various vehicles ranked in decreasing order of their dual values $\sigma_v$. Line~15 computes the set of requests with the smallest pricing objective value. If the pricing objective is negative (line~16), all sets of requests which contain a request covered by $R_v$ are removed from $Q$ to ensure that {\sc RTDARS}{} generates a set of non-overlapping columns at each iteration (line~17). Finally, line 18 returns the routes for each vehicle with negative reduced costs. \section{The Real-Time Problem} \label{section-rt} {\sc RTDARS}{} divides the time horizon into epochs of length $\ell$, i.e., $[0,\ell),[\ell,2\ell),[2\ell,3\ell),\ldots$ and epoch $\tau$ corresponds to the time interval $[\tau \ell , (\tau+1) \ell)$. During period $\tau$, {\sc RTDARS}{} batches the incoming requests into a set $P_\tau$, which is considered in the next epoch. It also optimizes the static problem using the requests accumulated in $P_{\tau-1}$ and those requests not yet committed to in the epochs $\tau-1$ and before. The optimization is performed over the interval $[(\tau+1)\ell,\infty)$. It remains to specify how to compute the inputs to the optimization problem, i.e., the departing stops and times for each vehicle and the various sets of requests to serve. To determine the starting stop for a vehicle $v$, the optimization in epoch $\tau$ uses the solution $\phi_{\tau-1}$ to the static problem in epoch $\tau-1$ and considers the first stop $s_v$ in $\phi_{\tau-1}$ in the interval $[(\tau+1)\ell,\infty)$ if it exists. This stop becomes the departing stop of the vehicle and its earliest time $u_0^v$ is given by the earliest departure time of vehicle $v$ in $\phi_{\tau-1}$. If vehicle $v$ is idle at stop $s_v$ in $\phi_{\tau-1}$ and not scheduled on $[(\tau+1)\ell,\infty)$, then the departing stop is $s_v$ and the earliest departing time is $(\tau+1) \ell$. Consider now the sets $P$, $D$, and $I_v$ ($v \in V$) for period $\tau$. For a vehicle $v$, all the requests before its departing stop $s_v$ are said to be {\em committed} and are not reconsidered. The set $I_v$ contains the dropoffs of the requests that have been picked up before $s_v$ but not yet dropped off. The set $P$ corresponds to the requests that have not been picked up by any vehicle $v$ before $s_v$, as well as the requests batched in $P_{\tau-1}$. The set $D$ simply contains the dropoffs associated with $P$. Finally, since the static problem may not schedule all the requests, it is important to update the penalty of unserved requests to ensure that they will not be delayed too long. The penalty for an unserved request $c$ in period $\tau$ is given by $ p_c = \delta 2^{(\tau \ell - e_c) / (10\ell)} \label{equation:obj} $ and it increases exponentially over time as shown in Figure \ref{fig:obj_func}. The $\delta$ parameter incentivizes scheduling the request in its first available period. Figure \ref{fig:obj_func} displays the function for $\delta = 420$ seconds and $\ell = 30$ seconds: It ensures that the penalty doubles every ten periods (in the example, every five minutes). \begin{figure} \caption{The Penalty Function for Unserved Customers.} \label{fig:obj_func} \end{figure} Observe that the static model schedules all the requests which have not been committed to any vehicle. This gives a lot of flexibility to the real-time system at the cost of more complex pricing subproblems.
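As a small worked example (a sketch in Python; the defaults correspond to the values used in the figure), the penalty update reads:
\begin{verbatim}
def penalty(tau, epoch_length, earliest_pickup, delta=420.0):
    # p_c = delta * 2^((tau * l - e_c) / (10 * l)): the penalty of a
    # request that is still unserved in epoch tau.
    return delta * 2.0 ** ((tau * epoch_length - earliest_pickup)
                           / (10.0 * epoch_length))

# With delta = 420 s and l = 30 s, a request released at time 0 has
# penalty(0, 30, 0) = 420, penalty(10, 30, 0) = 840,
# penalty(20, 30, 0) = 1680: the penalty doubles every ten epochs,
# i.e. every five minutes.
\end{verbatim}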
\section{Experimental Results} \label{section-results} \paragraph{Instance Description} {\sc RTDARS}{} was evaluated on the yellow trip data provided by the New York City Taxi and Limousine Commission \cite{nycdata}. This data provides {\em pickup and dropoff locations}, which were used to match trips to the closest virtual stops, {\em starting times}, which were used as the request time, and the {\em number of passengers}. This section reports results on a representative set of 24 instances, 1 hour per day for two weekdays per month from July 2015 through June 2016. To capture the true difficulty of the problem, rush hours (7--8am) were selected. The instances have an average of 21,326 customers and range from 6,678 customers to 28,484 customers. Individual requests with more customers than the capacity of the vehicles were split into several trips. An additional test was performed on the largest instance with 32,869 customers. \paragraph{Virtual Stops} The evaluation assumes a dial-a-ride system using the concept of virtual stops proposed in the {\sc RITMO} system \cite{RITMO} (Uber and Lyft are now considering similar concepts). Virtual stops are locations where vehicles can pick up and drop off customers without impeding traffic. They also ensure that customers are ready to be picked up, and they make ride-sharing more efficient since they decrease the number of stops. To implement virtual stops, Manhattan was overlaid with a grid with cells of 200 square meters and every cell had a virtual stop. The trip times were precomputed by querying OpenStreetMap for travel times between each pair of virtual stops \cite{OpenStreetMap}. All customers at a virtual stop are grouped and can be picked up together. \paragraph{Algorithmic Setting} Both the final master problem and the restricted master problem are solved using Gurobi 8.1. Empty vehicles are initially evenly distributed over the virtual stops. The pricing problem uses parallel computing to implement line 15 of Algorithm \ref{alg:cg}, exploring potential requests simultaneously. To meet real-time constraints, the implementation greedily extends the ``optimal'' routes of size $k$ to obtain routes of size $k+1$. Unless otherwise specified, all experiments are performed with the following default parameters: 2,000 vehicles of capacity 5, $\alpha = 1.5$, $\beta = 240$ seconds, and $\delta = 420$ seconds. The impact of these parameters is also studied. \paragraph{Wait Times} \begin{figure}\label{fig:wait_hist} \label{fig:trip_hist} \end{figure} Figure~\ref{fig:wait_hist} reports the distribution of the waiting times for all customers across all instances. The results demonstrate the performance of {\sc RTDARS}{}: The average waiting time is about 2.58 minutes with a standard deviation of 1.31. On the instance with 32,869 customers, the average waiting time is 5.42 minutes. \paragraph{Trip Deviation} Figure~\ref{fig:trip_hist} depicts a histogram of trip deviations incurred because of ride-sharing. The results indicate that riders have an average trip deviation of 0.34 minutes with a standard deviation of 0.74. In percentage, this represents a deviation of about 12\%. On the instance with 32,869 customers, the average trip deviation is 2.23 minutes, which shows the small overhead induced by ride-sharing. \paragraph{The Impact of the Fleet Size} Figure~\ref{fig:fleetsize} studies the impact of the fleet size on the waiting times and trip deviation.
The plot reports the average waiting times for various numbers of riders, where capacity is 4, $\alpha = 1$, $\beta = 840$ seconds, and $\delta = 420$ seconds to facilitate comparisons to \cite{Alonso-Mora462}. The results show that, even with 1,500 vehicles, the average waiting time remains below 6 minutes and the average deviation time below 40 seconds. Since {\sc RTDARS}{} is guaranteed to serve all the requests, these results demonstrate the potential of column generation and ride-sharing for large-scale real-time dial-a-ride platforms. Adopting {\sc RTDARS}{} has the potential to substantially reduce traffic in large cities, while still guaranteeing service within reasonable times. Recall that the approach in \cite{Alonso-Mora462} does not serve about 2\% of the requests. \begin{figure} \caption{The Impact of the Fleet Size on the Average Wait Times and Average Deviations on All Instances.} \label{fig:fleetsize} \end{figure} \paragraph{The Impact of Vehicle Capacity} Figure~\ref{fig:capacity} studies the impact of the vehicle capacity (i.e., how many passengers a vehicle can carry) on the average waiting times and trip deviation. The parameters are set to 2,000 vehicles, $\alpha = 1$, $\beta = 840$ seconds, and $\delta = 420$ seconds to facilitate comparisons to \cite{Alonso-Mora462}. The results on waiting times show that moving to vehicles of capacity 8 further reduces the average waiting times, especially on the large instances. On the other hand, moving from a capacity 5 to 3 does not affect the results too much. The results on deviations are more difficult to interpret. Obviously moving to a capacity 8 further increases the deviation (although it remains below one minute). However, moving to vehicles of capacity 3 also increases the deviation, which is not intuitive. This may be a consequence of myopic decisions that cannot be corrected easily given the tight capacity. \begin{figure} \caption{The Impact of the Vehicle Capacity on the Average Wait Times and the Average Trip Deviations on All Instances.} \label{fig:capacity} \end{figure} \paragraph{The Impact of the Penalty} The penalty $p_i$ in the model is an exponential function of the current waiting time of customer $i$. Constant $\delta$ controls the initial penalty: If it is too small, the penalty for not scheduling a request for the first few periods is low, which causes an increase in wait times, as can be observed in Figure~\ref{fig:obj_vs_wait}. Once $\delta$ is large enough, the average wait times converge to the same values. \begin{figure}\label{fig:obj_vs_wait} \label{fig:qos_hist} \end{figure} \paragraph{Final Vehicle Assignments} As a result of re-optimization, the vehicle to which a rider is assigned can change. Figure~\ref{fig:qos_hist} reports the amount of time until riders receive their final vehicle assignment (the vehicle which actually picks them up). Not surprisingly, this histogram closely follows the waiting time distribution. The majority of riders receive this assignment quickly. However, it takes some riders over 10 minutes to receive their final vehicle assignment, which shows that {\sc RTDARS}{} takes advantage of the ability to re-assign riders to vehicles which will result in better overall assignments. \begin{figure}\label{fig:cg_hist} \label{fig:pruning_plot} \end{figure} \paragraph{The Impact of Column Generation} Figure \ref{fig:cg_hist} depicts the impact of column generation and reports the number of columns in the final MIP as all possible columns of sizes 1 and 2 to be conservative. 
The results show that the algorithm only explores a small percentage of all potential columns, demonstrating the benefits of a column-generation approach. \paragraph{The Impact of Pruning} Figure~\ref{fig:pruning_plot} shows the impact of Theorem 1, which provides a way to prune the number of requests considered at each step of the algorithm. The figures report the total optimization time for all time periods of each instance. Each optimization must be performed in less than 30 seconds, but the graph reports the total optimization time over the entire hour. As the results indicate, the pruning benefits become substantial as the instance sizes grow. The results show that the pruning significantly reduces the computational time. They also show that {\sc RTDARS}{} should be able to handle even larger instances since, after exploiting Theorem 1, {\sc RTDARS}{} uses only about a sixth of the available time. This creates opportunities to exploit stochastic information. \begin{figure} \caption{The Impact of the Fleet Size on the Average Vehicle Utilization and Idle Time on All Instances.} \label{fig:vehicle_util} \end{figure} \paragraph{The Impact of Ride Sharing} Figure~\ref{fig:vehicle_util} reports the average number of people in each vehicle at all times for each instance. The results show a significant amount of ride sharing, although single trips and idle time remain a significant portion of the rides, especially when the fleet is oversized. Lastly, Figure~\ref{fig:capacity} shows that wait times are reduced by a factor of 4 when moving from single-rider trips to ride-sharing for large instances while the trip deviation only increases to at most 2 minutes for vehicles of capacity 8, thus demonstrating the value of ride sharing. \paragraph{Comparison with Prior Work} The results of \cite{Alonso-Mora462} ``show that 2,000 vehicles (15\% of the taxi fleet) of capacity 10 or 3,000 of capacity 4 can serve 98\% of the demand within a mean waiting time of 2.8 min and mean trip delay of 3.5 min.'' {\sc RTDARS}{} relaxes the hard time-windows present in \cite{Alonso-Mora462} and improves on these results, yielding an average wait time of 2.2 minutes with only 2,000 vehicles, while guaranteeing service for all riders. \section{Conclusion} \label{section-conclusion} This paper considered the real-time dispatching of large-scale ride-sharing services over a rolling horizon. It presented {\sc RTDARS}{}, a real-time optimization framework that divides the time horizon into epochs and uses a column-generation algorithm that minimizes wait times while guaranteeing services for every rider and a small trip deviation compared to a direct trip. This contrasts to earlier work which rejected customers when the predicted waiting time was considered too long (e.g., 7 minutes). This assumption reduced the search space at the cost of rejecting a significant number of requests. The column-generation algorithm of {\sc RTDARS}{} is derived from a three-index formulation \cite{Cordeau2007} which is adapted for use in real-time dial-a-ride applications. In addition, to ensure that all riders are served in reasonable times, the paper proposed an optimization model that balances the minimization of waiting times with penalties for riders that are not scheduled yet. These penalties are increased after each epoch to make it increasingly harder not to serve waiting riders. The paper also presented a key property of the formulation that makes it possible to reduce the search space significantly. 
{\sc RTDARS}{} was evaluated on historic taxi trips from the New York City Taxi and Limousine Commission \cite{nycdata}, which contain large-scale instances with more than 30,000 requests an hour. The results indicated that {\sc RTDARS}{} enables a real-time dial-a-ride service to provide service guarantees (every rider is served in a reasonable time) while improving average waiting times and average trip deviations compared to prior work. The results also showed that larger-occupancy vehicles bring benefits and that the fleet size can be further reduced while preserving very reasonable waiting times. Substantial work remains to be done to understand the strengths and limitations of the approach. The current implementation is myopic and heavily driven by the dual costs to generate the columns. Different pricing implementations, including the use of constraint programming to replace our dedicated search algorithm, and the inclusion of stochastic information are natural directions for future research. \section*{Acknowledgments} This research was partly supported by Didi Chuxing Technology Co. and Department of Energy Research Grant 7F-30154. We would like to thank the reviewers for their detailed comments and suggestions which dramatically improved the paper, and the program chairs for a rebuttal period that was long enough to run many experiments. \end{document}
\begin{document} \title{The transverse density bundle and modular classes of Lie groupoids} \author{Marius Crainic} \address{Mathematical Institute, Utrecht University, The Netherlands} \email{[email protected]} \author{Jo\~ao Nuno Mestre} \address{Centre for Mathematics, University of Porto, Portugal} \email{[email protected]} \begin{abstract} In this note we revisit the notions of transverse density bundle and of modular classes of Lie algebroids and Lie groupoids; in particular, we point out that one should use the transverse density bundle $\mathcal{D}_{A}^{\textrm{tr}}$ instead of $Q_A$, which is the representation that is commonly used when talking about modular classes. One of the reasons for this is that, as we will see, $Q_A$ is not really an object associated to the stack presented by a Lie groupoid (in general, it is not a representation of the groupoid!). \end{abstract} \maketitle \section{Introduction}\label{intro} We revisit the transverse density bundle $\mathcal{D}_{A}^{\textrm{tr}}$, a representation that is canonically associated to any Lie groupoid \cite{measures_stacks}, and its relation to the notions of modular classes of Lie groupoids and Lie algebroids. These are characteristic classes of certain 1-dimensional representations \cite{VanEst, QA, mehta-modular}, that we recall and discuss in this note. The definition of the modular class of a Lie algebroid always comes with the slogan, inspired by various examples, that it is ``the obstruction to the existence of a transverse measure''. Here we would like to point out that the transverse density bundle $\mathcal{D}_{A}^{\textrm{tr}}$ and our discussions make this slogan precise. In particular, we point out that the canonical representation $Q_A$ \cite{QA} which is commonly used in the context of modular classes of Lie algebroids should actually be replaced by $\mathcal{D}_{A}^{\textrm{tr}}$, especially when passing from Lie algebroids to Lie groupoids. \ \ \paragraph{\textbf{Notation and conventions:}} Throughout the paper $\mathcal{G}$ denotes a Lie groupoid over $M$, with source and target maps denoted by $s$ and $t$, respectively. For background material on Lie groupoids, Lie algebroids, and their representations we refer to \cite{ieke_mrcun}. We only consider Lie groupoids having all $s$-fibers of the same dimension or, equivalently, whose algebroid $A$ has constant rank. Actually, all vector bundles in this paper are assumed to be of constant rank. \section{Transverse density bundles} \subsection{Volume, orientation and density bundles}\label{sub-dens-bundle} First of all, note that any group homomorphism $\delta: GL_r\rightarrow \mathbb{R}^*$ allows us to associate to any $r$-dimensional vector space a canonical 1-dimensional vector space \[ L_{\delta}(V):= \{\xi: \textrm{Fr}(V)\rightarrow \mathbb{R}\ :\ \xi(e\cdot A)= \delta(A) \xi(e) \ \textrm{for\ all} \ e\in \textrm{Fr}(V),\ A\in GL_r\},\] where $\textrm{Fr}(V)= \textrm{Isom}(\mathbb{R}^r, V)$ is the space of frames on $V$, endowed with the standard (right) action of $GL_r$. The cases that are of interest for us are: \begin{itemize} \item $\delta= \textrm{det}$, for which we obtain the top exterior power $\Lambda^{\textrm{top}}V^*$. \item $\delta= \textrm{sign}\circ \textrm{det}$, for which we obtain the \textbf{orientation space} $\mathfrak{o}_{V}$ of $V$. \item $\delta= |\textrm{det}|$ which defines the \textbf{space $\mathcal{D}_{V}$ of densities} of $V$.
More generally, for $l\in \mathbb{Z}$, $\delta= |\textrm{det}|^l$ defines the \textbf{space $\mathcal{D}_{V}^{l}$ of $l$-densities} of $V$. \end{itemize} It is clear that there is a canonical isomorphism: \[ \mathcal{D}_V\otimes \mathfrak{o}_{V}\cong \Lambda^\mathrm{top} V^*.\] When $V= L$ is 1-dimensional, $\mathcal{D}_{L}$ is also denoted by $|L|$; so, in general, \[ \mathcal{D}_{V}= |\Lambda^\mathrm{top} V^*|,\] which fits well with the fact that for any $\omega\in \Lambda^\mathrm{top} V^*$, $|\omega|$ makes sense as an element of $\mathcal{D}_{V}$. For 1-dimensional vector spaces $W_1$ and $W_2$, one has a canonical isomorphism $|W_1| \otimes |W_2| \cong |W_1\otimes W_2|$ (in particular $|W^*|\cong |W|^*$). From the properties of $\Lambda^\mathrm{top}V^*$ (or by similar arguments), one obtains canonical isomorphisms: \begin{enumerate} \item[1.] $\mathcal{D}_{V}^{*}\cong \mathcal{D}_{V^*}$ for any vector space $V$. \item[2.] For any short exact sequence of vector spaces \[ 0\rightarrow V\rightarrow U\rightarrow W\rightarrow 0\] (e.g. for $U= V\oplus W$), one has an induced isomorphism $\mathcal{D}_{U}\cong \mathcal{D}_{V}\otimes \mathcal{D}_{W}$. \end{enumerate} Since the previous discussion is canonical (free of choices), it can be applied (fiberwise) to vector bundles over a manifold $M$ so that, for any such vector bundle $E$, one can talk about the associated line bundles over $M$ \[ \mathcal{D}_E,\ \Lambda^\mathrm{top} E^*,\ \mathfrak{o}_{E}\] and the previous isomorphisms continue to hold at this level. However, at this stage, only $\mathcal{D}_E$ is trivializable, and even that only in a non-canonical way. \begin{definition} A \textbf{density on a manifold} $M$ is any section of the density bundle $\mathcal{D}_{TM}$. \end{definition} The main point about densities on manifolds is that they can be integrated in a canonical fashion, so that associated to any compactly supported positive density $\rho$ on $M$, one obtains a Radon measure $\mu_{\rho}$ defined by \[ \mu_{\rho}: C_{c}^{\infty}(M)\rightarrow \mathbb{R},\ \ \ \ \mu_{\rho}(f)= \int_{M} f\cdot \rho .\] \subsection{The transverse volume, orientation and density bundles}\label{sub-tr-dens-bundle} \begin{definition}\label{def-tr-gpd-dens} For a Lie algebroid $A$ over $M$, the \textbf{transverse density bundle of $A$} is the vector bundle over $M$ defined by: \[ \mathcal{D}_{A}^{\textrm{tr}}:= \mathcal{D}_{A^*}\otimes \mathcal{D}_{TM} .\] \end{definition} Similarly one can define the \textbf{transverse volume and orientation bundles} \[ \mathcal{V}^{\textrm{tr}}_{A}:= \mathcal{V}_{A^*}\otimes \mathcal{V}_{TM}= \Lambda^{\textrm{top}}A\otimes \Lambda^{\textrm{top}}T^*M,\ \ \ \mathfrak{o}^{\textrm{tr}}_{A}:= \mathfrak{o}_{A^*}\otimes \mathfrak{o}_{TM}\] and the usual relations between these bundles continue to hold in this setting; e.g.: \[ \mathcal{D}_{A}^{\textrm{tr}}= |\Lambda^{\textrm{top}}A\otimes \Lambda^{\textrm{top}}T^*M|= |\mathcal{V}^{\textrm{tr}}_{A}| . \] One of the main properties of these bundles is that they are representations of $A$ and, even better, of $\mathcal{G}$, whenever $\mathcal{G}$ is a Lie groupoid with algebroid $A$; hence they do deserve the name of ``transverse'' vector bundles. We describe the canonical action of $\mathcal{G}$ on the transverse density bundle $\mathcal{D}_{A}^{\textrm{tr}}$; for the other two the description is identical.
We have to associate to any arrow $g: x\rightarrow y$ of $\mathcal{G}$ a linear transformation \[ g_{*}: \mathcal{D}_{A, x}^{\textrm{tr}}\rightarrow \mathcal{D}_{A, y}^{\textrm{tr}}.\] The differential of $s$ and the right translations induce a short exact sequence \[ 0 \rightarrow A_y \rightarrow T_g\mathcal{G} \stackrel{ds}{\rightarrow} T_{x}M \rightarrow 0 ,\] which, in turn (cf. item 2 in Subsection \ref{sub-dens-bundle}), induces an isomorphism: \begin{equation}\label{DTG-dec} \mathcal{D}(T_g\mathcal{G})\cong \mathcal{D}(A_y)\otimes \mathcal{D}(T_xM). \end{equation} Using the similar isomorphism at $g^{-1}$ and the fact that the differential of the inversion map gives an isomorphism $T_g\mathcal{G}\cong T_{g^{-1}}\mathcal{G}$, we find an isomorphism \[ \mathcal{D}(A_y)\otimes \mathcal{D}(T_xM)\cong \mathcal{D}(A_x)\otimes \mathcal{D}(T_yM).\] and therefore an isomorphism: \[ \mathcal{D}(A_{x}^{*})\otimes \mathcal{D}(T_xM)\cong \mathcal{D}(A_{y}^{*})\otimes \mathcal{D}(T_yM),\] and this defines the action $g_*$ we were looking for (it is straightforward to check that this defines indeed an action). \begin{definition} A \textbf{transverse density} for the Lie groupoid $\mathcal{G}$ is any $\mathcal{G}$-invariant section of the transverse density bundle $\mathcal{D}_{A}^{\textrm{tr}}$. \end{definition} \begin{remark}Recall that the canonical integration of densities lets us associate to a compactly supported positive density $\rho$ on a manifold $M$ a Radon measure $\mu_\rho$. Similarly, a positive transverse density for a groupoid $\mathcal{G}$ gives rise to what is called a transverse measure for $\mathcal{G}$. Such measures were studied in \cite{measures_stacks}, and represent measures on the differentiable stack presented by $\mathcal{G}$. \end{remark} \section{The modular class(es) revisited}\label{sec-The modular class(es) revisited} Throughout this section $\mathcal{G}$ is a Lie groupoid over $M$ and $A$ is its Lie algebroid. We will be using the transverse density bundle $\mathcal{D}_{A}^{\textrm{tr}}$, volume bundle $\mathcal{V}_{A}^{\textrm{tr}}$ and orientation bundle $\mathfrak{o}_{A}^{\textrm{tr}}$, viewed as representations of $\mathcal{G}$ as explained in Section \ref{sub-tr-dens-bundle}. Let us mention, right away, the relation between these bundles. As vector bundles over $M$, we know (see Section \ref{sub-dens-bundle}) that there are canonical vector bundle isomorphism between \begin{itemize} \item $\mathcal{D}_{A}^{\textrm{tr}}$ and $\mathcal{V}_{A}^{\textrm{tr}}\otimes \mathfrak{o}_{A}^{\textrm{tr}}$. \item $\mathcal{V}_{A}^{\textrm{tr}}$ and $\mathcal{D}_{A}^{\textrm{tr}}\otimes \mathfrak{o}_{A}^{\textrm{tr}}$. \item $\mathfrak{o}_{A}^{\textrm{tr}}\otimes \mathfrak{o}_{A}^{\textrm{tr}}$ and the trivial line bundle. \item $\mathfrak{o}_{A}^{\textrm{tr}}$ and $(\mathfrak{o}_{A}^{\textrm{tr}})^*$. \end{itemize} \begin{lemma}\label{lemma-can-izos} All these canonical isomorphisms are isomorphisms of representations of $\mathcal{G}$ (where the trivial line bundle is endowed with the trivial action). \end{lemma} \begin{proof} Given the way that the action of $\mathcal{G}$ was defined (Section \ref{sub-tr-dens-bundle}), the direct check can be rather lengthy and painful. Here is a more conceptual approach. 
The main remark is that these actions can be defined in general, whenever we have a functor $F$ which associates to a vector space $V$ a $1$-dimensional vector space $F(V)$ and to a (linear) isomorphism $f: V\rightarrow W$ an isomorphism $F(f): F(V)\rightarrow F(W)$ such that: \begin{enumerate} \item[1.] $F$ commutes with the duality functor $D$, i.e., $F\circ D$ and $D\circ F$ are isomorphic through a natural transformation $\eta: F\circ D \rightarrow D\circ F$. \item[2.] for any exact sequence $0\rightarrow U\rightarrow V\rightarrow W\rightarrow 0$ there is an induced isomorphism between $F(V)$ and $F(U)\otimes F(W)$, natural in the obvious sense. \end{enumerate} Let's call such $F$'s ``good functors''. The construction from Section \ref{sub-tr-dens-bundle} shows that for any good functor $F$, \[ F_{A}^{\textrm{tr}}:= F(A^*)\otimes F(TM) \] is a representation of $\mathcal{G}$. Given two good functors $F$ and $F'$, an isomorphism $\eta: F\rightarrow F'$ will be called good if it is compatible with the natural transformations from 1. and 2. above. It is clear that, for any such $\eta$, there is an induced map $\eta^{\textrm{tr}}$ that is an isomorphism between $F_{A}^{\textrm{tr}}$ and $F_{A}^{'\, \textrm{tr}}$, as representations of $\mathcal{G}$. It should also be clear that the tensor product of any two good functors is again a good functor. We see that we are left with proving that certain isomorphisms involving the functors $\mathcal{D}$, $\mathcal{V}$ and $\mathfrak{o}$ (e.g. $\mathcal{D}\cong \mathcal{V}\otimes \mathfrak{o}$) are good in the previous sense; and that is straightforward. \end{proof} \subsection{The modular class of $\mathcal{G}$}\label{subsec-The modular class of G} Let us concentrate on the question of whether $\mathcal{G}$ admits a strictly positive transverse density (these are the ``measures'' from the slogan at the start of the section, or ``geometric measures'' in the terminology of \cite{measures_stacks}). Start with any strictly positive section $\sigma$ of $\mathcal{D}_{A}^{\textrm{tr}}$. Then any other such section is of type $e^f \sigma$ for some $f\in C^{\infty}(M)$; moreover $e^f \sigma$ is invariant if and only if \[ e^{f(y)} \sigma(y)= e^{f(x)} g(\sigma(x))\] for every arrow $g: x\rightarrow y$ of $\mathcal{G}$. Considering \[ c_{\sigma}(g):= \ln\left(\frac{\sigma(y)}{g(\sigma(x))}\right),\] one has $c_{\sigma}\in C^{\infty}(\mathcal{G})$ and one checks right away that it is a 1-cocycle, i.e., \[ c_{\sigma}(gh)= c_{\sigma}(g)+ c_{\sigma}(h) \] for all $g$ and $h$ composable. The condition on $f$ that we were considering reads: \[ f(x)- f(y)= c_{\sigma}(g)\] for all $g:x\rightarrow y$, i.e., $c_{\sigma}= \delta(f)$ in the differentiable cohomology complex of $\mathcal{G}$, $(C^{\bullet}_{\textrm{diff}}(\mathcal{G}), \delta)$. Furthermore, an easy check shows that the class $[c_{\sigma}]\in H^{1}_{\textrm{diff}}(\mathcal{G})$ does not depend on the choice of $\sigma$. Therefore it gives rise to a canonical class \[ \textrm{mod}(\mathcal{G})\in H^{1}_{\textrm{diff}}(\mathcal{G}),\] called \textbf{the modular class of the Lie groupoid $\mathcal{G}$}. By construction: \begin{lemma}\label{mod-as-obstr} $\mathcal{G}$ admits a strictly positive transverse density iff $\textrm{mod}(\mathcal{G})= 0$. \end{lemma} This result makes precise the expectations of \cite[Section 7]{mehta-modular} for the meaning of the modular class in the absence of superorientability.
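\begin{remark} For a quick illustration of the previous construction (we only sketch it here; the precise sign depends on the conventions used to identify the various fibers), let $\mathcal{G}= G$ be a Lie group, viewed as a Lie groupoid over a point. Then $A= \mathfrak{g}$ is the Lie algebra of $G$, $\mathcal{D}_{A}^{\textrm{tr}}= \mathcal{D}_{\mathfrak{g}^*}= |\Lambda^{\textrm{top}}\mathfrak{g}|$ and, for any positive element $\sigma$, unwinding the definitions one finds \[ c_{\sigma}(g)= \pm\, \ln |\det(\mathrm{Ad}_g)| .\] In particular, $\textrm{mod}(G)= 0$ if and only if $G$ is unimodular, in agreement with Lemma \ref{mod-as-obstr}: the existence of a $G$-invariant positive element of $|\Lambda^{\textrm{top}}\mathfrak{g}|$ is equivalent to the existence of a bi-invariant Haar density on $G$. \end{remark}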
With this, the existence of transverse densities and measures for proper Lie groupoids of \cite{measures_stacks} reduces to the vanishing of the differentiable cohomology of proper groupoids (Proposition $1$ in \cite{VanEst}). The construction of $\textrm{mod}(\mathcal{G})$ can be seen as a very particular case of the construction from \cite{VanEst} of characteristic classes of representations of $\mathcal{G}$, classes that live in the odd differentiable cohomology of $\mathcal{G}$. Here we are interested only in the $1$-dimensional representations $L$, with corresponding class denoted \[ \theta_{\mathcal{G}}(L)\in H^{1}_{\textrm{diff}}(\mathcal{G}).\] For a direct description, similar to that of $\textrm{mod}(\mathcal{G})$, we first assume that $L$ is trivializable as a vector bundle and we choose a nowhere vanishing section $\sigma$. Then, for $g: x\rightarrow y$, we can write \[ g\cdot \sigma(x)= \tilde{c}_{\sigma}(g)\, \sigma(y)\ \ \ \ (\tilde{c}_{\sigma}(g)\in \mathbb{R}^*)\] and this defines a function \begin{equation}\label{tilde-c-sigma} \tilde{c}_{\sigma}: \mathcal{G} \rightarrow \mathbb{R}^* \end{equation} that is a groupoid homomorphism. The cocycle of interest is \begin{equation}\label{tilde-c-sigma-form} c_{\sigma}=\ln(|\tilde{c}_{\sigma}|): \mathcal{G} \rightarrow \mathbb{R}; \end{equation} its cohomology class does not depend on the choice of $\sigma$ and defines $\theta_{\mathcal{G}}(L)$. It is clear that for two such representations $L_1$ and $L_2$ (trivializable as vector bundles), \begin{equation}\label{multiplicativiti-theta} \theta_{\mathcal{G}}(L_1\otimes L_2)= \theta_{\mathcal{G}}(L_1)+ \theta_{\mathcal{G}}(L_2). \end{equation} This indicates how to proceed for a general $L$: consider the representation $L\otimes L$, which is (noncanonically) trivializable, and define: \begin{equation}\label{multiplicativiti-theta-trick} \theta_{\mathcal{G}}(L):= \frac{1}{2} \theta_{\mathcal{G}}(L\otimes L). \end{equation} The multiplicativity formula for $\theta_{\mathcal{G}}$ remains valid for all $L_1$ and $L_2$. By construction: \begin{lemma} One has $\textrm{mod}(\mathcal{G})= \theta_{\mathcal{G}}(\mathcal{D}_{A}^{\textrm{tr}})$. \end{lemma} \begin{remark}[a warning] \label{remark-warning} It is not true (even if $L$ is trivializable as a vector bundle!) that $\theta_{\mathcal{G}}(L)$ is the obstruction to $L$ being isomorphic to the trivial representation. Lemma \ref{mod-as-obstr} holds because the transverse density bundle is more than trivializable: one can also talk about positivity of sections of $\mathcal{D}_{A}^{\textrm{tr}}$ and $\mathcal{D}_{A}^{\textrm{tr}}$ is trivializable as an {\it oriented} representation of $\mathcal{G}$. \end{remark} The tendency in the existing literature, at least for the infinitesimal version of the modular class (see below), is to use simpler representations instead of $\mathcal{D}_{A}^{\textrm{tr}}$. Here we would like to clarify the role of the transverse volume bundle $\mathcal{V}_{A}^{\textrm{tr}}$: can one use it to define $\textrm{mod}(\mathcal{G})$? In short, the answer is: yes, but one should not do it because it would give rise to the wrong expectations (because of the previous warning!). We summarise this in the following: \begin{proposition}\label{not-col-tr} For any Lie groupoid $\mathcal{G}$, $\textrm{mod}(\mathcal{G})= \theta_{\mathcal{G}}(\mathcal{V}_{A}^{\textrm{tr}})$.
However, it is not true that $\textrm{mod}(\mathcal{G})= 0$ if and only if $\mathcal{G}$ admits a transverse volume form (i.e., a nowhere vanishing $\mathcal{G}$-invariant section of $\mathcal{V}_{A}^{\textrm{tr}}$). \end{proposition} Counterexamples for the last part are provided already by manifolds $M$, viewed as groupoids with unit arrows only. Indeed, in this case the associated transverse (density, volume) bundles are the usual bundles of $M$; hence the modular class is zero even if $M$ is not orientable. For the first part of the proposition, using the multiplicativity (\ref{multiplicativiti-theta}) of $\theta_{\mathcal{G}}$ and the canonical isomorphisms discussed at the beginning of the section, we have to show that \begin{equation}\label{vanish} \theta_{\mathcal{G}}(\mathfrak{o}_{A}^{\textrm{tr}})= 0. \end{equation} In turn, this follows by applying again the multiplicativity of $\theta_{\mathcal{G}}$ and the canonical isomorphism between $\mathfrak{o}_{A}^{\textrm{tr}}\otimes \mathfrak{o}_{A}^{\textrm{tr}}$ and the trivial representation. \subsection{The modular class of $A$}\label{subsec-The modular class of A} The construction of the modular class of a Lie algebroid $A$, introduced by Evens, Lu and Weinstein \cite{QA}, is based on the geometry of a certain $1$-dimensional representation $Q_A$ of the Lie algebroid $A$: $\textrm{mod}(A)$ is the characteristic class of $Q_A$. Let us first recall the construction of the characteristic class $\theta_{A}(L)\in H^1(A)$ associated to any $1$-dimensional representation $L$ of $A$ (the infinitesimal version of the construction of the classes $\theta_{\mathcal{G}}(L)$ of groupoid representations). First one uses the analogue of (\ref{multiplicativiti-theta-trick}) to reduce the construction to the case when $L$ is trivializable as a vector bundle; then, for such $L$, one chooses a nowhere vanishing section $\sigma$ and one writes the infinitesimal action $\nabla$ of $A$ on $L$ as \[ \nabla_{\alpha}(\sigma)= c_{\sigma}(\alpha) \cdot \sigma,\] therefore defining an element $c_{\sigma}(L)\in \Omega^1(A)$. As in the previous discussion, the flatness of $\nabla$ implies that $c_{\sigma}(L)$ is a closed $A$-form and its cohomology class does not depend on the choice of $\sigma$; therefore it defines a class, called the characteristic class of $L$, and denoted \[ \theta_A(L)\in H^1(A).\] Note that the situation is simpler than at the level of $\mathcal{G}$: for $L$ trivializable as a vector bundle, $\theta_A(L)= 0$ if and only if $L$ is isomorphic to the trivial representation of $A$ (compare with the warning from Remark \ref{remark-warning}!). Inspired by the previous subsection, we define: \begin{definition} The \textbf{modular class of a Lie algebroid} $A$, denoted $\textrm{mod}(A)$, is the characteristic class of $\mathcal{D}_{A}^{\textrm{tr}}$. \end{definition} When $A$ is the Lie algebroid of a Lie groupoid $\mathcal{G}$, since $\mathcal{D}_{A}^{\textrm{tr}}$ is a representation of $\mathcal{G}$, we deduce (cf. Theorem 7 in \cite{VanEst}): \begin{proposition}\label{mod-cor1} For any Lie groupoid $\mathcal{G}$, the Van Est map in degree $1$, \[ VE: H^{1}_{\mathrm{diff}}(\mathcal{G}) \rightarrow H^1(A)\] sends $\textrm{mod}(\mathcal{G})$ to $\textrm{mod}(A)$. In particular, if $A$ is integrable by a unimodular Lie groupoid (e.g. by a proper Lie groupoid), then its modular class vanishes.
\end{proposition} In particular, since the Van Est map in degree 1 is injective if $\mathcal{G}$ is $s$-connected (see e.g. Theorem 4 in \cite{VanEst}), we deduce: \begin{corollary}\label{mod-cor2} If $\mathcal{G}$ is an $s$-connected Lie groupoid with Lie algebroid $A$, then $\textrm{mod}(A)$ is the obstruction to the existence of a strictly positive transverse density for $\mathcal{G}$. \end{corollary} \subsection{(Not) $Q_A$}\label{subsec-not QA} The modular class of a Lie algebroid $A$ can be defined as the characteristic class of various $1$-dimensional representations of $A$. We have used $\mathcal{D}_{A}^{\textrm{tr}}$, but the common choice in the literature (starting with \cite{QA}) is the line bundle \[ Q_{A}= \Lambda^{\textrm{top}}A\otimes |\Lambda^{\textrm{top}}T^*M| .\] The infinitesimal action of $A$ on $Q_A$ is explained in \cite{QA}; equivalently, one writes \begin{equation}\label{dec-QA} Q_A= \mathcal{D}_{A}^{\textrm{tr}}\otimes \mathfrak{o}_A \end{equation} in which both terms are representations of $A$: $\mathcal{D}_{A}^{\textrm{tr}}$ was already discussed, while $\mathfrak{o}_A$ is a representation of $A$ since it is a flat vector bundle over $M$. \begin{lemma}\label{mod-cor0} The representations $Q_A$, $\mathcal{D}_{A}^{\mathrm{tr}}$ and $\mathcal{V}_{A}^{\mathrm{tr}}$ of $A$ have the same characteristic class (namely $\mathrm{mod}(A)$). \end{lemma} \begin{proof} Using $\mathcal{D}_{A}^{\textrm{tr}}\cong \mathcal{V}_{A}^{\textrm{tr}}\otimes \mathfrak{o}_{A}^{\textrm{tr}}$, (\ref{dec-QA}) and the multiplicativity of $\theta_{A}$, it suffices to show that $\theta_A(\mathfrak{o}_{A}^{\textrm{tr}})= 0$ and similarly for $\mathfrak{o}_{A}$. Using again multiplicativity, it suffices to show that $\theta_A(\mathfrak{o}_{A}^{\textrm{tr}}\otimes \mathfrak{o}_{A}^{\textrm{tr}})= 0$, which is true because $\mathfrak{o}_{A}^{\textrm{tr}}\otimes \mathfrak{o}_{A}^{\textrm{tr}}$ is isomorphic to the trivial representation (Lemma \ref{lemma-can-izos}); and similarly for $\mathfrak{o}_{A}$, except that, this time, $\mathfrak{o}_{A}\otimes \mathfrak{o}_{A}$ is isomorphic to the trivial line bundle already as a flat vector bundle. \end{proof} Despite the previous lemma, because of Proposition \ref{not-col-tr} and the discussion around it, using $\mathcal{V}_{A}^{\mathrm{tr}}$ to define $\textrm{mod}(A)$, although correct, may give rise to the wrong expectations. However, using $Q_A$ to define $\textrm{mod}(A)$ is even more unfortunate, for even more fundamental reasons: in general, $Q_A$ is not a representation of the groupoid $\mathcal{G}$! Indeed, using (\ref{dec-QA}), the condition that $Q_A$ can be made into a representation of $\mathcal{G}$ is equivalent to the same condition for $\mathfrak{o}_A$. But the orientation bundles $\mathfrak{o}_A$ are the typical examples of algebroid representations that do not come from groupoid ones. That is clear already in the case of the pair groupoid of a manifold $M$, whose representations are automatically trivializable as vector bundles, but for which $\mathfrak{o}_{A}= \mathfrak{o}_{TM}$ is not trivializable if $M$ is not orientable. Note also that the fact that $\mathcal{D}_{A}^{\textrm{tr}}$, unlike $Q_A$, is a representation of $\mathcal{G}$, was absolutely essential for obtaining Proposition \ref{mod-cor1} and Corollary \ref{mod-cor2}.
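\begin{remark} On the infinitesimal side one has an analogous basic example (again, we only sketch it, and the sign depends on conventions): for a Lie algebra $\mathfrak{g}$, viewed as a Lie algebroid over a point, the three representations from Lemma \ref{mod-cor0} become $\Lambda^{\textrm{top}}\mathfrak{g}$ or $|\Lambda^{\textrm{top}}\mathfrak{g}|$, and for any nowhere vanishing $\sigma$ one finds \[ c_{\sigma}(x)= \pm\, \textrm{tr}(\mathrm{ad}_x)\ \ \ (x\in \mathfrak{g}),\] i.e., $\textrm{mod}(\mathfrak{g})\in H^1(\mathfrak{g})$ is, up to sign, the class of the infinitesimal modular character of $\mathfrak{g}$; hence $\textrm{mod}(\mathfrak{g})= 0$ if and only if $\mathfrak{g}$ is unimodular. \end{remark}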
\subsection{Transverse orientability and the first Stiefel-Whitney class of $\mathcal{G}$} It is interesting to look back at the construction of the characteristic class $\theta_{\mathcal{G}}(L)$ of a $1$-dimensional representation $L$ of $\mathcal{G}$. The reason for the warning mentioned in Remark \ref{remark-warning} comes from the fact that, when passing from $\tilde{c}_{\sigma}$ to $c_{\sigma}$ in (\ref{tilde-c-sigma-form}), one loses information related to orientability. Following the exposition in \cite{tese}, we return to the discussion around (\ref{tilde-c-sigma-form}); in particular, we assume that $L$ is a $1$-dimensional representation of $\mathcal{G}$ that is trivializable as a vector bundle, and $\sigma$ is a nowhere vanishing section of $L$. Then one can either: \newpage \begin{itemize} \item consider the entire $\tilde{c}_{\sigma}: \mathcal{G} \rightarrow \mathbb{R}^*$; \item consider only the part of $\tilde{c}_{\sigma}$ that is not contained in $c_{\sigma}$, i.e., \[ \textrm{sign} \circ \tilde{c}_{\sigma}: \mathcal{G} \rightarrow \mathbb{Z}_2,\] where we identify $\mathbb{Z}_2$ with $\{-1, 1\}\subset \mathbb{R}^*$. \end{itemize} Both of them are (differentiable) cocycles on $\mathcal{G}$ with coefficients in an (abelian Lie) group. Such cocycles give rise to classes in the cohomology groups $H^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{R}^*)$ and $H^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{Z}_2)$, which are abelian groups but no longer vector spaces. As before, the resulting cohomology classes are independent of $\sigma$; we denote them by \[ \tilde{\theta}_{\mathcal{G}}(L)\in H^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{R}^*), \ \ w(L)\in H^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{Z}_2),\] and we will call them the \textbf{extended characteristic class} of $L$, and the \textbf{Stiefel-Whitney class of $L$}, respectively. All these classes can be put together using the decomposition of the group $\mathbb{R}^*$ as \[ \mathbb{R}^* \cong \mathbb{R} \times \mathbb{Z}_2,\ \ \lambda \mapsto (\ln(|\lambda|), \textrm{sign}(\lambda))\] and the induced isomorphism \[ H^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{R}^*) \cong H^{1}_{\textrm{diff}}(\mathcal{G}) \times H^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{Z}_2) .\] With this, the extended class of $L$ is \[ \tilde{\theta}_{\mathcal{G}}(L)= (\theta_{\mathcal{G}}(L), w(L)). \] Note that, by construction, $\tilde{\theta}_{\mathcal{G}}(L)$ is trivial (equal to the identity of the group) if and only if $L$ is isomorphic to the trivial representation, and $w(L)$ is trivial if and only if $L$ is $\mathcal{G}$-orientable, i.e., if $L$ admits an orientation with the property that the action of $\mathcal{G}$ is orientation-preserving. A warning, however: the classes $\tilde{\theta}_{\mathcal{G}}(L)$ and $w(L)$ have been defined so far only when $L$ is trivializable as a vector bundle; moreover, these constructions cannot be extended to general $L$'s while preserving their main properties. Hence, when it comes to the canonical representations of $\mathcal{G}$, one can apply them only to $\mathcal{D}_{A}^{\textrm{tr}}$, for which one obtains \[ w(\mathcal{D}_{A}^{\textrm{tr}})= 1, \ \ \tilde{\theta}_{\mathcal{G}}(\mathcal{D}_{A}^{\textrm{tr}})= (\textrm{mod}(\mathcal{G}), 1).\] In the previous discussion we assumed that $L$ was trivializable as a vector bundle. To handle general $L$'s one can use covers $\mathcal{U}$ of $M$ by open subsets over which $L$ is trivializable.
Such an open cover induces a groupoid $\mathcal{G}_{\mathcal{U}}$ over the disjoint union of the open subsets in $\mathcal{U}$, obtained by pulling-back $\mathcal{G}$ along the canonical map from the disjoint union into $M$. The pull-back $L_{\mathcal{U}}$ of $L$ is a representation of $\mathcal{G}_{\mathcal{U}}$ and, by the choice of $\mathcal{U}$, one has well-defined classes \[ \tilde{\theta}(L_{\mathcal{U}})= (\theta(L_{\mathcal{U}}), w(L_{\mathcal{U}}))\in H^{1}_{\textrm{diff}}(\mathcal{G}_{\mathcal{U}}, \mathbb{R}^*)\cong H^{1}_{\textrm{diff}}(\mathcal{G}_{\mathcal{U}})\times H^{1}_{\textrm{diff}}(\mathcal{G}_{\mathcal{U}}, \mathbb{Z}_2).\] Note that, since $\mathcal{G}_{\mathcal{U}}$ is Morita equivalent to $\mathcal{G}$ (see Example $5.10$ in \cite{ieke_mrcun}), when passing from $L$ to $L_{\mathcal{U}}$ (as representations) one does not lose any information. To obtain a class that is independent of the covers one proceeds as usual and one passes to the filtered colimit (with respect to the refinement of covers) and defines \[ \check{\mathrm{H}}^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{R}^*)= \lim_{\to \mathcal{U}} H^{1}_{\textrm{diff}}(\mathcal{G}_{\mathcal{U}}, \mathbb{R}^*),\] and similarly $\check{\mathrm{H}}^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{Z}_2)$. The Morita invariance of differentiable cohomology with coefficients in $\mathbb{R}$ implies that the restriction to open subsets induces an isomorphism \[ H^{1}_{\textrm{diff}}(\mathcal{G})\cong H^{1}_{\textrm{diff}}(\mathcal{G}_{\mathcal{U}})\] (that sends $\theta_{\mathcal{G}}(L)$ to $\theta_{\mathcal{G}_{\mathcal{U}}}(L_{\mathcal{U}})$); hence there are induced cohomology classes \[ \check{\theta}_{\mathcal{G}}(L)\in \check{\mathrm{H}}^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{R}^*), \ \ w(L)\in \check{\mathrm{H}}^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{Z}_2)\] and canonical isomorphism of groups \[ \check{\mathrm{H}}^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{R}^*) \cong H^{1}_{\textrm{diff}}(\mathcal{G})\times \check{\mathrm{H}}^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{Z}_2)\] such that: \begin{itemize} \item $\check{\theta}_{\mathcal{G}}(L)= (\theta_{\mathcal{G}}(L), w(L))$. \item $\check{\theta}_{\mathcal{G}}(L)$ is trivial if and only if $L$ is isomorphic to the trivial representation. \end{itemize} Actually, $\check{\theta}_{\mathcal{G}}$ gives an isomorphism between the group $\textrm{Rep}^1(\mathcal{G})$ of isomorphism classes of $1$-dimensional representations (with the tensor product) with the \v{C}ech-type cohomology with coefficients in $\mathbb{R}^*$: \begin{equation}\label{tautol-class} \check{\theta}_{\mathcal{G}}: \textrm{Rep}^1(\mathcal{G}) \stackrel{\sim}{\rightarrow} \check{\mathrm{H}}^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{R}^*). \end{equation} Denoting $\mathbb{R}^*= GL_1(\mathbb{R})$ by $H$, this is a particular case of the interpretation of $\mathcal{G}$-equivariant principal $H$-bundles in terms of transition functions (see for example \cite{husemoller}), valid for any Lie group $H$, interpretation that is itself at the heart of Haefliger's work on the transverse geometry of foliations \cite{haefliger}. While $w$ is trivial on $\mathcal{D}_{A}^{\textrm{tr}}$, it gives rise to interesting information when applied to the transverse volume bundle or, equivalently, to the orientation one. 
\begin{definition} The \textbf{transverse first Stiefel-Whitney class} of $\mathcal{G}$ is: \[ w^{\textrm{tr}}_{1}(\mathcal{G}):= w(\mathcal{V}_{A}^{\textrm{tr}})= w(\mathfrak{o}_{A}^{\textrm{tr}})\in \check{\mathrm{H}}^{1}_{\textrm{diff}}(\mathcal{G}, \mathbb{Z}_2).\] \end{definition} As a consequence of the previous discussion, we state here the following: \begin{corollary} One has: \begin{enumerate} \item[1.] $\mathcal{G}$ is transversely orientable iff $w^{\mathrm{tr}}_{1}(\mathcal{G})= 1$. \item[2.] $\mathcal{G}$ admits transverse volume forms iff $\mathrm{mod}(\mathcal{G})= 0$ and $w^{\mathrm{tr}}_{1}(\mathcal{G})= 1$. \end{enumerate} \end{corollary} \noindent \textbf{Acknowledgements} This research was supported by the NWO Vici Grant no. 639.033.312. The second author was also supported by the FCT grant SFRH/BD/71257/2010 under the POPH/FSE programmes. We would also like to acknowledge various discussions with Rui Loja Fernandes, Ioan M\u{a}rcu\c{t} and David Mart\'inez Torres. \end{document}
math
\begin{document} \title{Derived non-archimedean analytic Hilbert space} \author{Jorge ANT\'ONIO} \address{Jorge Ant\'onio, Institut de Math\'ematiques de Toulouse, 118 Route de Narbonne, 31400 Toulouse} \email{\texttt{jorge\_tiago.ferrera\[email protected]}} \author{Mauro PORTA} \address{Mauro PORTA, Institut de Recherche Mathématique Avancée, 7 Rue René Descartes, 67000 Strasbourg, France} \email{[email protected]} \date{\today} \subjclass[2010]{Primary 14G22; Secondary 14A20, 18B25, 18F99} \keywords{derived geometry, rigid geometry, Hilbert scheme, formal models, derived generic fiber} \begin{abstract} In this short paper we combine the representability theorem introduced in \cite{Porta_Yu_Representability,Porta_Yu_Mapping} with the theory of derived formal models introduced in \cite{Antonio_Formal_models} to prove the representability of the derived Hilbert space $\mathbf{R} \mathrm{Hilb}(X)$ for a separated $k$-analytic\xspace space $X$. This representability result relies on a localization theorem stating that if $\mathfrak X$ is a quasi-compact and quasi-separated formal scheme, then the $\infty$-category\xspace $\mathrm{Coh}^+(\mathfrak X^\mathrm{rig})$ of almost perfect complexes over the generic fiber can be realized as a Verdier quotient of the $\infty$-category\xspace $\mathbb Coh^+(\mathfrak X)$. Along the way, we prove several results concerning the $\infty$-categories of formal models for almost perfect modules on derived $k$-analytic spaces. \end{abstract} \maketitle \personal{PERSONAL COMMENTS ARE SHOWN!!!} \tableofcontents \section{Introduction} Let $k$ be a non-archimedean field equipped with a non-trivial valuation of rank $1$. We let $k^\circ$ denote its ring of integers, $\mathfrak m$ an ideal of definition. We furthermore assume that $\mathfrak m$ is finitely generated. Given a separated $k$-analytic\xspace space $X$, we are concerned with the existence of the \emph{derived} moduli space $\mathbf{R} \mathrm{Hilb}(X)$, which parametrizes flat families of closed subschemes of $X$. The truncation of $\mathbf{R} \mathrm{Hilb}(X)$ coincides with the classical Hilbert scheme functor, $\mathrm{Hilb}(X)$, which has been shown to be representable by a $k$-analytic\xspace space in \cite{Conrad_Spreading-out}. On the other hand, in algebraic geometry the representability of the derived Hilbert scheme is an easy consequence of the Artin--Lurie representability theorem. In this paper, we combine the analytic version of Lurie's representability obtained by T.\ Y.\ Yu and the second author in \cite{Porta_Yu_Representability} with the theory of derived formal models developed by the first author in \cite{Antonio_Formal_models}. The only missing step is to establish the existence of the cotangent complex. Indeed, the techniques introduced in \cite{Porta_Yu_Mapping} allow us to prove the existence of the cotangent complex at points $x \colon S \to \mathbf{R} \mathrm{Hilb}(X)$ corresponding to families of closed subschemes $j \colon Z \hookrightarrow S \times X$ which are of finite presentation in the derived sense. However, not every point of $\mathbf{R} \mathrm{Hilb}(X)$ satisfies this condition: typically, we are concerned with families which are \emph{almost} of finite presentation. The difference between the two situations is governed by the relative analytic cotangent complex $\mathbb L^\mathrm{an}_{Z / S \times X}$: $Z$ is (almost) of finite presentation if $\mathbb L^\mathrm{an}_{Z/S \times X}$ is (almost) perfect.
We can explain the main difficulty as follows: if $p \colon Z \to S$ denotes the projection to $S$, then the cotangent complex of $\mathbf{R} \mathrm{Hilb}(X)$ at $x \colon S \to \mathbf{R} \mathrm{Hilb}(X)$ is computed by $p_+( \mathbb L^\mathrm{an}_{Z / S \times X} )$. Here, $p_+$ is a (partial) left adjoint for the functor $p^*$, which has been introduced in the $k$-analytic\xspace setting in \cite{Porta_Yu_Mapping}. However, in loc.\ cit.\ the functor $p_+$ has only been defined on perfect complexes, rather than on almost perfect complexes. From this point of view, the main contribution of this paper is to provide an extension of the construction $p_+$ to almost perfect complexes. Our construction relies heavily on the existence results for formal models of derived $k$-analytic\xspace spaces obtained by the first author in \cite{Antonio_Formal_models}. Along the way, we establish three results that we deem to be of independent interest, and which we briefly summarize below.\\ Let $\mathfrak X$ be a derived formal $k^\circ$-scheme topologically almost of finite presentation. One of the main constructions of \cite{Antonio_Moduli_of_representations, Antonio_Formal_models, antonio2019moduli} is the generic fiber $\mathfrak X^\mathrm{rig}$, which is a derived $k$-analytic\xspace space. The formalism introduced in loc.\ cit.\ provides as well an exact functor \begin{equation} \label{eq:generic_fiber_coherent_sheaves} (-)^\mathrm{rig} \colon \mathbb Coh^+(\mathfrak X) \longrightarrow \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) , \end{equation} where $\mathbb Coh^+$ denotes the stable $\infty$-category of almost perfect complexes on $\mathfrak X$ and on $\mathfrak X^\mathrm{rig}$. When $\mathfrak X$ is underived, this functor has been considered at length in \cite{Hennion_Porta_Vezzosi_Formal_gluing}, where in particular it has been shown to be essentially surjective, thereby extending the classical theory of formal models for coherent sheaves on $k$-analytic\xspace spaces. In this paper we extend this result to the case where $\mathfrak X$ is derived, which is a key technical step in our construction of the plus pushforward. In order to do so, we will establish the following descent statement, which is an extension of \cite[Theorem 7.3]{Hennion_Porta_Vezzosi_Formal_gluing}: \begin{thm-intro} \label{thm:intro_descent} The functor $\mathrm{Coh}^+_{\mathrm{loc}} \colon \mathrm{dfDM} \to \mathrm{Cat}_\infty^{\mathrm{st}}$, given on every derived formal Deligne-Mumford\xspace stack by \[ \mathfrak X \in \mathrm{dfDM} \mapsto \mathrm{Coh}^+(\mathfrak X^\mathrm{rig}) \in \mathrm{Cat}^\mathrm{st}_{\infty}, \] satisfies Zariski hyper-descent. \end{thm-intro} We refer the reader to \cref{thm:descent_punctured_category} for the precise statement. A consequence of \cref{thm:intro_descent} above is the following statement, concerning the properties of the $\infty$-categories of formal models for almost perfect complexes on $X \in \mathrm{dAn}_k$: \begin{thm-intro}[\cref{thm:formal_models_filtered}] \label{thm:intro_cat_formal_model} Let $X \in \mathrm{dAn}_k$ be a derived $k$-analytic space and let $\mathcal F \in \mathbb Coh^+(X)$ be a bounded below almost perfect complex on $X$. For any derived formal model $\mathfrak X$ of $X$, there exists $\mathcal G \in \mathbb Coh^+(\mathfrak X)$ and an equivalence $\mathcal G^\mathrm{rig} \simeq \mathcal F$.
Furthermore, the full subcategory of $\mathbb Coh^+(\mathfrak X) \times_{\mathbb Coh^+(X)} \mathbb Coh^+(X)_{/\mathcal F}$ spanned by formal models of $\mathcal F$ is filtered. \end{thm-intro} \cref{thm:intro_cat_formal_model} is another key technical ingredient in the proof of the existence of a plus pushforward construction. The third auxiliary result we need is a refinement of the existence theorem for formal models for morphisms of derived analytic spaces proven in \cite{Antonio_Formal_models}. It can be stated as follows: \begin{thm-intro}[\cref{thm:formal_model_flat_map}] \label{thm:intro_flat_lifting} Let $f \colon X \to Y$ be a flat map between derived $k$-analytic\xspace spaces. Then there are formal models $\mathfrak X$ and $\mathfrak Y$ for $X$ and $Y$, respectively, and a flat map $\mathfrak f \colon \mathfrak X \to \mathfrak Y$ whose generic fiber is equivalent to $f$. \end{thm-intro} The classical analogue of \cref{thm:intro_flat_lifting} was proven by Bosch and L\"utkebohmert in \cite{Bosch_Formal_IV}. The proof of this theorem is not entirely obvious: indeed the algorithm provided in \cite{Antonio_Formal_models} proceeds by induction on the Postnikov tower of both $X$ and $Y$, and at each step uses \cite[Theorem 7.3]{Hennion_Porta_Vezzosi_Formal_gluing} to choose appropriate formal models for $\pi_i(\mathcal O_X^\mathrm{alg})$ and $\pi_i(\mathcal O_Y^\mathrm{alg})$. In the current situation, however, the flatness requirement on $\mathfrak f$ makes it impossible to freely choose a formal model for $\pi_i(\mathcal O_X^\mathrm{alg})$. We circumvent the problem by proving a certain lifting property for morphisms of almost perfect complexes: \begin{thm-intro}[\cref{lift_p_maps}] Let $X \in \mathrm{dAn}_k$ be a derived $k$-analytic\xspace space and let $f \colon \mathcal F \to \mathcal G$ be a morphism in $\mathbb Coh^+(X)$. Let $\mathfrak X$ denote a given formal model for $X$. Suppose, furthermore, that we are given formal models $\widetilde{\mathcal F}, \widetilde{\mathcal G} \in \mathbb Coh^+(\mathfrak X)$ for $\mathcal F$ and $\mathcal G$, respectively. Then there exist a non-zero element $t \in \mathfrak m$ and an integer $n \geq 0$ such that the map $t^n f$ admits a lift $\widetilde{f} \colon \widetilde{\mathcal F} \to \widetilde{\mathcal G}$ in the $\infty$-category\xspace $\mathrm{Coh}^+(\mathfrak X)$. \end{thm-intro} Finally, the techniques of the current text allow us to prove the following generalization of \cite[Theorem 8.6]{Porta_Yu_Mapping}: \begin{thm-intro}[\cref{thm:rep_mapping}] Let $S$ be a rigid $k$-analytic\xspace space. Let $X,Y$ be rigid $k$-analytic\xspace spaces over $S$. Assume that $X$ is proper and flat over $S$ and that $Y$ is separated over $S$. Then the $\infty$-functor $\mathbf{Map}_S(X,Y)$ is representable by a derived $k$-analytic\xspace space separated over $S$. \end{thm-intro} \paragraph{\bf{Notation and conventions}} \todo{Modify: not all notations below are useful, and we need others. For instance, $\mathrm{fSch}_{k^\circ}$, $\mathbf A^1_{k^\circ} \coloneqq \Spf( k^\circ \{ T \} )$...} In this paper we freely use the language of $\infty$-categories. Although the discussion is often independent of the chosen model for $\infty$-categories, whenever needed we identify them with quasi-categories and refer to \cite{HTT} for the necessary foundational material. The notations $\mathcal S$ and $\mathbb Cat_\infty$ are reserved to denote the $\infty$-categories of spaces and of $\infty$-categories, respectively.
If $\mathcal C \in \mathbb Cat_\infty$ we denote by $\mathcal C^\simeq$ the maximal $\infty$-groupoid contained in $\mathcal C$. We let $\mathbb Cat_\infty^{\mathrm{st}}$ denote the $\infty$-category of stable $\infty$-categories with exact functors between them. We also let $\mathcal P \mathrm{r}^{\mathrm{L}}$ denote the $\infty$-category of presentable $\infty$-categories with left adjoints between them. Similarly, we let $\mathcal P \mathrm{r}^{\mathrm{L}}_{\mathrm{st}}$ denote the $\infty$-categories of stably presentable $\infty$-categories with left adjoints between them. Finally, we set \[ \mathbb Cat_\infty^{\mathrm{st}, \otimes} \coloneqq \mathbb CAlg( \mathbb Cat_\infty^{\mathrm{st}} ) \quad , \quad \mathcal P \mathrm r_{\mathrm{st}}^{\mathrm L, \otimes} \coloneqq \mathbb CAlg( \mathcal P \mathrm{r}^{\mathrm{L}}_{\mathrm{st}} ) . \] Given an $\infty$-category $\mathcal C$ we denote by $\mathrm{PSh}(\mathcal C)$ the $\infty$-category of $\mathcal S$-valued presheaves. We follow the conventions introduced in \cite[\S 2.4]{Porta_Yu_Higher_analytic_stacks_2014} for $\infty$-categories of sheaves on an $\infty$-site. For a field $k$, we reserve the notation $\mathbb CAlg_k$ for the $\infty$-category of simplicial commutative rings over $k$. We often refer to objects in $\mathbb CAlg_k$ simply as \emph{derived commutative rings}. We denote its opposite by $\mathrm{dAff}_k$, and we refer to it as the $\infty$-category of \emph{derived affine schemes}. We say that a derived ring $A \in \mathbb CAlg_k$ is \emph{almost of finite presentation} if $\pi_0(A)$ is of finite presentation over $k$ and $\pi_i(A)$ is a finitely presented $\pi_0(A)$-module.\footnote{Equivalently, $A$ is almost of finite presentation if $\pi_0(A)$ is of finite presentation and the cotangent complex $\mathbb L_{A/k}$ is an almost perfect complex over $A$.} We denote by $\mathrm{dAff}_k^{\mathrm{afp}}$ the full subcategory of $\mathrm{dAff}_k$ spanned by derived affine schemes $\Spec(A)$ such that $A$ is almost of finite presentation. When $k$ is either a non-archimedean field equipped with a non-trivial valuation or is the field of complex numbers, we let $\mathrm{An}_k$ denote the category of analytic spaces over $k$. We denote by $\Sp(k)$ the analytic space associated to $k$. \paragraph{\textbf{Acknowledgments}} We are grateful to B.\ Hennion, M.\ Robalo, B.\ To\"en, G.\ Vezzosi and T.\ Y.\ Yu for useful discussions related to the content of this paper. \todo{Other people?} We are especially grateful to B.\ To\"en for inviting the second author to Toulouse during the fall 2017, when the bulk of this research has been carried out and to IRMA, Universit\'e de Strasbourg, for inviting the first author to Strasbourg during the spring 2018. \todo{Acknowledge the PEPS.} \section{Preliminaries on derived formal and derived non-archimedean geometries} Let $k$ denote a non-archimedean field equipped with a rank $1$ valuation. We let $k^\circ = \{ x \in k : | x | \leq 1 \}$ denote its ring of integers. We assume that $k^\circ$ admits a finitely generated ideal of definition $\mathfrak m$. \mathbf egin{notation} \label{notation:pregeometries} \mathbf egin{enumerate} \item Let $R$ be a discrete commutative ring. Let $\mathcal Tdisc(R)$ denote the full subcategory of $R$-schemes spanned by affine spaces $\mathbb A^n_R$. We say that a morphism in $\mathcal Tdisc(R)$ is \emph{admissible} if it is an isomorphism. We endow $\mathcal Tdisc(R)$ with the trivial Grothendieck topology. 
\item Let $\mathcal Tad$ denote the full subcategory of $k^\circ$-schemes spanned by formally smooth formal schemes which are topologically finitely generated over $k^\circ$. A morphism in $\mathcal Tad$ is said to be \emph{admissible} if it is formally \'etale. We equip the category $\mathcal Tad$ with the formally \'etale topology, $\widetilde au_\mathrm{\acute{e}t}$. \item Denote $\mathcal Tan(k)$ the category of smooth $k$-analytic spaces. A morphism in $\mathcal Tan(k)$ is said to be \emph{admissible} if it is \'etale. We endow $\mathcal Tank$ with the \'etale topology, $\widetilde au_\mathrm{\acute{e}t}$. \end{enumerate} \end{notation} In what follows, we will let $\mathcal T$ denote either one of the categories introduced above. We let $\widetilde au$ denote the corresponding Grothendieck topology. \mathbf egin{defin} Let $\mathcal X$ be an $\infty$-topos\xspace. A \emph{$\mathcal T$-structure} on $\mathcal X$ is a functor $\mathcal O \colon \mathcal T \to \mathcal X$ which commutes with finite products, pullbacks along admissible morphisms and takes $\widetilde au$-coverings in effective epimorphisms. We denote by $\mathrm{St}r_{\mathcal T}( \mathcal X )$ the full subcategory of $\mathbb Fun_{\mathcal T} \left( \mathcal T, \mathcal X \mathrm{rig}ht)$ spanned by $\mathcal T$-structures. A \emph{$\mathcal T$-structured $\infty$-topos} is a pair $(\mathcal X, \mathcal O)$, where $\mathcal X$ is an $\infty$-topos and $\mathcal O \in \mathrm{St}r_\mathcal T(\mathcal X)$. \end{defin} We can assemble $\mathcal T$-structured $\infty$-topoi into an $\infty$-category denoted $\mathbb RTop(\mathcal T)$. We refer to \cite[Definition 1.4.8]{DAG-V} for the precise construction. \personal{Let $\mathbb RTop$ be the $\infty$-category\xspace of $\infty$-topoi together with right geometric morphisms: a morphism $f \colon \mathcal X \to \mathcal Y$ is a geometric morphism $f_* \colon \mathcal X \mathrm{rig}htleftarrows \mathcal Y \colon f^{-1}$ where $f_*$ is the right adjoint. The functor $\mathbb Fun(\mathcal T,-) \colon \mathbb Cat_\infty \to \mathbb Cat_\infty$ restricts to a functor \[ \mathbb Fun(\mathcal T,-) \colon \left(\mathbb RTop\mathrm{rig}ht)^\mathrm{op} \longrightarrow \mathbb Cat_\infty , \] which sends a geometric morphism $(f^{-1}, f_*)$ to the functor induced by composition with $f^{-1}$. Since the left adjoint of a geometric morphism preserves finite limits, it follows that it respects the full subcategories of $\mathcal T$-structures. In other words, we obtain a well defined functor \[ \mathrm{St}r_\mathcal T \colon \left( \mathbb RTop \mathrm{rig}ht)^\mathrm{op} \longrightarrow \mathbb Cat_\infty . \] This defines a Cartesian fibration $p_{\mathrm{St}r} \colon \mathbb RTop( \mathcal T) \to \mathbb RTop^\mathrm{op}$ and we can identify objects of $\mathbb RTop$ as pairs $(\mathcal X, \mathcal O)$, where $\mathcal X \in \mathbb RTop$ and $\mathcal O \in \mathrm{St}r_{\mathcal T} ( \mathcal X)$. We say that an object of $\mathbb RTop ( \mathcal T)$ is a \emph{$\mathcal T$-structured $\infty$-topos\xspace}.} \mathbf egin{defin} Let $\mathcal X$ be an $\infty$-topos. A morphism of $\mathcal T$-structures $\alpha \colon \mathcal O \to \mathcal O'$ is said to be \emph{local} if for every admissible morphism $f \colon U \to V$ in $\mathcal T$ the diagram \[ \mathbf egin{tikzcd} \mathcal O (U) \ar{r}{\mathcal O(f)} \ar{d}{\alpha_U} & \mathcal O(V) \ar{d}{\alpha_V} \\ \mathcal O'(U) \ar{r}{\mathcal O'(f)} & \mathcal O'(V) \end{tikzcd} \] is a pullback square in $\mathcal X$. 
We denote by $\mathrm{St}rloc_{\mathcal T} ( \mathcal X )$ the (non full) subcategory of $\mathrm{St}r_{\mathcal T} \left( \mathcal X \mathrm{rig}ht)$ spanned by local structures and local morphisms between these. \end{defin} \mathbf egin{eg} \label{eg1} \mathbf egin{enumerate} \item Let $R$ be a discrete commutative ring. A $\mathcal Tdisc(R)$-structure on an $\infty$-topos $\mathcal X$ is simply a product preserving functor $\mathcal O \colon \mathcal Tdisc(R) \to \mathcal X$. When $\mathcal X = \mathcal S$ is the $\infty$-topos of spaces, we can therefore use \cite[Proposition 5.5.9.2]{HTT} to identify the $\infty$-category $\mathrm{St}r_{\mathcal Tdisc(R)}(\mathcal X)$ with the underlying $\infty$-category $\mathbb CAlg_R$ of the model category of simplicial commutative $R$-algebras. It follows that $\mathrm{St}r_{\mathcal Tdisc(R)}(\mathcal X)$ is canonically identified with the $\infty$-category of sheaves on $\mathcal X$ with values in $\mathbb CAlg_R$. For this reason, we write $\mathbb CAlg_R(\mathcal X)$ rather than $\mathrm{St}rloc_{\mathcal Tdisc(R)}(\mathcal X)$. \item Let $\mathfrak X$ denote a formal scheme over $k^\circ$ complete along $t \in k^\circ$. Denote by $\mathfrak X_\mathrm{f\acute{e}t}$ the small formal \'etale site on $\mathfrak X$ and denote $\mathcal X := \mathrm{Sh}v(\mathfrak X_\mathrm{f\acute{e}t}, \widetilde au_\mathrm{\acute{e}t})^\wedge$ denote the hypercompletion of the $\infty$-topos of formally \'etale sheaves on $\mathfrak X$. We define a $\mathcal Tad$-structure on $\mathcal X$ as the functor which sends $U \in \mathfrak X_\mathrm{f\acute{e}t}$ to the sheaf $\mathcal O( U) \in \mathcal X$ defined by the association \[ V \in \mathfrak X_\mathrm{f\acute{e}t} \mapsto \Hom_{\mathrm{fSch}_{k^\circ}} \left( V, U \mathrm{rig}ht) \in \mathcal S. \] In this case, $\mathcal O( \mathbf A^1_{k^\circ})$ corresponds to the sheaf of functions on $\mathfrak X$ whose support is contained in the $t$-locus of $\mathfrak X$. \personal{We can as well perform a similar construction in the case where $\mathfrak X$ is Deligne-Mumford stack over $k^\circ$, complete along $t \in k^\circ$.} To simplify the notation, we write $\mathrm{fCAlg}_{k^\circ}(\mathcal X)$ rather than $\mathrm{St}rloc_{\mathcal Tad}(\mathcal X)$. \item Let $X$ be a $k$-analytic\xspace space and denote $X_\mathrm{\acute{e}t}$ the associated small \'etale site on $X$. Let $\mathcal X := \mathrm{Sh}v(X_\mathrm{\acute{e}t}, \widetilde auet)^{\wedge}$ denote the hypercompletion of the $\infty$-topos of \'etale sheaves on $X$. We can attach to $X$ a $\mathcal Tank$-structure on $\mathcal X$ as follows: given $U \in \mathcal Tank$, we define the sheaf $\mathcal O(U) \in \mathcal X$ by \[ X_\mathrm{\acute{e}t} \ni V \mapsto \Hom_{\mathrm{An}_k} \left( V, U \mathrm{rig}ht) \in \mathcal S . \] As in the previous case, we can canonically identify $\mathcal O(\mathbf A^1_k)$ with the usual sheaf of analytic functions on $X$. We write $\mathrm{AnRing}_k(\mathcal X)$ rather than $\mathrm{St}rloc_{\mathcal Tan(k)}(\mathcal X)$. \end{enumerate} \end{eg} \mathbf egin{construction} \label{construction:morphims_pregeometries} Let $\mathcal X$ be an $\infty$-topos. We can relate the $\infty$-categories $\mathrm{St}r_{\mathcal Tdisc(k^\circ)}(\mathcal X)$, $\mathrm{St}r_{\mathcal Tdisck}(\mathcal X)$, $\mathrm{St}r_{\mathcal Tad}(\mathcal X)$ and $\mathrm{St}r_{\mathcal Tank}(\mathcal X)$ as follows. 
Consider the following functors \mathbf egin{enumerate} \item the functor \[ - \otimes_{k^\circ} k \colon \mathcal Tdisc(k^\circ) \longrightarrow \mathcal Tdisck . \] induced by base change along the map $k^\circ \to k$. \item The functor \[ (-)^\wedge_t \colon \mathcal Tdisc(k^\circ) \longrightarrow \mathcal Tad . \] induced by the $(t)$-completion. \item The functor \[ (-)^\mathrm{an} \colon \mathcal Tdisck \longrightarrow \mathcal Tank , \] induced by the analytification. \item The functor \[ (-)^\mathrm{rig} \colon \mathcal Tad \longrightarrow \mathcal Tank \] induced by Raynaud's generic fiber construction (cf.\ \cite[Theorem 8.4.3]{Bosch_Lectures_2014}). \end{enumerate} These functors respect the classes of admissible morphisms and are continuous morphisms of sites. It follows that precomposition with them induce well defined functors \mathbf egin{gather*} \mathrm{St}r_{\mathcal Tdisck}(\mathcal X) \longrightarrow \mathrm{St}r_{\mathcal Tdisc(k^\circ)}(\mathcal X) \quad , \quad (-)^\mathrm{alg} \colon \mathrm{St}r_{\mathcal Tad}(\mathcal X) \longrightarrow \mathrm{St}r_{\mathcal Tdisc(k^\circ)}(\mathcal X) \\ (-)^+ \colon \mathrm{St}r_{\mathcal Tank}(\mathcal X) \longrightarrow \mathrm{St}r_{\mathcal Tad}(\mathcal X) \quad , \quad (-)^\mathrm{alg} \colon \mathrm{St}r_{\mathcal Tank}(\mathcal X) \longrightarrow \mathrm{St}r_{\mathcal Tdisck}(\mathcal X) . \end{gather*} The first functor simply forgets the $k$-algebra structure to a $k^\circ$-algebra one via the natural map $k^\circ \to k$. We refer to the second and fourth functors as the \emph{underlying algebra functors}. The third functor is an analogue of taking the subring of power-bounded elements in rigid geometry. \end{construction} Using the underlying algebra functors introduced in the above construction, we can at last introduce the definitions of derived formal scheme and derived $k$-analytic\xspace space. They are analogous to each other: \mathbf egin{defin} A $\mathcal Tad$-structured $\infty$-topos $\mathfrak X \coloneqq (\mathcal X, \mathcal O_\mathfrak X)$ is said to be a \emph{derived formal Deligne-Mumford\xspace $k^\circ$-stack} if there exists a collection of objects $\{U_i\}_{i \in I}$ in $\mathcal X$ such that $\coprod_{i \in I} U_i \to \mathbf 1_\mathcal X$ is an effective epimorphism and the following conditions are met: \mathbf egin{enumerate} \item for every $i \in I$, the $\mathcal Tad$-structured $\infty$-topos $(\mathcal X_{/U_i}, \pi_0(\mathcal O_\mathfrak X |_{U_i}))$ is equivalent to the $\mathcal Tad$-structured $\infty$-topos arising from an affine formal $k^\circ$-scheme via the construction given in \cref{eg1}. \item For each $i \in I$ and each integer $n \ge 0$, the sheaf $\pi_n( \mathcal O_\mathfrak X^\mathrm{alg} |_{U_i} )$ is a quasi-coherent sheaf over $(\mathcal X_{/U_i}, \pi_0( \mathcal O_\mathfrak X |_{U_i} ))$. \end{enumerate} We say that $\mathfrak X = (\mathcal X, \mathcal O_\mathfrak X)$ is a \emph{formal derived $k^\circ$-scheme} if it is a derived formal Deligne Mumford stack and furthermore its truncation $\mathrm{t}_0(\mathfrak X) \coloneqq (\mathcal X, \pi_0(\mathcal O_\mathfrak X))$ is equivalent to the $\mathcal Tad$-structured $\infty$-topos associated to a formal scheme via \cref{eg1}. 
\end{defin} \mathbf egin{defin} A $\mathcal Tank$-structured $\infty$-topos $X \coloneqq (\mathcal X, \mathcal O_X)$ is said to be a \emph{derived $k$-analytic\xspace space} if $\mathcal X$ is hypercomplete and there exists a collection of objects $\{U_i\}_{i \in I}$ in $\mathcal X$ such that $\coprod_{i \in I} U_i \to \mathbf 1_\mathcal X$ is an effective epimorphism and the following conditions are met: \mathbf egin{enumerate} \item for each $i \in I$, the $\mathcal Tank$-structured $\infty$-topos $( \mathcal X_{ / U_i}, \pi_0( \mathcal O_X |_{U_i} ) )$ is equivalent to the $\mathcal Tank$-structured $\infty$-topos arising from an ordinary $k$-analytic\xspace space via the construction given in \cref{eg1}. \item For each $i \in I$ and each integer $n \geq 0$, the sheaf $ \pi_n(\mathcal O_X^\mathrm{alg} |_{U_i})$ is a coherent sheaf on $(\mathcal X_{/U_i}, \mathcal O_X |_{U_i})$. \end{enumerate} \end{defin} \mathbf egin{thm}[{cf.\ \cite{Antonio_Formal_models,DAG-IX,Porta_Yu_DNAnG_I} }] Derived formal Deligne-Mumford\xspace $k^\circ$-stacks and derived $k$-analytic\xspace spaces assemble into $\infty$-categories, denoted respectively $\mathrm{dfDM}_{k^\circ}$ and $\mathrm{dAn}_k$, which enjoy the following properties: \mathbf egin{enumerate} \item fiber products exist in both $\mathrm{dfDM}_{k^\circ}$ and $\mathrm{dAn}k$; \item The constructions given in \cref{eg1} induce full faithful embeddings from the categories of ordinary formal Deligne-Mumford\xspace $k^\circ$-stacks $\mathrm{fDM}_{k^\circ}$ and of ordinary $k$-analytic\xspace spaces $\mathrm{An}_k$ in $\mathrm{dfDM}_{k^\circ}$ and $\mathrm{dAn}_k$, respectively. \end{enumerate} \end{thm} Following \cite[\S 8.1]{Lurie_SAG}, we let $\mathbb CAlgad$ denote the $\infty$-category of simplicial commutative rings equipped with an adic topology on their $0$-th truncation. Morphisms are morphisms of simplicial commutative rings that are furthermore continuous for the adic topologies on their $0$-th truncations. We set \[ \mathbb CAlgad_{k^\circ} \coloneqq \mathbb CAlgad_{k^\circ/} , \] where we regard $k^\circ$ equipped with its $\mathfrak m$-adic topology. Thanks to \cite[Remark 3.1.4]{Antonio_Formal_models}, the underlying algebra functor $(-)^\mathrm{alg} \colon \mathrm{fCAlg}_{k^\circ}(\mathcal X) \to \mathbb CAlg_{k^\circ}(\mathcal X)$ factors through $\mathbb CAlgad_{k^\circ}(\mathcal X)$. \personal{Fix $A \in \mathrm{fCAlg}_{k^\circ}(\mathcal X)$. Then we are \emph{not} equipping $A^\mathrm{alg}$ with the $(t)$-adic topology (although this would indeed give a functor $\mathrm{fCAlg}_{k^\circ}(\mathcal X) \to \mathbb CAlgad_{k^\circ}(\mathcal X)$, except that it would be induced by $\mathbb CAlg_{k^\circ}(\mathcal X) \to \mathbb CAlgad_{k^\circ}(\mathcal X)$, which is not what we want). To understand this functor, consider the reduction $k^\circ_n$ of $k^\circ$ modulo $(t^n)$. Then for every $n \ge 1$ we have $k^\circ \to k^\circ_n$, which induces a transformation of pregeometries $\mathcal Tad \to \mathcal Tet(k^\circ_n)$, and therefore a functor \[ L_n \colon \mathrm{fCAlg}_{k^\circ}(\mathcal X) \longrightarrow \mathbb CAlg_{k^\circ_n}(\mathcal X) , \] which satisfies \[ L_n(A) \simeq A^\mathrm{alg} \otimes_{k^\circ} k^\circ_n . \] For every $n$, consider $I_n \coloneqq \ker( \pi_0( A^\mathrm{alg} ) \to \pi_0( A^\mathrm{alg} \otimes_{k^\circ} k^\circ_n ) )$. Then the sequence $\{I_n\}$ defines an adic topology on $A^\mathrm{alg}$, which is compatible with the $(t)$-adic topology on $k^\circ$. 
It follows that $(A^\mathrm{alg}, \{I_n\})$ defines an element in $\mathbb CAlgad_{k^\circ}(\mathcal X)$.} We denote by $(-)^{\mathrm{ad}}$ the resulting functor: \[ (-)^{\mathrm{ad}} \colon \mathrm{fCAlg}_{k^\circ}(\mathcal X) \longrightarrow \mathbb CAlgad_{k^\circ}(\mathcal X) . \] \mathbf egin{defin} Let $A \in \mathrm{fCAlg}_{k^\circ}(\mathcal X)$. We say that $A$ is \emph{topologically almost of finite type over $k^\circ$} if the underlying sheaf of $k^\circ$-adic algebras $A^{\mathrm{ad}}$ is $t$-complete, $\pi_0(A^\mathrm{alg})$ is sheaf of topologically of finite type $k^\circ$-adic algebras and for each $i > 0$, $\pi_i(A)$ is finitely generated as $\pi_0(A)$-module. We say that a derived formal Deligne-Mumford\xspace stack $\mathfrak X \coloneqq (\mathcal X, \mathcal O_\mathfrak X)$ if \emph{topologically almost of finite type over $k^\circ$} if its underlying $\infty$-topos is coherent (cf.\ \cite[\S 3]{DAG-VII}) and $\mathcal O_\mathfrak X \in \mathrm{fCAlg}_{k^\circ}(\mathcal X)$ is topologically almost of finite type over $k^\circ$. We denote by $\mathrm{dfDM}^{\mathrm{taft}}$ (resp.\ $\mathrm{dfSch}^{\mathrm{taft}}$) the full subcategory of $\mathrm{dfDM}_{k^\circ}$ spanned by those derived formal Deligne-Mumford\xspace stacks $\mathfrak X$ that are topologically almost of finite type over $k^\circ$ (resp.\ and whose truncation $\mathrm{t}_0(\mathfrak X)$ is equivalent to a formal $k^\circ$-scheme). \end{defin} The transformation of pregeometries \[ \mathrm{rig}g \colon \mathcal Tad \longrightarrow \mathcal Tank \] induced by Raynaud's generic fiber functor induces $\mathbb RTop(\mathcal Tank) \to \mathbb RTop(\mathcal Tad)$. \cite[Theorem 2.1.1]{DAG-V} provides a right adjoint to this last functor, which we still denote \[ \mathrm{rig}g \colon \mathbb RTop(\mathcal Tad) \longrightarrow \mathbb RTop(\mathcal Tank) . \] We refer to this functor as the \emph{derived generic fiber functor} or as the \emph{derived rigidification functor}. \mathbf egin{thm}[{\cite[Corollary 4.1.4, Proposition 4.1.6]{Antonio_Formal_models}}] \label{prop21} The functor $\mathrm{rig}g \colon \mathbb RTop(\mathcal Tad) \to \mathbb RTop(\mathcal Tank)$ enjoys the following properties: \mathbf egin{enumerate} \item it restricts to a functor \[ \mathrm{rig}g \colon \mathrm{dfDM}^{\mathrm{taft}} \longrightarrow \mathrm{dAn}_k . \] \item The restriction of $\mathrm{rig}g \colon \mathrm{dfDM}^{\mathrm{taft}} \to \mathrm{dAn}_k$ to the full subcategory $\mathrm{fSch}_{k^\circ}^{\mathrm{taft}}$ is canonically equivalent to Raynaud's generic fiber functor. \item Every derived analytic space $X \in \mathrm{dAn}_k$ whose truncation is an ordinary $k$-analytic\xspace space\footnote{The $\infty$-category $\mathrm{dAn}k$ also contains $k$-analytic\xspace Deligne-Mumford\xspace stacks.} lies in the essential image of the functor $\mathrm{rig}g$. \end{enumerate} \end{thm} Fix a derived formal Deligne-Mumford\xspace stack $\mathfrak X \coloneqq (\mathcal X, \mathcal O_{\mathfrak X})$ and a derived $k$-analytic\xspace space $Y \coloneqq (\mathcal Y, \mathcal O_Y)$. We set \[ \mathcal O_{\mathfrak X} \textrm{-} \mathrm{Mod} \coloneqq \mathcal O_{\mathfrak X}^\mathrm{alg} \textrm{-} \mathrm{Mod} \quad , \quad \mathcal O_Y \textrm{-} \mathrm{Mod} \coloneqq \mathcal O_Y^\mathrm{alg} \textrm{-} \mathrm{Mod} . \] We refer to $\mathcal O_{\mathfrak X} \textrm{-} \mathrm{Mod}$ as the \emph{stable $\infty$-category of $\mathcal O_{\mathfrak X}$-modules}. 
Similarly, we refer to $\mathcal O_Y \textrm{-} \mathrm{Mod}$ as the \emph{stable $\infty$-category of $\mathcal O_Y$-modules}. The derived generic fiber functor induces a functor \[ \mathrm{rig}g \colon \mathcal O_{\mathfrak X} \textrm{-} \mathrm{Mod} \longrightarrow \mathcal O_{\mathfrak X^\mathrm{rig}} \textrm{-} \mathrm{Mod} . \] \mathbf egin{defin} Let $\mathfrak X \in \mathrm{dfDM}_{k^\circ}$ be a derived $k^\circ$-adic Deligne-Mumford\xspace stack and let $X \in \mathrm{dAn}k$ be a derived $k$-analytic\xspace space. The $\infty$-category $\mathbb Coh^+(\mathfrak X)$ (resp.\ $\mathbb Coh^+(X)$) of almost perfect complexes on $\mathfrak X$ (resp.\ on $X$) is the full subcategory of $\mathcal O_{\mathfrak X} \textrm{-} \mathrm{Mod}$ (resp.\ of $\mathcal O_X \textrm{-} \mathrm{Mod}$) spanned by those $\mathcal O_{\mathfrak X}$-modules (resp.\ $\mathcal O_X$-modules) $\mathcal F$ such that $\pi_i( \mathcal F )$ is a coherent sheaf on $\mathrm{t}_0(\mathfrak X)$ (resp.\ on $\mathrm{t}_0(X)$) for every $i \in \mathbb Z$ and $\pi_i(\mathcal F) \simeq 0$ for $i \ll 0$. \end{defin} For later use, let us record the following result: \mathbf egin{prop}[{\cite{Lurie_SAG} \& \cite[Theorem 3.4]{Porta_Yu_Mapping}}] \label{prop:modules_on_affines_and_affinoid} Let $\mathfrak X$ be a derived affine $k^\circ$-adic scheme. Let $A \coloneqq \Gamma( \mathfrak X; \mathcal O_\mathfrak X^\mathrm{alg})$. Then the functor $\Gamma(\mathfrak X;-)$ restricts to \[ \mathbb Coh^+( \mathfrak X ) \longrightarrow \mathbb Coh^+( A ) \] and furthermore this is an equivalence. Similarly, if $X$ is a derived $k$-affinoid space,\footnote{By definition, $X$ is a derived $k$-affinoid space if $\mathrm{t}_0(X)$ is a $k$-affinoid space.} and $B \coloneqq \Gamma(X; \mathcal O_X^\mathrm{alg})$, then $\Gamma(X;-)$ restricts to \[ \mathbb Coh^+( X ) \longrightarrow \mathbb Coh^+( B ) , \] and furthermore this is an equivalence. \end{prop} To complete this short review, we briefly discuss the notion of the $k^\circ$-adic and $k$-analytic\xspace cotangent complexes. The two theories are parallel, and for sake of brevity we limit ourselves to the first one. We refer to the introduction of \cite{Porta_Yu_Representability} for a more thorough review of the $k$-analytic\xspace theory. In \cite[\S 3.4]{Antonio_Formal_models} it was constructed a functor \[ \Omega^\infty_{\mathrm{ad}} \colon \mathcal O_{\mathfrak X} \textrm{-} \mathrm{Mod} \longrightarrow \mathrm{fCAlg}_{k^\circ}(\mathcal X)_{/\mathcal O_\mathfrak X} , \] which we refer to as the \emph{$k^\circ$-adic split square-zero extension functor}. Given $\mathcal F \in \mathcal O_{\mathfrak X} \textrm{-} \mathrm{Mod}$, we often write $\mathcal O_{\mathfrak X} ^\mathrm{op}lus \mathcal F$ instead of $\Omega^\infty_{\mathrm{ad}}(\mathcal F)$. \mathbf egin{rem} Although the $\infty$-category $\mathcal O_{\mathfrak X} \textrm{-} \mathrm{Mod}$ is \emph{not} sensitive to the $\mathcal Tad$-structure on $\mathcal O_{\mathfrak X}$, the functor $\Omega^\infty_{\mathrm{ad}}$ depends on it in an essential way. \end{rem} \mathbf egin{defin} The functor of \emph{$k^\circ$-adic derivations} is the functor \[ \mathrm{Der}^{\mathrm{ad}}_{k^\circ}(\mathfrak X;-) \colon \mathcal O_{\mathfrak X} \textrm{-} \mathrm{Mod} \longrightarrow \mathcal S \] defined by \[ \mathrm{Der}^{\mathrm{ad}}_{k^\circ}(\mathfrak X;\mathcal F) \coloneqq \Map_{\mathrm{fCAlg}_{k^\circ}(\mathcal X)_{/\mathcal O_\mathfrak X}}( \mathcal O_\mathfrak X, \mathcal O_\mathfrak X ^\mathrm{op}lus \mathcal F ) . 
\] \end{defin} For formal reasons, the functor $\mathrm{Der}^{\mathrm{ad}}_{k^\circ}(\mathfrak X;-)$ is corepresentable by an object $\mathrm{adic}L_\mathfrak X \in \mathcal O_{\mathfrak X} \textrm{-} \mathrm{Mod}$. We refer to it as the \emph{$k^\circ$-adic cotangent complex of $\mathfrak X$}. The following theorem summarizes its main properties: \mathbf egin{thm}[{\cite[Proposition 3.4.4, Corollary 4.3.5, Proposition 3.5.8]{Antonio_Formal_models}}] \label{thm:adic_and_analytic_cotangent_complex} Let $\mathfrak X \coloneqq (\mathcal X, \mathcal O_\mathfrak X)$ be a derived $k^\circ$-adic Deligne-Mumford\xspace stack. Let $\mathrm t_{\le n} \mathfrak X \coloneqq (\mathcal X, \widetilde au_{\le n} \mathcal O_\mathfrak X)$ be the $n$-th truncation of $\mathfrak X$. Then: \mathbf egin{enumerate} \item the $k^\circ$-adic cotangent complex $\mathrm{adic}L_\mathfrak X$ belongs to $\mathbb Coh^+(\mathfrak X)$; \item in $\mathbb Coh^+( \mathfrak X^\mathrm{rig} )$ there is a canonical equivalence \[ ( \mathrm{adic}L_\mathfrak X )^\mathrm{rig} \simeq \mathbb L^\mathrm{an}_{\mathfrak X^\mathrm{rig}} , \] where $\mathbb L^\mathrm{an}_{\mathfrak X^\mathrm{rig}}$ denotes the analytic cotangent complex of the derived $k$-analytic\xspace space $\mathfrak X^\mathrm{rig}$; \item the algebraic derivation classifying canonical map $(\mathfrak X, \widetilde au_{\le n+1} \mathcal O_\mathfrak X) \to (\mathfrak X, \widetilde au_{\le n} \mathcal O_\mathfrak X)$ can be canonically lifted to a $k^\circ$-adic derivation \[ \mathrm{adic}L_{\mathrm t_{\le n} \mathfrak X} \longrightarrow \pi_{n+1}( \mathcal O_\mathfrak X )[n+2] . \] \end{enumerate} \end{thm} \section{Formal models for almost perfect complexes} \subsection{Formal descent statements} We assume that $k^\circ$ admits a finitely generated ideal of definition $\mathfrak m$. We also fix a set of generators $t_1, \ldots, t_n \in \mathfrak m$. We start by recalling the notion of $\mathfrak m$-nilpotent almost perfect complexes. \mathbf egin{defin} Let $\mathfrak X$ be a derived $k^\circ$-adic Deligne-Mumford\xspace stack topologically almost of finite presentation. We let $\mathbb Coh^+_{\mathrm{nil}}( \mathfrak X )$ denote the fiber of the generic fiber functor \eqref{eq:generic_fiber_coherent_sheaves}: \[ \mathbb Coh^+_{\mathrm{nil}}( \mathfrak X ) \coloneqq \mathrm{fib}\left( \mathbb Coh^+( \mathfrak X ) \xrightarrow{\mathrm{rig}g} \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) \mathrm{rig}ht) . \] We refer to $\mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$ as the full subcategory of $\mathfrak m$-nilpotent almost perfect complexes on $X$. \end{defin} A morphism $\mathfrak f \colon \mathfrak X \to \mathfrak Y$ in $\mathrm{dfDM}_{k^\circ}^{\mathrm{taft}}$ induces a commutative diagram \mathbf egin{equation} \label{eq:naturality_rigg} \mathbf egin{tikzcd} \mathbb Coh^+(\mathfrak Y) \arrow{r}{\mathfrak f^*} \arrow{d}{\mathrm{rig}g} & \mathbb Coh^+( \mathfrak X ) \arrow{d}{\mathrm{rig}g} \\ \mathbb Coh^+( \mathfrak Y^\mathrm{rig} ) \arrow{r}{(\mathfrak f^\mathrm{rig})^*} & \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) . \end{tikzcd} \end{equation} In particular, we see that $\mathfrak f^*$ preserves the subcategory of $\mathfrak m$-nilpotent almost perfect complexes on $X$. Moreover, as both $\mathbb Coh^+( \mathfrak X )$ and $\mathbb Coh^+( \mathfrak X^\mathrm{rig} )$ satisfy \'etale descent, we conclude that $\mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$ satisfies \'etale descent as well. 
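\begin{rem} As a simple illustration of the previous definition (we only sketch the argument), let $\mathcal F \in \mathbb Coh^+(\mathfrak X)$ and let $t \in \mathfrak m$ be a non-zero element. Since $t$ becomes invertible on the generic fiber and the functor \eqref{eq:generic_fiber_coherent_sheaves} is exact, we obtain \[ \mathrm{cofib}\left( t^n \colon \mathcal F \to \mathcal F \right)^\mathrm{rig} \simeq \mathrm{cofib}\left( t^n \colon \mathcal F^\mathrm{rig} \to \mathcal F^\mathrm{rig} \right) \simeq 0 \] for every $n \geq 1$. In other words, $\mathrm{cofib}( t^n \colon \mathcal F \to \mathcal F )$ belongs to $\mathbb Coh^+_{\mathrm{nil}}( \mathfrak X )$. \end{rem}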
\mathbf egin{lem} \label{lem:characterization_nilpotent_almost_perfect_modules} Let $\mathfrak X$ be a derived $k^\circ$-adic Deligne-Mumford\xspace stack. Then an almost perfect sheaf $\mathcal F \in \mathbb Coh^+(X)$ is $\mathfrak m$-nilpotent if and only if for every $i \in \mathbb Z$ the coherent sheaf $\pi_i(\mathcal F)$ is annihilated by some power of the ideal $\mathfrak m$. \end{lem} \mathbf egin{proof} The question is \'etale local on $\mathfrak X$. In particular, we can assume that $\mathfrak X$ is a derived formal affine scheme topologically of finite presentation. Write \[ A \coloneqq \Gamma( \mathfrak X, \mathcal O_\mathfrak X^\mathrm{alg} ) . \] Let $X \coloneqq \mathfrak X^\mathrm{rig}$. Then \cite[Corollary 4.1.3]{Antonio_Formal_models} shows that \[ \mathrm{t}_0(\mathfrak X^\mathrm{rig}) \simeq ( \mathrm{t}_0(\mathfrak X) )^\mathrm{rig} . \] In particular, we deduce that $X$ is a derived $k$-affinoid space. Write \[ B \coloneqq \Gamma( X, \mathcal O_X^\mathrm{alg} ) . \] We can therefore use \cref{prop:modules_on_affines_and_affinoid} to obtain canonical equivalences \[ \mathbb Coh^+( \mathfrak X ) \simeq \mathbb Coh^+( A^\mathrm{alg} ) \quad , \quad \mathbb Coh^+( X ) \simeq \mathbb Coh^+( B ) . \] Under these identifications, the functor $\mathrm{rig}g$ becomes equivalent to the base change functor \[ - \otimes_A B \colon \mathbb Coh^+(A) \longrightarrow \mathbb Coh^+(B) . \] Moreover, it follows from \cite[Proposition A.1.4]{Antonio_Formal_models} that there is a canonical identification \[ B \simeq A \otimes_{k^\circ} k . \] In particular, $\mathrm{rig}g \colon \mathbb Coh^+( \mathfrak X ) \to \mathbb Coh^+( X )$ is $t$-exact. The conclusion is now straightforward. \end{proof} \mathbf egin{defin} Let $\mathfrak X$ be a derived $k^\circ$-adic Deligne-Mumford\xspace stack. Let $\mathcal F \in \mathbb Coh^+(\mathfrak X^\mathrm{rig})$. We say that $\mathfrak F \in \mathbb Coh^+(\mathfrak X)$ is a formal model for $\mathcal F$ if there exists an equivalence $\mathfrak F^\mathrm{rig} \simeq \mathcal F$ in $\mathbb Coh^+(\mathfrak X^\mathrm{rig})$. We let $\mathbb FormalModels(\mathcal F)$ denote the full subcategory of \[ \mathbb Coh^+(\mathfrak X)_{/\mathcal F} \coloneqq \mathbb Coh^+( \mathfrak X ) \times_{\mathbb Coh^+( \mathfrak X^\mathrm{rig} )} \mathbb Coh^+( \mathfrak X^\mathrm{rig} )_{/\mathcal F} \] spanned by formal models of $\mathcal F$. \end{defin} Our goal in this section is to study the structure of $\mathbb FormalModels(\mathcal F)$, and in particular to establish that it is non-empty and filtered when $\mathfrak X$ is a quasi-compact and quasi-separated derived $k^\circ$-adic scheme. Notice that saying that $\mathbb FormalModels(\mathcal F)$ is non-empty for every choice of $\mathcal F \in \mathbb Coh^+(X)$ is equivalent to asserting that the functor \eqref{eq:generic_fiber_coherent_sheaves} \[ \mathrm{rig}g \colon \mathbb Coh^+( \mathfrak X ) \longrightarrow \mathbb Coh^+( X ) \] is essentially surjective. \mathbf egin{lem} \label{lem:existence_formal_models_affine} If $\mathfrak X$ is a derived $k^\circ$-affine scheme topologically almost of finite presentation, then the functor \eqref{eq:generic_fiber_coherent_sheaves} is essentially surjective. \end{lem} \mathbf egin{proof} We let \[ A \coloneqq \Gamma( \mathfrak X, \mathcal O_\mathfrak X^\mathrm{alg} ) \quad , \quad B \coloneqq \Gamma( \mathfrak X^\mathrm{rig}, \mathcal O_{\mathfrak X^\mathrm{rig}} ) . 
\] Then as in the proof of \cref{lem:characterization_nilpotent_almost_perfect_modules}, we have identifications $\mathbb Coh^+(\mathfrak X) \simeq \mathbb Coh^+(A)$ and $\mathbb Coh^+( \mathfrak X^\mathrm{rig} ) \simeq \mathbb Coh^+(B)$, and under these identifications the functor $\mathrm{rig}g$ becomes equivalent to \[ - \otimes_A B \colon \mathbb Coh^+(A) \longrightarrow \mathbb Coh^+(B) . \] As $B \simeq A \otimes_{k^\circ} k$, we see that $A \to B$ is a Zariski open immersion. The conclusion now follows from \cite[Theorem 2.12]{Hennion_Porta_Vezzosi_Formal_gluing}. \end{proof} To complete the proof of the non-emptiness of $\mathbb FormalModels(\mathcal F)$, it would be enough to know that the essential image of the functor $\mathbb Coh^+(\mathfrak X) \to \mathbb Coh^+(\mathfrak X^\mathrm{rig})$ satisfies descent. This is analogous to \cite[Theorem 7.3]{Hennion_Porta_Vezzosi_Formal_gluing}. \mathbf egin{defin} Let $\mathfrak X$ be a derived $k^\circ$-adic Deligne-Mumford\xspace stack locally topologically almost of finite presentation. We define the stable $\infty$-category $\mathbb Coh^+_{\mathrm{loc}}( \mathfrak X )$ of \emph{$\mathfrak m$-local almost perfect complexes} as the cofiber \[ \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) \coloneqq \mathrm{cofib} \left( \mathbb Coh^+_{\mathrm{nil}}( \mathfrak X ) \hookrightarrow \mathbb Coh^+( \mathfrak X ) \mathrm{rig}ht) . \] We denote by $\mathrm L \colon \mathbb Coh^+(\mathfrak X) \to \mathbb Coh^+_{\mathrm{loc}}(\mathfrak X)$ the canonical functor. We refer to $\mathrm L$ as the \emph{localization functor}. \end{defin} We summarize below the formal properties of $\mathfrak m$-local almost perfect complexes: \mathbf egin{prop} \label{prop:local_almost_perfect_complexes_formal_properties} Let $\mathfrak X$ be a derived $k^\circ$-adic Deligne-Mumford\xspace-stack locally topologically almost of finite presentation. Then: \mathbf egin{enumerate} \item there exists a unique $t$-structure on the stable $\infty$-category $\mathbb Coh^+_{\mathrm{loc}}( \mathfrak X )$ having the property of making the localization functor \[ \mathrm L \colon \mathbb Coh^+(\mathfrak X) \longrightarrow \mathbb Coh^+_{\mathrm{loc}}(\mathfrak X) \] $t$-exact. \item The functor $\mathrm{rig}g \colon \mathbb Coh^+(\mathfrak X) \to \mathbb Coh^+(\mathfrak X^\mathrm{rig})$ factors as \[ \Lambda \colon \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) \longrightarrow \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) . \] Moreover, the essential images of $\mathrm{rig}g$ and $\Lambda$ coincide. \item If $\mathfrak X$ is affine, then the functor $\Lambda$ is an equivalence. \end{enumerate} \end{prop} \mathbf egin{proof} We start by proving (1). Using \cite[Corollary 2.9]{Hennion_Porta_Vezzosi_Formal_gluing} we have to check that the $t$-structure on $\mathbb Coh^+(\mathfrak X)$ restricts to a $t$-structure on $\mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$ and that the inclusion \[ i \colon \mathbb Cohh_{\mathrm{nil}}(\mathfrak X) \longhookrightarrow \mathbb Cohh(\mathfrak X) \] admits a right adjoint $R$ whose counit $i( R( X ) ) \to X$ is a monomorphism for every $X \in \mathbb Cohh(\mathfrak X)$. For the first statement, we remark that it is enough to check that the functor $\mathrm{rig}g \colon \mathbb Coh^+( \mathfrak X ) \to \mathbb Coh^+( \mathfrak X^\mathrm{rig} )$ is $t$-exact. As both $\mathbb Coh^+( \mathfrak X )$ and $\mathbb Coh^+( \mathfrak X^\mathrm{rig} )$ satisfy \'etale descent in $\mathfrak X$, we can test this locally on $\mathfrak X$. 
When $\mathfrak X$ is affine, the assertion follows directly from \cref{prop:modules_on_affines_and_affinoid}. As for the second statement, we first observe that \[ \mathbb Cohh( \mathfrak X ) \simeq \mathbb Cohh( \mathrm{t}_0( \mathfrak X ) ) . \] We can therefore assume that $\mathfrak X$ is underived. At this point, the functor $R$ can be explicitly described as the functor sending $\mathfrak F\in \mathbb Cohh( \mathfrak X )$ to the subsheaf of $\mathfrak F$ spanned by $\mathfrak m$-nilpotent sections. The proof of (1) is thus complete. We now turn to the proof of (2). The existence of $\Lambda$ and the factorization $\mathrm{rig}g \simeq \Lambda \circ \mathrm L$ follow from the definitions. Moreover, $\mathrm L \colon \mathbb Coh^+( \mathfrak X ) \to \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X )$ is essentially surjective (cf.\ \cite[Lemma 2.3]{Hennion_Porta_Vezzosi_Formal_gluing}). It follows that the essential images of $\mathrm{rig}g$ and of $\Lambda$ coincide. Finally, (3) follows directly from \cref{prop:modules_on_affines_and_affinoid} and \cite[Theorem 2.12]{Hennion_Porta_Vezzosi_Formal_gluing}. \end{proof} The commutativity of \eqref{eq:naturality_rigg} implies that a morphism $\mathfrak f \colon \mathfrak X \to \mathfrak Y$ in $\mathrm{dfDM}_{k^\circ}^{\mathrm{taft}}$ induces a well defined functor \[ \mathfrak f^{\circ *} \colon \mathbb Coh^+_{\mathrm{loc}}( \mathfrak Y ) \longrightarrow \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) . \] It is a simple exercise in $\infty$-categories to promote this construction to an actual functor \[ \mathbb Coh^+_{\mathrm{loc}} \colon \big( \mathrm{dfDM}_{k^\circ}^{\mathrm{taft}} \big)^\mathrm{op} \longrightarrow \mathbb Cat_\infty^{\mathrm{st}} . \] Having \cref{lem:existence_formal_models_affine} and \cref{prop:local_almost_perfect_complexes_formal_properties} at our disposal, the question of the non-emptiness of $\mathbb FormalModels(\mathcal F)$ is essentially reduced to the the following: \mathbf egin{thm} \label{thm:descent_punctured_category} Let $\mathrm{dfSch}_{k^\circ}^{\mathrm{taft}, \mathrm{qcqs}}$ denote the $\infty$-category of derived $k^\circ$-adic schemes which are quasi-compact, quasi separated and topologically almost of finite presentation. Then the functor \[ \mathbb Coh^+_{\mathrm{loc}} \colon \big( \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}, \mathrm{qcqs}} \big)^\mathrm{op} \longrightarrow \mathbb Cat_\infty^{\mathrm{st}} \] is a hypercomplete sheaf for the formal Zariski topology. \end{thm} \mathbf egin{proof} A standard descent argument reduces us to prove the following statement: let $\mathfrak f_\mathbf ullet \colon \mathfrak U_\mathbf ullet \to \mathfrak X$ be a derived affine $k^\circ$-adic Zariski hypercovering. Then the canonical map \mathbf egin{equation} \label{eq:descent_functor} \mathfrak f_\mathbf ullet^{\circ *} \colon \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) \longrightarrow \lim_{ [n] \in \mathbf \Delta } \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_\mathbf ullet ) \end{equation} is an equivalence. Using \cite[Lemma 3.20]{Hennion_Porta_Vezzosi_Formal_gluing} we can endow the right hand side with a canonical $t$-structure. It follows from the characterization of the $t$-structure on $\mathbb Coh^+_{\mathrm{loc}}(\mathfrak X)$ given in \cref{prop:local_almost_perfect_complexes_formal_properties} that $\mathfrak f_\mathbf ullet^{\circ *}$ is $t$-exact. We will prove in \cref{cor:full_faithfulness} that $\mathfrak f_\mathbf ullet^{\circ *}$ is fully faithful. 
Assuming this fact, we can complete the proof as follows. We only need to check that $\mathfrak f_\mathbf ullet^{\circ *}$ is essentially surjective. Let $\mathcal C$ be the essential image of $\mathfrak f_\mathbf ullet^{\circ *}$. We now make the following observations: \mathbf egin{enumerate} \item the heart of $\lim_{\mathbf \Delta} \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_\mathbf ullet )$ is contained in $\mathcal C$. Indeed, \cref{lem:existence_formal_models_affine} implies that \[ \Lambda_n \colon \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_n ) \longrightarrow \mathbb Coh^+( \mathfrak U_n^\mathrm{rig} ) \] is an equivalence. These equivalences induce a $t$-exact equivalence \mathbf egin{equation} \label{eq:computing_the_limit} \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) \simeq \lim_{ [n] \in \mathbf \Delta } \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_\mathbf ullet ) . \end{equation} Passing to the heart and using the canonical equivalences \[ \mathbb Cohh_{\mathrm{loc}}( \mathfrak X ) \simeq \mathbb Cohh_{\mathrm{loc}}( \mathrm{t}_0(\mathfrak X) ) \quad , \quad \mathbb Cohh( \mathfrak X^\mathrm{rig} ) \simeq \mathbb Cohh( \mathrm{t}_0( \mathfrak X^\mathrm{rig} ) ) , \] we can invoke the classical Rayanaud's theorem on formal models of coherent sheaves to deduce that the heart of the target of $\mathfrak f_\mathbf ullet^{\circ *}$ is contained in its essential image. \item The subcategory $\mathcal C$ is stable. Indeed, let \[ \mathbf egin{tikzcd}[column sep = small] \mathcal F' \arrow{r}{\varphi} & \mathcal F \arrow{r}{\psi} & \mathcal F'' \end{tikzcd} \] be a fiber sequence in $\mathbb Coh^+( \mathfrak X^\mathrm{rig} ) \simeq \lim_{\mathbf \Delta} \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_\mathbf ullet )$ and suppose that two among $\mathcal F$, $\mathcal F'$ and $\mathcal F''$ belong to $\mathcal C$. Without loss of generality, we can assume that $\mathcal F$ and $\mathcal F''$ belong to $\mathcal C$. Then choose elements $\mathfrak F$ and $\mathfrak F''$ in $\mathbb Coh^+_{\mathrm{loc}}( \mathfrak X )$ representing $\mathcal F$ and $\mathcal F''$. Since $\mathfrak f_\mathbf ullet^{\circ *}$ is fully faithful, we can find a morphism $\widetilde{\psi} \colon \mathfrak F \to \mathfrak F''$ lifting $\psi$. Set \[ \mathfrak F' \coloneqq \mathrm{fib}( \widetilde{\psi} \colon \mathfrak F \to \mathfrak F'' ) . \] Then $\Lambda( \mathfrak F' ) \simeq \mathcal F'$, which means that under the equivalence \eqref{eq:computing_the_limit} the object $\mathcal F'$ belongs to $\mathcal C$. \end{enumerate} These two points together imply that $\mathfrak f_\mathbf ullet^{\circ *}$ is essentially surjective on cohomologically bounded elements. As both the $t$-structures on source and target of $\mathfrak f_\mathbf ullet^*$ are left $t$-complete, the conclusion follows. \end{proof} \mathbf egin{cor} \label{cor:comparing_local_and_rigid_almost_perfect_complexes} Let $\mathfrak X \in \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ and assume moreover that $\mathfrak X$ is quasi-compact and quasi-separated. Then the canonical map \[ \Lambda \colon \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) \longrightarrow \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) \] introduced in \cref{prop:local_almost_perfect_complexes_formal_properties} is an equivalence. \end{cor} \mathbf egin{proof} Let $\mathfrak f_\mathbf ullet \colon \mathfrak U_\mathbf ullet \to \mathfrak X$ be a derived affine $k^\circ$-adic Zariski hypercover. 
Consider the induced commutative diagram \[ \mathbf egin{tikzcd} \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) \arrow{r}{\mathfrak f_\mathbf ullet^*} \arrow{d}{\Lambda} & \lim_{[n] \in \mathbf \Delta} \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_n ) \arrow{d}{ \Lambda_\mathbf ullet} \\ \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) \arrow{r}{f_\mathbf ullet^*} & \lim_{[n] \in \mathbf \Delta} \mathbb Coh^+( \mathfrak U_n^\mathrm{rig} ) , \end{tikzcd} \] where we set $f_\mathbf ullet \coloneqq (\mathfrak f_\mathbf ullet)^\mathrm{rig}$. The right vertical map is an equivalence thanks to \cref{prop:local_almost_perfect_complexes_formal_properties}. On the other hand, $\mathbb Coh^+( \mathfrak X^\mathrm{rig} )$ satisfies descent in $\mathfrak X$, and therefore the bottom horizontal map is also an equivalence. Finally, \cref{thm:descent_punctured_category} implies that the top horizontal map is an equivalence as well. We thus conclude that $\Lambda \colon \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) \to \mathbb Coh^+( \mathfrak X^\mathrm{rig} )$ is an equivalence. \end{proof} \mathbf egin{cor} \label{cor:existence_formal_models} Let $\mathfrak X \in \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ and assume moreover that it is quasi-compact and quasi-separated. For any $\mathcal F \in \mathbb Coh^+( \mathfrak X^\mathrm{rig} )$, the $\infty$-category $\mathbb FormalModels( \mathcal F )$ is non-empty. \end{cor} \mathbf egin{proof} The localization functor $\mathrm L \colon \mathbb Coh^+( \mathfrak X ) \to \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X )$ is essentially surjective by construction. Since $\mathfrak X$ is a quasi-compact and quasi-separted derived $k^\circ$-adic scheme topologically of finite presentation, \cref{cor:comparing_local_and_rigid_almost_perfect_complexes} implies that $\Lambda \colon \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) \to \mathbb Coh^+( \mathfrak X^\mathrm{rig} )$ is an equivalence. The conclusion follows. \end{proof} \subsection{Proof of \cref{thm:descent_punctured_category}: fully faithfulness} The only missing step in the proof of \cref{thm:descent_punctured_category} is the full faithfulness of the functor \eqref{eq:descent_functor}. We will address this question by passing to the $\infty$-categories of ind-objects. Let $\mathfrak X$ be a quasi-compact and quasi-separated derived $k^\circ$-adic scheme locally topologically almost of finite presentation. \[ \mathfrak f \colon \mathfrak U \longrightarrow \mathfrak X \] be a formally \'etale morphism. Then $\mathfrak f$ induces a commutative diagram \[ \mathbf egin{tikzcd} \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) \arrow{d}{\mathfrak f^*} \arrow{r}{\mathrm L_{\mathfrak X}} & \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) ) \arrow{d}{\mathfrak f^{\circ *}} \\ \mathrm{Ind}( \mathbb Coh^+( \mathfrak U ) ) \arrow{r}{\mathrm L_{\mathfrak U}} & \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U ) ) . \end{tikzcd} \] The functors $\mathfrak f^*$ and $\mathfrak f^{\circ *}$ commute with colimits, and therefore they admit right adjoints $\mathfrak f_*$ and $\mathfrak f^\circ_*$. In particular, we obtain a Beck-Chevalley transformation \mathbf egin{equation} \label{eq:Beck_Chevalley_I} ^\mathrm{\tiny th}eta \colon \mathrm L_\mathfrak X \circ \mathfrak f_* \longrightarrow \mathfrak f^\circ_* \circ \mathrm L_\mathfrak U . 
\end{equation} A key step in the proof of the full faithfulness of the functor \eqref{eq:descent_functor} is to verify that $^\mathrm{\tiny th}eta$ is an equivalence when evaluated on objects in $\mathbb Cohh( \mathfrak U )$. Let us start with the following variation on \cite[Lemma 7.14]{Hennion_Porta_Vezzosi_Formal_gluing}: \mathbf egin{lem} \label{lem:Beck_Chevalley_Verdier_quotient} Let \mathbf egin{equation} \label{eq:Beck_Chevalley_quotient} \mathbf egin{tikzcd} \mathcal K_\mathcal C \arrow[hook]{r}{i_\mathcal C} \arrow{d}{F_\mathcal K} & \mathcal C \arrow{r}{L_\mathcal C} \arrow{d}{F} & \mathcal Q_\mathcal C \arrow{d}{F_\mathcal Q} \\ \mathcal K_\mathcal D \arrow[hook]{r}{i_\mathcal D} & \mathcal D \arrow{r}{L_\mathcal D} & \mathcal Q_\mathcal D \end{tikzcd} \end{equation} be a diagram of stable $\infty$-categories and exact functors between them. Assume that: \mathbf egin{enumerate} \item the functors $i_\mathcal C$ and $i_\mathcal D$ are fully faithful and admit right adjoints $R_\mathcal C$ and $R_\mathcal D$, respectively; \item the functors $L_\mathcal C$ and $L_\mathcal D$ admit fully faithful right adjoints $j_\mathcal C$ and $j_\mathcal D$, respectively; \item the rows are fiber and cofiber sequences in $\mathbb Cat_\infty^{\mathrm{st}}$; \item the functors $F$, $F_\mathcal K$ and $F_\mathcal Q$ admit right adjoints $G$, $G_\mathcal K$ and $G_\mathcal Q$, respectively. \end{enumerate} Let $X \in \mathcal D$ be an object. Then the following statements are equivalent: \mathbf egin{enumerate} \item the Beck-Chevalley transformation \[ q_X \colon L_\mathcal C( G(X) ) \longrightarrow G_\mathcal Q( L_\mathcal D( X ) ) \] is an equivalence; \item the Beck-Chevalley transformation \[ \kappa_{R_\mathcal D(X)} \colon i_\mathcal C( G_\mathcal K( R_\mathcal D(X) ) ) \longrightarrow G( i_\mathcal D( R_\mathcal D( X ) ) ) \] is an equivalence. \end{enumerate} \end{lem} \mathbf egin{proof} Since $j_\mathcal C$ and $i_\mathcal C$ are fully faithful, it is equivalent to check that \[ j_\mathcal C ( L_\mathcal C( G(X) ) ) \longrightarrow j_\mathcal C ( G_\mathcal Q( L_\mathcal D( X ) ) ) \] is an equivalence if and only if $\kappa_{R_\mathcal D(X)}$ is an equivalence. Using the natural equivalences \[ j_\mathcal C \circ G \simeq G j_\mathcal D \quad , \quad G_\mathcal K \circ R_\mathcal D \simeq R_\mathcal C \circ G \] we obtain the following commutative diagram \[ \mathbf egin{tikzcd} i_\mathcal C ( R_\mathcal C ( G( X ) ) ) \arrow{r} \arrow{d} & G( X ) \arrow{r} \arrow[equal]{d} & j_\mathcal C( L_\mathcal C( G( X ) ) ) \arrow{d} \\ G( i_\mathcal D( R_\mathcal D( X ) ) ) \arrow{r} & G( X ) \arrow{r} & G( j_\mathcal D( L_\mathcal D( X ) ) ) . \end{tikzcd} \] Moreover, since the rows of the diagram \eqref{eq:Beck_Chevalley_quotient} are Verdier quotients, we conclude that the rows in the above diagram are fiber sequences. Therefore, the leftmost vertical arrow is an equivalence if and only if the rightmost one is. \end{proof} \mathbf egin{lem} \label{lem:Beck_Chevalley_I} The Beck-Chevalley transformation \eqref{eq:Beck_Chevalley_I} is an equivalence whenever evaluated on objects in $\mathbb Cohh( \mathfrak U )$. 
\end{lem} \mathbf egin{proof} Using \cref{lem:Beck_Chevalley_Verdier_quotient}, we see that it is enough to prove that the Beck-Chevalley transformation associated to the square \[ \mathbf egin{tikzcd} \mathrm{Ind}( \mathbb Coh^+_{\mathrm{nil}}( \mathfrak X ) ) \arrow{d}{\mathfrak f^*} \arrow{r} & \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) \arrow{d}{\mathfrak f^*} \\ \mathrm{Ind}( \mathbb Coh^+_{\mathrm{nil}}( \mathfrak U ) ) \arrow{r} & \mathrm{Ind}( \mathbb Coh^+( \mathfrak U ) ) \end{tikzcd} \] is an equivalence when evaluated on objects of $\mathbb Cohh_{\mathrm{nil}}( \mathfrak U )$. As the horizontal functors are fully faithful, it is enough to check that the functor \[ \mathfrak f_* \colon \mathrm{Ind}( \mathbb Coh^+( U ) ) \longrightarrow \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) \] takes $\mathbb Cohh_{\mathrm{nil}}( \mathfrak U )$ to $\mathrm{Ind}( \mathbb Coh^+_{\mathrm{nil}}( \mathfrak X ) )$. Let $\mathfrak F \in \mathbb Cohh_{\mathrm{nil}}( \mathfrak U )$. We have to verify that $( \mathfrak f_*( \mathfrak F ) )^\mathrm{rig} \simeq 0$. Since $\mathfrak F$ is coherent and in the heart and since $\mathfrak U$ is quasi-compact we see that there exists an element $a \in \mathfrak m$ such that the map $\mu_a \colon \mathfrak F \to \mathfrak F$ given by multiplication by $a$ is zero. Therefore $\mathfrak f_*( \mu_a ) \colon \mathfrak f_*( \mathfrak F ) \to \mathfrak f_*( \mathfrak F )$ is homotopic to zero. Since $\mathfrak f_*( \mu_a )$ is equivalent to the endomorphism $\mathfrak f_*( \mathfrak F )$ given by multiplication by $a$, we conclude that $( \mathfrak f_*( \mathfrak F ) )^\mathrm{rig} \simeq 0$. The conclusion follows. \end{proof} Having these adjointability statements at our disposal, we turn to the actual study of the full faithfulness of the functor \eqref{eq:descent_functor}. Let \[ \mathfrak U_\mathbf ullet \colon \mathbf \Delta^\mathrm{op} \longrightarrow \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}} \] be an affine $k^\circ$-adic Zariski hypercovering of $\mathfrak X$ and let $\mathfrak f_\mathbf ullet \colon \mathfrak U_\mathbf ullet \to \mathfrak X$ be the augmentation morphism. The morphism $\mathfrak f_\mathbf ullet$ induces functors \[ \mathfrak f_\mathbf ullet^* \colon \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) \longrightarrow \lim_{ [n] \in \mathbf \Delta } \mathrm{Ind}( \mathbb Coh^+( \mathfrak U_n ) ) \] and \[ \mathfrak f_{\mathbf ullet}^{\circ *} \colon \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) ) \longrightarrow \lim_{[n] \in \mathbf \Delta} \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_n ) ) . \] These functors commute by construction with filtered colimits, and therefore they admit right adjoints, that we denote respectively as \[ \mathfrak f_{\mathbf ullet*} \colon \lim_{ [n] \in \mathbf \Delta } \mathrm{Ind}( \mathbb Coh^+( \mathfrak U_n ) ) \longrightarrow \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) \] and \[ \mathfrak f^{\circ}_{\mathbf ullet *} \colon \lim_{ [n] \in \mathbf \Delta } \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_n ) ) \longrightarrow \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) ) . 
\] Moreover, the functors $\mathfrak f_\mathbf ullet^*$ and $\mathfrak f_\mathbf ullet^{\circ *}$ fit in the following commutative diagram: \[ \mathbf egin{tikzcd} \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) \arrow{d}{\mathrm L} \arrow{r}{\mathfrak f_\mathbf ullet^*} & \lim_{ [n] \in \mathbf \Delta } \mathrm{Ind}( \mathbb Coh^+( \mathfrak U_\mathbf ullet ) ) \arrow{d}{\mathrm L_\mathbf ullet} \\ \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) ) \arrow{r}{\mathfrak f_\mathbf ullet^{\circ *}} & \lim_{ [n] \in \mathbf \Delta } \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_\mathbf ullet ) ) . \end{tikzcd} \] In particular, we have an associated Beck-Chevalley transformation \mathbf egin{equation} \label{eq:Beck_Chevalley} ^\mathrm{\tiny th}eta \colon \mathrm L \circ \mathfrak f_{\mathbf ullet *} \longrightarrow \mathfrak f_{\mathbf ullet *}^\circ \circ \mathrm L_\mathbf ullet . \end{equation} \mathbf egin{prop} \label{prop:Beck_Chevalley} The Beck-Chevalley transformation \eqref{eq:Beck_Chevalley} is an equivalence when restricted to the full subcategory $\lim_{\mathbf \Delta} \mathbb Cohh( \mathfrak U_\mathbf ullet )$ of $\lim_{\mathbf \Delta} \mathrm{Ind}( \mathbb Coh^+( \mathfrak U_\mathbf ullet ) )$. \end{prop} \mathbf egin{proof} The discussion right after \cite[Corollary 8.6]{Porta_Yu_Higher_analytic_stacks_2014} allows us to identify the functor \[ \mathfrak f_{\mathbf ullet *} \colon \lim_{ [n] \in \mathbf \Delta} \mathrm{Ind}( \mathbb Coh^+( \mathfrak U_n ) ) \longrightarrow \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) \] with the functor informally described by sending a descent datum $\mathfrak F_\mathbf ullet \in \lim_{ \mathbf \Delta } \mathrm{Ind}( \mathbb Coh^+( \mathfrak U_\mathbf ullet ) )$ to \[ \lim_{ [n] \in \mathbf \Delta } \mathfrak f_{n*} \mathfrak F_n \in \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) . \] Similarly, the functor $\mathfrak f^\circ_{\mathbf ullet *}$ sends a descent datum $\mathcal F_\mathbf ullet \in \lim_{\mathbf \Delta} \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_\mathbf ullet )$ to \[ \lim_{ [n] \in \mathbf \Delta } \mathfrak f_{n*}^\circ \mathcal F_n \in \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) ) . \] We therefore have to show that the Beck-Chevalley transformation \[ ^\mathrm{\tiny th}eta \colon \mathrm L \bigg( \lim_{ [n] \in \mathbf \Delta } \mathfrak f_{n*} \mathfrak F_n \bigg) \longrightarrow \lim_{ [n] \in \mathbf \Delta } \mathfrak f^\circ_{n*} (\mathrm L_n \mathfrak F_n) \] is an equivalence whenever each $\mathfrak F_n$ belongs to $\mathbb Cohh( \mathfrak U_n )$. First notice that the functors $\mathfrak f_{\mathbf ullet*}$ and $\mathfrak f^\circ_{\mathbf ullet *}$ are left $t$-exact. In particular, if $\mathfrak F_\mathbf ullet \in \lim_{\mathbf \Delta} \mathrm{Ind}( \mathbb Cohh( \mathfrak U_\mathbf ullet ) )$ then both $\mathrm L \mathfrak f_{\mathbf ullet *}( \mathfrak F_\mathbf ullet )$ and $\mathfrak f^\circ_{\mathbf ullet *}( \mathfrak F_\mathbf ullet )$ are coconnective. As the $t$-structures on $\lim_{ \mathbf \Delta } \mathrm{Ind}( \mathbb Coh^+( \mathfrak U_\mathbf ullet ) )$ and on $\lim_{\mathbf \Delta} \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_\mathbf ullet ) )$ are right $t$-complete, we conclude that it is enough to prove that $\pi_i( ^\mathrm{\tiny th}eta )$ is an isomorphism for every $i \in \mathbb Z$. 
We now observe that for $m \ge i + 2$ we have \[ \pi_i \bigg( \lim_{ [n] \in \mathbf \Delta } \mathfrak f^\circ_{n*}( \mathrm L_n \mathfrak F_n ) \bigg) \simeq \pi_i \bigg( \lim_{ [n] \in \mathbf \Delta_{\le m} } \mathfrak f^\circ_{n*}( \mathrm L_n \mathfrak F_n ) \bigg) , \] and similarly \[ \pi_i \bigg( \mathrm L \bigg( \lim_{ [n] \in \mathbf \Delta } \mathfrak f_{n*} \mathfrak F_n \bigg) \bigg) \simeq \mathrm L \bigg( \pi_i \bigg( \lim_{ [n] \in \mathbf \Delta } \mathfrak f_{n*} \mathfrak F_n \bigg) \bigg) \simeq \mathrm L \bigg( \pi_i \bigg( \lim_{ [n] \in \mathbf \Delta_{\le m} } \mathfrak f_{n*} \mathfrak F_{n*} \bigg) \bigg) . \] It is therefore enough to prove that for every $m \ge 0$ the canonical map \[ \mathrm L\bigg( \lim_{ [n] \in \mathbf \Delta_{\le m} } \mathfrak f_{n *} \mathfrak F_n \bigg) \longrightarrow \lim_{ [n] \in \mathbf \Delta_{\le m} } \mathfrak f^\circ_{n*}( \mathrm L_n \mathfrak F_n ) \] is an equivalence. As $\mathrm L$ commutes with finite limits, we are reduced to show that the canonical map \[ \mathrm L ( \mathfrak f_{n*} \mathfrak F_n ) \longrightarrow \mathfrak f^\circ_{n*} ( \mathrm L_n \mathfrak F_n ) \] is an equivalence whenever $\mathfrak F_n \in \mathbb Cohh( \mathfrak U_n )$, which follows from \cref{lem:Beck_Chevalley_I}. \end{proof} \mathbf egin{cor} \label{cor:full_faithfulness} Let $\mathfrak X$ and $\mathfrak f_\mathbf ullet \colon \mathfrak U_\mathbf ullet \to \mathfrak X$ be as in the above discussion. Then the functor \[ \mathfrak f_\mathbf ullet^{\circ *} \colon \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) \longrightarrow \lim_{ [n] \in \mathbf \Delta } \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_n ) \] is fully faithful. \end{cor} \mathbf egin{proof} As the functor $\mathfrak f_\mathbf ullet^{\circ *}$ is $t$-exact and the $t$-structure on both categories is left complete, we see that it is enough to reduce ourselves to prove that $\mathfrak f_\mathbf ullet^*$ is fully faithful when restricted to $\mathbb Cohb_{\mathrm{loc}}(\mathfrak X)$. Consider the following commutative cube: \mathbf egin{equation} \mathbf egin{tikzcd} \mathbb Coh^+( \mathfrak X ) \arrow{rr}{\mathfrak f_{\mathbf ullet}^*} \arrow[hook]{dr} \arrow{dd}& & \lim_{[n] \in \mathbf \Delta} \mathbb Coh^+( \mathfrak U_n ) \arrow{dd} \arrow[hook]{dr} \\ {} & \mathrm{Ind} \left( \mathbb Coh^+( \mathfrak X ) \mathrm{rig}ht) \arrow[crossing over]{rr}[near start]{\mathfrak f_{\mathbf ullet}^{*}} \arrow{dd}[near end]{\mathrm L_{\mathfrak X}}& & \lim_{[n] \in \mathbf \Delta} \mathrm{Ind} \left( \mathbb Coh^+( \mathfrak U_n ) \mathrm{rig}ht) \arrow{dd}{\mathrm L_{\mathfrak U_\mathbf ullet}} \\ \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) \arrow{rr}[near start]{\mathfrak f_{\mathbf ullet}^{\circ *}} \arrow[hook]{dr} & & \lim_{[n] \in \mathbf \Delta} \mathbb Coh^+_{\mathrm{loc}}(\mathfrak U_n) \arrow[hook]{dr} \\ {} & \mathrm{Ind} \left( \mathbb Coh^+_{\mathrm{loc}} (\mathfrak X) \mathrm{rig}ht) \arrow[leftarrow,crossing over]{uu} \arrow{rr}{\mathfrak f_{\mathbf ullet}^{\circ *}} & & \lim_{[n] \in \mathbf \Delta} \mathrm{Ind} \left(\mathbb Coh^+_{\mathrm{loc}} ( \mathfrak U_n ) \mathrm{rig}ht) . \end{tikzcd} \end{equation} First of all, we observe that the diagonal functors are all fully faithful. 
It is therefore enough to prove that the functor \[ \mathfrak f^{\circ *}_\mathbf ullet \colon \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}(( \mathfrak X ) ) \longrightarrow \lim_{ [n] \in \mathbf \Delta } \mathrm{Ind}( \mathbb Coh^+_{\mathrm{loc}}( \mathfrak U_n ) ) \] is fully faithful when restricted to $\mathbb Coh^+_{\mathrm{loc}}( \mathfrak X )$. As this functor admits a right adjoint $\mathfrak f^\circ_{\mathbf ullet *}$, it is in turn enough to verify that for every $\mathcal F \in \mathbb Cohb_{\mathrm{loc}}( \mathcal F )$ the unit transformation \[ _\mathrm{\acute{e}t}a \colon \mathcal F \longrightarrow \mathfrak f^\circ_{\mathbf ullet *} \mathfrak f^{\circ *}_\mathbf ullet( \mathcal F ) \] is an equivalence. Proceeding by induction on the number of nonvanishing homotopy groups of $\mathcal F$, we see that it is enough to deal with the case of $\mathcal F \in \mathbb Cohh_{\mathrm{loc}}(\mathcal F)$. As the functor $\mathrm L_\mathfrak X \colon \mathbb Coh^+( \mathfrak X ) \to \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X )$ is essentially surjective and $t$-exact, we can choose $\mathfrak F \in \mathbb Cohh(\mathfrak X)$ and an equivalence \[ \mathrm L_\mathfrak X( \mathfrak F ) \simeq \mathcal F . \] Moreover, the unit transformation \[ \mathfrak F \longrightarrow \mathfrak f_{\mathbf ullet*} \mathfrak f_\mathbf ullet^* \mathfrak F \] is an equivalence. It is therefore enough to check that the Beck-Chevalley transformation associated to the front square is an equivalence when evaluated on objects in $\lim_{\mathbf \Delta} \mathbb Cohh(\mathfrak U_n)$. This is exactly the content of \cref{prop:Beck_Chevalley}. \end{proof} \subsection{Categories of formal models} Let $\mathfrak X \in \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ be a quasi-compact and quasi-separated derived $k^\circ$-adic scheme topologically almost of finite presentation. We established in \cref{cor:existence_formal_models} that for any $\mathcal F \in \mathbb Coh^+( \mathfrak X^\mathrm{rig} )$ the $\infty$-category of formal models $\mathbb FormalModels( \mathcal F )$ is non-empty. Actually, we can use \cref{cor:comparing_local_and_rigid_almost_perfect_complexes} to be more precise about the structure of $\mathbb FormalModels( \mathcal F )$. We are in particular interested in showing that it is filtered. We start by recording the following immediate consequence of \cref{cor:comparing_local_and_rigid_almost_perfect_complexes}: \mathbf egin{lem} \label{cor:fully_faithful_adjoint_to_rigg} Let $\mathfrak X \in \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ be a quasi-compact and quasi-separated derived $k^\circ$-adic scheme topologically almost of finite presentation. Then the functor \[ \mathrm{rig}g \colon \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) \longrightarrow \mathrm{Ind}( \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) ) \] admits a right adjoint \[ j \colon \mathrm{Ind}( \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) ) \longrightarrow \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) , \] which is furthermore fully faithful. \end{lem} \mathbf egin{proof} \mathbb Cref{cor:comparing_local_and_rigid_almost_perfect_complexes} implies that the functor $\mathrm{rig}g$ induces the equivalence \[ \Lambda \colon \mathbb Coh^+_{\mathrm{loc}}( \mathfrak X ) \stackrel{\sim}{\longrightarrow} \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) . 
\] In other words, we see that the diagram \[ \mathbf egin{tikzcd} \mathbb Coh^+_{\mathrm{nil}}( \mathfrak X ) \arrow{r} \arrow{d} & \mathbb Coh^+( \mathfrak X ) \arrow{d}{\mathrm{rig}g} \\ 0 \arrow{r} & \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) \end{tikzcd} \] is a pushout diagram in $\mathbb Cat_\infty^{\mathrm{st}}$. Passing to ind-completions, we deduce that $\mathrm{Ind}( \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) )$ is a Verdier quotient of $\mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) )$. Applying \cite[Lemma 2.5 and Remark 2.6]{Hennion_Porta_Vezzosi_Formal_gluing} we conclude that $\mathrm{Ind}( \mathbb Coh^+( \mathfrak X^\mathrm{rig} ) )$ is an accessible localization of $\mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) )$. As these categories are presentable, we deduce that the localization functor $\mathrm{rig}g$ admits a fully faithful right adjoint, as desired. \end{proof} \mathbf egin{notation} Let $\mathfrak X \in \mathrm{dfDM}_{k^\circ}$. Given $\mathcal F, \mathcal G \in \mathrm{Ind}( \mathbb Coh^+(\mathfrak X) )$ we write $\Hom_{\mathfrak X}(\mathcal F, \mathcal G) \in \mathrm{Mod}_{k^\circ}$ for the $k^\circ$-enriched stable mapping space in $\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))$. \end{notation} \mathbf egin{lem} \label{lem:hom_to_nilpotent_is_nilpotent} Let $\mathfrak X \in \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ be a quasi-compact and quasi-separated derived $k^\circ$-adic scheme topologically almost of finite presentation. Let $\mathcal F \in \mathbb Coh^+(\mathfrak X)$ and $\mathcal G \in \mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$. Then \[ \Hom_\mathfrak X(\mathcal F, \mathcal G) \otimes_{k^\circ} k \simeq 0 . \] In other words, $\Hom_{\mathfrak X}(\mathcal F, \mathcal G)$ is $\mathfrak m$-nilpotent in $\mathrm{Mod}_{k^\circ}$. \end{lem} \mathbf egin{proof} Since $\mathfrak X$ is quasi-compact, we can find a finite formal Zariski cover $\mathfrak U_i = \Spf(A_i)$ by formal affine schemes. Let $\mathfrak U_\mathbf ullet$ be the \v{C}ech nerve. Since this is a formal Zariski cover, there exists $m \gg 0$ such that \[ \Hom_{\mathfrak X}(\mathcal F, \mathcal G) \simeq \lim_{[n] \in \mathbf \Delta_{\le m}} \Hom_{\mathfrak U_n}( \mathcal F|_{\mathfrak U_n}, \mathcal G|_{\mathfrak U_n} ) . \] Since the functor $- \otimes_{k^\circ} k \colon \mathrm{Mod}_{k^\circ} \to \mathrm{Mod}_k$ is exact, it commutes with finite limits. Therefore, we see that it is enough to prove that the conclusion holds after replacing $\mathfrak X$ by $\mathfrak U_m$. Since $\mathfrak X$ is quasi-compact and quasi-separated, we see that each $\mathfrak U_m$ is quasi-compact and separated. In other words, we can assume from the very beginning that $\mathfrak X$ is quasi-compact and separated. In this case, each $\mathfrak U_m$ will be formal affine, and therefore we can further reduce to the case where $\mathfrak X$ is formal affine itself. Assume therefore $\mathfrak X = \Spf(A)$. In this case, $\mathbb Coh^+(\mathfrak X) \simeq \mathbb Coh^+(A)$ lives fully faithfully inside $\mathrm{Mod}_A$. Notice that $A \to A \otimes_{k^\circ} k$ is a Zariski open immersion. Therefore, \[ \Hom_A(\mathcal F, \mathcal G) \otimes_{k^\circ} k \simeq \Hom_A(\mathcal F, \mathcal G) \otimes_A (A \otimes_{k^\circ} k) \simeq \Hom_A( \mathcal F \otimes_A k^\circ, \mathcal G \otimes_A k^\circ ) \simeq 0 . \] Thus, the proof is complete. \end{proof} \mathbf egin{cor} \label{cor:base_change_hom} Let $\mathfrak X$ be as in the previous lemma. 
Given $\mathcal F, \mathcal G \in \mathbb Coh^+(\mathfrak X)$, the canonical map \[ \Hom_{\mathfrak X}( \mathcal F, \mathcal G ) \otimes_{k^\circ} k \longrightarrow \Hom_{\mathfrak X^\mathrm{rig}}( \mathcal F^\mathrm{rig}, \mathcal G^\mathrm{rig} ) \] is an equivalence. \end{cor} \todo{It's kind of miraculous that we can prove this corollary also for $\mathcal G$ unbounded. See \cite[6.5.3.7]{Lurie_SAG}. Check very carefully the proof.} \mathbf egin{proof} Denote by $R \colon \mathrm{Ind}(\mathbb Coh^+(\mathfrak X)) \to \mathrm{Ind}(\mathbb Coh^+_{\mathrm{nil}}(\mathfrak X))$ the right adjoint to the inclusion \[ i \colon \mathrm{Ind}(\mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)) \hookrightarrow \mathrm{Ind}(\mathbb Coh^+(\mathfrak X)) . \] Then for any $\mathcal G \in \mathbb Coh^+(\mathfrak X)$ we have a fiber sequence \[ i R(\mathcal G) \longrightarrow \mathcal G \longrightarrow j( \mathcal G^\mathrm{rig} ) . \] In particular, we obtain a fiber sequence \[ \Hom_{\mathfrak X}( \mathcal F, i R(\mathcal G) ) \longrightarrow \Hom_{\mathfrak X}( \mathcal F, \mathcal G ) \longrightarrow \Hom_{\mathfrak X}( \mathcal F, j( \mathcal G^\mathrm{rig} ) ) . \] Now observe that \[ \Hom_{\mathfrak X}( \mathcal F, j( \mathcal G^\mathrm{rig}) ) \simeq \Hom_{\mathfrak X^\mathrm{rig}}(\mathcal F^\mathrm{rig}, \mathcal G^\mathrm{rig} ) . \] Notice also that since $k^\circ \to k$ is an open Zariski immersion, $\Hom_{\mathfrak X^\mathrm{rig}}(\mathcal F^\mathrm{rig}, \mathcal G^\mathrm{rig}) \otimes_{k^\circ} k \simeq \Hom_{\mathfrak X^\mathrm{rig}}(\mathcal F^\mathrm{rig}, \mathcal G^\mathrm{rig})$. In particular, applying $- \otimes_{k^\circ} k \colon \mathrm{Mod}_{k^\circ} \to \mathrm{Mod}_k$ we find a fiber sequence \[ \Hom_{\mathfrak X}( \mathcal F, i R(\mathcal G ) ) \otimes_{k^\circ} k \longrightarrow \Hom_{\mathfrak X}(\mathcal F, \mathcal G) \otimes_{k^\circ} k \longrightarrow \Hom_{\mathfrak X^\mathrm{rig}}(\mathcal F^\mathrm{rig}, \mathcal G^\mathrm{rig}) . \] It is therefore enough to check that $\Hom_{\mathfrak X}(\mathcal F, i R(\mathcal G)) \otimes_{k^\circ} k \simeq 0$. Since $i$ is a left adjoint, we can write \[ i R(\mathcal G) \simeq \colim_{\alpha \in I} \mathcal G_\alpha , \] where $I$ is filtered and $\mathcal G_\alpha \in \mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$. As $\mathcal F$ is compact in $\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))$, we find \[ \Hom_{\mathfrak X}( \mathcal F, i R(\mathcal G) ) \otimes_{k^\circ} k \simeq \left( \colim_{\alpha \in I} \Hom_{\mathfrak X}( \mathcal F, \mathcal G_\alpha ) \mathrm{rig}ht) \otimes_{k^\circ} k \simeq \colim_{\alpha \in I} \Hom_{\mathfrak X}(\mathcal F, \mathcal G_\alpha) \otimes_{k^\circ} k . \] Since each $\mathcal G_\alpha$ belongs to $\mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$, \cref{lem:hom_to_nilpotent_is_nilpotent} implies that $\Hom_{\mathfrak X}(\mathcal F, \mathcal G_\alpha) \otimes_{k^\circ} k \simeq 0$. The conclusion follows. \end{proof} \mathbf egin{rem} Notice that \cref{cor:base_change_hom} holds without no bounded conditions on the cohomological amplitude on the considered almost perfect complexes. The key ingredient is the fact that the morphism $\Spec k \hookrightarrow \Spec k^\circ$ is an open immersion. Compare with \cite[Lemma 6.5.3.7]{Lurie_SAG}. \end{rem} \mathbf egin{construction} Choose generators $t_1, \ldots, t_n$ for $\mathfrak m$. 
We consider $\mathbb N^n$ as a poset with order given by \[ (m_1, \ldots, m_n) \le (m_1', \ldots, m_n') \iff m_1 \le m_1', m_2 \le m_2' , \ldots , m_n \le m_n' \] Introduce the functor \[ K \colon \mathbb N^n\longrightarrow \mathrm{Ind}( \mathbb Cohh( \Spf(k^\circ) ) ) \] defined as follows: $K$ sends every object to $k^\circ$, and it sends the morphism $\mathbf m \le \mathbf m'$ to multiplication by $t^{\mathbf m' - \mathbf m}$. By abuse of notation, we still denote the composition of $K$ with the inclusion $\mathrm{Ind}( \mathbb Cohh(k^\circ ) ) \to \mathrm{Ind}( \mathbb Coh^+(k^\circ) )$ by $K$. Let now $\mathfrak X \in \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ be a quasi-compact and quasi-separated derived $k^\circ$-adic scheme topologically almost of finite presentation. Let $\mathcal F \in \mathbb Coh^+(\mathfrak X)$. The natural morphism $q \colon \mathfrak X \to \Spf(k^\circ)$ induces a functor \[ q^* \colon \mathrm{Ind}(\mathbb Coh^+(\Spf(k^\circ))) \longrightarrow \mathrm{Ind}( \mathbb Coh^+( \mathfrak X ) ) . \] We define the functor $K_\mathcal F$ as \[ K_\mathcal F \coloneqq q^*(K(-)) \otimes \mathcal F \colon \mathbb N^n \longrightarrow \mathrm{Ind}(\mathbb Coh^+(\mathfrak X)) . \] We let $\mathcal F^\mathrm{loc}$ denote the colimit of the functor $K_\mathcal F$. Let $\mathcal G \in \mathbb Coh^+(\mathfrak X^\mathrm{rig})$ and let $\alpha \colon \mathcal F^\mathrm{rig} \to \mathcal G$ be a given map. Notice that the natural map \[ \mathcal F^\mathrm{rig} \longrightarrow \colim_{\mathbb N^n} (K_\mathcal F(-))^\mathrm{rig} \] is an equivalence. Therefore $\alpha$ induces a cone \[ (K_\mathcal F(-))^\mathrm{rig} \longrightarrow \mathcal G , \] which is equivalent to the given of a cone \[ K_\mathcal F(-) \longrightarrow j( \mathcal G ) . \] Specializing this construction for $\alpha = \mathrm{id}_{\mathcal F^\mathrm{rig}}$, we obtain a canonical map \[ \gamma_\mathcal F \colon \mathcal F^\mathrm{loc} \longrightarrow j( \mathcal F^\mathrm{rig} ) . \] \end{construction} \mathbf egin{lem} \label{lem:loc_nilpotent} Let $\mathfrak X \in \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ be a quasi-compact and quasi-separated derived $k^\circ$-adic scheme topologically almost of finite presentation. Let $\mathcal F \in \mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$. Then $\mathcal F^\mathrm{loc} \simeq 0$. \end{lem} \mathbf egin{proof} For any $\mathcal G \in \mathbb Coh^+(\mathfrak X)$, we write $\Hom_{\mathfrak X}(\mathcal G, \mathcal F) \in \mathrm{Mod}_{k^\circ}$ for the $k^\circ$-enriched mapping space. As $\mathcal G$ is compact in $\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))$, we have \[ \Hom_{\mathfrak X}( \mathcal G, \mathcal F^\mathrm{loc} ) \simeq \colim_{\mathbb N^n} \Hom_\mathfrak X( \mathcal G, K_\mathcal F(-) ) \simeq \Hom_\mathfrak X( \mathcal G, \mathcal F ) \otimes_{k^\circ} k . \] \mathbb Cref{cor:base_change_hom} implies that \[ \Hom_\mathfrak X( \mathcal G, \mathcal F ) \otimes_{k^\circ} k \simeq \Hom_{\mathfrak X^\mathrm{rig}}( \mathcal G^\mathrm{rig}, \mathcal F^\mathrm{rig} ) \simeq 0 . \] It follows that $\mathcal F^\mathrm{loc} \simeq 0$. \end{proof} \mathbf egin{lem} \label{lem:colimit_torsion_free} Let $\mathfrak X \in \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ be a quasi-compact and quasi-separated derived $k^\circ$-adic scheme topologically almost of finite presentation. Let $\mathcal F \in \mathbb Coh^+(\mathfrak X)$. 
Then for any $\mathcal G \in \mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$, one has \[ \Map_{\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))}( \mathcal G, \mathcal F^\mathrm{loc} ) \simeq 0 . \] \end{lem} \mathbf egin{proof} It is enough to prove that for every $i \ge 0$ we have \[ \pi_i \Map_{\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))}( \mathcal G, \mathcal F^\mathrm{loc} ) \simeq 0 . \] Up to replacing $\mathcal F$ by $\mathcal F[i]$, we see that it is enough to deal with the case $i = 0$. Let therefore $\alpha \colon \mathcal G \to \mathcal F^\mathrm{loc}$ be a representative for an element in $\pi_0 \Map_{\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))}( \mathcal G, \mathcal F^\mathrm{loc} )$. As $\mathcal G$ is compact in $\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))$, the map $\alpha$ factors as $\alpha' \colon \mathcal G \to \mathcal F$, and therefore it induces a map $\widetilde{\alpha} \colon \mathcal G^\mathrm{loc} \to \mathcal F^\mathrm{loc}$ making the diagram \[ \mathbf egin{tikzcd} \mathcal G \arrow{r}{\alpha'} \arrow{d} & \mathcal F \arrow{d} \\ \mathcal G^\mathrm{loc} \arrow{r}{\widetilde{\alpha}} & \mathcal F^\mathrm{loc} \end{tikzcd} \] commutative, where both compositions are equivalent to $\alpha$. Now, \cref{lem:loc_nilpotent} implies that $\mathcal G^\mathrm{loc} \simeq 0$, and therefore $\alpha$ is nullhomotopic, completing the proof. \end{proof} \mathbf egin{lem} \label{lem:computation_j} Let $\mathfrak X \in \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ be a quasi-compact and quasi-separated derived $k^\circ$-adic scheme topologically almost of finite presentation. Let $\mathcal F \in \mathbb Coh^+(\mathfrak X)$. Then the canonical map \[ \gamma_\mathcal F \colon \mathcal F^\mathrm{loc} \longrightarrow j( \mathcal F^\mathrm{rig} ) \] is an equivalence. \end{lem} \mathbf egin{proof} Let $\mathcal G \in \mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$. Then \[ \Map_{\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))}( \mathcal G, j( \mathcal F^\mathrm{rig} ) ) \simeq \Map_{\mathrm{Ind}(\mathbb Coh^+(\mathfrak X^\mathrm{rig}))}( \mathcal G^\mathrm{rig}, \mathcal F ) \simeq 0 . \] \mathbb Cref{lem:colimit_torsion_free} implies that the same holds true replacing $j(\mathcal F^\mathrm{rig})$ with $\mathcal F^\mathrm{loc}$. As $\mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$ is a stable full subcategory of $\mathbb Coh^+(\mathfrak X)$, it follows that \[ \Hom_{\mathfrak X}(\mathcal G, j(\mathcal F)) \simeq \Hom_{\mathfrak X}(\mathcal G, \mathcal F^\mathrm{loc} ) \simeq 0 . \] Let $\mathcal H \coloneqq \mathrm{fib}( \gamma_\mathcal F )$. Then for any $\mathcal G \in \mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$, one has \[ \Hom_{\mathfrak X}(\mathcal G, \mathcal H) \simeq 0 . \] On the other hand, \[ \mathcal H^\mathrm{rig} \simeq \mathrm{fib}( \gamma_\mathcal F^\mathrm{rig} ) \simeq 0 . \] It follows that $\mathcal H \in \mathrm{Ind}( \mathbb Coh^+_{\mathrm{nil}}(\mathfrak X) )$, and hence that $\mathcal H \simeq 0$. Thus, $\gamma_\mathcal F$ is an equivalence. \end{proof} \mathbf egin{thm} \label{thm:formal_models_filtered} Let $\mathfrak X \in \mathrm{dfSch}_{k^\circ}$ be a quasi-compact and quasi-separated derived $k^\circ$-adic scheme. Let $\mathcal F \in \mathbb Coh^+( \mathfrak X^\mathrm{rig} )$. Then the $\infty$-category $\mathbb FormalModels(\mathcal F)$ of formal models for $\mathcal F$ is non-empty and filtered. \end{thm} \mathbf egin{proof} We know that $\mathbb FormalModels(\mathcal F)$ is non-empty thanks to \cref{cor:existence_formal_models}. 
Pick one formal model $\mathcal F \in \mathbb FormalModels(\mathcal F)$. Then \cref{lem:computation_j} implies that the canonical map \[ \gamma_\mathcal F \colon \mathcal F^\mathrm{loc} \longrightarrow j( \mathcal F^\mathrm{rig} ) \simeq j( \mathcal F ) \] is an equivalence. We now observe that $\mathbb FormalModels(\mathcal F)$ is by definition a full subcategory of \[ \mathbb Coh^+(\mathfrak X)_{/\mathcal F} \coloneqq \mathbb Coh^+(\mathfrak X) \times_{\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))} \mathrm{Ind}(\mathbb Coh^+(\mathfrak X))_{/j(\mathcal F)} . \] As this $\infty$-category is filtered, it is enough to prove that every object $\mathcal G \in \mathbb Coh^+(\mathfrak X)_{/\mathcal F}$ admits a map to an object in $\mathbb FormalModels(\mathcal F)$. Let $\alpha \colon \mathcal G \to j(\mathcal F)$ be the structural map. Using the equivalence $\gamma_\mathcal F$ and the fact that $\mathcal G$ is compact in $\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))$, we see that $\alpha$ factors as $\mathcal G \to \mathcal F$, which belongs to $\mathbb FormalModels(\mathcal F)$ by construction. \end{proof} \mathbf egin{cor} \label{lift_p_maps} Let $X \in \mathrm{dAn}k$ and $f \colon \mathcal F \to \mathcal G$ be a morphism $\mathbb Coh^+(X)$. Suppose we are given a formal model $\mathfrak X$ for $X$ together with formal models $\mathcal F, \mathcal G \in \mathbb Coh^+(\mathfrak X)$ for $\mathcal F$ and $\mathcal G$, respectively. Then there exists a morphism $\mathfrak f \colon \mathcal F' \to \mathcal G' $ in the $\infty$-category\xspace $\mathbb Coh^+(\mathfrak X)$ lifting \[ t_1^{m_1} \dots t_n^{m_n} f \colon \mathcal F \to \mathcal G, \quad \text{in $\mathbb Coh^+(X)$} \] for suitable non-negative integers $m_1, \dots, m_n \geq 0$. \end{cor} \mathbf egin{proof} Any map $\mathcal F \to \mathcal G$ induces a map $\mathfrak F \to j(\mathcal F) \to j(\mathcal G)$. Using the equivalence $j(\mathcal G) \simeq \mathcal G^\mathrm{loc}$ and the fact that $\mathcal F$ is compact in $\mathrm{Ind}(\mathbb Coh^+(\mathfrak X))$, we see that the map $\mathfrak F \to j(\mathcal G)$ factors as $\mathfrak F \to \mathcal G$. Unraveling the definition of the functor $K_\mathcal G(-)$, we see that the conclusion follows. \end{proof} For later use, let us record the following consequence of \cref{lem:computation_j}: \mathbf egin{cor} \label{cor:characterization_nilpotent} Let $\mathfrak X \in \mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ be a quasi-compact and quasi-separated derived $k^\circ$-adic scheme topologically almost of finite presentation. Let $\mathcal F \in \mathbb Coh^+(\mathfrak X)$. Then $\mathcal F$ is $\mathfrak m$-nilpotent if and only if $\mathcal F^\mathrm{loc} \simeq 0$. \end{cor} \mathbf egin{proof} If $\mathcal F$ is $\mathfrak m$-nilpotent, the conclusion follows from \cref{lem:loc_nilpotent}. Suppose vice-versa that $\mathcal F^\mathrm{loc} \simeq 0$. Then \cref{lem:computation_j} implies that \[ j( \mathcal F^\mathrm{rig} ) \simeq \mathcal F^\mathrm{loc} \simeq 0 . \] Now, \cref{cor:fully_faithful_adjoint_to_rigg} shows that $j$ is fully faithful. In particular it is conservative and therefore $\mathcal F^\mathrm{rig} \simeq 0$. In other words, $\mathcal F$ belongs to $\mathbb Coh^+_{\mathrm{nil}}(\mathfrak X)$. 
\end{proof} \section{Flat models for morphisms of derived analytic spaces} Using the study of formal models for almost perfect complexes carried out in the previous section, we can prove the following derived version of \cite[Theorem 5.2]{Bosch_Formal_II}: \mathbf egin{thm} \label{thm:formal_model_flat_map} Let $f \colon X \to Y$ be a proper map of quasi-paracompact derived $k$-analytic\xspace spaces. Assume that: \mathbf egin{enumerate} \item the truncations of $X$ and $Y$ are $k$-analytic\xspace spaces.\footnote{As opposed to $k$-analytic\xspace Deligne-Mumford\xspace stacks.} \item The map $f$ is flat. \end{enumerate} Then there exists a proper flat formal model $\mathfrak f \colon \mathfrak X \to \mathfrak Y$ in $\mathrm{dfSch}_{k^\circ}^{\mathrm{taft}}$ for $f$. \end{thm} \mathbf egin{proof} We construct, by induction on $n$, the following data: \mathbf egin{enumerate} \item derived $k^\circ$-adic schemes $\mathfrak X_n$ and $\mathfrak Y_n$ equipped with equivalences \[ \mathfrak X_n^\mathrm{rig} \simeq \mathrm t_{\le n}(X) , \qquad \mathfrak Y_n^\mathrm{rig} \simeq \mathrm t_{\le n}(Y) . \] \item Morphisms $\mathfrak X_n \to \mathfrak X_{n-1}$ and $\mathfrak Y_n \to \mathfrak Y_{n-1}$ exhibiting $\mathfrak X_{n-1}$ and $\mathfrak Y_{n-1}$ as $(n-1)$-truncations of $\mathfrak X_n$ and $\mathfrak Y_n$, respectively. \item A proper flat morphism $\mathfrak f_n \colon \mathfrak X_n \to \mathfrak Y_n$ and homotopies making the cube \[ \mathbf egin{tikzcd} {} & \mathfrak X_n^\mathrm{rig} \arrow{rr}{\mathfrak f_n^\mathrm{rig}} \arrow{dd} \arrow{dl} & & \mathfrak Y_n^\mathrm{rig} \arrow{dd} \arrow{dl} \\ \mathfrak X_{n-1}^\mathrm{rig} \arrow[crossing over]{rr}[near end]{\mathfrak f_{n-1}^\mathrm{rig}} \arrow{dd} & & \mathfrak Y_{n-1}^\mathrm{rig} \\ {} & \mathrm t_{\le n}(X) \arrow{rr} \arrow{dl} & & \mathrm t_{\le n}(Y) \arrow{dl} \\ \mathrm t_{\le n - 1}(X) \arrow{rr} & & \mathrm t_{\le n- 1}(Y) \arrow[leftarrow, crossing over]{uu} \end{tikzcd} \] commutative. \end{enumerate} Having these data at our disposal, we set \[ \mathfrak X \coloneqq \colim_n \mathfrak X_n , \qquad \mathfrak Y \coloneqq \colim_n \mathfrak Y_n , \] and we let $\mathfrak f \colon \mathfrak X \to \mathfrak Y$ be map induced by the morphisms $\mathfrak f_n$. The properties listed above imply that $\mathfrak f$ is proper and flat and that its generic fiber is equivalent to $f$. We are therefore left to construct the data listed above. When $n = 0$, we can apply the flattening technique of Raynaud-Gruson (see \cite[Theorem 5.2]{Bosch_Formal_II}) to produce a proper flat formal model $\mathfrak f_0 \colon \mathfrak X_0 \to \mathfrak Y_0$ for $\mathrm{t}_0(f) \colon \mathrm{t}_0(X) \to \mathrm{t}_0(Y)$. Assume now that we constructed the above data up to $n$ and let us construct it for $n+1$. Set $\mathcal F \coloneqq \pi_{n+1}(\mathcal O_X)[n+2]$ and $\mathcal G \coloneqq \pi_{n+1}(\mathcal O_Y)[n+2]$. 
Using \cite[Corollary 5.44]{Porta_Yu_Representability}, we can find analytic derivations $d_\alpha \colon (\mathrm t_{\le n}X)[\mathcal F] \to \mathrm t_{\le n} X$ and $d_\mathbf eta \colon (\mathrm t_{\le n} Y)[\mathcal G] \to \mathrm t_{\le n} Y$ making the following cube \mathbf egin{equation} \label{eq:flat_formal_model} \mathbf egin{tikzcd} {} & (\mathrm t_{\le n} X)[\mathcal F] \arrow{rr}{d_0} \arrow{dd} \arrow{dl}{d_\alpha} & & \mathrm t_{\le n} X \arrow{dl} \arrow{dd}{f_n} \\ \mathrm t_{\le n} X \arrow[crossing over]{rr} \arrow{dd} & & \mathrm t_{\le n + 1} X \\ {} & (\mathrm t_{\le n} Y)[\mathcal G] \arrow{rr}[near start]{d_0} \arrow{dl}{d_\mathbf eta} & & \mathrm t_{\le n} Y \arrow{dl} \\ \mathrm t_{\le n} Y \arrow{rr} & & \mathrm t_{\le n + 1} Y \arrow[leftarrow, crossing over]{uu}[swap,near end]{f_{n+1}} \end{tikzcd} \end{equation} commutative. Here $d_0$ denote the zero derivation and we set $f_n \coloneqq \mathrm t_{\le n}(f)$, $f_{n+1} \coloneqq \mathrm t_{\le n+1}(f)$. The derivations $d_\alpha$ and $d_\mathbf eta$ correspond to morphisms $\alpha \colon \mathbb L^\mathrm{an}_{\mathrm t_{\le n} X} \to \mathcal F$ and $\mathbf eta \colon \mathbb L^\mathrm{an}_{\mathrm t_{\le n} Y} \to \mathcal G$, respectively. Moreover, the commutativity of the left side square in \eqref{eq:flat_formal_model} is equivalent to the commutativity of \[ \mathbf egin{tikzcd} f_n^* \mathbb L^\mathrm{an}_{\mathrm t_{\le n} Y} \arrow{r}{f_n^* \mathbf eta} \arrow{d} & f_n^* \mathcal G \arrow{d} \\ \mathbb L^\mathrm{an}_{\mathrm t_{\le n} X} \arrow{r}{\alpha} & \mathcal F \end{tikzcd} \] in $\mathbb Coh^+(\mathrm t_{\le n} X)$. Notice that, since $f$ is flat, the morphism $f_n^* \mathcal F \to \mathcal G$ is an equivalence. Using \cref{thm:adic_and_analytic_cotangent_complex} and the induction hypothesis, we know that $\mathrm{adic}L_{\mathfrak Y_n}$ is a canonical formal model for $\mathbb L^\mathrm{an}_{\mathrm t_{\le n} X}$. Using \cref{thm:formal_models_filtered}, we can therefore find a formal model $\widebar verline{\mathbf eta} \colon \mathrm{adic}L_{\mathfrak Y_n} \to \mathfrak G$ for the map $\mathbf eta$. We now set \[ \mathfrak F \coloneqq \mathfrak f_n^* \mathfrak G . \] Using \cref{lift_p_maps}, we can find $\mathbf m \in \mathbb N^n$ and a formal model $\widetilde alpha \colon \mathrm{adic}L_{\mathfrak X_n} \to \mathfrak F$ for $t^{\mathbf m} \alpha$ together with a homotopy making the diagram \[ \mathbf egin{tikzcd} \mathfrak f_n^* \mathrm{adic}L_{\mathfrak Y_n} \arrow{r}{t^{\mathbf m} \mathfrak f_n^* \widebar verline{\mathbf eta}} \arrow{d} & \mathfrak f_n^* \mathfrak G \arrow[equal]{d} \\ \mathrm{adic}L_{\mathfrak X_n} \arrow{r}{\tilde{\alpha}} & \mathfrak F \end{tikzcd} \] commutative. Set $\widetilde beta \coloneqq t^{\mathbf m} \widebar verline{\mathbf eta} \colon \mathrm{adic}L_{\mathfrak Y_n} \to \mathfrak G$. Then $\widetilde alpha$ and $\widetilde beta$ induce a commutative square \mathbf egin{equation} \label{eq:flat_formal_model_II} \mathbf egin{tikzcd} \mathfrak X_n[\mathfrak F] \arrow{r}{d_{\tilde{\alpha}}} \arrow{d} & \mathfrak X_n \arrow{d}{\mathfrak f_n} \\ \mathfrak Y_n[\mathfrak G] \arrow{r}{d_{\tilde{\mathbf eta}}} & \mathfrak Y_n . \end{tikzcd} \end{equation} We now define $\mathfrak X_{n+1}$ and $\mathfrak Y_{n+1}$ as the square-zero extensions associated to $\widetilde alpha$ and $\widetilde beta$. 
In other words, they are defined by the following pushout diagrams: \[ \mathbf egin{tikzcd} \mathfrak X_n[\mathfrak F] \arrow{d}{d_{\tilde{\alpha}}} \arrow{r}{d_0} & \mathfrak X_n \arrow{d} \\ \mathfrak X_n \arrow{r} & \mathfrak X_{n+1} \end{tikzcd} , \quad \mathbf egin{tikzcd} \mathfrak Y_n[\mathfrak G] \arrow{r}{d_0} \arrow{d}{d_{\tilde{\mathbf eta}}} & \mathfrak Y_n \arrow{d} \\ \mathfrak Y_n \arrow{r} & \mathfrak Y_{n+1} . \end{tikzcd} \] The commutativity of \eqref{eq:flat_formal_model_II} provides a canonical map $\mathfrak f_{n+1} \colon \mathfrak X_{n+1} \to \mathfrak Y_{n+1}$, which is readily verified to be proper and flat. We are therefore left to verify that $\mathfrak f_{n+1}$ is a formal model for $f_{n+1}$. Unraveling the definitions, we see that it is enough to produce equivalences $a \colon (\mathrm t_{\le n} X)[\mathcal F] \xrightarrow{\sim} (\mathrm t_{\le n} X)[\mathcal F]$ and $b \colon (\mathrm t_{\le n} Y)[\mathcal G] \xrightarrow{\sim} (\mathrm t_{\le n} Y)[\mathcal G]$ making the following diagrams \mathbf egin{equation} \label{eq:flat_formal_model_III} \mathbf egin{tikzcd} (\mathrm t_{\le n} X)[\mathcal F] \arrow{r}{d_{t^{\mathbf m} \alpha}} \arrow{d}{a} & \mathrm t_{\le n} X \arrow[equal]{d} \\ (\mathrm t_{\le n} X)[\mathcal F] \arrow{r}{d_{\alpha}} & \mathrm t_{\le n} X \end{tikzcd} , \quad \mathbf egin{tikzcd} (\mathrm t_{\le n} Y)[\mathcal G] \arrow{r}{d_{t^{\mathbf m} \mathbf eta}} \arrow{d}{b} & \mathrm t_{\le n} Y \arrow[equal]{d} \\ (\mathrm t_{\le n} Y)[\mathcal G] \arrow{r}{d_{\mathbf eta}} & \mathrm t_{\le n} Y \end{tikzcd} \end{equation} commutative. The situation is symmetric, so it is enough to deal with $\mathrm t_{\le n} X$. Consider the morphism \[ t^{- \mathbf m} \colon \mathcal F \longrightarrow \mathcal F , \] which exists because all the elements $t_i \in \mathfrak m$ are invertible in $k$. For the same reason it is an equivalence, with inverse given by multiplication by $t^{\mathbf m}$. This morphism induces a map \[ a \colon (\mathrm t_{\le n} X)[\mathcal F] \longrightarrow ( \mathrm t_{\le n} X)[\mathcal F] , \] which by functoriality is an equivalence. We now observe that the commutativity of \eqref{eq:flat_formal_model_III} is equivalent to the commutativity of \[ \mathbf egin{tikzcd} \mathbb L^\mathrm{an}_{\mathrm t_{\le n} X} \arrow{r}{t^{\mathbf m} \alpha} \arrow[equal]{d} & \mathcal F \arrow{d}{t^{- \mathbf m}} \\ \mathbb L^\mathrm{an}_{\mathrm t_{\le n} X} \arrow{r}{\alpha} & \mathcal F , \end{tikzcd} \] which is immediate. The proof is therefore achieved. \end{proof} \section{The plus pushforward for almost perfect sheaves} Let $f \colon X \to Y$ be a proper map between derived $k$-analytic\xspace spaces of finite tor-amplitude. In \cite[Definition 7.9]{Porta_Yu_Mapping} it is introduced a functor \[ f_+ \colon \mathrm{Perf}(X) \longrightarrow \mathrm{Perf}(Y) , \] and it is shown in Proposition 7.11 in loc.\ cit.\ that for every $\mathcal G \in \mathbb Coh^+(Y)$ there is a natural equivalence \[ \Map_{\mathbb Coh^+(X)}( \mathcal F, f^* \mathcal G ) \simeq \Map_{\mathbb Coh^+(Y)}( f_+(\mathcal F), \mathcal G ) . \] In this section we extend the definition of $f_+$ to the entire $\mathbb Coh^+(X)$, at least under the stronger assumption of $f$ being flat. \mathbf egin{rem} In algebraic geometry, the extension of $f_+$ to $\mathbb Coh^+(X)$ passes through the extension to $\mathrm{QCoh}(X) \simeq \mathrm{Ind}(\mathrm{Perf}(X))$. 
This ultimately requires being able to describe every element in $\mathrm{Coh}^+(X)$ as a filtered colimit of elements in $\mathrm{Perf}(X)$, which in analytic geometry is possible only locally. \end{rem} Therefore, this technique cannot be applied in analytic geometry. When dealing with non-archimedean analytic geometry, formal models can be used to circumvent this problem. \begin{prop} \label{prop:formal_plus_pushforward} Let $f \colon \mathfrak X \to \mathfrak Y$ be a proper map between derived $k^\circ$-adic schemes. Assume that $f$ has finite tor-amplitude. Then the functor \[ f^* \colon \mathrm{Coh}^+(\mathfrak Y) \to \mathrm{Coh}^+(\mathfrak X) \] admits a left adjoint \[ f_+ \colon \mathrm{Coh}^+(\mathfrak X) \to \mathrm{Coh}^+(\mathfrak Y) . \] \end{prop} \begin{proof} Let $\mathfrak X_n \coloneqq \mathfrak X \times_{\Spf(k^\circ)} \Spec(k^\circ / \mathfrak m^n)$ and define similarly $\mathfrak Y_n$. Let $f_n \colon \mathfrak X_n \to \mathfrak Y_n$ be the induced morphism. Then by definition of $k^\circ$-adic schemes, \personal{we are using the completeness in particular} we have \[ \mathfrak X \simeq \colim_{n \in \mathbb N} \mathfrak X_n , \qquad \mathfrak Y \simeq \colim_{n \in \mathbb N} \mathfrak Y_n , \] and therefore \[ \mathrm{Coh}^+(\mathfrak X) \simeq \lim_{n \in \mathbb N} \mathrm{Coh}^+(\mathfrak X_n) , \qquad \mathrm{Coh}^+(\mathfrak Y) \simeq \lim_{n \in \mathbb N} \mathrm{Coh}^+(\mathfrak Y_n) . \] Combining \cite[Remark 6.4.5.2(b) \& Proposition 6.4.5.4(1)]{Lurie_SAG}, we see that each functor \[ f_n^* \colon \mathrm{Coh}^+(\mathfrak Y_n) \longrightarrow \mathrm{Coh}^+(\mathfrak X_n) \] admits a left adjoint $f_{n+}$. Moreover, Proposition 6.4.5.4(2) in loc.\ cit.\ implies that these functors $f_{n+}$ can be assembled into a natural transformation, and that therefore they induce a well defined functor \[ f_+ \colon \mathrm{Coh}^+(\mathfrak X) \longrightarrow \mathrm{Coh}^+(\mathfrak Y) . \] Now let $\mathcal F \in \mathrm{Coh}^+(\mathfrak X)$ and $\mathcal G \in \mathrm{Coh}^+(\mathfrak Y)$. Let $\mathcal F_n$ and $\mathcal G_n$ be the pullbacks of $\mathcal F$ and $\mathcal G$ to $\mathfrak X_n$ and $\mathfrak Y_n$, respectively. Then \begin{align*} \Map_{\mathrm{Coh}^+(\mathfrak X)}( \mathcal F, f^*(\mathcal G) ) & \simeq \lim_{n \in \mathbb N} \Map_{\mathrm{Coh}^+(\mathfrak X_n)}( \mathcal F_n, f_n^*( \mathcal G_n) ) \\ & \simeq \lim_{n \in \mathbb N} \Map_{\mathrm{Coh}^+(\mathfrak Y_n)}( f_{n+}(\mathcal F_n), \mathcal G_n ) \\ & \simeq \Map_{\mathrm{Coh}^+(\mathfrak Y)}( f_+(\mathcal F), \mathcal G ) , \end{align*} which completes the proof. \end{proof} \begin{cor} \label{cor:plus_pushforward} Let $f \colon X \to Y$ be a proper map between derived analytic spaces. Assume that $f$ is flat. Then the functor \[ f^* \colon \mathrm{Coh}^+(Y) \to \mathrm{Coh}^+(X) \] admits a left adjoint \[ f_+ \colon \mathrm{Coh}^+(X) \to \mathrm{Coh}^+(Y) . \] \end{cor} \begin{proof} Using \cref{thm:formal_model_flat_map}, we can choose a proper flat formal model $\mathfrak f \colon \mathfrak X \to \mathfrak Y$ for $f$. Thanks to \cref{prop:formal_plus_pushforward}, we have a well defined functor \[ \mathfrak f_+ \colon \mathrm{Coh}^+(\mathfrak X) \longrightarrow \mathrm{Coh}^+(\mathfrak Y) . \] We claim that it restricts to a functor \[ \mathfrak f_+ \colon \mathrm{Coh}^+_{\mathrm{nil}}(\mathfrak X) \longrightarrow \mathrm{Coh}^+_{\mathrm{nil}}(\mathfrak Y) .
\] Using \cref{cor:characterization_nilpotent}, it is enough to prove that, for every $\mathcal F \in \mathrm{Coh}^+_{\mathrm{nil}}(\mathfrak X)$, \[ \mathfrak f_+( \mathcal F )^\mathrm{loc} \simeq 0 . \] Extending $\mathfrak f_+$ to a functor $\mathfrak f_+ \colon \mathrm{Ind}( \mathrm{Coh}^+(\mathfrak X) ) \to \mathrm{Ind}( \mathrm{Coh}^+(\mathfrak Y) )$, we see that \[ \mathfrak f_+( \mathcal F )^\mathrm{loc} \simeq \mathfrak f_+(\mathcal F^\mathrm{loc}) \simeq 0 . \] Using \cref{cor:comparing_local_and_rigid_almost_perfect_complexes}, we get a well defined functor \[ f_+ \colon \mathrm{Coh}^+(X) \longrightarrow \mathrm{Coh}^+(Y) . \] We only have to prove that it is left adjoint to $f^*$. Let $\mathcal F \in \mathrm{Coh}^+(X)$ and $\mathcal G \in \mathrm{Coh}^+(Y)$. Choose formal models $\mathfrak F \in \mathrm{Coh}^+(\mathfrak X)$ for $\mathcal F$ and $\mathfrak G \in \mathrm{Coh}^+(\mathfrak Y)$ for $\mathcal G$. Then, unraveling the construction of $f_+$, we find a canonical equivalence \personal{Induced from the given equivalence $\mathfrak F^\mathrm{rig} \simeq \mathcal F$} \[ f_+(\mathcal F) \simeq \mathfrak f_+( \mathfrak F )^\mathrm{rig} . \] We now have the following sequence of natural equivalences: \begin{align*} \Map_{\mathrm{Coh}^+(Y)}( f_+( \mathcal F ), \mathcal G ) & \simeq \Map_{\mathrm{Coh}^+(Y)}( (\mathfrak f_+( \mathfrak F ))^\mathrm{rig}, \mathfrak G^\mathrm{rig} ) \\ & \simeq \Map_{\mathrm{Coh}^+(\mathfrak Y)}( \mathfrak f_+( \mathfrak F ), \mathfrak G ) \otimes_{k^\circ} k & \text{by \cref{cor:base_change_hom}} \\ & \simeq \Map_{\mathrm{Coh}^+(\mathfrak X)}( \mathfrak F, \mathfrak f^* \mathfrak G ) \otimes_{k^\circ} k \\ & \simeq \Map_{\mathrm{Coh}^+(X)}( \mathfrak F^\mathrm{rig}, ( \mathfrak f^* \mathfrak G )^\mathrm{rig} ) & \text{by \cref{cor:base_change_hom}} \\ & \simeq \Map_{\mathrm{Coh}^+(X)}( \mathcal F, f^* \mathcal G ) . \end{align*} The proof is therefore complete. \end{proof} \begin{cor} \label{cor:plus_pushforward_base_change} Let $f \colon X \to Y$ be a proper and flat map between derived analytic spaces. Let $p \colon Z \to Y$ be any other map and consider the pullback square \[ \begin{tikzcd} W \arrow{r}{q} \arrow{d}{g} & X \arrow{d}{f} \\ Z \arrow{r}{p} & Y . \end{tikzcd} \] Then for any $\mathcal F \in \mathrm{Coh}^+(X)$ the canonical map \[ g_+(q^*(\mathcal F)) \longrightarrow p^*(f_+(\mathcal F)) \] is an equivalence. \end{cor} \begin{proof} Using \cref{thm:formal_model_flat_map}, we find a flat formal model $\mathfrak f \colon \mathfrak X \to \mathfrak Y$. Choose a formal model $\mathfrak p \colon \mathfrak Z \to \mathfrak Y$ for $p \colon Z \to Y$, \personal{It is always possible to find a formal model $\mathfrak Z \to \mathfrak Y'$ for $p$. We can also find a zig-zag $\mathfrak Y' \leftarrow \mathfrak Y'' \to \mathfrak Y$. Then $\mathfrak Y'' \times_{\mathfrak Y} \mathfrak X \to \mathfrak Y''$ is a flat formal model for $f$ and $\mathfrak Y'' \times_{\mathfrak Y'} \mathfrak Z \to \mathfrak Y''$ is a formal model for $p$.} and form the pullback square \[ \begin{tikzcd} \mathfrak W \arrow{r}{\mathfrak q} \arrow{d}{\mathfrak g} & \mathfrak X \arrow{d}{\mathfrak f} \\ \mathfrak Z \arrow{r}{\mathfrak p} & \mathfrak Y . \end{tikzcd} \] Choose also a formal model $\mathfrak F \in \mathrm{Coh}^+(\mathfrak X)$ for $\mathcal F$. It is then enough to prove that the canonical map \[ \mathfrak g_+( \mathfrak q^*( \mathfrak F ) ) \longrightarrow \mathfrak p^*( \mathfrak f_+(\mathfrak F) ) \] is an equivalence. This follows at once from \cite[Proposition 6.4.5.4(2)]{Lurie_SAG}.
\end{proof} \section{Representability of $\mathbf R \mathrm{Hilb}(X)$} Let $p \colon X \to S$ be a proper and flat morphism of underived $k$-analytic\xspace spaces. We define the functor \[ \mathrm R \mathrm{Hilb}(X/S) \colon \mathrm{dAfd}_S^\mathrm{op} \longrightarrow \mathcal S \] by sending $T \to S$ to the space of diagrams \begin{equation} \label{eq:functor_points_Hilbert} \begin{tikzcd}[column sep = small] Y \arrow[hook]{rr}{i} \arrow{dr}[swap]{q_T} & & T \times_S X \arrow{dl}{p_T} \\ {} & T \end{tikzcd} \end{equation} where $i$ is a closed immersion of derived $k$-analytic\xspace spaces, and $q_T$ is flat. \begin{prop} \label{prop:RHilb_cotangent_complex} Keeping the above notation and assumptions, $\mathrm R \mathrm{Hilb}(X/S)$ admits a global analytic cotangent complex. \end{prop} \begin{proof} Let $x \colon T \to \mathrm R\mathrm{Hilb}(X/S)$ be a morphism from a derived $k$-affinoid space $T \in \mathrm{dAfd}_S$. It classifies a diagram of the form \eqref{eq:functor_points_Hilbert}. Unraveling the definitions, we see that the functor \[ \mathrm{Der}^\mathrm{an}_{\mathrm R\mathrm{Hilb}(X/S),x}(T;-) \colon \mathrm{Coh}^+(T) \longrightarrow \mathcal S \] can be explicitly written as \[ \mathrm{Der}^\mathrm{an}_{\mathrm R\mathrm{Hilb}(X/S),x}(T;\mathcal F) \simeq \Map_{\mathrm{Coh}^+(Y)}( \mathbb L^\mathrm{an}_{Y/T \times_S X}, q_T^*( \mathcal F ) ) . \] Since $q_T \colon Y \to T$ is proper and flat, \cref{cor:plus_pushforward} implies the existence of a left adjoint $q_{T+} \colon \mathrm{Coh}^+(Y) \to \mathrm{Coh}^+(T)$ for $q_T^*$. Moreover, \cite[Corollary 5.40]{Porta_Yu_Representability} implies that $\mathbb L^\mathrm{an}_{Y/ T \times_S X} \in \mathrm{Coh}^{\ge 0}( Y )$. Therefore, we find \[ \mathrm{Der}^\mathrm{an}_{\mathrm R\mathrm{Hilb}(X/S),x}(T; \mathcal F) \simeq \Map_{\mathrm{Coh}^+(T)}( q_{T+}( \mathbb L^\mathrm{an}_{Y / T \times_S X} ), \mathcal F ) , \] and therefore $\mathrm R \mathrm{Hilb}(X/S)$ admits an analytic cotangent complex at $x$. Using \cref{cor:plus_pushforward_base_change}, we see that it admits as well a global analytic cotangent complex. \end{proof} \begin{thm} Let $X$ be a $k$-analytic\xspace space. Then $\mathbf{R} \mathrm{Hilb}(X)$ is a derived analytic space. \end{thm} \begin{proof} We only need to check the hypotheses of \cite[Theorem 7.1]{Porta_Yu_Representability}. The representability of the truncation is guaranteed by \cite[Proposition 5.3.3]{Conrad_Spreading-out}. The existence of the global analytic cotangent complex has been dealt with in \cref{prop:RHilb_cotangent_complex}. Convergence and infinitesimal cohesiveness are straightforward checks. The theorem follows. \end{proof} As a second concluding application, let us mention that the theory of the plus pushforward developed in this paper allows us to remove the lci assumption in \cite[Theorem 8.6]{Porta_Yu_Mapping}: \begin{thm} \label{thm:rep_mapping} Let $S$ be a rigid $k$-analytic\xspace space. Let $X,Y$ be rigid $k$-analytic\xspace spaces over $S$. Assume that $X$ is proper and flat over $S$ and that $Y$ is separated over $S$. Then the $\infty$-functor $\mathbf{Map}_S(X,Y)$ is representable by a derived $k$-analytic\xspace space separated over $S$. \end{thm} \begin{proof} The same proof as that of \cite[Theorem 8.6]{Porta_Yu_Mapping} applies.
It is enough to observe that Corollaries \ref{cor:plus_pushforward} and \ref{cor:plus_pushforward_base_change} make it possible to prove Lemma 8.4 in loc.\ cit.\ without the assumption that $Y \to S$ is locally of finite presentation. \end{proof} \ifpersonal \section{Coherent dualizing sheaves} It should be possible to apply the formalism of this paper to get a reasonable construction for the dualizing sheaf of a morphism of derived $k$-analytic\xspace schemes. \begin{defin} Let $f \colon X \to Y$ be a morphism of derived $k$-analytic\xspace schemes. Choose a formal model $\mathfrak f \colon \mathfrak X \to \mathfrak Y$ and let $\omega_{\mathfrak X / \mathfrak Y}$ be a dualizing sheaf. We set \[ \omega_{X/Y} \coloneqq ( \omega_{\mathfrak X / \mathfrak Y} )^\mathrm{rig} . \] \end{defin} \personal{The problem with this is that $\omega_{\mathfrak X / \mathfrak Y}$ is not stable under the upper star pullback, only under the upper shriek. To make sense of it at the formal level, we would need to know that the extension of the functor $\mathrm{Coh}^+$ to formal stacks via upper star functoriality coincides with the extension via upper shriek functoriality. This might have been proven in Gaitsgory-Rozenblyum.} \begin{prop} Suppose $f \colon X \to Y$ is proper and flat. Then: \begin{enumerate} \item We have \[ f_+(\mathcal F) = f_*( \mathcal F \otimes \omega_{X / Y} ) . \] \item The functor \[ \mathcal F \mapsto f^!(\mathcal F) \coloneqq f^*( \mathcal F \otimes \omega_{X / Y} ) \] is a right adjoint for the functor $f_*$. \end{enumerate} \end{prop} \fi \end{document}
math
\begin{document} \begin{abstract} A {\em zero-sum} sequence of integers is a sequence of nonzero terms that sum to $0$. Let $k>0$ be an integer and let $[-k,k]$ denote the set of all nonzero integers between $-k$ and $k$. Let $\ell(k)$ be the smallest integer $\ell$ such that any zero-sum sequence with elements from $[-k,k]$ and length greater than $\ell$ contains a proper nonempty zero-sum subsequence. In this paper, we prove a more general result which implies that $\ell(k)=2k-1$ for $k>1$. \end{abstract} \keywords{Zero-sum sequence, vector space partition.} \title{A zero-sum theorem over $\mathbb{Z}$} \section{Introduction}\label{sec:intro} For any multiset $S$, let $|S|$ denote the number of elements in $S$, let $\max(S)$ denote the maximum element in $S$, and let $\Sigma S=\sum_{s\in S}s$. Let $A$ and $B$ be nonempty multisets of positive integers. The pair $\{A,B\}$ is said to be {\em irreducible} if $\Sigma A=\Sigma B$, and for all nonempty proper multisubsets $A'\subset A$ and $B'\subset B$, $\Sigma A'\not=\Sigma B'$ holds. If $\{A,B\}$ fails to be irreducible, we say that it is {\em reducible}. It is easy to see that if $\{A,B\}$ is irreducible, then $A\cap B=\emptyset$ or $|A|=|B|=1$. We define the {\em length} of $\{A,B\}$ as \[\ell(A,B)=|A|+|B|.\] An irreducible pair $\{A,B\}$ is said to be {\em $k$-irreducible} if $\max(A\cup B)\leq k$. We define \begin{equation}\label{def:lk} \ell(k)=\max\limits_{\{A,B\}}\ell(A,B), \end{equation} where the maximum is taken over all $k$-irreducible pairs $\{A,B\}$. For $k>1$, let \begin{equation}\label{exp:lb} A=\{\underbrace{k,\ldots,k}_{k-1}\} \mbox{ and } B=\{\underbrace{k-1,\ldots,k-1}_k\}. \end{equation} Then $\{A,B\}$ is $k-$irreducible and $\ell(A,B)=2k-1$. This implies that $\ell(k)\geq 2k-1$. El-Zanati, Seelinger, Sissokho, Spence, and Vanden Eynden introduced $k$-irreducible pairs in connection with their work on irreducible $\lambda$-fold partitions (e.g., see~\cite{ESSSV}). They also conjectured that $\ell(k)=2k-1$. We prove a more general result which implies this conjecture in our main theorem below. \begin{theorem}\label{thm:1} If $\{A,B\}$ is an irreducible pair, then $|A|\leq \max(B)$ and $|B|\leq \max(A)$. Consequently, $\ell(k)=2k-1$ if $k>1$. \end{theorem} One may naturally ask which $k-$irreducible pairs $\{A,B\}$ achieve the maximum possible length. We answer this question in the following corollary. \begin{corollary}\label{cor:1} Let $k>1$ be an integer. A $k-$irreducible pair $\{A,B\}$ has (maximum possible) length $\ell(A,B)=2k-1$ if and only if $\{A,B\}$ is the pair shown in~\eqref{exp:lb}. \end{corollary} A {\em zero-sum} sequence is a sequence of nonzero terms that sum to $0$. A zero-sum sequence is said to be {\em irreducible} if it does not contain a proper nonempty zero-sum subsequence. Given a zero-sum sequence $\tau$ with elements from $[-k,k]$, let $A_\tau$ be the multiset of all positive integers from $\tau$ and $B_\tau$ be the multiset containing the absolute values of all negative integers from $\tau$. Then the sequence $\tau$ is irreducible if and only if the pair $\{A_\tau,B_\tau\}$ is irreducible. Let $k$ be a positive integer, and let $[-k,k]$ denote the set of all nonzero integers between $-k$ and $k$. Then the number $\ell(k)$ defined in~\eqref{def:lk} is also equal to the smallest integer $\ell$ such that any zero-sum sequence with elements from $[-k,k]$ and length greater than $\ell$ contains a proper nonempty zero-sum subsequence.
Moreover, it follows from Theorem~\ref{thm:1} that $\ell(k)=2k-1$. Let $G$ be a finite (additive) abelian group of order $n$. The {\em Davenport constant} of $G$, denoted by $D(G)$, is the smallest integer $m$ such that any sequence of elements from $G$ with length $m$ contains a nonempty zero-sum subsequence. Another key constant, $E(G)$, is the smallest integer $m$ such that any sequence of elements from $G$ with length $m$ contains a zero-sum subsequence of length exactly $n$. The constant $E(G)$ was inspired by the well-known result of Erd\H{o}s, Ginzburg, and Ziv~\cite{EGZ}, which states that $E(\Z/n\Z)=2n-1$. Subsequently, Gao~\cite{G} proved that $E(G)=D(G)+n-1$. There is a rich literature dealing with the constants $D(G)$ and $E(G)$. We refer the interested reader to the survey papers of Caro~\cite{C} and Gao--Geroldinger~\cite{GG} for further information. By rephrasing our main theorem using the language of zero-sum sequences, we can view it as a zero-sum theorem. Whereas zero-sum sequences are traditionally studied for finite abelian groups such as $\Z/n\Z$, we consider in this paper zero-sum sequences over the infinite group $\Z$. The rest of the paper is structured as follows. In Section~\ref{sec:main}, we prove our main results (Theorem~\ref{thm:1} and Corollary~\ref{cor:1}), and in Section~\ref{sec:conc}, we end with some concluding remarks. \section{Proofs of Theorem~\ref{thm:1} and Corollary~\ref{cor:1}}\label{sec:main}\ Suppose we are given a $k-$irreducible pair $\{A,B\}$. We may assume that $A=\{x_1\cdot a_1,x_2\cdot a_2,\ldots, x_n\cdot a_n\}$ and $B=\{y_1\cdot b_1,y_2\cdot b_2,\ldots, y_m\cdot b_m\}$, where the $a_i$'s and $b_j$'s are all positive integers such that $1\leq a_i,b_j\leq k$ for $1\leq i\leq n$, $1\leq j\leq m$. We also assume that the $a_i$'s (resp. $b_j$'s) are pairwise distinct. Moreover, $x_i>0$ and $y_j>0$ are the multiplicities of $a_i$ and $b_j$ respectively. For any pair $(a_i,b_j)$, let \begin{enumerate} \item $C$ be the multiset obtained from $A$ by: $(i)$ removing one copy of $a_i$, and $(ii)$ introducing one copy of $a_i-b_j$ if $a_i>b_j$. \item $D$ be the multiset obtained from $B$ by: $(i)$ removing one copy of $b_j$, and $(ii)$ introducing one copy of $b_j-a_i$ if $b_j>a_i$. \end{enumerate} We say that $\{C,D\}$ is {\em $(a_i,b_j)$-derived} from $\{A,B\}$. We also call the above process an {\em $(a_i,b_j)$-derivation}. Consider the integers $p>0$, $q>0$, $u>0$, $v>0$, and $z_{ij}\geq 0$ for $p\leq i\leq q$ and $u\leq j\leq v$. We say that $\{C,D\}$ is $\prod_{i=p}^q\prod_{j=u}^v (a_i,b_j)^{z_{ij}}$-derived from $\{A,B\}$ if it is obtained by performing on $\{A,B\}$ an $(a_i,b_j)$-derivation $z_{ij}$ times for each $(i,j)$ pair. (If $z_{ij}=0$, then we simply do not perform the corresponding $(a_i,b_j)$-derivation.) We illustrate this operation with the following example. Let $A=\{3\cdot 7,2\cdot 1\}=\{7,7,7,1,1\}$ and $B=\{3\cdot 6,5\}=\{6,6,6,5\}$. Then $\{A,B\}$ is $7$-irreducible. A $(7,6)^2(7,5)$-derivation of $\{A,B\}$ yields the pair $\{C,D\}$, where $C=\{2,1,1,1,1\}$ and $D=\{6\}$. Note that $\{C,D\}$ is $6$-irreducible (thus, $7$-irreducible). In general, the order in which the derivation is done makes a difference. For example, if $A=\{5,5\}$ and $B=\{2,2,2,2,2\}$, then we can do a $(5,2)$-derivation followed by a $(3,2)$-derivation on $\{A,B\}$, but not in reverse order. However, all the derivations used in our proofs can be done in any order.
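The derivation operation is easy to experiment with on a computer. The following short Python sketch (an illustration only; the function name and the list-based representation of multisets are ours and are not part of the formal development) performs $(a,b)$-derivations and reproduces the $(7,6)^2(7,5)$-derivation of the example above.
\begin{verbatim}
from collections import Counter

def derive(A, B, a, b):
    # One (a,b)-derivation: remove one copy of a from A and one copy of b
    # from B, and add |a-b| to the multiset that held the larger element.
    A, B = Counter(A), Counter(B)
    assert A[a] > 0 and B[b] > 0
    A[a] -= 1
    B[b] -= 1
    if a > b:
        A[a - b] += 1
    elif b > a:
        B[b - a] += 1
    return sorted(A.elements()), sorted(B.elements())

A, B = [7, 7, 7, 1, 1], [6, 6, 6, 5]      # A = {3*7, 2*1}, B = {3*6, 5}
for a, b in [(7, 6), (7, 6), (7, 5)]:     # a (7,6)^2 (7,5)-derivation
    A, B = derive(A, B, a, b)
print(A, B)                               # [1, 1, 1, 1, 2] [6]
\end{verbatim}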
We will use the following lemma. \begin{lemma}\label{lem:1}\ Let $A=\{x_1\cdot a_1,x_2\cdot a_2,\ldots, x_n\cdot a_n\}$ and $B=\{y_1\cdot b_1,y_2\cdot b_2,\ldots, y_m\cdot b_m\}$ be multisets, where the $a_i$'s and $b_i$'s are all positive integers such that $1\leq a_i,b_j\leq k$ for $1\leq i\leq n$, $1\leq j\leq m$. Moreover, $x_i>0$ and $y_j>0$ are the multiplicities of $a_i$ and $b_j$ respectively. Suppose that $\{A,B\}$ is a $k$-irreducible pair with length $|A|+|B|>2$. \n $(i)$ If $\{C,D\}$ is $(a_i,b_j)$-derived from $\{A,B\}$, then it is $k-$irreducible. \n $(ii)$ Let $p>0$, $q>0$, $u>0$, $v>0$, and $z_{ij}\geq 0$ for $p\leq i\leq q$ and $u\leq j\leq v$ be integers. Assume that $\sum_{j=u}^v z_{ij}\leq x_i$ and $\sum_{i=p}^q z_{ij}\leq y_j$. If $\{C,D\}$ is $\prod_{i=p}^q\prod_{j=u}^v (a_i,b_j)^{z_{ij}}$-derived from $\{A,B\}$, then it is $k-$irreducible. \end{lemma} \begin{proof} We first prove $(i)$. Without loss of generality, we may assume that $a_i>b_j$ since the proof is similar for $a_i<b_j$. Then \begin{equation}\label{eq:lem.1} C=\left(A-\{a_i\}\right)\cup \{a_i-b_j\} \mbox{ and } D=B-\{b_j\}, \end{equation} are nonempty since $|A|+|B|>2$. Since $\{A,B\}$ is irreducible, we have \[ \Sigma A=\Sigma B\Rightarrow \Sigma C=\Sigma A-a_i+(a_i-b_j)=\Sigma B -b_j=\Sigma D.\] Assume that $\{C,D\}$ is reducible. Then there exist nonempty proper subsets $C'\subset C$ and $D'\subset D$ such that $\Sigma C'=\Sigma D'$. Let $\overline{C}'=C-C'$ and $\overline{D}'=D-D'$. Then $\overline{C}'\subset C$ and $\overline{D}'\subset D$ are also nonempty proper subsets that satisfy $\Sigma \overline{C}'=\Sigma \overline{D}'$. However, it follows from the definition of $C$ in~\eqref{eq:lem.1} that either $C'$ or $\overline{C}'$ is a proper subset of $A$, since $a_i-b_j$ cannot be in both $C'$ and $\overline{C}'$. It also follows from the definition of $D$ in~\eqref{eq:lem.1} that both $D'$ and $\overline{D}'$ are proper subsets of $B$. Thus, either the subset pair $\{C',D'\}$ or $\{\overline{C}',\overline{D}'\}$ is a witness to the reducibility of $\{A,B\}$. This contradicts the fact that $\{A,B\}$ is irreducible. Hence, if $\{A,B\}$ is irreducible, then $\{C,D\}$ is also irreducible. In addition, it follows from~\eqref{eq:lem.1} that $\max(C)\leq \max(A)$ and $\max(D)\leq \max(B)$. Hence, if $\{A,B\}$ is $k$-irreducible, then $\{C,D\}$ is also $k$-irreducible. To prove $(ii)$, observe that we can apply $(i)$ recursively by performing (in any order) on $\{A,B\}$ an $(a_i,b_j)$-derivation $z_{ij}$ times for each $(i,j)$ pair. The conditions on the $z_{ij}$'s guarantee that there are enough pairs $(a_i,b_j)$ in $A\times B$ to independently perform all the $(a_i,b_j)$-derivations for $p\leq i\leq q$ and $u\leq j\leq v$. \end{proof} We will also need the following basic lemma. \begin{lemma}\label{lem:2} Let $x_i$ and $y_j$ be positive integers, where $1\leq i\leq n$ and $1\leq j\leq m$. If $t<m$ is a positive integer such that \[\sum_{j=1}^t y_j\leq \sum_{i=1}^n x_i \mbox{ and } \sum_{j=1}^{t+1} y_j>\sum_{i=1}^n x_i,\] then there exist integers $z_{ij}\geq0 $, $1\leq i\leq n$ and $1\leq j\leq t+1$, such that \[\sum_{i=1}^n z_{ij}=y_j\mbox{ for $1\leq j\leq t$, }\; z_{i,t+1}=x_i-\sum_{j=1}^t z_{ij}\geq0, \mbox{ and } y_{t+1}>\sum_{i=1}^n z_{i,t+1}.\] \end{lemma} \begin{proof} For each $j$, $1\leq j\leq t+1$, consider $y_j$ marbles of color $j$. For each $i$, $1\leq i\leq n$, consider a bin with capacity $x_i$ (i.e., it can hold $x_i$ marbles).
Since $p=\sum_{j=1}^t y_j\leq \sum_{i=1}^n x_i=q$, we can distribute all the $p$ marbles into the $n$ bins (with total capacity $q$) without exceeding the capacity of any given bin. Since $p+y_{t+1}=\sum_{j=1}^{t+1} y_j> q$, we can use the additional $y_{t+1}$ marbles to top off the bins that were not already full. Now define $z_{ij}$ to be the number of marbles in bin $i$ that have color $j$. Then the $z_{ij}$'s satisfy the required properties. \end{proof} \bs We now prove our main theorem. \begin{proof}[Proof of Theorem~\ref{thm:1}]\ Let $\{A,B\}$ be a $k-$irreducible pair. We can write $A=\{x_1\cdot a_1,x_2\cdot a_2,\ldots, x_n\cdot a_n\}$ and $B=\{y_1\cdot b_1,y_2\cdot b_2,\ldots, y_m\cdot b_m\}$, where the $a_i$'s and $b_i$'s are all positive integers such that $1\leq a_i,b_j\leq k$ for $1\leq i\leq n$, $1\leq j\leq m$. Moreover, $x_i>0$ and $y_j>0$ are the multiplicities of $a_i$ and $b_j$ respectively. Consequently, we may assume that the $a_i$'s (resp. $b_j$'s) are pairwise distinct. Without loss of generality, we may also assume that \begin{equation}\label{eq:gen-ass} a_1>\ldots>a_n \mbox{ and } b_1>\ldots>b_m. \end{equation} We shall prove by induction on $r=\max(A)+\max(B)\geq 2$ that \begin{equation}\label{eq:ind} |A|\leq \max(B)\quad \mbox{ and }\quad |B|\leq \max(A). \end{equation} If $r=2$, then $k\leq 2$ and the only possible irreducible pair is $\{\{1\},\{1\}\}$. Thus, the inductive statement~\eqref{eq:ind} is clearly true. If $a_i=b_j$ for some pair $(i,j)$, then $A=\{a_i\}=B$. (Otherwise, $A'=\{a_i\}\subset A$ and $B'= \{a_i\}\subset B$ are nonempty proper subsets satisfying $\Sigma A'=\Sigma B'$, which contradicts the irreducibility of $\{A,B\}$.) Moreover, $|A|=|B|=1\leq a_i=\max(A)=\max(B)$ holds. Since $k>1$, we further obtain $\ell(A,B)=|A|+|B|=2<2k-1$. So we can assume that $A\cap B=\emptyset$. Without loss of generality, we may also assume that $\max(A)=a_1>b_1=\max(B)$. \bs Suppose that the theorem holds for all $k-$irreducible pairs $\{C,D\}$ with $2\leq r'=\max( C)+\max( D)<r$. To prove the inductive step, we consider two parts. \bs\n{\bf Part I:} In this part, we show $|A|\leq \max(B)$ . We consider two cases. \bs\n {\bf Case 1:} $y_1>x_1$. Since $y_1>x_1$, we can perform an $(a_1,b_1)^{x_1}$-derivation from $\{A,B\}$ to obtain (by Lemma~\ref{lem:1}) the $k$-irreducible pair $\{C,D\}$, where \[C=\{x_1\cdot (a_1-b_1),x_2\cdot a_2,\ldots,x_n\cdot a_n\} \] and $D=\{ (y_1-x_1)\cdot b_1,y_2\cdot b_2,\ldots,y_m\cdot b_m\}$. Since $r'=\max(C)+\max(D)=\max\{a_1-b_1,a_2\} +b_1<r$, it follows from the induction hypothesis that \begin{equation}\label{eq:c1} |C|=\sum_{i=1}^n x_i\leq \max(D)=b_1. \end{equation} It follows from~\eqref{eq:c1} that $|A|=\sum_{i=1}^n x_i=|C|\leq b_1$ as required. \bs\n {\bf Case 2:} $y_1\leq x_1$. Since $y_1\leq x_1$, we can perform an $(a_1,b_1)^{y_1}$-derivation from $\{A,B\}$ to obtain (by Lemma~\ref{lem:1}) the $k$-irreducible pair $\{C,D\}$, where \[C=\{(x_1-y_1)\cdot a_1,y_1\cdot (a_1-b_1),x_2\cdot a_2,\ldots,x_n\cdot a_n\}\] and $D=\{y_2\cdot b_2,\ldots,y_m\cdot b_m\}$. Since $r'=\max(C)+\max(D) \leq a_1+b_2<r$, it follows from the induction hypothesis that \begin{equation}\label{eq:c2} |C|=(x_1-y_1)+y_1+\sum_{i=2}^n x_i=\sum_{i=1}^n x_i\leq \max(D)= b_2 . \end{equation} It follows from~\eqref{eq:c2} that $|A|=\sum_{i=1}^n x_i=|C|\leq b_2<b_1$. This concludes the first part of the proof. \bs\n {\bf Part II:} In this part, we show that $|B|\leq \max(A)=a_1$. Assume that $|B|>a_1$. 
Then since $a_1>b_1$ and $|A|\leq b_1$ (by Part I), we obtain $|B|>|A|$. We now consider the cases $a_n>b_1$ and $b_1>a_n$. (Recall that $b_1\not=a_n$ since $A\cap B=\emptyset$.) \bs\n{\bf Case 1:} $a_n>b_1$. Then it follows from our general assumption~\eqref{eq:gen-ass} that \[a_1>\ldots>a_n>b_1>\ldots>b_m.\] We consider the following two subcases. \bs\n{\bf Case 1.1:} $y_1>\sum_{i=1}^n x_i$. Then we can perform a $\prod_{i=1}^n(a_i,b_1)^{x_i}$-derivation from $\{A,B\}$ to obtain (by Lemma~\ref{lem:1}) the $k$-irreducible pair $\{C,D\}$, where \[C=\{x_1\cdot (a_1-b_1),x_2\cdot (a_2-b_1),\ldots,x_n\cdot (a_n-b_1)\},\] and \[D=\Big\{\Big(y_1-\sum_{i=1}^n x_i\Big)\cdot b_1,y_2\cdot b_2,\ldots,y_m\cdot b_m\Big\}.\] Since $r'=\max(C)+\max(D)=(a_1-b_1)+b_1 <r$, it follows from the induction hypothesis that \begin{equation}\label{eq:c1.1} |C|=\sum_{i=1}^n x_i\leq \max(D)\mbox{ and }|D|=\sum_{j=1}^m y_j-\sum_{i=1}^n x_i\leq \max(C). \end{equation} Thus, it follows from~\eqref{eq:c1.1} that \begin{equation*} |B|=\sum_{j=1}^m y_j=|C|+|D|\leq \max(C)+\max(D)= (a_1-b_1)+b_1=a_1. \end{equation*} \bs\n{\bf Case 1.2:} $y_1\leq \sum_{i=1}^n x_i$. Recall from the first paragraph in Part II that \[\sum_{j=1}^m y_j=|B| > |A|=\sum_{i=1}^n x_i.\] Consequently, the above inequality together with $y_1\leq \sum_{i=1}^n x_i$ implies that there exists an integer $t$, $1\leq t<m$, such that \begin{equation}\label{eq:t} \sum_{j=1}^t y_j\leq \sum_{i=1}^n x_i \mbox{ and } \sum_{j=1}^{t+1} y_j>\sum_{i=1}^n x_i. \end{equation} Then it follows from Lemma~\ref{lem:2} that there exist integers $z_{ij}\geq0 $, $1\leq i\leq n$ and $1\leq j\leq t+1$, such that \[\sum_{i=1}^n z_{ij}=y_j\mbox{ for $1\leq j\leq t$, }\; z_{i,t+1}=x_i-\sum_{j=1}^t z_{ij}\geq0, \mbox{ and } y_{t+1}>\sum_{i=1}^n z_{i,t+1}.\] Thus, we can perform a $\prod_{i=1}^{n}\prod_{j=1}^{t+1}(a_i,b_j)^{z_{ij}}$-derivation from $\{A,B\}$ to obtain (by Lemma~\ref{lem:1}) the $k$-irreducible pair $\{C,D\}$, where \begin{multline*} C=\{z_{11}\cdot (a_1-b_1),\ldots,z_{1,t+1}\cdot (a_1-b_{t+1}),\ldots,\\ z_{i1}\cdot (a_i-b_1),\ldots,z_{i,t+1}\cdot (a_i-b_{t+1}),\ldots,\\ z_{n1}\cdot (a_n-b_1),\ldots,z_{n,t+1}\cdot (a_n-b_{t+1})\}, \end{multline*} and \[D=\Big\{\big(y_{t+1}-\sum_{i=1}^n z_{i,t+1}\big)\cdot b_{t+1},y_{t+2}\cdot b_{t+2},\ldots,y_{m}\cdot b_m\Big\}.\] Since $a_1>\ldots>a_n>b_1>\ldots>b_m$, it follows that \[\max(C)\leq \max(A)-\min\limits_{1\leq j\leq t+1}b_j=a_1-b_{t+1} \mbox{ and } \max(D)=b_{t+1}.\] Thus, $r'=\max(C)+\max(D) \leq (a_1-b_{t+1})+b_{t+1} <r$ and it follows from the induction hypothesis that \begin{equation}\label{eq1:c1.2} |C|=\sum_{j=1}^t \sum_{i=1}^n z_{ij}+\sum_{i=1}^n z_{i,t+1}=\sum_{j=1}^{t}y_j+\sum_{i=1}^n z_{i,t+1}\leq \max(D), \end{equation} and \begin{equation}\label{eq2:c1.2} |D|=(y_{t+1}-\sum_{i=1}^n z_{i,t+1})+\sum_{j=t+2}^m y_j\leq \max(C). \end{equation} From~\eqref{eq1:c1.2} and~\eqref{eq2:c1.2}, we obtain \begin{eqnarray*}\label{eq:7} |B|=\sum_{j=1}^m y_j=|C|+|D|\leq \max(C)+\max(D)\leq a_1-b_{t+1}+b_{t+1}=a_1, \end{eqnarray*} as required. \bs\n{\bf Case 2:} $b_1>a_n$. Let $s$ be the smallest index such that $b_1>a_s$. Since $a_1>b_1>a_n$, the integer $s$ exists and $2\leq s\leq n$. We consider the following two subcases. \bs\n{\bf Case 2.1:} $y_1\leq \sum_{i=s}^n x_i$. Since $y_1\leq \sum_{i=s}^n x_i$, there exist integers $z_i\geq 0$, $s\leq i\leq n$ such that $x_i\geq z_i$, and $y_1=\sum_{i=s}^nz_i$.
We can perform a $\prod_{i=s}^n (a_i,b_1)^{z_i}$-derivation from $\{A,B\}$ to obtain (by Lemma~\ref{lem:1}) the $k$-irreducible pair $\{C,D\}$, where \[C=\{x_1\cdot a_1,\ldots,x_{s-1}\cdot a_{s-1},(x_s-z_s)\cdot a_s,\ldots,(x_n-z_n)\cdot a_n\},\] and \[ D=\{z_s\cdot (b_1-a_s),\ldots,z_n\cdot (b_1-a_n),y_2\cdot b_2,\ldots,y_m\cdot b_m\}.\] Since $r'=\max(C)+\max(D)\leq a_1+\max\{b_1-a_n,b_2\}<r$, it follows from the induction hypothesis that \begin{equation}\label{eq:2.1} |D|=\sum_{i=s}^nz_i+\sum_{j=2}^m y_j=y_1+\sum_{j=2}^m y_j=\sum_{j=1}^m y_j\leq \max(C)=a_1. \end{equation} Thus, it follows from~\eqref{eq:2.1} that $|B|=\sum_{j=1}^m y_j=|D|\leq a_1$ as required. \bs\n{\bf Case 2.2:} $y_1>\sum_{i=s}^n x_i$. Since $y_1>\sum_{i=s}^n x_i$, we can perform a $\prod_{i=s}^n (a_i,b_1)^{x_i}$-derivation from $\{A,B\}$ to obtain (by Lemma~\ref{lem:1}) the $k$-irreducible pair $\{A',B'\}$, where \[A'=\{x_1\cdot a_1,x_2\cdot a_2,\ldots,x_{s-1}\cdot a_{s-1}\},\] and \[B'=\Big\{(y_1-\sum_{i=s}^n x_i)\cdot b_1,x_s\cdot (b_1-a_s),\ldots,x_n\cdot (b_1-a_n), y_2\cdot b_2,\ldots,y_m\cdot b_m\Big\}.\] Note that $\max(B')=b_1$. We can now rename the distinct elements of the multiset $B'$ as $b_1',\ldots,b'_{m'}$ such that $\max(B')=b_1'>\ldots>b'_{m'}=\min(B')$. Let $y'_j$ be the multiplicity of $b'_j$ for $1\leq j\leq m'$. We also let $a'_i=a_i$ for $1\leq i\leq s-1=n'$. Recall from Part I that $|A|\leq \max(B)=b_1$. Hence, \[|A'|=\sum_{i=1}^{s-1} x_i\leq \sum_{i=1}^n x_i=|A|\leq b_1.\] If $|B'|\leq |A'|$, then $|B|=\sum_{j=1}^m y_j=|B'|\leq |A'|\leq b_1<a_1$, and we are done. So, we may assume that $|B'|>|A'|$. Since $a'_{n'}=a_{s-1}>b_1= b'_1$ (owing to the definition of $s$ and the fact that $A\cap B=\emptyset$), it follows that \[ a'_1>\ldots>a'_{n'}>b'_1>\ldots>b'_{m'}.\] We can now proceed as in Part II (Case~1) to infer that \[|B'|\leq \max(A')\Longrightarrow |B|=\sum_{j=1}^m y_j=|B'|\leq \max(A')=a_1.\] This concludes the second part of the proof. \bs We conclude from Part I and Part II that \[|A|\leq \max(B)=b_1\quad \mbox{ and }\quad |B|\leq \max(A)=a_1.\] Moreover, these inequalities imply that \[\ell(A,B)=|A|+|B|\leq b_1+a_1\leq 2k-1,\] where the last inequality follows from the fact that $1\leq b_1<a_1\leq k$. Finally, since $\ell(k)\geq 2k-1$ (see the example in~\eqref{exp:lb} from Section~\ref{sec:intro}), it follows that $\ell(k)=2k-1$. \end{proof} We now prove the corollary. \begin{proof}[Proof of Corollary~\ref{cor:1}] Let $A=\{x_1\cdot a_1,x_2\cdot a_2,\ldots, x_n\cdot a_n\}$ and $B=\{y_1\cdot b_1,y_2\cdot b_2,\ldots, y_m\cdot b_m\}$ be multisets, where the $a_i$'s and $b_i$'s are all positive integers such that $1\leq a_i,b_j\leq k$ for $1\leq i\leq n$, $1\leq j\leq m$. Moreover, $x_i>0$ and $y_j>0$ are the multiplicities of $a_i$ and $b_j$ respectively. We also assume that the $a_i$'s (resp. $b_j$'s) are pairwise distinct. Without loss of generality, we may also assume that $A\cap B=\emptyset$ and $a_1>b_1$. Suppose that $\{A,B\}$ is a $k-$irreducible pair such that $\ell(A,B)=2k-1$. Then it follows from Theorem~\ref{thm:1} (and the above setup) that \begin{equation}\label{eq:cor0} |A|=\max(B)=b_1=k-1 \mbox{ and } |B|=\max(A)=a_1=k. \end{equation} For a proof by contradiction, assume that the pair $\{A,B\}$ is different from the pair $\left\{\{k\cdot(k-1)\},\{(k-1)\cdot k\}\right\}$. We consider two cases. \bs \n{\bf Case 1:} $x_1\geq y_1$.
We perform an $(a_1,b_1)^{y_1}$-derivation from $\{A,B\}$ to obtain (by Lemma \ref{lem:1}) the $k$-irreducible pair $\{C,D\}$, where \[C=\{(x_1-y_1)\cdot a_1,y_1\cdot (a_1-b_1),x_2\cdot a_2,\ldots,x_n\cdot a_n\}\] and $D=\{y_2\cdot b_2,\ldots,y_m\cdot b_m\}$. Since $a_1>b_1$, $y_1>0$, and $\sum A=\sum B$, we have $m>1$, so that $b_2\in D$. Hence, $C$ and $D$ are both nonempty. We now use Theorem~\ref{thm:1} on the irreducible pair $\{C,D\}$ to infer that \begin{equation}\label{eq:cor1} |C|=(x_1-y_1)+y_1+\sum_{i=2}^n x_i=\sum_{i=1}^n x_i\leq \max(D)= b_2. \end{equation} It follows from~\eqref{eq:cor1} that $|A|=\sum_{i=1}^n x_i=|C|\leq b_2<b_1=k-1$. This contradicts the fact that $|A|=b_1=k-1$ (see~\eqref{eq:cor0}). \bs \n{\bf Case 2:} $y_1>x_1$. We perform an $(a_1,b_1)^{x_1}$-derivation from $\{A,B\}$ to obtain (by Lemma \ref{lem:1}) the $k$-irreducible pair $\{C,D\}$, where \[C=\{x_1\cdot (a_1-b_1),x_2\cdot a_2,\ldots,x_n\cdot a_n\} =\{x_1\cdot 1,x_2\cdot a_2,\ldots,x_n\cdot a_n\},\] and $D=\{ (y_1-x_1)\cdot b_1,y_2\cdot b_2,\ldots,y_m\cdot b_m\}$. If $n=1$, then $x_1=|A|=k-1$. So $y_1>x_1$ and $\ell(A,B)=2k-1$ imply $y_1=k$, contradicting that $\{A,B\}$ is different from $\left\{\{k\cdot(k-1)\},\{(k-1)\cdot k\}\right\}$. Thus we may assume that $n\geq 2$, that is, $a_2\in C$. Since $a_2\not=b_1=k-1$, we must have $z=b_1-a_2>0$. If $z<x_1$ also holds, then $C'=\{a_2,z\cdot 1\}\subset C$ and $D'=\{b_1\}\subset D$ form a witness for the reducibility of $\{C,D\}$, which is a contradiction. Thus, we must have $b_1-a_2\geq x_1$. We now use Theorem~\ref{thm:1} on the irreducible pair $\{C,D\}$ to infer that \begin{equation}\label{eq:cor2} |D|=(y_1-x_1)+\sum_{j=2}^m y_j=-x_1+\sum_{j=1}^m y_j\leq \max(C)=a_2\leq b_1-x_1. \end{equation} It follows from~\eqref{eq:cor2} that $|B|=\sum_{j=1}^m y_j=x_1+|D|\leq b_1=k-1$. This contradicts the fact that $|B|=a_1=k$ (see~\eqref{eq:cor0}). \end{proof} \section{Concluding Remarks}\label{sec:conc} One may wonder if our results can be extended to other infinite abelian groups. For instance, consider irreducible pairs $\{A,B\}$, where $A$ and $B$ are multisets of rational numbers. Are there suitable (and general enough) conditions on the elements of $\{A,B\}$ that will guarantee that $\ell(A,B)$ is finite? Finally, we remark that Theorem~\ref{thm:1} can be used to bound the number of {\em $\lambda$-fold vector space partitions} (e.g., see~\cite{ESSSV}). We shall address this application in a subsequent paper. \bs\n{\bf Acknowledgement:}\ The authors thank G. Seelinger, L. Spence, and C. Vanden Eynden for providing useful suggestions that led to an improved version of this paper. \bs\n \hrule \end{document}
math
\begin{document} \date{} \title{Doubling Algorithm for The Discretized Bethe-Salpeter Eigenvalue Problem} \begin{abstract} The discretized Bethe-Salpeter eigenvalue problem arises in the Green's function evaluation in many-body physics and quantum chemistry. Discretization leads to a matrix eigenvalue problem for $H \in \mathbb{C}^{2n \times 2n}$ with a Hamiltonian-like structure. After an appropriate transformation of $H$ to a standard symplectic form, the structure-preserving doubling algorithm, originally for algebraic Riccati equations, is extended for the discretized Bethe-Salpeter eigenvalue problem. Potential breakdowns of the algorithm, due to the ill-conditioning or singularity of certain matrices, can be avoided with a double-Cayley transform or a three-recursion remedy. A detailed convergence analysis is conducted for the proposed algorithm, especially on the benign effects of the double-Cayley transform. Numerical results are presented to demonstrate the efficiency and structure-preserving nature of the algorithm. \end{abstract} \begin{keywords} Bethe-Salpeter eigenvalue problem, Cayley transform, doubling algorithm \end{keywords} \begin{AMS} 15A18, 65F15 \end{AMS} \section{Introduction} The Bethe-Salpeter equation (BSE)~\cite{sb51} arises in the Green's function evaluation in many-body physics, which is the state-of-the-art model to describe electronic excitation and molecule absorption~\cite{c95,kbdbab14,Leng:16,orr02,phssm13,prg13,rhl12,roro02,rts13a,rts13b,rl00,rg84,ssf98,w90}. In the quantum chemistry and materials science communities, the optical absorption spectrum of the BSE is an important and powerful tool for the characterization of different materials. In particular, the comparison of the computed and measured spectra helps to interpret experimental data and validate corresponding theories and models. It is generally known that good agreement between the theory and the experimental data can only be achieved by taking into account the interacting electron-hole pairs or {\em excitons}. This is the case for the BSE, which is derived from the coupling of the electrons and their corresponding holes. After discretization, the BSE becomes the Bethe-Salpeter eigenvalue problem (BS-EVP): \begin{equation} \label{bsevp} Hx\equiv \begin{bmatrix} \ \ A & \ \ B \\ \\ -\overline{B} & -\overline{A} \end{bmatrix}x =\lambda x, \end{equation} for $x \neq 0$, where $A, B \in\mathbb{C}^{n\times n}$ satisfy $A^{\HH}=A,\ B^{\T}=B$. Here $(\cdot)^{\HH}$ and $(\cdot)^{\T}$ denote the conjugate transpose and the transpose of matrices, respectively. It can be shown~\cite{bfy15} that the eigenvalues $\lambda$ come in quadruplets $\{\pm \lambda, \pm \overline{\lambda}\}$ (except for the degenerate cases when $\lambda$ is purely real or imaginary, or zero). Further details on the BS-EVP can be found in~\cite{peter:16,peter:15,yang:16} and the references therein. In principle, all possible excitation energies and absorption spectra are sought although some excitations are more probable than others. The associated likelihood is measured by the spectral density or the density of states of $H$, defined as the number of eigenvalues per unit energy interval: \[ \phi(\omega) = \frac{1}{2n} \sum_{j=1}^{2n} \delta(\omega - \lambda_j), \] where $\delta$ is the Dirac-delta function and $\lambda_j \in \lambda(H)$, the spectrum of $H$.
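To make these quantities concrete, the following Python/NumPy sketch (illustrative only: the random blocks, the problem size, and the use of a histogram of the real parts of the eigenvalues in place of the Dirac delta are our own choices) assembles a matrix $H$ with the structure in \eqref{bsevp} and produces a crude estimate of $\phi(\omega)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2                      # A Hermitian
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B + B.T) / 2                             # B complex symmetric

H = np.block([[A, B], [-B.conj(), -A.conj()]])
lam = np.linalg.eigvals(H)

# Crude density-of-states estimate: bin the real parts of the eigenvalues.
phi, edges = np.histogram(lam.real, bins=40, density=True)
print(phi.sum() * np.diff(edges)[0])          # the estimate integrates to 1
\end{verbatim}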
Also of interest is the optical absorption spectrum: \[ \varepsilon^+(\omega) = \sum_{j=1}^n \frac{(d_r^{\HH} x_j)(y_j^{\HH} d_l)}{y_j^{\HH} x_j} \delta (\omega - \lambda_j), \] where $x_j$ and $y_j$ are, respectively, the right- and left-eigenvectors corresponding to $\lambda_j>0$, and $d_r$ and $d_l$ are the dipole vectors. Evidently, to estimate these quantities, we require {\it all} the eigenvalues $\lambda_j$ and the associated eigenvectors $x_j$ and $y_j$. To complicate computations further, $A$ and $B$ are often high-dimensional (for systems with many occupied and unoccupied states) and generally dense. In spite of the significance of the BS-EVP \eqref{bsevp}, only a few publications exist on its numerical solution, all under {\it additional} assumptions. Some remarkable discoveries have been made in~\cite{peter:16,peter:15,yang:16} under the condition that $\Gamma H$ is positive definite with $\Gamma =\diag(I_{n},\, -I_{n})$. Few general and efficient methods have been proposed to solve the BS-EVP \eqref{bsevp}. All methods proposed in~\cite{peter:16,peter:15,yang:16} are designed for the linear response eigenvalue problem, under the extra assumptions that $A, B\in\mathbb{R}^{n\times n}$ and $A \pm B$ are symmetric positive definite. Low-rank or tensor approximations~\cite{peter:16,peter:15} have been applied to handle the high computational demand but these techniques require additional structures on $H$. Based on the equivalence of the BS-EVP and a real Hamiltonian eigenvalue problem, Shao et al.~\cite{yang:16} put forward an efficient parallel approach to compute the eigenpairs corresponding to all the positive eigenvalues. Remarkable contributions have also been made for the numerical solution of the related linear response eigenvalue problem~\cite{bl12,bl13}. \subsection*{Contributions} We solve the {\it general} BS-EVP \eqref{bsevp}, without assuming that $\Gamma H$ is positive definite. We propose a doubling algorithm (DA) for the BS-EVP in two recursions. To deal with potential breakdowns, we design the double-Cayley transform (DCT) and a three-recursion remedy. The DCT reverses at worst two steps of the DA when some eigenvalues are complex, and none at all when all eigenvalues are real. On the rare occasions when the DCT fails, the more expensive three-recursion remedy can be applied, without changing the convergence radius. Our DA preserves the special structure of the eigen-pairs. \subsection*{Organization} Some preliminaries are presented in Section~2 and our method is developed in Section~3. We present some illustrative numerical results in Section~4 before the conclusions in Section~5. The Appendix contains two technical lemmas. \section{Preliminaries} We denote the column space, the null space, the spectrum and the set of singular values by $\mathcal{R}(\cdot)$, $\mathcal{N}(\cdot)$, $\lambda(\cdot)$ and $\sigma (\cdot)$ respectively. By $M\oplus N$ or $\diag(M,N)$, we denote $\begin{bmatrix} M & \mathbf 0\\ \mathbf 0&N \end{bmatrix}$. Similarly, we define $\bigoplus_{j} M_j$. The MATLAB expression $M(k:l, s:t)$ denotes the submatrix of $M$ containing elements in rows $k$ to $l$ and columns $s$ to $t$. Also, the $i$th column of the identity matrix $I$ is $e_i$ and \[ J \equiv \begin{bmatrix}&I_n\\ \\-I_n&\end{bmatrix}, \qquad \Gamma \equiv \begin{bmatrix}I_n&\\ \\&-I_n\end{bmatrix}, \qquad \Pi \equiv \begin{bmatrix} &I_n\\ \\I_n& \end{bmatrix}.
\] \begin{definition} The matrix pair $(M, \, L)$ with $M, L\in\mathbb{C}^{2n\times 2n}$ is a symplectic pair if and only if $MJM^{\T}=LJL^{\T}$. \end{definition} \begin{definition} The matrix pair $(M,\, L)$ is in the first standard symplectic form (SSF-1) if and only if \[ M=\begin{bmatrix}E&\mathbf 0\\ \\ F&I_n\end{bmatrix}, \qquad L=\begin{bmatrix}I_n&K\\ \\ \mathbf 0&E^{\T}\end{bmatrix}, \] with $E, F\equiv F^{\T}, K\equiv K^{\T}\in\mathbb{C}^{n\times n}$. \end{definition} \begin{definition} Let $M, L\in\mathbb{C}^{2n\times 2n}$ and denote $\mathcal{N}(M, L)\equiv$ \[ \left\{[M_{*},\, L_{*}]: M_{*}, L_{*}\in\mathbb{C}^{2n\times 2n}, \ \rank([M_{*},\, L_{*}])=2n, \ [M_{*},\, L_{*}][L^{\T},\, -M^{\T}]^{\T}=0 \right\}, \] which is nonempty. The action $(M, \, L) \longrightarrow (\widetilde{M}, \, \widetilde{L}) = (M_{*}M, \, L_{*}L)$ is called a {\it doubling transformation} of $(M, L)$ for some $[M_{*},\, L_{*}] \in\mathcal{N}(M, L)$. \end{definition} Next we consider the properties of the doubling transformation. \begin{lemma}(\cite[Theorem~2.1]{lx06})\label{lemme-doubling} Let $(\widetilde{M},\, \widetilde{L})$ be the result of a doubling transformation of $(M,\, L)$, where $M, L$, $\widetilde{M}, \widetilde{L}\in\mathbb{C}^{2n\times 2n}$. Then we have \begin{enumerate} \item [{\em (1)}] $(\widetilde{M}, \, \widetilde{L})$ is a symplectic pair provided that $(M,\, L)$ is one; and \item [{\em (2)}] if $MU=LUR$ and $MVS=LV$ for some $U, V\in\mathbb{C}^{2n\times l }$ and $R, S\in\mathbb{C}^{ l \times l }$, then $\widetilde{M}U=\widetilde{L}UR^2$ and $\widetilde{M}VS^2=\widetilde{L}V$. \end{enumerate} \end{lemma} In other words, doubling transformations preserve symplecticity and deflating subspaces of matrix pairs, while squaring the associated eigenvalues. \begin{lemma}\label{lemma-H-Hermitian} It holds that $H\Pi =-\Pi {\overline{H}}$ and $\Gamma H \Gamma=H^{\HH}$. \end{lemma} \begin{proof} It can be verified directly. \end{proof} \begin{lemma}\label{lemma-eigenpairs} Assume that $HZ=ZS$ with $Z\in\mathbb{C}^{2n\times l }$ and $S\in\mathbb{C}^{ l \times l }$, then we have $H (\Pi\overline{Z})=(\Pi\overline{Z})(-\overline{S})$ and $(Z^{\HH}\Gamma) H=S^{\HH}(Z^{\HH}\Gamma)$. \end{lemma} \begin{proof} The results directly follow from Lemma~\ref{lemma-H-Hermitian}. \end{proof} If $S$ in Lemma~\ref{lemma-eigenpairs} possesses the spectrum $\lambda(S)=\{\lambda, \ldots, \lambda\}$ (repeated $l$ times), Lemmas~\ref{lemma-H-Hermitian} and~\ref{lemma-eigenpairs} imply that $-\lambda$, $\overline{\lambda}$ and $-\overline{\lambda}$ are also the eigenvalues of $H$ with the same algebraic and geometric multiplicities. Provided that $HX_j=X_jS_j$ with $X_j\in\mathbb{C}^{2n\times l_{j}}$ and $S_j\in\mathbb{C}^{l_j\times l_j}$ for $j=1, 2$, Lemma~\ref{lemma-eigenpairs} further implies that $(X_2^{\HH}\Gamma X_1) S_1 = X_2^{\HH}\Gamma H X_1=S_2^{\HH}(X_2^{\HH}\Gamma X_1)$ and $(X_2^{\T}\Pi \Gamma) X_1S_1 = (X_2^{\T}\Pi \Gamma)HX_1=(-S_2^{\T})(X_2^{\T}\Pi \Gamma X_1)$, or equivalently \[ (X_2^{\HH}\Gamma X_1) S_1-S_2^{\HH}(X_2^{\HH}\Gamma X_1)=0= (X_2^{\T}\Pi \Gamma X_1) S_1 + S_2^{\T}(X_2^{\T}\Pi \Gamma X_1). \] Evidently, when $\lambda(S_1)\cap\lambda(\overline S_2)=\emptyset$, we have $X_2^{\HH}\Gamma X_1=0$; when $\lambda(S_1)\cap\lambda(-S_2)=\emptyset$, we have $X_2^{\T}\Pi \Gamma X_1=0$. By Lemmas~\ref{lemma-H-Hermitian} and~\ref{lemma-eigenpairs}, we can then deduce the eigen-decomposition result of $H$ for the convergence proof.
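The identities in Lemma~\ref{lemma-H-Hermitian}, and the resulting quadruplet symmetry of the spectrum, are easy to confirm numerically. The following Python/NumPy sketch (illustrative only; the random test matrix and tolerances are our own choices) checks $H\Pi=-\Pi\overline{H}$, $\Gamma H\Gamma=H^{\HH}$ and the closure of $\lambda(H)$ under $\lambda\mapsto-\lambda$ and $\lambda\mapsto\overline{\lambda}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B + B.T) / 2
H = np.block([[A, B], [-B.conj(), -A.conj()]])

I = np.eye(n)
Gam = np.block([[I, 0 * I], [0 * I, -I]])
Pi = np.block([[0 * I, I], [I, 0 * I]])

print(np.allclose(H @ Pi, -Pi @ H.conj()))        # H*Pi = -Pi*conj(H)
print(np.allclose(Gam @ H @ Gam, H.conj().T))     # Gam*H*Gam = H^H

lam = np.linalg.eigvals(H)
quad = all(np.min(np.abs(lam + l)) < 1e-8 and
           np.min(np.abs(lam - l.conj())) < 1e-8 for l in lam)
print(quad)                                       # quadruplet symmetry
\end{verbatim}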
Temporarily assume that there are no purely imaginary or zero eigenvalues of $H$, $\lambda_j\neq\lambda_k$ for $j\neq k$ and \begin{align*} \lambda(H)=&\{\underbrace{\lambda_1, \ldots, \lambda_1}_{l_1}, \underbrace{\overline{\lambda}_1, \ldots, \overline{\lambda}_1}_{l_1}, \underbrace{-\overline{\lambda}_1, \ldots, -\overline{\lambda}_1}_{l_1}, \underbrace{-\lambda_1, \ldots, -\lambda_1}_{l_1}, \ldots, \\ &\underbrace{\lambda_s, \ldots, \lambda_s}_{l_s}, \underbrace{\overline{\lambda}_s, \ldots, \overline{\lambda}_s}_{l_s}, \underbrace{-\overline{\lambda}_s, \ldots, -\overline{\lambda}_s}_{l_s}, \underbrace{-\lambda_s, \ldots, -\lambda_s}_{l_s}, \\ &\underbrace{\lambda_{s+1}, \ldots, \lambda_{s+1}}_{l_{s+1}}, \underbrace{-\lambda_{s+1}, \ldots, -\lambda_{s+1}}_{l_{s+1}}, \ldots, \underbrace{\lambda_{t}, \ldots, \lambda_{t}}_{l_{t}}, \underbrace{-\lambda_{t}, \ldots, -\lambda_{t}}_{l_{t}}\}, \end{align*} where $\lambda_j\in\mathbb{C}$ with (i) $\Re(\lambda_{j})\Im(\lambda_j)\neq0$ and $\Re(\lambda_j)<0$ for $j=1, \ldots, s$, and (ii) $\Im(\lambda_j)=0$ and $\lambda_{j}<0$ for $j=s+1, \ldots, t$. Subsequently, we have the following result. \begin{lemma}\label{lemma-decomp} Suppose that no purely imaginary or zero eigenvalues exist for $H$. Then there exist \begin{align*} X&= [X_1, Y_1, \cdots, X_s, Y_s;\, X_{s+1}, \cdots, X_{t}] \in\mathbb{C}^{2n\times n}, \\ S&=\diag(S_1, R_1, \ldots, S_s, R_s;\, S_{s+1}, \ldots, S_{t})\in\mathbb{C}^{n\times n} \end{align*} with $X_j\in\mathbb{C}^{2n\times l_j}$, $S_{j}\in\mathbb{C}^{l_j\times l_j}$, $\lambda(S_{j})=\{\lambda_j, \ldots, \lambda_j\}$ ($j=1, \ldots, t$), $Y_{j}\in\mathbb{C}^{2n\times l_j}$, $R_{j}\in\mathbb{C}^{l_j\times l_j}$ and $\lambda(R_{j})=\{\overline{\lambda}_j, \ldots, \overline{\lambda}_j\}$ ($j=1, \ldots, s$), such that \[ H [X,\, \Pi\overline{X} ] = [X,\, \Pi\overline{X}] \diag(S,\, -\overline{S}), \ \ \ [X,\, \Pi\overline{X}]^{\HH}\Gamma [X,\, \Pi\overline{X}]=\diag(D,\, -\overline{D}), \] where $D=\diag(D_1, \ldots, D_{s};\, D_{s+1}, \ldots, D_{t})$, \begin{align*} & D_j=\begin{bmatrix}\mathbf 0&X_j^{\HH}\Gamma Y_j\\ Y_j^{\HH}\Gamma X_j&\mathbf 0 \end{bmatrix}\in\mathbb{C}^{2l_j\times 2l_j} \ \ \ (j=1, \ldots, s), \\ & D_j=X_j^{\HH}\Gamma X_j\in\mathbb{C}^{l_j\times l_j} \ \ \ (j=s+1, \ldots, t). \end{align*} \end{lemma} Obviously, $D$ (in Lemma~\ref{lemma-decomp}) is a nonsingular Hermitian matrix. Consequently, we can choose $X$ which satisfies $[X,\, \Pi\overline{X}]^{\HH}\Gamma [X,\, \Pi\overline{X}] =\Gamma$. This leads to $X^{\HH} \Gamma X = I_n$ and $X^{\T}\Gamma \Pi X=0$, implying that $X(1:n,1:n)\in\mathbb{C}^{n\times n}$ is nonsingular with singular values no less than unity and $X(1:n,1:n)^{\T} X(n+1:2n,1:n)$ is complex symmetric. Next consider the case when there exist some purely imaginary eigenvalues for $H$. We further assume that the partial multiplicities (the sizes of the Jordan blocks) of $H$ associated with the purely imaginary eigenvalues are all even. Let $\mathrm{i}\omega_1, \cdots, \mathrm{i}\omega_q$ be the different purely imaginary eigenvalues with Jordan blocks $J_{2p_{r,j}}(\mathrm{i}\omega_j)\in \mathbb{C}^{2p_{r,j} \times 2p_{r,j}}$ for $r=1, \cdots, l_j$ and $j=1, \cdots, q$.
Then there exist $W_{r,j}, Z_{r,j}\in \mathbb{C}^{2n\times p_{r,j}}$ such that \begin{align*} & H \left[ W_{1,1}, Z_{1,1}; \cdots; W_{l_1,1}, Z_{l_1,1} \,\vrule \, \cdots \, \vrule \, W_{1,q}, Z_{1,q}; \cdots; W_{l_q,q}, Z_{l_q,q} \right] \\ =& \begin{multlined}[t] \left[ W_{1,1}, Z_{1,1}; \cdots; W_{l_1,1}, Z_{l_1,1} \,\vrule \, \cdots \, \vrule \, W_{1,q}, Z_{1,q}; \cdots; W_{l_q,q}, Z_{l_q,q} \right] \cdot\left[ \bigoplus_{j=1}^q\bigoplus_{r=1}^{l_j} J_{2p_{r,j}}(\mathrm{i}\omega_j)\right]. \end{multlined} \end{align*} With $X\in \mathbb{C}^{2n\times n_1}$ and $S\in \mathbb{C}^{n_1\times n_1}$ and by Lemma~\ref{lemma-decomp}, we obtain \begin{align}\label{eq:eigen_decomp} H\left[ X, W_{\omega}, \Pi \overline{X}, Z_{\omega} \right] = \left[ X, W_{\omega}, \Pi \overline{X}, Z_{\omega} \right] \widetilde{S}, \end{align} where $n_1+\sum_{j=1}^{q}\sum_{r=1}^{l_j}p_{r,j}=n$, and \begin{align*} W_{\omega} &= \left[ W_{1,1}, \cdots, W_{l_1,1}; \cdots; W_{1,q}, \cdots, W_{l_q,q} \right], \\ Z_{\omega} &= \left[ Z_{1,1}, \cdots, Z_{l_1,1}; \cdots; Z_{1,q}, \cdots, Z_{l_q,q} \right], \\ J_{\omega}&= \bigoplus_{j=1}^q\bigoplus_{r=1}^{l_j} J_{p_{r,j}}(\mathrm{i}\omega_j), \qquad \qquad \Omega_{\omega}= \bigoplus_{j=1}^q\bigoplus_{r=1}^{l_j}e_{p_{r,j}}e_1^{\T}, \\ J_{2p_{r,j}}(\mathrm{i}\omega_j) & \equiv \begin{bmatrix} J_{p_{r,j}}(\mathrm{i}\omega_j) & e_{p_{r,j}}e_1^{\T} \\ 0& J_{p_{r,j}}(\mathrm{i}\omega_j) \end{bmatrix}, \ \ \ \widetilde{S} \equiv \begin{bmatrix} S & & &\\ & J_{\omega} && \Omega_{\omega}\\ \\ & & -\overline{S}&\\ & & & J_{\omega} \end{bmatrix}. \end{align*} \section{Doubling Algorithm}\label{doublesec} We now generalize the structure-preserving doubling algorithm (SDA) in~\cite{cfl,cflw,lcjl1,lcjl2} to the DA for the BS-EVP. \subsection{Initial Symplectic Pencil} We transform $H$ to a symplectic pair $(M,\, L)$ in the SSF-1 {\it \`a la} Cayley. \begin{lemma} For $\alpha\in\mathbb{R}$, the matrix pair $(H+\alpha I_{2n}, \, H-\alpha I_{2n})$ is symplectic. \end{lemma} \begin{proof} The result can be deduced from $(HJ)^{\T}=HJ$. \end{proof} \begin{theorem} \label{theorem-SSF-1} Select $\alpha\in\mathbb{R}$ such that both $\alpha I_n-A$ and $R \equiv I_n-(\alpha I_n-\overline{A})^{-1}\overline{B}(\alpha I_n-A)^{-1}B$ are nonsingular. There exists a nonsingular matrix $G\in\mathbb{C}^{2n\times 2n}$ such that $[G(H+\alpha I_{2n}),\, G(H-\alpha I_{2n})]$ is a symplectic pair in SSF-1, with \begin{equation} \label{MLalpha} M_{\alpha}\triangleq G(H+\alpha I_{2n})=\begin{bmatrix}E_{\alpha}&\mathbf 0\\ \\F_{\alpha}&I_n\end{bmatrix}, \, \, \, L_{\alpha}\triangleq G(H-\alpha I_{2n})=\begin{bmatrix}I_n&\overline{F}_{\alpha}\\ \\ \mathbf 0&\overline{E}_{\alpha}\end{bmatrix}, \end{equation} where $E_{\alpha}, F_{\alpha}\in\mathbb{C}^{n \times n}$ satisfy $E_{\alpha}^{\HH}=E_{\alpha}$ and $F_{\alpha}^{\T}=F_{\alpha}$. \end{theorem} \begin{proof} Let $H_{\scriptscriptstyle\pm} \equiv H \pm \alpha I_{2n}$, $A_{\scriptscriptstyle\pm} \equiv A \pm \alpha I_n$, \begin{align*} G_1=\begin{bmatrix} A_{\scriptscriptstyle -}^{-1} & \mathbf 0\\ &\\ \overline{B} A_{\scriptscriptstyle -}^{-1} & I_n\end{bmatrix}, & \ \ \ G_2=\begin{bmatrix}I_n& A_{\scriptscriptstyle -}^{-1}BR^{-1} \overline{A}_{\scriptscriptstyle -}^{-1}\\ &\\ \mathbf 0 & -R^{-1} \overline{A}_{\scriptscriptstyle -}^{-1}\end{bmatrix}, \end{align*} and $G=G_2G_1$.
We obtain \begin{align*} & G_1H_{\scriptscriptstyle +} =\begin{bmatrix} A_{\scriptscriptstyle -}^{-1}A_{\scriptscriptstyle +} & A_{\scriptscriptstyle -}^{-1}B\\ \\ 2\alpha\overline{B} A_{\scriptscriptstyle -}^{-1} & -\overline{A}_{\scriptscriptstyle -} R \end{bmatrix}, \ \ \ G_2G_1H_{\scriptscriptstyle +} =\begin{bmatrix}E_{\alpha} & \mathbf 0\\ \\ F_{\alpha}& I_n\end{bmatrix}, \\ & G_1H_{\scriptscriptstyle -} =\begin{bmatrix}I_n & A_{\scriptscriptstyle -}^{-1}B\\ \\\mathbf 0& -\overline{A}_{\scriptscriptstyle -} R-2\alpha I_n\end{bmatrix}, \ \ G_2G_1H_{\scriptscriptstyle -} =\begin{bmatrix}I_n & \overline{F}_{\alpha} \\ \\ \mathbf 0& \overline{E}_{\alpha}\end{bmatrix}, \end{align*} with \begin{equation}\label{EFalpha} E_{\alpha}=I_n + 2\alpha \overline{R}^{-1} A_{\scriptscriptstyle -}^{-1}, \ \ \ F_{\alpha}=-2\alpha \overline{A}_{\scriptscriptstyle -}^{-1}\overline{B}\, \overline{R}^{-1} A_{\scriptscriptstyle -}^{-1}. \end{equation} Furthermore, since $A^{\HH}=A$ and $B^{\T}=B$, we have \begin{align*} E_{\alpha}^{\HH} &=I_{n} + 2\alpha A_{\scriptscriptstyle -}^{-1}R^{-\T}= I_n+2\alpha ( A_{\scriptscriptstyle -}-B \overline{A}_{\scriptscriptstyle -}^{-1}\overline{B} )^{-1}=E_{\alpha}, \\ F_{\alpha}^{\T} &=-2\alpha \overline{A}_{\scriptscriptstyle -}^{-1} ( I_n-\overline{B} A_{\scriptscriptstyle -}^{-1}B \overline{A}_{\scriptscriptstyle -}^{-1} )^{-1}\overline{B} A_{\scriptscriptstyle -}^{-1}=F_{\alpha}, \end{align*} i.e., $E_{\alpha}$ and $F_{\alpha}$ are Hermitian and complex symmetric, respectively. Lastly, we have \[ (G H_{\scriptscriptstyle\pm}) J (G H_{\scriptscriptstyle\pm})^{\T}=\begin{bmatrix}&E_{\alpha}\\ \\-\overline{E}_{\alpha}\end{bmatrix}, \] implying that $[G(H+\alpha I_{2n}), \, G(H-\alpha I_{2n})]$ is a symplectic pair in SSF-1. \end{proof} The following lemma summarizes the eigen-structure of $(M_\alpha, \, L_\alpha)$ in relation to that of $H$; we omit the simple proof. \begin{lemma}\label{eigenML0} Let \begin{align}\label{eigenH} H [X_1^{\T},\, X_2^{\T}]^{\T}=[X_1^{\T},\, X_2^{\T}]^{\T}S \end{align} for some $X_1, X_2\in\mathbb{C}^{n\times l}$, $S\in\mathbb{C}^{l\times l}$ and $\alpha \notin \lambda(H)$. Then we have \[ M_{\alpha}[X_1^{\T},\, X_2^{\T}]^{\T} =L_{\alpha}[X_1^{\T},\, X_2^{\T}]^{\T}S_{\alpha}, \] with $S_{\alpha}\equiv (S-\alpha I_l)^{-1}(S+\alpha I_l)$, where $S_{\alpha}- \alpha I_l$ is nonsingular. \end{lemma} Intrinsically, the DA proposed below requires both $E_{\alpha}$ and $I_n-F_{\alpha}\overline{F}_{\alpha}$ to be nonsingular. Lemma~\ref{l34} and Theorems~\ref{lemma-E0} and~\ref{lemma-F0} below indicate that a small $\alpha$ could achieve such a goal. Moreover, for $\lambda\in\lambda(H)$, we have $(\lambda+\alpha)/(\lambda-\alpha) \in \lambda (S_{\alpha})$. For the efficiency of the DA, we desire a small $\left|(\lambda+\alpha)/(\lambda-\alpha)\right|$ for $\Re(\lambda) <0$. Hence when $|\alpha|>\rho(H)$ (the spectral radius of $H$), we desire $|\alpha|$ to be minimized. \begin{lemma} \label{l34} If $\alpha>\|H\|_F$, then $\alpha I_n-A$ is positive definite and $R\equiv I_n-(\alpha I_n-\overline{A})^{-1}\overline{B}(\alpha I_n-A)^{-1}B$ is nonsingular, with $\|R^{-1}\|_2\leq \left[ 1-\|(\alpha I_n-A)^{-1}\|_2^2\|B\|_2^2\right]^{-1}$. \end{lemma} \begin{proof} When $\|A\|_F<\|H\|_F<\alpha$, $\alpha I_n-A$ is positive definite Hermitian. Since $\alpha>\|H\|_F\geq\|A\|_F+\|B\|_F$, we have $(\alpha-\omega_1)^{-1} \leq (\alpha-\|A\|_F)^{-1} < \|B\|_F^{-1}$ with $\omega_1$ being the largest eigenvalue of $A$.
In addition, with $\|(\alpha I_n-A)^{-1}\|_2= (\alpha-\omega_1)^{-1}$, we have $\|(\alpha I_n-A)^{-1}B\|_2\leq\|(\alpha I_n-A)^{-1}\|_2\|B\|_2= (\alpha-\omega_1)^{-1} \|B\|_2<1$. This implies $\|(\alpha I_n-\overline{A})^{-1}\overline{B}(\alpha I_n-A)^{-1}B\|_2\leq \|(\alpha I_n-A)^{-1}\|_2^2\|B\|_2^2<1$ and hence our results. \end{proof} \begin{theorem}\label{lemma-E0} As defined in \eqref{EFalpha}, $E_{\alpha}$ is nonsingular when $\alpha>\|H\|_F$. \end{theorem} \begin{proof} Denote the largest and smallest eigenvalues of $A$ by $\omega_1$ and $\omega_n$, respectively. With $\alpha>\|H\|_F$, we have $\|\alpha I_n-A\|_2=\alpha-\omega_n$ and $\|(\alpha I_n-\overline{A})^{-1}\|_2= (\alpha-\omega_1)^{-1}$, yielding $\|(\alpha I_n-A)-B(\alpha I_n-\overline{A})^{-1}\overline{B}\|_2 \leq (\alpha-\omega_n)+ (\alpha-\omega_1)^{-1} \|B\|_2^2$. We also have \begin{align*} &(\alpha-\omega_n)(\alpha-\omega_1)+\|B\|_2^2-2\alpha(\alpha-\omega_1)\\ =&-\left(\alpha+\frac{\omega_n-\omega_1}{2}\right)^2+\frac{(\omega_1+\omega_n)^2}{4}+\|B\|_2^2 <-\left(\alpha+\frac{\omega_n-\omega_1}{2}\right)^2+\frac{\alpha^2-\|A\|_F^2}{2}, \end{align*} as $\alpha^2>\|H\|_F^2=2(\|B\|_F^2+\|A\|_F^2)$. From the fact that $2\|A\|_F^2\geq2(\omega_1^2+\omega_n^2)\geq(\omega_n-\omega_1)^2$, we obtain \[ \frac{\alpha^2-\|A\|_F^2}{2}-\left(\alpha+\frac{\omega_n-\omega_1}{2}\right)^2\leq -\frac{(\alpha+\omega_n-\omega_1)^2}{2}<0. \] This implies $(\alpha-\omega_n)(\alpha-\omega_1)+\|B\|_2^2-2\alpha(\alpha-\omega_1)<0$. We deduce $\|(\alpha I_n-A)-B(\alpha I_n-\overline{A})^{-1}\overline{B}\|_2<2\alpha$, thus $2\alpha \not\in \lambda \{(\alpha I_n-A)-B(\alpha I_n-\overline{A})^{-1}\overline{B} \}$. Therefore, $E_{\alpha}=I_n-2\alpha \left[ (\alpha I_n-A)-B(\alpha I_n-\overline{A})^{-1}\overline{B} \right]^{-1}$ is nonsingular. \end{proof} Complementing Theorem~\ref{lemma-E0}, we have that $\lambda (E_{\alpha})$ lies outside $[0,2]$ when $\alpha>\|H\|_F$ because the moduli of all eigenvalues of $\left[ (\alpha I_n-A)-B(\alpha I_n-\overline{A})^{-1}\overline{B} \right]^{-1}$ are greater than $(2\alpha)^{-1}$. \begin{theorem}\label{lemma-F0} Assume that $\alpha> \varrho\|H\|_F+\frac{1}{2}(\varrho-1)^{-1} \|B\|_F$ with $\varrho>1$. Then $\|F_{\alpha}\|_2<1$ with $F_{\alpha}$ defined in \eqref{EFalpha}. \end{theorem} \begin{proof} Let $\omega_1$ be the largest eigenvalue of $A$. Then it holds that \[ \|(\alpha I_n-A)^{-1}\|_2=(\alpha-\omega_1)^{-1}, \ \ \ \|F_{\alpha}\|_2 \leq \frac{2\alpha\|B\|_2}{(\alpha-\omega_1)^2-\|B\|_2^2}. \] We shall show that $\|B\|_2/[(\alpha-\omega_1)^2-\|B\|_2^2]$, in the right-hand side of the inequality above, is bounded strictly from above by $(2\alpha)^{-1}$ when $\alpha> \varrho\|H\|_F+\frac{1}{2} (\varrho-1)^{-1} \|B\|_F$, or equivalently \begin{align}\label{alpha-ineq} (\alpha-\omega_1)^2-2\alpha\|B\|_2-\|B\|_2^2>0. \end{align} If $\|B\|_2+\omega_1\leq0$, \eqref{alpha-ineq} is apparently valid. When $\|B\|_2+\omega_1>0$ and considering the left-hand-side of \eqref{alpha-ineq} as a quadratic in $\alpha$, \eqref{alpha-ineq} holds if and only if $\alpha>\|B\|_2+\omega_1+\sqrt{2\|B\|_2 (\|B\|_2+\omega_1)}$.
With $\eta > 0$ and $\eta_1 \equiv 1/(2\eta^2)$, from the equality \[ \sqrt{2\|B\|_2(\|B\|_2+\omega_1)} =\left[ \eta\sqrt{\|B\|_2}+\sqrt{\eta_1 (\|B\|_2+\omega_1)} \right]^2 -\eta^2\|B\|_2-\eta_1 \left(\|B\|_2+\omega_1 \right), \] we deduce that \begin{align*} &\|B\|_2+\omega_1+\sqrt{2\|B\|_2(\|B\|_2+\omega_1)}\\ =& \left[\eta\sqrt{\|B\|_2}+ \sqrt{\eta_1(\|B\|_2+\omega_1)} \right]^2+\left(1-\eta^2- \eta_1 \right)\|B\|_2 + \left(1-\eta_1 \right)\omega_1 \\ \leq & \ \ \eta^2\|B\|_2+\left(1+\eta_1 \right)(\|B\|_2+\omega_1). \end{align*} With $\eta^2=\frac{1}{2}(\varrho-1)^{-1}$, we get $\eta^2\|B\|_2+\left(1+\eta_1 \right)(\|B\|_2+\omega_1)<\alpha$, thus our result. \end{proof} Theorem~\ref{lemma-F0} demonstrates that when $\varrho$ is chosen as some moderate real positive scalar, such as $\sqrt{2}$, then the corresponding lower bound will be a good candidate for the initial $\alpha$. Additionally, when the condition in Theorem~\ref{lemma-F0} is satisfied, $E_{\alpha}$ and $I_n-F_{\alpha}\overline{F}_{\alpha}$ are nonsingular. Although Theorems~\ref{lemma-E0} and \ref{lemma-F0} show that a small $\alpha$ is sufficient for $E_{\alpha}$ and $I_n-F_{\alpha}\overline{F}_{\alpha}$ to be nonsingular, the minimization of $|(\lambda+\alpha)/(\lambda-\alpha)|$ for an optimal $\alpha$ deserves further consideration, for the fast convergence of the DA. For the optimal $\alpha$, \cite{hlll} proposed some remarkable techniques for the suboptimal solution $\alpha_{opt}:=\argmin_{\alpha>0} \max_{\Re(\lambda)<0} \left|\frac{\lambda +\alpha}{\lambda -\alpha}\right|$. With some prior knowledge (in $\mathcal{D}$ below) of the eigenvalues of $H$, \cite{hlll} essentially solves the following optimization problem: \[ \alpha_{sopt}:=\argmin_{\alpha>0} \max_{\zeta \in \mathcal{D}} \left|\frac{\zeta+\alpha}{\zeta-\alpha}\right|, \qquad \text{where} \quad \{\lambda\in \lambda(H): \Re(\lambda)<0\}\subset \mathcal{D}\subset \mathbb{C}_{-}. \] With $\mathcal{D}$ being an interval, a disk, an ellipse or a rectangle, \cite[Theorem~2.1]{hlll} considers the suboptimal solution $\alpha_{sopt}$. The technique can be applied to \eqref{MLalpha} for a suboptimal $\alpha$ when the distance between $\{\lambda\in \lambda(H): \Re(\lambda)<0\}$ and the imaginary axis is known. From now on, we will always assume $\alpha>0$ such that $\alpha I_{2n}-H$, $\alpha I_n-A$, $I_n-(\alpha I_n-\overline{A})^{-1}\overline{B}(\alpha I_n-A)^{-1}B$ and $E_{\alpha}$ are nonsingular and also assume that $1\notin\sigma(F_{\alpha})$ (before the discussion in Section~3.3). \subsection{Algorithm} We now construct a new symplectic pair by applying the doubling action to a given symplectic pair $(M, L)$ in SSF-1 in \eqref{MLalpha}; i.e., for $E^{\HH}=E$, $F^{\T}=F\in\mathbb{C}^{n\times n}$, we have \begin{align}\label{ML} M=\begin{bmatrix}E& \mathbf 0\\& \\F&I_n\end{bmatrix}, \qquad L=\begin{bmatrix}I_n&\overline{F}\\& \\ \mathbf 0&\overline{E}\end{bmatrix}. \end{align} \begin{theorem}\label{theorem-doubling-trans1} For $M,L$ in \eqref{ML} with $1\notin\sigma(F)$, there exists $[M_{*},\, L_{*}] \in \mathcal{N}(M, L)$ such that $(\widetilde{M},\, \widetilde{L}) = (M_{*}M, \, L_{*}L)$, from the doubling transformation of $(M,\, L)$, is a symplectic pair in SSF-1. 
Furthermore, $[\widetilde{M},\, \widetilde{L}]$ retains the SSF-1: \[ \widetilde{M}=\begin{bmatrix}\widetilde{E}& \mathbf 0 \\&\\ \widetilde{F}& I_n\end{bmatrix}, \qquad \widetilde{L}=\begin{bmatrix}I_n& \overline{\widetilde{F}}\\&\\ \mathbf 0& \overline{\widetilde{E}}\end{bmatrix}, \] with $\widetilde{E}^{\HH}=\widetilde{E}, \widetilde{F}^{\T}=\widetilde{F}\in\mathbb{C}^{n\times n}$. \end{theorem} \begin{proof} Let \[ M_*=\begin{bmatrix}E+E\overline{F}( I_n-F\overline{F} )^{-1}F& \mathbf 0\\ \\ \overline{E}(I_n- F\overline{F} )^{-1}F& I_n\end{bmatrix},\qquad L_*=\begin{bmatrix}I_n& E\overline{F}(I_n- F\overline{F} )^{-1} \\ \\ \mathbf 0& \overline{E}( I_n-F\overline{F} )^{-1}\end{bmatrix}. \] We have $\rank([M_{*},\, L_{*}])=2n$ and \[ M_{*}L=\begin{bmatrix}E(I_n-\overline{F}F)^{-1}&E\overline{F}(I_n-F\overline{F})^{-1}\\ \\ \overline{E}(I_n-F\overline{F})^{-1}F&\overline{E}(I_n-F\overline{F})^{-1} \end{bmatrix}=L_{*}M, \] implying that $[M_{*},\, L_{*}] \in \mathcal{N}(M, L)$. Routine manipulations yield \[ M_{*}M=\begin{bmatrix}E(I_n-\overline{F}F)^{-1}E&\mathbf 0\\ \\F+\overline{E}F(I_n-\overline{F}F)^{-1}E&I_n\end{bmatrix}, \qquad L_{*}L=\begin{bmatrix}I_n&\overline{F}+E\overline{F}(I_n-F\overline{F})^{-1}\overline{E} \\ \\ \mathbf 0&\overline{E}(I_n-F\overline{F})^{-1}\overline{E}\end{bmatrix}. \] With $\widetilde{E}=E(I_n-\overline{F}F)^{-1}E$ and $\widetilde{F}=F+\overline{E}F(I_n-\overline{F}F)^{-1}E$, the result follows. \end{proof} If we initially take $M_0=M_{\alpha}$ and $L_0=L_{\alpha}$ (from \eqref{MLalpha}), indicating that $E_0=E_{\alpha}$ and $F_0=F_{\alpha}$ (specified in \eqref{EFalpha}), then successive doubling transformations in Theorem~\ref{theorem-doubling-trans1} produce a sequence of symplectic pairs $(M_{k}, L_{k})$ provided that $(I_n-\overline{F}_kF_k)$ are nonsingular for $k\geq 0$. Specifically, we have a well-defined doubling iteration, provided that $1 \not\in \sigma (F_k)$: (for $k=0, 1, \ldots$) \begin{equation}\label{doubling iteration} E_{k+1}=E_k(I_n-\overline{F}_kF_k)^{-1}E_k, \qquad F_{k+1}=F_k+\overline{E}_kF_k(I_n-\overline{F}_kF_k)^{-1}E_k . \end{equation} Assuming \eqref{eigenH} with $S_{\alpha}\equiv(S-\alpha I_l)^{-1}(S+\alpha I_l)$, Lemmas~\ref{lemme-doubling} and~\ref{eigenML0} imply \begin{align}\label{MLX} M_{k} \begin{bmatrix} X_1 \\ \\ X_2 \end{bmatrix} =L_{k} \begin{bmatrix} X_1 \\ \\ X_2 \end{bmatrix} S_{\alpha}^{2^k}, \ \ \ M_{k}=\begin{bmatrix}E_{k}&\mathbf 0\\ \\F_{k}&I_n\end{bmatrix}, \ \ L_{k}=\begin{bmatrix}I_n&\overline{F}_k\\ \\ \mathbf 0&\overline{E}_k\end{bmatrix}. \end{align} The DA in \eqref{doubling iteration} has two iterative formulae for $E_k$ and $F_k$. Interestingly, the SDAs for Riccati equations and quadratic palindromic eigenvalue problems~\cite{cfl,cflw,chlw} have three, those for nonsymmetric algebraic Riccati equations~\cite{lcjl1,lcjl2} have four, while the PDA for the linear palindromic eigenvalue problem~\cite{lccl} has one. \subsection*{Convergence} We next consider the convergence of the DA. Without loss of generality, we assume for the moment that $1\notin \sigma(F_k)$ for all $k=0, 1, \ldots$. For the case that $1\in \sigma(F_k)$ for some $k$, Theorem~\ref{theorem-bound} below essentially demonstrates that the following convergence result still hold. We also require the technical assumption that $X_1$ and $\left[ X_1, \Psi_{11} \right]$, respectively, are nonsingular in Theorems~\ref{convergence} and \ref{thm:conv_pure} below. 
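Before turning to the convergence analysis, we record a minimal numerical sketch of the initialization and of the recursion \eqref{doubling iteration}. The sketch is written in Python/NumPy purely for illustration (the numerical experiments later in the paper use MATLAB); the function names, the stopping tolerance and the iteration cap are illustrative choices. The expression used for $E_{\alpha}$ is the one appearing in the proof of Theorem~\ref{lemma-E0}, and the expression for $F_{\alpha}$ is the analogue of the formula for $F_{k_0+1}$ in Section~3.3; both are equivalent to \eqref{EFalpha}.
\begin{verbatim}
import numpy as np

def initial_EF(A, B, alpha):
    # E_alpha, F_alpha of the SSF-1 pair obtained from A, B and the shift alpha, using
    # T = (alpha*I - A) - B (alpha*I - conj(A))^{-1} conj(B):
    #   E_alpha = I - 2*alpha*T^{-1},
    #   F_alpha = -2*alpha*(alpha*I - conj(A))^{-1} conj(B) T^{-1}.
    n = A.shape[0]
    I = np.eye(n)
    W = np.linalg.solve(alpha * I - A.conj(), B.conj())  # (alpha*I - conj(A))^{-1} conj(B)
    Tinv = np.linalg.inv(alpha * I - A - B @ W)
    return I - 2.0 * alpha * Tinv, -2.0 * alpha * W @ Tinv

def doubling(E, F, maxit=50, tol=1e-14):
    # Doubling recursion: E_{k+1} = E (I - conj(F) F)^{-1} E,
    #                     F_{k+1} = F + conj(E) F (I - conj(F) F)^{-1} E.
    n = E.shape[0]
    I = np.eye(n)
    for k in range(maxit):
        W = np.linalg.solve(I - F.conj() @ F, E)         # (I - conj(F) F)^{-1} E
        E, F = E @ W, F + E.conj() @ F @ W
        if np.linalg.norm(E, 'fro') < tol:               # E_k -> 0 at convergence
            return E, F, k + 1
    return E, F, maxit
\end{verbatim}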
\begin{theorem}\label{convergence} Assume that $H$ possesses no purely imaginary eigenvalue and \\ $H [X_1^{\T},\, X_2^{\T}\ ]^{\T}=[X_1^{\T},\, X_2^{\T}\ ]^{\T}S$ with $X_1, X_2, S\in\mathbb{C}^{n\times n}$, where $\lambda(S)$ is in the interior of the left half plane. Then for $\{E_k\}$ and $\{F_k\}$ generated by \eqref{doubling iteration}, we have $\lim_{k\rightarrow\infty}E_k=0$ and $\lim_{k\rightarrow\infty}F_k=-X_2X_1^{-1}$, both converging quadratically. \end{theorem} \begin{proof} Let $S_{\alpha} \equiv (S-\alpha I_n)^{-1}(S+\alpha I_n)$. Note that the spectral radius of $S_{\alpha}$ is less than $1$ when $\alpha>0$. The proof is similar to that of \cite[Corollary~3.2]{lx06}. \end{proof} The following theorem illustrates the linear convergence of the proposed DA when some purely imaginary eigenvalues exist. Let the Jordan decompositions of $J_{2p_{r,j}}(\mathrm{i}\omega_j+\alpha)[J_{2p_{r,j}}(\mathrm{i}\omega_j-\alpha)]^{-1}$ be\\ $J_{2p_{r,j}}(\mathrm{i}\omega_j+\alpha)[J_{2p_{r,j}}(\mathrm{i}\omega_j-\alpha)]^{-1}=Q_{r,j}J_{2p_{r,j}}(\mathrm{e}^{\mathrm{i}\theta_j}) Q_{r,j}^{-1}$ for $r=1, \cdots, l_j$ and $j=1, \cdots, q$. Denote $W_{\omega}=[W_{1,\omega}^{\T}, W_{2,\omega}^{\T}]^{\T}$, $Z_{\omega}=[Z_{1,\omega}^{\T}, Z_{2,\omega}^{\T}]^{\T}$, $Q_{r,j} =\begin{bmatrix} Q_{r,j}^{(11)}&Q_{r,j}^{(12)}\\ \\ Q_{r,j}^{(21)}&Q_{r,j}^{(22)} \end{bmatrix}$ and, for $s',t'=1,2$, \begin{align*} Q^{(s't')}&:= \bigoplus_{j=1}^q \bigoplus_{r=1}^{l_j} Q_{r,j}^{(s't')} \\ \Psi_{11} & \equiv W_{1,\omega}Q^{(11)}+Z_{1,\omega}Q^{(21)}, \ \ \ \Psi_{21} \equiv W_{2,\omega}Q^{(11)}+Z_{2,\omega}Q^{(21)}. \end{align*} \begin{theorem}\label{thm:conv_pure} Assume that the partial multiplicities of $H$ associated with the purely imaginary eigenvalues are all even, and $H$ has the eigen-decomposition specified in \eqref{eq:eigen_decomp}. Writing $X=[ X_1^{\T}, X_2^{\T} ]^{\T}$, provided that $[ X_1, \Psi_{11} ]$ is nonsingular, we then have $\lim_{k\to \infty}E_k=0$ and $\lim_{k\to \infty} F_k= [X_2, \Psi_{21}][ X_1, \Psi_{11}]^{-1}$, both converging linearly. \end{theorem} \begin{proof} By \eqref{eq:eigen_decomp} and Lemmas~\ref{lemme-doubling} and~\ref{eigenML0}, we have \begin{align}\label{eq:doubling_pure} M_{k} \begin{bmatrix} X_1&W_{1,\omega}& \overline{X_2} & Z_{1,\omega}\\ X_2&W_{2,\omega}& \overline{X_1}&Z_{2,\omega} \end{bmatrix} = L_k \begin{bmatrix} X_1&W_{1,\omega}& \overline{X_2} & Z_{1,\omega}\\ X_2&W_{2,\omega}& \overline{X_1}&Z_{2,\omega} \end{bmatrix} \widetilde S_{\alpha}^{2^k}, \end{align} where $\widetilde S_{\alpha} = (\widetilde{S} + \alpha I)(\widetilde{S} - \alpha I)^{-1}$ with $\widetilde{S}$ from \eqref{eq:eigen_decomp}. Let $\Pi_{\omega}$ be the permutation matrix satisfying \begin{align*} & \Pi_{\omega}\diag\left\{S, -\overline{S}; \bigoplus_{j=1}^q \bigoplus_{r=1}^{l_j} J_{2p_{r,j}}(\mathrm{i} \omega_j)\right\} \Pi_{\omega}^{\T} = \widetilde{S}, \end{align*} and denote $\mathcal{D} \equiv \diag\left\{ I_{n_1},I_{n_1}; \bigoplus_{j=1}^q \bigoplus_{r=1}^{l_j}Q_{r,j}\right\}$, $J_{\omega,\theta}= \bigoplus_{j=1}^q \bigoplus_{r=1}^{l_j} J_{p_{r,j}}(\mathrm{e}^{\mathrm{i} \theta_j})$, and $S_{\alpha}:=(S+\alpha I)(S-\alpha I)^{-1}$, it holds that \begin{align*} \widetilde {S}_{\alpha} = \begin{multlined}[t] \Big(\Pi_{\omega} \mathcal{D} \Pi_{\omega}^{\T} \Big) \begin{bmatrix} S_{\alpha}&&&\\&J_{\omega,\theta}&&\Omega_{\omega}\\ & &\overline{S}_{\alpha}^{-1}&\\ &&&J_{\omega,\theta} \end{bmatrix} \Big(\Pi_{\omega} \mathcal{D}^{-1} \Pi_{\omega}^{\T} \Big). 
\end{multlined} \end{align*} This further implies \begin{align}\label{eq:wtdS2^k} \widetilde {S}_{\alpha}^{2^k} = \begin{multlined}[t] \Big(\Pi_{\omega} \mathcal{D} \Pi_{\omega}^{\T} \Big) \begin{bmatrix} S_{\alpha}^{2^k}&&&\\&J_{\omega,\theta}^{2^k}&&\Omega_{\omega,\theta,k}\\ & &\overline{S}_{\alpha}^{-2^k}&\\ &&&J_{\omega,\theta}^{2^k} \end{bmatrix} \Big(\Pi_{\omega} \mathcal{D}^{-1} \Pi_{\omega}^{\T} \Big) \end{multlined} \end{align} with $\Omega_{\omega, \theta, k}=\bigoplus_{j=1}^q \bigoplus_{r=1}^{l_j} J_{2p_{r,j}}^{2^k}(\mathrm{e}^{\mathrm{i}\theta_j})(1:p_{r,j}, p_{r,j}+1:2p_{r,j})$. By \eqref{eq:doubling_pure} and \eqref{eq:wtdS2^k} we have \begin{align*} M_{k} \begin{bmatrix} X_1&W_{1,\omega}& \overline{X_2} & Z_{1,\omega}\\ X_2&W_{2,\omega}& \overline{X_1}&Z_{2,\omega} \end{bmatrix} \left(\Pi_{\omega} \mathcal{D} \Pi_{\omega}^{\T}\right) = \begin{multlined}[t] L_k \begin{bmatrix} X_1&W_{1,\omega}& \overline{X_2} & Z_{1,\omega}\\ X_2&W_{2,\omega}& \overline{X_1}&Z_{2,\omega} \end{bmatrix}\left(\Pi_{\omega} \mathcal{D} \Pi_{\omega}^{\T}\right) \\ \cdot\begin{bmatrix} S_{\alpha}^{2^k}&&&\\&J_{\omega,\theta}^{2^k}&&\Omega_{\omega,\theta,k}\\ & &\overline{S}_{\alpha}^{-2^k}&\\ &&&J_{\omega,\theta}^{2^k} \end{bmatrix}. \end{multlined} \end{align*} Similar to the proof of \cite[Theorem~4.2]{hl}, we obtain the result. \end{proof} Next assume that we have acquired a sympletic pair $(M_{k},\, L_{k})$ with $\|E_k\|_F<\mathbf{u}$, where $\mathbf{u}$ is some small tolerance. The question is then how to compute the eigenvalues and eigenvectors of $H$ from $E_k$ and $F_k$. Without loss of generality, we just show the details for the case that no purely imaginary eigenvalues exist. Denote the error $Z_k \equiv F_k+X_2X_1^{-1}$ (Theorem~\ref{convergence} and \eqref{doubling iteration} suggest $\|Z_k\|_F< \mathbf{u}$), where $X_1, X_2\in\mathbb{C}^{n\times n}$ satisfy $H \left[ X_1^{\T}, \, X_2^{\T} \right]^{\T}=\left[ X_1^{\T}, \, X_2^{\T} \right]^{\T} S$ with $\lambda(S)\subseteq\mathbb{C}_{-}$, we have \begin{align}\label{approx-eigenvalue} H \begin{bmatrix} \ \ I_n \\ \\ -F_k \end{bmatrix} = \begin{bmatrix} \ \ I_n \\ \\ -F_k \end{bmatrix} X_1SX_1^{-1} + \begin{bmatrix}\mathbf 0 \\ \\ Z_k \end{bmatrix} X_1SX_1^{-1} -H \begin{bmatrix}\mathbf 0 \\ \\ Z_k \end{bmatrix}. \end{align} Pre- and post-multiplying $\left[ I_n, \, -F_k^{\HH} \right]$ and $(I_n+F_k^{\HH}F_k)^{-1}$, respectively, to both sides of \eqref{approx-eigenvalue}, we obtain \begin{align*} &(I_n+F_k^{\HH}F_k)X_1SX_1^{-1}(I_n+F_k^{\HH}F_k)^{-1}\\ = & \left\{ \left[ I_n, \, -F_k^{\HH} \right] H \left[ I_n, \, -F_k^{\T} \right]^{\T} + (F_k^{\HH}Z_kX_1SX_1^{-1}+BZ_k+F_k^{\HH}\overline{A}Z_k) \right\} (I_n+F_k^{\HH}F_k)^{-1}. \end{align*} Accordingly, we can take the eigenvalues of $H_k\equiv \left[ I_n, \, -F_k^{\HH} \right] H \left[ I_n, \, -F_k^{\T} \right]^{\T}(I_n+F_k^{\HH}F_k)^{-1}$ to approximate $\lambda(S)$ (the stable subspectrum of $H$). By the generalized Bauer-Fike theorem~\cite{ss90}, when the eigenvalues $\lambda_p(S)$ have Jordan blocks of maximum size $m$, there exists an eigenvalue $\lambda_q(H_k)$ such that \begin{align*} \frac{|\lambda_p(S)-\lambda_q(H_k)|^m}{[1+|\lambda_p(S)-\lambda_q(H_k)|]^{m-1}} &\leq \Upsilon\|(F_k^{\HH}Z_kX_1SX_1^{-1}+BZ_k+F_k^{\HH}\overline{A}Z_k) (I_n+F_k^{\HH}F_k)^{-1}\|_2\\ &\leq \Upsilon\|F_k^{\HH}Z_kX_1SX_1^{-1}+BZ_k+F_k^{\HH}\overline{A}Z_k\|_2, \end{align*} for some $\Upsilon > 0$ associated with $S$. Consequently, we can approximate $\lambda(S)$ by $\lambda(H_k)$. 
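To make this recovery step concrete, a minimal sketch of forming $H_k$ and extracting the approximate stable spectrum is given below (Python/NumPy, for illustration only; the function name is an assumption and not part of the algorithm). The product $\left[ I_n, \, -F_k^{\HH} \right] H \left[ I_n, \, -F_k^{\T} \right]^{\T}$ is expanded blockwise using $H=\begin{bmatrix}A& B\\ -\overline{B}&-\overline{A}\end{bmatrix}$.
\begin{verbatim}
import numpy as np

def stable_spectrum_estimate(A, B, F):
    # Approximate the stable eigenvalues of H = [[A, B], [-conj(B), -conj(A)]] from the
    # converged F_k via  H_k = [I, -F^H] H [I; -F^T] (I + F^H F)^{-1}.
    n = A.shape[0]
    I = np.eye(n)
    Fh, Ft = F.conj().T, F.T      # F^H and F^T (F is complex symmetric, so F^T = F)
    core = A - B @ Ft + Fh @ B.conj() - Fh @ A.conj() @ Ft   # [I, -F^H] H [I; -F^T]
    return np.linalg.eigvals(core @ np.linalg.inv(I + Fh @ F))
\end{verbatim}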
\subsection{Double-Cayley Transform} When $1\in\sigma(F_{k_0})$ for some $k_0>1$ (or the condition in Theorem~\ref{theorem-doubling-trans1} is violated), we cannot construct the new symplectic pair $(M_{k_0+1},\, L_{k_0+1})$ via the doubling transformation in \eqref{doubling iteration}. In this subsection, we steer the DA around this potential interruption using a DCT. We shall also prove the efficiency of the technique, which does not require a restart with a new $\alpha$. It is worthwhile to point out that the DCT may also be applied when $I - \overline{F}_{k_0} F_{k_0}$ is ill-conditioned. In practice, we may set a tolerance $\mathbf{u}$ and once the singular values of $F_{k_0}$ satisfy $\frac{\min_{\sigma\in \sigma(F_{k_0})}|\sigma-1|}{\max_{\sigma\in \sigma(F_{k_0})}|\sigma-1|}<\mathbf{u}$, the DCT is then applied. We first require the following results. \begin{lemma}\label{lemma-E} Assume that the doubling iteration \eqref{doubling iteration} does not break down for any $k<k_0$. If $E_0$ is nonsingular, so are $E_{k}$ $(0<k \leq k_0)$. \end{lemma} \begin{proof} This directly follows from $E_{k+1}=E_k(I_n-\overline{F}_kF_k)^{-1}E_k$ in \eqref{doubling iteration}. \end{proof} Obviously, Lemma~\ref{lemma-E} suggests that $M_{k_0}$ and $L_{k_0}$, defined in \eqref{MLX}, are both nonsingular and so is \[ L_{k_0}^{-1}M_{k_0}=\begin{bmatrix}E_{k_0}-\overline{F}_{k_0}\overline{E}_{k_0}^{-1}F_{k_0}&-\overline{F}_{k_0}\overline{E}_{k_0}^{-1}\\ &\\ \overline{E}_{k_0}^{-1}F_{k_0}&\overline{E}_{k_0}^{-1}\end{bmatrix}. \] Since $L_{k_0}^{-1}M_{k_0} [X_1^{\T},\, X_2^{\T}]^{\T}= [X_1^{\T},\, X_2^{\T}]^{\T}S_{\alpha}^{2^{k_0}}$, the fact that $0, \alpha \notin\lambda(H)$ implies that $L_{k_0}^{-1}M_{k_0} \pm I_{2n}$ are nonsingular. Consequently, we have the following theorem. \begin{theorem}\label{theorem-doubling-trans2} Let $\vartheta\in \{-1, 1\}$ and $\beta \in \mathbb{R}$. Provided that $\vartheta \notin \lambda(E_{k_0})$, the following statements hold: \begin{enumerate} \item [{\em (a)}] $Z=\vartheta I_{n}-E_{k_0}+\vartheta \overline{F}_{k_0}(\vartheta \overline{E}_{k_0}-I_n)^{-1}F_{k_0}$ is nonsingular; \item [{\em (b)}] $(\widehat{H}+\beta \vartheta I_{2n}) [X_1^{\T},\, X_2^{\T}]^{\T} =(\widehat{H}-\beta \vartheta I_{2n}) [X_1^{\T},\, X_2^{\T}]^{\T} (\vartheta S_{\alpha}^{2^{k_0}})$ with $\widehat{A}=\beta \vartheta I_{n}-2\beta Z^{-1}$, $\widehat{B}=(\beta I_n-\vartheta\widehat{A})\overline{F}_{k_0}(\overline{E}_{k_0}-\vartheta I_n)^{-1}$ and $\widehat{H}=\begin{bmatrix}\ \ \widehat{A}&\ \ \widehat{B}\\ \\-\overline{\widehat{B}}&-\overline{\widehat{A}}\end{bmatrix}$; and \item [{\em (c)}] $\widehat{A}$ is Hermitian and $\widehat{B}$ is symmetric. \end{enumerate} \end{theorem} \begin{proof} For (a) with $\vartheta\notin\lambda(E_{k_0})$, $E_{k_0}-\vartheta I_n$ is nonsingular and so is \[ K\triangleq\begin{bmatrix}I_n&\overline{F}_{k_0}(I_n-\vartheta \overline{E}_{k_0})^{-1}\\ &\\ \mathbf 0&(\overline{E}_{k_0}^{-1}-\vartheta I_n)^{-1}\end{bmatrix}. \] In addition, pre-multiplying $L_{k_0}^{-1}M_{k_0}-\vartheta I_{2n}$ by $K$ gives \[ K(L_{k_0}^{-1}M_{k_0}-\vartheta I_{2n}) =\begin{bmatrix}E_{k_0}-\vartheta I_n+\vartheta \overline{F}_{k_0}(I_n-\vartheta \overline{E}_{k_0})^{-1}F_{k_0} & \mathbf 0\\ &\\ (I_n-\vartheta \overline{E}_{k_0})^{-1}F_{k_0}&I_n\end{bmatrix}, \] implying that $Z=\vartheta I_{n}-E_{k_0}+\vartheta \overline{F}_{k_0}(\vartheta \overline{E}_{k_0}-I_n)^{-1}F_{k_0}$ is nonsingular. For (b), manipulations show that $\widehat{H}=\beta\vartheta(L_{k_0}^{-1}M_{k_0}-\vartheta I_{2n})^{-1}(L_{k_0}^{-1}M_{k_0}+\vartheta I_{2n})$.
Then $M_{k_0}\ [X_1^{\T},\, X_2^{\T}]^{\T} =L_{k_0} [X_1^{\T},\, X_2^{\T}]^{\T}S_{\alpha}^{2^{k_0}}$ implies \[ (L_{k_0}^{-1}M_{k_0}-\vartheta I_n)^{-1}(L_{k_0}^{-1}M_{k_0}+\vartheta I_n) [X_1^{\T},\, X_2^{\T}]^{\T} = [X_1^{\T},\, X_2^{\T}]^{\T}(S_{\alpha}^{2^{k_0}}-\vartheta I_n)^{-1} (S_{\alpha}^{2^{k_0}}+\vartheta I_n), \] leading to $\widehat{H} [X_1^{\T},\, X_2^{\T}]^{\T}= [X_1^{\T},\, X_2^{\T}]^{\T} [\beta \vartheta (S_{\alpha}^{2^{k_0}}-\vartheta I_n)^{-1}(S_{\alpha}^{2^{k_0}}+\vartheta I_n)]$. Consequently, the result follows from the resulting equalities \begin{align*} & (\widehat{H}+\beta \vartheta I_n) [X_1^{\T},\, X_2^{\T}]^{\T} = [X_1^{\T},\, X_2^{\T}]^{\T}[2\beta \vartheta (S_{\alpha}^{2^{k_0}}-\vartheta I_n)^{-1}S_{\alpha}^{2^{k_0}}], \\ & (\widehat{H}-\beta \vartheta I_n) [X_1^{\T},\, X_2^{\T}]^{\T} = [X_1^{\T},\, X_2^{\T}]^{\T}[2\beta (S_{\alpha}^{2^{k_0}}-\vartheta I_n)^{-1}]. \end{align*} For (c), $\widehat{A}^{\HH}=\widehat{A}$ directly follows from its definition and the facts that $E_{k_0}^{\HH}=E_{k_0}$ and $F_{k_0}^{\T}=F_{k_0}$. For the symmetry of $\widehat{B}$, observe that \begin{align*} \widehat{B} &=2\beta \vartheta Z^{-1}\overline{F}_{k_0}(\overline{E}_{k_0}-\vartheta I_n)^{-1}&\\ &=2\beta \vartheta (E_{k_0}- \vartheta I_n)^{-1}[I_n+ \vartheta \overline{F}_{k_0} (\vartheta \overline{E}_{k_0}-I_n)^{-1}F_{k_0}(\vartheta I_n-E_{k_0})^{-1}]^{-1} \overline{F}_{k_0}(\vartheta I_n-\overline{E}_{k_0})^{-1}&\\ &=2\beta \vartheta (E_{k_0}- \vartheta I_n)^{-1} \overline{F}_{k_0} [I_n+\vartheta (\vartheta \overline{E}_{k_0}-I_n)^{-1}F_{k_0}(\vartheta I_n-E_{k_0})^{-1}\overline{F}_{k_0}]^{-1} (\vartheta I_n-\overline{E}_{k_0})^{-1}&\\ &=2\beta \vartheta (E_{k_0}-\vartheta I_n)^{-1} \overline{F}_{k_0}\overline{Z}^{-1} =2\beta \vartheta (E_{k_0}-\vartheta I_n)^{-1}\overline{F}_{k_0} Z^{-\T}=B^{\T}. \end{align*} The proof is complete. \end{proof} Theorem~\ref{theorem-doubling-trans2} implies $\widehat{H} [X_1^{\T},\, X_2^{\T}]^{\T} =\beta \vartheta [X_1^{\T},\, X_2^{\T}]^{\T} (S_{\alpha}^{2^{k_0}}+\vartheta I_l)(S_{\alpha}^{2^{k_0}}-\vartheta I_l)^{-1}$, hence each eigenvalue $\lambda$ of $H$ corresponds to an eigenvalue $\mu$ of $\widehat{H}$: \begin{align}\label{eigenvalue-mu} \mu=f(\lambda)\triangleq \beta\vartheta \cdot \frac{(\lambda+\alpha)^{2^{k_0}}+\vartheta(\lambda-\alpha)^{2^{k_0}}} {(\lambda+\alpha)^{2^{k_0}}-\vartheta(\lambda-\alpha)^{2^{k_0}}}. \end{align} More specifically, for $\lambda\in\lambda(H)$, we have \begin{align*} \left\{ \begin{array}{ll} \{\mu, \ \overline{\mu}=f(\overline{\lambda}),\ -\mu=f(-\lambda),\ -\overline{\mu}=f(-\overline{\lambda})\}\subseteq \lambda(\widehat{H}), & \quad \text{if} \ \Re(\lambda)\Im(\lambda)\neq0;\\ \{\mu, \ -\mu=f(-\lambda) \}\subseteq \lambda(\widehat{H}), & \quad \text{if} \ \Im(\lambda)=0;\\ \{\mu, \ \overline{\mu}=f(\overline{\lambda}) \}\subseteq \lambda(\widehat{H}), & \quad \text{if} \ \Re(\lambda)=0.\\ \end{array} \right. \end{align*} In addition, $\mu \in \lambda(\widehat{H})$ is purely imaginary if $\lambda \in \lambda(H)$ is so. Equivalently, there exists no purely imaginary eigenvalues for $\widehat{H}$ when there is none for $H$. Next select $\gamma\in\mathbb{R}$ with $\gamma I_n-\widehat{A}$ and $I_n-(\gamma I_n-\overline{\widehat{A}})^{-1}\overline{\widehat{B}} (\gamma I_n-\widehat{A})^{-1}\widehat{B}$ being nonsingular. 
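For illustration, the matrices $\widehat{A}$ and $\widehat{B}$ of Theorem~\ref{theorem-doubling-trans2}(b) can be assembled directly from $E_{k_0}$ and $F_{k_0}$ as in the following sketch (Python/NumPy; the function name and the default choices $\vartheta=\beta=1$ are illustrative assumptions, not part of the method's specification).
\begin{verbatim}
import numpy as np

def dct_hat_matrices(E, F, theta=1.0, beta=1.0):
    # A_hat and B_hat of part (b) of the double-Cayley theorem, formed from E_{k0} and
    # F_{k0}; theta must be +1 or -1 and must not be an eigenvalue of E_{k0}.
    n = E.shape[0]
    I = np.eye(n)
    Ec, Fc = E.conj(), F.conj()
    # Z = theta*I - E + theta * conj(F) (theta*conj(E) - I)^{-1} F
    Z = theta * I - E + theta * Fc @ np.linalg.solve(theta * Ec - I, F)
    A_hat = beta * theta * I - 2.0 * beta * np.linalg.inv(Z)
    B_hat = (beta * I - theta * A_hat) @ Fc @ np.linalg.inv(Ec - theta * I)
    return A_hat, B_hat
\end{verbatim}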
Theorem~\ref{theorem-SSF-1} could then be applied to $\widehat{A}$ and $\widehat{B}$, which are defined in Theorem~\ref{theorem-doubling-trans2}, to obtain a new SSF-1 derived from $\widehat{H}$. Thus, we have \begin{multline*} M_{k_0+1}\begin{bmatrix}X_1\\ \\ X_2\end{bmatrix} = L_{k_0+1}\begin{bmatrix}X_1\\ \\ X_2\end{bmatrix} \left[ \beta\vartheta (S_{\alpha}^{2^{k_0}}+\vartheta I_l) (S_{\alpha}^{2^{k_0}}-\vartheta I_l)^{-1} +\gamma I_l \right] \\ \cdot \left[ \beta\vartheta (S_{\alpha}^{2^{k_0}}+\vartheta I_l) (S_{\alpha}^{2^{k_0}}-\vartheta I_l)^{-1} -\gamma I_l \right]^{-1}, \end{multline*} with \begin{align*} &M_{k_0+1}=\begin{bmatrix}E_{{k_0}+1}&\mathbf 0\\ \\ F_{k_0+1}&I_n\end{bmatrix}, \qquad L_{k_0+1}=\begin{bmatrix}I_n&\overline{F}_{k_0+1}\\ \\ \mathbf 0&\overline{E}_{k_0+1}\end{bmatrix},\\ &E_{k_0+1}=I_n-2\gamma\left[(\gamma I_n-\widehat{A}) -\widehat{B}(\gamma I_n-\overline{\widehat{A}})^{-1}\overline{\widehat{B}}\right]^{-1},\\ &F_{k_0+1}=-2\gamma(\gamma I_n-\overline{\widehat{A}})^{-1}\overline{\widehat{B}} \left[(\gamma I_n-\widehat{A})-\widehat{B} (\gamma I_n-\overline{\widehat{A}})^{-1}\overline{\widehat{B}}\right]^{-1}. \end{align*} We call the above transform from $(M_{k_0}, L_{k_0})$ to $(M_{k_0+1}, L_{k_0+1})$, both symplectic, a DCT. Accordingly, with $\delta_{\lambda}\triangleq (\lambda+\alpha)(\lambda-\alpha)^{-1}$, $|\delta_{\lambda}|<1$ and $\varpi \triangleq (\beta-\vartheta\gamma)(\beta\vartheta +\gamma)^{-1}$, an eigenvalue $\mu$ of $\widehat{H}$ (in \eqref{eigenvalue-mu}) would be transformed into an eigenvalue $\nu$ of $(M_{k_0+1}, L_{k_0+1})$ via the following formula: (for $\lambda \in\lambda(H)$) \begin{align*} \nu &\equiv\nu(\mu)=\frac{\mu+\gamma}{\mu-\gamma}\\ &= \frac{\beta\vartheta[(\lambda+\alpha)^{2^{k_0}}+\vartheta(\lambda-\alpha)^{2^{k_0}}] + \gamma[(\lambda+\alpha)^{2^{k_0}}-\vartheta(\lambda-\alpha)^{2^{k_0}}]} {\beta\vartheta[(\lambda+\alpha)^{2^{k_0}}+\vartheta(\lambda-\alpha)^{2^{k_0}}] - \gamma[(\lambda+\alpha)^{2^{k_0}}-\vartheta(\lambda-\alpha)^{2^{k_0}}]} =\vartheta \cdot \frac{\varpi +\delta_{\lambda}^{2^{k_0}}}{1+\varpi \delta_{\lambda}^{2^{k_0}}}. \end{align*} One may consider the condition number of $I_n-\overline{F}_{k_0+1}F_{k_0+1}$, or equivalently, the difference between $1$ and $\sigma(F_{k_0+1})$. Obviously, $\sigma(F_{k_0})$ depends on $\gamma$. Without loss of generality we assume $\vartheta=1$, then with $\gamma=\beta(\kappa^{2^{k_0}}+1)(\kappa^{2^{k_0}}-1)^{-1}$(with $\kappa$ to be specified), we have \begin{align*} &F_{k_0+1}= -\frac{\kappa^{2^{k_0}}+1}{\kappa^{2^{k_0}}-1} \left(\frac{\overline Z}{\kappa^{2^{k_0}}-1} +I_n \right)^{-1} F_{k_0}(E_{k_0}-I_n)^{-1} \\ & \ \ \ \cdot \left[\left(\frac{Z}{\kappa^{2^{k_0}}-1} +I_n \right) - \overline F_{k_0}(\overline E_{k_0}-I_n)^{-1} \left(\frac{\overline Z}{\kappa^{2^{k_0}}-1} +I_n \right)^{-1} F_{k_0}(E_{k_0}-I_n)^{-1}\right]^{-1}Z. \end{align*} Thus we can choose some $\kappa$ to make $I_n-\overline{F}_{k_0+1}F_{k_0+1}$ well conditioned. We leave the issue of an optimal $\kappa$ or $\gamma$ for the future, while making random choices in our numerical experiments. Theorem~\ref{theorem-bound} and Corollary~\ref{corollary-bound} below illustrate that $\kappa$ characterizes the convergence rate and does not have to be large. With $\gamma>0$ and $\Re(\mu)<0$, we have $|\nu(\mu)|<1$. The following lemma reveals more. 
\begin{lemma}\label{lemma-nu} Provided that $\vartheta\beta, \gamma>0$, then each $\nu$ corresponding to a non-purely imaginary eigenvalue $\lambda\in \lambda(H)$ with $\Re(\lambda)<0$ satisfies $|\nu|<1$. \end{lemma} \begin{proof} Let $\xi+\mathrm{i}\eta=\varrho=\delta_{\lambda}^{2^{k_0}}$, we then have $|\varrho|=|\delta_{\lambda}|^{2^{k_0}}$ and $|\xi|\leq |\delta_{\lambda}|^{2^{k_0}}$. Consequently, from the definition of $\nu$ we deduce that \begin{align} |\nu|^2&=\frac{(\xi^2+\eta^2)(\beta\vartheta+\gamma)^2+(\beta-\vartheta\gamma)^2+2\vartheta\xi(\beta^2-\gamma^2)} {(\beta\vartheta+\gamma)^2+(\beta-\vartheta\gamma)^2(\xi^2+\eta^2)+2\vartheta\xi(\beta^2-\gamma^2)} \nonumber \\ &= \frac{|\delta_{\lambda}|^{2^{k_0+1}}+2\xi\varpi +\varpi^2} {|\delta_{\lambda}|^{2^{k_0+1}}\varpi^2 +2\xi \varpi +1}. \label{xi} \end{align} Since $\vartheta\beta, \gamma>0$ and the function defined in \eqref{xi} is (i) monotone nondecreasing with respect to $\xi$ when $\beta>\vartheta\gamma$ or (ii) monotone non-increasing otherwise, we obtain \[ |\nu|^2\leq\left\{ \begin{array}{ll} &\frac{|\delta_{\lambda}|^{2^{k_0}}(|\delta_{\lambda}|^{2^{k_0}}+2\varpi ) +\varpi^2} {|\delta_{\lambda}|^{2^{{k_0}}}(2\varpi+|\delta_{\lambda}|^{2^{k_0}} \varpi^2 )+1}, \quad \quad \text{if} \quad \beta>\vartheta\gamma;\\ &\\ &\frac{|\delta_{\lambda}|^{2^{k_0}}(|\delta_{\lambda}|^{2^{k_0}}-2\varpi ) +\varpi^2} {|\delta_{\lambda}|^{2^{{k_0}}}(-2\varpi+|\delta_{\lambda}|^{2^{k_0}} \varpi^2 )+1}, \quad \ \text{if} \quad \beta<\vartheta\gamma;\\ \end{array} \right. \] which is equivalent to \[ |\nu|^2\leq \frac{|\delta_{\lambda}|^{2^{k_0}}(|\delta_{\lambda}|^{2^{k_0}}+2|\varpi|)+ \varpi^2} {|\delta_{\lambda}|^{2^{{k_0}}}(2|\varpi|+ |\delta_{\lambda}|^{2^{k_0}}\varpi^2 )+1} = \left( \frac{|\delta_{\lambda}|^{2^{k_0}}+|\varpi|} {|\delta_{\lambda}|^{2^{k_0}}|\varpi| +1} \right)^2. \] Obviously, $(|\delta_{\lambda}|^{2^{k_0}}+|\varpi|) (|\delta_{\lambda}|^{2^{k_0}}|\varpi| + 1)^{-1} <1$ from $|\varpi| = |\beta-\vartheta\gamma|/(\vartheta\beta+\gamma)<1$ and $|\delta_{\lambda}|<1$, thus the result follows. \end{proof} Lemma~\ref{lemma-nu} demonstrates that for $\lambda\in\lambda(H)$ satisfying $\Im(\lambda)\neq0$, the DCT maps half of these $\lambda$ to some values inside of the unit circle and the other half outside. Next we consider the detailed relationship between $\nu$ and $\varrho=\delta_\lambda^{2^{k_0}}$, which is vital for the convergence of the DA coupled with the DCT. Obviously, when $\vartheta\beta, \gamma>0$, we have $|\varpi|<1$. Taking $\gamma=\beta(\kappa^{2^{k_0}}+\vartheta)(\vartheta\kappa^{2^{k_0}}-1)^{-1} >0$ with $\kappa>1$, we obtain $\varpi =-\kappa^{-2^{k_0}}$ and \[ \nu=\vartheta \cdot \frac{\delta_\lambda^{2^{k_0-1}}- \kappa^{-2^{k_0-1}}} {1-\delta_\lambda^{2^{k_0-1}} \kappa^{-2^{k_0-1}}} \cdot \frac{\delta_\lambda^{2^{k_0-1}}+ \kappa^{-2^{k_0-1}}} {1+\delta_\lambda^{2^{k_0-1}} \kappa^{-2^{k_0-1}}}. \] Denote $\xi+\mathrm{i}\eta=\delta_\lambda^{2^{k_0-1}}$ and define \begin{eqnarray*} \phi &=& \arctanh \delta_\lambda^{2^{k_0-1}}\\ &=& \frac{1}{2} \ln\left|\frac{(\lambda-\alpha)^{2^{k_0-1}}+(\lambda+\alpha)^{2^{k_0-1}}}{(\lambda-\alpha)^{2^{k_0-1}}-(\lambda+\alpha)^{2^{k_0-1}}}\right| +\frac{\mathrm{i}}{2}\arg\left[\frac{(\lambda-\alpha)^{2^{k_0-1}}+(\lambda+\alpha)^{2^{k_0-1}}} {(\lambda-\alpha)^{2^{k_0-1}}-(\lambda+\alpha)^{2^{k_0-1}}}\right],\\ \psi&=&\arctanh \kappa^{-2^{k_0-1}} =\frac{1}{2}\left[ \ln(1+\sqrt{|\varpi|}) -\ln(1-\sqrt{|\varpi|})\right]. 
\end{eqnarray*} We deduce that \[ \arg\left[\frac{(\lambda-\alpha)^{2^{k_0-1}}+(\lambda+\alpha)^{2^{k_0-1}}}{(\lambda-\alpha)^{2^{k_0-1}}-(\lambda+\alpha)^{2^{k_0-1}}}\right] =\arctan \frac{2\eta}{1-\xi^2-\eta^2}\in\left(-\frac{\pi}{2}, \ \frac{\pi}{2}\right). \] Specifically, $\arg\left[\dfrac{(\lambda-\alpha)^{2^{k_0-1}}+(\lambda+\alpha)^{2^{k_0-1}}}{(\lambda-\alpha)^{2^{k_0-1}}-(\lambda+\alpha)^{2^{k_0-1}}}\right]=0$ when $\lambda\in\mathbb{R}$. Moreover, by the definitions of $\phi$ and $\psi$, routine manipulations show that \[ \nu=\vartheta\tanh(\phi-\psi)\tanh(\phi+\psi) \] with \[ \phi \pm \psi=\frac{1}{2}\ln\left[\frac{\sqrt{\gamma+\vartheta\beta} \pm \sqrt{\vartheta\gamma-\beta}} {\sqrt{\gamma+\vartheta\beta} \mp \sqrt{\vartheta\gamma-\beta}} \sqrt{\frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}}\right] +\frac{\mathrm{i}}{2}\arctan \frac{2\eta}{1-\xi^2-\eta^2}. \] Under the assumptions in Lemma~\ref{lemma-nu}, the following theorem gives a sharp bound for those $|\nu|$ corresponding to $\lambda$ which satisfies $\Im(\lambda)\neq0$ and $|\delta_\lambda| < 1$. \begin{theorem}\label{theorem-bound} Assume that $\lambda$ is not a purely imaginary eigenvalue of $H$, $\vartheta\beta>0$ and $\kappa\geq2$. Then we have $|\nu| \leq \max\left\{|\delta_\lambda|^{2^{k_0-2}}, \ \kappa^{-2^{k_0-2}}\right\}$. \end{theorem} \begin{proof} With $\gamma=\beta\frac{\kappa^{2^{k_0}}+\vartheta}{\vartheta\kappa^{2^{k_0}}-1}$ and $\cos\left(\arctan \frac{2\eta}{1-\xi^2-\eta^2}\right)>0$, we have \begin{align*} \left\{\begin{array}{l} \ln\left(\frac{\sqrt{\gamma+\vartheta\beta}+\sqrt{\vartheta\gamma-\beta}}{\sqrt{\gamma+\vartheta\beta}-\sqrt{\vartheta\gamma-\beta}} \sqrt{\frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}}\right)\geq0, \qquad \text{if} \quad \frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}\geq1;\\ \\ \ln\left(\frac{\sqrt{\gamma+\vartheta\beta}-\sqrt{\vartheta\gamma-\beta}}{\sqrt{\gamma+\vartheta\beta}+\sqrt{\vartheta\gamma-\beta}} \sqrt{\frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}}\right)<0, \qquad \text{otherwise}. \end{array} \right. \end{align*} From Lemma~\ref{lemma-tanh} and $[(1+\xi)^2+\eta^2][(1-\xi)^2+\eta^2]^{-1}\geq1 \Leftrightarrow \xi\geq 0$, we obtain \[ |\nu|< \left\{\begin{array}{l} |\tanh(\phi-\psi)|, \qquad \text{if} \quad \xi>0;\\ |\tanh(\phi+\psi)|, \qquad \text{if} \quad \xi<0. \end{array} \right. \] Now assume that $\xi>0$ and we consider two distinct cases. (i) When \[ \sqrt{\frac{(1-\xi)^2+\eta^2}{(1+\xi)^2+\eta^2}}\leq \frac{\sqrt{\gamma+\vartheta\beta}-\sqrt{\vartheta\gamma-\beta}}{\sqrt{\gamma+\vartheta\beta}+\sqrt{\vartheta\gamma-\beta}} \sqrt{\frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}}<1 \] or \[ \frac{\sqrt{\gamma+\vartheta\beta}-\sqrt{\vartheta\gamma-\beta}}{\sqrt{\gamma+\vartheta\beta}+\sqrt{\vartheta\gamma-\beta}} \sqrt{\frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}}\geq1, \] we have \[ \ln\left[\sqrt{\frac{(1-\xi)^2+\eta^2}{(1+\xi)^2+\eta^2}}\right]\leq\ln\left[\frac{\sqrt{\gamma+\vartheta\beta}-\sqrt{\vartheta\gamma-\beta}} {\sqrt{\gamma+\vartheta\beta}+\sqrt{\vartheta\gamma-\beta}} \sqrt{\frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}}\right]<0 \] or \[ 0<\ln\left[\frac{\sqrt{\gamma+\vartheta\beta}-\sqrt{\vartheta\gamma-\beta}} {\sqrt{\gamma+\vartheta\beta}+\sqrt{\vartheta\gamma-\beta}} \sqrt{\frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}}\right]< \ln\left[\sqrt{\frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}}\right]. 
\] Hence by (c) and (b) in Lemma~\ref{lemma-tanh}, it is apparent that \begin{align*} |\nu|^2& < |\tanh(\phi-\psi)|^2\\ &\leq \left| \tanh\left\{ \frac{1}{2}\ln\left[\sqrt{\frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}}\right]+\frac{\mathrm{i}}{2}\arctan \frac{2\eta}{1-\xi^2-\eta^2} \right\} \right|^2\\ &=|\tanh(\phi)|^2 = \left| \delta_\lambda \right|^{2^{k_0}}, \end{align*} implying that $|\nu|<\left| \delta_\lambda \right|^{2^{k_0-1}}$. (ii) When \[ \frac{\sqrt{\gamma+\vartheta\beta}-\sqrt{\vartheta\gamma-\beta}}{\sqrt{\gamma+\vartheta\beta}+\sqrt{\vartheta\gamma-\beta}} \sqrt{\frac{(1+\xi)^2+\eta^2}{(1-\xi)^2+\eta^2}}<\sqrt{\frac{(1-\xi)^2+\eta^2}{(1+\xi)^2+\eta^2}} <1, \] we define $\widehat{\xi}+\mathrm{i}\widehat{\eta}=\delta_\lambda^{2^{k_0-2}}$ and without loss of generality assume that $\widehat{\xi}>0$, which satisfies $\widehat{\xi}>|\widehat{\eta}|$ for $0<\xi=\widehat{\xi}^2-\widehat{\eta}^2$. Similar to (i), we obtain \[ |\nu|<|\tanh(\phi-\psi)|= |\tanh(\widehat{\phi}-\widehat{\psi})\tanh(\widehat{\phi}+\widehat{\psi})| <|\tanh(\widehat{\phi}-\widehat{\psi})|, \] where $\widehat{\phi}=\arctanh \delta_\lambda^{2^{k_0-2}}$ and $\widehat{\psi}=\arctanh \kappa^{-2^{k_0-2}}$. Since $\xi=\widehat{\xi}^2-\widehat{\eta}^2>0$ and $|\widehat{\xi}|^2+|\widehat{\eta}|^2=\left| \delta_\lambda\right|^{2^{k_0-1}}$, we have $\widehat{\xi}^2>\frac{1}{2}|\delta_{\lambda}|^{2^{k_0-1}}$, leading to \begin{eqnarray*} |\nu|^2&<&|\tanh(\widehat{\phi}-\widehat{\psi})|^2\\ &=& \dfrac{ \dfrac{\kappa^{2^{k_0-2}}-1}{\kappa^{2^{k_0-2}}+1} \cdot \dfrac{1+|\delta_{\lambda}|^{2^{k_0-1}}+2\widehat{\xi}}{1-|\delta_{\lambda}|^{2^{k_0-1}}} + \dfrac{\kappa^{2^{k_0-2}}+1}{\kappa^{2^{k_0-2}}-1} \cdot \dfrac{1+|\delta_{\lambda}|^{2^{k_0-1}}-2\widehat{\xi}}{1-|\delta_{\lambda}|^{2^{k_0-1}}}-2 } { \dfrac{\kappa^{2^{k_0-2}}-1}{\kappa^{2^{k_0-2}}+1} \cdot \dfrac{1+|\delta_{\lambda}|^{2^{k_0-1}}+2\widehat{\xi}}{1-|\delta_{\lambda}|^{2^{k_0-1}}} + \dfrac{\kappa^{2^{k_0-2}}+1}{\kappa^{2^{k_0-2}}-1} \cdot \dfrac{1+|\delta_{\lambda}|^{2^{k_0-1}}-2\widehat{\xi}}{1-|\delta_{\lambda}|^{2^{k_0-1}}}+2}. \end{eqnarray*} Since $|\tanh(\widehat{\phi}-\widehat{\psi})|^2$ is monotonically nonincreasing with respect to $\widehat{\xi}$, taking $\widehat{\xi}=\frac{1}{\sqrt{2}}|\delta_{\lambda}|^{2^{k_0-2}}$ in the above formula yields \begin{align} |\nu|^2&<|\tanh(\widehat{\phi}-\widehat{\psi})|^2 <\dfrac{1+|\delta_{\lambda}|^{2^{k_0-1}}\kappa^{2^{k_0-1}}-\sqrt{2}\kappa^{2^{k_0-2}}|\delta_{\lambda}|^{2^{k_0-2}}} {\kappa^{2^{k_0-1}}+|\delta_{\lambda}|^{2^{k_0-1}}-\sqrt{2}\kappa^{2^{k_0-2}}|\delta_{\lambda}|^{2^{k_0-2}}} \notag\\ &=\kappa^{-2^{k_0-1}} \cdot \left[ \dfrac{ (2^{-1/2}\kappa^{2^{k_0-2}}|\delta_{\lambda}|^{2^{k_0-2}}-1)^2+ 2^{-1} \kappa^{2^{k_0-1}}|\delta_{\lambda}|^{2^{k_0-1}} } {(2^{-1/2}\kappa^{-2^{k_0-2}}|\delta_{\lambda}|^{2^{k_0-2}}-1)^2+ 2^{-1} \kappa^{-2^{k_0-1}}|\delta_{\lambda}|^{2^{k_0-1}} } \right] \label{kappa}\\ &=|\delta_{\lambda}|^{2^{k_0-1}} \cdot \left[ \dfrac{(\kappa^{2^{k_0-2}}- 2^{-1/2} |\delta_{\lambda}|^{-2^{k_0-2}})^2 + 2^{-1} |\delta_{\lambda}|^{-2^{k_0-1}}} {(\kappa^{2^{k_0-2}}- 2^{-1/2} |\delta_{\lambda}|^{2^{k_0-2}} )^2 + 2^{-1} |\delta_{\lambda}|^{2^{k_0-1}}} \right]. \label{delta} \end{align} Obviously for $\kappa\geq2$, we obtain $( \kappa^{-1} |\delta_{\lambda}| )^{2^{k_0-2}}<1/2$. 
Hence, by Lemma~\ref{lemma-distance}, when either \begin{description} \item (a) $2^{-1/2} \left(|\delta_{\lambda}|\kappa\right)^{2^{k_0-2}}\leq\frac{1}{2}$, i.e., $\left(|\delta_{\lambda}|\kappa\right)^{2^{k_0-2}} \leq 1/\sqrt{2}$; or \item (b) $\frac{1}{2}< 2^{-1/2} \left(|\delta_{\lambda}|\kappa\right)^{2^{k_0-2}} \leq 1- 2^{-1/2} |\delta_{\lambda}|^{2^{k_0-2}} \kappa^{-2^{k_0-2}}$, i.e., \[ \left(|\delta_{\lambda}|\kappa\right)^{2^{k_0-2}}\geq 1/\sqrt{2}, \qquad |\delta_{\lambda}|^{2^{k_0-2}}(\kappa^{2^{k_0-2}}+\kappa^{-2^{k_0-2}})\leq\sqrt{2}, \] \end{description} the quantity in the square brackets in \eqref{kappa} would be no greater than $1$. This indicates that $|\nu|^2 \leq \kappa^{-2^{k_0-1}}$ or $|\nu|<\kappa^{-2^{k_0-2}}$. When \[ \left(|\delta_{\lambda}|\kappa\right)^{2^{k_0-2}}\geq 1/\sqrt{2}, \qquad |\delta_{\lambda}|^{2^{k_0-2}} (\kappa^{2^{k_0-2}}+\kappa^{-2^{k_0-2}})>\sqrt{2}, \] which imply $|\delta_{\lambda}|^{2^{k_0-2}}>\sqrt{2}/(\kappa^{2^{k_0-2}}+ \kappa^{-2^{k_0-2}})$, we obtain \begin{align}\label{delta2} |\delta_{\lambda}|^{2^{k_0-2}}+|\delta_{\lambda}|^{-2^{k_0-2}}<\frac{\sqrt{2}\kappa^{2^{k_0-2}}}{\kappa^{2^{k_0-1}}+1}+ \frac{\kappa^{2^{k_0-1}}+1}{\sqrt{2}\kappa^{2^{k_0-2}}}<\sqrt{2}\kappa^{2^{k_0-2}}, \end{align} where the first ``$<$'' follows from the fact that the function $f(x)=x + x^{-1}$ is monotonically decreasing when $x<1$. Thus, the assumption $\kappa\geq2$ and \eqref{delta2} together affirm that $2^{-1/2}|\delta_{\lambda}|^{2^{k_0-2}} < 2^{-1} \kappa^{2^{k_0-2}}$ and $2^{-1/2} |\delta_{\lambda}|^{-2^{k_0-2}}\leq\kappa^{2^{k_0-2}}- 2^{-1/2} |\delta_{\lambda}|^{2^{k_0-2}}$. Again using Lemma~\ref{lemma-distance}, we know that the quantity in the square brackets in \eqref{delta} is no greater than $1$, suggesting that the value of the right-hand-side of \eqref{delta} will be no greater than $|\delta_{\lambda}|^{2^{k_0-1}}$, or equivalently $|\nu|<|\delta_{\lambda}|^{2^{k_0-2}}$. Consequently, the result holds for the case when $\xi>0$. The $\xi<0$ case can be proved similarly and we omit the details. \end{proof} For a real $\lambda\in\lambda(H)$, we can obtain a better result, with the power $2^{k_0-2}$ replaced by $2^{k_0}$ in the following corollary. \begin{corollary}\label{corollary-bound} Let $\kappa>1$ and $\vartheta\beta, \alpha>0$, then for $\lambda<0$ $(\lambda \in \lambda(H))$, we have $|\nu|\leq \max \left\{|\delta_\lambda|^{2^{k_0}}, \ \kappa^{-2^{k_0}}\right\}$. \end{corollary} \begin{proof} Let $\phi \equiv \arctanh \delta_\lambda^{2^{k_0}}$, then $\phi=\frac{1}{2} \ln\left[\frac{(\lambda-\alpha)^{2^{k_0}}+(\lambda+\alpha)^{2^{k_0}}}{(\lambda-\alpha)^{2^{k_0}}-(\lambda+\alpha)^{2^{k_0}}}\right]>0$ since $\lambda<0$, and $\psi \equiv \arctanh(-\kappa^{-2^{k_0}}) =-\frac{1}{2}\ln\left(\frac{\kappa^{2^{k_0}}+1}{\kappa^{2^{k_0}}-1}\right)<0$. From the definition of $\nu$, we have $\nu=\vartheta\tanh(\phi+\psi)$. Because $\tanh(\omega)=(\mathrm{e}^{\omega}-\mathrm{e}^{-\omega})(\mathrm{e}^{\omega}+\mathrm{e}^{-\omega})^{-1}$, $\tanh(-\omega)=-\tanh(\omega)$ and $\tanh(\omega)$ is nondecreasing with respect to $\omega\in\mathbb{R}$, then when $\phi\geq|\psi|$ we have $0\leq|\nu|=\tanh(\phi+\psi)\leq\tanh(\phi)$. Otherwise for $\phi<|\psi|$, we have $|\nu|=\tanh(-\psi-\phi)<\tanh(-\psi)=\kappa^{-2^{k_0}}$. Hence, the result holds. \end{proof} To sum up, we propose the DCT to avoid the potential interruption of the DA caused by $1\in\sigma(F_{k_0})$ for some $k_0$. 
We have conducted a detailed analysis on the eigenvalue $\nu$ of the new pair $(M_{k_0+1}, L_{k_0+1})$, produces a sharp bound of $|\nu|$ in Theorem~\ref{theorem-bound} relative to $|\delta_\lambda|^{2^{k_0-2}}$. Furthermore, Theorem~\ref{theorem-bound} and Corollary~\ref{corollary-bound} imply that a double-Cayley step reverses the convergence \emph{at worst by two steps in general and not at all when $\lambda$ is real}. This guarantees the convergence of the DA when the DCT is only occasionally called for. Similar comments apply when there exist some singular value $\sigma\in \sigma (F_{k_0})$ close to unity, meaning $I - \overline{F}_{k_0} F_{k_0}$ is ill-conditioned, and the double-Cayley remedy is applied. Note that the DCT is applicable when $\vartheta \notin \lambda(E_{k_0})$ with $\vartheta \in \{-1, 1\}$. In the rare occasions when the condition is violated, the three-recursion remedy proposed in subsection~3.4 will be employed. We construct an example to show the need for the DCT. \begin{Example}\rm Let $A=A^{\HH}, B=B^{\T} \in\mathbb{C}^{5\times 5}$ with \begin{align*} A=\begin{bsmallmatrix*}[r] 0.6607 & 0.1299 - 0.1365\mathrm{i} & 0.0632 - 0.0086\mathrm{i} & -0.0341 - 0.0517\mathrm{i} & -0.0628 - 0.0044\mathrm{i}\\ 0.1299 + 0.1365\mathrm{i} & 0.2441 & -0.1293 - 0.1035\mathrm{i} & -0.0363 + 0.1567\mathrm{i} & 0.1042 + 0.1260\mathrm{i}\\ 0.0632 + 0.0086\mathrm{i} & -0.1293 + 0.1035\mathrm{i} & 0.6772 & 0.0236 + 0.0491\mathrm{i} & 0.0542 + 0.0113\mathrm{i}\\ -0.0341 + 0.0517\mathrm{i} & -0.0363 - 0.1567\mathrm{i} & 0.0236 - 0.0491\mathrm{i} & 0.6804 & -0.0326 + 0.0427\mathrm{i}\\ -0.0628 + 0.0044\mathrm{i} & 0.1042 - 0.1260\mathrm{i} & 0.0542 - 0.0113\mathrm{i} & -0.0326 - 0.0427\mathrm{i} & 0.6787 \end{bsmallmatrix*}, \\ B=\begin{bsmallmatrix*}[r] -0.5704 + 0.2984\mathrm{i} & -0.4605 - 0.0324\mathrm{i} & 0.1693 - 0.3006\mathrm{i} & -0.1181 + 0.4597\mathrm{i} & 0.2109 + 0.0879\mathrm{i}\\ -0.4605 - 0.0324\mathrm{i} & 0.0573 - 0.1759\mathrm{i} & -0.1520 + 0.0419\mathrm{i} & -0.1526 - 0.0408\mathrm{i} & 0.1452 - 0.2288\mathrm{i}\\ 0.1693 - 0.3006\mathrm{i} & -0.1520 + 0.0419\mathrm{i} & 0.4908 - 0.7534\mathrm{i} & 0.1880 - 0.0406\mathrm{i} & -0.1733 - 0.1743\mathrm{i}\\ -0.1181 + 0.4597\mathrm{i} & -0.1526 - 0.0408\mathrm{i} & 0.1880 - 0.0406\mathrm{i} & -0.1783 - 0.6552\mathrm{i} & -0.5212 + 0.1871\mathrm{i}\\ 0.2109 + 0.0879\mathrm{i} & 0.1452 - 0.2288\mathrm{i} & -0.1733 - 0.1743\mathrm{i} & -0.5212 + 0.1871\mathrm{i} & -0.2548 - 0.7032\mathrm{i} \end{bsmallmatrix*}. 
\end{align*} By setting $\alpha=1$ and with the formulae in Theorem~\ref{theorem-SSF-1}, we have $E_0=E_{\alpha}$ and $F_0=F_{\alpha}$: \begin{align*} E_0=\begin{bsmallmatrix*}[r] 1.2482 & 0.4505 - 0.4735\mathrm{i} & 0.2193 - 0.0298\mathrm{i} & -0.1182 - 0.1794\mathrm{i} & -0.2179 - 0.0152\mathrm{i}\\ 0.4505 + 0.4735\mathrm{i} & -0.1966 & -0.4485 - 0.3591\mathrm{i} & -0.1259 + 0.5435\mathrm{i} & 0.3613 + 0.4371\mathrm{i}\\ 0.2193 + 0.0298\mathrm{i} & -0.4485 + 0.3591\mathrm{i} & 1.3055 & 0.0817 + 0.1703\mathrm{i} & 0.1880 + 0.0391\mathrm{i}\\ -0.1182 + 0.1794\mathrm{i} & -0.1259 - 0.5435\mathrm{i} & 0.0817 - 0.1703\mathrm{i} & 1.3166 & -0.1132 + 0.1482\mathrm{i}\\ -0.2179 + 0.0152\mathrm{i} & 0.3613 - 0.4371\mathrm{i} & 0.1880 - 0.0391\mathrm{i} & -0.1132 - 0.1482\mathrm{i} & 1.3105 \end{bsmallmatrix*}, \\ F_0=\begin{bsmallmatrix*}[r] -1.0682 - 0.5623\mathrm{i} & -0.8603 + 0.0680\mathrm{i} & 0.3168 + 0.5662\mathrm{i} & -0.2188 - 0.8623\mathrm{i} & 0.3967 - 0.1673\mathrm{i}\\ -0.8603 + 0.0680\mathrm{i} & 0.0883 + 0.3226\mathrm{i} & -0.2885 - 0.0846\mathrm{i} & -0.2898 + 0.0820\mathrm{i} & 0.2745 + 0.4354\mathrm{i}\\ 0.3168 + 0.5662\mathrm{i} & -0.2885 - 0.0846\mathrm{i} & 0.9207 + 1.4103\mathrm{i} & 0.3503 + 0.0768\mathrm{i} & -0.3258 + 0.3290\mathrm{i}\\ -0.2188 - 0.8623\mathrm{i} & -0.2898 + 0.0820\mathrm{i} & 0.3503 + 0.0768\mathrm{i} & -0.3329 + 1.2301\mathrm{i} & -0.9749 - 0.3510\mathrm{i}\\ 0.3967 - 0.1673\mathrm{i} & 0.2745 + 0.4354\mathrm{i} & -0.3258 + 0.3290\mathrm{i} & -0.9749 - 0.3510\mathrm{i} & -0.4766 + 1.3165\mathrm{i} \end{bsmallmatrix*}. \end{align*} Applying the DA to $E_0$ and $F_0$ for $5$ iterations, we obtain: \begin{align*} E_5=\begin{bsmallmatrix*}[r] 1.5012 & -0.0992 + 0.1043\mathrm{i} & -0.0483 + 0.0066\mathrm{i} & 0.0260 + 0.0395\mathrm{i} & 0.0480 + 0.0034\mathrm{i}\\ -0.0992 - 0.1043\mathrm{i} & 1.8195 & 0.0988 + 0.0791\mathrm{i} & 0.0277 - 0.1197\mathrm{i} & -0.0796 - 0.0963\mathrm{i}\\ -0.0483 - 0.0066\mathrm{i} & 0.0988 - 0.0791\mathrm{i} & 1.4886 & -0.0180 - 0.0375\mathrm{i} & -0.0414 - 0.0086\mathrm{i}\\ 0.0260 - 0.0395\mathrm{i} & 0.0277 + 0.1197\mathrm{i} & -0.0180 + 0.0375\mathrm{i} & 1.4861 & 0.0249 - 0.0326\mathrm{i}\\ 0.0480 - 0.0034\mathrm{i} & -0.0796 + 0.0963\mathrm{i} & -0.0414 + 0.0086\mathrm{i} & 0.0249 + 0.0326\mathrm{i} & 1.4875 \end{bsmallmatrix*}, \\ F_5=\begin{bsmallmatrix*}[r] -0.9956 - 0.6352\mathrm{i} & -0.7338 + 0.3015\mathrm{i} & 0.2834 + 0.6319\mathrm{i} & -0.1291 - 0.8499\mathrm{i} & 0.4238 - 0.2379\mathrm{i}\\ -0.7338 + 0.3015\mathrm{i} & -0.5359 + 0.0786\mathrm{i} & -0.3942 - 0.2753\mathrm{i} & -0.4018 + 0.2612\mathrm{i} & 0.3380 + 0.6318\mathrm{i}\\ 0.2834 + 0.6319\mathrm{i} & -0.3942 - 0.2753\mathrm{i} & 0.9025 + 1.2909\mathrm{i} & 0.2689 + 0.0968\mathrm{i} & -0.3410 + 0.3895\mathrm{i}\\ -0.1291 - 0.8499\mathrm{i} & -0.4018 + 0.2612\mathrm{i} & 0.2689 + 0.0968\mathrm{i} & -0.2733 + 1.2440\mathrm{i} & -0.8719 - 0.3476\mathrm{i}\\ 0.4238 - 0.2379\mathrm{i} & 0.3380 + 0.6318\mathrm{i} & -0.3410 + 0.3895\mathrm{i} & -0.8719 - 0.3476\mathrm{i} & -0.4230 + 1.2122\mathrm{i} \end{bsmallmatrix*}. \end{align*} The singular values~\cite{gv} of $F_5$ are $\{1.9376, \ 1.9376, \ 1.9376,\ 1.9376, \ 1 \}$. Hence, the next doubling step breaks down and the DCT is required to carry the DA forward. \end{Example} \subsection{Three-recursion remedy} This subsection is devoted to resolve the issue that the DCT fails. 
Especially, one may apply the three-recursion remedy from this section when two step reversions occur with some complex eigenvalues for $H$. Let $Z = Z^{\T} \in \mathbb{C}^{n \times n}$ (which may be chosen randomly) and $I_n + F_k^{\T} Z$ be nonsingular. Write $P_k=(I_n + F_k Z)^{-1}E_k$, $G_k=(I_n + F_k Z)^{-1}F_k^{\T}$ and $H_k = (F_k +Z) - E_k^{\T} Z (I_n + F_k Z)^{-1}E_k$. The following lemma shows how we transform the two recursions for $E_k$ and $F_k$ to three. \begin{lemma}\label{lm:three_recursion_initial} For the decomposition \eqref{eq:eigen_decomp} it holds that \begin{align}\label{eq:doubling_pure_three} & \begin{bmatrix} P_k & \mathbf 0 \\ \\ H_k & I_n \end{bmatrix} \begin{bmatrix} I & \mathbf 0 \\ \\ -Z& I_n \end{bmatrix} \begin{bmatrix} X_1&W_{1,\omega}& \overline{X}_2 & Z_{1,\omega}\\ \\ X_2&W_{2,\omega}& \overline{X}_1& Z_{2,\omega} \end{bmatrix} \nonumber \\ =& \begin{bmatrix} I_n& G_k \\ \\ \mathbf 0 & P_k^{\T} \end{bmatrix} \begin{bmatrix} I & \mathbf 0\\ \\ -Z& I_n \end{bmatrix} \begin{bmatrix} X_1&W_{1,\omega}& \overline{X}_2 & Z_{1,\omega}\\ \\ X_2&W_{2,\omega}& \overline{X}_1&Z_{2,\omega} \end{bmatrix} \widetilde S_{\alpha}^{2^k}, \end{align} where $X_1, X_2, W_{1,\omega}, W_{2,\omega}, Z_{1,\omega}, Z_{2,\omega}$ and $\widetilde {S}_{\alpha}^{2^k}$ are defined as in \eqref{eq:doubling_pure}. \end{lemma} \begin{proof} Define $\Phi = \begin{bmatrix} (I_n + F_k Z)^{-1} & \mathbf 0\\ \\ -E_k^{\T} Z(I_n + F_k Z)^{-1} & I_n \end{bmatrix}$, then we deduce that \begin{align*} \Phi \begin{bmatrix} E_k & \mathbf 0\\ \\ F_k & I_n \end{bmatrix} \begin{bmatrix} I_n & \mathbf 0\\ \\ Z& I_n \end{bmatrix} = \begin{bmatrix} P_k& \mathbf 0\\ \\ H_k & I_n \end{bmatrix}, & \qquad \Phi \begin{bmatrix} I_n & \overline F_k \\ \\ \mathbf 0 & \overline E_k \end{bmatrix} \begin{bmatrix} I_n & \mathbf 0\\ \\ Z& I_n \end{bmatrix} = \begin{bmatrix} I_n & G_k \\ \\ \mathbf 0 & P_k^{\T} \end{bmatrix}. \end{align*} With $\begin{bmatrix} I_n & \mathbf 0\\ \\ Z & I_n \end{bmatrix}^{-1} = \begin{bmatrix} I_n & \mathbf 0 \\ \\ -Z & I_n \end{bmatrix}$, the result follows from \eqref{eq:doubling_pure}. \end{proof} Since $F_k^{\T} = F_k$ and $Z^{\T} = Z$, we have $G_k^{\T} = G_k$ and $H_k^{\T} = H_k$. Applying the doubling algorithms \cite{lx06} for CARE and DARE, provided that $(I_n - G_{k+j-1} H_{k+j-1})^{-1}$ are well-defined for $j\geq 1$, we formulate the three recursions for $P_{k+j}, G_{k+j}$ and $H_{k+j}$ as below: \begin{equation}\label{eq:three_recursions} \begin{aligned} P_{k+j} &= P_{k+j-1} (I_n - G_{k+j-1} H_{k+j-1})^{-1} P_{k+j-1},\\ G_{k+j} &= G_{k+j-1} + P_{k+j-1} (I_n - G_{k+j-1} H_{k+j-1})^{-1} G_{k+j-1} P_{k+j-1}^{\T},\\ H_{k+j} &= H_{k+j-1} + P_{k+j-1}^{\T} H_{k+j-1} (I_n - G_{k+j-1} H_{k+j-1})^{-1} P_{k+j-1}, \end{aligned} \end{equation} where $G_{k+j}^{\T} = G_{k+j}$ and $H_{k+j}^{\T} = H_{k+j}$. 
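For illustration, one step of \eqref{eq:three_recursions} can be realized as in the following sketch (Python/NumPy; the function name is an illustrative assumption).
\begin{verbatim}
import numpy as np

def three_recursion_step(P, G, H):
    # One step of the three recursions; here H denotes the iterate H_{k+j-1} of the
    # recursion (not the Hamiltonian), and I - G H must be nonsingular.
    n = P.shape[0]
    W = np.linalg.inv(np.eye(n) - G @ H)   # (I - G_{k+j-1} H_{k+j-1})^{-1}
    return P @ W @ P, G + P @ W @ G @ P.T, H + P.T @ H @ W @ P
\end{verbatim}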
It is worthwhile to point out that when $I_n - G_{k+j} H_{k+j}$ is singular or ill-conditioned, we can always randomly choose some other $Z^{\T} = Z \in \mathbb{C}^{n\times n}$ and construct $\Psi\in \mathbb{C}^{2n\times 2n}$ such that \begin{align*} \Psi \begin{bmatrix} P_{k+j} &\mathbf 0 \\ \\ H_{k+j} & I_n \end{bmatrix}\begin{bmatrix} I_n & \mathbf 0 \\ \\ Z& I_n \end{bmatrix} = \begin{bmatrix} \widetilde P_{k+j} & \mathbf 0\\ \\ \widetilde H_{k+j} & I_n \end{bmatrix}, & \qquad \Psi \begin{bmatrix} I_n & G_{k+j} \\ \\ \mathbf 0 & P_{k+j}^{\T} \end{bmatrix}\begin{bmatrix} I_n & \mathbf 0 \\ \\ Z& I_n \end{bmatrix} = \begin{bmatrix} I_n & \widetilde G_{k+j} \\ \\ \mathbf 0 & \widetilde P_{k+j}^{\T} \end{bmatrix}. \end{align*} Provided that $I_n - G_{k+j}H_{k+j}$ are well-conditioned for all $j \geq0$, the following two theorems demonstrate the convergence of the three recursions specified in \eqref{eq:three_recursions}. \begin{theorem}\label{thm:convergence_three_recursions} Under the assumption in Theorem~\ref{convergence}, it holds that $\lim_{k\to \infty} P_{k} =0$ and $\lim_{k\to \infty} H_{k} = Z-X_2X_1^{-1}$, both converging quadratically. \end{theorem} \begin{proof} The results follow from the fact that \begin{align*} & \begin{bmatrix} P_k& \mathbf 0\\ \\ H_k & I_n \end{bmatrix} \begin{bmatrix} X_1 & \overline{X}_2 \\ \\ X_2 - Z X_1 & \overline{X}_1 - Z \overline X_2 \end{bmatrix} \\ =& \begin{bmatrix} I_n & G_k \\ \\ \mathbf 0 & P_k^{\T} \end{bmatrix}\begin{bmatrix} X_1 & \overline{X}_2 \\ \\ X_2 - Z X_1 & \overline{X}_1 - Z \overline X_2 \end{bmatrix} \begin{bmatrix} S_{\alpha}^{2^k} &\\ \\ & \overline{S}_{\alpha}^{-2^k} \end{bmatrix} \end{align*} and $\lim_{k\to \infty} S_{\alpha}^{2^k}=0$. We omit the details, as in \cite[Corollary~3.2]{lx06}. \end{proof} \begin{theorem}\label{thm:convergence_three_recursions_pure} Under the assumption in Theorem~\ref{thm:conv_pure}, it holds that $\lim_{k\to \infty} P_{k} =0$ and $\lim_{k\to \infty} H_{k} = Z-X_2X_1^{-1}$, both converging linearly. \end{theorem} \begin{proof} By \eqref{eq:doubling_pure_three} and similar to the proof of Theorem~\ref{thm:conv_pure}, we obtain the result. \end{proof} \section{Numerical Results} We illustrate the performance of the DA with some test examples, three of which are from discretized Bethe-Salpeter equations and one of which is generated by the \verb|randn| command in MATLAB. We also apply \verb|eig| in MATLAB (as in \verb|eig|$(H)$ and \verb|eig|$(\Gamma H, \Gamma)$) and Algorithm~1 in~\cite{yang:16} to the test examples for comparison. Computing \verb|eig|$(\Gamma H, \Gamma)$ is based on the equivalence of $Hx = \lambda x$ and $\begin{bmatrix}A& B\\ \overline{B}&\overline{A}\end{bmatrix}x = \lambda \begin{bmatrix}I_n&\mathbf 0\\ \mathbf 0& -I_n\end{bmatrix}x$. No DCT or three-recursion remedy was required in these tests. All algorithms are implemented in MATLAB 2012b on a 64-bit PC with an Intel Core~i7 processor at 3.4 GHz and 8 GB of RAM. \begin{Example}\rm \label{ex1} We consider three examples from the discretized Bethe-Salpeter equations for naphthalene (\ce{C10H8}), gallium arsenide (\ce{GaAs}) and boron nitride (\ce{BN}). The dimensions of the corresponding $H$ associated with \ce{C10H8}, \ce{GaAs} and \ce{BN} are respectively $64$, $256$ and $4608$. All eigenpairs of $H$ are computed. Using \verb|eig|$(H)$ as the baseline for comparison, we present the relative accuracy of the computed eigenvalues and the execution time (eTime) of the other three algorithms, all averaged over 50 trials.
For the relative accuracy, we compute $\mathrm{prec} =\log_{10} [\max_{j}| (\lambda_j - \widehat{\lambda}_j)/\lambda_j |]$ where $\lambda_j$ and $\widehat{\lambda}_j$ are the computed eigenvalues by the \verb|eig|$(H)$ command and one of the methods, respectively. The residuals \[ \frac{\|H-[X,\, \Pi\overline{X}]\diag(S, \overline{S}) [X,\,\Pi\overline{X}]^{-1}\|_F}{\|H\|_F}, \ \ \ \frac{\|Y^{\HH}HX - \Lambda\|_F}{\|H\|_F}, \] respectively for the DA, \texttt{eig}$(\Gamma H, \Gamma)$ and \cite[Algorithm~1]{yang:16} are displayed, with $Y$ and $X$ being respectively the left and right eigenvector matrices and $\Lambda$ the diagonal matrix containing the eigenvalues of $H$ (please refer to~\cite{yang:16} for details). Also, the numbers of iterations required for doubling averaged over 50 trails are presented. It is worthwhile to point out that for the DA all $\alpha$'s in the 50 trails are generated by the function \texttt{randn}. The results are tabulated in Table~\ref{table}. \begin{table}[H] \footnotesize \centering \begin{tabular}{c|c|c|c} \hline \multicolumn{4}{c}{\ce{C10H8}}\\ \hline &DA & algorithm 1 in~\cite{yang:16} & \verb|eig|$(\Gamma H, \Gamma)$\\ \hline prec & $-13.97$ & $-13.92$ & $-13.95$ \\ residual &$8.14\times10^{-16}$& $2.60\times10^{-15}$ & $1.71\times10^{-15}$ \\ eTime & $7.958\times10^{-1}$ & $5.764\times10^{-1}$ & $3.792\times10^{-1}$ \\ iteration & $6.84$ & $-$ &$-$ \\ \hline \multicolumn{4}{c}{\ce{GaAs}}\\ \hline &DA & algorithm 1 in~\cite{yang:16} & \verb|eig|$(\Gamma H, \Gamma)$\\ \hline prec & $-13.74$ & $-13.54$ & $-13.75$\\ residual & $6.86\times10^{-16}$ & $6.33\times 10^{-15}$ & $5.07\times10^{-15}$ \\ eTime & $5.881\times10^{-1}$ & $3.587\times10^{-1}$ & $3.533\times10^{-1}$\\ iteration & $8.46$ & $-$ & $-$\\ \hline \multicolumn{4}{c}{\ce{BN}}\\ \hline &DA & algorithm 1 in~\cite{yang:16} & \verb|eig|$(\Gamma H, \Gamma)$\\ \hline prec & $-13.11$ & $-13.12$ & $-13.04$\\ residual & $7.50\times 10^{-16}$ & $2.54\times 10^{-14}$ & $1.63\times 10^{-14}$\\ eTime & $6.610\times10^{-1}$ & $4.754\times10^{-1}$ & $4.843\times10^{-1}$\\ iteration & $7.44$ & $-$ & $-$\\ \hline \end{tabular} \caption{Numerical results for Example~\ref{ex1}} \label{table} \end{table} Table~\ref{table} demonstrates that all three methods produce comparable results in terms of the relative accuracy. The DA spends slightly more time than the other methods but produces more accurate solutions with smaller residuals. \end{Example} \begin{Example}\rm \label{ex2} The test example, randomly generated by the command \verb|randn| in MATLAB, is designed to illustrate the structure-preserving property of the DA, a distinct feature of our method. 
The defining matrices are \begin{align*} H=\begin{bmatrix} \ \ A & \ \ B \\ \\ -\overline{B} & -\overline{A} \end{bmatrix}, \qquad A = \begin{bmatrix} A_1 & & \\ & A_2 & \\ & & A_3 \end{bmatrix}, & \qquad B = \begin{bmatrix} B_1 & & \\ & B_2 & \\ & & B_3 \end{bmatrix} \end{align*} with \begin{align*} A_1 & = \begin{bmatrix*}[l] 2.6361 & \hphantom{-} 1.0378\times10^{1} & \hphantom{-} 5.0751\times10^{-2} \\ 1.0378\times10^{1} & \hphantom{-} 5.2431\times10^{-2} & -4.6067\times10^{-1} \\ 5.0751\times10^{-2} & -4.6067\times10^{-1} & -1.6892\times10^{-2} \end{bmatrix*}, \\ A_2 & =\begin{bmatrix*}[l] -4.0549\times10^{-1} & -3.7710+2.7569 \mathrm{i} \\ -3.7710-2.7569 \mathrm{i} & -4.0549\times10^{-1} \end{bmatrix*}, \\ A_3 &=\begin{bmatrix*}[l] 3.6378\times10^{-1} & 2.7293\times10^{-1} + 3.5908 \mathrm{i} \\ 2.7293\times10^{-1}-3.5908 \mathrm{i} & 3.6378\times10^{-1} \end{bmatrix*}, \\ B_1 &=\begin{bmatrix*}[l] -2.6361 &-1.0375\times10^{1} &-5.1181\times10^{-2} \\ -1.0375\times10^{1} &-5.3457\times10^{-2} &\hphantom{-} 5.0988\times10^{-1} \\ -5.1181\times10^{-2} &\hphantom{-} 5.0988\times10^{-1} &\hphantom{-} 4.2022\times10^{-3} \end{bmatrix*}, \\ B_2 &=\begin{bmatrix} 1.2343\times10^{-1}-3.8788\mathrm{i}\times10^{-1} & 3.7566-2.7464\mathrm{i} \\ 3.7566-2.7464\mathrm{i} & 4.0704\times10^{-1}+6.0156\mathrm{i}\times10^{-5} \end{bmatrix}, \\ B_3 &=\begin{bmatrix*}[l] \hphantom{-} 3.6148\times10^{-1}-5.5211 \mathrm{i} \times10^{-2} & -2.7152\times10^{-1}-3.5722\mathrm{i} \\ -2.7152\times10^{-1} -3.5722\mathrm{i} & -3.6567\times10^{-1}+5.9265\mathrm{i}\times10^{-5} \end{bmatrix*}. \end{align*} The spectrum of $H$ is \begin{align*} \lambda(H) &= \begin{array}[t]{c@{\hspace{0pt}}ll} \{& \pm 4.1204\times10^{-3}, \quad \pm 4.1204\times10^{-3}, &\pm 4.1204\times10^{-3}, \\ &\pm 4.0549\times10^{-1}\pm 5.9927\mathrm{i}\times10^{-5}, & \pm 3.6378 \times10^{-1} \pm 5.8959\mathrm{i} \times10^{-5} \}. \end{array} \end{align*} Note that the algebraic and the geometric multiplicities of \mbox{$\pm 4.1204\times10^{-3}$} are $3$ and $1$, respectively. 
The DA, \verb|eig|$(H)$ and \verb|eig|$(\Gamma H, \Gamma)$ produce the eigenvalues $\lambda_{D}$, $\lambda_{E}$ and $\lambda_{Ge}$ respectively: \begin{align*} \lambda_{D} &= \begin{array}[t]{c@{\hspace{0pt}}ll} \{& \pm 4.1092\times10^{-3}, \quad \pm 4.1092\times10^{-3}, &\pm 4.1092\times10^{-3}, \\ &\pm 4.0549\times10^{-1}\pm 5.9927\mathrm{i}\times10^{-5}, & \pm 3.6378 \times10^{-1} \pm 5.8959\mathrm{i} \times10^{-5} \}, \end{array} \\ \lambda_{E} &= \begin{array}[t]{c@{\hspace{0pt}}rr@{\hspace{0pt}}l} \{& 4.1137\times10^{-3} - 1.1615\mathrm{i}\times10^{-5}, & 4.1136\times10^{-3}+ 1.1614\mathrm{i}\times10^{-5}&, \\ & 4.1338\times10^{-3} + 1.2681\mathrm{i}\times10^{-9}, & \\ & -4.1136\times10^{-3} - 1.1649\mathrm{i}\times10^{-5}, & -4.1136\times10^{-3} + 1.1650\mathrm{i}\times10^{-5}&, \\ & -4.1338\times10^{-3} - 1.3011\mathrm{i}\times10^{-9},&& \\ & \pm 4.0549 \times10^{-1} \pm 5.9927\mathrm{i}\times10^{-5}, & \pm 3.6378\times10^{-1} \pm 5.8959\mathrm{i}\times10^{-5}&\}, \end{array} \\ \lambda_{Ge} &= \begin{array}[t]{c@{\hspace{0pt}}rr@{\hspace{0pt}}l} \{& 4.1272\times10^{-3}-1.1919\mathrm{i} \times10^{-5}, & 4.1272\times10^{-3}-1.1919\mathrm{i} \times10^{-5}&,\\ & 4.1272\times10^{-3}-1.1919\mathrm{i} \times10^{-5}, &&\\ & -4.1272\times10^{-3} + 1.1851\mathrm{i} \times10^{-5}, & -4.1272 \times10^{-3}+ 1.1851\mathrm{i} \times10^{-5}&, \\ & -4.1272 \times10^{-3} + 1.1851\mathrm{i} \times10^{-5},&& \\ & \pm 4.0549 \times10^{-1} \pm 5.9927\mathrm{i} \times10^{-5}, & \pm3.6378 \times10^{-1} \pm 5.8959\mathrm{i} \times10^{-5}&\}. \end{array} \end{align*} Although all three methods produce computed eigenvalues of low relative accuracy, with $prec_D= -2.5680$, $prec_E=-2.4862$ and $prec_{Ge}=-2.4764$, the DA preserves the distinct eigen-structure of $H$. All eigenvalues from DA appear in quadruples $\{\lambda, \overline{\lambda}, -\lambda, -\overline{\lambda}\}\subseteq\lambda(H)$, unless when $\Im(\lambda)=0$ then in pairs $\{\lambda, -\lambda\}\subseteq\lambda(H)$. The low accuracy (in the order of $\pm 4.1204\times10^{-3}$) of the computed eigenvalues from the methods can be attributed to the defective eigenvalues. Note that Algorithm~1 in~\cite{yang:16} failed because the required assumption $\Gamma H>0$ is not satisfied. \end{Example} \section{Conclusions} In this paper, we propose a doubling algorithm for the discretized Bethe-Salpeter eigenvalue problem, where the Hamiltonian-like matrix $H$ is firstly transformed to a symplectic pair with special structure then $E_k = E_k^{\HH}$ and $F_k=F_k^{\T}$ are computed iteratively. Theorems are proved on the quadratic convergence of the algorithm if no purely imaginary eigenvalues exist (and linear convergence otherwise). The simple double-Cayley transform is designed to deal with any potential breakdown when $1$ is in or close to $\sigma(F_k)$ for some $k$. We also prove that at most two steps of retrogression occur (for complex eigenvalues of $H$, but none for real ones). In addition, a three-recursion remedy is put forward when the double-Cayley transform fails. Numerical examples have been presented to illustrate the efficiency and the distinct structure-preserving nature of the doubling method. The optimal choice of $\alpha$ and the removal of the invertibility assumption of $X_1$ (or $[ X_1, \Psi_{11} ]$ if purely imaginary eigenvalues exist) will be left for future research. \section{Useful Lemmas} The following lemmas are required in Section~\ref{doublesec}. 
\begin{lemma}\label{lemma-tanh} Given $\omega, \zeta \in \mathbb{R}$, it holds that \begin{enumerate} \item [{\em (a)}] $|\tanh(-\omega+\mathrm{i}\zeta)|^2=|\tanh(\omega+\mathrm{i}\zeta)|^2 =[\mathrm{e}^{2\omega}+\mathrm{e}^{-2\omega}-2\cos(2\zeta)][\mathrm{e}^{2\omega}+\mathrm{e}^{-2\omega}+2\cos(2\zeta)]^{-1}$; \item [{\em (b)}] $|\tanh(\omega+\mathrm{i}\zeta)|^2<1$ when $\cos(2\zeta)>0$; and \item [{\em (c)}] for $\cos(2\zeta)>0$, $|\tanh(\omega+\mathrm{i}\zeta)|^2$ is monotonically nondecreasing with respect to $\omega$ when $\omega\geq 0$, and monotonically nonincreasing otherwise. \end{enumerate} \end{lemma} \begin{proof} Simple computations lead to results (a) and (b); we omit the details here. For (c), we have $\partial |\tanh(\omega+\mathrm{i}\zeta)|^2/\partial\omega= [8(\mathrm{e}^{2\omega}-\mathrm{e}^{-2\omega})\cos(2\zeta)] [(\mathrm{e}^{2\omega}+\mathrm{e}^{-2\omega}+2\cos(2\zeta))^2]^{-1}$. Since $\cos(2\zeta)>0$, the result follows. \end{proof} \begin{lemma}\label{lemma-distance} Define $f(\xi)=(\xi-\tau)^2 + \xi^2$; then for $0\leq\xi\leq\frac{\tau}{2}$, we have \begin{enumerate} \item [{\em (a)}] $f(\xi)=f(\tau-\xi)$; \item [{\em (b)}] $f(\xi)\geq f(\eta) \geq \frac{\tau}{\sqrt{2}}$ for all $\eta$ with $\frac{\tau}{2}\geq \eta \geq \xi$; and \item [{\em (c)}] $f(\xi)\geq f(\eta) \geq \frac{\tau}{\sqrt{2}}$ for all $\eta$ with $\tau-\xi \geq \eta \geq \frac{\tau}{2}$. \end{enumerate} \end{lemma} \begin{proof} From the fact that $(\xi, \xi)$ and $(\tau-\xi, \tau-\xi)$ are two symmetrical points with respect to the line $g(\omega)=-\omega +\tau$, the result follows with details omitted. \end{proof} \end{document}
\begin{document} \title{A second-order generalization\\ of TC and DC kernels} \author{Mattia~Zorzi \thanks{M. Zorzi is with the Department of Information Engineering, University of Padova, Padova, Italy; e-mail: {\tt\small [email protected]} (M. Zorzi).}} \maketitle \begin{abstract}Kernel-based methods have been successfully introduced in system identification to estimate the impulse response of a linear system. Adopting the Bayesian viewpoint, the impulse response is modeled as a zero mean Gaussian process whose covariance function (kernel) is estimated from the data. The most popular kernels used in system identification are the tuned-correlated (TC), the diagonal-correlated (DC) and the stable spline (SS) kernels. TC and DC admit a closed form factorization of the inverse. The SS kernel induces more smoothness than TC and DC on the estimated impulse response; however, the aforementioned property does not hold in this case. In this paper we propose a second-order extension of the TC and DC kernels, which induces more smoothness than TC and DC, respectively, on the impulse response, and a generalized-correlated kernel which incorporates the TC and DC kernels and their second-order extensions. Moreover, these generalizations admit a closed form factorization of the inverse and thus allow the design of efficient algorithms for the search of the optimal kernel hyperparameters. We also show how to use this idea to develop higher order extensions. Interestingly, these new kernels belong to the family of the so called exponentially convex local stationary kernels: such a property makes it possible to immediately analyze the frequency properties these kernels induce on the estimated impulse response. \end{abstract} \section{Introduction}\label{sec_intro} Linear system identification problems are traditionally addressed by using Prediction Error Methods (PEM), see \cite{LJUNG_SYS_ID_1999,SODERSTROM_STOICA_1988}. Here, the best model is chosen within a fixed parametric model class (e.g. ARMAX, OE, Box-Jenkins). This approach, however, has two issues: first, the parametrization of the predictor is nonlinear, which implies that the minimization of the squared prediction error leads to a non-convex optimization problem; second, we have to face a model selection problem (i.e. order selection) which is usually performed by AIC and BIC criteria \cite{AKAIKE_1974,SCHWARZ_1978}. Regularized kernel-based methods have been recently proposed in system identification in order to overcome the aforementioned limitations, see \cite{PILLONETTO_DENICOLAO2010,EST_TF_REVISITED_2012,KERNEL_METHODS_2014}. Here, we search for the candidate model, described via the predictor impulse response, within an infinite dimensional nonparametric model class, with the help of a penalty term. Adopting the Bayesian viewpoint, this is a Gaussian process regression problem \cite{RASMUSSEN_WILLIAMNS_2006}: the impulse response is modeled as a Gaussian process with zero mean and with a suitable covariance function, also called kernel \cite{wahba1990spline}. The latter encodes the a priori knowledge about the predictor impulse response. For instance, the impulse response should be Bounded Input Bounded Output (BIBO) stable and exhibit a certain degree of smoothness. The most popular kernels are the tuned-correlated (TC), the diagonal-correlated (DC), and the stable-spline (SS) kernels, see \cite{EST_TF_REVISITED_2012,PILLONETTO_DENICOLAO2010}. All these kernels encode the BIBO stability property.
Regarding smoothness, SS is the one inducing the most smoothness on the impulse response. It is worth noting that many other extensions can be obtained, see for instance \cite{CHEN2018109,dinuzzo2015kernels,ZORZI2018125,zorzi2020new}. All these kernels depend on a few hyperparameters that are learnt from the data by minimizing the so called negative log-marginal likelihood. This task is computationally expensive, especially when we want to estimate high dimensional models, e.g. in the case of dynamic networks, see \cite{BSL,CHIUSO_PILLONETTO_SPARSE_2012,BSL_CDC}. To reduce the computational complexity different strategies have been proposed, see \cite{carli2012efficient,chen2018regularized, chen2021semiseparable,chen2013implementation}. In particular, if the kernel matrix admits a closed form expression for the Cholesky factor of its inverse matrix (and thus also for its determinant), then the evaluation of the marginal likelihood can be done efficiently \cite{chen2013implementation}. While it is possible to derive these closed form expressions for TC and DC, see \cite{CARLI2014,7495008}, this is not possible for SS. It is worth noting that an efficient algorithm for the SS kernel has been proposed in \cite{chen2021semiseparable}. The latter, however, can be used only when the input of the system has a prescribed structure; e.g. it cannot be used when the data are collected from a system operating in a feedback configuration. The aim of this paper is to introduce a second-order generalization of the TC and DC kernels, exploiting the filter-based approach proposed in \cite{marconato2017filter}. These extensions induce more smoothness than TC and DC, respectively. We also introduce a generalized-correlated kernel which incorporates the DC, TC kernels and their second order extensions. Moreover, we show that they admit a closed form expression for the Cholesky factor of their inverse matrices. Thus, these kernels allow the design of an efficient algorithm for the search of the optimal hyperparameters. It is worth noting that SS is the second-order extension of the TC kernel derived in continuous time. In contrast, the extension that we propose here is derived in discrete time. Numerical experiments show that the new second-order TC kernel represents an attractive alternative to SS: it leads to an estimation algorithm which outperforms the one using SS in terms of computational complexity (even when the computation of the Cholesky factorization of the SS kernel exploits the fact that SS is extended 2-semiseparable), while the second-order TC and SS kernels are similar in terms of estimation performance. This idea can also be used to derive higher order extensions and to generalize the high frequency kernel proposed in \cite{6160606}. Interestingly, all these new kernels are exponentially convex local stationary (ECLS), \cite{CHEN2018109,ZORZI2018125}. Such a property makes it easy to understand the frequency properties of their stationary parts. The outline of the paper is as follows. In Section \ref{sec_PEM} we briefly review the kernel-based PEM method as well as the TC, DC and SS kernels. Section \ref{sec_TC2} introduces the second-order extension of the TC kernel, while Section \ref{sec_DC2} introduces the one of the DC kernel. In Section \ref{sec_GC} we introduce the generalized-correlated kernel. In Section \ref{sec_ME} we derive the closed form expressions for these kernels. In Section \ref{sec_high} we extend this idea to higher order generalizations.
In Section \ref{sec_freq} we show that these kernels are ECLS and we analyze their stationary parts in the frequency domain. Finally, we draw the conclusions in Section \ref{sec_conc}. {\em Notation}. $ \mathcal{S}_T$, with $T\leq \infty$, denotes the cone of positive definite symmetric matrices of dimension $T\times T$. Infinite dimensional matrices, i.e. matrices having an infinite number of columns and/or rows, are denoted using the calligraphic font, e.g. $ \mathcal{K}$, while finite dimensional ones are denoted using the normal font, e.g. $K$. Given $ \mathcal{F}\in \mathbb{R}^{p\times \infty}$ and $ \mathcal{G}\in \mathbb{R}^{\infty\times m}$, the product $ \mathcal{F} \mathcal{G}$ is understood as a $p\times m$ matrix whose entries are limits of infinite sequences \cite{INFINITE_MATRICES}. Given $K\in \mathcal{S}_T$, $[K]_{t,s}$ denotes the entry of $K$ in position $(t,s)$, while $[K]_{:,t}$ and $[K]_{t,:}$ denote the $t$-th column and row, respectively, of $K$. Given $K\in \mathcal{S}_T$, $\|v\|_{K^{-1}}$ denotes the weighted Euclidean norm of $v$ with weight $K^{-1}$. Given $v\in \mathbb{R}^T$, $\tpl v$ denotes the lower triangular $T\times T$ Toeplitz matrix whose first column is given by $v$, while $ \operatorname{diag}(v)$ denotes the diagonal matrix whose main diagonal is $v$. \section{Kernel-based PEM method} \label{sec_PEM} Consider the model \al{\label{mod}y(t)=\sum_{k=1}^\infty g(k) u(t-k)+e(t), \quad t=1\ldots N} where $y(t)$, $u(t)$, $g(t)$ and $e(t)$ denote the output, the input, the impulse response of the model and a zero-mean white Gaussian noise with variance $\sigma^2$, respectively. We can rewrite model (\ref{mod}) as \al{y= \mathcal{A} g+e\nonumber} where $y=[\,y(1)\ldots y(N)\,]^\top\in \mathbb{R}^N$, $e$ is defined likewise, $ \mathcal{A}\in \mathbb{R}^{N\times \infty}$ is the regression matrix whose entries are defined by $u(t)$, $t=1\ldots N$, and $g=[\,g(1) \; g(2)\ldots \,]^\top\in \mathbb{R}^\infty$. We want to estimate the impulse response $g$ given the measurements $\{y(t),u(t)\}_{t=1}^N$. Such a problem is ill-posed because we have a finite number of measurements while $g$ contains infinitely many parameters. The problem can be made well-posed by assuming that $g\sim \mathcal{N}(0,\lambda \mathcal{K}(\eta))$, where $ \mathcal{K}(\eta)\in \mathcal{S}_\infty$ is the kernel function and $\eta$ is the vector of hyperparameters characterizing the kernel; in this way, the minimum variance estimator of $g$ is: \al{\label{def_RELS}\hat g=\underset{g\in \mathbb{R}^\infty}{\mathrm{argmin}} \|y- \mathcal{A} g \|^2+\frac{\sigma^2}{\lambda} \| g\|^2_{ \mathcal{K}(\eta)^{-1}}} where $\lambda>0$ denotes the regularization parameter. It is worth noting that the above problem admits a closed form solution. Moreover, $ \mathcal{K}(\eta)$ encodes the a priori information that we have on the impulse response. The aforementioned problem can be formulated as a finite dimensional problem. Indeed, $g$ can be truncated, obtaining a finite impulse response of length $T$; the corresponding kernel matrix $K(\eta)\in \mathcal{S}_T$ is defined as $[K(\eta)]_{t,s}=[ \mathcal{K}(\eta)]_{t,s}$ for $t,s=1\ldots T$ and the regression matrix $A\in \mathbb{R}^{N\times T}$ is given by the first $T$ columns of $ \mathcal{A}$. Such a truncation, with $T$ sufficiently large, does not introduce a bias, because $g$ decays to zero.
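For illustration, the following minimal sketch (in Python with NumPy; the function and variable names are illustrative and not part of any toolbox) forms the regression matrix and the finite dimensional version of the estimator in (\ref{def_RELS}) for a given kernel matrix $K$, noise variance $\sigma^2$ and regularization parameter $\lambda$.
\begin{verbatim}
# Minimal sketch (illustrative names): finite-dimensional regularized estimator.
import numpy as np

def regression_matrix(u, N, T):
    # [A]_{t,k} = u(t-k), with u(j) = 0 for j <= 0 (zero initial conditions)
    A = np.zeros((N, T))
    for t in range(1, N + 1):
        for k in range(1, T + 1):
            if t - k >= 1:
                A[t - 1, k - 1] = u[t - k - 1]
    return A

def regularized_estimate(A, y, K, lam, sigma2):
    # ghat = argmin ||y - A g||^2 + (sigma2/lam) g' K^{-1} g
    #      = (A'A + (sigma2/lam) K^{-1})^{-1} A'y
    M = A.T @ A + (sigma2 / lam) * np.linalg.inv(K)
    return np.linalg.solve(M, A.T @ y)
\end{verbatim}
In practice the inverse of $K$ is never formed explicitly; the closed form factorizations discussed in the sequel are used instead.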
The so called hyperparameters $\lambda$ and $\eta$ are estimated by minimizing numerically the negative log-marginal likelihood \al{\label{marginal}\ell(y; \lambda, \eta):=\log &\det(\lambda A K(\eta)A^\top+\sigma^2 I)\nonumber\\ & +y^\top( \lambda A K(\eta)A^\top+\sigma^2 I)^{-1}y.} In what follows, we will drop the dependence of the kernels on $\eta$ in order to ease the notation. \subsection{Diagonal and correlated kernels: an overview} We briefly review the most popular kernels used in system identification, see \cite{EST_TF_REVISITED_2012} for a more complete overview. The simplest kernel is diagonal and encodes the a priori information that $g$ should decay to zero exponentially: \al{\label{kernelDI} \mathcal{K}_{DI} = \operatorname{diag}(\beta,\beta^2,\ldots, \beta^t,\ldots)} where $\eta=\beta$ and $0<\beta<1$. Indeed, the penalty term $\|g\|^2_{ \mathcal{K}_{DI}^{-1}}$ is the squared norm of the weighted impulse response \al{h=[\, h_1 \; h_2 \ldots h_t\ldots \,]^\top, \quad h_t=\beta^{-t/2}g_t\nonumber} which amplifies the coefficients $g_t$ exponentially as $t$ increases. The tuned-correlated (TC, also called first-order stable spline) kernel also embeds the a priori information that $g$ is smooth: \al{\label{kernelTC} [ \mathcal{K}_{TC}]_{t,s} = \beta^{\max(t,s)}} where $\eta=\beta$ and $0<\beta<1$. The smoothness property can be justified as follows. It is well known that \al{ \mathcal{K}_{TC} =(1-\beta) ( \mathcal{F} \mathcal{D} \mathcal{F}^\top)^{-1}\nonumber} where \al{ \mathcal{F}&=\tpl{1,-1,0,\ldots }\nonumber\\ \mathcal{D}&= \operatorname{diag}(\beta^{-1},\beta^{-2},\ldots, \beta^{-t},\ldots);\nonumber} then, $ \mathcal{F}^\top$ is the prefiltering operator, see \cite{marconato2017filter}, performing the first order difference of $h$, and thus the penalty term in (\ref{def_RELS}) penalizes impulse responses for which the norm of the first order difference of the corresponding $h$ is large: \al{\|g\|_{ \mathcal{K}_{TC}^{-1}}^2&=(1-\beta)^{-1}\| \mathcal{F}^\top h\|^2\nonumber\\&=(1-\beta)^{-1}\sum_{t=1}^\infty (h_t-h_{t+1})^2.\nonumber} The diagonal-correlated (DC) kernel is defined as \al{\label{kernelDC} [ \mathcal{K}_{DC}]_{t,s} = \alpha^{|t-s|}\beta^{\max (t,s)}} where $0<\beta<1$, $-\beta^{-1/2}<\alpha<\beta^{-1/2}$ and $\eta=[\, \alpha \;\beta \,]^\top$. It is worth noting that this definition is not the standard one, which is $[ \mathcal{K}_{DC}]_{t,s} = \rho^{|t-s|}\beta^{\frac{t+s}{2}}$ with $\rho=\alpha \beta^{1/2}$; we adopt the former because it highlights the following limits: \al{\label{condA} \underset{ \alpha \rightarrow 0}{\lim} \mathcal{K}_{DC}= \mathcal{K}_{DI}, \quad \underset{ \alpha \rightarrow 1}{\lim} \mathcal{K}_{DC}= \mathcal{K}_{TC} } that is, the DC kernel connects the DI and TC kernels. Indeed, it is not difficult to see that \al{ \mathcal{K}_{DC} = (1-\alpha\beta)( \mathcal{F}_\alpha \mathcal{D} \mathcal{F}_\alpha^\top)^{-1}\nonumber} with \al{\label{def_Fa} \mathcal{F}_\alpha=\tpl{1,-\alpha,0,\ldots }.} In plain words, $\alpha$ tunes the behavior of the prefiltering operator: $ \mathcal{F}_\alpha^\top$ behaves as the identity operator for $\alpha$ close to zero, while it behaves as the first order difference operator for $\alpha$ close to one. As a consequence, the DC kernel allows one to tune the degree of smoothness of $g$. All these kernels admit a closed form factorization of the inverse and of the determinant, which is an appealing feature for the numerical minimization of (\ref{marginal}).
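As a concrete check of the banded-inverse structure recalled next, the following sketch (Python/NumPy, with illustrative names) builds the truncated DI, TC and DC kernel matrices entrywise from (\ref{kernelDI}), (\ref{kernelTC}) and (\ref{kernelDC}), and verifies numerically that the inverses of the TC and DC matrices are tridiagonal.
\begin{verbatim}
# Illustrative check: truncated DI, TC, DC kernels built entrywise;
# the inverses of K_TC and K_DC are (numerically) tridiagonal.
import numpy as np

def K_DI(T, beta):
    t = np.arange(1, T + 1)
    return np.diag(beta ** t)

def K_TC(T, beta):
    t = np.arange(1, T + 1)
    return beta ** np.maximum.outer(t, t)

def K_DC(T, alpha, beta):
    t = np.arange(1, T + 1)
    return alpha ** np.abs(np.subtract.outer(t, t)) * beta ** np.maximum.outer(t, t)

T, alpha, beta = 30, 0.6, 0.8
off_band = np.abs(np.subtract.outer(np.arange(T), np.arange(T))) > 1
for K in (K_TC(T, beta), K_DC(T, alpha, beta)):
    Kinv = np.linalg.inv(K)
    # relative size of the entries outside the three central diagonals
    print(np.max(np.abs(Kinv[off_band])) / np.max(np.abs(Kinv)))
\end{verbatim}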
Moreover, their inverses are banded matrices: $ \mathcal{K}_{DI}^{-1}$ is diagonal, $ \mathcal{K}_{TC}^{-1}$ and $ \mathcal{K}_{DC}^{-1}$ are tridiagonal. The stable spline (SS, also called second-order stable spline) kernel induces more smoothness than TC: \al{ \label{kernelSS}[ \mathcal{K}_{SS}]_{t,s} = \frac{\gamma^{t+s}\gamma^{\max (t,s)}}{2}-\frac{\gamma^{3\max (t,s)}}{6}} where $\eta=\gamma$ and $0<\gamma<1$. However, it does not admit a closed form factorization of the inverse and of the determinant. Moreover, its inverse is not banded. Finally, a kernel with the aforementioned properties which tunes the degree of smoothness and connects TC with SS does not exist. \section{Second-order TC kernel} \label{sec_TC2} In this section we derive a new kernel, hereafter called TC2, which induces more smoothness than TC and represents an alternative to SS. In order to induce more smoothness it is sufficient to take the penalty term as the squared norm of the second order difference of $h$: \al{\|g\|^2_{ \mathcal{K}^{-1}_{TC2}}=(1-\beta)^{-3}\|( \mathcal{F}^\top)^2h \|^2,\nonumber} thus \al{ \mathcal{K}_{TC2}:=(1-\beta)^3( \mathcal{F}^2 \mathcal{D}( \mathcal{F}^\top)^2)^{-1}\nonumber} where $\eta=\beta$ and $0<\beta<1$. Figure \ref{realizationsTC2} shows ten realizations of $g$ using the TC2 kernel with $\beta=0.8$. We can notice that the degree of smoothness is similar to the one obtained with $ \mathcal{K}_{SS}$. \begin{figure} \caption{Ten realizations of $g\sim \mathcal{N}(0,\mathcal{K}_{TC2})$ with $\beta=0.8$.} \label{realizationsTC2} \end{figure} \begin{propo} \label{pentaTC2}The inverse of $ \mathcal{K}_{TC2}$ is a pentadiagonal matrix, that is $[( \mathcal{K}_{TC2})^{-1}]_{t,s}=0$ for any $|t-s|>2$.\end{propo} \begin{proof} The statement is a particular instance of Proposition \ref{prop_gen_inv}, see Section \ref{sec_high}. $\blacksquare$ \end{proof} \\ Throughout the paper we will use the following result. \begin{lemm}[\cite{matriciToeplitz}] \label{lemmaford}Consider a real infinite lower triangular Toeplitz matrix, defined by the sequence $\{a_k, \; k\geq 0\}$ as follows \al{ \mathcal{X}=\tpl{a_0,a_1,a_2,\ldots}.\nonumber} If $a_0\neq 0$, $ \mathcal{X}$ is invertible and the inverse matrix $ \mathcal{Y}= \mathcal{X}^{-1}$ is also a lower triangular Toeplitz matrix with elements $\{b_k, \; k\geq 0\}$ given by the following formula \al{b_0=\frac{1}{a_0}, \; b_k=-\frac{1}{a_0}\sum_{j=0}^{k-1}a_{k-j}b_j\hbox{ for } k\geq 1.\nonumber} \end{lemm} \begin{propo} \label{propKTC2} $ \mathcal{K}_{TC2}$ admits the following closed form expression: \al{\label{formTC2}[ \mathcal{K}_{TC2}]_{t,s}=2\beta^{\max(t,s)+1}+(1-\beta)(1+|t-s|)\beta^{\max(t,s)}.} \end{propo} \begin{proof} First, $ \mathcal{F}$ is a lower triangular Toeplitz matrix which is invertible because its main diagonal consists of strictly positive elements.
Therefore, by Lemma \ref{lemmaford} we have \al{ \mathcal{F}^{-2}=\tpl{1,2,\ldots, t,\ldots}.\nonumber} Moreover, \al{([ \mathcal{F}^{-2}]_{:,t})^\top=[\, 0\; \ldots \;0 \;\hspace{-0.5cm} \underbrace{1}_{\tiny \hbox{$t$-th element}} \hspace{-0.5cm}\; 2 \; 3 \ldots \,].\nonumber} Therefore, \al{[ \mathcal{K}_{TC2}]_{t,s}& =(1-\beta)^3[( \mathcal{F}^{-2})^\top \mathcal{D}^{-1} \mathcal{F} ^{-2}]_{t,s}\nonumber\\ &=(1-\beta)^3([ \mathcal{F}^{-2}]_{:,t})^\top \mathcal{D}^{-1}[ \mathcal{F}^{-2}]_{:,s}\nonumber\\ &= (1-\beta)^3\sum_{k=\max(t,s)}^\infty \beta^k(k-t+1)(k-s+1).\nonumber} Finally, it is not difficult to see that the above series converges to (\ref{formTC2}) by exploiting the identity \al{\label{geom_series}\sum_{k=0}^\infty \beta^k=\frac{1}{1-\beta} } together with its first and second derivatives with respect to $\beta$. $\blacksquare$ \end{proof} \\ It is worth noting that the SS kernel is also a second-order generalization of the TC kernel. Indeed, TC and SS are obtained by applying a ``stable'' coordinate change to the first and second order spline kernels, respectively \cite{PILLONETTO_DENICOLAO2010}. That extension has been derived in the continuous time domain, while the one proposed here has been derived in the discrete time domain. \section{Second-order DC kernel}\label{sec_DC2} The aim of this section is to introduce a new kernel, hereafter called DC2, which connects the TC and TC2 kernels. The only difference between TC and TC2 is the prefiltering operator acting on $h$. Thus, the DC2 kernel should perform a transition from $ \mathcal{F}$ to $ \mathcal{F}^2$. One possible way is to take \al{\label{def_2a} \mathcal{F}_{2,\alpha}:=(1-\alpha) \mathcal{F}+\alpha \mathcal{F}^2 } with $0\leq \alpha\leq 1$ and thus we obtain \al{\label{KDC2} \mathcal{K}_{DC2}:=\kappa( \mathcal{F}_{2,\alpha} \mathcal{D} \mathcal{F}_{2,\alpha}^\top)^{-1}} with $\kappa=(1-\beta)(1-\alpha\beta)(1-\alpha^2\beta)$. In this case we have $\eta=[\, \alpha\; \beta\,]^\top$ with $0<\beta <1$. From the above definition it follows that \al{\label{condB}\underset{ \alpha \rightarrow 0}{\lim} \mathcal{K}_{DC2}= \mathcal{K}_{TC}, \quad \underset{ \alpha \rightarrow 1}{\lim} \mathcal{K}_{DC2}= \mathcal{K}_{TC2}. } Figure \ref{fig_transitions} shows a realization of the impulse response as a function of $\alpha$ using (\ref{KDC2}); as expected, the degree of smoothness increases as $\alpha$ increases. \begin{rem} It is worth noting that one could consider other transitions, e.g. \al{ \mathcal{F}_{2,\alpha}&=\tpl{1,-1-\alpha^2,\alpha,0,\ldots}\nonumber\\ \mathcal{F}_{2,\alpha}&=((1-\alpha) \mathcal{F}^{-1}+\alpha \mathcal{F}^{-2})^{-1}.\nonumber} However, as we will see, (\ref{def_2a}) is the unique definition which guarantees that $K_{DC2}$ admits a closed form expression and is the maximum entropy solution of a matrix completion problem. \end{rem} \begin{figure} \caption{One realization of $g\sim \mathcal{N}(0,\mathcal{K}_{DC2})$ for different values of $\alpha$.} \label{fig_transitions} \end{figure} \begin{propo} The inverse of $ \mathcal{K}_{DC2}$ is a pentadiagonal matrix, that is $[( \mathcal{K}_{DC2})^{-1}]_{t,s}=0$ for any $|t-s|>2$.\end{propo} \begin{proof} The proof is similar to the one of Proposition \ref{pentaTC2}.
$\blacksquare$ \end{proof} \begin{propo} For $0\leq \alpha<1$, $ \mathcal{K}_{DC2}$ admits the following closed form expression: {\small \al{\label{formDC2}[ \mathcal{K}_{DC2}]_{t,s}=\frac{\beta^{\max(t,s)}(1-(1-\beta)\alpha^{|t-s|+1})-\alpha^2\beta^{\max(t,s)+1}}{1-\alpha}.}} \end{propo} \begin{proof} First, we notice that \al{ \mathcal{F}_{2,\alpha}=((1-\alpha) \mathcal{I} + \alpha \mathcal{F}) \mathcal{F}= \mathcal{F}_\alpha \mathcal{F}\nonumber} where $ \mathcal{F}_\alpha$ has been defined in (\ref{def_Fa}); $ \mathcal{I}$ is the identity matrix of infinite dimension. The main diagonals of $ \mathcal{F}$ and $ \mathcal{F}_{\alpha}$ consist of strictly positive elements and thus their inverses exist. By Lemma \ref{lemmaford}, we have \al{ \mathcal{F}^{-1}&=\tpl{1,1, \ldots}\nonumber\\ \mathcal{F}^{-1}_\alpha&=\tpl{1,\alpha,\alpha^2,\ldots}.\nonumber} Therefore, \al{ \mathcal{F}_{2,\alpha}^{-1}=\frac{1}{1-\alpha}\tpl{1-\alpha,1-\alpha^2,1-\alpha^3,\ldots}.\nonumber} Finally, \al{[ \mathcal{K}_{DC2}&]_{t,s} =\kappa([ \mathcal{F}_{2,\alpha}^{-1}]_{:,t})^\top \mathcal{D}^{-1}[ \mathcal{F}_{2,\alpha}^{-1}]_{:,s}\nonumber\\ &=\kappa\sum_{k=\max(t,s)}^\infty \beta^{k} \frac{1-\alpha^{k-t+1}}{1-\alpha}\frac{1-\alpha^{k-s+1}}{1-\alpha}\nonumber} where the above series converges to the right hand side of (\ref{formDC2}). The latter fact can be easily proved by using Identity (\ref{geom_series}). $\blacksquare$ \end{proof} \section{Generalized-correlated kernel} \label{sec_GC} In view of (\ref{condA}) and (\ref{condB}) we can define a general kernel, hereafter called generalized-correlated (GC) kernel, that incorporates the DI, DC, TC, DC2 and TC2 kernels. Let $ \mathcal{K}_{DI}(\beta)$, $ \mathcal{K}_{DC}(\alpha,\beta)$, $ \mathcal{K}_{TC}(\beta)$, $ \mathcal{K}_{DC2}(\alpha,\beta)$ and $ \mathcal{K}_{TC2}(\beta)$ be the kernels defined in (\ref{kernelDI}), (\ref{kernelDC}), (\ref{kernelTC}), (\ref{formDC2}) and (\ref{formTC2}), respectively, where we made explicit their dependence on the hyperparameters $0<\alpha<1$ and $0<\beta<1$. Then, we define the GC kernel as \al{\label{kernelGC} \mathcal{K}_{GC}(\gamma,\beta)=\left\{\begin{array}{ll} \mathcal{K}_{DI}(\beta),& \gamma=0 \\ \mathcal{K}_{DC}(\gamma,\beta),& 0<\gamma<1 \\ \mathcal{K}_{TC}(\beta), & \gamma=1\\ \mathcal{K}_{DC2}(\gamma-1,\beta), & 1<\gamma<2 \\ \mathcal{K}_{TC2}(\beta),& \gamma=2 \end{array}\right.} where $\gamma$ characterizes the smoothness of the impulse response over a wide range. It is worth noting that $ \mathcal{K}_{GC}$ is a continuous function with respect to $\gamma$ and $\beta$, but it is not differentiable. In order to test the benefit of the proposed kernel with respect to DI, DC, TC, DC2 and TC2, we consider two Monte Carlo studies. The first Monte Carlo study is composed of 200 experiments. In each experiment we generate the impulse response $g$ with practical length $T=50$ as follows: \al{g_t=\sum_{k=1}^{10} a_k\cos(b_kt+c_k)\nonumber} where the parameters are drawn as follows: $a_k\in \mathcal{U}([0.2,0.9])$, $b_k\in \mathcal{U}([10^{-6}\pi,10^{-1}\pi])$ and $c_k\in \mathcal{U}([0,\pi])$. Figure \ref{fig_realsimhl} (top) shows ten realizations drawn from such a process. Then, we generate the input of length $N=500$ using the MATLAB function \verb"idinput.m" as a realization of a Gaussian noise with band $[0, 0.6]$. Then, we feed the corresponding system (\ref{mod}) with it, obtaining the dataset $\mathrm{D}^N:=\{y(t), u(t)\}_{t=1}^N$. Here, $\sigma^2$ is chosen in such a way that the signal-to-noise ratio is equal to two.
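The kernel matrices entering the estimators compared below can be generated entrywise from the closed forms (\ref{formTC2}), (\ref{formDC2}) and the piecewise definition (\ref{kernelGC}); the following sketch (Python/NumPy, with illustrative names) does so for TC2, DC2 and GC.
\begin{verbatim}
# Illustrative entrywise construction of the TC2, DC2 and GC kernel matrices.
import numpy as np

def K_TC2(T, beta):
    t = np.arange(1, T + 1)
    M = np.maximum.outer(t, t)
    D = np.abs(np.subtract.outer(t, t))
    return 2 * beta ** (M + 1) + (1 - beta) * (1 + D) * beta ** M

def K_DC2(T, alpha, beta):          # valid for 0 <= alpha < 1
    t = np.arange(1, T + 1)
    M = np.maximum.outer(t, t)
    D = np.abs(np.subtract.outer(t, t))
    return (beta ** M * (1 - (1 - beta) * alpha ** (D + 1))
            - alpha ** 2 * beta ** (M + 1)) / (1 - alpha)

def K_GC(T, gamma, beta):
    # gamma in [0,2] interpolates DI -> DC -> TC -> DC2 -> TC2
    t = np.arange(1, T + 1)
    M = np.maximum.outer(t, t)
    D = np.abs(np.subtract.outer(t, t))
    if gamma <= 1.0:                 # DI (gamma=0), DC (0<gamma<1), TC (gamma=1)
        return gamma ** D * beta ** M
    return K_DC2(T, gamma - 1.0, beta) if gamma < 2.0 else K_TC2(T, beta)
\end{verbatim}
Note that the piecewise expression is continuous in $\gamma$: at $\gamma=1$ the DC2 branch with $\alpha=0$ reduces to TC, and at $\gamma=2$ the TC2 branch is the limit of DC2 as $\alpha\rightarrow 1$.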
Then, we estimate the impulse response using the following estimators: \begin{itemize} \item $\hat g_{DI}$ is the estimator in (\ref{def_RELS}) using the diagonal kernel (\ref{kernelDI}); \item $\hat g_{DC}$ is the estimator in (\ref{def_RELS}) using the DC kernel (\ref{kernelDC}); \item $\hat g_{TC}$ is the estimator in (\ref{def_RELS}) using the TC kernel (\ref{kernelTC}); \item $\hat g_{D2}$ is the estimator in (\ref{def_RELS}) using the DC2 kernel (\ref{formDC2}); \item $\hat g_{T2}$ is the estimator in (\ref{def_RELS}) using the TC2 kernel (\ref{formTC2}); \item $\hat g_{SS}$ is the estimator in (\ref{def_RELS}) using the SS kernel (\ref{kernelSS}); \item $\hat g_{GC}$ is the estimator in (\ref{def_RELS}) using the GC kernel (\ref{kernelGC}). \end{itemize} \begin{figure} \caption{{\em Top panel:} ten realizations of the process $g_t$ used in the first Monte Carlo study. {\em Bottom panel:} ten realizations of the process used in the second Monte Carlo study.} \label{fig_realsimhl} \end{figure} \begin{figure} \caption{Average impulse response fit in the first (top) and second (bottom) Monte Carlo study composed of 200 experiments.} \label{fig_highlow} \end{figure} Finally, for each estimator we compute the average impulse response fit \al{\label{def_AIRF}\mathrm{AIRF}=100\left(1- \frac{\|g-\hat g\|}{\|g-\bar g\|} \right)} where $\bar g=\frac{1}{T}\sum_{t=1}^T g_t$ and $\hat g$ is the corresponding estimator. Clearly, the closer $\mathrm{AIRF}$ is to 100, the better the estimator performs. Figure \ref{fig_highlow} (top) shows the boxplot of $\mathrm{AIRF}$ for the estimators: D2, T2, SS and GC are the best estimators, while DI is the worst one. In plain words, the best estimators are the ones that are able to induce a sufficient degree of smoothness on the impulse response. The second Monte Carlo study is analogous to the previous one, but with $b_k\in \mathcal{U}([0.6\pi,0.7\pi])$. In this case, the realizations of the process $g_t$ are less smooth than before, see Figure \ref{fig_realsimhl} (bottom). Figure \ref{fig_highlow} (bottom) shows the boxplot of $\mathrm{AIRF}$ for the estimators: DI, DC and GC are the best estimators, while T2 and SS are the worst ones. We conclude that GC is the only estimator which performs well in both situations. \section{Efficient implementation to estimate the hyperparameters}\label{sec_ME} The minimization of (\ref{marginal}) is typically performed through the nonlinear optimization solver \verb"fmincon.m" of Matlab. Thus, the crucial aspect is to consider an efficient algorithm to evaluate (\ref{marginal}). We show that the proposed kernels are suited to this aim. Recall that $K\in \mathcal{S}_T$ denotes the finite dimensional kernel matrix corresponding to $ \mathcal{K}$, defined as \al{& [K]_{t,s}=[ \mathcal{K}]_{t,s},\quad t,s=1\ldots T.\nonumber} If $K^{-1}$ admits a closed form expression of its Cholesky factor, then the negative log-marginal likelihood in (\ref{marginal}) can be evaluated efficiently as follows, see \cite{7495008}: \al{\label{ell_smart}\frac{r^2}{\sigma^2} +(N-T)\log\sigma^2+\log\det(\lambda K)+2\log \det R_1} where $L$ is the Cholesky factor of $K^{-1}=LL^\top$ and $R_1$ is given by the QR factorization \al{\left[\begin{array}{cc}R_{d1} & R_{d2} \\ \sigma \sqrt{\lambda^{-1}}L^\top & 0 \end{array}\right] =QR=Q\left[\begin{array}{cc} R_1 & R_2 \\0 & r \end{array}\right]\nonumber} where $Q^\top Q=I_{T+1}$, $R_1\in \mathbb{R}^{T\times T}$, $R_2\in \mathbb{R}^{T}$ and $r\in \mathbb{R}$. Moreover, $R_{d1}$ and $R_{d2}$ are given by the QR factorization $[\, A \; y\,]=Q_d[\, R_{d1} \; R_{d2}\,]$, which can be computed ``offline'' before starting the optimization task.
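As a sanity check of the evaluation scheme just described, the following sketch (Python/NumPy, with illustrative names) compares (\ref{ell_smart}) with the direct expression (\ref{marginal}); here, for concreteness, the TC kernel and randomly generated data are used, and the Cholesky factor of $K^{-1}$ is computed numerically rather than via the closed forms derived below.
\begin{verbatim}
# Illustrative check that the QR-based expression matches the direct
# negative log-marginal likelihood.
import numpy as np

rng = np.random.default_rng(0)
N, T, lam, sigma2, beta = 200, 40, 2.0, 0.1, 0.85

t = np.arange(1, T + 1)
K = beta ** np.maximum.outer(t, t)          # TC kernel, for concreteness
A = rng.standard_normal((N, T))
y = rng.standard_normal(N)

# direct evaluation of (marginal)
S = lam * A @ K @ A.T + sigma2 * np.eye(N)
ell_direct = np.linalg.slogdet(S)[1] + y @ np.linalg.solve(S, y)

# QR-based evaluation of (ell_smart); L is the Cholesky factor of K^{-1}
L = np.linalg.cholesky(np.linalg.inv(K))
Rd = np.linalg.qr(np.column_stack([A, y]), mode='r')        # [R_d1  R_d2]
Z = np.vstack([Rd, np.column_stack([np.sqrt(sigma2 / lam) * L.T,
                                    np.zeros((T, 1))])])
R = np.linalg.qr(Z, mode='r')
R1, r = R[:T, :T], R[T, T]
ell_qr = (r**2 / sigma2 + (N - T) * np.log(sigma2)
          + np.linalg.slogdet(lam * K)[1]
          + 2 * np.sum(np.log(np.abs(np.diag(R1)))))
print(ell_direct, ell_qr)    # the two values coincide up to rounding errors
\end{verbatim}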
In what follows we show that TC2, DC2 and GC admit a closed form expression for $L$ and thus also for $\log \det(\lambda K)$. \begin{propo} \label{prop_TC2_inv_finito}The inverse of $K_{TC2}\in \mathcal{S}_T$ admits the following decomposition \al{K_{TC2}^{-1} =(1-\beta)^{-3}F_T^2 D_T (F_T^2)^\top \nonumber} where \al{F_T&=\tpl{1,-1,0,\ldots,0}\in \mathbb{R}^{T\times T}\nonumber \\ D_T &=\left[\begin{array}{cc} D_{1,T} & 0\\ 0& B_T \end{array}\right] \nonumber\\ D_{1,T}&= \operatorname{diag}(\beta^{-1},\beta^{-2},\ldots, \beta^{-(T-2)})\nonumber\\ B_T&= (1-\beta)\beta^{-T}\left[\begin{array}{cc} \beta+\beta^{2} & 2\beta^{2} \\ 2\beta^{2} & 1-3\beta +4\beta^{2} \\ \end{array}\right]. \nonumber} Thus, $K_{TC2}^{-1}$ is a pentadiagonal matrix. \end{propo} \begin{proof} Consider \al{X:=(1-\beta)^3(F_T^2\tilde D_T (F_T^2)^\top)^{-1}\nonumber} where \al{\label{Dtilda}\tilde D_T= \operatorname{diag}(\beta^{-1},\beta^{-2}\ldots ,\beta^{-T}).} It is not difficult to see that \al{F_T^{-2}=\tpl{1,2,\ldots,T}.\nonumber} Thus, by arguments similar to the ones used in the proof of Proposition \ref{propKTC2}, we have \al{[X]_{t,s}=(1-\beta)^3\sum_{k=\max(t,s)}^T\beta^k(k-t+1)(k-s+1).\nonumber} Without loss of generality, we assume that $t\geq s$; hence, \al{[X]_{t,s}&=(1-\beta)^3\sum_{k=t}^T\beta^k(k-t+1)(k-s+1)\nonumber\\ &= (2\beta+(1-\beta)(1+t-s))\beta^t +\eta(t,s)\nonumber} where \al{\eta(t,s)=(&1-\beta)(\beta^{T+2}(T-t+1)(T-s+3)\nonumber\\ &-\beta^{T+1}(T-t+2)(T-s+2))\nonumber\\ &+2\beta^{T+2}((T-t+1)\beta-(T-t+2))\nonumber} where we have exploited the fact that \al{\sum_{k=0}^T \beta^k =\frac{1-\beta^{T+1}}{1-\beta}.\nonumber} Notice that \al{[K_{TC2}]_{t,s}=[X]_{t,s}-\eta(t,s)\nonumber} and we can rewrite $\eta$ in the shorthand form \al{\label{structeta}\eta(t,s)=\gamma_1 t+\gamma_1 s+\gamma_2 ts +\gamma_3} where the $\gamma_k$'s are constants not depending on $t$ and $s$. On the other hand, if we take \al{Y:=(1-\beta)^3(F_T^{-2})^\top \Delta F_T^{-2},\nonumber} with \al{\label{defDelta}\Delta=\left[\begin{array}{ccccc}0 & \ldots & 0 & \ldots & 0 \\\vdots & \ddots & \vdots & &\vdots \\\vdots & & 0& z & y \\ 0& \ldots & 0 & y & x\end{array}\right], } then it is not difficult to see that \al{[Y]_{t,s}=&(1-\beta)^{3}[-(T(x+2y+z)+x+y)(t+s)\nonumber\\ &+(x+2y+z)ts+2T(x+y)+x].\nonumber} By taking into account (\ref{structeta}), we can impose that $x,y,z$ obey the conditions \al{\gamma_1&=-(1-\beta)^{3}(T(x+2y+z)+x+y)\nonumber\\ \gamma_2&= (1-\beta)^{3}(x+2y+z)\nonumber\\ \gamma_3&=(1-\beta)^{3}[2T(x+y)+x].\nonumber} In this way, $\eta(t,s)=[Y]_{t,s}$. With this choice, we have \al{K_{TC2}&=X-Y=(1-\beta)^{3} (F_T^{-2})^\top (\tilde D_T^{-1}-\Delta)(F_T^{-2})\nonumber\\ &=(1-\beta)^{3} (F_T^{2}(\tilde D_T^{-1}-\Delta)^{-1}(F_T^2)^\top)^{-1}\nonumber} where it is not difficult to see that $(\tilde D_T^{-1}-\Delta)^{-1}$ coincides with $D_T$. Finally, the fact that $K_{TC2}^{-1}$ is pentadiagonal follows from Proposition \ref{prop_TC_decomp_Gen}, see Section \ref{sec_high}.
$\blacksquare$ \end{proof} \begin{propo} \label{prop_DC2_inv_finito}The inverse of $K_{DC2}\in \mathcal{S}_T$ admits the following decomposition \al{K_{DC2}^{-1} =\kappa^{-1}F_{2,\alpha,T} D_T F_{2,\alpha,T}^\top \nonumber} where \al{F_{2,\alpha,T}&=(1-\alpha)F_T+\alpha F_T^2\nonumber \\ D_T &=\left[\begin{array}{cc} D_{1,T} & 0\\ 0& B_T \end{array}\right] \nonumber\\ D_{1,T}&= \operatorname{diag}(\beta^{-1},\beta^{-2},\ldots, \beta^{-(T-2)})\nonumber\\ B_T&= (1-\alpha\beta)\beta^{-T}\nonumber\\ & \times\left[\begin{array}{cc} \beta(1+\alpha\beta) & \alpha\beta^{2}(1+\alpha) \\ \alpha\beta^{2}(1+\alpha)& (1-\beta-\alpha^2\beta) (1-\alpha\beta)+2\alpha^2\beta^2\\ \end{array}\right]. \nonumber} Thus, $K_{DC2}^{-1}$ is a pentadiagonal matrix. \end{propo} \begin{proof} Consider \al{X:=\kappa(F_{2,\alpha,T}\tilde D_T F_{2,\alpha,T}^\top)^{-1}\nonumber} where $\tilde D_T$ has been defined in (\ref{Dtilda}). Notice that $F_{2,\alpha,T}=F_{\alpha,T} F_T$ where \al{F_{\alpha,T}=\tpl{1,-\alpha,0,\ldots,0}\in \mathbb{R}^{T\times T}\nonumber} and \al{F_{\alpha,T}^{-1}&=\tpl{1,\alpha,\alpha^2, \ldots,\alpha^{T-1}}\nonumber\\ F_{2,\alpha,T}^{-1}&=F_T^{-1} F_{\alpha,T}^{-1}\nonumber\\ &=\frac{1}{1-\alpha}\tpl{1-\alpha,1-\alpha^2, \ldots,1-\alpha^T}\nonumber.} Without loss of generality, we assume that $t\geq s$; then it is not difficult to see that \al{[X]_{t,s}&=\kappa[(F_T^{-1})^\top(F_{\alpha,T}^{-1})^\top \tilde D_T^{-1}F_{\alpha,T}^{-1}F_T^{-1}]_{t,s}\nonumber\\ &=\frac{\kappa}{(1-\alpha)^2}\sum_{k=t}^{T}\beta^k (1-\alpha^{k-t+1})(1-\alpha^{k-s+1})\nonumber\\ &= [K_{DC2}]_{t,s}+\eta(t,s)\nonumber} where \al{\label{structetaDC} \eta(t,s)=\gamma_1 \alpha^{-t}+\gamma_1\alpha^{-s}+\gamma_2\alpha^{-(t+s)}+\gamma_3} and the $\gamma_k$'s are constants not depending on $t$ and $s$. On the other hand, if we take \al{Y:=\kappa(F_{2,\alpha,T}^{-1})^\top \Delta F_{2,\alpha,T}^{-1}\nonumber} where $\Delta$ is defined as in (\ref{defDelta}), then it is not difficult to see that \al{[Y]_{t,s}=&\kappa [-\alpha^T(z+y+\alpha x+\alpha y )(\alpha^{-t}+\alpha^{-s}) \nonumber\\ & +\alpha^{2T} (z+2\alpha y+\alpha^2 x) \alpha^{-(t+s)} +(z+2y+x)].\nonumber} By taking into account (\ref{structetaDC}), we can impose that $x,y,z$ obey the conditions \al{\gamma_1&=- \kappa \alpha^T(z+y+\alpha x+\alpha y )\nonumber\\ \gamma_2&= \kappa \alpha^{2T} (z+2\alpha y+\alpha^2 x)\nonumber\\ \gamma_3&= \kappa(z+2y+x).\nonumber } In this way, $\eta(t,s)=[Y]_{t,s}$. With this choice, we have \al{K_{DC2}&=X-Y=\kappa (F_{2,\alpha,T}^{-1})^\top (\tilde D_T^{-1}-\Delta)F_{2,\alpha,T}^{-1}\nonumber\\ &=\kappa(F_{2,\alpha,T}(\tilde D_T^{-1}-\Delta)^{-1}F_{2,\alpha,T}^\top)^{-1}\nonumber} where it is not difficult to see that $(\tilde D_T^{-1}-\Delta)^{-1}$ coincides with $D_T$. Finally, the fact that $K_{DC2}^{-1}$ is pentadiagonal follows by Proposition \ref{prop_DC_gen_finite} in Section \ref{sec_high}. $\blacksquare$ \end{proof} \\ By Propositions \ref{prop_TC2_inv_finito} and \ref{prop_DC2_inv_finito} we have the following corollaries.
\begin{cor} \label{corr1}Let $L$ denote the Cholesky factor of $K_{TC2}^{-1}$, then \al{[L]_{t,s}=\left\{\begin{array}{lr}\frac{1}{\sqrt{(1-\beta)^3\beta^t }}, & 1 \leq t=s\leq T-2 \\ \frac{-2}{\sqrt{(1-\beta)^3\beta^{t-1} }}, & 2 \leq t=s+1\leq T-1 \\ \frac{1}{\sqrt{(1-\beta)^3\beta^{t-2} }}, & 3 \leq t=s+2\leq T \\ \frac{\sqrt{\beta^{-T+1}(1+\beta)}}{1-\beta}, & t=s=T-1 \\ \frac{-2\sqrt{\beta^{-T+1}}}{(1-\beta)\sqrt{1+\beta}}, & t=s+1=T\\ \sqrt{\frac{\beta^{-T}}{1+\beta}}, & t=s=T \\ 0,& \hbox{ otherwise.} \end{array}\right. \nonumber} Moreover, \al{\det K_{TC2}=\beta^{\frac{T(T+1)}{2}}(1-\beta)^{3T-4}.\nonumber} \end{cor} \begin{cor} Let $L$ denote the Cholesky factor of $K_{DC2}^{-1}$, then \al{[L]_{t,s}=\left\{\begin{array}{lr}\frac{1}{ \sqrt{\kappa \beta^t }}, & 1 \leq t=s\leq T-2 \\ \frac{-(1+\alpha)}{\sqrt{\kappa \beta^{t-1}}}, & 2 \leq t=s+1\leq T-1 \\ \frac{\alpha}{\sqrt{\kappa \beta^{t-2}}}, & 3 \leq t=s+2\leq T \\ \sqrt{\frac{(1+\alpha \beta)\beta^{-T+1}}{(1-\beta)(1-\alpha^2\beta)}}, & t=s=T-1 \\ \frac{-(1+\alpha)\sqrt{\beta^{-T+1}}}{\sqrt{(1+\alpha \beta ) (1- \beta ) (1-\alpha^2 \beta )}}, & t=s+1=T\\ \sqrt{\frac{\beta^{-T}}{1+\alpha \beta}}, & t=s=T \\ 0,& \hbox{ otherwise.} \end{array}\right. \nonumber} Moreover, \al{\det K_{DC2}=\beta^{\frac{T(T+1)}{2}}(1-\alpha \beta)^{T-2}(1-\beta)^{T-1}(1-\alpha^2\beta)^{T-1}.\nonumber} \end{cor} In view of the above properties, we have \al{\log &\det(\lambda K_{TC2})=T\log \lambda+ \frac{T(T+1)}{2}\log\beta\nonumber\\ &+(3T-4)\log (1-\beta)\nonumber\\ \log & \det(\lambda K_{DC2})=T\log \lambda+ \frac{T(T+1)}{2}\log\beta\nonumber\\ &+(T-2)\log (1-\alpha\beta)+(T-1)\log (1-\beta)\nonumber\\ &+(T-1)\log (1-\alpha^2\beta).\nonumber} In view of the above corollaries, and since $K_{DI}^{-1}$, $K_{DC}^{-1}$, $K_{TC}^{-1}$ admit a closed form expression for the Cholesky factor, see \cite{7495008}, it follows that the Cholesky factor of $K_{GC}^{-1}$ admits a closed form expression. Accordingly, the minimization of the log-marginal likelihood using GC can be efficiently performed by means of the previous algorithm. \begin{rem} The fact that the inverse kernel matrix is pentadiagonal can also be used to compute (\ref{def_RELS}) efficiently through the alternating direction method of multipliers (ADMM) proposed in \cite{FUJIMOTO21}. Indeed, although that paper considers the case of tridiagonal inverse kernel matrices (e.g. TC and DC kernels), the idea also holds for banded inverse kernel matrices and the number of flops does not change.\end{rem} In order to test the aforementioned algorithm equipped with the closed form expressions, we consider a Monte Carlo study composed of 50 experiments, where the models and the data are generated as in the first Monte Carlo study of Section \ref{sec_GC}, but with $a_k\in \mathcal U([0.2, 0.9995])$ and $N=5000$. We consider the following algorithms to estimate the impulse response: \begin{itemize} \item \textbf{T2} is the algorithm in \cite{7495008}, i.e. the one explained before, to compute $\hat g_{T2}$, which exploits the fact that TC2 admits the closed form expression for $L$ and $\log\det(\lambda K)$; \item \textbf{GC} is the algorithm in \cite{7495008}, i.e.
the one explained before, to compute $\hat g_{GC}$, which exploits the fact that GC admits the closed form expression for $L$ and $\log\det(\lambda K)$; \item \textbf{SS} is the algorithm in \cite{chen2013implementation} to compute $\hat g_{SS}$, where the Cholesky factor of $K_{SS}$ is computed by \cite[Algorithm 4.2]{doi:10.1137/19M1267349}, i.e. an efficient algorithm taking linear time which exploits the fact that the SS kernel is extended 2-semiseparable. \end{itemize} For each experiment we measure the computational time (in seconds) of these algorithms through the functions \verb"tic" and \verb"toc" in Matlab. The simulation is run on a MacBook Air with 3.2GHz Apple M1 processor and 8GB 4266 LPDDR4 memory. Figure \ref{fig_burden} shows the average computational time for the three algorithms using practical lengths $T=1000$, $T=1500$, $T=2000$ (right panel) and the corresponding average impulse response fit (\ref{def_AIRF}) (left panel). \begin{figure} \caption{{\em Left panel:} average impulse response fit (\ref{def_AIRF}); {\em right panel:} average computational time for the three algorithms with $T=1000,1500,2000$.} \label{fig_burden} \end{figure} While the performance of the estimators is similar, \textbf{T2} exhibits the best computational time and \textbf{SS} the worst one. It is worth noting that the computational time of \textbf{GC} is worse than the one of \textbf{T2} because in the former we have to optimize three hyperparameters (i.e. $\lambda$, $\gamma$ and $\beta$) while in the latter only two (i.e. $\lambda$ and $\beta$). Finally, for the SS kernel we also considered the algorithm proposed in \cite{7495008} where the Cholesky factor of $K_{SS}$ is computed by \cite[Algorithm 4.2]{doi:10.1137/19M1267349}: the computational time was worse than the one of \textbf{SS}. We conclude that SS and TC2 provide a similar performance, thus SS can be safely replaced by TC2 in order to make the minimization of the negative log-marginal likelihood more efficient. \subsection{Maximum Entropy interpretation} Proposition \ref{prop_TC2_inv_finito} and Proposition \ref{prop_DC2_inv_finito} are also important to show that the kernel matrices $K_{TC2}\in \mathcal{S}_T$ and $K_{DC2}\in \mathcal{S}_T$, with $T\geq 4$, are the maximum entropy solutions of a matrix completion problem of the following form. \begin{probl}[Band extension problem] \label{band_PB}Given $m\in \mathbb N$ and $c_{t,s}$, with $|t-s|\leq m$, find the covariance matrix $\Sigma\in \mathcal{S}_T$ of a zero mean Gaussian random vector such that \al{ [\Sigma]_{t,s}=c_{t,s}, \quad |t-s|\leq m.\nonumber} \end{probl} Such an interpretation is important because, as pointed out by Dempster in \cite{dempster1972covariance}, see also \cite{chen2016maximum,chen2018continuous,carli2011maximum,carli2013efficient}, ``the principle of seeking maximum entropy is a principle of seeking maximum simplicity of explanation''. Accordingly, these kernels represent the simplest way of embedding in the prior the fact that the impulse response is BIBO stable and has a certain degree of smoothness. Recall that the maximum entropy solution (or extension) of the above problem is defined as \al{\label{MEpb} &{\max}_{\Sigma\in \mathcal{S}_T } \log \det \Sigma \nonumber\\ & \hbox{ subject to } [\Sigma]_{t,s}=c_{t,s}, \quad |t-s|\leq m.} \begin{teor} \label{teo_1}Consider Problem \ref{band_PB} with $m=2$ and \al{c_{t,s}=2\beta^{\max(t,s)+1}+(1-\beta)(1+|t-s|)\beta^{\max(t,s)},\nonumber} with $|t-s|\leq 2$.
Then, the maximum entropy solution to (\ref{MEpb}) is $K_{TC2}$.\end{teor} \begin{proof} In order to prove the statement we need to consider Problem (\ref{MEpb}) with $m=2\ldots T-2$ where $c_{t,s}=[K_{TC2}]_{t,s}$ with $|t-s|\leq m$. In particular, the solution for $m=2$ is the maximum entropy solution considered in the statement. \begin{lemm}[\cite{dym1981extensions}] Problem (\ref{MEpb}) admits a solution if and only if {\small\al{C_m:=\left[\begin{array}{ccc}c_{t,t}& \ldots & c_{t,m+t} \\ \vdots & & \vdots \\ c_{t+m,t} & \ldots & c_{t+m,t+m}\end{array}\right]\in \mathcal{S}_{m+1},\quad t=1\ldots T-m.\nonumber}}Under such an assumption, the solution is unique with the additional property that its inverse is banded of bandwidth $m$, i.e. its elements in position $(t,s)$ are zero for $|t-s|>m$. \end{lemm} It is not difficult to see that in our case $C_m\in \mathcal{S}_{m+1}$ for $m=2\ldots T-2$ and thus the corresponding band extension problems admit a unique solution. The maximum entropy extension admits a closed form solution that can be computed recursively as follows, see \cite{gohberg1993classes}. Let $\Sigma_T^{(T-2)}$ be the partially specified $T\times T$ symmetric matrix \al{\label{defST2} \Sigma_T^{(T-2)}=\left[\begin{array}{ccccc} c_{1,1 } & c_{1 ,2 } & \ldots & c_{ 1,T-1 } & x\\ c_{1 ,2 } & c_{ 2, 2} & \ldots & c_{ 2,T-1 } & c_{2 ,T } \\ \vdots & \vdots & & \vdots & \vdots \\ c_{1 ,T-1 } & c_{2 ,T-1 } & \ldots & c_{ T-1, T-1} & c_{ T-1, T}\\ x & c_{ 2,T } & \ldots & c_{ T-1,T } & c_{ T, T}\end{array}\right] } where $x$ is not fixed. Let $X\in \mathcal{S}_{T-1}$ be the submatrix of $\Sigma_T^{(T-2)}$ such that \al{[X]_{t,s}=c_{t,s}, \; t,s=1\ldots T-1.\nonumber} Then, the solution of (\ref{MEpb}) with $m=T-2$, which is called one-step extension, is given by (\ref{defST2}) with \al{x= -\frac{1}{y_1} \sum_{j=2}^{T-1} c_{T,j}y_j \nonumber} and $[\, y_1 \; y_2 \; \ldots \; y_{T-1} \,]^\top=X^{-1} [\, 1\; 0\; \ldots \; 0\,]^\top$; moreover, the maximum entropy extension $\Sigma_{ME}$ solving (\ref{MEpb}) with $2\leq m\leq T-2$ is such that $\Sigma_{ME}^{-1}$ is a band matrix of bandwidth $m$ and, for all $m+1<t\leq T$ and $1 \leq s\leq t-m-1$, the submatrix $P^\circ\in \mathcal{S}_{t-s+1}$, with $[P^\circ]_{i,j}=[\Sigma_{ME}]_{s-1+i,s-1+j}$, is the one-step extension of the problem \al{&\max_{P\in \mathcal{S}_{t-s+1} } \log \det P \nonumber\\ & \hbox{ subject to } [P]_{i,j}=c_{s-1+i,s-1+j}, \quad |i-j|\leq t-s-1.\nonumber} Taking into account Proposition \ref{prop_TC2_inv_finito} we know that $K_{TC2}^{-1}$ is banded of bandwidth $m=2$. Let \al{\label{defP}P(s,t)=\left[\begin{array}{ccccc} c_{s,s } & c_{s ,s+1 } & \ldots & c_{ s,t-1 } & c_{s,t}\\ c_{s ,s+1 } & c_{ s+1, s+1} & \ldots & c_{ s+1,t-1 } & c_{s+1 ,t } \\ \vdots & \vdots & & \vdots & \vdots \\ c_{s ,t-1 } & c_{s+1 ,t-1 } & \ldots & c_{ t-1, t-1} & c_{ t-1, t}\\ c_{s,t} & c_{ s+1,t } & \ldots & c_{ t-1,t } & c_{ t, t}\end{array}\right], } with $m+1<t\leq T$ and $1 \leq s\leq t-m-1$, be the corresponding submatrix of $K_{TC2}$. Then, given the particular definition of the $c_{t,s}$'s, it is not difficult to see that \al{P(s,t-1)&=\beta^{s-1}P(1,t-s)\nonumber\\ &=\beta^{s-1}(1-\beta)^{3}(F_{t-s}^2 D_{t-s}(F_{t-s}^2)^\top)^{-1} \nonumber} where the last equality follows by Proposition \ref{prop_TC2_inv_finito} with $T=t-s$. Then, $P(s,t)$ is the one-step extension of the corresponding band extension problem if $c_{s,t}$ is equal to $x$, where the latter is computed as follows.
We define \al{[\, & y_1 \; y_2 \; \ldots \; y_{t-s} \,]^\top=P(s,t-1)^{-1}[\, 1\; 0\; \ldots \; 0\,]^\top\nonumber\\ &=\beta^{1-s}(1-\beta)^{-3}F_{t-s}^2 D_{t-s}(F_{t-s}^2)^\top [\, 1\; 0\; \ldots \; 0\,]^\top\nonumber\\ & =\beta^{1-s}(1-\beta)^{-3} [\, \beta^{-1}\; -2\beta^{-1}\; \beta^{-1}\; 0\ldots \; 0\,]^\top;\nonumber} therefore \al{x&=-y_1^{-1} \sum_{j=2}^{t-s} c_{t,s+j-1}y_j\nonumber\\ &=-\beta(-2\beta^{-1}c_{s+1,t}+\beta^{-1}c_{s+2,t})=2c_{s+1,t}-c_{s+2,t}\nonumber\\ &=2\beta^{t+1}+(1- \beta)(1+t-s)\beta^t=[K_{TC2}]_{t,s}\nonumber} which concludes the proof. $\blacksquare$ \end{proof} \\ \begin{teor} Consider Problem \ref{band_PB} with $m=2$ and \al{c_{t,s}=\frac{\beta^{\max(t,s)}(1-(1-\beta)\alpha^{|t-s|+1})-\alpha^2\beta^{\max(t,s)+1}}{1-\alpha},\nonumber} with $|t-s|\leq 2$. Then, the maximum entropy solution to (\ref{MEpb}) is $K_{DC2}$.\end{teor} \begin{proof} The proof is similar to the one of Theorem \ref{teo_1}. More precisely, in this case $P(s,t)$ is the submatrix of $K_{DC2}$ defined as in (\ref{defP}) with $m+1<t\leq T$ and $1 \leq s\leq t-m-1$. Then, given the particular definition of the $c_{t,s}$'s, it is not difficult to see that \al{P(s,t-1)&=\beta^{s-1}P(1,t-s)\nonumber\\ &=\beta^{s-1}\kappa(F_{2,\alpha,t-s}D_{t-s}F_{2,\alpha,t-s}^\top)^{-1} \nonumber} where the last equality follows by Proposition \ref{prop_DC2_inv_finito} with $T=t-s$. Then, $P(s,t)$ is the one-step extension of the corresponding band extension problem if $c_{s,t}$ is equal to $x$, where the latter is computed as follows. We define \al{[\, & y_1 \; y_2 \; \ldots \; y_{t-s} \,]^\top\nonumber\\ &=\beta^{1-s}\kappa^{-1}F_{2,\alpha,t-s} D_{t-s}F_{2,\alpha,t-s}^\top [\, 1\; 0\; \ldots \; 0\,]^\top\nonumber\\ & =\beta^{1-s}\kappa^{-1} [\, \beta^{-1}\; -(1+\alpha)\beta^{-1}\; \alpha\beta^{-1}\; 0\ldots \; 0\,]^\top;\nonumber} therefore \al{x&=-y_1^{-1} \sum_{j=2}^{t-s} c_{t,s+j-1}y_j \nonumber\\ &=-\beta(-(1+\alpha)\beta^{-1}c_{s+1,t}+\alpha\beta^{-1}c_{s+2,t})\nonumber\\ &=(1+\alpha)c_{s+1,t}-\alpha c_{s+2,t}\nonumber\\ &=\frac{\beta^{t}(1-(1-\beta)\alpha^{t-s+1})-\alpha^2\beta^{t+1}}{1-\alpha}=[K_{DC2}]_{t,s}\nonumber } which concludes the proof. $\blacksquare$ \end{proof} \section{Higher-order extensions}\label{sec_high} Drawing inspiration from Section \ref{sec_TC2} we can define the TC kernel of order $\delta\in \mathbb{N}$ as \al{\label{kernelTCd} \mathcal{K}_{TC\delta}=\kappa_\delta( \mathcal{F}^\delta \mathcal{D} ( \mathcal{F}^\delta)^\top)^{-1}} where $\kappa_\delta$ is a suitable normalization constant. Here, $\eta=\beta$ with $0<\beta<1$. Figure \ref{realizationsTCdelta} shows ten realizations of $g$ using the TC$\delta$ kernel with $\beta=0.8$ and for different values of $\delta$. \begin{figure} \caption{Ten realizations of $g\sim \mathcal{N}(0,\mathcal{K}_{TC\delta})$ with $\beta=0.8$, for different values of $\delta$.} \label{realizationsTCdelta} \end{figure} As expected, the larger $\delta$ is, the more smoothness is induced on $g$. \begin{propo} \label{prop_gen_inv}The inverse of $ \mathcal{K}_{TC\delta}$ is a banded matrix of bandwidth $\delta$, that is $[ \mathcal{K}_{TC\delta}^{-1}]_{t,s}=0$ for any $|t-s|>\delta$.\end{propo} \begin{proof} We prove the claim by induction. First, for $\delta=1$ we have that TC$\delta$ is the standard TC and its inverse is tridiagonal, i.e. the claim holds. Assume that $ \mathcal{K}_{TC\delta-1}^{-1}$ is a banded matrix of bandwidth $\delta-1$.
Then, \al{ \mathcal{K}_{TC\delta}^{-1}= \frac{\kappa_{\delta-1}}{\kappa_{\delta}} \mathcal{F} \mathcal{K}_{TC\delta-1}^{-1} \mathcal{F}^\top.\nonumber} Notice that $ \mathcal{F}= \mathcal{I}- \mathcal{S}$ where $ \mathcal{S}$ is the lower shift matrix and $ \mathcal{I}$ the identity matrix, both infinite dimensional. Hence, \al{\label{decomp_shift}& \mathcal{K}_{TC\delta}^{-1}=\kappa_{\delta-1} \kappa_{\delta}^{-1} [ \mathcal{K}_{TC\delta-1}^{-1}\nonumber\\ &\hspace{0.2cm}+ \mathcal{S} \mathcal{K}_{TC\delta-1}^{-1} \mathcal{S}^\top- \mathcal{K}_{TC\delta-1}^{-1} \mathcal{S}^\top- \mathcal{S} \mathcal{K}_{TC\delta-1}^{-1}].} It is well known that premultiplying a matrix $A$ by a lower shift matrix results in the elements of $A$ being shifted downward by one position, with zeroes appearing in the top row. Thus, in view of (\ref{decomp_shift}), we have that $ \mathcal{S} \mathcal{K}_{TC\delta-1}^{-1} \mathcal{S}^\top$ is a band matrix with bandwidth $\delta-1$, while $ \mathcal{K}_{TC\delta-1}^{-1} \mathcal{S}^\top+ \mathcal{S} \mathcal{K}_{TC\delta-1}^{-1}$, and thus $ \mathcal{K}_{TC\delta}^{-1}$, are band matrices with bandwidth $\delta$. $\blacksquare$ \end{proof} Also in this case one could try to find the closed form expression for $ \mathcal{K}_{TC\delta}$; however, its derivation is not as straightforward as in the case $\delta=2$. On the other hand, we can define the corresponding finite dimensional kernel matrix $K_{TC\delta}\in \mathcal{S}_T$ as \al{[K_{TC\delta}]_{t,s}=[ \mathcal{K}_{TC\delta}]_{t,s}, \; \; t,s=1\ldots T.\nonumber} \begin{propo} \label{prop_TC_decomp_Gen}The finite dimensional kernel $K_{TC\delta}$ admits the following decomposition: \al{K_{TC\delta}^{-1}=\kappa_\delta^{-1}F_T^\delta D_T (F_T^\delta)^\top\nonumber} where \al{D_T &=\left[\begin{array}{cc} D_{1,T} & 0\\ 0& B_T \end{array}\right] \nonumber\\ D_{1,T}&= \operatorname{diag}(\beta^{-1},\beta^{-2},\ldots, \beta^{-(T-\delta)}), \nonumber} and $B_T$ is a $\delta\times \delta$ matrix. Thus, $K_{TC\delta}^{-1}$ is banded of bandwidth $\delta$. \end{propo} \begin{proof} Let $ \mathcal{V}^{(j)}\in \mathbb{R}^{\infty \times T}$ denote a matrix whose first $j-1$ columns coincide with the null sequence and the remaining ones do not, thus $ \mathcal{V}^{(T+1)}$ is the null matrix. We use $\sim$ to denote the equivalence relation $ \mathcal{X}\sim \mathcal{Y}$, which means that $ \mathcal{X}\in \mathbb{R}^{\infty \times T}$ and $ \mathcal{Y}\in \mathbb{R}^{\infty \times T}$ have the same number of leading columns equal to the null sequence, while the other columns are not null. Thus, the latter induces a partition of $ \mathbb{R}^{\infty\times T}$ through the corresponding equivalence classes $[ \mathcal{V}^{(j)}]=\{ \mathcal{X}\in \mathbb{R}^{\infty\times T} \hbox{ s.t. } \mathcal{X}\sim \mathcal{V}^{(j)}\}$ with $1\leq j\leq T+1$. In what follows, in order to ease the exposition (and thus with some abuse of notation) we use the symbol $=$ instead of $\sim$ in all the (submatrix) relations involving $ \mathcal{V}^{(j)}$, with $j=1\ldots T+1$. First, notice that $ \mathcal{F}= \mathcal{I}- \mathcal{S}$ and $F_T=I_T-S$ where $ \mathcal{S}$ and $S$ denote, respectively, the infinite and finite dimensional lower shift matrices.
Recall that postmultiplying $ \mathcal{V}^{(j)}$, with $1\leq j \leq T$, by $S$ results in the columns of $ \mathcal{V}^{(j)}$ being shifted left by one position with a null sequence appearing in the last column position, thus \al{\label{eq1shift} \mathcal{V}^{(j)}F_T= \mathcal{V}^{(j-1)};} premultiplying $ \mathcal{V}^{(j-1)}$, with $1\leq j \leq T$, by $ \mathcal{S}$ results in the rows of $ \mathcal{V}^{(j-1)}$ being shifted downward by one position with a null row vector appearing in the first top row, thus \al{\label{eq2shift} \mathcal{F}\, \mathcal{V}^{(j-1)}= \mathcal{V}^{(j-1)}.} Combining (\ref{eq1shift})-(\ref{eq2shift}), we obtain \al{ \mathcal{V}^{(j)}F_T= \mathcal{F} \mathcal{V}^{(j-1)}\nonumber} and thus \al{\label{eq3shift} \mathcal{F}^{-1} \mathcal{V}^{(j)}F_T= \mathcal{V}^{(j-1)}, \; \; 1\leq j\leq T.} Then, we have \al{\label{eq4shift} \mathcal{F}^{-1} \left[\begin{array}{c} I_T \\ \mathcal{V}^{(j)} \end{array}\right]F_T=\left[\begin{array}{c} I_T \\ \mathcal{O}+ \mathcal{F}^{-1} \mathcal{V}^{(j)}F_T\end{array}\right] } where $ \mathcal{O}\in \mathbb{R}^{\infty \times T}$ is a matrix whose last column is a sequence of ones, while the other columns are null sequences, i.e. $ \mathcal{O}= \mathcal{V}^{(T)}$. Accordingly, by (\ref{eq3shift})-(\ref{eq4shift}) we have \al{\label{eq5shift} \mathcal{F}^{-1} \left[\begin{array}{c} I_T \\ \mathcal{V}^{(j)} \end{array}\right]F_T=\left[\begin{array}{c} I_T \\ \mathcal{V}^{(j-1)} \end{array}\right], \; \; 1\leq j\leq T+1. } Notice that \al{K_{TC\delta}&= \left[\begin{array}{cc} I_T & 0 \end{array}\right] \mathcal{K}_{TC\delta}\left[\begin{array}{c} I_T \\0 \end{array}\right] \nonumber\\ &=\kappa_\delta\left[\begin{array}{cc} I_T & 0 \end{array}\right]( \mathcal{F}^{-\delta})^\top \mathcal{D}^{-1} \mathcal{F}^{-\delta } \left[\begin{array}{c} I_T \\0 \end{array}\right].\nonumber } Consider \al{Y:&= (F_T^{\delta})^\top K_{TC\delta}F_T^{\delta} \nonumber\\ &=\kappa_\delta \mathcal{W}_\delta^\top \mathcal{D}^{-1} \mathcal{W}_\delta\nonumber} where \al{ \mathcal{W}_\delta= \mathcal{F}^{-\delta } \left[\begin{array}{c} F_T^\delta \\0 \end{array}\right]= \mathcal{F}^{-\delta } \left[\begin{array}{c} F_T^\delta \\ \mathcal{V}^{(T+1)} \end{array}\right].\nonumber} Then, it remains to prove that $Y=\kappa_\delta D_T^{-1}$. Indeed, \al{ \mathcal{W}_\delta&= \mathcal{F}^{-(\delta-1) } \mathcal{F}^{-1}\left[\begin{array}{c} I_T\\ \mathcal{V}^{(T+1)} \end{array}\right]F_T F_T^{\delta-1}\nonumber\\ &= \mathcal{F}^{-(\delta-1) }\left[\begin{array}{c} I_T \\ \mathcal{V}^{(T)}\end{array}\right]F_T^{\delta-1}\nonumber\\ &= \ldots =\left[\begin{array}{c} I_T \\ \mathcal{V}^{(T+1-\delta)}\end{array}\right]\nonumber} where we exploited (\ref{eq5shift}).
Thus, \al{Y&=\kappa_\delta\left[\begin{array}{cc} I_T & ( \mathcal{V}^{(T+1-\delta)})^\top\end{array}\right] \mathcal{D}^{-1}\left[\begin{array}{c} I_T \\ \mathcal{V}^{(T+1-\delta)}\end{array}\right]\nonumber\\ &=\kappa_\delta\left[\begin{array}{cc} I_T & ( \mathcal{V}^{(T+1-\delta)})^\top\end{array}\right] \left[\begin{array}{cc} \bar D_{T}^{-1} & 0\\ 0& \tilde{\mathcal{D}}^{-1} \end{array}\right] \left[\begin{array}{c} I_T \\ \mathcal{V}^{(T+1-\delta)}\end{array}\right]\nonumber\\ &=\kappa_\delta (\bar D_{T}^{-1}+( \mathcal{V}^{(T+1-\delta)})^\top \tilde{\mathcal{D}}^{-1} \mathcal{V}^{(T+1-\delta)})=\kappa_\delta D_T^{-1}\nonumber} where $\bar D_T= \operatorname{diag}(\beta^{-1},\ldots,\beta^{-T})$ and $\tilde{\mathcal{D}}= \operatorname{diag}(\beta^{-(T+1)},\beta^{-(T+2)},\ldots)$ denote the leading $T\times T$ block and the trailing block of $ \mathcal{D}$, respectively; the last equality holds because the first $T-\delta$ columns of $ \mathcal{V}^{(T+1-\delta)}$ are null, so that the second term only affects the trailing $\delta\times\delta$ block. $\blacksquare$ \end{proof} \\ It remains to design the DC kernel of order $\delta$ connecting $ \mathcal{K}_{TC\delta-1}$ and $ \mathcal{K}_{TC\delta}$. Drawing inspiration from Section \ref{sec_DC2} we define it as \al{\label{KDCd} \mathcal{K}_{DC\delta}=\kappa_{\delta} ( \mathcal{F}_{\delta,\alpha} \mathcal{D} \mathcal{F}_{\delta,\alpha}^\top)^{-1}} where \al{ \mathcal{F}_{\delta,\alpha}:=(1-\alpha) \mathcal{F}^{\delta-1}+\alpha \mathcal{F}^\delta\nonumber } and $\kappa_\delta$ is the normalization constant. Here, $\eta=[\, \beta\; \alpha\,]^\top$ with $0<\beta<1$ and $0\leq \alpha\leq1$. In Figure \ref{trans23} we show a realization of the impulse response using (\ref{KDCd}) with $\delta=3$ as a function of $\alpha$; as expected, the degree of smoothness increases as $\alpha$ increases. \begin{figure} \caption{One realization of $g\sim \mathcal{N}(0,\mathcal{K}_{DC\delta})$ with $\delta=3$, for different values of $\alpha$.} \label{trans23} \end{figure} \begin{propo} \label{prop_gen_invDC}The inverse of $ \mathcal{K}_{DC\delta}$ is a banded matrix of bandwidth $\delta$, that is $[ \mathcal{K}_{DC\delta}^{-1}]_{t,s}=0$ for any $|t-s|>\delta$.\end{propo} \begin{proof} First, for $\delta=1$, $ \mathcal{K}_{DC\delta}$ is the standard DC kernel, whose inverse is tridiagonal, i.e. the statement holds. Finally, notice that \al{ \mathcal{F}_{\delta,\alpha}= \mathcal{F}((1-\alpha) \mathcal{F}^{\delta-2}+\alpha \mathcal{F}^{\delta-1})= \mathcal{F} \mathcal{F}_{\delta-1,\alpha},\nonumber } thus \al{ \mathcal{K}_{DC\delta}^{-1}=\kappa_{\delta-1}\kappa_{\delta}^{-1} \mathcal{F} \mathcal{K}_{DC\delta-1}^{-1} \mathcal{F}^\top.\nonumber} Accordingly, the remaining part of the proof is similar to the one of Proposition \ref{prop_gen_inv}.
$\blacksquare$ \end{proof} \\ Also in this case the finite dimensional kernel $K_{DC\delta}\in \mathcal{S}_T$ is defined as \al{[K_{DC\delta}]_{t,s}= [ \mathcal{K}_{DC\delta}]_{t,s}, \; \; t,s=1\ldots T.\nonumber} \begin{propo}\label{prop_DC_gen_finite} The finite dimensional kernel $K_{DC\delta}$ admits the following decomposition: \al{K_{DC\delta}^{-1}=\kappa_{\delta} F_{\delta,\alpha,T} D_T (F_{\delta,\alpha,T})^\top\nonumber} where \al{F_{\delta,\alpha,T}&=(1-\alpha)F_T^{\delta-1}+\alpha F_T^\delta\nonumber \\ D_T &=\left[\begin{array}{cc} D_{1,T} & 0\\ 0& B_T \end{array}\right] \nonumber\\ D_{1,T}&= \operatorname{diag}(\beta^{-1},\beta^{-2},\ldots \beta^{T-\delta}) \nonumber} and $B_T$ is a $\delta\times \delta$ matrix. Thus, $K_{DC\delta}^{-1}$ is banded of bandwidth $\delta$. \end{propo} \begin{proof} The proof is similar to the one of Proposition \ref{prop_TC_decomp_Gen}. $\blacksquare$ \end{proof} \\ Finally, this extension can be applied also to the high-frequency (HF) kernel, see \cite{6160606}: \al{[ \mathcal{K}_{HF}]_{t,s} = (-1)^{ |t-s|} \beta^{\max(t,s)}=(-1)^{ |t-s|}[ \mathcal{K}_{TC}]_{t,s} \nonumber } where $0<\beta<1$. We define the high frequency kernel of order $\delta\in \mathbb{N}$ as \al{[ \mathcal{K}_{HF\delta}]_{t,s}=(-1)^{|t-s|} [ \mathcal{K}_{TC\delta}]_{t,s}.\nonumber} Moreover, we can define the high frequency diagonal-correlated (HC) kernel connecting HF$\delta-1$ and HF$\delta$ as \al{[ \mathcal{K}_{HC\delta}]_{t,s}=(-1)^{|t-s|} [ \mathcal{K}_{DC\delta}]_{t,s}.\nonumber} It is straightforward to see that $\mathcal{K}_{HF\delta}^{-1}$ and $\mathcal{K}_{HC\delta}^{-1}$ are banded of bandwidth $\delta$, as well as their finite dimensional matrices $K_{HF\delta}^{-1}$ and $K_{HC\delta}^{-1}$. It is possible to find the closed form expression for the Cholesky factor and the determinant of $K_{HF2}^{-1}$ and $K_{HC2}^{-1}$. Finally, $K_{HF2}$ and $K_{HC2}$ are, respectively, the maximum entropy solutions of band extension problems similar to the ones introduced in Section \ref{sec_ME}. \section{Frequency analysis}\label{sec_freq} An exponentially convex local stationary (ECLS) kernel $\mathcal{K}\in \mathcal{S}_\infty$ admits the following decomposition \al{\label{defECLS}[ \mathcal{K}]_{t,s}=\beta^{\frac{t+s}{2}} [ \mathcal{W}]_{t,s}} where $\mathcal{W}\in \mathcal{S}_\infty$ is a stationary kernel, i.e. the covariance function of a stationary process and thus $[ \mathcal{W}]_{t,s}=[ \mathcal{W}]_{t+k,s+k}$ for any $k\in \mathbb{N}$. Recall that TC, DC and SS are ECLS kernels. It is straightforward to see that TC2 and DC2 are ECLS kernels whose stationary parts are, respectively, \al{[ \mathcal{W}_{TC2}]_{t,s}&= 2\beta^{\frac{|t-s|}{2}+1}+(1-\beta)(1+|t-s|)\beta^{\frac{|t-s|}{2}}\nonumber\\ [ \mathcal{W}_{DC2}]_{t,s}&=\frac{\beta^{\frac{|t-s|}{2}} (1-(1-\beta)\alpha^{|t-s|+1})-\alpha^2 \beta^{\frac{|t-s|}{2}+1}}{1-\alpha}. \nonumber} \begin{teor} TC$\delta$ and DC$\delta$ kernels with $\delta>2$ are ECLS, that is \al{[ \mathcal{K}_{TC\delta}]_{t,s}=\beta^{\frac{t+s}{2}} [ \mathcal{W}_{TC\delta}]_{t,s}, \; \; [ \mathcal{K}_{DC\delta}]_{t,s}=\beta^{\frac{t+s}{2}} [ \mathcal{W}_{DC\delta}]_{t,s} \nonumber} where $\mathcal{W}_{TC\delta}$ and $\mathcal{W}_{DC\delta}$ are stationary kernels.
\end{teor} \begin{proof} We only prove the claim for TC$\delta$ because the one for DC$\delta$ is similar. By (\ref{kernelTCd}), we have that \al{\label{def_TCd_ecls} \mathcal{K}_{TC\delta}=\kappa_\delta \mathcal{X}^\top \mathcal{D}^{-1} \mathcal{X}} where $\mathcal{X}= \mathcal{F}^{-\delta}=( \mathcal{F}^{-1})^\delta$. Since $\mathcal{F}$ is lower triangular, Toeplitz and invertible, then by Lemma \ref{lemmaford} we know that $\mathcal{F}^{-1}$ is lower triangular and Toeplitz. Accordingly, $\mathcal{X}$ is lower triangular and Toeplitz because it is given by a product of lower triangular and Toeplitz matrices. Hence, let \al{ \mathcal{X}=\tpl{x_1,x_2,x_3,\ldots}. \nonumber} Moreover, \al{[ \mathcal{X}]_{t,:}=[\, 0\; \ldots \;0 \;\hspace{-0.5cm} \underbrace{x_1}_{\tiny \hbox{$t$-th element}} \hspace{-0.5cm}\; x_2 \; x_3 \ldots \,].\nonumber} Taking into account (\ref{def_TCd_ecls}), we have \al{\label{TCdECLS}[ \mathcal{K}_{TC\delta}]_{t,s}&=\kappa_\delta [ \mathcal{X}]_{t,:} \mathcal{D}^{-1} [ \mathcal{X}]_{s,:}^\top\nonumber\\ &=\kappa_\delta \sum_{k=1}^\infty \beta^{\max(t,s)+k-1} x_k x_{k+|t-s|}\nonumber\\ &=\beta^{\frac{t+s}{2}}\underbrace{ \kappa_\delta \sum_{k=1}^\infty \beta^{\frac{|t-s|}{2}+k-1} x_k x_{k+|t-s|}}_{:=[ \mathcal{W}_{TC\delta}]_{t,s}}} where we have exploited the fact that $\max(t,s)=(t+s)/2+|t-s|/2$. It is straightforward to see that $\mathcal{W}_{TC\delta}$ is a stationary kernel. In view of (\ref{defECLS}) and (\ref{TCdECLS}), we conclude that TC$\delta$ is ECLS. $\blacksquare$ \end{proof} Although it is not immediate to derive the closed form expression for $W_{TC\delta}$ and $W_{DC\delta}$, we can compute them numerically: \al{[W_{TC\delta}]_{t,s}\approx \beta^{-\frac{t+s}{2}}[K_{TC\delta}]_{t,s}\nonumber} and likewise for DC$\delta$. Clearly, the larger $T$ is, the better the approximation above is. Therefore, it is interesting to compare the frequency content in their stationary parts. In doing that, we recall that \al{[W]_{t,s}=\frac{1}{2\pi}\int_{-\pi}^\pi \phi(\vartheta)\cos(\vartheta(t-s))\mathrm d \vartheta\nonumber} where $\phi(\vartheta)$, with $\vartheta\in[0,2\pi]$, is the power spectral density of the (stationary) process. In order to compare SS with the others we need to choose $\gamma=\sqrt[3]{\beta}$ in (\ref{kernelSS}); in this way the latter has the exponential part as in (\ref{defECLS}). Figure \ref{fig_freq} shows the power spectral densities of the stationary part of TC, TC$\delta$ with $\delta=2\ldots 6$, and SS. \begin{figure} \caption{Power spectral density of the stationary part of TC, TC$\delta$ with $\delta=2\ldots 6$, and SS.} \label{fig_freq} \end{figure} As expected, the higher TC$\delta$ is, the more statistical power is concentrated at frequencies close to zero. TC2 promotes less smoothness than SS, while the latter is more similar to TC5 and TC6. It is worth noting that we can also plot the power spectral density corresponding to DC$\delta$. The latter smoothly changes from the one of TC$\delta-1$, with $\alpha=0$, to the one of TC$\delta$, with $\alpha=1$. Finally, also HF$\delta$ and HC$\delta$ are ECLS kernels. Figure \ref{fig_freqHF} shows the power spectral density of the stationary part of HF and HF$\delta$ for $\delta=2\ldots 4$.
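As a small numerical illustration of the approximation $[W_{TC\delta}]_{t,s}\approx\beta^{-\frac{t+s}{2}}[K_{TC\delta}]_{t,s}$ above, the following sketch (NumPy, with illustrative values of $\beta$ and $T$; it is written for the first order TC kernel, for which $[\mathcal{K}_{TC}]_{t,s}=\beta^{\max(t,s)}$ is available in closed form, while for $\delta>1$ one would first build $K_{TC\delta}$ from (\ref{kernelTCd})) computes the stationary part and a crude estimate of its power spectral density.

\begin{verbatim}
import numpy as np

beta, T = 0.8, 200
t = np.arange(1, T + 1)

# first order TC kernel [K]_{t,s} = beta^max(t,s) and its stationary part
K = beta ** np.maximum.outer(t, t)
W = beta ** (-(t[:, None] + t[None, :]) / 2.0) * K    # [W]_{t,s} ~ beta^{|t-s|/2}

# autocovariance w(tau) read off the first row, and a crude estimate of the
# power spectral density phi(theta) = w(0) + 2 * sum_{tau>=1} w(tau)*cos(tau*theta)
w = W[0, :]
theta = np.linspace(0, np.pi, 512)
phi = w[0] + 2 * np.cos(np.outer(theta, np.arange(1, T))) @ w[1:]
print(phi[:5])
\end{verbatim}

The same computation, applied to the finite dimensional kernels $K_{TC\delta}$ and $K_{DC\delta}$ in place of $K$, gives numerical approximations of the power spectral densities discussed in this section.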
\begin{figure} \caption{Power spectral density of the stationary part of HF and HF$\delta$ with $\delta=2\ldots 4$.} \label{fig_freqHF} \end{figure} The higher $\delta$ is, the more the statistical power is concentrated at frequencies close to $\pi$. In order to test the performance of the TC$\delta$ and DC$\delta$ kernels we consider a Monte Carlo study composed of 200 experiments. In each experiment the models and the data are generated as in the first Monte Carlo study of Section \ref{sec_GC}, but the input $u$ is a realization of a Gaussian noise with band $[0, 0.2]$. We consider the following additional estimators for the impulse response: \begin{itemize} \item $\hat g_{DC\delta}$ is the estimator in (\ref{def_RELS}) using the DC$\delta$ kernel (\ref{KDCd}) with $\delta=2\ldots 6$; \item $\hat g_{TC\delta}$ is the estimator in (\ref{def_RELS}) using the TC$\delta$ kernel (\ref{kernelTCd}) with $\delta=2\ldots 6$. \end{itemize} \begin{figure*} \caption{Average impulse response fit in the Monte Carlo study composed of 200 experiments.} \label{fig_sim1} \end{figure*} Figure \ref{fig_sim1} shows the boxplot of $\mathrm{AIRF}$ for the estimators. The best one is $\hat g_{TC6}$, while $\hat g_{DI}$, $\hat g_{DC}$ and $\hat g_{TC}$ are the worst ones. This result is not surprising because the impulse responses in this Monte Carlo study are sufficiently smooth, see Figure \ref{fig_realsimhl} (top), and indeed the TC6 kernel induces more smoothness than the others. \section{Conclusions}\label{sec_conc} We have introduced second-order extensions of the TC and DC kernels, called TC2 and DC2, respectively, which induce more smoothness than the original kernels. This idea can also be extended to higher orders. We have also introduced a generalized-correlated (GC) kernel which incorporates the DI, DC and TC kernels, i.e. the most popular kernels in system identification, as well as the DC2 and TC2 kernels. We have derived the closed form expression for the determinant and the Cholesky factorization of the inverse matrix of TC2, DC2 and GC. Accordingly, these expressions allow us to design efficient algorithms for minimizing the negative log-likelihood. In particular, since the TC2 and SS kernels produce similar performance for estimating the impulse response, TC2 represents an appealing alternative to SS because it admits an efficient implementation for searching the optimal hyperparameters through marginal likelihood. Finally, we have also shown that these new kernels are exponentially convex local stationary and thus their frequency properties are easily understood. \end{document}
\begin{document} \begin{abstract} We explore parameterizations by radicals of low genera algebraic curves. We prove that for $q$ a prime power that is large enough and prime to $6$, a fixed positive proportion of all genus 2 curves over the field with $q$ elements can be parameterized by $3$-radicals. This results in the existence of a deterministic encoding into these curves when $q$ is congruent to $2$ modulo $3$. We extend this construction to parameterizations by $\ell$-radicals for small odd integers $\ell$, and make it explicit for $\ell=5$. \end{abstract} \title{The geometry of some parameterizations and encodings} \author{Jean-Marc Couveignes} \address{Jean-Marc Couveignes, Univ. Bordeaux, IMB, UMR 5251, F-33400 Talence, France.} \address{Jean-Marc Couveignes, CNRS, IMB, UMR 5251, F-33400 Talence, France.} \address{Jean-Marc Couveignes, INRIA, F-33400 Talence, France.} \address{Jean-Marc Couveignes, Laboratoire International de Recherche en Informatique et Math\'ematiques Appliqu\'ees.} \email{[email protected]} \author{Reynald Lercier} \address{ \textsc{Reynald Lercier, DGA MI}, La Roche Marguerite, 35174 Bruz, France. } \address{Reynald Lercier, Institut de recherche math\'ematique de Rennes, Universit\'e de Rennes 1, Campus de Beaulieu, 35042 Rennes, France. } \email{[email protected]} \thanks{Research supported by the ``Direction G{\'e}n{\'e}rale de l'Armement'', by the ``Agence Nationale de la Recherche'' (project PEACE), and the Investments for the future Programme IdEx Bordeaux (ANR-10-IDEX-03-02) through the CPU cluster (Numerical certification and reliability).} \date{\today} \maketitle \section{Introduction} Let ${\mathbb F}_{q}$ be a finite field, let $C/{{\mathbb F}_{q}}$ be an algebraic curve, we propose in this paper new algorithms for computing in deterministic polynomial time a point in $C({\mathbb F}_{q})$. This is useful in numerous situations, for instance in discrete logarithm cryptography~\cite{BonFra2001}. To be more precise, we consider this question for low genus curves with an emphasis on the genus 2 case. The mathematical underlying problem is to compute radical expressions for solutions of a system of algebraic equations. Galois theory provides nice answers, both in theory and practice, for sets of dimension 0 and degree less than 5. Explicit results are known in dimension 1 too. A famous theorem of Zariski states that a generic curve of genus at least 7 cannot be parameterized by radicals. Conversely, a complex curve of genus less than 7 can be parameterized by radicals over the field of rational fractions \cite{Harrison,Fried}. In this work we restrict the degree of radicals involved in the parameterizations. Typically, for $C$ a curve over the field with $q$ elements, we only allow radicals of degrees $l$ prime to $q(q-1)$. The reason is that for such $l$, we can compute $l$-th roots of elements in ${\mathbb F}_q$ in deterministic polynomial time in $\log(q)$. Especially, we do not allow square roots. We will be mainly concerned with genus 2 curves. Following pioneering investigations \cite{SK,SKS} by Schinzel and Ska{\l}ba, Shallue and Woestijne came in 2006 to a first practical deterministic algorithm for constructing points on genus 1 curves over any finite field~\cite{SW}. In 2009, Icart proposed a deterministic encoding with quasi-quadratic complexity in $\log q$ for elliptic curves over a finite field when $q$ is congruent to $2$ modulo $3$~\cite{icart}. 
To this end, he constructed a parameterization by $3$-radicals for every elliptic curve over a field with characteristic prime to $6$. Couveignes and Kammerer recently proved that there exist infinitely many such parameterizations~\cite{CK12}, corresponding to rational curves on a K3 surface associated with the elliptic curve. Nevertheless, in genus 2, only partial results are known. Ulas attempted to generalize Shallue and Woestijne's results~\cite{ulas07}. Tibouchi and Fouque designed encodings for curves with automorphism group containing the dihedral group with 8 elements~\cite{FM}. Each of these two constructions reaches a family of dimension 1 inside the dimension 3 moduli space of the genus 2 curves. So the proportion of target curves for such parameterizations tends to zero when $q$ tends to infinity. At the same time, Kammerer, Lercier and Renault~\cite{KLR} published encodings for a dimension 2 family of genus 2 curves. In particular their curves have no non-hyperelliptic involution. However these curves still represent a negligible proportion of all genus 2 curves when $q$ tends to infinity. In this paper we construct a parameterization by $3$-radicals for a genus 2 curve $C$ over a field $K$ with characteristic $p$ prime to $6$ under the sole restriction that $C$ has two $K$-rational points whose difference has order $3$ in the Jacobian variety. This is a dimension 3 family. In particular, we parameterize all genus 2 curves when $K$ is algebraically closed. When $K$ is a finite field with characteristic prime to $6$ we parameterize a positive proportion of all curves in that way. Our construction extends the ones by Farashahi \cite{fara} for genus 1 curves and Kammerer, Lercier and Renault~\cite{KLR} for genus 2 curves. Our starting point is the observation that the role played by Tartaglia-Cardan formulae in these parameterizations can be formalized and generalized using the theory of torsors under solvable finite group schemes. This leads us to a systematic exploration and combination of possibilities offered by the action of small solvable group schemes over curves of low genus. The principles of our method are presented in Section~\ref{sec:defgene}. We first recall the basics of parameterizations of curves by radicals and encodings, then we explain how to produce such parameterizations using the action of solvable group schemes on algebraic curves. Section~\ref{sec:g1} provides a first illustration of this general method in the case of genus 1 curves. This offers a new insight into previous work by Farashahi, and Kammerer, Lercier, Renault. We present a parameterization of a 3-dimensional family of genus 2 curves in Section~\ref{sec:g2}. Variations on this theme are presented in Section~\ref{sec:other}. Section~\ref{sec:g25} presents detailed computations for one of these families. We parameterize Jacobians of dimension 2 with one 5-torsion point. We finish with a few questions and prospects. We thank Jean Gillibert, Qing Liu, and Jilong Tong for useful discussions. \section{Definitions and generalities}\label{sec:defgene} In this section we recall a few definitions and present the principles of our method. Sections~\ref{sec:radic} and~\ref{sec:radic2} recall elementary results about radicals. Section~\ref{sec:paramet} recalls the definition of a parameterization. Section~\ref{sec:torsors} gives elementary definitions about torsors. Basic properties of encodings are recalled in Section~\ref{sec:encodings}.
Section~\ref{sec:cardan} presents Tartaglia-Cardan formulae in the natural language of torsors. Our strategy for finding new parameterizations is presented in Sections~\ref{sec:mu3mu2} and~\ref{sec:selec}. \subsection{Radical extensions}\label{sec:radic} The following classical lemma \cite[Chapter VI, Theorem 9.1]{lang} gives necessary and sufficient conditions for a binomial to be irreducible. \begin{lemma}\label{lem:cond} Let $K$ be a field, let $d\ge 1$ be a positive integer, and let $a\in K^*$. The polynomial $x^d-a$ is irreducible in $K[x]$ if and only if the two following conditions hold true \begin{itemize} \item For every prime integer $l$ dividing $d$, the scalar $a$ is not the $l$-th power of an element in $K^*$, \item If $4$ divides $d$, then $-4a$ is not the $4$-th power of an element in $K^*$. \end{itemize} \end{lemma} Let $K$ be a field with characteristic $p$. Let $S$ be a set of rational primes such that $p\not\in S$. Let $M\supset K$ be a finite separable $K$-algebra, and $L\subset M$ a $K$-subalgebra of $M$. The extension $L\subset M$ is said to be $S$-{\it radical} if $M$ is isomorphic, as an $L$-algebra, to $L[x]/(x^l-a)$ for some $l \in S$ and some $a\in L^*$. When $S$ contains all primes but $p$, we speak of {\it radical extensions}. An extension $M\supset L$ is said to be $S$-{\it multiradical} if there exists a finite sequence of $K$-algebras \[K\subset L=L_0\subset L_1 \subset \dots \subset L_n=M\] such that every intermediate extension $L_{i+1}/L_i$ for $0\le i\le n-1$ is $S$-radical. \subsection{Radical morphisms}\label{sec:radic2} Let $K$ be a field with characteristic $p$. Let $\bar K\supset K$ be an algebraic closure. Let $f : C \rightarrow D$ be an epimorphism of (projective, smooth, absolutely integral) curves over $K$. We say that $f$ is a {\it radical morphism} if the associated function field extension $K(D)\subset K(C)$ is radical. We define similarly multiradical morphisms, $S$-radical morphisms, $S$-multiradical morphisms. If $f$ is a radical morphism then $K(C)=K(D,b)$ where $b^l=a$ and $a$ is a non-constant function on $D$ and $l\not = p$ is a prime integer. Call $\gamma_b$ the map \begin{equation*}\label{eq:gamma} \xymatrixrowsep{0.1in} \xymatrixcolsep{0.3in} \xymatrix{ \gamma_b : & C \ar@{->}[r] & D\times \mathbb P^1\\ &P\ar@{|->}[r]&(f(P),b(P)). } \end{equation*} Let $X\subset C$ be the ramification locus of $f$, and let $Y=f(X)\subset D$ be the branch locus. A geometric point $Q$ on $D$ is branched if and only if $a$ has a zero or a pole at $Q$ with multiplicity prime to $l$. We ask if $\gamma_b$ induces an injection on $C(\bar K)$. Equivalently we ask if $b$ separates points in every fiber of $f$. First, there is a unique ramification point above each branched point. Then, if $a$ has neither a zero nor a pole at $Q$, then $b$ separates the points in the fiber of $f$ above $Q$. Finally, if $a$ has a zero or a pole at $Q$ with multiplicity divisible by $l$, then $b$ (and $\gamma_b$) fail to separate the points in the fiber of $f$ above $Q$. However, there exists a finite covering $(U_i)_i$ of $C$ by affine open subsets, and functions $b_i \in {\mathcal O}(U_i - X)^*$ such that $b_i/b\in K(D)^*\subset K(C)^*$. We set $\mathbf b=(b_i)_{1\le i\le I}$ and define a map \begin{equation*}\label{eq:gammab} \xymatrixrowsep{0.1in} \xymatrixcolsep{0.3in} \xymatrix{ \gamma_\mathbf b : & C \ar@{->}[r] & D\times \left( \mathbb P^1\right)^I\\ &P\ar@{|->}[r]&(f(P),b_1(P), \dots, b_I(P)). } \end{equation*} This map induces an injection on $C(\bar K)$. 
So every point $P\in C(\bar K)$ can be characterized by its image $f(P)$ on $D$ and the value of the $b_i$ at $P$. \subsection{Parameterizations}\label{sec:paramet} An $S$-{\it parameterization} of a projective, absolutely integral, smooth curve $C$ over $K$ is a triple $(D,\rho,\pi)$ where $D$ is another projective, absolutely integral, smooth curve over $K$, and $\rho$ is an $S$-multiradical map from $D/K$ onto $\mathbb P^1/K$, and $\pi$ is an epimorphism from $D/K$ onto $C/K$. In this situation one says that $C/K$ is {\it parameterizable} by $S$-radicals. \begin{equation}\label{eq:para} \xymatrix{ &D \ar@{->}[dl]_\pi\ar@{->}[d]^\rho \\ C&\mathbb P^1 }\end{equation} \subsection{$\Gamma$-groups}\label{sec:torsors} Let $K$ be a field with characteristic $p$. Let $K_s$ be a separable closure of $K$. Let $\Gamma$ be the Galois group of $K_s/K$. Let $A$ be a finite set acted on continuously by $\Gamma$. We say that $A$ is a finite $\Gamma$-set. We associate to it the separable $K$-algebra \[\mathop{\rm{Alg}}\nolimits (A)=\mathop{\rm{Hom}}\nolimits _\Gamma(A,K_s)\] of $\Gamma$-equivariant maps from $A$ to $K_s$. If $G$ is a finite $\Gamma$-set and has a group structure compatible with the $\Gamma$-action we say that $G$ is a finite $\Gamma$-group, or a finite {\'e}tale group scheme over $K$. Now let $A$ be a finite $\Gamma$-set acted on by a finite $\Gamma$-group $G$. If the action of $G$ on $A$ is compatible with the actions of $\Gamma$ on $G$ and $A$, then we say that $A$ is a finite $G$-set. The quotient $A/G$ is then a finite $\Gamma$-set. If further $G$ acts freely on $A$ we say that $A$ is a free finite $G$-set. A simply transitive $G$-set is called a $G$-torsor. The left action of $G$ on itself defines a $G$-torsor called the trivial torsor. The set of isomorphism classes of $G$-torsors is isomorphic, as a pointed set, to $H^1(\Gamma,G)$. See \cite[Chapter I \S 2]{NSW}. Let $l\not=p$ be a prime and let $A$ be a free finite $\mu_l$-set. Let $B=A/\mu_l$. According to Kummer theory, the inclusion $\mathop{\rm{Alg}}\nolimits (B)\subset\mathop{\rm{Alg}}\nolimits (A)$ is a radical extension of separable $K$-algebras. It has degree $l$. Let $S$ be a finite set of primes. Assume that the characteristic $p$ of $K$ does not belong to $S$. A finite $\Gamma$-group $G$ is said to be $S$-{\it solvable} if there exists a sequence of $\Gamma$-subgroups $1=G_0\subset G_1\subset \dots \subset G_I=G$ such that for every $i$ such that $0\le i\le I-1$, the group $G_i$ is normal in $G_{i+1}$, and the quotient $G_{i+1}/G_i$ is isomorphic, as a finite $\Gamma$-group, to $\mu_{l_i}$ for some $l_i$ in $S$. Let $G$ be a finite $\Gamma$-group. Assume that $G$ is $S$-{\it solvable}. Let $A$ be a free finite $G$-set. Let $B=A/G$. The inclusion $\mathop{\rm{Alg}}\nolimits (B)\subset\mathop{\rm{Alg}}\nolimits (A)$ is an $S$-multiradical extension of separable $K$-algebras. It has degree $\# G$. \subsection{Encodings}\label{sec:encodings} We assume that $K$ is a finite field with characteristic $p$ and cardinality $q$. Let $S$ be a set of prime integers. We assume that $p\not \in S$ and $S$ is disjoint from the support of $q-1$. Let $C$ and $D$ be two projective, smooth, absolutely integral curves over $K$. Let $f : C\rightarrow D$ be a radical morphism of degree $l\in S$. Let $X\subset C$ be the ramification locus of $f$, and let $Y=f(X)\subset D$ be the branch locus. Let $F : C(K) \rightarrow D(K)$ the induced map on $K$-rational points. We prove that $F$ is a bijection. 
A branched point $Q$ in $D(K)$ is totally ramified, so has a unique preimage $P$ in $C(K)$. Let $Q\in D(K)-Y(K)$ be a non-branched point. The fiber $f^{(-1)}(Q)$ is a $\mu_l$-torsor. Since $H^1(K,\mu_l)=K^*/(K^*)^l$ is trivial, this torsor is isomorphic to $\mu_l$ with the left action. Since $H^0(K,\mu_l)=\mu_l(K)$ is trivial also, $f^{(-1)}(Q)$ contains a unique $K$-rational point. Therefore $F$ is a bijection. \begin{lemma}\label{lem:bij} Let $K$ be a finite field with $q$ elements. Let $S$ be a finite set of prime integers. We assume that $p\not \in S$ and $S$ is disjoint from the support of $q-1$. Let $f : C\rightarrow D$ be an $S$-multiradical morphism between two smooth, projective, absolutely irreducible curves over $K$. The induced map $F : C(K)\rightarrow D(K)$ on $K$-rational points is a bijection. \end{lemma} The reciprocal map $F^{(-1)} : D(K)\rightarrow C(K)$ can be evaluated in deterministic polynomial time by computing successive $l$-th roots for various $l\in S$. We assume now that we are in the situation of the diagram (\ref{eq:para}). Let $R : D(K)\rightarrow \mathbb P^1(K)$ be the map induced by $\rho$ and let $\Pi : D(K)\rightarrow C(K)$ be the map induced by $\pi$. The composition $\Pi \circ R^{(-1)}$ is called an {\it encoding}. \subsection{Tartaglia-Cardan formulae}\label{sec:cardan} Let $K$ be a field with characteristic prime to $6$. Let $K_s$ be an algebraic closure of $K$. Let $\Gamma$ be the Galois group of $K_s/K$. Let $\mu_3 \subset K_s$ be the finite $\Gamma$-set consisting of the three roots of unity. Let $\mathop{\rm{Sym}}\nolimits (\mu_3)$ be the full permutation group on $\mu_3$. The Galois group $\Gamma$ acts on $\mu_3$. So we have a group homomorphism $\Gamma \rightarrow \mathop{\rm{Sym}}\nolimits (\mu_3)$ and $\Gamma$ acts on $\mathop{\rm{Sym}}\nolimits (\mu_3)$ by conjugation. This action turns $\mathop{\rm{Sym}}\nolimits (\mu_3)$ into a group scheme over $K$. Because $\mu_3$ acts on itself by translation, we have an inclusion of group schemes $\mu_3\subset \mathop{\rm{Sym}}\nolimits (\mu_3)$ and $\mu_3$ is a normal subgroup of $\mathop{\rm{Sym}}\nolimits (\mu_3)$. The stabilizer of $1\in \mu_3$ is a subgroup scheme of $\mathop{\rm{Sym}}\nolimits (\mu_3)$. It is not normal in $\mathop{\rm{Sym}}\nolimits (\mu_3)$. It is isomorphic to $\mu_2$. So $\mathop{\rm{Sym}}\nolimits (\mu_3)$ is the semidirect product $\mu_3\rtimes \mu_2$. Let $\zeta_3\in K_s$ be a primitive third root of unity. We set $\sqrt{-3}=2\zeta_3+1$. Let \[h(x)=x^3-s_1x^2+s_2x-s_3\] be a degree $3$ separable polynomial in $K[x]$. Let \[R=\mathop{\rm{Roots}}\nolimits (h)\] be the set of roots of $h(x)$ in $K_s$. This is a finite $\Gamma$-set with cardinality $3$. Let \[A=\mathop{\rm{Bij}}\nolimits (\mathop{\rm{Roots}}\nolimits (h), \mu_3)\] be the set of bijections from $R$ to $\mu_3$. For $\gamma\in \Gamma$ and $f\in A$ we set ${}^\gamma f= \gamma \circ f \circ \gamma^{-1}$. This turns $A$ into a finite $\Gamma$-set of cardinality $6$. The action of $\mathop{\rm{Sym}}\nolimits (\mu_3)$ on the left turns it into a $\mathop{\rm{Sym}}\nolimits (\mu_3)$-torsor. Let \[C=A/\mu_3\] be the quotient of $A$ by the normal $\Gamma$-subgroup $\mu_3\subset \mathop{\rm{Sym}}\nolimits (\mu_3)$ of order $3$. This is a $\mu_2$-torsor. Let \[B=A/\mu_2\] be the quotient of $A$ by the stabilizer of $1$ in $\mathop{\rm{Sym}}\nolimits (\mu_3)$. This is a finite $\Gamma$-set of cardinality~$3$, naturally isomorphic to $\mathop{\rm{Roots}}\nolimits (h)$. 
We define a function $\xi$ in $\mathop{\rm{Alg}}\nolimits (B)\subset \mathop{\rm{Alg}}\nolimits (A)$ by \begin{equation*}\label{eq:xi} \xymatrixrowsep{0.1in} \xymatrixcolsep{0.3in} \xymatrix{ \xi : & A\ar@{->}[r] & K_s\\ &f\ar@{|->}[r]&f^{(-1)}(1). }\end{equation*} The algebra $\mathop{\rm{Alg}}\nolimits (B)$ is generated by $\xi$, and the characteristic polynomial of $\xi$ is $h(x)$. So \[\mathop{\rm{Alg}}\nolimits (B)\simeq K[x]/h(x).\] Tartaglia-Cardan formulae construct functions in the algebra $\mathop{\rm{Alg}}\nolimits (A)$ of the $\mathop{\rm{Sym}}\nolimits (\mu_3)$-torsor $A$. These functions can be constructed with radicals because \[\mathop{\rm{Sym}}\nolimits (\mu_3)=\mu_3\rtimes \mu_2\] is $\{2,3\}$-solvable. A first function $\delta$ in $\mathop{\rm{Alg}}\nolimits (C)\subset \mathop{\rm{Alg}}\nolimits (A)$ is defined by \begin{equation*}\label{eq:delta} \xymatrixrowsep{0.1in} \xymatrixcolsep{0.3in} \xymatrix{ \delta : & A\ar@{->}[r] & K_s\\ &\scriptscriptstyle f\ar@{|->}[r]&\scriptscriptstyle \sqrt{-3}\left(f^{(-1)}(\zeta)-f^{(-1)}(1)\right) \left(f^{(-1)}(\zeta^2)-f^{(-1)}(\zeta)\right)\left(f^{(-1)}(1)-f^{(-1)}(\zeta^2)\right). }\end{equation*} Note that the $\sqrt{-3}$ is necessary to balance the Galois action on $\mu_3$. The algebra $\mathop{\rm{Alg}}\nolimits (C)$ is generated by $\delta$. And \[\delta^2 = 81s_3^2-54s_3s_1s_2-3s_1^2s_2^2+12s_1^3s_3+12s_2^3=-3\Delta\] is the discriminant $\Delta$ of $h(x)$ multiplied by $-3$. We say that $-3\Delta$ is the {\it twisted discriminant}. A natural function $\rho$ in $\mathop{\rm{Alg}}\nolimits (A)$ is defined as \begin{equation*}\label{eq:rho} \xymatrixrowsep{0.1in} \xymatrixcolsep{0.3in} \xymatrix{ \rho : & A\ar@{->}[r] & K_s\\ &f\ar@{->}[r]&\sum_{r\in R}r\times f(r)=\sum_{\zeta \in \mu_3}\zeta \times f^{(-1)}(\zeta). }\end{equation*} It is clear that $\rho^3$ is invariant by $\mu_3\subset \mathop{\rm{Sym}}\nolimits (\mu_3)$ or equivalently belongs to $\mathop{\rm{Alg}}\nolimits (C)$. So it can be expressed as a combination of $1$ and $\delta$. Indeed a simple calculation shows that \[\rho^3= s_1^3+\frac{27}{2}s_3-\frac{9}{2}s_1s_2-\frac{3}{2}\delta.\] A variant of $\rho$ is \begin{equation*}\label{eq:rho'} \xymatrixrowsep{0.1in} \xymatrixcolsep{0.3in} \xymatrix{ \rho' : & A\ar@{->}[r] & K_s\\ &f\ar@{->}[r]&\sum_{r\in R}r\times f(r)^{-1}. }\end{equation*} One has \[\rho'^3= s_1^3+\frac{27}{2}s_3-\frac{9}{2}s_1s_2+\frac{3}{2}\delta\] and \[\rho\rho'=s_1^2-3s_2.\] Finally, the root $\xi$ of $h(x)$ can be expressed in terms of $\rho$ and $\rho'$ as \begin{equation*} \xi=\frac{s_1+\rho+\rho'}{3}. \end{equation*} Note that the algebra $\mathop{\rm{Alg}}\nolimits (A)$ is not the Galois closure of $K[x]/h(x)$. If we wanted to construct a Galois closure we would rather consider the $\mathop{\rm{Sym}}\nolimits (\{1,2,3\})$-torsor $\mathop{\rm{Bij}}\nolimits (R,\{1,2,3\})$ of indexations of the roots. We are not interested in this torsor however. This is because $\mu_3\rtimes \mu_2$ is solvable while $C_3\rtimes C_2$ is not, in general. The algebra constructed by Tartaglia and Cardan contains the initial cubic extension, because the quotient of $\mathop{\rm{Bij}}\nolimits (\mathop{\rm{Roots}}\nolimits (h), \mu_3)$ by the stabilizer of $1$ in $\mathop{\rm{Sym}}\nolimits (\mu_3)$ is isomorphic to the quotient of $\mathop{\rm{Bij}}\nolimits (R,\{1,2,3\})$ by the stabilizer of $1$ in $\mathop{\rm{Sym}}\nolimits (\{1,2,3\})$, that is $\mathop{\rm{Roots}}\nolimits (h)$.
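When $K$ is a finite field with $q$ elements and $q\equiv 2 \pmod 3$, the cube root needed to pass from $\rho^3$ to $\rho$ is unique in $K$ and is obtained by a single modular exponentiation. The following small sketch (plain Python, for illustration only; it assumes that the square root $\delta$ of the twisted discriminant $-3\Delta$ exists in $K$ and computes it with \texttt{sympy}'s \texttt{sqrt\_mod}, whereas in the parameterizations below $\delta$ is supplied by a point on the resolvent conic or curve) recovers the root $\xi=(s_1+\rho+\rho')/3$ of $h$ over ${\mathbb F}_{83}$.

\begin{verbatim}
from sympy.ntheory import sqrt_mod

q = 83                       # a prime with q = 2 mod 3: cubing is a bijection of F_q
s1, s2, s3 = 0, q - 1, 1     # h(x) = x^3 - s1*x^2 + s2*x - s3 = x^3 - x - 1, say

inv2 = pow(2, -1, q)
cbrt = pow(3, -1, q - 1)     # u -> u**cbrt inverts cubing in F_q

# twisted discriminant -3*Delta and one of its square roots (assumed to exist here)
m3Delta = (81*s3**2 - 54*s3*s1*s2 - 3*s1**2*s2**2 + 12*s1**3*s3 + 12*s2**3) % q
delta = int(sqrt_mod(m3Delta, q))

rho3 = (s1**3 + 27*inv2*s3 - 9*inv2*s1*s2 - 3*inv2*delta) % q
rho = pow(rho3, cbrt, q)                       # the unique cube root of rho^3 in F_q
rhop = (s1**2 - 3*s2) * pow(rho, -1, q) % q    # enforce rho * rho' = s1^2 - 3*s2
xi = (s1 + rho + rhop) * pow(3, -1, q) % q

assert (xi**3 - s1*xi**2 + s2*xi - s3) % q == 0    # xi is a root of h
\end{verbatim}

Choosing the other square root of $-3\Delta$ simply exchanges $\rho$ and $\rho'$ and yields the same $\xi$.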
On the other hand, the quotient of $\mathop{\rm{Bij}}\nolimits (R,\{1,2,3\})$ by the $3$-cycle $(123)\in \mathop{\rm{Sym}}\nolimits (\{1,2,3\})$ is associated with the algebra $K[x]/(x^2-\Delta )$ while the quotient of $\mathop{\rm{Bij}}\nolimits (R,\mu_3)$ by the $3$-cycle $(1\zeta\zeta^2)\in \mathop{\rm{Sym}}\nolimits (\mu_3)$ is associated with the algebra $K[x]/(x^2+3\Delta )$. \subsection{Curves with a $\mu_3\rtimes \mu_2$ action}\label{sec:mu3mu2} We still assume that the characteristic of $K$ is prime to $6$. Let $A$ be a projective, absolutely integral, smooth curve over $K$. We assume that the automorphism group $\mathop{\rm{Aut}}\nolimits (A\otimes_K K_s)$ contains a finite {\'e}tale $K$-group-scheme isomorphic to $\mu_3\rtimes \mu_2$. The quotients $B=A/\mu_2$, and $C=A/\mu_3$ are projective, absolutely integral, smooth curves over $K$. In this situation, we say that $C$ is the {\it resolvent} of $B$. By abuse of language we may say also that we have constructed a parameterization of $B$ by $C$. Assume now that $C$ admits a parameterization by $S$-radicals as in diagram~(\ref{eq:para}). We call $D'$ the normalization of the fiber product of $A$ and $D$ above $C$. We assume that $D'$ is absolutely integral. \begin{equation*}\label{eq:passage} \xymatrix{ &&D' \ar@{->}[dl]\ar@{->}[dr]^{\mu_3} & \\ &A\ar@{->}[dl]_{\mu_2}\ar@{->}[dr]^{\mu_3}&&D\ar@{->}[dl]_{\pi}\ar@{->}[d]^{\rho}\\ B&&C&\mathbb P^1 }\end{equation*} We set $S'=S\cup \{3\}$. We let $\rho'$ be the composite map \[\rho' : D'\stackrel{\mu_3}{\longrightarrow} D\stackrel{\rho}{\longrightarrow} \mathbb P^1,\] and $\pi'$ the composite map \[\pi' : D'\longrightarrow A\stackrel{\mu_2}{\longrightarrow} B.\] Then $(D',\rho',\pi')$ is an $S'$-parameterization of $B$. The mild condition that $D'$ be absolutely integral is granted in the following cases: \begin{enumerate} \item When $C=\mathbb P^1$ and $\pi$ and $\rho$ are trivial. \item When the $\mu_3$-quotient $A\rightarrow C$ is branched at some point $P$ of $C$, and $\pi$ is not branched at $P$. Indeed the two coverings are linearly disjoint in that case. We note that when $C$ has genus 1 we may compose $\pi$ with a translation to ensure that it is not branched at $P$. \item When the degree of $\pi$ is prime to $3$, because $A\rightarrow C$ and $\pi$ are linearly disjoint then. Note that the resulting parameterization $\pi'$ has degree prime to $3$ also. We can iterate in that case. \end{enumerate} \subsection{Selecting curves}\label{sec:selec} We still assume that the characteristic of $K$ is prime to~$6$. We now look for interesting examples of curves with a $\mu_3\rtimes \mu_2$ action. We keep the notation introduced in Section~\ref{sec:mu3mu2}. We set $E=A/(\mu_3\rtimes \mu_2)$. \begin{equation*}\label{eq:passage2} \xymatrix{ &A\ar@{->}[dl]_{\mu_2}\ar@{->}[dr]^{\mu_3}&\\ B\ar@{->}[dr]&&C\ar@{->}[dl]\\ &E& }\end{equation*} The curve $C$ is the one we already know how to parameterize. The curve $B$ is the one we want to parameterize. It should be as generic as possible. In particular, we will assume that $E=\mathbb P^1$. Otherwise, the Jacobian of $B$ would contain a subvariety isogenous to the Jacobian of $E$. It would not be so generic then. Assuming now that $E=\mathbb P^1$ we denote by $r$ the number of branched points of the cover $B\rightarrow E$. Let $r_s$ be the number of branched points with ramification type $2,1$. These are called simple branched points. Let $r_t$ the number of branched points with ramification type $3$. These are totally branched points. 
We have $r=r_s+r_t$. According to the Hurwitz Genus Formula \cite[III.4.12, III.5.1]{Stich} the genus of $B$ is \[g_B=\frac{r_s}{2}+r_t-2.\] We note that every simple branched point of the cover $B\rightarrow E$ gives rise to a branched point of type $2,2,2$ of the cover $A\rightarrow E$ and to a (necessarily simple) branched point of $C\rightarrow E$. And every totally branched point of the cover $B\rightarrow E$ gives rise to a branched point of type $3,3$ of the cover $A\rightarrow E$ and to a non-branched point of $C\rightarrow E$. So \[g_A=\frac{3r_s}{2}+2r_t-5, \text{ \, and \,\,\,}g_C=\frac{r_s}{2}-1.\] We set \[m=r-3=r_s+r_t-3\] and call it the {\it modular dimension}. It is the dimension of the family of covers obtained by letting the $r$ branched points move along $E=\mathbb P^1$. The $-3$ stands for the action of $\mathop{\rm{Aut}}\nolimits (\mathbb P^1)=\mathop{\rm{PGL}}\nolimits _2$. If we aim at all curves of genus $g_B$ we should have $m$ greater than or equal to the dimension of the moduli space of curves of genus $g_B$. We deduce the {\it genericity condition} \[r_s+4r_t\le 12-2\epsilon(\frac{r_s}{2}+r_t-2),\] where $\epsilon(0)=3$, $\epsilon(1)=1$, and $\epsilon(n)=0$ for $n\ge 2$. This is a necessary condition. The first case to consider is when $C$ has genus 0 (because we know how to parameterize genus 0 curves). So we first take $r_s=2$. So $g_B=r_t-1$ and the genericity condition reads $r_t\le 2$. Only $r_t=2$ is of interest. We shall see in Section~\ref{sec:g1} that we find a parameterization similar to those by Farashahi and Kammerer, Lercier, Renault in this case. Assuming that we know how to parameterize some genus 1 curves, we may consider the case when $C$ itself has genus 1. We have $r_s=4$ in that case. And $g_B=r_t$. The genericity assumption reads $r_t\le 2$. The case $r_t=2$ will be studied in detail in Section~\ref{sec:g2}. \section{Curves of genus 1}\label{sec:g1} Let $K$ be a field of characteristic prime to $6$. Let $B/K$ be a projective, smooth, absolutely integral curve of genus 1. This is the curve we want to parameterize, following the strategy presented in Sections~\ref{sec:mu3mu2} and~\ref{sec:selec}. Since $r_s=r_t=2$ in this case, we look for a map $B\rightarrow \mathbb P^1$ of degree $3$ with two fully branched points and two simply branched points. Such a map has two totally ramified points. They may be either $K$-rational or conjugated over $K$. We will assume that they are $K$-rational. We call them $P_0$ and $P_\infty$. The two divisors $3P_0$ and $3P_\infty$ are linearly equivalent because they both are fibers of the same degree three map to $\mathbb P^1$. So the difference $P_\infty -P_0$ has order $3$ in the Jacobian of $B$. Our starting point will thus be a genus 1 curve $B/K$ and two points $P_0$, $P_\infty$ in $B(K)$ such that $P_\infty-P_0$ has order $3$ in the Jacobian. Let $z\in K(B)$ be a function with divisor $3(P_0-P_\infty)$. There is a unique hyperelliptic involution $\sigma : B\rightarrow B$ sending $P_0$ onto $P_\infty$. It is defined over $K$. There exists a scalar $a_{0,0}\in K^*$ such that $\sigma(z)\times z=a_{0,0}$. Let $x$ be a degree $2$ function, invariant by $\sigma$, with polar divisor $(x)_\infty=P_0+P_\infty$. Associated to the inclusion $K(x)\subset K(x,z)$ there is a map $B\rightarrow \mathbb P^1$ of degree $2$. The sum $z+\sigma(z)$ belongs to $K(x)$. As a function on $\mathbb P^1$ it has a single pole of multiplicity $3$ at $x=\infty$. So $z+a_{0,0}/z$ is a polynomial of degree $3$ in $x$. 
Multiplying $z$ by a scalar, and adding a scalar to $x$, we may assume that \begin{equation}\label{eq:zxa} z+\frac{a_{0,0}}{z}=x^3+a_{1,1}x+a_{0,1}. \end{equation} The image of $x\times z : B\rightarrow \mathbb P^1\times \mathbb P^1$ has equation \begin{equation*}\label{eq:eqgene1} Z_0Z_1\left(X_1^3+a_{1,1}X_1X_0^2+a_{0,1}X_0^3\right)=X_0^3\left(Z_1^2+a_{0,0}Z_0^2\right). \end{equation*} This is a curve $B^\star\subset \mathbb P^1\times \mathbb P^1$ with arithmetic genus 2. Since $B$ has geometric genus 1, we deduce that $B^\star$ has one ordinary double point (with finite $x$ and $z$ coordinates). Let $(x,z)=(j,k)$ be this singular point. We find \[a_{0,0}=k^2, \,\, a_{1,1}=-3j^2, \,\, a_{0,1}=2k+2j^3.\] The plane affine model $B^\star$ has equation \begin{equation}\label{eq:gen1aff} z^2+{k^2}=z\left(x^3-3j^2x+2(k+j^3)\right). \end{equation} This is a degree $3$ equation in $x$ with twisted discriminant $81(1-k/z)^{2}$ times \begin{eqnarray*} h(z)&=&z^2-(2k+4j^3)z+k^2. \end{eqnarray*} We can parameterize $B$ with cubic radicals. We first parameterize the conic $C$ with equation \begin{equation}\label{eq:conic} v^2=h(z) \end{equation} using the rational point $(z,v)=(0,k)$. Applying Tartaglia-Cardan formulae to the cubic Equation~(\ref{eq:gen1aff}) we deduce a parameterization of $B$ with one cubic radical. In order to relate Equation~(\ref{eq:gen1aff}) to a Weierstrass model, we simply sort in $z$ and find the degree $2$ equation in $z$, \[z^2 -(x^3-3j^2x+2k+2j^3)z+k^2=0\] with discriminant \[(x^3-3j^2x+2k+2j^3)^2-4k^2=(x-j)^2(x+2j)(x^3 -3j^2x+4k+2j^3).\] A Weierstrass model for $B$ is then $u^2=(x+2j)(x^3 -3j^2x+4k+2j^3)$. Replacing $j$ by $\lambda j$ and $k$ by $\lambda^3k$ for some non-zero $\lambda$ in $K$ we obtain an isomorphic curve. So we may assume that $j\in \{0,1\}$ without loss of generality. This construction is not substantially different from the ones given by Farashahi \cite{fara} and Kammerer, Lercier, Renault \cite{KLR}. Starting from any genus 1 curve $B$ and two points $P_0$ and $P_\infty$ such that $P_\infty-P_0$ has order $3$ in the Jacobian, we can construct a model of $B$ as in Equation~(\ref{eq:gen1aff}) and a parameterization of $B$. \subsection{Example} Let us consider an elliptic curve given in Weierstrass form $Y^2 = X^3+a\,X +b$, for example the curve $Y^2 = X^3 + 3\,X - 11$ over ${\mathbb R}$, together with a 3-torsion point $(x_0,y_0) = (3,-5)$. Define the scalars $\alpha$ and $\beta$ by \begin{displaymath} \alpha = -\,{\frac {3\,{{x_0}}^{2}+a}{{2\,y_0}}}\text{ and }\beta = -y_0 - \alpha\,x_0. \end{displaymath} The functions $x=\alpha/3 + (Y+y_0)/(X-x_0)$ and $z=Y+\alpha\,X+\beta$ have divisors with zeros and poles as prescribed. On our particular curve, these functions are \begin{equation}\label{eq:1} x = {\frac {Y-5}{X-3}} + 1\text{ and } z = Y + 3\,X-4\,. \end{equation} The functions $x$ and $z$ are related by Equation~(\ref{eq:zxa}) where \begin{displaymath} a_{0,0}=4\,y_0^2=100\,,\ a_{1,1}=-4\,x_0=-12\,,\ a_{0,1}=-4\,{\frac{4\,{a}^{3}+27\,{b}^{2}}{{27\,{y_0}}^{3}}}=4\,. \end{displaymath} So \[z+\frac{100}{z} = x^3-12\,x+4\,.\] The double point on the latter is $(x,z)=(j,k)$ with \begin{displaymath} j = \frac { -2\alpha}{ 3} = -2\text{ and }k = -2\,y_0 = 10\,. \end{displaymath} A parameterization of the conic $C$ given by Equation~(\ref{eq:conic}) that reaches the point $(z,v)=(0,k)$ at $t=\infty$ is \begin{displaymath} z = 2\,{\frac {kt-k-2\,{j}^{3}}{{t}^{2}-1}} = 4\,{\frac {5\,t+3}{{t}^{2}-1}}\, ,\ v = k-tz=\frac{(2k+4j^3)t-kt^2-k}{t^2-1}\, . 
\end{displaymath} and using Tartaglia-Cardan formulae we find $x={\rho}/{3}+\, 3j^2/{\rho}$ with \begin{displaymath} \rho =3j^2\times \sqrt [3] {\frac {2(t+1)}{\left(2\,{j}^{3}-kt+k \right)\left( 1-t \right)}}\,. \end{displaymath} It remains to invert Eq.~\eqref{eq:1} in order to express $X$ and $Y$ as functions of $x$ and $z$, \textit{i.e.} as functions of the parameter $t$. For $t=0$, we obtain in this way the point \begin{displaymath} (X,Y)=(2\,(\sqrt [3]{3})^2+4\,\sqrt [3]{3}+3,\ -6\,(\sqrt [3]{3})^2-12\,\sqrt [3]{3}-17)\,. \end{displaymath} \section{Curves of genus 2}\label{sec:g2} We look for parameterizations of genus 2 curves. We will follow the strategy of Sections~\ref{sec:mu3mu2} and~\ref{sec:selec}. We take $r_s=4$ and $r_t=2$ this time. Given a genus 2 curve $B$, we look for a degree three map $B\rightarrow \mathbb P^1$ having $4$ simply branched points and $2$ totally branched points. Such a map has two totally ramified points. We will assume that they are $K$-rational. We call them $P_0$ and $P_\infty$. The difference $P_\infty -P_0$ has order $3$ in the Jacobian of $B$. Our starting point will thus be a genus 2 curve $B/K$ and two points $P_0$, $P_\infty$ in $B(K)$ such that $P_\infty-P_0$ has order $3$ in the Jacobian. The calculations will be slightly different depending on whether the set $\{P_0,P_\infty\}$ is stable under the action of the hyperelliptic involution of $B$ or not. These two cases will be treated in Sections~\ref{sec:2d} and~\ref{sec:compl} respectively. Section~\ref{sec:gene2} recalls simple facts about genus 2 curves. Explicit calculations are detailed in Sections~\ref{sec:wg2} and~\ref{sec:exg2}. \subsection{Generalities}\label{sec:gene2} Let $K$ be a field of odd characteristic. Let $\bar K$ be an algebraic closure of $K$. Let $B/K$ be a projective, smooth, absolutely integral curve of genus~2. Take two non-proportional holomorphic differential forms and let $x$ be their quotient. This is a function on $B$ of degree $2$. Any degree $2$ function $y$ on $B$ belongs to the field $K(x)\subset K(B)$. Otherwise the image of $x\times y : B\rightarrow \mathbb P^1\times \mathbb P^1 $ would be a curve birationally equivalent to $B$ with arithmetic genus $(2-1)\times (2-1)=1$. A contradiction. So every degree two function on $B$ has the form $(ix+j)/(kx+l)$ with $i$, $j$, $k$ and $l$ in $K$. And $B$ has a unique hyperelliptic involution $\sigma$. This is the non-trivial automorphism of the Galois extension $K(x)\subset K(B)$. From Hurwitz genus formula, this extension is ramified at exactly $6$ geometric points $(P_i)_{1\le i\le 6}$ in $B(\bar K)$. If $\#K>5$ we can assume that the unique pole of $x$ is not one of the $P_i$. Set $F(x)=\prod_i(x-x(P_i))\in K[x]$. According to Kummer theory, there exists a scalar $F_0 \in K^*$ such that $F_0F$ has a square root $y$ in $K(B)$. We set $f=F_0F$ and obtain an affine model for $B$ with equation \[y^2=f(x)\] and two $\bar K$-points $O$ and $\sigma(O)$ at infinity. Every function $c$ in $K(B)$ can be written as \[c=a(x)+yb(x)\] with $a$ and $b$ in $K(x)$. If $P=(x_P,y_P)$ is a $\bar K$-point on $B$ we denote by $v_P$ the associated valuation of $\bar K(B)$. If $P$ is one of the $(P_i)_{1\le i\le 6}$ then \begin{equation}\label{eq:v1} v_P(c)=\min (2v_{x_P}(a),2v_{x_P}(b)+1), \end{equation} where $x_P=x(P)\in \bar K$ and $v_{x_P}$ is the valuation of $\bar K(x)$ at $x=x_P$. If $P$ is a finite point which is not fixed by $\sigma$ then \begin{equation}\label{eq:v2} \min(v_P(c), v_{\sigma(P)}(c))= \min (v_{x_P}(a),v_{x_P}(b)). 
\end{equation} Finally \begin{equation}\label{eq:v3} \min(v_O(c), v_{\sigma(O)}(c))=\min (-\deg(a),-\deg(b)-3). \end{equation} Let $J$ be the Jacobian of $B$. A point $x$ in $J$ can be represented by a divisor in the corresponding linear equivalence class. We may fix a degree $2$ divisor $\Omega$ and associate to $x$ a degree $2$ effective divisor $D_x$ such that $D_x-\Omega$ belongs to the linear equivalence class associated with $x$. This $D_x$ is generically unique. Indeed the only special effective divisors of degree $2$ are the fibers of the map $B\rightarrow \mathbb P^1$. We may also represent linear equivalence classes by divisors of the form $P-Q$ where $P$ and $Q$ are points on $B$. There usually are two such representations as the map \begin{displaymath} \xymatrixrowsep{0.1in} \xymatrixcolsep{0.3in} \xymatrix{ \xymatrixrowsep{0.1in} \xymatrixcolsep{0.3in} B^2 \ar@{->}[r] & \mathop{\rm Jac } (B) \\ (P,Q) \ar@{|->}[r] & P-Q, } \end{displaymath} is surjective and its restriction to the open set defined by \[P\not =Q, P\not =\sigma(Q)\] is finite {\'e}tale of degree $2$. \subsection{A 2-dimensional family}\label{sec:2d} Let $K$ be a field of characteristic prime to $6$. In this paragraph we study genus 2 curves $B/K$ satisfying the condition that there exists a point $P$ in $B(K)$ such that the class of $\sigma(P)-P$ has order $3$ in the Picard group. In particular $P$ is not fixed by $\sigma$. We let $x$ and $y$ be functions as in Section~\ref{sec:gene2}. We can assume that $x(P)=\infty$. Let $z$ be a function with divisor $3(\sigma(P)-P)$. There exists a scalar $w\in K^*$ such that $\sigma(z)\times z=w$. We write \[z=a(x)+yb(x)\] with $a$ and $b$ in $K(x)$. We deduce from Equations~(\ref{eq:v1}), (\ref{eq:v2}), (\ref{eq:v3}), that $a$ and $b$ are polynomials and $\deg(a)\le 3$ and $\deg(b)\le 0$. From $z\sigma(z)=a^2-b^2f=w\in K^*$ we deduce that $\deg(b) = 0$ and $\deg(a) = 3$. We may divide $z$ by a scalar in $K^*$ and assume that $a$ is unitary. Replacing $x$ by $x+\beta$ for some $\beta$ in $K$, we may even assume that $a(x)=x^3+kx+l$ with $k$ and $l$ in $K$. Replacing $y$ by $b y$ we may assume that $b=1$ so \[z=y+x^3+kx+l.\] An affine plane model for $B$ has thus equation \[z^2-2a(x)z+w=0\] that is \begin{equation}\label{eq:cub1} x^3+kx+l=\frac{z+wz^{-1}}{2}. \end{equation} This is a degree $3$ equation in $x$ with coefficients $s_1=0$, $s_2=k$, $s_3=(z+wz^{-1})/2-l$, and twisted discriminant $81/4$ times \[h(z)=z^2+w^2z^{-2} - 4l(z+wz^{-1})+2w+4l^2+\frac{16k^3}{27}.\] We can parameterize $B$ with cubic radicals. We first parameterize the elliptic curve $C$ with equation $v^2=h(z)$ with one cubic radical, using e.g. Icart's method \cite{icart}. We deduce a parameterization of $B$ applying Tartaglia-Cardan formulae to the cubic Equation~(\ref{eq:cub1}). This introduces another cubic radical. This is essentially the construction given by Kammerer, Lercier and Renault \cite{KLR}. Note that this family of genus 2 curves has dimension 2: when $K$ is algebraically closed we may assume that $w=1$ without loss of generality. \subsection{The complementary 3-dimensional family}\label{sec:compl} We still assume that $K$ has prime to $6$ characteristic. We consider a genus 2 curve $B$ and two points $P_0$ and $P_\infty$ in $B(K)$ such that the difference $P_0-P_\infty$ has order $3$ in the Picard group. This time we assume that $P_\infty\not=\sigma(P_0)$. There exists a degree $2$ function $x$ having a zero at $P_0$ and a pole at $P_\infty$. 
Let $z$ be a function with divisor $3(P_0-P_\infty)$. The image of $x\times z : B\rightarrow \mathbb P^1\times \mathbb P^1$ has equation \begin{equation}\label{eq:eqgene} \sum_{\substack{0\leqslant i\leqslant 3 \\0\leqslant j\leqslant 2}}a_{i,j}X_1^iX_0^{3-i}Z_1^jZ_0^{2-j}=0. \end{equation} The function $z$ takes the value $\infty$ at a single point, and $x$ has a pole at this point. So if we set $Z_0=0$ in Equation~(\ref{eq:eqgene}) the form we find must be proportional to $Z_1^2X_0^3$. We deduce that \[a_{3,2}=a_{2,2}=a_{1,2}=0\] and \[a_{0,2}\not =0.\] The function $z$ takes value $0$ at a single point, and $x$ has a zero at this point. So if we set $Z_1=0$ in Equation~(\ref{eq:eqgene}) the form we find must be proportional to $Z_0^2X_1^3$. We deduce that \[a_{2,0}=a_{1,0}=a_{0,0}=0\] and \[a_{3,0}\not =0.\] Equation~(\ref{eq:eqgene}) now reads \begin{equation*}\label{eq:casgp} (a_{3,0}Z_0+a_{3,1}Z_1)Z_0X_1^3+(a_{1,1}X_0+a_{2,1}X_1)Z_0Z_1X_0X_1+ (a_{0,1}Z_0+a_{0,2}Z_1)Z_1X_0^3=0. \end{equation*} This is a curve of arithmetic genus 2 in $\mathbb P^1\times \mathbb P^1$. It must be smooth because it has geometric genus 2. The corresponding plane affine model has equation \begin{equation}\label{eq:casga} (a_{3,0}+a_{3,1}z)x^3+(a_{1,1}+a_{2,1}x)zx+ (a_{0,1}+a_{0,2}z)z=0. \end{equation} This is a degree $3$ equation in $x$ with twisted discriminant $z^{2}(a_{3,0}+a_{3,1}z)^{-4}$ times \begin{eqnarray*} h(z)&=&(9a_{0,2}a_{3,1})^2z^4+(12a_{0,2}a_{2,1}^3+162a_{3,0}a_{0,2}^2a_{3,1}-54a_{1,1}a_{2,1}a_{0,2}a_{3,1}\\&+&162a_{0,1}a_{3,1}^2a_{0,2})z^3 +(81a_{3,0}^2a_{0,2}^2+12a_{0,1}a_{2,1}^3-54a_{1,1}a_{2,1}a_{0,1}a_{3,1}\\&+&324a_{3,0}a_{0,1}a_{0,2}a_{3,1}-3a_{1,1}^2a_{2,1}^2-54a_{3,0}a_{1,1}a_{2,1}a_{0,2}+81a_{0,1}^2a_{3,1}^2\\&+&12a_{3,1}a_{1,1}^3)z^2+(12a_{1,1}^3a_{3,0}- 54a_{3,0}a_{1,1}a_{2,1}a_{0,1}+162a_{3,0}^2a_{0,1}a_{0,2}\\&+&162a_{3,0}a_{0,1}^2a_{3,1})z+(9a_{3,0}a_{0,1})^2. \end{eqnarray*} We can parameterize $B$ with cubic radicals. We first parameterize the elliptic curve with equation $v^2=h(z)$ with one cubic radical, using Icart's method. We deduce a parameterization of $B$ applying Tartaglia-Cardan formulae to the cubic Equation~(\ref{eq:casga}). This introduces another cubic radical. In order to relate Equation~(\ref{eq:casga}) to a hyperelliptic model, we simply sort in $z$ and find the degree $2$ equation in $z$, \[a_{0,2}z^2+ (a_{3,1}x^3+a_{2,1}x^2+a_{1,1}x+a_{0,1}) z + a_{3,0}x^3=0\] with discriminant \begin{equation}\label{eq:mx} m(x)=(a_{3,1}x^3+a_{2,1}x^2+a_{1,1}x+a_{0,1})^2-4a_{0,2}a_{3,0}x^3. \end{equation} A hyperelliptic model for $B$ is then \[y^2=m(x).\] The construction will succeed for every genus 2 curve having a rational 3-torsion point in its Jacobian that splits in the sense that it can be represented as a difference between two $K$-rational points on $B$. \subsection{Rational 3-torsion points in genus 2 Jacobians}\label{sec:wg2} In this section we start from a hyperelliptic curve \[{\bf y }^2={\bf m }({\bf x }),\] where ${\bf m }({\bf x })$ is a degree $6$ polynomial. We look for a parameterization of it, following Sections~\ref{sec:2d} or~\ref{sec:compl}. To this end we need a model as in Equations~(\ref{eq:casga}) and~(\ref{eq:mx}). Such a model is obtained by writing ${\bf m }({\bf x })$ as a difference ${\bf m }_3({\bf x })^2-{\bf m }_2({\bf x })^3$ where ${\bf m }_3$ is a degree $\le 3$ polynomial and ${\bf m }_2$ is a degree $\le 2$ polynomial with rational roots. We now are very close to investigations by Clebsh \cite{CLE} and Elkies \cite{ELK}. 
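As a side remark, the identities relating the plane model (\ref{eq:casga}), the quartic $h(z)$ and the hyperelliptic model $y^2=m(x)$ obtained in Section~\ref{sec:compl} can be double-checked symbolically. The following sketch (Python with \texttt{sympy}, the $a_{i,j}$ being kept as indeterminates; it is a verification only) recovers $m(x)$ as the discriminant of Equation~(\ref{eq:casga}) seen as a quadratic in $z$, and, up to the normalization by the leading coefficient, $z^2h(z)$ as the twisted discriminant of the same equation seen as a cubic in $x$.

\begin{verbatim}
from sympy import symbols, discriminant, expand, factor

x, z = symbols('x z')
a30, a31, a21, a11, a01, a02 = symbols('a30 a31 a21 a11 a01 a02')

# the plane model (eq:casga); it has degree 3 in x and degree 2 in z
F = (a30 + a31*z)*x**3 + (a11 + a21*x)*z*x + (a01 + a02*z)*z

# seen as a quadratic in z, its discriminant is the sextic m(x) of the
# hyperelliptic model y^2 = m(x)
m = (a31*x**3 + a21*x**2 + a11*x + a01)**2 - 4*a02*a30*x**3
assert expand(discriminant(F, z) - m) == 0

# seen as a cubic in x, -3 times its discriminant should factor as z^2 times
# a quartic in z, namely the polynomial h(z) displayed above
print(factor(expand(-3*discriminant(F, x))))
\end{verbatim}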
Three-torsion points in the Jacobian of the curve ${\bf y }^2={\bf m }({\bf x })$ correspond to expressions of ${\bf m }$ as a difference between a square and a cube. When the base field $K$ is finite, we may first compute the zeta function of the curve, deduce the cardinality of the Picard group and obtain elements of order~$3$ in it by multiplying random elements in the Picard group by the prime-to-three part of its order. For a general base field $K$, we can look for solutions to ${\bf m }({\bf x })={\bf m }_3({\bf x })^2-{\bf m }_2({\bf x })^3$ by a direct Gr\"obner basis computation. Our experiments with the computer algebra systems \textsc{maple} and \textsc{magma} show that this approach is efficient enough when $K$ is a finite field of reasonable (say cryptographic) size. When $K$ is the field ${\mathbb Q}$ of rationals, this direct approach becomes quite slow. In this section we explain how to accelerate the computation using invariant theory. Our method takes as input, instead of ${\bf m }({\bf x })$, the standard homogeneous invariants for the action of $\mathop{\rm{GL}}\nolimits _2$ evaluated at ${\bf m }({\bf X }_1,{\bf X }_0)$, the degree $6$ projective form associated with ${\bf m }({\bf x })$. Classical invariant theory results~\cite{Bolza,CLE} show that the orbit under $\mathop{\rm{GL}}\nolimits _2$ of a degree 6 non-singular form ${\bf m }({\bf X }_1,{\bf X }_0)$ is characterized by 5 homogeneous invariants $I_2$, $I_4$, $I_6$, $I_{10}$, $I_{15}$, of respective degrees 2, 4, 6, 10, and 15. There is a degree 30 algebraic relation between the $I_i$ (see~\cite{Igusa60}). The action of $\mathop{\rm{GL}}\nolimits _2$ on pairs $({\bf m }_2({\bf X }_1,{\bf X }_0),{\bf m }_3({\bf X }_1,{\bf X }_0))$ consisting of a quadric and a cubic gives rise to well known invariants also: $\iota_2$ (the discriminant of $m_2$), $\iota_4$ (the discriminant of $m_3$) and 3 joint invariants $\iota_3$, $\iota_5$ and $\iota_7$, of respective degrees 2, 4, 3, 5 and 7. There is a degree 14 algebraic relation between the $\iota_i$~\cite[p.187-189]{Salmon1900}. Since the map $({\bf m }_2,{\bf m }_3)\mapsto {\bf m }={\bf m }_3^2-{\bf m }_2^3$ is $\mathop{\rm{GL}}\nolimits _2$-equivariant we can describe its fibers in terms of the invariants on each side.
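For the reader's convenience, here is a minimal sketch of the direct Gr\"obner basis approach mentioned above (Python with \texttt{sympy}; the sextic is the one of the example in Section~\ref{sec:exg2}, and the unknowns are the coefficients of ${\bf m}_3$ and ${\bf m}_2$). Extracting the solutions from the basis is a matter of elimination and back-substitution and is omitted; \texttt{sympy} is noticeably slower here than the dedicated systems mentioned above.

\begin{verbatim}
from sympy import symbols, Poly, groebner

x = symbols('x')
a, b, c, d, e, f, g = symbols('a b c d e f g')  # m3 = a*x^3 + ..., m2 = e*x^2 + f*x + g
q = 83

# sextic of the example of Section 4.5
m = x**6 + 39*x**5 + 64*x**4 + 7*x**3 + x**2 + 19*x + 36
m3 = a*x**3 + b*x**2 + c*x + d
m2 = e*x**2 + f*x + g

# equate the coefficients of m3^2 - m2^3 - m (as a polynomial in x) to zero
eqs = Poly(m3**2 - m2**3 - m, x).all_coeffs()
G = groebner(eqs, a, b, c, d, e, f, g, modulus=q, order='grevlex')
print(G)
\end{verbatim}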
We easily obtain the $I_i$'s as functions of the $\iota_i$'s, \begin{eqnarray}\label{eq:grosys} 2^2\,{I_2} &=& 120\,{\iota_5}+4\,{\iota_4}-12\,{\iota_3}\,{\iota_2}+3\,{{ \iota_2}}^{3}\,,\nonumber \\ 2^{7}\,{I_4} &=& 2640\,{{\iota_5}}^{2}+96\,{\iota_5}\,{\iota_4}-768\,{ \iota_5}\,{\iota_3}\,{\iota_2}+240\,{\iota_5}\,{{\iota_2}}^{3}-24\,{\iota_4}\,{ \iota_3}\,{\iota_2}+8\,{\iota_4}\,{{\iota_2}}^{3}\nonumber \\&&-8\,{{\iota_3}}^{3}+ 48\,{{ \iota_3}}^{2}{{\iota_2}}^{2}-24\,{\iota_3}\,{{\iota_2}}^{4}+3\,{{\iota_2}}^ {6}\,,\nonumber \\ 2^{10}\,{I_6} &=& -5120\,{{\iota_5}}^{3}-192\,{{\iota_5}}^{2}{\iota_4}-2304 \,{{\iota_5}}^{2}{\iota_3}\,{\iota_2}+3504\,{{\iota_5}}^{2}{{\iota_2}}^{3}- 96\,{\iota_5}\,{\iota_4}\,{\iota_3}\,{\iota_2}\nonumber \\&&+240\,{\iota_5}\,{\iota_4}\,{{ \iota_2}}^{3} -288\,{\iota_5}\,{{\iota_3}}^{3}+1008\,{\iota_5}\,{{\iota_3}}^ {2}{{\iota_2}}^{2}-768\,{\iota_5}\,{\iota_3}\,{{\iota_2}}^{4}+120\,{\iota_5 }\,{{\iota_2}}^{6}\nonumber \\&&+4\,{{\iota_4}}^{2}{{\iota_2}}^{3}+24\,{\iota_4}\,{{ \iota_3}}^{2}{{\iota_2}}^{2} -24\,{\iota_4}\,{\iota_3}\,{{\iota_2}}^{4}+4\,{ \iota_4}\,{{\iota_2}}^{6}+36\,{{\iota_3}}^{4}{\iota_2}\\&&-72\,{{\iota_3}}^{3}{{ \iota_2}}^{3}+48\,{{\iota_3}}^{2}{{\iota_2}}^{5}-12\,{\iota_3}\,{{\iota_2}} ^{7}+{{\iota_2}}^{9}\,,\nonumber \\ 2^{12}\,{I_{10}} &=& 46656\,{{\iota_5}}^{5}+3456\,{{\iota_5}}^{4}{\iota_4}- 3888\,{{\iota_5}}^{4}{\iota_3}\,{\iota_2}+729\,{{\iota_5}}^{4}{{\iota_2}}^{ 3}+64\,{{\iota_5}}^{3}{{\iota_4}}^{2}\nonumber \\&&-144\,{{\iota_5}}^{3}{\iota_4}\,{ \iota_3}\,{\iota_2} +27\,{{\iota_5}}^{3}{\iota_4}\,{{\iota_2}}^{3}+128\,{{\iota_5 }}^{3}{{\iota_3}}^{3}-27\,{{\iota_5}}^{3}{{\iota_3}}^{2}{{\iota_2}}^{2}\,.\nonumber \end{eqnarray} Given the $I_i$'s evaluated at ${\bf m }({\bf X }_1,{\bf X }_0)$, the generic change of variable $\lambda = {\iota_2}^3$ and $\mu = \iota_2\times \iota_3$ turns these equations into a system of 4 equations of total degrees 1, 3, 4 and 6 in the 4 variables $\lambda$, $\mu$, $\iota_4$ and $\iota_5$\,. A Gr\"obner basis can be easily computed for the lexicographic order (note that the first equation is linear). This yields a degree 40 polynomial in $\lambda$. If none of the roots of this polynomial are squares, we can abort the calculation because we need ${\bf m }_2({\bf x })$ to have rational roots in order to parameterize the curve ${\bf y }^2 = {\bf m }({\bf x })$. Considering Equation~(\ref{eq:mx}) of Section~\ref{sec:compl} it is natural to look for a form $m$ in the $\mathop{\rm{GL}}\nolimits _2$-orbite of ${\bf m }$ such that $m=m_3^2-m_2^3$ for some $m_2(x) = e\,x$ and $m_3(x) = a\,x^3 + b\,x^2 + c\,x + d$, where $e^3=4a_{0,2}a_{3,0}$, $a=a_{3,1}$, $b=a_{2,1}$, $c=a_{1,1}$, $d=a_{0,1}$. The invariants of $(m_2,m_3)$ are \begin{eqnarray}\label{eq:2} &&\iota_2 = {e}^{2}\,,\ \iota_3 = -e ( 9\,ad-bc )\,,\ \ \ \ \ \ \ \ \ \iota_5 = -{e}^{3}ad\,,\ \iota_7 = {e}^{3} ( a{c}^{3}- {b}^{3}d )\,,\\ &&\hspace*{2cm}\iota_4 = -27\,{a}^{2}{d}^{2}+18\,abcd-4\,a{c}^{3}-4\,{b}^{3}d+{b}^{2}{c}^{2}\,.\nonumber \end{eqnarray} So for each candidate $(\iota_2, \iota_3, \iota_4, \iota_5)$ issued from Equations~(\ref{eq:grosys}), we invert Eq.~\eqref{eq:2}. 
A Groebner basis for the lexicographic order $d$, $c$, $b$, $a$, $e$ yields generically a 1~dimensional system the last two equations of which are \begin{eqnarray*} 0 &=& e^2 - \iota_2\,,\\ 0 &=& {{\iota_2}}^{3}{\iota_5}\,{b}^{6}-{\iota_2}\, ( {{\iota_2}}^{3}{\iota_4}-{{\iota_2}}^{2}{{\iota_3}}^{2}+36\,{\iota_2}\,{\iota_3}\,{\iota_5}-216\,{{\iota_5}}^{2} )\,e\,a\, {b}^{3}-4\, ( {\iota_2}\,{\iota_3}-9\,{\iota_5} ) ^{3}\,{a}^{2}\,. \end{eqnarray*} We keep solutions $m_2(x)$ and $m_3(x)$ that yield a polynomial $m(x) = m_3(x)^2-m_2(x)^3$ which is $\mathop{\rm{GL}}\nolimits _2$-equivalent to ${\bf m }({\bf x })$ over the base field (see \cite{LRS} for efficient algorithms). Applying the isomorphism to $m_2(x)$ and $m_3(x)$ gives ${\bf m }_2({\bf x })$ and ${\bf m }_3({\bf x })$. \subsection{An example}\label{sec:exg2} Let $K$ be a field with $83$ elements. We start from the genus 2 curve with affine equation ${\bf y }^2={\bf m }({\bf x })$ with ${\bf m }({\bf x })={\bf x }^6+39{\bf x }^5+64{\bf x }^4+7{\bf x }^3+{\bf x }^2+19{\bf x }+36$. In order to find ${\bf m }_3({\bf x })$ and ${\bf m }_2({\bf x })$ such that ${\bf m }({\bf x }) = {\bf m }_3({\bf x })^2 - {\bf m }_2({\bf x })^3$, we first compute the invariants of the degree six form ${\bf m }$ \begin{displaymath} (I_2, I_4, I_6, I_{10}) = (23, 9, 38, 53, 59)\,. \end{displaymath} A Groebner basis for the relations between $\lambda$, $\mu$ and $\iota_4$ is \begin{eqnarray*} \iota_4 &=& 27\,{\lambda}^{39}+58\,{\lambda}^{38}+3\,{\lambda}^{37}+18\,{\lambda}^{36}+42\,{\lambda}^{35}+26\,{\lambda}^{34}+52\,{\lambda}^{33}+60\,{\lambda}^{32}\\&&+78\,{\lambda}^{31}+17\,{\lambda}^{30}+ 50\,{\lambda}^{29}+12\,{\lambda}^{28}+75\,{\lambda}^{27}+20\,{\lambda}^{26}+75\,{\lambda}^{25}+38\,{\lambda}^{24}\\&&+19\,{\lambda}^{23}+21\,{\lambda}^{22}+35\,{\lambda}^{21}+31\,{\lambda}^{20}+27\,{\lambda}^{19}+49\,{\lambda}^{18}+44\,{\lambda}^{17}+30\,{\lambda}^{16}\\&&+38\,{\lambda}^{15}+55\,{\lambda}^{14}+59\,{\lambda}^{13}+6\,{\lambda}^{12}+2\,{\lambda}^{11} +36\,{\lambda}^{10}+18\,{\lambda}^{9}+2\,{\lambda}^{8}+41\,{\lambda}^{7}\\&&+62\,{\lambda}^{6}+3\,{\lambda}^{5}+49\,{\lambda}^{4}+{\lambda}^{3}+33\,{\lambda}^{2}+36\,\lambda+69\,,\\ \mu &=& 62\,\lambda^{40}+46\,\lambda^{39}+11\,\lambda^{38}+33\,\lambda^{37}+75\,\lambda^{36}+19\,\lambda^{35}+53\,\lambda^{34}+10\,\lambda^{33}\\&& +48\,\lambda^{32}+47\,\lambda^{31} +77\,\lambda^{30}+14\,\lambda^{29}+49\,\lambda^{28}+47\,\lambda^{27}+38\,\lambda^{26}+19\,\lambda^{25}\\&&+25\,\lambda^{24}+44\,\lambda^{23}+68\,\lambda^{22}+15\,\lambda^{21}+36\,\lambda^{20}+9\,\lambda^{19}+73\,\lambda^{18}+13\,\lambda^{17}\\&&+64\,\lambda^{16}+5\,\lambda^{15}+67\,\lambda^{14}+82\,\lambda^{13}+69\,\lambda^{12}+9\,\lambda^{11}+69\,\lambda^{10}+35\,\lambda^{9}\\&&+57\,\lambda^{8}+57\,\lambda^{7}+7\,\lambda^{6}+11\,\lambda^{5}+37\,\lambda^{4}+78\,\lambda^{3}+10\,\lambda^{2}+73\,\lambda\,,\\ 0&=&{\lambda}^{40}+48\,{\lambda}^{39}+67\,{\lambda}^{38}+35\,{\lambda}^{37}+50\,{\lambda}^{36}+23\,{\lambda}^{ 35}+4\,{\lambda}^{34}+12\,{\lambda}^{33}\\&&+37\,{\lambda}^{32}+49\,{\lambda}^{31}+40\,{\lambda}^{30}+71\,{\lambda}^{29}+60\,{\lambda}^{28}+79\,{\lambda}^{27}+19\,{\lambda}^{26}+81\,{\lambda}^{25}\\&&+82\,{\lambda} 
^{24}+26\,{\lambda}^{23}+9\,{\lambda}^{22}+19\,{\lambda}^{21}+82\,{\lambda}^{20}+40\,{\lambda}^{19}+50\,{\lambda}^{18}+67\,{\lambda}^{17}\\&&+80\,{\lambda}^{16}+29\,{\lambda}^{15}+73\,{\lambda}^{14}+38\,{\lambda}^{13}+81\,{\lambda}^{12}+73\,{\lambda}^{11}+5\,{\lambda}^{10}+14\,{\lambda}^{9}\\&&+82\,{\lambda}^{8}+46\,{\lambda}^{7}+62\,{\lambda}^{6}+32\,{\lambda}^{5}+17\,{\lambda}^{4}+74\,{\lambda}^{3}+15\,{\lambda}^{2 }+30\,\lambda+43\,. \end{eqnarray*} Here, we only have two rational candidates for $(\lambda,\mu, \iota_4)$, the first one gives \begin{displaymath} (\iota_2, \iota_3, \iota_4, \iota_5) = (17, 51, 35, 55)\,. \end{displaymath} Now, inverting Eq.~\eqref{eq:2} yields 4 possibilities, all parameterized by $a$: \begin{enumerate} \item $\{\ {d}+74\,{{c}}^{3}=0,{c}\,{b}+45=0,{c}\,{a}+63\,{{b}}^{2}=0,{{b}}^{3}+23\,{a}=0,{e}+73=0\ \}$\,, \item or $\{\ {d}+65\,{{c}}^{3}=0,{c}\,{b}+45=0,{c}\,{a}+73\,{{b}}^{2}=0,{{b}}^{3}+46\,{a}=0,{e}+73=0\ \}$\,, \item or $\{\ {d}+18\,{{c}}^{3}=0,{c}\,{b}+38=0,{c}\,{a}+73\,{{b}}^{2}=0,{{b}}^{3}+37\,{a}=0,{e}+10=0\ \}$\,, \item or $\{\ {d}+9\,{{c}}^{3}=0,{c}\,{b}+38=0,{c}\,{a}+63\,{{b}}^{2}=0,{{b}}^{3}+60\,{a}=0,{e}+10=0\ \}$\,. \end{enumerate} A solution to the first set of equations is, for $a=1$, \begin{displaymath} m_3(x) = {x}^{3}+46\,{x}^{2}+73\,x+47\ \text{ and }\ m_2(x) = 10\,x, \end{displaymath} and the polynomial \[m(x)= m_3(x)^2-m_2(x)^3\] is $\mathop{\rm{GL}}\nolimits _2$-equivalent to ${\bf m }(x)$. Indeed \[m(\frac{76{\bf x }+70}{36{\bf x }+43})\times (36{\bf x }+43)^6={\bf m }({\bf x }).\] So we set \[{\bf m }_3({\bf x })=m_3(\frac{76{\bf x }+70}{36{\bf x }+43})\times (36{\bf x }+43)^3=15{\bf x }^3 + 30{\bf x }^2 + 46{\bf x } + 7\] and \[{\bf m }_2({\bf x })=m_2(\frac{76{\bf x }+70}{36{\bf x }+43})\times (36{\bf x }+43)^2=53{\bf x }^2 + 29{\bf x } + 54\] and we check that ${\bf m }={\bf m }_3^2-{\bf m }_2^3$. \subsection{Parameterization} The curve with equation ${\bf y }^2={\bf m }({\bf x })$ over the field with $83$ elements is isomorphic to the curve with equation \[y^2 = (a{x}^{3}+b\,{x}^{2}+c\,x+d)^2 - (e\,x)^3 = ({x}^{3}+46\,{x}^{2}+73\,x+47)^2 - (10\,x)^3\] through the change of variables \begin{equation}\label{eq:xX} x=\frac{76{\bf x }+70}{36{\bf x }+43}, \text{ and } y = \frac{{\bf y }}{(36{\bf x }+43)^3}. \end{equation} With the notation in Section~\ref{sec:compl} we have $a=a_{3,1}=1$, $b=a_{2,1}=46$, $c=a_{1,1}=73$, $d=a_{0,1}=47$, $e=10$, $a_{0,2}=-1/2$, $a_{3,0}=-e^3/2$. Let $P_0$ be the point with coordinates $x=0$ and $y=-47$. Let $P_\infty$ be the point where $x$ has a pole and $y/x^3=1$. The functions $x$ has a zero at $P_0$ and a pole at $P_\infty$. The function $z = y + a\,x^3 + b\,x^2 + c\,x + d$ has divisor $3(P_0-P_\infty)$. These two functions are related by the equation \begin{equation}\label{eq:3} ( -{e}^{3}/2+az ) {x}^{3}+ ( bx+c ) zx + ( d-z/2 ) z = 0, \end{equation} that is $( z+81 ) {x}^{3}+ ( 46\,x+73 ) zx+ ( 47+ 41\,z ) z = 0$\,. The resolvent elliptic curve has equation $v^2 = h(z)$ with \begin{displaymath} h(z) = 41\,{z}^{4}+15\,{z}^{3}+38\,{z}^{2}+46\,z+7\,. \end{displaymath} It is birationally isomorphic to the Weierstrass curve with equation \begin{math} Y^2 = X^3 + 37\,X + 60\,, \end{math} whose Icart's parameterization in $t$ is \begin{displaymath} X = \kappa/6+\,{t}^{2}/3\,,\ Y = ({t}^{3}+{t\,\kappa}+28/{t})/6 \end{displaymath} where \begin{displaymath} \kappa = \sqrt[3]{81\,{t}^{6}+79\,{t}^{2}+71+\frac{56}{{t}^{2}}}\,. 
\end{displaymath} After a birational change of variables, we obtain \begin{eqnarray*} z &=& {\frac {10\,Y+16\,X+72}{74\,{X}^{2}+79\,X+49}},\\ v &=& {\frac { ( 47\,{X}^{2}+8\,X+64 ) Y+51\,{X}^{4}+5\,{X}^{3}+ 20\,{X}^{2}+20\,X+18}{81\,{X}^{4}+72\,{X}^{3}+47\,{X}^{2}+23\,X+77}}\,. \end{eqnarray*} We then apply the Tartaglia-Cardan formulae to Eq.~\eqref{eq:3} in order to obtain $x$ and $y=z-m_3(x)$ as functions of $t$. Inverting the change of variables in Equation~(\ref{eq:xX}) gives a point $({\bf x },{\bf y })$ on the initial curve. \subsection{The density of target curves} We prove that the construction in Section~\ref{sec:compl} provides a parameterization for a fixed positive proportion of genus 2 curves over ${\mathbb F}_q$ when $q$ is prime to $6$ and large enough. We call ${\mathcal S}$ the set of non-degenerate sextic binary forms with coefficients in ${\mathbb F}_q$. Scalar multiplication \[(\lambda,m(X_1,X_0))\mapsto \lambda m(X_1,X_0)\] defines an action of the multiplicative group ${\FF _q}^*$ on ${\mathcal S}$. The linear group $\mathop{\rm{GL}}\nolimits _2({\FF _q})$ also acts on ${\mathcal S}$. Call $G$ the subgroup of $\mathop{\rm{GL}}\nolimits _2({\FF _q})\times {\FF _q}^*$ consisting of pairs $( \gamma, \lambda)$ where $\lambda$ is a square. To every non-degenerate sextic binary form $m(X_1,X_0)$ with coefficients in ${\mathbb F}_q$ we associate the ${\mathbb F}_q$-isomorphism class of the curve with equation $y^2=m(x,1)$. This defines a surjective map $\nu$ from ${\mathcal S}$ onto the set ${\mathcal I}$ of ${\mathbb F}_q$-isomorphism classes of genus 2 curves over ${\FF _q}$. The fibers of $\nu$ are the orbits for the action of $G$ on ${\mathcal S}$. When $q$ tends to infinity, the proportion of forms in ${\mathcal S}$ with non-trivial stabilizer in $G$ tends to zero. So it is equivalent to count isomorphism classes of curves in ${\mathcal I}$ or to count forms in ${\mathcal S}$. We call ${\mathcal P}$ the set of pairs $(m_2,m_3)$ consisting of a split quadratic form \[m_2(X_1,X_0)=(aX_1-bX_0)(cX_1-dX_0)\] and a cubic form $m_3$, such that $m_3^2-m_2^3$ is a non-degenerate sextic form. The cardinality of ${\mathcal P}$ is $q^7\times (1/2+o(1))$ when $q$ tends to infinity. Let $\chi : {\mathcal P}\rightarrow {\mathcal S}$ be the map that sends $(m_2,m_3)$ onto $m_3^2-m_2^3$. According to work by Clebsch \cite{CLE} and Elkies \cite[Theorem 3]{ELK}, the fibers of $\chi$ have no more than $240$ elements. So the image of $\chi$ has cardinality at least $q^7\times (1/480 +o(1))$ and density at least $1/480+o(1)$. \begin{theorem} Let $q$ be a prime power that is prime to $6$. The proportion of all genus 2 curves over the field with $q$ elements that can be parameterized by $3$-radicals is at least $1/480+\epsilon(q)$, where $\epsilon$ tends to zero when $q$ tends to infinity. \end{theorem} \section{Other families of covers}\label{sec:other} In Sections~\ref{sec:g1} and~\ref{sec:g2} we have studied two families of $\mu_3\rtimes \mu_2$ covers corresponding to $(r_s,r_t)=(2,2)$ and $(r_s,r_t)=(4,2)$ respectively. In this section we quickly review a few other possibilities. We also present an interesting family of $\mu_5\rtimes \mu_2$ covers. \subsection{The case $(r_s,r_t)=(4,1)$} Both $B$ and $C$ have genus 1. The map $B\rightarrow E$ is any degree three map having a triple pole. If $B$ is given by a Weierstrass model, then for every scalar $t$, the function $y+tx$ will do. So we obtain a one-parameter family of parameterizations of $B$ by elliptic curves $C_t$.
The resolvents $C_t$ form a non-isotrivial family. However, we observed that the 3-torsion group scheme $C_t[3]$ is isomorphic to $B[3]$ for every value of $t$. \subsection{The case $(r_s,r_t)=(6,1)$}\label{sec:61} Both $B$ and $C$ have genus 2. The map $B\rightarrow E$ is any degree three map having a triple pole. There is one such map for every non-Weierstrass point $P$ on $B$. We obtain a one-parameter family of parameterizations of $B$ by genus 2 curves $C_P$. The resolvents $C_P$ form a non-isotrivial family. However, we observed that the 3-torsion group scheme $J_{C_P}[3]$ is isomorphic to $J_B[3]$ for every $P\in B$. \subsection{The case $(r_s,r_t)=(8,1)$} Both $B$ and $C$ have genus 3. The map $B\rightarrow E$ is a degree three map having a triple pole $P$. This pole is a rational Weierstrass point. The curve $C$ is hyperelliptic. For every genus 3 curve $B$ having a rational Weierstrass point, we thus obtain a parameterization of $B$ by a hyperelliptic curve of genus 3. Conversely, for every hyperelliptic curve of genus 3 which we can parameterize, we obtain a parameterization for a 1-dimensional family of non-hyperelliptic genus 3 curves. \subsection{Curves with a $\mu_5\rtimes \mu_2$ action}\label{sec:mu5mu2} This time we assume that the characteristic of $K$ is prime to $10$. Let $\zeta_5\in \bar K$ be a primitive $5$-th root of unity. We denote by $\mu_5\rtimes \mu_2$ the subgroup scheme of $\mathop{\rm{Sym}}\nolimits (\mu_5)$ generated by $x\mapsto x^{-1}$ and $x\mapsto \zeta_5x$. Let $A$ be a projective, absolutely integral, smooth curve over $K$. We assume that $\mathop{\rm{Aut}}\nolimits (A\otimes_K\bar K)$ contains the finite {\'e}tale $K$-group scheme $\mu_5\rtimes \mu_2$. We set $B=A/\mu_2$, and $C=A/\mu_5$. If $C$ admits a parameterization by $S$-radicals as in Equation~(\ref{eq:para}), and if the normalization $D'$ of the fiber product of $A$ and $D$ above $C$ is absolutely integral, then we can construct an $S\cup \{5\}$-parameterization of $B$ just as in Section~\ref{sec:mu3mu2}. We assume that $E=A/(\mu_5\rtimes \mu_2)$ has genus 0. Let $r_d$ be the number of branched points with ramification type $2,2,1$. Let $r_t$ be the number of branched points with ramification type $5$. According to the Hurwitz Genus Formula \cite[III.4.12, III.5.1]{Stich} the genus of $B$ is \[g_B=r_d+2r_t-4.\] Every branched point of type $2,2,1$ of the cover $B\rightarrow E$ gives rise to a branched point of type $2,2,2,2,2$ of the cover $A\rightarrow E$ and to a simple branched point of $C\rightarrow E$. And every totally branched point of the cover $B\rightarrow E$ gives rise to a branched point of type $5,5$ of the cover $A\rightarrow E$ and to a non-branched point of $C\rightarrow E$. So \[g_A=\frac{5r_d}{2}+4r_t-9, \text{ \, and \,\,\,}g_C=\frac{r_d}{2}-1.\] We still call \[m=r_d+r_t-3\] the {\it modular dimension}. The {\it genericity condition} is \[2r_d+5r_t\le 12-2\epsilon(r_d+2r_t-4),\] where $\epsilon(0)=3$, $\epsilon(1)=1$, and $\epsilon(n)=0$ for $n\ge 2$. An interesting case is when $r_d=6$ and $r_t=0$. Then both $B$ and $C$ have genus~2. The map $B\rightarrow E$ is a $\mu_5\rtimes \mu_2$-cover. The cover $A\rightarrow C$ is unramified. It is a quotient by $\mu_5$. Associated to it, there is a $C_5$ inside $J_C$. So we are just dealing with a genus~2 curve $C$ having a 5-torsion point in its Jacobian. We provide explicit equations for this situation in Section~\ref{sec:g25}.
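As a quick consistency check of the genus formulas above in the case $r_d=6$ and $r_t=0$, we get
\[g_B=6+0-4=2,\qquad g_C=\frac{6}{2}-1=2,\qquad g_A=\frac{5\cdot 6}{2}+0-9=6,\]
which agrees with the Hurwitz formula for the unramified degree $5$ cover $A\rightarrow C$, namely $2g_A-2=5(2g_C-2)$.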
\section{Genus 2 curves with a 5-torsion divisor}\label{sec:g25} We assume that $K$ has characteristic prime to $10$. Let $C$ be a genus 2 curve having a $K$-rational point of order~$5$ in its Jacobian. We assume that this point is the class of $P_\infty-P_0$ where $P_\infty$ and $P_0$ are two $K$-rational points on $C$. We give explicit equations for $C$, $P_0$ and $P_\infty$ depending on rational parameters. In Sections~\ref{sec:cas1}, \ref{sec:cas2}, and \ref{sec:cas3}, we distinguish three cases depending on the action of the hyperelliptic involution $\sigma$ on $P_0$ and $P_\infty$. We note that these two points cannot both be Weierstrass points. We finally give in Section~\ref{sec:exg25} an example of how to combine this construction and the previous ones in order to parameterize more genus 2 curves. \subsection{A first special case}\label{sec:cas1} We first assume that $P_0$ is a Weierstrass point. So $P_\infty$ is not. Let $x$ be a degree $2$ function having a pole at $P_\infty$ and a zero at $P_0$. Let $y$ be a function as in Section~\ref{sec:gene2}. We have $y^2=f(x)$ for some degree $6$ polynomial in $K[x]$. Let $z\in K(C)$ be a function with divisor $5(P_0-P_\infty)$. We write \[z=a(x)+yb(x)\] with $a(x)$ and $b(x)$ in $K(x)$. We deduce from Equations~(\ref{eq:v1}), (\ref{eq:v2}) and (\ref{eq:v3}) that $a$ and $b$ are polynomials with $\deg(a)\le 5$ and $\deg(b)\le 2$. Since $z$ has a pole of order $5$ at $P_\infty$ and has valuation $0$ at $\sigma(P_\infty)$, we actually know that $\deg(a) = 5$ and $\deg(b)= 2$. Also $b$ is divisible by $x$ exactly twice, and $a$ is divisible by $x$ at least thrice. Multiplying $z$ by a scalar, we may ensure that $a$ is monic. Multiplying $y$ by a scalar, we may ensure that $b=x^2$. And $a(x)=x^3(x^2+kx+l)$ for some $k$ and some $l$ in $K$. There exists a scalar $w\in K^*$ such that \[z\times \sigma(z)=wx^5=x^4(x^2(x^2+kx+l)^2-f(x)).\] So $f(x)=x^2(x^2+kx+l)^2-wx$. The curve $C$ has affine equation \[y^2=x^2(x^2+kx+l)^2-wx,\] $P_\infty$ is one of the two points at infinity, and $P_0$ is the point $(0,0)$. This is essentially the model given by Boxall, Grant and Lepr{\'e}vost \cite{BGL}. \subsection{Another special case}\label{sec:cas2} We assume now that $\sigma (P_0)=P_\infty$. Let $x$ be a degree two function having poles at $P_0$ and $P_\infty$. Let $y$ and $f(x)$ be as in Section~\ref{sec:gene2}. Let $z$ be a function with divisor $5(P_0-P_\infty)$. We write $z=a(x)+yb(x)$ where $a$ and $b$ are polynomials in $x$ with degrees $5$ and $2$. Multiplying $z$ by a constant in $K$, we may assume that $a$ is monic. Multiplying $y$ by a constant in $K$, we may assume that $b$ is monic. Adding a constant to $x$, we may assume that \[b(x)=x^2-k\] for some $k\in K$. There is a scalar $w\in K^*$ such that \[z\times \sigma(z)=w=a^2-fb^2.\] So $w$ is a square in the algebra $K[x]/b(x)$. This leaves two possibilities. Either $w=W^2$ for some $W\in K^*$ and $a(x)\equiv W\pmod{b(x)}$, or $w=W^2k$ for some $W\in K^*$ and $a(x)\equiv Wx\pmod{b(x)}$. We study these two subcases successively. \subsubsection{If $w=W^2$ and $a(x)\equiv W\pmod{b(x)}$} We check that in fact \[a(x)\equiv W\pmod{b(x)^2}.\] Since $a$ is monic, there exists a scalar $j\in K$ such that $a=W+(x+j)b^2$. We deduce expressions for $a$, $b$ and $f$ in the parameters $k$, $W$, and $j$. The actual dimension of the family is $2$ because we may multiply $x$ by a scalar. \subsubsection{If $w=W^2k$ and $a(x)\equiv Wx\pmod{b(x)}$} In particular $k$ is not $0$.
We check that $a(x)\equiv Wx+a_1(x)b(x)\pmod{b(x)^2}$ with $a_1(x)=-Wx/(2k)$. So there exists a scalar $j\in K$ such that \[a=Wx -Wxb(x)/(2k)+(x+j)b(x)^2.\] We deduce expressions for $a$, $b$ and $f$ in the parameters $k$, $W$, and $j$. The actual dimension of the family is $2$ again. \subsection{Generic case}\label{sec:cas3} We assume that neither $P_0$ nor $P_\infty$ is a Weierstrass point and that $\sigma(P_0)\not =P_\infty$. Let $x$ be a degree $2$ function having a zero at $P_0$ and a pole at $P_\infty$. Let $y$ be a function as in Section~\ref{sec:gene2}. We have $y^2=f(x)$ where $f\in K[x]$ is a degree $6$ polynomial. Both $f(0)$ and the leading coefficient of $f$ are squares in $K$. Let $z\in K(C)$ be a function with divisor $5(P_0-P_\infty)$. We write \[z=a(x)+yb(x)\] with $a(x)$ and $b(x)$ in $K(x)$. We deduce from Equations~(\ref{eq:v1}), (\ref{eq:v2}) and (\ref{eq:v3}) that $a$ and $b$ are polynomials with $\deg(a)\le 5$ and $\deg(b)\le 2$. Since $z$ has a pole of order~$5$ at $P_\infty$ and has valuation $0$ at $\sigma(P_\infty)$, we actually know that $\deg(a) = 5$ and $\deg(b)= 2$. Multiplying $z$ by a scalar, we may ensure that $a$ is monic. Multiplying $y$ by a scalar, we may ensure that $b$ is monic. Since $z$ has a zero of order~$5$ at $P_0$ and has valuation $0$ at $\sigma(P_0)$, we know that $a(0)\not = 0$ and $b(0)\not =0$. The three polynomials $a(x)$, $b(x)$, and $f(x)$ are related by the equation \[a^2-fb^2=wx^5\] for some $w\in K^*$. In particular, $wx$ is a square modulo $b(x)$. We can easily deduce that \[b(x)=x^2+(2k-wl^2)x+k^2\] for some $k$ and $l$ in $K^*$. A square root of $wx$ modulo $b(x)$ is then $(k+x)/l$. A square root of $wx^5$ modulo $b$ is then \[a_0(x)=\frac{(k^2-3kwl^2+w^2l^4)x+(k-wl^2)k^2}{l}.\] Using Hensel's lemma, we deduce that $a$ is the square root of $wx^5$ modulo $b^2$ of the form $a_0+a_1b$ with \[a_1(x)=\frac{k^2-2wkl^2+2w^2l^4+x(wl^2+k)}{2wl^3}.\] So there exists $j\in K$ such that $a=a_0+a_1b+a_2b^2$ with \[a_2(x)=x+j.\] We deduce the expressions for $a$, $b$ and $f=(a^2-wx^5)/b^2$ in the parameters $j$, $k$, $l$, $w$. \subsection{An example}\label{sec:exg25} Let $K$ be a field with $83$ elements. We set $w=1$, $j=2$, $k=3$, $l=14$ and find $a(x)=x^5 + 37x^4 + 78x^3 + 18x^2 + 26x + 29$ and $b(x)=x^2+59x+9$, and \[f(x)=x^6 + 39x^5 + 64x^4 + 7x^3 + x^2 + 19x + 36.\] The curve $C$ with equation $y^2=f(x)$ has genus 2. Its Jacobian has $3\cdot5\cdot7\cdot71$ points over $K$. We set $z=a(x)+yb(x)$ and define a cyclic unramified covering $A$ of $C$ by setting $t^5=z$. We lift the action of the hyperelliptic involution $\sigma$ to $A$ by setting $\sigma(t)=x/t$. The function $u=t+x/t$ is invariant under $\sigma$. The field $K(u,x)$ is the function field of the quotient curve $B=A/\sigma$. A singular plane model for $B$ is given by the equation \[u^5+78xu^3+5x^2u=2a(x)=2(x^5+37x^4+78x^3+18x^2+26x+29).\] Note the Chebyshev polynomial on the left-hand side. The Jacobian of $B$ has $5\cdot37^2$ points over $K$. In particular, its 3-torsion is trivial. However, we can parameterize the curve $B$ using the parameterization of $C$ constructed in Section~\ref{sec:exg2}. Note that $C$ appears in Section~\ref{sec:exg2} under the name $B$. \subsection{Composing parameterizations} In Section~\ref{sec:exg25} we parameterize a genus 2 curve (call it $B_2$) by another genus 2 curve (call it $C_2$), using a $\mu_5\rtimes \mu_2$ action on some curve $A_2$.
In Section~\ref{sec:exg2} we constructed a parameterization of $C_2=B_1$ by a genus one curve (call it $C_1$) using a $\mu_3\rtimes \mu_2$ action on some curve $A_1$. This $C_1$ can be parameterized, e.g., using Icart's parameterization. Composing the three parameterizations, we obtain a parameterization of $B_2$ by $\mathbb P^1$. \begin{figure} \caption{Composing parameterizations} \label{fig:passage3} \end{figure} This situation is represented in Figure~\ref{fig:passage3}. The curve $D_1$ is the normalization of the fiber product of $D$ and $A_1$ over $C_1$. The curve $D_2$ is the normalization of the fiber product of $D_1$ and $A_2$ over $C_2$. We can prove that $D_1$ and $D_2$ are absolutely irreducible by observing that all down-left arrows have degree a power of two, while all down-right arrows are Galois of odd degree. The interest of this construction is that, since the Jacobian of $B_2$ has trivial 3-torsion, we reach a curve that was inaccessible before. We may compose again and again, e.g.\ with parameterizations as in Section~\ref{sec:61}. It is natural to ask whether we can reach in this way all genus 2 curves over a large enough finite field of cardinality $q$ when $q$ is prime to $30$. Answering this question requires studying certain morphisms from a moduli space of covers to the moduli space of genus 2 curves: proving in particular that the morphism is surjective and that the geometric fibers are absolutely irreducible. \end{document}
math
\begin{document} \preprint{APS/123-QED} \title{The Zeno and anti-Zeno effects: studying modified decay rates for spin-boson models with both strong and weak system-environment couplings} \thanks{It is a paper on a possible extension of the work done in Ref.~\cite{chaudhry2017quantum}.} \author{Irfan Javed} \email{[email protected]} \author{Mohsin Raza} \email{[email protected]} \affiliation{ School of Science and Engineering, Lahore University of Management Sciences (LUMS), Opposite Sector U, D.H.A., Lahore 54792, Pakistan } \date{\today} \begin{abstract} In this paper, we look into what happens to a quantum system under repeated measurements if system evolution is removed before each measurement is performed. Beginning with investigating a single two-level system coupled to two independent baths of harmonic oscillators, we move to replacing it with a large collection of such systems, thereby invoking the large spin-boson model. Whereas each of our two-level systems interacts strongly with one of the aforementioned baths, it interacts weakly with the other. A polaron transformation is used to make it possible for the problem in the strong coupling regime to be treated with perturbation theory. We find that the case involving a single two-level system exhibits qualitative and quantitative differences from the case involving a collection of them; however, the general effects of strong and weak couplings turn out to be the same as those in the presence of system evolution, something which allows us to establish that system evolution has no practical bearing on any of these effects. \end{abstract} \maketitle \section{\label{sec. 1}Introduction} We start with the paradigmatic spin-boson model~\cite{leggett1987dynamics}, but we consider the presence of both strong and weak system-environment couplings. The Hamiltonian describing the problem is the following: \begin{equation} \begin{aligned}[b] H_{L}^{(0)} =\:&\frac{\epsilon}{2}\sigma_{z}+\frac{\Delta}{2}\sigma_{x}+\sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}+\sum_{k}\alpha_{k}c_{k}^{\dagger}c_{k}\\ &+\sigma_{z}\sum_{k}\left(g_{k}^{*}b_{k}+g_{k}b_{k}^{\dagger}\right)+\sigma_{x}\sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right). \end{aligned} \label{eq. 1} \end{equation} Here, $\frac{\epsilon}{2}\sigma_{z}+\frac{\Delta}{2}\sigma_{x}$ is the system Hamiltonian, $\sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}+\sum_{k}\alpha_{k}c_{k}^{\dagger}c_{k}$ is the environment Hamiltonian, and $\sigma_{z}\sum_{k}\left(g_{k}^{*}b_{k}+g_{k}b_{k}^{\dagger}\right)+\sigma_{x}\sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right)$ gives the system-environment interaction. $\epsilon$ characterizes the energies of the system energy eigenstates, $\Delta$ is the tunneling amplitude, and $\omega_{k}$ and $\alpha_{k}$ are the frequencies of harmonic oscillators in the two harmonic oscillator baths interacting with the system. $b_{k}/b_{k}^{\dagger}$ and $c_{k}/c_{k}^{\dagger}$ are the annihilation/creation operators of the first and second baths, respectively, $\sigma_{x}$ and $\sigma_{z}$ are the standard Pauli operators, and we set $\hbar$ equal to $1$ throughout. 
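For concreteness, the structure of this Hamiltonian can be illustrated numerically with a severely truncated environment. The following sketch (assuming the Python package QuTiP, with a handful of oscillator modes per bath, real couplings, and arbitrary illustrative parameter values) builds a toy version of Eq.~(\ref{eq. 1}); it is not used in the calculations below.
\begin{verbatim}
# Toy numerical representation of H_L^(0) (assuming QuTiP).
from qutip import sigmax, sigmaz, destroy, qeye, tensor

eps, Delta, Nfock = 1.0, 0.2, 4
wk, gk = [0.5, 1.0], [0.4, 0.3]      # first bath (strong coupling)
ak, fk = [0.7, 1.2], [0.05, 0.04]    # second bath (weak coupling)
nmodes = len(wk) + len(ak)

def embed(op, slot):                 # operator acting on one slot
    ops = [qeye(2)] + [qeye(Nfock)] * nmodes
    ops[slot] = op
    return tensor(ops)

H = 0.5 * eps * embed(sigmaz(), 0) + 0.5 * Delta * embed(sigmax(), 0)
for i, (w, g) in enumerate(zip(wk, gk), start=1):
    b = embed(destroy(Nfock), i)
    H += w * b.dag() * b + embed(sigmaz(), 0) * (g * b + g * b.dag())
for i, (al, f) in enumerate(zip(ak, fk), start=1 + len(wk)):
    c = embed(destroy(Nfock), i)
    H += al * c.dag() * c + embed(sigmax(), 0) * (f * c + f * c.dag())
\end{verbatim}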
Superscript $(0)$ denotes the first version of our Hamiltonian, subscript $L$ denotes the lab frame, and henceforth, we use the following definitions: $H_{S1} = \frac{\epsilon}{2}\sigma_{z}$, $H_{S2} = \frac{\Delta}{2}\sigma_{x}$, $H_{B1} = \sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}$, $H_{B2} = \sum_{k}\alpha_{k}c_{k}^{\dagger}c_{k}$, $V_{C1} = \sigma_{z}\sum_{k}\left(g_{k}^{*}b_{k}+g_{k}b_{k}^{\dagger}\right)$, and $V_{C2} = \sigma_{x}\sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right)$. As per our Hamiltonian, a spin-$\frac{1}{2}$ (two-level) system interacts with an environment comprised of two independent baths of harmonic oscillators. Whereas one bath interacts with the $z$ component of the spin of the system, the other interacts with the $x$ component. All through this paper, we assume that interaction, or coupling, with the $z$ component is strong and that interaction, or coupling, with the $x$ component is weak, something which implies that $\abs{g_{k}}>\abs{f_{k}}$ and $\abs{f_{k}}<1$. If the system is prepared in the state $\ket{\uparrow}$, then it evolves both due to the tunneling term $\frac{\Delta}{2}\sigma_{x}$ and due to the system-environment couplings. However, interested in changes in the system state stemming from the system-environment interactions only, we remove the evolution due to our system Hamiltonian, $H_{S1}+H_{S2}$, before performing any measurement to check the state of the system~\cite{matsuzaki2010quantum}. The effective decay rate obtained as a result is what we call the modified decay rate, and we investigate how it depends on the strengths of the strong and weak system-environment couplings. We apply this very treatment to the problem generalizing our spin-boson model to $N_{S}$ two-level systems, all of which interact with the aforementioned environment of harmonic oscillator baths in the same way as the single two-level system described above; the only difference is that we prepare our system in a coherent spin state rather than $\ket{\uparrow}$ this time. Also known as the large spin-boson model~\cite{chaudhry2013role}, this problem is described by the following Hamiltonian: \begin{equation} \begin{aligned}[b] H_{L}^{(1)} =\:&\epsilon J_{z}+\Delta J_{x}+\sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}+\sum_{k}\alpha_{k}c_{k}^{\dagger}c_{k}\\ &+2J_{z}\sum_{k}\left(g_{k}^{*}b_{k}+g_{k}b_{k}^{\dagger}\right)\\ &+2J_{x}\sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right), \end{aligned} \label{eq. 2} \end{equation} where $J_{x}$, $J_{y}$, and $J_{z}$ are the usual angular momentum operators obeying the commutation relations $[J_{k}, J_{l}] = \iota\epsilon_{klm}J_{m}$; superscript $(1)$ denotes the second version of our Hamiltonian; and all other symbols have the meanings ascribed to them in the paragraph following Eq.~(\ref{eq. 1}). Moreover, for this problem, we use the following definitions henceforth: $H'_{S1} = \epsilon J_{z}$, $H'_{S2} = \Delta J_{x}$, $H'_{B1} = \sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}$, $H'_{B2} = \sum_{k}\alpha_{k}c_{k}^{\dagger}c_{k}$, $V'_{C1} = J_{z}\sum_{k}\left(g_{k}^{*}b_{k}+g_{k}b_{k}^{\dagger}\right)$, and $V'_{C2} = J_{x}\sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right)$. \section{\label{sec. 2}Results} \subsection{\label{sec. 2a}Spin-boson model} \subsubsection{\label{sec. 2aa}System density matrix} In this case, our Hamiltonian is $H_{L}^{(0)}$ (Eq.~(\ref{eq. 
1})), and we use a polaron transformation defined by $U_{P} = e^{\chi\sigma_{z}/2}$, where $\chi = \sum_{k}\left(\frac{2g_{k}}{\omega_{k}}b_{k}^{\dagger}-\frac{2g_{k}^{*}}{\omega_{k}}b_{k}\right)$, to transform it~\cite{silbey1984variational}. In the polaron frame then, the Hamiltonian is \begin{equation} \begin{aligned}[b] H^{(0)} =\:&\frac{\epsilon}{2}\sigma_{z}+\sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}+\sum_{k}\alpha_{k}c_{k}^{\dagger}c_{k}\\ &+\left[\frac{\Delta}{2}+\sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right)\right]\left(\sigma_{+}e^{\chi}+\sigma_{-}e^{-\chi}\right). \end{aligned} \label{eq. 3} \end{equation} Since system-environment coupling in the polaron frame is weak, the initial system-environment state could be written as $\rho^{(0)}(0) = \rho_{S}^{(0)}(0)\otimes\rho_{B1}^{(0)}(0)\otimes\rho_{B2}^{(0)}(0)$, where $\rho_{S}^{(0)}(0) = \ket{\uparrow}\bra{\uparrow}$, $\rho_{B1}^{(0)}(0) = \frac{e^{-\beta H_{B1}}}{Z_{B1}}$ with $Z_{B1} = \mathrm{Tr}_{B1}\left(e^{-\beta H_{B1}}\right)$, and $\rho_{B2}^{(0)}(0) = \frac{e^{-\beta H_{B2}}}{Z_{B2}}$ with $Z_{B2} = \mathrm{Tr}_{B2}\left(e^{-\beta H_{B2}}\right)$. Using time-dependent perturbation theory~\cite{koshino2005quantum}, we find that \begin{widetext} \begin{equation} \begin{aligned}[b] \rho_{S}^{(0)}(\tau) = U_{S}^{(0)}(\tau)\bigg[&\rho_{S}^{(0)}(0)+\iota\sum_{\mu}\int_{0}^{\tau}dt_{1}\left[\rho_{S}^{(0)}(0), \widetilde{F}_{\mu}^{(0)}(t_{1})\right]\left\langle\widetilde{B}_{\mu}^{(0)}(t_{1})\right\rangle_{B1}\left\langle\widetilde{J}_{\mu}^{(0)}(t_{1})\right\rangle_{B2}\\ &+\iota\frac{\Delta}{2}\sum_{\mu}\int_{0}^{\tau}dt_{1}\left[\rho_{S}^{(0)}(0), \widetilde{F}_{\mu}^{(0)}(t_{1})\right]\left\langle\widetilde{B}_{\mu}^{(0)}(t_{1})\right\rangle_{B1}\\ &+\sum_{\mu\nu}\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}\left(\left[\widetilde{F}_{\mu}^{(0)}(t_{1}), \rho_{S}^{(0)}(0)\widetilde{F}_{\nu}^{(0)}(t_{2})\right]C_{\mu\nu}^{(0)}(t_{1}, t_{2})K_{\mu\nu}^{(0)}(t_{1}, t_{2})+\mathrm{h.c.}\right)\\ &+\frac{\Delta}{2}\sum_{\mu\nu}\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}\left(\left[\widetilde{F}_{\mu}^{(0)}(t_{1}), \rho_{S}^{(0)}(0)\widetilde{F}_{\nu}^{(0)}(t_{2})\right]C_{\mu\nu}^{(0)}(t_{1}, t_{2})\left\langle\widetilde{J}_{\nu}^{(0)}(t_{2})\right\rangle_{B2}+\mathrm{h.c.}\right)\\ &+\frac{\Delta}{2}\sum_{\mu\nu}\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}\left(\left[\widetilde{F}_{\mu}^{(0)}(t_{1}), \rho_{S}^{(0)}(0)\widetilde{F}_{\nu}^{(0)}(t_{2})\right]C_{\mu\nu}^{(0)}(t_{1}, t_{2})\left\langle\widetilde{J}_{\mu}^{(0)}(t_{1})\right\rangle_{B2}+\mathrm{h.c.}\right)\\ &+\frac{\Delta^{2}}{4}\sum_{\mu\nu}\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}\left(\left[\widetilde{F}_{\mu}^{(0)}(t_{1}), \rho_{S}^{(0)}(0)\widetilde{F}_{\nu}^{(0)}(t_{2})\right]C_{\mu\nu}^{(0)}(t_{1}, t_{2})+\mathrm{h.c.}\right)\bigg]U_{S}^{(0)\dagger}(\tau). \end{aligned} \label{eq. 4} \end{equation} \end{widetext} Here, $F_{1}^{(0)} = \sigma_{+}$, $F_{2}^{(0)} = \sigma_{-}$, $B_{1}^{(0)} = X$, $B_{2}^{(0)} = X^{\dagger}$, $J_{1}^{(0)} = \sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right)$, and $J_{2}^{(0)} = \sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right)$. 
As to their time-evolved counterparts, $\widetilde{F}_{\mu}^{(0)}(t) = U_{S}^{(0)\dagger}(t)F_{\mu}^{(0)}U_{S}^{(0)}(t)$ with $U_{S}^{(0)}(t) = e^{-\iota H_{S1}t}$, $\widetilde{B}_{\mu}^{(0)}(t) = U_{B1}^{(0)\dagger}(t)B_{\mu}^{(0)}U_{B1}^{(0)}(t)$ with $U_{B1}^{(0)}(t) = e^{-\iota H_{B1}t}$, and $\widetilde{J}_{\mu}^{(0)}(t) = U_{B2}^{(0)\dagger}(t)J_{\mu}^{(0)}U_{B2}^{(0)}(t)$ with $U_{B2}^{(0)}(t) = e^{-\iota H_{B2}t}$. Finally, $\langle\ldots\rangle_{B}$ stands for $\mathrm{Tr}_{B}\left[\rho_{B}^{(0)}(\ldots)\right]$, environment correlation functions are defined as $C_{\mu\nu}^{(0)}(t_{1}, t_{2}) = \left\langle\widetilde{B}_{\mu}^{(0)}(t_{1})\widetilde{B}_{\nu}^{(0)}(t_{2})\right\rangle_{B1}$ and $K_{\mu\nu}^{(0)}(t_{1}, t_{2}) = \left\langle\widetilde{J}_{\mu}^{(0)}(t_{1})\widetilde{J}_{\nu}^{(0)}(t_{2})\right\rangle_{B2}$, and h.c. denotes the Hermitian conjugate. Now, since we want the modified decay rate, we go one step further and compute the system density matrix obtained with the removal of the evolution effected by the system Hamiltonian. In the polaron frame, we call this matrix $\rho_{Sn}^{(0)}(\tau)$, and it happens to be \begin{equation*} \mathrm{Tr}_{B1, B2}\left(e^{\iota H_{S,P}^{(0)}\tau}e^{-\iota H^{(0)}\tau}\rho^{(0)}(0)e^{\iota H^{(0)}\tau}e^{-\iota H_{S,P}^{(0)}\tau}\right), \end{equation*} where $H_{S, P}^{(0)} = \frac{\epsilon}{2}\sigma_{z}+\frac{\Delta}{2}(\sigma_{+}e^{\chi}+\sigma_{-}e^{-\chi})$. We note that $e^{\iota H_{S,P}^{(0)}\tau}$ and $e^{-\iota H_{S,P}^{(0)}\tau}$ remove the evolution due to the system Hamiltonian before a measurement is performed. Since we assume that the tunneling amplitude and the system-environment coupling in the polaron frame are small, we could expand both $e^{-\iota H_{S, P}^{(0)}\tau}$ and $e^{-\iota H^{(0)}\tau}$ into perturbation series. Keeping terms to second order then, we find that $\rho_{Sn}^{(0)}(\tau)$ is the sum of $\rho_{S}^{(0)}(\tau)$ and some additional terms. It could easily be shown that most of these additional terms contribute nothing to the modified decay rate. The terms that need to be worked out, however, are \begin{equation*} \begin{aligned}[b] &\mathrm{Tr}_{B1, B2}\left(U^{(0)}(\tau)A_{1}^{(0)}\rho^{(0)}(0)U^{(0)\dagger}(\tau)A_{SP1}^{(0)}\right),\\ &\mathrm{Tr}_{B1, B2}\left(A_{SP1}^{(0)\dagger}U^{(0)}(\tau)\rho^{(0)}(0)A_{1}^{(0)\dagger}U^{(0)\dagger}(\tau)\right),\\ &\mathrm{Tr}_{B1, B2}\left(A_{SP1}^{(0)\dagger}U^{(0)}(\tau)\rho^{(0)}(0)U^{(0)\dagger}(\tau)A_{SP1}^{(0)}\right),\\ &\mathrm{Tr}_{B1, B2}\left(U^{(0)}(\tau)A_{1d}^{(0)}\rho^{(0)}(0)U^{(0)\dagger}(\tau)A_{SP1}^{(0)}\right),\\ \end{aligned} \end{equation*} and \begin{equation*} \mathrm{Tr}_{B1, B2}\left(A_{SP1}^{(0)\dagger}U^{(0)}(\tau)\rho^{(0)}(0)A_{1d}^{(0)\dagger}U^{(0)\dagger}(\tau)\right) \end{equation*} with $U^{(0)}(\tau) = U_{S}^{(0)}(\tau)U_{B1}^{(0)}(\tau)U_{B2}^{(0)}(\tau)$, $A_{1}^{(0)} = -\iota\sum_{\mu}\int_{0}^{\tau}dt\widetilde{F}_{\mu}(t)\otimes\widetilde{B}_{\mu}(t)\otimes\widetilde{J}_{\mu}(t)$, $A_{1d}^{(0)} = -\iota\frac{\Delta}{2}\sum_{\mu}\int_{0}^{\tau}dt\widetilde{F}_{\mu}(t)\otimes\widetilde{B}_{\mu}(t)$, and $A_{SP1}^{(0)} = -\iota\frac{\Delta}{2}\sum_{\mu}\int_{0}^{\tau}dt\widetilde{F}_{\mu}(t)\otimes B_{\mu}$. \subsubsection{\label{sec. 
2ab}Survival probability and modified decay rate} At time $t = 0$, we prepare our system in the state $\ket{\uparrow}$, where $\sigma_{z}\ket{\uparrow} = \ket{\uparrow}$, and we subsequently perform a measurement after every interval of duration $\tau$ to check if the system state is still $\ket{\uparrow}$. The evolution due to the system Hamiltonian is removed before each measurement is performed so that the survival probability obtained could be used to calculate the modified decay rate. The required survival probability then is $s_{n}^{(0)}(\tau) = 1-\bra{\downarrow}\rho_{Sn}^{(0)}(\tau)\ket{\downarrow}$. Using our expression for $\rho_{S}^{(0)}(\tau)$ (Eq.~(\ref{eq. 4}))---together with the additional terms we computed---to calculate $\rho_{Sn}^{(0)}(\tau)$, we obtain \begin{widetext} \begin{equation} \begin{aligned}[b] s_{n}^{(0)}(\tau) =\:&1-2\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}C_{12}^{(0)}(t_{1}, t_{2})K_{12}^{(0)}(t_{1}, t_{2})e^{\iota\epsilon(t_{1}-t_{2})}\right)\\ &-\frac{\Delta^{2}}{2}\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}C_{12}^{(0)}(t_{1}, t_{2})e^{\iota\epsilon(t_{1}-t_{2})}\right)\\ &-\frac{\Delta^{2}}{4}\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}C_{12}^{(0)}(0, 0)e^{\iota\epsilon(t_{2}-t_{1})}\\ &+\frac{\Delta^{2}}{2}\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}C_{12}^{(0)}(\tau, t_{1})e^{\iota\epsilon(t_{2}-t_{1})}\right)\\ =\:&1-2\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}C^{(0)}(t_{1}-t_{2})K^{(0)}(t_{1}-t_{2})e^{\iota\epsilon(t_{1}-t_{2})}\right)\\ &-\frac{\Delta^{2}}{2}\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}C^{(0)}(t_{1}-t_{2})e^{\iota\epsilon(t_{1}-t_{2})}\right)\\ &-\frac{\Delta^{2}}{4}\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}C^{(0)}(0)e^{\iota\epsilon(t_{2}-t_{1})}\\ &+\frac{\Delta^{2}}{2}\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}C^{(0)}(\tau-t_{1})e^{\iota\epsilon(t_{2}-t_{1})}\right)\\ =\:&1-2\Re\left[\int_{0}^{\tau}dt\int_{0}^{t}dt'\left(C^{(0)}(t')K^{(0)}(t')e^{\iota\epsilon t'}+\frac{\Delta^{2}}{4}C^{(0)}(t')e^{\iota\epsilon t'}\right)\right]\\ &-\frac{\Delta^{2}}{4}\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}C^{(0)}(0)e^{\iota\epsilon(t_{2}-t_{1})}\\ &+\frac{\Delta^{2}}{2}\Re\left(\int_{0}^{\tau}dt\int_{0}^{\tau}dt'C^{(0)}(t')e^{\iota\epsilon(t-\tau+t')}\right). \end{aligned} \label{eq. 5} \end{equation} \end{widetext} In the penultimate step, we use $C^{(0)}(t_{1}-t_{2}) = C_{12}^{(0)}(t_{1}, t_{2})$ and $K^{(0)}(t_{1}-t_{2}) = K_{12}^{(0)}(t_{1}, t_{2})$, and in the last step, we change variables via $t' = t_{1}-t_{2}$ and $t = t_{1}$ in the first double integral and via $t' = \tau-t_{1}$ and $t = t_{2}$ in the third one. When we calculate the environment correlation functions, we find that $C^{(0)}(t) = e^{-\Phi_{R1}(t)}e^{-\iota\Phi_{I1}(t)}$ with $\Phi_{R1}(t) = \int_{0}^{\infty}d\omega J(\omega)\frac{4-4\cos(\omega t)}{\omega^{2}}\coth(\frac{\beta\omega}{2})$ and $\Phi_{I1}(t) = \int_{0}^{\infty}d\omega J(\omega)\frac{4\sin(\omega t)}{\omega^2}$ and that $K^{(0)}(t) = \Phi_{R2}(t)-\iota\Phi_{I2}(t)$ with $\Phi_{R2}(t) = \int_{0}^{\infty}d\alpha H(\alpha)\cos(\alpha t)\coth(\frac{\beta\alpha}{2})$ and $\Phi_{I2}(t) = \int_{0}^{\infty}d\alpha H(\alpha)\sin(\alpha t)$. Here, spectral densities have been introduced as $\sum_{k}\abs{g_k}^{2}(\ldots)\,\to\,\int_{0}^{\infty}d\omega J(\omega)(\ldots)$ and $\sum_{k}\abs{f_k}^{2}(\ldots)\,\to\,\int_{0}^{\infty}d\alpha H(\alpha)(\ldots)$. 
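These correlation exponents are straightforward to evaluate numerically. For instance, for an Ohmic spectral density $J(\omega)=G\omega e^{-\omega/\omega_{c}}$ (the case studied below) and at zero temperature, where $\coth(\beta\omega/2)\rightarrow1$, the quantities $\Phi_{R1}$ and $\Phi_{I1}$ can be checked against the closed forms quoted in the next paragraph; a minimal sketch (assuming Python with NumPy/SciPy and illustrative parameter values) follows.
\begin{verbatim}
# Zero-temperature check of Phi_R1 and Phi_I1 for an Ohmic bath
# J(w) = G w exp(-w/wc) (assuming NumPy/SciPy).
import numpy as np
from scipy.integrate import quad

G, wc, t = 0.8, 10.0, 0.3
J = lambda w: G * w * np.exp(-w / wc)
PhiR1 = quad(lambda w: J(w) * (4 - 4 * np.cos(w * t)) / w**2,
             0, np.inf)[0]
PhiI1 = quad(lambda w: J(w) * 4 * np.sin(w * t) / w**2,
             0, np.inf)[0]
print(PhiR1, 2 * G * np.log(1 + (wc * t)**2))  # closed form used below
print(PhiI1, 4 * G * np.arctan(wc * t))        # closed form used below
\end{verbatim}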
Since system-environment coupling in the polaron frame is weak, we could neglect the build-up of correlations between the system and the environment and write the survival probability at time $t = N\tau$ as $S_{n}^{(0)}(t = N\tau) = \left(s_{n}^{(0)}(\tau)\right)^{N} \equiv e^{-\Gamma^{(0)}_{n}(\tau)N\tau}$, thereby defining the modified decay rate $\Gamma_{n}^{(0)}(\tau)$. It follows that $\Gamma_{n}^{(0)}(\tau) = -\frac{1}{\tau}\ln s_{n}^{(0)}(\tau)$~\cite{chaudhry2017quantum}. Using the expression we derived for $s_{n}^{(0)}(\tau)$ (Eq.~(\ref{eq. 5})), we can work out the following expression for $\Gamma_{n}^{(0)}(\tau)$: \begin{equation*} \begin{aligned}[b] &\frac{2}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'e^{-\Phi_{R1}\left(t'\right)}\cos(\epsilon t'-\Phi_{I1}(t'))\Phi_{R2}(t')\\ &+\frac{2}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'e^{-\Phi_{R1}\left(t'\right)}\sin(\epsilon t'-\Phi_{I1}(t'))\Phi_{I2}(t')\\ &+\frac{\Delta^{2}}{2\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'e^{-\Phi_{R1}\left(t'\right)}\cos(\epsilon t'-\Phi_{I1}(t'))\\ &+\frac{\Delta^{2}}{\tau\epsilon^{2}}\sin^{2}\left(\frac{\epsilon\tau}{2}\right)e^{-\Phi_{R1}(0)-\iota\Phi_{I1}(0)}\\ &-\frac{\Delta^{2}}{\tau\epsilon}\sin\left(\frac{\epsilon\tau}{2}\right)\int_{0}^{\tau}dte^{-\Phi_{R1}(t)}\cos\left[\epsilon\left(t-\frac{\tau}{2}\right)-\Phi_{I1}(t)\right]. \end{aligned} \end{equation*} We now plot $\Gamma_{n}^{(0)}(\tau)$ against $\tau$. To do so, we model the spectral densities as $J(\omega) = G\omega^{s}\omega_{c}^{1-s}e^{-\omega/\omega_{c}}$ and $H(\alpha) = F\alpha^{r}\alpha_{c}^{1-r}e^{-\alpha/\alpha_{c}}$, where $G$ and $F$ are dimensionless parameters characterizing the system-environment coupling strengths, $\omega_{c}$ and $\alpha_{c}$ are cut-off frequencies, and $s$ and $r$ are Ohmicity parameters~\cite{breuer2002theory}. Whereas $G$ corresponds to strong coupling, $F$ corresponds to the weak one; therefore, in our plots, we keep $G$ greater than $F$. To be particular, we work at zero temperature and look at the Ohmic case for each of the spectral densities ($s = 1$ and $r = 1$). Doing so gives $\Phi_{R1}(t) = 2G\ln(1+\omega_{c}^{2}t^{2})$, $\Phi_{I1}(t) = 4G\tan^{-1}(\omega_{c}t)$, $\Phi_{R2}(t) = F\frac{\alpha_{c}^2\left(1-\alpha_{c}^{2}t^{2}\right)}{\left(1+\alpha_{c}^{2}t^{2}\right)^{2}}$, and $\Phi_{I2}(t) = 2F\frac{\alpha_{c}^{3}t}{\left(1+\alpha_{c}^{2}t^{2}\right)^{2}}$, allowing us to write our expression for $\Gamma_{n}^{(0)}(\tau)$ as \begin{equation*} \begin{aligned}[b] &\frac{2F}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'\frac{\alpha_{c}^{2}\left(1-\alpha_{c}^{2}t'^{2}\right)\cos(\epsilon t'-4G\tan^{-1}(\omega_{c}t'))}{\left(1+\alpha_{c}^{2}t'^{2}\right)^{2}\left(1+\omega_{c}^{2}t'^2\right)^{2G}}\\ &+\frac{4F}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'\frac{\alpha_{c}^{3}t'\sin(\epsilon t'-4G\tan^{-1}(\omega_{c}t'))}{\left(1+\alpha_{c}^{2}t'^{2}\right)^{2}\left(1+\omega_{c}^{2}t'^2\right)^{2G}}\\ &+\frac{2}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'\frac{\Delta^{2}}{4}\frac{\cos(\epsilon t'-4G\tan^{-1}(\omega_{c}t'))}{\left(1+\omega_{c}^{2}t'^2\right)^{2G}}\\ &+\frac{\Delta^{2}}{\tau\epsilon^{2}}\sin^{2}\left(\frac{\epsilon\tau}{2}\right)\\ &-\frac{\Delta^{2}}{\tau\epsilon}\sin\left(\frac{\epsilon\tau}{2}\right)\int_{0}^{\tau}dt\frac{\cos[\epsilon(t-\tau/2)-4G\tan^{-1}(\omega_{c}t)]}{(1+\omega_{c}^{2}t^{2})}. \end{aligned} \end{equation*} The integrals above could be worked out numerically, and results (graphs of $\Gamma_{n}^{(0)}(\tau)$ against $\tau$) are shown in Fig.~\ref{fig. 
1} for different values of the system-environment coupling strengths, $G$ and $F$. What is absolutely clear is that despite having removed the system Hamiltonian evolution, we obtain the results of Ref.~\cite{chaudhry2017quantum}: increasing the strong coupling strength leads to a decrease in the decay rate (Fig.~\ref{fig. 1a}) whereas increasing the weak coupling strength leads to an increase (Fig.~\ref{fig. 1b}). Also, just as Ref.~\cite{chaudhry2017quantum} illustrates, whereas increasing the weak coupling strength does not change the qualitative behavior of the Zeno/anti-Zeno transition, increasing the strong coupling strength does have an effect, namely that it causes the transition to occur at smaller values of $\tau$. Clearly, system evolution has no bearing on the general effects of strong and weak system-environment couplings. \null \begin{figure} \caption{\textbf{Variation of the modified decay rate for the spin-boson model with both strong and weak system-environment coupling strengths.}} \label{fig. 1} \end{figure} \subsection{\label{sec. 2b}Large spin-boson model} \subsubsection{\label{sec. 2ba}System density matrix} The Hamiltonian to be considered for this case is $H_{L}^{(1)}$ (Eq.~(\ref{eq. 2})), and we again use a polaron transformation to transform it, the transformation being the same as that used in section~\ref{sec. 2aa}. The Hamiltonian in the polaron frame is thus \begin{equation} \begin{aligned}[b] H^{(1)} =\:&\epsilon J_{z}+\sum_{k}\omega_{k}b_{k}^{\dagger}b_{k}+\sum_{k}\alpha_{k}c_{k}^{\dagger}c_{k}-\kappa J_{z}^{2}\\ &+\left[\frac{\Delta}{2}+\sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right)\right]\left(J_{+}e^{\chi}+J_{-}e^{-\chi}\right), \end{aligned} \label{eq. 6} \end{equation} where $\kappa = 4\sum_{k}\frac{\abs{g_{k}}^{2}}{\omega_{k}}$. Since the system-environment coupling in the polaron frame is weak again, we could write the initial system-environment state as $\rho^{(1)}(0)$ = $\rho_{S}^{(1)}(0)\otimes\rho_{B1}^{(1)}(0)\otimes\rho_{B2}^{(1)}(0)$, where $\rho_{S}^{(1)}(0) = \ket{j}\bra{j}$, $\rho_{B1}^{(1)}(0) = \frac{e^{-\beta H'_{B1}}}{Z'_{B1}}$ with $Z'_{B1} = \mathrm{Tr}_{B1}\left(e^{-\beta H'_{B1}}\right)$, and $\rho_{B2}^{(1)}(0) = \frac{e^{-\beta H'_{B2}}}{Z'_{B2}}$ with $Z'_{B2} = \mathrm{Tr}_{B2}\left(e^{-\beta H'_{B2}}\right)$. $\ket{j}$ represents a spin coherent state with $j = N_{S}/2$, where $N_{S}$, as said in section~\ref{sec. 1}, is the number of two-level systems we work with in our large spin-boson model.
Using time-dependent perturbation theory then, we find that \begin{widetext} \begin{equation} \begin{aligned}[b] \rho_{S}^{(1)}(\tau) = U_{S}^{(1)}(\tau)\bigg[&\rho_{S}^{(1)}(0)+\iota\sum_{\mu}\int_{0}^{\tau}dt_{1}\left[\rho_{S}^{(1)}(0), \widetilde{F}_{\mu}^{(1)}(t_{1})\right]\left\langle\widetilde{B}_{\mu}^{(1)}(t_{1})\right\rangle_{B1}\left\langle\widetilde{J}_{\mu}^{(1)}(t_{1})\right\rangle_{B2}\\ &+\iota\frac{\Delta}{2}\sum_{\mu}\int_{0}^{\tau}dt_{1}\left[\rho_{S}^{(1)}(0), \widetilde{F}_{\mu}^{(1)}(t_{1})\right]\left\langle\widetilde{B}_{\mu}^{(1)}(t_{1})\right\rangle_{B1}\\ &+\sum_{\mu\nu}\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}\left(\left[\widetilde{F}_{\mu}^{(1)}(t_{1}), \rho_{S}^{(1)}(0)\widetilde{F}_{\nu}^{(1)}(t_{2})\right]C_{\mu\nu}^{(1)}(t_{1}, t_{2})K_{\mu\nu}^{(1)}(t_{1}, t_{2})+\mathrm{h.c.}\right)\\ &+\frac{\Delta}{2}\sum_{\mu\nu}\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}\left(\left[\widetilde{F}_{\mu}^{(1)}(t_{1}), \rho_{S}^{(1)}(0)\widetilde{F}_{\nu}^{(1)}(t_{2})\right]C_{\mu\nu}^{(1)}(t_{1}, t_{2})\left\langle\widetilde{J}_{\nu}^{(1)}(t_{2})\right\rangle_{B2}+\mathrm{h.c.}\right)\\ &+\frac{\Delta}{2}\sum_{\mu\nu}\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}\left(\left[\widetilde{F}_{\mu}^{(1)}(t_{1}), \rho_{S}^{(1)}(0)\widetilde{F}_{\nu}^{(1)}(t_{2})\right]C_{\mu\nu}^{(1)}(t_{1}, t_{2})\left\langle\widetilde{J}_{\mu}^{(1)}(t_{1})\right\rangle_{B2}+\mathrm{h.c.}\right)\\ &+\frac{\Delta^{2}}{4}\sum_{\mu\nu}\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}\left(\left[\widetilde{F}_{\mu}^{(1)}(t_{1}), \rho_{S}^{(1)}(0)\widetilde{F}_{\nu}^{(1)}(t_{2})\right]C_{\mu\nu}^{(1)}(t_{1}, t_{2})+\mathrm{h.c.}\right)\bigg]U_{S}^{(1)\dagger}(\tau). \end{aligned} \label{eq. 7} \end{equation} \end{widetext} Here, $F_{1}^{(1)} = J_{+}$, $F_{2}^{(1)} = J_{-}$, $B_{1}^{(1)} = X$, $B_{2}^{(1)} = X^{\dagger}$, $J_{1}^{(1)} = \sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right)$, and $J_{2}^{(1)} = \sum_{k}\left(f_{k}^{*}c_{k}+f_{k}c_{k}^{\dagger}\right)$. Their time-evolved counterparts happen to be $\widetilde{F}_{\mu}^{(1)}(t) = U_{S}^{(1)\dagger}(t)F_{\mu}^{(1)}U_{S}^{(1)}(t)$ with $U_{S}^{(1)}(t) = e^{-\iota\left(H'_{S1}-\kappa J_{z}^{2}\right)t}$, $\widetilde{B}_{\mu}^{(1)}(t) = U_{B1}^{(1)\dagger}(t)B_{\mu}^{(1)}U_{B1}^{(1)}(t)$ with $U_{B1}^{(1)}(t) = e^{-\iota H'_{B1}t}$, and $\widetilde{J}_{\mu}^{(1)}(t) = U_{B2}^{(1)\dagger}(t)J_{\mu}^{(1)}U_{B2}^{(1)}(t)$ with $U_{B2}^{(1)}(t) = e^{-\iota H'_{B2}t}$. Finally, $\langle\ldots\rangle_{B}$ stands for $\mathrm{Tr}_{B}\left[\rho_{B}^{(1)}(\ldots)\right]$, environment correlation functions are defined as $C_{\mu\nu}^{(1)}(t_{1}, t_{2}) = \left\langle\widetilde{B}_{\mu}^{(1)}(t_{1})\widetilde{B}_{\nu}^{(1)}(t_{2})\right\rangle_{B1}$ and $K_{\mu\nu}^{(1)}(t_{1}, t_{2}) = \left\langle\widetilde{J}_{\mu}^{(1)}(t_{1})\widetilde{J}_{\nu}^{(1)}(t_{2})\right\rangle_{B2}$, and h.c. denotes the Hermitian conjugate. Since we want the modified decay rate, however, we compute the system density matrix with the system Hamiltonian evolution removed. Calling this matrix $\rho_{Sn}^{(1)}(\tau)$ in the polaron frame, we find it to be \begin{equation*} \mathrm{Tr}_{B1, B2}\left(e^{\iota H_{S,P}^{(1)}\tau}e^{-\iota H^{(1)}\tau}\rho^{(1)}(0)e^{\iota H^{(1)}\tau}e^{-\iota H_{S,P}^{(1)}\tau}\right), \end{equation*} where $H_{S, P}^{(1)} = \epsilon J_{z}+\frac{\Delta}{2}(J_{+}X+J_{-}X^{\dagger})$. 
It could easily be noted that $e^{\iota H_{S, P}^{(1)}\tau}$ and $e^{-\iota H_{S, P}^{(1)}\tau}$ remove the evolution due to the system Hamiltonian before a measurement is performed. Assuming that the tunneling amplitude and the system-environment coupling in the polaron frame are small, we expand both $e^{-\iota H_{S, P}^{(1)}\tau}$ and $e^{-\iota H^{(1)}\tau}$ into perturbation series, and keeping terms to second order only, we find that $\rho_{Sn}^{(1)}(\tau)$ is the sum of $\rho_{S}^{(1)}(\tau)$ and some additional terms. As before, we could show that most of these additional terms do not contribute anything to the modified decay rate. The terms that need to be worked out, however, are \begin{equation*} \begin{aligned}[b] &\mathrm{Tr}_{B1, B2}\left(A_{SP1}^{(1)\dagger}U^{(1)}(\tau)A_{1}^{(1)}\rho^{(1)}(0)U^{(1)\dagger}(\tau)\right),\\ &\mathrm{Tr}_{B1, B2}\left(A_{SP1}^{(1)\dagger}U^{(1)}(\tau)A_{1d}^{(1)}\rho^{(1)}(0)U^{(1)\dagger}(\tau)\right),\\ &\mathrm{Tr}_{B1, B2}\left(A_{SP2}^{(1)\dagger}U^{(1)}(\tau)\rho^{(1)}(0)U^{(1)\dagger}(\tau)\right),\\ &\mathrm{Tr}_{B1, B2}\left(U^{(1)}(\tau)\rho^{(1)}(0)U^{(1)\dagger}(\tau)A_{SP2}^{(1)}\right),\\ &\mathrm{Tr}_{B1, B2}\left(U^{(1)}(\tau)\rho^{(1)}(0)A_{1}^{(1)\dagger}U^{(1)\dagger}(\tau)A_{SP1}^{(1)}\right),\\ &\mathrm{Tr}_{B1, B2}\left(U^{(1)}(\tau)\rho^{(1)}(0)A_{1d}^{(1)\dagger}U^{(1)\dagger}(\tau)A_{SP1}^{(1)}\right),\\ \end{aligned} \end{equation*} and \begin{equation*} \mathrm{Tr}_{B1, B2}\left(U^{(1)}(\tau)\rho^{(1)}(0)U^{(1)\dagger}(\tau)A_{SP2}^{(1)\dagger}\right) \end{equation*} with $U^{(1)}(\tau) = U_{S}^{(1)}(\tau)U_{B1}^{(1)}(\tau)U_{B2}^{(1)}(\tau)$, $A_{1}^{(1)} = -\iota\sum_{\mu}\int_{0}^{\tau}dt\widetilde{F}_{\mu}(t)\otimes\widetilde{B}_{\mu}(t)\otimes\widetilde{J}_{\mu}(t)$, $A_{1d}^{(1)} = -\iota\frac{\Delta}{2}\sum_{\mu}\int_{0}^{\tau}dt\widetilde{F}_{\mu}(t)\otimes\widetilde{B}_{\mu}(t)$, $A_{SP1}^{(1)} = -\iota\frac{\Delta}{2}\sum_{\mu}\int_{0}^{\tau}dt\widetilde{F}_{\mu}(t)\otimes B_{\mu}$, and $A_{SP2}^{(1)} = -\frac{\Delta^{2}}{4}\sum_{\mu\nu}\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}\widetilde{F}_{\mu}(t_{1})\widetilde{F}_{\nu}(t_{2})\otimes B_{\mu}B_{\nu}$. \subsubsection{\label{sec. 2bb}Survival probability and modified decay rate} We prepare our system in the state $\ket{j}$, where $J_{z}\ket{j} = j\ket{j}$, at time $t = 0$ and perform a measurement after every interval of duration $\tau$ to check if the system state is still $\ket{j}$. The system Hamiltonian evolution is removed before every measurement so that the survival probability obtained corresponds to the modified decay rate. The required survival probability is thus $s_{n}^{(1)}(\tau) = \bra{j}\rho_{Sn}^{(1)}(\tau)\ket{j}$, where $\rho_{Sn}^{(1)}$ is just the sum of $\rho_{S}^{(1)}(\tau)$ (Eq.~(\ref{eq. 7})) and the additional terms we computed. 
We hence get \begin{widetext} \begin{equation} \begin{aligned}[b] s_{n}^{(1)}(\tau) =\:&1-4j\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}C_{12}^{(1)}(t_{1}, t_{2})K_{12}^{(1)}(t_{1}, t_{2})e^{\iota\left[\epsilon(t_{1}-t_{2})+\kappa(1-2j)(t_{1}-t_{2})\right]}\right)\\ &-\Delta^{2}j\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}C_{12}^{(1)}(t_{1}, t_{2})e^{\iota\left[\epsilon(t_{1}-t_{2})+\kappa(1-2j)(t_{1}-t_{2})\right]}\right)\\ &-\Delta^{2}j\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}C_{12}^{(1)}(0, 0)e^{\iota\epsilon(t_{2}-t_{1})}\\ &+\Delta^{2}j\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}C_{12}^{(1)}(\tau, t_{1})e^{\iota\left[\epsilon(t_{2}-t_{1})+\kappa(2j-1)(-\tau+t_{1})\right]}\right)\\ =\:&1-4j\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}C^{(1)}(t_{1}-t_{2})K^{(1)}(t_{1}-t_{2})e^{\iota\left[\epsilon(t_{1}-t_{2})+\kappa(1-2j)(t_{1}-t_{2})\right]}\right)\\ &-\Delta^{2}j\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{t_{1}}dt_{2}C^{(1)}(t_{1}-t_{2})e^{\iota\left[\epsilon(t_{1}-t_{2})+\kappa(1-2j)(t_{1}-t_{2})\right]}\right)\\ &-\Delta^{2}j\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}C^{(1)}(0)e^{\iota\epsilon(t_{2}-t_{1})}\\ &+\Delta^{2}j\Re\left(\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}C^{(1)}(\tau-t_{1})e^{\iota\left[\epsilon(t_{2}-t_{1})+\kappa(2j-1)(-\tau+t_{1})\right]}\right)\\ =\:&1-4j\Re\left(\int_{0}^{\tau}dt\int_{0}^{t}dt'C^{(1)}(t')K^{(1)}(t')e^{\iota\left[\epsilon t'+\kappa(1-2j)t'\right]}\right)\\ &-\Delta^{2}j\Re\left(\int_{0}^{\tau}dt\int_{0}^{t}dt'C^{(1)}(t')e^{\iota\left[\epsilon t'+\kappa(1-2j)t'\right]}\right)\\ &-\Delta^{2}j\int_{0}^{\tau}dt_{1}\int_{0}^{\tau}dt_{2}C^{(1)}(0)e^{\iota\epsilon(t_{2}-t_{1})}\\ &+\Delta^{2}j\Re\left(\int_{0}^{\tau}dt\int_{0}^{\tau}dt'C^{(1)}(t')e^{\iota\left[\epsilon(t-\tau+t')-\kappa(2j-1)t'\right]}\right). \end{aligned} \label{eq. 8} \end{equation} \end{widetext} In the penultimate step, we use $C^{(1)}(t_{1}-t_{2}) = C_{12}^{(1)}(t_{1}, t_{2})$ and $K^{(1)}(t_{1}-t_{2}) = K_{12}^{(1)}(t_{1}, t_{2})$, and in the last step, we change variables via $t' = t_{1}-t_{2}$ and $t = t_{1}$ in the first two integrals and via $t' = \tau-t_{1}$ and $t = t_{2}$ in the fourth one. Calculating the correlation functions $C^{(1)}(t)$ and $K^{(1)}(t)$, we find that $C^{(1)}(t) = e^{-\Phi_{R1}(t)}e^{-\iota\Phi_{I1}(t)}$ with $\Phi_{R1}(t) = \int_{0}^{\infty}d\omega J(\omega)\frac{4-4\cos(\omega t)}{\omega^{2}}\coth(\frac{\beta\omega}{2})$ and $\Phi_{I1}(t) = \int_{0}^{\infty}d\omega J(\omega)\frac{4\sin(\omega t)}{\omega^2}$ and that $K^{(1)}(t) = \Phi_{R2}(t)-\iota\Phi_{I2}(t)$ with $\Phi_{R2}(t) = \int_{0}^{\infty}d\alpha H(\alpha)\cos(\alpha t)\coth(\frac{\beta\alpha}{2})$ and $\Phi_{I2}(t) = \int_{0}^{\infty}d\alpha H(\alpha)\sin(\alpha t)$, where the spectral density has been introduced as $\sum_{k}\abs{g_k}^{2}(\ldots)\,\to\,\int_{0}^{\infty}d\omega J(\omega)(\ldots)$ and $\sum_{k}\abs{f_k}^{2}(\ldots)\,\to\,\int_{0}^{\infty}d\alpha H(\alpha)(\ldots)$. Again, since system-environment coupling in the polaron frame is weak, we could ignore the correlations building between the system and the environment and follow the reasoning in section~\ref{sec. 
2ab} to show that the modified decay rate, $\Gamma_{n}^{(1)}(\tau)$, is \begin{equation*} \begin{aligned}[b] &\frac{4j}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'e^{-\Phi_{R1}(t')}\cos(D_{1}(t'))\Phi_{R2}(t')\\ &+\frac{4j}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'e^{-\Phi_{R1}(t')}\sin(D_{1}(t'))\Phi_{I2}(t')\\ &+\frac{\Delta^{2}j}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'e^{-\Phi_{R1}(t')}\cos(D_{1}(t'))\\ &+\frac{\Delta^{2}}{\tau}(2j)\frac{1}{\epsilon^{2}}\sin^{2}\left(\frac{\epsilon\tau}{2}\right)e^{-\Phi_{R1}(0)-\iota\Phi_{I1}(0)}\\ &-\frac{\Delta^{2}}{\tau}(2j)\frac{1}{\epsilon}\sin\left(\frac{\epsilon\tau}{2}\right)\int_{0}^{\tau}dte^{-\Phi_{R1}(t)}\cos(D_{2}(t)), \end{aligned} \end{equation*} where $D_{1}(t) = \epsilon t+\kappa(1-2j)t-\Phi_{I1}(t)$ and $D_{2}(t) = -\kappa(2j-1)t+\epsilon(t-\tau/2)-\Phi_{I1}(t)$. To plot $\Gamma_{n}^{(1)}(\tau)$ against $\tau$, we model the spectral densities the same way as before and look at the Ohmic case for each of them. Working at zero temperature then allows us to write our expression for $\Gamma_{n}^{(1)}(\tau)$ as \begin{equation*} \begin{aligned}[b] &\frac{4Fj}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'\frac{\alpha_{c}^{2}(1-\alpha_{c}^{2}t'^{2})\cos(D_{1}(t'))}{(1+\alpha_{c}^{2}t'^{2})^{2}(1+\omega_{c}^{2}t'^{2})^{2G}}\\ &+\frac{8Fj}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'\frac{\alpha_{c}^{3}t'\sin(D_{1}(t'))}{(1+\alpha_{c}^{2}t'^{2})^{2}(1+\omega_{c}^{2}t'^{2})^{2G}}\\ &+\frac{\Delta^{2}j}{\tau}\int_{0}^{\tau}dt\int_{0}^{t}dt'\frac{\cos(D_{1}(t'))}{(1+\omega_{c}^{2}t'^{2})^{2G}}\\ &+\frac{\Delta^{2}}{\tau}(2j)\frac{1}{\epsilon^{2}}\sin^{2}\left(\frac{\epsilon\tau}{2}\right)\\ &-\frac{\Delta^{2}}{\tau}(2j)\frac{1}{\epsilon}\sin\left(\frac{\epsilon\tau}{2}\right)\int_{0}^{\tau}dt\frac{\cos(D_{2}(t))}{(1+\omega_{c}^{2}t^{2})^{2G}}. \end{aligned} \end{equation*} The integrals could again be worked out numerically. Results are shown in Fig.~\ref{fig. 2} for different values of the system-environment coupling strengths, $G$ and $F$, and they are precisely what one would expect them to be if the system Hamiltonian evolution were kept: increasing the weak coupling strength increases the decay rate (Fig.~\ref{fig. 2b}) whereas increasing the strong coupling strength decreases it generally (Fig.~\ref{fig. 2a}). Also, as is evident, a change in the weak coupling strength has no effect on the qualitative behavior of the Zeno/anti-Zeno transitions. A change in the strong coupling strength, however, does have an effect; as before, increasing it causes the transitions to occur at smaller values of $\tau$. Once again, since we obtain these very results even if the system Hamiltonian evolution is kept, we conclude that the system evolution has no practical bearing on any of them. \null \begin{figure} \caption{\textbf{Variation of the modified decay rate for the large spin-boson model with both strong and weak system-environment coupling strengths.}} \label{fig. 2} \end{figure} \section{\label{sec. 3}Discussion} Our results clearly demonstrate that removing the evolution effected by the system Hamiltonian just before measurements are performed does not change the general effects of strong and weak system-environment couplings.
We extend previous work in this area by considering the presence of both strong and weak couplings, and we show not only that independently coupled baths produce independent effects but also that this independence remains intact even after the system evolution is removed, an observation that hints at a deeper independence of strong and weak system-environment couplings. Although we apply our strategy to the spin-boson and large spin-boson models only, our work provides a guide for applying the same strategy to other open quantum systems, thereby adding to our knowledge of how multiple system-environment couplings can be treated when studying the quantum Zeno and anti-Zeno effects. Given the ubiquity of open quantum systems, our work is broadly relevant, since it can complement essentially any investigation of Zeno and anti-Zeno effects in such systems. To highlight its scope, we note that it could be used to extend investigations of the Unruh effect and Unruh-DeWitt detectors, applied to studies of nonselective projective measurements, and even employed in analyses of the Zeno and anti-Zeno effects in quantum field theory~\cite{hussain2018decay,facchi2003unstable, majeed2018quantum}. Hence, our work---though simple---has a wide range of applications. \begin{acknowledgments} We would like to extend sincere gratitude to our colleague Hudaiba Soomro for her unstinting support throughout the project. \end{acknowledgments} \end{document}
\begin{document} \title{Existence of weak solutions to stochastic heat equations driven by truncated $\alpha$-stable white noises with non-Lipschitz coefficients} \begin{abstract} We consider a class of stochastic heat equations driven by truncated $\alpha$-stable white noises for $\alpha\in(1,2)$ with noise coefficients that are continuous but not necessarily Lipschitz and satisfy globally linear growth conditions. We prove the existence of a weak solution, taking values in two different spaces under different conditions, to such an equation using a weak convergence argument on solutions to the approximating stochastic heat equations. More precisely, for $\alpha\in(1,2)$ there exists a measure-valued weak solution. Furthermore, for $\alpha\in(1,5/3)$ there exists a function-valued weak solution, and in this case we further show that for $p\in(\alpha,5/3)$ the uniform $p$-th moment in the $L^p$-norm of the weak solution is finite, and that the weak solution is uniformly stochastically continuous in the $L^p$ sense. \noindent\textbf{Keywords:} Non-Lipschitz noise coefficients; Stochastic heat equations; Truncated $\alpha$-stable white noises; Uniform $p$-th moment; Uniform stochastic continuity. \noindent{{\bf MSC Classification (2020):} Primary: 60H15; Secondary: 60F05, 60G17} \end{abstract} \section{Introduction} \label{sec1} In this paper we study the existence of a weak solution to the following non-linear stochastic heat equation \begin{equation} \label{eq:originalequation1} \left\{\begin{array}{lcl} \dfrac{\partial u(t,x)}{\partial t}=\dfrac{1}{2} \dfrac{\partial^2u(t,x)}{\partial x^2}+ \varphi(u(t-,x))\dot{L}_{\alpha}(t,x), && (t,x)\in (0,\infty) \times(0,L),\\[0.3cm] u(0,x)=u_0(x),&&x\in[0,L],\\[0.3cm] u(t,0)=u(t,L)=0,&& t\in[0,\infty), \end{array}\right. \end{equation} where $L$ is an arbitrary positive constant, $\dot{L}_{\alpha}$ denotes a truncated $\alpha$-stable white noise on $[0,\infty)\times[0,L]$ with $\alpha\in(1,2)$, the noise coefficient $\varphi:\mathbb{R}\rightarrow \mathbb{R}$ satisfies the hypotheses given below, and the initial function $u_0$ is random and measurable. Before studying the particular equation (\ref{eq:originalequation1}), we first consider a general stochastic heat equation \begin{equation} \label{GeneralSPDE} \dfrac{\partial u(t,x)}{\partial t}=\dfrac{1}{2}\dfrac{\partial^2u(t,x)}{\partial x^2}+G(u(t,x))+H(u(t,x))\dot{F}(t,x), \quad t\geq0, x\in\mathbb{R}, \end{equation} in which $G:\mathbb{R}\rightarrow \mathbb{R}$ is Lipschitz continuous, $H:\mathbb{R}\rightarrow \mathbb{R}$ is continuous and $\dot{F}$ is a space-time white noise. When $\dot{F}$ is a Gaussian white noise, there is a growing literature on stochastic partial differential equations (SPDEs for short) related to (\ref{GeneralSPDE}), such as the stochastic Burgers equations (see, e.g., Bertini and Cancrini \cite{Bertini:1994}, Da Prato et al. \cite{Daprato:1994}), SPDEs with reflection (see, e.g., Zhang \cite{Zhang:2016}), the parabolic Anderson model (see, e.g., G\"{a}rtner and Molchanov \cite{Gartner:1990}), etc. In particular, such an SPDE arises from super-processes (see, e.g., Konno and Shiga \cite{Konno:1988}, Dawson \cite{Dawson:1993} and Perkins \cite{P1991} and references therein). For $G\equiv 0$ and $H(u)=\sqrt{u}$, the solution to (\ref{GeneralSPDE}) is the density field of a one-dimensional super-Brownian motion.
For $H(u)=\sqrt{u(1-u)}$ (the stepping-stone model in population genetics), Bo and Wang \cite{Bo:2011} considered a stochastic interacting model consisting of equations of the form (\ref{GeneralSPDE}) and proved the existence of a weak solution to the system by using a weak convergence argument. In the case that $\dot{F}$ is a Gaussian colored noise that is white in time and colored in space, for a continuous function $H$ satisfying the linear growth condition, Sturm \cite{Sturm:2003} proved the existence of a pair $(u,F)$ satisfying (\ref{GeneralSPDE}), the so-called weak solution, by first establishing the existence and uniqueness of lattice systems of SDEs driven by correlated Brownian motions with non-Lipschitz diffusion coefficients that describe branching particle systems in a random environment in which the motion process has a discrete Laplacian generator and the branching mechanism is affected by a colored Gaussian random field, and then applying an approximation procedure. Xiong and Yang \cite{Xiong:2023} proved the existence of a weak solution $(u,F)$ to (\ref{GeneralSPDE}) in a finite spatial domain with different boundary conditions by considering the weak limit of a sequence of approximating SPDEs of (\ref{GeneralSPDE}). They further proved the existence and uniqueness of the strong solution under an additional H\"{o}lder continuity assumption on $H$. If $\dot{F}$ is a L\'{e}vy space-time white noise with Lipschitz continuous coefficient $H$, Albeverio et al. \cite{Albeverio:1998} first proved the existence and uniqueness of the solution when $\dot{F}$ is a Poisson white noise. Applebaum and Wu \cite{Applebaum:2000} extended the results to a general L\'{e}vy space-time white noise. For a stochastic fractional Burgers type non-linear equation that is similar to equation (\ref{GeneralSPDE}) and driven by L\'{e}vy space-time white noise on multidimensional space variables, we refer to Wu and Xie \cite{Wu:2012} and references therein. In particular, when $\dot{F}$ is an $\alpha$-stable white noise for $\alpha\in (0,1)\cup(1,2)$, Balan \cite{Balan:2014} studied SPDE (\ref{GeneralSPDE}) with $G\equiv 0$ and Lipschitz coefficient $H$ on a bounded domain in $\mathbb{R}^d$ with zero initial condition and Dirichlet boundary conditions, and proved the existence of a random field solution $u$ for the given noise $\dot{F}$ (the so-called strong solution). The approach in \cite{Balan:2014} is to first solve the equation with truncated noise (obtained by removing from $\dot{F}$ the big jumps, i.e., the jumps whose size exceeds a fixed value $K$), yielding a solution $u_K$, and then show that for $N\geq K$ the solutions satisfy $u_N=u_K$ on the event $\{t\leq\tau_K\}$, where $\{\tau_K\}_{K\geq1}$ is a sequence of stopping times which tends to infinity as $K$ tends to infinity. Such a localization method is also applied in Peszat and Zabczyk \cite{Peszat:2006} to show the existence of a weak Hilbert-space-valued solution. For $\alpha\in(1,2)$, Wang et al. \cite{Wang:2023} studied the existence and pathwise uniqueness of the strong function-valued solution of (\ref{GeneralSPDE}) with Lipschitz coefficient $H$ using a localization method, and showed a comparison principle for solutions to such an equation with different initial functions and drift coefficients. Yang and Zhou \cite{Yang:2017} found sufficient conditions for pathwise uniqueness of solutions to a class of SPDEs (\ref{GeneralSPDE}) driven by $\alpha$-stable white noise without negative jumps and with non-decreasing H\"{o}lder continuous noise coefficient $H$.
However, the existence of a weak solution to (\ref{GeneralSPDE}) with a general non-decreasing H\"{o}lder continuous noise coefficient is left open. For stochastic heat equations driven by general heavy-tailed noises with Lipschitz noise coefficients, we refer to Chong \cite{Chong:2017} and references therein. When $G=0$, $H(u)=u^{\beta}$ with $0<\beta<1$ (non-Lipschitz continuous) in (\ref{GeneralSPDE}) and $\dot{F}$ is an $\alpha$-stable ($\alpha\in(1,2)$) white noise on $[0,\infty)\times\mathbb{R}$ without negative jumps, it is shown in Mytnik \cite{Mytnik:2002} that for $0<\alpha\beta<3$ there exists a weak solution $(u,F)$ satisfying (\ref{GeneralSPDE}) by constructing a tight sequence of approximating processes whose limit solves the associated martingale problem, and that in the case of $\alpha\beta=1$ the weak uniqueness of the solution to (\ref{GeneralSPDE}) holds. The pathwise uniqueness is shown in \cite{Yang:2017} for $\alpha\beta=1$ and $1<\alpha<\sqrt{5}-1$. For an $\alpha$-stable colored noise $\dot{F}$ without negative jumps and with H\"{o}lder continuous coefficient $H$, Xiong and Yang \cite{Xiong:2019} proved the existence of a weak solution $(u,F)$ to (\ref{GeneralSPDE}) by showing the weak convergence of solutions to SDE systems on a rescaled lattice with the discrete Laplacian and driven by a common stable random measure, which is similar to \cite{Sturm:2003}. In both \cite{Sturm:2003} and \cite{Xiong:2019} the dependence structure of the colored noise helps with establishing the existence of a weak solution. Inspired by the work in the above-mentioned literature, we are interested in the stochastic heat equation (\ref{eq:originalequation1}) in which the noise coefficient $\varphi$ satisfies the following more general hypothesis: \begin{hypothesis} \label{Hypo} $\varphi:\mathbb{R}\rightarrow \mathbb{R}$ is continuous and of globally linear growth, and there exists a sequence of Lipschitz continuous functions $\varphi^n:\mathbb{R}\rightarrow \mathbb{R}$ such that \begin{itemize} \item[(i)] $\varphi^n$ converges uniformly to $\varphi$ as $n\rightarrow\infty$; \item[(ii)] for each $n\geq1$, there exists a constant $C_n$ such that $$\vert\varphi^n(x)-\varphi^n(y)\vert \leq C_n\vert x-y\vert,\,\,\forall x,y\in\mathbb{R}.$$ \end{itemize} \end{hypothesis} The main contribution of this paper is to prove the existence and regularity of weak solutions to equation (\ref{eq:originalequation1}) under Hypothesis \ref{Hypo}. To this end, we consider two types of weak solutions that are measure-valued and function-valued, respectively. In addition, we also study the uniform $p$-th moment and uniform stochastic continuity of the weak solution to equation (\ref{eq:originalequation1}). In the case that $\varphi$ is Lipschitz continuous, the existence of the solution can usually be obtained by a standard Picard iteration (see, e.g., Dalang et al. \cite{Dalang:2009}, Walsh \cite{Walsh:1986}) or the Banach fixed point principle (see, e.g., Truman and Wu \cite{Truman:2003}, Bo and Wang \cite{Bo:2006}). We thus mainly consider the case that $\varphi$ is non-Lipschitz continuous. Since the classical approaches of Picard iteration and the Banach fixed point principle fail for SPDE (\ref{eq:originalequation1}) with non-Lipschitz $\varphi$, to prove the existence of a weak solution $(u,L_{\alpha})$ to (\ref{eq:originalequation1}), we first construct an approximating SPDE sequence with Lipschitz continuous noise coefficients $\varphi^n$, and prove the existence and uniqueness of strong solutions to the approximating SPDEs.
We then proceed to show that the sequence of solutions is tight in appropriate spaces. Finally, we prove that there exists a weak solution of (\ref{eq:originalequation1}) by using a weak convergence procedure. The rest of this paper is organized as follows. In the next section, we introduce some notation and the main theorems on the existence, uniform $p$-th moment and uniform stochastic continuity of the weak solution to (\ref{eq:originalequation1}). Section \ref{sec3} is devoted to the proof of the existence of a measure-valued weak solution to (\ref{eq:originalequation1}). In Section \ref{sec4}, for $\alpha\in(1,5/3)$ we prove that there exists a weak solution to (\ref{eq:originalequation1}) as an $L^p$-valued process with $p\in(\alpha,5/3)$, and that the weak solution has the finite uniform $p$-th moment and the uniform stochastic continuity in the $L^p$ norm with $p\in(\alpha,5/3)$. \section{Notation and main results} \label{sec2} \subsection{Notation} \label{sec2.1} Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq0}, \mathbb{P})$ be a complete probability space with filtration $(\mathcal{F}_t)_{t\geq0}$ satisfying the usual conditions, and let $ N(dt,dx,dz): [0,\infty)\times[0,L]\times \mathbb{R}\setminus\{0\}\rightarrow \mathbb{N} \cup\{0\}\cup\{\infty\} $ be a Poisson random measure on $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\geq0}, \mathbb{P})$ with intensity measure $dtdx\nu_{\alpha}(dz)$, where $dtdx$ denotes the Lebesgue measure on $[0,\infty)\times[0,L]$ and the jump size measure $\nu_{\alpha}(dz)$ for $ \alpha\in(1,2) $ is given by \begin{align} \label{eq:smalljumpsizemeasure} \nu_{\alpha}(dz):=(c_{+}z^{-\alpha-1}1_{(0,K]}(z)+c_{-}(-z)^{-\alpha-1}1_{[-K,0)}(z) )dz, \end{align} where $c_{+}+c_{-}=1$ and $K>0$ is an arbitrary constant. Define \begin{align*} \tilde{N}(dt,dx,dz):= N(dt,dx,dz)-dtdx\nu_{\alpha}(dz). \end{align*} Then $\tilde{N}(dt,dx,dz)$ is the compensated Poisson random measure (martingale measure) on $[0,\infty)\times[0,L]\times \mathbb{R}\setminus\{0\}$. As in Balan \cite[Section 5]{Balan:2014}, define a martingale measure \begin{align} \label{def:stablenoise} L_{\alpha}(dt,dx):= \int_{\mathbb{R}\setminus\{0\}}z\tilde{N}(dt,dx,dz) \end{align} for $(t, x)\in [0,\infty)\times[0,L]$. Then the corresponding distribution-valued derivative $\{\dot{L}_{\alpha}(t,x):t\in[0,\infty),x\in[0,L]\}$ is a truncated $\alpha$-stable white noise. Write $\mathcal{G}^{\alpha}$ for the class of almost surely $\alpha$-integrable random functions defined by \begin{align*} \mathcal{G}^{\alpha}:=\left\{f\in\mathbb{B}: \int_0^t\int_0^L\vert f(s,x)\vert ^{\alpha}dxds<\infty, \mathbb{P}\text{-a.s.}\,\,\text{for all} \,\,t\in[0,\infty)\right\}, \end{align*} where $\mathbb{B}$ is the space of progressively measurable functions on $[0,\infty)\times[0,L]\times\Omega$. Then it holds by Mytnik \cite[Section 5]{Mytnik:2002} that the stochastic integral with respect to $\{L_{\alpha}(ds,dx)\}$ is well defined for all $f\in \mathcal{G}^{\alpha}$. Throughout this paper, $C$ denotes a generic positive constant whose value may vary from line to line. If $C$ depends on some parameters such as $p,T$, we denote it by $C_{p,T}$. Let $G_t(x,y)$ be the fundamental solution of the heat equation $\frac{\partial u}{\partial t} =\frac{1}{2}\frac{\partial^2 u}{\partial x^2}$ on the domain $[0,\infty)\times[0,L]\times[0,L]$ with Dirichlet boundary conditions (the subscript $t$ is not a derivative but a variable).
Its explicit formula (see, e.g., Feller \cite[Page 341]{Feller:1971}) is given by \begin{equation*} G_t(x,y)=\dfrac{1}{\sqrt{2\pi t}}\sum_{k=-\infty}^{+\infty}\left\{ \exp\left(-\dfrac{(y-x+2kL)^2}{2t}\right) -\exp\left(-\dfrac{(y+x+2kL)^2}{2t} \right)\right\} \end{equation*} for $t\in(0,\infty),x,y\in[0,L]$; and $\lim_{t\downarrow0}G_t(x,y)=\delta_y(x)$, where $\delta$ is the Dirac delta distribution. Moreover, it holds by Xiong and Yang \cite[Lemmas 2.1-2.3]{Xiong:2023} that for $s,t\in[0,\infty)$ and $x,y,z\in[0,L]$ \begin{equation} \label{eq:Greenetimation0} G_t(x,y)=G_t(y,x),\,\, \int_0^L|G_t(x,y)|dy+\int_0^L|G_t(x,y)|dx\leq C, \end{equation} \begin{equation} \label{eq:Greenetimation1} \int_0^LG_s(x,y)G_t(y,z)dy=G_{t+s}(x,z), \end{equation} \begin{equation} \label{eq:Greenetimation2} \int_0^L\vert G_t(x,y)\vert ^pdy\leq Ct^{-\frac{p-1}{2}},\,\, p\geq1. \end{equation} Given a topological space $V$, let $D([0,\infty),V)$ be the space of c\`{a}dl\`{a}g paths from $[0,\infty)$ to $V$ equipped with the Skorokhod topology. For given $p\geq1$ we denote by $v_t\equiv\{v(t,\cdot),t\in[0,\infty)\}$ the $L^p([0,L])$-valued process equipped with norm \begin{equation*} \vert\vert v_t\vert\vert_{p}=\left(\int_0^L\vert v(t,x)\vert^pdx\right)^{\frac{1}{p}}. \end{equation*} For any $p\geq1$ and $T>0$ let $L_{loc}^p([0,\infty)\times[0,L])$ be the space of measurable functions $f$ on $[0,\infty)\times[0,L])$ such that \begin{equation*} \vert \vert f\vert \vert _{p,T}=\left(\int_0^T\int_0^L \vert f(t,x)\vert ^pdxdt\right)^{\frac{1}{p}}<\infty,\,\, \forall\,\, 0<T<\infty. \end{equation*} Let $B([0,L])$ be the space of all Borel functions on $[0,L]$, and let $\mathbb{M}([0,L])$ be the space of finite Borel measures on $[0,L]$ equipped with the weak convergence topology. For any $f\in B([0,L])$ and $\mu\in\mathbb{M}([0,L])$ define $ \langle f,\mu\rangle:=\int_0^L f(x)\mu(dx) $ whenever it exists. With a slight abuse of notation, for any $f,g\in B([0,L])$ we also denote by $ \langle f,g\rangle=\int_0^L f(x)g(x)dx. $ \subsection{Main results} \label{sec2.2} By a solution to equation (\ref{eq:originalequation1}) we mean a process $u_t\equiv\{u(t,\cdot),t\in[0,\infty)\}$ satisfying the following weak (variational) form equation: \begin{align} \label{eq:variationform} \langle u_t,\psi\rangle &=\langle u_0,\psi\rangle+\dfrac{1}{2}\int_0^t \langle u_s,\psi{''}\rangle ds +\int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} \varphi(u(s-,x))\psi(x)z\tilde{N}(ds,dx,dz) \end{align} for all $t\in[0,\infty)$ and for any $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=\psi^{'}(0)=\psi^{'}(L)=0$ or equivalently satisfying the following mild form equation: \begin{align} \label{eq:mildform} u(t,x)= \int_0^LG_t(x,y)u_0(y)dy +\int_0^{t+}\int_0^L \int_{\mathbb{R}\setminus\{0\}}G_{t-s}(x,y) \varphi(u(s-,y))z\tilde{N}(ds,dy,dz) \end{align} for all $t\in [0, \infty)$ and for a.e. $x\in [0,L]$, where the last terms in above equations follow from (\ref{def:stablenoise}). For the equivalence between the weak form (\ref{eq:variationform}) and mild form (\ref{eq:mildform}), we refer to Walsh \cite{Walsh:1986} and references therein. We first give the definition (see also in Mytnik \cite{Mytnik:2002}) of a weak solution to stochastic heat equation (\ref{eq:originalequation1}). 
\begin{definition} Stochastic heat equation (\ref{eq:originalequation1}) has a weak solution with initial function $u_0$ if there exists a pair $(u,L_{\alpha})$ defined on some filtered probability space such that ${L}_{\alpha}$ is a truncated $\alpha$-stable martingale measure on $[0,\infty)\times[0,L]$ and $(u,L_{\alpha})$ satisfies either equation (\ref{eq:variationform}) or equation (\ref{eq:mildform}). \end{definition} We now state the main theorems in this paper. The first theorem is on the existence of weak solution in $D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$ with $p\in(\alpha,2]$ that was first considered in Mytnik \cite{Mytnik:2002}. \begin{theorem} \label{th:mainresult} If the initial function $u_0$ satisfies $\mathbb{E}[\vert\vert u_0\vert\vert_p^p]<\infty$ for some $p\in(\alpha,2]$, then under {\rm Hypothesis \ref{Hypo}} there exists a weak solution $(\hat{u}, {\hat{L}}_{\alpha})$ to equation (\ref{eq:originalequation1}) defined on a filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, \{\hat{\mathcal{F}}_t\}_{t\geq0}, \hat{\mathbb{P}})$ such that \begin{itemize} \item[\rm (i)] $\hat{u}\in D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$; \item[\rm (ii)] ${\hat{L}}_{\alpha}$ is a truncated $\alpha$-stable martingale measure with the same distribution as ${L}_{\alpha}$. \end{itemize} Moreover, for any $T>0$ we have \begin{equation} \label{eq:momentresult} \hat{\mathbb{E}}\left[\vert\vert\hat{u}\vert\vert_{p,T}^p\right]= \hat{\mathbb{E}}\left[\int_0^T\vert\vert\hat{u}_t\vert\vert_p^pdt\right]<\infty. \end{equation} \end{theorem} The proof of Theorem \ref{th:mainresult} is deferred to Section \ref{sec3}. Under additional assumption on $\alpha$, we can show that there exists a weak solution in $D([0,\infty),L^p([0,L]))$, $p\in(\alpha,5/3)$ with better regularity. \begin{theorem} \label{th:mainresult2} Suppose that $\alpha\in (1,5/3)$. If the initial function $u_0$ satisfies $\mathbb{E}[||u_0||_p^p]<\infty$ for some $p\in(\alpha,5/3)$, then under {\rm Hypothesis \ref{Hypo}} there exists a weak solution $(\hat{u}, {\hat{L}}_{\alpha})$ to equation (\ref{eq:originalequation1}) defined on a filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, \{\hat{\mathcal{F}}_t\}_{t\geq0}, \hat{\mathbb{P}})$ such that \begin{itemize} \item[\rm (i)] $\hat{u}\in D([0,\infty),L^p([0,L]))$; \item[\rm (ii)] ${\hat{L}}_{\alpha}$ is a truncated $\alpha$-stable martingale measure with the same distribution as ${L}_{\alpha}$. \end{itemize} Furthermore, for any $T>0$ we have the following uniform $p$-moment and uniform stochastic continuity, that is, \begin{equation} \label{eq:momentresult2} \hat{\mathbb{E}}\left[\sup_{0\leq t\leq T}\vert\vert\hat{u}_t\vert\vert_p^p\right]<\infty, \end{equation} and that for each $0\leq h\leq\delta$ \begin{equation} \label{eq:timeregular} \lim_{\delta\rightarrow0}\hat{\mathbb{E}}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\vert\vert \hat{u}_{t+h}-\hat{u}_t\vert\vert_p^p\right]=0. \end{equation} \end{theorem} The proof of Theorem \ref{th:mainresult2} is deferred to Section \ref{sec4}. We present a specific stochastic heat equation to illustrate our results in the following example. 
\begin{example} Given $0<\beta<1$, consider equation (\ref{eq:originalequation1}) with $\varphi(u)=\vert u\vert^{\beta}$ for $u\in\mathbb{R}$, that is, \begin{equation*} \label{eq:example} \left\{\begin{array}{lcl} \dfrac{\partial u(t,x)}{\partial t}=\dfrac{1}{2} \dfrac{\partial^2u(t,x)}{\partial x^2}+ \vert u(t-,x)\vert^{\beta}\dot{L}_{\alpha}(t,x), && (t,x)\in (0,\infty) \times(0,L),\\[0.3cm] u(0,x)=u_0(x),&&x\in[0,L],\\[0.3cm] u(t,0)=u(t,L)=0,&& t\in[0,\infty). \end{array}\right. \end{equation*} It is clear that $\vert u \vert^{\beta}$ is a non-Lipschitz continuous function with globally linear growth. We can construct a sequence of Lipschitz continuous functions $(\varphi^n)_{n\geq1}$ of the form \begin{equation*} \varphi^n(u)=(\vert u \vert\vee\varepsilon_n)^{\beta}, \quad u\in\mathbb{R}, \end{equation*} where $\varepsilon_n\downarrow0$ as $n\uparrow\infty$, such that {\rm Hypothesis \ref{Hypo}} is satisfied; indeed, each $\varphi^n$ is Lipschitz continuous with constant $\beta\varepsilon_n^{\beta-1}$, and $\sup_{u\in\mathbb{R}}\vert\varphi^n(u)-\varphi(u)\vert=\varepsilon_n^{\beta}\rightarrow0$ as $n\rightarrow\infty$. One can then apply {\rm Theorems \ref{th:mainresult} and \ref{th:mainresult2}} to establish the existence of weak solutions to the above stochastic heat equation. \end{example} Finally, we provide some discussions on our main results in the following remarks. \begin{remark} Note that the globally linear growth of $\varphi$ in {\rm Hypothesis \ref{Hypo}} guarantees the global existence of weak solutions. One can remove this condition if one only needs the existence of a weak solution up to the explosion time. On the other hand, the uniqueness of the solution to equation (\ref{eq:originalequation1}) is still an open problem because $\varphi$ is non-Lipschitz continuous. \end{remark} \begin{remark} The weak solutions of equation (\ref{eq:originalequation1}) in {\rm Theorems \ref{th:mainresult}} and {\rm\ref{th:mainresult2}} are obtained by showing the tightness of the approximating solution sequence $(u^n)_{n\geq1}$ of equation (\ref{eq:approximatingsolution}); see {\rm Propositions \ref{th:tightnessresult}} and {\rm\ref{prop:tightnessresult2}} in {\rm Sections \ref{sec3}} and {\rm \ref{sec4}}, respectively. To show that equation (\ref{eq:originalequation1}) has a function-valued weak solution, it is necessary to restrict $\alpha\in(1,5/3)$ for the technical reason that Doob's maximal inequality cannot be directly applied to obtain the uniform $p$-th moment estimate of $(u^n)_{n\geq1}$ that is key to the proof of the tightness of $(u^n)_{n\geq1}$. To this end, we apply the factorization method in {\rm Lemma \ref{lem:uniformbound}} to transform the stochastic integral so that the uniform $p$-th moment of $(u^n)_{n\geq1}$ can be obtained. In order to remove this restriction and consider the case of $\alpha\in(1,2)$, we apply another tightness criterion, i.e., {\rm Lemma \ref{lem:tightcriterion0}}, to show the tightness of $(u^n)_{n\geq1}$. However, the weak solution of equation (\ref{eq:originalequation1}) is a measure-valued process in this situation. We also note that the existence of a function-valued weak solution of equation (\ref{eq:originalequation1}) in the case $\alpha\in[5/3,2)$ is still an unsolved problem. \end{remark} \begin{remark} If we remove the restriction of bounded jumps on the $\alpha$-stable white noise $\dot{L}_{\alpha}$ in equation (\ref{eq:originalequation1}), the jump size measure $\nu_{\alpha}(dz)$ in (\ref{eq:smalljumpsizemeasure}) becomes \begin{align*} \nu_{\alpha}(dz)=(c_{+}z^{-\alpha-1}1_{(0,\infty)}(z)+c_{-}(-z)^{-\alpha-1}1_{(-\infty,0)}(z) )dz \end{align*} for $\alpha\in(1,2)$ and $c_{+}+c_{-}=1$. As in Wang et al.
\cite[Lemma 3.1]{Wang:2023} we can construct a sequence of truncated $\alpha$-stable white noises $\dot{L}^K_{\alpha}$ with the jump size measure given by (\ref{eq:smalljumpsizemeasure}) and a sequence of stopping times $(\tau_K)_{K\geq1}$ such that \begin{equation} \label{eq:stoppingtimes1} \lim_{K\rightarrow+\infty} \tau_K=\infty,\,\,\mathbb{P}\text{-a.s.}. \end{equation} Similar to equation (\ref{eq:originalequation1}), for given $K\geq1$, we can consider the following non-linear stochastic heat equation \begin{equation} \label{eq:truncatedSHE} \left\{\begin{array}{lcl} \dfrac{\partial u_K(t,x)}{\partial t}=\dfrac{1}{2} \dfrac{\partial^2u_K(t,x)}{\partial x^2}+ \varphi(u_K(t-,x))\dot{L}^K_{\alpha}(t,x), && (t,x)\in (0,\infty) \times(0,L),\\[0.3cm] u_K(0,x)=u_0(x),&&x\in[0,L],\\[0.3cm] u_K(t,0)=u_K(t,L)=0,&& t\in[0,\infty). \end{array}\right. \end{equation} If $\varphi$ is Lipschitz continuous, then, similarly to the proof of {\rm Proposition \ref{th:Approximainresult}} below (see Wang et al. \cite[Proposition 3.2]{Wang:2023}), one can show that there exists a unique strong solution $u_K=\{u_K(t,\cdot),t\in[0,\infty)\}$ to equation (\ref{eq:truncatedSHE}) by using the Banach fixed point principle. On the other hand, by Wang et al. \cite[Lemma 3.4]{Wang:2023}, it holds for each $K\leq N$ that \begin{align*} u_K=u_N \,\,\mathbb{P}\text{-} \,a.s. \,\,\text{on}\,\{t<\tau_K\}. \end{align*} By setting \begin{align*} u=u_K,\,0\leq t<\tau_K, \end{align*} and by the fact (\ref{eq:stoppingtimes1}), we obtain the strong (weak) solution $u$ to equation (\ref{eq:originalequation1}) with noise of unbounded jumps by letting $K\uparrow+\infty$. If $\varphi$ is non-Lipschitz continuous, for any $K\geq1$, {\rm Theorem \ref{th:mainresult}} or {\rm Theorem \ref{th:mainresult2}} shows that there exists a weak solution $(\hat{u}_K, {\hat{L}}^K_{\alpha})$ to equation (\ref{eq:truncatedSHE}) defined on a filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, \{\hat{\mathcal{F}}_t\}_{t\geq0}, \hat{\mathbb{P}})_K$. However, we cannot show that for each $K\leq N$ \begin{align*} (\hat{u}_K, {\hat{L}}^K_{\alpha})=(\hat{u}_N, {\hat{L}}^N_{\alpha}) \,\,\mathbb{P}\text{-} \,a.s. \,\,\text{on}\,\,\{t<\tau_K\} \end{align*} due to the non-Lipschitz continuity of $\varphi$. Therefore, we do not know whether there exists a common probability space $(\hat{\Omega}, \hat{\mathcal{F}}, \{\hat{\mathcal{F}}_t\}_{t\geq0}, \hat{\mathbb{P}})$ on which all of the weak solutions $((\hat{u}_K, {\hat{L}}^K_{\alpha}))_{K\geq1}$ are defined. Hence, the localization method in Wang et al. \cite{Wang:2023} is no longer applicable, and the existence of the weak solution to equation (\ref{eq:originalequation1}) with untruncated $\alpha$-stable noise remains an unsolved problem. \end{remark} \section{Proof of Theorem \ref{th:mainresult}}\label{sec3} The proof of Theorem \ref{th:mainresult} proceeds in the following three steps. We first construct a sequence of approximating SPDEs with globally Lipschitz continuous noise coefficients $(\varphi^n)_{n\geq1}$ satisfying Hypothesis \ref{Hypo}, and show that for each fixed $n\geq1$ there exists a unique strong solution $u^n$ in $D([0,\infty),L^p([0,L]))$ with $p\in(\alpha,2]$ of the approximating SPDE; see Proposition \ref{th:Approximainresult}. We then prove that the approximating solution sequence $(u^n)_{n\geq1}$ is tight in both $D([0,\infty),\mathbb{M}([0,L]))$ and $L_{loc}^p([0,\infty)\times[0,L])$ for all $p\in(\alpha,2]$; see Proposition \ref{th:tightnessresult}.
Finally, we proceed to show that there exists a weak solution $(\hat{u},\hat{L}_{\alpha})$ to equation (\ref{eq:originalequation1}) defined on another probability space $(\hat{\Omega}, \hat{\mathcal{F}}, \{\hat{\mathcal{F}}_t\}_{t\geq0}, \hat{\mathbb{P}})$ by applying a weak convergence argument. For each fixed $n\geq1$, we construct the approximate SPDE of the form \begin{equation} \label{eq:approximatingsolution} \left\{\begin{array}{lcl} \dfrac{\partial u^n(t,x)}{\partial t}=\dfrac{1}{2}\dfrac{\partial^2u^n(t,x)} {\partial x^2}+\varphi^n(u^n(t-,x))\dot{L}_{\alpha}(t,x),&& (t,x)\in (0,\infty) \times (0,L),\\[0.3cm] u^n(0,x)=u_0(x),&&x\in [0,L], \\[0.3cm] u^n(t,0)=u^n(t,L)=0,&&t\in[0,\infty), \end{array}\right. \end{equation} where the coefficient $\varphi^n$ satisfies Hypothesis \ref{Hypo}. Given $n\geq1$, by a solution to equation (\ref{eq:approximatingsolution}) we mean a process $u^n_t\equiv\{u^n(t,\cdot), t\in[0,\infty)\}$ satisfying the following weak form equation: \begin{align} \label{eq:approxivariationform} \langle u^n_t,\psi\rangle &=\langle u_0,\psi\rangle+\dfrac{1}{2}\int_0^t \langle u^n_s,\psi{''}\rangle ds +\int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} \varphi^n(u^n(s-,x))\psi(x)z\tilde{N}(ds,dx,dz) \end{align} for all $t\in[0,\infty)$ and for any $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=\psi^{'}(0)=\psi^{'}(L)=0$ or equivalently satisfying the following mild form equation: \begin{align} \label{mildformapproxi0} u^n(t,x)&=\int_0^LG_t(x,y)u_0(y)dy +\int_0^{t+}\int_0^L \int_{\mathbb{R}\setminus\{0\}}G_{t-s}(x,y) \varphi^n(u^n(s-,y))z\tilde{N}(ds,dy,dz) \end{align} for all $t\in [0, \infty)$ and for a.e. $x\in [0, L]$. We now present the definition (see also in Wang et al. \cite{Wang:2023}) of a strong solution to stochastic heat equation (\ref{eq:approximatingsolution}). \begin{definition} Given $p\geq 1$, the stochastic heat equation (\ref{eq:approximatingsolution}) has a strong solution in $D([0,\infty),L^p([0,L]))$ with initial function $u_0$ if for a given truncated $\alpha$-stable martingale measure $L_{\alpha}$ there exists a process $u^n_t\equiv\{u^n(t,\cdot),t\in[0,\infty)\}$ in $D([0,\infty),L^p([0,L]))$ such that either equation (\ref{eq:approxivariationform}) or equation (\ref{mildformapproxi0}) holds. \end{definition} Note that for each $n\geq1$ the noise coefficient $\varphi^n$ is not only Lipschitz continuous but also of globally linear growth. Indeed, for a given $\epsilon>0$ and $n_0\in\mathbb{N}$ large enough, Hypothesis \ref{Hypo} (i) and the globally linear growth of $\varphi$ imply that \begin{equation} \label{eq:glo-lin-growth} |\varphi^n(x)|\leq|\varphi^n(x)-\varphi(x)| +|\varphi(x)|\leq\epsilon+C(1+|x|),\,\, \forall n\geq n_0,\, \forall x\in\mathbb{R}. \end{equation} Therefore, we can use the classical Banach fixed point principle to show the existence and pathwise uniqueness of the strong solution to equation (\ref{eq:approximatingsolution}). Since the proof is standard, we just state the main result in the following proposition. For more details of the proof, we refer to Wang et al. \cite[Proposition 3.2]{Wang:2023} and references therein. Also note that the same method was applied in Truman and Wu \cite{Truman:2003} and in Bo and Wang \cite{Bo:2006} where the stochastic Burgers equation and the stochastic {C}ahn-{H}illiard equation driven by L\'{e}vy space-time white noise were studied, respectively. 
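For the reader's convenience, we sketch the contraction estimate behind this standard argument; the map $\mathcal{T}^n$ below is introduced only for this illustration. For an $L^p([0,L])$-valued c\`{a}dl\`{a}g process $v$ set \begin{equation*} (\mathcal{T}^nv)(t,x):=\int_0^LG_t(x,y)u_0(y)dy +\int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}}G_{t-s}(x,y)\varphi^n(v(s-,y))z\tilde{N}(ds,dy,dz), \end{equation*} so that a solution to (\ref{mildformapproxi0}) is precisely a fixed point of $\mathcal{T}^n$. By the Burkholder-Davis-Gundy inequality, the Lipschitz continuity of $\varphi^n$, (\ref{eq:smalljumpsizemeasure}) and (\ref{eq:Greenetimation2}), one obtains for $p\in(\alpha,2]$ \begin{equation*} \mathbb{E}\left[\vert\vert(\mathcal{T}^nv)_t-(\mathcal{T}^nw)_t\vert\vert_p^p\right] \leq C_{n,p,K,\alpha}\int_0^t(t-s)^{-\frac{p-1}{2}}\mathbb{E}\left[\vert\vert v_s-w_s\vert\vert_p^p\right]ds, \end{equation*} and since $(p-1)/2<1$, iterating this estimate (or working with the weighted norm $\sup_{0\leq t\leq T}e^{-\lambda t}\mathbb{E}[\vert\vert v_t\vert\vert_p^p]$ for $\lambda$ large enough) shows that $\mathcal{T}^n$ is a contraction on a suitable space of processes, which yields the existence and pathwise uniqueness stated in the following proposition.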
\begin{proposition} \label{th:Approximainresult} Given any $n\geq1$, if the initial function $u_0$ satisfies $\mathbb{E}[||u_0||_p^p]<\infty$ for some $p\in(\alpha,2]$, then under {\rm Hypothesis \ref{Hypo}} there exists a pathwise unique strong solution $u^n_t\equiv\{u^n(t,\cdot),t\in[0,\infty)\}$ to equation (\ref{eq:approximatingsolution}) such that for any $T>0$ \begin{equation} \label{eq:Approximomentresult} \sup_{n\geq1}\sup_{0\leq t\leq T}\mathbb{E}\left[\vert \vert u^n_t\vert \vert _p^p\right]<\infty. \end{equation} \end{proposition} \begin{remark} By {\rm Hypothesis \ref{Hypo} (ii)}, (\ref{eq:glo-lin-growth}) and estimate (\ref{eq:Approximomentresult}), the stochastic integral on the right-hand side of (\ref{mildformapproxi0}) is well defined. \end{remark} We are going to prove that the approximating solution sequence $(u^n)_{n\geq1}$ is tight in both $D([0,\infty),\mathbb{M}([0,L]))$ and $L_{loc}^p([0,\infty)\times[0,L])$ for all $p\in(\alpha,2]$ by using the following tightness criteria; see, e.g., Xiong and Yang \cite[Lemma 2.2]{Xiong:2019}. Note that this tightness criteria can be obtained by Ethier and Kurtz \cite[Theorems 3.9.1, 3.9.4 and 3.2.2]{Ethier:1986}. \begin{lemma} \label{lem:tightcriterion0} Given a complete and separable metric space $E$, let $(X^n=\{X^n(t),t\in[0,\infty)\})_{n\geq1}$ be a sequence of stochastic processes with sample paths in $D([0,\infty),E)$, and let $C_a$ be a subalgebra and dense subset of $C_b(E)$ (the bounded continuous functions space on $E$). Then the sequence $(X^n)_{n\geq1}$ is tight in $D([0,\infty),E)$ if both of the following conditions hold: \begin{itemize} \item[\rm (i)] For every $\varepsilon>0$ and $T>0$ there exists a compact set $\Gamma_{\varepsilon,T}\subset E$ such that \begin{equation} \label{eq:tightcriterion1} \inf_{n\geq1}\mathbb{P}[X^n(t)\in\Gamma_{\varepsilon,T}\,\, \text{for all}\,\, t\in[0,T] ]\geq1-\varepsilon. \end{equation} \item[\rm (ii)] For each $f\in C_a$, there exists a process $g_n\equiv\{g_n(t),t\in[0,\infty)\}$ such that \begin{equation*} f(X^n(t))-\int_0^tg_n(s)ds \end{equation*} is an $(\mathcal{F}_t)$-martingale and \begin{align} \label{eq:tight-moment} \sup_{0\leq t\leq T}\mathbb{E}\left[\vert f(X^n(t))\vert +\vert g_n(t)\vert \right]<\infty \end{align} and \begin{align} \label{eq:tight-moment2} \sup_{n\geq1}\mathbb{E}\left[\left(\int_0^T\vert g_n(t)\vert ^qdt\right)^{\frac{1}{q}}\right]<\infty \end{align} for each $n\geq1, T>0$ and $q>1$. \end{itemize} \end{lemma} Before showing the tightness of solution sequence $(u^n)_{n\geq1}$, we first find a uniform moment estimate in the following lemma. \begin{lemma} \label{le:uniformbounded} For each $n\geq1$ let $u^n$ be the strong solution to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}}. Then for given $T>0$ and $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=\psi^{'}(0)=\psi^{'}(L)=0$ and $\vert \psi^{''}(x)\vert \leq C\psi(x),x\in[0,L]$, we have for $p\in(\alpha,2]$ that \begin{align} \label{eq:p-uniform} \sup_{n\geq1}\mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_{0}^Lu^n(t,x)\psi(x)dx\right\vert ^p\right]<\infty. 
\end{align} \end{lemma} \begin{proof} By (\ref{eq:approxivariationform}), it holds that for each $n\geq1$ \begin{align*} \mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_{0}^Lu^n(t,x)\psi(x)dx\right\vert ^p\right]\leq C_p(A_1+A_2+A_3), \end{align*} where \begin{align*} A_1&=\mathbb{E}\left[\left\vert \int_0^Lu_0(x)\psi(x)dx\right\vert ^p\right], \\ A_2&=\mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_0^{t}\int_0^Lu^n(s,x)\psi^{''}(x)dxds \right\vert ^p\right], \\ A_3&=\mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}}\varphi^n(u^n(s-,x))\psi(x)z\tilde{N}(ds,dx,dz) \right\vert ^p\right]. \end{align*} For $p\in(\alpha,2]$ we separately estimate $A_1$, $A_2$ and $A_3$ as follows. For $A_1$, it holds by H\"{o}lder's inequality that \begin{align*} A_1&\leq C_p\left(\int_0^L\vert \psi(x)\vert ^{\frac{p}{p-1}}dx\right)^{\frac{p(p-1)}{p}} \mathbb{E}\left[\int_0^L\vert u_0(x)\vert ^pdx\right] \leq C_p\mathbb{E}[\vert \vert u_0\vert \vert _p^p] \leq C_p \end{align*} due to $\psi\in C^{2}([0,L])$ and $\mathbb{E}[\vert \vert u_0\vert \vert _p^p]<\infty$. For $A_2$, it holds by $\vert \psi^{''}(x)\vert \leq C\psi(x),x\in[0,L]$ and H\"{o}lder's inequality that \begin{align*} A_2&\leq C_p\mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_0^{t}\int_0^Lu^n(s,x)\psi(x)dxds \right\vert ^p\right] \leq C_{p,T}\int_0^{T} \mathbb{E}\left[\sup_{0\leq r\leq s} \left\vert \int_0^Lu^n(r,x)\psi(x)dx \right\vert ^p\right]ds. \end{align*} For $A_3$, the Doob maximal inequality and the Burkholder-Davis-Gundy inequality imply that \begin{align*} A_3&\leq C_p \mathbb{E}\left[\left|\int_0^{T}\int_0^L\int_{\mathbb{R}\setminus\{0\}}\vert \varphi^n(u^n(s-,x)) \psi(x)z\vert^2N(ds,dx,dz)\right|^{\frac{p}{2}}\right] \\ &\leq C_p \mathbb{E}\left[\int_0^{T}\int_0^L\int_{\mathbb{R}\setminus\{0\}}\vert \varphi^n(u^n(s-,x)) \psi(x)z\vert ^pN(ds,dx,dz)\right] \\ &=C_p \mathbb{E}\left[\int_0^{T}\int_0^L\int_{\mathbb{R}\setminus\{0\}}\vert \varphi^n(u^n(s,x)) \psi(x)z\vert ^pdsdx\nu_{\alpha}(dz)\right], \end{align*} where the second inequality follows from the fact that \begin{align} \label{eq:element-inequ} \left\vert \sum_{i=1}^ka_i^2\right\vert ^{\frac{q}{2}}\leq \sum_{i=1}^{k}\vert a_i\vert ^q \end{align} for $a_i\in\mathbb{R}, k\geq1$, and $q\in(0,2]$. By (\ref{eq:smalljumpsizemeasure}), it holds that for $p>\alpha$ \begin{align} \label{eq:jumpestimate} \int_{\mathbb{R}\setminus\{0\}}\vert z\vert ^p\nu_{\alpha} (dz)=c_{+}\int_0^Kz^{p-\alpha-1}dz+c_{-}\int_{-K}^0(-z)^{p-\alpha-1}dz =\frac{K^{p-\alpha}}{p-\alpha}, \end{align} then there exists a constant $C_{p,K,\alpha}$ such that \begin{align*} A_3\leq C_{p,K,\alpha} \mathbb{E}\left[\int_0^{T}\int_0^L\vert \varphi^n(u^n(s,x))\psi(x)\vert ^pdsdx\right]. \end{align*} By (\ref{eq:glo-lin-growth}), $\psi\in C^{2}([0,L])$ and (\ref{eq:Approximomentresult}) in Proposition \ref{th:Approximainresult}, it is easy to see that \begin{align*} A_3&\leq C_{p,K,\alpha,T}\left(1+\sup_{0\leq s\leq T} \mathbb{E}[\vert \vert u^n_s\vert \vert _p^p]\right)\leq C_{p,K,\alpha,T}. 
\end{align*} Combining the estimates $A_1,A_2$ and $A_3$, we have \begin{align*} \mathbb{E}&\left[\sup_{0\leq t\leq T} \left\vert \int_{0}^Lu^n(t,x)\psi(x)dx\right\vert ^p\right] \leq C_{p,K,\alpha,T}+C_{p,T}\int_0^{T}\mathbb{E}\left[\sup_{0\leq r\leq s} \left\vert \int_0^Lu^n(r,x)\psi(x)dx\right\vert ^p\right]ds \end{align*} Therefore, it holds by Gronwall's lemma that for $p\in(\alpha,2]$ \begin{align*} \sup_{n\geq1}\mathbb{E}\left[\sup_{0\leq t\leq T} \left\vert \int_{0}^Lu^n(t,x)\psi(x)dx\right\vert ^p\right]<\infty, \end{align*} which completes the proof. $\Box$ \end{proof} Note that for any function $v\in L^q[0,L]$ with $q\geq1$, we can identify $L^q([0,L])$ as a subset of $\mathbb{M}([0,L])$ by using the following correspondence $$v(x)\mapsto v(x)dx.$$ Then for each $n\geq1$ we can identify the $D([0,\infty),L^p([0,L]))$-valued random variable $u^n$ as a $D([0,\infty),\mathbb{M}([0,L]))$-valued random variable (still denoted by $u^n$). We now show the tightness of $(u^n)_{n\geq1}$ in the following proposition. \begin{proposition} \label{th:tightnessresult} The solution sequence $(u^n)_{n\geq1}$ to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}} is tight in both $D([0,\infty),\mathbb{M}([0,L]))$ and $L_{loc}^p([0,\infty)\times [0,L])$ for $p\in(\alpha,2]$. Let $u$ be an arbitrary limit point of $u^n$. Then \begin{align} \label{eq:tightresult} u\in D([0,\infty),\mathbb{M}([0,L])) \cap L_{loc}^p([0,\infty)\times [0,L]) \end{align} for $p\in(\alpha,2]$. \end{proposition} \begin{proof} For each $n\geq1, t\geq0$ and $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=\psi^{'}(0)=\psi^{'}(L)=0$ and $\vert \psi^{''}(x)\vert \leq C\psi(x),x\in[0,L]$, let us define \begin{align*} \langle u^n,\psi\rangle:=\langle u_t^n,\psi\rangle=\int_0^Lu^n(t,x)\psi(x)dx. \end{align*} We first prove that the sequence $(\langle u^n,\psi\rangle)_{n\geq1}$ is tight in $D([0,\infty),\mathbb{R})$ by using Lemma \ref{lem:tightcriterion0}. It is easy to see that the condition (i) in Lemma \ref{lem:tightcriterion0} can be verified by Lemma \ref{le:uniformbounded} . In the following we mainly verify the condition (ii) in Lemma \ref{lem:tightcriterion0}. For each $f\in C_b^2(\mathbb{R})$ ($f,f^{'},f^{''}$ are bounded and uniformly continuous) with compact supports, it holds by (\ref{eq:approxivariationform}) and It\^{o}'s formula that \begin{align} \label{eq:ito} f(\langle u_t^n,\psi\rangle)&=f(\langle u_0^n,\psi\rangle) +\int_0^tf^{'}(\langle u_s^n,\psi\rangle)\langle u_s^n,\psi^{''}\rangle) ds \nonumber\\ &\quad+\int_0^t\int_0^L\int_{\mathbb{R}\setminus\{0\}} \mathcal{D}(\langle u_s^n,\psi\rangle,\varphi^n(u^n(s,x))\psi(x)z) dsdx\nu_{\alpha}(dz)+\text{mart.}, \end{align} where $ \mathcal{D}(u,v)=f(u+v)-f(u)-vf^{'}(u) $ for $u,v\in\mathbb{R}$. Since $f,f^{'},f^{''}$ are bounded and $\vert \psi^{''}(x)\vert \leq C\psi(x),x\in[0,L]$, then \begin{align} \label{eq:estimate1} \vert f^{'}(\langle u_s^n,\psi\rangle)\langle u_s^n,\psi^{''}\rangle\vert \leq C\left\vert \int_0^Lu^n({s},x)\psi(x)dx\right\vert . \end{align} By Taylor's formula, one can show that $\vert \mathcal{D}(u,v)\vert \leq C(\vert v\vert\wedge \vert v\vert ^2)$, which also implies that $\vert \mathcal{D}(u,v)\vert \leq C(\vert v\vert \wedge \vert v\vert ^p)$ for $p\in(\alpha,2]$. 
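For completeness, we record the elementary verification of this bound. Since $f\in C_b^2(\mathbb{R})$, Taylor's theorem gives $\vert\mathcal{D}(u,v)\vert\leq\frac{1}{2}\sup_{x\in\mathbb{R}}\vert f^{''}(x)\vert\, v^2$, while the mean value theorem and the boundedness of $f^{'}$ give $\vert\mathcal{D}(u,v)\vert\leq\vert f(u+v)-f(u)\vert+\vert v\vert\,\vert f^{'}(u)\vert\leq2\sup_{x\in\mathbb{R}}\vert f^{'}(x)\vert\,\vert v\vert$; hence $\vert\mathcal{D}(u,v)\vert\leq C(\vert v\vert\wedge\vert v\vert^2)$. Moreover, $\vert v\vert\wedge\vert v\vert^2\leq\vert v\vert\wedge\vert v\vert^p$ for $p\in(\alpha,2]$, since $\vert v\vert^2\leq\vert v\vert^p$ when $\vert v\vert\leq1$ and both sides equal $\vert v\vert$ when $\vert v\vert\geq1$.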
Thus we have for $p\in(\alpha,2]$, \begin{align} \label{eq:estimate2} &\int_0^L\int_{\mathbb{R}\setminus\{0\}} \vert \mathcal{D}(\langle u_s^n,\psi\rangle,\varphi^n(u^n(s,x))\psi(x)z)\vert dx\nu_{\alpha}(dz) \nonumber\\ &\leq C\left(\int_{\mathbb{R}\setminus\{0\}}\vert z\vert \wedge \vert z\vert ^p\nu_{\alpha}(dz)\right) \int_0^L(\vert \varphi^n(u^n(s,x))\psi(x)\vert +\vert \varphi^n(u^n(s,x))\psi(x)\vert ^p)dx \nonumber\\ &\leq C_{p,K,\alpha}\int_0^L(\vert \varphi^n(u^n(s,x))\psi(x)\vert +\vert \varphi^n(u^n(s,x))\psi(x)\vert ^p)dx, \end{align} where by (\ref{eq:smalljumpsizemeasure}), \begin{align*} \int_{\mathbb{R}\setminus\{0\}}\vert z\vert \wedge \vert z\vert ^p\nu_{\alpha}(dz) &=c_{+}\int_0^1z^{p-\alpha-1}dz +c_{-}\int_{-1}^0(-z)^{p-\alpha-1}dz +c_{+}\int_1^Kz^{-\alpha}dz \\ &\quad +c_{-}\int_{-K}^{-1}(-z)^{-\alpha}dz \\ &=\frac{1}{p-\alpha}+\dfrac{1-K^{1-\alpha}}{\alpha-1}\leq C_{p,K,\alpha}. \end{align*} For given $n\geq1$ let us define \begin{align*} g_n(s)&:=f^{'}(\langle u_s^n,\psi\rangle)\langle u_s^n,\psi^{''}\rangle) +\int_0^L\int_{\mathbb{R}\setminus\{0\}} \mathcal{D}(\langle u_s^n,\psi\rangle,\varphi^n(u^n(s,x))\psi(x)z) dx\nu_{\alpha}(dz). \end{align*} By (\ref{eq:ito}), it is easy to see that \begin{align*} f(\langle u_t^n,\psi\rangle)-\int_0^{t}g_n(s)ds \end{align*} is an $(\mathcal{F}_t)$-martingale. Now we verify the moment estimates (\ref{eq:tight-moment}) and (\ref{eq:tight-moment2}) of the condition (ii) in Lemma \ref{lem:tightcriterion0}. For each $t\in[0,T]$, it holds by the boundedness of $f$, estimates (\ref{eq:estimate1})-(\ref{eq:estimate2}) and (\ref{eq:glo-lin-growth}) that \begin{align*} \mathbb{E}\left[\vert f(\langle u_t^n,\psi\rangle)\vert +\vert g_n(t)\vert \right] &\leq C\left(1+\mathbb{E}\left[\left\vert \int_0^Lu^n(t,x)\psi(x)dx\right\vert \right]\right) \\ &\quad +C_{p,K,\alpha}\mathbb{E}\left[\int_0^L(\vert \psi(x)\vert +\vert u^n(t,x)\psi(x)\vert )dx\right] \\ &\quad+C_{p,K,\alpha}\mathbb{E}\left[\int_0^L(\vert \psi(x)\vert ^p+ \vert u^n(t,x)\psi(x)\vert ^p)dx\right]. \end{align*} Since $\psi\in C^2([0,L])$ implies that $\psi$ is bounded, then it holds by H\"{o}lder's inequality and (\ref{eq:p-uniform}) that for $p\in(\alpha,2]$ \begin{align*} \mathbb{E}\left[\vert f(\langle u_t^n,\psi\rangle)\vert +\vert g_n(t)\vert \right] &\leq C_{p,K,\alpha}\left(1+\left(\sup_{0\leq t\leq T} \mathbb{E}[\vert \vert u^n_t\vert \vert_p^p]\right)^{\frac{1}{p}} +\sup_{0\leq t\leq T} \mathbb{E}[\vert \vert u^n_t\vert \vert_p^p]\right), \end{align*} and so by (\ref{eq:Approximomentresult}), \begin{align*} \sup_{0\leq t\leq T}\mathbb{E}\left[\vert f(\langle u_t^n,\psi\rangle)\vert +\vert g_n(t)\vert \right]<\infty, \end{align*} which verifies the estimate (\ref{eq:tight-moment}). To verify (\ref{eq:tight-moment2}), it suffices to show that for each $n\geq1$ \begin{align*} \mathbb{E}\left[\int_0^T\vert g_n(t)\vert ^qdt\right]<\infty \end{align*} for some $q>1$. By the estimates (\ref{eq:estimate1})-(\ref{eq:estimate2}) and (\ref{eq:glo-lin-growth}), we have \begin{align*} \mathbb{E}\left[\int_0^T\vert g_n(t)\vert ^qdt\right] &\leq C_{q}\mathbb{E}\left[\int_0^T\left\vert \int_0^Lu^n(t,x)\psi(x)dx \right\vert ^qdt\right] \\ &\quad+C_{p,q,K,\alpha}\mathbb{E}\left[\int_0^T\left\vert \int_0^L(\vert \psi(x)\vert + \vert u^n(t,x)\psi(x)\vert )dx \right\vert ^qdt\right] \\ &\quad+C_{p,q,K,\alpha}\mathbb{E}\left[\int_0^T\left\vert \int_0^L(\vert \psi(x)\vert ^p+ \vert u^n(t,x)\psi(x)\vert ^p)dx \right\vert ^qdt\right]. 
\end{align*} Taking $1<q<2/p$, the H\"{o}lder inequality and boundedness of $\psi$ imply that \begin{align*} \mathbb{E}\left[\int_0^T\vert g_n(t)\vert ^qdt\right] &\leq C_{p,q,K,\alpha,T}\left(1+\sup_{0\leq t\leq T}\mathbb{E}[\vert \vert u^n_t\vert\vert_q^q] +\sup_{0\leq t\leq T}\mathbb{E} \left[\vert \vert u^n_t\vert \vert_{pq}^{pq}\right]\right), \end{align*} and so by (\ref{eq:Approximomentresult}), \begin{align*} \sup_{n\geq1}\mathbb{E}\left[\int_0^T\vert g_n(t)\vert ^qdt\right]<\infty, \end{align*} which verifies the estimate (\ref{eq:tight-moment2}). Therefore, for each $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=\psi^{'}(0)=\psi^{'}(L)=0$ and $\vert \psi^{''}(x)\vert \leq C\psi(x),x\in[0,L]$, the sequence $(\langle u^n,\psi\rangle)_{n\geq1}$ is tight in $D([0,\infty),\mathbb{R})$, and so it holds by Mitoma's theorem (see, e.g., Walsh \cite[pp.361--365]{Walsh:1986}) that $(u^n)_{n\geq1}$ is tight in $D([0,\infty),\mathbb{M}([0,L]))$. On the other hand, by (\ref{eq:Approximomentresult}) we have for each $T>0$ \begin{align*} \sup_{n\geq1}\mathbb{E} \left[\int_0^T\int_0^L\vert u^n(t,x)\vert ^pdxdt\right] \leq C_T\sup_{n\geq1}\sup_{0\leq t\leq T} \mathbb{E}[\vert \vert u^n_t\vert \vert _p^p]<\infty \end{align*} for $p\in(\alpha,2]$. The Markov's inequality implies that for each $\varepsilon>0,T>0$ there exists a constant $C_{\varepsilon,T}$ such that \begin{align*} \sup_{n\geq1}\mathbb{P}\left[\int_0^T\int_0^L\vert u^n(t,x)\vert ^pdxdt>C_{\epsilon,T}\right]<\varepsilon \end{align*} for $p\in(\alpha,2]$. Therefore, the sequence $(u^n)_{n\geq1}$ is also tight in $L^p_{loc}([0,\infty)\times[0,L])$ for $p\in(\alpha,2]$, and the conclusion (\ref{eq:tightresult}) holds. $\Box$ \end{proof} \begin{Tproof}\textbf{~of Theorem \ref{th:mainresult}.} We are going to prove Theorem \ref{th:mainresult} by applying weak convergence arguments. For each $n\geq1$, let $u^n$ be the strong solution of equation (\ref{eq:approximatingsolution}) given by Proposition \ref{th:Approximainresult}. It can also be regarded as an element in $D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$ with $p\in(\alpha,2]$. By Proposition \ref{th:tightnessresult}, there exists a $D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$-valued random variable $u$ such that $u^n$ converges to $u$ in distribution in $D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$ for $p\in(\alpha,2]$. On the other hand, the Skorokhod Representation Theorem (see, e.g., Either and Kurtz \cite[Theorem 3.1.8]{Ethier:1986}) yields that there exists another filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}}_t)_{t\geq0},\hat{\mathbb{P}})$ and on it a further subsequence $(\hat{u}^n)_{n\geq1}$ and $\hat{u}$ which have the same distribution as $(u^n)_{n\geq1}$ and $u$, so that $\hat{u}^n$ almost surely converges to $\hat{u}$ in $D([0,\infty),\mathbb{M}([0,L]))\cap L_{loc}^p([0,\infty)\times [0,L])$ for $p\in(\alpha,2]$. For each $t\geq0,n\geq1$ and any test function $\psi\in C^{2}([0,L])$ with $\psi(0)=\psi(L)=0$ and $\psi^{'}(0)= \psi^{'}(L)=0$, let us define \begin{align*} \hat{M}^n_{t}(\psi)&:=\int_0^L\hat{u}^n(t,x) \psi(x)dx-\int_0^L\hat{u}_0(x) \psi(x)dx -\frac{1}{2}\int_0^{t} \int_0^L\hat{u}^n(s,x)\psi^{''}(x)dxds. 
\end{align*} Since $\hat{u}^n$ almost surely converges to $\hat{u}$ in the Skorokhod topology as $n\rightarrow\infty$, then \begin{align} \label{eq:M^n_t} \hat{M}^n_{t}(\psi)& \overset{\mathbf{\hat{P}}\text{-a.s.}}{\longrightarrow} \int_0^L\hat{u}(t,x)\psi(x)dx- \int_0^L\hat{u}_0(x)\psi(x)dx -\frac{1}{2}\int_0^{t} \int_0^L\hat{u}(s,x) \psi^{''}(x) dxds \end{align} in the Skorokhod topology as $n\rightarrow\infty$. By (\ref{eq:approxivariationform}) and the fact that $\hat{u}^n$ has the same distribution as $u^n$ for each $n\geq1$, we have \begin{align*} \hat{M}^n_{t}(\psi) \overset{D}=& \int_0^Lu^n(t,x) \psi(x)dx-\int_0^Lu_0(x)\psi(x)dx -\frac{1}{2}\int_0^{t}\int_0^L u^n(s,x)\psi^{''}(x)dxds \nonumber\\ =&\int_0^{t+} \int_0^L\int_{\mathbb{R}\setminus\{0\}}\psi(x) \varphi^n(u^n(s-,x))z\tilde{N}(ds,dx,dz), \end{align*} where $\overset{D}=$ denotes the identity in distribution. The Burkholder-Davis-Gundy inequality, (\ref{eq:element-inequ})-(\ref{eq:jumpestimate}) and (\ref{eq:glo-lin-growth}) imply that for $p\in(\alpha,2]$ \begin{align*} \hat{\mathbb{E}}[\vert \hat{M}^n_{t}(\psi)\vert ^p] &=\mathbb{E} \left[\left\vert \int_0^{t+} \int_0^L\int_{\mathbb{R}\setminus\{0\}}\psi(x) \varphi^n(u^n(s-,x))z\tilde{N}(ds,dx,dz) \right\vert ^p\right] \\ &\leq C_p\mathbb{E} \left[\int_0^t\int_0^L \int_{\mathbb{R}\setminus\{0\}} \vert \psi(x)\vert ^p(1+\vert u^n(s,x)\vert )^p \vert z\vert ^pdsdx\nu_{\alpha}(dz) \right] \\ &\leq C_{p,K,\alpha,T} \left(\int_0^L\vert \psi(x)\vert ^pdx+\bigg\vert \sup_{x\in[0,L]}\psi(x)\bigg\vert ^p \sup_{0\leq t\leq T}\mathbb{E}\left[\vert \vert u^n_t\vert \vert _p^p\right]\right). \end{align*} Then by $\psi\in C^{2}([0,L])$ and (\ref{eq:Approximomentresult}), we have for each $T>0$ \begin{align*} \sup_{n\geq1}\sup_{0\leq t\leq T} \hat{\mathbb{E}}[\vert \hat{M}^n_{t}(\psi)\vert ^p]<\infty. \end{align*} Therefore, it holds by (\ref{eq:M^n_t}) that there exists an $(\hat{\mathcal{F}}_t)$-martingale $\hat{M}_{t}(\psi)$ such that $\hat{M}^n_{t}(\psi)$ converges weakly to $\hat{M}_{t}(\psi)$ as $n\rightarrow\infty$, and for each $t\geq0$ \begin{align} \label{martingle1} \hat{M}_{t}(\psi)&= \int_0^L\hat{u}(t,x)\psi(x)dx- \int_0^L \hat{u}_0(x)\psi(x)dx-\frac{1}{2}\int_0^{t} \int_0^L\hat{u}(s,x) \psi^{''}(x)dxds. \end{align} By Hypothesis \ref{Hypo} (i), the quadratic variation of $\{\hat{M}^n_{t}(\psi),t\in[0,\infty)\}$ satisfies that \begin{align*} \langle \hat{M}^n(\psi), \hat{M}^n(\psi) \rangle_t&=\int_0^{t}\int_0^L \int_{\mathbb{R}\setminus\{0\}}\varphi^n({u}^n(s,x))^2 \psi(x)^2z^2dsdx\nu_{\alpha}(dz) \\ &\overset{D}=\int_0^{t}\int_0^L \int_{\mathbb{R}\setminus\{0\}}\varphi^n(\hat{u}^n(s,x))^2 \psi(x)^2z^2dsdx\nu_{\alpha}(dz) \\ &\overset{\mathbb{P}-a.s.} \rightarrow\int_0^{t}\int_0^L \int_{\mathbb{R}\setminus\{0\}}\varphi(\hat{u}(s,x))^2 \psi(x)^2z^2dsdx\nu_{\alpha}(dz),\,\,t\in[0,T], \end{align*} as $n\rightarrow\infty$. We denote by $\{\langle \hat{M}(\psi),\hat{M}(\psi)\rangle_t,t\in[0,\infty)\}$ the quadratic variation process \begin{align*} \langle \hat{M}(\psi),\hat{M}(\psi)\rangle_t=\int_0^{t} \int_0^L\int_{\mathbb{R}\setminus\{0\}}\varphi(\hat{u}(s,x))^2 \psi(x)^2z^2dsdx\nu_{\alpha}(dz),\,\,t\geq0. 
\end{align*} Similar to Konno and Shiga \cite[Lemma 2.4]{Konno:1988}, $\langle \hat{M}(\psi),\hat{M}(\psi)\rangle_t$ corresponds to an orthonormal martingale measure $\hat{M}(dt,dx,dz)$ defined on the filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}_t})_{t\geq0}, \hat{\mathbb{P}})$ in the sense of Walsh \cite[Chapter 2]{Walsh:1986} whose quadratic measure is given by \begin{align*} \varphi(\hat{u}(t,x))^2z^2dtdx\nu_{\alpha}(dz). \end{align*} Let $\{\dot{\bar{L}}_{\alpha}(t,x):t\in[0,\infty),x\in[0,L]\}$ be another truncated $\alpha$-stable white noise, defined possibly on $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}_t})_{t\geq0}, \hat{\mathbb{P}})$, independent of $\hat{M}(dt,dx,dz)$ and define \begin{align*} \hat{L}_{\alpha}(t,\psi)&:= \int_0^{t+}\int_{0}^L\int_{\mathbb{R}\setminus\{0\}} \dfrac{1}{\varphi(\hat{u}(s-,x))} 1_{\{\varphi(\hat{u}(s-,x))\neq0\}}\psi(x)z\hat{M}(ds,dx,dz) \\ &\quad+\int_0^{t+}\int_{0}^L \psi(x) 1_{\{\varphi(\hat{u}(s-,x))=0\}} \bar{L}_{\alpha}(ds,dx). \end{align*} Then $\{\hat{L}_{\alpha}(t,\psi): \,t\in[0,\infty),\, \psi\in C^2([0, L]), \psi(0)=\psi(L)=0, \psi^{'}(0)=\psi^{'}(L)=0\}$ determines a truncated $\alpha$-stable white noise $\dot{\hat{L}}_{\alpha}(t,x)$ on $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}_t})_{t\geq0}, \hat{\mathbb{P}})$ with the same distribution as $\dot{L}_{\alpha}(t,x)$ such that \begin{align*} \hat{M}_t(\psi)&=\int_0^{t+} \int_0^L\varphi(\hat{u}(s-,x)) \psi(x) \hat{L}_{\alpha}(ds,dx) =\int_0^{t+} \int_0^L\int_{\mathbb{R}\setminus\{0\}}\varphi(\hat{u}(s-,x)) \psi(x)z\widetilde{\hat{N}}(ds,dx,dz), \end{align*} where $\widetilde{\hat{N}}(dt,dx,dz)$ denotes the compensated Poisson random measure associated to the truncated $\alpha$-stable martingale measure $\hat{L}_{\alpha}(t,x)$. Hence, it holds by (\ref{martingle1}) that $(\hat{u},\hat{L}_{\alpha})$ is a weak solution to (\ref{eq:originalequation1}) defined on $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}_t})_{t\geq0}, \hat{\mathbb{P}})$. On the other hand, since $\hat{u}^n$ has the same distribution as $u^n$ for each $n\geq1$, then the moment estimates (\ref{eq:Approximomentresult}) in Proposition \ref{th:Approximainresult} can be replaced by \begin{equation*} \sup_{n\geq1}\sup_{0\leq t\leq T} \hat{\mathbb{E}}\left[\vert \vert \hat{u}^n_t\vert \vert _p^p\right]<\infty \end{equation*} for $p\in(\alpha,2]$. For moment estimate (\ref{eq:momentresult}), the Fatou's Lemma implies that for $p\in(\alpha,2]$ \begin{align*} \hat{\mathbb{E}}\left[\vert \vert \hat{u}\vert \vert _{p,T}^p\right]&= \hat{\mathbb{E}}\left[\int_0^T\vert \vert \hat{u}_t\vert \vert _p^pdt\right] \leq\liminf_{n\rightarrow\infty}C_T \sup_{0\leq t\leq T}\hat{\mathbb{E}}\left[\vert \vert \hat{u}^n_t\vert \vert _p^p\right]<\infty, \end{align*} which completes the proof. $\Box$ \end{Tproof} \section{Proof of Theorem \ref{th:mainresult2}}\label{sec4} The proof of Theorem \ref{th:mainresult2} is similar to that of Theorem \ref{th:mainresult}. The main difference between them is that in the current proof we need to prove the solution sequence $(u^n)_{n\geq1}$ to equation (\ref{eq:approximatingsolution}), obtained from Proposition \ref{th:Approximainresult}, is tight in $D([0,\infty),L^p([0,L]))$ for $p\in(\alpha,5/3)$. To this end, we need the following tightness criteria; see, e.g., Ethier and Kurtz \cite[Theorem 3.8.6 and Remark (a)] {Ethier:1986}. Note that the same criteria was also applied in Sturm \cite {Sturm:2003} with Gaussian colored noise setting. 
\begin{lemma} \label{lem:tightcriterion} Given a complete and separable metric space $(E,\rho)$, let $(X^n)$ be a sequence of stochastic processes with sample paths in $D([0,\infty),E)$. The sequence is tight in $D([0,\infty),E)$ if the following conditions hold: \begin{itemize} \item[\rm (i)] For every $\varepsilon>0$ and rational $t\in[0,T]$, there exists a compact set $ \Gamma_{\varepsilon,T}\subset E$ such that \begin{equation} \label{eq:tightcriterion1} \inf_{n}\mathbb{P}[X^n(t)\in\Gamma_{\varepsilon,T}] \geq1-\varepsilon. \end{equation} \item[\rm (ii)] There exists $p>0$ such that \begin{equation} \label{eq:tightcriterion2} \lim_{\delta\rightarrow0}\sup_n\mathbb{E} \left[\sup_{0\leq t\leq T}\sup_{0\leq u\leq \delta}(\rho(X^n_{t+u},X^n_t)\wedge1)^p\right]=0. \end{equation} \end{itemize} \end{lemma} To verify condition (i) of Lemma \ref{lem:tightcriterion}, we need the following characterization of the relatively compact set in $L^p({[0,L]}),p\geq1$; see, e.g., Sturm \cite[Lemma 4.3]{Sturm:2003}. \begin{lemma} \label{lem:compactcriterion} A subset $\Gamma\subset L^p({[0,L]})$ for $p\geq1$ is relatively compact if and only if the following conditions hold: \begin{itemize} \item[\rm (a)] $\sup_{f\in\Gamma} \int_0^L\vert f(x)\vert ^pdx<\infty$, \item[\rm (b)] $\lim_{y\rightarrow0}\int_0^L\vert f(x+y)-f(x)\vert ^pdx=0$ uniformly for all $f\in\Gamma$, \item[\rm (c)] $\lim_{\gamma\rightarrow\infty} \int_{(L-\frac{L}{\gamma},L]}\vert f(x)\vert ^pdx=0$ for all $f\in\Gamma$. \end{itemize} \end{lemma} The proof of the tightness of $(u^n)_{n\geq1}$ is accomplished by verifying conditions (i) and (ii) in Lemma \ref{lem:tightcriterion}. To this end, we need some estimates on $(u^n)_{n\geq1}$, that is, the uniform bound estimate in Lemma \ref{lem:uniformbound}, the temporal difference estimate in Lemma \ref{lem:temporalestimation} and the spatial difference estimate in Lemma \ref{lem:spatialestimation}, respectively. \begin{lemma} \label{lem:uniformbound} Suppose that $\alpha\in(1,5/3)$ and for each $n\geq1$ $u^n$ is the solution to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}}. Then for given $T>0$ there exists a constant $C_{p,K,\alpha,T}$ such that \begin{equation} \label{eq:unformlybounded} \sup_n\mathbb{E}\left[\sup_{0\leq t\leq T}\vert \vert u^n_t\vert \vert _p^p\right]\leq C_{p,K,\alpha,T},\,\,\, \text{for}\,\,\,p\in(\alpha,5/3). \end{equation} \end{lemma} \begin{proof} For each $n\geq 1$, by (\ref{mildformapproxi0}) it is easy to see that $$\mathbb{E}\left[\sup_{0\leq t\leq T}\vert \vert u^n_t\vert \vert _p^p\right]\leq C_p(A_1+A_2),$$ where \begin{align*} A_1&=\mathbb{E}\left[\sup_{0\leq t\leq T}\Bigg\vert \Bigg\vert \int_0^LG_{t}(\cdot,y)u_0(y)dy\Bigg\vert \Bigg\vert _p^p\right],\\ A_2&=\mathbb{E}\left[\sup_{0\leq t\leq T}\Bigg\vert \Bigg\vert \int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} G_{t-s}(\cdot,y)\varphi^n(u^n(s-,y))z \tilde{N}(ds,dy,dz)\Bigg\vert \Bigg\vert _p^p\right]. \end{align*} We separately estimate $A_1$ and $A_2$ as follows. For $A_1$, it holds by Young's convolution inequality and (\ref{eq:Greenetimation0}) that \begin{align*} \label{eq:A_1} A_1 &\leq C\mathbb{E}\left[\int_0^L \sup_{0\leq t\leq T}\left(\int_0^L|G_{t} (x,y)|dx\right) \vert u_0(y)\vert ^pdy\right]\leq C_T\mathbb{E}[\vert\vert u_0\vert \vert _p^p]. \end{align*} By Proposition \ref{th:Approximainresult}, we have $\mathbb{E}[\vert \vert u_0\vert \vert _p^p]<\infty$ for $p\in(\alpha,2]$, and so there exists a constant $C_{p,T}$ such that $A_1\leq C_{p,T}$. 
For $A_2$, we use the factorization method; see, e.g., Da Prato et al. \cite{Prato:1987}, which is based on the fact that for $0<\beta<1$ and $\,0\leq s\leq t$, \begin{equation*} \int_s^t(t-r)^{\beta-1}(r-s)^{-\beta}dr=\dfrac{\pi}{\sin(\beta\pi)}. \end{equation*} For any function $v: [0,\infty)\times {[0,L]} \rightarrow \mathbb{R}$ define \begin{align*} &\mathcal{J}^{\beta}v(t,x) :=\dfrac{\sin(\beta\pi)}{\pi}\int_0^t\int_0^L(t-s)^{\beta-1}G_{t-s}(x,y)v(s,y)dyds,\\ &\mathcal{J}^n_{\beta}v(t,x):=\int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} (t-s)^{-\beta}G_{t-s}(x,y)\varphi^n(v(s-,y))z\tilde{N}(ds,dy,dz). \end{align*} By the stochastic Fubini Theorem and (\ref{eq:Greenetimation1}), we have \begin{align*} \mathcal{J}^{\beta}\mathcal{J}^n_{\beta}u^n(t,x) &=\dfrac{\sin(\beta\pi)}{\pi}\int_0^t\int_0^L (t-s)^{\beta-1}G_{t-s}(x,y)\Bigg(\int_0^{s+} \int_0^L\int_{\mathbb{R}\setminus\{0\}} (s-r)^{-\beta}\\ &\quad\quad\times G_{s-r}(y,m) \varphi^n(u^n(r-,m))z\tilde{N}(dr,dm,dz)\Bigg)dyds \\ &=\dfrac{\sin(\beta\pi)}{\pi}\int_0^{t+} \int_0^L\int_{\mathbb{R}\setminus\{0\}} \Bigg[\int_r^{t}(t-s)^{\beta-1}(s-r)^{-\beta}\\ &\quad\quad\times\Bigg(\int_0^LG_{t-s}(x,y) G_{s-r}(y,m)dy\Bigg)ds\Bigg] \varphi^n(u^n(r-,m))z\tilde{N}(dr,dm,dz) \\ &=\int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}}G_{t-s}(x,y)\varphi^n(u^n(s-,y))z\tilde{N}(ds,dy,dz). \end{align*} Thus, $$A_2=\mathbb{E}\left[\sup_{0\leq t\leq T}\vert \vert \mathcal{J}^{\beta}\mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p\right].$$ Until the end of the proof we fix a $0<\beta<1$ satisfying \begin{align} \label{eq:factorization} 1-\frac{1}{p}<\beta<\frac{3}{2p}-\frac{1}{2}, \end{align} which requires that $$\frac{3}{2p}-\frac{1}{2}-(1-\frac{1}{p})>0.$$ Therefore, we need the assumption $p<5/3$ for this lemma. Back to our main proof, to estimate $A_2$ we first estimate $\mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p]$. For $p\in(\alpha,5/3)$, the Burkholder-Davis-Gundy inequality, (\ref{eq:element-inequ})-(\ref{eq:jumpestimate}) and (\ref{eq:glo-lin-growth}) imply that \begin{align*} \mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p] =&\int_0^L\mathbb{E}\left[\left\vert \int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} (t-s)^{-\beta}G_{t-s}(x,y)\varphi^n(u^n(s-,y))z\tilde{N}(ds,dy,dz)\right\vert ^p\right]dx\nonumber\\ \leq&C_p\int_0^L\int_0^{t}\int_0^L\int_{\mathbb{R}\setminus\{0\}} \mathbb{E}\left[\vert (t-s)^{-\beta}G_{t-s}(x,y)\varphi^n(u^n(s,y))z\vert ^p\right]\nu_{\alpha}(dz)dydsdx\nonumber\\ \leq&C_{p,K,\alpha}\int_0^L\int_0^{t}\int_0^L \mathbb{E}\left[1+\vert u^n(s,y)\vert ^p\right]\vert t-s\vert ^{-\beta p}\vert G_{t-s}(x,y)\vert ^pdydsdx. \end{align*} Combine (\ref{eq:Greenetimation2}), we have \begin{align*} \mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p]\leq&C_{p,K,\alpha}\left(L+\mathbb{E}\left[\sup_{0\leq s\leq t}\vert \vert u^n_s\vert \vert _p^p\right]\right) \int_0^Ts^{-(\frac{p-1}{2}+\beta p)}ds. \end{align*} For $p<5/3$, by (\ref{eq:factorization}) we have $$\int_0^Ts^{-(\frac{p-1}{2}+\beta p)}ds<\infty.$$ Therefore, there exists a constant $C_{p,K,\alpha,T}$ such that \begin{equation} \label{eq:momentestimation} \mathbb{E}\left[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p\right]\leq C_{p,K,\alpha,T}\left(1+\mathbb{E}\left[\sup_{0\leq s\leq t}\vert \vert u^n_s\vert \vert _p^p\right]\right). \end{equation} We now estimate $A_2=\mathbb{E}[\sup_{0\leq t\leq T}\vert \vert \mathcal{J}^{\beta}\mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p]$. 
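To this end, we first record two elementary consequences of (\ref{eq:factorization}) that will be used repeatedly below: for every $\beta$ satisfying (\ref{eq:factorization}), \begin{align*} (\beta-1)p>-1 \qquad\text{and}\qquad \frac{p-1}{2}+\beta p<\frac{p-1}{2}+\frac{3-p}{2}=1, \end{align*} so that both $\int_0^Ts^{(\beta-1)p}ds$ and $\int_0^Ts^{-(\frac{p-1}{2}+\beta p)}ds$ are finite.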
The Minkowski inequality implies that \begin{align} \label{eq:A_2_1} A_2 =&\mathbb{E}\left[\sup_{0\leq t\leq T}\dfrac{\sin(\pi\beta)}{\pi}\Bigg\vert \Bigg\vert \int_0^{t}\int_0^L(t-s)^{\beta-1} G_{t-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dyds \Bigg\vert \Bigg\vert _p^p\right]\nonumber\\ \leq&\dfrac{\sin(\pi\beta)}{\pi}\mathbb{E}\left[\sup_{0\leq t\leq T}\left(\int_0^{t}(t-s)^{\beta-1} \Bigg\vert \Bigg\vert \int_0^LG_{t-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dy\Bigg\vert \Bigg\vert _pds\right)^p\right]. \end{align} By the H\"{o}lder inequality and (\ref{eq:Greenetimation0}), we have \begin{align} \label{eq:A_2_2} &\Bigg\vert \Bigg\vert \int_0^LG_{t-s}(\cdot-y)\mathcal{J}^n_{\beta}u^n(s,y)dy\Bigg\vert \Bigg\vert _p\nonumber\\ &\quad=\left(\int_0^L\left\vert \int_0^L\vert G_{t-s}(x,y)\vert ^{\frac{p-1}{p}}\vert G_{t-s}(x,y)\vert ^{\frac{1}{p}}\mathcal{J}^n_{\beta}u^n(s,y) dy\right\vert ^pdx\right)^{\frac{1}{p}}\nonumber\\ &\quad\leq\left(\int_0^L\left\vert \left(\int_0^L\vert G_{t-s}(x,y)\vert dy\right)^{\frac{p-1}{p}} \left(\int_0^LG_{t-s}(x,y)\vert \mathcal{J}^n_{\beta}u^n(s,y)\vert ^pdy\right)^{\frac{1}{p}}\right\vert ^pdx\right)^{\frac{1}{p}}\nonumber\\ &\quad\leq\left(\sup_{x\in[0,L]}\int_0^L|G_{t-s}(x,y)|dy\right)^{\frac{p-1}{p}} \left(\int_0^L\int_0^L|G_{t-s}(x,y)|\vert \mathcal{J}^n_{\beta}u^n(s,y)\vert ^pdxdy\right)^{\frac{1}{p}}\nonumber\\ &\quad\leq C_T\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p. \end{align} Therefore, it follows from (\ref{eq:A_2_1}), (\ref{eq:A_2_2}), and the H\"{o}lder inequality that \begin{align*} \label{eq:A_2} A_2\leq&\dfrac{\sin(\pi\beta)C_{p,T}}{\pi}\mathbb{E}\left[\sup_{0\leq t\leq T}\left(\int_0^t(t-s)^{\beta-1}\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _pds\right)^p\right]\nonumber\\ \leq&\dfrac{\sin(\pi\beta)C_{p,T}}{\pi}\mathbb{E}\left[\sup_{0\leq t\leq T}\left(\int_0^{t}1^{\frac{p}{p-1}}ds\right)^{p-1} \left(\int_0^{t}(t-s)^{(\beta-1)p}\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p^pds\right)\right] \nonumber\\ \leq&\dfrac{\sin(\pi\beta)C_{p,T}}{\pi}\int_0^{T} (T-s)^{(\beta-1)p} \mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p^p]ds. \end{align*} By (\ref{eq:momentestimation}), it also holds that \begin{align} A_2 \leq&\dfrac{\sin(\pi\beta)C_{p,K,\alpha,T}}{\pi} \int_0^{T}(T-s)^{(\beta-1)p}\left(1+\mathbb{E} \left[\sup_{0\leq r\leq s}\vert \vert u^n_r\vert \vert _p^p\right]\right)ds\nonumber\\ \leq&\dfrac{\sin(\pi\beta)C_{p,K,\alpha,T}}{\pi} \left(1+\int_0^{T}(T-s)^{(\beta-1)p}\mathbb{E} \left[\sup_{0\leq r\leq s}\vert \vert u^n_r \vert \vert _p^p\right]ds\right). \end{align} Combining (\ref{eq:A_2}) and the estimate for $A_1$, we have for each $T>0$, \begin{align*} &\mathbb{E}\left[\sup_{0\leq t \leq T}\vert \vert u^n_t\vert \vert _p^p\right]\leq C_{p,T} +\dfrac{\sin(\pi\beta)C_{p,K,\alpha,T}}{\pi} \int_0^{T}(T-s)^{(\beta-1)p}\mathbb{E} \left[\sup_{0\leq r\leq s}\vert \vert u^n_r\vert \vert _p^p\right]ds. \end{align*} Since $\beta>1-1/p$, applying a generalized Gronwall's Lemma (see, e.g., Lin \cite[Theorem 1.2]{Lin:2013}), we have \begin{align*} \sup_{n}\mathbb{E}\left[\sup_{0\leq t \leq T}\vert \vert u^n_t\vert \vert _p^p\right]\leq C_{p,K,\alpha,T}, \,\,\,\text{for}\,\,\,p\in(\alpha,5/3), \end{align*} which completes the proof. $\Box$ \end{proof} \begin{lemma} \label{lem:temporalestimation} Suppose that $\alpha\in(1,5/3)$ and for each $n\geq1$ $u^n$ is the solution to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}}. 
Then for given $T>0$, $0\leq h\leq\delta$ and $p\in(\alpha,5/3)$ \begin{equation} \label{eq:temporalestimation} \lim_{\delta\rightarrow0}\sup_n\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\vert \vert u^n_{t+h}-u^n_t\vert \vert _p^p\right]=0. \end{equation} \end{lemma} \begin{proof} For each $n\geq 1$, by the factorization method in the proof of Lemma \ref{lem:uniformbound}, we have $$\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\vert \vert u^n_{t+h}-u^n_t\vert \vert _p^p\right]\leq C_p(B_1+B_2),$$ where \begin{align*} B_1&=\mathbb{E}\left[\sup_{0\leq t\leq T} \sup_{0\leq h\leq \delta}\Bigg\vert \Bigg\vert \int_0^L(G_{t+h}(\cdot-y)-G_{t}(\cdot- y))u_0(y)dy\Bigg\vert \Bigg\vert _p^p\right],\\ B_2&=\mathbb{E}\left[\sup_{0\leq t\leq T} \sup_{0\leq h\leq \delta}\vert \vert \mathcal{J}^{\beta} \mathcal{J}^n_{\beta}u^n_{t+h}-\mathcal{J} ^{\beta}\mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p\right]. \end{align*} For $B_1$, Young's convolution inequality and (\ref{eq:Greenetimation0}) imply that \begin{align*} B_1 &\leq \mathbb{E}\left[\int_0^L\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\left(\int_0^L(|G_{t+h}(x,y)|+|G_{t}(x,y)|)dx\right)\vert u_0(y)\vert ^pdy\right] \leq C_T\mathbb{E}[\vert \vert u_0\vert \vert _p^p]<\infty. \end{align*} Therefore, it holds by Lebesgue's dominated convergence theorem that $B_1$ converges to 0 as $\delta\rightarrow0$. For $B_2$, it is easy to see that $$B_2\leq \frac{\sin(\beta\pi)C_p}{\pi} (B_{2,1}+B_{2,2}+B_{2,3}),$$ where \begin{align*} B_{2,1}&=\mathbb{E}\Bigg[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\Bigg\vert \Bigg\vert \int_0^{t}\int_0^L(t-s)^{\beta-1} (G_{t+h-s}(\cdot,y)-G_{t-s}(\cdot,y)) \mathcal{J}^n_{\beta}u^n(s,y)dyds\Bigg\vert \Bigg\vert _p^p\Bigg],\\ B_{2,2}&=\mathbb{E}\Bigg[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\Bigg\vert \Bigg\vert \int_0^{t}\int_0^L((t+h-s)^{\beta-1}-(t-s)^{\beta-1})G_{t+h-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dyds\Bigg\vert \Bigg\vert _p^p\Bigg],\\ B_{2,3}&=\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\Bigg\vert \Bigg\vert \int_{t}^{t+h}\int_0^L(t+h-s)^{\beta-1}G_{t+h-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dyds\Bigg\vert \Bigg\vert _p^p\right]. \end{align*} By the assumption $p\in(\alpha,5/3)$ of this lemma we can choose a $0<\beta<1$ satisfying $1-1/p<\beta<3/2p-1/2$. By Lemma \ref{lem:uniformbound} and (\ref{eq:momentestimation}), there exists a constant $C_{p,K,\alpha,T}$ such that \begin{equation} \label{ineq:0} \sup_{0\leq t\leq T}\mathbb{E}\left[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p\right]\leq C_{p,K,\alpha,T}. \end{equation} To estimate $B_{2,1}$, we set $G^h_t(x,y)=G_{t+h}(x,y)-G_{t}(x,y)$. Similar to the estimates for (\ref{eq:A_2_1}) and (\ref{eq:A_2_2}) in the proof of Lemma \ref{lem:uniformbound}, we have \begin{align*} B_{2,1}\leq&\mathbb{E}\Bigg[\sup_{0\leq t\leq T} \sup_{0\leq h\leq \delta}\Bigg(\int_0^{t}(t-s)^{\beta-1} \Bigg(\sup_{x\in [0,L]} \int_0^LG^h_{t-s}(x,y)dy\Bigg)^{\frac{p-1}{p}} \vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _pds\Bigg)^p\Bigg]. 
\end{align*} It also follows from the H\"{o}lder inequality and (\ref{ineq:0}) that \begin{align*} B_{2,1}\leq&C_{p,T}\sup_{0\leq t\leq T}\mathbb{E}\left[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p\right] \sup_{0\leq h\leq \delta} \left(\int_0^{T}s^{(\beta-1)p}\left(\sup_{x\in [0,L]}\int_0^LG_{t-s}^h(x,y)dy\right)^{p-1}ds\right)\nonumber\\ \leq&C_{p,K,\alpha,T}\sup_{0\leq h\leq \delta} \left(\int_0^{T}s^{(\beta-1)p}\left(\sup_{x\in [0,L]}\int_0^LG^h_{t-s}(x,y)dy\right)^{p-1}ds\right). \end{align*} Moreover, since $\beta>1-1/p$, it holds by (\ref{eq:Greenetimation0}) that \begin{align*} &\int_0^{T}s^{(\beta-1)p}\left(\sup_{x\in [0,L]}\int_0^LG_{t-s}^h(x,y)dy\right)^{p-1}ds\\ &\quad\leq\int_0^{T}s^{(\beta-1)p}\left(\sup_{x\in [0,L]}\left(\int_0^L|G_{t+h-s}(x,y)|dy+\int_0^L|G_{t-s}(x,y)|dy\right)\right)^{p-1}ds\\ &\quad\leq C_{p,T}\int_0^{T}s^{(\beta-1)p}ds<\infty. \end{align*} Thus, Lebesgue's dominated convergence theorem implies that $B_{2,1}$ converges to 0 as $\delta\rightarrow0$. For $B_{2,2}$, the Minkowski inequality and Young's convolution inequality imply that \begin{align*} B_{2,2}\leq&\mathbb{E}\Bigg[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta} \Bigg(\int_0^{t}((t+h-s)^{\beta-1}-(t-s)^{\beta-1}) \Bigg\vert \Bigg\vert \int_0^LG_{t+h-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dy\Bigg\vert \Bigg\vert _p ds\Bigg)^p\Bigg]\nonumber\\ \leq&C_{p,T}\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\left(\int_0^{t}((t+h-s)^{\beta-1}-(t-s)^{\beta-1}) \vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p ds\right)^p\right].\nonumber \end{align*} By the H\"{o}lder inequality and (\ref{ineq:0}) we have for $\beta>1-1/p$, \begin{align*} B_{2,2}\leq&C_{p,T}\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta} \int_0^{t}\vert (t+h-s)^{\beta-1}-(t-s)^{\beta-1}\vert ^{p} \vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p^p ds\right]\nonumber\\ \leq&C_{p,T}\sup_{0\leq t\leq T}\mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p]\int_0^{T}\vert (s+\delta)^{\beta-1}-s^{\beta-1}\vert ^pds\nonumber\\ \leq&C_{p,K,\alpha,T}\int_0^{T}\vert (s+\delta)^{\beta-1}-s^{\beta-1}\vert ^pds<\infty. \end{align*} Therefore, by Lebesgue's dominated convergence theorem, we know that $B_{2,2}$ converges to 0 as $\delta\rightarrow0$. For $B_{2,3}$, similar to $B_{2,2}$, we get \begin{align} \label{ineq:B_{2_3}} B_{2,3}\leq&\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta} \left(\int_{t}^{t+h}(t+h-s)^{\beta-1}\Big\vert \Big\vert \int_0^LG_{t+h-s}(\cdot,y)\mathcal{J}^n_{\beta}u^n(s,y)dy\Big\vert \Big\vert _p ds\right)^p\right]\nonumber\\ \leq&C_{p,T}\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta}\left( \int_{t}^{t+h}(t+h-s)^{\beta-1}\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p ds\right)^p\right]\nonumber\\ \leq&C_{p,T}\mathbb{E}\left[\sup_{0\leq t\leq T}\sup_{0\leq h\leq \delta} \int_{t}^{t+h}\vert (t+h-s)^{\beta-1}\vert ^p\vert \vert \mathcal{J}^n_{\beta}u^n_s\vert \vert _p^p ds\right]\nonumber\\ \leq&C_{p,T}\sup_{0\leq t\leq T}\mathbb{E}[\vert \vert \mathcal{J}^n_{\beta}u^n_t\vert \vert _p^p]\sup_{0\leq h\leq \delta} \int_{t}^{t+h}\vert (t+h-s)^{\beta-1}\vert ^pds \leq C_{p,K,\alpha,T}\int_0^{\delta}s^{(\beta-1)p}ds. \end{align} Since $\beta>1-1/p$, we can conclude that the right-hand side of (\ref{ineq:B_{2_3}}) converges to 0 as $\delta\rightarrow0$. Therefore, by the estimates of $B_{2,1}, B_{2,2}, B_{2,3}$ and $B_1$, the desired result (\ref{eq:temporalestimation}) holds, which completes the proof.
$\Box$ \end{proof} \begin{lemma} \label{lem:spatialestimation} For each $n\geq1$ let $u^n$ be the solution to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}}. Then for given $t\in[0,\infty)$, $0\leq\vert x_1\vert \leq\delta$ and $p\in(\alpha,2]$ \begin{equation} \label{eq:spatialestimation} \lim_{\delta\rightarrow 0}\sup_n\mathbb{E}\left[ \sup_{\vert x_1\vert \leq \delta}\vert \vert u^n(t,\cdot+x_1)-u^n(t, \cdot)\vert \vert _p^p\right]=0. \end{equation} \end{lemma} \begin{proof} Since the shift operator is continuous in $L^p([0,L])$, then for each $n\geq1$ and $\delta>0$ there exists a pathwise $x_1^{n,\delta}(t)\in\mathbb{R}$ such that $\vert x_1^{n,\delta}(t)\vert \leq\delta$ and $$\sup_{\vert x_1\vert \leq \delta}\vert \vert u^n(t,\cdot+x_1)-u^n(t,\cdot)\vert \vert _p^p=\vert \vert u^n(t,\cdot+x_1^{n,\delta}(t))-u^n(t,\cdot)\vert \vert _p^p.$$ As before, it is easy to see that $$\mathbb{E}[\vert \vert u^n(t,\cdot+x_1^{n,\delta}(t))-u^n(t,\cdot) \vert \vert _p^p]\leq C_p(C_1+C_2),$$ where \begin{align*} C_1&=\mathbb{E}\left[ \bigg\vert \bigg\vert \int_0^L (G_{t}(\cdot+x_1^{n,\delta}(t),y) -G_{t}(\cdot,y))u_0(y)dy\bigg\vert \bigg\vert _p^p \right], \\ C_2&=\mathbb{E}\bigg[\bigg\vert \bigg\vert \int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} (G_{t-s}(\cdot+{x_1^{n,\delta}(t)},y) -G_{t-s}(\cdot,y)) \varphi^n(u^n(s-,y)) z\tilde{N}(dz,dy,ds)\bigg\vert \bigg\vert _p^p\bigg]. \end{align*} For $C_1$, Young's convolution inequality and (\ref{eq:Greenetimation0}) imply that \begin{align*} C_1&\leq\mathbb{E}\left[\int_0^L \left(\int_0^L(|G_{t}(x+x_1^{n, \delta}(t),y)|+|G_{t}(x,y))|dx\right) \vert u_0(y)\vert ^pdy\right] \leq C_T\mathbb{E}[\vert \vert u_0\vert \vert _p^p]<\infty. \end{align*} Thus, the Lebesgue dominated convergence theorem implies that $C_1$ converges to 0 as $\delta\rightarrow0$. For $C_2$, it follows from the Burkholder-Davis-Gundy inequality, (\ref{eq:element-inequ})-(\ref{eq:jumpestimate}), (\ref{eq:glo-lin-growth}) and (\ref{eq:Greenetimation2}) that for $p\in(\alpha,2]$ \begin{align*} C_2 =&\int_0^L\mathbb{E}\bigg[\bigg\vert \int_0^{t+}\int_0^L\int_{\mathbb{R}\setminus\{0\}} (G_{t-s}(x+{x_1^{n,\delta}(t)},y)-G_{t-s}(x,y))\varphi^n(u^n(s-,y))z \tilde{N}(ds,dy,dz)\bigg\vert ^p\bigg]dx\nonumber\\ \leq&C_p\int_0^L\int_0^{t}\int_0^L\int_{\mathbb{R}\setminus\{0\}} \mathbb{E}[\vert (G_{t-s}(x+{x_1^{n,\delta}(t)},y)-G_{t-s}(x,y))\varphi^n(u^n(s,y))z\vert ^p]\nu_{\alpha}(dz)dydsdx\nonumber\\ \leq&C_{p,K,\alpha}\int_0^L\int_0^{t}\int_0^L\mathbb{E}[(1+\vert u^n(s,y)\vert )^p] \vert (G_{t-s}(x+{x_1^{n,\delta}(t)},y)-G_{t-s}(x,y))\vert ^pdydsdx\nonumber\\ \leq&C_{p,K,\alpha}\left(\int_0^L\int_0^{t}\vert G_{t-s}(x+{x_1^{n,\delta}(t)},y)-G_{t-s}(x,y)\vert ^pdsdx\right) \left(L+\sup_{0\leq s\leq t}\mathbb{E}\left[\vert \vert u^n_s\vert \vert _p^p\right]\right) \\ \leq& C_{p,K,\alpha}\left(\int_0^L\int_0^{t}(\vert G_{t-s}(x+{x_1^{n,\delta}(t)},y)\vert ^p+\vert G_{t-s}(x,y)\vert ^p)dsdx\right) \left(L+\sup_{0\leq s\leq t}\mathbb{E}\left[\vert \vert u^n_s\vert \vert _p^p\right]\right) \\ \leq& C_{p,K,\alpha}\left(\int_0^t(t-s)^{-\frac{p-1}{2}}ds\right) \left(L+\sup_{0\leq s\leq t}\mathbb{E}\left[\vert \vert u^n_s\vert \vert _p^p\right]\right) \end{align*} Therefore, it holds by (\ref{eq:Approximomentresult}) and Lebesgue's dominated convergence theorem that $C_2$ converges to 0 as $\delta\rightarrow0$. 
Hence, by the estimates of $C_1$ and $C_2$, we obtain \begin{align*} \lim_{\delta\rightarrow 0}\sup_n\mathbb{E}\bigg[\sup_{\vert x_1\vert \leq \delta}\vert \vert u^n(t,\cdot+x_1)-u^n(t,\cdot)\vert \vert _p^p\bigg]&=0, \end{align*} which completes the proof. $\Box$ \end{proof} \begin{proposition} \label{prop:tightnessresult2} Suppose that $\alpha\in(1,5/3)$. The sequence of solutions $(u^n)_{n\geq1}$ to equation (\ref{eq:approximatingsolution}) given by {\rm Proposition \ref{th:Approximainresult}} is tight in $D([0,\infty),L^p([0,L]))$ for $p\in(\alpha,5/3)$. \end{proposition} \begin{proof} From (\ref{eq:Approximomentresult}) and Markov's inequality, for each $\varepsilon>0$, $p\in(\alpha,2]$ and $T>0$ there exists a $N\in\mathbb{N}$ such that \begin{align*} \sup\limits_n\mathbb{P}\left[\vert \vert u^n_t\vert \vert _p^p>N\right]\leq\dfrac{\varepsilon}{3},\quad t\in [0,T]. \end{align*} Let $\Gamma^1_{\varepsilon,T}$ be a closed set defined by \begin{align} \label{eq:Gamma1} \Gamma^1_{\varepsilon,T}:=\{v_t\in L^p([0,L]): \vert \vert v_t\vert \vert _p^p\leq N,t\in[0,T]\}. \end{align} By Lemma \ref{lem:spatialestimation} and Markov's inequality, it holds that for each $\varepsilon>0$, $p\in(\alpha,2]$ and $T>0$ \begin{equation*} \lim_{\delta\rightarrow 0}\sup_n\mathbb{P}\left[ \sup_{\vert x_1\vert \leq \delta}\vert \vert u^n(t,\cdot+x_1)-u^n(t,\cdot)\vert \vert _p^p>\varepsilon\right]=0,\quad t\in[0,T]. \end{equation*} Then for $k\in\mathbb{N}$ we can choose a sequence $(\delta_k)_{k\geq1}$ with $\delta_k\rightarrow0$ as $k\rightarrow\infty$ such that \begin{align*} \sup\limits_n\mathbb{P}\left[\sup_{\vert x_1\vert \leq \delta_k}\vert \vert u^n(t,\cdot+x_1)-u^n(t,\cdot)\vert \vert _p^p>\frac{1}{k} \right]\leq\dfrac{\varepsilon}{3}2^{-k},\quad t\in[0,T]. \end{align*} Let $\Gamma^2_{\varepsilon,T}$ be a closed set defined by \begin{align} \label{eq:Gamma2} \Gamma^2_{\varepsilon,T}:=\bigcap_{k=1}^{\infty} \left\{v_t\in L^p([0,L]): \sup_{\vert x_1\vert \leq \delta_k}\vert \vert v(t,\cdot+x_1)-v(t,\cdot)\vert \vert _p^p\leq\frac{1}{k},t\in[0,T] \right\}. \end{align} We next prove that for each $\varepsilon>0$ and $p\in(\alpha,2]$ , \begin{equation} \label{eq:compactset1} \lim_{\gamma\rightarrow \infty}\sup_n\mathbb{P}\left[\int_{(L-\frac{L}{\gamma},L]}\vert u^n(t,x)\vert ^pdx>\varepsilon\right]=0. \end{equation} It is easy to see that \begin{align*} \mathbb{E}\bigg[\int_{(L-\frac{L}{\gamma},L]}\vert u^n(t,x)\vert ^pdx\bigg] &=\mathbb{E}\bigg[\int_0^L \vert u^n(t,x)\vert ^p1_{(L-\frac{L}{\gamma},L]}(x)dx\bigg] \leq C_p(D_1+D_2), \end{align*} where \begin{align*} D_1&=\int_0^L\mathbb{E}\left[\left\vert \int_0^LG_{t} (x,y)u_0(y)dy\right\vert ^p\right] 1_{(L-\frac{L}{\gamma},L]}(x)dx, \\ D_2&=\int_0^L\mathbb{E}\Bigg[\Bigg\vert \int_0^{t+} \int_0^L\int_{\mathbb{R}\setminus\{0\}} G_{t-s}(x,y) \varphi^n(u^n(s-,y))z \tilde{N}(ds,dy,dz) \Bigg\vert ^p\Bigg]1_{(L-\frac{L}{\gamma},L]}(x)dx. \end{align*} It is easy to prove that $D_1$ converges to 0 as $\gamma\rightarrow\infty$ by using Young's convolution inequality and Lebesgue's dominated convergence theorem. 
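For completeness, we sketch the argument for $D_1$ (note that $D_1$ does not depend on $n$): by Young's convolution inequality and (\ref{eq:Greenetimation0}), as in the estimate of $A_1$, \begin{align*} D_1\leq C_{p,T}\, \mathbb{E}\left[\int_0^L\left(\int_0^L\vert G_{t}(x,y)\vert \vert u_0(y)\vert ^pdy\right) 1_{(L-\frac{L}{\gamma},L]}(x)dx\right], \end{align*} and the integrand in $x$ is dominated, uniformly in $\gamma$, by $x\mapsto\int_0^L\vert G_{t}(x,y)\vert \vert u_0(y)\vert ^pdy$, whose integral over $[0,L]$ has finite expectation by (\ref{eq:Greenetimation0}) and $\mathbb{E}[\vert \vert u_0\vert \vert _p^p]<\infty$. Since $1_{(L-\frac{L}{\gamma},L]}(x)\rightarrow0$ for every $x\in[0,L)$ as $\gamma\rightarrow\infty$, Lebesgue's dominated convergence theorem indeed gives $D_1\rightarrow0$.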
For $D_2$, it holds by the Burkholder-Davis-Gundy inequality, (\ref{eq:element-inequ})-(\ref{eq:jumpestimate}) and (\ref{eq:glo-lin-growth}) that for $p\in(\alpha,2]$ \begin{align*} D_2 \leq &C_p\int_0^L\int_0^{t} \int_0^L\int_{\mathbb{R}\setminus\{0\}} \mathbb{E}[\vert G_{t-s}(x,y)\varphi^n(u^n(s,y))z\vert ^p] 1_{(L-\frac{L}{\gamma},L]}(x)\nu_{\alpha}(dz)dydsdx \nonumber\\ \leq&C_{p,K,\alpha}\int_0^L\int_0^{t} \int_0^L\mathbb{E}\left[(1+\vert u^n(s,y)\vert )^p\right]\vert G_{t-s}(x,y)\vert ^p1_{(L-\frac{L}{\gamma},L]}(x)dydsdx \nonumber\\ \leq&C_{p,K,\alpha,T}\left(L+\sup_{0\leq t\leq T}\mathbb{E} \left[\vert \vert u^n_t\vert \vert _p^p\right]\right) \int_0^L\int_0^{t}(t-s)^{-\frac{p-1}{2}} 1_{(L-\frac{L}{\gamma},L]}(x)dsdx. \end{align*} Since $p\leq2$, it holds that \begin{align*} D_2\leq C_{p,K,\alpha,T} \left(\int_0^L1_{(L-\frac{L}{\gamma},L]}(x)dx\right)\left(L+\sup_{0\leq t\leq T}\mathbb{E} \left[\vert \vert u^n_t\vert \vert _p^p\right]\right). \end{align*} By (\ref{eq:Approximomentresult}), $D_2$ converges to 0 as $\gamma\rightarrow\infty$. Therefore, (\ref{eq:compactset1}) is obtained from the estimates of $D_1$ and $D_2$ and Markov's inequality. For any $k\in\mathbb{N}$ and $T>0$ we can choose a sequence $(\gamma_k)_{k\geq1}$ with $\gamma_k\rightarrow\infty$ as $k\rightarrow\infty$ such that \begin{align*} \sup\limits_n\mathbb{P}\left [\int_{(L-\frac{L}{\gamma_k},L]}\vert u^n(t,x)\vert ^pdx>\frac{1}{k}\right] \leq\dfrac{\varepsilon}{3}2^{-k},\quad t\in[0,T]. \end{align*} Let $\Gamma^3_{\varepsilon,T}$ be a closed set defined by \begin{align} \label{eq:Gamma3} \Gamma^3_{\varepsilon,T}:=\bigcap_{k=1}^{\infty} \left\{v_t\in L^p([0,L]): \int_{(L-\frac{L}{\gamma_k},L]}\vert v(t,x)\vert ^pdx\leq\frac{1}{k}, t\in[0,T] \right\}. \end{align} Combining (\ref{eq:Gamma1}), (\ref{eq:Gamma2}) and (\ref{eq:Gamma3}), we define \begin{align*} \Gamma_{\varepsilon,T}:=\Gamma^1_{\varepsilon,T} \cap\Gamma^2_{\varepsilon,T} \cap\Gamma^3_{\varepsilon,T}. \end{align*} Then $\Gamma_{\varepsilon,T}$ is a closed set in $L^p([0,L])$, $p\in(\alpha,2]$. For any function $f\in\Gamma_{\varepsilon,T}$ the definition of $\Gamma_{\varepsilon,T}$ implies that the conditions (a)-(c) in Lemma \ref{lem:compactcriterion} hold, and so $\Gamma_{\varepsilon,T}$ is a relatively compact set in $L^p([0,L])$, $p\in(\alpha,2]$. Combining closedness and relative compactness, we conclude that $\Gamma_{\varepsilon,T}$ is a compact set in $L^p([0,L])$, $p\in(\alpha,2]$. Moreover, the definition of $\Gamma_{\varepsilon,T}$ implies that \begin{align*} \inf_n\mathbb{P} [u^n_t\in\Gamma_{\varepsilon,T}]&\geq1- \frac{\varepsilon}{3}\left(1+2\sum_{k=1}^{\infty} 2^{-k}\right)=1-\varepsilon, \end{align*} which verifies condition (i) of Lemma \ref{lem:tightcriterion}. Condition (ii) of Lemma \ref{lem:tightcriterion} is verified by Lemma \ref{lem:temporalestimation} with $p\in(\alpha,5/3)$. Therefore, $(u^n)_{n\geq1}$ is tight in $D([0,\infty),L^p([0,L]))$ for $p\in(\alpha,5/3)$, which completes the proof. $\Box$ \end{proof} \begin{Tproof}\textbf{~of Theorem \ref{th:mainresult2}.} According to Proposition \ref{prop:tightnessresult2}, there exists a $D([0,\infty),L^p([0,L]))$-valued random variable $u$ such that $u^n$ converges to $u$ in distribution in the Skorohod topology.
The Skorohod Representation Theorem yields that there exist another filtered probability space $(\hat{\Omega}, \hat{\mathcal{F}}, (\hat{\mathcal{F}}_t)_{t\geq0},\hat{\mathbb{P}})$ and, on it, random variables $(\hat{u}^n)_{n\geq1}$ and $\hat{u}$ which have the same distributions as $(u^n)_{n\geq1}$ and $u$, such that $\hat{u}^n$ converges to $\hat{u}$ almost surely in the Skorohod topology. The rest of the proof, including the construction of a truncated $\alpha$-stable measure $\hat{L}_{\alpha}$ such that $(\hat{u},\hat{L}_{\alpha})$ is a weak solution to equation (\ref{eq:originalequation1}), is the same as in the proof of Theorem \ref{th:mainresult} and is therefore omitted. Since $\hat{u}^n$ has the same distribution as $u^n$ for each $n\geq1$, the moment estimate (\ref{eq:unformlybounded}) in Lemma \ref{lem:uniformbound} can be written as $$ \sup_{n\geq1}\hat{\mathbb{E}}\left[\sup_{0\leq t\leq T}\vert \vert \hat{u}^n_t\vert \vert _p^p\right]\leq C_{p,K,\alpha,T}. $$ Hence, by Fatou's Lemma, $$ \hat{\mathbb{E}}\left[\sup_{0\leq t\leq T}\vert \vert \hat{u}_t\vert \vert _p^p\right]\leq\liminf_{n\rightarrow\infty} \hat{\mathbb{E}}\left[\sup_{0\leq t\leq T}\vert \vert \hat{u}^n_t\vert \vert _p^p\right]<\infty. $$ This yields the uniform $p$-moment estimate (\ref{eq:momentresult2}). Similarly, we can obtain the uniform stochastic continuity (\ref{eq:timeregular}) by Lemma \ref{lem:temporalestimation}. $\Box$ \end{Tproof} \noindent \textbf{Acknowledgements} This work is supported by the National Natural Science Foundation of China (NSFC) (Nos. 11631004, 71532001) and the Natural Sciences and Engineering Research Council of Canada (RGPIN-2021-04100). \end{document}
\begin{document} \title{Strengthening Hadwiger's conjecture for $4$- and $5$-chromatic graphs} \begin{abstract} Hadwiger's famous coloring conjecture states that every $t$-chromatic graph contains a $K_t$-minor. Holroyd [Bull. London Math. Soc. 29, (1997), pp. 139--144] conjectured the following strengthening of Hadwiger's conjecture: If $G$ is a $t$-chromatic graph and $S \subseteq V(G)$ takes all colors in every $t$-coloring of $G$, then $G$ contains a $K_t$-minor \emph{rooted at $S$}. We prove this conjecture in the first open case of $t=4$. Notably, our result also directly implies a stronger version of Hadwiger's conjecture for $5$-chromatic graphs as follows: Every $5$-chromatic graph contains a $K_5$-minor with a singleton branch-set. In fact, in a $5$-vertex-critical graph we may specify the singleton branch-set to be any vertex of the graph. \end{abstract} \section{Introduction} Given a graph $G$ and a number $t \in \mathbb{N}$, a \emph{$K_t$-minor} in $G$ is a collection $(B_i)_{i=1}^{t}$ of pairwise disjoint non-empty subsets of $V(G)$ such that $G[B_i]$ is connected for every $i \in [t]$ and for every distinct $i, j \in [t]$ the sets $B_i$ and $B_j$ are adjacent\footnote{Two subsets $A, B$ of the vertex-set $V(G)$ of a graph $G$ are called \emph{adjacent} if there exists an edge in $G$ with one endpoint in $A$ and one endpoint in $B$.} in $G$. The sets $B_1, \ldots,B_t$ are also called the \emph{branch-sets} of the $K_t$-minor. Hadwiger's conjecture, among the most famous and difficult open problems in graph theory, states the following. \begin{conjecture}[Hadwiger 1943~\cite{hadwiger}] For every integer $t \ge 1$, every graph $G$ with $\chi(G)=t$ contains a $K_t$-minor. \end{conjecture} Hadwiger's conjecture was proved for $t \le 4$ by Hadwiger himself~\cite{hadwiger}, and also by Dirac~\cite{dirac}, while the case $t=5$ was shown to be equivalent to the Four-Color-Problem by Wagner~\cite{wagner}. Since Appel and Haken's proof of the Four-Color-Theorem~\cite{appelhaken1,appelhaken2} in 1976 and its later independent proof by Robertson, Sanders, Seymour and Thomas~\cite{robertson2}, Hadwiger's conjecture was known to hold for $t=5$. In a tour de force, in 1993 Robertson, Seymour and Thomas~\cite{robertson1} proved Hadwiger's conjecture for $t=6$ by reducing it to the Four-Color Theorem. All the cases $t \ge 7$ remain wide open as of today. Much of recent research has focused on asymptotic bounds for coloring graphs with no $K_t$-minor rather than on the precise version of the conjecture, and there has been some exciting progress in this direction recently~\cite{del,norin}. For further partial results on Hadwiger's conjecture and background, we refer the reader to the survey article~\cite{survey} by Seymour. In this paper, we investigate a strengthening of Hadwiger's conjecture proposed by Holroyd~\cite{holroyd} in 1997. His stronger version of Hadwiger's conjecture concerns the containment of minors in graphs of given chromatic number that are \emph{rooted} at a particular set of vertices. Rooted minors have received significant attention before, we refer to~\cite{demasi,ellen,rooted,hayashi,kriesell,kriesell2,marx,wollan} for a selection of results on rooted minors in graphs. Here we use the following definition: For a set $S \subseteq V(G)$, we say that a $K_t$-minor $(B_i)_{i=1}^{t}$ in $G$ is \emph{$S$-rooted} or \emph{rooted at $S$} if $B_i \cap S \neq \emptyset$ for all $i \in [t]$. 
A second notion we need to introduce before stating Holroyd's conjecture is that of \emph{colorful sets}. For a graph $G$ and $S \subseteq V(G)$ we say that $S$ is \emph{colorful} in $G$ if for every proper $\chi(G)$-coloring of $G$ there are vertices of all $\chi(G)$ colors contained in $S$. A colorful set in a graph may be seen as a part of the graph that ``is hard to color'' (a small example is given at the end of this introduction). In that sense, it is quite intuitive that one may hope to find a $K_t$-minor rooted at this part of the graph. This is exactly Holroyd's conjecture (named ``Strong Hadwiger's conjecture'' in~\cite{holroyd}): \begin{conjecture}[Strong Hadwiger's conjecture, Holroyd 1997~\cite{holroyd}] Let $G$ be a graph with $\chi(G)=t$, and let $S$ be a colorful set in $G$. Then $G$ contains an $S$-rooted $K_t$-minor. \end{conjecture} Holroyd proved his conjecture for $t \le 3$ in~\cite{holroyd}. Here, we take the next step and prove the Strong Hadwiger's conjecture for $t=4$. \begin{theorem}\label{thm:main} Let $G$ be a graph with $\chi(G)=4$ and let $S$ be a colorful set in $G$. Then $G$ contains an $S$-rooted $K_4$-minor. \end{theorem} As was already noted by Holroyd (Theorem~2 in~\cite{holroyd}), the truth of the Strong Hadwiger's conjecture for a value $t$ implies the truth of Hadwiger's conjecture for the value $t+1$. Therefore, Theorem~\ref{thm:main} implies Hadwiger's conjecture for $t=5$. In fact, combining Holroyd's argument with Theorem~\ref{thm:main} we obtain a slight strengthening of Hadwiger's conjecture for $t=5$ as follows. \begin{corollary}\noindent \begin{itemize} \item For every $5$-vertex-critical graph $G$ and every $v \in V(G)$, the graph $G$ has a $K_5$-minor containing $\{v\}$ as a singleton branch-set. \item Every $5$-chromatic graph contains a $K_5$-minor with a singleton branch-set. \end{itemize} \end{corollary} \begin{proof}\noindent \begin{itemize} \item Let $G$ be a $5$-vertex-critical graph and let $v \in V(G)$. Then by criticality of $G$ we have $\chi(G)=5$ and $\chi(G-v)=4$. This implies that $N_G(v)$ is a colorful set in $G-v$. By Theorem~\ref{thm:main} there exists an $N_G(v)$-rooted $K_4$-minor in $G-v$, and adding to it the branch-set $\{v\}$ yields the desired $K_5$-minor. \item This follows from the first item since every $5$-chromatic graph has a $5$-vertex-critical subgraph. \end{itemize} \end{proof} \noindent To the best of our knowledge these strengthenings of Hadwiger's conjecture for $t=5$ are novel. \paragraph{\textbf{Organization and Outline.}} The rest of the paper is devoted to the proof of Theorem~\ref{thm:main}. In Section~\ref{sec:prelim}, we prepare the proof by discussing a few results from the literature that will be needed in our argument. Most crucially, we will rely on a characterization of the existence of $K_4$-minors rooted at a set of four distinct vertices by Fabila-Monroy and Wood~\cite{rooted}. We then present our proof in Section~\ref{sec:proof}. A main part of the proof is to establish several properties of a smallest counterexample $G$ to Theorem~\ref{thm:main} in terms of connectivity and the distribution of the vertices in $S$ over the graph. In another step, these structural properties can then either be used to find the desired rooted $K_4$-minor directly, or to restrict the structure of $G$ to a planar graph (Lemma~\ref{lma:rootyness}). At this final stage, we invoke the Four-Color-Theorem to derive a $4$-coloring that shows that $S$ cannot be colorful, which yields the desired contradiction.
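Before turning to the preliminaries, let us illustrate the notions of colorful sets and rooted minors with a small example (included only for illustration; it is not used in the sequel). Let $G=C_5$ be the $5$-cycle with vertices $v_1,\ldots,v_5$ in cyclic order, so that $\chi(G)=3$. The set $S=V(G)$ is colorful, since a proper coloring of $C_5$ using only two colors would contradict $\chi(C_5)=3$. In contrast, the set $S=\{v_1,v_2\}$ is not colorful: the proper $3$-coloring assigning the colors $1,2,1,2,3$ to $v_1,\ldots,v_5$ leaves $S$ without a vertex of color $3$. Finally, the branch-sets $B_1=\{v_1,v_2\}$, $B_2=\{v_3\}$ and $B_3=\{v_4,v_5\}$ form a $V(G)$-rooted $K_3$-minor in $C_5$, as guaranteed by the case $t=3$ of the Strong Hadwiger's conjecture.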
\section{Preliminaries}\label{sec:prelim} In this section we state a few preliminary results from the literature that we will use in our proof of Theorem~\ref{thm:main}. The first is a precise characterization of when a graph admits a $K_3$-minor rooted at three given vertices, due to Holroyd~\cite{holroyd} and Linusson and Wood~\cite{linusson}. \begin{lemma}[Theorem~6 in~\cite{holroyd}, Lemma~5 in~\cite{linusson}]~\label{lemma:K3minor} Let $G$ be a graph, and let $a,b,c \in V(G)$ be three distinct vertices. If for every vertex $v \in V(G)$ at least two of $a,b,c$ are in a common component of $G-v$, then there exists an $\{a,b,c\}$-rooted $K_3$-minor in $G$. \end{lemma} Second, we will need a result by Fabila-Monroy and Wood \cite{rooted} characterizing when a graph contains a $K_4$-minor rooted at four given vertices $a,b,c,d$. Following the notation in~\cite{rooted}, given a graph $H$, we denote by $H^+$ a graph obtained from $H$ by, for each triangle $T$ in $H$, adding a disjoint (possibly empty) set of vertices $C_T$ to $H$, making them a clique and adjacent to the three vertices of the triangle $T$ in $H$ (the sets $C_T$ and $C_{T'}$ must be chosen disjoint for different triangles $T, T'$ in $H$). An $\{a, b, c, d\}$-web is a graph $H^+$ where $H$ has a planar embedding with outerface $\{a, b, c, d\}$ in some order, and such that every triangle in $H$ forms a face in this embedding. Fabila-Monroy and Wood~\cite{rooted} give a precise description of the edge-maximal graphs with no $K_4$-minor rooted at four specified vertices. Here, we will only need a weaker version of their main result. \begin{theorem}\label{theorem:wood} Let $G$ be a graph with four marked vertices $a,b,c,d$ such that $G$ contains no $\{a,b,c,d\}$-rooted $K_4$-minor. Then either \begin{enumerate} \item $G$ is a spanning subgraph of an $\{a,b,c,d\}$-web, or \item there exist vertices $u,v \in V(G)$ such that $G-\{u,v\}$ has at least three distinct connected components. \end{enumerate} \end{theorem} \begin{proof} By Theorem 15 in \cite{rooted}, $G$ is a spanning subgraph of a graph in one of the classes $\mathcal{A}$-$\mathcal{F}$. Class $\mathcal{D}$ consists precisely of the $\{a,b,c,d\}$-webs, in which case $(1)$ holds. It can be directly checked that all graphs in the remaining classes (and thus also their spanning subgraphs) have a $2$-separator as described in $(2)$. \end{proof} Throughout the proof we will use the following convenient (and standard) terminology to speak about separators in graphs: A \emph{separation} of a graph is a tuple $(A,B)$ where $A, B$ are subsets of $V(G)$ with $A \cup B=V(G)$, such that $A \setminus B\neq \emptyset \neq B \setminus A$ and such that there are no edges in $G$ connecting a vertex in $A\setminus B$ to a vertex in $B \setminus A$. The \emph{order} of a separation is $|A \cap B|$, and $A\cap B$ is called the \emph{separator} of the separation $(A,B)$. It is easy to see that a graph is $k$-connected if and only if all its separations have order at least $k$. In the proof, we repeatedly use Menger's theorem for vertex-disjoint paths in the following two variants: \begin{theorem}[Menger's theorem, set versions; cf. Theorem~3.3 in~\cite{diestel}] Let $G$ be $k$-connected, $v \in V(G)$ and $A, B \subseteq V(G)$. \begin{itemize} \item There are $\min\{k,|A|,|B|\}$ vertex-disjoint paths in $G$ each with endpoints in $A$ and $B$. \item If $v\notin A$ then there are $\min\{k,|A|\}$ paths in $G$, each of them connecting $v$ to a vertex in $A$, and any two of them only sharing the vertex $v$.
\end{itemize} \end{theorem} We note that in the above statement as well as throughout the whole manuscript we allow paths to consist of single vertices (i.e., be of length $0$). \section{The proof}\label{sec:proof} For a $3$-connected graph $G$ and a subset $S \subseteq V(G)$ of vertices, we say that $S$ is \emph{spread out in $G$} if for every separation $(A,B)$ in $G$ of order $3$, it holds that $S \setminus A \neq \emptyset \neq S \setminus B$. The following auxiliary statement will be of crucial use in our proof of Theorem~\ref{thm:main}. \begin{lemma}\label{lma:rootyness} Let $G$ be a $3$-connected graph, and let $S \subseteq V(G)$ be such that $|S|\ge 4$ and $S$ is spread out in $G$. If $G$ contains no $S$-rooted $K_4$-minor then the graph $G^a(S)$, obtained from $G$ by adding a new vertex and making it adjacent to every element of $S$, is planar. \end{lemma} We will prove Lemma \ref{lma:rootyness} by combining Theorem \ref{theorem:wood} with the following statement. \begin{lemma}\label{lma:induction} Let $G$ be a 3-connected graph, and let $S\subseteq V(G)$ be spread out in $G$ such that $G$ contains no $S$-rooted $K_4$-minor. Then for any separation $(A, B)$ of order $3$ such that $|A\cap S|\geq 3$, it holds that $G[B]$ has a planar embedding with $A\cap B$ on the outerface. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lma:induction}] We prove Lemma \ref{lma:induction} by induction on the size of $B$. Note that for any separation as above, we have $|B|\geq 4$. Moreover, for $|B|=4$, it is clear that such a planar embedding exists. Turning to the induction step, we fix any separation $(A, B)$ as above with $|B|\geq 5$, and assume the statement is true for any separation $(A', B')$ as above with $|B'|<|B|$. Furthermore, we pick some $S' \in \binom{A \cap S}{3}$. \begin{claim}\label{claim:connected} $G[A]$ and $G[B]$ are connected. \end{claim} \begin{claimproof} By Menger's theorem, any $v\in A\setminus B$ is connected to $A\cap B$ in $G$ by three paths that pairwise only share $v$ as a common vertex. These paths are completely contained in $A$ as $A\cap B$ is a separator, so, in particular, $G[A]$ is connected. Similarly, any $v\in B\setminus A$ is connected to $A\cap B$ by three internally disjoint paths in $G$, so $G[B]$ is also connected. \end{claimproof} \begin{claim}\label{claim:cherryminor} There exist enumerations $\{s_1, s_2, s_3\}=S'$ and $\{c_1, c_2, c_3\} = A\cap B$ and disjoint sets $A_1, A_2, A_3 \subseteq A$ such that $\{s_i, c_i\}\subseteq A_i$ and $G[A_i]$ is connected for each $i \in [3]$ and such that $A_2$ is adjacent to $A_1$ and $A_3$ in $G$. \end{claim} \begin{claimproof} Since $G$ is $3$-connected, there exist three vertex-disjoint paths from $S'$ to $A\cap B$. Let $P_1, P_2, P_3$ be an ordering of the paths such that $\text{dist}_{G[A]}(V(P_1), V(P_2))+\text{dist}_{G[A]}(V(P_2), V(P_3))$ is minimized. Then letting $A_1 = V(P_1)$, $A_3=V(P_3)$ and $A_2$ equal the connected component of $G[A\setminus (V(P_1) \cup V(P_3))]$ containing $V(P_2)$, we obtain sets with the desired properties (by the minimality assumption a shortest path from $V(P_2)$ to $V(P_i)$ for $i \in \{1,3\}$ does not intersect $V(P_{4-i})$, certifying that $A_2$ is adjacent to $A_i$). \end{claimproof} Below we let $s_1, s_2, s_3$ and $c_1, c_2, c_3$ denote the orderings of the respective sets, as prescribed by Claim \ref{claim:cherryminor}. \begin{claim}\label{claim:cspread} Let $X\subseteq B$ be such that $|X| \in \{1,2\}$.
Any connected component of $G[B\setminus X]$ contains at least one of the vertices $c_1, c_2, c_3$. \end{claim} \begin{claimproof} $G-X$ is connected by $3$-connectivity of $G$. Thus for any $v\in B\setminus X$ there exists a shortest path from $v$ to $(A\cap B)\setminus X$ in $G-X$. By minimality, this path is contained in $B$. Hence $v$ is in the same connected component of $G[B\setminus X]$ as the end-point of this path. \end{claimproof} \begin{claim}\label{claim:cutvertex} If $G[B]$ has a cut-vertex, then $G[B]$ has a planar embedding with $A\cap B$ on its outerface. \end{claim} \begin{claimproof} Let $x$ be a cut-vertex in $G[B]$. By Claim \ref{claim:cspread}, one of $c_1, c_2, c_3$ has to be in a connected component of $G[B\setminus \{x\}]$ that does not contain the other two vertices. We may w.l.o.g. assume that $c_1$ has this property. As each vertex in $G[B\setminus\{x,c_1\}]$ is in the same connected component as $c_2$ or $c_3$, it follows that $c_1$ is isolated in $G[B\setminus\{x\}]$, hence $x$ is the only neighbor of $c_1$ in $G[B]$. It is straight-forward to see that $(A\cup \{c_1\}, B\setminus\{c_1\})$ is a separation in $G$ with separator $\{x, c_2, c_3\}$. By the induction hypothesis, $G[B\setminus\{c_1\}]$ has a planar embedding with $x, c_2, c_3$ on the outerface, which by adding $c_1$ to the outerface of the embedding and connecting it to $x$ can be extended to a planar embedding of $G[B]$ with $c_1,c_2,c_3$ on the outerface. \end{claimproof} It remains to consider the case where $G[B]$ is 2-connected and $|B|\geq 5$. For this we will use Theorem \ref{theorem:wood}. Let $s_4 \in S\cap (B\setminus A)$ (such a vertex exists since $S$ is spread out in $G$). Observe that $G[B]$ has no $K_4$-minor rooted at $\{c_1, c_2, c_3, s_4\}$ as then using Claim \ref{claim:cherryminor} we could immediately extend it to a $K_4$-minor rooted at $S$ (replace any occurrence of a vertex $c_i$ in a branch-set by the connected set $A_i \supseteq \{s_i,c_i\}$ in $A$). Hence, $G[B]$ must satisfy either Cases $(1)$ or $(2)$ of Theorem \ref{theorem:wood}. Let us first consider Case $(1)$. Then $G[B]$ is a spanning subgraph of a $\{c_1,c_2,c_3,s_4\}$-web $H^+$. Concretely, $H$ is a planar graph that admits a planar embedding with outerface $\{c_1,c_2,c_3,s_4\}$ (in some possibly permuted order) and such that every triangle $T$ in $H$ forms a face in this embedding. Further, for every triangle $T$ in $H$ there is a disjoint (possibly empty) set $C_T$ of vertices in $H^+$ such that $H^+$ is formed from $H$ by making $C_T$ a clique fully connected to the three vertices of $T$ for every triangle $T$. Now note that for every triangle $T$ in $H$ for which $C_T \neq \emptyset$, we have that $(V(H),V(T) \cup C_T)$ forms a separation of order $3$ in $G[B]$ with separator $V(T)$. Further, $(A \cup V(H), V(T) \cup C_T)$ is a separation of order $3$ in the whole graph $G$. Since $H$ contains at least $4$ vertices, we have $|V(T) \cup C_T|<|B|$ for every triangle $T$ in $H$ and now the inductive assumption implies that $G[V(T) \cup C_T]$ admits a planar embedding with $V(T)$ on its outerface, for every triangle $T$ in $H$. It is now easy to see that we can take the planar embedding of $H$ as prescribed above, and for each triangular face $T$ with $C_T \neq \emptyset$ glue into it an embedding of $G[V(T) \cup C_T]$ with $V(T)$ on its outerface, to overall reach a planar embedding with $c_1,c_2,c_3$ (and in fact also $s_4$) on its outerface. 
After deleting some edges from this embedding we reach a planar embedding of $G[B]$ with $c_1,c_2,c_3$ on the outerface. This is the desired outcome and concludes the proof in this case. It only remains to consider Case $(2)$. By Claims \ref{claim:connected} and \ref{claim:cutvertex} we may assume that $G[B]$ is $2$-connected. Let $\{x, y\} \subseteq B$ be the prescribed cut such that $G[B\setminus \{x,y\}]$ has at least $3$ distinct connected components. As, by Claim \ref{claim:cspread}, any connected component of $G[B\setminus\{x,y\}]$ contains a vertex among $c_1, c_2, c_3$, we conclude that $G[B\setminus\{x,y\}]$ has exactly three connected components $C_1, C_2, C_3$, where we may w.l.o.g. assume $c_i\in C_i$ for $i=1, 2, 3$. To conclude the induction proof, we will show that these assumptions are sufficient to find branch-sets $c_1\in B_1, c_2\in B_2, c_3\in B_3, s_4\in B_4 \subseteq B$ such that any pair of these sets, except possibly $B_2$ and $B_3$, is adjacent. Combined with Claim \ref{claim:cherryminor}, this would imply the existence of an $S$-rooted $K_4$-minor (with branch-sets $A_1 \cup B_1, A_2 \cup B_2, A_3 \cup B_3$ and $B_4$), hence concluding the proof. As $G[B]$ is $2$-connected, there exist for each $i=1, 2, 3$ two internally disjoint paths $P_i, Q_i$ in $G[B]$ with endpoints $c_i, x$ and $c_i, y$, respectively. Then as $\{x, y\}$ separates $C_i$ from the rest of $B$, we must have $V(P_i), V(Q_i) \subseteq C_i\cup \{x, y\}$. Let $X:=\bigcup_{i=1}^{3}{(V(P_i)\setminus \{c_i\})}$ and $Y:=\bigcup_{i=1}^{3}{(V(Q_i)\setminus \{c_i\})}$. Note that $G[X]$ and $G[Y]$ are connected. If $s_4$ is already contained in either $X$ or $Y$, say w.l.o.g. in $X$, then $B_1=\{c_1\}\cup Y, B_2 = \{c_2\}, B_3=\{c_3\}$ and $B_4=X$ are vertex sets with the desired properties. Otherwise $s_4$ is contained in some component $C_i$. Note that by choice of $s_4$ it is distinct from $c_i$. Let $P$ be a shortest path from $s_4$ to $X\cup Y$ in $G[B\setminus\{c_i\}]$. Such a path exists as $G[B]$ is $2$-connected. Let us, w.l.o.g., assume that the end-point of $P$ is in $X$. Then, similarly to before, $B_1=\{c_1\}\cup Y, B_2 = \{c_2\}, B_3=\{c_3\}$ and $B_4=X\cup V(P)$ have the desired properties. This concludes the proof of Lemma \ref{lma:induction}. \end{proof} Before giving the proof of Lemma~\ref{lma:rootyness} it will be useful to observe the following simple statement about planar graphs. \begin{lemma}\label{lemma:kuratowski} Let $G$ be a planar graph and let $S \subseteq V(G)$ be a set with $|S|\ge 4$. Suppose that for every choice of vertices $s_1,s_2,s_3,s_4 \in S$ there is a planar embedding of $G$ in which $s_1,\ldots,s_4$ are on the boundary of the outerface. Then the graph $G^a(S)$, obtained from $G$ by adding a new vertex whose neighbors are the elements of $S$, is planar. \end{lemma} \begin{proof} We claim that for every 4-subset $\{s_1,s_2,s_3,s_4\} \subseteq S$, the graph $G^a(s_1,s_2,s_3,s_4)$, obtained by adding a new vertex to $G$ with neighborhood $\{s_1,s_2,s_3,s_4\}$, is planar. Indeed, given a planar embedding with $s_1,s_2,s_3,s_4$ on the outerface, we can simply embed the new vertex in the outerface and preserve a planar embedding. Now, suppose towards a contradiction that $G^a(S)$ is non-planar. By Kuratowski's theorem, this means that $G^a(S)$ contains a subdivision of $K_5$ or of $K_{3,3}$. Fix one such subdivision of a Kuratowski graph. It is a subgraph of $G^a(S)$ of maximum degree at most $4$.
But then it has to be also contained in a graph $G^a(s_1,s_2,s_3,s_4)$ for some $\{s_1,s_2,s_3,s_4\} \subseteq S$, a contradiction, since the latter graph is planar. \end{proof} \begin{proof}[Proof of Lemma \ref{lma:rootyness}.] Take any four vertices $s_1, s_2, s_3, s_4\in S$. By Theorem \ref{theorem:wood} it follows that if there is no $K_4$-minor rooted at these four vertices, then, as $G$ is $3$-connected and thus admits no separation of order $2$, it must satisfy Case $(1)$. Concretely, $G$ is a spanning subgraph of an $\{s_1,s_2,s_3,s_4\}$-web $H^+$. This means $H$ is a graph with a planar embedding with outerface $\{s_1,s_2,s_3,s_4\}$ (in some order) and every triangle $T$ in $H$ forms a face in this embedding. Further, for every triangle $T$ in $H$ there is a disjoint set $C_T$ of vertices in $H^+$ such that $H^+$ is formed from $H$ by making $C_T$ a clique fully connected to the three vertices of $T$ for every triangle $T$. Since for every triangle $T$ in $H$ we have that $(V(H),V(T) \cup C_T)$ forms a $3$-separation in $G$, by applying Lemma \ref{lma:induction} we find for every triangle $T$ in $H$ a planar embedding of $G[V(T)\cup C_T]$ with $V(T)$ on the outerface. Hence, by taking the prescribed planar embedding of $H$ and gluing for every triangle $T$ a planar embedding of $G[V(T) \cup C_T]$ as above into the corresponding face, we find a planar embedding of a supergraph of $G$ that has $s_1, s_2, s_3, s_4$ on the outerface. This yields a planar embedding also of $G$ with $s_1,\ldots,s_4$ on the outerface. Since our choice of $s_1,s_2,s_3,s_4$ was arbitrary, Lemma~\ref{lemma:kuratowski} implies that $G^a(S)$ is planar. \end{proof} Equipped with Lemma~\ref{lma:rootyness} we can now present the proof of our main result. \begin{proof}[Proof of Theorem~\ref{thm:main}] We prove the theorem by contradiction. Suppose (reductio ad absurdum) that a graph $G$ is a smallest counterexample (in terms of $|V(G)|$). Then $\chi(G)=4$, and there is a colorful set $S$ of vertices in $G$ such that $G$ contains no $S$-rooted $K_4$-minor. We start by establishing some simple facts concerning the connectivity of $G$. \begin{claim} $G$ is connected. \end{claim} \begin{claimproof} Suppose not; then there exists a partition $(A,B)$ of $V(G)$ with $A, B \neq \emptyset$ such that no edge in $G$ connects $A$ and $B$. Then $\chi(G[A]), \chi(G[B]) \le 4$, and $G[A]$ and $G[B]$ do not contain a $K_4$-minor rooted at $S \cap A$ and $S \cap B$, respectively. By minimality of $G$, there exist proper $4$-colorings $c_A:A \rightarrow [4]$ and $c_B:B \rightarrow [4]$ such that $S \cap A$ does not contain all $4$ colors in $c_A$, and $S \cap B$ does not contain all $4$ colors in $c_B$. By permuting colors we may assume w.l.o.g. that $c_A(S \cap A) \subseteq \{2,3,4\}$ and $c_B(S \cap B) \subseteq \{2,3,4\}$. But then putting together $c_A$ and $c_B$ yields a proper coloring of $G$ in which no vertex in $S$ receives color $1$, a contradiction to $S$ being colorful. \end{claimproof} \begin{claim} $G$ is $2$-connected. \end{claim} \begin{claimproof} Towards a contradiction, suppose there exists a separation $(A,B)$ of $G$ of order $|A \cap B|=1$. Write $A \cap B=\{v\}$. Clearly, $\chi(G[A]), \chi(G[B]) \le 4$. Suppose first that $S \cap A =\emptyset$ or $S \cap B=\emptyset$. By symmetry, we may assume w.l.o.g. that the latter holds, i.e. $S \subseteq A\setminus B$.
Then $G[A]$ contains no $S$-rooted $K_4$-minor, hence, by minimality of $G$, there is a proper $4$-coloring $c_A:A \rightarrow [4]$ of $G[A]$ in which no vertex in $S$ receives color $1$. Let $c_B:B \rightarrow [4]$ be a proper $4$-coloring of $G[B]$. Possibly after permuting colors in $c_B$ we have $c_A(v)=c_B(v)$, and then the common extension of $c_A$ and $c_B$ to $G$ forms a proper $4$-coloring in which no vertex in $S$ receives color $1$, a contradiction. Moving on, suppose that $S \cap A \neq\emptyset \neq S \cap B$. We claim that $G[A]$ does not contain an $(S \cap A) \cup \{v\}$-rooted $K_4$-minor. Indeed, if $G[A]$ contains a $K_4$-minor $(B_i)_{i=1}^{4}$ rooted at $(S \cap A) \cup \{v\}$, then one of the branch-sets must contain $v$ (for otherwise $(B_i)_{i=1}^{4}$ is an $S$-rooted $K_4$-minor in $G$), say $v \in B_1$. By connectivity, there exists a path $P$ in $G$ connecting $v$ to a vertex in $S \cap B$, and $V(P) \subseteq B$. Then $(B_1 \cup V(P), B_2, B_3, B_4)$ form the branch-sets of an $S$-rooted $K_4$-minor in $G$, a contradiction. The symmetric argument works to show that $G[B]$ contains no $(S \cap B) \cup \{v\}$-rooted $K_4$-minor. By minimality of $G$, it follows that there exist proper $4$-colorings $c_A:A \rightarrow [4]$ of $G[A]$ and $c_B:B \rightarrow [4]$ of $G[B]$ such that $(S \cap A) \cup \{v\}$ and $(S \cap B) \cup \{v\}$ do not contain a vertex of color $1$. Then $c_A(v) \neq 1 \neq c_B(v)$, and by applying a suitable $1$-invariant permutation of the colors in $c_B$ we may assume w.l.o.g. that $c_A(v)=c_B(v)$. Now, the common extension of $c_A$ and $c_B$ is a proper coloring of $G$ in which no vertex in $S$ receives color $1$, a contradiction. \end{claimproof} \begin{claim} $G$ is $3$-connected. \end{claim} \begin{claimproof} Suppose not; then there exists a separation $(A,B)$ of $G$ of order $|A \cap B|=2$. Write $A \cap B=\{u,v\}$. Let us fix (arbitrarily) a proper coloring $c:V(G)\rightarrow [4]$ of $G$. We now consider two cases depending on the distribution of $S$ on the two sides $A$ and $B$ of the separation. \textbf{Case 1.} $|S \cap A| \ge 2$ and $|S \cap B| \ge 2$. Since $G$ is $2$-connected, by Menger's theorem there exist two vertex-disjoint paths $P_u^A, P_v^A$ in $G$ starting in $S \cap A$ and ending at $u, v$ respectively. Similarly, there exist two disjoint paths $P_u^B, P_v^B$ starting at $S \cap B$ and ending at $u$ and $v$ respectively. Clearly, the paths $P_u^A, P_v^A$ are fully contained in $G[A]$ and the paths $P_u^B, P_v^B$ are fully contained in $G[B]$. Since $G$ is $2$-connected, it is easy to see that $G[A]$ and $G[B]$ are connected graphs, for otherwise $u$ or $v$ would form a cut-vertex of $G$. This implies that there exists a path $P^A$ in $G[A]$ and a path $P^B$ in $G[B]$ such that $P^A$ has endpoints $p_u^A \in V(P_u^A)$ and $p_v^A \in V(P_v^A)$, while $P^B$ has endpoints $p_u^B \in V(P_u^B)$ and $p_v^B \in V(P_v^B)$. W.l.o.g. (by choosing shortest paths) we may assume $V(P_u^A) \cap V(P^A)=\{p_u^A\}, V(P_v^A) \cap V(P^A)=\{p_v^A\}$ and $V(P_u^B) \cap V(P^B)=\{p_u^B\}, V(P_v^B) \cap V(P^B)=\{p_v^B\}$. It is then easy to see that the sets $(V(P_u^A), V(P_v^A) \cup (V(P^A)\setminus\{p_u^A\}))$ form a $K_2$-minor in $G[A]$ rooted at $u,v$ and at $S$, and similarly $(V(P_u^B), V(P_v^B) \cup (V(P^B)\setminus\{p_u^B\}))$ form a $K_2$-minor in $G[B]$ rooted at $u,v$ and at $S$. Consider first the case that $c(u) \neq c(v)$.
Then we define $G_A, G_B$ as the graphs obtained from $G[A]$ and $G[B]$ respectively by adding an edge between $u$ and $v$ (if it does not already exist). Since $c$ induces proper colorings on $G_A$ and $G_B$, we have $\chi(G_A), \chi(G_B) \le 4$. Let us further define $S_A:=(S \cap A) \cup \{u,v\}, S_B:=(S \cap B) \cup \{u,v\}$. We claim that $G_A$ contains no $S_A$-rooted $K_4$-minor. Indeed, if $G_A$ were to contain an $S_A$-rooted $K_4$-minor, then replacing every occurrence of $u$ or $v$ in one of its branch-sets by the sets $V(P_u^B)$ or $V(P_v^B) \cup (V(P^B)\setminus\{p_u^B\})$ in $G$ respectively would yield an $S$-rooted $K_4$-minor in $G$, a contradiction. A symmetric argument shows that $G_B$ contains no $S_B$-rooted $K_4$-minor. The above facts, $|V(G_A)|, |V(G_B)|<|V(G)|$ and the minimality assumption on $G$ now imply that there exist proper colorings $c_A:V(G_A) \rightarrow [4]$ of $G_A$ and $c_B:V(G_B)\rightarrow [4]$ of $G_B$ such that no vertex in $S_A$ receives color $1$ in $G_A$ and no vertex in $S_B$ receives color $1$ in $G_B$. In particular, $c_A(u), c_A(v)$ are distinct and not equal to $1$, and the same is true for $c_B(u), c_B(v)$. Hence, after applying a $1$-invariant permutation of $[4]$ that maps $c_B(u)$ to $c_A(u)$ and $c_B(v)$ to $c_A(v)$, we may assume w.l.o.g. that $c_A(u)=c_B(u), c_A(v)=c_B(v)$. Now, the common extension of $c_A$ and $c_B$ to $V(G)$ forms a proper coloring of $G$ in which no vertex in $S$ receives color $1$. This is a contradiction to $S$ being colorful and shows that the case $c(u) \neq c(v)$ cannot occur. Next, suppose that $c(u)=c(v)$. Then we define $G_A$ and $G_B$ as the graphs obtained from $G[A]$ and $G[B]$ by identifying $u$ and $v$ into a new vertex $x_{uv}$, and define $S_A:=S \cup \{x_{uv}\}$ and $S_B:=S \cup \{x_{uv}\}$. Similarly to the previous case, we claim that $G_A$ contains no $S_A$-rooted $K_4$-minor. Indeed, if it did, then, possibly after replacing the occurrence of the vertex $x_{uv}$ in one of the four branch-sets with the set $V(P_u^B) \cup V(P^B) \cup V(P_v^B)$ in $G$, we would obtain the four branch-sets of an $S$-rooted $K_4$-minor in $G$, a contradiction. A symmetric argument shows that $G_B$ contains no $S_B$-rooted $K_4$-minor. Note that $c$ induces a proper $4$-coloring on both $G_A$ and $G_B$, so that $\chi(G_A), \chi(G_B) \le 4$. These facts imply (using the minimality of $G$ and that $G_A$ and $G_B$ are smaller than $G$) the existence of proper $[4]$-colorings $c_A, c_B$ of $G_A$ and $G_B$ respectively in which no vertex in $S_A$ and $S_B$ respectively gets assigned color $1$. In particular, this means $c_A(x_{uv}) \neq 1 \neq c_B(x_{uv})$. Hence (possibly after applying a $1$-invariant permutation of the color-set $[4]$ in $c_B$ that maps $c_B(x_{uv})$ to $c_A(x_{uv})$) we may assume w.l.o.g. that $c_A(x_{uv})=c_B(x_{uv})$. It is now easy to see that $c^\ast:V(G)\rightarrow [4]$ defined by $c^\ast(u):=c^\ast(v):=c_A(x_{uv})$, $c^\ast(x):=c_A(x)$ for $x \in A \setminus \{u,v\}$, $c^\ast(x):=c_B(x)$ for $x \in B \setminus \{u,v\}$, forms a proper $4$-coloring of $G$ in which no vertex in $S$ gets assigned color $1$. This yields a contradiction to the fact that $S$ is colorful and hence shows that also the case $c(u)=c(v)$ is impossible. All in all, we conclude that Case~1 cannot occur. \textbf{Case 2.} $|S \cap A|\le 1$ or $|S \cap B| \le 1$. W.l.o.g. we assume $|S \cap B|\le 1$ in the following. We denote by $P$ a path in $G[B]$ connecting $u$ and $v$ (such a path must exist, since $G$ is $2$-connected).
Furthermore, if $S \setminus A\neq \emptyset$, that is, $S \cap B=\{s\}$ for some $s \in B \setminus A$, then by $2$-connectivity of $G$ and Menger's theorem there exist two internally disjoint paths in $G$ connecting $s$ to $u$ and $v$, which we denote by $P_u$ and $P_v$ respectively. Clearly, $P_u$ and $P_v$ have to be entirely contained in $G[B]$. Moving on, let us first consider the case $c(u)=c(v)$. Let $G_A$ be the graph obtained from $G[A]$ by identifying $u$ and $v$ into a single vertex $x_{uv}$. Note that $c$ induces a proper $4$-coloring on $G_A$, and so $\chi(G_A) \le 4$. We furthermore define a set $S_A$ of vertices in $G_A$ as $S_A:=S$ if $S \cap B=\emptyset$ and $S_A:=S \cup \{x_{uv}\}$ if $S \cap B \neq \emptyset$. We claim that $G_A$ does not contain an $S_A$-rooted $K_4$-minor. In the case $S \subseteq A$, this follows since the occurrence of $x_{uv}$ in any branch-set of an $S_A$-rooted $K_4$-minor in $G_A$ can be replaced by the full set $V(P)$ in $G$, which would result in an $S$-rooted $K_4$-minor in $G$, a contradiction. Similarly, in the case $S \setminus A \neq \emptyset$ the occurrence of $x_{uv}$ in a branch-set of an $S_A$-rooted $K_4$-minor in $G_A$ can be replaced by the set $V(P_u) \cup V(P_v)$ in $G$, again resulting in an $S$-rooted $K_4$-minor in $G$, a contradiction. Hence, $G_A$ is $4$-colorable, contains no $S_A$-rooted $K_4$-minor and is smaller than $G$. By minimality of $G$ there must exist a proper coloring $c_A:V(G_A) \rightarrow [4]$ in which no vertex in $S_A$ receives color $1$. Let $\pi \in S_4$ be a permutation with $\pi(c(u))=\pi(c(v))=c_A(x_{uv})$ and such that $\pi(c(s)) \neq 1$ in the case that $S \setminus A \neq \emptyset$. The existence of such a $\pi$ can be seen as follows: Either we have $c(s)=c(u)=c(v)$, in which case any choice of $\pi$ with $\pi(c(u))=\pi(c(v))=c_A(x_{uv})$ automatically satisfies $\pi(c(s))=c_A(x_{uv})\neq 1$, since $x_{uv} \in S_A$ by definition. Otherwise, if $c(s) \neq c(u)=c(v)$, we have enough freedom to choose $\pi$ such that it maps $c(u)=c(v)$ to $c_A(x_{uv})$ and $c(s)$ to an element distinct from $1$. Having found the permutation $\pi$, we can see that the coloring $c^\ast$ of $G$, defined by $c^\ast(x):=c_A(x)$ if $x \in A \setminus B$, and $c^\ast(x):=\pi(c(x))$ if $x \in B$, forms a proper coloring of $G$ in which no vertex in $S$ receives color $1$. This is a contradiction to $S$ being colorful and shows that the subcase $c(u)=c(v)$ cannot occur. Next, let us consider the case $c(u) \neq c(v)$. We let $G_A$ be the graph obtained from $G[A]$ by adding an edge between $u$ and $v$ (if it does not already exist). Given our assumption on $c$, we have $\chi(G_A) \le 4$. We define a set $S_A\subseteq V(G_A)$ as follows: If $S \subseteq A$, then $S_A:=S$. If $S \setminus A \neq \emptyset$, then we define $S_A:=S \setminus \{s\}$ if $c(s) \notin \{c(u),c(v)\}$, $S_A:=(S \setminus \{s\}) \cup \{u\}$ if $c(s)=c(u)$ and $S_A:=(S \setminus \{s\}) \cup \{v\}$ if $c(s)=c(v)$. We claim that in each case, $G_A$ does not contain an $S_A$-rooted $K_4$-minor. Suppose towards a contradiction such a rooted minor $(B_i)_{i=1}^4$ exists in $G_A$. If $S \subseteq A$, or $S \setminus A \neq \emptyset$ and $c(s) \notin \{c(u),c(v)\}$, then $S_A \subseteq S$, so that we can replace any branch-set $B_i$ containing $u$ by the set $B_i \cup (V(P)\setminus \{v\})$ to obtain an $S$-rooted $K_4$-minor in $G$.
If $S \setminus A \neq \emptyset$ and $c(s)=c(u)$, we replace any branch-set $B_i$ containing $u$ by $B_i \cup ((V(P_u) \cup V(P_v))\setminus \{v\})$ to obtain an $S$-rooted $K_4$-minor in $G$. Finally, if $S \setminus A \neq \emptyset$ and $c(s)=c(v)$, we replace any branch-set $B_i$ containing $v$ by $B_i \cup ((V(P_u) \cup V(P_v))\setminus \{u\})$ to obtain an $S$-rooted $K_4$-minor in $G$. As an $S$-rooted $K_4$-minor in $G$ does not exist by our initial assumption, we find that there is indeed no $S_A$-rooted $K_4$-minor in $G_A$. Using the minimality of $G$ we conclude the existence of a proper $4$-coloring $c_A:V(G_A) \rightarrow [4]$ of $G_A$ in which no vertex in $S_A$ receives color $1$. Note that $c_A(u)\neq c_A(v)$. Let $\pi:[4]\rightarrow [4]$ be a permutation such that $\pi(c(u))=c_A(u), \pi(c(v))=c_A(v)$ and $\pi(c(s)) \neq 1$ in the case $S\setminus A \neq \emptyset$. The existence of $\pi$ can be easily seen as follows: If $S \subseteq A$, this is obvious, as we just have to map two distinct colors to a set of two other distinct colors. If $S \setminus A \neq \emptyset$ and $c(s) \notin \{c(u),c(v)\}$, we can use the range of $4$ colors to map $c(u)$ to $c_A(u)$, $c(v)$ to $c_A(v)$ and $c(s)$ to an element of $[4]\setminus\{1,c_A(u),c_A(v)\}$. If $S\setminus A \neq \emptyset$ and $c(s)=c(u)$, then $u \in S_A$ ensures that $\pi(c(s))=\pi(c(u))=c_A(u)\neq 1$ for any $\pi$ that maps $c(u)$ to $c_A(u)$ and $c(v)$ to $c_A(v)$. A symmetric argument works in the case $c(s)=c(v)$. Having found the permutation $\pi$, consider the $[4]$-coloring $c^\ast$ of $G$ defined as $c^\ast(x):=c_A(x)$ if $x \in A \setminus B$, and $c^\ast(x):=\pi(c(x))$ if $x \in B$. This coloring is proper and no vertex in $S$ is assigned color $1$. Again, this contradicts $S$ being colorful and shows that also the assumption $c(u)\neq c(v)$ leads to a contradiction. Since both assumptions on the colors of $u$ and $v$ eventually yield a contradiction, we conclude that Case~2 cannot occur.

We found that neither Case~1 nor Case~2 is feasible. Thus our initial assumption on the existence of a separation of order $2$ in $G$ must have been wrong, and $G$ is a $3$-connected graph. \end{claimproof}

\begin{claim} $S$ is spread out in $G$. \end{claim}

\begin{claimproof} Towards a contradiction, suppose there exists a separation $(A,B)$ in $G$ of order $3$ such that $S \subseteq A$. Suppose first that $|B \setminus A|=1$. Writing $B \setminus A=\{x\}$, we see that all neighbors of the vertex $x$ in $G$ are members of the separator $A \cap B$ of size $3$. Hence $x$ has degree at most $3$ in $G$. By our initial assumption on $(A,B)$, we have $x \notin S$. Now, consider the graph $G-x$. By minimality of $G$, there exists a proper $[4]$-coloring of $G-x$ in which no vertex in $S$ receives color $1$. We may extend this to a proper coloring of $G$ by picking a color for $x$ that does not appear on its neighborhood. In this coloring of $G$ still no vertex in $S$ receives color $1$, contradicting that $S$ is colorful. Next, suppose $|B \setminus A|\ge 2$. Write $\{d_1,d_2,d_3\}=A \cap B$ for the vertices in the separator induced by $(A,B)$. Note that since $G$ is $3$-connected, by Menger's theorem for every $b \in B \setminus A$ there exist $3$ internally disjoint paths connecting $b$ to $d_1, d_2$ and $d_3$ respectively. Clearly, these paths have to be fully contained in $G[B]$. We now claim that for every vertex $v \in B$, at least two of $d_1,d_2,d_3$ are contained together in a common connected component of $G[B]-v$.
Indeed, pick a vertex $b \in (B\setminus A)\setminus\{v\}$. Since $b$ is connected to $d_1,d_2,d_3$ by three internally disjoint paths in $G[B]$, at least two of these paths still exist in $G[B]-v$ and thus certify that their endpoints in $\{d_1,d_2,d_3\}$ are contained in a common connected component of $G[B]-v$. Now Lemma~\ref{lemma:K3minor} implies the existence of a $K_3$-minor in $G[B]$ rooted at $A \cap B$. Let $(D_i)_{i=1}^{3}$ be the branch-sets of such a rooted $K_3$-minor in $G[B]$ with $d_i \in D_i, i=1,2,3$. Moving on, fix any proper $4$-coloring $c:V(G) \rightarrow [4]$ of $G$. W.l.o.g. (possibly after relabelling colors and vertices) we may assume that one of the following three cases holds: (1) $c(d_1)=c(d_2)=c(d_3)=1$, (2) $c(d_1)=c(d_2)=1, c(d_3)=2$, (3) $c(d_1)=1, c(d_2)=2, c(d_3)=3$.

\textbf{Case 1.} $c(d_1)=c(d_2)=c(d_3)=1$. Let $G_A$ be defined as the graph obtained from $G[A]$ by identifying $d_1, d_2, d_3$ into a single vertex $d_A$. Let $S_A \subseteq V(G_A)$ be defined as $S_A:=S$ if $S \cap \{d_1,d_2,d_3\}=\emptyset$ and $S_A:=(S \setminus \{d_1,d_2,d_3\}) \cup \{d_A\}$ if $S \cap \{d_1,d_2,d_3\} \neq \emptyset$. Note that $c$ induces a proper $4$-coloring of $G_A$ by assigning color $1$ to $d_A$. Further, $G_A$ contains no $S_A$-rooted $K_4$-minor: Towards a contradiction, suppose that $(B_i)_{i=1}^{4}$ form the branch-sets of an $S_A$-rooted $K_4$-minor in $G_A$. Either $d_A \notin \bigcup_{i=1}^{4}{B_i}$, in which case $(B_i)_{i=1}^{4}$ already form an $S$-rooted $K_4$-minor in $G$, a contradiction; or we have $d_A \in B_i$ for some $i \in [4]$ (w.l.o.g. $d_A \in B_1$), and then $((B_1\setminus \{d_A\}) \cup D_1 \cup D_2 \cup D_3, B_2, B_3, B_4)$ are easily seen to form the branch-sets of an $S$-rooted $K_4$-minor in $G$, again a contradiction. The facts that $\chi(G_A)\le 4$, that $G_A$ contains no $S_A$-rooted $K_4$-minor, that $|V(G_A)|<|V(G)|$ and the minimality assumption on $G$ now imply that $G_A$ admits a proper $4$-coloring $c_A:V(G_A)\rightarrow [4]$ such that no vertex in $S_A$ receives color $j$, for some $j \in [4]$. Possibly after permuting colors we may assume w.l.o.g. that $c_A(d_A)=1$. But then the coloring $c^\ast:V(G)\rightarrow [4]$ defined by $c^\ast(x):=c_A(x)$ for every $x \in A \setminus B$ and $c^\ast(x):=c(x)$ for every $x \in B$ forms a proper $4$-coloring of $G$ in which no vertex in $S$ receives color $j$, a contradiction to $S$ being colorful. This shows that Case~1 cannot occur.

\textbf{Case 2.} $c(d_1)=c(d_2)=1, c(d_3)=2$. In this case, we define $G_A$ as the graph obtained from $G[A]$ by identifying only $d_1$ and $d_2$ into a new vertex $d_A$ and adding an edge between $d_A$ and $d_3$ (if it does not already exist). Again it is easy to see that the coloring $c$ induces a proper $4$-coloring of $G_A$ by assigning color $1$ to $d_A$. Let $S_A:=S$ if $S \cap \{d_1,d_2\}=\emptyset$ and $S_A:=(S \setminus \{d_1,d_2\}) \cup \{d_A\}$ otherwise. We claim that there is no $S_A$-rooted $K_4$-minor in $G_A$. Towards a contradiction, suppose such a minor were described by branch-sets $(B_i)_{i=1}^{4}$. For each $i \in [4]$, set $$B_i^\ast:=\begin{cases}B_i, & \text{if }B_i \cap \{d_A,d_3\}=\emptyset, \cr (B_i\setminus \{d_A\}) \cup (D_1 \cup D_2), & \text{if }B_i \cap \{d_A,d_3\}=\{d_A\}, \cr (B_i\setminus \{d_3\}) \cup D_3, & \text{if }B_i \cap \{d_A,d_3\}=\{d_3\}, \cr (B_i\setminus \{d_A,d_3\}) \cup (D_1 \cup D_2 \cup D_3), & \text{if } B_i \supseteq \{d_A,d_3\}. \end{cases}$$ It is then easy to see that $(B_i^\ast)_{i=1}^{4}$ form an $S$-rooted $K_4$-minor in $G$, a contradiction.
Given that $G_A$ is a $4$-colorable graph with no $S_A$-rooted $K_4$-minor which is smaller than $G$, there must exist a proper $4$-coloring $c_A:V(G_A)\rightarrow [4]$ of $G_A$ such that for some $j \in [4]$ no vertex in $S_A$ receives color $j$. Since necessarily $c_A(d_A) \neq c_A(d_3)$ in this coloring, we may assume w.l.o.g. (possibly after permuting colors) that $c_A(d_A)=1, c_A(d_3)=2$. But then it is easy to see that $c^\ast:V(G) \rightarrow [4]$, defined by $c^\ast(x):=c_A(x)$ for every $x \in A\setminus B$ and $c^\ast(x):=c(x)$ for every $x \in B$, forms a proper coloring of $G$ such that no vertex in $S$ receives color $j$. This is a contradiction to $S$ being colorful and shows that Case~2 cannot occur either.

\textbf{Case 3.} $c(d_1)=1, c(d_2)=2, c(d_3)=3$. In this final case we define a graph $G_A$ on the vertex set $A$ simply by adding all three edges $d_1d_2, d_1d_3, d_2d_3$ to $G[A]$ (if they do not exist already). Since $c$ induces a proper coloring also with the added edges, we have $\chi(G_A)\le 4$. We further claim that $G_A$ has no $S$-rooted $K_4$-minor. Indeed, if it did, then we could easily obtain a $K_4$-minor rooted at $S$ in $G$ by replacing each occurrence of a vertex $d_i$ in one of its branch-sets by the full set $D_i\subseteq B$ in $G$. Hence, by the minimality of $G$, we find that there is a proper $[4]$-coloring $c_A$ of $G_A$ in which no vertex in $S$ receives color $j$, for some $j \in [4]$. Since $c_A(d_1),c_A(d_2),c_A(d_3)$ have to be pairwise different, we may assume w.l.o.g. that $c_A(d_i)=i$ for $i=1,2,3$. However, then clearly the coloring $c^\ast$ of $G$ which is the common extension of $c_A$ and the restriction of $c$ to $B$ is a proper $[4]$-coloring of $G$ in which no vertex in $S$ receives color $j$. This again is a contradiction to our assumption that $S$ is colorful in $G$. Thus, Case~3 cannot occur either.

As we have reached contradictions in all $3$ cases, we conclude that our initial assumption on the existence of a $3$-separation $(A,B)$ with $S \subseteq A$ was wrong. This proves the claim; $S$ is indeed spread out. \end{claimproof}

Having established in the previous claims that $G$ is a $3$-connected graph and that $S$ is spread out in $G$, and since $G$ has no $S$-rooted $K_4$-minor by assumption, we may now apply Lemma~\ref{lma:rootyness} to $G$ (note that $|S|\ge 4$ holds trivially, for otherwise $S$ would not be colorful). We thus find that $G^a(S)$ (the graph obtained from $G$ by adding a new vertex adjacent to all elements of $S$) is planar. But then the $4$-color theorem guarantees the existence of a proper $4$-coloring of $G^a(S)$. In the induced proper $4$-coloring of $G$, no vertex in $S$ can have the color assigned to the additional vertex in $G^a(S)$. This is a contradiction to $S$ being colorful. This contradiction shows that our initial assumption on the existence of a smallest counterexample $G$ was wrong, and concludes the proof of Theorem~\ref{thm:main}. \end{proof} \end{document}
\begin{document} \title{\textbf{PBW bases for some 3-dimensional skew polynomial algebras}} \maketitle \begin{abstract} \noindent The aim of this paper is to establish necessary and sufficient algorithmic conditions to guarantee that an algebra is actually a 3-dimensional skew polynomial algebra in the sense of Bell and Smith \cite{SmithBell}. \noindent \textit{Key words and phrases:} skew polynomial algebra, diamond lemma, skew PBW extension. \noindent 2010 \textit{Mathematics Subject Classification.} 16S36, 16S32, 16S30. \end{abstract} \section{Introduction} In the study of commutative and non-commutative algebras, it is important to specify a PBW (Poincar\'e-Birkhoff-Witt) basis for each of them, since this allows us to characterize several properties with physical and mathematical meaning. This fact can be appreciated in several works. For instance, the PBW theorem for the universal enveloping algebra of a Lie algebra \cite{Dixmier1996}; the PBW theorem for quantized universal enveloping algebras \cite{Yamane1989}; the quantum PBW theorem for a wide class of associative algebras \cite{Berger1992}; PBW bases for quantum groups using the notion of Hopf algebra \cite{Ringel1996}, and others. With all these results in mind, in this article we wish to investigate a criterion and some algorithms which decide whether a given ring with some variables and relations can be expressed as a {\em 3-dimensional skew polynomial algebra} defined by Bell and Smith \cite{SmithBell} (Definition \ref{3dimensionaldimension}). We follow the original ideas by Bergman in \cite{Bergman1978} and the treatments established by Bueso et al. \cite{BuesoGT2003} and Reyes \cite{ReyesPhD}. \\ The paper is organized as follows. Section \ref{Some3dimensionall} contains the algebras of interest for us in this paper, the 3-dimensional skew polynomial algebras. We recall their definition (Definition \ref{3dimensionaldimension}) and their classification (Proposition \ref{3-dimensionalClassification}). Section \ref{sectionDiamondLemma} treats the definitions and preliminary results with the aim of establishing the important result of this paper (Theorem \ref{GomezTorrecillasTheorem3.21}). In Section \ref{SkewPoincareBirkhoffWittTheorem} we establish the algorithms which allow us to decide whether an algebraic structure, defined by variables and relations between them, can be considered as a 3-dimensional skew polynomial algebra (expressions (\ref{gordito}) - (\ref{pss6})). Finally, in Section \ref{examples} we present some examples which illustrate the results obtained in Section \ref{sectionDiamondLemma} and the algorithms formulated in Section \ref{SkewPoincareBirkhoffWittTheorem}.\\ Throughout this paper the letter $\Bbbk$ will denote a field. \section{3-dimensional skew polynomial algebras}\label{Some3dimensionall} The universal enveloping algebra $\mbox{${\cal U}$}(\mathfrak{sl}(2,\Bbbk))$ of the Lie algebra $\mathfrak{sl}(2,\Bbbk)$, the dispin algebra $\mbox{${\cal U}$}(osp(1,2))$ and Woronowicz's algebra $\mbox{${\cal W}$}_{\nu}(\mathfrak{sl}(2,\Bbbk))$ (see Examples \ref{chapeco}) are examples of algebras classified by Bell and Smith in \cite{SmithBell}, which are known as \textit{3-dimensional skew polynomial algebras}. These algebras are particular examples of a more general family of non-commutative rings known as {\em skew PBW extensions} or $\sigma$-{\em PBW extensions}.
For these extensions several pro\-per\-ties have been characterized (for example, Noetherianness, regularity, Serre's Theorem, global homological, Krull, Goldie and Gelfand-Kirillov dimensions, Auslander's regularity, prime ideals, incomparability and prime length of prime ideals, higher algebraic $K$-theory, cyclic homology, Armendariz, Baer, quasi-Baer, p.p. and p.q.-Baer, and Koszul pro\-per\-ties, and other ring and module theoretical pro\-per\-ties, cf. \cite{LezamaReyes2014}, \cite{LezamaAcostaReyes2015}, \cite{ReyesPhD}, \cite{Reyes2013}, \cite{Reyes2014}, \cite{Reyes2014UIS}, \cite{Reyes2015}, \cite{Reyes2018}, \cite{ReyesSuarez2016a}, \cite{ReyesSuarez2016b}, \cite{ReyesSuarez2016c}, \cite{ReyesSuarezClifford2017}, \cite{ReyesSuarezskewCY2017}, \cite{ReyesSuarezUMA2018}, \cite{ReyesYesica}, \cite{SuarezReyes2016}, \cite{SuarezReyesgenerKoszul2017} and others), which means that all these properties have also been investigated for 3-dimensional skew polynomial algebras. Nevertheless, since by definition (see Definition \ref{3dimensionaldimension} below) these algebras are required to have a PBW basis, we consider it important to establish necessary and sufficient algorithmic conditions to guarantee that an algebra defined by generators and relations is precisely one of these skew polynomial algebras (this is done in Sections \ref{sectionDiamondLemma} and \ref{SkewPoincareBirkhoffWittTheorem}), so that all of the results above can then be applied. With this objective in mind, we start by recalling their definition and their characterization. \begin{definition}[\cite{SmithBell}; \cite{Rosenberg1995}, Definition C4.3]\label{3dimensionaldimension} \textit{A 3-dimensional skew polynomial algebra $\mbox{${\cal A}$}$} is a $\Bbbk$-algebra generated by the variables $x,y,z$ subject to the relations $ yz-\alpha zy=\lambda,\ zx-\beta xz=\mu$, and $xy-\gamma yx=\nu$, such that \begin{enumerate} \item $\lambda, \mu, \nu\in \Bbbk+\Bbbk x+\Bbbk y+\Bbbk z$, and $\alpha, \beta, \gamma \in \Bbbk^{*}$; \item the standard monomials $\{x^iy^jz^l\mid i,j,l\ge 0\}$ form a $\Bbbk$-basis of the algebra. \end{enumerate} \end{definition} \begin{remark}\label{Gererere} If we consider the variables $x_1:=x,\ x_2:= y,\ x_3:=z$, then the relations established in Definition \ref{3dimensionaldimension} can be formulated in the following way: \begin{align*} x_3x_2 - \alpha^{-1} x_2x_3 = &\ r_0^{(2,3)} + r_1^{(2,3)}x_1 + r_2^{(2,3)}x_2 + r_3^{(2,3)}x_3,\\ x_3x_1 - \beta x_1x_3 = &\ r_0^{(1,3)} + r_1^{(1,3)}x_1 + r_2^{(1,3)}x_2 + r_3^{(1,3)}x_3,\\ x_2x_1 - \gamma^{-1}x_1x_2 = &\ r_0^{(1,2)} + r_1^{(1,2)}x_1 + r_2^{(1,2)}x_2 + r_3^{(1,2)}x_3, \end{align*} where the coefficients $r_k^{(i,j)}$ belong to the field $\Bbbk$. \end{remark} The next proposition establishes a classification of 3-dimensional skew polynomial algebras. \begin{proposition}[\cite{Rosenberg1995}, Theorem C.4.3.1, 2.5 in \cite{SmithBell}]\label{3-dimensionalClassification} If $\mbox{${\cal A}$}$ is a 3-dimensional skew polynomial algebra, then $\mbox{${\cal A}$}$ is one of the following algebras: \begin{enumerate} \item [\rm (a)] if $|\{\alpha, \beta, \gamma\}|=3$, then $\mbox{${\cal A}$}$ is defined by the relations $yz-\alpha zy=0,\ zx-\beta xz=0,\ xy-\gamma yx=0$.
\item [\rm (b)] if $|\{\alpha, \beta, \gamma\}|=2$ and $\beta\neq \alpha =\gamma =1$, then $\mbox{${\cal A}$}$ is one of the following algebras: \begin{enumerate} \item [\rm (i)] $yz-zy=z,\ \ \ zx-\beta xz=y,\ \ \ xy-yx=x${\rm ;} \item [\rm (ii)] $yz-zy=z,\ \ \ zx-\beta xz=b,\ \ \ xy-yx=x${\rm ;} \item [\rm (iii)] $yz-zy=0,\ \ \ zx-\beta xz=y,\ \ \ xy-yx=0${\rm ;} \item [\rm (iv)] $yz-zy=0,\ \ \ zx-\beta xz=b,\ \ \ xy-yx=0${\rm ;} \item [\rm (v)] $yz-zy=az,\ \ \ zx-\beta xz=0,\ \ \ xy-yx=x${\rm ;} \item [\rm (vi)] $yz-zy=z,\ \ \ zx-\beta xz=0,\ \ \ xy-yx=0$, \end{enumerate} where $a, b$ are any elements of $\Bbbk$. All nonzero values of $b$ give isomorphic algebras. \item [\rm (c)] If $|\{\alpha, \beta, \gamma\}|=2$ and $\beta\neq \alpha=\gamma\neq 1$, then $\mbox{${\cal A}$}$ is one of the following algebras: \begin{enumerate} \item [\rm (i)] $yz-\alpha zy=0,\ \ \ zx-\beta xz=y+b,\ \ \ xy-\alpha yx=0${\rm ;} \item [\rm (ii)] $yz-\alpha zy=0,\ \ \ zx-\beta xz=b,\ \ \ xy-\alpha yx=0$. \end{enumerate} In this case, $b$ is an arbitrary element of $\Bbbk$. Again, any nonzero values of $b$ give isomorphic algebras. \item [\rm (d)] If $\alpha=\beta=\gamma\neq 1$, then $\mbox{${\cal A}$}$ is the algebra defined by the relations $yz-\alpha zy=a_1x+b_1,\ zx-\alpha xz=a_2y+b_2,\ xy-\alpha yx=a_3z+b_3$. If $a_i=0\ (i=1,2,3)$, then all nonzero values of $b_i$ give isomorphic algebras. \item [\rm (e)] If $\alpha=\beta=\gamma=1$, then $\mbox{${\cal A}$}$ is isomorphic to one of the following algebras: \begin{enumerate} \item [\rm (i)] $yz-zy=x,\ \ \ zx-xz=y,\ \ \ xy-yx=z${\rm ;} \item [\rm (ii)] $yz-zy=0,\ \ \ zx-xz=0,\ \ \ xy-yx=z${\rm ;} \item [\rm (iii)] $yz-zy=0,\ \ \ zx-xz=0,\ \ \ xy-yx=b${\rm ;} \item [\rm (iv)] $yz-zy=-y,\ \ \ zx-xz=x+y,\ \ \ xy-yx=0${\rm ;} \item [\rm (v)] $yz-zy=az,\ \ \ zx-xz=z,\ \ \ xy-yx=0${\rm ;} \end{enumerate} Parameters $a,b\in \Bbbk$ are arbitrary, and all nonzero values of $b$ generate isomorphic algebras. \end{enumerate} \end{proposition} \section{Diamond lemma and PBW bases}\label{sectionDiamondLemma} Bergman's Diamond Lemma \cite{Bergman1978} provides a general method to prove that certain sets are bases of algebras which are defined in terms of generators and relations. For instance, the Poincar\'e-Birkhoff-Witt theorem, which first appeared for universal enveloping algebras of finite dimensional Lie algebras (see \cite{Dixmier1996} for a detailed treatment), can be derived from it. PBW theorems have been considered for several classes of commutative and noncommutative algebras (see \cite{Yamane1989}, \cite{Berger1992}, \cite{Ringel1996}, and others). With this in mind, in this section we establish a criterion and some algorithms which decide whether a given ring with some variables and relations can be expressed as a 3-dimensional skew polynomial algebra in the sense of Definition \ref{3dimensionaldimension}. We follow the original ideas presented by Bergman \cite{Bergman1978} and the treatments developed by Bueso et al. \cite{BuesoGT2003} and Reyes \cite{ReyesPhD}. \begin{definition}\label{definitionBergmanmm} \begin{enumerate} \item [\rm (i)] {\rm Let} $X$ {\rm be a non-empty set and denote by} $ \langle X\rangle$ {\rm and} $\Bbbk \langle X\rangle$ {\rm the free monoid on} $X$ {\rm and the free associative} $\Bbbk$-{\rm ring on} $X$, {\rm respectively. A subset} $Q\subseteq \langle X\rangle\times \Bbbk \langle X\rangle$ {\rm is called a} \textit{reduction system} {\rm for} $\Bbbk\langle X\rangle$.
{\rm An element} $\sigma=(W_{\sigma},f_{\sigma})\in Q$ {\rm has components} $W_{\sigma}$ {\rm a word in} $\langle X \rangle $ {\rm and} $f_{\sigma}$ {\rm a polynomial in} $\Bbbk\langle X\rangle$. {\rm Note that every reduction system for} $\Bbbk\langle X\rangle$ {\rm defines a factor ring} $A=\Bbbk\langle X\rangle/I_Q$, {\rm with} $I_Q$ {\rm the two-sided ideal of} $\Bbbk\langle X\rangle$ {\rm generated by the polynomials} $W_{\sigma}-f_{\sigma}$, {\rm with} $\sigma\in Q$. \item [\rm (ii)] {\rm If} $\sigma$ {\rm is an element of a reduction system} $Q$ {\rm and} $A,B\in \langle X\rangle$, {\rm the} $\Bbbk$-{\rm linear endomorphism} $r_{A\sigma B}:\Bbbk\langle X\rangle \to \Bbbk\langle X\rangle$, {\rm which fixes all elements in the basis} $\langle X\rangle$ {\rm different from} $AW_{\sigma}B$ {\rm and sends this particular element to} $Af_{\sigma}B$ {\rm is called a} \textit{reduction} {\rm for} $Q$. {\rm If} $r$ {\rm is a reduction and} $f\in \Bbbk\langle X\rangle$, {\rm then} $f$ {\rm and} $r(f)$ {\rm represent the same element in the} $\Bbbk$-{\rm ring} $\Bbbk\langle X\rangle/I_Q$. {\rm Thus, reductions may be viewed as rewriting rules in this factor ring.} \item [\rm (iii)] {\rm A reduction} $r_{A\sigma B}$ {\rm acts trivially on an element} $f\in \Bbbk\langle X\rangle$ {\rm if} $r_{A\sigma B}(f)=f$. {\rm An element} $f\in \Bbbk\langle X\rangle$ {\rm is said to be} irreducible {\rm under} $Q$ {\rm if all reductions act trivially on} $f$. {\rm Note that the set} $\Bbbk\langle X\rangle_{\rm irr}$ {\rm of all irreducible elements of} $\Bbbk\langle X\rangle$ {\rm under} $Q$ {\rm is a left submodule of} $\Bbbk\langle X\rangle$. \item [\rm (iv)] {\rm Let} $f$ {\rm be an element of} $\Bbbk\langle X\rangle$. {\rm We say that} $f$ \textit{reduces} {\rm to} $g\in \Bbbk\langle X \rangle$, {\rm if there is a finite sequence} $r_1,\dotsc,r_n$ {\rm of reductions such that} $g=(r_n\dotsb r_1)(f)$. {\rm We will write} $f\to_Q g$. {\rm A finite sequence of reductions} $r_1,\dotsc,r_n$ {\rm is said to be} final {\rm on} $f$, {\rm if} $(r_n\dotsb r_1)(f)\in \Bbbk\langle X\rangle_{\rm irr}$. \item [\rm (v)] {\rm An element} $f\in \Bbbk\langle X\rangle$ {\rm is said to be} reduction-finite, {\rm if for every infinite sequence} $r_1,r_2,\dotsc$ {\rm of reductions there exists some positive integer} $m$ {\rm such that} $r_i$ {\rm acts trivially on the element} $(r_{i-1}\dotsb r_1)(f)$, {\rm for every} $i>m$. {\rm If} $f$ {\rm is reduction-finite, then any maximal sequence of reductions} $r_1,\dotsc,r_n$ {\rm such that} $r_i$ {\rm acts non-trivially on the element} $(r_{i-1}\dotsb r_1)(f)$, {\rm for} $1\le i\le n$, {\rm will be finite}. {\rm Thus, every reduction-finite element reduces to an irreducible element. We remark that the set of all reduction-finite elements of} $\Bbbk\langle X\rangle$ {\rm is a left submodule of} $\Bbbk\langle X\rangle$. \item [\rm (vi)] {\rm An element} $f\in \Bbbk\langle X\rangle$ {\rm is said to be} \textit{reduction-unique} {\rm if it is reduction-finite and if its images under all final sequences of reductions coincide. This value is denoted by} $r_Q(f)$. \end{enumerate} \end{definition} \begin{proposition}[\cite{BuesoGT2003}, Lemma 3.13]\label{frooooooo} {\rm (i)} The set $\Bbbk\langle X\rangle_{\rm un}$ of re\-duc\-tion-unique elements of $\Bbbk\langle X\rangle$ is a left submodule, and $r_Q:\Bbbk\langle X\rangle_{\rm un}\to \Bbbk\langle X\rangle_{\rm irr}$ becomes an $\Bbbk$-linear map. 
{\rm (ii)} If $f,g,h\in \Bbbk\langle X\rangle$ are elements such that $ABC$ is reduction-unique for all terms $A,B,C$ occurring in $f,g,h$ respectively, then $fgh$ is re\-duc\-tion-unique. Moreover, if $r$ is any reduction, then $fr(g)h$ is reduction-unique and $r_Q(fr(g)h)=r_Q(fgh)$. \begin{proof} (i) Consider $f, g \in \Bbbk\langle X\rangle_{\rm un},\ \lambda\in \Bbbk$. We know that $\lambda f + g$ is reduction-finite. Let $r_1,\dotsc, r_m$ be a final sequence of reductions on this element, and write $r:=r_m\dotsb r_1$ for the composition. Using that $f$ is reduction-unique, there is a finite composition of reductions $r'$ such that $(r'r)(f) = r_Q(f)$, and in a similar way, a composition of reductions $r''$ such that $(r''r'r)(g) = r_Q(g)$. Since $r(\lambda f + g)\in \Bbbk\langle X\rangle_{\rm irr}$, we obtain $r(\lambda f + g) = (r''r'r)(\lambda f + g) = \lambda(r''r'r)(f) + (r''r'r)(g) = \lambda r_Q(f) + r_Q(g)$. Hence, the expression $r(\lambda f+g)$ is uniquely determined, and $\lambda f+g$ is reduction-unique. In fact, $r_Q(\lambda f+g) = \lambda r_Q(f) + r_Q(g)$, and therefore (i) is proved. (ii) From (i) we know that $fgh$ is reduction-unique. Consider $r=r_{D\sigma E}$, for $\sigma\in Q,\ D, E\in \langle X\rangle$. The idea is to show that $fr(g)h$ is reduction-unique and $r_Q(fr(g)h) = r_Q(fgh)$. Note that if $f, g, h$ are terms $A, B, C$, then $r_{AD\sigma EC}(ABC) = Ar_{D\sigma E}(B)C$, that is, $Ar_{D\sigma E}(B)C$ is reduction-unique with the equality $r_Q(ABC) = r_Q(Ar_{D\sigma E}(B)C)$. Now, more generally, write $f = \sum_{i}\lambda_i A_i,\ g=\sum_{j}\mu_j B_j,\ h=\sum_{k}\rho_kC_k$, where the indices $i, j, k$ run over finite sets, $\lambda_i, \mu_j, \rho_k\in \Bbbk$, and where $A_i, B_j, C_k$ are terms such that $A_iB_jC_k$ is reduction-unique for every $i, j, k$. In this way, $fr(g)h = \sum_{i,j,k} \lambda_i\mu_j\rho_k A_ir(B_j)C_k$. Finally, since every $A_iB_jC_k$ is reduction-unique and $r_Q(A_ir(B_j)C_k) = r_Q(A_iB_jC_k)$ for all $i, j, k$, it follows from (i) that $fr(g)h$ is reduction-unique and $r_Q(fr(g)h) = r_Q(fgh)$. \end{proof} \end{proposition} \begin{proposition}[\cite{BuesoGT2003}, Proposition 3.14] If every element $f\in \Bbbk\langle X\rangle$ is re\-duc\-tion-finite under a reduction system $Q$, and $I_Q$ is the ideal of $\Bbbk\langle X\rangle$ generated by the set $\{W_{\sigma}-f_{\sigma}\mid \sigma \in Q\}$, then $\Bbbk\langle X\rangle=\Bbbk\langle X\rangle_{irr}\oplus I_Q$ if and only if every element of $\Bbbk\langle X\rangle$ is reduction-unique. \begin{proof} Suppose that $\Bbbk\langle X\rangle = \Bbbk\langle X\rangle_{\rm irr}\oplus I_Q$ and consider $f\in \Bbbk\langle X\rangle$. Note that if $g, g'\in \Bbbk\langle X\rangle_{\rm irr}$ are elements to which $f$ reduces, then $g-g'\in \Bbbk\langle X\rangle_{\rm irr}\cap I_Q=\{0\}$, that is, $f$ is reduction-unique. Conversely, if every element of $\Bbbk\langle X\rangle$ is reduction-unique under $Q$, then $r_Q:\Bbbk\langle X\rangle \to \Bbbk\langle X\rangle_{\rm irr}$ is a $\Bbbk$-linear projection. Consider $f\in {\rm ker}(r_Q)$, that is, $r_Q(f)=0$. Then $f\in I_Q$ (since $f-r_Q(f)\in I_Q$ for every $f$), whence ${\rm ker}(r_Q)\subseteq I_Q$; but in fact ${\rm ker}(r_Q)$ contains $I_Q$: for every $\sigma\in Q, A, B\in \langle X\rangle$, we have $r_Q(A(W_{\sigma} - f_{\sigma})B) = r_Q(AW_{\sigma}B) - r_Q(Af_{\sigma}B) = 0$ from Proposition \ref{frooooooo}, when $r=r_{1\sigma 1}$.
\end{proof} \end{proposition} Under the previous assumptions, $A=\Bbbk\langle X\rangle/I_Q$ may be identified with the free left $\Bbbk$-module $\Bbbk\langle X\rangle_{\rm irr}$, with multiplication given by $f*g=r_Q(fg)$. \begin{definition} {\rm An} overlap ambiguity {\rm for} $Q$ {\rm is a} $5$-{\rm tuple} $(\sigma,\tau, A,B,C)$, {\rm where} $\sigma, \tau\in Q$ {\rm and} $A,B,C\in \langle X\rangle\ \backslash\ \{1\}$ {\rm such that} $W_{\sigma}=AB$ {\rm and} $W_{\tau}=BC$. {\rm This ambiguity is} solvable {\rm if there exist compositions of reductions} $r,r'$ {\rm such that} $r(f_{\sigma}C)=r'(Af_{\tau})$. {\rm Similarly, a} $5$-{\rm tuple} $(\sigma,\tau,A,B,C)$ {\rm with} $\sigma\neq \tau$ {\rm is called an} inclusion ambiguity {\rm if} $W_{\tau}=B$ {\rm and} $W_{\sigma}=ABC$. {\rm This ambiguity is solvable if there are compositions of reductions} $r,r'$ {\rm such that} $r(Af_{\tau}C)=r'(f_{\sigma})$. \end{definition} \begin{definition} {\rm A monomial partial order} $\le$ {\rm on} $\langle X\rangle$ {\rm is said to be} compatible {\rm with} $Q$ {\rm if} $f_{\sigma}$ {\rm is a linear combination of terms} $M$ {\rm with} $M<W_{\sigma}$, {\rm for all} $\sigma \in Q$. \end{definition} \begin{proposition}[\cite{BuesoGT2003}, Proposition 3.18]\label{GomezProposition3.18} If $\le$ is a monomial partial order on $\langle X\rangle$ satisfying the descending chain condition and compatible with a reduction system $Q$, then every element $f\in \Bbbk\langle X\rangle$ is reduction-finite. In particular, every element of $\Bbbk\langle X\rangle$ reduces under $Q$ to an irreducible element. \end{proposition} Let $\le$ be a monomial partial order on $\langle X\rangle$ compatible with the reduction system $Q$. Let $M$ be a term in $\langle X\rangle$ and write $Y_M$ for the submodule of $\Bbbk\langle X\rangle$ spanned by all polynomials of the form $A(W_{\sigma}-f_{\sigma})B$, where $A,B \in \langle X\rangle$ are such that $AW_{\sigma}B<M$. We will denote by $V_M$ the submodule of $\Bbbk\langle X \rangle$ spanned by all terms $M'< M$. Note that $Y_M\subseteq V_M$. \begin{definition} {\rm An overlap ambiguity} $(\sigma,\tau, A,B,C)$ {\rm is said to be} resolvable {\rm relative to} $\le$ {\rm if} $f_{\sigma}C-Af_{\tau}\in Y_{ABC}$. {\rm An inclusion ambiguity} $(\sigma,\tau,A,B,C)$ {\rm is said to be} resolvable {\rm relative to} $\le$ {\rm if} $Af_{\tau}C-f_{\sigma}\in Y_{ABC}$. \end{definition} If $r$ is a finite composition of reductions and $f$ belongs to $V_M$, then $f-r(f)\in Y_M$. Hence, $f\in Y_M$ if and only if $r(f)\in Y_M$ (\cite{Reyes2013}, Proposition 3.1.8).\\ From the results above we obtain the important theorem of this section. \begin{theorem}[Bergman's Diamond Lemma \cite{Bergman1978}; \cite{BuesoGT2003}, Theorem 3.21]\label{GomezTorrecillasTheorem3.21} Let $Q$ be a reduction system for the free associative $\Bbbk$-ring $\Bbbk\langle X\rangle$, and let $\le$ be a monomial partial order on $\langle X\rangle$, compatible with $Q$ and satisfying the descending chain condition. The following conditions are equivalent: {\rm (i)} all ambiguities of $Q$ are resolvable; {\rm (ii)} all ambiguities of $Q$ are resolvable relative to $\le$; {\rm (iii)} all elements of $\Bbbk\langle X\rangle$ are reduction-unique under $Q$; {\rm (iv)} $\Bbbk\langle X\rangle=\Bbbk\langle X\rangle_{\rm irr}\oplus I_Q$.
\end{theorem} \section{Algorithms}\label{SkewPoincareBirkhoffWittTheorem} Throughout this section we consider the degree lexicographical order $\preceq_{\rm deglex}$ defined on the variables $x_1,\dotsc,x_n$. \begin{definition} {\rm A reduction system} $Q$ {\rm for the free associative} $\Bbbk$-{\rm ring} $\Bbbk\langle x_1,\dotsc,x_n\rangle$ {\rm is said to be a} $\preceq_{\rm deglex}$-\textit{skew reduction system} {\rm if the following conditions hold:} {\rm (i)} $Q=\{(W_{ji}, f_{ji})\mid 1\le i< j\le n\}$; {\rm (ii)} {\rm for every} $j>i$, $W_{ji}=x_jx_i$ {\rm and} $f_{ji}=c_{i,j}x_ix_j+p_{ji}$, {\rm where} $c_{i,j}\in \Bbbk\ \backslash\ \{0\}$ {\rm and} $p_{ji}\in \Bbbk\langle x_1,\dotsc,x_n\rangle$; {\rm (iii)} {\rm for each} $j>i$, ${\rm lm}(p_{ji}) \preceq_{\rm deglex} x_ix_j$. {\rm We denote this type of reduction system by} $(Q,\preceq_{\rm deglex})$. \end{definition} Note that if $0\neq p= \sum_{\alpha}r_{\alpha}x^{\alpha}$, $r_{\alpha}\in \Bbbk$, we consider its \textit{Newton diagram} as $ \mbox{${\cal N}$}(p):=\{\alpha\in \mathbb{N}^{n}\mid r_{\alpha}\neq 0\}$, and let ${\rm exp}(p):={\rm max}\ \mbox{${\cal N}$}(p)$. Since conditions (ii) and (iii) guarantee that $\preceq_{\rm deglex}$ is compatible with $Q$, Proposition \ref{GomezProposition3.18} implies that every element $f\in \Bbbk\langle x_1,\dotsc,x_n\rangle$ reduces under $Q$ to an irreducible element. Let $I_Q$ be the two-sided ideal of $\Bbbk\langle x_1,\dotsc,x_n\rangle $ generated by $W_{ji}-f_{ji}$, for $1\le i< j\le n$. If the coset $x_i+I_Q$ is again denoted by $x_i$, for each $1\le i\le n$, then the images in $A$ of the monomials $x_1^{\alpha_1}\dotsb x_n^{\alpha_n}$, $\alpha\in\mathbb{N}^{n}$, are called \textit{standard terms} in $A$. Proposition \ref{GomezProposition4.3} below shows that any polynomial reduces under $Q$ to some standard polynomial and hence standard terms in $A$ generate this algebra as a left free $\Bbbk$-module. \begin{proposition}[\cite{BuesoGT2003}, Lemma 4.2]\label{GomezLemma4.2} If $(Q,\preceq_{\rm deglex})$ is a skew reduction system, then the set $\Bbbk\langle x_1,\dotsc,x_n\rangle_{irr}$ is the left submodule of $\Bbbk\langle x_1,\dotsc,x_n\rangle$ consisting of all standard polynomials $f\in \Bbbk\langle x_1,\dotsc,x_n\rangle$. \begin{proof} It is clear that every standard term is irreducible. Now, let us see that if a monomial $M=\lambda x_{j_1}\dotsb x_{j_s}$ is not standard, then some reduction will act non-trivially on it. If $s<2$ the monomial is clearly standard, and this is also true if $j_k\le j_{k+1}$ for every $1\le k\le s-1$. So let $s\ge 2$ and suppose $M$ is not standard. Then there exists $k$ such that $j_k>j_{k+1}$ and $M=Cx_jx_iB=CW_{ji}B$, where $j=j_k$, $i=j_{k+1}$ and where $C$ and $B$ are terms. Then the reduction $CW_{ji}B\to_Q Cf_{ji}B$ acts non-trivially on $M$. \end{proof} \end{proposition} \begin{proposition}[\cite{BuesoGT2003}, Proposition 4.3]\label{GomezProposition4.3} If $(Q,\preceq_{\rm deglex})$ is a skew reduction system for $\Bbbk\langle x_1,\dotsc,x_n\rangle$, then every element of $\Bbbk\langle x_1,\dotsc,x_n\rangle$ reduces under $Q$ to a standard polynomial. Thus the standard terms in $A=\Bbbk\langle x_1,\dotsc,x_n\rangle/I_Q$ span $A$ as a left free module over $\Bbbk$. \begin{proof} It follows from Proposition \ref{GomezLemma4.2} and Proposition \ref{GomezProposition3.18}. \end{proof} \end{proposition} Next, we present an algorithm to reduce any polynomial in $\Bbbk\langle x_1,\dotsc,x_n\rangle$ to its standard representation modulo $I_Q$. The basic step in this algorithm is the reduction of terms to polynomials of smaller leading term.
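To illustrate this basic step with a toy computation (ours, not taken from \cite{BuesoGT2003}), take $n=3$, the order $x_1\prec x_2\prec x_3$, and a rule of the form $x_2x_1\to f_{21}=x_1x_2-x_1$. The non-standard monomial $x_3x_2x_1$ contains the descent $x_2x_1$, and the corresponding reduction replaces it by $x_3x_1x_2-x_3x_1$; both resulting terms are strictly smaller than $x_3x_2x_1$ with respect to $\preceq_{\rm deglex}$, which is why iterating such replacements terminates (Proposition \ref{GomezProposition3.18}) and eventually yields a standard polynomial (Proposition \ref{GomezProposition4.3}).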
In the proof of Proposition \ref{GomezLemma4.2} we can choose $k$ to be the least integer such that $j_k>j_{k+1}$, thus yielding a procedure that assigns to every non-standard monomial $\lambda M$ a reduction, denoted ${\rm red}$, that acts non-trivially on $M$. In this way, the linear map ${\rm red}:\Bbbk\langle x_1,\dotsc, x_n\rangle \to \Bbbk\langle x_1,\dotsc, x_n\rangle $ depends on $M$. However, the following procedure is an algorithm. {\scriptsize{ \begin{center} \begin{tabular}{|p{10.5cm}|}\hline \textsf{}\\ \centerline{\textbf{Algorithm: Monomial reduction algorithm}}\label{algorithm1}\\ \setlength{\parindent}{1cm}\textbf{INPUT:} $M=\lambda x_{j_1}\dotsb x_{j_r}$ a non-standard monomial.\\ \setlength{\parindent}{1cm}\textbf{OUTPUT:} $p={\rm red}(M)$, a reduction under $Q$ of the monomial $M$\\ \setlength{\parindent}{1cm}\textbf{INITIALIZATION:} $k=1, C=\lambda$\\ \ \ \ \ \ \ \ \ \ \ \ \ \setlength{\parindent}{1cm}\textbf{WHILE} $j_k \le j_{k+1}$ \textbf{DO}\\ \ \ \ \ \ \ \ \ \ \ \ \ $C=Cx_{j_k}$\\ \ \ \ \ \ \ \ \ \ \ \ \ $k=k+1$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textbf{IF} $k+2\le r$ \textbf{THEN}\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $B=x_{j_{k+2}}\dotsb x_{j_r}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textbf{ELSE}\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $B=1$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $j=j_k$, $i=j_{k+1}$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $p=Cf_{j,i}B$. \\ \hline \end{tabular} \end{center}}} An element $f\in \Bbbk\langle x_1,\dotsc, x_n\rangle$ is called \textit{normal} if $ {\rm deg}(X_t)\preceq_{\rm deglex} {\rm deg}({\rm lt}(f))$, for every term $X_t\neq {\rm lt}(f)$ in $f$. \begin{proposition}[\cite{BuesoGT2003}, Proposition 4.5]\label{GomezTorrecillasProposition4.5} If $(Q,\preceq_{\rm deglex})$ is a skew reduction system, then there exists a $\Bbbk$-linear map ${\rm stred}_Q:\Bbbk\langle x_1,\dotsc, x_n\rangle \to \Bbbk\langle x_1,\dotsc, x_n\rangle_{ \rm irr}$ satisfying the following conditions: {\rm (i)} for every element $f$ of $\Bbbk\langle x_1,\dotsc, x_n\rangle$, there exists a finite sequence $r_1,\dotsc$, $r_m$ of reductions such that ${\rm stred}_Q(f)=(r_m\dotsb r_1)(f)$; {\rm (ii)} if $f$ is normal, then we obtain ${\rm deg}({\rm lm}(f))={\rm deg}({\rm lm}({\rm stred}_Q(f)))$. \end{proposition} From the proof of Proposition \ref{GomezTorrecillasProposition4.5} we obtain the next algorithm. Remark \ref{valverde} and Theorem \ref{GomezTorrecillas2Theorem 4.7} are the key results connecting this section with 3-dimensional skew polynomial algebras. \\ {\scriptsize{\begin{center} \begin{tabular}{|p{10.2cm}|}\hline \textsf{} \centerline{\textbf{Algorithm: Reduction to standard form algorithm}} \label{algorithm2}\\ \setlength{\parindent}{1cm}\textbf{INPUT:} $f$ a non-standard polynomial.\\ \setlength{\parindent}{1cm}\textbf{OUTPUT:} $g={\rm stred}_Q(f)$ a standard reduction under $Q$ of $f$\\ \setlength{\parindent}{1cm}\textbf{INITIALIZATION:} $g=0$\\ \ \ \ \ \ \ \ \ \ \ \ \setlength{\parindent}{1cm}\textbf{WHILE} $f\neq 0$ \textbf{DO}\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textbf{IF} ${\rm lm}(f)$ is standard \textbf{THEN}\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $g=g+{\rm lm}(f)$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $f=f-{\rm lm}(f)$\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textbf{ELSE}\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $f=f-{\rm lm}(f)+{\rm red}({\rm lm}(f))$.
\\ \hline \end{tabular} \end{center}}} \begin{remark}\label{valverde} A free left $\Bbbk$-module $A$ is a 3-dimensional skew polynomial algebra with respect to $\preceq_{\rm deglex}$ if and only if it is isomorphic to $\Bbbk\langle x_1,\dotsc,x_n\rangle /I_Q$, where $Q$ is a skew reduction system with respect to $\preceq_{\rm deglex}$. \end{remark} By Theorem \ref{GomezTorrecillasTheorem3.21}, when all ambiguities of $Q$ are resolvable, the set of all standard terms forms a $\Bbbk$-basis for $A=\Bbbk\langle x_1,\dotsc, x_n\rangle/I_Q$. We have the following key result: \begin{theorem}[\cite{BuesoGT2003}, Theorem 4.7]\label{GomezTorrecillas2Theorem 4.7} Let $(Q,\preceq_{\rm deglex})$ be a skew reduction system on $\Bbbk\langle x_1,\dotsc, x_n\rangle$ and let $A=\Bbbk\langle x_1,\dotsc,x_n\rangle/I_Q$. For $1\le i<j<k\le n$, let $g_{kji}, h_{kji}$ be elements in $\Bbbk\langle x_1,\dotsc, x_n\rangle$ such that $x_kf_{ji}$ {\rm (}resp. $f_{kj}x_i${\rm )} reduces to $g_{kji}$ {\rm (}resp. $h_{kji}${\rm )} under $Q$. The following conditions are equivalent: \begin{enumerate} \item [\rm (i)] $A$ is a 3-dimensional skew polynomial algebra over $\Bbbk$; \item [\rm (ii)] the standard terms form a basis of $A$ as a left free $\Bbbk$-module; \item [\rm (iii)] $g_{kji}=h_{kji}$, for every $1\le i<j<k\le n$; \item [\rm (iv)] ${\rm stred}_Q(x_kf_{ji})={\rm stred}_Q(f_{kj}x_i)$, for every $1\le i<j<k\le n$. \end{enumerate} Moreover, if $A$ is a 3-dimensional skew polynomial algebra, then ${\rm stred}_Q=r_Q$ and $A$ is isomorphic as a left module to $\Bbbk\langle x_1,\dotsc,x_n\rangle_{\rm irr}$ whose module structure is given by the product $f*g:=r_Q(fg)$, for every $f,g\in \Bbbk\langle x_1,\dotsc,x_n\rangle_{\rm irr}$. \begin{proof} The equivalence between (i) and (ii), as well as between (i) and (iii), is given by Theorem \ref{GomezTorrecillasTheorem3.21}. The equivalence between (i) and (iv) is obtained from Theorem \ref{GomezTorrecillasTheorem3.21} and Proposition \ref{GomezTorrecillasProposition4.5}. The remaining statements are also consequences of Theorem \ref{GomezTorrecillasTheorem3.21}. \end{proof} \end{theorem} Theorem \ref{GomezTorrecillas2Theorem 4.7} gives an algorithm to check whether $\Bbbk\langle x_1,\dotsc, x_n\rangle/I_Q$ is a skew PBW extension, since ${\rm stred}_Q(x_kf_{ji})$ and ${\rm stred}_Q(f_{kj}x_i)$ can be computed by means of Algorithm \textquotedblleft Reduction to standard form algorithm\textquotedblright. \section{Examples}\label{examples} Next, we consider Theorem \ref{GomezTorrecillas2Theorem 4.7} with the aim of showing the relations between the coefficients $r^{(i,j)}_k$ which guarantee that one can have a 3-dimensional skew polynomial algebra with basis given by Definition \ref{3dimensionaldimension}.
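Condition {\rm (iv)} of Theorem \ref{GomezTorrecillas2Theorem 4.7} is straightforward to mechanize. The following short Python sketch (ours, included only for illustration; it is not part of \cite{BuesoGT2003} nor of the algorithms above, and all function names are arbitrary) stores a polynomial of $\Bbbk\langle x_1,\dotsc,x_n\rangle$ as a dictionary mapping words (tuples of variable indices) to coefficients, repeatedly applies the rules $x_jx_i\to c_{i,j}x_ix_j+p_{ji}$ until a standard form is reached, and then compares ${\rm stred}_Q(x_kf_{ji})$ with ${\rm stred}_Q(f_{kj}x_i)$ for all $1\le i<j<k\le n$.

{\small
\begin{verbatim}
from fractions import Fraction

def poly_add(p, q):
    # add polynomials stored as {word (tuple of indices): coefficient}
    out = dict(p)
    for w, c in q.items():
        s = out.get(w, Fraction(0)) + c
        if s:
            out[w] = s
        else:
            out.pop(w, None)
    return out

def embed(p, left, right):
    # multiply every word of p by the word `left` on the left and `right` on the right
    return {left + w + right: c for w, c in p.items()}

def reduce_word(word, rules):
    # rewrite the first descent x_j x_i (j > i) via x_j x_i -> c*x_i x_j + p_ji;
    # return None if the word is already standard
    for t in range(len(word) - 1):
        j, i = word[t], word[t + 1]
        if j > i:
            c, p_ji = rules[(j, i)]
            head = {word[:t] + (i, j) + word[t + 2:]: c}
            return poly_add(head, embed(p_ji, word[:t], word[t + 2:]))
    return None

def stred(p, rules):
    # reduce to standard form; terminates because each step lowers the deglex order
    p = dict(p)
    while True:
        w = next((w for w in p if reduce_word(w, rules) is not None), None)
        if w is None:
            return p
        c = p.pop(w)
        p = poly_add(p, {v: c * a for v, a in reduce_word(w, rules).items()})

def is_pbw(rules, n=3):
    # condition (iv) of Theorem 4.7: stred(x_k f_ji) == stred(f_kj x_i) for i < j < k
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for k in range(j + 1, n + 1):
                f_ji = poly_add({(i, j): rules[(j, i)][0]}, rules[(j, i)][1])
                f_kj = poly_add({(j, k): rules[(k, j)][0]}, rules[(k, j)][1])
                if stred(embed(f_ji, (k,), ()), rules) != stred(embed(f_kj, (), (i,)), rules):
                    return False
    return True

# Dispin algebra U(osp(1,2)): x3 x2 = x2 x3 - x3, x3 x1 = -x1 x3 + x2, x2 x1 = x1 x2 - x1
one = Fraction(1)
dispin = {(3, 2): (one, {(3,): -one}),
          (3, 1): (-one, {(2,): one}),
          (2, 1): (one, {(1,): -one})}
print(is_pbw(dispin))  # expected output: True
\end{verbatim}
}

Running the same check with the relations of the third example below (for any fixed nonzero numerical values of $\alpha$ and $\beta$) returns \texttt{False}, in agreement with the computation carried out there.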
If $x_1\prec x_2\prec x_3$ with the notation in Remark \ref{Gererere}, then $(Q,\preceq_{\rm deglex})$ is a skew reduction system and {\small{\begin{align*} {\rm stred}_Q(x_3f_{21}) = &\ x_3(\gamma^{-1} x_1x_2 + r_0^{(1,2)} + r_1^{(1,2)}x_1 + r_2^{(1,2)}x_2 + r_3^{(1,2)}x_3)\\ = &\ \gamma^{-1} x_3x_1x_2 + r_0^{(1,2)}x_3 + r_1^{(1,2)}x_3x_1 + r_2^{(1,2)}x_3x_2 + r_3^{(1,2)}x_3^{2}\\ = &\ \gamma^{-1} (\beta x_1x_3 + r_0^{(1,3)} + r_1^{(1,3)}x_1 + r_2^{(1,3)}x_2 + r_3^{(1,3)}x_3)x_2 + r_0^{(1,2)}x_3 \\ + &\ r_1^{(1,2)}(\beta x_1x_3 + r_0^{(1,3)} + r_1^{(1,3)}x_1 + r_2^{(1,3)}x_2 + r_3^{(1,3)}x_3) \\ + &\ r_2^{(1,2)}(\alpha^{-1} x_2x_3 + r_0^{(2,3)} + r_1^{(2,3)}x_1 + r_2^{(2,3)}x_2 + r_3^{(2,3)}x_3) + r_3^{(1,3)}x_3^{2}\\ = &\ \gamma^{-1} \beta x_1x_3x_2 + \gamma^{-1} r_0^{(1,3)}x_2 + \gamma^{-1} r_1^{(1,3)}x_1x_2 + \gamma^{-1} r_2^{(1,3)}x_2^{2} + \gamma^{-1} r_3^{(1,3)}x_3x_2 \\ + &\ r_0^{(1,2)}x_3 + r_1^{(1,2)}\beta x_1x_3 + r_1^{(1,2)}r_0^{(1,3)} + r_1^{(1,2)}r_1^{(1,3)}x_1 + r_1^{(1,2)}r_2^{(1,3)}x_2 \\ + &\ r_1^{(1,2)}r_3^{(1,3)}x_3 + r_2^{(1,2)}\alpha^{-1} x_2x_3 + r_2^{(1,2)}r_0^{(2,3)} + r_2^{(1,2)}r_1^{(2,3)}x_1\\ + &\ r_2^{(1,2)}r_2^{(2,3)}x_2 + r_2^{(1,2)}r_3^{(1,3)}x_3 + r_3^{(1,3)}x_3^{2}\\ = &\ \gamma^{-1} \beta x_1(\alpha^{-1} x_2x_3 + r_0^{(2,3)} + r_1^{(2,3)}x_1 + r_2^{(2,3)}x_2 + r_3^{(2,3)}x_3) + \gamma^{-1} r_0^{(1,3)}x_2 \\ + &\ \gamma^{-1} r_1^{(1,3)}x_1x_2 + \gamma^{-1} r_2^{(1,3)}x_2^{2} + \gamma^{-1} r_3^{(1,3)}(\alpha^{-1} x_2x_3 + r_0^{(2,3)} + r_1^{(2,3)}x_1 \\ + &\ r_2^{(2,3)}x_2 + r_3^{(2,3)}x_3) + r_0^{(1,2)}x_3 + r_1^{(1,2)}\beta x_1x_3 + r_1^{(1,2)}r_0^{(1,3)} \\ + &\ r_1^{(1,2)}r_1^{(1,3)}x_1 + r_1^{(1,2)}r_2^{(1,3)}x_2 + r_1^{(1,2)}r_3^{(1,3)}x_3 + r_2^{(1,2)}\alpha^{-1} x_2x_3 \\ + &\ r_2^{(1,2)}r_0^{(2,3)} + r_2^{(1,2)}r_1^{(2,3)}x_1 + r_2^{(1,2)}r_2^{(2,3)}x_2 + r_2^{(1,2)}r_3^{(1,3)}x_3 + r_3^{(1,3)}x_3^{2}\\ = &\ \gamma^{-1} \beta \alpha^{-1} x_1x_2x_3 + \gamma^{-1} \beta r_0^{(2,3)}x_1 + \gamma^{-1} \beta r_1^{(2,3)}x_1^{2} + \gamma^{-1} \beta r_2^{(2,3)}x_1x_2 \\ + &\ \gamma^{-1} \beta r_3^{(2,3)}x_1x_3 + \gamma^{-1} r_0^{(1,3)}x_2 + \gamma^{-1} r_1^{(1,3)}x_1x_2 \\ + &\ \gamma^{-1} r_2^{(1,3)}x_2^{2} + \gamma^{-1} r_3^{(1,3)}\alpha^{-1} x_2x_3 + \gamma^{-1} r_3^{(1,3)}r_0^{(2,3)} + \gamma^{-1} r_3^{(1,3)}r_1^{(2,3)}x_1 \\ + &\ \gamma^{-1} r_3^{(1,3)}r_2^{(2,3)}x_2 + \gamma^{-1} r_3^{(1,3)} r_3^{(2,3)}x_3 + r_0^{(1,2)}x_3 + r_1^{(1,2)}\beta x_1x_3 \\ + &\ r_1^{(1,2)}r_0^{(1,3)} + r_1^{(1,2)}r_1^{(1,3)}x_1 + r_1^{(1,2)}r_2^{(1,3)}x_2 + r_1^{(1,2)}r_3^{(1,3)}x_3\\ + &\ r_2^{(1,2)}\alpha^{-1} x_2x_3 + r_2^{(1,2)}r_0^{(2,3)} + r_2^{(1,2)}r_1^{(2,3)}x_1\\ + &\ r_2^{(1,2)}r_2^{(2,3)}x_2 + r_2^{(1,2)}r_3^{(1,3)}x_3 + r_3^{(1,3)}x_3^{2}, \end{align*}}} or equivalently, {\small{\begin{align*} {\rm stred}_Q(x_3f_{21}) = &\ \gamma^{-1} \beta \alpha^{-1} x_1x_2x_3 + (\gamma^{-1} \beta r_0^{(2,3)} + \gamma^{-1} r_3^{(1,3)}r_1^{(2,3)} + r_1^{(1,2)} r_1^{(1,3)}\\ + &\ r_2^{(1,2)}r_1^{(2,3)})x_1 + (\gamma^{-1} r_0^{(1,3)} + \gamma^{-1} r_3^{(1,3)}r_2^{(2,3)} + r_1^{(1,2)}r_2^{(1,3)} + r_2^{(1,2)}r_2^{(2,3)})x_2 \\ + &\ (\gamma^{-1} r_3^{(1,3)}r_3^{(2,3)} + r_0^{(1,2)} + r_1^{(1,2)}r_3^{(1,3)} + r_2^{(1,2)}r_3^{(1,3)})x_3\\ + &\ (\gamma^{-1} r_1^{(1,3)} + \gamma^{-1} \beta r_2^{(2,3)})x_1x_2 \\ + &\ (\gamma^{-1} \beta r_3^{(2,3)} + \beta r_1^{(1,2)})x_1x_3 + (\gamma^{-1} \alpha^{-1} r_3^{(1,3)} + \alpha^{-1} r_2^{(1,2)})x_2x_3 \\ + &\ \gamma^{-1} \beta r_1^{(2,3)}x_1^{2} + \gamma^{-1} r_2^{(1,3)}x_2^{2} + r_3^{(1,3)}x_3^{2}\\ + &\ \gamma^{-1} r_3^{(1,3)}r_0^{(2,3)} + 
r_1^{(1,2)}r_0^{(1,3)} + r_2^{(1,2)}r_0^{(2,3)}. \end{align*}}} Next, we compute ${\rm stred}_Q(f_{32}x_1)$: {\small{\begin{align*} {\rm stred}_Q(f_{32}x_1) = &\ (\alpha^{-1} x_2x_3 + r_0^{(2,3)} + r_1^{(2,3)}x_1 + r_2^{(2,3)}x_2 + r_3^{(2,3)}x_3)x_1\\ = &\ \alpha^{-1} x_2x_3x_1 + r_0^{(2,3)}x_1 + r_1^{(2,3)}x_1^{2} + r_2^{(2,3)}x_2x_1 + r_3^{(2,3)}x_3x_1\\ = &\ \alpha^{-1} x_2(\beta x_1x_3 + r_0^{(1,3)} + r_1^{(1,3)}x_1 + r_2^{(1,3)}x_2 + r_3^{(1,3)}x_3) + r_0^{(2,3)}x_1 \\ + &\ r_1^{(2,3)}x_1^{2} + r_2^{(2,3)}(\gamma^{-1} x_1x_2 + r_0^{(1,2)} + r_1^{(1,2)}x_1 + r_2^{(1,2)}x_2 + r_3^{(1,2)}x_3)\\ + &\ r_3^{(2,3)}(\beta x_1x_3 + r_0^{(1,3)} + r_1^{(1,3)}x_1 + r_2^{(1,3)}x_2 + r_3^{(1,3)}x_3)\\ = &\ \beta \alpha^{-1} x_2x_1x_3 + \alpha^{-1} r_0^{(1,3)}x_2\\ + &\ \alpha^{-1} r_1^{(1,3)}x_2x_1 + \alpha^{-1} r_2^{(1,3)}x_2^{2} + \alpha^{-1} r_3^{(1,3)}x_2x_3 \\ + &\ r_0^{(2,3)}x_1 + r_1^{(2,3)}x_1^{2} + \gamma^{-1} r_2^{(2,3)}x_1x_2 + r_0^{(1,2)}r_2^{(2,3)}\\ + &\ r_1^{(1,2)}r_2^{(2,3)}x_1 + r_2^{(1,2)}r_2^{(2,3)}x_2 + r_2^{(2,3)}r_3^{(1,2)}x_3 + \beta r_3^{(2,3)}x_1x_3 \\ + &\ r_0^{(1,3)}r_3^{(2,3)} + r_1^{(1,3)}r_2^{(2,3)}x_1 + r_2^{(1,3)}r_3^{(2,3)}x_2 + r_3^{(1,3)}r_3^{(2,3)}x_3\\ = &\ \beta \alpha^{-1} (\gamma^{-1} x_1x_2 + r_0^{(1,2)} + r_1^{(1,2)}x_1 + r_2^{(1,2)}x_2 + r_3^{(1,2)}x_3)x_3 + \alpha^{-1} r_0^{(1,3)}x_2 \\ + &\ \alpha^{-1} r_1^{(1,3)}(\gamma^{-1} x_1x_2 + r_0^{(1,2)} + r_1^{(1,2)}x_1 + r_2^{(1,2)}x_2 + r_3^{(1,2)}x_3) + \alpha^{-1} r_2^{(1,3)}x_2^{2}\\ + &\ \alpha^{-1} r_3^{(1,3)}x_2x_3 + r_0^{(2,3)}x_1 + r_1^{(2,3)}x_1^{2} + \gamma^{-1} r_2^{(2,3)}x_1x_2 + r_0^{(1,2)}r_2^{(2,3)} \\ + &\ r_1^{(1,2)}r_2^{(2,3)}x_1 + r_2^{(1,2)}r_2^{(2,3)}x_2 + r_2^{(2,3)}r_3^{(1,2)}x_3 + \beta r_3^{(2,3)}x_1x_3\\ + &\ r_0^{(1,3)}r_3^{(2,3)} + r_1^{(1,3)}r_2^{(2,3)}x_1 + r_2^{(1,3)}r_3^{(2,3)}x_2 + r_2^{(1,3)}r_3^{(2,3)}x_3 \end{align*}}} or what is the same, {\small{\begin{align*} {\rm stred}_Q(f_{32}x_1) = &\ \gamma^{-1} \beta \alpha^{-1} x_1x_2x_3 + \beta \alpha^{-1} r_0^{(1,2)}x_3 + \beta \alpha^{-1} r_1^{(1,2)}x_1x_3 + \beta \alpha^{-1} r_2^{(1,2)}x_2x_3 \\ + &\ \beta \alpha^{-1} r_3^{(1,2)}x_3^{2} + \alpha^{-1} r_0^{(1,3)}x_2 + \gamma^{-1} \alpha^{-1} r_1^{(1,3)}x_1x_2 + \alpha^{-1} r_0^{(1,2)}r_1^{(1,3)} \\ + &\ \alpha^{-1} r_1^{(1,2)}r_1^{(1,3)}x_1 + \alpha^{-1} r_1^{(1,3)}r_2^{(1,2)}x_2 + \alpha^{-1} r_1^{(1,3)} r_3^{(1,2)}x_3 \\ + &\ \alpha^{-1} r_2^{(1,3)}x_2^{2} + \alpha^{-1} r_3^{(1,3)}x_2x_3 + r_0^{(2,3)}x_1 + r_1^{(2,3)}x_1^{2} + \gamma^{-1} r_2^{(2,3)}x_1x_2 \\ + &\ r_0^{(1,2)}r_2^{(2,3)} + r_1^{(1,2)}r_2^{(2,3)}x_1 + r_2^{(1,2)}r_2^{(2,3)}x_2 + r_2^{(2,3)}r_3^{(1,2)}x_3 \\ + &\ \beta r_3^{(2,3)}x_1x_3 + r_0^{(1,3)}r_3^{(2,3)} + r_1^{(1,3)}r_2^{(2,3)}x_1 \\ + &\ r_2^{(1,3)}r_3^{(2,3)}x_2 + r_2^{(1,3)}r_3^{(2,3)}x_3\\ = &\ \gamma^{-1} \beta \alpha^{-1} x_1x_2x_3 + (\alpha^{-1} r_1^{(1,2)}r_1^{(1,3)} + r_0^{(2,3)} + r_1^{(1,2)}r_2^{(2,3)} + r_1^{(1,3)}r_2^{(2,3)})x_1\\ + &\ (\alpha^{-1} r_0^{(1,3)} + \alpha^{-1} r_1^{(1,3)}r_2^{(1,2)} + r_2^{(1,2)}r_2^{(2,3)} + r_2^{(1,3)}r_3^{(2,3)})x_2\\ + &\ (\beta \alpha^{-1} r_0^{(1,2)} + \alpha^{-1} r_1^{(1,3)}r_3^{(1,2)} + r_2^{(2,3)}r_3^{(1,2)} + r_2^{(1,3)}r_3^{(2,3)})x_3\\ + &\ (\gamma^{-1} \alpha^{-1} r_1^{(1,3)} + \gamma^{-1} r_2^{(2,3)})x_1x_2 + (\beta \alpha^{-1} r_1^{(1,2)} + \beta r_3^{(2,3)})x_1x_3 \\ + &\ (\beta \alpha^{-1} r_2^{(1,2)} + \alpha^{-1} r_3^{(1,3)})x_2x_3 + r_1^{(2,3)}x_1^{2} + \alpha^{-1} r_2^{(1,3)}x_2^{2} + \beta \alpha^{-1} r_3^{(1,2)}x_3^{2}\\ + &\ \alpha^{-1} r_0^{(1,2)}r_1^{(1,3)} + r_0^{(1,2)}r_2^{(2,3)} + 
r_0^{(1,3)}r_3^{(2,3)}. \end{align*}}} Since we need to satisfy the relation ${\rm stred}_Q(x_3f_{21}) = {\rm stred}_Q(f_{32}x_1)$, the following equalities are necessary and sufficient to guarantee that an algebra generated by three variables (where coefficients commute with variables) can be considered as a 3-dimensional skew polynomial algebra in the sense of \cite{SmithBell}: \begin{multline}\label{gordito} \gamma^{-1} \beta r_0^{(2,3)} + \gamma^{-1} r_3^{(1,3)}r_1^{(2,3)} + r_1^{(1,2)} r_1^{(1,3)} + r_2^{(1,2)}r_1^{(2,3)} = \\ \alpha^{-1} r_1^{(1,2)}r_1^{(1,3)} + r_0^{(2,3)} + r_1^{(1,2)}r_2^{(2,3)} + r_1^{(1,3)}r_2^{(2,3)}, \end{multline} \begin{multline}\label{flaquito} \gamma^{-1} r_0^{(1,3)} + \gamma^{-1} r_3^{(1,3)}r_2^{(2,3)} + r_1^{(1,2)}r_2^{(1,3)} + r_2^{(1,2)}r_2^{(2,3)} = \\ \alpha^{-1} r_0^{(1,3)} + \alpha^{-1} r_1^{(1,3)}r_2^{(1,2)} + r_2^{(1,2)}r_2^{(2,3)} + r_2^{(1,3)}r_3^{(2,3)}, \end{multline} \begin{multline}\label{moder} \gamma^{-1} r_3^{(1,3)}r_3^{(2,3)} + r_0^{(1,2)} + r_1^{(1,2)}r_3^{(1,3)} + r_2^{(1,2)}r_3^{(1,3)} = \\ \beta \alpha^{-1} r_0^{(1,2)} + \alpha^{-1} r_1^{(1,3)}r_3^{(1,2)} + r_2^{(2,3)}r_3^{(1,2)} + r_2^{(1,3)}r_3^{(2,3)}, \end{multline} and \begin{align} \gamma^{-1} r_1^{(1,3)} + \gamma^{-1} \beta r_2^{(2,3)} = &\ \gamma^{-1} \alpha^{-1} r_1^{(1,3)} + \gamma^{-1} r_2^{(2,3)} \label{pss1} \\ \gamma^{-1} \beta r_3^{(2,3)} + \beta r_1^{(1,2)} = &\ \beta \alpha^{-1} r_1^{(1,2)} + \beta r_3^{(2,3)} \label{pss2} \\ \gamma^{-1} \alpha^{-1} r_3^{(1,3)} + \alpha^{-1} r_2^{(1,2)} = &\ \beta \alpha^{-1} r_2^{(1,2)} + \alpha^{-1} r_3^{(1,3)} \label{pss3} \\ \gamma^{-1} \beta r_1^{(2,3)} = &\ r_1^{(2,3)}\label{cattt} \\ \gamma^{-1} r_2^{(1,3)} = &\ \alpha^{-1} r_2^{(1,3)} \label{doggg}\\ r_3^{(1,3)} = &\ \beta \alpha^{-1} r_3^{(1,2)}\label{delfi} \\ \gamma^{-1} r_3^{(1,3)}r_0^{(2,3)} + r_1^{(1,2)}r_0^{(1,3)} + r_2^{(1,2)}r_0^{(2,3)} = &\ \alpha^{-1} r_0^{(1,2)}r_1^{(1,3)} + r_0^{(1,2)}r_2^{(2,3)} + r_0^{(1,3)}r_3^{(2,3)}.\label{pss6} \end{align} As an illustration, note that if $r_1^{(2,3)}, r_2^{(1,3)}$ are non-zero elements of $\Bbbk$, then we obtain $\beta=\gamma$ and $\gamma^{-1} = \alpha^{-1} $, from (\ref{cattt}) and (\ref{doggg}), respectively. So, (\ref{delfi}) implies that $r_3^{(1,3)} = r_3^{(1,2)}$. Of course, if $r_1^{(2,3)} = r_2^{(1,3)} = 0$, it is not necessarily true that $\gamma^{-1} = \alpha^{-1} $ and $\beta = \gamma$. \begin{examples}\label{chapeco} \begin{enumerate} \item \textit{Woronowicz algebra $\mbox{${\cal W}$}_{\nu}(\mathfrak{sl}(2,\Bbbk))$.} This $\Bbbk$-algebra was introduced by Woronowicz in \cite{Woronowicz1987}. It is generated by the indeterminates $x,y,z$ subject to the relations $xz-\nu^4zx=(1+\nu^2)x,\ xy-\nu^2yx=\nu z,\ zy-\nu^4yz=(1+\nu^2)y$, where $\nu \in\ \Bbbk\ \backslash\ \{0\}$ is not a root of unity. Below we verify that this algebra is a 3-dimensional skew polynomial algebra. Let us see the details. Let $x_1:=x, x_2:=y$ and $x_3:=z$. We have the relations $x_3x_1=\nu^{-4}x_1x_3-\nu^{-4}(1+\nu^2)x_1,\ x_2x_1=\nu^{-2}x_1x_2-\nu^{-1}x_3,\ x_3x_2=\nu^4x_2x_3+(1+\nu^{2})x_2$.
If $x_1\prec x_2\prec x_3$, then $(Q,\preceq_{\rm deglex})$ is a skew reduction system and \begin{align*} {\rm stred}_Q&(x_3f_{21})=x_3(\nu^{-2}x_1x_2-\nu^{-1}x_3)\\ &=\nu^{-2}x_3x_1x_2-\nu^{-1}x_3^2\\ &=\nu^{-2}(\nu^{-4}x_1x_3-\nu^{-4}(1+\nu^2)x_1)x_2-\nu^{-1}x_3^2\\ &=\nu^{-6}x_1x_3x_2-\nu^{-6}(1+\nu^2)x_1x_2-\nu^{-1}x_3^2\\ &=\nu^{-6}x_1(\nu^4x_2x_3+(1+\nu^2)x_2)-\nu^{-6}(1+\nu^2)x_1x_2-\nu^{-1}x_3^2\\ &= \nu^{-2}x_1x_2x_3+\nu^{-6}(1+\nu^2)x_1x_2-\nu^{-6}(1+\nu^2)x_1x_2-\nu^{-1}x_3^2\\ &= \nu^{-2}x_1x_2x_3-\nu^{-1}x_3^2, \end{align*} while, \begin{align*} {\rm stred}_Q(f_{32}x_1)&=(\nu^4x_2x_3+(1+\nu^2)x_2)x_1\\ &=\nu^4x_2x_3x_1+(1+\nu^2)x_2x_1\\ &=\nu^4x_2(\nu^{-4}x_1x_3-\nu^{-4}(1+\nu^2)x_1)+(1+\nu^2)(\nu^{-2}x_1x_2-\nu^{-1}x_3)\\ &= x_2x_1x_3-(1+\nu^2)x_2x_1+(1+\nu^2)\nu^{-2}x_1x_2-(1+\nu^2)\nu^{-1}x_3\\ &=(\nu^{-2}x_1x_2-\nu^{-1}x_3)x_3-(1+\nu^2)(\nu^{-2}x_1x_2-\nu^{-1}x_3)\\ & + (1+\nu^2)\nu^{-2}x_1x_2-\nu^{-1}(1+\nu^2)x_3\\ &=\nu^{-2}x_1x_2x_3-\nu^{-1}x_3^2-\nu^{-2}(1+\nu^2)x_1x_2 + (1+\nu^2)\nu^{-1}x_3\\ & + (1+\nu^2)\nu^{-2}x_1x_2-\nu^{-1}(1+\nu^2)x_3\\ &=\nu^{-2}x_1x_2x_3-\nu^{-1}x_3^2. \end{align*} Thus, Theorem \ref{GomezTorrecillas2Theorem 4.7} implies that $\mbox{${\cal W}$}_{\nu}(\mathfrak{sl}(2,\Bbbk))$ is a 3-di\-men\-sio\-nal skew polynomial algebra for any value of $\nu \in \Bbbk\ \backslash\ \{0\}$. \item \textit{Dispin algebra $\mbox{${\cal U}$}(osp(1,2))$.} This $\Bbbk$-algebra is generated by the variables $x,y,z$ subject to the relations $yz-zy=z,\ zx+xz=y,\ xy-yx=x$. Let $x_1:=x, x_2:=y$ and $x_3:=z$. We consider $x_1\prec x_2\prec x_3$. Then $(Q,\preceq_{\rm deglex})$ is a skew reduction system. The relations defining this algebra are $x_3x_2=x_2x_3-x_3,\ x_3x_1=-x_1x_3+x_2$, and $ x_2x_1=x_1x_2-x_1$. Following Theorem \ref{GomezTorrecillas2Theorem 4.7}, we have \begin{align*} {\rm stred}_Q(x_3f_{21})&=x_3(x_1x_2-x_1)\\ &= x_3x_1x_2-x_3x_1\\ &=(-x_1x_3+x_2)x_2-(-x_1x_3+x_2)\\ &=-x_1x_3x_2+x_2^2+x_1x_3-x_2\\ &=-x_1(x_2x_3-x_3)+x_2^2+x_1x_3-x_2\\ &=-x_1x_2x_3+x_1x_3+x_2^2+x_1x_3-x_2, \end{align*} and \begin{align*} {\rm stred}_Q(f_{32}x_1)&=(x_2x_3-x_3)x_1\\ &=x_2x_3x_1-x_3x_1\\ &=x_2(-x_1x_3+x_2)-(-x_1x_3+x_2)\\ &=-x_2x_1x_3+x_2^2+x_1x_3-x_2\\ &=-(x_1x_2-x_1)x_3+x_2^2+x_1x_3-x_2\\ &=-x_1x_2x_3+x_1x_3+x_2^2+x_1x_3-x_2. \end{align*} We can see that ${\rm stred}_Q(x_3f_{21})={\rm stred}_Q(f_{32}x_1)$, so Theorem \ref{GomezTorrecillas2Theorem 4.7} guarantees that $\mbox{${\cal U}$}(osp(1,2))$ is a 3-dimensional skew polynomial algebra. \item If we consider the $\Bbbk$-algebra $A$ generated by the variables $x, y, z$ subject to the relations $yx = \alpha xy + x$, $zx = \beta xz + z$, $zy=yz$, with $\beta, \alpha \in \Bbbk\ \backslash\ \{0\}$, then the set of variables $\{x, y, z\}$ is not a PBW basis for $A$. Consider the identification $x_1:=x,\ x_2:= y, x_3:= z$. Then, the algebra $A$ is expressed by the relations $x_2x_1 = \alpha x_1x_2 + x_1,\ x_3x_1 = \beta x_1x_3 + x_3,\ x_3x_2 = x_2x_3$, and hence \begin{align*} {\rm stred}_Q(x_3f_{21}) = &\ x_3(\alpha x_1x_2 + x_1) = \alpha x_3x_1x_2 + x_3x_1\\ = &\ \alpha (\beta x_1x_3 + x_3)x_2 + \beta x_1x_3 + x_3\\ = &\ \beta \alpha x_1x_3x_2 + \alpha x_3x_2 + \beta x_1x_3 + x_3\\ = &\ \beta \alpha x_1x_2x_3 + \alpha x_2x_3 + \beta x_1x_3 + x_3\\ {\rm stred}_Q(f_{32}x_1) = &\ x_2x_3x_1\\ = &\ x_2(\beta x_1x_3 + x_3) = \beta x_2x_1x_3 + x_2x_3\\ = &\ \beta(\alpha x_1x_2 + x_1)x_3 + x_2x_3\\ = &\ \beta \alpha x_1x_2x_3 + \beta x_1x_3 + x_2x_3.
\end{align*} Since ${\rm stred}_Q(x_3f_{21}) \neq {\rm stred}_Q(f_{32}x_1)$, Theorem \ref{GomezTorrecillas2Theorem 4.7} guarantees that the set $\{x, y, z\}$ is not a PBW basis for the algebra $A$. Considering the notation in Remark \ref{Gererere}, we observe that $r_0^{(1,2)} = r_2^{(1,2)} = r_3^{(1,2)} = r_0^{(1,3)} = r_1^{(1,3)} = r_2^{(1,3)} = r_0^{(2,3)} = r_1^{(2,3)} = r_2^{(2,3)} = r_3^{(2,3)} = 0$, and $r_1^{(1,2)} = r_3^{(1,3)} = \alpha^{-1} = 1$, where $\alpha$ denotes the scalar of Remark \ref{Gererere} (not the parameter $\alpha$ of the relations above). In particular, expression (\ref{delfi}) imposes that $1=0$, which of course is false. This illustrates why the set $\{x, y, z\}$ is not a PBW basis over $\Bbbk$ for the algebra $A$. \end{enumerate} \end{examples} \subsection*{Acknowledgment} The first author is supported by Grant HERMES CODE 30366, Departamento de Matem\'aticas, Universidad Nacional de Colombia, Bogot\'a. \end{document}
\begin{document} \if11 { \title{ Modelling Point Referenced Spatial Count Data: A Poisson Process Approach } \author[1,2]{Diego Morales-Navarrete} \affil[1]{Departamento de Estad\'istica, Pontificia Universidad Cat\'olica de Chile, Santiago, Chile} \affil[2]{Millennium Nucleus Center for the Discovery of Structures in Complex Data, Chile, \texttt{[email protected]}} \author[3,4]{Moreno Bevilacqua} \affil[3]{Facultad de Ingenier\'ia y Ciencias, Universidad Adolfo Ib\'a\~nez, Vi\~na del Mar, Chile} \affil[4]{ Dipartimento di Scienze Ambientali, Informatica e Statistica, Ca’ Foscari University of Venice, Italy, \texttt{[email protected]}} \author[5]{Christian Caama\~no-Carrillo} \affil[5]{Departamento de Estad\'istica, Universidad del B\'io-B\'io, Concepci\'on, Chile, \texttt{[email protected]}} \author[1,2,6]{Luis M. Castro} \affil[6]{Centro de Riesgos y Seguros UC, Pontificia Universidad Cat\'olica de Chile, Santiago, Chile, \texttt{[email protected]}} \maketitle } \fi \if01 { \bigskip \bigskip \bigskip \begin{center} {\LARGE\bf Modelling Point Referenced Spatial Count Data: A Poisson Process Approach} \end{center} } \fi \bigskip \begin{abstract} {\small Random fields are useful mathematical tools for representing natural phenomena with complex dependence structures in space and/or time. In particular, the Gaussian random field is commonly used due to its attractive properties and mathematical tractability. However, this assumption seems to be restrictive when dealing with counting data. To deal with this situation, we propose a random field with a Poisson marginal distribution by considering a sequence of independent copies of a random field with an exponential marginal distribution as ``inter-arrival times'' in the counting renewal processes framework. Our proposal can be viewed as a spatial generalization of the Poisson counting process. Unlike the classical hierarchical Poisson Log-Gaussian model, our proposal generates a (non)-stationary random field that is mean square continuous and has Poisson marginal distributions. For the proposed Poisson spatial random field, analytic expressions for the covariance function and the bivariate distribution are provided. In an extensive simulation study, we investigate the weighted pairwise likelihood as a method for estimating the Poisson random field parameters. Finally, the effectiveness of our methodology is illustrated by an analysis of reindeer pellet-group survey data, where a zero-inflated version of the proposed model is compared with zero-inflated Poisson Log-Gaussian and Poisson Gaussian copula models. Supplementary materials for this article, including technical proofs and \texttt{R} code for reproducing the work, are available as an online supplement. } \end{abstract} Keywords: Gaussian random field; Gaussian copula; Pairwise likelihood function; Poisson distribution; Renewal process \section{Introduction} \label{sec:introduction} The faecal pellet count technique is one of the most popular tools for estimating an animal species' abundance. Specifically, this technique uses the number of observed droppings combined with their decay time and the target animal species' defecation rate. With these ingredients, it is possible to obtain an accurate density estimation of an animal population. This method was proposed by \cite{BennetEtAl1940} and has been improved by several authors \citep[see for example][among others]{VanEttenBennet1965,MayleEtAl1999,KrebsEtAl2001}.
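For orientation only (the formula below is a standard ingredient of this methodology, stated here as background rather than taken from the present paper), the resulting abundance estimate typically takes the form
\[
\widehat{D} \;=\; \frac{\mbox{number of pellet groups per unit area}}{\mbox{defecation rate}\times\mbox{mean decay time}},
\]
so that the observed counts enter the estimator directly; this is one reason why an adequate statistical model for the spatial counts themselves is required.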
The study motivating our research is a reindeer pellet-group survey conducted in the northern forest area of Sweden and previously analysed by \cite{Lee2016}. The objective of this survey was to assess the impact of newly established wind farms on reindeer habitat selection. Habitat selection is crucial for the reindeer since it involves trade-offs between feeding, mating, parental care, and predation risk mitigation \citep{SivertsenEtAl2016}. Survey data were collected over the years 2009--2010 and presented a large number of zero counts. This situation is frequent when spatial species count data are collected since the survey is conducted using a point transect design \citep{BucklandEtAl2001}. This design considers a set of $K$ systematically spaced plots along lines (transects) located throughout the survey region, where $K$ should be at least 20 to obtain robust estimates of the abundance. The study area was 250 km$^2$, and the distance between transects was 300 m. On each transect, the distance between plots was 100 m. The size of each plot was 15 m$^2$ with a radius of 2.18 m. From a modelling viewpoint, the analysis of the reindeer pellet-group data requires the development of statistical models for geo-referenced count data that take into account both spatial dependence and the excessive number of zeros. Random fields or stochastic processes are useful models when dealing with geo-referenced spatial or spatio-temporal data \citep{Stein:1999,Cressie:Wikle:2011,Banerjee-Carlin-Gelfand:2014}. In particular, the Gaussian random field is widely used due to its attractive properties and mathematical tractability \citep{GELFAND201686}. Gaussianity is clearly a restrictive assumption when dealing with count data. However, many models in current use for spatial count data employ Gaussian random fields as building blocks. The first example is the hierarchical model approach proposed by \cite{Diggle:Tawn:Moyeed:1998}, which can be viewed as a generalized linear mixed model \citep{Diggle-Ribeiro:2007,Diggle-Giorgi:2019}. Under this framework, non-Gaussian models for spatial data can be specified using a link function and a latent Gaussian random field through a conditional independence assumption. In particular, the Poisson Log-Gaussian random field (Poisson LG hereafter) has been widely applied for modelling spatial count data \citep[see for instance][for interesting applications and an in-depth study of its properties]{Christensen:Waagepetersen:2002,Guillot_et_all:2009,Oliveira:2013}. Similar models, which can be defined hierarchically in terms of the first two moments and a correlation function, have been proposed in \cite{Monestiez_et_al2006} and \cite{Oliveira:2014}. It is important to stress that the conditional independence assumption underlying these kinds of models leads to (a) random fields with marginal distributions that are not Poisson and (b) random fields with a ``forced'' nugget effect that implies no mean square continuity. \begin{figure}[htb!]
\scalebox{0.7}{ \begin{tabular}{cc} \includegraphics[width=7cm, height=5.5cm]{PP1.pdf} & \includegraphics[width=7cm, height=5.5cm]{PP3.pdf}\\ (a)&(b)\\ \includegraphics[width=7cm, height=5.5cm]{PP2.pdf} & \includegraphics[width=7cm, height=5.5cm]{PP4.pdf} \\ (c)&(d)\\ \end{tabular}} \caption{A realization of a Poisson LG random field, where the LG random field is given by $e^{\mu+\sqrt{\sigma^2} G(\mathbf{s})}$, where $G$ is a standard Gaussian random field with parameters $\mu=0.5$ and $\sigma^2=0.05$ (panel a) and its associated histogram (panel c). A realization of our proposed Poisson random field with $\lambda=e^{0.5+0.05/2}$ (panel b) and its associated histogram (panel d). In both cases the underlying isotropic correlation is $\rho(r)=(1-r/0.5)^4_+$. } \label{fig:poi} \end{figure} To illustrate this situation, Figure \ref{fig:poi} (a) shows a realization on the unit square of a Poisson LG random field, {\it i.e.}, $e^{\mu+\sqrt{\sigma^2} G(\mathbf{s})}$, where $G$ is the standard Gaussian random field with isotropic correlation $\rho(r)=(1-r/0.5)^4_+$ belonging to the Generalized Wendland family \citep{Bevilacqua:Faouzi_et_all:2019}, $\mu=0.5$, $\sigma^2=0.05$, $r$ is the spatial distance and $(\cdot)_+$ denotes the positive part. In this case, the mean of the Poisson LG field is given by $\lambda=e^{0.5+0.05/2}$. The associated histogram is shown in panel (c). Additionally, Figure \ref{fig:poi} (b) shows a realization of our proposed random field (see Equation \eqref{qqq}), with the same mean and underlying correlation function as the Poisson LG model, and panel (d) shows the associated histogram. A quick comparison of the two realizations reveals a ``whitening'' effect on the Poisson LG random field's paths because of the ``forced'' discontinuity at the origin of the correlation function for the Poisson LG (see Section \ref{sec:3.1}). This potential problem, which has also been highlighted by \cite{Oliveira:2013}, indicates that the Poisson LG random field may impose severe restrictions on the correlation structure and may be inadequate for modelling spatial count data, especially data consisting of small counts. The second example is the Poisson spatial model obtained using a Gaussian copula \citep{Kazianka:Pilz:2010,Masarotto:Varin:2012,Joe:2014}, which is referred to as the Poisson GC random field hereafter. This approach has some potential benefits with respect to the hierarchical models \citep[see][for a comparison between these two approaches]{Han:Oliveira:2016}. For example, the resulting random field has Poisson marginals and may or may not be mean square continuous, depending on whether the latent Gaussian random field is mean square continuous or not. In addition to some criticisms concerning the lack of uniqueness of the copula when applied to discrete data \citep{Genest:Neslehova:2007,Trivedi:Zimmer:2017}, this approach does not make explicit what underlying physical mechanism generates the data, making it less appealing from an interpretability perspective. Our proposal tries to overcome the drawbacks of the Poisson LG and of the Poisson GC approaches by specifying a new class of spatial counting random fields based on the Poisson counting process \citep{Cox:1970,MAINARDI2007725,ross2008stochastic} applied to the spatial setting. Specifically, we first consider a random field with exponential marginal distributions obtained as a rescaled sum of the squares of two independent copies of an underlying standard Gaussian random field.
Then, by considering independent copies of the exponential random field as inter-arrival times in the counting renewal processes framework, we obtain a (non-)stationary random field with Poisson marginal distributions. By construction, for each spatial location, the proposed model is a Poisson counting process, \textit{i.e.}, it represents the random number of events occurring in an arbitrary interval of time when the time between the occurrence of two events is exponentially distributed. More importantly, given two location sites, the associated Poisson counting processes are spatially correlated. For this reason, the proposed model can be viewed as a spatial generalization of the Poisson process. For the novel Poisson random field, we provide the covariance function and analytic expressions for the bivariate distribution in terms of the regularized incomplete Gamma and confluent hypergeometric functions \citep{Gradshteyn:Ryzhik:2007}. It follows that the dependence of the proposed Poisson random field is indexed by the correlation function of the underlying Gaussian random field and by the mean parameter. It is important to stress that our theoretical results are inspired by the two-dimensional renewal theory described in \cite{Hunter:1974}. The Poisson random field estimation is performed with the weighted pairwise likelihood ({\it wpl}) method \citep{Lindsay:1988,Varin:Reid:Firth:2011,Bevilacqua:Gaetan:2015}, exploiting the results obtained for the bivariate distribution. In particular, in an extensive simulation study, we explore the efficiency of the {\it wpl} method when estimating the parameters of the proposed Poisson random field. We also explore the statistical efficiency of a Gaussian misspecified version of the {\it wpl} method \citep{cppp,Bev:2020}, which is also called the Gaussian quasi-likelihood in part of the literature \citep{AOS1121}. The findings show that the misspecified {\it wpl} leads to a less efficient estimator, in particular for low counts. However, the method has some computational benefits. In addition, we compare the performance of the optimal linear predictor under the proposed model with the optimal predictors obtained using the Poisson GC and Poisson LG models. Finally, in the real data application, we consider a zero-inflated version of the proposed Poisson random field to deal with excess zeros in the reindeer pellet-group counts data, using the zero-inflated Poisson LG and Poisson GC models as benchmarks. The methods proposed in this paper are implemented in the \texttt{R} \citep{R2020} package \texttt{GeoModels} \citep{Bevilacqua:2018aa}, and \texttt{R} code for reproducing the work is available as an online supplement. The remainder of the paper is organized as follows. In Section \ref{sec:2}, we provide some basic notation and describe the exponential random field. Section \ref{sec:3} introduces our proposal by presenting a new class of counting random fields under the general renewal counting framework, focusing on the Poisson random field. A study of the associated correlation function and bivariate distribution is presented and, in addition, a zero-inflated extension of the proposed model is introduced. In Section \ref{sec:4}, the {\it wpl} estimation method and the optimal linear predictor are discussed. Section \ref{sec:5} provides an in-depth simulation study to investigate the performance of the Poisson random field in spatial and spatio-temporal settings.
In Section \ref{sec:6}, the faecal pellet-group counts dataset previously described is re-analysed. Section \ref{sec:7} closes the paper with a discussion of our main findings and future research directions. \section{A random field with exponential marginal distributions}\label{sec:2} To make the paper self-contained, we start by introducing some notation in this section. For the rest of the paper, given a second order real-valued random field $Q=\{Q(\mathbf{s}), \mathbf{s} \in A\subset \mathbb{R}^d \}$, we denote by $f_{Q(\mathbf{s})}$ and $F_{Q(\mathbf{s})}$ the marginal probability density function ({\it pdf}) and cumulative distribution function ({\it cdf}) of $Q(\mathbf{s})$, respectively. Moreover, for any set of distinct points $(\mathbf{s}_1,\ldots,\mathbf{s}_l)^\top$, $l\in \mathbb{N}$ and $\mathbf{s}_i \in A$, we denote the correlation function by $\rho_Q(\mathbf{s}_i,\mathbf{s}_j)=\Corr(Q(\mathbf{s}_i),Q(\mathbf{s}_j))$. In the stationary case, the adopted notation is $\rho_Q(\mathbf{h})=\Corr(Q(\mathbf{s}_i),Q(\mathbf{s}_j))$, where $\mathbf{h}=\mathbf{s}_i-\mathbf{s}_j$ is the lag separation vector. Finally, $f_{\bm{Q}_{ij}}$ denotes the {\it pdf} of the bivariate random vector $\bm{Q}_{ij}=(Q(\mathbf{s}_i),Q(\mathbf{s}_j))^\top$, $i \neq j$. If the random field $Q$ is a discrete-valued random field, then $\Pr(Q(\mathbf{s})=q)$ and $\Pr(Q(\mathbf{s}_i)=n,Q(\mathbf{s}_j)=m)$, $q,m,n \in \mathbb{N}$, will denote the marginal and bivariate discrete probability functions, respectively. Let $G=\{G(\mathbf{s}), \mathbf{s} \in A \}$ be a zero mean and unit variance weakly stationary Gaussian random field with correlation function $\rho_G(\mathbf{h})$. Henceforth, we call $G$ the underlying Gaussian random field, and with some abuse of notation, we set $\rho(\mathbf{h}):=\rho_{G}(\mathbf{h})$, denoting this as the underlying correlation function. Let $G_1,G_2$ be two independent copies of $G$ and let us define the random field $W=\{W(\mathbf{s}), \mathbf{s}\in A\}$ as follows: \begin{equation}\label{gamma} W(\mathbf{s}) := \frac{1}{2\lambda(\mathbf{s})}\sum_{k=1}^{2} G^2_k({\mathbf{s}}), \end{equation} where $\lambda(\mathbf{s})>0$ is a non-random function. $W$ is a random field with exponential marginal distributions with parameter $\lambda(\mathbf{s})$, denoted by $W(\mathbf{s}) \sim \mbox{Exp}(\lambda(\mathbf{s}))$, with $\mathds{E}(W(\mathbf{s}))=1/\lambda(\mathbf{s})$, $\mbox{Var}(W(\mathbf{s}))=1/\lambda^2(\mathbf{s})$, and it can be easily observed that $\rho_{W}(\mathbf{h}) =\rho^2(\mathbf{h})$. The associated multivariate exponential density was discussed earlier by \cite{Krishnamoorthy:Parthasarathy:1951}, and since then, its properties have been studied by several authors \citep{Krishnaiah:Rao:1961,Royen:2004}. However, likelihood-based methods for exponential random fields can be troublesome since the analytical expressions of the multivariate density can be derived only in some specific cases.
For example, when $d=1$ and the underlying correlation function is exponential, the multivariate {\it pdf} is given by \citep{Bevilacqua:2018ab}: \begin{eqnarray*}\label{gammafd1} f_{W}(w_1,\ldots,w_n)&=&\exp\left[-\frac{w_1\lambda_1}{(1-\rho^2_{1,2})} -\frac{w_n \lambda_n}{(1-\rho^2_{n-1,n})}-\sum\limits_{i=2}^{n-1}\frac{(1-\rho^2_{i-1,i}\rho^2_{i,i+1})\lambda_i w_i}{(1-\rho^2_{i-1,i})(1-\rho^2_{i,i+1})}\right]\nonumber\\ &&\times\prod\limits^{n-1}_{i=1}I_{0}\left(\frac{2\rho_{i,i+1}\sqrt{w_i \lambda_i w_{i+1} \lambda_{i+1}}}{(1-\rho^2_{i,i+1})}\right) \times \left( \prod\limits^{n-1}_{i=1}(1-\rho^2_{i,i+1}) \right)^{-1}, \end{eqnarray*} with $\rho_{ij}:=\exp\{-|s_i-s_j|/\phi\}$, $\lambda_i=\lambda(s_i)$, $\phi>0$ and $I_{a}(x)$ being the modified Bessel function of the first kind of order $a$. Regardless of the dimension of the space $A$ and the type of correlation function, the bivariate exponential {\it pdf} is given by \citep{Kibble:1941,VereJones:1997}: \begin{equation*}\label{pairchi2} f_{W_{ij}}(w_{i},w_j)=\frac{e^{-\frac{(\lambda(\mathbf{s}_i)w_i+\lambda(\mathbf{s}_j)w_j)}{(1-\rho^2(\mathbf{h}))}}}{(1-\rho^2(\mathbf{h}))} I_{0} \left( \frac{ 2\sqrt{\rho^2(\mathbf{h})\lambda(\mathbf{s}_i)\lambda(\mathbf{s}_j)w_iw_j} } {(1-\rho^2(\mathbf{h}))}\right). \end{equation*} The exponential random field $W$ will be further used to define a new random field with Poisson marginal distributions. \section{Spatial Poisson random fields}\label{sec:3} Our proposal relies on considering an infinite sequence of independent copies $Y_1,Y_2,\ldots$ of $Y= \{Y(\mathbf{s}), \mathbf{s}\in A\}$, a positive continuous random field. We define a new class of counting random fields, $N_{t(\mathbf{s})}:= \{N_{t(\mathbf{s})}(\mathbf{s}), \mathbf{s}\in A\}$, $t(\mathbf{s})\geq 0$, as follows: \begin{equation}\label{qqq} N_{t(\mathbf{s})}(\mathbf{s}):= \begin{cases}0 &\mbox{if} \quad 0\leq t(\mathbf{s})<S_1(\mathbf{s}) \\ \max\{n\geq 1: S_n(\mathbf{s})\leq t(\mathbf{s})\} & \mbox{if} \quad S_1(\mathbf{s}) \leq t(\mathbf{s}) \end{cases}, \end{equation} where $S_n(\mathbf{s})=\sum_{i=1}^{n}Y_i(\mathbf{s})$ is the partial sum of the first $n$ copies of $Y$ (whose distribution is the $n$-fold convolution of that of $Y$) and $N_{t(\mathbf{s})}(\mathbf{s})$ represents the random total number of events that have occurred up to time $t(\mathbf{s})$ at location site $\mathbf{s} \in A$. In our approach, we are assuming that the location sites share a common time, {\it i.e.}, $t=t(\mathbf{s})$. This assumption is justified since the observation window is fixed for each location site in the pellet-group count application. The proposed model can be viewed as a spatial generalization of the renewal counting processes \citep{Cox:1970,MAINARDI2007725}, where we consider independent copies of a positive random field as inter-arrival times or waiting times, instead of an independent and identically distributed sequence of positive random variables. For each $\mathbf{s} \in A$, and using classical results from renewal counting process theory, the marginal discrete probability function of $N$ is given by: \begin{equation}\label{www} \Pr(N_t(\mathbf{s})=n)=F_{S_n(\mathbf{s})}(t) - F_{S_{n+1}(\mathbf{s})}(t).
\end{equation} In addition, the marginal mean (the so-called renewal function) and the variance of $N_t$ are given, respectively, by: \begin{equation*} \mathds{E}(N_t(\mathbf{s}))=\sum_{i=1}^{\infty} F_{S_i(\mathbf{s})}(t) , \quad \mbox{Var}(N_t(\mathbf{s}))=\left(2\sum_{i=1}^{\infty} i F_{S_i(\mathbf{s})}(t) - \mathds{E}(N_t(\mathbf{s}))\right) -( \mathds{E}(N_t(\mathbf{s})))^2. \end{equation*} Different choices of the positive random field $Y$ lead to counting random fields with specific marginal distributions. In this paper, we assume that the ``spatial inter-arrival times'' are exponentially distributed, \textit{i.e.}, we assume $Y\equiv W$, where $W$ is the positive random field defined in \eqref{gamma}, with $\mbox{Exp}(\lambda(\mathbf{s}))$ marginal distribution and {\it cdf} given by $F_{Y(\mathbf{s})}(x)=1-e^{-\lambda(\mathbf{s})x}$, $x>0$. In this case, $S_n(\mathbf{s})\sim \mbox{Gamma}(n,\lambda(\mathbf{s}))$, $n\in \mathbb{N}$ (an Erlang distribution), with {\it cdf} given by: \begin{equation*} F_{S_n(\mathbf{s})}(x)=1-\sum_{k=0}^{n-1}\frac{e^{-\lambda(\mathbf{s})x}(\lambda(\mathbf{s})x)^{k}}{k!},\quad x>0 \end{equation*} and from \eqref{www}, we can obtain the marginal distribution of $N$ as: \begin{equation}\label{kjk} \Pr(N_t(\mathbf{s})=n)=e^{-t\lambda(\mathbf{s})} [t\lambda(\mathbf{s})]^n/n!, \quad n=0,1,2,\ldots, \end{equation} with $\mathds{E}(N_t(\mathbf{s}))=\mbox{Var}(N_t(\mathbf{s}))=t\lambda(\mathbf{s})$. By construction, for each $\mathbf{s}$, the proposed model is a Poisson counting process, that is, $N_t(\mathbf{s})\sim \mbox{Poisson}(t\lambda(\mathbf{s}))$ represents the random number of events that have occurred up to time $t$ when the inter-arrival times are exponentially distributed. In addition, given two arbitrary location sites $\mathbf{s}_1$, $\mathbf{s}_2$, the associated Poisson counting processes $N_t(\mathbf{s}_1)$ and $N_t(\mathbf{s}_2)$ are spatially correlated (see Section \ref{sec:3.1}). Therefore, the proposed model can be viewed as a spatial generalization of the Poisson counting process. Hereafter and without loss of generality, $t$ is set to one, and $N_t$ is denoted as $N$. We will call $N$ a Poisson random field with underlying correlation $\rho(\mathbf{h})$ because $N$ is marginally Poisson distributed and the dependence is indexed by a correlation function. Note that, when the spatially varying mean (and variance) $\lambda(\mathbf{s})$ is not constant, $N$ is not stationary. A typical parametric specification for the mean is given by $\lambda(\mathbf{s})=e^{X(\mathbf{s})^\top\bm{\beta}}$, where $X(\mathbf{s}) \in \mathbb{R}^k$ is a vector of covariates and $\bm{\beta} \in \mathbb{R}^k$ is a vector of regression parameters, even though other types of parametric and non-parametric specifications can be used. It is important to note that although the proposed Poisson random field is defined on the $d$-dimensional Euclidean space $A$, the proposed method can be easily adapted to other spaces, such as the continuous space-time setting, the sphere or discrete spaces. The key for this extension is the specification of a suitable underlying correlation function $\rho(\mathbf{h})$. For example, one can use a correlation function defined on the space-time setting, {\it i.e.}, $A\subset \mathbb{R}^d\times \mathbb{R}$ \citep{Gneiting:2002}, or on the sphere of arbitrary radius, {\it i.e.}, $A\subseteq \mathbb{S}^2=\{ \mathbf{s} \in \mathbb{R}^3, ||\mathbf{s}|| = M\}$, $M>0$ \citep{gneiting2013,porcubev}. In the case of lattice or areal data, a suitable precision matrix with an appropriate neighborhood structure should be specified for the underlying Gaussian Markov random field \citep{Rue:Held:2005}.
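The construction above is straightforward to simulate. The following \texttt{R} sketch (a minimal illustration, not the \texttt{GeoModels} implementation) generates independent copies of the exponential random field $W$ in \eqref{gamma} through a Cholesky factorization of the underlying correlation matrix and counts the renewals up to the common time $t=1$ as in \eqref{qqq}; the Wendland-type underlying correlation, the grid and the constant mean $\lambda=5$ are illustrative choices.
\begin{verbatim}
## Minimal sketch: simulation of the Poisson random field via the renewal
## construction above, with common time t = 1 and constant mean lambda = 5.
set.seed(1)
coords <- expand.grid(x = seq(0, 1, length.out = 15),
                      y = seq(0, 1, length.out = 15))
h      <- as.matrix(dist(coords))
R      <- pmax(1 - h / 0.5, 0)^4             # underlying correlation rho(h)
U      <- chol(R + diag(1e-10, nrow(R)))     # tiny jitter for numerical stability
lambda <- 5
sim_W <- function() {                        # exponential field: (G1^2 + G2^2)/(2*lambda)
  G1 <- drop(t(U) %*% rnorm(nrow(coords)))
  G2 <- drop(t(U) %*% rnorm(nrow(coords)))
  (G1^2 + G2^2) / (2 * lambda)
}
S <- numeric(nrow(coords)); N <- integer(nrow(coords))
repeat {                                     # renewal counting: N = max{n : S_n <= 1}
  S <- S + sim_W()
  if (!any(S <= 1)) break
  N[S <= 1] <- N[S <= 1] + 1
}
c(mean(N), var(N))                           # both close to lambda, as in the Poisson marginal
\end{verbatim}
The sample mean and variance of the simulated counts are both close to $\lambda$, in agreement with \eqref{kjk}.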
\subsection{Correlation function}\label{sec:3.1} The following result, which can be found in the pioneering work of \cite{Hunter:1974}, provides the correlation function $\rho_N(\mathbf{s}_i,\mathbf{s}_j)$ of the non-stationary Poisson random field with underlying correlation $\rho(\mathbf{h})$ depending on the regularized lower incomplete Gamma function: \begin{equation}\label{incgamma} \gamma^{*}(a,x)=\frac{\gamma(a,x)}{\Gamma(a)}=\frac{1}{\Gamma(a)}\int\limits^{x}_{0}t^{a-1}e^{-t}dt, \end{equation} where $\gamma(\cdot,\cdot)$ is the lower incomplete Gamma function and $\Gamma(\cdot)$ is the Gamma function. Additionally, we define the function $\gamma^{\star}\left(a,x,x'\right)=\gamma^{\ast}\left(a,x\right)\gamma^{\ast}\left(a,x' \right)$, which is the product of two regularized lower incomplete Gamma functions sharing a common shape parameter. \begin{theo}\label{Theo1} Let $N$ be a non-stationary Poisson random field with underlying correlation $\rho(\mathbf{h})$. Then, \begin{equation*}\label{ccp} \rho_N(\mathbf{s}_i,\mathbf{s}_j)=\dfrac{\rho^2(\mathbf{h})(1-\rho^2(\mathbf{h}))}{\sqrt{\lambda(\mathbf{s}_i)\lambda(\mathbf{s}_j)}}\sum\limits_{r=0}^{\infty}\gamma^{\star}\left(r+1,\dfrac{\lambda(\mathbf{s}_i)}{1-\rho^2(\mathbf{h})}, \dfrac{\lambda(\mathbf{s}_j)}{1-\rho^2(\mathbf{h})}\right), \end{equation*} with $\mathbf{h}=\mathbf{s}_i-\mathbf{s}_j$. \end{theo} \begin{proof} For details, refer to \cite{Hunter:1974}, Section 5.2 (pages 38-39). \end{proof} The following result provides a closed form expression of the correlation function in terms of modified Bessel functions for the stationary case. \begin{cor}\label{statcoor} In Theorem \ref{Theo1}, when $\lambda(\mathbf{s})=\lambda$, the Poisson random field is weakly stationary with the correlation function given by: \begin{equation}\label{cpoi} \rho_N(\mathbf{h},\lambda)=\rho^2(\mathbf{h})\left[1- \exp\left(-z(\mathbf{h},\lambda)\right)\left(I_0\left(z(\mathbf{h},\lambda)\right)+I_1\left(z(\mathbf{h},\lambda)\right)\right)\right], \end{equation} where $z(\mathbf{h},\lambda)=2\lambda (1-\rho^2(\mathbf{h}))^{-1}$. \end{cor} \begin{proof} See the online supplement. \end{proof} Note that $\rho_N(\mathbf{h})$ is well defined at the origin since $\lim_{\mathbf{h} \to \bm{0}} \rho_N(\mathbf{h})=1$, implying that the Poisson random field is mean square continuous. Additionally, if $\rho(\mathbf{h})= 0$, then $\rho_N(\mathbf{h})=0$, and if $\lambda \rightarrow \infty$ then $\rho_N(\mathbf{h})\to \rho^2(\mathbf{h})$, {\it i.e.}, it converges to the correlation function of an exponential random field. Following the graphical example given in Figure \ref{fig:poi}, we now compare the correlation function of the proposed Poisson random field with the correlation function of the Poisson LG random field, which is defined hierarchically by first considering an LG random field $Z=\{Z(\mathbf{s}), \mathbf{s} \in A \}$ defined as $Z(\mathbf{s})=e^{\mu+\sqrt{\sigma^2}G(\mathbf{s})}$, where $G$ is a standard Gaussian random field with correlation $\rho(\mathbf{h})$, and then assuming $Y(\mathbf{s})\mid Z(\mathbf{s})\sim \mbox{Poisson}(Z(\mathbf{s}))$ with $Y(\mathbf{s}_i) \perp \!\!\! \perp Y(\mathbf{s}_j) \mid Z$ for $i\neq j$.
In this case, the first two moments of $Y(\mathbf{s})$ are given by $\mathds{E}(Y(\mathbf{s}))=e^{\mu+0.5\sigma^2}$ and $\mbox{Var}(Y(\mathbf{s}))=\mathds{E}(Y(\mathbf{s}))(1+ \mathds{E}(Y(\mathbf{s}))(e^{\sigma^2}-1))$. Consequently, and following \cite{Aitchison}, the correlation function is given by $\rho_Y(\mathbf{h},\mu,\sigma^2)=1$ if $\mathbf{h}=\mathbf{0}$ and \begin{equation*} \rho_Y(\mathbf{h},\mu,\sigma^2)=\frac{e^{\sigma^2 \rho(\mathbf{h})}-1}{e^{\sigma^2}-1+\mathds{E}(Y(\mathbf{s}))^{-1} }, \end{equation*} otherwise. This correlation is discontinuous at the origin and the nugget effect is given by: $$\dfrac{\mathds{E}(Y(\mathbf{s}))^{-1}}{\mathds{E}(Y(\mathbf{s}))^{-1} +e^{\sigma^2 }-1}>0.$$ It is apparent that the marginal mean $\mathds{E}(Y(\mathbf{s}))$ has a strong impact on the nugget effect. Figure \ref{covaa5} (a) depicts the correlation functions $\rho_Y(\mathbf{h},0.5,0.05)$, $\rho_Y(\mathbf{h},2.5,0.1)$, and $\rho_Y(\mathbf{h},4.5,0.2)$, which correspond to Poisson LG random fields with mean $\mathds{E}(Y(\mathbf{s}))=1.69$, $12.81$, and $99.48$, respectively. As the underlying correlation model, we assume $\rho(\mathbf{h})=(1-||\mathbf{h}||/0.5)^{4}_+$. It can be appreciated that for large mean values, the nugget effect is negligible. However, for small mean values, the nugget effect can be very large, and it is the cause of the ``whitening'' effect observed in Figure \ref{fig:poi} (a). Figure \ref{covaa5} (b) depicts the correlation function $\rho_N(\mathbf{h},\lambda)$ of the proposed Poisson random field using the same means and underlying correlation function as the Poisson LG random field. It can be appreciated that the correlation covers the entire range between 0 and 1, irrespective of the mean values. Finally, Figure \ref{covaa5} (c) depicts the correlation function of the Poisson GC random field \citep{Han:Oliveira:2016} $C=\{C(\mathbf{s}), \mathbf{s} \in A \}$ defined as $C (\mathbf{s}) = F_{\mathbf{s}}^{-1}(\Phi(G(\mathbf{s})),\lambda)$, where $\Phi(\cdot)$ is the {\it cdf} of the standard Gaussian distribution, $F_{\mathbf{s}}^{-1}(\cdot,\lambda )$ is the quantile function of the Poisson distribution and $G$ is a standard Gaussian random field with correlation $\rho(\mathbf{h})$. The correlation function in this case is given by \begin{equation*} \rho_C(\mathbf{h},\lambda)=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \lambda^{-1}F_{\mathbf{s}_i}^{-1}(\Phi(z_i),\lambda)F_{\mathbf{s}_j}^{-1}(\Phi(z_j),\lambda) \phi_2(z_i,z_j,\rho(\mathbf{h}))dz_i dz_j-\lambda, \end{equation*} where $\phi_2$ is the {\it pdf} of the bivariate standard Gaussian distribution. It is apparent that the Poisson GC correlation $\rho_C(\mathbf{h},\lambda)$ is much stronger than $\rho_N(\mathbf{h},\lambda)$, and it does not seem to be affected by the different mean values. It is important to stress that versions of the Poisson and Poisson GC random fields that are not mean square continuous can be obtained by introducing a nugget effect, {\it i.e.}, a discontinuity at the origin of $\rho_{N}(\mathbf{h})$. This can be achieved by replacing the underlying correlation function $\rho(\mathbf{h})$ with $\rho^*(\mathbf{h})=\rho(\mathbf{h})(1-\thetau^2)+\thetau^2\mathds{1}_{0}(||\mathbf{h}||)$, where $0\leq\thetau^2<1$ represents the underlying nugget effect.
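As a numerical illustration of the two correlation structures discussed above, the following \texttt{R} sketch evaluates the stationary correlation \eqref{cpoi} of Corollary \ref{statcoor} through exponentially scaled modified Bessel functions, together with the nugget effect of the Poisson LG field; the Wendland-type underlying correlation and the parameter values are those used in Figures \ref{fig:poi} and \ref{covaa5}, and the numerical comments are indicative only.
\begin{verbatim}
## Stationary correlation of the proposed Poisson field (corollary above).
rho   <- function(h) pmax(1 - h / 0.5, 0)^4        # underlying correlation rho(h)
rho_N <- function(h, lambda) {
  r2 <- rho(h)^2
  z  <- 2 * lambda / (1 - r2)
  ## exp(-z)*(I_0(z)+I_1(z)) via scaled Bessel functions, for numerical stability
  r2 * (1 - (besselI(z, 0, expon.scaled = TRUE) +
             besselI(z, 1, expon.scaled = TRUE)))
}
rho_N(0.1, 1.69)     # small mean: noticeably below rho(0.1)^2
rho_N(0.1, 99.48)    # large mean: close to rho(0.1)^2, as expected

## Nugget effect of the Poisson LG field: E^{-1} / (E^{-1} + exp(sigma2) - 1)
nugget_LG <- function(mu, sigma2) {
  E <- exp(mu + 0.5 * sigma2)
  (1 / E) / (1 / E + exp(sigma2) - 1)
}
nugget_LG(0.5, 0.05)   # roughly 0.92: a large forced nugget for small means
nugget_LG(4.5, 0.20)   # small nugget for large means
\end{verbatim}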
\begin{figure} \scalebox{0.8}{ \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{ccc11.pdf}&\includegraphics[width=0.3\textwidth]{ccc.pdf}&\includegraphics[width=0.3\textwidth]{ccc11cop.pdf}\\ (a)&(b)&(c)\\ \end{tabular}} \caption{From left to right: (a) correlation functions $\rho_Y(\mathbf{h},\mu,\sigma^2)$ of the Poisson LG random field with $(\mu,\sigma^2)=(0.5,0.05)$, $(2.5,0.1)$, and $(4.5,0.2)$; (b) correlation function $\rho_N(\mathbf{h},\lambda)$ of our proposed Poisson random field for $\lambda=1.69$, $12.81$, and $99.48$; (c) correlation function $\rho_C(\mathbf{h},\lambda)$ of the Poisson GC random field for $\lambda=1.69$, $12.81$, and $99.48$. The black line in each panel depicts the underlying correlation model given by $\rho(\mathbf{h})=(1-||\mathbf{h}||/0.5)^4_+$. }\label{covaa5} \end{figure} \subsection{Bivariate distribution}\label{subsec:3.2} In this section, we provide the bivariate distribution of the Poisson random field. This distribution can be written in terms of an infinite series depending on the regularized lower incomplete Gamma function defined in \eqref{incgamma} and the regularized confluent hypergeometric function \citep{Gradshteyn:Ryzhik:2007}, defined as: \begin{equation*} {}_1\widetilde{\mathrm{F}}_{1}(a;b;x)=\frac{{}_1\mathrm{F}_{1}(a;b;x)}{\Gamma(b)}=\sum\limits_{k=0}^{\infty}\dfrac{(a)_k x^k}{\Gamma(b+k)k!}, \end{equation*} where ${}_1\mathrm{F}_{1}$ is the standard confluent hypergeometric function. For the sake of simplicity, we analyze the following cases separately: (a) $n=m=0$, (b) $n=0, m\geq1$ and $m=0, n\geq1$, (c) $n=m=1,2,\ldots$, and (d) $n,m\geq 1$, $n\neq m$. Moreover, we set $p_{nm}=\Pr(N(\mathbf{s}_i)=n,N(\mathbf{s}_j)=m)$, $\lambda_i=\lambda(\mathbf{s}_i)$, $\lambda_j=\lambda(\mathbf{s}_j)$ and $\rho=\rho(\mathbf{h})$ for notational convenience. We additionally define the function $\mathcal{S}$ as follows: \begin{equation*} \mathcal{S}\left( \begin{smallmatrix} a &; b\\ & c \end{smallmatrix}, x,x' \right) = {}_1\widetilde{\mathrm{F}}_{1}(a;b;x)\gamma^{\ast}\left(c,x' \right). \end{equation*} \begin{theo}\label{theopdf} Let $N$ be a Poisson random field with underlying correlation $\rho$ and mean $\mathds{E}(N(\mathbf{s}_k))=\lambda_k$. Then the bivariate distribution $p_{nm}$ is given by: \noindent (a) Case $n =m = 0$: \begin{align*}p_{00}=-1+e^{-\lambda_i}+e^{-\lambda_j} +(1-\rho^2)\sum\limits_{k=0}^{\infty}\rho^{2k}\gamma^{\star}\left(k+1,\dfrac{\lambda_i}{1-\rho^2},\dfrac{\lambda_j}{1-\rho^2}\right). \end{align*} \noindent (b) Cases $n \geq 1 , m=0$ and $m \geq 1 , n=0$: $$p_{n0}=g(n,\lambda_i,\lambda_j,\rho), \quad p_{0m}=g(m,\lambda_j,\lambda_i,\rho),$$ respectively, where \begin{align*} g(b,x,y,\rho)=\dfrac{x^b}{b!}e^{-x} -x^b e^{-\frac{x}{1-\rho^2}} \sum\limits_{\ell=0}^{\infty} \left(\dfrac{\rho^2x}{1-\rho^2}\right)^{\ell} \mathcal{S}\left( \begin{smallmatrix} b &; b+\ell+1\\ & \ell+1 \end{smallmatrix}, \dfrac{\rho^2x}{1-\rho^2},\dfrac{y}{1-\rho^2}\right).
\end{align*} \noindent (c) Case $n =m \geq 1$: \begin{align*} p_{nn}=&-(1-\rho^2)^n\sum\limits_{k=0}^{\infty}\dfrac{\rho^{2k}(n)_k}{k!}\gamma^{\star}\left(n+k,\dfrac{\lambda_i}{1-\rho^2},\dfrac{\lambda_j}{1-\rho^2}\right) \\ & +\left(\dfrac{1-\rho^2}{\rho^{2}}\right)^n\sum\limits_{k=0}^{\infty}\sum\limits_{\ell=0}^{1}\dfrac{(n)_k}{k!}e^{-\lambda_i(1-\ell)-\lambda_j\ell}\gamma^{\star}\left(n+k,\dfrac{\rho^{2(1-\ell)}\lambda_i}{1-\rho^2},\dfrac{\rho^{2\ell}\lambda_j}{1-\rho^2}\right)\\ & +(1-\rho^2)^{n+1}\sum\limits_{k=0}^{\infty}\sum\limits_{\ell=0}^{\infty}\dfrac{\rho^{2k+2\ell}(n)_{\ell}}{\ell!}\gamma^{\star}\left(n+\ell+k+1,\dfrac{\lambda_i}{1-\rho^2},\dfrac{\lambda_j}{1-\rho^2}\right). \end{align*} \noindent (d) Cases $n \geq 2 , m\geq 1$ with $n>m$, and $m \geq 2 , n\geq 1$ with $m>n$: $$p_{nm}=h(n,m,\lambda_i,\lambda_j,\rho), \quad p_{nm}=h(m,n,\lambda_j,\lambda_i,\rho),$$ respectively, where \begin{align*} h(a,b,x,y,\rho)=& x^{a}e^{-\frac{x}{1-\rho^2}} \Bigg[\sum\limits_{\ell=0}^{\infty}\dfrac{(b)_\ell}{\ell!}\left(\dfrac{\rho^2x}{1-\rho^2}\right)^{\ell} \mathcal{S}\left( \begin{smallmatrix} a-b+1 &; a+\ell+1\\ & b+\ell \end{smallmatrix}, \dfrac{\rho^2x}{1-\rho^2},\dfrac{y}{1-\rho^2}\right) \\ & -\sum\limits_{k=0}^{\infty}\sum\limits_{\ell=0}^{\infty}\dfrac{(b)_\ell}{\ell!}\left(\dfrac{\rho^2x}{1-\rho^2}\right)^{k+\ell} \mathcal{S}\left( \begin{smallmatrix} a-b &; a+k+\ell+1\\ & b+k+\ell+1 \end{smallmatrix}, \dfrac{\rho^2x}{1-\rho^2},\dfrac{y}{1-\rho^2}\right)\Bigg]. \end{align*} \end{theo} \begin{proof} See the online supplement. \end{proof} The evaluation of the bivariate distribution can be troublesome at first sight. However, it can be performed by truncating the series and considering that efficient numerical computation of the regularized lower incomplete Gamma and confluent hypergeometric functions can be found in different libraries, such as the GNU scientific library \citep{gough2009gnu} and in most statistical software, including \texttt{R}, \textsc{Matlab}, and Python. In particular, the \texttt{R} package \texttt{GeoModels} \citep{Bevilacqua:2018aa} uses the Python implementations in the SciPy library \citep{2020SciPy-NMeth}. The bivariate distribution can be written as the product of two independent Poisson distributions when $\rho_N(\mathbf{h})=0$. This result, provided by \cite{Hunter:1974} in Theorem 3.6, establishes that the independence of two renewal counting processes is equivalent to a zero correlation between them. As outlined in Section \ref{sec:3.1}, $\rho(\mathbf{h})=0$ implies $\rho_N(\mathbf{h})=0$. Consequently, pairwise independence at the level of the underlying Gaussian random field implies pairwise independence for the Poisson random field. We now compare the type of bivariate dependence induced by the proposed model and the GC one when $\lambda=5$. Figure \ref{ddddd} presents (from left to right) the bivariate GC distribution, the bivariate Poisson distribution in Theorem \ref{theopdf} and a coloured image representing the differences between them. Note that a positive value of the difference implies that the probabilities associated with the bivariate distribution in Theorem \ref{theopdf} are greater than those of the bivariate GC distribution.
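As an illustration of the truncated-series evaluation discussed above, the following \texttt{R} sketch computes the probability $p_{00}$ of Theorem \ref{theopdf} using \texttt{pgamma}, which returns the regularized lower incomplete Gamma function $\gamma^{*}(a,x)$; the truncation order is an arbitrary illustrative choice, and setting $\rho=0$ recovers the product of the two marginal Poisson zero probabilities.
\begin{verbatim}
## p_00 of case (a) above, truncating the series at k = 200.
gamma_star <- function(a, x) pgamma(x, shape = a)  # regularized lower incomplete Gamma
p00 <- function(lambda_i, lambda_j, rho, kmax = 200) {
  k <- 0:kmax
  s <- sum(rho^(2 * k) *
           gamma_star(k + 1, lambda_i / (1 - rho^2)) *
           gamma_star(k + 1, lambda_j / (1 - rho^2)))
  -1 + exp(-lambda_i) + exp(-lambda_j) + (1 - rho^2) * s
}
p00(5, 5, 0.5)                    # joint probability of two zero counts
p00(5, 5, 0) - dpois(0, 5)^2      # ~0: reduces to independence when rho = 0
\end{verbatim}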
In Figure \ref{ddddd}, only the probabilities $\Pr(N(\mathbf{s}_i)=n,N(\mathbf{s}_j)=m)$ for $n,m=0,1,\ldots, 12$ are considered. The first, second and third rows consider increasing levels of underlying correlation, $\rho(\mathbf{h})=0.1, 0.5, 0.9$. It can be appreciated that the larger the correlation, the larger the difference between the proposed and Gaussian copula bivariate distributions. In addition, there is a pattern in which the probabilities of the GC bivariate distribution tend to be larger along the diagonal, {\it i.e.}, the blue scale color is predominant along the diagonal. This is not surprising since the Poisson GC bivariate distribution inherits the type of dependence of the bivariate Gaussian distribution. \begin{figure} [ht!] \scalebox{0.9}{ \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{pois.pdf}&\includegraphics[width=0.3\textwidth]{poiscop.pdf}&\includegraphics[width=0.33\textwidth]{diff01.pdf}\\ \includegraphics[width=0.3\textwidth]{pois1.pdf}&\includegraphics[width=0.3\textwidth]{poiscop1.pdf}&\includegraphics[width=0.33\textwidth]{diff05.pdf}\\ \includegraphics[width=0.3\textwidth]{pois2.pdf}&\includegraphics[width=0.3\textwidth]{poiscop2.pdf}&\includegraphics[width=0.33\textwidth]{diff09.pdf}\\ (a)&(b)&(c)\\ \end{tabular}} \caption{For each row (from left to right): bivariate Poisson GC distribution, our proposed bivariate Poisson distribution and the difference between them. The first, second and third rows are obtained by setting $\rho(\mathbf{h})=0.1, 0.5, 0.9$ for the underlying correlation.}\label{ddddd} \end{figure} \subsection{A zero-inflated extension}\label{subsec:3.3} In this section, we provide an extension of the proposed model for spatial data that exhibit an excessive number of zeros. Specifically, let $B = \{B(\mathbf{s}),\mathbf{s}\in A\}$ be a Bernoulli random field such that $B(\mathbf{s})=\mathds{1}_{(-\infty,0)}(G(\mathbf{s}))$, where $G$ is a Gaussian random field with $\mathbb{E}(G(\mathbf{s})) = \theta(\mathbf{s})$, unit variance and correlation function $\rho_1(\mathbf{h})$. The marginal probability of observing a zero is then given by: \[p(\mathbf{s}):=\Pr(B(\mathbf{s})=0)=\Phi(\theta(\mathbf{s})), \] where $\Phi$ is the univariate standard Gaussian {\it cdf}. Let $N$ be a Poisson random field with $\mathbb{E}(N(\mathbf{s})) = \lambda(\mathbf{s})$ and underlying correlation $\rho_2(\mathbf{h})$. Assuming $B$ and $N$ are independent, the proposed zero-inflated Poisson model is then given by a random field $Y= \{Y(\mathbf{s}), \mathbf{s}\in A\}$ defined as: \begin{equation}\label{ppppp} Y(\mathbf{s}):=B(\mathbf{s})N(\mathbf{s}), \end{equation} with marginal distribution given by: \begin{equation}\label{mzip} \Pr(Y(\mathbf{s})=y(\mathbf{s}))= \begin{cases}p(\mathbf{s})+(1-p(\mathbf{s}))e^{-\lambda(\mathbf{s})} &\mbox{if} \quad y(\mathbf{s})=0\\ (1-p(\mathbf{s}))\dfrac{\lambda(\mathbf{s})^{y(\mathbf{s})}e^{-\lambda(\mathbf{s})}}{y(\mathbf{s})!} & \mbox{if} \quad y(\mathbf{s})=1,2,\ldots \end{cases}, \end{equation} and with $\mathds{E}(Y(\mathbf{s}))=(1-p(\mathbf{s}))\lambda(\mathbf{s})$ and $\Var(Y(\mathbf{s}))=\mathds{E}(Y(\mathbf{s}))[1+\frac{p(\mathbf{s})}{1-p(\mathbf{s})}\mathds{E}(Y(\mathbf{s}))]$. Note that the zero-inflated Poisson random field is overdispersed and, when $p(\mathbf{s})\to 0$, the Poisson random field is obtained as a special case.
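Since $\Var(Y(\mathbf{s}))$ can equivalently be written as $(1-p(\mathbf{s}))\lambda(\mathbf{s})(1+p(\mathbf{s})\lambda(\mathbf{s}))$, a quick marginal check of \eqref{mzip} and of the moment expressions above can be carried out by simulation; the following \texttt{R} sketch considers the marginal distribution only (ignoring the spatial dependence), with illustrative values of $p$ and $\lambda$.
\begin{verbatim}
## Marginal check of the zero-inflated Poisson model Y = B * N.
set.seed(2)
p <- 0.4; lambda <- 3; n <- 1e6
y <- rbinom(n, 1, 1 - p) * rpois(n, lambda)        # B and N independent
c(mean(y), (1 - p) * lambda)                        # empirical vs E(Y)
c(var(y),  (1 - p) * lambda * (1 + p * lambda))     # empirical vs Var(Y)
c(mean(y == 0), p + (1 - p) * exp(-lambda))         # zero probability
\end{verbatim}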
For the sake of completeness, we provide the bivariate distribution and the correlation function of the zero-inflated Poisson random field in Proposition 1 (see the online supplement). \section{Estimation and prediction}\label{sec:4} In this section, we start by describing the weighted pairwise likelihood ({\it wpl}) estimation method; then, we focus on the optimal linear prediction. \subsection{Weighted pairwise likelihood estimation}\label{opop} Composite likelihood is a general class of objective functions that combine low-dimensional terms based on the likelihood of marginal or conditional events to construct a pseudo likelihood \citep{Lindsay:1988,Varin:Reid:Firth:2011}. A particular case of the composite likelihood class is the pairwise likelihood \citep[see for example][for applications of the pairwise likelihood in the spatial setting]{Heagerty:Lele:1998,Bevilacqua:Gaetan:2015,ALEGRIA2017,Bev:2020} that combines the bivariate distributions of all possible distinct pairs of observations. Let $\bm{N}=(n_1,n_2,\ldots,n_l)^{\top}$ be a realization of the Poisson random field $N$ observed at distinct spatial locations $\mathbf{s}_1,\mathbf{s}_2,\ldots,\mathbf{s}_l$, $\mathbf{s}_i\in A$, and let $\bm{\theta}=(\bm{\beta}^{\top},\bm{\alpha}^{\top})^{\top}$ be the vector of unknown parameters, where $\bm{\alpha}$ is the parameter vector associated with the underlying correlation model and $\bm{\beta}$ is the vector of regression parameters. The pairwise likelihood function is defined as follows: \begin{equation*}\label{ppl} \mathrm{pl}(\bm{\theta}):= \sum\limits_{i=1}^{l-1}\sum\limits_{j=i+1}^{l}\log(\Pr(N(\mathbf{s}_i)=n_i,N(\mathbf{s}_j)=n_j))\zeta_{ij}, \end{equation*} where $\Pr(N(\mathbf{s}_i)=n_i,N(\mathbf{s}_j)=n_j)$ is the bivariate probability given in Theorem \ref{theopdf} and $\zeta_{ij}$ is a suitable non-negative weight. The choice of cut-off weights, namely, \begin{equation}\label{wer} \zeta_{ij}= \begin{cases} 1 &\parallel\mathbf{s}_i-\mathbf{s}_j \parallel\leq \xi \\ 0 & \text{otherwise} \end{cases}, \end{equation} for a positive value of $\xi$, can be motivated by its simplicity and by observing that the dependence between distant observations is weak \citep{Joe:Lee:2009}. Some guidelines on the choice of $\xi$ can be found in \cite{Bevilacqua:Gaetan:Mateu:Porcu:2012,Bevilacqua:Gaetan:2015}. The maximum weighted pairwise likelihood ({\it wpl}) estimator is given by: \begin{equation*} \widehat{\bm{\theta}}:=\operatorname{argmax}_{\bm{\theta}}\, \operatorname{pl}(\bm{\theta}). \end{equation*} Under some mixing conditions of the Poisson random field, following \cite{Bevilacqua:Gaetan:2015}, it can be shown that, under increasing-domain asymptotics, $\widehat{\bm{\theta}}$ is consistent and asymptotically Gaussian distributed, with the covariance matrix given by $\mathcal{G}^{-1}_n(\bm{\theta})$, {\it i.e.}, the inverse of the Godambe information $\mathcal{G}_n(\bm{\theta}):=\mathcal{H}_n(\bm{\theta})\mathcal{J}_n(\bm{\theta})^{-1}\mathcal{H}_n(\bm{\theta}), $ where $\mathcal{H}_n(\bm{\theta}):=\mathds{E}[-\nabla^2 \operatorname{pl}(\bm{\theta})]$ and $\mathcal{J}_n(\bm{\theta}):={\mbox{Var}}[\nabla \operatorname{pl}(\bm{\theta})]$. Standard error estimates can be obtained from the square roots of the diagonal elements of $\mathcal{G}^{-1}_n(\widehat{\bm{\theta}})$. It is important to stress that the computation of the standard errors requires the evaluation of the matrices $\mathcal{H}_n(\hat{\bm{\theta}})$ and $\mathcal{J}_n(\hat{\bm{\theta}})$.
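A schematic \texttt{R} implementation of the objective $\operatorname{pl}(\bm{\theta})$ with the cut-off weights \eqref{wer} is sketched below. It assumes a user-supplied function \texttt{pbiv\_pois} evaluating the bivariate probability of Theorem \ref{theopdf} (a hypothetical placeholder here) and a Wendland-type underlying correlation; it is an illustration of the estimation criterion, not the \texttt{GeoModels} implementation.
\begin{verbatim}
## Sketch of the weighted pairwise log-likelihood with cut-off weights.
## pbiv_pois(n1, n2, l1, l2, rho): hypothetical evaluator of the bivariate probability.
wpl <- function(theta, N, coords, X, xi, pbiv_pois) {
  beta  <- theta[-length(theta)]            # regression parameters
  alpha <- theta[length(theta)]             # range of the underlying correlation
  lam   <- exp(drop(X %*% beta))            # lambda(s) = exp(X(s)' beta)
  h     <- as.matrix(dist(coords))
  out   <- 0
  for (i in 1:(nrow(coords) - 1))
    for (j in (i + 1):nrow(coords))
      if (h[i, j] <= xi) {                  # cut-off weight zeta_ij
        rho_ij <- pmax(1 - h[i, j] / alpha, 0)^4
        out <- out + log(pbiv_pois(N[i], N[j], lam[i], lam[j], rho_ij))
      }
  out
}
## Maximization (sketch): optim(start, wpl, N = N, coords = coords, X = X,
##                              xi = 0.1, pbiv_pois = pbiv_pois,
##                              control = list(fnscale = -1))
\end{verbatim}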
The evaluation of $\mathcal{J}_n(\hat{\bm{\theta}})$, however, is computationally unfeasible for large datasets, and in this case, sub-sampling techniques can be used, as in \cite{Heagerty:Lele:1998} and \cite{Bevilacqua:Gaetan:Mateu:Porcu:2012}. A straightforward and more robust alternative is the parametric bootstrap estimation of $\mathcal{G}^{-1}_n(\bm{\theta})$ \citep{Bai:Kang:Song:2014}. Another critical issue related to large datasets is that the {\it wpl} estimator can be computationally demanding due to the complexity of the bivariate Poisson distribution given in Theorem \ref{theopdf}. An estimator that requires a smaller computational burden can be obtained under Gaussian misspecification \citep{AOS1121,cppp,Bev:2020}. This is a useful inferential tool when the likelihood function cannot be calculated for some reason, but the first two moments and the correlation are known. In our case, we assume a non-stationary Gaussian random field with mean and variance equal to $\lambda(\mathbf{s})$ and correlation $\rho_{N}(\mathbf{s}_i,\mathbf{s}_j)$ given in Theorem \ref{Theo1}. Then, the misspecified maximum {\it wpl} estimation requires the computation of the bivariate Gaussian distribution, and the misspecified standard maximum likelihood estimation requires the computation of the multivariate Gaussian distribution. In both cases, evaluation of the incomplete Gamma function or of the modified Bessel functions (in the stationary case) is required to compute the covariance matrix. Table 1 in the online supplement shows the computational cost of each estimation procedure across different scenarios for the stationary case, demonstrating the computational gains of the misspecified Gaussian estimation with respect to the Poisson {\it wpl} estimation. \subsection{Optimal linear prediction}\label{olpred} The optimal predictor of the Poisson random field with respect to the mean squared error criterion requires knowledge of the finite-dimensional distributions, which are not available for the Poisson random field. As in the estimation step, the Gaussian misspecification allows us to build the best linear unbiased predictor (BLUP) based on the correlation of the Poisson random field given in Theorem \ref{Theo1}. Specifically, if the goal is the prediction of $N$ at $\mathbf{s}_0$ given the vector of spatial observations $\mathbf{N}$ observed at $\mathbf{s}_1,\mathbf{s}_2,\ldots,\mathbf{s}_l$, then the optimal linear Gaussian prediction is given by: \begin{equation}\label{pyt} \widehat{N(\mathbf{s}_0)}=\lambda(\mathbf{s}_0)+ \bm{c}^\top \Sigma^{-1}(\bm{N}-\bm{\lambda}), \end{equation} where $\bm{\lambda}=(\lambda(\mathbf{s}_1),\ldots,\lambda(\mathbf{s}_l))^\top$, $\bm{c}=[\sqrt{\lambda(\mathbf{s}_0)\lambda(\mathbf{s}_i)}\rho_{N}(\mathbf{s}_0,\mathbf{s}_i)]_{i=1}^l$ and $\Sigma=\sqrt{\bm{\lambda} \bm{\lambda}^\top} \odot [\rho_{N}(\mathbf{s}_i,\mathbf{s}_j)]_{i,j=1}^l$ is the variance-covariance matrix ($\odot$ denotes the Schur product). In practice, the mean and covariance matrix are not known and must be estimated. The associated mean squared error is: \begin{equation*} \mbox{MSE}(\widehat{N(\mathbf{s}_0)})=\lambda(\mathbf{s}_0) - \bm{c}^\top\Sigma^{-1}\bm{c}. \end{equation*} Note that this kind of prediction guarantees neither the positivity nor the discreteness of the predicted values.
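The predictor \eqref{pyt} can be sketched in a few lines of \texttt{R} for the stationary case, using the correlation of Corollary \ref{statcoor}; in this sketch the mean $\lambda$ and the underlying Wendland-type correlation (with illustrative range $0.2$) are treated as known, whereas in practice they are replaced by their {\it wpl} estimates.
\begin{verbatim}
## Sketch of the (misspecified Gaussian) optimal linear predictor, stationary case.
rho_N <- function(h, lambda) {                    # stationary Poisson correlation
  r2  <- pmax(1 - h / 0.2, 0)^8                   # rho(h)^2 for a Wendland model
  z   <- 2 * lambda / (1 - pmin(r2, 1 - 1e-12))   # guard the origin
  val <- r2 * (1 - (besselI(z, 0, expon.scaled = TRUE) +
                    besselI(z, 1, expon.scaled = TRUE)))
  ifelse(h == 0, 1, val)
}
blup <- function(s0, coords, N, lambda) {
  h0    <- sqrt(colSums((t(as.matrix(coords)) - s0)^2))    # distances to s0
  cvec  <- lambda * rho_N(h0, lambda)                       # vector c
  Sigma <- lambda * rho_N(as.matrix(dist(coords)), lambda)  # covariance matrix
  lambda + drop(crossprod(cvec, solve(Sigma, N - lambda)))  # linear predictor
}
\end{verbatim}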
Despite these caveats, optimal linear prediction can generally be a useful approximation of the optimal predictor, as shown, for example, in \cite{DeOliveira:2006} and more recently in \cite{Bevilacqua:2018ab}. When $l$ is large, the use of compactly supported correlation functions \citep{Bevilacqua:Faouzi_et_all:2019} can mitigate the computational burden associated with the optimal linear predictor since sparse matrix algorithms can be exploited to handle the inverse of the covariance matrix efficiently. \section{Simulation studies}\label{sec:5} In this section, we focus on two simulation studies. The first one analyses the performance of the {\it wpl} method when estimating the Poisson random field under the spatial and spatio-temporal settings. The second one, presented in the online supplement, analyses the performance of the Poisson optimal linear predictor, comparing our approach with the Poisson GC and Poisson LG models. \subsection{Performance of the {\it wpl} estimation}\label{pol} In this study, we consider $1000$ realizations from a stationary spatial Poisson random field observed at $\mathbf{s}_i\in [0,1]^2$, $i=1,\ldots, l$, $l = 441$. Specifically, we considered a regular grid with increments of size $0.05$ over the unit square $[0, 1]^2$. The grid points were perturbed, adding a uniform random value over $[-0.015, 0.015]$ to each coordinate. A perturbed grid allows us to obtain more stable estimates since different sets of small distances are available and very close location points are avoided. The simulation of the Poisson random field follows directly from the stochastic representation in \eqref{qqq}, and it depends on the simulation of a sequence of independent copies of an exponential random field obtained by transforming independent copies of a standard Gaussian random field. These random fields were simulated using the Cholesky decomposition. For the Poisson random field, we consider $\lambda(\mathbf{s})=e^{\beta}$ with $\beta=\log(2)$, $\log(5)$, $\log(10)$, $\log(20)$, and an underlying isotropic correlation model $\rho(\mathbf{h})=(1-||\mathbf{h}||/\alpha)^{4}_+$ with $\alpha=0.2$. As outlined in Section \ref{subsec:3.2}, the use of a compactly supported correlation function simplifies the computation of the bivariate Poisson distribution proposed in Theorem \ref{theopdf}. We study the performance of the Poisson {\it wpl}, the misspecified Gaussian {\it wpl} and the misspecified Gaussian maximum likelihood (ML) estimation methods. In the (misspecified) {\it wpl} estimation, we consider a cut-off weight function, as in \eqref{wer}, with $\xi = 0.1$. \begin{table}[htb!]
\centering \scalebox{0.80}{ \begin{tabular}{|c|cc|cc|cc|} \cline{2-7} \multicolumn{1}{c|}{} &\multicolumn{2}{c|}{Poisson {\it wpl}} & \multicolumn{2}{c|}{Gaussian {\it wpl}}&\multicolumn{2}{c|}{Gaussian ML}\\ \cline{2-7} \multicolumn{1}{c|}{} & Bias &MSE& Bias & MSE & Bias & MSE \\ \hline $\beta=\log(2)$ &-0.00251 & 0.00151 & -0.00348 &0.00161 & -0.00366 & 0.00165 \\ $\alpha=0.2$ & -0.00663 & 0.00113 &-0.00828 & 0.00208&-0.00748 & 0.00203 \\ \hline $\beta=\log(5)$ &-0.00113 &0.00065& -0.00147 & 0.00068& -0.00161 & 0.00068 \\ $\alpha=0.2$ & -0.00435& 0.00098& -0.00422 & 0.00149 & -0.00344 & 0.00145 \\ \hline $\beta=\log(10)$ & 0.00052 & 0.00033 &0.00031& 0.00033 &0.00014 & 0.00033 \\ $\alpha=0.2$ & -0.00261 &0.00096 &-0.00336 & 0.00120& -0.00296 & 0.00115 \\ \hline $\beta=\log(20)$ & -0.00026 & 0.00018 &-0.00039 & 0.00019 &-0.00037 & 0.00018 \\ $\alpha=0.2$ & -0.00449& 0.00094& -0.00499 &0.00099& -0.00402 & 0.00095 \\ \hline \end{tabular}} \caption{Bias and MSE associated with Poisson {\it wpl}, misspecified Gaussian {\it wpl} and misspecified Gaussian ML when the true random field is Poisson with $\lambda(\mathbf{s})=e^{\beta}$ and $\rho(\mathbf{h})=(1-||\mathbf{h}||/\alpha)^{4}_+$.}\label{simu11} \end{table} Table \ref{simu11} shows the bias and mean squared error associated with $\beta$ and $\alpha$ across the four scenarios and the three estimation methods. As expected, the misspecified Gaussian ML performs slightly better than the misspecified Gaussian {\it wpl}. More importantly, it can be seen that the Poisson {\it wpl} shows the best performance, particularly when estimating the spatial dependence parameter. This fact is more evident for low counts, {\it i.e.}, when $\beta$ decreases. However, as the mean increases, the performances of the three estimation methods become very similar, in particular when the mean of the Poisson random field is $20$. To summarize, the Poisson {\it wpl} is the best method for estimating the Poisson random field when the mean is small (lower than 20 as a rule of thumb in our experiments). For large counts, the misspecified Gaussian {\it wpl} or ML methods show approximately the same performance as the Poisson {\it wpl} method. We also study the proposed methods' performance when estimating a non-stationary version of the Poisson random field. Under the previous simulation setting, we replaced the constant mean with a regression model, that is, $\lambda(\mathbf{s})=\exp\{\beta+\beta_1u_1(\mathbf{s})+\beta_2 u_2(\mathbf{s})\}$ with $\beta=1.5$, $\beta_1=-0.2$ and $\beta_2=0.3$, where $u_1(\mathbf{s})$ and $u_2(\mathbf{s})$ are independent realizations from a $(0,1)$ uniform random variable. Table \ref{simu22} shows the bias and MSE associated with $\beta$, $\beta_1$, $\beta_2$ and $\alpha$ for the three estimation methods, and Figure 1 of the online supplement plots the associated centred boxplots. Also in this case, the Poisson {\it wpl} method shows the best performance. We replicate the simulation by considering larger values of the regression parameters (the results are not reported here), which lead to larger counts, and the three estimation methods show approximately the same performance as in the stationary case. \begin{table}[htb!]
\centering \scalebox{0.80}{ \begin{tabular}{|l|cc|cc|cc|} \cline{2-7} \multicolumn{1}{c|}{} &\multicolumn{2}{c|}{Poisson {\it wpl}}& \multicolumn{2}{c|}{Gaussian {\it wpl}}&\multicolumn{2}{c|}{Gaussian ML}\\ \cline{2-7} \multicolumn{1}{c|}{} & Bias &MSE& Bias & MSE & Bias & MSE \\ \hline $\beta=1.5$ & -0.00263 & 0.00419 &-0.00359 &0.00445 & -0.00320& 0.00428 \\ $\beta_1=-0.2$ & 0.00189 &0.00618 &0.00148 & 0.00665 &0.00046 & 0.00627 \\ $\beta_2=0.3$ & 0.00185& 0.00608& 0.00202 &0.00661 &0.00182 &0.00621\\ $\alpha=0.2$ & -0.00148 &0.00091& -0.00030& 0.00124& 0.00096& 0.00122 \\ \hline \end{tabular}}\caption{Bias and MSE associated with the Poisson {\it wpl}, misspecified Gaussian {\it wpl} and misspecified Gaussian ML when estimating a non-stationary Poisson random field with $\lambda(\mathbf{s})=\exp\{\beta+\beta_1u_1(\mathbf{s})+\beta_2 u_2(\mathbf{s})\}$ and $\rho(\mathbf{h})=(1-||\mathbf{h}||/\alpha)^{4}_+$.} \label{simu22} \end{table} Finally, we consider a simulation scheme under a spatio-temporal setting. Specifically, we consider $1000$ simulations from a non-stationary space-time Poisson random field observed at $\mathbf{s}_i\in [0,1]^2$, $i=1,\ldots, l$, $l = 40$ spatial location sites uniformly distributed within the unit square, and at $25$ time points $t^*_1=0, t^*_2=0.25, \ldots, t^*_{25} =6$. We consider a regression model for the spatio-temporal mean $\lambda(\mathbf{s},t^*)=\exp\{\beta+\beta_1u_1(\mathbf{s},t^*)+\beta_2 u_2(\mathbf{s},t^*)\}$, where $u_k(\mathbf{s},t^*)$, $k=1,2$, are independent realizations from a $(0,1)$ uniform random variable. We set $\beta=1.5$, $\beta_1=-0.2$ and $\beta_2=0.3$ as in the previous simulation scheme. Additionally, as the underlying space-time correlation, we use a simple isotropic and temporally symmetric separable space-time Wendland model $\rho(\mathbf{h},t^\star)=(1-||\mathbf{h}||/\alpha_\mathbf{s})^{4}_+ (1-|t^\star|/\alpha_{t^*})^{4}_+$ with $\alpha_\mathbf{s}=0.2$ and $\alpha_{t^*}=1$, where $t^\star=t^*_i-t^*_j$ with $i,j\in \{1,2,\ldots,25\}$. Finally, for the (misspecified) {\it wpl} estimation, we consider a cut-off weight function as in \eqref{wer} extended to the space-time case, with $\xi_{\mathbf{s}} = 0.2$ and $\xi_{t^*} =0.5$. The results concerning this simulation study are shown in Table \ref{simu22st}, including the bias and MSE associated with $\beta$, $\beta_1$, $\beta_2$ and $\alpha_\mathbf{s}$, $\alpha_{t^*}$ for the three estimation methods. In addition, Figure 2 of the online supplement shows the associated box plots. As can be observed, the Poisson {\it wpl} approach outperforms the misspecified Gaussian {\it wpl} and ML methods, as expected, for each parameter. We want to highlight that we have replicated this simulation study by considering larger values of the regression parameters, leading to larger counts (for space reasons, these results are not reported here). In that case, all of the estimation methods behaved similarly, as in the purely spatial case. \begin{table}[htb!]
\centering \scalebox{0.8}{ \begin{tabular}{|l|cc|cc|cc|} \cline{2-7} \multicolumn{1}{c|}{} &\multicolumn{2}{c|}{Poisson {\it wpl}}& \multicolumn{2}{c|}{Gaussian {\it wpl}}&\multicolumn{2}{c|}{Gaussian ML}\\ \cline{2-7} \multicolumn{1}{c|}{} & Bias &MSE& Bias & MSE & Bias & MSE \\ \hline $\beta=1.5$ & -0.00058 & 0.00166& -0.00120 & 0.00180 & -0.00110 & 0.00167 \\ $\beta_1=-0.2$ & -0.00079 & 0.00257& -0.00056 & 0.00274 & -0.00102 &0.00249 \\ $\beta_2=0.3$ &0.00036 & 0.00284 &0.00070 & 0.00302 &0.00062 & 0.00267 \\ $\alpha_\mathbf{s}=0.2$ & -0.01057 & 0.00464 &-0.01323 & 0.00630 &-0.01343& 0.00629 \\ $\alpha_{t^*}=1$ & -0.00124 & 0.01846& 0.00165 & 0.02534 &0.00032 & 0.02415\\ \hline \end{tabular}}\caption{Bias and MSE associated with the Poisson {\it wpl}, misspecified Gaussian {\it wpl} and misspecified Gaussian ML when estimating a non-stationary spatio-temporal Poisson random field with $\lambda(\mathbf{s},t^*)=\exp\{\beta+\beta_1u_1(\mathbf{s},t^*)+\beta_2 u_2(\mathbf{s},t^*)\}$ and $\rho(\mathbf{h},t^\star)=(1-||\mathbf{h}||/\alpha_\mathbf{s})^{4}_+(1-|t^\star|/\alpha_{t^*})^{4}_+$.} \label{simu22st} \end{table} \section{Application to the reindeer pellet-group survey in Sweden}\label{sec:6} As mentioned in the Introduction, the pellet-group survey is a technique that provides a general idea of species distribution over a specific geographic area. This technique is mainly used for (a) estimating the population density of several ungulate species, such as deer \citep[see for example][among many others]{EberhardtVanEtten1956,FreedyBowden1983,MootyEtAl1984,RowlandEtAl1984,MarquesEtAl2001}, and (b) studying the impact of covariates on habitat selection \citep[see for example][]{SkarinEtAl2015,Lee2016,SkarinEtAl2017}. Our analysis considers a reindeer pellet-group survey that was conducted on Storliden Mountain in the northern forest area of Sweden over the years 2009--2010. We focus on 2009 and, specifically, we consider pellet-group count data collected between June 3rd and 8th, 2009 \citep{Lee2016}. The main goal of the survey was to assess the impact of newly established wind farms on reindeer habitats. In practice, we observe the total number of pellet-groups (a pellet-group is defined as a cluster of 20 or more pellets) at 357 location sites, $y(\mathbf{s}_i)$, $i=1,2,\ldots, 357$. In this case, the mechanism generating the pellet-groups, for each location, can be assumed to be a Poisson process, where the inter-arrival times between one pellet-group and the next can be assumed to be exponentially distributed. Under this assumption, the proposed Poisson random field can be a useful tool to analyse the pellet-group count data. The dataset possesses two challenging features. The first one is that 73.67\% of the counts are zeros since the animal might move as it defecates, and some plots present zero pellet-group counts (see Figure \ref{fig:map}). The second one is that the empirical semi-variogram (see Figure 3 of the online supplement) exhibits both spatial correlation and a nugget effect. \begin{figure}[ht!] \begin{center} \setlength{\unitlength}{0.1\textwidth} \scalebox{0.65}{ \includegraphics[width=0.8\linewidth, height=0.4\textheight]{mapcounts1.pdf} } \end{center} \caption{Spatial locations of the reindeer pellet-group survey data. }\label{fig:map} \end{figure} To address these features, we consider the zero-inflated Poisson (ZIP) random field proposed in Section \ref{subsec:3.3}.
We compare it with the ZIP Gaussian copula (ZIP GC) random field using the \texttt{R} package \texttt{gcKrig} \citep{JSSv087i13}. In addition, we consider the ZIP Log-Gaussian (ZIP LG) random field as implemented in the \texttt{R} package \texttt{INLA} \citep{R-INLA:1, R-INLA:2, R-INLA:3}, which exploits the integrated nested Laplace approximation, under a Bayesian framework, in the estimation step. Since our application's primary goal is to assess the impact of newly established wind farms on reindeer habitats, we are interested in relating the number of pellet-groups to covariates such as distance to power lines, slopes, or elevation of the field. Therefore, and following the results of \cite{Lee2016}, we include three covariates: Northwest slopes (NS), Elevation (Eln) and Distance to power lines (DPL). In particular, we specify $\lambda(\mathbf{s})$ as: $$\lambda(\mathbf{s})=\exp(\beta_0+\beta_{NS}\mathrm{NS}(\mathbf{s})+\beta_{Eln}\mathrm{Eln}(\mathbf{s})+\beta_{DPL}\mathrm{DPL}(\mathbf{s})). $$ The parameterization for the marginal mean and variance is slightly different for the three models. Specifically, assuming that the probability of excess zeros $p$ does not depend on $\mathbf{s}$, the marginal mean and variance specifications are given by $\mathds{E}(Y(\mathbf{s}))=\lambda(\mathbf{s})(1-p)$ and $\Var(Y(\mathbf{s}))=\mathds{E}(Y(\mathbf{s}))\left[1+\dfrac{p}{1-p}\mathds{E}(Y(\mathbf{s}))\right]$ for the proposed model, $\mathds{E}(Y(\mathbf{s}))=\lambda(\mathbf{s})$ and $\Var(Y(\mathbf{s}))=\mathds{E}(Y(\mathbf{s}))\left[1+\theta_{GC}\mathds{E}(Y(\mathbf{s}))\right]$ for the GC model, and $\mathds{E}(Y(\mathbf{s}))=\lambda(\mathbf{s})\exp(0.5\sigma^2)(1-p)$ and $\Var(Y(\mathbf{s}))=\mathds{E}(Y(\mathbf{s}))\left[1+\dfrac{p}{1-p}\mathds{E}(Y(\mathbf{s}))\right]+\dfrac{\mathds{E}(Y(\mathbf{s}))^2}{1-p}\left[\exp(\sigma^2)-1\right]$ for the LG model. Here, $p$ is specified as $\Phi(\theta)$, with $\theta \in \mathbb{R}$, as $\frac{\theta_{GC}}{1+\theta_{GC}}$, with $\theta_{GC}>0$, and as $\frac{\exp(\theta_{LG})}{1+\exp(\theta_{LG})}$, with $\theta_{LG} \in \mathbb{R}$, respectively, so $\theta$, $\theta_{GC}$, $\theta_{LG}$ can be interpreted as overdispersion parameters. It is important to remark that $\beta_0$ cannot be compared between the different approaches, but $\beta_{NS}$, $\beta_{Eln}$ and $\beta_{DPL}$ can be compared. We assume an underlying exponential correlation model with nugget effect $\rho(\mathbf{h})=(1-\thetau^2)e^{-||\mathbf{h}||/\alpha} +\thetau^2\mathds{1}_{0}(||\mathbf{h}||)$ for the ZIP GC and ZIP LG random fields. On the other hand, for the proposed ZIP model, we specify $\rho_1(\mathbf{h})=(1-\thetau_2^2)e^{-||\mathbf{h}||/\alpha}+\thetau_2^2\mathds{1}_{0}(||\mathbf{h}||)$ and $\rho_2(\mathbf{h})=(1-\thetau_1^2)e^{-||\mathbf{h}||/\alpha}+\thetau_1^2\mathds{1}_{0}(||\mathbf{h}||)$, that is, two different underlying correlation models for $B$ and $N$ in \eqref{ppppp}, sharing a common exponential correlation model but with different nugget effects $\thetau_1^2$ and $\thetau_2^2$. We use maximum {\it wpl} estimation with $\xi=150$ in \eqref{wer} for our ZIP random field. For the ZIP GC model, we perform maximum likelihood estimation as explained in Section 3 in the online supplement, and for the ZIP LG model, we perform approximate Bayesian inference using the \texttt{INLA} approach \citep{R-INLA:1}. Table \ref{est.app} summarizes the parameter estimates, including their standard errors, for the three models.
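For clarity, the three links between the overdispersion parameters and the probability of excess zeros described above can be coded in a few lines of \texttt{R}; plugging in the estimates reported in Table \ref{est.app} gives the zero-excess probabilities discussed below.
\begin{verbatim}
## Probability of excess zeros p as a function of the overdispersion parameters.
p_zip    <- function(theta)    pnorm(theta)                         # proposed ZIP model
p_zip_gc <- function(theta_gc) theta_gc / (1 + theta_gc)            # ZIP GC model
p_zip_lg <- function(theta_lg) exp(theta_lg) / (1 + exp(theta_lg))  # ZIP LG model
## With the estimates of Table (est.app): theta = -0.048, theta_GC = 0.912,
## theta_LG = 0.293, giving p approximately 0.481, 0.477 and 0.573, respectively.
c(p_zip(-0.048), p_zip_gc(0.912), p_zip_lg(0.293))
\end{verbatim}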
In the case of our ZIP random field, standard errors were computed using parametric bootstrap \citep{Efron.Tibshirani::1986}. For the ZIP LG model, the reported estimates are the means of the posterior distributions, with the associated standard errors. Note that if $\beta_{DPL}$ is positive, then the counts of pellet-groups increase at larger distances from the power lines, {\it i.e.}, there is a greater reindeer population far from the wind farms. The regression parameter estimates are quite similar for our ZIP and the ZIP GC models, with smaller standard errors for the GC model. On the other hand, our ZIP model shows the smallest standard error for the spatial scale parameter $\alpha$. Finally, the estimates of $p$, the probability of excess zero counts, which depend on $\theta$, $\theta_{GC}$ and $\theta_{LG}$ for the ZIP, ZIP GC and ZIP LG random fields, are given by $0.481$, $0.477$, $0.573$, respectively. \begin{table}[ht!] \centering \scalebox{0.7}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \cline{2-14} \multicolumn{1}{c|}{} & $\beta_0$ & $\beta_{NS}$ & $\beta_{Eln}$ & $\beta_{DPL}$ &$\theta$ & $\theta_{GC}$& $\theta_{LG}$ &$\alpha$ & $\thetau_1^2$ & $\thetau_2^2$ & $\thetau^2$& $\sigma^2$ & $\overline{\mathrm{RMSE}}$ \\ \hline \multirow{ 2}{*}{ZIP} & -23.423 & -0.534 & 0.005 & 2.594 & -0.048 & & &339.132 &0.868 & 0.624 & & & \multirow{ 2}{*}{0.797}\\ & (6.473) & (0.469) & (0.004) & (0.742) & (1.048) & & &(92.981) & (0.263) & (0.308) & & & \\ \hline \multirow{ 2}{*}{ZIP GC} &-19.096 & -0.465 &0.003 &2.060 & &0.912 & &298.926 & & &0.714 & &\multirow{ 2}{*}{0.800}\\ &(2.309) & (0.364) &(0.003) &(0.265) & &(0.261) & &(186.990) & & &(0.1285) & &\\ \hline \multirow{ 2}{*}{ZIP LG} &-17.905 &-0.826 &0.010 &1.622 & & & 0.293 &685.922 & & & 0.084 & 0.735 &\multirow{ 2}{*}{0.835} \\ &(9.514) &(0.388) &(0.005) &(1.177) & & & (0.098) &(354.614) & & & (0.022) & (0.396) & \\ \hline \end{tabular}} \caption{Parameter estimates for the reindeer pellet-group survey data obtained under the ZIP, ZIP GC and ZIP LG random fields. The associated standard errors are in parentheses. The last column shows the associated empirical mean of the RMSE for each model.} \label{est.app} \end{table} The three models considered can also be used for prediction of pellet-group counts at unsampled location sites. In particular, this approach allows us to discover new areas of reindeer habitat by employing the predicted number of pellet-groups, potentially providing information about the behavior of the reindeer population over the entire region. With this goal in mind, we want to assess the predictive performance of the three models. To do so, we randomly choose 80\% of the spatial locations ({\it i.e.}, $286$ location sites) for the parameter estimation and use the remaining 20\% ({\it i.e.}, $71$ location sites) for the predictions. We repeat this procedure $100$ times, recording the RMSE each time. Specifically, for the $j$-th left-out sample $(y_j(\mathbf{s}_1),y_j(\mathbf{s}_2),\ldots,y_j(\mathbf{s}_{71}))$, we compute $$\mathrm{RMSE_j}=\left(\dfrac{1}{71}\sum\limits_{i=1}^{71}(y_j(\mathbf{s}_i)-\widehat{Y}_j(\mathbf{s}_i))^2\right)^{1/2},$$ where $\widehat{Y}_j(\mathbf{s}_i)$ is the optimal linear predictor for our ZIP random field (computed using the correlation given in Proposition 1 in the online supplement), the optimal predictor for the ZIP GC random field and the mean of the posterior predictive distribution for the ZIP LG random field.
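Purely as a schematic illustration of this repeated hold-out scheme, the structure of the computation can be sketched in Python as follows; the simulated counts and the constant training-mean predictor are placeholders for the survey data and for the model-specific predictors described above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
y = rng.poisson(1.0, size=357)      # placeholder counts, not the survey data
rmse = []
for _ in range(100):
    idx = rng.permutation(y.size)
    train, test = idx[:286], idx[286:]           # 80% / 20% split
    y_hat = np.full(test.size, y[train].mean())  # stand-in predictor
    rmse.append(np.sqrt(np.mean((y[test] - y_hat) ** 2)))
print(np.mean(rmse))                # empirical mean of the RMSE
\end{verbatim}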
We report in Table \ref{est.app} the empirical mean of the RMSE over the left-out samples, {\it i.e.}, $\overline{\mathrm{RMSE}}=\sum\limits_{j=1}^{100}\mathrm{RMSE_j}/100$. The ZIP and ZIP GC random fields clearly outperform the ZIP LG random field in terms of predictive performance. In particular, the proposed ZIP random field provides the smallest $\overline{\mathrm{RMSE}}$. Finally, as suggested by one Referee, we perform a simulation-based model assessment. Specifically, we simulated 10000 realizations under the three fitted models and counted the number of observations lying within the 95\% probability intervals constructed with the simulated data. The results show that our proposed ZIP model and the ZIP GC are very similar, with 97.2\% and 97.7\% of the data lying within the 95\% probability intervals, respectively. In the case of the ZIP LG, 96.9\% of observations lie within the 95\% probability intervals. In addition, we compared the empirical semi-variogram of the data with the ones obtained from the simulations, and we found that our proposal (the ZIP model) presents a narrower 95\% probability interval than its competitors (see Figure \ref{fig:semivars}). This implies that our approach provides less uncertainty when estimating the spatial dependence. \begin{figure}[ht!] \begin{center} \setlength{\unitlength}{0.1\textwidth} \scalebox{0.5}{ \includegraphics[scale=1]{semivarSimComp3.pdf} } \end{center} \caption{95\% probability intervals for semi-variograms under the ZIP (purple), ZIP GC (orange) and ZIP LG (green) models. The empirical semi-variogram for the pellet-group data is depicted as a black dashed line. }\label{fig:semivars} \end{figure} \section{Concluding remarks}\label{sec:7} This paper has introduced a novel Poisson random field, {\it i.e.}, a random field with Poisson marginal distributions, for regression and dependence analysis when addressing point-referenced count data defined on a spatial Euclidean space. However, the proposed methodology can be easily adapted to other types of data, such as space-time \citep{gneiting2013}, areal \citep{Rue:Held:2005} or spherical data \citep{gneiting2013}. By construction, for each spatial location, the proposed model is a Poisson counting process, {\it i.e.}, it represents the random total number of events occurring in an arbitrary interval of time when the inter-arrival times are exponentially distributed. More importantly, given two arbitrary location sites, the associated Poisson counting processes are spatially correlated. For this reason, the proposed model can be viewed as a spatial generalization of the Poisson process. The correlation between the Poisson counting processes is achieved by considering sequences of independent copies of a random field with an exponential marginal distribution as inter-arrival times in the renewal counting process framework. The resulting (non-)stationary random field is marginally Poisson distributed and the dependence is indexed by a correlation function. The key features of the proposed Poisson random field with respect to the Poisson Log-Gaussian random field are that its marginal distribution is exactly Poisson and that it can be mean square continuous or not. The Poisson Gaussian copula approach shares these good features with our model. However, the generating mechanism ({\it i.e.}, the Poisson process) underlying our model makes it more appealing from an interpretability viewpoint.
In our proposal, a possible limitation is that inference based on the full likelihood cannot be performed due to the lack of amenable expressions for the associated multivariate distributions. Nevertheless, the simulation studies we conducted showed that our approach, based on pairwise likelihood estimation, is an effective solution for estimating the unknown parameters of the Poisson random field. Another potential limitation is that the optimal predictor that minimizes the mean square prediction error is not available. However, our numerical experiments show that our solution based on the optimal linear predictor performs very well when compared with the optimal predictors of the Poisson Gaussian copula and Poisson Log-Gaussian models. Finally, the application of our model to the reindeer pellet-group survey data in Sweden shows that our approach can be easily adapted to handle spatial count data with an excessive number of zeros. A well-known restriction of the Poisson distribution is equidispersion. Unfortunately, this assumption is not always supported by real spatial data. The class of random fields proposed in \eqref{qqq} can be used to obtain random fields with flexible marginal models that accommodate over- or underdispersion. In this case, a possible solution is to consider random fields with a more flexible marginal distribution than the exponential one, such as gamma or Weibull random fields \citep{Bevilacqua:2018ab}. The resulting marginal counting models have been studied in \cite{wink95} and \cite{mac2008}. Another alternative for obtaining overdispersed random fields is to consider scale mixtures of Poisson random fields. These topics are currently under study and will be included in a forthcoming paper. \end{document}
math
\begin{document} \title{An equation satisfied by all non-trivial zeros $\rho$ of the Riemann zeta function $\zeta$} \begin{center} \textit{Abstract:} \\ We show that if $\rho$ is a non-trivial zero of the Riemann zeta function $\zeta$ then $$2^\rho + \frac{1}{\rho - 1} + \frac{1}{2} = \rho \int_{1}^{\infty} \left\{ t + \frac{1}{2} \right\} t^{-\rho-1} dt$$ \\where $\{ x \}$ denotes the fractional part of $x$. \end{center} \section{Introduction} We study the Riemann zeta and Dirichlet eta functions using the theory of fractional parts. By the theory of fractional parts, I mean the study of integrals of the form $$\frac{1}{r} \int_{p}^{q} \left \{ f(x) \right \} dx$$ where $\{ x \}$ denotes the fractional part of $x$. For complex numbers $z = x + iy$ we define the fractional part as $\{ z \} = \{ x \} + i \{ y \}$. It is not hard to see that $0 \le \{x\} < 1$ for all real numbers $x$ and $0 \le | \{ z \}| < \sqrt{2}$ for all complex numbers $z$. \section{Formulas for $\zeta(s)$ and $\eta(s)$} We already have \begin{equation} \zeta(s) = \frac{s}{s-1} - s\int_{1}^{\infty} \left \{t\right\} t^{-s-1} dt, \end{equation} which is known to be valid for all $\Re(s) > 0$ [1]. \\\\ We derive a similar integral that gives the Dirichlet eta function $\eta(s)$ for all $\Re(s) > 0$. \subsection{An expression for $\eta(s)$ using fractional parts} \textbf{Theorem:} \textit{For the Dirichlet eta function, defined for all $\Re(s) > 0$ by $\eta(s) = \sum_{k = 1}^{\infty} \left( \frac{1}{(2k - 1)^s} - \frac{1}{(2k)^s} \right)$, we have, for all $\Re(s) > 0$, the equivalent expression \begin{equation} \eta(s) = s\int_{1}^{\infty} \kappa(t) t^{-s-1} dt, \end{equation} where $ \kappa(t) = \left\{ t/2 \right\} + 1/2 - \left\{ t/2 + 1/2 \right\} $.} \textbf{Proof:} A direct evaluation of the integral proves the claim. For all $t \in [1,\infty)$ we have $\kappa(t) = 0$ or $1$. It is not hard to see that $\kappa(t) = 0$ whenever $t \in [2k,2k+1)$ and $\kappa(t) = 1$ whenever $t \in [2k-1,2k)$ for all positive integers $k$. Hence, we can write the integral as $$s\int_{1}^{\infty} \kappa(t) t^{-s-1} dt = \sum_{k=1}^{\infty} \int_{2k-1}^{2k} s t^{-s-1} dt,$$ giving us the sum $$\sum_{k = 1}^{\infty} \left( \frac{1}{(2k - 1)^s} - \frac{1}{(2k)^s} \right),$$ which is nothing but $\eta(s)$. Since we already know that this sum converges for $\Re(s) > 0$, we get our result. \section{The equation for non-trivial zeros} \textbf{Theorem:} \textit{The non-trivial zeros $\rho$ of the Riemann zeta function $( 0 < \Re(\rho) < 1 )$ satisfy the equation $$ 2^\rho + \frac{1}{\rho - 1} + \frac{1}{2} = \rho \int_{1}^{\infty} \left\{ t + \frac{1}{2} \right\} t^{-\rho-1} dt $$ } \textbf{Proof:} We know that if $\zeta(\rho) = 0$ with $0 < \Re(\rho) < 1$, then $\eta(\rho) = 0$ as well [2]. Since $\zeta(\rho) = 0$, equation (1) gives \begin{equation} \frac{\rho}{\rho - 1} = \rho \int_{1}^{\infty} \left \{t\right\} t^{- \rho -1} dt. \end{equation} Now, substituting $2t$ in place of $t$ in equation (2) and simplifying, using $\eta(\rho) = 0$, gives us \begin{equation} \rho \int_{1/2}^{1} \kappa(2t) t^{-\rho-1} dt + \rho \int_{1}^{\infty} \frac{1}{2}t^{-\rho-1} dt + \rho \int_{1}^{\infty} \left\{ t \right\} t^{-\rho-1} dt = \rho \int_{1}^{\infty} \left\{ t + \frac{1}{2} \right\} t^{-\rho-1} dt. \end{equation} Whenever $t \in [1/2,1)$ we have $\kappa(2t) = 1$.
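For clarity, since $\kappa(2t)=1$ on $[1/2,1)$, the first two integrals on the left-hand side of (4) evaluate directly (using $\Re(\rho) > 0$): $$\rho \int_{1/2}^{1} t^{-\rho-1} dt = 2^{\rho} - 1, \qquad \rho \int_{1}^{\infty} \frac{1}{2}\, t^{-\rho-1} dt = \frac{1}{2},$$ while equation (3) may be rewritten as $\frac{\rho}{\rho-1} = 1 + \frac{1}{\rho-1}$.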
Substituting (3) into equation (4) and evaluating gives us \begin{equation} 2^\rho + \frac{1}{\rho - 1} + \frac{1}{2} = \rho \int_{1}^{\infty} \left\{ t + \frac{1}{2} \right\} t^{-\rho-1} dt, \end{equation} which is our desired result. \\\\ Equation (5) is the main equation in this paper. All non-trivial zeros $\rho$ of the Riemann zeta function satisfy this equation. \end{document}
math
\begin{document} \title{Complete frequency-bin Bell basis synthesizer} \author{Suparna Seshadri} \affiliation{Elmore Family School of Electrical and Computer Engineering and Purdue Quantum Science and Engineering Institute, Purdue University, West Lafayette, Indiana 47907, USA} \author{Hsuan-Hao Lu} \affiliation{Quantum Information Science Section, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA} \author{Daniel E. Leaird} \affiliation{Elmore Family School of Electrical and Computer Engineering and Purdue Quantum Science and Engineering Institute, Purdue University, West Lafayette, Indiana 47907, USA} \author{Andrew M. Weiner} \affiliation{Elmore Family School of Electrical and Computer Engineering and Purdue Quantum Science and Engineering Institute, Purdue University, West Lafayette, Indiana 47907, USA} \author{Joseph M. Lukens} \email{[email protected]} \affiliation{Quantum Information Science Section, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA} \date{\today} \begin{abstract} We report the experimental generation of all four frequency-bin Bell states in a single versatile setup via successive pumping of spontaneous parametric downconversion with single and dual spectral lines. Our scheme utilizes intensity modulation to control the pump configuration and offers turn-key generation of any desired Bell state using only off-the-shelf telecommunication equipment. We employ Bayesian inference to reconstruct the density matrices of the generated Bell states, finding fidelities $\geq$97\% for all cases. Additionally, we demonstrate the sensitivity of the frequency-bin Bell states to common-mode and differential-mode temporal delays traversed by the photons comprising the state---presenting the potential for either enhanced resolution or nonlocal sensing enabled by our complete Bell basis synthesizer. \end{abstract} \begin{textblock}{13.3}(1.4,15) \noindent\fontsize{7}{7}\selectfont \textcolor{black!30}{This manuscript has been co-authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).} \end{textblock} \maketitle \textit{Introduction.---}Bell states are vital resources both for fundamental investigations of quantum entanglement and for realizing practical goals in quantum information processing and metrology. Generation and measurement of Bell states appear in a plethora of quantum communication protocols spanning dense coding~\cite{mattle1996dense}, teleportation~\cite{bennett1993teleporting}, entanglement-based cryptography~\cite{shukla2014protocols, shi2013multi, tittel2000quantum}, and entanglement swapping~\cite{pan1998experimental,goebel2008multistage}. 
Production of a complete set of Bell states has been actively studied in various photonic degrees of freedom including polarization~\cite{kwiat1995new,mattle1996dense}, orbital angular momentum~\cite{agnew2013generation}, discrete time-bins~\cite{Lo2020}, pulsed time-frequency modes~\cite{brendel1999pulsed} and path encodings~\cite{shadbolt2012generating,silverstone2014chip,li2020femtosecond}. Recently, interest in frequency-bin encoding has grown due to particular advantages such as simple multiplexing capabilities and compatibility with both on-chip integration and optical fiber networks~\cite{kues2019quantum, imany201850, lu2018electro, zhang2021chip}. In frequency bins, the \emph{negative} correlations associated with the $\ket{\Psi^{\pm}}\propto\ket{01}\pm\ket{10}$ Bell states (under the convention where the logical $\ket{1}$ has higher frequency than logical $\ket{0}$ for each photon) are automatically realized through energy conservation in a nonlinear parametric process driven by a continuous-wave (CW) monochromatic pump (or pulsed pump with bandwidth less than the frequency-bin separation of interest). However, the generation of \emph{positively} frequency-correlated $\ket {\Phi^{\pm}}\propto\ket{00}\pm\ket{11}$ states is inherently more challenging. While the $\ket{\Psi^{\pm}}$ states can be deterministically transformed to $\ket{\Phi^{\pm}}$ states using a quantum frequency processor (QFP)~\cite{lu2018quantum}, such transformations require multiple active elements after photon generation, increasing complexity and insertion losses. In this Letter we synthesize all four frequency-bin Bell states in a single setup by successively driving spontaneous parametric down-conversion (SPDC) with single and dual spectral-line pumps. Applying a programmable spectral filter, we then carve out the desired Bell state from the broadband spectrum. We use Bayesian estimation to reconstruct the density matrices of the generated Bell states from mutually unbiased basis measurements and find fidelities $\geq$97\%. The presented scheme together with the recent demonstration of a frequency-bin Bell state analyzer~\cite{lingaraju2022bell} and arbitrary control of frequency-bin qubits~\cite{lu2020fully} lays the groundwork for several entanglement-based quantum networking protocols. Our work also opens up avenues in quantum metrology. We present a proof-of-concept demonstration by probing the opposite impacts of common-mode and differential-mode phases on the positively and negatively correlated Bell states. The results indicate two-photon advantage in the sensitivity of measuring common-mode delays using $\ket{\Phi^{\pm}}$ states which, together with the nonlocal sensing capability of $\ket{\Psi^{\pm}}$ states for differential-mode delays, can be exploited to sense the link latency or perform positioning and clock synchronization~\cite{giovannetti2002positioning, giovannetti2001quantum} in an entanglement-distribution network. \textit{Background.---}Consider two frequency-bin qubits defined on a comb-like grid of narrowband spectral modes spaced by multiples of $\Delta\omega$. In this context, ``narrowband'' implies that the individual bin widths are smaller than all other characteristic frequency scales in the problem---e.g., variations in the phase-matching function, pulse shaper filter widths, or dispersion. 
An arbitrary pure two-photon state for an idler $I$ and signal $S$ qubit can then be expressed as \begin{equation} \label{eq:arb} \ket{\psi} = c_{00}\ket{I_0S_0} + c_{01}\ket{I_0S_1} + c_{10}\ket{I_1S_0} + c_{11}\ket{I_1S_1}, \end{equation} where $I_n$ ($S_n$) signifies a single photon populating mode centered at frequency $\omega_{I,n}=\omega_{I,0} + n\Delta\omega$ ($\omega_{S,n}=\omega_{S,0} + n\Delta\omega$). In exploring the feasibility of producing such a state through SPDC, we note immediately that three pump wavelengths will be required to satisfy energy conservation for the four logical states: $\omega_{P,-1}=\omega_{I,0}+\omega_{S,0}$, $\omega_{P,0}=\omega_{I,0}+\omega_{S,1}=\omega_{I,1}+\omega_{S,0}$, and $\omega_{P,1}=\omega_{I,1}+\omega_{S,1}$. Combined with some phase-matching function $\beta$, we therefore can write the basis coefficients $c_{mn}$ in terms of complex pump amplitudes $\alpha_{k}$ corresponding to the pump frequency line at $\omega_{P,k}$~\cite{grice1997spectral}: $c_{00}=\alpha_{-1}\beta_{00}$, $c_{01}=\alpha_{0}\beta_{01}$, $c_{10}=\alpha_{0}\beta_{10}$, and $c_{11}=\alpha_{1}\beta_{11}$, where $\beta_{mn}\equiv \beta(\omega_{I,m},\omega_{S,n})$ denotes the phase-matching coefficient. \begin{figure} \caption{(a) Experimental setup. Frequency-domain illustration of the scheme for the generation of (b)~$\ket{\Psi^{(\kappa)}}$ and (c)~$\ket{\Phi^{(\nu)}}$.} \label{setup} \end{figure} Accordingly, with full control of three pump lines and the relevant phase-matching conditions, it is in principle possible to produce any two-qubit frequency-bin pure state. Yet despite a large body of research on engineered periodic poling designs for nonlinear crystals~\cite{hum2007quasi}, which in the context of biphoton generation include chirped patterns for increased bandwidth~\cite{Harris2007, Nasr2008, Sensarn2010}, phase-modulated patterns for pump switching~\cite{Odele2015, Odele2017}, and Gaussian patterns to remove spectral entanglement~\cite{Branczyk2011, BenDixon2013, Chen2017}, we are unaware of any method to leverage such engineering for fully arbitrary control over the $\beta_{mn}$ phase-matching factors, as would be required for general two-qubit frequency-bin states. Indeed, if seeking a single physical configuration to produce all states of interest, the condition $|\beta_{00}|\approx|\beta_{01}|\approx|\beta_{10}|\approx|\beta_{11}|$ would actually prove desirable in enabling comparable efficiency for all logical basis states. Under this condition, direct production of an arbitrary state is no longer possible, for the dependence of both $c_{01}$ and $c_{10}$ on $\alpha_{0}$ prevents independent specification of each coefficient~\cite{SFWM}. Nevertheless, all four Bell states can be generated: $\alpha_{\pm1}=0$ and $\alpha_{0}\neq 0$ leads to $|c_{00}|=|c_{11}|=0$ and $|c_{01}|=|c_{10}|\neq 0$, whereas $|\alpha_{1}|=|\alpha_{-1}|\neq0$ and $\alpha_{0}= 0$ yields $|c_{00}|=|c_{11}|\neq0$ and $|c_{01}|=|c_{10}|=0$. By applying phase shifts using a pulse shaper after generation---which would likely be present already for subsequent routing and processing---the specific phases required for $\ket{\Psi^\pm}$ and $\ket{\Phi^\pm}$ can be realized. Importantly, since switching between $\ket{\Psi^{\pm}}$ and $\ket{\Phi^{\pm}}$ is effected by modifying the pump, the generated photons experience no additional loss in the process, in contrast to using an active QFP to convert from $\ket{\Psi^\pm}$-type to $\ket{\Phi^\pm}$-type correlations~\cite{lu2018quantum}.
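To make the two operating points explicit, it may help to write out Eq.~(\ref{eq:arb}) for the two pump configurations, assuming, as above, comparable phase-matching coefficients:
\begin{equation*}
\ket{\psi}\;\propto\;
\begin{cases}
\beta_{01}\ket{I_0S_1}+\beta_{10}\ket{I_1S_0}, & \alpha_{\pm1}=0,\ \alpha_0\neq0 \quad (\ket{\Psi^\pm}\text{-type}),\\[2pt]
\alpha_{-1}\beta_{00}\ket{I_0S_0}+\alpha_{1}\beta_{11}\ket{I_1S_1}, & \alpha_0=0,\ |\alpha_1|=|\alpha_{-1}| \quad (\ket{\Phi^\pm}\text{-type}),
\end{cases}
\end{equation*}
with the remaining relative phases subsequently set by the pulse shaper, as described above.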
\textit{Bell state demonstration.---} Figure~\ref{setup}(a) depicts our frequency-bin Bell basis synthesizer. A CW laser centered at 780.3~nm ($\omega_{P,0}/2\pi = 384.15$~THz) is launched into an electro-optic intensity modulator (EOIM) driven by a 25~GHz radio-frequency (RF) sinusoidal waveform, set to one of two desired modes of operation: in the ``EOIM off'' case, the RF waveform is suppressed and the DC bias point adjusted for maximum transmission, resulting in a single pump line; in the ``EOIM on'' case, an RF waveform with a $3.6$~V peak amplitude---approximately 70\% of the EOIM's half-wave voltage---is applied while the DC bias point is set to the null transmission point, leading to two spectral lines spaced at 50~GHz via carrier suppression. The output from the EOIM is used to pump a fiber-pigtailed periodically poled lithium niobate (PPLN) ridge waveguide engineered for type-0 phase matching and temperature-tuned to $\sim$56~$^\circ$C for maximum efficiency at the pump frequency $\omega_{P,0}$. Two 14~GHz-wide frequency bins separated by $\Delta \omega/2\pi = 25$~GHz are carved using a pulse shaper (Shaper 1) at spacings of $\pm$152.5~GHz (for $I_1$ and $S_0$) and $\pm$177.5~GHz (for $I_0$ and $S_1$) on either side of the CW laser's half-frequency ($\frac{1}{2}\omega_{P,0}$). \begin{figure} \caption{Coincidences (integrated over 4~s) between output frequency bins after the gate operations $\mathbb{I}_I\otimes\mathbb{I}_S$ and $\mathbb{H}_I\otimes\mathbb{H}_S$ for (a) the $\ket{\Psi^\pm}$ and (b) the $\ket{\Phi^\pm}$ states.} \label{JSI} \end{figure} The produced state is then characterized by a tomography setup comprising an electro-optic phase modulator (EOPM), a pulse shaper operated as a wavelength-selective switch (Shaper 2), and two superconducting nanowire single-photon detectors (SNSPDs) for coincidence detection. When the EOPM drive signal is off, the measured joint spectral intensity (JSI) corresponds to application of the identity to both signal and idler photons ($\mathbb{I}_I\otimes\mathbb{I}_S$); when the 25~GHz EOPM signal is on (with modulation index $m = 1.435$~rad to ensure equal mixing probability between two adjacent bins), both photons experience a probabilistic Hadamard gate ($\mathbb{H}_I\otimes\mathbb{H}_S$)~\cite{imany2018frequency,lu2020fully} prior to spectrally resolved detection. In the first experiment [Fig.~\ref{setup}(b)], we couple a single carrier pump ($\omega_{P,0}$) at 6.2~mW input into the PPLN waveguide. After spectral filtering, the resulting state is ideally of the form $\ket{\Psi^{(\kappa)}} \propto \ket{I_0S_1} + e^{i\kappa}\ket{I_1S_0}$, where the phase $\kappa$ is dependent on the difference in delays experienced by the biphotons prior to the EOPM~\cite{Supplement}. Since both photons traverse identical links with negligible dispersion, $\kappa$ is expected to be zero. We verify this by measuring the coincidences between all pairs of signal and idler frequency bins selected by Shaper 2 while the spectral phase on the bin pair $\ket{I_1S_0}$ is scanned by Shaper 1, followed by parallel Hadamard gates applied by the EOPM. The phase $\kappa$ is set to $0$ ($\pi$) using Shaper 1 to obtain the standard Bell state $\ket{\Psi^{+}}\equiv\ket{\Psi^{(0)}}$ ($\ket{\Psi^{-}}\equiv\ket{\Psi^{(\pi)}}$). Figure~\ref{JSI}(a) shows the measured coincidences after applying gate operations $\mathbb{I}_I\otimes\mathbb{I}_S$ and $\mathbb{H}_I\otimes\mathbb{H}_S$---corresponding to measurement in the $Z \otimes Z$ and $X\otimes X$ Pauli bases, respectively.
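Before turning to the data, it is useful to note the prediction that follows from the supplement~\cite{Supplement} when the signal and idler delays are balanced: for $\ket{\Psi^{(\kappa)}} \propto \ket{I_0S_1} + e^{i\kappa}\ket{I_1S_0}$, the coincidence probabilities after the parallel Hadamards scale as
\begin{equation*}
\mathcal{P}^{X\otimes X}_{00}=\mathcal{P}^{X\otimes X}_{11}\propto\cos^2(\kappa/2), \qquad \mathcal{P}^{X\otimes X}_{01}=\mathcal{P}^{X\otimes X}_{10}\propto\sin^2(\kappa/2),
\end{equation*}
so that $\ket{\Psi^{+}}$ ($\kappa=0$) should show only positively correlated coincidences in the $X\otimes X$ basis, whereas $\ket{\Psi^{-}}$ ($\kappa=\pi$) should retain its negative correlations.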
In accordance with theory~\cite{Supplement}, the negative frequency correlations revealed in the $\mathbb{I}_I\otimes\mathbb{I}_S$ JSI are reversed after the Hadamards for the $\ket{\Psi^+}$ Bell state, but retained for $\ket{\Psi^-}$. A factor of $\sim$2.8 lower coincidences for the $\mathbb{H}_I\otimes\mathbb{H}_S$ cases result from the probabilistic nature of the single-EOPM Hadamard~\cite{imany2018frequency}, and would not be observed with a full QFP version~\cite{lu2018quantum}. \begin{figure} \caption{ Real and imaginary parts of the Bayesian-mean-estimated density matrices of all four Bell states, computed from the measurements in Fig.~\ref{JSI} \label{Rho} \end{figure} In the second experiment [Fig.~\ref{setup}(c)], the EOIM eliminates the original pump line at $\omega_{P,0}$ and produces equal first-order sidebands spaced by 50~GHz ($\omega_{P,-1}$ and $\omega_{P,1}$). The power in each sideband after modulation is maintained at $\sim$7~mW in order to achieve coincidence rates similar to the first experiment. Note that this amounts to approximately twice the total pump power as before: in the spontaneous regime, the flux in any given signal-idler bin pair is directly proportional to the pump power at the corresponding sum frequency, so that the two pump lines in the ``EOIM on'' case must \emph{each} match the power of the single line in the ``EOIM off'' case to maintain the rate of Bell state production. Experimentally, we observe 17.5~dB extinction of the original pump line [Fig.~\ref{setup}(a-i)], which implies a roughly 50-fold suppression of negatively correlated biphoton contributions relative to the desired positive correlations. Such intensity modulation offers a particularly simple approach for producing the two lines required, although more general pump inputs would be possible with an optical frequency comb as input---e.g., an EOPM followed by a line-by-line pulse shaper in the 780~nm wavelength band~\cite{Monmayrant2004,Willits2012}. Such an arrangement would allow for arbitrary weightings of the input pump lines, but introduce additional complexity that is not required for the specific Bell state cases of interest here. The succeeding SPDC process generates time-energy entangled biphotons in coherent superpositions of broadband spectral amplitudes centered at half of the pump-sideband frequencies, resulting in a two-qubit entangled state ideally of the form $\ket{\Phi^{(\nu)}} \propto \ket{I_0,S_0} + e^{i\nu}\ket{I_1,S_1}$. The phase $\nu$ is a fixed common-mode phase that is expected from the RF modulation phase and mean optical delay traversed by the biphotons~\cite{Supplement}. We determine the phase $\nu$ in the same fashion as with $\kappa$ but now by scanning the spectral phase imparted on the bin pair $\ket{I_1S_1}$; the measured value of $\nu$ is compensated for and set to $0$ ($\pi$) to obtain $\ket{\Phi^{(0)}}= \ket{\Phi^{+}}$ ($\ket{\Phi^{(\pi)}}= \ket{\Phi^{-}}$). Measured JSIs after the identity and Hadamard operations appear in Fig.~\ref{JSI}(b); in contrast to the $\ket{\Psi^\pm}$ case, positive frequency correlations are now clearly evident in the $\mathbb{I}_I\otimes\mathbb{I}_S$ measurement, with the Hadamard operation producing correlation patterns that depend on the state phase ($0$ or $\pi$). Figure~\ref{JSI} reveals unique correlation signatures for each state in the results from both identity and Hadamard. 
In fact, these signatures are sufficient to perform high-fidelity state quantum reconstruction via Bayesian inference~\cite{lukens2020practical,lu2021full}, which has been shown to enable low-uncertainty estimates of highly correlated states measured in two pairs of MUBs~\cite{lu2018quantum, lu2022high}. Our specific procedure~\cite{lu2021full} starts with a uniform (Bures) prior and uses a likelihood from the JSI measurements in the $Z\otimes Z$ and $X\otimes X$ bases. The estimated mean density matrices, shown in Fig.~\ref{Rho} have fidelities $\geq97\%$ with respect to the ideal Bell states. Interestingly, the fidelities for $\ket{\Psi^\pm}$ are slightly higher than those for $\ket{\Phi^\pm}$, which can be attributed to a combination of SPDC from residual $\omega_{P,0}$ pump and higher accidental coincidences in the latter case. In the dual-line pump scenario, the pump frequency at $\omega_{P,1}$ ($\omega_{P,-1}$) can also populate photons in frequency bins $I_0$ and $S_0$ ($I_1$ and $S_1)$ via downconversion in which the matched signal or idler falls outside of the computational space. Such processes do not contribute to the ideal $\ket{\Phi^{\pm}}$ state in the coincidence basis. However, in practice the presence of multipair emission means that these undesired detection events can lead to accidental coincidences; specifically, for the same rate of desired coincidences for $\ket{\Psi^\pm}$ and $\ket{\Phi^\pm}$, the $\ket{\Phi^\pm}$ cases have double the rate of single-photon detection events, leading to a four-fold increase in uncorrelated coincidences. The impact of both imperfect carrier extinction and background processes are validated experimentally. First, for the $\mathbb{I}_I\otimes\mathbb{I}_S$ cases in Fig.~\ref{JSI}, the ratio of desired to undesired JSI points is around 50 for the $\ket{\Phi^\pm}$ states, in agreement with that predicted by the 17.5~dB carrier suppression in Fig.~\ref{setup}(a). (For the $\ket{\Psi^\pm}$ states where carrier suppression is not required, the mismatched JSI points fall to the observed accidentals level.) Similarly, computing the coincidences-to-accidentals ratios (CARs) found by comparing the coincidences in Fig.~\ref{JSI} against their values time-shifted in the raw histograms, we obtain CARs of $\sim$400 for $\ket{\Psi^\pm}$ and $\sim$100 for $\ket{\Phi^\pm}$, again matching the four-fold theoretical prediction for accidentals between the two cases. We note that the first nonideality represents a technical limitation that could be eliminated with stronger suppression of the carrier frequency (through a different EOIM or additional pump filtering), whereas the second effect would require further engineering of the phase-matching function beyond the proposed configuration here. Nevertheless, the fidelities $\mathcal{F}\geq0.97$ which we have already observed---in the presence of these effects---highlight the immediate value of our approach for Bell state generation even without any further improvements. \textit{Quantum delay sensing.---}The joint temporal correlation of time-energy entangled biphotons can be utilized for delay metrology with potential quantum advantages~\cite{giovannetti2004quantum,giovannetti2002positioning}. Negatively correlated entangled states (such as $\ket{\Psi^\pm}$) can probe changes to the difference in the delays traversed by the photons (differential-mode delay) via nonlocal measurements only~\cite{nonlocalquan2020high, seshadri2022nonlocal}. 
Entangled photons with positive frequency correlations (such as $\ket{\Phi^\pm}$) can offer enhancement in delay sensitivity beyond the shot noise limit~\cite{giovannetti2002positioning, kuzucu2005two, kuzucu2008joint}, by responding to changes in the sum of the signal and idler delays (common-mode delay). Combined, these complementary capabilities suggest that frequency-bin Bell states could be employed for distributed sensing applications~\cite{zhao2021field,giovannetti2011advances} and monitoring delays and latencies in quantum networks. We highlight this potential using the demonstrated frequency-bin Bell states by examining their sensitivity to common-mode and differential-mode phase---fully equivalent to temporal delay through the general Fourier relationship between linear spectral phase and group delay~\cite{weiner2011ultrafast, pe2005temporal,seshadri2022nonlocal}. For the special case of two-dimensional systems, \emph{any} relative phase shift is trivially equal to a linear phase; thus any phase operation on a frequency-bin qubit can be mapped to a delay~\cite{lu2020fully}. \begin{figure} \caption{Interferograms as (a)~differential-mode and (b)~common-mode spectral phases are scanned and the coincidences between different pairs of signal and idler bins are measured.} \label{Interferograms} \end{figure} Specifically, a common-mode phase $\varphi_c$ applied on the bins $S_1$ and $I_1$ results in a global phase on the $\ket{\Psi^{\pm}}$ state which remains unaltered. However, the $\ket{\Phi^{+}}$ state transforms into \begin{equation}\label{phi} \ket{\Phi^{(2\varphi_c)}} = \ket{I_0S_0} + e^{2i\varphi_c}\ket{I_1S_1}. \end{equation} Such a transformation is equivalent to a common-mode delay of the form $\tau_c = (\tau_S + \tau_I)/2 = \varphi_c {\Delta\omega}^{-1}$, where $\tau_S$ ($\tau_I$) is the total delay experienced by the signal (idler) photon. That is, biphotons traveling through the same path accumulate phase corresponding to twice the delay traversed, the origin of quantum enhancement~\cite{giovannetti2002positioning}. After parallel Hadamard operations, the probability of coincidence detection between bins $I_0$ and $S_0$ becomes~\cite{Supplement} \begin{equation}\label{ProbPhi} \mathcal{P}^{X \otimes X}_{00}(\Phi^{(2\varphi_c)}) \propto \cos^2(2\varphi_c). \end{equation} On application of differential-mode phase $\varphi_d$ on the bins $S_0$ and $I_1$, the $\ket{\Phi^{\pm}}$ states are unaltered while $\ket{\Psi^{+}}$ transforms to \begin{equation} \ket{\Psi^{(2\varphi_d)}} = \ket{I_0S_1} + e^{2i\varphi_d}\ket{I_1S_0}, \end{equation}\label{psi} and the resultant coincidence probability between bins $I_0$ and $S_0$ after parallel Hadamard operations is \begin{equation}\label{ProbPsi} \mathcal{P}^{X \otimes X}_{00}(\Psi^{(2\varphi_d)}) \propto \cos^2(2\varphi_d). \end{equation} This transformation is equivalent to a differential mode delay of the form $\tau_d = (\tau_S-\tau_I)/2 = {\varphi_d \Delta\omega}^{-1}$. The coincidence probabilities for other signal-idler bin pairs after the Hadamard operation are shown in Ref.~\cite{Supplement}. Using Shaper 1 to successively apply the common-mode phase ($\varphi_c$ on the bins $S_1$ and $I_1$) and differential-mode phase ($\varphi_d$ on the bins $S_0$ and $I_1$) prior to Hadamard operations, we measure coincidences between different frequency-bin pairs (plotted in Fig.~\ref{Interferograms}). 
The experiment clearly corroborates the theoretical prediction, highlighting the capability for differential- and common-mode delay metrology based on the frequency correlations of the probe state. \textit{Discussion.---}We have demonstrated generation and tomography of all four two-qubit Bell states, to our knowledge the first time in frequency-bin encoding. Readily reconfigurable between single and dual line pump conditions and relying on passive spectral filtering, our setup can synthesize any Bell state within a fixed set of four frequency bins. The capability for on demand switching between the Bell pairs can find use in quantum cryptography applications~\cite{shi2013multi,chun2005secure}. Further, we demonstrate that the strong positive and negative frequency correlations in the generated Bell states can be used for sensing common-mode and differential-mode delays. Moving forward, it will be interesting to explore the extent to which this technique can be generalized to higher-dimensional entanglement, the impact of coincidence basis postselection on protocols using these resources states, and methods to discriminate between them using frequency-bin Bell state analyzers~\cite{lingaraju2022bell}. This work also offers scope for analyzing related multi-line pump architectures for preparing two-qubit states. While on-chip generation of generic negatively correlated two-qudit states has been investigated~\cite{liscidini2019scalable}, our approach can offer further opportunities for on-chip implementation of $\ket{\Phi}$-like states utilizing multi-line pumped spontaneous four-wave mixing in single or series of microring resonantors (MRRs). In fact, through an appropriate cascade of MRR sources, it might be possible to suppress undesired biphoton generation processes such that only the signal and idler mode pairs of interest---i.e., those in the two-qubit computational basis---are efficiently produced. \begin{acknowledgments} We thank AdvR for loaning the PPLN ridge waveguide. A portion of this work was performed at Oak Ridge National Laboratory, operated by UT-Battelle for the U.S. Department of Energy under contract no. DE-AC05-00OR22725. Funding was provided by the National Science Foundation (1839191-ECCS, 2034019-ECCS) and the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research, Early Career Research Program (Field Work Proposal ERKJ353). 
\end{acknowledgments} \section*{Supplement} \label{theory} \subsection*{Coincidence probability} \begin{table*} \centering \begin{tabular}{llll} \hline\rule{0pt}{1\normalbaselineskip} Basis &\hspace{7mm}Probability & \hspace{22mm}$\Tilde{\Psi}$ &\hspace{24mm} $\Tilde{\Phi}$ \\ \hline \rule{0pt}{1\normalbaselineskip} \multirow{4}{*}{${Z\otimes Z}$} & \hspace{12mm}$\mathcal{P}_{00}$ & \hspace{22.5mm}$0$ & \hspace{23mm}$\lvert \gamma_{00} \rvert^2$ \\ &\hspace{12mm}$\mathcal{P}_{01}$ &\hspace{21mm}$\lvert \gamma_{01}\rvert^2 $ &\hspace{25.5mm}$0$ \\ &\hspace{12mm}$\mathcal{P}_{10}$ & \hspace{21mm}$\lvert\gamma_{10}\rvert^2 $ &\hspace{25.5mm}$0$ \\ & \hspace{12mm}$\mathcal{P}_{11}$ &\hspace{22.5mm}$0$ & \hspace{23mm}$\lvert\gamma_{11}\rvert^2$ \\[1.5mm] \multirow{2}{*}{${\hspace{1mm}X \otimes X}$} & \hspace{8mm}$\mathcal{P}_{00} = \mathcal{P}_{11}$ & \hspace{8mm}$\eta^4\lvert \gamma_{01}e^{i\Delta\omega(\tau_S - \tau_I)} + \gamma_{10} \rvert^2$ & \hspace{8mm}$\eta^4\lvert \gamma_{00} + \gamma_{11}e^{i2\phi + i\Delta\omega(\tau_S + \tau_I)} \rvert^2$ \\ & \hspace{8mm}$\mathcal{P}_{01} = \mathcal{P}_{10}$ &\hspace{8mm}$\eta^4\lvert \gamma_{01}e^{i\Delta\omega(\tau_S - \tau_I)} - \gamma_{10} \rvert^2$ & \hspace{8mm}$\eta^4\lvert \gamma_{00} - \gamma_{11}e^{i2\phi + i\Delta\omega(\tau_S + \tau_I)} \rvert^2$\\ \hline \end{tabular} \caption{ The coincidence probabilities from ${Z \otimes Z}$ and ${X \otimes X}$ basis measurements. }\label{ZXBasis} \end{table*} The negatively and positively correlated entangled states can be written as \begin{equation}\label{Psi1} \ket{\Tilde{\Psi}} = \Big(\gamma_{01} \hat{a}_{I,0}^\dagger \hat{a}_{S,1}^\dagger + \gamma_{10} \hat{a}_{I,1}^\dagger \hat{a}_{S,0}^\dagger\Big) \ket{\mathrm{vac}}, \end{equation} \begin{equation}\label{Phi1} \ket{\Tilde{\Phi}} = \Big(\gamma_{00} \hat{a}_{I,0}^\dagger \hat{a}_{S,0}^\dagger + \gamma_{11} \hat{a}_{I,1}^\dagger \hat{a}_{S,1}^\dagger\Big) \ket{\mathrm{vac}}, \end{equation} where $\ket{\mathrm{vac}}$ is the vacuum state, $\gamma_{kl}$ is the complex probability amplitude of the frequency bin pair associated with the creation operators $\hat{a}_{I,k}^\dagger$ and $\hat{a}_{S,l}^\dagger$ corresponding to the $k^{th}$ idler bin ($I_k$) and $l^{th}$ signal bin ($S_l$) centered at $\omega_{I,k} = \frac{1}{2}\omega_{P,0} - \Omega_0 + (k-1)\Delta\omega $ and $\omega_{S,l}= \frac{1}{2}\omega_{P,0} + \Omega_0 + l\Delta\omega$ respectively, with $\Omega_0$ being the frequency offset of the bins $S_0$ and $I_1$ from the center frequency $\frac{1}{2}\omega_{p,0}$, and $\Delta\omega$ being the frequency-bin separation. If the signal and idler traverse through delays given by $\tau_{_S}$ and $\tau_{_I}$ respectively (prior to the phase modulation illustrated in Fig.~1 of the main text), their annihilation operators transform into $\hat{b}_{I,k}$ and $\hat{b}_{S,l}$ as follows: \begin{equation}\label{operatorI} \hat{b}_{I,k} = \hat{a}_{I,k} \exp{\big(i\tau_{_I}\omega_{{I,k}}\big)}, \end{equation} \begin{equation}\label{operatorS} \hat{b}_{S,l} = \hat{a}_{S,l} \exp{\big(i\tau_{_S}\omega_{{S,l}}\big)}. 
\end{equation} Coincidence measurements between $k^{th}$ idler bin and $l^{th}$ signal bin in the $Z \otimes Z$ basis (i.e., when no phase modulation is applied) results in coincidence probabilities given by the following expressions and tabulated in Table~\ref{ZXBasis}: \begin{equation}\label{ProbZ} \begin{aligned} &\mathcal{P}^{Z \otimes Z}_{kl}(\Tilde{\Psi}) &= \Big|\bra{\mathrm{vac}} \hat{b}_{I,k} \hat{b}_{S,l} \ket{\Tilde{\Psi}} \Big|^2\\ &\mathcal{P}^{Z \otimes Z}_{kl}(\Tilde{\Phi}) &= \Big|\bra{\mathrm{vac}} \hat{b}_{I,k} \hat{b}_{S,l} \ket{\Tilde{\Phi}} \Big|^2\\ \end{aligned} \end{equation} On applying phase modulation of the form $\hat{m}(t) = e^{im\sin(\Delta\omega t + \phi)}$, the annihilation operators transform into \begin{equation} \begin{aligned} \label{PhModI} \hat{c}_{I,k} = \sum_{p=-1}^1 J_{p}(m) e^{-ip\phi} \hat{b}_{I,k-p}, \end{aligned} \end{equation} \begin{equation} \begin{aligned} \label{PhModS} \hat{c}_{S,l} = \sum_{p=-1}^1 J_{p}(m) e^{-ip\phi} \hat{b}_{S,l-p}, \end{aligned} \end{equation} where $J_p(m)$ is the Bessel function of the first kind, $m$ is the modulation depth in radians, and $\phi$ is the phase of the RF sinusoidal waveform modulating the signal and idler photons. We limit the summation indices to $p\in\{-1,0,1\}$ to reflect the fact that only inputs in the qubit computational space are considered. On choosing the modulation index ($m = 1.435$ rad) such that $J_{1}(m) = J_0(m) = -J_{-1}(m) = 0.548 \equiv \eta$, the annihilation operators from Eqs.~(\ref{PhModI},\ref{PhModS}) can now be written as follows: \begin{equation} \begin{aligned} \label{PhModI2} \hat{c}_{I,k}(\phi) = \eta \Big( \hat{b}_{I,k} + e^{-i\phi}\hat{b}_{I,k-1} - e^{i\phi}\hat{b}_{I,k+1}\Big), \end{aligned} \end{equation} \begin{equation} \begin{aligned} \label{PhModS2} \hat{c}_{S,l}(\phi) = \eta \Big( \hat{b}_{S,l} + e^{-i\phi}\hat{b}_{S,l-1} - e^{i\phi}\hat{b}_{S,l+1}\Big). 
\end{aligned} \end{equation} Specifically, on setting the phase $\phi = 0$, Eqs.~(\ref{PhModI2},\ref{PhModS2}) essentially take the form of Hadamard operations as follows: \begin{equation} \begin{aligned} \label{Hadamard_binNotation} &\hat{c}_{I,0}(\phi=0) = \eta\left(\hat{b}_{I,0} -\hat{b}_{I,1}\right)\\ &\hat{c}_{S,0}(\phi=0) = \eta\left(\hat{b}_{S,0} - \hat{b}_{S,1}\right)\\ &\hat{c}_{I,1}(\phi=0) = \eta\left(\hat{b}_{I,1} + \hat{b}_{I,0}\right)\\ &\hat{c}_{S,1}(\phi=0) = \eta\left(\hat{b}_{S,1} + \hat{b}_{S,0}\right)\\ \end{aligned} \end{equation} On application of the transformation given by Eqs.~(\ref{PhModI2},\ref{PhModS2}) on the entangled states from Eqs.~(\ref{Psi1},\ref{Phi1}), the coincidence measurements between $k^{th}$ idler bin and $l^{th}$ signal bin in the $X \otimes X$ basis results in coincidence probabilities given by the following expressions and tabulated in Table~\ref{ZXBasis}: \begin{equation}\label{ProbX} \begin{aligned} &\mathcal{P}^{X \otimes X}_{kl}(\Tilde{\Psi}) &= \Big|\bra{\mathrm{vac}} \hat{c}_{I,k} \hat{c}_{S,l} \ket{\Tilde{\Psi}} \Big|^2\\ &\mathcal{P}^{X \otimes X}_{kl}(\Tilde{\Phi}) &= \Big|\bra{\mathrm{vac}} \hat{c}_{I,k} \hat{c}_{S,l} \ket{\Tilde{\Phi}} \Big|^2\\ \end{aligned} \end{equation} \begin{widetext} \noindent For instance, $\mathcal{P}^{X \otimes X}_{00}(\Tilde{\Psi})$ can be computed as follows, \begin{equation}\label{AlgProbX_00_Psi} \begin{aligned} &\mathcal{P}^{X \otimes X}_{00}(\Tilde{\Psi}) = \Big|\bra{\mathrm{vac}} \hat{c}_{I,0} \hat{c}_{S,0} \ket{\Tilde{\Psi}} \Big|^2 \\ & = \eta^4\left| \left\langle{\mathrm{vac}} \left| \left( \hat{b}_{I,0} + e^{-i\phi}\hat{b}_{I,-1} -e^{i\phi}\hat{b}_{I,1} \right)\left( \hat{b}_{S,0} + e^{-i\phi}\hat{b}_{S,-1} -e^{i\phi}\hat{b}_{S,1} \right) \left( \gamma_{01}\hat{a}_{I,0}^\dagger\hat{a}_{S,1}^\dagger + \gamma_{10}\hat{a}_{I,1}^\dagger\hat{a}_{S,0}^\dagger \right) \right|{\mathrm{vac}} \right\rangle \right|^2 \\ & = \eta^4\left| \left\langle\mathrm{vac} \left| \Big(e^{i\tau_{_I}\omega_{{I,0}}}\hat{a}_{I,0}\Big)\Big(-e^{i\phi}e^{i\tau_{_S}\omega_{{S,1}}}\hat{a}_{S,1}\Big)\gamma_{01}\hat{a}_{I,0}^\dagger\hat{a}_{S,1}^\dagger +\Big(-e^{i\phi}e^{i\tau_{_I}\omega_{{I,1}}}\hat{a}_{I,1}\Big) \Big(e^{i\tau_{_S}\omega_{{S,0}}}\hat{a}_{S,0} \Big)\gamma_{10}\hat{a}_{I,1}^\dagger\hat{a}_{S,0}^\dagger \right| \mathrm{ vac}\right\rangle \right|^2\\ & = \eta^4\left| \gamma_{01}e^{i\Delta\omega(\tau_S - \tau_I)} + \gamma_{10} \right|^2 \\ \end{aligned} \end{equation}{Similarly,} \begin{equation}\label{AlgProbX_00_Phi} \begin{aligned} &\mathcal{P}^{X \otimes X}_{00}(\Tilde{\Phi}) = \Big|\bra{\mathrm{vac}} \hat{c}_{I,0} \hat{c}_{S,0} \ket{\Tilde{\Phi}} \Big|^2 \\ & = \eta^4 \left| \left\langle {\mathrm{vac}} \left| \left( \hat{b}_{I,0} + e^{-i\phi}\hat{b}_{I,-1} -e^{i\phi}\hat{b}_{I,1} \right)\left( \hat{b}_{S,0} + e^{-i\phi}\hat{b}_{S,-1} -e^{i\phi}\hat{b}_{S,1} \right) \left( \gamma_{00}\hat{a}_{I,0}^\dagger\hat{a}_{S,0}^\dagger + \gamma_{11}\hat{a}_{I,1}^\dagger\hat{a}_{S,1}^\dagger \right) \right| {\mathrm{ vac}} \right\rangle \right|^2 \\ & = \eta^4\left| \left\langle\mathrm{vac} \left| \Big(e^{i\tau_{_I}\omega_{{I,0}}}\hat{a}_{I,0}\Big)\Big(e^{i\tau_{_S}\omega_{{S,0}}}\hat{a}_{S,0}\Big)\gamma_{00}\hat{a}_{I,0}^\dagger\hat{a}_{S,0}^\dagger +\Big(-e^{i\phi}e^{i\tau_{_I}\omega_{{I,1}}}\hat{a}_{I,1}\Big) \Big(-e^{i\phi}e^{i\tau_{_S}\omega_{{S,1}}}\hat{a}_{S,1} \Big)\gamma_{11}\hat{a}_{I,1}^\dagger\hat{a}_{S,1}^\dagger \right| \mathrm{ vac}\right\rangle \right|^2\\ & = \eta^4\left| \gamma_{00} + \gamma_{11}e^{i2\phi 
+ i\Delta\omega(\tau_S + \tau_I)} \right|^2 \\ \end{aligned} \end{equation} \end{widetext} We note that interference from the Hadamard operation reveals the phases in the frequency bin pairs constituting the Bell states. The negatively correlated $\ket{\Tilde{\Psi}}$ state is affected only by the difference in the delays traversed by the two photons while the positively correlated $\ket{\Tilde{\Phi}}$ state is affected only by the (common-mode) sum of the delays traversed by the two photons and equivalently the phase of the RF waveforms modulating the biphotons. Specific to our experiment where the physical paths are such that $\tau_{_S} = \tau_{_I}$, we impart differential-mode and common-mode delays on a given state by applying phases using a pulse shaper, making use of the equivalence between group delay and linear spectral phase~\cite{pe2005temporal,seshadri2022nonlocal}. \end{document}
math
\begin{document} \title{Extremal shift rule for continuous-time zero-sum Markov games} \author{Yurii Averboukh\footnote{Krasovskii Institute of Mathematics and Mechanics UrB RAS, [email protected]}} \maketitle \begin{abstract} In the paper we consider a controlled continuous-time Markov chain describing an interacting particle system with a finite number of types. The system is controlled by two players with opposite purposes. The limiting game as the number of particles tends to infinity is a zero-sum differential game. The Krasovskii--Subbotin extremal shift provides the optimal strategy in the limiting game. The main result of the paper is the near optimality of the Krasovskii--Subbotin extremal shift rule for the original Markov game. \noindent\textbf{Keywords:} continuous time Markov games, differential games, extremal shift rule, control with guide strategies. \end{abstract} \section{Introduction} The paper is devoted to the construction of near optimal strategies for a zero-sum two-player continuous-time Markov game based on the corresponding deterministic game. The term `Markov game' is used for a Markov chain with the Kolmogorov matrix depending on the controls of the players. These games are also called continuous-time stochastic games. Continuous-time Markov games were first studied by Zachrisson \cite{Zachrisson}. Information on recent progress in the theory of continuous-time Markov games can be found in \cite{Neyman}, \cite{Levy} and the references therein. We consider the case when the continuous-time Markov chain describes an interacting particle system. The interacting particle system converges to a deterministic system as the number of particles tends to infinity~\cite{Kol},~\cite{Kol_book} (see also~\cite{Darling_Norris},~\cite{Benaim_le_boduak}). The value function of the controlled Markov chain converges to the value function of the limiting control system \cite{Kol} (see also the corresponding result for discrete-time systems in~\cite{Gast_et_al}). This result is extended to the case of zero-sum games as well as to the case of nonzero-sum games~\cite{Kol}. If a nonanticipative strategy is optimal for the differential game, then it is near optimal for the Markov game~\cite{Kol}. However, nonanticipative strategies require knowledge of the control of the second player. Often this information is inaccessible and the player has only information about the current position. In this case one can use feedback strategies or control with guide strategies. Control with guide strategies were proposed by Krasovskii and Subbotin to construct the solution of a deterministic differential game under informational disturbances~\cite{NN_PDG_en}. Note that feedback strategies do not provide a stable solution of the differential game. If the player uses a control with guide strategy, then the control is formed stepwise; the player has a model of the system and uses this model to choose an appropriate control via the extremal shift rule. The value function is achieved in the limit when the time between control corrections tends to zero. In the original work by Krasovskii and Subbotin, the motion of the model is governed by a copy of the original system, and the motion of the original system stays close to the motion of the model. Therefore the model can be called a guide. Note that, formally, a control with guide strategy is a strategy with memory. However, it suffices to store only a finite number of vectors.
Additionally, the player should use computer to obtain the state of the guide at the time of control correction. Control with guide strategies realizing the extremal shift were used for the differential games without Lipschitz continuity of the dynamics in \cite{kriazh} and for the games governed by delay differential equations in~\cite{kras_delay},~\cite{luk_plaks}. Krasovskii and Kotelnikova proposed the stochastic control with guide strategies \cite{a4}--\cite{a6}. In that case the real motion of the deterministic system is close to the auxiliary stochastic process generated by optimal control for the stochastic differential game. The Nash equilibrium for two-player game in the class of control with guide strategies was constructed via extremal shift in \cite{Averboukh_jcds}. In this paper we let the player use the control with guide strategy realizing extremal shift rule in the Markov game. We assume that the motion of the guide is given by the limiting deterministic differential game. We estimate the expectation of the distance between the Markov chain and the motions of the model (guide). This leads to the estimate between the outcome of the player in the Markov game and the value function of the limiting differential game. The paper is organized as follows. In preliminary Section \ref{sect_prel} we describe the Markov game describing the interacting particle system and the limiting deterministic differential game. In Section \ref{sect_cgs} we give the explicit definition of control with guide strategies and formulate the main results. Section \ref{sect_prop} is devoted to a property of transition probabilities. In Section \ref{sect_estima} we estimate the expectation of distance between the Markov chain and the deterministic guide. Section \ref{sect_proof} provides the proofs of the statements formulated in Section \ref{sect_cgs}. \section{Preliminaries}\label{sect_prel} We consider the system of finite number particles. Each particle can be of type~$i$, $i\in \{1,\ldots,d\}$. The type of each particle is a random variable governed by a Markov chain. To specify this chain consider the Kolmogorov matrix $Q(t,x,u,v)=(Q_{ij}(t,x,u,v))_{i,j=1}^d$. That means that the elements of matrix $Q(t,x,u,v)$ satisfy the following properties \begin{itemize} \item $Q_{ij}(t,x,u,v)\geq 0$ for $i\neq j$; \item \begin{equation}\label{Kolmogorov} Q_{ii}(t,x,u,v)=-\sum_{j\neq i}Q_{ij}(t,x,u,v). \end{equation} \end{itemize} Here $$t\in [0,T],\ \ x\in \Sigma_d=\{(x_1,\ldots, x_n):x_i\geq 0, x_1+\ldots+x_n=1\},\ \ u\in U,v\in V.$$ Suppose that $U$ and $V$ are compact sets. The variables $u$ and $v$ are controlled by the first and the second players respectively. Below we assume that $x=(x_1,\ldots,x_n)$ is a row-vector. Additionally we assume that \begin{itemize} \item $Q$ is a continuous function of its variable; \item for any $t$, $u$ and $v$ the function $x\mapsto Q(t,x,u,v)$ is Lipschitz continuous; \item for any $t\in [0,T]$, $\xi,x\in\mathbb{R}^n$ the following equality holds true \begin{equation}\label{isaacs} \min_{u\in U}\max_{v\in V}\langle \xi,xQ(t,x,u,v)\rangle=\max_{v\in V}\min_{u\in U}\langle \xi,xQ(t,x,u,v)\rangle \end{equation} \end{itemize} Condition (\ref{isaacs}) is an analog of well-known Isaacs condition. For a fixed parameters $x\in\mathbb{R}^d$, $u\in{U}$, and $v\in{V}$ the type of each particle is determined by the Markov chain with the generator $$(Q(t,x,u,v)f)_i=\sum_{j\neq i}Q_{ij}(t,x,u,v)(f_j-f_i), \ \ f=(f_1,\ldots,f_d). 
$$ The another way to specify the Markov chain is the Kolmogorov forward equation $$\frac{d}{dt}P(s,t,x)=P(s,t,x)Q(t,x,u,v). $$ Here $P(s,t,x)=(P_{ij}(s,t,x))_{ij=1}^d$ is the matrix of the transition probabilities. Now we consider the controlled mean-field interacting particle system (see \cite{Kol}). Let $n_i$ be a number of particles of the type $i$. The vector $N=(n_1,\ldots,n_d)\in \mathbb{Z}_+^d$ is the state of the system consisting of $|N|=n_1+\ldots+n_d$ particles. For $i\neq j$ and a vector $N=(n_1,\ldots,n_d)$ denote by $N^{[ij]}$ the vector obtained from $N$ by removing one particle of type $i$ and adding one particle of type $j$ i.e. we replace the $i$-th coordinate with $n_i-1$ and the $j$-th coordinate with $n_j+1$. The mean-field interacting particle system is a Markov chain with the generator $$L_t^h[u,v]f(N)=\sum_{i,j=1}^d n_iQ_{ij}(t,N/|N|,u(t),v(t))[f(N^{[ij]})-f(N)]. $$ The purpose of the first (respectively, second) player is to minimize (respectively, maximize) the expectation of $\sigma(N/|N|)$. Denote the inverse number of particles by $h=1/|N|$. Normalizing the states of the interacting particle system we get the generator (see \cite{Kol}) \begin{equation}\label{generator} L_t^h[u,v]f(N/|N|)=\sum_{i,j=1}^d \frac{1}{h}\frac{n_i}{|N|}Q_{ij}(t,N/|N|,u(t),v(t))\left[f\left(\frac{N^{[ij]}}{|N|}\right)-f\left(\frac{N}{|N|}\right)\right]. \end{equation} Denote the vector $N/|N|$ by $x=(x_1,\ldots,x_d)$. Thus, we have that $$L_t^h[u,v]f(x)=\sum_{i,j=1}^d \frac{1}{h}x_iQ_{ij}(t,x,u(t),v(t))[f(x-he^i+he^j)-f(x)]. $$ Here $e^i$ is the $i$-th coordinate vector. The vector $x$ belongs to the set $$\Sigma_d^h=\{(x_1,\ldots,x_d):x_i\in h\mathbb{Z},\ \ x_1+\ldots+x_d=1\}\subset \Sigma_d. $$ Further, let $\mathcal{U}_{\rm det}[s]$ (respectively, $\mathcal{V}_{\rm det}[s]$) denote the set of deterministic controls of the first (respectively, second) player on $[s,T]$, i.e. $$\mathcal{U}_{\rm det}[s]=\{u:[s,T]\rightarrow U\mbox{ measurable}\}, \ \ \mathcal{V}_{\rm det}[s]=\{v:[s,T]\rightarrow V\mbox{ measurable}\}. $$ Let $(\Omega,\mathcal{F},\{\mathcal{F}_t\},P)$ be a filtered probability space. Extending the definition given in \cite[p. 135]{fleming_soner} to the stochastic game case, we say that the pair of stochastic processes $u$ and $v$ on $[s,T]$ is an admissible pair of controls if \begin{enumerate} \item $u(t)\in U$, $v(t)\in V$; \item the processes $u$ and $v$ are progressive measurable; \item for any $y\in \Sigma^h_d$ there exists an unique $\{\mathcal{F}_t\}_{t\in [s,T]}$-adapted c\`{a}dl\`{a}g stochastic process $X^h(t,s,y,u,v)$ taking values in $\Sigma^h_d$, starting at $y$ at time $s$ and satisfying the following condition \begin{equation}\label{dynkin} \mathbb{E}_{sy}^hf(X^h(t,s,y,u,v))-f(y)=\int_s^t\mathbb{E}_{sy}^h L_{t}^h[u(\tau),v(\tau)]f(X^h(\tau,s,y,u,v))d\tau. \end{equation} \end{enumerate} Here $\mathbb{E}_{sy}^h$ denotes the conditional expectation of corresponding stochastic processes. The purposes of the players can be reformulated in the following way. The first (respectively, second) player wishes to minimize (respectively, maximize) the value $$\mathbb{E}_{sy}^h\sigma(X_h(T,s,y,u,v)). $$ Let $\mathcal{U}^h[s]$ be a set of stochastic processes $u$ taking values in $U$ such that the pair $(u,v)$ is admissible for any $v\in\mathcal{V}_{\rm det}[s]$. Analogously, let $\mathcal{V}^h[s]$ be a set of stochastic processes $v$ taking values in $V$ such that the pair $(u,v)$ is admissible for any $u\in\mathcal{U}_{\rm det}[s]$. 
Denote by $P_{sy}^h(A)$ the conditional probability of the event $A$ under condition that the Markov chain corresponding to the parameter $h$ starts at $y$ at time $s$, i.e. $$P^h_{sy}(A)=\mathbb{E}_{sy}^h\mathbf{1}_{A}$$ Further, let $p^h(s,y,t,z,u,v)$ denote the transition probability i.e. $$p^h(s,y,t,z,u,v)=P_{sy}^h(X^h(t,s,y,u,v)=z)=\mathbb{E}_{sy}^h\mathbf{1}_{\{z\}}(X^h(t,s,y,u,v)). $$ The substituting $\mathbf{1}_{\{z\}}$ for $f$ in (\ref{generator}) and (\ref{dynkin}) gives that \begin{multline}\label{tran_prob}p^h(s,y,t,z,v)= p^h(s,y,s,z,u,v)\\ +\frac{1}{h}\int_{s}^t\mathbb{E}^h_{sy}\sum_{i,j=1}^dX_{h,i}(\tau,s,y,u,v)Q_{ij}(\tau,X_h(\tau,s,y,u,v),u(\tau),v(\tau)) \\\cdot[\mathbf{1}_z(X_h(\tau,s,y,u,v)-he^i+he^j)-\mathbf{1}_z(X_h(\tau,s,y,u,v))]d\tau. \end{multline} Here $X_{h,i}(\tau,s,y,u,v)$ denotes the $i$-th component of $X_h(\tau,s,y,u,v)$. Recall, see \cite{Kol}, that if $h\rightarrow 0$, then the generator $L_t^h[u,v]$ converges to the generator \begin{equation*}\begin{split}\Lambda_t[u,v]f(x)=&\sum_{i=1}^d\sum_{j\neq i}x_iQ_{ij}(t,x,u(t),v(t))\left[\frac{\partial f}{\partial x_j}(x)-\frac{\partial f}{\partial x_i}(x)\right]\\=&\sum_{k=1}^d\sum_{i\neq k}[x_iQ_{ik}(t,x,u(t),v(t))-x_kQ_{ki}(t,x,u(t),v(t))]\frac{\partial f}{\partial x_k}(x).\end{split}\end{equation*} For controls $u\in\mathcal{U}_{\rm det}[s]$ and $v\in\mathcal{V}_{\rm det}[s]$ the deterministic evolution generated by the $\Lambda_t[u(t),v(t)]$ is described by the equation \begin{multline}\label{deter_evol} \frac{d }{d t}f_t(x)=\sum_{k=1}^d\sum_{i\neq k}[x_iQ_{ik}(t,x,u(t),v(t))-x_kQ_{ki}(t,x,u(t),v(t))]\frac{\partial f_t}{\partial x_k}(x),\\ f_s(x)=f(x). \end{multline} Here the function $f_t(y)$ is equal to $f(x(t))$ when $x(s)=y$. The characteristics of (\ref{deter_evol}) solve the ODEs \begin{equation*}\begin{split}\frac{d}{dt}{x}_k(t)=&\sum_{i\neq k}[x_i(t)Q_{ik}(t,x(t),u(t),v(t))-x_k(t)Q_{ki}(t,x(t),u(t),v(t))]\\=&\sum_{i=1}^d x_i(t)Q_{ik}(t,x(t),u(t),v(t)). \end{split}\end{equation*} One can rewrite this equation in the vector form \begin{equation}\label{char_eq} \frac{d}{dt}{x}(t)=x(t)Q(t,x(t),u(t),v(t)),\ \ t\in [0,T], \ \ x(t)\in\mathbb{R}^n,\ \ u(t)\in U,\ \ v(t)\in V. \end{equation} For given $u\in\mathcal{U}_{\rm det}[s]$, $v\in \mathcal{V}_{\rm det}[s]$ denote the solution of initial value problem for~(\ref{char_eq}) and condition $x(s)=y$ by $x(\cdot,s,y,u,v)$. Consider the deterministic zero-sum game with the dynamics given by (\ref{char_eq}) and terminal payoff equal to $\sigma(x(T,s,y,u,v))$. This game has a value that is a continuous function of the position. Denote it by ${\rm Val}(s,y)$. Recall (see \cite{Subb_book}) that the function ${\rm Val}(s,y)$ is a minimax (viscosity) solution of the Hamilton--Jacobi PDE \begin{equation}\label{HJ_eq} \frac{\partial W}{\partial t}+H(t,x,\nabla W)=0,\ \ W(T,x)=\sigma(x). \end{equation} Here the Hamiltonian $H$ is defined by the rule $$H(t,x,\xi)=\min_{u\in U}\max_{v\in V}\langle \xi,xQ(t,x,u,v)\rangle. $$ \section{Control with guide strategies}\label{sect_cgs} In this section we introduce the control with guide strategies for the Markov game. It is assumed that the control is formed stepwise and the player has an information about the current state of the system i.e. the vector $x$ is known. Additionally, we assume that the player can evaluate the expected state and the player's control depends on current state of the system and on the evaluated state. This evaluation is called guide. 
At each time of control correction the player computes the value of the guide and the control that is used up to the next time of control correction. Formally (see \cite{Subb_chen}), control with guide strategy of player 1 is a triple $\mathfrak{u}=(u(t,x,w),\psi_1(t_+,t,x,w),\chi_1(s,y))$. Here the function $u(t,x,w)$ is equal to the control implemented after time $t$ if at time $t$ the state of the system is $x$ and the state of the guide is $w$. The function $\psi_1(t_+,t,x,w)$ determines the state of the guide at time $t_+$ under the condition that at time $t$ the state of the system is $x$ and the state of the guide is $w$. The function $\chi_1$ initializes the guide i.e. $\chi_1(s,y)$ is the state of the guide in the initial position $(s,y)$. We use the control with guide strategies for Markov game with the generator $L_t^h$. Here we assume that $h>0$ is fixed. Let $(s,y)$ be an initial position, $s\in [0,T]$ $y\in\Sigma_d^h$. Assume that player 1 chooses the control with guide strategy $\mathfrak{u}$ and the partition $\Delta=\{t_k\}_{k=0}^m$ of the time interval $[s,T]$; whereas player 2 chooses the control $v\in\mathcal{V}^h[s]$. This control can be also formed stepwise using some second player's control with guide strategy. We say that the stochastic process $\mathcal{X}_1^h[\cdot,s,y,\mathfrak{u},\Delta,v]$ is generated by strategy $\mathfrak{u}$, partition $\Delta$ and the second player's control $v$ if for $t\in [t_k,t_{k+1})$ $\mathcal{X}_1^h[t,s,y,\mathfrak{u},\Delta,v]=X^h(t,t_k,x_k,u_k,v), $ where \begin{itemize} \item $x_0=y$, $w_0=\chi_1(t_0,x_0)$, $u_0=u(t_0,x_0,w_0)$; \item for $k=\overline{1,r}$ $x_k=X^h(t_k,t_{k-1},x_{k-1},u_{k-1},v)$, $w_k=\psi_1(t_k,t_{k-1},x_{k-1},w_{k-1})$, $u_k=u(t_k,x_k,w_k)$. \end{itemize} Note that even though the state of the guide $w_k$ is determined by the deterministic function it depends on the random variable $x_{k-1}$. Thus, $w_k$ is a random variable. Below we define the first player's control with guide strategy that realizes the extremal shift rule (see \cite{NN_PDG_en}). Let $\varphi$ be a supersolution of equation (\ref{HJ_eq}). That means (see \cite{Subb_book}) that for any $(t_*,x_*)\in [0,T]\times \Sigma_{d}$, $t_+>t_*$ and $v_*\in V$ there exists a solution $\zeta_1(\cdot,t_+,t_*,x_*,v_*)$ of differential inclusion $$\dot{\zeta}_1(t)\in {\rm co}\{\zeta_1(t)Q(t,\zeta_1(t),u,v_*):u\in U\} $$ satisfying conditions $\zeta_1(t_*,t_+,t_*,x_*,v_*)=x_*$ and $\varphi(t_+,\zeta_1(t_+,t_+,t_*,x_*,v_*))\leq \varphi(t_*,x_*)$. Define the control with guide strategy $\hat{\mathfrak{u}}=(\hat{u},\hat{\psi}_1,\hat{\chi}_1)$ by the following rules. If $t_*,t_+\in [0,T]$, $t_+>t_*$, $x_*,w_*\in \Sigma_d$, then choose $u_*$, $v_*$ by the rules \begin{equation}\label{u_star_def} \min_{u\in U}\max_{v\in V}\langle x_*-w_*,x_*Q(t_*,x_*,u,v)\rangle=\max_{v\in V}\langle x_*-w_*,x_*Q(t_*,x_*,u_*,v)\rangle, \end{equation} \begin{equation}\label{v_star_def} \max_{v\in V}\min_{u\in U}\langle x_*-w_*,x_*Q(t_*,x_*,u,v)\rangle=\min_{u\in U}\langle x_*-w_*,x_*Q(t_*,x_*,u,v_*)\rangle. \end{equation} Put \begin{list}{\rm (u\arabic{tmp})}{\usecounter{tmp}} \item\label{str_def_1} $\hat{u}(t_*,x_*,w_*)=u_*$, \item\label{str_def_2} $\hat{\psi}_1(t_+,t_*,x_*,w_*)=\zeta_1(t_+,t_+,t_*,w_*,v_*)$, \item\label{str_def_3} $\hat{\chi}_1(s,y)=y$. 
\end{list} Note that if the first player uses the strategy $\hat{\mathfrak{u}}$ in the differential game with the dynamics given by (\ref{char_eq}) then she guarantees the limit outcome not greater then $\varphi$ (see~\cite{NN_PDG_en},~\cite{Subb_book}). If additionally $\varphi={\rm Val}$, then the strategy $\hat{\mathfrak{u}}$ is optimal in the deterministic game. The main result of the paper is the following. \begin{Th}\label{th_kras_sub_prob} Assume that $\sigma$ is Lipschitz continuous with a constant $R$, and the function $\varphi$ is a supersolution of (\ref{HJ_eq}). If the first player uses the control with guide strategy $\hat{\mathfrak{u}}$ determined by (u\ref{str_def_1})--(u\ref{str_def_3}) for the function $\varphi$ then \begin{list}{\rm (\roman{tmp})}{\usecounter{tmp}} \item \begin{multline*}\lim_{\delta\downarrow 0}\sup\{\mathbb{E}_{sy}^h(\sigma(\mathcal{X}_1^h[T,s,y,\hat{\mathfrak{u}},\Delta,v])):d(\Delta)\leq \delta,v\in \mathcal{V}^h[s]\}\\\leq \varphi(s,y)+R\sqrt{Dh}.\end{multline*} \item \begin{multline*} \lim_{\delta\downarrow 0}\sup\Bigl\{P_{sy}^h\Bigl(\sigma(\mathcal{X}_1^h[T,s,y,\hat{\mathfrak{u}},\Delta,v])\geq \varphi(s,y)+R\sqrt[3]{Dh}\Bigr):\\d(\Delta)\leq\delta,\ \ v\in\mathcal{V}^h[s]\Bigl\} \leq \sqrt[3]{Dh}. \end{multline*} \end{list} Here $D$ is a constant not dependent on $\varphi$ and $\sigma$. \end{Th} The theorem is proved in Section \ref{sect_proof}. Now let us consider the case when the second player uses control with guide strategies. The control with guide strategy of the second player is a triple $\mathfrak{v}=(v(t,x,w),\psi_2(t_+,t,x,w),\chi_2(s,y))$. Here $w$ denotes the state of the second player's guide. The control in this case is formed also stepwise. If $(s,y)$ is an initial position, $\Delta$ is a partition of time interval $[s,T]$ and $u\in\mathcal{U}^h[s]$ is a control of player 1 then denote by $\mathcal{X}_2^h[\cdot,s,y,\mathfrak{v},\Delta,u]$ the corresponding stochastic process. Let $\omega$ be a subsolution of equation (\ref{HJ_eq}). That means (see \cite{Subb_book}) that for any $(t_*,x_*)\in [0,T]\times \Sigma_{d}$, $t_+>t_*$ and $u^*$ there exists a trajectory $\zeta_2(\cdot,t_+,t_*,x_*,u^*)$ of the differential inclusion $$\dot{\zeta}_2(t)\in {\rm co}\{\zeta_2(t)Q(t,\zeta_2(t),u^*,v):v\in V\} $$ satisfying conditions $\zeta_2(t_*,t_+,t_*,x_*,u^*)=x_*$ and $\omega(t_+,\zeta_2(t_+,t_+,t_*,x_*,u^*))\geq \omega(t_*,x_*)$. Define the strategy $\hat{\mathfrak{v}}$ by the following rule. If $(t_*,x_*)$ is a position, $t_+>t_*$ and $w_*\in\Sigma_d$ is a state of the guide then choose $v^*$ and $u^*$ by the rules $$ \min_{v\in V}\max_{u\in U}\langle x_*-w_*,x_*Q(t_*,x_*,u,v)\rangle=\max_{u\in U}\langle x_*-w_*,x_*Q(t_*,x_*,u,v^*)\rangle, $$ $$ \max_{u\in U}\min_{v\in V}\langle x_*-w_*,x_*Q(t_*,x_*,u,v)\rangle=\min_{v\in V}\langle x_*-w_*,x_*Q(t_*,x_*,u^*,v)\rangle. $$ Put \begin{list}{\rm (v\arabic{tmp})}{\usecounter{tmp}} \item $v(t_*,x_*,w_*)=v^*$, \item $\psi_2(t_+,t_*,x_*,w_*)=\zeta_2(t_+,t_+,t_*,x_*,u^*)$ \item $\chi_2(s,y)=y$. 
\end{list} \begin{coll}\label{coll_second} If the second player uses the control with guide strategy $\hat{\mathfrak{v}}$ determined by (v1)--(v3) for the function $\omega$ that is a subsolution of (\ref{HJ_eq}), then \begin{list}{\rm (\roman{tmp})}{\usecounter{tmp}} \item \begin{multline*}\lim_{\delta\downarrow 0}\inf\{\mathbb{E}_{sy}^h(\sigma(\mathcal{X}_2^h[T,s,y,\hat{\mathfrak{v}},\Delta,u])):d(\Delta)\leq \delta,u\in\mathcal{U}^h[s]\}\\\geq \omega(s,y)-R\sqrt{Dh}.\end{multline*} \item \begin{multline*} \lim_{\delta\downarrow 0}\sup\Bigl\{P_{sy}^h\Bigl(\sigma(\mathcal{X}_2^h[T,s,y,\hat{\mathfrak{v}},\Delta,u])\leq \omega(s,y)-R\sqrt[3]{Dh}\Bigr):\\d(\Delta)\leq\delta,\ \ u\in\mathcal{U}^h[s]\Bigr\} \leq \sqrt[3]{Dh}. \end{multline*}\end{list} \end{coll} The corollary is also proved in Section \ref{sect_proof}. \section{Properties of transition probabilities}\label{sect_prop} Now we prove the following. \begin{Lm}\label{lm_trans_prob} There exists a function $\alpha^h(\delta)$ such that $\alpha^h(\delta)\rightarrow 0$ as $\delta\rightarrow 0$ and for any $t_*,t_+\in [0,T]$, $\xi,\eta\in \Sigma_d$, $\xi=(\xi_1,\ldots,\xi_d)$, $\bar{u}\in U$, $\bar{v}\in \mathcal{V}^h[t_*]$ \begin{enumerate} \item if $\eta=\xi$, then \begin{multline*}p^h(t_*,\xi,t_+,\eta,\bar{u},\bar{v})\\\leq 1+\frac{1}{h}\sum_{k=1}^d\int_{t_*}^{t_+}\int_{V} \xi_kQ_{kk}(t_*,\xi,\bar{u},v)\nu_\tau (dv)d\tau+\alpha^h(t_+-t_*)\cdot(t_+-t_*);\end{multline*} \item if $\eta=\xi-h e^i+he^j$, then $$p^h(t_*,\xi,t_+,\eta,\bar{u},\bar{v})\leq \frac{1}{h}\int_{t_*}^{t_+}\int_{V} \xi_iQ_{ij}(t_*,\xi,\bar{u},v)\nu_\tau(dv) d\tau+\alpha^h(t_+-t_*)\cdot(t_+-t_*); $$ \item if $\eta\neq \xi$ and $\eta\neq \xi-h e^i+he^j$, then $$p^h(t_*,\xi,t_+,\eta,\bar{u},\bar{v})\leq \alpha^h(t_+-t_*)\cdot(t_+-t_*). $$ \end{enumerate} Here $\nu_\tau$ is a measure on $V$ depending on $t_*,t_+$, $\xi$, $\eta$, $\bar{u}$ and $\bar{v}$. \end{Lm} \begin{proof} First denote \begin{equation}\label{K_def} K=\sup\{|Q_{ij}(t,x,u,v)|:i,j=\overline{1,d},\ \ t\in[0,T],\ \ x\in \Sigma_d,\ \ u\in U,\ \ v\in V\}. \end{equation} Note that for any $x\in\Sigma_d$, $t\in [0,T]$, $u\in U$, $v\in V$ the following estimates hold true \begin{equation}\label{sigma_d_property} \|x\|\leq \sqrt{d},\ \ \left|\sum_{i=1}^d x_iQ_{ij}(t,x,u,v)\right|\leq K,\ \ \|xQ(t,x,u,v)\|\leq K\sqrt{d}. \end{equation} Further, let $\gamma(\delta)$ be a common modulus of continuity with respect to $t$ of the functions $Q_{ij}$, i.e. for all $i$, $j$, $t',t''\in [0,T]$, $x\in\Sigma_d$, $u\in U$, $v\in V$ \begin{equation}\label{gamma_def} |Q_{ij}(t',x,u,v)-Q_{ij}(t'',x,u,v)|\leq \gamma(t''-t') \end{equation} and $\gamma(\delta)\rightarrow 0$ as $\delta\rightarrow 0$. From (\ref{tran_prob}) and (\ref{sigma_d_property}) we obtain that \begin{equation}\label{prob_der_estima} p^h(t_*,\xi,t,\eta,\bar{u},\bar{v})\leq p^h(t_*,\xi,t_*,\eta,\bar{u},\bar{v})+\frac{2Kd}{h}(t-t_*). \end{equation} Further, for a given control $\bar{v}\in\mathcal{V}^h[t_*]$ let $\mathbb{E}^h_{t_*\xi;\tau x}$ denote the expectation under the conditions $X^h(t_*,t_*,\xi,\bar{u},\bar{v})=\xi$ and $X^h(\tau,t_*,\xi,\bar{u},\bar{v})=x$. We have that $$\mathbb{E}_{t_*\xi}^hf=\sum_{x\in\Sigma_d^h}\mathbb{E}^h_{t_*\xi;\tau x}f\cdot p^h(t_*,\xi,\tau,x,\bar{u},\bar{v}).
$$ From this and (\ref{tran_prob}) we get \begin{equation*}\begin{split} p^h(t_*,\xi,t,\eta,\bar{u}&,\bar{v})=p^h(t_*,\xi,t_*,\eta,\bar{u},\bar{v})\\+\frac{1}{h}\int_{t_*}^t\sum_{x\in\Sigma_d^h}\mathbb{E}^h_{t_*\xi;\tau x} &\sum_{i,j=1}^d x_iQ_{i,j}(\tau,x,\bar{u},\bar{v}(\tau))[\mathbf{1}_\eta(x-he^i+he^j)-\mathbf{1}_\eta(x)]\\\cdot &p^h(t_*,\xi,\tau,x,\bar{u},\bar{v})d\tau \leq p^h(t_*,\xi,t_*,\eta,\bar{u},\bar{v})\\ +\frac{1}{h}\int_{t_*}^t&\sum_{x\in\Sigma_d^h}\mathbb{E}^h_{t_*\xi;\tau x} \sum_{i,j=1}^d x_iQ_{i,j}(\tau,x,\bar{u},\bar{v}(\tau))[\mathbf{1}_\eta(x-he^i+he^j)-\mathbf{1}_\eta(x)]\\\cdot &p^h(t_*,\xi,t_*,x,\bar{u},\bar{v})d\tau+\frac{2K^2d^2}{h}(t-t_*)^2. \end{split}\end{equation*} We have that $p^h(t_*,\xi,t_*,x,\bar{u},\bar{v})=1$ for $x=\xi$ and $p^h(t_*,\xi,t_*,x,\bar{u},\bar{v})=0$ for $x\neq \xi$. Thus, \begin{equation*}\begin{split} p^h(t_*,\xi,t,\eta,\bar{u},\bar{v}&)\leq p^h(t_*,\xi,t_*,\eta,\bar{u},\bar{v}) \\ +\frac{1}{h}\int_{t_*}^t\mathbb{E}^h_{t_*\xi;\tau \xi} \sum_{i,j=1}^d &\xi_iQ_{i,j}(\tau,\xi,\bar{u},\bar{v}(\tau))[\mathbf{1}_\eta(\xi-he^i+he^j)-\mathbf{1}_\eta(\xi)]d\tau +\frac{2K^2d^2}{h}(t-t_*)^2\\ \leq p^h(t_*,\xi,t_*,\eta,\bar{u},&\bar{v}) \\ +\frac{1}{h}\int_{t_*}^t\mathbb{E}^h_{t_*\xi;\tau \xi} &\sum_{i,j=1}^d \xi_iQ_{i,j}(t_*,\xi,\bar{u},\bar{v}(\tau))[\mathbf{1}_\eta(\xi-he^i+he^j)-\mathbf{1}_\eta(\xi)]d\tau\\ &+\frac{2K^2d^2}{h}(t-t_*)^2+\frac{2d}{h}\gamma(t-t_*)\cdot(t-t_*). \end{split}\end{equation*} There exists a measure $\nu_\tau$ on $V$ such that $$\mathbb{E}_{t_*\xi;\tau \xi}Q_{ij}(t_*,\xi,\bar{u},\bar{v}(\tau))=\int_VQ_{ij}(t_*,\xi,\bar{u},v)\nu_\tau(dv). $$ Consequently, \begin{multline}\label{trans_prob_estima} p^h(t_*,\xi,t,\eta,\bar{u},\bar{v})\leq p^h(t_*,\xi,t_*,\eta,\bar{u},\bar{v}) \\+\frac{1}{h}\int_{t_*}^t\int_{V} \sum_{i,j=1}^d \xi_iQ_{i,j}(t_*,\xi,\bar{u},v)[\mathbf{1}_\eta(\xi-he^i+he^j)-\mathbf{1}_\eta(\xi)]\nu_\tau(dv)d\tau\\ +\alpha^h(t-t_*)\cdot (t-t_*). \end{multline} Here we denote $$\alpha^h(\delta)=\frac{2K^2d^2}{h}\delta+\frac{2d}{h}\gamma(\delta). $$ From (\ref{trans_prob_estima}) the second and third statements of the Lemma follow. To derive the first statement, use the property (\ref{Kolmogorov}) of Kolmogorov matrices. We have that \begin{multline*} p^h(t_*,\xi,t,\xi,\bar{u},\bar{v})\leq p^h(t_*,\xi,t_*,\xi,\bar{u},\bar{v}) \\-\frac{1}{h}\int_{t_*}^t\int_{V} \sum_{i=1}^d\sum_{j\neq i} \xi_iQ_{i,j}(t_*,\xi,\bar{u},v)\nu_\tau(dv)d\tau +\alpha^h(t-t_*)\cdot (t-t_*)\\= p^h(t_*,\xi,t_*,\xi,\bar{u},\bar{v}) +\frac{1}{h}\int_{t_*}^t\int_{V} \sum_{i=1}^d \xi_iQ_{i,i}(t_*,\xi,\bar{u},v)\nu_\tau(dv)d\tau +\alpha^h(t-t_*)\cdot (t-t_*). \end{multline*} \end{proof} \section{Key estimate}\label{sect_estima} This section provides an estimate of the distance between the controlled Markov chain and the guide. This estimate is an analog of \cite[Lemma 2.3.1]{NN_PDG_en}. \begin{Lm}\label{lm_kras_subb} There exist constants $\beta,C>0$, and a function $\varkappa^h(\delta)$ such that $\varkappa^h(\delta)\rightarrow 0$ as $\delta\rightarrow 0$ and the following property holds true. \noindent If\begin{enumerate} \item $(t_*,x_*)\in [0,T]\times \Sigma_{d}^h$, $w_*\in\Sigma_d$, $t_+>t_*$, \item the controls $u_*$, $v_*$ are chosen by rules (\ref{u_star_def}) and (\ref{v_star_def}) respectively, \item $w_+=\zeta_1(t_+,t_+,t_*,w_*,v_*)$, \end{enumerate} then for any $v\in\mathcal{V}^h[t_*]$ \begin{multline*}\mathbb{E}_{t_*x_*}^h(\|\mathcal{X}(t_+,t_*,x_*,u_*,v)-w_+\|^2)\\\leq (1+\beta(t_+-t_*))\|x_*-w_*\|^2+Ch(t_+-t_*)+\varkappa^h(t_+-t_*)\cdot(t_+-t_*).
\end{multline*} \end{Lm} \begin{proof} Denote the $i$-th component of vector $x_*$ by $x_{*i}$. We have that \begin{equation}\label{expectation} \mathbb{E}_{t_*x_*}^h(\|\mathcal{X}(t_+,t_*,x_*,u_*,v)-w_+\|^2)=\sum_{z\in\Sigma_d^h}\|z-w_+\|^2p(t_*,x_*,t_+,z,u_*,v). \end{equation} Further, \begin{equation*}\begin{split}\|z-w_+\|^2=\|(z-x_*)&+(x_*-w_*)+(w_*-w_+)\|^2\\ =\|x_*-w_*&\|^2+2\langle x_*-w_*,z-x_*\rangle\\-2\langle &x_*-w_*,w_+-w_*\rangle +\|z-x_*\|^2+\|w_+-w_*\|^2. \end{split}\end{equation*} It follows from (\ref{sigma_d_property}) that \begin{equation}\label{w_estimate} \left\|\frac{d}{dt}\zeta_1(t_+,t,t_*,w_*,v_*)\right\|\leq K\sqrt{d},\ \ \|w_+-w_*\|^2\leq K^2d(t_+-t_*)^2. \end{equation} From Lemma \ref{lm_trans_prob} it follows that \begin{equation}\label{z_estima}\begin{split} \sum_{z\in\Sigma_d^h}\|z-x_*\|^2p(t_*,x_*,t_+,z,u_*,v&)\\\leq \sum_{i=1}^d\sum_{j\neq i}\|-he^i+he^j&\|^2\frac{1}{h}\int_{t_*}^{t_+}\int_VQ_{ij}(t_*,x_*,u_*,v)\nu_\tau(dv)d\tau\\ &+2d^3\alpha^h(t_+-t_*)\cdot(t_+-t_*) \\ \leq 2hd^2K(t_+-t_*)+2&d^3\alpha^h(t_+-t_*)\cdot(t_+-t_*). \end{split}\end{equation} For simplicity denote $\zeta_*(t)=\zeta_1(t,t_+,t_*,w_*,u,v_*)$. We have that for each $t$ there exists a probability $\mu_t$ on $V$ such that $$\frac{d\zeta_*}{dt}(t)=\int_{u\in U}\zeta_*(t)Q(t,\zeta_*(t),u,v_*)\mu_t(du). $$ Therefore, \begin{equation}\label{inner_transform}\begin{split} \sum_{z\in\Sigma_d^h}\langle x_*-w_*,&w_+-w_*\rangle p(t_*,x_*,t_+,z,u_*,v)\\=\Bigl\langle x_*-&w_*, \int_{t_*}^{t_+}\int_{u\in U}\zeta_*(t)Q(t,\zeta_*(t),u,v_*)\mu_t(du)dt\Bigr\rangle. \end{split}\end{equation} Define \begin{equation}\label{rho_def}\begin{split}\varrho(\delta&)\triangleq\sup\{|y''Q(t'',y'',u,v)-y'Q(t',y',u,v)|:\\ &t',t''\in [0,T], \ \ y',y''\in \Sigma_d,\ \ u\in U,\ \ v\in V,\ \ |t'-t''|\leq \delta,\ \ \|y'-y''\|\leq \delta K\sqrt{d}\}. \end{split}\end{equation} We have that $\varrho(\delta)\rightarrow 0$, as $\delta\rightarrow 0$. From (\ref{w_estimate}), (\ref{inner_transform}), and (\ref{rho_def}) it follows that \begin{equation}\label{cross_estima_guide}\begin{split} &\sum_{z\in\Sigma_d^h}\langle x_*-w_*,w_+-w_*\rangle p(t_*,x_*,t_+,z,u_*,v)\\ &\geq \left\langle x_*-w_*,\int_{t_*}^{t_+}\int_{u\in U}w_*Q(t_*,w_*,u,v_*)\mu_t(du)dt\right\rangle-\sqrt{2d}\varrho(t_+-t_*)\cdot(t_+-t_*). \end{split}\end{equation} Using Lemma \ref{lm_trans_prob} one more time we get the inequality \begin{equation}\label{cross_estima_state_pre}\begin{split} \sum_{z\in\Sigma_d^h}\langle x_*-w_*,z-x_*\rangle p(t_*,x_*,t_+,z,u_*,v&)\\ \leq \sum_{i=1}^d\sum_{j\neq i}\langle x_*-w_*,-he^i+he^j&\rangle\frac{1}{h}\int_{t_*}^{t_+}\int_V x_{*i}Q_{ij}(t_*,x_*,u_*,v)\nu_t(dv)dt\\&+2d^3\alpha^h(t_+-t_*)\cdot(t_+-t_*). \end{split}\end{equation} The first term in the right-hand side of (\ref{cross_estima_state_pre}) can be transformed as follows. Denote for simplicity $$\widehat{Q}_{ij}=\int_{t_*}^{t_+}\int_V Q_{ij}(t_*,x_*,u_*,v)\nu_t(dv)dt .$$ Note that $\widehat{Q}=(\widehat{Q}_{ij})_{i,j=1}^d$ is a Kolmogorov matrix. That means that $$-\sum_{j\neq i}\widehat{Q}_{ij}=\widehat{Q}_{ii}. $$ We have that \begin{equation*}\begin{split} \sum_{i=1}^d\sum_{j\neq i}(-he^i+he^j&)\frac{1}{h}\int_{t_*}^{t_+}\int _V x_{*i}Q_{ij}(t_*,x_*,u_*,v)\nu_t(dv)dt \\ &=\sum_{i=1}^d\sum_{j\neq i}e^j x_{*,i}\widehat{Q}_{ij}-\sum_{i=1}^d x_{*,i}e^i\sum_{j\neq i} \widehat{Q}_{ij}=\\ &=\sum_{i=1}^d \sum_{j=1}^d e^j x_{*,i}\widehat{Q}_{ij}=\sum_{j=1}^d\left[\sum_{i=1}^d x_{*,i}\widehat{Q}_{ij}\right]e^j=x_*\widehat{Q}. 
\end{split}\end{equation*} This and (\ref{cross_estima_state_pre}) yield the estimate \begin{equation}\label{cross_estima_state_fin}\begin{split} \sum_{z\in\Sigma_d^h}&\langle x_*-w_*,z-x_*\rangle p(t_*,x_*,t_+,z,u_*,v)\\ \leq &\left\langle x_*-w_*,\int_{t_*}^{t_+}\int_V x_*Q(t_*,x_*,u_*,v)\nu_t(dv)dt\right\rangle +2d^3\alpha^h(t_+-t_*)\cdot(t_+-t_*). \end{split}\end{equation} Substituting (\ref{w_estimate})--(\ref{cross_estima_guide}), (\ref{cross_estima_state_fin}) in (\ref{expectation}) we get the estimate \begin{equation}\label{expec_stima_pre}\begin{split} \mathbb{E}_{t_*x_*}^h(\|\mathcal{X}(t_+,&t_*,x_*,u_*,v)-w_+\|^2)\leq \|x_*-w_*\|^2\\ &+ 2\left\langle x_*-w_*,\int_{t_*}^{t_+}\int_V x_*Q(t_*,x_*,u_*,v)\nu_t(dv)dt\right\rangle\\ &-2\left\langle x_*-w_*,\int_{t_*}^{t_+}\int_{u\in U} w_*Q(t_*,w_*,u,v_*)\mu_t(du)dt\right\rangle\\ &+2Kd^2 h(t_+-t_*)+ (6d^3\alpha^h(t_+-t_*)+\sqrt{2d}\varrho(t_+-t_*))\cdot(t_+-t_*). \end{split}\end{equation} Let $L$ be a Lipschitz constant of the function $y\mapsto yQ(t,y,u,v)$ i.e. for all $y',y''\in\Sigma_d$, $t\in [0,T]$, $u\in U$, $v\in Q$ $$\|y'Q(t,y',u,v)-y''Q(t,y'',u,v)\|\leq L\|y'-y''\|. $$ We have that \begin{equation*}\begin{split} 2\Bigl\langle x_*-w_*,\int_{t_*}^{t_+}&\int_V x_*Q(t_*,x_*,u_*,v)\nu_t(dv)dt\Bigr\rangle\\&-2\Bigl\langle x_*-w_*,\int_{t_*}^{t_+}\int_{u\in U} w_*Q(t_*,w_*,u,v_*)\mu_t(du)dt\Bigr\rangle \\ \leq 2\int_{t_*}^{t_+}\int_{u\in U}&\int_{v\in V}\Bigl [\Bigl\langle x_*-w_*, x_*Q(t_*,x_*,u_*,v)\Bigr\rangle\\-\Bigl\langle x_*-&w_*, x_*Q(t_*,x_*,u,v_*)\Bigr\rangle\Bigr]\nu_t(dv)\mu_t(du)dt\\&+2L\|x_*-w_*\|^2(t_+-t_*). \end{split}\end{equation*} The choice of $u_*$ and $v_*$ gives that for all $u\in U$ and $v\in V$ $$\langle x_*-w_*, x_*Q(t_*,x_*,u_*,v)\rangle\leq \langle x_*-w_*, x_*Q(t_*,x_*,u,v_*)\rangle. $$ Consequently, we get the estimate \begin{equation}\label{extr_shift_fin}\begin{split} 2\Bigl\langle x_*-w_*,\int_{t_*}^{t_+}\int_V x_*Q(t_*,x_*,u_*,v)\nu_t&(dv)dt\Bigr\rangle\\-2\Bigl\langle x_*-w_*,\int_{t_*}^{t_+}\int_{u\in U} w_*Q(t_*,w_*,&u,v_*)\mu_t(du)dt\Bigr\rangle \\ &\leq 2L\|x_*-w_*\|^2(t_+-t_*). \end{split}\end{equation} From (\ref{expec_stima_pre}) and (\ref{extr_shift_fin}) the conclusion of the Lemma follows for $$\beta=2L,\ \ C=2d^2K, \ \ \varkappa^h(\delta)=6d^3\alpha^h(\delta)+\sqrt{2d}\varrho(\delta). $$\end{proof} \section{Near Optimal Strategies}\label{sect_proof} In this section we prove Theorem \ref{th_kras_sub_prob} and Corollary \ref{coll_second}. \begin{proof}[Proof of Theorem \ref{th_kras_sub_prob}] Let $v\in\mathcal{V}^h[s]$ be a control of the second player. Consider a partition $\Delta=\{t_k\}_{k=1}^m$ of the time interval $[s,T]$. If $x_0,x_1,\ldots,x_m$ are vectors, $x_0=y$ then denote by $\hat{p}^h_r(x_1,\ldots,x_r,\Delta)$ the probability of the event $\mathcal{X}_1^h[t_k,s,y,\hat{\mathfrak{u}},\Delta,v]=x_k$ for $k=\overline{1,r}$. Define vectors $w_0,\ldots,w_m$ recursively in the following way. Put \begin{equation}\label{w_0_def} w_0\triangleq\hat{\chi}_1(s,y)=y, \end{equation} for $k>0$ put \begin{equation}\label{w_k_def} w_k\triangleq\hat{\psi}_1(t_k,t_{k-1},x_{k-1},w_{k-1}). \end{equation} If $w_0,\ldots,w_m$ are defined by rules (\ref{w_0_def}), (\ref{w_k_def}) and $r\in \overline{1,n}$ we write $$(w_0,\ldots,w_r)=g_r(x_0,\ldots,x_{r-1},\Delta). $$ In addition, put $g_0(\Delta)\triangleq y$. 
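In the estimates below the one-step bound of Lemma \ref{lm_kras_subb} will be iterated along the partition $\Delta$. For the reader's convenience we record the elementary fact that is used: if nonnegative numbers $a_0,\ldots,a_m$ satisfy $a_r\leq (1+\beta(t_r-t_{r-1}))a_{r-1}+c\,(t_r-t_{r-1})$ for $r=\overline{1,m}$, then $$a_m\leq \prod_{r=1}^m(1+\beta(t_r-t_{r-1}))a_0+c\sum_{r=1}^m(t_r-t_{r-1})\prod_{j>r}(1+\beta(t_j-t_{j-1}))\leq e^{\beta(T-s)}\bigl(a_0+c(T-s)\bigr); $$ the exponential factor can be absorbed into the constants.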
Below we use the transformation $G(\cdot,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])$ of the stochastic $\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v]$ defined in the following way. If $x_i$ are values of $\mathcal{X}^h_1[t_i,s,y,\hat{\mathfrak{u}},\Delta,v]$, $i=0,\ldots,r$, and $(w_0,\ldots,w_r)=g_r(x_0,\ldots,x_{r-1},\Delta)$, then we put $$G(t_r,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\triangleq w_r. $$ Generally, the stochastic process $G(\cdot,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])$ is non-Markov. Further, if $u_i=\hat{u}(t_i,x_i,w_i)$, $i=0,\ldots,r$, and $(w_0,\ldots,w_r)=g_r(x_0,\ldots,x_{r-1},\Delta)$, we write $\varsigma_r(x_0,\ldots,x_r,\Delta)\triangleq u_r$. We have that for any $r\in \overline{1,m}$ \begin{equation}\label{first_expectation_estima}\begin{split} \mathbb{E}_{sy}^h(\|\mathcal{X}^h_1[t_r,s,y,\hat{\mathfrak{u}},\Delta,&v]- G(t_r,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\|^2) \\= \sum_{x_1,\ldots,x_r}\|x_r-g_r(x_0,&\ldots,x_{r-1},\Delta)\|^2\hat{p}_r(x_0,\ldots,x_{r},\Delta)\\= \sum_{x_1,\ldots,x_{r-1}}\hat{p}_{r-1}(x_0,&\ldots,x_{r-1},\Delta) \cdot\sum_{x_r}\|x_r-g_r(x_0,\ldots,x_{r-1},\Delta)\|^2 \\\cdot &P_{t_{r-1}x_{r-1}}^h(X(t_r,t_{r-1},x_{r-1},\varsigma_{r-1}(x_0,\ldots,x_{r-1}),v)=x_r). \end{split}\end{equation} By Lemma \ref{lm_kras_subb} we have that \begin{equation*}\begin{split} \sum_{x_r}\|x_r-g_r(x_1,\ldots,x_{r-1},&\Delta))\|^2 \\\cdot P_{t_{r-1}x_{r-1}}^h(X(t_r,&t_{r-1},x_{r-1},\varsigma_{r-1}(x_0,\ldots,x_{r-1}),v)=x_r) \\ \leq (1+\beta(t_{r}-t_{r-1}))\|x_{r-1}&-g_{r-1}(x_0,\ldots,x_{r-2},\Delta)\|^2\\ +Ch\cdot(t_r-&t_{r-1})+\varkappa^h(t_r-t_{r-1})\cdot(t_r-t_{r-1}). \end{split}\end{equation*} From this and (\ref{first_expectation_estima}) it follows that \begin{equation}\label{ineq_r}\begin{split} \mathbb{E}_{sy}^h(\|\mathcal{X}^h_1[t_r,s,y,\hat{\mathfrak{u}},&\Delta,v]- G(t_r,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\|^2)\\ \leq (1+&\beta(t_{r}-t_{r-1}))\mathbb{E}_{sy}^h(\|x_{r-1}-g_{r-1}(x_0,\ldots,x_{r-2})\|^2)\\&+ Ch\cdot(t_r-t_{r-1})+\varkappa^h(t_r-t_{r-1})\cdot(t_r-t_{r-1}). \end{split}\end{equation} Applying this inequality recursively we get \begin{equation*}\begin{split} \mathbb{E}_{sy}^h(\|\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v&]- G(T,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\|^2)\\ \leq \exp(&\beta (T-s))\mathbb{E}_{sy}^h(\|x_0-g_0(\Delta)\|^2)\\&+Ch\cdot (T-s)+\varkappa^h(d(\Delta))\cdot(T-s). \end{split}\end{equation*} Taking into account the equality $x_0=y=g_0(\Delta)$ we conclude that \begin{equation}\label{final_estima} \mathbb{E}_{sy}^h(\|\mathcal{X}^h_1[T,s,y,\hat{\mathfrak{u}},\Delta,v]- G(T,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\|^2)\\ \leq \epsilon(h,d(\Delta)). \end{equation} Here we denote $$\epsilon(h,\delta)\triangleq Dh+T\varkappa^h(\delta),\ \ D\triangleq CT.$$ Note that for any $h$ \begin{equation}\label{epsilon_convergence} \epsilon(h,\delta)\rightarrow Dh,\mbox{ as } \delta\rightarrow 0. \end{equation} From (\ref{final_estima}) and Jensen's inequality we get \begin{equation}\label{expect_estima} \mathbb{E}_{sy}^h(\|\mathcal{X}^h_1[T,s,y,\hat{\mathfrak{u}},\Delta,v]- G(T,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\|)\leq\sqrt{\epsilon(d(\Delta),h)}. 
\end{equation} By the construction of the control with guide strategy $\hat{\mathfrak{u}}$, \begin{equation*}\begin{split}\varphi(s,y)=\varphi(t_0,g_0(\Delta))&\geq\varphi(t_1,g_1(x_0,\Delta)) \geq\ldots\\&\geq\varphi(t_m,g_m(x_0,\ldots,x_{m-1},\Delta))=\sigma(g_m(x_0,\ldots,x_{m-1},\Delta)). \end{split}\end{equation*} Hence, \begin{equation}\label{sigma_varphi} \sigma(G(T,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v]))\leq \varphi(s,y). \end{equation} Since $\sigma$ is Lipschitz continuous with the constant $R$, we have that for any partition $\Delta$ and second player's control $v$ $$\sigma(\mathcal{X}^h_1[T,s,y,\hat{\mathfrak{u}},\Delta,v])\leq \varphi(s,y)+R\|\mathcal{X}^h_1[T,s,y,\hat{\mathfrak{u}},\Delta,v]- G(T,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\|. $$ This and (\ref{expect_estima}) give the inequality $$\mathbb{E}_{sy}^h\sigma(\mathcal{X}^h_1[T,s,y,\hat{\mathfrak{u}},\Delta,v])\leq \varphi(s,y)+R\sqrt{\epsilon(h,d(\Delta))}. $$ Passing to the limit as $d(\Delta)\rightarrow 0$ and taking into account the property $\epsilon(h,\delta)\rightarrow Dh$, as $\delta\rightarrow 0$ (see (\ref{epsilon_convergence})), we obtain the first statement of the Theorem. Now let us prove the second statement of the Theorem. Using the Markov inequality and (\ref{final_estima}) we get \begin{equation*}\begin{split}P\bigl(\|&\mathcal{X}^h_1[T,s,y,\hat{\mathfrak{u}},\Delta,v]- G(T,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\|\geq[\epsilon(h,d(\Delta))]^{1/3}\bigr)\\ &=P\bigl(\|\mathcal{X}^h_1[T,s,y,\hat{\mathfrak{u}},\Delta,v]- G(T,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\|^2\geq [\epsilon(h,d(\Delta))]^{2/3}\bigr)\\ &\leq \frac{\mathbb{E}_{sy}^h(\|\mathcal{X}^h_1[T,s,y,\hat{\mathfrak{u}},\Delta,v]- G(T,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\|^2)}{[\epsilon(h,d(\Delta))]^{2/3}} \leq \sqrt[3]{\epsilon(h,d(\Delta))}. \end{split}\end{equation*} Lipschitz continuity of the function $\sigma$ and (\ref{sigma_varphi}) yield the inclusion \begin{equation*}\begin{split} \{\sigma(\mathcal{X}_1^h[T,s,y,\hat{\mathfrak{u}},\Delta,v]&)\geq\varphi(s,y)+R[\epsilon(h,d(\Delta))]^{1/3}\}\subset \\\bigl\{\|\mathcal{X}^h_1[T,s,&y,\hat{\mathfrak{u}},\Delta,v]- G(T,\mathcal{X}^h_1[\cdot,s,y,\hat{\mathfrak{u}},\Delta,v])\|\geq[\epsilon(h,d(\Delta))]^{1/3}\bigr\}. \end{split}\end{equation*} Finally, for any partition $\Delta$ and any second player's control $v\in\mathcal{V}^h[s]$ we have that $$P\{\sigma(\mathcal{X}_1^h[T,s,y,\hat{\mathfrak{u}},\Delta,v])\geq\varphi(s,y)+ R[\epsilon(h,d(\Delta))]^{1/3}\}\leq [\epsilon(h,d(\Delta))]^{1/3}. $$ From this the second statement of the Theorem follows. \end{proof} To prove Corollary \ref{coll_second} it suffices to replace the payoff function with $-\sigma$ and interchange the players. \section{Conclusion} In this paper we applied a deterministic strategy that is optimal for the deterministic zero-sum game to the Markov game describing an interacting particle system. We showed that it is near optimal. We considered control with guide strategies. This type of strategy requires a computer to store and compute a finite-dimensional vector that serves as an evaluation of the current position. The question of whether there exists a deterministic feedback strategy that is optimal for the differential game and near optimal for the Markov game remains open. We restricted our attention to the Markov game describing interacting particle systems. The extension of the results of the paper to the general case is a subject of future work.
The author would like to thank Vassili Kolokoltsov for insightful discussions. \end{document}
\begin{document} \begin{frontmatter} \title{Adjacent Vertex Distinguishing Total Coloring of Corona Product of Graphs\footnote{Conflict of Interest: Author A declares that she has no conflict of interest. Author B has received research grants from UNAM - grant PAPIIT-UNAM-IN117219}} \author[label1]{Hanna Furma\'nczyk\corref{cor1}} \address[label1]{Institute of Informatics, Faculty of Mathematics, Physics and Informatics,\\University of Gda\'nsk, Wita Stwosza 57, 80-309 Gda\'nsk, Poland} \ead{[email protected]} \cortext[cor1]{Corresponding author} \author[label5]{Rita Zuazua} \address[label5]{Department of Mathematics, Faculty of Sciences, National Autonomous University of Mexico, Ciudad Universitaria, Coyoacan, 04510 Mexico, DF, Mexico} \ead{[email protected]} \begin{abstract} An adjacent vertex distinguishing total $k$-coloring $f$ of a graph $G$ is a proper total $k$-coloring of $G$ such that no pair of adjacent vertices has the same color set, where the color set at a vertex $v$, $C^G_f(v)$, is $\{f(v)\} \cup \{f(vu)|u \in V (G), vu \in E(G)\}$. In 2005 Zhang et al. posed the conjecture (AVDTCC) that every simple graph $G$ has an adjacent vertex distinguishing total $(\Delta(G)+3)$-coloring. In this paper we confirm the conjecture for many coronas, in particular for generalized, simple and $l$-coronas of graphs, not relating the results to particular graph classes. \end{abstract} \begin{keyword} corona graph \sep $l$-corona \sep generalized corona graph \sep adjacent vertex distinguishing total coloring \sep AVDTC Conjecture \MSC 05C15 \sep 05C76 \sep 68R10 \end{keyword} \end{frontmatter} \section{Introduction} The processes occurring in the world around us can very often be modeled in the language of graph theory. The graph coloring problems, in their vertex, edge and total versions, are among the best known problems of graph theory. Proper total coloring was considered for the first time by Rosenfeld in 1970 \cite{rosen}. In the first decade of this century a new concept appeared in the topic of graph colorings. Many researchers considered colorings (proper, total or from lists) such that vertices (all or adjacent) are distinguished either by sets, multisets or sums. In this paper we investigate the problem of proper total colorings distinguishing adjacent vertices by sets. Let $G = (V, E)$ be a simple graph with maximum degree $\Delta(G)$. Let $[k]$ denote the set $\{1,\ldots,k\}$ for any positive integer $k$. Suppose that $f : V \cup E \rightarrow [k]$ is a proper total coloring of $G$, i.e. no two adjacent edges, no two adjacent vertices, and no edge and its endvertices are assigned the same color. The smallest number $k$ admitting such a proper total $k$-coloring is called the \emph{total chromatic number} and is denoted by $\chi''(G)$. Clearly, $\chi''(G)\geq \Delta(G)+1$. Vizing \cite{Vizing}, and independently Behzad et al. \cite{behzad}, posed the following conjecture. \begin{conjecture}[TCC, \cite{behzad,Vizing}] For any graph $G$, $\chi''(G) \leq \Delta(G)+2.$\label{tcc:conj} \end{conjecture} For a given proper total $k$-coloring of $G$ and for a vertex $v\in V(G)$, let $C_f(v)$ denote the \emph{color set} of $v$ with respect to $f$, i.e. the set $\{f(v)\} \cup \{f(vu)|u \in V (G), vu \in E(G)\}$. Sometimes we will consider the color set restricted to some subgraph of $G$. Let $H$ be a subgraph of $G$ and let $v\in V(H)$. Then, $C^H_f(v)$ denotes the set $\{f(v)\} \cup \{f(vu)|u \in V (H), vu \in E(H)\}$.
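For instance, for the path $P_3$ with consecutive vertices $v_1,v_2,v_3$ and the proper total coloring $f(v_1)=1$, $f(v_1v_2)=2$, $f(v_2)=3$, $f(v_2v_3)=1$, $f(v_3)=2$, the color sets are $C_f(v_1)=\{1,2\}$, $C_f(v_2)=\{1,2,3\}$ and $C_f(v_3)=\{1,2\}$; in particular, the color sets of both pairs of adjacent vertices differ.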
If the total coloring is clear, we can use the notation of $C(v)$ and $C^H(v)$, respectively. In this paper, we are interested in the smallest number $k$ of colors such that there is a proper total $k$-coloring of $G$ with the adjacent vertices being distinguished by their color sets. Such a model was introduced by Zhang et al. \cite{zhang} in 2005. More formally, in \emph{adjacent vertex distinguishing total $k$-coloring} $f$ (avd total $k$-coloring, for short), we have $C_f(u) \neq C_f (v)$ for every pair of vertices $u,v$ such that $uv \in E(G)$. The smallest $k$ admitting such coloring is called the \emph{adjacent vertex distinguishing total chromatic number} (avd total chromatic number, for short) and is denoted by $\chi_{a}''(G)$. Of course, $\chi_{a}''(G)\geq \chi''(G)$. It turns out that there are a lot of examples of graphs for which this inequality is strict, e.g. $2l+1=\chi''(K_{2l+1})<\chi_a''(K_{2l+1})=2l+3$ \cite{short}. As a direct consequence of the definition, we have the following relation between avd total chromatic number and chromatic number, $\chi(G)$, and chromatic index of a graph $G$, $\chi'(G)$. \begin{proposition} For any graph $G$, $\chi''_a(G)\leq \chi(G) + \chi'(G)$.\label{prop:obs} $\Box$ \end{proposition} Taking into account for example Vizing and Brook's theorems we get \begin{proposition} Let $G\neq K_n$ and $G\neq C_{2k+1}$. Then $\chi''_a(G)\leq 2\Delta(G)+1.$ $\Box$ \end{proposition} Huang et al. \cite{huang} proved that the bound from the last proposition can be improved to $2\Delta(G)$. Whereas, for planar graphs Proposition \ref{prop:obs} implies \begin{proposition} For any planar graph $G$ we have $\chi''_a(G)\leq \Delta(G)+5.$ $\Box$ \end{proposition} Zhang et al. in \cite{zhang} determined $\chi_{a}''(G)$ for many basic families of graphs, including cycles, complete graphs, fans, wheels or trees. Additionally, the following bound on $\chi_{a}''(G)$ in terms of the maximum degree of a graph $\Delta(G)$ was conjectured. \begin{conjecture}[AVDTCC, \cite{zhang}] For any simple graph $G$, $\chi_{a}''(G)\leq \Delta(G)+3$.\label{conj:zhang} \end{conjecture} The conjecture has been attracting the attention of many graph theorists since 2005. It has been proved for some families of graphs, including planar \cite{planar8, planar10, planar9, planfrom10}, outerplanar \cite{outer}, subcubic \cite{delta3, short, delta3wang}, bipartite \cite{delta3}, and 4-regular graphs \cite{4reg}. The last result is proved by giving a relevant algorithm for avd total 7-coloring of 4-regular graphs. Coker and Johannson \cite{coker} used probabilistic methods to prove that there exists a constant $c$ such that $\chi''_a(G)\leq \Delta(G)+c$. Zhang et al. \cite{zhang} proved also \begin{lemma}[\cite{zhang}] If $G$ has two vertices of maximum degree which are adjacent, then $\chi_{a}''(G) \geq \Delta(G) + 2$. \end{lemma} \begin{lemma}[\cite{zhang}] If $G$ has $m$ components $G_i$, $i\in[m]$, and $|V(G_i)| \geq 2$, $i\in[m]$, then $\chi_{a}''(G) = \max\{\chi_{a}''(G_1), \chi_{a}''(G_2), \ldots, \chi_{a}''(G_m)\}$. \end{lemma} That is why we assume that all graphs considered in this paper are connected. In this paper we put our attention to graph products. They are interesting and useful in many situations. The complexity of many problems that deal with very large and complicated graphs is reduced greatly if one is able to fully characterize the properties of less complicated prime factors. 
In the literature we have some results concerning adjacent vertex distinguishing total coloring for join graphs of paths with cycles and fans \cite{join_paths_cycles, join_path_fan}, and some Cartesian products of simple graphs \cite{cartpath, cart, cart_myc, relations}. We consider this problem for corona products of graphs: generalized, simple and $l$-coronas. They are often close to the boundary between easy and hard coloring problems \cite{harder}. \begin{definition} \emph{For a given simple graph $G$ with $V(G)=\{v_1,\ldots,v_{n_G}\}$, and graphs $H_1,\ldots,$ $H_{n_G}$, the} generalized corona, \emph{denoted by $G\Tilde{\circ} \Lambda _{i=1}^{n_G} H_i$ or by $G \Tilde{\circ} (H_1,\ldots,H_{n_G})$, is the graph obtained by taking one copy of each of the graphs $G$, $H_1,\ldots,H_{n_G}$ and joining the vertex $v_i$ of $G$ to every vertex of $H_i$ (cf. Fig.~\ref{fig:ex_gen}).} \end{definition} \begin{figure} \caption{An example of a generalized corona with center graph $C_4$.}\label{fig:ex_gen} \end{figure} In the cases when all graphs $H_i$ are isomorphic, i.e. $H_1 \simeq H_2 \simeq \cdots \simeq H_{n_G}\simeq H$, the generalized corona reduces to the \emph{simple} corona $G\circ H$. Graph $G$ is called the \emph{center graph}, while graph $H$ is called the \emph{outer graph}. This type of graph product was introduced by Frucht and Harary \cite{frucht}. For $v\in V(G)$, by $F_v$ we denote the set of edges of $G\circ H$ linking $v$ with the relevant copy of $H$, and we call it the \emph{fan} at $v$. \begin{definition} \emph{For any integer $l \geq 2$, the graph $G \circ ^l H$ is defined as $G \circ ^l H = (G \circ ^{l-1} H ) \circ H$, where $G \circ ^1 H =G \circ H$. Graph $G \circ ^l H$ is also called the} $l$-corona product \emph{of $G$ and $H$.} \end{definition} In this paper we confirm Conjecture \ref{conj:zhang} for many coronas, generalized, simple, or $l$-coronas, of graphs that fulfill the conjecture, not relating the results to particular graph classes. We conclude the paper with some open questions. \section{Main results}\label{main} We start with some basic observations. \begin{observation} Let $G$ be a simple graph. If there is an avd total $k$-coloring of $G$, then there is also an avd total $(k+1)$-coloring of $G$. \end{observation} \begin{observation} In any avd total $k$-coloring $f$ of a graph $G$, if the degree of a vertex $u$ is different from the degree of $v$, $u, v\in V(G)$, then $C_f(u) \neq C_f (v)$. \end{observation} Note that every bipartite graph $G$ has an adjacent vertex distinguishing total $(\Delta(G)+2)$-coloring such that $\Delta(G)$ colors are used to color edges of $G$, while the remaining two colors are used to color vertices. Moreover, we may extend this coloring into an adjacent vertex distinguishing total coloring that uses more than $\Delta(G)+2$ colors by introducing new colors only for vertices. \begin{theorem}[\cite{delta3}] If $G$ is a bipartite graph, then $\chi''_a(G)\leq \Delta(G)+2$.\label{avd-bip} \end{theorem} We start our main results with the theorem concerning generalized coronas $G\Tilde{\circ} \Lambda _{i=1}^n H_i$ where the maximum degree of each graph $H_i$ does not exceed the maximum degree of $G$. It is worth emphasizing that in the assumption of the theorem we use only an avd total coloring of the graph $G$, while the graphs $H_i$ are colored with an arbitrary proper total coloring.
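As a simple illustration of these products and of the degrees appearing in them, consider $C_4\circ K_2$: every vertex $v$ of the cycle is joined to both vertices of its private copy of $K_2$, so the fan $F_v$ consists of two edges, $\deg_{C_4\circ K_2}(v)=2+2=4$, and every vertex of a copy of $K_2$ has degree $2$; in general $\Delta(G\circ H)=\Delta(G)+n_H$, where $n_H=|V(H)|$, since $\Delta(H)\leq n_H-1$.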
\begin{theorem} Let $G$ and $H_1,H_2,\ldots, H_{n_G}$ be connected simple graphs, each one on at least two vertices, such that $\chi''_a(G)\leq \Delta(G)+t$, $t\geq 2$, and $\chi''(H_i)\leq \Delta(H_i)+t_i$, $1\leq t_i\leq t$, for each $i\in[n_G]$. In addition, let $\Delta (G)\geq \Delta (H_i)$ for all $1\leq i\leq n_G$. Then, $$\chi_a''(G\Tilde{\circ} \Lambda _{i=1}^n H_i) \leq \Delta(G \Tilde{\circ} \Lambda _{i=1}^n H_i)+t.$$\label{thm:gen_cor} \end{theorem} \begin{proof} Without loss of generality we can assume that $\Delta(H_1)\geq \Delta(H_2) \geq \cdots\geq \Delta(H_{n_G})$. Let $|V(G)|=n_G$ and $|V(H_i)|=n_{H_i}$, $i\in[n_G]$. From the assumption, we have $n_G\geq 2$ and $n_{H_i}\geq 2$ for every $i\in[n_G]$. It is clear that $\delta(G)+\min_i n_{H_i} \leq \Delta(G\Tilde{\circ} \Lambda _{i=1}^n H_i)\leq \Delta(G)+\max_i n_{H_i}$ and the maximum degree of the generalized corona can be realized only by vertices of $G$. In order to obtain an adjacent vertex distinguishing total $(\Delta(G\Tilde{\circ} \Lambda _{i=1}^n H_i)+t)$-coloring $f$, we start from any avd total $(\Delta (G)+t)$-coloring of $G$. We will extend the coloring to all graphs $H_i$, $i\in [n_G]$, and the relevant fans. We consider $v_i\in V(G)$ with the relevant graph $H_i$, for consecutive $i\in[n_G]$. We apply one of the following cases, depending on the relation between the degrees of $G$ and $H_i$. \begin{description} \item[Case 1.] $\Delta(G) > \Delta(H_i)$, or $\Delta(G) = \Delta(H_i)$ but $t_i<t$. We color vertices and edges of $H_i$ in a proper way with $\Delta(H_i)+t_i$ colors from the set $[\Delta(H_i)+t_i+1]$, but we do not use the color assigned to $v_i$ in $f|_G$. Since $\Delta(H_i)+t_i <\Delta(G)+t$, this is doable. Finally, we use colors $\Delta(G)+t+1, \ldots, \Delta(G)+n_{H_i}+t$ to color the $n_{H_i}$ edges in the fan $F_{v_i}$. Note that these $n_{H_i}$ colors are not used in either $G$ or $H_i$. \item[Case 2.] $\Delta(G) = \Delta(H_i)$ and $t_i=t$. If we are able to color graph $H_i$ with $\Delta(H_i)+t$ colors in a proper total way such that color $f(v_i)$ is not used to color vertices in $H_i$, then we do it. Next, we assign colors $\Delta(G)+t+1, \ldots, \Delta(G)+n_{H_i}+t$ to the $n_{H_i}$ edges in the fan $F_{v_i}$. Otherwise, we consider any total $(\Delta(H_i)+t)$-coloring of $H_i$. Note that transferring exactly such a coloring of $H_i$ to $G\Tilde{\circ} \Lambda _{i=1}^n H_i$ results in an improper partial total coloring. The vertices in $H_i$ that are assigned color $f(v_i)$ need to be recolored into a new color $\Delta(G)+t+1$, and such a modified total coloring of $H_i$ is transferred to $G\Tilde{\circ} \Lambda _{i=1}^n H_i$. Because $n_{H_i} \geq 2$ and $H_i$ is connected, we are able to choose one vertex in $H_i$ not colored with $\Delta(G)+t+1$, let us say vertex $u'$, and the edge $v_iu'$ in the fan $F_{v_i}$ can be assigned color $\Delta(G)+t+1$. The remaining edges in the fan $F_{v_i}$ are colored with colors $\Delta(G)+t+2, \ldots, \Delta(G)+n_{H_i}+t$, not used earlier either in $G$ or in $H_i$. \end{description} Finally, after coloring all outer graphs and the relevant fans, the total coloring of the generalized corona is proper. We claim that it is adjacent vertex distinguishing. In order to justify this we consider the following cases. \begin{itemize} \item Let $a$ and $b$ be two adjacent vertices in $H_i$, i.e. $ab\in E(H_i)$ for some $i\in[n_G]$. Note that $C^{H_i}(a)$, as well as $C^{H_i}(b)$, is completed by only one color in the whole avd total coloring of $G\Tilde{\circ} \Lambda _{i=1}^n H_i$.
In Case 1 the colors used to color edges in $F_{v_i}$ were different from those in $C^{H_i}(a) \cup C^{H_i}(b)$ and they have not been used in $H$ before. In Case 2, at least one of these two vertices $a$ and $b$ is completed by a new color not used earlier. Thus, $C^{G\Tilde{\circ} \Lambda _{i=1}^n H_i}(a)\neq C^{G\Tilde{\circ} \Lambda _{i=1}^n H_i}(b)$. \item Let $a$ and $b$ be any two adjacent vertices in $G$. We colored them taking into account only graph $G$, i.e. $C^G(a)\neq C^G(b)$, next their color sets were completed by the set of new colors, not used earlier in $G$. Thus, $C^{G\Tilde{\circ} \Lambda _{i=1}^n H_i}(a)\neq C^{G\Tilde{\circ} \Lambda _{i=1}^n H_i}(b)$. \item Let $a$ be a vertex of $H_i$ and let $b$ be a vertex of $G$. Since in the corona $G\Tilde{\circ} \Lambda _{i=1}^n H_i$, $\deg(a)\leq n_{H_i} $ while $\deg(b)\geq n_{H_i}+1$, then $\deg(a) \neq \deg(b)$ and $C^{G\Tilde{\circ} \Lambda _{i=1}^n H_i}(a)\neq C^{G\Tilde{\circ} \Lambda _{i=1}^n H_i}(b)$. \end{itemize} \end{proof} As a consequence of the previous theorem, for the case when $H_i \simeq H_j$ for any $i,j \in [n_G]$, we get the following corollary. \begin{corollary} Let $G$ and $H$ be connected simple graphs on at least two vertices, for which $\chi_{a}''(G)\leq \Delta (G)+t$ with $t\geq 2$ and $\chi''(H)\leq \Delta (H)+t'$ with $1 \leq t'\leq t$. If $\Delta(G)\geq \Delta(H)$, then $$\chi_{a}''(G \circ H)\leq \Delta(G \circ H)+t.$$\label{theo:crazy} \end{corollary} \begin{corollary}Let $G$ and $H$ be connected simple graphs on at least two vertices, for which $\Delta(G)\geq \Delta(H)$. \begin{enumerate} \item If Conjecture~\ref{conj:zhang} holds for $G$ and Conjecture~\ref{tcc:conj} holds for $H$, then $$\chi_{a}''(G \circ H)\leq \Delta(G \circ H)+3.$$ \item If Conjecture~\ref{conj:zhang} holds for $G$ and $H$, then $$\chi_{a}''(G \circ H)\leq \Delta(G \circ H)+3.$$\label{theo:HlessG} \item If $G$ and $H$ are bipartite graphs. Then, $$\chi_a''(G \circ H) \leq \Delta(G \circ H)+2.$$ $\Box$ \end{enumerate} \end{corollary} Observe that $\Delta(G\circ^l H)=\Delta(G)+l\cdot n_H$, for any $l\geq 1$. If $\Delta(H) \leq \Delta(G)$ then we immediately have $\Delta(H) \leq \Delta(G \circ^{l-1} H)$, for any $l\geq 2$. Hence we have the following generalization of Corollary \ref{theo:crazy}. \begin{corollary} Let $G$ and $H$ be connected simple graphs on at least two vertices, for which $\Delta(G)\geq \Delta(H)$. \begin{enumerate} \item If Conjecture~\ref{conj:zhang} holds for $G$ and Conjecture~\ref{tcc:conj} holds for $H$, then $$\chi_{a}''(G \circ^l H)\leq \Delta(G \circ^l H)+3,$$ for any integer $l\geq 2$. \item If Conjecture~\ref{conj:zhang} holds for $G$ and $H$, then $$\chi_{a}''(G \circ^l H)\leq \Delta(G \circ^l H)+3,$$ for any integer $l\geq 2$. \item If $G$ and $H$ are bipartite graphs, then $$\chi_{a}''(G \circ^l H)\leq \Delta(G \circ^l H)+2,$$ for any integer $l\geq 2$. $\Box$ \end{enumerate} \end{corollary} One can ask what about the Conjecture \ref{conj:zhang} for coronas $G\circ H$ where $\Delta(H)>\Delta(G)$. We also ask how big the difference between maximum degrees can be to be sure that AVDTC Conjecture holds. We partially answer these questions. \begin{theorem} Let $G$ and $H$ be connected simple graphs on at least two vertices, for which Conjecture~\ref{conj:zhang} holds. Let $\Delta(H)= \Delta(G)+1$. 
Then, $$\chi_a''(G\circ H) \leq \Delta(G \circ H)+3.$$\label{thm:diff1} \end{theorem} \begin{proof} If $H$ is a bipartite graph, then, due to Theorem \ref{avd-bip}, we may start with any avd total $(\Delta(H)+2)$-coloring of each copy of $H$. In this case we need the same number of colors for avd total $(\Delta(G)+3)$-coloring of $G$. Hence, this case can be seen as equivalent to the one given in Case 2 in the proof of Theorem \ref{theo:crazy}. So, let us assume $H$ is not bipartite. Then an avd total $(\Delta(G \circ H)+3)$-coloring $f$ of $G\circ H$ can be obtained as follows. \begin{enumerate} \item Color vertices and edges of graph $G$ with $\Delta(G)+3$ colors in adjacent vertex distinguishing way. We will refer to this part of the coloring as to $f|_{G}$. \item We will extend our avd total coloring $f|_{G}$ into avd total coloring of each copy of $H$ in $G\circ H$. Let $v\in V(G)$, $c=f(v)$, and we consider the relevant copy of $H$. Let $f|_H$ denote an avd total $(\Delta (H)+3)$-coloring of $H$ such that there is a vertex $u\in V(H)$ for which $f(u)\neq c$ and the color $\Delta (H)+3$ does not belong to color set of $u$ in $H$, i.e. $\Delta (H)+3 \not \in C_{f|_H}^H(u)$. Note that such a coloring of $H$ always exists due to the fact that we have at least two missing colors in a color set of every vertex in $H$. So we color the chosen $H$, $H\subset G\circ H$, in the desirable way. Since $\Delta (H)+3=\Delta (G)+4$, the color $\Delta (H)+3$ does not belong to $C_{f|_H}^H(u)$, and $\Delta (G)+4$ is not used in $f|_G$, we can color an edge $uv$ in the fan $F_v$ with $\Delta (G)+4$. Note that after this step the partial total coloring $f$ of $G\circ H$ may not be proper. We need to fix it, if it is the case. We do it in the following way. If the coloring is improper, i.e. there are vertices in $H$ colored with $c$, we recolor them to $\Delta(G)+5$. Note that after this recoloring the new coloring, limited to graph $H$, is certainly proper avd-total-coloring, while the partial total coloring of the whole corona, received so far, is proper. \item Next, we will complete the coloring of the fan $F_v$. If the coloring was initially not proper, we choose one vertex $w\in V(H)$ such that $w\neq u$ and $w$ is not colored with $c$. Such a vertex certainly exists, because $H$ is not bipartite. Note that $vw$ can be colored with $\Delta(G)+5$. We do it. The rest uncolored $n_H-2$ edges in $F_v$ are colored with new colors: $\Delta(G)+6, \ldots, \Delta(G)+n_H+3$. Otherwise, we color all uncolored $n_H-1$ edges in $F_v$ with $\Delta(G)+5, \ldots, \Delta(G)+n_H+3$. \end{enumerate} We repeat Step 2 and Step 3 for every $v\in V(G)$ with the relevant copy of $H$. Observe that for any vertex $v\in V(G)$, $C^{G\circ H}(v)=C^G(v)\cup \{ \Delta (G) +4,...,\Delta (G)+n_H+3\}$. Since the coloring $f|_{G}$ was adjacent vertex distinguishing, then also $f$, in accordance to vertices of $G$, is avd. In addition, for every $uw\in E(H)$, there exists at least one color $t\in \{1,2,..,\Delta (G)+3\}$ such that $t\notin C^H(u)\cap C^H(w)$. Hence, the obtained $(\Delta(G\circ H)+3)$-coloring of the whole corona $G\circ H$ is proper adjacent vertex distinguishing total coloring. \end{proof} \begin{theorem} Let $G$ be a connected simple graph on at least two vertices, for which Conjecture~\ref{conj:zhang} holds. And let $H$ be the complete graph with order $n_H=\Delta (G)+ 3$. 
Then $$\chi_a''(G\circ H) \leq \Delta(G \circ H)+3.$$ \label{thm:complete} \end{theorem} \begin{proof} We start from any adjacent vertex distinguishing total $(\Delta(G)+3)$-coloring of $G$. We will refer to this part of the coloring as $f|_{G}$. An extension of $f|_{G}$ into the whole $(\Delta(G\circ H)+3)$-coloring $f$ of $G\circ H$ is obtained as follows. Let $v\in V(G)$, $f(v)=c$ with $c \in [\Delta(G)+3]$, which is equivalent to $c\in [\Delta(H)+1]$. We consider the relevant copy of $H$. Let $f|_{H}$ be an adjacent vertex distinguishing total $(\Delta(H)+3)$-coloring of $H$, such that for all $u\in V(H)$, $f|_{H}(u)\notin \{ c, \Delta (G)+5\}.$ This is possible because $H=K_{\Delta (G)+3}$ and we use exactly $\Delta(G)+3$ colors to color vertices. Observe that for any vertex $u\in V(H)$, $|\overline{C^H(u)}|=2,$ and for any two vertices $u,w$ in $H$ the following holds: $0\leq |\overline{C^H(u)} \cap \overline{C^H(w)}| \leq 1.$ We claim that there are two vertices $u,w \in V(H)$ such that $\overline{C^H(u)} \cap \overline{C^H(w)} =\emptyset$. Otherwise, consider two vertices $x_1,x_2\in V(H)$ such that $\overline{C^H(x_1)} =\{ a, d\},\overline{C^H(x_2)} =\{ b, d\}$. Now, the color $d$ needs to belong to the color set of at least one vertex. Let us say $x_3\in V(H)$ is such a vertex with $\overline{C^H(x_3)} =\{ a, b\}$, $a\neq d \neq b$, and for any other vertex $x_4\in V(H)$ we have only one of the following possibilities: $\overline{C^H(x_4)} =\{ a, b\}$, or $\overline{C^H(x_4)} =\{ a, d\}$, or $\overline{C^H(x_4)} =\{ d, b\}$, which is a contradiction. Let $\{ u,w\}\subset V(H)$ be such that $\overline{C^H(u)} \cap \overline{C^H(w)} =\emptyset $ with $\overline{C^H(u)} =\{ a, b \},$ $\overline{C^H(w)} =\{ d, g\}$, where $a,b,d,g$ are four different colors other than $c$. In this case, if necessary, we can recolor the graph $H$ so that $\overline{C^H(u)} =\{ a, \Delta (G) +4\},\overline{C^H(w)} =\{ d, \Delta (G)+5\}$. Next we color the edges by setting $f(uv)=\Delta (G) + 4$ and $f(wv)=\Delta (G) + 5$, and we assign the new colors $\{ \Delta (G)+6, \Delta (G) + 7,\ldots, \Delta (G)+n_H+3\}$ to the edges $xv$, where $x\in V(H)\backslash \{ u, w\}.$ Observe that the obtained total $(\Delta(G\circ H)+3)$-coloring of $G\circ H$ is adjacent vertex distinguishing. Indeed, due to the fact that $\deg(a)\neq \deg(b)$ for any pair of adjacent vertices such that $a\in V(G)$ and $b\in V(H)$, we need to consider only the case of different color sets for two adjacent vertices within graph $G$ or $H$. If $a$ and $b$ are two adjacent vertices in $G$, then, since $C^G(a)\neq C^G(b)$ and their color sets were completed by the same set of new colors not used earlier in $G$, we get $C^{G\circ H}(a)\neq C^{G\circ H}(b)$. On the other hand, if $a$ and $b$ are two adjacent vertices in $H$, then their color sets from $f|_H$ were each completed by one color, and in the case where $u \neq a\neq w$ and $u \neq b\neq w$ this was a new color not used earlier in $H$; hence $C^{G\circ H}(a)\neq C^{G\circ H}(b)$. The only doubt could appear for the vertices $u,w$ in $H$, chosen as above. But since $\overline{C^H(u)} \cap \overline{C^H(w)} =\emptyset$, we have $\overline{C^{G\circ H}(u)}\cap [\Delta(G)+5]=\{a\}$ and $\overline{C^{G\circ H}(w)}\cap [\Delta(G)+5]=\{d\}$, $a\neq d$. Thus $C^{G\circ H}(u)\neq C^{G\circ H}(w)$ and the proof is complete. \end{proof} \begin{proposition} Let $H$ be a connected simple graph with $\Delta(H)\geq 3$ for which Conjecture \ref{conj:zhang} holds.
Let $\alpha(H)\geq 2$ and let $u_1$ and $u_2$ be any two non-adjacent vertices in $H$. Then for any color $c\in [\Delta(H)+1]$ there is an avd total $(\Delta(H)+3)$-coloring $f$ of $H$ such that all four conditions hold: \begin{enumerate} \item $\Delta(H)+2 \in \overline{C^H_f(u_1)}$, \item $\Delta(H)+3 \in \overline{C^H_f(u_2)}$, \item $f(u_1) \neq c$, \item $f(u_2) \neq c$. \end{enumerate}\label{prop:demand} \end{proposition} \begin{proof} We can easily start from any avd total $(\Delta(H)+3)$-coloring $f$ of $H$ such that $f(u_1) =c_1 \neq c$ and $f(u_2)=c_2 \neq c$ and $c_1, c_2\in [\Delta(H)+1]$. It may happen that $c_1=c_2$. Now, let us assume that $\overline{C^H_f(u_1)}$ does not contain $\Delta(H)+2$. Since each vertex has at least two missing colors, let us say $\{a_1,a_2\}\subset \overline{C^H_f(u_1)}$, we can exchange one of the missing colors, let us say $a_1$, with $\Delta(H)+2$, and vice versa, in the whole graph $H$. Similarly, if $\overline{C^H_f(u_2)}$ does not contain $\Delta(H)+3$, let $\{b_1,b_2\}\subset \overline{C^H_f(u_2)}$. We can choose one of the missing colors, different from $a_1$, and exchange it with $\Delta(H)+3$, and vice versa, also in the whole graph $H$. Finally, we obtain an avd total $(\Delta(H)+3)$-coloring $f$ of $H$ fulfilling the desired conditions. \end{proof} \begin{theorem} Let $G$ and $H$ be connected simple graphs on at least two vertices, for which Conjecture~\ref{conj:zhang} holds. Let $\Delta(H)= \Delta(G)+2$ and $\alpha(H)\geq 2$. Then $$\chi_a''(G\circ H) \leq \Delta(G \circ H)+3.$$ \label{thm:diff2} \end{theorem} \begin{proof} We start from any avd total $(\Delta(G)+3)$-coloring of $G$. We will refer to this part of the coloring as $f|_{G}$. Further, we extend this coloring to the whole corona depending on the form of $H$. Let $v\in V(G)$ and $f|_G(v)=c$, where $c\in[\Delta(G)+3]$, or in other words $c\in [\Delta(H)+1]$. We consider the relevant copy of $H$. If $H$ is bipartite, $H=(V_1\cup V_2,E)$, then by Theorem \ref{avd-bip}, $\chi_a''(H) \leq \Delta(H)+2$. By K\"{o}nig's theorem, we color the edges of $H$ with $\Delta(H)$ colors: color $c$ and $\Delta(H)-1$ other colors from $[\Delta(H)+1]$. One color among $\{1,\ldots,\Delta(H)+1\}$ is not used to color edges in $H$. We assign it to color all vertices in $V_1$, while $\Delta(G)+4$ is used to color all vertices in $V_2$. Next, we choose one vertex $x\in V_1$ and assign $\Delta(G)+4$ to $vx$. Further, we complete the coloring of edges in the fan $F_v$ by coloring the remaining uncolored $n_H-1$ edges with different colors from the set $\{\Delta(G)+5,\ldots, \Delta(G)+n_H+3\}$. We repeat the procedure for all copies of $H$. It is easy to see that the coloring $f$ of $G\circ H$ is an adjacent vertex distinguishing total coloring. Otherwise, i.e. if $H$ is not bipartite, an extension of $f|_{G}$ into the whole $(\Delta(G\circ H)+3)$-coloring $f$ of $G\circ H$ is obtained as follows. Let $I$ be any independent vertex set in $H$ of size at least 2 and let $u_1,u_2$ be any two vertices from $I$. \begin{enumerate} \item Color vertices and edges of $H$ in an avd total way with $\Delta(H)+3=\Delta(G)+5$ colors in such a way that color $\Delta(G)+4$ is missing in the color set of $u_1$, i.e. $\Delta(G)+4\in \overline{C^H(u_1)}$, and $\Delta(G)+5 \in \overline{C^H(u_2)}$, and $f(u_1)\neq c$, and $f(u_2)\neq c$. This is possible due to Proposition \ref{prop:demand}. It may happen that the partial total coloring of $G\circ H$ is improper at this stage. We will fix it later.
\item Assign color $\Delta(G)+4$ to the edge $vu_1$ and color $\Delta(G)+5$ to $vu_2$. \item If the partial total coloring of $G\circ H$ is improper, then we recolor the vertices of $H$ initially colored with $c$ with $\Delta(G)+6$. \item Choose any vertex $x\in V(H)\backslash\{u_1,u_2\}$ not colored with $\Delta(G)+6$ and assign $\Delta(G)+6$ to $vx$. Note that such a vertex always exists: because $H$ is not bipartite, there exists at least one edge with both endvertices in $V(H)\backslash \{u_1,u_2\}$, and at least one of these endvertices is not colored with $\Delta(G)+6$. \item Complete the coloring of the edges in the fan $F_v$ by coloring the remaining uncolored $n_H-3$ edges with the new colors $\Delta(G)+7,\ldots, \Delta(G)+n_H+3$. Note that the only cases for which we really need to check the color sets are the neighbors of $x$ colored with $\Delta(G)+6$. But the edges in the fan $F_v$ joining $v$ to such vertices are colored with completely new colors, not used anywhere else in $F_v$ or in the copy of $H$ under consideration. So, even for such adjacent vertices, their color sets are different. Thus, the partial coloring of $G\circ H$ is proper and adjacent vertex distinguishing from the point of view of the vertices of the copy of $H$. \end{enumerate} We repeat the same procedure for all copies of $H$. Finally, we get an avd total $(\Delta(G\circ H)+3)$-coloring of $G\circ H$. The justification for two adjacent vertices within $G$ is the same as in the proofs of the previous theorems. The proof is complete. \end{proof}
Since Theorem \ref{thm:complete} covers the case $\alpha(H)=1$ and $\Delta(H)=\Delta(G)+2$, we have so far proved Conjecture \ref{conj:zhang} for all coronas $G \circ H$ of graphs $G$ and $H$ with $\Delta(H)-\Delta(G)\leq 2$, under some additional assumptions on $G$ and $H$. Now we present a partial result concerning graphs with a greater difference, i.e., $\Delta(H)=\Delta(G)+k$ with $k\geq 3$. We start with $H$ bipartite and $k=3$.
\begin{theorem} Let $G$ be a connected simple graph on at least two vertices for which Conjecture~\ref{conj:zhang} holds. Let $H=(V_1\cup V_2,E)$ be a bipartite graph with $\Delta(H)= \Delta(G)+3$. Then $$\chi_a''(G\circ H) \leq \Delta(G \circ H)+3.$$\label{thm:bip} \end{theorem}
\begin{proof} By Theorem \ref{avd-bip}, $\chi_a''(H) \leq \Delta(H)+2$. We start from any avd total $(\Delta(G)+3)$-coloring of $G$, which we again denote by $f|_G$. Let $v\in V(G)$ and $f|_G(v)=c$, where $c\in[\Delta(G)+3]$, or in other words $c\in [\Delta(H)]$. We consider the relevant copy of $H$. By K\"{o}nig's theorem, we color the edges in the relevant copy of $H$ with $\Delta(H)$ colors, $\Delta(H)= \Delta(G)+3$, among them the color $c$. We assign color $\Delta(G)+4$ to all vertices in $V_1$, while color $\Delta(G)+5$ is assigned to all vertices in $V_2$. Now let us choose a vertex colored with $\Delta(G)+4$, call it $x_1$, $x_1\in V_1$, and a vertex colored with $\Delta(G)+5$, call it $x_2$, $x_2\in V_2$. We first try to choose non-adjacent vertices or vertices of different degrees; when this is not possible, we choose any two appropriately colored vertices. Let $f(vx_1)=\Delta(G)+5$ and $f(vx_2)=\Delta(G)+4$. If $C^{G\circ H}(x_1)=C^{G\circ H}(x_2)$, then we recolor $x_2$ with $\Delta(G)+6$, choose any other vertex in the same partition class, say $x_3\in V_2$, and color $vx_3$ with $\Delta(G)+6$; otherwise we assign $\Delta(G)+6$ to any uncolored edge in the fan $F_v$.
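To illustrate this step on a small instance (the graphs below are chosen for illustration only): for $G=K_2$ and $H=K_{1,4}$, so that $\Delta(H)=\Delta(G)+3=4$, the edges of the star can be colored with $1,2,3,4$, its center (forming $V_1$) with $\Delta(G)+4=5$ and its leaves ($V_2$) with $\Delta(G)+5=6$; taking $x_1$ to be the center and $x_2$ a leaf and setting $f(vx_1)=6$ and $f(vx_2)=5$, we have $C^{G\circ H}(x_1)\neq C^{G\circ H}(x_2)$, so no recoloring is needed in this case.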
Next, we complete the coloring of the edges in the fan $F_v$ by coloring the remaining uncolored $n_H-3$ edges with the new colors $\Delta(G)+7,\ldots, \Delta(G)+n_H+3$. Note that the partial coloring of $G\circ H$ is proper and adjacent vertex distinguishing from the point of view of the vertices of the copy of $H$. We repeat the same procedure for all vertices $v\in V(G)$ and the relevant copies of $H$. Since, for each $v\in V(G)$, we completed $C^G_f(v)$ with the same set of colors $\{\Delta(G)+4, \ldots,\Delta(G)+n_H+3\}$, and taking into account the previous reasoning for the color sets of any two adjacent vertices of $H$, the total coloring of the whole corona is adjacent vertex distinguishing. The proof is complete. \end{proof}
Note that a similar idea applied to a bipartite graph $H$, in particular a complete bipartite graph, with $\Delta(H)=\Delta(G)+k$ for $k\geq 4$ will not work, namely, assigning only $\Delta(H)$ colors to the edges of $H$ and the colors $\Delta(G)+4, \ldots, \Delta(G)+n_H+3$ to the edges of a fan $F_v$. Since in such a solution all $\Delta(H)$ colors used on the edges of $H$ are present in the color sets of all vertices of $H$, none of these colors can be used on the edges of $F_v$, so such an approach would require at least $\Delta(H)+n_H=\Delta(G)+k+n_H$ colors, which exceeds $\Delta(G\circ H)+3=\Delta(G)+n_H+3$ for $k\geq 4$. Hence, we certainly need to color the edges of $H$ with more than $\Delta(H)$ colors. Now let us consider more general graphs. We attempt to generalize the method given in the proof of Theorem \ref{thm:diff2}. The basis of this method is an avd total $(\Delta(H)+3)$-coloring $f|_H$ of $H$ for which there is a set of pairwise non-adjacent vertices $U=\{u_1,\ldots,u_k\}$ such that the colors from $[\Delta(H)+3]\backslash [\Delta(G)+3]$ are missing colors of the vertices in $U$. For $k=2$ such a coloring was easy to obtain and was guaranteed by Proposition \ref{prop:demand}. For larger $k$ we need additional conditions on the degrees of the vertices in $U$.
\begin{proposition} Let $H$ be a connected simple graph with $\Delta(H)\geq k+1$ for which Conjecture \ref{conj:zhang} holds. Let $\alpha(H) \geq k$ and let $u_1,\ldots,u_k$ be any $k$ non-adjacent vertices in $H$ such that $\deg(u_1)\leq \Delta(H)$, $\deg(u_2)\leq \Delta(H)$, $\deg(u_i)\leq \Delta(H)-i+2$ for $i\in\{3,\ldots,k-2\}$. Then for any color $c\in [\Delta(H)-k+4]$ there is an avd total $(\Delta(H)+3)$-coloring $f$ of $H$ such that all the following conditions hold: \begin{enumerate} \item $\Delta(H)-k+3+i \in \overline{C^H_f(u_i)}$, $i\in[k]$, \item $f(u_i)\neq c$, $i\in[k]$. \end{enumerate}\label{prop:demandk} \end{proposition}
\noindent We leave the full proof to the reader, but it is easy to see that the conditions on the degrees of the chosen vertices in an independent set of size at least $k$ guarantee that $|\bigcup_{i\in[k]}\overline{C^H_f(u_i)}|\geq k$, so we are able to exchange colors in $H$ to achieve the required conditions. Of course, the degree conditions in Proposition \ref{prop:demandk} are not the only ones guaranteeing a ``good'' avd total coloring of $H$.
\begin{theorem} Let $G$ and $H$ be connected simple graphs on at least two vertices, for which Conjecture~\ref{conj:zhang} holds. Let $k\geq 3$ be an integer, $\Delta(H)= \Delta(G)+k$, and $\alpha(H)\geq k$.
If there exist $k$ non-adjacent vertices $u_1,\ldots,u_k$ in $H$ such that $\deg(u_1)\leq \Delta(H)$, $\deg(u_2)\leq \Delta(H)$, and $\deg(u_i)\leq \Delta(H)-i+2$ for $i\in\{3,\ldots,k-2\}$, then $$\chi_a''(G\circ H) \leq \Delta(G \circ H)+3.$$ \label{thm:diffk} \end{theorem}
\begin{proof} We start from any adjacent vertex distinguishing total $(\Delta(G)+3)$-coloring of $G$. We will refer to this part of the coloring as $f|_{G}$. An extension of $f|_{G}$ to the whole $(\Delta(G\circ H)+3)$-coloring $f$ of $G\circ H$ is obtained as follows. Let $v\in V(G)$ with $f|_G(v)=c$, and consider the relevant copy of $H$. Let $u_1,\ldots,u_k$ be any $k$ non-adjacent vertices of this copy fulfilling the assumption of the theorem. \begin{enumerate} \item Color the vertices and edges of $H$ in an avd total way with $\Delta(H)+3=\Delta(G)+k+3$ colors in such a way that the color $\Delta(G)+3+i$ is missing in the color set of $u_i$, i.e., $\Delta(G)+3+i\in \overline{C^H(u_i)}$ for $i\in[k]$, and none of the vertices $u_1,\ldots,u_k$ is colored with $c$. This is possible due to Proposition \ref{prop:demandk}. It may happen that the partial total coloring of $G\circ H$ is improper at this stage. We will fix it later. \item Assign color $\Delta(G)+3+i$ to the edge $vu_i$, for every $i\in [k]$. \item If the partial total coloring of $G\circ H$ is improper, then we recolor the vertices of $H$ initially colored with $c$ with $\Delta(G)+k+4$. \item Choose any vertex $x\in V(H)\backslash\{u_1,\ldots,u_k\}$ not colored with $\Delta(G)+k+4$ and assign $\Delta(G)+k+4$ to $vx$. Note that such a vertex always exists: since $\Delta(H)\geq k+1$, there must exist an edge both of whose endvertices are outside the set $\{u_1,\ldots,u_k\}$, and at least one of these endvertices is not colored with $\Delta(G)+k+4$. \item Complete the coloring of the edges in the fan $F_v$ by coloring the remaining uncolored $n_H-k-1$ edges with the new colors $\Delta(G)+k+5,\ldots, \Delta(G)+n_H+3$. Note that the only cases for which we really need to check the color sets are the neighbors of $x$ colored with $\Delta(G)+k+4$. But the edges in the fan $F_v$ joining $v$ to such vertices are colored with completely new colors, not used anywhere else in $F_v$ or in the copy of $H$ under consideration. So, even for such adjacent vertices, their color sets are different. Thus, the partial coloring of $G\circ H$ is proper and adjacent vertex distinguishing from the point of view of the vertices of the copy of $H$. \end{enumerate} We repeat the same procedure for all copies of $H$. Finally, we get a total $(\Delta(G\circ H)+3)$-coloring of $G\circ H$. Since, for each $v\in V(G)$, we completed $C^G_f(v)$ with the same set of colors $\{\Delta(G)+4, \ldots,\Delta(G)+n_H+3\}$, the total coloring of the whole corona is adjacent vertex distinguishing. The proof is complete. \end{proof}
\section{Conclusion} In this paper we considered adjacent vertex distinguishing total colorings of corona graphs in the context of the AVDTC Conjecture posed by Zhang in 2005. We confirmed this conjecture for: \begin{itemize} \item generalized coronas $G\Tilde{\circ} \Lambda _{i=1}^n H_i$ with $\Delta(G)\geq \Delta(H_i)$, under the assumption that Conjecture \ref{conj:zhang} holds for $G$ and Conjecture \ref{tcc:conj} holds for every $H_i$, $i\in[n_G]$; \item all simple coronas $G \circ H$ with $\Delta(H)-\Delta(G)\leq 2$, under the assumption that Conjecture \ref{conj:zhang} holds for $G$ and $H$; actually, the assumption on $H$ can be a little weaker, since our proofs show that it is enough that Conjecture \ref{tcc:conj} holds for $H$;
\item all simple coronas $G\circ H$ with $\Delta(H)=\Delta(G)+3$, where $H$ is bipartite and Conjecture \ref{conj:zhang} holds for $G$; \item some simple coronas $G\circ H$ with $\Delta(H)= \Delta(G)+k$, $k\geq 3$, under some additional constraints; for details see Theorem \ref{thm:diffk}. \end{itemize}
Taking into account the results known from the literature together with the results of this work, we can replace our general graphs $G$ and $H$ with particular graph classes fulfilling Conjectures \ref{tcc:conj} and \ref{conj:zhang}. In Table \ref{HmnG} we present some exemplary combinations.
\begin{table}[htb] \begin{center} \begin{tabular}{|c|*{4}{c|}}\hline
\backslashbox[50mm]{$G$}{$H$} & path & cycle & 3-regular & 4-regular \\ \hline
path & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline
cycle & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline
3-regular & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline
4-regular & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline
complete graph $K_n$, $n\geq 6$ & \checkmark & \checkmark & \checkmark & \checkmark\\ \hline
\end{tabular} \caption{Exemplary graph classes of $G$ and $H$ such that Conjecture \ref{conj:zhang} holds for $G\circ H$.}\label{HmnG} \end{center} \end{table}
One can ask about the remaining coronas not covered by the results of this paper. We leave this for further investigation and as an open problem for other graph theorists.
\noindent{\bf Compliance with Ethical Standards:}
\noindent{\bf Funding:} This study was funded by UNAM (Grant PAPIIT-UNAM-IN117219) - Author B.
\noindent{\bf Ethical approval:} This article does not contain any studies with human participants or animals performed by any of the authors.
\noindent{\bf Statements and Declarations:} The manuscript has no associated data.
\end{document}